Slackware: This forum is for the discussion of Slackware Linux.
5.4.101 not booting. I just upgraded from 5.4.96 using the same config. I get the dreaded:
Code:
VFS: Cannot open root device "sdb2" or unknown-block(0,0): error -6
/dev/sdb2 is and has been the root partition and ext4 is compiled into the kernel. Regardless, I did try mkinitrd including ext4 in case. No luck. Since I am using the same .config for the same kernel series, I am at a loss on why it doesn't boot this time. I am still using Slackware 14.2 except for the upgraded home-rolled kernel, which worked at 5.4.96. I will probably try one of the intermediate kernels (5.4.97 to 5.4.100) to see where booting fails on me.
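For what it's worth, that panic means the kernel could not mount the root device, either because the root filesystem or the disk controller driver isn't built in. A quick sanity check on the config, sketched with an inline stand-in file; in practice you'd point the greps at your real .config or /boot/config-* file:

```shell
# Sketch: confirm ext4 (and basic SATA/SCSI disk support) are built in (=y),
# not modular (=m), before installing a kernel meant to boot without an initrd.
# The inline config here is a stand-in for your real .config file.
CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
CONFIG_EXT4_FS=y
CONFIG_SATA_AHCI=y
CONFIG_BLK_DEV_SD=y
EOF

if grep -q '^CONFIG_EXT4_FS=y' "$CONFIG"; then
    echo "ext4 built in"
else
    echo "WARNING: ext4 missing or modular; root mount will fail without an initrd"
fi
# The controller and SCSI-disk drivers matter just as much for /dev/sdb2:
grep -E '^CONFIG_(SATA_AHCI|BLK_DEV_SD)=y' "$CONFIG"
rm -f "$CONFIG"
```

On the running kernel you can also check `grep -w ext4 /proc/filesystems`; if ext4 only shows up there after a modprobe, it is modular, not built in.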
Distribution: VM Host: Slackware-current, VM Guests: Artix, Venom, antiX, Gentoo, FreeBSD, OpenBSD, OpenIndiana
Quote:
Originally Posted by Gerswing
5.4.101 not booting. I just upgraded from 5.4.96 using the same config. I get the dreaded:
Code:
VFS: Cannot open root device "sdb2" or unknown-block(0,0): error -6
/dev/sdb2 is and has been the root partition and ext4 is compiled into the kernel. Regardless, I did try mkinitrd including ext4 in case. No luck. Since I am using the same .config for the same kernel series, I am at a loss on why it doesn't boot this time. I am still using Slackware 14.2 except for the upgraded home-rolled kernel, which worked at 5.4.96. I will probably try one of the intermediate kernels (5.4.97 to 5.4.100) to see where booting fails on me.
Quote:
uname -rip
5.4.101-cephei Intel(R) Core(TM) i7-4800MQ CPU @ 2.70GHz GenuineIntel
The 5.11 kernel is noticeably faster. I did not install the package; I rebuilt it with the supplied config, answering yes to building ext4 into the kernel. I only did that to eliminate the need for an initrd. I thought at first that the lack of an initrd was what made it faster. Not sure, but it sure saved 41 seconds from a cold boot.
5.11.1 is working well here after upgrade from 5.10.17.
Nothing is running noticeably faster on my AMD gear, and boot time was the same to within a second, but then I didn't see anything in the announcement targeting my hardware for higher speed under 5.11.x.
This is not really old by *my* standards (I bought mine in 2016; AMD launched it in 2014), but I would guess it's been stably supported by the kernel for years now, aside from an early 5.10.x blank-screen problem with AMD Kaveri.
KDE System Information output
Operating System: Slackware 14.2 (that's the output, but it's -current updated this morning)
KDE Plasma Version: 5.21.1
KDE Frameworks Version: 5.79.0
Qt Version: 5.15.2
Kernel Version: 5.11.1
OS Type: 64-bit
Graphics Platform: X11
Processors: 4 × AMD A10-7800 Radeon R7, 12 Compute Cores 4C+8G
Memory: 6.7 GiB of RAM
Graphics Processor: AMD KAVERI
5.4.101 seems to build and boot fine here running -current (15.0alpha):
Code:
Linux 5.4.101 #1 SMP Sat Feb 27 18:07:50 CET 2021 x86_64 Intel(R) Core(TM)2 Duo CPU T6400 @2.00GHz
Built like this:
Code:
cd /usr/src/linux-5.4.101
make localmodconfig
make -j2 bzImage modules && make modules_install
cp arch/x86/boot/bzImage /boot/vmlinuz-custom-5.4.101 # copy the new kernel file
cp System.map /boot/System.map-custom-5.4.101 # copy the System.map (optional)
cp .config /boot/config-custom-5.4.101 # backup copy of your kernel config
make clean
rm -rf .config.old .version
cd /boot
/usr/share/mkinitrd/mkinitrd_command_generator.sh -k 5.4.101|bash && lilo
Last edited by mats_b_tegner; 02-27-2021 at 12:40 PM.
Distribution: VM Host: Slackware-current, VM Guests: Artix, Venom, antiX, Gentoo, FreeBSD, OpenBSD, OpenIndiana
Quote:
Originally Posted by Gerswing
Definitely something happened with the config file. I copied the config file I used for 5.4.101 to use for 5.4.97 and it would not boot either.
I'm going to try this since something is obviously wrong with the one I am using. Thanks.
localmodconfig is not the best option: the script gathers information about currently loaded modules only.
If you have modules that are loaded on demand, these will not be added unless they are loaded first.
There is a script available that you can install and let run for a while (the hope being that you eventually load every module you need at some point); it builds a database of all modules ever used and helps generate the kernel config.
Out of curiosity, I would look at the failed config file to see why ext4 was not added, or was even removed, given that you are using the config file from a previously working kernel.
I would also look at the config file after building a new kernel, to make sure that ext4 is included.
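The idea behind such a script (modprobed-db on Arch is one real implementation) can be sketched in a few lines of shell; the database path and schedule here are just examples:

```shell
# Sketch: periodically record every module ever seen loaded, so a later
# "make LSMOD=<file> localmodconfig" can use the accumulated list instead
# of a single lsmod snapshot (LSMOD is handled by the kernel's
# streamline_config.pl; module names in column one are what it keys on).
DB="${TMPDIR:-/tmp}/module-db.txt"
lsmod 2>/dev/null | awk 'NR>1 {print $1}' >> "$DB"   # append module names
sort -u "$DB" -o "$DB"                               # keep one entry per module

# Later, in the kernel source tree (commented out, since it rewrites .config):
# make LSMOD="$DB" localmodconfig
```

Run the first two lines from cron or a timer for a few weeks of normal use before trusting the list.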
Thank you. The config was the problem. I compiled with localmodconfig and it is now working.
I plan on going back to investigate where it failed. I am guessing I must have made changes to the config since the last compile for 5.4.96. I have been going through each part of the kernel config, trying to identify what I don't actually need and turning it off, and I must have made a mistake somewhere. I had diffed the most recent 5.4.96 config against the failed 5.4.101 config in emacs and there was no difference; ext4 was supposedly compiled in. So it must be something else that I turned off by mistake.
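When diffing configs, note that the kernel source tree ships scripts/diffconfig, which prints only the options that changed. A self-contained illustration of the idea with a plain diff (the option values here are made up):

```shell
# Two tiny stand-in configs; in practice these would be e.g.
# /boot/config-custom-5.4.96 and /boot/config-custom-5.4.101.
old=$(mktemp); new=$(mktemp)
printf 'CONFIG_EXT4_FS=y\nCONFIG_IPV6=y\n' > "$old"
printf 'CONFIG_EXT4_FS=y\n# CONFIG_IPV6 is not set\n' > "$new"

# Show only the changed options; "=y" silently turning into "is not set"
# is exactly the kind of accidental change that breaks booting.
diff -u "$old" "$new" | grep -E '^[+-](# )?CONFIG'
rm -f "$old" "$new"
```

Here that prints `-CONFIG_IPV6=y` and `+# CONFIG_IPV6 is not set`, which is much easier to scan than a full-file diff.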
As for localmodconfig, I'll have to see what doesn't load now and turn things on as I identify them. I do have to compile and load both nvidia-kernel and virtualbox-kernel separately. But I do concur that I need to find out whether any other modules will need to be compiled.
I would be interested in the script you mention that identifies all of the modules ever loaded.
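One way to spot what no longer loads: snapshot the module list under the old kernel, boot the new one, and compare. A sketch with stand-in lists; in practice each file would come from `lsmod | awk 'NR>1{print $1}' | sort`:

```shell
# Stand-in snapshots of sorted module names from the old and new kernels.
old=$(mktemp); new=$(mktemp)
printf 'ahci\next4\nnvidia\n' > "$old"
printf 'ahci\next4\n'         > "$new"

# Modules loaded before but missing now (comm requires sorted input):
comm -23 "$old" "$new"    # -> nvidia
rm -f "$old" "$new"
```

Anything it prints is a candidate to re-enable in the config (or, like nvidia here, an out-of-tree module to rebuild).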
I have kernels tailored to my hardware and needs. They are smaller in size, of course, have a smaller attack surface, and often when there is a security issue in the kernel I am not affected:
e.g. IPv6 security/privacy problems do not affect me, since I don't have IPv6 compiled in at all.
Quote:
localmodconfig is not the best option: the script gathers information about currently loaded modules only.
If you have modules that are loaded on demand, these will not be added unless they are loaded first.
There is a script available that you can install and let run for a while; it builds a database of all modules ever used and helps generate the kernel config.
Granted, make localmodconfig only builds a subset of the available kernel modules. I do have kernel-modules.SlackBuild handy should I need to rebuild/upgrade the kernel-modules package.
Last edited by mats_b_tegner; 02-28-2021 at 02:10 AM.
It had been a very long time since I last compiled a kernel. I just got a docking station, and it appears a patch that has been around for about a year, needed for the USB hub to work after resume, still isn't merged.
I can't find it in the kernel git tree, only in Patchwork, and I couldn't figure out how to ask about, or find out, its current status.
.....
The one thing that perhaps stands out is that this release actually
did a fair amount of historical cleanup. Yes, overall we still have
more new lines than we have removed lines, but we did have some spring
cleaning, removing the legacy OPROFILE support (the user tools have
been using the "perf" interface for years), and removing several
legacy SoC platforms and various drivers that no longer make any
sense...
Last edited by cwizardone; 02-28-2021 at 06:51 PM.