Quote:
Originally Posted by SlackCoder
@lazardo do you keep the old kernel modules installed too?
Code:
$ ls /lib/modules
5.15.137v1 5.15.143v1 6.1.80v1 6.1.81v1 6.1.82v1 6.1.85v1 6.6.10v1 6.6.8 6.6.8v1
Update: The answer is yes, otherwise booting one of the old kernels would fail. A kernel and its modules should be viewed as a single blob stored in multiple pieces.
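To illustrate why: the running kernel loads its modules from the /lib/modules directory matching its exact release string, so deleting that directory strands the matching kernel (the release string below is just an example taken from the listing above).
Code:
$ uname -r
6.6.8v1
$ ls -d /lib/modules/"$(uname -r)"
/lib/modules/6.6.8v1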
The context below is still valid but not necessarily relevant to the question.
Larger context:
Keep the huge kernel+modules from the original Slackware install until the system is known to be stable, like a life jacket.
Phase 1. Customize by removing components of the huge config that are not needed, and tune for the architecture+hardware using 'make nconfig', bumping CONFIG_LOCALVERSION on each iteration.
During this phase there may be 6-8 kernel versions, each with its own /lib/modules/x.y.z. Note that Phase 1 is not technically necessary; rather, it greatly reduces build time+resources, even with ccache+distcc, and I never use an initrd.
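A minimal sketch of one Phase 1 iteration (the source path, the x86 bzImage path, and the install step are illustrative assumptions, not the exact commands used here):
Code:
cd /usr/src/linux
make nconfig                     # drop unneeded drivers, tune for the hardware,
                                 # and bump CONFIG_LOCALVERSION, e.g. "v2"
make -j"$(nproc)" bzImage modules
make modules_install             # populates /lib/modules/<release><LOCALVERSION>
cp arch/x86/boot/bzImage /boot/vmlinuz-"$(make -s kernelrelease)"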
Phase 2. As soon as the reduce/customize/tune phase is stable, the old kernels+modules are purged: 'removepkg' for the Slackware kernel packages, and 'rm /boot/*-x.y.z${LOCALVERSION}*; rm -r /lib/modules/x.y.z${LOCALVERSION}' for custom iterations.
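For example, purging might look like the following (the package names and the 6.6.8v1 release string are illustrative; verify against /var/log/packages and /lib/modules before deleting anything):
Code:
removepkg kernel-huge kernel-modules   # original Slackware kernel packages
rm /boot/*-6.6.8v1*                    # one retired custom iteration...
rm -r /lib/modules/6.6.8v1             # ...and its module tree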
When the LTS kernel branch gets bumped, it is tracked by applying only incremental patches to the now stable+tuned base:
Code:
rsync rsync://rsync.kernel.org/pub/linux/kernel/v6.x/incr/
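Applying one of those incremental patches might look like this (the patch file name is an example; files in incr/ follow the patch-6.x.N-M.xz pattern):
Code:
cd /usr/src/linux
rsync rsync://rsync.kernel.org/pub/linux/kernel/v6.x/incr/patch-6.6.9-10.xz .
xz -dc patch-6.6.9-10.xz | patch -p1   # step the tree from 6.6.9 to 6.6.10
make oldconfig                         # carry the stable .config forward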
Phase 3. After each new incremental build is known to be stable, the older iterations are purged, always keeping a small pool of previous incremental builds as a boot-failure fallback.
Typically there are two tracked LTS branches, eg, 6.1.x and 6.6.x, with two to three kernel+modules per track. For major jumps, like 6.1 to 6.6, apply the stable .config to the new release and go back to Phase 1.
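Carrying the tuned .config across such a jump might look like this (directory names are illustrative; 'make olddefconfig' takes defaults for symbols new in the release, while 'make oldconfig' prompts for each):
Code:
cd /usr/src/linux-6.6.8
cp /usr/src/linux-6.1.80/.config .
make olddefconfig   # accept defaults for new symbols, then re-enter Phase 1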
This workflow helps me stay current with kernel development while still having a stable platform. You can always fetch the Slackware huge kernel+modules from patches/packages/ when things go completely septic.