If we’re going to this much trouble, wouldn’t it be better to replace the monolithic kernel with a microkernel and servers that provide the same APIs for Linux apps? Maybe even seL4, which has its behaviour formally verified. That way the microkernel can spin up arbitrary instances of whatever services are needed most.
They call it Parker because it’s almost, but not quite, the right thing.
I know that Square you’re talking about!
Docker has little overhead, and wouldn’t this require running the entire kernel multiple times and take up more RAM?
Also, dynamically allocating RAM seems more efficient than having to assign each kernel a portion at boot.
If this works out, it’s likely something that container engines would take advantage of as well. It may take more resources to do (we’ll have to see), but adding kernel isolation would make for a much stronger sandbox. Containers are just a collection of other isolation tools like this anyway.
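To make that last point concrete, here’s a minimal sketch of my own (Linux only, needs root or CAP_SYS_ADMIN) of one of the isolation primitives container engines stack together: unsharing the UTS namespace so a hostname change is visible only to this process.

```python
# Minimal demo that "containers" are built from ordinary kernel isolation
# primitives: unshare the UTS namespace, then change the hostname. The new
# hostname is only visible inside this process, not on the host.
import ctypes
import socket

CLONE_NEWUTS = 0x04000000  # flag for a new UTS (hostname) namespace

libc = ctypes.CDLL("libc.so.6", use_errno=True)
if libc.unshare(CLONE_NEWUTS) != 0:
    raise OSError(ctypes.get_errno(), "unshare failed (are you root?)")

socket.sethostname("sandboxed-host")  # only this process sees the change
print("inside namespace:", socket.gethostname())
```

Add cgroups, mount/PID/net namespaces, and some seccomp filtering, and you’ve more or less rebuilt a container runtime.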
gVisor already exists for environments like this, where the extra security at the cost of some performance is welcome. But built-in support for handing processes an isolated, hardened kernel, spun up from the primary running Linux kernel, would probably make a lot of that performance gap disappear.
I’m also thinking it could do wonders for compatibility, since you could bundle abandonware apps with an older kernel, or ship new apps that require features from the latest kernel to places that wouldn’t normally have those capabilities.
How is this better than a hypervisor OS running multiple VM’s?
There is no hypervisor. So, no hypervisor to update.
I recently heard this great phrase:
“A VM makes an OS believe that it has the machine to itself; a container makes a process believe that it has the OS to itself.”
This would be somewhere between that, where each container could believe it has the OS to itself, but with different kernels.
I imagine there are some overhead savings, but I don’t know how much. I guess with a classic hypervisor there are still calls going through the host kernel, whereas with this they’d go straight to the hardware without special passthrough features?
You save on some overhead because the hypervisor is skipped. Things like disk I/O to physical disks can be more efficient with multikernel (direct access to the HW) than with VMs (which have to virtualize at least some components of HW access).
With the proposed “Kernel Hand Over”, it might be possible to send processes to another kernel entirely. This would allow booting a completely new kernel, moving your existing processes and resources over, then shutting down the old kernel, effectively updating with zero downtime.
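Nothing like this exists in the kernel today, so purely as a hypothetical sketch of the lifecycle described above: boot a spare kernel, hand processes over, retire the old one. Every name and version number below is made up for illustration; it’s a toy model, not the proposed API.

```python
# Toy model of a zero-downtime kernel update via the proposed "Kernel Hand
# Over". All of this is hypothetical; the stubs only mirror the three steps:
# boot a new kernel, migrate processes/resources, shut down the old kernel.
from dataclasses import dataclass, field


@dataclass
class Kernel:
    version: str
    processes: list = field(default_factory=list)  # PIDs this kernel manages


def boot_spare_kernel(version: str) -> Kernel:
    """Stand-in for booting a second kernel on reserved CPUs/RAM."""
    print(f"booting kernel {version} on reserved CPUs/RAM")
    return Kernel(version)


def migrate_process(pid: int, src: Kernel, dst: Kernel) -> None:
    """Stand-in for handing a process and its resources to another kernel."""
    src.processes.remove(pid)
    dst.processes.append(pid)
    print(f"pid {pid}: {src.version} -> {dst.version}")


def retire_kernel(kernel: Kernel) -> None:
    """Stand-in for shutting a kernel down once nothing runs on it."""
    assert not kernel.processes, "old kernel still owns processes"
    print(f"shutting down kernel {kernel.version}")


old = Kernel("6.9", processes=[101, 202, 303])
new = boot_spare_kernel("6.12")
for pid in list(old.processes):
    migrate_process(pid, old, new)
retire_kernel(old)  # the workloads never stopped (in this toy model)
```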
It will definitely take some time for any enterprises to transition over (if they have a use for this), and consumers will likely not see much use in this technology.
More transparent hardware sharing, and less overhead from not needing to virtualize hardware.
I remember partitioned systems being a big thing in like the '90s and '00s, since those were the days you would pour $$$$ into large systems. But I thought the “cattle not pets” movement did away with that? Are we back to the days of “big iron”?
Constant back and forth. Moving things closer increases efficiency; moving them apart increases resiliency.
So we are constantly shuffling between the two for different workloads to optimize for the given thing.
That said, I see this as an extension to the cattle idea, by making even the kernel a thing to be raised and culled on demand. This matters a lot more with heavy workloads like HPC and AI, where a process can be measured in days or weeks and stable uptime is paramount, versus the stateless work k8s was intended for (I say intended because you can k8s all the things now, but it needs extensions to handle the new lifecycles).
And the wheel of reincarnation forever keeps turning.
What do you think all those cattle run on?
Just big-ass servers with tons of cores and RAM.
I figured it was cattle all the way down. Even if they’re big. Especially when you have thousands of them.
Though maybe these setups can be scripted/automated to be easy to replicate and reproduce?
In essence, yes. For example, VMware ESXi hosts can be managed from a single image, with customizations made at the cluster level. Give me PXE and I can provision you n hosts in about the same time as one host.
This seems to be a pretty niche use case brought about by changes in the hardware available for servers. Likely they are running into situations where their servers have copious amounts of RAM and CPU cores that the task at hand doesn’t need all of, or perhaps can’t even make use of due to software constraints. So this is a way for them to run different tasks on the same hardware without having to worry about virtualization, effectively turning one bare metal server into two bare metal servers. They mention in their statement that, “The primary use case in mind for parker is on the machines with high core counts, where scalability concerns may arise.”
I run a Proxmox homelab. I just had to shut down everything it runs to upgrade Proxmox. If I could hot reload the kernel, I would not have had to do that. Sounds pretty handy to me. But that may be the multikernel approach, not this partitioning.
Honestly, even on the desktop. On distros like Arch or Chimera Linux, the kernel is getting updated all the time. It would be great to avoid restarts there too.
If you consider the core count in modern server grade CPUs, this makes sense.
And they said k8s was overengineered!
I mean isn’t this just Xen revisited? I don’t understand why this is necessary.
Xen runs full virtual machines. You run full operating systems on simulated hardware. The real “host” operating system is the hypervisor (Xen). Inside a VM, you have the concept of one or more CPUs, but you do not know which actual CPU cores those map to. The load can be distributed to any of them by the real host.
In something like Docker, you only run a single host kernel. On top of that you run sandbox environments that run on the kernel that “think” they have an environment to themselves but are actually sharing a single host kernel. The single host kernel directly manages the real hardware. Processes can run on any of the CPUs managed by the single host kernel.
In both of the above, updating the host means shutting the system down.
With this new approach, you have multiple kernels, all running natively on real hardware. Any given CPU is being managed by only one of the kernels. No hypervisor.
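To make the Docker part concrete: on a Linux host, a container reports the exact same kernel release as the host, because there is only one kernel. A quick check, assuming Docker is installed and can pull the alpine image (under Xen, each VM would report its own kernel instead):

```python
# Show that a Docker container shares the host kernel: both report the same
# kernel release. Assumes Docker is installed and the alpine image is pullable.
import platform
import subprocess

host_kernel = platform.release()
container_kernel = subprocess.run(
    ["docker", "run", "--rm", "alpine", "uname", "-r"],
    capture_output=True, text=True, check=True,
).stdout.strip()

print("host:     ", host_kernel)
print("container:", container_kernel)
print("same kernel?", host_kernel == container_kernel)  # True on a Linux host
```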
deleted by creator
GTFO, you’re the brainrot-AI-slop-hosting TikTok company.
Code is code. If it’s good Free code, I’ll use it. I also don’t like Microsoft and Facebook but I run their kernel code too.
Why should I trust them with this multi-kernel thingy if they let the dumpster fire that is TikTok exist? And they’re probably trying to embrace-extend-extinguish Linux, just like Microsoft and Apple with their WSL and Containers.app respectively.
Because it’s Free and reviewed by kernel maintainers, what do you mean?