A university near me must be going through a hardware refresh, because they’ve recently been auctioning off a bunch of ~5-year-old desktops at extremely low prices. The only problem is that you can’t buy just one or two. All the auction lots are batches of 10-30 units.
It got me wondering if I could buy a bunch of machines and set them up as a distributed computing cluster, sort of a poor man’s version of the way modern supercomputers are built. A little research revealed that this is far from a new idea. The first really successful distributed computing cluster (called Beowulf) was built by a team at NASA in 1994 using off-the-shelf PCs instead of the expensive custom hardware being used by other supercomputing projects at the time. It was also a watershed moment for Linux, then only a few years old, which was used to run Beowulf.
Unfortunately, a cluster like this seems less practical for a homelab than I had hoped. I initially imagined that there would be some kind of abstraction layer allowing any application to run across all computers on the cluster in the same way that it might scale to consume as many threads and cores as are available on a CPU. After some more research I’ve concluded that this is not the case. The only programs that can really take advantage of distributed computing seem to be ones specifically designed for it. Most of these fall broadly into two categories: expensive enterprise software licensed to large companies, and bespoke programs written by academics for their own research.
So I’m curious what everyone else thinks about this. Have any of you built or administered a Beowulf cluster? Are there any useful applications that would make it worth building for the average user?
There are several options, although some may be defunct.
Last time I looked into this, openMosix was the most interesting, affordable, general-purpose option. It turned several computers into one big virtual computer. I ran a very small, 3-node cluster for a time. The upside was that you could run almost anything on it - unlike most HPC solutions, it didn’t require bespoke languages, libraries, or targeted solutions. The downside was performance; it turns out that to really take advantage of HPC, you need to program for it. OpenMosix looks defunct now.
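To give a sense of what “programming for it” means in practice: most cluster-aware code is written against a message-passing library like MPI, where you explicitly decide what each process (rank) does and when data crosses the network - nothing is transparent the way it was under openMosix. A minimal sketch in C, assuming an MPI implementation such as Open MPI or MPICH is installed and you build with mpicc:

```c
/* hello_mpi.c - minimal MPI program: each process reports its rank.
 * Build:  mpicc hello_mpi.c -o hello_mpi
 * Run:    mpirun -np 4 ./hello_mpi   (a hostfile spreads ranks across machines)
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* start the MPI runtime        */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which process am I?          */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many processes in total? */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut the runtime down cleanly */
    return 0;
}
```

Even this trivial example shows the model: the parallelism is explicit, and anything nontrivial means hand-writing the communication between ranks.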
OpenPMIx appears to have taken up the torch from OpenMosix. It looks active, but I have no specific knowledge about it.
tldp.org has some good required reading before you invest in this, in particular on the elephant in the room: networking latency. The short version is that, no matter how slow your computers, the bottleneck will still be the network. Unless you’re willing to invest a lot in fiber and expensive, fast switches, it’s probably not worth it.
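If you want to put a number on that before buying a pallet of desktops, the classic measurement is an MPI ping-pong: rank 0 sends a tiny message to rank 1, rank 1 sends it back, and you average over many round trips. On commodity gigabit Ethernet that round trip is typically tens of microseconds, versus well under a microsecond between two processes on the same box, which is exactly why tightly coupled codes fall over on a cheap network. A rough sketch (your exact numbers will depend entirely on NICs and switch):

```c
/* pingpong.c - rough MPI latency probe between rank 0 and rank 1.
 * Build:  mpicc pingpong.c -o pingpong
 * Run:    mpirun -np 2 ./pingpong
 *         (use your MPI's hostfile/--host options to put the two ranks on
 *          different machines, so you measure the network and not shared memory)
 */
#include <mpi.h>
#include <stdio.h>

#define ITERS 10000

int main(int argc, char **argv)
{
    int rank;
    char byte = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);           /* line both ranks up before timing */
    double start = MPI_Wtime();

    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double elapsed = MPI_Wtime() - start;
    if (rank == 0)
        printf("avg round trip: %.2f us\n", elapsed / ITERS * 1e6);

    MPI_Finalize();
    return 0;
}
```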
Slurm crosses the line into modern cluster job management, like you might find in a cloud provider like AWS, which is the direction the non-supercomputer industry took when commodity MPI turned out not to be feasible. Warewulf is another take, sort of one foot in distributed container management and lightweight MPI. Both are pretty involved, more Beowulf than OpenMosix.
tldr, it’s probably not worth it if you’re looking for a cheap Beowulf cluster, because such a thing doesn’t exist in any practical sense. Cost and physics get in the way. If you want to set up a data center, or some job farm like AWS or GCP, that’s another matter. But it’s a far cry from MPI.