The Project

We were tasked by our Professional Responsibility course to “execute a project that benefits more than 5 people,” with a minimum requirement of 8 hours of work. All in all, Nate and I spent about 12 hours together bringing this project to fruition.

This seemed a little shocking at first: we were given complete ownership of our projects, and only had to run our ideas by our professor for approval before starting.

How we landed at Rosetta@Home

So, the first idea was simple: set up a “private cloud” in the apartment’s closet where we could host VMs and have a dedicated space to run projects for us, our friends, and the world. Unfortunately, that only directly benefited the two of us, until it dawned on us: Rosetta@Home is one of the many BOINC projects out there. If we could get this “private cloud” off the ground and install a VM purely dedicated to Rosetta@Home, we could meet the 8-hour requirement and help the world.

The server still runs and is part of the Cure-4-Cancer team. You can check the box’s stats here.

Procurement

I already had a PowerEdge R710 I scored from r/homelabsales for free, provided it was going to a good cause. I think the original owner would be proud to see how it helps the world now, considering it started as a test bed for me to fiddle with some bare metal. The box has two Xeon X5650s and 64 GB of RAM. We allocated 18 vCPUs to the Rosetta@Home VM, leaving 6 vCPUs for Nate and me to run our own projects. Frontier provides us with “up to” 500 Mbit/s up and down, so we had plenty of bandwidth to boot.
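
For reference, the vCPU split is just a couple of parameters in xcp-ng. A minimal sketch from the host console, with a placeholder UUID (pull the real one from xe vm-list; the VM has to be halted to change VCPUs-max):

    # Find the Rosetta@Home VM's UUID
    xe vm-list

    # Raise the cap first, then the boot-time count
    xe vm-param-set uuid=<vm-uuid> VCPUs-max=18
    xe vm-param-set uuid=<vm-uuid> VCPUs-at-startup=18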

Execution

Physical Topology

The topology of the physical environment is dirt simple. The box sits on a shelf in a closet in the apartment, with a 16-port unmanaged switch above it and a run going to the Frontier (residential fiber ISP) router. Because I really don’t like residential/consumer-grade router/firewall devices, we chose to apply a DMZ rule on the Frontier router, thus forwarding all ports from the apartment’s edge to… the pfSense virtual machine running on the metal!

So how did we get to pfSense in a VM? Easy, really.

Setting up the “private cloud” VM Host

  1. Install xcp-ng on the bare metal
  2. Set up a management interface on an IP that won’t be pfSense’s Edge address.
  3. Set up a vSwitch. The idea here is that the edge is exposed only to pfSense, and all VMs route through pfSense via the vSwitch. That gives us packet filtering, HAProxy, Let's Encrypt, OpenVPN, and a few other services running in pfSense to make the environment easier to manage remotely.
  4. Set up pfSense, and make sure to disable TX checksum offloading in Xen, or suffer a gnarly performance hit that drops throughput to about 16 Kbit/s! All we ended up needing was xe pif-param-set uuid=$PIFUUID other-config:ethtool-tx="off"; (full snippet just after this list).
  5. Set up an OpenVPN server in pfSense so we can tunnel into the virtual LAN where our VMs run.
  6. Set up an SMB mount to the local Windows desktop, so we have a few ISO images ready to provision VMs.
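
For step 4, here is the full fix as run from the xcp-ng console. The only assumption is which PIF carries pfSense's edge, so check xe pif-list first:

    # Identify the physical interface pfSense's edge sits on
    xe pif-list

    # Disable TX checksum offloading on that PIF; without this,
    # throughput through the pfSense VM craters to ~16 Kbit/s
    xe pif-param-set uuid=$PIFUUID other-config:ethtool-tx="off"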

From there, we just ran. It took about 6-7 hours to work through the numerous installs, dig up documentation, and squash the initial bugs.

Setting up Ubuntu 20, Docker, and BOINC

Getting Ubuntu installed was cake, and the installer feature that imports public keys from GitHub straight into OpenSSH saved me from ever needing a password. Except, of course, to disable password prompts in sudo! Even better, installing Docker was super quick thanks to the folks there providing the shell snippets we needed to run.
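
Concretely, those two conveniences look something like the following. The install script comes straight from Docker's documentation; the username in the sudoers line is a placeholder for your own:

    # Docker's convenience install script (per docs.docker.com)
    curl -fsSL https://get.docker.com -o get-docker.sh
    sudo sh get-docker.sh

    # Stop sudo from prompting for a password ("nate" is a stand-in user)
    echo 'nate ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/nate
    sudo chmod 0440 /etc/sudoers.d/nate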

The final steps of getting the BOINC client onto Ubuntu were trivial and well documented. We chose the BOINC-in-Docker approach, because why not? Docker makes life easy when a project's documentation is good.
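
I won't swear this is our exact invocation, but based on the boinc/client image's documentation, running the client and attaching it to Rosetta@Home looks roughly like this (the data path, RPC password, and account key are all yours to fill in):

    # Run the BOINC client container with host networking
    docker run -d --name boinc --net=host \
      -v /opt/appdata/boinc:/var/lib/boinc \
      -e BOINC_GUI_RPC_PASSWORD="secret" \
      -e BOINC_CMD_LINE_OPTIONS="--allow_remote_gui_rpc" \
      boinc/client

    # Attach the running client to Rosetta@Home with your account key
    docker exec boinc boinccmd --project_attach \
      https://boinc.bakerlab.org/rosetta/ <account-key>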

Some extras with the VMs

  • We use the CloudFlare CDN to cache our static websites and proxy requests to our REST APIs, like scoreboard. This helps us stay protected and gives us edge caching that is oh-so-fast.

  • We moved the SMB mount to a TrueNAS VM. This way the ISO repository stays local and compatible with Xen, and it eliminates the dependency on the desktop being powered on. A sketch of the SR creation follows.
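
Pointing xcp-ng at the TrueNAS share is a single xe sr-create call. A sketch following the documented CIFS ISO SR options, with a hypothetical share path and credentials:

    # Create an ISO storage repository backed by the TrueNAS SMB share
    xe sr-create name-label="TrueNAS ISO Library" type=iso content-type=iso \
      device-config:location=//truenas.lan/isos \
      device-config:type=cifs \
      device-config:username=svc_xcp device-config:cifspassword=changeme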