Deploying OpenStack Victoria On Raspberry Pis
If you are trying to learn about OpenStack, as I have been lately, you may have found some writeups about deploying it on a Raspberry Pi cluster as an exercise. This struck me as a great idea - it gives you a chance to get hands-on with OpenStack using physical hardware, customize it to your preference, and produce a complete deployment you can use as a tool to experiment and learn. The best writeup I found is this one, but although it is a cool post with lots of good information, it is a bit light on the details for someone who has never worked with OpenStack before. This post is my attempt at a more beginner-friendly writeup that will make this a more approachable project.
INTRODUCTION
For my distro, I chose Ubuntu 20.04 server, the current LTS release. For my OpenStack deployment, I chose Victoria because it was the current supported release. For my hardware, I used the following:
- 4 x Raspberry Pi 4 Model B (8 GB RAM)
- 4 x PoE HATs
- 1 x Unmanaged PoE Switch
- 4 x USB SSDs
- 4 x Ethernet 1’ cables
- 1 x Ethernet 5’ cable
- 1 x SD Card (for initial setup)
- 1 x Cloudlet cluster case
- 4 x Heat sink kits
I would also recommend you get a micro-HDMI to HDMI adapter, so that if you have trouble booting your Raspberry Pis you can plug them into a monitor and see what they are doing.
It’s important to point out: you do not need all of this hardware. I wanted to build something cool, and this is what I chose. You could use as few as 2 Raspberry Pis if you wanted to: one to act as a controller node, the other to run compute and block storage services. You could even set up OpenStack on a single node if you want, but at that point you may as well install Devstack (which is also a great tool, but with different goals than this project).
Here are some changes you can make to save money and still closely follow the guide:
- Don’t get SSDs, just get SD cards. You will be sacrificing a lot of IO speed, but that’s okay, this is an educational project, not an enterprise cloud deployment.
- Don’t use PoE. Just get a cheap switch and 4 of the official power supplies.
- If you don’t get the PoE hats, you probably don’t need a nice case with active cooling. You can go with cheap cases, or even no case at all.
If you want, you don’t even have to buy a switch. You can use the Raspberry Pi’s built-in wifi. This guide will not cover that variation, but it will show how to configure the wifi interface with a static IP address. Modifying the other steps should not be overly challenging.
I set up my nodes in the following manner: one controller node, one node running compute and block storage, and two nodes running swift. You don’t have to set yours up the same way, but be aware that this setup is used as the reference throughout this guide.
This guide assumes you are using a Linux machine to set up your cluster. If that is not the case, you may have to use Google to figure out how to follow some steps on your platform.
DOCUMENTATION
OpenStack provides extensive official documentation. We will be referring to the Install Guide. Specifically, we will be following the minimal deployment guide (and adding on swift at the end). Depending on your level of familiarity with OpenStack, it may be a good idea to spend a few hours looking over this documentation to familiarize yourself with how OpenStack works.
We will, however, be deviating from this documentation for a few reasons:
- The documentation expects you to use an x86 platform, but we are using arm
- The documentation wants you to set up the network using multiple interfaces per node, but unless you want to use both the ethernet and wifi interfaces, we only have one per node. It would be interesting to use both and follow the documentation’s networking setup more closely, but I chose to stick with the reliability of ethernet only
- My switch doesn’t support VLAN tagging - I figured that is a low priority for a project of this nature, as the network will never be able to use the most desirable configuration anyway when your nodes are Raspberry Pis
- Some of the steps didn’t work until after I did a bit of troubleshooting and modified or added to them slightly
BOOTING TO UBUNTU ON YOUR SSD
This step only applies if you intend to boot from USB devices rather than SD cards. If not, just follow Canonical’s instructions instead and skip to the next section.
Raspberry Pi 4 Model B supports booting directly from a USB device. However, it needs a sufficiently new version of its firmware installed to do so. The easiest way to update the firmware is to install Raspberry Pi OS (Raspbian) on an SD card, boot to it, run sudo apt update && sudo apt upgrade, then run sudo raspi-config and select the newest firmware under the bootloader options. Do this for all of your Raspberry Pis. (Note: I did this part a little while ago. If I misremembered the proper steps and it doesn’t work, just google “raspberry pi update firmware”. I will update this section later after double checking. (Edit 2024, when revisiting this blog: I don’t think that’s getting updated.))
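If you would rather do this from the command line, Raspberry Pi OS also ships the rpi-eeprom tools; roughly (treat this as a sketch, not the definitive procedure), the process looks like:

# Show the installed bootloader EEPROM version and whether an update is available
sudo rpi-eeprom-update
# Stage the latest bootloader firmware; it is applied on the next reboot
sudo rpi-eeprom-update -a
sudo reboot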
Ubuntu 20.04 server edition for the Raspberry Pi does not expect to be booted via USB, and is not compatible with this setup out of the box. You will need to follow this guide for each Raspberry Pi (archived link here just in case). Note that if you completed the previous step, you can skip step 3 here. Repeat this process for each boot device (although you may want to skip to the next section, verify that you correctly made a bootable drive, then come back and prepare the other drives once you know you got it right).
At this point, if all went well, you should be able to boot your Raspberry Pis to Ubuntu 20.04 on your USB drives without any SD card inserted.
INITIAL SETUP
If you haven’t already, go ahead and plug your switch into your router. Attach a boot device to your first Pi, then plug it into your switch (and power source, if not using PoE). It will boot up, do an initial setup on its own, and get an IP address from your router using DHCP. Figure out what address it is using in one of two ways:
- If you log in to your router’s web interface, it will most likely provide you with a list (somewhere) of IP addresses being used, and the hostname of the device using it. Try to find your Pi’s IP in that list
- Use sudo nmap -sA 192.168.0.0/24, substituting in the values for “192.168.0” that correspond to your network
Once you have the IP address, ssh into the Pi with ssh ubuntu@<ip>. The default password is ubuntu. You will be prompted to enter that password again and then change it - don’t lose your new password. It will then log you out, so ssh back in with your new password.
The first thing you should do after logging back in is sudo apt update && sudo apt upgrade. This may not work right away, as unattended-upgrades may already be running. Personally, I don’t want unattended-upgrades running on 4 machines on my network, producing spikes of bandwidth use at random, so once unattended-upgrades was done, I ran sudo apt remove unattended-upgrades, then sudo apt update && sudo apt upgrade. If you choose to remove unattended-upgrades, you are taking responsibility for manually logging in to your Raspberry Pis and running sudo apt update && sudo apt upgrade on a regular basis to keep them updated and secure.
Next, you should change the hostname. If you reference the official documentation linked earlier, you will see that it uses the hostnames controller, compute1, block1, object1, and object2. You can follow that convention or use your own names - just remember to substitute your hostnames when configuring things from the documentation. Your first node will be the controller, so set its hostname to controller (or whatever you prefer) with sudo hostnamectl set-hostname <hostname>.
You will also want a static IP address for your ethernet port. I also assigned a static IP address to the wifi interface so that I would have two potential ways to SSH into my Pis if I got locked out of one. Ubuntu 20.04 uses netplan to accomplish this. Create a new netplan config file using sudo vim /etc/netplan/10-netplan.yaml, and use the following as a template:
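This is a minimal sketch of such a config - the interface names (eth0, wlan0), addresses, gateway, and SSID are placeholders you will need to replace with values for your own network:

network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: false
      # example static address for the wired interface
      addresses: [192.168.0.21/24]
      gateway4: 192.168.0.1
      nameservers:
        addresses: [192.168.0.1]
  wifis:
    wlan0:
      dhcp4: false
      # optional second static address on wifi, as a backup way in
      addresses: [192.168.0.31/24]
      gateway4: 192.168.0.1
      nameservers:
        addresses: [192.168.0.1]
      access-points:
        "your-ssid":
          password: "your-wifi-password"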
Since this file contains your wifi password, it is best practice to make it readable only by root with sudo chmod 0600 /etc/netplan/10-netplan.yaml. Now open the pre-existing config file with sudo vim /etc/netplan/50-cloud-init.yaml (or possibly a different name if you have a different Ubuntu 20.04 image than the one I used). In this file, change dhcp4: true to dhcp4: false. Then, read the comments at the top:
# This file is generated from information provided by the datasource. Changes
# to it will not persist across an instance reboot. To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
Follow those instructions.
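In other words, create that file containing that single line. One quick way to do it:

echo 'network: {config: disabled}' | sudo tee /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg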
At this point you need to run sudo netplan generate. If that does not produce errors, run sudo netplan apply. Now you can reboot your Pi. Give it a minute or two to reboot, then ssh into it at its new IP address. If that doesn’t work, try to ssh into it at its alternate new wifi IP address. If that doesn’t work either, you probably have a broken netplan config. You can either plug your Pi into a keyboard/mouse/monitor and fix it that way, or reinstall Ubuntu. Just make sure you give your Raspberry Pi a few minutes to finish rebooting before doing anything drastic, and try rebooting a couple of times in case it is just being finicky (occasionally with USB boot, I have had nodes require a couple of attempts to boot, but they work fine after).
It would be smart to set up passwordless key-based SSH at this point, add the Pi to your SSH config file, and disable password SSH login on your Pi. If you don’t know how to do these things, there are many tutorials on the subject.
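If you just want the short version (assuming a stock OpenSSH setup, and that you run these from your own machine), it looks roughly like this:

# Generate a key pair on your workstation if you don't already have one
ssh-keygen -t ed25519
# Copy the public key to the Pi (use your Pi's static address or hostname)
ssh-copy-id ubuntu@192.168.0.21
# Then, on the Pi, disable password logins and restart sshd
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl restart ssh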
Now, go back to the beginning of this section and repeat it for each Raspberry Pi.
After each Raspberry Pi is set up, you will need to make them aware of each other. On each Pi, run sudo vim /etc/hosts and add a line containing <ip> <hostname> for each node. To verify that it worked, ping each node by hostname: ping <hostname>.
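As an example, with my layout (and the example addresses used earlier), each Pi’s /etc/hosts gets lines like these, adjusted to your own static IPs and hostnames:

192.168.0.21 controller
192.168.0.22 compute1
192.168.0.23 object1
192.168.0.24 object2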
This section of the guide is intended as a replacement for the environment networking section of the official minimal deployment guide. It omits some steps that use ifconfig and expect you to have multiple interfaces. We have not yet placed the controller/compute node interfaces into promiscuous mode, which is necessary for this deployment to work. However, we will address that later when deploying neutron.
ENVIRONMENT
Now we are going to start working through the deployment guide’s Environment section.
Go ahead and open up the deployment guide’s security page in another tab. You might as well generate all of the passwords listed there right now and get it out of the way. It would be best to use a password manager of your choice to manage them - but please follow the instructions and use something like openssl rand -hex 10 to actually generate the passwords. I forgot this advice in my initial attempt and used passwords with special characters, which resulted in many wasted hours debugging vague error messages.
In addition to the listed passwords, create the passwords SWIFT, SWIFT_PREFIX and SWIFT_SUFFIX if you intend to follow my instructions to install swift at the end.
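If you would rather script it than run openssl a dozen times, a loop like the following works (the names are the ones from the security page plus the swift extras above; trim or extend the list to match your deployment, then move the results into your password manager):

for secret in ADMIN_PASS CINDER_DBPASS CINDER_PASS DASH_DBPASS DEMO_PASS \
              GLANCE_DBPASS GLANCE_PASS KEYSTONE_DBPASS METADATA_SECRET \
              NEUTRON_DBPASS NEUTRON_PASS NOVA_DBPASS NOVA_PASS \
              PLACEMENT_PASS RABBIT_PASS SWIFT SWIFT_PREFIX SWIFT_SUFFIX; do
  echo "$secret=$(openssl rand -hex 10)"
done > openstack-passwords.txt
chmod 600 openstack-passwords.txt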
Now it is time to configure NTP. These instructions do not require much modification, except that on the controller node you probably don’t have an NTP server you want to specify - you can leave it with the default pools (but do delete them from the other nodes).
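For reference, on the non-controller nodes the change boils down to editing /etc/chrony/chrony.conf so the default pool lines are gone and the controller is used instead, roughly:

# /etc/chrony/chrony.conf on compute/storage nodes:
# delete or comment out the default pool/server lines, then add
server controller iburst

Restart the service afterwards with sudo systemctl restart chrony.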
In the following sections, and continuing throughout this entire deployment, there will be packages named python-*. We aren’t using Python 2, we are using Python 3. So when installing these packages, just replace ‘python’ with ‘python3’. The documentation will state this explicitly at first, but it will not do so in every section.
Follow these instructions:
- Install the official repository and client (Every node)
- Install database (Controller node only)
- Install message queue (Controller node only)
- Install memcached (Controller node only)
- Install etcd (Controller node only)
When you install etcd, you will see an error. Etcd does not officially support ARM, although it is compiled for it. You simply need to add one line to /etc/default/etcd: ETCD_UNSUPPORTED_ARCH=arm64, and then run sudo systemctl enable etcd && sudo systemctl restart etcd.
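If you prefer one-liners, that amounts to:

echo 'ETCD_UNSUPPORTED_ARCH=arm64' | sudo tee -a /etc/default/etcd
sudo systemctl enable etcd && sudo systemctl restart etcd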
At this point, our nodes are finally ready to begin actually installing OpenStack.
INSTALL OPENSTACK
Note that the order in which the services are listed in this section is not arbitrary: you need to install them in this order. You don’t need to install Swift if you don’t want object storage, and if you choose not to use Swift then, depending on the number of nodes you have, it may be reasonable to put Nova and Cinder on different nodes.
KEYSTONE
This page describes the steps to install Keystone. Take at least a few moments to familiarize yourself with the overview, then complete all the steps under “Keystone Installation Tutorial for Ubuntu”. All of the service installation tutorials are set up more or less like this: first background information on the service itself, then subsections for particular platforms. Just make sure you always go to the Ubuntu section.
In the keystone setup guide, the ‘verify operation’ page is under the Ubuntu section. For other services, you may find that page in its own section. Always ensure that you visit this page and follow its instructions, or you may end up with a broken install and not realize it until much later.
In this section and all further sections, when you copy and paste into configuration files, carefully ensure that everything reflects your environment. Check and double-check the hostnames, IP addresses, and passwords. Once you get to the “Getting Started” page, you are done with that section for the purposes of our deployment. However, do remember that this documentation exists in case you want to come back and learn more.
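As a final sanity check for Keystone, once you have created the admin-openrc client environment script the guide describes (the values below are the guide’s defaults, with ADMIN_PASS standing in for the password you generated), sourcing it and requesting a token should succeed:

# admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

. admin-openrc
openstack token issue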
GLANCE
Now you will install glance. You will be told to download a CirrOS image - you need to find an image for arm64/aarch64.
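CirrOS does publish aarch64 builds. As a sketch (the version and URL are just what I would reach for - check download.cirros-cloud.net for a current aarch64 image, and source admin-openrc first):

wget http://download.cirros-cloud.net/0.5.1/cirros-0.5.1-aarch64-disk.img
glance image-create --name "cirros" \
  --file cirros-0.5.1-aarch64-disk.img \
  --disk-format qcow2 --container-format bare \
  --visibility=public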
PLACEMENT
Install the Placement API. There is a longer overview for this section, but don’t worry - setting it up won’t be harder than the previous sections were.
NOVA
Time to get some compute power going. Don’t worry too much about all the overview subsections here. They are worth reading and understanding, of course, but for our deployment purposes you can skip straight to the installation sections for Ubuntu under “Install and configure controller node” and “Install and configure a compute node”.
You will be told to check that your platform supports KVM, and that check will fail. However, the Raspberry Pi 4 does support KVM. The check described in OpenStack’s guide is simply looking for AMD or Intel’s version of virtualization support.
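If you want to reassure yourself before continuing, checking for the kvm device node is a simple substitute for the guide’s cpuinfo grep; if it exists, the kernel has KVM support and you can keep virt_type = kvm in the [libvirt] section of nova.conf rather than falling back to qemu:

# On the compute node
ls -l /dev/kvm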
NEUTRON
Neutron is OpenStack’s networking solution. This is what worried me the most when I was setting up my cluster, since I made no attempt to adhere to either of the networking options described in the install guide. However, I got it working without too much trouble.
We are going to use the instructions for “Networking Option 2: Self-service networks”, so make sure you skip the instructions for “Networking Option 1: Provider Networks”. When you are enumerating type_drivers in the config, omit vlan from the list (unless you have a switch that does support VLAN).
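For clarity, the relevant [ml2] block in /etc/neutron/plugins/ml2/ml2_conf.ini ends up looking like the guide’s version minus vlan, roughly:

[ml2]
type_drivers = flat,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security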
Remember before when I mentioned that eth0 needs to be in promiscuous mode? After you finish following OpenStack’s instructions for this section, it’s time to take care of that. On both your controller node and your compute node, run sudo systemctl edit neutron-linuxbridge-agent.service. Put the following text in the document that opens up and save it:
[Service]
ExecStartPre=+ip link set eth0 promisc on
Then reboot each node.
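After the reboot, you can confirm the override took effect by checking the interface flags - PROMISC should appear in the list:

ip link show eth0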
HORIZON
Technically, this is already a complete, minimal OpenStack deployment. But there are some other common packages you most likely want. For starters, Horizon. Although the openstack CLI client is quite nice, having a web interface is extremely convenient.
CINDER
You can launch VMs already, and they will have an image copied from one of the images you upload into glance. That is useful, but VMs are much more useful if you can provide them with persistent volumes that can be detached, reattached, and managed separately. Cinder makes this possible. For my setup, I put this service on the same Pi that serves as my compute node.
I did run into problems actually using Cinder volumes, with the following error in nova and cinder logs:
(Invalid input received: Connector doesn't have required information: initiator). (HTTP 500)
The solution was to run sudo systemctl enable iscsid && sudo systemctl enable iscsi on the block storage node and reboot it.
I opted not to install the backup service when installing Cinder. If/when I do add it later, I will update this section.
SWIFT
Object storage is another nice feature of cloud platforms, and one solution for this in OpenStack is swift. Swift has no corresponding section in the minimal deployment guide, but it does have an install guide here.
The way you complete this section will depend on what hardware you have available. I deployed Swift on two Raspberry Pis with one SSD each, but the guide will expect you to have two nodes, each with two extra drives dedicated to swift. Just keep this in mind and modify the instructions as necessary to suit your configuration. When you get to the “Create and distribute initial rings” section, you will find cryptic commands like swift-ring-builder account.builder create 10 3 1. Straight from man swift-ring-builder, here is the meaning of these three numbers:
create <part_power> <replicas> <min_part_hours>
Creates <builder_file> with 2^<part_power> partitions and <replicas>. <min_part_hours>
is number of hours to restrict moving a partition more than once.
I ended up using 10 2 1, since with two disks available I can’t have more than 2 replicas.
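To make that concrete, building the account ring with my values (10 2 1) and one device per object node looked roughly like this - the device name and IP addresses are examples from my setup, and the container and object rings follow the same pattern on ports 6201 and 6200:

swift-ring-builder account.builder create 10 2 1
swift-ring-builder account.builder add \
  --region 1 --zone 1 --ip 192.168.0.23 --port 6202 --device sda1 --weight 100
swift-ring-builder account.builder add \
  --region 1 --zone 2 --ip 192.168.0.24 --port 6202 --device sda1 --weight 100
swift-ring-builder account.builder rebalance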
Finally, I encountered one bug: a missing configuration file prevented swift from working. I was able to solve this problem by running:
curl -o /etc/swift/internal-client.conf https://opendev.org/openstack/swift/raw/branch/master/etc/internal-client.conf-sample
WRAPPING UP
At this point, you have a complete OpenStack deployment. It won’t be the fastest cloud in the world, but you should have learned a good bit just from setting it up. Additionally, you are free to experiment to your heart’s content without incurring charges as you would on a public cloud. There are other solutions for that purpose (Devstack), but now you can deploy OpenStack from the ground up and you have a good place to start if you want a more customized, larger scale setup.
That’s it, have fun with your new cloud!