OpenStack Neutron – DHCP and DNS

In a cloud environment, a virtual instance typically uses a DHCP server to receive its assigned IP address and DNS services to resolve IP addresses. In this post, we will look at how these services are realized in our OpenStack playground environment.

DHCP basics

To understand what follows, it is helpful to quickly recap the basic mechanisms behind the DHCP protocol. Historically, the DHCP protocol originated from the earlier BOOTP protocol, which was developed by Sun Microsystems (my first employer, back in the good old days, sigh…) to support diskless workstations which, at boot time, need to retrieve their network configuration, the name of a kernel image (which could subsequently be retrieved using TFTP) and an NFS share to be used for the root partition. The DHCP protocol builds on and extends the BOOTP standard, for instance by adding the ability to deliver more configuration options than BOOTP is capable of.

DHCP is a client-server protocol using UDP with the standardized ports 67 (server) and 68 (client). At boot time, a client sends a DHCPDISCOVER message to request configuration data. In reply, the server sends a DHCPOFFER message to the client, offering configuration data including an IP address. More than one server can answer a discovery message, and thus a client might receive more than one offer. The DHCP client then sends a DHCPREQUEST to all servers, containing the ID of the offer that the client wishes to accept. The server from which that offer originates then replies with a DHCPACK to complete the handshake, while all other servers simply record the fact that the IP address that they have offered is available again. Finally, a DHCP client can release an IP address again by sending the DHCPRELEASE message.
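As an aside, the type of each of these messages is itself carried inside the packet as DHCP option 53, with one-byte values defined in RFC 2132. Here is a small Python sketch (illustrative code, not taken from any real DHCP implementation) showing how this option is encoded and decoded:

```python
# DHCP message types as defined in RFC 2132 (carried in option 53)
DHCP_MESSAGE_TYPES = {
    1: "DHCPDISCOVER",
    2: "DHCPOFFER",
    3: "DHCPREQUEST",
    4: "DHCPDECLINE",
    5: "DHCPACK",
    6: "DHCPNAK",
    7: "DHCPRELEASE",
    8: "DHCPINFORM",
}

def encode_message_type(msg_type: int) -> bytes:
    """Encode option 53 (DHCP message type) as tag, length, value."""
    return bytes([53, 1, msg_type])

def decode_message_type(option: bytes) -> str:
    """Decode an encoded option 53 back into the message name."""
    tag, length, value = option[0], option[1], option[2]
    assert tag == 53 and length == 1
    return DHCP_MESSAGE_TYPES[value]
```

For instance, encode_message_type(1) yields the three bytes 53, 1, 1, which identify a DHCPDISCOVER.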

There are a few additional message types like DHCPINFORM, which a client can use to request only configuration parameters if it already has an IP address, or DHCPDECLINE, which a client sends to a server if it determines (using ARP) that an address offered by the server is already in use. These message types do not usually take part in a standard bootstrap sequence, which is summarized in the diagram below.


We have said that DHCP is using UDP which again is sitting on top of the IP protocol. This raises an interesting chicken-and-egg problem – how can a client use DHCP to talk to a server if it does not yet have an IP address?

The answer is of course to use broadcasts. Initially, a client sends a DHCP request to the IP broadcast address and the Ethernet broadcast address ff:ff:ff:ff:ff:ff. The IP source address of this request is the all-zeroes address

Then, the server responds with a DHCPOFFER directed towards the MAC address of the client and using the offered IP address as IP target address. The DHCPREQUEST is again a broadcast (this is required as it needs to go to all DHCP servers on the subnet), and the acknowledge message is again a unicast packet.

This process assumes that the client is able to receive an IP message directed to an IP address which is not (yet) the IP address of one of its interfaces. As most Unix-like operating systems (including Linux) do not allow this, DHCP clients typically use a raw socket to receive all incoming IP traffic (see for instance the relevant code of the udhcp client which is part of BusyBox which uses a so-called packet socket initially and only switches to an ordinary socket once the interface is fully configured). Alternatively, there is a flag that a client can set to indicate that it cannot deal with unicast packets before the interface is fully configured, in which case the server will also use broadcasting for its reply messages.
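The flag mentioned above is the broadcast flag, the most significant bit of the 16-bit flags field of the BOOTP/DHCP header (see RFC 1542). A short sketch of how this bit could be set and tested in Python:

```python
import struct

BROADCAST_FLAG = 0x8000  # most significant bit of the 16-bit flags field

def make_flags(broadcast: bool) -> bytes:
    """Build the two-byte flags field of a BOOTP/DHCP message."""
    return struct.pack("!H", BROADCAST_FLAG if broadcast else 0)

def wants_broadcast(flags: bytes) -> bool:
    """Check whether a client has asked the server to broadcast its replies."""
    (value,) = struct.unpack("!H", flags)
    return bool(value & BROADCAST_FLAG)
```

A server inspecting the flags field of an incoming DHCPDISCOVER would use a check like wants_broadcast to decide whether to reply via broadcast or unicast.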

Note that, as specified in RFC 922, a router will not forward IP broadcasts directed to the limited broadcast address Therefore, without further precautions, a DHCP exchange cannot pass a router, which implies that a DHCP server must be present on every subnet. It is, however, possible to run a so-called DHCP relay which forwards DHCP requests to a DHCP server in a different network. Neutron will, by default, start a DHCP server for each individual virtual network (unless DHCP is disabled for all subnets in the network).

One of the reasons why the DHCP protocol is more powerful than the older BOOTP protocol is that the DHCP protocol is designed to provide a basically unlimited number of configuration items to a client using so-called DHCP options. The allowed options are defined in various RFCs (see this page for an overview maintained by the IANA organisation). A few notable options which are relevant for us are

  • Option 1: subnet mask, used to inform the client about the subnet in which the provided IP address is valid
  • Option 3: list of routers, where the first router is typically used as gateway by the client
  • Option 6: DNS server to use
  • Option 28: broadcast address for the client's subnet
  • Option 121: classless static routes (replaces the older option 33) that are transmitted from the server to the client and supposed to be added to the routing table by the client

DHCP implementation in OpenStack

After this general introduction, let us now try to understand how all this is implemented in OpenStack. To be able to play around and observe the system behavior, let us bring up the configuration of Lab 10 once more.

git clone
cd openstack-labs/Lab10
vagrant up
ansible-playbook -i hosts.ini site.yaml
ansible-playbook -i hosts.ini demo.yaml

This will install OpenStack with a separate network node, create two networks and bring up instances on each of these networks.

Now, for each virtual network, Neutron will create a network namespace on the network node (or, more precisely, on the node on which the DHCP agent is running) and spawn a separate DHCP server process in each of these namespaces. To verify this, run

sudo ip netns list

on the network node. You should see three namespaces, two of them starting with “qdhcp-“, followed by the ID of one of the two networks that we have created. Let us focus our investigation on the flat network, and figure out which processes this namespace contains. The following sequence of commands will determine the network ID, derive the name of the namespace and list all processes running in this namespace.

source demo-openrc
network_id=$(openstack network show \
    flat-network \
    -f yaml | egrep "^id: " | awk '{ print $2}')
ns_id="qdhcp-$network_id"
pids=$(sudo ip netns pids $ns_id)
for pid in $pids; do
  ps --no-headers -f --pid $pid
done

We see that there are two processes running inside the namespace – an instance of the lightweight DHCP server and DNS forwarder dnsmasq, and an HAProxy process (which handles metadata requests, more on this in a separate post). It is interesting to look at the full command line which has been used to start the dnsmasq process. Among the long list of options, you will find two options that are especially relevant.

First, the process is started using the --no-hosts flag. Usually, dnsmasq will read the content of the local /etc/hosts file and return the name resolutions defined there. Here, this is disabled, as otherwise an instance could retrieve the IP addresses of the OpenStack nodes. The process is also started with --no-resolv to skip reading of the local resolver configuration on the network node.

Second, the dnsmasq instance is started with the --dhcp-host-file option, which, in combination with the static keyword in the --dhcp-range option, restricts the address allocations that the DHCP server will hand out to those defined in the provided file. This file is maintained by the DHCP agent process. Thus the DHCP server will not actually perform address allocations, but is only a way to communicate the address allocations that Neutron has already prepared to a client.
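To make this more concrete, here is a sketch of how such a hosts file could look and be parsed. The line format (MAC address, hostname, IP address) follows dnsmasq's --dhcp-hostsfile convention; the MAC address, hostname and IP below are invented examples:

```python
def parse_dhcp_hosts(content: str) -> dict[str, str]:
    """Map MAC addresses to their pre-allocated IP addresses.

    Each line has the form mac,hostname,ip, as consumed by dnsmasq's
    --dhcp-hostsfile mechanism (entries here are invented examples).
    """
    leases = {}
    for line in content.strip().splitlines():
        mac, hostname, ip = line.split(",")
        leases[mac] = ip
    return leases

sample = "fa:16:3e:aa:bb:cc,host-172-16-0-3.openstacklocal,\n"
leases = parse_dhcp_hosts(sample)
```

When dnsmasq receives a DHCPDISCOVER, it looks up the client's MAC address in this table and offers exactly the address that Neutron's IPAM has already allocated.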

To better understand how this actually works, let us go through the process of starting a virtual machine and allocating an IP address in a bit more detail. Here is a summary of what happens.


First, when a user requests the creation of a virtual machine, Nova will ask Neutron to create a port (1). This request will eventually hit the create_port method of the ML2 plugin. The plugin will then create the port as a database object and, in the course of doing this, reach out to the configured IPAM driver to obtain an IP address for this port.

Once the port has been created, an RPC message is sent to the DHCP agent (2). The agent will then invoke the responsible DHCP driver (which in our case is the Linux DHCP driver) to re-build the hosts file and send a SIGHUP signal to the actual dnsmasq process in order to reload the changed file (5).

At some later point in the provisioning process, the Nova compute agent will create the VM (6) which will start its boot process. As part of the boot process, a DHCP discover broadcast will be sent out to the network (7). The dnsmasq process will pick up this request, consult the hosts file, read the pre-determined IP address and send a corresponding DHCP offer to the VM. The VM will usually accept this offer and configure its network interface accordingly.

Network setup

We have now understood how the DHCP server handles address allocations. Let us now try to figure out how the DHCP server is attached to the virtual network which it serves.

To examine the setup, log into the network node again and repeat the above commands to again populate the environment variable ns_id with the ID of the namespace in which the DHCP server is running. Then, run the following commands to gather information on the network setup within the namespace.

sudo ip netns exec $ns_id ip link show

We see that apart from the loopback device, the DHCP namespace has one device with a name composed of the fixed part tap followed by a unique identifier (the first few characters of the ID of the corresponding port). When you run sudo ovs-vsctl show on the network node, you will see that this device (which is actually an OVS created device) is attached to the integration bridge br-int as an access port. When you dump the OpenFlow rules on the integration bridge and the physical bridge, you will see that for packets with this VLAN ID, the VLAN tag is stripped off at the physical bridge and the packet eventually reaches the external bridge br-ext, confirming that the DHCP agent is actually connected to the flat network that we have created (based on our VXLAN network that we have built outside of OpenStack).
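The naming of the tap device can be sketched as follows. This is illustrative code, based on the observation that Neutron keeps device names at 14 characters – the prefix tap followed by the first 11 characters of the port ID; the length constant and the port ID used here are assumptions for the sketch:

```python
# Assumed interface name length used by Neutron (below the Linux limit
# of 15 characters): "tap" plus the first 11 characters of the port UUID
NIC_NAME_LEN = 14

def tap_device_name(port_id: str) -> str:
    """Derive the tap device name from a Neutron port ID (hypothetical
    helper mirroring the names observed on the network node)."""
    return ("tap" + port_id)[:NIC_NAME_LEN]

# Made-up port UUID, purely for illustration
name = tap_device_name("6a2b0d3f-1c4e-4f5a-9b8c-0d1e2f3a4b5c")
```

This is why you can match the tap devices shown by ip link against the output of openstack port list by comparing the first characters of the port IDs.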


Also note that in our case, the interface used by the DHCP server actually has two IP addresses assigned, one being the second IP address on the subnet, and the second one being the address of the metadata server (which might seem a bit strange, but again this will be the topic of the next post).

If you want to see the DHCP protocol in action, you can install dhcpdump on the network node, attach to the network namespace and run a dump on the interface while bringing up an instance on the network. Here is a sequence of commands that will start a dump.

source admin-openrc
sudo apt-get install dhcpdump
port_id=$(openstack port list \
  --device-owner network:dhcp \
  --network flat-network \
  -f value | awk '{ print $1}')
interface="tap${port_id:0:11}"
network_id=$(openstack network show \
    flat-network \
    -f yaml | egrep "^id: " | awk '{ print $2}')
ns_id="qdhcp-$network_id"
sudo ip netns exec $ns_id dhcpdump -i $interface

In a separate window (either on the network node or any other node), enter the following commands to bring up an additional instance.

source demo-openrc
openstack server create \
  --flavor m1.nano \
  --network flat-network \
  --key demo-key \
  --image cirros demo-instance-4

You should now see the sequence of DHCP messages displayed in the diagram above, starting with the DHCPDISCOVER sent by the newly created machine and completed by the DHCPACK.

Configuration of the DHCP agent

Let us now go through some of the configuration options that we have for the DHCP agent in the file /etc/neutron/dhcp_agent.ini. First, there is the interface_driver which we have set to openvswitch. This is the driver that the DHCP agent will use to set up and wire up the interface in the DHCP server namespace. Then, there is the dhcp_driver which points to the driver class used by the DHCP agent to control the DHCP server process (dnsmasq).
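Putting this together, the relevant part of /etc/neutron/dhcp_agent.ini in our setup could look like the snippet below (a sketch of our lab configuration; the dhcp_driver value shown is Neutron's standard dnsmasq driver class):

```ini
[DEFAULT]
# Driver used to set up and wire up the interface in the DHCP namespace
interface_driver = openvswitch
# Driver class used by the DHCP agent to control the DHCP server process
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
```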

Let us also discuss a few options which are relevant for the name resolution process. Recall that dnsmasq is not only a DHCP server, but can also act as DNS forwarder, and these settings control this functionality.

  • We have seen above that the dnsmasq process is started with the --no-resolv option in order to skip the evaluation of the /etc/resolv.conf file on the network node. If we set the configuration option dnsmasq_local_resolv to true, then dnsmasq will read this configuration and effectively be able to use the DNS configuration of the network node to provide DNS services to instances.
  • A similar setting is dnsmasq_dns_servers. This configuration item can be used to provide a list of DNS servers to which dnsmasq will forward name resolution requests.

DNS configuration options for OpenStack

The configuration items above give us a few ways to control the DNS resolution offered to the instances.

First, we can set dnsmasq_local_resolv to true. When you do this and restart the DHCP agent, all dnsmasq processes will be restarted without the --no-resolv option. The DHCP server will then instruct instances to use its own IP address as the address of a DNS server, and will leverage the resolver configuration on the network node to forward requests. Note, however, that this will only work if the nameserver in /etc/resolv.conf on the network node is set to a server which can be reached from within the DHCP namespace, which will typically not be the case (on a typical Ubuntu system, name resolution is done using systemd-resolved which will listen on a loopback interface on the network node which cannot be reached from within that namespace).

The second option that we have is to put one or more DNS servers which are reachable from the virtual network into the configuration item dnsmasq_dns_servers. This will instruct the DHCP agent to start the dnsmasq processes with the --server flag, thus specifying a name server to which dnsmasq is supposed to forward requests. Assuming that this server is reachable from the network namespace in which dnsmasq is running (i.e. from the virtual network to which the DHCP server is attached), this will provide name resolution services using this nameserver for all instances on this network.

As the configuration file of the DHCP agent is applied for all networks, using this option implies that all instances will be using this DNS server, regardless of the network to which they are attached. In more complex settings, this is sometimes not possible. For these situations, Neutron offers a third option to configure DNS resolution – defining a DNS server per subnet. In fact, a list of DNS servers is an attribute of each subnet. To set the DNS server for our flat subnet, use

source admin-openrc 
openstack subnet set \
  --dns-nameserver flat-subnet
source demo-openrc

When you now restart the DHCP agent, you will see that the DHCP agent has added a line to the options file for the dnsmasq process which sets the DNS server for this specific subnet to

Note that this third option works differently from the first two options in that it really sets the DNS server in the instances. In fact, this option does not govern the resolution process when using dnsmasq as a nameserver, but determines the nameserver that will be provided to the instances at boot time via DHCP, so that the instances will directly contact this nameserver. Our first two configuration options work differently: with both of them, the nameserver known to the instance is the DHCP server, which then forwards the request to the configured name server. The diagram below summarizes this mechanism.


Of course you can combine these settings – use dnsmasq_dns_servers to enable the dnsmasq process to serve and forward DNS requests, and, if needed, override this using a subnet specific DNS server for individual subnets.

This completes our investigation of DHCP and DNS handling on OpenStack. In the next post, we will turn to a topic that we have already touched upon several times – instance metadata.


OpenStack Neutron – building virtual routers

In a previous post, we have set up an environment with a flat network (connected to the outside world, in this case to our lab host). In a typical environment, such a network is combined with several internal virtual networks, connected by a router. Today, we will see how an OpenStack router can be used to realize such a setup.

Installing and setting up the L3 agent

OpenStack offers different models to operate a virtual router. The model that we discuss in this post is sometimes called a “legacy router”, and is realized by a router running on one of the controller hosts, which implies that the routing functionality is no longer available when this host goes down. In addition, Neutron offers more advanced models like a distributed virtual router (DVR), but this is beyond the scope of today's post.

To make the routing functionality available, we have to install or enable, respectively, two additional pieces of software:

  • The Routing API is provided by an extension which needs to be loaded by the Neutron server upon startup. To achieve this, the extension needs to be added to the service_plugins list in the Neutron configuration file neutron.conf
  • The routing functionality itself is provided by an agent, the L3 agent, which needs to be installed on the controller node

In addition to installing these two components, there are a few changes to the configuration we need to make. First, of course, the L3 agent comes with its own configuration file that we need to adapt. Specifically, there are two changes that we make for this lab. First, we set the interface driver to openvswitch, and second, we ask the L3 agent not to provide a route to the metadata proxy by setting enable_metadata_proxy to false, as we use the mechanism provided by the DHCP agent.

In addition, we change the configuration of the Horizon dashboard to make the L3 functionality available in the GUI as well (this is done by setting the flag horizon_enable_router in the configuration to “True”).

All this can again be done in our lab environment by running the scripts for Lab 8. In addition, we run the demo playbook which will set up a VLAN network with VLAN ID 100 and two instances attached to it (demo-instance-1 and demo-instance-2) and a flat, external network with one instance (demo-instance-3).

git clone
cd Lab8
vagrant up
ansible-playbook -i hosts.ini site.yaml
ansible-playbook -i hosts.ini demo.yaml

Setting up our first router

Before setting up our first router, let us inspect the network topology that we have created. The Horizon dashboard has a nice graphical representation of the network topology.


We see that we have two virtual Ethernet networks, carrying one IP network each. The network on the left – marked as external – is the flat network, with an IP subnet with CIDR The network on the right is the VLAN network, with IP range

Now let us create and set up the router. This happens in several steps. First, we create the router itself. This is done using the credentials of the demo user, so that the router will be part of the demo project.

vagrant ssh controller
source demo-openrc
openstack router create demo-router

At this point, the router exists as an object, and there is an entry for it in the Neutron database (table routers). However, the router is not yet connected to any network. Let us do this next.

It is important to understand that, similar to a physical router, a Neutron virtual router has two interfaces connected to two different networks, and, again as in the physical network world, the setup is not symmetric. Instead, there is one network which is considered external and one internal network. By default, the router will allow for traffic from the internal network to the external network, but will not allow any incoming connections from the external network into the internal networks, very similar to the cable modem that you might have at home to connect to this WordPress site.


Correspondingly, the ways in which the external network and the internal network are attached to the router differ. Let us start with the external network. The connection to the external network is called the external gateway of the router and can be assigned using the set command on the router.

openstack router set \
  --external-gateway flat-network \
  demo-router
When you run this command and inspect the database once more, you will see that the column gw_port_id has been populated. In addition, listing the ports will demonstrate that OpenStack has created a port which is attached to the router (this port is visible in the database but not via the CLI as the demo user, as the port is not owned by this user) and has received an IP address on the external network.

To complete the setup of the router, we now have to connect the router to an internal network. Note that this needs to be done by the administrator, so we first have to source the credentials of the admin user.

source admin-openrc
openstack router add subnet demo-router vlan-subnet

When we now log into the Horizon GUI as the demo user and ask Horizon to display the network topology, we get the following result.


We can reach the flat network (and the lab host) from the internal network, but not the other way around. You can verify this by logging into demo-instance-1 via the Horizon VNC console and trying to ping demo-instance-3.

Now let us try to understand how the router actually works. Of course, somewhere behind the scenes, Linux routing mechanisms and iptables are used. One could try to implement a router by manipulating the network stack on the controller node, but this would be difficult as the configuration for different routers might conflict. To avoid this, Neutron creates a dedicated network namespace for each router on the node on which the L3 agent is running.

The name of this namespace is qrouter-, followed by the ID of the virtual router (here “q” stands for “Quantum” which was the name of what is now known as Neutron some years ago). To analyze the network stack within this namespace, let us retrieve its ID and spawn a shell inside the namespace.

netns=$(ip netns list \
        | grep "qrouter" \
        | awk '{print $1}')
sudo ip netns exec $netns /bin/bash

Running ifconfig -a and route -n shows that, as expected, the router has two virtual interfaces (both created by OVS). One interface starting with “qg” is the external gateway, the second one starting with “qr” is connected to the internal network. There are two routes defined, corresponding to the two subnets to which the respective interfaces are assigned.

Let us now inspect the iptables configuration. Running iptables -S -t nat reveals that Neutron has added an SNAT (source network address translation) rule that applies to traffic coming from the internal interface. This rule will replace the source IP address of the outgoing traffic by the IP address of the router on the external network.
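Conceptually, the effect of this SNAT rule can be modelled with a few lines of Python (a toy model only – the real work is done by netfilter, and the addresses below are placeholders, not taken from our lab):

```python
def snat(packet: dict, internal_prefix: str, router_external_ip: str) -> dict:
    """Toy source NAT: rewrite the source address of packets coming
    from the internal network with the router's external address."""
    if packet["src"].startswith(internal_prefix):
        return {**packet, "src": router_external_ip}
    return packet

# Placeholder addresses, purely for illustration
pkt = {"src": "172.16.0.3", "dst": "192.168.1.10"}
out = snat(pkt, "172.16.0.", "192.168.1.2")
```

In reality, netfilter's connection tracking remembers each translation, so that reply traffic can be mapped back to the original source address.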

To understand how the router is attached to the virtual network infrastructure, leave the namespace again and display the bridge configuration using sudo ovs-vsctl show. This will show you that the two router interfaces are both attached to the integration bridge.


Let us now see how traffic from a VM on the internal network flows through the stack. Suppose an application inside the VM tries to reach the external network. As the default route inside the VM points to the router, the routing mechanism inside the VM directs the packet towards the qr-interface of the router. The packet leaves the VM through the tap interface (1). The packet enters the bridge via the access port and receives a local VLAN tag (2), then travels across the bridge to the port to which the qr-interface is attached. This port is an access port with the same local VLAN tag as the virtual machine, so the packet leaves the bridge as untagged traffic and enters the router (3).

Within the router, SNAT takes place (4) and the packet is forwarded to the qg-interface. This interface is attached to the integration bridge as an access port with local VLAN ID 2. The packet then travels to the physical bridge (5), where the VLAN tag is stripped off and the packet hits the physical network as part of the native network corresponding to the flat network.

As the IP source address is the address of the router on the external network, the response will be directed towards the qg-interface. It will enter the integration bridge coming from the physical bridge as untagged traffic, receive local VLAN ID 2 and end up at the qg-access port. The packet then flows back through the router, is leaving it again at the qr interface, appears with local VLAN tag 1 on the integration bridge and eventually reaches the VM.

There is one more detail that deserves to be mentioned. When you inspect the iptables rules in the mangle table of the router namespace, you will see some rules that add marks to incoming packets, which are later evaluated in the nat and filter tables. These marks are used to implement a feature called address scopes. Essentially, address scopes reflect routing domains in OpenStack, the idea being that two networks belonging to the same address scope are supposed to have compatible, non-overlapping IP address ranges so that no NATing is needed when crossing the boundary between these two networks, while a direct connection between two different address scopes should not be possible.

Floating IPs

So far, we have set up a router which performs a classical SNAT to allow traffic from the internal network to appear on the external network as if it came from the router. To be able to establish a connection from the external network into the internal network, however, we need more.

In a physical infrastructure, you would use DNAT (destination NAT) to achieve this. In OpenStack, this is realized via a floating IP. This is an IP address on the external network for which DNAT will be performed to pass traffic targeted to this IP address to a VM on the internal network.
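In essence, the floating IP mechanism maintains a mapping from floating IPs to fixed IPs and applies DNAT based on this mapping. Here is a toy model in Python (the addresses are invented for the illustration):

```python
# Invented example: a floating IP on the external network mapped to the
# fixed IP of an instance on the internal network
floating_ip_map = {"192.168.1.50": "172.16.0.3"}

def dnat(packet: dict) -> dict:
    """Toy destination NAT: rewrite the target of traffic addressed to a
    floating IP with the corresponding fixed IP of the instance."""
    target = floating_ip_map.get(packet["dst"])
    if target is not None:
        return {**packet, "dst": target}
    return packet

out = dnat({"src": "192.168.1.1", "dst": "192.168.1.50"})
```

Attaching a floating IP to a port, as we do below, amounts to adding an entry to this mapping (plus the corresponding iptables rules in the router namespace).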

To see how this works, let us first create a floating IP, store the ID of the floating IP that we create in a variable and display the details of the floating IP.

source demo-openrc
out=$(openstack floating ip create \
         -f shell \
         --subnet flat-subnet \
         flat-network)
floatingIP=$(eval $out ; echo $id)
openstack floating ip show $floatingIP

When you display the details of the floating IP, you will see that Neutron has assigned an IP from the external network (the flat network), more precisely from the network and subnet that we have specified during creation.

This floating IP is still fully “floating”, i.e. not yet attached to any actual instance. Let us now retrieve the port of the server demo-instance-1 and attach the floating IP to this port.

port=$(openstack port list \
         --server demo-instance-1 \
         -f value \
         | awk '{print $1}')
openstack floating ip set --port $port $floatingIP

When we now display the floating IP again, we see that the floating IP is now associated with the fixed IP address of the instance demo-instance-1.

Now leave the controller node again. Back on the lab host, you should now be able to ping the floating IP (using the IP on the external network, i.e. from the network) and to use it to SSH into the instance.

Let us now try to understand how the configuration of the router has changed. For that purpose, enter the namespace again as above and run ip addr. This will show you that the external gateway interface (the qg interface) now has two IP addresses on the external network – the IP address of the router and the floating IP. Thus, this interface will respond to ARP requests for the floating IP with its MAC address. When we now inspect the NAT tables again, we see that there are two new rules. First, there is an additional source NAT rule which replaces the source IP address with the floating IP for traffic coming from the VM. Second, there is now – as expected – a destination NAT rule. This rule applies to traffic directed to the floating IP and replaces the target address with the VM IP address, i.e. with the corresponding fixed IP on the internal network.

We can now understand how a ping from the lab host to the floating IP flows through the stack. On the lab host, the packet is routed to the vboxnet1 interface and shows up at enp0s9 on the controller node. From there, it travels through the physical bridge up to the integration bridge and into the router. There, the DNAT processing takes place, and the target IP address is replaced by that of the VM. The packet leaves the router at the internal qr-interface, travels across the integration bridge and eventually reaches the VM.

Direct access to the internal network

We have seen that in order to connect to our VMs using SSH, we first need to build a router to establish connectivity and assign a floating IP address. Things can go wrong, and if that operation fails for whatever reason or the machines are still not reachable, you might want to find a different way to get access to the instances. Of course there is the noVNC client built into Horizon, but it is more convenient to get a direct SSH connection without relying on the router. Here is one way in which this can be done.

Recall that on the physical bridge on the controller node, the internal network has the VLAN segmentation ID 100. Thus to access the VM (or any other port on the internal network), we need to tag our traffic with the VLAN tag 100 and direct it towards the bridge.

The easiest way to do this is to add another access port to the physical bridge, to assign an IP address to it which is part of the subnet on the internal network and to establish a route to the internal network from this device.

vagrant ssh controller
sudo ovs-vsctl add-port br-phys vlan100 tag=100 \
     -- set interface vlan100 type=internal 
sudo ip addr add dev vlan100
sudo ip link set vlan100 up

Now you should be able to ping any instance on the internal VLAN network and SSH into it as usual from the controller node.

Why does this work? The upshot of our discussion above is that the interaction of local VLAN tagging, global VLAN tagging and the integration bridge flow rules effectively attaches all virtual machines in our internal network to the physical network infrastructure via access ports with tag 100, so that they all communicate via VLAN 100. What we have done is to simply create another network device called vlan100 which is also connected to this VLAN, and which is therefore effectively on one Ethernet segment with our first two demo instances. We can thus assign an IP address to it and use it to reach every port on the virtual VLAN network from the controller node, no matter whether that port is located on the controller node or on a compute node.

There is much more we could say about routers in OpenStack, but we leave that topic for the time being and move on to the next post, in which we will discuss overlay networks using VXLAN.

OpenStack Neutron – running Neutron with a separate network node

So far, our OpenStack control plane setup was rather simple – we had a couple of compute nodes, and all other services were running on the same controller node. In practice, this not only creates a single point of failure, but also produces fairly high traffic on the network interfaces of the controller. In this post, we will move towards a more distributed setup with a dedicated network node.

OpenStack nodes and their roles

Before we plan our new topology, let us quickly try to understand what nodes we have used so far. Recall that each node has an Ansible hostname which is the name returned by the Ansible variable inventory_hostname and defined in the inventory file. In addition, each node has a DNS name, and during our installation procedure, we adapt the /etc/hosts file on each node so that DNS names and Ansible hostnames are identical. A host called, for instance, controller in the Ansible inventory will therefore be reachable under this name across the cluster.
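To illustrate, an inventory file hosts.ini matching this naming scheme could look like the sketch below (group names and IP addresses are invented for the example):

```ini
[controllers]
controller ansible_host=

[network_nodes]
network ansible_host=

[compute_nodes]
compute1 ansible_host=
compute2 ansible_host=
```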

The following table lists the types of nodes (defined by the services running on them) and the Ansible hostnames used so far.

Node type      | Description                                                                                                         | Hostname
api_node       | Node on which all APIs are exposed. This will typically be the controller node, but could also be a load balancer in a HA setup | controller
db_node        | Node on which the database is running                                                                               | controller
mq_node        | Node on which the RabbitMQ service is running                                                                       | controller
memcached_node | Node on which the memcached service is running                                                                      | controller
ntp_node       | Node on which the NTP service is running                                                                            | controller
network_node   | Node on which the DHCP agent and the L3 agent (and thus routers) are running                                        | controller
horizon_node   | Node on which Horizon is running                                                                                    | controller
compute_node   | Compute nodes                                                                                                       | compute*

Now we can start to split out some of the nodes and distribute the functionality across several machines. It is rather obvious how to do this for e.g. the database node or the RabbitMQ node – simply start MariaDB or the RabbitMQ service on a different node and update all URLs in the configuration accordingly. In this post, we will instead introduce a dedicated network node that will hold all our Neutron agents. Thus the new distribution of functionality to hosts will be as follows.

Node type      | Description                                                                                                         | Hostname
api_node       | Node on which all APIs are exposed. This will typically be the controller node, but could also be a load balancer in a HA setup | controller
db_node        | Node on which the database is running                                                                               | controller
mq_node        | Node on which the RabbitMQ service is running                                                                       | controller
memcached_node | Node on which the memcached service is running                                                                      | controller
ntp_node       | Node on which the NTP service is running                                                                            | controller
network_node   | Node on which the DHCP agent and the L3 agent (and thus routers) are running                                        | network
horizon_node   | Node on which Horizon is running                                                                                    | controller
compute_node   | Compute nodes                                                                                                       | compute*

Here is a diagram that summarizes our new distribution of components to the various VirtualBox instances.


In addition, we will make a second change to our network topology. So far, we have used a setup where all machines are directly connected on layer 2, i.e. are part of a common Ethernet network. This allowed us to use flat networks and VLAN networks to connect our different nodes. In reality, however, an OpenStack cluster might be operated on top of an IP fabric, so that layer 3 connectivity between all nodes is guaranteed, but layer 2 connectivity cannot be assumed. Also, broadcast and multicast traffic might be restricted – an example could be an existing bare-metal cloud environment on top of which we want to install OpenStack. To be prepared for this situation, we will change our setup to avoid direct layer 2 connectivity. Here is our new network topology implementing these changes.


This is a bit more complicated than what we used in the past, so let us stop for a moment and discuss the setup. First, each node (i.e. each VirtualBox instance) will still be directly connected to the network on our lab host by a VirtualBox NAT interface with IP, and – for the time being – we continue to use this interface to access our machines via SSH and to download packages and images (this is something which we will change in an upcoming post as well). Then, there is still a management network with IP range

The network that we called the provider network in our previous posts is now called the underlay network. This network is reserved for traffic between virtual machines (and OpenStack routers), realized as VXLAN virtual OpenStack networks. As we use this network for VXLAN traffic, the network interfaces connected to it are now numbered.

All compute nodes are connected to the underlay network and to the management network. The same is true for the network node, on which all Neutron agents (the DHCP agent, the L3 agent and the metadata agent) will be running. The controller node, however, is not connected to the underlay network any more.

But we need a bit more than this to allow our instances to connect to the outside world. In our previous posts, we built a flat network that was directly connected to the physical infrastructure to provide access to the public network. In our new setup, where direct layer 2 connectivity between the machines can no longer be assumed, we realize this differently. On the network node and on each compute node, we bring up an additional OVS bridge called the external bridge br-ext. This bridge will essentially act as an additional virtual Ethernet device that is used to realize a flat network. All external bridges will be connected with each other by a VXLAN that is managed not by OpenStack, but by our Ansible scripts. For this VXLAN, we use a segmentation ID which is different from the segmentation IDs of the tenant networks (as all VXLAN connections use the underlay network IP addresses and the standard port 4789, this isolation is required).


Essentially, the network node acts as a central virtual switch connecting all compute nodes with the network node and with each other. For Neutron, the external bridges will look like physical interfaces forming a physical network, and we can use this network as the supporting provider network for a Neutron flat network to which we can attach routers and virtual machines.

This setup also avoids the use of multicast traffic, as we connect every bridge to the bridge running on the network node directly. Note that we also need to adjust the MTU of the bridge on the network node to account for the overhead of the VXLAN header.

On the network node, the external bridge will be numbered, i.e. it will receive an IP address. It can therefore, as in the previously created flat networks, be used as a gateway. To establish connectivity from our instances and routers to the outside world, the network node will be configured as a router connecting the external bridge to the device enp0s3, so that traffic can leave the flat network and reach the lab host (and from there, the public internet).

MTU settings in Neutron

This is a good point in time to briefly discuss how Neutron handles MTUs. In Neutron, the MTU is an attribute of each virtual network. When a network is created, the ML2 plugin determines the MTU of the network and stores it in the Neutron database (in the networks table).

When a network is created, the ML2 plugin asks the type driver to calculate the MTU by calling its method get_mtu. For a VXLAN network, the VXLAN type driver first determines the MTU of the underlay network as the minimum of two values:

  1. the globally defined MTU set by the administrator using the parameter global_physnet_mtu in the neutron.conf configuration file (which defaults to 1500)
  2. the path MTU, defined by the parameter path_mtu in the ML2 configuration

Then, 50 bytes are subtracted from this value to account for the VXLAN overhead. Here 20 bytes are for the IPv4 header (so Neutron assumes that no options are used) and 30 bytes are for the remaining VXLAN overhead, assuming no inner VLAN tagging (you might want to consult this post to understand the math behind this). Therefore, with the default value of 1500 for the global MTU and no path MTU set, this results in 1450 bytes.
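The calculation can be reproduced with a bit of shell arithmetic; the split of the 50 bytes follows the reasoning above.

```shell
# MTU that Neutron derives for a VXLAN network with default settings
global_physnet_mtu=1500   # default in neutron.conf, path_mtu not set
ipv4_header=20            # outer IPv4 header, assuming no IP options
vxlan_overhead=30         # VXLAN header (8) + UDP header (8) + inner Ethernet header (14)
vxlan_mtu=$(( global_physnet_mtu - ipv4_header - vxlan_overhead ))
echo "$vxlan_mtu"
```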

For flat networks, the logic is a bit different. Here, the type driver uses the minimum of the globally defined global_physnet_mtu and a network specific MTU which can be defined for each physical network by setting the parameter physical_network_mtus in the ML2 configuration. Thus, there are in total three parameters that determine the MTU of a VXLAN or flat network.

  1. The globally defined global_physnet_mtu in the Neutron configuration
  2. The per-network MTU defined in physical_network_mtus in the ML2 configuration which can be used to overwrite the global MTU for a specific flat network
  3. The path MTU in the ML2 configuration which can be used to overwrite the global MTU of the underlay network for VXLAN networks

What does this imply in our case? In the standard configuration, the VirtualBox network interfaces have an MTU of 1500, which is the standard Ethernet MTU. Thus we set the global MTU to 1500 and leave the path MTU undefined. With these settings, Neutron will correctly derive the MTU 1450 for interfaces attached to a Neutron managed VXLAN network.

Our own VXLAN network joining the external bridges is used as supporting network for a Neutron flat network. To tell Neutron that the MTU of this network is only 1450 (the 1500 of the underlying VirtualBox network minus 50 bytes for the encapsulation overhead), we can set the MTU for this network explicitly in the physical_network_mtus configuration item.
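In the configuration files, this could look as follows – a sketch, where the physical network name physnet matches the name used elsewhere in this series and the section placement follows the standard layout of neutron.conf and ml2_conf.ini.

```
# neutron.conf
[DEFAULT]
global_physnet_mtu = 1500

# ml2_conf.ini
[ml2]
# tell Neutron that the flat network on top of our own VXLAN
# only supports an MTU of 1450
physical_network_mtus = physnet:1450
```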

Implementation and tests

Let us now take a closer look at how our setup needs to change to make this work. First, obviously, we need to adapt our Vagrantfile to bring up an additional node and to reflect the changed network configuration.

Next, we need to bring up the external bridge br-ext on the network node and on the compute nodes. On each compute node, we create a VXLAN port pointing to the network node, and on the network node, we create a corresponding VXLAN port for each compute node. All VXLAN ports are assigned the VXLAN ID 100 (using the options:key setting of OVS). We then add a flow table entry to the bridge which defines NORMAL processing for all packets, i.e. forwarding like an ordinary learning switch.
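On a compute node, the commands issued by our Ansible scripts could be sketched as follows (run as root; the remote IP is a placeholder for the underlay address of the network node, so this is an illustration rather than the literal playbook content).

```shell
# Sketch: external bridge with a VXLAN port pointing to the network node
ovs-vsctl add-br br-ext
ovs-vsctl add-port br-ext vxlan0 \
    -- set interface vxlan0 type=vxlan \
       options:remote_ip=<underlay IP of the network node> \
       options:key=100
# replace all existing flow rules by a single NORMAL rule
ovs-ofctl del-flows br-ext
ovs-ofctl add-flow br-ext "priority=0,actions=NORMAL"
```

On the network node, the same pattern is repeated with one VXLAN port per compute node.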

We also need to make sure that the external bridge interface on the network node is up and that it has an IP address assigned, which will automatically create a route on the network node as well.

The next step is to configure the network node as a router. After having set the famous flag in /proc/sys/net/ipv4/ip_forward to 1 to enable forwarding, we need to set up the necessary rules in iptables. Here is a sample iptables-save file that demonstrates this setup.

# Generated by iptables-save v1.6.1 on Mon Dec 16 08:50:33 2019
# Set policies for raw table to ACCEPT
*raw
:PREROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
COMMIT
# Set policies in mangle table to ACCEPT
*mangle
:PREROUTING ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
COMMIT
# Set policies for NAT table to ACCEPT and
# add SNAT rule for traffic going out via the public interface
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -o enp0s3 -j MASQUERADE
COMMIT
# Set policy for forwarded traffic in filter table to DROP, but allow
# forwarding for traffic coming from br-ext and established connections.
# Also block incoming traffic on the public interface except SSH traffic
# and reply traffic of established connections.
# Do not set the INPUT policy to DROP, as this would also drop all traffic
# on the management and underlay networks
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A FORWARD -i br-ext -j ACCEPT
-A FORWARD -i enp0s3 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i enp0s3 -p tcp --destination-port 22 -j ACCEPT
-A INPUT -i enp0s3 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i enp0s3 -j DROP
-A INPUT -i br-ext -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i br-ext -p icmp -j ACCEPT
-A INPUT -i br-ext -j DROP
COMMIT

Let us quickly discuss these rules. In the NAT table, we set up a rule to apply IP masquerading to all traffic that goes out on the public interface. Thus, the source IP address will be replaced by the IP address of enp0s3 so that the reply traffic is correctly routed. In the filter table, we set the default policy in the FORWARD chain to DROP. We then explicitly allow forwarding for all traffic coming from br-ext and for all traffic coming from enp0s3 which belongs to an already established connection. This is the firewall part of the rules – all traffic not matching one of these rules cannot reach the OpenStack networks. Finally, we need some rules in the INPUT chain to protect the network node itself from unwanted traffic from the external bridge (to avoid attacks from an instance) and from the public interface, and only allow reply traffic and SSH connections. Note that we make an exception for ICMP traffic so that we can ping the network node from the flat network – this is helpful to avoid confusion during debugging.

In addition, we use an OVS patch port to connect our external bridge to the bridge br-phys, as we would for a physical device. We can then proceed to set up OpenStack as before, with the only difference that we install the Neutron agents on our network node, not on the controller node. If you want to try this out, run

git clone
cd openstack-labs/Lab10
vagrant up
ansible-playbook -i hosts.ini site.yaml
ansible-playbook -i hosts.ini demo.yaml

The playbook demo.yaml will again create two networks, one VXLAN network and one flat network, and will start one instance (demo-instance-3) on the flat network and two instances on the VXLAN network. It will also install a router connecting these two networks, and assign a floating IP address to the first instance on the VXLAN network.

As before, we can still reach Horizon from the lab host via the management network. If we navigate to the network topology page, we see the same pattern that we have already seen in the previous post.


Let us now try out a few things. First, let us try to reach the instance demo-instance-3 from the network node. To do this, log into the network node and ssh into the machine from there (replacing with the IP address of demo-instance-3 on the flat network, which might be different in your case).

vagrant ssh network
ssh -i demo-key cirros@

Note that we can no longer reach the machine from the lab host, as we are not using the VirtualBox network vboxnet1 any more. We can, however, SSH directly from the lab host (!) into this machine using the network node as a jump host.

ssh -i ~/.os_credentials/demo-key \
    -o StrictHostKeyChecking=no \
    -o "ProxyCommand ssh -i .vagrant/machines/network/virtualbox/private_key \
      -o StrictHostKeyChecking=no \
      -o UserKnownHostsFile=/dev/null \
      -p 2200 -q \
      -W \
      vagrant@" \

What about the instances on the VXLAN network? Our demo playbook has created a floating IP for one of the instances which you can either take from the output of the playbook or from the Horizon GUI. In my case, this is, and we can therefore reach this machine similarly.

ssh -i ~/.os_credentials/demo-key \
    -o StrictHostKeyChecking=no \
    -o "ProxyCommand ssh -i .vagrant/machines/network/virtualbox/private_key \
      -o StrictHostKeyChecking=no \
      -o UserKnownHostsFile=/dev/null \
      -p 2200 -q \
      -W \
      vagrant@" \

Now let us try to reach a few target IP addresses from this instance. First, you should be able to ping the machine on the flat network. This is not surprising, as we have created a virtual router connecting the VXLAN network to the flat network. However, thanks to the routing functionality on the network node, we should now also be able to reach our lab host and other machines on the network of the lab host. In my case, for instance, the lab host is connected to a cable modem router, and pinging the IP address of this router from demo-instance-1 works just fine. You should even be able to SSH into the lab host from this instance!

It is interesting to reflect on the path that this ping request takes through the network.

  • First, the request is routed to the default gateway network interface eth0 on the instance
  • From there, the packet travels all the way down via the integration bridge to the tunnel bridge on the compute node, via VXLAN to the tunnel bridge on the network node, to the integration bridge on the network node and to the internal port of the router
  • In the virtual OpenStack router, the packet is forwarded to the gateway interface and reaches the integration bridge again
  • As we are now on the flat network, the packet travels from the integration bridge to the physical bridge br-phys and from there to our external bridge br-ext. In the OpenStack router, a first NAT’ing takes place which replaces the source IP address of the packet by
  • The packet is received by the network node, and our iptables rules become effective. Thus, a second NAT’ing happens, the IP source address is set to that of enp0s3 and the packet is forwarded to enp0s3
  • This device is a VirtualBox NAT device. Therefore VirtualBox now opens a connection to the target, replaces the source IP address with that of the outgoing lab host interface via which this target can be reached and sends the packet to target host

If we log into the instance demo-instance-3 which is directly attached to the flat network, we are of course also able to reach our lab host and other machines to which it is directly connected, essentially via the same mechanism with the only difference that the first three steps are not necessary.

There is, however, still one issue: DNS resolution inside the instances does not work. To fix this, we will have to set up our DHCP agent, and this agent and how it works will be the topic of our next post.

OpenStack Neutron – building VXLAN overlay networks with OVS

In this post, we will learn how to set up VXLAN overlay networks as tenant networks in Neutron and explore the resulting configuration on our compute nodes.

Tenant networks

The networks that we have used so far have been provider networks – they have been created by an administrator, specifying the link to the physical network resource (physical network, VLAN ID) manually. As already indicated in our introduction to Neutron, OpenStack also allows us to create tenant networks, i.e. networks that a tenant can create as a non-privileged user, using either the CLI or the GUI.

To make this work, Neutron needs to understand which resources, i.e. VLAN IDs in the case of VLANs or VXLAN IDs in the case of VXLANs, it can use when a tenant requests the creation of a network without getting in conflict with the physical network infrastructure. To achieve this, the configuration of the ML2 plugin allows us to specify ranges for VLANs and VXLANs which act as pools from which Neutron can allocate resources to tenant networks. In the case of VLANs, these ranges can be specified in the item network_vlan_ranges that we have already seen. Instead of just specifying the name of a physical network, we could use an expression like

network_vlan_ranges = physnet:100:200

to tell Neutron that the VLAN IDs between 100 and 200 can be freely allocated for tenant networks. A similar range can be specified for VXLANs, where the configuration item is called vni_ranges. In addition, the ML2 configuration contains the item tenant_network_types, which is an ordered list of the network types offered as tenant networks.

When a user requests the creation of a tenant network, Neutron will go through this list in the specified order. For each network type, it will try to find a free segmentation ID. The first segmentation ID found determines the type of the network. If, for instance, all configured VLAN ids are already in use, but there is still a free VXLAN id, the tenant network will be created as a VXLAN network.
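This selection logic can be illustrated with a small shell sketch; the variable names and the free segmentation IDs below are purely illustrative, not actual Neutron code or configuration.

```shell
# Walk tenant_network_types in order and take the first type that still
# has a free segmentation ID (illustrative data: all VLAN IDs are in use,
# two VXLAN IDs are still free)
tenant_network_types="vlan vxlan"
free_vlan_ids=""
free_vxlan_ids="1000 1001"
chosen_type=""; chosen_id=""
for t in $tenant_network_types; do
  ids=$(eval echo "\$free_${t}_ids")
  if [ -n "$ids" ]; then
    chosen_type=$t
    chosen_id=${ids%% *}   # first free ID determines the network
    break
  fi
done
echo "$chosen_type $chosen_id"
```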

Setting up VXLAN networks in Neutron

After this theoretical discussion, let us now configure our playground to offer VXLAN networks as tenant networks. Here are the configuration changes that we need to enable VXLAN tenant networks.

  • add vxlan to the list type_drivers in the ML2 plugin configuration file ml2_conf.ini so that VXLAN networks are enabled
  • add vxlan to tenant_network_types in the same file
  • navigate to the ml2_type_vxlan section and edit vni_ranges to specify the VXLAN IDs available for tenant networks
  • set local_ip in the configuration of the OVS agent, which is the IP address on which the agent will listen for VXLAN connections. Here we use the IP address of the management network (something you would probably not do in a real-world situation, as it is preferable to use separate network interfaces for management and VM traffic – but in our setup, the interface connected to the provider network is unnumbered)
  • in the same file, add vxlan to the key tunnel_types
  • in the Horizon configuration, add vxlan to the item horizon_supported_network_types
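Put together, the changed configuration items could look like this. This is a sketch only – the VNI range is an example value, and the exact file names and the local_ip value depend on your distribution and node.

```
# ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan

[ml2_type_vxlan]
vni_ranges = 100:200

# openvswitch_agent.ini
[ovs]
local_ip = <management IP of this node>

[agent]
tunnel_types = vxlan
```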

Instead of doing all this manually, you can of course once more use one of the Labs that I have created for you. To do this, run

git clone
cd openstack-labs/Lab9
vagrant up
ansible-playbook -i hosts.ini site.yaml
ansible-playbook -i hosts.ini demo_base.yaml

Note that the second playbook that we run will create a demo project and a demo user as in our previous projects as well as the m1.nano flavor. This playbook will also modify the default security groups to allow incoming ICMP and SSH traffic.

Let us now inspect the state of the compute nodes by logging into the compute node and executing the usual commands to check our network configuration.

vagrant ssh compute1
ifconfig -a
sudo ovs-vsctl show

The first major difference that we see compared to the previous setup with a pure VLAN based network separation is that an additional OVS bridge has been created by Neutron called the tunnel bridge br-tun. This bridge is connected to the integration bridge by a virtual patch cable. Attached to this bridge, there are two VXLAN peer-to-peer ports that connect the tunnel bridge on the compute node compute1 to the tunnel bridges on the second compute node and the controller node.


Let us now inspect the flows defined for the tunnel bridge. At this point in time, with no virtual machines created, the rules are rather simple – all packets are currently dropped.


Let us now verify that, as promised, a non-admin user can use the Horizon GUI to create virtual networks. So let us log into the Horizon GUI from the lab host as the demo user (use the demo password from ~/.os_credentials/credentials.yaml) and navigate to the “Networks” page. Now click on “Create Network” at the top right corner of the page. Fill in all three tabs to create a network with the following attributes.

  • Name: vxlan-network
  • Subnet name: vxlan-subnet
  • Network address:
  • Gateway IP:
  • Allocation Pools:, (make sure that there is no space after the comma separating the start and end address of the allocation pool)

It is interesting to note that the GUI does not ask us to specify a network type. This is in line with our discussion above on the mechanism that Neutron uses to assign tenant networks – we have only specified one tenant network type, namely VXLAN, and even if we had specified more than one, Neutron would pick the next available combination of segmentation ID and type for us automatically.

If everything worked, you should now be able to navigate to the “Network topology” page and see the following, very simple network layout.


Now use the button “Launch Instance” to create two instances demo-instance-1 and demo-instance-2 attached to the VXLAN network that we have just created. When we now navigate back to the network topology overview, the image should look similar to the one below.


Using the noVNC console, you should now be able to log into your instances and ping the first instance from the second one and the other way around. When we now re-investigate the network setup on the compute node, we see that a few things have changed.

First, as you would probably guess from what we have learned in the previous post, an additional tap port has been created which connects the integration bridge to the virtual machine instance on the compute node. This port is an access port with VLAN tag 1, which is again the local VLAN ID.


The second change that we find is that additional rules have been created on the tunnel bridge. Ethernet unicast traffic coming from the integration bridge is processed by table 20. In this table, we have one rule for each peer which directs traffic tagged with VLAN ID 1 and destined for this peer to the respective VXLAN port. Note that the MAC destination address used here is the address of the tap port of the respective other VM. Outgoing Ethernet multicast traffic with VLAN ID 1 is copied to all VXLAN ports (“flooding rule”). Traffic with unknown VLAN IDs is dropped.

For ingress traffic, the VXLAN ID is mapped to VLAN IDs in table 4 and the packets are forwarded to table 10. In this table, we find a learning rule that will make sure that MAC addresses from which traffic is received are considered as targets in table 20. Typically, the packet triggering this learning rule is an ARP reply or an ARP request, so that the table is populated automatically when two machines establish an IP based communication. Then the packet is forwarded to the integration bridge.


At this point, we have two instances which can talk to each other, but there is still no way to connect these instances to the outside world. To do this, let us now add an external network and a router.

The external network will be a flat network, i.e. a provider network, and therefore needs to be added as admin user. We do this using the CLI on the controller node.

vagrant ssh controller
source admin-openrc
openstack network create \
  --share \
  --external \
  --provider-network-type flat \
  --provider-physical-network physnet \
  flat-network
openstack subnet create \
  --dhcp  \
  --subnet-range \
  --network flat-network \
  --allocation-pool start=,end= \
  flat-subnet

Back in the Horizon GUI (where we are logged in as the user demo), let us now create a router demo-router with the flat network that we just created as the external network. When you inspect the newly created router in the GUI, you will find that Neutron has assigned one of the IP addresses on the flat network to the external interface of the router. On the “Interfaces” tab, we can now create an internal interface connected to the VXLAN network. When we verify our work so far in the network topology overview, the displayed image should look similar to the one below.


Finally, to be able to reach our instances from the flat network (and from the lab host), we need to assign a floating IP. We will create it using the CLI as the demo user.

vagrant ssh controller
source demo-openrc
openstack floating ip create \
  --subnet flat-subnet \
  flat-network

Now switch back to the GUI and navigate to the compute instance to which you want to attach the floating IP, say demo-instance-1. From the instance overview page, select “Associate floating IP”, pick the IP address of the floating IP that we have just created and complete the association. At this point, you should be able to ping your instance and SSH into it from the lab host.

Concluding remarks

Before closing this post, let us quickly discuss several aspects of VXLAN networks in OpenStack that we have not yet touched upon.

First, it is worth mentioning that the MTU settings turn out to be an issue that comes up frequently when working with VXLANs. Recall that VXLAN technology encapsulates Ethernet frames in UDP packets. This implies a certain overhead – to a given Ethernet frame, we need to add a VXLAN header, a UDP header, an IP header and another Ethernet header. This overhead increases the size of a packet compared to the size of the payload.

Now, in an Ethernet network, every interface has an MTU (maximum transmission unit) which defines the maximum payload size of a frame that can be processed by this interface without fragmentation. A typical MTU for Ethernet is 1500 bytes, which (adding 14 bytes for the Ethernet header and 4 bytes for the checksum) corresponds to an Ethernet frame of 1518 bytes. However, if the MTU of the physical device on the host used to transmit the VXLAN packets is 1500 bytes, the MTU available to the virtual device is smaller, as the packet transmitted by the physical device is composed of the packet transmitted by the virtual device plus the encapsulation overhead.
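In numbers (standard Ethernet without VLAN tags, following the accounting used earlier in this series):

```shell
inner_mtu=1500                   # standard Ethernet MTU of the virtual NIC
frame=$(( inner_mtu + 14 + 4 ))  # full frame: MTU + Ethernet header + checksum
# encapsulation overhead that counts against the MTU of the physical device:
overhead=$(( 20 + 8 + 8 + 14 ))  # outer IPv4 + UDP + VXLAN + inner Ethernet header
echo "frame: $frame bytes, VXLAN overhead: $overhead bytes"
```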

As discussed in RFC 4459, there are several possible solutions for this issue which essentially boil down to two alternatives. First, we could of course simply allow fragmentation so that packets exceeding the MTU are fragmented, either by the encapsulator (i.e. fragment the outer packet) or by the virtual interface (i.e. fragment the inner packet). Second, we could adjust the MTU of the underlying network infrastructure, for instance by using jumbo frames with a size of 9000 bytes. The long and interesting discussion of the pros and cons of both approaches in the RFC comes to the conclusion that no clean and easy solution for this problem exists. The approach that OpenStack takes by default is to reduce the MTU of the virtual network interfaces (see the comments of the parameter global_physnet_mtu in the Neutron configuration file). In addition, the ML2 plugin can also be configured with specific MTUs for physical devices and a path MTU, see this page for a short discussion of the options.

The second challenge that the usage of VXLAN can create is the overhead generated by broadcast traffic. As OVS does not support the use of VXLAN in combination with multicast IP groups, Neutron needs to handle broadcast and multicast traffic differently. We have already seen that Neutron will install OpenFlow rules on the tunnel bridge (if you want to know all the nitty-gritty details, the method tunnel_sync in the OVS agent is a good starting point), which implies that all broadcast traffic like ARP requests goes to all nodes, even those on which no VMs in the same virtual network are hosted (and, as remarked above, the reply typically creates a learned OpenFlow rule used for further communication).

To avoid the unnecessary overhead of this type of broadcast traffic, the L2 population driver was introduced. This driver uses a proxy ARP mechanism to intercept ARP requests coming from the virtual machines on the bridge. The ARP requests are then answered locally, using an OpenFlow rule, and the request never leaves the local machine. As therefore the ARP reply can no longer be used to learn the MAC addresses of the peers, forwarding rules are created directly by the agent to send traffic to the correct VXLAN endpoint (see here for an entry point into the code).

OpenStack Neutron – deep dive into flat and VLAN networks

Having installed Neutron in my last post, we will now analyze flat networks and VLAN networks in detail and see how Neutron actually realizes virtual Ethernet networks. This will also provide the basic understanding that we need for more complex network types in future posts.


To follow this post, I recommend repeating the setup from the previous post, so that we have two virtual machines running which are connected by a flat virtual network. Instead of going through the setup again manually, you can also use the Ansible scripts for Lab5 and combine them with the demo playbook from Lab6.

git clone
cd openstack-labs/Lab5
vagrant up
ansible-playbook -i hosts.ini site.yaml
ansible-playbook -i hosts.ini ../Lab6/demo.yaml

This will install a demo project and a demo user, import an SSH key pair, create a flavor and a flat network and bring up two instances connected to this network, one on each compute node.

Analyzing the virtual devices

Once all instances are running, let us SSH into the first compute node and list all network interfaces present on the node.

vagrant ssh compute1
ifconfig -a

The output is long and a bit daunting. Here is the output from my setup, where I have marked the most relevant sections in red.


The first interface that we see at the top is the integration bridge br-int which is created automatically by Neutron (in fact, by the Neutron OVS agent). The second bridge is the bridge that we have created during the installation process and that is used to connect the integration bridge to the physical network infrastructure – in our case to the interface enp0s9 which we use for VM traffic. The name of the physical bridge is known to Neutron from our configuration file, more precisely from the mapping of logical network names (physnet in our case) to bridge devices.

The full output also contains two devices (virbr0 and virbr0-nic) that are created by the libvirt daemon but not used.

We also see a tap device, tapd5fc1881-09 in our case. This tap device is realizing the port of our demo instance. To see this, source the credentials of the demo user and run openstack port list to see all ports. You will see two ports, one corresponding to each instance. The second part of the name of the tap device matches the first part of the UUID of the corresponding port (and we can use ethtool -i to get the driver managing this interface and see that it is really a tap device).
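The naming convention can be checked with a bit of shell string manipulation; the tap device name below is the one from my setup, yours will differ.

```shell
# The tap device name is "tap" followed by the beginning of the port UUID
tap=tapd5fc1881-09
port_prefix=${tap#tap}   # strip the "tap" prefix
echo "$port_prefix"      # compare this against the output of "openstack port list"
```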

The virtual machine is listening on the tap device and using it to provide the virtual NIC to its guest. To verify that QEMU is really listening on this tap device, you can use the following commands (run this and all following commands as root).

# Figure out the PID of QEMU
pid=$(ps --no-headers \
      -C qemu-system-x86_64 \
      | awk '{ print $1}')
# Search file descriptors in /proc/*/fdinfo 
grep "tap" /proc/$pid/fdinfo/*

This should show you that one of the file descriptors is connected to the tap device. Let us now see how this tap device is attached to the integration bridge by running ovs-vsctl show. The output should look similar to the following sample output.

    Manager "ptcp:6640:"
        is_connected: true
    Bridge br-phys
        Controller "tcp:"
            is_connected: true
        fail_mode: secure
        Port phy-br-phys
            Interface phy-br-phys
                type: patch
                options: {peer=int-br-phys}
        Port "enp0s9"
            Interface "enp0s9"
        Port br-phys
            Interface br-phys
                type: internal
    Bridge br-int
        Controller "tcp:"
            is_connected: true
        fail_mode: secure
        Port "tapd5fc1881-09"
            tag: 1
            Interface "tapd5fc1881-09"
        Port int-br-phys
            Interface int-br-phys
                type: patch
                options: {peer=phy-br-phys}
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.11.0"

Here we see that both OVS bridges are connected to a controller listening on port 6633, which is actually the Neutron OVS agent (the manager in the second line is the OVSDB server). The integration bridge has three ports. First, there is a port connected to the tap device, which is an access port with VLAN tag 1. This tagging is used to separate traffic on the integration bridge belonging to two different virtual networks. The VLAN ID here is called the local VLAN ID and is only valid per node. Then, there is a patch port connecting the integration bridge to the physical bridge, and there is the usual internal port.

The physical bridge also has three ports – the other side of the patch port connecting it to the integration bridge, the internal port and finally the physical network interface enp0s9. Thus the following picture emerges.


So we get a first idea of how traffic flows. When the guest sends a packet to the virtual interface in the VM, it shows up on the tap device and goes to the integration bridge. It is then forwarded to the physical bridge and from there to the physical interface. The packet travels across the physical network connecting the two compute nodes and there again hits the physical bridge, travels along the virtual patch cable to the integration bridge and finally arrives at the tap interface.

At this point, it is important that the physical network interface enp0s9 is in promiscuous mode. In fact, it needs to pick up traffic directed to the MAC address of the virtual instance, not to its own MAC address. Effectively, this interface is not visible itself and is only part of a virtual Ethernet cable connecting the two physical bridges.

OpenFlow rules on the bridges

We now have a rough understanding of the flow, but there is still a bit of a twist – the VLAN tagging. Recall that the port to which the tap interface is connected is an access port, so traffic arriving there will receive a VLAN tag. If you run tcpdump on the physical interface, however, you will see that the traffic is untagged. So at some point, the VLAN tag is stripped off.

To figure out where this happens, we need to inspect the OpenFlow rules on the bridges. To simplify this process, we will first remove the security groups (i.e. disable firewall rules) and turn off port security for the attached port to get rid of the rules realizing this. For simplicity, we do this for all ports (needless to say that this is not a good idea in a production environment).

source /home/vagrant/admin-openrc
ports=$(openstack port list \
       | grep "ACTIVE" \
       | awk '{print $2}')
for port in $ports; do 
  openstack port set --no-security-group $port
  openstack port set --disable-port-security $port
done

Now let us dump the flows on the integration bridge using ovs-ofctl dump-flows br-int. In the following image, I have marked those rules that are relevant for traffic coming from the instance.


The first rule drops all traffic for which the VLAN TCI, masked with 0x1fff (i.e. the last 13 bits) is equal to 0x0fff, i.e. for which the VLAN ID is the reserved value 0xfff. These packets are assumed to be irregular and are dropped. The second rule directs all traffic coming from the tap device, i.e. from the virtual machine, to table 60.
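The masking is plain bit arithmetic which we can reproduce in the shell. Here we take a sample TCI with priority bits set and VLAN ID 0xfff; masking with 0x1fff removes the three priority bits, so the rule matches and the packet would be dropped.

```shell
# sample VLAN TCI: priority 7, CFI 0, VLAN ID 0xfff
tci=$(( 0xefff ))
# apply the mask from the OpenFlow rule (keeps the CFI bit and the 12-bit VLAN ID)
masked=$(printf "0x%04x" $(( tci & 0x1fff )))
echo "$masked"    # -> 0x0fff
```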

In table 60, the traffic coming from the tap device, i.e. egress traffic, is marked by loading the registers 5 and 6, and resubmitted to table 73, where it is again resubmitted to table 94. In table 94, the packet is handed over to the ordinary switch processing using the NORMAL target.

When we dump the rules on the physical bridge br-phys, the result is much shorter and displayed in the lower part of the image above. Here, the first rule will pick up the traffic, strip off the VLAN tag (as expected) and hand it over to normal processing, so that the untagged packet is forwarded to the physical network.

Let us now turn to the analysis of ingress traffic. If a packet arrives at br-phys, it is simply forwarded to br-int. Here, it is picked up by the second rule (unless it has a reserved VLAN ID) which adds a VLAN tag with ID 1 and resubmits to table 60. In this table, NORMAL processing is applied and the packet is forwarded to all ports. As the port connected to the tap device is an access port for VLAN 1, the packet is accepted by this port, the VLAN tag is stripped off again and the packet appears in the tap device and therefore in the virtual interface in the instance.


All this is a bit confusing, but becomes clearer when we study the meaning of the various tables in the Neutron source code. Here is an extract of the relevant tables for the integration bridge, taken from the constants defined in the code.

Table       Name
0           Local switching table
23          Canary table
24          ARP spoofing table
25          MAC spoofing table
60          Transient table
71, 72      Firewall for egress traffic
73, 81, 82  Firewall for accepted and ingress traffic
91          Accepted egress traffic
92          Accepted ingress traffic
93          Dropped traffic
94          Normal processing

Let us go through the meaning of some of these tables. The canary table (23) is simply used to verify that OVS is up and running (whence the name). The MAC spoofing and ARP spoofing tables (24, 25) are not populated in our example as we have disabled the port protection feature. Similarly, the firewall tables (71, 72, 73, 81, 82) only contain a minimal setup. Table 91 (accepted egress traffic) simply routes to table 94 (normal processing), tables 92 and 93 are not used and table 94 simply hands over the packets to normal processing.

In our setup, the local switching table (table 0) and the transient table (table 60) are actually the most relevant tables. Together, these two tables realize the local VLANs on the compute node. We will see later that on each node, a local VLAN is built for each global virtual network. The method provision_local_vlan installs a rule into the local switching table for each local VLAN which adds the corresponding VLAN ID to ingress traffic coming from the corresponding global virtual network and then resubmits to the transient table.

Here is the corresponding table for the physical bridge.

Table       Name
0           Local switching table
2           Local VLAN translation

In our setup, only the local switching table is used which simply strips off the local VLAN tags for egress traffic.

You might ask yourself how we can reach the instances from the compute node. The answer is that a ping (or an SSH connection) to the instance running on the compute node actually travels via the default gateway, as there is no direct route to this network on the compute node. In our setup, the gateway is on the enp0s3 device, which is the NAT network provided by VirtualBox. From there, the connection travels via the lab host, where we have configured the virtual network device vboxnet1 as a gateway for this network, so that the traffic enters the virtual network again via this gateway and eventually reaches enp0s9 from there.

We could now turn on port protection and security groups again and study the resulting rules, but this would go far beyond the scope of this post (and far beyond my understanding of OpenFlow rules). If you want to get into this, I recommend this summary of the firewall rules. Instead, we move on to a more complex setup using VLANs to separate virtual networks.

Adding a VLAN network

Let us now adjust our configuration so that we are able to provision a VLAN based virtual network. To do this, there are two configuration items that we have to change and that both appear in the configuration of the ML2 plugin.

The first item is type_drivers. Here, we need to add vlan as an additional value so that the VLAN type driver is loaded.

When starting up, this plugin loads the second parameter that we need to change – network_vlan_ranges. Here, we need to specify a list of physical network labels that can be used for VLAN based networks. In our case, we set this to physnet to use our only physical network that is connected via the br-phys bridge.
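With both changes applied, the relevant sections of the ML2 plugin configuration might look like this (a sketch, not a complete file; if you wanted Neutron to allocate VLAN IDs for tenant networks automatically, you would append a range, e.g. physnet:100:200):

```
[ml2]
type_drivers = flat,local,vlan

[ml2_type_vlan]
network_vlan_ranges = physnet
```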

You can of course make these changes yourself (do not forget to restart the Neutron server) or you can use the Ansible scripts of lab 7. The demo script that is part of this lab will also create a virtual network based on VLAN ID 100 and attach two instances to it.

git clone
cd openstack-labs/Lab7
vagrant up
ansible-playbook -i hosts.ini site.yaml
ansible-playbook -i hosts.ini demo.yaml

Once the instances are up, log again into the compute node and, as above, turn off port security for all ports. We can now go through the exercise above again and see what has changed.

First, ifconfig -a shows that the basic setup is the same as before. We still have our integration bridge, connected to the tap device and to the physical bridge. Again, the port to which the tap device is attached is an access port, tagged with the VLAN ID 1. This is the local VLAN corresponding to our virtual network.

When we analyze the OpenFlow rules in the two bridges, however, a difference to our flat network is visible. Let us start again with egress traffic.

In the integration bridge, the flow is the same as before. As the port to which the VM is attached is an access port, traffic originating from the VM is tagged with VLAN ID 1, processed by the various tables and eventually forwarded via the virtual patch cable to br-phys.

Here, however, the handling is different. The first rule for this bridge matches, and the VLAN ID 1 is rewritten to become VLAN ID 100. Then, normal processing takes over, and the packet leaves the bridge and travels via enp0s9 to the physical network. Thus, traffic tagged with VLAN ID 1 on the integration bridge shows up with VLAN ID 100 on the physical network. This is the mapping between local VLAN ID (which represents a virtual network on the node) and global VLAN ID (which represents a virtual VLAN network on the physical network infrastructure connecting the nodes).
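In abbreviated form, the two translation rules look roughly like this (a sketch based on my setup; priorities and field order may differ in yours) – the first flow is installed on br-phys and handles egress traffic, the second on br-int and handles ingress traffic.

```
table=0, priority=4, in_port="phy-br-phys", dl_vlan=1   actions=mod_vlan_vid:100,NORMAL
table=0, priority=3, in_port="int-br-phys", dl_vlan=100 actions=mod_vlan_vid:1,resubmit(,60)
```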


For ingress traffic, the reverse mapping applies. A packet travels from the physical bridge to the integration bridge. Here, the second rule for table 0 matches for packets that are tagged with VLAN 100, the global VLAN ID of our virtual network, and rewrites the VLAN ID to 1, the local VLAN ID. This packet is then processed as before and eventually reaches the access port connecting the bridge with the tap device. There, the VLAN tagging is stripped off and the untagged traffic reaches the VM.


The diagram below summarizes our findings. We see that on the same physical infrastructure, two virtual networks are realized. There is still the flat network corresponding to untagged traffic, and the newly created virtual network corresponding to VLAN ID 100.


It is interesting to note how the OpenFlow rules change if we bring up an additional instance on this compute node which is attached to the flat network. Then, an additional local VLAN ID (2 in our case) will appear corresponding to the flat network. On the physical bridge, the VLAN tag will be stripped off for egress traffic with this local VLAN ID, so that it appears untagged on the physical network. Similarly, on the integration bridge, untagged ingress traffic will no longer be dropped but will receive VLAN ID 2.

Note that this setup implies that we can no longer easily reach the machines connected to a VLAN network via SSH from the lab host or the compute node itself. In fact, even if we set up a route to the vboxnet1 interface on the lab host, our traffic would arrive untagged and would not reach the virtual machine. This is the reason why lab 7 comes with a fully installed Horizon GUI which allows you to use the noVNC console to log into our instances.

This is very similar to a physical setup where a machine is connected to a switch via an access port, but the connection to the external network is on a different VLAN or on the untagged, native VLAN. In this situation, one would typically use a router to connect the two networks. Of course, Neutron offers virtual routers to connect two virtual networks. In the next post, we will see how this works and re-establish SSH connectivity to our instances.

OpenStack Neutron installation – basic setup and our first instances

In this post, we will go through the installation of Neutron for flat networks and get to know the basic configuration options for the various Neutron components, thus completing our first fully working OpenStack installation. If you have not already read my previous post describing some of the key concepts behind Neutron, I advise you to do so, as it will make it easier to follow today's post.

Installing Neutron on the controller node

The setup of Neutron on the controller node follows a similar logic to the installation of Nova or other OpenStack components that we have already seen. We create database tables and database users, add users, services and endpoints to Keystone, install the required packages, update the configuration and sync the database.


As the first few steps are rather standard, let us focus on the configuration. There are

  • the Neutron server configuration file /etc/neutron/neutron.conf
  • the configuration of the ML2 plugin /etc/neutron/plugins/ml2/ml2_conf.ini
  • the configuration of the DHCP agent /etc/neutron/dhcp_agent.ini
  • the configuration of the metadata agent /etc/neutron/metadata_agent.ini

Let us now discuss each of these configuration files in a bit more detail.

The Neutron configuration file on the controller node

The first group of options that we need to configure is familiar – the authorization strategy (keystone) and the related configuration for the Keystone authtoken middleware, the URL of the RabbitMQ server and the database connection as well as a directory for lock files.

The second change is the list of service plugins. Here we set this to an empty string, so that no additional service plugins beyond the core ML2 plugin will be loaded. At this point, we could configure additional services like Firewall-as-a-service which act as an extension.

Next we set the options notify_nova_on_port_status_changes and notify_nova_on_port_data_changes which instruct Neutron to inform Nova about status changes, for instance when the creation of a port fails.

To send these notifications, the external events REST API of Nova is used, and therefore Neutron needs credentials for the Nova service which need to be present in a section [nova] in the configuration file (loaded here).
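Collecting these settings, the relevant parts of /etc/neutron/neutron.conf might look roughly as follows (a sketch – host names and passwords are placeholders, and the Keystone authtoken section is omitted):

```
[DEFAULT]
core_plugin = ml2
service_plugins =
auth_strategy = keystone
transport_url = rabbit://openstack:<rabbitmq password>@controller
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
connection = mysql+pymysql://neutron:<db password>@controller/neutron

[nova]
auth_url = http://controller:5000
auth_type = password
project_name = service
username = nova
password = <nova password>
```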

The Neutron ML2 plugin configuration file

As you might almost guess from the presentation of Neutron's high-level architecture in the previous post, the ML2 plugin configuration file contains the information which type drivers and which mechanism drivers are loaded. The corresponding configuration items (type_drivers and mechanism_drivers) are both comma-separated lists. For our initial setup, we load the type drivers for a local network and a flat network, and we use the openvswitch mechanism driver.

In this file, we also configure the list of network types that are available as tenant networks. Here, we leave this list empty as we only offer a flat provider network.

In addition, we can configure so-called extension drivers. These are additional drivers that can be loaded by the ML2 plugin. In our case, the only driver that we load at this point in time is the port_security driver which allows us to disable the built-in port protection mechanisms and eases debugging.

In addition to the main [ml2] section, the configuration file also contains one section for each network type. As we only provide flat networks so far, we only have to populate the section for the flat network type. The parameter that we need is a list of flat networks. These networks are physical networks, but are not specified by the name of an actual physical interface, but by a logical network name which will later be mapped to an actual network device. Here, we choose the name physnet for our physical network.
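Putting the pieces together, a minimal ML2 plugin configuration matching this setup could look like this (a sketch, not a complete file):

```
[ml2]
type_drivers = flat,local
mechanism_drivers = openvswitch
extension_drivers = port_security
tenant_network_types =

[ml2_type_flat]
flat_networks = physnet
```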

The configuration of the metadata agent

We will talk in more detail about the metadata agent in a later post, and only briefly discuss it here. The metadata agent is an agent that allows instances to read metadata like the name of an SSH key using the de-facto standard provided by EC2's metadata service.

Nova provides a metadata API service that typically runs on a controller node. To allow access from within an instance, Neutron provides a proxy service which is protected by a secret shared between Nova and Neutron. In the configuration, we therefore need to provide the IP address (or DNS name) of the node on which the Nova metadata service is running and the shared secret.
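In /etc/neutron/metadata_agent.ini, this amounts to two settings (the values are placeholders from our lab setup; in older Neutron releases the first option is called nova_metadata_ip):

```
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = <shared secret>
```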

The configuration of the DHCP agent

The configuration file for the DHCP agent requires only a few changes. First, we need to define the driver that the DHCP agent uses to connect itself to the virtual networks, which in our case is the openvswitch driver. Then, we need to specify the driver for the DHCP server itself. Here, we use the dnsmasq driver which actually spawns a dnsmasq process. We will learn more about DHCP agents in Neutron in a later post.

There is another feature that we enable in our configuration file. Our initial setup does not contain any router. Usually, routers are used to provide connectivity between an instance and the metadata service. To still allow the instances to access the metadata agent, the DHCP agent can be configured to add a static route to each instance pointing to the DHCP service. We enable this feature by setting the flag enable_isolated_metadata to true.
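The resulting /etc/neutron/dhcp_agent.ini then contains essentially three lines:

```
[DEFAULT]
interface_driver = openvswitch
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```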

Installing Neutron on the compute nodes

We are now done with the setup of Neutron on the controller nodes and can turn our attention to the compute nodes. First, we need to install Open vSwitch and the neutron OVS agent on each compute node. Then, we need to create an OVS bridge on each compute node.

This is actually the point where I have been cheating in my previous post on Neutron's architecture. As mentioned there, Neutron does not connect the integration bridge directly to the network interface of the compute node. Instead, Neutron expects the presence of an additional OVS bridge that has to be provided by the administrator. We will simply let Neutron know what the name of this bridge is, and Neutron will attach it to the integration bridge and add a few OpenFlow rules. Everything else, i.e. how this bridge is connected to the actual physical network infrastructure, is up to the administrator. To create the bridge on the compute node, simply run (unless you are using my Ansible scripts, of course – see below)

ovs-vsctl add-br br-phys \
  -- set bridge br-phys fail-mode=secure
ovs-vsctl add-port br-phys enp0s9

Once this is done, we need to adapt our configuration files as follows. In the item bridge_mappings, we need to provide the mapping from the physical network names to bridges. In our example, we have used the name physnet for our physical network. We now map this name to the device just created by setting

bridge_mappings = physnet:br-phys

We then need to define the driver that the Neutron agent uses to provide firewall functionality. This should of course be compatible with the chosen L2 driver, so we set this to openvswitch. We also enable security groups.
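Combined, the agent configuration (typically /etc/neutron/plugins/ml2/openvswitch_agent.ini) contains entries along these lines (a sketch of the relevant sections only):

```
[ovs]
bridge_mappings = physnet:br-phys

[securitygroup]
firewall_driver = openvswitch
enable_security_group = true
```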

Once all configuration files have been updated, we should now restart the Neutron OVS agent on each compute node and all services on the controller nodes.

Installation using Ansible scripts

As always, I have put the exact steps to replicate this deployment into a set of Ansible scripts and roles. To run them, use (assuming, as always, that you did go through the basic setup described in an earlier post)

git clone
cd openstack-labs/Lab5
vagrant up
ansible-playbook -i hosts.ini site.yaml

Let us now finally test our installation. We will create a project and a user, a virtual network, bring up two virtual machines, SSH into them and test connectivity.

First, we SSH into the controller node and source the OpenStack credentials for the admin user. We then create a demo project, read the generated password for the demo user from demo-openrc, create the demo user with this password and assign the member role in the demo project to this user.

vagrant ssh controller
source admin-openrc
openstack project create demo
password=$(awk -F '=' '/OS_PASSWORD=/ { print $2}' demo-openrc)
openstack user create \
   --domain default \
   --project demo \
   --project-domain default \
   --password $password \
   demo
openstack role add \
   --user demo \
   --project demo \
   --project-domain default \
   --user-domain default member

Let us now create our network. This will be a flat network (the only type we have so far), based on the physical network physnet, and we will call this network demo-network.

openstack network create \
--share \
--external \
--provider-physical-network physnet \
--provider-network-type flat \
demo-network

Next, we create a subnet attached to this network and define an allocation pool for the IP addresses handed out to instances.

openstack subnet create --network demo-network \
  --allocation-pool start=,end= \
  --gateway \
  --subnet-range demo-subnet

Now we need to create a flavor. A flavor is a bit like a template for a virtual machine and defines the machine size.

openstack flavor create \
  --disk 1 \
  --vcpus 1 \
  --ram 512 m1.nano

To complete the preparations, we now have to adjust the firewall rules in the default security group to allow for ICMP and SSH traffic and to import the SSH key that we want to use. We execute these commands as the demo user.

source demo-openrc
openstack security group rule create --proto icmp default
openstack security group rule create --proto tcp --dst-port 22 default
openstack keypair create --public-key /home/vagrant/ demo-key

We are now ready to create our first instances. We will bring up two instances, called demo-instance-1 and demo-instance-2, both being attached to our demo network.

source demo-openrc
net_id=$(openstack network show -f value -c id demo-network)
openstack server create \
--flavor m1.nano \
--image cirros \
--nic net-id=$net_id \
--security-group default \
--key-name demo-key \
demo-instance-1
openstack server create \
--flavor m1.nano \
--image cirros \
--nic net-id=$net_id \
--security-group default \
--key-name demo-key \
demo-instance-2

To inspect the status of the instances, use openstack server list, which will also give you the IP address of the instances. To determine the IP address of the first demo instance and verify connectivity using ping, run

ip=$(openstack server show demo-instance-1 \
  -f value -c addresses \
  | awk -F "=" '{print $2}')
ping $ip

Finally, you should be able to SSH into your instances as follows.

ssh -i demo-key cirros@$ip

You can now ping the second instance and verify that the instance is in the ARP cache, demonstrating that there is direct layer 2 connectivity between the instances.

We have now successfully completed a full OpenStack installation. In the next post, we will analyse the network setup that Neutron uses on the compute nodes in more detail. Until then, enjoy your running OpenStack playground!

OpenStack Neutron – architecture and overview

In this post, which is part of our series on OpenStack, we will start to investigate OpenStack Neutron – the OpenStack component which provides virtual networking services.

Network types and some terms

Before getting into the actual Neutron architecture, let us try to understand how Neutron provides virtual networking capabilities to compute instances. First, it is important to understand that in contrast to some container networking technologies like Calico, Neutron provides actual layer 2 connectivity to compute instances. Networks in Neutron are layer 2 networks, and if two compute instances are assigned to the same virtual network, they are connected to an actual virtual Ethernet segment and can reach each other on the Ethernet level.

What technologies do we have available to realize this? In a first step, let us focus on connecting two different virtual machines running on the same host. So assume that we are given two virtual machines, call them VM1 and VM2, on the same physical compute node. Our hypervisor will attach a virtual interface (VIF) to each of these virtual machines. In a physical network, you would simply connect these two interfaces to ports of a switch to connect the instances. In our case, we can use a virtual switch / bridge to achieve this.

OpenStack is able to leverage several bridging technologies. First, OpenStack can of course use the Linux bridge driver to build and configure virtual switches. In addition, Neutron comes with a driver that uses Open vSwitch (OVS). Throughout this series, I will focus on the use of OVS as a virtual switch.

So, to connect the VMs running on the same host, Neutron could use (and it actually does) an OVS bridge to which the virtual machine networking interfaces are attached. This bridge is called the integration bridge. We will see later that, as in a typical physical network, this bridge is also connected to a DHCP agent, routers and so forth.


But even for the simple case of VMs on the same host, we are not yet done. In reality, to operate a cloud at scale, you will need some approach to isolate networks. If, for instance, the two VMs belong to different tenants, you do not want them to be on the same network. To do this, Neutron uses VLANs. So the ports connecting the integration bridge to the individual VMs are tagged, and there is one VLAN for each Neutron network.

This networking type is called a local network in Neutron. It is possible to set up Neutron to only use this type of network, but in reality, this is of course not really useful. Instead, we need to move on and connect the VMs that are attached to the same network on different hosts. To do this, we will have to use some virtual networking technology to connect the integration bridges on the different hosts.


At this point, the above diagram is – on purpose – a bit vague, as there are several technologies available to achieve this (and I am cheating a bit and ignoring the fact that the integration bridge is not actually connected to a physical network interface but to a second bridge which in turn is connected to the network interface). First, we could simply connect each integration bridge to a physical network device which in turn is connected to the physical network. With this setup, called a flat network in Neutron, all virtual machines are effectively connected to the same Ethernet segment. Consequently, there can only be one flat network per deployment.

The second option we have is to use VLANs to partition the physical network according to the virtual networks that we wish to establish. In this approach, Neutron would assign a global VLAN ID to each virtual network (which in general is different from the VLAN ID used on the integration bridge) and tag the traffic within each virtual network with the corresponding VLAN ID before handing it over to the physical network infrastructure.

Finally, we could use tunnels to connect the integration bridges across the hosts. Neutron supports the most commonly used tunneling protocols (VXLAN, GRE, Geneve).

Regardless of the network type used, Neutron networks can be external or internal. External networks are networks that allow for connectivity to networks outside of the OpenStack deployment, like the Internet, whereas internal networks are isolated. Technically, Neutron does not really know whether a network has connectivity to the outside world, therefore “external network” is essentially a flag attached to a network which becomes relevant when we discuss IP routers in a later post.

Finally, Neutron deployment guides often use the terms provider network and tenant networks. To understand what this means, suppose you wanted to establish a network using e.g. VLAN tagging for separation. When defining this network, you would have to assign a VLAN tag to this virtual network. Of course, you need to make sure that there are no collisions with other Neutron networks or other reserved VLAN IDs on the physical networks. To achieve this, there are two options.

First, an administrator who has a certain understanding of the underlying physical network structure could determine an available VLAN ID and assign it. This implies that an administrator needs to define the network, and thus, from the point of view of a tenant using the platform, the network is created by the platform provider. Therefore, these networks are called provider networks.

Alternatively, an administrator could, initially, when installing Neutron, define a pool of available VLAN IDs. Using this pool, Neutron would then be able to automatically assign a VLAN ID when a tenant uses, say, the Horizon GUI to create a network. With this mechanism in place, tenants can define their own networks without having to rely on an administrator. Therefore, these networks are called tenant networks.

Neutron architecture

Armed with this basic understanding of how Neutron realizes virtual networks, let us now take a closer look at the architecture of Neutron.


The diagram above displays a very rough high-level overview of the components that make up Neutron. First, there is the Neutron server on the left hand side that provides the Neutron API endpoint. Then, there are the components that provide the actual functionality behind the API. The core functionality of Neutron is provided by a plugin called the core plugin. At the time of writing, there is one plugin – the ML2 plugin – which is provided by the Neutron team, but there are also other plugins available which are provided by third parties, like the Contrail plugin. Technically, a plugin is simply a Python class implementing the methods of the NeutronPluginBaseV2 class.

The Neutron API can be extended by API extensions. These extensions (which are again Python classes which are stored in a special directory and loaded upon startup) can be action extensions (which provide additional actions on existing resources), resource extensions (which provide new API resources) or request extensions that add new fields to existing requests.

Now let us take a closer look at the ML2 plugin. This plugin again utilizes pluggable modules called drivers. There are two types of drivers. First, there are type drivers which provide functionality for a specific network type, like a flat network, a VXLAN network, a VLAN network and so forth. Second, there are mechanism drivers that contain the logic specific to an implementation, like OVS or Linux bridging. Typically, the mechanism driver will in turn communicate with an L2 agent like the OVS agent running on the compute nodes.

On the right hand side of the diagram, we see several agents. Neutron comes with agents for additional functionality like DHCP, a metadata server or IP routing. In addition, there are agents running on the compute node to manipulate the network stack there, like the OVS agent or the Linux bridging agent, which correspond to the chosen mechanism drivers.

Basic Neutron objects

To close this post, let us take a closer look at some of the objects that Neutron manages. First, there are networks. As mentioned above, these are virtual layer 2 networks to which a virtual machine can attach. The point where the machine attaches is called a port. Each port belongs to a network and has a MAC address. A port contains a reference to the device to which it is attached. This can be a Nova managed instance, but can also be another network device like a DHCP agent or a router. In addition, an IP address can be assigned to a port, either directly when the port is created (this is often called a fixed IP address) or dynamically.

In addition to layer 2 networks, Neutron has the concept of a subnet. A subnet is attached to a network and describes an IP network on top of this Ethernet network. Thus, a subnet has a CIDR and a gateway IP address.
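The relationship between these objects can be sketched with a few dataclasses — this is purely illustrative and does not mirror Neutron's actual data model, but it captures how ports, networks and subnets hang together:

```python
from dataclasses import dataclass
import ipaddress

@dataclass
class Network:
    """A virtual layer 2 broadcast domain."""
    name: str

@dataclass
class Subnet:
    """An IP network layered on top of a layer 2 network."""
    network: Network
    cidr: str
    gateway_ip: str

@dataclass
class Port:
    """An attachment point on a network."""
    network: Network
    mac_address: str
    device_id: str       # e.g. the Nova instance or router using the port
    fixed_ip: str = None

def ip_in_subnet(port: Port, subnet: Subnet) -> bool:
    """Check that a port's fixed IP falls into the subnet's CIDR."""
    return ipaddress.ip_address(port.fixed_ip) in ipaddress.ip_network(subnet.cidr)
```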

Of course, this list is far from complete – there are routers, floating IP addresses, DNS servers and so forth. We will touch upon some of these objects in later posts in this series. In the next post, we will learn more about the components making up Neutron and how they are installed.

OpenStack Nova – deep-dive into the provisioning process

In the last post, we went through the installation process and the high-level architecture of Nova, talking about the Nova API server, the Nova scheduler and the Nova agent. Today, we will make this a bit more tangible by observing how a typical request to provision an instance flows through this architecture.

The use case we are going to consider is the creation of a virtual server, triggered by a POST request to the /servers/ API endpoint. This is a long and complicated process, and we try to focus on the main path through the code without diving into every possible detail. This implies that we will skim over some points very briefly, but the understanding of the overall process should put us in a position to dig into other parts of the code if needed.

Roughly speaking, the processing of the request will start in the Nova API server which will perform validations and enrichments and populate the database. Then, the request is forwarded to the Nova conductor which will invoke the scheduler and eventually the Nova compute agent on the compute nodes. We will go through each of these phases in a bit more detail in the following sections.

Part I – the Nova API server

Being triggered by an API request, the process of course starts in the Nova API server. We have already seen in the previous post that the request is dispatched to a controller based on a set of hard-wired routes. For the endpoint in question, we find that the request is routed to the method create of the server controller.

This method first assembles some information, like the user data which needs to be passed to the instance or the name of the SSH key to be placed in the instance. Then, authorization is carried out by calling the can method on the context (which, behind the scenes, will eventually invoke the Oslo policy rule engine that we have studied in our previous deep dive). Next, the request data for networks, block devices and the requested image is processed before we eventually call the create method of the compute API. Finally, we parse the result and use a view builder to assemble a response.

Let us now follow the call into the compute API. Here, all input parameters are validated and normalized, for instance by adding defaults. Then the method _provision_instances is invoked, which builds a request specification and the actual instance object and stores these objects in the database.

At this point, the Nova API server is almost done. We now call the method schedule_and_build_instances of the compute task API. From here, the call will simply be delegated to the corresponding method of the client side of the conductor RPC API which will send a corresponding RPC message to the conductor. At this point, we leave the Nova API server and enter the conductor. The flow through the code up to this point is summarized in the diagram below.


Part II – the conductor

In the last post, we have already seen that RPC calls are accepted by the Nova conductor service and passed on to the Nova conductor manager. The corresponding method is schedule_and_build_instances.

This method first retrieves the UUIDs of the instances from the request. Then, for each instance, the private method self._schedule_instances is called. Here, the class SchedulerQueryClient is used to submit an RPC call to the scheduler, which is processed by the scheduler's select_destinations method.

We will not go into the details of the scheduling process here, but simply note that this will in turn make a call to the placement service to retrieve allocation candidates and then calls the scheduler driver to actually select a target host.

Back in the conductor, we check whether the scheduling was successful. If not, the instance is moved into cell0. If it was, we determine the cell in which the selected host lives, update some status information and eventually, at the end of the method, invoke the method build_and_run_instance of the RPC client for the Nova compute service. At this point, we leave the Nova conductor service, and the processing continues in the Nova compute service running on the selected host.
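The conductor logic just described can be condensed into a few lines of pseudo-Python. All names and objects below are invented stand-ins for the real scheduler client, cell mappings and compute RPC API:

```python
def schedule_and_build_instances(instances, scheduler, compute_rpc, cell0, cells):
    """Toy version of the conductor flow: try to schedule each instance;
    on failure park it in cell0, on success hand it to the compute RPC
    client of the cell that owns the selected host."""
    for instance in instances:
        host = scheduler.select_destination(instance)
        if host is None:
            cell0.add(instance)              # scheduling failed
            continue
        cell = cells[host]                   # the cell the host lives in
        compute_rpc.build_and_run_instance(cell, host, instance)
```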


Part III – the processing on the compute node

We have now reached the Nova compute agent running on the selected compute node, more precisely the method build_and_run_instance of the Nova compute manager. Here we spawn a separate worker thread which runs the private method _do_build_and_run_instance.

This method updates the VM state to BUILDING and calls _build_and_run_instance. Within this method, we first invoke _build_resources which triggers the creation of resources like networks and storage devices, and then move on to the spawn method of the compute driver from nova.virt. Note that this is again a pluggable driver mechanism – in fact the compute driver class is an abstract class, and needs to be implemented by each compute driver.

Now let us see how the processing works in our specific case of the libvirt driver library. First, we create an image for the VM by calling the private method _create_image. Next, we create the XML descriptor for the guest, i.e. we retrieve the required configuration data and turn it into the XML structure that libvirt expects. Finally, we call _create_domain_and_network and set a timer to periodically check the state of the instance until the boot process is complete.

In _create_domain_and_network, we plug in the virtual network interfaces, set up the firewall (in our installation, this is the point where we use the No-OP firewall driver as firewall functionality is taken over by Neutron) and then call _create_domain which creates the actual guest (called a domain in libvirt).

This delegates the call to nova.virt.libvirt.Guest.create() and then powers on the guest using the launch method on the newly created guest. Let us take a short look at each of these methods in turn.

In nova.virt.libvirt.Guest.create(), we use the write_instance_config method of the host class to create the libvirt guest without starting it.

In the launch method in nova/virt/libvirt/, we now call createWithFlags on the domain. This is actually a call into the libvirt library itself and will launch the previously defined guest.


At this point, our newly created instance will start to boot. The timer which we have created earlier will check in periodic intervals whether the boot process is complete and update the status of the instance in the database accordingly.

This completes our short tour through the instance creation process. There are a few points which we have deliberately skipped, for instance the details of the scheduling process, the image creation and image caching on the compute nodes or the network configuration, but the information in this post might be a good starting point for further deep dives.

OpenStack Nova – installation and overview

In this post, we will look into Nova, the cloud fabric component of OpenStack. We will see how Nova is installed and go briefly through the individual components and Nova services.


Before getting into the installation process, let us briefly discuss the various components of Nova on the controller and compute nodes.

First, there is the Nova API server which runs on the controller node. The Nova service will register itself as a systemd service with entry point /usr/bin/nova-api. Similar to Glance, invoking this script will bring up a WSGI server which uses PasteDeploy to build a pipeline with the actual Nova API endpoint (an instance of nova.api.openstack.compute.APIRouterV21) being the last element of the pipeline. This component will then distribute incoming API requests to various controllers which are part of the nova.api.openstack.compute module. The routing rules themselves are actually hardcoded in the ROUTE_LIST which is part of the Router class and maps request paths to controller objects and their methods.
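The idea of dispatching via a hard-wired route list can be sketched as follows. This is illustrative only — the real ROUTE_LIST maps paths to per-HTTP-method controller factories, and the controller class here is invented:

```python
class ToyServersController:
    """Stand-in for a Nova API controller handling /servers requests."""
    def create(self, body):
        return {"server": {"name": body["name"], "status": "BUILD"}}

# Hardcoded routing table in the spirit of Nova's ROUTE_LIST:
# (path, HTTP method) -> (controller class, method name)
ROUTE_LIST = {
    ("/servers", "POST"): (ToyServersController, "create"),
}

def dispatch(path, method, body):
    controller_cls, action = ROUTE_LIST[(path, method)]
    return getattr(controller_cls(), action)(body)
```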


When you browse the source code, you will find that Nova offers some APIs, like the image API or the bare metal API, which are simply proxies to other OpenStack services like Glance or Ironic. These APIs are deprecated, but still present for backwards compatibility. Nova also has a network API which, depending on the value of the configuration item use_neutron, will either act as a proxy to Neutron or present the legacy Nova networking API.

The second Nova component on the controller node is the Nova conductor. The Nova conductor does not expose a REST API, but communicates with the other Nova components via RPC calls (based on RabbitMQ). The conductor is used to handle long-running tasks like building an instance or performing a live migration.

Similar to the Nova API server, the conductor has a tiered architecture. The actual binary which is started by the systemd mechanism creates a so-called service object. In Nova, a service object represents an RPC API endpoint. When a service object is created, it starts up an RPC service that handles the actual communication via RabbitMQ and forwards incoming requests to an associated service manager object.


Again, the mapping between binaries and manager classes is hardcoded and, for the Stein release, is as follows.

  'nova-compute': 'nova.compute.manager.ComputeManager',
  'nova-console': 'nova.console.manager.ConsoleProxyManager',
  'nova-conductor': 'nova.conductor.manager.ConductorManager',
  'nova-metadata': 'nova.api.manager.MetadataManager',
  'nova-scheduler': 'nova.scheduler.manager.SchedulerManager',

Apart from the conductor service, this list contains one more component that runs on the controller node and uses the same mechanism to handle RPC requests (the nova-console binary is deprecated and we use the noVNC proxy instead, see the section below; the nova-compute binary runs on the compute nodes; and the nova-metadata binary is the old metadata service used with the legacy Nova networking API). This is the Nova scheduler.

The scheduler receives and maintains information on the instances running on the individual hosts and, upon request, uses the Placement API that we have looked at in the previous post to take a decision where a new instance should be placed. The actual scheduling is carried out by a pluggable instance of the nova.scheduler.Scheduler base class. The default scheduler is the filter scheduler which first applies a set of filters to filter out individual hosts which are candidates for hosting the instance, and then computes a score using a set of weights to take a final decision. Details on the scheduling algorithm are described here.
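The filter-then-weigh approach can be illustrated with a toy rendition of the filter scheduler. The filter, weigher and host attributes below are invented; real Nova ships many filters (RAM, CPU, aggregates, …) and applies configurable weight multipliers:

```python
def ram_filter(host, spec):
    """Drop hosts that cannot satisfy the requested RAM."""
    return host["free_ram_mb"] >= spec["ram_mb"]

def free_ram_weigher(host):
    """Score hosts: prefer hosts with more free RAM."""
    return host["free_ram_mb"]

def select_destination(hosts, spec, filters, weighers):
    # Phase 1: filtering - keep only hosts passing every filter.
    candidates = [h for h in hosts if all(f(h, spec) for f in filters)]
    if not candidates:
        return None
    # Phase 2: weighing - pick the host with the highest total score.
    return max(candidates, key=lambda h: sum(w(h) for w in weighers))
```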

The last service which we have not yet discussed is Nova compute. One instance of the Nova compute service runs on each compute node. The manager class behind this service is the ComputeManager which itself invokes various APIs like the networking API or the Cinder API to manage the instances on this node. The compute service interacts with the underlying hypervisor via a compute driver. Nova comes with compute drivers for the most commonly used hypervisors, including KVM (via libvirt), VMware, Hyper-V and Xen. In a later post, we will go once through the call chain when provisioning a new instance to see how the Nova API, the Nova conductor service, the Nova compute service and the compute driver interact to bring up the machine.

The Nova compute service itself does not have a connection to the database. However, in some cases, the compute service needs to access information stored in the database, for instance when the Nova compute service initializes on a specific host and needs to retrieve a list of instances running on this host from the database. To make this possible, Nova uses remotable objects provided by the Oslo versioned objects library. This library provides decorators like remotable_classmethod to mark methods of a class or an object as remotable. These decorators point to the conductor API (indirection_api within Oslo) and delegate the actual method invocation to a remote copy via an RPC call to the conductor API. In this way, only the conductor needs access to the database and Nova compute offloads all database access to the conductor.

Nova cells

In a large OpenStack installation, access to the instance data stored in the MariaDB database can easily become a bottleneck. To avoid this, OpenStack provides a sharding mechanism for the database known as cells.

The idea behind this is that the compute nodes are partitioned into cells. Every compute node is part of a cell, and in addition to these regular cells, there is a cell called cell0 (which is usually not used and only holds instances which could not be scheduled to a node). The Nova database schema is split into a global part which is stored in a database called the API database and a cell-local part. This cell-local database is different for each cell, so each cell can use a different database running (potentially) on a different host. A similar sharding applies to message queues. When you set up a compute node, the configuration of the database connection and the connection to the RabbitMQ service determines to which cell the node belongs. The compute node will then use this database connection to register itself with the corresponding cell database, and a special script (nova-manage) needs to be run to make these hosts visible in the API database as well so that they can be used by the scheduler.

Cells themselves are stored in a database table cell_mappings in the API database. Here each cell is set up with a dedicated RabbitMQ connection string (called the transport URL) and a DB connection string. Our setup will have two cells – the special cell0 which is always present, and cell1. Therefore, our installation will require three databases.

| Database   | Description        |
| nova_api   | Nova API database  |
| nova_cell0 | Database for cell0 |
| nova       | Database for cell1 |
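Conceptually, an entry in cell_mappings carries exactly this pair of connection strings per cell; the sketch below models that with a dataclass. The user names, passwords and host names are placeholders, and the "none" transport URL for cell0 reflects that no services run in that cell:

```python
from dataclasses import dataclass

@dataclass
class CellMapping:
    """Toy model of a row in the cell_mappings table of the API database."""
    name: str
    transport_url: str            # RabbitMQ connection for this cell
    database_connection: str      # cell-local database

cells = [
    CellMapping("cell0", "none:///",
                "mysql+pymysql://nova:secret@controller/nova_cell0"),
    CellMapping("cell1", "rabbit://openstack:secret@controller",
                "mysql+pymysql://nova:secret@controller/nova"),
]
```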

In a deployment with more than one real cell, each cell will have its own Nova conductor service, in addition to a “super conductor” running across cells, as explained here and in the diagram below, which is part of the OpenStack documentation.

The Nova VNC proxy

Usually, you will use SSH to access your instances. However, sometimes, for instance if the SSHD is not coming up properly or the network configuration is broken, it would be very helpful to have a way to connect to the instance directly. For that purpose, OpenStack offers a VNC console access to running instances. Several VNC clients can be used, but the default is to use the noVNC browser based client embedded directly into the Horizon dashboard.

How exactly does this work? First, there is KVM. The KVM hypervisor has the option to export the content of the emulated graphics card of the instance as a VNC server. Obviously, this VNC server is running on the compute node on which the instance is located. The server for the first instance will listen on port 5900, the server for the second instance will listen on port 5901 and so forth. The server_listen configuration option determines the IP address to which the server will bind.

Now theoretically a VNC client like noVNC could connect directly to the VNC server. However, in most setups, the network interfaces of the compute node are not directly reachable from a browser in which the Horizon GUI is running. To solve this, Nova comes with a dedicated proxy for noVNC. This proxy is typically running on the controller node. The IP address and port number on which this proxy is listening can again be configured using the novncproxy_host and novncproxy_port configuration items. The default port is 6080.

When a client like the Horizon dashboard wants to get access to the console, it can use the Nova API path /servers/{server_id}/remote-consoles. This call will be forwarded to the Nova compute method get_vnc_console on the compute node. This method will return a URL, consisting of the base URL (which can again be configured using the novncproxy_base_url configuration item) and a token which is stored in the database as well. When the client uses this URL to connect to the proxy, the token is used to verify that the call is authorized.

The following diagram summarizes the process to connect to the VNC console of an instance from a browser running noVNC.


  1. Client uses Nova API /servers/{server_id}/remote-consoles to retrieve the URL of a proxy
  2. Nova API delegates the request to Nova Compute on the compute node
  3. Nova Compute assembles the URL, which points to the proxy, and creates a token, containing the ID of the instance as well as additional information, and the URL including the token is handed out to the client
  4. Client uses the URL to connect to the proxy
  5. The proxy validates the token, extracts the target compute node and port information, establishes the connection to the actual VNC server and starts to service the session
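The steps above can be condensed into a toy model of the token handshake. The class and method names are invented, and real Nova persists console auth tokens in the database rather than in memory:

```python
import secrets

class ConsoleAuth:
    """Toy model of the noVNC proxy handshake: hand out a URL containing
    a random token, and remember which VNC server the token maps to."""
    def __init__(self, base_url):
        self._base_url = base_url            # e.g. http://controller:6080
        self._tokens = {}

    def get_vnc_console(self, host, port):
        token = secrets.token_hex(8)
        self._tokens[token] = (host, port)   # step 3: record target, issue URL
        return f"{self._base_url}/vnc_auto.html?token={token}"

    def resolve(self, token):
        """Step 5: called by the proxy to find the actual VNC server."""
        return self._tokens.get(token)
```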

Installing Nova on the controller node

Armed with the knowledge from the previous discussions, we can now almost guess what steps we need to take in order to install Nova on the controller node.


First, we need to create the Nova databases – the Nova API database (nova_api), the Nova database for cell0 (nova_cell0) and the Nova database for our only real cell cell1 (nova). We also need to create a user which has the necessary grants on these databases.

Next, we create a user in Keystone representing the Nova service, register the Nova API service with Keystone and define endpoints.

We then install the Ubuntu packages corresponding to the four components that we will install on the controller node – the Nova API service, the Nova conductor, the Nova scheduler and the VNC proxy.

Finally, we adapt the configuration file /etc/nova/nova.conf. The first change is easy – we set the value my_ip to the IP of the controller management interface.

We then need to set up the networking part. To enforce the use of Neutron instead of the built-in legacy Nova networking, we set the configuration option use_neutron that we already discussed above to True. We also set the firewall driver to the No-OP driver nova.virt.firewall.NoopFirewallDriver.

The next information we need to provide is the connection information to RabbitMQ and the database. Recall that we need to configure two database connections, one for the API database and one for the database for cell 1 (Nova will automatically append _cell0 to this database name to obtain the database connection for cell 0).

We also need to provide some information that Nova needs to communicate with other Nova services. In the glance section, we need to define the URL to reach the Glance API server. In the neutron section, we need to set up the necessary credentials to connect to Neutron. Here we use a Keystone user neutron which we will set up when installing Neutron in a later post, and we also define some data needed for the metadata proxy that we will discuss in a later post. And finally Nova needs to connect to the Placement service for which we have to provide credentials as well, this time using the placement user created earlier.

To set up the communication with Keystone, we need to set the authorization strategy to Keystone (which will also select the PasteDeploy Pipeline containing the Keystone authtoken middleware) and provide the credentials that the authtoken middleware needs. And finally, we set the path that the Oslo concurrency library will use to create temporary files.

Once all this has been done, we need to prepare the database for use. As with the other services, we need to sync the database schema to the latest version which, in our case, will simply create the database schema from scratch. We also need to establish our cell 1 in the database using the nova-manage utility.

Installing Nova on the compute nodes

Let us now turn to the installation of Nova on the compute nodes. Recall that on the compute nodes, only nova-compute needs to be running. There is no database connection needed, so the only installation step is to install the nova-compute package and to adapt the configuration file.

The configuration file nova.conf on the compute node is very similar to the configuration file on the controller node, with a few differences.

As there is no database connection, we can comment out the DB connection string. In the light of our above discussion of the VNC proxy mechanism, we also need to provide some configuration items for the proxy mechanism.

  • The configuration item server_proxyclient_address is evaluated by the get_vnc_console method of the compute driver and used to return the IP address and port number on which the actual VNC server is running and can be reached from the controller node (this is the address to which the proxy will connect)
  • The server_listen configuration item is the IP address to which the KVM VNC server will bind on the compute host; this address should be reachable as the server_proxyclient_address from the controller node
  • The novncproxy_base_url is the base of the URL which is handed out by the compute node and points the client to the proxy

Finally, there is a second configuration file nova-compute.conf specific to the compute nodes. This file determines the compute driver to be used (in our case, libvirt) and the virtualization type. With libvirt, we can either use KVM or QEMU. KVM will only work if the CPU supports hardware virtualization (i.e. offers the VT-x extension for Intel or AMD-V for AMD). In our setup, the virtual machines will run on top of another virtual machine (VirtualBox), which only passes these features through for AMD CPUs. We will therefore set the virtualization type to QEMU.

Finally, after installing Nova on all compute nodes, we need to run the nova-manage tool once more to make these nodes known and move them into the correct cells.

Run and verify the installation

Let us now run the installation and verify that it has succeeded. Here are the commands to bring up the environment and to obtain and execute the needed playbooks.

git clone
cd openstack-labs/Lab4
vagrant up
ansible-playbook -i hosts.ini site.yaml

This will run a few minutes, depending on the network connection and the resources available on your machine. Once the installation completes, log into the controller and source the admin credentials.

vagrant ssh controller
source admin-openrc

First, we verify that all components are running on the controller. To do this, enter

systemctl | grep "nova"

The output should contain four lines, corresponding to the four services nova-api, nova-conductor, nova-scheduler and nova-novncproxy running on the controller node. Next, let us inspect the Nova database to see which compute services have registered with Nova.

openstack compute service list

The output should be similar to the sample output below, listing the scheduler, the conductor and two compute instances, corresponding to the two compute nodes that our installation has.

| ID | Binary         | Host       | Zone     | Status  | State | Updated At                 |
|  1 | nova-scheduler | controller | internal | enabled | up    | 2019-11-18T08:35:04.000000 |
|  6 | nova-conductor | controller | internal | enabled | up    | 2019-11-18T08:34:56.000000 |
|  7 | nova-compute   | compute1   | nova     | enabled | up    | 2019-11-18T08:34:56.000000 |
|  8 | nova-compute   | compute2   | nova     | enabled | up    | 2019-11-18T08:34:57.000000 |

Finally, the command sudo nova-status upgrade check will run some checks meant to be executed after an update that can be used to further verify the installation.

OpenStack supporting services – Glance and Placement

Apart from Keystone, Glance and Placement are two additional infrastructure services that are part of every OpenStack installation. While Glance is responsible for storing and maintaining disk images, Placement (formerly part of Nova) is keeping track of resources and allocation in a cluster.

Glance installation

Before we get into the actual installation process, let us take a short look at the Glance runtime environment. Different from Keystone, but similar to most other OpenStack services, Glance is not running inside Apache, but is an independent process using a standalone WSGI server.

To understand the startup process, let us start with the setup.cfg file. This file contains an entry point glance-api which, via the usual mechanism provided by Python's setuptools, will provide a Python executable which runs glance/cmd/. This in turn uses the simple WSGI server implemented in glance/common/. This server is then started in the line

server.start(config.load_paste_app('glance-api'), default_port=9292)

Here we see that the actual WSGI app is created and passed to the server using the PasteDeploy Python library. If you have read my previous post on WSGI and WSGI middleware, you will know that this is a library which uses configuration data to plumb together a WSGI application and middleware. The actual call of the PasteDeploy library is delegated to a helper library in glance/common and happens in the function load_paste_app defined here.

Armed with this understanding, let us now dive right into the installation process. We will spend a bit more time with this process, as it contains some recurring elements which are relevant for most of the OpenStack services that we will install and use. Here is a graphical overview of the various steps that we will go through.


The first thing we have to take care of is the database. Almost all OpenStack services require some sort of database access, thus we have to create one or more databases in our MariaDB database server. In the case of Glance, we create a database called glance. To allow Glance to access this database, we also need to set up a corresponding MariaDB user and grant the necessary access rights on our newly created database.

Next, Glance of course needs access to Keystone to authenticate users and authorize API requests. For that purpose, we create a new user glance in Keystone. Following the recommended installation process, we will in fact create one Keystone user for every OpenStack service, which is of course not strictly necessary.

With this, we have set up the necessary identities in Keystone. However, recall that Keystone is also used as a service catalog to decouple services from endpoints. An API user will typically not access Glance directly, but first get a list of service endpoints from Keystone, select an appropriate endpoint and then use this endpoint. To support this pattern, we need to register Glance with the Keystone service catalog. Thus, we create a Keystone service and API endpoints. Note that the port provided needs to match the actual port on which the Glance service is listening (using the default unless overridden explicitly in the configuration).

OpenStack services typically expose more than one endpoint – a public endpoint, an internal endpoint and an admin endpoint. As described here, there does not seem to be a fully consistent configuration scheme that allows an administrator to easily define which endpoint type the services will use. Following the installation guideline, we will register all our services with all three endpoint types.

Next, we can install Glance by simply installing the corresponding APT package. Similar to Keystone, this package comes with a set of configuration files that we now have to adapt.

The first change which is again standard across all OpenStack components is to change the database connection string so that Glance is able to find our previously created database. Note that this string needs to contain the credentials for the Glance database user that we have created.

Next, we need to configure the Glance WSGI middleware chain. As discussed above, Glance uses the PasteDeploy mechanism to create a WSGI application. When you take a look at the corresponding configuration, however, you will see that it contains a variety of different pipeline definitions. To select the pipeline that will actually be deployed, Glance has a configuration option called deployment flavor. This is a short form for the name of the pipeline to be selected, and when the actual pipeline is assembled here, the name of the pipeline is put together by combining the flavor with the string “glance-api”. We use the flavor “keystone” which will result in the pipeline “glance-api-keystone” being loaded.
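The composition of the pipeline name from the flavor is simple enough to spell out — the helper function below is invented, but it mirrors what the text describes: the flavor is appended to the application name to pick a pipeline from the paste configuration.

```python
def pipeline_name(app_name, flavor=None):
    """Compose the name of the PasteDeploy pipeline to load: the
    configured deployment flavor is appended to the application name."""
    return f"{app_name}-{flavor}" if flavor else app_name
```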

This pipeline contains the Keystone auth token middleware which (as discussed in our deep dive into tokens and policies) extracts and validates the token data in a request. This middleware component needs access to the Keystone API, and therefore we need to add the required credentials to our configuration in the section [keystone_authtoken].

To complete the installation, we still have to create the actual database schema that Glance expects. Like most other OpenStack services, Glance is able to automatically create this schema and to synchronize an existing database with the current version of the existing schema by automatically running the necessary migration routines. This is done by the helper script glance-manage.

The actual installation process is now completed, and we can restart the Glance service so that the changes in our configuration files are picked up.

Note that the current version of the OpenStack install guide for Stein will instruct you to start two Glance services – glance-api and glance-registry. We only start the glance-api service, for the following reason.

Internally, Glance is structured into a database access layer and the actual Glance API server, plus of course a couple of other components like common services. Historically, Glance used an access layer called the Glance registry. Essentially, the Glance registry is a service sitting between the Glance API service and the database, and contains the code for the actual database layer which uses SQLAlchemy. In this setup, the Glance API service is reachable via the REST API, whereas the Glance registry server is only reachable via RPC calls (using the RabbitMQ message queue). This adds an additional layer of security, as the database credentials need to be stored in the configuration of the Glance registry service only, and makes it easier to scale Glance across several nodes. Later, the Glance registry service was deprecated, and the default configuration instructs Glance to access the database directly (this is the data_api parameter in the Glance configuration file).

As in my previous posts, I will not replicate the exact commands to do all this manually (you can find them in the well-written OpenStack installation guide), but have put together a set of Ansible scripts doing all this. To run them, enter the following commands

git clone
cd Lab3
vagrant up
ansible-playbook -i hosts.ini site.yaml

This playbook will not only install and configure the Glance service, but will also download the CirrOS image (which I have mirrored in an S3 bucket as the original location is sometimes a bit slow) and import it into Glance.

Working with Glance images

Let us now play a bit with Glance. The following commands need to be run from the controller node, and we have to source the credentials that we need to connect to the API. So SSH into the controller node and source the credentials by running

vagrant ssh controller
source admin-openrc

First, let us use the CLI to display all existing images.

openstack image list

As we have only loaded one image so far – the CirrOS image – the output will be similar to the following sample output.

+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| f019b225-de62-4782-9206-ed793fbb789f | cirros | active |
+--------------------------------------+--------+--------+

Let us now get some more information on this image. For better readability, we display the output in JSON format.

openstack image show cirros -f json

The output is a bit longer, and we will only discuss a few of the returned attributes. First, there is the file attribute. If you compare its value to the contents of the directory /var/lib/glance/images/, you will see that this is a reference to the actual image stored on the hard disk. Glance delegates the actual storage to a storage backend. Storage backends are provided by the separate glance_store library and include a file store (which simply stores files on the disk, as we have just observed, and is the default), an HTTP store which uses an HTTP GET request to retrieve an image, an interface to the RADOS distributed object store and interfaces to Cinder, Swift and the VMware datastore.
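For illustration, the storage backend is selected in the [glance_store] section of glance-api.conf; the following sketch shows a configuration for the file store (the directory matches what we have just seen on the controller, the remaining values are the usual defaults):

```ini
[glance_store]
# backends that are enabled
stores = file,http
# backend used when no store is specified on upload
default_store = file
# directory in which the file store keeps the image files
filesystem_store_datadir = /var/lib/glance/images/
```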

We also see from the output that images can be active or inactive, belong to a project (the owner field refers to a project), can be tagged and can either be visible for the public (i.e. outside the project to which they belong) or private (i.e. only visible within the project). It is also possible to share images with individual projects by adding these projects as members.

Note that Glance stores image metadata, like visibility, hash values, owner and so forth in the database, while the actual image is stored in one of the storage backends.

Let us now go through the process of adding an additional image. The OpenStack Virtual Machine Image guide contains a few public sources for OpenStack images and explains how administrators can create their own images based on the most commonly used Linux distributions. As an example, here are the commands needed to download the latest Ubuntu Bionic cloud image and import it into Glance.

wget https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img
openstack image create \
    --disk-format qcow2 \
    --file bionic-server-cloudimg-amd64.img \
    --public \
    --project admin \
    ubuntu-bionic

We will later see how we need to reference an image when creating a virtual machine.

Installing the placement service

Having discussed the installation process for the Glance service in a bit more detail, let us now quickly go over the steps to install Placement. Structurally, these steps are almost identical to those to install Glance, and we will not go into them in detail.


The changes in the configuration file are also very similar to those that we had to apply for Glance. Placement again uses the Keystone authtoken plugin, so that we have to supply Keystone credentials in the keystone_authtoken section of the configuration file. We also have to supply a database connection string. Apart from that, we can take over the default values in the configuration file without any further changes.
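As a sketch, the relevant parts of placement.conf could look like this (user name, password placeholders and the connection string are again specific to our playground and need to be adapted):

```ini
[placement_database]
connection = mysql+pymysql://placement:<db password>@controller/placement

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = <password of the placement service user>
```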

Placement overview

Let us now investigate the essential terms and objects that the Placement API uses. As the openstack client does not yet fully support Placement, we will access the API directly using curl. With each request, we need to include two header parameters.

  • X-Auth-Token needs to contain a valid token that we need to retrieve from Keystone first
  • OpenStack-API-Version needs to be included to define the version of the API (this is the so-called microversion).

Here is an example. We will SSH into the controller, source the credentials, get a token from Keystone and submit a GET request on the URL /resource_classes that we feed into jq for better readability.

vagrant ssh controller
source admin-openrc
sudo apt-get install jq
token=$(openstack token issue -f json | jq -r ".id") 
curl \
  -H "X-Auth-Token: $token" \
  -H "OpenStack-API-Version: placement 1.31"\
  "http://controller:8778/resource_classes" | jq

The result is a list of all resource classes known to Placement. Resource classes are the types of resources that Placement manages, like IP addresses, vCPUs, disk space or memory. In a fully installed system, OpenStack services can register as resource providers. Each provider offers a certain set of resource classes, which is called its inventory. A compute node, for instance, would typically provide CPUs, disk space and memory. In our current installation, we cannot yet test this, but in a system with compute nodes, the inventory for a compute node would typically look as follows.

  "resource_provider_generation": 3,
  "inventories": {
    "VCPU": {
      "total": 2,
      "reserved": 0,
      "min_unit": 1,
      "max_unit": 2,
      "step_size": 1,
      "allocation_ratio": 16
    "MEMORY_MB": {
      "total": 3944,
      "reserved": 512,
      "min_unit": 1,
      "max_unit": 3944,
      "step_size": 1,
      "allocation_ratio": 1.5
    "DISK_GB": {
      "total": 9,
      "reserved": 0,
      "min_unit": 1,
      "max_unit": 9,
      "step_size": 1,
      "allocation_ratio": 1

Here we see that the compute node has two virtual CPUs, roughly 4 GB of memory and 9 GB of disk space available. For each resource provider, Placement also maintains usage data which keeps track of the current usage of the resources. Here is a JSON representation of the usage for a compute node.
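Note that the capacity that Placement actually hands out per resource class is not simply the total: it is (total - reserved) * allocation_ratio, so that resources like CPU can be overcommitted. A small sketch of this arithmetic, using the sample inventory values from above:

```python
# Effective capacity of a resource class as Placement computes it:
# (total - reserved) * allocation_ratio. The sample values below are
# taken from the inventory of our example compute node.
def effective_capacity(total, reserved, allocation_ratio):
    return (total - reserved) * allocation_ratio

inventory = {
    "VCPU": {"total": 2, "reserved": 0, "allocation_ratio": 16},
    "MEMORY_MB": {"total": 3944, "reserved": 512, "allocation_ratio": 1.5},
    "DISK_GB": {"total": 9, "reserved": 0, "allocation_ratio": 1},
}

for rc, inv in inventory.items():
    cap = effective_capacity(inv["total"], inv["reserved"], inv["allocation_ratio"])
    print(rc, cap)
# VCPU 32
# MEMORY_MB 5148.0
# DISK_GB 9
```

So with an allocation ratio of 16, the two physical vCPUs can back up to 32 virtual CPUs.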

  "resource_provider_generation": 3,
  "usages": {
    "VCPU": 1,
    "MEMORY_MB": 128,
    "DISK_GB": 1

So in this example, one vCPU, 128 MB RAM and 1 GB disk of the compute node are in use. To link consumers and usage information, Placement uses allocations. An allocation represents a usage of resources by a specific consumer, like in the following example.

  "allocations": {
    "aaa957ac-c12c-4010-8faf-55520200ed55": {
      "resources": {
        "DISK_GB": 1,
        "MEMORY_MB": 128,
        "VCPU": 1
      "consumer_generation": 1
  "resource_provider_generation": 3

In this case, the consumer (represented by the UUID aaa957ac-c12c-4010-8faf-55520200ed55) is actually a compute instance which consumes 1 vCPU, 128 MB memory and 1 GB disk space. Here is a diagram that represents a simplified version of the Placement data model.
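To see how inventories, usages and allocations fit together, here is a toy Python model of this part of the data model (a simplification for illustration only, not the actual Placement implementation):

```python
# Toy model of the Placement data model: a resource provider with an
# inventory, allocations per consumer, and usage data derived from the
# allocations. Capacity is (total - reserved) * allocation_ratio.
class ResourceProvider:
    def __init__(self, inventory):
        # inventory: resource class -> total, reserved, allocation_ratio
        self.inventory = inventory
        self.allocations = {}  # consumer UUID -> resources dict

    def capacity(self, rc):
        inv = self.inventory[rc]
        return (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]

    def usage(self, rc):
        return sum(res.get(rc, 0) for res in self.allocations.values())

    def allocate(self, consumer, resources):
        # refuse the allocation if any resource class would be exhausted
        for rc, amount in resources.items():
            if self.usage(rc) + amount > self.capacity(rc):
                raise ValueError(f"not enough {rc} left")
        self.allocations[consumer] = resources

provider = ResourceProvider({
    "VCPU": {"total": 2, "reserved": 0, "allocation_ratio": 16},
    "MEMORY_MB": {"total": 3944, "reserved": 512, "allocation_ratio": 1.5},
    "DISK_GB": {"total": 9, "reserved": 0, "allocation_ratio": 1},
})
provider.allocate("aaa957ac-c12c-4010-8faf-55520200ed55",
                  {"VCPU": 1, "MEMORY_MB": 128, "DISK_GB": 1})
print(provider.usage("VCPU"))  # 1
```

After the allocation, the usage data reported by this provider matches the sample usage output shown above.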


Placement offers a few additional features, like traits, which define qualitative properties of resource providers, aggregates, which are groups of resource providers, and the ability to make reservations.

Let us close this post by briefly discussing the relation between Nova and Placement. As we have mentioned above, compute nodes are represented as resource providers in Placement, so Nova needs to register resource provider records for the compute nodes it manages and provide the inventory information. When a new instance is created, the Nova scheduler requests information on inventories and current usage from Placement to determine the compute node on which the instance will be placed, and subsequently updates the allocations to register the new instance as a consumer of the resources it uses.
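A highly simplified sketch of this scheduling logic (names and data structures are illustrative only, this is not the actual Nova code): given inventories and usages for a set of candidate providers, pick one that can still accommodate the requested resources and claim them by updating the usage data.

```python
# Simplified sketch of the Nova/Placement interaction at scheduling time:
# filter compute nodes by free capacity and claim the resources on the
# first candidate that fits.
def free(provider, rc):
    inv = provider["inventories"][rc]
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    return cap - provider["usages"].get(rc, 0)

def schedule(providers, request):
    for name, provider in providers.items():
        if all(free(provider, rc) >= amount for rc, amount in request.items()):
            # claim the resources, i.e. update the usage data
            for rc, amount in request.items():
                provider["usages"][rc] = provider["usages"].get(rc, 0) + amount
            return name
    return None  # no host has enough capacity left

providers = {
    "compute1": {
        "inventories": {
            "VCPU": {"total": 2, "reserved": 0, "allocation_ratio": 16},
            "MEMORY_MB": {"total": 3944, "reserved": 512, "allocation_ratio": 1.5},
        },
        "usages": {"VCPU": 1, "MEMORY_MB": 128},
    },
}
print(schedule(providers, {"VCPU": 1, "MEMORY_MB": 2048}))  # compute1
```

In the real system, of course, the scheduler applies many more filters and weighers, and the claim is made via the Placement allocations API rather than by mutating local data.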

With this, the installation of Glance and Placement is complete and we have all the ingredients in place to start installing the Nova services in the next post.