OpenStack Neutron installation – basic setup and our first instances

In this post, we will go through the installation of Neutron for flat networks and get to know the basic configuration options for the various Neutron components, thus completing our first fully working OpenStack installation. If you have not already read my previous post describing some of the key concepts behind Neutron, I recommend doing so, as it will make today's post much easier to follow.

Installing Neutron on the controller node

The setup of Neutron on the controller node follows a similar logic to the installation of Nova or other OpenStack components that we have already seen. We create database tables and database users, add users, services and endpoints to Keystone, install the required packages, update the configuration and sync the database.

[Diagram: overview of the Neutron installation steps on the controller node]
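For reference, the Keystone-related part of these standard steps could look roughly as follows. This is only a sketch: the service user neutron, the region RegionOne, the endpoint URL and the placeholder passwords are assumptions taken from a typical lab setup and need to be adapted to your environment.

openstack service create --name neutron \
  --description "OpenStack Networking" network
openstack user create --domain default \
  --password <neutron-service-password> neutron
openstack role add --project service --user neutron admin
for interface in public internal admin; do
  openstack endpoint create --region RegionOne \
    network $interface http://controller:9696
done

Once the packages are installed and the configuration files discussed below are in place, the database schema is created by running neutron-db-manage with both the main configuration file and the ML2 configuration file, for instance

neutron-db-manage \
  --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini \
  upgrade head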

As the first few steps are rather standard, let us focus on the configuration. There are

  • the Neutron server configuration file /etc/neutron/neutron.conf
  • the configuration of the ML2 plugin /etc/neutron/plugins/ml2/ml2_conf.ini
  • the configuration of the DHCP agent /etc/neutron/dhcp_agent.ini
  • the configuration of the metadata agent /etc/neutron/metadata_agent.ini

Let us now discuss each of these configuration files in a bit more detail.

The Neutron configuration file on the controller node

The first group of options that we need to configure is familiar – the authorization strategy (keystone) and the related configuration for the Keystone authtoken middleware, the URL of the RabbitMQ server and the database connection as well as a directory for lock files.

The second change is the list of service plugins. Here we set this to an empty string, so that no additional service plugins beyond the core ML2 plugin will be loaded. At this point, we could configure additional services like Firewall-as-a-Service, which act as extensions.

Next we set the options notify_nova_on_port_status_changes and notify_nova_on_port_data_changes which instruct Neutron to inform Nova about status changes, for instance when the creation of a port fails.

To send these notifications, the external events REST API of Nova is used, and therefore Neutron needs credentials for the Nova service, which have to be present in a [nova] section of the configuration file.
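Putting all of this together, the relevant parts of /etc/neutron/neutron.conf could look like the following sketch. The hostnames, passwords and the region name are placeholders for the values of your environment, not prescriptions.

[DEFAULT]
core_plugin = ml2
service_plugins =
auth_strategy = keystone
transport_url = rabbit://openstack:<rabbitmq-password>@controller
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
connection = mysql+pymysql://neutron:<db-password>@controller/neutron

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = <neutron-service-password>

[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = <nova-service-password>

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp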

The Neutron ML2 plugin configuration file

As you might almost guess from the presentation of Neutron's high-level architecture in the previous post, the ML2 plugin configuration file specifies which type drivers and which mechanism drivers are loaded. The corresponding configuration items (type_drivers and mechanism_drivers) are both comma-separated lists. For our initial setup, we load the type drivers for a local network and a flat network, and we use the openvswitch mechanism driver.

In this file, we also configure the list of network types that are available as tenant networks. Here, we leave this list empty as we only offer a flat provider network.

In addition, we can configure so-called extension drivers. These are additional drivers that can be loaded by the ML2 plugin. In our case, the only driver that we load at this point in time is the port_security driver which allows us to disable the built-in port protection mechanisms and eases debugging.

In addition to the main [ml2] section, the configuration file also contains one section per network type. As we only provide flat networks so far, we only need to populate the section for flat networks. The parameter that we need there is the list of flat networks. These networks are physical networks, but they are not specified by the name of an actual physical interface; instead, we use a logical network name which will later be mapped to an actual network device. Here, we choose the name physnet for our physical network.
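The resulting ML2 configuration is rather short. The following sketch reflects the choices made above.

[ml2]
type_drivers = flat,local
tenant_network_types =
mechanism_drivers = openvswitch
extension_drivers = port_security

[ml2_type_flat]
flat_networks = physnet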

The configuration of the metadata agent

We will talk in more detail about the metadata agent in a later post and only briefly discuss it here. The metadata agent allows instances to read metadata, like the name of an SSH key, using the de-facto standard established by EC2's metadata service.

Nova provides a metadata API service that typically runs on a controller node. To allow access from within an instance, Neutron provides a proxy service which is protected by a secret shared between Nova and Neutron. In the configuration, we therefore need to provide the IP address (or DNS name) of the node on which the Nova metadata service is running and the shared secret.
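In /etc/neutron/metadata_agent.ini, this boils down to two settings; the hostname and the secret below are placeholders.

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = <shared-secret>

Keep in mind that the same secret also needs to be configured on the Nova side so that Nova can validate the requests forwarded by the proxy.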

The configuration of the DHCP agent

The configuration file for the DHCP agent requires only a few changes. First, we need to define the driver that the DHCP agent uses to connect itself to the virtual networks, which in our case is the openvswitch driver. Then, we need to specify the driver for the DHCP server itself. Here, we use the dnsmasq driver which actually spawns a dnsmasq process. We will learn more about DHCP agents in Neutron in a later post.

There is another feature that we enable in our configuration file. Our initial setup does not contain any router. Usually, routers are used to provide connectivity between an instance and the metadata service. To still allow the instances to access the metadata agent, the DHCP agent can be configured to hand out a static route to each instance that directs metadata traffic to the DHCP port. We enable this feature by setting the flag enable_isolated_metadata to true.
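In /etc/neutron/dhcp_agent.ini, these choices translate into the following settings.

[DEFAULT]
interface_driver = openvswitch
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true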

Installing Neutron on the compute nodes

We are now done with the setup of Neutron on the controller nodes and can turn our attention to the compute nodes. First, we need to install Open vSwitch and the neutron OVS agent on each compute node. Then, we need to create an OVS bridge on each compute node.

This is actually the point where I have been cheating in my previous post on Neutron's architecture. As mentioned there, Neutron does not connect the integration bridge directly to the network interface of the compute node. Instead, Neutron expects the presence of an additional OVS bridge that has to be provided by the administrator. We will simply let Neutron know what the name of this bridge is, and Neutron will attach it to the integration bridge and add a few OpenFlow rules. Everything else, i.e. how this bridge is connected to the actual physical network infrastructure, is up to the administrator. To create the bridge on the compute node, simply run (unless you are using my Ansible scripts, of course – see below)

ovs-vsctl add-br br-phys
ovs-vsctl add-port br-phys enp0s9

Once this is done, we need to adapt our configuration files as follows. In the item bridge_mappings, we need to provide the mapping from physical network names to bridges. In our example, we have used the name physnet for our physical network. We now map this name to the bridge just created by setting

bridge_mappings = physnet:br-phys

We then need to define the driver that the Neutron agent uses to provide firewall functionality. This should of course be compatible with the chosen L2 driver, so we set this to openvswitch. We also enable security groups.
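In the configuration file of the Open vSwitch agent (on a standard installation /etc/neutron/plugins/ml2/openvswitch_agent.ini), this amounts to the following settings.

[ovs]
bridge_mappings = physnet:br-phys

[securitygroup]
enable_security_groups = true
firewall_driver = openvswitch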

Once all configuration files have been updated, we should now restart the Neutron OVS agent on each compute node and all services on the controller nodes.

Installation using Ansible scripts

As always, I have put the exact steps to replicate this deployment into a set of Ansible scripts and roles. To run them, use (assuming, as always, that you have gone through the basic setup described in an earlier post)

git clone https://github.com/christianb93/openstack-labs
cd openstack-labs/Lab5
vagrant up
ansible-playbook -i hosts.ini site.yaml

Let us now finally test our installation. We will create a project and a user, a virtual network, bring up two virtual machines, SSH into them and test connectivity.

First, we SSH into the controller node and source the OpenStack credentials for the admin user. We then create a demo project, extract the generated password for the demo user from demo-openrc, create the demo user with this password and assign the member role in the demo project to this user.

vagrant ssh controller
source admin-openrc
openstack project create demo
password=$(awk -F '=' '/OS_PASSWORD=/ { print $2}' demo-openrc)
openstack user create \
   --domain default\
   --project demo\
   --project-domain default\
   --password $password\
   demo
openstack role add \
   --user demo\
   --project demo\
   --project-domain default\
   --user-domain default member

Let us now create our network. This will be a flat network (the only type we have so far), based on the physical network physnet, and we will call this network demo-network.

openstack network create \
--share \
--external \
--provider-physical-network physnet \
--provider-network-type flat \
demo-network

Next, we create a subnet attached to this network with an IP range from 172.16.0.2 to 172.16.0.10.

openstack subnet create --network demo-network \
  --allocation-pool start=172.16.0.2,end=172.16.0.10 \
  --gateway 172.16.0.1 \
  --subnet-range 172.16.0.0/12 demo-subnet

Now we need to create a flavor. A flavor is a bit like a template for a virtual machine and defines the machine size.

openstack flavor create\
  --disk 1\
  --vcpus 1\
  --ram 512 m1.nano

To complete the preparations, we now have to adjust the firewall rules in the default security group to allow for ICMP and SSH traffic and to import the SSH key that we want to use. We execute these commands as the demo user.

source demo-openrc
openstack security group rule create --proto icmp default
openstack security group rule create --proto tcp --dst-port 22 default
openstack keypair create --public-key /home/vagrant/demo-key.pub demo-key

We are now ready to create our first instances. We will bring up two instances, called demo-instance-1 and demo-instance-2, both being attached to our demo network.

source demo-openrc
net_id=$(openstack network show -f value -c id demo-network)
openstack server create \
--flavor m1.nano \
--image cirros \
--nic net-id=$net_id \
--security-group default \
--key-name demo-key \
demo-instance-1
openstack server create \
--flavor m1.nano \
--image cirros \
--nic net-id=$net_id \
--security-group default \
--key-name demo-key \
demo-instance-2

To inspect the status of the instances, use openstack server list, which will also give you the IP address of the instances. To determine the IP address of the first demo instance and verify connectivity using ping, run

ip=$(openstack server show demo-instance-1 \
  -f value -c addresses \
  | awk -F "=" '{print $2}')
ping $ip

Finally, you should be able to SSH into your instances as follows.

ssh -i demo-key cirros@$ip

You can now ping the second instance and verify that it shows up in the ARP cache, demonstrating that there is direct layer 2 connectivity between the instances.
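Inside the first instance, this check could look as follows; the address 172.16.0.4 is just a placeholder for the IP of demo-instance-2 as shown by openstack server list.

# replace 172.16.0.4 with the IP address of demo-instance-2
ping -c 3 172.16.0.4
# display the ARP cache (alternatively, use ip neigh)
arp -n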

We have now successfully completed a full OpenStack installation. In the next post, we will analyse the network setup that Neutron uses on the compute nodes in more detail. Until then, enjoy your running OpenStack playground!
