I have a set of servers and I would like to install OpenStack. How can I install OpenStack with Metal-as-a-Service (MAAS)?
Ubuntu – How to use MAAS to prepare to install OpenStack
Related Solutions
With the latest MAAS and Juju releases (available for 12.04 from the Cloud Tools archive), it's possible to mix LXC containers with physical servers to support your OpenStack deployment on a smaller number of servers. It's possible to run the following charms in LXC containers:
- cinder (so long as you are using a Ceph backend)
- glance
- mysql
- rabbitmq-server
- nova-cloud-controller
- swift-proxy
- keystone
Once you have deployed the charms that need physical servers (nova-compute, quantum-gateway, ceph and swift-storage), you can add LXC containers to specific machines:
juju add-machine lxc:1
The example above will create an LXC container on physical machine 1.
You can then target a charm to a specific LXC container - for example:
juju deploy --to 1/lxc/0 nova-cloud-controller
This deploys nova-cloud-controller to the first LXC container on physical machine 1.
We have some work currently pending final testing that will allow you to deploy ceph/swift-storage and nova-compute on the same physical machines, allowing you to have shared storage/compute servers within your deployment.
Preparing MAAS for Juju and OpenStack using Simplestreams
When Juju bootstraps a cloud, it needs two critical pieces of information:
- The uuid of the image to use when starting new compute instances.
- The URL from which to download the correct version of a tools tarball.
This necessary information is stored in a JSON metadata format called "simplestreams". For supported public cloud services such as Amazon Web Services, HP Cloud, Azure, etc., no action is required by the end user. However, those setting up a private cloud, or who want to change how things work (e.g. use a different Ubuntu image), can create their own metadata, after understanding a bit about how it works.
The simplestreams format is used to describe related items in a structured fashion. See the Launchpad project lp:simplestreams. Below we discuss how Juju determines which metadata to use, and how to create your own images and tools and have Juju use them instead of the defaults.
Basic Workflow
Whether for images or tools, Juju uses a search path to try to find suitable metadata. The path components (in order of lookup) are:
- User supplied location (specified by the tools-metadata-url or image-metadata-url config settings).
- The environment's cloud storage.
- Provider specific locations (e.g. the keystone endpoint if on OpenStack).
- A web location with metadata for supported public clouds - https://streams.canonical.com
Metadata may be inline signed, or unsigned. A signed metadata file is indicated by the '.sjson' extension. Each location in the path is first searched for signed metadata; if none is found, unsigned metadata is attempted before moving on to the next path location.
Juju ships with public keys used to validate the integrity of image and tools metadata obtained from https://streams.canonical.com. So out of the box, Juju will "Just Work" with any supported public cloud, using signed metadata. Setting up metadata for a private (e.g. OpenStack) cloud requires metadata to be generated using tools which ship with Juju.
Image Metadata Contents
Image metadata uses a simplestreams content type of "image-ids". The product id is formed as follows:
com.ubuntu.cloud:server:<series_version>:<arch>
For example:
com.ubuntu.cloud:server:14.04:amd64
Non-released images (e.g. beta, daily, etc.) have product ids like:
com.ubuntu.cloud.daily:server:13.10:amd64
The metadata index and product files are required to be in the following directory tree (relative to the URL associated with each path component):
<path_url>
|-streams
|-v1
|-index.(s)json
|-product-foo.(s)json
|-product-bar.(s)json
The index file must be called "index.(s)json" (sjson for signed). The various product files are named according to the Path values contained in the index file.
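As a rough illustration only, an index file for custom image metadata might look like the following sketch (the product name, path and values here are hypothetical; real files should be generated with the juju metadata commands described below):
{
    "index": {
        "com.ubuntu.cloud:custom": {
            "updated": "Tue, 01 Apr 2014 00:00:00 +0000",
            "format": "products:1.0",
            "datatype": "image-ids",
            "path": "streams/v1/com.ubuntu.cloud:custom.json",
            "products": ["com.ubuntu.cloud:server:14.04:amd64"]
        }
    },
    "updated": "Tue, 01 Apr 2014 00:00:00 +0000",
    "format": "index:1.0"
}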
Tools Metadata Contents
Tools metadata uses a simplestreams content type of "content-download". The product id is formed as follows:
"com.ubuntu.juju:<series_version>:<arch>"
For example:
"com.ubuntu.juju:12.04:amd64"
The metadata index and product files are required to be in the following directory tree (relative to the URL associated with each path component). In addition, the tools tarballs which Juju needs to download are also expected to be hosted in this tree.
|-streams
| |-v1
| |-index.(s)json
| |-product-foo.(s)json
| |-product-bar.(s)json
|
|-releases
|-tools-abc.tar.gz
|-tools-def.tar.gz
|-tools-xyz.tar.gz
The index file must be called "index.(s)json" (sjson for signed). The product file and tools tarball name(s) match whatever is in the index/product files.
Configuration
For supported public clouds, no extra configuration is required; things work out-of-the-box. However, for testing purposes, or for non-supported cloud deployments, Juju needs to know where to find the tools and which image to run. Even for supported public clouds where all required metadata is available, the user can put their own metadata in the search path to override what is provided by the cloud.
User specified URLs
These are initially specified in the .juju/environments.yaml file (and then subsequently copied to the jenv file when the environment is bootstrapped). For images, use image-metadata-url; for tools, use tools-metadata-url. The URLs can point to a world readable container/bucket in the cloud, an address served by an http server, or even a shared directory which is accessible by all node instances running in the cloud.
Assume an Apache http server with base URL https://juju-metadata, providing access to information at <base>/images and <base>/tools. The Juju environment yaml file could have the following entries (one or both):
tools-metadata-url: https://juju-metadata/tools
image-metadata-url: https://juju-metadata/images
The required files in each location are as per the directory layout described earlier. For a shared directory, use a URL of the form file:///sharedpath.
Cloud storage
If no matching metadata is found at the user specified URLs, the environment's cloud storage is searched. No user configuration is required here - all Juju environments are set up with cloud storage, which is used to store state information, charms etc. Cloud storage setup is provider dependent; for Amazon and OpenStack clouds, the storage is defined by the "control-bucket" value; for Azure, the "storage-account-name" value is relevant.
The (optional) directory structure inside the cloud storage is as follows:
|-tools
| |-streams
| |-v1
| |-releases
|
|-images
|-streams
|-v1
Of course, if only custom image metadata is required, the tools directory will not be required, and vice versa.
Note that if juju bootstrap is run with the --upload-tools option, the tools and metadata are placed according to the above structure, which is why the tools are then available for Juju to use.
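For example:
juju bootstrap --upload-tools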
Provider specific storage
Providers may allow additional locations to search for metadata and tools. For OpenStack, Keystone endpoints may be created by the cloud administrator. These are defined as follows:
- juju-tools: the <path_url> value as described above in Tools Metadata Contents
- product-streams: the <path_url> value as described above in Image Metadata Contents
Other providers may similarly be able to specify locations, though the implementation will vary.
Central web location (https://streams.canonical.com)
This is the default location used to search for image and tools metadata and is used if no matches are found earlier in any of the above locations. No user configuration is required.
Private Clouds
There are two main issues when deploying a private cloud:
- Image ids will be specific to the cloud.
- Often, outside internet access is blocked
Issue 1 means that image id metadata needs to be generated and made available.
Issue 2 means that tools need to be mirrored locally to make them accessible.
Juju tools exist to help with generating and validating image and tools metadata. For tools, it is often easiest to just mirror https://streams.canonical.com/tools. However, image metadata cannot simply be mirrored because the image ids are taken from the cloud storage provider, so it needs to be generated and validated using the commands described below.
The available Juju metadata tools can be seen by using the help command:
juju help metadata
The overall workflow is:
- Generate image metadata
- Copy image metadata to somewhere in the metadata search path
- Optionally, mirror tools to somewhere in the metadata search path
- Optionally, configure tools-metadata-url and/or image-metadata-url
Image metadata
Generate image metadata using:
juju metadata generate-image -d <metadata_dir>
As a minimum, the above command needs to know the image id to use and a directory in which to write the files.
Other required parameters like region, series, architecture etc. are taken from the current Juju environment (or an environment specified with the -e option). These parameters can also be overridden on the command line.
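For example, a single invocation might look like this (the flag names shown follow the juju 1.x metadata plugin; run juju help metadata generate-image to confirm the options available in your version, and treat the values as placeholders):
juju metadata generate-image -d <metadata_dir> -i <image_id> -s trusty -r <region> -u <cloud_endpoint_url> -a amd64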
The image metadata command can be run multiple times with different regions, series and architectures, and it will keep adding to the metadata files. Once all required image ids have been added, the index and product json files can be uploaded to a location in the Juju metadata search path. As per the Configuration section, this may be somewhere specified by the image-metadata-url setting or the cloud's storage etc.
Examples:
image-metadata-url:
- upload the contents of <metadata_dir> to http://somelocation
- set image-metadata-url to http://somelocation/images
Cloud storage:
- upload the contents of <metadata_dir> directly to the environment's cloud storage
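Image metadata can then be validated to ensure Juju will find it:
juju metadata validate-images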
If run without parameters, the validation command will take all required details from the current Juju environment (or as specified by -e) and output the image id it would use to spin up an instance. Alternatively, series, region, architecture etc. can be specified on the command line to override the values in the environment config.
Tools metadata
Generally, tools and related metadata are mirrored from https://streams.canonical.com/tools. However, it is possible to manually generate metadata for a custom built tools tarball.
First, create a tarball of the relevant tools and place in a directory structured like this:
<tools_dir>/tools/releases/
Now generate relevant metadata for the tools by running the command:
juju metadata generate-tools -d <tools_dir>
Finally, the contents of <tools_dir> can be uploaded to a location in the Juju metadata search path. As per the Configuration section, this may be somewhere specified by the tools-metadata-url setting or the cloud's storage path settings etc.
Examples:
tools-metadata-url:
- upload the contents of <tools_dir> to http://somelocation
- set tools-metadata-url to http://somelocation/tools
Cloud storage:
- upload the contents of <tools_dir> directly to the environment's cloud storage
As with image metadata, the validation command is used to ensure tools are available for Juju to use:
juju metadata validate-tools
The same comments apply. Run the validation tool without parameters to use details from the Juju environment, or override values as required on the command line. See juju help metadata validate-tools for more details.
Best Answer
Scope
This document provides instructions on how to install the Metal As A Service (MAAS) software.
Introducing MAAS
Metal as a Service – MAAS – lets you treat physical servers like virtual machines in the cloud. Rather than having to manage each server individually, MAAS turns your bare metal into an elastic cloud-like resource.
What does that mean in practice? Tell MAAS about the machines you want it to manage and it will boot them, check that the hardware is okay, and have them waiting for when you need them. You can then pull nodes up, tear them down and redeploy them at will, just as you can with virtual machines in the cloud.
When you’re ready to deploy a service, MAAS gives Juju the nodes it needs to power that service. It’s as simple as that: no need to manually provision, check and, afterwards, clean-up. As your needs change, you can easily scale services up or down. Need more power for your Hadoop cluster for a few hours? Simply tear down one of your Nova compute nodes and redeploy it to Hadoop. When you’re done, it’s just as easy to give the node back to Nova.
Installing MAAS from the Cloud Archive
The Ubuntu Cloud Archive is a repository made especially to provide users with the most up to date, stable versions of MAAS, Juju and other tools. It is highly recommended to keep your software up to date:
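For example, on Ubuntu 12.04 the Cloud Tools pocket can be enabled as follows (on later Ubuntu releases the MAAS and Juju packages are available directly from the main archive):
sudo add-apt-repository cloud-archive:tools
sudo apt-get update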
There are several packages that comprise a MAAS install. These are:
- maas-region-controller: the region controller, including the web UI, API and database
- maas-cluster-controller: the cluster controller, which manages a group of nodes and their boot images
- maas-dhcp and maas-dns: MAAS-configured DHCP and DNS services
The DHCP setup is critical for the correct PXE booting of nodes.
As a convenience, there is also a maas metapackage, which will install all of these components. If you need to separate these services or want to deploy an additional cluster controller, you should install the corresponding packages individually.
Installing the packages
Running the command:
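sudo apt-get install maas maas-dhcp maas-dns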
...will initiate installation of all the components of MAAS. The maas-dhcp and maas-dns packages should be installed by default.
Once the installation is complete, the web-based interface for MAAS will start. In many cases, your MAAS controller will have several NICs. By default, all the services will bind to the first discovered interface (usually eth0).
Before you login to the server for the first time, you should create a superuser account.
Create a superuser account
Once MAAS is installed, you'll need to create an administrator account:
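sudo maas createadmin    # on some MAAS releases the command is 'maas-region-admin createadmin'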
Running this command will prompt for a username, an email address and a password for the admin user. You may also use a different username for your administrator account, but "root" is a common convention and easy to remember.
You can run this command again for any further administrator accounts you may wish to create, but you need at least one.
Import the boot images
MAAS will check for and download new Ubuntu images once a week. However, you'll need to download them manually the first time. To do this you should connect to the MAAS web interface using a web browser. Use the URL:
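http://<your.maas.server>/MAAS/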
You should substitute in the IP address of the server where you have installed the MAAS software. If there are several possible networks, by default it will be on whichever one is assigned to the eth0 device.
You should see a login screen like this:
Enter the username and password you specified for the admin account. When you have successfully logged in you should see the main MAAS page:
Either click on the link displayed in the warning at the top, or on the 'Clusters' tab in the menu to get to the cluster configuration screen. The initial cluster is automatically added to MAAS when you install it, but it does not yet have any associated images for booting nodes. Click on the import button to begin the download of suitable boot images.
Importing the boot images can take some time, depending on the available network connection. This page does not refresh dynamically, so refresh it manually to determine when the boot images have been imported.
Login to the server
To check that everything is working properly, you should try logging in to the server now. Both of the error messages should have gone (it can take a few minutes for the boot image files to register), and you can see that there are currently 0 nodes attached to this controller.
Configure switches on the network
Some switches use Spanning-Tree Protocol (STP) to negotiate a loop-free path through a root bridge. While the switch is determining the topology, it can make each port wait up to 50 seconds before data is allowed to be sent on the port. This delay can in turn cause problems with protocols such as PXE, DHCP and DNS, of which MAAS makes extensive use.
To alleviate this problem, you should enable Portfast for Cisco switches or its equivalent on other vendor equipment, which enables the ports to come up almost immediately.
Add an additional cluster
Whilst it is certainly possible to run MAAS with just one cluster controller for all the nodes, in the interests of easier maintenance, upgrades and stability, it is desirable to have at least two operational clusters.
Each cluster needs a controller node. Install Ubuntu on this node and then follow a similar setup procedure to install the cluster controller software:
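sudo apt-get install maas-cluster-controller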
Once the cluster software is installed, it is useful to run:
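sudo dpkg-reconfigure maas-cluster-controller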
This will enable you to make sure the cluster controller agent is pointed at the correct address for the MAAS master controller.
Configure additional Cluster Controller(s)
Cluster acceptance
When you install your first cluster controller on the same system as the region controller, it will be automatically accepted by default (but not yet configured, see below). Any other cluster controllers you set up will show up in the user interface as "pending", until you manually accept them into MAAS.
To accept a cluster controller, click on the "Clusters" tab at the top of the MAAS web interface:
You should see that the text at the top of the page indicates a pending cluster. Click on that text to get to the Cluster acceptance screen.
Here you can change the cluster’s name as it appears in the UI, its DNS zone, and its status. Accepting the cluster changes its status from “pending” to “accepted.”
Now that the cluster controller is accepted, you can configure one or more of its network interfaces to be managed by MAAS. This will enable the cluster controller to manage nodes attached to those networks. The next section explains how to do this and what choices are to be made.
Cluster Configuration
MAAS automatically recognises the network interfaces on each cluster controller. Some of these will be connected to networks where you want to manage nodes. We recommend letting your cluster controller act as a DHCP server for these networks, by configuring those interfaces in the MAAS user interface.
As an example, we will configure the cluster controller to manage a network on interface eth0. Click on the edit icon for eth0, which takes us to this page:
Here you can select to what extent you want the cluster controller to manage the network:
- DHCP only
- DHCP and DNS
- Unmanaged
You cannot have DNS management without DHCP management because MAAS relies on its own DHCP server’s leases file to work out the IP address of nodes in the cluster. If you set the interface to be managed, you now need to provide all of the usual DHCP details in the input fields below. Once done, click “Save interface”. The cluster controller will now be able to boot nodes on this network.
There is also an option to leave the network unmanaged. Use this for networks where you don't want to manage any nodes, or where you do want to manage nodes but want to use an existing DHCP service on your network.
A single cluster controller can manage more than one network, each from a different network interface on the cluster-controller server. This may help you scale your cluster to larger numbers of nodes, or it may be a requirement of your network architecture.
Enlisting nodes
Now that the MAAS controller is running, we need to make the nodes aware of MAAS and vice versa. With MAAS controlling DHCP and nodes capable of PXE booting, this is straightforward.
Automatic Discovery
With nodes set to boot from a PXE image, they will start, look for a DHCP server, receive the PXE boot details, boot the image, contact the MAAS server and shut down.
During this process, the MAAS server will be passed information about the node, including the architecture, MAC address and other details, which will be stored in the database of nodes. You can accept and commission the nodes via the web interface. When the nodes have been accepted, the selected series of Ubuntu will be installed.
You may also accept and commission all nodes from the command line. This requires that you first login with your API key and then run the accept-all command, for example (the profile name "maas" here is arbitrary):
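maas-cli login maas http://<your.maas.server>/MAAS/api/1.0 <api-key>
maas-cli maas nodes accept-all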
Once commissioned, the node's status will be updated to "Ready". You can check the results of the commissioning scripts by clicking on the node name and then clicking on the link below the heading "Commissioning output". The screen will show a list of files and their result; you can further examine the output by clicking on the status of any of the files.
Manually adding nodes
If your nodes are not capable of booting from PXE images, they can be manually registered with MAAS. On the main web interface screen, click on the "Add Node" button:
This will load a new page where you can manually enter details about the node, including its MAC address. This is used to identify the node when it contacts the DHCP server.
Power management
MAAS supports several types of power management. To configure power management, you should click on an individual node entry, then click on the "Edit" button. The power management type should be selected from the drop down list, and the appropriate power management details added.
If you have a large number of nodes, it should be possible to script this process using the MAAS CLI.
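As a rough sketch only, assuming the MAAS 1.x maas-cli tool, a logged-in profile named "maas" and the IPMI power type (the power parameter names vary by power type and MAAS version, so verify them against your own installation):
# illustrative only; parameter names depend on the selected power type
maas-cli maas node update <system_id> power_type=ipmi \
    power_parameters_power_address=<bmc_address> \
    power_parameters_power_user=<bmc_user> \
    power_parameters_power_pass=<bmc_password>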
Without power management, MAAS will be unable to power on nodes when they are required.