I want to create 100 virtual servers. They will be used for testing, so they should be easy to create and destroy.
- They must be accessible through SSH from another physical machine (I provide the public ssh-key)
- They must have their own IP-address and be accessible from another physical host as ssh I.P.n.o, e.g. ssh 10.0.0.99 (IPv4 or IPv6, private address space OK; port-forwarding is not – so this may involve setting up a bridge)
- They must have basic UNIX tools installed (preferably a full distro)
- They must have /proc/cpuinfo, a root user, and a netcard (This is probably only relevant if the machine is not fully virtualized)
- Added bonus if they can be made to run an X server that can be connected to remotely (using VNC or similar)
What is the fastest way (wall clock time) to do this given:
- The host system runs Ubuntu 20.04 and has plenty of RAM and CPU
- The LAN has a DHCP-server (it is also OK to use a predefined IP-range)
- I do not care which Free virtualization technology is used (Containerization is also OK if the other requirements are met)
and what are the actual commands I should run/files I should create?
I have the feeling that given the right technology this is a 50 line job that can be set up in minutes.
Those few lines can probably be split into a few bash functions:
install() {
# Install needed software once
}
setup() {
# Configure the virtual servers
}
start() {
# Start the virtual servers
# After this it is possible to do:
# ssh 10.0.0.99
# from another physical server
}
stop() {
# Stop the virtual servers
# After this there are no running processes on the host server
# and after this it is no longer possible to do:
# ssh 10.0.0.99
# from another physical server
# The host server returns to the state before running `start`
}
destroy() {
# Remove the setup
# After this the host server returns to the state before running `setup`
}
Background
For developing GNU Parallel I need an easy way to test running on 100 machines in parallel.
For other projects it would also be handy to be able to create a bunch of virtual machines, test some race conditions and then destroy the machines again.
In other words: This is not for a production environment and security is not an issue.
Docker
Based on @danielleontiev's notes below:
install() {
# Install needed software once
sudo apt -y install docker.io
sudo groupadd docker
sudo usermod -aG docker $USER
# Logout and login if you were not in group 'docker' before
docker run hello-world
}
setup() {
# Configure the virtual servers
mkdir -p my-ubuntu/ ssh/
cp ~/.ssh/id_rsa.pub ssh/
cat ssh/*.pub > my-ubuntu/authorized_keys
cat >my-ubuntu/Dockerfile <<EOF
FROM ubuntu:bionic
RUN apt update && \
apt install -y openssh-server
RUN mkdir /root/.ssh
COPY authorized_keys /root/.ssh/authorized_keys
# Run a blocking command to prevent the container from exiting immediately after start.
CMD service ssh start && tail -f /dev/null
EOF
docker build my-ubuntu -t my-ubuntu
}
start() {
testssh() {
ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@"$1" echo "'$1'" '`uptime`'
}
export -f testssh
# Start the virtual servers
seq 100 |
parallel 'docker run -d --rm --name my-ubuntu-{} my-ubuntu; docker inspect my-ubuntu-{}' |
# After this it is possible to do:
# ssh 10.0.0.99
# from another physical server
perl -nE '/"IPAddress": "(\S+)"/ and not $seen{$1}++ and say $1' |
parallel testssh
docker ps
}
stop() {
# Stop the virtual servers
# After this there are no running processes on the host server
# and after this it is no longer possible to do:
# ssh 10.0.0.99
# from another physical server
# The host server returns to the state before running `start`
seq 100 | parallel docker stop my-ubuntu-{}
docker ps
}
destroy() {
# Remove the setup
# After this the host server returns to the state before running `setup`
rm -rf my-ubuntu/
docker rmi my-ubuntu
}
full() {
install
setup
start
stop
destroy
}
$ time full
real 2m21.611s
user 0m47.337s
sys 0m31.882s
This takes up 7 GB of RAM in total for running 100 virtual servers, so you do not even need plenty of RAM to do this.
It scales up to 1024 servers, after which the docker bridge complains (probably because each bridge device can have at most 1024 ports).
The only thing missing now is making the docker bridge talk to the Ethernet, so the containers are accessible from another physical server.
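One way to close that gap (a sketch, not benchmarked here; the interface name eth0 and the 10.0.0.0/24 range are assumptions that must match your LAN) is a macvlan network, which attaches containers directly to the physical interface:

```shell
# Create a macvlan network bound to the host NIC (assumed: eth0, 10.0.0.0/24).
docker network create -d macvlan \
    --subnet=10.0.0.0/24 --gateway=10.0.0.1 \
    -o parent=eth0 pub_net

# Start a container with a fixed address on the LAN.
docker run -d --rm --network pub_net --ip 10.0.0.99 --name my-ubuntu-99 my-ubuntu

# Caveat: with macvlan the host itself cannot reach its own containers
# directly; other physical machines on the LAN can.
```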
Vagrant
Based on @Martin's notes below:
install() {
# Install needed software once
sudo apt install -y vagrant virtualbox
}
setup() {
# Configure the virtual servers
mkdir -p ssh/
cp ~/.ssh/id_rsa.pub ssh/
cat ssh/*.pub > authorized_keys
cat >Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
config.vm.box = "debian/buster64"
(1..100).each do |i|
config.vm.define "vm%d" % i do |node|
node.vm.hostname = "vm%d" % i
node.vm.network "public_network", ip: "192.168.1.%d" % (100+i)
end
end
config.vm.provision "shell" do |s|
ssh_pub_key = File.readlines("authorized_keys").first.strip
s.inline = <<-SHELL
mkdir -p /root/.ssh
echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys
echo #{ssh_pub_key} >> /root/.ssh/authorized_keys
apt-get update
apt-get install -y parallel
SHELL
end
end
EOF
}
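As a sanity check on the addressing scheme in the Vagrantfile above: vm i gets 192.168.1.(100+i), so vm1 through vm100 occupy 192.168.1.101 to 192.168.1.200, which is why testssh below iterates over that range:

```shell
# Print the vm-name -> IP mapping used by the Vagrantfile above.
for i in 1 50 100; do
    printf 'vm%d -> 192.168.1.%d\n' "$i" $((100 + i))
done
# prints:
# vm1 -> 192.168.1.101
# vm50 -> 192.168.1.150
# vm100 -> 192.168.1.200
```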
start() {
testssh() {
ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@"$1" echo "'$1'" '`uptime`'
}
export -f testssh
# Start the virtual servers
seq 100 | parallel --lb vagrant up vm{}
# After this it is possible to do:
# ssh 192.168.1.111
# from another physical server
parallel testssh ::: 192.168.1.{101..200}
}
stop() {
# Stop the virtual servers
# After this there are no running processes on the host server
# and after this it is no longer possible to do:
# ssh 192.168.1.111
# from another physical server
# The host server returns to the state before running `start`
seq 100 | parallel vagrant halt vm{}
}
destroy() {
# Remove the setup
# After this the host server returns to the state before running `setup`
seq 100 | parallel vagrant destroy -f vm{}
rm -r Vagrantfile .vagrant/
}
full() {
install
setup
start
stop
destroy
}
start gives a lot of warnings:
NOTE: Gem::Specification.default_specifications_dir is deprecated; use Gem.default_specifications_dir instead. It will be removed on or after 2020-02-01.
stop gives these warnings:
NOTE: Gem::Specification.default_specifications_dir is deprecated; use Gem.default_specifications_dir instead. It will be removed on or after 2020-02-01.
Gem::Specification.default_specifications_dir called from /usr/share/rubygems-integration/all/gems/vagrant-2.2.6/lib/vagrant/bundler.rb:428.
NOTE: Gem::Specification.default_specifications_dir is deprecated; use Gem.default_specifications_dir instead. It will be removed on or after 2020-02-01.
Gem::Specification.default_specifications_dir called from /usr/share/rubygems-integration/all/gems/vagrant-2.2.6/lib/vagrant/bundler.rb:428.
/usr/share/rubygems-integration/all/gems/vagrant-2.2.6/plugins/kernel_v2/config/vm.rb:354: warning: Using the last argument as keyword parameters is deprecated; maybe ** should be added to the call
/usr/share/rubygems-integration/all/gems/vagrant-2.2.6/plugins/kernel_v2/config/vm_provisioner.rb:92: warning: The called method `add_config' is defined here
/usr/share/rubygems-integration/all/gems/vagrant-2.2.6/lib/vagrant/errors.rb:103: warning: Using the last argument as keyword parameters is deprecated; maybe ** should be added to the call
/usr/share/rubygems-integration/all/gems/i18n-1.8.2/lib/i18n.rb:195: warning: The called method `t' is defined here
Each virtual machine takes up 0.5 GB of RAM on the host system.
It is much slower to start than the Docker machines above. The big difference is that the Vagrant-machines do not have to run the same kernel as the host, but are complete virtual machines.
Best Answer
I think docker meets your requirements.
1) Install docker (https://docs.docker.com/engine/install/). Make sure you are done with the Linux post-installation steps (https://docs.docker.com/engine/install/linux-postinstall/).
2) I assume you have the following directory structure: a directory containing id_rsa.pub (your public key) and the Dockerfile we will discuss below.
3) First, we are going to build a docker image. It is like a template for the containers we are going to run; each container is something like a materialization of that image.
4) To build the image we need a template: the Dockerfile.
FROM ubuntu:bionic defines our base image. You can find base images for Arch, Debian, Alpine, Ubuntu, etc. on hub.docker.com.
The apt install part installs the ssh server.
COPY from to copies our public key to the place where it will live in the container.
Add more RUN statements to do additional things: install software, create files, etc.
5) docker build my-ubuntu -t my-ubuntu builds the image.
6) Let's run my-ubuntu (once again, my-ubuntu is the name of the image). Starting a container named my-ubuntu-1, derived from the my-ubuntu image:
docker run -d --rm --name my-ubuntu-1 my-ubuntu
Options:
-d: daemonize, so the container runs in the background
--rm: erase the container after it stops. This can be important: when you deal with a lot of containers they can quickly pollute your HDD.
--name: the name for the container
my-ubuntu: the image we start from
7) The container is running; docker ps can prove this.
8) To execute a command in the container, run docker exec -it my-ubuntu-1 bash to get into the container's bash. Any command can be provided this way.
9) If running commands that way is not enough, do docker inspect my-ubuntu-1 and grep the IPAddress field. For me it is 172.17.0.2.
10) To stop the container: docker stop my-ubuntu-1
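Instead of grepping the full docker inspect dump for IPAddress, docker's standard -f flag takes a Go template that extracts just the address (for a container on the default bridge network):

```shell
# Print only the container's IP address (default bridge network).
docker inspect -f '{{.NetworkSettings.IPAddress}}' my-ubuntu-1
```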
11) Now it is possible to run 100 containers, as docker ps shows.
I can do e.g. docker inspect my-ubuntu-15, get its IP, and connect to it over ssh or use docker exec.
It is possible to ping containers from other containers (install iputils-ping to reproduce).
N.B. Running containers from bash is a quick solution. If you would like a scalable approach, consider using kubernetes or swarm.
P.S. Useful commands:
docker ps
docker stats
docker container ls
docker image ls
docker stop $(docker ps -aq) (stops all running containers)
Also, follow the basics from docs.docker.com - it is an hour well spent for a better experience working with containers.
Additional:
The base image in the example is really minimal. It does not have a DE or even xorg. You could install them manually (adding packages to the RUN apt install ... section) or use an image that already has the software you need. Quick googling gives this (https://github.com/fcwu/docker-ubuntu-vnc-desktop). I have never tried it, but I think it should work. If you definitely need VNC access, I could play around a bit and add the info to this answer.
Exposing to the local network:
This one may be tricky. I am sure it can be done with some obscure port forwarding, but the straightforward solution is to change the running script as follows:
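The changed script itself is not included in this extract. One hypothetical variant (my reconstruction, not necessarily what the answer proposed; note the question explicitly ruled out port-forwarding) publishes each container's SSH port on a distinct host port:

```shell
# Hypothetical reconstruction (the original script is not in this extract):
# map container N's port 22 to host port 2200+N.
seq 100 | parallel 'docker run -d --rm -p $((2200 + {})):22 --name my-ubuntu-{} my-ubuntu'
# From another machine:  ssh -p 2201 root@<host-ip>   # reaches my-ubuntu-1
```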
After that you would be able to access your containers with the host machine IP.