
Jesse's Software Engineering Blog

Apr 22

Jesse

Vagrant Virtual Machine Cluster

With virtualization and cloud development taking over the industry, systems often need to be developed against multiple servers. Instead of spinning up new server instances or paying for more cloud capacity, virtual machines (VMs) can be used to develop locally in a clustered environment. A shared development environment is also extremely useful when working with a remotely distributed team. Vagrant offers a very simple approach to spinning up VM clusters, as well as a foundation for sharing VM environments. This article demonstrates how to get a Vagrant cluster up and running with VirtualBox.

Install

Fedora 20 Install:

# make sure the kernel version is up to date and reboot
sudo yum -y update
reboot

# install all of the dependencies
sudo yum install -y binutils gcc make patch libgomp glibc-headers glibc-devel kernel-headers kernel-devel dkms VirtualBox-4.3
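# NOTE: the VirtualBox-4.3 package name above assumes Oracle's VirtualBox yum
# repository has already been added (see virtualbox.org for the Fedora repo file)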

# rebuild kernel modules and add the current user to the vboxusers group
sudo service vboxdrv setup
sudo usermod -a -G vboxusers <user_name>

# test VirtualBox; this should open the GUI
VirtualBox

# download Vagrant rpm file, http://www.vagrantup.com/downloads.html, then install
sudo yum install vagrant_1.6.3_x86_64.rpm

# verify
vagrant --version

# install guest box plugin
vagrant plugin install vagrant-vbguest

Ubuntu 14 Install:

# make sure all repos are available
sudo add-apt-repository main
sudo add-apt-repository universe
sudo add-apt-repository multiverse

sudo apt-get update

sudo apt-get install virtualbox

# download Vagrant deb file, http://www.vagrantup.com/downloads.html, then install
sudo dpkg -i vagrant_1.6.3_x86_64.deb

# install guest box plugin
vagrant plugin install vagrant-vbguest
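
As with the Fedora install, a quick check confirms that both tools ended up on the PATH:

# verify the installs
vagrant --version
VBoxManage --version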

Single Node

Now that VirtualBox and Vagrant are installed, a VM can be brought up. Before a VM can be started, a Vagrant “box”, or base OS image, needs to be installed. Vagrant Cloud offers a list of available boxes. Vagrant also allows custom boxes to be built, but with provisioning and Puppet/Chef support there is little need, since all dependencies can be installed into a stock box in easier ways.

mkdir -p ~/Vagrant/single
cd ~/Vagrant/single

Vagrant box install:

vagrant box add chef/centos-6.5
vagrant init chef/centos-6.5

Or in a single command (the box is then downloaded automatically on the first vagrant up):

vagrant init chef/centos-6.5

After the box has been installed and initialized there will be a Vagrantfile containing all of the configuration for this particular VM. The Vagrantfile is all that is needed to spin up the VM: in a development environment it would be shared, or added to version control, so that all team members can spin up the exact same environment. The Vagrant documentation lists the available Vagrantfile configuration options.
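
As an illustration only (the box name matches the one used above; the hostname, synced-folder paths, and inline provisioning command are made-up values), a Vagrantfile trimmed down to a few commonly used settings might look like this:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  # base box to build the VM from
  config.vm.box = "chef/centos-6.5"

  # hostname inside the guest (example value)
  config.vm.hostname = "dev-box"

  # share a host directory with the guest (example paths)
  config.vm.synced_folder "./src", "/home/vagrant/src"

  # run a simple shell provisioner on first boot (example command)
  config.vm.provision "shell", inline: "yum install -y git"
end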

To start the VM simply run the up command and SSH in:

vagrant up
vagrant ssh

NOTE: If you run into issues connecting with PuTTY, converting Vagrant's private key to PuTTY's .ppk format with PuTTYgen usually helps.

To stop the VM and free up resources on the host (it can be reactivated with the up command):

vagrant halt

To remove the VM (it can be re-created from the Vagrantfile with the up command):

vagrant destroy
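
A few other day-to-day lifecycle commands are worth having at hand:

# show the state of the VM(s) defined by the Vagrantfile
vagrant status

# pause the VM in memory, and resume it later
vagrant suspend
vagrant resume

# restart the VM and re-read the Vagrantfile (e.g. after editing it)
vagrant reload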

Multiple Nodes

There are different ways to set up node clusters; this example uses a private network. Since my home network uses 192.x.x.x addresses, I will use 10.x.x.x for the Vagrant boxes. The Vagrantfile can be created from scratch in a text editor, as opposed to using the init command, as long as the specified Vagrant box has already been installed. To see a list of installed boxes:

vagrant box list
mkdir -p ~/Vagrant/cluster
# create the Vagrantfile
vi ~/Vagrant/cluster/Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|

  config.vm.define "master" do |master|
    master.vm.box = "chef/centos-6.5"
    master.vm.network "private_network", ip: "10.2.2.2"
  end

  config.vm.define "slave" do |slave|
    slave.vm.box = "chef/centos-6.5"
    slave.vm.network "private_network", ip: "10.2.2.4"
  end

end

Note that the configuration defines two VMs; adding the second VM definition is all that is needed for the second VM to be created. The private-network IPs are also set manually for each VM. Running vagrant up will bring up both VMs, and each can then be reached via SSH:

vagrant up
vagrant ssh master
vagrant ssh slave
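
With multiple machines defined, most Vagrant commands also accept a machine name, so individual VMs can be managed on their own:

# bring up or halt a single VM by name
vagrant up master
vagrant halt slave

# show the state of every machine defined in the Vagrantfile
vagrant status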

The output from the up command includes a line showing which forwarded port each box uses for SSH:

...
slave: 22 => 2201 (adapter 1)
...

So, alternatively, the VMs can be accessed directly over those ports (the default password is ‘vagrant’):

ssh vagrant@localhost -p2200
ssh vagrant@localhost -p2201
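
Rather than scraping the ports out of the up output, vagrant ssh-config prints the full SSH settings for each machine, and its output can be appended to ~/.ssh/config so the VMs can be reached by name:

# print SSH connection details for the master VM
vagrant ssh-config master

# optionally append them to the SSH client config and connect by host alias
vagrant ssh-config master >> ~/.ssh/config
ssh master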

SSH onto both VMs and verify the IP addresses and that the other server is ping’able (from the master, ping the slave at 10.2.2.4, and vice versa):

vagrant ssh master
ifconfig
ping 10.2.2.4

To make SSH’ing between the various VMs and the host easier, copy SSH public keys (the default password is ‘vagrant’) from the host to the VMs and between the VMs:

ssh-keygen
ssh-copy-id -i ~/.ssh/id_rsa.pub vagrant@10.2.2.2
ssh vagrant@10.2.2.2
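
The same steps can be repeated from inside a VM to reach its peer, for example from the master to the slave (using the slave's private IP from the Vagrantfile above):

# from the host, log in to the master
vagrant ssh master

# inside the master VM: generate a key and copy it to the slave
ssh-keygen
ssh-copy-id -i ~/.ssh/id_rsa.pub vagrant@10.2.2.4
ssh vagrant@10.2.2.4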

Since these VMs use VirtualBox, provider-specific settings can control how much RAM and how many CPUs each VM gets by adding the following to the Vagrantfile:

config.vm.provider "virtualbox" do |v|
  v.memory = 1024
  v.cpus = 1
end
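
The block above applies to every machine in the Vagrantfile; if, say, the master should get more resources than the slave, the same provider block can instead be nested inside an individual machine definition (the memory and CPU values below are just examples):

config.vm.define "master" do |master|
  master.vm.box = "chef/centos-6.5"
  master.vm.network "private_network", ip: "10.2.2.2"

  # per-machine provider override (example values)
  master.vm.provider "virtualbox" do |v|
    v.memory = 2048
    v.cpus = 2
  end
end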

Conclusion

And that is how a local virtual cluster can be set up using Vagrant. I recommend reading through all the Vagrant Docs, which are well written and informative. It’s important to understand provisioning, Puppet/Chef integration, synced folders, and the various networking options.
