How to use Ansible for Vagrant and production
May 13th, 2014
Warning: this is not a getting-started guide for Ansible; it presumes some knowledge of writing tasks and roles.
Since we started using Vagrant boxes in our projects, we've also been experimenting with provisioning tools. During this exploration I've personally tested Puppet, Chef, Saltstack and Ansible. I found Ansible an absolute pleasure to work with (with Saltstack in second place), so we decided to go with Ansible as our standard.
However, this post is not about picking the best provisioning tool - I am no expert, and we don't have hundreds of production servers to provision. Not even tens. But we do have a few, and so this post is about how to write your Ansible scripts for use with Vagrant as well as production servers.
First, let's look at some of the mistakes we made early on.
Don't create a single huge task list.
This is not cool on a lot of levels, most importantly re-usability. The second project I wanted to provision had so many overlapping tasks that I felt dirty after copy-pasting. You need roles. Also, there are some things very different on Vagrant boxes compared to production, and roles are a better way to make that distinction than tasks.
Don't create roles mixing Vagrant-specific tasks with general tasks
(this is the same mistake as above on a smaller scale)
For example: I initially created a role for "nginx", and in this role I placed the nginx config file, to be copied to the guest. These config files are almost certainly different for Vagrant and production, so again, too much attention was spent on just provisioning the Vagrant box, leaving us with a script we could not use on production machines.
Also, I added tasks to change file privileges on certain folders - so the www-data user could do stuff on the /vagrant share. These tasks should not be in a general-use Nginx role.
Don't create a variable called "vagrant"
This mistake is more subtle. It is certainly a possibility to use this technique (a variable indicating a local environment set to true/false), but I prefer the solution described below. The Vagrant box is nothing more than just another host, even though it has "special needs". We treat it as such, and use the Ansible groups to make the distinction.
So, enough about mistakes.
A separate group for Vagrant
Below is an example inventory file containing a generic group called "servers" and two child groups; one called "production" for obvious reasons, and one called "vagrant". (this is just an example, you could go with any other group name that you prefer)
```ini
[vagrant]
some.local.vagrant.box.ip-or-name

[production]
your.production-server.com

[servers:children]
vagrant
production
```
Having the vagrant group gives you at least three ways to limit Ansible tasks so they run only on the Vagrant host(s):
In your Vagrantfile
You can set ansible.limit to "vagrant" in your Vagrantfile (so you never accidentally provision a production machine when you run "vagrant up").
```ruby
config_dev.vm.provision :ansible do |ansible|
  ...
  ansible.limit = "vagrant"
  ...
end
```
I have to add here that it's probably a better idea to make two separate inventory files: one that contains the Vagrant host(s) and one that contains the production host(s). That is an even better way to prevent accidentally provisioning hosts you did not mean to touch.
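With separate inventory files (the paths below are hypothetical), provisioning an environment becomes an explicit choice on the command line:

```shell
# Hypothetical layout: one inventory file per environment.
# Provision only the Vagrant box(es):
ansible-playbook -i inventories/vagrant playbook.yml

# Provision production, deliberately:
ansible-playbook -i inventories/production playbook.yml
```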
For an entire play
Vagrant lets you define an Ansible playbook to run. This playbook is just a yaml file, and it may contain more than one play. Each play has a "hosts" setting that determines which hosts should run that particular play. In this case, we add a special play just for the vagrant box(es) in the playbook.yml:
```yaml
- name: Provision tasks
  hosts: all
  roles:
    ...

- name: Vagrant-only tasks
  hosts: vagrant
  roles:
    ...
```
--> This leads to a solution for the file-privileges mistake I mentioned before. You can do the Nginx install on all hosts in the first play, and then add the vagrant specific tasks in the second play.
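As a sketch (the path and task names are made up), that second play could carry exactly those share-related tasks:

```yaml
# Hypothetical vagrant-only play: runs only on hosts in the vagrant group
- name: Vagrant-only tasks
  hosts: vagrant
  tasks:
    - name: Let www-data write to the shared folder
      file: path=/vagrant/app/cache owner=www-data state=directory
```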
In a specific task
Finally, you can limit individual tasks with the "when" statement. Ansible provides a variable called group_names, which during a run contains all the groups the current host belongs to. You can check whether it contains your group with "in" or "not in":
```yaml
- name: Example task only run for hosts in the group vagrant
  debug: msg="I'm a vagrant box"
  when: "'vagrant' in group_names"

- name: Example task only run for hosts *not* in the group vagrant
  debug: msg="I'm not in the vagrant group.. sad panda :("
  when: "'vagrant' not in group_names"
```
Use group vars to re-use roles
Even if you don't make separate plays, having the vagrant group can be of benefit. Ansible has the ability to set variables per group, just by adding a "group_vars" folder next to your playbook.yml file. Inside the group_vars folder, you can add a file called "all", or a file named after a specific group - e.g. vagrant. (These are also yaml files, even though they don't have the .yml extension.)
So if the only difference between Vagrant and production is a config file, you can add variables to your tasks/templates that set the right values for the right environment.
--> This leads to a solution for the nginx mistake I mentioned before. When you have a role like "nginx" with config files that should be used for both Vagrant and production, make sure your config files are Ansible templates, and make sure all the differences between the Vagrant and production environments are variables inside those templates, e.g. nginx_project_root: "/vagrant" versus nginx_project_root: "/var/www/project/web".
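For example, a fragment of a hypothetical templates/nginx.conf.j2 could look like this, with the environment-specific value pulled from a variable:

```jinja
server {
    listen 80;
    # nginx_project_root comes from group_vars, so this one template
    # works for both Vagrant and production
    root {{ nginx_project_root }};
}
```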
An example directory structure with group_vars files:
```
/ansible
    /group_vars
        /all          <-- contains variables for all hosts
        /vagrant      <-- contains variables only for vagrant hosts
        /production   <-- contains variables only for production hosts
    /hosts
    /playbook.yml
```
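The group_vars files themselves are plain yaml key/value files; hypothetical contents could be:

```yaml
# group_vars/vagrant
nginx_project_root: "/vagrant"

# group_vars/production
nginx_project_root: "/var/www/project/web"
```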
Another benefit of this approach is the separation of the vars files into vagrant/production.
You will probably want to secure the variables you use in production. They could contain API keys, SSL certificates and other highly secret stuff. The Vagrant box's passwords are probably not secret (or generated automatically on first run, which is even better). Ansible has a tool to encrypt the vars files, called Ansible Vault, but any time you run a playbook that includes a "vaulted" file, you will need to add the --ask-vault-pass command-line parameter. And you don't want to hand the vault password to every developer who works on the project.
Because of the separation of vagrant/production vars files, you are free to encrypt only the production file. Ansible will not ask for your vault password if you --limit the hosts to the "vagrant" group.
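In practice that could look like this (assuming the directory structure above):

```shell
# Encrypt only the production vars file
ansible-vault encrypt group_vars/production

# No vault password needed as long as you stay within the vagrant group
ansible-playbook -i hosts playbook.yml --limit vagrant

# Provisioning production now requires the vault password
ansible-playbook -i hosts playbook.yml --limit production --ask-vault-pass
```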
Additional pro tip: Use Ansible Vault with Vagrant
Because of projects like Vagrant, more people than ever before are using provisioning tools - and probably to do very different things than their creators intended. We don't manage hundreds of web servers with identical configuration. We don't do rolling updates over database clusters with zero downtime.
We provision our local virtual boxes to be able to share development environments with others. We create varying configurations for each project, because each has its specific needs. Sometimes provisioning the development environment takes such high priority that we forget that, further down the road, the project is going to need a production environment.
So make sure that when you're writing tasks, you keep sight of what might be different for your production environment later on. Rewriting provisioning tasks is a lot more painful than writing them properly the first time around.
The best preparation for good work tomorrow is to do good work today.
-- Pointy-haired boss