Infrastructure as code using Vagrant, Ansible, Cucumber and ServerSpec

Designing and developing VMs as code is at last mainstream. This post is in fact a presentation I give to highlight that we can treat infrastructure code just as we would regular code.

We use TDD/BDD and monitors to spec, implement, test and monitor the resulting VM, keeping its code close to the app’s code and as an integral part of it.

infrastructure as code


Install MySQL using Ansible, using an idempotent script

This Ansible role will install MySQL on a *nix and may be run multiple times without failure, even though root’s password is changed when running it.
The order is important and here are some tips:

  • The global ‘my.cnf’ template does not include user and password entries
  • The ‘.my.cnf’ template includes only the user and password entries and is copied to root’s home directory (since the script runs as root), not the deploy user’s
  • Root’s password is set for security reasons
  • The deploy user is granted access only to the application’s databases; I use db1 and db2 as examples here

Put the following tasks in your role’s tasks/main.yml file.

- name: Install MySQL packages
  apt: pkg={{item}} state=installed
  with_items:
    - bundler
    - mysql-server-core-5.5
    - mysql-client-core-5.5
    - libmysqlclient-dev
    - python-mysqldb
    - mysql-server
    - mysql-client
    - build-essential

- name: Remove the MySQL test database
  action: mysql_db db=test state=absent

- name: Create global my.cnf
  template: dest=/etc/mysql/my.cnf

- name: Create databases
  mysql_db: name={{item}} state=present collation=utf8_general_ci encoding=utf8
  with_items:
    - db1
    - db2

- name: Add deploy DB user and allow access to news_* databases
  mysql_user: name={{user}} password={{password}} host="%" priv=db1.*:ALL/db2.*:ALL,GRANT state=present

- name: Set root password
  mysql_user: name=root password={{password}} host="{{item}}" priv=*.*:ALL,GRANT state=present
  with_items:
    - "{{ansible_hostname}}"
    - ::1
    - localhost

- name: Create local my.cnf for root user
  template: src=my.cnf dest=/root/.my.cnf owner=root mode=0600

- name: Restart the MySQL service
  action: service name=mysql state=restarted enabled=true
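One way to check the idempotence claim is to run the playbook twice and inspect the second run’s PLAY RECAP line. A minimal sketch, in Ruby, of such a check — the recap line below is illustrative, not from a real run:

```ruby
# Parse an ansible-playbook "PLAY RECAP" line; a playbook is idempotent
# if a second run reports changed=0 and failed=0.
def idempotent?(recap_line)
  counts = recap_line.scan(/(\w+)=(\d+)/).to_h { |k, v| [k, v.to_i] }
  counts.fetch("failed", 1).zero? && counts.fetch("changed", 1).zero?
end

recap = "web : ok=12 changed=0 unreachable=0 failed=0"
puts idempotent?(recap)  # prints "true" for a fully idempotent second run
```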

What is the difference between TDD and BDD?

The short answer is: none.

All variants of Driven Development (henceforth the ‘xDDs’) strive to attain focused, minimalistic coding. The premise of lean development is that we should write the minimal amount of code to satisfy our goals. This principle can be applied to any development management methodology the team has, whether it be Waterfall, Agile or any other.

A way to ensure that code is solving a given problem over time and change is to articulate the problem in machine-readable form. This allows us to programmatically validate its correctness.

For this reason, xDD is mainly used in the context of testing frameworks. Goals, as well as the code to fulfil them, are run by a framework as a series of tests. In turn, these tests may be used in collaboration with other tools, such as continuous integration, as part of the software development cycle.

We’ve now established that writing tests is a Good Thing(tm). We now turn to answer “when”, “which” and “how” tests should be written, as we strive to achieve a Better Thing(tm).

Defining goals in machine-readable form in itself does not assure the imperative of minimalistic development. To solve this, someone had a stroke of genius: The goals, now viewed as tests, are to be written prior to writing their solutions. Lean and minimalistic development is attained as we write just enough code to satisfy a failing test. As a developer I know, from past experience, that anything I write will ultimately be held against me. It will be criticised by countless people in different roles over a long period of time, until it will ultimately be discarded and rewritten. Hence, I strive to write as little code as possible, Vanitas vanitatum et omnia vanitas.

However, the shortcomings of this methodology are that we need a broad test suite to cover all the goals of the product along with a way to ensure that we have implemented the strict minimum that the test required. I’ll be visiting these two points later, but would like to primarily describe the testing pyramid and enumerate the variants of DD and their application to the different layers.

Having established when to write the tests (prior to writing code), we now turn to discuss “which” tests we should write, and “how” we should write them.

The Testing Pyramid

The testing pyramid depicts the different kinds of tests that are used when developing software.

A wider tier represents a quantitatively larger set of tests than the tier above it, although some projects are better depicted as rectangles, when complexity is high and the testing technology allows for it.

The testing pyramid


Unit Tests

Although people use the term loosely to denote tests in general, Unit Tests are very focused, isolated and scoped to single functions or methods within an object. Dependencies on external resources are discounted using mocks and stubs.


Using rSpec, a testing framework available for Ruby, this test assures that a keyword object has a value:

it "should not be null" do
  k1 = => '')
  k1.should_not be_valid
end

This example shows the use of mocks, which are programmed to return arbitrary values when their methods are called:

it "returns a newssource" do
  news_source = NewsSource.get_news_source
  news_source.should_not == nil
end

NewsSource is mocked out to return an empty set of active news sources, yet the test assures that one will be created in this scenario.

By virtue of being at the lowest level of the pyramid, Unit Tests serve as a gatekeeper to the source control management system used by the project: These tests run on the developer’s local machine and should prevent code that causes failing tests from being committed to source control. A counter-measure to developers having committed such code is to have a continuous integration service revert those commits when the tests fail in its environment. When practicing TDD (as a generic term), developers would write Unit Tests prior to implementing any function or method.

Functional or Integration Tests

Functional or integration tests span a single end-to-end functional thread. These represent the total interaction of internal and external objects expected to achieve a portion of the application’s desired functionality.
These tests too serve as gatekeepers, but of the promotion model. By definition, passing tests represent allegedly functioning software, hence failures represent software that does not deliver working functionality. As such, failing tests may be allowed into source control yet will be prevented from being promoted to higher levels of acceptance.


Here we are assuring that Subscribers, Articles and Notifications work as expected. Real objects are used, not mocks.

it "should notify even out of hours if times are not enabled" do
  @sub.times_enabled = 0
  @notification = Notification.create!(:subscriber_id =>, :article_id =>
  @notification.subscriber.should_notify(Time.parse(@sub.time2).getutc + 1.hour).should be_true
end

A “BDD” example is:

Feature: NewsAlert user is able to see and manage her notifications

Background:
  Given I have subscriptions such as "Obama" and "Putin"
  And "Obama" and "Putin" have notifications
  And I navigate to the NewsAlert web site
  And I choose to log in and enter my ID and the correct password

Scenario: Seeing notifications
  When I see the "Obama" subscription page
  Then I see the notifications for "Obama"

This is language a BA or Product Owner can understand and write. If the BAs or POs on your project cannot write these scenarios, or if you find the format too chatty, you can “drop down” to rSpec instead.

Performance and Penetration Tests

Performance and penetration tests are cross-functional and without context. These test performance and security across different scopes of the code by applying expected thresholds to unit and functional threads. At the unit level, they will surface problems with poorly performing methods. At the functional level, poorly performing system interfaces will be highlighted. At the application level, load/stress tests will be applied to selected user flows.


A “BDD” example is:

Scenario: Measuring notification deletion
  When I decide to remove all "1000" notifications for "Obama"
  Then it takes no longer than 10 milliseconds
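A step like the one above can be backed by a timing assertion. A sketch using Ruby’s Benchmark module, with simulated in-memory work standing in for the real application call — the data and threshold are illustrative:

```ruby
require "benchmark"

# Simulated store of 1000 notifications; a real step would drive the app.
notifications = Array.new(1000) { |i| { id: i, subscription: "Obama" } }

elapsed = Benchmark.realtime do
  notifications.reject! { |n| n[:subscription] == "Obama" }
end

raise "deletion exceeded the 10 ms budget" unless elapsed < 0.010
puts "deleted all notifications in #{(elapsed * 1000).round(3)} ms"
```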

User Interface and User Experience Tests

UI/UX tests validate the user’s experience as she uses the system to achieve her business goals for which the application was originally written.
These tests may also validate grammar, layout, style and other standards.
Of all the tiers, these tests are the most fragile. One reason is that their authors do not separate the essence from the implementation. The UI will likely have the greatest rate of change in a given project, as product owners are exposed to working software iteratively. UI tests that rely heavily on the UI’s physical layout will need rework as the system undergoes change. Being able to express the essence, or desired behaviour, of the thread under test is key to writing maintainable UI tests.


Feature: NewsAlert user is able to log in

Background:
  Given I am a Mobile NewsAlert customer
  And I navigate to the NewsAlert web site
  Then I am taken to the home page which has "Log in" and "Activate" buttons

Scenario: Login
  When I am signed up
  And I choose to log in and enter my ID and the correct password
  Then I am logged in to my account settings page

BDD or ATDD may be used for all these layers, as in some instances it is more convenient to use User Story format for integration tests than low-level Unit Test syntax. ATDD is put to full use if the project is staffed with Product Owners who are comfortable using the English-like syntax of Gherkin (see the examples above). In their absence, BAs may take on this task. If neither are available nor willing, developers will usually “drop down” to a more technical syntax such as rSpec’s, in order to remove what they refer to as “fluff”. I would recommend writing Gherkin, as it serves as a functional specification that can be readily communicated to non-technical people as a reminder of how they intended the system to function.

Exploratory Testing

Above “UI Tests”, at the apex of the pyramid, we find “Exploratory Testing”, where team members perform unscripted tests to find corners of functionality not covered by other tests. Successful ones are then redistributed down to the lower tiers as formally scripted tests. Since these are unscripted, we’ll not cover them further here.

Flavours of Test Driven Development

This author thinks that all xDDs are basically the same, deriving from the generic term of “Test Driven Development”, or TDD. When thinking of TDD and all other xDDs, please bear in mind the introductory section above: we develop the goals (tests) prior to developing the code that will satisfy them. Hence the “driven” suffix: nothing but the tests drives our development efforts. Given a testing framework and a methodology, we can implement a system purely by writing code that satisfies the sum of its tests.

The dichotomy of the different xDDs can be explained by their function and target audience. Generically, and somewhat misleadingly, TDD most probably denotes the writing of Unit Tests by developers as they implement objects and need to justify the methods therein and their implementation. Applied to non-object-oriented development, Unit Tests would be written to test single functions.

The reader may contest that this is the first step in a “driven” system. To have methods under test, one must have their encapsulating object, itself borne of an analysis yet unexpressed. Subscribing to this logic, I usually recommend development using BDD. Behaviour-driven development documents the system through specification by example (a must-read book), regardless of implementation details. This allows us to distinguish and isolate the specification of the application by describing its value to its consumer, with the goal of ignoring implementation and user interactions.

This has great consequences in software development. As one writes BDD scripts, one shows commitment and rationale to their inherent business value. Nonsensical requirements may be promptly pruned from the test suite and thus from the product, establishing a way to develop lean products, not only their lean implementation.

A more technical term is Acceptance Test Driven Development (ATDD). This flavour is the same as BDD, but indicates that the tests on Agile story cards are being expressed programmatically. Here, the acceptance criteria for stories are translated to machine-readable acceptance tests.

As software development grows to encompass Infrastructure as Code (IaC), there are now ways to express hardware expectations using Monitor-Driven Development (MDD). MDD applies the same principles of lean development to code that represents machines (virtual or otherwise).


This example will actually provision a VM, configure it to install mySQL and drop the VM at the end of the test.

Feature: App deploys to a VM

  Given I have a vm with ip “”

Scenario: Installing mySQL
  When I provision users on it
  When I run the “dbserver” ansible playbook
  Then I log on as “deployer”, then “mysql” is installed
  And I can log on as “deployer” to mysql
  Then I remove the VM

The full example can be viewed here.

ServerSpec gives us bliss:

describe service('apache2') do
  it { should be_enabled }
  it { should be_running }
end

describe port(80) do
  it { should be_listening }
end

For a more extreme example of xDD, please refer to my blog entry regarding Returns-driven Development (RDD) for writing tests from a business-goal perspective.

Tools of the trade

.net: nUnit | SpecFlow

Java: jUnit | jBehave

Ruby: rSpec | Cucumber | ServerSpec


The quality of the tests is measured by how precisely they test the code at their respective levels, by how much code or responsibility they span, and by the quality of their assertions.

Unit tests that do not use stubs or mocks when accessing external services of all kinds are probably testing too much and will be slow to execute. Slow test-suites will, eventually, become a bottleneck and may be destined to be abandoned by the team. Conversely, testing basic compiler functions will lead to a huge test-suite, giving a false indication of the breadth of the safety-net it provides.

Similarly, tests that lack correct assertions or have too many of them, are either testing nothing at all, or testing too much.

Yet there is a paradox: the tests’ importance and impact are inversely proportional to their robustness in the pyramid. In other words, as we climb the tiers, the more important the tests become, yet they become less robust and trustworthy at the same time. A major pitfall at the upper levels is the lack of application or business-logic coverage. I was on a project that had hundreds of passing tests, yet the product failed in production because external interfaces were not mocked, simulated or tested adequately. Our pyramid’s peak was bare, and the product’s shortcomings were immediately visible in production. Such may be the fate of any system that interacts with external systems across different companies; lacking dedicated environments, one must resort to simulating their interfaces, which comes with its own risks.

In summary, we quickly found that the art and science of software development is no different than the art and science of contriving its tests. It is for this reason that I rely on the “driven” methodologies to save me from my own misdoings.


Write as little code as you can, using TDD.

Happy driving!

From Zero to Deployment: Vagrant, Ansible, Capistrano 3 to deploy your Rails Apps to DigitalOcean automatically (part 2)


Use Vagrant to create a VM using DigitalOcean and provision it using Ansible.


Parts zero and one of this blog series demonstrate some Ansible playbooks to create a VM ready for Rails deployment using Vagrant. Here we show the Vagrantfile that will provision a DigitalOcean droplet for public access.

First thing to do is to install the DigitalOcean plugin:

vagrant plugin install vagrant-digitalocean


The Vagrantfile for DigitalOcean

# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| = 'digital_ocean'
  config.vm.box_url = ""
  config.vm.hostname = "staging"
  config.vm.define "staging"

  config.vm.provision 'ansible' do |ansible|
    ansible.playbook = "devops/user.yml"
    ansible.inventory_path = "devops/webhosts"
    ansible.verbose = "vvvv"
    ansible.sudo = true
  end

  config.vm.provider :digital_ocean do |provider, override|
    override.ssh.private_key_path = '~/.ssh/id_rsa'
    override.vm.box_url = ""
    provider.client_id = '<YOUR ID>'
    provider.api_key = '<YOUR KEY>'
    provider.image = "Ubuntu 12.10 x64"
    provider.region = "New York 2"
  end
end


You can get your ID and Key from the DigitalOcean website once logged on there.


Not much to it, eh?

The Ansible files stay mostly the same, apart from the use of ‘root’ wherever ‘vagrant’ was used. And don’t forget to change your inventory file to the IP address given by DigitalOcean. I’ll be thinking about how to automate these parameters too, to have complete hands-free installations.

If you are curious to learn more about configuring VMs in DigitalOcean, please see their help page here.

From Zero to Deployment: Vagrant, Ansible, Capistrano 3 to deploy your Rails Apps to DigitalOcean automatically (part 0)


Use Cucumber to start us off on our Infrastructure as code journey.



Part 1 of this blog series demonstrates some Ansible playbooks to create a VM ready for Rails deployment using Vagrant. This is a prequel in the sense that, as a staunch believer in all that’s xDD, I should have started this blog with some Cucumber BDD!
Please forgive my misbehaving and accept my apologies with a few Cucumber scenarios as penance. Hey, it’s never too late to write tests…

The Cucumber Scenarios

As BDD artefacts, they should speak for themselves; write to me if they don’t as it means they were not clear enough!


Feature: App deploys to a VM

Background:
  Given I have a vm with ip ""

Scenario: Building the VM
  When I provision users on it
  Then I can log on to it as the "deploy" user
  And I can log on to it as the "root" user
  And I can log on to it as the "vagrant" user
  Then I remove the VM

Scenario: Adding Linux dependencies
  When I provision users on it
  When I run the "webserver" ansible playbook
  And I log on as "deploy", there is no "ruby"
  But "gcc" is present
  Then I remove the VM

Scenario: Installing mySQL
  When I provision users on it
  When I run the "dbserver" ansible playbook
  Then I log on as "deploy", then "mysql" is installed
  And I can log on as "deploy" to mysql
  Then I remove the VM

The Cucumber Steps

Given(/^I have a vm with ip "(.*?)"$/) do |ip|
  @ip = ip
  output = `vagrant up`
  assert $?.success?
end

When(/^I provision users on it$/) do
  output = `vagrant provision web`
  assert $?.success?
end

Then(/^I can log on to it as the "(.*?)" user$/) do |user|
  output = `ssh "#{user}@#{@ip}" exit`
  assert $?.success?
end

When(/^I run the "(.*?)" ansible playbook$/) do |playbook|
  output = `ansible-playbook devops/"#{playbook}".yml -i devops/webhosts`
  assert $?.success?
end

When(/^I log on as "(.*?)", there is no "(.*?)"$/) do |user, program|
  @user = user
  output = run_remote(user, program)
  assert !$?.success?
end

When(/^"(.*?)" is present$/) do |program|
  output = run_remote(@user, program)
  assert $?.success?
end

Then(/^I log on as "(.*?)", then "(.*?)" is installed$/) do |user, program|
  output = run_remote(user, program)
  assert $?.success?
end

Then(/^I remove the VM$/) do
  output = `vagrant destroy -f`
  assert $?.success?
end

Then(/^I can log on as "(.*?)" to mysql$/) do |user|
  `ssh "#{user}@#{@ip}" 'echo "show databases;" | mysql -u "#{user}" -praindrop'`
  assert $?.success?
end

def run_remote(user, program)
  `ssh "#{user}@#{@ip}" '"#{program}" --version'`
end

From Zero to Deployment: Vagrant, Ansible, Capistrano 3 to deploy your Rails Apps to DigitalOcean automatically (part 1)

update: please refer to the prequel that sets the stage with Cucumber scenarios as a BDD exercise.


In this post, I would like to share that my anxiety about setting up a new server to host an application reminded me why I like being in IT: automation. I attempt to avoid snowflake servers and deploy a Rails application to a VM using idempotent scripts with the help of Ansible and Capistrano.

This entry is a step-by-step guide to get a VM up and running with a Rails app deployed to it. I describe the steps needed to be taken with Vagrant, Ansible and Capistrano to deploy to a local VM while leaving deployment to DigitalOcean for part two.

the problem

Writing code comes easy to you. As a developer, you develop and test your code with a certain ease and enjoyment. To a certain extent, you may not even think much about the production phase of your project, as you may already have an environment set up. However, you might only have a vague idea of what your prod environment looks like, as you may have set it up, say, a year or two ago. Maybe your development environment is out of sync? Maybe you have to rely on other people (sys-admins) to take care of that “stuff”? That requires A HandOff Ceremony, something we want to avoid on planet DevOps.

In summary, it would be nice to have an automated, testable, repeatable way of provisioning hosts for testing and deployment uses. Obviously, scripts and scripting systems exist for that, and after mucking around with Chef and Puppet, I opted for Ansible.

a solution

In my mind, Ansible is to shell, what CoffeeScript is to JavaScript. I can express what I want to do at a high level (given there’s a module for it) and not worry about the details. In the case of Ansible, I don’t have to worry about idempotence either. So I settled on a way to provision virtual machines (VMs) using Vagrant and Ansible.

While I do not claim to be an expert in any systems herein mentioned, I do declare that “it worked for me”. Please leave comments, tips and tricks if you see any aberrations or more elegant ways of doing things with these tools.

I’d like to credit my friend and colleague Jefferson Girao from ThoughtWorks for having introduced me to Ansible in the first place, and mention that he’s on a similar journey to optimising Rails deployment, with the goal of using Ansible only. I am taking a more conservative approach and will stick with good-old Capistrano for the Rails part.

0: punt on Windows and Linux.

The demo is on a Mac, but feel free to try to adapt it to other platforms.


1: Install VirtualBox, Vagrant and Ansible

Here we install stuff, not a lot. 

Get VirtualBox here, or by following the vagrant guide and then install the vagrant gem:

gem install vagrant

Now let’s install Ansible with the command:

brew install ansible

That assumed you had brew installed. If you don’t have it, I recommend installing it as it makes Mac OS X installations easy. If you prefer not to use brew, do it the hard way.


2: Prepare to build the machine

Here we create a subdirectory that will contain our Vagrantfile and, later on, our Rails app. We’ll keep the Vagrantfile near our source code so we can say that we’re compatible with the idea of “Infrastructure As Code” (we’ll get to that in a future chapter).

mkdir app
cd app
vagrant init

This will create an initial Vagrantfile. Replace it with this one:

In summary, when run (don’t run it yet, it will fail), this Vagrant script will spin up an Ubuntu Precise 64 instance, make its home on your private network on IP and will invoke the Ansible provisioner to run the user.yml playbook.
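The original post embedded that Vagrantfile as a gist. Here is a minimal sketch consistent with the description above; the machine name ‘web’ (which the later `vagrant up web` commands assume), the private IP and the box URL are assumptions, so adjust them to your setup:

```ruby
# Vagrantfile sketch: Ubuntu Precise 64 on a private IP, provisioned by
# the devops/user.yml Ansible playbook. The IP and box URL are assumptions.
Vagrant.configure("2") do |config|
  config.vm.define "web" = "precise64"
  config.vm.box_url = "" "private_network", ip: ""  # any free private IP

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "devops/user.yml"
    ansible.inventory_path = "devops/webhosts"
  end
end
```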



Before we can run the above Vagrantfile, we need to create the ‘user.yml’ file in the devops directory, or elsewhere, if you care to change the ‘ansible.playbook’ line in the Vagrantfile.

I’d like to pause and explain what that user.yml playbook will do so you don’t freak out when you see me moving rsa keys all over the place.

On one hand, I’d like to set up a machine with all needed dependencies. This will require making some apt-get and other calls that will need root rights. That’s fine. We’ll have root later on, when talking to DigitalOcean, but for the moment we have the default privileged ‘vagrant’ account, which is fine. I would like, however, to run my Rails stuff under the ‘deploy’ account, which would be better off being a regular account. So now we have two accounts, ‘vagrant’ (built-in) and ‘deploy’. I care less about the vagrant user since we’ll throw it away when we provision to DigitalOcean. I do care about the deploy account though:

That ‘deploy’ account will later be used to connect to an external git host, such as bitbucket or github, and it will need keys to do so. I will be using that account to log into the instance, so it would be nice if it had my key too. For the scm-related issue, I generated a key pair and posted the public portion to bitbucket and github under my account, so they will allow it to perform git operations.

So take a deep breath and step through ‘devops/user.yml’ by reading the task names.


3: Playbook: set up a user on the VM

At the app folder root, do this:

mkdir -p devops

Copy the following text into ‘user.yml’:

The names of the tasks document sufficiently what they do. Note the following however:

1. I send over a known_hosts file that includes bitbucket’s URL.
2. I send a config file containing bitbucket’s host settings into the deploy user’s .ssh directory so that the first git operation does not hang forever.

OK, if you’re eager to run this playbook, you’ll need the vars.yml file:

Create vars.yml in the same directory as the user.yml file and paste this into it:

Replace the placeholder values with your own:

1. Running crypt on “secret” with “SaltAndPepper” will create a password token that you place in the password variable. That is the password for the deploy user created on your VM. It’s neat that we don’t have to keep clear text passwords in YAML files.
2. repo holds the git repo your application resides in (for a later step).
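The crypt step from item 1 can be done from irb; a quick sketch using Ruby’s String#crypt — the password and salt are the article’s examples, not real credentials:

```ruby
# Traditional DES crypt uses only the first two salt characters ("Sa"),
# yielding a 13-character token you can place in the password variable.
token = "secret".crypt("SaltAndPepper")
puts token  # a 13-character hash beginning with "Sa"
```

On a modern Linux target you may prefer a SHA-512 salt of the form `$6$…`; check your platform’s crypt(3) documentation.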

And you’ll need the templates folder with the following files in it:

Create the templates folder:

mkdir -p templates

Inside it:

1. Copy your public key into a file named ‘’
2. Copy bitbucket’s RSA signature to a file named known_hosts, thus:, ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAubiN81eDcafrgMeLzaFPsw2kNvEcqTKl/VqLat/MaB33pZy0y3rJZtnqwR2qOOvbwKZYKiEO1O6VqNEBxKvJJelCq0dTXWT5pbO2gDXC6h6QDXCaHo6pOHGPUy+YBaGQRGuSusMEASYiWunYN0vCAI8QaXnWMXNMdFP3jHAJH0eDsoiGnLPBlBp4TNm6rYI74nMzgz3B9IikW4WVK+dc8KZJZWYjAuORU3jc1c/NPskD2ASinf8v3xnfXeukU0sJ5N6m5E8VLjObPEO+mN2t/FZTMZLiFqPWc/ALSqnMnnhwrNi2rbfg/rd/IpL8Le3pSBne8+seeFVBoGqzHM9yXw==

3. Copy your deploy’s private key into a file named deploy_rsa
4. Copy your deploy’s public key into a file named
5. Copy this to a file named ssh_config:

  IdentityFile /home/deploy/.ssh/deploy_rsa
  StrictHostKeyChecking no

This will make some security people cringe – I’m bypassing checking on bitbucket. Yeah.

6. Create a copy of your sudoers file and add the following line to it:

deploy  ALL=(ALL:ALL) ALL

Then place it in the templates directory as well.

That’s all that’s needed as templates for now. 

You need an inventory file too: Create a file called webhosts and paste the following into it:


To run this playbook, enter this at the command prompt:

vagrant up web
vagrant provision web

The first line wakes up vagrant. If it’s the first time you’re trying to access Precise64, this step can take quite a bit of time – Vagrant will download the Precise64 box over your internet connection. Time to brew and drink some coffee.
The second line will be cute to watch: Ansible will light up your screen like a disco, at the end of which you’ll have a VM with Ubuntu configured, as well as a login for deploy using your own ssh key.

You can access this VM via any of the following commands:

1. vagrant ssh
2. ssh vagrant@
3. ssh deploy@

If it does not work, either this blog is buggy or it’s a case of PEBKAC. Please check and let me know.

If it works, have some fun with your new free VM, something that would have otherwise cost you a few hundred dollars at your retail PC store.

By the way, adventurous developers can try to provision directly from Ansible:

vagrant up web
ansible-playbook devops/user.yml -i devops/webhosts

4: Playbook: get some linux


The playbook will give us a real Linux to allow us to move forward with our provisioning (Ruby, Rails).


Create a file called webserver.yml and paste this into it: 

Play it by issuing the following command:

ansible-playbook devops/webserver.yml -i devops/webhosts

5: Playbook: get some mySQL

The playbook will install mySQL on the provisioned VM. Create a file called dbserver.yml and paste this into it:

It will install the needed packages for mySQL and then:

  • Start the service
  • Remove the test database
  • Create a ‘deploy’ user
  • Remove anonymous users from the DB
  • Set up a my.cnf file
  • Change root password
While changing the root password is a good idea, doing so renders this playbook non-idempotent.

6: Playbook: get some Ruby

The playbook will install the current Ruby 2.0 version. This edition of the blog does not use RVM, as it is hell to deal with in non-interactive terminals; I am saving the setup of RVM with Ansible for a later post.

Create a file called virtual_ruby.yml and paste this into it:

Play it by issuing the following command:

ansible-playbook devops/virtual_ruby.yml -i devops/webhosts

7: Playbook: get the project’s ruby and install bundler

The playbook will install the project’s ruby under the deploy user and install bundler to be used later on.

Create a file called project.yml and paste this into it:

Play it by issuing the following command:

ansible-playbook devops/project.yml -i devops/webhosts

8: Using Capistrano 3 to deploy the Rails app

This is not a playbook, of course, but a Capistrano 3 recipe.

Install Capistrano 3 following their instructions and replace the deploy.rb file with this one:

Replace the contents of config/deploy/production.rb file with this:

Deploy the app by issuing the following command:

cap production deploy 

9: Have some fun with your new scripts. See the disco colours!

You can repeat these commands to provision, re-provision or just test Ansible’s idempotence:
vagrant up web
vagrant provision web
ansible-playbook devops/user.yml -i devops/webhosts -vvvvv
ansible-playbook devops/webserver.yml -i devops/webhosts -vvvvv
ansible-playbook devops/dbserver.yml -i devops/webhosts -vvvvv
ansible-playbook devops/virtual_ruby.yml -i devops/webhosts -vvvvv
ansible-playbook devops/project.yml -i devops/webhosts -vvvvv
cap production deploy

In the next post, we’ll push the Rails project to a DigitalOcean VM instead of a local one and run it there.

Please comment and send feedback about the form and content.

Happy provisioning!


Returns Driven Development

The premise of all the “DD” acronyms is to minimise technical debt in one way or another and otherwise drive us to being lean.

The motivation for this article is “writing the minimum amount of code” in the spirit of Agile in general and TDD/BDD specifically. As someone who has developed code for more than a quarter of a century, I have learned that anything I write as code will be used against me as long as the software is in use. I don’t want to write more code than I need to in order to justify my reward. In this case, my reward is to have the RDD monitor set off an alert that serves as feedback to knowledgeable people, who make decisions about the product such that I will continue to be rewarded.
So, what is RDD?

TDD instructs us to write as little code as we can to make a set of tests pass.
BDD instructs us to write as little code as we can to deliver a valuable set of features.
I’d like to extend these guidelines to a methodology that instructs us to write as little code as we can to assure a specific level of business returns (i.e. ROI). I’ll call it Returns Driven Development, or RDD for fun (thanks to my fellow ThoughtWorker Kyle for coming up with the name!).

In most cases, there is an underlying business case for creating or modifying software. Of those, some are justified by a business plan that shows how much more money the business would make if only the requested features were implemented. Of those, only a few are borne of a real market analysis. In the rest of the cases, the primary motivation is the product manager’s intuition that it would be nice to have these new features.

I wanted a way for the product owner to convey her ideas about the modifications, without regard to her motivation. RDD is a way to describe software feature requests without having to make up financial data to justify the requests. It’s also a way to validate the intuition of the product team.

Some examples:

Our customer acquisition rate would increase if we made signup easier.

Our salespeople would sell more licenses if they could demonstrate the software at trade shows with preloaded customer accounts.

Our sales would increase if we exposed our B2B services to the public Internet.

All these sound like valid points for a product manager to present as justification for embarking on a technical investment in creating or modifying existing software.

The only change I would make to the above examples is to add a quantity: our acquisition rate will increase by 30%; we’d make 25% more sales; and so on.

This is the starting point of RDD: in order to assure the growth of this business, we need to increase sales by X%.

Now that we have that statement, it will be scrutinised by the company’s board and a decision will be made regarding its implementation. If action is taken, RDD is then charged with proving those statements.

RDD validates the statement by providing business feedback to the product owner that the course charted is indeed driving towards the stated goal. The sooner and more precise the feedback, the better the decisions made to adjust the statement or the course of action.

RDD proposes to set up the monitors first and develop minimal software to satisfy them. The monitors will provide a baseline of the current situation and, prior to development, will indicate whether the premise was indeed factual and worthwhile.

As an example, an RDD monitor will state:

Generate an alert if the number of daily licence sales is below 30, or if sales decline more than 5% week over week.

Generate an alert if B2B API calls originate from more than 10% of our customers.

Primarily, the alerts indicate movement in their business domains and set a baseline of alert frequency. They can also serve as indicators that something is broken at a technical level, but that’s secondary, as other IT alerts exist for that purpose.
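To make the first alert rule concrete, here is a minimal Ruby sketch of such a monitor check. The thresholds come from the example above; the function name and data shape are my own illustration, not part of any monitoring product:

```ruby
# Illustrative RDD monitor check (names and inputs are hypothetical).
# Returns true when the monitor should raise an alert: daily licence
# sales below 30, or a week-over-week decline of more than 5%.
def sales_alert?(daily_sales:, wow_change:)
  daily_sales < 30 || wow_change < -0.05
end

sales_alert?(daily_sales: 25, wow_change: 0.02)   # low volume -> alert
sales_alert?(daily_sales: 40, wow_change: -0.10)  # sharp decline -> alert
sales_alert?(daily_sales: 40, wow_change: 0.01)   # healthy -> no alert
```

In a real deployment this predicate would be fed by whatever transaction data source the monitor forces you to build, which is exactly the point of the next section.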

Now comes the fun, DD, part:

The monitor’s premise depends on much more than meets the eye at first reading.
The data for the alarm may not exist. The transaction table may or may not exist, depending on the state of the product the alarm is set to monitor. If this is a new product, a transaction data source may have to be created just for the monitor. That alone is an excellent improvement to the organisation.
Following that scenario, a data table without data is not much use; enter BDD. Enter TDD. Soon, you may have developed the app from scratch. A whole system may have been created as the result of a Returns statement made by the product manager, yet we invested only the minimal amount of development needed to satisfy the monitor. Feedback is guaranteed, as the monitor was implemented up front, even before the software was.
RDD is just as effective when extending existing software, while assuring that the minimum code was written to satisfy the monitor’s goal.
The claim that a simple signup form will boost customer acquisition will soon be proven right or wrong. The monitor will raise an alert when signups increase week over week. If it never fires, we may need another monitor that observes another aspect of the product and questions its value to the users.

So, the next time you are involved in a product’s inception or new feature, start with business monitors!

Try asking the operations group for a business returns monitor. At first, their mouths will open and close without words coming out. Soon after, they will realise that it is nothing but another monitor. You then employ DDD/BDD/TDD to develop it and the system that feeds it information. Then you sit back and wait for the product owners to request new monitors or features as they attempt to steer the reported data to prove their original claims right or wrong, or a little of both.