Alexa on Rails – how to develop and test Alexa skills using Rails


Introduction

Alexa is awesome and I think that conversational software is the future. This post documents the technical learning challenge I set myself:

  • Host the skill locally, to allow a fast development feedback cycle prior to pushing code.
  • Find a way to automate tests (unit, functional and end-to-end), as most demos rely on manual testing.
  • Use something other than JavaScript (which most demos use).
  • Write an Alexa skill that’s backed by a data store.
  • Handle conversations.

The way Alexa services interact with apps is the following:

User->Echo: “Alexa, …”
Note right of Echo: Wakes on ‘Alexa’
Echo->Amazon: Streams data spoken
Amazon->Rails: OfficeIntent
Rails->SkillsController: POST
SkillsController->Amazon: reply (text)
Amazon->Echo: reply (voice)
Echo->User: Speaks
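For a concrete picture of the "OfficeIntent" POST above, here is a trimmed sketch of the JSON Amazon sends to the endpoint, written as the Rails params hash that the skills controller (shown later) works with. The application ID, slot names and values are illustrative, and the real payload carries additional session and context fields.

{
  "version" => "1.0",
  "session" => { "new" => true, "application" => { "applicationId" => "amzn1.ask.skill.xxxx" } },
  "request" => {
    "type"   => "IntentRequest",          # or "LaunchRequest"
    "intent" => {
      "name"  => "OfficeWorkers",
      "slots" => { "Office" => { "name" => "Office", "value" => "London" } }
    }
  }
}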

The skill

The skill is a data retrieval one, giving information about the company’s offices and the workers there.

Alexa, Rails, git, ngrok and an Amazon account

I bought a dot and set up an Amazon account to register the skill on.

Install Rails and git for your OS. You’ll also need a data store, easily set up using the sqlite or mysql gems.

ngrok is a nifty tool that will tunnel Alexa calls into our local server.

Get the code

Fork or clone the repo for a head-start, or read along taking only pieces you need from this post.

Set up the app

  • Setting some environment variables

The database connection uses the following environment variables:

export ALEXA_DB_USERNAME=
export ALEXA_DB_PASSWORD=
  • Setting up the database
bundle
rake db:create db:migrate db:seed spec

This will create the database, run the migrations, seed the development data, and run the unit and integration tests.

  • Running tests
rake

This will run all tests except the audio tests, which I’ll describe below. Make sure all tests pass.

Connecting to the real thing

When a user invokes your skill, Amazon routes the request to the endpoint listed on the Alexa site, so you must first configure the skill there. It’s straightforward, but the information has to be entered manually on the skill’s configuration page on Amazon’s site.

Intent schema

This is where you define the intents the user can express to your skill. I think of ‘intents’ as the skill’s ‘methods’, if you think of the skill as an object.

Utterances

These are sample phrasings of each intent, i.e. the sentences users might say to invoke it. For example:

Bookit for vacant rooms between {StartDate} and {EndDate}
OfficeWorkers who the {Staff} from {Office} are

Slot types

Here are the slot types for our skill. They define the possible values for our slots, which serve as the parameters to intents. If you think this is complex, please remember that I am only the messenger here…

[Screenshot: custom slot types]

Now that you have configured the skill’s interfaces, we need to route communications from Amazon to our local server running Rails as we develop and debug. This is easily done using ngrok, explained below.

ngrok

ngrok is a service, with a free tier, that redirects traffic from outside your home or office firewall into your network. Once configured, it will route traffic from Amazon to http://localhost:3000, which is essential for the fast development cycle we’re after.

Run it using:

ngrok http -hostname=endpoint.ngrok.io 3000

Your configuration may vary depending on whether you are a paying customer, so change ‘endpoint’ accordingly.

You’ll see something like this once you run it:

[Screenshot: ngrok console output]

Add your endpoint to Amazon’s skill page under configuration:

[Screenshot: endpoint configuration]

Generating a certificate

Once you’ve settled on the endpoint URL, you’ll need to create or reuse a certificate for Amazon to use when communicating with your server process.

openssl genrsa 2048 > private-key.pem
openssl req -new -key private-key.pem -out csr.pem
openssl req -new -x509 -days 365 -key private-key.pem -config cert.cnf -out certificate.pem

Copy the contents of ‘certificate.pem’ to the skill’s page on Amazon:

[Screenshot: SSL certificate configuration]

Toggle the test switch to ‘on’, otherwise Amazon will think you’re trying to publish the skill on their Skills store:

[Screenshot: skill testing toggle]

Last but not least, enable the skill on your iPhone or Android device by launching the Alexa app and verifying that the skill appears in the ‘Your Skills’ tab.

Amazon recap

On Amazon’s site, we:

  • Uploaded the interaction model: the ‘intent schema’, ‘custom slot types’ and ‘sample utterances’
  • Configured the endpoint
  • Uploaded the SSL certificate
  • Enabled the test flag
  • Verified that the skill is enabled using the Alexa app on a mobile device

The moment we’ve been waiting for

Run your rails app:

rails s

Run ngrok in another terminal window:

ngrok http -hostname=alexa01.ngrok.io 3000

Say something to Alexa:

Alexa, tell Buildit to list the offices

If all goes well, you should:

  • See the request being logged in the ngrok terminal (telling you that Amazon connected and passed the request to it)
  • See that the rails controller got the request by looking at the logs
  • Hear the response from your Alexa device

If there was a problem at this stage, please contact me so I can improve the instructions.

Code walkthrough

Route to a single skills controller:

 Rails.application.routes.draw do
   # Amazon comes in with a post request
   post '/' => 'skills#root', :as => :root
 end

Set up that controller:

class SkillsController < ApplicationController
  skip_before_action :verify_authenticity_token

  def root
    case params['request']['type']
    when 'LaunchRequest'
      response = LaunchRequest.new.respond
    when 'IntentRequest'
      response = IntentRequest.new.respond(params['request']['intent'])
    end
    render json: response
  end
end

Handle the requests:

def respond intent_request
  intent_name = intent_request['name']

  Rails.logger.debug { "IntentRequest: #{intent_request.to_json}" }

  case intent_name
    when 'ListOffice'
      speech = prepare_list_office_request
    when 'OfficeWorkers'
      speech = prepare_office_workers_request(intent_request)
    when 'OfficeQuery'
      speech = prepare_office_query_request(intent_request)
    when 'Bookit'
      speech = prepare_bookit_request(intent_request)
    when 'AMAZON.StopIntent'
      speech = 'Peace, out.'
    else
      speech = 'I am going to ignore that.'
  end

  output = AlexaRubykit::Response.new
  output.add_speech(speech)
  output.build_response(true)
end
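For reference, here is a hedged sketch of the office-listing handlers that the unit tests below exercise; the implementation in the repo may differ, and Office is assumed to be the ActiveRecord model backing the skill. Splitting prepare_list_office_request from handle_list_office_request keeps the phrase-building logic testable without touching the database.

def prepare_list_office_request
  handle_list_office_request(Office.pluck(:name))
end

def handle_list_office_request(names)
  case names.size
  when 0 then "We don't have any offices yet."
  when 1 then "#{names.first} is the only office."
  else
    "Our offices are in #{names[0..-2].join(', ')}, " \
    "and last but not least is the office in #{names.last}."
  end
end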

Test walkthrough

Unit tests

These are really fast. They don’t touch any Alexa or controller code; they just make sure that the methods create the correct responses:

 

require 'rails_helper'

RSpec.describe 'Office' do
  before :all do
    @intent_request = IntentRequest.new
  end
  describe 'Intents' do
    it 'handles no offices' do
      expect(@intent_request.handle_list_office_request([])).to match /We don't have any offices/
    end

    it 'handles a single office' do
      expect(@intent_request.handle_list_office_request(['NY'])).to match /NY is the only office./
    end

    it 'handles multiple offices' do
      expect(@intent_request.handle_list_office_request(['NY', 'London'])).to match /Our offices are in NY, and last but not least is the office in London./
    end
  end
end

Integration tests

These mock out the Alexa calls and ensure that the JSON coming in and going out is correct:

describe 'Intents' do
  describe 'Office IntentRequest' do
    it 'reports no offices' do
      request = JSON.parse(File.read('spec/fixtures/list_offices.json'))
      post :root, params: request, format: :json
      expect(response.body).to match /We don't have any offices/
    end

    it 'reports a single office' do
      request = JSON.parse(File.read('spec/fixtures/list_offices.json'))
      Office.create name:'London'
      post :root, params: request, format: :json
      expect(response.body).to match /London is the only office/
    end

    it 'reports multiple offices' do
      request = JSON.parse(File.read('spec/fixtures/list_offices.json'))
      Office.create [{name: 'London'}, {name: 'Tel Aviv'}]
      post :root, params: request, format: :json
      expect(response.body).to match /Our offices are in London, and last but not least is the office in Tel Aviv./
    end
  end
end
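For completeness, these examples sit inside a controller spec, so that post :root and the response object are available. The wrapper is roughly the following (the exact layout in the repo may differ):

require 'rails_helper'

RSpec.describe SkillsController, type: :controller do
  describe 'Intents' do
    # ... the 'Office IntentRequest' examples shown above ...
  end
end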

Audio tests

I was keen on finding a way to simulate what would otherwise be an end-to-end user-acceptance test, like a Selenium session for a web-based app.

The audio test I came up with has the following flow:

describe 'audio tests', :audio do
  it 'responds to ListOffice intent' do
    paris = 'Paris'
    aviv = 'Tel Aviv'

    Office.create [{ name: paris }, { name: aviv }]

    pid = play_audio 'spec/fixtures/list-office.m4a'

    client, data = start_server

    post :root, params: JSON.parse(data), format: :json
    result = (response.body =~ /(?=#{paris})(?=.*#{aviv})/) > 0

    reply client, 'The list offices intent test ' + (result ? 'passed' : 'failed')
    expect(result).to be true
  end

end

Line 6: Creates some offices.
Line 8: Plays an audio file that asks Alexa to list the offices
Line 10: Starts an HTTP server listening on port 80. Make sure that rails is not running, but keep ngrok up to direct traffic to the test.
Line 12: Will direct the intent request from Alexa to the controller
Line 13: Makes sure that both office names are present in the response
Line 15: Replaces the response that would have been sent back to Alexa with a curt message about the test passing or not.
Line 16: Relays the test status back to RSpec for auditing.
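The play_audio, start_server and reply helpers are what make this work. Their real implementations live in the repo, but a minimal sketch of their shape might look like this (afplay, the port-80 socket handling and the response format are all assumptions):

require 'socket'
require 'json'

def play_audio(path)
  # afplay is macOS's command-line player; swap in whatever player your machine has.
  Process.spawn('afplay', path)
end

def start_server(port = 80)
  # Accept a single connection (Amazon, via the ngrok tunnel) and return the client
  # socket plus the JSON body of its POST. Listening on port 80 usually needs sudo.
  server = TCPServer.new(port)
  client = server.accept
  client.gets                                   # request line, e.g. "POST / HTTP/1.1"
  headers = {}
  while (line = client.gets.strip) != ''
    key, value = line.split(': ', 2)
    headers[key] = value
  end
  body = client.read(headers['Content-Length'].to_i)
  [client, body]
end

def reply(client, message)
  # Hand a minimal Alexa-style response back so the device speaks the test verdict
  # instead of the skill's normal reply.
  payload = {
    version: '1.0',
    response: {
      outputSpeech: { type: 'PlainText', text: message },
      shouldEndSession: true
    }
  }.to_json
  client.write("HTTP/1.1 200 OK\r\nContent-Type: application/json\r\n" \
               "Content-Length: #{payload.bytesize}\r\n\r\n#{payload}")
  client.close
end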

This is as close as I got to an end-to-end test (audio and controller). Please let me know if you have other ways of achieving the same!

Conclusion

What was technically done here?

  • We registered an Alexa skill
  • We have a mechanism to direct traffic to our server
  • We have a mechanism to unit-test, integration-test and acceptance-test our skill
  • We have a mechanism that allows for a fast development cycle, running the skill locally till we’re ready to deploy it publicly.

My main learning, however, was not a technical one (despite my thinking that the audio test is nifty!). Being an advocate for TDD and BDD, I now realise that there’s a new way of thinking about intents, whether the app is voice-enabled or not.

We might call it CDD: Conversation Driven Development.

The classic “As a…”, “I want to…”, “So that…” manner of describing intent seems static compared to imagining a conversation with your product, whether it’s voice-enabled or not. In our case, try to imagine what a conversation with an office application would be like.

“Alexa, walk me through onboarding”. Then on through booking time, booking conference rooms, asking where office-mates are, what everyone is working on, and so on.

If the app happens to be a voice-enabled one, just make audio recordings of the prompts, and employ TDD using them. If it’s a classic app, use those conversations to create BDD scripts to help you implement the intents.

 

NRF51 full-screen debugging


NetBeans Debugging

Introduction

This post is a quick tutorial on how to set up a GUI for debugging NRF51 code.

Currently, CLion does not support remote debugging, but they promised to consider it if enough votes are collected – so vote here! I am sure it’s going to be seamless once they implement it, so I’m eagerly awaiting it.

Modifying Xcode to use ARM tools is a big challenge, and the support documentation seems to stop at Xcode 6. I have it in mind to write a plugin to support toolchain switching as well as code templates, so watch this space…

I also tried configuring CodeLite, as there were rumours that it was possible. Unfortunately, it never worked for me (user error, I assume), but I intend to pursue it as I’d like to support this open-source effort.

Conscious of my mental well-being, I am avoiding Eclipse and will not refer to it again in this post.

So which shall we use? NetBeans. While it’s neither the most modern nor the most flexible of the above-mentioned IDEs, it actually manages to debug cross-compiled code running on a remote device.

Here’s how

Step 1: Download the C/C++ enabled version of NetBeans from here.


Step 2: Create a new project


Follow the wizard and choose to open your project’s root directory.

Step 3: Create a new configuration for the ARM toolchain


Step 4: Set up the toolchain

Enter the path to your ARM toolchain, usually under /usr/local, and fill in the rest of the form.


You can access this form in the future by clicking on the “Services” tab on the left hand side of the project explorer


Then right click on the configuration name and select “Properties”


This will bring up a window similar to the one shown the first time you configured the toolchain.


At this stage, you’ve set up a configuration to use the ARM toolchain.

Step 5: Connect the config to our project.

This is done by selecting “Set Project Configuration” from the Run menu


Step 6: Verify that the project is set to use the ARM config.

This is done by selecting “File/Project Properties” and making sure that “Tool Collection” is set to “ARM”


Step 7: Set up debugging session parameters


You should now be ready!

Let’s build the application by using your makefile


To debug, we must first launch the “jlinkgdbserver” executable, as described in my previous blog post.
Unfortunately, I have not found a way to do this automatically from within NetBeans, as it does not appear to have a pre-debugging hook from which we could run the server. If anyone is aware of a way to do so, please alert me and I will update this post.

Open a terminal window and run the following command:

jlinkgdbserver -device nrf51822 -if swd -speed 4000 -noir -port 2331

The result should look something like this:


We can now start our debugger session by selecting “Debug/Attach Debugger” in NetBeans


This will open a dialog that you should fill in as shown here


If everything went well, you should be able to see the code in NetBeans and use the debugger fully!


I hope you found this tutorial helpful! See you next time, hopefully with a solution for CLion and XCode.
Happy hacking!

Some helpful links regarding the subject:

– ARM toolchain: GCC ARM Embedded in Launchpad

– Blog entry on setting up an NRF51 dev environment manually: Nordic NRF51 up and running | InContext, by Itamar Hassin

– Setting up an NRF51 dev environment on your mac: ihassin/fruitymesh-mac-osx · GitHub

– Setting up an NRF51 dev environment using Ansible (VirtualBox and Parallels): ihassin/fruitymesh-ubuntu-vm · GitHub

– Compiling an example using Make and CMake : ihassin/nrf51-blinky-cmake · GitHub

– FruityMesh example module: ihassin/fruitymesh-ping · GitHub

– FruityMesh example on official FruityMesh site: fruitymesh/Readme.md at master · mwaylabs/fruitymesh · GitHub

– Debugging NRF51 code using NetBeans GUI: NRF51 full-screen debugging | InContext, by Itamar Hassin

Nordic NRF51 and FruityMesh BLE Up and Running


 

Update:

There’s now also an Ansible script that runs locally if you want to use your Mac natively. Use this repo.

Enjoy!

 

Some learnings and new implementations have happened since the last post about the Nordic NRF51:

– I wrote an Ansible script to automate the provisioning and deployment of a complete development environment for the NRF51 using the FruityMesh framework. Please note that the environment is hosted on a headless Ubuntu VM, so you’ll need some command-line fu.
The repo supports VirtualBox and Parallels running Ubuntu via Vagrant. I hope you find it a useful way to quickly start developing modules for BLE mesh experimentation, or simply to develop for the NRF51.

– I cloned the original and created this repo to exercise its mesh programming, specifically:
* Timer functions
* RSSI values
* GPIO programming

The implementation demonstrates an RGB LED that changes colours when its paired NRFs change their relative signal strengths as their distance from it changes.

I hope you find these two artefacts useful and, as always, your comments are welcome.

Some useful links:
– M-Way Labs FruityMesh implementation
– Helpers for development
– Mac OS X setup (without FruityMesh support)

Happy hacking!

Nordic NRF51 up and running


Update:

If you want to know the insides of how to set up a development environment, read on!

If you want Ansible to do all the work for you, skip this post and check out my repos:
* For an Ubuntu VM, use this repo
* For using your Mac natively, use this repo.

Enjoy!

Introduction

There is not much documentation about the NRF51, and the tool-collection hunting and gathering process can be intimidating.
I hope this blog entry will help those that want to use and program the Nordic NRF51 development board to test out BLE functionality.

The hardware

We are using the NRF51 development board, which was purchased from here.

Basic operations

Connecting to the board

Connection is made by attaching a standard micro USB cable to your host computer. Once power is supplied to the board, it will run the current program automatically.

Communicating with the board

Flashing the device can be done using the JLinkExe program running on the host computer. JLinkExe can be downloaded from here.

Resetting the board to manufacturer settings

From a terminal window, as the device is connected and turned on, run the following command line:

prompt> JLinkExe -device nrf51822

When the JLink prompt appears, type the following:

J-Link> w4 4001e504 2 
J-Link> w4 4001e50c 1 
J-Link> r 
J-Link> g 
J-Link> exit 

This will erase all the programs that were loaded.

Programming the device

In order to program the device, you must first set up the following tools:

The Nordic SDK

The SDK can be downloaded from the Nordic website here. For our testing, we used nRF51_SDK_v9.0.0. The SDK contains a binary referred to as “SoftDevice” that supports BLE management of the chip. Please see below on how to load the SoftDevice to the board using JLink.

Compiler and Linker toolchain from GNU

The cross-compiler/linker tools that are needed to build executables for the board can be found here. We placed them under ‘/usr/local’. If you have multiple development environments, it may be easier to set an alias to run the right tools rather than modifying the path. For example:

alias gdb51="/usr/local/gcc-arm-none-eabi-4_9-2015q2/bin/arm-none-eabi-gdb"
alias jdb51="jlinkgdbserver -device nrf51822 -if swd -speed 4000 -noir -port 2331"

Loading a binary to the device

An executable image is created in the form of a “.hex” file that has to be loaded into the board’s flash memory. To load it to the device, open a terminal window and run JLinkExe, this time using the loadfile command:

prompt> JLinkExe -device nrf51822 
J-Link> loadfile path-to-binary
J-Link> r  
J-Link> g  
J-Link> exit 

When you program BLE functionality, you will need to load the chip’s firmware in order to support your programs. This is packaged as an executable and is part of the SDK. In order to load the SoftDevice, simply use the loadfile command with the correct path, such as:

J-Link> loadfile SDK_ROOT/components/softdevice/s110/hex/s110_softdevice.hex 
J-Link> r
J-Link> g
J-Link> exit

Select a different path if you want to change the version loaded (in this example, it’s S110).

Checkpoint

At this stage, you should have a connected board that has a version of the firmware loaded, and the toolchain downloaded, ready for development to begin!

BLE is hard, but blinking the board is easy

Using the toolchain, let’s compile and load the demo blink program that comes with the SDK to make sure we have everything in place for future development.

Making Make make

Here you’ll edit the file named Makefile.posix to point to the correct toolchain for cross-development. The file is found at SDK_ROOT/components/toolchain/gcc/Makefile.posix, where SDK_ROOT is the location where you installed the Nordic SDK files.
Edit this file so it contains the path to where you installed the cross-compiler:

GNU_INSTALL_ROOT := /usr/local/gcc-arm-none-eabi-4_9-2015q2
GNU_VERSION := 4.9.3
GNU_PREFIX := arm-none-eabi

Building the blink example

Navigate to the “blink” example directory

cd SDK_ROOT/examples/peripheral/blinky

Depending on your board (the one we used was PCA10028), you might need to create a subdirectory within “blinky” by copying the one present, if your model number does not appear there:

cp -r pcaXXXXX pca10028

Edit the Makefile in the pca10028/armgcc directory so that it defines BOARD_PCA10028 (via -DBOARD_PCA10028), if it’s not already referenced there.

The path to the makefile is: SDK_ROOT/examples/peripheral/blinky/pca10028/s110/armgcc/Makefile.

Once you have saved the modification, return to the terminal window and, from the directory where the makefile is located, invoke make to build the image:

prompt> make

Even though the LED program does not need BLE functionality, let’s load the S110 firmware prior to loading our image for illustrative purposes:

prompt> JLinkExe -device nrf51822
JLink> loadfile SDK_ROOT/components/softdevice/s110/hex/s110_softdevice.hex

And now we’ll load our blink example

JLink> loadfile _build/nrf51422_xxac.hex
JLink> r 
JLink> g

You should now see the board’s four LEDs blink at a nice rhythm.

Debugging

Download the jlinkgdbserver debugger from here. When run, it will connect to the board via the serial cable, and wait for commands coming from the GNU debugger, which was included in the GCC download described previously.

To build with debug symbols, invoke make with the debug goal:

prompt> make clean
prompt> make debug

Run the debugger server in a terminal window or tab:

prompt> jlinkgdbserver -device nrf51822 -if swd -speed 4000 -noir -port 2331

Open another terminal window and run gdb on your image from the armgcc subdirectory so that it can load the debug symbols created when building the application:

prompt> gdb51 program-name.out
(gdb) target remote localhost:2331
(gdb) gdb-command-here

This runs the debugger, which loads the debug symbols and relays instructions to the JLink GDB server, which in turn relays them to the board.

Summary

We made sure that the hardware and the development environment were set up correctly for future application development. In order to take advantage of the hardware’s capabilities, please refer to the documentation of the board and firmware here, which contains essential links to the BLE functionality as well as a demo mesh project.

Acknowledgements

I’d like to thank Tim Kadom, my friend and colleague at ThoughtWorks, who sparked my interest by introducing me to BLE and mesh applications and was instrumental in helping me set up the environment and getting everything to work.


Arduino programming using Ruby, Cucumber & rSpec


The project

This project serves as a sanity check that all is in order with the hardware, without the need to write on-board code using the IDE or use the AVR toolchain. What better tool than Ruby to do so?

The first thing we’ll do is assure that the board and its built-in LED are responsive. Let’s define the behaviour we would like, and implement it using Cucumber, in true BDD fashion:

Feature:
  Assure board led is responsive

  Background:
    Given the board is connected

  Scenario: Turn led on
    When I issue the led "On" command
    Then the led is "On"

  Scenario: Turn led off
    When I issue the led "Off" command
    Then the led is "Off"

The step implementation follows:

require 'driver'

Given(/^the board is connected$/) do
  @driver ||= Driver.new
end

When(/^I issue the led "([^"]*)" command$/) do |command|
  value = string_to_val command
  expect(@driver.set_led_state value).to be value
end

Then(/^the led is "([^"]*)"$/) do |state|
  expect(@driver.get_led_state).to eq string_to_val state
end

def string_to_val state
  case state.downcase
    when 'on'
      my_state = ON
    when 'off'
      my_state = OFF
  end
end

Some things to note:

  • We don’t have an assertion on @driver ||= Driver.new because the driver will simulate a connection in case the physical board is disconnected or unavailable due to disrupted communications.
  • The user communicates using the words “on” and “off”, which are translated to ON and OFF for internal use.

This test will fail, of course, as we have yet to define the Driver class, so we drop down to rSpec, in TDD fashion:

require 'driver'

describe "led functions" do
  before(:each) do
    @driver = Driver.new
  end

  it "turns the led on" do
    expect(@driver.set_led_state ON).to eq ON
  end

  it "turns the led off" do
    expect(@driver.set_led_state OFF).to eq OFF
  end

  it "blinks" do
    @driver.blink 3
  end
end

This too fails, of course, and we implement Driver thus:

class Driver
  def initialize
    @arduino ||= ArduinoFirmata.connect nil, :bps => 57600
  rescue Exception => ex
    puts "Simulating. #{ex.message}" if @arduino.nil?
  end

  def set_led_state state
    result = @arduino.digital_write(LED_PIN, state)
  rescue Exception => ex
    @state = state
    state
  end

  def get_led_state
    @arduino.output_digital_read(LED_PIN)
  rescue Exception => ex
    @state
  end

  def blink num
    (0..num).each do
      set_led_state ON
      sleep 0.5
      set_led_state OFF
      sleep 0.5
    end
  end
end

 

Some things to note:

  • I am using the arduino_firmata gem, please see the Gemfile for details.
  • The initialize method catches the exception thrown when the Arduino is not connected, as the other methods do, in order to simulate the board in such circumstances. The simulation always succeeds, by the way, and was coded to allow development without the board connected.
  • arduino.output_digital_read is a monkey-patch to the gem, as I could not find a way to query the board whether an output pin was on or off:
module ArduinoFirmata
  class Arduino
    def output_digital_read(pin)
      raise ArgumentError, "invalid pin number (#{pin})" if pin.class != Fixnum or pin < 0
      (@digital_output_data[pin >> 3] >> (pin & 0x07)) & 0x01 > 0 ? ON : OFF
    end
  end
end
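The examples above also rely on a few constants that are not shown; something along these lines is assumed (pin 13 is the Uno’s built-in LED, and arduino_firmata’s digital_write takes a boolean):

# Assumed constants; adjust LED_PIN for your board.
LED_PIN = 13
ON  = true
OFF = false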

All green

Having implemented the code, the tests should now pass and running rake again will run both Cucumber and rSpec, yielding:

~/Documents/projects/arduino (master)$ rake
/Users/ThoughtWorks/.rvm/rubies/ruby-2.2.1/bin/ruby -I/Users/ThoughtWorks/.rvm/gems/ruby-2.2.1/gems/rspec-support-3.3.0/lib:/Users/ThoughtWorks/.rvm/gems/ruby-2.2.1/gems/rspec-core-3.3.1/lib /Users/ThoughtWorks/.rvm/gems/ruby-2.2.1/gems/rspec-core-3.3.1/exe/rspec --pattern spec/\*\*\{,/\*/\*\*\}/\*_spec.rb
...

Finished in 7.56 seconds (files took 0.27749 seconds to load)
3 examples, 0 failures

/Users/ThoughtWorks/.rvm/rubies/ruby-2.2.1/bin/ruby -S bundle exec cucumber 
Feature: 
  Assure board led is responsive

  Background:                    # features/initial.feature:4
    Given the board is connected # features/step_definitions/initial_steps.rb:3

  Scenario: Turn led on               # features/initial.feature:7
    When I issue the led "On" command # features/step_definitions/initial_steps.rb:7
    Then the led is "On"              # features/step_definitions/initial_steps.rb:12

  Scenario: Turn led off               # features/initial.feature:11
    When I issue the led "Off" command # features/step_definitions/initial_steps.rb:7
    Then the led is "Off"              # features/step_definitions/initial_steps.rb:12

2 scenarios (2 passed)
6 steps (6 passed)
0m4.579s

 

Make this better!

The project is here. Please feel free to fork and contribute.

Conclusion

How much is “good enough”? Notice that the assertions are implemented using the data structure exposed by arduino_firmata, not with a call to the board itself. This is always a trade-off in testing: how far should we go? For this project, testing via the data structure is “good enough”. For a medical application, or something that flies a plane, it obviously isn’t, and we would have to assert on the electric current flowing to the LED. And then again, who is to assure us that the LED is actually emitting light?

There’s not much else we can do with a standalone Arduino without any peripherals connected, but it’s enough to make sure that everything is set up correctly for future development.

Disclaimer

This installment was to show a quick-and-dirty sanity check without bothering to flash the device.

Afterword

The testing and writing of this installment were made while flying to Barcelona, hoping that fellow passengers would not freak out seeing wires and blinking lights mid-flight.

Happy Arduinoing!

Infrastructure as code using Vagrant, Ansible, Cucumber and ServerSpec


Designing and developing VMs as code is at last mainstream. This post is in fact a presentation I give to highlight that we can treat infrastructure code just as we would regular code.

We use TDD/BDD and monitors to spec, implement, test and monitor the resulting VM, keeping its code close to the app’s code and as an integral part of it.

infrastructure as code

Install MySQL using Ansible, using an idempotent script


This Ansible role will install MySQL on a *nix and may be run multiple times without failure, even though root’s password is changed when running it.
The order is important and here are some tips:

  • The ‘etc.my.cnf’ template does not include user and password entries
  • The ‘.my.cnf’ template only includes user and password entries and is copied to root’s home directory (since my script runs as root), not the deploy user’s home directory.
  • Root’s password is set for security reasons
  • The deploy user is only granted access to the application’s databases. I use db1 and db2 as examples here.

Put the section below in your role’s tasks/main.yml file.

- name: Install MySQL packages
  apt: pkg={{item}} state=installed
  with_items:
    - bundler
    - mysql-server-core-5.5
    - mysql-client-core-5.5
    - libmysqlclient-dev
    - python-mysqldb
    - mysql-server
    - mysql-client
    - build-essential

- name: Remove the MySQL test database
  action: mysql_db db=test state=absent

- name: Create global my.cnf
  template: src=etc.my.cnf dest=/etc/mysql/my.cnf

- name: Create databases
  mysql_db: name={{item}} state=present collation=utf8_general_ci encoding=utf8
  with_items:
    - db1
    - db2

- name: Add deploy DB user and allow access to news_* databases
  mysql_user: name={{user}} password={{password}} host="%" priv=db1.*:ALL/db2.*:ALL,GRANT state=present

- name: Set root password
  mysql_user: name=root password={{password}} host="{{item}}" priv=*.*:ALL,GRANT state=present
  with_items:
    - "{{ansible_hostname}}"
    - 127.0.0.1
    - ::1
    - localhost

- name: Create local my.cnf for root user
  template: src=my.cnf dest=/root/.my.cnf owner=root mode=0600

- name: Restart the MySQL service
  action: service name=mysql state=restarted enabled=true

What is the difference between TDD and BDD?


The short answer is: none.

All variants of Driven Development (henceforth the ‘xDDs’) strive to attain focused, minimalistic coding. The premise of lean development is that we should write the minimal amount of code to satisfy our goals. This principle can be applied to any development management methodology the team has, whether it be Waterfall, Agile or any other.

A way to ensure that code keeps solving a given problem over time and change is to articulate the problem in machine-readable form. This allows us to programmatically validate its correctness.

For this reason, xDD is mainly used in the context of testing frameworks. Goals, as well as the code to fulfil them, are run by a framework as a series of tests. In turn, these tests may be used in collaboration with other tools, such as continuous integration, as part of the software development cycle.

We’ve now established that writing tests is a Good Thing(tm). We now turn to answer “when”, “which” and “how” tests should be written, as we strive to achieve a Better Thing(tm).

Defining goals in machine-readable form in itself does not assure the imperative of minimalistic development. To solve this, someone had a stroke of genius: The goals, now viewed as tests, are to be written prior to writing their solutions. Lean and minimalistic development is attained as we write just enough code to satisfy a failing test. As a developer I know, from past experience, that anything I write will ultimately be held against me. It will be criticised by countless people in different roles over a long period of time, until it will ultimately be discarded and rewritten. Hence, I strive to write as little code as possible, Vanitas vanitatum et omnia vanitas.

However, the shortcomings of this methodology are that we need a broad test suite to cover all the goals of the product along with a way to ensure that we have implemented the strict minimum that the test required. I’ll be visiting these two points later, but would like to primarily describe the testing pyramid and enumerate the variants of DD and their application to the different layers.

Having established when to write the tests (prior to writing code), we now turn to discuss “which” tests we should write, and “how” we should write them.

The Testing Pyramid

The testing pyramid depicts the different kinds of tests that are used when developing software.

A graphically wider tier depicts a quantitatively larger set of tests than the tier above it, although some projects may be depicted as rectangles when there is high complexity and the testing technology allows for them.

The testing pyramid

 

Unit Tests

Although people use the term loosely to denote tests in general, Unit Tests are very focused, isolated and scoped to single functions or methods within an object. Dependencies on external resources are discounted using mocks and stubs.

Examples

Using rSpec, a testing framework available for Ruby, this test assures that a keyword object has a value:

it "should not be null" do
  k1 = Keyword.new(:value => '')
  k1.should_not be_valid
end

This example shows the use of mocks, which are programmed to return arbitrary values when their methods are called:

it "returns a newssource" do
  NewsSource.should_receive(:find_by_active).and_return(false)
  news_source = NewsSource.get_news_source
  news_source.should_not == nil
end

NewsSource is mocked out to return an empty set of active news sources, yet the test assures that one will be created in this scenario.

By virtue of being at the lowest level of the pyramid, Unit Tests serve as a gatekeeper to the source control management system used by the project: these tests run on the developer’s local machine and should prevent code that breaks them from being committed to source control. A counter-measure for developers who have committed such code anyway is to have a continuous integration service revert those commits when the tests fail in its environment. When practicing TDD (as a generic term), developers write Unit Tests prior to implementing any function or method.

Functional or Integration Tests

Functional or integration tests span a single end-to-end functional thread. They represent the total interaction of the internal and external objects expected to deliver a portion of the application’s desired functionality.
These tests, too, serve as gatekeepers, but of the promotion model. By definition, passing tests represent allegedly functioning software, so failures represent software that does not deliver working functionality. As such, failing tests may be allowed into source control, yet they will prevent the build from being promoted to higher levels of acceptance.

Example

Here we are assuring that Subscribers, Articles and Notifications work as expected. Real objects are used, not mocks.

it "should notify even out of hours if times are not enabled" do
  @sub.times_enabled = 0
  @notification = Notification.create!(:subscriber_id => @sub.id, :article_id => @article.id)
  @notification.subscriber.should_notify(Time.parse(@sub.time2).getutc + 1.hour).should be_true
end

A “BDD” example is:

Feature: NewsAlert user is able to see and manage her notifications

Background:
  Given I have subscriptions such as "Obama" and "Putin"
  And "Obama" and "Putin" have notifications
  And I navigate to the NewsAlert web site
  And I choose to log in and enter my RID and the correct password

Scenario: Seeing notifications
  When I see the "Obama" subscription page
  Then I see the notifications for "Obama"

This is language a BA or Product Owner can understand and write. If the BAs or POs on your project cannot write these scenarios, then you can “drop down” to rSpec instead, if you think the above is too chatty.

Performance and Penetration Tests

Performance and penetration tests are cross-functional and without context. They test performance and security across different scopes of the code by applying expected thresholds to unit and functional threads. At the unit level, they will surface problems with poorly performing methods. At the functional level, poorly performing system interfaces will be highlighted. At the application level, load/stress tests will be applied to selected user flows.

Example

A “BDD” example is:

Scenario: Measuring notification deletion
  When I decide to remove all "1000" notifications for "Obama"
  Then it takes no longer than 10 milliseconds
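A hedged sketch of how such steps might be implemented in Ruby follows; Subscription and its notifications association are hypothetical model names used only for illustration.

require 'benchmark'

When(/^I decide to remove all "(\d+)" notifications for "(.*?)"$/) do |count, keyword|
  subscription = Subscription.find_by_keyword(keyword)
  expect(subscription.notifications.count).to eq count.to_i
  # Benchmark.realtime returns seconds; convert to milliseconds for the assertion.
  @elapsed_ms = Benchmark.realtime { subscription.notifications.delete_all } * 1000
end

Then(/^it takes no longer than (\d+) milliseconds$/) do |max_ms|
  expect(@elapsed_ms).to be <= max_ms.to_f
end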

User Interface and User Experience Tests

UI/UX tests validate the user’s experience as she uses the system to achieve her business goals for which the application was originally written.
These tests may also validate grammar, layout, style and other standards.
Of the tests in the pyramid, these are the most fragile. One reason is that their authors do not separate the essence from the implementation. The UI will likely have the greatest rate of change in a given project, as product owners are exposed to working software iteratively. UI tests that rely heavily on the UI’s physical layout will need constant rework as the system changes. Being able to express the essence, or desired behaviour, of the thread under test is key to writing maintainable UI tests.

Example

Feature: NewsAlert user is able to log in

Background:
  Given I am a Mobile NewsAlert customer
  And I navigate to the NewsAlert web site
  Then I am taken to the home page which has "Log in" and "Activate" buttons

Scenario: Login
  When I am signed up
  When I choose to log in and enter my ID and the correct password
  Then I am logged in to my account settings page

BDD or ATDD may be used at all these layers, as in some instances it is more convenient to use the User Story format for integration tests than low-level Unit Test syntax. ATDD is put to full use when the project is staffed with Product Owners who are comfortable using the English-like syntax of Gherkin (see example below). In their absence, or if they are unwilling, BAs may take on this task. If neither is available or willing, developers usually “drop down” to a more technical syntax such as rSpec’s, in order to remove what they refer to as “fluff”. I recommend writing Gherkin, as it serves as a functional specification that can be readily communicated to non-technical people as a reminder of how they intended the system to function.

Exploratory Testing

Above “UI Tests”, at the apex of the pyramid, we find “Exploratory Testing”, where team members perform unscripted tests to find corners of functionality not covered by other tests. Successful ones are then redistributed down to the lower tiers as formally scripted tests. Since these are unscripted, we’ll not cover them further here.

Flavours of Test Driven Development

This author thinks that all xDDs are basically the same, deriving from the generic term of “Test Driven Development”, or TDD. When thinking of TDD and all other xDDs, please bear in mind the introductory section above: we develop the goals (tests) prior to developing the code that will satisfy them. Hence the “driven” suffix: nothing but the tests drives our development efforts. Given a testing framework and a methodology, we can implement a system purely by writing code that satisfies the sum of its tests.

The dichotomy of the different xDDs can be explained by their function and target audience. Generically, and falsely, TDD would most probably denote the writing of Unit Tests by developers as they implement objects and need to justify methods therein and their implementation. Applied to non-object oriented development, Unit Tests would be written to test single functions.

The reader may contest that this is the first step in a “driven” system. To have methods under test, one must have their encapsulating object, itself borne of an analysis yet unexpressed. Subscribing to this logic, I usually recommend development using BDD. Behaviour-driven development documents the system’s specification by example (a must-read book), regardless of implementation details. This allows us to distinguish and isolate the specification of the application by describing its value to its consumer, with the goal of ignoring implementation and user interactions.

This has great consequences in software development. As one writes BDD scripts, one shows commitment and rationale to their inherent business value. Nonsensical requirements may be promptly pruned from the test suite and thus from the product, establishing a way to develop lean products, not only their lean implementation.

A more technical term is Acceptance Test Driven Development (ATDD). This flavour is the same as BDD, but alludes to Agile story cards’ tests being expressed programmatically. Here, the acceptance criteria for stories are translated into machine-readable acceptance tests.

As software development grows to encompass Infrastructure as Code (IaC), there are now ways to express hardware expectations using Monitor-Driven Development (MDD). MDD applies the same principles of lean development to code that represents machines (virtual or otherwise).

Example

This example will actually provision a VM, configure it to install mySQL and drop the VM at the end of the test.

Feature: App deploys to a VM

Background:
  Given I have a vm with ip "33.33.33.33"

Scenario: Installing mySQL
  When I provision users on it
  When I run the "dbserver" ansible playbook
  Then I log on as "deployer", then "mysql" is installed
  And I can log on as "deployer" to mysql
  Then I remove the VM

The full example can be viewed here.

ServerSpec gives us bliss:

describe service('apache2') do
  it { should be_enabled }
  it { should be_running }
end

describe port(80) do
  it { should be_listening }
end

For a more extreme example of xDD, please refer to my blog entry regarding Returns-driven Development (RDD) for writing tests from a business-goal perspective.

Tools of the trade

.net: nUnit | SpecFlow

Java: jUnit | jBehave

Ruby: rSpec | Cucumber | ServerSpec

Caution

The quality of the tests is measured by how precisely they test the code at their respective levels, as well as by how much code and responsibility they span and by the quality of their assertions.

Unit tests that do not use stubs or mocks when accessing external services of any kind are probably testing too much and will be slow to execute. Slow test suites will eventually become a bottleneck and may be destined to be abandoned by the team. Conversely, testing basic compiler functions will lead to a huge test suite, giving a false indication of the breadth of the safety net it provides.

Similarly, tests that lack correct assertions or have too many of them, are either testing nothing at all, or testing too much.

Yet there is a paradox: the tests’ importance and impact grow in proportion to their fragility in the pyramid. In other words, as we climb the tiers, the tests become more important, yet less robust and trustworthy at the same time. A major pitfall at the upper levels is the lack of application or business-logic coverage. I was on a project that had hundreds of passing tests, yet the product failed in production as external interfaces were not mocked, simulated nor tested adequately. Our pyramid’s peak was bare, and the product’s shortcomings were immediately visible in production. Such may be the fate of any system that interacts with external systems across different companies; lacking dedicated environments, one must resort to simulating their interfaces, something that comes with its own risks.

In summary, we quickly found that the art and science of software development is no different than the art and science of contriving its tests. It is for this reason that I rely on the “driven” methodologies to save me from my own misdoings.

Summary

Write as little code as you can, using TDD.

Happy driving!

From Zero to Deployment: Vagrant, Ansible, Capistrano 3 to deploy your Rails Apps to DigitalOcean automatically (part 2)


tl;dr

Use Vagrant to create a VM using DigitalOcean and provision it using Ansible.

Introduction

Parts zero and one of this blog series demonstrate some Ansible playbooks that create a VM ready for Rails deployment using Vagrant. Here we show the Vagrantfile that will provision a DigitalOcean droplet for public access.

First thing to do is to install the DigitalOcean plugin:

 
vagrant plugin install vagrant-digitalocean

 

The Vagrantfile for DigitalOcean

# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = 'digital_ocean'
  config.vm.box_url = "https://github.com/smdahlen/vagrant-digitalocean/raw/master/box/digital_ocean.box"
  config.vm.hostname = "staging"
  config.vm.define "staging"

  config.vm.provision 'ansible' do |ansible|
    ansible.playbook = "devops/user.yml"
    ansible.inventory_path = "devops/webhosts"
    ansible.verbose = "vvvv"
    ansible.sudo = true
  end

  config.vm.provider :digital_ocean do |provider, override|
    override.ssh.private_key_path = '~/.ssh/id_rsa'
    override.vm.box_url = "https://github.com/smdahlen/vagrant-digitalocean/raw/master/box/digital_ocean.box"
    provider.client_id = '<YOUR ID>'
    provider.api_key = '<YOUR KEY>'
    provider.image = "Ubuntu 12.10 x64"
    provider.region = "New York 2"
  end
end

You can get your ID and Key from the DigitalOcean website once logged on there.

 

Not much to it, eh?

The Ansible files stay mostly the same, apart from the use of ‘root’ wherever ‘vagrant’ was used. And don’t forget to change your inventory file to the IP address given by DigitalOcean. I’ll be thinking about how to automate these parameters too, to have complete hands-free installations.

If you are curious to learn more about configuring VMs in DigitalOcean, please see their help page here.

From Zero to Deployment: Vagrant, Ansible, Capistrano 3 to deploy your Rails Apps to DigitalOcean automatically (part 0)


tl;dr

Use Cucumber to start us off on our Infrastructure as code journey.

 

Introduction

Part 1 of this blog series demonstrates some Ansible playbooks to create a VM ready for Rails deployment using Vagrant. This is a prequel in the sense that, as a staunch believer in all that’s xDD, I should have started this blog with some Cucumber BDD!
Please forgive my misbehaving and accept my apologies with a few Cucumber scenarios as penance. Hey, it’s never too late to write tests…

The Cucumber Scenarios

As BDD artefacts, they should speak for themselves; write to me if they don’t as it means they were not clear enough!

 

Feature: App deploys to a VM

Background:
  Given I have a vm with ip "33.33.33.33"

Scenario: Building the VM
  When I provision users on it
  Then I can log on to it as the "deploy" user
  And I can log on to it as the "root" user
  And I can log on to it as the "vagrant" user
  Then I remove the VM

Scenario: Adding Linux dependencies
  When I provision users on it
  When I run the "webserver" ansible playbook
  And I log on as "deploy", there is no "ruby"
  But "gcc" is present
  Then I remove the VM

Scenario: Installing mySQL
  When I provision users on it
  When I run the "dbserver" ansible playbook
  Then I log on as "deploy", then "mysql" is installed
  And I can log on as "deploy" to mysql
  Then I remove the VM

The Cucumber Steps

Given(/^I have a vm with ip "(.*?)"$/) do |ip|
  @ip = ip
  output = `vagrant up`
  assert $?.success?
end

When(/^I provision users on it$/) do
  output = `vagrant provision web`
  assert $?.success?
end

Then(/^I can log on to it as the "(.*?)" user$/) do |user|
  output = `ssh "#{user}@#{@ip}" exit`
  assert $?.success?
end

When(/^I run the "(.*?)" ansible playbook$/) do |playbook|
  output = `ansible-playbook devops/"#{playbook}".yml -i devops/webhosts`
  assert $?.success?
end

When(/^I log on as "(.*?)", there is no "(.*?)"$/) do |user, program|
  @user = user
  output = run_remote(user, program)
  assert !$?.success?
end

When(/^"(.*?)" is present$/) do |program|
  output = run_remote(@user, program)
  assert $?.success?
end

Then(/^I log on as "(.*?)", then "(.*?)" is installed$/) do |user, program|
  output = run_remote(user, program)
  assert $?.success?
end

Then(/^I remove the VM$/) do
  output = `vagrant destroy -f`
  assert $?.success?
end

Then(/^I can log on as "(.*?)" to mysql$/) do |user|
  `ssh "#{user}@#{@ip}" 'echo "show databases;" | mysql -u "#{user}" -praindrop'`
end

def run_remote(user, program)
  `ssh "#{user}@#{@ip}" '"#{program}" --version'`
end

From Zero to Deployment: Vagrant, Ansible, Capistrano 3 to deploy your Rails Apps to DigitalOcean automatically (part 1)


update: please refer to the prequel that sets the stage with Cucumber scenarios as a BDD exercise.

tl;dr

In this post, I would like to share that my anxiety about setting up a new server to host an application reminded me why I like being in IT: automation. I attempt to avoid snowflake servers and deploy a Rails application to a VM using idempotent scripts with the help of Ansible and Capistrano.

This entry is a step-by-step guide to getting a VM up and running with a Rails app deployed to it. I describe the steps to take with Vagrant, Ansible and Capistrano to deploy to a local VM, leaving deployment to DigitalOcean for part two.

the problem

Writing code comes easily to you. As a developer, you develop and test your code with a certain ease and enjoyment. To a certain extent, you may not even think much about the production phase of your project, as you may already have an environment set up. However, you might only have a vague idea of what your prod environment looks like, as you may have set it up, say, a year or two ago. Maybe your development environment is out of sync? Maybe you have to rely on other people (sys-admins) to take care of that “stuff”? That requires A HandOff Ceremony, something we want to avoid on planet DevOps.

In summary, it would be nice to have an automated, testable, repeatable way of provisioning hosts for testing and deployment uses. Obviously, scripts and scripting systems exist for that, and after mucking around with Chef and Puppet, I opted for Ansible.

a solution

In my mind, Ansible is to shell what CoffeeScript is to JavaScript: I can express what I want to do at a high level (given there's a module for it) and not worry about the details. In the case of Ansible, I don't have to worry about idempotence either. So I settled on a way to provision virtual machines (VMs) using Vagrant and Ansible.

While I do not claim to be an expert in any systems herein mentioned, I do declare that “it worked for me”. Please leave comments, tips and tricks if you see any aberrations or more elegant ways of doing things with these tools.

I’d like to credit my friend and colleague Jefferson Girao from ThoughtWorks for having introduced me to Ansible in the first place, and mention that he’s on a similar journey to optimising Rails deployment, with the goal of using Ansible only. I am taking a more conservative approach and will stick with good-old Capistrano for the Rails part.
 

0: Punt on Windows and Linux

The demo is on a Mac, but feel free to try to adapt it to other platforms.

 

1: Install VirtualBox, Vagrant and Ansible

Here we install stuff, not a lot. 

Get VirtualBox here, or by following the Vagrant guide, and then install the vagrant gem:

gem install vagrant

Now let's install Ansible with:

brew install ansible

That assumes you have brew installed. If you don't have it, I recommend installing it, as it makes Mac OS X installations easy. If you prefer not to use brew, do it the hard way.

 

2: Prepare to build the machine

Here we create a sub-directory that will contain our Vagrantfile and, later on, our Rails app. We'll keep the Vagrantfile near our source code so we can say that we're compatible with the idea of "Infrastructure As Code" (we'll get to that in a future chapter).

mkdir app
cd app
vagrant init

This will create an initial Vagrantfile. Replace it with this one: https://gist.github.com/ihassin/7968349

In summary, when run (don't run it yet, it will fail), this Vagrant script will spin up an Ubuntu Precise 64 instance, give it a home on your private network at IP 33.33.33.33, and invoke the Ansible provisioner to run the user.yml playbook. A rough sketch of what that boils down to follows.
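
For orientation only (the gist above is the authoritative version, and the box name here is an assumption), such a Vagrantfile boils down to something like:

# Vagrantfile (sketch; use the gist for the real thing)
Vagrant.configure("2") do |config|
  config.vm.define "web" do |web|
    web.vm.box = "precise64"                         # Ubuntu Precise 64 base box (assumed name)
    web.vm.network :private_network, ip: "33.33.33.33"

    web.vm.provision :ansible do |ansible|
      ansible.playbook = "devops/user.yml"           # the playbook we create below
    end
  end
end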

 

Intermission

Before we can run the above Vagrantfile, we need to create the 'user.yml' file in the devops directory, or elsewhere, if you care to change the 'ansible.playbook' line in the Vagrantfile.

I’d like to pause and explain what that user.yml playbook will do so you don’t freak out when you see me moving rsa keys all over the place.

On one hand, I'd like to set up a machine with all needed dependencies. This will require making some apt-get and other calls that need root rights. That's fine: we'll have root later on (when talking to DigitalOcean), and for the moment we have the default privileged 'vagrant' account for that. I would like, however, to run my Rails stuff under the 'deploy' account, which is better off being a regular account. So now we have two accounts, 'vagrant' (built-in) and 'deploy'. I care less about the vagrant user since we'll throw it away when we provision to DigitalOcean. I do care about the deploy account though:

That 'deploy' account will later be used to connect to an external git host, such as bitbucket or github, and it will need keys to do so. I will also be using that account to log into the instance, so it would be nice if it had my key too. For the SCM side, I generated a key pair and posted the public portion to bitbucket and github under my account, so that they will allow git operations with it.

So take a deep breath and step through ‘devops/user.yml’ by reading the task names.

 

3: Playbook: set up a user on the VM

At the app folder root, do this:

mkdir -p devops
 

Copy the following text into 'devops/user.yml': https://gist.github.com/ihassin/7968371

The task names sufficiently document what they do. Note the following, however:

1. I send over a known_hosts file that includes bitbucket's host key.
2. I send a config file containing bitbucket's host entry into the deploy user's .ssh directory so that the first git operation does not hang forever.

OK, if you’re eager to run this playbook, you’ll need the vars.yml file:

Create vars.yml in the same directory as the user.yml file and paste this into it: https://gist.github.com/ihassin/7968378

Replace the placeholder values with your own:

1. Running crypt on "secret" with "SaltAndPepper" will create a password hash that you place in the password variable; see the one-liner just after this list. That is the password for the deploy user created on your VM. It's neat that we don't have to keep clear-text passwords in YAML files.
2. repo holds the git repo your application resides in (for a later step).
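
If it helps, that hash can be produced with a Ruby one-liner along these lines (a sketch; Ruby's String#crypt is one convenient option, and note that classic DES crypt only honours the first two characters of the salt):

# Prints the hashed value to paste into the 'password' variable in vars.yml.
# "secret" is the clear-text password, "SaltAndPepper" the salt from item 1 above.
puts "secret".crypt("SaltAndPepper")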

And you’ll need the templates folder with the following files in it:

Create the templates folder:

mkdir -p templates

Inside it:

1. Copy your public key into a file named ‘your.pub’
2. Copy bitbucket’s RSA signature to a file named known_hosts, thus:

bitbucket.org,207.223.240.181 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAubiN81eDcafrgMeLzaFPsw2kNvEcqTKl/VqLat/MaB33pZy0y3rJZtnqwR2qOOvbwKZYKiEO1O6VqNEBxKvJJelCq0dTXWT5pbO2gDXC6h6QDXCaHo6pOHGPUy+YBaGQRGuSusMEASYiWunYN0vCAI8QaXnWMXNMdFP3jHAJH0eDsoiGnLPBlBp4TNm6rYI74nMzgz3B9IikW4WVK+dc8KZJZWYjAuORU3jc1c/NPskD2ASinf8v3xnfXeukU0sJ5N6m5E8VLjObPEO+mN2t/FZTMZLiFqPWc/ALSqnMnnhwrNi2rbfg/rd/IpL8Le3pSBne8+seeFVBoGqzHM9yXw==

3. Copy your deploy’s private key into a file named deploy_rsa
4. Copy your deploy’s public key into a file named deploy_rsa.pub
5. Copy this to a file named ssh_config:

Host bitbucket.org
  IdentityFile /home/deploy/.ssh/deploy_rsa
  StrictHostKeyChecking no

This will make some security people cringe – I’m bypassing checking on bitbucket. Yeah.

6. Create a copy of your sudoers file and add the following line to it:

deploy  ALL=(ALL:ALL) ALL

Then place it in the templates directory as well.

That’s all that’s needed as templates for now. 

You need an inventory file too. Create a file called webhosts in the devops directory and paste the following into it:

[webservers]
33.33.33.33

To run this playbook, enter this at the command prompt:

vagrant up web
vagrant provision web

The first line wakes up vagrant. If it’s the first time you’re trying to access Precise64, this step can take quite a bit of time – Vagrant will download the Precise64 box over your internet connection. Time to brew and drink some coffee.
The second line will be cute to watch: Ansible will light up your screen like a disco, at the end of which you'll have a VM with Ubuntu installed, as well as a login for deploy using your own ssh key.

You can access this VM via any of the following commands:

1. vagrant ssh
2. ssh vagrant@33.33.33.33
3. ssh deploy@33.33.33.33

If it does not work, either this blog is buggy or it's a case of PEBKAC. Please check and let me know.

If it works, have some fun with your new free VM, something that would have otherwise cost you a few hundred dollars at your retail PC store.

By the way, adventurous developers can try to provision directly from Ansible:

vagrant up web
ansible-playbook devops/user.yml -i devops/webhosts
 

4: Playbook: get some Linux

 

The playbook will give us a real Linux, allowing us to move forward with our provisioning (Ruby, Rails).

 

Create a file called webserver.yml and paste this into it: https://gist.github.com/ihassin/7968389 

Play it by issuing the following command:

ansible-playbook devops/webserver.yml -i devops/webhosts


5: Playbook: get some MySQL

The playbook will install MySQL on the provisioned VM. Create a file called dbserver.yml and paste this into it: https://gist.github.com/ihassin/8106956

It will install the needed packages for MySQL and then:

  • Start the service
  • Remove the test database
  • Create a ‘deploy’ user
  • Remove anonymous users from the DB
  • Set up a my.cnf file
  • Change root password
While changing the root password is a great idea, this step renders the playbook non-idempotent.


6: Playbook: get some Ruby

The playbook will install the current Ruby 2.0 version. This edition of the blog does not use RVM, as it is hell to deal with in non-interactive terminals; I am saving the setup of RVM with Ansible for a later post.

Create a file called virtual_ruby.yml and paste this into it: https://gist.github.com/ihassin/7968406

Play it by issuing the following command:

ansible-playbook devops/virtual_ruby.yml -i devops/webhosts
 

7: Playbook: get the project’s ruby and install bundler

The playbook will install the project's Ruby under the deploy user and install bundler to be used later on.

Create a file called project.yml and paste this into it: https://gist.github.com/ihassin/8004746

Play it by issuing the following command:

ansible-playbook devops/project.yml -i devops/webhosts
 

8: Using Capistrano 3 to deploy the Rails app

This is not a playbook, of course, but a Capistrano 3 recipe.

Install Capistrano 3 following their instructions and replace the deploy.rb file with this one: https://gist.github.com/ihassin/8106917.

Replace the contents of config/deploy/production.rb file with this: https://gist.github.com/ihassin/8107048.
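
To give a feel for what goes in there, here is a minimal sketch only; the gists above are the source of truth, and the application name, repo URL and paths below are placeholders:

# config/deploy.rb (sketch; all values are placeholders, see the gist)
set :application, 'app'
set :repo_url,    'git@bitbucket.org:you/app.git'
set :deploy_to,   '/home/deploy/app'
set :linked_dirs, %w[log tmp/pids tmp/sockets public/system]

# config/deploy/production.rb (sketch)
server '33.33.33.33', user: 'deploy', roles: %w[web app db]

With something like that in place, cap production deploy connects over ssh as deploy and performs the usual Capistrano dance of releases, shared files and the current symlink.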

Deploy the app by issuing the following command:

cap production deploy 

9: Have some fun with your new scripts. See the disco colours!

You can repeat these commands to provision, re-provision or just test Ansible’s idempotence:
vagrant up web
vagrant provision web
ansible-playbook devops/user.yml -i devops/webhosts -vvvvv
ansible-playbook devops/webserver.yml -i devops/webhosts -vvvvv
ansible-playbook devops/dbserver.yml -i devops/webhosts -vvvvv
ansible-playbook devops/virtual_ruby.yml -i devops/webhosts -vvvvv
ansible-playbook devops/project.yml -i devops/webhosts -vvvvv
cap production deploy

In the next post, we'll push the Rails project to a DigitalOcean VM instead of a local one and run it there.

Please comment and send feedback about the form and content.

Happy provisioning!

output

Returns Driven Development


The premise of all the “DD” acronyms is to minimise technical debt in one way or another and otherwise drive us to being lean.

The motivation for this article is "writing the minimum amount of code" in the spirit of Agile in general and TDD/BDD specifically. As someone who has developed code for more than a quarter of a century, I have learned that anything I write as code will be used against me as long as the software is in use. I don't want to write more code than I need to in order to justify my reward. In this case, my reward is to have the RDD monitor set off an alert that serves as feedback to knowledgeable people to make decisions about the product such that I will continue to be rewarded.
So, what is RDD?

TDD instructs us to write as little code as we can to assure a passing set of tests.
BDD instructs us to write as little code as we can to assure a valuable set of features.
I'd like to extend these guidelines to a methodology that instructs us to write as little code as we can to assure a specific level of business returns (i.e. ROI). I'll call it RDD, for fun: Returns Driven Development (thanks to my fellow ThoughtWorker Kyle for coming up with the name!).

In most cases, there is an underlying business case for creating or modifying software. Of those, some are justified by a business plan that shows how much more money the business would make if only the requested features were implemented. Of those, only a few are borne of a real market analysis. In the rest of the cases, the primary motivation is the product manager’s intuition that it would be nice to have these new features.

I wanted a way for the product owner to convey her ideas about the modifications, without regard to her motivation. RDD is a way to describe software feature requests without having to make up financial data to justify the requests. It’s also a way to validate the intuition of the product team.

Some examples:

Our customer acquisition rate will increase if we made signup easier.

Our salespeople will sell more licenses if they could demonstrate the software at trade shows with preloaded customer accounts.

Our sales will increase if we exposed our B2B services to the public Internet.

All these sound valid points for a product manager to present as justification in embarking on a technical investment in creating or modifying existing software.

The only change I would make to the above examples is to add a quantity. Acquisition rate will increase by 30%; we’d have 25% more sales etc.

This is the starting point of RDD: in order to assure the growth of this business, we need to increase sales by X%.

Now that we have that statement, it will be scrutinised by the company's board and a decision will be made regarding its implementation. If action is taken, RDD is then charged with proving those statements.

RDD assures statement validation by providing business feedback to the product owner that the course charted is indeed driving towards the stated goal. The sooner and more precise the feedback, the better the decisions that will be made to adjust the statement or the course of action.

RDD proposes to set up the monitors first and develop minimal software to satisfy them. The monitors will provide a baseline of the current situation and, prior to development, will indicate whether the premise was indeed factual and worthwhile.

As an example, an RDD monitor will state:

Generate an alert if the number of daily license sales is below 30 or is declining by more than 5% week over week.

Generate an alert if B2B API calls originate from more than 10% of our customers.
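
To make the first monitor concrete, here is an illustrative sketch only; the Sale model and the notification call are hypothetical stand-ins for whatever your stack provides:

# Hypothetical RDD monitor for the license-sales example (all names are made up).
require 'date'

def license_sales_alert?(sales_count_on)
  today     = sales_count_on.call(Date.today)
  last_week = sales_count_on.call(Date.today - 7)
  today < 30 || (last_week > 0 && today < last_week * 0.95)
end

# Run it daily (cron, whenever, etc.), counting rows in a hypothetical sales table:
# notify_product_owner if license_sales_alert?(->(day) { Sale.where(sold_on: day).count })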

Primarily, the alerts will indicate movement in their business domains and will set a baseline of alert frequency. They can also serve as indicators that something is not functioning on a technical level, but that’s secondary as other IT alerts exist for that purpose.

Now comes the fun, DD, part:

The monitor’s premise is dependent on much more than meets the eye at first reading.
The data for the alarm may not exist. The transaction table may or may not exist, depending on the state of the product that the alarm is set to monitor. If this is a new product, a transaction data source may actually have to be created just for the monitor. That alone is an excellent improvement to the organisation.
Following that scenario, a data table without data is not much use; enter BDD. Enter TDD. Soon, you may have developed the app from scratch. A whole system may have been created as the result of a Returns statement made by the product manager, yet we invested the minimal amount of development needed to satisfy the monitor. Feedback is guaranteed as the monitor was implemented up front even before the software was.
RDD is also effective when extending existing software, while assuring that the minimum code is written to satisfy the monitor's goal.
The claim that a simple sign-up form will boost customer acquisition will soon be proven right or wrong. The monitor will raise an alert if signups have increased week over week. If it does not, we may need another monitor that observes another aspect of the product and questions its value to the users.

So, the next time you are involved in a product’s inception or new feature, start with business monitors!

Try asking for a business returns monitor from the operations group. At first, their mouths will open and close but without words coming out. Soon after, they will realise that it is nothing but another monitor. You then employ DDD/BDD/TDD to develop it and the system that feeds it information. You then sit back and wait for the product owners to request new monitors or features as they attempt to regulate the reported data to prove their original claims either right or wrong, or a little of both.

Drobo will not mount in Finder


I have a Drobo with 4x2TB disks installed to hold all my stuff. ALL my stuff.

The other day, I connected the Drobo in order to run a Time Machine backup. The Drobo came up green, yet Finder did not mount it. I heard rumbling noises and knew that something was messing with it; I just was not sure whether it was Finder or the Drobo's firmware itself.

DiskUtils could not access the disk. Big panic. Called Drobo support and was surprised by their shallow “do a repair” reply.

I stumbled upon the following procedure by poking around and taking chances. I was “this far” from writing off 3TB of photos and projects, so I felt I had nothing to lose. Please note that in no way can I guarantee that this will help you, but it worked for me:

1. The rumbling noises that I heard when I connected the Drobo to the Mac were due to Finder trying to mount the disk, but running fsck before doing so. The rumbling was probably due to a screwed-up MBS.

Disk Utils was hampered as long as Finder was messing with Drobo, so I killed it:

2. Looked for any disk-related tasks by ‘ps ax’ in terminal.

3. Killed those tasks using ‘sudo kill -9’

4. That left the Drobo in the exact state I wanted it: Unmounted and left alone

5. Ran DiskUtils and chose to Repair the disk. I got a reasonable progress bar (as opposed to the infinite one when I ran DiskUtils while Finder had stuff going on). It took over 12 hours to repair. Be patient!

6. Used the Mount icon in DiskUtils to mount Drobo.

7. All good, happy.

So now what? A backup of a backup on S3? Is there no end to this cycle?

Command & Control Management – The Party Killer


I was asked why some developers don’t have parties or late night coding sessions. I do not think it was meant literally, since organising a party is a trivial activity and does not warrant a discussion.

I understood the question to be why wouldn’t they be as involved with their projects as others might be elsewhere. After all, celebrating success or staying late to meet a deadline is the result of being engaged, involved and caring about the projects one is working on. Consequently, not celebrating success may be a symptom of not being engaged, nor involved nor caring about those projects.

I propose that they do not have parties because their management style is “Command and Control”. They have a hierarchy and teams are told what to do. Teams have leaders that enforce C&C upon their members. There is a separation of duties and expectations across teams and the relationship between all teams is defined by their relative roles in the project’s lifecycle. A “food chain” emerges that is defined by “suppliers” and “customers”. A team’s role in the project is either to serve another team’s needs or is entitled to another team’s services.
This vision is well suited to the C&C management style, which clearly defines the roles and behaviour of the participating teams. The teams, however, rarely have a say in defining the goals of the product, nor in the overall strategy for achieving those goals. Value is skewed and variances from it are not tolerated, thus creating more problems for future business and technological change.

Handoffs between teams are mandated, rarely with any multilateral conversations, and the handoff of requirements is basically synonymous with “Shut up, this is what we want, what’s the estimate?”

C&C stifles independent thought and is inconsistent with excellent programming, which demands intelligence and creativity.

No one would think of ordering the sax player to play certain notes in a jazz session. Developing products is closer in structure and dynamics to jazz sessions than to orchestrated classical music.

The C&C approach distances people from the product so much that moving the project to the next station in the workflow is met with a sigh of relief …not with the joy we all want to feel when creating something of value. No one in the C&C production chain feels as though they own the product. No one thinks to throw a party because they do not have shared values to celebrate.

C&C demotivates creativity, teamwork, and the drive it takes to work long hours or over the weekend. Under C&C, people wait to be told what to do while the list of backlog tasks becomes a black hole of client frustration.

Another reason is that developers are often described as "resources". Just by that outlook, we'll fail. We are human beings with names, abilities and skills. Any project plan will fail if we do not address our people on a personal level, taking their strengths and weaknesses into account. When that is not a part of management practices, teams are managed like conference rooms – available or busy.
A happy, engaged, concentrated developer will produce quality software, all else being equal. A resource is acquired, used, then relinquished. Have we ever seen conference rooms get together for a party? Resources don’t have parties, humans do.

Product Management, Marketing, PMOs, Developers and QA should all meet and brainstorm throughout the project’s lifecycle. Without collaboration, there is little creativity and even less ownership. No one is motivated to pitch in to make better quality products. Teams will not feel ownership, have parties or mark the occasion of software releases if they aren’t invited to actively participate at the beginning. Because of separation, developers are never present at business meetings, and don’t have the opportunity to fully understand the client’s needs. Instead, they are ordered to write code (quickly).

I doubt that we will ever see self-organising teams or a true sense of ownership as long as C&C and segregation are instilled in management's culture. We need ownership and team collaboration. Alas, teams find themselves pitted against each other in a game of politics. There is scarcely any collaboration between them, only downstream C&C. In some companies, PM does not stand for Project Manager, but for Political Manager.

As a consequence, creativity and communication have been replaced by stale, boring, incomplete and sub-standard PowerPoint presentations. They present lies: the business unit grossly exaggerates the product's value and the developers exaggerate the cost of its implementation. Everyone is scared.

Another symptom of C&C is that we do not share goals. Since we are broken into segregated teams, each tends to develop their own set of goals and priorities.

Those goals are then presented (barked) as imperatives to the other teams. Product Management’s goal is to have something available in the market. The developers’ goal is to have something adhering to current best practices and quality standards. Project Management’s goal is to satisfy Product Management and so forth. The lack of shared goals divides the teams, creates the need to run interference, and justifies still greater C&C.

I don’t know what value the other units extract from such fiascos, but developers do manage to extract experience and technical problem solving skills, not the least of which are getting around SysAdmin and Network Engineering obstacles that diminish their productivity.
So theirs is not a total loss, but how could anyone expect any team to rejoice at the release of such products? The feeling is of regret, if anything, at having been forced to write rushed code for a perceived meaningless business case. No parties there.

On the other hand, in startup restaurants cooks also take out the garbage, and the owner also sweeps the floors. In software development, developers take QA’s testability needs into account when writing code without being asked to. The business analyst works with the product manager in optimising the process before feature requests are discussed.

I propose that we form project-teams from all disciplines that would report to the project itself, to give all the participants a sense of ownership. I’ll argue that this would lead to self organising teams and that it would also lead to parties, that there would be no more such questions and that I would not have needed to spend so much time writing this apology for bad management practices.

Ugh, what a dismal end to this article. So on a happier note:

Come to planet agile and enjoy the party!

Quickie – ssh dynamic port forwarding to avoid unsecured public networks


You’re in an airport, and there’s free wifi (you’re obviously not in the US…). You want to connect but are worried about someone sniffing your connection. You’re rich, so you have a remote box with ssh access to it.

The solution is to ssh into your remote box and forward all your traffic to it. It will be your secure proxy for your session.

Easy to do:

Open a terminal and issue:

ssh -D 8888 remote-host

This will start dynamic port forwarding to the remote-host machine, opening a local SOCKS proxy on port 8888.

Then, configure your local machine to send its traffic through the SOCKS proxy on localhost port 8888.

On the Mac, it looks like this:

Image

Presto: as long as the terminal is open with the ssh -D command running, all your internet traffic will pass through the remote-host over the encrypted ssh connection.

How to reconnect to a database when its connection was lost


One of my projects has a long-running task that constantly needs information from the database. I needed a mechanism to assure that the task will automatically reconnect to the database if and when that connection was broken.

I came up with this scheme using a trick with rescue blocks (code abbreviated for clarity) in this gist.

def my_task
  while(true) do
    begin
      database_access_here
    rescue Exception => ex
      begin
        ActiveRecord::Base.connection.reconnect!
      rescue
        sleep 10
        retry # will retry the reconnect
      else
        retry # will retry the database_access_here call
      end
    end
  end
end

Here’s a line-by-line explanation:

Line 4: This is where your application’s database access logic would be.

Line 5: Catch a database access exception here

Here is where it gets interesting:

Lines 6 and 7: Open a new block and attempt the reconnect.

Line 10: This retry will retry the reconnect method and will loop as long as the database connection is still down.

Line 11: The else clause will execute if _no_ exception happened in the reconnect on line 7, and its retry will then re-run the original database call on line 4.

In my case and example, I am not counting retries because I don’t care that I’ve failed – I must continue to retry. You may want to use “retry if retries < 3” as a break mechanism.
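
For instance, a minimal sketch of that bounded variant of the inner reconnect block, assuming three attempts is enough before giving up:

# Sketch: cap the reconnect attempts instead of retrying forever.
retries = 0
begin
  ActiveRecord::Base.connection.reconnect!
rescue
  retries += 1
  sleep 10
  retry if retries < 3
  raise # give up; let the caller (or a supervising process) deal with it
end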

I have also removed some mailer code that notifies me when the reconnect fails so I can (manually) see what happened to the connection. The moment the connection is re-established, life goes on as normal within the infinite while loop.

Weekend warrior – MacRuby and rSpec, Mac OS X Lion, Xcode V4.3.2


Inspired by the recent buzz over RubyMotion, of which I am a proud licensee, I wanted to play a little with MacRuby just to get into the swing of things.

After deciding that doing so was more worthwhile than to mow the lawn, I set out to see what it took to start a project in MacRuby with rSpec support as a basis to start work.

MacRuby's article got me started, but did not work because the test target could not find the framework that I wanted tested. I don't know why, since I (sort of) followed the instructions there. I say "sort of" since the article shows screen-shots of an older Xcode, and even though I thought I set things correctly in my version (Xcode V4.3.2), it still would not build. Also, I am on Mac OS X Lion and that may have had something to do with it.

After realising that if I did not continue trying, a certain member of the household would make me mow that lawn, Google found another article here by Steve Madsen.

It too looked promising, but again, needed tweaking to get working in my environment. It’s thanks to Steve’s post that I managed to get it working.

Here were my steps:
a. Create a new project in Xcode (or use an existing one that you want to rSpec)
b. Install MacRuby
c. Follow Steve Madsen’s instructions

At that stage it still did not work for me, but that was because of a misunderstanding that was clarified quickly enough:

Steve’s screen-shot for the scheme settings on the Specs framework is cut off and does not show the “Expand Variables Based On” setting, so $(SRCROOT) was never expanded for me. I replaced it with an absolute path (ugh) and it worked, so I knew something was not picking up that macro. The solution was to give a value to that drop-down, as shown in the screen-shot below.

If, like me, you’re on Xcode V4.3.2, you might find the following screen-shots useful (just refer to them as you follow Steve’s post):

a. Build settings:
Image

b. Scheme settings:

Image

You cannot imagine the joy of seeing Ruby code drive an Objective-C framework testing session using rSpec in Xcode.

Now to that mower…

DDD – Document Driven Development


We rarely document. We are used to being handed a set of PowerPoint slides that describe, on a very high level, the business need for software. We roll our eyes at the slides and get to work, asking questions, clarifying the needs, hoping to understand them, and start imagining features and how we can deliver the implementation within the requested timeline.

If we follow the Agile framework, we’ll translate the transformed slides into stories. We do so and derive tasks from them. If we’re lucky, we might be able to condition the business to accept deliverable milestones that are aligned with those stories.

Using BDD, we’ll transcribe the stories into Gherkin and using TDD, we’ll start coding tests at that time (rSpec, Cucumber).

As development gets under way, we cycle through iterations and we deliver collaboratively.

After the celebrations, all the good things mentioned above (stories, milestones, BDD, TDD) evaporate as the project starts gliding at low altitude as the business moves to new territories. We’re left with mundane maintenance and tickets are opened for small bug fixes and minor enhancements. Stories are no longer written as “it’s not worth it” and small changes are never fully documented.

The project stops being documented and over time, as the team members rotate and business rules change, people no longer remember why we check-off the ‘accept contract’ terms after signup and not on the page where the user enters their email address. It so happens that there will be a major impact on the back-end provisioning system if we change that.

I think the pattern is clear – If we don’t use our documents, the whole eco-system of our product degrades to entropy and will ultimately lead us to revival by rewrite, or at least by going through the analysis again and likely to some re-engineering. Time wasted.

What I would love to see is a system whereby the development and maintenance is driven by documentation and that the documentation drives the deliverables.

The pieces are there, we just need to use them:
Participate in the requirements phases, translate them to stories, deliver story implementations. Always, recurringly. Never stopping this cycle.

Months from now, anyone reading your stories will fully understand why the system behaves the way it does – people like to read stories and will understand the system on their own terms. New hires in the business will use them as a guideline on how to perform their jobs. New developers to the team will have a standard to meet when fixing bugs or evaluating new or changed requirements.

We will end up with a document-driven system, accumulating a library of living documents that drove our software development effort. Any new contradictory story will violate the automated validations for previous generations of stories and will stop us in our tracks, showing us exactly where the business flow will break if we add that new feature. No one actually needs to know this in advance: Let the business tell new stories and see how the system reacts. It’ll tell us whether we’re in violation of any existing processes and alert us automatically.

If you’re using Gherkin and Cucumber already, put them front and center of your development workflow and don’t let go of them!

 

The tip of the (good) iceberg


Recently, a “perfect storm” situation occurred when we realised that there was a convergence of a new business need with an old technical need.

We have a technical issue that we had wanted to deal with for a long time, yet never got to because of high-priority task requests from the business. Our issue has to do with overhauling data structures and internal SOA processes to be more flexible and to be able to support business requirements in the future. The tasks and migrations were analyzed and estimated at "medium" and "hard" complexity levels and felt like an elephant in every meeting concerning the project, which is a central one in our business.

The "perfect storm" appeared when the business unit requested a feature that was solved by our internal analysis as part of the overhaul, but the key factor was that they asked for an initial implementation where only 20% of the customers would be affected.
We were confronted with a situation where both parties had the same goal – we had technical justifications to make what we saw as needed changes and the business unit had a market-driven justification for asking for changes. This is a perfect situation to be in as both units are aligned and there is no conflict of interest.

The beauty of the situation is in the "20%". By requesting that only a selected 20% of the customer base be affected, we could now picture the technical scope of the project with a different mindset – one of depth of development for a limited breadth of the customers. By this I mean that we are able to plan for the "most value" for the business unit – producing working software to solve for the 20%, while back-filling the rest of our technical debt towards this project through the current processes until more is developed for the next segment of customers. True, there will be "production support" until we achieve 100% customer base, yet solving for 20% economically justifies that cost and effort.

The result is that the business unit will see only the tip of the iceberg of this project with immediate value, while we work on the invisible part that will cover the subsequent market segments that will be addressed sequentially over time.

The convergence of the business goals with ours makes it possible for us to succeed with this project and introduce it to the market in small segments. If only all business and technical requirements were so well aligned!

Oh, the places you’ll go…


Inspired by the Practicing Ruby entry, I clarified the code a little (for my taste) and learned that Ruby's method lookup order is:

0) Undefined method resolution
1) Methods defined in the object’s singleton class (i.e. the object itself)
2) Modules mixed into the singleton class in reverse order of inclusion
3) Methods defined by the object’s class
4) Modules included into the object’s class in reverse order of inclusion
5) Methods defined by the object’s superclass, i.e. inherited methods

module ModuleA
  def foo
    "- Mixed in method defined by ModuleA\n" + super
  end
end

module ModuleB
  def foo
    "- Mixed in method defined by ModuleB\n" + super
  end
end

module ModuleC
  def foo
    "- Extended in method defined by ModuleC\n" + super
  end
end

module ModuleD
  def foo
    "- Extended in method defined by ModuleD\n" + super
  end
end

class A
  def foo
    "- Instance method defined by A\n"
  end
end

class B < A
  include ModuleA
  include ModuleB

  def foo
    "- Instance method defined by B\n" + super
  end

  def method_missing(method)
    puts "- method_missing (#{method}) on b. Redirecting to b.foo\n"
    foo
  end
end

b = B.new
b.extend(ModuleC)
b.extend(ModuleD)

def b.foo
  "- Method defined directly on an instance of B\n" + super
end

def b.method_missing(method)
  "- method_missing (#{method}) on b. Calling super\n" + super
end

puts "Calling 'bar' on b of type #{b.class}:\n"
puts b.bar

Which gives:

~/projects/ita/ruby$ ruby test.rb

Calling 'bar' on b of type B:

- method_missing (bar) on b. Redirecting to b.foo
- method_missing (bar) on b. Calling super
- Method defined directly on an instance of B
- Extended in method defined by ModuleD
- Extended in method defined by ModuleC
- Instance method defined by B
- Mixed in method defined by ModuleB
- Mixed in method defined by ModuleA
- Instance method defined by A
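
Incidentally, you can ask Ruby for this lookup order directly. For the b built above, something like the following (on a reasonably recent Ruby; output abridged and approximate) confirms the chain:

p b.singleton_class.ancestors
# => [#<Class:#<B:0x...>>, ModuleD, ModuleC, B, ModuleB, ModuleA, A, Object, Kernel, BasicObject]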

Follow the conversation on Stack Overflow.

Puppet book review


This book is an excellent Puppet book for beginners and professionals alike.

I manage a software team and have read this book cover-to-cover in order to study Puppet for our team’s use on a daily basis.

Despite the step-by-step instructions for the initial installation, I needed some tinkering, since different OSs have slightly different distributions. But once I had a server and an agent running on two different VMs (Ubuntu), there was an "Aha!" moment when the agent had emacs automatically installed on it! Getting past the initial installation phase allowed me to really enjoy the rest of the book, as well as Puppet itself.

Puppet is not trivial, but the book covers its concepts very clearly and one “gets” it quite early on (especially if you get your hands dirty and follow along the examples).

The book then expertly guides the reader to its “pro” section detailing use of Puppet with configuration management tools such as git and db-based storage.

It then goes on to detail how to use AMQ with Puppet for scaling. I doubt I will use such a robust configuration, but was thrilled to see how flexible and extensible Puppet is by use of load-balancers and integration with Apache/Passenger.

Overall, the book is well written, and I would highly recommend it as a *text book* for Puppet. This is a readable text book on the subject – not a reference manual, although it has countless links to the reference manuals.

I always wanted to learn Puppet, and this book certainly is the one to read if you’re dealing with configuration management whether as a developer or a DevOps person.

Setting up a Rails server on a GoDaddy VPS


I thought my experience with setting up a CentOS 5 box from scratch with Rails 3.1 would be helpful to some readers.

1. Get a VM – this one is on GoDaddy, just for kicks.

Demo config:

Operating System: CentOS 5
RAM: 1 GB
Disk Space: 20 GB
And it costs $30 a month. Not too bad.

2. Get some tools

Become root for that: "$ su -"

Then issue:

# yum groupinstall 'Development Tools'
# yum groupinstall 'Development Libraries'
# exit

3. Install your ssh key for logins

Copy your key to ~/.ssh/authorized_keys

chmod go-w ~
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys

4. Install node.js

Become root for that: "$ su -"
Then issue:

# cd /root
# wget http://nodejs.org/dist/node-v0.4.11.tar.gz
# gunzip node-v0.4.11.tar.gz
# tar -xf node-v0.4.11.tar
# cd node-v0.4.11
# ./configure
# make
# make install
# exit

5. Install Git

Become root for that: "$ su -"
Then issue:

# yum install gettext-devel expat-devel curl-devel zlib-devel openssl-devel
# yum install zlib-devel
# yum install openssl-devel
# yum install perl
# yum install cpio
# yum install expat-devel
# yum install gettext-devel

# wget http://www.codemonkey.org.uk/projects/git-snapshots/git/git-latest.tar.gz
# tar xzvf git-latest.tar.gz
# cd git-{date}
# autoconf
# ./configure --with-curl=/usr/local
# make
# make install
# exit

6. Install RVM

$ bash < <(curl -s https://rvm.beginrescueend.com/install/rvm)

then add
[[ -s "/home/your-user/.rvm/scripts/rvm" ]] && source "/home/your-user/.rvm/scripts/rvm"
to the end of .bash_profile

8. Install readline

$ rvm pkg install readline

9. Install ruby

$ rvm install 1.9.2 --with-readline-dir=$rvm_path/usr

10. Create a gemset

$ rvm gemset create rails3.1

$ rvm --default use 1.9.2@rails3.1

11. Load Rails3.1

$ export LC_CTYPE=en_US.UTF-8

$ export LANG=en_US.UTF-8

$ gem install rails -v 3.1.0

12. Create ssh key for git repo

$ ssh-keygen -t rsa

13. Upload the public key to your repo

Make sure the end of the key file has a newline

Test access by issuing
$ git clone ssh://git@yourepo.com/xxx.git

14. Install bundler

$ gem install bundler

Test bundler by running 'bundle install' in the directory created by the clone in step 13

15. Install mysql

Become root for that: "$ su -"

Then issue:

# yum install mysql
# /etc/init.d/mysqld start
# exit

16. Get a copy of your project

Clone the repo from git and run 'rake spec' to see that everything is installed and running correctly.
This assumes you use rSpec, else run ‘rake test’, or whatever testing framework you use.

17. Install passenger

$ gem install passenger
$ passenger start

18. Test it out…

Navigate to http://xxxx:3000 to see your app!

Hello world!


This seems to be a nice place to organise my thoughts. Please don't think I presume they are worthy of publication just because they appear here. I'll use this shoebox as a platform to solicit constructive criticism in order to chop away at my ignorance.