Technical

TEALS

For the past year, my friend Lester Jackson has been volunteering at Manson High School in Central Washington, remotely teaching computer science through TEALS, a Microsoft YouthSpark program.

Lester has always been passionate about improving computer literacy, especially in underrepresented communities. He and several other volunteers work with an experienced high school teacher, coming in before work one to two days a week to teach CS at their assigned high school.

Why does Lester do it?

According to a 2013 study by Code.org, 90% of US high schools do not teach computer science. With software engineers in high demand in the private sector, schools often cannot find instructors with a computer science background, and struggle to compete with the compensation packages offered in industry. Even more staggering are the following statistics:

• Less than 2.4% of college students graduate with a degree in computer science, and the numbers have dropped over the last decade

• Exposure to CS leads to some of the best-paying jobs in the world, yet 75% of our population is underrepresented in the field

• In 2012, fewer than 3,000 African American and Hispanic students took the high school A.P. computer science exam

• While 57% of bachelor's degrees are earned by women, just 12% of computer science degrees are awarded to women

• In 25 of 50 US states, computer science doesn't count towards high school graduation math or science requirements

(Source: Code.org)

The program needs more volunteers for next year. Here is how you can get involved:

http://c.tealsk12.org/l/249

#TeachCS

Running Continuous Integration on a Shoestring with Docker and Fig

By: Jason Marshall

One of the things I love about Continuous Delivery (CD) is the "Show, don't Tell" aspect of the process. While we can often convince a customer or coworker of the 'right thing to do', some people are harder to sell, and nothing beats a demonstration.

The downside of Continuous Delivery is that, on the face of it, we use a lot of hardware: multiple copies of multiple servers, all doing nominally the same thing if you don't understand the system. Cloud services are great for proving out the system due to the low monthly outlay, but not all organizations allow them. Maybe it's a billing issue, or concern about your source getting stolen, or in an older company it may be a longstanding IT policy. If a manager believes in the system, they may be willing to stick their neck out and get paperwork signed or policies changed. But how do you get them on board in the first place? This chicken-and-egg problem has been bothering me for a while now, and Docker helps a lot with this situation.

Jenkins in a Box

The thing I wanted to know was: "could I get a CI server and all of its dependencies into a set of Docker containers?" It turns out not only is the answer 'yes', but most of the work has already been done for us. You just have to wire the right things together.

Why start here?

The Big Ask for hardware starts with the build environment.

Continuous Delivery didn't always exist as a term. Before that it was just a concept. You start with a repeatable build. You automate compiling the code. You automate testing the code. You set up a build server so you know if it's safe to pull down trunk/master in the morning. You start enforcing clean builds of trunk/master. You automate packaging the code. Then you automate archiving the packages. One day you wake up and realize you have a self-service system where QA can pull new versions onto their test systems, and from there it's a short leap to capturing configuration and doing the same thing in staging and production.

But halfway through this process, you needed to do UI testing. For web apps that means Selenium. PhantomJS is a good starting point, but there are many things that only break on Firefox or Chrome. Running a browser in a VM without a video card takes some special knowledge that not everybody has. And when the tests break you can't always reproduce them locally. Sooner or later you need to watch the build server run the tests to get a clue why things aren't working. Nothing substitutes for pixels. Sauce Labs can solve this for you, but we're trying to start small.

The Plan

Most of what you need is out there; we just have to stitch it together. The Jenkins team maintains Docker images. SeleniumHQ has their own as well, which can run Firefox and Chrome in a headless environment. They also have 'debug' builds with support for VNC connections, which we'll be using. What we need is a Fig script to connect them to each other, and the Jenkins slaves need our development toolchain.

We need:

  1. A Jenkins instance
  2. A Selenium Grid (hub) to dole out browsers
  3. Selenium 'nodes' which can run browsers
  4. A Jenkins slave that can see the Selenium Grid
  5. SSH Certs on the slave so that Jenkins can talk to it

Caveats

Rather than modifying the Jenkins image, I opted to build a custom Jenkins slave. Personally, I prefer not to run slaves on the Jenkins box. First, the hardware budget for the two is very different. Slaves are IO, memory, and CPU bound. The filesystem can be deleted between builds with few repercussions. The Jenkins server is a different beast. It needs to be backed up, it uses a lot of disk space for artifacts (build statistics and test reports, even if you store your binaries in a system of record), and it needs some bandwidth. There are many ways for a bad build to take out the entire server, and I would rather not even have to worry about it.

Also it's probable you already have a Jenkins server, and it's easy enough to tweak this demo code to use it with your existing server without impacting your current operations.

Fig to the rescue

Fig is a great Docker tool for wiring up a bunch of services to each other. Since I know a lot of people who like to poke at the build environment, I opted to write a Fig file where all of the ports are wired to fixed port numbers on the host operating system.
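As a rough sketch (not the exact file from the repo), a fig.yml wiring these pieces together might look something like this; the image names are the public Jenkins and SeleniumHQ images, and the slave build context is a placeholder for your own Dockerfile with the development toolchain and sshd:

jenkins:
  image: jenkins
  ports:
    - "8080:8080"
  volumes:
    - ~/jenkins_home:/var/jenkins_home

hub:
  image: selenium/hub
  ports:
    - "4444:4444"

firefox:
  image: selenium/node-firefox-debug
  links:
    - hub
  ports:
    - "5950:5900"

chrome:
  image: selenium/node-chrome-debug
  links:
    - hub
  ports:
    - "5960:5900"

slave:
  build: ./slave
  links:
    - hub
  ports:
    - "2222:22"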

You'll need to install Fig of course (it's not part of the Docker install, or at least not yet). You'll also need to create a ~/jenkins_home directory, which will contain all of the configuration for Jenkins, generate an SSH key for Jenkins, and copy the public key into the slave's authorized_keys.
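On a Unix-like host, that prep might look roughly like this (paths and key locations are illustrative, not prescriptive):

mkdir -p ~/jenkins_home/.ssh
ssh-keygen -t rsa -N "" -f ~/jenkins_home/.ssh/id_rsa      # key Jenkins will use for the slave
# where authorized_keys lives depends on how your slave image is built;
# here we assume a hypothetical ./slave build context
cat ~/jenkins_home/.ssh/id_rsa.pub >> slave/authorized_keys

With that in place, you can type in two magic little words: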

fig up

And after a few minutes of downloading and building images, you'll have a Jenkins environment running in a box.

You'll have the following running (substitute 192.168.59.103 if you're running boot2docker)

  1. Jenkins on http://127.0.0.1:8080
  2. A Jenkins slave listening for SSH connections on 127.0.0.1:2222
  3. A virtual desktop running Firefox tests listening on 127.0.0.1:5950
  4. A virtual desktop running Chrome tests listening on 127.0.0.1:5960
  5. Selenium hub listening on port 4444 (behaving similarly to selenium-standalone)

Further Improvements

If that's not already cool enough for you, there are some more steps I'll leave as an exercise for the reader.

Go smaller: Single node

On small projects, it's not uncommon to run the integration tests sequentially, with a single browser open at a time, to avoid concurrent-modification issues resulting in false build failures.

I did an experiment where I took the SeleniumHQ Chrome debug image, dropped Firefox on it as well, and changed the configuration to offer both browsers. I run this version in [compact.yml] instead of the two nodes run in the normal example. This means only one copy of X11 and xvfb is running, and you only need one VNC session to see everything. The trouble with this is ongoing maintenance. I've done my best to create the minimum configuration possible, but it's always possible that a new SeleniumHQ release won't be compatible. For this reason I'd say this should only be used for Phase 1 of a project, and eliminating this custom image should be a priority ASAP.

fig --file=compact.yml build
fig --file=compact.yml up

This version of the system peaked at a little under 4 GB of RAM. With developer grade machines frequently having 16GB of RAM or more this becomes something you could actually run on someone's desktop for a while. Or you could split it and run it on 2 machines.

Go bigger: Parallel tests

One of the big reasons people run Selenium Grid is to run tests in parallel. One cool thing you can do with Fig is tell it "I want you to run 4 copies of this image" by using the fig scale command, and it will spool them up. The tradeoff is that at present it doesn't have a way to deal with fixed port numbers (there's no support for port ranges), so you have to take out the port mappings (e.g. "5950:5900" becomes "5900"). The consequence is that every time you restart Fig, the ports tend to change. Watching a parallel test run over VNC would be challenging to say the least, so you might opt to not run VNC at all. In that case you can save some resources by using the non-debug images.
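For example, assuming the Firefox node service is named firefox in your Fig file (the name is whatever you used in the YAML), spinning up four of them looks like this:

fig scale firefox=4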

Examples & Further Reading

Protractor: Using the Page Object Model

What is Protractor?

Protractor is an end-to-end (e2e) test automation framework for AngularJS applications. It is an open-source Node.js program built on top of WebDriverJS, originally developed by a team at Google. Test cases written in Protractor run in the browser, simulating the actions of a real user. An e2e test written in Protractor makes sure your application behaves as expected.

Challenge: Code Duplication

There is always duplication in test cases. For instance, login, find, and logout are clearly duplicated in the following two test cases:

Test case 1: login to the website, find an item, add it to my wish list and logout.

Test case 2: login to the website, find an item, add it to cart, purchase and logout.

Duplicate test cases result in code duplication. An e2e test suite with code duplication is difficult to maintain and requires costly modifications. In this tutorial, we will implement a page object design best practice for Protractor to minimize code duplication, make tests more readable, reduce the cost of modification, and improve maintainability.  

The most important concept here is to separate the abstraction of the test object (the page) and the test script (the spec). Hence, a single test object can be used multiple times by test scripts without rewriting it.

Using the PhoneCat application

We will use the popular AngularJS PhoneCat application to demonstrate how Protractor tests could make use of the page object design pattern to create simple and maintainable e2e test automation.

A concise set of instructions on how to set up the PhoneCat application on your local machine is at the end of this post.

Abstraction: Separation of Test Object from Test Script

The PhoneCat app has the ‘phones list view’ page where all available phones are listed. A user can search or change the order of the listed phones on the page. When selecting a phone from the list, a user navigates to the ‘phone details view’ page, where more details about the selected phone are included.

In line with the page object design pattern best practice: the PhoneCat application has two test objects, the phones list view page and the phone details view page. Each of the pages should be self-contained, meaning they should provide all the locators and functions required to interact with each page. For example, the phones list view page should have a locator for the search input box and a function to search.

The image below shows the separation of the test object (page object files) from the test script (spec files). The spec files under the spec folder contain only test scripts. The page object files under the page object folder contain page specific locators and functions.

Figure 1: Separation of page object from test specification

Test Object (Page Object)

The PhoneCat application has the phones list page and the phone details page. The following two page object files provide the locators and functions required to interact with these pages.

Phones = {

    elements: {

        _search: function () {
            return element(by.model('query'));
        },

        _sort: function () {
            return element(by.model('orderProp'));
        },

        _phoneList: function () {
            return element.all(by.repeater('phone in phones'));
        },

        _phoneNameColumn: function () {
            return element.all(by.repeater('phone in phones').column('phone.name'));
        }
    },

    _phonesCount: function () {
        return this.elements._phoneList().count();
    },

    searchFor: function (word) {
        this.elements._search().sendKeys(word);
    },

    clearSearch: function () {
        this.elements._search().clear();
    },

    _getNames: function () {
        return this.elements._phoneNameColumn().map(function (elem) {
            return elem.getText();
        });
    },

    sortItBy: function (type) {
        this.elements._sort().element(by.css('option[value="' + type + '"]')).click();
    },

    selectFirstPhone: function () {
        element.all(by.css('.phones li a')).first().click();
        return require('./phone.details.page.js');
    }
};

module.exports = Phones;

Listing 1: phones.page.js

PhoneDetails = {

    elements: {

        _name: function () {
            return element(by.binding('phone.name'));
        },

        _image: function () {
            return element(by.css('img.phone.active'));
        },

        _thumbnail: function (index) {
            return element(by.css('.phone-thumbs li:nth-child(' + index + ') img'));
        }
    },

    _getName: function () {
        return this.elements._name().getText();
    },

    _getImage: function () {
        return this.elements._image().getAttribute('src');
    },

    clickThumbnail: function (index) {
        this.elements._thumbnail(index).click();
    }
};

module.exports = PhoneDetails;

Listing 2: phone.details.page.js

Test Script (Spec)

The test script can now make use of the page object files. All the functions required to interact with the page (the test object) are encapsulated in the page object and the test scripts are more readable and concise.

describe('Phone list view', function () {

    var phones = require('../page_objects/phones.page.js');

    beforeEach(function () {
        browser.get('app/index.html#/phones');
    });

    it('should filter the phone list as a user types into the search box', function () {
        expect(phones._phonesCount()).toBe(20);

        phones.searchFor('nexus');
        expect(phones._phonesCount()).toBe(1);

        phones.clearSearch();
        phones.searchFor('motorola');
        expect(phones._phonesCount()).toBe(8);
    });

    it('should be possible to control phone order via the drop down select box', function () {
        phones.clearSearch();
        phones.searchFor('tablet'); //let's narrow the dataset to make the test assertions shorter

        expect(phones._getNames()).toEqual([
            "Motorola XOOM\u2122 with Wi-Fi",
            "MOTOROLA XOOM\u2122"
        ]);

        phones.sortItBy('name');

        expect(phones._getNames()).toEqual([
            "MOTOROLA XOOM\u2122",
            "Motorola XOOM\u2122 with Wi-Fi"
        ]);
    });

    it('should render phone specific links', function () {
        phones.clearSearch();
        phones.searchFor('nexus');
        phones.selectFirstPhone();

        browser.getLocationAbsUrl().then(function (url) {
            expect(url.split('#')[1]).toBe('/phones/nexus-s');
        });
    });

});

Listing 3: phones.spec.js

describe('Phone detail view', function () {

    var phones = require('../page_objects/phones.page.js'),
        phoneDetails;

    beforeEach(function () {
        browser.get('app/index.html#/phones');
        phones.searchFor('nexus');
        phoneDetails = phones.selectFirstPhone();
    });

    it('should display nexus-s page', function () {
        expect(phoneDetails._getName()).toBe('Nexus S');
    });

    it('should display the first phone image as the main phone image', function () {
        expect(phoneDetails._getImage()).toMatch(/img\/phones\/nexus-s.0.jpg/);
    });

    it('should swap main image if a thumbnail image is clicked on', function () {
        phoneDetails.clickThumbnail(3);
        expect(phoneDetails._getImage()).toMatch(/img\/phones\/nexus-s.2.jpg/);

        phoneDetails.clickThumbnail(1);
        expect(phoneDetails._getImage()).toMatch(/img\/phones\/nexus-s.0.jpg/);
    });

});

Listing 4: phone.details.spec.js

In conclusion, when the page object design pattern is properly used in Protractor test automation, it makes e2e tests easier to maintain and reduces code duplication.

Appendix

GitHub Repo for This Tutorial

The following GitHub repo contains the PhoneCat tutorial adapted to this tutorial. It is basically the sample Protractor test (scenarios.js) of the PhoneCat app rewritten using the page object model.

https://github.com/xgirma/angular-phonecat.git

This could be a good starting point for discussion on the application of the page object model to improve the maintainability of Protractor tests.

Comparison

The following table shows the main benefit of the page object model, which is minimizing code duplication. The table compares the Protractor test included in PhoneCat (scenarios.js) with a Protractor test (phones.page.js, phone.details.page.js, phones.spec.js, and phone.details.spec.js) that implements the same test cases with the page object model. As the table shows, even in this simple test, code duplication is substantial without the page object model, and minimal with it.

 

Table 1: Comparison of code duplication with and without the page object model

 

 

PhoneCat app: the Setup

1.     Install Git and Node.js.

2.     Clone the angular-phonecat repository. ($ git clone --depth=14 https://github.com/angular/angular-phonecat.git)

3.     Change your current directory to angular-phonecat ($ cd angular-phonecat). Download the tool dependencies by running ($ npm install).

4.     Use the npm helper scripts to start a local development web server ($ npm start). This starts a local web server on your machine, listening on port 8000. Browse the application at http://localhost:8000/app/index.html

5.     Install the drivers needed by Protractor ($ npm run update-webdriver), then run the Protractor end-to-end tests ($ npm run protractor).

Refer to the AngularJS site for complete instructions.

https://docs.angularjs.org/tutorial/step_00

Final note: If you want to try the code samples given in this tutorial, besides creating the folders, the page object files, and the spec files, you need to change the path to the new spec files in the protractor-conf.js file. Simply change specs: ['e2e/*.js'] to specs: ['e2e/spec/*.spec.js'], or to whatever path you put the spec files in.
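For reference, the relevant part of protractor-conf.js might end up looking roughly like this (only the specs entry changes from the stock PhoneCat configuration; the baseUrl assumes the local web server from the setup steps above):

exports.config = {
    allScriptsTimeout: 11000,

    // point Protractor at the page-object based specs instead of e2e/*.js
    specs: [
        'e2e/spec/*.spec.js'
    ],

    baseUrl: 'http://localhost:8000/'
};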

Related Works

1.     Using Page Objects to Organize Tests https://github.com/angular/protractor/blob/master/docs/page-objects.md

2.     Using Page Objects to Overcome Protractor's Shortcomings http://www.thoughtworks.com/insights/blog/using-page-objects-overcome-protractors-shortcomings

3.     Getting Started with Protractor and Page Objects for AngularJS E2E Testing https://teamgaslight.com/blog/getting-started-with-protractor-and-page-objects-for-angularjs-e2e-testing

 

 

 

Three Ways To Share Code

There are three primary ways to collaboratively share code.

  • As source.
  • As a service.
  • As a library.

These aren't mutually exclusive, but represent the main deployment strategy. For example, you might offer a library (e.g. via a Maven or NuGet repository), but also make the source available.

Source is fine, but you really only want a focused team of 5-7 working closely to manage the incoming commits. Otherwise, the code suffers from the tragedy of the commons - your tests, code coverage, and overall quality will suffer. Source sharing also makes dependency management hard - "did you build the code from this morning at 9:38am or 9:52am"?

Running a service (e.g. REST/JSON) is actually really hard because of the dependency management issues. "Well, we'd like to update the staging server services, but that will break three other teams." Interestingly, the main reasons for running a service are data management & security, not code sharing. It's possible, but you really, really need to think about service version management.

Sharing code as a library, using a proper repository manager with dependency management is by far the easiest strategy. Set up a 1-click deployment with a CI tool, and you're off to the races.
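With Maven, for example, publishing can be as small as pointing distributionManagement at your repository manager (the id and URLs below are placeholders) and letting the CI job run mvn deploy:

<distributionManagement>
  <repository>
    <id>internal-releases</id>
    <url>https://repo.example.com/releases</url>
  </repository>
  <snapshotRepository>
    <id>internal-snapshots</id>
    <url>https://repo.example.com/snapshots</url>
  </snapshotRepository>
</distributionManagement>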

As a simple set of guidelines:

  • 5-7 people with direct commit per source repository, max.
  • All other incoming code should be submitted to that team as patches/pull requests.
  • If the security and/or data are the key value of the code, it's a service.
  • Otherwise, if possible, publish your code as a library to an appropriate binary repository (public or private as needed)

Jasmine 2.0 Matchers, with AngularJS

One of the breaking changes with Jasmine 2.0 was a change to how Matchers are written. Using Jasmine with AngularJS introduces another set of limitations that I will cover in due course.

Why Matchers?

A matcher lets you extract repeated code around your 'assert' and 'equals' methods and reuse them across all of your tests. In addition to removing potential bugs in your tests (debug once, reuse everywhere), they can also provide more detailed text for failed tests than you get from the built-in test methods.

Code Reuse

We spend a lot of time teaching people not to repeat themselves when writing production code, and people are naturally averse to breaking all of these rules when they write tests. In reaction, people have invented a lot of solutions to this 'problem'; some of them are good and a lot of them are counterproductive: they help you write the initial set of tests but make it hard to keep them working over time. See the endless "DRY vs DAMP" debates that rage seemingly forever on the internet.

The authors of testing frameworks recognize this problem, and most frameworks provide a generous set of tools for eliminating these issues. Unfortunately they are often misused, or aren't used at all. Matchers are a crucial but often overlooked tool in this toolbox.

Diagnostics

Expected undefined to be 'true'.

How many times have you seen this dreaded message? What does that even mean? It might as well say Test failed., which is exactly what the line preceding the error said, so it provides no extra information whatsoever.

A Matcher gives you an opportunity to provide a detailed failure message, providing debugging information to the user when it is the most useful. Often it can steer them to a solution without ever having to use the debugger.

Expected <input name="foo" type="checkbox"></input> to be checked.

Doesn't that tell you so much more about what's wrong?

How do Matchers work?

In most test frameworks a Matcher provides two answers for every call: whether the test case passed, and what error to display if it didn't. The framework watches for the failure and handles the bookkeeping to determine which tests passed, which didn't, and where they failed.

However, Jasmine goes one step farther in 2.0. In a bid to remove a lot of nearly duplicate Matchers, they introduced the .not operator that inverts the result of the test. Now instead of needing a toBeNull() and toNotBeNull() matcher, I just need toBeNull() and if I want the opposite of that I use not.toBeNull().

Unfortunately this requires a different structure for the Matcher functions, which isn't backward compatible, and may look a little odd if you don't understand all of this background I've shared with you.

toBeHidden: function () {
    return {
        compare: function (actual) {
            var expected = 'ng-hide';
            var pass = actual.hasClass(expected);
            var toHave = pass ? "not to have" : "to have";

            return {
                pass: pass,
                message: "Expected '" + angular.mock.dump(actual) + "' " + toHave + " a class '" + expected + "'."
            };
        }
    };
}

What's going on in here?

At the innermost point of this code we're testing a DOM element for visibility, assuming it is using the ng-hide directive to conditionally display a piece of UI. The rest of the code seems to be about putting together an error message.

In Jasmine, as each test runs, it generates a status and a message. If the status is false, the message appears in the test results; otherwise it is swallowed. Unless the .not operator was used, in which case the message appears if the status is true.

So we check the assertion; if it returns false, we generate an error explaining that we wanted the condition to be true. If it returns true, we generate an error explaining that we did not want this condition to be true.

The last bit, and perhaps the most important, is the debugging output in the response message. angular.mock.dump(actual) turns a cryptic error with very little useful content into a message that contains the object under test. In this case it's a DOM element, so the user will have a much better idea of what's broken and can home in on the solution more quickly.
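Put together, the matcher reads naturally in a spec. A hypothetical usage, where panelElement is an angular.element wrapping a node that uses ng-hide, might look like this:

it('hides the panel until data arrives', function () {
    expect(panelElement).toBeHidden();

    scope.loaded = true;
    scope.$digest();

    expect(panelElement).not.toBeHidden();
});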

Loading Matchers

The Problem

In the old days with Jasmine there were many ways to get your Matchers loaded. You could just poke them into one of the data structures. However, Angular does some monkey patching of Jasmine, and several of the old strategies no longer work. The official Jasmine documentation recommends loading the matchers at the top of every test suite. While this is compatible with Angular's strategy of reloading Jasmine over and over again, with the tendency toward many small modules with small test suites, that boilerplate can get a bit crazy.

The Solution

Some clever people figured out that a naked beforeEach (outside of a describe()) works just fine in Angular.js.

/**
 * matchers.js
 **/

beforeEach(function () {
    jasmine.addMatchers({
        toContainText: function () {
            ...
        },
        toHaveClass: function () {
            ...
        },
        toBeHidden: function () {
            ...
        }
    });
});

If you load the matchers before the first test file, then this block will load before every test, and you're good. Jasmine is so fast that running this bit of boilerplate before every test hardly impacts your test speed. On my last project our unit tests averaged 15 milliseconds per test (1400 in 22 seconds), which is well within the range of the definition of 'fast unit tests'.

I've provided a matchers.js file for you, containing this setup pattern along with a few of my favorite Angular-compatible matchers.

Designing a good matcher

One of the tenets of Testing is that unit tests should test (or assert) exactly one thing. This means you set up a scenario, and then prove that one single, specific aspect of that scenario holds true. In real code it's common for a single scenario to have a number of consequences, and if you want one assertion per test that means you're going to be repeating a lot of effort and code.

The setup and teardown (beforeEach, afterEach) methods remove the lion's share of boilerplate from your tests, and in Jasmine they can go even farther, because you can organize partly related tests with nested setup methods, removing far more duplication from your boilerplate.
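As a quick sketch (the cart object and createCart helper here are hypothetical), nested describes let each level add only the setup it needs:

describe('shopping cart', function () {
    var cart;

    beforeEach(function () {
        cart = createCart();               // hypothetical factory for the object under test
    });

    describe('with one item', function () {
        beforeEach(function () {
            cart.add('widget', 1);
        });

        it('reports a count of one', function () {
            expect(cart.count()).toBe(1);
        });
    });
});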

But after the setup and before the teardown there is often a smaller but far more important tangle of repetitive code that sets up the conditions for the assertion. Some people write their own custom helper functions to deal with this, but a Matcher is usually the correct solution to this problem.

Pick something to match

As with refactoring normal code, your goal is to end up with a set of short and sweet functions with descriptive names and straightforward internals.

Things to consider for a Matcher:

1. Do I have lots of scenarios that lead to the same outcome?

2. Do I use the same object in many places to report an outcome?

3. Do I have lots of objects that behave similarly?

The last one requires some caution. 'Similar' code often indicates that a level of refactoring is missing. Trying to create a matcher prior to doing this work may actually complicate the rework. It's a matter of when you need coverage on the code and which sources of pain you can avoid.

In an Angular app the conceptual space is pretty small, so this work can be pretty obvious in many situations. You have lots of code that deals with JSON responses, lots of code that works with DOM elements, and both can benefit from having Matchers that test attributes, presence of children, String comparisons (loose and strict), etc.

Reporting is Key

The big rule for any Matcher is that you have to clearly state what the problem is in a failure condition. Remember that we write tests largely to help keep people from accidentally breaking our code later in the project. During that delicate time where they're trying to write a new feature, a good error message can often tell the person what they broke without them having to context switch to look at your test.

And in the case of a bug, remember that the person fixing it may already be frustrated with the situation before work starts; don't pile onto that frustration with cryptic or subtly misleading test failure messages. Be kind. The sanity you save may be your own.

There is a short list of things I always put into my matcher messages:

1. I should know which matcher failed by the message. Make each one unique.

2. The actual and expected values must appear.

3. The actual should always appear before the expected. Common convention avoids confusion.

4. Using a dump() method to report the entire 'actual' object is wordy, but may save you from starting the debugger.

5. The values should be bracketed in some way so whitespace errors are obvious.

Bracketing turns this:

Expected  foo to be foo

into this:

Expected ' foo' to be 'foo'

You may notice the error in the first message immediately, or you might not. If you don't, you'll feel pretty stupid later on. But is that extra whitespace an error in the code, or did I get the string concatenation wrong in my Matcher and that extra whitespace is a red herring? The latter message makes it pretty dead obvious what happened, and takes only a couple seconds longer to write.

Always double-check your work

Rule #1 of Matchers: Any time you change a matcher, force some tests to fail to verify that the error makes sense.

It's easy to get the boolean logic wrong and have a set of tests that fail silently. It's easy to invert the meaning of the error message and not notice. It's really pretty easy to check:

expect(answer).toEqual('blah');

Just try both of these negative tests:

// Check the error message
expect(answer).not.toEqual('blah');

// Check the equality test
expect(answer).toEqual('something else');

And maybe throw in a null check, and you've got a pretty good idea that your matcher won't fail on you later.

Good luck!       

Intro to Go for Java Developers

Unless you've been living under a rock, or deep in crunch mode for several years, you've likely heard of Go (AKA golang), Google's new-ish language. It was designed as an alternative to the growing complexity of C++, especially around concurrency. It's also attracting droves of Python developers, as it offers dramatically better performance, all the fun of type safety, and a syntax that's more comfortable than Java or C#.

But I like Java just fine

However, for us Java (and C#) developers, we're told every new language is the one that will save us from ourselves. Let's take a quick tour of Go and see what it offers.

To this end, I won't bore you with explaining the basics of programming. I will show you the key differences with Java, and why you might consider Go for your next project.

Playground

For all of the examples listed in this article, you'll see a link next to 'Play this' -- this refers to the Golang Playground. This is a quick and easy way to test out the language without installing anything.

Hello World

Of course, before we get started, here is the canonical 'Hello World' for Go:

package main

import "fmt"

func main() {
    fmt.Println("Hello, world!")
}

Play this

This syntax is familiar to most developers in C-style languages.

Is it Object-Oriented? Functional? Procedural?

Go has constructs from all of these schools of thought, but with some modern best practices built in. For example, we've all heard these mantras before: favor composition over inheritance, and program to an interface, not an implementation.

For this reason, Go has made some interesting choices. First off, it has no concept of "Objects" -- a single abstraction that represents both state and behavior. It just has the idea of Types, defined as C-like structs:

type Address struct {
    Number string
    Street string
    City   string
    State  string
    Zip    string
}

Notice also that the type follows the name in declarations, and that identifiers starting with an upper-case letter are exported.

So, this would almost seem like a purely procedural language. If you've used Scala or C#, however, you're probably familiar with the idea of Extension Methods. This is also possible in JavaScript (by modifying the object prototype), Groovy (by manipulating the metaclass), and Ruby (monkey-patching). Instead of having those as a separate concept, Go makes those the only way to define behavior for a type:

package main

import "fmt"

type Address struct {
    Number string
    Street string
    City   string
    State  string
    Zip    string
}

func (a Address) Location() {
    fmt.Println("I’m at", a.Number, a.Street, a.City, a.State, a.Zip)
}

func main() {
    address := Address{Number: "137", Street: "Park Lane", City: "Kirkland", State: "WA", Zip: "98033"}
    address.Location()
}

Play this

Notice some more neat things here. We have named constructor parameters. We did not provide a type to the variable 'address'. The pattern := tells the Go compiler to infer the type. And, the Location() function was automatically bound as a method on the Address type.

So, what would inheritance look like in this world? Let's create a MultiFamilyAddress:

type MultiFamilyAddress struct {
    Address Address
    Unit string
}

This is a perfect example of composition-over-inheritance but in Go. Now if we want to call the Location method, we have to do it like so:

func main() {
    address := Address {Number: "137", Street: "Park Lane", City: "Kirkland", State: "WA", Zip: "98033"}
    multi := MultiFamilyAddress {Address: address, Unit: "200"}
    multi.Address.Location()
}

Play this
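One way to avoid going through multi.Address explicitly (a sketch, not code from the article) is to give MultiFamilyAddress its own Location method that simply delegates:

func (m MultiFamilyAddress) Location() {
    m.Address.Location()
    fmt.Println("Unit", m.Unit)
}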

Defining a method with the signature func (m MultiFamilyAddress) Location(), as sketched above, removes that indirection, but this still isn't really inheritance the way we think of it. To do field-based inheritance, we use a construct Go calls anonymous fields:

type MultiFamilyAddress struct {
    Address
    Unit string
}

Not much different, right? This is Go's way of including all the fields of Address as though they were local fields on MultiFamilyAddress. This means the instantiation of MultiFamilyAddress will now look like this:

multi := MultiFamilyAddress{Address{Number: "137", Street: "Park Lane", City: "Kirkland", State: "WA", Zip: "98033"}, "200"}
multi.Location()

Play this

Go also offers interfaces, but they are a bit different than your normal OO interfaces. We'll cover those in another article.

So we've seen the procedural and object-oriented methodologies, but what about functional? A key component of functional programming is higher-order functions. In Java, as of version 8, we can do something like this:

List<String> strings = Arrays.asList("Hello", "World");
strings.forEach(n -> System.out.println(n));

Of course, in Java 7 or before, it would be more like this:

List<String> strings = Arrays.asList("Hello", "World");
for ( String str : strings )
    System.out.println(str);

In Go, it would look something like this:

func main() {
    strings := [...]string{"Hello", "World"}
    for _, item := range strings {
        fmt.Println(item)
    }
}

Play this

Some interesting things here. First, to declare an array, we put that at the beginning of the variable definition. We used [...] to indicate the compiler should figure out the actual size. We could have easily made it [2]string{"Hello", "World"}.

The for loop is where it gets interesting. First, you see we are taking 2 parameters back, one indicated with an _ character. This is a convention in Go (and some other languages) for a parameter we don't care about. In this case, it's the index position of the element. The range operator takes a []T type, and executes the code inside the curly braces on each item.

Of course, this wasn't clearly a higher-order function, nor did it involve closures. Let's take a look at a simple example that does this:

func main() {
    x := 5
    fn := func() {
        fmt.Println("x is", x)
    }
    fn()
    x++
    fn()
}

Play this

This prints, as you might expect:

x is 5
x is 6

So we have functions as data types. This lets us do some interesting things:

package main

import (
    "fmt"
    "math/rand"
    "time"
)

type calcOp func(int, int) int

func main() {
    // You seed your RNGs, right?
    rand.Seed(time.Now().Unix())

    fns := []calcOp{
        func(x, y int) int { return x + y },
        func(x, y int) int { return x - y },
        func(x, y int) int { return x * y },
        func(x, y int) int { return x / y },
        func(x, y int) int { return x % y },
    }

    fn := fns[rand.Intn(len(fns))]

    x, y := 171, 35
    fmt.Println(fn(x, y))
}

Play this

So what's going on here? First, we've defined a type called calcOp -- a calculator operation. It is a function that takes 2 integers, and returns an integer. This is now a defined type we can use in argument lists and objects.

In the main method, we create a collection of these objects. However, since we have omitted a size, it's not an array. In Go parlance, this is called a Slice.

We instantiate this collection of calcOp functions. We pick one at random. We initialize x and y with 171 and 35 respectively (that multi-assign syntax is also a feature of Go), then execute the function with those values. Neat!

Concurrency Constructs

So now we've seen that Go encapsulates many existing programming schools, but if you're a fan of one of those in particular, there is almost certainly a better language for it. Haskell and OCaml for functional, Clojure and Ruby for OO, and C and Rust for procedural. One of the key selling points, and I cringe while typing this out, is that Go is meant for the cloud. Not only do we parallelize and distribute our applications, we need to parallelize our code as well. This has been a major source of both performance issues, and correctness issues.

To that end, Go has two constructs that are going to help us: goroutines and channels. Goroutines are a lot like actors (in the Akka Actor sense) -- basically multiple threads without necessarily having a 1-to-1 correlation to system threads. When one blocks, another takes over. Channels are a way to separate computation and provide a clean interface to talk between them. Let's take a look at what they do:

package main

import (
    "fmt"
    "math/rand"
    "time"
    "strconv"
)

func Announce(message string, delay time.Duration) {
    go func() {
        time.Sleep(delay)
        fmt.Println(message)
    }()
}

func main() {
    for i := 0; i < 20; i++ {
        dur := time.Duration(rand.Int31n(10)) * time.Millisecond
        Announce("Item " + strconv.Itoa(i), dur)
    }

    fmt.Println("Done!")
}

Play this

The main method is just a bunch of setup -- defining dur to be a small duration of time (up to 10 milliseconds), and printing a value to the console 20 times. If you ran this program as-is, what would you expect to see? A bunch of random-ordered "Item X" messages, followed by a 'Done!' message? Here's what you actually get:

Done!
Program exited.

Wait, what? Let's look at that Announce function again. Inside it, the anonymous function is launched with go func(){...}() -- go is how you start a goroutine. I am oversimplifying, but think of goroutines as backgrounded processes on the shell. Or, if you really know your threading model in Java, they are daemon threads. That is, they do not hold up program execution. When the main thread dies, they die as well. In Go, a goroutine only runs while the program is still running. We didn't get anything on the console because the program didn't run long enough. Let's add this line right before the 'Done!' line in the main function:

time.Sleep(time.Duration(5 * time.Second))

Play this

This tells our main thread to pause for 5 seconds, then we can continue and finish. With this model, we get our expected output:

Item 18
Item 15
Item 9
Item 5
Item 6
Item 17
...

So, that's goroutines. They're like background processes. The obvious question here is -- how do I make sure they execute? That is, you want to (potentially) offload the work to another thread or process, but it's important that it finishes. This is where Channels come in.

In Go, Channels -- blatantly taken from the link -- are "the pipes that connect concurrent goroutines. You can send values into channels from one goroutine and receive those values into another goroutine."

Call this IPC or eventing or what have you. It is a basic construct for communicating between goroutines. So, what does a channel look like? To create one, we use the Go builtin make:

mychan := make(chan string)

chan is the identifier for a channel. The string identifier says it's a channel of strings. That is, it takes and emits strings. The simplest way to emit and receive messages is this:

go func() { mychan <- "ping" }()
msg := <-mychan
fmt.Println(msg)

Play this

We are using a goroutine lambda to emit a message to the channel mychan, and then receiving it into msg.

So, how would we apply this to the example above? We know we can send a message to a channel, and we know we can receive messages. Additionally, receiving a message is a blocking operation -- the execution stops until a message is available. We could go really naive with it:

func Announce(message string, delay time.Duration) {
    mychan := make(chan bool)

    go func() {
        time.Sleep(delay)
        fmt.Println(message)
        mychan <- true
    }()

    <-mychan
}

Play this

In this example, we receive from mychan after the execution of func finishes. This has one rather predictable side effect: all lines are printed in order. Because receiving a message is a blocking operation, we don't return control to the for loop until we have received a message. Now, what if we want to keep the parallelism? Here's how I solved this one:

package main

import (
    "fmt"
    "math/rand"
    "strconv"
    "time"
)

func Announce(message string, delay time.Duration, done chan bool) {
    go func() {
        time.Sleep(delay)
        fmt.Println(message)
        done <- true
    }()
}

func main() {
    numMessages := 20

    channels := make([]chan bool, numMessages)

    for i := 0; i < numMessages; i++ {
        channels[i] = make(chan bool)
        dur := time.Duration(rand.Int31n(10)) * time.Millisecond
        Announce("Item "+strconv.Itoa(i), dur, channels[i])
    }

    for i := 0; i < numMessages; i++ {
        <-channels[i]
    }

    fmt.Println("Done!")
}

Play this

Here, we use the make function again to create an array of channels, one for each message. Then, inside the loop, we create a channel and stick it in the array. We then pass that channel to the Announce function. The goroutine inside that function signals the channel when it has executed. Because we don't query the channels until afterwards, this allows the random-order execution we're looking for. To finish it up, we drain the array of channels.

There are other problems with this solution -- what if we don't know the number of channels we want, what if the number is too large to reasonably store in memory? These will be left as an exercise for the reader.

Last Little Bits

So we've seen some neat concurrency concepts, as well as how to structure types and methods.

First, if you don't want to use the := syntax, you can declare a variable with a type:

var myint int = 5

This is not too useful for our examples. You can also declare constants:

const foo = "This is a constant"

We saw above that you can return multiple values from a function. You can do that yourself:

func multireturn() (int, string) {
    return 42, "foo"
}
var x, str = multireturn()

We didn't show a pure example of higher-order functions in the functional section, so here are two:

func adder() func(int) int {
    sum := 0
    return func(x int) int {
        sum += x // sum is declared outside, but still visible
        return sum
    }
}

func sum(i int) func(int) int {
    sum := i
    return func(x int) int {
        sum += x
        return sum
    }
}

func main() {
    add := adder()
    fmt.Println(add(3))
    fmt.Println(add(5))

    add2 := sum(2)
    fmt.Println(add2(0))
    fmt.Println(add2(3))
}

Play this

This gives us the output:

3
8
2
5

And one last bit. Go has a defined structure to the code. There is only one correct way to format your Go programs. It's so important that there is a gofmt command (also available as go fmt) to put your code in the correct style, and it's not configurable. Holy wars have been started over the correct way to align braces, spaces, and brackets in C-style languages. Go picked one and built it in. When you have one less thing to worry about, you can focus on more important concerns.

Final Thoughts

Go is quite a fun language to work with. It has a lot of the power of C/C++ (including pointers), but cuts out a lot of cruft. It can be run either as a pre-compiled unit, or you can run a single file on the command line with go run myprogram.go, which lets it serve the dual purpose of compiled and interpreted software. That makes it just as appropriate for high-performance, long-running software as it is for advanced shell scripting. Happy programming!

Continuous Delivery Tool Recommendation for the Java Stack

There are eight essential components of a Continuous Delivery setup.


1.    Source Control
2.    Build Tool
3.    Automated Tests
4.    Continuous Integration (CI) Server
5.    Binary Repository
6.    Configuration Management
7.    Automated Deployment
8.    Monitoring and Analytics


An issue management system could also be argued for, but it is more of a project management concern.


Source Control:


For this, I recommend Git unequivocally. Stash, which is like GitHub but behind a firewall, is also an effective tool. They allow for pull-based workflows to enforce code reviews and knowledge sharing. Git is a fantastic tool.


Build Tool


I have to recommend Maven for this. While some may object to its verbose XML syntax, it's very well supported by all the major Java IDEs. In addition, nearly every CI tool offers native Maven support. Maven also handles dependency management, which could be its own category if it were not bundled so neatly here. Gradle is another great alternative, but the ability to put code into your build scripts is a bit scary. It can be great if you have a disciplined team, but it could lead to non-repeatable builds. Additionally, the heavier the customization you put in, the less your tooling chain can help you.


Automated Tests


For commit tests, there are really two good choices: JUnit and TestNG. Either of them works. Nearly every Java developer should be familiar with JUnit. TestNG offers some more advanced tooling and arguably better runtime behavior. Nobody will get fired for using JUnit, but TestNG is a bit better if you are starting a greenfield project.
For mocking/stubbing, I like to use Mockito. It is pretty unrivaled in ease of use.
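As a minimal sketch of what that looks like (stubbing and verifying a plain java.util.List with JUnit 4):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import java.util.List;
import org.junit.Test;

public class MockitoSketchTest {

    @Test
    @SuppressWarnings("unchecked")
    public void stubsAndVerifies() {
        List<String> items = mock(List.class);
        when(items.get(0)).thenReturn("first");

        assertEquals("first", items.get(0));   // stubbed value comes back
        verify(items).get(0);                  // and the interaction is recorded
    }
}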


For fluent assertions, I like AssertJ. It supersedes Hamcrest and FEST.


Acceptance testing can often be done with JUnit and TestNG as well. I like to use the RestAssured framework for testing REST endpoints. I also do a bit of Selenium and other browser-based testing; PhantomJS is a great tool for a first pass. For acceptance testing I also like a framework called Cucumber, because the test specifications follow an almost English-language structure.


For performance testing, I like Gatling locally and Neustar for cloud-based testing.


CI Server


The industry standard here is Jenkins, and it works fine. It has great community support and all that comes with it. However, I prefer TeamCity. It offers a lot of powerful features, like extracting templates from a build, easy automatic job creation for new branches, and many more. I also like the way it manages VCS roots a lot better. It is a commercial product past a certain size, but I think it is worth it. To get the same features out of Jenkins, you must do a bunch of configuration across a bunch of plugins from many different sources.


Binary Repository


There are only two reasonable choices here: Nexus or Artifactory. People can get into religious wars over these, but I prefer Artifactory. It can act as an NPM repository and an RPM repository. However, there is a more contentious issue. Artifactory will rewrite POM files to remove <repository> information so that you don’t leak requests. Nexus does not. That means that if somebody specifies a custom repository in a POM file, you will end up searching that one as well.


Configuration Management


There is no single tool here that stands out. I like using Typesafe Config for configuration. You still need a way to deploy it, though that is more a component of automated deployment. There is a lot of talk about distributed configuration management and configuration discovery. For that, etcd is the popular choice.


Automated Deployment


This can be a contentious issue, and I don’t have a solid opinion on it. The two primary packages are Chef and Puppet. I think either is a reasonable choice. They both work to automatically bring a system to a known state, but they take different tacks. Puppet is more declarative, and Chef is more scripted. I have worked more with Puppet, so I am more comfortable with it.


Monitoring and Analytics


For analytics, it is still hard to beat Dropwizard Metrics. A few annotations and you are on your way.
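The annotations (@Timed and friends) need a supporting integration such as Jersey or Spring to be picked up; as a minimal standalone sketch using the core API directly:

import java.util.concurrent.TimeUnit;

import com.codahale.metrics.ConsoleReporter;
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;

public class MetricsSketch {

    public static void main(String[] args) throws InterruptedException {
        MetricRegistry registry = new MetricRegistry();

        // dump metrics to stdout once per second; a real app would report to Graphite, JMX, etc.
        ConsoleReporter reporter = ConsoleReporter.forRegistry(registry)
                .convertRatesTo(TimeUnit.SECONDS)
                .convertDurationsTo(TimeUnit.MILLISECONDS)
                .build();
        reporter.start(1, TimeUnit.SECONDS);

        Timer requests = registry.timer("requests");
        for (int i = 0; i < 5; i++) {
            Timer.Context context = requests.time();
            try {
                Thread.sleep(100);           // simulated work being timed
            } finally {
                context.stop();
            }
        }

        Thread.sleep(2000);                  // give the reporter a chance to print
    }
}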

For monitoring, Zabbix seems to be a rather common tool – that everyone has some problem with. ZenOSS is nice, but is usually used in very large organizations and therefore tends to be cumbersome. It is only really appropriate if you are managing 100 or more servers. Nagios is pretty popular, but seems like it has stagnated in terms of advancements. I remember it being purely plugin-driven as well, meaning you need to know the ecosystem just to get it running.

Altogether, I still have to recommend Zabbix for most circumstances.

 

Java Release Process with Continuous Delivery

Note: A lot of the release specifics were pioneered by Axel Fontaine.

One of the most interesting things we deal with is releases. Not a deployment -- which is actually running the new software. A release, in our parlance, is creating a binary artifact at a specific and immutable version. In the Java world, most of us use Maven for releases. More pointedly, we use the maven-release-plugin. I am going to show you why you should stop using that plugin.

Why Change?

This is a question I field a lot. There are several reasons, but the primary one is this: In a continuous delivery world, any commit could theoretically go to production. This means that you should be performing a maven release every time you build the software. So, let's revisit what happens inside your CI server when you use the maven-release-plugin properly:

  • CI checks out the latest revision from SCM
  • Maven compiles the sources and runs the tests
  • Release Plugin transforms the POMs with the new non-SNAPSHOT version number
  • Maven compiles the sources and runs the tests
  • Release Plugin commits the new POMs into SCM
  • Release Plugin tags the new SCM revision with the version number
  • Release Plugin transforms the POMs to version n+1 -SNAPSHOT
  • Release Plugin commits the new new POMs into SCM
  • Release Plugin checks out the new tag from SCM
  • Maven compiles the sources and runs the tests
  • Maven publishes the binaries into the Artifact Repository

Did you get all of that? It's 3 full checkout/test cycles, 2 POM manipulations, and 3 SCM revisions. Not to mention, what happens when somebody commits a change to the pom.xml (say, to add a new dependency) in the middle of all this? It's not pretty.

The method we're going to propose has 1 checkout/test cycle, 1 POM manipulation, and 1 SCM interaction. I don't know about you, but this seems significantly safer.

Versioning

Before we get into the details, let's talk about versioning. Most organizations follow the versioning convention they see most frequently (often called Semantic Versioning or SEMVER), but don't follow the actual principles. The main idea behind this convention is that you have 3 version numbers in dotted notation X.Y.Z, where:

  1. X is the major version. Any changes here are backwards-incompatible.
  2. Y is the minor version. Any changes here are backwards-compatible, but there may be bug fixes or new features.
  3. Z is the incremental version. All changes here are backwards-compatible.

However, most organizations do not use these numbers correctly. How many apps have you seen that sit at 1.0.x despite drastic breaking changes, feature addition/removal, and more? This scheme provides little value, especially when most artifacts are used in-house only. So, what makes a good version number?

  • Natural order: it should be possible to determine at a glance between two versions which one is newer
  • Build tool support: Maven should be able to deal with the format of the version number to enforce the natural order
  • Machine incrementable: so you don't have to specify it explicitly every time

While Subversion offers a great candidate (the repository revision number), Git has no equivalent. However, all CI servers, including both Bamboo and Jenkins, expose an environment variable that is the current build number. This is a perfect candidate that satisfies all three criteria, and has the added benefit that any artifact can be tied back to its specific build through convention.

What about Snapshots?

Snapshots are an anti-pattern in continuous delivery. Snapshots are, by definition, ephemeral. However, we're making one exception, and that's in the POM file itself. The rule we're following is that the pom.xml always has the version 0-SNAPSHOT. From here on out, no more snapshots!

The New Way

So, we're going to use the build number as the version number, and not have snapshots (except as described above). Our POM file is going to look a little something like this:

<project ...>
  ...
  <version>0-SNAPSHOT</version>
</project>

This is the only time we will use -SNAPSHOT identifiers. Everything else will be explicitly versioned. I am assuming your distributionManagement and scm blocks are filled in correctly. Next, we need to add 2 plugins to our POM file:

<build>
    ...
    <plugins>
    ...
        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>versions-maven-plugin</artifactId>
            <version>2.1</version>
        </plugin>
        <plugin>
            <artifactId>maven-scm-plugin</artifactId>
            <version>1.8.1</version>
            <configuration>
                <tag>${project.artifactId}-${project.version}</tag>
            </configuration>
        </plugin>
    </plugins>
</build>

The devil is in the details, of course, so let's see what should happen now during your release process. Note that I am using Bamboo in this example. You should make sure to modify it for your CI server's variables. The process is:

  • CI checks out the latest revision from SCM
  • CI runs mvn versions:set -DnewVersion=${bamboo.buildNumber}
  • Maven compiles the sources and runs the tests
  • Maven publishes the binaries into the Artifact Repository
  • Maven tags the version

    Steps 3, 4, and 5 are run with one command: mvn deploy scm:tag.
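Putting that together, the entire CI build step boils down to something like the following (Bamboo's build-number variable shown; on Jenkins you would substitute ${BUILD_NUMBER}):

mvn versions:set -DnewVersion=${bamboo.buildNumber}
mvn clean deploy scm:tag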

That's it. We have one specific revision being tagged for a release. Our history is cleaner, we can see exactly which revision/refs were used for a release, and it's immune to pom.xml changes being committed during the process. Much better!

Gotcha!

Ok, this all works great, unless you have a bad setup. The primary culprit is distinct modules with snapshot dependencies on each other. Remember how I told you snapshots are an anti-pattern? Here's the general rule: if the modules are part of the same build/release lifecycle, put them in one source repository and build/version/tag/release them as one unit. If the modules are completely separate, keep them in separate source repositories and use fixed-version dependencies between them to provide a consistent interface. If you depend on snapshot versions, you are creating non-repeatable builds, because the time of day you run the build/release determines exactly which dependency you fetch.

Dev Environments with Vagrant

If you work with a number of clients, one issue pops up over and over: setting up a new machine. Sometimes, you're lucky and a client will let you use your own machine. More often than not, though, you're forced to use their hardware. This usually involves reading a bunch of out-of-date wiki documents, asking people around you, and maybe contributing back to the wiki for the next person. If you're lucky, you'll get this done in a day or two. More typically, it can take a week or so.

If you're a manager, this should also worry you. You're making developers, whom you likely spent a good deal of money recruiting and compensating, spend a week or so of down time just setting up their computer. Even at a conservative estimate of $65/hr, a 40-hour week means you're spending $2,600 for somebody to get up and running. Now imagine you're paying prevailing market rates for consultants, and that figure rises dramatically.

At Dev9, we like to automate. Typical payback times for automation projects may be in the months or even years, but imagine you could shave 2-3 days off of new machine setup time for each developer you onboard. This kind of tool could pay for itself with your first new developer, with better returns for each additional developer. So, what do we do?

Code

This article is going to involve some code. If you want to play along at home, you can view our repo at https://github.com/dev9com/vagrant-dev-env.

Enter Vagrant

Vagrant is a tool perfectly suited to our use case. It manages virtual machines (I use Oracle VirtualBox). VMs used to be clunky and kind of slow, but laptops now ship with 16GB of RAM, 500+GB SSDs, and 8-core processors. We are living in an age of abundance here, and it would be a shame to let it go to waste :).

The Build

What we are going to build is a development machine image. While companies can benefit from creating this and handing it to new hires, it's just as valuable if you have multiple clients. I can transition between provided hardware with ease, because I'm just using them all as hosts for my VM. In addition, I can make a change to the provisioning of one VM and propagate it quickly to the others.

This VM is going to be a headless VM. That means there is no UI. We will interact with it over SSH. This helps keep it fast and portable. I have no problem using IntelliJ IDEA on Windows or Mac or Linux, but what I always want is my terminal and build tools. So, that's the machine we're going to build.

Initial Setup

First, get Vagrant and VirtualBox installed. Maybe clone our git repo if you want to follow along. That should be all for now!
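
If you want a concrete starting point, here is one way to do that setup from a terminal. The Homebrew line assumes a reasonably current macOS; on Windows or Linux, grab the installers from virtualbox.org and vagrantup.com instead.

# Install the tools (macOS + Homebrew shown; use your platform's installers otherwise).
brew install --cask virtualbox vagrant

# Verify both are on the PATH.
VBoxManage --version
vagrant --version

# Optional: clone the examples used in this article.
git clone https://github.com/dev9com/vagrant-dev-env.git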

Choosing a base image is something that only comes with research, but ours is going to be phusion/ubuntu-14.04-amd64. This is the foundation of all of our images. It was chosen because it plays really nicely with Docker. Full disclosure: we are Docker's PNW partner, so this is actually important to me :).

Step 1: A Basic Box

The first step in anything software related seems to be hello world. So, to create a Vagrant instance, we create a Vagrantfile. Clever, right? And even better, your Vagrantfile is just Ruby code -- like a Rakefile. The simplest possible Vagrantfile for what we're doing:

box      = 'phusion/ubuntu-14.04-amd64'
version  = 2

Vagrant.configure(version) do |config|
    config.vm.box = box
end

Let's go through this. As I mentioned above, our base box is that Ubuntu distro. You can just as easily choose CentOS, SUSE, CoreOS, or any number of other images. People even have entire dev stacks as one image! The version identifier is just signalling to Vagrant which configuration API to use. I've personally never seen anything except 2, but given the concept of versioned APIs in the REST world, it's not difficult to see how they plan to use it in the future.

So, to run this, we just type vagrant up:

[10:50:48 /ws/dev9/vagrant-dev-env/step1]$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'phusion/ubuntu-14.04-amd64'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'phusion/ubuntu-14.04-amd64' is up to date...
==> default: Setting the name of the VM: step1_default_1409766665528_9289
==> default: Clearing any previously set forwarded ports...
==> default: Fixed port collision for 22 => 2222. Now on port 2200.
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 22 => 2200 (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2200
    default: SSH username: vagrant
    default: SSH auth method: private key
    default: Warning: Connection timeout. Retrying...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
==> default: Mounting shared folders...
    default: /vagrant => /ws/dev9/vagrant-dev-env/step1

[10:51:23 /ws/dev9/vagrant-dev-env/step1]$

Notice that this took all of about 35 seconds. Most of the output is rather self-explanatory. So, this box is "up" -- how do we use it?

[10:51:23 /ws/dev9/vagrant-dev-env/step1]$ vagrant ssh
Welcome to Ubuntu 14.04 LTS (GNU/Linux 3.13.0-24-generic x86_64)

 * Documentation:  https://help.ubuntu.com/
Last login: Tue Apr 22 19:47:09 2014 from 10.0.2.2
vagrant@ubuntu-14:~$

That's it. There's your Ubuntu VM! Let's say we want to take it down, delete it, and bring it back up:

vagrant@ubuntu-14:~$ exit
Connection to 127.0.0.1 closed.

[10:55:23 /ws/dev9/vagrant-dev-env/step1]$ vagrant destroy -f
==> default: Forcing shutdown of VM...
==> default: Destroying VM and associated drives...

[10:55:30 /ws/dev9/vagrant-dev-env/step1]$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'phusion/ubuntu-14.04-amd64'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'phusion/ubuntu-14.04-amd64' is up to date...
==> default: Setting the name of the VM: step1_default_1409766945197_31521
==> default: Clearing any previously set forwarded ports...
==> default: Fixed port collision for 22 => 2222. Now on port 2200.
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 22 => 2200 (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2200
    default: SSH username: vagrant
    default: SSH auth method: private key
    default: Warning: Connection timeout. Retrying...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
==> default: Mounting shared folders...
    default: /vagrant => /ws/dev9/vagrant-dev-env/step1

[10:56:02 /ws/dev9/vagrant-dev-env/step1]$

So under a minute to destroy a VM and bring up an identical one. Not bad, Future. Not bad. A box like this is fine and dandy, but we probably want to do more with it.
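
For reference, these are the day-to-day Vagrant commands you'll use against a box like this; all of them are run from the directory containing the Vagrantfile.

vagrant up          # create and boot the VM (runs provisioners on first creation)
vagrant ssh         # open a shell inside the VM
vagrant halt        # shut the VM down, keeping its disk
vagrant suspend     # save the VM's state and stop it
vagrant resume      # wake a suspended VM
vagrant status      # show whether the VM is running
vagrant destroy -f  # delete the VM entirely; the next 'up' rebuilds it from scratch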

Step 2: Basic Provisioning

Even at a base level, let's say we want Java. So, let's tweak our Vagrantfile a bit:

box      = 'phusion/ubuntu-14.04-amd64'
version  = 2

Vagrant.configure(version) do |config|
    config.vm.box = box

    config.vm.provision :shell, :inline => "apt-get -qy update"
    config.vm.provision :shell, :inline => "apt-get -qy install openjdk-7-jdk"
end

If you now run vagrant up, you'll get a machine with Java installed:

[11:27:33 /ws/dev9/vagrant-dev-env/step2](git:master+?)
$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'phusion/ubuntu-14.04-amd64'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'phusion/ubuntu-14.04-amd64' is up to date...
==> default: Setting the name of the VM: step2_default_1409768866354_7342
==> default: Clearing any previously set forwarded ports...
==> default: Fixed port collision for 22 => 2222. Now on port 2201.
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 22 => 2201 (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2201
    default: SSH username: vagrant
    default: SSH auth method: private key
    default: Warning: Connection timeout. Retrying...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
==> default: Mounting shared folders...
    default: /vagrant => /ws/dev9/vagrant-dev-env/step2
==> default: Running provisioner: shell...
    default: Running: inline script

[ clipping a bunch of useless stuff -- you know how it is. ]

==> default: 1 upgraded, 182 newly installed, 0 to remove and 109 not upgraded.
==> default: Need to get 99.4 MB of archives.
==> default: After this operation, 281 MB of additional disk space will be used.
[ ... ]
==> default: done.
==> default: done.

[11:30:15 /ws/dev9/vagrant-dev-env/step2]$ vagrant ssh
Welcome to Ubuntu 14.04 LTS (GNU/Linux 3.13.0-24-generic x86_64)

 * Documentation:  https://help.ubuntu.com/
Last login: Tue Apr 22 19:47:09 2014 from 10.0.2.2

vagrant@ubuntu-14:~$ java -version
java version "1.7.0_65"
OpenJDK Runtime Environment (IcedTea 2.5.1) (7u65-2.5.1-4ubuntu1~0.14.04.2)
OpenJDK 64-Bit Server VM (build 24.65-b04, mixed mode)

vagrant@ubuntu-14:~$

And there we go. A scripted buildout of a base Ubuntu box with Java. Of course, shell scripts can and do go wrong, and they get progressively more complex, especially as you start having components that mix and match. Additionally, since all developers should be getting familiar with Continuous Delivery concepts, let's take this opportunity to explore a little tool called Puppet.

Step 3: Buildout with Puppet

Puppet is pretty awesome -- and so are Chef and Ansible. I chose Puppet initially because I could get it working quicker. I'm not making a value judgement on which one works best.

The idea with Puppet is that you use the puppet files to describe the state you want the machine to be in, and Puppet manages getting it there. Vagrant also has first-class support for Puppet. Remember above, how we're provisioning with inline shell scripts? Well, Vagrant also has a Puppet provisioner. If you've never used Puppet before, that's OK, the examples should give you a basic overview of its usage.

To set up a basic Puppet provisioner, let's do something like this in our Vagrantfile:

box      = 'phusion/ubuntu-14.04-amd64'

Vagrant.configure(2) do |config|
    config.vm.box = box

    # Now let puppet do its thing.
    config.vm.provision :puppet do |puppet|
      puppet.manifests_path = 'puppet/manifests'
      puppet.manifest_file = 'devenv.pp'
      puppet.module_path = 'puppet/modules'
      puppet.options = "--verbose"
    end
end

This also seems pretty straightforward. Again, don't worry too much if you don't know Puppet. Those paths are relative to the Vagrantfile, so your directory structure (initially) will look like this:

[12:43:47 /ws/dev9/vagrant-dev-env/step3]$ tree
.
├── Vagrantfile
└── puppet
    ├── manifests
    │   └── devenv.pp
    └── modules

In the provisioner, we're giving it two paths. The manifests path is where Puppet will look for manifest files. A manifest is the basic unit of execution in Puppet, made up of one or more resource declarations -- descriptions of the desired state of a resource. These resource declarations are the basic building blocks. So, to start, let's just get our previous example working in Puppet. Modify your devenv.pp to look like this:

group { 'puppet': ensure => 'present' }

exec { "apt-get update":
  command => "apt-get -yq update",
  path    => ["/bin","/sbin","/usr/bin","/usr/sbin"]
}

exec { "install java":
  command => "apt-get install -yq openjdk-7-jdk",
  require => Exec["apt-get update"],
  path    => ["/bin","/sbin","/usr/bin","/usr/sbin"]
}

This is pretty self-explanatory, with one caveat: order doesn't matter. Puppet tries to optimize the running and management of dependencies, so the steps will not necessarily be executed in the order you wrote them. This is why the require => declaration exists on the install java exec -- we are telling Puppet to run the apt-get update before this step. Notice also the capital E in the require: when you reference an already-declared resource, Puppet capitalizes the resource type (Exec["apt-get update"]), while the declaration itself uses lowercase (exec { ... }). For now, just treat it as the required convention.

So, let's bring this box up:

[12:56:35 /ws/dev9/vagrant-dev-env/step3]$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'phusion/ubuntu-14.04-amd64'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'phusion/ubuntu-14.04-amd64' is up to date...
==> default: Setting the name of the VM: step3_default_1409774249245_48069
==> default: Clearing any previously set forwarded ports...
==> default: Fixed port collision for 22 => 2222. Now on port 2202.
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 22 => 2202 (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2202
    default: SSH username: vagrant
    default: SSH auth method: private key
    default: Warning: Connection timeout. Retrying...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
==> default: Mounting shared folders...
    default: /vagrant => /ws/dev9/vagrant-dev-env/step3
    default: /tmp/vagrant-puppet-3/manifests => /ws/dev9/vagrant-dev-env/step3/puppet/manifests
    default: /tmp/vagrant-puppet-3/modules-0 => /ws/dev9/vagrant-dev-env/step3/puppet/modules
==> default: Running provisioner: puppet...
==> default: Running Puppet with devenv.pp...
==> default: stdin: is not a tty
==> default: Notice: Compiled catalog for ubuntu-14.04-amd64-vbox in environment production in 0.07 seconds
==> default: Info: Applying configuration version '1409774267'
==> default: Notice: /Stage[main]/Main/Exec[apt-get update]/returns: executed successfully
==> default: Notice: /Stage[main]/Main/Exec[install java]/returns: executed successfully
==> default: Info: Creating state file /var/lib/puppet/state/state.yaml
==> default: Notice: Finished catalog run in 117.84 seconds

[12:59:48 /ws/dev9/vagrant-dev-env/step3](git:master+?)
$ vagrant ssh
Welcome to Ubuntu 14.04 LTS (GNU/Linux 3.13.0-24-generic x86_64)

Last login: Tue Apr 22 19:47:09 2014 from 10.0.2.2

vagrant@ubuntu-14:~$ java -version
java version "1.7.0_65"
OpenJDK Runtime Environment (IcedTea 2.5.1) (7u65-2.5.1-4ubuntu1~0.14.04.2)
OpenJDK 64-Bit Server VM (build 24.65-b04, mixed mode)
vagrant@ubuntu-14:~$

And now we have puppet provisioning our system! The output is also much nicer, and you can get some hint of how Puppet works -- there are stages, it gives us return values, it saves a state file, and there is a concept of environments. Any wonder why Puppet is so popular in the DevOps world? When you hear DevOps folks talking about a VM as a unit of deployment, they're not kidding. It's just a file.

Of course, this is basically cheating. The Puppet way is to describe the state of a system, and this manifest doesn't describe state -- it describes commands to run. If imperative scripting is what you want, there are other tools better suited to it; Puppet is a declarative, stateful framework, so let's not turn it into glorified shell scripting. So, we can change that up a bit...

Step 4: Actually Using Puppet

For this step, the Vagrantfile doesn't change. We're just changing the Puppet files. Check this out:

group { 'puppet': ensure => 'present' }

exec { "apt-get update":
  command => "apt-get -yq update",
  path    => ["/bin","/sbin","/usr/bin","/usr/sbin"]
}

package { "openjdk-7-jdk":
  ensure  => installed,
  require => Exec["apt-get update"],
}

Now we're declaring state. We're just telling Puppet to make sure openjdk-7-jdk is installed, and to run an apt-get update beforehand. Since apt-get update is idempotent on its own, this whole definition is now idempotent. That means we can run it multiple times without issue!

Let's bring the box up:

[13:36:30 /ws/dev9/vagrant-dev-env/step4](git:master+!?)
$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'phusion/ubuntu-14.04-amd64'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'phusion/ubuntu-14.04-amd64' is up to date...
==> default: Setting the name of the VM: step4_default_1409776604916_69804
==> default: Clearing any previously set forwarded ports...
==> default: Fixed port collision for 22 => 2222. Now on port 2202.
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 22 => 2202 (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2202
    default: SSH username: vagrant
    default: SSH auth method: private key
    default: Warning: Connection timeout. Retrying...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
==> default: Mounting shared folders...
    default: /vagrant => /ws/dev9/vagrant-dev-env/step4
    default: /tmp/vagrant-puppet-3/manifests => /ws/dev9/vagrant-dev-env/step4/puppet/manifests
    default: /tmp/vagrant-puppet-3/modules-0 => /ws/dev9/vagrant-dev-env/step4/puppet/modules
==> default: Running provisioner: puppet...
==> default: Running Puppet with devenv.pp...
==> default: stdin: is not a tty
==> default: Notice: Compiled catalog for ubuntu-14.04-amd64-vbox in environment production in 0.17 seconds
==> default: Info: Applying configuration version '1409776705'
==> default: Notice: /Stage[main]/Main/Exec[apt-get update]/returns: executed successfully
==> default: Notice: /Stage[main]/Main/Package[openjdk-7-jdk]/ensure: ensure changed 'purged' to 'present'
==> default: Info: Creating state file /var/lib/puppet/state/state.yaml
==> default: Notice: Finished catalog run in 134.04 seconds

There we go! We've declared the state of our machine, and Puppet does its magic. Of course, Puppet can do a whole lot more -- file templating, adding and removing users, setting up configuration, making sure some packages are NOT present, etc. This is YOUR machine -- install git, maven, oh-my-zsh, etc.
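
Because the manifest is idempotent, you can keep iterating on it without rebuilding the box: edit devenv.pp, then re-apply it to the running VM. A quick sketch:

# Re-run only the provisioners against the running VM.
vagrant provision

# Or restart the VM and re-provision in one step.
vagrant reload --provision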

Also, keep in mind that Puppet is a really in-demand skill. You might find yourself with a valuable new tool.