Thursday 7 November 2013

Agile Testing Days take-aways #3 - Infrastructure Testing with Jenkins, Puppet and Vagrant

Infrastructure-as-code (IaC) tools, such as Puppet and Chef, have fascinated me since I first became aware of them about three years ago, and I'm very excited that my organisation is now making use of them. Attending this session was thus an easy choice to make. Carlos Sanchez delivered a well-balanced talk that was high-level enough for newbies to understand, but still provided enough meat to satisfy those familiar with the concepts. You can read the abstract and grab the slides from his blog.

I had two direct take-aways from his talk, one technical, the other cultural:
  1. Carlos shared how RSpec-puppet can be used to assert that packages are set up correctly - think acceptance-test-driven development for Puppet (see the sketch after this list).
  2. There was a great quote and this picture at the very beginning, which gave the printing press as an example of how a tool can lead to significant cultural change. The quote was from Patrick Debois: "Tools can enable change in behaviour and eventually change culture." This is a controversial point because, in the case of BDD, for example, using tools too early can hinder or even prevent the right kind of culture change. Choosing the right tool at the right time is a difficult decision to make, but one we shouldn't be afraid of.
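On the first take-away, here's a minimal sketch of what an RSpec-puppet assertion looks like. The class, package and service names are my own illustrative choices, not from Carlos's talk:

```ruby
# spec/classes/ntp_spec.rb -- a minimal rspec-puppet sketch asserting that a
# hypothetical 'ntp' class installs the package and keeps the service running.
require 'spec_helper'

describe 'ntp' do
  it { should contain_package('ntp').with_ensure('installed') }
  it { should contain_service('ntpd').with(
    'ensure' => 'running',
    'enable' => 'true'
  ) }
end
```

Written before the manifest exists, a failing spec like this drives out the Puppet code in much the same way an acceptance test drives out application code.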


After his talk, Carlos and I met up for a deeper discussion on the following practices.

Environment specific variables and configuration servers

One of the benefits of IaC is that it helps overcome environmental configuration issues by explicitly codifying each environment's configuration. A few years back, the concept of a configuration server was mooted, which also partly addressed this. The concept involves a central server and configuration repository, from which applications retrieve their configuration (via HTTP, for example) by telling the server where they're running. We discussed how IaC solves this, and whether configuration servers are now redundant. Our conclusion was that environment-specific variables (e.g. connection strings for QA, Pre-Prod, Live) should be separated from recipes into environment-specific files. It's still worth considering the use of a configuration server, quite separate from Puppet/Chef. I think experimentation would be required to establish whether this separation is beneficial. If any readers have experience of this, I'd love to hear it.
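As a sketch of what that separation might look like in Chef (the application name and values here are hypothetical), the environment-specific values live in an environment file rather than in the recipes:

```ruby
# environments/qa.rb -- a hypothetical Chef environment file: the QA-specific
# values live here, not in the recipes that consume them.
name 'qa'
description 'QA environment'
default_attributes(
  'myapp' => {
    'db_connection' => 'Server=qa-db01;Database=myapp;',
    'api_url'       => 'https://api.qa.example.com'
  }
)
```

A recipe then reads `node['myapp']['db_connection']`, so the same cookbook runs unchanged against QA, Pre-Prod and Live; a configuration server would simply be another place those per-environment values could live.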


Should engineers develop on local, remote or cloud VMs?

IaC tools can lead to the increased use of virtual machines (VMs) throughout the development process, including in an individual engineer's dev environment. This raises the question of what the best host is for VMs used for this particular purpose. Is it better that they be hosted locally (on the engineer's own laptop), remotely on a VM server, or remotely in the cloud (e.g. AWS)? There are many trade-offs to make in deciding this, but our discussion concluded with the instinct that (beefy) local hosts were best.
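For illustration, a local dev VM of this kind is typically driven by a Vagrantfile like the sketch below; the box name, memory size and manifest paths are my assumptions, not a recommendation from our discussion:

```ruby
# Vagrantfile -- a minimal local dev VM, provisioned from the same
# source-controlled Puppet manifests used for the other environments.
Vagrant.configure('2') do |config|
  config.vm.box = 'precise64'                # base box on the engineer's laptop
  config.vm.provider 'virtualbox' do |vb|
    vb.memory = 2048                         # 'beefy' local hosts help here
  end
  config.vm.provision 'puppet' do |puppet|
    puppet.manifests_path = 'puppet/manifests'
    puppet.manifest_file  = 'site.pp'
  end
end
```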


What's best: immutable images or a clean base + recipes?

Martin Fowler has an excellent article that presents the concept of immutable images (a VM image that needs no configuration) and compares them to deploying binaries that need no further tinkering or configuration. At first, this seems to go against the IaC concept of having base images whose configuration is fully defined by source-controlled recipes. When you evaluate it in practice, though, a hybrid of the two practices gives the best of both worlds, as using an immutable image significantly speeds up Chef execution and deployment. The immutable image should itself be created from a clean base image and a source-controlled set of recipes. The image and its recipes should be updated every couple of weeks, depending on your release cycle.
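A sketch of how that hybrid might be baked, using Vagrant and Chef Solo (the box, cookbook and output names are all hypothetical):

```ruby
# Vagrantfile -- bake an 'immutable' image: start from a clean base box,
# apply the source-controlled recipes once, then snapshot the result.
Vagrant.configure('2') do |config|
  config.vm.box = 'clean-base'               # pristine OS image
  config.vm.provision 'chef_solo' do |chef|
    chef.cookbooks_path = 'cookbooks'        # the source-controlled recipes
    chef.add_recipe 'myapp'
  end
end
```

After `vagrant up` has converged, `vagrant package --output myapp-base.box` snapshots the provisioned VM as the new immutable image; re-running the bake on your cycle keeps the image and its recipes in step.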

Who chefs the chef / who puppets the puppet?

Chef/Puppet recipes include definitions of the required applications, where to get them from, and which versions to use. So where do you define which version of Chef itself to use (in order to keep it consistent and up to date)? Carlos suggested this could be achieved by a mix of running update scripts and using Chef recipes themselves.
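The recipe half of that idea might look something like this minimal sketch (the version number is an example value, and upgrading a running chef-client mid-run is exactly the part the update scripts exist to handle):

```ruby
# recipes/pin_chef.rb -- a hypothetical recipe that uses Chef itself to pin
# the chef package, so every node converges on the same tooling version.
package 'chef' do
  version '11.6.0'    # an example value; bump deliberately, not implicitly
  action :install
end
```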

Tuesday 5 November 2013

Agile Testing Days take-aways #2 - Accelerating Agile Testing

This is the second in my series of take-aways from Agile Testing Days.

The ever-enthusiastic Dan North did a high-energy keynote on Accelerating Agile Testing. His slides were as innovative as ever - this time a series of index cards stitched together using CamScanner. Check them out at Speaker Deck.

My personal take-aways were:

  • UX is all about the experience a user has when they use the product - e.g. what drives Apple users to stay loyal and queue up overnight. How do we develop our culture so that it thinks as much as Apple does about UX?
  • What we do reveals what we value: values and beliefs drive our capabilities, which drive our behaviour. Consultants and coaches typically work at the capabilities level, unless they have permission to help people adjust their values. Working systemically can help make adjustments at the values level.
  • Are we testing in all the right areas, in the right ways? Check the testing capabilities matrix:
[Image: testing capabilities matrix]
  • We should be putting most of our testing effort into the upper-right quadrant of the impact vs. likelihood matrix. Bear in mind that this matrix has several planes, each specific to a context/stakeholder.
[Image: impact vs. likelihood matrix]
  • Explore other testing methods, consider opportunity cost (i.e. not everything is worth automating), test deliberately (consider how, where, what, when).

Monday 4 November 2013

Agile Testing Days take-aways #1 - Visualising Quality

Last week I had the privilege of attending Agile Testing Days 2013, where I was a consensus speaker. This post starts a series summarising the main learning points I took away from the awesome conference.

First up is David Evans on Visualising Quality. David provided a very moving keynote that ranged from the Challenger disaster and Napoleon's retreat to Lady Gaga and the McGurk effect. He very effectively communicated the care and thought needed when we construct visualisations. Some things I felt it would be helpful to consider or try out:
  1. wordle.net - pour in all of our test case names and see what it looks like (a small script for gathering the names follows this list).
  2. The value of the information we provide is equal to the value of the decision it informs - a good rule for deciding how much effort to put into creating a visualisation.
  3. We're not pilots, so don't build a dashboard that makes it look like we are - good advice against going overboard with awesome dashboards.
  4. Consider carefully how your information is presented: use a representation appropriate to the decision it's informing and ensure that those making the decision understand what the data is saying. David gave many good examples (such as how the US voted during the 2012 elections) of how the choice of axes can distort or enhance the true picture. Each time we create a visualisation of something, we should carefully consider how appropriate the axes are (e.g. what's the best code metric: LOC, files, test coverage, live usage?).
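On the wordle.net idea, a tiny script can gather the test case names ready to paste in; this sketch assumes RSpec-style specs living under spec/, which is my assumption rather than anything David suggested:

```ruby
# wordle_feed.rb -- collect test case names from spec files and print them,
# ready to paste into wordle.net. Paths and naming conventions are assumptions.
names = Dir.glob('spec/**/*_spec.rb').flat_map do |file|
  # capture the quoted description from it/describe/context lines
  File.read(file).scan(/^\s*(?:it|describe|context)\s+['"](.+?)['"]/).flatten
end
puts names.join(' ')
```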