Fixed Velocity is a Fallacy

Velocity in an XP sense is a historical rate of “work completed” per iteration. Measuring and using velocity is powerful because:

  • Planning based on actual velocity figures gives you a more realistic plan than depending on the optimistic/pessimistic estimates of developers;
  • It is generally cheaper to measure how much time work really takes than to spend excessive time guessing how much it will;
  • Dramatic changes in the number give greater visibility to issues a team may be having;
  • The rate indicates if a deliverable is on track, or if scope needs to be re-negotiated.
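
Forecasting from a measured velocity is simple arithmetic. As a rough sketch (the class and method names below are my own illustration, not part of any XP toolkit): average the points completed over recent iterations, then divide the remaining scope by that average.

```java
// Illustrative sketch of velocity-based forecasting; names are my own.
public class VelocityForecast {

    // Average of the points completed over the last few iterations.
    static double averageVelocity(int[] completedPerIteration) {
        int total = 0;
        for (int points : completedPerIteration) {
            total += points;
        }
        return (double) total / completedPerIteration.length;
    }

    // Iterations needed to burn down the remaining scope, rounded up.
    static int iterationsRemaining(int remainingPoints, double velocity) {
        return (int) Math.ceil(remainingPoints / velocity);
    }

    public static void main(String[] args) {
        double velocity = averageVelocity(new int[]{18, 22, 20}); // 20.0
        System.out.println("Velocity: " + velocity);
        System.out.println("Iterations left for 90 points: "
                + iterationsRemaining(90, velocity));
    }
}
```

The point is that the inputs are observed numbers, not promises: if the team completed 18, 22 and 20 points, planning against an average of 20 is a forecast you can defend.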

A fixed velocity is unrealistic because in the real world there is always a force working against it… friction. Project friction takes on a number of forms, including:

  • Communication breakdown – Sometimes it’s difficult to get answers from the business, or team members forget to tell each other important things, which costs time as people discover the issues for themselves.
  • Environmental Issues – Development environments are never perfect, and as you depend on more and more external resources, the team faces the additional risk of being unable to complete a story because a database or server is down.
  • Ineffective Iteration Planning – Poor-quality story cards slip past the Iteration Manager and require excessive back and forth to work out what needs doing, or a third-party prerequisite never comes through.
  • Constrained Resources – Depending on key members for particular tasks can be an effective way of ensuring good productivity, but team members can be ill, or be required for other things. Bringing on new people should also affect a team’s velocity in some manner.

Keep in mind the following list of things you may experience when you have a fixed velocity:

  • Planning based on an inaccurate number means setting yourself potentially unrealisable goals, instead of the more useful forecasting you can do with a real velocity measurement.
  • You lose major visibility into issues affecting the team, making it more difficult to identify and address them.
  • The importance of maintaining the magic number adds another opposing force, typically misaligned with the core business objective. You lose all sorts of things: a sustainable pace (read more about the 40-hour week and the need for slack time), quality of output (leading to additional maintenance or a poor user experience), and honest accounting, as iteration lengths or other numbers are “adjusted” to continue the facade of a fixed velocity.

Like most things in an agile process, velocity is one of those metrics that provides another feedback mechanism to help you plan and identify places where you might benefit from change. Use real world numbers to help you, instead of the artificial ones that handcuff you.

What Sort of Pasta Do You Want?

This post comes out of a discussion I had a while back with some other Thoughtworkers in my home city, Brisbane, and I must credit both Vladimir Sneblic and James Webster for bringing this up. Since they are not frequent bloggers, I thought this little gem was still worth sharing.

Anyone who has ever dealt with software will have heard the term spaghetti code. It’s a great term for software that is difficult to maintain or change because parts of the system are intricately entwined, and a change to one part can adversely affect another. Reflecting on a system developed with techniques found more often in agile development teams, such as Test Driven Development (TDD) and dependency injection, they observed that the parts of the system were more loosely coupled and more easily interchangeable, good indicators that it would be a better system to maintain. Such code is better described as ravioli code, in contrast to its more common pasta brethren.
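
To make the contrast concrete, here is a small sketch of the “ravioli” style (the pricing example and all names are hypothetical, not taken from the system discussed above): each piece sits behind a small interface and its dependency is injected, so it can be tested or replaced without touching its neighbours.

```java
// Hypothetical example of "ravioli" code: small, self-contained pieces
// connected through an interface rather than entwined directly.
interface ExchangeRateSource {
    double rateFor(String currency);
}

class PriceConverter {
    private final ExchangeRateSource rates;

    // The dependency is injected, so the converter never reaches out
    // to a database or web service directly.
    PriceConverter(ExchangeRateSource rates) {
        this.rates = rates;
    }

    double convert(double amount, String currency) {
        return amount * rates.rateFor(currency);
    }
}

public class RavioliExample {
    public static void main(String[] args) {
        // A test can swap in a fixed-rate stub; production wires in a real source.
        ExchangeRateSource stub = currency -> 1.5;
        PriceConverter converter = new PriceConverter(stub);
        System.out.println(converter.convert(10.0, "AUD")); // prints 15.0
    }
}
```

Spaghetti code would have `PriceConverter` fetching rates itself from some concrete service, so that a change to rate lookup ripples into every price calculation.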

This analogy has really stuck with me since because of the number of parallels it draws. Take one such example: the reason that ravioli is typically more expensive than spaghetti, even though they are both made from the same fundamental ingredients, is that making good ravioli takes a lot more skill. This idea is, of course, not new, and can be taken to extremes (see Wikipedia’s entry), but I know which one is my favourite.

Optimise Your Build with Faster Running JUnit Tests

Introduction
There are many techniques you can use to improve build time. Here’s one that can be used when:

  • Tests break the build
  • You only care about failing tests being reported
  • You want to reuse the existing formatting utilities provided by standard Ant optional tasks
  • You don’t want the formatting to be too slow
  • The percentage of tests passing is not important

When running with the optional JUnit task, the normal strategy is to use the standard XML formatter and then style the information into something presentable with the optional JUnitReport task. Unfortunately, the combined cost of spitting out XML for every single test suite executed (i.e. usually every single Test class you have) and then applying XSL is typically quite high. In my experience, it’s been several minutes running tests, let alone waiting for the report to be generated. Just try using the plain logger (<formatter type="plain"/>) and see the difference yourself.

A Better Way
The alternative to the standard XML formatter is the QuietXMLFormatter (download here). The aim of this formatter is to:

  • Only produce output on tests that fail or error;
  • Produce the XML output in the same format as the standard org.apache.tools.ant.taskdefs.optional.junit.XMLJUnitResultFormatter; and
  • Do it without inheritance.

The result is a faster build (it can be quite significant depending on the number of your tests) that still reports errors and failures in the same way with only a few tweaks to the build.

Note that the QuietXMLFormatter has only been tested with:

  • Forked (once per batch) TestSuites (JUnitTask produces different behaviours depending on whether you fork or not)
  • Ant Version 1.6.2 and 1.6.5 (some of these classes change a fair bit)
  • JUnit 3.8.1

How To Add It To Your Build

  1. Download the QuietXMLFormatter distribution jar (quietXmlFormatter-0.1.jar or with source)
  2. Instead of the normal entry that looks like (<formatter type="xml"/>), use the following: <formatter classname="com.thekua.ant.optional.junit.QuietXMLFormatter" extension=".xml"/>
  3. If you fork your tests, you need to make sure the jar is on the classpath, or ensure that Ant makes it available
  4. Run your build as per normal
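
For context, a minimal `<junit>` invocation using the formatter might look like the following sketch (the classpath reference and directory names are placeholders for your own build, not part of the distribution):

```xml
<junit fork="yes" forkmode="perBatch" printsummary="no">
    <classpath>
        <!-- the formatter jar must be visible to the forked VM -->
        <pathelement location="quietXmlFormatter-0.1.jar"/>
        <path refid="test.classpath"/>
    </classpath>
    <formatter classname="com.thekua.ant.optional.junit.QuietXMLFormatter"
               extension=".xml"/>
    <batchtest todir="build/test-output">
        <fileset dir="build/test-classes" includes="**/*Test.class"/>
    </batchtest>
</junit>
```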

Known Issues
In testing this with a few of the latest versions of Ant, I found a few issues that, although not detrimental, can be slightly annoying. When I get time, I might look at how the latest Ant source handles them. The issues currently include:

  • The actual files output by the task are managed outside of a given formatter, and there is an assumption that your formatter will produce some output. This means that if you are not actually outputting anything, you still end up with zero-sized files for each individual test suite executed.
  • At least when you run in forked mode, the extension for each output file doesn’t seem to get added by the controlling class that manages the OutputStream made available to each JUnitResultFormatter. It would be okay if JUnitReport didn’t die on files without an extension, but I couldn’t find a way around that. Try the following bit of code:

    <move todir="${test.output.dir}" includeemptydirs="false">
        <fileset dir="${test.output.dir}">
            <exclude name="**/*.xml"/>
        </fileset>
        <mapper type="glob" from="*" to="*.xml"/>
    </move>

  • JUnitReport is fine for styling each of the output test reports, but because an empty file is not a valid XML document, you can end up with a fairly noisy build. The solution is, of course, something that deletes empty files from a directory. There is another task in the jar (DeleteEmptyFilesTask) that you can use in your build to do exactly that. The same rules for custom Ant tasks apply when you incorporate it into your build. Integrate it like this:

    <taskdef classname="com.thekua.ant.taskdefs.DeleteEmptyFilesTask" name="DeleteEmptyFiles" classpath="classes"/>

    with the following code added to the target that runs your JUnitReport:

    <DeleteEmptyFiles directory="${testOutputDir}"/>
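
The core of such a task is trivial. The sketch below is my own illustration of what a delete-empty-files step does, not the actual DeleteEmptyFilesTask source:

```java
import java.io.File;

// Illustrative sketch of a delete-empty-files step;
// not the actual DeleteEmptyFilesTask implementation.
public class DeleteEmptyFiles {

    // Deletes zero-length files directly under the given directory
    // and returns how many were removed.
    static int deleteEmptyFiles(File directory) {
        File[] files = directory.listFiles();
        if (files == null) {
            return 0; // not a directory, or unreadable
        }
        int deleted = 0;
        for (File file : files) {
            if (file.isFile() && file.length() == 0 && file.delete()) {
                deleted++;
            }
        }
        return deleted;
    }

    public static void main(String[] args) {
        int removed = deleteEmptyFiles(new File(args.length > 0 ? args[0] : "."));
        System.out.println("Deleted " + removed + " empty file(s)");
    }
}
```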

As always, feedback, comments and thoughts always appreciated.

Please Stage Your Tests

It’s a bad sign if the only feedback you have for a project is a long build with only one set of tests. Do yourself (and your team) a favour by splitting test execution into logical groupings, with the fastest (or most important) running first. Here’s an Ant macro you can easily reuse that’s optimised to only generate the JUnit HTML report and fail the build if any tests fail.

<macrodef name="run_junit_tests" description="Macro for running junit tests">
    <attribute name="testclasspath" default="unit.test"/>
    <attribute name="testfileset" default="unittest.fileset"/>
    <attribute name="outputdir" default="build/output/test"/>
    <attribute name="basedir" default="."/>

    <sequential>
        <mkdir dir="@{outputdir}"/>
        <junit forkmode="perBatch"
               printsummary="yes"
               haltonfailure="false"
               failureproperty="unit.test.failure"
               haltonerror="false"
               errorproperty="unit.test.error"
               dir="@{basedir}">
            <classpath refid="@{testclasspath}"/>
            <formatter type="xml"/>
            <batchtest fork="yes" todir="@{outputdir}">
                <fileset refid="@{testfileset}"/>
            </batchtest>
        </junit>
        <condition property="tests.failed.or.errored">
            <or>
                <isset property="unit.test.failure"/>
                <isset property="unit.test.error"/>
            </or>
        </condition>
        <property name="_junit_report_dir_" value="@{outputdir}"/>
        <antcall target="-generate_junit_report_and_fail"/>
    </sequential>
</macrodef>

<target name="-generate_junit_report_and_fail" if="tests.failed.or.errored"
        description="Generate the unit test report if tests failed and cause build to stop short">
    <junitreport todir="${_junit_report_dir_}">
        <fileset dir="${_junit_report_dir_}">
            <include name="TEST-*.xml"/>
        </fileset>
        <report format="frames" todir="${_junit_report_dir_}/output"/>
    </junitreport>
    <fail if="tests.failed.or.errored" message="Build failed due to Unit test failures or errors"/>
</target>
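
With the macro defined, staging is then just a matter of calling it once per grouping, fastest first. A sketch of such a target follows; the integration-test classpath and fileset names are placeholders you would define in your own build:

```xml
<target name="test" description="Run fast unit tests, then slower integration tests">
    <!-- Unit tests run first; a failure here stops the build before
         the slower stage starts. -->
    <run_junit_tests testclasspath="unit.test"
                     testfileset="unittest.fileset"
                     outputdir="build/output/unit"/>
    <run_junit_tests testclasspath="integration.test"
                     testfileset="integrationtest.fileset"
                     outputdir="build/output/integration"/>
</target>
```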

IntelliJ Live Templates for Eclipse

Migrating to Eclipse has been interesting, having used it for a little over a week now. I’m definitely not as productive in it yet; there are features (or plugins) that win some points, but others that lose dismally. I’m still reserving my judgement for a few more weeks, and I probably should wait until I pair with an Eclipse super user. Til then, I’ve missed some of the common IntelliJ Live Templates (equating to Eclipse Templates), so here’s a file for those that want them (for use with Eclipse 3.1.1). Import them under Window -> Preferences -> Java -> Editor -> Templates (no, I’m not kidding). I’m building up a list of key mappings as well, but that will take a little bit longer.

Anything that saves me from typing is awesome. Enjoy!

Issues With Eclipse

I’ve always been an avid IntelliJ user, even when I was forced to use JBuilder and JDeveloper quite some time ago. I’ve been trying to give Eclipse a fair go, and despite having to change the way that I think, the transition hasn’t been too bad. I’m perhaps suffering a little bit more RSI (a consequence of having to press CTRL-SHIFT a lot more combined with my bad habit of always using the left hand side for CTRL and SHIFT), but other than that, most things are about learning different keystrokes.

I’m not a big believer in saying that IntelliJ is better than Eclipse because I haven’t used Eclipse in anger as much, so I’m still open to giving it a fair go. There are a few things I miss that I haven’t been able to find, so if someone knows about them, or can suggest (yet another) plugin to fix them, I’ll be happy to try it out.

My current list includes:

  • ALT-F8, also known as Expression Evaluation (both code fragment mode and expression mode) – Since I’m a lot faster with the keyboard than the mouse, I like to use this feature of IntelliJ rather than setting watches and inspecting values. It’s easier for me to add a break point, debug, fire this up and then evaluate at runtime to my heart’s content. I can easily focus on working out the values of the things I care about, the state of things to come, and what the effects on blocks of code would be if I changed them. Better yet, a lot of the normal IDE features are available in this mode, including code completion, normal import options (ALT-ENTER) and the rest of the niceties IntelliJ offers. The closest thing I have found is “Evaluate Expression” in Eclipse, but it seems to be constrained to static values in code. (The closest thing for Rubyists is the IRB)
  • CTRL-SHIFT-ALT-N, also known as Symbol Search – I find this feature most useful when you are new to a code base (you can easily find which class a symbol belongs to) and when finding a specific test case that is failing. I don’t use it much once I’m familiar with the code base, as there are faster ways of navigating, but it is really useful in the right circumstances. I haven’t been able to find any alternative.
  • CTRL-SHIFT-F10 (inside a test method), also known as the execution of a single test method – When I have a test method fail inside a test class, my first instinct is to run that test in the IDE to see if I can replicate it. IntelliJ makes that easy by understanding the context you are in – if your cursor is inside a test method, it will run that single test instead of the entire suite. The best I have seen so far in Eclipse is to run a test suite, stop it, and then right click and run the single test (no shortcut key!)
  • CTRL-ALT-L (autoformat) – Although there are pretty code formatters for Eclipse, I love the fact that I can use the same shortcut in IntelliJ and get the same result, be it Java, XML, HTML or even Javascript! Better yet, it can be run over an entire directory without having to touch the mouse.