The Goodness in JGoodies is Great: A Brief Tutorial

I’ve been heads down at work lately developing a desktop application, which is a bit refreshing considering I’ve mainly been working on web applications or distributed systems for some time. Desktop applications have their nice sides, but developing them also brings its own fair share of pain (acceptance testing a multithreaded app is not fun!).

The last time I had developed anything remotely serious desktop-wise was back during University days, when I was using Borland’s JBuilder 2 (yes, that means Java 1.1 and AWT) and then, slightly later, Microsoft’s terrible J++ (and its non-Java classes). I have repeatedly found that IDEs are wonderful things for building prototype UIs, but anyone who has had the horror of maintaining any of the automagically created code in the background will agree with me about how terrible it can be.

Not willing to repeat mistakes of the past, on my current project we have had great success leveraging the JGoodies Forms Layout Manager. The makers of this open source component specifically designed it with maintenance in mind, addressing the common tasks of simple forms without being excessively verbose about it. They have an excellent white paper on their site and their APIs are intuitive, but I thought it might be useful to step through a step-by-step guide anyway.

Fixed Velocity is a Fallacy

Velocity in an XP sense is a historical rate of “work completed” per iteration. Measuring and using velocity is powerful because:

  • Planning based on actual velocity figures gives you a more realistic plan than depending on the optimistic/pessimistic estimates of developers;
  • It is generally cheaper to work out how much time work really takes than to spend excessive time attempting to guess how much it will;
  • Dramatic changes in the number give greater visibility to issues a team may be having;
  • The rate indicates if a deliverable is on track, or if scope needs to be re-negotiated.

A fixed velocity is unrealistic because in the real world there is always a force working against it… friction. Project friction takes on a number of forms including:

  • Communication breakdown – Sometimes it’s difficult to get answers from the business, or team members forget to tell each other important things, and time is lost as people discover the issues.
  • Environmental Issues – Development environments are never perfect, and as you depend on more and more external resources, the team faces the additional risk of not being able to complete a story because a database or server is down.
  • Ineffective Iteration Planning – Poor quality story cards slip past the Iteration Manager and require excessive time going back and forth trying to work out what needed doing, or a third-party prerequisite never comes through.
  • Constrained Resources – Depending on key members for particular tasks can be an effective way of ensuring good productivity, but team members can be ill or be required for other things. Bringing on new people should also affect a team’s velocity in some manner.

Keep in mind the following list of things you may experience when you have a fixed velocity:

  • Planning based on an inaccurate number is like setting yourself potentially unrealisable goals instead of the more useful forecasting you can do with a real velocity measurement.
  • You lose major visibility into issues affecting the team, making it more difficult to identify and address them.
  • The importance of maintaining the magic number adds another opposing force typically misaligned with the core business objective. You lose a sustainable pace (read more about the 40 hour week and the need for slack time), suffer a reduction in quality of output leading to additional maintenance or a poor user experience, and face more accounting games as iteration lengths or other numbers are “adjusted” to continue the facade of a fixed velocity.

Like most things in an agile process, velocity is one of those metrics that provides another feedback mechanism to help you plan and identify places where you might benefit from change. Use real world numbers to help you, instead of the artificial ones that handcuff you.

What Sort of Pasta Do You Want?

This post comes out of a discussion I had a while back with some other Thoughtworkers in my home city, Brisbane, and I must credit both Vladimir Sneblic and James Webster for bringing this up. Since they are not frequent bloggers, I thought this little gem was still worth sharing.

Anyone who has ever dealt with software will have heard the term spaghetti code. It’s a great term used to describe software that is difficult to maintain or change because parts of the system are intricately entwined, and a change to one part can adversely affect another. After reflecting on a system developed using techniques more commonly found in agile development teams, such as Test Driven Development (TDD) and dependency injection, they observed that the parts of the system were more loosely coupled and more easily interchangeable, good indicators that it would be a better system to maintain. Such code is better described as ravioli code rather than its more common pasta brethren.

This analogy has really stuck with me since because of the number of parallels it draws. Take one such example – the reason that ravioli is typically more expensive than spaghetti, even though they are both made from the same fundamental ingredients, is that making good ravioli takes a lot more skill than making spaghetti. This idea is, of course, not new, and can be taken to extremes (see Wikipedia’s entry), but I know which one is my favourite.

Optimise Your Build with Faster Running JUnit Tests

Introduction
There are many techniques you can use to improve a build time. Here’s one that can be used when:

  • Tests break the build
  • You only care about failing tests being reported
  • You want to reuse the existing formatting utilities provided by standard Ant optional tasks
  • You don’t want the formatting to be too slow
  • The percentage of tests passing is not important

When running with the optional JUnit task, the normal strategy is to use the standard XML formatter and then style the information into something presentable with the optional JUnitReport task. Unfortunately, both the cost of spitting out XML for every single test suite executing (i.e. usually every single Test class you have) and the cost of applying XSL are typically quite high. In my experience, it’s been several minutes running tests, let alone waiting for the report to be generated. Just try using the plain logger (<formatter type="plain"/>) and see the difference yourself.
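For reference, the conventional setup being described looks roughly like the following minimal sketch (property names such as test.output.dir, test.classes.dir and the test.classpath refid are illustrative assumptions):

<junit fork="yes" forkmode="perBatch" printsummary="yes">
    <classpath refid="test.classpath"/>
    <!-- writes a TEST-*.xml file for every test class that runs, pass or fail -->
    <formatter type="xml"/>
    <batchtest todir="${test.output.dir}">
        <fileset dir="${test.classes.dir}" includes="**/*Test.class"/>
    </batchtest>
</junit>
<!-- applies XSL to every report produced above -->
<junitreport todir="${test.output.dir}">
    <fileset dir="${test.output.dir}" includes="TEST-*.xml"/>
    <report format="frames" todir="${test.output.dir}/html"/>
</junitreport>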

A Better Way
The alternative to the standard XML formatter is the QuietXMLFormatter (download here). The aim of this formatter is to:

  • Only produce output on tests that fail or error;
  • Produce the XML output in the same format as the standard org.apache.tools.ant.taskdefs.optional.junit.XMLJUnitResultFormatter; and
  • Do it without inheritance.

The result is a faster build (the speed-up can be quite significant depending on how many tests you have) that still reports errors and failures in the same way, with only a few tweaks to the build.

Note that the QuietXMLFormatter has only been tested with:

  • Forked (once per batch) TestSuites (JUnitTask produces different behaviours depending on whether you are forked or not)
  • Ant Version 1.6.2 and 1.6.5 (some of these classes change a fair bit)
  • JUnit 3.8.1

How To Add It To Your Build

  1. Download the QuietXMLFormatter distribution jar (quietXmlFormatter-0.1.jar or with source)
  2. Instead of the normal entry that looks like <formatter type="xml"/>, use the following: <formatter classname="com.thekua.ant.optional.junit.QuietXMLFormatter" extension=".xml"/>
  3. If you fork your build, you need to make sure that the jar is in the classpath, or ensure that Ant otherwise makes it available (see the sketch after this list)
  4. Run your build as per normal
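Putting steps 2 and 3 together, the relevant part of the junit task might look something like this sketch (the lib path, the test.classpath refid and the test.* property names are assumptions for illustration):

<junit fork="yes" forkmode="perBatch" printsummary="yes">
    <classpath>
        <path refid="test.classpath"/>
        <!-- the formatter class must be visible to the forked JVM -->
        <pathelement location="lib/quietXmlFormatter-0.1.jar"/>
    </classpath>
    <formatter classname="com.thekua.ant.optional.junit.QuietXMLFormatter" extension=".xml"/>
    <batchtest todir="${test.output.dir}">
        <fileset dir="${test.classes.dir}" includes="**/*Test.class"/>
    </batchtest>
</junit>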

Known Issues
In testing this with a few of the latest versions of Ant, I found a few issues that, although not detrimental, can be slightly annoying. When I get time, I might see how the latest Ant source handles this. The issues currently include:

  • The actual files output by the task are managed outside of a given formatter, and there is an assumption that your formatter will produce some output. This means that if you are not actually outputting anything, you still end up with zero-sized files for each individual test suite that executes.
  • At least when you run in forked mode, the extension for each output file doesn’t seem to get added by the controlling class that manages the OutputStream made available to each JUnitResultFormatter. It would be okay if JUnitReport didn’t die on files without an extension, but I couldn’t work out a way around that. Try the following bit of code to rename the output files:

    <move todir="${test.output.dir}" includeemptydirs="false">
        <fileset dir="${test.output.dir}">
            <exclude name="**/*.xml"/>
        </fileset>
        <mapper type="glob" from="*" to="*.xml"/>
    </move>

  • JUnitReport is fine for styling each of the output test reports, but because an empty file is not a valid XML document, you can end up with a fairly noisy build. The solution to this is, of course, something that deletes empty files from a directory. There is another task in the jar (DeleteEmptyFilesTask) that you can use in your build to do exactly that. The same rules for custom Ant tasks apply when you incorporate this task into your build. Integrate it like this:

    <taskdef classname="com.thekua.ant.taskdefs.DeleteEmptyFilesTask" name="DeleteEmptyFiles" classpath="classes"/>

    with the following code added to the target that runs your JUnitReport:

    <DeleteEmptyFiles directory="${testOutputDir}"/>

As always, feedback, comments and thoughts are appreciated.

Please Stage Your Tests

It’s a bad sign if the only feedback you have for a project is a long build with only one set of tests. Do yourself (and your team) a favour by splitting test execution into logical groupings, with the fastest (or most important) running first. Here’s an Ant macro you can easily reuse that’s optimised to only generate the JUnit HTML report and fail the build if any tests fail.

<macrodef name="run_junit_tests" description="Macro for running junit tests">
    <attribute name="testclasspath" default="unit.test"/>
    <attribute name="testfileset" default="unittest.fileset"/>
    <attribute name="outputdir" default="build/output/test"/>
    <attribute name="basedir" default="."/>

    <sequential>
        <mkdir dir="@{outputdir}"/>
        <junit forkmode="perBatch"
               printsummary="yes"
               haltonfailure="false"
               failureproperty="unit.test.failure"
               haltonerror="false"
               errorproperty="unit.test.error"
               dir="@{basedir}">
            <classpath refid="@{testclasspath}"/>
            <formatter type="xml"/>
            <batchtest fork="yes" todir="@{outputdir}">
                <fileset refid="@{testfileset}"/>
            </batchtest>
        </junit>
        <condition property="tests.failed.or.errored">
            <or>
                <isset property="unit.test.failure"/>
                <isset property="unit.test.error"/>
            </or>
        </condition>
        <property name="_junit_report_dir_" value="@{outputdir}"/>
        <antcall target="-generate_junit_report_and_fail"/>
    </sequential>
</macrodef>

<target name="-generate_junit_report_and_fail" if="tests.failed.or.errored"
        description="Generate the unit test report if tests failed and cause build to stop short">
    <junitreport todir="${_junit_report_dir_}">
        <fileset dir="${_junit_report_dir_}">
            <include name="TEST-*.xml"/>
        </fileset>
        <report format="frames" todir="${_junit_report_dir_}/output"/>
    </junitreport>
    <fail if="tests.failed.or.errored" message="Build failed due to Unit test failures or errors"/>
</target>
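As a usage sketch, the macro might then be invoked from staged targets like the following (the unit.test, unittest.fileset, functional.test and functionaltest.fileset refids are hypothetical names assumed to be defined elsewhere in your build):

<target name="unit-test" description="Fast unit tests run first">
    <run_junit_tests testclasspath="unit.test"
                     testfileset="unittest.fileset"
                     outputdir="build/output/unit"/>
</target>

<target name="functional-test" depends="unit-test"
        description="Slower tests run only if the unit tests pass">
    <run_junit_tests testclasspath="functional.test"
                     testfileset="functionaltest.fileset"
                     outputdir="build/output/functional"/>
</target>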