patkua@work

The intersection of technology and leadership


Agile Coaches Gathering UK 2010

Last year I missed the announcement for the Agile Coaches Gathering UK, organised by Rachel Davies (author of Agile Coaching), and therefore missed the gathering itself. Fortunately I happened to join twitter late last year and saw this year’s announcement shortly after it went out. I bought a ticket as soon as I could. It’s a good thing too, since I think I bought one of the last few tickets for a gathering restricted to sixty people.

This year’s gathering, held at historic Bletchley Park, spanned a day and a half, with Friday evening opening the Open Space format used on Saturday. Bletchley Park proved an awesome venue, and the open space format worked well with a number of spaces located outside. Given the UK actually had a summer this year, it turned out very nicely.

Bletchley Park Mansion

One of my favourite things about this sort of gathering is getting a whole bunch of people who work in many different environments and contexts to share ideas and approaches about what works for them. We had a mixture of experienced coaches and coaches new to the role, yet being there on a Saturday, and a shared passion for the work, brought people together. What follows are my notes from each of the sessions.

Experiences from the coaching discipline (not just agile practices)
My first session of the day explored other coaches’ experiences working with, or learning from, coaches in other disciplines. One person shared their experience working with a swimming coach and how that coach helped them get better at their practice.

This theme, looking at other coaching disciplines to see what they do, ran throughout the day, although I’m cautious of taking the comparison too literally. We discussed during this session that most people being coached (life coaching, sports coaching, etc) tend to seek coaching voluntarily. Working as agile coaches for organisations, not everyone is necessarily there on a voluntary basis.

Another aspect that I think differs is that most coaches aren’t necessarily experts in the field they’re coaching in. For instance, sports coaches are often not people who’ve been the most successful in the world. In my experience, agile coaches are really there to act as shepherds, helping people get the most out of thinking about and working with agile practices. Unlike other coaches, this often requires a higher level of mastery than most other coaches have in their field. I’ve seen coaches act dangerously when they didn’t have enough experience with agile practices.

I still think there is value in looking outside of our discipline for ideas and methods of working, yet we should stay conscious of the appropriate context to use them in (and there is always a context).

Presentation is not facilitation
Tobias Mayer is really well known in the Scrum community, and he proposed a session to rant about the way most training is done. There are plenty of examples where Death by Powerpoint is interpreted as effective training, and whilst I agree with the premise, I’m not sure I agree with the end conclusion. I think presentations have a place, just maybe not in training. I also don’t necessarily think that training is solely about trying to trigger a complete mindshift in people.

When a coach gives up
I met Xavier Quesada Allue at XP2009 and again at Agile2009, so I was interested to hear his premise: is it okay for coaches to give up, and when should they? This was one of those sessions located in the sun, and it brought out some interesting stories about dealing with difficult teams or organisations where it doesn’t make sense to keep trying.

We didn’t attempt to clarify, at the start of this session, what giving up meant to everyone. This probably made it difficult to agree on when or why to give up, although we heard some very interesting stories.

My take on this is that coaches end up needing to prioritise their work, just like everyone else, and thus not everything is going to get taken care of at the same time. Working with people also means you cannot predict how quickly they will move, or how they will react, so there is no way you can always set completely achievable goals (they rely on others, not just yourself, to make them happen). As a result, as a coach, you need to be comfortable with things not always going as planned.

I think coaches also have the benefit of seeing the system that drives a lot of the behaviour of the people on teams; unless something is done about that system, people will continue to behave as they always have. Both seeing the system and changing it take time.

Game sense approach (what I learned coaching rugby)
Another inter-disciplinary coaching session. I attended briefly, although a very loud bus made it difficult to concentrate, and I ended up leaving because I had difficulty participating.

Double loop learning + Defensive routines
If you ever get a chance to listen to Benjamin Mitchell in a safe environment, he’s quite the riot. If anything, it’s perhaps his self-deprecating humour, and his willingness to admit his own faults and look back at his mistakes amidst the jokes, that makes him so warming.

During this session, he talked about a book he’d been reading about how people react, and about the different “models” the author uses to classify people and project their behaviour in different circumstances.

Once again, Benjamin reminded me of some key points I’ve learned over the years like:

  • What you say and what people hear are completely different
  • Avoid positional or solutions-based negotiation. Lay your interests on the table and you’ll often end up in a better position.

I’m not really clear about what double loop learning is, so I’m going to have to add that to my to-do list as well.

Open space schedule

New books

Presented at Universite du SI

It’s almost three weeks since I presented at USI2010 (Universite du SI). Organised wonderfully by the Octo consulting company, the conference’s tag line, “The Annual Meeting of Geeks and Bosses”, captures its essence really well. Mixed conferences and events like this are important to ensure that communities don’t silo themselves in ways that prohibit their growth. The complexity and chaos community clearly demonstrated the value of cross-pollinating ideas between professions with their think tank, the Santa Fe Institute. This event is definitely the seed of something good like that.

To add to the mix, I had the opportunity to present my session on Building the Next Generation of Technical Leaders here. This is the first conference I’ve been to where the majority of the sessions were not in English. It made me think a lot about how memes spread, and how quickly that affects how adaptable a community is.

I think it is wonderful for this conference to invite speakers from non-French-speaking backgrounds, as I hope it helped seed some more ideas into a community where the need to translate text into the local language hinders the uptake of new ideas. I know it is much more difficult for people to work in a language other than their native tongue, and I can only admire those French people willing to strike up a conversation with me during the conference, given that my French extends to what tourist phrase books teach.

The conference was very well run, although I can’t comment much on the presentations since most of them were in French. If you understand French and you find yourself near Paris, I think it’s an excellent one to attend.

Downloading specific files through Maven

In ant, it’s pretty easy to use the get task to download something for you. Apparently it’s not something most maven users do (probably because they end up shelling out to something like the antrun plugin). Even though maven has built-in web communications (this is how it downloads artifacts from the interweb), it wasn’t easy finding out how to download something that isn’t in a maven repository (without having to deploy artefacts into a maven repository, which I understand is the “maven way”).
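For comparison, the ant version is a one-liner with the get task (the URL and paths here are illustrative, matching the example below):

```xml
<!-- ant's built-in get task: fetch a file over HTTP into a local directory -->
<target name="download">
	<get src="http://host/pathToArtifact/artifact.jar"
	     dest="target/downloadedFiles/artifact.jar"
	     verbose="true"/>
</target>
```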

Anyway, after some time, here’s the plugin configuration I used to do the download. Note that we’re using Maven 2.1.0 (and it could have changed in later versions).

The following plugin will download configuration.jar and artifact.jar from the web directory http://host/pathToArtifact and drop them into a folder called target/downloadedFiles. It does this at the start of the maven lifecycle (note the phase it is attached to below).

<plugin>
	<groupId>org.codehaus.mojo</groupId>
	<artifactId>wagon-maven-plugin</artifactId>
	<version>1.0-beta-3</version>
	<executions>
		<execution>
			<phase>validate</phase>
			<goals>
				<goal>download</goal>
			</goals>
			<configuration>
				<url>http://host/pathToArtifact</url>
				<includes>
					configuration.jar, artifact.jar
				</includes>
				<toDir>target/downloadedFiles</toDir>
			</configuration>
		</execution>
	</executions>
</plugin>

To trigger the download, run any phase from validate onwards (e.g. mvn validate). One limitation is that it won’t fail the build if it can only download some of the artifacts and not others.

Executable War in Maven

One of the many issues I had (and have) with Maven is finding out how to use an appropriate plugin. The site documentation is normally pretty obscure, and examples are sparse. One of the activities we needed to do recently was to turn our war artifact into something that would run standalone. I’d normally use something like jetty for this, but I didn’t find a way to easily bundle jetty and refer to the war within the same classpath (and still have things work).

Another tool I stumbled across was winstone (Hudson uses this beast). Wrapping it up seemed easy enough. Note that the configuration below starts a war without dynamic JSP compilation. If you do need JSP compilation, you can add a useJasper=true configuration and the appropriate maven dependencies for jasper to ensure they’re available.

The result of this plugin is an executable jar that you simply start up.

Need to add these dependencies:

Given that your project uses war packaging as well, configure the plugin like this:

<plugin>
	<groupId>net.sf.alchim</groupId>
	<artifactId>winstone-maven-plugin</artifactId>
	<executions>
		<execution>
			<goals>
				<goal>embed</goal>
			</goals>
			<phase>package</phase>
		</execution>
	</executions>
	<configuration>
		<filename>executableWebApplication.jar</filename>
	</configuration>
</plugin>

You will find this artifact built in your standard target directory; start it with java -jar target/executableWebApplication.jar.

Who’s calling you?

I’ve been doing some performance profiling, and in trying to diagnose a fix, we found one particular method was being called constantly. Searching for usages (static analysis) tells me who could possibly call the method we were inspecting, but we were interested in the runtime invocations. Since this was quite deep in the code, I used the power of dynamic proxies to find out.

I’ve rebuilt the code here:

Source:

package com.thekua.examples;

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class TraceBackProxy implements InvocationHandler {
    public static interface CallingMethodListener {
        void notify(String method);
    }

    private final Object wrapped;
    private final CallingMethodListener listener;

    private TraceBackProxy(Object wrapped, CallingMethodListener context) {
        this.wrapped = wrapped;
        this.listener = context;
    }

    public static Object wrap(Object target, CallingMethodListener context) {
        Class<?> targetClass = target.getClass();
        return Proxy.newProxyInstance(targetClass.getClassLoader(),
                targetClass.getInterfaces(), new TraceBackProxy(target, context));
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] arguments) throws Throwable {
        String callingMethod = findCallingMethod(method);
        listener.notify(callingMethod);
        return method.invoke(wrapped, arguments);
    }

    private String findCallingMethod(Method method) {
        // capture the current stack; no need to actually throw anything
        StackTraceElement[] elements = new Exception().getStackTrace();
        int callingMethodIndex = findIndexOfMethod(elements, method) + 1; // caller is next one down in stack
        return elements[callingMethodIndex].getMethodName();
    }

    private int findIndexOfMethod(StackTraceElement[] elements, Method method) {
        for (int i = 0; i < elements.length; i++) {
            StackTraceElement current = elements[i];
            // does not cope with overloaded or duplicate method names
            if (current.getMethodName().equals(method.getName())) {
                return i;
            }
        }
        throw new IllegalStateException("Something went wrong and couldn't find method in stacktrace");
    }
}

Test:

package com.thekua.examples;

import org.junit.Test;

import static org.hamcrest.core.Is.is;
import static org.hamcrest.core.IsEqual.equalTo;
import static org.junit.Assert.assertThat;

public class TraceBackProxyTest {

    public static interface SomeRole {
        void doStuff();
    }

    public static class TestSubject implements SomeRole {
        public boolean called;

        @Override
        public void doStuff() {
            called = true;
        }
    }

    @Test
    public void shouldStillDelegate() {
        TestSubject target = new TestSubject();
        SomeRole action = (SomeRole)TraceBackProxy.wrap(target, new TestOnlyListener());

        action.doStuff();

        assertThat(target.called, is(true));
    }

    public static class TestOnlyListener implements TraceBackProxy.CallingMethodListener {
        String lastCalledMethod;

        @Override
        public void notify(String method) {
            lastCalledMethod = method;
        }
    }

    @Test
    public void shouldFindCallingMethod() {
        TestOnlyListener listener = new TestOnlyListener();
        SomeRole action = (SomeRole) TraceBackProxy.wrap(new TestSubject(), listener);

        action.doStuff();

        assertThat(listener.lastCalledMethod, equalTo("shouldFindCallingMethod"));
    }
}

Note that your mileage may vary, since it probably won’t work when you have duplicate method names across classes, or overloaded methods on the same class. It proved useful for me and I hope it helps you.

Refactoring: Convert static into instance

You want to reuse some functionality, but you need assurance that doing so won’t have side effects on the other places already using it.

Make the implicit static state explicit by introducing a single instance that encapsulates it, and delegate the static methods to that instance. Then use the instance.

Imagine you had this sort of code (below), with dozens of uses throughout a codebase. Static imports make it easy for people to simply increment a count, but when you use the Counter you have no assurance that nothing else is using it at the same time.

public class Counter {
  static int totalCount;

  public static void count() {
    totalCount++;
  }
}

The following transformations will help you keep existing clients happy:

  • Introduce new method (instance level)
  • Introduce global instance
  • Delegate static method to new method (instance level)
  • Turn static state into instance state

The final result is below (note that this code is not thread-safe and not designed for concurrent programs, but is there to demonstrate the result of the refactoring steps):

public class Counter {
   private static final Counter INSTANCE = new Counter();

   private int totalCount;

   public static void count() {
     INSTANCE.recordCount();
   }

   public void recordCount() {
     totalCount++;
   }
}

Our real example was a bit more complex, but this refactoring helped us move in the right direction of using instances where possible, instead of sharing global state.
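As the note above says, this code isn’t thread-safe. If you did need the counter in a concurrent program, one option is to back it with an AtomicInteger — a minimal sketch, not part of the original refactoring (the total() and instance() accessors are my additions for illustration):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class Counter {
    private static final Counter INSTANCE = new Counter();

    // AtomicInteger makes concurrent increments safe without locking
    private final AtomicInteger totalCount = new AtomicInteger();

    public static void count() {
        INSTANCE.recordCount();
    }

    public void recordCount() {
        totalCount.incrementAndGet();
    }

    public int total() {
        return totalCount.get();
    }

    public static Counter instance() {
        return INSTANCE;
    }
}
```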

Building the Testing Pipeline

The point of this workshop was to bring the ideas of Test Strategy to the forefront for developers writing automated tests, an essential and often taken-for-granted agile practice. By looking at a small set of categories of automated tests (acceptance, integration, unit), and carefully at the tradeoffs of each, people should be able to consider more explicitly, in their own project context, whether or not they have the right ratio of tests (i.e. too many acceptance tests or too many unit tests), with the idea that all of these should be giving you fast feedback and confidence that you’re shipping the right thing. All the slides are available on Slideshare here. I ran this workshop at both ACCU2010 and XP2010.

The first part of the workshop was brainstorming a number of types of automated tests, before brainstorming the benefits and costs associated with each category. Note that the benefits and costs are relative to your environment. Some (and many more) of the following factors are likely to affect this:

  • The nature of the application being built (server side, web application, client side, etc)
  • The language, libraries and toolkits available
  • The length of the project (e.g. a large legacy codebase versus a fresh, new application)
  • The size of the project/team

The following are sets of tables (in no particular order) transcribing all the different notes.

Acceptance Tests

Benefits:
  • Can be shown to clients
  • User value ensured
  • Closer to end requirements
  • Test through layers
  • Easy on legacy code
  • Can be written in some tool (by non developers)
  • Documentation
  • Feeling of completion “done”
  • Easier to introduce around legacy code
  • Refactoring friendly
  • Documents expected behaviour
  • Value centric
  • Drive design
  • Gains confidence that the application does what the user wants
  • Drives behaviour of the system outside in
  • Exercise more code/test
  • Safety harness

Costs:
  • Slow
  • Less information available to learn how to do it well
  • Long execution time
  • High learning curve
  • Pin points source of bug
  • Hard to write (well)
  • Data dependent
  • Requires entire architecture stack/environment
  • Environment cost
  • Continuous updating as features emerge
  • Can get very verbose
  • Expectations must be much clearer
  • Costly hardware
  • Availability of external services
  • Test many different configurations
  • Makes changes to “flow” hard
  • Requires infrastructure
  • Tough to find
  • Brittle and can break for the “wrong” reason
  • Brittle if through UI

Integration Tests

Benefits:
  • Force communication with 3rd party
  • Testing external behaviour
  • Less surprises at the end (no big bang integration)
  • Requires no GUI
  • Easier to wrap around legacy code
  • Verifies assumptions about external systems
  • Defines external services
  • Flexible Lego tests (pick and choose which ones to run)
  • Platform flexibility
  • Covers configuration

Costs:
  • Hard to set up
  • Require knowledge of context and multiple layers
  • Discourages acceptance tests
  • Slow
  • Is hard to keep on a good level
  • Environment costs
  • Ties implementation to “technical solution”
  • Authoring to get real data
  • Maintenance
  • Hard to sell in
  • Complicated/impossible
  • No IDE
  • Heavy investment into infrastructure
  • Pinpointing source of bug
  • Hard to configure
  • Detective work

Unit Tests

Benefits:
  • Easy to write
  • Proves that it works
  • Low number of dependencies
  • Enable Refactoring
  • Becomes system doc(umentation)
  • It’s fast feedback
  • Drives simplicity in solutions (if test driven)
  • Encourages less coupling, testable design
  • (If pairing or code sharing) -> risk reduction “Many eyes on code”
  • Helps you take action
  • Easy to configure
  • Allows for testing boundary conditions
  • Pinpoints the problem
  • Fast to make
  • Low individual cost
  • Low requirement of domain knowledge

Costs:
  • Unit tests often on the same level as refactoring -> troubles doing refactorings
  • Highly coupled to code
  • Might validate unused code
  • Heavy upfront investment in coding time
  • Hard to maintain
  • “Unit” results in tests on too low a level
  • Inventory: lines of code
  • Hard to understand for others than developers
  • Not effective until critical mass reached (ex: x% code covered)
  • Difficult in legacy code
  • Difficult: developer aversion to doing it
  • False sense of security
  • Steep learning curve for developers
  • Tricky mocks
  • Redundancy

Tool Brainstorm

The following are all the tools brainstormed by the various groups, grouped according to their recommended classifications (these don’t reflect my views on whether or not they are correctly classified). Leave a comment if you feel something really should be moved!

Tools for Acceptance Tests

  • Cucumber (ruby)
  • Robot (.net)
  • Watin/Watir (.net/ruby)
  • Abbot (java)
  • Parasoft (java)
  • JBehave (Java/.net)
  • Fitnesse/FIT (java/.net)
  • Webdriver (java, .net, ruby)
  • Selenium
  • Cuke4Nuke (.net)
  • Cuke4Duke (java)
  • Bumblebee (java)
  • HTMLUnit
  • HTTPUnit

Tools for Integration Tests

  • SOAP UI (webservices SOAP based)
  • DBUnit (java/.net)
  • JMeter
  • Jailer

Tools for Unit Tests

  • MockObjects
  • RSpec
  • PHPunit
  • JSTestDriver
  • JSpec
  • JSMockito
  • Rhinomock
  • NUnit
  • NCover
  • Moq
  • Typemock
  • TestNG
  • JMock/Easymock
  • Mockito
  • JUnit
  • Clover
  • EMMA
  • Cobertura
  • FEST-assert
  • RMock
  • UISpec4J
  • Erlang
  • Checkstyle

Other tools not classified

  • Lab Manager (TFS)
  • HP Quality Centre
  • Test manager (TFS Zero)
  • VSTS (.net)
  • JTest (java)
  • Lando webtest
  • Powermock (java)
  • Grinder (Java)
  • AB (python)

XP2010 Review

This year’s XP2010 conference was held in the great northern parts of Norway, in Trondheim, only about 200km south of where I attended my first XP conference (XP2006) several years ago. I always find this series of conferences interesting, partly because of the blend between the academic side (research papers and experience reports) and the industrial side, as well as the infusion of the hosting country’s culture.

This year, the conference saw 425 attendees (50% of them Norwegian) across 8 different tracks and covering four days of a wide variety of conference mixtures including Lightning Talks, Panels, Workshops, Tutorials, and Open Space events.

Scott Page

The first of the two keynotes, given by Scott Page — a lecturer and researcher on complexity in different environments — was my favourite. He talked about the importance of diversity and its impact on building better solutions, as well as a provable formula for calculating the wisdom of crowds. Hopefully they’ll be putting his slides up somewhere as a result.

The second keynote, given by David Anderson, did a great job of condensing a long talk into a thought-provoking discussion of the work he’s been doing building a Kanban movement in the places he’s worked, as well as a discussion of his recently released book on Kanban. He had some nice visualisations on the slides and even finished slightly early, with plenty of time for questions.

Overall the conference seemed to have a nice blend of both process and technical topics. There were a few complaints about discussions focused specifically on “XP” itself, although I found plenty of the discussions about specific practices quite useful within any agile context.

I ended up much busier than I thought I would be, playing host for two of the session talks, helping out with a Pecha Kucha session (more on that later) and running a workshop on my own.

Entertainment

Venue, organisation and efficiency wise, the organisers need to be congratulated on a job that will be hard to surpass. Everything seemed to run smoothly including an amazing and difficult-to-forget Conference Banquet involving interesting local foods (smoked reindeer heart, moose and local seafood), a pair of comedic jazz academics, Keep of Kalessin playing some Extreme Metal, all inside Trondheim’s amazing Student Society venue.

Extreme Metal

The strangest part of the conference for me was participating in a “What’s in my agile suitcase?” Pecha Kucha run by Martin Heider and Bernd Schiffer. You can read more about the format here. It was great to be one of the prepared Pecha Kuchas alongside Rachel Davies, Mary Poppendieck, Joshua Kerievsky and Jeff Patton. I found the diversity of styles and approaches fascinating, as well as the specific things that people “packed” in their suitcases. It was the first time in this format for all the speakers, and we found it thrilling thanks to the short time and the fact that you don’t control the timing of the slides. If I were to do this again, I’d definitely try to focus on a smaller range of topics (or just one).

My week (and the conference) ended with my workshop, “Building the Testing Pipeline.” I’d specifically targeted people who’d been working with automated tests for some time, and I ended up with a surprisingly full room. I’d run this previously at ACCU with a slightly different audience. We had some great brainstorming sessions (I’ll be putting the results up soon) and hopefully more people came away thinking consciously about the balance of tests they have, and whether it maximises their confidence while minimising their feedback cycle.

I’m also very proud that the experience report that I shepherded won best paper (experience report), and I finally got to meet the author, Jørn Ola Birkeland of Bekk Consulting, in person to congratulate him on the day.

Thanks to all the organisers, participants, and passionate people that made the conference such fun and such a success. It was wonderful to reconnect with old friends and make many new ones.

Maven. What’s it good for?

Just like a certain number of speakers, Maven seems to divide developers into two camps:

  1. Those who like it; and
  2. Those who loathe it

I try to be objective as much as possible about tools, so let’s explore what Maven is and what it’s good for.

What’s Maven all about?
Maven has two major responsibilities (with a handful of minor ones). These two are:

  1. Library dependency management – Anyone who’s worked on a java project will know the large number of libraries you can end up with. Part of this is the JDK’s inability to solve some of the simplest programming problems (solved instead with Apache Commons, various logging frameworks, etc). Maven was one of the first successful attempts at fixing this, letting applications declare which versions of which libraries they’d like, with transitive dependencies resolved automatically.
  2. Build lifecycle tool – Maven provides a standard set of fixed build lifecycles, with hooks for each step in the lifecycle should you need to extend it.
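For example, declaring a dependency takes only a few lines in the pom.xml — a sketch of the mechanism, here pulling in Apache Commons Lang:

```xml
<!-- declare what you depend on; Maven resolves and downloads it,
     along with its transitive dependencies -->
<dependencies>
	<dependency>
		<groupId>commons-lang</groupId>
		<artifactId>commons-lang</artifactId>
		<version>2.5</version>
	</dependency>
</dependencies>
```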

Why was Maven developed?
To better understand Maven’s design, let’s look at the context developers built it in. Maven sprang out of the growing complexities of the Java Apache community. New Apache projects started almost all the time, and there was no “standard” way of building or creating them. More than that, many Apache projects built on the work of other projects, and managing the versioning of libraries started to become an onerous task.

Each Apache project characteristically depended on several other projects, yet generated its own single distributable (i.e. jar, war, etc).

Maven helped simplify all this by assuming a default project structure and a default build lifecycle common to many Apache projects.

Where is Maven most useful?
If you’re on a project that is similar to a single Apache project (a single distributable with a single set of unit tests) then you’ve hit Maven’s sweet spot.

Where does Maven start to break?
Here are some of the situations I’ve faced where Maven is not the best tool for the job:

  • When one team produces a number of artefacts, you find yourself needing to work around the Maven structure, often resorting to “POM inheritance”. You have to do ugly things like running maven installs beforehand.
  • If you need to do anything more complex than the standard build, you start to obfuscate the build process. Strange things, like different steps running under different profiles (depending on whether or not the build is running on a build server), can be very confusing for lots of people.
  • Maven adds more complexity to your development environment, as you rely on either a public artefact server or need to set up an internal one. Whilst this seems simple, it is yet another environmental factor that affects how you develop, and its external nature adds variability that often causes false negative build failures.

What other situations would you recommend using Maven in, or not?

ThoughtWorks Global Culture

I’ve been fortunate enough to spend my last week (most of it holidays) in Australia at ThoughtWorks’ Melbourne office. The last time that I visited here was when I first joined over six years ago.

One of the things I love about visiting our other offices is seeing how consistently strong the company culture is. We have so many unique individuals, yet there is some common bond that easily makes us identify with each other. I feel it’s part of the passion we all share, and our ability to have fun all the time. In fact, I haven’t laughed as hard as I did this last week for some time.

I’m proud to be part of this family and will definitely miss all my Australian colleagues.


© 2024 patkua@work
