patkua@work

The intersection of technology and leadership


Controlling Time: How to deal with infinity

It’s been a while since I’ve had to slice and dice legacy code. It reminds me how easily non-test-driven code slips into the abyss of tight coupling, and how the effort of retrofitting (unit) tests grows with time. Of course, I don’t believe TDD should be done all the time, but for most production code I do. In this entry, I’m going to use the Sprout Inner Class technique to demonstrate how to start making something that would have been untestable much more manageable.

The scenario
What we have below is the entry point into our program (i.e. a class with a main method). When I first encountered this class, it had too many responsibilities, including, but not limited to: parsing command line options, some application logic, handling terminal signals, and wiring up all the objects for the program to start. Combined with an infinite work loop, you can imagine what a nightmare it was to unit test. Here’s the modified class, called RunningProgram, showing the parts of the class relevant to this walkthrough.

Our task: to reduce the number of responsibilities and to test the body of the start method (we couldn’t simply apply Extract Method immediately because of the number of fields it modified).

package com.thekua.examples;

import sun.misc.Signal;
import sun.misc.SignalHandler;

public class RunningProgram implements SignalHandler {
	private boolean running = true;
	// some other fields
	
	public void start() {
		running = true;
		while (running) {
			// do some work (that we want to unit test)
			// it changes about ten fields depending on what condition
			// gets executed 
		}
		// Finished doing some work
	}
	
	// it also had plenty of other methods
	
	public static void main(String [] args) {
		RunningProgram program = new RunningProgram();
		Signal.handle(new Signal("TERM"), program);
		Signal.handle(new Signal("INT"), program);
		program.start();
	}

	public void handle(Signal signal) {
	    // Program terminated by a signal
		running = false;		
	}
}

Step 1: Understand the dependencies that make this class difficult to test
To get some level of functional testing around this program, we had a number of timed triggers watching for state modifications within a given timeout. Our problem was infinity itself, or rather, not knowing when to interrupt infinity so we could observe that the body of the loop had finished its work once, or twice, without interrupting it too early. We could have introduced lifecycle listeners, but we hesitated at that option because we thought it would complicate the code too much.

We noticed the use of the running flag: it was the condition for whether or not we continued looping, and it was also the trigger for a graceful shutdown using the sun.misc.Signal class (notice that RunningProgram implements the SignalHandler interface as a result). We thought this running behaviour of the RunningProgram could be extracted into a separate class.

Step 2: Encapsulate, encapsulate, encapsulate
Our first task was to remove direct access to the running flag, since the class modified it in two places. Sprout an inner class, simply delegate to it, and we end up with a class that looks like this:

package com.thekua.examples;

import sun.misc.Signal;
import sun.misc.SignalHandler;

public class RunningProgram implements SignalHandler {
    public static class RunningCondition {
	    private boolean running = true;
		
		boolean shouldContinue() {
		    return running;
		}
		
		public void stop() {
		    running = false;
		}
    }
	
	private RunningCondition condition = new RunningCondition();
	
	// some other fields
	
	public void start() {
		while (condition.shouldContinue()) {
			// do some work (that we want to unit test)
			// it changes about ten fields depending on what condition
			// gets executed 
		}
		// Finished doing some work
	}
	
	// it also had plenty of other methods
	
	public static void main(String [] args) {
		RunningProgram program = new RunningProgram();
		Signal.handle(new Signal("TERM"), program);
		Signal.handle(new Signal("INT"), program);
		program.start();
	}

	public void handle(Signal signal) {
	    // Program terminated by a signal
		condition.stop();
	}
}

Moving the running flag to a separate class gives us a number of benefits. It hides the implementation of how we handle running, and it puts us on the road to clearly teasing apart the overloaded responsibilities.
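
One immediate payoff (not shown in the original walkthrough, so treat this as a sketch with a hypothetical test name) is that the sprouted class can now be unit tested in isolation:

package com.thekua.examples;

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class RunningConditionTest {
	@Test
	public void shouldStopContinuingOnceStopped() {
		RunningProgram.RunningCondition condition = new RunningProgram.RunningCondition();

		assertTrue(condition.shouldContinue());

		condition.stop();

		assertFalse(condition.shouldContinue());
	}
}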

Step 3: Consolidate related behaviour
It bothered me that the main program had the shutdown hook. That behaviour definitely felt strongly related to the RunningCondition, so I felt it was a good thing to move it to that class. We now have something that looks like:

package com.thekua.examples;

import sun.misc.Signal;
import sun.misc.SignalHandler;

public class RunningProgram {
    public static class RunningCondition implements SignalHandler {
	    private boolean running = true;
		
		boolean shouldContinue() {
		    return running;
		}
		
		public void handle(Signal signal) {
	        // Program terminated by a signal
			running = false;
	    }
    }
	
	private RunningCondition condition = new RunningCondition();
	
	// some other fields
	
	public void start() {
		while (condition.shouldContinue()) {
			// do some work (that we want to unit test)
			// it changes about ten fields depending on what condition
			// gets executed 
		}
		// Finished doing some work
	}
	
	// it also had plenty of other methods
	
	public static void main(String [] args) {
		RunningProgram program = new RunningProgram();
		Signal.handle(new Signal("TERM"), program.condition);
		Signal.handle(new Signal("INT"), program.condition);
		program.start();
	}

}

Note that it is now the RunningCondition that implements the SignalHandler interface (which is what we now register with Signal.handle in the main method).

Step 4: Remove the dependency chain
The difficulty with this class still exists: we cannot substitute the RunningCondition of this program, since it creates one for itself. Since I prefer constructor-based dependency injection, I’m going to apply Introduce Parameter to Constructor, moving the field’s initialisation into the constructor. Here is what the class looks like now:

package com.thekua.examples;

import sun.misc.Signal;
import sun.misc.SignalHandler;

public class RunningProgram {
    public static class RunningCondition implements SignalHandler {
	    private boolean running = true;
		
		boolean shouldContinue() {
		    return running;
		}
	    
		public void handle(Signal signal) {
	        // Program terminated by a signal
			running = false;
	    }
    }
	
	private final RunningCondition condition;
	
	public RunningProgram(RunningCondition condition) {
	    this.condition = condition;
	}
	
	// some other fields
	
	public void start() {
		while (condition.shouldContinue()) {
			// do some work (that we want to unit test)
			// it changes about ten fields depending on what condition
			// gets executed 
		}
		// Finished doing some work
	}
	
	// it also had plenty of other methods
	
	public static void main(String [] args) {
		RunningProgram program = new RunningProgram(new RunningCondition());
		Signal.handle(new Signal("TERM"), program.condition);
		Signal.handle(new Signal("INT"), program.condition);
		program.start();
	}

}

Note that we are still bound to the specific RunningCondition implementation, so it’s time to apply Extract Interface and work out what role the RunningCondition plays. We chose RunStrategy as the name for that role and, to keep the names aligned, we renamed RunningCondition to RunLikeADaemonStrategy. Our code now looks like this:

package com.thekua.examples;

import sun.misc.Signal;
import sun.misc.SignalHandler;

public class RunningProgram {
    public static interface RunStrategy {
	    boolean shouldContinue();
	}
	
    public static class RunLikeADaemonStrategy implements SignalHandler, RunStrategy {
	    private boolean running = true;
		
		public boolean shouldContinue() {
		    return running;
		}
		
		public void handle(Signal signal) {
	        // Program terminated by a signal
		    running = false;
	    }
    }
	
	private final RunStrategy runStrategy;
	
	public RunningProgram(RunStrategy runStrategy) {
	    this.runStrategy = runStrategy;
	}
	
	// some other fields
	
	public void start() {
		while (runStrategy.shouldContinue()) {
			// do some work (that we want to unit test)
			// it changes about ten fields depending on what condition
			// gets executed 
		}
		// Finished doing some work
	}
	
	// it also had plenty of other methods
	
	public static void main(String [] args) {
		RunLikeADaemonStrategy strategy = new RunLikeADaemonStrategy();
		RunningProgram program = new RunningProgram(strategy);
		Signal.handle(new Signal("TERM"), strategy);
		Signal.handle(new Signal("INT"), strategy);
		program.start();
	}
}

The best thing is that our RunningProgram no longer needs to know what happens if a terminate or interrupt signal is sent to the program. With just a dependency on the RunStrategy, we can now inject a fixed run strategy for tests, which we ended up calling RunNumberOfTimesStrategy. We also promoted the specific RunLikeADaemonStrategy to a full class (no longer an inner class).
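
The post doesn’t show the test-side strategy, but a minimal sketch of what a RunNumberOfTimesStrategy could look like (the exact implementation may well have differed) is:

package com.thekua.examples;

// Test-only strategy: lets the loop inside start() run a fixed number of times and then stops it,
// so a unit test can assert on the fields the loop body modifies without waiting on a signal.
public class RunNumberOfTimesStrategy implements RunningProgram.RunStrategy {
	private int remainingRuns;

	public RunNumberOfTimesStrategy(int numberOfTimes) {
		this.remainingRuns = numberOfTimes;
	}

	public boolean shouldContinue() {
		return remainingRuns-- > 0;
	}
}

A test can then construct new RunningProgram(new RunNumberOfTimesStrategy(2)), call start(), and assert on the resulting state after exactly two iterations.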

Thanks for getting this far! Please leave a comment if you thought this was useful.

Controlling Time

Disclaimer: This technique does not work outside of programming. Do not try this on your neighbours, kids or pets…

What’s wrong with time dependent tests?
It’s easy to write time-dependent tests that are flaky and intermittent. Worse yet, the quick fix is often to add pauses to the tests, which makes them drag on longer and longer. Alternatively, unneeded complexity is added to do smart things like polling for the test to time out.

What can we do about it?
I’m all about solutions, so I’m about to outline the secret to controlling time (at least in unit tests). Here’s a situation you might recognise. Or not. Sorry to anyone vegetarian reading this entry.

We have a class called Beef that knows when it’s past its prime using the Joda Time libraries.

package com.thekua.examples;

import org.joda.time.DateTime;

public class Beef {
	private final DateTime expiryDate;

	public Beef(DateTime expiryDate) {
		this.expiryDate = expiryDate;
	}
	
	public boolean isPastItsPrime() {
		DateTime now = new DateTime(); // Notice this line?
		return now.isAfter(expiryDate); 
	}
}

Surprise, surprise. We also have a unit test for it:

package com.thekua.examples;

import static org.junit.Assert.assertTrue;
import org.joda.time.DateTime;
import org.junit.Test;

public class BeefTest {
	@Test
	public void shouldBePastItsPrimeWhenExpiryDateIsPast() throws Exception {
		int timeToPassForExpiry = 100;
		
		Beef beef = new Beef(new DateTime().plus(timeToPassForExpiry));
		
		Thread.sleep(timeToPassForExpiry * 2); // Sleep? Bleh...
		
		assertTrue(beef.isPastItsPrime());
	}
}

Step 1: Contain time (in an object of course)
The first step is to contain all use of time concepts behind an object. Don’t even try to call this class a TimeProvider. It’s a Clock, okay? (I’m sure I used to call it a TimeProvider in the past as well, so don’t worry!) The responsibility of the Clock is to tell us the time. Here’s what it looks like:

package com.thekua.examples;

import org.joda.time.DateTime;

public interface Clock {
	DateTime now();
}

In order to keep the system working as normal, we are going to introduce a SystemClock. I sometimes call this a RealClock. It looks a bit like this:

package com.thekua.examples;

import org.joda.time.DateTime;

public class SystemClock implements Clock {
	public DateTime now() {
		return new DateTime();
	}
}

We are now going to let our Beef depend on the Clock concept. It should look like this:

package com.thekua.examples;

import org.joda.time.DateTime;

public class Beef {
	private final DateTime expiryDate;
	private final Clock clock;

	public Beef(DateTime expiryDate, Clock clock) {
		this.expiryDate = expiryDate;
		this.clock = clock;
	}
	
	public boolean isPastItsPrime() {
		DateTime now = clock.now();
		return now.isAfter(expiryDate); 
	}
}

If you wanted to do it step by step, the refactoring would look like this (a sketch of the first intermediate state follows the list):

  1. Replace new DateTime() with new SystemClock().now()
  2. Extract the new SystemClock() instance into a field
  3. Initialise that field from a constructor parameter
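
For example, after step 1 (an intermediate sketch only; it is not part of the final code above), isPastItsPrime() would briefly look like this:

	public boolean isPastItsPrime() {
		DateTime now = new SystemClock().now(); // behaviour unchanged, but time now sits behind the Clock abstraction
		return now.isAfter(expiryDate);
	}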

We’d use the SystemClock in both the code that creates the Beef and in our test. Our test should look like…

package com.thekua.examples;

import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class BeefTest {
	@Test
	public void shouldBePastItsPrimeWhenExpiryDateIsPast() throws Exception {
		int timeToPassForExpiry = 100;
		SystemClock clock = new SystemClock();
		
		Beef beef = new Beef(clock.now().plus(timeToPassForExpiry), clock);
		
		Thread.sleep(timeToPassForExpiry * 2);
		
		assertTrue(beef.isPastItsPrime());
	}
}

Step 2: Change the flow of time in tests
Now that we have the production code dependent on an abstract notion of time, and our test still working, we want to substitute the SystemClock with another object that allows us to shift time in tests. I’m going to call it the ControlledClock. Its responsibility is to control the flow of time. For the purposes of this example, we’re only going to allow time to flow forward (and ensure tests use relative times instead of absolute ones). You might vary this if you needed very precise dates and times. Note the new method forwardTimeInMillis.

package com.thekua.examples;

import org.joda.time.DateTime;

public class ControlledClock implements Clock {
	private DateTime now = new DateTime();

	public DateTime now() {
		return now;
	}	
	
	public void forwardTimeInMillis(long milliseconds) {
		now = now.plus(milliseconds);
	}
}

Now we can use this new class in our test, replacing the way we previously forwarded time (the Thread.sleep) with the ControlledClock. Here’s what our final test looks like now:

package com.thekua.examples;

import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class BeefTest {
	@Test
	public void shouldBePastItsPrimeWhenExpiryDateIsPast() throws Exception {
		int timeToPassForExpiry = 100;
		ControlledClock clock = new ControlledClock();
		
		Beef beef = new Beef(clock.now().plus(timeToPassForExpiry), clock);
		
		clock.forwardTimeInMillis(timeToPassForExpiry * 2);
		
		assertTrue(beef.isPastItsPrime());
	}
}

We can make this test even more precise by forwarding time just one millisecond past the expiry, rather than doubling it.
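
For example, the forwarding line in the test above could become (a small sketch of that change):

		clock.forwardTimeInMillis(timeToPassForExpiry + 1); // just past the expiry date

Since isPastItsPrime() uses isAfter, one millisecond past the expiry is the earliest point at which it returns true, and the test now documents that boundary.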

Step 3: Save time (and get some real sleep)
Although this is a pretty trivial example of a single time-dependent test, it shouldn’t take too much effort to introduce this concept to any class that depends on time. Not only will you save yourself heartache with flaky, broken tests, but you will also save the waiting time you’d otherwise need to introduce, leading to faster test execution and that wonderful thing, fast feedback.

Enjoy! Let me know what you thought of this by leaving a comment.

Practicing Root Cause Analysis: An example for programmers

Adding complexity in code is easy. Removing it often takes a little bit more effort. Using root cause analysis, we can sometimes remove this unneeded complexity. Here’s an example my pair and I went through today.

Me: Why is this test breaking?
Pair: Because we have to specify a different value than that one.

Me: Why do we have to specify a different value than that one?
Pair: Because we have different values in this configuration file for tests.

Me: Why do we have different values in the configuration file for tests?
Pair: So that we can bind services to a different name?

Me: Why do we want to bind services to a different name? (since we have a contract with our clients that never changes)
Pair: I’m not so sure… Let’s ask our BA.

Me (to the BA): Why do we want to bind services to a different name?
BA: I don’t know. (Thinks about it for a while…) I don’t think we would.

Now knowing that we wouldn’t ever deploy our application with a different configuration, we got to delete a whole lot of code, and an unnecessarily complicated configuration file. Awesome!

Singleton vs Single Instance

I’ve been fighting the fight against the Singleton pattern on a new code base, and it got me thinking about how easy it is for developers to equate Singleton with Single Instance when, in my experience, more often than not, all they really want is a Single Instance. Of course, you may have proper uses for the Singleton pattern (all listed in the Gang of Four book, I’m sure), yet there are many reasons why you should default to a Single Instance.

Singletons often tend to be a pain because their “singleness” makes testing a pain, with shared state persisting across different tests and additional code needed to “reset” it. Worse still, their global state makes them just a little bit too easy to stick into the middle of a block of code, without regard for whether or not they belong there. It often takes a while to unwind the dependencies and identify appropriate responsibilities. Their global nature also makes it all too tempting for them to evolve into a bit of a god object.
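
To make the distinction concrete, here is a rough sketch (the class names are made up purely for illustration) contrasting the two approaches:

package com.thekua.examples;

public class SingletonVersusSingleInstance {
	// Singleton: the class enforces its own "singleness" through global static state,
	// which is exactly what makes it hard to isolate and reset in tests.
	static class ConfigurationSingleton {
		private static final ConfigurationSingleton INSTANCE = new ConfigurationSingleton();

		private ConfigurationSingleton() {
		}

		static ConfigurationSingleton getInstance() {
			return INSTANCE;
		}
	}

	// Single Instance: an ordinary class; the wiring code chooses to create just one
	// and passes it to whoever needs it, so a test can create its own fresh instance.
	static class Configuration {
	}

	static class Service {
		private final Configuration configuration;

		Service(Configuration configuration) {
			this.configuration = configuration;
		}
	}

	public static void main(String[] args) {
		Configuration theOnlyOne = new Configuration();
		Service service = new Service(theOnlyOne); // a single instance by convention, not by enforcement
	}
}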

When Retrospectives Go Wrong: Too Many Goals

Teams run retrospectives for many different reasons. I’ve found that trying to meet too many goals in a heartbeat retrospective severely limits their effectiveness. When I prepare for retrospectives, I normally ask the sponsor (the person who asked me to facilitate) what they want to achieve. Sometimes they don’t know themselves, so it’s a useful exercise, by itself, to clarify their intended goals.

When I’ve sat in teams new to retrospectives and the goal is not made clear, people start to bring up too many different issues, and it’s difficult to resolve anything. One hour seems to be the maximum that teams are willing to set aside, and when you’re dealing with team issues, technical issues, process issues and more, all that time flies by. The result… nothing is improved and people get frustrated with the vehicle that brought the issues visibility (the retrospective).

What you can do about it
I use a rule of thumb, “One Goal Per Heartbeat Retrospective”, and make it clear at the start of the retrospective. It helps, if you’re the facilitator, to agree on what that goal is before you start, and to do whatever you can beforehand to make sure everyone agrees. It also helps if you’re working with a team that is already formed (see the Tuckman Model) and you confirm this beforehand.

Place the goal in front of everyone where they can see it, both as a subtle hint and as an aid if you feel the retrospective veering off. Run later retrospectives with a different goal in mind to address the topics you don’t get around to talking about.

Photo of the cross signs taken from Gregkendallball’s Flickr stream under the Creative Commons licence.

Estimates degrade over time

Project management is a funny thing. It seems like once something is converted into a number, the number is often all that matters to traditional schools of project management. Plans get created, promises get made, yet little diligence goes into ensuring the number is based on sound thinking, and that the basis for that number remains correct. Maybe that also explains the state of the current banking industry, though that’s another post entirely.

Most people fail to understand that estimates have a half-life, and often a very short one. Even results produced with agile estimation techniques have an expiry date that is often ignored. Magic numbers (a.k.a. the estimates) have an expiry date because, in order for them to be even slightly reasonable, assumptions need to be made, and it is often these assumptions that expire. Gantt charts are awful at highlighting what those assumptions are, let alone tracking when they fail to hold.

Image of burnt numbers provided by Dead Scene under the Creative Commons licence

A recommended practice for agile estimation is to get the people who are going to do the work to estimate it. Put these estimates on a shelf to ripen for a year and, presto, you have a new team to which the original estimates no longer apply. Even the same team, set to work on something else only to return to the system later, will have lost some flow, forgotten some things, and will need more time than if they had just continued to work on the same system. Environments often change, and the underlying needs of the business change even more frequently.

What can we do about it? I refer to Lean Software Development’s principle of the Last Responsible Moment, not only for making decisions but also for collecting (and keeping) estimates. Collect estimates now if it helps you make a better decision, and discard them if you choose not to implement the project immediately. You set yourself up for failure if you choose to estimate now, knowing that the project will be shelved for “only three” months, and then keep those numbers as if they were in any way still relevant. If you don’t need to make a decision now, work out when you need to make it and defer collecting estimates until just before then.

Declaring Spring Programmatically

I’ve always been a big fan of PicoContainer because it has a lovely programmatic interface as an IoC container, and I generally value programmatic APIs over external representations, particularly for config files (note that this doesn’t mean external representations aren’t useful).

Unfortunately the followers of Spring love writing their configuration in XML, and many of their examples don’t show what you need to do to wire things up programmatically in code. I spent some time trying to work out how to do some of these things.

Bean with an id ‘myBean’

<bean id="myBean" class="com.thekua.example.SomeClass"/>

in code:

AbstractBeanDefinition someClassDefinition = BeanDefinitionBuilder.rootBeanDefinition(SomeClass.class).getBeanDefinition();
getDefaultListableBeanFactory().registerBeanDefinition("myBean", someClassDefinition);

Guided by this article: Programmatically Build A Spring Application Context
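
Here is an untested sketch of one of those more complicated cases (SomeOtherClass and myOtherBean are made-up names, and the exact builder methods may vary with your Spring version): registering a bean that takes a constructor reference to the bean above.

AbstractBeanDefinition someOtherClassDefinition = BeanDefinitionBuilder.rootBeanDefinition(SomeOtherClass.class)
        .addConstructorArgReference("myBean")
        .getBeanDefinition();
getDefaultListableBeanFactory().registerBeanDefinition("myOtherBean", someOtherClassDefinition);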

I’ll add more examples for the more complicated items later.

How I like to pair

Chris asks “How do you pair?”, so here are my top tips when it comes to pair programming:

Understand each other’s working styles
I like to understand how the person I’m working with prefers to work, and I like to share how I like to work. Understanding each other’s preferences helps you avoid getting frustrated when your pair needs to do something differently. Some people like to draw diagrams (class/sequence) or use CRC cards, whilst others like to analyse the code. I know that I like to break things down into Tiny Tasks, so that I know when done is done. Make the implicit things explicit.

Relax, and count to 3
I tend to be a fast typist. I’m not saying this because I want to boast. I recognise that when I pair with people who don’t type as fast, I need to be patient when it comes to things like spelling mistakes and syntax errors. Nothing says, “I don’t trust you,” like a pair who constantly nit-picks over these things while the other person is still translating the design into code. I trust my pair to pick up spelling mistakes, and I wait for a long gap before I point any out. When I notice I’m about to interrupt, I count to 3 in my head to give them some space. Sure, spelling mistakes are annoying, but they’re easy to fix and they’re not the end of the world.

Swap constantly
Swapping frequently enough creates a sense of shared ownership; however, swapping too frequently at the start can break someone’s flow. When I start pairing with people, I try to give them as much time on the keyboard as possible, suggesting appropriate times to swap (the end of a test, the end of a piece of major functionality). Ping-pong pairing helps with this.

Ensure both parties understand *why* they are pairing
Pairing brings many benefits, and I read a great book with fantastic advice when I first started pair programming. It’s sometimes difficult to understand why you are pairing, so think about whether you are getting benefits from shared understanding, continuous code review, considering more than one design, or onboarding someone new. If you are pairing by rote, ask how you can link these benefits into your pairing activities. Without understanding why they pair, many people never achieve the results.

Recognise that pairing is not the solution for all problems
It bothers me when I hear, “we need to pair on this”, without recognising what the problem is. Some problems are better solved by the whole team, or a standard needs to be agreed as a team, not just by an individual pair. On the other hand, I don’t believe all tasks need two people sitting there, such as reading documentation or scouring the interweb for an answer to some problem. I think it often takes another level of awareness (that many people don’t reach) to know when to pair and when not to pair. Understanding why you pair will help with this.

The navigator is looking wider than the current task
A strong temptation for a navigator is to focus too much on the task at hand (i.e. pointing out typos). Taking the racing car metaphor, that would simply make them a classic backseat driver. The value of the navigator is to think beyond the current task, to what lies ahead, be it danger or direction. When I take the navigator role, I think about the consequences the current task has for other parts of the system, consider what other tasks we may have missed, and what other test scenarios we still need to write. As I watch the code unfold, I’m also thinking about whether the task we set out to do makes sense and is consistent with the rest of the system. I am looking to see how much closer it brings us to our final destination, or whether there is a better way.

Coding style leads to (or prevents) certain classes of bugs

I fixed a bug yesterday caused by a loop that terminated too early. It took me a while to work out that it was terminating too early, and I realised that I’ve never written this type of bug because of the way I tend to write code. Basically, the method looked like this:

public void someMethod() {
   for ( String key : keys ) {
       if (someCondition()) {
         return;
       }
       doSomeProcessingForThe(key);
   }
}

As you can see, the innocent-looking return statement breaks out of the entire method, when what was really wanted was a continue statement. Note that TDD had been applied to this codebase; it’s just another reminder that driving your code with tests isn’t enough (you also want to make sure you write comprehensive tests, particularly around loops).
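
For reference, the minimal fix (a sketch of what was presumably intended) keeps the early exit per item rather than per method:

public void someMethod() {
   for ( String key : keys ) {
       if (someCondition()) {
         continue; // skip this key, but keep processing the rest
       }
       doSomeProcessingForThe(key);
   }
}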

I’m writing this post because I thought it was interesting that a certain coding style removes (and possibly adds) a whole class of bugs. Had I been writing the code to start with, I probably would have avoided the use of the return (and/or continue) statement, resulting in the following code:

public void someMethod() {
   for ( String key : keys ) {
       if (!someCondition()) {
         doSomeProcessingForThe(key);
       }       
   }
}

Prevention is easy with tools like Checkstyle, PMD or FindBugs (and maybe some of them already check for this). If not, I think it’d be easy to add such a check.

