The point of this workshop was to bring the ideas of Test Strategy to the forefront for developers writing automated tests, a practice often considered essential to agile development. By looking at a small set of automated tests (acceptance, integration, unit) and examining the different tradeoffs carefully, people should be able to consider more explicitly, in their own project context, whether or not they have the right ratio of tests (e.g. too many acceptance tests or too many unit tests), the idea being that all of these tests should give you fast feedback so that you have confidence you're shipping the right thing. All the slides are available on Slideshare here. I ran this workshop at both ACCU2010 and XP2010.
The first part of the workshop was brainstorming a number of types of automated tests, and then brainstorming the benefits and costs associated with each category of test. Note that the benefits/costs are relative to your environment. Some of the following factors (and many more) are likely to affect them:
- The nature of the application being built (server side, web application, client side, etc)
- The language, libraries and toolkits available
- The age of the codebase (e.g. a large legacy codebase versus a fresh, new application)
- The size of the project/team
The following tables (in no particular order) transcribe all the different notes.
Acceptance Tests
| Benefits | Costs |
|---|---|
| Can be shown to clients | Slow |
| User value ensured | Less information available on how to do it well |
| Closer to end requirements | Long execution time |
| Test through layers | High learning curve |
| Easy on legacy code | Hard to pinpoint the source of a bug |
| Can be written in some tool (by non-developers) | Hard to write (well) |
| Documentation | Data dependent |
| Feeling of completion, "done" | Requires entire architecture stack/environment |
| Easier to introduce around legacy code | Environment cost |
| Refactoring friendly | Continuous updating as features emerge |
| Documents expected behaviour | Can get very verbose |
| Value centric | Expectations must be much clearer |
| Drive design | Costly hardware |
| Gains confidence that the application does what the user wants | Availability of external services |
| Drives behaviour of the system outside in | Many different configurations to test |
| Exercise more code per test | Makes changes to "flow" hard |
| Safety harness | Requires infrastructure |
| | Tough to find |
| | Brittle and can break for the "wrong" reason |
| | Brittle if driven through the UI |
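To make these tradeoffs concrete, here is a minimal sketch of what an acceptance test in this category might look like, written with JUnit and WebDriver (both appear in the tool brainstorm below). The URL, element names and page content are invented purely for illustration:

```java
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

import static org.junit.Assert.assertTrue;

public class SearchAcceptanceTest {
    @Test
    public void customerCanSearchForAProduct() {
        // Drives a real browser against a deployed instance of the
        // application, exercising every layer of the stack at once.
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://localhost:8080/shop"); // hypothetical deployment
            driver.findElement(By.name("query")).sendKeys("red bicycle");
            driver.findElement(By.id("search")).click();
            assertTrue(driver.getPageSource().contains("Red Bicycle"));
        } finally {
            // Browser startup and teardown are part of why these tests are slow.
            driver.quit();
        }
    }
}
```

Notice how many of the costs above fall straight out of the example: the test needs a running environment and a real browser, and renaming a single element id is enough to break it for the "wrong" reason.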
Integration Tests
| Benefits | Costs |
|---|---|
| Force communication with 3rd parties | Hard to set up |
| Tests external behaviour | Requires knowledge of context and multiple layers |
| Fewer surprises at the end (no big-bang integration) | Discourages acceptance tests |
| Requires no GUI | Slow |
| Easier to wrap around legacy code | Hard to keep at the right level |
| Verifies assumptions about external systems | Environment costs |
| Defines external services | Ties implementation to the "technical solution" |
| Flexible "Lego" tests (pick and choose which ones to run) | Authoring to get real data |
| Platform flexibility | Maintenance |
| Covers configuration | Hard to sell |
| | Complicated/impossible |
| | No IDE support |
| | Heavy investment in infrastructure |
| | Hard to pinpoint the source of a bug |
| | Hard to configure |
| | Requires detective work |
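As a rough sketch of this category, the following JUnit test talks to a real database through plain JDBC rather than mocking it away. I've assumed an HSQLDB in-memory database as a stand-in, and the table and data are purely illustrative; a real deployment would point at a shared, production-like database, which is where the environment costs listed above come from:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.junit.Test;

import static org.junit.Assert.assertEquals;

public class CustomerStoreIntegrationTest {
    @Test
    public void readsBackACustomerWrittenThroughRealJdbc() throws Exception {
        // A real (if in-memory) database verifies our assumptions about the
        // external system, at the price of environment setup and teardown.
        // Assumes the HSQLDB driver is on the classpath.
        Connection connection =
                DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "");
        try {
            Statement statement = connection.createStatement();
            statement.execute("CREATE TABLE customers (id INT, name VARCHAR(50))");
            statement.execute("INSERT INTO customers VALUES (1, 'Alice')");

            ResultSet result =
                    statement.executeQuery("SELECT name FROM customers WHERE id = 1");
            result.next();
            assertEquals("Alice", result.getString("name"));
        } finally {
            connection.close();
        }
    }
}
```

Run against shared infrastructure instead of an in-memory stand-in, the same test picks up the costs from the table: data dependence, environment cost, and slower, order-sensitive runs.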
Unit Tests
| Benefits | Costs |
|---|---|
| Easy to write | Unit tests often sit at the same level as the refactoring -> trouble doing refactorings |
| Proves that it works | Highly coupled to the code |
| Low number of dependencies | Might validate unused code |
| Enable refactoring | Heavy upfront investment in coding time |
| Becomes system documentation | Hard to maintain |
| Fast feedback | "Unit" results in tests at too low a level |
| Drives simplicity in solutions (if test driven) | Inventory: lines of code |
| Encourages less coupling and testable design | Hard to understand for anyone other than developers |
| (If pairing or code sharing) risk reduction: "many eyes on the code" | Not effective until a critical mass is reached (e.g. x% of code covered) |
| Helps you take action | Difficult in legacy code |
| Easy to configure | Developer aversion to doing it |
| Allows for testing boundary conditions | False sense of security |
| Pinpoints the problem | Steep learning curve for developers |
| Fast to write | Tricky mocks |
| Low individual cost | Redundancy |
| Low requirement of domain knowledge | |
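By contrast, here is a minimal sketch of a unit test using JUnit and Mockito (both appear in the list below). The InvoiceService and TaxRateRepository names are hypothetical, invented purely to show the shape of such a test:

```java
import org.junit.Test;

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

public class InvoiceServiceTest {

    // Hypothetical collaborator, mocked so the test has no real dependencies.
    interface TaxRateRepository {
        double rateFor(String countryCode);
    }

    // Hypothetical unit under test.
    static class InvoiceService {
        private final TaxRateRepository rates;

        InvoiceService(TaxRateRepository rates) {
            this.rates = rates;
        }

        double totalWithTax(double netAmount, String countryCode) {
            return netAmount * (1 + rates.rateFor(countryCode));
        }
    }

    @Test
    public void addsTaxUsingTheRateForTheCountry() {
        TaxRateRepository rates = mock(TaxRateRepository.class);
        when(rates.rateFor("SE")).thenReturn(0.25);

        InvoiceService service = new InvoiceService(rates);

        // Fast, pinpointed feedback: only this one unit is exercised.
        assertEquals(125.0, service.totalWithTax(100.0, "SE"), 0.001);
    }
}
```

It runs in milliseconds and pinpoints failures, but it is also tightly coupled to the code: rename totalWithTax and the test must change with it, which is exactly the refactoring friction noted in the table above.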
Tool Brainstorm
The following are all the tools brainstormed by the various groups, grouped according to their recommended classifications (these don't reflect my views on whether or not they are correctly classified). Leave a comment if you feel something really should be moved!
Tools for Acceptance Tests
- Cucumber (ruby)
- Robot (.net)
- WatiN/Watir (.net/ruby)
- Abbot (java)
- Parasoft (java)
- JBehave (Java/.net)
- Fitnesse/FIT (java/.net)
- WebDriver (java, .net, ruby)
- Selenium
- Cuke4Nuke (.net)
- Cuke4Duke (java)
- Bumblebee (java)
- HTMLUnit
- HTTPUnit
Tools for Integration Tests
- SoapUI (SOAP-based web services)
- DbUnit (java/.net)
- JMeter
- Jailer
Tools for Unit Tests
- MockObjects
- RSpec
- PHPUnit
- JSTestDriver
- JSpec
- JSMockito
- Rhino Mocks
- NUnit
- NCover
- Moq
- Typemock
- TestNG
- JMock/EasyMock
- Mockito
- JUnit
- Clover
- EMMA
- Cobertura
- FEST-assert
- RMock
- UISpec4J
- Erlang
- Checkstyle
Other tools not classified
- Lab Manager (TFS)
- HP Quality Centre
- Test manager (TFS Zero)
- VSTS (.net)
- JTest (java)
- Lando webtest
- Powermock (java)
- Grinder (Java)
- AB (python)