Last year, I wrote about trialling the idea of Feature Leads. I think the idea worked out and I would encourage more teams to adopt this approach. It helped devolve some of the responsibility and made the work more engaging for developers. Looking back at the list of things to consider, I would now add more items.
What is missing from that list?
- New environment needs? – Do we require new environments to support business stakeholders in their own testing, or do we overload an existing environment? If we rely on external dependencies, can they support the number of environments that we need?
- Identify external dependencies – If we are working with external vendors, we probably need to be more upfront about working out key dates so that we can co-ordinate.
- Has the business made any external/internal commitments? – As much as teams get frustrated by arbitrary dates set by the business, it’s useful to know if a) any have already been set, or b) business stakeholders want to communicate dates, because then you need to manage expectations and ensure that those commitments are balanced against other ongoing priorities.
- Is the solution simple, but evolvable? – Does the approach make any anticipated future work harder than it needs to be? Does it strike a sensible balance with time to market? Can we go for an even more lightweight solution now and substitute a more complex one later if needed?
- Do we need to build anything for the feature? – Is software even needed, or can some lightweight business process take care of the need? If we do build this, how long will it be used, and therefore how much effort should go into maintaining it and adding automated tests around it?
Looking back at the list of responsibilities, I think these items add to a standard list of things to consider when designing any sort of software solution: not just the building of it, but also its long-term effects (who uses it, who’s going to run it, who’s going to maintain it).