The intersection of technology and leadership


Chartering Around an Old Codebase

Inheriting a large existing codebase is often an intimidating exercise. Many agile practices help you learn the details, including the original XP metaphor, pair programming, test-driven development, daily stand-ups, and showcases. Many other valuable practices also help, including an excellent onboarding programme, always-available mentors, and an easy-to-set-up environment.

We’ve been considering a number of techniques to learn as much about the system as possible. Here are just a few of the ones that spring to mind:

  • Uncover all external dependencies – External dependencies and integration points often kill fast feedback loops, whether you’re running local tests against an interface or simply deploying the application to see if the dependencies are live. Each dependency adds complexity to deployment, another point of failure, and possibly another communication bottleneck with outside parties. Some examples of external dependencies include specially licensed software or services, databases, file systems, other software applications, web services, REST services, and messaging queues or buses.
  • Validate your own understanding of the architecture – An architecture diagram is often useful when starting to navigate a codebase. The implementation of that architecture may not be as clear as a high level diagram, so it’s important to uncover the flow of a system. What we’ve found useful is building up a flow through the system using a specific example scenario to understand the interactions of classes as they fit into the architecture.
  • Read through tests – If you’re lucky, your system will have plenty of tests. Start with the higher-level ones that walk through system-level interactions, delving into the more granular ones when something isn’t clear.
  • Try to write some automated tests – Try to test something in the system and you’ll suddenly discover you’re pulling a string that happens to be connected to everything else. You’ll quickly learn which classes are the most used (or abused) and where all the dependencies overlap.
  • Generate diagrams using analysis tools – Consider different visualisations of code to understand how all the parts of the system fit together.
  • Write down questions as you go (and get them answered) – Ask lots of questions, but only after you’ve made some attempt at forming your own understanding. It takes a while to pick up the domain vocabulary, and your questions will be more useful the more context you have.
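The first technique in the list, uncovering external dependencies, can start as something as simple as a script that probes each dependency’s health endpoint and reports what’s reachable. A minimal sketch in Python; the dependency names, hosts and the isAlive path are hypothetical examples, not from any particular system:

```python
import urllib.request
import urllib.error

# Hypothetical inventory of external dependencies and their health endpoints.
DEPENDENCIES = {
    "payment-service": "http://localhost:8181/someService/isAlive",
    "reporting-db-proxy": "http://localhost:9090/health",
}

def probe(name, url, timeout=2):
    """Return (name, status), where status is 'ALIVE' or 'DOWN: <reason>'."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            if response.status == 200:
                return name, "ALIVE"
            return name, f"DOWN: HTTP {response.status}"
    except (urllib.error.URLError, OSError) as error:
        return name, f"DOWN: {error}"

if __name__ == "__main__":
    for name, url in DEPENDENCIES.items():
        dep, status = probe(name, url)
        print(f"{dep:20} {status}")
```

Even a throwaway script like this doubles as living documentation of the integration points you’ve uncovered so far.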

Leave a comment if you have other strategies that you have found particularly useful. We’d certainly appreciate it right now.

Repeatability distracts from the real goal

Most organisations emphasise repeating the “process”, which distracts from the real goal: repeating “success”. You need a certain amount of flexibility and adaptability in your process because you never work in the same environment twice.

This does not mean abandoning things that have worked well for you in the past. It means being prepared to change them when you can do better.

Maximising learning in development: Do Things Multiple Times

Cockburn talks about waterfall development being a poor strategy for learning, so what do agile methods give us that allows us to learn better?

One thing I constantly remind myself is that we tend to write pretty poor code the first time around. Unfortunately, most people write a lot of first-time code, check it in and move on. Refactoring is one strategy that lets you learn how to change the code better, and it’s the one most people reach for.

One practice that I’ve been doing more and more frequently is Do things multiple times. Sounds horrific, right? Sounds like a huge waste of time? It can be if you don’t learn anything from it. Therefore, in order to maximise learning, I think you also need to master a number of supporting practices like Use Version Control, Check in Frequently, Small Commits, and Automated Tests.

Here’s an example of this practice in action:

I’d seen some spike code that had somehow made its way into the codebase. It had minimal test coverage, had many more lines of code than I was comfortable with, and involved many different classes all slightly dependent on each other in some way. I wanted to refactor it but didn’t really know the best way of doing it. I retrofitted some tests around it, running code coverage to get a good understanding of which areas I was now more comfortable changing. I ran the build, got the green light and checked in.

I applied some small refactorings, ran the tests and watched everything spectacularly break. I looked back at what I did and started to understand some of the relationships better. I rolled everything back, this time trying something slightly different, before running the tests again. Things broke in a slightly different way and I spent a little bit of time understanding what happened this time. I rolled things back again and then tried a different approach. I want to emphasise that the timeframe for this is about fifteen or twenty minutes.

Compare it to other approaches I see quite frequently, where someone sets out to do something, gets into a broken situation, finds something else to fix and ends up working on multiple things at once. They keep patching stuff to get the tests to pass again, and once they do, check in and move on.

You should only Do things multiple times if you can Check in Frequently, execute Small Commits (so you lose very little when you roll back) and have Automated Tests so you know if you broke anything.

Repeating something is only waste if you don’t learn anything from it.

Active Passive Load Balancer Configuration for HAProxy

It took me a while to understand the documentation for the HAProxy configuration file. HAProxy is another software load balancer that runs on a number of *nix-based systems. Here’s the resulting file we used to successfully test an active-passive machine configuration, which someone will hopefully find useful one day.

global
    log     127.0.0.1 alert 
    log     127.0.0.1 alert debug
    maxconn 4096

defaults
    log        global
    mode       http
    option     httplog
    option     dontlognull
    option     redispatch
    retries    3
    maxconn    2000
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000

####################################
#
#           loadbalancer
#         10.0.16.222:8555
#           /          \
#   webServerA         webServerB
# 10.0.5.91:8181     10.0.5.92:8181
#    (active)           (passive)
#
####################################

listen webfarm 10.0.16.222:8555
    mode    http
    stats   enable
    balance roundrobin
    option  httpclose
    option  forwardfor
    option  httplog
    option  httpchk    GET /someService/isAlive             
    server  webServerA 10.0.5.91:8181 check inter 5000 downinter 500    # active node
    server  webServerB 10.0.5.92:8181 check inter 5000 backup           # passive node
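The httpchk line above means HAProxy decides whether a node is up by issuing GET /someService/isAlive every five seconds (check inter 5000) and marking the active node down on failure, at which point traffic falls to the backup. Any endpoint that answers 200 satisfies the check; here is a minimal sketch of one in Python, where the path matches the config above and everything else is illustrative:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthCheckHandler(BaseHTTPRequestHandler):
    """Answers HAProxy's httpchk probe: 200 when alive, 404 otherwise."""

    def do_GET(self):
        if self.path == "/someService/isAlive":
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"alive")
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the frequent probe traffic out of stderr

def serve(port=8181):
    # Bind to all interfaces so the load balancer can reach the check.
    return HTTPServer(("", port), HealthCheckHandler)
```

Run it on each web server with `serve().serve_forever()`, then stop the active node and watch HAProxy’s stats page route requests to the passive one.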

A guide for receiving feedback

I recently gave some advice on how to give feedback effectively and was asked to give some advice about receiving feedback. My guidelines for receiving feedback are pretty much based on understanding how to give effective feedback. Ola recently also shared his experiences with this.

Before looking at how to receive feedback, it’s useful to recap some guidelines on how to give it:

  • Feedback should be specific. Talk about specific observations and impact of the behaviours exhibited during those observations.
  • Believe that someone was doing what they thought was correct at the time. (Akin to the Retrospective Prime Directive)
  • Feedback should be timely. Give it early and often as you see fit.
  • Feedback should be about both strengthening confidence and improving effectiveness. It shouldn’t be about making someone feel bad about themselves.
  • The focus of feedback should be about behaviours, not perceived values or attitudes.

When you receive feedback, be prepared not to receive it in an ideal manner. For many different reasons, most people find it difficult to give effective feedback, and it takes plenty of practice to get incrementally better at it.

Gift. Photo from the Powerhouse Museum’s Flickr stream, used under a Creative Commons licence.

Listen candidly
When I receive feedback, I try to listen without reacting immediately. Some common (ineffective) feedback people give sounds like, “Your code is awful”. When put that way, who isn’t going to get defensive? Even something like “You’re really great” makes it hard to understand which behaviours you should keep repeating, and which you might consider changing. It’s particularly challenging to keep listening to feedback if you’ve already been put on the defensive, therefore…

Clarify detail and ask for specifics
When you feel offended or shocked by the feedback, ask what observations led to that conclusion. I like to ask what they observed, what impact it had, and how they felt about it. For some people, it’s useful to let them know that you currently feel defensive. I might say something like, “I feel like I’ve just been judged and I’m feeling defensive. I’d like to understand what behaviours you saw that had a negative impact so that I can better understand your perspective.”

Share your context with them
People often jump to the wrong conclusion because they may not have the complete picture. It’s often useful to share other motivating forces about the same observed behaviours. For example, “I joined the conversation uninvited because I feared you would never ask me for my input and I felt I had important things to contribute.”

Acknowledge and thank them for their feedback
When people give feedback, they are giving up some of their time. Some may have had to overcome their fears about giving feedback at all. So when you’re receiving it, acknowledge what they are saying and thank them for it. Even if you disagree with their conclusion, acknowledge their contribution if you also observed the same behaviour.

Ask for feedback early and often
Giving effective feedback takes time and isn’t often at the front of people’s minds. We know that it’s easier to respond to feedback early when you have an opportunity to change something. As the person receiving feedback, it often helps to invite people to give you feedback as this alleviates the fear most people have when giving feedback, “How are you going to react?” Giving people some notice about collecting feedback also helps.

Move people away from judgements to positive action items
For some people, it will be difficult to move them away from their “evaluation” and bring them back to observed behaviours. Also, some people don’t remember specific behaviours or impacts and prefer to talk about their “gut feeling”. While this isn’t particularly effective, as a person receiving feedback you can still benefit by asking, “What should I do differently?” or “What could I do to make more of the situation, or make the situation better?”

What helps you?
I’m sure there are plenty of other tips on how to receive feedback. What do you tend to focus on when receiving feedback?

Come along to XP2009

At this year’s XP2009, I’m going to run the workshop, Climbing the Dreyfus ladder of Agile Practices where we’ll look at learning models (focusing on one in particular) and how to use them to help as a model for coaching and transferring skills around agile practices.

It’s going to be great fun, and contains some great material inspired by all the wonderful coaching work that Liz Keogh has been doing (we’re also hoping to get into Agile 2009).

Bring your friends, your work colleagues and anyone you think might get some benefit. I’ll be maintaining this page as we get closer to the conference.

A model for understanding retrospective impact

Steven List asks the question, Are Retrospectives an Anti-pattern? Of course, retrospectives are a topic close to my heart, so I naturally wanted to share my view of them. The conversation apparently started on the Kanban Development mailing list, and Steven’s post already captures some great discussion. I won’t repeat it here, but I find the dialogue echoing the same sentiments voiced about other agile practices and whether or not they’re useful. For me, the debate is too extreme and not particularly helpful. It makes it sound like you need to choose between two positions: either you run retrospectives, or you don’t.

I think the more interesting question is, “When are retrospectives most useful?” To help explain my thoughts, I’ve put together the following: A Model for Understanding Retrospective Impact.

A model for understanding Retrospective Impact

What is a retrospective?
Being specific and clear about this term is important in any discussion about whether retrospectives are useful, because people’s understanding of a retrospective differs depending on their experiences. The conversations on the Kanban list seem to imply retrospectives are solely an iteration-focused ritual (and of course, there is no concept of a Scrum or XP iteration in Kanban). Though I appreciate Kerth’s original definition, I think in the agile community, Derby and Larsen’s definition better captures its essence:

A special meeting where the team gathers after completing an increment of work to inspect and adapt their methods and teamwork.

I like this definition best because it ties the concept of improvement (inspect and adapt) to a shared understanding (the team) over some unstated time period (an increment of work, noting it’s not specifically tied to an iteration).

What is impact?
I think it’s also important to define the term impact. One of the definitions by the Oxford Dictionary seems most appropriate:

a marked effect or influence

I want to emphasise that you can have a high-impact retrospective without necessarily taking a lot of time, and that the opposite is entirely possible too (a long retrospective with low impact). Note that I’m not going to discuss how to run retrospectives for the highest impact (a totally separate topic worth its own set of posts).

Effect of team dynamics
The state of the team has an enormous influence on the impact of retrospectives. A high-performing team will naturally have better communication amongst its members, and is more likely to fix things that get in the way of its goal. Individual improvements will be identified and implemented more frequently without the call for a specific meeting. This doesn’t mean that retrospectives are never a practice such teams call upon. Rather, it’s a practice with less impact because there are many other items in the toolkit to call upon. Sometimes it’s still important for the group to get together, establish a common understanding, and inspect and adapt, though there may not be any specific regularity to these meetings.

In contrast, a dysfunctional team is less likely to call upon individual improvement practices, and thus retrospectives play a more important role, providing an explicit opportunity to improve. I use the term dysfunctional in this sense to capture a broad category including newly formed teams, or simply groups of people. It does not necessarily mean the team cannot advance, nor that it hasn’t had a chance to advance; it is simply the state the team exists in.

Environmental Effects
I would classify a majority of environments as chaotic. In chaotic environments, improvement isn’t second nature. Through practices that start with good intentions, such as auditing (“I want to have confidence you are doing what you say you are doing”), people in these environments become afraid to suggest changes, and are often fearful of attempting anything different because they’ve been punished for a failed experiment in the past. In these chaotic environments, retrospectives have a significant impact by providing an explicitly safe setting for teams to make, commit to, and hopefully be supported in making changes. Retrospectives have a higher impact here because the majority of other activities don’t encourage learning, innovation and improvement.

In contrast, a nurturing environment welcomes fail-fast innovation, rewards learning from failed experiments and supports continuous improvement in an explicit, non-document-centric way. In these environments, the outcomes from implementing improvements and passing on tacit learnings matter more than meeting audits and compliance. Retrospectives have less impact here because individuals are more likely to suggest and implement improvements without the need for a special meeting, and without the need for the entire team. Once again, this doesn’t necessarily imply that retrospectives are not useful (some issues require a shared understanding across the entire team) or that they must be regularly scheduled events.

Avoid deciding between whether or not you run retrospectives. Instead, consider when it is best for you to run a retrospective.
Understand what situation you find yourself in using the model above, and ask yourself whether retrospectives have the potential for the most impact (dysfunctional teams and chaotic environments), or whether energy is better spent pursuing other activities because retrospectives have less, but not zero, impact (high-performing teams and nurturing environments).

In the next set of posts, I hope to describe different situations I’ve seen first hand and what impact retrospectives had in relation to them.

Leave a comment and let me know what you think.

Experimentation and Learning lead to Ford’s production line insight

I find it slightly ironic that the learning and experimenting philosophy so heavily emphasised in the lean world ultimately produced what is now considered the opposite approach: the Ford production line.

“…when Henry Ford and his team developed the production line, they didn’t just sit down and deductively theorize about it on paper, nor did they merely try random experiments. Instead, they used a bit of both. … Armed with a set of deductive hypotheses, Ford began experimenting with different configurations of his plant between 1908 and 1912. After four years of tinkering, in 1913 he struck on the key insight that the car itself should move along the production line rather than the workers…” – From Chapter 12 of The Origin of Wealth by Eric D. Beinhocker.

Am I missing something?

It’s about time I probably asked this question about the new Manifesto for Software Craftsmanship, but seriously, am I missing something?

Firstly, the whole idea of craftsmanship applied to software isn’t particularly new. I remember reading Software Craftsmanship: The New Imperative when it first came out, and how the Pragmatic Programmers also talked about this concept.

I’m probably a little biased because I believed in what craftsmanship stood for before being fully immersed in agile for the last five(?) years. A part of me understands why this is important for agilistas. After all, I hear stories all the time of organisations that focus solely on Scrum without adopting any developer discipline and still have a “crappy codebase”.

Yet I worry that it focuses on the wrong problem. I worry that it may be addressing a symptom, not a cause. I worry that it will further split our industry into two camps, those who craft (or should craft) software, and everyone else.

The real question I ask myself when going into new organisations and with new teams is, “Do people care about the quality in their work?” This question applies to all parts of an organisation, all parts of a team, not just developers.

So I’m jetlagged, tired and honestly just ranting a little bit, but I still ask the question, “Am I missing something?”.

Retrospectives are not the only place for continual improvement

Teams adopting agile, and even teams who consider themselves agile, often hit a stumbling block. Here’s how the thinking goes: agile is about improvement. Agile projects do retrospectives to improve. Therefore, retrospectives = improvement, and improvements (only) happen in retrospectives.

Unfortunately, many teams suffer without realising that improvement goes beyond retrospectives. Every day there is an opportunity to improve, an opportunity to learn. It sometimes takes a while to see these opportunities. It often takes much longer to unwind the restraints organisational “process” places on people’s desire to experiment, fail fast, and learn from those mistakes.

Don’t get me wrong: retrospectives have their place. Sometimes teams don’t have a safe enough environment, and retrospectives are one way of helping establish some safety. It takes commitment from leaders to create this environment of safety, something I encourage greatly when I work with teams.

Do you recognise your team falling into this pattern? Break out of it, and remind everyone that improvements don’t have to wait for a meeting to be attempted.


© 2024 patkua@work
