The intersection of technology and leadership


XP2009 Day 1

General impressions about the conference
I really enjoyed this year’s conference: the combination of a remote island in Italy and the small numbers (100+) gave many great opportunities for chatting about lots of different topics with experienced practitioners and a handful of academics. I found it refreshing that there seemed to be significantly more experienced practitioners this year, and it was extremely nice to be able to chat about similar experiences rather than give the unidirectional advice I find myself offering when there is a higher proportion of beginners.

[Photo: the conference pool]

Who wouldn’t want to gather around this place for some great conversations?

The quality of sessions was better than at the last two conferences, once again focusing less on introductory material and more on specific aspects. Of course, I had recommendations about how to improve the conference, particularly its organisational aspects, and I at least had an opportunity to give that feedback, having shared a return train with one of the organisers of next year’s conference.

Thoughts about the first day
The first part of this day was a keynote delivered by lean evangelist, Mary Poppendieck. Titled “The Cultural Assumptions behind Agile Software Development”, Mary proposed that there are several (American-style) cultural assumptions behind many of the agile practices, which makes them all the more difficult to implement elsewhere. She referenced heavily Geert Hofstede’s work on cultural dimensions.

I didn’t find the keynote particularly inspiring, nor particularly challenging. Country-based cultural dimensions are just one of the factors that permeate the way that people behave. As an agile consultant, you end up fighting corporate culture, and the systems that encourage and maintain that corporate culture; I see country-based cultural dimensions as yet another contributing systemic effect. This does not mean that just because a country has a high degree of individualism, working in pairs or working collaboratively in a team will be impossible (perhaps just all the more difficult). As much as I enjoy hearing Mary speak, I also found her presentation a little too heavy on PowerPoint, with far too much text and outdated clip art.

I also ran my workshop in the morning, titled “Climbing the Dreyfus Ladder of Agile Practices”, and want to thank all the experienced people who attended, as it resulted in some really rich discussions. We managed to discuss seven different agile practices in detail, brainstormed a large set of behaviours for each, and classified them into the different levels described by the Dreyfus Model of Skill Acquisition. The results from the workshop can be found here (photos to be updated).

In the afternoon, I helped out at Emily Bache’s coding dojo, focused on Test Driven Development. We saw four different pairs tackling four different coding problems using different tools and languages. I learned about the tool JDave (it’s still a little bit too verbose for my liking), and saw different styles in the way that people executed the code kata. I was hoping to demonstrate more of the pair programming side of test driven development, and I had a lot of fun even though I felt a little out of my depth with the tools. Thanks to Danilo for compensating for my lack of experience with tools like Cucumber. 🙂

More to come…

Visual Studio 2008 – The Anti Region Shortcut

If you happen to be unfortunate enough to stumble across “region”-ised C# code, use the following shortcut keys to expand all regions in a file:

CTRL-M + CTRL-L

Give it a go! Or better yet, disable all code folding by default:

Go to Text Editor->C#->Advanced. Under the “Outlining” section, uncheck the option labelled “Enter outlining mode when files open”.
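For anyone who would rather be rid of regions entirely, here’s a rough sketch that strips the markers out of a file while keeping the code between them. This is just a shell one-liner, not a Visual Studio feature; the sample file below stands in for real code, so adjust the path for your project.

```shell
#!/bin/sh
# Create a stand-in C# file containing a region block.
f=$(mktemp)
cat > "$f" <<'EOF'
#region Properties
public int Age { get; set; }
#endregion
EOF

# Print everything except the #region/#endregion markers themselves:
grep -vE '^[[:space:]]*#(region|endregion)' "$f"
```

Redirect the output to a new file (rather than editing in place) if you want to keep the original around for comparison.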

Why ORID matters

Last time I wrote about the ORID (Objective, Reflective, Interpretive, Decisional) model for conversations. Software development is hard because not only do you have to deal with technical challenges, but there are so many opportunities for poor communication. Detecting when poor communication occurs is important, as it lets you ask questions and steer conversations toward a much more productive result. Here’s a classic conversation I hear on software teams all the time:

Person A: We’re going to use <tool/framework/library>
Person B: No! <tool/framework/library> sucks. We’re going to use <alternative tool/framework/library>

Note that both of these people have jumped to the Decisional stage of the conversation (i.e. What to do). I’m sure that individually, both people went through the model, yet didn’t attempt to step through together (or as a team) each of those stages. I’ve learned that it’s easy to jump to different conclusions (D) if you don’t share the same background.

Using the ORID model as a guide, here’s a much more effective way the people could have had that conversation:
Person A: I’ve noticed we write a lot of code that deals with files. (O)
Person B: I’ve noticed that as well. (O)
Person A: It makes me feel frustrated (R) because we spend less time doing interesting stuff (I)
Person B: I didn’t realise you were frustrated about it! I also notice we tend to have a lot of bugs in that area as well (I)
Person A: I’d like to use some <tool/framework/library> so that we can improve our productivity (D)
Person B: I definitely would like to as well.
Person A: I think <tool/framework/library> would work well
Person B: I’ve had bad experiences with <tool/framework/library> because it also has lots of bugs (I). I think <alternative tool/framework/library> might still do the job, and help you feel less frustrated. What do you think? (D)
Person A: Let’s give it a go.

The conversation may still not end in agreement, however there is much more opportunity to discuss why one particular solution fits the issues both parties are feeling. Without making those items explicit, the discussion becomes an argument about choosing one solution over another. With a shared understanding of how both people are feeling, there are many more opportunities to clarify what problems the solutions are trying to fix. In fact, the solution each party originally thought of may change as a result.

Chartering Around an Old Codebase

Inheriting a large existing codebase is often an intimidating exercise. Many agile practices help you learn a number of the details, including the original XP metaphor, pair programming, test driven development, daily stand ups, and showcases. Many other valuable practices also help, including an excellent onboarding program, always-available mentor(s), and an easy-to-set-up environment.

We’ve been considering a number of techniques to learn as much about the system as possible. Here are just a few of the ones that spring to mind:

  • Uncover all external dependencies – External dependencies and integration points are often killer spots for fast feedback loops, whether you are running local tests through an interface or just trying to deploy the application and see if they are live. Each dependency adds complexity to deployment, another point of failure, and possibly another communication bottleneck with outside parties. Some examples of external dependencies include specially licensed software or services, databases, file systems, other software applications, web services, REST services, and messaging queues or buses.
  • Validate your own understanding of the architecture – An architecture diagram is often useful when starting to navigate a codebase. The implementation of that architecture may not be as clear as a high level diagram, so it’s important to uncover the flow of a system. What we’ve found useful is building up a flow through the system using a specific example scenario to understand the interactions of classes as they fit into the architecture.
  • Read through tests – If you’re lucky, your system will have plenty of tests. Start with the higher level ones that walk through system-level interactions, delving into the more granular ones when things aren’t clear.
  • Try to write some automated tests – Try to test something in the system and you’ll suddenly discover you’re pulling a string that happens to be connected to everything else. You may learn what happens to be the most used (or abused) classes and where all the dependencies start to overlap.
  • Generate diagrams using analysis tools – Consider different visualisations of code to understand how all the parts of the system fit together.
  • Write down questions as you go (and get them answered) – Ask lots of questions after you’ve had some attempt at getting your own understanding. It will take a while to get the domain vocabulary and your questions will be more useful the more context you have.

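As a concrete example of the first technique, a crude grep pass can surface likely integration points quickly. This is only a sketch: the sample files below stand in for a real source tree, and the patterns are illustrative examples rather than an exhaustive list.

```shell
#!/bin/sh
# Build a tiny stand-in source tree with two integration points and one
# pure-logic file.
src=$(mktemp -d)
printf 'String url = "jdbc:mysql://db-host/orders";\n' > "$src/OrderDao.java"
printf 'String endpoint = "http://billing.internal/api";\n' > "$src/BillingClient.java"
printf 'int add(int a, int b) { return a + b; }\n' > "$src/Maths.java"

# Count the files that mention databases, remote services, or file transfer:
grep -rlE 'jdbc:|https?://|ftp://' "$src" | wc -l    # 2 of the 3 files match
```

Running the same pass for messaging libraries, file-system paths, and licensed-product package names builds up a first map of where the system touches the outside world.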
Leave a comment if you have other strategies that you have found particularly useful. We’d certainly appreciate it right now.

Improving Collaboration Between Developers and Testers

One of the biggest divides you need to cross, even on agile teams, is the chasm between testers and developers. By the nature of their different roles there will always be tension, with developers focusing on creation, change, and construction, and testers focusing on breaking things and exposing questionable system behaviours. It’s essential that both organisations and, in particular, individuals realise what they can do to ensure this tension doesn’t devolve into an exclusively confrontational relationship.

[Photo: a cat and dog fighting]

Photo taken from Sephiroty’s Flickr stream under the Creative Commons licence

Recently, a new QA person highlighted this very issue for me. I’d finished some functionality with my pair and the QA had been testing it after we declared it ready to test. They came along to my desk and said, “I’ve found some defects with your code.” Internally I winced as I noticed myself wanting to say, “There’s nothing wrong with my code.” It got me thinking about why.

Evaluating people puts them on the defensive
Effective feedback focuses on behaviour; poor feedback makes a person feel criticised. When the QA said that there were some “defects”, it implied a broken state that I had to fix. Worse, the way they said it made it feel like they were blaming me, and it’s very hard not to feel defensive in a situation like this. A very natural outcome is to flatly deny the “evaluation”, something I’m sure anyone in our industry has witnessed at least once.

Avoid terms like “defect”, “broken”, “bugs”
One of the biggest differences between agile testers and testers who come from traditional backgrounds is the terms that they use. Traditional testers constantly use the words above. Agile testers focus on discussing the behaviour of the system and what behaviour they expected to see. They only use the words above once they have agreement on both the current and expected behaviours. I definitely recommend you do not start a conversation with the words above, as they all carry some connotation of “evaluation”, and I’m yet to meet someone who truly feels comfortable being “evaluated”.

Focus on Effective Feedback
Effective feedback follows a neat and simple pattern:

  1. What behaviour did you see?
  2. What impact did it have?
  3. What do we want to change?

Testers should use a similar series of questions (in order):

  1. What behaviour did you see?
  2. What behaviours did you expect to see?
  3. What are the consequences of the current system behaviour?
  4. Is that desired or undesired?
  5. Do we need to change it?

Apply the guideline above and watch the collaboration improve!

Maximising learning in development: Do Things Multiple Times

Cockburn talks about waterfall development being a poor strategy for learning, so what do agile methods give us that allow us to learn better?

One thing I constantly remind myself is that we tend to write pretty poor code the first time we do it. Unfortunately most people write a lot of first-time code, check it in and move on. Refactoring is one strategy that lets you learn how to change the code better, and is often the one most people reach for.

One practice that I’ve been doing more and more frequently is Do things multiple times. Sounds horrific right? Sounds like a huge waste of time? I think it can be if you don’t learn anything from it. Therefore in order to maximise learning, I think you need to also master a number of supporting practices like Use Version Control, Check in Frequently, Small Commits, and Automated Tests.

Here’s an example of this practice in action:

I’d seen some spike code that had somehow made its way into the codebase. It had minimal test coverage, had many more lines of code than I was comfortable with, and involved many different classes all slightly dependent on each other in some way. I wanted to refactor it but didn’t really know what the best way of doing it was. I retrofitted some tests around it, running code coverage to get a good understanding of what areas I was now more comfortable changing. I ran the build, got the green light and checked in.

I applied some small refactorings, ran the tests and watched everything spectacularly break. I looked back at what I did and started to understand some of the relationships better. I rolled everything back, this time trying something slightly different, before running the tests again. Things broke in a slightly different way and I spent a little bit of time understanding what happened this time. I rolled things back again and then tried a different approach. I want to emphasise that the timeframe for this is about fifteen or twenty minutes.
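The rollback loop above can be sketched with git in a throwaway repository. This is only a sketch: the “test suite” is simulated by a flag, where in real use it would be your automated tests, and the file is a stand-in for real code.

```shell
#!/bin/sh
set -e

# Create a throwaway repository with a known-good baseline commit.
repo=$(mktemp -d)
cd "$repo"
git init -q
echo "original" > code.txt
git add code.txt
git -c user.email=dev@example.com -c user.name=dev commit -qm "baseline, tests green"

# Attempt a refactoring...
echo "broken attempt" > code.txt
tests_pass=false                          # ...imagine the test suite failing here

# Tests broke: note what you learned, roll back, and try a different approach.
$tests_pass || git checkout -q -- code.txt

cat code.txt    # back to "original"; only minutes of work were lost
```

Because the baseline commit is small and recent, each failed attempt costs minutes, not hours, which is what makes repeating the attempt cheap enough to learn from.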

Compare it to other approaches I see quite frequently, where someone sets out to do something, gets into a broken situation, finds something else to fix and ends up working on multiple things at once. They keep patching stuff to get the tests to pass again, and once they do, check in and move on.

You should only Do things multiple times if you can Check in Frequently, execute Small Commits (therefore you lose very little when you rollback) and have Automated Tests so you know if you broke anything.

Repeating something is only waste if you don’t learn anything from it.

Active Passive Load Balancer Configuration for HAProxy

It took me a while to understand the documentation for the configuration file for HAProxy, another software load balancer that runs on a number of *nix-based systems. Here’s the resulting file that we used to successfully test an active-passive machine configuration; someone will hopefully find it useful one day.

global
    log     127.0.0.1 alert 
    log     127.0.0.1 alert debug
    maxconn 4096

defaults
    log        global
    mode       http
    option     httplog
    option     dontlognull
    option     redispatch
    retries    3
    maxconn    2000
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000

####################################
#
#           loadbalancer
#         10.0.16.222:8555
#           /          \
#   webServerA         webServerB
# 10.0.5.91:8181     10.0.5.92:8181
#    (active)           (passive)
#
####################################

listen webfarm 10.0.16.222:8555
    mode    http
    stats   enable
    balance roundrobin
    option  httpclose
    option  forwardfor
    option  httplog
    option  httpchk    GET /someService/isAlive             
    server  webServerA 10.0.5.91:8181 check inter 5000 downinter 500    # active node
    server  webServerB 10.0.5.92:8181 check inter 5000 backup           # passive node

Active Passive Load Balancer Configuration for Nginx

Here’s the configuration file that we ended up with when testing an active-passive configuration for our application using the software load balancer Nginx, which I previously posted about.

worker_processes  1;
error_log  logs/error.log;
events {
    worker_connections  1024;
}

####################################
#
#          loadbalancer
#         localhost:7777
#           /          \
#   webServerA         webServerB
# localhost:8080      localhost:8181
#    (active)           (passive)
#
####################################

http {
    include               mime.types;
    default_type          application/octet-stream;
    proxy_connect_timeout 2s;
    proxy_read_timeout    2s;

    sendfile              on;
    keepalive_timeout     65;

    upstream backend {
        server 127.0.0.1:8080     fail_timeout=1s max_fails=1;    # active node
        server 127.0.0.1:8181     backup;                         # passive node
    }

    server {
        listen                   7777;
        server_name  localhost;

        location / {
            proxy_pass http://backend;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

© 2024 patkua@work
