
What is Failure Demand in Software Development

The idea of “Failure Demand” comes from systems thinker John Seddon, who describes it as “unnecessary burden on the system.” By removing failure demand from a system, you free up more capacity to focus on value-added work. Much of failure demand also maps to the lean concept of “waste”, although not all “waste” is the same as failure demand.

Some classic examples (and tell-tale signs) I see with companies include:

  • Poor quality work – Features that are not well tested or well designed end up generating bugs. A smell to look for is lots of issues reported by end users. Lots of errors in production logs are another great smell for detecting this.
  • Features designed without thinking about User Experience – Without keeping the end user of a system in mind, many organisations build functionality without exploring how or why an end user will actually use it. Working with an effective user experience capability means simpler, clearer interfaces that help end users get the job done. Smells to look out for include interfaces that have had too many additions or features bolted onto them.
  • Requirements solely driven by a Product Manager – Many organisations rely solely on the HiPPO (the Highest Paid Person’s Opinion) to drive requirements. Although a Product Manager role is still useful for other reasons, faster experimentation and collecting data to test hypotheses are more valuable. Look out for smells like long release cycles, date-driven requirements, or large backlogs of detailed “user requirements” specified by the Product Manager without real involvement or feedback from end users.
  • Misunderstandings – As a software organisation grows, the number of communication channels increases significantly. When people do not validate their understanding with each other, they end up doing more rework than necessary. Depending on how complex the problem space is, using visual models, running workshops that explore a particular approach, and simply showing progress constantly (on a daily or weekly basis) all help to resolve this.

What other examples of failure demand do you see? Please leave a comment.

Making google analytics work with requirejs

We were trying to template Google Analytics and make it part of our RequireJS setup. When we did what we thought was the obvious thing, wrapping it in a module and declaring a dependency on it, it failed quite miserably.

The trick we missed was that you need to export a global variable. This means that in the javascript file initialising Google Analytics we had to add the following code block:

var registerGoogleAnalytics = function(accountId) {
  // Reuse the global command queue if one already exists, otherwise create it.
  // Note: "var _gaq = _gaq || []" inside a function always yields [] because the
  // local declaration shadows any global, so look the existing queue up via window.
  var _gaq = (window && window._gaq) || [];
  if (window) {
    window._gaq = _gaq; // export as global so ga.js can find the queue
  }
  _gaq.push(['_setAccount', accountId]);
  _gaq.push(['_trackPageview']);

  // rest of google analytics initialiser script...
};
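
For completeness, here is a minimal sketch of how a setup like this might be wired together as a RequireJS module and consumed elsewhere. The module name, file layout and account id below are illustrative assumptions, not our actual project code:

// googleAnalytics.js - hypothetical RequireJS wrapper around an initialiser like the one above
define([], function() {
  var registerGoogleAnalytics = function(accountId) {
    var _gaq = (window && window._gaq) || [];
    window._gaq = _gaq; // export as global so ga.js can find the queue
    _gaq.push(['_setAccount', accountId]);
    _gaq.push(['_trackPageview']);
    // ...load ga.js asynchronously here, as in the standard snippet...
  };
  return registerGoogleAnalytics;
});

// main.js - a consuming module declares the dependency and starts tracking
require(['googleAnalytics'], function(registerGoogleAnalytics) {
  registerGoogleAnalytics('UA-XXXXX-X'); // replace with a real account id
});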

Some links that helped us work this out:

Making jade and mustache templating work together

One of our frustrations using jade and icanhaz (a javascript front end mustache implementation) was that when we tried things that seemed obvious to us, jade would simply fail to template and we weren’t sure what was causing it.

Fortunately small TDD cycles and experimentation made us realise that it was the combination of new line characters and mustache code that made jade break.

We would try something like this:

script(type="text/html", id="my_checkbox", class="partial")
  li 
    label(for="{{code}}")
      {{name}} 
    input(id="{{code}}", checked="checked", name="{{code}}", type="checkbox")

The set of statements above would be valid mustache (once converted to HTML), but jade complains because the {{name}} is on its own line. The fix was to use the pipe (|) character to force jade to treat that line as plain text. It now looks like this:

script(type="text/html", id="my_checkbox", class="partial")
  li 
    label(for="{{code}}")
      | {{name}} 
      input(id="{{code}}", checked="checked", name="{{code}}", type="checkbox")
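
For reference, the fixed template compiles to roughly the following HTML (my reconstruction rather than output we published), which is then valid mustache for icanhaz to pick up:

<script type="text/html" id="my_checkbox" class="partial">
  <li>
    <label for="{{code}}">{{name}}
      <input id="{{code}}" checked="checked" name="{{code}}" type="checkbox"/>
    </label>
  </li>
</script>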

Simple, but not particularly obvious from the examples in their documentation.

Book Review: Taiichi Ohno’s Workplace Management

On my most recent plane trip, I got a chance to read Taiichi Ohno’s Workplace Management: Special 100th Birthday Edition. It’s a book translated and written down from a series of narratives, distilled into a small set of digestible chapters full of short stories. It gives a pretty great representation of many of his ideas, and is a great read about the philosophy and attitude behind Toyota and, ultimately, the movement classified as lean thinking/manufacturing.

I found the book sometimes jarring; perhaps it’s just the conversational style and the translation that make it a bit halting. The constant references to manufacturing terminology also make it slow to digest, but I find it fascinating to see how many of these ideas easily translate into the world of software as well. The book touches on this a little when Ohno goes on to analyse the difficulties of “white collar workers” and how much harder it is for them to “go to the gemba” to see the results.

Much of the advice is still appropriate today. Many takeaways reinforce ideas espoused by the lean movement, such as that tool makers should not be separated from tool users, or they end up creating tools that are not useful, and that improvement cannot be mandated centrally, away from the “gemba”, but must be done by the people “on the gemba”.

The book also starts off with his attitude towards people being human, and the problems we have with our own mental models or misconceptions that lead us to be wrong. Chapters like “The wise men mend their ways” and “If you are wrong, admit it” are good examples of how to cope with these human traits.

The book is a short read, and is full of nice little soundbites. Probably my favourite out of the book is:

“There are so many things in this world that we cannot know until we try something. Very often after we try we find that the results are completely the opposite of what we expected, and this is because having misconceptions is part of what it means to be human”, from the chapter “If you are wrong, admit it”.

A builder pattern implementation in Javascript

We’re using the builder pattern on our javascript project, as it is useful for starting off with a set of defaults while making it clear when we want to override particular values. Although there are a few places on the net that describe how to use the builder pattern in javascript, they don’t really provide an implementation.

Here’s one that works for us:

var Builder = function() {
  // defaults that callers can selectively override
  var a = "defaultA";
  var b = "defaultB";

  return {
    withA : function(anotherA) {
      a = anotherA;
      return this; // return the builder so calls can be chained
    },
    withB : function(anotherB) {
      b = anotherB;
      return this;
    },
    build : function() {
      return "A is: " + a + ", B is: " + b;
    }
  };
};

var builder = new Builder();

console.log(builder.build()); // A is: defaultA, B is: defaultB

var first = builder.withA("a different value for A").withB("a different value for B").build();

var second = builder.withB("second different value for B").build();

var third = builder.withA("now A is different again").build();

console.log(first);  // both A and B overridden
console.log(second); // A keeps the earlier override, B overridden again
console.log(third);  // A overridden yet again, B keeps its second override

Feedback for Conference Presenters

After presenting at the recent Quarterly Technology Briefing in London, Manchester and Hamburg, I had a very good question from one of my colleagues about what feedback I found most valuable.

Our feedback forms were quite short, with two quantitative questions (on a 1-5 scale) and three or four free text questions. Although the quantitative questions gave me a good indication of general feedback from the audience, they are not specific enough for me to really understand what to do more of, or what to do less of. It reminds me of the traffic light system (red, yellow, green) some conferences use for evaluating presenters: fun and quick, but entirely useless for understanding why people gave the ratings they did.

Although the free text answers take more time to read, the feedback is much more helpful, particularly for understanding where expectations for a session were or were not met, and for useful suggestions or ideas to focus more on. I can take this feedback and actually act on it for a future presentation.

For conference organisers, or if you’re putting feedback forms together for your own workshop, please don’t reduce feedback to a binary choice or to numbers alone. Although there are advantages to reaching an evaluation quickly, you don’t really learn why people rated something well or poorly. Ask open-ended questions and provide the answers to speakers raw and unedited.

If conferences really wanted speakers to get better as well, I think having some peer presenters sit in on a session and provide targeted feedback would be even better. I could imagine something like this focusing solely on the mechanics and execution of the presentation and giving timely, helpful feedback to improve both the session and the presenter.

Crashplan on Mac OSX not compatible with Java 1.7

Last year, I decided to back up my data in the cloud. I liked the idea of Crashplan because it encrypts data before shipping it off to the cloud. It runs in the background, and as long as you have a reasonable upload speed, backing up things like SLR photos isn’t so painful.

Unfortunately I rebooted my machine today and found that my backup service was no longer working. I scoured their twitter stream and their website for status updates, but everything looked good, so I figured something must have changed on my machine. I had installed JDK 7 earlier in the week, but didn’t link the two events at first because I rarely need to restart the mac.

Fortunately this post told me how to reconfigure crashplan to run on Java 1.6 again. Thanks interwebs.

Book Review: The Coaching Bible

I’ve had this book sitting around for a while, but I thought I should get around to reading it. The snow in London and the cold weather gave me a perfect reason to get through a little more reading. The Coaching Bible: The essential handbook focuses on some of the skills an effective coach requires, and introduces a few tools that a coach can use.


The book is largely domain agnostic, although the coaching examples they use tend to focus on a business context (i.e. not life coaching, sports coaching or agile coaching). I think that makes it quite accessible to anyone interested in developing coaching skills who isn’t necessarily looking to become a full-time coach.

They introduce this “Multi-modal” coaching model made up of four different perspectives a coach can focus on:

  • Logical levels – Beliefs (why), Environment (where, when), Behaviours (what), Capability (how), Identity (who). A good point is that an effective coach considers which logical level to focus on and where their efforts might have the most impact. Working at the wrong logical level leads to frustration and an ineffective coaching relationship.
  • Remedial versus Generative Continuum – Coaching falls along a spectrum: whether it needs to target a specific instance or outcome (remedial), or help with exploring options (generative). Once again, consider what is most appropriate for the situation.
  • Systemic Context – With a strong nod to one of my favourite books on systems thinking, The Fifth Discipline: The art and practice of the learning organization: Second edition, the idea here is that coaches work with people who operate in a larger environment that drives their behaviour. It’s useful to step back to view this larger context and explore it as part of the coaching conversations.
  • Interpersonal-intrapsychic continuum – Lastly, and the one I understood the least, this is the idea of not simply focusing on external relationships and observations, but also exploring the inner beliefs and internal drivers of the coachee.

I agree with quite a number of the other chapters in the book, and I think they offer plenty of practical examples and advice on things a coach focuses on, such as “Building the Alliance” with a client (agreeing on how and when to meet, developing an agenda, establishing goals and how to measure progress) and the importance of identifying the “Mind-Body-State” necessary for both you as a coach and the coachee to have a healthy conversation.

One of the most useful resources for a new coach is also found in the appendix, referring to core competencies outlined by the International Coach Federation.
