patkua@work

The intersection of technology and leadership

The Retrospective Handbook – Now in Print

Last year, I announced the release of the digital version of The Retrospective Handbook. As much as I feel digital books are important, I am one of those people who likes reading a physical copy of a book. It’s great for your own reading, and it’s also a great way to give one away. And now you can too!

[Image: The Retrospective Handbook]

The print copy of the book is now available via Amazon. Buy one for yourself, your team or as a gift today.

Fixing my Buffalo Linkstation Live LS-CHL

I bought a NAS drive a year or two ago, and recently I was trying to upgrade the firmware to the latest version, 1.60. Unfortunately the firmware update failed along the way and I ended up with a bricked NAS. The result was a red LED blinking at me six times in a row on every reboot. I tried quite a few combinations of steps before I was able to restore anything. I’m writing them out here, step by step, just in case they help someone.

Pre-requisites
I work on a Mac, but the software Buffalo provides to reset the firmware only runs on Windows. Fortunately I still had a Windows netbook around that I could use to reset it.

Boot the machine using TFTP
This approach for booting the machine remotely is well documented here, but unfortunately the software linked there didn’t work in my case. A Kirkwood TFTP boot package floating around the internet, listed in a forum post, worked best for me.

The steps that worked for me included:

  1. Connect the NAS via ethernet directly to the windows laptop
  2. Set a fixed IP of 192.168.11.1 on the laptop, allowing the default gateway details to fill themselves in (tabbing away works), and save it (see the command-line sketch after this list)
  3. Start the TFTP Boot.exe program from the kirkwood zip
  4. Start the LS-CHL Linkstation Live in TFTP mode (hold the function button down for a while, turn on the power and wait for the blinking blue lights). In my case the red flashing lights came on, and when I pressed the function button again it eventually bootstrapped
  5. You should see two console messages as described in the post saying “uImage.buffalo, xxx Blocks Served” and “initrd.buffalo, xxx Blocks Served”
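If you prefer the command line over the network adapter GUI for step 2, something along these lines should set the static address on Windows (the adapter name “Local Area Connection” is an assumption; use whatever your wired adapter is actually called):

netsh interface ip set address "Local Area Connection" static 192.168.11.1 255.255.255.0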

Reset the firmware
At this point, I figured, the machine had booted and now needed some firmware applied. I downloaded the latest version and waited to see if the machine would come up for an update. Here you need to make sure you follow the instructions in the Force Firmware Update post.

One extra step that I ended up having to do was in response to a “Couldn’t connect” problem. Another post pointed out that I needed to remove the static IP I had set earlier. I changed it on the Windows box, fired up NasNavi (to obtain a different IP and to establish a connection to the Linkstation) and could then follow the firmware update.

I rebooted the machine and it still flashed red, but after going through this cycle again, at some point (I don’t remember exactly when) I saw yellow blinking lights. I counted them and, according to the manual, they indicated the machine was resetting its firmware. Yay! After a little more waiting I had to repartition the drive, and then it was all blue lights again.

Independent Refactoring is Irresponsible

Both Michael Feathers and Rachel Davies recently wrote about attempts to make refactoring a more explicit step in the development process by adding a particular refactoring task to the board.

It got me thinking about my latest project, a green-field application where, in the last week, I think we almost tripled our estimated velocity. You might think that we gamed the estimates, bloating figures to make us look good, but we did not. We still used relative complexity to estimate essential work that must be done. We did learn a little bit more than the week before, but the biggest change was actually some preparatory refactoring. Before I explain why it worked, I’m going to take a slight detour.

I remember working with one client who spun up a “refactoring team”. It sounded great at the outset: a legacy application with plenty of poor code, as they knew the development team had cut corners to meet their milestones. Rather than completely halt new development, they split out a small team who would refactor mercilessly. This team spent one whole month renaming classes, adding tests, adding patterns here and there, fighting the big cyclomatic complexity numbers and then claiming victory over their poor opponent (the codebase). The result after the refactoring team finished: new feature development slowed down even more.

When investigating why feature development slowed down, we discovered the following:

  • New and unfamiliar designs – The refactoring team did some wonderful work cleaning up certain parts of the codebase. They made some places consistent, introduced a few patterns to tackle common smells and genuinely helped shrink the codebase. What they neglected to do was inform the other developers of the new design, where they now needed to look for the same functionality, and the intention behind the newly named classes. Instead, the developers working on new functionality struggled to find the huge, heavily commented code they were familiar with, and when they tried to apply their old techniques for fixes, failed to do so.
  • Immediately irrelevant refactoring – With most codebases, there are parts that change a lot, parts that change a bit, and parts that almost never change. In my experience (disclaimer: not researched) those parts that change a lot and are highly complex end up as a huge source of bugs. Those parts that don’t change can remain ugly and still be perfectly fine. In the case of this client, a lot of effort spent refactoring ended up in areas where new functionality wasn’t being added.
  • A divide in cultures – I heard a few snarky comments during this time from the new feature development team about the refactoring team, basically implying most of them were developer divas whilst the feature team had to do all the grunt work. The result: by outsourcing refactoring, the new feature team cared less about the codebase, and I’m sure the code they were adding wasn’t helping the clean-up.

[Image: litter, taken from Will Lion’s Flickr stream under the Creative Commons licence]

My reading on lean thinking taught me that you need to Build Quality In, and that separating quality from the product ends up costlier and results in poorer quality.

The answer… is actually quite clear in Martin Fowler’s book on Refactoring:

Refactor because you want to do something else (i.e. add function, or fix a bug)

I return to my current situation. We achieved the tripling in velocity because we spent time thinking about why adding a new feature was so cumbersome. I’ll admit that I spent a lot of time (almost a day) trying to add part of a new feature, attempting a few refactorings and rolling back when they did not work. I was trying to get a feel for which steps we did most often, and attempted several approaches (most failed!) to make the work simpler, clearer and require the least amount of new code. We did settle on some patterns, and we realised their benefits almost immediately: adding a new feature that previously took us a day to implement now took us only an hour or two, with tests.

I find that sometimes the most satisfying part of software development is actually reshaping existing code so that the addition of a new feature is just a single method call, or just a single instance of a class. Unfortunately I don’t see this often enough.
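To make that concrete, here is a small, purely hypothetical JavaScript sketch (not code from the project above) of the shape that preparatory refactoring aims for: once the repeated wiring lives in one place, adding a new feature becomes a single registration call.

// Hypothetical example: a registry that concentrates the wiring that
// previously had to be repeated for every new report type.
var ReportRegistry = function () {
    var builders = {};
    return {
        register:function (type, builder) {
            builders[type] = builder;
        },
        build:function (type, data) {
            return builders[type](data);
        }
    };
};

// Adding a new report type is now a single call plus its builder function.
var registry = ReportRegistry();
registry.register("weeklySummary", function (data) {
    return { title:"Weekly Summary", rows:data.slice(0, 7) };
});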

Reflecting on Feature Leads

Last year, I wrote about trialling the idea of Feature Leads. I think the idea worked out and I would encourage more teams to adopt this approach. It helped devolve some of the responsibility and made the work more engaging for developers. Looking back at the list of things to consider, I would now add more items.

What is missing from that list?

  • New environment needs? – Do we require new environments to support business stakeholders in their own testing, or do we overload an existing environment? If we rely on external dependencies, can they support the number of environments that we need?
  • Identify external dependencies – If we are working with external vendors, we probably need to be a bit more upfront about working out key dates so that we can co-ordinate.
  • Has the business made any external/internal commitments? – As much as teams get frustrated by arbitrary dates set by the business, it’s useful to know if a) any have already been set, or b) business stakeholders want to communicate dates, because that means you need to manage expectations and ensure those commitments are balanced against the other priorities going on.
  • Is the solution simple, but evolvable? – Does the approach make any anticipated work harder than it needs to be? Does it balance that against time to market? Can we go for an even more lightweight solution and substitute a more complex one later if needed?
  • Do we need to build anything for the feature? – Is software even needed, or can some lightweight business process take care of the need? If we build this, how long will it be used, and therefore how much effort should go into maintaining it and adding automated tests around it?

Looking back at the list of responsibilities, I think these elements add to a standard list of things to consider when designing any sort of software solution: not just the building of it, but also its long-term effects (who uses it, who’s going to run it, who’s going to maintain it).

Managing Ruby Development Environments

One of the principles I like is being able to set up new development environments very quickly. The Java space offers many tools for managing your environment so that each project works in its own separate space. In contrast, an anti-pattern in the .NET space often requires many installs into your GAC (Global Assembly Cache), frequently through a “mouse-driven”-only installer.

Fortunately the Ruby community offers a number of tools for managing both the version of Ruby and the libraries that you use. The ones that I now often reach for include:

  • RVM – Ruby Version Manager. Allows you to have different versions of ruby, and to quickly switch between them
  • Bundler – Management of gems.

The ultimate acceptance test for this is whether developers can simply “check out” and go. The lead time to set up a new development environment should be very short.
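As a rough sketch, “check out and go” with these tools looks something like the following (the repository URL and Ruby version are purely illustrative):

git clone git@example.com:myproject.git
cd myproject
rvm install 1.9.3      # install the Ruby version the project expects
rvm use 1.9.3          # switch this shell to it
gem install bundler    # one-off install of Bundler itself
bundle install         # install the gems declared in the Gemfile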

Note that there is now a competing tool for managing Ruby versions called rbenv, although integrated tool support (like in RubyMine) is only starting to come through.

Taming the Hippo (CMS) Beast

I alluded in a previous post to our struggles dealing with the Hippo CMS platform. It wasn’t our preferred path, but a choice handed down from above. Enough said about that. It’s useful to understand a little bit about the environment in which we were using it.

I believe the pressure to choose a CMS came from a deadline that required some platform choices to be made in the organisation. At that time, the extent of what the actual product would be was unknown. Our experience working with other clients is that you should generally work out what you want to do before you pick a product, or the platform will often dictate and limit your ability to do things. My colleague Erik Dörnenburg has been writing more about this recently.

The premise of a CMS is alluring for organisations. We have content… therefore we need a content management system. The thought ensues: “Surely we can just buy one off the shelf.” Whether or not you should use a CMS is a topic for another blog post, and you can read some of Martin Fowler’s thoughts on the subject here.

We wanted to protect our client’s ability to evolve their website beyond the restrictions of their CMS, so we architected a system where content managed in the CMS sits behind a content service, with a separate part of the stack focused on the rendering side. It looks a little like this:

[Diagram: the CMS sits behind a content service, with rendering handled by a separate part of the stack]

The issues that we faced with HippoCMS included:

A small community based on the Java Content Repository

Hippo is based on the Java Content Repository (JCR) API, a specification for standardising the storage and access of content. Even as I write this post, when mentioning “JCR” or “Java Content Repository” I am forced to link to the Wikipedia page, because I spent three minutes trying to find the official Java site (it looks like the official site is hosted by Adobe here). If the standard is small, the communities surrounding the products built on it are naturally going to be smaller. Unlike with Spring, putting a stack trace into Google will generally show the source code of the file rather than how someone got past the problem. I’d be happy living on the bleeding edge… if the technology was actually pretty decent.

Unfortunately a lot of the gripes I write about stem from the fact that the product itself is based on the JCR specification. Some simple examples include:

  • A proprietary query syntax – You query the JCR with an XPath-like query language. It’s actually less useful than XPath: it doesn’t implement all the functions available in XPath and it has some weird quirks (see the example query after this list).
  • Connecting to the repository via two mechanisms – either RMI (yuck! and inefficient) or in memory. This automatically limits your deployment options to the application container model. Forget fast feedback loops of changing something, starting a Java process and retesting.
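To give a flavour of the query syntax, a JCR XPath-style query looks roughly like this (the myproject namespace and property are illustrative, not from our actual project):

//element(*, hippo:document)[jcr:contains(@myproject:title, 'budget')] order by @jcr:score descending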

Hippo CMS UI generates a huge number of exceptions
One reason Hippo was selected was for the perceived separability of the CMS editor and the website component (referred to as the Hippo Site Toolkit). We didn’t want to tightly couple the publishing/rendering side to the same technology stack as the underlying CMS. Hippo allows you to do this by having separately deployed artefacts in the application container. Unfortunately, the Wicket-based UI (maybe because we used it without the Hippo Site Toolkit) generates exceptions like nobody’s business. We spent some effort trying to understand the exceptions and fix them, but there were frankly too many to mention.

Poor taxonomy plugin implementation
One of the reasons Hippo was allegedly picked was for the taxonomy plugin. Unfortunately this gave us a world of pain, both in usability and in maintaining it. The specific maintenance issues we faced included multi-language support (it simply didn’t allow it) and just getting the plugin deployed without problems.

CMS UI lack of responsiveness
Our client’s usage of the site wasn’t very big. Less than 300 articles and, at the peak, about 10 concurrent users. Let’s just say that even with three people, the UI was sluggish and unresponsive. We tried some of the suggestions on this page, but it’s a bit of a worry that it can’t responsively support more than one user out of the box with standard configuration.

Configuration inside the JCR
Most of our projects take a pretty standard approach to implementing Continuous Delivery. We want to easily source control configuration and script deployments so that releases into different environments are repeatable, reliable, rapid and consistent. Unfortunately a lot of the configuration for a new document type involves switching a flag to capture changes, playing around with the UI to define the new document type, and then exporting a bunch of XML that you must then load with some very proprietary APIs.

After several iterations we were able to streamline this process as best we could, but that took some time (I’m guessing about two weeks of a developer’s time, full time).

Lack of testability
We spent quite a bit of effort trying to work out the best automated testing strategy. Some of the developers first tried replicating the JCR structure the UI would create, but I pointed out that this would give us no feedback if Hippo changed the way it did its mapping. We ended up with some integration tests that drove the Wicket-based UI (with its wonderfully consistent but horrid set of generated IDs) and then poked our content service for the expected results.

A pair of developers worked out a great strategy for dealing with this, working out the dynamically generated IDs and driving the UI via Selenium WebDriver to generate the data in the proprietary XML-based data store that we would then query.

Lack of real clustering
In “enterprise” mode you can opt to pay for clustering support, although it’s a little strange because you are advised not to upgrade a single node within a cluster while other nodes are connected to the same datastore (in case the shared state is corrupted). This makes seamless upgrades really difficult without a complicated DB mirror/restore switcheroo. We ended up architecting the system for degraded service, using caches on the content service as a compromise for the “clustered” CMS.

Summary
As much as I wish success for the Hippo group, I think many of the problems are around its inherent basis on the JCR. I do think that there are a couple more things that could be done to make life easier for developers including increasing the amount of documentation and thinking about how to better streamline automated, frequent deployments around the CMS.

Cheat Sheet for Javascript Testing with Jasmine

Jasmine is the default unit testing framework I use when writing JavaScript; however, my poor brain can’t always remember all the different ways of getting things to work. There are quite a number of cheat sheets out on the internet, but they don’t quite cover all the cases. Here are my contributions demonstrating some of the common uses.

describe("jasmine", function () {

    describe("basic invocations", function () {

        var SampleDependency = function () {
            return {
                usefulMethod:function (firstParameter, secondParameter) {
                },
                anotherUsefulMethod:function () {
                }
            };
        };

        var Consumer = function (dependency) {
            return {
                run:function () {
                    dependency.usefulMethod("first", "second");
                },
                runSecondMethod:function () {
                    dependency.anotherUsefulMethod();
                },
                runWithRequiredCallback:function(callback) {
                    callback("an argument");
                }
            };
        };

        it("should spy on an existing function", function () {
            // given
            var dependency = new SampleDependency();
            spyOn(dependency, "usefulMethod");
            var consumer = new Consumer(dependency);

            // when
            consumer.run();

            // then
            expect(dependency.usefulMethod).toHaveBeenCalled();
            expect(dependency.usefulMethod).toHaveBeenCalledWith("first", "second");
            expect(dependency.usefulMethod).toHaveBeenCalledWith(jasmine.any(String), jasmine.any(String));
            expect(dependency.usefulMethod.callCount).toEqual(1);
            expect(dependency.usefulMethod.mostRecentCall.args).toEqual(["first", "second"]);
        });

        it("should demonstrate resetting of the spy", function () {
            // given
            var dependency = new SampleDependency();
            spyOn(dependency, "usefulMethod");
            dependency.usefulMethod();
            dependency.usefulMethod.reset();

            // when
            dependency.usefulMethod();

            // then
            expect(dependency.usefulMethod).toHaveBeenCalled();
            expect(dependency.usefulMethod.callCount).toEqual(1);
        });

        it("should demonstrate creating a spy object with prepopulated methods", function () {
            // given
            var dependency = jasmine.createSpyObj("dependency", ["usefulMethod", "anotherUsefulMethod"]);
            var consumer = new Consumer(dependency);

            // when
            consumer.run();
            consumer.runSecondMethod();

            // then
            expect(dependency.usefulMethod).toHaveBeenCalled();
            expect(dependency.anotherUsefulMethod).toHaveBeenCalled();
        });

        it("should demonstrate creating a stub object", function () {
            // given
            var dependency = jasmine.createSpyObj("dependency", ["usefulMethod", "anotherUsefulMethod"]);
            var consumer = new Consumer(dependency);
            var stubbedCallback = jasmine.createSpy("stub callback");


            // when
            consumer.runWithRequiredCallback(stubbedCallback);

            // then
            expect(stubbedCallback).toHaveBeenCalled();
            expect(stubbedCallback).toHaveBeenCalledWith("an argument");
        });
    });


    describe("returning a value", function () {
        var Dependency = function () {
            return {
                getMultiplier:function () {
                    return 10;
                }
            };
        };

        var Consumer = function (dependency) {
            return {
                calculateSomethingWithMultiplier:function (number) {
                    return number * dependency.getMultiplier();
                }
            };
        };

        it('should be correct', function () {
            // given
            var dependency = new Dependency();
            spyOn(dependency, "getMultiplier").andReturn(40);
            var consumer = new Consumer(dependency);

            // when
            var result = consumer.calculateSomethingWithMultiplier(3);

            // then
            expect(result).toEqual(120);
        });
    });


    it("should demonstrate creating a stub that returns a value", function () {
    });

    describe("creating a stub that calls a fake", function () {
        var Dependency = function () {
            return {
                request:function (callback) {
                }
            };
        };
        var Consumer = function (dependency) {
            var capturedValue = "";
            return {
                hardAtWork:function () {
                    dependency.request(function (value) {
                        capturedValue = value;
                    });
                },
                getCapturedValue:function () {
                    return capturedValue;
                }
            };
        };

        it('should demonstrate creating a stub function that does something interesting', function () {
            // given
            var dependency = new Dependency();
            spyOn(dependency, "request").andCallFake(function (callback) {
                callback("Controlled return value from callback");
            });
            var consumer = new Consumer(dependency);

            // when
            consumer.hardAtWork();

            // then
            expect(consumer.getCapturedValue()).toEqual("Controlled return value from callback");
        });

    });
});

RequireJS is the Spring Framework of Javascript

I’ve been working on setting up the infrastructure for a mostly JavaScript-based project, and we’ve been putting RequireJS into the codebase to help us manage file dependencies instead of having to declare them within the page that uses them. As a concept, RequireJS helps us keep different JavaScript modules apart in different files and lets us assemble them.

RequireJS works by declaring dependencies and having the framework pull them in when you need them.

define(["aDependency"], function(theDependency) {
  // now I can do something with theDependency
  theDependency.aMethodOnIt();
})

This is pretty much how spring works, but the issue I have is that RequireJS manages the lifecycle of the javascript objects, so when you want to pass in a substitute for a test, you end up in a dilemma.

define(["aDependency"], function(theDependency) { // how do I get inject a different instance?
  // now I can do something with theDependency
  theDependency.aMethodOnIt();
})

Unsurprisingly, a number of people have written libraries such as testr which allow you to override RequireJS to inject different versions. Although these are very reasonable approaches, I find them a little smelly, as you’re effectively patching a library you don’t own. The Ruby community knows the dangers of monkey patching too much, particularly in the parts of a codebase you cannot control, and the issues you can face when you try to upgrade.

Our current approach uses RequireJS to manage the file/name dependencies, while we write JavaScript that lets us control the instances of the objects ourselves. Here’s an example:

dependency.js

define([], function () {
    return function () {
        return {
            doSomeWork:function () {
            }
        };
    };
});

consumer.js

define([], function () {
    return function (aDependency) {
        var dependency = aDependency;
        return {
            start:function () {
                dependency.doSomeWork();
            }
        };
    };
});

And then we control the lifecycle of the components and instances in the application using the following code.

main.js

define(["consumer", "dependency"], function (Consumer, Dependency) {
    var dependency = Dependency();
    var consumer = Consumer(dependency);
    consumer.start();
});

And our jasmine tests get to look like this:

requirejs = require('requirejs');

describe("consumer", function() {
    it("should ensure the dependency does some work", function() {
        // given
        var dependency = jasmine.createSpyObj("dependency", ["doSomeWork"]);
        var consumer = requirejs("consumer")(dependency);

        // when
        consumer.start();

        // then
        expect(dependency.doSomeWork).toHaveBeenCalled();
    });
});
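Depending on where your modules live, you may also need to point the RequireJS node adapter at your source directory before the first requirejs(...) call. A minimal sketch (the baseUrl value is an assumption about your project layout):

requirejs.config({
    baseUrl:"src/main/js",   // wherever consumer.js and dependency.js live
    nodeRequire:require      // fall back to node's require for node modules
});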

This approach has been working out well, forcing us to manage our dependencies explicitly and avoid the global hell that JavaScript functions can quickly descend into. Thoughts? Please leave a comment.

“fs is not defined” in jasmine-node version 1.0.28

I was using jasmine-node this weekend (package.json says it’s version 1.0.28) and hit this error:

../node_modules/jasmine-node/lib/jasmine-node/cli.js:89
        var existsSync = fs.existsSync || path.existsSync;
                         ^
ReferenceError: fs is not defined

It looks like this has already been reported here as Issue #186. The quick fix is to add the require declaration yourself at the top of the file:

var fs = require('fs');