Working for a client in Berlin, I find plane time is where I normally catch up on some reading. Services like Read It Later make bookmarking online pages for offline reading a pleasure. On this morning’s trip, I finished reading a book Michael Feathers tweeted about. Titled “Object-Oriented Software Metrics” and published 15 years ago, it was easiest to find through an online second-hand bookstore, and I have to say I enjoyed many aspects of it.
I did wonder how interesting a metrics book could be, but the author did well to keep it short and punchy. I enjoyed the conversational style of the writing and the pragmatic nature of his recommendations, such as, “I put the threshold at zero so that we explicitly consider any violations.” He starts the book by explaining that metrics should be used for a real purpose, not just collected at random, something I’m pleased resonates very well with a chapter I’m contributing to a book. It’s obvious he comes from applying metrics with real purpose in the real world, talking through examples where various metrics might drive design choices or prompt further investigation.
The author divides the metrics into two sections: the first focuses on metrics for estimating and sizing, or project planning; the second on design metrics related to the code itself. The estimation metrics piqued my interest as a reflection on how estimation used to be run, or maybe in some places still is, such as Developer Days per Line of Code, or his suggested alternative, Developer Days per Public Responsibility. The second set proved more relevant to me, though.
The author shares some of his preferred metric thresholds and they, too, resonate strongly with my own views on the size of methods, the number of instance variables in classes, and so on. If anything, his are rather more extreme than mine, such as six message sends per method, where my preferred number sits between 5 and 10 depending on the team I’m working with. Part of this, as the author emphasises, is heavily influenced by the programming language of choice.
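To make the idea concrete, here’s a rough sketch of counting “message sends” per method. It’s my own illustration rather than anything from the book, written in Python purely for brevity (the book’s examples and tools like Checkstyle and PMD target other languages), and it simply treats every call expression inside a function body as a send. The threshold of six mirrors the book’s suggestion and is easy to adjust.

```python
import ast
import sys

# Illustrative threshold: the book suggests six message sends per method;
# my own preference sits between 5 and 10.
THRESHOLD = 6

def message_sends_per_function(source: str) -> dict:
    """Count call expressions nested anywhere inside each function body."""
    tree = ast.parse(source)
    counts = {}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            counts[node.name] = sum(
                isinstance(child, ast.Call) for child in ast.walk(node)
            )
    return counts

if __name__ == "__main__":
    # Usage: python message_sends.py some_module.py
    with open(sys.argv[1]) as handle:
        for name, sends in sorted(message_sends_per_function(handle.read()).items()):
            flag = "  <-- over threshold" if sends > THRESHOLD else ""
            print(f"{name}: {sends}{flag}")
```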
Few of the metrics discussed were new to me, having made use of tools like Checkstyle and PMD, although he uses several I’ve not really tracked, such as the number of classes thrown away, the number of times a class is reused, and the number of times a class is touched, something I’d like to ponder a lot more. One metric I’ve never considered collecting or tracking is the number of problems or errors reported per class/module, though I suspect the overhead of tracking it may outweigh the benefit it brings, because it’s much harder to automate.
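For the “number of times a class is touched” metric, one way I can imagine approximating it is to count the commits that have modified each source file, using version control as the record. The sketch below is a hypothetical illustration using git, not something the book describes.

```python
import subprocess
from pathlib import Path

def touch_counts(repo_root: str, pattern: str = "**/*.java") -> dict:
    """Count, per source file, how many commits have modified it."""
    counts = {}
    for path in sorted(Path(repo_root).glob(pattern)):
        completed = subprocess.run(
            ["git", "rev-list", "--count", "HEAD", "--", str(path)],
            cwd=repo_root, capture_output=True, text=True, check=True,
        )
        counts[str(path)] = int(completed.stdout.strip())
    return counts

if __name__ == "__main__":
    # Print the most frequently touched classes first.
    for path, touches in sorted(touch_counts(".").items(), key=lambda item: -item[1]):
        print(f"{touches:4d}  {path}")
```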
His emphasis on the factors that influence code metrics also got me reflecting, once again chiming strongly with my own experiences. His mention of key classes resonates with the domain model described in Eric Evans’ Domain Driven Design. I would also anecdotally agree that the type of UI heavily influences the number of supporting classes, with technical interfaces (i.e. APIs) requiring fewer classes than rich GUIs. I like a lot of the distribution graphs he uses and will definitely consider using them in the future.
I’d highly recommend this book if you’ve never really sat down and thought about using code metrics on your projects. It’s got me thinking about a number of interesting side projects around visualisation and further feedback loops.