The pace of the conference and the speakers’ drinks event on Thursday night meant that I missed the first session on the final day. It’s a shame, since I think Joe Armstrong would have been good to listen to. I look forward to reading anyone’s report of that session in the blogosphere.
Track: The Concurrency Challenge – Session: Embracing Concurrency at Scale
I really appreciated this speaker for being pragmatic and deliberate about helping people think more explicitly about trade-offs. He encouraged people to actively consider which solution is most appropriate, given that there is always some trade-off involved.
This session echoed another of those recurring ideas: we use “models” all the time, and models are inherently limited because they simplify things. As the speaker said, “There are many models in science, all of them wrong but some of them useful.” He used time as an example: there are many ways of looking at it, although we have a tendency to want to structure it linearly. “When you admit time is a difficult problem, you will have problems with multiple actors”.
He talked about how we like convenient mechanisms (such as distributed transactions) because they’re easy to grok and we’re comfortable with them, but because they rely on implicit state, you throw away much of the value of distributed systems. He went on to talk about how we’re taught about the goodness of ACID properties (particularly around database-centric applications). The alternative he presented is BASE (Basically Available, Soft State, Eventual Consistency). There’s apparently a paper on this found here (ACM login required). He also referenced Pat Helland’s alternative reading of the ACID acronym: Associative, Commutative, Idempotent and Distributed.
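To make Pat Helland’s version of the acronym a bit more concrete, here’s a tiny sketch (my own illustration, not from the talk) of an operation that is associative, commutative and idempotent: merging the sets of order IDs that two replicas have seen. Because set union has all three properties, replicas can swap state in any order, any number of times, and still converge on the same answer.

```java
import java.util.HashSet;
import java.util.Set;

// My own illustration, not from the talk: a replica that records which order IDs
// it has seen. Merging by set union is associative, commutative and idempotent,
// so replicas can exchange state in any order and still converge.
public class SeenOrders {
    private final Set<String> orderIds = new HashSet<>();

    public void record(String orderId) {
        orderIds.add(orderId);           // idempotent: recording the same ID twice changes nothing
    }

    public void mergeFrom(SeenOrders other) {
        orderIds.addAll(other.orderIds); // union: the order and number of merges don't matter
    }

    public int count() {
        return orderIds.size();
    }
}
```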
Once again, he emphasised the importance of making the choice between alternatives an explicit one, because there is a real trade-off either way. This was yet another talk that brought up the CAP theorem, which describes three system properties (consistency, availability and partition tolerance) of which you can only guarantee two at a time. It’s a great reminder and I highly recommend reading about it as a refresher.
He talked about the world being Eventually Consistent and that “We shouldn’t try to model a world more perfect than it actually is”. He gave a number of examples of applying this in practice. One was a shopping cart, where a service should be told to “decrement stock by 1” instead of “reduce stock from 99 to 98”. I think this is just another example of good OO (tell, don’t ask).
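As a rough sketch of the difference (the class and method names are my own, not from the talk), telling the service what happened means concurrent updates commute instead of clobbering each other:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// My own sketch of "tell, don't ask" for stock levels. Callers say what happened
// ("decrement by 1") rather than reading the current level, computing a new one and
// setting it back ("set stock to 98"), so concurrent updates can interleave safely.
public class StockService {
    private final ConcurrentHashMap<String, AtomicInteger> stock = new ConcurrentHashMap<>();

    public void addStock(String sku, int quantity) {
        stock.computeIfAbsent(sku, k -> new AtomicInteger()).addAndGet(quantity);
    }

    // Tell: apply a delta. Decrements commute with other decrements,
    // so two concurrent callers never lose each other's update.
    public void decrementStock(String sku, int quantity) {
        stock.computeIfAbsent(sku, k -> new AtomicInteger()).addAndGet(-quantity);
    }

    // The ask-then-set style to avoid would look like setStockLevel(sku, 98):
    // two callers who both read 99 would both write 98, silently losing a sale.

    public int currentStock(String sku) {
        AtomicInteger level = stock.get(sku);
        return level == null ? 0 : level.get();
    }
}
```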
Track: Solution – Session: Scenario Driven Development
Our own Ben Butler Cole ran this session, which attracted a huge audience considering it was running in the Solution Track. Ben attempted to help people understand how to use scenario-based testing instead of alternatives such as exhaustive acceptance testing or story-based acceptance testing as the regression suite.
I think the audience were largely sceptical, but Ben’s views align closely with what I’ve seen on projects, particularly when doing development using stories. One of the misunderstandings about automated testing and stories is that everyone always views stories as additive. For me, you not only have stories that add functionality, you also have stories that change functionality (replace) and stories that cause functionality to disappear. I tend to think the set of tests should go hand in hand with this – if you’re deleting functionality, you should be deleting existing tests as well, and if you’re changing functionality, you should be changing existing tests.
I think part of the problem is that stories are somewhat fractal in nature, and the scenarios Ben talked about sit at more of an epic level (maybe even a use-case level). What is interesting is how you go about enhancing and maintaining those scenarios over time, and how the team works around them – something we unfortunately only got a taster of in the hour.
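To give a flavour of what I mean by a scenario at the epic level, here’s a made-up sketch (not Ben’s actual example): a single test that walks an end-to-end journey spanning several stories, which the team edits as stories add, change or remove behaviour. The little in-memory Shop below is just placeholder scaffolding so the example compiles.

```java
import static org.junit.Assert.assertTrue;
import org.junit.Test;

// Made-up sketch of an epic-level scenario: one test covers a whole customer journey
// rather than one test per story. The Shop class is placeholder scaffolding, not a real driver.
public class CustomerOrdersBookScenario {

    static class Shop {
        private final java.util.List<String> basket = new java.util.ArrayList<>();
        private boolean orderPlaced;

        void addToBasket(String title) { basket.add(title); }
        void checkOut()                { orderPlaced = !basket.isEmpty(); }
        boolean orderConfirmed()       { return orderPlaced; }
    }

    @Test
    public void customerCanFindBuyAndConfirmABook() {
        Shop shop = new Shop();
        shop.addToBasket("Domain-Driven Design");  // story: basket
        shop.checkOut();                           // story: payment
        assertTrue(shop.orderConfirmed());         // story: order confirmation
    }
}
```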
I heard that a lot of people still came away with plenty of interesting insights, and I would highly recommend everyone go and listen to Ben speak, even if it’s just to hear him present. Totally worth it.
Track: The Concurrency Challenge – Session: Modelling Concurrency with Actors in Java – Lessons learned from Erjang
This session was fairly packed out, I guess given the hype around Erlang and the number of people working with it or wanting to. The presenter’s take was interesting: in order to better understand concurrency and Erlang, he decided to implement an Erlang interpreter on top of the JVM instead of simply playing around with the language.
The premise behind his talk was that Object Orientation as a paradigm brought with it plenty of productivity, and that Actor-based modelling was going to be the thing to give us the next boost. Whilst I admire the premise, I don’t necessarily agree. I don’t think we’re all that good (as an industry) at implementing Object Orientation. I do think things like the JVM have helped raise the abstraction level so that we don’t need to think as much about lower-level memory management (although you can still end up with memory leaks in these languages if you’re not careful).
He spent a long time fascinating all the language geeks with his ability to run a simple Erlang program on top of his project, Erjang (Erlang on the JVM), but I really wanted to hear about the lessons learned from programming this way rather than the technical details of writing an interpreter on the JVM, which is what he spent the majority of the time on. I don’t think there were many interesting takeaways beyond the idea of designing parts of a system as separate actors. As he said, “Imagine treating parts of system as different actors with different responsibilities?”, to which I was thinking, “That’s what good Object Orientation is about: treating objects playing different roles with the right set of (small) responsibilities”. Nothing new unfortunately.
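For what it’s worth, the basic actor idea in Java looks something like this minimal sketch (my own illustration, nothing to do with Erjang’s internals): each actor owns its state, reads one message at a time from a mailbox, and is only ever touched by its own thread, so no locks are needed around that state.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal actor sketch (my own illustration): private state plus a mailbox,
// processed one message at a time by a single thread.
public class CounterActor implements Runnable {
    private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
    private int count;   // only ever modified by this actor's own thread

    public void send(String message) {
        mailbox.offer(message);              // other threads communicate only via messages
    }

    @Override
    public void run() {
        try {
            while (true) {
                String message = mailbox.take();   // one message at a time
                if ("increment".equals(message)) {
                    count++;
                } else if ("print".equals(message)) {
                    System.out.println("count = " + count);
                } else if ("stop".equals(message)) {
                    return;
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        CounterActor actor = new CounterActor();
        new Thread(actor).start();
        actor.send("increment");
        actor.send("increment");
        actor.send("print");
        actor.send("stop");
    }
}
```

It is, as I said, not a million miles from good Object Orientation – the mailbox just makes the “one thing at a time, per responsibility” discipline explicit.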
Track: How do you want to test that? – Session: Performance Testing at the Edge
This session drew a lot of mixed reactions. I agree that the start of it seemed a little commercial, talking about their product and the complexities of performance testing. I like discussing the importance of performance testing for applications because it’s one of those things that is often neglected and rarely treated very well. The presenter eventually got around to talking about some of these aspects, but like many of the talks, the useful content came fairly late.
I think this speaker had a lot of great things to say, but although much of it was common sense, he didn’t give people a clear impression of how they could go about doing it well. They had some interesting approaches to monitoring performance, such as always running the tests on the same set of dedicated hardware (which wasn’t even a replica of production). The trade-off is that you can compare results over time – and they had some interesting graphs showing this – at the cost of not having a properly representative sample of what the software will actually run on.
I found that some of the good things he talked about were similar to the talk that Alistair and I gave, although it still sounded like performance testing was predominantly a split responsibility, handled by a team working orthogonally to the actual development team. It also sounds like there was a big cultural barrier that they have been trying to break down by getting developers to write performance tests and take more ownership.
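A trivial sketch of the kind of developer-owned performance check being encouraged might look something like this (entirely my invention – the budget, names and iteration count are made up), run on the same dedicated hardware each time so results stay comparable across builds:

```java
import static org.junit.Assert.assertTrue;
import org.junit.Test;

// My own invented sketch of a developer-owned performance check: time an operation
// over many iterations and fail the build if it regresses past an agreed budget.
public class SearchPerformanceTest {

    private static final int ITERATIONS = 1_000;
    private static final double BUDGET_MILLIS_PER_CALL = 5.0;   // invented budget

    @Test
    public void searchStaysWithinLatencyBudget() {
        long start = System.nanoTime();
        for (int i = 0; i < ITERATIONS; i++) {
            doSearch("concurrency");                            // placeholder for the real call
        }
        double millisPerCall = (System.nanoTime() - start) / 1_000_000.0 / ITERATIONS;
        assertTrue("search took " + millisPerCall + "ms per call",
                   millisPerCall < BUDGET_MILLIS_PER_CALL);
    }

    private String doSearch(String term) {
        return term.toUpperCase();                              // stand-in for the system under test
    }
}
```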
The crowd also had a lot of scepticism about convincing a business to invest in the automation of performance testing.