Monday 23 November 2009

Most People do not Know How to Ride a Bike

Dear Junior

Of course most people can be said to know how to ride a bike. The proof is simple: give them a bike and watch them mount it without instructions and take a ride without falling off – even in tricky conditions such as bumps or slippery surfaces. However, the notion of “knowledge” is a little trickier than that, and a too “binary” notion of knowledge hurts us.

For example, take any one of those who “know” how to ride a bike, and ask them to explain how to make a left turn. Chances are high that you will either get no answer at all, or an answer that, if followed, will make you fall over – which in a pragmatic sense can be judged “wrong”, or at least “incorrect” (it did not cause the desired effect).

In the same way I can ask about myself: do I know “Java exceptions”, “EJB 3”, “agile system development”, or “project management”? And when reading CVs I am regularly frustrated at being given just a long list of acronyms and frameworks; a list that tells me nothing about whether this person has been to a one-day class on Hibernate, or whether she can winkle out modelling mistakes that will hurt performance.

For my own use I like to view knowledge not as binary (“know” vs “not know”), but rather as having levels of “depth of knowledge”. A main source of inspiration is Benjamin Bloom’s taxonomy. He defined six levels of “cognitive learning” (knowledge, comprehension, application, analysis, synthesis, and evaluation).

However, when applying this to system development, I usually collapse the three deepest levels (analysis, synthesis, and evaluation) into one, ending up with four levels. The reason for doing this is that I seldom see the need to distinguish between the deepest three, but often want to point out the differences between the first three (knowledge, comprehension, and application) and the collapsed fourth. I guess a cynic might take that choice of collapse as a comment on the state of our industry.

One thing I really like about Bloom’s taxonomy is its focus on utilisation of knowledge. It measures depth by the ability to process and use information in a meaningful way, simply by stating what you can do at a given level. Note the words “utilisation” and “measure” – two words that strongly align with my affection for pragmatism in software development.

With each level there are a couple of typical verbs attached – things a person is supposed to be able to do at that level. And this gives something that is measurable in a testable fashion.

So, the four levels I use, with some typical verbs attached, are:

  • Recognition (renamed: originally “knowledge”): recite, list, label, locate
  • Comprehension: restate, give example, summarise
  • Application: apply, solve, show
  • Analysis (including Synthesis and Evaluation): compare, compose, invent, critique, evaluate, judge

Let me apply these levels to myself and some areas of knowledge just as examples.

So, do I know the Java memory model? Well, I follow along when my colleague Tommy talks about it, but I am not sure I could restate it without serious mistakes; so at recognition level yes, and perhaps at comprehension level, but not deeper.

Do I know Scrum, Java exceptions, or Domain Driven Design? Hey – how long do you want me to talk? I guess I could spend endless hours in discussion on those topics with other experts – so I would say analysis-level knowledge.

Do I know JPA 2.0? Sure, I could solve a problem by using it, but I would probably fail when people start discussing trade-offs and what happens “under the hood” in different implementations; in other words, application-level knowledge.

As a side note, I would say the levels are logarithmic (like the Richter scale), so each level of depth takes ten times the amount of thoughtful experience to reach, compared to the previous one. You can learn a buzzword by listening to a sales-level presentation of one hour. To be able to restate it yourself, you probably need to spend a day (10 h). Before you can practise it without checking up details all the time, you probably need two to three weeks of experience (100 h). And the level where you can compare and contrast different approaches will probably take half a year (1000 h). Of course, it is the two deepest levels that are most interesting and important to system developers.

Back to the bike riding: recognition would be “yeah – bikes are a kind of vehicle – I see one over there!”; comprehension would be “you get onto it, pedal to get speed, steer by pulling the handles, and brake by pedalling backwards”; application would be “let me show you – watch me!”.

And analysis level? Well, if you want to turn left you should not pull the left grip – that will cause you to tip over to the right due to the centrifugal force you experience when the bike starts turning left. In short – the bike will turn left, but you and your centre of gravity will continue straight ahead; so you fall to the ground on the right-hand side of the bike.

And, by the way: centrifugal forces do exist.

To actually turn left you first pull the right handle. The effect is that you start falling over to the left. After a split second you have fallen just the right amount, so that your angle towards the ground makes gravity and centrifugal force balance for the turn you intended. Then you pull the left handle to actually make the left turn. And as you now have the appropriate angle, the forces balance out and you stay on top. People trying to explain this often describe it as making a small counter-turn in the opposite direction as preparation before making the real turn.
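For the analytically inclined: at speed v in a turn of radius r, gravity and centrifugal force balance at the lean angle θ (measured from the vertical) where

  tan θ = v² / (g · r)

so the faster you ride, or the tighter you turn, the more you must lean – a standard result from analytical mechanics, included here just to make “balance” concrete.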

If you want to see this in action, have a friend film you while just cycling around in the backyard, and then watch the film frame by frame. You will see that you intuitively do the right thing – but you probably had no idea that you did.

Note that for most of us there is no need for analysis-level knowledge of bicycling – we are perfectly happy just being able to ride the bike. The analysis-level knowledge of bike riding you find in people like motorcycle driving instructors, or in those who have studied just a little bit too much analytical mechanics.

The same goes for people in the system development field. Just doing stuff and gaining experience does not give analysis-level knowledge. To get there you also have to have an analytical process in place that constantly evaluates what you do, why you do it, and what the alternatives could be.

Cultivate that analytical process.

Most people do not know how to ride a bike – they simply do it.

Yours

Dan


Monday 16 November 2009

Demeter Saves Mocking Fairies

Dear Junior
In a Twitter post, Damian Guy recently expressed his frustration with some hard-to-unit-test code as “Every time a mock returns a mock, a fairy dies” (Twitter @damianguy, 19 Oct 2009).
I immediately fell in love with the quote and how it “backwards” refers to a guideline on how to design (or not design) your code for testability. What has struck me recently is that this design guideline is actually nothing but the battle-proven Law of Demeter (“Don’t Talk to Strangers”).
Let me give a rather sketchy example, in Java-flavoured code, of how you could get a report of the shipping weight of an order composed of order items.
  class OrderService {
      int reportTotalWeight(Order order) {
          int weight = 0;
          for (OrderItem item : order.items()) {
              weight += item.weight();   // sum up the item weights
          }
          return weight;
      }
  }
So, what is the problem? Well, imagine the unit test for the report functionality in OrderService. To properly unit-test that functionality we need to rig a “test bench” with mocked versions of the “surrounding” objects. If we do not use mocks, we will actually test the functionality of the classes of the surrounding objects as well, and then we are not really “unit-testing” any longer – more like “micro-integration-testing”.
So a setup would need to pass in a mocked order. But then comes the call “order.items()”, upon which the mock needs to pass back a list of objects. Further on, these objects will have their “weight” method called, so they need to be mocks as well …
  OrderServiceTest.shouldSumOrderItemWeightsForOrderWeightReport()
      // Given (Mockito-style mocks, still sketchy)
      OrderItem item1 = mock(OrderItem.class);
      when(item1.weight()).thenReturn(4711);
      OrderItem item2 = mock(OrderItem.class);
      when(item2.weight()).thenReturn(42);
      Order order = mock(Order.class);
      when(order.items()).thenReturn(Arrays.asList(item1, item2));
      // When
      int weight = orderservice.reportTotalWeight(order);
      // Then
      assertEquals(4753, weight);
Messy, messy, messy - and we have a mock that returns mocks. Not good for the fairy population.
The underlying design problem is that the OrderService “reaches beyond” the order object to work on the order item objects. By doing this, it "extends its contact surface" to the rest of the application – and makes it hard to write a simple and complete unit test.
Well, the Law of Demeter is a twenty-year-old design rule in object orientation that says that an object should only talk to its immediate neighbours. The definition found in e.g. Wikipedia is that the allowed objects are:
  • The object itself
  • Method parameters
  • Any objects created/instantiated within the method
  • The object’s direct component objects
In many OO languages built upon C syntax, this can be viewed as “just one dot” – this is not precise, but it gives the right intuition about the rule.
And here we see that the order items do not fit into that list from the point of view of the order service.
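Expressed in the “just one dot” intuition, the difference looks like this (using the types from the sketch above; weightOfAllItems is the method the redesign below will introduce):

  order.items().get(0).weight();   // reaches through the order to a stranger
  order.weightOfAllItems();        // talks only to the immediate neighbour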
Redesigning to follow the Law of Demeter would limit OrderService to reaching the order, but not the order items. This requires a new method on Order: “weight of all items”.
  class OrderService {
      int reportTotalWeight(Order order) {
          return order.weightOfAllItems();
      }
  }
This piece of functionality is obviously trivial to unit test (in case you insist), and will only require you to mock one object, the order.
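Such a test could look like this – still in the Mockito-flavoured sketch style, with the test name and the expected value (4711 + 42 = 4753) picked just for illustration:

  OrderServiceTest.shouldReportTotalWeightOfOrder()
      // Given – one single mock suffices
      Order order = mock(Order.class);
      when(order.weightOfAllItems()).thenReturn(4753);
      // When
      int weight = orderservice.reportTotalWeight(order);
      // Then
      assertEquals(4753, weight);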
What has happened, of course, is that the complexity has moved elsewhere – into the order and its “weight of all items” method.
  class Order {
      private final List<OrderItem> items = new ArrayList<>();

      int weightOfAllItems() {
          int weight = 0;
          for (OrderItem item : items) {
              weight += item.weight();
          }
          return weight;
      }
  }
It is the same code as earlier – now operating on the order’s own item list instead of reaching into another object (plus some obvious refactoring simplifications). Unit testing this only requires one level of mocks, as the code only talks to “its immediate neighbours”.
  OrderTest.shouldSumOrderItemWeights()
      // Given
      OrderItem item1 = mock(OrderItem.class);
      when(item1.weight()).thenReturn(4711);
      OrderItem item2 = mock(OrderItem.class);
      when(item2.weight()).thenReturn(42);
      Order order = new Order();
      order.add(item1);   // add(...) appends to the order's item list
      order.add(item2);
      // When
      int weight = order.weightOfAllItems();
      // Then
      assertEquals(4753, weight);
And no fairies were killed in the testing of this code.
What I find beautiful is that these two roads lead towards the same goal. The Law of Demeter was an invention of object-orientation theory to address coupling and cohesion in designs. The save-the-fairies quote came from a pragmatic culture of wanting to produce high-quality code. I just love that they actually mean the same thing.
Well, when the Law of Demeter was coined, TDD was not around, “mock” meant something else, and fairies would not be hip for another decade or two. So, finally, I must praise Damian for formulating such a modern phrasing of a piece of ancient wisdom.
Yours
Dan
ps To be able to easily write unit tests without killing fairies or "mocking yourself to death" is of course crucial for TDD to be fun.
pps Redesigning your code for testability is of course an effect of wanting to test the behaviour of the code, not the code per se.

Sunday 8 November 2009

Traps When Establishing a Domain Term

Dear Junior

Establishing a domain term like “username” is not simple. Taking Domain Driven Design seriously, we need to make sure we end up with a meaning of the word that is both commonly accepted and precise. What we are really up to is making “username” a part of the ubiquitous language for talking about this system. There are several traps to avoid here, of which three are extra risky.

The first trap is becoming too techie – talking among the programmers, agreeing on some meaning, and settling for that. The problem is that the term never becomes ubiquitous, which renders it useless as soon as we want to communicate with people outside the tech clique.

Instead we need to involve all the people: programmers, DBAs, product owners, GUI designers, tech writers, and trainers. Why? Because we want (need) the same term to be used with the same meaning in code, manuals, training material, and the GUI.

The second trap is talking about usernames in general. It is completely fruitless and uninteresting to discuss the “true nature” of usernames, or to hunt for a definition that applies to a lot of systems.

Instead, what is useful to us is a definition of what a username is in this specific system. We should not pretend, or even strive to ensure, that our definition of “username” will apply to some other system – our definition is bounded by the context of the system at hand.

The third trap is not being techie enough – settling for vague definitions. It is not enough to agree that a username is “some kind of identifier string”, simply because that definition is not useful. What we need is something that can be turned into data structures, search schemes, and validation code.

Instead, we need the courage to nail down a definition that is precise – precise in the mathematical, logical sense. Here the bounded context comes to our aid. The “weakness” that our definition only applies to our system turns into a strength: we can be very precise about our needs and need not take the rest of the world into account. Close each discussion by stating something along the lines of “so, in our system we define usernames to be strings that are at least five characters long, at most 25 characters, and consist solely of lowercase letters”.
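As a sketch of how such a nailed-down definition can be turned into code, here is a minimal Java value object – the class name Username and its internal representation are of course just assumptions for this example:

  import java.util.regex.Pattern;

  public final class Username {
      // Encodes the agreed definition: 5-25 lowercase letters.
      private static final Pattern VALID = Pattern.compile("[a-z]{5,25}");

      private final String value;

      public Username(String value) {
          if (!VALID.matcher(value).matches()) {
              throw new IllegalArgumentException("not a valid username: " + value);
          }
          this.value = value;
      }

      public String value() {
          return value;
      }
  }

With the definition enforced in one place like this, every Username object in the system can be trusted to obey it.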

So, if we end up with a definition of “username” that is shared and agreed ubiquitously among all involved, that is bounded specifically to the context of our system, and that is precise enough to use for coding – then we have established a term that is truly part of our ubiquitous language.

Yours

Dan

PS Of course, establishing ubiquitous, context-bounded, and precise definitions is crucial for security.

Tuesday 3 November 2009

SQL Injection is not an Indata Validation Problem

Dear Junior

If DDD-style use of value objects solves the indata validation problem, and if DDD-style indata validation does not solve SQL Injection, then there is only one logical conclusion to draw.

SQL Injection is not an indata validation problem. Yes, that might be contrary to popular belief, but it is obviously so.

If not an indata validation problem, what kind of beast is it? And what can we do about it?
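To make the claim concrete, here is a small sketch – the customer-name example, the variable names, and the query are made up purely for illustration – of why even perfectly validated indata can wreck a query built by string concatenation:

  String name = "O'Brien";  // passes any reasonable validation of a customer name

  // Concatenated into a query, the data is suddenly interpreted as SQL:
  String query = "SELECT * FROM customers WHERE name = '" + name + "'";
  // -> SELECT * FROM customers WHERE name = 'O'Brien'   (broken – or, with a
  //    craftier value, exploitable)

The apostrophe is legitimate data, so no sensible validation will reject it; the trouble only arises at the point where data and SQL are mixed.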

Yours

Dan