Friday 27 August 2010

Two Types of Performance




Dear Junior

In architecture, one of the most important tasks is to keep an eye on the non-functional (or quality) attributes of the system. Often this importance is underscored by stakeholders holding up some non-functional requirement (NFR), saying "the system must really fulfill this". Unfortunately, these NFRs are often quite sloppily formulated, and a key example is "performance". I have stopped counting the times I have heard "we must have high performance".

I think a baseline requirement for any NFR is that it should be specific. There should be no doubt about what quality we are talking about. In my experience, "performance" can mean at least two very different things. Instead of accepting requirements on performance, I would rather try to reformulate the NFR using some other wording. I have found that the "performance" asked for can often be split into two different qualities: latency and throughput.

Latency or Response Time


With latency or response time I mean the time it takes for one job to pass through the system. A really simple case is the loading of a web page, where the job is to take a request and deliver a response. So we can get out our stop-watch and measure the time from the moment we click "search" until the result page shows up. This latency is probably in the range of 100 ms to 10 s. Of course, this response time is crucial to keep the user happy.
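
If we want something more repeatable than the stop-watch, a few lines of code can do the measuring. A minimal sketch in Java (the search URL is made up just for illustration):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class LatencyProbe {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            // Hypothetical search endpoint, just for illustration
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://example.com/search?q=news"))
                    .build();

            long start = System.nanoTime();  // the moment we click "search"
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            long elapsedMillis = (System.nanoTime() - start) / 1_000_000;

            System.out.println("Status " + response.statusCode()
                    + ", latency " + elapsedMillis + " ms");
        }
    }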

But latency can also be an important property even without human interaction. In the context of cron-started batch jobs, it might be the time from when the input file is read until the processing has committed to the database. The latency for this processing might have to be short enough that the result does not miss the next batch downstream. E.g. it might be crucial that the salary calculation is finished before the payment batch is sent to the bank on salary day.

In a less batch-oriented scenario the system might process data asynchronously, pulling it from one queue, processing it, and pushing it onto another queue. Then the latency is the time from when data is pulled in until the corresponding data is pushed out at the other end.
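
Measuring that latency follows the same idea: stamp the time when the data is pulled in, and compare it with the clock when the corresponding result is pushed out. A small sketch, assuming in-memory queues just to show the mechanics:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class QueueLatency {
        // Carry the pull-timestamp along with the processed data
        record Processed(String result, long pulledAtNanos) {}

        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<String> inbound = new ArrayBlockingQueue<>(10);
            BlockingQueue<Processed> outbound = new ArrayBlockingQueue<>(10);
            inbound.put("salary-record-42");

            // Pull, process, push - noting the time at both ends
            String data = inbound.take();
            long pulledAt = System.nanoTime();
            String result = data.toUpperCase();   // stand-in for the real processing
            outbound.put(new Processed(result, pulledAt));

            Processed out = outbound.take();
            long latencyMicros = (System.nanoTime() - out.pulledAtNanos()) / 1_000;
            System.out.println("Latency through the system: " + latencyMicros + " us");
        }
    }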

All in all, latency is seen from the perspective of one single request or transaction. Latency tells you how fast the system is from one traveller's point of view. Latency is about "fast" in the same way as an F1 car is fast, but will not carry a lot of load.

Throughput or Capacity


On the other hand, throughput or capacity is about how much work the system can process. For example, a news information portal might have to handle a thousand simultaneous requests, because at the nine o'clock coffee break a few thousand people might simultaneously surf to that site to check out the news.

Throughput is also important in the non-interactive scenario. Each salary calculation might only take a few seconds, but how many will the system be able to process during the 10 000 s between midnight (when the run starts) and 02:45 when the bank batch leaves? If we cannot process all 50 000 employees, some will complain. To meet the goal we need a throughput of five transactions per second.
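
The arithmetic is trivial, but it is worth writing down as an explicit requirement; a back-of-the-envelope sketch using the numbers from the example above:

    public class ThroughputEstimate {
        public static void main(String[] args) {
            int employees = 50_000;          // salary calculations to process
            int secondsAvailable = 10_000;   // roughly midnight to 02:45

            double requiredThroughput = (double) employees / secondsAvailable;
            System.out.println("Required throughput: " + requiredThroughput + " tx/s");  // 5.0 tx/s

            // Note: a latency of a few seconds per calculation is fine,
            // as long as enough calculations are in flight to sustain 5 tx/s.
        }
    }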

In other words, where latency is the performance seen from the perspective of one client or transaction, throughput is the performance seen from the perspective of the collective of all clients or transactions: how much load the system can carry. Here "load" is meant in the same way as a bus carries a lot of load, transporting lots of people at once, even if it is not fast.


Fastness vs Load Capacity

Both F1 cars and heavy-duty trucks are no doubt "high-performing" vehicles. But they are so in completely different ways. To have an F1 car show up at the coal mine would be a misunderstanding that could only be matched by the truck showing up at the race track.


So, I avoid talking about "performance" and the risk of misunderstanding that comes with it. Instead I try to use "latency" and "response time" to talk about how fast things happen, while thinking about an F1 car. And I use "throughput" and "capacity" to talk about how much load the system can handle, while thinking about a bus full of people.

What is the latency for transporting yourself between Stockholm and Gothenburg using an F1 car or a public-transport bus? And what is the throughput of transporting a few thousand people from Stockholm to Gothenburg using an F1 car or a public-transport bus?

Yours

    Dan


P.S. Now that we are moving to multicore, we will see increasing capacity, but latency levelling out or getting worse. This in itself will be a reason to move to non-traditional architectures, where I think event-driven architecture (EDA) is a good candidate for saving the day. My presentation at the upcoming JavaZone will mainly revolve around this issue.

Monday 23 August 2010

Agile is Different

Dear Junior

From time to time I hear, see, or read how Agile is explained as "nothing new" or "the same good old project management, with some twists". My impression is that this is done as an attempt to make Agile less scary and easier to "sell" to management. I think this is a serious mistake. I think that in subtle but fundamental and important ways - at its heart and roots - Agile is different.

The difference is subtle because you cannot observe it directly. Any and all of the things you can see in an Agile-honouring organisation can easily be copied. There is software delivered at short and regular intervals, there are team retrospectives, and there are daily team meetings. All of these practices can be used in a traditionally managed organisation as well, but that does not make the organisation Agile.

To me the fundamental difference is in how you look at humans.

Traditional management uses humans as building parts to build a software-producing machine, or a factory. In this machine, people are the moving parts and they are strung together by a process that dictates their interactions. At the end of the process, software emerges. It is very mechanical. For example, in this world it would be very strange if people arbitrarily started changing the process - then the designed process might just break and who knows what would happen. So that cannot be allowed.

I might be guilty of exaggerating, but the practice of constantly referring to people as "resources" reveals, to me, an outlook on people that I find scary.

The view of humans in Agile is different. In Agile we acknowledge that it is the engagement and skills of people that make things happen. We make it a first-order concern that people should feel motivated and proud. And instead of a mechanical world view, we rely on a more organic view of organisations. If people want to change the process they are not only allowed, they are encouraged to do that — even if we do not know the precise effect it will have on the overall system.

To explain this, I think it is easiest to look at the Agile Manifesto. The first of its values is:
Individuals and interactions over processes and tools
This is not a small thing. Here lies a fundamental difference in how we look at people and organisations. A traditional process is defined by a single person at a single point in time. However, the wisdom and insight of that person at that time is nothing - absolutely nothing - compared to what can be achieved by having each involved person thinking and discussing with their peers, and doing so continuously. And given the choice between an ever-so-well-defined process on the one hand and trusting the wisdom of the crowd on the other, we choose the crowd any day of the week.

It is a little bit like democracy. We could trust a wise and benign emperor. However, we think we get a better result if all citizens engage in an open discussion. We create and change our laws according to that discussion - even if we do not know the result in advance.

In this perspective, Agile is a celebration of the wonderful and mysterious system that emerges from initiatives arising out of interaction between people who care.

This is also what can be seen in the fifth principle of the Agile Manifesto:
Build projects around motivated individuals.
Give them the environment and support they need,
and trust them to get the job done.
We actually trust that people want to work. We acknowledge that we need to give them the proper environment and tools - but also that this will be enough. There is no need to command and control. Things will just happen. It is a leap of faith to let go of control. Organisations with traditional management dare not take that leap. Agile does.

So, when looking at a traditional organisation or project and comparing it with one honouring Agile, you might not see much of a difference. But if you are inside of it, you feel the difference. It is there - at the heart. You feel trusted and empowered. And if you look closely you might see it as well - the small smile on people's faces.

The difference is subtle. But it is fundamental. And it is important.

I am convinced that at its roots and heart - Agile is different.

Yours

Dan

Friday 20 August 2010

Two Observations on Overtime in Traditional Projects

Dear Junior
I have seen quite a few classically managed projects, and through friends and colleagues I have been in contact with even more of them. I would just like to make two observations on the practice of working overtime in different phases of such projects.

* It is very uncommon - in fact, I have neither experienced it nor heard of it - that project management orders overtime for requirements analysts so that they will deliver the full and finished requirements documentation on a specified date.

* It is very common - in fact, I have experienced it several times myself and heard of it numerous times - that project management orders overtime for programmers so that they will deliver the full and finished implementation on a specified date.

From these two observations it is probably possible to deduce lots of interesting conclusions. I will refrain from doing so here and leave it up to you as an interesting mind-game.

Yours

Dan

Wednesday 18 August 2010

DDS Value Object Presentation Video

Dear Junior

At the OWASP AppSec conference I gave a presentation on value objects and Domain Driven Security. It turned into a 35-minute code kata with refactorings, where I used value objects "DDD style" in two ways.

Firstly, I used them to create a design that makes input validation come naturally in the context of Injection Flaws. I used the classic SQL Injection login attack as an example and applied "hard type" value objects in the same way as we have discussed before.
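
The code from the kata is in the video, but the gist is easy to sketch. A minimal, hypothetical Username value object that refuses to be constructed from anything but well-formed data, so an attack string never travels further into the system (the whitelist rule here is an assumption made for this sketch):

    import java.util.regex.Pattern;

    public final class Username {
        // Whitelist: letters and digits, reasonable length - an assumed rule for illustration
        private static final Pattern VALID = Pattern.compile("[a-zA-Z0-9]{1,30}");

        private final String value;

        public Username(String value) {
            if (value == null || !VALID.matcher(value).matches()) {
                throw new IllegalArgumentException("Not a valid username");
            }
            this.value = value;
        }

        public String value() {
            return value;
        }
    }

A login method that takes a Username instead of a String simply cannot be handed the classic ' OR '1'='1 payload; validation has become a property of the type instead of something you must remember to call.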

Secondly, I addressed a Cross-Site Scripting (XSS) scenario. Here I noted that even if XSS is often perceived as a problem of "bad input data", there are cases when that data is perfectly valid. So instead I chose to look at it from an output-encoding perspective - not "bad input validation", but "bad output encoding".

To be able to do something about the output encoding, you can think of the client-side browser as a subsystem of your system. Seen that way, the presentation tier plays the role of the API to that subsystem. Then it becomes obvious that we should enforce proper encoding at the border of that subsystem - i.e. in the API we use for calling it.

Modifying the presentation tier API from "soft type" strings to "hard type" value objects made it obvious where the proper encoding should go - thus solving the XSS problem.
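
Again, the actual refactoring is in the video, but the idea can be sketched: let the presentation tier API accept only a value object that knows how to render itself HTML-encoded, so raw strings cannot slip through to the browser. The class name and the hand-rolled encoding below are just for illustration; a real system would lean on a vetted encoder such as the OWASP Java Encoder:

    public final class HtmlText {
        private final String raw;

        public HtmlText(String raw) {
            this.raw = raw;
        }

        // The only way out is encoded - deliberately minimal encoding for this sketch
        public String asHtml() {
            return raw.replace("&", "&amp;")
                      .replace("<", "&lt;")
                      .replace(">", "&gt;")
                      .replace("\"", "&quot;")
                      .replace("'", "&#x27;");
        }
    }

With a presentation tier signature like render(HtmlText comment) instead of render(String comment), the compiler points out every place where encoding could otherwise have been forgotten.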

The full video coverage from the conference has been released, and my presentation can be found on the OWASP website.
Yours
Dan

Thursday 12 August 2010

Blitz Retrospective

Dear Junior


One of my favourite method tools is a retrospective format I call the "Blitz Retrospective". I have used it in a lot of situations where it has been impossible to do "large" retrospectives. For example, in some situations the teams have been very "retrospective sceptic". In others, there has been a formally appointed "scrum master" who was not very open to suggestions, so suggesting changes to his retrospectives would not achieve much. In yet others, the sprints have been so long that I have not had the patience to wait a few weeks. In all these situations it has been possible to suggest, and do, a "really quick retrospective".

The "Blitz Retrospective" only takes 25 minutes, and its focus is to find *some* change the team think would improve things. The format consists of three parts: the startup, collection of ideas, and vote. 


Startup


If it is the first time, the startup is spent explaining the format so everybody feels comfortable with what will happen. I clarify that the purpose is to find one single idea of improvement that we will try out next week.

A question that always comes up is why we are limited to one idea, and what happens with the rest. Well, if the timespan is only a week, it would be foolish to focus on more than one thing - it would just result in none of them being changed.


As for the rest of the ideas that come up - they will be there in the back of people's minds and might change things implicitly - but they will not be actively in focus. If it is not the first time, I usually give a quick recap of the procedure, but I spend the time on evaluating the last retrospective's "winning idea" - more on that later.

The startup in full should not take more than five minutes. 


Collection of Ideas


The second section is the collection of ideas. To help the team members, I split a whiteboard into three sections labeled "Continue/Increase", "Start/Try", and "Quit".

First section "Continue/Increase" is for things we already do and that support our work. Second section "Start/Try" is for ideas that we think we should benefit from, but we do not do it already, at least not in any significant amount. Last section "Quit" is for things team members think we do, but should stop doing as it does not serve us well, or even harm us.

Then team members are free to write down suggestions in any of these sections. Usually, I let them write on stickies, so that they can write a short statement on the sticky and explain it briefly when they post it on the board. If a short discussion emerges, then fine. However, if it seems to turn into a debate, or risks running long, I cut it short by pointing to the purpose (find one idea) and the procedure (there will be a vote later).


I also try to convince the team to keep the suggestions very concrete and limit them to things they can directly affect. For example, replacing the ventilation system of the building might really improve things, but selecting such improvements is just asking for failure. In other retrospective formats such ideas are really valuable, but the purpose of the Blitz Retrospective is basically to gain acceptance for retrospectives - and then they must make a difference. We want things we actually can do, and that we can do within a week.

The collection of ideas can take ten to fifteen minutes. On occasions when time has not been an issue, I have let it run for longer - but usually most ideas are on the board after ten minutes.


Grooming the List of Suggestions


When getting closer to the end of the collection of ideas, I take a more active role, starting to talk about the stickies in the "Quit" section.


It is extremely valuable to get the "Quit" feedback, and I really encourage people to post such suggestions. However, I am a firm believer that you should focus on "telling the good stories", because the stories you tell (and repeat) will become part of the "team lore" and shape the atmosphere of the team's work. This is the basic idea behind the organisational philosophy "Appreciative Inquiry".


So, telling "bad stories" will basically make people feel bad - but not do much good in the long run. Telling "good stories" will culture a nice team atmosphere as well as enforcing the good habits.


Therefore I take on the role of the "positivity fanatic" and try to rephrase each sticky under "Quit" into a "positive" one. For example, if a quit-note says "stop coming late to meetings", I might suggest a start-note "meetings begin exactly on time - even if people are missing".


Before throwing away the "Quit" not I ask the original poster whether the new notes have captured the original purpose. Surprisingly often there is an additional aspect I had not understood. In the "begin on time" example, the poster might say: "It is not only about starting on time, it is also that every time someone arrive (late) there is a start of chit-chat and small talk that disrupts the meeting". To capture this there will be a second note start-note "keep meeting focus when people arrive mid-meeting".

I also ask if there is any suggestion that the poster thinks is a duplicate of another - if so, they can have it removed. Merging two similar suggestions into one of course gives them a better chance to "win".


Still when "grooming", the board is still open for new ideas, it is just for the team members to step up, post a sticky, and present the idea.

At the end of the collection of ideas there is often a load of ideas, and we only have a few minutes until we must leave the room with one selected idea: time to vote.


Voting

For the voting, I simply rearrange the stickies and let each team member give one vote each to three suggestions. The sticky with the most votes is the winner and is what the team will try as an improvement during the coming week.


As for the rest - things might improve just by having vented them, but they are not in active focus.

Evaluation


The next week, I use the startup section for a quick evaluation. Here I want to separate two questions.


One question is whether we did as we intended at all. Not trying isn't necessarily a failure. Things change quickly from time to time and we might have had valid reasons not to do it. Even "did not have time" is a valid reason - and suggests that we should set aside time.


If the team did try the suggestion, the second question is whether it helped us or not. If it was helpful, we try to keep doing it. If it was not helpful, then it is important to keep in mind that it was an experiment - and experiments should have positive and negative outcomes from time to time.


If the suggestion was tried and found helpful, we should find a way to formalise it. Technical stuff might go into a new check on the build server; working habits might go into the team's "Team Rules" or "Working Agreements" - whatever the name. The important part is that we do something that makes it plausible that we will continue doing this good thing we have just found.

Getting Retrospectives Going, at All

I have found that running a few Blitz Retrospectives often results in an acceptance of having retrospectives at all. In combination with the not-very-scary format ("ok, we can spare 25 min after Friday lunch"), it is a good way to get retrospectives started at all.

Of course, such a brief format misses many points - things a longer and deeper retrospective would catch. But the purpose of the Blitz Retrospective is not to catch those - it is to win acceptance for having retrospectives at all and pave the way for deeper retrospectives down the road.

I have found that the format works well for that purpose.

Yours
   Dan


P.S. Esther Derby and Diana Larsen have written a great book named Agile Retrospectives that is really helpful once you have gotten past that initial resistance, have established retrospectives, and want to elaborate on them, specialise them, or just improve them in general.


P.P.S. Tobias Fors has made the point (in a blog post in Swedish) that feeling safe and secure is fundamental to engaging in a retrospective. He suggests the fundamental rule "Everyone did their best given the conditions", and I have found it helpful to start each retrospective by writing some similar statement on the whiteboard.

Wednesday 4 August 2010

Typewriting is not Storytelling

Dear Junior

If you would observe a famous author during a workday to get insights in how such people work, you might come out with a report along these lines.

"After breakfast, Famous Author sits down at the typewriter. She then punches the keys, using the tip of her fingers, repeatedly. From time to time she picks a new blank page and roll it into the typewriter. She continues until early afternoon, except for a lunch break, whereafter she walks around in town taking pictures of people."

Even the Famous Author herself might describe her workday in a similar way ("always write at least ten pages a day" or "only write when inspired"), describing the structure of the work, or the ceremonies surrounding it.

Correct as these descriptions might be, they totally miss the point. They tell you nothing about weaving a plot, about the evolution of characters, about where to start the story, about how to finish it, or the other things that make the work worth reading.

Unfortunately, I have the feeling that many descriptions of agile practices make the same mistake.

Yours
   Dan