Alistair Cockburn

Re: Taylorism strikes software development

Nah, the essence of Taylorism isn’t scientific management per se, but management of OTHERS using scientific methods.

In Scrum for instance we value both self-organization AND scientific methods. Call it scientific self-management if you want. Definitely not Taylorism where the worker is the study object of the scientist. Rather the worker being the scientist.

Too bad I couldn’t come to Scandev this year.

-by Ola Berg on 4/5/2011 at 5:16 AM

Nice observation, Ola. Thank you for that.
If there wasn’t such intense social pressure on THESE people to adopt what THOSE people like to do, I’d feel more comfortable. However, that THESE/THOSE split is exactly the OTHERS that you capitalized in your first sentence. So I stay worried.
thanks – best, Alistair

-by Alistair on 4/5/2011 at 5:19 AM


I have this uncanny feeling too, but my perspective is more from the Cynefin framework, not so much Taylorism. Although the general word of mouth is that software development is in the complex domain, it seems the pressure is on treating it as if it were in the complicated domain, with all its ‘good practices’.

Maybe the good thing is that we are discovering we have more control over our technical area of expertise, which makes it possible to move parts from the complex domain into the complicated domain, and from the still-chaotic domain into the complex domain.

Or maybe people are just looking for security, for ease of mind, for a silver bullet.

-by Maurice le Rutte on 4/5/2011 at 5:57 AM


In what way is Scrum less “Tayloric” than Kanban?

And what’s bad in identifying software development as consisting of necessary steps (like requirements analysis, design, implementation, review, quality assurance), putting them in a sequence and applying findings from the Theory of Constraints?

What’s bad in evolving the industry onto a level where not all shops have to reinvent the wheel and find their own conventions and rules? Artists and craftsmen of all sorts have long identified “best practices” to reach certain results – which does not preclude creativity but rather gives creativity a focus.

-by Ralf Westphal on 4/5/2011 at 6:08 AM


IMHO, the key difference between Taylorism and Lean/Agile is that Taylorism begins with the assumption that workers are feckless, while Lean/Agile begins with the assumption that workers are intelligent and engaged. From these two different starting points you get two very different approaches. Taylorism and Lean/Agile both use scientific methods, but the difference in the basic motivations leads to vastly different applications of the methods and data.

-by Clifford Penton on 4/5/2011 at 6:53 AM


David Anderson spoke about two waves of lean reception in the West. The first came without a concept of ‘variability’. It is the reception of ‘The Machine That Changed the World’, and in the software world – I think – the reception of Mary and Tom Poppendieck. Reading ‘Lean Software Development’, there is the sense of having read this content, in other terms, in other introductory XP and Scrum books. Kanban and Anderson are different. I think the conceptual cause is his notion of variability, and this could be what your term Taylorism addresses: Kanban as a tool to reduce variability. It is the counterpart of your engagement with craft, creativity and personalities. In other words: the grassroots movement of agile software development has crossed the chasm into the market economy. But – my hope – the market economy, which adopted agile, will slowly shift to become more creative and more human.

-by Jens Himmelreich on 4/5/2011 at 8:38 AM


I thought this was about uncovering better ways of developing software by doing it and helping others do it.

Perhaps there is a lot less uncovering now than there was in the 90s, and more ‘helping others do it.’

I suppose I could view this as Taylorism if I saw these popular practices begin to stifle disruptive innovations in how we work. My instinct says that is not happening at all, but perhaps I’ll look at this differently in another 10 years.

-by ksteff on 4/5/2011 at 10:08 AM


How about “Everyone needs to do it this way until, individually, they find a better way”? In our organization two product groups adopted agile in two very different ways. One saw agile as a major change and managed it as such: they completely changed their organization and the way they worked. The other tacked agile practices onto their existing structure. About 18 months in, the group that completely changed the way they worked and adopted a “standard” way of doing agile appears to be the more advanced organization. Is everything perfect? No, and they are now allowing deviations from the standard to achieve better performance. Is there value in having everyone work one way? Yes. Is it always the right answer? No. The question is: at what points during the evolution of the organization does it help to apply “Taylorism”?

-by Robert Fayle on 4/5/2011 at 10:44 AM


Toyota has an ongoing success pattern in Toyota ‘lean’. It adapts locally, one plant does things differently than the next. It engenders lots of new ideas. It’s founded on ‘go look and see’ – true scientific method. Western ‘lean’ lives in lots of coaches creating better ways, in the office – not so much ‘go look and see’ – not true scientific method.

I think sima-Taylorism is arriving in the encrusted build-up of better ‘in the office’ ideas being applied.

Pattern: Pity the poor dumb newbies trying to explore the mass of new materials – no patterns at first. Add advice. More layers of advice > small divergent hills. > Route finding, compass skills. Ask Taylor for help.

-by vic williams on 4/5/2011 at 12:03 PM


Re: “In this article, he shows the worldview of lean as that of production” I spotted that the quote from Mary and Tom’s introduction was “we believe that software development is similar to product development” – Product development (as in design the car) as opposed to Production (as in build the car).

In case that helps.

-by Bob Corrick on 4/5/2011 at 12:28 PM


If I understand the thrust of this article, it is that we shouldn’t be forcing teams to “do the XP practices” because now we are just the “new boss – same as the old boss”. If I miss the point, please flame me until I learn.
But if that is the point, I think it is important to know what has come before you if you are to move forward. We expect people on our teams to know and be proficient at the defined XP practices if they are going to determine their own way.

-by Jim Kimball on 4/5/2011 at 6:34 PM


I think Cliff hit on it, but I will elaborate on his point. Taylorism consists of 4 points:

1. Replace rule-of-thumb work methods with methods based on a scientific study of the tasks.

2. Scientifically select, train, and develop each worker rather than passively leaving them to train themselves.

3. Cooperate with the workers to ensure that the scientifically developed methods are being followed.

4. Divide work nearly equally between managers and workers, so that the managers apply scientific management principles to planning the work and the workers actually perform the tasks.

I think point #4 is where you see the major divergence between Taylorism and the Lean system devised by Toyota. In TPS, the people who use the standard own it and can change it. Taylor explicitly said there were “managers” who made the standards and “workers” who followed them. This is a big point and I wouldn’t gloss over this difference.

-by Brian Bozzuto on 4/5/2011 at 10:59 PM


Reading your post and the comments were the best 45 minutes of my day :-)
I follow in the steps of Cliff and Brian. Well put.
My addition is the fear of US/THEM and OTHERS. I constantly fight the often-heard argument for applying a process, method or practice: “Because someone told me/us so.”
The decision should be made by me or us, and should be based on a need or wish.

-by Arne Åhlander on 4/6/2011 at 3:34 AM


The possible Taylorism/Kanban mix has always worried me. It’s something we need to keep an eye on. I have concerns that AndersonKanban could be a Trojan horse letting a whole heap of Tayloristic and 20th-century business practices in through the back door.

(Or the front gate, to continue the Troy metaphor.)

I hope the Kanban movement continues to grow in more interesting, creative directions. Perhaps towards the Scrum world.

-by Nigel on 4/6/2011 at 11:35 AM


>I add chapters on craft, creativity and personalities, not as compensation, just as part of the mix. I don’t see others putting those into the mix.

It’s a worry. I don’t see people doing it either. I had a try at doing it myself last month, with a talk on the people side of agile. It was a lonely experience, because I felt that I was saying things that few others were saying.

I wonder if too much “social momentum” has built up behind the process-centric/”Taylorist” view of agile. Consequently, learners don’t ask to learn the craft-centric view (they may not even know it exists!) and (most) teachers don’t teach the craft-centric view – since one feels like a lonely heretic if you do!

Having said that, there is a strong (largely unrecognized) need for learning the craft/people side of things. On the rare occasions when it is presented, I suspect feedback is very positive. (At least, it has been in my recent experience).

Brian's first point re Taylorism, above, is interesting: "Replace rule-of-thumb work methods with methods based on a scientific study of the tasks". IMHO, if scientific study is done, then what you find is exactly "people + craft". (As in your studies, Alistair, in Schon's work from the 80s, and in James Bach's recent talk on the Myths of Rigour.) So, regarding your concern about an overemphasis on analytical processes in agile, perhaps the problem is not really Taylorist. Instead, perhaps it's actually Taylorism done wrong. Taylorism done _right_ starts with science, and when it comes to creating software, the science leads us to favour "people + craft" over "process + analytics".

Finally, in most cultures, we are not very good at knowing what to do about "people + craft". We come through education systems which encourage us to think that expertise results from a one-time acquisition of pre-defined knowledge (very “Shu”). I don’t think our minds, or our corporate cultures, easily embrace the idea that true expertise really comes from a lengthy period of honing one’s craft.

-by John Rusk on 4/6/2011 at 8:21 PM


This reminds me of something that has bothered me for a long time. It has become fashionable to denounce best practices as inherently evil. People cite innumerable examples of best practices stifling change and not taking context into account. At the same time, even the most dogmatic opponents of best practices go to conferences and listen to other people’s accounts of how they solved their problems.

-by Niklas Bjørnerstedt on 4/7/2011 at 7:26 AM


Software development is production, in the economic sense of the word. What it’s not, is mass (aka assembly line) production.

Forms of production range from one-off projects, through job shops performing custom work, to mass and finally to flow production, each successive type more predictable, repetitive, amenable to automation. In my opinion, commercial software development usually falls somewhere between the project and job-shop models depending on how the work is organized. Calling it production does not disparage the work or the craftsmanship of the workers.

Suggesting that analysis and the scientific method are somehow bad in and of themselves is a flat-earth worldview that dead-ends progress. The idea that we’ve already found the “best way” is a symptom of a lack of process analysis, not an excess of analysis.

-by larry white on 4/7/2011 at 7:42 AM


I share your general concerns, but do not recognise your interpretation of Taylorism and Scientific Management (both dysfunctional, at best, imo).

Hence I do not accept your conflation of Taylorism and some aspects of Agile, Lean practices. For me the heart of Agile (and Lean) is Respect for People.
If people want to do “scientific” things like compile metrics, then all well and good (let’s respect them for those choices). And if they want to do things a certain way because others are, or because it’s the norm, then again let’s respect them for that.

Most of society has a profoundly Newtonian / Analytic/ Taylorist view of life in general. I think it’s inevitable then, that many agilists, especially those new to the field, will bring that world-view with them into the workplace.

We must all try to help them (and ourselves) understand these (Newtonian, Analytic, Taylorist) assumptions – and the implication of those assumptions, too. With such understanding, maybe some folks might then seek an alternative (and BTW more “effective”) world-view, such as the Synergistic, or even the Chaordic.

- Bob @FlowchainSensei

-by FlowchainSensei on 4/7/2011 at 7:50 AM


Niklas: best practices aren’t evil, they’re just not always best. What is best for me right now may be worst for you.

-by Tobias Fors on 4/7/2011 at 8:27 AM


Talking about TPS (and not LPD, even though Toyota’s way of doing product development has been oft mentioned), Brian writes:

“In TPS, the people who use the standard own it and can change it.”

I can’t remember reading that anywhere – but then I rarely remember that much. I doubt very much it pans out that way in practice, and my understanding is that this very point is at the core of what separates sociotech-based plants from lean-based ones.

There may be a problem that comes from the outlook behind the adoption of some lean techniques: the danger is getting a “grey” production feel where creativity is stifled. I believe I’ve seen it happen. A problem may lie in the common attitudes of many creative minds and personalities toward structured processes – attitudes shaped by our culture, perhaps like stereotypes.

Thanks for the thoughts.

/mawi

-by mawi on 4/7/2011 at 9:14 AM


For me the crux of what you’re saying is this statement:

Agile views software development as a process for building a consensual theory of the world: with an artifact being a byproduct – an expression – of that theory.

I agree with the statement that agile is a process for building a consensual theory of the world, but to me the artifacts are the MEANS by which the consensus is built, not just a byproduct. The shared focus on creating something of value is the ground from which agile consensus is built, whether in a small collocated team or in a huge distributed one.

From that standpoint, I agree with you that ten years on, even agilists need to make a specific effort to keep refocusing on people rather than getting so caught up in the process. And those of us who identify as agile but then define it in terms of automated test coverage or shortness of stand-up meetings have missed this fundamental point. It seems like the challenge is to take on mastery of new techniques without losing focus on the shared delivery of value which ought to be holding our teams together.

-by Elena on 4/7/2011 at 11:41 AM


Successful Software Development requires 2 components — human creative abilities supported by the necessary talent and knowledge, and a good process for managing / predicting the project.

We can easily observe, pick apart, analyze and reason about the process. But the first part is just magic. Where does creativity come from? Why can some people do it while others can’t? What is it that causes good ideas to come from the mind of good developers? We really just don’t have a clue.

So we obsess about the part we can understand, and over-optimize it. Of course, tremendous benefits have come from paying attention to process; but it can be easy to forget that the main thing is the part we don’t understand — the mysterious ability of the human brain to create things.

-by Charlie Flowers on 4/7/2011 at 6:07 PM


Does not Scrum ask people to work as a team and self-organize? Does not Scrum ask someone to prioritize and the team to respect it? Does not Scrum ask someone to remove impediments? Does not Scrum ask a team to build an increment every sprint? Does not Scrum ask to run Sprints? Does not Scrum require transparency?

Sorry, but if you think the XP guys are forcing people to follow best practices, Scrum does exactly the same.

By the way, Kanban is the least invasive approach. Kanban is just improvement/change management. Kanban has no rules. Kanban respects people, including managers. I use Kanban to map the mess, making it visible, and the team is then empowered to fix it the “baby steps” way.

Just my 2 cents. This post can be classified as FUD. Not much your style, Alistair.

-by Rodrigo Yoshima on 4/8/2011 at 9:56 AM


For me, kanban is a tool to visualize work.

You can use it with Scrum. It’s just a board with sticky notes on it.

All it asks is that you understand what you are doing and that you limit your work in progress.

Yes, some people want to add more process to it than necessary. That’s because most process is unnecessary. But people use it as a crutch when they don’t have control or know what is going on. That crutch, in turn, becomes politicized and deified, until we get unproductive conversations over value creation.

If we all just lost the dogma and looked at good ideas, we’d have a much better discussion.

Jim Benson

-by Jim Benson on 4/8/2011 at 11:32 AM


Hi Jim,

Agreed, but kanban became “Kanban”, with all the associated paraphernalia, and that is where the problem started.

I’m glad that someone actually stated what Taylorism actually is. The TPS builds on Taylorism, sharing 3 out of the 4 points. The last point has more to do with culture than science and was ditched by the Japanese in favor of common sense.

CAS theory moves us beyond Newtonian science, which is deemed inappropriate by some when it comes to software development (software development is not manufacturing). This is the Agile movement’s big break with the past.

Alistair is raising a concern, which I think is a valid one. Let’s not forget that the most common management approach is still Scientific Management, aka Taylorism. This is still the cornerstone of Western management culture.

If Taylorism has remoulded itself and is making its presence felt through the back door, it wouldn’t be the first time. Change is difficult, and it is up to us to make sure that the new thing isn’t a revamping of the old thing in a different guise :)

-by Paul Beckford on 4/8/2011 at 1:21 PM


Any process exists because, although we would love to think otherwise, not everyone can be empowered to make their own decisions. In fact, most people need help deciding how to move forward.

That’s when we come in with processes.

And, like any other area of human knowledge there is always a group of people with a greater truth than others trying to impose their perception of the world.

And, like any other area, not everyone follows.

And we will keep evolving.

-by Pedrob on 4/8/2011 at 9:07 PM


Taylor called for managers to make the decisions regarding the working process and workers to execute management’s edicts. In contrast, Kanban’s pull system empowers workers to make choices as to what work they will do at what time. WIP limits are set by agreement of all stakeholders. If management were to press for artificially high WIP limits, the result would not be increased output, but rather impeded flow.

Although David Anderson calls for scientific analysis of results to determine success of a Kanban effort – this is not an element of Kanban itself. Certainly other tools such as empirical observation and satisfaction surveys could be employed in addition or as alternatives.

-by Chance Gold on 4/9/2011 at 2:32 PM


How did a thoughtful post turn into a red-herring attack on Kanban? If you haven’t read the book or participated in a thoughtful discussion on Kanban with an informed practitioner, then don’t chime in. Alistair’s point is that there is a danger, with all of the practices, that people implement them without understanding the underlying principles – whether it’s Scrum, XP, or Kanban. There is some risk of gurus deciding they “have the answer” and now need to tell other people “the best way to build software” – whether it’s Scrum, XP, or Kanban. We see this every day in businesses.

The fact that a team can get some data out of Kanban and use it to determine where to focus is valuable – not Tayloristic. The Kanban community does not agree with the Poppendiecks’ direct translation of Lean manufacturing approaches to software development. They view that approach as value-destroying and de-humanizing.

Kanban recognizes variability, the notion that people that do the work have the best understanding of what is happening, and the understanding that de-humanizing the environment is contrary to what is best for knowledge workers. The Kanban community suggests understanding the environment and establishing a process of ongoing improvement – using visualization and limited WIP in the context of variability and knowledge work.

Taylorism is destructive. So is TQM. Kanban is not Taylorism. Kanban is not TQM (and we have had this discussion for over a year). Just arbitrarily connecting two related things and then going after the red-herring worst possible case is irresponsible.

Alistair has a valid point to contemplate. When we work with companies (our own, or as trainers and coaches), are we helping them improve software development using appropriate tools from Kanban, Scrum, or XP – or do we dictate the “one true way”? Red-herring and uninformed discussions are of no value in this community.

Dennis

-by Dennis Stevens on 4/10/2011 at 9:52 AM


Hi Dennis,

No one is attacking Kanban. Jim mentioned kanban and I responded in that context. The same response is equally applicable to XP, Scrum or any codified set of practices that are implemented in a way that emphasises process over people.

I didn’t mention TQM in my response :)

-by Paul Beckford on 4/11/2011 at 3:08 AM


Oops… facilitator intervention here… please stop the name-calling and accusations. I will delete any such remarks. Keep it clean, gentlemen; just discuss the topic and not each other. Thanks — Alistair.

-by Alistair on 4/11/2011 at 12:46 PM


One missing ingredient in this debate: what do industrial product developers do?

In my review of that literature I have found very little support for concepts like statistical process control applied to the development of novel products for manufacture.

Would a manufacturing professional view this debate as perhaps a little naive? Product development in the atom-space is just as risky, uncertain, and iterative as software development in the bit-space.

Don Reinertsen has covered this ground in great depth and understands both bits and atoms.

Charlie

-by Charles Betz on 4/11/2011 at 1:45 PM


Not sure why that double posted, apologies.

-by Charles Betz on 4/11/2011 at 1:58 PM


Charlie – apologies for the double post on my side. It happens sometimes, and I haven’t yet worked out why. Alistair

-by Alistair on 4/11/2011 at 4:15 PM


My fear with respect to lean has always been implementations based on misunderstandings – for instance, that “Standard Process” means the only (and best) way, instead of the way we work until we invent a better way of working. The focus on flow may result in implementations that institutionalize handoffs (which Mary and Tom Poppendieck actually identify as one of the sources of waste in their book). Managing flow and queues are tools to improve cycle time so we can get faster feedback – but we must also remember to have collaborative cross-functional teams that move together, as pointed out in “The New New Product Development Game” article.
I second Charlie that Don Reinertsen’s work has important points for both software development and product development.
Bjorn

-by B Rasmusson on 4/12/2011 at 9:48 AM


Let me admit first of all that I know nothing about Taylorism, Kanban, and a slew of other topics covered in Alistair’s blog post and the comments on it. But with 10 years as a BA in a waterfall environment, and having just been trained and transitioned into Agile for a Fortune 50 company 3 months ago, one thing strikes me as universal in both this discussion and common practice: methodologies have very little value apart from the quality of the team that adopts them. If a team has done well in waterfall and then adopts a lean methodology of one kind or another, the very fact that they’ve taken the initiative to make the change is an indication that they are likely to succeed. If a team has done poorly and adopts a new methodology and succeeds, I find that’s usually the result of the engagement of team members and management that was absent prior to the change more than it is due to the methodology itself. As often as not, the team falls apart again as soon as the stakeholders pronounce the transition/migration “done.”

From the “worker bees” to the highest level stakeholders, methodologies consistently fall apart on one point: no one methodology can be universally applied. It is the nature of the beast (human, manager, developer – whatever beast you pick): we wait and hope for a “once and done” approach that will guide us through waters made ever murkier by the pace of change and the almost incomprehensible possibilities that technologies offer. If that weren’t enough, most methodologies fail colossally in accounting for one very interesting paradox: we hope for the once and done definitions of a process that will guide us through the unknown; but the truly productive worker – of any stripe – is never satisfied in a static state.

If it’s true that Taylorism starts with the assumption that workers are feckless and Agile starts with the assumption that workers are intelligent and engaged, then they’re both going to fall apart on some point – because both assumptions fail at some point. Give me a team of 10 people and a week to track them, and I’ll give you a range of 20 to 30 degrees between feckless and engaged/productive – with some members fully running the gamut in that short time!

Granted, all methodologies have to start somewhere. I’m feeling pretty partial to Agile myself – having seen more productive discussion come out of a co-located Agile team in 10 minutes than I have seen come from some waterfall teams in 10 months. That said, “best” of anything is always, ALWAYS going to be a moving target. If it weren’t simply the complexities of people that make that true, then the compounding of complexities by N number of times (i.e., number of team members) makes it true.

As a general rule, people rise (or sink) to a level of expectation held by a person in authority over them. If a manager is visionary, engaged, communicative, and expects his workers to be the same, he can generally get the same out of his team (or keep changing up team members until he does), regardless of the methodology. I know there are scientific measurements of this approach and that approach that offer hard evidence for the efficacy of processes. But I recently heard a talk by a former IBM executive who has published on a methodology he devised where he extolled the virtues of his approach for an hour – only to say at the end that the astounding results he achieved in his first project with that approach have never been replicated.

If Alistair’s point is that it’s distressing to see the pendulum swinging back in a direction opposite the one that took us to lean, there is this consolation in it: it is a luxury of creative, productive people to worry about how to get work done better. Whether it is a cultural phenomenon, the social degradation of recent decades or the historical manner of being, there are way too many workers out there who care neither for the end product nor how it’s produced. Much less do they care for the angst over how the future products are conceptualized and realized. The best tend to stay the best and the others tend to stay the others regardless of what methodology they’re using at any given time. Everyone pat yourselves on the back. Just a willingness to engage in the discussion (and the passion to be heated about it) rates you in the first group!

-by Karen C. on 4/13/2011 at 6:52 PM


If you replace “manager” and “worker” with “architect” and “developer”, I think the lean/Taylorist caution does apply to software. A lot of development groups are having solutions imposed by architects that stand apart from the day to day production process. Unfortunately, the architects often view development as low-level commodity work that they’ve found a way to transcend and rise above.

-by Michael D. on 4/15/2011 at 9:34 AM


Agile is not evolving when it embraces Taylorism, it is devolving.

Mike Rollings
Research VP – Gartner

Here are several related posts:
Taylorism is a Pox upon Agile
http://blogs.gartner.com/mike-rollings/2011/04/11/taylorism-a-pox-upon-agile/

Replacing Taylorism as our Management Doctrine
http://blogs.gartner.com/mike-rollings/2011/04/18/replacing-taylorism-as-our-management-doctrine/

Context breaks Taylor’s hold on strategy
http://blogs.gartner.com/mike-rollings/2011/04/26/context-breaks-taylors-hold-on-strategy/

-by Mike Rollings on 4/27/2011 at 9:41 AM

Hi, Mike! Great post – thanks for dropping by and sharing those links/thoughts. Alistair

Late to the conversation…

Tayloristic thinking is about controlling work and tasks via the process, not via the people. People are components that are pluggable into the process. Measurement is via the output of the process.

It is human nature that, since we can’t totally control all aspects of the process, i.e., the people, we want that lack of control (lack of trust) to be compensated for by some method. That ends up being more processes, standards, reporting, and so on. When one mistake occurs we add more process so it doesn’t happen again, instead of recognizing it for what it is: a mistake. Now everyone pays the price with more process, policy, and bureaucracy.

The ongoing fight is to balance process with people. Creative, innovative people leave when process becomes too heavy. People at the team level need to decide what process they employ, and how much. I cringe at the statement “that won’t work for how we do agile”. Don’t fit people into processes; processes need to support who people are and how they work. That is harder work to do.

Now that agile/lean team development is mainstream, it is in the cross hairs of people whose responsibility it is to ensure ordered processes and uninterrupted, smooth-running organizations. The notion of a chaordic team is contradictory to their mission.

The needed change is for the people in control to become more trusting. The larger the organization, the harder that can be. If people are not held accountable for their actions, it is impossible.

If a team has a miss, then discuss the real reason behind it; don’t automatically add process. What was the root cause? Often the root cause is what people do not want to see, because it is not coherent with the story they tell themselves and others. It affects how the organization operates and thinks.

-by Riley Horan on 6/24/2011 at 1:03 PM

Nice comment, Riley. Thanks. (Alistair)

Don’t blame Taylor. Reductionism is in our genes. We have an endless need to simplify complex processes or situations to reach an “understanding”, misleading us to think that we are in control. The problems arise when we start to organize our business using the template from a task break-down of a software development process (losing knowledge in each hand-off as well as introducing waste). For this, we can blame management for embracing scientific management. Lean and Agile are tools to make a shift from all this. Speaking Taylorian with management is a good way to get their attention and things going.

-by micke on 6/29/2011 at 8:11 AM

Thanks, Micke. I just want to say at this point that I’m enjoying the thoughts you all are offering in these posts, so thank you for that. Alistair

It occurs to me, re Taylorism, that I’ve been on jobs where I had to change my way of working to a much less efficient style because I was setting a bad example – meaning that it became apparent that coworkers who thought they understood what I was doing and why, were going to **** up royally if they kept imitating me (imperfectly). Often this was because they would “steal” part of a method, but not, say, safety checks that seemed to them to be mostly unnecessary. I’m sure most people find themselves in this situation from time to time and make the sensible choice: namely, to adopt the less efficient method and blend in.

-by on 10/20/2011 at 10:29 PM

Nice comment, thanks … but please be so kind and sign your actual name – this is a friendly site, and is made more friendly when people use their names, thanks. Alistair

I think at least part of the problem here is the inability (or disinterest) of “most” people to move beyond shu-level practices, which are (and should be) typically presented in a Taylorist model (this is the way, the right way, and the only way).

The understanding that “there are many paths to the top of the mountain, some easier, some harder” requires people to climb the damn mountain more than once. How often do you meet people in our business who actually have?

-by Jonathan House on 4/9/2014 at 5:51 PM

right.

QuAnswer

Dov said, “Sometimes there is no question seeking an answer and there is no answer trying to be found, there’s just a QuAnswer that happens.”

hahah!

“QuAnswer” [kwanser] thx Dov Tsal Sela: when there was no Q, hence there was no A for it; but it just appeared in the dialog. @malk_zameth saw it appear as a QuAnswer itself. Today at La Pere Tranquille, Paris :)

what constitutes a theory

In a quest to present a theory of project management and software development, I inevitably run into the question of how I would know whether I was, indeed, holding a “theory” in my hands at all.

Some short form answers are:

Reading
http://cmsdev.aom.org/uploadedFiles/Publications/AMR/WhettenWhatconstitutes.pdf
http://www.geocities.ws/perezbarreto1/sapge/t1/teoria.pdf

...building up for the future….

Taylorism strikes software development

Although we software people have spurned Taylorism loudly ever since Kent Beck introduced Extreme Programming, we are nonetheless moving heavily in that direction. I am sitting here at the Scandinavian Developer Conference, and it strikes me more and more forcibly with each passing month and with each talk I listen to.

Oddly enough, Extreme Programming and the Lean approaches to software development are themselves the groups who have moved the industry back toward Taylorism. Just as saying “Do No Evil” doesn’t actually make Google’s actions non-evil, so publicly reviling Taylor does not make XP/Lean recommendations less Taylorist.

I myself participate in this shift as I include lean flow management, queueing theory, Yesterday’s Weather and the like in my lectures and classes .. and worry the entire time as I do so. I add chapters on craft, creativity and personalities, not as compensation, just as part of the mix. I don’t see others putting those into the mix.

The trigger for me writing this little note was sitting and watching yet another person describe how the introduction of visual flow (kanban-style views) and Work-in-Progress limits allows teams to develop metrics and post graphs of productivity, variance, and improvement progress on the wall for management to see.

I don’t mind these things, at some level — it is called “getting better”, and there are indeed ways of getting better. I find it ironic, however, that the very cluster of people who started and are driving agile development as the center of self-organization and craft are pushing the Taylorist agenda of scientific management.

Dave West is so far the only voice that has identified this trend and spoken out against it, in his InfoQ article “Lean and Agile: Marriage Made in Heaven or Oxymoron?”. In that article, he shows the worldview of lean as that of production: “Mary and Tom Poppendieck take lean industrial practices to a new level” (citing Jim Highsmith’s preface to Lean Software Development). He puts the agile worldview as that of theory building, a very different sort of activity, citing my Agile Software Development book, and in particular the reprint of Peter Naur’s 1985 article on the subject.

Dave writes:

Lean views software development as a process for moving from conception to product. It wants to optimize that process, albeit in a radically different way and with radically different values than traditional (e.g. Taylorism) attempts at optimization.

Agile views software development as a process for building a consensual theory of the world: with an artifact being a byproduct – an expression – of that theory.

Except, not all agilists view it that way.

I am watching the pressure on programmers to adopt the XP practices as mandatory indicators of professionalism: Are you programming in pairs? Why not? Do you write your tests before you write your code? Why not? Do you have continuous integration in place? Why not?

It is not that these are bad practices. Quite the contrary.

What makes me nervous is the assumption that we have now found the Best way of working. That is the Taylorist stance: We (some “we”) will find the Best way of working, and You will Work That Way.

For myself, I am not sure where to draw the line between, “This is a great way of working,” and “Everyone has to work this way.”

This is a blog entry scratched out quickly during an outtake from a conference. I don’t have time to develop the ideas any more clearly. What are your views?

Re: Quotes on leadership

Particularly like the Maya Angelou one – and secretly the Katharine Hepburn :)

-by Flowmotion on 7/2/2012 at 1:05 PM


Particularly like the Katharine Hepburn one!

Is there a strategy whereby one can “deliberately” and “consciously” break rules in a safe manner, assuming of course that person is not habitually a rule breaker?

-by Deepak Srinivasan on 3/28/2014 at 2:38 PM

of course, Deepak, I teach it in my courses :). Alistair

Crystal methodologies

Crystal is a family of human-powered, adaptive, ultralight, “stretch-to-fit” software development methodologies.

  • “Human-powered” means that the focus is on achieving project success through enhancing the work of the people involved (other methodologies might be process-centric, or architecture-centric, or tool-centric, but Crystal is people-centric).
  • “Ultralight” means that for whatever the project size and priorities, a Crystal-family methodology for the project will work to reduce the paperwork, overhead and bureaucracy to the least that is practical for the parameters of that project.
  • “Stretch-to-fit” means that you start with something just smaller than you think you need, and grow it just enough to get it the right size for you (stretching is easier, safer and more efficient than cutting away).

Crystal is non-jealous, meaning that a Crystal methodology permits substitution of similar elements from other methodologies.

What to expect from this site. Methodologies are big things, and Crystal is a family of them. Although there is a growing book collection about the Crystal methodology family, not everything of value can be collected into that book collection (some people had the audacity to publish good books before Crystal was conceived!). Further, Crystal is non-jealous, meaning that you might wish to substitute in something that wasn’t given a prime spot in the family. Therefore…

This site is organized as a reading list. The pages are organized as a framework for studying and building the skills of the team members, and enhancing the effectiveness of the team. On the various pages, you might find the roles involved in projects, and books or articles that we feel can improve the skills of people working those roles. Or, you might find a book or article describing a particular technique or work product. Or, you might find a discussion of a sample team structure to use on a project of a particular size and sort, or a technique for the team to use to enhance its effectiveness.

Crystal is evolving in tandem with our understanding of the principles of lightweight software development processes and people-centric project management. It aligns with the manifesto for software development. See also the talk “Software Development as a Cooperative Game”.

Where to go next:

About methodologies
Methodology_per_project

Just-in-time_methodology_construction
Characterizing_people_as_non-linear,_first-order_components_in_software_development
Balancing_lightness_with_sufficiency
Cooperative_game_manifesto_for_software_development
Software_development_as_community_poetry_writing
Software_development_as_a_cooperative_game


Presentations about Crystal by other people

Paolo Farina: http://www.slideshare.net/PaoloFarina/semi-23282723
Abid Quereshi: http://www.slideshare.net/skillsmatter/crystal-3285696
“Mr. Jaba”: http://www.slideshare.net/MrJaba/crystal-agile
from Mauch, Bassuday, van Zyl, le Roux (a class assignment, well done:) http://www.slideshare.net/bassuday/crystal-methodology

Disciplined Learning

How “Learn Early, Learn Often” Takes Us Beyond Risk Reduction

Dr. Alistair Cockburn
Humans and Technology
Technical Report TR-2013.01
Feb. 2013
©2013 Alistair Cockburn


alias: Learn Early, Learn Often
alias: DAKA2 (and see also DAKA)

Apply Figure 1 and you have everything, including Lean Startups, Collaborative Agile Marketing, feature thinning, “Trim the Tail”, Deliberate Discovery and risk-based project management [ER, GW, AC-ft, AC-ttt, DN, W-rm].


Figure 1: Disciplined knowledge acquisition during product development

Although Figure 1 shows the desired path of learning, most development and marketing looks more like Figure 2:


Figure 2: The typical “late-learning” sequence of product development.

To get acquainted with the idea, let’s first walk through Figure 2, the typical “late-learning” sequence:

The project starts with a great idea, high hopes, and no consideration that any of the ideas might be wrong. The charter is drawn up, the team(s) assembled, and people start working energetically. They work hard, talking with each other, learning about their assignment, about the design, about each other.

They need to learn four things:
1. How to work together, and indeed, whether these people can get this job done at all.
2. Where their technical ideas are flawed.
3. How much it will cost to develop. And finally:
4. Whether they are even building the right thing.

When do they learn these four things?

#1 they learn along the way. At some point they learn whether the people they assembled have the skills needed to get the job done or they got the staffing wrong. They also learn how to work together, becoming a “jelled” or high-performing team [TdM], if they are lucky.
#2 they all-too-often learn toward the end, when they integrate the pieces designed by the different teams, and learn what mistaken assumptions they made about each others’ work (hence the famous quip: “the first 90% of the project takes the first 90% of the time, the last 10% of the project takes the next 90% of the time”).
#3 they learn at the end, when all the work is done. There are typically so many surprises at the end that no cost estimate is meaningful until the final bill is turned in.
#4 they learn when it is all over, i.e., late: when the customer does or does not buy the product. Unfortunately this, the most important thing to learn, is learned last.

My purpose in this article is to describe how some project teams manage to avoid the bulk of these errors (late surprises catch even the best teams). These project teams enjoy not only greater success, but something incredible and rare:

the ability for the business to decide whether to deliver:
- earlier with small things trimmed out in order to hit a desired delivery moment, or
- later, with those small things improved for a more refined, higher quality experience.

No established method, agile or otherwise, allows that choice at the moment.

This disciplined knowledge acquisition approach is the evolution of risk management. Where risk management approaches ask

“What could prevent us from delivering?”

disciplined knowledge acquisition asks
“How can we get the best possible results from our efforts?”

Let’s turn our attention back to Figure 1, repeated below:


Figure 1 (repeated): Disciplined knowledge acquisition during product development

The magic trick is to apply the “broken” strategy in very small stages. The problem is not that learning doesn’t happen in the ordinary approach, the problem is that the learning comes at integration and again at sales, i.e., late. These are often the first moments when the team members are forced to put their ideas together, and the first time that the customer, user or buyer gets to see the new system.

What we will do is to apply the concept of “breaking things” at very small scales, and take them in the preferred order:

Q1. How can we learn what should get built?
Q2. How can we learn how much it will cost (time, money, people)?
Q3. How can we accelerate the team learning how to work together?
Q4. How soon can we correct the mistaken assumptions in the design?

The result is an amazingly effective development plan, quite different from what is recommended in standard agile development, and quite disorienting for the newcomer.

The discussion is simpler if we look at these in reverse order:

Q4. Correct mistaken design decisions

This category includes recovering from all technical errors, such as

  • choosing technology that doesn’t work as advertised,
  • the inevitable omissions and mistakes in design, and
  • mistakes due to people not talking to each other, causing out of date and mistaken assumptions about each others’ work.

The way to do this is the simple, if sometimes difficult practice of “micro-incremental” development:

  • Integrate early, integrate often

The best developers integrate every hour, more often if possible. (Less expert developers explain why they can’t integrate that often; the expert developers find ways to integrate anyway.)

When designing, people make decisions often at the rate of several per minute or one every several minutes (note: these are approximate, not measured rates). The mistakes in those decisions are based on mistakes in their memories, in their conversations, in their thoughts. Left untested, the mistakes build on each other, becoming increasingly more expensive to correct. Thus, finding mistakes sooner is important.

The “walking skeleton” strategy [AC-ws] is the start of the micro-incremental sequence. The basic architecture is constructed supporting the smallest connections through the architectural elements, allowing the team to discover the first round of surprises in the technologies they are using, and celebrate success once they have the pieces working (also part of Q3, “Learn to work together”).

Once the system is thinly connected, each team adds onto the system, integrating every hour or so, and possibly refactoring the skeleton itself. Since they are integrating so frequently, they will discover their mistaken assumptions and misunderstood communications much more quickly. As a side benefit, they are less likely to change the same part of the design at the same time, and so they do not need to check out and hold onto separate copies of the design, making integration easier, faster, and less error prone.
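
To make the walking-skeleton idea concrete, here is a minimal sketch (mine, not from the report) of such a thin thread for a hypothetical web-plus-database product, using only the Python standard library. It does almost nothing useful; its only job is to prove that the layers connect, so that later increments can be hung on it and re-integrated every hour or so.

    """Walking-skeleton sketch: one thin thread through a hypothetical
    web + database architecture. Illustrative only -- the point is that
    every layer is touched once, not the specific technologies chosen."""
    import json
    import sqlite3
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Thinnest possible data layer: one table, one row.
    DB = sqlite3.connect(":memory:", check_same_thread=False)
    DB.execute("CREATE TABLE greetings (id INTEGER PRIMARY KEY, text TEXT)")
    DB.execute("INSERT INTO greetings (text) VALUES ('hello from the skeleton')")
    DB.commit()

    class SkeletonHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Thinnest possible business logic: fetch one row, return it as JSON.
            row = DB.execute("SELECT text FROM greetings LIMIT 1").fetchone()
            body = json.dumps({"greeting": row[0]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Each later increment adds flesh to this skeleton and integrates again.
        HTTPServer(("localhost", 8000), SkeletonHandler).serve_forever()

The specific technologies are beside the point; what matters is that the team gets the architectural layers talking to each other in the first days, so integration surprises appear while they are still cheap to fix.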

Micro-incremental development with frequent integration does not address a second category of technical mistakes: system performance, including load time, speed and scaling. These are addressed through early use of the walking skeleton and

  • Spikes [WC-c2spike]

A spike is a small, disposable piece of work created to explicitly address the question, “Will this possibly work?”

The difference between a spike and ordinary incremental development is that ordinary incremental development is conducted using full production conventions, with the assumption that the work will be used in the final product; whereas the spike work will absolutely not be used in the final product, is throwaway work, and therefore will be done in the most rapid and effective manner possible with the sole purpose of addressing the question at hand.
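
As an illustration only (the report contains no code), a spike can be as small as the throwaway Python script below, written to answer one narrow, hypothetical question: can an embedded SQLite store absorb roughly 100,000 readings quickly enough for our import batch? The schema and volume are invented for the example.

    """Throwaway spike: will SQLite absorb ~100k rows fast enough for one batch?
    No error handling, no production conventions -- this code exists only to
    answer the question and will be deleted afterwards."""
    import sqlite3
    import time

    ROWS = 100_000  # hypothetical batch size we expect in production

    db = sqlite3.connect(":memory:")  # in-memory is enough to answer the question
    db.execute("CREATE TABLE readings (sensor_id INTEGER, value REAL)")

    start = time.perf_counter()
    db.executemany(
        "INSERT INTO readings VALUES (?, ?)",
        ((i % 50, i * 0.1) for i in range(ROWS)),
    )
    db.commit()
    elapsed = time.perf_counter() - start

    print(f"Inserted {ROWS} rows in {elapsed:.2f}s ({ROWS / elapsed:,.0f} rows/sec)")

Once the number is in hand, the question is answered and the script is thrown away.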

With a walking skeleton at hand and spike solution work available as a strategy, questions about the loading, throughput, and scaled performance of new technology and of the final, integrated system can be moved forward in the timeline so they can be addressed early(-ier) in the project, and not left to the very end.

There is a final category of technical surprises, those that might seem impossible to move forward in the schedule. An example is the final conversion of the database from one form to another.

The strategy to use here is this:

  • Story splitting [AC-abws]

Split a story into a spike piece and a production piece. Move the spike piece forward in the schedule in order to learn what problems lie underneath it and how to deliver it effectively; leave the actual implementation late in the schedule, but with the added confidence of the information that the spike delivered.

Here is an example of story splitting:

Our website was going to be converted from Wikipedia markup format to Textile format. Part-way through the project, we suddenly discovered that none of us was expert at language manipulation and really knew how to do this conversion. We stopped other work in order to set into place a trusted way to do that conversion. It took less than two hours to locate an open-source framework, install it, try it out, modify it, and convert a few sample pages of the site. With that in place, we went back to our regularly scheduled work, knowing that when it came time to convert the database, it would be a straightforward task.
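
The article does not name the open-source framework the team found, so purely as an illustration, the spike half of that split might have looked something like the sketch below, here assuming the pandoc command-line converter and a hypothetical folder of sample pages.

    """Illustrative spike for the markup-conversion story: can we reliably
    convert a few sample wiki pages from MediaWiki markup to Textile?
    (pandoc is an assumed stand-in for whatever framework the team used.)"""
    import pathlib
    import subprocess

    SAMPLE_PAGES = pathlib.Path("samples").glob("*.wiki")  # hypothetical sample set

    for page in SAMPLE_PAGES:
        out = page.with_suffix(".textile")
        # Any failure or ugly output here is exactly the surprise the spike
        # exists to surface early, while there is still time to react.
        subprocess.run(
            ["pandoc", "-f", "mediawiki", "-t", "textile", str(page), "-o", str(out)],
            check=True,
        )
        print(f"converted {page} -> {out}")

Running it over a handful of sample pages answers the spike's question; the production conversion of the whole site stays late in the schedule, now backed by the confidence the spike delivered.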

Summarizing, the three strategies for disciplined learning on technical issues are:

  • Micro-incremental development
  • Spikes
  • Story splitting

I will introduce one, final, slight adjustment to these strategies in the section, “Creating a plan.”

Q3. Correct staffing and learn to work together

This category of learning includes the social aspects, from hiring to creating what Tom DeMarco and Tim Lister called a “jelled team” [TdM-jt].

Ordinary incremental development and increment delivery help to flush out as quickly as possible whether the people on the team have the skills they advertised when getting hired.

However, failure to deliver is sometimes due not to the people being wrong for the assignment, but to their not having learned how to work together. This is remedied through these strategies:

  • “Early victory” [AC-ev]
  • “Walking Skeleton” [AC-ws]
  • “Simplest first, worst second” [AC-sfws]

The first strategy is based on the work of sociologists [KW], showing that achieving results helps people come to trust each other more, raises morale and helps them perform better.

The development strategy “Walking Skeleton” calls for constructing a thin thread through the system under design. It delivers an early, real victory to the team and to the sponsors. Originally a technical design strategy, the Walking Skeleton idea can be applied to developing and deploying new organizational processes, introducing products into work flows, building technical architecture, and even developing individual system features.

The third strategy, “simplest-first, worst second” builds on “early victory” in a way contrary to the current common recommendation in the agile development world. The advice there, to build “highest business value first”, makes good sense once the team is functioning well, social risks have been reduced, and the team is capable and confident of being able to deliver whatever is of the highest business value (delivering something of high business value is, indeed, a victory the team can celebrate).

However, many conversations need to take place before the team has ironed out the types of conflicts and skill gaps that are likely to be present. For this reason, a useful strategy is to build something real, but very simple, so that you can encounter both social and technical surprises early, in good time to fix them.

Q2. Learn how much it will cost

The most difficult set of strategies to apply surrounds discovering how much a development project will cost. These strategies are difficult to apply because they cost money and time, deliver nothing but knowledge, and so test the patience of everyone involved, from the sponsor to the developers. As with all strategies, there are times when these are not needed, and others when they are badly needed.

  • “Core samples”
  • “Microcosm” [AC-rrp]

The following story is one I heard Tim Lister give at a conference:

A man wanting a pool built in his back yard calls in three contractors to present estimates. The third contractor, instead of presenting an estimate, tells the homeowner he will need to drill a core sample in the ground, and will charge the man for that.
The homeowner complains, saying that the first two contractors didn’t charge him for core sampling.
The contractor responds that he has no idea how the first two contractors could submit a bid, since they don’t know what sort of rock layers lie under the lawn, and that he couldn’t possibly put in a bid without having that information.
The homeowner is equally puzzled, but now comfortable with the third contractor, whom he hires for the work.

One can do the same with a development project: identify and isolate particular parts of the system whose development is not completely obvious, and develop very small elements of the system within those areas, in order to understand how easy or difficult the work will really be and to identify what sorts of surprises might lurk below the surface. Carefully selecting such “core samples” allows the project team to develop a more reliable cost, time, and resource estimate for the project.

Core sampling is the miniature version of the more general “Microcosm” strategy, in which a mini-project is run for the sole purpose of establishing a sound estimate. A full Microcosm project can be set up to test the productivity of a new development team (think off-shoring, in particular), as well as to test the learning speed of staff with new technologies, and to benchmark the productivity of expert versus ordinary or new developers.

Whereas a core sample effort is intended to take hours to days, a full Microcosm project may take weeks to carry out, and should therefore only be used for large development efforts.

Q1. Learn what should get built

Finally, we get to the most important question needing to be answered: Will people like, buy and use what we’re building? Normally, this question gets answered when it is all too late. Recently, however, strategies have come into fairly common usage that move this learning process forward. The strategies are fairly simple, but as with all the others, they require discipline, patience, and a willingness to change based on what is learned.

  • Paper prototyping
  • Ambassador user
  • Early delivery
  • Empty delivery and Manual Delivery [ER]

Paper prototyping [W-pp] and related strategies involve nothing more complicated than putting a mockup of the product into the hands of the consumer, who reacts to these early design thoughts. Prepared at low cost, early in the development cycle, these prototypes allow the development team to change their minds about how to proceed.

An “ambassador user” is a friendly user to whom the team can deliver an incomplete but growing product (using, of course, incremental development). This user can usually break the system within moments, or give valuable feedback from his or her limited perspective. The difference between the “ambassador user” and “paper prototyping” is that the ambassador user is encountering the actual system as it grows, not a mockup of the system.

“Early delivery” is a full deployment of the system, but with reduced capabilities. The intention is to learn, first of all, what is incorrect with the product as envisioned, but possibly more significantly, how the presence of the product in its final environment will change the requirements for what should be built in the first place. This strategy is built on the recognition that once people start using a system, their habits and needs change, often in unpredictable ways. Delivering a thinned version of the system early allows the development team to gather new input and adjust the priorities on what should be developed.

The most interesting strategies to emerge in the last decade are Empty and Manual Delivery (my terms for them), both legitimized by the “lean startup” movement [ER].

“Empty Delivery” is epitomized by the “parrot cage” story [BBC]. In brief:

The company, wanting to find a new market, saw that there were many searches for parrot cages, but not many paid ads supplying them. So they created an empty site that advertised parrot cages, but had no delivery capability.

Over time they adjusted the web site to fit demand, introduced suppliers, and eventually moved into the business fully.

It is practical to use the “Empty Delivery” strategy for online products, where, initially, all you care about is whether anyone clicks on a link or accesses a feature. Measuring these clicks, you can develop those features that are drawing the most attention, and evolve the system in the direction of maximum draw.
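
As a minimal sketch of the measurement involved, the following counts clicks per advertised feature from a plain log file; the file name and log format here are illustrative assumptions, not anything the strategy prescribes:

    from collections import Counter

    def feature_draw(log_path: str) -> list[tuple[str, int]]:
        """Return (feature, click-count) pairs, most-clicked first."""
        clicks = Counter()
        with open(log_path, encoding="utf-8") as log:
            for line in log:
                # Assumed log format: "<timestamp> <visitor-id> <feature-name>"
                parts = line.split()
                if len(parts) == 3:
                    clicks[parts[2]] += 1
        return clicks.most_common()

    if __name__ == "__main__":
        for feature, count in feature_draw("clicks.log"):
            print(f"{feature}: {count}")

However simple, a tally like this is all the “empty” site needs in order to point development toward the features with the most draw.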

“Manual Delivery” is described for several companies in Eric Ries’ book, The Lean Startup. Companies initially spend far more money than they could eventually afford to, doing the work, even delivering products, manually, for the simple reason that manual procedures can be set up and changed at very little cost. Delivering manually, the designers can change the product offering on every purchase, evolving toward what the customer base indicates is really desired.

All of these strategies, while requiring inventiveness and discipline, allow the development team to move this most important category of learning far forward in the development cycle, shifting the question from

Will people like, buy and use what we’re building?

to

What changes should we make so that they do?

Creating a plan

In the light of these disciplined knowledge acquisition strategies, the creation of a project plan is rather different than before.

The old-style, and still frequently used, project plan is a list of the tasks to be performed to construct the system. An effective, rapid and low-cost planning method of this type is “Blitz Planning” [AC-bp]. This conventional project plan still provides a good starting point, serving to peek ahead at what is involved in developing the system and giving an approximate size and difficulty for the effort.

The more recent, agile-style project plan is simply a list of all the features or user stories [W-us] to be built in a single list, usually with the highest business value item placed at the top of the list (see above for the reasons this may not be the best order to place them in).

The knowledge-acquisition approach, by contrast, starts with a brainstorming of the four categories of knowledge needing to be acquired, along with the starting set of features to be delivered, making five categories of possible work efforts. These are artfully interleaved into a sequence of work assignments designed to reduce risk, deliver crucial information, and develop product capability in an “optimal” way.

You don’t have to be very alert as a reader to see the words “artful” and “optimal” there. The quality of the plan is quite plainly sensitive to the ability of the planners to brainstorm, list, identify and sort the possible risks, the learning opportunities, and the possibilities for income in their future. In everyday life, the project plan will need to be updated periodically as lessons are learned and new risks and opportunities spotted. Even with the best intentions and work, the team is quite likely to get surprised late in the game by something they had not considered or detected.
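
Purely to make the interleaving concrete, here is one way such a plan might be sketched as data; the Q1 to Q4 categories follow the four knowledge areas of this article, and the item names are invented for illustration:

    from dataclasses import dataclass

    @dataclass
    class WorkItem:
        name: str
        category: str  # "Q1".."Q4" for knowledge acquisition, or "feature"

    # Knowledge-building items interleaved with ordinary feature work,
    # roughly ordered to surface the nastiest unknowns first.
    plan = [
        WorkItem("Walking Skeleton: thin end-to-end slice", "Q3"),
        WorkItem("Core sample: the one integration nobody understands", "Q2"),
        WorkItem("Simplest feature first, built for real", "Q3"),
        WorkItem("Paper prototype in front of an ambassador user", "Q1"),
        WorkItem("Feature: main value part only, tail deferred", "feature"),
        WorkItem("Spike: the technical idea we are least sure of", "Q4"),
    ]

    if __name__ == "__main__":
        for item in plan:
            print(f"[{item.category}] {item.name}")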

I mentioned at the end of Q4, above, that there is one twist in creating the project plan. The twist is that there are not just two parts to each feature (the risk element that gets reduced and the actual delivery part), but three:

  • The risk element
  • The main value part
  • The polishing and glossing that makes the feature “wonderful”, the “tail”.

Not every feature is of equal value to the buyers and users. Arrange for the minimum set of features to be at an “adequate” level of delivery in plenty of time before the deadline (if possible), then spend the remaining time polishing and glossing those features that are more important than the others [JP-ttt].

This leads straight into the “trim the tail” result.

Reaping the Benefits: Early Income & Trimming the Tail

Doing disciplined knowledge acquisition well delivers two benefits:

  • Early income
  • Trimming the tail

Early income is the obvious benefit and is well presented in Software by Numbers [MD]. A project can become self-funding if it is delivered to paying users part-way through its development, thus lowering the load on the sponsors.

Less obvious, but equally valuable, is the ability to “trim the tail”: not developing aspects of the system that are less valuable. This deserves a fuller description. Here is the shortest example, to give the idea:

When you are opening a new hotel, it may not be necessary to shine the doorknobs before opening to the public. And if it is necessary to have shined doorknobs for the guests, it is probably not necessary that all of the doorknobs be shined.

Four aspects of a system might be trimmed back:

  • Complete features
  • Feature details
  • Usage quality
  • Internal quality

The first is obvious: for an early delivery, your car (for example) might not need a sun roof. The first iPads delivered did not contain telephone capability.

The second is less obvious: Given that your car must have all of the basics of a car (such as brakes), it might not need fancy brakes with (for example) an antilock braking system. A computer system might require searching capability, but not include auto-completion or auto-correction ability. In the case of the iPhone, it would have been possible to have delivered it without the sophisticated scrolling, bouncing visuals.

The third is related to the second: really smooth, easy-to-use features take a lot of work, with a lot of revision. It is possible to prepare a delivery-ready version of a feature that is intermediate in usability, ready to deliver earlier.

The fourth has to do with defect rates and internal design quality. The question is how much internal quality is needed for the delivery in question.

If, and this is the important if, development has proceeded incrementally, attending to the knowledge management areas, then the sponsoring, management, and development teams have the luxury of being able to deliver

  • early, with reduced features or quality;
  • on time, with either full or reduced quality, depending on where development stands at that time; or
  • later, with enriched features or quality;

at the choice of the sponsors!

Under usual project circumstances, these choices are not available, the only choices being to delay or work overtime. The “trim the tail” option is available only for those who have worked in this more disciplined fashion.

The trim-the-tail strategy is one of the few strategies equally available to very small and very large projects, fixed-price and floating-price projects. Here are three examples, taken from real projects:

Small, floating-price project: A new web site development involving only two people, the web site owner and the programmer. After several months of open-ended work, the web site owner wanted the site delivered “soon”, and trimmed the tail back aggressively and repeatedly until something much smaller than expected, but still suitable, was deployed.

Small, fixed-price project: The company in question always bid small, fixed-price contracts of three to six months, involving three to eight people. As usual, the bids were aggressive and the teams typically ended late, missing the deadline or scope, with resulting overtime from the developers and penalties at the end of the contract. Jeff Patton [JP] worked in the manner described in this article, leaving the least important features to the end and deliberately thinning the less critical features, so that when the contract period ended, it was clear to the customers that they had gotten most of what they wanted. This produced the least overtime, the smallest penalties, the highest customer satisfaction, and the greatest likelihood of receiving a follow-on contract.

Very large development project: A company with several thousand developers in several countries, working on a product line with multiple variations, applications and releases. Under normal circumstances, when they call for a full integration on a particular date, every team starts to work overtime and jockeys for position not to be the one most behind schedule. The integration date keeps getting slipped back as team after team fails to complete its work on time. Using the trim-the-tail approach, each team would have in place the essential elements needed for the integration, with only tail elements left unfinished. For delivery, management would be in position to deliver slightly less, on time, or slightly more, a bit later.

Final Summary

Four knowledge areas in which development projects suffer are

  • What is the right thing to build.
  • How much it will cost to develop.
  • Whether adequate people are on the team, and how they best work together.
  • Where their technical ideas are flawed.

The disciplined knowledge acquisition approach consists of identifying, early and continuously, small experiments and small slices of deliverable work, so that the development team gains the knowledge in these four areas in time to maximize the effectiveness of the product delivered.

Some strategies described to accomplish this include:

  • Integrate early, integrate often
  • Early victory
  • Core samples
  • Microcosm
  • Paper prototyping
  • Walking Skeleton
  • Simplest first, worst second
  • Story splitting
  • Spikes
  • Ambassador user
  • Early delivery
  • Empty delivery and Manual delivery
  • Trim the tail

It is exciting to find a baseline strategy that applies to projects of such different sizes and natures as just outlined. The disciplined knowledge acquisition approach is, as mentioned at the start, not for the faint of heart — it requires discipline, inventiveness and constant correction. The payoff is the ability to get a development team working together, discover what is needed in time to deliver it, deliver it early in order to create a self-funding project, and trim the tail at the end to meet inelastic deadlines.

References

[AC-abws] http://alistair.cockburn.us/The+A-B+work+split
[AC-bp] http://devblog.point2.com/2009/09/08/blitz-planning-at-point2/
[AC-ft] http://alistair.cockburn.us/The+A-B+work+split%2c+feature+thinning+and+fractal+walking+skeletons
[AC-rrp] http://alistair.cockburn.us/Project+risk+reduction+patterns
[AC-sfws] Alistair Cockburn, Crystal Clear: A Human-Powered Methodology for Small Teams, Addison-Wesley, 2005. Also online at http://alistair.cockburn.us/ASD+book+extract%3A+%22Individuals%22
[AC-ttt] http://alistair.cockburn.us/Trim+the+Tail
[AC-ws] http://alistair.cockburn.us/Walking+skeleton
[BBC] “Searching the internet’s long tail and finding parrot cages”, http://www.bbc.co.uk/news/business-11495839
[DN] http://dannorth.net/2010/08/30/introducing-deliberate-discovery/
[ER] Eric Ries, The Lean Startup: How Today’s Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses, Crown Business, 2011.
[JP] Jeff Patton, “Unfixing the Fixed Scope Project: Using Agile Methodologies to Create Flexibility in Project Scope”, in Agile Development Conference 2003, Proceedings of the Conference on Agile Development, 2003, ACM Press. Available online through a Google docs search.
[JP-ttt] http://www.agileproductdesign.com/downloads/patton_embrace_uncertainty_optimized.ppt
[KW] Karl Weick, The Social Psychology of Organizing, McGraw-Hill Humanities/Social Sciences/Languages; 2nd edition, 1979.
[MD] Mark Denne and Jane Cleland-Huang. Software by Numbers: Low-Risk, High-Return Development. Prentice-Hall, 2003.
[TdM-jt] Tom DeMarco and Timothy Lister, Peopleware: Productive Projects and Teams. New York: Dorset House Publishing Co., 1987.
[W-pp] http://en.wikipedia.org/wiki/Paper_prototyping
[W-rm] http://en.wikipedia.org/wiki/Risk_management
[W-us] http://en.wikipedia.org/wiki/User_story
[WC-spike] http://agiledictionary.com/209/spike/
[WC-c2spike] http://c2.com/xp/SpikeSolution.html

Known Uses

The Lean Startup field is the best current arena in which to find examples. A super example is this one: http://steveblank.com/2013/07/22/an-mvp-is-not-a-cheaper-product-its-about-smart-learning/

Re: Agile Use Cases (Writing Effective Use Cases meets Agile Software Development)


t

-by Venkat on 6/17/2010 at 2:20 AM


I would like to use a slide from this presentation in an informal presentation to co-workers. May I do so with your permission?

-by Henning on 12/17/2010 at 3:49 PM


Certainly. Just include the full slide so my name etc appears for attribution.

-by Alistair on 12/18/2010 at 2:26 PM


Thank you. Will use whole slide, and of course also point co-workers to your site. Thanks a lot for the work you do.

-by Henning on 12/20/2010 at 9:17 AM


GRAX

-by DIANA RGUEZ on 5/22/2012 at 11:17 PM


thanks

-by carla on 3/22/2014 at 1:32 AM


Software development as a cooperative game

Alistair A.R. Cockburn
Humans and Technology

Alistair Cockburn’s talk given at the 1999 ObjectActive conference in Midrand, South Africa.

Colloquially known as Alistair’s “scum” talk.

My name is Alistair Cockburn – pronounced Co-burn. In the U.S. they often forget to teach that ‘ck’ is occasionally silent, as in my name.

I am a hardware and software engineer by training, but my chosen profession is that of a methodologist. That means that I study how people produce software, successfully, or unsuccessfully. To most programmers, software methodologists, like software process designers and marketing people, are among the lowest of life forms, “scum”, in the vernacular. To us, of course, we are essential agents of life, rather like blue-green algae. No doubt, to blue-green algae, other algae are attractive. But from a certain distance, and to a swimmer crossing the pond, they look mostly like scum.

I’m a methodologist, not a process designer. I have rather a distrust of most software processes, as some of you will have come to understand by now. I think most documented software processes are simply incorrect. But this is like saying I am really a blue algae, not a blue-green algae. To the swimmer, it makes no difference what color the scum is.

As a true-blue methodologist, I work by collecting raw data: A person says, “We did this. This happened.” I take that as a fact. I collect facts like that. Then I look at cognitive science, neuro-linguistic programming, sociology, ethnography, organizational theory and personality types, looking for their theories of what is at work, what is cause and what is effect – what is the “ecology” that surrounds programming. They are not facts, but ideas I can build on and test against my facts and observations. I keep the ones that I like and that seem to explain something. Then I make a wild guess at a theory that fits some of the data.

Then I try to break the theory. Well, actually, I don’t really want to break it. I really want it to be right. So maybe I don’t try all that hard to break it. I don’t show it to my worst opponents, because they’ll break it too easily. But I announce the theory, preferably in the middle of programmers or in the middle of a project, and see how people react. However much I want the theory to be correct, I have found that I only make progress when they break it.

What I have observed in the last 10 years of doing this, is that there is this cycle. I start off fairly ignorant and with a guess. I learn and learn, and then think I know something. I am at the top of this turning wheel. I announce and try my theory, and immediately start discovering it doesn’t work. So my understanding starts dropping, until I end up convinced I know nothing and am totally depressed. At that point I notice something new or have an idea, and start collecting information again.

This wheel goes up, pauses, then comes down. I have been around that wheel approximately six times in the last 10 years, and have gotten to know the feeling.

I was giving a talk like this a cycle or so ago, and told all this to the audience. I said, “I have bad news for you. While preparing for this talk, I discovered that I am currently at the top of the cycle. That means I can announce with perfect certainty and wonderful evidence what I believe, and be completely sure that it is wrong. However, I have no information as to where or how it is wrong. And what’s worse, since I have been around the circle 4 times, I can probably tell you where you are wrong, but you can’t tell me where I am wrong.” With that, I began my lecture.

You are in a better position today. Since I discovered that cycle, preparing for that talk, I decided to try to start on the next cycle even while I was still going up on the current cycle. So I am going up and down simultaneously on different ideas. So now I can describe theories I know haven’t reached fruition yet, so I don’t know where they are going, and in the next breath tell you stuff that used to be true and is now suspect. And I have so many stories to make or break any theory that it is just dizzying.

An aside before I go on with the reassuring and comforting part of this talk, which is the theory I am currently trying to convince you to use.

It appears to me that software development is happening in industry, not in the universities. Universities are great for problems that can be solved by sitting alone and thinking or experimenting for months on end. Universities were great for giving us automata theory, complexity analysis, compilers and the like. But universities are not at all well suited to understanding what is happening during software development.

Software development at the moment is much more like early manufacture of samurai swords, shields, and battlefield tactics. You make a pile of swords or war tactics, send them onto the battlefield, and see which ones worked better. Then you make different swords and tactics, and so on. You can’t figure out the right answer sitting alone in the room. You have to be on the battlefield. I can’t imagine learning the things I’ve learned while sitting peacefully in my office reflecting. Most of my original reflections and predictions were just wrong. So any one of you who is interested in this topic probably has to work as a developer or consultant, so you can see the moment-to-moment action and get raw data.

I took part in a discussion up at the University a short while ago, to discuss the idea of a “Software Engineering” curriculum, as distinct from Computer Science. Not to poke fun at this particular Univ. – the department head is very sympathetic to the concept, and the representative from the Dean’s office is a professor who is right in line with the common thinking in the academic segment of our industry. They were discussing “What People Would Think – How Other Academics Would React”. They were looking for a comparison faculty for this new curriculum. They wanted it to be like Chemical Engineering. That is a good, macho discipline. Good science, good engineering, good success rate, good money. But Software Engineering isn’t like Chemical Engineering, and they were aware of it. I got around to describing how software engineering research ought to be carried out – “rather like a social science, anthropology, for example”. The Dean’s representative snorted, “That’s just what we don’t need – to be contaminated with the status of a social science department!”

That struck me as rather odd, given the generally low opinion people have of the intellectual rigor of Computer Science as a discipline. Rather like the rocks on the bottom of the pond calling the mud “low”. However, he didn’t look to be in a particularly receptive mood at the time, so I didn’t mention it to him. I did, however, think it. To me, as a blue algae scum, a social anthropologist is a collegial lower life form, perhaps a brown algae. Also life giving, and something to learn from. At least sitting out in the field, learning how the big world works.

Anyway, here I am, true-blue algae though I might be, and trying to tell you my best current guess as to how this software development thing works. All of what I have to say is based on facts, backed by guessing and checking. Or guessing around facts and checks.

You will notice that my talk doesn’t follow the published slides in the proceedings. The reason for this is that I have had both inspiration and breakage since I sent in the slides. Derek can tell you how hard it was to get slides from me in time to publish – and I hated to send them, even then. I am always convinced I’ll learn something critical and damaging the day before the talk, so my preferred mode of operation is to write the talk from 11 pm to 4 am the night before the talk. But in this case, Derek prevailed. The talk slides are almost a month old, so of course I have found a new way to talk the material by now.

I’ll try to put this all into the smallest number of words and let those of you who get it right away get it right away. I believe these ideas are correct and will stand you in good stead on your current and next projects.

1. Communication

The first thing to get is that no communication is ever perfect and complete.

It just can’t be done. It is not even in the realm of possibility. Your listener, or the receiver of the communication, has to jump across a gap, at some point, and has to do that all on their own. You can’t do it for them. If they are very different from you, then they can’t jump a big gap. You have to explain some basic concepts to them, and then build forward until they build their own bridge of experience so they can finally get what you are saying. But however much you back up, there is someone who wouldn’t understand that, and you’d have to back up more. There is no final end point to this backing up. However, if you are communicating with someone who has a very similar background, you wave your hands and mutter a few phrases, and they get it. They can jump a huge distance, because they have a similar base of experience to draw from, and can fill the gap with accurate predictions of your meanings.

The way I say all of that in shortest form is, “All communication is touching into shared experience.”

The point is, we write these specification and design documents as though we could actually ever explain what we mean. And we can’t. We can never hope to completely specify the requirements or the design. Not even the faintest chance. When we write, we assume that the reader has a certain level of experience. If we can assume more experience, then we can write less. If we have to assume less experience, then we have to write more.

I was working with a US company that was employing programmers in Russia. They wanted me to teach them to write use cases in the US for programmers who knew neither English nor the domain. I said, “You can’t hope to teach them the domain inside the requirements document. First teach them the domain, then write a short requirements document that speaks to someone knowledgeable in the domain.” They decided to do one better. They decided to write the short version of the requirements document, and then fly one of their domain experts over to Russia for 2 weeks at a time to translate, explain and generally ensure that the programmers were doing the right thing.

See how that works? The domain specialist could jump the large gap presented by the brief use case document, and then back up, as needed and only as needed, to fill in until the gaps were small enough for the Russian programmers to jump.

So the first significant idea I have to give you today is that complete communication is never possible, and so it is our task on a software development project to manage the incompleteness of our communications: estimate how much is needed, when we can quit, how we can help receivers to jump larger gaps, when and how to make the gaps smaller. Every time we try to make the gap smaller, it costs time and money, and software projects are short on both. So what we want is to find out how large of a gap – how much incompleteness in the communication – we can get away with, and stop there.

There is a second moral to that story. How did the company decide to fill in the programmers’ gaps? By making them read books? Sending them to a course (that’s what I suggested)? No, by sending someone to talk with them, face to face. Because real-time, multi-modal, three-dimensional, face-to-face communication with question and answer is the most effective way to transfer information and see that it was received. Two people standing at the white board, talking, questioning, drawing, maybe typing on the computer if that is the issue.

2. Using Whiteboards

So the second idea of the day is that as you remove those characteristics of two people at the whiteboard, you reduce the efficiency and effectiveness of the communication session.

That is what the graph shows. Take away physical proximity, and you get video conferencing, and many of us have experienced how hard it is to collaborate over a video link. You lose three-dimensionality, the visual proximity that gives non-verbal cues. Back up one step further, put people on the phone, and you lose all visual cues. Go to email and you lose tonal inflection and timing. Go to videotape to get visuals back, but lose question-and-answer. Go to audiotape and lose visuals again.

Go to paper and guess what? You’ve lost almost everything. The writer has to, very laboriously, I should note, guess who the audience is, guess their level of experience, guess what they understand, guess what their questions will be, and guess what the best answers to those questions are. What are the odds of them getting all that right? Very small. And expensive.

But how do we demand that people communicate on a project? Written text and drawings! In the light of this communications model, that is clearly absurd – and yet we do it. We demand that people communicate in the slowest, least effective medium, and downplay the most effective medium.

So if this theory is any good, we should be able to draw a prediction from it. All right, here is the prediction I get from looking at this graph. How should we create archival documentation of a design decision? Back up the curve to the highest archivable communication medium, and we find videotape.

This suggests the following documentation scenario. The designer gives a 10-20 minute, semi-prepared talk at the whiteboard, telling some other designers, who do not know the answer, how the system works. They get to ask questions. The designer will first fill in background details, give the simple solution, then add complexity. The questions will indicate where the designer has been vague, and the designer will explain. All this is videotaped. After the taping, someone transfers a couple of the key drawings, instance diagrams, collaboration diagrams, examples, whatever, to paper, so that people can recall, from looking at the drawings, what the conversation was. And, finally, someone puts index marks on the videotape for where interesting bits of the discussion took place, and publishes that index along with the drawings. People can then review the drawings, recall the conversation, and look up the key discussion on the tape. All this would be relatively inexpensive and easy to do. Certainly faster and more palatable to the person having to do the presentation, and plausibly more informative to the viewer.

I’ve been trying to find people who have done this or who are willing to try this out. So far, I have found that Gerald Weinberg made a similar suggestion a decade ago, by whatever reasoning he got to it. I found a woman at Lucent Technologies / Bell Labs who had actually done this once and said it worked great. Actually, she was the one who told me to put the index marks on the video tape and publish them. Recently, I have found a team lead over several projects who said he will try this sometime soon. So hopefully I’ll get back more than just the one answer as to whether this model gives good predictions. If any of you are willing to try this, please let me know: what you did, and what happened.

The upshot of this model is the conclusion that you want to encourage informal, face-to-face contact wherever possible. In fact, you want to rely on it. It should not be an accident; it should be core to your development process. Put the people in one room if possible, in adjacent offices at least. Jim Coplien says that studies show that as soon as people have to cross a stairwell, communication drops precipitously.

3. People are Active Devices

This gets us to the third significant idea: that software development processes use humans as the active components.

Humans are not linear devices, the way rods and hinges are. Even non-linear devices like transistors are easy to characterize compared with people. So when we talk about the software development process, we should first try to find out what are the active characteristics of these things called “people”.

People have lots of interesting characteristics, and we don’t know what they are – which makes it all the more absurd that we try to define methodologies and processes that incorporate them.

People are really good at just looking around, around the code, around the project, around the problem, and understanding what’s going on. So you don’t have to be all that precise; in many instances, you rely on people being able to look around. Program maintainers tell me this all the time. They expect the documentation to be out of date, so they just go and read the code.

People are really good at communicating with other people, face to face, of course.

People are really good at taking initiative, just seeing what needs to be done, and doing it. It is my experience that this is what saves most processes – people do whatever is needed, never mind what the process says. In my interviews with projects, when I ask how the project managed to deliver in the end, there is almost always some comment about “a few good people, who stepped in and just did whatever was necessary.”

However, people are typically inconsistent in their work, careless, sloppy and undisciplined. You can pretty much count on it. They don’t read the instructions. They goof off alternating with working hard. They resist learning and using new ideas. All of those make life hard on the process designer. Whatever the actual optimal process and design technique could manage to be, people will resist using it, and then use it sporadically and carelessly, and then have to step in with some personal heroics to make it all work out in the end.

This curse plagues all of us blue and blue-green algae – it doesn’t matter what great estimating, designing, programming, testing, managing techniques we discover and teach, people generally won’t use them anyway.

So these are some of the characteristics of people – their success modes, and their failure modes. I am trying to build my newest set of methodologies around them. Around apprenticeship programs, because that allows learning by example, on the job, in personal contact with another, with feedback. All these are important, and I’ll get back to some of them as we go along here.

One of the most interesting discoveries I made while capturing one group’s methodology was this milestone I call a “Declaration” milestone. I found 3 kinds of milestones, and have found them since on all projects. The first is one we expect: a Review. A review happens when a number of people congregate and stare at some work product and give feedback. The interesting questions about a review are “Who is there?”, “What are they staring at?” and “What does it mean to Fail the review?”

The second kind of milestone is “Publish”. Every time you check code back into the configuration management system, that is a form of publish. Other Publish milestones involve deploying the software, circulating a document, and the like. These are also fairly obvious.

“Declaration” is, however, not obvious. When does the technical writing group start writing the online help text? The answer I found was that the team lead would show up and say, “It’s ready”. The team lead doesn’t mean, “It’s complete, correct, done, defect free” or any of those things. The team lead simply means, “If you start writing now, I believe that the further changes we will make will be small compared to the total effort of writing.” (In my book, in the Risk Reduction Strategies section, there is a strategy called Gold Rush that discusses this sort of activity.)

There is no double-check on the correctness of the team lead’s assertion. It just is “declared”. Since that time, I have found many examples of the Declaration milestone being used in project scheduling. “The object model is stable enough for the DBAs to start designing from” is one. “Our software is ready for Alpha Release.” Alpha release? When is the software ready for Alpha Release? When it is bug-free? Of course not! When what? When someone decides that the defect state is low enough that the alpha users can tolerate it. It is simply a Declaration. Declaration milestones are an example of the way we manage the incompleteness of communication.

4. Games

At this point I’m going to jump to the most powerful predictor I have come across for managing software development – considering it as a cooperative game.

Games are not just what children play, although those are also games. Games are invented by adults, by mathematicians, by novelists, by teenagers, and by children.

If you are sitting around the living room on a winter’s evening, and someone says, “Let’s play a game,” what could you play? You could play charades (play-acting to uncover a hidden phrase). You could play tic-tac-toe or checkers, poker or bridge. You could play hide-and-seek or table tennis. You could play “when I took a trip…”, a game in which each person adds a sentence onto a story that grows in the telling. You could end up having a wrestling match on the living room floor.

Checkers and tic-tac-toe are positional games. The entire state of the game at any moment is represented by the position on the board at that moment. These games have properties that mathematicians find interesting, and John Conway, in the book On Numbers and Games, shows how 2-person, positional games can be used to define all numbers, real, imaginary, finite or transfinite. He actually generates the notion of number from the notion of 2-person, positional games.

Most competitive games, such as checkers, tic-tac-toe, bridge and table tennis, are zero-sum games. Two sides play, one side wins, one side loses. If you score 1 point for winning, and -1 point for losing, at the end of the game the sum of the scores is zero. “Not everyone wins,” is a fact not lost on adults creating games for children’s parties.

Many of the games you would consider playing on that winter’s evening are not zero-sum games. In poker, gin rummy, Parcheesi and other group competitive games, only one person wins; the rest lose. In hide-and-seek, any number of the hiders may win (or lose) against the seeker.

But rock climbing, story-telling, and carpet wrestling are not about winning or losing; the game is all about having fun. As long as the guessing or the story-telling is interesting, the game is worth playing. These are cooperative games. The point of the game is to interact with each other, or perhaps to help each other.

Not all games even have an end! The story-telling game, the carpet-wrestling, and the musical session are not even goal-seeking games. It is not the purpose to reach a goal as fast as possible. They come to an end when enough people get tired of playing and step out. The game is expected to end, but has no particular endpoint.

There are, however, some games in which the primary intention is to NOT end – to keep the game going. These are called infinite games (all the other games must therefore be finite). If you start a club or a company, the purpose is to keep the club or company going. The way to keep the club going is to make it interesting for the participants. A person’s profession falls into the category of an infinite game. The person, wanting to continue the profession, makes a set of moves which permit their practice of that profession to continue.

So games come in all sorts of flavors. We have seen just a few possible dimensions: physical / mental / solo / team-based / competitive / cooperative / goal-seeking / finite / infinite. It is therefore appropriate that the American Heritage Dictionary gives the first definition of game as: “A way of amusing oneself.”

Getting slowly to software development, I’ll briefly consider rock climbing. In contrast with children playing Legos or jazz musicians jamming, climbers aim to reach the top. They evaluate the climb on how well they climbed together, and how much they enjoyed themselves, but the first measure of success is whether they reached the top. Reaching the endpoint is a primary goal, and the game is over when they reach the top. So rock-climbing is a goal-seeking, cooperative game.

Now if you are a rock climber, you might well interrupt me here. For many rock climbers, the moment of reaching the end of the climb is a sad one, for it signals the end of the game. That is true of cooperative games in general. The game comes to an end when the endpoint is reached, but if the players have been enjoying themselves, they may not want to stop. When we compare to software development, we do indeed see that sometimes software developers do not want to finish their design, because then the fun part of their work will be over. Software development is similar to rock-climbing in ways that I shall develop a little more fully in a minute.

I would like you to consider software development as a cooperative, finite, goal-seeking, group game. The goal is to produce a working system. The group, or team, consists of the sponsor, manager, requirements or usage specialists, software designers, testers, and writers. Usually the goal is to produce the system as quickly as possible, but other factors affect the time goal: ease of use, cost, defect freedom, and liability protection. In general, it is a resource-limited game, which affects how the moves are made.

The game is finite because it is officially over when the goal is reached. Sometimes the termination point is delivery of the system, sometimes it comes a bit later. But funding and motivation for the game change around the time the system is delivered, and a new game is defined. The new game may be to improve the system, to replace the system, to build an entirely different system, or possibly, to disband the group.

The game is cooperative because the people on the team help each other to reach the goal. The measure of their quality as a team is how well they cooperate and communicate during the game. This measure is used because that affects how well they reach the goal.

Although the software development game within the project is finite and cooperative, other games are being played at the same time, games with other characteristics.

Software development is competitive across teams. Teams in different companies – sometimes within the same company – compete to put the system out ahead of the other teams. The competitive game across teams is in play at the same time as each team is working its cooperative game (think here of team rock-climbing competitions).

Career management is an infinite game for the individual. The purpose of playing this game well is to be able to get the best position in the next game. As with the card game called “So long, sucker”, the teams and alliances in an individual’s career management change continually and without notice. The individual moves people make for their career affect the ways in which they collaborate with their alleged teammates.

Organizational survival is an infinite group game. It is cooperative within the organization (subject to the individual career game), and competitive across organizations.

This game model of software development has stood me in good stead recently, as I evaluate military software projects and open-source software development. In some of the military software projects, what we see is a predominance of the career- and corporate-enhancing infinite games. It is quite clear that delivery of the software is a secondary concern, and growing the company, growing personal influence, or growing the career is what is on many people’s minds. The logic of the funny contractor behavior doesn’t make sense until you realize they are playing a different game, in which different moves are called for. Then it suddenly all makes sense – even if you don’t like it.

Open-source development is different because it is not a resource-limited game, nor is it finite and end-point directed. Linus Torvalds did not say, “We’ll make a shippable copy of Linux, and then we can all go home.” No, Linus is around, and it will evolve. The game is interesting as long as it is interesting. Any number of players may show up, and they are not on a time-line. The game will be abandoned as soon as it stops being interesting for the players. In that sense, it is much more like musicians playing together, or carpet-wrestling, or Lego building. It is a cooperative game that is not directed toward “reaching the goal”, and is not built around managing scant resources. And so the moves that make sense in open-source development naturally don’t make the same sense for a standard resource-limited, goal-seeking software development project.

Of all the comparison partners for software development that I have seen, rock climbing has emerged as the best. Here are some of the words and phrases that we can link with rock climbing. You can see how well they transfer to software development (by the way, read Jim Highsmith’s new book, Adaptive Software Development, for a more detailed look at the rock climbing comparison).

Rock climbing is Technical. The rock climber must have technical proficiency. The novice can only approach simple climbs. With practice, the climber can attack more and more difficult climbs. The better rock climber can simply do things that the others cannot. Similarly, software development is technical and requires technical proficiency, and there is a frank difference in what a more skilled person can do compared with a less skilled person.

Training. Rock climbers are continually training and searching for new techniques they can use, just as software designers do.

Technical Pass/Fail. A key point of comparison between rock climbing and software development, for me, is that not just any solution will do. The climbers must actually support their weight on their hands and feet; the software must actually run and produce reasonably accurate answers. This key characteristic is missing from most alternative activities that people select to compare software development with.

Individual and Team. Some people just naturally climb better than others. Some people will never handle certain climbs. At each moment on the climb, the person is drawing on their own capabilities, having to hold up their own weight. The same is true in software.

And yet, climbing is usually done in teams. There are solo climbers, but they are in the minority. Under normal circumstances, climbers form a team for the purpose of a climb, and the team has to actually work together to accomplish the climb. Similarly, software developers, while working on their individual assignments, must function as a team to get the software out. The “Team – Individual” dual aspects of software development form the basis for most of my current work in methodologies, and I’ll get back to it before I’m done.

Tools. Tools are a requirement for serious rock-climbing: chalk, chocks, harness, rope, carabiner, and so on. It is important to be able to reach for the right tool at the right moment. It is possible to climb very small distances with no tools. The longer the climb, the more critical the tool selection is. You software developers will recognize this. When you need a performance profiler, you really need it. You can’t function without the compiler. The team gets stuck without the version control system. And so on.

Planning and Improvising. Whether bouldering, doing a single-rope climb, or a multi-day climb, the climbers always make some sort of a plan. The longer the climb, the more extensive the plan must be, even though the team knows that the plan will be insufficient, and wrong in places.

Unforeseen, unforeseeable and purely chance obstacles are certain to show up on even the most meticulously planned climbing expeditions, unless the climb is short and the people have already done it several times before. Therefore, the climbers must be prepared to change their plans, to improvise, at a moment’s notice.

This dichotomy is part of what makes software development managers gnash their teeth. They want a plan, but have to deal with unforeseen difficulties. It is one of the reasons why incremental development is so critical to project success. (Does that sound like climbing in stages, and setting various base camps?)

Fun. Climbers climb because it is fun. Climbers experience a sense of flow while climbing, and this total occupation is part of what makes it fun. Similarly, programmers typically enjoy their work, and part of that enjoyment is getting into the flow of designing or programming. Flow in the case of rock climbing is both physical and mental. Flow in the case of programming is purely mental.

Challenging. Climbers climb because there is a challenge – can they really make it to the top? Most programmers crave this challenge, too. If programmers do not find their assignment challenging, they may quit, or start embellishing the system with design elements they find challenging.

Resource-limited. Rock climbing works against a time and energy budget, needing to be completed before the team is too tired, before the food runs out, by nightfall or before the snows come. In general, climbers plan their climbs to fit a resource budget. Similarly, commercial software development is governed by budget and need. It is in this sense that open-source development is different from commercial software development.

Dangerous. If you fall wrong on a rock climb, you can die or get maimed. This is probably the one aspect of rock climbing that does not transfer to software development. Rock climbers are fond of saying that climbing with proper care is less dangerous than driving a car. However, I have never heard of programmers even needing to compare the danger of programming with the danger of driving a car or crossing the street.

Software development has been compared with math, science, engineering, poetry, theatre and jazz. It is useful to have such a comparison partner, because we can get some distance and clarity by comparing software development to its partner, and reflecting on whether that comparison holds in each particular case. I find rock climbing has more in common with software development than do all the comparison partners that have been used before.

Math, science and poetry are not games of the same sort. Theatre is a finite, group, cooperative, planned/improvised game, but it is not goal-seeking, and it is missing the technical pass/fail nature of software and rock climbing. We could simply put on a terrible play and get away with it, if everything falls apart, but we cannot simply float up the rock climb or wish the software would run if it will not.

Engineering is too close to software development for us to stand outside of. In fact, I could be talking about engineering rather than software development here – it just happens to be about software.

There are two things that software development is not. Science is one of them. Of course, there has been much written as to just “what is” science, and I shall not try to recap it all here. However, there are several thoughts that can usefully be considered for a moment.

The scientific method is often claimed to consist of “Observe, deduce, experiment, confirm.” This adage has been discredited for a long time. Many science writers have shown that scientists very often start with an answer or hypothesis in mind, and set out to prove it. We should not encourage software to be developed using this adage, because it does not fit the matter of developing software, and it does not describe science in the first place.

A closer phrasing is “Observe, invent, experiment, confirm.” Where does the hypothesis come from? Invention. Sometimes invention after observation, sometimes invention on the basis of prior thinking on any number of topics. Having once invented an idea, a scientist quite often then sets out to demonstrate that the hypothesis is correct. This is also closer to how many ideas reach fruition in software development. The designer observes, thinks, and invents some design. Sketches, prototypes and experiments are made to see if the idea “holds water”, and if so, then it proceeds.

While a more accurate phrase, it does not give us insights as to how to manage the development of software, as it does not cover more than a small part of the software development process. It does not say anything about requirements, implementation, testing, deployment, teams, tools or training, in particular.

Paul Feyerabend [Against Method] claimed that scientific progress has been so different, from case to case, that no method could be said to work for it. Here at last we have a partner to software development! Starting from Peter Naur in the 1960s, we hear programmers and designers defending software development as a unique and individual activity, one that cannot be scheduled, predicted or turned into a procedure. However, this thought applies only to the inventive part of software development. While a significant part, it is not all of software development. My goal is to be able to say something constructive about managing software projects, even if we accept Paul Feyerabend and Peter Naur’s thesis that there is no sure-fire, successful method for coming up with the invention.

Science is one thing that does not capture the essence of software development, and model building is the other thing I wish to say that software development is not. Ivar Jacobson has actively promoted the view that “software development is model building” over the last decade, and it has ended up as a catch-phrase in the industry. It is, however, dangerously inaccurate.

When software development becomes model building, then a valid measure of the quality of the software or of the development process is the quality of the models, their fidelity to the real world, their completeness. However, as I interviewed dozens of successful projects around the world, I was repeatedly told that the people on the project did not have the time to complete their models, or never drew them at all. Their common comments were:

“We don’t have time to create fancy or complete models. Often, we don’t have time to create models at all.”

or

“The interesting part of what we want to express doesn’t get captured in those models. The interesting part is what we say to each other while drawing on the board.”

In the cases where I have found people diligently creating models, software was not getting delivered. In other words, paying attention to the “models” interfered with developing the software.

How can we reconcile these interviews and many people’s personal experiences with what is, no doubt, Ivar’s voice of experience? The game description gives us an answer.

The game is to deliver the software. Any other activity is secondary. A model, or, indeed, any communication, is sufficient as soon as it permits the next person to make their next move. The work products of the team, therefore, should be measured for their sufficiency with respect to communicating to their target group.

They should be measured for nothing else. It does not matter if the model is incomplete, drawn with incorrect syntax, and actually not like the real world – if it communicates sufficiently to its recipient. In some cases, people get the intended communication from surprisingly sparse messages. In other cases, a more detailed, accurate communication is required. Understanding the nature of the game, the rules and variations on the game, gives us insight into how elaborate a model to build, or whether to build a model at all.

Software development is a game of invention and communication. Communication is so important, that we need to spend time absorbing the range of factors that affect the quality of the communication.

This reference to communication gets us back to my opening section about communication as never being perfect and complete, communication as touching into shared experience.

I’ll give you an example of how I use this two-legged model of software development as a goal-seeking game of invention and communication, and communication as touching into shared experience, never fully complete or perfect.

A project architect was telling me that sometimes a group of designers go to lunch, have an idea, and someone sketches out a design on a napkin. They go back and stick the napkin on their corkboard. For the next months, possibly longer, that napkin acts as the central touchpoint of the design – it is what people remember and point to when discussing the evolution of the design.

From my point of view, this is fine, and not merely fine, but very good. The napkin serves as a reminder. Because it is a napkin, the designers remember the restaurant, the room, the discussion. It has life in several sensory modes. It has shape, it has texture, it has, perhaps, grease stains. All of those anchor the memory of the discussion.

The architect was telling me that he had just hired a communications specialist. One of his ideas was that this communications specialist would walk around, and copy the contents of the napkin into some drawing tools so it would be pretty and archived for later reference.

In our discussion, we decided that copying that napkin into a drawing tool would cause it to lose information! The shape and texture of the napkin would be lost, the rectangles would be perfect instead of wobbly and different sizes. The memories associated with the napkin would be lost in the transfer.

But what alternative is there? Does the napkin just stay on the cork board – or is there some way to share it with the team? The answer was to scan the napkin into the computer, and put the scanned image into Lotus Notes or a Word file or whatever. Then the napkin itself becomes a design marker, complete with its bumpy texture and hand-drawn boxes. It is both cheaper and more informative than the redrawn version in the drawing tool. It fits both the communication model and the game model better. It is simply better for our purposes.

This was true on another project. A person came in and said, “Alistair, I know this is not a legitimate drawing in any known notation, but it is the best I can come up with.” And he presented a hand drawing of stick figures doing what would more or less be called a work-flow or collaboration diagram. Over the months, my colleagues and I tried redrawing his drawing in any number of different forms, but whenever we had difficulties, we went back to his original drawing, which we had photocopied and distributed, because it communicated to us in ways no other drawing did.

— 5. Methodology per project

The fifth point I’d like to make is that all projects are different, and so you need a different methodology on each project.

First, I have to say what I mean by “methodology”. This is a repeat for those of you who came to my methodology tutorial.

Methodology, in the biggest sense of the word, is how your organization operates to deliver software, successfully, over and over again. Your methodology is your organization’s strategy for winning the game.

“Big-M” methodology is what gets your software out. It is a unique construction of your organization – it is, in fact, a social agreement within the organization. It is

who you hire and what you hire them for,

how they work together,

what they each produce,

how they share.

It is the combined job descriptions, procedures and conventions of everyone on your team.

All organizations have a big-M methodology. It is simply how they do business. Even just seven people in a group have a way of working, of trading information, of separating work into pieces suited to different people, of putting their work back together as the final product. All of it founded on assumed values and cultural “norms”.

Only a few companies bother to try to write it all down, usually the large consulting houses: Andersen, Coopers & Lybrand, Ernst & Young, and so on. A few have gone so far as to create an expert system that prints out the full methodology needed for a project based on project staffing, complexity, deadlines and the like. No methodology I have seen captures the cultural assumptions or provides for variations along the lines of values or cultures.

The slide shows the basic elements of a “Big-M” methodology, with examples of some of those elements. The elements are teams, roles, skills, activities, techniques, tools, quality measures, deliverables, and standards – and team values. The diagram is as applicable to a poetry-writing group as to a group of software developers. What gets filled in behind each box varies, but the existence of the boxes doesn’t.

A big-M methodology is 1/3 a matter of individual people doing their individual work, 2/3 a matter of their working together, and a few percentage points of technology. Average people produce only average designs. However, even all the smartest people together will not produce group success without cooperation, communication and coordination. Most of us have seen or heard of such groups. It would happen as well on a poetry or theatre project as on a software project. So success hinges on cooperation, communication and coordination, which hinge on the value systems of the people in your organization. These value systems include

What people choose to spend their time on

How they choose to communicate

How decision-making power is distributed.

Several sets of factors make every software project different. One set revolves around the cultures of the nation, the organization, and the people. Another set revolves around the size of the project team, the number of people who need to coordinate their work. Another is the technical nature of the work, and alongside that is how life-critical the system is. Each variation in these calls for a slight variation in the methodology. Jim Highsmith refers to these as variations of the “fitness landscape”.

So it is not the case at all that one methodology definition could possibly suit all projects. Don’t even think that the Rational Unified Process, or the UML Unified Process, or the something something something methodology will fit your project out of the box. In fact, your first guess at the methodology to use on your project will be the worst guess you make, because it is based on no experience with the actual project. Over the course of the project, you will be able to say much better what the most suitable methodology is. But I’ll get back to that in a minute.

For the moment, let’s just consider two of the factors that characterize your project: the number of people, and how life-critical it is. That’s what you see on the grid. I’d like you to consider that a 200-person project for an atomic power plant should really work in a different way than a 30-person Y2K project for a company, should really work in a different way than a 2-person project to keep scores for the neighborhood soccer league. They are just so different that it would seem strange to try to legislate one methodology for all three of them.

And yet, that’s what we find: organizations asking for the One True Methodology for their projects – COBOL and Java, mainframe and internetworked, mission-critical and casual, large and small – and authors publishing the One True Methodology for all of the above. It just doesn’t make sense, and I’d like you all to be conscious of that.

These days, we find tailorable methodologies on the market, but most of them are not set up to handle the range of projects you see on the grid. And they don’t say what their range of applicability is, so you can’t tell just from reading the cover notes.

So the next idea I’d like you to consider is that every category of project on the grid plausibly has its own, most suited methodology. Further, within each grid box, each project will have its own slightly different optimal methodology, based on the cultures, the expertise, and the strengths and weaknesses of the people involved.

That means that your first guess at the methodology to use on your project will be the worst guess. This ties in with our discussion of software development as a game, and with the characteristics of people as active devices. One of the characteristics of people is that they learn from feedback – from seeing results. They learn how to develop software through incremental development, and the sooner the feedback, the better.

The punchline of this particular line of thinking is that you will look at the external characteristics of your project – based upon this grid, among other things – and nominate your best guess at a methodology. In some sense, it really does not matter what you nominate, although I can suggest that the lightest methodology you can imagine getting away with will probably do you the most good as a starting guess.

Then you make sure you work in increments… no longer than 4 months for any one increment, preferably shorter. Good people are telling me these days that they distrust increments longer than 2 months, but I am still comfortable with increments of 3-4 months duration. So make sure you exercise the entire process every 4 months or sooner – every team delivers some piece of running, tested code every 4 months or sooner.

And at the end of each increment – at the start of each increment, if you like – you ask the following questions:

- what did we do right? what did we do wrong?
- what are our priorities?
- what is it most important that we keep?
- what do we change for next time?
- what do we have to add? what can we drop?

After 2 or 3 increments, you will start to converge on a methodology that your project can tolerate, even thrive on.

As you can see, this strategy will not even make sense unless you are developing in increments.

Now I’m going to add one more piece to that – ask yourself those questions at least once in the middle of the increment. Ask, “Are we working in a way that will work for us? Are our groups and teams set up right – can we even deliver this increment working this way?”

At the first mid-increment introspection, you are mostly interested in catching catastrophic errors, things in your process that will stop you from succeeding. In later increments, you can make various and sundry improvements and optimizations. In one of our projects, during increment 3 we went through 3 team structures in 2 months. We had delivered twice, but weren’t comfortable with the way we were working. So we tried a new team structure, and it quickly became obvious that it was going to be a total mess. So we changed, and changed again, and found a team structure we stayed with for the next 3 increments. We couldn’t have done that on the first increment, because on the first increment we were just focused on getting something out the door.

— 6. Crystal Methodologies

Putting all the above thoughts together results in what I call the Crystal Manifesto of Software Development, or the Crystal Principle. Software development is a cooperative game – using props and markers to remind and incite, to reach the next move. The endpoint of the game is a delivered system. The next game is to alter or replace the system. The Crystal Principle informs us as to how we should play this game to get our software. The notion of the Game helps us understand when someone else is playing a different game, and gives a handle on how to react.

The family of methodologies that I am working with fit the gaming model, fit the characteristics of humans as nonlinear devices, and fit the grid of methodologies. There are a number of Crystal methodologies – I rank them by color: Clear, Yellow, Orange, Red, etc., up through Blue and Purple. But all of them are based on being strong on communications, light on intermediate work products, aiming at high productivity, and being self-evolving.

I am trying to build them around the strengths and weaknesses of people, to make them something people can live with – habitable is the word. Each one is aimed at being as light as possible for the project at hand, to make the most sense in a resource-limited, finite, goal-seeking, cooperative, group game.
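
To make that grid-and-color idea a bit more concrete, here is a toy sketch in Python. The size cutoffs, the criticality labels, and the starting_guess() helper are only illustrative assumptions made up for this sketch, not a formula from the methodologies themselves; the real choice depends on the people and cultures involved, and gets tuned over the increments as I described above.

```python
# Toy illustration only: picking a starting point in the Crystal family
# from two of the grid factors. The cutoffs and labels are assumptions
# made up for this sketch, not definitions from the methodologies.

def crystal_color(team_size: int) -> str:
    """Pick a color band by the number of people who must coordinate."""
    if team_size <= 8:
        return "Clear"
    if team_size <= 20:
        return "Yellow"
    if team_size <= 40:
        return "Orange"
    if team_size <= 80:
        return "Red"
    return "Blue or beyond"

def starting_guess(team_size: int, criticality: str) -> str:
    """Combine team size with how life-critical the system is."""
    color = crystal_color(team_size)
    if criticality in ("essential money", "life"):
        rigor = "add verification and traceability rigor"
    else:
        rigor = "stay as light as you can get away with"
    return f"Crystal {color}: {rigor}"

# The three example projects from the grid discussion:
print(starting_guess(2, "comfort"))               # neighborhood soccer-league scores
print(starting_guess(30, "discretionary money"))  # the 30-person Y2K project
print(starting_guess(200, "life"))                # the 200-person atomic power plant
```

And remember, whatever a lookup like this hands you is still only your first guess, the worst guess you will make on the project; the increment-end reflections are what converge it onto something your team can thrive on.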

ENDING

What is software development, really – and does it matter? The answer to the second question is Yes, it matters to you a great deal. If software development is really a science, you could apply the scientific method to it. If it is really engineering, then you could apply known engineering techniques. If software development is a matter of producing models, then you should spend your money developing models.

However, it is none of those. It is a “game”, a game of speed and cooperation within your team, in competition against other teams. A game against time, and a game for mind-share. You should spend your money to win that game.

Viewing software development as a game gives you better ideas on where to spend your money, how to structure your teams, and how they should allocate their efforts.

I hope that in this talk I’ve managed to not exaggerate or lie, but still turn your thoughts in a new direction. Please feel free to visit my website, members.aol.com/acockburn, and pull down the draft text and outlines I have written on Software as a Cooperative Game, and on the lightest, most people-tolerant methodology I think can work, Crystal Clear. Thank you.

The Advanced Agile Masterclass Tour

The Advanced Agile Masterclass is going on tour!

Dates – Location – Courtesy of
Mar 11-13 – Melbourne – Ed Wong, Craig Brown (register here)
Mar 18-20 – Wash, D.C. – Santeon (register here)
Mar 25-27 – Denver – Santeon (register here)
Apr 2-4 – Riga – Stockholm School of Economics in Riga (registration coming)
Apr 14-16 – Atlanta – Leading Agile (register here)
Apr 22-24 – Wash, D.C. – Santeon (register here)

(Yes, that last one is “Agile Business Analyst”, but if you know me, you know it will be advanced anyway :).

I look forward to seeing you in one of them…

If you have more questions, please email me at TotherAlistair@aol.com
