Saturday, October 26, 2013

Disconnect: why business does not "get" the idea of "experiments"

Experimentation is an important agile concept. Experimentation is a risk management tool: the idea is that one tries out a new approach as an "experiment" before committing wholesale to it. If the approach does not work out, one can quickly change course, and the experiment was a "contained failure".

What business hears is, "Let's play: we will experiment, and reward failure. We will spend your money trying out things that we are not sure will work."

This is a big disconnect, and the agile community bears much of the blame for this disconnect. It is the hubris of the agile community that emboldens it to speak about experiments and other agile practices as if the case for experiments need not be made. When speaking to senior IT people who have decades of pre-agile experience, one really should have some deference, and not expect them to swallow practices with names that are purposefully controversial - even frivolous ("team happiness").

As Carl Sagan said, "extraordinary claims require extraordinary evidence".

Perhaps instead of talking about "experiments", we should talk about "proof-of-concept" or "pilot". Experienced IT people understand those concepts, and they are really the same thing. It is counter-productive to use new terms - terms that have a shock effect - when trying to convince management in an established organization to adopt a new practice.

One thing that is new about the concept of experiments is that failure should not be viewed as negative: failure causes learning, and therefore better decisions are made after that point. It is a sad fact that in most large organizations, any kind of failure is damaging to one's career - even if the experiment was daring and innovative and caused learning. Agilists want to encourage a culture where prudent and careful risk taking is accepted and rewarded - even if it sometimes results in a contained failure.

But failure is still failure: it results in sunk costs - lost time, wasted effort, wasted money. Management needs assurance that failure - even contained failure - actually results in learning, and that the failure was unavoidable. They want to know that teams are being thoughtful and are using their best judgment and the best information available before they try something that results in failure. It is up to teams to instill that confidence, and it is up to management to be open to encouraging risk if the team demonstrates that it is cautious and thoughtful before it undertakes an experiment.

Are we upholding our end of the bargain?

Monday, October 21, 2013

Apply critical thinking at agile conferences

The book Critical Thinking Strategies For Success compares what it calls "sophistic thinking" with "strong sense critical thinking". The former occurs when doctrines are in place and everyone nods in agreement with anything that supports those doctrines.

Recently I attended AgileDC 2013, and I noted that there was a talk by someone whom I know to be incompetent and who does not know what he is talking about: in fact, he was fired from his last company for that reason. Yet he was presenting at AgileDC, and he has a large following in the community. That community does not know, however, that in real-world situations this person cannot perform, because his knowledge does not extend any deeper than platitudes. He does not have enough real-world experience to turn the platitudes into action.

Another person speaking at the conference laid out an approach that I know for a fact is not the approach used in the organization in which that person works, yet this approach was presented as a cornerstone approach. Again, after sufficient platitudes, all the heads nodded yes. More sophistry.

We are not doing enough critical thinking in the agile community. We need to be skeptical. Just because someone says something at a conference does not make it so, and where is the proof that they actually did what they say they did? Unlike scientific conferences, agile conferences are practitioner conferences: the work presented is not research that has been replicated under controlled conditions, and there is no enforced standard of ethics to ensure that presenters are held accountable by their respective organizations for presenting accurately. In fact, there is plenty of incentive to spin things, because doing so enhances the careers of the presenters and the reputations of the organizations sponsoring them. AgileDC - and most practitioner conferences - are more marketing than they are reality, and we have to keep that in mind.

These conferences are still valuable though. There are lots of good ideas that are shared: we just need to be skeptical because some bad ideas can be made to sound viable when they are not. There is networking that happens at agile conferences, and that is always worthwhile. But don't believe something just because it was presented at an agile conference. Practice critical thinking.

Sunday, October 20, 2013

Pre-agile helps one to understand agile

One of the greatest mistakes of the agile community is to compare agile with waterfall. There is an assumption that before there was agile, there was waterfall, and that most projects were waterfall. That is not my experience.

It is not fair to compare a well run agile project with a highly dysfunctional project using waterfall, yet that is the comparison that is routinely made.

During the 1980s I was on a string of very successful IT projects, and none of them were waterfall. On the other hand, I was on one project that was not successful, and it was a waterfall project.

The projects that were successful (all non-waterfall) were at two companies: Intermetrics, and CAD Language Systems. These companies built compilers and other advanced tools that were used to design hardware systems. This was major league programming. Our projects were characterized by small teams (3-15 people), lots of collaboration, evolutionary design, and lots of other practices that agile claims credit for.

These experiences have helped me enormously to understand agile, because I can look at agile practices and compare them to earlier practices that I saw work well - even though they were often done slightly differently - so I can discern what really makes those agile practices important. I can also discern that certain agile practices are not critical, because I saw projects be successful without those practices. Standups for example: of all of the successful projects that I was on during the 1980s, none used the practice of standups, and so I am confident in saying that standups are not important.

Another practice that is a red herring is the team room: during the 1980s, programmers had their own offices (at least they did everywhere I worked), yet we collaborated continually - separate offices were not an impediment, as long as we were co-located. Co-location was important. And I distinctly recall closing my door from time to time so that I could maximize the quiet to think deeply about a problem, and then open it again when I had finished thinking. Thus, the ability to shut out the world to think deeply was also important. The open door was a universal signal that you were open to someone walking in to discuss something: the closed door was the reverse.

Each of those projects that were successful had a person who was responsible for making sure that everything fit together: someone who was charged with thinking about the entire system in an end-to-end manner. That was essential, and when there was no such person, or when the person was incompetent (that was the case on the waterfall project) things went wrong very quickly. I have also seen agile projects flounder for lack of such a person. The theory that the entire team is responsible for design is kind of like communism: it is a nice egalitarian theory, but in practice it seldom works - I won't say "never" because there are always exceptions. Generally speaking, there needs to be a qualified person whose main job is to think end-to-end, even if that person also does coding. The real issue is what type of person that should be, because at other times in my career I have had nightmare project managers or technical leads who almost single-handedly made everything go wrong (the waterfall project was like that).

In the course of these 1980s projects, the two things that I found to really make a difference in a project were:
    ▪    Small team: if there are so many people that they cannot keep track of what they are each working on, then communication breaks down and code diverges.
    ▪    Servant leadership: Someone who provides gentle leadership to the team: not someone dictatorial, but someone who keeps track of what everyone is doing and what challenges they have on a daily basis; ensures that people discuss issues that need to be discussed; asks hard questions, listens, and proposes solutions but rarely dictates them; and who also understands all of the issues - someone who is knowledgeable about what the team is working on and how it will work. In my experience, self-organization cannot substitute for a good servant leader.

From there, things kind of take care of themselves! With good servant leadership, you will end up with continuous daily regression testing (we did), you will have information radiators on testing results and on the evolving design (we did), you will have a continuous feature-driven or story-driven process with testable features or stories (we did), you will have continual design discussions as needed throughout the project (we did), the team members will feel empowered to work in their own way and contribute ideas and innovation (we did and did), and there will be a sense of harmony, order, and calm rather than an atmosphere of crisis and frustration. Servant leadership is really the key: everything else will follow, as long as the project is not hamstrung from the beginning by having a team that is too large or by having other poisonous situations imposed from the outside.

Even the practice of developing requirements incrementally is not new: circa 1980 I worked at American Electric Power as a nuclear physics simulation engineer, and there was a programming team that supported us. The way we worked with them was this: a programmer would sit with us and talk about what we wanted, go away and develop some of that, then come back and show us to get feedback, then go away and build some more - sound familiar?

So when I reflect on the Agile Manifesto today - or when I did after it was published - I see it as a rejection of the wrong paths that some projects - waterfall projects - took before that, and a return to what worked. It was not new; rather, it was a validation of key things that had worked in the past, and that historical perspective helped me to understand the motivation behind each value and each principle and what its intent really was. And yes, agile does add some tweaks to some of those historical practices: that is a valuable contribution, but the historical perspective is just as - I would say more - valuable.

Saturday, October 19, 2013

I want to run an agile project

There is a very humorous cartoon on YouTube called “I want to run an agile project”. It depicts a young and enthusiastic project manager who sets out to run an agile project in an organization that is not accustomed to agile. The video follows the poor project manager as he goes from department to department trying to overcome one institutional barrier after another.

Of all the barriers that he encounters, only one pertains to the software development team: it is a scene in which he tries to convince two team members to pair and collaborate. All of the other barriers have to do with policies and rules that the organization has – rules that impede the agile process.

This is why agile IT transformation is actually only partly about agile teams: it has much more to do with the way that various organizational functions are run, including IT and its internal functions, as well as external functions such as contracting. Agile transformation consists of convincing and educating these various stakeholders; it also consists of training and coaching teams, but if one does not give equal – or greater – attention to the impediments that are external to the teams, then the transformation will proceed very slowly and possibly lose momentum.

The problem of agile transformation is therefore not so much a problem of scaling agile: it is a problem of enabling agile. Scaling pertains to having many teams on a project, or coordinating multiple agile projects. That is certainly part of the problem of becoming agile, but becoming agile must also address how teams are supported by the various IT support functions that large IT organizations have, including data center operations, enterprise architecture, IT risk management and governance, IT security, data architecture, release management, IT portfolio management, and so on. Many of these functions need to change to accommodate agile, but these changes are huge and impact the missions of these groups, and so this change must be worked in a gradual and inclusive manner. This is an enterprise change management process – often the province of management consulting – informed by agile values and practices. It is much more than “scaling agile”.

In undertaking an agile transformation, one must focus on the goal. The goal is not to implement agile: that is not a business goal. Rather, the goal is usually to make the organization more nimble (“agile”, in the dictionary sense) – i.e. to increase business agility. Business agility is not the same thing as agile in the sense of agile software development. Agile software development is a tool for enabling business agility, but business agility is more than that and differs in many ways. Some business agility strategies rely on significant command and control – approaches that are antithetical to agile software development. Melding agile software development with the way the rest of the organization works, so as to enable business agility – and doing so with approaches that are compatible with the strategies that are being adopted by the other parts of the organization – is the challenge of an agile IT transformation.

Problems with facilitation methods

Facilitation is a core skill for agile coaches, and most of us are pretty good at it. There are some practices I have seen, however, that can be problematic.

Dot Voting

The whole point of dot voting is to rank things by importance, priority, urgency, or some other scale - and to use the participants as the deciders so that they feel ownership of the ranking.

But what if the participants do not have the judgment needed to properly rank something?

For example, consider a group of diverse participants - including many agile novices - that is ranking the agile practices that they want to focus on. The ranking will most likely end up reflecting the sources of pain that they currently feel. What it will likely not reflect is the root causes, because it takes a Ri level agilist to understand root causes. And we all know that if we do not address root causes, we will not solve a problem.

So the implication here is that if the facilitator has not drilled into the practices and discussed root causes with the participants, the root causes will not be reflected in their ranking, because the participants are diverse and many are therefore new to agile and will not appreciate the root causes.

The lesson: be careful what you rank, and what you do with the ranking. In the example above, if the goal is to identify practices to talk about, and talk through root causes, voting will achieve that. But if the goal is to identify what practices to focus on, it will not be effective, because the participants do not have the judgment required to make good choices about that.

Not Allowing the Facilitator To Voice an Opinion

A central aspect of facilitation is that the facilitator should not bias the group. But what if the facilitator is an expert in the topic being discussed? What do you do then?

We probably all know the answer to this: you guide the group by asking hard questions, rather than telling them the answer. In fact, they probably have some local domain knowledge that you do not. But what if the group needs to be informed by your expertise?

One technique is to explicitly take off your facilitator "hat" by saying something like, "Ok, allow me to explain what I know on this", and then give a brief explanation based on your expertise. When doing this, I usually punctuate it by saying something like, "So that is the accepted approach to that, but it is not necessarily what we have to do here, because our situation might be unique". That last part lets the group know that they are still in control: they can decide to go against standard practice. Every time you share your expertise, you again re-iterate that it is an accepted view, but that the group can depart from that if it wants to. You then resume facilitating and have complete willingness to record and support choices that go against what your expertise advises. You have done your job to inform, but then the group decides the content of the discussion.

Putting Cards On the Wall

Putting cards on the wall is a long-standing practice for facilitation. I personally first encountered this technique when I participated in a six week (all day, six days a week) modeling session with Peter Coad, David Anderson, Jeff DeLuca and others in Singapore in the late '90s. The purpose of this technique is to encourage people to voice their opinion on something: if you just ask for opinions, some people remain silent. If you give them cards and tell them that they have to write something, they will. It gets all the ideas out in the open.

The problem is, people often write small, or illegibly, and so you cannot read what they wrote unless you go up close to the cards. And if you have a group of more than five people, it starts to become difficult for people to see past others as they crowd around - especially for shorter people. Further, if there are many cards (say, more than 20), some people will not read them all.

Having people stand up close to the cards has another problem: standing uses working memory and consumes a tiny bit of your focus, and standing in close proximity to other people who are shifting around uses even more working memory and focus. Try this experiment: while standing, perform some long division in your head. Now sit, and repeat the experiment (using different numbers of course). You will find that while sitting, you can think more deeply and therefore do the arithmetic more easily. You might think that standing is something that you can do on autopilot, but it actually does consume some mental energy. Sitting, with everyone else in the room stationary, allows you to focus better on purely mental tasks. Sitting is therefore better for the participants of a facilitated session if you want to get their best - their deepest - thoughts. This does not apply to the facilitator because the facilitator's attention is mostly on the group - not the topic. The facilitator has to focus somewhat on the topic, but his or her primary focus is on the people, and the direction things are taking, and standing is also important for the facilitator in order to establish a sense of authority over the process. The people who need to think deeply are the participants.

One myth about the use of cards is that writing cards enables things to go more quickly. The purpose of facilitation is to establish a shared way of thinking about a problem. That means that all ideas that are expressed - as cards or otherwise - need to be mentally processed, one by one, by everyone in the room. It is an inherently serial process, so don't be fooled into thinking that you gain time by having people simultaneously writing their ideas on cards. Saving time through concurrency is not the purpose of the cards. Each card still needs to be read by each person, or the facilitator can read each card aloud. But if people cannot see the cards, they cannot then sit back and reflect on them: they will not remember what each card said and they will not be able to "connect the dots" in their heads. Even if you do affinity analysis, it is often the case that very critical things are mentioned by some cards in an affinity group, and so just looking at the grouping is likely to miss major ideas.

In order to enable participants to sit, and to ensure that all ideas are heard and read and can be contemplated by everyone, I re-write each card on the whiteboard. I do this as I read each card, so it consumes little additional time. I write each idea large and cleanly (legible writing is an important facilitation skill) so that everyone in the room can read it, and then we discuss it. Once all ideas have been written and discussed, we can discuss all the ideas as a whole, coming up with holistic strategies that address all of the ideas. I find that this works much, much better than having cards on the wall.

Sunday, October 13, 2013

Apple Donut Headquarters - Agile, or Anachronistic?

Everyone reading this post has no doubt heard about Apple's new headquarters, under construction:
http://bangphotos.smugmug.com/001-News-1/Bay-Area/apples-proposed-new-office/i-QgsWpVh/0/L/ssjm1013apple004-L.jpg

From an agile perspective, the Apple Donut seems very "agile": it promotes lots of collaboration, because it is only four stories (no getting on an elevator to go see someone), and it encourages one to walk past other teams on the way to a meeting or one's primary work area.

But on the other hand, it seems to me like the logical conclusion of 20th century industrial age thinking, in which masses of people travel to a central location every day, work intensely for a hierarchical organization, and then travel back at the end of the day - kind of like Metropolis (http://www.imdb.com/title/tt0017136/). A glance at the photo (see link above) of the planned Apple headquarters shows the massive highway leading underground to the parking area - not too unlike the river of people flowing in and out of Metropolis every day! I can almost hear the factory siren signaling the start of work. And as the San Jose Mercury News put it, the new headquarters "promises to bring a world-class real-estate project - along with a lot of traffic congestion - to the heart of Silicon Valley." I don't know about you, but it takes a pretty high incentive for me to suffer an unpleasant grid-locked commute every day.

To be fair, the elite of Metropolis did not espouse agile principles: the movie depicts no signs of collaboration, but rather only hierarchical control with pre-defined jobs.

But is it really that different?

No matter, because a much more pertinent question is, Is Apple a model for other organizations? Should we be trying to learn from it, to inform our guidance of our clients, for how to structure their organizations?

I contend that the answer is usually no, and in the cases where the answer is no, it is emphatically no.

The reason is that most companies are not like Apple: most companies are not as "cool" as Apple, and don't have an inspiring mission the way that Apple, Google, and some of the other most glamorous tech companies have. Most companies are relatively humdrum - as is most of the IT work within them - and such companies cannot attract the best and brightest as a mere result of their mission or their "cool factor". Most companies have to attract IT workers based on other traits - including working conditions and compensation. In other words, if getting in and out of the workplace every day is a miserable experience, consuming two hours of one's day in a horrible commute, then the organization had better (1) offer very high compensation, (2) be very "cool" to work for, or (3) expect to obtain only the least qualified talent, because the best talent will choose opportunities that offer either #1 or #2.

But what is the alternative? Apple can get away with a four story Tower Of Babel absurdity, because it is so "cool". But what about the countless other companies? What is the right kind of agile workplace for them?

We have to be careful here, because agile principles emphasize certain things like face-to-face conversation and working together in real time, and it is easy to take those things to their logically absurd conclusion and arrive at the Apple headquarters. But what is the right way to scale those agile principles? Does it mean forcing every conversation to be face-to-face? Every meeting to be in-person? Every collaboration to be real time?

Of course, the answer is no. In fact, global trends are the reverse. In our increasingly global economy, we see more and more workers who are needed in many geographically separated places on the same day, because their skills are more valuable than the value of proximity. We also see an inexorable trend toward flexible working patterns. Two-income households and the removal of barriers preventing more flexibility - combined with the increasingly global nature of work - are making this come about. In the IT world, early agile inserted a small hiccup in this trend, by sending IT people back to the office for core hours to be on teams, but the overall trend is there. Now that agile has matured, and the focus is on continuous delivery, teams are discovering that they need to be in continual contact with diverse stakeholders in other parts of the organization - people who cannot be physically present - and it is often the case that the business stakeholders are in other parts of the country.

Early visionaries such as Alvin Toffler were not wrong on this: the trend is toward less commuting, more flexibility, a return to the organization of populations around communities rather than commuting corridors, and the substitution of electronic collaboration for physical presence. Commuting was a 20th century anomaly.

So the question is not whether agile values are right - they are - but rather, how does one achieve agile when the trend is toward more work flexibility, more time zones, and workers who are needed in many places at once - and in an environment in which the best workers can get jobs that give them the flexibility that they need?

That is the real question for agile. And Apple's new headquarters does not answer that question.