<b>Value-Driven IT</b><br />
<i>Achieving Agility and Assurance Without Compromising Either</i> - a blog by Cliff Berg<br />
<br />
<h1>Both REST and JSON suck - really!! (2016-04-22)</h1>
Alan Kay <a href="http://www.drdobbs.com/architecture-and-design/interview-with-alan-kay/240003442" target="_blank">once said</a>,<br />
<blockquote class="tr_bq">
<i>The Internet was done so well that most people think of it as a natural resource like the Pacific Ocean, rather than something that was man-made. When was the last time a technology with a scale like that was so error-free? The Web, in comparison, is a joke. The Web was done by amateurs.</i></blockquote>
Programmers experience the pain of the Web's poor implementation on a daily basis. Two glaring examples are REST and JSON.<br />
<br />
You might wonder: Wait - JSON was created as a better alternative to XML, so isn't it really better? And REST was created as a better alternative to SOAP and WSDL - so isn't that better also?<br />
<br />
Well, better, yes, but that's a pretty low bar. Let's not dredge back up WSDL and SOAP - let's please leave those in the trash can of horrors where they belong. REST and JSON are sufficiently terrible that we don't need to go back to things that were even more terrible.<br />
<br />
So why is REST terrible? It all started with the notion that inter-system messages need to be human readable. Really, I think it started with firewalls: in the late '90s programmers wanted to make remote requests across firewalls, and existing protocols had trouble doing that, so programmers turned to HTTP, which was designed for human readable content. Programmers wanted to send inter-system messages, so they grabbed XML, a convenient data format that could easily be pumped over HTTP. Then OASIS and W3C got into the mix, and soon we had WSDL and a raft of other standards - all of which repeat the mistakes of HTTP, namely the lack of type safety, and the lack of scoping of a standard, so that you cannot figure out what you don't know that you need to know. For example: which HTTP headers are appropriate for the data you are sending? Header types are defined in an ever-growing list of ever-updated RFCs, and there is no "header compiler" - no way to validate your headers or content body format without actually running the code.<br />
<br />
HTTP is, frankly, a mess.<br />
<br />
REST tried to simplify the horrors of WSDL by defining a simpler approach. After all, all we are trying to do is send a friggin' message. REST says: just put the message in an HTTP payload - forget all the WSDL definition. The client will parse the payload and know what to do.<br />
<br />
The problem is, clients now have to parse the message. Message parsing is something that should be done behind the scenes - it should be automatic. Client and server endpoint programs should be able to work with an API that enables them to send a data structure to another machine, or receive a data structure - in the language in which they are working. Application programmers should not have to parse messages.<br />
<br />
Languages like Go make JSON and XML parsing easier because parsing support is built into the standard library, but it is still a lot of work - and a lot of code. E.g., in Go, a JSON stream can be parsed into a data structure - but it is not the data structure you want: it is a hashtable of "interface{}" types. You have to programmatically convert the hashtable into your desired strongly typed object. It is all quite clunky.<br />
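<br />
Here is a minimal sketch of the problem in Go (the message and field names are made up for illustration). The first decoding produces the generic hashtable described above; the second decodes into the struct you actually want - but either way, nothing checks the message shape until the code runs:<br />
<blockquote class="tr_bq">
<pre style="font-family: monospace;">package main

import (
    "encoding/json"
    "fmt"
)

// Customer is the strongly typed structure we actually want.
type Customer struct {
    Name string `json:"name"`
    Age  int    `json:"age"`
}

func main() {
    msg := []byte(`{"name": "Alice", "age": 30}`)

    // Without a schema, Unmarshal produces map[string]interface{},
    // and every field access requires a type assertion.
    var generic interface{}
    if err := json.Unmarshal(msg, &generic); err != nil {
        panic(err)
    }
    m := generic.(map[string]interface{})
    name, _ := m["name"].(string)
    age, _ := m["age"].(float64) // JSON numbers decode as float64
    fmt.Println(name, int(age))

    // Decoding straight into the struct is better, but the message
    // is validated only at runtime, when it is actually parsed.
    var c Customer
    if err := json.Unmarshal(msg, &c); err != nil {
        panic(err)
    }
    fmt.Println(c.Name, c.Age)
}</pre>
</blockquote>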
<br />
JSON was created as a better alternative to XML, which is very hard to read. However, JSON suffers from the fact that it is still a message syntax - that is, one writes an actual message in JSON, rather than defining a message schema. Thus, there is no compiler - and therefore no way to check a JSON message until you actually run your code and send a message. Actually, that is not entirely true now - someone has realized this problem and invented a <a href="http://json-schema.org/" target="_blank">JSON schema tool</a>. But then if one has defined a schema, why code JSON messages by hand? - why not generate the code that does the message marshaling and unmarshaling?<br />
<br />
Ironically, Google - the creator of Go - has come up with <a href="https://developers.google.com/protocol-buffers/" target="_blank">Protocol Buffers</a> as an alternative to REST and JSON. And guess what? - messages are not human readable, and the programmer only defines the message schema - all the parsing code is automatically generated. Hmmm - that's what CORBA did. Why did Google do this? Answer: it turns out that message processing efficiency matters when you scale. Imagine that REST/JSON messages require X CPU cycles to marshal and unmarshal, and Y amount of bandwidth, and that the same application using protocol buffers requires X/100 CPU cycles and Y/100 bandwidth. If X and Y are Internet-scale, that translates to real dollars - like needing ten machines instead of a thousand. Google has switched to Go for the same reason: natively compiled code runs faster than scripted code - a lot faster - and that translates to less compute resources.<br />
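<br />
For illustration, here is what a Protocol Buffers schema looks like (the message and field names are invented). The programmer writes only this; the protoc compiler generates the marshaling and unmarshaling code for the target language, so application code never parses messages by hand:<br />
<blockquote class="tr_bq">
<pre style="font-family: monospace;">// customer.proto - an illustrative schema
syntax = "proto3";

message Customer {
  string name = 1;  // field numbers, not names, go on the wire
  int32 age = 2;
}</pre>
</blockquote>
This is exactly the division of labor that CORBA's IDL compiler provided: a schema, a compiler, and generated stubs - but with a compact binary encoding instead of text.<br />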
<br />
So we are back to the future. We have come full circle. What a circuitous detour. So much wasted effort.<br />
<h1>Why Agile task planning does not work (2015-12-29)</h1>
In <i>Extreme Programming Explained</i>, Kent Beck wrote,<br />
<blockquote class="tr_bq">
<i>In XP, the [elements of planning] are the stories. The [scope units] are the estimates attached to the stories. The [scope constraint] is the amount of time available.</i></blockquote>
Yet, it seems like every time I have coached an Agile team, the team is compelled by management to do task level planning - that is, decomposing each story into work tasks. On top of this, most of the popular Agile planning tools - VersionOne, TFS, Rally, and Jira - place a heavy emphasis on task level planning: e.g., in Rally, you cannot define a story without defining its tasks. As someone who has used Rally a great deal, I found this to be a horrible nuisance.<br />
<br />
Task level planning runs counter to Agile in many ways, and I have seen task planning greatly undermine Agile teams. Some of the problems with task level planning in an Agile project are,<br />
<blockquote class="tr_bq">
<b>1. Task level planning is excessively time consuming</b>; and since planning involves the entire team, this ties up the team for too much time - the team would rather get to work. </blockquote>
<blockquote class="tr_bq">
<b>2. Task level estimates are usually <i>wildly wrong</i></b>, in contrast to story level estimates - which are often very accurate, in terms of their consistency. </blockquote>
<blockquote class="tr_bq">
<b>3. </b>The <b>actual tasks needed to complete a story do not reveal themselves</b> until the developer starts working on the story. </blockquote>
<blockquote class="tr_bq">
<b>4. </b>Partly because of #3, <b>adding up a story's tasks does <i>not</i> yield the time required</b> to complete a story. </blockquote>
<blockquote class="tr_bq">
<b>5. Task completion does <u><i>not</i></u> prove progress</b> - only story completion does: that is the entire point of stories - that a story represents demonstrable progress, and that completion is defined by the story's acceptance criteria and the team's definition of done for its stories. Tasks do not have these attributes. This is central to Agile: waterfall projects are notorious for being "on schedule" until release day, when they suddenly need more time - yet the project hit all of its prior milestones, with all tasks such as "design", "code", etc. completing - but with nothing actually demonstrable. It is the crucible of running software, passing acceptance tests, that proves progress - nothing else does. </blockquote>
<blockquote class="tr_bq">
<b>6. Completion of a task often (usually?) does not mean that the task is really complete</b>: since tasks are often inter-dependent, completing one task might reveal that another task - which was thought to be done - is actually not done. For example, a test programmer might write some acceptance tests, but when the app programmer runs them against the story's implementation, the programmer finds that some tests fail that should pass - indicating that the tests are wrong, and meaning that the testing task was not actually done - yet it had been marked as done. Only running software, passing tests, proves that the story is done. Task progress is suspect.</blockquote>
<br />
That said, some level of task planning is useful. For example, task planning makes sense when more than one person is involved in implementing a story, such as a test programmer and an app programmer. One can then have tasks for the story, such as "write automated acceptance tests" and "write unit tests and app code". <b><u><i>But</i></u></b>, progress should not be measured based on task completion; and it is a total waste of time to come up with estimates for these tasks ahead of time. Instead, it is better to have people estimate a task on the spot, the day they plan to work on it - that is likely to be more accurate than an estimate done a week or two before.<br />
<br />
<br />
Some of the consequences of paying too much attention to tasks in an Agile project are,<br />
<ul>
<li>Parties external to the team, such as the project manager, <b>start to think of the work at a task level</b>, and report progress based on that, with all of its pitfalls (see #5 above).</li>
<li>Parties who pay attention to task estimates, such as the team lead, will be <b>constantly disappointed</b>, because of #2,3,4 above.</li>
<li>Teams will <b>lose an entire day</b> or more to planning each sprint, because of #1.</li>
<li><b>Team members will collaborate less</b>, feeling that "I did my task - now it's in your court", instead of working together to get app code to pass its tests.</li>
</ul>
Even though many Agile authors talk about tasks, and many "Agile" tools support task level planning, task level planning is antithetical to Agile. As the <a href="http://agilemanifesto.org/" target="_blank">Agile Manifesto</a> says,<br />
<blockquote class="tr_bq">
<i>Working software is the primary measure of progress.</i></blockquote>
Not task completion. Measuring task completion is waterfall. It's Earned Value Management. It's Gantt charts. It is not Agile.<br />
<h1>The "go" language is a mess (2015-12-25)</h1>
I have been using go for the past six months, in an effort to learn a new natively compiled language for high performance applications. I have been hoping that go was it - sadly, it is not.<br />
<br />
Go is, frankly, a mess. One of its creators, Ken Thompson of Unix/C fame, called go an "experiment" - IMO, it is an experiment that produced Frankenstein's monster.<br />
<h2>
It is OO, but has arcane and confusing syntax</h2>
Go is object oriented, but unlike most OO languages, the syntax for defining interfaces and concrete objects is completely different: one defines an "interface" and then one defines a struct - and these are quite different things. But also unlike many OO languages, the methods of a concrete object type are not defined with the object type - they are defined outside the object definition - in fact, they can be in any file that is labeled as belonging to the "package" in which the object type (struct) is defined. Thus, you cannot tell at a glance what a type's methods are. On top of that, there is no syntax for saying that "concrete type A implements interface I", so you cannot tell if a concrete type implements an interface unless you try to compile it and see if you get an error: the rule is that a concrete type implements an interface if the concrete type has all of the methods that are defined by the interface - and yet the concrete type's methods are strewn all over the place. What a mess.<br />
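<br />
A minimal sketch of what this looks like (type and method names are made up): the interface, the struct, and the struct's methods are three separate declarations, and nothing states that the struct implements the interface:<br />
<blockquote class="tr_bq">
<pre style="font-family: monospace;">package main

import "fmt"

// The interface is declared in one place...
type Greeter interface {
    Greet() string
}

// ...the concrete type somewhere else...
type Person struct {
    Name string
}

// ...and its methods can live in any file of the same package.
// No declaration says "Person implements Greeter"; the compiler
// checks that only where a Person is actually used as a Greeter.
func (p Person) Greet() string {
    return "hello, " + p.Name
}

func main() {
    var g Greeter = Person{Name: "Ada"}
    fmt.Println(g.Greet())
}</pre>
</blockquote>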
<br />
As a result, there is no language-provided declaration of a type network - interface types and the concrete types that implement them. You have to keep track of that on a piece of paper somewhere, or use naming conventions to link them. The reason for this chaos escapes me, as I have not seen any helpful language feature that results from it - you <i>cannot</i> extend types dynamically, so I see no advantage to the forceful decoupling of interface types, concrete types, and the methods that belong to the concrete types. Perhaps this was part of the experiment - and with terrible results.<br />
<h2>
Its polymorphism is broken</h2>
Go lets you define an interface and then define concrete types (structs) that implement that interface (and possibly others). Yet, the way that this works is very peculiar and is likely to trip up programmers. E.g., if you create an instance of a concrete type and then call an interface method on it, you will get what you expect - the right method for the concrete type will be called. But if the call reaches the method indirectly - through a method that the concrete type merely inherited from an embedded type - the wrong one might be called: the method defined on the embedded "abstract" type. Go does not actually have abstract types, so to create one you have to define a struct and give it a dummy method for each method that you don't want to implement. My point here is that the behavior of the polymorphism is statically determined and so depends on the context - and that is very confusing and likely to introduce subtle errors - it defeats most of the value proposition of polymorphism.<br />
<br />
You want an example? Try this code:<br />
<blockquote class="tr_bq">
<pre style="font-family: monospace;">package main

import "fmt"

type Resource interface {
    getParentId() string
    printParentId()
}

type Dockerfile interface {
    Resource
}

type InMemResource struct { // abstract
}

func (resource *InMemResource) getParentId() string {
    fmt.Println("Internal error - getParentId called on abstract type InMemResource")
    return ""
}

type InMemDockerfile struct {
    InMemResource
    RepoId string
}

func (dockerfile *InMemDockerfile) getParentId() string {
    return dockerfile.RepoId
}

func (resource *InMemResource) printParentId() {
    fmt.Println(resource.getParentId())
}

func main() {
    var curresource Resource = &InMemDockerfile{
        InMemResource: InMemResource{},
        RepoId: "12345",
    }
    curresource.printParentId()
}</pre>
</blockquote>
<br />
When you run it, you will see that the <span style="font-family: "courier new" , "courier" , monospace;">getParentId</span> method defined by <span style="font-family: "courier new" , "courier" , monospace;">InMemResource</span> will be called - instead of the <span style="font-family: "courier new" , "courier" , monospace;">getParentId</span> defined by <span style="font-family: "courier new" , "courier" , monospace;">InMemDockerfile</span> - which is the one that, IMO, should be called, because the object (struct) is actually an <span style="font-family: "courier new" , "courier" , monospace;">InMemDockerfile</span>. Yet if you call <span style="font-family: "courier new" , "courier" , monospace;">curresource.getParentId</span> directly from the main function, you will get the expected polymorphic behavior.<br />
<br />
The reason is this: if you add a method,<br />
<blockquote class="tr_bq">
<pre style="font-family: monospace;">func (dockerfile *InMemDockerfile) printParentId() {
    fmt.Println(dockerfile.getParentId())
}</pre>
</blockquote>
to the above program, it works. Thus, the original program misbehaved because one of the methods being called did not have an implementation on the concrete type (InMemDockerfile) - the version inherited from the embedded type effectively obscured the actual type from the final method in the call sequence. Programmers who are accustomed to dynamic method dispatch, as in Java, will find this behavior surprising.<br />
<h2>
Type casting affects reference value</h2>
Another peculiarity of the go type system is that if you compare an interface value with nil, the comparison might fail (so the value is not nil), but if you then type-assert (cast) it to a concrete pointer type and compare that with nil, the comparison can succeed. Here is an example:<br />
<blockquote class="tr_bq">
<pre style="font-family: monospace;">var failMsg apitypes.RespIntfTp
...
if failMsg == nil {
    fmt.Println("failMsg is nil")
} else {
    fmt.Println("failMsg is NOT nil")
    var isType bool
    var fd *apitypes.FailureDesc
    fd, isType = failMsg.(*apitypes.FailureDesc)
    if isType {
        if fd == nil {
            fmt.Println("fd is nil!!!!! WTF??")
            if failMsg != nil {
                fmt.Println("And failMsg is still not nil")
            }
        } else {
            fmt.Println("Confirmed: fd is not nil")
        }
    } else {
        fmt.Println("Cast failed: NOT a *apitypes.FailureDesc")
    }
}</pre>
</blockquote>
The "And failMsg is still not nil" line executes; draw your own conclusions - but regardless, I expect this unexpected behavior to be the source of a great many bugs in programmers' code.<br />
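<br />
For those who want a self-contained demonstration, here is a minimal sketch of the same behavior (the type names are made up). The underlying rule is that an interface value compares equal to nil only when both its dynamic type and its value are nil - so an interface holding a nil typed pointer is itself non-nil:<br />
<blockquote class="tr_bq">
<pre style="font-family: monospace;">package main

import "fmt"

type Resp interface{}

type FailureDesc struct{}

func newFailMsg() Resp {
    var fd *FailureDesc // a nil pointer...
    return fd           // ...wrapped in a non-nil interface value
}

func main() {
    failMsg := newFailMsg()
    fmt.Println(failMsg == nil) // false: the interface carries a type
    fd, isType := failMsg.(*FailureDesc)
    fmt.Println(isType, fd == nil) // true true: the pointer inside is nil
}</pre>
</blockquote>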
<h2>
Its compilation rules are too confining</h2>
With C, one compiles to a binary that one can then link with or save somewhere. With go, the binaries are managed "magically" by the compiler, and you have to "install" them. Go's approach tries to make compilation and binary management simple for stupid people - yet anyone using go is not likely to be stupid, and anyone using go will likely want to be able to decide how they compile and manage binaries. In order to get out of the go "box" one has to reverse engineer what the tools do and take control using undocumented features. Nice -<b><i> not!</i></b><br />
<h2>
Its package mechanism is broken</h2>
Go's package rules are so confusing that when I finally got my package structure to compile, I quickly wrote down the rules I had derived, so that I would not have to repeat the trial and error process. The rules, as I found them to be, are as follows (a directory sketch follows the list):<br />
<ol>
<li>Package names can be anything.</li>
<li>Subdirectory names can be anything - as long as they are all under a directory that represents the project name - that is what must be referenced in an install command. But when you refer to a sub-package, you must prefix it with the sub-directory name.</li>
<li>When referring to a package in an import, prefix with project name, which must be same as main directory name that is immediately under the src directory.</li>
<li>Must install packages before they can be used by other packages - cannot build multiple packages at once.</li>
<li>There must be a main.go file immediately under the project directory. It can be in package “main”, as can other files in other directories.</li>
</ol>
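Here is the kind of GOPATH-era directory layout that these rules imply - a sketch only, with invented project and package names:<br />
<blockquote class="tr_bq">
<pre style="font-family: monospace;">src/
  myproject/          &lt;- project directory, immediately under src
    main.go           &lt;- package main (rule 5)
    server/           &lt;- sub-package, imported as "myproject/server"
      server.go
    apitypes/         &lt;- imported as "myproject/apitypes" (rule 3)
      types.go</pre>
</blockquote>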
Are there other arrangements that work? No doubt - this is simply what I found to work. The rules are very poorly documented, and they might even be specific to the tool (the compiler) - I am not sure, but it seems that way. And <a href="http://dtrace.org/blogs/wesolows/2014/12/29/golang-is-trash/" target="_blank">here is an interesting blog post</a> about the golang tools.<br />
<h2>
It is hard to find answers to programming questions</h2>
This is partly because of the name, "go" - try googling "go" and see what you get. So you have to search for "golang" - the problem is that much of the information on go is not indexed as "golang" but as "go", because if someone (like me) writes a blog post about go, he/she will refer to it as go - not as "golang" - so the search engines will not find it.<br />
<br />
Another reason is that the creators of go don't seem to know that it is their responsibility to be online. Creators of important tools nowadays go online and answer questions about the language, and that results in a wealth of information that helps programmers to get answers quickly; with go, one is lucky to find answers.<br />
<h2>
The Up Side</h2>
One positive thing that I did find was that go is very robust under refactoring. I performed major reorganizations of the code several times, and each time, once the new code compiled, it worked without a single error. This is a testimony to the idea that type safety has value, and go has very robust type safety. I would venture to say that for languages such as go, unit testing is largely a waste of time: I found a full suite of behavioral tests to be sufficient, because refactoring never introduced a single error. This is very different from languages such as Ruby, where refactoring can cause a large number of errors because of the lack of type safety. For such languages, comprehensive unit tests are paramount - and that imposes a large cost on the flexibility of the code base, because of the effort required to maintain so many unit tests.<br />
<h2>
Summary</h2>
When I finish the test project that I have been working on, I am going to go back to other languages, or perhaps explore some new ones. Among natively compiled languages, the "rust" language intrigues me. I also think that C++, which I used a lot many years ago, deserves another chance - but with some discipline to use it in a way that produces compact and clear code, because C++ gives you the freedom to write horribly confusing and bloated code. I am not going to use go for any new projects though - it has proved to be a terrible language for so many reasons.<br />
<h1>Why web services are a mess (2015-11-21)</h1>
I smile when I hear younger programmers talk about Web services; but my smile is a smile of sadness - because what I am thinking is that they don't know what they are missing. They don't know just how broken things are.<br />
<br />
A colleague of mine recently had to implement a Web app that accesses a set of REST services running in another Web service. Being a little stale in the current tools - because they change yearly - he had to learn a set of new frameworks. He got up to speed quickly and things went pretty well until he tried to access the REST service directly from the Javascript side (bypassing his Web service) - at that point he hit the "CORS" wall: the Web service did not set the "Access-Control-Allow-Origin" header.<br />
<br />
He worked around that and things went fine until he tried to use a REST method that required some form parameters and also required a file attachment. He ended up wading through headers and the "multipart/form-data" versus "application/x-www-form-urlencoded" <a href="http://www.w3.org/TR/REC-html40/interact/forms.html#h-17.13.4" target="_blank">mess</a>. It took him a week to figure out what the problem actually was and use his framework to format things the way that the REST service was expecting.<br />
<br />
It doesn't have to be this way. Frankly, the foundation of the Web - HTTP - is a horrendous mess. From a computer science and software engineering perspective, it violates core principles of encapsulation, information hiding, and maintainability. HTTP mixes together directives for encoding with directives for control, and it is a forest of special cases and optional features that are defined in a never-ending sequence of add-on standards. The main challenge in using HTTP is that you cannot easily determine what you don't know but what matters for what you are doing. Case in point: my friend did not even know about CORS until his Javascript request failed - and then he had to Google for the error responses, which contained references to CORS, and then search out what that was, and eventually look at headers (control information). Figuring out exactly what the server wanted was a matter of trial and error - the REST interface does not define a clear spec for what is required in terms of headers for the range of usage scenarios that are possible.<br />
<br />
Many of the attacks that are possible in the Web are the result of the fact that browsers exchange application level information (HTML) that places control constructs side by side with rendering constructs - it is this fact that makes Javascript injection possible.<br />
<br />
Yet it could have been like this: Imagine that one wants to send a request to a server, asking for data. Imagine that the request could be written as in a programming language, such as,<br />
<blockquote class="tr_bq">
<span style="font-family: "courier new" , "courier" , monospace;">getCustomerAddress(customerName: string) : array of string</span></blockquote>
Of course, one would run this through a compiler to generate the code that performs the message formatting and byte level encoding - application level programmers should not have to think about those things.<br />
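<br />
In fact, Go's standard net/rpc package works roughly this way, and gives a feel for what such an API could look like (the service and field names here are invented for illustration). The application code makes a typed call; all formatting and encoding happens behind the scenes:<br />
<blockquote class="tr_bq">
<pre style="font-family: monospace;">package main

import (
    "fmt"
    "log"
    "net"
    "net/rpc"
)

type AddressRequest struct {
    CustomerName string
}

// CustomerService exposes a typed method; the rpc package handles
// all marshaling - there is no hand-written parsing anywhere.
type CustomerService struct{}

func (s *CustomerService) GetCustomerAddress(req AddressRequest, reply *[]string) error {
    *reply = []string{"1 Main St", "Springfield"}
    return nil
}

func main() {
    // Server side.
    rpc.Register(&CustomerService{})
    ln, err := net.Listen("tcp", "127.0.0.1:4321")
    if err != nil {
        log.Fatal(err)
    }
    go rpc.Accept(ln)

    // Client side: a typed call - no URLs, headers, or encodings in sight.
    client, err := rpc.Dial("tcp", "127.0.0.1:4321")
    if err != nil {
        log.Fatal(err)
    }
    var addr []string
    err = client.Call("CustomerService.GetCustomerAddress",
        AddressRequest{CustomerName: "Alice"}, &addr)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(addr) // [1 Main St Springfield]
}</pre>
</blockquote>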
<br />
Yet today, an application programmer has to get down into the details of the way the URL is constructed (the REST "endpoint"), the HTTP headers (of which there are many - and all defined in different RFCs!), the type of HTTP method to use, and data encodings - and the many attacks that are possible if one is not very careful about encodings!<br />
<br />
The result is terrible productivity for Web app development - especially when someone learns a new framework, which is a frequent activity nowadays.<br />
<br />
The problem traces back to the origin of the Internet, and the use of RFCs - essentially suggestions for standards. It appears that early RFCs did not give much thought to how the Internet would be used by programmers. From the beginning, all the terrible practices that I talk about were used. Even the concept of Web pages and hyperlinking - something that came about much later - is terribly conceived: the <a href="https://tools.ietf.org/html/rfc1738" target="_blank">RFC for URLs</a> talks about "unsafe" characters in URLs. Instead, it should have defined an API function for constructing an encoded URL - making it unnecessary for application programmers to worry about it. The behavior of that function could be defined in a separate spec - one that most programmers would never have to read. Information hiding. Encapsulation of function. Separation of control and data. The same is true for HTTP and all of the other myriad specs that IETF and W3C have pumped out - they all suffer from over-complexity and a failure to separate what tool programmers need to know versus what application programmers need to know.<br />
<br />
Today's younger programmers do not know that it could be better, because they have not seen it better. I remember the Object Management Group's attempt to bring order to the task of distributed computing - and how all that progress got swept away by XML-based hacks created to get through firewalls by hiding remote calls in HTTP. Today, more and more layers get heaped on the bad foundation that we have - more headers, more frameworks, more XML-based standards, except that now we have JSON, which is almost as bad. (Why is JSON bad? Reason: you don't find out if your JSON is wrong until runtime). We really need a clean break - a typesafe statically verifiable messaging API standard, as an alternative to the HTTP/REST/XML/JSON tangle, and a standard set of API-defined functions built on top of the messaging layer.
<h1>History of agile (2014-11-11)</h1>
This is for those who think that Agile is a recent evolutionary advance in software engineering. It is not. Before the 1990s, a great many - perhaps most? - software projects were executed in a non-waterfall way. Some were agile, some were not. In the 1980s I was fortunate to have been on many that were: projects with a servant leader, with full automated regression testing run daily, with test results displayed from a database, with a backlog of small demonstrable features, with co-location (individual offices side by side), with daily sharing of issues, with collaborative and evolutionary design, and with a sustainable pace. I can recall personally writing up to 1000 lines of tested C code in a day on my Sun Unix "pizzabox" workstation: those projects were highly productive - today's tools and methodologies do <u><i>not</i></u> exceed that productivity.<br />
<br />
However, over time more and more large software projects came to be managed by administrative program managers and procurement managers who had never personally developed software, and they foolishly applied a procurement approach that is appropriate for commodities - but not for custom built software. This was motivated by a desire to tightly control costs and hold vendors accountable. Waterfall provided the perfect model for these projects: the up-front requirements could be done first and then serve as the basis for a fixed cost, fixed schedule "procurement" involving the implementation phases.<br />
<br />
This was a horrible failure. Software people knew in the 1960s that this approach could not work.<br />
<br />
So in the late 1990s a movement finally came together to push back on the trend of more and more waterfall projects, by returning to <i>what had worked before</i>: <b>iterative development of demonstrable features by small teams, and a rejection of communication primarily by documents</b>. This basic approach took many forms, as shown by the chart. And that is why I am against "prescriptive Agile" - that is, following a template or rule book (such as Scrum) for how to do Agile. There are many, many ways to do Agile, and the right way depends on the situation! And first and foremost, Agile is about thinking and applying contextual judgment - not "following a plan"!<br />
<br />
And then you have young people come along, their software engineering experience dating no farther back than 1990, and they claim that Agile is a breakthrough and that the "prior waterfall approach" is wrong. Well, it was always wrong - people who actually<i> wrote code</i> always knew that waterfall was idiotic. There is nothing new there. And Agile is not new. So when an Agile newbie tells a seasoned developer that he/she should use Scrum, or that he/she is not doing Agile the right way, it demonstrates tremendous naiveté. People who developed software long before the Agile Manifesto during the '70s and '80s know the real Agile: they know what <u><i>really</i></u> matters and what <u><i>really</i></u> makes a project agile (lowercase "a") and successful - regardless which "ceremonies" you do, regardless of which roles you have on a team, etc. It turns out that most of those ceremonies don't matter: what matters the most - by far - is the personalities, leadership styles, and knowledge.<br />
<br />
This chart was developed by a colleague at a company that I worked at, <a href="http://www.santeon.com/" target="_blank">Santeon</a>. The information in the graphic was taken from an article by Craig Larman. <a href="http://www.craiglarman.com/wiki/downloads/misc/history-of-iterative-larman-and-basili-ieee-computer.pdf" target="_blank">Here</a> is the article.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://2.bp.blogspot.com/-2qAjk1lTqg8/VGKU9QXbPSI/AAAAAAAAAX8/E5qt-3S1Hk0/s1600/history-of-agile.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="https://2.bp.blogspot.com/-2qAjk1lTqg8/VGKU9QXbPSI/AAAAAAAAAX8/E5qt-3S1Hk0/s400/history-of-agile.jpg" width="400" /></a></div>
<span style="font-size: x-small;"><a href="https://drive.google.com/file/d/0By4lfBwI0rt6UXhENGtST3BtYlk/view?usp=sharing" target="_blank">As PDF</a>.</span>
<h1>The horrible state of open source tools (2014-11-06)</h1>
Are you kidding me???<br />
<br />
Recently I wrote a performance testing tool in Ruby and I have been rewriting it in Java. The tool uses <a href="http://cukes.info/" target="_blank">Cucumber</a>, so I have decided to substitute <a href="http://jbehave.org/" target="_blank">JBehave</a>, since JBehave is the predominant BDD tool in the Java space, and also because I tried to use the Java version of Cucumber but it is broken and incomplete. (Sigh - why not call it "beta"?)<br />
<br />
So I first looked at the JBehave docs, and was irritated to discover that there are no code examples: you have to jump through hoops, such as running Etsy.com, just to see an example. I don't know what Etsy.com is and I don't want to know - I just want to see a friggin' code example. So I googled and found one - a good one - <a href="https://blog.codecentric.de/en/2012/06/jbehave-configuration-tutorial/" target="_blank">here</a>.<br />
<br />
Even better, the example gets right to the point and shows me how to run JBehave <i>without having to use any other tools</i> - most JBehave examples use JUnit, which I detest. I just want to run JBehave. Period. No complications. This is how you do it:<br />
<blockquote class="tr_bq">
<pre style="font-family: monospace;">Embedder embedder = new Embedder();
List&lt;String&gt; storyPaths = Arrays.asList("Math.story");
embedder.candidateSteps().add(new ExampleSteps());
embedder.runStoriesAsPaths(storyPaths);</pre>
</blockquote>
The file path ending in ".story" is from the example, and I wanted to find out the exact rules for what that path could be (the explanation of the example is not clear), so I went to the JBehave Javadocs, and <a href="http://jbehave.org/reference/stable/javadoc/core/org/jbehave/core/embedder/Embedder.html#runStoriesAsPaths%28java.util.List%29" target="_blank">this is what I found</a>:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-GE7iwfOmCCs/VFuusjt3CKI/AAAAAAAAAXs/P_nkYjq4XYE/s1600/JBehaveJavadocs.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-GE7iwfOmCCs/VFuusjt3CKI/AAAAAAAAAXs/P_nkYjq4XYE/s1600/JBehaveJavadocs.png" height="563" width="640" /></a></div>
<br />
Are you kidding me??? - oh, I already said that.<br />
<br />
I am used to Javadocs serving as a definitive specification for what a method does. In contrast, the JBehave methods have no header comments, and so the Javadoc pages contain no specs. <b><i>How is one supposed to know what each method's intended behavior is?</i></b><br />
<br />
Am I supposed to go and find the unit tests and read them and infer what the intended behavioral rules are? Maybe if I had hours of spare time and that kind of perverse gearhead curiosity I would do that, but I just want to use the <span style="font-family: "Courier New",Courier,monospace;">runStoriesAsPaths</span> method. An alternative is to dig through examples and infer, but that is guesswork and needlessly time consuming.<br />
<br />
Unfortunately, this is a trend today with open source tools: not commenting code. The method name gives me a hint about the method's intended behavior, but it does not fill in the gaps. For example, can a path be a directory? Is the path a feature file? What will happen if there are no paths provided - will an exception be thrown or will the method silently do nothing?<br />
<br />
This is trash programming. Methods need human readable specifications. Agile is about keeping things lean, but <u><i>zero</i></u> documentation is not lean - it is incompetent and lazy. A good programmer should always write a method description as part of the activity of writing the method: otherwise, you don't know what your own intentions are - you are hacking, trying this and that until it does something you want, and then hurrying on to the next method. That is what I would expect of a beginner - not an experienced programmer.<br />
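<br />
For example, a spec for <span style="font-family: "Courier New",Courier,monospace;">runStoriesAsPaths</span> could read something like the following - hypothetical wording, since only the authors know the actual rules, but note how even a draft like this forces the open questions above to be answered:<br />
<blockquote class="tr_bq">
<i>"Runs each story identified by the given paths. Each path must resolve, via the configured story loader, to a single story file; directories are not permitted. If the list is empty, the method returns without running anything."</i></blockquote>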
<br />
Yet so many tools today are like this. It used to be that if you used a new tool, you could rely on the documentation to tell you truthful things: if something did not work, you either did not understand the documentation or there was a software bug. Today, the documentation is often incomplete, or just plain wrong: it often tells you that you can do something, but in reality you have to do it in a certain way that is not documented. That is what I found to be the case with the Java plugin for Gradle. Recently I wrote a Java program that took me two hours to write and test (without JUnit or any other tools - just writing some quick test code), and then I spent a <i>whole day</i> trying to get the Gradle Java plugin to do what I wanted. That is <b><u><i>not</i></u></b> a productivity gain!<br />
<br />
Tools that are fragile and undocumented are a disservice to us all. If you are going to write a tool, make sure that the parts that you write and make available work, and are documented, and work according to what the documentation says - and don't require a particular pattern of usage to work.<br />
<br />
<i>Please!!!</i><br />
<br />
<h1>Tests Do NOT Define Behavior (2014-10-25)</h1>
Last spring one of the gurus of the Ruby world set off an earthquake when he published a <a href="http://david.heinemeierhansson.com/2014/tdd-is-dead-long-live-testing.html" target="_blank">blog post titled, "TDD is dead. Long live testing"</a>.<br />
<br />
Test driven development (TDD) is one of the sacred cows of certain segments of the agile community. The theory is that,<br />
<blockquote class="tr_bq">
1. If you write tests before you write behavior, it will clarify your thinking and you will write better code.<br />
2. The tests will expose the need to remove unnecessary coupling between methods, because coupling forces you to write "mocks", and that is painful.<br />
3. When the code is done, it will have a full coverage test suite. To a large extent, that obviates the need for "testers" to write additional (functional) tests.<br />
4. The tests define the behavior of the code, so a spec for the code's methods is not necessary.</blockquote>
<br />
Many people in the agile community have long felt that there was something wrong with the logic here. What about design? To design a feature, one should think holistically, and that means designing an entire aspect of a system at a time - not a feature at a time. Certainly, the design must be allowed to evolve, and should not address details before those details are actually understood, but thinking holistically is essential for good design. TDD forces you to focus on a feature at a time. Does the design end up being the equivalent of Frankenstein's monster, with pieces added on, one after another? Proponents of TDD say no, because each time you add a feature, you refactor - i.e., you rearrange the entire codebase to accommodate the new feature in an elegant and appropriate manner, as if you had designed the feature and all preceding features together.<br />
<br />
That's a lot of rework though: every time you add a feature, you have to do all that refactoring. Does it slow you down, for marginal gains in quality? Well, that's the central question. It is a question of tradeoffs.<br />
<br />
There is another question though: how people work. People work differently. In the sciences, there is an implicit division between the "theorists" and the "experimentalists". The theorists are people who spend their time with theory: to them, a "design" is something that completely defines a solution to a problem. The experimentalists, in contrast, spend their time trying things. They create experiments, and they see what happens. In the sciences, it turns out we need both: without both camps, science stalls.<br />
<br />
TDD is fundamentally experimentalism. It is hacking: you write some code and see what happens. That's ok. That is a personality type. But not everyone thinks that way. For some people it is very unnatural. Some people need to think a problem through in its entirety, and map it out, before they write a line of code. For those people, TDD is a brain aneurysm. It is antithetical to how they think and who they are. Being forced to do it is like a ballet dancer being forced to sit at a desk. It is like an artist being forced to do accounting. It is futile.<br />
<br />
That is not to say that a TDD experience cannot add positively to someone's expertise in programming. Doing some TDD can help you to think differently about coupling and about testing; but being forced to do it all the time, for all of your work - that's another thing entirely.<br />
<br />
Doesn't the Agile Manifesto say, "Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done."<br />
<br />
I.e., don't force people to work a certain way. Let <u><i>them</i></u> decide what works best for them. Don't force TDD on someone who does not want to work that way.<br />
<h4>
But if everyone on a team does TDD, there is consistency, and that is good</h4>
The argument is always, "If we all do TDD, then we can completely change our approach as a team: we don't need testers, we don't need to document our interfaces, and we will get better code as a team. So people who can't do TDD really don't fit on our team."<br />
<br />
So if <a href="https://en.wikipedia.org/wiki/Donald_Knuth" target="_blank">Donald Knuth</a> applied to work on your team, you would say, "Sorry, you don't fit in"; because Donald Knuth doesn't do TDD.<br />
<br />
What ever happened to diversity of thought? Why has agile become so prescriptive?<br />
<br />
Also, many of the arguments for TDD don't actually hold up. #1 above is true: TDD will help you to think through the design. But, TDD prevents you from thinking holistically, so one could argue that it actually degrades the design, and constrains the ability that many people have to creatively design complex things. And that's a shame. That's a loss.<br />
<br />
#2 about improving coupling is true, but one does not have to do TDD for that. Instead, one can write methods and then attempt to write unit tests for them. The exercise of writing the unit tests will force one to think through the coupling issues. One does not have to do this for every single method - something that TDD requires - one can merely do it for the methods where one suspects there might be coupling issues. That's a lot more efficient.<br />
<br />
It can be argued that the enormous number of tests that TDD generates results in <i>less</i> agility - not more. Full coverage tests at an interface level provide plenty of protection against unintended consequences of code changes. For those who use type-safe languages, type safety is also very effective for guarding against unintended consequences during maintenance. One does not need a mountain of unit tests. Type safety is not about productivity: it is about maintainability, and it works.<br />
<br />
#3 about code coverage is foolish. The fox is guarding the henhouse. One of the things that tests are supposed to check is that the programmer understands the requirements. If the programmer who writes the code also writes the tests, and if the programmer did not listen carefully to the Product Owner, then the programmer's misunderstanding will end up embedded in the tests. This is the test independence issue. Also, functional testing is but one aspect of testing, so we still need test programmers.<br />
<br />
One response to the issue about test independence is that acceptance tests will ensure that the code does what the Product Owner wants it to do. But the contradiction there is that someone must write the code that implements the acceptance criteria: who is that? If it is the person who wrote the feature code, then the tests themselves are suspect, because there is a lot of interpretation that goes on between a test condition and the implementation. For example, "When the user enters their name, Then the system checks that the user is authorized to perform the action". What does that mean? The Product Owner might think that the programmer knows what "authorized" means in that context, but if there is a misunderstanding, then the test can be wrong and no one will know - until a bug shows up in production. Having separate people - who work independently and who both have equal access to the Product Owner - write the code and the test is crucial.<br />
<br />
I saved the best for last. #4.<br />
<br />
Let me say this clearly.<br />
<br />
<blockquote class="tr_bq">
<span style="font-size: large;">Tests. Do. Not. Define. Behavior.</span></blockquote>
<br />
And,<br />
<br />
<blockquote class="tr_bq">
<span style="font-size: large;">Tests. Are. A. Horrible. Substitute. For. An. Interface. Spec.</span></blockquote>
<br />
Tests do not define behavior because (1) the test might be wrong, and (2) the test specifies what is expected to happen in a particular instance. In other words, tests do not express the <i>conceptual intention</i>. When people look up a method to find out what it does, they want to learn the conceptual intention, because that conveys the knowledge about the method's behavior most quickly and succinctly, in a way that is easiest to incorporate into one's thinking. If one has to read through tests and infer - reverse engineer - what a method does, it can be time wasting and confusing.<br />
<br />
The argument that one gets from the TDD community is that method descriptions can be wrong. Well, tests can be incomplete, leading to an incorrect understanding of a method's intended behavior. There is no silver bullet for keeping things complete and accurate - that applies to the tests as well as to the code comments. It is a matter of discipline. But a method spec has a much better chance of being accurate, because people read it frequently (in the form of javadocs or ruby docs), and if it is incomplete or wrong, people will notice. Missing unit tests don't get noticed.<br />
<h2>
Conclusion</h2>
<br />
If people want to do TDD, it is right for them and it makes them productive, so let them do it. But don't force everyone else to do it!<br />
<br />
Long live testing!
<h1>To Be Certified – Or Not? (2014-08-27)</h1>
Recently in the LinkedIn group “Agile”, <a href="https://www.linkedin.com/in/agility" target="_blank">Alan Moran</a> posted a question, “<a href="https://www.linkedin.com/groups/How-valuable-is-agile-certification-81780.S.5899425527688626178" target="_blank">How valuable is agile certification to you?</a>”<br /><br />The general consensus seemed to be that certification was helpful in terms of getting a job. For example, <a href="https://www.linkedin.com/pub/nicolas-umiastowski/6/685/51" target="_blank">Nicolas Umiastowski</a> wrote, <br />
<blockquote class="tr_bq">
<i>“Certifications are important to prove your skills to recruiters.”</i></blockquote>
<br /><a href="https://www.linkedin.com/in/percivall" target="_blank">Joseph Percivall</a> wrote,<br />
<blockquote class="tr_bq">
<i>“I found it to be very valuable to set me apart from other applicants in my job/internship search. It was a talking point in every interview I had. It showed that I wanted to learn more about my field and thrive in it.”</i></blockquote>
<br />The last sentence is interesting: it implies that certification demonstrates a level of seriousness about one’s work. Indeed, in a recent <a href="http://www.transition2agile.com/p/interview-with-elena-yatzeck-former.html">interview of Elena Yatzeck</a> by this journal, she said, “Cert speaks to the person’s interest in their seriousness in pursuing agile techniques as a professional.”<br /><br />The primary dissenting view was that certification is a lowest common denominator of knowledge. For example, <a href="https://www.linkedin.com/pub/paul-oldfield/1/99b/205" target="_blank">Paul Oldfield </a>wrote,<br />
<blockquote class="tr_bq">
<i>“I'm of the opinion that certification is only of value to mediocre people and mediocre organizations. Good people and organizations find each other without help, the really dire of each cannot be helped by certificates.”</i></blockquote>
<br /><a href="https://www.linkedin.com/in/abhijeetniktemba" target="_blank">Abhijeet Nikte</a> wrote,<br />
<blockquote class="tr_bq">
“I find it disconcerting that while a bunch of us are talking about the certification and its value, we seem to be in minority, or so I think. I firmly believe that (demonstrable) knowledge is far more important than a certification. However, there are tons of companies out there that place a very high value on certification. There is an (incorrect, in my mind) assumption that if a person is certified so that person must have knowledge. Sad, but true.”</blockquote>
<br />What do CIOs think?<br /><br />Interestingly, recently there was also a <a href="https://www.linkedin.com/groups/IT-Skills-Gap-51825.S.5863590864688807938" target="_blank">discussion about this topic</a> in the LinkedIn group “Chief Information Officer (CIO) Network”. The discussion was about the IT skills gap, and it generated many posts on the topic of certification. For example, <a href="https://www.linkedin.com/pub/greg-scott/11/135/45" target="_blank">Greg Scott</a>, CTO of InfraSupport, posted this – it’s long, but it is so powerful that I will repeat the entire thing here:<br />
<blockquote class="tr_bq">
<i>Consider these two hypothetical job descriptions for the same position:</i></blockquote>
<blockquote>
<i>Description #1: </i><blockquote class="tr_bq">
<i>We need an IT resource to finish implementing our ERP system. Skills required: C++, Java, PHP, and excellent communication skills.</i></blockquote>
<i>Description #2 - same position, same job, same company </i><blockquote class="tr_bq">
<i>We need a motivated individual to finish our partially completed ERP system. Take the bull by the horns, finish building this system, set up a mechanism for ongoing support, and help us transform our company. The system uses C++, Java, and PHP and developers who know their way around these tools will have an advantage. But developers with a demonstrable track record of constant learning will have an even bigger advantage. If you want to take on a challenge and help us transform our company, we want to talk to you.</i></blockquote>
<i>If you're an IT pro and looking for a job, which one would you go after? <br /><br />I propose all hiring managers, all HR departments, and everyone everywhere eliminate the word, "resource" when referring to IT professionals. Your doctor is not a resource. Your accountant is not a resource. Your attorney is not a resource. Why are the people on your IT team resources? <br /><br />Eliminate this word and begin to change your attitude. Change your attitude towards the people on your IT team and you'll begin fostering that culture of constant learning everyone talks about. Begin to change your attitude about your IT team and the people on your IT team will begin to change their attitude about your company. <br /><br />Have the guts to do this and the skills gap at your company will go away while everyone else tries to figure out your secret. The counter-intuitive result will be, you'll probably make more money than your competition and leave them behind to eat your dust.</i></blockquote>
<br />The core opinion expressed in this post seems to be that IT people should not be treated as interchangeable “resources”, and that evaluating people based on which certifications they have contributes to that commoditization. Scott seems to contend that “a demonstrable track record of constant learning” if far more important.<br /><br />Here is another insightful post by <a href="https://www.linkedin.com/pub/alexander-freund/8/b62/289" target="_blank">Alexander Freund</a>, President & CIO of 4IT Inc.:<br />
<blockquote class="tr_bq">
<i>Over the course of the past 10 years, I have hired for many IT positions including L1 and L2 support, project engineering, project management, service management, technical sales, and network and server engineering positions. What I have learned is that IT skills (competence in a specific product or area of knowledge) is generally far less valuable than what I call employee skills. We try very hard not to emergency hire to fill a spot, so immediate impact to our team is generally not the goal. So, what are the employee skills I am referring to? For me, there are really only three:<br /><br />1. Brain power - The person needs to have enough raw brain power to learn and do the job. We readily accept that not everyone has the capacity to be a particle physicist, but continue to believe that we can train anyone to do almost any function in IT. Our experience is this is simply not the case. Find people that can learn, and teach them how to do the job. Even if they come with experience from another firm, they have never seen our processes and work culture.<br /><br />2. Work Ethic - Work ethic is the true measure of the impact that any employee will eventually make to the TEAM. I consider this to be the skills gap that I encounter the most, and one which in general, can never be fixed.<br /><br />3. Team player - When I consider the workload faced by most IT departments, it's clear that only cooperative teams working well together can get the work done on time without costly mistakes. Lone wolves, whiners, and poorly behaved team members are just too costly.</i></blockquote>
<br />This post is again stressing the importance of soft skills – what Freund calls “employee skills” – over acronym skills.<br /><br /><a href="https://www.linkedin.com/pub/tim-magnus/0/157/2bb" target="_blank">Tim Magnus</a>, an IT consultant, then says,<br />
<blockquote class="tr_bq">
<i>We have not established a yard stick or even definitions for the foundational skills. When job descriptions focus on transient skills [such as specific languages, tools, and frameworks], we make IT people into transient resources and so we will continue to search for people and fail to find the correct people to do the job. Foundational skills and fundamental problem solving skills are developed and are not picked up overnight.</i></blockquote>
<br />I will say that these sentiments echo my own feelings and experience. When I was a CTO, Java was in its heyday. When I interviewed technical people for a job, I did not care what Java certifications they had. What I wanted to know what whether they were problem solvers, and if they were smart. Indeed, I myself did not have any Java certifications, but my book Advanced Java Development was a recommended text for those studying for the Java Architect certification. Would it not have been ironic if I myself had interviewed for a job as a Java architect, and had been turned down because I did not have Java Architect certification?<br /><br />I personally feel the same way about Agile certifications: that’s why I myself don’t have any. My own feeling is that if someone wants me to be certified in an Agile methodology, then they themselves don’t understand Agile well enough to discern my level of experience with Agile and therefore I don’t want to work for them. That’s my opinion though: I am certainly serious about my work, so the lack of certification does not indicate lack of seriousness.<br /><br />Many people clearly find that certification helps them to focus their learning in their career. As far as focus goes, I shy away from certification because <i>I do not want to be focused:</i> I want to retain the right to think for myself, rather than endorse the opinions that are demanded by a certification. There was one certification that I once considered obtaining: CISSP. I had just written a 600 page book on application security. While taking a practice exam, I discovered that I disagreed with many of the “answers”. In order to pass the exam, I would have to adopt perspectives that I did not agree with. I stopped studying for the exam and decided not to pursue the certification.<br /><br />It is also clear that certification is useful for getting a job for many people: that is possibly because HR departments are failing to find the people who have the “natural learner” or “employee” or “foundational” skills that many of the posters to the CIO Network think are much more crucial. It is easy for HR to scan for buzzwords such as CSM than to try to understand someone’s background. That means that if you are hiring for Agile skills, you can’t rely on HR: you need to get involved in the search, and make sure that the best people are not being screened out because they don’t have a checkbox checked.<br />Cliff Berghttp://www.blogger.com/profile/02103767196153470434noreply@blogger.com0tag:blogger.com,1999:blog-7785396607069106815.post-12357706474220024352014-08-11T09:03:00.002-07:002014-10-14T14:56:50.837-07:00Why private offices are important for programmersAround the year 2000, the company that I had co-founded in 1995, Digital Focus, went agile. We adopted eXtreme Programming (XP). We therefore had to undergo our own "agile transformation", to figure out how to adapt all of our processes and infrastructure to support this new way of working. One of the issues that we faced was how to arrange teams.<br />
<br />
It is pretty standard nowadays that agile teams are co-located into a bullpen so that they can collaborate easily. A purportedly ideal setup includes lots of whiteboards and a wall for posting the agile stories and other information radiators. This is indeed a nice setup: it is cozy and one can hear conversations that are often relevant. And if you want to talk to someone, you simply stroll over to his or her desk and start talking.<br />
<br />
But there is a deep downside to this. In such a setting, distractions are constant. You overhear conversations when you don't want to - often while you are trying to focus on a problem. It is kind of like being in a Starbucks: it is fun, but you will not do your best work there.<br />
<br />
I have found that in such settings, people who really need to focus often go home for a day in order to crack a hard problem or to come up with a fresh approach. To really focus, one needs quiet and isolation - like one used to have with a private office.<br />
<br />
During the mid-1980s I worked for two compiler development companies. In each case, the teams were co-located in that everyone had an office on the same floor of a small building. Thus, if you wanted to talk to someone, you simply strolled over to their door; if the door was open, you walked in and started talking. But if the door was closed, you knew that they were trying to focus (or were talking on the phone), and you went back to your desk and tried a little later, or perhaps shot them an email saying that you needed to chat.<br />
<br />
The disadvantage of this is that you don't have the opportunity to accidentally overhear things that are relevant to your work. At Digital Focus, we solved this by giving each developer their own office, but also having a bullpen right next to those offices. It worked really well.<br />
<br />
Unfortunately the use of cubicles and now bullpens for software development is so prevalent that it has set a new standard for the square feet needed per developer, which translates into a direct cost per developer. CFOs will now balk at giving developers private offices - something that was standard practice during the 1980s.<br />
<br />
The hidden cost is that we might be losing the best creativity and ideas of developers. In an environment with distractions you never really think deeply. Your thoughts can get down to a certain level of depth, but never all the way. In a recent article in the New York Times Sunday Review, "<a href="http://www.nytimes.com/2014/08/10/opinion/sunday/hit-the-reset-button-in-your-brain.html" target="_blank">Hit the Reset Button in Your Brain</a>", Daniel Levitin, director of the Laboratory for Music, Cognition and Expertise at McGill University and author of “The Organized Mind: Thinking Straight in the Age of Information Overload,” says,<br />
<br />
<i>"...the insight that led to them probably came from the daydreaming mode. This brain state, marked by the flow of connections among disparate ideas and thoughts, is responsible for our moments of greatest creativity and insight, when we’re able to solve problems that previously seemed unsolvable."</i><br />
<br />
Collaboration is great; but it is not a silver bullet. People sometimes need to think quietly by themselves. If we deny them that, we are not getting the best parts of their mind.<br />
<br />Cliff Berghttp://www.blogger.com/profile/02103767196153470434noreply@blogger.com0tag:blogger.com,1999:blog-7785396607069106815.post-24790595579802030682013-11-29T11:55:00.001-08:002013-11-29T12:40:27.679-08:00Are agilists turning their backs on the very technologies that their clients are building?I remember three decades ago standing in the bookstore of my university reading sections in a new - and first ever - textbook about the design of large scale integrated circuits. The book was noteworthy because it was created through a collaboration of people from different countries and universities around the world, all communicating via email and other protocols. This was before the Internet, back when networks were owned by universities and DOD and companies, and those private networks using proprietary protocols (e.g., DECnet) were tied together by ad-hoc leased line connections. But it was usually possible to email someone if you knew how to "reach" them, and at the start of any project the first thing we did was establish a way to email everyone.<br />
<br />
Technology has opened up the possibility of collaborating in near real time with people around the world. <b><i>This made that book possible</i></b>: it would not have been possible before. And I remember reading articles about the book, explaining how continent-spanning networks had made this book possible for the first time: that the knowledge needed for the book had to be pulled together from people working in far-flung companies and universities. And the book was turned around in one year - before the information became obsolete. That was important. Not only was the book a breakthrough, but <b><i>the process by which it was written was a breakthrough</i></b>.<br />
<br />
We are turning our backs on the very technologies that our clients are building.<br />
<br />
The Internet revolution is about communication. Agile did not invent this idea. In fact, in many ways, agile is undoing some of the benefits. Agilists have observed that face to face meetings and physical proximity enable quick discussion and learning by osmosis, but they have drawn the wrong conclusion. The conclusion should not be that "more proximity is better" or that "all collaboration is best if it is face to face". To enforce that is to unwind the benefits of the Internet.<br />
<br />
Agilists often point to the value of brainstorming to come up with innovative solutions. But as Susan Cain explains in her breakthrough book Quiet,<br />
<blockquote>
<i>"There’s only one problem with Osborn’s breakthrough idea: group brainstorming doesn’t actually work...Studies have shown that performance gets worse as group size increases...The one exception to this is online brainstorming...This shouldn’t surprise us; as we’ve said, it was the curious power of electronic collaboration that contributed to the New Groupthink in the first place. What created Linux, or Wikipedia, if not a gigantic electronic brainstorming session?" *</i></blockquote>
This is important: it turns out that brainstorming is most effective when it occurs online and <i>not in real time</i>, so that the participants are able to think at their own pace before they respond, rather than having to think everything through in real time.<br />
<br />
Historically, the agile insistence on person-to-person collaboration is really a rejection of communication by documents, and for good reason. During the 1970s and 1980s it was common practice to build systems and poorly document them. <b><i>I remember this</i></b>. The complaint was with respect to "key person dependencies" - the fact that the people who built a system would leave and no one could maintain it. And there was a response by the IT community to fix this, by insisting that everything be documented. I remember this also. <b><i>Like the agilist thought leaders today, the thought leaders who were then in control took this to an extreme</i></b> and the result was that projects started to be planned and measured around documents. Documents became king, and waterfall flourished.<br />
<br />
And that did not work.<br />
<br />
Agile pushes back on that, but <b><i>it is also going to an extreme</i></b>, insisting that all communication be face to face. That is the wrong conclusion, and it is very linear thinking. Rather, the conclusion should be much more nuanced, something like,<br />
<blockquote>
<i>Give people the opportunity to talk when <b>they</b> find it best for them, and to write when <b>they</b> find it best for them, and enable this to happen just in time when it is needed. And <b>balance</b> the value of proximity with the value of access to remote talent and the need of people for undistracted thought when doing complex tasks. And <b>don't rely on documents</b> to convey information: the people who wrote the documents <b>must be around</b> to answer questions. But <u><b>do</b></u> make sure that you document important decisions <b>as they are made</b>. Little else needs to be documented.</i></blockquote>
There is a history to this, and we would all do well to not forget it. Otherwise, we are making the same mistakes.<br />
<br />
Extremes do not work. Absolutes do not work.<br />
<br />
<span style="font-size: x-small;">* Ref: Quiet: The Power of Introverts in a World That Can't Stop Talking, by Susan Cain (2012-01-24). (p. 88). Crown Publishing Group. Kindle Edition.</span>Cliff Berghttp://www.blogger.com/profile/02103767196153470434noreply@blogger.com0tag:blogger.com,1999:blog-7785396607069106815.post-90015860613758189682013-11-28T08:12:00.001-08:002013-11-29T11:23:47.763-08:00The Emperor has no clothes: Verbal communication is NOT more effective than written communicationIt is not true that people communicate better in person than in writing.<br />
<br />
The preference for face-to-face meetings over written communication is deeply entrenched in agile values. The <a href="http://agilemanifesto.org/" target="_blank">Agile Manifesto</a> (an effort initiated by Alistair Cockburn) enshrines it as "The most efficient and effective method of conveying information to and within a development team is face-to-face conversation".<br />
<br />
Cockburn's <a href="http://alistair.cockburn.us/get/2287" target="_blank">well-known diagram</a> of the effectiveness of forms of communication illustrates the various forms in terms of a tradeoff between "richness" (real time nature, bandwidth) and effectiveness. But this is linear thinking and it does not reflect how effective collaboration actually occurs. To be fair, <a href="http://alistair.cockburn.us/ASD+book+extract%3A+%22Communicating,+cooperating+teams%22" target="_blank">Cockburn's article on this topic</a> delves into the issues at length, but ignores an important fact: collaboration occurs over time - not in an instant. His diagram reflects an instant, not a process.<br />
<br />
Scientific conferences have it right: their tried-and-true approach is to first distribute papers on topics, which interested attendees read ahead of time. The attendees then attend the presentations on those topics. Then <u><i>afterwards</i></u>, they gather to discuss the topics in person.<br />
<br />
The advantage of this approach is that a paper allows the author to lay out a complex argument, without interruption, from beginning to end. Complex issues often require a lengthy statement of one's point of view before the point of view starts to make sense. In face to face conversation, it is too easy to be interrupted, and too easy for the conversation to be diverted into side issues; and a one hour meeting is most certainly too short to lay out a very complex issue and discuss it to resolution.<br />
<br />
Cockburn talks about what to do when lengthy discussion is needed. For example, he says, "They worked on it over the weeks, experimenting with representations of their concerns that would allow them to view their mutual interdependence." But he is talking about having an ongoing discussion in which issues evolve over time, because software development is occurring. That is a different situation than what I am talking about. I am talking about when a decision on a very complex issue must be made, and you don't have weeks to mull it over. In that situation, the issue is essentially static - at that point in time - and you need to decide on a course of action, soon.<br />
<br />
In that situation, a much more effective process is to first lay out one's position in writing - to peel the onion - and then give others a chance to read and absorb it, during which they build their own mental models of the issues and of your position; and then meet to discuss it. The discussion can then focus on the points of contention, making it much more effective.<br />
<br />
More effective: isn't that what we are after? Certainly, if the issue at hand is not complex, it is often better to just meet on it and talk it through. But if the issue is multi-faceted and requires deep thought, it is far better to first have each person write up their thoughts, and for each to read the other's thoughts, possibly have a few written discussions on certain points, and then meet in person to talk through the points of disagreement and drive to consensus.<br />
<br />
<i><u>That</u></i> is how effective collaboration occurs.<br />
<br />
Contrary to what the Agile Manifesto implies, written and verbal communication are not an either-or proposition: they do not compete with each other - rather, they <i><u>complement</u></i> each other.<br />
<br />
<br />Cliff Berghttp://www.blogger.com/profile/02103767196153470434noreply@blogger.com0tag:blogger.com,1999:blog-7785396607069106815.post-17352937027914425332013-11-19T14:27:00.002-08:002013-11-19T16:16:42.054-08:00Does devops change security practices?Security is still the elephant in the agile room - or should I say the fly in the ointment.<br />
<br />
Agile people and business people like to think about functionality that will be delivered quickly; but the sad reality is that if something is delivered that is not secure, it can kill you, in terms of your business reputation. The problem is, how does security fit into agile?<br />
<br />
I address this at length in Value-Driven IT and in High-Assurance Design. But now with devops, the concepts need some refreshing. Does devops change things, with respect to security?<br />
<br />
On the level of fundamentals, no. The only effective approach to application security is to have developers who understand and appreciate application security, because <i>security affects the design</i> - at least, it should. That often means embedding a security expert in a team to mentor the team, with the strong expectation that some of the people on the team will strive to become experts as well, at which point the embedded security expert can leave. That approach is far more effective than a control-based approach that tries to externalize security by defining rules from a distance.<br />
<br />
A very naive approach is to rely on scanning. Scanning is now thought to only detect 5-10% of actual vulnerabilities. (See <a href="http://www.cs.umd.edu/~pugh/MistakesThatMatter.pdf">http://www.cs.umd.edu/~pugh/MistakesThatMatter.pdf</a> and <a href="http://samate.nist.gov/docs/SA_tool_effect_QoP.pdf">http://samate.nist.gov/docs/SA_tool_effect_QoP.pdf</a>.) Scanning is important - critical even - but it is not sufficient. There is no substitute for developers who are knowledgeable and current on application security. Since it is hard to teach security and to get people to learn application security, this implies smaller teams of very experienced developers - developers who either know application level security or who want to learn about it. And it implies having enough security experts who can be embedded at least part time to work directly with teams on an ongoing basis. Of course, this needs to be done based on a risk model - one need not make everything as secure as possible.<br />
<br />
These things have not changed. But what has changed is that virtualization makes it possible to do security testing earlier. If one can provision entire test environments (VMs, software-defined network, virtual storage, other resources - including production-like test data) from images on demand via a cloud using OpenStack, AWS, etc., then one can perform security testing at will and do it at regular intervals throughout a development cycle rather than waiting until the end. I am talking about exploratory types of testing of course - things that are manual, like penetration testing. One should also embed automated scanning within the continuous integration test suite.<br />
<br />
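To make that last point concrete, here is a minimal sketch - in Go - of what embedding automated scanning in the continuous integration test suite can look like. It assumes the OWASP ZAP baseline script is installed on the build machine; both that command and the target URL are placeholders for whatever scanner and test environment your team actually uses.<br />
<br />
<pre>// security_scan_test.go - a minimal sketch of embedding an automated
// security scan in the continuous integration test suite. The scanner
// command (OWASP ZAP's baseline script) and the target URL are
// assumptions - substitute whatever scanner and test environment
// your team actually uses.
package main

import (
	"os/exec"
	"testing"
)

func TestBaselineSecurityScan(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping security scan in -short mode")
	}
	// zap-baseline.py exits nonzero when it finds warnings or failures,
	// which fails this test - and therefore the build.
	cmd := exec.Command("zap-baseline.py", "-t", "https://test-env.example.com")
	out, err := cmd.CombinedOutput()
	if err != nil {
		t.Fatalf("security scan reported findings:\n%s", out)
	}
}</pre>
<br />
Run it with the rest of the suite on every build; the manual, exploratory testing - penetration testing and the like - still happens separately, at the regular intervals described above.<br />
<br />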
One of the core agile values, implicit in many of the principles of the Agile Manifesto, is that we really need to elevate people rather than trying to make things better only through better processes: ultimately, you can't improve things unless people increase what they know. This applies to security as well.<br />
<br />Cliff Berghttp://www.blogger.com/profile/02103767196153470434noreply@blogger.com0tag:blogger.com,1999:blog-7785396607069106815.post-46021535196508297392013-11-16T13:11:00.002-08:002013-11-16T13:11:29.049-08:00Agile is a business thing - not just an IT thingBusiness people think that agile is an IT thing. But really, it is a business thing.<br />
<br />
Because the decision to "do agile" cannot be made at the point where IT takes over. The decision must be made when the project is conceived, in terms of its business goals.<br />
<br />
Project inception generally has two phases in a large organization: when the business case is made, and when the project work starts. To make the business case, you have to estimate the timeline and the costs. To do the cost estimate, you have to do some up front requirements analysis. To do the timeline, you have to make assumptions about the delivery process. Agile changes both of these. Thus, agile must be considered during preparation of the business case.<br />
<br />
When project work begins, the business sponsor often contracts for a detailed requirements analysis to be performed. The theory is that this can then be used to solicit bids on building to the requirements; but that theory is deeply flawed, and the horrible track record of IT projects bears witness to this. This up-front detailed requirements analysis is a root cause of failure, because these up-front requirements are usually wrong on many levels. That is where agile comes in: agile allows requirements to evolve.<br />
<br />
This means that if you perform an up-front detailed requirements process, you have killed the project from the outset: if done in a waterfall manner, you can be sure that the requirements will be wrong, and if done in an agile manner, you have locked in the requirements so they cannot change, and so agile cannot work.<br />
<br />
This is why agile is not just an IT thing: it is a business thing. Project sponsors need to learn and understand how agile processes work so that they can think in terms of the agile cycle from the outset - even when they are making the case.<br />
<br />Cliff Berghttp://www.blogger.com/profile/02103767196153470434noreply@blogger.com0tag:blogger.com,1999:blog-7785396607069106815.post-20465913543473114072013-11-16T07:08:00.001-08:002013-11-16T07:08:08.195-08:00Don't use frameworks (like SAFe) out of the box (continued)In my prior post I promised to provide an example of how SAFe is best thought of as a model rather than as a design: a model is a tool for thinking whereas a design is something to implement precisely.<br /><br />Consider SAFe’s model for portfolio management. The SAFe model defines a portfolio “kanban” process in which work is defined as a set of “business epics”. Just think of a business epic as a project: it is not really one, but for our purposes you can think of it that way.<br /><br />The SAFe process defines the lifecycle of a business epic, from identification of the need for the project, through alternatives analysis, through implementation. This is all pretty standard. The SAFe process is kanban-like in that it defines a single pipeline through which all business epics pass.<br /><br />That works fine in many organizations, but there are many organizations that have multiple portfolios. In that case, one would need several pipelines. Some organizations tier their portfolio based on the source of funds, or by investment amount: the latter is typical in government agencies. In fact, government agencies usually have mandatory investment management processes and so one could not even use the SAFe model, but one could still use other parts of SAFe. Indeed, the SAFe guidance says that one should expect to have multiple “kanban systems”.<br /><br />Such complications mean that there often cannot be a single backlog. Another complicating factor is that it is often the case that one investment serves multiple strategic goals: yet SAFe presumes that there is a hierarchy of investment themes which are associated with epics, comprising a “release train”.<br /><br />SAFe also differentiates between “business epics” and “architectural epics”. This is sometimes a useful thing to try to do, but it is not always so clear cut. For example, when a telecommunications company invests in a new network, is that a business epic or an architectural epic? Hmmm. When one adds servers to reduce customer wait time, is that a business epic or an architectural epic? Hmmm. But as the SAFe guidance points out, there might be different sources of funding and oversight for the business and architecture investments, and this might cause these investment categories to be separated.<br /><br />These are not flaws in SAFe. As I explained, SAFe is a model - at least, that is how I look at it. So to apply SAFe, one should first understand the intent of the model, and then consider how that intent can be realized in one’s situation. In the example at hand, it might mean adjusting the portfolio process defined by SAFe.<br /><br />My point is: do not view frameworks as designs or templates. View them as models. Create your own design.<br /><br /><br />Cliff Berghttp://www.blogger.com/profile/02103767196153470434noreply@blogger.com0tag:blogger.com,1999:blog-7785396607069106815.post-32226374483721482032013-11-14T13:00:00.001-08:002013-11-15T09:11:25.409-08:00Don't use frameworks (like SAFe) out of the boxIs there a "template" for life?<br />
<br />
Can you directly implement the advice your mother and father gave you? Or was the advice intended as abstract, requiring you to incorporate it into your thinking, so that you can integrate it with other advice and other knowledge and apply it to each of life's unique situations?<br />
<br />
Applying a framework like SAFe exactly as defined is like applying your parents' advice exactly as articulated: it won't work.<br />
<br />
SAFe - and the countless other frameworks that have come from IT thought leaders and organizations - is an excellent model, but a model is food for thought. Models always leave out details. Models are a basis for discussion, for analysis, and for design: a <u><i>basis</i></u> for design - not a design.<br />
<br />
To apply SAFe, you have to think about it, and customize it to your organization. In the next post I will discuss one particular aspect of SAFe in order to illustrate the point.<br />
<br />Cliff Berghttp://www.blogger.com/profile/02103767196153470434noreply@blogger.com0tag:blogger.com,1999:blog-7785396607069106815.post-49314703444187195542013-11-13T14:01:00.000-08:002013-11-13T14:01:05.241-08:00Does agile encourage bad behavior?All agile coaches are familiar with projects that “do agile” but don’t <i>really</i> do it. Agile can be used as a license to not plan (throw out the master schedule), not coordinate (cancel all the formal meetings and expect ad hoc collaboration to just occur), have the team commit to a sprint backlog and yet allow the product owner to change the stories during the sprint while still expecting the team to deliver on their commitments at the end of the sprint. These are all known issues to all coaches, but these issues all have to do with behavior that is imposed on a team. What about <i>team</i> behavior? Do teams themselves mis-apply agile ideas in ways that are enablers for bad habits and dysfunctional group behavior?<br />
<br />It took decades for disenfranchised groups to get managers to understand that conversations on the golf course or in the men’s room leave people out (e.g., women and those who are not personal friends with the boss). Now agile comes along and advocates a return to ad hoc conversations: are we risking a return to patterns of exclusion?<br /><br />Let’s remember that agile was designed for software development. It should not be applied to general business processes without careful thought. Agile assumes that teams work in close proximity, and so if an ad hoc conversation starts, others can hear it and join in. If teams do not work in close proximity, ad hoc does not work.<br /><br />Another dysfunction that I see a lot is when teams do not keep meeting notes. Meeting notes? Isn’t that old-school? Doesn’t that sound like those old pre-agile dysfunctional meetings where it took three weeks or more to schedule it, and lots of people sat around a table and no one said what they really thought, or there was very low quality discussion, or worse, someone in the meeting got mad because he felt that others had not included him in discussions that occurred prior to the meeting and he felt that the meeting was an ambush? (I have seen and lived through all these things.)<br /><br />No. Effective meetings require that all participants have take-aways. One of the core practices of Extreme Programming is to document decisions on the team’s wiki: that is a form of meeting note. Meeting notes - in any form that is appropriate - are essential for remembering what decisions were made, and if the notes are done well, they also record <i>why</i> the decision was made. Meeting notes do not have to record what everyone said, but they must mention key discussion points, issues that were identified, and decisions. That’s it. Very short and sweet, very to-the-point, but sufficient for others to read and know and understand the outcomes of the meeting.<br />
<br />Even ad hoc meetings should result in meeting notes, in an appropriate form, and too often they do not: agile’s ad hoc philosophy is being misused to excuse bad business behavior and laziness.<br />
<br />Planning: isn’t that old-school too? No. Agile requires planning. The only difference is that an agile plan focuses only on what matters and not all the details; and the plan is reviewed and updated continually, as things change. But planning is essential. It’s not the plan itself that is crucial: it is the <i>planning</i>. But the plan is important too, in that it needs to be an information radiator so that everyone can see the current plan and be aware when it changes. The plan forms a reference-able shared understanding across the team and other stakeholders.<br /><br />Agile should not be an excuse to not plan.<br /><br />I think you get the idea. Before we throw out every old practice, or apply an agile value to a new situation, we should ask ourselves what purpose existing non-agile practices serve, and make sure that we are improving things and not making them worse. Most traditional practices have an agile equivalent: simply abandoning old practices is not sufficient to become agile.<br />
<br />
Consider basketball. When pro basketball players think about their sport, they think about the things that they experience, and that are important to them: the plays during the games, the practices, the endorsements. Aspects of basketball - aspects that are central to the sport, such as sponsorships and team management - are on the periphery of a player's thinking.<br />
<br />
But if you were to ask a team owner what things matter, they would have a very different perspective. The two perspectives are compared in the figure below.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-3xAHyq2UmOc/Un1rFqWrsTI/AAAAAAAAAE4/RzMq6kMuv5g/s1600/BacketballPIeCharts.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="188" src="https://lh3.ggpht.com/-3xAHyq2UmOc/Un1rFqWrsTI/AAAAAAAAAE4/RzMq6kMuv5g/s1600/BacketballPIeCharts.png" width="320" /></a></div>
<br />
<br />
The same holds true for agile coaching. Agile team coaches experience activities pertaining to their teams, and they experience aspects of software development that are external to teams in terms of the way those things interface to the team; those externalities feel somewhat peripheral. It is kind of like the famous <a href="http://strangemaps.wordpress.com/2007/02/07/72-the-world-as-seen-from-new-yorks-9th-avenue/" target="_blank">"View From NYC" picture from New Yorker magazine</a> that we have all seen: to a New Yorker, the features of New York loom large, but the features of the rest of the world - while just as important - diminish toward the horizon. In other words, one's perspective depends on one's experiences.<br />
<br />
That is why "transformation" is only a single item in the Agile Coaching Institute's list of skills needed for agile coaching: because to a team coach, transformation is just one thing going on, and it is not usually central to what teams think about. Teams are affected by a transformation program, and they participate in it if it exists, but it is not what they focus on each day.<br />
<br />
In contrast, an agile transformation coach thinks about transformation every day, and their view of coaching - transformation coaching - is very different from the view of a team coach. The two views are illustrated in the figure below.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-OZlLUA9XcQQ/Un1rJS1Ty7I/AAAAAAAAAFA/z14l0TeIpwM/s1600/TransformationPieCharts.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="185" src="https://lh3.ggpht.com/-OZlLUA9XcQQ/Un1rJS1Ty7I/AAAAAAAAAFA/z14l0TeIpwM/s1600/TransformationPieCharts.png" width="320" /></a></div>
<br />
<br />
The two diagrams are linked: in the team coach's view (adapted from the Agile Coaching Institute's list of core coaching skills), "Transformation Mastery" is a single slice of the pie. In the transformation coach's view, "Transformation" is the whole pie; and in that pie, "Team Coaching" is a single slice. Thus, one set of skills does not subsume the other; rather, they inter-connect.<br />
<br />Cliff Berghttp://www.blogger.com/profile/02103767196153470434noreply@blogger.com0tag:blogger.com,1999:blog-7785396607069106815.post-9644905854090317632013-10-26T11:57:00.001-07:002013-10-26T12:00:04.047-07:00Disconnect: why business does not "get" the idea of "experiments"Experimentation is an important agile concept. Experimentation is a risk management tool: the idea is that one tries out a new approach as an "experiment" before committing wholesale to the new approach. If the approach does not work out, one can quickly change course, and the experiment was a "contained failure".<br />
<br />
What business hears is, "Let's play: we will experiment, and reward failure. We will spend your money trying out things that we are not sure will work."<br />
<br />
This is a big disconnect, and the agile community bears much of the blame for this disconnect. It is the hubris of the agile community that emboldens it to speak about experiments and other agile practices as if the case for experiments need not be made. When speaking to senior IT people who have decades of pre-agile experience, one really should have some deference, and not expect them to swallow practices with names that are purposefully controversial - even frivolous ("team happiness").<br />
<br />
As Carl Sagan said, "extraordinary claims require extraordinary evidence".<br />
<br />
Perhaps instead of talking about "experiments", we should talk about "proof-of-concept" or "pilot". Experienced IT people understand those concepts, and they are really the same thing. It is counter-productive to use new terms - terms that have a shock effect - when trying to convince management in an established organization to adopt a new practice.<br />
<br />
One thing that is new about the concept of experiments is that failure should not be viewed as negative: failure causes learning, and therefore better decisions are made after that point. It is a sad fact that in most large organizations, any kind of failure is damaging to one's career - even if the experiment was daring and innovative and caused learning. Agilists want to encourage a culture where prudent and careful risk taking is accepted and rewarded - even if it sometimes results in a contained failure.<br />
<br />
But failure is still failure: it results in sunk costs - lost time, wasted effort, wasted money. Management needs assurance that failure - even contained failure - actually results in learning, and that the failure was unavoidable. They want to know that teams are being thoughtful and are using their best judgment and the best information available before they try something that results in failure. It is up to teams to instill that confidence, and it is up to management to be open to encouraging risk if the team demonstrates that it is cautious and thoughtful before it undertakes an experiment.<br />
<br />
Are we upholding our end of the bargain?<br />
<br />Cliff Berghttp://www.blogger.com/profile/02103767196153470434noreply@blogger.com0tag:blogger.com,1999:blog-7785396607069106815.post-78069862810345793412013-10-21T11:38:00.000-07:002013-10-21T11:38:44.836-07:00Apply critical thinking at agile conferencesThe book <a href="http://www.ftpress.com/store/critical-thinking-strategies-for-success-collection-9780132938563" target="_blank">Critical Thinking Strategies For Success</a> compares what it calls "sophistic thinking" with "strong sense critical thinking". The former is when there are doctrines and everyone nods their head yes to anything that supports those doctrines.<br /><br />Recently I attended AgileDC 2013, and I noted that there was a talk by someone who I know to be incompetent and who does not know what he is talking about: in fact, he was fired from his last company for that reason; yet he was presenting at AgileDC and he has a large following in the community. That community does not know, however, that in real world situations, this person cannot perform because his knowledge does not extend any deeper than platitudes. He does not have enough real world experience to turn the platitudes into action.<br /><br />Another person speaking at the conference laid out an approach that I know for a fact is not the approach used in the organization in which that person works, yet this approach was presented as a cornerstone approach. Again, after sufficient platitudes, all the heads nodded yes. More sophistry.<br /><br />We are not doing enough critical thinking in the agile community. We need to be skeptical. Just because someone says something at a conference does not make it so, and where is the proof that they actually did what they say they did? Unlike scientific conferences, agile conferences are practitioner conferences, and the work presented is not research that has been replicated under controlled conditions, and there is no standard of ethics that is being enforced to ensure that people are held accountable by their respective organizations for presenting accurately. In fact, there is plenty of incentive to spin things because it enhances the careers of the presenters and the reputations of their organizations sponsoring those presenters. AgileDC - and most practitioner conferences - are more marketing than they are reality and we have to keep that in mind.<br /><br />These conferences are still valuable though. There are lots of good ideas that are shared: we just need to be skeptical because some bad ideas can be made to sound viable when they are not. There is networking that happens at agile conferences, and that is always worthwhile. But don't believe something just because it was presented at an agile conference. Practice critical thinking.<br />Cliff Berghttp://www.blogger.com/profile/02103767196153470434noreply@blogger.com0tag:blogger.com,1999:blog-7785396607069106815.post-84326268210292051982013-10-20T06:26:00.001-07:002013-10-20T06:26:28.148-07:00Pre-agile helps one to understand agileOne of the greatest mistakes of the agile community is to compare agile with waterfall. There is an assumption that before there was agile, there was waterfall, and that most projects were waterfall. That is not my experience.<br /><br />It is not fair to compare a well run agile project with a highly dysfunctional project using waterfall, yet that is the comparison that is routinely made.<br /><br />During the 1980s I was on a string of very successful IT projects, and <i>none of them were waterfall</i>. 
On the other hand, I was on one project that was <u><i>not</i></u> successful, and it <u><i>was</i></u> a waterfall project.<br /><br />The projects that were successful (all non-waterfall) were at two companies: Intermetrics, and CAD Language Systems. These companies built compilers and other advanced tools that were used to design hardware systems. This was major league programming. Our projects were characterized by small teams (3-15 people), lots of collaboration, evolutionary design, and lots of other practices that agile claims credit for.<br /><br />These experiences have helped me enormously to understand agile, because I can look at agile practices and compare them to earlier practices that I saw work well - even though they were often done slightly differently - so I can discern what really makes those agile practices important. I can also discern that certain agile practices are not critical, because I saw projects be successful without those practices. Standups for example: of all of the successful projects that I was on during the 1980s, <i>none used the practice of standups</i>, and so I am confident in saying that standups are not important. Another practice that is a red herring is the team room: during the 1980s, programmers had their own offices (at least they did everywhere I worked), yet we collaborated continually - separate offices were not an impediment, as long as we were co-located. Co-location was important. And I distinctly recall closing my door from time to time so that I could maximize the quiet to think deeply about a problem, and then open it again when I had finished thinking. Thus, the ability to shut out the world to think deeply was also important. The open door was a universal signal that you were open to someone walking in to discuss something: the closed door was the reverse.<br />
<br />Each of those projects that were successful had a person who was responsible for making sure that everything fit together: someone who was charged with thinking about the entire system in an end-to-end manner. That was essential, and when there was no such person, or when the person was incompetent (that was the case on the waterfall project) things went wrong very quickly. I have also seen <i>agile</i> projects flounder for lack of such a person. The theory that the entire team is responsible for design is kind of like communism: it is a nice egalitarian theory, but in practice it seldom works - I won't say "never" because there are always exceptions. Generally speaking, there needs to be a qualified person whose <i>main job is to think end-to-end</i>, even if that person also does coding. The real issue is what type of person that should be, because at other times in my career I have had nightmare project managers or technical leads who almost single-handedly made everything go wrong (the waterfall project was like that).<br /><br />In the course of these 1980s projects, the two things that I found to really make a difference in a project were:<br /> ▪ <b>Small team</b>: if there are so many people that they cannot keep track of what they are each working on, then communication breaks down and code diverges.<br /> ▪ <b>Servant leadership</b>: Someone who provides gentle leadership to the team: not someone dictatorial, but someone who keeps track of what everyone is doing and what challenges they have on a daily basis; ensures that people discuss issues that need to be discussed; asks hard questions, listens, and proposes solutions but rarely dictates them; and who also <i>understands</i> all of the issues - someone who is knowledgeable about <u><i>what</i></u> the team is working on and <u><i>how</i></u> it will work. In my experience, self-organization cannot substitute for a good servant leader.<br /><br />From there, things kind of take care of themselves! With good servant leadership, you will end up with continuous daily regression testing (we did), you will have information radiators on testing results and on the evolving design (we did), you will have a continuous feature-driven or story-driven process with testable features or stories (we did), you will have continual design discussions as needed throughout the project (we did), the team members will feel empowered to work in their own way and contribute ideas and innovation (we did and did), and there will be a sense of harmony, order, and calm rather than an atmosphere of crisis and frustration. 
Servant leadership is really the key: everything else will follow, as long as the project is not hamstrung from the beginning by having a team that is too large or by having other poisonous situations imposed from the outside.<br /><br />Even the practice of developing requirements incrementally is not new: circa 1980 I worked at American Electric Power as a nuclear physics simulation engineer, and there was a programming team that supported us, and the method in which we interacted with them was such that the programmer would sit with us and talk about what we wanted, they would go away and develop some of that, then come back and show us to get feedback, then go away and build some more - sound familiar?<br /><br />So when I reflect on the Agile Manifesto today - or when I did after it was published - I see it as a rejection of the <i>wrong</i> paths that some projects - waterfall projects - took before that and a return to what worked. It was not new, but rather it was a validation of key things that had worked in the past, and that historical perspective helped me to understand the motivation behind each value and each principle and what its intent really was. And yes, agile does add some tweaks to some of those historical practices: that is a valuable contribution, but the historical perspective is just as - I would say more - valuable.Cliff Berghttp://www.blogger.com/profile/02103767196153470434noreply@blogger.com0tag:blogger.com,1999:blog-7785396607069106815.post-42385556010333898272013-10-19T13:41:00.000-07:002013-10-19T13:41:03.502-07:00I want to run an agile projectThere is a very humorous cartoon on YouTube called “<a href="https://www.youtube.com/watch?v=4u5N00ApR_k" target="_blank">I want to run an agile project</a>”. It depicts a young and enthusiastic project manager who sets out to run an agile project in an organization that is not accustomed to agile. The video follows the poor project manager as he goes from department to department trying to overcome one institutional barrier after another.<br />
<br />Of all the barriers that he encounters, <i>only one</i> pertains to the software development team: it is a scene in which he tries to convince two team members to pair and collaborate. All of the other barriers have to do with policies and rules that the organization has – rules that impede the agile process.<br />
<br />This is why agile IT transformation actually has only partly to do with agile teams: it has much more to do with the way that various organizational functions are run, including IT and its internal functions, as well as external functions such as contracting. Agile transformation consists of convincing and educating these various stakeholders; it also consists of training teams and coaching teams, but if one does not give equal – or greater – attention to the impediments that are external to the teams, then the transformation will proceed very slowly and possibly lose momentum.<br />
<br />The problem of agile transformation is therefore not so much a problem of scaling agile: it is a problem of enabling agile. Scaling pertains to having many teams on a project, or coordinating multiple agile projects. That is certainly part of the problem of becoming agile, but becoming agile must also address how teams are supported by the various IT support functions that large IT organizations have, including data center operations, enterprise architecture, IT risk management and governance, IT security, data architecture, release management, IT portfolio management, and so on. Many of these functions need to change to accommodate agile, but these changes are huge and impact the missions of these groups, and so this change must be worked in a gradual and inclusive manner. This is an enterprise change management process – often the province of management consulting – informed by agile values and practices. It is much more than “scaling agile”.<br />
<br />
In undertaking an agile transformation, one must focus on the goal. The goal is not to implement agile: that is not a business goal. Rather, the goal is usually to make the organization more nimble (“agile”, in the dictionary sense) – i.e. to increase business agility. Business agility is not the same thing as agile in the sense of agile software development. Agile software development is a tool for enabling business agility, but business agility is more than that and differs in many ways. Some business agility strategies rely on significant command and control – approaches that are antithetical to agile software development. Melding agile software development with the way the rest of the organization works, so as to enable business agility – and doing so with approaches that are compatible with the strategies that are being adopted by the other parts of the organization – is the challenge of an agile IT transformation.<br />Cliff Berghttp://www.blogger.com/profile/02103767196153470434noreply@blogger.com0tag:blogger.com,1999:blog-7785396607069106815.post-4367480866488251142013-10-19T07:15:00.002-07:002013-10-19T07:15:17.992-07:00Problems with facilitation methodsFacilitation is a core skill for agile coaches, and most of us are pretty good at it. There are some practices that I have seen that can be problematic however.<br />
<h4>
Dot Voting</h4>
The whole point of dot voting is to rank things by importance, priority, urgency, or some other scale - and to use the participants as the deciders so that they feel ownership of the ranking.<br /><br />But what if the participants do not have the judgment needed to properly rank something?<br /><br />For example, consider a group of diverse participants - including many agile novices - that is ranking the agile practices that they want to focus on. The ranking will most likely end up reflecting the sources of pain that they currently feel. What it will likely not reflect is the root causes, because it takes a Ri-level agilist (in the Shu-Ha-Ri sense) to understand root causes. And we all know that if we do not address root causes, we will not solve a problem.<br /><br />So the implication here is that if the facilitator has not drilled into the practices and discussed root causes with the participants, the root causes will not be reflected in their ranking, because the participants are diverse and many are therefore new to agile and will not appreciate the root causes.<br /><br />The lesson: be careful what you rank, and what you do with the ranking. In the example above, if the goal is to identify practices to talk about, and talk through root causes, voting will achieve that. But if the goal is to identify what practices to focus on, it will not be effective, because the participants do not have the judgment required to make good choices about that.<br />
<h4>
Not Allowing the Facilitator To Voice an Opinion</h4>
A central aspect of facilitation is that the facilitator should not bias the group. But what if the facilitator is an expert in the topic being discussed? What do you do then?<br /><br />We probably all know the answer to this: you guide the group by asking hard questions, rather than telling them the answer. In fact, they probably have some local domain knowledge that you do not. But what if the group needs to be informed by your expertise?<br /><br />One technique is to explicitly take off your facilitator "hat" by saying something like, "Ok, allow me to explain what I know on this", and then give a brief explanation based on your expertise. When doing this, I usually punctuate it by saying something like, "So that is the accepted approach to that, but it is not necessarily what we have to do here, because our situation might be unique". That last part lets the group know that they are still in control: they can decide to go against standard practice. Every time you share your expertise, you again re-iterate that it is an accepted view, but that the group can depart from that if it wants to. You then resume facilitating and have complete willingness to record and support choices that go against what your expertise advises. You have done your job to inform, but then the group decides the content of the discussion.<br />
<h4>
Putting Cards On the Wall</h4>
Putting cards on the wall is a long-standing practice for facilitation. I personally first encountered this technique when I participated in a six week (all day, six days a week) modeling session with Peter Coad, David Anderson, Jeff DeLuca and others in Singapore in the late '90s. The purpose of this technique is to encourage people to voice their opinion on something: if you just ask for opinions, some people remain silent. If you give them cards and tell them that they have to write something, they will. It gets all the ideas out in the open.<br /><br />The problem is, people often write small, or illegibly, and so you cannot read what they wrote unless you go up close to the cards. And if you have a group of more than five people, it starts to become difficult for people to see past others as they crowd around - especially the smaller people. Further, if there are many cards (say, more than 20), some people will not read them all.<br /><br />Having people stand up close to the cards has another problem: standing uses working memory and consumes a tiny bit of your focus, and standing in close proximity to other people who are shifting around uses even more working memory and focus. Try this experiment: while standing, perform some long division in your head. Now sit, and repeat the experiment (using different numbers of course). You will find that while sitting, you can think more deeply and therefore do the arithmetic more easily. You might think that standing is something that you can do on autopilot, but it actually does consume some mental energy. Sitting, with everyone else in the room stationary, allows you to focus better on purely mental tasks. Sitting is therefore better for the participants of a facilitated session if you want to get their best - their deepest - thoughts. This does not apply to the facilitator because the facilitator's attention is mostly on the group - not the topic. The facilitator has to focus somewhat on the topic, but his or her primary focus is on the people, and the direction things are taking, and standing is also important for the facilitator in order to establish a sense of authority over the process. The people who need to think deeply are the participants.<br /><br />One myth about the use of cards is that writing cards enables things to go more quickly. The purpose of facilitation is to establish a shared way of thinking about a problem. That means that all ideas that are expressed - as cards or otherwise - need to be mentally processed, one by one, by everyone in the room. It is an inherently serial process, so don't be fooled into thinking that you gain time by having people simultaneously writing their ideas on cards. Saving time through concurrency is not the purpose of the cards. Each card still needs to be read by each person, or the facilitator can read each card aloud. But if people cannot see the cards, they cannot then sit back and reflect on them: they will not remember what each card said and they will not be able to "connect the dots" in their heads. Even if you do affinity analysis, it is often the case that very critical things are mentioned by some cards in an affinity group, and so just looking at the grouping is likely to miss major ideas.<br /><br />In order to enable participants to sit, and to ensure that all ideas are heard and read and can be contemplated by everyone, I re-write each card on the whiteboard. I do this as I read each card, so it consumes little additional time. 
I write each idea large and cleanly (legible writing is an important facilitation skill) so that everyone in the room can read it, and then we discuss it. Once all ideas have been written and discussed, we can discuss all the ideas as a whole, coming up with holistic strategies that address all of the ideas. I find that this works much, much better than having cards on the wall.<br />Cliff Berghttp://www.blogger.com/profile/02103767196153470434noreply@blogger.com0tag:blogger.com,1999:blog-7785396607069106815.post-68365625011211802462013-10-13T08:16:00.003-07:002013-10-13T09:43:58.566-07:00Apple Donut Headquarters - Agile, or Anachronistic?Everyone reading this post has no doubt heard about Apple's new headquarters, under construction:<br />
<a href="http://bangphotos.smugmug.com/001-News-1/Bay-Area/apples-proposed-new-office/i-QgsWpVh/0/L/ssjm1013apple004-L.jpg">http://bangphotos.smugmug.com/001-News-1/Bay-Area/apples-proposed-new-office/i-QgsWpVh/0/L/ssjm1013apple004-L.jpg</a><br />
<br />
From an agile perspective, the Apple Donut seems very "agile": <br />
It promotes lots of collaboration, because it is only four stories (no getting on an elevator to go see someone), and it encourages one to walk past other teams on the way to a meeting or one's primary work area.<br />
<br />
But on the other hand, it seems to me like the logical conclusion of 20th century industrial age thinking, in which masses of people travel to a central location every day, work intensely for a hierarchical organization, and then travel back at the end of the day - kind of like Metropolis (<a href="http://www.imdb.com/title/tt0017136/">http://www.imdb.com/title/tt0017136/</a>). A glance at the photo (see link above) of the planned Apple headquarters shows the massive highway leading underground to the parking area - not unlike the river of people flowing in and out of Metropolis every day! I can almost hear the factory siren signaling the start of work. And as the San Jose Mercury News put it, the new headquarters "promises to bring a world-class real-estate project - along with a lot of traffic congestion - to the heart of Silicon Valley." I don't know about you, but it takes a pretty high incentive for me to suffer an unpleasant gridlocked commute every day.<br />
<br />
To be fair, the elite of Metropolis did not espouse agile principles: the movie depicts no signs of collaboration, but rather only hierarchical control with pre-defined jobs.<br />
<br />
But is it really that different?<br />
<br />
No matter, because a much more pertinent question is, Is Apple a model for other organizations? Should we be trying to learn from it, to inform the guidance we give our clients on how to structure their organizations?<br />
<br />
I contend that the answer is usually no - and where it is no, it is emphatically no.<br />
<br />
The reason is that most companies are not like Apple: most companies are not as "cool" as Apple, and they don't have an inspiring mission the way that Apple, Google, and some of the other most glamorous tech companies do. Most companies - and most IT work in most companies - are relatively humdrum, and such companies cannot attract the best and brightest merely on the strength of their mission or their "cool factor". Most companies have to attract IT workers based on other traits - including working conditions and compensation. In other words, if getting in and out of the workplace every day is a miserable experience, consuming two hours of one's day in a horrible commute, then the organization had better (1) offer very high compensation, (2) be very "cool" to work for, or (3) expect to obtain only the least qualified talent, because the best talent will choose opportunities that offer either #1 or #2.<br />
<br />
But what is the alternative? Apple can get away with a four-story Tower of Babel absurdity, because it is so "cool". But what about the countless other companies? What is the right kind of agile workplace for them?<br />
<br />
We have to be careful here, because agile principles emphasize things like face-to-face conversation and working together in real time, and it is easy to take those things to their logically absurd conclusion and arrive at the Apple headquarters. But what is the right way to scale those agile principles? Does scaling them mean forcing every conversation to be face-to-face? Every meeting to be in-person? Every collaboration to be real time?<br />
<br />
Of course, the answer is no. In fact, global trends are the reverse. In our increasingly global economy, we see more and more workers who are needed in many geographically separated places on the same day, because their skills are more valuable than their proximity. We also see an inexorable trend toward flexible working patterns. Two-income households and the removal of barriers to flexibility - combined with the increasingly global nature of work - are making this come about. In the IT world, early agile inserted a small hiccup in this trend by sending IT people back to the office for core hours to be on teams, but the overall trend persists. Now that agile has matured, and the focus is on continuous delivery, teams are discovering that they need to be in continual contact with diverse stakeholders in other parts of the organization - people who cannot be physically present - and it is often the case that the business stakeholders are in other parts of the country.<br />
<br />
Early visionaries such as Alvin Toffler were not wrong on this: the trend is toward less commuting, more flexibility, a return to the organization of populations around communities rather than commuting corridors, and the substitution of electronic collaboration for physical presence. Commuting was a 20th century anomaly.<br />
<br />
So the question is not whether agile values are right - they are - but rather, how does one achieve agility when the trend is toward more work flexibility, more time zones, and workers who are needed in many places at once - and in an environment in which the best workers can get jobs that give them the flexibility they need?<br />
<br />
That is the real question for agile. And Apple's new headquarters does not answer that question.<br />
<br />Cliff Berghttp://www.blogger.com/profile/02103767196153470434noreply@blogger.com0tag:blogger.com,1999:blog-7785396607069106815.post-60579878245505828892010-08-05T11:14:00.000-07:002010-08-07T15:06:40.177-07:00How Development Time Impacts ROI<span style="font-weight: bold; color: rgb(0, 0, 0);"><br />“The expected ROI is 60% higher if we defer the most risky features until the core features are done.”</span><br /><br /><div class="storycontent"><p>Is this the kind of confident and informed statement you would like to be able to make when asked about the feature development strategy for your software project?</p><p>Unfortunately, a more typical statement is something like, “The risky features might take longer than we think; but we're not really sure what impact this might have for the business – that is your area.”</p><p>Step back and consider the power of the first statement, and compare it with the extremely weak and almost useless content of the second. If you were a business decision maker, would you want to waste your time with people who “inform” you with statements similar to the second one? You might as well outsource the project and at least bind someone contractually.</p><p>If you would like to be able to make statements like the former, and have them be credible and tangible enough to discuss and critique, you need to develop an analytical model of the sources of value, cost, and risk pertaining to your project.</p><p><span style="font-weight: bold; color: rgb(0, 0, 0);">Mitigating Risk In the Design</span><br /></p><p>Most software developers endeavor to mitigate risk by incorporating design features that decouple risky features from critical elements. That way, the risky features can be developed somewhat independently. This is not always possible, however, especially if the risky features represent an aspect of behavior that permeates all levels of a system. For example, security strategies often have this quality. Thus, if one chooses to “add security later,” one might have to rewrite large portions of a system, introducing substantial technical risk. Even so, deferring such features might be wise if the overall probability of success of the project is enhanced, because the software development risks are postponed until after the core features have stabilized and customers have had a chance to use and learn to like the product.</p><p>Emphasize “might”: one answer does not work for all situations. That is why it is necessary to have an understanding of how the various sources of value, risk, and cost interplay. An analytical model is a powerful tool for this purpose.</p><span style="font-weight: bold; color: rgb(0, 0, 0);">Modeling Risky Features</span><br /><p>Decision makers need to know what the tradeoffs are: Is it better to tackle hard problems up front, or to postpone them? The answer is different in different situations: it depends on the relative size of the tradeoffs. 
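To make this concrete, here is a minimal sketch of such a model - a Monte Carlo comparison of two sequencing strategies, written in Python. Every number in it (the feature values, the burn rate, the market window, the effort range for the risky features) is an invented assumption for the sake of illustration, not data from any real project, and the model itself is deliberately crude. The point is only that a few dozen lines of simulation turn "we're not really sure" into an expected-ROI figure that can be discussed and critiqued.
<pre>
import random

# All of these inputs are hypothetical - invented for illustration only.
CORE_VALUE = 1_000_000     # revenue if the core features ship in time
RISKY_VALUE = 400_000      # additional revenue if the risky features also ship
MONTHLY_COST = 50_000      # burn rate of the team
CORE_MONTHS = 6            # core feature effort (assumed well understood)
RISKY_MIN, RISKY_MAX = 3, 12   # risky feature effort (highly uncertain)
MARKET_WINDOW = 14         # months before the market opportunity closes

def expected_roi(defer_risky, trials=100_000):
    """Mean ROI over Monte Carlo trials for one sequencing strategy."""
    total = 0.0
    for _ in range(trials):
        risky_months = random.uniform(RISKY_MIN, RISKY_MAX)
        if defer_risky:
            # Core ships first; the risky features add value only if they
            # also finish inside the market window.
            core_done = CORE_MONTHS
            risky_done = CORE_MONTHS + risky_months
        else:
            # Risky features are tackled first; the core value is delayed
            # (and possibly lost) if the risky work drags on.
            core_done = risky_months + CORE_MONTHS
            risky_done = core_done
        value = 0.0
        if core_done <= MARKET_WINDOW:
            value += CORE_VALUE
        if risky_done <= MARKET_WINDOW:
            value += RISKY_VALUE
        # Assume work stops when the window closes - a simplification.
        cost = MONTHLY_COST * min(risky_done, MARKET_WINDOW)
        total += (value - cost) / cost
    return total / trials

print("ROI, risky features deferred:", round(expected_roi(True), 2))
print("ROI, risky features up front:", round(expected_roi(False), 2))
</pre>
With these particular assumptions, the deferred strategy wins handily, because the core value is banked before the schedule risk is taken; change the assumptions and the answer can flip - which is exactly why the inputs matter. 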
Thus, it is necessary to identify what the tradeoffs are and estimate their relative magnitudes.<a style="" onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_JlhPcbyff5o/TFsDXe1GxGI/AAAAAAAAABY/8q_oMPvDO4A/s1600/HighRiskFeaturesHistogram.png"><img style="float: right; margin: 0pt 0pt 10px 10px; cursor: pointer; width: 165px; height: 200px;" src="http://1.bp.blogspot.com/_JlhPcbyff5o/TFsDXe1GxGI/AAAAAAAAABY/8q_oMPvDO4A/s200/HighRiskFeaturesHistogram.png" alt="" id="BLOGGER_PHOTO_ID_5501995071710872674" border="0" /></a></p><p>Simple estimates might not even be enough. In many cases, there are complex scenarios involving unknowns, such that if certain outcomes occur, the tradeoffs change. For example, if a major security incident occurs while a customer is using the software, then the customer's interest in the security features might grow rapidly. Simply saying that this could happen is not sufficient, because it might be extremely unlikely: one must estimate the chances of this occurrence in order to evaluate the importance of addressing security now, given that time-to-market risks are reduced by postponing it.</p><p>Most developers <a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_JlhPcbyff5o/TFsDvIfOPFI/AAAAAAAAABo/_WCOLqGbYcQ/s1600/HighRiskFeaturesTrend.png"><img style="float: left; margin: 0pt 10px 10px 0pt; cursor: pointer; width: 200px; height: 121px;" src="http://4.bp.blogspot.com/_JlhPcbyff5o/TFsDvIfOPFI/AAAAAAAAABo/_WCOLqGbYcQ/s200/HighRiskFeaturesTrend.png" alt="" id="BLOGGER_PHOTO_ID_5501995478030367826" border="0" /></a>use gut feeling to make these recommendations, but that leads to statements such as the one above that goes “...and we're not really sure....” It is much more persuasive to have thoughtful analysis behind a recommendation.</p><p><a href="http://expresswaysolutions.com/Whitepapers/HighRiskFeatures.pdf">This whitepaper</a> provides an example of how to develop such a model. The example focuses on how to model the tradeoffs pertaining to the postponement of high-risk features.</p></div>Cliff Berghttp://www.blogger.com/profile/02103767196153470434noreply@blogger.com2tag:blogger.com,1999:blog-7785396607069106815.post-24216243610199386632009-03-15T09:33:00.000-07:002009-03-15T10:24:38.969-07:00The Foolhardy Rush to Consolidation<div class="storycontent"><p>Is your organization under a mandate to consolidate some aspect of IT?</p><p>Most likely it is. Consolidation is a mania that has spread across organizations of all kinds, in all sectors. It is usually driven by the business or financial side, and approved by the CIO - after all, who can argue with cutting costs through economies of scale?</p><p>But in the process, are we losing some important advantages of decentralization? Is this even part of the business case? 
That is, did the business case even attempt to account for the enterprise value of the existing decentralization?</p><p>Most likely not, since those benefits are usually somewhat intangible, and business cases for consolidation almost always focus on direct costs: the costs of servers, IT personnel, and software licenses. (A back-of-the-envelope illustration of pricing such an intangible appears after this post.)</p><p>Recently I was involved with two different calls for consolidation. One was a consolidation of IT operations: IT would take ownership of all of the computing applications in the field, move them into a single data center, and combine them when possible; the field would no longer be allowed to write its own apps. The other effort sought to consolidate all HR functions and create a single "self-service" system: field HR personnel were eliminated or moved to the central location. The business case was based on the reduced direct cost of HR personnel.</p><p>These looked compelling on paper, but both efforts foundered. Let's look at the HR effort. It turned out that people in the field did not want a self-service approach: they were accustomed to having someone to ask about HR matters, which allowed them to focus on their primary jobs. In the new approach, line-of-business managers found that they were spending all of their time performing HR functions and were not able to focus on their own missions. The business case did not account for this loss of mission effectiveness, and in the words of one of the senior field managers, "the business case is flawed".<a href="#SeeChap11"><sup>1</sup></a></p><p>In the case of the IT systems consolidation effort, the same mistake was made: the value of decentralization was not accounted for. As a result, people in the field will no longer have the flexibility to respond tactically to meet their own needs. The effort is still underway, so we shall see what happens, but the symptoms are all looking familiar....</p><a name="SeeChap11"></a><br />1. See chapter 11 of my book for techniques on how to model the value of "intangibles".<br /><br /></div>Cliff Berghttp://www.blogger.com/profile/02103767196153470434noreply@blogger.com0
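As the back-of-the-envelope illustration promised above: here is what it can look like to put even a rough number on such an "intangible". Every figure below is invented for the purpose - it is not data from the efforts described in the post - and a real model would of course be richer; the sketch only shows that the intangible can be put on the same footing as the direct savings.
<pre>
# Hypothetical consolidation business case - every figure is an invented
# assumption for illustration, not data from any real organization.

FIELD_SITES = 40
HR_STAFF_CUT_PER_SITE = 1
HR_SALARY = 70_000             # annual fully loaded cost per HR person

MANAGERS_PER_SITE = 5
MANAGER_SALARY = 120_000
SELF_SERVICE_FRACTION = 0.10   # slice of each manager's year diverted to HR tasks

# What the business case counts: eliminated field HR positions.
direct_savings = FIELD_SITES * HR_STAFF_CUT_PER_SITE * HR_SALARY

# The "intangible" it ignores: mission time that line managers lose to
# self-service HR, priced at what the organization pays for that time.
mission_cost = (FIELD_SITES * MANAGERS_PER_SITE
                * MANAGER_SALARY * SELF_SERVICE_FRACTION)

print(f"Direct savings claimed:         ${direct_savings:,.0f}")
print(f"Unpriced loss of mission focus: ${mission_cost:,.0f}")
print(f"Net annual effect:              ${direct_savings - mission_cost:,.0f}")
</pre>
With these made-up numbers, a claimed $2,800,000 of savings shrinks to a net $400,000 once the managers' diverted time is priced in - a very different decision. That is the kind of tradeoff the business cases described above never surfaced.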