Tuesday, December 29, 2015

Why Agile task planning does not work

In Extreme Programming Explained, Kent Beck wrote,
In XP, the [elements of planning] are the stories. The [scope units] are the estimates attached to the stories. The [scope constraint] is the amount of time available.
Yet it seems that every time I have coached an Agile team, the team is compelled by management to do task level planning - that is, decomposing each story into work tasks. On top of this, most of the popular Agile planning tools, including VersionOne, TFS, Rally, and Jira, place a heavy emphasis on task level planning: e.g., in Rally, you cannot define a story without defining its tasks. As someone who has used Rally a great deal, I found this to be a horrible nuisance.

Task level planning runs counter to Agile in many ways, and I have seen it greatly undermine Agile teams. Some of the problems with task level planning in an Agile project are,
1. Task level planning is excessively time consuming; and since planning involves the entire team, this ties up the team for too much time - the team would rather get to work. 
2. Task level estimates are usually wildly wrong, in contrast to story level estimates - which tend to be consistent from story to story, and therefore reliable. 
3. The actual tasks needed to complete a story do not reveal themselves until the developer starts working on the story. 
4. Partly because of #3, adding up a story's task estimates does not yield the time required to complete the story.
5. Task completion does not prove progress - only story completion does: that is the entire point of stories - that a story represents demonstrable progress, and that completion is defined by the story's acceptance criteria and the team's definition of done for its stories. Tasks do not have these attributes. This is central to Agile: waterfall projects are notorious for being "on schedule" until release day, when they suddenly need more time - yet the project hit all of its prior milestones, with all tasks such as "design", "code", etc. completing - but with nothing actually demonstrable. It is the crucible of running software, passing acceptance tests, that proves progress - nothing else does. 
6. Completion of a task often (usually?) does not mean that the task is really complete: since tasks are often inter-dependent, completing one task might reveal that another task - which was thought to be done - is actually not done. For example, a test programmer might write some acceptance tests, but when the app programmer runs them against the story's implementation, the programmer finds that some tests fail that should pass - indicating that the tests are wrong, and meaning that the testing task was not actually done - yet it had been marked as done. Only running software, passing tests, proves that the story is done. Task progress is suspect.

That said, some level of task planning is useful - especially when more than one person is involved in implementing a story, such as a test programmer and an app programmer. One can then have tasks for the story, such as "write automated acceptance tests" and "write unit tests and app code". But progress should not be measured by task completion, and it is a total waste of time to come up with estimates for these tasks ahead of time. Instead, it is better to have people estimate a task on the spot, the day they plan to work on it - that is likely to be more accurate than an estimate made a week or two before.

Some of the consequences of paying too much attention to tasks in an Agile project are,
  • Parties external to the team, such as the project manager, start to think of the work at a task level, and report progress based on that, with all of its pitfalls (see #5 above).
  • Parties who pay attention to task estimates, such as the team lead, will be constantly disappointed, because of #2,3,4 above.
  • Teams will lose an entire day or more to planning each sprint, because of #1.
  • Team members will collaborate less, feeling that "I did my task - now it's in your court", instead of working together to get app code to pass its tests.
Even though many Agile authors talk about tasks, and many "Agile" tools support task level planning, task level planning is antithetical to Agile. As the Agile Manifesto says,
Working software is the primary measure of progress.
Not task completion. Measuring task completion is waterfall. It's Earned Value Management. It's Gantt charts. It is not Agile.

Friday, December 25, 2015

The "go" language is a mess

I have been using go for the past six months, in an effort to learn a new natively compiled language for high performance applications. I have been hoping that go was it - sadly, it is not.

Go is, frankly, a mess. One of its creators, Ken Thompson of Unix/C fame, called go an "experiment" - IMO, it is an experiment that produced Frankenstein's monster.

It is OO, but has arcane and confusing syntax

Go is object oriented, but unlike most OO languages, the syntax for defining interfaces and concrete objects is completely different: one defines an "interface" and then one defines a struct - and these are quite different things. But also unlike many OO languages, the methods of a concrete object type are not defined with the object type - they are defined outside the object definition - in fact, they can be in any file that is labeled as belonging to the "package" in which the object type (struct) is defined. Thus, you cannot tell at a glance what a type's methods are. On top of that, there is no syntax for saying that "concrete type A implements interface I", so you cannot tell if a concrete type implements an interface unless you try to compile it and see if you get an error: the rule is that a concrete type implements an interface if the concrete type has all of the methods that are defined by the interface - and yet the concrete type's methods are strewn all over the place. What a mess.
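To make this concrete, here is a minimal sketch - the Speaker and Dog names are my own invention, not from any real package - showing how the interface, the struct, and the struct's methods all end up declared separately, along with the common blank-identifier idiom that forces a compile-time check that a concrete type satisfies an interface:

```go
package main

import "fmt"

// The interface is declared in one place...
type Speaker interface {
	Speak() string
}

// ...the concrete type in another...
type Dog struct {
	Name string
}

// ...and its methods can live in any file of the same package,
// with no "implements" clause linking them to Speaker.
func (d *Dog) Speak() string {
	return d.Name + " says woof"
}

// Common idiom: this assignment compiles only if *Dog satisfies
// Speaker - the explicit check that the language itself omits.
var _ Speaker = (*Dog)(nil)

func main() {
	var s Speaker = &Dog{Name: "Rex"}
	fmt.Println(s.Speak()) // Rex says woof
}
```

The `var _ Speaker = (*Dog)(nil)` line is the closest go comes to an "implements" declaration: if *Dog ever stops satisfying Speaker, that line fails to compile.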

As a result, there is no language-provided, clear declaration of a type network - interface types and the concrete types that implement them. You have to keep track of that on a piece of paper somewhere, or use naming conventions to link them. The reason for this chaos escapes me, as I have not seen any helpful language feature that results from it - you cannot extend types dynamically, so I see no advantage to the forceful decoupling of interface types, concrete types, and the methods that belong to the concrete types. Perhaps this was part of the experiment - and with terrible results.

Its polymorphism is broken

Go lets you define an interface and then define concrete types (structs) that implement that interface (and possibly others). Yet the way this works is very peculiar and likely to trip up programmers. E.g., if you create an instance of a concrete type and then call an interface method on it, you will get what you expect - the right method for the concrete type will be called. But if you pass the concrete value into a method (via another method call) and then call the method from there, the wrong one might be called - the method for the abstract type will be called if the intermediate method's receiver or parameter is the abstract type. Go does not actually have abstract types, so to create one you have to define a struct and give it a dummy implementation of each method that you don't want to implement. My point here is that method resolution is statically determined and so depends on the context - and that is very confusing, likely to introduce subtle errors, and it defeats most of the value proposition of polymorphism.

You want an example? Try this code:
package main

import "fmt"

type Resource interface {
    getParentId() string
}

type Dockerfile interface {
    Resource
    printParentId()
}

type InMemResource struct {  // abstract
}

func (resource *InMemResource) getParentId() string {
    fmt.Println("Internal error - getParentId called on abstract type InMemResource")
    return ""
}

func (resource *InMemResource) printParentId() {
    fmt.Println(resource.getParentId())
}

type InMemDockerfile struct {
    InMemResource
    RepoId string
}

func (dockerfile *InMemDockerfile) getParentId() string {
    return dockerfile.RepoId
}

func main() {
    var curresource Dockerfile = &InMemDockerfile{
        InMemResource: InMemResource{},
        RepoId: "abc123",
    }
    curresource.printParentId()
    fmt.Println(curresource.getParentId())
}
When you run it, you will see that the getParentId method defined by InMemResource will be called - instead of the getParentId defined by InMemDockerfile - which is the one that, IMO, should be called, because the object (struct) is actually an InMemDockerfile. Yet if you call curresource.getParentId directly from the main function, you will get the expected polymorphic behavior.

The reason is this: if you add a method,
func (dockerfile *InMemDockerfile) printParentId() {
    fmt.Println(dockerfile.getParentId())
}
to the above program, it works. Thus, the above program did not work because the promoted printParentId method still has *InMemResource as its receiver, so the getParentId call inside it is statically bound to InMemResource's implementation - embedding the "abstract" struct effectively obscured the actual type from the final method in the call sequence. Programmers who are accustomed to dynamic dispatch, as in Java, will find this behavior surprising.

Type casting affects reference value

Another peculiarity of the go type system is that if you compare an interface value with nil, the comparison might fail (so the value is not nil), but if you then type assert it and compare the result with nil, that comparison can succeed. Here is an example:
var failMsg apitypes.RespIntfTp
if failMsg == nil {
    fmt.Println("failMsg is nil")
} else {
    fmt.Println("failMsg is NOT nil")
    var isType bool
    var fd *apitypes.FailureDesc
    fd, isType = failMsg.(*apitypes.FailureDesc)
    if isType {
        if fd == nil {
            fmt.Println("fd is nil!!!!! WTF??")
            if failMsg != nil {
                fmt.Println("And failMsg is still not nil")
            }
        } else {
            fmt.Println("Confirmed: fd is not nil")
        }
    } else {
        fmt.Println("Cast failed: NOT a *apitypes.FailureDesc")
    }
}
The "fd is nil!!!!! WTF??" line executes; draw your own conclusions - but regardless, I expect this unexpected behavior to be the source of a great many bugs in programmers' code.
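What is going on here is that an interface value in go is a (type, value) pair, and it compares equal to nil only when both parts are nil; assigning a nil concrete pointer to an interface variable produces an interface that is non-nil even though the pointer inside it is nil. The following self-contained sketch reproduces the situation - RespIntfTp and FailureDesc here are simplified stand-ins of my own, since the apitypes package is not shown:

```go
package main

import "fmt"

// Stand-ins for apitypes.RespIntfTp and apitypes.FailureDesc.
type RespIntfTp interface{}
type FailureDesc struct{ Reason string }

// interfaceNilness performs the three comparisons from the example above.
func interfaceNilness() (intfIsNil, assertOk, ptrIsNil bool) {
	var fd *FailureDesc         // a nil *FailureDesc pointer
	var failMsg RespIntfTp = fd // interface now holds (type=*FailureDesc, value=nil)
	p, ok := failMsg.(*FailureDesc)
	return failMsg == nil, ok, p == nil
}

func main() {
	intfIsNil, assertOk, ptrIsNil := interfaceNilness()
	fmt.Println(intfIsNil, assertOk, ptrIsNil) // false true true
}
```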

Its compilation rules are too confining

With C, one compiles to a binary that one can then link with or save somewhere. With go, the binaries are managed "magically" by the compiler, and you have to "install" them. Go's approach tries to make compilation and binary management simple for stupid people - yet anyone using go is not likely to be stupid, and anyone using go will likely want to be able to decide how they compile and manage binaries. In order to get out of the go "box" one has to reverse engineer what the tools do and take control using undocumented features. Nice - not!

Its package mechanism is broken

Go's package rules are so confusing that when I finally got my package structure to compile I quickly wrote the derived rules down, so that I would not have to repeat the trial and error process. The rules, as I found them to be, are:
  1. Package names can be anything.
  2. Subdirectory names can be anything - as long as they are all under a directory that represents the project name - that is what must be referenced in an install command. But when you refer to a sub-package, you must prefix it with the sub-directory name.
  3. When referring to a package in an import, prefix with project name, which must be same as main directory name that is immediately under the src directory.
  4. Must install packages before they can be used by other packages - cannot build multiple packages at once.
  5. There must be a main.go file immediately under the project directory. It can be in package “main”, as can other files in other directories.
Are there other arrangements that work? No doubt - this is merely what I found to work. The rules are very poorly documented, and they might even be specific to the tool (the compiler) - I am not sure, but it seems that way. And here is an interesting blog post about the golang tools.

It is hard to find answers to programming questions

This is partly because of the name, "go" - try googling "go" and see what you get. So you have to search for "golang" instead. The problem is that much of the information about go is not indexed as "golang" but as "go": when someone (like me) writes a blog post about the language, they refer to it as go - not as "golang" - so the search engines will not surface it.

Another reason is that the creators of go don't seem to see it as their responsibility to be online. Creators of important tools nowadays answer questions about their language online, and that results in a wealth of information that helps programmers get answers quickly; with go, one is lucky to find answers.

The Up Side

One positive thing that I did find was that go is very robust under refactoring. I performed major reorganizations of the code several times, and each time, once the new code compiled, it worked without a single error. This is a testament to the idea that type safety has value, and go has very robust type safety. I would venture to say that for languages such as go, unit testing is a waste of time: I found a full suite of behavioral tests to be sufficient, because refactoring never introduced a single error. This is very different from languages such as Ruby, where refactoring can cause a large number of errors because of the lack of type safety; for such languages, comprehensive unit tests are paramount - and that imposes a large cost on the flexibility of the code base, because of the effort required to maintain so many unit tests.


When I finish the test project that I have been working on, I am going to go back to other languages, or perhaps explore some new ones. Among natively compiled languages, the "rust" language intrigues me. I also think that C++, which I used a lot many years ago, deserves another chance - but with some discipline, to use it in a way that produces compact and clear code, because C++ gives you the freedom to write horribly confusing and bloated code. I am not going to use go for any new projects, though - it has proved to be a terrible language for too many reasons.