Tuesday, November 11, 2014

History of agile

This is for those who think that Agile is a recent evolutionary advance in software engineering. It is not. Before the 1990s, a great many - perhaps most? - software projects were executed in a non-waterfall way. Some were agile, some were not. In the 1980s I was fortunate to have been on many that were: projects with a servant leader, with fully automated regression tests run daily, with test results displayed from a database, with a backlog of small demonstrable features, with co-location (individual offices side by side), with daily sharing of issues, with collaborative and evolutionary design, and with a sustainable pace. I can recall personally writing up to 1,000 lines of tested C code in a day on my Sun Unix "pizza box" workstation: those projects were highly productive - today's tools and methodologies do not exceed that productivity.

However, over time more and more large software projects came to be managed by administrative program managers and procurement managers who had never personally developed software, and they foolishly applied a procurement approach that is appropriate for commodities - but not for custom-built software. This was motivated by a desire to tightly control costs and hold vendors accountable. Waterfall provided the perfect model for these projects: the up-front requirements could be done first and then serve as the basis for a fixed-cost, fixed-schedule "procurement" covering the implementation phases.

This was a horrible failure. Software people knew in the 1960s that this approach could not work.

So in the late 1990s a movement finally came together to push back on the trend of more and more waterfall projects, by returning to what had worked before: iterative development of demonstrable features by small teams, and a rejection of communication primarily by documents. This basic approach took many forms, as shown by the chart. And that is why I am against "prescriptive Agile" - that is, following a template or rule book (such as Scrum) for how to do Agile. There are many, many ways to do Agile, and the right way depends on the situation! And first and foremost, Agile is about thinking and applying contextual judgment - not "following a plan"!

And then you have young people come along, their software engineering experience dating no farther back than 1990, and they claim that Agile is a breakthrough and that the "prior waterfall approach" is wrong. Well, it was always wrong - people who actually wrote code always knew that waterfall was idiotic. There is nothing new there. And Agile is not new. So when an Agile newbie tells a seasoned developer that he or she should use Scrum, or is not doing Agile the right way, it demonstrates tremendous naiveté. People who developed software during the '70s and '80s, long before the Agile Manifesto, know the real Agile: they know what really matters and what really makes a project agile (lowercase "a") and successful - regardless of which "ceremonies" you do, regardless of which roles you have on a team, and so on. It turns out that most of those ceremonies don't matter: what matters most - by far - is the personalities, the leadership styles, and the knowledge.

This chart was developed by a colleague at Santeon, a company that I worked at. The information in the graphic was taken from an article by Craig Larman. Here is the article.

As PDF.

Thursday, November 6, 2014

The horrible state of open source tools

Are you kidding me???

Recently I wrote a performance testing tool in Ruby, and I have been rewriting it in Java. The Ruby tool uses Cucumber, so for the Java version I have decided to substitute JBehave, since JBehave is the predominant BDD tool in the Java space, and also because I tried the Java version of Cucumber and found it broken and incomplete. (Sigh - why not call it "beta"?)

So I first looked at the JBehave docs, and was irritated to discover that there are no code examples: you have to jump through hoops, such as running Etsy.com, in order to just see an example. I don't know what Etsy.com is and I don't want to know - I just want to see a friggin' code example. So I googled and found one - a good one - here.

Even better, the example gets right to the point and shows me how to run JBehave without having to use any other tools - most JBehave examples use JUnit, which I detest. I just want to run JBehave. Period. No complications. This is how you do it:
import java.util.Arrays;
import java.util.List;
import org.jbehave.core.embedder.Embedder;

Embedder embedder = new Embedder();
List<String> storyPaths = Arrays.asList("Math.story");
embedder.candidateSteps().add(new ExampleSteps());
embedder.runStoriesAsPaths(storyPaths);
The file path ending in ".story" is from the example, and I wanted to find out the exact rules for what that path could be (the explanation of the example is not clear), so I went to the JBehave Javadocs, and this is what I found:

[Screenshot: the JBehave Javadoc page - a list of methods, with no descriptions at all.]

Are you kidding me??? - oh, I already said that.

I am used to Javadocs serving as a definitive specification of what a method does. In contrast, the JBehave methods have no header comments, and so their Javadoc pages have no specs. How is one supposed to know what each method's intended behavior is?

Am I supposed to go and find the unit tests and read them and infer what the intended behavioral rules are? Maybe if I had hours of spare time and that kind of perverse gearhead curiosity I would do that, but I just want to use the runStoriesAsPaths method. An alternative is to dig through examples and infer, but that is guesswork and needlessly time consuming.

Unfortunately, this is a trend today with open source tools: not commenting code. The method name gives me a hint about the method's intended behavior, but it does not fill in the gaps. For example, can a path be a directory? Is the path a feature file? What will happen if there are no paths provided - will an exception be thrown or will the method silently do nothing?

This is trash programming. Methods need human-readable specifications. Agile is about keeping things lean, but zero documentation is not lean - it is incompetent and lazy. A good programmer should always write a method description as part of the activity of writing the method: otherwise, you don't know what your own intentions are: you are hacking, trying this and that until it does something that you want and then hurrying on to the next method. That is what I would expect of a beginner - not an experienced programmer.
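To illustrate what I mean by a human-readable spec, here is the kind of header comment I went looking for. The interface and the behavioral rules below are hypothetical - this is what I would want the docs to say, not what JBehave actually promises:

import java.util.List;

// Hypothetical interface, for illustration only.
public interface StoryRunner {
    /**
     * Runs each story file named in the given list, in order.
     *
     * Each entry must be a classpath-relative path to a single ".story"
     * file; directories and wildcard patterns are not accepted. If the
     * list is empty, the method returns without doing anything. If any
     * path cannot be resolved, an exception naming the offending path
     * is thrown, and no stories are run.
     */
    void runStoriesAsPaths(List<String> storyPaths);
}

Ten lines of comment, and every one of the questions I just listed is answered.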

Yet so many tools today are like this. It used to be that if you used a new tool, you could rely on the documentation to tell you truthful things: if something did not work, you either did not understand the documentation or there was a software bug. Today, the documentation is often incomplete, or just plain wrong: it often tells you that you can do something, but in reality you have to do it in a certain way that is not documented. That is what I found to be the case with the Java plugin for Gradle. Recently I wrote a Java program that took me two hours to write and test (without JUnit or any other tools - just writing some quick test code), and then I spent a whole day trying to get the Gradle Java plugin to do what I wanted. That is not a productivity gain!

Tools that are fragile and undocumented are a disservice to us all. If you are going to write a tool, make sure that the parts that you write and make available work, and are documented, and work according to what the documentation says - and don't require a particular pattern of usage to work.

Please!!!


Saturday, October 25, 2014

Tests Do NOT Define Behavior

Last spring David Heinemeier Hansson, one of the gurus of the Ruby world, set off an earthquake when he published a blog post titled "TDD is dead. Long live testing".

Test driven development (TDD) is one of the sacred cows of certain segments of the agile community. The theory is that,
1. If you write tests before you write behavior, it will clarify your thinking and you will write better code.
2. The tests will expose the need to remove unnecessary coupling between methods, because coupling forces you to write "mocks", and that is painful.
3. When the code is done, it will have a full coverage test suite. To a large extent, that obviates the need for "testers" to write additional (functional) tests.
4. The tests define the behavior of the code, so a spec for the code's methods is not necessary.

Many people in the agile community have long felt that there was something wrong with the logic here. What about design? To design a feature, one should think holistically, and that means designing an entire aspect of a system at a time - not a feature at a time. Certainly, the design must be allowed to evolve, and should not address details before those details are actually understood, but thinking holistically is essential for good design. TDD forces you to focus on one feature at a time. Does the design end up being the equivalent of Frankenstein's monster, with pieces added on and added on? Proponents of TDD say no, because each time you add a feature, you refactor - i.e., you rearrange the entire codebase to accommodate the new feature in an elegant and appropriate manner, as if you had designed the feature and all preceding features together.

That's a lot of rework, though: every time you add a feature, you have to do all that refactoring. Does it slow you down, for marginal gains in quality? Well, that's the central question. It is a question of tradeoffs.

There is another question though: how people work. People work differently. In the sciences, there is an implicit division between the "theorists" and the "experimentalists". The theorists are people who spend their time with theory: to them, a "design" is something that completely defines a solution to a problem. The experimentalists, in contrast, spend their time trying things. They create experiments, and they see what happens. In the sciences, it turns out we need both: without both camps, science stalls.

TDD is fundamentally experimentalism. It is hacking: you write some code and see what happens. That's OK. That is a personality type. But not everyone thinks that way. For some people it is very unnatural. Some people need to think a problem through in its entirety, and map it out, before they write a line of code. For those people, TDD is a brain aneurysm. It is antithetical to how they think and who they are. Being forced to do it is like a ballet dancer being forced to sit at a desk. It is like an artist being forced to do accounting. It is futile.

That is not to say that a TDD experience cannot add positively to someone's expertise in programming. Doing some TDD can help you to think differently about coupling and about testing; but being forced to do it all the time, for all of your work - that's another thing entirely.

Doesn't the Agile Manifesto say, "Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done"?

I.e., don't force people to work a certain way. Let them decide what works best for them. Don't force TDD on someone who does not want to work that way.

But if everyone on a team does TDD, there is consistency, and that is good

The argument is always, "If we all do TDD, then we can completely change our approach as a team: we don't need testers, we don't need to document our interfaces, and we will get better code as a team. So people who can't do TDD really don't fit on our team."

So if Donald Knuth applied to work on your team, you would say, "Sorry, you don't fit in"; because Donald Knuth doesn't do TDD.

What ever happened to diversity of thought? Why has agile become so prescriptive?

Also, many of the arguments for TDD don't actually hold up. #1 above is true: TDD will help you to think through the design. But TDD prevents you from thinking holistically, so one could argue that it actually degrades the design, and constrains the ability that many people have to creatively design complex things. And that's a shame. That's a loss.

#2 about improving coupling is true, but one does not have to do TDD for that. Instead, one can write methods and then attempt to write unit tests for them. The exercise of writing the unit tests will force one to think through the coupling issues. One does not have to do this for every single method - something that TDD requires - one can merely do it for the methods where one suspects there might be coupling issues. That's a lot more efficient.
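As a sketch of that after-the-fact approach (all of the class and method names here are invented for illustration): merely attempting a plain unit test reveals the coupling, and the fix is the same one that TDD would have forced - without doing TDD everywhere.

interface TaxRates { double rateFor(String region); } // hypothetical collaborator

// Before: the dependency is constructed inside the class, so a unit test
// would have to reach a live service (or resort to painful mocking):
class CoupledInvoiceService {
    private final TaxRates rates = new RemoteTaxRates(); // hard-wired coupling

    double total(double subtotal, String region) {
        return subtotal * (1 + rates.rateFor(region));
    }
}

class RemoteTaxRates implements TaxRates { // stands in for a live service
    public double rateFor(String region) {
        throw new UnsupportedOperationException("requires the network");
    }
}

// After: attempting the test exposed the coupling; injecting the dependency
// fixes it, and the test needs only a trivial stand-in - no mocking framework:
class InvoiceService {
    private final TaxRates rates;
    InvoiceService(TaxRates rates) { this.rates = rates; }

    double total(double subtotal, String region) {
        return subtotal * (1 + rates.rateFor(region));
    }

    public static void main(String[] args) {
        InvoiceService svc = new InvoiceService(region -> 0.05); // fixed 5% rate
        if (Math.abs(svc.total(100.0, "VA") - 105.0) > 1e-9)
            throw new AssertionError("expected 105.0");
        System.out.println("test passed");
    }
}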

It can be argued that the enormous number of tests that TDD generates results in less agility - not more. Full coverage tests at an interface level provide plenty of protection against unintended consequences of code changes. For those who use type-safe languages, type safety is also very effective for guarding against unintended consequences during maintenance. One does not need a mountain of unit tests. Type safety is not about productivity: it is about maintainability, and it works.
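To illustrate the maintenance point with a hypothetical fragment: if a quantity that used to be passed around as a bare number is given its own type, the compiler flags every stale call site at build time - something a dynamic language would only reveal at run time, test by test.

// Hypothetical maintenance change: money used to be a bare long (cents);
// wrapping it in a type makes every un-updated call site fail to compile.
final class Cents {
    final long value;
    Cents(long value) { this.value = value; }
}

final class Pricing {
    static Cents applyDiscount(Cents price, int percent) {
        return new Cents(price.value * (100 - percent) / 100);
    }

    public static void main(String[] args) {
        // applyDiscount(1250, 10);  // no longer compiles: int is not Cents
        Cents discounted = applyDiscount(new Cents(1250), 10); // compiler-checked
        System.out.println(discounted.value); // prints 1125
    }
}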

#3 about code coverage is foolish. The fox is guarding the henhouse. One of the things that tests are supposed to check is that the programmer understands the requirements. If the programmer who writes the code also writes the tests, and if the programmer did not listen carefully to the Product Owner, then the programmer's misunderstanding will end up embedded in the tests. This is the test independence issue. Also, functional testing is but one aspect of testing, so we still need test programmers.

One response to the issue about test independence is that acceptance tests will ensure that the code does what the Product Owner wants it to do. But the contradiction there is that someone must write the code that implements the acceptance criteria: who is that? If it is the person who wrote the feature code, then the tests themselves are suspect, because there is a lot of interpretation that goes on between a test condition and the implementation. For example, "When the user enters their name, Then the system checks that the user is authorized to perform the action". What does that mean? The Product Owner might think that the programmer knows what "authorized" means in that context, but if there is a misunderstanding, then the test can be wrong and no one will know - until a bug shows up in production. Having separate people - who work independently and who both have equal access to the Product Owner - write the code and the test is crucial.
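To make that interpretation gap concrete, here is a hypothetical JBehave-style step implementation (the step text is from the example above; the classes and fields are invented). Two programmers could implement the same Then clause in incompatible ways, and the suite would be green either way:

import org.jbehave.core.annotations.Then;

public class AuthorizationSteps {
    // Hypothetical collaborators, assumed to be populated by earlier steps:
    private PermissionChecker permissions;
    private String user;
    private String action;

    @Then("the system checks that the user is authorized to perform the action")
    public void thenUserIsAuthorized() {
        // Reading 1: "authorized" means the user holds this specific permission.
        if (!permissions.hasPermission(user, action))
            throw new AssertionError(user + " lacks permission for " + action);

        // Reading 2 (perhaps what the Product Owner meant): "authorized" means
        // the account is merely active. If that was the intent, this step
        // encodes the wrong rule, every run passes, and no one knows.
    }
}

interface PermissionChecker { // invented for the example
    boolean hasPermission(String user, String action);
}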

I saved the best for last. #4.

Let me say this clearly.

Tests. Do. Not. Define. Behavior.

And,

Tests. Are. A. Horrible. Substitute. For. An. Interface. Spec.

Tests do not define behavior because (1) the test might be wrong, and (2) the test specifies what is expected to happen in a particular instance. In other words, tests do not express the conceptual intention. When people look up a method to find out what it does, they want to learn the conceptual intention, because that conveys the knowledge about the method's behavior most quickly and succinctly, in a way that is easiest to incorporate into one's thinking. If one has to read through tests and infer - reverse engineer - what a method does, it can be time wasting and confusing.
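A small, hypothetical illustration of the difference: the test below pins down what happens for one input, while the header comment states the intention for every input. A reader can absorb the comment in seconds; recovering the same rule from tests alone requires inference across many cases.

// Invented example class.
public final class TextUtil {
    /**
     * Collapses every run of whitespace (spaces, tabs, newlines) to a
     * single space and trims leading and trailing whitespace. Returns
     * the empty string if the input is null.
     */
    public static String normalize(String s) {
        if (s == null) return "";
        return s.trim().replaceAll("\\s+", " ");
    }

    // The test asserts one instance. By itself it cannot tell a reader
    // whether tabs count as whitespace, or what a null input produces:
    public static void main(String[] args) {
        if (!normalize("  a \t b  ").equals("a b"))
            throw new AssertionError("normalize failed");
        System.out.println("test passed");
    }
}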

The argument that one gets from the TDD community is that method descriptions can be wrong. Well, tests can be incomplete, leading to an incorrect understanding of a method's intended behavior. There is no silver bullet for keeping things complete and accurate, and that applies to the tests as well as to the code comments. It is a matter of discipline. But a method spec has a much better chance of being accurate, because people read it frequently (in the form of Javadocs or Ruby docs), and if it is incomplete or wrong people will notice. Missing unit tests don't get noticed.

Conclusion


If people want to do TDD, and it is right for them and makes them productive, then let them do it. But don't force everyone else to do it!

Long live testing!

Wednesday, August 27, 2014

To Be Certified – Or Not?

Recently in the LinkedIn group “Agile”, Alan Moran posted a question, “How valuable is agile certification to you?”

The general consensus seemed to be that certification was helpful in terms of getting a job. For example, Nicolas Umiastowski wrote,
“Certifications are important to prove your skills to recruiters.”

Joseph Percivall wrote,
“I found it to be very valuable to set me apart from other applicants in my job/internship search. It was a talking point in every interview I had. It showed that I wanted to learn more about my field and thrive in it.”

The last sentence is interesting: it implies that certification demonstrates a level of seriousness about one’s work. Indeed, in a recent interview of Elena Yatzeck by this journal, she said, “Cert speaks to the person’s interest in their seriousness in pursuing agile techniques as a professional.”

The primary dissenting view was that certification is a lowest common denominator of knowledge. For example, Paul Oldfield wrote,
“I'm of the opinion that certification is only of value to mediocre people and mediocre organizations. Good people and organizations find each other without help, the really dire of each cannot be helped by certificates.”

Abhijeet Nikte wrote,
“I find it disconcerting that while a bunch of us are talking about the certification and its value, we seem to be in minority, or so I think. I firmly believe that (demonstrable) knowledge is far more important than a certification. However, there are tons of companies out there that place a very high value on certification. There is an (incorrect, in my mind) assumption that if a person is certified so that person must have knowledge. Sad, but true.”

What do CIOs think?

Interestingly, recently there was also a discussion about this topic in the LinkedIn group “Chief Information Officer (CIO) Network”. The discussion was about the IT skills gap, and it generated many posts on the topic of certification. For example, Greg Scott, CTO of InfraSupport, posted this – it’s long, but it is so powerful that I will repeat the entire thing here:
Consider these two hypothetical job descriptions for the same position:
Description #1:
We need an IT resource to finish implementing our ERP system. Skills required: C++, Java, PHP, and excellent communication skills.
Description #2 - same position, same job, same company
We need a motivated individual to finish our partially completed ERP system. Take the bull by the horns, finish building this system, set up a mechanism for ongoing support, and help us transform our company. The system uses C++, Java, and PHP and developers who know their way around these tools will have an advantage. But developers with a demonstrable track record of constant learning will have an even bigger advantage. If you want to take on a challenge and help us transform our company, we want to talk to you.
If you're an IT pro and looking for a job, which one would you go after?

I propose all hiring managers, all HR departments, and everyone everywhere eliminate the word, "resource" when referring to IT professionals. Your doctor is not a resource. Your accountant is not a resource. Your attorney is not a resource. Why are the people on your IT team resources?

Eliminate this word and begin to change your attitude. Change your attitude towards the people on your IT team and you'll begin fostering that culture of constant learning everyone talks about. Begin to change your attitude about your IT team and the people on your IT team will begin to change their attitude about your company.

Have the guts to do this and the skills gap at your company will go away while everyone else tries to figure out your secret. The counter-intuitive result will be, you'll probably make more money than your competition and leave them behind to eat your dust.

The core opinion expressed in this post seems to be that IT people should not be treated as interchangeable “resources”, and that evaluating people based on which certifications they have contributes to that commoditization. Scott seems to contend that “a demonstrable track record of constant learning” is far more important.

Here is another insightful post by Alexander Freund, President & CIO of 4IT Inc.:
Over the course of the past 10 years, I have hired for many IT positions including L1 and L2 support, project engineering, project management, service management, technical sales, and network and server engineering positions. What I have learned is that IT skills (competence in a specific product or area of knowledge) is generally far less valuable than what I call employee skills. We try very hard not to emergency hire to fill a spot, so immediate impact to our team is generally not the goal. So, what are the employee skills I am referring to? For me, there are really only three:

1. Brain power - The person needs to have enough raw brain power to learn and do the job. We readily accept that not everyone has the capacity to be a particle physicist, but continue to believe that we can train anyone to do almost any function in IT. Our experience is this is simply not the case. Find people that can learn, and teach them how to do the job. Even if they come with experience from another firm, they have never seen our processes and work culture.

2. Work Ethic - Work ethic is the true measure of the impact that any employee will eventually make to the TEAM. I consider this to be the skills gap that I encounter the most, and one which in general, can never be fixed.

3. Team player - When I consider the workload faced by most IT departments, it's clear that only cooperative teams working well together can get the work done on time without costly mistakes. Lone wolves, whiners, and poorly behaved team members are just too costly.

This post is again stressing the importance of soft skills – what Freund calls “employee skills” – over acronym skills.

Tim Magnus, an IT consultant, then says,
We have not established a yard stick or even definitions for the foundational skills. When job descriptions focus on transient skills [such as specific languages, tools, and frameworks], we make IT people into transient resources and so we will continue to search for people and fail to find the correct people to do the job. Foundational skills and fundamental problem solving skills are developed and are not picked up overnight.

I will say that these sentiments echo my own feelings and experience. When I was a CTO, Java was in its heyday. When I interviewed technical people for a job, I did not care what Java certifications they had. What I wanted to know was whether they were problem solvers, and whether they were smart. Indeed, I myself did not have any Java certifications, but my book Advanced Java Development was a recommended text for those studying for the Java Architect certification. Would it not have been ironic if I myself had interviewed for a job as a Java architect, and had been turned down because I did not have Java Architect certification?

I personally feel the same way about Agile certifications: that’s why I myself don’t have any. My own feeling is that if someone wants me to be certified in an Agile methodology, then they themselves don’t understand Agile well enough to discern my level of experience with Agile, and therefore I don’t want to work for them. That’s my opinion though: I am certainly serious about my work, so the lack of certification does not indicate a lack of seriousness.

Many people clearly find that certification helps them to focus their learning in their career. As far as focus goes, I shy away from certification because I do not want to be focused: I want to retain the right to think for myself, rather than endorse the opinions that are demanded by a certification. There was one certification that I once considered obtaining: CISSP. I had just written a 600-page book on application security. While taking a practice exam, I discovered that I disagreed with many of the “answers”. In order to pass the exam, I would have had to adopt perspectives that I did not agree with. I stopped studying for the exam and decided not to pursue the certification.

It is also clear that certification is useful for getting a job for many people: that is possibly because HR departments are failing to find the people who have the “natural learner” or “employee” or “foundational” skills that many of the posters to the CIO Network think are much more crucial. It is easier for HR to scan for buzzwords such as CSM than to try to understand someone’s background. That means that if you are hiring for Agile skills, you can’t rely on HR: you need to get involved in the search, and make sure that the best people are not being screened out because they don’t have a checkbox checked.

Monday, August 11, 2014

Why private offices are important for programmers

Around the year 2000, the company that I had co-founded in 1995, Digital Focus, went agile. We adopted eXtreme Programming (XP). We therefore had to undergo our own "agile transformation", to figure out how to adapt all of our processes and infrastructure to support this new way of working. One of the issues that we faced was how to arrange teams.

It is pretty standard nowadays that agile teams are co-located into a bullpen so that they can collaborate easily. A purportedly ideal setup includes lots of whiteboards and a wall for posting the agile stories and other information radiators. This is indeed a nice setup: it is cozy and one can hear conversations that are often relevant. And if you want to talk to someone, you simply stroll over to his or her desk and start talking.

But there is a deep downside to this. In such a setting, distractions are constant. You overhear conversations when you don't want to - often while you are trying to focus on a problem. It is kind of like being in a Starbucks: it is fun, but you will not do your best work there.

I have found that in such settings, people who really need to focus often go home for a day in order to crack a hard problem or to come up with a fresh approach. To really focus, one needs quiet and isolation - like one used to have with a private office.

During the mid 1980s I worked for two compiler development companies. In each case, the teams were co-located in the sense that everyone had an office on the same floor of a small building. Thus, if you wanted to talk to someone, you simply strolled over to their door; if the door was open, you walked in and started talking. But if the door was closed, you knew that they were trying to focus (or were talking on the phone), and you went back to your desk and tried a little later, or perhaps shot them an email saying that you needed to chat.

The disadvantage of this is that you don't have the opportunity to accidentally overhear things that are relevant to your work. At Digital Focus, we solved this by giving each developer their own office, but also having a bullpen right next to those offices. It worked really well.

Unfortunately the use of cubicles and now bullpens for software development is so prevalent that it has set a new standard for the square feet needed per developer, which translates into a direct cost per developer. CFOs will now balk at giving developers private offices - something that was standard practice during the 1980s.

The hidden cost is that we might be losing the best creativity and ideas of developers. In an environment with distractions you never really think deeply: your thoughts can get down to a certain level of depth, but never all the way. In a recent article in the New York Times Sunday Review, "Hit the Reset Button in Your Brain", Daniel Levitin - director of the Laboratory for Music, Cognition and Expertise at McGill University, and the author of “The Organized Mind: Thinking Straight in the Age of Information Overload” - says,

"...the insight that led to them probably came from the daydreaming mode. This brain state, marked by the flow of connections among disparate ideas and thoughts, is responsible for our moments of greatest creativity and insight, when we’re able to solve problems that previously seemed unsolvable."

Collaboration is great; but it is not a silver bullet. People sometimes need to think quietly by themselves. If we deny them that, we are not getting the best parts of their mind.