I have been using go for the past six months, in an effort to learn a new natively compiled language for high-performance applications. I was hoping that go would be it - sadly, it is not.
Go is, frankly, a mess. One of its creators, Ken Thompson of Unix/C fame, called go an "experiment" - IMO, it is an experiment that produced Frankenstein's monster.
It is OO, but has arcane and confusing syntax
Go is object oriented, but unlike most OO languages, the syntax for defining interfaces and concrete types is completely different: one defines an "interface", and separately one defines a struct - and these are quite different things. Also unlike many OO languages, the methods of a concrete type are not defined with the type: they are defined outside the type definition, and can in fact live in any file belonging to the "package" in which the type (struct) is defined. Thus you cannot tell at a glance what a type's methods are. On top of that, there is no syntax for saying "concrete type A implements interface I", so you cannot tell whether a concrete type implements an interface unless you try to compile it and see if you get an error: the rule is that a concrete type implements an interface if it has all of the methods that the interface defines - and yet the concrete type's methods are strewn all over the place. What a mess.
As a result, there is no language-provided declaration of a type network - interface types and the concrete types that implement them. You have to keep track of that on a piece of paper somewhere, or use naming conventions to link them. The reason for this chaos escapes me, as I have not seen any helpful language feature that results from it - you cannot extend types dynamically, so I see no advantage to the forceful decoupling of interface types, concrete types, and the methods that belong to the concrete types. Perhaps this was part of the experiment - and with terrible results.
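To make this concrete, here is a minimal sketch (the Shape and Circle names are mine, invented for illustration): the interface, the struct, and the method are three disconnected declarations, and nothing in the source states that Circle implements Shape.

package main

import "fmt"

// The interface and the struct are declared with entirely different syntax.
type Shape interface {
    Area() float64
}

type Circle struct {
    Radius float64
}

// The method is defined outside the struct, and could live in any file of
// the same package. Circle satisfies Shape only because this method happens
// to exist - there is no "implements" declaration for the compiler to check.
func (c Circle) Area() float64 {
    return 3.14159 * c.Radius * c.Radius
}

func main() {
    var s Shape = Circle{Radius: 2}
    fmt.Println(s.Area())
}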
Its polymorphism is broken
Go lets you define an interface and then define concrete types (structs) that implement that interface (and possibly others). Yet the way this works is very peculiar and is likely to trip up programmers. For example, if you create an instance of a concrete type and then call an interface method on it directly, you get what you expect - the right method for the concrete type is called. But if the call reaches the value through another method, the wrong one might be called - the method for the "abstract" type will be invoked if that is the type through which the call is made. Go does not actually have abstract types, so to create one you have to define a struct and give it a dummy method for each method that you don't want to implement. My point here is that which method runs is statically determined and so depends on the context of the call - and that is very confusing and likely to introduce subtle errors - it defeats most of the value proposition of polymorphism.
You want an example? Try this code:
package main

import "fmt"

type Resource interface {
    getParentId() string
    printParentId()
}

type Dockerfile interface {
    Resource
}

type InMemResource struct { // abstract
}

func (resource *InMemResource) getParentId() string {
    fmt.Println("Internal error - getParentId called on abstract type InMemResource")
    return ""
}

type InMemDockerfile struct {
    InMemResource
    RepoId string
}

func (dockerfile *InMemDockerfile) getParentId() string {
    return dockerfile.RepoId
}

func (resource *InMemResource) printParentId() {
    fmt.Println(resource.getParentId())
}

func main() {
    var curresource Resource = &InMemDockerfile{
        InMemResource: InMemResource{},
        RepoId:        "12345",
    }
    curresource.printParentId()
}
When you run it, you will see that the getParentId method defined by InMemResource is called - instead of the getParentId defined by InMemDockerfile, which is the one that, IMO, should be called, because the object (struct) is actually an InMemDockerfile. Yet if you call curresource.getParentId() directly from the main function, you will get the expected polymorphic behavior.
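For instance, adding this line at the end of main shows the direct interface call dispatching correctly - it prints "12345":

    fmt.Println(curresource.getParentId()) // dispatches to *InMemDockerfile's method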
The reason becomes apparent if you add the following method to the above program - with it, the program works:

func (dockerfile *InMemDockerfile) printParentId() {
    fmt.Println(dockerfile.getParentId())
}

Thus, the original program did not work because one of the methods in the call chain had no implementation on the concrete type (InMemDockerfile) - the promoted method effectively obscured the actual type from the final method in the call sequence. Programmers who are accustomed to dynamic dispatch, as in Java, will find this behavior surprising.
Type assertions affect reference values
Another peculiarity of the go type system is that a comparison of a value with nil can fail (so the value is not nil), yet if you type-assert it to a concrete type and compare with nil again, the comparison can succeed. Here is an example:
var failMsg apitypes.RespIntfTp
...
if failMsg == nil {
    fmt.Println("failMsg is nil")
} else {
    fmt.Println("failMsg is NOT nil")
    var isType bool
    var fd *apitypes.FailureDesc
    fd, isType = failMsg.(*apitypes.FailureDesc)
    if isType {
        if fd == nil {
            fmt.Println("fd is nil!!!!! WTF??")
            if failMsg != nil {
                fmt.Println("And failMsg is still not nil")
            }
        } else {
            fmt.Println("Confirmed: fd is not nil")
        }
    } else {
        fmt.Println("Cast failed: NOT a *apitypes.FailureDesc")
    }
}
The "fd is nil!!!!! WTF??" line executes; draw your own conclusions - but regardless, I expect this unexpected behavior to be the source of a great many bugs in programmers' code.
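The cause, as far as I can tell, is that the interface value still carries a type descriptor even though the pointer inside it is nil. Here is a minimal, self-contained sketch of the same phenomenon (the apitypes types above are from my project; RespIntf and FailureDesc below are hypothetical stand-ins):

package main

import "fmt"

// Hypothetical stand-ins for the apitypes types used above.
type RespIntf interface {
    Reason() string
}

type FailureDesc struct {
    Msg string
}

func (f *FailureDesc) Reason() string { return f.Msg }

func getFailure() *FailureDesc {
    return nil // a typed nil pointer
}

func main() {
    var failMsg RespIntf = getFailure() // interface now holds (*FailureDesc, nil)
    fmt.Println(failMsg == nil)         // false - the interface carries a type
    fd, isType := failMsg.(*FailureDesc)
    fmt.Println(isType)                 // true - the assertion succeeds
    fmt.Println(fd == nil)              // true - the extracted pointer is nil
}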
Its compilation rules are too confining
With C, one compiles to a binary that one can then link with or save somewhere. With go, the binaries are managed "magically" by the compiler, and you have to "install" them. Go's approach tries to make compilation and binary management simple for stupid people - yet anyone using go is not likely to be stupid, and will likely want to decide for themselves how to compile and manage binaries. To get out of the go "box", one has to reverse engineer what the tools do and take control using undocumented features. Nice - not!
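The one escape hatch I found was go build's -o flag, which at least lets you choose where the binary ends up (myproject here is a hypothetical import path):

    go build -o bin/myapp myproject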
Its package mechanism is broken
Go's package rules are so confusing that when I finally got my package structure to compile I quickly wrote the derived rules down, so that I would not have to repeat the trial and error process. The rules, as I found them to be, are:
- Package names can be anything.
- Subdirectory names can be anything - as long as they are all under a directory that represents the project name; that project directory is what must be referenced in an install command. When you refer to a sub-package, you must prefix it with its sub-directory name.
- When referring to a package in an import, prefix it with the project name, which must be the same as the directory name immediately under the src directory.
- Packages must be installed before they can be used by other packages - you cannot build multiple packages at once.
- There must be a main.go file immediately under the project directory. It can be in package "main", as can other files in other directories.
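For what it is worth, here is the kind of layout that these rules produced for me (a sketch of the GOPATH-era convention; the directory and package names are hypothetical):

    src/
        myproject/          (project directory, immediately under src)
            main.go         (package main)
            util/           (imported as "myproject/util")
                helpers.go  (package util)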
Are there other arrangements that work? No doubt - this is simply what I found to work. The rules are very poorly documented, and they might even be specific to the tool (the compiler) - I am not sure, but it seems that way. And here is an interesting blog post about the golang tools.
It is hard to find answers to programming questions
This is partly because of the name, "go" - try googling "go" and see what you get. So you have to search for "golang" instead - but much of the information on go is not indexed as "golang". If someone (like me) writes a blog post about go, he or she will refer to it as go, not as "golang", so the search engines will not surface it.
Another reason is that the creators of go do not seem to recognize that it is their responsibility to be online. Creators of important tools nowadays go online and answer questions about their language, and that results in a wealth of information that helps programmers get answers quickly; with go, one is lucky to find answers at all.
The Up Side
One positive thing that I did find was that go is very robust under refactoring. I performed major reorganizations of the code several times, and each time, once the new code compiled, it worked without a single error. This is a testimony to the idea that type safety has value, and go has very robust type safety. I would venture to say that for languages such as go, unit testing is largely a waste of time: I found that a full suite of behavioral tests was sufficient, because refactoring never introduced a single error. This is very different from languages such as Ruby, where refactoring can cause a large number of errors because of the lack of type safety. For such languages, comprehensive unit tests are paramount - and that places a large cost on the flexibility of the code base, because of the effort required to maintain so many unit tests.
Summary
When I finish the test project that I have been working on, I am going to go back to other languages, or perhaps explore some new ones. Among natively compiled languages, the "rust" language intrigues me. I also think that C++, which I used a lot many years ago, deserves another chance - but with some discipline, to use it in a way that produces compact and clear code, because C++ gives you the freedom to write horribly confusing and bloated code. I am not going to use go for any new projects, though - it has proved to be a terrible language for so many reasons.