Rob Pike you sick son of a bitch

Too close for missiles, switching to guns.

Golang is an interesting experiment in designing and maintaining a language that is purposefully minimized in scope. This is at odds with a lot of modern languages. And perhaps, impossible. But I am loving it.


Patterns to write better go code.


My favorite go patterns involve testing. Some of these are so good they are influencing how I write code in other languages.

A quick recap on the builtin go test command:

  • go test filters files with the suffix _test.go (go build ignores _test.go files btw).
  • Tests are just functions in these files which follow two rules:
    1. The name of the test function must start with Test.
    2. They must take one argument of type *testing.T, a type injected by the testing package itself, to provide ways to print, skip, and fail the test.
go test -cover ./...

run tests on all packages in the directory, recursively, with code coverage analysis
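To make the recap concrete, here's a minimal sketch of a test file (the Add function and file contents are hypothetical; this would live in something like add_test.go):

```go
package main

import "testing"

// Add is a stand-in function under test (hypothetical).
func Add(a, b int) int { return a + b }

// TestAdd follows the two rules: the name starts with Test and it
// takes a single *testing.T, which provides Fatalf/Skip/etc.
func TestAdd(t *testing.T) {
	if got := Add(2, 3); got != 5 {
		t.Fatalf("Add(2, 3) = %d, want 5", got)
	}
}
```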

Ok on to some patterns. The first is the legendary “accept interfaces, return structs” principle. This one took me a bit to get my head around coming from Java-land where it is very easy to do the opposite.

Let’s say you are writing a function and you know you are going to depend on a certain client for a service. The client exposes 10 or so methods used to read and write to the service. Your function could take a direct dependency on this client value. “I only accept this struct!”. This keeps things pretty straightforward, but there are costs in the long term, and I think in the short term as well.

What happens when you want to write a unit test for your new function? You will probably have to mock what this client dependency does in order to test certain scenarios. Given it has 10 or so methods and might be a third-party dependency whose implementation you don’t fully understand, this could get hard fast, and in reality we just won’t write that test.

So what if, instead of depending on the struct directly, the new function depends on a minimal interface which just so happens to be implemented by the client? The new function only uses 2 methods of the client, so the interface only defines those 2 methods. Now when we write that unit test, mocking is simply returning the exact data we want from these calls; couldn’t be simpler. So the short-term benefit is that we will actually write tests, because the pattern allows us to write easy and maintainable unit tests.

A more long-term benefit is that the separation allows for easier refactoring of the dependency. If one of those other 8 methods changes, our new function doesn’t care. This benefit is truly awesome in large monorepo setups.

The “return structs” half of the principle was not as clear to me. If a function returns an interface, the caller may in fact use that interface since it is exported. This is not great for your code in the future, since any change to that interface is now a breaking change (adding, removing, or tweaking a method all break the interface contract for the caller). The caller could use your function and immediately cast to their own interface (I think this is related to covariant types), but the chances of them actually doing that are low. Better to return the struct, which makes it easier on everyone to write good code.
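A minimal sketch of the whole principle (all names here are hypothetical): the function accepts a tiny interface the real client happens to satisfy, returns a concrete struct, and the test fake falls out for free.

```go
package main

import "fmt"

// fullClient stands in for a big third-party client (all names hypothetical).
type fullClient struct{}

func (c *fullClient) Get(key string) (string, error) { return "real-" + key, nil }
func (c *fullClient) Put(key, value string) error    { return nil }

// ...imagine 8 more methods

// getter is the minimal interface our function actually needs; *fullClient
// happens to satisfy it without knowing about it.
type getter interface {
	Get(key string) (string, error)
}

// Description is the concrete struct we return.
type Description struct {
	Key, Value string
}

// Describe accepts the small interface and returns a struct.
func Describe(g getter, key string) (Description, error) {
	v, err := g.Get(key)
	if err != nil {
		return Description{}, err
	}
	return Description{Key: key, Value: v}, nil
}

// fakeGetter makes Describe trivial to unit test: just return canned data.
type fakeGetter struct{ value string }

func (f fakeGetter) Get(key string) (string, error) { return f.value, nil }

func main() {
	d, _ := Describe(&fullClient{}, "greeting")
	fmt.Println(d.Value) // real-greeting
}
```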

Once you have bought into the “accept interfaces, return structs” principle, a few other patterns become more powerful. The “test tables” pattern for instance makes it super easy to crank out a bunch of different scenarios for a function. The gotests + moq code generation tools are a potent combo to generate a bunch of the boilerplate for this pattern.

tests := []struct {
    name    string
    graph   *lnrpc.ChannelGraph
    request NodesByDistanceRequest
    want    []Node
}{
    {
        name: "identity",
        graph: &lnrpc.ChannelGraph{
            Nodes: []*lnrpc.LightningNode{
                {
                    PubKey:     rootPubkey,
                    Alias:      rootAlias,
                    LastUpdate: uint32(rootUpdatedTime(t).Unix()),
                },
            },
        },
        request: NodesByDistanceRequest{
            MinUpdated: rootUpdatedTime(t).Add(-time.Hour * 24),
            Limit:      1,
        },
        want: []Node{
            {
                pubkey:  rootPubkey,
                alias:   rootAlias,
                updated: rootUpdatedTime(t),
            },
        },
    },
}

for _, tc := range tests {
    app := App{
        Infoer:  fakeInfoer{info: &lnrpc.GetInfoResponse{IdentityPubkey: rootPubkey}},
        Grapher: fakeGrapher{graph: tc.graph},
        Log:     log.New(ioutil.Discard, "", 0),
        Verbose: false,
    }

    nodes, err := NodesByDistance(app, tc.request)
    if err != nil {
        t.Fatal("error calculating nodes by distance")
    }

    if !reflect.DeepEqual(tc.want, nodes) {
        t.Fatalf("%s nodes by distance are incorrect\nwant: %v\ngot: %v", tc.name, tc.want, nodes)
    }
}

the test table pattern


Go doesn’t provide enumerations out of the box, but there are some helpful patterns. None match full-fledged enums like in Rust.

// AccessLevel is the level of access for a Dataset
type AccessLevel string

// Access levels
const (
        SharedAccessLevel     AccessLevel = "shared"
        RestrictedAccessLevel AccessLevel = "restricted"
)

AccessLevel showing off the type definition enum pattern

Nice way to reference some constants, but it doesn’t have strong type safety! Any string can be converted to an AccessLevel, and AccessLevel still has the standard "" zero value (you could iron that out with UnknownAccessLevel AccessLevel = "").
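A quick sketch of that weakness (the valid method is a hand-rolled guard I made up, not part of any standard pattern):

```go
package main

import "fmt"

type AccessLevel string

const (
	UnknownAccessLevel    AccessLevel = ""
	SharedAccessLevel     AccessLevel = "shared"
	RestrictedAccessLevel AccessLevel = "restricted"
)

// valid is a hand-rolled membership check; the compiler won't enforce this.
func (a AccessLevel) valid() bool {
	switch a {
	case SharedAccessLevel, RestrictedAccessLevel:
		return true
	}
	return false
}

func main() {
	bogus := AccessLevel("superuser") // compiles happily: any string converts
	fmt.Println(bogus.valid())             // false
	fmt.Println(SharedAccessLevel.valid()) // true
}
```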

It also doesn’t have the exhaustive matching that other languages support. That will probably never change as long as Go sticks to its super minimal scope promise. Bit of a double-edged sword for me, since I love the match pattern but respect the Go principle. Maybe an opportunity to explore a cool module like exhaustive, which can layer on the ability at build time?

sentinel and constant errors

“Sentinel” errors are cool, “constant” errors are cooler.

The ever-present Go convention is to return an error interface value as the last item in a function’s return tuple. Originally, this was all the language really supported. What this meant for callers of a function is that all they know is whether an error occurred; there is no more information to base logic off of (e.g. if it’s this type of error, do this instead).

This is where the “sentinel” pattern emerged. A package could export a known error value and callers could check to see if a returned error is that value.

// package exports sentinel error value
var ErrNotFound = errors.New("not found")

// caller checks a returned err to see if it is the sentinel value
if err == ErrNotFound {
    // something wasn't found
}

sentinel error pattern

In Go 1.13, the “wrapping” error convention was hardened with a few new standard library additions. If an error wraps another, it can implement an Unwrap method returning the underlying error. The new standard library errors.Is and errors.As functions use this convention to examine an error and determine if a value or type respectively appears anywhere in an error chain.

if errors.Is(err, ErrNotFound) {
    // something wasn't found
}

the sentinel pattern with error wrapping convention

A sentinel error extends a package’s interface, and if that sentinel wraps other errors, those become part of the package interface too. So they shouldn’t be added without some thought.

There is a pretty fatal flaw in the sentinel error pattern though. Sentinels are not constants, so any package which imports them can change them…which would really mess with other packages depending on them. This leads us to the “constant” error pattern.

type RewardError string

func (e RewardError) Error() string { return string(e) }

const (
	// ErrForbiddenWithdrawal occurs when a user is not able to get a reward from a reward pool.
	ErrForbiddenWithdrawal RewardError = "forbidden_withdraw"
)

the constant error pattern

The constant error pattern requires a little more boilerplate: a new type which implements the error interface. A constant string can then satisfy the type, and is no longer mutable like a sentinel error. The boilerplate is necessary because the errors.errorString struct (what powers errors.New()) is not a constant expression; it is not known at compile time.

project layout

I think there are two good choices for project layout.

  1. The top level package is main, can bury the library package in a nested directory (e.g. mango/mango).
  2. The top level package is the library’s name (e.g. mango), can bury the main package in the command directory (e.g. cmd/mango) following convention.

Option #1 makes sense for an application, the user will interact with the executable and having it at the top level interface keeps things clean. Option #2 is much better for a library, since #1 would introduce needless/confusing nesting (mango/mango).


Go things I always forget.

go get, build, and install

  • get manages dependencies
  • build focuses on building executable for distribution
    • build writes the resulting executable to an output file named after the first source file
  • install is for building and installing on local environment
    • hard to use go install in 1.18 with replace directives, issue


  • modules cannot be installed, only packages
  • go get / go install commands were fundamentally changed to support go modules
  • modules determine dependencies used by packages
    • a package lives in a module, but also can be depended on by packages in other modules


  • Go doesn’t have a ternary operator!
for i := 0; i < 10; i++ {
    sum += i
}

for sum < 1000 {
    sum += sum
}

for loops everywhere, for is go’s while

// range returns index and value of a slice
for i, v := range pow {
    fmt.Printf("2**%d = %d\n", i, v)
}

// can use on a string to iterate runes
for i, c := range "abc" {
    fmt.Println(i, " => ", string(c))
}



concurrency

The world is made up of concurrent agents, but our minds are sequential. Language concurrency features try to bridge this gap; they are not about performance (e.g. parallelism).

  • goroutine is a coroutine
  • Channels are a typed conduit through which you can send and receive values with the channel operator, <-
  • send/receive as well as synchronize (block on sends and receives)
func main() {
	ch := make(chan int, 2)
	ch <- 1
	ch <- 2
}

channels can buffer events; if a channel has no buffer, the sending routine blocks until another picks up the message

// function literal
go func() {
	for i := 0; i < 10; i++ {
		fmt.Println(<-c)
	}
	quit <- 0
}()
go routines go hand-in-hand with channels which allow them to synchronize

func fibonacci(n int, c chan int) {
	x, y := 0, 1
	for i := 0; i < n; i++ {
		c <- x
		x, y = y, x+y
	}
	close(c)
}

func main() {
	c := make(chan int, 10)
	go fibonacci(cap(c), c)
	// range keeps pulling off events until the close signal
	for i := range c {
		fmt.Println(i)
	}
}

the close signal is first class for a channel, but doesn’t have to be used

func fibonacci(c, quit chan int) {
	x, y := 0, 1
	for {
		select {
		case c <- x:
			x, y = y, x+y
		case <-quit:
			return
		}
	}
}

func main() {
	c := make(chan int)
	quit := make(chan int)
	go func() {
		for i := 0; i < 10; i++ {
			fmt.Println(<-c)
		}
		quit <- 0
	}()
	fibonacci(c, quit)
}

select can listen on multiple channels

  • the default case can be used when no other select cases (channels) have events ready to go


  • goroutines run until the function is complete and then shut down and release all resources automatically
  • goroutines hold onto stack and heap variable references (keeping them from being garbage collected)
  • if main routine exits, it just kills running goroutines
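A common way to avoid the “main exits and kills everything” problem is sync.WaitGroup. A sketch (squares is a made-up example):

```go
package main

import (
	"fmt"
	"sync"
)

// squares computes results concurrently and waits for every goroutine;
// without the WaitGroup, main could exit and kill them mid-flight.
func squares(n int) []int {
	var wg sync.WaitGroup
	results := make([]int, n)
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			results[i] = i * i
		}(i)
	}
	wg.Wait() // block until all goroutines call Done
	return results
}

func main() {
	fmt.Println(squares(3)) // [0 1 4]
}
```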

Panic vs. error?

Panic is a built-in function that stops the ordinary flow of control and begins panicking. When the function F calls panic, execution of F stops, any deferred functions in F are executed normally, and then F returns to its caller. To the caller, F then behaves like a call to panic. The process continues up the stack until all functions in the current goroutine have returned, at which point the program crashes. Panics can be initiated by invoking panic directly. They can also be caused by runtime errors, such as out-of-bounds array accesses.

  • calling panic in a goroutine will kill the whole program unless it’s recovered in that goroutine
  • panics are nuclear for a program
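A deferred recover is the escape hatch when you do need to contain a panic. A sketch (safeDiv is a made-up example):

```go
package main

import "fmt"

// safeDiv converts a panic (divide by zero) back into an error via recover.
func safeDiv(a, b int) (result int, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered: %v", r)
		}
	}()
	return a / b, nil
}

func main() {
	_, err := safeDiv(1, 0)
	fmt.Println(err)
}
```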


context

First-class citizen for sending cancel signals between goroutines.

  • Background, mainly used in the main function, initialization, and test code, is the top-level Context of the tree structure, the root Context, which cannot be canceled.

slices and arrays

  • array’s size is fixed; its length is part of its type ([4]int and [5]int are distinct, incompatible types)
  • slice type is an abstraction built on top of Go’s array type
    • slice is not an array. A slice describes a piece of an array.
  • Arrays are not often seen in Go programs because the size of an array is part of its type, which limits its expressive power
// Array literal
b := [2]string{"Penn", "Teller"}

// Have the compiler do the counting
b := [...]string{"Penn", "Teller"}

// Slice literal
letters := []string{"a", "b", "c", "d"}

// make function
s := make([]byte, 5)

// append slice to slice
s3 := append(s2, s0...)

// slice a slice
// 0 <= low <= high <= cap(a)

// pass pointer for recursive append
backtrack(candidates, target, current, &combinations)

func backtrack(candidates []int, target int, current []int, combinations *[][]int) {
    *combinations = append(*combinations, tmp)
}

// copy slice
tmp := make([]int, len(current))
copy(tmp, current)

// delete index
a = append(a[:i], a[i+1:]...)
  • zero value of a slice is nil. The len and cap functions will both return 0 for a nil slice.
    • compared to an array, whose zero value is a usable array of zero-valued elements
  • A slice cannot be grown beyond its capacity.
    • append must be used to grow a slice
  • [:] is an easy way to make a slice from an array
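A few of these bullets, sketched:

```go
package main

import "fmt"

// nilSliceFacts: the zero value of a slice is nil, with len and cap both 0.
func nilSliceFacts() (bool, int, int) {
	var s []int
	return s == nil, len(s), cap(s)
}

// sliceOfArray: [:] makes a slice sharing the array's storage, so writes alias.
func sliceOfArray() int {
	a := [4]int{1, 2, 3, 4}
	sl := a[:]
	sl[0] = 99 // writes through to a
	return a[0]
}

func main() {
	isNil, l, c := nilSliceFacts()
	fmt.Println(isNil, l, c)    // true 0 0
	fmt.Println(sliceOfArray()) // 99
}
```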


sorting

  • the built-in sort library takes an interface
// sort the characters of a string

// create a type which implements the sort interface of Less, Swap, and Len
type sortRunes []rune

func (s sortRunes) Less(i, j int) bool {
    return s[i] < s[j]
}

func (s sortRunes) Swap(i, j int) {
    s[i], s[j] = s[j], s[i]
}

func (s sortRunes) Len() int {
    return len(s)
}

func SortString(s string) string {
    r := []rune(s)
    // passing the sort type to Sort
    sort.Sort(sortRunes(r))
    return string(r)
}

func main() {
    w1 := "bcad"
    w2 := SortString(w1)
    fmt.Println(w2) // "abcd"
}



maps

// with make
m := make(map[string]int)
// literal
m = map[string]int{}

// check existence
i, ok := m["route"]
// more compressed
if val, ok := dict["foo"]; ok {
    // do something here
}

// maps can be used for sets
m := make(map[rune]bool)

// iterate over keys and values
for k, v := range m {
    fmt.Printf("key[%v] value[%v]\n", k, v)
}
// or just keys
for k := range m {
    fmt.Printf("key[%v]\n", k)
}

Map values are not addressable, which causes non-obvious issues when trying to assign to a field of a struct in a map of structs. The easiest workaround seems to be to use a map of pointers to structs.
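A sketch of the problem and two workarounds (the point type is made up):

```go
package main

import "fmt"

type point struct{ x, y int }

// copyBack works around non-addressable map values: copy, modify, store back.
func copyBack() int {
	m := map[string]point{"a": {1, 2}}
	// m["a"].x = 10 // compile error: cannot assign to struct field in map
	p := m["a"]
	p.x = 10
	m["a"] = p
	return m["a"].x
}

// viaPointer stores pointers, so the pointed-at struct can be mutated in place.
func viaPointer() int {
	m := map[string]*point{"a": {1, 2}}
	m["a"].x = 10
	return m["a"].x
}

func main() {
	fmt.Println(copyBack(), viaPointer()) // 10 10
}
```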

strings and runes

  • Rob Pike’s blog post on strings
  • a string holds arbitrary bytes. It is not required to hold Unicode text, UTF-8 text, or any other predefined format. As far as the content of a string is concerned, it is exactly equivalent to a slice of bytes.

Characters in Go are called runes, but they are not the fixed-width characters (aka 2 bytes) of old. They try to cover up some of the ambiguity of UTF-8 (a variable number of bytes per character).

  • In Go rune type is not a character type, it is just another name for int32
  • indexing a string yields its bytes, not its characters: a string is just a bunch of bytes

The most deterministic way to iterate per “character” is to use range to get the runes (plus each rune’s beginning byte index). ASCII is a single-byte encoding, so if it can be assumed, more general slicing works.
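A sketch showing why range is the safe way to walk “characters”: with multi-byte UTF-8, the byte indexes skip values (runeIndexes is a made-up helper):

```go
package main

import "fmt"

// runeIndexes collects the starting byte index of each rune; with
// multi-byte UTF-8 characters the indexes skip values.
func runeIndexes(s string) []int {
	var idx []int
	for i := range s {
		idx = append(idx, i)
	}
	return idx
}

func main() {
	fmt.Println(len("héllo"))         // 6 bytes: é is 2 bytes in UTF-8
	fmt.Println(runeIndexes("héllo")) // [0 1 3 4 5]
}
```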

// use of the rune type
m := make(map[rune]int)

// drop the last byte of a string
s = s[:len(s)-1]

// have to convert a byte to string in order to append
s = s + string(c)


types

  • go spec
    • A type determines a set of values together with operations and methods specific to those values
    • underlying type
    • Predeclared types, defined types, and type parameters are called named types
      • unnamed types are composite types defined by a type literal
    • type parameters came with generics
// unnamed type
var x struct{ I int }

// named type
type Foo struct{ I int }
var y Foo

named type

type person struct {
    name string
    age  int
}

p := person{name: name}


type (
	Name = string
	Age  = int
)

alias declaration binds an identifier to the given type

  • type definitions may be used to define different boolean, numeric, or string types and associate methods with them
  • can convert between a defined type and its underlying type (e.g. a defined string type and string) at zero cost (because they are the same underlying data), but need to be explicit about it
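A sketch of explicit, zero-cost conversion between a defined type and its underlying type (Name, greet, and Shout are made up):

```go
package main

import "fmt"

// Name is a defined type whose underlying type is string.
type Name string

// methods can hang off the defined type
func (n Name) Shout() string { return string(n) + "!" }

func greet(s string) string { return "hi " + s }

func main() {
	n := Name("gopher")
	// conversions are explicit but free at runtime: same representation
	fmt.Println(greet(string(n)))      // hi gopher
	fmt.Println(Name("world").Shout()) // world!
}
```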

type assertion

t := i.(T)

panics if i is not a T, converts to T if it is

  • *T is not T


interfaces

  • An interface type defines a type set. A variable of interface type can store a value of any type that is in the type set of the interface.
  • Duck typing is used to determine if a type implements an interface
  • implementing
  • type vs value


Go lets you express behavioral polymorphism with interfaces, but until generics land, you essentially can’t express state polymorphism. So don’t try. Seriously. Whatever hacks you have to do with interface{} or reflect or whatever are worse than the disease. The empty interface{} and package reflect are tools of last resort, for when there’s literally no other way to solve your problem. In practice, that means in library code that needs to operate on types that are unknown at compile time. In application code, by definition, you know the complete set of types you’ll ever have to deal with at compile time. So there’s almost never a reason to use either of these hacks. You can always write type-specific versions of everything you need. Do that.

In the above quote, behavioral ≈ subtyping and state ≈ parametric. I think the heap library is a good example of library code that doesn’t know the type it will be operating on.

  • interfaces let us capture the common aspects of different types and express them as methods
  • go’s current generics support helps data structures (like a heap) accept and produce a static type
    • there isn’t a ton of gain for the builtin data structures slices and maps which could already guarantee types
  • generics are still limited to functions, need field support for some use cases (maybe 1.21?)
    • can’t make a function which acts on data types which have a shared set of fields, would need to expose with getter method
// SumIntsOrFloats sums the values of map m. It supports both int64 and float64
// as types for map values.
func SumIntsOrFloats[K comparable, V int64 | float64](m map[K]V) V {
    var s V
    for _, v := range m {
        s += v
    }
    return s
}

example of a union of 2 types

  • Specify for the V type parameter a constraint that is a union of two types: int64 and float64. Using | specifies a union of the two types, meaning that this constraint allows either type. Either type will be permitted by the compiler as an argument in the calling code.
  • docs
type Number interface {
    int64 | float64
}

a type constraint

make vs. new

  • slice, map and chan are data structures. They need to be initialized, otherwise they won’t be usable.
  • make only works on slice, map, or chan
  • make passes back the value, not a pointer like new
p := new(chan int)   // p has type: *chan int
c := make(chan int)  // c has type: chan int

// make only inits the top level data structure
cache := make([]map[int]int, len(nums))
for i := range cache {
	cache[i] = make(map[int]int)
}
  • could use new to create pointers to an already allocated data structure
  • the tilde ~ operator: in a constraint, ~T matches any type whose underlying type is T
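The tilde shows up in constraints: ~string means “any type whose underlying type is string”. A sketch (Stringish, ID, and lengths are made up):

```go
package main

import "fmt"

// Stringish matches any type whose underlying type is string (note the ~).
type Stringish interface {
	~string
}

// ID is a defined type; it satisfies Stringish only because of the tilde.
type ID string

// lengths works generically over any string-ish slice.
func lengths[T Stringish](vals []T) []int {
	out := make([]int, len(vals))
	for i, v := range vals {
		out[i] = len(v)
	}
	return out
}

func main() {
	fmt.Println(lengths([]ID{"a", "abc"})) // [1 3]
}
```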


  • create a pointer to a new, zeroed value with new(T)
  • type *T is a pointer to a T value
  • zero value is nil
  • & operator generates a pointer to its operand
  • * operator denotes the pointer’s underlying value (dereference)
i := 42
p := &i
fmt.Println(*p) // read i through the pointer p
*p = 21         // set i through the pointer p

// pointers to builtins (e.g. slices) need to be de-referenced before using syntax (unlike normal structs)
(*traversal)[level] = append((*traversal)[level], root.Val)

pointers in action

  • syntactic sugar: to access the field X of a struct when we have the struct pointer p, we could write (*p).X. However, that notation is cumbersome, so the language permits us instead to write just p.X, without the explicit dereference.

methods (receivers)

  • a method is a function with a receiver argument
type Vertex struct {
	X, Y float64
}

func (v Vertex) Abs() float64 {
	return math.Sqrt(v.X*v.X + v.Y*v.Y)
}

can access values of v, but can’t change them because the receiver is passed by value

func (v *Vertex) Scale(f float64) {
	v.X = v.X * f
	v.Y = v.Y * f
}

can access values of v and change them

  • syntactic sugar on calls
  • convention doesn’t mix value and pointer receivers on a given type
  • gets interesting with slice methods since value could work if modifying the underlying array, but won’t work if modifying the slice header
  • some trickiness when stored in an interface and go determining a type’s method set
type Foo interface {
	foo()
}

type Bar struct{}

func (b Bar) foo() {}

func main() {
	// type pointer to Bar (*Bar)
	var fooPtr Foo = &Bar{}
	fooPtr.foo() // works, go dereferences the pointer to reach the value receiver

	// type Bar
	var fooVal Foo = Bar{}
	fooVal.foo() // works
}

value receiver

type Foo interface {
	foo()
}

type Bar struct{}

func (b *Bar) foo() {}

func main() {
	// type pointer to Bar (*Bar)
	var fooPtr Foo = &Bar{}
	fooPtr.foo() // works

	// type Bar
	// var fooVal Foo = Bar{}
	// DOES NOT WORK! the interface's value is not addressable, go can't take a pointer for the method receiver
}

pointer receiver
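Back to the slice trickiness: a value receiver can mutate elements of the shared underlying array, but any change to the slice header itself (like append) is lost when the method returns. A sketch (the ints type is made up):

```go
package main

import "fmt"

type ints []int

// value receiver: can modify the shared underlying array...
func (s ints) zeroFirst() {
	if len(s) > 0 {
		s[0] = 0
	}
}

// ...but append only changes the receiver's copy of the slice header
func (s ints) appendOne() {
	s = append(s, 1) // lost when the method returns
}

// pointer receiver is required for the header change to stick
func (s *ints) appendOnePtr() {
	*s = append(*s, 1)
}

func main() {
	s := ints{9, 9}
	s.zeroFirst()
	s.appendOne()
	fmt.Println(s) // [0 9]: the element change stuck, the append didn't
	s.appendOnePtr()
	fmt.Println(s) // [0 9 1]
}
```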

nil receiver

16:00:42 <yonson> whats the reason for checking if `m != nil` in this case? https://play.golang.org/p/zGoRwTc3g77
16:03:26 <fizzie> It's not entirely uncommon to make methods callable on a nil receiver.
16:03:32 <fizzie> All proto accessors do that, for example.
16:04:02 <fizzie> And of course if you want to do that, you need to have a `m != nil` check in order for the `m.getUserFn` access not to panic.
16:05:28 <b0nn> hmm, you shouldn't be able to call that function if m is nil
16:05:44 <fizzie> No, you're perfectly able to call that method even if m is nil.
16:06:10 <fizzie> As long as you have a value of type `*userTeamMock`, you can call the method on it.
16:07:30 <fizzie> https://play.golang.org/p/CNRLk7yEEt3 and so on.
16:08:25 <fizzie> I mean, you can certainly argue making methods not panic on a nil pointer receiver is a bad idea and you shouldn't do it, if that's what you meant. But the language doesn't have any rule against it.

some real life IRC on https://play.golang.org/p/zGoRwTc3g77

package main

import "fmt"

type example struct {
	i int
}

func (e *example) foo() int {
	if e == nil {
		return 123
	}
	return e.i
}

func main() {
	var e *example
	fmt.Println(e.foo()) // returns 123
}


embedding

  • embedding interfaces in an interface creates a union of their method sets. Only interfaces can be embedded within interfaces.
  • struct embedding is less straightforward
  • embedding the structs directly avoids bookkeeping of “raising” methods to the outer type
  • to refer to an embedded field directly, the type name of the field, ignoring the package qualifier, serves as a field name
  • Embedding types introduces the problem of name conflicts but the rules to resolve them are simple. First, a field or method X hides any other item X in a more deeply nested part of the type. If log.Logger contained a field or method called Command, the Command field of Job would dominate it.
  • embedding interface in a struct
    • can swap out implementation of a struct at creation time by passing a different type


errors

  • best to deal with an error where it occurred (zen of go)
  • return the “zero value” of structs alongside an error
// the error interface
type error interface {
    Error() string
}

// errorString is a trivial implementation of error.
type errorString struct {
    s string
}

func (e *errorString) Error() string {
    return e.s
}

// New returns an error that formats as the given text.
func New(text string) error {
    return &errorString{text}
}

this type is what errors.New() builds

fmt.Errorf("error parsing something: %w", err)

wrap (annotate) errors with %w to see where they come from, easier to debug

  • use %w over %v

The pattern in Go is for functions to return the error interface, not a concrete struct, which goes against the “accept interfaces, return structs” idiom. Why is this? The key is that an interface value holds a concrete value and a concrete type, both of which have to be nil for the interface to be nil. If a function returns a concrete error type instead of the error interface, a nil concrete value assigned to an error interface produces a non-nil interface. This “breaks the chain” of bubbling up errors and checking if err != nil.

type MyError string

func (e MyError) Error() string { return string(e) }

func f() *MyError {
	return nil
}

// returns an interface with type *MyError and value nil
func g() error {
	return f()
}

func main() {
	x := g()
	if x == nil {
		fmt.Println("x is nil")
	}
}

this won’t print anything; the interface’s concrete type isn’t nil

  • Inspect error type when need to pull more info out of the error.
if e, ok := err.(net.Error); ok && e.Timeout() {
	// it's a timeout, sleep and retry
}

type assertion to check if error of certain type

a type switch can be used for multiple type assertions

  • As and Is were introduced in go 1.13 to help examine wrapped errors.
  • As is for error types
  • Is is for sentinel errors
err := f()
if errors.Is(err, ErrFoo) {
	// you know you got an ErrFoo
	// respond appropriately
}

var bar *BarError
if errors.As(err, &bar) {
	// you know you got a BarError
	// bar's fields are populated
	// respond appropriately
}

packages and modules

  • A package is a collection of source files in the same directory that are compiled together. Functions, types, variables, and constants defined in one source file are visible to all other source files within the same package. Exposed functions and types make up the package’s interface.
  • a module provides dependency management for one or more packages, the packages are released together.
  • the import syntax imports packages not modules.
  • go.mod implicitly includes all packages under its directory (with some basic exception rules)
  • go.mod does not include modules under its directory
  • A go.mod is like its own little GOPATH. There is no implicit reference to other nearby modules. In particular being in one repo does not mean that they all move in lock step and always refer to the code from the same commit. That’s not what users will get either.
  • multi-module repos
    • Each module has its own version information. Version tags for modules below the root of the repository must include the relative directory as a prefix.
  • the multimodule-monorepo pattern:
    • root go.mod requires submodule at a published version (this is the version any consumers would get too if they imported the root which is unlikely in monorepo world)
    • root go.mod replaces the requires (replace directive requires a require directive) with the local filesystem so everything is on same hash again
    • submodules have their own go.mod’s so their own dependency trees, they should probably be kept very light
  • resolving a package to a module
  • workspace mode proposal
    • modules are not implicitly imported even if they are “submodules”
  • first statement in a Go source file must be package name. Executable commands must always use package main.
// Package sort provides primitives for sorting slices and user-defined
// collections.
package sort

package level comment

  • If your module depends on A that itself has a require D v1.0.0 and your module also depends on B that has a require D v1.1.1, then Go Modules would select v1.1.1 of dependency D. This selection of v1.1.1 remains consistent even if some time later a v1.2.0 of D becomes available. (Like Gradle’s default)

  • If later an indirect dependency is removed, Go modules will still keep track of the latest not-greatest version. In other words, if we were to remove from our module the dependency B containing a require D v1.1.1 but keep dependency A with a require D v1.0.0, then Go modules would not fallback to v1.0.0 but instead keep v1.1.1 of D.

  • go list -m all – list all modules including indirect with version

  • exclude and replace directives only operate on the current main module. exclude and replace directives in modules other than the main module are ignored when building the main module. The replace and exclude statements, therefore, allow the main module complete control over its own build, without also being subject to complete control by dependencies.

  • // indirect dependencies are added to go.mod to provide 100% reproducible builds and tests by recording precise dependency information.

  • A replace directive replaces the contents of a specific version of a module, or all versions of a module, with contents found elsewhere. The replacement may be specified with either another module path and version, or a platform-specific file path.

    • make use of replace directives. Replace directives allow neighboring modules to import each other’s code without first publishing and remotely fetching it. However, they aren’t the most robust solution because they can be tricky to maintain (e.g. certain commands such as go list won’t work as expected) and require a decent amount of boilerplate (e.g. replace directives are ignored outside of the current module being run/developed in, so you’ll need to repeat a replace directive in every go.mod in the repo where it’s relevant).
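A sketch of the multimodule-monorepo bullets above as a root go.mod (module paths and versions are hypothetical):

```
// go.mod at the repo root (module paths hypothetical)
module example.com/mono

go 1.18

require example.com/mono/submodule v1.2.3

// point the require at the local checkout so everything builds from the same commit
replace example.com/mono/submodule => ./submodule
```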
// main.go
package main

import "fmt"

func main() {
    fmt.Println("Hello, world.")
}
executable commands must always use package main and have main() func