# Bad Go: not sizing slices

This is the 4th in a series of posts about Bad Go - a clickbaity title for posts about Go code that I’ve found frustrating because it could just be that little bit better. Better in my mind means more performant with less impact on GC, without being more complex or harder to read.

In this post we’ll look at a very common issue - not setting the size of a slice when you know how big it needs to be. I’m talking about code like the following.

```go
func numbersToStringsBad(numbers []int) []string {
	vals := []string{}
	for _, n := range numbers {
		vals = append(vals, strconv.Itoa(n))
	}
	return vals
}
```

Why is this bad? Well, you know that the slice of strings needs the same capacity as the slice of ints, so why not save the runtime some guesswork and create the slice with the right capacity up front? Here’s a better version. We don’t do anything particularly different: we just set the capacity of the new slice to the length of the slice of numbers.

```go
func numbersToStringsBetter(numbers []int) []string {
	vals := make([]string, 0, len(numbers))
	for _, n := range numbers {
		vals = append(vals, strconv.Itoa(n))
	}
	return vals
}
```

Let’s do a quick benchmark and see how bad failing to set the capacity is, and how much we can gain with a little extra care. I’ve used `b.Run` to group these two benchmarks together this time.

```go
func BenchmarkSliceConversion(b *testing.B) {
	numbers := make([]int, 100)
	for i := range numbers {
		numbers[i] = i
	}

	b.Run("bad", func(b *testing.B) {
		b.ReportAllocs()
		for i := 0; i < b.N; i++ {
			numbersToStringsBad(numbers)
		}
	})

	b.Run("better", func(b *testing.B) {
		b.ReportAllocs()
		for i := 0; i < b.N; i++ {
			numbersToStringsBetter(numbers)
		}
	})
}
```

I ran the benchmarks 8 times each and fed the results to benchstat. Here are the results.

```
name                      time/op
SliceConversion/better-8  1.02µs ± 4%

name                      alloc/op
SliceConversion/better-8  1.79kB ± 0%

name                      allocs/op
SliceConversion/better-8    1.00 ± 0%
```

You’ll see the better version takes about half as much time and makes only 1 allocation, versus the 8 allocations made by the bad version. Why does this happen? It is all to do with growing the slice to fit the data.

I think we can see what’s happening by modifying `numbersToStringsBad` to show how the capacity of the slice grows as we append more items to it. This modified code prints the capacity of the slice whenever it changes.

```go
package main

import (
	"fmt"
	"strconv"
)

func numbersToStringsBadInstrumented(numbers []int) []string {
	vals := []string{}
	oldCapacity := cap(vals)
	for _, n := range numbers {
		vals = append(vals, strconv.Itoa(n))
		if capacity := cap(vals); capacity != oldCapacity {
			fmt.Printf("len(vals)=%d, cap(vals)=%d (was %d)\n", len(vals), capacity, oldCapacity)
			oldCapacity = capacity
		}
	}
	return vals
}

func main() {
	numbers := make([]int, 100)
	for i := range numbers {
		numbers[i] = i
	}

	numbersToStringsBadInstrumented(numbers)
}
```

Here’s the output. We can see that `append` doubles the capacity of the slice each time it fills up.

```
len(vals)=1, cap(vals)=1 (was 0)
len(vals)=2, cap(vals)=2 (was 1)
len(vals)=3, cap(vals)=4 (was 2)
len(vals)=5, cap(vals)=8 (was 4)
len(vals)=9, cap(vals)=16 (was 8)
len(vals)=17, cap(vals)=32 (was 16)
len(vals)=33, cap(vals)=64 (was 32)
len(vals)=65, cap(vals)=128 (was 64)
```

How does the capacity of the slice increase? When a slice runs out of room, `append` allocates a new backing array that’s twice as big (for slices this small) and copies the existing data over. So as well as the overhead of the extra allocations we suffer the overhead of copying the data. And all the intermediate backing arrays that are discarded will need to be garbage collected too.

Now you know the consequences, you’ll size your slices whenever you can, won’t you? But if you don’t know what the final size will be, don’t worry: although the “bad” version is slower, it really isn’t that bad.

Since you’ve read this far, as a bonus let’s see if we can do a little bit better if we set the length of the slice as well as the capacity, and set each slice entry directly rather than appending.

```go
func numbersToStringsBest(numbers []int) []string {
	vals := make([]string, len(numbers))
	for i, n := range numbers {
		vals[i] = strconv.Itoa(n)
	}
	return vals
}
```

benchstat gives us the following. Our final version is faster by about 10%.

```
name                      time/op
SliceConversion/better-8  1.00µs ± 2%
SliceConversion/best-8     900ns ± 2%

name                      alloc/op