Memoization is one of the dynamic programming patterns: the program computes a function once for a given set of parameters and reuses the output in subsequent calls instead of recalculating it. This article shows an approach to achieving generic memoization in Go.
Map
The following is an example:
```go
type Page struct{ name string }

var pages map[string]*Page = map[string]*Page{}

func NewPage(name string) (p *Page) {
	var ok bool
	if p, ok = pages[name]; !ok {
		p = &Page{
			name: name,
		}
		pages[name] = p
	}

	return p
}
```
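As a quick check (my addition, not from the original post), repeated calls with the same name reuse the cached value. The types above are repeated so the example runs standalone:

```go
package main

import "fmt"

type Page struct{ name string }

var pages map[string]*Page = map[string]*Page{}

func NewPage(name string) (p *Page) {
	var ok bool
	if p, ok = pages[name]; !ok {
		p = &Page{name: name}
		pages[name] = p
	}
	return p
}

func main() {
	a := NewPage("home")
	b := NewPage("home")
	fmt.Println(a == b)     // true: the same *Page instance is reused
	fmt.Println(len(pages)) // 1: only one entry was created
}
```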
This works, but it isn't safe for concurrent use. You can replace the map with a sync.Map, or keep the plain map and add a sync.Mutex that you lock/unlock when you access it; otherwise two goroutines calling NewPage with the same parameter could both miss the cache and create the value twice. If you have a lock that looks like the following, it'll lock the whole map while you're calculating the value and setting it in the map:
```go
var pages map[string]*Page = map[string]*Page{}
var l sync.Mutex

func NewPage(name string) (p *Page) {
	var ok bool
	l.Lock()
	if p, ok = pages[name]; !ok {
		p = &Page{
			name: name,
		}
		pages[name] = p
	}
	l.Unlock()

	return p
}
```
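The sync.Map alternative mentioned above could look like this sketch (my addition, not from the post). It avoids the global lock, but note that it constructs a candidate Page on every call, even on a cache hit; a losing candidate is simply discarded:

```go
package main

import (
	"fmt"
	"sync"
)

type Page struct{ name string }

var pages sync.Map // effectively a map[string]*Page

func NewPage(name string) *Page {
	// LoadOrStore returns the existing value if the key is present;
	// otherwise it stores and returns the value we pass in.
	p, _ := pages.LoadOrStore(name, &Page{name: name})
	return p.(*Page)
}

func main() {
	fmt.Println(NewPage("home") == NewPage("home")) // true
}
```

This is fine when constructing the value is cheap; when it is expensive, you want the computation itself to run at most once per key, which is what the article solves next.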
sync.Once
The solution I came up with is the following:

- Use a sync.Map to make sure it's concurrency safe.
- Wrap the function with sync.OnceValue and save the wrapper in the map.
- sync.OnceValue makes sure the calculation runs once for each parameter.
- Creating a Once instance is assumed to be faster than the actual calculation I want to do.
- Hiding the sync.Map behind an interface means I can swap it for another implementation that caches on disk/Redis/database.

Here is the implementation:
```go
import (
	"sync"
)

type Page struct{ name string }

var NewPage = MemoryMemoizer(func(name string) *Page {
	return &Page{
		name: name,
	}
})

type Cacher[K any, V any] interface {
	LoadOrStore(key K, value V) (actual V, loaded bool)
}

type MemoryCache[K any, V any] struct{ sync.Map }

func (m *MemoryCache[K, V]) LoadOrStore(key K, value V) (actual V, loaded bool) {
	a, loaded := m.Map.LoadOrStore(key, value)
	return a.(V), loaded
}

type Memoizer[In any, Out any, F func(In) Out] struct {
	Cache Cacher[In, func() Out]
	Fun   F
}

func (m *Memoizer[In, Out, F]) Do(i In) Out {
	once, _ := m.Cache.LoadOrStore(i,
		sync.OnceValue(
			func() Out { return m.Fun(i) },
		),
	)

	return once()
}

func MemoryMemoizer[In any, Out any, F func(In) Out](fun F) F {
	m := Memoizer[In, Out, F]{
		Cache: &MemoryCache[In, func() Out]{},
		Fun:   fun,
	}

	return m.Do
}
```
The neat part is that with generics you can wrap any function that takes one input and returns one output:
```go
var NewPage = MemoryMemoizer(func(name string) *Page {
	return &Page{
		name: name,
	}
})
```
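As a sanity check (my sketch, repeating the types above so it compiles standalone), a call counter shows the wrapped function runs once per distinct key:

```go
package main

import (
	"fmt"
	"sync"
)

// Cacher, MemoryCache, Memoizer, and MemoryMemoizer are the types from
// the implementation above, repeated here so the example is self-contained.
type Cacher[K any, V any] interface {
	LoadOrStore(key K, value V) (actual V, loaded bool)
}

type MemoryCache[K any, V any] struct{ sync.Map }

func (m *MemoryCache[K, V]) LoadOrStore(key K, value V) (actual V, loaded bool) {
	a, loaded := m.Map.LoadOrStore(key, value)
	return a.(V), loaded
}

type Memoizer[In any, Out any, F func(In) Out] struct {
	Cache Cacher[In, func() Out]
	Fun   F
}

func (m *Memoizer[In, Out, F]) Do(i In) Out {
	once, _ := m.Cache.LoadOrStore(i, sync.OnceValue(func() Out { return m.Fun(i) }))
	return once()
}

func MemoryMemoizer[In any, Out any, F func(In) Out](fun F) F {
	m := Memoizer[In, Out, F]{
		Cache: &MemoryCache[In, func() Out]{},
		Fun:   fun,
	}
	return m.Do
}

func main() {
	calls := 0
	square := MemoryMemoizer(func(n int) int {
		calls++ // executes once per distinct n
		return n * n
	})

	fmt.Println(square(4), square(4), square(5)) // 16 16 25
	fmt.Println(calls)                           // 2: one call per distinct key
}
```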
And with sync.OnceValues, the same idea extends to functions that also return an error:

```go
type MemoizerWithErr[In any, Out any, F func(In) (Out, error)] struct {
	Cache Cacher[In, func() (Out, error)]
	Fun   F
}

func (m *MemoizerWithErr[In, Out, F]) Do(i In) (Out, error) {
	once, _ := m.Cache.LoadOrStore(i,
		sync.OnceValues(
			func() (Out, error) { return m.Fun(i) },
		),
	)

	return once()
}

func MemoryMemoizerWithErr[In any, Out any, F func(In) (Out, error)](fun F) F {
	m := MemoizerWithErr[In, Out, F]{
		Cache: &MemoryCache[In, func() (Out, error)]{},
		Fun:   fun,
	}

	return m.Do
}
```
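One consequence worth knowing: because sync.OnceValues caches whatever the first call produced, errors are memoized too, so a failed call is not retried for the same key. A sketch demonstrating this (my addition, repeating the types above so it compiles standalone):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// Types repeated from the implementation above so the example is self-contained.
type Cacher[K any, V any] interface {
	LoadOrStore(key K, value V) (actual V, loaded bool)
}

type MemoryCache[K any, V any] struct{ sync.Map }

func (m *MemoryCache[K, V]) LoadOrStore(key K, value V) (actual V, loaded bool) {
	a, loaded := m.Map.LoadOrStore(key, value)
	return a.(V), loaded
}

type MemoizerWithErr[In any, Out any, F func(In) (Out, error)] struct {
	Cache Cacher[In, func() (Out, error)]
	Fun   F
}

func (m *MemoizerWithErr[In, Out, F]) Do(i In) (Out, error) {
	once, _ := m.Cache.LoadOrStore(i, sync.OnceValues(func() (Out, error) { return m.Fun(i) }))
	return once()
}

func MemoryMemoizerWithErr[In any, Out any, F func(In) (Out, error)](fun F) F {
	m := MemoizerWithErr[In, Out, F]{
		Cache: &MemoryCache[In, func() (Out, error)]{},
		Fun:   fun,
	}
	return m.Do
}

func main() {
	calls := 0
	fetch := MemoryMemoizerWithErr(func(id int) (string, error) {
		calls++
		if id < 0 {
			return "", errors.New("invalid id")
		}
		return fmt.Sprintf("record-%d", id), nil
	})

	v, err := fetch(1)
	fmt.Println(v, err, calls) // record-1 <nil> 1

	// The first (value, error) pair for a key is cached, so the failed
	// call is not retried.
	_, err1 := fetch(-1)
	_, err2 := fetch(-1)
	fmt.Println(err1, err2, calls) // invalid id invalid id 2
}
```

If retrying failures is desired, a Cacher implementation that evicts error results would be needed.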
```
$ go test --bench .
goos: linux
goarch: amd64
pkg: github.com/emad-elsaid/memoize
cpu: AMD Ryzen 7 2700X Eight-Core Processor
BenchmarkGoplio/Keys:10-16                1000000     1364 ns/op
BenchmarkGoplio/Keys:100-16                787778     1459 ns/op
BenchmarkGoplio/Keys:1000-16               870450     1449 ns/op
BenchmarkGoplio/Keys:10000-16              859154     1396 ns/op
BenchmarkGoplio/Keys:100000-16             620088     1733 ns/op
BenchmarkGoplioParallel/Keys:10-16        1000000     1090 ns/op
BenchmarkGoplioParallel/Keys:100-16       1000000     1165 ns/op
BenchmarkGoplioParallel/Keys:1000-16       999837     1176 ns/op
BenchmarkGoplioParallel/Keys:10000-16      985710     1200 ns/op
BenchmarkGoplioParallel/Keys:100000-16     879453     1379 ns/op
BenchmarkMemoizer/Keys:10-16              2594858    439.7 ns/op
BenchmarkMemoizer/Keys:100-16             2828440    459.8 ns/op
BenchmarkMemoizer/Keys:1000-16            2630336    455.1 ns/op
BenchmarkMemoizer/Keys:10000-16           2582623    456.1 ns/op
BenchmarkMemoizer/Keys:100000-16          2357390    464.5 ns/op
BenchmarkMemoizerParallel/Keys:10-16     14447314    84.29 ns/op
BenchmarkMemoizerParallel/Keys:100-16    14607079    83.06 ns/op
BenchmarkMemoizerParallel/Keys:1000-16   14245002    84.10 ns/op
BenchmarkMemoizerParallel/Keys:10000-16  12791991    91.26 ns/op
BenchmarkMemoizerParallel/Keys:100000-16  2490600    450.4 ns/op
PASS
ok  	github.com/emad-elsaid/memoize	33.902s
```
And after a performance improvement, the same benchmarks:

```
$ go test --bench .
goos: linux
goarch: amd64
pkg: github.com/emad-elsaid/memoize
cpu: AMD Ryzen 7 2700X Eight-Core Processor
BenchmarkGoplio/Keys:10-16                1000000     1287 ns/op
BenchmarkGoplio/Keys:100-16                940515     1294 ns/op
BenchmarkGoplio/Keys:1000-16               897920     1232 ns/op
BenchmarkGoplio/Keys:10000-16              847644     1328 ns/op
BenchmarkGoplio/Keys:100000-16             776767     1542 ns/op
BenchmarkGoplioParallel/Keys:10-16        1000000     1029 ns/op
BenchmarkGoplioParallel/Keys:100-16        971307     1188 ns/op
BenchmarkGoplioParallel/Keys:1000-16      1000000     1125 ns/op
BenchmarkGoplioParallel/Keys:10000-16     1000000     1120 ns/op
BenchmarkGoplioParallel/Keys:100000-16    1000000     1239 ns/op
BenchmarkMemoizer/Keys:10-16             25258276    43.08 ns/op
BenchmarkMemoizer/Keys:100-16            25672358    46.79 ns/op
BenchmarkMemoizer/Keys:1000-16           27347005    42.03 ns/op
BenchmarkMemoizer/Keys:10000-16          21916824    55.37 ns/op
BenchmarkMemoizer/Keys:100000-16         14022218    84.96 ns/op
BenchmarkMemoizerParallel/Keys:10-16     45314600    26.33 ns/op
BenchmarkMemoizerParallel/Keys:100-16    45316003    25.73 ns/op
BenchmarkMemoizerParallel/Keys:1000-16   41009624    26.64 ns/op
BenchmarkMemoizerParallel/Keys:10000-16  37614056    28.19 ns/op
BenchmarkMemoizerParallel/Keys:100000-16  2024397    537.5 ns/op
PASS
ok  	github.com/emad-elsaid/memoize	29.049s
```
I have released the previous implementation plus the performance improvement as a Go package: github.com/emad-elsaid/memoize.