Eric Myhre authored
(There might be a cleverer way to do this, but it's beyond me at this present moment, so smashing it is. And I'll give up my "no abstractions in benchmarks" mantra for this one, enough to put a value table together and pay the cost to offset into it.)

Confirmed: things do get better at larger scale.

```
pkg: github.com/ipld/go-ipld-prime/_rsrch/nodesolution/impls
BenchmarkMap3nBaselineNativeMapAssignSimpleKeys-8       6062440    199 ns/op    256 B/op    2 allocs/op
BenchmarkMap3nBaselineJsonUnmarshalMapSimpleKeys-8       520588   2308 ns/op    672 B/op   18 allocs/op
BenchmarkMap3nFeedGenericMapSimpleKeys-8                2062002    626 ns/op    520 B/op    8 allocs/op
BenchmarkMap3nFeedGennedMapSimpleKeys-8                 2456760    489 ns/op    416 B/op    5 allocs/op
BenchmarkMap3nFeedGennedMapSimpleKeysDirectly-8         2482074    468 ns/op    416 B/op    5 allocs/op
BenchmarkMap3nBaselineNativeMapIterationSimpleKeys-8   15704199   76.0 ns/op      0 B/op    0 allocs/op
BenchmarkMap3nGenericMapIterationSimpleKeys-8          19439997   63.0 ns/op     16 B/op    1 allocs/op
BenchmarkMap3nGennedMapIterationSimpleKeys-8           20279289   59.0 ns/op     16 B/op    1 allocs/op
BenchmarkMap25nBaselineNativeMapAssignSimpleKeys-8       726440   1457 ns/op   1068 B/op    2 allocs/op
BenchmarkMap25nFeedGenericMapSimpleKeys-8                304988   3961 ns/op   2532 B/op   30 allocs/op
BenchmarkMap25nFeedGennedMapSimpleKeys-8                 388693   3003 ns/op   1788 B/op    5 allocs/op
BenchmarkMap25nFeedGennedMapSimpleKeysDirectly-8         429612   2757 ns/op   1788 B/op    5 allocs/op
BenchmarkMap25nBaselineNativeMapIterationSimpleKeys-8   3132525    417 ns/op      0 B/op    0 allocs/op
BenchmarkMap25nGenericMapIterationSimpleKeys-8          4186132    286 ns/op     16 B/op    1 allocs/op
BenchmarkMap25nGennedMapIterationSimpleKeys-8           4406563    271 ns/op     16 B/op    1 allocs/op
pkg: github.com/ipld/go-ipld-prime/impl/free
BenchmarkMap3nFeedGenericMapSimpleKeys-8                1177724   1026 ns/op   1216 B/op   13 allocs/op
BenchmarkMap3nGenericMapIterationSimpleKeys-8           3497580    344 ns/op    464 B/op    4 allocs/op
BenchmarkMap25nFeedGenericMapSimpleKeys-8                156534   8159 ns/op   7608 B/op   62 allocs/op
BenchmarkMap25nGenericMapIterationSimpleKeys-8           393928   2543 ns/op   3632 B/op   26 allocs/op
```

Basically:

- the build time ratio of our maps to native maps actually gets better (I didn't expect this) (though native maps still win handily; which, still, is no surprise, since ours Do More and have to pay at least Some abstraction cost for all the interface stuff).
- the iterate time ratio of our maps to native maps *also* gets better; it's almost a full third faster.
- we can confirm that the allocations are completely amortized for our codegen'd maps (the count doesn't rise with scale *at all*). Nice.
- our maps are admittedly still about twice the size in memory as a golang native map would be. But this is no surprise with the current internal architecture, and one could make other ones.
- and we can see the old design just out-of-control *sucking* at scale: building still takes twice as long in the old design, and iterating takes -- yep -- 10 times as long.

I'm not sure these tests will be worth keeping around, because they're kinda just showing off some unsurprising stuff, but, eh. It's nice to have the expected results confirmed at another scale.
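For context, the "value table" trick mentioned above can be sketched roughly like this; the table contents and the helper name `buildMapViaTable` are illustrative assumptions, not the actual benchmark code:

```go
package main

import "fmt"

// Precomputing a table of strings lets a benchmark that runs at several
// scales (3 entries, 25 entries, ...) index into shared data by offset,
// instead of allocating fresh strings inside the timed loop. That way the
// only allocations measured are the ones made by the map under test.
// (Hypothetical data; the real benchmarks use their own fixtures.)
var tableStrs = []string{
	"k0", "k1", "k2", "k3", "k4", "k5", "k6", "k7",
}

// buildMapViaTable fills a map of n entries using offsets into the shared
// table, so no per-entry string allocation happens during the build.
func buildMapViaTable(n int) map[string]string {
	m := make(map[string]string, n)
	for i := 0; i < n; i++ {
		m[tableStrs[i%len(tableStrs)]] = tableStrs[(i+1)%len(tableStrs)]
	}
	return m
}

func main() {
	fmt.Println(len(buildMapViaTable(3))) // prints 3
}
```

In a real `testing.B` benchmark, the table is built once outside the loop, and `b.ResetTimer()` ensures its cost isn't counted.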
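As a rough illustration of why the "twice the size of a native map" and "allocations fully amortized" observations go together: an order-preserving map can keep both an ordered entry slice and a native map as an index, with the slice preallocated in one shot. This is a hedged sketch of that general architecture; the type and method names here are hypothetical, not the go-ipld-prime internals:

```go
package main

import "fmt"

// entry holds one key/value pair in insertion order.
type entry struct {
	k, v string
}

// orderedMap pays roughly double the memory of a bare native map because
// it stores the data twice over: once in an ordered slice (for stable,
// cheap iteration) and once in a native map (for O(1) lookup).
type orderedMap struct {
	entries []entry        // ordered storage; backing array allocated once
	index   map[string]int // key -> offset into entries
}

// newOrderedMap preallocates for sizeHint entries, so the allocation count
// stays flat no matter how many entries are later inserted (amortized).
func newOrderedMap(sizeHint int) *orderedMap {
	return &orderedMap{
		entries: make([]entry, 0, sizeHint),
		index:   make(map[string]int, sizeHint),
	}
}

func (m *orderedMap) put(k, v string) {
	if i, ok := m.index[k]; ok {
		m.entries[i].v = v // existing key: update in place
		return
	}
	m.index[k] = len(m.entries)
	m.entries = append(m.entries, entry{k, v})
}

func (m *orderedMap) get(k string) (string, bool) {
	i, ok := m.index[k]
	if !ok {
		return "", false
	}
	return m.entries[i].v, true
}

func main() {
	m := newOrderedMap(25)
	m.put("a", "1")
	m.put("b", "2")
	m.put("a", "3") // overwrite keeps entry count at 2
	v, _ := m.get("a")
	fmt.Println(v, len(m.entries)) // prints: 3 2
}
```

Iterating the slice directly is also why iteration can beat a native map's randomized-order iterator, at the cost of the extra memory noted above.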
53fa8ac7