- 22 Jan, 2020 5 commits
-
-
Eric Myhre authored
(There might be a cleverer way to do this, but it's beyond me at this present moment, so smashing it is. And I'll give up my "no abstractions in benchmarks" mantra for this one, enough to put a value table together and pay the cost to offset into it.)

Confirmed: things do get better at larger scale.

```
pkg: github.com/ipld/go-ipld-prime/_rsrch/nodesolution/impls
BenchmarkMap3nBaselineNativeMapAssignSimpleKeys-8        6062440    199 ns/op    256 B/op    2 allocs/op
BenchmarkMap3nBaselineJsonUnmarshalMapSimpleKeys-8        520588   2308 ns/op    672 B/op   18 allocs/op
BenchmarkMap3nFeedGenericMapSimpleKeys-8                 2062002    626 ns/op    520 B/op    8 allocs/op
BenchmarkMap3nFeedGennedMapSimpleKeys-8                  2456760    489 ns/op    416 B/op    5 allocs/op
BenchmarkMap3nFeedGennedMapSimpleKeysDirectly-8          2482074    468 ns/op    416 B/op    5 allocs/op
BenchmarkMap3nBaselineNativeMapIterationSimpleKeys-8    15704199   76.0 ns/op      0 B/op    0 allocs/op
BenchmarkMap3nGenericMapIterationSimpleKeys-8           19439997   63.0 ns/op     16 B/op    1 allocs/op
BenchmarkMap3nGennedMapIterationSimpleKeys-8            20279289   59.0 ns/op     16 B/op    1 allocs/op
BenchmarkMap25nBaselineNativeMapAssignSimpleKeys-8        726440   1457 ns/op   1068 B/op    2 allocs/op
BenchmarkMap25nFeedGenericMapSimpleKeys-8                 304988   3961 ns/op   2532 B/op   30 allocs/op
BenchmarkMap25nFeedGennedMapSimpleKeys-8                  388693   3003 ns/op   1788 B/op    5 allocs/op
BenchmarkMap25nFeedGennedMapSimpleKeysDirectly-8          429612   2757 ns/op   1788 B/op    5 allocs/op
BenchmarkMap25nBaselineNativeMapIterationSimpleKeys-8    3132525    417 ns/op      0 B/op    0 allocs/op
BenchmarkMap25nGenericMapIterationSimpleKeys-8           4186132    286 ns/op     16 B/op    1 allocs/op
BenchmarkMap25nGennedMapIterationSimpleKeys-8            4406563    271 ns/op     16 B/op    1 allocs/op

pkg: github.com/ipld/go-ipld-prime/impl/free
BenchmarkMap3nFeedGenericMapSimpleKeys-8                 1177724   1026 ns/op   1216 B/op   13 allocs/op
BenchmarkMap3nGenericMapIterationSimpleKeys-8            3497580    344 ns/op    464 B/op    4 allocs/op
BenchmarkMap25nFeedGenericMapSimpleKeys-8                 156534   8159 ns/op   7608 B/op   62 allocs/op
BenchmarkMap25nGenericMapIterationSimpleKeys-8            393928   2543 ns/op   3632 B/op   26 allocs/op
```

Basically:

- The build time ratio of our maps to native maps actually gets better (I didn't expect this). (Native maps still win handily; which, still, is no surprise, since ours Do More and have to pay at least Some abstraction cost for all the interface stuff.)
- The iterate time ratio of our maps to native maps *also* gets better; it's almost a full third faster.
- We can confirm that the allocations are completely amortized for our codegen'd maps (the count doesn't rise with scale *at all*). Nice.
- Our maps are admittedly still about twice the size in memory of a golang native map. But this is no surprise with the current internal architecture. And one could make other ones.
- And we can see the old design just out-of-control *sucking* at scale: building still takes twice as long in the old design, and iterating takes -- yep -- 10 times as long.

I'm not sure if these tests will be worth keeping around, because it's kinda just showing off some unsurprising stuff, but, eh. It's nice to have the expected results confirmed at another scale.
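The "value table" trick mentioned above can be sketched roughly like this (a minimal illustration with invented names, not the actual benchmark code): precompute all keys and values once, so the measured loop only pays a slice offset instead of allocating strings per iteration.

```go
package main

import (
	"fmt"
	"strconv"
)

// entry is one precomputed key/value pair in the value table.
type entry struct {
	key   string
	value int64
}

// buildValueTable precomputes n entries up front, outside any timed loop,
// so benchmarks at different scales (e.g. 3 vs 25 entries) can share one
// code path and just offset into the table.
func buildValueTable(n int) []entry {
	tbl := make([]entry, n)
	for i := range tbl {
		tbl[i] = entry{key: "key-" + strconv.Itoa(i), value: int64(i)}
	}
	return tbl
}

// assignNative mimics the native-map baseline: assign every precomputed
// entry into a fresh Go map.
func assignNative(tbl []entry) map[string]int64 {
	m := make(map[string]int64, len(tbl))
	for _, e := range tbl {
		m[e.key] = e.value
	}
	return m
}

func main() {
	tbl := buildValueTable(25)
	m := assignNative(tbl)
	fmt.Println(len(m), m["key-3"])
}
```

In a real benchmark, `buildValueTable` would run before `b.ResetTimer()` and only `assignNative` (or its node-building equivalent) inside the `b.N` loop.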
-
Eric Myhre authored
Again, checked for impact: it's in single-digit-nanosecond territory. Fortunately, "assume the node escapes to heap" is already part of our intended model. And while this function is probably too big to be inlined (I didn't check, mind), it's still dwarfed by our actual work (by about two orders of magnitude), so it's fine.
-
Eric Myhre authored
I'm wary as heck of introducing any amount of abstraction in benchmarks, because sometimes it starts to foment lies in the most unexpected ways. However, I did several fairly long runs with and without this contraction, and they seem to vary on the order of single nanoseconds -- while the noise between test runs varies by a few dozen. So, this appears to be safe to move on with.
-
Eric Myhre authored
(This is the new feature I just merged all those library version bumps to enable.)
-
Eric Myhre authored
Includes duplicate key checks that were previously missing. Overall, checks many more invariants.

There are now "*_ValueAssembler" types involved in both the 'free'/generic implementation and the codegen implementation. In both cases, they're needed for several tasks, mostly revolving around the "Done" method (or anything else that causes doneness, e.g. any of the scalar "Assign*" methods):

- to make sure the map assembly doesn't move on until the value assembly is finished! Need to do this to make it possible to maintain any other invariant over the whole tree!
- to do state machine keeping on the map assembler;
- to do any integrity checks that the map itself demands;
- and in some cases, to actually commit the entry itself (although in some cases, pointer funtimes at key-finish time are enough).

The introduction of these '*_KeyAssembler' and '*_ValueAssembler' types is somewhat frustrating, because they add more memory, more vtable interactions (sometimes; in codegen, the compiler can inline them out), and just plain more SLOC. Particularly irritatingly, they require a pointer back to their parent assembler... even though in practice they're always embedded *in* that same structure, so it's a predictable offset from their own pointer. But I couldn't easily see a way around that (shy of using unsafe or other extreme nastiness), so I'm just biting the bullet and moving on with it.

(I even briefly experimented with using type aliases to decorate additional methods contextually onto the same struct memory, hoping I'd be able to choose which type's set of methods to apply. (I know this is possible internally -- if one writes assembly, that's *what the calls are like*: you grab the function definition from a table of functions per type, and then you apply it to some memory!) This would make it possible to put all the child assembler functions on the same memory as the parent assembler that embeds them, and thus save us the cyclic pointers! But alas, no. Attempting this runs aground on "duplicate method" errors quite quickly. Aliases were not meant to do this.)

There are some new tests, in addition to benchmarks. 'plainMap', destined to be part of the next version of the 'ipldfree' package, is now complete and passes tests. A couple of key tests are commented out, because they require a version bump of the go-wish library, and I'm going to sort that out in a separate commit. They do pass otherwise, though.

Some NodeStyle implementations are introduced, and those are now the way to get builders for those nodes; all the tests and benchmarks use them. The placeholder 'NewBuilder*' methods are gone. There are some open questions about what naming pattern to use for exporting symbols for NodeStyles. Some comments regard this, but it's a topic to engage in more earnest later.

Benchmarks have been renamed for slightly more consistency, with an eye towards additional benchmarks we're likely to add shortly.

Some new documentation files are added! These are a bit ramshackle, because they're written as issues of note occur to me, but there are enough high-level rules that should hold the same across various implementations of Node and NodeAssembler that writing them in a doc outside the code began to seem wise.
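The parent-pointer arrangement described above can be sketched as follows (a minimal toy with invented names, not the real go-ipld-prime types): the child value assembler lives inside the parent's own struct memory, yet still carries an explicit pointer back to the parent so that finishing a value can advance the parent's state machine and commit the entry.

```go
package main

import "fmt"

// mapAssembler is a toy stand-in for a map assembler: it tracks a crude
// state machine, the entries committed so far, and the key currently
// being assembled. The child valueAssembler is a field, so it occupies
// a predictable offset inside the parent's memory...
type mapAssembler struct {
	state   string // "expectKey" or "midValue"
	entries map[string]string
	curKey  string
	va      valueAssembler
}

// ...but it still needs an explicit pointer back to its parent, because
// Go gives us no safe way to recover the enclosing struct from a field.
type valueAssembler struct {
	p *mapAssembler
}

func newMapAssembler() *mapAssembler {
	ma := &mapAssembler{state: "expectKey", entries: map[string]string{}}
	ma.va.p = ma // wire up the cyclic pointer once, at construction
	return ma
}

// AssembleKey enforces two of the invariants the commit talks about:
// the previous value must be finished, and keys must not repeat.
func (ma *mapAssembler) AssembleKey(k string) *valueAssembler {
	if ma.state != "expectKey" {
		panic("misuse: previous value assembly not finished")
	}
	if _, exists := ma.entries[k]; exists {
		panic("duplicate key")
	}
	ma.curKey = k
	ma.state = "midValue"
	return &ma.va
}

// AssignString causes "doneness": it commits the entry via the parent
// pointer and returns the parent's state machine to expecting a key,
// so the map cannot move on until the value is finished.
func (va *valueAssembler) AssignString(v string) {
	ma := va.p
	ma.entries[ma.curKey] = v
	ma.state = "expectKey"
}

func main() {
	ma := newMapAssembler()
	ma.AssembleKey("a").AssignString("1")
	ma.AssembleKey("b").AssignString("2")
	fmt.Println(ma.entries["a"], ma.entries["b"])
}
```

Note that `&ma.va` never allocates: the child assembler is embedded in the parent, which is what keeps the per-entry allocation count flat in the codegen'd maps.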
-