- 22 Jan, 2020 6 commits
-
Eric Myhre authored
(There might be a cleverer way to do this, but it's beyond me at this present moment, so smashing it is. And I'll give up my "no abstractions in benchmarks" mantra for this one, enough to put a value table together and pay the cost to offset into it.)

Confirmed: things do get better at larger scale.

```
pkg: github.com/ipld/go-ipld-prime/_rsrch/nodesolution/impls
BenchmarkMap3nBaselineNativeMapAssignSimpleKeys-8        6062440    199 ns/op    256 B/op    2 allocs/op
BenchmarkMap3nBaselineJsonUnmarshalMapSimpleKeys-8        520588   2308 ns/op    672 B/op   18 allocs/op
BenchmarkMap3nFeedGenericMapSimpleKeys-8                 2062002    626 ns/op    520 B/op    8 allocs/op
BenchmarkMap3nFeedGennedMapSimpleKeys-8                  2456760    489 ns/op    416 B/op    5 allocs/op
BenchmarkMap3nFeedGennedMapSimpleKeysDirectly-8          2482074    468 ns/op    416 B/op    5 allocs/op
BenchmarkMap3nBaselineNativeMapIterationSimpleKeys-8    15704199   76.0 ns/op      0 B/op    0 allocs/op
BenchmarkMap3nGenericMapIterationSimpleKeys-8           19439997   63.0 ns/op     16 B/op    1 allocs/op
BenchmarkMap3nGennedMapIterationSimpleKeys-8            20279289   59.0 ns/op     16 B/op    1 allocs/op
BenchmarkMap25nBaselineNativeMapAssignSimpleKeys-8        726440   1457 ns/op   1068 B/op    2 allocs/op
BenchmarkMap25nFeedGenericMapSimpleKeys-8                 304988   3961 ns/op   2532 B/op   30 allocs/op
BenchmarkMap25nFeedGennedMapSimpleKeys-8                  388693   3003 ns/op   1788 B/op    5 allocs/op
BenchmarkMap25nFeedGennedMapSimpleKeysDirectly-8          429612   2757 ns/op   1788 B/op    5 allocs/op
BenchmarkMap25nBaselineNativeMapIterationSimpleKeys-8    3132525    417 ns/op      0 B/op    0 allocs/op
BenchmarkMap25nGenericMapIterationSimpleKeys-8           4186132    286 ns/op     16 B/op    1 allocs/op
BenchmarkMap25nGennedMapIterationSimpleKeys-8            4406563    271 ns/op     16 B/op    1 allocs/op
pkg: github.com/ipld/go-ipld-prime/impl/free
BenchmarkMap3nFeedGenericMapSimpleKeys-8                 1177724   1026 ns/op   1216 B/op   13 allocs/op
BenchmarkMap3nGenericMapIterationSimpleKeys-8            3497580    344 ns/op    464 B/op    4 allocs/op
BenchmarkMap25nFeedGenericMapSimpleKeys-8                 156534   8159 ns/op   7608 B/op   62 allocs/op
BenchmarkMap25nGenericMapIterationSimpleKeys-8            393928   2543 ns/op   3632 B/op   26 allocs/op
```

Basically:

- The build time ratio of our maps to native maps actually gets better (I didn't expect this). Native maps still win handily, which is no surprise, since ours Do More and have to pay at least Some abstraction cost for all the interface stuff.
- The iterate time ratio of our maps to native maps *also* gets better; it's almost a full third faster.
- We can confirm that the allocations are completely amortized for our codegen'd maps (the count doesn't rise with scale *at all*). Nice.
- Our maps are admittedly still about twice the size in memory as a golang native map would be. But this is no surprise with this current internal architecture. And one could make other ones.
- And we can see the old design just out-of-control *sucking* at scale: building takes twice as long in the old design, and iterating takes -- yep -- 10 times as long.

I'm not sure if these tests will be worth keeping around, because it's kinda just showing off some unsurprising stuff, but, eh. It's nice to have the expected results confirmed at another scale.
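(To illustrate the "value table" tactic: a minimal sketch, assuming the table simply precomputes key and value strings once, so that benchmarks at any scale offset into it rather than building strings in the timed loop. All names here are hypothetical, not the actual go-ipld-prime benchmark code.)

```go
package bench

import "strconv"

// Hypothetical value table: pay the cost of building the strings once,
// at init time, so benchmarks at any scale (3 entries, 25 entries, ...)
// just index into the table inside the timed loop.
const maxEntries = 25

var (
	tableKeys [maxEntries]string
	tableVals [maxEntries]string
)

func init() {
	for i := range tableKeys {
		tableKeys[i] = "k" + strconv.Itoa(i)
		tableVals[i] = "v" + strconv.Itoa(i)
	}
}
```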
-
Eric Myhre authored
Results are pleasing.

```
pkg: github.com/ipld/go-ipld-prime/_rsrch/nodesolution/impls
BenchmarkMap3nBaselineNativeMapAssignSimpleKeys-8        5206788    216 ns/op    256 B/op    2 allocs/op
BenchmarkMap3nBaselineJsonUnmarshalMapSimpleKeys-8        491780   2316 ns/op    672 B/op   18 allocs/op
BenchmarkMap3nFeedGenericMapSimpleKeys-8                 2105220    568 ns/op    520 B/op    8 allocs/op
BenchmarkMap3nFeedGennedMapSimpleKeys-8                  2401208    501 ns/op    416 B/op    5 allocs/op
BenchmarkMap3nFeedGennedMapSimpleKeysDirectly-8          2572612    469 ns/op    416 B/op    5 allocs/op
BenchmarkMap3nBaselineNativeMapIterationSimpleKeys-8    15420255   76.1 ns/op      0 B/op    0 allocs/op
BenchmarkMap3nGenericMapIterationSimpleKeys-8           18151563   66.1 ns/op     16 B/op    1 allocs/op
BenchmarkMap3nGennedMapIterationSimpleKeys-8            18951807   62.7 ns/op     16 B/op    1 allocs/op
pkg: github.com/ipld/go-ipld-prime/impl/free
BenchmarkMap3nFeedGenericMapSimpleKeys-8                 1170026   1025 ns/op   1216 B/op   13 allocs/op
BenchmarkMap3nGenericMapIterationSimpleKeys-8            3851317    311 ns/op    464 B/op    4 allocs/op
```

Iterating our new maps, both codegen and non, is fast. It's actually faster than iterating native golang maps. (This may seem shocking, but it's not totally out of line: we paid higher costs up front, after all. Also, we aren't going out of our way to randomize access order. I am still a bit surprised the cost of vtables in our system isn't more noticeable, though... and our one alloc, for the iterator!)

We can speed up iteration further by embedding an iterator in the map structures (sketched below). I'll probably do this in the final version, and simply have it be an optimistic system: two extra words of memory in the map is nearly free in context, and asking for another iterator after the first simply gives you an alloc again. It would be moderately irritating to measure this, though, so I'm passing on it for the present.

The benchmarks for our old `ipldfree.Node` implementations are... well, we knew these new systems would be a big improvement, but now we can finally see how much. Much. Our old system had a whopping 13 allocs to build a three-entry map. The new system has it down to 5 for codegen (and two of those are internal to golang's native maps, so it's trim indeed) and 8 for the new generic one. The wallclock effect of this was to make the old system almost twice as slow!

All of these issues with the old system were forced by the NodeBuilder interface and its build-small-then-build-bigger paradigm. We couldn't have gotten these improvements without the switch to the NodeAssembler interface and its lay-it-out-then-fill-it-in paradigm.

The new system is also *four times* as fast to iterate -- and does its work with only a single allocation: for the iterator itself. The old system performed an alloc for every single entry the iterator yielded! This is basically a change from O(n) allocs to O(1) -- a huge win. (Obviously, the iteration itself is still O(n), but as we can see from the timing, O(n) accesses vs O(n) allocs is a world of difference!)

All of these results should also continue to look better and better if the same tests are applied to larger data structures; these small samples are pretty much the _worst_ way to demo these improvements! So that's something to look forward to. (Especially in codegen: the improvements we're demonstrating here are particularly useful in the long run for enabling us to get the most mileage out of struct embedding... which we plan to do a lot of in generated code.)

Overall, this result pretty much confirms this design direction. It's now time to start moving this research back into the main package, and propagating upgrades as necessary for the improved interfaces. Sweet.
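(A minimal sketch of the embedded-iterator idea, with hypothetical names -- this is not the actual go-ipld-prime code: the first iterator request is satisfied from memory already inside the node, and only subsequent requests pay an allocation.)

```go
package impls

// entry and plainMap are simplified stand-ins for the real structures.
type entry struct {
	k, v string
}

type plainMap struct {
	t       []entry     // entries, in insertion order
	itr     plainMapItr // embedded iterator: ~two extra words per map
	itrUsed bool        // has the embedded iterator been handed out yet?
}

type plainMapItr struct {
	n   *plainMap
	idx int
}

// MapIterator returns the embedded iterator on first call (no alloc);
// asking for another iterator after the first simply allocates again.
func (n *plainMap) MapIterator() *plainMapItr {
	if !n.itrUsed {
		n.itrUsed = true
		n.itr = plainMapItr{n: n}
		return &n.itr
	}
	return &plainMapItr{n: n}
}

func (i *plainMapItr) Next() (k string, v string, ok bool) {
	if i.idx >= len(i.n.t) {
		return "", "", false
	}
	e := &i.n.t[i.idx]
	i.idx++
	return e.k, e.v, true
}
```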
-
Eric Myhre authored
Again, checked for impact: it's in single-digit-nanosecond territory. Fortunately, "assume the node escapes to heap" is already part of our intended model. And while this function is probably too big to be inlined (I didn't check, mind), it's still dwarfed by our actual work -- to the tune of two orders of magnitude -- so it's fine.
-
Eric Myhre authored
I'm wary as heck of introducing any amount of abstraction in benchmarks, because sometimes it starts to foment lies in the most unexpected ways. However, I did several fairly long runs with and without this contraction, and they seem to vary on the order of single nanoseconds -- while the noise between test runs varies by a few dozen. So this appears to be safe to move on with.
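(For illustration, the contraction in question is roughly of this shape -- a sketch with hypothetical names, not the literal benchmark code: the per-implementation benchmarks share one parameterized body, and each variant supplies only its own constructor.)

```go
package bench

import "testing"

// mapFiller is whatever narrow interface the shared body needs;
// hypothetical, for illustration only.
type mapFiller interface {
	BeginMap(sizeHint int)
	AssembleEntry(k, v string)
	Done()
}

// buildMapSimpleKeys is the contracted shared body: each benchmark
// variant passes in its own constructor and otherwise runs identical code,
// so any skew from the indirection applies equally to all variants.
func buildMapSimpleKeys(b *testing.B, newFiller func() mapFiller) {
	for i := 0; i < b.N; i++ {
		ma := newFiller()
		ma.BeginMap(3)
		ma.AssembleEntry("whee", "1")
		ma.AssembleEntry("woot", "2")
		ma.AssembleEntry("waga", "3")
		ma.Done()
	}
}
```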
-
Eric Myhre authored
(This is the new feature I just merged all those library version bumps to enable.)
-
Eric Myhre authored
Includes duplicate key checks that were previously missing. Overall, checks many more invariants.

There are now "*_ValueAssembler" types involved in both the 'free'/generic implementation and the codegen implementation: in both cases, they're needed for several tasks, mostly revolving around the "Done" method (or anything else that causes doneness, e.g. any of the scalar "Assign*" methods):

- to make sure the map assembly doesn't move on until the value assembly is finished (need to do this to make it possible to maintain any other invariant over the whole tree!);
- to do state machine keeping on the map assembler;
- to do any integrity checks that the map itself demands;
- and in some cases, to actually commit the entry itself (although in some cases, pointer funtimes at key finish time are enough).

The introduction of these '*_KeyAssembler' and '*_ValueAssembler' types is somewhat frustrating, because they add more memory, more vtable interactions (sometimes; in codegen, the compiler can inline them out), and just plain more SLOC. Particularly irritatingly, they require a pointer back to their parent assembler... even though in practice, they're always embedded *in* that same structure, so it's a predictable offset from their own pointer. But I couldn't easily see a way around that (shy of using unsafe or other extreme nastiness), so I'm just biting the bullet and moving on with it. (A sketch of the resulting shape follows below.)

(I even briefly experimented with using type aliases to be able to decorate additional methods contextually onto the same struct memory, hoping that I'd be able to choose which type's set of methods I apply. (I know this is possible internally -- if one writes assembler, that's *what the calls are like*: you grab the function definition from a table of functions per type, and then you apply it to some memory!) This would make it possible to put all the child assembler functions on the same memory as the parent assembler that embeds them, and thus save us the cyclic pointers! But alas, no. Attempting to do this runs aground on "duplicate method" errors quite quickly. Aliases were not meant to do this.)

There are some new tests, in addition to benchmarks. 'plainMap', destined to be part of the next version of the 'ipldfree' package, is now complete, and passes tests. A couple of key tests are commented out, because they require a bump in version of the go-wish library, and I'm going to sort that out in a separate commit. They do pass otherwise, though.

Some NodeStyle implementations are introduced, and now those are the way to get builders for those nodes; all the tests and benchmarks use them. The placeholder 'NewBuilder*' methods are gone. There are some open questions about what naming pattern to use for exporting symbols for NodeStyles. Some comments regard this, but it's a topic to engage in more earnest later.

Benchmarks have been renamed for slightly more consistency, with an eye towards additional benchmarks we're likely to add shortly.

Some new documentation files are added! These are a bit ramshackle, because they're written as issues of note occur to me, but there are enough high-level rules that should be held the same across various implementations of Node and NodeAssembler that writing them in a doc outside the code began to seem wise.
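(A minimal sketch of that parent-pointer arrangement, with hypothetical names -- the real assemblers carry much more state: the child assemblers are embedded in the parent's own memory, yet still store a pointer back to it so their doneness-causing methods can advance the parent's state machine.)

```go
package impls

type maState int

const (
	maStateInitial maState = iota // ready for a key (or Done)
	maStateMidValue               // a value assembler is outstanding
)

type mapAssembler struct {
	state maState
	ka    keyAssembler   // embedded: no separate alloc...
	va    valueAssembler // ...but each still needs a back-pointer.
}

type keyAssembler struct {
	// ma points back at the assembler that embeds this one. It's a
	// predictable offset from our own address, but Go offers no safe
	// way to exploit that, so we store the pointer explicitly.
	ma *mapAssembler
}

type valueAssembler struct {
	ma *mapAssembler
}

func newMapAssembler() *mapAssembler {
	ma := &mapAssembler{}
	ma.ka.ma = ma // wire up the irritating-but-necessary cycles
	ma.va.ma = ma
	return ma
}

// AssignString is one of the methods that "causes doneness": it must
// commit the entry and hand control back to the map assembler, so the
// map assembly can't move on until the value assembly is finished.
func (va *valueAssembler) AssignString(v string) {
	// ... commit the entry here ...
	va.ma.state = maStateInitial
}
```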
-
- 13 Jan, 2020 4 commits
-
Eric Myhre authored
It does not. I turned benchtime up to 15s because in 1s runs, any signal was well below the threshold of noise. And even with larger sampling:

```
BenchmarkFeedGennedMapSimpleKeys-8           39906697   457 ns/op   400 B/op   5 allocs/op
BenchmarkFeedGennedMapSimpleKeysDirectly-8   39944427   455 ns/op   400 B/op   5 allocs/op
```

It's literally negligible. It's still possible we'll see more consequential results in the case of structs. But from this result? I'd say there are pretty good arguments made *against* having the extra method here.
-
Eric Myhre authored
More costly than I expected. New results:

```
BenchmarkBaselineNativeMapAssignSimpleKeys-8     5772964    206 ns/op   256 B/op    2 allocs/op
BenchmarkBaselineJsonUnmarshalMapSimpleKeys-8     470348   2349 ns/op   672 B/op   18 allocs/op
BenchmarkFeedGennedMapSimpleKeys-8               2484633    446 ns/op   400 B/op    5 allocs/op
```

Okay, so our wall clock time got worse, but is still flitting around 2x; not thrilling, but acceptable. But apparently we got a 5th alloc? Ugh.

I looked for typos and misunderstandings, but I think what I failed to understand is actually the internal workings of golang's maps. The new alloc is in line 343: `ma.w.m[ma.w.t[l].k] = &ma.w.t[l].v`. As scary as that line may look, it's just some pointer shuffles; there are no new memory allocations here, just pointers to stuff that we've already got on hand. Disassembly confirms this: there's no `runtime.newobject` or other allocation in the disassembly of the `flushLastEntry` function. Just this salty thing: `CALL runtime.mapassign_faststr(SB)`. Which can, indeed, allocate inside. I guess what's going on here is golang's maps don't allocate *buckets* until the first insertion forces them to? Today I learned... (See the sketch below for the layout that line operates on.)
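(To make the quoted line less scary, here's roughly the layout it operates on -- a sketch with simplified, hypothetical field types; the real values aren't plain strings: entries live in one flat slice, and the map stores only pointers into that slice.)

```go
package impls

type entry struct {
	k, v string
}

type plainMap struct {
	m map[string]*string // lookup index: key -> pointer into t
	t []entry            // the actual entry storage, in insertion order
}

// flushLastEntry indexes the most recently appended entry. The assignment
// is pure pointer shuffling on our side -- no runtime.newobject -- but
// runtime.mapassign_faststr can still allocate internally, since a map
// made without a size hint defers bucket allocation until first insert.
// (Holding pointers into t is only safe because t's capacity is fixed up
// front; growing t would invalidate them.)
func (n *plainMap) flushLastEntry() {
	l := len(n.t) - 1
	n.m[n.t[l].k] = &n.t[l].v
}
```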
-
Eric Myhre authored
Memory works like I think it does. That's good.

Added a third entry just to make some numbers odd and the effects a wee bit more visible. Fixed the map to do allocs up front using the size hint (sketched below); and, rather importantly, to actually return the embedded child assemblers. (Those are... kinda an important part of the whole design.) And that got things lined up where I hoped. Current results:

```
BenchmarkBaselineNativeMapAssignSimpleKeys-8     5665226    199 ns/op   256 B/op    2 allocs/op
BenchmarkBaselineJsonUnmarshalMapSimpleKeys-8     519618   2334 ns/op   672 B/op   18 allocs/op
BenchmarkFeedGennedMapSimpleKeys-8               4212643    291 ns/op   192 B/op    4 allocs/op
```

This is what I'm gunning for. Those four allocs are:

- one for the builder;
- one for the node;
- and one each for the internal map and entry slice.

This is about as good as we can get. Everything's amortized. And we're getting ordered maps out of the deal, which is more featureful than the stdlib map. And the actual runtime is pretty dang good: less than 150% of the native map -- that's actually better than I was going to let myself hope for.

We're *not* paying for:

- extra allocs per node in more complex structures;
- extra allocs per builder in more complex structures;
- allocs per key or per value in maps;
- and I do believe we're set up even to do generic map iteration without incurring interface boxing costs. Nice.

I haven't begun to look for time optimizations at all yet; but now that the alloc count is right, I can move on to do that.

There's also one fairly large bugaboo here: the values don't actually get, uh, inserted into the map. That's... let's fix that.
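(The up-front allocation fix looks roughly like this -- a sketch with hypothetical names, using the same simplified layout as the sketch earlier: the size hint drives both allocations, so nothing grows or reallocates mid-build.)

```go
package impls

type entry struct {
	k, v string
}

type plainMap struct {
	m map[string]*string
	t []entry
}

type mapAssembler struct {
	w *plainMap // the node being filled in
}

// BeginMap does all the map's allocations immediately, using the size
// hint: one for the internal map, one for the entry slice. Together with
// the builder and the node themselves, that's the whole alloc budget --
// nothing further allocates per key or per value. Fixing t's capacity up
// front is also what makes it safe to hold pointers into it later.
func (ma *mapAssembler) BeginMap(sizeHint int) {
	ma.w.m = make(map[string]*string, sizeHint)
	ma.w.t = make([]entry, 0, sizeHint)
}
```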
-
Eric Myhre authored
(Thank goodness. Been in theoryland for a while.)

There's somewhat more content here than necessary for the benchmark that's presently runnable; right now only the Map_K_T implementation is targeted. I want benchmarks of things with complex keys in codegen, and also benchmarks of the runtime/generic/free impls, soon, so they can all be compared.

There's also a quick fliff of stdlib map usage in a wee benchmark to give us some baselines... and there's also a quick fliff of stdlib json unmarshalling for the same reason. It's not fair, to be sure: the json thing is doing work parsing, and allocating strings (whereas the other two get to use compile-time const strings)... but it sure would be embarrassing if we *failed* to beat that speed, right? So it's there to keep it in mind. (Both baselines are sketched below.)

Some off-the-cuff results:

```
BenchmarkBaselineNativeMapAssignSimpleKeys-8     6815284    175 ns/op   256 B/op    2 allocs/op
BenchmarkBaselineJsonUnmarshalMapSimpleKeys-8     724059   1644 ns/op   608 B/op   14 allocs/op
BenchmarkFeedGennedMapSimpleKeys-8               2932563    410 ns/op   176 B/op    8 allocs/op
```

This is pretty good. If we're *only* half the speed of the native maps... that's actually really, really good, considering we're doing more work to keep things ordered, to say nothing of all the other interface support efforts we have to do.

But 8 allocs? No. That was not the goal. This should be better. Time to dig...
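(For reference, the two baselines are roughly of this shape -- a sketch; the entry count and key strings are illustrative, not the verbatim benchmark code.)

```go
package bench

import (
	"encoding/json"
	"testing"
)

var sink interface{} // keep results alive so the compiler can't elide work

func BenchmarkBaselineNativeMapAssignSimpleKeys(b *testing.B) {
	for i := 0; i < b.N; i++ {
		m := make(map[string]string, 2)
		m["whee"] = "1"
		m["woot"] = "2"
		sink = m
	}
}

// The json baseline is "unfair" -- it parses and allocates its strings --
// but it's an embarrassment floor we definitely want to stay above.
func BenchmarkBaselineJsonUnmarshalMapSimpleKeys(b *testing.B) {
	for i := 0; i < b.N; i++ {
		var m map[string]string
		if err := json.Unmarshal([]byte(`{"whee":"1","woot":"2"}`), &m); err != nil {
			b.Fatal(err)
		}
		sink = m
	}
}
```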
-