- 02 Feb, 2020 1 commit
-
-
Eric Myhre authored
When I started typing this, I thought I was going to need it -- or at least *want* it -- in the recursive bits of map value assemblers. By the end, it turns out... no, actually, I don't. Map value assembler does better just embedding the (very sparse) amount of relevant logic without abstraction in this case; and it also *couldn't* embed the anyBuilder regardless, because the anyBuilder embeds the map assembler which embeds its value assembler..... Fhwooo.

As you can see in some of the comments, I was also surprised to find that it currently shakes out that there's no demand at all for an 'anyNode' type. There could be! -- if we had maps and lists use that rather space-costly mechanism in exchange for fewer value allocs -- but at present, we don't.

But even if that supposed dependency path turned out wrong, of course we still need an 'anyBuilder' regardless -- deserialization often starts here! -- so, here it is.

More things need to be filled in yet -- namely, for the other scalars I haven't bothered to stamp out in this package yet -- but these will be cookie cutter and shan't be interesting.
-
- 01 Feb, 2020 3 commits
-
-
Eric Myhre authored
These parts, at least, are super cookie-cutter. Picking them off the shelf and putting them in the "done" bin as I work on the 'any' stuff, which, as mentioned in the previous commit, is a bit more interesting.
-
Eric Myhre authored
Using a short little snippet of interface constraints at the top of every file is starting to feel useful again, both as a compiler sanity check and as a short readable inventory of the file.

The hackme accumulates and clarifies a bunch of notes I found myself needing to make in one place as I try to settle the high-level rules. It's mostly for the "ipldfree" implementations, and will move to that package when we're done with our research branch here and start moving things into their final places. A few things might be more general; I'll sort those out into some separate file later.

And plainString now has the full complement of NodeStyle, NodeBuilder, and all the other interfaces.

In parallel, but not committed here, I'm working on the 'any' node implementation for this new generation of "ipldfree". That's turning out a bit more special (isn't everything in this project?) than expected: the ideal operations for NodeBuilder and NodeAssembler actually diverge, because the builder would like to return rather less, and more granular, memory, whereas the assembler has no freedom to do so; this is the first occurrence of such a divergence. So that's fun! (I wonder if something like this will also come up for union types, later on.)
-
Eric Myhre authored
-
- 29 Jan, 2020 1 commit
-
-
Eric Myhre authored
I ran into one significantly different requirement that structs surface which maps and other basic Data Model stuff didn't: in order to track "doneness" invariants across a whole structure, we need the final iteration on the child NodeAssembler (meaning the call to either the "Assign*" or the "Done" methods, depending on whether *that* system was used recursively) to update the parent's state machine and let it know it's now correct for it to progress. (So far so good; *that* part is the same for basic maps and lists.)

The trouble is... with maps, we could easily do this with one wrapper type; for structs, which may have different types per field, this is a lot less trivial! There's now a commentstorm in the 'map_K2_T2.go' file exploring alternatives for solving this. As usual, there are tradeoffs, and no single obviously correct choice. (E.g., the fastest, most inliner-friendly choice involves significant bloat to the generated code size and binary, so that one's certainly not a slam dunk.)

Building a large enough prototype to disambiguate which of the identified solution paths will perform best is of course a nice idea, but the volume of code that has to manifest to provide a sufficiently realistic benchmarkable prototype is... well, not looking feasible; we'll be better off doing a first draft of all codegen with a randomly chosen solution, and then trying variations from there.

(Amusingly/headdeskingly, at some point I had it in my head that this call-up'ing situation was going to be one of the downsides unique to the alternate plans that kept the old builder interface plus "freezing" marks, and that this redesign jumped over it; alas, no, that was silly: both designs need this for similar reasons. (This design is still lighter in other ways, though.))

But! None of this is looking like show-stoppers, just Hard Choices. So, I'm going to table this -- and instead switch back to filling out the rest of the "ipldfree" next gen node kinds, and then, combined with the confidence we already have from those benchmarks in the previous few commits... start porting interface changes into core! (And just... assume everything will go according to plan when it's time to come back to this codegen stuff we're tabling here. If it doesn't... that'll be fun.)

Oh! Also seen here: a slight improvement to the map assembler state machine tracking, now using an enum rather than a bool. This is definitely a good idea and I'm going to lift it back to the other map implementations immediately, even if much of the rest of the poor 'map_K2_T2.go' file is sadly being left in a storm of todo's.
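A rough sketch of the enum-based state tracking and the "call up to the parent" move described above (all names here are hypothetical, not the actual generated code):

```
package main

import "fmt"

// maState is a hypothetical enum for the map assembler's progress, standing in
// for the plain bool it replaces; the extra states make bad transitions detectable.
type maState uint8

const (
	maState_initial     maState = iota // no key or value is in flight
	maState_midKey                     // a key assembler is active and unfinished
	maState_expectValue                // a key finished; AssembleValue must be called next
	maState_midValue                   // a value assembler is active and unfinished
	maState_finished                   // Finish was called; no further mutations allowed
)

// mapAssembler is a stand-in for a generated assembler holding that state.
type mapAssembler struct {
	state maState
}

// valueDone is what a child value assembler would "call up" to its parent when
// its Done (or Assign*) method completes, letting the parent progress.
func (ma *mapAssembler) valueDone() {
	if ma.state != maState_midValue {
		panic(fmt.Sprintf("misuse: value finished while assembler in state %d", ma.state))
	}
	ma.state = maState_initial
}

func main() {
	ma := &mapAssembler{state: maState_midValue}
	ma.valueDone()
	fmt.Println("state is back to initial:", ma.state == maState_initial)
}
```

The gain over a bool is that invalid transitions -- like a value finishing when none was in flight -- become detectable rather than silently corrupting the structure.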
-
- 22 Jan, 2020 8 commits
-
-
Eric Myhre authored
(There might be a cleverer way to do this, but it's beyond me at this present moment, so smashing it is. And I'll give up my "no abstractions in benchmarks" mantra for this one, enough to put a value table together and pay the cost to offset into it.)

Confirmed, things do get better at larger scale.

```
pkg: github.com/ipld/go-ipld-prime/_rsrch/nodesolution/impls
BenchmarkMap3nBaselineNativeMapAssignSimpleKeys-8       6062440   199 ns/op    256 B/op    2 allocs/op
BenchmarkMap3nBaselineJsonUnmarshalMapSimpleKeys-8       520588  2308 ns/op    672 B/op   18 allocs/op
BenchmarkMap3nFeedGenericMapSimpleKeys-8                2062002   626 ns/op    520 B/op    8 allocs/op
BenchmarkMap3nFeedGennedMapSimpleKeys-8                 2456760   489 ns/op    416 B/op    5 allocs/op
BenchmarkMap3nFeedGennedMapSimpleKeysDirectly-8         2482074   468 ns/op    416 B/op    5 allocs/op
BenchmarkMap3nBaselineNativeMapIterationSimpleKeys-8   15704199  76.0 ns/op      0 B/op    0 allocs/op
BenchmarkMap3nGenericMapIterationSimpleKeys-8          19439997  63.0 ns/op     16 B/op    1 allocs/op
BenchmarkMap3nGennedMapIterationSimpleKeys-8           20279289  59.0 ns/op     16 B/op    1 allocs/op
BenchmarkMap25nBaselineNativeMapAssignSimpleKeys-8       726440  1457 ns/op   1068 B/op    2 allocs/op
BenchmarkMap25nFeedGenericMapSimpleKeys-8                304988  3961 ns/op   2532 B/op   30 allocs/op
BenchmarkMap25nFeedGennedMapSimpleKeys-8                 388693  3003 ns/op   1788 B/op    5 allocs/op
BenchmarkMap25nFeedGennedMapSimpleKeysDirectly-8         429612  2757 ns/op   1788 B/op    5 allocs/op
BenchmarkMap25nBaselineNativeMapIterationSimpleKeys-8   3132525   417 ns/op      0 B/op    0 allocs/op
BenchmarkMap25nGenericMapIterationSimpleKeys-8          4186132   286 ns/op     16 B/op    1 allocs/op
BenchmarkMap25nGennedMapIterationSimpleKeys-8           4406563   271 ns/op     16 B/op    1 allocs/op
pkg: github.com/ipld/go-ipld-prime/impl/free
BenchmarkMap3nFeedGenericMapSimpleKeys-8                1177724  1026 ns/op   1216 B/op   13 allocs/op
BenchmarkMap3nGenericMapIterationSimpleKeys-8           3497580   344 ns/op    464 B/op    4 allocs/op
BenchmarkMap25nFeedGenericMapSimpleKeys-8                156534  8159 ns/op   7608 B/op   62 allocs/op
BenchmarkMap25nGenericMapIterationSimpleKeys-8           393928  2543 ns/op   3632 B/op   26 allocs/op
```

Basically:

- the build time ratio of our maps to native maps actually gets better (I didn't expect this) (though native maps still win handily; which, still, is no surprise, since ours Do More and have to pay at least Some abstraction cost for all the interface stuff).
- the iterate time ratio of our maps to native maps *also* gets better; it's almost a full third faster.
- we can confirm that the allocations are completely amortized for our codegen'd maps (the count doesn't rise with scale *at all*). Nice.
- our maps are admittedly still about twice the size in memory as a golang native map would be. But this is no surprise with this current internal architecture. And one could make other ones.
- and we can see the old design just out-of-control *sucking* at scale. Building still takes twice as long in the old design; and iterating takes -- yep -- 10 times as long.

I'm not sure if these tests will be worth keeping around, because it's kinda just showing off some unsurprising stuff, but, eh. It's nice to have the expected results confirmed at another scale.
-
Eric Myhre authored
Results are pleasing.

```
pkg: github.com/ipld/go-ipld-prime/_rsrch/nodesolution/impls
BenchmarkMap3nBaselineNativeMapAssignSimpleKeys-8       5206788   216 ns/op   256 B/op    2 allocs/op
BenchmarkMap3nBaselineJsonUnmarshalMapSimpleKeys-8       491780  2316 ns/op   672 B/op   18 allocs/op
BenchmarkMap3nFeedGenericMapSimpleKeys-8                2105220   568 ns/op   520 B/op    8 allocs/op
BenchmarkMap3nFeedGennedMapSimpleKeys-8                 2401208   501 ns/op   416 B/op    5 allocs/op
BenchmarkMap3nFeedGennedMapSimpleKeysDirectly-8         2572612   469 ns/op   416 B/op    5 allocs/op
BenchmarkMap3nBaselineNativeMapIterationSimpleKeys-8   15420255  76.1 ns/op     0 B/op    0 allocs/op
BenchmarkMap3nGenericMapIterationSimpleKeys-8          18151563  66.1 ns/op    16 B/op    1 allocs/op
BenchmarkMap3nGennedMapIterationSimpleKeys-8           18951807  62.7 ns/op    16 B/op    1 allocs/op
pkg: github.com/ipld/go-ipld-prime/impl/free
BenchmarkMap3nFeedGenericMapSimpleKeys-8                1170026  1025 ns/op  1216 B/op   13 allocs/op
BenchmarkMap3nGenericMapIterationSimpleKeys-8           3851317   311 ns/op   464 B/op    4 allocs/op
```

Iterating our new maps, both codegen and non, is fast. It's actually faster than iterating native golang maps. (This may seem shocking, but it's not totally out of line: we paid higher costs up front, after all. Also, we aren't going out of our way to randomize access order. I am still a bit surprised the costs of vtables in our system aren't more noticeable, though... and our one alloc, for the iterator!)

We can speed up iteration further by embedding an iterator in the map structures. I'll probably do this in the final version, and simply have this be an optimistic system: two extra words of memory in the map is nearly free in context, and asking for another iterator after the first simply gives you an alloc again. It would be moderately irritating to measure this, though, so I'm passing on it for the present.

The benchmarks for our old `ipldfree.Node` implementations are... well, we knew these new systems would be a big improvement, but now we can finally see how much. Much. Our old system had a whopping 13 allocs to build a three-entry map. The new system has it down to 5 (and two of those are internal to golang's native maps, so it's trim indeed) for codegen and 8 for the new generic one. The wallclock effect of this was to make the old system almost twice as slow! All of these issues with the old system were forced by the NodeBuilder interface and its build-small-then-build-bigger paradigm. We couldn't have gotten these improvements without the switch to the NodeAssembler interface and its lay-it-out-then-fill-it-in paradigm.

The new system is also *four times* as fast to iterate -- and does its work with only a single allocation: for the iterator itself. The old system performed an alloc for every single entry the iterator yielded! This is basically a change from O(n) to O(1) -- huge win. (Obviously, the iteration itself is still O(n), but as we can see from the timing, O(n) accesses vs O(n) allocs is a world of difference!)

All of these results should also continue to look better and better if the same tests are applied to larger data structures. These small samples are pretty much the _worst_ way to demo these improvements! So that's something to look forward to. (Especially in codegen: the improvements we're demonstrating here are particularly useful in the long run for enabling us to get the most mileage out of struct embedding... which we plan to do a lot of in generated code.)

Overall, this result pretty much confirms this design direction. It's now time to start moving this research back into the main package, and propagating upgrades as necessary for the improved interfaces. Sweet.
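To make the "embed an iterator in the map structures" idea above concrete, here's a rough sketch of the optimistic scheme (names and layout invented here, not the project's code): the first iterator comes out of memory the node already owns, and only later iterators pay for an alloc.

```
package main

import "fmt"

type entry struct {
	k string
	v int64
}

// plainMapish is a hypothetical node with one iterator embedded in it, so the
// first MapIterator call costs zero allocations; later calls pay for their own.
type plainMapish struct {
	t       []entry
	itr     mapIterator // the embedded iterator: roughly two extra words of memory
	itrUsed bool
}

type mapIterator struct {
	n   *plainMapish
	idx int
}

func (n *plainMapish) MapIterator() *mapIterator {
	if !n.itrUsed {
		n.itrUsed = true
		n.itr = mapIterator{n: n}
		return &n.itr // optimistic path: no alloc
	}
	return &mapIterator{n: n} // second and later iterators alloc, as before
}

func (it *mapIterator) Next() (string, int64, bool) {
	if it.idx >= len(it.n.t) {
		return "", 0, false
	}
	e := it.n.t[it.idx]
	it.idx++
	return e.k, e.v, true
}

func main() {
	n := &plainMapish{t: []entry{{"a", 1}, {"b", 2}, {"c", 3}}}
	it := n.MapIterator()
	for k, v, ok := it.Next(); ok; k, v, ok = it.Next() {
		fmt.Println(k, v)
	}
}
```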
-
Eric Myhre authored
Again, checked for impact: it's in single-digit nanosecond territory. Fortunately, "assume the node escapes to heap" is already part of our intended model. And while this function is probably too big to be inlined (I didn't check, mind), it's still dwarfed by our actual work (to the scale of two orders of mag base 10), so it's fine.
-
Eric Myhre authored
I'm wary as heck at introducing any amount of abstraction in benchmarks, because sometimes, it starts to foment lies in the most unexpected ways. However, I did several fairly long runs with and without this contraction, and they seem to vary on the order of single nanoseconds -- while the noise between test runs varies by a few dozen. So, this appears to be safe to move on with.
-
Eric Myhre authored
(This is the new feature I just merged all those library version bumps to enable.)
-
Eric Myhre authored
-
Eric Myhre authored
Be aware I'm using the go mod files blindly, and actually testing and developing using the git submodules. I actually only wanted one update here -- the go-wish library bump, for a specific new feature I wanted for some work on another branch -- but as long as I'm here, apparently all these bumps break nothing.
-
Eric Myhre authored
Includes duplicate key checks that were previously missing. Overall, checks many more invariants.

There are now "*_ValueAssembler" types involved in both the 'free'/generic implementation and the codegen implementation: in both cases, it's needed for several tasks, mostly revolving around the "Done" method (or anything else that causes doneness, e.g. any of the scalar "Assign*" methods):

- to make sure the map assembly doesn't move on until the value assembly is finished! Need to do this to make it possible to maintain any other invariant over the whole tree!
- to do state machine keeping on the map assembler
- to do any integrity checks that the map itself demands
- and in some cases, to actually commit the entry itself (although in some cases, pointer funtimes at key finish time are enough).

The introduction of these '*_KeyAssembler' and '*_ValueAssembler' types is somewhat frustrating, because they add more memory, more vtable interactions (sometimes; in codegen, the compiler can inline them out), and just plain more SLOC. Particularly irritatingly, they require a pointer back to their parent assembler... even though in practice, they're always embedded *in* that same structure, so it's a predictable offset from their own pointer. But I couldn't easily see a way around that (shy of using unsafe or other extreme nastiness), so I'm just biting the bullet and moving on with that.

(I even briefly experimented with using type aliases to be able to decorate additional methods contextually onto the same struct memory, hoping that I'd be able to choose which type's set of methods I apply. (I know this is possible internally -- if one writes assembler, that's *what the calls are like*: you grab the function definition from a table of functions per type, and then you apply it to some memory!) This would make it possible to put all the child assembler functions on the same memory as the parent assembler that embeds them, and thus save us the cyclic pointers! But alas, no. Attempting to do this will run aground on "duplicate method" errors quite quickly. Aliases were not meant to do this.)

There are some new tests, in addition to benchmarks. 'plainMap', destined to be part of the next version of the 'ipldfree' package, is now complete, and passes tests. A couple of key tests are commented out, because they require a bump in the version of the go-wish library, and I'm going to sort that in a separate commit. They do pass otherwise, though.

Some NodeStyle implementations are introduced, and now those are the way to get builders for those nodes, and all the tests and benchmarks use them. The placeholder 'NewBuilder*' methods are gone. There are some open questions about what naming pattern to use for exporting symbols for NodeStyles. Some comments regard this, but it's a topic to engage in more earnest later.

Benchmarks have been renamed for slightly more consistency, and with an eye towards additional benchmarks we're likely to add shortly.

Some new documentation files are added! These are a bit ramshackle, because they're written as issues of note occur to me, but there are enough high-level rules that should be held the same across various implementations of Node and NodeAssembler that writing them in a doc outside the code began to seem wise.
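To make the parent-pointer arrangement above concrete, here's a minimal sketch (hypothetical names, heavily simplified: strings only, no key assembler, no error handling) of a value assembler that is embedded in its parent yet still carries a pointer back to it, so that finishing an entry can update the parent's bookkeeping:

```
package main

import "fmt"

type mapEntry struct {
	k string
	v string
}

// plainMapAssembler stands in for the parent assembler; its value assembler is
// embedded, so handing one out per entry costs no allocation.
type plainMapAssembler struct {
	entries  []mapEntry
	midValue bool
	va       plainMap_ValueAssembler
}

type plainMap_ValueAssembler struct {
	ma *plainMapAssembler // back-pointer to the parent it is embedded in
}

func (ma *plainMapAssembler) AssembleValue(k string) *plainMap_ValueAssembler {
	ma.midValue = true
	ma.entries = append(ma.entries, mapEntry{k: k})
	ma.va.ma = ma
	return &ma.va
}

// AssignString finishes the entry and "calls up" so the parent may progress.
func (va *plainMap_ValueAssembler) AssignString(v string) {
	ma := va.ma
	ma.entries[len(ma.entries)-1].v = v
	ma.midValue = false
}

func main() {
	ma := &plainMapAssembler{}
	ma.AssembleValue("k1").AssignString("v1")
	fmt.Println(ma.entries)
}
```

The back-pointer is the irritation described above: the child sits at a fixed offset inside its parent, but without unsafe tricks there's no portable way to recover the parent from that, so the pointer gets stored explicitly.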
-
- 13 Jan, 2020 7 commits
-
-
Eric Myhre authored
It does not. I turned benchtime up to 15s because in 1s runs, any signal was well below the threshold of noise. And even with larger sampling:

```
BenchmarkFeedGennedMapSimpleKeys-8           39906697   457 ns/op   400 B/op   5 allocs/op
BenchmarkFeedGennedMapSimpleKeysDirectly-8   39944427   455 ns/op   400 B/op   5 allocs/op
```

It's literally negligible. It's still possible we'll see more consequential results in the case of structs. But from this result? I'd say there are pretty good arguments made *against* having the extra method here.
-
Eric Myhre authored
More costly than I expected. New results:

```
BenchmarkBaselineNativeMapAssignSimpleKeys-8    5772964    206 ns/op   256 B/op    2 allocs/op
BenchmarkBaselineJsonUnmarshalMapSimpleKeys-8    470348   2349 ns/op   672 B/op   18 allocs/op
BenchmarkFeedGennedMapSimpleKeys-8              2484633    446 ns/op   400 B/op    5 allocs/op
```

Okay, so our wall clock time got worse, but is still flitting around 2x; not thrilling, but acceptable. But apparently we got a 5th alloc? Ugh.

I looked for typos and misunderstandings, but I think what I failed to understand is actually the internal workings of golang's maps. The new alloc is in line 343: `ma.w.m[ma.w.t[l].k] = &ma.w.t[l].v`. As scary as that line may look, it's just some pointer shuffles; there are no new memory allocations here, just pointers to stuff that we've already got on hand. Disassembly confirms this: there's no `runtime.newobject` or other allocation in the disassembly of the `flushLastEntry` function. Just this salty thing: `CALL runtime.mapassign_faststr(SB)`. Which can, indeed, allocate inside. I guess what's going on here is golang's maps don't allocate *buckets* until the first insertion forces them to? Today I learned...
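For the record, my reading of that line (field names guessed from the snippet above; this is not the project's actual code): 't' is a slice of entries that owns the values in insertion order, and 'm' is a native map that only holds pointers into that slice -- so the assignment itself constructs nothing new, but the map's lazily-grown buckets can still cost an alloc inside `runtime.mapassign_faststr`:

```
package main

import "fmt"

// wipMap is a guess at the shape behind 'ma.w': 't' owns the entries in
// insertion order, and 'm' is a native map holding only pointers into 't'.
type wipMap struct {
	m map[string]*string
	t []wipMapEntry
}

type wipMapEntry struct {
	k string
	v string
}

func (w *wipMap) flushLastEntry() {
	l := len(w.t) - 1
	// No new objects are constructed on this line -- it's pointer shuffles only --
	// but runtime.mapassign can still allocate internally, because a map's buckets
	// are grown lazily, on insertion.
	w.m[w.t[l].k] = &w.t[l].v
}

func main() {
	w := &wipMap{m: make(map[string]*string), t: make([]wipMapEntry, 0, 3)}
	w.t = append(w.t, wipMapEntry{k: "whee", v: "1"})
	w.flushLastEntry()
	fmt.Println(*w.m["whee"])
}
```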
-
Eric Myhre authored
Memory works like I think it does. That's good.

Added a third entry just to make some numbers odd and effects a wee bit more visible. Fixed the map to do allocs up front using the size hint; and, rather importantly, to actually return the embedded child assemblers. (Those are... kinda an important part of the whole design.) And that got things lined up where I hoped.

Current results:

```
BenchmarkBaselineNativeMapAssignSimpleKeys-8    5665226    199 ns/op   256 B/op    2 allocs/op
BenchmarkBaselineJsonUnmarshalMapSimpleKeys-8    519618   2334 ns/op   672 B/op   18 allocs/op
BenchmarkFeedGennedMapSimpleKeys-8              4212643    291 ns/op   192 B/op    4 allocs/op
```

This is what I'm gunning for. Those four allocs are:

- one for the builder;
- one for the node;
- and one each for the internal map and entry slice.

This is about as good as we can get. Everything's amortized. And we're getting ordered maps out of the deal, which is more featureful than the stdlib map. And the actual runtime is pretty dang good: less than 150% of the native map -- that's actually better than I was going to let myself hope for.

We're *not* paying for:

- extra allocs per node in more complex structures;
- extra allocs per builder in more complex structures;
- allocs per key nor per value in maps;
- and I do believe we're set up even to do generic map iteration without incurring interface boxing costs.

Nice.

I haven't begun to look for time optimizations at all yet; but now that the alloc count is right, I can move on to do that.

There's also one fairly large buggaboo here: the values don't actually get, uh, inserted into the map. That's... let's fix that.
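A rough sketch of where those four allocs come from under the size-hint approach (invented names; simplified to strings and stripped of all the assembler machinery):

```
package main

import "fmt"

// plainMapSketch stands in for the real node; plainMapBuilderSketch for its builder.
type plainMapSketch struct {
	m map[string]*string // lookup index into t
	t []plainMapEntrySketch
}

type plainMapEntrySketch struct {
	k string
	v string
}

type plainMapBuilderSketch struct {
	w *plainMapSketch
}

// BeginMap does all the allocation up front, driven by the size hint, so that
// feeding keys and values afterwards touches only already-reserved memory.
func (b *plainMapBuilderSketch) BeginMap(sizeHint int) *plainMapSketch {
	b.w = &plainMapSketch{ // alloc: the node
		m: make(map[string]*string, sizeHint),       // alloc: the internal map
		t: make([]plainMapEntrySketch, 0, sizeHint), // alloc: the entry slice
	}
	return b.w
}

func main() {
	b := &plainMapBuilderSketch{} // alloc: the builder itself (amortizable by reuse)
	n := b.BeginMap(3)
	n.t = append(n.t, plainMapEntrySketch{"a", "1"}) // no further allocs: capacity reserved
	fmt.Println(len(n.t), cap(n.t))
}
```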
-
Eric Myhre authored
(Thank goodness. Been in theoryland for a while.)

There's somewhat more content here than necessary for the benchmark that's presently runnable; right now only the Map_K_T implementation is targeted. I want benchmarks of things with complex keys in codegen, and also benchmarks of the runtime/generic/free impls, soon, so they can all be compared.

There's also a quick fliff of stdlib map usage in a wee benchmark to give us some baselines... And there's also a quick fliff of stdlib json unmarshalling for the same reason. It's not fair, to be sure: the json thing is doing work parsing, and allocating strings (whereas the other two get to use compile-time const strings)... but it sure would be embarrassing if we *failed* to beat that speed, right? So it's there to keep it in mind.

Some off-the-cuff results:

```
BenchmarkBaselineNativeMapAssignSimpleKeys-8    6815284    175 ns/op   256 B/op    2 allocs/op
BenchmarkBaselineJsonUnmarshalMapSimpleKeys-8    724059   1644 ns/op   608 B/op   14 allocs/op
BenchmarkFeedGennedMapSimpleKeys-8              2932563    410 ns/op   176 B/op    8 allocs/op
```

This is pretty good. If we're *only* half the speed of the native maps... that's actually really, really good, considering we're doing more work to keep things ordered, to say nothing of all the other interface support efforts we have to do.

But 8 allocs? No. That was not the goal. This should be better. Time to dig...
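The baselines themselves are nothing exotic; a sketch of roughly what they measure (keys and package name invented here, not the repo's actual benchmark bodies):

```
package impls_test

import (
	"encoding/json"
	"testing"
)

var sink interface{} // keeps results live so the compiler can't elide the work

// Assigning three entries into a native golang map: the lower bound we compare against.
func BenchmarkBaselineNativeMapAssignSimpleKeys(b *testing.B) {
	for i := 0; i < b.N; i++ {
		m := make(map[string]string, 3)
		m["whee"] = "1"
		m["woot"] = "2"
		m["waga"] = "3"
		sink = m
	}
}

// Getting the same map by unmarshalling JSON: it also pays for parsing and string
// allocation, so it's an unfair comparison -- but a humbling one to stay ahead of.
func BenchmarkBaselineJsonUnmarshalMapSimpleKeys(b *testing.B) {
	for i := 0; i < b.N; i++ {
		var m map[string]string
		if err := json.Unmarshal([]byte(`{"whee":"1","woot":"2","waga":"3"}`), &m); err != nil {
			b.Fatal(err)
		}
		sink = m
	}
}
```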
-
Eric Myhre authored
Per the comment in the previous commit, it's time to aim towards executable benchmarks again.
-
Eric Myhre authored
We really, truly cannot get away from having a KeyAssembler option. That much seems clear. So that's moved up into the interface we're working with, and one of the alternate drafts is dropped.

The comments around the NodeBuilder.Reset method are resolved: the parts about building keys are removed, as we've concluded pretty solidly that those were indeed about to take a wrong turn. (Resetting a builder is still a useful thing overall if you're going to make a bunch of nodes of a single type: the node allocations will still happen, but the builder being resettable should be pretty much a giant Duff's device action, and saves one of the two mega-amortized allocations in a codegen'd world, which is better than not.)

Dropped the "demo" function. This isn't the place for it, and that demo had also gone awry.

Some new questions about some of the other "simple" methods on MapAssembler and ListAssembler. Are they useful shortcuts (that actually provide a performance benefit if present directly on the assembler interface)? Or are they trivially recreatable from a freestanding function operating on the interface, and thus better done that way, to save SLOC in codegen outputs?

I think I'm going to start needing benchmarks to come to useful conclusions on that last question pair, so... it's time to move forward on writing some more implementation code matching these interfaces. (Unfortunately, I think there are at least... four different implementations needed to get a good eye on things, since the codegen path vs the runtime fully generic interfaces path have *very* different characteristics, and then we need each of the styles of interest implemented for each. But that's just what we'll have to do.)
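For illustration, a toy sketch of the builder-reuse pattern that Reset is meant to enable (hypothetical types; the real interfaces return ipld.Node, handle errors, and so on):

```
package main

import "fmt"

// stringNode and stringBuilder are invented stand-ins: each node still costs one
// allocation, but the builder itself is allocated once and wiped cheaply between uses.
type stringNode struct{ x string }

type stringBuilder struct {
	w *stringNode
}

func (b *stringBuilder) AssignString(s string) { b.w = &stringNode{x: s} }
func (b *stringBuilder) Build() *stringNode    { return b.w }
func (b *stringBuilder) Reset()                { *b = stringBuilder{} } // cheap wipe, no alloc

func main() {
	b := &stringBuilder{} // one builder allocation, reused for every node below
	var nodes []*stringNode
	for _, s := range []string{"a", "b", "c"} {
		b.Reset()
		b.AssignString(s)
		nodes = append(nodes, b.Build())
	}
	fmt.Println(len(nodes), nodes[0].x)
}
```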
-
Eric Myhre authored
This is one of those situations where I'd really, really, really love to be able to state default values for method params. And I don't see who could possibly be hurt by such a language feature, so I'm sad we don't have it.
-
- 10 Jan, 2020 3 commits
-
-
Eric Myhre authored
Hopefully to become final. I'm going to try to copy the contents out to the root dir to become the new interfaces at the end of this.

I copied most of the node.go file, and it's probably best to read that as a diff from the live one. Mostly, it adds the NodeStyle type and the documentation around that. The nodebuilder.go file still devolved into some commentstorms and incomplete parts, but is probably homing in on minimally-unsatisfying compromises.

I used a couple of symlinks to "copy" the types for Path, etc., from the live main package, just to get more things compiling together. Those features aren't showing any cracks and aren't up for review due to any transitive interactions that are in question, so this treatment is sufficient to move things along.

Links aren't present in the package yet. I may or may not do some reconsiderations on those to match the "style" naming pattern, as the 'nodestyle' research package already briefly probed; or, it might be out of scope and something for later (if at all). This is presently a compile error until I handle it one way or the other.

The long-story-made-short of the remaining commentstorm is: handling map keys generically just absolutely sucks. There is basically no way to do it without massive allocation penalties for boxing. Any and all alternative tradeoffs I can think of are baroque and not pleasing. I'm still hoping for another idea to strike, but at this point it's looking like it's time to bite some bullet (any bullet) and move on.

Signed-off-by: Eric Myhre <hash@exultant.us>
-
Eric Myhre authored
Signed-off-by: Eric Myhre <hash@exultant.us>
-
Eric Myhre authored
Not all of these compile. I'm going to try to blaze through this and the rest of the experiments until we get to a result quickly, and probably then promptly remove most of these directories. Meanwhile, since we're under a path prefix beginning with an underscore, build and test commands using "./..." don't trip here.

There's at least one thing seriously wrong with each of these drafts, and each of them also devolves into comments rather than completion, but I think they're getting closer to circling around something useful.

Signed-off-by: Eric Myhre <hash@exultant.us>
-
- 07 Dec, 2019 6 commits
-
-
Eric Myhre authored
-
Eric Myhre authored
And expand on the need for both general and specific methods, even though they both can accomplish the same goals.
-
Eric Myhre authored
-
Eric Myhre authored
See the comment blocks at the top of the file for reasons why. See the comment blocks at the bottom for some remarks on the outcomes. tl;dr it's certainly *differently* frustrating than the current interface, if nothing else! Better? I dunno. Maybe!
-
Eric Myhre authored
I'm still scratching out prototypes in another directory to make it easier to focus on the details I'm interested in, rather than get wrapped up with the kerfuffling details of the existing full interfaces... as well as easier to try out different interfaces. And at some point, of course, we want *codegen* for these things, while what I'm plunking about with here is a sketch of the expected *output*. So this is a large step removed from the final code... but it's easier to sketch this way and "imagine" where the templating can later appear.

This solution approach seems to be going largely well. And we have some tests which count allocs very carefully to confirm the desired result!

Still a lot to go. There are large hunks of comments and unresolved details still in this commit; I just need a checkpoint. Some things around creation of maps are still looking tricky to support optimizations for. Research needed there. A comment hunk describes the questions, but so far there's no tradeoff-free answer.

Perhaps most usefully: here's also a checkpoint of the "HACKME_memorylayout" document! It also still has a few todos, but it's the most comprehensive block of prose I've got in one place so far. Hopefully it will be useful reading for anyone else wanting to get up to speed on the in-depth "why" -- it's awfully, awfully hard to infer this kind of thing from the eschatology of just observing where pointers are in a finished product!

Signed-off-by: Eric Myhre <hash@exultant.us>
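As an aside, the "count allocs very carefully" trick is easy to do with the standard library; a generic illustration (not the project's actual test code):

```
package impls_test

import "testing"

// One way to pin an allocation count: testing.AllocsPerRun averages allocations
// over many runs of the callback, so a regression shows up as a test failure.
func TestMapBuildAllocCount(t *testing.T) {
	allocs := testing.AllocsPerRun(100, func() {
		m := make(map[string]string, 3)
		m["a"], m["b"], m["c"] = "1", "2", "3"
		_ = m
	})
	if allocs > 2 {
		t.Errorf("building a small native map should cost at most 2 allocs, got %v", allocs)
	}
}
```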
-
Eric Myhre authored
Feat/add code gen disclaimer
-
- 03 Dec, 2019 4 commits
-
-
Eric Myhre authored
As you can see from the directory name ("multihoisting"), when I began exploring this topic, I didn't know they were called that. :/ This thus turned out to very much be one of those occasions where knowing the *right two words* to put into a search engine would've saved a fairly enormous number of hours. Once I finally found the right term, some perfectly lovely documentation appeared. But alas.

(Like the previous commits, this stuff is coming from a while ago; roughly Oct 25. It uncovers a lot of stuff that gets us much closer to being able to make correct and performant designs to minimize and amortize the number of allocations that will be required to make our node trees work in codegen (though with that said, there will also still be plenty of details in need of refinement after this, too).)

Still working on a "HACKME_memorylayout.md" doc, which will appear in the codegen directories in the near future and contain a summary of these learnings and how they balance against other design concerns. Meanwhile, a couple of other notes in their rough form:

- basically, don't use non-pointer methods.
- turns out value receivers tend to mean "copy it", and pointer receivers *don't* mean "heap alloc it" (they just mean "consider escape, maybe").
- so overall you're tying the compiler's hands when you use a value receiver, and you're *not* when you use a pointer receiver.
- taking pointers to things already being passed around in pointer form or already having heap-escaped seems to be free.
- it might demand something become heap alloc if it wasn't already...
- but this turns out to be fairly amortizable.
- because if you write nearly-everything to be pointers, then, there you go.
- and if you're lucky and something is shortlived (provably doesn't escape), then even whole stacks of ptr-receiver methods will still probably inline and collapse to no heap action.
- the analysis on this seems to reach much further than I thought it would.
- `-gcflags '-m -m'` was extremely revealing on this point.
- tl;dr:
- more pointers, not less, in all functions and passing;
- but do have one struct that's the Place of Residence of the data, without pointers.
- this pair of choices probably leads to the best outcomes.
- hokay so. applied to topic: two sets of embeds.
- builders might as well have their own big slab of embed too.
- logically nonsense to embed them, but incredibly rare you're not gonna use the memory, so!

And a couple important incantations, which can help understand What's Really Going On Here in a bunch of ways:

- `-gcflags '-S'` -- gives you assembler dump
- `-gcflags '-m'` -- gives you escape analysis (minimally informative and hard to read)
- `-gcflags '-m -m'` -- gives you radically more escape analysis, in stack-form (actually useful!!)
- `-gcflags '-l'` -- disables inlining!

I learned about the '-m -m' thing by grepping the Go compiler source, incidentally. It's a wildly under-documented feature. No joke: I encountered it via doing a `grep "Debug['m']"` in go/src; there is currently no mention of it in `go tool compile -d help`. Once I found the magic string, and could submit it to search engines, I started to find a few other blogs which mention it... but I'd seen none of them (and had not found queries that turned them up) until having this critical knowledge already in-hand. >:I So chalking up another score for "the right words would've been nice".

Performance work is fun!
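A toy illustration of the receiver guidance above -- build it with `go build -gcflags '-m -m'` to watch the escape analysis do its thing (the type here is invented; the flags are the real ones listed above):

```
package main

import "fmt"

// node is an invented type, just big enough that copying it isn't free.
type node struct {
	x [4]int64
}

// Value receiver: every call copies the receiver, whether or not the copy is
// needed; the compiler's hands are tied.
func (n node) SumByValue() int64 {
	return n.x[0] + n.x[1] + n.x[2] + n.x[3]
}

// Pointer receiver: "consider escape, maybe", not "heap alloc it".  If the call
// chain provably doesn't leak the pointer, everything can stay on the stack and
// the whole call can inline away.
func (n *node) SumByPointer() int64 {
	return n.x[0] + n.x[1] + n.x[2] + n.x[3]
}

func main() {
	n := node{x: [4]int64{1, 2, 3, 4}}
	fmt.Println(n.SumByValue(), n.SumByPointer())
}
```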
-
Eric Myhre authored
This is... a set of changes of mixed virtue. The codegen won't stay in this state for long. Things have been learned since this was drafted. This diff uses value methods heavily, and those are... mostly a bad idea, big picture. But it's got some incremental progress on other things, so I'm going to fold it into history rather than drop it.
-
Eric Myhre authored
It's been, broadly speaking, "tricky" to plan out and design some of this project. It's been doubly tricky to do it while understanding the holistic performance implications of the detailed choices. And while "premature optimization, evil", etcetera... I've gone through the performance rodeo in this vicinity at least once before (in the form of the refmt libraries): and resultingly, I'm going to claim a slightly-better-than-zero intuition for what's premature and what's utterly critical to keep track of as early as possible, lest ye be doomed to a rewrite after learning things the hard way. (Arguably, half this codebase **is** the rewrite after learning things the hard way. But I digress.)

So: to combat this "trickiness", I've started writing a lot of "research" code and experimental benchmarks. This is still certainly fraught with peril -- in fact, one of the major things I've learned overall is just *how* dangerously misleading microbenchmarks can be (and to combat this, I've now started regularly reading the assembler output to make sure the optimizer is doing exactly what I think it is, neither more nor less) -- but it's much more informative than *not* doing it and trying to suss out the mess later once you've built a whole system.

And that's what it's time to start committing. (I should've started a while ago, honestly.) This "rsrch" directory will get some more content momentarily in coming commits, because there's a *lot* of stuff on my localhost. Some of it was so preliminary I'm not going to bother to preserve it; a lot of things are potentially pretty interesting, as educational and archeological material, if nothing else: those I'm going to commit. (It's possible all this will end up `git rm`'d again at some time in the future, too. But honestly, it's darn hard to remember "why didn't you just do X?" sometimes. Notes need to get committed *somewhere*, at least briefly enough that it's possible to dig them out again.)

So! To start: here are some research benchmarks into what strongly-natively-typed builders might look like in our codegen outputs. These are from about Oct 13th, as best I can suss out from localhost mtimes. I think most of these concepts are ones I'm (frustratingly) backing away from now, because I've learned a *lot* more about performance in the meanwhile, and I don't think these are gonna do well. But the worked exploration into ergonomics is still potentially interesting and worth review!

Signed-off-by: Eric Myhre <hash@exultant.us>
-
Eric Myhre authored
The readme hadn't seen an update for quite a long time, so it was just overdue for it. I think the comments about where to expect changes at this point are practically accurate and now are also stated.

The Node interface, despite being so central to the entire system, was missing a doc block on the type as a whole! It now has one. (I'd like to continue to iterate on this and have it do more linking out to ipld docs for some of the more conceptual parts, but we've got a lot of work to do on that content as well, so, for now just doing it here is better than not having it.)

Signed-off-by: Eric Myhre <hash@exultant.us>
-
- 13 Nov, 2019 1 commit
-
-
Eric Myhre authored
fix(schema/gen): return value not reference
-
- 08 Nov, 2019 1 commit
-
-
hannahhoward authored
Add a disclaimer to the header for generated code to prevent linters from complaining
-
- 07 Nov, 2019 1 commit
-
-
hannahhoward authored
Map/List should return an actual node type, not a pointer to a node. Otherwise, it breaks when nested in another map/list type.
-
- 25 Oct, 2019 4 commits
-
-
Eric Myhre authored
Using pointers for optional or nullable fields forces a lot of things to need allocations on the heap. And maybe that's not so good. So here's a very different take. 'Maybe' structures for any optional or nullable fields store the extra information about null or absent values. This takes more linear memory... but should tend to result in fewer allocs. It's also much easier to code against, I suspect. (And we expect it's not unreasonable for users to add more methods in the generated output package, which would mean we should be believable to code against.)
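A minimal sketch, with invented names, of what such a 'Maybe' wrapper might look like for an optional string field:

```
package main

import "fmt"

// Maybe and MaybeString are invented names sketching the idea: the value lives
// inline next to a small state enum, instead of behind a nilable pointer.
type Maybe uint8

const (
	Maybe_Absent Maybe = iota
	Maybe_Null
	Maybe_Value
)

type MaybeString struct {
	Maybe Maybe
	Value string // only meaningful when Maybe == Maybe_Value
}

type Person struct {
	Name     string
	Nickname MaybeString // optional field: more linear memory, no pointer, no extra alloc
}

func main() {
	p := Person{Name: "ada", Nickname: MaybeString{Maybe: Maybe_Value, Value: "countess"}}
	if p.Nickname.Maybe == Maybe_Value {
		fmt.Println(p.Nickname.Value)
	}
}
```

Whether present, null, or absent, the field costs the same flat memory, and reading it never chases a pointer.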
-
Eric Myhre authored
feat(schema/gen): make emit methods public
-
hannahhoward authored
Make emit methods public to allow a preliminary method for generating code outside this module
-
Eric Myhre authored
-