1. 01 Dec, 2020 7 commits
    • Move the pretty package out of the codec subtree. · 1d1fc495
      Eric Myhre authored
      It still uses the codec/tools package, but it's very clearly not a
      codec itself and shouldn't be grouped like one, despite the shared
      implementation details.
      
      Renamed methods to "Stringify".
      
      Stringify no longer returns errors; those errors only arise from
      the writer erroring, and writing into an in-memory buffer can't error.
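
      For illustration, here's a minimal sketch of that reasoning (the Print
      function and its signature are assumed for the sketch, not the package's
      real API): a strings.Builder never fails to write, so a Stringify wrapper
      around a writer-based printer has no error left to return.

        package pretty // illustrative sketch only

        import (
            "io"
            "strings"

            ipld "github.com/ipld/go-ipld-prime"
        )

        // Print stands in for the writer-based printer; the real signature may differ.
        func Print(w io.Writer, n ipld.Node) error {
            _, err := io.WriteString(w, "<node>") // real walk-and-print logic elided
            return err
        }

        // Stringify wraps Print.  A strings.Builder's Write never returns an error,
        // so the only error source is gone and Stringify can have a clean signature.
        func Stringify(n ipld.Node) string {
            var sb strings.Builder
            if err := Print(&sb, n); err != nil {
                panic(err) // unreachable for an in-memory writer
            }
            return sb.String()
        }
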
    • Introduce pretty printing tool. · 32a1ed04
      Eric Myhre authored
      I think this, or a variant of it, may be reasonable to rig up as a
      Stringer on the basicnode types, and recommend for other Node
      implementations to use as their Stringer too.
      
      It's a fairly verbose output: I'm mostly aiming to use it in examples.
      
      Bytes in particular are fun: I decided to make them use the hex.Dump
      format.  (Why not?)
      
      I've put this in a codec sub-package, because in some ways it feels
      like a codec -- it's something you can apply to any node, and it
      streams data out to an io.Writer -- but it's also worth noting it's
      not meant to be a multicodec or generally written with an intention
      of use anywhere outside of debug printf sorts of uses.
      
      The codectools package, although it only has this one user, is a
      reaction to previous scenarios where I've wanted a quick debug method
      and desperately wanted something that gives me reasonable quoted
      strings... without reaching for a json package.  We'll see if it's
      worth it over time; I'm betting yes, but not with infinite confidence.
      (This particular string escaping function also has the benefit of
      encoding even non-utf-8 strings without loss of information -- which
      is noteworthy, because I've recently noticed JSON _does not_; yikes.)
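
      As a point of comparison (this is the standard library, not the commit's
      escaping function): Go's %q verb keeps an invalid UTF-8 byte intact as an
      escape, while encoding/json silently swaps it for U+FFFD.

        package main

        import (
            "encoding/json"
            "fmt"
        )

        func main() {
            s := "ok\xffoops" // contains a byte that is not valid UTF-8

            // %q escapes the raw byte, so no information is lost:
            fmt.Printf("%q\n", s) // "ok\xffoops"

            // encoding/json replaces the invalid byte with the replacement
            // character, so the original byte is unrecoverable:
            j, _ := json.Marshal(s)
            fmt.Println(string(j)) // "ok\ufffdoops"
        }
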
    • Revert "Make the cidlink.Link type only usable as a pointer." · 96aa55ea
      Eric Myhre authored
      Trying to make CIDs only usable as a pointer would be nice from a
      consistency perspective, but has other consequences.
      
      It's easy to forget this (and I apparently just did), but...
      We often use link types as map keys.  And this is Important.
      
      That means trying to handle CIDs as pointers leads to nonsensical
      results: pointers are technically valid as a golang map key, but
      they don't "do the right thing" -- the equality check ends up operating
      on the pointer rather than on the data.  This is well-defined,
      but generally useless for these types in context.
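
      A tiny demonstration of that map-key behaviour, with a stand-in link type
      rather than the real cidlink.Link:

        package main

        import "fmt"

        type Link struct{ Cid string } // stand-in for a CID-bearing link type

        func main() {
            // Values compare by content, so equal links collapse onto one key:
            byValue := map[Link]int{}
            byValue[Link{"bafy...a"}]++
            byValue[Link{"bafy...a"}]++
            fmt.Println(len(byValue)) // 1

            // Pointers compare by identity, so two equal links stay distinct keys:
            byPointer := map[*Link]int{}
            byPointer[&Link{"bafy...a"}]++
            byPointer[&Link{"bafy...a"}]++
            fmt.Println(len(byPointer)) // 2 -- well-defined, but useless here
        }
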
    • Make the cidlink.Link type only usable as a pointer. · 55dcf0c3
      Eric Myhre authored
      As the comments in the diff say: it's a fairly sizable footgun for
      users to need to consider whether they expect the pointer form or
      the bare form when inspecting what an `ipld.Link` interface contains:
      so, let's just remove the choice.
      
      There's technically no reason for the Link.Load method to need to be
      attached to the pointer receiver other than removing this footgun.
      From the other side, though, there's no reason *not* to make it
      attached to the pointer receiver, because any time a value is assigned
      to an interface type, it necessarily heap-escapes and becomes a pointer
      anyway.  So, making it unconditional and forcing the pointer to be
      clear in the user's hands seems best.
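
      The mechanism at work, sketched with stand-in types (not the real
      ipld.Link or cidlink.Link): with Load on the pointer receiver, only the
      pointer form satisfies the interface, so the bare form can't sneak in.

        package main

        import "fmt"

        type LinkLoader interface{ Load() string } // stand-in interface

        type Link struct{ data string } // stand-in for cidlink.Link

        // Load has a pointer receiver, so it is only in *Link's method set.
        func (l *Link) Load() string { return l.data }

        func main() {
            var ok LinkLoader = &Link{"pointer form"} // compiles
            fmt.Println(ok.Load())

            // var nope LinkLoader = Link{"bare form"} // compile error:
            // Link does not implement LinkLoader (Load has a pointer receiver)
        }
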
    • Tweak to alloc counting tests. · ca680715
      Eric Myhre authored
      I dearly wish this wasn't such a dark art.
      But I really want these tests, too.
    • Revamped DAG-JSON decoder and unmarshaller. · 53fb23e4
      Eric Myhre authored
      This is added in a new "dagjson2" package for the time being,
      but aims to replace the current dagjson package entirely,
      and will take over that namespace when complete.
      
      So far only the decoder/unmarshaller is included in this first commit,
      and the encoder/marshaller is still coming up.
      
      This revamp is making several major strides:
      
      - The decoding system is cleanly separated from the tree building.
      
      - The tree building reuses the codectools token assembler systems.
        This saves a lot of code, and adds a lot of consistency.
        (By contrast, the older dagjson and dagcbor packages had similar
        outlines, but didn't actually share much code; this was annoying
        to maintain, and meant improvements to one needed to be ported
        to the other manually.  No more.)
      
      - The token type used by this codectools system is more tightly
        associated with the IPLD Data Model.  In practice, what this means
        is links are parsed at the same stage as the rest of parsing,
        rather than being added on in an awkward "parse 1.5" stage.
        This results in much less complicated code than the old token
        system from refmt which the older dagjson package leans on.
      
      - Budgets are more consistently woven through this system.
      
      - The JSON decoder components are in their own sub-package,
        and should be relatively reusable.  Some features like string parsing
        are exported in their own right, in addition to being accessible
        via the full recursive supports-everything decoders.
        (This might not often be compelling, but -- maybe.  I myself wanted
        more reusable access to fine-grained decoder and encoder components
        when I was working on the "JST" experiment, so, I'm scratching my
        own itch here if nothing else.)
        End-users should mostly not need to see this, but library
        implementors might appreciate it.
      
      - The codectools scratch.Reader type is used in all the decoder APIs.
        This results in good performance for either streaming io.Reader or
        already-in-memory bytes slices as data sources, and does it without
        doubling the number of exported functions we need (or pushing the
        need for feature detection into every single exported function).
      
      - The configuration system for the decoder is actually in this repo,
        and it's sanely and clearly settable while also being optional.
        Previously, if you wanted to configure dagjson, you'd have to reach
        into the refmt json package for *those* configuration structs,
        which was workable but just very confusing and gave the end-user a
        lot of different places to look before finding what they need.
      
      - The implementations are very mindful of memory allocation efficiency.
        Almost all of the component structures carefully utilize embedding:
        ReusableUnmarshaller embeds the Decoder; the Decoder embeds the
        scratch.Reader as well as the Token it yields; etc.
        (There's a brief sketch of this layout at the end of this message.)
        This should result in overall being able to produce fully usable
        codecs with a minimal number of allocations -- much fewer than the
        older implementations required.
      
      Some benefits have yet to be realized, but are on the map now:
      
      - The new Token structure also includes space for position and
        progress tracking, which we want to use to produce better errors.
        (This needs more implementation work, still, though.)
      
      - There are several configuration options for strictness.
        These aren't all backed up by the actual implementation yet
        (I'm porting over old code fast enough to write a demo and make
        sure the whole suite of interfaces works; it'll require further
        work, especially on this strictness front, later), but
        at the very least these are now getting documented,
        and several comment blocks point to where more work is needed.
      
      - The new multicodec registry is alluded to in comments here, but
        isn't implemented yet.  This is part of the long game big goal.
        The aim is to, by the end of this revamp, be able to do something
        about https://github.com/ipld/go-ipld-prime/issues/55 , and approach
        https://gist.github.com/warpfork/c0200cc4d99ee36ba5ce5a612f1d1a22 .
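
      A rough sketch of the embedding layout mentioned in the allocation
      bullet above (names illustrative, not the real dagjson2/codectools types):

        package sketch

        // scratchReader stands in for codectools' scratch.Reader.
        type scratchReader struct {
            buf []byte
            pos int
        }

        // Token holds the decoded token plus, eventually, position info.
        type Token struct {
            // kind and scalar value fields elided
        }

        // Decoder embeds both, so the reader state and the yielded Token
        // live inside the Decoder's own allocation.
        type Decoder struct {
            scratchReader
            Token
        }

        // ReusableUnmarshaller embeds the Decoder, so one value covers the
        // whole decode stack and can be reused across many calls.
        type ReusableUnmarshaller struct {
            Decoder
            // budgets, strictness options, etc.
        }
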
    • Add scratch.Reader tool, helpful for decoders. · 3040f082
      Eric Myhre authored
      The docs in the diff should cover it pretty well.
      It's a reader-wrapper that does a lot of extremely common
      buffering and small-read operations that parsers tend to need.
      
      This emerges from some older generation of code in refmt with similar purpose:
      https://github.com/polydawn/refmt/blob/master/shared/reader.go
      Unlike those antecedents, this one is a single concrete implementation,
      rather than using interfaces to allow switching between the two major modes of use.
      This is surely uglier code, but I think the result is more optimizable.
      
      The tests include aggressive checks that operations take exactly as
      many allocations as planned -- and mostly, that's *zero*.
      
      In the next couple of commits, I'll be adding parsers which use this.
      
      Benchmarks are still forthcoming.  My recollection from the previous
      bout of this in refmt was that microbenchmarking this type wasn't
      a great use of time, because when we start benchmarking codecs built
      *upon* it, and especially, when looking at the pprof reports from that,
      we'll see this reader showing up plain as day there, and nicely
      contextualized... so, we'll just save our efforts for that point.
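
      The core idea, sketched (method names here are illustrative; the real
      scratch.Reader API is richer and more careful about copying):

        package sketch

        import "io"

        // Reader is one concrete type fed either from an io.Reader or from
        // bytes already in memory, so decoders need only a single code path.
        type Reader struct {
            r   io.Reader // nil when reading straight from buf
            buf []byte    // whole input (slice mode) or scratch space (reader mode)
            pos int
        }

        func (z *Reader) InitSlice(b []byte)     { *z = Reader{buf: b} }
        func (z *Reader) InitReader(r io.Reader) { *z = Reader{r: r} }

        // Readn returns the next n bytes: a zero-copy subslice in slice mode,
        // a refilled scratch buffer in reader mode.
        func (z *Reader) Readn(n int) ([]byte, error) {
            if z.r == nil {
                if z.pos+n > len(z.buf) {
                    return nil, io.ErrUnexpectedEOF
                }
                out := z.buf[z.pos : z.pos+n]
                z.pos += n
                return out, nil
            }
            if cap(z.buf) < n {
                z.buf = make([]byte, n)
            }
            z.buf = z.buf[:n]
            _, err := io.ReadFull(z.r, z.buf)
            return z.buf, err
        }
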
  2. 14 Nov, 2020 7 commits
    • Add position tracking fields to Token. · 1110155d
      Eric Myhre authored
      These aren't exercised yet -- and this is accordingly still highly
      subject to change -- but so far in developing this package, the pattern
      has been "if I say maybe this should have X", it's always turned out
      it indeed should have X.  So let's just do that and then try it out,
      and have the experimental code instead of the comments.
    • Token.Normalize utility method. · a8995f6f
      Eric Myhre authored
      Useful for tests that do deep equality tests on structures.
      
      Same caveat about current placement of this method as in the previous
      commit: this might be worth detaching and shifting to a 'codectest'
      or 'tokentest' package.  But let's see how it shakes out.
    • Extract and export StringifyTokenSequence utility. · d3511334
      Eric Myhre authored
      This is far too useful in testing to reproduce in each package that
      needs something like it.  It's already shown up as desirable again
      as soon as I started implementing even a little bit of one codec
      tokenizer, and that's gonna keep happening.
      
      This might be worth moving to some kind of a 'tokentest' or
      'codectest' package instead of cluttering up this one, but...
      we'll see; I've got a fair amount more code to flush into commits,
      and after that we can reshake things and see if packages settle
      out differently.
    • Add budget parameter to TokenReader. · 33fb7d98
      Eric Myhre authored
      There were already comments about how this would be "probably"
      necessary; I don't know why I wavered, it certainly is.
    • Type the TokenKind consts correctly. · 72793f26
      Eric Myhre authored
      You can write a surprising amount of code where the compiler will shrug
      and silently coerce things for you.  Right up until you can't.
      (Some test cases that'll be coming down the commit queue shortly
      happened to end up checking the type of the constants, and, well.
      Suddenly this was noticeable.)
    • Drop earlier design comments. · 2143068c
      Eric Myhre authored
      We definitely did make a TokenWalker, heh.
      
      The other naming marsh (heh, see what I did there?) is still unresolved
      but can stay unresolved a while longer.
    • Fresh take on codec APIs, and some tokenization utilities. · 1da7e2dd
      Eric Myhre authored
      The tokenization system may look familiar to refmt's tokens -- and
      indeed it surely is inspired by and in the same pattern -- but it
      hews a fair bit closer to the IPLD Data Model definitions of kinds,
      and it also includes links as a token kind.  Presence of link as
      a token kind means if we build codecs around these, the handling
      of links will be better and more consistently abstracted (the
      current dagjson and dagcbor implementations are instructive for what
      an odd mess it is when you have most of the tokenization happen
      before you get to the level that figures out links; I think we can
      improve on that code greatly by moving the barriers around a bit).
      
      I made both all-at-once and pumpable versions of both the token
      producers and the token consumers.  Each are useful in different
      scenarios.  The pumpable versions are probably generally a bit slower,
      but they're also more composable.  (The all-at-once versions can't
      be glued to each other; only to pumpable versions.)
      
      Some new and much reduced contracts for codecs are added,
      but not yet implemented by anything in this commit.
      The comments on them are lengthy and detail the ways I'm thinking
      that codecs should be (re)implemented in the future to maximize
      usability and performance and also allow some configurability.
      (The current interfaces "work", but irritate me a great deal every
      time I use them; to be honest, I just plain guessed badly at what
      the API here should be the first time I did it.  Configurability
      should be easy to *not* engage in, but also easier when you do
      (and in particular, not require reaching into *another* library's
      packages to do it!).)  More work will be required to bring this
      to fruition.
      
      It may be particularly interesting to notice that the tokenization
      systems also allow complex keys -- maps and lists can show up as the
      keys to maps!  This is something not allowed by the data model
      (for, dare I say, obvious reasons)... but it's something that's possible
      at the schema layer (e.g. structs with representation strategies that
      make them representable as strings can be used as map keys), so,
      these functions support it.
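
      A skeletal sketch of that shape (the real codectools types have more
      fields and possibly different spellings); the important part is that
      Link is a first-class token kind:

        package sketch

        import ipld "github.com/ipld/go-ipld-prime"

        // TokenKind tracks the IPLD Data Model kinds -- including Link.
        type TokenKind uint8

        const (
            TokenKind_MapOpen TokenKind = iota
            TokenKind_MapClose
            TokenKind_ListOpen
            TokenKind_ListClose
            TokenKind_Null
            TokenKind_Bool
            TokenKind_Int
            TokenKind_Float
            TokenKind_String
            TokenKind_Bytes
            TokenKind_Link
        )

        // Token is the unit passed between token producers and consumers.
        type Token struct {
            Kind  TokenKind
            Bool  bool
            Int   int64
            Float float64
            Str   string
            Bytes []byte
            Link  ipld.Link // set when Kind == TokenKind_Link: no "parse 1.5" stage
        }

        // TokenReader is the pumpable producer form: each call yields one token.
        // The all-at-once forms can be built by looping over one of these.
        type TokenReader func() (*Token, error)
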
  3. 21 Oct, 2020 2 commits
  4. 20 Oct, 2020 1 commit
  5. 24 Sep, 2020 1 commit
  6. 10 Sep, 2020 1 commit
    • all: don't use buffers where readers suffice · 8e26c7e2
      Daniel Martí authored
      Buffers are not a good option for tests if the other side expects a
      reader. Otherwise, the code being tested could build assumptions around
      the reader stream being a single contiguous chunk of bytes, such as:
      
      	_ = r.(*bytes.Buffer).Bytes()
      
      This kind of hack might seem unlikely, but it's an easy mistake to make,
      especially with APIs like fmt which automatically call String methods.
      
      With bytes.Reader and strings.Reader, the types are much more
      restricted, so the tests need to be more faithful.
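
      A small example of the pattern, using only the standard library
      (decodeSomething stands in for whatever code takes the io.Reader):

        package readers

        import (
            "bytes"
            "io"
            "strings"
            "testing"
        )

        // decodeSomething stands in for the function under test.
        func decodeSomething(r io.Reader) error {
            _, err := io.ReadAll(r)
            return err
        }

        func TestDecode(t *testing.T) {
            // These satisfy io.Reader and nothing more interesting, so the code
            // under test can't type-assert its way back to the raw bytes:
            if err := decodeSomething(strings.NewReader(`{"a":1}`)); err != nil {
                t.Fatal(err)
            }
            if err := decodeSomething(bytes.NewReader([]byte{0x01, 0x02})); err != nil {
                t.Fatal(err)
            }
            // A *bytes.Buffer would also work, but invites the shortcut the
            // commit message warns about: _ = r.(*bytes.Buffer).Bytes()
        }
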
  7. 25 Aug, 2020 2 commits
  8. 29 Jun, 2020 2 commits
  9. 26 Jun, 2020 1 commit
  10. 13 May, 2020 1 commit
    • First pass of very basic coloration; and demo. · d99e82fa
      Eric Myhre authored
      Key coloration is easy because we already have key emission in one place,
      and we already have size computation for alignment separated from emission.
      Value coloration will be a little more involved.
  11. 10 May, 2020 5 commits
    • Test that JST sub-sub-tables work. · d2c11dad
      Eric Myhre authored
      They do.
    • JST codec now supports absent columns. · 1e80a058
      Eric Myhre authored
      Alignment just proceeds around them, leaving appropriate space based on
      what other rows needed in order to align with each other.
      
      If a column is absent at the end of a row, the whole row wraps up fast.
    • Test that JST sub-tables align at a distance. · 4eaf0f74
      Eric Myhre authored
      They do.
    • Trailing separators and other fiddly bits of JST. · 7ce9660a
      Eric Myhre authored
      The first two example fixtures of what I wanted to achieve pass now :3
      That's exciting.
    • Introducing JST -- json tables. · e9133615
      Eric Myhre authored
      See the package docs in 'jst.go' for introduction to what and why;
      tldr: I want pretty and I want JSON and I want them at the same time.
      
      I'm putting this in the codec package tree because it fits there more so
      than anywhere else, but it's probably not going to be assigned a
      multicodec magic number or anything like that; it's really just JSON.
      
      This code doesn't *quite* pass its own fixture tests yet, but nearly.
      I thought this would be a nice checkpoint because the only thing left
      is dealing with the fiddly trailing-comma-or-not bits.
      
      This first pass also completely ignores character encoding issues,
      the correct counting of graphemes, and so forth; those are future work.
      Most configurability is also speculative for 'first draft' reasons.
      All good things in time.
      
      This is something of a little hobby sidequest.  It's not particularly
      related to the hashing-and-content-addressing quest that's usually the focus here.
      Accordingly, as you may be able to notice from some of the comments
      in the package documentation block, I did initially try to write this
      over in the refmt repo instead.  However, I got about 20 seconds in on
      that effort before realizing that our Node interface here would be a
      wildly better interface to build this with.  Later, I also started
      realizing Selectors would be Quite Good for other forms of
      configuration that I want to add to this system... so, it's rapidly
      turning into a nice little exercise for other core IPLD primitives!
      Yay!  Copacetic.
  12. 28 Apr, 2020 1 commit
    • fix(dagcbor): fix marshalling error · 2edb45a4
      hannahhoward authored
      Fix an error with marshalling that causes bytes nodes to get written as
      links if they are written after a link, because the tag was never reset.
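
      A sketch of the bug class being fixed (stand-in types, not the actual
      dagcbor encoder internals): a "next bytes are a link" flag that gets set
      when the link tag is written but was never cleared afterwards.

        package sketch

        // encoder is a stand-in for the marshaller's state.
        type encoder struct {
            pendingLinkTag bool // set once a link tag has been emitted
        }

        func (e *encoder) emitLinkTag() {
            // ...write the tag that marks the following bytes as a link...
            e.pendingLinkTag = true
        }

        func (e *encoder) emitBytes(b []byte) {
            if e.pendingLinkTag {
                // ...write the bytes as the link body...
                e.pendingLinkTag = false // the fix: without this reset, every
                return                   // later bytes node also becomes a link
            }
            // ...write a plain bytes value...
            _ = b
        }
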
  13. 26 Mar, 2020 1 commit
  14. 11 Mar, 2020 1 commit
  15. 02 Mar, 2020 1 commit
    • Promote NodeAssembler/NodeStyle solution to core. · 4eb8c55c
      Eric Myhre authored
      This is a *lot* of changes.  It's the most significant change to date,
      both in semantics and in character count, since the start of this repo.
      It changes the most central interfaces, and significantly so.
      
      But all tests pass.  And all benchmarks are *improved*.
      
      The Node interface (the reading side) is mostly unchanged -- a lot of
      consuming code will still compile and work just fine without changes --
      but any other Node implementations out there might need some updating.
      
      The NodeBuilder interface (the writing side) is *extremely* changed --
      any implementations out there will *definitely* need change -- and most
      consumers will too.  It's unavoidable with a semantic fix this big.
      The performance improvements should make it worth your while, though.
      
      If you want more background on how and why we got here, you've got
      quite a few commits on the "research-admissions" branches to catch up
      on reading.  But here's a rundown of the changes:
      
      (Get a glass of water or something calming before reading...)
      
      === NodeAssembler introduced! ===
      
      NodeAssembler is a new interface that describes most of the work of
      creating and filling data into a new Node.
      
      The NodeBuilder interface is still around, but changed in role.
      A NodeBuilder is now always also a NodeAssembler; additionally, it can
      return the final Node to you.
      
      A NodeAssembler, unlike NodeBuilder, can **not** return a Node to you.
      In this way, a NodeBuilder represents the ability to allocate memory.
      A NodeAssembler often *does not*: it's just *filling in* memory.
      
      This design overall is much more friendly to efficient operations:
      in this model, we do allocations in bulk when a NodeBuilder is used,
      and then NodeAssemblers are used thereafter to fill it in -- this
      mental model is very friendly to amortizing memory allocations.
      Previously, the NodeBuilder interface made such a pattern of use
      somewhere between difficult and outright impossible, because it was
      modeled around building small values, then creating a bigger value and
      inserting the smaller ones into it.
      
      This is the key change that cascaded into producing the entire other
      set of changes which land in this commit.
      
      The NodeBuilder methods for getting "child builders" are also gone
      as a result of these changes.  The result feels a lot smoother.
      (You can still ask for the NodeStyle for children of a recursive kind!
      But you'll find that even though it's possible, it's rarely necessary.)
      
      We see some direct improvements from this interface change already.
      We'll see even more in the future: creating values when using codegen'd
      implementations of Node was hugely encumbered by the old NodeBuilder
      model; NodeAssembler *radically* raises the possible ceiling for
      performance of codegen Node implementations.
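
      In interface terms, the split looks roughly like this (method sets
      trimmed and types stubbed; the real ipld interfaces carry more):

        package sketch

        // Node and MapAssembler stand in for the real ipld types; this only
        // shows the NodeAssembler / NodeBuilder relationship described above.
        type Node interface{}
        type MapAssembler interface{}

        // NodeAssembler fills in memory that something else owns.
        type NodeAssembler interface {
            BeginMap(sizeHint int) (MapAssembler, error)
            AssignString(string) error
            // ... AssignBool, AssignInt, AssignLink, BeginList, and so on ...
        }

        // NodeBuilder is a NodeAssembler that also owns memory: it can hand
        // back the finished Node (and be reset to amortize its allocations).
        type NodeBuilder interface {
            NodeAssembler
            Build() Node
            Reset()
        }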
      
      === NodeStyle introduced ===
      
      NodeStyle is a new interface type that is used to carry information
      about concrete node implementations.
      
      You can always use a NodeStyle to get a NodeBuilder.
      
      NodeStyle may also have additional features on it which can be detected
      by interface checks.  (This isn't heavily used yet, but we imagine it
      might become handy in the future.)
      
      NodeStyle replaces NodeBuilder in many function arguments,
      because often what we wanted was to communicate a selection of Node
      implementation strategy, but not actually the start of construction;
      the NodeStyle interface now allows us to *say that*.
      
      A NodeStyle typically costs nothing to pass around, whereas a NodeBuilder
      generally requires an allocation to create and initialize.  This means
      we can use NodeStyle more freely in many contexts.
      
      === node package paths changed ===
      
      Node implementations are now in packages under the "node/*" directory.
      Previously, they were under an "impl/*" directory.
      
      The "impl/free" package is replaced by the the "node/basic" package!
      The package name was "ipldfree"; it's now "basicnode".
      
      === basicnode is an improved runtime/anycontent Node implementation ===
      
      The `basicnode` package works much the same as the `ipldfree` package
      used to -- you can store any kind of data in it, and it just does as
      best it can to represent and handle that, and it works without any
      kind of type info nor needs of compile-time special support, etc --
      while being just quietly *better at it*.
      
      The resident memory size of most things has gone down.
      (We're not using "fat unions" in the implementation anymore.)
      
      The cost of iterating maps has gone down *dramatically*.
      Iteration previously suffered from O(n) allocations due to
      expensive `runtime.conv*` calls when yielding keys.
      Iteration is now O(1) (!!) because we redesigned `basicnode` internals
      to use "internal pointers" more heavily, and this avoids the costs
      from `runtime.conv*`.
      (We could've done this separately from the NodeAssembler change,
      admittedly.  But both are the product of research into how impactful
      clever use of "internal pointers" can be, and lots of code in the
      neighborhood had to be rewritten for the NodeAssembler interface,
      so, these diffs arrive as one.)
      
      Error messages are more informative.
      
      Many small operations should get a few nanoseconds faster.
      (The implementation uses more concrete types and fewer switch
      statements.  The difference probably isn't the most noticeable part of
      all these changes, but it's there.)
      
      --- basicnode constructor helpers do all return pointers ---
      
      All the "New*" helper functions in the basicnode package return
      interfaces which are filled by a pointer now.
      This is a change from how they worked previously when they were first
      implemented in the "rsrch" package.
      
      The experience of integrating basicnode with the tests in the traversal
      package made it clear that having a mixture of pointer and non-pointer
      values flying around will be irritating in practice.  And since it is
      the case that when returning values from inside a larger structure,
      we *must* end up returning a pointer, pointers are thus what we
      standardize on.
      
      (There was even some writeup in the HACKME file about how we *might*
      encounter issues on this, and need to change to pointers-everywhere --
      the "pointer-vs-value inhabitant consistency" heading.  Yep: we did.
      And since this detail is now resolved, that doc section is dropped.)
      
      This doesn't really make any difference to performance.
      The old way would cause an alloc in those methods via 'conv*' calls;
      the new way just makes it more explicit and goes through a different
      runtime method at the bottom, but it's still the same number of
      allocations for essentially the same reasons.  (I do wonder if at some
      future point, the golang compiler might get cleverer about eliding
      'conv*' calls, and then this change we make here might be unfortunate;
      but that's certainly not true today, nor in the future at any proximity
      that I can foresee.)
      
      === iterator getters return nil for wrong-kind ===
      
      The Node.MapIterator and Node.ListIterator methods now return nil
      if you call them on non-maps or non-lists.
      
      Previously, they would return an iterator, but using it would just
      constantly error.
      
      I don't think anyone was honestly really checking those error thunks,
      and they made a lot of boilerplate white noise in the implementations,
      and the error is still entirely avoidable by checking the node kind
      up-front (and this is strictly preferable anyway, since it's faster
      than getting an error thunk, poking it to get the error, etc)...
      so, in total, there seem like very few reasons these were useful:
      the idea is thus dropped.
      
      Docs in the Node interface reflect this.
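
      The caller-side pattern this expects, sketched (ReprKind is this era's
      name for the kind enum; later releases rename some of these):

        package sketch

        import (
            "fmt"

            ipld "github.com/ipld/go-ipld-prime"
        )

        // printMapKeys checks the kind up front instead of poking an error
        // out of a dud iterator.
        func printMapKeys(n ipld.Node) error {
            if n.ReprKind() != ipld.ReprKind_Map {
                return nil // MapIterator would return nil here
            }
            for itr := n.MapIterator(); !itr.Done(); {
                k, _, err := itr.Next()
                if err != nil {
                    return err
                }
                ks, _ := k.AsString()
                fmt.Println(ks)
            }
            return nil
        }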
      
      === node/mixins makes new Node implementations easier ===
      
      The mixins package isn't usable directly, but if you're going to make
      a new Node implementation, it should save you a lot of typing...
      and also, boost consistency of basic error handling.
      
      Codegen will look forward to using this.  (Codegen already had much of
      these semantics internally, and so this package is sort of lifting that
      back out to be more generally usable.  By making it live out here as
      exported symbols in the core library, we should also reduce the sheer
      character count of codegen output.)
      
      === 'typed.Node' is now 'schema.TypedNode' ===
      
      A bunch of interfaces that were under the "impl/typed" path moved to
      be in the "schema" package instead.  This probably makes sense to you
      if you look at them and needs no further explanation.
      
      (The reason it comes in this diff, though, is that it was forced:
      adding better tests to the traversal package highlighted a bunch of
      cyclic dependency issues that came from 'typed.Node' being in a
      package that had concrete use of 'basicnode'.)
      
      === codecs ===
      
      The 'encoding' package is now named 'codec'.
      
      This name is shorter; it's more in line with vocabulary we use
      elsewhere in the IPLD project (whereas 'encoding' was more of a nod
      to the naming found in the golang standard library); and in my personal
      opinion it does better at describing both directions of the process
      (whereas 'encoding' sounds like only the to-linear-bytes direction).
      
      I just like it better.
      
      === unmarshal functions no longer return node ===
      
      Unmarshal functions accept a NodeAssembler parameter (rather than
      a NodeBuilder, as before, nor a NodeStyle, which might also make sense
      in the new family of interfaces).
      
      This means they no longer need to return a Node, either -- the caller
      can decide where the unmarshalled data lands.  If the caller is using
      a NodeBuilder, it means they can call Build on that to get the value.
      (If it's a codegen NodeBuilder with More Information, the caller can
      use any specialized functions to get the more informative pointers
      without need for casting!)
      
      Broadly speaking, this means users of unmarshal functions have more
      control over how memory allocation comes into play.
      
      We may want to add more helper functions to the various codec packages
      which take a NodeStyle argument and do return a Node.  That's not in
      this diff, though.  (Need to decide what pattern of naming these
      various APIs would deserve, among other things.)
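
      The resulting call shape, sketched with a stub in place of a real codec
      (basicnode.Style.Any is this era's name for the basic "any" prototype;
      the unmarshal function below is a stand-in, not a real API):

        package main

        import (
            "io"
            "strings"

            ipld "github.com/ipld/go-ipld-prime"
            basicnode "github.com/ipld/go-ipld-prime/node/basic"
        )

        // unmarshal stands in for a codec's unmarshal function under the new
        // contract: it fills whatever assembler it is handed; it returns no Node.
        func unmarshal(na ipld.NodeAssembler, r io.Reader) error {
            b, err := io.ReadAll(r)
            if err != nil {
                return err
            }
            return na.AssignString(string(b)) // stub; a real codec tokenizes and recurses
        }

        func main() {
            nb := basicnode.Style.Any.NewBuilder() // the caller owns the builder...
            if err := unmarshal(nb, strings.NewReader("hello")); err != nil {
                panic(err)
            }
            n := nb.Build() // ...and gets the Node from it, not from the codec.
            _ = n
        }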
      
      === the fluent package ===
      
      The fluent package changed significantly.
      
      The readonly/Node side of it is dropped.  It didn't seem to get a ton
      of exercise in practice; the 'traversal' package (and in the future,
      perhaps also a 'cursor' package) addresses a lot of the same needs,
      and what remains is also covered well these days by the 'must' package;
      and the performance cost of fluent node wrappers as well as the
      composability obstruction of them... is just too much to be worth it.
      
      The few things that used fluent.Node for reading data now mostly use
      the 'must' package instead (and look better for it, imo).
      
      It's possible that some sort of fluent.Node will be rebuilt someday,
      but it's not entirely clear to me what it should look like, and indeed
      whether or not it's a good idea to have in the repo at all if the
      performance of it is counterindicated in a majority of situations...
      so, it's not part of today's update.
      
      The writing/NodeBuilder/NodeAssembler fluent wrappers are continued.
      It's similar to before (panics promptly on errors, and has a lot of
      closures)... but also reflects all of the changes made in the migration
      towards NodeAssembler: it doesn't return intermediate nodes, and
      there's much less kerfuffle with getting child builders.
      Overall, the fluent builders are now even more streamlined than before;
      the closures need even fewer parameters; great success!
      
      The fluent.NodeAssembler interface retains the "Create" terminology
      around maps and lists, even though in the core interfaces,
      the ipld.NodeAssembler interface now says "Begin" for maps and lists.
      This is because the fluent.NodeAssembler approach, with its use of
      closures, really does do the whole operation in one swoop.
      
      (It's amusing to note that this change includes finally nuking some
      fairly old "REVIEW" comment blocks from the old fluent package which
      regarded the "knb" value and other such sadness around typed recursion.
      Indeed, we've finally reviewed that: and the answer was indeed to do
      something drastically different to make those recursions dance well.)
      
      === selectors ===
      
      Selectors essentially didn't change as part of this diff.  Neat.
      
      (They should get a lot faster when applied, because our node
      implementations hit a lot less interface boxing in common operations!
      But the selector code itself didn't need to change to get the gains.)
      
      The 'selector/builder' helper package *did* change a bit.
      The changes are mostly invisible to the user.
      I do have some questions about the performance of the result; I've got
      a sneaking suspicion there's now a bunch of improvements that might be
      easier to get to now than they would've been previously.  But, this is
      not my quest today.  Perhaps it will deserve some review in the future.
      
      The 'selector/builder' package should be noted as having some
      interesting error handling strategies.  Namely, it doesn't.
      Any panics raised by the fluent package will just keep rising; there's
      no place where they're converted to regular error value returns.
      I'm not sure this is a good interface, but it's the way it was before
      I started passing through, so that's the way it stays after this patch.
      
      ExploreFieldsSpecBuilder.Delete disappears.  I hope no one misses it.
      I don't think anyone will.  I suspect it was there only because the
      ipld.MapBuilder interface had such a method and it seemed like a
      reasonable conservative choice at the time to proxy it; now that the
      method proxied is gone, though, so too shall go this.
      
      === traversal ===
      
      Traversal is mostly the same, but a few pieces of config have new names.
      
      `traversal.Config.LinkNodeBuilderChooser` is now
      `traversal.Config.LinkTargetNodeStyleChooser`.
      Still a mouthful; slightly more accurate; and reflects that it now
      works in terms of NodeStyle, which gives us a little more finesse in
      reasoning about where NodeBuilders are actually created, and thus
      better control and insight into where allocations happen.
      
      `traversal.NodeBuilderChooser` is now
      `traversal.LinkTargetNodeStyleChooser` for the same reasons.
      
      The actual type of the `LinkTargetNodeStyleChooser` now requires
      returning a `NodeStyle`, in case all the naming hasn't made it obvious.
      
      === disappearing node packages ===
      
      A couple of packages under 'impl/*' are just dropped.
      
      This is no real loss.  The packages dropped were Node implementations
      that simply weren't done.  Deleting them is an increase in honesty.
      
      This doesn't mean something with the same intentions as those packages
      won't come back; it's just not today.
      
      --- runtime typed node wrapper disappeared ---
      
      This one will come back.  It was just too much of a pain to carry
      along in this diff.  Since it was also a fairly unfinished
      proof-of-concept with no downstream users, it's easier to drop and
      later reincarnate it than it is to carry it along now.
      
      === linking ===
      
      Link.Load now takes a `NodeAssembler` parameter instead of a
      `NodeBuilder`, and no longer returns a `Node`!
      
      This should result in callers having a little more control over where
      allocations may occur, letting them potentially reuse builders, etc.
      
      This change should also make sense considering how codec.Unmarshal
      now similarly takes a NodeAssembler argument and does not return
      a Node value, since it's understood that the caller has some way to
      access or gather the effects, and it's none of our business.
      
      Something about the Link interface still feels a bit contorted.
      Having to give the Load method a Loader that takes half the same
      arguments all over again is definitely odd.  And it's tempting to take
      a peek at fixing this, since the method is getting a signature change.
      It's unclear what exactly to do about this, though, and probably
      a consequential design decision space... so it shall not be reopened
      today during this other large refactor.  Maybe soon.  Maybe.
      
      === the dag-json codec ===
      
      The dag-json codec got harder to implement.  Rrgh.
      
      Since we can't tell if something is going to become a Link until
      *several tokens in*, dag-json is always a bit annoying to deal with.
      Previously, however, dag-json could still start optimistically building
      a map node, and then just... quietly drop it if we turn out to be
      dealing with a link instead.  *That's no longer possible*: the process
      of using NodeAssembler doesn't have a general purpose mechanism for
      backtracking.
      
      So.  Now the dag-json codec has to do even more custom work to buffer
      tokens until it knows what to do with them.  Yey.
      
      The upside is: of course, the result is actually faster, and does fewer
      memory allocations, since it gathers enough information to decide what
      it's doing before it begins to do it.
      (This is a lovely example of the disciplined design of NodeAssembler's
      interface forcing other code to be better behaved and disciplined!)
      
      === traversal is faster ===
      
      The `BenchmarkSpec_Walk_MapNStrMap3StrInt/n=32` test has about doubled
      in speed on the new `basicnode` implementation in comparison to the old
      `ipldfree.Node` implementation.
      
      This is derived primarily from the drop in costs of iteration on
      `basicnode` compared to the old `ipldfree.Node` implementation.
      
      Some back-of-the-envelope math on the allocation still left around
      suggest it could double in speed again.  The next thing to target
      would be allocations of paths, followed by iterator allocations.
      Both are a tad trickier, though (see a recently merge-ignore'd
      commit for notes on iterators; and paths... paths will be a doozy
      because the path forward almost certainly involves path values
      becoming invalid if retained beyond a scope, which is... unsafe),
      so certainly need their own efforts and separate commits.
      
      === marshalling is faster ===
      
      Marshalling is much faster on the new `basicnode` implementation in
      comparison to the old `ipldfree.Node` implementation.
      Same reasons as traversal.
      
      Some fixes to marshalling which previously caused unnecessary
      allocations of token objects during recursions have also been made.
      These improve speed a bit (though it's not nearly as noticeable as the
      boost provided by the Node implementation improvements to iteration).
      
      === size hints showed up all over the place ===
      
      The appearance of size hint arguments to assembly of maps and lists
      is of course inevitable from the new NodeAssembler interface.
      
      It's particularly interesting to see how many of them showed up in
      the selector and selectorbuilder packages as constants.
      
      And super especially interesting how many of them are very small
      constants.  44 zeros.  86 ones.  25 twos.  9 threes.  2 fours.
      (Counted via variations of `grep -r 'Map(.*4, func' | wc -l`.)
      It's quite a distribution, neh?  We should probably consider some
      more optimizations specifically targeted to small maps.
      (This is an unscientific sample, and shifted by what we chose to
      focus on in testing, etc etc, but the general point stands.)
      
      `-1` is used to indicate "no idea" for size.  There's a small fix
      to the basicnode implementations to allow this.  A zero would work
      just as well in practice, but using a negative number as a hint to
      the human seems potentially useful.  It's a shame we can't make the
      argument optional; oh well.
      
      === codegen ===
      
      The codegen packages still all compile... but do nonsensical things,
      for the moment: they've not been updated to emit NodeAssembler.
      
      Since the output of codegen still isn't well rigged to test harnesses,
      this breakage is silent.
      
      The codegen packages will probably undergo a fairly tabula-rasa sweep
      in the near future.  There's been a lot of lessons learned since the
      start of the code currently there.  Updating to emit the NodeAssembler
      interface will be such a large endeavor it probably represents a good
      point to just do a fresh pass on the whole thing all at once.
      
      --------
      
      ... and that's all!
      
      Fun reading, eh?
      
      Please do forgive the refactors necessary for all this.  Truly, the
      performance improvements should make it all worth your while.