1. 16 Dec, 2020 1 commit
    • all: rewrite interfaces and APIs to support int64 · f6e9a891
      Daniel Martí authored
      We only supported representing Int nodes as Go's "int" builtin type.
      This was fine on 64-bit, but on 32-bit it limited those node values to
      just 32 bits. This is a problem in practice, because it's reasonable to
      want more than 32 bits for integers.
      
      Moreover, this meant that IPLD would change behavior if built for a
      32-bit platform; it would not be able to decode large integers, for
      example, when in fact that was just a software limitation that 64-bit
      builds did not have.
      
      To fix this problem, consistently use int64 for AsInt and AssignInt.
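
      A minimal sketch of the resulting shape of the interfaces (abridged;
      the method sets shown here are illustrative, with everything else
      elided):

      	// Node exposes integer values as int64 on all platforms.
      	type Node interface {
      		AsInt() (int64, error)
      		// ... other methods elided ...
      	}

      	// NodeAssembler correspondingly accepts int64 on assignment.
      	type NodeAssembler interface {
      		AssignInt(i int64) error
      		// ... other methods elided ...
      	}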
      
      A lot more functions are part of this rewrite as well; mainly, those
      revolving around collections and iterating. Some might never need more
      than 32 bits in practice, but consistency and portability are preferred.
      Moreover, many are interfaces, and we want IPLD interfaces to be
      flexible, which will be important for ADLs.
      
      Below are some GNU sed lines which can be used to quickly update
      function signatures to use int64:
      
      	sed -ri 's/(func.* AsInt.*)\<int\>/\1int64/g' **/*.go
      	sed -ri 's/(func.* AssignInt.*)\<int\>/\1int64/g' **/*.go
      	sed -ri 's/(func.* Length.*)\<int\>/\1int64/g' **/*.go
      	sed -ri 's/(func.* LookupByIndex.*)\<int\>/\1int64/g' **/*.go
      	sed -ri 's/(func.* Next.*)\<int\>/\1int64/g' **/*.go
      	sed -ri 's/(func.* ValuePrototype.*)\<int\>/\1int64/g' **/*.go
      
      Note that the function bodies, as well as the code that calls said
      functions, may need to be manually updated with the integer type change.
      That cannot be automated, because an automated fix could silently
      introduce unhandled overflows.
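
      For example, anywhere an int64 must be narrowed back down to a Go
      "int" (such as before indexing a slice), the conversion now needs an
      explicit check. A portable idiom for that (illustrative, not lifted
      from this diff):

      	import "fmt"

      	// toInt narrows an int64 into the platform's int, failing on
      	// overflow. The round-trip comparison is correct on both 32-bit
      	// and 64-bit builds.
      	func toInt(n int64) (int, error) {
      		if int64(int(n)) != n {
      			return 0, fmt.Errorf("value %d overflows int on this platform", n)
      		}
      		return int(n), nil
      	}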
      
      Some TODOs and FIXMEs for overflow checks are removed, since we remove
      some now unnecessary int64->int conversions. On the other hand, the
      older codecs based on refmt need to gain some overflow check TODOs,
      since refmt uses ints. That is okay for now, since we'll phase out refmt
      pretty soon.
      
      While at it, update codectools to use int64 for token Length fields, so
      that it properly supports full IPLD integers without machine-dependent
      behavior and overflow checks. The budget integer is also updated to be
      int64, since the lengths it uses are now int64.
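
      In sketch form, that keeps budget accounting in a single integer type
      (the names here are illustrative):

      	// budget and tok.Length are both int64, so no narrowing conversion
      	// or overflow-checking TODO is needed at this subtraction.
      	budget -= tok.Length
      	if budget < 0 {
      		return errBudgetExceeded // hypothetical sentinel error
      	}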
      
      Note that this refactor needed changes to the Go code generator as well
      as some of the tests, for the purpose of updating all the code.
      
      Finally, note that the code-generated iterator structs do not use int64
      fields internally, even though they must return int64 numbers to
      implement the interface. This is because they use the numeric fields to
      count up to a small finite amount (such as the number of fields in a Go
      struct), or up to the length of a map/slice. Neither of them can ever
      outgrow "int".
      
      Fixes #124.
2. 01 Dec, 2020 1 commit
    • Revamped DAG-JSON decoder and unmarshaller. · 53fb23e4
      Eric Myhre authored
      This is added in a new "dagjson2" package for the time being,
      but aims to replace the current dagjson package entirely,
      and will take over that namespace when complete.
      
      So far only the decoder/unmarshaller is included in this first commit,
      and the encoder/marshaller is still coming up.
      
      This revamp is making several major strides:
      
      - The decoding system is cleanly separated from the tree building.
      
      - The tree building reuses the codectools token assembler systems.
        This saves a lot of code, and adds a lot of consistency.
        (By contrast, the older dagjson and dagcbor packages had similar
        outlines, but didn't actually share much code; this was annoying
        to maintain, and meant improvements to one needed to be ported
        to the other manually.  No more.)
      
      - The token type used by this codectools system is more tightly
        associated with the IPLD Data Model.  In practice, what this means
        is that links are parsed at the same stage as the rest of parsing,
        rather than being added on in an awkward "parse 1.5" stage.
        This results in much less complicated code than the old token
        system from refmt, which the older dagjson package leans on.
      
      - Budgets are more consistently woven through this system.
      
      - The JSON decoder components are in their own sub-package,
        and should be relatively reusable.  Some features like string parsing
        are exported in their own right, in addition to being accessible
        via the full recursive supports-everything decoders.
        (This might not often be compelling, but -- maybe.  I myself wanted
        more reusable access to fine-grained decoder and encoder components
        when I was working on the "JST" experiment, so, I'm scratching my
        own itch here if nothing else.)
        End-users should mostly not need to see this, but library
        implementors might appreciate it.
      
      - The codectools scratch.Reader type is used in all the decoder APIs.
        This results in good performance whether the data source is a
        streaming io.Reader or an already-in-memory byte slice, and does it
        without doubling the number of exported functions we need (or
        pushing the need for feature detection into every single exported
        function).  (A sketch of this appears after this list.)
      
      - The configuration system for the decoder is actually in this repo,
        and it's sanely and clearly settable while also being optional.
        Previously, if you wanted to configure dagjson, you'd have to reach
        into the refmt json package for *those* configuration structs,
        which was workable but just very confusing and gave the end-user a
        lot of different places to look before finding what they need.
      
      - The implementations are very mindful of memory allocation efficiency.
        Almost all of the component structures carefully utilize embedding:
        ReusableUnmarshaller embeds the Decoder; the Decoder embeds the
        scratch.Reader as well as the Token it yields; etc.
        Overall, this should make it possible to produce fully usable
        codecs with a minimal number of allocations -- far fewer than the
        older implementations required.
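
      As a sketch tying the scratch.Reader point and the embedding point
      together (all names and fields here are illustrative stand-ins, not
      the package's actual declarations):

      	import "io"

      	// reader stands in for the scratch.Reader idea: it can be fed
      	// either a streaming io.Reader or an in-memory byte slice, so one
      	// exported decode function serves both data sources.
      	type reader struct {
      		stream io.Reader // non-nil when decoding from a stream
      		buf    []byte    // used directly when decoding from memory
      		pos    int
      	}

      	// Token carries the decoded scalar values; it is held by value
      	// inside the Decoder and reused across calls.
      	type Token struct {
      		Kind byte
      		Int  int64
      		Str  string
      	}

      	// Decoder holds its parts by value, so constructing one is a
      	// single allocation (or none, if it lives on the stack).
      	type Decoder struct {
      		rdr reader
      		tok Token
      	}

      	// ReusableUnmarshaller embeds the Decoder in turn.
      	type ReusableUnmarshaller struct {
      		Decoder
      	}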
      
      Some benefits have yet to be realized, but are on the map now:
      
      - The new Token structure also includes space for position and
        progress tracking, which we want to use to produce better errors.
        (This needs more implementation work, still, though.)
      
      - There are several configuration options for strictness.
        These aren't all backed up by the actual implementation yet
        (I'm porting over old code fast enough to write a demo and make
        sure the whole suite of interfaces works; it'll require further
        work, especially on this strictness front, later), but
        at the very least these are now getting documented,
        and several comment blocks point to where more work is needed.
      
      - The new multicodec registry is alluded to in comments here, but
        isn't implemented yet.  This is part of the big long-game goal.
        The aim is to, by the end of this revamp, be able to do something
        about https://github.com/ipld/go-ipld-prime/issues/55 , and approach
        https://gist.github.com/warpfork/c0200cc4d99ee36ba5ce5a612f1d1a22 .