1. 22 Aug, 2021 1 commit
  2. 16 Aug, 2021 1 commit
  3. 29 Jul, 2021 1 commit
  4. 21 Jul, 2021 3 commits
  5. 16 Jul, 2021 2 commits
  6. 28 Apr, 2021 1 commit
  7. 26 Apr, 2021 1 commit
  8. 25 Apr, 2021 1 commit
    • Package docs for dag-cbor. · 92c695e9
      Eric Myhre authored
      These are somewhat overdue, and clarify what features are supported,
      and also note some discrepancies in implementation versus the spec.
      
      (As I'm taking this inventory of discrepancies, there's admittedly
      rather more than I'd like... but step 1: document the current truth.
      Prioritizing which things to hack on, in the field of infinite possible
      prioritizations of things that need hacking on, can be a step 2.)
  9. 23 Mar, 2021 3 commits
  10. 22 Mar, 2021 1 commit
  11. 12 Mar, 2021 2 commits
  12. 05 Mar, 2021 1 commit
    • codec/raw: implement the raw codec · 7e692244
      Daniel Martí authored
      It's small, it's simple, and it's already widely used as part of unixfs.
      So there's no reason it shouldn't be part of go-ipld-prime.
      
      The codec is tiny, but has three noteworthy parts: the Encode and Decode
      funcs, the cidlink multicodec registration, and the Bytes method
      shortcut. Each of these has its own dedicated regression test.
      
      I'm also using this commit to showcase the use of quicktest instead of
      go-wish. The result is extremely similar, but with less dot-import
      magic. For example, if I remove the Bytes shortcut in Decode:
      
      	--- FAIL: TestDecodeBuffer (0.00s)
      	    codec_test.go:115:
      	        error:
      	          got non-nil error
      	        got:
      	          e"could not decode raw node: must not call Read"
      	        stack:
      	          /home/mvdan/src/ipld/codec/raw/codec_test.go:115
      	            qt.Assert(t, err, qt.IsNil)
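      
      For a sense of scale, a complete round-trip through the raw codec
      fits in a screenful.  A sketch against the API described above
      (import paths as of this commit; treat details as illustrative):
      
      	package main
      
      	import (
      		"bytes"
      		"fmt"
      
      		"github.com/ipld/go-ipld-prime/codec/raw"
      		basicnode "github.com/ipld/go-ipld-prime/node/basic"
      	)
      
      	func main() {
      		n := basicnode.NewBytes([]byte("hello")) // raw payloads are plain byte nodes
      		var buf bytes.Buffer
      		if err := raw.Encode(n, &buf); err != nil { // bytes pass through verbatim
      			panic(err)
      		}
      		nb := basicnode.Prototype.Bytes.NewBuilder()
      		if err := raw.Decode(nb, &buf); err != nil { // *bytes.Buffer engages the Bytes shortcut
      			panic(err)
      		}
      		out, _ := nb.Build().AsBytes()
      		fmt.Printf("%s\n", out) // hello
      	}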
  13. 25 Feb, 2021 2 commits
    • Extract multi{codec,hash} registries better. · 8fef5312
      Eric Myhre authored
      And, make a package which can be imported to register "all" of the
      multihashes.  (Or at least all of them that you would've expected
      from go-multihash.)
      
      There are also packages that are split roughly per the transitive
      dependency it brings in, so you can pick and choose.
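      
      Concretely, registration happens via blank imports; something like
      the following (paths per this commit's package split; treat them as
      illustrative):
      
      	import (
      		// Register the full suite, accepting all the transitive deps:
      		_ "github.com/ipld/go-ipld-prime/multihash/register/all"
      
      		// ...or pick just the family you need, e.g.:
      		// _ "github.com/ipld/go-ipld-prime/multihash/register/sha3"
      	)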
      
      This cascaded into more work than I might've expected.
      Turns out a handful of the things we have multihash identifiers for
      actually *do not* implement the standard hash.Hash contract at all.
      For these, I've made small shims.
      
      Test fixtures across the library switch to using sha2-512.
      Previously I had written a bunch of them to use sha3 variants,
      but since that is not in the standard library, I'm going to move away
      from that so as not to re-bloat the transitive dependency tree
      just for the tests and examples.
    • Introduce LinkSystem. · a1482fe2
      Eric Myhre authored
      This significantly reworks how linking is handled.
      
      All of the significant operations involved in storing and loading
      data are extracted into their own separate features, and the LinkSystem
      just composes them.  The big advantage of this is we can now add as
      many helper methods to the LinkSystem construct as we want -- whereas
      previously, adding methods to the Link interface was a difficult
      thing to do, because that interface shows up in a lot of places.
      
      Link is now *just* treated as a data holder -- it doesn't need logic
      attached to it directly.  This is much cleaner.
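      
      As a sketch of that composition, wiring a LinkSystem to a toy
      in-memory store looks roughly like this (field names per the new
      LinkSystem; the map store is purely illustrative):
      
      	lsys := cidlink.DefaultLinkSystem()
      	store := map[string][]byte{} // toy block store
      	lsys.StorageWriteOpener = func(_ ipld.LinkContext) (io.Writer, ipld.BlockWriteCommitter, error) {
      		buf := &bytes.Buffer{}
      		return buf, func(lnk ipld.Link) error { // commit: key the bytes by the final link
      			store[lnk.Binary()] = buf.Bytes()
      			return nil
      		}, nil
      	}
      	lsys.StorageReadOpener = func(_ ipld.LinkContext, lnk ipld.Link) (io.Reader, error) {
      		return bytes.NewReader(store[lnk.Binary()]), nil
      	}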
      
      The way we interact with the CID libraries is also different.
      We're doing multihash registries ourselves, and breaking our direct
      use of the go-multihash library.  The big upside is we're now using
      the familiar and standard hash.Hash interface from the golang stdlib.
      (And as a bonus, that actually works streamingly; go-multihash didn't.)
      However, this also implies a really big change for downstream users:
      we're no longer baking as many hashes into the new multihash registry
      by default.
  14. 25 Dec, 2020 1 commit
    • all: rename schema.Kind to TypeKind, ipld.ReprKind to Kind · 2d7d25c4
      Daniel Martí authored
      As discussed on the issue thread, ipld.Kind and schema.TypeKind are more
      intuitive, closer to the spec wording, and just generally better in the
      long run.
      
      The changes are almost entirely automated via the commands below. Very
      minor changes were needed in some of the generators, and then gofmt.
      
      	sed -ri 's/\<Kind\(\)/TypeKind()/g' **/*.go
      	git checkout fluent # since it uses reflect.Value.Kind
      
      	sed -ri 's/\<Kind_/TypeKind_/g' **/*.go
      	sed -i 's/\<Kind\>/TypeKind/g' **/*.go
      	sed -i 's/ReprKind/Kind/g' **/*.go
      
      Plus manually undoing a few renames, as per Eric's review.
      
      Fixes #94.
  15. 16 Dec, 2020 1 commit
    • all: rewrite interfaces and APIs to support int64 · f6e9a891
      Daniel Martí authored
      We only supported representing Int nodes as Go's "int" builtin type.
      This is fine on 64-bit, but on 32-bit, it limited those node values to
      just 32 bits. This is a problem in practice, because it's reasonable to
      want more than 32 bits for integers.
      
      Moreover, this meant that IPLD would change behavior if built for a
      32-bit platform; it would not be able to decode large integers, for
      example, when in fact that was just a software limitation that 64-bit
      builds did not have.
      
      To fix this problem, consistently use int64 for AsInt and AssignInt.
      
      A lot more functions are part of this rewrite as well; mainly, those
      revolving around collections and iterating. Some might never need more
      than 32 bits in practice, but consistency and portability are preferred.
      Moreover, many are interfaces, and we want IPLD interfaces to be
      flexible, which will be important for ADLs.
      
      Below are some GNU sed lines which can be used to quickly update
      function signatures to use int64:
      
      	sed -ri 's/(func.* AsInt.*)\<int\>/\1int64/g' **/*.go
      	sed -ri 's/(func.* AssignInt.*)\<int\>/\1int64/g' **/*.go
      	sed -ri 's/(func.* Length.*)\<int\>/\1int64/g' **/*.go
      	sed -ri 's/(func.* LookupByIndex.*)\<int\>/\1int64/g' **/*.go
      	sed -ri 's/(func.* Next.*)\<int\>/\1int64/g' **/*.go
      	sed -ri 's/(func.* ValuePrototype.*)\<int\>/\1int64/g' **/*.go
      
      Note that the function bodies, as well as the code that calls said
      functions, may need to be manually updated with the integer type change.
      That cannot be automated, because an automated fix could silently
      introduce overflows that are never handled.
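      
      For illustration, such a manual narrowing might look like this
      (a hedged sketch; math.MaxInt needs Go 1.17+):
      
      	func intFromInt64(v int64) (int, error) {
      		if v > math.MaxInt || v < math.MinInt { // can only trip on 32-bit builds
      			return 0, fmt.Errorf("%d overflows int on this platform", v)
      		}
      		return int(v), nil
      	}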
      
      Some TODOs and FIXMEs for overflow checks are removed, since we remove
      some now unnecessary int64->int conversions. On the other hand, the
      older codecs based on refmt need to gain some overflow check TODOs,
      since refmt uses ints. That is okay for now, since we'll phase out refmt
      pretty soon.
      
      While at it, update codectools to use int64 for token Length fields, so
      that it properly supports full IPLD integers without machine-dependent
      behavior and overflow checks. The budget integer is also updated to be
      int64, since the lengths it uses are now int64.
      
      Note that this refactor needed changes to the Go code generator as well
      as some of the tests, for the purpose of updating all the code.
      
      Finally, note that the code-generated iterator structs do not use int64
      fields internally, even though they must return int64 numbers to
      implement the interface. This is because they use the numeric fields to
      count up to a small finite amount (such as the number of fields in a Go
      struct), or up to the length of a map/slice. Neither of them can ever
      outgrow "int".
      
      Fixes #124.
  16. 01 Dec, 2020 7 commits
    • Move the pretty package out of the codec subtree. · 1d1fc495
      Eric Myhre authored
      It still uses the codec/tools package, but it's very clearly not a
      codec itself and shouldn't be grouped like one, despite the shared
      common implementation details.
      
      Renamed methods to "Stringify".
      
      Stringify no longer returns errors; those errors only arise from
      the writer erroring, and writing into an in-memory buffer can't error.
    • Introduce pretty printing tool. · 32a1ed04
      Eric Myhre authored
      I think this, or a variant of it, may be reasonable to rig up as a
      Stringer on the basicnode types, and recommend for other Node
      implementations to use as their Stringer too.
      
      It's a fairly verbose output: I'm mostly aiming to use it in examples.
      
      Bytes in particular are fun: I decided to make them use the hex.Dump
      format.  (Why not?)
      
      I've put this in a codec sub-package, because in some ways it feels
      like a codec -- it's something you can apply to any node, and it
      streams data out to an io.Writer -- but it's also worth noting it's
      not meant to be a multicodec or generally written with an intention
      of use anywhere outside of debug printf sorts of uses.
      
      The codectools package, although it only has this one user, is a
      reaction to previous scenarios where I've wanted a quick debug method
      and desperately wanted something that gives me reasonable quoted
      strings... without reaching for a json package.  We'll see if it's
      worth it over time; I'm betting yes, but not with infinite confidence.
      (This particular string escaping function also has the benefit of
      encoding even non-utf-8 strings without loss of information -- which
      is noteworthy, because I've recently noticed JSON _does not_; yikes.)
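      
      (A quick way to see that JSON loss, for the record:
      
      	b, _ := json.Marshal("\xff") // not valid UTF-8
      	fmt.Printf("%s\n", b)        // "\ufffd" -- the original byte is gone
      
      ...encoding/json swaps invalid bytes for U+FFFD, so round-tripping
      destroys the original string.)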
    • Revert "Make the cidlink.Link type only usable as a pointer." · 96aa55ea
      Eric Myhre authored
      Trying to make CIDs only usable as a pointer would be nice from a
      consistency perspective, but has other consequences.
      
      It's easy to forget this (and I apparently just did), but...
      We often use link types as map keys.  And this is Important.
      
      That means trying to handle CIDs as pointers leads to nonsensical
      results: pointers are technically valid as a golang map key, but
      they don't "do the right thing" -- the equality check ends up operating
      on the pointer rather than on the data.  This is well-defined,
      but generally useless for these types in context.
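      
      A tiny demonstration of why pointer keys misbehave here:
      
      	type K struct{ s string }
      	a, b := &K{"same"}, &K{"same"}
      	m := map[*K]bool{a: true}
      	fmt.Println(m[a], m[b]) // true false -- lookup compares identity, not data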
    • Make the cidlink.Link type only usable as a pointer. · 55dcf0c3
      Eric Myhre authored
      As the comments in the diff say: it's a fairly sizable footgun for
      users to need to consider whether they expect the pointer form or
      the bare form when inspecting what an `ipld.Link` interface contains:
      so, let's just remove the choice.
      
      There's technically no reason for the Link.Load method to need to be
      attached to the pointer receiver other than removing this footgun.
      From the other side, though, there's no reason *not* to make it
      attached to the pointer receiver, because any time a value is assigned
      to an interface type, it necessarily heap-escapes and becomes a pointer
      anyway.  So, making it unconditional and forcing the pointer to be
      clear in the user's hands seems best.
    • Tweak to alloc counting tests. · ca680715
      Eric Myhre authored
      I dearly wish this wasn't such a dark art.
      But I really want these tests, too.
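      
      (The core of these tests is testing.AllocsPerRun; a minimal sketch,
      with the operation under measurement hypothetical:
      
      	allocs := testing.AllocsPerRun(100, func() {
      		doTheThing() // hypothetical op whose allocations we're counting
      	})
      	if allocs > 0 {
      		t.Fatalf("expected zero allocations, got %v", allocs)
      	}
      
      The dark art is that escape analysis and compiler changes can shift
      the counts out from under you.)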
    • Revamped DAG-JSON decoder and unmarshaller. · 53fb23e4
      Eric Myhre authored
      This is added in a new "dagjson2" package for the time being,
      but aims to replace the current dagjson package entirely,
      and will take over that namespace when complete.
      
      So far only the decoder/unmarshaller is included in this first commit,
      and the encoder/marshaller is still coming up.
      
      This revamp is making several major strides:
      
      - The decoding system is cleanly separated from the tree building.
      
      - The tree building reuses the codectools token assembler systems.
        This saves a lot of code, and adds a lot of consistency.
        (By contrast, the older dagjson and dagcbor packages had similar
        outlines, but didn't actually share much code; this was annoying
        to maintain, and meant improvements to one needed to be ported
        to the other manually.  No more.)
      
      - The token type used by this codectools system is more tightly
        associated with the IPLD Data Model.  In practice, what this means
        is links are parsed at the same stage as the rest of parsing,
        rather than being added on in an awkward "parse 1.5" stage.
        This results in much less complicated code than the old token
        system from refmt which the older dagjson package leans on.
      
      - Budgets are more consistently woven through this system.
      
      - The JSON decoder components are in their own sub-package,
        and should be relatively reusable.  Some features like string parsing
        are exported in their own right, in addition to being accessible
        via the full recursive supports-everything decoders.
        (This might not often be compelling, but -- maybe.  I myself wanted
        more reusable access to fine-grained decoder and encoder components
        when I was working on the "JST" experiment, so, I'm scratching my
        own itch here if nothing else.)
        End-users should mostly not need to see this, but library
        implementors might appreciate it.
      
      - The codectools scratch.Reader type is used in all the decoder APIs.
        This results in good performance for either streaming io.Reader or
        already-in-memory bytes slices as data sources, and does it without
        doubling the number of exported functions we need (or pushing the
        need for feature detection into every single exported function).
      
      - The configuration system for the decoder is actually in this repo,
        and it's sanely and clearly settable while also being optional.
        Previously, if you wanted to configure dagjson, you'd have to reach
        into the refmt json package for *those* configuration structs,
        which was workable but just very confusing and gave the end-user a
        lot of different places to look before finding what they need.
      
      - The implementations are very mindful of memory allocation efficiency.
        Almost all of the component structures carefully utilize embedding:
        ReusableUnmarshaller embeds the Decoder; the Decoder embeds the
        scratch.Reader as well as the Token it yields; and so on (a rough
        sketch follows after this list).
        This should result in overall being able to produce fully usable
        codecs with a minimal number of allocations -- much fewer than the
        older implementations required.
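      
      A rough shape of that embedding (field names illustrative):
      
      	type Decoder struct {
      		r   scratch.Reader // embedded by value: no separate allocation
      		tok Token          // reused across calls; yielded by reference
      	}
      
      	type ReusableUnmarshaller struct {
      		d Decoder // the whole decode stack sits in one contiguous value
      	}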
      
      Some benefits have yet to be realized, but are on the map now:
      
      - The new Token structure also includes space for position and
        progress tracking, which we want to use to produce better errors.
        (This needs more implementation work, still, though.)
      
      - There are several configuration options for strictness.
        These aren't all backed up by the actual implementation yet
        (I'm porting over old code fast enough to write a demo and make
        sure the whole suite of interfaces works; it'll require further
        work, especially on this strictness front, later), but
        at the very least these are now getting documented,
        and several comment blocks point to where more work is needed.
      
      - The new multicodec registry is alluded to in comments here, but
        isn't implemented yet.  This is part of the long game big goal.
        The aim is to, by the end of this revamp, be able to do something
        about https://github.com/ipld/go-ipld-prime/issues/55 , and approach
        https://gist.github.com/warpfork/c0200cc4d99ee36ba5ce5a612f1d1a22 .
    • Add scratch.Reader tool, helpful for decoders. · 3040f082
      Eric Myhre authored
      The docs in the diff should cover it pretty well.
      It's a reader-wrapper that does a lot of extremely common
      buffering and small-read operations that parsers tend to need.
      
      This emerges from an older generation of code in refmt with a similar purpose:
      https://github.com/polydawn/refmt/blob/master/shared/reader.go
      Unlike those antecedents, this one is a single concrete implementation,
      rather than using interfaces to allow switching between the two major modes of use.
      This is surely uglier code, but I think the result is more optimizable.
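      
      In use it looks roughly like this (method names are from memory of
      the antecedents and may not match the final surface exactly):
      
      	var r scratch.Reader
      	r.InitSlice(data)        // zero-copy mode over in-memory bytes...
      	// r.InitReader(stream)  // ...or buffered mode over any io.Reader
      	chunk, err := r.Readn(4) // small exact-size read, no allocation either way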
      
      The tests include aggressive checks that operations take exactly as
      many allocations as planned -- and mostly, that's *zero*.
      
      In the next couple of commits, I'll be adding parsers which use this.
      
      Benchmarks are still forthcoming.  My recollection from the previous
      bout of this in refmt was that microbenchmarking this type wasn't
      a great use of time, because when we start benchmarking codecs built
      *upon* it, and especially, when looking at the pprof reports from that,
      we'll see this reader showing up plain as day there, and nicely
      contextualized... so, we'll just save our efforts for that point.
  17. 14 Nov, 2020 7 commits
    • Add position tracking fields to Token. · 1110155d
      Eric Myhre authored
      These aren't exercised yet -- and this is accordingly still highly
      subject to change -- but so far in developing this package, the pattern
      has been "if I say maybe this should have X", it's always turned out
      it indeed should have X.  So let's just do that and then try it out,
      and have the experimental code instead of the comments.
    • Token.Normalize utility method. · a8995f6f
      Eric Myhre authored
      Useful for tests that do deep equality tests on structures.
      
      Same caveat about current placement of this method as in the previous
      commit: this might be worth detaching and shifting to a 'codectest'
      or 'tokentest' package.  But let's see how it shakes out.
    • Extract and export StringifyTokenSequence utility. · d3511334
      Eric Myhre authored
      This is far too useful in testing to reproduce in each package that
      needs something like it.  It's already shown up as desirable again
      as soon as I start implementing even a little bit of even one codec
      tokenizer, and that's gonna keep happening.
      
      This might be worth moving to some kind of a 'tokentest' or
      'codectest' package instead of cluttering up this one, but...
      we'll see; I've got a fair amount more code to flush into commits,
      and after that we can reshake things and see if packages settle
      out differently.
    • Add budget parameter to TokenReader. · 33fb7d98
      Eric Myhre authored
      There were already comments about how this would be "probably"
      necessary; I don't know why I wavered, it certainly is.
    • Type the TokenKind consts correctly. · 72793f26
      Eric Myhre authored
      You can write a surprising amount of code where the compiler will shrug
      and silently coerce things for you.  Right up until you can't.
      (Some test cases that'll be coming down the commit queue shortly
      happened to end up checking the type of the constants, and, well.
      Suddenly this was noticeable.)
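      
      The gist, in miniature (values illustrative):
      
      	// Untyped, `const TokenKind_MapOpen = '{'` coerces silently anywhere
      	// an integer fits.  Typed, mixing with other integer types requires
      	// an explicit conversion -- which is what we want:
      	type TokenKind uint8
      
      	const (
      		TokenKind_MapOpen  TokenKind = '{'
      		TokenKind_MapClose TokenKind = '}'
      	)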
    • Drop earlier design comments. · 2143068c
      Eric Myhre authored
      We definitely did make a TokenWalker, heh.
      
      The other naming marsh (heh, see what I did there?) is still unresolved
      but can stay unresolved a while longer.
    • Fresh take on codec APIs, and some tokenization utilities. · 1da7e2dd
      Eric Myhre authored
      The tokenization system may look familiar to refmt's tokens -- and
      indeed it surely is inspired by and in the same pattern -- but it
      hews a fair bit closer to the IPLD Data Model definitions of kinds,
      and it also includes links as a token kind.  Presence of links as
      a token kind means that if we build codecs around these, the handling
      of links will be better and more consistently abstracted (the
      current dagjson and dagcbor implementations are instructive for what
      an odd mess it is when you have most of the tokenization happen
      before you get to the level that figures out links; I think we can
      improve on that code greatly by moving the barriers around a bit).
      
      I made both all-at-once and pumpable versions of both the token
      producers and the token consumers.  Each are useful in different
      scenarios.  The pumpable versions are probably generally a bit slower,
      but they're also more composable.  (The all-at-once versions can't
      be glued to each other; only to pumpable versions.)
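      
      Shape-wise, pumping tokens from a producer into a consumer composes
      like this (interfaces hypothetical, for illustration only):
      
      	type TokenProducer interface{ Step(*Token) (done bool, err error) }
      	type TokenConsumer interface{ Step(*Token) (done bool, err error) }
      
      	func pump(src TokenProducer, dst TokenConsumer) error {
      		var tok Token
      		for {
      			srcDone, err := src.Step(&tok) // produce one token...
      			if err != nil {
      				return err
      			}
      			if _, err := dst.Step(&tok); err != nil { // ...feed it onward
      				return err
      			}
      			if srcDone {
      				return nil
      			}
      		}
      	}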
      
      Some new and much reduced contracts for codecs are added,
      but not yet implemented by anything in this commit.
      The comments on them are lengthy and detail the ways I'm thinking
      that codecs should be (re)implemented in the future to maximize
      usability and performance and also allow some configurability.
      (The current interfaces "work", but irritate me a great deal every
      time I use them; to be honest, I just plain guessed badly at what
      the API here should be the first time I did it.  Configurability
      should be easy to *not* engage in, but also easy when you do
      (and in particular, should not require reaching into *another* library's
      packages to do it!).)  More work will be required to bring this
      to fruition.
      
      It may be particularly interesting to notice that the tokenization
      systems also allow complex keys -- maps and lists can show up as the
      keys to maps!  This is something not allowed by the data model (and
      for, dare I say, obvious reasons)... but it's something that's possible
      at the schema layer (e.g. structs with representation strategies that
      make them representable as strings can be used as map keys), so,
      these functions support it.
  18. 21 Oct, 2020 2 commits
  19. 20 Oct, 2020 1 commit
  20. 24 Sep, 2020 1 commit