- 23 Mar, 2021 1 commit
-
-
Will Scott authored
-
- 22 Mar, 2021 1 commit
-
-
Will Scott authored
-
- 12 Mar, 2021 2 commits
-
-
hannahhoward authored
-
Eric Myhre authored
-
- 05 Mar, 2021 1 commit
-
-
Daniel Martí authored
It's small, it's simple, and it's already widely used as part of unixfs. So there's no reason it shouldn't be part of go-ipld-prime. The codec is tiny, but has three noteworthy parts: the Encode and Decode funcs, the cidlink multicodec registration, and the Bytes method shortcut. Each of these has its own dedicated regression test.

I'm also using this commit to showcase the use of quicktest instead of go-wish. The result is extremely similar, but with less dot-import magic. For example, if I remove the Bytes shortcut in Decode:

    --- FAIL: TestDecodeBuffer (0.00s)
        codec_test.go:115: error: got non-nil error
            got: e"could not decode raw node: must not call Read"
            stack:
                /home/mvdan/src/ipld/codec/raw/codec_test.go:115
                    qt.Assert(t, err, qt.IsNil)
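For context, a minimal sketch of a round trip through the codec, assuming the standard go-ipld-prime codec contract (Encode writes a Node to an io.Writer; Decode fills a NodeAssembler from an io.Reader) and the current basicnode package path:

    package main

    import (
        "bytes"
        "fmt"

        "github.com/ipld/go-ipld-prime/codec/raw"
        "github.com/ipld/go-ipld-prime/node/basicnode"
    )

    func main() {
        // Encode: a raw node is just its bytes, passed through verbatim.
        var buf bytes.Buffer
        if err := raw.Encode(basicnode.NewBytes([]byte("hello")), &buf); err != nil {
            panic(err)
        }

        // Decode: the reverse direction, assembling a plain bytes node.
        nb := basicnode.Prototype.Bytes.NewBuilder()
        if err := raw.Decode(nb, &buf); err != nil {
            panic(err)
        }
        out, _ := nb.Build().AsBytes()
        fmt.Printf("%s\n", out) // hello
    }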
-
- 25 Feb, 2021 2 commits
-
-
Eric Myhre authored
And, make a package which can be imported to register "all" of the multihashes. (Or at least all of them that you would've expected from go-multihash.) There are also packages that are split roughly per the transitive dependency each brings in, so you can pick and choose.

This cascaded into more work than I might've expected. Turns out a handful of the things we have multihash identifiers for actually *do not* implement the standard hash.Hash contract at all. For these, I've made small shims.

Test fixtures across the library switch to using sha2-512. Previously I had written a bunch of them to use sha3 variants, but since that is not in the standard library, I'm going to move away from that so as not to re-bloat the transitive dependency tree just for the tests and examples.
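The registry shape being described boils down to a map from multihash indicator codes to stdlib hash.Hash factories. An illustrative miniature (the names `registry` and `register` here are ours, not the package's actual API):

    package main

    import (
        "crypto/sha512"
        "fmt"
        "hash"
    )

    // registry maps multihash indicator codes to hash.Hash factories.
    var registry = map[uint64]func() hash.Hash{}

    func register(indicator uint64, factory func() hash.Hash) {
        registry[indicator] = factory
    }

    func main() {
        register(0x13, sha512.New) // 0x13 is the multihash code for sha2-512

        h := registry[0x13]()
        h.Write([]byte("hello")) // hash.Hash supports streaming, incremental writes
        fmt.Printf("%x\n", h.Sum(nil))
    }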
-
Eric Myhre authored
This significantly reworks how linking is handled. All of the significant operations involved in storing and loading data are extracted into their own separate features, and the LinkSystem just composes them.

The big advantage of this is we can now add as many helper methods to the LinkSystem construct as we want -- whereas previously, adding methods to the Link interface was a difficult thing to do, because that interface shows up in a lot of places. Link is now *just* treated as a data holder -- it doesn't need logic attached to it directly. This is much cleaner.

The way we interact with the CID libraries is also different. We're doing multihash registries ourselves, and breaking our direct use of the go-multihash library. The big upside is we're now using the familiar and standard hash.Hash interface from the golang stdlib. (And as a bonus, that actually works streamingly; go-multihash didn't.) However, this also implies a really big change for downstream users: we're no longer baking as many hashes into the new multihash registry by default.
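A sketch of the composition this describes, assuming the LinkSystem API roughly as it landed (storage is two function hooks; the codec and hasher are chosen through the registries by the CID prefix):

    package main

    import (
        "bytes"
        "fmt"
        "io"

        "github.com/ipfs/go-cid"
        ipld "github.com/ipld/go-ipld-prime"
        _ "github.com/ipld/go-ipld-prime/codec/dagjson" // registers the dag-json codec
        cidlink "github.com/ipld/go-ipld-prime/linking/cid"
        "github.com/ipld/go-ipld-prime/node/basicnode"
    )

    func main() {
        ls := cidlink.DefaultLinkSystem()

        // Storage access is just two pluggable hooks; here, an in-memory map.
        store := map[ipld.Link][]byte{}
        ls.StorageWriteOpener = func(ipld.LinkContext) (io.Writer, ipld.BlockWriteCommitter, error) {
            var buf bytes.Buffer
            return &buf, func(lnk ipld.Link) error { store[lnk] = buf.Bytes(); return nil }, nil
        }
        ls.StorageReadOpener = func(_ ipld.LinkContext, lnk ipld.Link) (io.Reader, error) {
            return bytes.NewReader(store[lnk]), nil
        }

        // The LinkPrototype names the codec and hash; the registries do the rest.
        lp := cidlink.LinkPrototype{Prefix: cid.Prefix{
            Version: 1, Codec: 0x0129, // dag-json
            MhType: 0x13, MhLength: 64, // sha2-512
        }}

        lnk, err := ls.Store(ipld.LinkContext{}, lp, basicnode.NewString("hello"))
        if err != nil {
            panic(err)
        }
        fmt.Println(lnk)
    }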
-
- 25 Dec, 2020 1 commit
-
-
Daniel Martí authored
As discussed on the issue thread, ipld.Kind and schema.TypeKind are more intuitive, closer to the spec wording, and just generally better in the long run. The changes are almost entirely automated via the commands below. Very minor changes were needed in some of the generators, and then gofmt.

    sed -ri 's/\<Kind\(\)/TypeKind()/g' **/*.go
    git checkout fluent # since it uses reflect.Value.Kind
    sed -ri 's/\<Kind_/TypeKind_/g' **/*.go
    sed -i 's/\<Kind\>/TypeKind/g' **/*.go
    sed -i 's/ReprKind/Kind/g' **/*.go

Plus manually undoing a few renames, as per Eric's review. Fixes #94.
-
- 16 Dec, 2020 1 commit
-
-
Daniel Martí authored
We only supported representing Int nodes as Go's "int" builtin type. This is fine on 64-bit, but on 32-bit, it limited those node values to just 32 bits. This is a problem in practice, because it's reasonable to want more than 32 bits for integers. Moreover, this meant that IPLD would change behavior if built for a 32-bit platform; it would not be able to decode large integers, for example, when in fact that was just a software limitation that 64-bit builds did not have.

To fix this problem, consistently use int64 for AsInt and AssignInt. A lot more functions are part of this rewrite as well; mainly, those revolving around collections and iterating. Some might never need more than 32 bits in practice, but consistency and portability are preferred. Moreover, many are interfaces, and we want IPLD interfaces to be flexible, which will be important for ADLs.

Below are some GNU sed lines which can be used to quickly update function signatures to use int64:

    sed -ri 's/(func.* AsInt.*)\<int\>/\1int64/g' **/*.go
    sed -ri 's/(func.* AssignInt.*)\<int\>/\1int64/g' **/*.go
    sed -ri 's/(func.* Length.*)\<int\>/\1int64/g' **/*.go
    sed -ri 's/(func.* LookupByIndex.*)\<int\>/\1int64/g' **/*.go
    sed -ri 's/(func.* Next.*)\<int\>/\1int64/g' **/*.go
    sed -ri 's/(func.* ValuePrototype.*)\<int\>/\1int64/g' **/*.go

Note that the function bodies, as well as the code that calls said functions, may need to be manually updated with the integer type change. That cannot be automated, because it's possible that an automated fix would silently introduce potential overflows not being handled.

Some TODOs and FIXMEs for overflow checks are removed, since we remove some now unnecessary int64->int conversions. On the other hand, the older codecs based on refmt need to gain some overflow check TODOs, since refmt uses ints. That is okay for now, since we'll phase out refmt pretty soon.

While at it, update codectools to use int64 for token Length fields, so that it properly supports full IPLD integers without machine-dependent behavior and overflow checks. The budget integer is also updated to be int64, since the lengths it uses are now int64.

Note that this refactor needed changes to the Go code generator as well as some of the tests, for the purpose of updating all the code.

Finally, note that the code-generated iterator structs do not use int64 fields internally, even though they must return int64 numbers to implement the interface. This is because they use the numeric fields to count up to a small finite amount (such as the number of fields in a Go struct), or up to the length of a map/slice. Neither of them can ever outgrow "int". Fixes #124.
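As an aside on the overflow point: the manual check that an automated rewrite can't safely insert looks something like this (an illustrative helper, not part of the library):

    package main

    import "fmt"

    // toInt narrows an int64 node value back to the platform int.
    // The round-trip comparison only fails where int is 32 bits and the
    // value doesn't fit -- exactly the case the commit message warns about.
    func toInt(v int64) (int, error) {
        if int64(int(v)) != v {
            return 0, fmt.Errorf("value %d overflows the platform's int", v)
        }
        return int(v), nil
    }

    func main() {
        n, err := toInt(1 << 40) // fine on 64-bit; errors on 32-bit builds
        fmt.Println(n, err)
    }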
-
- 01 Dec, 2020 7 commits
-
-
Eric Myhre authored
It still uses the codec/tools package, but it's very clearly not a codec itself and shouldn't be grouped like one, despite the shared common implementation details. Renamed methods to "Stringify". Stringify no longer returns errors; those errors only arise from the writer erroring, and writing into an in-memory buffer can't error.
-
Eric Myhre authored
I think this, or a variant of it, may be reasonable to rig up as a Stringer on the basicnode types, and recommend for other Node implementations to use as their Stringer too. It's a fairly verbose output: I'm mostly aiming to use it in examples. Bytes in particular are fun: I decided to make them use the hex.Dump format. (Why not?)

I've put this in a codec sub-package, because in some ways it feels like a codec -- it's something you can apply to any node, and it streams data out to an io.Writer -- but it's also worth noting it's not meant to be a multicodec or generally written with an intention of use anywhere outside of debug printf sorts of uses.

The codectools package, although it only has this one user, is a reaction to previous scenarios where I've wanted a quick debug method and desperately wanted something that gives me reasonable quoted strings... without reaching for a json package. We'll see if it's worth it over time; I'm betting yes, but not with infinite confidence. (This particular string escaping function also has the benefit of encoding even non-utf-8 strings without loss of information -- which is noteworthy, because I've recently noticed JSON _does not_; yikes.)
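For a sense of what the bytes output looks like, hex.Dump is straight from the standard library:

    package main

    import (
        "encoding/hex"
        "fmt"
    )

    func main() {
        // hex.Dump yields offsets, hex columns, and an ASCII gutter --
        // the format the debug printer borrows for bytes nodes.
        fmt.Print(hex.Dump([]byte("hello, debug printer")))
    }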
-
Eric Myhre authored
Trying to make CIDs only usable as a pointer would be nice from a consistency perspective, but has other consequences. It's easy to forget this (and I apparently just did), but... we often use link types as map keys. And this is Important.

That means trying to handle CIDs as pointers leads to nonsensical results: pointers are technically valid as a golang map key, but they don't "do the right thing" -- the equality check ends up operating on the pointer rather than on the data. This is well-defined, but generally useless for these types in context.
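A self-contained illustration of the pitfall (a stand-in struct, not the actual CID type):

    package main

    import "fmt"

    type link struct{ s string }

    func main() {
        a, b := &link{"bafy..."}, &link{"bafy..."} // equal data, distinct pointers

        byPtr := map[*link]bool{a: true}
        fmt.Println(byPtr[b]) // false: pointer keys compare by identity

        byVal := map[link]bool{{s: "bafy..."}: true}
        fmt.Println(byVal[link{s: "bafy..."}]) // true: value keys compare by content
    }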
-
Eric Myhre authored
As the comments in the diff say: it's a fairly sizable footgun for users to need to consider whether they expect the pointer form or the bare form when inspecting what an `ipld.Link` interface contains: so, let's just remove the choice. There's technically no reason for the Link.Load method to need to be attached to the pointer receiver other than removing this footgun. From the other side, though, there's no reason *not* to make it attached to the pointer receiver, because any time a value is assigned to an interface type, it necessarily heap-escapes and becomes a pointer anyway. So, making it unconditional and forcing the pointer to be clear in the user's hands seems best.
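The mechanics in miniature (illustrative types, not the real ones): attaching the method to the pointer receiver means only the pointer form satisfies the interface, so a bare value can never sneak into an interface variable.

    package main

    import "fmt"

    type Loader interface{ Load() string }

    type Link struct{ target string }

    // Pointer receiver: only *Link implements Loader.
    func (l *Link) Load() string { return "loaded " + l.target }

    func main() {
        var x Loader = &Link{"bafy..."} // the pointer form is forced
        // var y Loader = Link{"bafy..."} // compile error: Load has a pointer receiver
        fmt.Println(x.Load())
    }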
-
Eric Myhre authored
I dearly wish this wasn't such a dark art. But I really want these tests, too.
-
Eric Myhre authored
This is added in a new "dagjson2" package for the time being, but aims to replace the current dagjson package entirely, and will take over that namespace when complete. So far only the decoder/unmarshaller is included in this first commit, and the encoder/marshaller is still coming up.

This revamp is making several major strides:

- The decoding system is cleanly separated from the tree building.

- The tree building reuses the codectools token assembler systems. This saves a lot of code, and adds a lot of consistency. (By contrast, the older dagjson and dagcbor packages had similar outlines, but didn't actually share much code; this was annoying to maintain, and meant improvements to one needed to be ported to the other manually. No more.)

- The token type used by this codectools system is more tightly associated with the IPLD Data Model. In practice, what this means is links are parsed at the same stage as the rest of parsing, rather than being added on in an awkward "parse 1.5" stage. This results in much less complicated code than the old token system from refmt which the older dagjson package leans on.

- Budgets are more consistently woven through this system.

- The JSON decoder components are in their own sub-package, and should be relatively reusable. Some features like string parsing are exported in their own right, in addition to being accessible via the full recursive supports-everything decoders. (This might not often be compelling, but -- maybe. I myself wanted more reusable access to fine-grained decoder and encoder components when I was working on the "JST" experiment, so, I'm scratching my own itch here if nothing else.) End-users should mostly not need to see this, but library implementors might appreciate it.

- The codectools scratch.Reader type is used in all the decoder APIs. This results in good performance for either streaming io.Reader or already-in-memory bytes slices as data sources, and does it without doubling the number of exported functions we need (or pushing the need for feature detection into every single exported function).

- The configuration system for the decoder is actually in this repo, and it's sanely and clearly settable while also being optional. Previously, if you wanted to configure dagjson, you'd have to reach into the refmt json package for *those* configuration structs, which was workable but just very confusing and gave the end-user a lot of different places to look before finding what they need.

- The implementations are very mindful of memory allocation efficiency. Almost all of the component structures carefully utilize embedding: ReusableUnmarshaller embeds the Decoder; the Decoder embeds the scratch.Reader as well as the Token it yields; etc. This should result in overall being able to produce fully usable codecs with a minimal number of allocations -- much fewer than the older implementations required.

Some benefits have yet to be realized, but are on the map now:

- The new Token structure also includes space for position and progress tracking, which we want to use to produce better errors. (This needs more implementation work, still, though.)

- There are several configuration options for strictness. These aren't all backed up by the actual implementation yet (I'm porting over old code fast enough to write a demo and make sure the whole suite of interfaces works; it'll require further work, especially on this strictness front, later), but at the very least these are now getting documented, and several comment blocks point to where more work is needed.

- The new multicodec registry is alluded to in comments here, but isn't implemented yet. This is part of the long game big goal. The aim is to, by the end of this revamp, be able to do something about https://github.com/ipld/go-ipld-prime/issues/55 , and approach https://gist.github.com/warpfork/c0200cc4d99ee36ba5ce5a612f1d1a22 .
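As a taste of the end state this drives toward, here is decoding through the plain entry point the dagjson package exposes today (a sketch; the internals described above sit beneath this kind of surface, and note that links in dag-json, encoded as {"/": "..."} maps, are handled in the same parsing pass):

    package main

    import (
        "fmt"
        "strings"

        "github.com/ipld/go-ipld-prime/codec/dagjson"
        "github.com/ipld/go-ipld-prime/node/basicnode"
    )

    func main() {
        // Decoding drives a node assembler directly.
        nb := basicnode.Prototype.Any.NewBuilder()
        if err := dagjson.Decode(nb, strings.NewReader(`{"hello": "world"}`)); err != nil {
            panic(err)
        }
        n := nb.Build()
        v, _ := n.LookupByString("hello")
        s, _ := v.AsString()
        fmt.Println(s) // world
    }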
-
Eric Myhre authored
The docs in the diff should cover it pretty well. It's a reader-wrapper that does a lot of extremely common buffering and small-read operations that parsers tend to need. This emerges from some older generation of code in refmt with similar purpose: https://github.com/polydawn/refmt/blob/master/shared/reader.go

Unlike those antecedents, this one is a single concrete implementation, rather than using interfaces to allow switching between the two major modes of use. This is surely uglier code, but I think the result is more optimizable. The tests include aggressive checks that operations take exactly as many allocations as planned -- and mostly, that's *zero*.

In the next couple of commits, I'll be adding parsers which use this. Benchmarks are still forthcoming. My recollection from the previous bout of this in refmt was that microbenchmarking this type wasn't a great use of time, because when we start benchmarking codecs built *upon* it, and especially, when looking at the pprof reports from that, we'll see this reader showing up plain as day there, and nicely contextualized... so, we'll just save our efforts for that point.
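An illustrative miniature of the single-concrete-type pattern (not the actual scratch.Reader API, just the shape of the idea): one struct serves both streaming and in-memory sources, branching internally rather than through an interface.

    package main

    import (
        "fmt"
        "io"
        "strings"
    )

    // Reader serves bytes from either an io.Reader or an in-memory slice.
    type Reader struct {
        src io.Reader // nil when serving from mem
        mem []byte
        pos int
        one [1]byte // scratch space so stream reads don't allocate
    }

    // Readn1 returns the next single byte from whichever source is live.
    func (r *Reader) Readn1() (byte, error) {
        if r.src == nil { // in-memory mode: no copying, no allocation
            if r.pos >= len(r.mem) {
                return 0, io.EOF
            }
            b := r.mem[r.pos]
            r.pos++
            return b, nil
        }
        if _, err := io.ReadFull(r.src, r.one[:]); err != nil { // streaming mode
            return 0, err
        }
        return r.one[0], nil
    }

    func main() {
        m := &Reader{mem: []byte("hi")}
        s := &Reader{src: strings.NewReader("hi")}
        b1, _ := m.Readn1()
        b2, _ := s.Readn1()
        fmt.Printf("%c %c\n", b1, b2)
    }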
-
- 14 Nov, 2020 7 commits
-
-
Eric Myhre authored
These aren't exercised yet -- and this is accordingly still highly subject to change -- but so far in developing this package, the pattern has been "if I say maybe this should have X", it's always turned out it indeed should have X. So let's just do that and then try it out, and have the experimental code instead of the comments.
-
Eric Myhre authored
Useful for tests that do deep equality tests on structures. Same caveat about current placement of this method as in the previous commit: this might be worth detaching and shifting to a 'codectest' or 'tokentest' package. But let's see how it shakes out.
-
Eric Myhre authored
This is far too useful in testing to reproduce in each package that needs something like it. It's already shown up as desirable again as soon as I start implementing even a little bit of even one codec tokenizer, and that's gonna keep happening. This might be worth moving to some kind of a 'tokentest' or 'codectest' package instead of cluttering up this one, but... we'll see; I've got a fair amount more code to flush into commits, and after that we can reshake things and see if packages settle out differently.
-
Eric Myhre authored
There were already comments about how this would be "probably" necessary; I don't know why I wavered, it certainly is.
-
Eric Myhre authored
You can write a surprising amount of code where the compiler will shrug and silently coerce things for you. Right up until you can't. (Some test cases that'll be coming down the commit queue shortly happened to end up checking the type of the constants, and, well. Suddenly this was noticeable.)
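The Go mechanic at play, in miniature: untyped constants coerce silently until something pins their type down.

    package main

    import "fmt"

    func main() {
        // An untyped constant quietly becomes whatever type the context needs.
        const untyped = 42
        var a int32 = untyped // fine
        var b int64 = untyped // also fine

        // Once the constant carries an explicit type, the coercion stops.
        const typed int64 = 42
        // var c int32 = typed // compile error: cannot use typed (type int64)

        fmt.Println(a, b, typed)
    }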
-
Eric Myhre authored
We definitely did make a TokenWalker, heh. The other naming marsh (heh, see what I did there?) is still unresolved but can stay unresolved a while longer.
-
Eric Myhre authored
The tokenization system may look familiar to refmt's tokens -- and indeed it surely is inspired by and in the same pattern -- but it hews a fair bit closer to the IPLD Data Model definitions of kinds, and it also includes links as a token kind. Presence of link as a token kind means if we build codecs around these, the handling of links will be better and most consistently abstracted (the current dagjson and dagcbor implementations are instructive for what an odd mess it is when you have most of the tokenization happen before you get to the level that figures out links; I think we can improve on that code greatly by moving the barriers around a bit).

I made both all-at-once and pumpable versions of both the token producers and the token consumers. Each are useful in different scenarios. The pumpable versions are probably generally a bit slower, but they're also more composable. (The all-at-once versions can't be glued to each other; only to pumpable versions.)

Some new and much reduced contracts for codecs are added, but not yet implemented by anything in this commit. The comments on them are lengthy and detail the ways I'm thinking that codecs should be (re)implemented in the future to maximize usability and performance and also allow some configurability. (The current interfaces "work", but irritate me a great deal every time I use them; to be honest, I just plain guessed badly at what the API here should be the first time I did it. Configurability should be both easy to *not* engage in, but also easier if you do (and in particular, not require reaching to *another* library's packages to do it!).) More work will be required to bring this to fruition.

It may be particularly interesting to notice that the tokenization systems also allow complex keys -- maps and lists can show up as the keys to maps! This is something not allowed by the data model (and for dare I say obvious reasons)... but it's something that's possible at the schema layer (e.g. structs with representation strategies that make them representable as strings can be used as map keys), so, these functions support it.
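A sketch of the token shape this implies (the constant and field names here are illustrative, not the exact codectools declarations); note Link sitting alongside the Data Model kinds:

    package tokens

    import ipld "github.com/ipld/go-ipld-prime"

    type TokenKind uint8

    const (
        TokenKind_MapOpen TokenKind = iota
        TokenKind_MapClose
        TokenKind_ListOpen
        TokenKind_ListClose
        TokenKind_Null
        TokenKind_Bool
        TokenKind_Int
        TokenKind_Float
        TokenKind_String
        TokenKind_Bytes
        TokenKind_Link // links are tokens too: parsed in the same pass as everything else
    )

    // Token is a tagged union; one payload field is live per Kind.
    type Token struct {
        Kind  TokenKind
        Bool  bool
        Int   int64 // int64, per the earlier portability rework
        Float float64
        Str   string
        Bytes []byte
        Link  ipld.Link
    }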
-
- 21 Oct, 2020 2 commits
-
-
Eric Myhre authored
All of these were fixed by https://github.com/ipld/go-ipld-prime/pulls/85 , but it's good to have a test saying so and watching for regression.
-
Eric Myhre authored
Reported via https://github.com/LeastAuthority/go-ipld-prime/issues/7 .
-
- 20 Oct, 2020 1 commit
-
-
Eric Myhre authored
I haven't implemented the reader side because I'm not sure it's possible; the specification is insufficiently clear. I opened Issue https://github.com/ipld/specs/issues/302 to track this.
-
- 24 Sep, 2020 1 commit
-
-
Eric Myhre authored
-
- 10 Sep, 2020 1 commit
-
-
Daniel Martí authored
Buffers are not a good option for tests if the other side expects a reader. Otherwise, the code being tested could build assumptions around the reader stream being a single contiguous chunk of bytes, such as:

    _ = r.(*bytes.Buffer).Bytes()

This kind of hack might seem unlikely, but it's an easy mistake to make, especially with APIs like fmt which automatically call String methods. With bytes.Reader and strings.Reader, the types are much more restricted, so the tests need to be more faithful.
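In practice the swap is one line (decode here is a hypothetical stand-in for any function under test that takes an io.Reader):

    package main

    import (
        "bytes"
        "fmt"
        "io"
    )

    // decode stands in for whatever function the test exercises.
    func decode(r io.Reader) (int, error) {
        b, err := io.ReadAll(r)
        return len(b), err
    }

    func main() {
        data := []byte("stream me")
        // bytes.Reader exposes only reading: the code under test cannot
        // reach the backing slice the way it could with a *bytes.Buffer.
        n, _ := decode(bytes.NewReader(data))
        fmt.Println(n)
    }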
-
- 25 Aug, 2020 2 commits
-
-
Daniel Martí authored
As spotted by staticcheck. While at it, remove punctuation from another couple of errors, as per https://github.com/golang/go/wiki/CodeReviewComments#error-strings:

    Error strings should not be capitalized (unless beginning with proper nouns or acronyms) or end with punctuation, since they are usually printed following other context.
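The rule in action (a generic illustration, not code from this repo):

    package main

    import (
        "errors"
        "fmt"
    )

    func main() {
        // Lowercase and unpunctuated, because the string usually ends up
        // embedded in a longer message.
        err := errors.New("unexpected end of stream")
        fmt.Println(fmt.Errorf("decoding block: %w", err))
        // prints: decoding block: unexpected end of stream
    }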
-
Daniel Martí authored
There were two vet errors in two packages containing tests, resulting in 'go test' erroring out before any tests were run. Both were due to the same reason - an Error method that ends up calling itself forever, thus a panic. While at it, 'gofmt -w -s' everything, which removes a redundant type.
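The self-calling Error bug, in miniature (an illustrative type; vet's printf checker flags the bad version):

    package main

    import "fmt"

    type decodeError struct{ msg string }

    // Buggy version: %v on the value itself re-enters Error, recursing forever.
    //   func (e decodeError) Error() string { return fmt.Sprintf("decode: %v", e) }

    // Fixed version: format the field, not the value.
    func (e decodeError) Error() string { return fmt.Sprintf("decode: %v", e.msg) }

    func main() {
        fmt.Println(decodeError{"truncated input"})
    }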
-
- 29 Jun, 2020 2 commits
-
-
Eric Myhre authored
-
Eric Myhre authored
Hopefully this increases clarity and eases comprehension. Notes and discussion can be found at https://github.com/ipld/go-ipld-prime/issues/54 (and also I suppose in some of our weekly video chats, but I'd have to go on quite a dig to find the relevant links and time). Many many references to 'ns' are also updated to 'np', making the line count in this diff pretty wild.
-
- 26 Jun, 2020 1 commit
-
-
Eric Myhre authored
See the changelog for discussion; this had already been on the docket for a while now.
-
- 13 May, 2020 1 commit
-
-
Eric Myhre authored
Key coloration is easy because we already have key emission in one place, and we already have size computation for alignment separated from emission. Value coloration will be a little more involved.
-
- 10 May, 2020 5 commits
-
-
Eric Myhre authored
They do.
-
Eric Myhre authored
Alignment just proceeds around them, leaving appropriate space based on what other rows needed in order to align with each other. If a column is absent at the end of a row, the whole row wraps up fast.
-
Eric Myhre authored
They do.
-
Eric Myhre authored
The first two example fixtures of what I wanted to achieve pass now :3 That's exciting.
-
Eric Myhre authored
See the package docs in 'jst.go' for an introduction to what and why; tldr: I want pretty and I want JSON and I want them at the same time. I'm putting this in the codec package tree because it fits there more so than anywhere else, but it's probably not going to be assigned a multicodec magic number or anything like that; it's really just JSON.

This code doesn't *quite* pass its own fixture tests yet, but nearly. I thought this would be a nice checkpoint because the only thing left is dealing with the fiddly trailing-comma-or-not bits. This first pass also completely ignores character encoding issues, the correct counting of graphemes, and so forth; those are future work. Most configurability is also speculative for 'first draft' reasons. All good things in time.

This is something of a little hobby sidequest. It's not particularly related to the hashing-and-content-addressing quest usually in focus here. Accordingly, as you may be able to notice from some of the comments in the package documentation block, I did initially try to write this over in the refmt repo instead. However, I got about 20 seconds in on that effort before realizing that our Node interface here would be a wildly better interface to build this with. Later, I also started realizing Selectors would be Quite Good for other forms of configuration that I want to add to this system... so, it's rapidly turning into a nice little exercise for other core IPLD primitives! Yay! Copacetic.
-
- 28 Apr, 2020 1 commit
-
-
hannahhoward authored
Fix an error with marshalling that causes bytes nodes to get written as links if they are written after a link, because the tag was never reset
-