- 01 Dec, 2020 9 commits
-
-
Eric Myhre authored
I think this, or a variant of it, may be reasonable to rig up as a Stringer on the basicnode types, and recommend for other Node implementations to use as their Stringer too. It's a fairly verbose output: I'm mostly aiming to use it in examples. Bytes in particular are fun: I decided to make them use the hex.Dump format. (Why not?)

I've put this in a codec sub-package, because in some ways it feels like a codec -- it's something you can apply to any node, and it streams data out to an io.Writer -- but it's also worth noting it's not meant to be a multicodec, or generally written with an intention of use anywhere outside of debug-printf sorts of uses.

The codectools package, although it only has this one user, is a reaction to previous scenarios where I've wanted a quick debug method and desperately wanted something that gives me reasonable quoted strings... without reaching for a json package. We'll see if it's worth it over time; I'm betting yes, but not with infinite confidence. (This particular string escaping function also has the benefit of encoding even non-utf-8 strings without loss of information -- which is noteworthy, because I've recently noticed JSON _does not_; yikes.)
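A quick self-contained sketch of both points about bytes and strings, using only the standard library (not code from this commit): hex.Dump gives a pleasant debug rendering for bytes, and encoding/json silently mangles strings that aren't valid UTF-8, replacing invalid bytes with U+FFFD.

```go
package main

import (
	"encoding/hex"
	"encoding/json"
	"fmt"
)

func main() {
	// Bytes rendered in the hex.Dump format: offset, hex columns, ASCII gutter.
	fmt.Print(hex.Dump([]byte("some bytes\x00\x01\x02")))

	// A string that is not valid UTF-8.
	notUTF8 := "abc\xff\xfe"
	out, _ := json.Marshal(notUTF8)
	// Prints "abc\ufffd\ufffd" -- the original bytes are unrecoverable.
	fmt.Println(string(out))
}
```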
-
Eric Myhre authored
In fact this became a docs change; it is often desirable to *not* use the cidlink.Link type as a pointer.
-
Eric Myhre authored
-
Eric Myhre authored
Trying to make CIDs only usable as a pointer would be nice from a consistency perspective, but has other consequences. It's easy to forget this (and I apparently just did), but... we often use link types as map keys. And this is Important. That means trying to handle CIDs as pointers leads to nonsensical results: pointers are technically valid as a golang map key, but they don't "do the right thing" -- the equality check ends up operating on the pointer rather than on the data. This is well-defined, but generally useless for these types in context.
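A minimal demonstration of that pointer-key pitfall, with a hypothetical CID stand-in type (not the real cid package):

```go
package main

import "fmt"

func main() {
	type CID struct{ hash string }
	// Two distinct pointers to equal data.
	a, b := &CID{"somehash"}, &CID{"somehash"}

	// Pointer keys compare by identity, not by the data pointed at.
	m := map[*CID]bool{a: true}
	fmt.Println(m[a]) // true
	fmt.Println(m[b]) // false -- equal data, different pointer: lookup misses

	// Value keys compare on the data itself, which is what links need.
	n := map[CID]bool{*a: true}
	fmt.Println(n[*b]) // true
}
```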
-
Eric Myhre authored
As the comments in the diff say: it's a fairly sizable footgun for users to need to consider whether they expect the pointer form or the bare form when inspecting what an `ipld.Link` interface contains: so, let's just remove the choice. There's technically no reason for the Link.Load method to need to be attached to the pointer receiver other than removing this footgun. From the other side, though, there's no reason *not* to make it attached to the pointer receiver, because any time a value is assigned to an interface type, it necessarily heap-escapes and becomes a pointer anyway. So, making it unconditional and forcing the pointer to be clear in the user's hands seems best.
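A sketch of the mechanism with hypothetical stand-in types (not the actual ipld.Link or cidlink API): attaching the method to the pointer receiver means only the pointer form can satisfy the interface, so the choice is removed.

```go
package main

import "fmt"

type Link interface{ Load() string }

type cidLink struct{ target string }

// Load is attached to the pointer receiver, so only *cidLink satisfies
// Link -- users can never end up holding the bare struct in the interface.
func (l *cidLink) Load() string { return l.target }

func main() {
	var lnk Link = &cidLink{"somewhere"}
	fmt.Println(lnk.Load())
	// var bad Link = cidLink{"somewhere"} // compile error: Load has a pointer receiver
}
```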
-
Eric Myhre authored
Codec revamp
-
Eric Myhre authored
I dearly wish this wasn't such a dark art. But I really want these tests, too.
-
Eric Myhre authored
This is added in a new "dagjson2" package for the time being, but aims to replace the current dagjson package entirely, and will take over that namespace when complete. So far only the decoder/unmarshaller is included in this first commit; the encoder/marshaller is still coming up.

This revamp is making several major strides:

- The decoding system is cleanly separated from the tree building.
- The tree building reuses the codectools token assembler systems. This saves a lot of code, and adds a lot of consistency. (By contrast, the older dagjson and dagcbor packages had similar outlines, but didn't actually share much code; this was annoying to maintain, and meant improvements to one needed to be ported to the other manually. No more.)
- The token type used by this codectools system is more tightly associated with the IPLD Data Model. In practice, what this means is links are parsed at the same stage as the rest of parsing, rather than being added on in an awkward "parse 1.5" stage. This results in much less complicated code than the old token system from refmt which the older dagjson package leans on.
- Budgets are more consistently woven through this system.
- The JSON decoder components are in their own sub-package, and should be relatively reusable. Some features like string parsing are exported in their own right, in addition to being accessible via the full recursive supports-everything decoders. (This might not often be compelling, but -- maybe. I myself wanted more reusable access to fine-grained decoder and encoder components when I was working on the "JST" experiment, so, I'm scratching my own itch here if nothing else.) End-users should mostly not need to see this, but library implementors might appreciate it.
- The codectools scratch.Reader type is used in all the decoder APIs. This results in good performance for either streaming io.Reader or already-in-memory bytes slices as data sources, and does it without doubling the number of exported functions we need (or pushing the need for feature detection into every single exported function).
- The configuration system for the decoder is actually in this repo, and it's sanely and clearly settable while also being optional. Previously, if you wanted to configure dagjson, you'd have to reach into the refmt json package for *those* configuration structs, which was workable but just very confusing and gave the end-user a lot of different places to look before finding what they need.
- The implementations are very mindful of memory allocation efficiency. Almost all of the component structures carefully utilize embedding: ReusableUnmarshaller embeds the Decoder; the Decoder embeds the scratch.Reader as well as the Token it yields; etc. This should result in overall being able to produce fully usable codecs with a minimal number of allocations -- much fewer than the older implementations required. (See the sketch below.)

Some benefits have yet to be realized, but are on the map now:

- The new Token structure also includes space for position and progress tracking, which we want to use to produce better errors. (This needs more implementation work, still, though.)
- There are several configuration options for strictness. These aren't all backed up by the actual implementation yet (I'm porting over old code fast enough to write a demo and make sure the whole suite of interfaces works; it'll require further work, especially on this strictness front, later), but at the very least these are now getting documented, and several comment blocks point to where more work is needed.
- The new multicodec registry is alluded to in comments here, but isn't implemented yet. This is part of the long-game big goal. The aim is to, by the end of this revamp, be able to do something about https://github.com/ipld/go-ipld-prime/issues/55 , and approach https://gist.github.com/warpfork/c0200cc4d99ee36ba5ce5a612f1d1a22 .
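To illustrate the embedding pattern from the list above, a toy sketch with simplified stand-in types (not the actual Decoder or Token definitions): constructing the outermost struct is one allocation that carries all the component state.

```go
package main

import "fmt"

type token struct {
	kind byte
	str  string
}

type decoder struct {
	buf [256]byte // scratch space for reads, part of the same allocation
	tok token     // the token the decoder yields, reused between calls
}

type reusableUnmarshaller struct {
	decoder // embedded: no separate allocation, no pointer chasing
}

func main() {
	u := &reusableUnmarshaller{} // one allocation for the whole assembly
	u.tok.kind = 's'             // embedded fields are promoted and directly reachable
	fmt.Println(u.tok.kind)
}
```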
-
Eric Myhre authored
The docs in the diff should cover it pretty well. It's a reader-wrapper that does a lot of extremely common buffering and small-read operations that parsers tend to need. This emerges from some older generation of code in refmt with similar purpose: https://github.com/polydawn/refmt/blob/master/shared/reader.go

Unlike those antecedents, this one is a single concrete implementation, rather than using interfaces to allow switching between the two major modes of use. This is surely uglier code, but I think the result is more optimizable.

The tests include aggressive checks that operations take exactly as many allocations as planned -- and mostly, that's *zero*.

In the next couple of commits, I'll be adding parsers which use this. Benchmarks are still forthcoming. My recollection from the previous bout of this in refmt was that microbenchmarking this type wasn't a great use of time, because when we start benchmarking codecs built *upon* it -- and especially, when looking at the pprof reports from that -- we'll see this reader showing up plain as day there, and nicely contextualized... so, we'll just save our efforts for that point.
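One way such exact-allocation-count checks can be written, using the standard library's testing.AllocsPerRun -- a generic sketch, not the actual test code from this commit:

```go
package reader_test

import "testing"

func TestReadSmallAllocs(t *testing.T) {
	buf := make([]byte, 64)
	// AllocsPerRun reports the average number of heap allocations per call.
	allocs := testing.AllocsPerRun(100, func() {
		_ = buf[:8] // stand-in for a small read operation under test
	})
	if allocs != 0 {
		t.Errorf("expected zero allocations per read, got %v", allocs)
	}
}
```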
-
- 30 Nov, 2020 1 commit
-
-
Will authored
This change will look at the destination package that codegen is being built into, and will skip generation of types that are already declared by files not prefixed with `ipldsch_`. This isn't the cleanest escape-hatch, but it's a start.
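A sketch of the kind of scan this implies, using go/parser to collect type names already declared in the destination package while ignoring `ipldsch_`-prefixed files (a hypothetical helper, not the actual codegen code):

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
	"path/filepath"
	"strings"
)

// declaredTypes returns the names of types declared in dir by files
// that are not generated output (i.e. not prefixed with "ipldsch_").
func declaredTypes(dir string) (map[string]bool, error) {
	names := map[string]bool{}
	files, err := filepath.Glob(filepath.Join(dir, "*.go"))
	if err != nil {
		return nil, err
	}
	fset := token.NewFileSet()
	for _, f := range files {
		if strings.HasPrefix(filepath.Base(f), "ipldsch_") {
			continue // generated files don't count as user declarations
		}
		parsed, err := parser.ParseFile(fset, f, nil, 0)
		if err != nil {
			return nil, err
		}
		for _, decl := range parsed.Decls {
			if gd, ok := decl.(*ast.GenDecl); ok && gd.Tok == token.TYPE {
				for _, spec := range gd.Specs {
					names[spec.(*ast.TypeSpec).Name.Name] = true
				}
			}
		}
	}
	return names, nil
}

func main() {
	types, err := declaredTypes(".")
	if err != nil {
		panic(err)
	}
	fmt.Println(types) // generation would skip any type listed here
}
```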
-
- 18 Nov, 2020 1 commit
-
-
Eric Myhre authored
add import to ipld in ipldsch_types.go
-
- 17 Nov, 2020 6 commits
-
-
Will Scott authored
cleanup from #105
-
Eric Myhre authored
Codegen output rearrange
-
Eric Myhre authored
An underscore; and less "gen", because reviewers indicated it felt redundant.
-
Eric Myhre authored
I'd still probably prefer to replace this with simply having a stable order that is carried through consistently, but that remains blocked behind getting self-hosted types, and while it so happens I also got about 80% of the way there on those today, the second 80% may take another day. Better make this stable rather than wait.
-
Eric Myhre authored
Also, emit some comments around the type definitions. The old file layout is still available, but renamed to GenerateSplayed. It will probably be removed in the future. The new format does not currently have stable output order. I'd like to preserve the original order given by the schema, but our current placeholder types for schema data don't have this. More work needed on this.
-
Eric Myhre authored
Validate struct builder sufficiency
-
- 14 Nov, 2020 17 commits
-
-
Will Scott authored
-
Eric Myhre authored
Fresh take on codec APIs, and some tokenization utilities.
-
Eric Myhre authored
These aren't exercised yet -- and this is accordingly still highly subject to change -- but so far in developing this package, the pattern has been: every time I say "maybe this should have X", it turns out it indeed should have X. So let's just do that and then try it out, and have the experimental code instead of the comments.
-
Eric Myhre authored
Useful for tests that do deep equality tests on structures. Same caveat about current placement of this method as in the previous commit: this might be worth detaching and shifting to a 'codectest' or 'tokentest' package. But let's see how it shakes out.
-
Eric Myhre authored
This is far too useful in testing to reproduce in each package that needs something like it. It's already shown up as desirable again as soon as I start implementing even a little bit of even one codec tokenizer, and that's gonna keep happening. This might be worth moving to some kind of a 'tokentest' or 'codectest' package instead of cluttering up this one, but... we'll see; I've got a fair amount more code to flush into commits, and after that we can reshake things and see if packages settle out differently.
-
Eric Myhre authored
There were already comments about how this would be "probably" necessary; I don't know why I wavered, it certainly is.
-
Eric Myhre authored
You can write a surprising amount of code where the compiler will shrug and silently coerce things for you. Right up until you can't. (Some test cases that'll be coming down the commit queue shortly happened to end up checking the type of the constants, and, well. Suddenly this was noticeable.)
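The general Go phenomenon being described, sketched with hypothetical constants: untyped constants coerce silently wherever an integer fits, while typed constants stop that at the first mismatched use.

```go
package main

import "fmt"

type TokenKind uint8

const (
	// Untyped constant: the compiler silently converts it to whatever
	// integer-ish type the context demands.
	mapOpenUntyped = 0x01
	// Typed constant: usable only where a TokenKind is expected.
	mapOpenTyped TokenKind = 0x01
)

func main() {
	var k TokenKind = mapOpenUntyped // fine: untyped constant converts
	var i int = mapOpenUntyped       // also fine: converts here too
	fmt.Println(k, i)
	// var j int = mapOpenTyped // compile error: cannot use TokenKind as int
}
```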
-
Eric Myhre authored
We definitely did make a TokenWalker, heh. The other naming marsh (heh, see what I did there?) is still unresolved but can stay unresolved a while longer.
-
Eric Myhre authored
The tokenization system may look familiar to refmt's tokens -- and indeed it surely is inspired by and in the same pattern -- but it hews a fair bit closer to the IPLD Data Model definitions of kinds, and it also includes links as a token kind. Presence of link as a token kind means if we build codecs around these, the handling of links will be better and more consistently abstracted. (The current dagjson and dagcbor implementations are instructive for what an odd mess it is when you have most of the tokenization happen before you get to the level that figures out links; I think we can improve on that code greatly by moving the barriers around a bit.)

I made both all-at-once and pumpable versions of both the token producers and the token consumers. Each are useful in different scenarios. The pumpable versions are probably generally a bit slower, but they're also more composable. (The all-at-once versions can't be glued to each other; only to pumpable versions.)

Some new and much reduced contracts for codecs are added, but not yet implemented by anything in this commit. The comments on them are lengthy and detail the ways I'm thinking that codecs should be (re)implemented in the future to maximize usability and performance and also allow some configurability. (The current interfaces "work", but irritate me a great deal every time I use them; to be honest, I just plain guessed badly at what the API here should be the first time I did it. Configurability should be both easy to *not* engage in, but also easier if you do -- and in particular, it should not require reaching to *another* library's packages to do it!) More work will be required to bring this to fruition.

It may be particularly interesting to notice that the tokenization systems also allow complex keys -- maps and lists can show up as the keys to maps! This is something not allowed by the Data Model (and, dare I say, for obvious reasons)... but it's something that's possible at the schema layer (e.g. structs with representation strategies that make them representable as strings can be used as map keys), so, these functions support it.
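An illustrative shape for such a token, with simplified names (not the exact exported API) -- the notable part being that link is a first-class token kind alongside the Data Model kinds:

```go
package tokens

// Kind enumerates the token kinds, hewing close to the IPLD Data Model.
type Kind uint8

const (
	KindMapOpen Kind = iota
	KindMapClose
	KindListOpen
	KindListClose
	KindNull
	KindBool
	KindInt
	KindFloat
	KindString
	KindBytes
	KindLink // links are tokens too -- handled at the same stage as everything else
)

// Token is a single tagged-union value; which field is meaningful
// depends on Kind. Reusing one Token between reads avoids allocations.
type Token struct {
	Kind  Kind
	Bool  bool
	Int   int64
	Float float64
	Str   string
	Bytes []byte
	Link  string // stand-in for a real link type (e.g. a CID)
}
```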
-
Eric Myhre authored
Add a demo ADL (rot13adl)
-
Eric Myhre authored
Exported a symbol that's needed for this to be possible from outside the package. This still probably deserves an interface, too, though. Comments on that are also updated, but we'll still leave that for future work (more examples of more ADLs wanted before we try to solidify on something there).
-
Eric Myhre authored
-
Eric Myhre authored
rot13adl demo: finish documentation; simplify Reify; more recommendations about how to implement Reify; consistent export symbol conventions; some fixes.
-
Eric Myhre authored
-
Eric Myhre authored
Introduce traversal function that selects links out of a tree.
-
Will Scott authored
-
Will Scott authored
-
- 13 Nov, 2020 1 commit
-
-
Eric Myhre authored
-
- 02 Nov, 2020 1 commit
-
-
Eric Myhre authored
Codegen various improvements
-
- 30 Oct, 2020 4 commits
-
-
Eric Myhre authored
-
Eric Myhre authored
-
Eric Myhre authored
Absence of this is an oversight, and I just happened to catch it while passing through the vicinity. Also: dropped a comment for later review on the bytesprefix strategy. While adding the stringprefix strategy, it's hard not to notice that variable length strings are allowed; so, it occurs to me we should probably do the same for byteprefix. (Also, possibly renaming it to byte*s*prefix.) Doing this would also fix the ancient weirdness of the map being flipped in an awkward way to evade int keys, which is a very happy coincidence (and in retrospect, I'm not sure why we didn't think of this solution earlier).
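A sketch of what variable-length byte-prefix dispatch could look like (a hypothetical helper, not the library's API) -- longest prefix checked first, so overlapping prefixes resolve deterministically:

```go
package main

import (
	"bytes"
	"fmt"
	"sort"
)

// matchPrefix returns the union member whose byte prefix matches data.
// Prefixes may have different lengths; longer ones take priority.
func matchPrefix(prefixes map[string]string, data []byte) (string, bool) {
	keys := make([]string, 0, len(prefixes))
	for k := range prefixes {
		keys = append(keys, k)
	}
	sort.Slice(keys, func(i, j int) bool { return len(keys[i]) > len(keys[j]) })
	for _, k := range keys {
		if bytes.HasPrefix(data, []byte(k)) {
			return prefixes[k], true
		}
	}
	return "", false
}

func main() {
	members := map[string]string{"\x00": "TypeA", "\x00\x01": "TypeB"}
	name, ok := matchPrefix(members, []byte("\x00\x01rest"))
	fmt.Println(name, ok) // TypeB true -- the longer prefix wins
}
```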
-
Eric Myhre authored
The capitalization on this has varied a bit over time. It's been tempting to capitalize these things because they're clearly two english words. However, I'm taking the line that they're a single word that just happens to have been derived from two english words, and such a neologism does not retain mid-word capitalization. (I'm looking at this right now because I'm attempting to write some new code around the schema-schema outputs, and so I want any dissonance and inconsistency gone from the start in this new code.)
-