- 25 Dec, 2020 1 commit
-
-
Daniel Martí authored
As discussed on the issue thread, ipld.Kind and schema.TypeKind are more intuitive, closer to the spec wording, and just generally better in the long run. The changes are almost entirely automated via the commands below. Very minor changes were needed in some of the generators, and then gofmt.

```sh
sed -ri 's/\<Kind\(\)/TypeKind()/g' **/*.go
git checkout fluent # since it uses reflect.Value.Kind
sed -ri 's/\<Kind_/TypeKind_/g' **/*.go
sed -i 's/\<Kind\>/TypeKind/g' **/*.go
sed -i 's/ReprKind/Kind/g' **/*.go
```

Plus manually undoing a few renames, as per Eric's review. Fixes #94.
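To illustrate the rename from the caller's side, a minimal sketch (the basicnode import path and constructor are assumptions based on the library at this time, not part of the commit):

```go
package main

import (
	"fmt"

	basicnode "github.com/ipld/go-ipld-prime/node/basic"
)

func main() {
	n := basicnode.NewInt(3)
	// Node.Kind() is the method formerly named ReprKind(); it reports the
	// node's Data Model kind.
	fmt.Println(n.Kind())
	// Schema-level code makes the parallel move: what was schema.Type.Kind()
	// is now schema.Type.TypeKind().
}
```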
-
- 17 Dec, 2020 1 commit
-
-
Daniel Martí authored
This should be more intuitive to Go programmers, since assignments are generally trivial operations, but conversions imply that extra work might be needed to adapt the value to fit in the recipient. The entire change is just:

```sh
sed -ri 's/AssignNode/ConvertFrom/g' **/*.go
```

Downstream users can very likely use the same line to fix their function declarations and calls. Fixes #95.
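A sketch of how the rename lands in calling code (the basicnode identifiers are assumptions from this era of the library; only the ConvertFrom name comes from the commit):

```go
package main

import (
	ipld "github.com/ipld/go-ipld-prime"
	basicnode "github.com/ipld/go-ipld-prime/node/basic"
)

// copyInto shows the rename: what used to read na.AssignNode(n)
// now reads na.ConvertFrom(n).
func copyInto(na ipld.NodeAssembler, n ipld.Node) error {
	return na.ConvertFrom(n)
}

func main() {
	nb := basicnode.Prototype.Any.NewBuilder()
	if err := copyInto(nb, basicnode.NewString("hello")); err != nil {
		panic(err)
	}
	_ = nb.Build()
}
```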
-
- 16 Dec, 2020 1 commit
-
-
Daniel Martí authored
We only supported representing Int nodes as Go's "int" builtin type. This is fine on 64-bit, but on 32-bit it limited those node values to just 32 bits. This is a problem in practice, because it's reasonable to want more than 32 bits for integers. Moreover, it meant that IPLD would change behavior when built for a 32-bit platform; it could fail to decode large integers, for example, when in fact that was just a software limitation that 64-bit builds did not have.

To fix this problem, consistently use int64 for AsInt and AssignInt. A lot more functions are part of this rewrite as well; mainly, those revolving around collections and iterating. Some might never need more than 32 bits in practice, but consistency and portability are preferred. Moreover, many are interfaces, and we want IPLD interfaces to be flexible, which will be important for ADLs.

Below are some GNU sed lines which can be used to quickly update function signatures to use int64:

```sh
sed -ri 's/(func.* AsInt.*)\<int\>/\1int64/g' **/*.go
sed -ri 's/(func.* AssignInt.*)\<int\>/\1int64/g' **/*.go
sed -ri 's/(func.* Length.*)\<int\>/\1int64/g' **/*.go
sed -ri 's/(func.* LookupByIndex.*)\<int\>/\1int64/g' **/*.go
sed -ri 's/(func.* Next.*)\<int\>/\1int64/g' **/*.go
sed -ri 's/(func.* ValuePrototype.*)\<int\>/\1int64/g' **/*.go
```

Note that the function bodies, as well as the code that calls those functions, may need to be manually updated with the integer type change. That cannot be automated, because an automated fix could silently introduce unhandled overflows.

Some TODOs and FIXMEs for overflow checks are removed, since we remove some now-unnecessary int64-to-int conversions. On the other hand, the older codecs based on refmt need to gain some overflow-check TODOs, since refmt uses ints. That is okay for now, since we'll phase out refmt pretty soon.

While at it, update codectools to use int64 for token Length fields, so that it properly supports full IPLD integers without machine-dependent behavior and overflow checks. The budget integer is also updated to be int64, since the lengths it uses are now int64.

Note that this refactor needed changes to the Go code generator as well as some of the tests, in order to update all the code.

Finally, note that the code-generated iterator structs do not use int64 fields internally, even though they must return int64 numbers to implement the interface. This is because they use those numeric fields to count up to a small finite amount (such as the number of fields in a Go struct), or up to the length of a map or slice; neither can ever outgrow "int". Fixes #124.
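For reference, a sketch of the affected signatures after this change (a subset, reconstructed from the description above rather than copied from the source):

```go
package sketch

// Before: AsInt() (int, error); AssignInt(int) error; Length() int;
// LookupByIndex(idx int) (Node, error). After, int64 throughout:
type Node interface {
	AsInt() (int64, error)
	Length() int64
	LookupByIndex(idx int64) (Node, error)
	// ...the other Node methods are unchanged by this commit.
}

type NodeAssembler interface {
	AssignInt(i int64) error
	// ...
}
```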
-
- 14 Dec, 2020 4 commits
-
-
Eric Myhre authored
-
Eric Myhre authored
clean up node/gendemo regeneration
-
Eric Myhre authored
-
Eric Myhre authored
Do regeneration. Some diff churn: the codegen doesn't yet jibe smoothly with a recent change made in service of go vet. We'll want to straighten that out soon, I guess.
-
- 13 Dec, 2020 6 commits
-
-
Eric Myhre authored
-
Eric Myhre authored
Schema types rebased to use codegen types for the data
-
Eric Myhre authored
Cannot quite wire that up yet because of some other still-incomplete features.
-
Eric Myhre authored
Since https://github.com/ipld/go-ipld-prime/pull/121, the presence of fields is actually checked... but that code doesn't understand implicit fields yet, which means we need a lot of filler. Also, the lack of the "members" field for unions? That was just plain wrong. Good thing we're catching things like that now.
-
Eric Myhre authored
Filenames change due to https://github.com/ipld/go-ipld-prime/pull/105. gofmt is also applied for the first time. From here on out, `go generate` should just cause these files to be automagically updated and formatted.
-
Eric Myhre authored
Move the existing setup from the schema-schema "demo" dir to here, and rig it up with the go generate conventions that I'm hoisting back from mvdan's https://github.com/ipld/go-ipld-adl-hamt/blob/master/gen.go . Move the parse tests with it.
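In outline, the convention being hoisted looks something like this (a sketch; see the linked gen.go for the real arrangement):

```go
// +build ignore

package main

// This file is excluded from normal builds and run via a directive placed
// in the package proper:
//
//	//go:generate go run gen.go
//
// so that `go generate` (re)emits the ipldsch_*.go files.
func main() {
	// ...invoke the code generator with the schema types and adjunct config...
}
```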
-
- 04 Dec, 2020 7 commits
-
-
Eric Myhre authored
draft of schema types using codegen for data model, with a package for the fully validated data which is implemented by retaining and accessing into the raw data.
-
Eric Myhre authored
codegen: assembler for struct with map representation validates all non-optional fields are present
-
Eric Myhre authored
codegen: assembler for struct with map representation now validates all non-optional fields are present. This continues what https://github.com/ipld/go-ipld-prime/pull/111/ did and adds the same logic to the map representation. The actual state tracking works the same way (and was mostly already there).

Rearranged the tests slightly. Made error messages include both the field name and the serial key when they differ due to a rename directive. (It's possible this error would get nicer if it used a list of StructField instead of just strings, but it would also get more complicated. Maybe revisit later.)
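A minimal sketch of the kind of presence tracking involved (names and layout are hypothetical; the generated assemblers are more involved):

```go
package sketch

import "fmt"

// structAssembler tracks which fields have been assigned via a bitmask.
type structAssembler struct {
	set uint64 // bit i is set once field i has been assigned
}

func (a *structAssembler) markSet(i int) { a.set |= 1 << uint(i) }

// finish errors on the first non-optional field that was never assigned,
// reporting both the field name and its serial key when a rename
// directive makes them differ.
func (a *structAssembler) finish(names, serials []string, optional uint64) error {
	for i := range names {
		if a.set&(1<<uint(i)) != 0 || optional&(1<<uint(i)) != 0 {
			continue
		}
		if names[i] != serials[i] {
			return fmt.Errorf("missing required field: %s (serialized as %q)", names[i], serials[i])
		}
		return fmt.Errorf("missing required field: %s", names[i])
	}
	return nil
}
```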
-
Eric Myhre authored
-
Eric Myhre authored
(Trying to call Build on an assembler that previously errored is very likely to panic, so the fluent.Build function should return before trying to do that.)
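Roughly the shape this gives fluent.Build (a reconstruction using the fluent package's Recover and WrapAssembler helpers, not the literal diff):

```go
package sketch

import (
	ipld "github.com/ipld/go-ipld-prime"
	"github.com/ipld/go-ipld-prime/fluent"
)

// Build runs fn under panic-recovery and, per the fix described above,
// returns early on error instead of calling Build on an assembler that
// has already errored (which would very likely panic).
func Build(np ipld.NodePrototype, fn func(fluent.NodeAssembler)) (ipld.Node, error) {
	nb := np.NewBuilder()
	if err := fluent.Recover(func() { fn(fluent.WrapAssembler(nb)) }); err != nil {
		return nil, err
	}
	return nb.Build(), nil
}
```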
-
Daniel Martí authored
Reduces the output of 'go vet ./...' from 374 lines to 96. Many warnings remain, but I have lost my patience for today. Most of the changes below were automated, especially the single-line mixin expressions. Unfortunately, many of the Traits structs required manual copy-pasting.
-
Daniel Martí authored
This is the mixins package, so "Mixin" in filenames is redundant.
-
- 01 Dec, 2020 13 commits
-
-
Daniel Martí authored
The files in this codegen demo correspond to an older version of the Go code generator. That's not terrible in itself, but it did make repeated uses of 'go test' fail:

```
$ go test
testing: warning: no tests to run
PASS
ok	github.com/ipld/go-ipld-prime/node/gendemo	0.112s
$ go test
# github.com/ipld/go-ipld-prime/node/gendemo [github.com/ipld/go-ipld-prime/node/gendemo.test]
./minima.go:10:2: midvalue redeclared in this block
	previous declaration at ./ipldsch_minima.go:13:27
[...]
```
-
Eric Myhre authored
The original idea of this branch was to explore some reusable components for codecs; maybe to actually get a useful prettyprinter; and... it actually turned a lot into practical discovery about string encoding and escaping. Some of those goals were partially accomplished, but in general, this seems better to chalk up as a learning experience. https://github.com/ipld/go-ipld-prime/pull/89#issuecomment-703327909 already does a good job of discussing why, and what was learned.

A lot of the reusable codec components stuff has also now shaken out, just... in other PRs that were written after the learnings here. Namely, https://github.com/ipld/go-ipld-prime/pull/101/ was able to introduce some tree transformers; and then https://github.com/ipld/go-ipld-prime/pull/112 demonstrates how those can compose into a complete codec end to end. There's still work to go on these, too, but they seem to have already grabbed the concept of reusable parts I was hoping for here and gotten farther with it, so.

These diffs are interesting enough that I want to keep them referencable in history, but I'm merging them with the "-s ours" strategy, so that the diffs don't actually land any impact on master. These commits are for reference only.
-
Eric Myhre authored
It still uses the codec/tools package, but it's very clearly not a codec itself and shouldn't be grouped like one, despite the shared common implementation details. Renamed methods to "Stringify". Stringify no longer returns errors; those errors only arise from the writer erroring, and writing into an in-memory buffer can't error.
-
Eric Myhre authored
-
Eric Myhre authored
I think this, or a variant of it, may be reasonable to rig up as a Stringer on the basicnode types, and to recommend for other Node implementations to use as their Stringer too. It's a fairly verbose output: I'm mostly aiming to use it in examples. Bytes in particular are fun: I decided to make them use the hex.Dump format. (Why not?)

I've put this in a codec sub-package, because in some ways it feels like a codec -- it's something you can apply to any node, and it streams data out to an io.Writer -- but it's also worth noting it's not meant to be a multicodec, and isn't written with an intention of use anywhere outside of debug-printf sorts of uses.

The codectools package, although it only has this one user, is a reaction to previous scenarios where I've wanted a quick debug method and desperately wanted something that gives me reasonable quoted strings... without reaching for a json package. We'll see if it's worth it over time; I'm betting yes, but not with infinite confidence. (This particular string escaping function also has the benefit of encoding even non-utf-8 strings without loss of information -- which is noteworthy, because I've recently noticed JSON _does not_; yikes.)
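Both tricks mentioned here are available from the standard library, so a tiny runnable illustration (standalone; not the package's actual code):

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strconv"
)

func main() {
	// hex.Dump is the verbose bytes rendering mentioned above.
	fmt.Print(hex.Dump([]byte("hello, ipld")))
	// Quote-style escaping keeps non-UTF-8 strings lossless, which plain
	// JSON encoding does not: the invalid byte survives as "\xe9".
	fmt.Println(strconv.Quote("caf\xe9"))
}
```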
-
Eric Myhre authored
In fact this became a docs change; it is often desirable to *not* use the cidlink.Link type as a pointer.
-
Eric Myhre authored
-
Eric Myhre authored
Trying to make CIDs only usable as a pointer would be nice from a consistency perspective, but has other consequences. It's easy to forget this (and I apparently just did), but... we often use link types as map keys. And this is important. That means trying to handle CIDs as pointers leads to nonsensical results: pointers are technically valid as a golang map key, but they don't "do the right thing" -- the equality check ends up operating on the pointer rather than on the data. This is well-defined, but generally useless for these types in context.
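A standalone demonstration of the map-key point (Link here is a stand-in struct, not the actual CID type):

```go
package main

import "fmt"

type Link struct{ s string } // stand-in for a CID-like value type

func main() {
	a, b := &Link{"bafy...x"}, &Link{"bafy...x"} // equal data, distinct pointers

	byPtr := map[*Link]bool{a: true}
	fmt.Println(byPtr[b]) // false: pointer keys compare identity, not data

	byVal := map[Link]bool{*a: true}
	fmt.Println(byVal[*b]) // true: value keys compare the data itself
}
```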
-
Eric Myhre authored
As the comments in the diff say: it's a fairly sizable footgun for users to need to consider whether they expect the pointer form or the bare form when inspecting what an `ipld.Link` interface contains; so, let's just remove the choice.

There's technically no reason for the Link.Load method to be attached to the pointer receiver other than removing this footgun. From the other side, though, there's no reason *not* to attach it to the pointer receiver, because any time a value is assigned to an interface type, it necessarily heap-escapes and becomes a pointer anyway. So, making it unconditional and forcing the pointer to be clear in the user's hands seems best.
-
Eric Myhre authored
Codec revamp
-
Eric Myhre authored
I dearly wish this wasn't such a dark art. But I really want these tests, too.
-
Eric Myhre authored
This is added in a new "dagjson2" package for the time being, but it aims to replace the current dagjson package entirely, and will take over that namespace when complete. So far only the decoder/unmarshaller is included in this first commit; the encoder/marshaller is still coming up.

This revamp makes several major strides:

- The decoding system is cleanly separated from the tree building.
- The tree building reuses the codectools token assembler systems. This saves a lot of code, and adds a lot of consistency. (By contrast, the older dagjson and dagcbor packages had similar outlines, but didn't actually share much code; this was annoying to maintain, and meant improvements to one needed to be ported to the other manually. No more.)
- The token type used by this codectools system is more tightly associated with the IPLD Data Model. In practice, what this means is that links are parsed at the same stage as the rest of parsing, rather than being added on in an awkward "parse 1.5" stage. This results in much less complicated code than the old token system from refmt, which the older dagjson package leans on.
- Budgets are more consistently woven through this system.
- The JSON decoder components are in their own sub-package, and should be relatively reusable. Some features like string parsing are exported in their own right, in addition to being accessible via the full recursive supports-everything decoders. (This might not often be compelling, but -- maybe. I myself wanted more reusable access to fine-grained decoder and encoder components when I was working on the "JST" experiment, so I'm scratching my own itch here if nothing else.) End-users should mostly not need to see this, but library implementors might appreciate it.
- The codectools scratch.Reader type is used in all the decoder APIs. This results in good performance for either streaming io.Reader or already-in-memory byte slices as data sources, and does it without doubling the number of exported functions we need (or pushing the need for feature detection into every single exported function).
- The configuration system for the decoder is actually in this repo, and it's sanely and clearly settable while also being optional. Previously, if you wanted to configure dagjson, you had to reach into the refmt json package for *those* configuration structs, which was workable but very confusing, and gave the end-user a lot of different places to look before finding what they need.
- The implementations are very mindful of memory allocation efficiency. Almost all of the component structures carefully utilize embedding: ReusableUnmarshaller embeds the Decoder; the Decoder embeds the scratch.Reader as well as the Token it yields; etc. This should result in being able to produce fully usable codecs with a minimal number of allocations -- far fewer than the older implementations required. (A sketch of this embedding pattern follows below.)

Some benefits have yet to be realized, but are on the map now:

- The new Token structure also includes space for position and progress tracking, which we want to use to produce better errors. (This needs more implementation work, still.)
- There are several configuration options for strictness. These aren't all backed up by the actual implementation yet (I'm porting over old code fast enough to write a demo and make sure the whole suite of interfaces works; it'll require further work, especially on this strictness front, later), but at the very least these are now getting documented, and several comment blocks point to where more work is needed.
- The new multicodec registry is alluded to in comments here, but isn't implemented yet. This is part of the long-game big goal. The aim is to, by the end of this revamp, be able to do something about https://github.com/ipld/go-ipld-prime/issues/55 , and approach https://gist.github.com/warpfork/c0200cc4d99ee36ba5ce5a612f1d1a22 .
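The embedding pattern referenced above, sketched with stand-in types (field layout assumed; the real dagjson2/codectools structures differ):

```go
package sketch

// Token carries one parsed token; the real one also reserves space for
// position and progress tracking.
type Token struct {
	Kind byte
	// ...value fields...
}

// Reader is a stand-in for the codectools scratch.Reader.
type Reader struct {
	buf []byte
	pos int
}

// Decoder embeds its reader state and the token it yields, so the whole
// decode stack lives in one allocation.
type Decoder struct {
	Reader
	Token
}

// ReusableUnmarshaller embeds the Decoder in turn: one allocation covers
// the full unmarshaller.
type ReusableUnmarshaller struct {
	Decoder
}
```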
-
Eric Myhre authored
The docs in the diff should cover it pretty well. It's a reader-wrapper that does a lot of extremely common buffering and small-read operations that parsers tend to need. This emerges from an older generation of code in refmt with a similar purpose: https://github.com/polydawn/refmt/blob/master/shared/reader.go

Unlike those antecedents, this one is a single concrete implementation, rather than using interfaces to allow switching between the two major modes of use. This is surely uglier code, but I think the result is more optimizable.

The tests include aggressive checks that operations take exactly as many allocations as planned -- and mostly, that's *zero*.

In the next couple of commits, I'll be adding parsers which use this. Benchmarks are still forthcoming. My recollection from the previous bout of this in refmt was that microbenchmarking this type wasn't a great use of time, because when we start benchmarking codecs built *upon* it -- and especially when looking at the pprof reports from that -- we'll see this reader showing up plain as day there, and nicely contextualized... so we'll just save our efforts for that point.
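A hypothetical sketch of such a reader-wrapper (the real scratch.Reader has a richer API; this only shows the single-concrete-type, dual-source idea):

```go
package scratch

import "io"

// Reader serves both streaming io.Reader sources and already-in-memory
// byte slices through one concrete type, with no interface indirection.
type Reader struct {
	r   io.Reader // nil when the source is an in-memory slice
	buf []byte
	pos int
}

func NewReaderFromBytes(b []byte) *Reader { return &Reader{buf: b} }

func NewReaderFromIO(r io.Reader) *Reader {
	return &Reader{r: r, buf: make([]byte, 0, 1024)}
}

// Readn1 returns the next byte. For the in-memory case this is a bounds
// check and an index: zero allocations.
func (z *Reader) Readn1() (byte, error) {
	if z.pos >= len(z.buf) {
		if z.r == nil {
			return 0, io.EOF
		}
		n, err := z.r.Read(z.buf[:cap(z.buf)])
		if n == 0 {
			if err == nil {
				err = io.EOF
			}
			return 0, err
		}
		z.buf, z.pos = z.buf[:n], 0
	}
	b := z.buf[z.pos]
	z.pos++
	return b, nil
}
```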
-
- 30 Nov, 2020 1 commit
-
-
Will authored
This change will look at the destination package that codegen is being built into, and will skip generation of types that are already declared by files not prefixed with `ipldsch_`. This isn't the cleanest escape-hatch, but it's a start.
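A hypothetical sketch of that scan using go/parser (not the actual codegen source; the helper name is invented):

```go
package gengo

import (
	"go/ast"
	"go/parser"
	"go/token"
	"io/fs"
	"strings"
)

// externTypes collects the names of types already declared in the
// destination package by files not prefixed with "ipldsch_", so that
// generation of those types can be skipped.
func externTypes(dir string) (map[string]bool, error) {
	fset := token.NewFileSet()
	notGenerated := func(fi fs.FileInfo) bool {
		return !strings.HasPrefix(fi.Name(), "ipldsch_")
	}
	pkgs, err := parser.ParseDir(fset, dir, notGenerated, 0)
	if err != nil {
		return nil, err
	}
	found := map[string]bool{}
	for _, pkg := range pkgs {
		for _, file := range pkg.Files {
			for _, decl := range file.Decls {
				gd, ok := decl.(*ast.GenDecl)
				if !ok || gd.Tok != token.TYPE {
					continue
				}
				for _, spec := range gd.Specs {
					found[spec.(*ast.TypeSpec).Name.Name] = true
				}
			}
		}
	}
	return found, nil
}
```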
-
- 18 Nov, 2020 1 commit
-
-
Eric Myhre authored
add import to ipld in ipldsch_types.go
-
- 17 Nov, 2020 5 commits
-
-
Will Scott authored
cleanup from #105
-
Eric Myhre authored
Codegen output rearrange
-
Eric Myhre authored
An underscore; and less "gen", because reviewers indicated it felt redundant.
-
Eric Myhre authored
I'd still probably prefer to replace this with simply having a stable order that is carried through consistently, but that remains blocked behind getting self-hosted types. While it so happens I also got about 80% of the way there on those today, the second 80% may take another day. Better to make this stable rather than wait.
-
Eric Myhre authored
Also, emit some comments around the type definitions. The old file layout is still available, but renamed to GenerateSplayed. It will probably be removed in the future. The new format does not currently have a stable output order. I'd like to preserve the original order given by the schema, but our current placeholder types for schema data don't retain it. More work is needed on this.
-