Commit 53fb23e4 authored by Eric Myhre

Revamped DAG-JSON decoder and unmarshaller.

This is added in a new "dagjson2" package for the time being,
but aims to replace the current dagjson package entirely,
and will take over that namespace when complete.

So far only the decoder/unmarshaller is included in this first commit,
and the encoder/marshaller is still coming up.

This revamp is making several major strides:

- The decoding system is cleanly separated from the tree building.

- The tree building reuses the codectools token assembler systems.
  This saves a lot of code, and adds a lot of consistency.
  (By contrast, the older dagjson and dagcbor packages had similar
  outlines, but didn't actually share much code; this was annoying
  to maintain, and meant improvements to one needed to be ported
  to the other manually.  No more.)

- The token type used by this codectools system is more tightly
  associated with the IPLD Data Model.  In practice, what this means
  is that links are parsed at the same stage as the rest of parsing,
  rather than being added on in an awkward "parse 1.5" stage.
  This results in much less complicated code than the old token
  system from refmt, which the older dagjson package leans on.

- Budgets are more consistently woven through this system.

- The JSON decoder components are in their own sub-package,
  and should be relatively reusable.  Some features like string parsing
  are exported in their own right, in addition to being accessible
  via the full recursive supports-everything decoders.
  (This might not often be compelling, but -- maybe.  I myself wanted
  more reusable access to fine-grained decoder and encoder components
  when I was working on the "JST" experiment, so, I'm scratching my
  own itch here if nothing else.)
  End-users should mostly not need to see this, but library
  implementors might appreciate it.

- The codectools scratch.Reader type is used in all the decoder APIs.
  This results in good performance whether the data source is a
  streaming io.Reader or an already-in-memory byte slice, and does it
  without doubling the number of exported functions we need (or pushing
  the need for feature detection into every single exported function).
  (See the sketch just after this list.)

- The configuration system for the decoder is actually in this repo,
  and it's sanely and clearly settable while also being optional.
  Previously, if you wanted to configure dagjson, you'd have to reach
  into the refmt json package for *those* configuration structs,
  which was workable but just very confusing and gave the end-user a
  lot of different places to look before finding what they need.

- The implementations are very mindful of memory allocation efficiency.
  Almost all of the component structures carefully utilize embedding:
  ReusableUnmarshaller embeds the Decoder; the Decoder embeds the
  scratch.Reader as well as the Token it yields; etc.
  This should result in being able to produce fully usable
  codecs with a minimal number of allocations -- far fewer than the
  older implementations required.
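
A minimal sketch of those two entry points, using the token-level
string decoder included in this commit (scratch.Reader.Init for
streams, scratch.Reader.InitSlice for in-memory slices):

    r := &scratch.Reader{}
    r.Init(strings.NewReader(`"from a stream"`)) // streaming source
    s1, _ := jsontoken.DecodeString(r)

    r2 := &scratch.Reader{}
    r2.InitSlice([]byte(`"already in memory"`)) // in-memory source
    s2, _ := jsontoken.DecodeString(r2)

Either way, the same decode functions do the work.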

Some benefits have yet to be realized, but are on the map now:

- The new Token structure also includes space for position and
  progress tracking, which we want to use to produce better errors.
  (This needs more implementation work, still, though.)

- There are several configuration options for strictness.
  These aren't all backed up by the actual implementation yet
  (I'm porting over old code fast enough to write a demo and make
  sure the whole suite of interfaces works; it'll require further
  work, especially on this strictness front, later), but
  at the very least these are now getting documented,
  and several comment blocks point to where more work is needed.

- The new multicodec registry is alluded to in comments here, but
  isn't implemented yet.  This is part of the larger long-game goal.
  The aim is to, by the end of this revamp, be able to do something
  about https://github.com/ipld/go-ipld-prime/issues/55 , and approach
  https://gist.github.com/warpfork/c0200cc4d99ee36ba5ce5a612f1d1a22 .
parent 3040f082
// Several groups of exported symbols are available at different levels of abstraction:
//
// - You might just want the multicodec registration! Then never deal with this package directly again.
// - You might want to use the `Encode(Node,Writer)` and `Decode(NodeAssembler,Reader)` functions directly.
// - You might want to use `ReusableEncoder` and `ReusableDecoder` types and their configuration options,
// then use their Encode and Decode methods with that additional control.
// - You might want to use the lower-level TokenReader and TokenWriter tools to process the serial data
// as a stream, without necessarily creating ipld Nodes at all.
// - (this is a stretch) You might want to use some of the individual token processing functions,
// perhaps as part of a totally new codec that just happens to share some behaviors with this one.
//
// The first three are exported from this package.
// The last two can be found in the "./token" subpackage.
package dagjson
package dagjson
import (
"io"
"github.com/ipld/go-ipld-prime"
"github.com/ipld/go-ipld-prime/codec/codectools"
"github.com/ipld/go-ipld-prime/codec/dagjson2/token"
)
// Unmarshal reads data from input, parses it as DAG-JSON,
// and unfolds the data into the given NodeAssembler.
//
// The strict interpretation of DAG-JSON is used.
// Use a ReusableUnmarshaller and set its DecoderConfig if you need
// looser or otherwise customized decoding rules.
//
// This function is the same as the function found for DAG-JSON
// in the default multicodec registry.
func Unmarshal(into ipld.NodeAssembler, input io.Reader) error {
// FUTURE: consider doing a whole sync.Pool jazz around this.
r := ReusableUnmarshaller{}
r.SetDecoderConfig(jsontoken.DecoderConfig{
AllowDanglingComma: false,
AllowWhitespace: false,
AllowEscapedUnicode: false,
ParseUtf8C8: true,
})
r.SetInitialBudget(1 << 20)
return r.Unmarshal(into, input)
}
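// A hedged usage sketch (assuming a NodeAssembler from the basicnode
// package, github.com/ipld/go-ipld-prime/node/basic; the exact prototype
// name is not pinned down by this commit):
//
//	nb := basicnode.Prototype.Any.NewBuilder()
//	if err := dagjson.Unmarshal(nb, strings.NewReader(`{"a":[1,2]}`)); err != nil {
//		// handle parse error
//	}
//	n := nb.Build() // n is now an ipld.Node holding {"a":[1,2]}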
// ReusableUnmarshaller has an Unmarshal method, and also supports
// customizable DecoderConfig and resource budgets.
//
// The Unmarshal method may be used repeatedly (although not concurrently).
// Keeping a ReusableUnmarshaller around and using it repeatedly may allow
// the user to amortize some allocations (some internal buffers can be reused).
type ReusableUnmarshaller struct {
d jsontoken.Decoder
InitialBudget int
}
func (r *ReusableUnmarshaller) SetDecoderConfig(cfg jsontoken.DecoderConfig) {
r.d.DecoderConfig = cfg
}
func (r *ReusableUnmarshaller) SetInitialBudget(budget int) {
r.InitialBudget = budget
}
func (r *ReusableUnmarshaller) Unmarshal(into ipld.NodeAssembler, input io.Reader) error {
r.d.Init(input)
return codectools.TokenAssemble(into, r.d.Step, r.InitialBudget)
}
package jsontoken
import (
"fmt"
"io"
"github.com/ipld/go-ipld-prime/codec/codectools"
"github.com/ipld/go-ipld-prime/codec/codectools/scratch"
)
type Decoder struct {
r scratch.Reader
phase decoderPhase // current phase.
stack []decoderPhase // stack of any phases that need to be popped back up to before we're done with a complete tree.
some bool // true after the first value in any context; used to decide if a comma must precede the next value. (doesn't need a stack, because if you're popping, it's true again.)
tok codectools.Token // we'll be yielding this repeatedly.
DecoderConfig
}
type DecoderConfig struct {
AllowDanglingComma bool // normal json: false; strict: false.
AllowWhitespace bool // normal json: true; strict: false.
AllowEscapedUnicode bool // normal json: true; strict: false.
ParseUtf8C8 bool // normal json: false; dag-json: true.
}
func (d *Decoder) Init(r io.Reader) {
d.r.Init(r)
d.phase = decoderPhase_acceptValue
d.stack = d.stack[0:0]
d.some = false
}
func (d *Decoder) Step(budget *int) (next *codectools.Token, err error) {
switch d.phase {
case decoderPhase_acceptValue:
err = d.step_acceptValue()
case decoderPhase_acceptMapKeyOrEnd:
err = d.step_acceptMapKeyOrEnd()
case decoderPhase_acceptMapValue:
err = d.step_acceptMapValue()
case decoderPhase_acceptListValueOrEnd:
err = d.step_acceptListValueOrEnd()
}
return &d.tok, err
}
func (d *Decoder) pushPhase(newPhase decoderPhase) {
d.stack = append(d.stack, d.phase)
d.phase = newPhase
d.some = false
}
func (d *Decoder) popPhase() {
d.phase = d.stack[len(d.stack)-1]
d.stack = d.stack[:len(d.stack)-1]
d.some = true
}
type decoderPhase uint8
const (
decoderPhase_acceptValue decoderPhase = iota
decoderPhase_acceptMapKeyOrEnd
decoderPhase_acceptMapValue
decoderPhase_acceptListValueOrEnd
)
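// A worked trace of the phase machine (derived from the step functions
// below): decoding `{"a":[1]}` proceeds as
//
//	acceptValue          reads '{'       -> MapOpen; save acceptValue, phase=acceptMapKeyOrEnd
//	acceptMapKeyOrEnd    reads `"a"` ':' -> String("a"); phase=acceptMapValue
//	acceptMapValue       reads '['       -> ListOpen; save acceptMapKeyOrEnd, phase=acceptListValueOrEnd
//	acceptListValueOrEnd reads '1'       -> Int(1)
//	acceptListValueOrEnd reads ']'       -> ListClose; pop back to acceptMapKeyOrEnd
//	acceptMapKeyOrEnd    reads '}'       -> MapClose; pop back to acceptValue
//
// after which the next Step call reports io.EOF.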
func (d *Decoder) readn1skippingWhitespace() (majorByte byte, err error) {
if d.DecoderConfig.AllowWhitespace {
for {
majorByte, err = d.r.Readn1()
switch majorByte {
case ' ', '\t', '\r', '\n': // continue
default:
return
}
}
} else {
majorByte, err = d.r.Readn1()
switch majorByte {
case ' ', '\t', '\r', '\n':
return 0, fmt.Errorf("whitespace not allowed by decoder configured for strictness")
default:
return
}
}
}
// The initial step, where any value is accepted, and no terminators for recursives are valid.
// ONLY used as the first step; all other steps handle leaf nodes internally.
func (d *Decoder) step_acceptValue() error {
majorByte, err := d.r.Readn1()
if err != nil {
return err
}
return d.stepHelper_acceptValue(majorByte)
}
// Step in midst of decoding a map, key expected up next, or end.
func (d *Decoder) step_acceptMapKeyOrEnd() error {
majorByte, err := d.readn1skippingWhitespace()
if err != nil {
return err
}
if d.some {
switch majorByte {
case '}':
d.tok.Kind = codectools.TokenKind_MapClose
d.popPhase()
return nil
case ',':
majorByte, err = d.readn1skippingWhitespace()
if err != nil {
return err
}
// and now fall through to the next switch
// FIXME: AllowDanglingComma needs a check hereabouts
}
}
switch majorByte {
case '}':
d.tok.Kind = codectools.TokenKind_MapClose
d.popPhase()
return nil
default:
// Consume a value for key.
// Given that this is JSON, this has to be a string.
err := d.stepHelper_acceptValue(majorByte)
if err != nil {
return err
}
if d.tok.Kind != codectools.TokenKind_String {
return fmt.Errorf("unexpected non-string token where expecting a map key")
}
// Now scan up to consume the colon as well, which is required next.
majorByte, err = d.readn1skippingWhitespace()
if err != nil {
return err
}
if majorByte != ':' {
return fmt.Errorf("expected colon after map key; got 0x%x", majorByte)
}
// Next up: expect a value.
d.phase = decoderPhase_acceptMapValue
d.some = true
return nil
}
}
// Step in midst of decoding a map, value expected up next.
func (d *Decoder) step_acceptMapValue() error {
majorByte, err := d.readn1skippingWhitespace()
if err != nil {
return err
}
d.phase = decoderPhase_acceptMapKeyOrEnd
return d.stepHelper_acceptValue(majorByte)
}
// Step in midst of decoding an array.
func (d *Decoder) step_acceptListValueOrEnd() error {
majorByte, err := d.readn1skippingWhitespace()
if err != nil {
return err
}
if d.some {
switch majorByte {
case ']':
d.tok.Kind = codectools.TokenKind_ListClose
d.popPhase()
return nil
case ',':
majorByte, err = d.readn1skippingWhitespace()
if err != nil {
return err
}
// and now fall through to the next switch
// FIXME: AllowDanglingComma needs a check hereabouts
}
}
switch majorByte {
case ']':
d.tok.Kind = codectools.TokenKind_ListClose
d.popPhase()
return nil
default:
d.some = true
return d.stepHelper_acceptValue(majorByte)
}
}
func (d *Decoder) stepHelper_acceptValue(majorByte byte) (err error) {
switch majorByte {
case '{':
d.tok.Kind = codectools.TokenKind_MapOpen
d.tok.Length = -1
d.pushPhase(decoderPhase_acceptMapKeyOrEnd)
return nil
case '[':
d.tok.Kind = codectools.TokenKind_ListOpen
d.tok.Length = -1
d.pushPhase(decoderPhase_acceptListValueOrEnd)
return nil
case 'n':
d.r.Readnzc(3) // FIXME must check these equal "ull"!
d.tok.Kind = codectools.TokenKind_Null
return nil
case '"':
d.tok.Kind = codectools.TokenKind_String
d.tok.Str, err = DecodeStringBody(&d.r)
if err == nil {
d.r.Readn1() // Swallow the trailing `"` (which DecodeStringBody has ensured we have).
}
return err
case 'f':
d.r.Readnzc(4) // FIXME must check these equal "alse"!
d.tok.Kind = codectools.TokenKind_Bool
d.tok.Bool = false
return nil
case 't':
d.r.Readnzc(3) // FIXME must check these equal "rue"!
d.tok.Kind = codectools.TokenKind_Bool
d.tok.Bool = true
return nil
case '-', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9':
// Some kind of numeric... but in json, we can't tell yet whether it's a float or an int.
// We'll have to look ahead quite a bit more to differentiate; the DecodeNumber function does this for us.
d.r.Unreadn1()
d.tok.Kind, d.tok.Int, d.tok.Float, err = DecodeNumber(&d.r)
return err
default:
return fmt.Errorf("Invalid byte while expecting start of value: 0x%x", majorByte)
}
}
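A hedged sketch of driving the Decoder by hand, consuming the token
stream without assembling any Nodes (the helper name is hypothetical):

func dumpTokens(input io.Reader) error {
	var d Decoder
	d.Init(input)
	budget := 1 << 20 // generous allowance, threaded through per the Step signature
	for {
		tok, err := d.Step(&budget)
		if err == io.EOF {
			return nil // clean end of the token stream, as in the tests below
		}
		if err != nil {
			return err
		}
		fmt.Printf("token kind: %v\n", tok.Kind)
	}
}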
package jsontoken
import (
"fmt"
"io"
"strconv"
"github.com/ipld/go-ipld-prime/codec/codectools"
"github.com/ipld/go-ipld-prime/codec/codectools/scratch"
)
// License note: the string and numeric parsers here borrow
// heavily from the golang stdlib json parser scanner.
// That code is originally Copyright 2010 The Go Authors,
// and is governed by a BSD-style license.
// DecodeNumber will attempt to decode data in the format of a JSON number from the reader.
// JSON is somewhat ambiguous about numbers: we'll return an int if we can, and a float if there's any decimal point involved.
// The returned TokenKind indicates which kind of number we have:
// if it's TokenKind_Int, the int return holds the value (and the float return is invalid);
// if it's TokenKind_Float, the float return holds the value (and the int return is invalid).
func DecodeNumber(r *scratch.Reader) (codectools.TokenKind, int64, float64, error) {
r.Track()
// Scan until scanner tells us end of numeric.
// Pick the first scanner stepfunc based on the leading byte.
majorByte, err := r.Readn1()
if err != nil {
return codectools.TokenKind_Null, 0, 0, err
}
var step numscanStep
switch majorByte {
case '-':
step = numscan_neg
case '0':
step = numscan_0
case '1', '2', '3', '4', '5', '6', '7', '8', '9':
step = numscan_1
default:
panic("unreachable") // FIXME not anymore it ain't, this is exported
}
for {
b, err := r.Readn1()
if err == io.EOF {
break
}
if err != nil {
return 0, 0, 0, err
}
step, err = step(b)
if step == nil {
// Unread one. The scan loop consumed one char beyond the end (this is unavoidable in json!),
// and that might be part of whatever is going to be decoded from this stream next.
r.Unreadn1()
break
}
if err != nil {
return 0, 0, 0, err
}
}
// Parse!
// *This is not a fast parse*.
// Try int first; if it fails try float; if that fails return the float error.
s := string(r.StopTrack())
if i, err := strconv.ParseInt(s, 10, 64); err == nil {
return codectools.TokenKind_Int, i, 0, nil
}
f, err := strconv.ParseFloat(s, 64)
return codectools.TokenKind_Float, 0, f, err
}
// Scan steps are looped over the stream to find how long the number is.
// A nil step func is returned to indicate the number is done.
// Actually parsing the number is done by strconv at the end of DecodeNumber.
type numscanStep func(c byte) (numscanStep, error)
// numscan_neg is the state after reading `-` during a number.
func numscan_neg(c byte) (numscanStep, error) {
if c == '0' {
return numscan_0, nil
}
if '1' <= c && c <= '9' {
return numscan_1, nil
}
return nil, fmt.Errorf("invalid byte in numeric literal: 0x%x", c)
}
// numscan_1 is the state after reading a non-zero integer during a number,
// such as after reading `1` or `100` but not `0`.
func numscan_1(c byte) (numscanStep, error) {
if '0' <= c && c <= '9' {
return numscan_1, nil
}
return numscan_0(c)
}
// numscan_0 is the state after reading `0` during a number.
func numscan_0(c byte) (numscanStep, error) {
if c == '.' {
return numscan_dot, nil
}
if c == 'e' || c == 'E' {
return numscan_e, nil
}
return nil, nil
}
// numscan_dot is the state after reading the integer and decimal point in a number,
// such as after reading `1.`.
func numscan_dot(c byte) (numscanStep, error) {
if '0' <= c && c <= '9' {
return numscan_dot0, nil
}
return nil, fmt.Errorf("invalid byte after decimal in numeric literal: 0x%x", c)
}
// numscan_dot0 is the state after reading the integer, decimal point, and subsequent
// digits of a number, such as after reading `3.14`.
func numscan_dot0(c byte) (numscanStep, error) {
if '0' <= c && c <= '9' {
return numscan_dot0, nil
}
if c == 'e' || c == 'E' {
return numscan_e, nil
}
return nil, nil
}
// numscan_e is the state after reading the mantissa and e in a number,
// such as after reading `314e` or `0.314e`.
func numscan_e(c byte) (numscanStep, error) {
if c == '+' || c == '-' {
return numscan_eSign, nil
}
return numscan_eSign(c)
}
// numscan_eSign is the state after reading the mantissa, e, and sign in a number,
// such as after reading `314e-` or `0.314e+`.
func numscan_eSign(c byte) (numscanStep, error) {
if '0' <= c && c <= '9' {
return numscan_e0, nil
}
return nil, fmt.Errorf("invalid byte in exponent of numeric literal: 0x%x", c)
}
// numscan_e0 is the state after reading the mantissa, e, optional sign,
// and at least one digit of the exponent in a number,
// such as after reading `314e-2` or `0.314e+1` or `3.14e0`.
func numscan_e0(c byte) (numscanStep, error) {
if '0' <= c && c <= '9' {
return numscan_e0, nil
}
return nil, nil
}
package jsontoken
import (
"fmt"
"strconv"
"unicode"
"unicode/utf16"
"unicode/utf8"
"github.com/ipld/go-ipld-prime/codec/codectools/scratch"
)
// License note: the string and numeric parsers here borrow
// heavily from the golang stdlib json parser scanner.
// That code is originally Copyright 2010 The Go Authors,
// and is governed by a BSD-style license.
// DecodeString will attempt to decode data in the format of a JSON string from the reader.
// If the first byte read is not `"`, it is not a string at all, and an error is returned.
// Any other parse errors of json strings also result in error.
func DecodeString(r *scratch.Reader) (string, error) {
// Check that this actually begins like a string.
majorByte, err := r.Readn1()
if err != nil {
return "", err
}
if majorByte != '"' {
return "", fmt.Errorf("not a string: strings must begin with '\"', not %q", majorByte)
}
// Decode the string body.
s, err := DecodeStringBody(r)
if err != nil {
return "", err
}
// Swallow the trailing `"` again (which DecodeStringBody has ensured we have).
r.Readn1()
return s, nil
}
// DecodeStringBody will attempt to decode data in the format of a JSON string from the reader,
// except it assumes that the leading `"` has already been consumed,
// and will similarly leave the trailing `"` unread (although it will check for its presence).
//
// Implementation note: you'll find that this method is used in the Decoder's implementation,
// while DecodeString is actually not. This is because when doing a whole document parse,
// the leading `"` is always already consumed because it's how we discovered it's time to parse a string.
func DecodeStringBody(r *scratch.Reader) (string, error) {
// First `"` is presumed already eaten.
// Start tracking the byte slice; real string starts here.
r.Track()
// Scan until scanner tells us end of string.
for step := strscan_normal; step != nil; {
majorByte, err := r.Readn1()
if err != nil {
return "", err
}
step, err = step(majorByte)
if err != nil {
return "", err
}
}
// Unread one. The scan loop consumed the trailing quote already,
// which we don't want to pass onto the parser.
r.Unreadn1()
// Parse!
s, ok := parseString(r.StopTrack())
if !ok {
panic("string parse failed") // this is a sanity check; our scan phase should've already excluded any data that would cause this.
}
return string(s), nil
}
// strscanStep steps are applied over the data to find how long the string is.
// A nil step func is returned to indicate the string is done.
// Actually parsing the string is done by 'parseString()'.
type strscanStep func(c byte) (strscanStep, error)
// The default string scanning step state. Starts here.
func strscan_normal(c byte) (strscanStep, error) {
if c == '"' { // done!
return nil, nil
}
if c == '\\' {
return strscan_esc, nil
}
if c < 0x20 { // Unprintable bytes are invalid in a json string.
return nil, fmt.Errorf("invalid unprintable byte in string literal: 0x%x", c)
}
return strscan_normal, nil
}
// "esc" is the state after reading `"\` during a quoted string.
func strscan_esc(c byte) (strscanStep, error) {
switch c {
case 'b', 'f', 'n', 'r', 't', '\\', '/', '"':
return strscan_normal, nil
case 'u':
return strscan_escU, nil
}
return nil, fmt.Errorf("invalid byte in string escape sequence: 0x%x", c)
}
// "escU" is the state after reading `"\u` during a quoted string.
func strscan_escU(c byte) (strscanStep, error) {
if '0' <= c && c <= '9' || 'a' <= c && c <= 'f' || 'A' <= c && c <= 'F' {
return strscan_escU1, nil
}
return nil, fmt.Errorf("invalid byte in \\u hexadecimal character escape: 0x%x", c)
}
// "escU1" is the state after reading `"\u1` during a quoted string.
func strscan_escU1(c byte) (strscanStep, error) {
if '0' <= c && c <= '9' || 'a' <= c && c <= 'f' || 'A' <= c && c <= 'F' {
return strscan_escU12, nil
}
return nil, fmt.Errorf("invalid byte in \\u hexadecimal character escape: 0x%x", c)
}
// "escU12" is the state after reading `"\u12` during a quoted string.
func strscan_escU12(c byte) (strscanStep, error) {
if '0' <= c && c <= '9' || 'a' <= c && c <= 'f' || 'A' <= c && c <= 'F' {
return strscan_escU123, nil
}
return nil, fmt.Errorf("invalid byte in \\u hexadecimal character escape: 0x%x", c)
}
// "escU123" is the state after reading `"\u123` during a quoted string.
func strscan_escU123(c byte) (strscanStep, error) {
if '0' <= c && c <= '9' || 'a' <= c && c <= 'f' || 'A' <= c && c <= 'F' {
return strscan_normal, nil
}
return nil, fmt.Errorf("invalid byte in \\u hexadecimal character escape: 0x%x", c)
}
// Convert a json serial byte sequence that is a complete string body (i.e., quotes from the outside excluded)
// into a natural byte sequence (escapes, etc, are processed).
//
// The given slice should already be the right length.
// A blithe false for 'ok' is returned if the data is in any way malformed.
//
// FUTURE: this is native JSON string parsing, and not as strict as DAG-JSON should be.
//
// - this does not implement UTF8-C8 unescaping; we may want to do so.
// - this transforms invalid surrogates coming from escape sequences into U+FFFD; we probably shouldn't.
// - this transforms any non-UTF-8 bytes into U+FFFD rather than erroring; we might want to think twice about that.
// - this parses `\u` escape sequences at all, while also allowing UTF8 chars of the same content; we might want to reject variations.
//
// It might be desirable to implement these stricter rules as configurable.
func parseString(s []byte) (t []byte, ok bool) {
// Check for unusual characters. If there are none,
// then no unquoting is needed, so return a slice of the
// original bytes.
r := 0
for r < len(s) {
c := s[r]
if c == '\\' || c == '"' || c < ' ' {
break
}
if c < utf8.RuneSelf {
r++
continue
}
rr, size := utf8.DecodeRune(s[r:])
if rr == utf8.RuneError && size == 1 {
break
}
r += size
}
if r == len(s) {
return s, true
}
b := make([]byte, len(s)+2*utf8.UTFMax)
w := copy(b, s[0:r])
for r < len(s) {
// Out of room? Can only happen if s is full of
// malformed UTF-8 and we're replacing each
// byte with RuneError.
if w >= len(b)-2*utf8.UTFMax {
nb := make([]byte, (len(b)+utf8.UTFMax)*2)
copy(nb, b[0:w])
b = nb
}
switch c := s[r]; {
case c == '\\':
r++
if r >= len(s) {
return
}
switch s[r] {
default:
return
case '"', '\\', '/', '\'':
b[w] = s[r]
r++
w++
case 'b':
b[w] = '\b'
r++
w++
case 'f':
b[w] = '\f'
r++
w++
case 'n':
b[w] = '\n'
r++
w++
case 'r':
b[w] = '\r'
r++
w++
case 't':
b[w] = '\t'
r++
w++
case 'u':
r--
rr := getu4(s[r:])
if rr < 0 {
return
}
r += 6
if utf16.IsSurrogate(rr) {
rr1 := getu4(s[r:])
if dec := utf16.DecodeRune(rr, rr1); dec != unicode.ReplacementChar {
// A valid pair; consume.
r += 6
w += utf8.EncodeRune(b[w:], dec)
break
}
// Invalid surrogate; fall back to replacement rune.
rr = unicode.ReplacementChar
}
w += utf8.EncodeRune(b[w:], rr)
}
// Quote, control characters are invalid.
case c == '"', c < ' ':
return
// ASCII
case c < utf8.RuneSelf:
b[w] = c
r++
w++
// Coerce to well-formed UTF-8.
default:
rr, size := utf8.DecodeRune(s[r:])
r += size
w += utf8.EncodeRune(b[w:], rr)
}
}
return b[0:w], true
}
// getu4 decodes \uXXXX from the beginning of s, returning the hex value,
// or -1 on failure.
func getu4(s []byte) rune {
if len(s) < 6 || s[0] != '\\' || s[1] != 'u' {
return -1
}
r, err := strconv.ParseUint(string(s[2:6]), 16, 64)
if err != nil {
return -1
}
return rune(r)
}
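A small sketch of the escape handling above (note the backquoted Go
string, so `\u00e9` reaches the parser as a literal escape sequence):

	r := &scratch.Reader{}
	r.InitSlice([]byte(`"caf\u00e9"`))
	s, err := DecodeString(r)
	// s == "café", err == nil: the \u00e9 escape is decoded by getu4
	// and re-encoded as UTF-8 by parseString.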
package jsontoken
import (
"errors"
"io"
"testing"
. "github.com/warpfork/go-wish"
)
func TestDecodeString(t *testing.T) {
t.Run("SimpleString", func(t *testing.T) {
s, err := DecodeString(makeReader(`"asdf"`))
Wish(t, err, ShouldEqual, nil)
Wish(t, s, ShouldEqual, "asdf")
})
t.Run("NonString", func(t *testing.T) {
s, err := DecodeString(makeReader(`not prefixed right`))
Wish(t, err, ShouldEqual, errors.New(`not a string: strings must begin with '"', not 'n'`))
Wish(t, s, ShouldEqual, "")
})
t.Run("UnterminatedString", func(t *testing.T) {
s, err := DecodeString(makeReader(`"ohno`))
Wish(t, err, ShouldEqual, io.ErrUnexpectedEOF)
Wish(t, s, ShouldEqual, "")
})
t.Run("StringWithEscapes", func(t *testing.T) {
s, err := DecodeString(makeReader(`"as\tdf\bwow"`))
Wish(t, err, ShouldEqual, nil)
Wish(t, s, ShouldEqual, "as\tdf\bwow")
})
}
package jsontoken
import (
"io"
"strings"
"testing"
. "github.com/warpfork/go-wish"
"github.com/ipld/go-ipld-prime/codec/codectools"
"github.com/ipld/go-ipld-prime/codec/codectools/scratch"
)
func makeReader(s string) *scratch.Reader {
r := &scratch.Reader{}
r.InitSlice([]byte(s))
return r
}
var inf int = 1 << 30 // effectively-unlimited budget for tests (1<<31 would overflow int on 32-bit platforms)
func TestDecode(t *testing.T) {
t.Run("SimpleString", func(t *testing.T) {
var d Decoder
d.Init(strings.NewReader(`"asdf"`))
tok, err := d.Step(&inf)
Wish(t, err, ShouldEqual, nil)
Wish(t, tok.Kind, ShouldEqual, codectools.TokenKind_String)
Wish(t, tok.Str, ShouldEqual, "asdf")
tok, err = d.Step(&inf)
Wish(t, err, ShouldEqual, io.EOF)
})
t.Run("SingleMap", func(t *testing.T) {
var d Decoder
d.Init(strings.NewReader(`{"a":"b","c":"d"}`))
tok, err := d.Step(&inf)
Wish(t, err, ShouldEqual, nil)
Wish(t, tok.Kind, ShouldEqual, codectools.TokenKind_MapOpen)
Wish(t, d.phase, ShouldEqual, decoderPhase_acceptMapKeyOrEnd)
tok, err = d.Step(&inf)
Wish(t, err, ShouldEqual, nil)
Wish(t, tok.Kind, ShouldEqual, codectools.TokenKind_String)
Wish(t, tok.Str, ShouldEqual, "a")
Wish(t, d.phase, ShouldEqual, decoderPhase_acceptMapValue)
tok, err = d.Step(&inf)
Wish(t, err, ShouldEqual, nil)
Wish(t, tok.Kind, ShouldEqual, codectools.TokenKind_String)
Wish(t, tok.Str, ShouldEqual, "b")
tok, err = d.Step(&inf)
Wish(t, err, ShouldEqual, nil)
Wish(t, tok.Kind, ShouldEqual, codectools.TokenKind_String)
Wish(t, tok.Str, ShouldEqual, "c")
tok, err = d.Step(&inf)
Wish(t, err, ShouldEqual, nil)
Wish(t, tok.Kind, ShouldEqual, codectools.TokenKind_String)
Wish(t, tok.Str, ShouldEqual, "d")
tok, err = d.Step(&inf)
Wish(t, err, ShouldEqual, nil)
Wish(t, tok.Kind, ShouldEqual, codectools.TokenKind_MapClose)
tok, err = d.Step(&inf)
Wish(t, err, ShouldEqual, io.EOF)
})
}
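A companion test for the number path, in the same style as the tests
above (a hedged sketch, not part of this commit):

func TestDecodeNumber(t *testing.T) {
	t.Run("Integer", func(t *testing.T) {
		kind, i, _, err := DecodeNumber(makeReader(`42`))
		Wish(t, err, ShouldEqual, nil)
		Wish(t, kind, ShouldEqual, codectools.TokenKind_Int)
		Wish(t, i, ShouldEqual, int64(42))
	})
	t.Run("Float", func(t *testing.T) {
		kind, _, f, err := DecodeNumber(makeReader(`3.14`))
		Wish(t, err, ShouldEqual, nil)
		Wish(t, kind, ShouldEqual, codectools.TokenKind_Float)
		Wish(t, f, ShouldEqual, 3.14)
	})
}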