
Comments (56)

  • qixxiq
> Current implementation has the following limitations: Maximum object size: 65534 keys. The order of object keys is not preserved ... These limitations may be lifted by using more bytes to store offset pointers and counts on the binary level. Though it's hard to imagine a real application which would need that.
    I've worked on _many_ applications that have needed those features. Key ordering is a per-implementation detail, but failing at 65k keys seems like a problem people would be likely to hit if this were used at larger scales.
  • gethly
It is a bit confusing that JSON is mentioned so much when in reality this has nothing to do with it, except to showcase that JSON is not suitable for streaming whereas this format is.
    Secondly, I fail to see the advantage here. The claim is that this format allows streaming for partial processing, whereas JSON has to be fully loaded in order to be parseable. But the values must be streamed first, before their locations/pointers, in order for the structure to make sense and be usable for processing, and that also means we need all the parent pointers as well in order to know where to place the children in the root. So all in all, I just do not see why this format is advantageous over JSON (which is its main complaint here), since you can stream JSON just as easily: you can detect the {, }, [, ], " and , delimiters and know when your token is complete, then process it without having to wait for the whole structure to finish streaming or for the SICK pointers to arrive in full so you can build the structure (see the sketch below).
    Or I am just not getting it at all...
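    A minimal sketch of the delimiter-tracking approach described above, in Python. `iter_json_values` is a hypothetical helper (not part of SICK or any library) that yields each complete top-level JSON object or array as soon as its closing delimiter arrives, tracking string and escape state so delimiters inside strings are ignored:

    ```python
    import json

    def iter_json_values(chunks):
        """Yield complete top-level JSON objects/arrays from text chunks.

        Tracks brace/bracket depth and string/escape state so each value
        can be parsed as soon as its closing delimiter arrives, without
        buffering the rest of the stream. (Illustrative sketch only;
        top-level scalars are not handled.)
        """
        buf = []
        depth = 0
        in_string = False
        escaped = False
        for chunk in chunks:
            for ch in chunk:
                buf.append(ch)
                if in_string:
                    if escaped:
                        escaped = False
                    elif ch == "\\":
                        escaped = True
                    elif ch == '"':
                        in_string = False
                elif ch == '"':
                    in_string = True
                elif ch in "{[":
                    depth += 1
                elif ch in "}]":
                    depth -= 1
                    if depth == 0:
                        yield json.loads("".join(buf))
                        buf.clear()

    # Two objects arriving split across arbitrary chunk boundaries.
    for value in iter_json_values(['{"a": 1}{"b"', ': [2, 3]}']):
        print(value)
    ```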
  • Retr0id
    I started on something a bit like this, but using sqlite instead of a custom serialisation format: https://github.com/DavidBuchanan314/dag-sqlite (it's intended to store DAG-CBOR objects but it is trivial to represent a JSON object as DAG-CBOR)
  • slashdave
    So... isn't this just a database, and a bespoke one at that?
  • noobcoder
It could be really useful for cases where you're repeatedly processing similar JSON structures, like analytical events. Any plans for language bindings beyond the current implementation?
  • kordlessagain
    Use for vectors and stream smaller projections first?
  • IshKebab
This sounds quite similar to Amazon Ion, which is one of the few binary JSON formats that allows sparse reads and deduplicated keys.
    However, I found that in cases where you have the requirements of streaming/random access and every entry has the same shape, SQLite is a really great choice. It's way faster and more space-efficient than it has any right to be, it gets you proper random access (not just efficient streaming), there are nice GUIs for it, and you can query it (see the sketch below).
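    A minimal sketch of the SQLite approach described above, assuming Python's standard-library sqlite3 module and a SQLite build with the JSON1 functions available (the default since 3.38); the table and field names are made up for illustration:

    ```python
    import json
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, body TEXT)")

    # Every entry has the same shape; store each record as one JSON blob.
    events = [{"user": "alice", "n": 1}, {"user": "bob", "n": 2}]
    con.executemany(
        "INSERT INTO events (body) VALUES (?)",
        [(json.dumps(e),) for e in events],
    )

    # Query individual fields without hand-parsing a serialized stream;
    # an index on a generated column would turn this into a true
    # random-access lookup. The same table works in any SQLite GUI.
    rows = con.execute(
        "SELECT json_extract(body, '$.user') FROM events"
        " WHERE json_extract(body, '$.n') = 2"
    ).fetchall()
    print(rows)  # [('bob',)]
    ```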