
Remove hard dependency on Biniou #74

Merged: 8 commits merged into ocaml-community:master on Jul 21, 2020

Conversation

@Leonidas-from-XIV (Member) commented Jan 26, 2019

Looking at the dependency tree of Yojson, one of the more popular packages in the OCaml ecosystem, I encountered Biniou, a serialization library. Biniou is not particularly popular in OCaml on its own: it has (at the time of writing) 11612 downloads per week on OPAM, which might seem like a lot, but my hypothesis is that this is mainly due to the dependency from yojson (1.5.0 accounts for 7469 downloads, 1.4.1 for 5120).

I was therefore curious how tightly interconnected Yojson and Biniou are, and it turns out… not much. I split the code that depends on Biniou into a new OPAM/findlib package, to preserve compatibility with code that requires it. The remaining code was using Bi_outbuf, which can be replaced nearly 1:1 with Buffer. This changes the signature of some functions, but the advantage is that it replaces the less widely known Bi_outbuf.t type with Buffer.t, which, being part of the compiler's standard library, is much more common in the OCaml community.

I haven't run many tests beyond a very basic "does it even still work", but to my understanding Bi_outbuf and Buffer should have similar asymptotic behaviour, since they are designed for a similar purpose.

Edit: It was noted on IRC that this is not entirely clear: the changes in write.mli and read.mli change the public API, thus requiring a major version bump. Given that these changes only affect programs that use both Biniou and yojson, and there are hardly any of those on OPAM, I believe this is a reasonable trade-off.
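To make the API change concrete, here is a rough sketch of what it amounts to for callers. The signatures below are reconstructed from the discussion and the eventual changelog (to_outbuf becoming to_buffer), not copied from the actual .mli, so treat them as an approximation:

(* Before: serialization went through Biniou's buffer type. *)
(* val to_outbuf : ?std:bool -> Bi_outbuf.t -> t -> unit *)

(* After: the standard library's Buffer.t is used instead. *)
(* val to_buffer : ?std:bool -> Buffer.t -> t -> unit *)

(* A hypothetical caller only has to swap the buffer type: *)
let json_to_string (json : Yojson.Safe.t) : string =
  let buf = Buffer.create 256 in
  Yojson.Safe.to_buffer buf json;
  Buffer.contents buf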

@Leonidas-from-XIV (Member Author)

Again, one of the macOS builds failed for unrelated reasons, the rest is green.

@mjambon (Member) commented Jan 26, 2019

Bi_outbuf was created because it was faster than Buffer, thanks to some unsafe operations not provided by Buffer. Having this module in biniou was more convenient than creating a separate package just for this, so this is how we ended up with the dependency.

Some people may rely on the extra performance here [benchmarks needed], especially when used from atdgen (which is the reason why yojson was created in the first place).

For context, the original application for which I developed biniou, yojson, and atdgen was a 24-hour Hadoop pipeline running on 80 machines, around 2011. The inputs and outputs of the pipeline used json, while the intermediate steps used biniou instead of json because it was significantly faster to process (4x if I remember correctly, don't quote me on this) and was otherwise equally extensible.

There may be scenarios where json is imposed but the cost of parsing or printing json dominates. I don't know.

@c-cube (Member) commented Jan 26, 2019

I'm curious, what operations of Bi_outbuf.t are faster than Buffer.t? As far as I can tell, these days Buffer.add_char is an inlined function, and the other operations seem hard to optimize much more than Bytes.blit?

@mjambon (Member) commented Jan 26, 2019

@c-cube see the write_int function in write.ml. We first ensure that there's enough room to write all the digits, and then we write the digits without checking whether there's enough room, as add_char would do.
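As a rough illustration of the technique described here (this is not the actual write.ml code; the names and layout are made up for the sketch), reserving worst-case space once lets every digit be written with an unchecked Bytes.unsafe_set, whereas a Buffer-based version pays a capacity check on each call:

(* Buffer variant: Buffer.add_string / Buffer.add_char check capacity internally. *)
let write_int_buf (buf : Buffer.t) (x : int) =
  Buffer.add_string buf (string_of_int x)

let max_int_digits = 20  (* 19 digits for a 63-bit OCaml int, plus a sign *)

(* Bi_outbuf-style variant: the caller has already ensured that at least
   [max_int_digits] free bytes exist at [pos], so every digit write can use
   Bytes.unsafe_set without further checks.  Returns the new position. *)
let write_int_unsafe (s : Bytes.t) (pos : int) (x : int) : int =
  let start = if x < 0 then (Bytes.unsafe_set s pos '-'; pos + 1) else pos in
  let rec count n acc = if n = 0 then acc else count (n / 10) (acc + 1) in
  let ndigits = if x = 0 then 1 else count x 0 in
  let rec fill i n =
    if i >= start then begin
      (* [abs (n mod 10)] keeps this correct for negative [n], min_int included *)
      Bytes.unsafe_set s i (Char.chr (Char.code '0' + abs (n mod 10)));
      fill (i - 1) (n / 10)
    end
  in
  fill (start + ndigits - 1) x;
  start + ndigits

Whether the saved checks matter in practice is exactly what the benchmarks discussed below try to answer.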

@c-cube (Member) commented Jan 26, 2019

@mjambon that makes sense, thank you. It'd be interesting to see benchmarks, I think add_char is probably still pretty fast (except on the slow path which is unlikely to happen).

@Leonidas-from-XIV (Member Author)

I'll look into providing a microbenchmark to check the difference in performance. Except for write_int no other function used extend (which is available in Bi_outbuf but not Buffer), so if this turns into a bottleneck it can maybe be solved in a different way.

@NathanReb (Collaborator)

I think performance is indeed a concern for a lot of Yojson users, and changes that might impact it should not be taken lightly.

If switching to Buffer turns out not to slow things down then it's indeed a welcome change.

Benchmarks would be welcome, even more so if they can be integrated into the repo so that we can easily evaluate performance when changes are made.

There's an existing bench/ dir which aimed to compare Yojson to json-wheel. This might be the right time to bring it back to life, dropping the json-wheel part; comparing Yojson to itself is what really matters now.

@Leonidas-from-XIV (Member Author)

> There's an existing bench/ dir which aimed to compare Yojson to json-wheel. This might be the right time to bring it back to life, dropping the json-wheel part; comparing Yojson to itself is what really matters now.

I'll create another PR for it, since that'd have to be merged before this one to measure before/after.

@Leonidas-from-XIV (Member Author)

To get started here's the PR for a revived benchmark: #76

I should probably add a case where it writes JSON with a lot of integers to maximize the impact that this change might have on the performance.
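A benchmark case along those lines could look like the following core_bench sketch; the names, list size, and command wiring are illustrative and may differ from what ends up in bench/ (newer core versions expose the runner as Command_unix.run instead of Command.run):

(* Illustrative core_bench case stressing integer serialization. *)
open Core
open Core_bench

(* A JSON value dominated by integers, to exercise write_int heavily. *)
let int_heavy : Yojson.Safe.t =
  `List (List.init 100_000 ~f:(fun i -> `Int i))

let () =
  Command.run
    (Bench.make_command
       [ Bench.Test.create ~name:"JSON writing int list" (fun () ->
            ignore (Yojson.Safe.to_string int_heavy : string)) ])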

@Leonidas-from-XIV (Member Author)

I have run the revised benchmark (where I added a test that writes a lot of integers, to see how big the impact is). This is the current master branch, 32d17e9:

Estimated testing time 50s (5 benchmarks x 10s). Change using -quota SECS.
┌──────────────────────────┬────────────┬────────────┬────────────┬──────────┬────────────┐
│ Name                     │   Time/Run │    mWd/Run │   mjWd/Run │ Prom/Run │ Percentage │
├──────────────────────────┼────────────┼────────────┼────────────┼──────────┼────────────┤
│ JSON reading             │    17.64us │  1_167.02w │      2.24w │    2.24w │      1.52% │
│ JSON writing             │    10.77us │    721.00w │    258.30w │    0.30w │      0.93% │
│ JSON writing assoc       │ 1_164.26us │ 20_266.11w │ 47_254.91w │    4.91w │    100.00% │
│ JSON writing int list    │   387.98us │    261.09w │ 22_255.37w │    2.37w │     33.32% │
│ JSON writing string list │   743.99us │ 20_261.13w │ 41_142.86w │    3.86w │     63.90% │
└──────────────────────────┴────────────┴────────────┴────────────┴──────────┴────────────┘

and here are the results from this branch, 6420a5e:

Estimated testing time 50s (5 benchmarks x 10s). Change using -quota SECS.
┌──────────────────────────┬────────────┬────────────┬────────────┬──────────┬────────────┐
│ Name                     │   Time/Run │    mWd/Run │   mjWd/Run │ Prom/Run │ Percentage │
├──────────────────────────┼────────────┼────────────┼────────────┼──────────┼────────────┤
│ JSON reading             │    17.11us │  1_141.03w │      2.44w │    2.44w │      1.70% │
│ JSON writing             │     9.73us │    695.01w │    258.30w │    0.30w │      0.97% │
│ JSON writing assoc       │ 1_007.03us │ 20_240.10w │ 47_256.25w │    6.25w │    100.00% │
│ JSON writing int list    │   371.72us │    235.09w │ 22_256.17w │    3.17w │     36.91% │
│ JSON writing string list │   639.12us │ 20_235.13w │ 41_143.84w │    4.84w │     63.47% │
└──────────────────────────┴────────────┴────────────┴────────────┴──────────┴────────────┘

Both were run on a 2015 MacBook Pro (OCaml 4.07.1, without flambda), right after each other (admittedly not the greatest machine to run benchmarks on; I invite you to run the benchmarks yourself). I can also try to rename this branch to Yojson2 to build both into the same binary if there is interest in that.

But from these numbers it looks like using Buffer made it slightly faster in every benchmark.

@hcarty commented Jan 31, 2019

I'm not super familiar with reading core_bench's output. Are the columns after Time/Run referring to GC activity?

@Leonidas-from-XIV (Member Author)

I'm not super familiar with that output either, but yes, according to their Getting Started document the columns are minor words allocated (mWd), major heap words (mjWd) and promoted words (Prom). By those metrics it looks like this branch allocates a few fewer minor words, about one major word more, and promotes more into the major heap. But I don't really know how to judge this.

@hcarty commented Jan 31, 2019

I'm not sure how to weigh it either, but the speed increase + smaller dependency footprint does seem like a nice benefit.

@NathanReb (Collaborator)

Like you, I'm not sure exactly how to interpret all of the benchmark output, but it mostly seems to have no significant impact, and if anything it's faster than it used to be, so as far as I'm concerned it's 👍

I haven't reviewed it all yet but that should be fine.

As you mentioned, this will break the API, so it requires a major version bump, especially since atdgen relies on those parts (specifically on lexer_state using Bi_outbuf).
I'd like to squeeze in a few other changes before 2.0, but it might take a bit of time to figure out which ones are doable and fit yojson users' needs. Because of that I'm wondering whether we should merge this to master or to a 2.0.0 branch, so that we can still release minor changes and patches in the meantime.

What do you think?

@Leonidas-from-XIV (Member Author)

> Because of that I'm wondering whether we should merge this to master or to a 2.0.0 branch, so that we can still release minor changes and patches in the meantime.

I think yes. I was thinking about working on Easy_format next, and not having this merged will create quite a lot of pain, because I'd either have to base it off this branch (which was painful the last time I did it) or create another branch that will almost certainly conflict with this one.

I don't see much need to do minor changes or patches in 1.x, since Yojson has releases around once a year. Even then, it is pretty feasible to just branch off the 1.6 commit, create a 1.x branch and release from there. It might be a bit tricky wrt gh-pages, but @c-cube has managed to get the docs of all versions of containers on GitHub Pages, so I assume it can be done.

@c-cube (Member) commented Feb 1, 2019

Wrt gh-pages I just make a new subdir for every release, copy the content of _build/default/_doc (something like that) into it, add a link in the toplevel index.md file, and that's about it :)

@NathanReb (Collaborator)

Sorry for the delay. I agree Yojson has been around for quite a while without needing frequent releases, so I doubt there will be an urgent need for a patch while we work on 2.0, and if there is, your suggestion about using a 1.x branch sounds reasonable.

I'll review and (hopefully) merge today!

  let ob =
    match buf with
-     None -> Bi_outbuf.create_channel_writer ?len oc
+     None -> Buffer.create len
@NathanReb (Collaborator) commented on this diff:

This breaks to_channel and to_file since we just write to a Buffer and nothing else ever happens.

We obviously need to fix this, but my guess is that depending on how we do it, this might have an impact on performance. Just calling Buffer.output_buffer at the end will probably perform pretty badly compared to the old implementation.

The fact that this can pass CI is alarming; tomorrow I will:

  1. Add basic unit tests to make sure the entry points of the API work as expected
  2. Add IO versions to the benchmarks

  let ob =
    match buf with
-     None -> Bi_outbuf.create_output_writer ?len out
+     None -> Buffer.create len
@NathanReb (Collaborator) commented on this diff:

Same remark as above!

@NathanReb (Collaborator)

I rebased this on top of master, which now has a few more tests to showcase the bug this PR introduces!

@XVilka commented Jul 1, 2019

What is the decision on this PR? It seems quite some time has passed.

@Leonidas-from-XIV (Member Author)

> What is the decision on this PR? It seems quite some time has passed.

There isn't really a decision that needs to be made at the moment. The PR currently breaks existing functionality with no replacement (as the test introduced by @NathanReb shows), so it cannot be merged until I or someone else implements it in a way where you can actually output to a file.

The Buffer module does not have a create_channel_writer function, so an abstraction on top of it would be necessary, where a wrapping module can write to a channel, and the code would then have to be adjusted to use it.
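For reference, the simplest shape such a fix can take is sketched below; this is a hypothetical helper, not yojson's actual implementation, and it assumes the caller supplies a function that fills a Buffer.t (for example by calling to_buffer):

(* Hypothetical sketch: accumulate the whole document in a Buffer.t, then
   write it to the channel in one go at the end. *)
let to_channel_via_buffer ?(len = 256) (write : Buffer.t -> unit) (oc : out_channel) =
  let buf = Buffer.create len in
  write buf;                      (* e.g. fun buf -> Yojson.Safe.to_buffer buf json *)
  Buffer.output_buffer oc buf     (* single flush; the buffer grows to hold everything *)

The trade-off is exactly the one discussed in this thread: memory use grows with the size of the document, and the channel is only written once at the end.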

@Leonidas-from-XIV (Member Author) commented Dec 19, 2019

So, my hope is that I'll get to work on this during the "between the years" period, get it merged soon, and then, together with #88, also remove the transitive dependency on easy-format. To that end I've implemented the missing functionality in the simplest possible way, as you can see in the latest commit.

Thanks for the unit tests @NathanReb, they are passing now. I'll put adding more extensive tests for this on the TODO list. I would've expected you to be right about Buffer.output_buffer being slower, but unless I am reading this wrong, the opposite seems to be the case:

This branch (rebased on current master):

Estimated testing time 50s (5 benchmarks x 10s). Change using -quota SECS.
┌──────────────────────────┬────────────┬────────────┬────────────┬──────────┬────────────┐
│ Name                     │   Time/Run │    mWd/Run │   mjWd/Run │ Prom/Run │ Percentage │
├──────────────────────────┼────────────┼────────────┼────────────┼──────────┼────────────┤
│ JSON reading             │    17.26us │  1_141.03w │      2.43w │    2.43w │      1.67% │
│ JSON writing             │     9.72us │    695.01w │    258.30w │    0.30w │      0.94% │
│ JSON writing assoc       │ 1_036.13us │ 20_240.10w │ 47_256.26w │    6.26w │    100.00% │
│ JSON writing int list    │   363.74us │    235.09w │ 22_256.18w │    3.18w │     35.11% │
│ JSON writing string list │   642.48us │ 20_235.13w │ 41_143.85w │    4.85w │     62.01% │
└──────────────────────────┴────────────┴────────────┴────────────┴──────────┴────────────┘

Current master, 41c6c29:

Estimated testing time 50s (5 benchmarks x 10s). Change using -quota SECS.
┌──────────────────────────┬────────────┬────────────┬────────────┬──────────┬────────────┐
│ Name                     │   Time/Run │    mWd/Run │   mjWd/Run │ Prom/Run │ Percentage │
├──────────────────────────┼────────────┼────────────┼────────────┼──────────┼────────────┤
│ JSON reading             │    17.39us │  1_167.02w │      2.24w │    2.24w │      1.55% │
│ JSON writing             │    10.66us │    721.00w │    258.30w │    0.30w │      0.95% │
│ JSON writing assoc       │ 1_121.55us │ 20_266.10w │ 47_254.90w │    4.90w │    100.00% │
│ JSON writing int list    │   392.04us │    261.09w │ 22_255.36w │    2.36w │     34.96% │
│ JSON writing string list │   719.97us │ 20_261.13w │ 41_142.85w │    3.85w │     64.19% │
└──────────────────────────┴────────────┴────────────┴────────────┴──────────┴────────────┘

I am a bit surprised by these results, and I suspect I might be missing something, so I would appreciate it if people could take a look and run the benchmarks themselves.

Edit: Okay, what I was missing is that the benchmarks don't stress IO, which of course makes them look faster. I'll add IO benchmarks.

@Leonidas-from-XIV (Member Author) commented Dec 19, 2019

Okay, after adding the relevant benchmarks:

io-benchmarks (master+channel benchmarks) branch:

┌─────────────────────────────────────┬────────────┬────────────┬────────────┬──────────┬────────────┐
│ Name                                │   Time/Run │    mWd/Run │   mjWd/Run │ Prom/Run │ Percentage │
├─────────────────────────────────────┼────────────┼────────────┼────────────┼──────────┼────────────┤
│ JSON reading                        │    17.46us │  1_167.02w │      2.24w │    2.24w │      1.58% │
│ JSON writing                        │    10.70us │    721.00w │    258.30w │    0.30w │      0.97% │
│ JSON writing assoc                  │ 1_106.36us │ 20_266.10w │ 47_254.90w │    4.90w │    100.00% │
│ JSON writing int list               │   388.18us │    261.09w │ 22_255.35w │    2.35w │     35.09% │
│ JSON writing string list            │   713.11us │ 20_261.13w │ 41_142.85w │    3.85w │     64.46% │
│ JSON writing int list to channel    │   366.73us │     52.00w │  6_682.97w │    0.97w │     33.15% │
│ JSON writing string list to channel │   638.09us │ 20_052.34w │  9_255.87w │    3.87w │     57.68% │
│ JSON writing assoc to channel       │ 1_019.29us │ 20_057.34w │ 15_424.31w │    4.31w │     92.13% │
└─────────────────────────────────────┴────────────┴────────────┴────────────┴──────────┴────────────┘

This branch (plus the cherry-picked benchmark from io-benchmark):

┌─────────────────────────────────────┬────────────┬────────────┬────────────┬──────────┬────────────┐
│ Name                                │   Time/Run │    mWd/Run │   mjWd/Run │ Prom/Run │ Percentage │
├─────────────────────────────────────┼────────────┼────────────┼────────────┼──────────┼────────────┤
│ JSON reading                        │    17.23us │  1_141.03w │      2.43w │    2.43w │      1.55% │
│ JSON writing                        │     9.49us │    695.01w │    258.30w │    0.30w │      0.86% │
│ JSON writing assoc                  │   981.97us │ 20_240.10w │ 47_256.25w │    6.25w │     88.58% │
│ JSON writing int list               │   364.64us │    235.09w │ 22_256.18w │    3.18w │     32.89% │
│ JSON writing string list            │   633.92us │ 20_235.13w │ 41_143.84w │    4.84w │     57.18% │
│ JSON writing int list to channel    │   352.56us │     14.04w │ 15_882.40w │    0.40w │     31.80% │
│ JSON writing string list to channel │   737.77us │ 20_014.16w │ 32_268.87w │    0.87w │     66.55% │
│ JSON writing assoc to channel       │ 1_108.61us │ 20_019.17w │ 32_269.40w │    1.40w │    100.00% │
└─────────────────────────────────────┴────────────┴────────────┴────────────┴──────────┴────────────┘

So writing to a channel did indeed get slower, but honestly by less than I would've expected.

@andersfugmann commented Jun 26, 2020

@NathanReb, @Leonidas-from-XIV What is the current status? Are there any outstanding questions or work that I can help with?

I would really love to see these changes merged and have version 2.0.0 published through opam with reduced dependencies.

Looking at the benchmarks, it seems that the changes result in reduced GC pressure.
So if a user is concerned about serialization speed, i.e. serializing in a tight loop, high GC pressure will kill performance sooner or later.

The timing differences are also very small (ranging from 87% to 113%) and can always be micro-optimized later.

@NathanReb added this to the 2.0.0 milestone on Jul 9, 2020
@NathanReb (Collaborator)

I'm looking into this right now! @Leonidas-from-XIV could you rebase this on top of master please? I think the benchmarks you added have now been merged!

@Leonidas-from-XIV (Member Author)

Thanks! I've just rebased it, and since I did the same change as @bschommer in #89, I merged his change first, so the PR got slightly smaller.

@NathanReb (Collaborator) left a comment:

Looks good to me! I have a couple minor comments but otherwise it's good to go!

The main loss here is that the buffer will grow until the final write to the channel/file/output. That means that when writing a large JSON value using those functions, we'll allocate an accordingly large buffer, where it used to be emptied every now and then. Is that correct?

If that becomes an issue, I guess we could improve those functions so that we do the flushing ourselves internally, without leaking a custom buffer type in the API. For the to_string and to_buffer variants it's not an issue, so we're good on that front.
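If internal flushing were ever added, one hypothetical shape for it (not part of this PR; the name is made up) would be to empty the buffer into the channel whenever it crosses a size threshold, which bounds memory use while keeping Buffer.t in the public API:

(* Hypothetical helper: flush the buffer to the channel once it grows past
   [threshold] bytes, then reuse it. *)
let flush_if_large ?(threshold = 65_536) (oc : out_channel) (buf : Buffer.t) =
  if Buffer.length buf >= threshold then begin
    Buffer.output_buffer oc buf;
    Buffer.clear buf
  end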

Review comments on yojson.opam and lib/write.mli: resolved.
@NathanReb (Collaborator)

Ouh! Could you also please add one or more changelog entries about the API changes we're introducing here?

@Leonidas-from-XIV (Member Author)

Yep, done all that! For some reason notifications are kinda flaky for me, so I didn't see your review before. Needs some investigating.

Review comment on yojson-biniou.opam: resolved.
The previous behaviour was a change from the old serialization behaviour; notably, it omitted the 'u' from the escape sequence. That was incorrect.
The signature changes anyway so might as well change the name too.
@NathanReb (Collaborator) left a comment:

Awesome, looks good to me!

Thanks for working on this and for your patience!

@Leonidas-from-XIV merged commit 06b9da8 into ocaml-community:master on Jul 21, 2020
@Leonidas-from-XIV deleted the unbundle-biniou branch on July 21, 2020 11:23
panglesd added a commit to panglesd/opam-repository that referenced this pull request Jun 2, 2022
CHANGES:

### Removed

- Removed dependency on easy-format and removed `pretty_format` from
  `Yojson`, `Yojson.Basic`, `Yojson.Safe` and `Yojson.Raw`. (@c-cube, ocaml-community/yojson#90)
- Removed dependency on `biniou`, simplifying the chain of dependencies. This
  changes some APIs:
  * `Bi_outbuf.t` in signatures is replaced with `Buffer.t`
  * `to_outbuf` becomes `to_buffer` and `stream_to_outbuf` becomes
    `stream_to_buffer`
  (@Leonidas-from-XIV, ocaml-community/yojson#74, and @gasche, ocaml-community/yojson#132)
- Removed `yojson-biniou` library
- Removed deprecated `json` type aliasing type `t` which has been available
  since 1.6.0 (@Leonidas-from-XIV, ocaml-community/yojson#100).
- Removed `json_max` type (@Leonidas-from-XIV, ocaml-community/yojson#103)
- Removed constraint that the "root" value being rendered (via either
  `pretty_print` or `to_string`) must be an object or array. (@cemerick, ocaml-community/yojson#121)
- Removed `validate_json` as it only made sense if the type was called `json`.
  (@Leonidas-from-XIV, ocaml-community/yojson#137)

### Add

- Add an opam package `yojson-bench` to deal with benchmarks dependency
  (@tmcgilchrist, ocaml-community/yojson#117)
- Add a benchmark to judge the respective performance of providing a buffer vs
  letting Yojson create an internal one (ocaml-community/yojson#134, @Leonidas-from-XIV)
- Add an optional `suf` keyword argument to functions that write serialized
  JSON, thus allowing NDJSON output. Most functions default to not adding any
  suffix, except for `to_file` (ocaml-community/yojson#124, @panglesd) and functions
  writing sequences of values, where the default is `\n` (ocaml-community/yojson#135,
  @Leonidas-from-XIV)

### Change

- The `stream_from_*` and `stream_to_*` functions now use a `Seq.t` instead of a
  `Stream.t`, and they are renamed into `seq_from_*` and `seq_to_*` (@gasche, ocaml-community/yojson#131).

### Fix

- Avoid copying unnecessarily large amounts of strings when parsing (ocaml-community/yojson#85, ocaml-community/yojson#108,
  @Leonidas-from-XIV)
- Fix `stream_to_file` (ocaml-community/yojson#133, @tcoopman and @gasche)
panglesd added a commit to panglesd/opam-repository that referenced this pull request Jun 3, 2022
Leonidas-from-XIV pushed a commit to panglesd/opam-repository that referenced this pull request Jun 9, 2022
thierry-martinez added a commit to thierry-martinez/KappaTools that referenced this pull request Oct 19, 2022
This should make Kappa compatible with the latest version of yojson.

Since ocaml-community/yojson#74, yojson does
not rely on biniou buffers.

The main drawback seems to be that `JsonUtil.write_to_channel` now
writes everything to memory (in a `Buffer.t`) before dumping the
buffer to the channel.
feret pushed a commit to Kappa-Dev/KappaTools that referenced this pull request Oct 19, 2022