
Comments (185)

  • Aurornis
    The key point for me was not the rewrite in Go or even the use of AI, it was that they started with this architecture:

    > The reference implementation is JavaScript, whereas our pipeline is in Go. So for years we’ve been running a fleet of jsonata-js pods on Kubernetes - Node.js processes that our Go services call over RPC. That meant that for every event (and expression) we had to serialize, send over the network, evaluate, serialize the result, and finally send it back.

    > This was costing us ~$300K/year in compute, and the number kept growing as more customers and detection rules were added.

    For something so core to the business, I'm baffled that they let it get to the point where it was costing $300K per year.

    The fact that this only took $400 of Claude tokens to completely rewrite makes it even more baffling. I can make $400 of Claude tokens disappear quickly in a large codebase. If they rewrote the entire thing with $400 of Claude tokens, it couldn't have been that big - well within the range of something engineers could have migrated by hand in a reasonable time. Those same engineers will now have to review, understand, and then improve all of the AI-generated code, which will take time too.

    I don't know what to think. These blog articles are supposed to be a showcase of engineering expertise, but bragging about having AI vibecode a replacement for a critical part of your system - one that was questionably designed and costing as much as a fully-loaded FTE per year - raises a lot of other questions.
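To make the quoted per-event overhead concrete, here is a minimal Go sketch of the round trip the architecture implies. Everything here is hypothetical illustration, not Reco's actual code: the `Event` fields, the request shape, and the `fakeRPC` stub are all made up; the real system does this over the network to a Node.js pod.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Event stands in for one pipeline message. The field names are
// hypothetical; the real schema isn't shown in the post.
type Event struct {
	ID   string         `json:"id"`
	Data map[string]any `json:"data"`
}

// evaluateRemotely sketches the per-event cost the comment describes:
// marshal the event, ship it to a jsonata-js pod, and unmarshal the
// verdict. The network call is stubbed out, so only the serialization
// work on the Go side is visible here.
func evaluateRemotely(e Event, expr string) (bool, error) {
	payload, err := json.Marshal(struct {
		Expr  string `json:"expr"`
		Event Event  `json:"event"`
	}{expr, e})
	if err != nil {
		return false, err
	}
	// In the real system this is an RPC to a Node.js pod; here we
	// fake a response so the full round trip is visible end to end.
	resp := fakeRPC(payload)

	var out struct {
		Match bool `json:"match"`
	}
	if err := json.Unmarshal(resp, &out); err != nil {
		return false, err
	}
	return out.Match, nil
}

// fakeRPC always answers "match" so the example is self-contained.
func fakeRPC(_ []byte) []byte { return []byte(`{"match":true}`) }

func main() {
	match, err := evaluateRemotely(
		Event{ID: "e1", Data: map[string]any{"severity": "high"}},
		`severity = "high"`)
	fmt.Println(match, err)
}
```

Two marshals and two unmarshals per event, times billions of events, is the cost the in-process Go port removes.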
  • rozzie
    Some background on one of the other two golang implementations mentioned in the comments.

    Years ago I hired an Upwork contractor to port v1.5.3 to golang as best he could. He did a great job and it served us well; however, it was far, far from perfect and it couldn't pass most of the JS test suite. The worst part was that it had several recursion bugs that could segfault on bad expressions. That was the now-deprecated implementation at https://github.com/blues/jsonata-go

    Early in 2025 I used Claude Code and Codex to do a proper, compliant port that passes the full set of tests and is safe. It was most certainly not a trivial task for AI, as many nuances of JSONata syntax derive from its JS roots.

    Regardless, it was a great experience, and here's the 2.0.6 AI port, along with a golang exerciser that lets you flip back and forth between the implementations. We did a seamless migration and it's been running beautifully in prod in Blues' Notehub for quite a while, as a core transformation capability used by customers in our JSON message pipeline. https://github.com/jsonata-go/jsonata
  • jdub
    > At Reco, we have a policy engine that evaluates JSONata expressions against every message in our data pipeline - billions of events, on thousands of distinct expressions.

    The original architecture choice and price almost gave me a brain aneurysm, but the "build it with AI" solution is also under-considered. This looks like a perfect candidate for existing, high-quality, high-performance, production-grade solutions such as quamina (independent successor to aws/event-ruler, and ancestor to quamina-rs).

    There's going to be a lot of "we were doing something stupid and we solved it by doing something stupid with AI [LLM code]" in our near future. :-|
  • tantalor
    They say "embedding V8 directly into Go (to avoid the network hop)" was only an "incremental improvement".

    I'm very curious why this didn't help more - that was my first thought. Maybe they didn't get the result they wanted immediately, so they gave up before evaluating it fully?
  • kace91
    > The approach was the same as Cloudflare’s vinext rewrite: port the official jsonata-js test suite to Go, then implement the evaluator until every test passes.

    The first question that comes to mind is: who takes care of this now? You had a dependency on an open source project; now your translated copy (fork?) is yours to maintain - 13k lines of Go. How do you make sure it stays updated? Is this maintenance factored in?

    I know nothing about JSONata or the problem it solves, but I took a look at the repo and there are 15 PRs and 150 open issues.
  • hirako2000
    These rewrite examples are a fallacy.

    See Next.js: over a decade of iterative development, countless vulnerabilities discovered internally and externally, which got patched with tribal knowledge acquired by core contributors and security reviewers.

    Now Joe shows off that he rewrote it with Vite at its core, for just 1,100 dollars worth of tokens. Performance improvement and no licensing liability.

    Outcome: more money for Nvidia, and even more money into the pockets of your next hackers.
  • tabs_or_spaces
    The headline seems flashy indeed, but AI didn't really solve this, IMO. They just fixed their technology choices and got the benefits.

    There are existing golang versions of jsonata, so this could have been achieved with those libraries too, in theory. There's nothing written about why the existing libraries weren't good enough and why a new one needed to be written. Usually you need to do some due diligence in this area, but there's no mention of it in this post.

    To measure the real efficiency, gnata should have been benchmarked against the existing golang libraries. For all we know, the AI implementation is much slower.

    The benchmarks in the blog are also weird. The measurement is done within the app, but you're meant to measure the calls within the library itself (e.g. calling the JS version in its isolated benchmark vs the Go version in its isolated benchmark). So you don't actually know what the actual performance of the AI-written version is.

    The only benefit, again, is that they fixed their existing bad technology choice - and based on what is observed, with a lesser bad technology choice. Then it's layered with clickbait marketing titles for others to read. I'll probably need to expect more of these types of posts in the future.
  • hgo
    > I shared the numbers internally and someone asked about the ROI. Production cost for jsonata-js in the previous month was about $25K - now it was 0. That conversation ended up being pretty short.

    I'm obviously projecting from my own experience, but it echoes so clearly how power can be wielded without actual insight, and almost arrogantly: "OK, all very nice, but the ROI...?"

    The article seems to come from a company with stellar engineering, so maybe this doesn't apply here. But the tone I imagine from that comment still stands out to me, precisely because of the mature engineering. Of course ROI is important, and a company exists to build it. I'm extrapolating from something tiny and thinking of the Boeing culture shift: https://news.ycombinator.com/item?id=25677848

    In short, why can't good engineering just be good engineering, fostered with trust, and then profits?
  • ebb_earl_co
    > This was costing us ~$300K/year in compute, and the number kept growing as more customers and detection rules were added.

    Maybe I’m out of touch, but I cannot fathom this level of cost for custom lambda functions operating on JSON objects.
  • pravetz259
    Congrats! This author found a sub-optimal microservice and replaced it with inline code. This is the bread and butter work of good engineering. This is also part of the reason that microservices are dangerous.The bad engineering part is writing your own replacement for something that already exists. As other commenters here have noted, there were already two separate implementations of JSONata in Go. Why spend $400 to have Claude rewrite something when you can just use an already existing, already supported library?
  • cjonas
    The docs indicate there are already 2 other go implementations. Why not just use one of those? https://docs.jsonata.org/overview.html
  • sarchertech
    I’ve predicted the future, and I’ve figured out where vibe coding is going to go based on this article.

    1. People are going to come in and vibe code a replacement for some shitty component in a morning. They aren’t going to take time to verify and understand the code.
    2. The new code will fix most of the problems with the original component, but it will have a whole new set of issues.
    3. People will use AI to fix the bugs, but they won’t take the time to understand the fixes or the regression tests that they tell AI to add.
    4. The new system will get so complicated that it’s hard for even AI to work on it. The “test suite” will be so full of tests that are redundant and nonsensical that the run time will be too high to meaningfully guide AI. And even where AI does use it, many of the tests just reimplement the code under test in the test (Claude does this about 25% of the time, based on what I’ve seen, if you don’t catch it).
    5. Goto 1.

    This is the same cycle I’ve seen in 90% of the companies I’ve worked at; it will just be on a faster cadence. And that is how we’ll get to a place where we output 100x the lines of code, and spend 2x developer salaries on tokens, with little meaningful impact on the outside world.
  • VladVladikoff
    This isn’t the first time I’ve read a ridiculous story like this on Hacker News. It seems to be a symptom of startups that suddenly get a cash injection with no clue how to properly manage it. I have been slowly scaling a product over the past 12 years, on income alone, so I guess I see things differently, but I could never allow such a ridiculous spend on something so trivial to reach even 1% of this level before squashing it.
  • bawolff
    I'm just kind of confused about what took them so long. It was costing 300k a year, plus causing deployment headaches, etc. But it's a relatively simple tool from the looks of it, and there are many competitors, some already written in Go.

    It's kind of weird that they waited so long to do this. Why even need AI? This looks like the sort of thing you could port by hand in less than a week (possibly even in a day).
  • captn3m0
    For context, JSONata's reference implementation is 5.5k lines of javascript.
  • Yokohiii
    Bad decision making, lack of code ownership, absence of confidence: that is what made this exaggerated cost even possible. Or: the Peter principle.
  • gloosx
    If the first commit was two weeks ago, how did it end up saving 500k a year already? Did they mean expected to save?
  • cromka
    If they were paying $500k/year, why hadn't they paid someone to rewrite it? Surely that would have been cheaper still.

    But above everything else, this is a great example of how much JavaScript inefficiency actually costs us, as humanity. How many companies burn money like this?
  • cosmotic
    Next maybe they will use a binary format instead of JSON.
  • camgunz
    Wait could I have written a JSONata parser and sold it to reco.ai for $499k/yr?
  • NetOpWibby
    With my favorite database (Gel) effectively dead (team acquihired by Vercel), I told Claude to reimplement it in Deno/TypeScript. While I haven't tested it on a real project yet (on my TODO for tmrw), hundreds of tests pass, so we'll see.

    If it does work I'll do a Show HN in a few months. One thing I always do with LLM code, though, is review every single line (mainly because I'm particular with formatting). disc.sh is gonna be the domain when I launch the marketing site.
  • amazingamazing
    How many billions of compute are wasted because this industry can't align on some binary format across all languages and APIs, and instead keeps serializing and deserializing things?
  • crazygringo
    > The approach was the same as Cloudflare’s vinext rewrite: port the official jsonata-js test suite to Go, then implement the evaluator until every test passes.

    This makes me wonder: for reimplementation projects that aren't lucky enough to have super-extensive test suites, how good are LLMs at taking existing codebases and writing tests for every single piece of logic, every code path? So that you can then do a "cleanish-room" reimplementation in a different language (or even the same language) using those tests?

    Obviously the easy part is getting the LLMs to write lots of tests, which you can then trivially iterate until they all pass on the original code. The hard parts are verifying that the tests cover all possible code paths and edge cases, and reliably triggering certain internal code paths.
  • lwansbrough
    Huh, I just did basically the same thing. My requirements were not due to spending $300k/yr on parsing (lol), but I was amazed how far I got just asking the AI for progressively more functionality.My use case is a bit different. I wanted JSONata as the query language to query Flatbuffers data (via schema introspection) in Rust, due to its terseness and expressiveness, which is a great combination for AI generated queries.
  • __0x01
    > Correctness: 1,778 test cases from the official jsonata-js test suite + 2,107 integration tests in the production wrapper.

    The AI-generated code can still introduce subtle bugs that lead to incorrect behaviour. One example is the introduction of functions into the codebase (by AI) that have bugs but no corresponding tests.

    EDIT: correct quotation characters
  • comrade1234
    So they used an ai trained on the original source code to "rewrite" the original source code.
  • teaearlgraycold
    Anyone who ships a k8s cluster to make a JS library available over RPC needs to have a long hard look in the mirror. Should have bundled node, quickjs, anything into the go nodes for the first pass. k8s truly is a cancer for many teams.
  • err4nt
    The moment the amount of savings surpassed the annual salary of a good programmer, you know you've made the wrong investment.
  • ipsum2
    Everyone is surprised at the $300k/year figure, but that seems on the low end. My previous work place spends tens of millions a year on GPU continuous integration tests.
  • bitbasher
    Why not use FFI from Go to something in C/C++ that is faster than Go's JSON stuff?
  • neya
    AI company selling AI products claims to have solved a problem using AI when it could've solved it with better code and engineering foundations
  • adityaathalye
    If "AI" is the poor man's (unhygienic) macro system, then a lot of such token software builders are going to viscerally know what it is to plumb the darkest depths of the "Lisp Curse".
  • zellyn
    If you can incorporate Quamina or similar logic in there, you might be able to save even more… worth looking into, at least
  • politelemon
    > No longer just vibe coding

    It is, by definition.
  • mickael-kerjean
    A principal engineer spending his weekend vibe coding some slop at a rate of 13k lines of code in 7 hours to replace a vendor. Is this really the new direction we want to set for our industry? For the first time ever, I have had a CTO vibe code something to replace my product [1], even though it cost less than a day of his salary. The direction we are heading makes me want to quit; everything points to software now being worthless.

    [1] https://github.com/mickael-kerjean/filestash
  • nirb89
    Hey all,

    I'm the author of the blog post. I'm honestly loving the discussion this is generating (including the less flattering comments here). I'll try to answer some of the assumptions I've seen; hopefully it clears a few things up.

    First off, some numbers. We're a near real-time cybersecurity platform, and we ingest tens of billions of raw events daily from thousands of different endpoints across SaaS. Additionally, a significant subset of our customers are quite large (think Fortune 500 and up). For the engine, that means a few things:

    - It was designed to be dynamic by nature, so that both out-of-the-box and user-defined expressions evaluate seamlessly.
    - Schemas vary wildly, of which there are thousands, since they are received from external sources - often with little documentation.
    - A matching expression needs to be alerted on immediately, as these are critical to business safety (no use triggering an alert on a breached account a day later).
    - Endpoints change and break on a near-weekly basis, so being able to update expressions on the fly is integral to the process, and should not require changes by the dev team.

    Now to answer some questions:

    - Why JSONata: others have mentioned it here, but it is a fantastic and expressive framework with a very detailed spec. It fits naturally into a system that is primarily NOT maintained by engineers, but instead by analysts and end-users who often have little coding expertise.
    - Why not a pre-existing library: believe me, we tried that first. None actually match the reference spec reliably. We tried multiple Go, Rust and even Java implementations. They all broke on multiple existing expressions, and were not reliably maintained.
    - Why JSON at all (and not a normalized pipeline): we have one! Our main flow is much more of a classic ELT, with strongly-defined schemas and distributed processing engines (i.e. Spark). It ingests quite a lot more traffic than gnata does, and is obviously more efficient at scale. However, we have different processes for separate use-cases, as I suspect most of the organizations you work at do as well.
    - Why Go and not Java/JS/Rust: well, because that's our backend. The rule engine is not JUST for evaluating JSONata expressions. There are a lot of layers involving many aspects of the system, one of which is gnata. A matching event must pass all these layers before it even gets to the evaluation part. Unless we rewrote our backend in JS, no other language would have really mitigated the problem.

    Finally, regarding the $300k/year cost (which many here seem to be horrified by) - it seems I wasn't clear enough in the blog. 200 pods was not the entire fleet, and it was not statically set. It was a single cluster at peak time. We have multiple clusters, each with their own traffic patterns and auto-scaling configurations. The total cost was $25k/month when summed as a whole.

    Being slightly defensive here, but that really is not that dramatic a number when you take into account the business requirements to get such a flexible system up and running (with low latency). And yes, it was a cost sink we were aware of, but as others have mentioned, business ROI is just as important as pure dollar cost. It is a core feature that our customers rely on heavily, and changing its base infrastructure was neither trivial nor cost-effective in human-hours. AI completely changed that, and so I took it as a challenge to see how far it could go. gnata was the result.
  • hooverd
    Darn, I'd wished they improved one of the existing Go or Rust implementations.
  • whalesalad
    > The reference implementation is JavaScript, whereas our pipeline is in Go. So for years we’ve been running a fleet of jsonata-js pods on Kubernetes - Node.js processes that our Go services call over RPC.

    > This was costing us ~$300K/year in compute

    Wooof. As soon as that kind of spend hit my radar for this sort of service, I would have given my most autistic and senior engineer a private office and the sole task of eliminating this from the stack.

    At any point did anyone step back and ask if jsonata was the right tool in the first place? I cannot make any judgements here without seeing real-world examples of the rules themselves and the ways they are leveraged. Is this policy language intentionally JSON for portability with other systems, or for editing by end users?
  • anon
    undefined
  • nbevans
    The most baffling thing here is that they allowed a very, very simple JSON expression language to become a $500k/year cost burden on their business. My god. But I am happy that they finally realised their error and put it right.
  • lmaoeven
    $500k for JSON files LOL OK
  • techpression
    As others have said, the title is bollocks. For any mismanaged infrastructure you can make these crazy claims. If they did it today it would be "saved $100/year".

    The thing is, if it took them a day with AI, it would've taken _at most_ a week without it. So why did they wait? Someone is not being responsible with the company funds.
  • themafia
    > then pointed AI at it and had it implement code until every test passed.You used to have two problems. Now you have three.
  • TZubiri
    As long as you are using JSON, you will be able to optimize.

    Did you know that you can pass numbers up to 2 billion in 4 constant bytes instead of as a string averaging 20 dynamic bytes? Also, fun fact: you can cut your packets in half by not repeating the names of your variables in every packet; you can instead use a positional system where ordinality identifies each variable.

    And you can do all of this with pre-AI technology! Neat trick, huh?
  • mads_quist
    I mean, great, but which CTO gave the green light to such a weird architectural choice? Sorry for the rant!
  • leonidasv
    Congrats to the team. Unfortunately, many comments here are missing the big picture by attacking the previous architectural decisions with no context about why they were taken. It's always easy to say so in retrospect.

    Also, I have to comment on the many commenters who spent time researching existing Go implementations just to question everything, because "AI bad". I don't know how much enterprise experience the average HN commenter has these days, but it's not usually easy to simply swap a library in a production system like that, especially when the replacement lib is outdated and unmaintained (which is the case here). I remember a couple of times I was tasked with migrating a core library in a production system only to see everything fall apart in unexpected ways the moment it touched real data. Anyway, the case here seems even simpler: the existing Go libs, apart from being unmaintained and obscure, don't support the current features of JSONata 2.x, which gnata does. Period.

    The article missed anticipating such criticism and explaining this in more detail, so that's my feedback to the authors. But congrats anyway - this is one of the best use cases for current AI coding agents.
  • jgalt212
    These "solutions" place a lot of faith in a "complete" set of test cases. I'm not saying don't do this, but I'd feel more comfortable doing this plus hand-generating a bunch of property tests. And then generating code until all pass. Even better, maybe Claude can generate some / most of the property tests by reading the standard test suite.
  • sublinear
    These articles remind me so much of those old internet debates about "teleportation" and consciousness. Your physical form is destructively read into data, sent via radio signal, and reconstructed on the other end. Is it still you? Did you teleport, or did you die in the fancy paper shredder/fax machine?

    If vibe code is never fully reviewed and edited, then it's not "alive" and is effectively zombie code?