Comments (153)
- roadbuster:
  > The Claude C Compiler illustrates the other side: it optimizes for passing tests, not for correctness. It hard-codes values to satisfy the test suite. It will not generalize.

  This is one of the pain points I am suffering at work: workers ask coding agents to generate some code, and then to generate test coverage for that code. The LLM happily churns out unit tests which simply reinforce the existing behaviour of the code. At no point does anyone stop and ask whether the generated code implements the desired functional behaviour for the system ("business logic").

  The icing on the cake is that LLMs are producing so much code that humans are just rubber-stamping all of it. Off to merge and build it goes.

  I have no constructive recommendations; I feel the industry will keep its foot on the pedal until something catastrophic happens.
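The failure mode this comment describes can be sketched in a few lines. Everything here is hypothetical (the `apply_discount` function and its test are invented for illustration): a generated "coverage" test simply records the code's current output, bug included.

```python
def apply_discount(price_cents, percent):
    # Intended rule: subtract percent, rounding to the nearest cent.
    # Actual behaviour: truncating integer division silently drops the fraction.
    return price_cents - price_cents * percent // 100

# The kind of unit test an agent emits when asked for "test coverage":
# it asserts whatever the code does today, so the bug is now locked in.
def test_apply_discount():
    assert apply_discount(999, 10) == 900  # the intended rounding rule gives 899

test_apply_discount()  # passes: the suite now certifies the bug
```

The test suite is green, yet no one has checked the assertion against the business rule it was supposed to encode.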
- maltalex:
  Maybe I'm missing something, but isn't this the same as writing code, but with extra steps?

  Currently, engineers work with loose specifications, which they translate into code. With the proposed approach, they would need to first convert those specifications into a formally verifiable form before using LLMs to generate the implementation.

  But to be production-ready, that spec would have to cover all possible use cases, edge cases, error handling, performance targets, security and privacy controls, etc. That sounds awfully close to being an actual implementation, only in a different language.
- midtake:
  I do. I verify it so hard I've begun to mistrust it lately, after seeing Gemini make glaring conceptual mistakes and refuse to align to my corrections.
- madrox:
  I encourage everyone to RTFA and not just respond to the headline. This really is a glimpse into where the future is going.

  I've been saying "the last job to be automated will be QA," and it feels more true every day. It's one thing to be a product engineer in this era. It's another to be working at the level the author is, where code needs to be verifiable. However, once people stop vibing apps and start vibing kernels, it really does fundamentally change the game.

  I also have another saying: "any sufficiently advanced agent is indistinguishable from a DSL." I hadn't considered Lean in this equation, but putting these two ideas together, I feel like we're approaching a world where Lean eats the entire agentic framework stack and the operating system disappears.

  If you're thinking about building something today that will still be relevant in 10 years, this is insightful.
- muraiki:
  The article says that AWS's Cedar authorization policy engine is written in Lean, but it's actually written in Dafny. Writing Dafny is a lot closer to writing "normal" code than to the proofs you see in Lean. As a non-mathematician I gave up pretty early in the Lean tutorial, while in a recent prototype I learned enough Dafny to be semi-confident reviewing Claude's Dafny code in about half a day.

  The Dafny code formed a security kernel at the core of a service, enforcing invariants such as: an audit log entry must always be written before a mutating operation is performed. Of course I still had bugs, usually from specification problems (poor spec/design) or from Claude not taking the proof far enough (proving a property for only one of a number of related types, which could also have been a specification problem on my part).

  In the end I realized I'm writing a bunch of I/O-bound glue code, and plain ol' test-driven development was fine enough for my threat model. I can review Python code more quickly and accurately than Dafny (or the Go code it eventually had to link to), so I'm back to optimizing for humans again...
- wyum:
  I believe there is a Verification Complexity Barrier.

  As you add components to a system, the time it takes to verify that the components work together increases superlinearly. At a certain point, the verification complexity takes off: you literally run out of time to verify everything.

  AI coding agents hit this barrier faster than ever, because of how quickly they can generate components (and how poorly they manage complexity).

  I think verification is now the central problem of agentic software engineering. I think formal methods will help, but I don't see how they apply to messy situations like end-to-end UI testing or interactions between the system and the real world.

  I posted more detailed thoughts on X: https://x.com/i/status/2027771813346820349
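A toy illustration of the superlinear growth this comment claims, under the conservative assumption that only pairwise component interactions need checking (real systems also have higher-order interactions, which grow faster still):

```python
def interaction_pairs(n):
    # Distinct pairs of components that could interact: C(n, 2) = n(n-1)/2
    return n * (n - 1) // 2

# Doubling the component count roughly quadruples the pairs to check,
# so a fixed verification budget per pair eventually cannot keep up.
for n in (10, 20, 40):
    print(n, interaction_pairs(n))  # 10 -> 45, 20 -> 190, 40 -> 780
```

An agent that doubles the rate of component generation therefore roughly quadruples the verification workload, which is the barrier being described.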
- _pdp_:
  I think the issue goes even deeper than verification. Verification is technically possible. You could, in theory, build a C compiler or a browser and use existing tests to confirm it works.

  The harder problem is discovery: how do you build something entirely new, something that has no existing test suite to validate against?

  Verification works because someone has already defined what "correct" looks like. There is possibly a spec, or a reference implementation, or a set of expected behaviours. The system just has to match them.

  But truly novel creation has no ground truth to compare against and no predefined finish line. You are not just solving a problem. You are figuring out what the problem even is.
- vicchenai:
  The verification problem scales poorly with AI complexity. Current approaches rely on test suites, but AI-generated code tends to optimize for passing existing tests rather than for correctness in the general case.

  What's interesting is that this might be the forcing function that finally brings formal verification into mainstream use. Tools like Lean and Coq have been technically impressive but adoption-starved. If unverified AI code is too risky to deploy in critical systems, organizations may have no choice but to invest in formal specs. AI writes the software; proof assistants verify it.

  The irony: AI-generated code may be what makes formal methods economically viable.
- turlockmike:
  This is where you can use adversarial systems:

  1. One agent writes a minimal test against a spec.
  2. Another agent writes the minimal implementation that makes the test pass, and nothing more.

  Repeat. This is ping-pong pair programming.
- faitswulff:
  Someone actually did fuzz the Claude C compiler: https://john.regehr.org/writing/claude_c_compiler.html
- lateforwork:
  The first thing you should have AI write is a comprehensive test suite. Then have it implement the main functionality. If the tests pass, that is one level of verification.

  In addition, you can have one AI check another AI's code. I routinely copy/paste code from Claude into ChatGPT and Gemini and have them check each other's code. This works very well. During the process I verify the code with my own eyes as well.
- pnathan:
  I am experimenting, at a very early stage, with using Verus to generate provably correct Rust. I let the AI bang on the proof and trust the proof assistant to confirm it.

  There is another route via Lean, where the Rust generates Lean and the proof is done there, but I haven't chased that down fully.

  I think formal verification is a big win in the LLM era.
- bfung:
  When humans write the software, who verifies it?

  Half sarcasm, half real talk. TDD is nice, but human coders barely do it. At least AI can do it more!
- dataviz1000:
  100% of my innovation for the past month has been getting the coding agent to iterate with an OODA loop (I make it validate after the act step), trying to figure out how to get it to not stop iterating.

  For example, I have discovered there is a big difference between prompting "there is a look-ahead bias" and "there is a [T+1] look-ahead bias," where the latter will cause it to not stop until it finds the [T+1] look-ahead bias. It will start to write scripts that `.shift(1)` all values and do statistical analysis on the result set, trying to find the look-ahead bias.

  Now, I know there isn't a look-ahead bias, but the point is that I was able to get it to iterate automatically, trying different approaches to solve the problem.

  The software is going to verify itself eventually, sooner rather than later.
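The `.shift(1)` probe mentioned here can be sketched in plain Python (the series and feature names are invented for illustration): lag every feature by one step and check whether a "predictive" signal was really just reading the same bar it claims to predict.

```python
def shift(xs, n=1):
    # Lag a series by n steps, padding the front with None
    return [None] * n + xs[:-n]

target = [0.01, -0.02, 0.03, 0.01, -0.01]   # returns at time T
leaky_feature = target[:]                    # accidentally built from the same bar

lagged = shift(leaky_feature)

# A leaking feature aligns with the target perfectly at lag 0, then stops
# aligning once shifted by one step -- the signature the probe looks for.
matches_unlagged = sum(f == t for f, t in zip(leaky_feature, target))
matches_lagged = sum(f == t for f, t in zip(lagged, target) if f is not None)
print(matches_unlagged, matches_lagged)  # 5 0
```

A feature that is genuinely predictive keeps some (imperfect) relationship with the target after lagging; one that collapses from perfect alignment to none was peeking ahead.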
- chromaton:
  TFA seems to be big on mathematical proof of correctness, but how do you ever know you're proving the right thing?
- ljlolel:
  The users verify and fix it on the fly with the Claude VM JIT: https://jperla.com/blog/claude-is-a-jit
- oakpond:
  You do. Even the latest models still frequently write really weird code. The problem is that some developers now submit code for review that they didn't bother to read. You can tell. Code review is more important than ever, imho.
- shubhamintech:
  Formal verification gets you to deployment with confidence, but it's still a snapshot. What happens when real-world inputs drift from what you tested against? The subtler problem is runtime behavioral drift: an agent that's technically correct but consistently misunderstands a whole class of user queries is invisible to any pre-deploy check. Pre-deploy and post-deploy verification are genuinely different problems.
- boznz:
  This is the biggest problem going forward. I have written about it many times on my blog, in talks, and as premises in my sci-fi novels.

  Sitting in your cubicle with your perfect set of test suites, code verification rules, SOPs, and code reviews, you won't want to hear this, but other companies will be gunning for your market: writing software almost identical to yours from a series of prompts, fast, cheap, functionally equivalent, and quite possibly untested.

  As AIs get more proficient and are given more autonomy (OpenClaw++), they will also generate directly executable binaries, completely replacing the compiler and making the output unreadable to a normal human; they may even do this without prompts.

  The scenario is terrifying to professional software developers, but other people will do this regardless of what you think, and run it in production, and I expect we are months, or at most a few years, away from it.

  Source code of the future will be the complete series of prompts used to generate the software, another AI to verify it, and an extensive test suite.
- p0u4a:
  Someone once told me agentic coding may lead to something akin to a "software engineering union" forming, with a set of guidelines controlling code quality. Namely: at least one of writing, testing, and reviewing the code must be done by a human.
- Ronsenshi:
  A bit unrelated to the article; more of a commentary on how many engineers at this point barely write any code or even do code review.

  It seems to me like a huge number of engineers/developers in these comments are turning into Tom Smykowski from Office Space. Remember that guy? His job was to be a liaison between customers and engineers because he had "people skills":

  > "I deal with the god damn customers so the engineers don't have to. I have people skills; I am good at dealing with people. Can't you understand that? What the hell is wrong with you people?"

  Except now, based on the comments here, some engineers are passing instructions from customers to AI because they have "AI skills," while the AI does the coding, helps with spec clarification, reviews code, and writes tests.

  That's scary and depressing. This field will be unrecognizable in a few years.
- 50lo:
  One thing that seems under-discussed in this space is the shift from verifying programs to verifying generation processes.

  If a piece of code is produced by an agent loop (prompt -> tool calls -> edits -> tests), the real artifact isn't just the final code but the trace/pipeline that produced it.

  In that sense, verification might look closer to: checking constraints on the generator (tests/specs/contracts), verifying the toolchain used by the agent, and replaying generation under controlled inputs.

  That feels closer to build reproducibility or supply-chain verification than to traditional program proofs.
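The reproducible-build analogy can be made concrete with a minimal sketch (the trace fields are invented; a real pipeline would record far more): treat the generation trace as the artifact, canonicalize it, and verify a replay by comparing digests.

```python
import hashlib
import json

def trace_digest(trace):
    # Canonical JSON (sorted keys) so the digest is stable across runs
    blob = json.dumps(trace, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

trace = {
    "prompt": "implement zlib decode",
    "generator": "model-x@2025-01",          # pinned model/toolchain version
    "tool_calls": ["read spec.md", "run tests"],
    "output_sha": "ab12cd34",                # hash of the produced code
}

replay = dict(trace)  # a faithful replay under controlled inputs
assert trace_digest(trace) == trace_digest(replay)

tampered = {**trace, "generator": "model-y@2025-02"}
assert trace_digest(tampered) != trace_digest(trace)  # any drift is visible
```

This verifies the process, not the program: it tells you the code came from a pinned generator run under known inputs, which is exactly the supply-chain-style guarantee the comment describes.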
- mkoubaa:
  PMs have been asking the same question about software developers for decades.
- ozten:
  This is really great and important progress, but Lean is still an island floating in space: too hard to get actual work done building any real-world system.
- holtkam2:
  At the end of the day you need humans who understand the business-critical (or safety-critical) systems that underpin the enterprise. Someone needs to be held accountable when things go wrong. Someone needs to be able to explain to the CEO why this or that is impossible.

  If you want AI to generate all the code for your business-critical software, fine, but you had better make sure you understand it well. Sometimes the fastest path to deep understanding is just coding things out yourself; so be it.

  This is why the truly critical software doesn't get developed much faster when AI tools are introduced. The bottleneck isn't how fast the code can be created; it's how fast humans can construct their understanding before they put their careers on the line by deploying it.

  Of course, this doesn't apply to prototypes, hackathons, POCs, etc. For those "low stakes" projects, vibe code away, if you wish.
- csense:
  It seems like sound testing methodology to identify important theorems related to the code, prove them, and then verify the proof. But verification gets sold as "bulletproof," and I'm skeptical for a couple of reasons:

  - How do you establish the relationship between the code and the theorem? A Lean theorem can be applied to zlib implemented in Lean, but what if you want to check zlib implemented in a normal programming language like C, JS, Zig, or whatever?

  - How do you know the key properties mean what you think they mean? E.g. the theorem says "ZlibDecode.decompressSingle (ZlibEncode.compress data level) = .ok data", but it feels like it would be very easy to accidentally prove ∃ x s.t. decompress(compress(x)) == x while thinking you proved ∀ x, decompress(compress(x)) == x.

  I've tried Lean and Coq and... I don't really like them. The proofs use specialized languages, and they seem deliberately designed to require a context explorer to have any hope of understanding the proof at all. OTOH, a normal unit test is written in a general-purpose programming language (usually the same one as the program being tested). I'm much more comfortable checking that a Claude-written unit test does what I think it's doing than a Claude-written Lean proof of correctness.
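The ∃-versus-∀ worry can be made visible in Lean 4 itself. This is a hypothetical sketch: `compress` and `decompress` are invented stand-ins for the real zlib API, and `sorry` marks the unproved bodies; only the quantifier shape matters.

```lean
-- Hypothetical stand-ins for the real API
axiom compress : ByteArray → Nat → ByteArray
axiom decompress : ByteArray → Except String ByteArray

-- What you want proved: the round trip works for *every* input.
theorem roundtrip_forall :
    ∀ (data : ByteArray) (level : Nat),
      decompress (compress data level) = .ok data := sorry

-- What is dangerously easy to prove instead: it works for *some* input.
-- One convenient witness (say, the empty array) closes this goal.
theorem roundtrip_exists :
    ∃ (data : ByteArray) (level : Nat),
      decompress (compress data level) = .ok data := sorry
```

The two statements differ by a single leading quantifier, which is exactly why reviewing the statement of a machine-checked theorem matters as much as checking that the proof compiles.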
- mentalgear:
  > Where this leads is clear. Layer by layer, the critical software stack will be reconstructed with mathematical proofs built in. The question is not whether this happens, but when.
- anonhacker199:
  The biggest issue right now is that most AI tools aren't hooked up appropriately to an environment they can test in (typically Chrome). Replit works extremely well because it has an integrated browser and testing strategy. AI works very well when it has the ability to check its own work.
- bryanlarsen:
  You can use AI to make a reviewer's job much easier: add documents, divide your MR into reviewable chunks, et cetera. If reviewing is the expensive part now, optimize for reviewability.
- heftykoo:
  Another AI, obviously. And then a third AI to monitor the first two for conflicts of interest.

  Jokes aside, this is exactly the era where formal verification (like TLA+ or Lean, per the other post on the front page) actually makes commercial sense. If the code is generated, the only human output of value is the spec. We are moving from writing logic to writing constraints.
- phyzome:
  Verify? It seems like no one is even reviewing this stuff.
- sslayer:
  State-sponsored hackers' AI will verify it.
- yoaviram:
  I just finished writing a post about exactly this. Software development, as the act of manually producing code, is dying. A new discipline is being born, one much closer to proper engineering. Like an engineer overseeing the construction of a bridge, the job is not to lay bricks; it is to ensure the structure does not collapse.

  The marginal cost of code is collapsing. That single fact changes everything.

  https://nonstructured.com/zen-of-ai-coding/
- indymike:
  Because of the scale of generated code, it is often the AI verifying the AI's work.
- simonw:
  The "Nearly half of AI-generated code fails basic security tests" link provided in this piece is not credible, in my opinion. It's a very thinly backed vendor report from a company selling security-scanning software.
- acedTrex:
  No one does currently, and it's going to take a few very painful and high-profile failures of vital systems for this industry to RELEARN its lesson about the price of speed. In fact, it will probably need to happen a few times PER org for the dust to settle. It will take several years.
- nemo44x:
  I believe the old ways, which agile destroyed, will come back, because implementation isn't the hardest part anymore. Agile recognized, correctly, that implementation was the hard part to predict, and that specification through requirements docs, UML, waterfall, etc. was out of date by the time the code was cooked.

  I don't think we'll get those exact things back, but we will see more specification and design than we do today.
- slopinthebag:
  LLM-generated code combined with formal verification just feels like we're entering the most ridiculous timeline. We know formal verification doesn't work at scale; hence we don't use it. We might get fully vibe-coded systems, but we sure as hell won't be able to verify them.

  The collapse of civilisation is real.
- lgl:
  I'm in the process of building v2.0 of my app using Opus 4.6 and largely agree with this. It's pretty awesome but still does a lot of basic idiotic stuff.

  I was implementing a feature that required a global keyboard shortcut and asked Opus to define it, taking into account not clashing with common shortcuts. It built a field where only one modifier key was required. After I pointed out that this was not safe, since users could just define Ctrl+C as the shortcut, and that we needed to require at least two modifier keys, I got the usual "you're absolutely right" and it proceeded to require two modifier keys. But then it also created a huge blacklist of common single-modifier shortcuts: copy, cut, paste, print, select all, etc. Once I mentioned that the blacklist is useless since we're already forcing two modifier keys, it said I'm right again and fixed it.

  The counterpoint to this idiocy is that it's very good overall at a lot of what is (in my mind) much more complicated stuff. It's a .NET app, and for things like creating models, viewmodels, and usercontrols, and setting up the entire hosting DI with pretty much all the .NET best practices, it does pretty awesomely.

  tl;dr: training wheels are still mandatory, imho.
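The rule this commenter wanted fits in a few lines. This is a hypothetical sketch in Python rather than the app's actual .NET code: requiring two modifiers already excludes every single-modifier system shortcut, so no blacklist is needed.

```python
MODIFIERS = {"ctrl", "alt", "shift", "win"}

def is_valid_shortcut(keys):
    """A global shortcut needs at least two modifiers plus exactly one normal key."""
    keys = {k.lower() for k in keys}
    modifiers = keys & MODIFIERS
    normal = keys - MODIFIERS
    return len(modifiers) >= 2 and len(normal) == 1

assert not is_valid_shortcut({"Ctrl", "C"})        # clashes with copy: rejected
assert not is_valid_shortcut({"Ctrl", "Shift"})    # no normal key: rejected
assert is_valid_shortcut({"Ctrl", "Shift", "K"})   # two modifiers: accepted
```

The structural constraint subsumes the blacklist, which is exactly the redundancy the model failed to notice until prompted.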
- bitwize:
  Also AI.
- righthand:
  No one, really. Code is for humans to read and for machines to compile and execute. LLMs are enabling people to just write the code and not have anyone read it. It's solving a problem that didn't really exist (we already had code generators before LLMs).

  It's such an intoxicating copyright-abuse slot machine that a buddy who is building an OCaml+htmx tree editor told me, "I always get stuck and end up going to the LLM to generate code, usually when I get to the HTML part." I asked if he had used a debugger before that; he said, "That's a good idea."
- bdcravens:
  The same ones who verify it when I write it: my customers in production! /s (well, maybe /s)
- furyofantares:
  (Answering the title.) The lusers.
- foolfoolz:
  No one wants to believe this, but there will soon be a point when an AI code review meets your compliance requirements to go to production. Is that 2026? No. But it will come.
- drivebyhooting:
  That was a prolix and meandering essay. Next time I'd rather just look at the prompts and hand edits that went into writing it rather than the final artifact; much like reviewing the documentation, spec, and proof over the generated code, as extolled by the article.
- rademaker:
  In his latest essay, Leonardo de Moura makes a compelling case that if AI is going to write a significant portion of the world's software, then verification must scale alongside generation. Testing and code review were never sufficient guarantees, even for human-written systems; with AI accelerating output, they become fundamentally inadequate. Leo argues that the only sustainable path forward is machine-checked formal verification: shifting effort from debugging to precise specification, and from informal reasoning to mathematical proof checked by a small, auditable kernel. This is precisely the vision behind Lean, a platform where programs and proofs coexist, enabling AI not just to generate code, but to generate code with correctness guarantees. Rather than slowing development, Lean-style verification enables trustworthy automation at scale.