Comments (157)
- cmiles8: There will be many more things like this, and it's an elephant in the room for the supposed mass replacement of people with AI.

  Some human still has to be accountable. Someone has to get fired / go to jail when something screws up. You can make humans more productive, but for the foreseeable future you can't take the human out of the loop and have an AI implementation that's not a disaster/lawsuit waiting to happen. That, probably more than anything else, is why companies just aren't seeing the much-promised mass step change in productivity from AI, and why so many companies are now saying they see zero ROI from AI efforts.

  The lowest-hanging fruit will be low-value rote repetitive tasks like the whole India offshoring industry, which will be the first to vaporize if AI does start replacing humans. But until companies see success with en-masse labor replacement on that lowest of low-hanging fruit, things higher up the value chain will remain relatively safe.

  PS: Nearly every recent mass layoff citing "AI productivity" hasn't withstood scrutiny. They all seem to be poorly performing companies slashing staff after overhiring, with management looking for any excuse other than just admitting that.
- codegladiator:
  > She had no intention to misquote or misrepresent the rulings and that "the mistake occurred solely due to the reliance on an automatic source", the high court wrote

  I don't think intention matters here. It's the same deal with every profession using LLMs to "automate" their work. The onus is on the professional, not the LLM. The Ars Technica case could otherwise have been justified the same way. Not knowing the law isn't an excuse to break the law, so why is not knowing the tool an excuse to blame the tool?
- voidUpdate: How many of these cases do we have to have before lawyers realise that they need to check that the things an LLM tells them are actually true?
- thisislife2: The high court also advocated for the "exercise of actual intelligence over artificial intelligence". Hehe.
- eaglehead: This is going to be a huge problem in every sector. I have been exploring solutions in this space for fintech, and so far what Resemble AI is doing [1] is probably the best way to defend.

  The attack surface for us is not just LLM-generated text; it is also AI-augmented audio (for incoming calls), plus protecting our own voice agents by watermarking them and identifying services that clone our agents' voices.

  It's not fun, as we are constantly playing catch-up.

  [1] https://www.resemble.ai/detect/
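  (For illustration, a minimal sketch of the watermark gate the comment above describes; `extract_watermark` and its result type are hypothetical stand-ins, not Resemble's actual API.)

  ```python
  # Hypothetical check for incoming call audio: before trusting a voice,
  # ask whether it carries the watermark we embed in our own agent voices.
  # extract_watermark() is a stand-in for a real vendor detection SDK.
  from dataclasses import dataclass

  @dataclass
  class WatermarkResult:
      present: bool          # was one of our watermarks detected?
      agent_id: str | None   # which agent voice it appears to be cloned from

  def extract_watermark(audio: bytes) -> WatermarkResult:
      """Stand-in for a vendor detection SDK; not implemented here."""
      raise NotImplementedError

  def screen_incoming_call(audio: bytes) -> str:
      result = extract_watermark(audio)
      if result.present:
          # The audio carries our own watermark: someone is replaying or
          # cloning one of our agent voices back at us.
          return f"reject: cloned agent voice ({result.agent_id})"
      return "pass: no known watermark, continue normal fraud checks"
  ```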
- chrisjj:
  > She had no intention to misquote or misrepresent the rulings and that "the mistake occurred solely due to the reliance on an automatic source"

  Next: gunman pleads the death occurred solely due to reliance on an automatic weapon.
- thisisit: It's sad that it took the highest court in the country to point out the lack of professionalism and misconduct.

  The judge took no personal responsibility.

  > She told the court that this was her first time using an AI tool and she had believed the citations to be "genuine". She had no intention to misquote or misrepresent the rulings and that "the mistake occurred solely due to the reliance on an automatic source", the high court wrote.

  She had one job, and that was to read the citations. Instead of owning up to the mistake of being lazy, all she wanted to talk about was "intentions".

  The high court also took no responsibility.

  > In its order, the high court said that "the citations may be non-existent, but if the learned trial court has considered the correct principles of law and its application to the facts of the case is also correct,

  This line of reasoning is questionable and an attempt to gaslight everyone. Judges cite other cases in their judgements. But if the junior judge had no clue that the references were fake, what correct principles was she applying?

  At the end of the day, maybe the judgement is correct, but this is overall bullshit. Given that this is happening all over the world, people seem to have a convenient excuse: the AI made me do it.
- kaptainscarlet: There will be loads of papers and publications with fake citations. AI will be trained on these. In the end, we'll have more and more hallucinated information than true content on the internet.
- jfengel: I feel like this points out a very general problem with the law: it generates a lot of boilerplate text. Lawyers don't really read it; they skim it for the relevant bits.

  Obviously lawyers should not be cheating with AI, especially when they don't even check it. But it does sound to me as if this is an opportunity to refactor the process. We're carrying forward some ideas originally implemented in Latin, and which can be dramatically simplified.

  I'm not a lawyer; I know this only in passing. And I am aware that there are big differences between law and code. But every time I encounter the law, and hear about cases like this, what I see are vast oceans of text that can surely be made more rigorous. AI is not the problem; it's pointing out the opportunity.
- alansaber: This is a big problem in the US and UK too. Lawyers are not technical at all, and they need a robust system of governance: currently they're directly editing documents with a chatbot (not even diffing), which makes these mistakes inevitable. See https://insights.doughtystreet.co.uk/post/102mi96/38-uk-case...
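  (An illustrative sketch of the "diff before accepting" workflow the comment above alludes to, using only Python's standard difflib; the file names are made up.)

  ```python
  # Sketch: surface an LLM's edits as a reviewable unified diff instead
  # of letting the chatbot overwrite the document in place.
  import difflib

  def review_llm_edit(original: str, llm_edited: str) -> str:
      """Return a unified diff a human must read before accepting the edit."""
      diff = difflib.unified_diff(
          original.splitlines(keepends=True),
          llm_edited.splitlines(keepends=True),
          fromfile="draft_filing.txt",   # hypothetical file names
          tofile="llm_suggested.txt",
      )
      return "".join(diff)

  # Usage sketch:
  #   print(review_llm_edit(open("draft_filing.txt").read(), suggestion))
  ```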
- sathish316: Next-token prediction, with hallucination as a bug. This should be of deep concern to all frontier labs, who seem to think integrity and trust are optional when LLMs are used this way in the places where they matter most.
- tamimio: I wonder how many similar cases are happening in the engineering or software development sector that go unnoticed. It seems no one cares enough; we're just waiting for a disaster to happen before we start seeing regulation preventing the use of AI in the engineering/coding industry.
- chris_wot: In Australia, our universities are finding that a large proportion of Indian students have been using GenAI for cheating. Often they get away with it. I'm not saying that people other than Indian overseas students don't cheat, but it does seem more entrenched. I'd love to know why. It doesn't actually help in the long term!
- zkmon: Andhra is like the Silicon Valley of India. Wouldn't blame the poor judge.
- dartharva: The scary thing is that the Indian judiciary is infamous for being incapable of tolerating any kind of criticism and for not hesitating to put people in jail for "contempt" just for calling out corruption. Imagine the official courts of 1.4B+ people being run by such braindead narcissists, now unhindered by having to even pretend to do their jobs as they offload everything to AI tools.
- ionwake: One should also consider that even in this fake, hallucinated-AI situation, the productivity and correctness of the work produced by the culprit (in general) may still have been of higher quality than before AI, regardless of the failures.
- whatever1: The vast majority of court cases lead to dismissal. Why not use AI to adjudicate cases: if the outcome is dismissal, dismissal it is; if not, the case moves to a proper court. This way the backlog of cases would drop significantly, and we would work only on the cases that have enough meat to lead to a conviction.
- arunc:
  > Senior judges at the Supreme Court in Delhi have threatened consequences over the use of AI

  Setting AI aside for a moment, this reflects a broader issue in India and elsewhere. When institutions respond to new technologies with anger or threats rather than systemic thinking, it signals a deeper problem.

  The real challenge is not AI itself, but how complex systems adapt to change. Instead of reacting defensively, institutions should anticipate second-order effects, build regulatory capacity, and treat this as a governance and systems problem.

  Mature institutions approach disruption with foresight, incentives, and feedback loops, not emotions. Without that shift, they risk reinforcing outdated hierarchies rather than serving the public effectively.
- gracelynewhouse: The pattern here isn't really about individual negligence; it's a systems design problem. We keep deploying LLMs into workflows where the failure mode is "plausible-sounding fabrication" and the downstream consequence is legal or institutional harm, then blaming the end user for not catching it.

  The better question is why these tools are being integrated into judicial workflows without mandatory citation verification layers. The EU AI Act classifies judicial AI as high-risk and requires human oversight mechanisms specifically for this reason. India's Digital Personal Data Protection Act (2023) doesn't yet have equivalent provisions for AI in courts, which is the actual gap.

  From an engineering standpoint, the fix is straightforward: any LLM-assisted legal research tool should require grounded retrieval (RAG against verified case law databases) with mandatory source links that the user must click through before citing. The fact that most legal AI tools still don't enforce this is a product design failure, not a user education problem.
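  (For illustration, a minimal sketch of the mandatory citation-verification layer the comment above describes; `lookup` is a hypothetical interface to a vetted case-law database, not any specific product's API.)

  ```python
  # Sketch of a citation gate: every citation the LLM proposes must
  # resolve against a verified case-law database before the draft is
  # allowed through. lookup() is a hypothetical interface.
  from typing import Callable, Optional

  class UnverifiedCitationError(Exception):
      pass

  def gate_citations(
      draft: str,
      citations: list[str],
      lookup: Callable[[str], Optional[dict]],
  ) -> str:
      """Refuse to emit the draft unless every cited case resolves to a record."""
      unverified = [c for c in citations if lookup(c) is None]
      if unverified:
          raise UnverifiedCitationError(
              f"refusing to emit draft; unverified citations: {unverified}"
          )
      return draft

  # Usage sketch: lookup would query the vetted database and return a
  # record (including the source link the user must open) or None.
  ```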