Comments (53)
- jadenpeterson: Not to be a luddite, but large language models are fundamentally not suited to tasks of this nature. And listen to this:
  > Most notably, it provides confidence levels in its findings, which Cheeseman emphasizes is crucial.
  These 'confidence levels' are suspect. You can ask Claude today, "What is your confidence in __?" and it will, unsurprisingly, give you a 'confidence interval'. I'd like to better understand the system Cheeseman implemented. Otherwise I find the whole thing, heh, cheesy!
- alsetmusic: Call me when a disinterested third party says so. PR announcements by the very people who have a large stake in our belief in their product are unreliable.
- falloutx: Of course this comes from Anthropic PR. Stanford basically has a stake in making LLMs and AI hype, so no wonder they are the most receptive.
- username223: Pairs well with this: https://hegemon.substack.com/p/the-age-of-academic-slop-is-u...
  Taking CV-filler from 80% to 95% of published academic work is yet another revolutionary breakthrough on the road to superintelligence.
- LegitShady: oh look, another advertisement for Anthropic
- desireco42: By paying Anthropic large sums of money?!? Funny you say that.