Comments (57)

  • HarHarVeryFunny
    > Across studies, participants with higher trust in AI and lower need for cognition and fluid intelligence showed greater surrender to System 3

    So the smart get smarter and the dumb get dumber?

    Well, not exactly, but at least for now, with AI "highly jagged" and unreliable, it pays to know enough NOT to trust it, and indeed to be mentally capable enough that you don't need to surrender to it and can spot the failures.

    I think the potential problems come later, when AI is more capable/reliable, and even the intelligentsia perhaps stop questioning its output, and stop exercising/developing their own reasoning skills. Maybe AI accelerates us towards some version of "Idiocracy" where human intelligence is even less relevant to evolutionary success (i.e. having/supporting lots of kids) than it is today, and gets bred out of the human species? Maybe this is the inevitable trajectory: a species gets smarter when it develops language and tool creation, then peaks, and gets dumber after having created tools that do the thinking for it?

    Pre-AI, a long time ago, I used to think/joke we might go in the other direction - evolve into a pulsating brain, eyes, genitalia and vestigial limbs, as mental work took over from physical - but maybe I got that reversed!
  • thr0waway001
    AI reminds me of listening to any person on YouTube who seems like an intellectual authority on multiple subjects and is not afraid to wax confidently on any topic. They seem very intelligent and knowledgeable until they actually talk about something you know.

    In other words, I try to learn from it whenever it does something I can't do, but when it does something I can do, or something I'm really good at, I find myself wanting to correct it because it doesn't do it that well.

    It just seems like a really quick-thinking, fast-executing but, ultimately, mid-skilled / novice person.
  • vicchenai
    I've noticed this in my own work with financial data. I used to manually sanity-check numbers from SEC filings and catch weird stuff all the time. Started leaning on LLMs to parse them faster and realized after a few weeks I was just... accepting whatever came back without thinking about it. Had to consciously force myself to go back to spot-checking.

    The "System 3" framing is interesting, but I think what's really happening is more like cognitive autopilot. We're not gaining a new reasoning system, we're just offloading the old ones and not noticing.
  • woopsn
    In the technophile's future people aren't just getting dumber, not wanting to think or forgetting how - they aren't allowed to think. Maybe about anything. It's too big a liability, costs too much to support, and moreover detracts from the product. Like Sam A telling those Indian students they aren't worth the energy and water. That's what we're dealing with.
  • gmuslera
    The main problem with "System 3" is that it has its own kind of "cognitive biases", like System 1, but these new cognitive biases are designed by marketing, politics, culture and whatever censors or makes visible the original training data - and that's even if the process, the processing and everything else around it were perfect (which they are not, e.g. hallucinations).

    But we still have System 1, and we survived and reached this stage because of it, because even a bad guess is better than the slowness of doing things right. It has its problems, but sometimes you must reach a compromise.
  • Ozzie_osman
    When humans have an easy way to do something that is almost as good, we choose that easy way. Call it laziness, energy conservation, coddling, etc. The hard thing then becomes hard to do even when the easy thing isn't available, because the cognitive muscle and the discipline atrophy.

    Like kids who are never taught to do things for themselves.
  • meander_water
    I'm conflicted about this. As I was reading the paper, my AI-detector senses were tingling all over the place.

    Large parts of the paper score a very high probability of being written entirely by AI in GPTZero. I'm not sure I could trust anything written in it.
  • kikkupico
    Contrary to the general opinion, I feel that AI has IMPROVED my cognitive skills. I find myself discovering solutions to problems I've always struggled with (without asking AI about it, of course). I also find myself becoming much better at thinking on my feet during regular conversations. I believe I'm spending more time deep thinking than ever before because I can leave the boring cognitive stuff to AI, and that's giving my mind tougher workouts and making it stronger; but I could be completely wrong.
  • danilor
    I couldn't figure out if this was published in a journal, or only on a pre-print server?
  • nasretdinov
    I mean... I don't really check calculations made by a computer (e.g. by my own programs) all that often either, and I think I'm completely fine :). But I guess the difference is that we kind of know how computers work and that they're generally super accurate and make mistakes incredibly rarely. The "AI" (although I disagree with the "I" part) is wrong incredibly often, and I don't think people appreciate that the difference to the "traditional" approach isn't just significant, it's astronomical: LLMs make things up at least 5% of the time, whereas CPUs make mistakes maybe (10^-12)% of the time or less. It's 12 orders of magnitude or so.
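    Quick back-of-the-envelope in Python, taking those (admittedly hand-wavy) rates at face value:

        import math

        llm_error = 0.05          # assumed: an LLM makes something up ~5% of the time
        cpu_error = 1e-12 / 100   # assumed: a CPU errs (10^-12)% of the time, as a fraction

        print(math.log10(llm_error / cpu_error))  # ~12.7, i.e. roughly 12-13 orders of magnitude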
  • pink_eye
    Can it design and implement a plutonium electric fuel cell with a 24,000 year half life? We have yet to witness it. Can it automate Farming and Agriculture? These are the real questions. #Born-Crusty
  • andai
    Damn. I came up with a hypothetical "System 3" last year! I didn't find AI very helpful in that regard though.

    Current status: partially solved.

    Problem: System 2 is supposed to be rational, but I found this to be far from the case. Massive unnecessary suffering.

    Solution (WIP): Ask: What is the goal? What are my assumptions? Is there anything I am missing?

    --

    So, I repeatedly found myself getting into lots of trouble due to unquestioned assumptions. System 2 is supposed to be rational, but I found this to be far from the case.

    So I tried inventing an "actually rational system" that I could "operate manually", or with a little help. I called it System 3, a system where you use a Thinking Tool to help you think more effectively.

    The initial attempt was a "rational LLM prompt", but these mostly devolve into unhelpful nitpicking. (Maybe it's solvable, but I didn't get very far.)

    Then I realized, wouldn't you get better results with a bunch of questions on pen and paper? Guided writing exercises?

    So here are my attempts so far (rough sketch of the idea at the end of this comment):

    reflect.py - https://gist.github.com/a-n-d-a-i/d54bc03b0ceeb06b4cd61ed173...

    unstuck.py - https://gist.github.com/a-n-d-a-i/d54bc03b0ceeb06b4cd61ed173...

    --

    I'm not sure what's a good way to get yourself "out of a rut" in terms of thinking about a problem. It seems like the longer you've thought about something, the less likely you are to explore beyond the confines of the "known" (i.e. your probably dodgy/incomplete assumptions).

    I haven't solved System 3 yet, but a few months later I found myself in an even more harrowing situation which could have been avoided if I'd had a System 3.

    The solution turned out to be trivial, but I missed it for weeks... In this case, I had incorrectly named the project, and thus doomed it to limbo. Turns out naming things is just as important in real life as it is in programming!

    So I joked "if being pedantic didn't solve the problem, you weren't being pedantic enough." But it's not a joke! It's about clear thinking. (The negative aspect of pedantry is inappropriate communication. But the positive aspect is "seeing the situation clearly", which is obviously the part you want to keep!)
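    To give a concrete idea of what the Thinking Tool looks like, here is a heavily simplified sketch in the spirit of reflect.py (illustrative only, not the actual gist code; the questions are the ones from "Solution (WIP)" above):

        #!/usr/bin/env python3
        # Guided writing exercise: ask a fixed set of questions, collect
        # free-form answers, and append them to a dated notes file.
        from datetime import date
        from pathlib import Path

        QUESTIONS = [
            "What is the goal?",
            "What are my assumptions?",
            "Is there anything I am missing?",
        ]

        def main() -> None:
            # Prompt each question in turn and capture a free-form answer.
            answers = [(q, input(f"\n{q}\n> ")) for q in QUESTIONS]

            # Append today's reflection to a dated plain-text file.
            out = Path(f"reflection-{date.today()}.txt")
            with out.open("a", encoding="utf-8") as f:
                for q, a in answers:
                    f.write(f"{q}\n{a}\n\n")
            print(f"\nSaved to {out}")

        if __name__ == "__main__":
            main()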
  • bjourne
    "Time pressure (Study 2) and per-item incentives and feedback (Study 3) shifted baseline performance but did not eliminate this pattern: when accurate, AI buffered time-pressure costs and amplified incentive gains; when faulty, it consistently reduced accuracy regardless of situational moderators."I LOLed.
  • johnnymonster
    Blocking access to a site because you don't enable JavaScript is diabolical.
  • deevelton
    I've been curious what it could look like (and whether it might be an interesting new type of "post" people make) if readers could see the human prompts, pivots, and steering of the LLM inline within the final polished AI output.