
Comments (83)

  • btown
    > The penalty is a 1-year ban from arXiv followed by the requirement that subsequent arXiv submissions must first be accepted at a reputable peer-reviewed venue.

    This is incredibly good for science. arXiv is free, but it's a privilege, not a right!

    I'm not seeing this clearly listed on https://info.arxiv.org/help/policies/index.html, so it's possible this is planned but not live yet - or perhaps I'm not digging deeply enough?

    As a certain doctor once said: the whole point of the doomsday machine is lost if you keep it a secret!
  • rgmerk
    Good. If it’s not worth your time to check the output of your LLM carefully, it’s not worth my time to read it.
  • MinimalAction
    There needs to be careful vetting before such adverse actions. If somebody includes a name on a paper and pushes it out without that person's express permission, does everyone listed get the ban? I agree that, implemented the right way, this is good.
  • noobermin
    Seeing the usual LLM hypers angrily replying to this on twitter is such a tell. Just like the comments on the LLM poisoning articles, some people just can't accept that others don't like LLMs, and they get upset when you put any amount of hindrance in the way of rapid adoption.
  • nullc
    It's been pretty eye-opening watching Craig Wright (of bitcoin fakery fame) flood out LLM-generated 'academic' papers and even have some of them accepted. He'd be toast if SSRN were to adopt a similar policy.
  • bigfishrunning
    Good; academic literature is in crisis because of all of the slop. Forcing some consequences on easily-detectable hallucinations can only be a good thing
  • squirrelon
    Had a colleague submit a paper with literal AI slop left in the text, got hit with a nasty revision request. Check your drafts before you submit, people. The reviewers will find it.
  • jszymborski
    Should be more harsh in my opinion.
  • hyunwoo222
    [flagged]
  • random3
    It seems a good idea to ban cheating, but how hard is it, especially in new reasoning/agent contexts, to validate references?

    The deeper question is whether legitimate AI-generated results are allowed or not. As a test, in the extreme: would an autonomously generated (end-to-end), formally verified proof of the Riemann Hypothesis be allowed or not?