<- Back
Comments (139)
- bonoboTPTo be clear, as the article says, these authors were offered a choice and agreed to be on the "no LLMs allowed" policy.And detection was not done with some snake oil "AI detector" but by invisible prompt injection in the paper pdf, instructing LLMs to put TWO long phrases into the review. They then detected LLM use through checking if both phrases appear in the review.This did not detect grammar checks and touchups of an independently written review. The phrases would only get included if the reviewer fed the pdf to the LLM in clear violation to their chosen policy.> After a selection process, in which reviewers got to choose which policy they would like to operate under, they were assigned to either Policy A or Policy B. In the end, based on author demands and reviewer signups, the only reviewers who were assigned to Policy A (no LLMs) were those who explicitly selected “Policy A” or “I am okay with either [Policy] A or B.” To be clear, no reviewer who strongly preferred Policy B was assigned to Policy A.
- mijoharasOne thing to note.They were quite conservative in their approach, so the only things that were rejected were from people who had agreed not to use an LLM and almost definitely did use an LLM (since they fed hidden watermarked instructions to the llm's).This means the true number of people that used LLM's in their review (even in group A that had agreed not to) is likely higher.Also worth noting, 10% of these authors used them in more than half of their reviews.
- hodgehog11I'm amazed that such a simple method of detection worked so flawlessly for so many people. This would not work for those who merely used LLMs to help pinpoint strengths and weaknesses in the paper; there are separate techniques to judge that. Instead, it only detects those who quite literally copied and pasted the LLM output as a review.It's incredible how so many people thought it was fair that their paper should be assessed by human reviewers alone, and yet would not extend the same courtesy to others.
- grey-areaInteresting, so someone submitting a paper for review could also submit one with hidden instructions for LLMs to summarise or review it in a very positive light.Given this detection method works so well in the use case of feeding reviewing LLMs instructions, it should also work for the original submitted paper itself, as long as it was passed along with its watermark intact. Even those just using LLMs to summarise could easily be affected if LLMs were instructed to generate very positive summaries.So the 2% cheaters on policy A, AND 100% of policy B reviewers could fall for this and be subtly guided by the LLMs overly-positive summaries or even complete very positive reviews (based on hidden instructions).That this sort of adversarial attack works is really quite troubling for those using LLMs to help them understand texts, because it would work even if asked to summarise something.
- sampoTook me a while understand. So, the same person has both submitted their research article to the conference, and also acted as a reviewer for articles submitted by other people.And if they in their review work have agreed to a "no LLM use" policy, but got exposed using LLMs anyway, then their submitted research article is desk rejected. Theoretically, someone could have submitted a stellar research article, but because they didn't follow agreed policy when reviewing other people's work, then also their research contribution is not welcome.(At first I understood that innocent author's articles would have been rejected just because they happened to go to a bad reviewer. But this is not the case.)
- pppoeI really like how they approach to the detection. But I am worried that this is something the community can only use effectively once. There are too many ways to bypass this detection once you know how it works.
- merelysoundsRelated discussion elsewhere and from a different point of view:> ICML: every paper in my review batch contains prompt-injection text embedded in the PDFsource: https://old.reddit.com/r/MachineLearning/comments/1r3oekq/d_...There are recent comments there as well:> Desk Reject Comments: The paper is desk rejected, because the reciprocal reviewer nominated for this paper ([OpenReview ID redacted]) has violated the LLM reviewing policy. The reviewer was required to follow Policy A (no LLMs), but we have found a strong evidence that LLM was used in the preparation of at least one of their reviews. This is a breach of peer-review ethics and grounds for desk rejection. (...)source: https://old.reddit.com/r/MachineLearning/comments/1r3oekq/d_...
- ozgungI think the real news from this experiment is that LLM usage is almost unavoidable even among high level professionals who are capable to and promised to do the task without LLMs. I don’t think these policies will be around in a few years. They are more like naive transition period attempts to stop a tsunami.
- aledevvGreat experiment!Correct me if I'm wrong, but this means that many people are using LLMs despite claiming not to.It's the first symptom of a dependency mechanism.If this happens in this context, who knows what happens in normal work or school environments?(P.S.: The use of watermarks in PDFs to detect LLM usage is very interesting, even though the LLM might ignore hidden instructions.)
- jacquesmI keep spotting clear LLM 'tells' in text where I know the people on the other side believe they're 'getting away with it'. It is incredible at what levels of commerce people do this, and how they're prepared to risk their reputation by saving a few characters typed. It makes me wonder what they think they are getting paid for.
- michaelbuckbeeWorth reading for the discussion of the LLM watermark technique alone.
- auggieroseIt would be interesting to know how many of the cheaters didn't check policy A, but checked "don't care if A or B". Because the operative part of that is "don't care", not "I will strictly adhere to either policy A or B, whatever somebody else selects for me".So it is a sneaky and typically academic way of doing stuff. Also, "We hope that by taking strong action against violations of agreed-upon policy we will remind the community that as our field changes rapidly the thing we must protect most actively is our trust in each other. If we cannot adapt our systems in a setting based in trust, we will find that they soon become outdated and meaningless." is so academic and pointless.
- causalityltdThe declaration of no-LLM was done for social prestige or maybe self-deception of self-sufficiency like "I don't need LLM". And when it was time to do the actual work, the dependency kicked in like drugs. A lesson for all of us with LLMs in our workflow.
- LercI have heard people say that they find that people who broadcast their distaste for LLMs secretly use it. I was fairly sceptical of the claim, but this seems to suggest that it happens more than I would have thought.One wonders what leads them to the AI rejecting option in the first place.
- zulbanI've learned a bit today about how often people on hn read the article when commenting. Or potentially bots who are way off. The title alone isn't enough to totally grasp what happened here, or the methods used.Extremely conservative detection. The real number must be much higher.
- gebveee70interesting results but the eval methodology seems a bit optimistic
- quinndupontHow is nobody considering the broader political economy of scholarly publications and reviews? These are UNPAID reviews! Sure, maybe ICML isn’t Elsevier, but they are cousins to the socially parasitic and exploitative companies, at the very least.Hiding behind a false “choice” to not use AI or basically not use AI isn’t an appropriate proposal. This is crooked and shameful. We should boycott ICML except we can’t because they are already the gatekeepers!
- LlioraI've seen a similar issue in our own review process. We've found that reviewers using LLM
- geremiiahIf you need an LLM to understand a paper you should not be a reviewer for said paper.
- mika-elThe irony here is that the detection method is literally prompt injection — the same technique that's a security vulnerability everywhere else. ICML embedded hidden instructions in PDFs that manipulate LLM output. In a different context that's an attack, here it's enforcement.From my perspective this says something important about where we are with LLMs. The fact that you can reliably manipulate model output by hiding instructions in the input means the model has no real separation between data and commands. That's the fundamental problem whether you're catching lazy reviewers or defending against actual attacks.
- ritcgabWell deserved.
- mvrckhckrIt’s ironic. I also doubt the validity of the AI writing detection.
- coldteaAnother 30-40% just didn't get caught because the reviewers also used LLM in their "reviews"
- jillesvangurpThis is about reviewers, not authors. Title is a bit misleading.In any case, having reviewed a lot of mostly very poorly written articles and occasionally solid papers when I was still a researcher, I can sympathize with using LLMs to streamline the process. There are a lot of meh papers that are OK for a low profile workshop or small conference where you cut people some slack. But generally standards should be higher for things like journals. Judging what is acceptable for what is part of the game. For a workshop, the goal is to get interesting junior researchers together with their senior peers. Honestly, workshops are where the action is in the academic world. You meet interesting people and share great ideas.Most people may not realize this but there are a lot of people that are starting in their research career that will try to get their papers accepted for workshops, conferences, or journals. We all have to start somewhere. I certainly was not an amazing author early on. Getting rejections with constructive feedback is part of how you get better. Constructive feedback is the hard part of reviewing.The more you publish, the more you get invited to review. It's how the process works. It generates a lot of work for reviewers. I reviewed probably at least 5-10 papers per month. It actually makes you a better author if you take that work seriously. But it can be a lot of work unless you get organized. That's on top of articles I chose to read for my own work. Digesting lots of papers efficiently is a key skill to learn.Reviewing the good papers is actually relatively easy. It's enjoyable even; you learn something and you get to appreciate the amazing work the authors did. And then you write down your findings.It's the mediocre ones that need a lot of careful work. You have to be fair and you have to be strict and right. And then you have to provide constructive feedback. With some journals, even an accept with revisions might land an article on the reject pile.The bad ones are a chore. They are not enjoyable to read at all.The flip side of LLMs is that both sides can and should (IMHO) use them: authors can use them to increase the quality of their papers. With LLMs there no longer is any excuse for papers with lots bad grammar/spelling or structure issues anymore. That actually makes review work harder. Because most submitted papers now look fairly decent which means you have to dive into the detail. Rejecting a very rough draft is easy. Rejecting a polished but flawed paper is not.If I was still doing reviews (I'm not), I'd definitely use LLMs to pick apart papers, to quickly zoom in on the core issues and to help me keep my review fair and balanced and professional in tone. I would manually verify the most important bits and my effort would be proportional to which way I'm leaning based on what I know. Of course, editors can use LLMs as well to make sure reviews are fair and reasonable in their level of detail and argumentation. Reviewing the reviewers always has been a weakness of the peer review system and sometimes turf wars are being fought by some academics via reviews. It's one of the downsides of anonymous reviews and the academic world can be very political. A good editor would stay on top of this and deal with it appropriately.LLMs are good at filtering, summarizing, flagging, etc. With proper guard rails, there's no reason to not lean on that a bit. It's the abuse that needs to be countered. In the end, that begins and ends with editors. They select the reviewers. So when those do a bad job, they need to act. And when their journals fill up with AI slop, it's their reputations that are on the line.Like any tool, use caution and common sense. Blanket bans are not that productive at this stage.
- justboy1987[dead]
- Iamkkdasari74[dead]
- AlpacinoDp82[dead]
- gethwhunter34[flagged]