
Comments (78)

  • bensyverson
    Really fascinating how this works; it's basically context-aware decoding. From the paper:

    > Code interleaves fork positions, where several continuations are genuinely plausible and may correspond to different solution approaches, with lock positions, where syntax and semantics leave little ambiguity but a low-probability distractor tail still remains… The best global decoding setting is therefore necessarily a compromise; we call this tension the precision-exploration conflict.

    In other words, just like us, the model needs to shift from "exploration" in "fork" mode (divergent thinking to produce a creative solution) to "precision" in "lock" mode (producing syntactically correct code).

    What this paper shows is that their simple technique (SSD) can improve the ranking of optimal tokens in both lock and fork positions, meaning the model is more likely to explore when it should be exploring, and more likely to be precise when it needs to be.

    I love that we're still learning the emergent properties of LLMs!
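One way to picture the fork/lock distinction quoted above is through the entropy of the model's next-token distribution. The numbers below are made up for illustration, not taken from the paper:

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a next-token probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# Made-up distributions over four candidate tokens:
fork = [0.30, 0.28, 0.22, 0.20]   # several genuinely plausible continuations
lock = [0.97, 0.01, 0.01, 0.01]   # one near-certain token plus a distractor tail

# A single global temperature must compromise between the two regimes:
# high enough to explore at forks, low enough not to sample the tail at locks.
print(f"fork entropy: {entropy(fork):.2f} nats, lock entropy: {entropy(lock):.2f} nats")
```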
  • dwa3592
    Can anyone help clarify these doubts? I didn't see any information about how different the test/benchmark set is from the training set. That feels like an important gap to leave unfilled in an ML paper. What if there is overlap between the problems in the test set and the training set? What is the decontamination strategy for going from LCBv5 to LCBv6?
  • wg0
    After TurboQuant and Gemma 4, I came across the following video [0] running Gemma on a local machine at 50 tokens/second. That already looks like Sonnet 3.x and 4 level capabilities to me: the model in question (Gemma 4) sets up a whole Python project with a UI and installs Python libraries using uv, etc.

    Add this Simple Self-Distillation to the picture, and by 2028 I see cheaper coding-model providers with much more generous usage limits, with power users mostly running their own models anyway.

    Anyone using these models as "non-deterministic transpilers" from natural language to code (experienced engineers who can write code themselves) would probably not be paying any AI providers.

    [0] https://www.youtube.com/watch?v=-_hC-C_Drcw
  • khalic
    Incredible; this will translate to better coding models in the near future.

    We really need to develop better tools to understand what's happening inside these NNs. Working with high-D spaces is not something we're good at, and we're basically throwing stuff at it and seeing if it sticks.
  • p1esk
    It’s so ironic that Apple still publishes AI research and OpenAI does not.
  • 0x3f
    Haven't read the paper yet, but it is interesting how seemingly simple many breakthroughs in ML are. Even transformers are like that. Maybe it's hindsight bias.

    I suppose we just don't have a deeper underlying theory to lean on and help us 'design' anything.
  • ultramann
    Maybe not the thing I should be focusing on, but I was surprised this paper came from Apple. I was under the impression that Apple's AI/LLM research was far behind the curve. I get that research is a rising-tide-lifts-all-boats situation; I just thought I had seen lots of negative news about Apple's progress on this front, and heuristically haven't seen many (any?) Apple research papers make it to the front page of Hacker News. Wondering if anyone more familiar with Apple's AI research could comment on this?
  • l5870uoo9y
    > Our method, simple self-distillation (SSD), is embarrassingly simple: sample solutions from the base model with specified temperature and truncation, then fine-tune on those raw, unverified samples via standard cross-entropy loss.

    So you prompt the base model for an answer and then rerun the prompt with the answer from the first run?
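A toy numpy sketch of the two steps the quoted method describes (temperature/truncation sampling, then cross-entropy on the raw sample), with a four-logit vocabulary standing in for a real model. No actual LM or optimizer is involved; this only illustrates the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_top_p(logits, temperature=0.8, top_p=0.9):
    """Sample one token using temperature scaling and nucleus (top-p) truncation."""
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]                  # tokens by descending probability
    cum = np.cumsum(probs[order])
    keep = order[: np.searchsorted(cum, top_p) + 1]  # smallest set covering top_p mass
    p = probs[keep] / probs[keep].sum()              # renormalize over kept tokens
    return int(rng.choice(keep, p=p))

def cross_entropy(logits, target):
    """Standard cross-entropy loss of one target token under the logits."""
    z = logits - logits.max()                        # numerically stable log-softmax
    logp = z - np.log(np.exp(z).sum())
    return -logp[target]

# Step 1: sample a "solution" token with the specified temperature and truncation.
vocab_logits = np.array([2.0, 1.5, 0.3, -1.0])
token = sample_top_p(vocab_logits)

# Step 2: compute the loss on the raw, unverified sample; in real training this
# would be backpropagated through the model's parameters.
loss = cross_entropy(vocab_logits, token)
```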
  • an0malous
    I’d like to understand AI research better and I recall some posts a while back where someone collected all the key papers that one should read, but I don’t remember enough to be able to find it. Does anyone know what I’m talking about and could link me to that post?
  • itmitica
    It’s an interesting claim, and the reported benchmark gains are large, but it is still an April 1, 2026 arXiv preprint, so I’d treat it as promising rather than settled.
  • robwwilliams
    Very cool. An evolutionary biologist would say: welcome to the party!

    Mutation rate modulation is the AI engineers' heat, and selection does the trimming of the outliers.

    Some more serious biomorphic thinking and we may get to the next big insight, courtesy of 3+ billion years of evolution: evolution that enabled a great ape species to write a paper like this and build LLMs like Gemma 4 that totally rock on a 3.5-pound MacBook Pro M5 Max with 128 GB of RAM.
  • fooker
    I'm excited for the long tail of techniques like this that will be discovered over the next several decades and will eventually make this technology run on a toaster!
  • roger_
    Skimmed this, but I don't have an intuitive understanding of why this works or how temperature and truncation factor in.
  • xbmcuser
    So the chances of the Singularity went up.
  • vishnugupta
    Can someone please ELI5 this to a friendly web developer? I read the abstract but couldn't understand much.
  • augment_me
    Isn't this what DeepSeek + Kimi did to Claude?
  • antirez
    Another potentially usable trick is the following: based on the observation that longer token budget improves model performances, one could generate solutions using a lot of thinking budget, then ask the LLM to turn the trace into a more compact one, and later SFT on that. That said, I have the feeling the result of the paper will likely be hard to apply in practice without affecting other capabilities, and/or not superior to other techniques that provide similar improvement in sampling.
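The trick described above could be orchestrated roughly as below. Everything here (function names, prompt wording, the `toy_llm` stand-in) is a hypothetical sketch, not any real API or the paper's method:

```python
def distill_traces(llm, problems, think_budget=8192):
    """Generate verbose traces, compress them, and collect (prompt, completion)
    pairs to SFT on later. `llm` stands in for any chat-completion client."""
    dataset = []
    for prob in problems:
        long_trace = llm(prob, max_tokens=think_budget)  # generous thinking budget
        compact = llm("Rewrite this solution trace more compactly:\n" + long_trace)
        dataset.append({"prompt": prob, "completion": compact})  # SFT pairs
    return dataset

# Toy stand-in model so the sketch runs end to end.
def toy_llm(prompt, max_tokens=None):
    return "<answer to: " + prompt[:30] + ">"

pairs = distill_traces(toy_llm, ["Sort a list in O(n log n)"])
```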
  • 4b11b4
    Self-consistency meets fine-tuning?
  • drooby
    Fascinating... This feels eerily similar to sleep consolidation or synaptic pruning.
  • smallerize
    I don't suppose they published the improved models?
  • jofzar
    > simple self-distillation (SSD)

    Sorry, Apple, SSD is already taken; you can't use that acronym.
  • politelemon
    It's cringeworthy to see that the original paper's title has been editorialised. The title should be: Simple Self-Distillation Improves Code Generation.
  • ape4
    Shouldn't a scientific paper be using metric units (like 30T) rather than 30B? There are two distinct billions: https://en.wikipedia.org/wiki/Billion