
Comments (43)

  • simjnd
Probably a testament to how good Qwen3.6 is, considering Qwen3.6-35B-A3B is not only ahead of their similar weight class XS.2 but also their M.1 (close to 10x bigger at 225B-A23B). Interestingly, Gemma 4 26B-A4B and Qwen3.6 27B (dense) have been left out of the comparison. The smaller models are becoming very good, and quantization techniques like importance weighting and TurboQuant on model weights let you run aggressively quantized versions (IQ2, TQ3_4S) on consumer hardware with extremely acceptable perplexity and quality loss. Very exciting times for local LLMs.
  • rohitpaulk
Been testing these via their "pool" agent. It's fast, and the agent adheres to the ACP spec pretty well (better than codex, opencode, etc.), so it's a good experience in Zed.
  • vijgaurav
The fact that they're shipping the actual agent harness alongside the weights is the part that matters. Most labs dump the model and make you figure out the agent layer yourself. If it's the same runtime they use for RL training, it's actually been exercised in production rather than being some demo wrapper.
  • simonw
    Pelicans via OpenRouter - the M.1 one is better, neither are particularly great though: https://gist.github.com/simonw/382464026d2e3535986e06437fb6d...
  • sudb
    I'm not sure I understand why Poolside are training their own models at all - what's the perceived or real advantage of splitting up model training efforts into smaller companies and dividing up resources like this? Is it just to have a US-domiciled LLM lab?
  • orliesaurus
    The colors used in the charts are borderline criminal
  • jaen
For similarly sized models, not looking very good on the slightly-less-benchmaxxed Terminal-Bench 2.0: Laguna XS.2 33B-A3B: 30.6, Qwen3.6 35B-A3B: 51.5, Devstral 2 123B: 31.2. Quite a huge lead for Qwen... well, at least it's catching up to other smaller Western labs.
  • throwaw12
Has anyone tried these models? I like their honesty in benchmarks; looks like Qwen3.6 35B is outperforming their Laguna M.1 225B model.
  • franksiem
Felt like they would never come out of stealth mode, but very nice to see it materialize into something competitive.
  • speedgoose
Please update the charts. Consider using textures or fill patterns. I usually score pretty well in colour perception tests, but distinguishing between those two purples made me doubt myself.
  • gslepak
Very cool to see more small open models being worked on! One nit: I've seen on this homepage, and many others, this notion that the people behind the models are "working towards AGI". I get that this is marketing speak, but transformers are not AGI, and they will never be AGI, so it'd be great if people stopped saying that, as it sort of wears out the meaning of "working towards AGI".
  • kingjimmy
the color coding makes those benchmark charts impossible to understand. very pretty though.
  • esafak
They're not winning any popular benchmark. Is there some niche where they excel?