
Comments (86)

  • Aurornis
    This is their hosted-only model, not an open-weight model like they’ve become known for. They got a lot of good publicity for their open-weight model releases, which was the goal. The hard part is pivoting from an open-weight provider to being considered a competitor to Claude and ChatGPT. Initial reactions are mostly anger from everyone who didn’t realize that the plan all along was to give away the smaller models as advertising, not because they were feeling generous.

    Comparing to Opus 4.5 instead of the current 4.6 and other last-gen models is clearly an attempt to deceive, which isn’t winning them any points either.

    I think there is a moderately large market for models like this that aren’t quite SOTA-level but can be served up much cheaper. I don’t know how successful they’ll be in the race to the bottom in this market niche, though. Most users of cheap API tokens are not loyal to any brand and will change providers overnight each time someone releases a slightly better model.
  • Alifatisk
    I understand people’s reactions to the Qwen team comparing against Opus 4.5 instead of 4.6, and comparing against Gemini Pro 3.0 instead of 3.1. But calling it misleading is a bit of a stretch in my eyes; people here are acting like we immediately forgot how previous generations performed just because a new version was released.

    This field is moving at an incredible pace; the providers release a new model every quarter or so. The amount of criticism is a bit overblown in my opinion. The benchmarks still look very good to me. I’ve used GLM-5 (the latest is GLM-5.1) and Kimi K2.5; they are decent and get the job done, so seeing how this Qwen model performs compared to them is kinda impressive.

    Also, why are so many pointing out that this model is not open-weight as if this were their first time doing so? Qwen-3.5-plus and Qwen-3-Max are also closed-source. This is not something new.

    I think Qwen trying to catch up to the SOTA models is still healthy for us, the consumers. Sure, it’s sad news that this version is closed-weight, but I won’t downplay their progress.
  • simonw
    Pretty solid Pelican: https://gist.github.com/simonw/ca081b679734bc0e5997a43d29fad...

    I used the https://modelstudio.alibabacloud.com/ API to generate that one, which required signing up for an account and attaching PayPal billing - but it looks like OpenRouter are offering it for free right now, so I could have used that: https://openrouter.ai/qwen/qwen3.6-plus:free
  • jgbuddy
    Worth noting that this model, unlike almost all Qwen models, is not open-weight, nor is the parameter count disclosed. Also odd that it is compared against Opus 4.5 even though 4.6 was released like 2 months ago.
  • furyofantares
    I'll diverge from some of these comments: I don't find it misleading to compare to Opus 4.5.

    I can remember how good Opus 4.5 was. If I'm considering using this, the most informative comparison is against the closest model I have familiarity with.

    I'm obviously not switching to this if I want the best model. I'm switching if I'm hopeful that the smaller versions are close to it, or if I want more options for providers, or for any other reason unrelated to getting the highest-quality responses possible.
  • gburgett
    Looking forward to when this gets on Bedrock. I built an app with a niche AI agent, and so far only Sonnet is really good enough for our use case, but it's expensive!
  • woeirua
    Just more evidence that the B tier models are six months behind. Ultimately that’s good. Opus 4.6 level intelligence will be cheap later this year!
  • linolevan
    I’m surprised that people are surprised. Qwen has been hosting private plus and max variants for a while now.
  • srmatto
    The benchmarks provided are against Opus 4.5, not the latest Opus 4.6, and Qwen is still lagging in a lot of them.
  • karimf
    > In the coming days, we will also open-source smaller-scale variants, reaffirming our commitment to accessibility and community-driven innovation.
  • giancarlostoro
    I hope their open source variants are just as good, having a 1 million token window for a fully offline model would be VERY interesting.
  • wg0
    It hallucinates a lot more than Sonnet or even MiniMax M2.5, especially in tool calls: it would end up duplicating content in code files, only realising later, and getting stuck in a loop.
  • kanehorikawa
    the tool use examples are nice, but i'm curious about the structured output reliability. we've had other API models completely fall apart on complex, nested JSON schemas under concurrent load
  • Caum
    The agent benchmarks here are interesting but I'd love to see how Qwen3.6-Plus handles long-horizon tasks where it needs to recover from its own mistakes. Most agent evals test the happy path. The hard part is when the model takes a wrong action at step 3 and needs to recognize and backtrack at step 15. Has anyone stress-tested this in a real dev workflow?
  • wolvoleo
    Nice, I hope a small open version of it comes out too.
  • zkmon
    It is no longer available on OpenRouter. They say "going away on 3-March", but it's already gone!
  • throwaw12
    I would love to hear from people using both (Claude Code OR Codex) AND Qwen about their experience with the Qwen models: are they on par, or how far behind are they?
  • Art9681
    How convenient of them to compare themselves to the last generation Opus and GPT models to make their model look better than it really is.
  • shubhamgarg86
    the comparison is helpful but i'd want to see how it handles edge cases
  • MarsIronPI
    It's not open weights so I'm not interested.
  • esafak
    Does anyone have experience with Alibaba's coding plan? Not that I'm very tempted at $50/month...
  • eis
    Quite strong results in the benchmarks, but why Gemini 3 Pro instead of 3.1? Why only for a few of the benchmarks? Why is OpenAI not in the coding benchmarks? Why Opus 4.5 and not 4.6? It just jumps out at me as a bit strange.

    As always, we'll have to try it and see how it performs in the real world, but Qwen's open-weight models were pretty decent for some tasks, so I'm still excited to see what this brings.
  • daft_pink
    Not really interested in using models hosted on Alibaba Cloud. I like local Qwen for its privacy, but I trust the privacy of Google/OpenAI/Anthropic more than Alibaba’s.