Comments (242)
- bertili: A relief to see the Qwen team still publishing open weights, after the kneecapping [1] and departures of Junyang Lin and others [2]! [1] https://news.ycombinator.com/item?id=47246746 [2] https://news.ycombinator.com/item?id=47249343
- homebrewer: Already quantized/converted into a sane format by Unsloth: https://huggingface.co/unsloth/Qwen3.6-35B-A3B-GGUF
- syntaxing: Is it worth running speculative decoding on small active-parameter models like this? Or does MTP make speculative decoding unnecessary?
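For readers unfamiliar with the trade-off being asked about: speculative decoding has a cheap draft model propose several tokens, which the target model then verifies in one pass, keeping only the prefix it agrees with. A minimal greedy sketch, with toy deterministic "models" standing in for real LLMs (all names here are illustrative, not any real API):

```python
def speculative_decode(target, draft, prompt, k=4, steps=3):
    """Toy greedy speculative decoding: the draft proposes k tokens,
    the target accepts the longest matching prefix, then appends one
    token of its own (so every round makes progress)."""
    out = list(prompt)
    for _ in range(steps):
        # draft phase: propose k tokens autoregressively
        proposed, ctx = [], list(out)
        for _ in range(k):
            t = draft(ctx)
            proposed.append(t)
            ctx.append(t)
        # verify phase: accept while the draft matches the target's greedy choice
        ctx = list(out)
        for t in proposed:
            if target(ctx) != t:
                break
            ctx.append(t)
        out = ctx
        out.append(target(out))  # the target's own token after the first mismatch
    return out

# Toy "models": greedy next token = previous token + 1 (mod 10).
target = lambda ctx: (ctx[-1] + 1) % 10
result = speculative_decode(target, target, [0], k=4, steps=2)
```

When draft and target agree (as in the toy above), each round yields k+1 tokens for one target pass; the question upthread is whether a model with only ~3B active parameters is already fast enough that this machinery, or built-in multi-token prediction (MTP), buys little.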
- alecco: Related interesting find on Qwen: "Qwen's base models live in a very exam-heavy basin - distinct from other base models like llama/gemma. Shown below are the embeddings from randomly sampled rollouts from ambiguous initial words like 'The' and 'A'." https://xcancel.com/N8Programs/status/2044408755790508113
- armanj: I recall a Qwen exec posted a public poll on Twitter asking which Qwen3.6 model people wanted to see open-sourced, and the 27B variant was by far the most popular choice. Not sure why they ignored it lol.
- mtct88: Nice release from the Qwen team. Small open-weight coding models are, imho, the way to go for custom agents tailored to the specific needs of dev shops that are restricted from accessing public models. I'm thinking about banking and healthcare sector development agencies, for example. It's a shame this remains a market largely overlooked by Western players, Mistral being the only one moving in that direction.
- amelius: Looks like they compare only to open models, unfortunately. As I mostly use the non-open models, I have no idea what these numbers mean.
- psim1: (Please don't downvote - serious question) Are Chinese models generally accepted for use within US companies? The company I work for won't allow Qwen.
- 999900000999: Looking to move off Ollama on openSUSE Tumbleweed. Should I use brew to install llama.cpp, or zypper to install the Tumbleweed package?
- seemaze: Fingers crossed for mid-size and larger models as well. I'd personally love to see Qwen3.6-122B-A10B.
- abhikul0: I hope the other sizes are coming too (9B for me). Can't fit much context with this on a 36GB Mac.
- jake-coworker: This is surprisingly close to Haiku quality, but open - and Haiku is quite a capable model (many of the Claude Code subagents use it).
- fooblaster: Honestly, this is the AI software I actually look forward to seeing. No hype about it being too dangerous to release. No IPO-pumping hype. No subscription fees. I am so pumped to try this!
- rvnx: China won again in terms of openness.
- adrian_b: Available for download: https://huggingface.co/Qwen/Qwen3.6-35B-A3B
- dataflow: I'm a newbie here and lost as to how I'm supposed to use these models for coding. When I use them with Continue in VS Code and start typing basic C like "#include <stdio.h> int m", I get nonsensical autocompletions like "#include <stdio.h> int m</fim_prefix>". What is going on?
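The `</fim_prefix>` leaking into the completion suggests the client and model disagree about the fill-in-the-middle (FIM) prompt template: code-completion models expect special sentinel tokens around the text before and after the cursor, and the client must emit and strip the exact strings the model was trained with. A hedged sketch of the general idea; the token strings below are an assumption in the style used by several coder models, so verify them against the actual tokenizer config rather than trusting this:

```python
# Assumed FIM sentinel strings (illustrative -- check the model's
# tokenizer_config.json for the real ones; a mismatch here is exactly
# what makes sentinels leak into the visible completion).
FIM_PREFIX = "<|fim_prefix|>"
FIM_SUFFIX = "<|fim_suffix|>"
FIM_MIDDLE = "<|fim_middle|>"

def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Ask the model to generate the code between prefix and suffix."""
    return f"{FIM_PREFIX}{prefix}{FIM_SUFFIX}{suffix}{FIM_MIDDLE}"

# Text before the cursor, then text after it:
prompt = build_fim_prompt("#include <stdio.h>\nint m", "\n")
```

If sentinel strings like these show up in the editor, the client is either using the wrong template for the model or failing to treat them as stop/strip tokens.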
- aliljet: I'm broadly curious how people are using these local models. Literally, how are they attaching harnesses to them and finding more value than just renting tokens from Anthropic or OpenAI?
- ghc: How does this compare to gpt-oss-120b? It seems weird to leave it out.
- Glemllksdf: I tried Gemma 4 A4B and was surprised how hard it is to use for agentic stuff on an RTX 4090 with 24 GB of VRAM. Balancing the KV cache and context eats VRAM super fast.
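The VRAM pressure described above is easy to put numbers on: KV cache grows linearly with context length, layer count, and KV head width. A back-of-envelope sketch with made-up config numbers (these are illustrative, not any real model card):

```python
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_elt=2):
    """Approximate KV cache size: keys + values (the leading 2x),
    one entry per layer, KV head, head dimension, and position,
    at bytes_per_elt (2 for fp16/bf16)."""
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elt

# Hypothetical config for illustration only:
gib = kv_cache_bytes(layers=48, kv_heads=4, head_dim=128,
                     seq_len=32_768, bytes_per_elt=2) / 2**30
```

With those assumed numbers a single 32k-token context costs about 3 GiB on top of the weights, which is why long-context agentic runs on a 24 GB card fill up fast; KV cache quantization (e.g. 8-bit cache in llama.cpp) roughly halves that.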
- lopsotronic: Dangit, I'll need to give this a run on my personal machine. This looks impressive. At the time of writing, all DeepSeek or Qwen models are de facto prohibited in govcon, including local machine deployments via Ollama or similar. Although no legislative or executive mandate yet exists [1], it's perceived as a gap [2], and contracts are already including language prohibiting them not just in the product but in any part of the software environment. The attack surface for a (non-agentic) model running in local Ollama is basically non-existent . . but, eh . . I do get it, at some level. While they're not l33t haXX0ring your base, the models are still largely black boxes, and can move your attention away from things, or towards things, with no one being the wiser. "Landing craft? I see no landing craft." This would boil out in test, ideally, but hey, now you know how much time your typical defense subcon spends in meaningful software testing [3]. [1] See also OMB Memorandum M-25-22 (preference for AI developed and produced in the United States), NIST CAISI assessment of PRC-origin AI models as "adversary AI" (September 2025), and House Select Committee on the CCP Report (April 16, 2025), "DeepSeek Unmasked". [2] Overall, rather than a blacklist, I'd recommend a "whitelist" of permitted models, maintained dynamically. This would operate the same way you would manage libraries via SSCG/SSCM (software supply chain governance/management) . . but few if any defense subcons have enough onboard savvy to manage SSCG, let alone spool up a parallel construct for models :(. Soooo . . Ollama regex scrubbing it is. [3] i.e. none at all; we barely have the ability to MAKE anything like software, given the combination of underwhelming pay scales and the fact that defense companies always seem to have a requirement for 100% on-site in some random crappy town in the middle of BFE. If it wasn't for the downturn in tech we wouldn't have anyone useful at all, but we snagged some silicon refugees.
- kombine: What kind of hardware (preferably non-Apple) can run this model? What about 122B?
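A rough answer to the hardware question follows from a rule of thumb: weight memory is roughly parameter count times bits-per-weight divided by 8, before KV cache and runtime overhead. A sketch, where the ~4.5 bits/weight figure for a Q4_K_M-style quant is an approximation, not an exact spec:

```python
def weight_gib(params_b: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GiB for a quantized model
    (ignores KV cache, activations, and runtime overhead)."""
    return params_b * 1e9 * bits_per_weight / 8 / 2**30

q4_35b  = weight_gib(35, 4.5)   # ~18 GiB: fits a 24 GB GPU with room for cache
q4_122b = weight_gib(122, 4.5)  # ~64 GiB: needs multi-GPU or large unified memory
```

By this estimate the 35B-A3B quant sits comfortably on a 24 GB card (or CPU RAM, since only ~3B params are active per token), while a hypothetical 122B would need something in the 64+ GiB class.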
- zoobab: "Open source"? Then give me the training data.
- btbr403: Planning to deploy Qwen3.6-35B-A3B on an NVIDIA DGX Spark for multi-agent coding workflows. The 3B active params should help with concurrent agent density.
- incomingpain: Wowzers, we were worried Qwen was going to suffer after losing several high-profile people on the team, but that's a huge drop. It's better than 27B?
- nurettin: I tried the car wash puzzle: "You want to wash your car. The car wash is 50m away. Should you walk or go by car?" It answered: "Walk. At 50 meters, the round trip is roughly 100 meters, taking about two minutes on foot. Driving would require starting the engine, navigating, parking, and dealing with unnecessary wear for a negligible distance. Walk to the car wash, and if the bay requires the vehicle inside, have it moved there or return on foot. Walking is faster and more efficient." Classic response. It was really hard to one-shot this with Qwen3.5 Q4_K_M. Qwen3.6 UD-IQ4_XS also failed the first time, so I added this to the system prompt: "Double check your logic for errors". Then I created a new dialog, asked the puzzle again, and it responded: "Drive it. The car needs to be present to be washed. 50 meters is roughly a 1-minute walk or a 10-second drive. Walking leaves the car behind, making the wash impossible. Driving it the short distance is the only option that achieves the goal." Now 3.6 gets it right every time. So not as great as a super model, but definitely an improvement.
- zshn25: What do all the numbers in "3.6-35B-A3B" mean?
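Decoding the name: "3.6" is the model family version, "35B" is total parameters, and "A3B" means roughly 3B of them are active per token, because a mixture-of-experts router selects a small subset of experts at each step. The practical consequence, sketched with a standard ~2-FLOPs-per-active-parameter-per-token approximation (an estimate, not a spec):

```python
# MoE "35B-A3B": memory to hold the weights scales with TOTAL params (35B),
# but per-token decode compute scales with ACTIVE params (~3B).
def decode_flops_per_token(active_params: float) -> float:
    """Rough decode cost: ~2 FLOPs per active parameter per token."""
    return 2 * active_params

moe_cost   = decode_flops_per_token(3e9)    # the A3B model
dense_cost = decode_flops_per_token(35e9)   # a hypothetical dense 35B
speedup    = dense_cost / moe_cost          # ~11.7x cheaper per token
```

This is why the thread discusses it as a memory-hungry but fast model: you pay for 35B parameters of storage but only ~3B parameters of compute per generated token.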
- yieldcrv: Anybody use these instead of Codex or Claude Code? Thoughts in comparison? Benchmarks don't really help me much.
- fred_is_fred: How does this compare to the commercial models like Sonnet 4.5 or GPT? Close enough that the price is right (free)?
- tristor: I'm disappointed they didn't release a 27B dense model. I've been working with Qwen3.5-27B and Qwen3.5-35B-A3B locally, both in their native weights and the versions the community distilled from Opus 4.6 (Qwopus), and I have found I generally get higher-quality outputs from the 27B dense model than the 35B-A3B MoE model. My basic conclusion was that the MoE approach may be more memory efficient, but it requires a fairly large set of active parameters to match similarly sized dense models: I saw better or comparable results from Qwen3.5-122B-A10B versus Qwen3.5-27B, though at a slower generation speed. I am certain that for frontier providers with massive compute, MoE represents a meaningful efficiency gain with similar quality, but for running models locally I still prefer medium-sized dense models. I'll give this a try, but I would be surprised if it outperforms Qwen3.5-27B.
- bossyTeacher: Does anyone have experience with Qwen or other non-Western LLMs? It's hard to get a feel out there with all the doomerists and grifters shouting. The only thing I need is a reasonable promise that my data won't be used for training, or at least some of it won't. Being able to export conversations in bulk would be helpful.
- shevy-java: I don't want "Agentic Power". I want to reduce AI to zero. Granted, this is an impossible fight to win, and I feel like Don Quixote here. Rather than windmill-dragons, it is some Skynet 6.0 blob.
- amazingamazing: More benchmaxxing, I see. Too bad there's no rig with 256 GB of unified RAM for under $1000.