
Comments (27)

  • BloondAndDoom
    This is pretty cool and useful, but I only wish it were a website. I don't like the idea of running an executable for something that could perfectly well be done as a website. (Other than some minor features; tbh you can even enable Corsair and still check the installed models from a web browser.) Sounds like a fun personal project, though.
  • kamranjon
    This is a great idea, but the models seem pretty outdated - it's recommending things like Qwen 2.5 and StarCoder 2 as perfect matches for my M4 MacBook Pro with 128GB of memory.
  • est
    Why do I need to download & run it to check it out? Can I just submit my gear specs in some dropdowns to find out?
  • asimovDev
    As someone who's very uneducated when it comes to LLMs, I am excited about this. I am still struggling to understand the correlation between system resources and context, e.g. how much memory I need for N amount of context.

    I've recently been using local models for coding agents, mostly because I got tired of waiting for Gemini to free up, constantly retrying to get some compute time on the servers for my prompt to process - like being a university student in the 90s waiting for your turn to compile your program on the university computer. I tried Mistral's vibe and it would run out of context easily on a small project (not even 1k lines, but multiple files and headers) at 16k or so, so I cranked it up to the maximum supported in LM Studio, but I wasn't sure whether that was slowing it to a halt (it did take about 10 minutes for my prompt to finish, which was 'rewrite this C codebase into C++').
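    The memory-vs-context question above has a rough back-of-envelope answer: KV-cache size grows linearly with context length. A minimal sketch, assuming a transformer with grouped-query attention and an fp16 cache - all dimensions below are illustrative placeholders, not any specific model's config:

    ```python
    # Rough KV-cache sizing: per token, a transformer stores one key and one
    # value vector per layer, so total bytes scale linearly with context length.
    def kv_cache_bytes(n_layers, context_len, n_kv_heads, head_dim, bytes_per_elem=2):
        # 2 = one K tensor + one V tensor per layer
        return 2 * n_layers * context_len * n_kv_heads * head_dim * bytes_per_elem

    # Illustrative 7B-class model: 32 layers, 8 KV heads (GQA), head_dim 128,
    # fp16 cache (2 bytes/element).
    gib = kv_cache_bytes(n_layers=32, context_len=16384,
                         n_kv_heads=8, head_dim=128) / 2**30
    print(f"KV cache at 16k context: {gib:.1f} GiB")  # -> 2.0 GiB
    ```

    Under these assumptions, doubling the context to 32k doubles the cache to ~4 GiB, on top of the model weights themselves.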
  • manmal
    Slightly tangential, I'm test-driving an MLX Q4 variant of Qwen3.5 32B (MoE 3B), and it's surprisingly capable. It's not Opus ofc. I'm using it for image labeling (food ingredients) and I'm continuously blown away by how well it does. Quite fast, too, and parallelizable with vLLM. That's on an M2 Max Studio with just 32GB. I got this machine refurbed (though it turned out totally new) for €1k.
  • castral
    I wish there were more support for AMD GPUs on Intel Macs. I saw some people on GitHub getting llama.cpp working with them - could this be added in the future if they make the backend support it?
  • ff00
    Found this website, haven't tested it: https://www.caniusellm.com/
  • windex
    What I do is ask Claude or Codex to run models on Ollama, test them sequentially on a bunch of tasks, and rate the outputs. 30 minutes later I have a fit. It even tested the abliterated models.
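    The sequential-testing loop described above can be sketched around the Ollama CLI (`ollama run <model> <prompt>`, which prints the completion to stdout). The model names and task below are placeholders - substitute whatever is pulled locally:

    ```python
    import subprocess

    MODELS = ["llama3.1:8b", "qwen2.5-coder:7b"]  # placeholders
    TASKS = ["Write a C function that reverses a string in place."]

    def build_cmd(model, prompt):
        # `ollama run <model> <prompt>` writes the completion to stdout
        return ["ollama", "run", model, prompt]

    def run_task(model, prompt, timeout=600):
        result = subprocess.run(build_cmd(model, prompt),
                                capture_output=True, text=True,
                                timeout=timeout, check=True)
        return result.stdout

    def collect_outputs():
        # {model: [output per task]} for later side-by-side rating
        return {m: [run_task(m, t) for t in TASKS] for m in MODELS}
    ```

    The rating step itself (feeding the collected outputs back to a stronger model for scoring) is left out here, since that part is just another prompt.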
  • sneilan1
    This is exactly what I needed. I've been thinking about making this tool. For running and experimenting with local models this is invaluable.
  • dotancohen
    In the screenshots, each model has a use case of General, Chat, or Coding. What might be the difference between General and Chat?
  • fwipsy
    Personally I would have found a website where you enter your hardware specs more useful.
  • andsoitis
    Claude is pretty good at making recommendations if you input your system specs.
  • esafak
    I think you could make a GitHub Pages site out of this.