Comments (37)
- 0xbadcafebee: You can already do this with some GPU drivers: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amdttm.pages_limit=5242880 ttm.pages_limit=5242880". One downside is that your kernel isn't going to reserve that memory away from userland. You will still see all the memory at the system level as "free". As the GPU driver starts using it, other apps and the OS will try to use that "free" memory, not knowing how much of it is in use (it may show up as "cache", or not at all). Then the OOM killer starts going or programs start crashing, and at some point the OS tips over or the GPU driver crashes. You can add loads of swap as a compromise and it works okay, if a bit slow. In any case, running a gigantic model out of system RAM is absurdly slow (due to memory bandwidth), like 1-5 t/s, so it's not practical. It'd take a whole day to process one 86k-token request. Just pay a cloud provider $0.01 to do it in 10 seconds.
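A back-of-the-envelope check on that 1-5 t/s figure, assuming dual-channel DDR4 at roughly 40 GB/s of practical bandwidth and a 32 GB dense model held in system RAM (both numbers are illustrative assumptions, not figures from the comment or the article):

    # Decode is roughly bandwidth-bound: each generated token streams every weight
    # through the memory bus once, so tokens/s is about bandwidth / bytes read per token.
    ddr4_bandwidth_gb_s = 40.0   # assumed dual-channel DDR4-3200, practical throughput
    model_size_gb = 32.0         # assumed size of the weights resident in system RAM

    tokens_per_s = ddr4_bandwidth_gb_s / model_size_gb
    print(f"~{tokens_per_s:.1f} tokens/s")  # ~1.2 t/s, inside the 1-5 t/s range

    hours_for_86k = 86_000 / tokens_per_s / 3600
    print(f"~{hours_for_86k:.0f} hours if an 86k-token job moves at that rate")  # ~19 hours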
- nl: This is really interesting engineering, but I agree with the other commenters that the benchmarking makes it hard to understand how much each factor contributes. The ExLlamaV3 EXL3 2bpw (8 GB, full VRAM) row is an order of magnitude faster than the baseline, but the baseline seems to be the 32 GB model running with the KV cache shared out to system memory (I think?). And if an 8 GB model gives sufficient quality, then it seems like that would have worked without the shared-memory scheme at all. I think the useful apples-to-apples benchmark right now is Ollama + GreenBoost shim (baseline, 2-5 tps) vs. ExLlamaV3 + GreenBoost cache (8–20 tps). It would also be really useful to see this compared with the existing llama.cpp CPU/memory offload. There is a note at the start ("Offload layers to CPU — works, but drops token/s by 5–10× because CPU RAM has no CUDA coherence"), but it is unclear whether that 5-10× drop is relative to running the model entirely on the GPU or relative to the GreenBoost approach. I think it is vs. GPU, in which case the performance is probably similar to what GreenBoost is giving, but much more stable.
- Havoc: > The best strategy is to shrink the model until it fits — either with EXL3 quantization or ModelOpt PTQ — and use GreenBoost's DDR4 pool for KV cache only.
  Does this make sense? I'd have thought the KV cache is guaranteed to be in use 100% of the time, while in, say, a MoE the same can't be said of the weights. Though I suppose if you're shooting for huge context, having that allocation go into RAM makes sense, especially when it's allocated but not used yet.
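For scale, a rough estimate of how large the KV cache gets at long context, using made-up but plausible dimensions for a ~30B-class dense model with grouped-query attention (none of these numbers come from the article):

    n_layers   = 48
    n_kv_heads = 8          # grouped-query attention
    head_dim   = 128
    bytes_each = 2          # fp16 cache entries
    context    = 86_000     # the 86k-token request mentioned upthread

    # Per token, every layer stores a K and a V vector for each KV head.
    kv_bytes = n_layers * n_kv_heads * head_dim * 2 * bytes_each * context
    print(f"{kv_bytes / 2**30:.1f} GiB of KV cache")  # ~15.7 GiB under these assumptions

At that size the cache alone rivals a consumer card's VRAM budget, which is presumably why the quoted strategy shrinks the weights until they fit and points only the cache at the DDR4 pool.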
- ma2kx: The physical bottleneck to system memory remains. Therefore, I assume better results are achieved by manually adjusting which layers are offloaded. I would prefer to use system memory to cache different models, focusing on things like embedding models, rerankers, and TTS. That is sufficient to run a more complex RAG setup locally, for example via Mem0, and then use a larger LLM via the cloud.
- daneel_w: Related, from a couple of years ago: https://old.reddit.com/r/Amd/comments/15t0lsm/i_turned_a_95_... "I turned a $95 AMD APU into a 16GB VRAM GPU and it can run stable diffusion!"
- armada651: Doesn't Windows already do this by default? I can already run models bigger than my GPU VRAM and it will start using up to 50% of my system RAM as "shared memory". This is on a desktop PC without a shared memory architecture.
- yjtpesesu2: How does this differ from anything llama.cpp offers regarding offloading layers? The repo consistently refers to "DDR4". Is there a reason DDR5 won't work with this?
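For reference, llama.cpp's existing partial offload is controlled by how many layers you keep on the GPU; a minimal llama-cpp-python sketch (the model path and layer count below are placeholders, not recommendations):

    from llama_cpp import Llama

    llm = Llama(
        model_path="model-q4_k_m.gguf",  # hypothetical GGUF file
        n_gpu_layers=24,                 # layers kept in VRAM; the rest are computed on the CPU from system RAM
    )
    out = llm("Hello", max_tokens=8)
    print(out["choices"][0]["text"])

The distinction hinted at in the README quote upthread is that llama.cpp runs the offloaded layers on the CPU, whereas this project appears to keep the compute on the GPU and expose system RAM to it directly.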
- Insanity: Extend your VRAM using RAM, then extend your RAM using swap.
- paultendo: Could be a very useful way to do some overnight tasks using spare RAM. Possibly things like LLM-based categorisation, labelling, and data cleansing. That's what comes to mind for me anyway.
- bhewes: This has been fun: we can task our nemotron-3-super model to run overnight when our desktops are idle. 4070s and 96 GB of RAM work fine. Slow, but it does its job.
- yjftsjthsd-h: Previously: https://news.ycombinator.com/item?id=47384557 (Still cool, still would benefit from better benchmarks)
- felipe_aramburu: How does this relate to cuCascade? https://github.com/nvidia/cucascade
- sabareesh: I wish it provided a benchmark comparing direct RAM offload vs. CPU offload vs. full VRAM.
- pabs3: Would be great to get this into mainline Linux.
- tandr: A simpler benchmark table would be great. May I suggest Ollama on the base machine, Ollama with T1, Ollama with T1+T2, etc., on mid-size and big models, to compare tokens/sec?
- holoduke: This is extremely slow and not useful, in my opinion.