Comments (138)
- ivraatiemsThere's some irony in the fact that this website reads as extremely NOT AI-generated, very human in the way it's designed and the tone of its writing. Still, this is a great idea, and one I hope takes off. I think there's a good argument that the future of AI is in locally-trained models for everyone, rather than relying on a big company's own model. One thought: the ability to conveniently get this onto a 240V circuit would be nice. Having to find two different 120V circuits to plug this into will be a pain for many folks.
- bastawhizThere's no way the red v2 is doing anything with a 120B parameter model. I just finished building a dual A100 AI homelab (80GB VRAM combined, with NVLink). Similar stats otherwise. 120B only fits with very heavy quantization, enough to make the model schizophrenic in my experience. And there's no room for KV cache, so you'll OOM around 4k of context. I'm running a 70B model now that's okay, but it's still fairly tight. And I've got 16GB more VRAM than the red v2. I'm also confused why this is 12U. My whole rig is 4U. The green v2 has better GPUs. But for $65k, I'd expect a much better CPU and 256GB of RAM. It's not like a Threadripper 7000 is going to break the bank. I'm glad this exists but it's... honestly pretty perplexing
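For anyone who wants to sanity-check the fit themselves, here's the back-of-envelope math; the layer and head dimensions below are assumed for illustration, not taken from any published 120B spec:

```python
# Rough memory math for fitting a large dense model in fixed VRAM.
def weights_gb(params_billion: float, bits_per_weight: float) -> float:
    """GB needed for the weights alone at a given quantization."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

def kv_cache_gb(n_layers: int, n_kv_heads: int, head_dim: int,
                context_len: int, bytes_per_elem: int = 2) -> float:
    """GB for the KV cache: one K and one V tensor per layer, fp16 by default."""
    return (2 * n_layers * n_kv_heads * head_dim
            * context_len * bytes_per_elem) / 1e9

# 120B weights at 4-bit quantization already eat 60 of 80 GB:
print(weights_gb(120, 4))                          # 60.0
# Hypothetical 96-layer model, 8 KV heads of dim 128, at 4k context:
print(round(kv_cache_gb(96, 8, 128, 4096), 2))     # 1.61
```

The exact KV-cache number depends heavily on the architecture (GQA vs full MHA can change it by an order of magnitude), but the weights term alone shows why heavy quantization is mandatory at this scale.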
- vessenesThe exabox is interesting. I wonder who the customer is; after watching the Vera Rubin launch, I cannot imagine deciding I wanted to compete with NVIDIA for hyperscale business right now. Maybe it’s aiming at a value-conscious buyer? Maybe it’s a sensible buy for a (relatively) cash-strapped ML startup; actually I just checked prices, and it looks like Vera Rubin costs half for a similar amount of GPU RAM. I’m certain that the interconnect will not be as good as NV’s. I have no idea who would buy this. Maybe if you think Vera Rubin is three years out? But NV ships, man, they are shipping.
- SmartestUnknownRegarding 2x faster than PyTorch being a condition for tinygrad to come out of alpha: can they (or someone else) give more details on which workloads PyTorch runs at less than half of what the hardware provides? Most papers use standard components, and I assume PyTorch already implements them at 50+% of the extractable performance on typical GPUs. If they mean more esoteric stuff that requires writing custom kernels to get good performance out of the chips, then that's a different issue.
- siliconc0wTinybox is cool but I think the market is maybe looking more for a turn-key explicit promise of some level of intelligence @ a certain Tok/s like "Kimi 2.5 at 50Tok/s".
- mcianciaNot sure why they stopped using 6 GPUs in their builds - with 4 GPUs, both the 9070 and the RTX 6000 come in 2-slot designs, so it's easy to build it yourself using a somewhat more expensive, but still fairly regular, motherboard. With 6 GPUs you have to deal with risers, PCIe retimers, dual PSUs and a custom case, so the value proposition there was much better IMO
- hmokiguessIs this like the new equivalent of crypto mining? I remember the early days when they would sell hardware for farming crypto, now it’s AI?
- adrianwajPerhaps this company should think about acting as a landlord for their hardware. You buy (or lease), but they also offer colocation hosting. They could partner with crypto miners who are transitioning to AI factories to find the space and power to do this. I wonder if the machines require added cooling, though, in what would otherwise be a crypto mining center. CoreWeave made the transition and also does colocation. The switchover is real. I also think tinygrad should think about recycling. Are they planning ahead in this regard? Is anyone? My thought is that if there were a central database of who owns what and where, then at least when the recycling tech becomes available, people will know where to source their specific trash (and even pay for it). Having a database like that in the first place could even fuel the industry.
- mmoustafaI would love to see real-life tokens/sec values advertised for one or various specific open source models.I'm currently shopping for offline hardware and it is very hard to estimate the performance I will get before dropping $12K, and would love to have a baseline that I can at least always get e.g. 40 tok/s running GPT-OSS-120B using Ollama on Ubuntu out of the box.
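As a concrete baseline: Ollama's `/api/generate` endpoint reports `eval_count` (tokens generated) and `eval_duration` (in nanoseconds) in its final response, so the tok/s figure I'd want advertised is easy to compute and verify yourself:

```python
# Turn Ollama's generation stats into a tokens/sec figure.
def tokens_per_second(eval_count: int, eval_duration_ns: int) -> float:
    """eval_count tokens generated over eval_duration nanoseconds of decode."""
    return eval_count / (eval_duration_ns / 1e9)

# e.g. 512 tokens generated in 12.8 s of decode time:
print(tokens_per_second(512, 12_800_000_000))   # 40.0
```

Note this measures decode throughput only; prompt processing (`prompt_eval_count` / `prompt_eval_duration`) is reported separately and matters a lot for long contexts.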
- ekropotinIDK, I feel it’s quite overpriced, even at current component prices. I'm almost sure it’s possible to custom-build a machine as powerful as their red v2 within a $9k budget. And have a lot of fun along the way.
- paxysThe problem with all these "AI box" startups is that the product is too expensive for hobbyists, and companies that need to run workloads at scale can always build their own servers and racks and save on the markup (which is substantial). Unless someone can figure out how to get cheaper GPUs & RAM there is really no margin left to squeeze out.
- jeremie_strandThe AMD angle is interesting given the history — tinygrad has had to work around a lot of driver quirks to get ROCm into a usable state. At that price point, you're essentially betting on a software stack that NVIDIA has had years to stabilize. Would be curious to see real-world utilization numbers vs. a comparable NVIDIA setup.
- operatingthetanThe incremental price increases between products are funny: $12,000, $65,000, $10,000,000.
- comrade1234Cool that you have a dual power supply model. It says rack mountable or free standing. Does that mean two form factors? $65K is more than we can afford right now but we are definitely eventually in the market for something we can run in our own colo. It's funny though... we're using deepseek now for features in our service, and based on our customer type we thought that they would be completely against sending their data to a third party. We thought we'd have to do everything locally. But they seem ok with deepseek, which is practically free. And the few customers that still worry about privacy may not justify such a high price point.
- wongarsuSounds like a solid prebuilt with well-balanced components and a pretty case. Not revolutionary in any way, but nice. Unless I'm missing something here?
- ilakshI thought the most interesting thing about tinygrad was that theoretically you could render a model all the way into hardware, similar to Taalas (tinygrad might be where Taalas got the idea for all I know). I could swear I filed a GitHub issue asking about the plans for that but I don't see it. Anyway, I think he mentioned it when explaining tinygrad at one point, and I have wondered why that hasn't gotten more attention. As far as boxes, I wish that there were more MI355X available for normal hourly rental. Or any.
- p0w3n3dQuite an expensive little bastard. I wonder how much sense it makes to invest in such a device when you can get $0.40/Mtok from Hyperbolic, for example
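Rough break-even math on that comparison, using the $12k box price and the quoted $0.40/Mtok, and ignoring power, depreciation, and your time:

```python
# How many tokens you'd have to burn through before the box beats the API.
def breakeven_tokens(box_cost_usd: float, api_usd_per_mtok: float) -> float:
    return box_cost_usd / api_usd_per_mtok * 1e6

# $12,000 at $0.40 per million tokens -> 30 billion tokens
print(breakeven_tokens(12_000, 0.40))
```

At, say, 40 tok/s sustained, 30 billion tokens is over 20 years of nonstop generation, so the case for local hardware is privacy and availability, not unit economics.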
- mayukhWhat’s the most effective ~$5k setup today? Interested in what people are actually running.
- zahirbmirza10 mil today... 1k in 10 years. Are OpenAI and Anthropic overvalued?
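For what it's worth, a $10M → $1k decline over 10 years implies the hardware losing about 60% of its value every year, which is aggressive even by GPU standards; the arithmetic:

```python
# Implied constant annual value retention for $10M -> $1k over 10 years.
annual_retention = (1_000 / 10_000_000) ** (1 / 10)
print(round(annual_retention, 3))   # 0.398, i.e. ~60% price drop per year
```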
- vlovich123Surprising to see this with AMD GPUs considering how George famously threw up his hands at AMD not being worth working with.
- andaiCan someone explain the exabox? They say it "functions as a single GPU". Is there anything like that currently existing?
- himata4113exabox reads as if it was making a joke of something or someone. if it's real then it's really interesting!
- sudo_cowsayI always wonder about these expensive products: does the company make them once they're ordered, or do they just make them beforehand?
- operatingthetanAre we at the point where 2x 9070XT's are a viable LLM platform? (I know this has 4, just wondering for myself).
- heinternetsexabox: 720x RDNA5 AT0 XL / 25,920 GB VRAM / 23,040 GB System RAM / ~$10 million. Who is the target market here?
- orliesaurusI wonder if this is frontpage right now because of the other tiiny (the names are similar) video that went viral... which turns out wasn't an actual product by the tinygrad linked in this post [1]. [1] https://x.com/ShriKaranHanda/status/2035284883384553953
- droidjjAdding this to my list of ~beautifully~ designed things to buy when I win the lottery.
- renewiltordI have 8x RTX 6000 Pro. Better to run the 300 W version of the cards. And it costs close to their 4x version. I get why they make it so big. So you can cool it at home. I prefer to just put in datacenter. Much cheaper power.
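To put numbers on the power argument (my rate assumptions, not renewiltord's: roughly $0.15/kWh residential vs $0.07/kWh wholesale/datacenter):

```python
# Annual electricity cost of 8 cards running flat out at a 300 W power limit.
def annual_power_cost(n_cards: int, watts: float, usd_per_kwh: float) -> float:
    kw = n_cards * watts / 1000          # total draw in kW
    return kw * 24 * 365 * usd_per_kwh   # hours per year times rate

print(round(annual_power_cost(8, 300, 0.15), 2))   # 3153.6  (home)
print(round(annual_power_cost(8, 300, 0.07), 2))   # 1471.68 (datacenter)
```

That's GPU draw only and assumes 100% duty cycle; cooling overhead at home adds more on top.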
- aabaker99> Can I pay with something besides wire transfer? In order to keep prices low and quality high, we don't offer any customization to the box or ordering process. Wire transfer is the only accepted form of payment. Sorry, what? Is this just a scam?
- ppap3I thought there was a typo in the price
- rpastuszakWho is this for?
- throwatdem12311Finally, a computer that should be able to run Monster Hunter Wilds with decent performance. But let’s be real, $12k is kinda pushing it - what kind of people are gonna spend $65k or even $10M (lmao WTAF) on a boutique thing like this? I don't think these kinds of things go in datacenters (happy to be corrected), and they are way too expensive (and probably way too HOT) to just go in a home or even an office “closet”.
- jauntywundrkindMy interest in anything associated with geohot took a colossal nosedive today after seeing this post against democracy, quoting frelling M*ncius M*ldbug: Democracy is a Liability. https://news.ycombinator.com/item?id=47469543 https://geohot.github.io//blog/jekyll/update/2026/03/21/demo... There's a lot there that makes sense and I think needs to be considered. But a lot just seems to come out of the blue, included without connection, in my view. Feels like maybe these are in-group messages that I don't understand. Why this is framed as being against democracy is unclear to me, and revolting. I do think we must grapple with the world as it is, and this post is strongly in that territory, but letting fear be the dominant ruling emotion is one of the main definitions of conservatism, and its use here to scare us sounds bad.
- flykespice"tiny" and it's 20k lbs and costs about 10k... Since when did our perception of "tiny" in tech blow up like this? Is it the influence of "hello world" Electron apps consuming 100MB of memory while idle setting the new standard? Anyway, being an AI bro seems like an expensive hobby...
- fhn"but if you haven't contributed to tinygrad your application won't be considered" - so this company expects people to work for free?