
Comments (74)

  • MatthiasPortzel
    One key thing to understand about TigerBeetle is that it's a file-system-backed database. Static allocation means they limit the number of resources in memory at once (number of connections, number of records that can be returned from a single query, etc). One of the points is that these things are limited in practice anyway (MySQL and Postgres have a simultaneous connection limit, applications should implement pagination). Thinking about and specifying these limits up front is better than having operations time out or OOM. On the other hand, TigerBeetle does not impose any limit on the amount of data that can be stored in the database.

    => https://tigerbeetle.com/blog/2022-10-12-a-database-without-d...

    It's always bad to use O(N) memory if you don't have to. With an FS-backed database, you don't have to, whether you're using static allocation or not. (I work on a Ruby web app, and we avoid loading N records into memory at once by using fixed-size batches instead.) Doing allocation up front is just a very nice way of ensuring you've thought about those limits, making sure you don't slip up, and avoiding the runtime cost of allocations.

    This is totally different from OP's situation, where they're implementing an in-memory database. This means that 1) they've had to impose a limit on the number of kv-pairs they store, and 2) they're paying the cost for all kv-pairs at startup. This is only acceptable if you know you have a fixed upper bound on the number of kv-pairs to store.
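    A minimal sketch of the fixed-size batching idea from the comment above, in Zig since that is the language the post uses. The Record type, batch size, and db.fetchBatch call are all hypothetical stand-ins; the point is only that memory stays proportional to the batch, never to N.

      const std = @import("std");

      // Hypothetical record type and data source; names are illustrative only.
      const Record = struct { id: u64, value: u64 };

      const batch_size = 256; // fixed upper bound chosen at design time

      fn processAll(db: anytype) !void {
          var batch: [batch_size]Record = undefined; // O(batch) memory, never O(N)
          var offset: usize = 0;
          while (true) {
              // fetchBatch is assumed to copy at most batch.len records into the
              // buffer and return how many it actually wrote.
              const n = try db.fetchBatch(offset, &batch);
              if (n == 0) break;
              for (batch[0..n]) |rec| {
                  _ = rec; // handle one record at a time
              }
              offset += n;
          }
      }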
  • leumassuehtam
    > All memory must be statically allocated at startup. No memory may be dynamically allocated (or freed and reallocated) after initialization. This avoids unpredictable behavior that can significantly affect performance, and avoids use-after-free. As a second-order effect, it is our experience that this also makes for more efficient, simpler designs that are more performant and easier to maintain and reason about, compared to designs that do not consider all possible memory usage patterns upfront as part of the design.
    > TigerStyle

    It's baffling that a technique known for 30+ years in the industry has been repackaged into "Tiger Style" or whatever guru-esque thing this is.
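    For readers who haven't seen the technique, a minimal sketch of the rule quoted above. The connection count, buffer size, and Server/Conn types are made-up for illustration, not TigerBeetle's: everything is sized and allocated once in init, and the hot path never touches the allocator again.

      const std = @import("std");

      const max_connections = 4096;     // illustrative limits, chosen up front
      const max_message_bytes = 1 << 20;

      const Conn = struct { fd: i32 = -1, in_use: bool = false };

      const Server = struct {
          conns: []Conn,
          message_buf: []u8,

          // All allocation happens here, once, at startup.
          fn init(allocator: std.mem.Allocator) !Server {
              const conns = try allocator.alloc(Conn, max_connections);
              for (conns) |*c| c.* = .{};
              return .{
                  .conns = conns,
                  .message_buf = try allocator.alloc(u8, max_message_bytes),
              };
          }
      };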
  • ivanjermakov
    Related recent post from Tigerbeetle developer: https://matklad.github.io/2025/12/23/static-allocation-compi...
  • Zambyte
    I'm doing pretty much this exact pattern with NATS right now instead of Redis. Cool to see other people following similar strategies.

    The fact that the Zig ecosystem follows the pattern set by the standard library of passing the Allocator interface around makes it super easy to write idiomatic code and then decide on your allocation strategies at the call site. People have of course been doing this for decades in other languages, but it's not trivial to leverage existing ecosystems like libc while following this pattern, and your callees usually need to know something about the allocation strategy being used (even if only to avoid standard functions that don't follow it).
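    A small sketch of the pattern this comment describes: the callee takes only the std.mem.Allocator interface and knows nothing about the strategy, while the call site decides, here by handing it a fixed buffer sized up front. The repeatByte function is an illustrative stand-in, not from the post.

      const std = @import("std");

      // The callee only sees the Allocator interface.
      fn repeatByte(allocator: std.mem.Allocator, byte: u8, count: usize) ![]u8 {
          const buf = try allocator.alloc(u8, count);
          @memset(buf, byte);
          return buf;
      }

      pub fn main() !void {
          // The call site picks the strategy: a fixed buffer here, so the callee
          // can never allocate beyond this budget; an arena or GPA would also work.
          var storage: [4096]u8 = undefined;
          var fba = std.heap.FixedBufferAllocator.init(&storage);
          const line = try repeatByte(fba.allocator(), '-', 40);
          std.debug.print("{s}\n", .{line});
      }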
  • d3ckard
    Personally I believe static allocation has pretty huge consequences for theoretical computer science. A statically allocated program is the only kind that can actually be reasoned about, and it's not exactly Turing complete in the classic sense. Makes my little finitist heart feel warm and fuzzy.
  • loeg
    We do a lazier version of this with a service at work. All of the large buffers and caches are statically (runtime-configured) sized, but various internal data structures assumed to be approximately de minimis can use the standard allocator to add items without worrying about it.
  • xthe
    Nice write-up. Static allocation forcing you to think about limits up front feels like a real design win.
  • dale-cooper
    Maybe I'm missing something, but two thoughts:

    1. Doesn't the overcommit feature lessen the benefits of this? Your initial allocation works, but you can still run out of memory at runtime.

    2. For a KV store, you'd still be at risk of application-level use-after-free bugs, since you need to keep track of which parts of your statically allocated memory are in use or not?
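    On the second point, a minimal sketch of the bookkeeping involved (purely illustrative, not how kv or TigerBeetle actually does it): slots are allocated once, and a free list of indices tracks what's in use. Nothing stops a caller from holding an index after releasing it, which is exactly the application-level "use-after-free" the comment worries about; generation counters are one common mitigation.

      const Slot = struct { key: [32]u8 = undefined, value: [128]u8 = undefined };

      const max_slots = 1024; // hard limit chosen up front

      const Pool = struct {
          slots: [max_slots]Slot = undefined,
          free: [max_slots]u32 = undefined,
          free_count: u32 = max_slots,

          fn init() Pool {
              var p = Pool{};
              for (&p.free, 0..) |*idx, i| idx.* = @intCast(i);
              return p;
          }

          fn acquire(p: *Pool) ?u32 {
              if (p.free_count == 0) return null; // table full: a known, hard limit
              p.free_count -= 1;
              return p.free[p.free_count];
          }

          fn release(p: *Pool, idx: u32) void {
              // Nothing prevents the caller from still using `idx` after this;
              // that is the application-level reuse hazard described above.
              p.free[p.free_count] = idx;
              p.free_count += 1;
          }
      };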
  • kristoff_it
    A coincidental counterpart to this project is my zero-allocation Redis client. If kv supports RESPv3 then it should work without issue :^)

    https://github.com/kristoff-it/zig-okredis
  • lazy-lambda
    No mention of AI - such a good post.
  • netik
    Didn’t we solve this already with slab allocators in memcached? The major problem with fixed allocation like this is fragmentation in memory over time, which you then have to reinvent GC for.
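    A rough sketch of what the memcached-style slab approach mentioned above looks like (class sizes are illustrative, not memcached's actual growth factor): every allocation is rounded up to a fixed size class, and freed chunks go back to their own class's free list, so mixed-size churn can't fragment the arena, only waste the rounding slack.

      const size_classes = [_]usize{ 64, 128, 256, 512, 1024 };

      // Pick the smallest class that fits; anything larger is handled out of band.
      fn classFor(item_bytes: usize) ?usize {
          for (size_classes, 0..) |class_size, i| {
              if (item_bytes <= class_size) return i;
          }
          return null;
      }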
  • TheMagicHorsey
    In a strange coincidence (or maybe it's actually inevitable given the timing), I also saw a podcast with Matklad of Tigerbeetle and had a similar idea. I've been working on a massively multiplayer game as a hobby project, also built in Zig, also fully allocating all memory at startup, and also had an experience almost identical to OP's. In my case both my client and server are in Zig. Zig is pretty great at doing performant game code (rendering and physics on the client); it's less great on the server compared to Go (early days and fewer batteries included, fewer things just work out of the box, but you can find pretty much everything you need for a game server with a little hunting and pecking and a few build issues to debug).

    Zig also works "okay" with vibe coding. Golang works much better (maybe a function of the models I use, primarily through Cursor; or maybe it's that there's less Zig code out in the wild for models to scrape and learn from; or maybe idiomatic Zig just isn't a thing yet the way it is with Go). Not quite sure.
  • nromiun
    > All memory must be statically allocated at startup.

    But why? If you do that you are just taking memory away from other processes. Is there any significant speed improvement over just dynamic allocation?