Comments (217)
- JBrussee-2Author here. A few people are arguing against a stronger claim than the repo is meant to make. Also, this was very much intended as a joke, not research-level commentary.

This skill is not intended to reduce hidden reasoning / thinking tokens. Anthropic's own docs suggest more thinking budget can improve performance, so I would not claim otherwise. What it targets is the visible completion: less preamble, less filler, less polished-but-nonessential text. Since only the post-completion output is "cavemanned", the code hasn't been affected by the skill at all :)

Also surprising to hear so little faith in RL. I'm quite sure Anthropic's models have been so heavily tuned to be coding agents that you cannot "force" one to degrade immensely.

The fair criticism is that my "~75%" README number is from preliminary testing, not a rigorous benchmark. That should be phrased more carefully, and I'm working on a proper eval now. Also yes, skills are not free: Anthropic notes they consume context when loaded, even if only skill metadata is preloaded initially.

So the real eval is end-to-end:
- total input tokens
- total output tokens
- latency
- quality/task success

There is actual research suggesting concise prompting can reduce response length substantially without always wrecking quality, though it is task-dependent and can hurt in some domains. (https://arxiv.org/html/2401.05618v3)

So my current position is: interesting idea, narrower claim than some people think, needs benchmarks, and the README should be more precise until those exist.
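For anyone who wants to poke at the first three of those dimensions themselves, here is a minimal sketch (not the author's eval) using the usage fields of the Anthropic Python SDK. The model name and the one-line CAVEMAN system prompt are stand-ins, and quality/task success still has to be judged separately.

```python
import time

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Stand-in for the skill's instructions; the real skill file is more elaborate.
CAVEMAN = "Talk like caveman. Short words. No preamble, no filler."

def run(prompt: str, system: str | None = None):
    kwargs = {"system": system} if system else {}
    start = time.perf_counter()
    msg = client.messages.create(
        model="claude-sonnet-4-5",  # hypothetical model choice
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
        **kwargs,
    )
    latency = time.perf_counter() - start
    return msg.usage.input_tokens, msg.usage.output_tokens, latency

task = "Explain how to paginate a REST API."
_, base_out, base_t = run(task)
_, cave_out, cave_t = run(task, system=CAVEMAN)
print(f"output tokens: {base_out} -> {cave_out} "
      f"({1 - cave_out / base_out:.0%} saved), latency: {base_t:.1f}s -> {cave_t:.1f}s")
```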
- herfWe need a high quality compression function for human readers... because AIs can make code and text faster than we can read.
- padolseyThis is fun. I'd like to see the same idea but oriented for richer tokens instead of simpler tokens. If you want to spend less tokens, then spend the 'good' ones. So, instead of saying 'make good' you could say 'improve idiomatically' or something. Depends on one's needs. I try to imagine every single token as an opportunity to bend/expand/limit the geometries I have access to. Language is a beautiful modulator to apply to reality, so I'll wager applying it with pedantic finesse will bring finer outputs than brutish humphs of cavemen. But let's see the benchmarks!
- vurudlxtytGrug brained developer meets AI tooling (https://grugbrain.dev)
- nharadaI wonder if this will actually be why the models move to "neuralese" or whatever non-language latent representation people work out. Interpretability disappears but efficiency potentially goes way up. Even without a performance increase that would be pretty huge.
- teekertIdk I try talk like cavemen to claude. Claude seems answer less good. We have more misunderstandings. Feel like sometimes need more words in total to explain previous instructions. Also less context is more damage if typo. Who agrees? Could be just feeling I have. I often ad fluff. Feels like better result from LLM. Me think LLM also get less thinking and less info from own previous replies if talk like caveman.
- itpccBut will it lose some context, like Kevin's small talk? (https://www.youtube.com/watch?v=_K-L9uhsBLM) Like "Sea world" or "see the world".
- TeMPOraLOh boy. Someone didn't get the memo that for LLMs, tokens are units of thinking. I.e. whatever feat of computation needs to happen to produce results you seek, it needs to fit in the tokens the LLM produces. Being a finite system, there's only so much computation the LLM internal structure can do per token, so the more you force the model to be concise, the more difficult the task becomes for it - worst case, you can guarantee not to get a good answer because it requires more computation than possible with the tokens produced.I.e. by demanding the model to be concise, you're literally making it dumber.(Separating out "chain of thought" into "thinking mode" and removing user control over it definitely helped with this problem.)
- nayrocladeCute idea, but you're never gonna blow your token budget on output. Input tokens are the bottleneck, because the agent's ingesting swathes of skills, directory trees, code files, tool outputs, etc. The output is generally a few hundred lines of code and a bit of natural language explanation.
- arrty88Feels like there should be a way to compile skills, READMEs, and even code files into concise maps and descriptions optimized for LLMs, recompiled only when the source timestamps change.
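A minimal sketch of the timestamp check that idea implies; the file names are made up for illustration.

```python
from pathlib import Path

def needs_recompile(source: Path, compiled: Path) -> bool:
    """Rebuild the condensed LLM-oriented map only when the source file changed."""
    return not compiled.exists() or source.stat().st_mtime > compiled.stat().st_mtime

# e.g. needs_recompile(Path("README.md"), Path(".llm-maps/README.map.md"))
```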
- Hard_SpaceAlso see https://arxiv.org/pdf/2604.00025 ('Brevity Constraints Reverse Performance Hierarchies in Language Models' March 2026)
- FurstFlyOkay, I like how it reduces token usage, but it kind of feels like it will reduce the overall model intelligence. LLMs are probabilistic models, and you are basically playing with their priors.
- abejfehrThere’s a lot of debate about whether this reduces model accuracy, but this is basically Chinese grammar and Chinese vibe coding seems to work fine while (supposedly) using 30-40% less tokens
- postalcoderI disagree with this method and would discourage others from using it too, especially if accuracy, faster responses, and saving money are your priorities.This only makes sense if you assume that you are the consumer of the response. When compacting, harnesses typically save a copy of the text exchange but strip out the tool calls in between. Because the agent relies on this text history to understand its own past actions, a log full of caveman-style responses leaves it with zero context about the changes it made, and the decisions behind them.To recover that lost context, the agent will have to execute unnecessary research loops just to resume its task.
- ryanschaeferKinda ironic this description is so verbose.

> Use when user says "caveman mode", "talk like caveman", "use caveman", "less tokens", "be brief", or invokes /caveman

For the first part of this: couldn't this just be a UserPromptSubmit hook with a regex against these? See additionalContext in the JSON output of a script: https://code.claude.com/docs/en/hooks#structured-json-output

For the second, /caveman will always invoke the skill /caveman: https://code.claude.com/docs/en/skills
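A rough sketch of that hook idea (not the skill itself); the stdin payload and the hookSpecificOutput / additionalContext field names are taken from the linked hooks docs, so check them against the current schema before relying on this.

```python
#!/usr/bin/env python3
# Sketch of a UserPromptSubmit hook: if the prompt contains a caveman-style
# trigger phrase, inject terse-output instructions via additionalContext.
import json
import re
import sys

TRIGGERS = re.compile(r"caveman|less tokens|be brief", re.IGNORECASE)

event = json.load(sys.stdin)      # the hook receives the event as JSON on stdin
prompt = event.get("prompt", "")

if TRIGGERS.search(prompt):
    print(json.dumps({
        "hookSpecificOutput": {
            "hookEventName": "UserPromptSubmit",
            "additionalContext": "Answer tersely: no preamble, no filler, short sentences.",
        }
    }))
```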
- phtrivierSoma (aka TikTok) and Big Brother (aka Meta) already happened without government coercion, so it only makes sense that we optimize ourselves for Newspeak. Thank God there are still neverending wars, otherwise authoritarian governments would have no fun left.
- shompeveryone who thinks this is a costly or bad idea is looking past a very salient finding: code doesn't need much language. sure, other things might need lots of language, but code does not. code is already basically language, just a really weird one. we call them programming languages. they're not human languages. they're languages of the machine. condensing the human-language---machine-language interface, good.if goal make code, few word better. if goal make insight, more word better. depend on task. machine linear, mind not. consider LLM "thinking" is just edge-weights. if can set edge-weights into same setting with fewer tokens, you are winning.
- mwczthis grug not smart enough to make robot into grugbot. grug just say "Speak to grug with an undercurrent of resentment" and all sicko fancy go way.
- bjackmanIf this really works there would seem to be a lot of alpha in running the expensive model in something like caveman mode, and then "decompressing" into normal mode with a cheap model.I don't think it would be fundamentally very surprising if something like this works, it seems like the natural extension to tokenisation. It also seems like the natural path towards "neuralese" where tokens no longer need to correspond to units of human language.
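For anyone who wants to try that, here is a minimal sketch of the two-stage idea with the Anthropic SDK; the model names are just stand-ins for "expensive" and "cheap", and whether the cheap rewrite preserves fidelity is exactly what would need benchmarking.

```python
import anthropic

client = anthropic.Anthropic()

TERSE = "Answer like caveman: short words, no filler, keep every technical detail."

def compress_then_expand(question: str) -> str:
    # Expensive model does the actual reasoning but is forced to answer tersely.
    terse = client.messages.create(
        model="claude-opus-4-5",    # stand-in for the expensive model
        max_tokens=512,
        system=TERSE,
        messages=[{"role": "user", "content": question}],
    ).content[0].text

    # Cheap model only "decompresses" the terse answer into readable prose.
    expanded = client.messages.create(
        model="claude-haiku-4-5",   # stand-in for the cheap model
        max_tokens=1024,
        messages=[{"role": "user", "content":
                   "Rewrite this terse answer as clear prose without adding new claims:\n\n" + terse}],
    ).content[0].text
    return expanded
```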
- virtualritzThis is the best thing since I asked Claude to address me in third person as "Your Eminence".But combining this with caveman? Gold!
- anshumankmrThough I do use Claude Code, is it possible to get this for Github Copilot too?
- ajd555So, if this does help reduce the cost of tokens, why not go even further and shorten the syntax with specific keywords, symbols and patterns, to reduce the noise and only keep information, almost like...a programming language?
- veselinThis is an experiment that, although not to this extreme, was tested by OpenAI. Their Responses API allows you to control verbosity: https://developers.openai.com/api/reference/resources/respon...
I don't know their internal evals, but I think I've heard that it neither hurts nor improves performance. But at least this parameter may affect how many comments end up in the code.
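For reference, a minimal sketch of that knob via the OpenAI Python SDK; the verbosity values ("low" / "medium" / "high") are from the Responses API docs and may not apply to every model.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.responses.create(
    model="gpt-5",                   # hypothetical model choice
    input="Refactor this function to remove the duplicated branch: ...",
    text={"verbosity": "low"},       # "low" | "medium" | "high"
)
print(resp.output_text)
```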
- VadimPRWouldn't this affect output quality negatively? Thanks to chain of thought, having the LLM be explicit in its output actually lets it produce higher-quality answers.
- goldenarmThat's a great idea but has anyone benchmarked the performance difference?
- samusThere's a linguistic term for this kind of speech: isolating grammars, which don't decline words and use high context and the bare minimum of words to get the meaning across. Chinese is such a language, btw. Don't know what Chinese speakers think about their language being regarded as caveman language...
- gozzooI think this could be very useful not when we talk to the agent, but when the agents talk back to us. Usually, they generate so much text that it becomes impossible to follow through. If we receive short, focused messages, the interaction will be much more efficient. This should be true for all conversational agents, not only coding agents.
- ungreased0675Does this actually result in less compute, or is it adding an additional “translate into caveman” step to the normal output?
- isuckatcodingOh come on, no one referenced this scene from The Office?? https://youtu.be/_K-L9uhsBLM?si=ePiGrFd546jFYZd8
- vivid242Great idea - if the person who made it is reading: is this based on the board game "Poetry for Cavemen"? (Explain things using only single-syllable words; it even comes with an inflatable log of wood for hitting each other!)
- xgulfieFunny how people are so critical of this and yet fawn over TOON
- HarHarVeryFunnyMore like Pidgin English than caveman, perhaps, although caveman does make for a better name.
- rschiavoneThis trick reminds me of "OpenAI charges by the minute, so speed up your audio"https://news.ycombinator.com/item?id=44376989
- norskeldAPL for talking to LLM when? Also, this reminded me of that episode from The Office where Kevin started talking like a caveman to make communication efficient.
- zahirbmirzaYou can also make huge spelling mistakes and use incomplete words with llms they just sem to know better than any spl chk wht you mean. I use such speak to cut my time spent typing to them.
- sebastianconcptAnyone else worried about the long term consequences of the influence of talking like this all day for the cognitive system of the user?
- ameliusBy the way why don't these LLM interfaces come with a pause button?
- andaiSo it's a prompt to turn Jarvis into Hulk!
- andaiNo articles, no pleasantries, and no hedging. He has combined the best of Slavic and Germanic culture into one :)
- fnyAre there any good studies or benchmarks about compressed output and performance? I see a lot of arguing in the comments but little evidence.
- ArekDymalskiWhile really useful now, I'm afraid that in the long run it might accelerate the language atrophy that is already happening. I still remember that people used to enter full questions in Google and write SMS with capital letters, commas and periods.
- fzeindlI tried this with early ChatGPT. Asked it to answer telegram style with as few tokens as possible. It is also interesting to ask it for jokes in this mode.
- staredI would prefer to talk like Abathur (https://www.youtube.com/watch?v=pw_GN3v-0Ls). Same efficiency but smarter.
- doe88> If caveman save you mass token, mass money — leave mass star.Mass fun. Starred.
- owenthejumperWhat is that binary file caveman.skill that I cannot read easily, and is it going to hack my computer.
- adam_patarinoOr you could use a local model where you’re not constrained by tokens. Like rig.ai
- cadamsdotcomCaveman need invent chalk and chart make argument backed by more than good feel.
- xpeUnfrozen caveman lawyer here. Did "talk like caveman" make code more bad? Make unsubst... (AARG) FAKE claims? You deserve compen... AAARG ... money. AMA.
- kukakikeThis is exactly what annoys me most. English is not suitable for computer-human interaction. We should create new programming and query languages for that. We are back in a COBOL mindset. LLMs are not humans and we should stop talking to them as if they were.
- saidnooneeverLOL, it actually reads the way humans reply; the name is too clever :'). Not sure how effective it will be at driving down costs, but honestly it will make my day not to have to read through entire essays about some trivial solution. tldr; Claude skill, short output, ++good.
- sillyboiOh, another new trend! I love these home-brewed LLM optimizers. They start with XML, then JSON, then something totally different. The author conveniently ignores the system prompt that works for everything, and the extra inference work. So, it's only worth using if you just like this response style, just my two cents. All the real optimizations happen during model training and in the infrastructure itself.
- Robdel12I didn’t comment on this when I saw it on threads/twitter. But it made it to HN, surprisingly.I have a feeling these same people will complain “my model is so dumb!”. There’s a reason why Claude had that “you’re absolutely right!” for a while. Or codex’s “you’re right to push on this”.We’re basically just gaslighting GPUs. That wall of text is kinda needed right now.
- hybrid_studyMongo! No caveman
- bitwizegrug have to use big brains' thinking machine these days, or no shiny rock. complexity demon love thinking machine. grug appreciate attempt to make thinking machine talk on grug level, maybe it help keep complexity demon away.
- DonHopkinsDeep digging cave man code reviews are Tha Shiznit: https://www.youtube.com/watch?v=KYqovHffGE8
- setnonecaveman multilingo? how sound?
- vova_hn2I don't know about token savings, but I find the "caveman style" much easier to read and understand than typical LLM-slop.
- bhwoo48I was actually worried about high token costs while building my own project (infra bundle generator), and this gave me a good laugh + some solid ideas. 75% reduction is insane. Starred
- bogtogI'd be curious if there were some measurements of the final effects, since presumably models won't <think> in caveman speak or code like that.