Comments (86)
- zkmon: This seems to assume coding as the use case. A lot of business activity expects software outputs to be precise, reproducible, and something the producer can be held liable for. Imagine an Excel spreadsheet showing approximate numbers as formula outputs. "Caution: AI might make mistakes" is basically saying "I'm not responsible for the correctness of my outputs." Does speed compensate for the loss of accuracy? Maybe, in some cases. But a lot of AI stories pretend it is acceptable in all cases.
- jfalcon: > someone raised the question of "what would be the role of humans in an AI-first society"

Norbert Wiener, considered the father of cybernetics, wrote a book back in the 1950s entitled "The Human Use of Human Beings" that raised these questions in the early days of digital electronics and control systems. In it, he brings up things like:
  - Robots enslaving humans to do jobs better suited to robots, because a lack of humans in the feedback loop leads to fascist machines.
  - An economy without human interaction could fall into entropic decay, since machines lack the biological drive for anti-entropic organization.
  - Automation will lead to an immediate devaluation of routine human labor. Society needs to decouple a person's "worth" from their "utility as a tool".

The human purpose is not to compete but to safeguard the teleology (purpose) of the system.
- pjsousa79: One thing that seems to be missing from most discussions about "context" is infrastructure. The dream system for AI agents is probably something like a curated data hub: a place where datasets are continuously ingested, cleaned, structured, and documented, so agents can query it to obtain reliable context. Right now most agents spend a lot of effort stitching context together from random APIs, web scraping, PDFs, etc. The result is brittle and inconsistent. If models become interchangeable, the real leverage might come from shared context layers that many agents can query.
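The "curated data hub" idea in the comment above can be pictured in a few lines. This is a toy sketch, not a real product: the `ContextHub` class, its dataset names, and its methods are all hypothetical, and a real hub would do schema validation, deduplication, and access control at the `ingest` step.

```python
from dataclasses import dataclass, field

@dataclass
class ContextHub:
    """Hypothetical curated context layer: datasets are ingested once,
    cleaned and documented, then served to any agent that asks."""
    datasets: dict = field(default_factory=dict)

    def ingest(self, name, records, description):
        # A real hub would clean, dedupe, and schema-check here.
        self.datasets[name] = {"description": description, "records": records}

    def query(self, name, predicate):
        # Agents receive structured, documented records instead of
        # stitching context from scraping and ad-hoc APIs.
        return [r for r in self.datasets[name]["records"] if predicate(r)]

hub = ContextHub()
hub.ingest("customers",
           [{"id": 1, "tier": "pro"}, {"id": 2, "tier": "free"}],
           "CRM export, refreshed nightly")
pro = hub.query("customers", lambda r: r["tier"] == "pro")
print(pro)  # [{'id': 1, 'tier': 'pro'}]
```

The point is that many interchangeable agents could share one `query` interface, so the leverage lives in the ingestion pipeline rather than in any single agent.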
- baxtr: For anyone worried about AGI coming soon: today I asked Claude to stop using em dashes. That was his/her answer: "Noted — I'll avoid em dashes going forward and use other punctuation or restructure sentences instead."
- adonovan: > "what is the role of humans in a scenario where work is no longer necessary?"

People have been fantasizing about this scenario throughout the industrial era (read William Morris's News from Nowhere, 1890, for example), but it has failed to come to pass so many times, and the reasons are pretty obvious. The benefits of technology are spread unequally, and increasingly so over time, so only a wealthy few get the option of a post-labor existence. Also, our demands for the products of labor change as labor productivity increases; we prefer (or have been persuaded to act as if we prefer) material riches over lives with less stuff and more time.

We still haven't seen AI actually replace labor, as opposed to amplifying it, like a power saw or CNC mill used by a carpenter, so all these discussions about the end of labor seem like unwitting sales pitches for AI.

> "what would be the role of humans in an AI-first society"

The real question is why anyone would want, or want to help build, such an obscenity.
- zurfer: Whenever I worry that AI will eventually do all the work, I remind myself that the world is full of almost infinite problems, and we'll continue to have the choice to be problem solvers rather than just consumers.
- _pdp_: My observation is that nobody knows how to deploy these LLMs yet. So yes, context is everything. OpenAI is still selling model access, not new science or new discoveries. They are pushing the context problem to the masses, hoping someone might find a useful application of the technology.
- jbergqvist: In a way, isn't this the same old data moat that has always existed in AI/ML, but supercharged? Generalist models can now reason over proprietary data as context instead of requiring you to train narrow expert models on it. What changed is that you no longer need an ML team to turn that data into value.
- loss_flow: Only scarce context is a moat, and what is scarce is changing quickly. OpenClaw is a great example of the context substrate not being scarce (local files and skills are easily copied to another platform) and thus not providing a moat. Claude's recent import of ChatGPT's memory is another example of context that was scarce becoming abundant (chat export) and potentially becoming scarce again (OpenAI cutting off chat export).
- vicchenai: I've been swapping between models a lot lately for a side project, and yeah, the model swap takes like 5 minutes. Getting the context right is where all my time goes. It's basically a data pipeline problem at this point, not an AI problem.
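The "five-minute model swap" above is easy to picture: with chat-style APIs, the model is one string, while assembling the context is the real pipeline. A minimal sketch, where `call_model` is a hypothetical stub standing in for any provider SDK and the keyword-overlap retrieval is deliberately naive:

```python
def call_model(model: str, messages: list) -> str:
    # Stub standing in for any provider SDK. Swapping providers usually
    # means changing this one function and the model string.
    return f"[{model}] answered {len(messages)} messages"

def build_context(user_question: str, docs: list) -> list:
    # The actual work: selecting, trimming, and ordering context.
    # Real pipelines use embeddings and ranking; this uses keyword overlap.
    relevant = [d for d in docs if any(w in d for w in user_question.split())]
    system = "Use only the provided documents.\n" + "\n".join(relevant)
    return [{"role": "system", "content": system},
            {"role": "user", "content": user_question}]

docs = ["pricing: pro tier is $20", "roadmap: agents ship in Q3"]
messages = build_context("what is the pro tier pricing?", docs)
print(call_model("model-a", messages))  # swapping to "model-b" is one string
```

All the maintenance burden sits in `build_context`, which is why the swap feels like a data pipeline problem rather than an AI problem.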
- tonnydourado: I know this is not the explicit meaning, but lol, intelligence isn't a commodity among humans, let alone LLMs.
- ledauphin: I just don't buy this. It is not what I observe with these things. They are not at all "thoughtful".
- amirhirsch: Not sure about the conclusion regarding NVidia value capture. I imagine the context for many applications will come from a physical simulation environment running on dramatically more GPUs than the AI part.
- farcitizen: Great article. This idea is largely behind all the new Microsoft IQ products (Work IQ, Foundry IQ, Fabric IQ): giving agents context on all relevant enterprise data to do their job.
- gertlabs: Neither intelligence nor context is what really differentiates the most successful model for programming (Claude Opus 4.6) from slightly "smarter" competitors (Codex 5.3, Gemini 3.1 Pro). It's tool use and personality. If models stopped advancing today, we could still reach effective AGI with years of refining harnesses. There is still incredible untapped potential there.

I maintain a benchmark at https://gertlabs.com that pits models against each other in competitive, open-ended games. It's harder to game the benchmark because there's no correct answer (at least none that any of the models have gotten remotely close to) and it requires anticipating other players' behavior.

One thing I've found is that Codex and Gemini models tend to perform best at one-shotting problems, but when given a harness and tools to iterate towards a solution, Anthropic models continue improving, where Codex and Gemini struggle to use tools they weren't trained on or to take the initiative to follow high-level objectives.
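The one-shot versus harness distinction in the comment above reduces to whether the model gets a feedback loop. A toy illustration, where `propose` and `score` are hypothetical stand-ins for a model's guess and the harness's evaluation (no real model is involved):

```python
def propose(attempt: int) -> int:
    # Stand-in for a model proposing a candidate solution per attempt.
    return attempt * 10

def score(solution: int, target: int = 70) -> int:
    # Harness feedback: distance from a target the model can't see.
    return abs(target - solution)

def one_shot() -> int:
    # A single guess with no feedback: quality is fixed at attempt 1.
    return propose(1)

def with_harness(max_iters: int = 10) -> int:
    # Iterate, keep the best-scoring candidate so far.
    best, best_err = None, float("inf")
    for attempt in range(1, max_iters + 1):
        sol = propose(attempt)
        err = score(sol)
        if err < best_err:
            best, best_err = sol, err
    return best

print(one_shot())      # 10
print(with_harness())  # 70
```

A model that one-shots well but iterates poorly wins on `one_shot`-style benchmarks yet loses in the loop, which matches the comment's observation about tool-driven refinement.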
- freediver: As much as I use AI in daily workflows, I do not think an AI-first society will ever be a thing. Historically there is no evidence of that happening with tech revolutions, or perhaps only to some extent: you cannot say that we are an internet-first society, a car-first society, or a mobile-phone-first society, despite these being profound technological revolutions. And more importantly, the science fiction movies that depict "AI-first societies" tend to be dystopian in nature (e.g. Terminator), and humans eventually always do better than that. As advanced as the world of Star Trek is, for example, with all the fancy AI it has, it is still a human-first society. Only 10% of any Star Trek is about AI and fancy technology; 90% is still human drama.
- rembal: The pyramids in the article are missing "energy" and "capital": in a world where intelligence becomes a commodity, only those two matter. Capital to buy the hardware and install it, and energy to run it. Models are already a commodity, and "physical is the new king". As a side note, if you believe that because agents will do most of the work we will face the problem of what to do with all the free time (with, presumably, UBI in place), please contact me, I have a bridge to sell you.
- qsera: Ah, another article that implies the inevitable AI apocalypse, disguised as a thought piece!
- the_af: I think a lot of these conversations simply ignore or miss the lessons of the past. For example:

> [...] OpenClaw is around 400k lines of code for a while loop and the list of all the integrations and connections supported by the system. The next generation of Claws only have around 4K lines of code for the core, and the rest are just skills (i.e. markdown files) that tell the agent how to implement or run the code for the specific connections that want to be enabled (like a plugin system).

Shifting code out of "the core" and into "skills" is simply moving code from one place to another. It may also mean translating it from classic source code into an English-like specification language full of ambiguity, but that's still code. The overall code is not reduced, just transformed and shifted around. You don't get a free lunch "because AI".

> A user using one of these second-generation Claws only needs to know the core logic (which can be easily understood and audited) and can leverage the skills (as the plugins) to activate the functionality they need for their case.

The "core" may be easier to audit, but that's because the messy parts have been moved to the skills/plugins, which are as hard to audit as ever.

I'm not saying this cannot work, but it's very frustrating to see everybody dumping all the lessons of the past, pretending that nothing that came before mattered, that AI vibe coding is fundamentally different, and that the rules of accidental and intrinsic complexity no longer apply. Have we all collectively lost our minds?
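The "tiny core plus markdown skills" architecture the comment dissects can be sketched in a few lines. Everything here is hypothetical (the skill file layout, `load_skills`, and the name-matching dispatch are invented for illustration); the sketch mainly shows where the ambiguity the comment points at ends up living:

```python
from pathlib import Path

def load_skills(skill_dir: str) -> dict:
    # Each .md file under skill_dir is a "skill": English-ish
    # instructions the agent interprets at runtime. The ambiguity the
    # comment points out lives in these files, not in the core.
    return {p.stem: p.read_text() for p in Path(skill_dir).glob("*.md")}

def run_agent(task: str, skills: dict) -> str:
    # Tiny auditable "core": pick a skill whose name appears in the
    # task and hand its text to the model. A real Claw would loop,
    # call tools, and feed results back.
    for name, text in skills.items():
        if name in task:
            return f"run model with skill '{name}' on: {task}"
    return f"run model with no skill on: {task}"

# Inline stand-in for load_skills("skills/") so the sketch runs as-is.
skills = {"calendar": "## Calendar skill\nTo list events, call the calendar CLI ..."}
print(run_agent("sync my calendar", skills))  # dispatches to the 'calendar' skill
```

The core really is small and auditable, but every behavior that matters is specified in prose inside the skill files, which is exactly the comment's point about complexity being shifted rather than removed.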
- JackSlateur: Intelligence is rarer than ever.
- ares623: Money and power are the real moat. Everything else is confetti.
- 2OEH8eoCRo0: I think a lot about liability. If AI wrecks something, is the provider liable? If not, then that's very risky and unusable for many applications. But if it is liable, that's extremely risky for the AI providers! It seems risky either way!
- dude250711: That is a nice blog post, Gemini!
- philipwhiuk: > But the topic of conversation that I enjoyed the most was when someone raised the question of "what would be the role of humans in an AI-first society". Some were skeptical about whether we are ever going to reach an AI-first society. If we understand an AI-first society as one where the fabric of the economy and society is automated through agents interacting with each other without human interaction, I think that unless there is a catastrophic event that slows the current pace of progress, we may reach a flavor of this reality in the next decade or two.

I don't really know how you can make this prediction and be taken seriously, to be honest. Either you think it's the natural result of the current LLM products, in which case a decade looks way too long. Or you think it requires a leap of design, in which case it's unknown when we get to that point, and "10 to 20 years" is probably drawn from the same timeframe as the "fusion as a viable source of electricity" predictions, i.e. vague guesswork.
- LetsGetTechnicl: Why the fuck would we ever want an AI-first society?
- AIorNot: > "what is the role of humans in a scenario where work is no longer necessary? This is significant because, since the industrial revolution, work has played an important role in shaping an individual's identity. How will we occupy our time when we don't have to spend more than half of our waking hours on a job?"

Umm, I have been working in AI across multiple verticals for the past 3 years, and I have been far busier and more stressed, with far less job security, than in the 15 years before that in tech. For now this is far more accurate: https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies...

Wake me up when the computers run the world and I can relax... but I don't think it's happening in my lifetime.
- 7777777phil: API prices dropped 97% in two years, so the model layer is already a commodity. The question is which context layer actually sticks. The OpenClaw example in the article (400K lines to 4K) is a nice proof point for what happens when context replaces code.

I've been arguing for some time now that it's the "organizational world model", the accumulated process knowledge unique to each company, that's genuinely hard to replicate. I did a full "report" on the six-layer decomposition here: https://philippdubach.com/posts/dont-go-monolithic-the-agent...
- simianwords: I have my own challenge: I think LLMs can do everything that a human can do, and typically way better, if the context required for the problem fits in 10,000 tokens. For now this challenge is text only. Can we think of anything that LLMs can't do?