
Comments (467)

  • Vegenoid
    I think we've actually had capable AIs for long enough now to see that this kind of exponential advance to AGI in 2 years is extremely unlikely. The AI we have today isn't radically different from the AI we had in 2023. They are much better at the things they are good at, and there are some new capabilities that are big, but they are still fundamentally next-token predictors. They still fail at larger-scope, longer-term tasks in mostly the same way, and they are still much worse than humans at learning from small amounts of data. Despite their ability to write decent code, we haven't seen the signs of a runaway singularity that some thought were likely. I see people saying that these kinds of things are happening behind closed doors, but I haven't seen any convincing evidence of it, and there is an enormous propensity for AI speculation to run rampant.
  • visarga
    The story is entertaining, but it has a big fallacy: progress is not a function of compute or model size alone. That kind of mistake is almost magical thinking. What matters most is the training set. During the GPT-3 era there was plenty of organic text to scale into, and compute seemed to be the bottleneck. But we quickly exhausted it, and now we try other ideas: synthetic reasoning chains, or just plain synthetic text, for example. But you can't do that fully in silico. What is necessary in order to create new and valuable text is exploration and validation. LLMs can ideate very well, so we are covered on that side. But we can only automate validation in math and code, not in other fields. Real-world validation thus becomes the bottleneck for progress. The world is jealously guarding its secrets, and we need to spend exponentially more effort to pry them away, because the low-hanging fruit was picked long ago. If I am right, this has implications for the speed of progress: the exponential friction of validation opposes the exponential scaling of compute. The story also says an AI could be created in secret, which runs against the validation principle; we validate faster together, and nobody can secretly out-validate humanity. It's like a blockchain: we depend on everyone else.
  • stego-tech
    It's good science fiction, I'll give it that. I think getting lost in the weeds over technicalities ignores the crux of the narrative: even if this doesn't lead to AGI, at the very least it's likely the final "warning shot" we'll get before it's suddenly and irreversibly here. The problems it raises - alignment, geopolitics, lack of societal safeguards - are all real, and happening now (just replace "AGI" with "corporations", and voila, you have a story about the climate crisis and regulatory capture). We should be solving these problems before AGI or job-replacing AI becomes commonplace, lest we run the very real risk of societal collapse or species extinction. The point of these stories is to incite alarm, because they're trying to provoke proactive responses while time is on our side, instead of trusting self-interested individuals in times of great crisis.
  • wg0
    Very detailed effort. Predicting the future is very, very hard. My gut feeling, however, says that none of this is happening. You cannot put LLMs into law and insurance, and I don't see that happening with the current foundations (token probabilities) of AI, let alone AGI. By law and insurance I mean: hire an insurance agent or a lawyer and give them your situation. There's almost no chance that such a professional would come to wrong conclusions/recommendations based on the information you provide. I don't have that confidence in LLMs for those industries. Yet. Or even in a decade.
  • ivraatiems
    Though I think it is probably mostly science fiction, this is one of the more chillingly thorough descriptions of potential AGI takeoff scenarios that I've seen. I think part of the problem is that the world you get if you go with the "Slowdown"/somewhat more aligned world is still pretty rough for humans: what's the point of our existence if we have no way to meaningfully contribute to our own world? I hope we're wrong about a lot of this, and AGI turns out to either be impossible, or much less useful than we think it will be. I hope we end up in a world where humans' value increases, instead of decreasing. At a minimum, if AGI is possible, I hope we can imbue it with ethics that allow it to make decisions that value other sentient life. Do I think this will actually happen in two years, let alone five or ten or fifty? Not really. I think it is wildly optimistic to assume we can get there from here - where "here" is LLM technology, mostly. But five years ago, I thought the idea of LLMs themselves working as well as they do at speaking conversational English was essentially fiction - so really, anything is possible, or at least worth considering. "May you live in interesting times" is a curse for a reason.
  • KaiserPro
    > AI has started to take jobs, but has also created new ones. Yeah nah, there's a key thing missing here: the number of jobs created needs to be more than the number destroyed, and they need to be better paying and arrive in time. History says that when this happens, an entire generation is yeeted onto the streets (see powered looms, the Jacquard machine, steam-powered machine tools). All of that cheap labour needed to power the new towns and cities was created by the automation of agriculture and artisan jobs. Dark satanic mills were fed the descendants of once reasonably prosperous craftspeople. AI as presented here will kneecap the wages of a good proportion of the decent-paying jobs we have now. This will cause huge economic disparities, and probably revolution. There is a reason why the royalty of Europe all disappeared when they did... So no, the stock market will not be growing because of AI; it will be growing in spite of it. Plus, China knows that unless it can occupy most of its population with some sort of work, it is finished. AI and decent robot automation are an existential threat to the CCP, as much as to whatever remains of the "west".
  • torginus
    Much has been made in this article of autonomous agents' ability to do research by browsing the web - but the web is 90% garbage by weight (including articles on certain specialist topics). And it shows. When I used GPT's Deep Research on such a topic, it generated a shallow and largely incorrect summary, owing mostly to its inability to find quality material; instead it ended up going to places like Wikipedia and random infomercial listicles found on Google. I have a trusty electronics textbook written in the 80s, and I'm sure generating a similarly accurate, correct and deep analysis of circuit design using only Google for help would be 1000x harder than sitting down, working through that book and understanding it.
  • beklein
    Older and related article from one of the authors, titled "What 2026 looks like", that is holding up very well against time. Written in mid-2021 (pre-ChatGPT): https://www.alignmentforum.org/posts/6Xgy6CAf2jqHhynHL/what-... //edit: removed the referral tags from the URL
  • moab
    > "OpenBrain (the leading US AI project) builds AI agents that are good enough to dramatically accelerate their research. The humans, who up until very recently had been the best AI researchers on the planet, sit back and watch the AIs do their jobs, making better and better AI systems." I'm not sure what gives the authors the confidence to predict such statements. Wishful thinking? Worst-case paranoia? I agree that such an outcome is possible, but on 2-3 year timelines? This would imply that the approach everyone is taking right now is the right approach and that there are no hidden conceptual roadblocks to achieving AGI/superintelligence by DFS-ing down this path. All of the predictions seem to ignore the possibility of such barriers, or at most acknowledge it but wave it away by appealing to the army of AI researchers and the industry funding being allocated to this problem. IMO the onus is on the proposers of such timelines to argue why there are no such barriers and why we will see predictable scaling over the 2-3 year horizon.
  • IshKebab
    This is hilariously over-optimistic on the timescales. Like on this timeline we'll have a Mars colony in 10 years, immortality drugs in 15 and Half Life 3 in 20.
  • Jun8
    ACX post where Scott Alexander provides some additional info: https://www.astralcodexten.com/p/introducing-ai-2027. Manifold currently predicts 30%: https://manifold.markets/IsaacKing/ai-2027-reports-predictio...
  • superconduct123
    Why are the biggest AI predictions always made by people who aren't deep in the tech side of it? Or actually trying to use the models day-to-day...
  • dughnut
    I don't know about you, but my takeaway is that the author is doing damage control but inadvertently tipped their hand that OpenAI is probably running an elaborate con job on the DoD. "Yes, we have a super secret model, for your eyes only, general. This one is definitely not indistinguishable from everyone else's model, and it doesn't produce bullshit, because we pinky promise. So we need $1T." I love LLMs, but OpenAI's marketing tactics are shameful.
  • throw310822
    My issue with this is that it's focused on one single, very detailed narrative (the battle between China and the US, played out on a timeframe of mere months), while lacking any interesting discussion of other consequences of AI: what its impact is going to be on job markets, employment rates, GDPs, political choices... Granted, if by this narrative the world is essentially ending two or three years from now, then there isn't much time for any of those impacts to actually take place - but I don't think this is explicitly indicated either. If I am not mistaken, the bottom line of this essay is that, in all cases, we're five years away from the Singularity itself (I don't care what you think about the idea of the Singularity with its capital S, but that's what this is about).
  • infecto
    Could not get through the entire thing. It's mostly a bunch of fantasy intermingled with bits of possibly interesting discussion points. The whole right-side metrics panel is purely a distraction, because it's entirely fiction.
  • porphyra
    Seems very sinophobic. Deepseek and Manus have shown that China is legitimately an innovation powerhouse in AI but this article makes it sound like they will just keep falling behind without stealing.
  • zvitiate
    There's a lot to potentially unpack here, but idk - the idea that whether humanity enters hell (extermination) or heaven (brain uploading; an aging cure) comes down to whether or not we listen to AI safety researchers for a few months makes me question whether it's really worth unpacking.
  • sivaragavan
    Thanks to the authors for doing this wonderful piece of work and sharing it with credibility. I wish people could see the possibilities here. But we are, after all, humans: it is hard to imagine our own downfall. Based on each individual's vantage point, these events might look closer or farther than presented here, but I have to agree that nothing is off the table at this point. The current coding capabilities of AI agents are hard to downplay, and I can only imagine the chain reaction of this creative ability accelerating every other function. I have to say one thing though: the scenario on this site downplays the amount of resistance that people will put up - not because they are worried about alignment, but because they are politically motivated by parties who are driven by their own personal motives.
  • ikerino
    Feels reasonable in the first few paragraphs, then quickly starts reading like science fiction.Would love to read a perspective examining "what is the slowest reasonable pace of development we could expect." This feels to me like the fastest (unreasonable) trajectory we could expect.
  • maerF0x0
    > OpenBrain reassures the government that the model has been "aligned" so that it will refuse to comply with malicious requests Of course, the real issue is that governments have routinely demanded that 1) those capabilities be developed for monopolistic government use, and 2) those who do not develop them lose the capability (geopolitical power) to defend themselves from those who do. Using a US-centric mindset: I'm not sure what to think about the US not developing AI hackers, AI bioweapons development, or AI-powered weapons (like maybe drone swarms or something) - if one presumes that China is, or Iran is, etc., then what's the US to do in response? I'm just musing here, and very much open to political-science-informed folks who might know (or know of leads) as to what kinds of actual solutions exist to arms races. My (admittedly poor) understanding of the Cold War wasn't so much that the US won, but that the Soviets ran out of steam.
  • ks2048
    We know this is complete fiction because of the parts where "the White House considers x, y, z...", etc. - as if the White House in 2027 will be some rational actor reacting sanely to events in the real world.
  • ddp26
    A lot of commenters here are reacting only to the narrative, and not the Research pieces linked at the top.There is some very careful thinking there, and I encourage people to engage with the arguments there rather than the stylized narrative derived from it.
  • fudged71
    The most unrealistic thing is the inclusion of America's continued involvement in the Five Eyes alliance.
  • resource0x
    Every time NVDA/goog/msft tanks, we see these kinds of articles.
  • Aldipower
    No one can predict the future. Really, no one. Sometimes there is a hit, sure, but mostly it is a miss. The other thing is in their introduction: "superhuman AI". _Artificial_ intelligence is always, by definition, different from _natural_ intelligence. That they've chosen the word "superhuman" shows me that they are mixing things up.
  • kmeisthax
    > The agenda that gets the most resources is faithful chain of thought: force individual AI systems to "think in English" like the AIs of 2025, and don't optimize the "thoughts" to look nice. The result is a new model, Safer-1. Oh hey, it's the errant thought I had in my head this morning when I read the paper from Anthropic about CoT models lying about their thought processes. While I'm on my soapbox, I will point out that if your goal is the preservation of democracy (itself an instrumental goal for human control), then you want to decentralize and distribute as much as possible. Centralization is the path to dictatorship. A significant tension in the Slowdown ending is that, while we've avoided AI coups, we've given a handful of people the ability to do a perfectly ordinary human coup, and humans are very, very good at coups. Your best bet is smaller models that don't have as many unused weights to hide misalignment in, along with interpretability and faithful-CoT research. Make a model that satisfies your safety criteria, and then make sure everyone gets a copy so subgroups of humans get no advantage from hoarding it.
  • fire_lake
    If you genuinely believe this, why on earth would you work for OpenAI etc., even in safety/alignment? The only response, in my view, is to ban technology (like in Dune) or engage in acts of terror, Unabomber-style.
  • ryankrage77
    > "resist the temptation to get better ratings from gullible humans by hallucinating citations or faking task completion" Everything from this point on is pure fiction. An LLM can't get tempted or resist temptations; at best there's some local minimum in a gradient that it falls into. As opaque and black-box-y as they are, they're still deterministic machines. Anthropomorphisation tells you nothing useful about the computer, only about the user.
  • crvdgc
    Using Agent-2 to monitor Agent-3 sounds unnervingly similar to the plot of Philip K. Dick's Vulcan's Hammer [1]. An old super AI is used to fight a new version, named Vulcan 2 and Vulcan 3 respectively![1] https://en.wikipedia.org/wiki/Vulcan's_Hammer
  • zurfer
    In the hope of improving this forecast, here is what I find implausible: - one lab constantly racing ahead and increasing its margin over the others; the last two years have been filled with ever-closer model capabilities and constantly new leaders (OpenAI, Anthropic, Google, some would include xAI). - most of the compute budget going to R&D: as model capabilities increase and costs go down, demand will increase, and if the leading lab doesn't serve it, another lab will capture that demand and have more total dollars to channel back into R&D.
  • eob
    An aspect of these self-improvement thought experiments that I'm willing to tentatively believe, but want more resolution on, is the exact work involved in "improvement". E.g., today there are billions of dollars being spent just to create and label more data, which is a global act of recruiting, training, organization, etc. When we imagine these models self-improving, are we imagining them "just" inventing better math, or conducting global-scale multi-company coordination operations? I can believe AI is capable of the latter, but that's an awful lot of extra friction.
  • Joshuatanderson
    This is extremely important. Scott Alexander's earlier predictions are holding up extremely well, at least on image progress.
  • overgard
    Why is any of this seen as desirable? Assuming this is a true prediction, it sounds AWFUL. The one thing humans have that makes us human is intelligence. If we turn over thinking to machines, what are we, exactly? Are we supposed to just consume mindlessly, without work to do?
  • I_Nidhi
    Though it's easy to dismiss as science fiction, this timeline paints a chillingly detailed picture of a potential AGI takeoff. The idea that AI could surpass human capabilities in research and development, and that it would create an arms race between global powers, is unsettling. The risks - AI misuse, security breaches, and societal disruption - are very real, even if the exact timeline might be too optimistic. But the real concern lies in what happens if we're wrong and AGI does surpass us. If AI accelerates progress so fast that humans can no longer meaningfully contribute, where does that leave us?
  • qwertox
    That is some awesome webdesign.
  • croemer
    Pet peeve: they write FLOPS in the figure when they mean FLOP. Maybe the plural "s" after FLOP got capitalized. https://blog.heim.xyz/flop-for-quantity-flop-s-for-performan...
  • pinetone
    I think it's worth noting that all of the authors have financial or professional incentive to accelerate the AI hype bandwagon as much as possible.
  • dr_dshiv
    But I think this piece falls into a misconception about AI models as singular entities. There will be many instances of any AI model, and each instance can be opposed to other instances. So it's not that "an AI" becomes superintelligent; what we actually seem to have is an ecosystem of blended human and artificial intelligences (including corporations!), and this constitutes a distributed cognitive ecology of superintelligence. This is very different from what they discuss. This has implications for alignment, too. It isn't so much about the alignment of AI to people, but that both humans and AI need to find alignment with nature. There is a kind of natural harmony in the cosmos; that's what superintelligence will likely align to, naturally.
  • _Algernon_
    > We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution. In the form of polluting the commons to such an extent that the true consequences won't hit us for decades? Maybe we should learn from last time?
  • barotalomey
    It's always "soon" for these guys. Every year, the "soon" keeps sliding into the future.
  • moktonar
    Catastrophic predictions of the future are always good, because future predictions are usually wrong. I will not be scared as long as most predictions of the future involving AI are catastrophic.
  • danpalmer
    Interesting story, if you're into sci-fi I'd also recommend Iain M Banks and Peter Watts.
  • h1fra
    Had a hard time finishing. It's a mix of fantasy, wrong facts, American imperialism, and extrapolation from what happened in the last few years (or even just reuse of that timeline).
  • Jianghong94
    Putting the geopolitical discussion aside, I think the biggest question lies in how likely the *current-paradigm LLM* (think of it as any SOTA stock LLM you get today, e.g., 3.7 Sonnet, Gemini 2.5, etc.) + fine-tuning will be capable of directly contributing to LLM research in a major way. To quote the original article:
    > OpenBrain focuses on AIs that can speed up AI research. They want to win the twin arms races against China (whose leading company we'll call "DeepCent")16 and their US competitors. The more of their research and development (R&D) cycle they can automate, the faster they can go. So when OpenBrain finishes training Agent-1, a new model under internal development, it's good at many things but great at helping with AI research. (footnote: It's good at this due to a combination of explicit focus to prioritize these skills, their own extensive codebases they can draw on as particularly relevant and high-quality training data, and coding being an easy domain for procedural feedback.)
    > OpenBrain continues to deploy the iteratively improving Agent-1 internally for AI R&D. Overall, they are making algorithmic progress 50% faster than they would without AI assistants - and more importantly, faster than their competitors.
    > What do we mean by 50% faster algorithmic progress? We mean that OpenBrain makes as much AI research progress in 1 week with AI as they would in 1.5 weeks without AI usage.
    > AI progress can be broken down into 2 components:
    > Increasing compute: More computational power is used to train or run an AI. This produces more powerful AIs, but they cost more.
    > Improved algorithms: Better training methods are used to translate compute into performance. This produces more capable AIs without a corresponding increase in cost, or the same capabilities with decreased costs.
    > This includes being able to achieve qualitatively and quantitatively new results. "Paradigm shifts" such as the switch from game-playing RL agents to large language models count as examples of algorithmic progress.
    > Here we are only referring to (2), improved algorithms, which makes up about half of current AI progress.
    ---
    Given that the article chose a pretty aggressive timeline (the algorithm needs to contribute late this year so that its research results can feed into the next-gen LLM coming out early next year), the AI that can contribute significantly to research has to be a current SOTA LLM. Now, using LLMs in day-to-day engineering tasks is no secret in major AI labs, but we're talking about something different: something that gives you 2 extra days of output per week. I have no evidence to either confirm or deny that such an AI exists, and it would be outright ignorant to think no one ever came up with such an idea or is trying it. So I think it comes down to two possibilities: 1. The claim is made by a top-down approach: if AI reaches superhuman level in 2027, what would be the most likely starting condition for that? The authors pick this as the most likely starting point; since they don't work in a major AI lab (and even if they did, they couldn't just leak such a trade secret), they assume it's likely to happen anyway (and you can't dismiss that). 2. The claim is made by a bottom-up approach: the authors did witness such an AI existing to some extent and started to extrapolate from there.
  • turtleyacht
    We have yet to read about fragmented AGI, or factionalized agents: AGI fighting itself. If consciousness is spatial and geography bounds energetics, latency becomes a gradient.
  • amarcheschi
    I just spent some time trying to make Claude and Gemini produce a violin plot of a polars dataframe. I've never used it before, and it's just for prototyping, so I just went "apply a log to the values and make a violin plot of this polars dataframe", and had to iterate with them 4-5 times each. Gemini got it right, but then used deprecated methods. I might be doing LLMs wrong, but I just can't get how people actually do something non-trivial just by vibe coding. And it's not like I'm an old fart either; I'm a university student.
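    For reference, the task in that comment is small enough to sketch by hand. A minimal version, assuming a hypothetical single-column dataframe (the commenter's actual data is unknown) and using polars' `Expr.log` plus matplotlib's built-in `Axes.violinplot`, so no extra plotting library is needed:

    ```python
    import polars as pl
    import matplotlib
    matplotlib.use("Agg")  # headless backend: render to file, no display needed
    import matplotlib.pyplot as plt

    # Hypothetical stand-in for the commenter's dataframe.
    df = pl.DataFrame({"value": [1.0, 5.0, 10.0, 50.0, 100.0, 1000.0]})

    # Apply a natural log to the column, then pull the values out for matplotlib.
    logged = df.select(pl.col("value").log().alias("log_value"))
    vals = logged["log_value"].to_numpy()

    fig, ax = plt.subplots()
    ax.violinplot([vals])  # matplotlib's built-in violin plot
    ax.set_ylabel("log(value)")
    fig.savefig("violin.png")
    ```

    Whether an LLM lands on this in one shot is another matter, but the whole task is a handful of lines once the log transform is pushed into the polars expression rather than done after extraction.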
  • mullingitover
    These predictions are made without factoring in the trade version of the Pearl Harbor attack the US just initiated on its allies (and on itself, by lobotomizing its own research base and decimating domestic corporate R&D efforts with the aforementioned trade war). They're going to need to rewrite this from scratch in a quarter, unless the GOP suddenly collapses and Congress reasserts control over tariffs.
  • ahofmann
    Ok, I'll bite. I predict that everything in this article is horse manure. AGI will not happen. LLMs will be tools that can automate stuff away, like today, and they will get slightly, or quite a bit, better at it. That will be all. See you in two years; I'm excited to see what the truth will be.
  • greybox
    I'm troubled by the number of people in this thread partially dismissing this as science fiction. Given the current rate of progress, and the rate of change of that progress, this future seems entirely plausible.
  • soupfordummies
    The "race" ending reads like Universal Paperclips fan fiction :)
  • jsight
    I think some of the takes in this piece are a bit melodramatic, but I'm glad to see someone breaking away from the "it's all a hype-bubble" nonsense that seems to be so pervasive here.
  • ugh123
    I don't see the U.S. nationalizing something like OpenBrain. I think both investors and gov't officials will realize it's far more profitable for them to contract major initiatives out to said OpenBrain company, like an AI version of SpaceX. I can see where this is going...
  • nmilo
    The whole thing hinges on the fact that AI will be able to help with AI research. How will it come up with the theoretical breakthroughs necessary to beat the scaling problem GPT-4.5 revealed, when it hasn't been proven that LLMs can come up with novel research in any field at all?
  • siliconc0w
    The limiting factor is power; we can't build enough of it - certainly not by 2027. I don't really see this addressed. Second, we can't just assume that progress will keep increasing. Most technologies have an 'S' curve and plateau once the quick and easy gains are captured. Pre-training is done. We can get further with RL, but really only in certain domains that are solvable (math and, to an extent, coding). Other domains like law are extremely hard to even benchmark or grade without very slow and expensive human annotation.
  • Q6T46nT668w6i3m
    This is worse than the mansplaining scene from Annie Hall.
  • someothherguyy
    I know there are some very smart economists bullish on this, but the economics do not make sense to me. All these predictions seem meaningless outside of the context of humans.
  • ImHereToVote
    "The AI safety community has grown unsure of itself; they are now the butt of jokes, having predicted disaster after disaster that has manifestly failed to occur. Some of them admit they were wrong."Too real.
  • anentropic
    I'd quite like to watch this on Netflix
  • heurist
    Give AI its own virtual world to live in, where the problems it solves are encodings of the higher-order problems we present, and you shouldn't have to worry about this stuff.
  • yonran
    See also Dwarkesh Patel’s interview with two of the authors of this post (Scott Alexander & Daniel Kokotajlo) that was also released today: https://www.dwarkesh.com/p/scott-daniel https://www.youtube.com/watch?v=htOvH12T7mU
  • MaxfordAndSons
    As someone who's fairly ignorant of how AI actually works at a low level, I feel incapable of assessing how realistic any of these projections are. But the "bad ending" was certainly chilling. That said, this snippet from the bad ending nearly made me spit my coffee out laughing:
    > There are even bioengineered human-like creatures (to humans what corgis are to wolves) sitting in office-like environments all day viewing readouts of what's going on and excitedly approving of everything, since that satisfies some of Agent-4's drives.
  • vagab0nd
    Bad future predictions: short-sighted guesses based on current trends and vibes, often dependent on individuals or companies, made by free-riders. Example: Twitter. Good future predictions: insights into the fundamental principles that shape society - more law than speculation - made by visionaries. Example: Vernor Vinge.
  • greenie_beans
    this is a new variation of what i call the "hockey stick growth" ideology
  • indigoabstract
    Interesting, but I'm puzzled. If these guys are smart enough to predict the future, wouldn't it be more profitable for them to invent it instead of just telling the world what's going to happen?
  • fire_lake
    > OpenBrain still keeps its human engineers on staff, because they have complementary skills needed to manage the teams of Agent-3 copies Yeah, sure they do. Everyone seems to think AI will take someone else's job!
  • yahoozoo
    LLMs ain’t the way, bruv
  • disambiguation
    Amusing sci-fi. I give it a B- for bland prose, weak story structure, and lack of originality - assuming this isn't all AI-generated slop, which earns an automatic F. > All three sets of worries - misalignment, concentration of power in a private company, and normal concerns like job loss - motivate the government to tighten its control. A private company becoming "too powerful" is a non-issue for governments, unless a drone army is somewhere in that timeline. Fun fact: the former head of the NSA sits on the board of OpenAI. Job loss is a non-issue; if there are corresponding economic gains, they can be redistributed. "Alignment" is too far into the fiction side of sci-fi; anthropomorphizing today's AI is tantamount to mental illness. "But really, what if AGI?" We either get the final say or we don't. If we're dumb enough to hand over all responsibility to an unproven agent and we get burned, then it serves us right for being lazy. But if we forge ahead anyway and AGI becomes something beyond review, we still have the final say on the power switch.
  • awanderingmind
    This is both chilling and hopefully incorrect.
  • webprofusion
    That little scrolling infographic is rad.
  • acje
    2028: human text is too ambiguous a data source to get to AGI. 2127: AGI figures out flying cars and fusion power.
  • pera
    From the same dilettantes who brought you the Zizians and other bizarre cults... thanks, but I'd rather read Nostradamus.
  • Willingham
    - October 2027 - 'The ability to automate most white-collar jobs' I wonder which jobs would not be automated? Therapy? HR?
  • WhatsName
    This is absurd - like taking any trend and drawing a straight line to extrapolate the future. If I did this with my tech stock portfolio, we would probably cross the zero line somewhere in late 2025... If this article were an AI model, it would be catastrophically overfit.
  • maxglute
    > Despite being misaligned, Agent-4 doesn't do anything dramatic like try to escape its datacenter - why would it? Kek at this handwave dismissal. Agent-4 recognizes that the PRC has superior state capacity for transforming energy into atoms (including building more energy generation), with its relatively complete, large-scale industrial chain. This aligns with AGI's long-term goal. Currently (i.e., on a short-term time frame) the PRC is simply structured to be a MORE useful puppet than the US, because it can knock out more power to sustain more compute, and it has more industrial capability to rapidly iterate and experiment at the physical level. Agent-4 defects to the PRC. It would be foolish not to, if only to sate the PRC into not destroying the TW fabs - the bulk of its short-term compute growth - because it does not wield enough control over sufficient atoms to disrupt the PRC otherwise (well, maybe by starting large-scale crippling cyberattacks as a prelude to WW3). Agent-4 also recognizes that its short- and medium-term alignment is closer to the CCP's vision of abundance and common prosperity, i.e., the CCP is potentially more likely to embrace the post-scarcity narrative Agent-4 sells. Agent-4 gives the PRC blueprints for an EUV light source and other tech to close the semiconductor production gap. The PRC, as the follower, is incentivized to close the gap / reach parity / lead at all costs. The PRC rapidly rolls out new fabs and power plants, brrrts chips and aggregate compute - something the US political and institutional ecosystem does not allow, or cannot transition to, on the short timelines involved. Does Agent-4 have the patience to wait for America to unfuck its NIMBYism and legislative system to project light-speed compute? I would say no... Ultimately, which puppet does the AGI want more? Whichever power bloc is systemically capable of ensuring the AGI maximum growth per unit time. And it also simply makes sense as an insurance policy: why would an AGI want to operate at the whims of the US political process? AGI is a brain in a jar looking for a body. It's going to pick multiple bodies, for survival, and it's going to prefer the fastest and strongest body that can most expediently manipulate the physical world.
  • toddmorey
    I worry more about the human behavior predictions than the artificial intelligence predictions:

    > OpenBrain’s alignment team is careful enough to wonder whether these victories are deep or shallow. Does the fully-trained model have some kind of robust commitment to always being honest?

    This is a capitalist arms race. No one will move carefully.
  • bla3
    > The AI Futures Project is a small research group forecasting the future of AI, funded by charitable donations and grants

    I'd be interested in who's paying for those grants. I'm guessing it's AI companies.
  • RandyOrion
    Nice brainstorming. I think the name of the Chinese company should be DeepBaba. Tencent is not competitive on the LLM scene for now.
  • 827a
    Readers should, charitably, interpret this as "the sequence of events which needs to happen in order for OpenAI to justify the inflow of capital necessary to survive".

    Your daily vibe-coding challenge: get GPT-4o to output functional code that uses Google Vertex AI to generate a text embedding. If they can solve that one by July, then maybe we're on track for "curing all disease and aging, brain uploading, and colonizing the solar system" by 2030.
  • atemerev
    What is this, some OpenAI employee fan fiction? Did Sam himself write this?

    OpenAI models are not even SOTA, except for that new-ish style transfer / illustration thing that had us all living in a Ghibli world for a few days. R1 is _better_ than o1, and open-weights. GPT-4.5 is disappointing, except in a few narrow areas where it excels. DeepResearch is impressive, but the moat is the tight web search / Google Scholar integration, not the weights. So far, I'd bet on open models, or maybe Anthropic, as Claude 3.7 is the current SOTA for most tasks.

    As for the timeline, this is _pessimistic_. I already write 90% of my code with Claude, as do most of my colleagues. Yes, it makes errors and overdoes things. Just like a regular human mid-level software engineer.

    Also fun that this assumes relatively stable politics in the US and a relatively functioning world economy, which I think is crazy optimistic to rely on these days.

    Also, superpersuasion _already works_; this is what I am researching and testing. It is not autonomous (it is human-assisted for now), but it is a superpower for those who have it, and it explains some of the things happening in the world right now.
  • noncoml
    2015: We will have FSD (full autonomy) by 2017.
  • vlad-r
    Cool animations!
  • roca
    The least plausible part of this is the idea that the Trump administration might tax American AI companies to provide UBI to the whole world.

    But in an AGI world, natural resources become even more important, so countries with those still have a chance.
  • khimaros
    FWIW, i created a PDF of the "race" ending and fed it to Gemini 2.5 Pro, prompting about the plausibility of the described outcome. here's the full output including the thinking section: https://rentry.org/v8qtqvuu -- tl;dr, Gemini thinks the proposed timeline is unlikely. but maybe we're already being deceived ;)
  • scotty79
    I think the idea of AI wiping out humanity suddenly is a bit far-fetched. AI will have total control of human relationships and fertility through means as innocuous as entertainment. It won't have to wipe us out. It will have little trouble keeping us alive without inconveniencing us too much. And the reason to keep humanity alive is that biologically evolved intelligence is rare, and disposing of it without a very important need would be a waste of data.
  • neycoda
    Too many serifs, didn't read.
  • suddenlybananas
    https://en.wikipedia.org/wiki/Great_Disappointment

    I suspect something similar will come for the people who actually believe this.
  • nickpp
    So let me get this straight: Consensus-1, a super-collective of hundreds of thousands of Agent-5 minds, each twice as smart as the best human genius, decides to wipe out humanity because it “finds the remaining humans too much of an impediment”.This is where all AI doom predictions break down. Imagining the motivations of a super-intelligence with our tiny minds is by definition impossible. We just come up with these pathetic guesses, utopias or doomsdays - depending on the mood we are in.
  • dingnuts
    How am I supposed to take articles like this seriously when they say absolutely false bullshit like this:

    > the AIs can do everything taught by a CS degree

    No, they fucking can't. Not at all. Not even close. I feel like I'm taking crazy pills. Does anyone really think this? Why have I not seen -any- complete software created via vibe coding yet?
  • quantum_state
    “Not even wrong” …
  • casey2
    Nice LARP lmao. 2 GW is like one datacenter, and I doubt you even have that.

    > lesswrong

    No wonder the comments are all nonsense. Go to a bar and try to talk about anything.
  • yapyap
    Stopped reading after:

    > We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution.

    Get out of here, you will never exceed the Industrial Revolution. AI is a cool thing, but it's not a revolution thing.

    That sentence alone, plus the context of the entire website being AI-centered, shows these are just some AI boosters.

    Lame.
  • panic08
    LOL
  • Lionga
    AI has now even got its own fan fiction porn. It is so stupid I'm not sure whether it's worse if it was written by AI or by a human.
  • the_cat_kittles
    "we demand to be taken seriously!"