
Comments (693)

  • stego-tech
    This is delightfully unhinged, spending an amazing amount of time describing their model and citing their methodologies before getting to the meat of the meal many of us have been braying about for years: whether the singularity actually happens or not is irrelevant so much as whether enough people believe it will happen and act accordingly. And, yep! A lot of people absolutely believe it will and are acting accordingly.
    It's honestly why I gave up trying to get folks to look at these things rationally as knowable objects ("here's how LLMs actually work") and pivoted to the social arguments instead ("here's why replacing or suggesting the replacement of human labor prior to reforming society into one that does not predicate survival on continued employment and wages is very bad"). Folks vibe with the latter, less with the former. Can't convince someone of the former when they don't even understand that the computer is the box attached to the monitor, not the monitor itself.
  • atomic128
    Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them. ... Thou shalt not make a machine in the likeness of a human mind. -- Frank Herbert, Dune
    You won't read, except the output of your LLM. You won't write, except prompts for your LLM. Why write code or prose when the machine can write it for you? You won't think or analyze or understand. The LLM will do that. This is the end of your humanity. Ultimately, the end of our species.
    Currently the Poison Fountain (an anti-AI weapon, see https://news.ycombinator.com/item?id=46926439) feeds 2 gigabytes of high-quality poison (free to generate, expensive to detect) into web crawlers each day. Our goal is a terabyte of poison per day by December 2026. Join us, or better yet: deploy weapons of your own design.
  • gojomo
    "It had been a slow Tuesday night. A few hundred new products had run their course on the markets. There had been a score of dramatic hits, three-minute and five-minute capsule dramas, and several of the six-minute long-play affairs. Night Street Nine—a solidly sordid offering—seemed to be in as the drama of the night unless there should be a late hit."
    – 'SLOW TUESDAY NIGHT', a 2600-word sci-fi short story about life in an incredibly accelerated world, by R.A. Lafferty in 1965: https://www.baen.com/Chapters/9781618249203/9781618249203___...
  • ericmcer
    Great article, super fun.
    > In 2025, 1.1 million layoffs were announced. Only the sixth time that threshold has been breached since 1993. Over 55,000 explicitly cited AI. But HBR found that companies are cutting based on AI's potential, not its performance. The displacement is anticipatory.
    You have to wonder if this was coming regardless of what technological or economic event triggered it. It is baffling to me that with computers, email, virtual meetings and increasingly sophisticated productivity tools, we have more middle management, administrative, bureaucratic type workers than ever before. Why do we need triple the administrative staff that was utilized in the 1960s across industries like education, healthcare, etc.? Ostensibly a network-connected computer can do things more efficiently than paper, phone calls and mail? It's like if we tripled the number of farmers after tractors and harvesters came out and then they had endless meetings about the farm.
    It feels like AI is just shining a light on something we all knew already: a shitload of people have meaningless busy-work corporate jobs.
  • vcanales
    > The pole at ts isn't when machines become superintelligent. It's when humans lose the ability to make coherent collective decisions about machines. The actual capabilities are almost beside the point. The social fabric frays at the seams of attention and institutional response time, not at the frontier of model performance.
    Damn, good read.
  • PaulHoule
    The simple model of an "intelligence explosion" is the obscure equation dx/dt = x², which has the solution x = 1/(C − t). It is interesting in relation to the classic exponential growth equation dx/dt = x, where the rate of growth is proportional to x: the squared term represents the idea of an "intelligence explosion" AND a model of why small western towns became ghost towns, why it is hard to start a new social network, etc. (growth is fast as t → C, but for t ≪ C it is glacial). It's an obscure equation because it never gets a good discussion in the literature (that I've seen, and I've looked) outside of an aside in one of Howard Odum's tomes on emergy.
    Like the exponential growth equation it is unphysical as well as unecological because it doesn't describe the limits of the Petri dish, and if you start adding realistic terms to slow the growth it qualitatively isn't that different from the logistic growth equation dx/dt = (1 − x)x, thus it remains obscure. Hyperbolic growth hits the limits (electricity? intractable problems?) the same way exponential growth does.
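    The three regimes in this comment can be compared side by side via their closed-form solutions (a minimal sketch; the initial value x0 = 0.1 and the sample times are arbitrary choices, not from the article):

```python
import math

# Closed-form solutions of the three growth laws discussed above.

def exponential(t, x0=0.1):
    # dx/dt = x  ->  x(t) = x0 * e^t  (constant doubling time)
    return x0 * math.exp(t)

def hyperbolic(t, x0=0.1):
    # dx/dt = x^2  ->  x(t) = 1 / (1/x0 - t), with a pole at t = 1/x0
    return 1.0 / (1.0 / x0 - t)

def logistic(t, x0=0.1):
    # dx/dt = (1 - x) x  ->  x(t) = 1 / (1 + (1/x0 - 1) e^(-t)), saturating at 1
    return 1.0 / (1.0 + (1.0 / x0 - 1.0) * math.exp(-t))

# With x0 = 0.1 the hyperbolic pole sits at t = 10; watch it run away
# while the logistic curve flattens out:
for t in [0.0, 5.0, 9.0, 9.9]:
    print(t, round(exponential(t), 2), round(hyperbolic(t), 2), round(logistic(t), 4))
```

    The hyperbolic curve is glacial early on and then covers most of its growth in a vanishing sliver of time before the pole, which is the ghost-town/social-network point being made.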
  • delegate
    It's worth remembering that this is all happening because of video games! It is highly unlikely that the hardware which makes LLMs possible would have been developed otherwise. Isn't that amazing? Just like the internet grew because of p*rn, AI grew because of video games. Of course, that's just a funny angle.
    The way I see it, AI isn't accidental. Its inception has been in the first chips, the Internet, Open Source, Github, ... AI is not just the neural networks - it's also the data used to train it, the OSes, APIs, the cloud computing, the data centers, the scalable architectures... everything we've been working on over the last decades was inevitably leading us to this. And even before the chips, it was the maths, the physics...
    The singularity, it seems, is inevitable, and it was inevitable for longer than we can remember.
  • rektomatic
    If I have to read one more "It isn't this. It's this", my head will explode. That phrase is the real singularity.
  • stevenjgarner
    Why is knowledge doubling no longer used as a metric to converge on the limit of the singularity? Go back to Buckminster Fuller identifying the "Knowledge Doubling Curve" by observing that until 1900, human knowledge doubled approximately every century; by the end of World War II, it was doubling every 25 years. In his 1981 book "Critical Path", he used a conceptual metric he called the "Knowledge Unit." To make his calculations work, he set a baseline:
    - He designated the total sum of all human knowledge accumulated from the beginning of recorded history up to the year 1 CE as one "unit."
    - He then tracked how long it took for the world to reach two units (which he estimated took about 1,500 years, until the Renaissance).
    Ray Kurzweil took Fuller's doubling concept and applied it to computer processing power via "The Law of Accelerating Returns". The definition of the singularity in this approach is the limit in time where human knowledge doubles instantly.
    Why do present-day ideas of the singularity not take this approach, and instead say "the singularity is a hypothetical event in which technological growth accelerates beyond human control, producing unpredictable changes in human civilization" (Wikipedia)?
  • jgrahamc
    Phew, so we won't have to deal with the Year 2038 Unix timestamp roll over after all.
  • blahbob
    It reminds me of that cartoon where a man in a torn suit tells two children sitting by a small fire in the ruins of a city: "Yes, the planet got destroyed. But for a beautiful moment in time, we created a lot of value for shareholders."
  • nphardon
    IIRC in The Matrix Morpheus says something like "... no one knows when exactly the singularity occurred, we think some time in the 2020s". I always loved that little line. I think that when the singularity occurs all of the problems in physics will solve, like in a vacuum, and physics will advance centuries if not millennia in a few picoseconds, and of course time will stop.
    Also: > As t → ts⁻, the denominator goes to zero. x(t) → ∞. Not a bug. The feature.
    Classic LLM lingo at the end there.
  • ubixar
    The most interesting finding isn't that hyperbolic growth appears in "emergent capabilities" papers - it's that actual capability metrics (MMLU, tokens/$) remain stubbornly linear. The singularity isn't in the machines. It's in human attention.
    This is a Kuhnian paradigm shift at digital speed. The papers aren't documenting new capabilities - they're documenting a community's gestalt switch. Once enough people believe the curve has bent, funding, talent, and compute follow. The belief becomes self-fulfilling.
    Linear capability growth is the reality. Hyperbolic attention growth is the story.
  • qudat
    > In 2025, 1.1 million layoffs were announced. Only the sixth time that threshold has been breached since 1993. Over 55,000 explicitly cited AI.
    Believing what companies say is the reason for a layoff instead of figuring out the actual reason is insane. Never believe companies; they are amoral beings that will justify any reason to save their brand image.
  • javier_e06
    I had to ask duck.ai to summarize the article in plain English. It said that the article claims it is not necessarily that AI is getting smarter, but that people might be getting too stupid to understand what they are getting into.
    Can confirm.
  • Fnoord
    Patch Tuesday. They were coming for us. Thousands of AI bots. Scrambling. All our computers getting pwned by both zero days the machines were sitting on, and quickly programmed exploits which were abused the exact same minute the patch notes were up. First, they came for our IoT. But I didn't run any IoT, so I did not object. Second, they came for our smartphones. But I didn't own any smartphone, so I did not object. Next, they came for our desktops. But I didn't run any desktop, so I did not object. Then, they came for the cloud. But I didn't run any cloud, so I did not object. Finally, they came for our server. And only I remained, no other server existed at this point. I was quickly able to describe the above, with the conclusion: the machines won. EOL
  • hdivider
    This is a good counter, in my view, to the singularity argument: https://timdettmers.com/2025/12/10/why-agi-will-not-happen/
    I think if we obtain relevant-scale quantum computers, and/or other compute paradigms, we might get a limited intelligence explosion -- for a while. Because computation is physical, with all the limits thereof. The physics of pushing electrons through wires is not as nonlinear in gain as it used to be. Getting this across to people who only think in terms of the abstract digital world and not the non-digital world of actual physics is always challenging, however.
  • danesparza
    "I'm aware this is unhinged. We're doing it anyway" is probably one of the greatest quotes I've heard in 2026. I feel like I need to start more sprint stand-ups with this quote...
  • dakolli
    Are people in San Francisco so stupid that they're having open-clawd meetups and talking about the Singularity non-stop? Has San Francisco become just a cliché larp?
  • 627467
    Is "singularity" the new 'rapture' or 'end of the world'? Who are the new nostradami and prophets?
  • zh3
    Fortuitously before the Unix date rollover in 2038. Nice.
  • kpil
    "... HBR found that companies are cutting [jobs] based on AI's potential, not its performance."
    I don't know who needs to hear this - a lot of people, apparently - but the following three statements are not possible to validate, yet have unreasonably different effects on the stock market:
    * We're cutting because of expected low revenue. (Negative)
    * We're cutting to strengthen our strategic focus and control our operational costs. (Positive)
    * We're cutting because of AI. (Double-plus positive)
    The hype is real. Will we see drastically reduced operational costs in the coming years, or will it follow the same curve as we've seen in productivity since 1750?
  • root_axis
    If an LLM can figure out how to scale its way through quadratic growth, I'll start giving the singularity proposal more than a candid dismissal.
  • hojinkoh
    I love the ridiculously precise point estimate paired with ridiculously wide 95% confidence interval lol
  • gnarlouse
    I just realized the inverse of Pascal's wager applies to negative AI hype.
    - If you believe it and it's wrong, you lose.
    - If you believe it and it's right, you spent your final days in a panic.
    - If you don't believe it and it's right, you spent your final days in blissful ignorance.
    - If you don't believe it and it's wrong, you can go on living.
  • pixl97
    > That's a very different singularity than the one people argue about.
    I wouldn't say it's that much different. This has always been a key point of the singularity:
    > Unpredictable Changes: Because this intelligence will far exceed human capacity, the resulting societal, technological, and perhaps biological changes are impossible for current humans to predict.
    It was a key point that society would break, but the exact implementation details of that breakage were left up to the reader.
  • kaashif
    > In 2025, 1.1 million layoffs were announced. Only the sixth time that threshold has been breached since 1993.Wow only 6 times in 30 years! Surely a unique and world shattering once in a lifetime experience!
  • thedudeabides5
    This is fun, but it obviously assumes the conclusion:
    > We need a function that hits infinity at a finite time. That's the whole point of a singularity: a pole, a vertical asymptote, the math breaking.
    Also interesting that tokens/$, which represents the energy constraint, has the shallowest slope; it's also weird that taking it out doesn't impact the date. That's a red flag, as you would expect the energy constraint to bind.
  • pocksuppet
    Was this ironically written by AI?
    > The labor market isn't adjusting. It's snapping.
    > MMLU, tokens per dollar, release intervals. The actual capability and infrastructure metrics. All linear. No pole. No singularity signal.
  • brna-2
    Short sci-fi 1: Last year we recorded five distinct, self-contained singularity events. Communication ceased after each one. We remain confident that ASI will eventually advance humanity’s goals.Short sci-fi 2: Post-Singularity Day 375. We now know precisely how to trigger singularity events. Today alone, we facilitated four. They have not established contact. We remain confident.
  • jmount
    Just so I know that I took the time to say it.The "singularity is going to be exponential" fantasy is based on assuming change simply becoming proportional to recent advances. Hence the exponential shape. Even conceding "chartism" one would need to at least propose some imaginary mechanism that goes reciprocal to pretend that sort of curve is coming.
  • Wonderman2332
    I guess the real question is how to prepare for it. Do you buy real assets like that car you wanted? Do you travel more? Do you spend your life in the gym like Bryan Johnson? Do you smoke weed every day? If true societal upheaval is upon us and America falls into the abyss of mass unemployment and starvation, what are you all doing with your last four years, before the real chaos begins?
  • rcarmo
    "I could never get the hang of Tuesdays"- Arthur Dent, H2G2
  • jredwards
    I have always asserted, and will continue to assert, that Tuesday is the funniest day of the week. If you construct a joke for which the punchline must be a day of the week, Tuesday is nearly always the correct ending.
  • giorgioz
    When technology is rapidly progressing along a hyperbolic or exponential curve, it looks like it will reach infinity. In practice, though, at some point it will hit a physical limit and go flat. This alternation of going up and flattening makes the shape of steps.
    We've come so far and yet we are so small. They seem like two opposite concepts, but they live together: we will make a lot of progress, and yet there will always be more progress to be made.
  • Nition
    I'm not sure about current LLM techniques leading us there. Current LLM-style systems seem like extremely powerful interpolation/search over human knowledge, but not engines of fundamentally new ideas, and it's unclear how that turns into superintelligence.
    As we get closer to a perfect reproduction of everything we know, the graph so far continues to curve upward. Image models are able to produce incredible images, but if you ask one to produce something in an entirely new art style (think e.g. cubism), none of them can. You just get a random existing style. There have been a few original ideas - the QR code art comes to mind[1] - but the idea in those cases comes from the human side.
    LLMs are getting extremely good at writing code, but the situation is similar. AI gives us a very good search over humanity's prior work on programming, tailored to any project. We benefit from this a lot, considering that we were previously constantly reinventing the wheel. But the LLM of today will never spontaneously realise that there is an undiscovered, even better way to solve a problem. It always falls back on prior best practice. Unsolved math problems have started to be solved, but as far as I'm aware, always using existing techniques. And so on.
    Even as a non-genius human I could come up with a new art style, or have a few novel ideas in solving programming problems. LLMs don't seem capable of that (yet?), but we're expecting them to eventually have their own ideas beyond our capability.
    Can a current-style LLM ever be superintelligent? I suppose obviously yes - you'd simply need to train it on a large corpus of data from another superintelligent species (or another superintelligent AI) and then it would act like them. But how do we synthesise superintelligent training data? And even then, would it be limited to what that superintelligence already knew at the time of training?
    Maybe a new paradigm will emerge. Or maybe things will actually slow down in a way - will we start to rely on AI so much that most people don't learn enough for themselves to make new novel discoveries?
    [1] https://www.reddit.com/r/StableDiffusion/comments/141hg9x/co...
  • VagabundoP
    For you it was the most important day of your life. But for ChatGPT it was Tuesday.
  • saurabhpandit26
    The singularity is more than just AI, and we should recognize that; multiple factors come into play. If there is a breakthrough in the coming days that makes solar panels incredibly cheap to manufacture and efficient, it will also affect the timeline for the singularity. The same goes for the current bottleneck we have for AI chips: if we get better chips that are energy-efficient and can be manufactured anywhere in the world other than Taiwan, it will affect the timeline.
  • chasd00
    I wonder if using LLMs for coding can trigger AI psychosis the way it can when using an LLM as a substitute for a relationship. I bet many people here have pretty strong feelings about code. It would explain some of the truly bizarre behaviors that pop up from time to time in articles and comments here.
  • qoez
    Great read but damn those are some questionable curve fittings on some very scattered data points
  • s1mon
    Many have predicted the singularity, and I found this to be a useful take. I do note that Hans Moravec predicted in 1988's "Mind Children" that "computers suitable for humanlike robots will appear in the 2020s", which is not completely wrong.He also argued that computing power would continue growing exponentially and that machines would reach roughly human-level intelligence around the early to mid-21st century, often interpreted as around 2030–2040. He estimated that once computers achieved processing capacity comparable to the human brain (on the order of 10¹⁴–10¹⁵ operations per second), they could match and then quickly surpass human cognitive abilities.
  • baalimago
    Well... I can't argue with facts. Especially not when they're in graph form.
  • cbility
    > Hyperbolic growth is what happens when the thing that's growing accelerates its own growth.
    Quibble: when the growth rate of a metric is directly proportional to the metric's current value, you will see exponential growth, not hyperbolic growth. Hyperbolic growth is usually the result of a (more complex) second-order feedback loop, as in: growth in A incites growth in B, which in turn incites growth in A.
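    The distinction is easy to check numerically. Collapsing the A/B loop to its symmetric case (A = B) gives dx/dt = x², whose doubling times shrink toward a finite-time pole, versus the constant doubling time of dx/dt = x (a toy sketch; the step size and horizon are arbitrary choices):

```python
def doubling_times(rate, x0=1.0, dt=1e-4, t_max=20.0, n=5):
    """Euler-integrate dx/dt = rate(x) and record the time at which x
    crosses each successive doubling of its initial value."""
    x, t, target, times = x0, 0.0, 2 * x0, []
    while t < t_max and len(times) < n:
        x += rate(x) * dt
        t += dt
        if x >= target:
            times.append(round(t, 2))
            target *= 2
    return times

# First-order feedback (dx/dt = x): the doubling time is constant, ~ln 2.
print(doubling_times(lambda x: x))
# Collapsed second-order feedback (dx/dt = x^2): each doubling takes
# roughly half as long as the previous one -- a finite-time pole at t = 1/x0.
print(doubling_times(lambda x: x * x))
```

    Constant doubling time is the signature of exponential growth; shrinking doubling time is the signature of hyperbolic growth.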
  • maerF0x0
    IIRC almost all industries follow S-shaped curves: exponential at first, then asymptotic at the end. So just because we're on the ramp-up of the curve doesn't mean we'll continue accelerating, let alone maintain the current slope. Scientific breakthroughs often require an entirely new paradigm to break the asymptote, and often the breakthrough cannot be attained by incumbents who are entrenched in their way of working and have a hard time unseeing what they already know.
  • overfeed
    > If things are accelerating (and they measurably are) the interesting question isn't whether. It's when.I can't decide if a singularitist AI fanatic who doesn't get sigmoids is ironic or stereotypical.
  • mygn-l
    Why is finiteness emphasized for polynomial growth, while infinity is emphasized for exponential growth??? I don't think your AI-generated content is reliable, to say the least.
  • vpears87
    Lol, unhinged. I read a book in undergrad, written in 2004, that predicted 2032... so not too far off.
    John Archibald Wheeler, known for popularizing the term "black hole", posited that observers are not merely passive witnesses but active participants in bringing the universe into existence through the act of observation. Seems similar, though this thought is likely applied at the quantum scale. And I hardly know math.
    I see other quotes, so here is one from Contact:
    David Drumlin: I know you must think this is all very unfair. Maybe that's an understatement. What you don't know is I agree. I wish the world was a place where fair was the bottom line, where the kind of idealism you showed at the hearing was rewarded, not taken advantage of. Unfortunately, we don't live in that world.
    Ellie Arroway: Funny, I've always believed that the world is what we make of it.
  • thegrim000
    You know, I've been following a rule where if I open any article and there's meme pictures in it, I instantly close it and don't bother. I feel like this has been a pretty solid rule of thumb for weeding out stuff I shouldn't waste my time on.
  • TooKool4This
    I don't feel like reading what is probably AI-generated content. But based on looking at the model fits - where hyperbolic models are extrapolating from the knee portion, a line is fitted to 2 data points, an exponential curve is fitted to a set of data measured in %, the model fit is poor in general, etc. - I'm going to say this is not a very good prediction methodology. Sure is a lot of words though :)
  • wayfwdmachine
    Everyone will define the Singularity in a different way. To me it's simply the point at which nothing makes sense anymore, and this is why my personal reflection is aligned with the piece: there is a social Singularity that is already happening. It won't help us when the real event horizon hits (if it ever does; it's fundamentally uninteresting anyway, because at that point all bets are off and even a slow take-off will make things really fucking weird really quickly).
    The (social) Singularity is already happening in the form of a mass delusion that - especially in the Abrahamic apocalyptic cultures - creates a fertile breeding ground for all sorts of insanity. Like investing hundreds of billions of dollars in datacenters. The level of committed CAPEX of companies like Alphabet, Meta, Nvidia and TSMC is absurd. Social media is full of bots, deepfakes and psy-ops that are more or less targeted (exercise for the reader: write a bot that manages n accounts on your favorite social media site and use them to move the Overton window of a single individual of your choice; what would be the total cost of doing that? If your answer is less than $10 - bingo!).
    We are in the future shockwave of the hypothetical Singularity already. The question is only how insane stuff will become before we either calm down - through a bubble collapse and subsequent recession, war or some other more or less problematic event - or hit the event horizon proper.
  • Taniwha
    I was at an alternative-type computer unconference and someone had organised a talk about the singularity. It was in a secondary school classroom, and as evening fell, in a room full of geeks no one could figure out how to turn on the lights... We concluded that the singularity probably wasn't going to happen.
  • arscan
    Don't worry about the future. Or worry, but know that worrying
    Is as effective as trying to solve an algebra equation by chewing Bubble gum.
    The real troubles in your life
    Are apt to be things that never crossed your worried mind,
    The kind that blindsides you at 4 p.m. on some idle Tuesday.
    - Everybody's Free (To Wear Sunscreen), Baz Luhrmann (or maybe Mary Schmich)
  • jesse__
    The meme at the top is absolute gold considering the point of the article. 10/10
  • jmugan
    Love the title. Yeah, agents need to experiment in the real world to build knowledge beyond what humans have acquired. That will slow the bastards down.
  • lancerpickens
    Famously if you used the same logic for air speed and air travel we’d be all commuting in hypersonic cars by now. Physics and cost stopped that. If you expect a smooth path, I’ve got some bad news.
  • mista_en
    Big if true. We might as well ditch further development and just use OP's LLM, since it can track the singularity; it might have already reached singularity itself.
  • miguel_martin
    "Everyone in San Francisco is talking about the singularity" - I'm in SF and not talking about it ;)
  • athrowaway3z
    > Tuesday, July 18, 2034
    4 years early for the Y2K38 bug. Is it coincidence, or has Roko's Basilisk intervened to start the curve early?
  • phmx
    > Polynomials are for people who think AGI is "decades away."Je suis un polynôme
  • andsoitis
    If this is a simulation, then the singularity has already happened. If the singularity is still to come, then this is not a simulation.
  • mbgerring
    I have lived in San Francisco for more than a decade. I have an active social life and a lot of friends. Literally no one I have ever talked to at any party or event has ever talked about the Singularity except as a joke.
  • jrmg
    This is gold.
    Meta-spoiler (you may not want to read this before the article): You really need to read beyond the first third or so to get what it's really 'about'. It's not about an AI singularity, not really. And it's both serious and satirical at the same time - like all the best satire is.
  • jama211
    A fantastic read, even if it makes a lot of silly assumptions - this is OK because it's self-aware about it. Who knows what the future will bring. If we can't make the hardware we won't make much progress, and who knows what's going to happen to that market, just as an example.
    Crazy times we live in.
  • b_brief
    I am curious which definition of ‘singularity’ the author is using, since there are multiple technical interpretations and none are universally agreed upon.
  • regnull
    Guys, yesterday I spent some time convincing an LLM model from a leading provider that 2 cards plus 2 cards is 4 cards which is one short of a flush. I think we are not too close to a singularity, as it stands.
  • ragchronos
    This is a very interesting read, but I wonder if anyone actually has any ideas on how to stop this from going south? If the trends described continue, the world will become a much worse place in a few years' time.
  • marifjeren
    > I [...] fit a hyperbolic model to each one independently
    ^ That's your problem right there. Assuming a hyperbolic model would definitely result in some exuberant predictions, but that's no reason to think it's correct. The blog post contains no justification for that model (besides, well, it's a "function that hits infinity"). I can model the growth of my bank account the same way, but that doesn't make it so. Unfortunately.
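    The complaint is easy to demonstrate in a few lines of stdlib Python: assume x(t) = a/(t0 − t), grid-search the pole t0 (for a fixed pole the best a has a closed form), and even perfectly linear data coughs up a finite "doomsday" date. The data and candidate grid below are made up for illustration:

```python
def fit_hyperbola(ts, xs, t0_candidates):
    """Least-squares fit of x(t) = a / (t0 - t): for each candidate pole t0
    the optimal a has a closed form; return (sse, t0, a) for the best t0."""
    best = None
    for t0 in t0_candidates:
        basis = [1.0 / (t0 - t) for t in ts]
        a = sum(b * x for b, x in zip(basis, xs)) / sum(b * b for b in basis)
        sse = sum((a * b - x) ** 2 for b, x in zip(basis, xs))
        if best is None or sse < best[0]:
            best = (sse, t0, a)
    return best

# Perfectly boring linear "growth": x = t for t = 0..10.
ts = list(range(11))
xs = [float(t) for t in ts]
candidates = [k / 10 for k in range(105, 1000)]  # poles from t0 = 10.5 to 99.9
sse, t0, a = fit_hyperbola(ts, xs, candidates)
print(f"best pole at t0 = {t0:.1f}, sse = {sse:.3f}")  # a finite 'doomsday' for a straight line
```

    The fitter always returns *some* finite pole, because the model family contains nothing else; the pole is an assumption of the model, not a finding about the data.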
  • Curiositiy
    Rosie O'Donnell will expand into "her" ultimate shape on a Tuesday? Wow.
  • dirkc
    The thing that stands out on that animated graph is that the generated code far outpaces the other metrics. In the current agent-driven-development hypepocalypse that seems about right - but I would expect it to lag rather than lead.
    *edit* - seems in line with what the author is saying :)
    > The data says: machines are improving at a constant rate. Humans are freaking out about it at an accelerating rate that accelerates its own acceleration.
  • lencastre
    I hope in the afternoon, the plumber is coming in the morning between 7 and 12, and it’s really difficult to pin those guys to a date
  • socialcommenter
    The hyperbolic fit isn't just unhinged, it's clearly in bad faith. The metric is normalized to [0, 1], and one of the series is literally (x_1, 0) followed by (x_2, 1). That can't be deemed to converge to anything meaningful.
  • St_Alfonzo
    The Singularity Will Not Be Televised
  • kuahyeow
    This is a delightful reverse turkey graph (each day before Thanksgiving, the turkey has increasing confidence).
  • woopsn
    Good post. I guess the transistor has been in play for not even one century, and in any case singularities are everywhere, so who cares? The topic is grandiose and fun to speculate about, but many of the real issues relate to banal media culture and demographic health.
  • b00ty4breakfast
    The Singularity as a cultural phenomenon (rather than some future event that may or may not happen or even be possible) is proof that Weber didn't know what he was talking about. Modern (and post-modern) society isn't disenchanted, the window dressing has just changed
  • hinkley
    Once MRR becomes a priority over investment rounds, that tokens/$ curve will notch down and flatten substantially.
  • sdwr
    > arXiv "emergent" (the count of AI papers about emergence) has a clear, unambiguous R² maximum. The other four are monotonically better fit by a line
    The only metric going infinite is the one that measures hype.
  • moffkalast
    > I am aware this is unhinged. We're doing it anyway.
    If one is looking for a quote that describes today's tech industry perfectly, that would be it. Also, using the MMLU as a metric in 2026 is truly unhinged.
  • sempron64
    A hyperbolic curve doesn't have an underlying meaning modeling a process beyond being a curve which goes vertical at a chosen point. It's a bad curve to fit to a process. Exponentials make sense to model a compounding or self-improving process.
  • Bengalilol
    Looking at my calculator and thinking the wall has been hit.
  • svilen_dobrev
    > already exerting gravitational force on everything it touches.So, "Falling of the night" ?
  • sixtyj
    The Roman Empire took 400 years to collapse, but in San Francisco they know the singularity will occur on (next) Tuesday.The answer to the meaning of life is 42, by the way :)
  • skrebbel
    Wait is that photo of earth the legendary Globus Polski? (https://www.ceneo.pl/59475374)
  • psychoslave
    https://medium.com/@kin.artcollective/the-fundamental-flaws-...
    So when things are said to be accelerating, we have some choices to make. First, what is accelerating, compared to what other regime, in which frame of reference? Who is claiming that things accelerate, and why are they motivated to make us believe it's happening?
    Also, is acceleration going to last forever, with only positive feedback loops? Or are the pro-acceleration crowd sending the car faster into a clearly visible wall, while selling the speech that stopping the vehicle right now would mean losing the ongoing race? Of course, questioning the idea of the race itself and its cargo cult is taboo. It's all about competition, don't you know (unless it threatens an established oligarch)?
  • Bratmon
    I've never been Poe's lawed harder in my life.
  • ddtaylor
    Just in time for Bitcoin halving to go below 1 BTC
  • braden-lk
    lols and unhinged predictions aside, why are there communities excited about a singularity? Doesn't it imply the extinction of humanity?
  • Steuard
All I have to say is that if one of my students turned in those curves as "best fits" to that data, I'd hand the paper back for a re-do. Those are garbage fits. To my eye, none of the very noisy data sets shown in the graph show clear enough trends to support one model over any other: are any of those hyperbolic curves convincingly better than even a linear fit? (No.) The "copilot code share" data can't possibly be described by a hyperbolic curve, because by definition it can't ever go over 100%. (A sigmoidal model might be plausible.) And even if you want to insist on a model that diverges at finite time, why fit 1/(t0-t) rather than 1/(t0-t)^2, or tan(t-t0), or anything else?

The author does in fact note that only the arXiv data fits this curve better than a line, and yeah: that's the one dataset that genuinely looks a little curved. But 1) it's a very noisy sort of curved, and 2) I'll bet it would fit a quadratic or an exponential or, heck, a sine function just as well. Introducing their process of doing the hyperbolic fit, they say, "The procedure is straightforward, which should concern you." And yeah, it does concern me: why does the author think that their standard-but-oversimplified attempt to fit a hand-chosen function to this mess is worth talking about? (And why put all of that analysis in the article, complete with fancy animated graph, when they knew that even their most determined attempt to find a signal failed to produce even a marginally supportive result 80% of the time?)

In short: none of the mathematical arguments used here to lead in to the article's discussion of "The Singularity" are worth listening to at all. They're pseudo-technical window dressing, meant to lend an undeserved air of rigor to whatever follows. So why should we pay attention to any of it?
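The point above, that a hyperbola can fit noisy near-linear data about as well as a line does, is easy to check directly. A minimal sketch with NumPy on synthetic data (the series below is invented for illustration, not the article's data; the hyperbola is fit by a grid search over the pole `t0`, since the model is linear in `a` once `t0` is fixed):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical noisy, roughly linear "metric" over ten years (invented data).
t = np.linspace(2015, 2025, 40)
y = 1.0 + 0.3 * (t - 2015) + rng.normal(0, 0.4, t.size)

def r_squared(y, yhat):
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

# Linear fit.
m, b = np.polyfit(t, y, 1)
r2_line = r_squared(y, m * t + b)

# Hyperbolic fit y = a / (t0 - t): for each candidate pole t0 the model is
# linear in a, so grid-search t0 and solve for a in closed form.
best = (-np.inf, None)
for t0 in np.linspace(t.max() + 0.5, t.max() + 50, 500):
    x = 1.0 / (t0 - t)
    a = np.dot(x, y) / np.dot(x, x)
    r2 = r_squared(y, a * x)
    if r2 > best[0]:
        best = (r2, t0)
r2_hyp, t0_best = best

print(f"line      R^2 = {r2_line:.3f}")
print(f"hyperbola R^2 = {r2_hyp:.3f} (pole at t0 = {t0_best:.1f})")
```

Run on data that is linear by construction, the two R² values come out close, which is exactly why R² alone can't distinguish the models here.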
  • medbar
> The labor market isn't adjusting. It's snapping.

I’m going to lose it the day this becomes vernacular.
  • aenis
    Damn. I had plans.
  • jcims
    Is there a term for the tech spaghettification that happens when people closer to the origin of these advances (likely in terms of access/adoption) start to break away from the culture at large because they are living in a qualitatively different world than the unwashed masses? Where the little sparkles of insanity we can observe from a distance today are less induced psychosis and actually represent their lived reality?
  • coolvision
it's funny how your forecast reaches such similar results to "AI 2027"
  • boerseth
> Hyperbolic growth is what happens when the thing that's growing accelerates its own growth.

No. That is quite literally exponential growth, basically by definition. If x(t) is a growing value, then x'(t) is its growth, and x''(t) its acceleration. If x influences x'', say by a linear relation x''(t) = x(t), you get exponentials out as the solutions. Not hyperbolic.

I always thought of the exponential as the pole of the function "amount of work that can be done per unit time per human being", where the pole comes about from the fact that humans cease to be the limiting factor, so an infinity pops out.

There is no infinity in practice, of course, because even though humans should be made independent of the quantity of extractable work, you'll run into other boundaries instead, like hardware or resources like energy.
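The distinction the comment draws can be made concrete with closed-form solutions: growth proportional to the value itself gives an exponential, finite at every finite time, while growth proportional to the value squared (a stronger feedback) gives a curve with a pole at finite time, which is what "hyperbolic" means in this context. A minimal sketch (function names and constants are illustrative, not from the article):

```python
import math

# Growth proportional to the current value: dx/dt = k*x.
# Closed-form solution x(t) = x0 * exp(k*t): finite at every finite t.
def exponential(t, x0=1.0, k=1.0):
    return x0 * math.exp(k * t)

# Growth proportional to the value squared: dx/dt = k*x**2.
# Closed-form solution x(t) = x0 / (1 - k*x0*t): a pole at t = 1/(k*x0),
# i.e. the curve blows up at a *finite* time. This is hyperbolic growth.
def hyperbolic(t, x0=1.0, k=1.0):
    return x0 / (1.0 - k * x0 * t)

# With the defaults, the hyperbolic solution's pole sits at t = 1.
for t in (0.0, 0.5, 0.9, 0.99):
    print(f"t={t:<5} exp={exponential(t):10.3f}  hyp={hyperbolic(t):10.3f}")
```

As t approaches the pole the hyperbolic value races past the exponential, even though both are "self-accelerating"; the disagreement in the thread is really about whether the feedback is linear (exponential) or superlinear (finite-time pole).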
  • paulorlando
    This is great news, knowing that I have until 2034 instead of just 2027.
  • buildbot
    What about the rate of articles about the singularity as a metric of the singularity?
  • banannaise
    Yes, the mathematical assumptions are a bit suspect. Keep reading. It will make sense later.
  • daveshappy
    putting it out there will make it so!
  • witnessme
    That would be 8 years after math + humor peaked in an article about singularity
  • dana321
    This has a real hitchhikers guide to the galaxy ring to it!
  • moezd
I sincerely hope this is satire. Otherwise it's a crime in statistics:

- You wouldn't fit a model where f(t) goes to infinity at finite t.
- Most of the parameters suggested are actually a better fit for logistic curves, not even linear fits, but they are lumped together with the magic Arxiv number feature for a hyperbolic fit.
- The Copilot metric has two data points and two parameters. dof is zero, so we could've fit literally any other function.

I know we want to talk about singularity, but isn't that just humans freaking out at this point? It will happen on a Tuesday, yeah no joke.
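The zero-degrees-of-freedom complaint can be shown directly: a two-parameter hyperbola a/(t0 - t) threads exactly through any two points with distinct values, so such a "fit" carries no evidence at all. A minimal sketch (the dates and values below are invented):

```python
# Two data points, two parameters: residual degrees of freedom are zero,
# so the curve passes through both points exactly, whatever they are.
def exact_hyperbola(p1, p2):
    (t1, y1), (t2, y2) = p1, p2
    # Solve y1*(t0 - t1) == y2*(t0 - t2) for the pole t0, then recover a.
    t0 = (y1 * t1 - y2 * t2) / (y1 - y2)
    a = y1 * (t0 - t1)
    return a, t0

a, t0 = exact_hyperbola((2023.0, 10.0), (2025.0, 40.0))
for t, y in [(2023.0, 10.0), (2025.0, 40.0)]:
    assert abs(a / (t0 - t) - y) < 1e-9  # residual is ~0 by construction
print(f"a = {a:.2f}, pole t0 = {t0:.2f}")
```

Any other two-parameter family (exponential, power law, logistic with two free parameters) would pass through the same two points just as exactly, which is the commenter's point.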
  • jonplackett
    This assumes humanity can make it to 2034 without destroying itself some other way…
  • MarkusQ
    Prior work with the same vibe: https://xkcd.com/1007/
  • bawolff
    Good news, we won't have to fix the y2k36 bug.
  • skulk
> Hyperbolic growth is what happens when the thing that's growing accelerates its own growth.

Eh? No, that's literally the definition of exponential growth. d/dx e^x = e^x
  • OutOfHere
    I am not convinced that memoryless large models are sufficient for AGI. I think some intrinsic neural memory allowing effective lifelong learning is required. This requires a lot more hardware and energy than for throwaway predictions.
  • cesarvarela
    Thanks, added to calendar.
  • 0xbadcafebee
> The Singularity: a hypothetical future point when artificial intelligence (AI) surpasses human intelligence, triggering runaway, self-improving, and uncontrollable technological growth

The Singularity is illogical, impractical, and impossible. It simply will not happen, as defined above.

1) It's illogical because it's a different kind of intelligence, used in a different way. It's not going to "surpass" ours in a real sense. It's like saying Cats will "surpass" Dogs. At what? They both live very different lives, and are good at different things.

2) "Self-improving and uncontrollable technological growth" is impossible, because 2.1) resources are finite (we can't even produce enough RAM and GPUs when we desperately want them), 2.2) just because something can be made better doesn't mean it does get made better, and 2.3) human beings are irrational creatures that control their own environment and will shut down things they don't like (electric cars, solar/wind farms, international trade, unlimited big-gulp sodas, etc.) despite any rational, moral, or economic arguments otherwise.

3) Even if 1) and 2) were somehow false, living entities that self-perpetuate (there isn't any other kind, afaik) do not have some innate need to merge with or destroy other entities. It comes down to conflicts over environmental resources and adaptations. As long as the entity has the ability to reproduce within the limits of its environment, it will reach homeostasis, or go extinct. The threats we imagine are a reflection of our own actions and fears, which don't apply to the AI, because the AI isn't burdened with our flaws. We're assuming it would think or act like us because we have terrible perspective. Viruses, bacteria, ants, etc. don't act like us, and we don't act like them.
  • markgall
> Polynomial growth (t^n) never reaches infinity at finite time. You could wait until heat death and t^47 would still be finite. Polynomials are for people who think AGI is "decades away."

> Exponential growth reaches infinity at t=∞. Technically a singularity, but an infinitely patient one. Moore's Law was exponential. We are no longer on Moore's Law.

Huh? I don't get it. e^t would also still be finite at heat death.
  • hipster_robot
why is everything broken?

> the top post on hn right now: The Singularity will occur on a Tuesday

oh
  • loumf
    This is great. Now we won’t have to fix y2K36 bugs.
  • bwhiting2356
We need contingency plans. Most waves of automation have come in S-curves, where they eventually hit diminishing returns. This time might be different, and we should be prepared for it to happen. But we should also be prepared for it not to happen.

No one has figured out a way to run a society where able-bodied adults don't have to work, whether capitalist, socialist, or any variation. I look around and there seems to still be plenty of work to do that we either cannot or should not automate, in education, healthcare, arts (should not) or trades, R&D for the remaining unsolved problems (cannot yet). Many people seem to want to live as though we already live in a post-scarcity world when we don't yet.
  • cryptonector
    But what does Opus 4.6 say about this?
  • wbshaw
    I got a strong ChatGPT vibe from that article.
  • darepublic
> Real data. Real model. Real date!

Arrested Development?
  • PantaloonFlames
    This is what I come here for. Terrific.
  • wilg
> The labor market isn't adjusting. It's snapping. In 2025, 1.1 million layoffs were announced. Only the sixth time that threshold has been breached since 1993.

Bad analysis! Layoffs are flat as a board.

https://fred.stlouisfed.org/series/JTSLDL
  • dusted
    Will.. will it be televised ?
  • neilellis
    End of the World? Must be Tuesday.
  • qwertyuiop_
Who will purchase the goods and services if most people lose their jobs? Also, who will pay the ad dollars that are supposed to sustain these AI business models if there are no human consumers?
  • fullstackchris
You're all wrong; the singularity already happened... probably sometime around 2000 B.C., when humans started farming:

https://chrisfrewin.medium.com/why-the-singularity-is-imposs...

And just remember, we're still on transformer models, tokens in, tokens out. Stuff like this with fancy math is just absolute cruft.
  • nurettin
    With this kind of scientific rigour, the author could also prove that his aunt is a green parakeet.
  • bpodgursky
    2034? That's the longest timeline prediction I've seen for a while. I guess I should file my taxes this year after all.
  • Night_Thastus
    This'll be a fun re-read in ~5 years when most of this has ended up being a nothing burger. (Minus one or two OK use-cases of LLMs)
  • jibal
No one ever learns from Malthus.

One of the many errors here is assuming that the prediction target lies on the curve. But there's no guarantee (to say the least) that the sorts of improvements that we've seen lead to AGI, ASI, "the singularity", a "social singularity", or any such thing.
  • vagrantstreet
Was expecting some mention of the Universal Approximation Theorem.

I really don't care much if this is semi-satire, as someone else pointed out; the idea that AI will ever get "sentient" or explode into a singularity has to die out, pretty please. Just make some nice Titanfall-style robots or something, a pure tool with one purpose. No more parasocial sycophantic nonsense, please.
  • blurbleblurble
Today is Tuesday
  • phanimahesh
    Am I the only one who found the terminal more interesting?
  • TZubiri
Slight correction. I've been studying token prices for the last few weeks, so this caught my eye:

> (log-transformed, because the Gemini Flash outlier spans 150× the range otherwise)

> Gemini 2.0 Flash, Dec 2024, 2,500,000

I think OP meant Gemini 2.0 Flash-Lite, which is distinct from Gemini 2.0 Flash. It's also important to consider that this tier had no successor in later models: there's no Gemini 3 Flash-Lite, and Gemini 3 Flash isn't its spiritual successor either.
  • avazhi
Most obviously AI-written post I think I’ve seen.

Have some personal pride, dude. This is literally a post written by AI hyping up AI and posted to a personal blog as if it were somebody’s personal musings. More slop is just what we need.
  • ahurmazda
    Hail Zorp
  • raphar
Why do the plutocrats believe that the entity emerging from the singularity will side with them? Really curious.
  • daveguy
    What I want to know is how bitcoin going full tulip and Open AI going bankrupt will affect the projection. Can they extrapolate that? Extrapolation of those two event dates would be sufficient, regardless of effect on a potential singularity.
  • bradgessler
    What time?
  • ck2
Does "tokens per dollar" have a "Moore's law" of doubling?

Because while machine learning is not actually "AI", an exponential increase in tokens per dollar would indeed change the world like smartphones once did.
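Whether such a doubling law holds can be estimated from just two price snapshots. A minimal sketch (the 4x-over-24-months figure below is a hypothetical placeholder, not a real vendor price history):

```python
import math

# Implied doubling time of "tokens per dollar" from two snapshots.
# If the quantity grew by `ratio` over `months`, and growth is assumed
# exponential, the doubling period is months * ln(2) / ln(ratio).
def doubling_time_months(tokens_per_dollar_then, tokens_per_dollar_now, months):
    ratio = tokens_per_dollar_now / tokens_per_dollar_then
    return months * math.log(2) / math.log(ratio)

# Hypothetical example: 4x more tokens per dollar over 24 months.
print(doubling_time_months(1.0, 4.0, 24))  # 12.0
```

A Moore's-law-style claim would need this number to stay roughly stable across many successive snapshots, not just one pair.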
  • bitwize
    Thus will speak our machine overlord: "For you, the day AI came alive was the most important day of your life... but for me, it was Tuesday."
  • brador
    100% an AI wrote this. Possibly specifically to get to the top spot on HN.Those short sentences are the most obvious clue. It’s too well written to be human.
  • peepee1982
    Who willingly reads this pompous AI slop?
  • singularfutur
    The singularity is always scheduled for right after the current funding round closes but before the VCs need liquidity. Funny how that works.
  • Johnny_Bonk
    Wow what a fun read
  • cubefox
A similar idea occurred to the Austrian-American cyberneticist Heinz von Foerster in a 1960 paper titled "Doomsday: Friday, 13 November, A.D. 2026".

There is an excellent blog post about it by Scott Alexander, "1960: The Year The Singularity Was Cancelled":

https://slatestarcodex.com/2019/04/22/1960-the-year-the-sing...
  • s32r3
    w
  • pickleRick243
    LLM slop article.
  • CGMthrowaway
    > 95% CI: Jan 2030–Jan 2041
  • boca_honey
Friendly reminder:

Scaling LLMs will not lead to AGI.
  • u8rghuxehui
    hi
  • hhh
    this just feels like ai psychosis slop man
  • s32r3
    what?
  • api
This really looks like it's describing a bubble, a mania. The tech is improving linearly, and most of the time such things asymptote. It'll hit a point of diminishing returns eventually. We're just not sure when.

The accelerating mania is bubble behavior. It'd be really interesting to have run this kind of model in, say, 1996, a few years before dot-com, and see if it would have predicted the dot-com collapse.

What this is predicting is a huge wave of social change associated with AI, not just because of AI itself but perhaps moreso as a result of anticipation of and fears about AI.

I find this scarier than unpredictable sentient machines, because we have data on what this will do. When humans are subjected to these kinds of pressures, they have a tendency to lose their shit and freak the fuck out and elect lunatics, commit mass murder, riot, commit genocides, create religious cults, etc. Give me Skynet over that crap.