Comments (105)
- ripe: I really like this author's summary of the 1983 Bainbridge paper about industrial automation. I have often wondered how to apply those insights to AI agents, but I was never able to summarize it as well as OP.

  Bainbridge by itself is a tough paper to read because it's so dense. It's just four pages long and worth following along: https://ckrybus.com/static/papers/Bainbridge_1983_Automatica...

  For example, see this statement in the paper: "the present generation of automated systems, which are monitored by former manual operators, are riding on their skills, which later generations of operators cannot be expected to have."

  This summarizes the first irony of automation, which is now familiar to everyone on HN: using AI agents effectively requires an expert programmer, but to build the skills of an expert programmer, you have to do the programming yourself.

  It's full of insights like that. Highly recommended!
- nuancebydefault: The article discusses essentially two new problems with using agentic AI:

  - When one of the agents does something wrong, a human operator needs to be able to intervene quickly and to provide the agent with expert instructions. But since experts no longer execute the bare tasks themselves, they quickly forget parts of their expertise. This means the experts need constant training, leaving them little time to oversee the agents' work.

  - Experts must become managers of agentic systems, a role they are not familiar with, so they no longer feel at home in their job. This problem is harder for people managers (the experts' managers) to recognize, since they rarely experience it first hand.

  The irony is that AI provides efficiency gains which, as they become more widely adopted, become more problematic because they undermine the necessary human in the loop.

  I think this all means that automation is not taking away everyone's job; it makes things more complicated, so humans can still compete.
- z_: This is a thought-provoking piece.

  "But at what cost?"

  We've all accepted calculators into our lives as being faster and correct when used correctly (minus Intel tomfoolery), but we emphasize the need to know how to do the math in educational settings.

  Any post-education adult will confirm that, when confronted with an irregular math problem (or any rusty skill), there is a wait time to revive the ability.

  Programming automation having that potential for skill decay AND being on the critical path is... worth thinking about.
- jiehong: The aviation industry has dealt with this irony of automation for years: autopilots can actually land the plane in many cases, and do fly the plane for most of the cruise.

  Yet pilots are constantly trained on realistic scenarios, and are expected to land airplanes manually every month (and to handle take-offs too).

  This ensures pilots maintain their skills, while the autopilot helps most of the time.

  On top of that, flight controls are often already semi-automatic, i.e., assisted (but not by LLMs!), so it's a complex comparison.
- Animats: Bainbridge [1] is interesting, but dated. A more useful version of that discussion from slightly later is "Children of the Magenta" [2], an airline chief pilot talking to his pilots about cockpit automation and how to use it. Requires a basic notion of aviation jargon.

  There's been progress since then. Although the details are not widely publicized, enough pilots of the F-22, F-35, or the Gripen have talked about what modern fighter cockpit automation is like. The real job of fighter pilots is to fight and win battles, not drive the airplane. A huge amount of effort has been put into simplifying the airplane-driving job so the pilot can focus on killing targets. The general idea today is that the pilot puts the pointy end in the right direction and the control systems take care of the details. An F-22 pilot has been quoted as saying that the F-22 is far less fussy than a Cessna as a flying machine.

  For the F-35, which has a VTOL configuration (B) and a carrier-landing configuration (C), much effort was put into making VTOL landing and carrier landing easy. Not because pilots can't learn to do it, but because training tended to obsess over those tasks. The hard part of flying the Harrier (the only previous successful VTOL fighter) was learning to land the unstable beast without crashing. There were still a lot of Harrier crashes. The hard part of Naval aviator training is landing on a carrier deck. Neither of these tasks has anything to do with the real job of taking a bite out of the enemy, but they consumed most of the training time. So, for the F-35, both of those tasks have enough computer-added stability to make them much easier.

  One of the stranger features of the F-35 is that it has two main controls, called "inceptors", which correspond to throttle and stick. In normal flight, they mostly work like throttle and stick. But in low-speed hover, the "throttle" still controls speed while the "stick" controls attitude, even though the "stick" is affecting engine speed and the "throttle" is affecting control surfaces in that mode. So the pilot doesn't have to manage the strange transitions of a VTOL craft directly (see the toy sketch after the links below).

  This refocuses pilot training on using the sensors and weapons to do something to the enemy. Classic training is mostly about the last few minutes of getting home safely.

  As AI for programming advances, we should expect to devote more user time to analyzing the tactical problem, rather than driving the bus.

  [1] https://ckrybus.com/static/papers/Bainbridge_1983_Automatica...

  [2] https://www.youtube.com/watch?v=5ESJH1NLMLs
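  To make the inceptor idea concrete, here is a toy sketch (Python, entirely illustrative, nothing like real flight software): the same two inceptors map to different actuators depending on flight mode, so the pilot's mental model of "throttle = speed, stick = attitude" stays constant.

  ```python
  # Toy illustration only: mode-dependent control mapping.
  # In normal flight the inceptors behave conventionally; in hover the
  # underlying actuators are swapped, but the pilot-facing meaning
  # ("throttle" commands speed, "stick" commands attitude) is unchanged.

  def apply_inceptors(mode: str, throttle: float, stick: float) -> dict[str, float]:
      if mode == "hover":
          # "Speed" is now produced by control surfaces / nozzles,
          # "attitude" by engine and lift-fan thrust.
          return {"control_surfaces": throttle, "engine_thrust": stick}
      # Conventional mapping in normal flight.
      return {"engine_thrust": throttle, "control_surfaces": stick}
  ```

  The automation's whole job is keeping that abstraction stable while the hard part changes underneath, which is exactly the pitch for coding agents too.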
- didibus: A good read, but it reminds me that people see the programmer as being there to identify when the AI makes an error or a mistake.

  In my use of AI agents, as a programmer and for other work, I would say that while you do have to look for mistakes and errors, most of my time is still spent programming the AI.

  The AI agent has no idea what it must produce, what it's meant to do, when it can alter something existing to enable something new, and so on. And this is true for both functional and non-functional requirements.

  Unlike in traditional manufacturing, where the pipeline has already been built for a precise output, the CAD designs are done, the simulations have been run, and everything is calibrated for what you want, here most of the work remains that of programming the machine.
- dsjoerg:

  > Typically, before people are put in a leadership role directing humans, they will get a lot of leadership training teaching them the skills and tools needed to lead successfully.

  I question this.
- everdrive: I can feel the skill atrophy creeping in. My very first instinct is to go use the LLM. Much like forcing yourself to exercise, eat right, and avoid social media and other distractions, I think this will be a new modern skillset: do you have the discipline to avoid becoming useless without an LLM? A small few will be great at this, the middle of the bell curve will do "well enough," and you know the story for the rest.
- sublimefire: Good discussion of the paper and its observations and ironies. A thing to note is that we already have software factories, with a bunch of automation in place and folks trained to deal with incidents. Pools of agents just elevate what we currently have, but the tools are still lacking severely. IMO the tools need to improve for us to move forward, as it is difficult to observe the decisions of agents when they fall apart.

  Also, by and large the current AI tools are not in the critical path yet, except perhaps those drones that lock onto targets to eliminate them in case of interference, and even then it is ML. Agents cannot be on that path yet, due to predictability challenges.
- Animats: There are a few issues here.

  It's useful to think about AI-driven coding assistants in terms of the SAE levels of automation for automated driving:

  - Level 0: totally manual.
  - Level 1: a bit of assistance, such as cruise control.
  - Level 2: speed and steering control that requires constant supervision by a human driver. This is where most of the commercial systems are now.
  - Level 3: Level 2, but reliable enough that the human driver doesn't need to supervise constantly, and able to bring the vehicle to a safe stop by itself. Mercedes-Benz Drive Pilot is supposedly Level 3. Handoff between computer and human remains a problem, and the human is still liable for accidents.
  - Level 4: full automation, but not under all conditions. Waymo is Level 4; the human just selects the destination.
  - Level 5: full automation, at least as capable as human drivers under all conditions. Not yet seen.

  What we're looking at with the various programming-assistance AI systems is Level 2 or Level 3 competence. These are the most troublesome levels. Who's in charge? Who's to blame?

  The need for such programming-assistance systems may be transient, as it clearly is in automotive. Eventually, everybody in automotive will get to Level 4 or better, or drop out due to competitive pressure.
- abrookewood: TROJAN WARNING: My AV is reporting issues with this link:

  15/12/2025 2:59:56 PM;HTTP filter;file;https://cdn.jsdeliver.net/npm/mathjax@3.2.2/es5/tex-chtml.js... trojan;connection terminated;
- steveBK123: I think for most non-coding tasks we are still in the "convincing liar" stage, and not even at the "it's right 99.9% of the time and humans need to quickly detect the 0.1% of errors" problem. I think a lot of the HN crowd misses this because they are programmers using it for programming.

  I work at a firm that has given AI tooling to non-developer, data-analyst type people who otherwise live and die in Excel. Much of their day job involves reading PDFs. I occasionally use some of the firm's AI tooling for PDF summarizing/parsing/interrogation type tasks and remain consistently underwhelmed.

  Take 10 PDFs, each with a simple 30-row table under the same title, and it ends up puking on 3 or 4 out of 10 with silent failures: row drops, duplicated data, etc. When you point out that it has missed rows, it goes back and duplicates rows to get to the correct row count.

  Using it to interrogate standard company-filings PDFs that it had been specially trained on, it gave very convincing answers which were wrong, because it had silently truncated its search context to only recent years' financial filings. Nowhere did it show this limitation to the user. It only became apparent after researching the 4th or 5th company, when it decided to caveat its answer with its knowledge window. This invalidated the previous answers, since questions such as "when was the first X" or "have they ever reported Y" were operating on incomplete information.

  Most users of these tools are not that technical, and are going to be much more naive in taking the answers as fact without considering the context.
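  To make the failure mode concrete, here is a minimal sketch (Python; the 30-row expectation comes from the scenario above, everything else is a hypothetical illustration) of the kind of sanity check that surfaces silent row drops and duplications instead of letting them pass:

  ```python
  # Minimal sketch: validate an LLM-extracted table against known structure
  # rather than trusting it blindly. Assumes each extracted table should
  # have exactly EXPECTED_ROWS rows and contain no duplicate rows.

  EXPECTED_ROWS = 30

  def validate_extraction(rows: list[dict]) -> list[str]:
      """Return a list of problems; an empty list means the table passed."""
      problems = []
      if len(rows) != EXPECTED_ROWS:
          problems.append(f"expected {EXPECTED_ROWS} rows, got {len(rows)}")
      seen = set()
      for row in rows:
          key = tuple(sorted(row.items()))
          if key in seen:
              problems.append(f"duplicate row: {row}")
          seen.add(key)
      return problems
  ```

  Nothing here fixes the extraction; it just refuses to fail silently, which is exactly the part these tools currently skip.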
- jinwoo68:

  > Most companies are efficiency-obsessed.

  But what most of them do is not become more efficient, but be seen to be more efficient. The main reason they are so obsessed with AI is that they want to send the signal that they are pursuing efficiency, whether or not they succeed.
- bdangubic:

  > "If it does not work properly, you need better prompts" is the usual response if someone struggles with directing agents successfully

  So much this!
- demorro: These observations were made 40 years ago. I suspect we have now solved many of these problems and have close to fully automated manufacturing and flight systems, or close enough that the training trade-off is worth it.

  However, this took 40 years and actual fatalities. We should keep that in mind as we push the AI acceleration pedal down ever harder.
- jennyholzer2:

  > Most companies are efficiency-obsessed. Hence, they also expect AI solutions to increase "productivity", i.e., efficiency, to a superhuman level. If a human is meant to monitor the output of the AI and intervene if needed, this requires that the human needs to comprehend what the AI solution produced at superhuman speed – otherwise we are down to human speed. This presents a quandary that can only be solved if we enable the human to comprehend the AI output at superhuman speed (compared to producing the same output by traditional means).
- analog8374: I spent years creating automated drawing machines. But I can still draw better than any of them with my hand. Not as quickly tho.
- throwaway613745: If your process is shit, you're just automating shit at lightning speed. If you're bad at your job, you're automating that at lightning speed.

  You need to have good business processes and be good at your job without AI in order to have any chance in hell of being successful with it. The idea that you can just outsource your thinking to the AI and no longer need to actually understand or learn anything is complete delusion.
- wesammikhail: Out of curiosity, does anyone know of a good writeup or blog post by someone in the industry about reducing orchestration error rates? I'd love to read more about the topic and am looking for a few good resources.