Comments (199)
- sanarothe: I think there's something about the physical acts and moments of writing out or typing out the words, or doing the analysis, etc. Writing 'our', backspacing, then forward again. Writing out a word but skipping two letters ahead, crossing out, starting again. Stopping mid-paragraph to have a sip of coffee.

  What Dutch OSINT Guy was saying here resonates with me for sure - the act of taking a blurry image into the photo editing software, the use of the manipulation tools - there seems to be something about those little acts that are an essential piece of thinking through a problem.

  I'm making a process flow map for the manufacturing line we're standing up for a new product. I already have a process flow from the contract manufacturer, but that's only helpful as reference. To understand the process, I gotta spend the time writing out the subassemblies in Visio, putting little reference pictures of the drawings next to the blocks, putting the care into linking the connections and putting things in order.

  Ideas and questions seem to come out from those little spaces. Maybe it's just finally giving our subconscious a chance to speak, hah.

  L.M. Sacasas writes a lot about this from a 'spirit' point of view on [The Convivial Society](https://theconvivialsociety.substack.com/) - that the little moments of rote work - putting the dishes away, weeding the garden, the walking of the dog - are all essential parts of life. Taking care of the mundane is living, and we must attend to it with care and gratitude.
- Aurornis:

  > Participants weren’t lazy. They were experienced professionals.

  Assuming these professionals were great critical thinkers until the AI came along and changed that is a big stretch.

  In my experience, the people who outsource their thinking to LLMs are the same people who outsourced their thinking to podcasts, news articles, Reddit posts, Twitter rants, TikTok videos, and other such sources. LLMs just came along and offered them opinions on demand that they could confidently repeat.

  > The scary part is that many users still believed they were thinking critically, because GenAI made them feel smart

  I don’t see much difference between this and someone who devours TikTok videos on a subject until they feel like an expert. Same pattern, different sources. The people who outsource their thinking and collect opinions they want to hear just have an easier way to skip straight to the conclusions they want now.
- Animats:

  The big problem in open source intelligence is not in-depth analysis. It's finding something worth looking at in a flood of info.

  Here's the CIA's perspective on this subject.[1] The US intelligence community has a generative AI system to help analyze open source intelligence. It's called OSIRIS.[2] There are some other articles about it. The previous head of the CIA said the main use so far is summarization.

  The original OSINT operation in the US was the Foreign Broadcast Monitoring Service from WWII. All through the Cold War, someone had to listen to Radio Albania just in case somebody said something important. The CIA ran that for decades. Its descendant is the current open source intelligence organization. Before the World Wide Web, they used to publish some of the summaries on paper, but as people got more serious about copyright, that stopped.

  DoD used to publish The Early Bird, a daily newsletter for people in DoD. It was just reprints of articles from newspapers, chosen for stories senior leaders in DoD would need to know about. It wasn't supposed to be distributed outside DoD for copyright reasons, but it wasn't hard to get.

  [1] https://www.cia.gov/resources/csi/static/d6fd3fa9ce19f1abf2b...

  [2] https://apnews.com/article/us-intelligence-services-ai-model...
- jruohonen:

  > • Instead of forming hypotheses, users asked the AI for ideas.
  > • Instead of validating sources, they assumed the AI had already done so.
  > • Instead of assessing multiple perspectives, they integrated and edited the AI’s summary and moved on.
  >
  > This isn’t hypothetical. This is happening now, in real-world workflows.

  Amen, and OSINT is hardly unique in this respect.

  And implicitly related, philosophically: https://news.ycombinator.com/item?id=43561654
- palmotea: One way to achieve superhuman intelligence in AI is to make humans dumber.
- 0hijinks:

  It sure seems like the use of GenAI in these scenarios is a detriment rather than a useful tool if, in the end, the operator must interrogate it to a fine enough level of detail that she is satisfied. In the author's Scenario 1:

  > You upload a protest photo into a tool like Gemini and ask, “Where was this taken?”
  > It spits out a convincing response: “Paris, near Place de la République.” ...
  > But a trained eye would notice the signage is Belgian. The license plates are off.
  > The architecture doesn’t match. You trusted the AI and missed the location by a country.

  Okay. So let's say we proceed with the recommendation in the article and interrogate the GenAI tool. "You said the photo was taken in Paris near Place de la République. What clues did you use to decide this?" Say the AI replies, "The signage in the photo appears to be in French. The license plates are of European origin, and the surrounding architecture matches images captured around Place de la République."

  How do I know any better? Well, I should probably crosscheck the signage with translation tools. Ah, it's French but some words are Dutch. Okay, so it could be somewhere other than Paris. Let's look into the license plate patterns...

  At what point is it just better to do the whole thing yourself? Happy to be proven wrong here, but this same issue comes up time and time again with GenAI involved in discovery/research tasks.

  EDIT: Maybe walk through the manual crosschecks hand-in-hand? "I see some of the signage is in Dutch, such as the road marking in the center left of the image. Are you sure this image is near Place de la République?" I have yet to see this play out in an interactive session. Maybe there's a recorded one out there...
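  (The crosscheck described above - is the signage French or Dutch? - can be partly mechanized without any GenAI in the loop. A toy, stdlib-only sketch; the function-word lists here are tiny illustrative samples, not a real language-ID model, and real work would use a proper library with much larger lists.)

  ```python
  # Crude language guess for OCR'd signage text: count hits against
  # small function-word lists. Illustrative only.
  FRENCH_HINTS = {"rue", "place", "de", "la", "le", "et", "interdit", "sortie"}
  DUTCH_HINTS = {"straat", "plein", "de", "het", "en", "verboden", "uitgang", "niet"}

  def guess_signage_language(words: list[str]) -> str:
      """Return 'french', 'dutch', or 'inconclusive' for a bag of sign words."""
      tokens = {w.lower().strip(".,!?") for w in words}
      fr = len(tokens & FRENCH_HINTS)
      nl = len(tokens & DUTCH_HINTS)
      if fr == nl:
          return "inconclusive"
      return "french" if fr > nl else "dutch"

  print(guess_signage_language(["Uitgang", "verboden", "niet", "de"]))  # dutch
  ```

  A checklist of such independent, inspectable heuristics (signage language, plate format, architecture) is exactly what gets skipped when the model's one-line answer is accepted as-is.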
- pcj-github:

  This resonates with me. I feel like AI is making me learn slower.

  For example, I've been learning Rust for quite a while now. While AI has been very helpful in lowering the bar to /begin/ learning Rust, it's making it slower to achieve a working competence with it, because I always seem reliant on the LLM to do the thinking. I think I will have to turn off all the AI and struggle, struggle, struggle until I don't, just like the old days.
- LurkandComment:

  1. I've worked with analysts and done analysis for 20+ years. I have used machine learning with OSINT as far back as 2008 and use AI with OSINT today. I also work with many related analysts.
  2. Most analysts in a formal institution are professionally trained. In Europe, Canada, and some parts of the US it's a profession with degree and training requirements. Most analysts have critical thinking skills, for sure the good ones.
  3. OSINT is much more accessible because the evidence ISN'T ALWAYS controlled by a legal process, so there are a lot of people who CAN be OSINT analysts, or call themselves that, and are not professionally trained. They are good at getting results from Google and a handful of tools or methods.
  4. MY OPINION: The pressure to jump to conclusions with AI, whether financially motivated or not, comes from the perceived notion that with technology everything should be faster and easier. In most cases it is; however, just as technology is advancing, so is the amount of data. So you might not be as efficient as those around you expect, especially if they are using expensive tools, so there will be pressure to give in to AI's suggestions.
  5. MY OPINION: OSINT and analysis is a tradecraft with a method. OSINT with AI makes things possible that weren't possible before, or took way too much time to be worth it. It's more like: here are some possible answers where there were none before. Your job is to validate them now and see what assumptions have been made.
  6. These assumptions have existed long before AI and OSINT. I've seen many cases where we have multiple people look at evidence to make sure no one is jumping to conclusions and to validate the data. MY OPINION: So this lack of critical thinking might also be because there are fewer people or passes to validate the data.
  7. Feel free to ask me more.
- treyfitty: Well, if I want to first understand the basics, such as “what do the letters OSINT mean,” I’d think the homepage (https://osintframework.com/) would tell me. But alas, it does not, and a simple ChatGPT query would have told me the answer without the wasted effort.
- ridgeguy: I think this post isn't limited to OSINT. It's widely applicable, probably anywhere AI is being adopted as a new set of tools.
- sepositus:

  > Participants weren’t lazy. They were experienced professionals. But when the tool responded quickly, confidently, and clearly they stopped doing the hard part.

  This seems contradictory to me. I suspect most experienced professionals start with the premise that the LLM is untrustworthy due to its nature. If they didn't research the tool and its limitations, that's lazy. At some point, they stopped believing in this limitation and offloaded more of their thinking to it. Why did they stop? I can't think of a single reason other than being lazy. I don't accept the premise that it's because the tool responded quickly, confidently, and clearly. It did that the first 100 times they used it, when they were probably still skeptical.

  Am I missing something?
- BrenBarn:

  It's become almost comical to me to read articles like this and wait for the part that, in this example, comes pretty close to the beginning: "This isn’t a rant against AI."

  It's not? Why not? It's a "wake-up call", it's a "warning shot", but heaven forbid it's a rant against AI.

  To me it's like someone listing off deaths from fentanyl, how it's destroyed families and ruined lives, but then tossing in a disclaimer that "this isn't a rant against fentanyl". In my view, the way people use and are drawn into AI has all the hallmarks of a spiral into drug addiction. There may be safe ways to use drugs, but "distribute them for free to everyone on the internet" is not among them.
- Animats:

  You have to use machine filtering of some kind, because there's too much information.

  A director of the NSA, pre-9/11, once remarked that the entire organization produced about two pieces of actionable intelligence a day, and about one item a week that reached the President. An internal study from that era began: "The U.S. Government collects too much information".

  But that was from the Cold War era, when the intelligence community was struggling to find out basic things such as how many tank brigades the USSR had. After 9/11, the intel community had to try to figure out what little terrorist units with tens of people were up to. That required trawling through far too much irrelevant information.
- zora_goron:

  I wrote about some similar observations in the clinical domain -- I call it the "human -> AI reasoning shunt" [0]. Explicitly requesting an AI tool to perform reasoning is one thing, but a concern I have is that, with the increasing prevalence of these AI tools, even tasks that theoretically are not reasoning-based (i.e., helping write clinical notes or answering simple questions) can surreptitiously offload some degree of reasoning away from humans by allowing these systems to determine which bits of information are important or not.

  [0] https://samrawal.substack.com/p/the-human-ai-reasoning-shunt
- tqi: It's been less than 3 years, yet this guy is already confidently predicting a "collapse of critical thinking." I'm sure that is the product of rational analysis and not confirmation bias...
- ghssds: I like how all these articles miss the elephant in the room: using a chatbot as an assistant means offering your data, thoughts, insights, and focus of interests to a corporation that's at best neutral and at worst hostile. Moreover, that corporation may also share anything with business partners, governments, and law enforcement institutions with unknown objectives.
- BariumBlue: Good point in the post about confidence - most people equate confidence with accuracy - and since AIs always sound confident, they always sound correct.
- black_puppydog: I'd argue that for a profession that has existed for quite some time, "since ChatGPT appeared" isn't in any way "slow".
- axegon_:

  OSINT is a symptom of it. When GPT-2 came along, I was worried that at some point the internet would get spammed with AI-crap. Boy, was I naive...

  I see this incredibly frequently and I get a ton of hate for saying this (including here on HN): LLMs and AI in general are a perfect demonstration of a shiny new toy. What people fail to acknowledge is that the so-called "reasoning" is nothing more than predicting the most likely next token, which works reasonably well for basic one-off tasks. And I have used LLMs in that way: "give me the ISO 3166-1 of the following 20 countries:". That works. But as soon as you throw something more complex at it and start analyzing the results (which look reasonable at first glance), the picture becomes very different. "Oh just use RAGs, are you dumb?", I hear you say. Yeah?

  ```python
  class ParsedAddress(BaseModel):
      street: str | None
      postcode: str | None
      city: str | None
      province: str | None
      country_iso2: str | None
  ```

  Response:

  ```json
  {
      "street": "Boulevard",
      "postcode": 12345,
      "city": "Cannot be accurately determined from the input",
      "province": "MY and NY are both possible in the provided address",
      "country_iso2": "US"
  }
  ```

  Sure, I can spend 2 days trying out different models and tweaking the prompts and see which one gets it, but I have 33 billion other addresses and a finite amount of time.

  The issue occurs in OSINT as well: a well-structured answer lures people into a mental trap. Anthropomorphism is something humans have fallen for since the dawn of mankind, and we're doing so yet again with AI. The thought that you have someone intelligent nearby with god-like abilities can be comforting, but... um... LLMs don't work like that.
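  (The failure mode in that response - a wrong type in `postcode`, hedging prose stuffed into `city` and `province` - is at least mechanically detectable before the record is trusted. A minimal stdlib-only sketch of that kind of post-hoc check; the refusal-phrase list is an illustrative guess, not exhaustive, and field names simply mirror the schema above.)

  ```python
  # Validate an LLM's "structured" address output before trusting it.
  REQUIRED_FIELDS = ("street", "postcode", "city", "province", "country_iso2")
  REFUSAL_MARKERS = ("cannot be", "not possible", "both possible", "unable to")

  def validate_parsed_address(resp: dict) -> list[str]:
      """Return a list of problems; an empty list means the record looks usable."""
      problems = []
      for field in REQUIRED_FIELDS:
          value = resp.get(field)
          if value is None:
              continue  # None is an allowed "unknown"
          if not isinstance(value, str):
              problems.append(f"{field}: expected str or None, got {type(value).__name__}")
          elif any(marker in value.lower() for marker in REFUSAL_MARKERS):
              problems.append(f"{field}: model hedged instead of answering")
      return problems

  resp = {
      "street": "Boulevard",
      "postcode": 12345,
      "city": "Cannot be accurately determined from the input",
      "province": "MY and NY are both possible in the provided address",
      "country_iso2": "US",
  }
  for p in validate_parsed_address(resp):
      print(p)  # flags postcode's type plus two hedged fields
  ```

  At 33 billion rows this won't fix the model, but it does turn "looks reasonable at first glance" into a measurable reject rate.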
- nottorp:

  Why OSINT? That goes for any domain.

  Besides, "OSINT" has been busy posting scareware for years, even before "AI".

  There's so much spam that you can't figure out what the real security issues are. Every other "security article" is about "an attacker" that "could" obtain access if you were sitting at your keyboard and they were holding a gun to your head.
- ringeryless:

  I question the notion that such tools are necessary or admissible in my daily life.

  Mere observation of others has shown me the decadence that results from even allowing such "tools" into my life at all. (Who or what is the tool being used?)

  I have seen zero positive effects from the cynical application of such tools in any aspect of life. The narrative that we "all use them" is false.
- ramonverse:

  > Not because analysts are getting lazy, but because AI is making the job feel easier than it actually is.

  But all the examples feel like people being really lazy, e.g.:

  > Paste the image into the AI tool, read the suggested location, and move on.

  > Ask Gemini, “Who runs this domain?” and accept the top-line answer.
- torginus: Most cybersecurity is just a smoke show anyway; presentation matters more than content. AI is just as good at security theater as humans are.
- ringeryless:

  Aka, I have no problem being explicitly anti-AI, as a bad idea to begin with. This is what I think: that it is a foolish project from the get-go.

  Techne is the Greek word for HAND.
- Terr_:

  > What Dies When Tradecraft Goes Passive?

  Eventually, Brazil (1985) happens, to the detriment of Archibald [B]uttle, where everyone gives unquestioning trust to a flawed system.
- vincnetas:

  I tried one exercise from the article: asking Gemini to identify the owner of a domain (my domain). Gemini was very confident and very wrong.

  I bet any OSINT person would have had my name and contact info in half an hour.
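  (For domain ownership, the non-AI route is direct: query WHOIS yourself and read the registrant fields. A hedged sketch using the raw protocol - RFC 3912, plain text on TCP port 43; the canned record below is a made-up example for the offline demo, and note that many registries redact registrant data these days.)

  ```python
  import socket

  def whois_query(domain: str, server: str = "whois.iana.org") -> str:
      """Send a raw WHOIS query (RFC 3912: plain text on TCP port 43)."""
      with socket.create_connection((server, 43), timeout=10) as sock:
          sock.sendall(domain.encode() + b"\r\n")
          chunks = []
          while chunk := sock.recv(4096):
              chunks.append(chunk)
      return b"".join(chunks).decode(errors="replace")

  def parse_whois_fields(text: str) -> dict[str, str]:
      """Pull 'Key: value' lines out of a WHOIS response (first value wins)."""
      fields: dict[str, str] = {}
      for line in text.splitlines():
          if ":" in line and not line.lstrip().startswith("%"):
              key, _, value = line.partition(":")
              key, value = key.strip(), value.strip()
              if key and value and key not in fields:
                  fields[key] = value
      return fields

  # Offline demo on a canned, illustrative response:
  sample = """\
  Domain Name: example.org
  Registrant Name: J. Doe
  Registrant Email: jdoe@example.org
  """
  fields = parse_whois_fields(sample)
  print(fields["Registrant Name"])
  ```

  IANA's server typically refers you on to the registry's own WHOIS server, so a real lookup follows that referral; the point is that the primary source is a few dozen lines away, no model required.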
- Daub: Am I the only one who had to search for what OSINT is an acronym for?
- ingohelpinger: It's true, so often ChatGPT has to apologize because it was wrong. lol
- Barrin92:

  > “Paris, near Place de la République.” It sounds right. You move on. But a trained eye would notice the signage is Belgian. The license plates are off. The architecture doesn’t match. You trusted the AI and missed the location by a country.

  I genuinely hope that if you're a professional intelligence analyst, it doesn't take a trained eye to distinguish Paris from Belgium. Genuinely, every day there are articles like this: the post about college students at elite universities who can't read, tariff policy by random number generator, programmers who struggle to solve first-semester CS problems, intelligence analysts who can't do something you can do if you play GeoGuessr as a hobby. Are we just getting dumber every year? It feels like we've been falling off a cliff over the last decade or so.

  Like, the entire article boils down to "verify information and use critical thinking". You'd think someone working in intelligence and law enforcement, which this author trains, would know this when they get hired?
- petesergeant: Relevant today, as I unpick some unit tests I let AI write: they looked very plausible at first and second glance, but turned out to test nothing of value when properly examined.
- roenxi:

  This article seems a bit weird because it doesn't talk about whether the quality of the analysis went up or down afterwards.

  To pick an extreme example, programmers using a strongly typed language might not bother manually checking for potential type errors in their code and leave it to the type checker to catch them. If the type checker turns out to be buggy, then their code may fail in production due to their sloppiness. However, we expect the code to eventually be free of type errors to a superhuman extent, because they are using a tool whose strengths cover their personal weaknesses.

  AI isn't as provably correct as type checkers, but it's pretty good at critical thinking (superhuman compared to the average HN argument), and human analysts must also routinely leave a trail of mistakes in their wake. The real question is what influence the AI has on quality, and I don't see why the assumption is that it is negative. It might well be; but the article doesn't seem to go into that in any depth.
- cess11:

  "OSINT" has been in a rather quick collapse in that area for quite some time; many participants under that label are basically propaganda outlets for whatever state or other.

  Maybe the article addresses that; I'm not permitted to read it, likely because I'm using IPv6.

  Forensic Architecture is a decent counterexample, however. They've been using machine learning and computer synthesis techniques for years without dropping in quality.
- ImHereToVote:

  The trouble with OSINT is that they often take the opinions of "good" government officials and journalists at face value. This sort of lazy thinking doesn't miss a beat when it comes to taking the opinions of an LLM at face value.

  Why not? It sounds mostly the same. The motivation to believe AI is exactly the same as the motivation to believe government officials and journalists.
- voidhorse:

  The main takeaway of this whole LLM chatbot nonsense, to me, is how gullible people are and how low the bar is. These tools are brand new and have proven kinks (hallucinations, for example). But instead of being, rightly in my view, skeptical, the majority of people completely buy into the hype and already have full automation bias when it comes to these tools. They blindly trust the output and merrily push forth AI-generated, incorrect garbage that they themselves have no expertise or ability to evaluate. It's like everyone is itching to buy a bridge.

  In some sense, I suppose it's only natural. Much of the modern economy sustains itself on little more than hype and snake oil anyway, so I guess it's par for the course. Still, it's left me a bit incredulous, particularly when people I thought were smart and capable of being critical seemingly adopt this nonsense without batting an eye. Worse, they all hype it up even further. Makes me feel like the whole LLM business is some kind of Ponzi scheme, given how willingly users will shill for these products for nothing.
- FrankWilhoit: A crutch is one thing. A crutch made of rotten wood is another.
- smashah:

  At the end of the day it is people who are doing OSINT, and their self/AI confidence is a reflection of their fallibility, just as being manipulated by intelligence operatives in their Discord servers peer-pressures them into pushing a certain narrative. OSINT should be about uncovering objective truth in a sea full of lies, in a storm of obfuscation, through a tsunami of misinformation caused by an earthquake of disinformation. Now these OSINT people also need to battle the siren song of clout (and being first).

  I doubt anyone can do it perfectly every time; it requires a posthuman level of objectivity and a level of information quality that hardly ever exists.
- nonrandomstring:

  > This isn’t a rant against AI. I use it daily

  It is, but it adds a disingenuous apologia.

  Not wishing to pick on this particular author, or even this particular topic, but it follows a clear pattern you can find everywhere in tech journalism: some really bad thing X is happening. Everyone knows X is happening. There is evidence X is happening. But I am *not* arguing against X, because that would brand me a Luddite/outsider/naysayer... and we all know a LOT of money and influence (including my own salary) rests on nobody talking about X.

  Practically every article on the negative effects of smartphones or social media printed in the past 20 years starts with the same chirpy disavowal of the author's actual message. Something like: "Smartphones and social media are an essential part of modern life today... but". That always sounds like those people who say "I'm not a racist, but..."

  Sure, we get it, there's a lot of money and powerful people riding on "AI". Why water down your message of genuine concern?
- AIorNot:

  This is another silly rant against AI tools - one that doesn't offer useful or insightful suggestions on how to adapt, or an informed study of the areas of concern, and one that capitalizes on the natural worries we have on HN because of our generic fears about critical thinking being lost when AI takes over our jobs. Rather like concerns about the web in the pre-internet age and SEO in the digital marketing age.

  OSINT only exists because of internet capabilities and Google search - i.e., someone had to learn how to use those new tools just a few years ago and apply critical thinking.

  AI tools and models are rapidly evolving, with more in-depth capabilities appearing in the models. All this means the tools are hardly set in stone, and the workflows will evolve with them. It's still up to human oversight to evolve with the tools; the skill of humans overseeing AI is something that will develop too.