Comments (12)
- whatever1: I don't understand why LLMs get a free pass when all of the existing businesses have to play by the rules. Businesses have to comply with IP, privacy, HIPAA, security, and safety laws, to name just a few. NONE of these apply to the LLMs. Of course I can now build and deploy an app to hospitals in a weekend, since I can circumvent all of the difficult parts using the magic LLMs. If asked why, the response is "It's AI!"
- heyethan: The failure mode here seems less about capability and more about interaction. Language turns coordination into a moving target.
- iqihs: As someone who works in the cybersecurity space and recently obtained my CISSP designation, I am left wondering when the pedagogy of my field will expand to include a separate domain dedicated to AI agent safety and security best practices. It really does feel like the way we train people in cyber is way behind the pace of development of agentic AI, robotics, etc.
- manmal: The TL;DR is that current agents are as problematic as many of us already know they are:
> unauthorized compliance with non-owners, disclosure of sensitive information, execution of destructive system-level actions, denial-of-service conditions, uncontrolled resource consumption, identity spoofing vulnerabilities, cross-agent propagation of unsafe practices, and partial system takeover
- e7h4nz: In this problem domain, I believe humanity is still at a very early stage. What we can do is treat the agent and its operating environment as a "black box" and audit all incoming and outgoing network request traffic. This approach is similar to DLP (data leak prevention) strategies in enterprise security. Although we cannot guarantee that every single network request is secure, we can probabilistically improve safety by adjusting network defense rules and conducting post-event audits on traffic flows.
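The "black box plus egress audit" idea above can be sketched as a small allowlist-and-log gate. This is a hypothetical illustration, not a real DLP product: the class name, hosts, and leak patterns are all made up, and production gateways would sit at the proxy/TLS layer rather than in application code.

```python
import re
from dataclasses import dataclass, field


@dataclass
class EgressAuditor:
    """Hypothetical DLP-style gate: every outbound request from the
    agent sandbox is checked here before it reaches the network."""
    allowed_hosts: set
    # Regexes for payloads that look like sensitive data leaving
    # (SSN-like numbers, API-key markers) -- illustrative only.
    leak_patterns: tuple = (r"\b\d{3}-\d{2}-\d{4}\b", r"(?i)api[_-]?key")
    audit_log: list = field(default_factory=list)

    def check(self, host: str, body: str) -> bool:
        flagged = [p for p in self.leak_patterns if re.search(p, body)]
        allowed = host in self.allowed_hosts and not flagged
        # Record every request, allowed or not, for post-event audit.
        self.audit_log.append({"host": host, "flagged": flagged,
                               "allowed": allowed})
        return allowed


auditor = EgressAuditor(allowed_hosts={"api.internal.example"})
ok = auditor.check("api.internal.example", '{"query": "weather"}')
bad_host = auditor.check("evil.example", '{"query": "weather"}')
leaky = auditor.check("api.internal.example", "api_key=abc123")
```

As the comment notes, this only improves safety probabilistically: the patterns will miss encoded or paraphrased leaks, which is why the full audit log matters for after-the-fact review.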
- cyanydeez: This is begging to be turned into a YouTube-style "Real World," where you pit 12 humans against 12 AIs and they're only allowed to interact through CLIs. Then you slowly reveal they're all humans.
- AIorNot: All this to say: OpenClaw is hella insecure and unreliable? I mean, all of us in the space already know this, but I suppose it's important to showcase the problems of systems of agents.
- EGreg: This is exactly why I built Safebots, to prevent problems with agents. This article shows how it can address every security issue with agents that came up in the study: https://community.safebots.ai/t/researchers-gave-ai-agents-e...
- hackermeows: Your IQ > model IQ: you will have good results, since you have the ability to detect when the model is wrong. Your IQ < model IQ: god bless you.