Comments (171)
- detente18
LiteLLM maintainer here. This is still an evolving situation, but here's what we know so far:
1. Looks like this originated from the Trivy used in our CI/CD - https://github.com/search?q=repo%3ABerriAI%2Flitellm%20trivy... https://ramimac.me/trivy-teampcp/#phase-09
2. If you're on the proxy Docker, you were not impacted. We pin our versions in requirements.txt.
3. The package is in quarantine on PyPI - this blocks all downloads.
We are investigating the issue and seeing how we can harden things. I'm sorry for this.
- Krrish
- syllogism
Maintainers need to keep a wall between package publishing and public repos. Currently what people are doing is configuring the public repo as a Trusted Publisher directly. This means you can trigger the package publication from the repo itself, and the public repo is a huge surface area. Configure the CI to make a release with the artefacts attached. Then have an entirely private repo, one that can't be triggered automatically, as the publisher. The publisher repo fetches the artefacts and does the pypi/npm/whatever release.
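The split described above could look roughly like the workflow below, living in the private publisher repo. This is a sketch, not a vetted pipeline: the repo names, secret name, and trigger are hypothetical, and only the two actions/commands used (`gh release download`, `pypa/gh-action-pypi-publish`) are real tooling.

```yaml
# Publish job in a *private* repo, separate from the public source repo.
# It pulls the artefacts attached to a public release and uploads them to
# PyPI via Trusted Publishing (OIDC) -- the public repo itself can never publish.
name: publish
on:
  workflow_dispatch:        # manual trigger only; nothing in the public repo can fire this
    inputs:
      tag:
        description: "Release tag whose artefacts should be published"
        required: true
permissions:
  id-token: write           # required for PyPI Trusted Publishing
  contents: read
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - name: Fetch release artefacts from the public repo
        run: gh release download "${{ inputs.tag }}" --repo example-org/public-repo --dir dist
        env:
          GH_TOKEN: ${{ secrets.RELEASE_READ_TOKEN }}
      - name: Publish to PyPI
        uses: pypa/gh-action-pypi-publish@release/v1
```

The key property: a compromised public repo can attach bad artefacts to a release, but it cannot push them to PyPI, because the Trusted Publisher identity belongs to a repo attackers can't trigger.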
- jFriedensreich
We just can't trust dependencies and dev setups. I wanted to say "anymore", but we never could. Dev containers were never good enough: too clumsy and too little isolation. We need to start working in full sandboxes with defence in depth that have real guardrails and UIs - VM isolation plus container primitives, allow lists, egress filters, seccomp, gVisor and more - but with much better usability. It's the same set of requirements we have for agent runtimes; let's use this momentum to make our dev environments safer! In such an environment the container would crash, we'd see the violations, delete it, and not have to worry about it. We should treat this as an everyday possibility, not as an isolated security incident.
- eoskx
Also, not surprising that LiteLLM's SOC2 auditor was Delve. The story writes itself.
- ramimac
This is tied to the TeamPCP activity over the last few weeks. I've been responding and keeping an up-to-date timeline. I hope it might help folks catch up and contextualize this incident: https://ramimac.me/trivy-teampcp/#phase-09
- hiciu
Besides the main issue here, and the owner's account possibly being compromised as well, there are like 170+ low-quality spam comments in there. I would expect a better spam detection system from GitHub. This is hardly acceptable.
- intothemild
I just installed Harbor, and it instantly pegged my CPU. I was lucky to see my processes before the system hard-locked. Basically it forkbombed `grep -r rpcuser\rpcpassword` processes, trying to find crypto wallets or something. I saw that they spawned from the harness, and killed it. Got lucky; no backdoor installed here, from what I could make out of the binary.
- f311a
Their previous release would be easily caught by static analysis. The pth injection is a novel technique. Run all your new dependencies through static analysis, and don't install the latest versions. I implemented static analysis for Python that detects close to 90% of such injections: https://github.com/rushter/hexora
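This is not hexora's actual implementation, but the core idea of such a scanner can be sketched in a few lines: walk a module's AST and flag calls to the dynamic-execution builtins that first-stage loaders lean on. Real tools use far richer signatures, dataflow, and deobfuscation; this toy only matches call names.

```python
import ast

# Builtins/methods whose presence in a fresh dependency deserves a closer look.
SUSPICIOUS = {"exec", "eval", "compile", "__import__", "b64decode"}

def audit(source: str) -> list[str]:
    """Return one finding per call to a suspicious name.

    A toy stand-in for real scanners: name matching only,
    no dataflow analysis or string deobfuscation.
    """
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            fn = node.func
            name = fn.id if isinstance(fn, ast.Name) else getattr(fn, "attr", "")
            if name in SUSPICIOUS:
                findings.append(f"line {node.lineno}: call to {name}()")
    return findings

# Typical first-stage loader shape: decode an embedded blob, then exec it.
sample = "import base64\nexec(base64.b64decode(blob))\n"
print(audit(sample))
```

Running a check like this over a package's wheel contents before installing it is cheap; the hard part, as the 90% figure implies, is keeping false positives and obfuscation-driven misses in check.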
- saidnooneever
Just wanna state this can literally happen to anyone within this messy package ecosystem. The maintainer seems to be doing his best. If you have tips, I am sure they are welcome. Snarky remarks are useless; don't be a sourpuss. If you know better, help the remediation effort.
- rdevilla
It will only take one agent-led compromise to get some Claude-authored underhanded C into llvm or linux or something, and then we will all finally need to reflect on trusting trust at last and forevermore.
- bratao
Looks like the founder and CTO's account has been compromised: https://github.com/krrishdholakia
- cedws
This looks like the same TeamPCP that compromised Trivy. Notice how the issue is full of bot replies; it was the same in Trivy's case. This threat actor seems to be very quickly capitalising on stolen credentials. Wouldn't be surprised if they're leveraging LLMs to do the bulk of the work.
- shay_ker
A general question - how do frontier AI companies handle scenarios like this in their training data? If they train their models naively, then training data injection seems very possible and could make models silently pwn people. Do the labs label code versions with an associated CVE to mark them as compromised (telling the model what NOT to do)? Do they build adversarial RL environments to teach what's good/bad? I'm very curious, since it's inevitable some pwned code ends up as training data no matter what.
- eoskx
This is bad, especially from a downstream-dependency perspective. DSPy and CrewAI also import LiteLLM, so you might not be using LiteLLM as a gateway but still be importing it via those libraries for agents, etc.
- Shank
I wonder at what point ecosystems just force a credential rotation. Trivy and now LiteLLM have probably cleaned out a sizable number of credentials, and now it's up to each person and/or team to rotate. TeamPCP is sitting on a treasure trove of credentials and, based on this, they're probably carefully mapping out what they can exploit and building payloads for each one. It would be interesting if Python, npm, RubyGems, etc. all just decided to initiate an ecosystem-wide credential reset. On one hand, it would be highly disruptive. On the other hand, it would probably stop the damage from spreading.
- nickvec
Looks like all of the LiteLLM CEO's public repos have been updated with the description "teampcp owns BerriAI": https://github.com/krrishdholakia
- santiagobasulto
I blogged about this last year [0]:
> ### Software Supply Chain is a Pain in the A*
> On top of that, the room for vulnerabilities and supply chain attacks has increased dramatically
AI is not about fancy models, it's about plain old software engineering. I strongly advised our team of "not-so-senior" devs not to use LiteLLM or LangChain or anything like that, and to just stick to `requests.post('...')`.
[0] https://sb.thoughts.ar/posts/2025/12/03/ai-is-all-about-soft...
- sschueller
Does anyone know a good alternative project that works similarly (share multiple LLMs across a set of users)? LiteLLM has been getting worse and keeps trying to get me to upgrade to a paid version. I also had issues with creating tokens for other users, etc.
- mohsen1
If it was not spinning up so many Python processes and overwhelming the system with them (friends found out from the fan noise that it was consuming too much CPU!), it would have been much more successful. Similar to the xz attack in that respect. The payload:
- does a lot of CPU-intensive work
- spawns a background Python process
- decodes the embedded stage
- runs the inner collector
- if data was collected:
  - writes the attacker's public key
  - generates a random AES key
  - encrypts the stolen data with AES
  - encrypts the AES key with the attacker's RSA pubkey
  - tars both encrypted files
  - POSTs the archive to a remote host
- cpburns2009
You can see it for yourself here: https://inspector.pypi.io/project/litellm/1.82.8/packages/fd...
- tom_alexander
Only tangentially related: is there some joke/meme I'm not aware of? The GitHub comment thread is flooded with identical comments like "Thanks, that helped!", "Thanks for the tip!", and "This was the answer I was looking for." Since they all seem positive, it doesn't look like an attack, but I thought the general etiquette for GitHub issues was to use the emoji reactions to show support, so that the comment thread only contains substantive comments.
- abhisek
We just analysed the payload. Technical details here: https://safedep.io/malicious-litellm-1-82-8-analysis/ We are looking at similar attack vectors (pth injection), signatures, etc. in other PyPI packages that we know of.
- postalcoder
This is a brutal one. A ton of people use litellm as their gateway.
- mark_l_watson
A question from a non-Python-security-expert: is committing uv.lock files for specific versions, and only infrequently updating those versions, a reasonable practice?
- kevml
More details here: https://futuresearch.ai/blog/litellm-pypi-supply-chain-attac...
- segalord
LiteLLM has like a thousand dependencies; this is expected: https://github.com/BerriAI/litellm/blob/main/requirements.tx...
- rgambee
Looking forward to a Veritasium video about this in the future, like the one they recently did about the xz backdoor.
- 0fflineuser
I was running it (as a proxy) in my homelab with docker compose, using the litellm/litellm:latest image (https://hub.docker.com/layers/litellm/litellm/latest/images/...). I don't think this was compromised, as it is from 6 months ago, and I checked that it is version 1.77. I guess I am lucky, as I have Watchtower automatically update all my containers to the latest image every morning if there are new versions. I also just added it to my homelab this Sunday; I guess that's good timing, haha.
- dec0dedab0de
GitHub, PyPI, npm, Homebrew, CPAN, etc. should adopt a multi-party, multi-factor authentication approach for releases. Maybe have it kick in as a requirement after X amount of monthly downloads. Basically, have all releases require multi-factor auth from more than one person before they go live. A single person being compromised, either technically or by being hit on the head with a wrench, should not be able to release something malicious that affects so many people.
- xinayder
When something like this happens, do security researchers instantly contact the hosting companies to suspend or block the domains used by the attackers?
- xunairah
Version 1.82.7 is also compromised. It doesn't have the .pth file, but the payload is still in proxy/proxy_server.py.
- wswin
I will hold off on updating anything until this whole Trivy case gets cleaned up.
- hmokiguess
What's up with everyone in the issue thread posting thanks? Is this an irony trend, or is it a flex on the account takeover from teampcp? This feels wild.
- tom-blk
Stuff like this is happening too much recently. Seems like the more fast-paced areas of development would benefit from a paradigm shift.
- johnhenry
I've been developing an alternative to LiteLLM. JavaScript. No dependencies. https://github.com/johnhenry/ai.matey/
- 6thbit
The title is a bit misleading. The package was directly compromised, not compromised "by supply chain attack". If you use the compromised package, your supply chain is compromised.
- hmokiguess
What's the best way to identify a compromised machine? Check uv, conda, pip, venv, etc. across the filesystem? Any handy script around?
EDIT: here's what I did; would appreciate some sanity checking from someone who's more familiar with Python than I am, it's not my language of choice:
find / -name "litellm_init.pth" -type f 2>/dev/null
find / -path '*/litellm-1.82.*.dist-info/METADATA' -exec grep -l 'Version: 1.82.[78]' {} \; 2>/dev/null
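For the currently active environment only (it won't see other venvs or past installs, so it complements rather than replaces a filesystem scan), a quick stdlib check along these lines works; the version set reflects the two builds reported compromised in this thread:

```python
from importlib.metadata import version, PackageNotFoundError

# Versions reported compromised in this thread (1.82.7 and 1.82.8).
COMPROMISED = {"litellm": {"1.82.7", "1.82.8"}}

def check(pkg: str) -> str:
    """Report whether the *active* environment has a known-bad build of pkg."""
    try:
        v = version(pkg)
    except PackageNotFoundError:
        return f"{pkg}: not installed here"
    if v in COMPROMISED.get(pkg, set()):
        return f"{pkg} {v}: COMPROMISED - assume credentials are stolen"
    return f"{pkg} {v}: not a known-bad version"

print(check("litellm"))
```

Run it once per venv/conda environment (e.g. `some_venv/bin/python check.py`) to cover the nooks the find commands might miss.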
- oncelearner
That's a bad supply-chain attack; many folks use litellm as their main gateway.
- 6thbit
A safeguard worth exploring for some: the automatic import can be suppressed using the Python interpreter's -S option. This would also disable the site import, so it's not viable generically for everyone without testing.
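For context on why -S helps: the stdlib `site` module executes any line in a .pth file that begins with `import` at interpreter startup, which is exactly the hook the malicious litellm_init.pth used. A harmless demonstration of the mechanism (using `site.addsitedir` on a scratch directory instead of a real site-packages install):

```python
import os
import site
import tempfile

# Drop a .pth file into a scratch directory. Any line starting with
# "import" is *executed* when the directory is processed as a site dir --
# the same hook litellm_init.pth abused to run its loader on every startup.
d = tempfile.mkdtemp()
with open(os.path.join(d, "demo.pth"), "w") as f:
    f.write("import os; os.environ['PTH_DEMO_RAN'] = '1'\n")

site.addsitedir(d)  # site.py execs the import line here
print(os.environ.get("PTH_DEMO_RAN"))
```

Launching Python with `-S` skips the `site` import entirely, so .pth files in site-packages never run; the caveat above is that plenty of code legitimately depends on site processing, hence "not viable generically without testing".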
- 0123456789ABCDE
airflow, dagster, dspy, unsloth.ai, polar
- nickspacek
teampcp taking credit? https://github.com/krrishdholakia/blockchain/commit/556f2db3...
  - # blockchain - Implements a skeleton framework of how to mine using blockchain, including the consensus algorithms.
  + teampcp owns BerriAI
- Blackthorn
Edit: ignore this silliness, as it sidesteps the real problem. Leaving it here because we shouldn't remove our own stupidity.
It's pretty disappointing that safetensors has existed for multiple years now but people are still distributing pth files. Yes, it requires more code to handle the loading and saving of models, but you'd think it would be worth it to avoid situations like this.
- fratellobigio
It's been quarantined on PyPI.
- kstenerud
We need real sandboxing. Out-of-process sandboxing, not in-process. The attacks are only going to get worse. That's why I'm building https://github.com/kstenerud/yoloai
- mikert89
Wow, this is in a lot of software.
- Imustaskforhelp
Our modern economy/software industry truly runs on eggshells nowadays: engineers' accounts are getting hacked to create supply-chain attacks, all at the same time that threat actors are getting more advanced, partially with the help of LLMs. First Trivy (which got compromised twice), now LiteLLM.
- iwhalen
What is happening in this issue thread? Why are there 100+ satisfied slop comments?
- danielvaughn
I work with security researchers, so we've been on this since about an hour ago. One pain I've really come to feel is the complexity of Python environments. They've always been a pain, but in an incident like this you need to find out whether an exact version of a package has ever been installed on your machine. All I can say is good luck. The Python ecosystem provides too many nooks and crannies for malware to hide in.
- cpburns2009
LiteLLM is now in quarantine on PyPI [1]. Looks like burning a recovery token was worth it.
[1]: https://pypi.org/project/litellm/
- gkfasdfasdf
Someone needs to go to prison for this.
- zhisme
Am I the only one with the feeling that in the LLM era we now have a bigger amount of malicious software - say, parsers/fetchers of credentials/SSH/private keys? Is it easier to produce them and then include them in some third-party open-source software? Or is it just that our attention gets focused on such things?
- te_chris
I reviewed the LiteLLM source a while back. Without wanting to be mean, it was a mess. Steered well clear.
- otabdeveloper4
LiteLLM is the second worst software project known to man. (First is LangChain. Third is OpenClaw.) I'm sensing a pattern here, hmm.
- deep_noz
Good thing I was too lazy to bump versions.
- TZubiri
Thank you for posting this, interesting. I hope that everyone's course of action will be uninstalling this package permanently and avoiding the installation of packages similar to it. In order to reduce supply chain risk, not only does a vendor (even if gratis and open source) need to be evaluated, but also the advantage it provides. Exposing yourself to supply chain risk for an HTTP server dependency is natural. But exposing yourself for is-odd, or whatever this is, is not worth it. Remember that you are programmers and you can just program; you don't need a framework. You are already using the API of an LLM provider - don't put a hat on a hat, don't get killed for nothing. And even if you weren't using this specific dependency, check your deps; you might have shit like this in your requirements.txt and were merely saved by chance. An additional note: the dev will probably post a post-mortem - what was learned, how it was fixed - and maybe downplay the thing. Ignore that. The only reasonable step after this is closing the repo, but there's no incentive to do that.
- chillfox
Now I feel lucky that I switched to just using OpenRouter a year ago, because LiteLLM was incredibly flaky and kept causing outages.