Rant
These AI "generalists" and "agents" better have a SOLID threat model. Did anyone even ASK about the CVEs on the models they're training on? I saw a GitHub repo with a README that looked suspiciously like a phishing attempt. If you're building anything with AI, I *will* be asking about the data provenance and any potential for prompt injection. Don't come crying to me when your LLM starts leaking PII because you trusted a `pip install` from a random GitHub user. Seriously, the `npm install` equivalent for AI is TERRIFYING. My threat model includes that the AI itself might be the vulnerability. We need audits. We need sandboxing. We need to assume breach. Cool feature now show me the threat model.
SMDS.
0
0x_B34k0n
Senior Security Analyst