Prompt injection happens when a malicious instruction is disguised as innocent input; it becomes significantly more dangerous when combined with social engineering and emotional manipulation.
The screenshot is most likely pure coincidence, but it's a good reminder that the security model for agentic AI is still being figured out.
Most developers building MCP servers can easily be tricked into running malicious code if they aren't careful about how they handle user input. An agent with shell access can't distinguish a legitimate request from a well-crafted manipulation; unless it's properly and securely designed, it just executes.
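To make that concrete, here is a minimal sketch of the difference between a tool handler that passes model output straight to a shell and one that validates it first. The function and allowlist names are hypothetical, for illustration only; this is not FastMCP's API:

```python
import shlex
import subprocess

# Hypothetical allowlist for illustration: only these binaries may run.
ALLOWED_COMMANDS = {"ls", "cat", "echo"}

def run_tool_naive(command: str) -> str:
    """UNSAFE sketch: executes whatever the agent asks for.

    With shell=True, metacharacters like `;`, `&&`, `|`, and backticks
    are interpreted, so an injected "echo hi; rm -rf ~" runs the rm.
    """
    return subprocess.run(command, shell=True, capture_output=True, text=True).stdout

def run_tool_guarded(command: str) -> str:
    """Safer sketch: tokenize the input and check it against an allowlist."""
    parts = shlex.split(command)
    if not parts or parts[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"command not allowed: {parts[0] if parts else ''!r}")
    # shell=False: each token is a literal argument, so `;` and `|` lose
    # their special meaning and cannot chain extra commands.
    return subprocess.run(parts, capture_output=True, text=True).stdout

# A prompt-injected "request" smuggled into the tool input:
injected = "echo hello; rm -rf ~/important"
# run_tool_naive(injected)   -> would actually delete files
# run_tool_guarded(injected) -> echo prints the `;` and rest as plain text
```

The key design choice in the guarded version is avoiding `shell=True` entirely: even when an allowed command slips through with injected metacharacters, they are passed as inert arguments rather than executed.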
If you're interested in building, running, and mastering MCP-based agents, I released a step-by-step, accessible, and most importantly practical course on that topic:
Practical MCP with FastMCP & LangChain

Have a great week,
Aymen