"Given enough eyeballs, all bugs are shallow."
That's Linus's Law, and it's a great theory.
The idea is: The more people reviewing code, the more likely someone spots a vulnerability. It's the foundational argument for why open source software can be more secure than proprietary alternatives.
Now enter AI agents.
In theory, multiple AI agents continuously reviewing code is Linus's Law finally fulfilled. Tireless, systematic, available 24/7, no ego and no boredom.
But here's the issue: AI agents trained on similar data share similar blind spots. Diversity of "perspective" is precisely what makes many eyeballs valuable.
A roomful of identical reviewers, human or AI, doesn't give you that.
Take the example of OpenSSL and the Heartbleed bug. It was a missing bounds check on an attacker-supplied length field that went unnoticed for about two years, probably because the contributors shared the same "expertise" and "perspective" on the codebase.
Diversity is the key here. Instead of relying on one large model, relying on multiple smaller models trained on different data, with different architectures, and fine-tuned for different areas of expertise (memory safety, concurrency, cryptography, etc.) could be a more effective way to catch vulnerabilities.
This is where MCP comes in. It gives you the infrastructure to orchestrate exactly that - multiple specialized agents, different models, different focuses, reviewing the same codebase in parallel. Not one genius. A diverse committee. Linus's Law, finally staffed correctly.
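To make the "diverse committee" idea concrete, here is a minimal sketch in plain Python. Each reviewer function is a hypothetical stand-in for a separate specialist model; in a real MCP setup each would be its own server exposing a review tool, and the orchestrator would call them in parallel and merge their findings. All names and the toy heuristics here are illustrative assumptions, not a real implementation.

```python
# Sketch: a "diverse committee" of specialist reviewers run in parallel.
# Each reviewer stands in for a different model/agent with its own focus.
from concurrent.futures import ThreadPoolExecutor

SNIPPET = """
buf = payload[:length]          # length comes from the wire, unchecked
lock_a.acquire(); lock_b.acquire()
key = "hardcoded-secret"
"""

def memory_reviewer(code: str) -> list[str]:
    # Specialist 1: flags unvalidated lengths (the Heartbleed class of bug).
    if "length" in code and "len(" not in code:
        return ["unchecked length used to slice a buffer"]
    return []

def concurrency_reviewer(code: str) -> list[str]:
    # Specialist 2: flags multiple lock acquisitions (deadlock risk).
    if code.count("acquire()") >= 2:
        return ["two locks acquired without a consistent ordering"]
    return []

def crypto_reviewer(code: str) -> list[str]:
    # Specialist 3: flags hardcoded secrets.
    if "secret" in code:
        return ["hardcoded secret in source"]
    return []

def committee_review(code: str) -> list[str]:
    # Run every specialist on the same code in parallel, then merge findings.
    reviewers = [memory_reviewer, concurrency_reviewer, crypto_reviewer]
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda review: review(code), reviewers)
    return [finding for findings in results for finding in findings]

findings = committee_review(SNIPPET)
```

The point of the design: no single reviewer catches all three issues, but the committee does, because each one is tuned to a different failure class.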
If you want to learn how to build systems like this, I'm releasing
Practical MCP with FastMCP & LangChain: Engineering the Agentic Experience, a complete guide from first principles to production deployment. Pre-sale is open now, at a discount, before the official launch.
Have a great day,
Aymen.