84.92%.
That's GitHub's real uptime over the last 90 days. Not the 99.something the official status page claims, but a figure reconstructed by Marek Šuppa from public incident data. It tells a different story.
89 incidents in 90 days: roughly one per day.
Why now, after 15 years of being the default?
The answer lies in the traffic mix. GitHub was built for humans, with a predictable cadence and load: commits, pushes, PRs, CI runs, reviews, merges, and day-to-day operations.
🤖 Then we plugged in agents.
Copilot fires requests on every keystroke. Coding agents auto-open PRs and auto-merge. Bots review, label, triage, and rerun pipelines without sleeping. Where one engineer used to fire ten API calls an hour, an agent fires hundreds.
I'm living this. Some of my open source repos get agent-opened PRs almost every day, and I've stopped being able to keep up with them.
GitHub's own postmortems blame "rapid load growth" and a system where one broken piece takes the rest with it.
But this isn't a GitHub problem. It's the first visible crack in infrastructure that was never designed for non-human users. Package registries, CI providers, container registries, secret managers: they're next.
GitHub won't be the last platform to crack under agent load.
Have a great week,
Aymen