🔗 Stories, Tutorials & Articles

Rendering 100M pixels a second over ssh

A massively multiplayer snake game accessible over SSH, capable of handling thousands of concurrent players and rendering over a hundred million pixels per second. The game uses bubbletea to render frames, plus custom techniques that cut bandwidth usage to around 2.5 KB/sec. Performance work, including pre-allocating resources and optimizing string handling, allowed the game to support up to 2,500 concurrent users.
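Bandwidth figures like ~2.5 KB/sec usually come from diffing consecutive frames and sending only the cells that changed, rather than redrawing the whole screen. A minimal sketch of that idea in Python (the game itself is Go/bubbletea; `diff_frame` and the toy frames here are illustrative, not the game's actual code):

```python
# Sketch: diff two text frames and emit only the changed cells as
# ANSI cursor-move + character sequences, instead of the full frame.
def diff_frame(prev: list[str], curr: list[str]) -> str:
    out = []
    for y, (old_row, new_row) in enumerate(zip(prev, curr)):
        for x, (old_ch, new_ch) in enumerate(zip(old_row, new_row)):
            if old_ch != new_ch:
                # CSI "row;colH" moves the cursor (1-indexed), then draw one cell.
                out.append(f"\x1b[{y + 1};{x + 1}H{new_ch}")
    return "".join(out)

# A snake moving one cell to the right changes only two cells.
prev = ["....", ".##.", "...."]
curr = ["....", "..##", "...."]
patch = diff_frame(prev, curr)
```

On a real 80×24 (or larger) board where most cells are static, the patch is a small fraction of a full redraw, which is where the bandwidth savings come from.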

The real cost of random I/O

PostgreSQL's random_page_cost setting was introduced roughly 25 years ago, and its default has remained 4.0 ever since. Recent experiments suggest that the actual cost of reading a random page may be significantly higher than the default implies, especially on SSDs. Lowering random_page_cost is not always the best solution, though, as several factors interact in query planning.
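To see why this one knob matters, here is a deliberately simplified sketch of the planner's trade-off, with made-up page counts. PostgreSQL's real cost model also includes CPU tuple and operator costs, so this is illustrative arithmetic only, not the planner's actual formula:

```python
# Simplified planner arithmetic: which scan looks cheaper depends
# directly on random_page_cost. (Real PostgreSQL adds CPU costs too.)
def seq_scan_cost(total_pages: int, seq_page_cost: float = 1.0) -> float:
    return total_pages * seq_page_cost

def index_scan_cost(pages_fetched: int, random_page_cost: float) -> float:
    return pages_fetched * random_page_cost

# Hypothetical table: 10,000 pages; an index narrows it to 3,000 random reads.
seq = seq_scan_cost(10_000)                # 10000.0
idx_default = index_scan_cost(3_000, 4.0)  # 12000.0 -> seq scan looks cheaper
idx_ssd = index_scan_cost(3_000, 1.1)      # ~3300 -> index scan looks cheaper
```

The same query flips from a sequential scan to an index scan purely because of the configured cost ratio, which is why getting that ratio wrong skews plans across the whole workload.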

Google API Keys Weren't Secrets. But then Gemini Changed the Rules

A report reveals that Google Cloud API keys use the same format for public identifiers and secret authentication. That overlap lets supposedly public keys call the Gemini API. New keys default to Unrestricted, and existing keys can be retroactively granted Gemini access. Google will add scoped defaults, block leaked keys, and notify affected projects.

How to scale GitOps in the enterprise: From single cluster to fleet management

Implementing GitOps at scale brings challenges such as config sprawl, Git repository bottlenecks, and cultural resistance. State-store strategies (OCI registries, ConfigHub) and multi-cluster topology patterns are key to overcoming these obstacles. Secrets management through Sealed Secrets or External Secrets Operator, policy enforcement with Kyverno, multi-tenancy via Argo CD Projects, and repository organization built on trunk-based development and progressive delivery are also vital. Finally, choosing the right tool (Argo CD, Flux CD, or Sveltos) and building a central catalog of GitOps manifests are essential for managing large-scale deployments efficiently.

LLMs Are Good at SQL. We Gave Ours Terabytes of CI Logs.

Mendral's agent runs ad-hoc SQL against compressed ClickHouse logs, tracing flaky tests across months and scanning up to 4.3B rows per investigation. Each log line is denormalized into 48 metadata columns, and 5.31 TiB of raw logs compress to ~154 GiB (~21 bytes/line), a 35:1 ratio that turns arbitrary filters into column predicates. The pipeline uses materialized views, Bloom-filter and n-gram indexes, and Inngest for durable execution; staying within GitHub API throttling (~3 req/s, 4k spare/hr) keeps P95 ingest under 5 minutes. The system-level shift: granting LLM agents direct SQL access to denormalized, columnar CI logs moves debugging out of fixed tool APIs and into ad-hoc, queryable data stores.
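The compression figures above are internally consistent; a quick unit-conversion check, using only the numbers from the summary:

```python
# Check the claimed compression figures: 5.31 TiB raw vs ~154 GiB stored.
raw_bytes = 5.31 * 2**40       # 5.31 TiB of raw log text
stored_bytes = 154 * 2**30     # ~154 GiB after ClickHouse compression

ratio = raw_bytes / stored_bytes  # ~35.3, matching the stated 35:1 ratio
lines = stored_bytes / 21         # at ~21 bytes/line: roughly 7.9B lines
raw_per_line = raw_bytes / lines  # implies ~740 bytes of raw log per line
```

So the ~21 bytes/line figure implies an average raw line (with its denormalized metadata) of around 740 bytes, which is a plausible size for a CI log line.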
👉 Got something to share? Create your FAUN Page and start publishing your blog posts, tools, and updates. Grow your audience, and get discovered by the developer community. |