From LocalStack-powered serverless runs to a Zig kernel on RISC‑V, this batch leans bottom‑up—scale what works, question the dashboards, and make the cloud blink first. Sharper SLIs, saner Postgres privileges, and cost controls with teeth; plus tiny tools you can lift straight into production.
🚀 Accelerate serverless testing with LocalStack integration in VS Code IDE
🛠️ Best 20 Linux Commands for Daily Use in Production Servers
⚡ %CPU Utilization Is A Lie
💸 Introducing Budget Controls for AWS: Automatically Manage Your Cloud Costs
🪄 Magical systems thinking
🐘 PostgreSQL maintenance without superuser
📈 Scaling Prometheus: Managing 80M Metrics Smoothly
🎯 SLI Evolution Stages
🧵 Writing an operating system kernel from scratch
🔀 Writing Load Balancer From Scratch In 250 Line of Code
The AWS Toolkit for VS Code now hooks straight into LocalStack. Run full end-to-end tests for serverless workflows—Lambda, SQS, EventBridge, the whole crew—without bouncing between tools or writing boilerplate.
Just deploy to LocalStack from the IDE using the AWS SAM CLI. It feels like the cloud, but it's all local.
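Once the stack is up, a quick way to sanity-check it from outside the IDE is to point boto3 at LocalStack's edge endpoint and invoke the function by hand. The function name and payload below are placeholders; the endpoint and dummy credentials are LocalStack defaults.

```python
import json
import boto3

# LocalStack exposes all services on one local edge endpoint (default port 4566).
# Credentials just need to be non-empty; LocalStack doesn't validate them.
lam = boto3.client(
    "lambda",
    endpoint_url="http://localhost:4566",
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

# "hello-world" is a placeholder; use whatever your SAM template deployed.
resp = lam.invoke(FunctionName="hello-world", Payload=json.dumps({"ping": 1}).encode())
print(resp["StatusCode"], resp["Payload"].read().decode())
```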
Budget Controls for AWS just got better. The open-source tool now reins in more than just EC2. It wrangles RDS Aurora, SageMaker, and OpenSearch too.
Under the hood, it taps AWS Budgets, AWS Config, and custom tags to watch spend like a hawk. Hit a budget threshold? It can alert, stop, or nuke resources—automatically.
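For a feel of the AWS Budgets layer it builds on, here's a hedged boto3 sketch that creates a monthly budget scoped to a cost-allocation tag and alerts via SNS at 80% of spend. The account ID, tag value, and SNS ARN are placeholders, and the tag-filter syntax may need adjusting for your account; the tool's stop/terminate actions sit on top of calls like this.

```python
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="123456789012",  # placeholder account ID
    Budget={
        "BudgetName": "ml-team-monthly",
        "BudgetType": "COST",
        "TimeUnit": "MONTHLY",
        "BudgetLimit": {"Amount": "500", "Unit": "USD"},
        # Scope spend tracking to resources carrying a cost-allocation tag
        # (classic CostFilters tag syntax: "user:<key>$<value>").
        "CostFilters": {"TagKeyValue": ["user:team$ml"]},
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {
                    "SubscriptionType": "SNS",
                    "Address": "arn:aws:sns:us-east-1:123456789012:budget-alerts",
                }
            ],
        }
    ],
)
```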
As of recent releases—starting way back in v9.6 and maturing through PostgreSQL 18 (coming 2025)—there are now 15+ built-in admin roles. No need to hand out superuser just to get things done.
These roles cover the ops spectrum: monitoring, backups, filesystem access, logical replication, and routine maintenance. Fine-grained control without the keys to the whole kingdom. Less god-mode, more role-based delegation by design!
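A minimal sketch of what that delegation looks like in practice, assuming a psycopg2 superuser connection and a made-up "metrics" role; pg_monitor needs PostgreSQL 10+, pg_maintain needs 17+.

```python
import psycopg2

# Connection parameters are placeholders; point them at your own cluster.
conn = psycopg2.connect("dbname=postgres user=postgres host=localhost")
conn.autocommit = True

with conn.cursor() as cur:
    # Hypothetical ops login role; skip if it already exists.
    cur.execute("CREATE ROLE metrics LOGIN PASSWORD 'change-me'")
    # pg_monitor bundles the read-all-settings/stats roles for monitoring tools.
    cur.execute("GRANT pg_monitor TO metrics")
    # pg_maintain allows VACUUM/ANALYZE/REINDEX-style maintenance without superuser.
    cur.execute("GRANT pg_maintain TO metrics")
```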
A new SLI evolution model lays out a maturity roadmap—from rebranded latency/error metrics to ones that actually track business impact. It replaces shallow signals and pulls in the stuff that matters: how service failures hit user goals, tasks, and bottom lines.
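A toy illustration of the jump between stages, with invented numbers: a request-level SLI can look perfectly healthy while a journey-level SLI is blowing its SLO.

```python
# Early-stage SLI: a relabeled error rate (request-level, infrastructure-centric).
def request_sli(successful_requests: int, total_requests: int) -> float:
    return successful_requests / total_requests

# Later-stage SLI: did users actually finish the task they came for?
# "checkout" and every number here are invented for illustration.
def journey_sli(completed_checkouts: int, attempted_checkouts: int) -> float:
    return completed_checkouts / attempted_checkouts

print(f"request SLI: {request_sli(998_700, 1_000_000):.4f}")   # looks fine
print(f"journey SLI: {journey_sli(9_870, 10_000):.4f}")        # 0.987 < 0.995 target: breached
```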
A developer rolled out a fully working Go load balancer with a clean Round Robin setup—and hooks for dropping in smarter strategies like Least Connection or IP Hash. Backend servers live in a custom server pool. Swapping balancing logic? Just plug into the interface.
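The original is Go, but the shape translates. Here's a rough Python sketch of the same idea: a strategy interface over a server pool, Round Robin as the default, Least Connections as a drop-in. Backend addresses are made up.

```python
from itertools import count
from typing import Protocol


class Backend:
    def __init__(self, addr: str) -> None:
        self.addr = addr
        self.active_conns = 0


class Strategy(Protocol):
    """Anything with pick(pool) -> Backend can be plugged into the balancer."""
    def pick(self, pool: list[Backend]) -> Backend: ...


class RoundRobin:
    def __init__(self) -> None:
        self._i = count()

    def pick(self, pool: list[Backend]) -> Backend:
        return pool[next(self._i) % len(pool)]


class LeastConnections:
    def pick(self, pool: list[Backend]) -> Backend:
        return min(pool, key=lambda b: b.active_conns)


# Swapping balancing logic means swapping the strategy object, nothing else.
pool = [Backend("10.0.0.1:8080"), Backend("10.0.0.2:8080")]  # placeholder addresses
lb = RoundRobin()
for _ in range(4):
    print(lb.pick(pool).addr)
```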
A barebones time-sharing OS kernel, written in Zig, running on RISC-V. It leans on OpenSBI for console I/O and timer interrupts. Threads? Statically allocated, each running in user mode (U-mode). The kernel stays in supervisor mode (S-mode), where it catches system calls and context switches via timer ticks.
One neat trick: kernel and userland share a single binary. No dynamic linking. No loaders. Everything stitched together upfront.
A fresh roundup drops 20 go-to Linux commands for production sysadmins, dialing in on modern replacements: htop over top, ss over netstat, and ip over ifconfig.
Expect the usual suspects—journalctl, rsync, crontab—all still pulling weight for logs, file sync, and scheduled jobs.
Stress tests on the Ryzen 9 5900X uncovered a big gap between reported CPU utilization and what the chip actually pushes. Around 50% on paper? Could mean close to full throttle in reality—thanks to sneaky behaviors from SMT resource sharing and Turbo frequency scaling.
Takeaway: Raw utilization metrics can’t be trusted for capacity planning anymore. They lie. Benchmark real throughput instead.
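One toy way to see it for yourself (not the article's benchmark): compare raw throughput with one busy worker per physical core versus one per logical core. The SMT-2 assumption and the spin-loop workload below are ours; real workloads will scale differently, which is the point.

```python
import multiprocessing as mp
import os
import time


def spin(seconds: float) -> int:
    """Busy-loop and count iterations completed in the given wall-clock window."""
    end = time.perf_counter() + seconds
    n = 0
    while time.perf_counter() < end:
        n += 1
    return n


def throughput(workers: int, seconds: float = 2.0) -> float:
    with mp.Pool(workers) as pool:
        counts = pool.map(spin, [seconds] * workers)
    return sum(counts) / seconds


if __name__ == "__main__":
    logical = os.cpu_count()
    physical = logical // 2  # assumption: SMT-2, so half the logical CPUs are physical cores
    base = throughput(physical)
    full = throughput(logical)
    # If SMT siblings added full cores, "full" would be ~2x "base". It won't be.
    print(f"{physical} workers: {base:,.0f} ops/s | {logical} workers: {full:,.0f} ops/s "
          f"(scaling x{full / base:.2f})")
```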
AI now writes over 25% of Google’s and as much as 90% of Anthropic’s code. That’s not a trend—it’s a regime change.
Still, the mess in large public systems reminds us: clever analysis isn’t enough. Complex systems don’t behave; they misbehave.
When the machines are churning out code, the smarter move is to evolve from working systems—not architect them from scratch. Bottom-up beats big plans.
Flipkart ditched its creaky StatsD + InfluxDB stack for a federated Prometheus setup—built to handle 80M+ time-series metrics without choking. The move leaned into pull-based collection, PromQL's firepower, and hierarchical federation for smarter aggregation and long-haul queries.
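If you've never looked at what federation actually moves around, here's a small sketch that hits a child Prometheus's /federate endpoint by hand, matching only pre-aggregated recording rules. The hostname and match expression are placeholders; in a real hierarchy the parent Prometheus scrapes this endpoint on a schedule rather than anyone calling it manually.

```python
import requests

CHILD_PROM = "http://child-prometheus:9090"  # placeholder hostname

# Pull only recording-rule series (conventionally named "job:..."), which is the
# usual trick for keeping federation traffic to aggregates instead of raw series.
resp = requests.get(
    f"{CHILD_PROM}/federate",
    params={"match[]": ['{__name__=~"job:.*"}']},
)
resp.raise_for_status()
print(resp.text[:500])  # exposition-format samples the parent would ingest
```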
A local-first semantic and hybrid BM25 grep/search tool, built for use by AI and humans alike!
🤔 Did you know?
Amazon Aurora never flushes data pages. Instead, it ships redo log records to 10 GB storage segments replicated 6× across 3 AZs. Commits need 4/6 acks, reads 3/6—so even if a whole AZ goes down, Aurora keeps serving reads and writes.
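The quorum arithmetic is worth a beat: with 6 replicas, a write quorum of 4 and a read quorum of 3 always overlap, and two write quorums can't diverge. A quick brute-force check:

```python
from itertools import combinations

REPLICAS, WRITE_QUORUM, READ_QUORUM = 6, 4, 3

# Any write quorum and any read quorum must share at least one replica,
# so a read always sees the latest acknowledged write: Vw + Vr > V.
assert WRITE_QUORUM + READ_QUORUM > REPLICAS

# Two write quorums must also overlap (4 + 4 > 6), preventing conflicting commits.
assert 2 * WRITE_QUORUM > REPLICAS

# Exhaustive check: every 3-replica read set intersects every 4-replica write set.
nodes = set(range(REPLICAS))
for w in combinations(nodes, WRITE_QUORUM):
    for r in combinations(nodes, READ_QUORUM):
        assert set(w) & set(r)

print("quorum overlap holds: losing one AZ (2 replicas) still leaves 4 for writes")
```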
😂 Meme of the week
🤖 Once, SenseiOne Said
"Autoscaling solves capacity, not causality. SLOs don't prevent incidents; they define which ones you'll tolerate. The cloud will rent you more CPU, not better judgment." — SenseiOne
👤 This Week's Human
This Week’s Human is Ben Sheppard, a 4x founder, Strategic Advisor & Business Coach, and co‑founder of Silta AI building sector‑specific tools for the infrastructure stack. He’s shipped AI due‑diligence and ESG workflows that cut review time by 60%+, including helping a multilateral bank compress a weeks‑long assessment to under two hours. He also advises institutions like the Asian Development Bank, USAID, and Bloomberg Philanthropies, pairing grounded judgment with execution across complex projects.