🔗 Stories, Tutorials & Articles


Kubernetes Optimization: In-Place Pod Resizing, Zone-Aware Routing

Halodoc cut EC2 costs and shaved latency by leaning into two Kubernetes tricks:
In-place pod resizing (v1.33) lets them dial pod resources up or down on the fly, especially handy during off-peak hours.
Zone-aware routing via topology-aware hints keeps inter-service traffic close to home (same AZ), skipping extra hops.
A custom scheduler keeps those resource tweaks sticky, even when pods restart.
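Zone-aware routing of the kind described is usually switched on per Service. A minimal sketch, with hypothetical names, using the annotation that enables topology-aware hints (the post's exact setup may differ):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: orders               # hypothetical service name
  annotations:
    # Topology-aware hints: kube-proxy prefers endpoints in the
    # client's availability zone, avoiding cross-AZ hops.
    service.kubernetes.io/topology-mode: Auto
spec:
  selector:
    app: orders
  ports:
    - port: 80
      targetPort: 8080
```

Endpoint hints only kick in when endpoints are spread evenly enough across zones; otherwise traffic falls back to cluster-wide routing.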

Avoiding Zombie Cluster Members When Upgrading to etcd v3.6

etcd v3.5.26 patches a nasty upgrade bug. It now syncs v3store from v2store to stop zombie nodes from corrupting clusters during the jump to v3.6.
The core issue: older versions let stale store state bring removed members back from the dead.

1.35: In-Place Pod Resize Graduates to Stable

In-Place Pod Resize hits GA in Kubernetes 1.35. You can now tweak CPU and memory on live pods without restarts. This is finally production-ready!
What’s new since beta? It now handles memory limit decreases, does prioritized resizes, and gives you better observability with fresh Kubelet metrics and Pod events.
Big shift: Vertical scaling just got real. Smooth enough for autoscalers, fast enough for low-latency apps.
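In practice, a live resize goes through the pod's `resize` subresource. A hedged sketch (pod and container names are hypothetical, and your kubectl must be recent enough to support `--subresource`):

```shell
# Bump a running pod's CPU without restarting it, via the
# "resize" subresource (names hypothetical).
kubectl patch pod web-0 --subresource resize --patch \
  '{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"500m"},"limits":{"cpu":"1"}}}]}}'
```

Watching `kubectl get pod web-0 -o yaml` afterwards shows the resize status in the pod's conditions rather than a restart.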

93% Faster Next.js in (your) Kubernetes

Next.js brings advanced capabilities to developers out of the box, but scaling it in your own environment can be challenging due to uneven load distribution and high latency. Watt addresses these issues by leveraging SO_REUSEPORT in the Linux kernel, yielding significantly better performance than traditional scaling approaches on Kubernetes. The solution described in the post eliminates some coordination overhead and improves load distribution and reliability for Node.js applications like Next.js running in containerized environments.
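The kernel feature behind this is worth seeing in isolation: with SO_REUSEPORT, several processes bind the exact same port and the kernel spreads incoming connections across them, with no user-space load balancer. A minimal Python sketch of the socket option itself (not Watt's implementation):

```python
import socket

def reuseport_listener(port: int) -> socket.socket:
    # SO_REUSEPORT lets multiple sockets bind the same address:port;
    # the kernel load-balances new connections across them
    # (Linux 3.9+, also available on macOS/BSD).
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("127.0.0.1", port))
    s.listen()
    return s

# Two independent listeners sharing one port -- in Watt's model,
# each worker process would own one of these.
first = reuseport_listener(0)            # port 0: kernel picks a free port
port = first.getsockname()[1]
second = reuseport_listener(port)        # same port, no EADDRINUSE
```

Without the socket option, the second `bind` would fail; with it, the kernel hashes each new connection to one of the listeners, which is what removes the single-accept-loop bottleneck.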

Dapr Deployment Models

Dapr started as a humble Kubernetes sidecar. Now? It's a full-blown multi-mode runtime that runs wherever you need it: edge, VM, or serverless APIs.
Diagrid’s Catalyst takes that further. It wraps Dapr in a fully managed API layer that’s detached from your app’s lifecycle. No infra lock-in, just token-based HTTP access across any stack.
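For the classic sidecar mode, Dapr is opted in per workload via pod annotations. A minimal sketch (app name and port are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout                      # hypothetical app
spec:
  selector:
    matchLabels:
      app: checkout
  template:
    metadata:
      labels:
        app: checkout
      annotations:
        dapr.io/enabled: "true"       # inject the Dapr sidecar
        dapr.io/app-id: "checkout"    # ID other services use to invoke it
        dapr.io/app-port: "8080"      # port the app listens on
    spec:
      containers:
        - name: checkout
          image: checkout:latest      # hypothetical image
```

The other deployment models in the post swap this injection step for a shared daemon, a standalone process, or a hosted API, while the application-facing HTTP/gRPC surface stays the same.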

Troubleshooting Cilium network policies: Four common pitfalls

Cilium’s Day 2 playbook covers the real work: dialing in L7 policy controls, tuning Hubble observability, and wringing performance from BPF. It's how you keep big Kubernetes clusters sane.
The focus? Multi-tenant isolation, node-to-node encryption, and scaling cleanly with external etcd so the network doesn’t turn into guesswork.
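The L7 controls mentioned above look like this in practice: a CiliumNetworkPolicy that only admits `GET /healthz` from one workload. A sketch with hypothetical labels:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-get-healthz       # hypothetical policy name
spec:
  endpointSelector:
    matchLabels:
      app: api                  # pods this policy protects
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend       # only this workload may connect
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:               # L7 filtering: everything else gets 403
              - method: GET
                path: "/healthz"
```

A common pitfall with L7 rules is forgetting that any `http` section makes Cilium proxy that port, so non-matching requests are rejected at L7 rather than dropped at L3/L4.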

v1.35: Job Managed By Goes GA

In Kubernetes v1.35, the Job spec.managedBy field hits GA. That means full handoff of Job reconciliation to external controllers is now official.
It unlocks tricks like MultiKueue, where a single management cluster fires off Jobs to multiple worker clusters without losing sight of what’s running where.
Big shift: Kubernetes steps back from owning the whole Job lifecycle. Scheduling and execution are finally split, clearing the way for smarter, more modular batch systems.
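A sketch of the handoff itself: setting `.spec.managedBy` tells the built-in Job controller to leave this Job alone so an external controller can reconcile it (names hypothetical; the value shown is the one MultiKueue uses):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: train-model                       # hypothetical Job
spec:
  # Built-in controller skips this Job; the named external
  # controller owns reconciliation and status updates.
  managedBy: kueue.x-k8s.io/multikueue
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: main
          image: busybox
          command: ["echo", "done"]
```

The field is immutable after creation, which is what keeps two controllers from fighting over the same Job mid-flight.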
👉 Got something to share? Create your FAUN Page and start publishing your blog posts, tools, and updates. Grow your audience, and get discovered by the developer community. |