🔗 Stories, Tutorials & Articles

Terraform Stacks: A Deep-Dive for Azure Practitioners in Europe
Terraform Stacks just hit GA on HCP Terraform, and they bring some real structure to the chaos. Think modular, declarative, and way less workspace spaghetti. Build reusable components (a.k.a. modules), bundle them into deployments, and wire up stacks using publish/consume patterns - complete with automated triggers downstream.
EU regions? Covered. Stack support is live there too, with full parity.
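The component/deployment split can be sketched in Stacks' HCL. This is a rough outline under assumptions, not the article's code: the component name, module path, and deployment values are hypothetical, and provider/identity wiring is elided.

```hcl
# components.tfcomponent.hcl -- wrap a reusable module as a component
variable "cidr" {
  type = string
}

component "network" {
  source = "./modules/network"   # hypothetical module path
  inputs = {
    cidr = var.cidr
  }
  # providers block omitted; hook up provider.azurerm here
}

# deployments.tfdeploy.hcl -- one block per environment
deployment "eu_prod" {
  inputs = {
    cidr = "10.0.0.0/16"         # hypothetical value
  }
}

# Publish/consume: expose a value that downstream stacks subscribe to,
# so their runs trigger automatically when it changes.
publish_output "network_id" {
  value = component.network.id
}
```

A consuming stack declares a matching `upstream_input` block, which is what gives you the automated downstream triggers the GA announcement highlights.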

Unlocking self-service LLM deployment with platform engineering

A new platform stack - Port + GitHub Actions + HCP Terraform - is turning LLM deployment into a clean self-service flow. The result: predictable, governed pipelines that ship faster.
Infra gets standardized. Provisioning? Handled through GitHub Actions. Policies? Baked in via HCP Terraform. Port ties it all together with opinionated blueprints that hide the messy bits.
The shift: LLM ops moves from hand-rolled chaos to reusable platform rails. Enterprises get scale. Devs get speed. Everyone stops begging the infra team.
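The middle leg of that flow could look like the workflow below - a sketch, not the article's pipeline. The file name, workflow input, and secret name are assumptions; the real Port blueprint and HCP Terraform workspace wiring will differ.

```yaml
# .github/workflows/deploy-llm.yml (hypothetical)
name: self-service-llm-deploy
on:
  workflow_dispatch:             # invoked by Port's GitHub integration
    inputs:
      model_name:
        description: "LLM to deploy (passed in from the Port blueprint)"
        required: true
jobs:
  provision:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
        with:
          cli_config_credentials_token: ${{ secrets.TF_API_TOKEN }}
      # The Terraform config's cloud block points at HCP Terraform,
      # where policy checks gate the run before anything applies.
      - run: terraform init
      - run: terraform apply -auto-approve -var="model_name=${{ inputs.model_name }}"
```

The point of the shape: the developer only ever touches the Port form; the workflow and the policy gate are the platform team's rails.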

WTF is ... - AI-Native SAST?

AI-native SAST is busting the "LLM as magic scanner" myth. The smart play is combining language models with real static analysis - that's how teams are catching the gnarlier stuff, like business logic bugs, that usually slips through.
The trick? Use static analysis to grab clean, relevant chunks of code, then rope in RAG and purpose-built prompts to guide the LLM. Think triage, not tarot reading.

A complete guide to HTTP caching

A fresh guide reframes HTTP caching as less of a tweak, more of an architectural move. It breaks caching into layers - browser memory, CDNs, reverse proxies, application-level caches - and shows how each one plays a part (or gets in the way).
It gets granular with headers like Cache-Control, ETag, and Vary, calling out common faceplants like no-store abuse or treating Vary: Cookie as harmless. Hint: it's not. When cache layers slip out of sync, stale content isn't just annoying - it can quietly break critical flows.
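The ETag mechanics the guide covers boil down to a few lines. Here's a stdlib-only sketch of server-side revalidation (the header names follow the HTTP spec; the hashing scheme and max-age value are just illustrative choices):

```python
# ETag revalidation: hash the body into a validator, and answer a
# matching If-None-Match with 304 Not Modified instead of the payload.
import hashlib

def make_etag(body: bytes) -> str:
    # Strong ETag: any byte change produces a new validator.
    return '"' + hashlib.sha256(body).hexdigest()[:16] + '"'

def respond(body: bytes, if_none_match):
    etag = make_etag(body)
    headers = {"ETag": etag, "Cache-Control": "max-age=60, must-revalidate"}
    if if_none_match == etag:
        return 304, headers, b""   # client copy is fresh; skip the body
    return 200, headers, body

# First request: full response plus validator.
status, headers, payload = respond(b"<html>v1</html>", None)
# Revalidation with the stored ETag: cheap 304, empty body.
status2, _, payload2 = respond(b"<html>v1</html>", headers["ETag"])
```

Every caching layer in the stack plays some variant of this game, which is why a validator mismatch at one layer can leave another layer serving stale content.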

S3 Storage Classes: Fast Access ✅

A cost deep-dive breaks down three AWS S3 storage classes - Standard, Standard-IA, and Glacier Instant Retrieval - with sharp, interactive visualizations. It maps out the tradeoffs: storage cost, access frequency, and early deletion pain.
Key tipping points surface:
- Use Standard-IA if you read the object once a month or less.
- Glacier Instant Retrieval makes sense closer to once a quarter - assuming access patterns and object sizes don't vary much.
Bigger picture: Picking a storage class isn't guesswork anymore. You've got to model it. S3 has now grown a real optimization layer.
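The modeling the article calls for fits in a few lines. The prices below are assumptions (rough us-east-1 list prices, USD per GB-month for storage and USD per GB for retrieval) and the model ignores request fees, minimum object sizes, and early-deletion minimums - swap in your region's real rates.

```python
# Toy break-even model for the three classes the article compares.
PRICES = {
    #  class              (storage $/GB-mo, retrieval $/GB) -- assumed
    "standard":           (0.023,  0.000),
    "standard_ia":        (0.0125, 0.010),
    "glacier_instant":    (0.004,  0.030),
}

def monthly_cost(storage_class: str, gb: float, reads_per_month: float) -> float:
    """Storage charge plus per-GB retrieval charge, nothing else."""
    store, retrieve = PRICES[storage_class]
    return gb * (store + retrieve * reads_per_month)

# 100 GB read once a month: Standard-IA already edges out Standard.
# 100 GB read ~once a quarter (0.33 reads/mo): Glacier Instant wins.
for reads in (1.0, 0.33):
    costs = {c: round(monthly_cost(c, 100, reads), 2) for c in PRICES}
```

Running the loop reproduces the article's tipping points: the retrieval fee is what flips the ordering as reads per month drop.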

AWS to Bare Metal Two Years Later: Answering Your Toughest Questions About Leaving AWS

OneUptime ditched the cloud bill and rolled their own dual-site setup. Think bare metal, orchestrated with MicroK8s, booted by Tinkerbell, patched together with Ceph, Flux, and Terraform. Result? 99.993% uptime and $1.2M/year saved - 76% cheaper than even well-optimized AWS. They run it all with just ~14 engineer-hours/month. Thanks, Talos. The cloud's still in play, but only where it helps: archival, CDN, and burst capacity.
👉 Got something to share? Create your FAUN Page and start publishing your blog posts, tools, and updates. Grow your audience, and get discovered by the developer community. |