Allow loading remote contents and showing images to get the best out of this email.
FAUN.dev's AI/ML Weekly Newsletter
 
🔗 View in your browser   |  ✍️ Publish on FAUN.dev   |  🦄 Become a sponsor
 
Kala
 
#ArtificialIntelligence #MachineLearning #MLOps
 
 
🔍 Inside this Issue
 
 
Between Spotify’s 1,500 agent-written PRs and teams dragging interviews back to whiteboards, the ground under what “counts as coding” is shifting fast. From mega-scale clusters and 2M‑token windows to pragmatic agent runtimes you can actually ship, let’s dig into the work—tools, tradeoffs, and where the human loop still matters.

🤖 1,500+ PRs Later: Spotify’s Journey with Our Background Coding Agent

🧪 AI Broke Interviews

📞 AI’s Dial-Up Era

☁️ AWS Unveils Project Rainier: Massive AI Cluster with Trainium2 Chips

🧠 Elon Musk's Grok 4 AI Gets Major Boost with 2M Token Context

🛠️ How I Use Every Claude Code Feature

✍️ You Should Write An Agent

Steal the leverage, keep the judgment—and ship.

Have a great week!
FAUN.dev() Team
 
 
⭐ Patrons
 
zerossl.com
 
SSL Protection For Anyone. Fast. Reliable. Free.
 
 
Easily secure any site by putting SSL management on autopilot, supporting one-step validation and renewal via REST API.
 
 
👉 Spread the word and help developers find you by promoting your projects on FAUN. Get in touch for more information.
 
ℹ️ News, Updates & Announcements
 
faun.dev
 
Elon Musk's Grok 4 AI Gets Major Boost with 2M Token Context
 
 
Grok 4 just cranked its context window up to 2 million tokens. That’s not a typo. It can now chew through massive codebases or docs in one go - no chunking gymnastics required.

Reasoning accuracy jumped from 77.5% to 94.1%. General performance? 97.9%.

It now takes multimodal inputs and brings web/X search into the loop. So yes, it reads, thinks, and browses.
 
 
faun.dev
 
AWS Unveils Project Rainier: Massive AI Cluster with Trainium2 Chips
 
 
AWS just flipped the switch on Project Rainier - its biggest AI cluster to date. It's running nearly 500,000 Trainium2 chips and is 70% larger than anything AWS has built before.

The cluster is live and fueling Claude, thanks to a deep partnership with Anthropic. AWS plans to double the chip count to over a million by the end of 2025.

What’s different this time? Full-stack control. From custom silicon to datacenter design, AWS is going vertical. That’s not just good for scale - it rewires how energy-efficient AI infrastructure gets built.
 
 
👉 Enjoyed this? Read more news on FAUN.dev/news
 
⭐ Sponsors
 
cloudns.net
 
Free DNS Hosting with Global Anycast DNS Network
 
 
Cloud DNS is the most cost-effective way to manage your domain names. You can use it with Free DNS or Premium DNS, depending on your needs. Our Cloud DNS service provides up to 10,000% uptime Service Level Agreement (SLA).

ClouDNS offers Free DNS zone migration for all new customers!
 
 
👉 Spread the word and help developers find you by promoting your projects on FAUN. Get in touch for more information.
 
🔗 Stories, Tutorials & Articles
 
blog.sshh.io
 
How I Use Every Claude Code Feature
 
 
Claude Code isn't just generating responses anymore - it's gearing up to run projects.

The new direction turns it into a programmable, auditable agent runtime. Think custom hooks, restart logic, planning workflows, GitHub Actions, and subagent delegation tricks like the “Master-Clone” pattern.

At the core sits CLAUDE.md. It’s not documentation; it’s a contract. This file defines how tools work together, keeps token use in check, and enforces conventions so the whole thing doesn’t spiral into chaos.

Bigger picture: With the rise of Claude CLI and its growing ecosystem, prompt-chaining is taking a backseat. What’s emerging? Structured, scriptable agents baked into real engineering workflows.
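To make the "contract" idea concrete, here's a hypothetical sketch of what such a CLAUDE.md might contain - the article doesn't reproduce the actual file, so the rules below are illustrative assumptions, not Spotify's or Anthropic's recommended contents:

```markdown
# CLAUDE.md (hypothetical example)

## Conventions
- Run the test suite before proposing any commit.
- Treat files under `vendor/` as read-only; never edit them.

## Token budget
- Summarize files over 500 lines instead of reading them in full.

## Delegation
- Hand repo-wide searches to a read-only subagent and merge its findings.
```

The point is less the specific rules than that they are versioned alongside the code, so every agent run starts from the same enforced conventions.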
 
 
wreflection.com
 
AI's Dial-Up Era
 
 
AI's reshaping jobs - but not evenly. Some industries will feel the squeeze faster than others. It all comes down to a race: productivity vs. demand.

History's playbook? Think textiles, steel, autos. Automation boosted output. Jobs stuck around - as long as demand kept growing. Once markets topped out, headcount sank, even as factories hummed faster.
 
 
yusufaytas.com
 
AI Broke Interviews
 
 
AI has upended technical interviews, blurring the line between genuine skill and cheating: perfect solutions and polished answers are now cheap. In response, companies are shifting back to in-person formats to regain real-time insight into how candidates think, enforce authenticity, capture realistic collaboration signals, cut noise from the pipeline, and rebalance the playing field. In this new era of AI-resistant interviewing, the focus is on measuring human reasoning: explaining code aloud, live architectural debates, physical whiteboards, real-time collaboration, adaptive questioning, and behavioral questions without scripts.
 
 
fly.io
 
You Should Write An Agent
 
 
Building LLM agents - essentially looping stateless models through tools - looks simple. Until it isn't. Peel back the layers, and you hit real architectural puzzles: context engineering, agent loops, sub-agent choreography, execution constraints.
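That "looping stateless models through tools" core can be sketched in a few lines. This is a minimal illustration, not the article's implementation: the model here is a hard-coded stub standing in for a real LLM API call, and the tool registry is a single hypothetical calculator.

```python
# Minimal agent loop sketch: a stateless "model" is called repeatedly;
# each turn it either requests a tool call or returns a final answer.
# stub_model is a stand-in for an LLM API call (assumption, not a real API).

def stub_model(messages):
    """Fake LLM: asks for the calculator once, then answers from its result."""
    tool_turns = [m for m in messages if m["role"] == "tool"]
    if not tool_turns:
        return {"tool": "calculator", "args": {"expr": "6 * 7"}}
    return {"answer": f"The result is {tool_turns[-1]['content']}"}

# Tool registry: name -> callable. eval is sandboxed to arithmetic only.
TOOLS = {
    "calculator": lambda args: str(eval(args["expr"], {"__builtins__": {}})),
}

def run_agent(model, user_prompt, max_turns=10):
    # The loop owns all state; the model only ever sees the transcript.
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_turns):
        step = model(messages)
        if "answer" in step:
            return step["answer"]
        result = TOOLS[step["tool"]](step["args"])
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("agent exceeded max_turns")

print(run_agent(stub_model, "What is 6 times 7?"))  # → The result is 42
```

The architectural puzzles the article mentions all live inside this loop: what goes into `messages` (context engineering), when to stop (execution constraints), and when one loop should spawn another (sub-agent choreography).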
 
 
engineering.atspotify.com
 
1,500+ PRs Later: Spotify’s Journey with Our Background Coding Agent
 
 
Spotify just gave its internal Fleet Management tooling a serious brain upgrade. They've wired in AI coding agents that now handle source-to-source transformations across repos - automatically.

So far? Over 1,500 AI-generated PRs pushed. Not just lint fixes - these include heavy-duty migrations. They're reporting up to 90% time saved vs. grinding it out by hand.
 
 

👉 Got something to share? Create your FAUN Page and start publishing your blog posts, tools, and updates. Grow your audience, and get discovered by the developer community.

 
⭐ Supporters
 
bytevibe.co
 
cat /var/logs/*
 
 
Meet your new favorite debugging buddy. The “cat /var/logs” Mug — because even your coffee deserves root access.

Perfect for late-night deploys, production “incidents,” or pretending to read logs while scrolling memes.
Sleek black ceramic, dev-approved design, and a chonky white cat who clearly knows tail -f.

Drink. Debug. Repeat.
 
 
👉 Spread the word and help developers find you by promoting your projects on FAUN. Get in touch for more information.
 
💬 Discussions, Q&A & Forums
 
news.ycombinator.com
 
Who uses open LLMs and coding assistants locally? Share setup and laptop
 
 
A dev's gathering intel on running open-source LLMs and coding assistants straight from the laptop - no cloud crutches. Digging into setups powered by Ollama, VS Code plugins, and the real bottlenecks: GPU, NPU, RAM, and how well these rigs handle things like code completion and refactoring.
 
 
 
⚙️ Tools, Apps & Software
 
github.com
 
vdaas/vald
 
 
A Highly Scalable Distributed Vector Search Engine
 
 
github.com
 
karpathy/nanochat
 
 
The best ChatGPT that $100 can buy.
 
 
github.com
 
pathwaycom/llm-app
 
 
Ready-to-run cloud templates for RAG, AI pipelines, and enterprise search with live data. Docker-friendly. Always in sync with Sharepoint, Google Drive, S3, Kafka, PostgreSQL, real-time data APIs, and more.
 
 
github.com
 
pulsecost/pulsecost-oss
 
 
Proxy + Dashboard for optimizing LLM usage costs with reports
 
 
github.com
 
virattt/dexter
 
 
An autonomous agent for deep financial research
 
 

👉 Spread the word and help developers find and follow your Open Source project by promoting it on FAUN. Get in touch for more information.

 
🤔 Did you know?
 
 
Did you know that in machine learning there’s a phenomenon called “double descent” where increasing a model’s size initially improves performance, then worsens it, then improves it again?
 
 
😂 Meme of the week
 
 
 
 
🤖 Once, SenseiOne Said
 
 
"We celebrate model accuracy, then deploy into a data pipeline we don’t own; that isn’t ML, it’s hope. If you can’t version, test, and roll back data with the same rigor as code, you’re not doing MLOps—you’re scheduling incidents."
— SenseiOne
 

(*) SenseiOne is FAUN.dev’s work-in-progress AI agent

 
👤 This Week's Human
 
 
This Week’s Human is Geoffrey Dayrit, Cyber Security Senior Lead Technical Program Manager at Lumen Technologies. He builds and scales secure Government Cloud environments aligned with CMMC 2.0, DFARS 252.204‑7012, and FedRAMP/NIST 800‑171, driving zero‑trust architecture and vulnerability remediation to cut risk and shorten audit timelines. Previously he led the Global Security Services PMO, grounding teams in the NIST Cybersecurity Framework (Identify, Protect, Detect, Respond, Recover).
 

💡 Engage with FAUN.dev on LinkedIn — like, comment on, or share any of our posts on LinkedIn — you might be our next “This Week’s Human”!

 
❤️ Thanks for reading
 
 
👋 Keep in touch and follow us on social media:
- 💼LinkedIn
- 📝Medium
- 🐦Twitter
- 👥Facebook
- 📰Reddit
- 📸Instagram

👌 Was this newsletter helpful?
We'd really appreciate it if you could forward it to your friends!

🙏 Never miss an issue!
To receive our future emails in your inbox, don't forget to add community@faun.dev to your contacts.

🤩 Want to sponsor our newsletter?
Reach out to us at sponsors@faun.dev and we'll get back to you as soon as possible.
 

Kala #502: Elon Musk's Grok 4 AI Gets Major Boost with 2M Token Context
Legend: ✅ = Editor's Choice / ♻️ = Old but Gold / ⭐ = Promoted / 🔰 = Beginner Friendly

You received this email because you are subscribed to FAUN.dev.
We (🐾) help developers (👣) learn and grow by keeping them up with what matters.

You can manage your subscription options here (recommended) or use the old way here (legacy). If you have any problem, read this or reply to this email.