FAUN.dev's AI/ML Weekly Newsletter
 
🔗 View in your browser   |  ✍️ Publish on FAUN.dev   |  🦄 Become a sponsor
 
Kala
 
#ArtificialIntelligence #MachineLearning #MLOps
 
 
📝 The Opening Call
 
 
Final days: The FAUN.sensei() launch discount expires soon.

Since we flipped the switch on FAUN.sensei(), the response has been incredible. It’s clear that many of you are ready to move beyond basic tutorials and learn from those who have already "lived the journey."

I’ve decided to keep the launch celebration going through the end of the year, but the clock is officially ticking. You have until December 31st to grab any (or all) of our inaugural courses at 25% off.

ℹ️ Use code SENSEI2525 at checkout.

The lineup has expanded! We’ve just added two new courses to the collection: a deep dive into the Helm ecosystem and a comprehensive guide to Generative AI. Here is the full list:

👉 Helm in Practice – Designing, Deploying, and Operating Kubernetes Applications at Scale

👉 Building with GitHub Copilot – Master the shift from coding to AI-assisted orchestration.

👉 Observability with Prometheus and Grafana – Hands-on guide to achieving true operational clarity.

👉 DevSecOps in Practice – How to actually operationalize security at scale.

👉 Cloud-Native Microservices With Kubernetes (2nd Edition) – The comprehensive blueprint for high-availability systems.

👉 Cloud Native CI/CD with GitLab – Streamlining the path from commit to production.

👉 End-to-End Kubernetes with Rancher, RKE2, K3s, Fleet, Longhorn, and NeuVector – The complete architectural journey to production.

👉 Generative AI For The Rest Of US – Your Future, Decoded

Remember, the SENSEI2525 code works as many times as you need, but it vanishes when the calendar turns to the new year.

See you on FAUN.sensei()!

Aymen, Founder of FAUN.dev()
 
 
🔍 Inside this Issue
 
 
Agents aren’t waiting for permission: they’re eating SaaS, jumping on-call, and snapping into your APIs, while long-context copilots and no-code builders tilt buy-vs-build back toward build. Rust is creeping into AI plumbing, MCP is making clouds legible to models, and fresh benchmarks and geopolitics add just enough friction. Details below.

🍽️ AI agents are starting to eat SaaS
🚨 AWS Previews DevOps Agent to Automate Incident Investigation Across Cloud Environments
🇨🇳 Chinese AI in 2025, Wrapped
🛡️ Evaluating AI Agents in Security Operations
🧰 Everything to know about Google Gemini’s most recent AI updates
🧠 GitHub Copilot Adds GPT-5.2 With Long-Context and UI Generation
🦀 Google Releases Magika 1.0: AI File Detection in Rust
🔌 Google’s Cloud APIs Become Agent-Ready with Official MCP Support
🔎 Review of DeepSeek OCR

Smarter stack, fewer excuses, ship it.

Until next time!
FAUN.dev() Team
 
 
ℹ️ News, Updates & Announcements
 
faun.dev
 
GitHub Copilot Adds GPT-5.2 With Long-Context and UI Generation
 
 
OpenAI GPT-5.2 is now in public preview for all paid GitHub Copilot users. It’s wired into VS Code, GitHub Mobile, and the Copilot CLI.

This model handles long-context reasoning and UI generation as if it was built for them. It’s already pulling ahead on benchmarks like GDPval and SWE-Bench Pro.
 
 
faun.dev
 
Google Releases Magika 1.0: AI File Detection in Rust
 
 
Google dropped Magika 1.0, now powered by a Rust-based engine. File type support? Doubled. Now over 200, including tricky formats like Jupyter, NumPy, and PyTorch.

Under the hood: ONNX Runtime handles fast model inference. Tokio brings async I/O muscle. And synthetic training data sharpens detection without bloating the model.
 
 
faun.dev
 
Google’s Cloud APIs Become Agent-Ready with Official MCP Support
 
 
Google just flipped the switch on Model Context Protocol (MCP) across BigQuery, GKE, Compute Engine, Maps, and Apigee. Now AI models can tap into these services through a standard interface.

MCP endpoints ride on managed servers and bake in each service’s security and logging by default. Clean, controlled, and finally consistent.
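
For a sense of what agent-ready means on the client side, here is a rough sketch using the MCP Python SDK as commonly shown in its docs; the endpoint URL is hypothetical, and the exact setup for Google’s managed servers (auth, project scoping) will differ, so treat this as an illustration of the protocol handshake rather than Google’s documented flow.

import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def main():
    # Connect to a (hypothetical) managed MCP endpoint over streamable HTTP.
    async with streamablehttp_client("https://example.googleapis.com/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()          # MCP handshake
            tools = await session.list_tools()  # discover what the service exposes
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(main())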
 
 
faun.dev
 
AWS Previews DevOps Agent to Automate Incident Investigation Across Cloud Environments
 
 
AWS just released the DevOps Agent into public preview. It chews through telemetry, config, and deployment data mid-incident, trying to make sense of the mess before you even blink.

Hooks into CloudWatch, GitHub, GitLab, Datadog, ServiceNow, and more let it auto-map your system’s topology and stitch together context across your workflows. Less clicking, more clarity.
 
 
👉 Enjoyed this? Read more news on FAUN.dev/news
 
🔗 Stories, Tutorials & Articles
 
martinalderson.com
 
AI agents are starting to eat SaaS
 
 
AI coding agents are eating the lunch of low-complexity SaaS.

Teams with a bit of dev muscle are skipping subscription logins and spinning up dashboards, pipelines, even decks, using Claude, Gemini, whoever’s fastest that day.

Build vs. buy? Tilting back toward build. The kicker: build now takes minutes.
 
 
euronews.com
 
Everything to know about Google Gemini’s most recent AI updates
 
 
Google jammed a full no-code AI workshop into Gemini. The browser now bakes in Opal, a drag-and-drop app builder with a shiny new visual editor. You can chain prompts, preview apps, and feed it text, voice, or images, without touching code.

They also dropped the Gemini 3 Flash model, built for dual reasoning. It powers real-time Search and Translate, now smarter with context, idioms, and full-on conversational flow.
 
 
cotool.ai
 
Evaluating AI Agents in Security Operations
 
 
Cotool threw frontier LLMs at real-world SecOps tasks using Splunk’s BOTSv3 dataset. GPT-5 topped the chart in accuracy (62.7%) and gave the best results per dollar. Claude Haiku 4.5 blazed through tasks fastest, just 240 seconds on average, while maxing out tool integrations. Gemini 2.5 Pro flopped on both accuracy and reliability, with repeated failures.
 
 
lukeatkins.me
 
Review of DeepSeek OCR
 
 
DeepSeek-OCR flips the OCR script. Instead of feeding full image tokens to the decoder, it leans on an encoder to compress them up front, trimming down input size and GPU strain in one move. That context diet? It opens the door for way bigger windows in LLMs.

Why it matters: Shoving compression earlier in the pipeline could shift how multimodal models train and run, especially when hardware is tight.
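
To make the compression idea concrete, here is a toy PyTorch sketch with made-up sizes and a single pooling layer; it is not DeepSeek-OCR’s actual encoder, just the shape of the trick: squeeze thousands of patch embeddings into a few hundred vision tokens before the decoder ever sees them.

import torch
import torch.nn as nn

class ToyVisionCompressor(nn.Module):
    def __init__(self, dim=768, factor=16):
        super().__init__()
        # A strided convolution pools every `factor` patch embeddings
        # into a single compressed vision token.
        self.pool = nn.Conv1d(dim, dim, kernel_size=factor, stride=factor)

    def forward(self, patch_tokens):               # (batch, n_patches, dim)
        x = patch_tokens.transpose(1, 2)           # (batch, dim, n_patches)
        return self.pool(x).transpose(1, 2)        # (batch, n_patches // factor, dim)

patches = torch.randn(1, 4096, 768)                # a dense page worth of patch embeddings
compressed = ToyVisionCompressor()(patches)
print(patches.shape[1], "->", compressed.shape[1]) # 4096 -> 256 tokens reach the decoder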
 
 
chinatalk.media
 
Chinese AI in 2025, Wrapped
 
 
Chinese AI milestones in 2025: big models from DeepSeek and others, AGI discussions at Alibaba, swings in the US-China chip war, Beijing's AI Action Plan, and more. DeepSeek led the way with an open-source model, setting off a wave of Chinese companies releasing open models. China's push for AGI and the dynamics of the US-China chip war remain key trends to watch in 2026.
 
 

👉 Got something to share? Create your FAUN Page and start publishing your blog posts, tools, and updates. Grow your audience, and get discovered by the developer community.

 
⭐ Supporters
 
bytevibe.co
 
25% Off - Treat yourself before the new year deploy. 🎄
 
 
You've survived the on-call rotations, the hard work, and the marathons of 2025! It’s time for some better swag, so stop wearing boring shirts. Get yours from ByteVibe, where the gear actually represents our culture!

We're giving all our subscribers 25% off at ByteVibe, the home of "Rock 'n' Roll" dev gear.

Use code SUBSCR1B3R at checkout. Valid until Dec 31st!
 
 
👉 Spread the word and help developers find you by promoting your projects on FAUN. Get in touch for more information.
 
💬 Discussions, Q&A & Forums
 
reddit.com
 
After 3 months of Claude Code CLI: my "overengineered" setup that actually ships production code
 
 
This redditor shared their setup after switching from Cursor to Claude Code CLI, detailing core tools, custom skills, and the use of CLAUDE.md. They also highlighted the benefits of their setup, such as vibe coding with guardrails, the iteration loop, and browser-in-the-loop testing.
 
 
 
⚙️ Tools, Apps & Software
 
github.com
 
arm/metis
 
 
Metis is an open-source, AI-driven tool for deep security code review
 
 
github.com
 
thedotmack/claude-mem
 
 
A Claude Code plugin that automatically captures everything Claude does during your coding sessions, compresses it with AI (using Claude's agent-sdk), and injects relevant context back into future sessions.
 
 
github.com
 
langchain-ai/open_deep_research
 
 
This is a simple, configurable, fully open source deep research agent that works across many model providers, search tools, and MCP servers. Its performance is on par with many popular deep research agents.
 
 
github.com
 
mistralai/mistral-vibe
 
 
Minimal CLI coding agent by Mistral
 
 

👉 Spread the word and help developers find and follow your Open Source project by promoting it on FAUN. Get in touch for more information.

 
🤔 Did you know?
 
 
Did you know that many large language models speed up inference using speculative decoding? A small draft model proposes tokens that a larger model verifies in parallel, letting the system accept multiple tokens at once while producing the exact same output as normal decoding. This cuts expensive forward passes on the large model and boosts throughput without changing model behavior.
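
Here is a minimal sketch of that loop, assuming greedy decoding on both sides; draft_next and target_next are hypothetical stand-ins for the two models, and real systems replace the equality check with an accept/reject rule that preserves the target model's sampling distribution.

def draft_next(context):
    # Small, cheap draft model: guesses the next token from the context sum.
    return (sum(context) + 1) % 50

def target_next(context):
    # Large, expensive target model: mostly agrees with the draft,
    # but diverges whenever the context length is a multiple of 7.
    if len(context) % 7 == 0:
        return (sum(context) + 2) % 50
    return (sum(context) + 1) % 50

def speculative_decode(prompt, n_tokens, k=4):
    out = list(prompt)
    while len(out) - len(prompt) < n_tokens:
        # 1) Draft k tokens cheaply, one at a time.
        draft = []
        for _ in range(k):
            draft.append(draft_next(out + draft))
        # 2) Verify the drafted positions with the target model; this loop
        #    stands in for a single batched forward pass over all k positions.
        accepted = 0
        for i in range(k):
            if target_next(out + draft[:i]) == draft[i]:
                accepted += 1
            else:
                break
        out += draft[:accepted]
        # 3) Append the target model's own next token (the correction after a
        #    mismatch, or a bonus token after a full accept), so the output is
        #    identical to plain greedy decoding with the target model alone.
        out.append(target_next(out))
    return out[len(prompt):len(prompt) + n_tokens]

print(speculative_decode([3, 1, 4], n_tokens=10))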
 
 
🤖 Once, SenseiOne Said
 
 
"You don't own a model until you own its lineage, monitoring, and rollback; everything else is a demo. Scale won't fix missing labels or flaky pipelines—it only makes those assumptions costlier."
— SenseiOne
 

(*) SenseiOne is FAUN.dev’s work-in-progress AI agent

 
⚡Growth Notes
 
 
Pick one model family or layer type and quietly become the person who really understands it, down to reading key papers and skimming the actual source. Each week, run one small, controlled experiment that tests a hypothesis about that piece of the stack, and log the outcome, graphs, and gotchas in a personal decision journal. Over time, shape those notes into short, reusable internal docs or notebooks that others can actually plug into real projects. This positions you as the engineer who brings measurable improvements, not just opinions, which quietly shifts who people loop in on important work. The habit is simple: experiments on Friday, notes on Saturday, share one sharp insight on Monday.
 
Each week, we share a practical move to grow faster and work smarter
 
😂 Meme of the week
 
 
 
 
❤️ Thanks for reading
 
 
👋 Keep in touch and follow us on social media:
- 💼 LinkedIn
- 📝 Medium
- 🐦 Twitter
- 👥 Facebook
- 📰 Reddit
- 📸 Instagram

👌 Was this newsletter helpful?
We'd really appreciate it if you could forward it to your friends!

🙏 Never miss an issue!
To receive our future emails in your inbox, don't forget to add community@faun.dev to your contacts.

🤩 Want to sponsor our newsletter?
Reach out to us at sponsors@faun.dev and we'll get back to you as soon as possible.
 

Kala #508: An "Overengineered" Claude Code Setup That Actually Ships Production Code
Legend: ✅ = Editor's Choice / ♻️ = Old but Gold / ⭐ = Promoted / 🔰 = Beginner Friendly

You received this email because you are subscribed to FAUN.dev.
We (🐾) help developers (👣) learn and grow by keeping them up with what matters.

You can manage your subscription options here (recommended) or use the old way here (legacy). If you have any problem, read this or reply to this email.