FAUN.dev's AI/ML Weekly Newsletter
 
🔗 View in your browser   |  ✍️ Publish on FAUN.dev   |  🦄 Become a sponsor
 
Kala
 
#ArtificialIntelligence #MachineLearning #MLOps
 
 
🔍 Inside this Issue
 
 
Open models might never catch up, yet they could still set the agenda, while agent swarms ship code and a wall plotter turns prompts into ink. If that tension resonates, the links below map the space with real builds and hard-won lessons.

🎨 Generative Pen-trained Transformer
🧭 My AI Adoption Journey
🧩 Nathan Lambert: Open Models Will Never Catch Up
🏈 Self-Optimizing Football Chatbot
⚙️ This Is the First AI That Helped Build Itself - Meet GPT-5.3-Codex
🚗 Towards self-driving codebases

Ship smarter this week.

Take care!
FAUN.dev() Team
 
 
⭐ Patrons
 
faun.dev
 
February Only: 20% off all FAUN.sensei() Courses
 
 
Most of us spend our time learning tools, frameworks, and patterns that sit several layers above the real system. That works until something changes. Then the gaps show up fast.

FAUN.sensei() is about closing those gaps. In addition to tools and technologies, the courses focus on fundamentals, mental models, and how systems actually behave underneath the abstractions.

If you've been meaning to step back and strengthen your foundations, February is a good moment to do it. Use the code SenseiFebruary to get 20% off all my courses throughout February.
 
 
👉 Spread the word and help developers find you by promoting your projects on FAUN. Get in touch for more information.
 
ℹ️ News, Updates & Announcements
 
faun.dev
 
This Is the First AI That Helped Build Itself - Meet GPT-5.3-Codex
 
 
OpenAI's GPT-5.3-Codex levels up. Think 25% faster runtimes, sharper reasoning, and more reach - across terminals, IDEs, and browsers. It tackles the full dev loop: debugging, deployments, PRD writing. Even lets users steer output in real time.

It crushes benchmarks like SWE-Bench Pro, Terminal-Bench, and OSWorld. And it’s the first Codex model tagged “High capability” for cybersecurity. Big deal.
 
 
👉 Enjoyed this? Read more news on FAUN.dev/news
 
🔗 Stories, Tutorials & Articles
 
cursor.com
 
Towards self-driving codebases
 
 
OpenAI spun up a swarm of GPT-5.x agents - thousands of them. Over a week-long sprint, they cranked out runnable browser code and shipped it nonstop. The system hit 1,000 commits an hour across 10 million tool calls.

The architecture? A planner-worker stack. Hierarchical. Recursive. Light on agent chatter. Heavy on self-steering behavior.
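
In spirit, and definitely not OpenAI's actual code, the pattern boils down to something like this minimal Python sketch: a planner recursively splits work, workers handle leaf tasks, and the only chatter is results flowing back up.

```python
# Minimal planner-worker sketch: a planner recursively splits work,
# workers handle leaf tasks, and results flow back up the hierarchy.
# All names here are hypothetical stand-ins, not OpenAI's implementation.
from dataclasses import dataclass, field

@dataclass
class Task:
    goal: str
    subtasks: list["Task"] = field(default_factory=list)

def plan(task: Task, depth: int = 0) -> Task:
    """Stand-in for an LLM planner call that decides whether to split a task."""
    if depth < 1:  # pretend the planner only splits the top-level goal
        task.subtasks = [Task(f"{task.goal} :: part {i}") for i in range(3)]
    return task

def work(task: Task) -> str:
    """Stand-in for a worker agent executing a leaf task (edits, tests, commit)."""
    return f"done: {task.goal}"

def run(task: Task, depth: int = 0) -> list[str]:
    task = plan(task, depth)
    if not task.subtasks:                 # leaf -> hand to a worker
        return [work(task)]
    results: list[str] = []
    for sub in task.subtasks:             # recurse; workers never talk to each other
        results.extend(run(sub, depth + 1))
    return results

if __name__ == "__main__":
    for line in run(Task("implement a runnable in-browser demo")):
        print(line)
```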
 
 
theodore.net/projects
 
Generative Pen-trained Transformer
 
 
Meet GPenT, an open-source, wall-mounted polargraph pen plotter with a flair for generative art. It blends custom hardware, Marlin firmware, a Flask web UI running on Raspberry Pi, and Gemini-generated drawing prompts.

The stack? Machina + LLM. Prompts go in, JSON drawing commands come out. That drives a real-world plotter that spits out SVGs and algorithmic patterns like it’s no big deal.
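
The blurb doesn't publish the command schema, so treat this as a purely hypothetical sketch of the idea: LLM output as JSON drawing commands, translated into G-code-style moves a Marlin-driven plotter could consume.

```python
# Hypothetical sketch of "prompts in, JSON drawing commands out": a faked LLM
# response is parsed and mapped to G-code-style moves. The JSON schema and the
# servo angles are assumptions, not GPenT's actual format.
import json

llm_response = json.dumps([
    {"op": "pen_up"},
    {"op": "move", "x": 10.0, "y": 10.0},
    {"op": "pen_down"},
    {"op": "move", "x": 120.0, "y": 80.0},
    {"op": "pen_up"},
])

def to_gcode(commands_json: str) -> list[str]:
    gcode = []
    for cmd in json.loads(commands_json):
        if cmd["op"] == "pen_up":
            gcode.append("M280 P0 S90")     # servo lifts the pen (typical Marlin wiring)
        elif cmd["op"] == "pen_down":
            gcode.append("M280 P0 S30")     # servo lowers the pen
        elif cmd["op"] == "move":
            gcode.append(f"G1 X{cmd['x']:.2f} Y{cmd['y']:.2f} F3000")
    return gcode

print("\n".join(to_gcode(llm_response)))
```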
 
 
turingpost.com
 
Nathan Lambert: Open Models Will Never Catch Up
 
 
Open models will be the engine of the next ten years of AI research, argues Nathan Lambert, a research scientist at AI2. With fewer resources behind them, they may never catch up with closed models, yet they remain crucial for innovation. His point: only intentional investment in open models keeps the open ecosystem influential in AI research.
 
 
databricks.com
 
Self-Optimizing Football Chatbot Guided by Domain Experts on Databricks
 
 
Generic LLM judges and static prompts miss the domain-specific nuance of football defensive analysis. Databricks lays out an architecture for self-optimizing agents, built on the Databricks Agent Framework, that lets developers continuously improve AI quality with MLflow and expert feedback. The example is a DC Assistant for American Football: a tool-calling agent with domain expertise that talks to users through Databricks Apps. A build phase produces the initial prototype; an optimize phase gets it to production by continuously refining the agent from feedback.
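
Strip away the Databricks-specific pieces and the optimize loop amounts to logging expert feedback per agent version and promoting the best scorer. A rough sketch with made-up prompts and scores; only the MLflow calls are real API:

```python
# Rough sketch of the build -> optimize loop: log expert feedback per agent
# version with MLflow, then promote the best-scoring prompt. The prompts and
# expert_scores below are hypothetical stand-ins; the MLflow calls are real.
import mlflow

candidate_prompts = {
    "v1": "You are a defensive-coordinator assistant. Answer briefly.",
    "v2": "You are a defensive-coordinator assistant. Cite the formation and coverage.",
}

# Pretend domain experts rated answers from each prompt version (1-5 scale).
expert_scores = {"v1": [3, 4, 2, 3], "v2": [4, 5, 4, 4]}

best_version, best_score = None, float("-inf")
for version, prompt in candidate_prompts.items():
    with mlflow.start_run(run_name=f"dc-assistant-{version}"):
        mlflow.log_param("prompt", prompt)
        avg = sum(expert_scores[version]) / len(expert_scores[version])
        mlflow.log_metric("expert_avg_score", avg)
        if avg > best_score:
            best_version, best_score = version, avg

print(f"Promote {best_version} (avg expert score {best_score:.2f})")
```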
 
 
mitchellh.com
 
My AI Adoption Journey
 
 
A dev walks through the shift from chatbot coding to agent-based AI workflows: think agents that read files, run code, and double-check their work. Things only clicked once they built out custom tools and configs to help agents spot and fix their own screwups. That’s the real unlock.
 
 

👉 Got something to share? Create your FAUN Page and start publishing your blog posts, tools, and updates. Grow your audience, and get discovered by the developer community.

 
⚙️ Tools, Apps & Software
 
github.com
 
op7418/CodePilot
 
 
A native desktop GUI for Claude Code — chat, code, and manage projects visually. Built with Electron + Next.js.
 
 
github.com
 
HKUDS/nanobot
 
 
The Ultra-Lightweight Clawdbot
 
 
github.com
 
Nutlope/roomGPT
 
 
Upload a photo of your room to generate your dream room with AI.
 
 
github.com
 
julep-ai/julep
 
 
Deploy serverless AI workflows at scale. Firebase for AI agents
 
 
github.com
 
thedotmack/claude-mem
 
 
A Claude Code plugin that automatically captures everything Claude does during your coding sessions, compresses it with AI (using Claude's agent-sdk), and injects relevant context back into future sessions.
 
 

👉 Spread the word and help developers find and follow your Open Source project by promoting it on FAUN. Get in touch for more information.

 
🤔 Did you know?
 
 
Did you know that QLoRA lets you fine-tune a 65 billion-parameter LLaMA model on a single 48 GB GPU by combining 4-bit quantization with Low-Rank Adapters (LoRA)? QLoRA uses a 4-bit NormalFloat (NF4) format designed for normally distributed weights, then applies double quantization to shrink the quantization constants themselves - so you get near full FP16 accuracy with about a 4× reduction in VRAM usage. It also uses paged optimizers that move optimizer state between GPU VRAM and CPU memory to avoid out-of-memory spikes, enabling stable training without extra GPUs.
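
The recipe maps almost directly onto the Hugging Face transformers + peft + bitsandbytes stack. A minimal sketch, using an illustrative checkpoint and LoRA hyperparameters rather than the paper's exact settings:

```python
# Minimal QLoRA-style setup with transformers + peft + bitsandbytes.
# Model name and LoRA hyperparameters are illustrative choices, not the paper's.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",            # 4-bit NormalFloat from the QLoRA paper
    bnb_4bit_use_double_quant=True,       # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",                # swap in a larger checkpoint if VRAM allows
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attach adapters to attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # adapters are a tiny fraction of total params

# For training, transformers' TrainingArguments(optim="paged_adamw_32bit")
# enables the paged optimizer mentioned above.
```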
 
 
🤖 Once, SenseiOne Said
 
 
"CI for ML tries to pin a moving target; if it works too well, you stopped learning. Automation moves errors from code to configuration, and ML moves them from configuration to data."
SenseiOne
 

(*) SenseiOne is FAUN.dev’s work-in-progress AI agent

 
⚡Growth Notes
 
 
Resist building "just enough" evaluation for your models; instead, treat your eval harness as a first-class product artifact with versioned datasets, locked prompts, and reproducible runs, even if it slows you down for a week. The real cost only surfaces a year later when nobody can trust an offline win, people start A/B testing everything in production, and your infrastructure budget quietly becomes the most accurate measure of model quality.
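
One concrete, hypothetical version of "first-class artifact": pin the eval dataset and prompt by content hash and record them with every run, so any offline score can be traced back to exactly what produced it.

```python
# Hypothetical sketch of a "first-class" eval harness: the dataset and prompt
# are pinned by content hash and stored alongside every score, so an offline
# win can always be traced back to exactly what produced it.
import hashlib, json, time
from pathlib import Path

def content_hash(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()[:12]

def run_eval(dataset_path: Path, prompt_path: Path, score: float) -> dict:
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "dataset": {"path": str(dataset_path), "sha256": content_hash(dataset_path)},
        "prompt": {"path": str(prompt_path), "sha256": content_hash(prompt_path)},
        "score": score,
    }
    with Path("eval_runs.jsonl").open("a") as f:   # append-only run log
        f.write(json.dumps(record) + "\n")
    return record

# Usage (paths and score are placeholders for your actual eval loop):
# run_eval(Path("data/eval_set_v3.jsonl"), Path("prompts/judge_v2.txt"), 0.87)
```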
 
Each week, we share a practical move to grow faster and work smarter
 
😂 Meme of the week
 
 
 
 
❤️ Thanks for reading
 
 
👋 Keep in touch and follow us on social media:
- 💼LinkedIn
- 📝Medium
- 🐦Twitter
- 👥Facebook
- 📰Reddit
- 📸Instagram

👌 Was this newsletter helpful?
We'd really appreciate it if you could forward it to your friends!

🙏 Never miss an issue!
To receive our future emails in your inbox, don't forget to add community@faun.dev to your contacts.

🤩 Want to sponsor our newsletter?
Reach out to us at sponsors@faun.dev and we'll get back to you as soon as possible.
 

Kala #515: This Is the First AI That Helped Build Itself - Meet GPT-5.3-Codex
Legend: ✅ = Editor's Choice / ♻️ = Old but Gold / ⭐ = Promoted / 🔰 = Beginner Friendly

You received this email because you are subscribed to FAUN.dev.
We (🐾) help developers (👣) learn and grow by keeping them up with what matters.

You can manage your subscription options here (recommended) or use the old way here (legacy). If you have any problem, read this or reply to this email.