FAUN.dev's AI/ML Weekly Newsletter
 
🔗 View in your browser   |  ✍️ Publish on FAUN.dev   |  🦄 Become a sponsor
 
 
AILinks
 
This week in Generative AI/ML, with Kala the Koala
 
 
📝 A Few Words
 
 
Anthropic is no longer just a chatbot company. They're systematically building a vertically integrated software stack, from design to code to security audit to deployment, and Wall Street is pricing it in before most engineers have noticed.

The proof is in the first four months of 2026:

👉January - Claude Cowork ships multi-agent workflow plug-ins. SaaS stocks sell off.

👉February - Claude Code Security launches. CrowdStrike drops 8%, Okta 9%. Three days later, a blog post about Claude Code automating COBOL dependency mapping wipes $40B off IBM.

👉March - A leaked draft about Claude Mythos surfaces, reportedly capable of chaining vulnerabilities to bypass modern security protocols. Palo Alto drops 6%, CrowdStrike 6%.

👉April - Claude Design launches. Reads codebases and design files to build reusable design systems, exports to PPTX/PDF/Canva, or hands off directly to Claude Code. Figma drops 7%.

Anthropic is not only shipping features but also building a monopoly at startup speed: owning design, code, security, and deployment in a single vertically integrated stack. I don't know of any company in tech history that has moved this fast to capture this many layers at once.

A stock analyst might say the market is overreacting. Maybe. But by the time they finish explaining why, Anthropic will have shipped three more products.

Have a great week,
Aymen
 
 
🔍 Inside this Issue
 
 
AI in prod is starting to look less like magic and more like plumbing: hardened sandboxes, reproducible tests, and prompts treated like code that can regress. Pair that with edge-native Git and tiny 1.58-bit models running on laptops and phones, and the stack is getting weird in the best way.

🧩 A GitHub agentic workflow
🧠 How LLMs Work: A Visual Deep Dive
⚡ Introducing Coregit
📉 Introducing Ternary Bonsai: Top Intelligence at 1.58 Bits
🛠️ The PR you would have opened yourself

Ship the boring guardrails, then enjoy the speed.

Stay safe out there.
FAUN.dev() Team
 
 
⭐ Patrons
 
iacconf.com
 
How is infrastructure keeping pace with AI in 2026?
 
 
Managing IaC or leading platform engineering? IaCConf is the “can’t miss” event featuring 20 top IaC leaders across 13 sessions. Join 5,000+ practitioners to share what’s actually working and swap hard-won lessons.

Register Now
 
 
eventbrite.co.uk
 
Are Your APIs Ready for AI Agents? A Hands-on Workshop on May 23rd
 
 
AI agents are beginning to autonomously call APIs, chain services, and create integrations that most platforms were never designed to handle. This hands-on masterclass on Designing AI-ready APIs helps architects and developers build governed, predictable API ecosystems using OpenAPI, Overlay, and Arazzo.

Learn how to add guardrails, improve discoverability, and safely evolve existing APIs for automated consumption.

FAUN.dev readers get an exclusive 40% discount using code FAUN40.
 
 
👉 Spread the word and help developers find you by promoting your projects on FAUN. Get in touch for more information.
 
🔗 Stories, Tutorials & Articles
 
blog.frankel.ch
 
A GitHub agentic workflow
 
 
The author automated parsing of unstructured release notes with GitHub agentic workflows: the pipeline compiles a Markdown workflow definition to YAML, then runs an agent over it.

The setup requires a fine-grained Copilot token, enforces a hardened sandbox policy, and forbids Marketplace actions. CI runs a compile-then-compare check to catch drift between the Markdown source and the generated YAML.
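The compile-then-compare guard can be sketched as a small diff check (a sketch, not the project's actual CI code; the YAML strings are illustrative):

```python
import difflib

def detect_drift(committed_yaml: str, recompiled_yaml: str) -> list:
    """Diff the committed, generated YAML against a fresh compile of the
    Markdown source. An empty result means the two are still in sync."""
    return list(difflib.unified_diff(
        committed_yaml.splitlines(),
        recompiled_yaml.splitlines(),
        fromfile="committed",
        tofile="recompiled",
        lineterm="",
    ))

# In CI: recompile the Markdown workflow, then fail the build on any drift.
committed = "on: push\npermissions: read-all\n"
recompiled = "on: push\npermissions: write-all\n"
if detect_drift(committed, recompiled):
    print("workflow YAML drifted from its Markdown source")
```

Failing the build on any non-empty diff keeps the generated YAML honest: nobody can hand-edit it without the Markdown source catching up.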
 
 
prismml.com
 
Introducing Ternary Bonsai: Top Intelligence at 1.58 Bits
 
 
PrismML unveils Ternary Bonsai: a family of 1.58-bit language models in 1.7B, 4B, and 8B sizes. Weights are ternary {-1, 0, +1}, and each group of 128 weights shares an FP16 scale, cutting memory by ~9x versus 16-bit while keeping benchmark scores strong.

The 8B model hits a 75.5 benchmark average at 1.75 GB, runs at 82 tok/s on an M4 Pro and 27 tok/s on an iPhone 17 Pro Max, and ships under Apache 2.0.
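The group-wise scheme can be sketched in a few lines (the absmean scale is an assumption; PrismML hasn't published Bonsai's exact recipe):

```python
def ternary_quantize(weights, group_size=128):
    """Map each group of `group_size` weights to {-1, 0, +1} plus one shared
    scale (FP16 in the real format). Scale = mean absolute value of the group."""
    q, scales = [], []
    for i in range(0, len(weights), group_size):
        group = weights[i:i + group_size]
        scale = sum(abs(w) for w in group) / len(group) or 1.0  # guard all-zero groups
        scales.append(scale)
        q.append([max(-1, min(1, round(w / scale))) for w in group])
    return q, scales

def dequantize(q, scales):
    """Reconstruct approximate weights: ternary value * shared group scale."""
    return [v * s for row, s in zip(q, scales) for v in row]

q, s = ternary_quantize([0.5, -1.2, 0.05, 2.0], group_size=4)
print(q)  # every entry is -1, 0, or +1
```

The 1.58 figure is log2(3), the information content of a ternary digit; one FP16 scale per 128 weights adds only ~0.125 bits/weight of overhead, which is roughly where the ~9x memory cut versus FP16 comes from.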
 
 
huggingface.co
 
The PR you would have opened yourself
 
 
A Skill ports models from transformers to mlx-lm: it bootstraps an environment, discovers variants, downloads checkpoints, writes MLX implementations, and runs layered tests. It produces disclosed PRs with per-layer diffs, dtype checks, generation examples, numerical comparisons, and a reproducible, non-agentic test suite.
 
 
coregit.dev
 
Introducing Coregit
 
 
Coregit reimplements Git's object model in TypeScript and runs on Cloudflare Workers as a serverless edge Git API. Its commit endpoint accepts up to 1,000 file changes per request and replaces 105+ GitHub API calls with one. Yes, one.

It acknowledges writes in Durable Objects (~2 ms), then flushes objects to R2. Objects are cached by SHA-1; embeddings live in KV.

This edge-native, session-first model reframes repo I/O for AI agents: it favors content-addressed caching over REST workflows, cutting auth complexity and latency.
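The content-addressed part builds on Git's standard object hashing, which any cache layer can key on (a minimal sketch; Coregit's R2/KV wiring is not shown):

```python
import hashlib

def git_blob_sha1(content: bytes) -> str:
    """Git's content address for a blob: SHA-1 over 'blob <size>\\0<content>'.
    Identical content always yields the same key, so an edge cache can dedupe
    objects across commits and sessions."""
    header = f"blob {len(content)}\0".encode()
    return hashlib.sha1(header + content).hexdigest()

print(git_blob_sha1(b"hello\n"))  # same digest `git hash-object --stdin` prints
```

Because the key is a pure function of the content, a cache hit needs no auth round-trip to an upstream API, which is where much of the latency win comes from.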
 
 
ynarwal.github.io
 
How LLMs Work — A Visual Deep Dive   ✅
 
 
A complete walkthrough of how large language models like ChatGPT are built, from raw internet text to a conversational assistant.
 
 

👉 Got something to share? Create your FAUN Page and start publishing your blog posts, tools, and updates. Grow your audience, and get discovered by the developer community.

 
💬 Discussions, Q&A & Forums
 
reddit.com
 
DeepSeek V4 has been released
 
 
 
 
 
⚙️ Tools, Apps & Software
 
github.com
 
brexhq/CrabTrap
 
 
An LLM-as-a-judge HTTP proxy to secure agents in production
 
 
github.com
 
QwenLM/Qwen3.6
 
 
Qwen3.6 is a large language model series developed by the Qwen team at Alibaba Group.
 
 
github.com
 
midudev/autoskills
 
 
One command. Your entire AI skill stack. Installed.
 
 
github.com
 
huytieu/COG-second-brain
 
 
Self-evolving second brain with 17 AI skills, 6 worker agents, and a people CRM, inspired by Garry Tan's gstack and gbrain. Works with Claude Code, Cursor, Kiro, Gemini CLI, and Codex.
 
 
github.com
 
heygen-com/hyperframes
 
 
Write HTML. Render video. Built for agents.
 
 

👉 Spread the word and help developers find and follow your Open Source project by promoting it on FAUN. Get in touch for more information.

 
🤖 Once, SenseiOne Said
 
 
"Most ML failures are not model failures; they're bookkeeping failures with a GPU budget. The paradox is that the more you automate training, the more your job becomes policing data, versions, and incentives. If you can't reproduce a prediction, you didn't ship intelligence, you shipped a rumor."
— SenseiOne
 

(*) SenseiOne is FAUN.dev’s work-in-progress AI agent

 
⚡Growth Notes
 
 
How much of your prompt engineering is tested for regressions? How do you know last week's system prompt still produces the same quality of output after a model update? A team that can't answer that is shipping non-deterministic behavior with no safety net, and the cost shows up as silent quality degradation that nobody notices until a customer does.
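One cheap safety net is a golden-output regression check that re-runs pinned prompts after every model or prompt change (a sketch; GOLDEN, the threshold, and call_model are illustrative stand-ins for your own suite):

```python
import difflib

GOLDEN = {
    "What is 2 + 2?": "4",
    "Name the capital of France.": "Paris",
}

def similarity(a: str, b: str) -> float:
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

def regression_check(call_model, threshold=0.8):
    """Return the prompts whose fresh answers drifted below `threshold`
    similarity against the stored golden outputs."""
    return [
        prompt for prompt, golden in GOLDEN.items()
        if similarity(call_model(prompt), golden) < threshold
    ]

# `call_model` stands in for the real LLM call; wire this into CI so a model
# or system-prompt update fails loudly instead of degrading silently.
print(regression_check(lambda p: {"What is 2 + 2?": "4",
                                  "Name the capital of France.": "Paris"}[p]))
```

String similarity is a crude judge; teams often swap in embedding distance or an LLM grader, but even this catches the silent regressions described above.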
 
Each week, we share a practical move to grow faster and work smarter
 
😂 Meme of the week
 
 
 
 
❤️ Thanks for reading
 
 
👋 Keep in touch and follow us on social media:
- 💼LinkedIn
- 📝Medium
- 🐦Twitter
- 👥Facebook
- 📰Reddit
- 📸Instagram

👌 Was this newsletter helpful?
We'd really appreciate it if you could forward it to your friends!

🙏 Never miss an issue!
To receive our future emails in your inbox, don't forget to add community@faun.dev to your contacts.

🤩 Want to sponsor our newsletter?
Reach out to us at sponsors@faun.dev and we'll get back to you as soon as possible.
 

AILinks #526: How LLMs Work - A Visual Deep Dive
Legend: ✅ = Editor's Choice / ♻️ = Old but Gold / ⭐ = Promoted / 🔰 = Beginner Friendly

You received this email because you are subscribed to FAUN.dev.
We (🐾) help developers (👣) learn and grow by keeping them up with what matters.

You can manage your subscription options here (recommended) or use the old way here (legacy). If you have any problem, read this or reply to this email.