#36 - 🤖 AI writes code faster than we can review it

The review bottleneck is getting real

Hey readers! 👋 This week's AI coding landscape is buzzing with a fascinating tension: tools are getting dramatically more powerful, but the review bottleneck is becoming impossible to ignore. From OpenAI's Codex literally building itself to teams abandoning AI coding entirely because review is too hard, we're watching the industry wrestle with what happens when code generation outpaces human verification. Let's dig in!

🔥 This Week's Highlights

OpenAI's Codex is now building itself - The majority of Codex's codebase is now generated by Codex itself, creating a recursive development loop. Engineers treat it as a junior teammate, assigning tasks via Slack or Linear. The tool helped ship the Sora Android app in just 28 days. – Ars Technica

"Codex is literally a teammate in your workspace."

Claude Code comes to Slack - Anthropic's beta integration lets developers trigger coding sessions directly from Slack threads. Tag Claude in a conversation with a bug report, and it spins up a session, posts progress updates, and returns a PR link when done. This mirrors a broader trend of embedding AI assistants into collaboration platforms. – AI Native Dev

"The need to go to an IDE and make changes is diminishing day by day."

DeepSeek-V3 drops with impressive efficiency - This 671B-parameter model activates only 37B parameters per token, trained on 14.8 trillion tokens in just 2.788M GPU hours. It introduces auxiliary-loss-free load balancing and multi-token prediction, achieving performance rivaling top closed-source models with remarkable training stability. – DeepSeek-AI
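
To get a feel for what "activates only 37B of 671B parameters" means, here's a minimal NumPy sketch of top-k expert routing, the core move in a mixture-of-experts layer. The expert count, dimensions, and k below are toy values, not DeepSeek-V3's actual configuration:

```python
import numpy as np

def moe_forward(x, experts, router_w, k=2):
    """Route one token through only its top-k experts (sparse activation).

    x        : (d,) token hidden state
    experts  : list of (W1, W2) weight pairs, one FFN per expert
    router_w : (d, n_experts) gating weights
    """
    logits = x @ router_w                # score every expert for this token
    top = np.argsort(logits)[-k:]        # keep only the k highest-scoring
    gates = np.exp(logits[top])
    gates /= gates.sum()                 # softmax over the selected experts

    out = np.zeros_like(x)
    for gate, idx in zip(gates, top):
        W1, W2 = experts[idx]
        h = np.maximum(x @ W1, 0.0)      # expert FFN (ReLU for brevity)
        out += gate * (h @ W2)           # gate-weighted sum of k expert outputs
    return out

# Toy setup: 8 experts, 16-dim tokens, only 2 experts run per token.
rng = np.random.default_rng(0)
d, n_experts = 16, 8
experts = [(rng.normal(size=(d, 4 * d)), rng.normal(size=(4 * d, d)))
           for _ in range(n_experts)]
y = moe_forward(rng.normal(size=d), experts, rng.normal(size=(d, n_experts)))
```

Only the selected experts' weights ever touch the token, which is how a 671B-parameter model can run with roughly 37B parameters active per step.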

📊 The Review Bottleneck Gets Real

The numbers are stark this week. Qodo's State of AI Code Quality 2025 report reveals that AI boosts code output by 97.8% but increases review time by 91.1%. Review is becoming the primary bottleneck in software development. – @QodoAI

CodeRabbit's analysis found that AI-generated PRs show 75% more logic and correctness issues compared to human-written code, with security problems like improper password handling appearing up to twice as frequently. – @coderabbitai

Perhaps most telling: one team is abandoning AI coding tools entirely because reviewing AI-generated code is harder than writing it themselves. – @jhleath

This has sparked debate about whether code review AI tools might have a larger market than code generation tools. The logic is compelling: both AI-generated and human-written code need review, vibe coding is creating unsustainable review loads, and verification is fundamentally easier than generation. – @swyx

🛠️ New Tools and Approaches

Augment Code Review launches, claiming both high precision (65%) and high recall (55%) and an F-score 10 points above competitors. The tool uses a Context Engine to understand complex codebases, with free access for open-source projects. – @augmentcode
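
For reference, assuming the standard F1 (the announcement doesn't say which F-measure it uses), those two figures combine to:

```latex
F_1 = 2 \cdot \frac{P \cdot R}{P + R}
    = 2 \cdot \frac{0.65 \times 0.55}{0.65 + 0.55}
    \approx 0.596
```

so a 10-point lead would put competitors somewhere near 0.50 on the same benchmark.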

Kilo Code offers model-agnostic flexibility - Founded by ex-GitLab CEO Sid Sijbrandij, this open-source agent supports any LLM and features distinct modes for asking, architecting, coding, debugging, and orchestrating. The name reflects the vision of producing code "by the kilo." – AI Native Dev

Zencoder's Zenflow desktop app replaces ad-hoc prompting with structured workflows and multi-agent verification. Their "committee approach" cross-checks code between different LLMs, claiming a 20% improvement in correctness and double the feature shipping pace. – Freeform Dynamics
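
The committee idea fits in a few lines. Here's a sketch of the pattern; `generate`, `review`, and the model names are placeholders, since Zenflow's internals aren't public:

```python
from collections import Counter

def generate(model: str, task: str) -> str:
    """Placeholder for an LLM call; swap in a real client."""
    ...

def review(model: str, task: str, candidate: str) -> bool:
    """Placeholder: ask a model whether `candidate` actually solves `task`."""
    ...

def committee_solve(task: str, models: list[str]) -> str | None:
    """Each model drafts a solution; every *other* model votes on it.
    Return the first candidate a majority of reviewers accepts."""
    for author in models:
        candidate = generate(author, task)
        votes = Counter(review(m, task, candidate) for m in models if m != author)
        if votes[True] > votes[False]:
            return candidate
    return None  # no candidate survived cross-checking

# committee_solve("fix the off-by-one in pagination", ["gpt", "claude", "gemini"])
```

The bet behind cross-checking is that different models make different mistakes, so disagreement becomes a cheap correctness signal.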

Eclipse Theia 2025-11 introduces the first open-source native Claude Code IDE integration, with chat session persistence, slash commands, and new AI agents for GitHub and project information. – EclipseSource

🔮 Industry Perspectives

Steve Yegge predicts the IDE will be gone by 2026, replaced by AI-native tools where developers articulate intent rather than write code line by line. He claims developers are 9-12 months behind the AI curve. – StartupHub.ai

GitLab's vision of the "cognitive architect" sees developers managing hybrid teams of humans and AI agents, with a proactive "meta agent" functioning like a full team member you can assign tasks to. – The New Stack

Mark Pesce argues that vibe coding has moved from novelty to practical utility, using Google's Antigravity to build a macOS VRML browser in under a day. The key insight: skill at "steering" AI assistants may soon be the most sought-after quality in engineers. – The Register

Meanwhile, engineers at top tech companies report their entire job now consists of prompting Cursor or Claude Code with Opus 4.5 and sanity-checking the output. We've crossed some threshold where AI handles "most" software tasks. – @deedydas

⚠️ Security Concerns

A security analysis uncovered more than 30 critical vulnerabilities in tools like GitHub Copilot and Amazon Q, including command injection, path traversal, and information leakage. The core issue: these tools operate with elevated privileges within IDEs, accessing files, networks, and cloud resources. – WebProNews
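
Path traversal is worth spelling out, since it's exactly the failure mode an assistant with filesystem access invites: resolving a model-supplied path like `../../.ssh/id_rsa` without checking that it stays inside the workspace. A minimal guard looks like this (the workspace root is illustrative):

```python
from pathlib import Path

def safe_resolve(workspace: Path, requested: str) -> Path:
    """Resolve a model-supplied path and refuse anything that escapes
    the workspace root - the kind of check a path-traversal bug lacks."""
    target = (workspace / requested).resolve()
    if not target.is_relative_to(workspace.resolve()):
        raise PermissionError(f"path escapes workspace: {requested}")
    return target

ws = Path("/home/dev/project")
safe_resolve(ws, "src/main.py")        # ok
safe_resolve(ws, "../../.ssh/id_rsa")  # raises PermissionError
```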

🤖 Multi-Agent Systems Emerge

IBM's Project ALICE uses three specialized agents for incident analysis, code context, and code analysis to automate bug detection. Early testing shows 10-25% faster root-cause identification. – DevOps.com
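
The three-agent split is easy to picture as a pipeline, with each agent's output feeding the next. The stage functions below are hypothetical stand-ins; IBM hasn't published ALICE's actual interfaces:

```python
def analyze_incident(alert: dict) -> dict:
    """Hypothetical agent 1: summarize symptoms, timeline, blast radius."""
    ...

def gather_code_context(summary: dict) -> dict:
    """Hypothetical agent 2: find the services, commits, and files implicated."""
    ...

def analyze_code(context: dict) -> list[str]:
    """Hypothetical agent 3: inspect the code and rank likely root causes."""
    ...

def root_cause_candidates(alert: dict) -> list[str]:
    # Each specialized agent hands its findings to the next.
    return analyze_code(gather_code_context(analyze_incident(alert)))
```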

DeepSource's Autofixbot combines static analysis with an agent harness, achieving the top score on the OpenSSF CVE Benchmark by finding more issues with fewer false positives than LLM-only tools. – @ycombinator
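
The "static analysis plus agent harness" combination is also simple to sketch: let a deterministic analyzer find concrete issues, then hand each finding to an LLM for a targeted fix. Here `ruff` stands in for the analyzer and `propose_fix` for the LLM step; neither reflects DeepSource's actual stack:

```python
import json
import subprocess

def run_analyzer(path: str) -> list[dict]:
    """Run a JSON-emitting linter (ruff here, purely as an example)."""
    out = subprocess.run(
        ["ruff", "check", "--output-format", "json", path],
        capture_output=True, text=True,
    )
    return json.loads(out.stdout or "[]")

def propose_fix(finding: dict) -> str:
    """Placeholder for the LLM step: given one concrete finding
    (rule, file, line), ask a model for a targeted patch."""
    ...

def autofix(path: str) -> list[str]:
    # The agent only reasons about confirmed findings, which is
    # one way to keep false positives below LLM-only scanning.
    return [propose_fix(f) for f in run_analyzer(path)]
```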

📝 Quick Hits

Made with ❤️ by Data Drift Press

Have thoughts on the review bottleneck? Tried any of these new tools? Hit reply - we'd love to hear from you!