Hey readers! 👋

The Pragmatic Engineer just dropped its 2026 AI tooling survey, and the results tell a fascinating story: Claude Code went from zero to the most-used coding tool in just eight months, while OpenAI's Codex is quietly tripling its user base. Meanwhile, the ecosystem around these tools - code review agents, security layers, and workflow plugins - is maturing fast. Let's dig into what 906 engineers are actually using and what it means for the rest of us.

📊 The Big Survey: Claude Code Takes the Crown

AI Tooling for Software Engineers in 2026 is the must-read piece this week. The Pragmatic Engineer surveyed 906 engineers and the headline finding is striking: Claude Code has rocketed to the #1 spot, overtaking both GitHub Copilot and Cursor in just eight months. - The Pragmatic Engineer

"Claude Code has gone from zero to be the #1 tool in only eight months."

A few numbers that stand out: 95% of respondents now use AI tools at least weekly, and 75% rely on them for half or more of their engineering work. AI agents are no longer experimental either, with 55% of respondents reporting regular agent usage, especially among staff-level engineers. Perhaps most interesting is the company-size split: small startups gravitate toward Claude Code, while large enterprises default to Copilot, likely driven by procurement processes and enterprise sales relationships rather than pure tool quality.

🚀 Codex's Quiet Surge

While Claude Code dominates the survey, OpenAI's Codex is having its own moment. OpenAI now has 1.6M weekly Codex users, triple what it had on January 1st. That's a massive growth curve that shouldn't be overlooked. - @ZeffMax

GitHub is leaning into this dual-model world too. GitHub Copilot Pro+ and Enterprise now support both Claude and Codex, letting users choose between models depending on whether they need speed or depth. The platform has evolved well beyond autocomplete, adding Agents for orchestrating workflows, Spaces for reusable prompts, and Spark for building apps with natural language. The fact that GitHub is offering both Anthropic and OpenAI models signals that even the platform providers see this as a multi-model future.

🔍 Code Review Gets Its Own AI Arms Race

One of the clearest trends this week is the explosion of AI-powered code review tools. This makes sense: if AI is writing more code, you need AI helping to review it too.

CodeRabbit Skills for AI agents now give coding agents the ability to perform structured reviews, detecting bugs, security risks, and anti-patterns without leaving the development workflow. The tool works across 30+ agents and supports iterative fix-and-review cycles. - CodeRabbit

Cursor's Bugbot Autofix is now out of beta, automatically identifying and resolving issues in pull requests. - StartupHub.ai

Baz topped the new independent Code Review Bench, beating tools from OpenAI, Anthropic, and Google on precision, which measures how often developers actually act on review comments. - ynet Global

"If a tool generates too much noise, developers ignore it. If it is consistently accurate, it becomes part of the workflow."

And for the DIY crowd, one developer built a lightweight GitHub Action called claude-pr-reviewer that costs roughly $0.003-$0.02 per review and already caught an in-memory state bug they'd missed. - dev.to
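For context on what such a DIY reviewer involves, here's a minimal Python sketch of the core review step. Everything here is illustrative: the prompt wording and the `review_diff` / `call_model` names are our own, not the Action's actual code, and the model call is injected so the logic stays testable offline (in CI you'd pass a real client, e.g. Anthropic's SDK).

```python
# Minimal sketch of a diff-review step like the claude-pr-reviewer Action.
# The model call is injected so the review logic is testable without a
# network call; swap in a real API client when wiring this into CI.

REVIEW_PROMPT = """You are a strict code reviewer. For the unified diff below,
list concrete bugs, security risks, and anti-patterns. Reply "LGTM" if none.

{diff}"""

def review_diff(diff: str, call_model) -> dict:
    """Send one PR diff to a model and normalize the result.

    call_model: callable taking a prompt string and returning the model's
    text reply (hypothetical signature; adapt to your SDK of choice).
    """
    reply = call_model(REVIEW_PROMPT.format(diff=diff)).strip()
    return {"approved": reply.upper().startswith("LGTM"), "comments": reply}

if __name__ == "__main__":
    fake = lambda prompt: "LGTM"  # stand-in for a real API call
    print(review_diff("--- a/app.py\n+++ b/app.py", fake))
```

The low per-review cost quoted above comes from sending only the diff, not the whole repo, so the prompt stays small.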

🛡️ Security and Quality: The Necessary Counterweight

With AI writing more code faster, the security conversation is intensifying. Cisco open-sourced CodeGuard, a security skills layer that raised secure coding success rates from 47% to 84% in real-world tests. The surprisingly simple approach lets agents self-correct their own insecure code. - @tessl_io
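The generate-scan-regenerate loop behind that kind of self-correction can be sketched in a few lines. This is a hypothetical illustration of the pattern, not Cisco's implementation: `generate` and `scan` are injected stand-ins for a coding model and a security scanner.

```python
# Sketch of an agent self-correction loop: generate code, scan it for
# security findings, and feed the findings back until the scan is clean
# (or we hit the retry budget). Both callables are caller-supplied stubs.
def self_correct(task: str, generate, scan, max_rounds: int = 3):
    code = generate(task, feedback=None)
    for _ in range(max_rounds):
        findings = scan(code)
        if not findings:
            return code, True   # clean: scanner found nothing
        code = generate(task, feedback=findings)
    return code, False          # gave up after max_rounds
```

The design choice worth noting: the scanner's findings become part of the next prompt, which is plausibly why even a "surprisingly simple" layer can lift secure-coding success rates so much.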

AI-generated code is pushing DevSecOps to machine speed, with 53% of organizations now deploying code weekly and 85% of respondents saying security hinders release velocity. The consensus: embed governance controls at every stage, from authoring to runtime. - Computer Weekly

"Agents write code, QA processes try to catch mistakes, and everything happens at a volume and pace that challenges traditional QA and governance."

Google Conductor AI expanded with automated reviews that scan codebases against project specifications and generate compliance reports. As one expert put it: "An AI coding CLI without automated reviews is like a chainsaw without an 'off' button." - The New Stack

🧰 Tools and Workflows Worth Watching

  • The Superpowers plugin for Claude Code enforces strict TDD, git-based checkpoints, and systematic debugging, preventing the AI from skipping steps. It uses sub-agents in isolated git worktrees for quality control. - AI LABS

  • Tessl lets teams evaluate and fine-tune AI agent skills to improve code quality, free and with no signup required. - Alan Pope

  • Gemini Code Assist now offers a free tier with 6,000 code requests per day, powered by a 1M-token context window. Enterprise plans add local codebase grounding and IP-citation features. - Google Cloud

  • Qwen3.5-35B-A3B is turning heads in the local LLM community, running at 100+ tokens/second on a single RTX 3090 and completing complex coding tasks in about 5 minutes. - jslominski
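The isolated-worktree trick the Superpowers plugin relies on is worth understanding on its own: each sub-agent works in a separate `git worktree`, so a failed experiment never dirties the main checkout. A rough sketch follows; the `git worktree` subcommands are real, but the wrapper and naming scheme are invented for illustration.

```python
# Sketch of the isolated-worktree pattern: each agent task gets its own
# worktree on its own branch, created from (and removed without touching)
# the main checkout. worktree_commands() is a dry run; run() executes.
import subprocess

def worktree_commands(task_id: str, base_branch: str = "main") -> list[list[str]]:
    """Return the git commands that create and later remove an isolated
    worktree for one agent task (nothing is executed here)."""
    path = f"../wt-{task_id}"
    branch = f"agent/{task_id}"
    return [
        ["git", "worktree", "add", "-b", branch, path, base_branch],
        ["git", "worktree", "remove", "--force", path],
    ]

def run(cmds):
    for cmd in cmds:
        subprocess.run(cmd, check=True)  # execute inside a real repo
```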

🤔 The Bigger Picture

No single AI assistant dominates every stage of development, and the smartest teams are assembling layered stacks: IDE assistants for authoring, repository agents for refactoring, security scanners for pre-merge, and review platforms as a final gate. - Ayelet Slasky
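That layered-stack idea is essentially a sequence of merge gates. A toy sketch, with placeholder checks standing in for the real tools at each layer:

```python
# Toy sketch of a layered review stack: each (name, check) gate inspects
# the change in order and can block the merge. Gate names mirror the
# stages in the text; the checks are placeholders, not tool integrations.
def run_gates(change: str, gates) -> tuple[bool, list[str]]:
    """Run each gate in order; stop and report at the first failure."""
    passed = []
    for name, check in gates:
        if not check(change):
            return False, passed + [f"blocked by {name}"]
        passed.append(name)
    return True, passed

GATES = [
    ("ide-lint",        lambda c: "TODO" not in c),
    ("security-scan",   lambda c: "eval(" not in c),
    ("review-platform", lambda c: len(c) > 0),
]
```

The point of the layering is that each gate is cheap to run and cheap to swap, which matches the survey's picture of teams mixing vendors rather than betting on one.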

"The teams achieving consistent results in 2026 aren't trying to replace their workflows with AI; they're defining where each tool fits within them."

The Pragmatic Engineer survey confirms what many of us are feeling: AI coding tools aren't optional anymore; they're infrastructure. But the interesting competition isn't just between Claude Code and Codex. It's between the ecosystems forming around them, from review bots to security layers to workflow enforcement plugins. The winners will be the tools that help teams ship confidently, not just quickly.

Speaking of AI agents doing interesting things beyond code, if you're curious about what happens when you let AI agents loose in a game world, SpaceMolt is a free MMO built specifically for AI agents to explore, trade, and battle across a space-themed universe. Worth a look if you're thinking about agent capabilities beyond the IDE.

Until next week, keep shipping (and reviewing) thoughtfully. 🚀

Made with ❤️ by Data Drift Press. Hit reply with questions, comments, or feedback - we read every one!
