Claude Code source code leaked by Anthropic
Anthropic launched Code Review in Claude Code, a multi-agent system that automatically analyzes AI-generated code, flags logic errors, and helps enterprise developers manage the growing volume of code produced with AI.
Administrators on Team and Enterprise plans can enable Code Review through Claude Code settings and a GitHub app installation. Once activated, reviews run automatically on new pull requests with no additional developer configuration, which is one reason usage caps and repository-level controls become important for cost management.
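Anthropic has not published the settings schema in this digest, so the keys below are purely hypothetical, but a repository-scoped configuration with a usage cap might look something like this:

```json
{
  "codeReview": {
    "enabled": true,
    "repositories": ["org/service-api", "org/web-frontend"],
    "maxReviewsPerDay": 50
  }
}
```

The point of such a sketch is the shape of the control surface: an org-level on/off switch, an allowlist of repositories, and a cap that bounds per-day review spend.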
OpenAI published a Codex plugin on March 30 that installs directly inside Anthropic’s Claude Code, letting developers run code reviews and delegate tasks to Codex without leaving their existing workflow.
The volume of AI-generated code shipping into production is growing rapidly, outpacing the capacity of human software engineers to review and QA it. At the same time, AI agents are generating code with increasing autonomy.
Swapping Claude Code for Codex turned out to be an easy win, with faster results, lower token usage, and a smoother workflow.
Anthropic on Monday released Code Review, a multi-agent code review system built into Claude Code that dispatches teams of AI agents to scrutinize every pull request for bugs that human reviewers routinely miss. The feature, now available in research ...
Claude Code’s March 2026 update adds Computer Use for controlling Mac apps without APIs and a 1M-token context window for huge projects.