Tagged: productivity
38 posts
Teaching Your AI Agent to Learn From Its Mistakes
Your AI coding agent makes the same mistakes over and over. What if it could learn from corrections, track which skills cause failures, and tell you whether it already fixed the problem? I built a closed-loop learning system for my coding agent, inspired by a meta-learning paper, and here's how it works.
From Autoresearch to Autoimprove: Generalizing the Agentic Experiment Loop
Your CLAUDE.md Is Probably Too Long
Most CLAUDE.md files are bloated with instructions the model already knows, documentation meant for humans, and duplicate rules that compete for limited attention. Here's how to fix yours.
Best Practices for Claude Code
CLAUDE.md Best Practices from Prompt Learning
How Claude remembers your project
How to Write a Good CLAUDE.md File
Writing a good CLAUDE.md
Agentic Engineering Patterns
AI Agents and Code Review
AI Doesn't Reduce Work - It Intensifies It
Making Coding Agents Reliable
METR Study: AI Tools and Developer Speed
Some Simple Economics of AGI (arXiv)
Stack Overflow 2025 Developer Survey - AI Section
State of AI vs Human Code Generation Report
The 80% Problem in Agentic Coding
The AI Coding Trust Gap
The AI Productivity Paradox Research Report
The New Asymmetry: When Generation Outpaces Verification
Transitioning to the Verification Economy
Using Linters to Direct Agents
Why Your AI Coding Assistant Keeps Doing It Wrong
The Verification Bottleneck: Why AI's Real Cost Is Human Attention
AI scales execution toward zero cost, but verifying its output stays biologically bounded. The bottleneck was never intelligence. It's human verification bandwidth.
Shrinking the Verification Gap: Practical Patterns for AI-Assisted Development
If AI scales execution and verification is the bottleneck, the winning move is to make verification cheaper. Here are the patterns that actually work.