When you're running multiple AI coding agents across terminals, you lose track of what each one is doing. aimux multiplexes them into a single dashboard with full visibility, trace inspection, annotation for evaluation, and now infrastructure support for scaling agents out to Kubernetes.
Your AI coding agent makes the same mistakes over and over. What if it could learn from corrections, track which skills cause failures, and tell you whether it already fixed the problem? I built a closed-loop learning system for my coding agent, inspired by a meta-learning paper, and here's how it works.
An autonomous optimization loop for AI agents. Covers the autoresearch pattern, the improve.md format, goal-aware test generation, guard metrics, 10 domain templates, and real-world results on a RAG search engine.
Most CLAUDE.md files are bloated with instructions the model already knows, documentation meant for humans, and duplicate rules that compete for the model's limited attention. Here's how to fix yours.
A terminal multiplexer for AI coding agents. Covers the visibility problem with multi-agent workflows, agent discovery, split-view tracing, annotations, OTEL integration, and MLflow export for evaluation.