Implementing Autoresearch: A Case Study in Automating Legacy Research Code with Claude Code
This article explores a practical implementation of Andrej Karpathy's 'Autoresearch' concept, applied to a legacy eCLIP research project. The author describes a workflow in which an LLM agent, specifically Claude Code, iteratively optimizes a training script within a constrained optimization loop. Using a structured 'hypothesize-edit-train-evaluate' cycle, the agent performs hyperparameter tuning and architectural modifications. For safety, the process is containerized with restricted network and execution permissions. The experiment highlights the potential of AI agents to breathe new life into old research code through rapid iteration, though the author notes that the datasets had to be adapted for modern testing. The project demonstrates a shift toward autonomous experimentation in which the researcher provides the framework and the AI executes the discovery process.
Key Takeaways
- Autoresearch Framework: The system operates as a constrained optimization loop where an LLM agent modifies a single training file to improve evaluation metrics.
- Structured Iteration: The process follows a tight cycle of hypothesize, edit, train, evaluate, and then commit or revert based on performance.
- Security through Sandboxing: To prevent arbitrary code execution, the training loop is containerized with no network access and restricted file permissions.
- Phased Exploration: Research tasks are divided into phases, ranging from basic hyperparameter tuning to autonomous 'moonshot' ideas using web access.
- Efficiency Constraints: Experiments are capped at roughly five minutes per run to encourage rapid iteration and avoid overfitting to noise in the results.
In-Depth Analysis
The Mechanics of Autonomous Research
The core of this implementation is the 'Autoresearch' loop, a concept inspired by Andrej Karpathy. The author uses an LLM agent to work a specific research problem by iteratively modifying a train.py file. The process is guided by a program.md file containing instructions and a scratchpad.md file that serves as the agent's working memory, documenting its reasoning and experiment history. The workflow is tightly iterative: the agent forms a hypothesis, edits the code, runs the training script, and evaluates the result. If the change improves the metric, it is committed; otherwise, it is reverted.
Phased Experimentation and Web Integration
The research journey is structured into distinct phases to keep the agent's exploration under control. Initially, the agent focuses on obvious hyperparameter tuning before moving on to architectural changes. In the final, more advanced phase, the agent is given 'moonshot' objectives and granted web access, allowing it to read academic papers and integrate new ideas into the training loop. By keeping individual runs short, at roughly five minutes of wall-clock time, the system prioritizes rapid feedback and keeps the optimization process from overfitting to noise in the evaluation results.
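A phase structure like this could be encoded as plain configuration data. The field names and phase labels below are illustrative assumptions, not the author's actual configuration:

```python
# Illustrative phase definitions: what the agent may change in each phase,
# and whether it may consult the web for ideas. All names are hypothetical.
PHASES = [
    {"name": "hyperparameter tuning", "edits": "hyperparameters only", "web_access": False},
    {"name": "architectural changes", "edits": "model architecture", "web_access": False},
    {"name": "moonshots", "edits": "anything in train.py", "web_access": True},
]


def allowed_web_access(phase_name: str) -> bool:
    """Web access is granted only in the final, moonshot phase."""
    return next(p["web_access"] for p in PHASES if p["name"] == phase_name)
```

Keeping the phase gates in data rather than prose makes it easy to verify that earlier phases really do run with the web switched off.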
Security and Environment Configuration
A significant portion of the project focuses on the safety of running an autonomous agent. The author implemented a strict sandbox around a run.sh orchestrator: Claude Code is restricted to editing only the necessary files and executing that orchestration script. To protect the host workstation, the training loop is containerized, and capabilities such as pip installs, network access, and git push are disabled. The agent is free to experiment with the code's logic, but it cannot compromise the system or leak data.
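One way to express these restrictions is through container flags. The sketch below builds a docker run invocation with networking disabled and a read-only filesystem apart from the project mount; the image name, paths, and helper are hypothetical, not the author's actual run.sh.

```python
def sandboxed_train_command(project_dir: str, image: str = "autoresearch-env") -> list[str]:
    """Build a docker command that isolates the training run.

    --network none blocks all network access (no pip installs, no git push
    leaving the host), and --read-only plus a single writable mount limits
    what the container can touch.
    """
    return [
        "docker", "run", "--rm",
        "--network", "none",           # no network: nothing fetched, nothing leaked
        "--read-only",                 # container filesystem is immutable...
        "-v", f"{project_dir}:/work",  # ...except the mounted project directory
        "-w", "/work",
        image,
        "bash", "run.sh",              # the orchestrator is the only entry point
    ]
```

Constructing the command in one place makes the security posture auditable: a reviewer can check the flag list rather than trace the agent's behavior.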
Industry Impact
This experiment signifies a growing trend in the AI industry toward 'Agentic Research,' where the role of the human researcher shifts from manual coding to system orchestration. By automating the trial-and-error phase of machine learning, tools like Claude Code can significantly accelerate the pace of discovery. The use of sandboxing and constrained loops addresses primary concerns regarding the reliability and safety of autonomous agents. Furthermore, the ability to apply these methods to legacy code suggests a future where old research can be systematically updated and optimized with minimal human intervention.
Frequently Asked Questions
Question: What is the primary goal of the Autoresearch loop?
The goal is to iteratively improve a specific evaluation metric by allowing an LLM agent to modify training code within a controlled, repeatable cycle of experimentation.
Question: How does the author ensure the AI agent doesn't perform harmful actions?
The author uses containerization to isolate the training environment, removes network access, and restricts the agent's permissions so it can only edit specific files and run a predefined orchestration script.
Question: Why are the experiment runs limited to five minutes?
Short run times are enforced to keep the agent iterating quickly and to prevent the optimization process from overfitting to noise in the experimental results.

