Ralph is a long-running AI agent loop. Ralph automates software development tasks by iteratively working through a task list until completion.
This is an implementation that actually works, built around a hackable script you can configure for your environment and your favorite agentic AI CLI. By default it uses Claude Code in a Docker sandbox.
- Getting Started
- Run the loop
- How It Works
- How Is This Different from Other Ralphs?
- Steering the Agent
- Features
- Support
- Structure
- Skills
- Reference
- License
I recommend using a CLI to bootstrap your project with the necessary tools and dependencies, e.g.:
npx create-vite@latest src --template react-ts
# or
npx create-next-app@latest src

If you must start from a blank slate, which is not recommended, see Starting from scratch.
Run this in your project's directory to install Ralph.
npx @pageai/ralph-loop

Use the prd-creator skill to generate a PRD from your requirements.
Open up Claude Code and prompt it with your requirements. Like so:
Use the prd-creator skill to help me create a PRD and task list for the below requirements.
An app is already set up with Next.js, Tailwind CSS and TypeScript.
Requirements:
- A SaaS product that helps users manage their finances.
- Target audience: Small business owners and freelancers.
- Core features:
- Track income and expenses.
- Create and send invoices.
- Track payments and receipts.
- Generate reports and insights.
- Connect to bank accounts and credit cards.
- Connect to accounting software.
- Connect to payment processors.
- Use the shadcn/ui library for components.
- Integrate with Stripe for payments.
- Use Supabase for database.
- You can find env variables in the .env.example file: SUPABASE_URL, SUPABASE_PUBLISHABLE_KEY, STRIPE_SECRET_KEY, etc. are available in the runtime.
// etc.
Pro tips:
- mention libraries and frameworks you want to use
- mention env variables, e.g. for database, 3rd party API keys, etc. Store them in a .env file that you add to .gitignore
- describe user flows and journeys
- be as descriptive as possible
- it's fine to write in your own language
Then follow the skill's instructions and verify the PRD, then the tasks.
It is highly recommended that you review individual task requirements before starting the loop. Review EACH TASK INDIVIDUALLY.
Authenticate inside the Docker sandbox before running Ralph. Run:
docker sandbox run claude .

And follow the instructions to log in to Claude Code.
👉 Answer "Yes" to Bypass Permissions mode; being able to do that safely is exactly why you are using the Docker sandbox.
If you want to use a different agentic CLI, see Running with a different agentic CLI.
./ralph.sh -n 50 # Run Ralph Loop with 50 iterations

# Run the agent loop (default: 10 iterations)
./ralph.sh
# Run with custom iteration limit
./ralph.sh 5
./ralph.sh -n 5
./ralph.sh --max-iterations 5
# Run exactly one iteration
./ralph.sh --once
# Show help
./ralph.sh --help

NB: you might need to run chmod +x ralph.sh to make the script executable.
⚠️ The default "mode" is "implementation". Depending on your use case, you might want to change .agent/PROMPT.md to a different mode, e.g. "refactor", "review", or "test".
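If the mode shows up as a literal word in the prompt, a one-liner like the following could work; this is a hypothetical sketch that assumes "implementation" appears verbatim in .agent/PROMPT.md, so open the file and check before scripting it:

```bash
# Hypothetical: assumes the mode is the literal word "implementation"
# in .agent/PROMPT.md -- verify the file's wording first.
sed -i.bak 's/implementation/refactor/' .agent/PROMPT.md
```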
This script assumes the following are installed:
- Playwright for e2e testing
- Vitest for unit testing
- TypeScript for type checking
- ESLint for linting
- Prettier for formatting
If you'd like to use a different language, testing framework, etc., adjust .agent/PROMPT.md to reflect your setup, including server ports and startup commands.
👉 The loop is controlled by this prompt, which will be sent to the agent each iteration.
Each iteration, Ralph will:
- Find the highest-priority incomplete task from .agent/tasks.json
- Work through the task steps defined in .agent/tasks/TASK-{ID}.json (sketched below)
- Run tests, linting, and type checking
- Update task status and commit changes
- Repeat until all tasks pass or max iterations reached
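For orientation, here is a rough sketch of the two file shapes involved; every field name below is an assumption for illustration, since the real schema is whatever the prd-creator skill generates.

```bash
# Illustrative shapes only -- the canonical schema comes from prd-creator.
cat <<'EOF'
.agent/tasks.json (lookup table):
  { "tasks": [ { "id": "TASK-001", "title": "Track income and expenses",
                 "priority": 1, "status": "pending" } ] }

.agent/tasks/TASK-001.json (detailed spec):
  { "id": "TASK-001", "steps": [ "Create transactions table in Supabase",
                                 "Build the entry form",
                                 "Add unit and e2e tests" ] }
EOF
```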
This was kept hackable so you can make it your own.
The script follows the original concepts of the Ralph Wiggum Loop, working with fresh contexts and providing clear, verifiable feedback.
It also works generically with any task set.
Besides that:
- it allows you to dump unstructured requirements and have the agent create a PRD and task list for you
- it uses a task lookup table with individually detailed steps, which scales better once you have hundreds of tasks
- it's sandboxed and more secure
- it shows progress and stats so you can keep an eye on what's been done
- it instructs the agent to write and run automated tests and capture screenshots per task
- it provides observability and traceability of the agent's work, showing a stream of output and capturing full historical logs per iteration
In some cases, you might notice the agent is having trouble, has slowed down, or is struggling to overcome a blocker.
While the loop is running, you can edit the .agent/STEERING.md file to add critical work that needs to be done before the loop can continue.
The agent will check this file each iteration and if it finds any critical work, it will skip tasks and complete the critical work first.
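For example (the heading and wording below are illustrative; the agent simply reads the file):

```bash
# Append critical work to the steering file while the loop is running.
cat >> .agent/STEERING.md <<'EOF'
## Critical
- The dev server moved from port 3000 to 3001. Update the e2e baseURL and
  startup commands before picking up any further tasks.
EOF
```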
- PRD generation - Creates a PRD and task list from requirements
- Task lookup table generation - Creates a task lookup table from the PRD
- Task breakdown + step generation - Breaks down each task into manageable steps
- Iteration tracking - Shows progress through iterations with timing
- Stream preview - Shows live output from the Agent
- Step detection - Identifies current activity (Thinking, Implementing, Testing, etc.)
- Screenshot capture - Captures a screenshot of the current screen
- Notifications - Alerts when human input is needed
- History logging - Saves clean output from each iteration
- Timing - Shows timing metrics for each iteration and total time
The ralph.sh script is designed to be hackable.
It is configured to use Claude Code in a Docker sandbox by default, but with a one-line change you can point it at any other agentic AI CLI.
To find the main command loop, search ralph.sh for the # This is the main command loop. comment.
NB: skills are supported by all major agentic AI CLIs via symlinks.
Ralph uses semantic tags to communicate status:
- <promise>COMPLETE</promise> - All tasks finished successfully
- <promise>BLOCKED:reason</promise> - Agent needs human help
- <promise>DECIDE:question</promise> - Agent needs a decision
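As a minimal sketch of how such a tag can be detected in captured output (the log path here is hypothetical; ralph.sh's actual parsing may differ):

```bash
# Illustrative detection of a promise tag in an iteration log.
LOG=".agent/history/latest.log"   # hypothetical path
if grep -q '<promise>COMPLETE</promise>' "$LOG"; then
  echo "All tasks finished successfully"
elif grep -qE '<promise>(BLOCKED|DECIDE):' "$LOG"; then
  echo "Human input needed -- see the tag in $LOG"
fi
```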
| Code | Meaning |
|---|---|
| 0 | COMPLETE - All tasks finished |
| 1 | MAX_ITERATIONS - Reached limit |
| 2 | BLOCKED - Needs human help |
| 3 | DECIDE - Needs human decision |
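If you drive ralph.sh from another script, mapping these codes to follow-up actions could look like this sketch:

```bash
# Map ralph.sh exit codes (table above) to follow-up actions.
./ralph.sh -n 50
case $? in
  0) echo "COMPLETE: all tasks finished" ;;
  1) echo "MAX_ITERATIONS: re-run to keep going" ;;
  2) echo "BLOCKED: check the latest iteration log in .agent/history/" ;;
  3) echo "DECIDE: answer the agent's question, then re-run" ;;
esac
```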
.agent/
├── PROMPT.md # Prompt sent to Agent each iteration
├── tasks.json # Task lookup table (required)
├── tasks/ # Individual task specs (TASK-{ID}.json)
├── prd/
│ ├── PRD.md # Product requirements document
│ └── SUMMARY.md # Short project overview sent to Agent each iteration
├── logs/
│ └── LOG.md # Progress log (auto-created)
├── history/ # Iteration output logs
└── skills/ # Shared skills (source of truth)
Skills are reusable agent capabilities that provide specialized knowledge and workflows. The canonical source is .agent/skills/, which is symlinked to multiple agent tool directories for compatibility.
| Skill | Description |
|---|---|
| component-refactoring | Patterns for splitting and refactoring React components |
| e2e-tester | End-to-end testing workflows |
| frontend-code-review | Code quality and performance review guidelines |
| frontend-testing | Unit and integration testing patterns |
| prd-creator | Create PRDs and task breakdowns for Ralph |
| skill-creator | Create new skills |
| vercel-react-best-practices | React/Next.js performance patterns |
| web-design-guidelines | UI/UX design principles |
Skills are symlinked from .agent/skills/ to multiple locations for cross-tool compatibility:
# Source of truth
.agent/skills/
├── component-refactoring/
├── e2e-tester/
├── frontend-code-review/
├── frontend-testing/
├── prd-creator/
├── skill-creator/
├── vercel-react-best-practices/
└── web-design-guidelines/
# Symlinks -> .agent/skills/*
.agents/skills/
.claude/skills/
.codex/skills/
.cursor/skills/
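If your tool of choice isn't covered, the same pattern should extend to it; .mytool below is hypothetical, and it's worth checking how the existing links are laid out first (ls -l .claude/skills):

```bash
# Hypothetical: link a new tool's skills dir back to the source of truth.
mkdir -p .mytool
ln -s ../.agent/skills .mytool/skills
ls -l .mytool/skills   # should resolve to .agent/skills
```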
If you are using Playwright, here is a recommended configuration:
import { defineConfig, devices } from '@playwright/test';
/**
* See https://playwright.dev/docs/test-configuration.
*/
export default defineConfig({
testDir: './tests',
fullyParallel: true,
globalTimeout: 30 * 60 * 1000,
forbidOnly: !!process.env.CI,
retries: process.env.CI ? 2 : 1,
workers: process.env.CI ? 3 : 6,
reporter: 'html',
use: {
baseURL: 'http://localhost:3000',
trace: 'on-first-retry',
},
// NB: only chromium will run in Docker (arm64).
projects: [
{
name: 'chromium',
use: { ...devices['Desktop Chrome'] },
}
],
});

If you are using Vitest, here is a recommended configuration:
import { defineConfig } from 'vitest/config'
import react from '@vitejs/plugin-react'
import path from 'path'
export default defineConfig({
plugins: [react()],
test: {
environment: 'jsdom',
globals: true,
setupFiles: ['./vitest.setup.ts'],
include: ['**/*.test.{ts,tsx}'],
exclude: ['node_modules', '.next', 'tests'],
},
resolve: {
alias: {
'@': path.resolve(__dirname, './'),
},
},
})

And the vitest.setup.ts file referenced above:
import '@testing-library/jest-dom/vitest'
import { vi } from 'vitest'
import React from 'react'
// Mock next/image
vi.mock('next/image', () => ({
default: ({ src, alt, ...props }: { src: string; alt: string }) => {
return React.createElement('img', { src, alt, ...props })
},
}))
// Mock next/link
vi.mock('next/link', () => ({
default: ({
children,
href,
...props
}: {
children: React.ReactNode
href: string
}) => {
return React.createElement('a', { href, ...props }, children)
},
}))

If you want to use a different agentic CLI, you can adjust the ralph.sh script to reflect your CLI of choice.
To find the main command loop, search ralph.sh for the # This is the main command loop. comment.
Replace docker sandbox run claude . -- with your favorite CLI. Remember to also update the options after the --.
docker sandbox run codex . # for Codex CLI
docker sandbox run gemini . # for Gemini CLI
Docker currently supports: claude, codex, gemini, cagent, kiro.
See more in Docker's docs.
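As an illustrative before/after (the exact invocation in ralph.sh may differ; claude's -p flag and codex's exec subcommand are their non-interactive modes, but verify against your installed versions):

```bash
# Illustrative -- the real line lives near "# This is the main command loop."
docker sandbox run claude . -- -p "$(cat .agent/PROMPT.md)"   # default (Claude Code)
docker sandbox run codex . -- exec "$(cat .agent/PROMPT.md)"  # e.g. Codex CLI
```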
For the agent to actually verify its implementation, and for the loop to work at all, you need a way to verify its output.
At a minimum, that means an end-to-end test framework and a unit test framework.
For example, you can use the following commands to install Playwright and Vitest:
npm i @playwright/test vitest jsdom typescript eslint prettier -D
# If using React, it's also recommended to install:
npm i @vitejs/plugin-react @testing-library/dom @testing-library/jest-dom @testing-library/react @testing-library/user-event -D

It is recommended that you add skills for your specific language and framework. See skills.sh to discover existing skills.
MIT
