# CLI Reference
Complete reference for all PawMate CLI commands.
## Installation
Install the CLI globally:
```bash
npm install -g pawmate-ai-challenge
```

Or use with npx (no installation):
```bash
npx pawmate-ai-challenge init --profile model-a-rest --tool "YourAI"
```

## Global Options
Available for all commands:
| Option | Description |
|---|---|
| `-V, --version` | Output the version number |
| `-h, --help` | Display help for command |
## Commands
### pawmate init
Initialize a new benchmark run.
**Usage:**
```bash
pawmate init --profile <name> --tool <name> [options]
```

**Required Options:**
| Option | Description |
|---|---|
| `--profile <name>` | Profile to use (see profiles below) |
| `--tool <name>` | Name of the tool under test |
**Optional Options:**
| Option | Description | Default |
|---|---|---|
| `--tool-ver <version>` | Tool version/build ID | None |
| `--spec-ver <version>` | Frozen spec version | From `SPEC_VERSION` file |
| `--run-dir <path>` | Custom run directory path | Auto-generated |
| `--hidden` | Create hidden directory | Visible directory |
**Profiles:**

- `model-a-rest` - Model A (Minimum) + REST
- `model-a-graphql` - Model A (Minimum) + GraphQL
- `model-b-rest` - Model B (Full) + REST
- `model-b-graphql` - Model B (Full) + GraphQL
**Examples:**

```bash
# Basic usage
pawmate init --profile model-a-rest --tool "Cursor"

# With version
pawmate init --profile model-a-rest --tool "Cursor" --tool-ver "v0.43"

# Hidden directory
pawmate init --profile model-a-rest --tool "Cursor" --hidden

# Custom directory
pawmate init --profile model-a-rest --tool "Cursor" --run-dir ~/my-run

# Specific spec version
pawmate init --profile model-a-rest --tool "Cursor" --spec-ver v2.7.0
```

**What it creates:**
```
pawmate-run-<timestamp>/    (or .pawmate-run-<timestamp>/ if --hidden)
├── start_build_api_prompt.txt
├── start_build_ui_prompt.txt
├── run.config
├── PawMate/
└── benchmark/
    └── result_submission_instructions.md
```

**Output:**
Displays run folder path, workspace path, and generated prompt file locations.
### pawmate submit
Submit benchmark results.
**Usage:**

```bash
pawmate submit <result-file> [options]
```

**Arguments:**
| Argument | Description |
|---|---|
| `<result-file>` | Path to the result JSON file |
**Optional Options:**
| Option | Description |
|---|---|
| `--github-token <token>` | GitHub personal access token |
| `--email-only` | Skip GitHub issue creation |
**Examples:**

```bash
# Email submission (default)
pawmate submit pawmate-run-*/benchmark/result.json

# Email + GitHub issue
export GITHUB_TOKEN=ghp_xxxx
pawmate submit pawmate-run-*/benchmark/result.json

# With token as flag
pawmate submit result.json --github-token ghp_xxxx

# Email only (skip GitHub)
pawmate submit result.json --email-only
```

**What it does:**
- Validates result file format
- Prompts for attribution (optional)
- Opens email client with pre-filled content
- Creates GitHub issue (if token provided)
**Environment Variables:**
| Variable | Description |
|---|---|
| `GITHUB_TOKEN` | GitHub personal access token for issue creation |
## Exit Codes
| Code | Meaning |
|---|---|
| 0 | Success |
| 1 | Error (validation failed, command failed, etc.) |
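Because the CLI uses conventional exit codes, failures compose with standard shell error handling in scripts. A minimal sketch (`false` stands in for a failing `pawmate submit` call, since a real submission needs a completed run):

```shell
#!/bin/sh
# `false` stands in for a failing `pawmate submit <result-file>` call.
if false; then
  echo "submission accepted"
else
  # $? still holds the exit status of the tested command (1 = failure)
  echo "submission failed with exit code $?"
fi
```

In CI, relying on the shell's `set -e` achieves the same effect: any non-zero exit fails the job.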
## Configuration Files
### run.config
Generated by `pawmate init` in each run directory:
```ini
spec_version=v2.7.0
spec_root=(bundled with CLI)
tool=Cursor
tool_ver=v0.43
model=A
api_type=REST
workspace=/path/to/pawmate-run-<timestamp>/PawMate
```

### .npmrc (created by AI in backend/)
Sandbox-friendly npm configuration:
```ini
cache=.npm-cache
audit=false
fund=false
prefer-offline=true
```

## File Locations
### Generated Prompts
| File | Location | Purpose |
|---|---|---|
| `start_build_api_prompt.txt` | Run directory root | API/backend build prompt |
| `start_build_ui_prompt.txt` | Run directory root | UI/frontend build prompt |
### Run Metadata
| File | Location | Purpose |
|---|---|---|
| `run.config` | Run directory root | Run configuration |
### Workspace
| Directory | Location | Purpose |
|---|---|---|
| `PawMate/` | Run directory | Workspace for generated code |
| `PawMate/backend/` | Workspace | API implementation |
| `PawMate/ui/` | Workspace | UI implementation (if built) |
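Taken together, the workspace rows above correspond to this layout (a sketch; `ui/` is present only when the UI was built):

```
pawmate-run-<timestamp>/
└── PawMate/              # workspace for generated code
    ├── backend/          # API implementation
    └── ui/               # UI implementation (if built)
```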
### Benchmark Artifacts
| File | Location | Purpose |
|---|---|---|
| `ai_run_report.md` | `benchmark/` | Complete run report with timestamps |
| `run_instructions.md` | `benchmark/` | Instructions to run the implementation |
| `acceptance_checklist.md` | `benchmark/` | Acceptance criteria verification |
| `result_submission_instructions.md` | `benchmark/` | Submission guide |
| `*.json` | `benchmark/` | Result file for submission |
## Common Patterns
### Initialize and Run
```bash
# Create directory
mkdir my-benchmark
cd my-benchmark

# Initialize
pawmate init --profile model-a-rest --tool "Cursor" --tool-ver "v0.43"

# Copy prompt to AI agent
cat pawmate-run-*/start_build_api_prompt.txt
```

### Submit Results
```bash
# Find result file
ls pawmate-run-*/benchmark/*.json

# Submit via email
pawmate submit pawmate-run-*/benchmark/result.json

# Or with GitHub issue
export GITHUB_TOKEN=ghp_xxxxxxxxxxxx
pawmate submit pawmate-run-*/benchmark/result.json
```

### Multiple Runs
```bash
# Organize runs by tool and model
mkdir -p benchmarks/cursor
cd benchmarks/cursor

# Run 1
mkdir run1 && cd run1
pawmate init --profile model-a-rest --tool "Cursor"
# ... complete benchmark ...
cd ..

# Run 2
mkdir run2 && cd run2
pawmate init --profile model-b-rest --tool "Cursor"
# ... complete benchmark ...
```

### Glob Patterns
You can use glob patterns with result files:
```bash
# Submit most recent run
pawmate submit pawmate-run-*/benchmark/*.json

# Specific pattern
pawmate submit pawmate-run-2026*/benchmark/cursor*.json
```

## Platform-Specific Notes
### macOS
- Default email client: Mail.app
- Directory visibility: hidden files start with `.`
- Ports: may need to allow Node through the firewall
### Windows
- Default email client: Outlook or Windows Mail
- Directory visibility: hidden attribute (not a dot prefix)
- Line endings: handled automatically by npm
- Shell: use PowerShell or CMD
### Linux
- Default email client: varies by distribution
- Directory visibility: hidden files start with `.`
- Ports: check firewall rules if needed
## Updating the CLI
Check for updates:
```bash
npm outdated -g pawmate-ai-challenge
```

Update to the latest version:
```bash
npm update -g pawmate-ai-challenge
```

Install a specific version:
```bash
npm install -g pawmate-ai-challenge@1.2.0
```

## Uninstalling
Remove the CLI:
```bash
npm uninstall -g pawmate-ai-challenge
```

## Troubleshooting Commands
### Verify Installation
```bash
# Check if installed
which pawmate

# Check version
pawmate --version

# View help
pawmate --help
pawmate init --help
pawmate submit --help
```

### Debug Information
```bash
# Node version
node --version   # Should be >= 18.0.0

# npm version
npm --version

# Check global packages
npm list -g --depth=0 | grep pawmate
```

### Reset npm Cache
If you run into installation issues, clear the cache and reinstall:

```bash
npm cache clean --force
npm install -g pawmate-ai-challenge
```

## Advanced Usage
### Scripting
You can script benchmark runs:
```bash
#!/bin/bash
# run-benchmarks.sh

TOOLS=("Cursor" "Copilot" "Codeium")
PROFILES=("model-a-rest" "model-a-graphql")

for tool in "${TOOLS[@]}"; do
  for profile in "${PROFILES[@]}"; do
    mkdir -p "benchmarks/${tool}/${profile}"
    cd "benchmarks/${tool}/${profile}"
    pawmate init --profile "$profile" --tool "$tool"
    echo "Initialized: $tool - $profile"
    cd ../../..
  done
done
```

### CI/CD Integration
You can integrate the CLI into a CI/CD pipeline:
```yaml
# .github/workflows/benchmark.yml
name: Run Benchmark

on: [workflow_dispatch]

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - name: Install CLI
        run: npm install -g pawmate-ai-challenge

      - name: Initialize
        run: |
          mkdir benchmark-run
          cd benchmark-run
          pawmate init --profile model-a-rest --tool "GitHub-Actions"

      - name: Run AI agent
        run: |
          # Your AI agent integration here
          # Copy prompt, execute, wait for completion

      - name: Submit results
        env:
          GITHUB_TOKEN: ${{ secrets.PAWMATE_TOKEN }}
        run: |
          cd benchmark-run
          pawmate submit pawmate-run-*/benchmark/*.json
```

## Help & Support
For more information:
