
# CLI Reference

Complete reference for all PawMate CLI commands.

## Installation

Install the CLI globally:

```bash
npm install -g pawmate-ai-challenge
```

Or use with npx (no installation):

```bash
npx pawmate-ai-challenge init --profile model-a-rest --tool "YourAI"
```

## Global Options

Available for all commands:

| Option | Description |
| --- | --- |
| `-V, --version` | Output the version number |
| `-h, --help` | Display help for command |

## Commands

### pawmate init

Initialize a new benchmark run.

**Usage:**

```bash
pawmate init --profile <name> --tool <name> [options]
```

**Required Options:**

| Option | Description |
| --- | --- |
| `--profile <name>` | Profile to use (see profiles below) |
| `--tool <name>` | Name of the tool under test |

**Optional Options:**

| Option | Description | Default |
| --- | --- | --- |
| `--tool-ver <version>` | Tool version/build ID | None |
| `--spec-ver <version>` | Frozen spec version | From `SPEC_VERSION` file |
| `--run-dir <path>` | Custom run directory path | Auto-generated |
| `--hidden` | Create a hidden directory | Visible directory |

**Profiles:**

- `model-a-rest` - Model A (Minimum) + REST
- `model-a-graphql` - Model A (Minimum) + GraphQL
- `model-b-rest` - Model B (Full) + REST
- `model-b-graphql` - Model B (Full) + GraphQL

**Examples:**

```bash
# Basic usage
pawmate init --profile model-a-rest --tool "Cursor"

# With version
pawmate init --profile model-a-rest --tool "Cursor" --tool-ver "v0.43"

# Hidden directory
pawmate init --profile model-a-rest --tool "Cursor" --hidden

# Custom directory
pawmate init --profile model-a-rest --tool "Cursor" --run-dir ~/my-run

# Specific spec version
pawmate init --profile model-a-rest --tool "Cursor" --spec-ver v2.7.0
```

**What it creates:**

```text
pawmate-run-<timestamp>/  (or .pawmate-run-<timestamp>/ if --hidden)
├── start_build_api_prompt.txt
├── start_build_ui_prompt.txt
├── run.config
├── PawMate/
└── benchmark/
    └── result_submission_instructions.md
```

**Output:**

Displays the run folder path, workspace path, and generated prompt file locations.
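To verify the scaffold from the shell, a quick sanity check (assuming the default visible directory name):

```bash
# Confirm the run directory exists and review its configuration
ls pawmate-run-*/
cat pawmate-run-*/run.config
```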

### pawmate submit

Submit benchmark results.

**Usage:**

```bash
pawmate submit <result-file> [options]
```

**Arguments:**

| Argument | Description |
| --- | --- |
| `<result-file>` | Path to result JSON file |

**Optional Options:**

| Option | Description |
| --- | --- |
| `--github-token <token>` | GitHub personal access token |
| `--email-only` | Skip GitHub issue creation |

**Examples:**

```bash
# Email submission (default)
pawmate submit pawmate-run-*/benchmark/result.json

# Email + GitHub issue
export GITHUB_TOKEN=ghp_xxxx
pawmate submit pawmate-run-*/benchmark/result.json

# With token as flag
pawmate submit result.json --github-token ghp_xxxx

# Email only (skip GitHub)
pawmate submit result.json --email-only
```

**What it does:**

1. Validates result file format
2. Prompts for attribution (optional)
3. Opens email client with pre-filled content
4. Creates GitHub issue (if token provided)

**Environment Variables:**

| Variable | Description |
| --- | --- |
| `GITHUB_TOKEN` | GitHub personal access token for issue creation |

## Exit Codes

| Code | Meaning |
| --- | --- |
| `0` | Success |
| `1` | Error (validation failed, command failed, etc.) |
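
Because the CLI follows this convention, wrapper scripts can branch on the exit status. A minimal sketch:

```bash
# Abort a wrapper script if submission fails (non-zero exit code)
pawmate submit result.json --email-only
status=$?
if [ "$status" -ne 0 ]; then
  echo "pawmate submit failed with exit code $status" >&2
  exit "$status"
fi
```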

## Configuration Files

### run.config

Generated by `pawmate init` in each run directory:

```ini
spec_version=v2.7.0
spec_root=(bundled with CLI)
tool=Cursor
tool_ver=v0.43
model=A
api_type=REST
workspace=/path/to/pawmate-run-<timestamp>/PawMate
```
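
Because the file is plain `key=value` lines, individual fields are easy to pull out in a shell script. A small sketch using standard tools (assumes a single run directory matches the glob):

```bash
# Read one key from run.config (key=value format, no quoting)
spec_version=$(grep '^spec_version=' pawmate-run-*/run.config | cut -d= -f2)
echo "Run was initialized against spec ${spec_version}"
```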

### .npmrc (created by AI in backend/)

Sandbox-friendly npm configuration:

```ini
cache=.npm-cache
audit=false
fund=false
prefer-offline=true
```
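
If you ever need to recreate this file by hand (for example, when the agent did not write it), a sketch with the same contents:

```bash
# Recreate the sandbox-friendly npm config inside the backend workspace
cat > backend/.npmrc <<'EOF'
cache=.npm-cache
audit=false
fund=false
prefer-offline=true
EOF
```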

## File Locations

### Generated Prompts

| File | Location | Purpose |
| --- | --- | --- |
| `start_build_api_prompt.txt` | Run directory root | API/backend build prompt |
| `start_build_ui_prompt.txt` | Run directory root | UI/frontend build prompt |

### Run Metadata

| File | Location | Purpose |
| --- | --- | --- |
| `run.config` | Run directory root | Run configuration |

### Workspace

| Directory | Location | Purpose |
| --- | --- | --- |
| `PawMate/` | Run directory | Workspace for generated code |
| `PawMate/backend/` | Workspace | API implementation |
| `PawMate/ui/` | Workspace | UI implementation (if built) |
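
To try the generated backend locally, something like the following may work, though the exact commands depend on what the AI produced; `benchmark/run_instructions.md` is authoritative. A sketch assuming a standard Node project with a `start` script (an assumption, not guaranteed):

```bash
# Assumes a Node backend with npm scripts; check benchmark/run_instructions.md
cd pawmate-run-*/PawMate/backend
npm install
npm start
```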

### Benchmark Artifacts

| File | Location | Purpose |
| --- | --- | --- |
| `ai_run_report.md` | `benchmark/` | Complete run report with timestamps |
| `run_instructions.md` | `benchmark/` | Instructions to run the implementation |
| `acceptance_checklist.md` | `benchmark/` | Acceptance criteria verification |
| `result_submission_instructions.md` | `benchmark/` | Submission guide |
| `*.json` | `benchmark/` | Result file for submission |
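
Before submitting, you can at least confirm the result file is well-formed JSON. This is not a schema check; `pawmate submit` performs the real validation. A sketch using Node:

```bash
# Parse the result file to catch syntax errors before submission
node -e 'JSON.parse(require("fs").readFileSync(process.argv[1], "utf8"))' \
  pawmate-run-*/benchmark/result.json && echo "well-formed JSON"
```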

## Common Patterns

### Initialize and Run

```bash
# Create directory
mkdir my-benchmark
cd my-benchmark

# Initialize
pawmate init --profile model-a-rest --tool "Cursor" --tool-ver "v0.43"

# Copy prompt to AI agent
cat pawmate-run-*/start_build_api_prompt.txt
```

### Submit Results

```bash
# Find result file
ls pawmate-run-*/benchmark/*.json

# Submit via email
pawmate submit pawmate-run-*/benchmark/result.json

# Or with GitHub issue
export GITHUB_TOKEN=ghp_xxxxxxxxxxxx
pawmate submit pawmate-run-*/benchmark/result.json
```

### Multiple Runs

```bash
# Organize runs by tool and model
mkdir -p benchmarks/cursor
cd benchmarks/cursor

# Run 1
mkdir run1 && cd run1
pawmate init --profile model-a-rest --tool "Cursor"
# ... complete benchmark ...
cd ..

# Run 2
mkdir run2 && cd run2
pawmate init --profile model-b-rest --tool "Cursor"
# ... complete benchmark ...
```

### Glob Patterns

You can use glob patterns with result files:

```bash
# Submit a result file matched by a glob
pawmate submit pawmate-run-*/benchmark/*.json

# Narrow the match with a more specific pattern
pawmate submit pawmate-run-2026*/benchmark/cursor*.json
```
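
Note that a glob does not sort by recency: if several runs match, the shell expands all of them. To target only the newest run directory, a sketch that relies on `ls -t` ordering by modification time:

```bash
# Pick the most recently modified run directory and submit its result file
latest_run=$(ls -td pawmate-run-*/ | head -n 1)
pawmate submit "${latest_run}benchmark/"*.json
```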

## Platform-Specific Notes

### macOS

- Default email client: Mail.app
- Directory visibility: hidden files start with `.`
- Ports: may need to allow Node through the firewall

### Windows

- Default email client: Outlook or Windows Mail
- Directory visibility: hidden attribute (not dot-prefix)
- Line endings: handled automatically by npm
- Shell: use PowerShell or CMD

### Linux

- Default email client: varies by distribution
- Directory visibility: hidden files start with `.`
- Ports: check firewall rules if needed

## Updating the CLI

Check for updates:

```bash
npm outdated -g pawmate-ai-challenge
```

Update to the latest version:

```bash
npm update -g pawmate-ai-challenge
```

Install a specific version:

```bash
npm install -g pawmate-ai-challenge@1.2.0
```

## Uninstalling

Remove the CLI:

```bash
npm uninstall -g pawmate-ai-challenge
```

## Troubleshooting Commands

### Verify Installation

```bash
# Check if installed (on Windows, use: where pawmate)
which pawmate

# Check version
pawmate --version

# View help
pawmate --help
pawmate init --help
pawmate submit --help
```

### Debug Information

```bash
# Node version
node --version  # Should be >= 18.0.0

# npm version
npm --version

# Check global packages
npm list -g --depth=0 | grep pawmate
```

### Reset npm Cache

If you run into installation issues:

```bash
npm cache clean --force
npm install -g pawmate-ai-challenge
```

## Advanced Usage

### Scripting

You can script benchmark runs:

```bash
#!/bin/bash
# run-benchmarks.sh: initialize a run for every tool/profile combination
set -euo pipefail

TOOLS=("Cursor" "Copilot" "Codeium")
PROFILES=("model-a-rest" "model-a-graphql")

for tool in "${TOOLS[@]}"; do
  for profile in "${PROFILES[@]}"; do
    mkdir -p "benchmarks/${tool}/${profile}"
    # Use a subshell so the working directory is restored after each run
    (
      cd "benchmarks/${tool}/${profile}"
      pawmate init --profile "$profile" --tool "$tool"
    )
    echo "Initialized: $tool - $profile"
  done
done
```

### CI/CD Integration

You can integrate the CLI into CI/CD pipelines:

```yaml
# .github/workflows/benchmark.yml
name: Run Benchmark

on: [workflow_dispatch]

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - name: Install CLI
        run: npm install -g pawmate-ai-challenge

      - name: Initialize
        run: |
          mkdir benchmark-run
          cd benchmark-run
          pawmate init --profile model-a-rest --tool "GitHub-Actions"

      - name: Run AI agent
        run: |
          # Your AI agent integration here
          # Copy prompt, execute, wait for completion

      - name: Submit results
        env:
          GITHUB_TOKEN: ${{ secrets.PAWMATE_TOKEN }}
        run: |
          cd benchmark-run
          pawmate submit pawmate-run-*/benchmark/*.json
```

## Help & Support


Released under the MIT License.