
# CLI Usage

LLMPromptKit provides a command-line interface (CLI) for managing prompts, versions, tests, and evaluations.

## Basic Commands

### Prompt Management

```bash
# Create a prompt
llmpromptkit prompt create "Weather Forecast" --content "Provide a weather forecast for {location} on {date}" --tags "weather,forecast"

# List all prompts
llmpromptkit prompt list

# Get prompt details
llmpromptkit prompt get <prompt_id>

# Update a prompt
llmpromptkit prompt update <prompt_id> --content "New content" --tags "new,tags"

# Delete a prompt
llmpromptkit prompt delete <prompt_id>
```

### Version Control

```bash
# Commit a version
llmpromptkit version commit <prompt_id> --message "Version description"

# List versions
llmpromptkit version list <prompt_id>

# Check out (revert to) a specific version
llmpromptkit version checkout <prompt_id> <version_number>

# Compare versions
llmpromptkit version diff <prompt_id> <version1> <version2>
```

### Testing

```bash
# Create a test case
llmpromptkit test create <prompt_id> --input '{"location": "New York", "date": "tomorrow"}' --expected "Expected output"

# List test cases
llmpromptkit test list <prompt_id>

# Run a specific test case
llmpromptkit test run <test_case_id> --llm openai

# Run all test cases for a prompt
llmpromptkit test run-all <prompt_id> --llm openai

# Run an A/B test between two prompts
llmpromptkit test ab <prompt_id_a> <prompt_id_b> --inputs '[{"var": "value1"}, {"var": "value2"}]' --llm openai
```
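Hand-quoting the JSON for `--inputs` on the command line is error-prone. One approach is to build the payload in a small script and pass it to the CLI as a single shell argument; the sketch below shows only the JSON construction (the `location`/`date` keys follow the weather example above):

```python
import json

# Test inputs for the weather prompt's {location} and {date} placeholders.
cases = [
    {"location": "New York", "date": "tomorrow"},
    {"location": "London", "date": "Friday"},
]

# json.dumps produces a single-line JSON array suitable for passing
# as one shell argument to the --inputs flag.
inputs_arg = json.dumps(cases)
print(inputs_arg)
```

From a shell script, the printed string can then be passed as `--inputs "$INPUTS"`.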

### Evaluation

```bash
# Evaluate a prompt
llmpromptkit eval run <prompt_id> --inputs '[{"var": "value1"}, {"var": "value2"}]' --llm openai

# List available metrics
llmpromptkit eval metrics

# Register a custom metric
llmpromptkit eval register-metric <metric_file.py>
```
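The interface that `eval register-metric` expects from a metric file is not documented here, so check the library's own reference before relying on any particular shape. Purely as an illustrative sketch, a custom metric could be a function that maps a model output and an expected output to a score (the name `exact_match` and its signature are assumptions, not the library's documented API):

```python
# my_metric.py -- hypothetical custom metric file
def exact_match(output: str, expected: str) -> float:
    """Return 1.0 if the model output matches the expected text exactly
    (ignoring surrounding whitespace), else 0.0."""
    return float(output.strip() == expected.strip())
```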

## Environment Configuration

The CLI supports environment variables for configuration:

- `LLMPROMPTKIT_STORAGE`: Path where prompts and related data are stored
- `LLMPROMPTKIT_OPENAI_API_KEY`: OpenAI API key for built-in LLM support
- `LLMPROMPTKIT_DEFAULT_LLM`: Default LLM to use for testing and evaluation

You can also create a config file at `~/.llmpromptkit/config.json`:

```json
{
  "storage_path": "/path/to/storage",
  "default_llm": "openai",
  "api_keys": {
    "openai": "your-openai-key"
  }
}
```
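If you generate this file from a setup script, standard JSON tooling is enough. A minimal sketch (writing to a temporary directory rather than `~/.llmpromptkit`, so it runs anywhere without side effects):

```python
import json
import tempfile
from pathlib import Path

# The keys below mirror the config.json example above.
config = {
    "storage_path": "/path/to/storage",
    "default_llm": "openai",
    "api_keys": {"openai": "your-openai-key"},
}

# A temp dir stands in for the home directory; in real use, replace it
# with Path.home() so the file lands at ~/.llmpromptkit/config.json.
config_dir = Path(tempfile.mkdtemp()) / ".llmpromptkit"
config_dir.mkdir(parents=True, exist_ok=True)
config_path = config_dir / "config.json"
config_path.write_text(json.dumps(config, indent=2))
print(config_path)
```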

## Advanced Usage

### Multiple Storage Locations

```bash
# Specify a storage location for a command
llmpromptkit --storage /path/to/storage prompt list

# Export a prompt to another storage
llmpromptkit prompt export <prompt_id> --output /path/to/output.json

# Import a prompt from a file
llmpromptkit prompt import /path/to/prompt.json
```

### Automation and Scripting

```bash
# Get output in JSON format
llmpromptkit --json prompt list

# Use in shell scripts
PROMPT_ID=$(llmpromptkit --json prompt create "Script Prompt" --content "Content" | jq -r '.id')
echo "Created prompt with ID: $PROMPT_ID"
```
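The same pattern works from languages other than the shell. The sketch below parses a stand-in payload in place of live CLI output; the field names (`id`, `name`) are assumptions based on the `jq -r '.id'` usage above, not a documented schema:

```python
import json

# Stand-in for the output of `llmpromptkit --json prompt list`; the real
# schema may differ -- only the `id` field is implied by the jq example.
sample = '[{"id": "abc123", "name": "Weather Forecast"}]'

prompts = json.loads(sample)
prompt_ids = [p["id"] for p in prompts]
print(prompt_ids)
```

In a real script, `sample` would be replaced by the captured stdout of the CLI call (for example via `subprocess.run`).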