# API Reference

This document provides detailed API documentation for the main components of LLMPromptKit.

## PromptManager

The `PromptManager` class is the core component for managing prompts.

```python
from llmpromptkit import PromptManager
```
### Methods

#### `__init__(storage_path=None)`

- Description: Initialize a new `PromptManager`.
- Parameters:
  - `storage_path` (str, optional): Path to store prompts. Defaults to `"~/llmpromptkit_storage"`.
#### `create(content, name, description='', tags=None, metadata=None)`

- Description: Create a new prompt.
- Parameters:
  - `content` (str): The prompt text with optional variables in `{variable_name}` format.
  - `name` (str): Name of the prompt.
  - `description` (str, optional): Description of the prompt.
  - `tags` (list of str, optional): Tags for categorization.
  - `metadata` (dict, optional): Additional metadata.
- Returns: `Prompt` object.
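As a quick sketch (the `id` attribute on the returned `Prompt` object is assumed here for illustration):

```python
from llmpromptkit import PromptManager

# Uses the default storage path (~/llmpromptkit_storage) when none is given
pm = PromptManager()

# Create a prompt with {language} and {text} placeholders
prompt = pm.create(
    content="Translate the following text to {language}: {text}",
    name="translation-prompt",
    description="Translates arbitrary text into a target language",
    tags=["translation", "nlp"],
)

print(prompt.id)  # assumed attribute; the ID is what the other methods take
```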
#### `get(prompt_id)`

- Description: Get a prompt by ID.
- Parameters:
  - `prompt_id` (str): The ID of the prompt.
- Returns: `Prompt` object, or `None` if not found.

#### `update(prompt_id, content=None, name=None, description=None, tags=None, metadata=None)`

- Description: Update a prompt.
- Parameters:
  - `prompt_id` (str): The ID of the prompt to update.
  - `content` (str, optional): New prompt text.
  - `name` (str, optional): New name.
  - `description` (str, optional): New description.
  - `tags` (list of str, optional): New tags.
  - `metadata` (dict, optional): New metadata.
- Returns: Updated `Prompt` object.
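Continuing the sketch above, fetching and updating that prompt by ID (assuming, as the `None` defaults suggest, that omitted fields are left unchanged):

```python
# get() returns None when the ID is unknown
existing = pm.get(prompt.id)
if existing is not None:
    updated = pm.update(
        prompt.id,
        description="Translates text into a target language, preserving tone",
        tags=["translation", "nlp", "production"],
    )
```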
#### `delete(prompt_id)`

- Description: Delete a prompt.
- Parameters:
  - `prompt_id` (str): The ID of the prompt to delete.
- Returns: `True` if deleted, `False` otherwise.

#### `list_all()`

- Description: List all prompts.
- Returns: List of `Prompt` objects.

#### `search_by_tags(tags, match_all=False)`

- Description: Search prompts by tags.
- Parameters:
  - `tags` (list of str): Tags to search for.
  - `match_all` (bool, optional): If `True`, a prompt must have all of the given tags to match.
- Returns: List of matching `Prompt` objects.
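Listing and tag-based search, continuing the same sketch (the default `match_all=False` is assumed to match prompts carrying any one of the given tags):

```python
# Iterate over every stored prompt
for p in pm.list_all():
    print(p.name, p.tags)  # attribute names assumed to mirror the create() arguments

# Prompts carrying any of these tags (assumed default behavior)
related = pm.search_by_tags(["translation", "summarization"])

# Prompts carrying all of these tags
production_ready = pm.search_by_tags(["translation", "production"], match_all=True)
```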
## VersionControl

The `VersionControl` class manages prompt versions.

```python
from llmpromptkit import VersionControl
```

### Methods

#### `__init__(prompt_manager)`

- Description: Initialize the version control system.
- Parameters:
  - `prompt_manager` (PromptManager): A `PromptManager` instance.

#### `commit(prompt_id, commit_message, metadata=None)`

- Description: Create a new version of a prompt.
- Parameters:
  - `prompt_id` (str): The ID of the prompt.
  - `commit_message` (str): Message describing the changes.
  - `metadata` (dict, optional): Additional version metadata.
- Returns: Version number (int).

#### `list_versions(prompt_id)`

- Description: List all versions of a prompt.
- Parameters:
  - `prompt_id` (str): The ID of the prompt.
- Returns: List of version objects.

#### `get_version(prompt_id, version_number)`

- Description: Get a specific version of a prompt.
- Parameters:
  - `prompt_id` (str): The ID of the prompt.
  - `version_number` (int): The version number.
- Returns: Version data.

#### `checkout(prompt_id, version_number)`

- Description: Revert a prompt to a specific version.
- Parameters:
  - `prompt_id` (str): The ID of the prompt.
  - `version_number` (int): The version to revert to.
- Returns: Updated `Prompt` object.

#### `diff(prompt_id, version1, version2)`

- Description: Compare two versions of a prompt.
- Parameters:
  - `prompt_id` (str): The ID of the prompt.
  - `version1` (int): First version number.
  - `version2` (int): Second version number.
- Returns: Diff object.
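A minimal end-to-end sketch of the version workflow; it assumes version numbers increase with each `commit` and that the returned numbers can be fed back into `diff` and `checkout`:

```python
from llmpromptkit import PromptManager, VersionControl

pm = PromptManager()
vc = VersionControl(pm)

prompt = pm.create(content="Summarize: {text}", name="summarizer")

# Snapshot the current content
v1 = vc.commit(prompt.id, "Initial version")

# Edit the prompt, then snapshot again
pm.update(prompt.id, content="Summarize in three bullet points: {text}")
v2 = vc.commit(prompt.id, "Constrain output to three bullets")

print(vc.list_versions(prompt.id))   # both committed versions
print(vc.diff(prompt.id, v1, v2))    # differences between the two snapshots

# Roll the live prompt back to the first snapshot
restored = vc.checkout(prompt.id, v1)
```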
## PromptTesting

The `PromptTesting` class provides testing capabilities.

```python
from llmpromptkit import PromptTesting
```

### Methods

#### `__init__(prompt_manager)`

- Description: Initialize the testing system.
- Parameters:
  - `prompt_manager` (PromptManager): A `PromptManager` instance.

#### `create_test_case(prompt_id, input_vars, expected_output=None, name=None, description=None)`

- Description: Create a test case for a prompt.
- Parameters:
  - `prompt_id` (str): The ID of the prompt to test.
  - `input_vars` (dict): Variables to substitute in the prompt.
  - `expected_output` (str, optional): Expected response.
  - `name` (str, optional): Test case name.
  - `description` (str, optional): Test case description.
- Returns: Test case object.

#### `run_test_case(test_case_id, llm_callback)`

- Description: Run a test case.
- Parameters:
  - `test_case_id` (str): The ID of the test case.
  - `llm_callback` (callable): Function to call the LLM.
- Returns: Test result.
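A sketch of creating and running a test case with a stubbed `llm_callback` so it runs offline; the callback signature (rendered prompt text in, completion text out) and the `id` attribute on the test case object are assumptions, so check them against your installed version:

```python
from llmpromptkit import PromptManager, PromptTesting

pm = PromptManager()
testing = PromptTesting(pm)

prompt = pm.create(content="Translate to {language}: {text}", name="translator")

test_case = testing.create_test_case(
    prompt_id=prompt.id,
    input_vars={"language": "French", "text": "Hello"},
    expected_output="Bonjour",
    name="hello-to-french",
)

# Stub standing in for a real model call; replace with your LLM client.
def llm_callback(rendered_prompt, **kwargs):
    return "Bonjour"

result = testing.run_test_case(test_case.id, llm_callback)
print(result)
```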
#### `run_all_tests(prompt_id, llm_callback)`

- Description: Run all tests for a prompt.
- Parameters:
  - `prompt_id` (str): The ID of the prompt.
  - `llm_callback` (callable): Function to call the LLM.
- Returns: List of test results.

#### `ab_test(prompt_id_a, prompt_id_b, test_cases, llm_callback, metrics=None)`

- Description: Run A/B tests comparing two prompts.
- Parameters:
  - `prompt_id_a` (str): First prompt ID.
  - `prompt_id_b` (str): Second prompt ID.
  - `test_cases` (list): Test cases to run.
  - `llm_callback` (callable): Function to call the LLM.
  - `metrics` (list, optional): Metrics to compare.
- Returns: A/B test results.
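Continuing the testing sketch, an A/B comparison of two prompt variants (whether `test_cases` takes test case objects or IDs isn't specified above; objects are assumed here):

```python
prompt_a = pm.create(content="Translate to {language}: {text}", name="translator-a")
prompt_b = pm.create(
    content="You are a professional translator. Translate to {language}: {text}",
    name="translator-b",
)

ab_results = testing.ab_test(
    prompt_id_a=prompt_a.id,
    prompt_id_b=prompt_b.id,
    test_cases=[test_case],      # reuse the test case created above
    llm_callback=llm_callback,   # default metrics apply when metrics=None
)
print(ab_results)
```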
## Evaluator

The `Evaluator` class handles prompt evaluation.

```python
from llmpromptkit import Evaluator
```

### Methods

#### `__init__(prompt_manager)`

- Description: Initialize the evaluator.
- Parameters:
  - `prompt_manager` (PromptManager): A `PromptManager` instance.

#### `register_metric(metric)`

- Description: Register a new evaluation metric.
- Parameters:
  - `metric` (EvaluationMetric): The metric to register.

#### `evaluate_prompt(prompt_id, inputs, llm_callback, expected_outputs=None, metric_names=None)`

- Description: Evaluate a prompt with the given inputs and metrics.
- Parameters:
  - `prompt_id` (str): The ID of the prompt.
  - `inputs` (list): List of input dictionaries.
  - `llm_callback` (callable): Function to call the LLM.
  - `expected_outputs` (list, optional): Expected outputs.
  - `metric_names` (list, optional): Metrics to use.
- Returns: Evaluation results.
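A sketch of an evaluation run; the no-argument construction of `ExactMatchMetric` and whether built-in metrics are pre-registered are assumptions:

```python
from llmpromptkit import Evaluator, ExactMatchMetric, PromptManager

pm = PromptManager()
prompt = pm.create(content="Translate to {language}: {text}", name="translator-eval")

evaluator = Evaluator(pm)
evaluator.register_metric(ExactMatchMetric())  # no-argument constructor assumed

# Stub standing in for a real model call
def llm_callback(rendered_prompt, **kwargs):
    return "Bonjour"

results = evaluator.evaluate_prompt(
    prompt_id=prompt.id,
    inputs=[{"language": "French", "text": "Hello"}],
    llm_callback=llm_callback,
    expected_outputs=["Bonjour"],
)
print(results)
```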
## PromptTemplate

The `PromptTemplate` class provides advanced templating.

```python
from llmpromptkit import PromptTemplate
```

### Methods

#### `__init__(template_string)`

- Description: Initialize a template.
- Parameters:
  - `template_string` (str): Template with variables, conditionals, and loops.

#### `render(**variables)`

- Description: Render the template with the given variables.
- Parameters:
  - `variables` (dict): Variables to substitute.
- Returns: Rendered string.
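The conditional and loop syntax isn't documented in this reference, so the sketch below sticks to plain variable substitution:

```python
from llmpromptkit import PromptTemplate

template = PromptTemplate(
    "Summarize the following {doc_type} in at most {max_words} words:\n{text}"
)

rendered = template.render(
    doc_type="article",
    max_words=50,
    text="LLMPromptKit is a toolkit for managing, versioning, and testing prompts.",
)
print(rendered)
```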
## EvaluationMetric

`EvaluationMetric` is the base class for evaluation metrics.

```python
from llmpromptkit import EvaluationMetric
```

### Methods

#### `__init__(name, description=None)`

- Description: Initialize a metric.
- Parameters:
  - `name` (str): Metric name.
  - `description` (str, optional): Metric description.

#### `compute(generated_output, expected_output=None, **kwargs)`

- Description: Compute the metric score.
- Parameters:
  - `generated_output` (str): Output from the LLM.
  - `expected_output` (str, optional): Expected output.
  - `**kwargs`: Additional parameters.
- Returns: Score (float between 0 and 1).
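Custom metrics subclass `EvaluationMetric` and implement `compute`; a word-overlap metric, for example, might look like this sketch:

```python
from llmpromptkit import EvaluationMetric

class WordOverlapMetric(EvaluationMetric):
    """Fraction of expected words that also appear in the generated output."""

    def __init__(self):
        super().__init__(
            name="word_overlap",
            description="Share of expected words present in the generated output",
        )

    def compute(self, generated_output, expected_output=None, **kwargs):
        if not expected_output:
            return 0.0
        expected_words = set(expected_output.lower().split())
        generated_words = set(generated_output.lower().split())
        return len(expected_words & generated_words) / len(expected_words)
```

An instance can then be passed to `Evaluator.register_metric` and selected by name through `metric_names`.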
### Built-in Metrics

- `ExactMatchMetric`: Scores exact matches between generated and expected output.
- `ContainsKeywordsMetric`: Scores based on keyword presence.
- `LengthMetric`: Scores based on output length.

```python
from llmpromptkit import ExactMatchMetric, ContainsKeywordsMetric, LengthMetric
```
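The built-in metrics follow the same `compute` interface as the base class; the no-argument constructor below is an assumption, so check the signatures in your installed version:

```python
from llmpromptkit import ExactMatchMetric

metric = ExactMatchMetric()  # constructor arguments, if any, are assumed to be optional
score = metric.compute(generated_output="Bonjour", expected_output="Bonjour")
print(score)  # 1.0 for an exact match, per the 0-1 scoring convention above
```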