biswanathroul committed
Commit 897dc40 · verified · 1 Parent(s): 6f40440

Update README.md

Files changed (1)
  1. README.md +51 -0
README.md CHANGED
@@ -1,3 +1,25 @@
  # LLMPromptKit: LLM Prompt Management System

  LLMPromptKit is a comprehensive library for managing, versioning, testing, and evaluating prompts for Large Language Models (LLMs). It provides a structured framework to help data scientists and developers create, optimize, and maintain high-quality prompts.
@@ -10,6 +32,34 @@ LLMPromptKit is a comprehensive library for managing, versioning, testing, and e
  - **Evaluation Framework**: Measure prompt quality with customizable metrics
  - **Advanced Templating**: Create dynamic prompts with variables, conditionals, and loops
  - **Command-line Interface**: Easily integrate into your workflow
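The templating feature above covers variables, conditionals, and loops. The following standalone sketch illustrates the concept in plain Python; the `render_prompt` helper and `{placeholder}` syntax are assumptions for illustration, not LLMPromptKit's actual template language:

```python
# Minimal prompt-templating sketch: variables via str.format, with the
# conditional and loop handled in plain Python before rendering.
# Illustrative only; not LLMPromptKit's real template syntax.
def render_prompt(template, examples, tone=None):
    # "Loop": join few-shot examples into a single bulleted block.
    examples_block = "\n".join(f"- {ex}" for ex in examples)
    # "Conditional": include a tone instruction only when one is given.
    tone_line = f"Respond in a {tone} tone." if tone else ""
    return template.format(examples=examples_block, tone_line=tone_line).strip()

template = "Summarize the text.\n{tone_line}\nExamples:\n{examples}"
prompt = render_prompt(template, ["short input -> short summary"], tone="formal")
```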

  ## Documentation

@@ -20,6 +70,7 @@ For detailed documentation, see the [docs](./docs) directory:
  - [CLI Usage](./docs/cli_usage.md)
  - [Advanced Features](./docs/advanced_features.md)
  - [Integration Examples](./docs/integration_examples.md)

  ## Installation

+ ---
+ library_name: llmpromptkit
+ title: LLMPromptKit
+ emoji: 🚀
+ tags:
+ - prompt-engineering
+ - llm
+ - nlp
+ - prompt-management
+ - huggingface
+ - version-control
+ - ab-testing
+ - evaluation
+ languages:
+ - python
+ license: mit
+ pipeline_tag: text-generation
+ datasets:
+ - none
+
+ ---
+
  # LLMPromptKit: LLM Prompt Management System

  LLMPromptKit is a comprehensive library for managing, versioning, testing, and evaluating prompts for Large Language Models (LLMs). It provides a structured framework to help data scientists and developers create, optimize, and maintain high-quality prompts.
 
  - **Evaluation Framework**: Measure prompt quality with customizable metrics
  - **Advanced Templating**: Create dynamic prompts with variables, conditionals, and loops
  - **Command-line Interface**: Easily integrate into your workflow
+ - **Hugging Face Integration**: Seamlessly test prompts with thousands of open-source models
+
+ ## Hugging Face Integration
+
+ LLMPromptKit includes a powerful integration with Hugging Face models, allowing you to:
+
+ - Test prompts with thousands of open-source models
+ - Run evaluations with models like FLAN-T5, GPT-2, and others
+ - Compare prompt performance across different model architectures
+ - Access specialized models for tasks like translation, summarization, and question answering
+
+ ```python
+ import asyncio
+
+ from llmpromptkit import PromptManager, PromptTesting
+ from llmpromptkit.integrations.huggingface import get_huggingface_callback
+
+ # Initialize components
+ prompt_manager = PromptManager()
+ testing = PromptTesting(prompt_manager)
+
+ # Get a HuggingFace callback
+ hf_callback = get_huggingface_callback(
+     model_name="google/flan-t5-base",
+     task="text2text-generation"
+ )
+
+ # run_test_cases is a coroutine, so await it inside an event loop
+ async def main():
+     return await testing.run_test_cases(prompt_id="your_prompt_id", llm_callback=hf_callback)
+
+ test_results = asyncio.run(main())
+ ```
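The `llm_callback` argument above implies that the test runner only needs an async callable mapping a prompt string to generated text. That contract can be sketched without any model dependency; the callback shape is inferred from the example, not a documented LLMPromptKit interface:

```python
import asyncio

# Sketch of the callback contract implied by run_test_cases: an async
# callable that takes a rendered prompt and returns the model's text.
# A stub generator stands in for a real Hugging Face pipeline here.
def make_callback(generate):
    async def llm_callback(prompt, **kwargs):
        # Real pipelines are synchronous; run them off the event loop.
        return await asyncio.to_thread(generate, prompt)
    return llm_callback

stub_callback = make_callback(lambda prompt: f"echo: {prompt}")
output = asyncio.run(stub_callback("What is the capital of France?"))
```

Swapping the stub for a real `transformers` pipeline call would preserve the same interface.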

  ## Documentation

  - [CLI Usage](./docs/cli_usage.md)
  - [Advanced Features](./docs/advanced_features.md)
  - [Integration Examples](./docs/integration_examples.md)

  ## Installation