Chunk ID | Chunk | Source
---|---|---
Techniques to improve reliability - Part 15 | Summarize the text using the original language of the text. The summary should be one sentence long. | https://cookbook.openai.com/techniques_to_improve_reliability |
Techniques to improve reliability - Part 16 | Text:
"La estadística (la forma femenina del término alemán Statistik, derivado a su vez del italiano statista, 'hombre de Estado') es una ciencia que estudia la variabilidad, colección, organización, análisis, interpretación, y presentación de los datos, así como el proceso aleatorio que los genera siguiendo las leyes de la probabilidad. | https://cookbook.openai.com/techniques_to_improve_reliability |
Techniques to improve reliability - Part 17 | Summary:
The text explains that statistics is a science that studies the variability, collection, organization, analysis, interpretation, and presentation of data, as well as the random process that generates them following the laws of probability. | https://cookbook.openai.com/techniques_to_improve_reliability |
Techniques to improve reliability - Part 18 | However, if we first ask the model to identify the language of the text, and then summarize the text, it becomes more reliable: | https://cookbook.openai.com/techniques_to_improve_reliability |
Techniques to improve reliability - Part 19 | First, identify the language of the text. Second, summarize the text using the original language of the text. The summary should be one sentence long. | https://cookbook.openai.com/techniques_to_improve_reliability |
Techniques to improve reliability - Part 20 | Language:
Spanish
La estadística es una ciencia que estudia la variabilidad, colección, organización, análisis, interpretación, y presentación de los datos, así como el proceso aleatorio que los genera siguiendo las leyes de la probabilidad. | https://cookbook.openai.com/techniques_to_improve_reliability |
Techniques to improve reliability - Part 21 | Prompt the model to explain before answering
Another powerful technique for improving the reliability of answers is to prompt the model to gradually reason out the answer rather than jumping immediately to the final answer. By 'thinking aloud' the model can be far more likely to arrive at the correct answer. | https://cookbook.openai.com/techniques_to_improve_reliability |
Techniques to improve reliability - Part 22 | Zero-shot
Method
Published by Takeshi Kojima et al. in 2022, the easiest way to prompt a model to reason out the answer is to simply prepend answers with Let's think step by step | https://cookbook.openai.com/techniques_to_improve_reliability |
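As a concrete illustration, here is a minimal sketch using the pre-v1 openai Python library that appears elsewhere in this document; the model name, example question, and decoding settings are assumptions for illustration, not part of the original method.

```python
import openai

# Hedged sketch of zero-shot chain-of-thought prompting: start the answer with
# "Let's think step by step." The model name, question, and settings below are
# illustrative assumptions.
question = (
    "A juggler can juggle 16 balls. Half of the balls are golf balls, "
    "and half of the golf balls are blue. How many blue golf balls are there?"
)
prompt = f"Q: {question}\nA: Let's think step by step."

response = openai.Completion.create(
    model="text-davinci-003",  # assumed model; use whichever completion model you have
    prompt=prompt,
    max_tokens=256,
    temperature=0,
)
print(response["choices"][0]["text"])
```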
Techniques to improve reliability | Source: Large Language Models are Zero-Shot Reasoners by Takeshi Kojima et al. (2022).
Results
Applying this simple trick to the MultiArith math dataset, the authors found Let's think step by step quadrupled the accuracy, from 18% to 79%!
[Figure: zero-shot reasoning example] | https://cookbook.openai.com/techniques_to_improve_reliability
Techniques to improve reliability (continued) | Implications
Although the Let's think step by step trick works well on math problems, it's not effective on all tasks. The authors found that it was most helpful for multi-step arithmetic problems, symbolic reasoning problems, strategy problems, and other reasoning problems. It didn't help with simple math problems or common sense questions, and presumably wouldn't help with many other non-reasoning tasks either.
[Figure: zero-shot reasoning example] | https://cookbook.openai.com/techniques_to_improve_reliability
Techniques to improve reliability (continued) | To learn more, read the full paper.
If you apply this technique to your own tasks, don't be afraid to experiment with customizing the instruction. Let's think step by step is rather generic, so you may find better performance with instructions that hew to a stricter format customized to your use case. For example, you can try more structured variants like First, think step by step about why X might be true. Second, think step by step about why Y might be true. Third, think step by step about whether X or Y makes more sense. And you can even give the model an example format to help keep it on track, e.g.: | https://cookbook.openai.com/techniques_to_improve_reliability
Question: Can I claim a federal tax credit for my Toyota Prius Prime bought in 2021? | (1) For each criterion, determine whether it is met by the vehicle purchase
- Does the vehicle have at least four wheels? Let's think step by step.
The Toyota Prius Prime has four wheels, so the answer is yes.
- Does the vehicle weigh less than 14,000 pounds? Let's think step by step. The Toyota Prius Prime weighs less than 14,000 pounds, so the answer is yes.
- Does the vehicle draw energy from a battery with at least 4 kilowatt hours that may be recharged from an external source? Let's think step by step. The Toyota Prius Prime has a battery with at least 4 kilowatt hours that may be recharged from an external source, so the answer is yes.
- Was the vehicle purchased in a year before 2022? Let's think step by step. The Toyota Prius Prime was purchased in 2021, which is before 2022, so the answer is yes.
- Was the vehicle purchased in a year after 2022? N/A
- If so, is the vehicle present in the following list of North American-assembled vehicles? N/A
(2) After considering each criterion in turn, phrase the final answer as "Because of {reasons}, the answer is likely {yes or no}."
Because the Toyota Prius Prime meets all of the criteria for a federal tax credit, the answer is likely yes. | https://cookbook.openai.com/techniques_to_improve_reliability |
Method: Few-shot examples | Method
Prompting the model to reason out its answers can be done in many ways. One way is to demonstrate with a few examples ('few-shot'), as studied by Jason Wei and Denny Zhou et al. from Google. Here's an example few-shot chain-of-thought prompt: | https://cookbook.openai.com/techniques_to_improve_reliability |
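As an illustrative stand-in (not the authors' exact prompt), such a few-shot chain-of-thought prompt can be assembled like this in Python, with each demonstration showing worked reasoning before its answer; the demonstration and test question below are assumptions for illustration.

```python
# Illustrative few-shot chain-of-thought prompt; the demonstration and test
# question are stand-ins, not the exact examples from Wei et al.
demonstration = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)
test_question = (
    "Q: The cafeteria had 23 apples. They used 20 to make lunch and bought 6 more. "
    "How many apples do they have?\n"
    "A:"
)
prompt = demonstration + test_question
```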
Method: Few-shot examples (continued) | Results
Testing on grade school math problems, the authors found that chain of thought prompting tripled the solve rate, from 18% to 57%.
In addition to math problems, chain of thought prompting also lifted performance on questions related to sports understanding, coin flip tracking, and last letter concatenation. In most cases, not many examples were needed to saturate the performance gains (fewer than 8 or so).
To learn more, read the full paper. | https://cookbook.openai.com/techniques_to_improve_reliability |
Method: Few-shot examples (continued) | Implications
One advantage of the few-shot example-based approach relative to the Let's think step by step technique is that you can more easily specify the format, length, and style of reasoning that you want the model to perform before landing on its final answer. This can be particularly helpful in cases where the model isn't initially reasoning in the right way or depth. | https://cookbook.openai.com/techniques_to_improve_reliability |
Method: Fine-tuned | Method
In general, to eke out maximum performance on a task, you'll need to fine-tune a custom model. However, fine-tuning a model using explanations may take thousands of example explanations, which are costly to write.
In 2022, Eric Zelikman and Yuhuai Wu et al. published a clever procedure for using a few-shot prompt to generate a dataset of explanations that could be used to fine-tune a model. The idea is to use a few-shot prompt to generate candidate explanations, and only keep the explanations that produce the correct answer. Then, to get additional explanations for some of the incorrect answers, retry the few-shot prompt but with correct answers given as part of the question. The authors called their procedure STaR (Self-taught Reasoner): | https://cookbook.openai.com/techniques_to_improve_reliability |
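A rough sketch of one iteration of that loop follows, assuming you already have a few-shot prompting helper and a fine-tuning helper of your own; generate_rationale_and_answer() and fine_tune() are hypothetical stand-ins, not the paper's code.

```python
# Hedged sketch of one STaR iteration with hypothetical helper functions.
def star_iteration(problems, generate_rationale_and_answer, fine_tune):
    dataset = []
    for problem in problems:
        # 1) Few-shot prompt the model for an explanation plus an answer.
        rationale, answer = generate_rationale_and_answer(problem["question"])
        if answer != problem["correct_answer"]:
            # 2) Retry with the correct answer supplied as a hint
            #    ("rationalization"), keeping the explanation only if the
            #    model now reaches the correct answer.
            rationale, answer = generate_rationale_and_answer(
                problem["question"], hint=problem["correct_answer"]
            )
        if answer == problem["correct_answer"]:
            dataset.append(
                {"question": problem["question"], "rationale": rationale, "answer": answer}
            )
    # 3) Fine-tune on the explanations that led to correct answers.
    return fine_tune(dataset)
```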
Techniques to improve reliability | Results
When the authors applied this technique to the CommonsenseQA dataset, they found that STaR outperformed both chain-of-thought prompting alone (73% > 37%) and fine-tuning alone (73% > 60%):
[Figure: STaR results]
Source: STaR: Bootstrapping Reasoning With Reasoning by Eric Zelikman and Yuhuai Wu et al. (2022)
To learn more, read the full paper. | https://cookbook.openai.com/techniques_to_improve_reliability |
Implications | Using a few-shot prompt to extend or modify a fine-tuning dataset is an idea that can be generalized beyond explanation writing. For example, if you have large quantities of unstructured text that you want to train on, you may find opportunities to use a prompt to extract a structured dataset from your unstructured text, and then fine-tune a custom model on that structured dataset.
Extensions to chain-of-thought prompting
A number of extensions of chain-of-thought prompting have been published as well.
Selection-inference prompting
Method
Published by Antonia Creswell et al., one extension of the chain-of-thought technique is to split the single prompt for generating explanations and answers into smaller parts. First, a prompt selects a relevant subset of facts from the text ('selection prompt'). Then, a second prompt infers a conclusion from the selected facts ('inference prompt'). These prompts are then alternated in a loop to generate multiple steps of reasoning and eventually land on a final answer. The authors illustrate the idea in the following figure: | https://cookbook.openai.com/techniques_to_improve_reliability |
Results | [Figure: Selection-inference prompting]
Source: Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning by Antonia Creswell et al. (2022)
When applied to a 7B-parameter model, the authors found that selection-inference prompting substantially improved performance relative to chain-of-thought prompting on the bAbI and ProofWriter benchmark tasks (both of which require longer sequences of reasoning steps). The best performance they achieved combined selection-inference prompting with fine-tuning.
[Figure: Selection-inference prompting]
Source: Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning by Antonia Creswell et al. (2022) | https://cookbook.openai.com/techniques_to_improve_reliability |
Implications | Although the gains on these benchmarks were large, these benchmarks were specifically chosen because they required longer sequences of reasoning. On problems that don't require reasoning with many steps, the gains are likely smaller.
The results highlight a couple of general lessons for working with large language models. One, splitting up complex tasks into smaller tasks is a great way to improve reliability and performance; the more atomic the task, the less room there is for the model to err. Two, getting maximum performance often means combining fine-tuning with whatever approach you've chosen.
To learn more, read the full paper. | https://cookbook.openai.com/techniques_to_improve_reliability |
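A minimal sketch of the alternating selection/inference loop described above, assuming you supply your own select_facts() and infer_step() prompt wrappers; the names and stopping rule are illustrative, not the authors' implementation.

```python
# Hedged sketch of selection-inference prompting: alternate a 'selection'
# prompt that picks relevant facts with an 'inference' prompt that derives a
# new fact from them, until an answer is reached or a step budget runs out.
def selection_inference(question, facts, select_facts, infer_step, max_steps=5):
    reasoning_trace = []
    for _ in range(max_steps):
        # Selection prompt: choose the subset of facts relevant to the question.
        selected = select_facts(question, facts)
        # Inference prompt: derive a new conclusion from the selected facts.
        conclusion = infer_step(question, selected)
        reasoning_trace.append(conclusion)
        facts = facts + [conclusion]  # the new fact is available to later steps
        if conclusion.lower().startswith("the answer is"):
            break
    return reasoning_trace
```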
Faithful reasoning architecture | A few months after publishing the selection-inference prompting technique, the authors extended the technique in a follow-up paper, with ideas for:
figuring out when the selection-inference cycle should stop or continue
adding a value function to help search over multiple reasoning paths
reducing hallucination of fake facts by fine-tuning a model to reason about sentence labels (e.g., sen1) rather than writing out the sentences themselves
Method
In the original selection-inference technique, specialized 'selection' and 'inference' prompts are alternated to select facts and make inferences from those facts, combining to generate a sequence of reasoning steps. | https://cookbook.openai.com/techniques_to_improve_reliability |
Results | The authors evaluated their technique on two benchmarks: the ProofWriter task (not shown) and EntailmentBankQA (shown). The technique increased accuracy substantially, especially on harder reasoning problems.
Implications
This paper illustrates a number of helpful lessons for improving the reliability of large language models:
Split complex tasks into smaller, more reliable subtasks
Generate your answer in a step-by-step fashion, evaluating it along the way
Generate many possible answers and use another model or function to pick the ones that look best
Reduce hallucination by constraining what the model can say (e.g., by using sentence labels instead of sentences)
Maximize performance of models by fine-tuning them on specialized tasks
To learn more, read the full paper. | https://cookbook.openai.com/techniques_to_improve_reliability |
Least-to-most prompting | [Figure: Least-to-most prompting]
Source: Least-to-most Prompting Enables Complex Reasoning in Large Language Models by Denny Zhou et al. (2022)
Results
When applied to benchmarks involving long reasoning chains using code-davinci-002 (which is optimized for code but can still understand text), the authors measured gains as large as 16% -> 99.7%!
Implications
Although the above gains from least-to-most prompting are impressive, they are measured on a very narrow set of tasks that require long reasoning chains.
Still, they illustrate a common theme: increase reliability by (a) breaking complex tasks into smaller subtasks and (b) giving the model more time and space to work out the answer.
To learn more, read the full paper. | https://cookbook.openai.com/techniques_to_improve_reliability |
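Least-to-most prompting works by first asking the model to break a problem into simpler subquestions and then answering them in order, with each answer appended to the context for the next. A minimal sketch under that reading, assuming hypothetical decompose() and answer_subquestion() prompt wrappers:

```python
# Hedged sketch of least-to-most prompting with hypothetical helpers.
def least_to_most(question, decompose, answer_subquestion):
    subquestions = decompose(question)  # e.g. ["How many ...?", "Then how ...?"]
    context = ""
    answer = None
    for sub in subquestions:
        # Answer each subquestion with all previously solved subproblems in context.
        answer = answer_subquestion(context, sub)
        context += f"Q: {sub}\nA: {answer}\n"
    # The final subquestion's answer is taken as the answer to the original question.
    return answer
```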
Related ideas | Maieutic prompting
Method
In contrast to the previous techniques, which try to maximize the likelihood of correct answers, another approach is to use GPT-3 to generate a tree of possible explanations (both correct and incorrect), and then analyze their relationships to guess at which set is correct. This technique was coined maieutic prompting by Jaehun Jung et al. in May 2022 (maieutic means relating to the Socratic method of asking questions to elicit ideas).
The method is complicated, and works as follows:
First, build a maieutic tree, where each node is a statement that could be true or false:
Start with a multiple-choice question or true/false statement (e.g. War cannot have a tie)
For each possible answer to the question, use the model to generate a corresponding explanation (with a prompt like War cannot have a tie? True, because)
Then, prompt the model with the question and the generated explanation, and ask it to produce the answer. If reversing the explanation (with a prefix like It is wrong to say that {explanation}) reverses the answer, then the explanation is considered 'logically integral.' | https://cookbook.openai.com/techniques_to_improve_reliability |
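The 'logically integral' check can be sketched as follows, assuming a hypothetical answer_given() wrapper that prompts the model for a True/False verdict on the statement given an explanation; this is an illustration of the idea, not the authors' code.

```python
# Hedged sketch of the maieutic 'logically integral' test: an explanation
# counts as logically integral if negating it flips the model's answer.
def is_logically_integral(statement, explanation, answer_given):
    original = answer_given(statement, explanation)
    negated = answer_given(statement, f"It is wrong to say that {explanation}")
    return original != negated  # answer flips when the explanation is reversed
```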
Results | [Figure: Maieutic prompting results]
Source: Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations by Jaehun Jung et al. (2022)
Implications
Beyond the complexity, one limitation of this method is that it appears to only apply to questions that can be posed as multiple-choice.
To learn more, read the full paper. | https://cookbook.openai.com/techniques_to_improve_reliability |
Extensions | Self-consistency
Method
For tasks with a discrete set of answers, one simple way to improve reliability is to sample multiple explanations & answers from the model (using a positive temperature) and then pick the final answer that appears most often. | https://cookbook.openai.com/techniques_to_improve_reliability |
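A minimal sketch with the pre-v1 openai library, assuming a hypothetical extract_final_answer() parser for your answer format; the model name and sampling settings are illustrative.

```python
from collections import Counter

import openai

# Hedged self-consistency sketch: sample several reasoned answers at a
# positive temperature and keep the most common final answer.
def self_consistent_answer(prompt, extract_final_answer, n=10):
    response = openai.Completion.create(
        model="text-davinci-003",  # assumed model
        prompt=prompt,
        max_tokens=256,
        temperature=0.7,           # positive temperature so samples differ
        n=n,
    )
    answers = [extract_final_answer(choice["text"]) for choice in response["choices"]]
    return Counter(answers).most_common(1)[0][0]
```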
Results | This technique lifted accuracies by anywhere from 1 to 24 percentage points on a suite of math and reasoning benchmarks. (Plotted below are results from Google's LaMDA model; using Google's larger PaLM model, the baselines were higher but the gains were a bit smaller.)
Implications
Although this technique is simple to implement, it can be costly. Generating a set of 10 answers will increase your costs by 10x.
Also, as with many of these techniques, it applies only to tasks with a limited set of answers. For open-ended tasks where each answer is unique (such as writing a poem), it's not obvious what it would mean to pick the most common answer.
Lastly, this technique ought to be most beneficial when there are multiple paths or phrasings to reach an answer; if there's only one path, then the technique may not help at all. An extreme example: If the task was to generate a single token answer, then taking the most common token from 100 generations would be no different than taking the token with the highest logprobs (which you can get with a single generation at temperature=0). | https://cookbook.openai.com/techniques_to_improve_reliability |
Verifiers | Another key technique for improving task performance is to train a verifier or discriminator model to evaluate the outputs of the main generative model. If the discriminator rejects the output, then you can resample the generative model until you get an acceptable output. In many cases, it's easier to judge an answer than it is to create an answer, which helps explain the power of this method.
Method
In 2021, OpenAI researchers applied this technique to grade school math problems, using the following procedure: | https://cookbook.openai.com/techniques_to_improve_reliability |
Results | With a 175B GPT-3 model and 8,000 training examples, this technique substantially lifted grade school math accuracy from ~33% to ~55%.
Implications
Similar to the self-consistency technique, this method can get expensive, as generating, say, 100 solutions per task will increase your costs by roughly ~100x. | https://cookbook.openai.com/techniques_to_improve_reliability |
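As a rough sketch of the generate-then-verify pattern described above (not OpenAI's actual training procedure), with hypothetical generate_solution() and verifier_score() stand-ins for your generator and trained verifier:

```python
# Hedged sketch: sample candidate solutions and keep the first one the
# verifier accepts, falling back to the highest-scoring sample otherwise.
def best_verified_solution(problem, generate_solution, verifier_score,
                           n_samples=20, threshold=0.5):
    best, best_score = None, float("-inf")
    for _ in range(n_samples):
        candidate = generate_solution(problem)
        score = verifier_score(problem, candidate)
        if score >= threshold:
            return candidate          # good enough; stop early
        if score > best_score:
            best, best_score = candidate, score
    return best                       # highest-scoring sample seen
```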
Theories of reliability | Although the techniques above vary in their approach, they all share the goal of improving reliability on complex tasks. Mainly they do this by:
decomposing unreliable operations into smaller, more reliable operations (e.g., selection-inference prompting)
using multiple steps or multiple relationships to make the system's reliability greater than any individual component (e.g., maieutic prompting)
Probabilistic graphical models
This paradigm of trying to build a reliable system out of less reliable components is reminiscent of probabilistic programming, and many of the analysis techniques of that field can be applied to this one.
In the paper Language Model Cascades, David Dohan et al. interpret the above techniques in the paradigm of probabilistic graphical models: | https://cookbook.openai.com/techniques_to_improve_reliability |
Implications | Although formulating these techniques as probabilistic graphical models may not be immediately useful for solving any particular problem, the framework may be helpful in selecting, combining, and discovering new techniques.
Closing thoughts
Research into large language models is very active and evolving rapidly. Not only do researchers continue to improve the models, they also continue to improve our understanding of how to best employ the models. To underscore the pace of these developments, note that all of the papers shared above were published within the past 12 months (as I write in Sep 2022).
In the future, expect better models and better techniques to be published. Even if the specific techniques here are eclipsed by future best practices, the general principles behind them will likely remain a key part of any expert user's toolkit. | https://cookbook.openai.com/techniques_to_improve_reliability |
How large language models work | Large language models are functions that map text to text. Given an input string of text, a large language model predicts the text that should come next.
The magic of large language models is that by being trained to minimize this prediction error over vast quantities of text, the models end up learning concepts useful for these predictions. For example, they learn:
how to spell
how grammar works
how to paraphrase
how to answer questions
how to hold a conversation
how to write in many languages
how to code
etc.
None of these capabilities are explicitly programmed in—they all emerge as a result of training. | https://cookbook.openai.com/how_to_work_with_large_language_models |
How to control a large language model | Of all the inputs to a large language model, by far the most influential is the text prompt.
Large language models can be prompted to produce output in a few ways:
Instruction: Tell the model what you want
Completion: Induce the model to complete the beginning of what you want
Demonstration: Show the model what you want, with either:
A few examples in the prompt
Many hundreds or thousands of examples in a fine-tuning training dataset
An example of each is shown below. | https://cookbook.openai.com/how_to_work_with_large_language_models |
Instruction prompts | Instruction-following models (e.g., text-davinci-003 or any model beginning with text-) are specially designed to follow instructions. Write your instruction at the top of the prompt (or at the bottom, or both), and the model will do its best to follow the instruction and then stop. Instructions can be detailed, so don't be afraid to write a paragraph explicitly detailing the output you want.
Example instruction prompt:
Extract the name of the author from the quotation below.
“Some humans theorize that intelligent species go extinct before they can expand into outer space. If they're correct, then the hush of the night sky is the silence of the graveyard.”
― Ted Chiang, Exhalation
Output:
Ted Chiang | https://cookbook.openai.com/how_to_work_with_large_language_models |
Completion prompt example | Completion-style prompts take advantage of how large language models try to write text they think is most likely to come next. To steer the model, try beginning a pattern or sentence that will be completed by the output you want to see. Relative to direct instructions, this mode of steering large language models can take more care and experimentation. In addition, the models won't necessarily know where to stop, so you will often need stop sequences or post-processing to cut off text generated beyond the desired output.
Example completion prompt:
“Some humans theorize that intelligent species go extinct before they can expand into outer space. If they're correct, then the hush of the night sky is the silence of the graveyard.”
― Ted Chiang, Exhalation
The author of this quote is
Output:
Ted Chiang | https://cookbook.openai.com/how_to_work_with_large_language_models |
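For reference, the completion prompt above could be sent with the pre-v1 openai library using a stop sequence so generation ends after the name; the model name is an assumption.

```python
import openai

# Hedged sketch of a completion-style call with a stop sequence.
prompt = (
    "“Some humans theorize that intelligent species go extinct before they can "
    "expand into outer space. If they're correct, then the hush of the night sky "
    "is the silence of the graveyard.”\n"
    "― Ted Chiang, Exhalation\n\n"
    "The author of this quote is"
)

response = openai.Completion.create(
    model="text-davinci-003",  # assumed model
    prompt=prompt,
    max_tokens=10,
    temperature=0,
    stop=["\n"],               # cut the completion off at the end of the line
)
print(response["choices"][0]["text"].strip())
```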
Demonstration prompt example (few-shot learning) | Similar to completion-style prompts, demonstrations can show the model what you want it to do. This approach is sometimes called few-shot learning, as the model learns from a few examples provided in the prompt.
Example demonstration prompt:
Quote:
“When the reasoning mind is forced to confront the impossible again and again, it has no choice but to adapt.”
― N.K. Jemisin, The Fifth Season
Author: N.K. Jemisin
Quote:
“Some humans theorize that intelligent species go extinct before they can expand into outer space. If they're correct, then the hush of the night sky is the silence of the graveyard.”
― Ted Chiang, Exhalation
Author:
Output:
Ted Chiang | https://cookbook.openai.com/how_to_work_with_large_language_models |
Fine-tuned prompt example | With enough training examples, you can fine-tune a custom model. In this case, instructions become unnecessary, as the model can learn the task from the training data provided. However, it can be helpful to include separator sequences (e.g., -> or ### or any string that doesn't commonly appear in your inputs) to tell the model when the prompt has ended and the output should begin. Without separator sequences, there is a risk that the model continues elaborating on the input text rather than starting on the answer you want to see.
Example fine-tuned prompt (for a model that has been custom trained on similar prompt-completion pairs):
“Some humans theorize that intelligent species go extinct before they can expand into outer space. If they're correct, then the hush of the night sky is the silence of the graveyard.”
― Ted Chiang, Exhalation
###
Output:
Ted Chiang | https://cookbook.openai.com/how_to_work_with_large_language_models |
Code Capabilities | Large language models aren't only great at text - they can be great at code too. OpenAI's specialized code model is called Codex.
Codex powers more than 70 products, including:
GitHub Copilot (autocompletes code in VS Code and other IDEs)
Pygma (turns Figma designs into code)
Replit (has an 'Explain code' button and other features)
Warp (a smart terminal with AI command search)
Machinet (writes Java unit test templates)
Note that unlike instruction-following text models (e.g., text-davinci-002), Codex is not trained to follow instructions. As a result, designing good prompts can take more care. | https://cookbook.openai.com/how_to_work_with_large_language_models |
More prompt advice | For more prompt examples, visit OpenAI Examples.
In general, the input prompt is the best lever for improving model outputs. You can try tricks like:
Give more explicit instructions. E.g., if you want the output to be a comma separated list, ask it to return a comma separated list. If you want it to say "I don't know" when it doesn't know the answer, tell it 'Say "I don't know" if you do not know the answer.'
Supply better examples. If you're demonstrating examples in your prompt, make sure that your examples are diverse and high quality.
Ask the model to answer as if it was an expert. Explicitly asking the model to produce high quality output or output as if it was written by an expert can induce the model to give higher quality answers that it thinks an expert would write. E.g., "The following answer is correct, high-quality, and written by an expert."
Prompt the model to write down the series of steps explaining its reasoning. E.g., prepend your answer with something like "Let's think step by step." Prompting the model to give an explanation of its reasoning before its final answer can increase the likelihood that its final answer is consistent and correct. | https://cookbook.openai.com/how_to_work_with_large_language_models |
How to fine-tune chat models (Chunk 1) | This notebook provides a step-by-step guide for our new gpt-3.5-turbo fine-tuning. We'll perform entity extraction using the RecipeNLG dataset, which provides various recipes and a list of extracted generic ingredients for each. This is a common dataset for named entity recognition (NER) tasks. | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 2) | We will go through the following steps:
Setup: Loading our dataset and filtering down to one domain to fine-tune on.
Data preparation: Preparing your data for fine-tuning by creating training and validation examples, and uploading them to the Files endpoint. | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 3) | Fine-tuning: Creating your fine-tuned model.
Inference: Using your fine-tuned model for inference on new inputs.
By the end of this you should be able to train, evaluate and deploy a fine-tuned gpt-3.5-turbo model. | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 4) | For more information on fine-tuning, you can refer to our documentation guide, API reference or blog post
Setup
# make sure to use the latest version of the openai python package
!pip install --upgrade openai
import json
import openai
import os
import pandas as pd
from pprint import pprint | https://cookbook.openai.com/examples/how_to_finetune_chat_models
How to fine-tune chat models (Chunk 5) | OPENAI_API_KEY = os.getenv("OPENAI_API_KEY", "")
Fine-tuning works best when focused on a particular domain. It's important to make sure your dataset is focused enough for the model to learn, but general enough that unseen examples won't be missed. With this in mind, we have extracted a subset of the RecipeNLG dataset that contains only documents from www.cookbooks.com. | https://cookbook.openai.com/examples/how_to_finetune_chat_models
How to fine-tune chat models (Chunk 6) | # Read in the dataset we'll use for this task.
# This will be the RecipeNLG dataset, which we've cleaned to only contain documents from www.cookbooks.com
recipe_df = pd.read_csv("data/cookbook_recipes_nlg_10k.csv")
recipe_df.head() | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 7) | Data preparation
We'll begin by preparing our data. When fine-tuning with the ChatCompletion format, each training example is a simple list of messages. For example, an entry could look like: | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 8) | [{'role': 'system',
'content': 'You are a helpful recipe assistant. You are to extract the generic ingredients from each of the recipes provided.'}, | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 9) | {'role': 'user',
'content': 'Title: No-Bake Nut Cookies\n\nIngredients: ["1 c. firmly packed brown sugar", "1/2 c. evaporated milk", "1/2 tsp. vanilla", "1/2 c. broken nuts (pecans)", "2 Tbsp. butter or margarine", "3 1/2 c. bite size shredded rice biscuits"]\n\nGeneric ingredients: '}, | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 10) | {'role': 'assistant',
'content': '["brown sugar", "milk", "vanilla", "nuts", "butter", "bite size shredded rice biscuits"]'}] | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 11) | During the training process this conversation will be split, with the final entry being the completion that the model will produce, and the remainder of the messages acting as the prompt. Consider this when building your training examples - if your model will act on multi-turn conversations, then please provide representative examples so it doesn't perform poorly when the conversation starts to expand. | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 12) | Please note that currently there is a 4096 token limit for each training example. Anything longer than this will be truncated at 4096 tokens. | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 13) | training_data = []
system_message = "You are a helpful recipe assistant. You are to extract the generic ingredients from each of the recipes provided." | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 14) | def create_user_message(row):
return f"Title: {row['title']}\n\nIngredients: {row['ingredients']}\n\nGeneric ingredients: " | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 15) | def prepare_example_conversation(row):
    messages = []
    messages.append({"role": "system", "content": system_message}) | https://cookbook.openai.com/examples/how_to_finetune_chat_models
How to fine-tune chat models (Chunk 16) | user_message = create_user_message(row)
messages.append({"role": "user", "content": user_message})
messages.append({"role": "assistant", "content": row["NER"]}) | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 17) | return {"messages": messages}
pprint(prepare_example_conversation(recipe_df.iloc[0])) | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 18) | Let's now do this for a subset of the dataset to use as our training data. You can begin with even 30-50 well-pruned examples. You should see performance continue to scale linearly as you increase the size of the training set, but your jobs will also take longer. | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 19) | # use the first 100 rows of the dataset for training
training_df = recipe_df.loc[0:100] | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 20) | # apply the prepare_example_conversation function to each row of the training_df
training_data = training_df.apply(prepare_example_conversation, axis=1).tolist() | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 21) | for example in training_data[:5]:
    print(example) | https://cookbook.openai.com/examples/how_to_finetune_chat_models
How to fine-tune chat models (Chunk 22) | In addition to training data, we can also optionally provide validation data, which will be used to make sure that the model does not overfit your training set. | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 23) | validation_df = recipe_df.loc[101:200]
validation_data = validation_df.apply(prepare_example_conversation, axis=1).tolist() | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 24) | We then need to save our data as .jsonl files, with each line being one training example conversation. | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 25) | def write_jsonl(data_list: list, filename: str) -> None:
    with open(filename, "w") as out:
        for ddict in data_list:
            jout = json.dumps(ddict) + "\n"
            out.write(jout) | https://cookbook.openai.com/examples/how_to_finetune_chat_models
How to fine-tune chat models (Chunk 26) | training_file_name = "tmp_recipe_finetune_training.jsonl"
write_jsonl(training_data, training_file_name) | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 27) | validation_file_name = "tmp_recipe_finetune_validation.jsonl"
write_jsonl(validation_data, validation_file_name) | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 28) | This is what the first 5 lines of our training .jsonl file look like:
# print the first 5 lines of the training file
!head -n 5 tmp_recipe_finetune_training.jsonl | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 29) | Upload files
You can now upload the files to our Files endpoint to be used by the fine-tuned model. | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 30) | training_response = openai.File.create(
    file=open(training_file_name, "rb"), purpose="fine-tune"
)
training_file_id = training_response["id"]
validation_response = openai.File.create(
    file=open(validation_file_name, "rb"), purpose="fine-tune"
)
validation_file_id = validation_response["id"] | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 31) | print("Training file ID:", training_file_id)
print("Validation file ID:", validation_file_id) | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 32) | Fine-tuning
Now we can create our fine-tuning job with the generated files and an optional suffix to identify the model. The response will contain an id which you can use to retrieve updates on the job. | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 33) | Note: The files have to first be processed by our system, so you might get a File not ready error. In that case, simply retry a few minutes later. | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 34) | response = openai.FineTuningJob.create(
    training_file=training_file_id,
    validation_file=validation_file_id,
    model="gpt-3.5-turbo",
    suffix="recipe-ner",
) | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 35) | job_id = response["id"]
print("Job ID:", response["id"])
print("Status:", response["status"]) | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 36) | Check job status
You can make a GET request to the https://api.openai.com/v1/alpha/fine-tunes endpoint to list your alpha fine-tune jobs. In this instance you'll want to check that the ID you got from the previous step ends up as status: succeeded. | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 37) | Once it is completed, you can use the result_files to sample the results from the validation set (if you uploaded one), and use the ID from the fine_tuned_model parameter to invoke your trained model. | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 38) | response = openai.FineTuningJob.retrieve(job_id)
print("Job ID:", response["id"]) | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 39) | print("Status:", response["status"])
print("Trained Tokens:", response["trained_tokens"]) | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 40) | We can track the progress of the fine-tune with the events endpoint. You can rerun the cell below a few times until the fine-tune is ready. | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 41) | response = openai.FineTuningJob.list_events(id=job_id, limit=50)
events = response["data"]
events.reverse() | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 42) | for event in events:
print(event["message"]) | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 43) | Now that it's done, we can get a fine-tuned model ID from the job:
response = openai.FineTuningJob.retrieve(job_id)
fine_tuned_model_id = response["fine_tuned_model"]
if fine_tuned_model_id is None:
    raise RuntimeError("Fine-tuned model ID not found. Your job has likely not been completed yet.") | https://cookbook.openai.com/examples/how_to_finetune_chat_models
How to fine-tune chat models (Chunk 44) | print("Fine-tuned model ID:", fine_tuned_model_id) | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 45) | Inference
The last step is to use your fine-tuned model for inference. As with classic fine-tuning, you simply call ChatCompletion with your new fine-tuned model name as the model parameter. | https://cookbook.openai.com/examples/how_to_finetune_chat_models
How to fine-tune chat models (Chunk 46) | test_df = recipe_df.loc[201:300]
test_row = test_df.iloc[0]
test_messages = []
test_messages.append({"role": "system", "content": system_message}) | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 47) | user_message = create_user_message(test_row)
test_messages.append({"role": "user", "content": user_message}) | https://cookbook.openai.com/examples/how_to_finetune_chat_models
How to fine-tune chat models (Chunk 48) | pprint(test_messages)
response = openai.ChatCompletion.create(
    model=fine_tuned_model_id, messages=test_messages, temperature=0, max_tokens=500
) | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 49) | print(response["choices"][0]["message"]["content"]) | https://cookbook.openai.com/examples/how_to_finetune_chat_models |
How to fine-tune chat models (Chunk 50) | Conclusion
Congratulations, you are now ready to fine-tune your own models using the ChatCompletion format! We look forward to seeing what you build. | https://cookbook.openai.com/examples/how_to_finetune_chat_models
Evaluating Abstractive Summarization - Part 1 | In this notebook we delve into the evaluation techniques for abstractive summarization tasks using a simple example. We explore traditional evaluation methods like ROUGE and BERTScore, in addition to showcasing a more novel approach using LLMs as evaluators. | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
Evaluating Abstractive Summarization - Part 2 | Evaluating the quality of summaries is a time-consuming process, as it involves different quality metrics such as coherence, conciseness, readability, and content. Traditional automatic evaluation metrics such as ROUGE and BERTScore are concrete and reliable, but they may not correlate well with the actual quality of summaries. They show relatively low correlation with human judgments, especially for open-ended generation tasks (Liu et al., 2023). There's a growing need to lean on human evaluations, user feedback, or model-based metrics while being vigilant about potential biases. While human judgment provides invaluable insights, it is often not scalable and can be cost-prohibitive. | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization
Evaluating Abstractive Summarization - Part 3 | In addition to these traditional metrics, we showcase a method (G-Eval) that leverages Large Language Models (LLMs) as a novel, reference-free metric for assessing abstractive summaries. In this case, we use gpt-4 to score candidate outputs. gpt-4 has effectively learned an internal model of language quality that allows it to differentiate between fluent, coherent text and low-quality text. Harnessing this internal scoring mechanism allows auto-evaluation of new candidate outputs generated by an LLM. | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
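A hedged sketch of this kind of LLM-based scoring with the pre-v1 openai library used in this notebook; the prompt wording and 1-5 scale below are illustrative, not the exact G-Eval prompt.

```python
import openai

# Hedged sketch of LLM-as-evaluator scoring: ask gpt-4 to rate a summary on
# one criterion and return just the number.
def llm_eval_score(document, summary, criterion="coherence"):
    prompt = (
        f"You will be given a source document and a summary.\n"
        f"Rate the summary on {criterion} from 1 (worst) to 5 (best). "
        f"Reply with only the number.\n\n"
        f"Source document:\n{document}\n\nSummary:\n{summary}\n\nScore:"
    )
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
        max_tokens=5,
    )
    return response["choices"][0]["message"]["content"].strip()
```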
Evaluating Abstractive Summarization - Part 4 | # Installing necessary packages for the evaluation
# rouge: For evaluating with ROUGE metric
# bert_score: For evaluating with BERTScore
# openai: To interact with OpenAI's API
!pip install rouge --quiet
!pip install bert_score --quiet
!pip install openai --quiet | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
Evaluating Abstractive Summarization - Part 5 | import openai
import os
import re
import pandas as pd
# Python Implementation of the ROUGE Metric
from rouge import Rouge
# BERTScore leverages the pre-trained contextual embeddings from BERT and matches words in candidate and reference sentences by cosine similarity.
from bert_score import BERTScorer
openai.api_key = os.environ.get("OPENAI_API_KEY") | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
Evaluating Abstractive Summarization - Part 6 | For the purposes of this notebook we'll use the example summarization below. Notice that we provide two generated summaries to compare, and a reference human-written summary, which evaluation metrics like ROUGE and BERTScore require. | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
Evaluating Abstractive Summarization - Part 7 | Excerpt (excerpt):
OpenAI's mission is to ensure that artificial general intelligence (AGI) benefits all of humanity. OpenAI will build safe and beneficial AGI directly, but will also consider its mission fulfilled if its work aids others to achieve this outcome. OpenAI follows several key principles for this purpose. First, broadly distributed benefits - any influence over AGI's deployment will be used for the benefit of all, and to avoid harmful uses or undue concentration of power. Second, long-term safety - OpenAI is committed to doing the research to make AGI safe, and to promote the adoption of such research across the AI community. Third, technical leadership - OpenAI aims to be at the forefront of AI capabilities. Fourth, a cooperative orientation - OpenAI actively cooperates with other research and policy institutions, and seeks to create a global community working together to address AGI's global challenges. | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
Evaluating Abstractive Summarization - Part 8 | Summaries:
Reference Summary /ref_summary (human generated)
Eval Summary 1 / eval_summary_1 (system generated)
Eval Summary 2 / eval_summary_2 (system generated) | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
Evaluating Abstractive Summarization - Part 9 | Take a moment to figure out which summary you'd personally prefer and the one that captures OpenAI's mission really well. | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
Evaluating Abstractive Summarization (Chunk 1) | OpenAI's mission is to ensure that artificial general intelligence (AGI) benefits all of humanity. OpenAI will build safe and beneficial AGI directly, but will also consider its mission fulfilled if its work aids others to achieve this outcome. OpenAI follows several key principles for this purpose. First, broadly distributed benefits - any influence over AGI's deployment will be used for the benefit of all, and to avoid harmful uses or undue concentration of power. Second, long-term safety - OpenAI is committed to doing the research to make AGI safe, and to promote the adoption of such research across the AI community. Third, technical leadership - OpenAI aims to be at the forefront of AI capabilities. Fourth, a cooperative orientation - OpenAI actively cooperates with other research and policy institutions, and seeks to create a global community working together to address AGI's global challenges. | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
Evaluating Abstractive Summarization (Chunk 2) | ROUGE, which stands for Recall-Oriented Understudy for Gisting Evaluation, primarily gauges the overlap of words between a generated output and a reference text. It's a prevalent metric for evaluating automatic summarization tasks. Among its variants, ROUGE-L offers insights into the longest contiguous match between system-generated and reference summaries, gauging how well the system retains the original summary's essence. | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
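Using the rouge package installed earlier in this notebook, ROUGE (including ROUGE-L) can be computed like this; the candidate and reference strings are placeholders.

```python
from rouge import Rouge

# Hedged sketch of computing ROUGE scores for a candidate summary against a
# reference summary; swap in your own summaries.
rouge = Rouge()
candidate = "OpenAI aims to ensure AGI benefits all of humanity."
reference = "OpenAI's mission is to ensure that AGI benefits all of humanity."

scores = rouge.get_scores(candidate, reference)
print(scores[0]["rouge-l"])  # recall, precision, and F1 for ROUGE-L
```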