Introduction | 🦜️🔗 Langchain
LangChain is a framework for developing applications powered by language models. It enables applications that are:

Data-aware: connect a language model to other sources of data
Agentic: allow a language model to interact with its environment

The main value props of LangChain are:

Components: abstractions for working with language models, along with a collection of implementations for each abstraction. Components are modular and easy to use, whether you are using the rest of the LangChain framework or not.
Off-the-shelf chains: structured assemblies of components for accomplishing specific higher-level tasks.

Off-the-shelf chains make it easy to get started. For more complex applications and nuanced use cases, components make it easy to customize existing chains or build new ones.

Get started

Here's how to install LangChain, set up your environment, and start building. We recommend following the Quickstart guide to familiarize yourself with the framework by building your first LangChain application.

Note: These docs are for the LangChain Python package. For the JS/TS version, see the separate LangChain.js documentation.

Modules

LangChain provides standard, extendable interfaces and external integrations for the following modules, listed from least to most complex:

Model I/O: Interface with language models
Data connection: Interface with application-specific data
Chains: Construct sequences of calls
Agents: Let chains choose which tools to use given high-level directives
Memory: Persist application state between runs of a chain
Callbacks: Log and stream intermediate steps of any chain

Examples, ecosystem, and resources

Use cases

Walkthroughs and best practices for common end-to-end use cases, such as chatbots, answering questions using sources, analyzing structured data, and much more.
Guides

Learn best practices for developing with LangChain.

Ecosystem

LangChain is part of a rich ecosystem of tools that integrate with the framework and build on top of it. Check out the growing list of integrations and dependent repos.

Additional resources

Our community is full of prolific developers, creative builders, and fantastic teachers. Check out the YouTube tutorials from folks in the community, and the Gallery for a list of awesome LangChain projects, compiled by the folks at KyroLabs.

Support: Join us on GitHub or Discord to ask questions, share feedback, meet other developers building with LangChain, and dream about the future of LLMs.

API reference

Head to the reference section for full documentation of all classes and methods in the LangChain Python package.
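Before moving on, here is a minimal sketch of what composing these modules looks like in code, in the spirit of the Quickstart. The prompt and model settings are illustrative, and an OpenAI API key is assumed to be configured:

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Model I/O: a language model plus a templated prompt
llm = OpenAI(temperature=0.9)
prompt = PromptTemplate.from_template(
    "What is a good name for a company that makes {product}?"
)

# Chains: compose the prompt and the model into one reusable unit
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run("colorful socks"))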
Overview
An introduction to the Pinecone vector database.

Pinecone Overview

Pinecone makes it easy to provide long-term memory for high-performance AI applications. It's a managed, cloud-native vector database with a simple API and no infrastructure hassles. Pinecone serves fresh, filtered query results with low latency at the scale of billions of vectors.

Vector embeddings provide long-term memory for AI.

Applications that involve large language models, generative AI, and semantic search rely on vector embeddings, a type of data that represents semantic information. This information allows AI applications to gain understanding and maintain a long-term memory that they can draw upon when executing complex tasks.
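To make "vector embedding" concrete, here is a minimal sketch of producing one with the 2023-era openai Python package; the model name is one common choice and is an assumption, not something these pages prescribe:

import openai  # assumes OPENAI_API_KEY is set in the environment

# Embed a sentence; the result is a dense vector (a plain list of floats)
resp = openai.Embedding.create(
    model="text-embedding-ada-002",
    input="Pinecone provides long-term memory for AI applications.",
)
vector = resp["data"][0]["embedding"]
print(len(vector))  # 1536 dimensions for this particular model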
Vector databases store and query embeddings quickly and at scale.

Vector databases like Pinecone offer optimized storage and querying capabilities for embeddings. Traditional scalar-based databases can't keep up with the complexity and scale of such data, making it difficult to extract insights and perform real-time analysis. Vector indexes like FAISS lack useful features that are present in any database. Vector databases combine the familiar features of traditional databases with the optimized performance of vector indexes.

Pinecone indexes store records with vector data.

Each record in a Pinecone index contains a unique ID and an array of floats representing a dense vector embedding. Each record may also contain a sparse vector embedding for hybrid search and metadata key-value pairs for filtered queries.

Pinecone queries are fast and fresh.

Pinecone returns low-latency, accurate results for indexes with billions of vectors. High-performance pods return up to 200 queries per second per replica. Queries reflect up-to-the-second updates such as upserts and deletes. Filter by namespaces and metadata or add resources to improve performance.

Upsert and query vector embeddings with the Pinecone API.

Perform CRUD operations and query your vectors using HTTP, Python, or Node.js.

Python

index = pinecone.Index('example-index')
upsert_response = index.upsert(
    vectors=[
        {'id': 'vec1',
         'values': [0.1, 0.2, 0.3],
         'metadata': {'genre': 'drama'},
         'sparse_values': {
             'indices': [10, 45, 16],
             'values': [0.5, 0.5, 0.2]
         }},
        {'id': 'vec2',
         'values': [0.2, 0.3, 0.4],
         'metadata': {'genre': 'action'},
         'sparse_values': {
             'indices': [15, 40, 11],
             'values': [0.4, 0.5, 0.2]
         }}
    ],
    namespace='example-namespace'
)

Create an index (Python, JavaScript, or curl):

Python

pinecone.create_index("example-index", dimension=128, metric="euclidean", pods=4, pod_type="s1.x1")

JavaScript

await pinecone.createIndex({
  name: "example-index",
  dimension: 128,
  metric: "euclidean",
  pods: 4,
  podType: "s1.x1",
});

curl

curl -i -X POST https://controller.YOUR_ENVIRONMENT.pinecone.io/databases \
  -H 'Api-Key: YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "example-index",
    "dimension": 128,
    "metric": "euclidean",
    "pods": 4,
    "pod_type": "p1.x1"
  }'
Query your index for the most similar vectors.

Find the top k most similar vectors, or query by ID.

Python

index.query(
    vector=[0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3],
    top_k=3,
    include_values=True
)
# Returns:
# {'matches': [{'id': 'C',
#               'score': -1.76717265e-07,
#               'values': [0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3]},
#              {'id': 'B',
#               'score': 0.080000028,
#               'values': [0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2]},
#              {'id': 'D',
#               'score': 0.0800001323,
#               'values': [0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4]}]}

JavaScript

const index = pinecone.Index("example-index");
const queryRequest = {
  vector: [0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3],
  topK: 3,
  includeValues: true
};
const queryResponse = await index.query({ queryRequest });
// Returns:
// {'matches': [{'id': 'C',
//               'score': -1.76717265e-07,
//               'values': [0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3]},
//              {'id': 'B',
//               'score': 0.080000028,
//               'values': [0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2]},
//              {'id': 'D',
//               'score': 0.0800001323,
//               'values': [0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4]}]}

curl

curl -i -X POST https://hello-pinecone-YOUR_PROJECT.svc.YOUR_ENVIRONMENT.pinecone.io/query \
  -H 'Api-Key: YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{
    "vector": [0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3],
    "topK": 3,
    "includeValues": true
  }'
Get started

Go to the quickstart guide to get a production-ready vector search service up and running in minutes.
Foundational | 🦜️🔗 Langchain

Foundational

📄️ LLM: An LLMChain is a simple chain that adds some functionality around language models. It is used widely throughout LangChain, including in other chains and agents.

📄️ Router: This notebook demonstrates how to use the RouterChain paradigm to create a chain that dynamically selects the next chain to use for a given input.

📄️ Sequential: The next step after calling a language model is to make a series of calls to a language model. This is particularly useful when you want to take the output from one call and use it as the input to another.

📄️ Transformation: This notebook showcases using a generic transformation chain.
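As a concrete illustration of the sequential pattern, here is a minimal sketch that feeds the output of one LLMChain into another using SimpleSequentialChain; the prompts are invented for illustration:

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

llm = OpenAI(temperature=0.7)

name_chain = LLMChain(llm=llm, prompt=PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
))
slogan_chain = LLMChain(llm=llm, prompt=PromptTemplate(
    input_variables=["company"],
    template="Write a one-line slogan for the company {company}.",
))

# The output of name_chain becomes the input of slogan_chain.
overall = SimpleSequentialChain(chains=[name_chain, slogan_chain], verbose=True)
print(overall.run("colorful socks"))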
Popular | 🦜️🔗 Langchain

Popular

📄️ API chains: APIChain enables using LLMs to interact with APIs to retrieve relevant information. Construct the chain by providing a question relevant to the provided API documentation.

📄️ Retrieval QA: This example showcases question answering over an index.

📄️ Conversational Retrieval QA: The ConversationalRetrievalQA chain builds on RetrievalQAChain to provide a chat history component.

📄️ SQL: This example demonstrates the use of the SQLDatabaseChain for answering questions over a SQL database.

📄️ Summarization: A summarization chain can be used to summarize multiple documents. One way is to input multiple smaller documents, after they have been divided into chunks, and operate over them with a MapReduceDocumentsChain. You can also choose instead for the chain that does summarization to be a StuffDocumentsChain or a RefineDocumentsChain.
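To ground the Retrieval QA entry, here is a minimal end-to-end sketch, assuming the Chroma and OpenAI integrations are installed; the sample texts are invented:

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA

texts = [
    "LangChain provides a standard interface for chains.",
    "Pinecone is a managed, cloud-native vector database.",
]
docsearch = Chroma.from_texts(texts, OpenAIEmbeddings())

qa = RetrievalQA.from_chain_type(
    llm=OpenAI(),
    chain_type="stuff",  # stuff all retrieved docs into a single prompt
    retriever=docsearch.as_retriever(),
)
print(qa.run("What does Pinecone do?"))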
Additional | 🦜️🔗 Langchain

Additional

📄️ Analyze Document: The AnalyzeDocumentChain can be used as an end-to-end chain. This chain takes in a single document, splits it up, and then runs it through a CombineDocumentsChain.

📄️ Self-critique chain with constitutional AI: The ConstitutionalChain is a chain that ensures the output of a language model adheres to a predefined set of constitutional principles. By incorporating specific rules and guidelines, the ConstitutionalChain filters and modifies the generated content to align with these principles, thus providing more controlled, ethical, and contextually appropriate responses. This mechanism helps maintain the integrity of the output while minimizing the risk of generating content that may violate guidelines, be offensive, or deviate from the desired context.

📄️ Extraction: The extraction chain uses the OpenAI functions parameter to specify a schema to extract entities from a document. This helps us make sure that the model outputs exactly the schema of entities and properties that we want, with their appropriate types.
📄️ FLARE: This notebook is an implementation of Forward-Looking Active REtrieval augmented generation (FLARE).

📄️ Graph DB QA chain: This notebook shows how to use LLMs to provide a natural language interface to a graph database you can query with the Cypher query language.

📄️ KuzuQAChain: This notebook shows how to use LLMs to provide a natural language interface to the Kùzu database.

📄️ NebulaGraphQAChain: This notebook shows how to use LLMs to provide a natural language interface to the NebulaGraph database.

📄️ Graph QA: This notebook goes over how to do question answering over a graph data structure.

📄️ Hypothetical Document Embeddings: This notebook goes over how to use Hypothetical Document Embeddings (HyDE), as described in this paper.

📄️ Bash chain: This notebook showcases using LLMs and a bash process to perform simple filesystem commands.

📄️ Self-checking chain: This notebook showcases how to use LLMCheckerChain.

📄️ Math chain: This notebook showcases using LLMs and Python REPLs to do complex word math problems.

📄️ HTTP request chain: Using the request library to get HTML results from a URL, and then an LLM to parse the results.

📄️ Summarization checker chain: This notebook shows some examples of LLMSummarizationCheckerChain in use with different types of texts. It has a few distinct differences from the LLMCheckerChain, in that it doesn't have any assumptions about the format of the input text (or summary).

📄️ Moderation: This notebook walks through examples of how to use a moderation chain, and several common ways of doing so. Moderation chains are useful for detecting text that could be hateful, violent, etc. This can be useful to apply both to user input and to the output of a language model.
Some API providers, like OpenAI, specifically prohibit you, or your end users, from generating some types of harmful content. To comply with this (and to generally prevent your application from being harmful), you may often want to append a moderation chain to any LLMChain, to make sure any output the LLM generates is not harmful.

📄️ Dynamically selecting from multiple prompts: This notebook demonstrates how to use the RouterChain paradigm to create a chain that dynamically selects the prompt to use for a given input. Specifically, we show how to use the MultiPromptChain to create a question-answering chain that selects the prompt which is most relevant for a given question, and then answers the question using that prompt.

📄️ Dynamically selecting from multiple retrievers: This notebook demonstrates how to use the RouterChain paradigm to create a chain that dynamically selects which retrieval system to use. Specifically, we show how to use the MultiRetrievalQAChain to create a question-answering chain that selects the retrieval QA chain which is most relevant for a given question, and then answers the question using it.

📄️ Retrieval QA using OpenAI functions: OpenAI functions allow for structuring of response output. This is often useful in question answering when you want to not only get the final answer but also supporting evidence, citations, etc.

📄️ OpenAPI chain: This notebook shows an example of using an OpenAPI chain to call an endpoint in natural language, and get back a response in natural language.

📄️ OpenAPI calls with OpenAI functions: In this notebook we'll show how to create a chain that automatically makes calls to an API based only on an OpenAPI spec. Under the hood, we're parsing the OpenAPI spec into a JSON schema that the OpenAI functions API can handle.
This allows ChatGPT to automatically select and populate the relevant API call to make for any user input. Using the output of ChatGPT, we then make the actual API call and return the result.

📄️ Program-aided language model (PAL) chain: Implements Program-Aided Language Models, as in https://arxiv.org/pdf/2211.10435.pdf.

📄️ Question-Answering Citations: This notebook shows how to use the OpenAI functions ability to extract citations from text.

📄️ Document QA: Here we walk through how to use LangChain for question answering over a list of documents. Under the hood we'll be using our Document chains.

📄️ Tagging: The tagging chain uses the OpenAI functions parameter to specify a schema to tag a document with. This helps us make sure that the model outputs exactly the tags that we want, with their appropriate types.

📄️ Vector store-augmented text generation: This notebook walks through how to use LangChain for text generation over a vector index. This is useful if we want to generate text that is able to draw from a large body of custom text, for example, generating blog posts that have an understanding of previous blog posts written, or product tutorials that can refer to product documentation.
How to | 🦜️🔗 Langchain

How to

📄️ Async API: LangChain provides async support for Chains by leveraging the asyncio library.

📄️ Different call methods: All classes inherited from Chain offer a few ways of running chain logic. The most direct one is by using __call__.

📄️ Custom chain: To implement your own custom chain you can subclass Chain and implement the required methods, as sketched below.

📄️ Debugging chains: It can be hard to debug a Chain object solely from its output, as most Chain objects involve a fair amount of input prompt preprocessing and LLM output post-processing.

📄️ Loading from LangChainHub: This notebook covers how to load chains from LangChainHub.

📄️ Adding memory (state): Chains can be initialized with a Memory object, which will persist data across calls to the chain. This makes a Chain stateful.

📄️ Serialization: This notebook covers how to serialize chains to and from disk. The serialization format we use is json or yaml. Currently, only some chains support this type of serialization. We will grow the number of supported chains over time.
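The custom chain entry above is cut off in this capture; as a hedged sketch of what subclassing Chain looks like, patterned on the docs' own ConcatenateChain example (the names and behavior are illustrative):

from typing import Dict, List

from langchain.chains.base import Chain

class ConcatenateChain(Chain):
    """Toy custom chain: concatenates the outputs of two sub-chains."""

    chain_1: Chain
    chain_2: Chain

    @property
    def input_keys(self) -> List[str]:
        # Union of the input keys of the two sub-chains.
        return list(set(self.chain_1.input_keys) | set(self.chain_2.input_keys))

    @property
    def output_keys(self) -> List[str]:
        return ["concat_output"]

    def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
        output_1 = self.chain_1.run(inputs)
        output_2 = self.chain_2.run(inputs)
        return {"concat_output": output_1 + output_2}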
Chains | 🦜️🔗 Langchain

Chains
Using an LLM in isolation is fine for simple applications, but more complex applications require chaining LLMs - either with each other or with other components.

LangChain provides the Chain interface for such "chained" applications. We define a Chain very generically as a sequence of calls to components, which can include other chains. The base interface is simple:

class Chain(BaseModel, ABC):
    """Base interface that all chains should implement."""

    memory: BaseMemory
    callbacks: Callbacks

    def __call__(
        self,
        inputs: Any,
        return_only_outputs: bool = False,
        callbacks: Callbacks = None,
    ) -> Dict[str, Any]:
        ...

This idea of composing components together in a chain is simple but powerful. It drastically simplifies, and makes more modular, the implementation of complex applications, which in turn makes it much easier to debug, maintain, and improve your applications.

For more specifics check out:

How-to, for walkthroughs of different chain features
Foundational, to get acquainted with core building block chains
Documents, to learn how to incorporate documents into chains
Popular chains, for the most common use cases
Additional, to see some of the more advanced chains and integrations that you can use out of the box

Why do we need chains?

Chains allow us to combine multiple components together to create a single, coherent application. For example, we can create a chain that takes user input, formats it with a PromptTemplate, and then passes the formatted response to an LLM. We can build more complex chains by combining multiple chains together, or by combining chains with other components.

Get started

Using LLMChain

The LLMChain is the most basic building block chain. It takes in a prompt template, formats it with the user input, and returns the response from an LLM.
To use the LLMChain, first create a prompt template.

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)

We can now create a very simple chain that will take user input, format the prompt with it, and then send it to the LLM.

from langchain.chains import LLMChain

chain = LLMChain(llm=llm, prompt=prompt)

# Run the chain only specifying the input variable.
print(chain.run("colorful socks"))
# -> Colorful Toes Co.

If there are multiple variables, you can input them all at once using a dictionary.

prompt = PromptTemplate(
    input_variables=["company", "product"],
    template="What is a good name for {company} that makes {product}?",
)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run({
    'company': "ABC Startup",
    'product': "colorful socks"
}))
# -> Socktopia Colourful Creations.

You can use a chat model in an LLMChain as well:

from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
)

human_message_prompt = HumanMessagePromptTemplate(
    prompt=PromptTemplate(
        template="What is a good name for a company that makes {product}?",
        input_variables=["product"],
    )
)
chat_prompt_template = ChatPromptTemplate.from_messages([human_message_prompt])
chat = ChatOpenAI(temperature=0.9)
chain = LLMChain(llm=chat, prompt=chat_prompt_template)
print(chain.run("colorful socks"))
# -> Rainbow Socks Co.
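The base interface above declares a memory field; here is a hedged sketch of the "adding memory" feature from the how-to list, using ConversationChain and ConversationBufferMemory (the exchange shown is illustrative):

from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

# The memory object persists state across calls, making the chain stateful.
conversation = ConversationChain(
    llm=OpenAI(temperature=0),
    memory=ConversationBufferMemory(),
)
conversation.run("Hi, my name is Bob.")
print(conversation.run("What is my name?"))  # the buffer lets the model recall "Bob"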
Documents | 🦜️🔗 Langchain
Documents

These are the core chains for working with documents. They are useful for summarizing documents, answering questions over documents, extracting information from documents, and more.

These chains all implement a common interface:

class BaseCombineDocumentsChain(Chain, ABC):
    """Base interface for chains combining documents."""

    @abstractmethod
    def combine_docs(self, docs: List[Document], **kwargs: Any) -> Tuple[str, dict]:
        """Combine documents into a single string."""

📄️ Stuff: The stuff documents chain ("stuff" as in "to stuff" or "to fill") is the most straightforward of the document chains. It takes a list of documents, inserts them all into a prompt, and passes that prompt to an LLM.

📄️ Refine: The refine documents chain constructs a response by looping over the input documents and iteratively updating its answer. For each document, it passes all non-document inputs, the current document, and the latest intermediate answer to an LLM chain to get a new answer.

📄️ Map reduce: The map reduce documents chain first applies an LLM chain to each document individually (the Map step), treating the chain output as a new document. It then passes all the new documents to a separate combine documents chain to get a single output (the Reduce step). It can optionally first compress, or collapse, the mapped documents to make sure that they fit in the combine documents chain (which will often pass them to an LLM). This compression step is performed recursively if necessary.
📄️ Map re-rank: The map re-rank documents chain runs an initial prompt on each document that not only tries to complete a task but also gives a score for how certain it is in its answer. The highest-scoring response is returned.
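A hedged sketch of the map reduce pattern in practice, using load_summarize_chain as one common entry point (the documents are invented; this is not code from these pages):

from langchain.llms import OpenAI
from langchain.chains.summarize import load_summarize_chain
from langchain.docstore.document import Document

docs = [
    Document(page_content="First article: LangChain composes LLM calls into chains."),
    Document(page_content="Second article: Vector stores give LLM apps long-term memory."),
]

# map_reduce: summarize each document individually, then combine the summaries.
chain = load_summarize_chain(OpenAI(temperature=0), chain_type="map_reduce")
print(chain.run(docs))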
LLMs | 🦜️🔗 Langchain
LLMs

Large Language Models (LLMs) are a core component of LangChain.
LangChain does not serve its own LLMs, but rather provides a standard interface for interacting with many different LLMs.

For more detailed documentation check out our:

How-to guides: Walkthroughs of core functionality, like streaming, async, etc.
Integrations: How to use different LLM providers (OpenAI, Anthropic, etc.)

Get started

There are lots of LLM providers (OpenAI, Cohere, Hugging Face, etc.) - the LLM class is designed to provide a standard interface for all of them.

In this walkthrough we'll work with an OpenAI LLM wrapper, although the functionalities highlighted are generic for all LLM types.

Setup

To start we'll need to install the OpenAI Python package:

pip install openai

Accessing the API requires an API key, which you can get by creating an account with OpenAI. Once we have a key we'll want to set it as an environment variable by running:

export OPENAI_API_KEY="..."

If you'd prefer not to set an environment variable, you can pass the key in directly via the openai_api_key named parameter when initiating the OpenAI LLM class:

from langchain.llms import OpenAI

llm = OpenAI(openai_api_key="...")

Otherwise you can initialize without any params:

from langchain.llms import OpenAI

llm = OpenAI()

__call__: string in -> string out

The simplest way to use an LLM is as a callable: pass in a string, get a string completion.

llm("Tell me a joke")
# -> 'Why did the chicken cross the road?\n\nTo get to the other side.'

generate: batch calls, richer outputs

generate lets you call the model with a list of strings, getting back a more complete response than just the text. This complete response can include things like multiple top responses and other LLM provider-specific information:
llm_result = llm.generate(["Tell me a joke", "Tell me a poem"] * 15)

len(llm_result.generations)
# -> 30

llm_result.generations[0]
# [Generation(text='\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'),
#  Generation(text='\n\nWhy did the chicken cross the road?\n\nTo get to the other side.')]

llm_result.generations[-1]
# [Generation(text="\n\nWhat if love neverspeech\n\nWhat if love never ended\n\nWhat if love was only a feeling\n\nI'll never know this love\n\nIt's not a feeling\n\nBut it's what we have for each other\n\nWe just know that love is something strong\n\nAnd we can't help but be happy\n\nWe just feel what love is for us\n\nAnd we love each other with all our heart\n\nWe just don't know how\n\nHow it will go\n\nBut we know that love is something strong\n\nAnd we'll always have each other\n\nIn our lives."),
#  Generation(text='\n\nOnce upon a time\n\nThere was a love so pure and true\n\nIt lasted for centuries\n\nAnd never became stale or dry\n\nIt was moving and alive\n\nAnd the heart of the love-ick\n\nIs still beating strong and true.')]

You can also access provider-specific information that is returned. This information is NOT standardized across providers.

llm_result.llm_output
# {'token_usage': {'completion_tokens': 3903,
#                  'total_tokens': 4023,
#                  'prompt_tokens': 120}}
Chat models | 🦜️🔗 Langchain
Chat models

Chat models are a variation on language models. While chat models use language models under the hood, the interface they expose is a bit different. Rather than expose a "text in, text out" API, they expose an interface where "chat messages" are the inputs and outputs.

Chat model APIs are fairly new, so we are still figuring out the correct abstractions.

The following sections of documentation are provided:

How-to guides: Walkthroughs of core functionality, like streaming, creating chat prompts, etc.
Integrations: How to use different chat model providers (OpenAI, Anthropic, etc.)

Get started

Setup

To start we'll need to install the OpenAI Python package:

pip install openai

Accessing the API requires an API key, which you can get by creating an account with OpenAI. Once we have a key we'll want to set it as an environment variable by running:

export OPENAI_API_KEY="..."

If you'd prefer not to set an environment variable, you can pass the key in directly via the openai_api_key named parameter when initiating the ChatOpenAI class:

from langchain.chat_models import ChatOpenAI

chat = ChatOpenAI(openai_api_key="...")

Otherwise you can initialize without any params:

from langchain.chat_models import ChatOpenAI

chat = ChatOpenAI()

Messages

The chat model interface is based around messages rather than raw text.
The types of messages currently supported in LangChain are AIMessage, HumanMessage, SystemMessage, and ChatMessage - ChatMessage takes in an arbitrary role parameter. Most of the time, you'll just be dealing with HumanMessage, AIMessage, and SystemMessage.

__call__: messages in -> message out

You can get chat completions by passing one or more messages to the chat model. The response will be a message.

from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage
)

chat([HumanMessage(content="Translate this sentence from English to French: I love programming.")])
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})

OpenAI's chat model supports multiple messages as input. Here is an example of sending a system and user message to the chat model:

messages = [
    SystemMessage(content="You are a helpful assistant that translates English to French."),
    HumanMessage(content="I love programming.")
]
chat(messages)
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})

generate: batch calls, richer outputs

You can go one step further and generate completions for multiple sets of messages using generate. This returns an LLMResult with an additional message parameter.

batch_messages = [
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="I love programming.")
    ],
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="I love artificial intelligence.")
    ],
]
result = chat.generate(batch_messages)
result
# LLMResult(generations=[
#     [ChatGeneration(text="J'aime programmer.",
#                     generation_info=None,
#                     message=AIMessage(content="J'aime programmer.", additional_kwargs={}))],
#     [ChatGeneration(text="J'aime l'intelligence artificielle.",
#                     generation_info=None,
#                     message=AIMessage(content="J'aime l'intelligence artificielle.", additional_kwargs={}))]],
#     llm_output={'token_usage': {'prompt_tokens': 57, 'completion_tokens': 20, 'total_tokens': 77}})

You can recover things like token usage from this LLMResult:

result.llm_output
# {'token_usage': {'prompt_tokens': 57, 'completion_tokens': 20, 'total_tokens': 77}}
Language models | 🦜️🔗 Langchain
Language models

LangChain provides interfaces and integrations for two types of models:

LLMs: Models that take a text string as input and return a text string
Chat models: Models that are backed by a language model but take a list of Chat Messages as input and return a Chat Message

LLMs vs Chat Models

LLMs and Chat Models are subtly but importantly different. LLMs in LangChain refer to pure text completion models. The APIs they wrap take a string prompt as input and output a string completion. OpenAI's GPT-3 is implemented as an LLM. Chat models are often backed by LLMs but tuned specifically for having conversations. And, crucially, their provider APIs expose a different interface than pure text completion models. Instead of a single string, they take a list of chat messages as input. Usually these messages are labeled with the speaker (usually one of "System", "AI", and "Human"). And they return an "AI" chat message as output. GPT-4 and Anthropic's Claude are both implemented as Chat Models.

To make it possible to swap LLMs and Chat Models, both implement the Base Language Model interface. This exposes the common methods "predict", which takes a string and returns a string, and "predict messages", which takes messages and returns a message.
If you are using a specific model it's recommended you use the methods specific to that model class (i.e., "predict" for LLMs and "predict messages" for Chat Models), but if you're creating an application that should work with different types of models, the shared interface can be helpful.
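A minimal sketch of that shared interface, assuming OpenAI credentials are configured; outputs will vary:

from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

llm = OpenAI()
chat = ChatOpenAI()

text = "What would be a good company name for a company that makes colorful socks?"

# Both model types implement the Base Language Model interface.
print(llm.predict(text))    # string in -> string out
print(chat.predict(text))   # same method on a chat model
print(chat.predict_messages([HumanMessage(content=text)]))  # messages in -> message out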
Output parsers | 🦜️🔗 Langchain
Output parsers

Language models output text. But many times you may want to get more structured information than just text back. This is where output parsers come in.

Output parsers are classes that help structure language model responses. There are two main methods an output parser must implement:

"Get format instructions": A method which returns a string containing instructions for how the output of a language model should be formatted.
"Parse": A method which takes in a string (assumed to be the response from a language model) and parses it into some structure.

And then one optional one:

"Parse with prompt": A method which takes in a string (assumed to be the response from a language model) and a prompt (assumed to be the prompt that generated such a response) and parses it into some structure. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.

Get started

Below we go over the main type of output parser, the PydanticOutputParser.

from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field, validator
from typing import List

model_name = 'text-davinci-003'
temperature = 0.0
model = OpenAI(model_name=model_name, temperature=temperature)
# Define your desired data structure.
class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")

    # You can add custom validation logic easily with Pydantic.
    @validator('setup')
    def question_ends_with_question_mark(cls, field):
        if field[-1] != '?':
            raise ValueError("Badly formed question!")
        return field

# Set up a parser + inject instructions into the prompt template.
parser = PydanticOutputParser(pydantic_object=Joke)

prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()}
)

# And a query intended to prompt a language model to populate the data structure.
joke_query = "Tell me a joke."
_input = prompt.format_prompt(query=joke_query)
output = model(_input.to_string())
parser.parse(output)
# -> Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')
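The sidebar also lists an auto-fixing parser; as a hedged sketch, one way to wrap the parser defined above so malformed model output is sent back to an LLM for repair (the usage here is illustrative):

from langchain.output_parsers import OutputFixingParser
from langchain.chat_models import ChatOpenAI

# Wraps the Joke parser from above; on a parse failure, the bad output and
# the format instructions are sent to the LLM, which rewrites it to conform.
fixing_parser = OutputFixingParser.from_llm(parser=parser, llm=ChatOpenAI())
fixing_parser.parse("{'setup': 'Why did the chicken cross the road?'}")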
Prompt templates | 🦜️🔗 Langchain
Prompt templates

Language models take text as input - that text is commonly referred to as a prompt. Typically this is not simply a hardcoded string but rather a combination of a template, some examples, and user input.
LangChain provides several classes and functions to make constructing and working with prompts easy.

What is a prompt template?

A prompt template refers to a reproducible way to generate a prompt. It contains a text string ("the template") that can take in a set of parameters from the end user and generate a prompt.

A prompt template can contain:

instructions to the language model,
a set of few-shot examples to help the language model generate a better response,
a question to the language model.

Here's the simplest example:

from langchain import PromptTemplate

template = """You are a naming consultant for new companies.
What is a good name for a company that makes {product}?"""

prompt = PromptTemplate.from_template(template)
prompt.format(product="colorful socks")
# -> You are a naming consultant for new companies.
#    What is a good name for a company that makes colorful socks?

Create a prompt template

You can create simple hardcoded prompts using the PromptTemplate class. Prompt templates can take any number of input variables, and can be formatted to generate a prompt.

from langchain import PromptTemplate

# An example prompt with no input variables
no_input_prompt = PromptTemplate(input_variables=[], template="Tell me a joke.")
no_input_prompt.format()
# -> "Tell me a joke."

# An example prompt with one input variable
one_input_prompt = PromptTemplate(input_variables=["adjective"], template="Tell me a {adjective} joke.")
one_input_prompt.format(adjective="funny")
# -> "Tell me a funny joke."

# An example prompt with multiple input variables
multiple_input_prompt = PromptTemplate(
    input_variables=["adjective", "content"],
    template="Tell me a {adjective} joke about {content}."
)
multiple_input_prompt.format(adjective="funny", content="chickens")
# -> "Tell me a funny joke about chickens."
If you do not wish to specify input_variables manually, you can also create a PromptTemplate using the from_template class method. LangChain will automatically infer the input_variables based on the template passed.

template = "Tell me a {adjective} joke about {content}."

prompt_template = PromptTemplate.from_template(template)
prompt_template.input_variables
# -> ['adjective', 'content']
prompt_template.format(adjective="funny", content="chickens")
# -> Tell me a funny joke about chickens.

You can create custom prompt templates that format the prompt in any way you want. For more information, see Custom Prompt Templates.

Chat prompt template

Chat Models take a list of chat messages as input - this list is commonly referred to as a prompt.
These chat messages differ from raw strings (which you would pass into an LLM model) in that every message is associated with a role. For example, in the OpenAI Chat Completions API, a chat message can be associated with the AI, human, or system role. The model is supposed to follow instructions from the system message more closely.

LangChain provides several prompt templates to make constructing and working with prompts easy. You are encouraged to use these chat-related prompt templates instead of PromptTemplate when querying chat models, to fully exploit the potential of the underlying chat model.

from langchain.prompts import (
    ChatPromptTemplate,
    PromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage
)

To create a message template associated with a role, you use MessagePromptTemplate. For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:

template = "You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)

human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)

If you wanted to construct the MessagePromptTemplate more directly, you could create a PromptTemplate outside and then pass it in, e.g.:

prompt = PromptTemplate(
    template="You are a helpful assistant that translates {input_language} to {output_language}.",
    input_variables=["input_language", "output_language"],
)
system_message_prompt_2 = SystemMessagePromptTemplate(prompt=prompt)

assert system_message_prompt == system_message_prompt_2

After that, you can build a ChatPromptTemplate from one or more MessagePromptTemplates.
You can use ChatPromptTemplate's format_prompt method - this returns a PromptValue, which you can convert to a string or Message objects, depending on whether you want to use the formatted value as input to an LLM or chat model.

chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])

# get a chat completion from the formatted messages
chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages()
# [SystemMessage(content='You are a helpful assistant that translates English to French.', additional_kwargs={}),
#  HumanMessage(content='I love programming.', additional_kwargs={})]
Example selectors | 🦜️🔗 Langchain

Example selectors

If you have a large number of examples, you may need to select which ones to include in the prompt. The Example Selector is the class responsible for doing so. The base interface is defined as below:

class BaseExampleSelector(ABC):
    """Interface for selecting examples to include in prompts."""

    @abstractmethod
    def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:
        """Select which examples to use based on the inputs."""

The only method it needs to expose is a select_examples method. This takes in the input variables and then returns a list of examples. It is up to each specific implementation as to how those examples are selected. Let's take a look at one below.
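As a hedged illustration of one concrete selector (the select-by-length strategy from the sidebar), assuming the 2023-era langchain API; the example pairs are invented:

from langchain.prompts import FewShotPromptTemplate, PromptTemplate
from langchain.prompts.example_selector import LengthBasedExampleSelector

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "energetic", "output": "lethargic"},
]
example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)

# Includes as many examples as fit within max_length.
selector = LengthBasedExampleSelector(
    examples=examples, example_prompt=example_prompt, max_length=25
)
prompt = FewShotPromptTemplate(
    example_selector=selector,
    example_prompt=example_prompt,
    suffix="Input: {adjective}\nOutput:",
    input_variables=["adjective"],
)
print(prompt.format(adjective="big"))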
Prompts | 🦜️🔗 Langchain

Prompts

The new way of programming models is through prompts. A prompt refers to the input to the model. This input is often constructed from multiple components. LangChain provides several classes and functions to make constructing and working with prompts easy.

Prompt templates: Parametrize model inputs
Example selectors: Dynamically select examples to include in prompts

Model I/O | 🦜️🔗 Langchain

Model I/O

The core element of any language model application is... the model. LangChain gives you the building blocks to interface with any language model.

Prompts: Templatize, dynamically select, and manage model inputs
Language models: Make calls to language models through common interfaces
Output parsers: Extract information from model outputs
Tools | 🦜️🔗 Langchain

Tools

Tools are interfaces that an agent can use to interact with the world.

Get started

Tools are functions that agents can use to interact with the world. These tools can be generic utilities (e.g. search), other chains, or even other agents.

Currently, tools can be loaded with the following snippet:

from langchain.agents import load_tools

tool_names = [...]
tools = load_tools(tool_names)

Some tools (e.g. chains, agents) may require a base LLM to use to initialize them. In that case, you can pass in an LLM as well:

from langchain.agents import load_tools

tool_names = [...]
llm = ...
tools = load_tools(tool_names, llm=llm)

Agent types | 🦜️🔗 Langchain
Agent types

Action agents

Agents use an LLM to determine which actions to take and in what order. An action can either be using a tool and observing its output, or returning a response to the user. Here are the agents available in LangChain.

Zero-shot ReAct

This agent uses the ReAct framework to determine which tool to use based solely on the tool's description. Any number of tools can be provided. This agent requires that a description is provided for each tool.

Note: This is the most general-purpose action agent.

Structured input ReAct

The structured tool chat agent is capable of using multi-input tools. Older agents are configured to specify an action input as a single string, but this agent can use a tool's argument schema to create a structured action input. This is useful for more complex tool usage, like precisely navigating around a browser.

OpenAI Functions

Certain OpenAI models (like gpt-3.5-turbo-0613 and gpt-4-0613) have been explicitly fine-tuned to detect when a function should be called and respond with the inputs that should be passed to the function. The OpenAI Functions Agent is designed to work with these models.

Conversational

This agent is designed to be used in conversational settings. The prompt is designed to make the agent helpful and conversational.
It uses the ReAct framework to decide which tool to use, and uses memory to remember the previous conversation interactions.

Self ask with search

This agent utilizes a single tool that should be named Intermediate Answer. This tool should be able to look up factual answers to questions. This agent is equivalent to the original self-ask-with-search paper, where a Google search API was provided as the tool.

ReAct document store

This agent uses the ReAct framework to interact with a docstore. Two tools must be provided: a Search tool and a Lookup tool (they must be named exactly as so). The Search tool should search for a document, while the Lookup tool should look up a term in the most recently found document. This agent is equivalent to the original ReAct paper, specifically the Wikipedia example.

Plan-and-execute agents

Plan-and-execute agents accomplish an objective by first planning what to do, then executing the sub-tasks. This idea is largely inspired by BabyAGI and the "Plan-and-Solve" paper.
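As a hedged sketch of spinning up the zero-shot ReAct agent described above (the tool choice and question are illustrative):

from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
# llm-math is a chain-backed tool, so it needs an LLM passed in.
tools = load_tools(["llm-math"], llm=llm)

agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run("What is 3 raised to the 0.43 power?")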
Toolkits | 🦜️🔗 Langchain
Toolkits

Toolkits are collections of tools that are designed to be used together for specific tasks and have convenience loading methods.

📄️ Azure Cognitive Services Toolkit: This toolkit is used to interact with the Azure Cognitive Services API to achieve some multimodal capabilities.

📄️ CSV Agent: This notebook shows how to use agents to interact with a CSV. It is mostly optimized for question answering.

📄️ Document Comparison: This notebook shows how to use an agent to compare two documents.

📄️ Gmail Toolkit: This notebook walks through connecting LangChain to the Gmail API.

📄️ Jira: This notebook goes over how to use the Jira tool.

📄️ JSON Agent: This notebook showcases an agent designed to interact with large JSON/dict objects. This is useful when you want to answer questions about a JSON blob that's too large to fit in the context window of an LLM. The agent is able to iteratively explore the blob to find what it needs to answer the user's question.

📄️ Office365 Toolkit: This notebook walks through connecting LangChain to Office365 email and calendar.

📄️ OpenAPI agents: We can construct agents to consume arbitrary APIs, here APIs conformant to the OpenAPI/Swagger specification.
📄️ Natural Language APIs: Natural Language API Toolkits (NLAToolkits) permit LangChain Agents to efficiently plan and combine calls across endpoints. This notebook demonstrates a sample composition of the Speak, Klarna, and Spoonacular APIs.
📄️ Pandas Dataframe Agent: This notebook shows how to use agents to interact with a pandas dataframe. It is mostly optimized for question answering.
📄️ PlayWright Browser Toolkit: This toolkit is used to interact with the browser. While other tools (like the Requests tools) are fine for static sites, browser toolkits let your agent navigate the web and interact with dynamically rendered sites. Some tools bundled within the Browser toolkit include:
📄️ PowerBI Dataset Agent: This notebook showcases an agent designed to interact with a Power BI Dataset. The agent is designed to answer more general questions about a dataset, as well as recover from errors.
📄️ Python Agent: This notebook showcases an agent designed to write and execute Python code to answer a question.
📄️ Spark Dataframe Agent: This notebook shows how to use agents to interact with a Spark dataframe and Spark Connect. It is mostly optimized for question answering.
📄️ Spark SQL Agent: This notebook shows how to use agents to interact with Spark SQL. Similar to the SQL Database Agent, it is designed to address general inquiries about Spark SQL and facilitate error recovery.
📄️ SQL Database Agent: This notebook showcases an agent designed to interact with a SQL database. The agent builds off of SQLDatabaseChain and is designed to answer more general questions about a database, as well as recover from errors.
📄️ Vectorstore Agent: This notebook showcases an agent designed to retrieve information from one or more vectorstores, either with or without sources.
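Most of these toolkits follow the same pattern: a toolkit object bundles the related tools, and a create_*_agent helper builds an agent around them. Here is a hedged sketch for the SQL Database Agent; the Chinook.db SQLite file is a stand-in assumption, and the exact import paths may vary between releases.

from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.sql_database import SQLDatabase
from langchain.llms import OpenAI

# Point the toolkit at a database; Chinook.db is a placeholder sample file
db = SQLDatabase.from_uri("sqlite:///./Chinook.db")
llm = OpenAI(temperature=0)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)

agent_executor = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
agent_executor.run("How many tables are in the database?")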
Agents | 🦜️🔗 Langchain
Some applications require a flexible chain of calls to LLMs and other tools based on user input. The Agent interface provides the flexibility for such applications. An agent has access to a suite of tools and determines which ones to use depending on the user input. Agents can use multiple tools, and can use the output of one tool as the input to the next.

There are two main types of agents:

Action agents: at each timestep, decide on the next action using the outputs of all previous actions
Plan-and-execute agents: decide on the full sequence of actions up front, then execute them all without updating the plan

Action agents are suitable for small tasks, while plan-and-execute agents are better for complex or long-running tasks that require maintaining long-term objectives and focus. Often the best approach is to combine the dynamism of an action agent with the planning abilities of a plan-and-execute agent by letting the plan-and-execute agent use action agents to execute plans.

For a full list of agent types see agent types. Additional abstractions involved in agents are:

Tools: the actions an agent can take. The tools you give an agent depend heavily on what you want the agent to do
Toolkits: wrappers around collections of tools that can be used together for a specific use case.
For example, in order for an agent to interact with a SQL database, it will likely need one tool to execute queries and another to inspect tables.

Action agents

At a high level, an action agent:

1. Receives user input
2. Decides which tool, if any, to use, and the tool input
3. Calls the tool and records the output (also known as an "observation")
4. Decides the next step using the history of tools, tool inputs, and observations
5. Repeats steps 3-4 until it determines it can respond directly to the user

Action agents are wrapped in agent executors, which are responsible for calling the agent, getting back an action and action input, calling the tool that the action references with the generated input, getting the output of the tool, and then passing all that information back into the agent to get the next action it should take.

Although an agent can be constructed in many ways, it typically involves these components:
Prompt template: Responsible for taking the user input and previous steps and constructing a prompt to send to the language model
Language model: Takes the prompt with user input and action history and decides what to do next
Output parser: Takes the output of the language model and parses it into the next action or a final answer

Plan-and-execute agents

At a high level, a plan-and-execute agent:

1. Receives user input
2. Plans the full sequence of steps to take
3. Executes the steps in order, passing the outputs of past steps as inputs to future steps

The most typical implementation is to have the planner be a language model, and the executor be an action agent. Read more here.

Get started

from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI

First, let's load the language model we're going to use to control the agent.

llm = OpenAI(temperature=0)

Next, let's load some tools to use. Note that the llm-math tool uses an LLM, so we need to pass that in.

tools = load_tools(["serpapi", "llm-math"], llm=llm)

Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.

agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)

Now let's test it out!

agent.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")

> Entering new AgentExecutor chain...
 I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.
Action: Search
Action Input: "Leo DiCaprio girlfriend"
Observation: Camila Morrone
Thought: I need to find out Camila Morrone's age
Action: Search
Action Input: "Camila Morrone age"
Observation: 25 years
Thought: I need to calculate 25 raised to the 0.43 power
Action: Calculator
Action Input: 25^0.43
Observation: Answer: 3.991298452658078
Thought: I now know the final answer
Final Answer: Camila Morrone is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 3.991298452658078.

> Finished chain.

"Camila Morrone is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 3.991298452658078."
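To make the plan-and-execute flow described above similarly concrete, here is a hedged sketch using the experimental plan-and-execute helpers available around this time; the import path and the helper names (load_chat_planner, load_agent_executor) are assumptions that may differ between releases.

from langchain.experimental.plan_and_execute import (
    PlanAndExecute,
    load_agent_executor,
    load_chat_planner,
)
from langchain.chat_models import ChatOpenAI
from langchain.agents import load_tools

model = ChatOpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=model)

# The planner drafts the full sequence of steps up front;
# the executor is an action agent that carries out each step in order.
planner = load_chat_planner(model)
executor = load_agent_executor(model, tools, verbose=True)

agent = PlanAndExecute(planner=planner, executor=executor, verbose=True)
agent.run("Where were the most recent summer Olympics held, and what is the host city's population raised to the 0.43 power?")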
Callbacks | 🦜️🔗 Langchain
LangChain provides a callbacks system that allows you to hook into the various stages of your LLM application. This is useful for logging, monitoring, streaming, and other tasks.

You can subscribe to these events by using the callbacks argument available throughout the API. This argument is a list of handler objects, which are expected to implement one or more of the methods described below in more detail.

Callback handlers

CallbackHandlers are objects that implement the CallbackHandler interface, which has a method for each event that can be subscribed to. The CallbackManager will call the appropriate method on each handler when the event is triggered.

from typing import Any, Dict, List, Union
from langchain.schema import AgentAction, AgentFinish, BaseMessage, LLMResult

class BaseCallbackHandler:
    """Base callback handler that can be used to handle callbacks from langchain."""

    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> Any:
        """Run when LLM starts running."""

    def on_chat_model_start(
        self, serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs: Any
    ) -> Any:
        """Run when Chat Model starts running."""

    def on_llm_new_token(self, token: str, **kwargs: Any) -> Any:
        """Run on new LLM token. Only available when streaming is enabled."""

    def on_llm_end(self, response: LLMResult, **kwargs: Any) -> Any:
        """Run when LLM ends running."""
    def on_llm_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> Any:
        """Run when LLM errors."""

    def on_chain_start(
        self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any
    ) -> Any:
        """Run when chain starts running."""

    def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> Any:
        """Run when chain ends running."""

    def on_chain_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> Any:
        """Run when chain errors."""

    def on_tool_start(
        self, serialized: Dict[str, Any], input_str: str, **kwargs: Any
    ) -> Any:
        """Run when tool starts running."""

    def on_tool_end(self, output: str, **kwargs: Any) -> Any:
        """Run when tool ends running."""

    def on_tool_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> Any:
        """Run when tool errors."""

    def on_text(self, text: str, **kwargs: Any) -> Any:
        """Run on arbitrary text."""
    def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:
        """Run on agent action."""

    def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> Any:
        """Run on agent end."""

Get started

LangChain provides a few built-in handlers that you can use to get started. These are available in the langchain/callbacks module. The most basic handler is the StdOutCallbackHandler, which simply logs all events to stdout.

Note that when the verbose flag on the object is set to True, the StdOutCallbackHandler will be invoked even without being explicitly passed in.

from langchain.callbacks import StdOutCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

handler = StdOutCallbackHandler()
llm = OpenAI()
prompt = PromptTemplate.from_template("1 + {number} = ")

# Constructor callback: First, let's explicitly set the StdOutCallbackHandler when initializing our chain
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[handler])
chain.run(number=2)

# Use verbose flag: Then, let's use the `verbose` flag to achieve the same result
chain = LLMChain(llm=llm, prompt=prompt, verbose=True)
chain.run(number=2)

# Request callbacks: Finally, let's use the request `callbacks` to achieve the same result
chain = LLMChain(llm=llm, prompt=prompt)
chain.run(number=2, callbacks=[handler])
> Entering new LLMChain chain...
Prompt after formatting:
1 + 2 =

> Finished chain.

> Entering new LLMChain chain...
Prompt after formatting:
1 + 2 =

> Finished chain.

> Entering new LLMChain chain...
Prompt after formatting:
1 + 2 =

> Finished chain.

'\n\n3'

Where to pass in callbacks

The callbacks argument is available on most objects throughout the API (Chains, Models, Tools, Agents, etc.) in two different places:

Constructor callbacks: defined in the constructor, e.g. LLMChain(callbacks=[handler], tags=['a-tag']), which will be used for all calls made on that object and will be scoped to that object only. For example, if you pass a handler to the LLMChain constructor, it will not be used by the Model attached to that chain.
Request callbacks: defined in the call()/run()/apply() methods used for issuing a request, e.g. chain.call(inputs, callbacks=[handler]), which will be used for that specific request only and for all sub-requests that it contains (e.g. a call to an LLMChain triggers a call to a Model, which uses the same handler passed in the call() method).

The verbose argument is available on most objects throughout the API (Chains, Models, Tools, Agents, etc.) as a constructor argument, e.g. LLMChain(verbose=True), and it is equivalent to passing a ConsoleCallbackHandler to the callbacks argument of that object and all child objects. This is useful for debugging, as it will log all events to the console.

When do you want to use each of these?

Constructor callbacks are most useful for use cases such as logging, monitoring, etc., which are not specific to a single request, but rather to the entire chain.
For example, if you want to log all the requests made to an LLMChain, you would pass a handler to the constructor.

Request callbacks are most useful for use cases such as streaming, where you want to stream the output of a single request to a specific websocket connection, or other similar use cases. For example, if you want to stream the output of a single request to a websocket, you would pass a handler to the call() method.
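As one hedged sketch of that streaming use case, a custom handler can implement just on_llm_new_token and be attached to a streaming-enabled model. MyStreamingHandler is an illustrative name, not a built-in class.

from langchain.callbacks.base import BaseCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

class MyStreamingHandler(BaseCallbackHandler):
    """Print each token as the model generates it."""

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        # Called once per token, but only when streaming is enabled
        print(token, end="", flush=True)

chat = ChatOpenAI(streaming=True, callbacks=[MyStreamingHandler()], temperature=0)
chat([HumanMessage(content="Write a one-line haiku about callbacks.")])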
Memory | 🦜️🔗 Langchain
🚧 Docs under construction 🚧

By default, Chains and Agents are stateless, meaning that they treat each incoming query independently (like the underlying LLMs and chat models themselves). In some applications, like chatbots, it is essential to remember previous interactions, both in the short and long term. The Memory class does exactly that.

LangChain provides memory components in two forms. First, LangChain provides helper utilities for managing and manipulating previous chat messages. These are designed to be modular and useful regardless of how they are used.
Secondly, LangChain provides easy ways to incorporate these utilities into chains.

Get started

Memory involves keeping a concept of state around throughout a user's interactions with a language model. A user's interactions with a language model are captured in the concept of ChatMessages, so this boils down to ingesting, capturing, transforming and extracting knowledge from a sequence of chat messages. There are many different ways to do this, each of which exists as its own memory type.

In general, for each type of memory there are two ways of using it: the standalone functions that extract information from a sequence of messages, and the way that type of memory is used in a chain.

Memory can return multiple pieces of information (for example, the most recent N messages and a summary of all previous messages). The returned information can either be a string or a list of messages.

We will walk through the simplest form of memory: "buffer" memory, which just involves keeping a buffer of all prior messages. We will show how to use the modular utility functions here, then show how it can be used in a chain (both returning a string as well as a list of messages).

ChatMessageHistory

One of the core utility classes underpinning most (if not all) memory modules is the ChatMessageHistory class. This is a super lightweight wrapper which exposes convenience methods for saving Human messages and AI messages, and then fetching them all. You may want to use this class directly if you are managing memory outside of a chain.

from langchain.memory import ChatMessageHistory

history = ChatMessageHistory()
history.add_user_message("hi!")
history.add_ai_message("whats up?")
history.messages

[HumanMessage(content='hi!', additional_kwargs={}),
 AIMessage(content='whats up?', additional_kwargs={})]

ConversationBufferMemory
We now show how to use this simple concept in a chain. We first showcase ConversationBufferMemory, which is just a wrapper around ChatMessageHistory that extracts the messages into a variable.

We can first extract it as a string.

from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
memory.chat_memory.add_user_message("hi!")
memory.chat_memory.add_ai_message("whats up?")
memory.load_memory_variables({})

{'history': 'Human: hi!\nAI: whats up?'}

We can also get the history as a list of messages.

memory = ConversationBufferMemory(return_messages=True)
memory.chat_memory.add_user_message("hi!")
memory.chat_memory.add_ai_message("whats up?")
memory.load_memory_variables({})

{'history': [HumanMessage(content='hi!', additional_kwargs={}),
  AIMessage(content='whats up?', additional_kwargs={})]}

Using in a chain

Finally, let's take a look at using this in a chain (setting verbose=True so we can see the prompt).

from langchain.llms import OpenAI
from langchain.chains import ConversationChain

llm = OpenAI(temperature=0)
conversation = ConversationChain(
    llm=llm,
    verbose=True,
    memory=ConversationBufferMemory()
)
conversation.predict(input="Hi there!")

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:

Human: Hi there!
AI:

> Finished chain.

" Hi there! It's nice to meet you. How can I help you today?"
conversation.predict(input="I'm doing well! Just having a conversation with an AI.")

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi there!
AI:  Hi there! It's nice to meet you. How can I help you today?
Human: I'm doing well! Just having a conversation with an AI.
AI:

> Finished chain.

" That's great! It's always nice to have a conversation with someone new. What would you like to talk about?"

conversation.predict(input="Tell me about yourself.")

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi there!
AI:  Hi there! It's nice to meet you. How can I help you today?
Human: I'm doing well! Just having a conversation with an AI.
AI:  That's great! It's always nice to have a conversation with someone new. What would you like to talk about?
Human: Tell me about yourself.
AI:

> Finished chain.
" Sure! I'm an AI created to help people with their everyday tasks. I'm programmed to understand natural language and provide helpful information. I'm also constantly learning and updating my knowledge base so I can provide more accurate and helpful answers."

Saving Message History

You may often have to save messages and then load them to use again. This can be done easily by first converting the messages to normal Python dictionaries, saving those (e.g., as JSON), and then loading them back. Here is an example of doing that.

import json

from langchain.memory import ChatMessageHistory
from langchain.schema import messages_from_dict, messages_to_dict

history = ChatMessageHistory()
history.add_user_message("hi!")
history.add_ai_message("whats up?")
dicts = messages_to_dict(history.messages)
dicts

[{'type': 'human', 'data': {'content': 'hi!', 'additional_kwargs': {}}},
 {'type': 'ai', 'data': {'content': 'whats up?', 'additional_kwargs': {}}}]

new_messages = messages_from_dict(dicts)
new_messages

[HumanMessage(content='hi!', additional_kwargs={}),
 AIMessage(content='whats up?', additional_kwargs={})]

And that's it for the getting started! There are plenty of different types of memory; check out our examples to see them all.
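As a hedged taste of those other memory types, ConversationBufferWindowMemory keeps only the last k exchanges rather than the whole history; the snippet below is a sketch, and the exact output formatting may vary by version.

from langchain.memory import ConversationBufferWindowMemory

# Keep only the single most recent exchange (k=1)
memory = ConversationBufferWindowMemory(k=1)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.save_context({"input": "not much you"}, {"output": "not much"})
memory.load_memory_variables({})
# -> {'history': 'Human: not much you\nAI: not much'}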
Text embedding models | 🦜️🔗 Langchain
The Embeddings class is a class designed for interfacing with text embedding models. There are lots of embedding model providers (OpenAI, Cohere, Hugging Face, etc.); this class is designed to provide a standard interface for all of them.

Embeddings create a vector representation of a piece of text. This is useful because it means we can think about text in the vector space, and do things like semantic search where we look for pieces of text that are most similar in the vector space.

The base Embeddings class in LangChain exposes two methods: one for embedding documents and one for embedding a query. The former takes as input multiple texts, while the latter takes a single text. The reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs. queries (the search query itself).

Get started

Setup

To start we'll need to install the OpenAI Python package:

pip install openai

Accessing the API requires an API key, which you can get by creating an account and heading here. Once we have a key we'll want to set it as an environment variable by running:

export OPENAI_API_KEY="..."

If you'd prefer not to set an environment variable, you can pass the key in directly via the openai_api_key named parameter when initializing the OpenAIEmbeddings class:

from langchain.embeddings import OpenAIEmbeddings

embeddings_model = OpenAIEmbeddings(openai_api_key="...")

Otherwise you can initialize without any params:
from langchain.embeddings import OpenAIEmbeddings

embeddings_model = OpenAIEmbeddings()

embed_documents

Embed a list of texts:

embeddings = embeddings_model.embed_documents(
    [
        "Hi there!",
        "Oh, hello!",
        "What's your name?",
        "My friends call me World",
        "Hello World!"
    ]
)
len(embeddings), len(embeddings[0])

(5, 1536)

embed_query

Embed a single piece of text for the purpose of comparing it to other embedded pieces of text:

embedded_query = embeddings_model.embed_query("What was the name mentioned in the conversation?")
embedded_query[:5]

[0.0053587136790156364, -0.0004999046213924885, 0.038883671164512634, -0.003001077566295862, -0.00900818221271038]
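To make the "most similar in the vector space" idea concrete, here is a hedged sketch that scores the query embedding against the document embeddings with cosine similarity; the numpy-based helper is illustrative and not part of the Embeddings interface.

import numpy as np

def cosine_similarity(a, b) -> float:
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank the five embedded texts against the embedded query from above
scores = [cosine_similarity(embedded_query, e) for e in embeddings]
best = int(np.argmax(scores))
print(best, scores[best])  # index of the closest text and its score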
Document loaders | 🦜️🔗 Langchain
Use document loaders to load data from a source as Documents. A Document is a piece of text and associated metadata. For example, there are document loaders for loading a simple .txt file, for loading the text contents of any web page, or even for loading a transcript of a YouTube video.

Document loaders expose a "load" method for loading data as documents from a configured source.
They optionally implement a "lazy load" as well, for lazily loading data into memory.

Get started

The simplest loader reads in a file as text and places it all into one Document.

from langchain.document_loaders import TextLoader

loader = TextLoader("./index.md")
loader.load()

[
    Document(page_content='---\nsidebar_position: 0\n---\n# Document loaders\n\nUse document loaders to load data from a source as `Document`\'s. A `Document` is a piece of text\nand associated metadata. For example, there are document loaders for loading a simple `.txt` file, for loading the text\ncontents of any web page, or even for loading a transcript of a YouTube video.\n\nEvery document loader exposes two methods:\n1. "Load": load documents from the configured source\n2. "Load and split": load documents from the configured source and split them using the passed in text splitter\n\nThey optionally implement:\n\n3. "Lazy load": load documents into memory lazily\n', metadata={'source': '../docs/docs_skeleton/docs/modules/data_connection/document_loaders/index.md'})
]
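For a web-page example of the same interface, here is a hedged sketch using the WebBaseLoader integration; it assumes network access and that its bs4/requests dependencies are installed.

from langchain.document_loaders import WebBaseLoader

loader = WebBaseLoader("https://python.langchain.com/docs/get_started")
docs = loader.load()

# Each loaded page is a Document with page_content plus source metadata
print(docs[0].metadata)
print(docs[0].page_content[:200])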
Document transformers | 🦜️🔗 Langchain
Once you've loaded documents, you'll often want to transform them to better suit your application. The simplest example is splitting a long document into smaller chunks that can fit into your model's context window. LangChain has a number of built-in document transformers that make it easy to split, combine, filter, and otherwise manipulate documents.

Text splitters

When you want to deal with long pieces of text, it is necessary to split up that text into chunks. As simple as this sounds, there is a lot of potential complexity here. Ideally, you want to keep the semantically related pieces of text together. What "semantically related" means could depend on the type of text.
This notebook showcases several ways to do that.

At a high level, text splitters work as follows:

1. Split the text up into small, semantically meaningful chunks (often sentences).
2. Start combining these small chunks into a larger chunk until you reach a certain size (as measured by some function).
3. Once you reach that size, make that chunk its own piece of text and then start creating a new chunk of text with some overlap (to keep context between chunks).

That means there are two different axes along which you can customize your text splitter:

How the text is split
How the chunk size is measured

Get started with text splitters

The default recommended text splitter is the RecursiveCharacterTextSplitter. This text splitter takes a list of characters. It tries to create chunks based on splitting on the first character, but if any chunks are too large it then moves on to the next character, and so forth. By default the characters it tries to split on are ["\n\n", "\n", " ", ""].

In addition to controlling which characters you can split on, you can also control a few other things:

length_function: how the length of chunks is calculated. Defaults to just counting the number of characters, but it's pretty common to pass a token counter here.
chunk_size: the maximum size of your chunks (as measured by the length function).
chunk_overlap: the maximum overlap between chunks. It can be nice to have some overlap to maintain continuity between chunks (e.g. do a sliding window).
add_start_index: whether to include the starting position of each chunk within the original document in the metadata.

# This is a long document we can split up.
with open('../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()

from langchain.text_splitter import RecursiveCharacterTextSplitter

text_splitter = RecursiveCharacterTextSplitter(
    # Set a really small chunk size, just to show.
    chunk_size = 100,
    chunk_overlap = 20,
    length_function = len,
    add_start_index = True,
)

texts = text_splitter.create_documents([state_of_the_union])
print(texts[0])
print(texts[1])

page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and' metadata={'start_index': 0}
page_content='of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.' metadata={'start_index': 82}
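Since the length_function note above mentions token counting, here is a hedged sketch of a tiktoken-backed splitter; it assumes the tiktoken package is installed and that the from_tiktoken_encoder classmethod matches this release.

from langchain.text_splitter import RecursiveCharacterTextSplitter

# Measure chunk size in tokens instead of characters
token_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
    chunk_size=100,
    chunk_overlap=20,
)
token_texts = token_splitter.split_text(state_of_the_union)
print(len(token_texts))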
Vector stores | 🦜️🔗 Langchain
One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding vectors, and then at query time to embed the unstructured query and retrieve the embedding vectors that are 'most similar' to the embedded query.
A vector store takes care of storing embedded data and performing vector search for you.

Get started

This walkthrough showcases basic functionality related to VectorStores. A key part of working with vector stores is creating the vector to put in them, which is usually created via embeddings. Therefore, it is recommended that you familiarize yourself with the text embedding model interfaces before diving into this.

This walkthrough uses the FAISS vector database, which makes use of the Facebook AI Similarity Search (FAISS) library.

pip install faiss-cpu

We want to use OpenAIEmbeddings, so we have to get the OpenAI API Key.

import os
import getpass

os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')

from langchain.document_loaders import TextLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS

raw_documents = TextLoader('../../../state_of_the_union.txt').load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
documents = text_splitter.split_documents(raw_documents)
embeddings = OpenAIEmbeddings()  # kept in a variable so we can also embed queries below
db = FAISS.from_documents(documents, embeddings)

Similarity search

query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs[0].page_content)

Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.

Similarity search by vector

It is also possible to do a search for documents similar to a given embedding vector using similarity_search_by_vector, which accepts an embedding vector as a parameter instead of a string.

embedding_vector = embeddings.embed_query(query)
docs = db.similarity_search_by_vector(embedding_vector)
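One more hedged sketch: FAISS indexes in LangChain can typically be persisted and reloaded with the save_local/load_local convenience methods; the folder name below is illustrative.

# Persist the index to disk, then reload it with the same embeddings object
db.save_local("faiss_index")
new_db = FAISS.load_local("faiss_index", embeddings)

docs = new_db.similarity_search(query)
print(docs[0].page_content[:100])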
Retrievers | 🦜️🔗 Langchain
A retriever is an interface that returns documents given an unstructured query. It is more general than a vector store. A retriever does not need to be able to store documents, only to return (or retrieve) them. Vector stores can be used as the backbone of a retriever, but there are other types of retrievers as well.

Get started

The BaseRetriever class in LangChain is as follows:

from abc import ABC, abstractmethod
from typing import List

from langchain.schema import Document

class BaseRetriever(ABC):
    @abstractmethod
    def get_relevant_documents(self, query: str) -> List[Document]:
        """Get texts relevant for a query.

        Args:
            query: string to find relevant texts for

        Returns:
            List of relevant documents
        """

It's that simple! The get_relevant_documents method can be implemented however you see fit.

Of course, we also help construct what we think useful retrievers are. The main type of retriever that we focus on is a vectorstore retriever. We will focus on that for the rest of this guide.

In order to understand what a vectorstore retriever is, it's important to understand what a vectorstore is. So let's look at that.

By default, LangChain uses Chroma as the vectorstore to index and search embeddings. To walk through this tutorial, we'll first need to install chromadb.

pip install chromadb

This example showcases question answering over documents.
We have chosen this as the example for getting started because it nicely combines a lot of different elements (text splitters, embeddings, vectorstores) and then also shows how to use them in a chain.

Question answering over documents consists of four steps:

1. Create an index
2. Create a retriever from that index
3. Create a question answering chain
4. Ask questions!

Each of the steps has multiple substeps and potential configurations. In this notebook we will primarily focus on (1). We will start by showing the one-liner for doing so, but then break down what is actually going on.

First, let's import some common classes we'll use no matter what.

from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

Next in the generic setup, let's specify the document loader we want to use. You can download the state_of_the_union.txt file here.

from langchain.document_loaders import TextLoader

loader = TextLoader('../state_of_the_union.txt', encoding='utf8')

One Line Index Creation

To get started as quickly as possible, we can use the VectorstoreIndexCreator.

from langchain.indexes import VectorstoreIndexCreator

index = VectorstoreIndexCreator().from_loaders([loader])

Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.

Now that the index is created, we can use it to ask questions of the data! Note that under the hood this is actually doing a few steps as well, which we will cover later in this guide.

query = "What did the president say about Ketanji Brown Jackson"
index.query(query)

" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers.
He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."

query = "What did the president say about Ketanji Brown Jackson"
index.query_with_sources(query)

{'question': 'What did the president say about Ketanji Brown Jackson',
 'answer': " The president said that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson, one of the nation's top legal minds, to continue Justice Breyer's legacy of excellence, and that she has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\n",
 'sources': '../state_of_the_union.txt'}

What is returned from the VectorstoreIndexCreator is a VectorStoreIndexWrapper, which provides this nice query and query_with_sources functionality. If we just wanted to access the vectorstore directly, we can also do that.

index.vectorstore

<langchain.vectorstores.chroma.Chroma at 0x119aa5940>

If we then want to access the VectorstoreRetriever, we can do that with:

index.vectorstore.as_retriever()

VectorStoreRetriever(vectorstore=<langchain.vectorstores.chroma.Chroma object at 0x119aa5940>, search_kwargs={})

Walkthrough

Okay, so what's actually going on? How is this index getting created? A lot of the magic is being hidden in this VectorstoreIndexCreator. What is it doing?

There are three main steps going on after the documents are loaded:

1. Splitting documents into chunks
2. Creating embeddings for each document
3. Storing documents and embeddings in a vectorstore

Let's walk through this in code.

documents = loader.load()
Next, we will split the documents into chunks.

from langchain.text_splitter import CharacterTextSplitter

text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)

We will then select which embeddings we want to use.

from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()

We now create the vectorstore to use as the index.

from langchain.vectorstores import Chroma

db = Chroma.from_documents(texts, embeddings)

Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.

So that's creating the index. Then, we expose this index in a retriever interface.

retriever = db.as_retriever()

Then, as before, we create a chain and use it to answer questions!

qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=retriever)
query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)

" The President said that Judge Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He said she is a consensus builder and has received a broad range of support from organizations such as the Fraternal Order of Police and former judges appointed by Democrats and Republicans."

VectorstoreIndexCreator is just a wrapper around all this logic. It is configurable in the text splitter it uses, the embeddings it uses, and the vectorstore it uses. For example, you can configure it as below:

index_creator = VectorstoreIndexCreator(
    vectorstore_cls=Chroma,
    embedding=OpenAIEmbeddings(),
    text_splitter=CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
)
Hopefully this highlights what is going on under the hood of VectorstoreIndexCreator. While we think it's important to have a simple way to create indexes, we also think it's important to understand what's going on under the hood.
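To tie this back to the BaseRetriever interface at the top of this page, here is a short hedged sketch showing that the vectorstore-backed retriever created above answers get_relevant_documents directly:

# Any retriever, including the vectorstore-backed one above,
# implements get_relevant_documents
relevant_docs = retriever.get_relevant_documents(
    "What did the president say about Ketanji Brown Jackson"
)
print(len(relevant_docs))
print(relevant_docs[0].page_content[:100])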
Data connection | 🦜️🔗 Langchain

Many LLM applications require user-specific data that is not part of the model's training set. LangChain gives you the building blocks to load, transform, store and query your data via:

Document loaders: Load documents from many different sources
Document transformers: Split documents, drop redundant documents, and more
Text embedding models: Take unstructured text and turn it into a list of floating point numbers
Vector stores: Store and search over embedded data
Retrievers: Query your data