Parameters
file_path (str) – path of the local file to store the messages.
property messages: List[langchain.schema.BaseMessage]
Retrieve the messages from the local file.
add_message(message)[source]
Append the message to the record in the local file.
Parameters
message (langchain.schema.BaseMessage) –
Return type
None
clear()[source]
Clear session memory from the local file.
Return type
None
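A minimal usage sketch of the methods above, assuming this entry documents langchain.memory.FileChatMessageHistory (the class header falls outside this excerpt); the file path is a placeholder:
from langchain.memory import FileChatMessageHistory
from langchain.schema import AIMessage, HumanMessage

history = FileChatMessageHistory(file_path="chat_session.json")  # placeholder path
history.add_message(HumanMessage(content="Hi there!"))
history.add_message(AIMessage(content="Hello! How can I help?"))
print(history.messages)  # messages read back from the local file
history.clear()          # wipe the session from the local file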
class langchain.memory.InMemoryEntityStore(*, store={})[source]
Bases: langchain.memory.entity.BaseEntityStore
Basic in-memory entity store.
Parameters
store (Dict[str, Optional[str]]) –
Return type
None
attribute store: Dict[str, Optional[str]] = {}
clear()[source]
Delete all entities from store.
Return type
None
delete(key)[source]
Delete entity value from store.
Parameters
key (str) –
Return type
None
exists(key)[source]
Check if entity exists in store.
Parameters
key (str) –
Return type
bool
get(key, default=None)[source]
Get entity value from store.
Parameters
key (str) –
default (Optional[str]) –
Return type
Optional[str]
set(key, value)[source]
Set entity value in store.
Parameters
key (str) –
value (Optional[str]) –
Return type
None
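A short sketch of the entity-store round trip documented above; the entity name and description are illustrative:
from langchain.memory import InMemoryEntityStore

store = InMemoryEntityStore()
store.set("Alice", "Alice is a software engineer in Berlin.")  # illustrative entity
assert store.exists("Alice")
print(store.get("Alice"))
print(store.get("Bob", default="unknown"))  # missing keys fall back to the default
store.delete("Alice")
store.clear()  # remove everything that remains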
class langchain.memory.MomentoChatMessageHistory(session_id, cache_client, cache_name, *, key_prefix='message_store:', ttl=None, ensure_cache_exists=True)[source]
Bases: langchain.schema.BaseChatMessageHistory
Chat message history cache that uses Momento as a backend.
See https://gomomento.com/
Parameters
session_id (str) –
cache_client (momento.CacheClient) –
cache_name (str) –
key_prefix (str) –
ttl (Optional[timedelta]) –
ensure_cache_exists (bool) –
classmethod from_client_params(session_id, cache_name, ttl, *, configuration=None, auth_token=None, **kwargs)[source]
Construct cache from CacheClient parameters.
Parameters
session_id (str) –
cache_name (str) –
ttl (timedelta) –
configuration (Optional[momento.config.Configuration]) –
auth_token (Optional[str]) –
kwargs (Any) –
Return type
MomentoChatMessageHistory
property messages: list[langchain.schema.BaseMessage]
Retrieve the messages from Momento.
Raises
SdkException – Momento service or network error
Exception – Unexpected response
Returns
List of cached messages
Return type
list[BaseMessage]
add_message(message)[source]
Store a message in the cache.
Parameters
message (BaseMessage) – The message object to store.
Raises
SdkException – Momento service or network error.
Exception – Unexpected response.
Return type
None
clear()[source]
Remove the session's messages from the cache.
Raises
SdkException – Momento service or network error.
Exception – Unexpected response.
Return type
None
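A construction sketch using from_client_params, which builds the momento.CacheClient for you per the signature above; the auth token, cache name, and session id are placeholders:
from datetime import timedelta

from langchain.memory import MomentoChatMessageHistory
from langchain.schema import HumanMessage

history = MomentoChatMessageHistory.from_client_params(
    session_id="user-123",              # placeholder session key
    cache_name="langchain",             # placeholder cache name
    ttl=timedelta(days=1),              # cached messages expire after a day
    auth_token="<MOMENTO_AUTH_TOKEN>",  # placeholder credential
)
history.add_message(HumanMessage(content="Hello, Momento!"))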
class langchain.memory.MongoDBChatMessageHistory(connection_string, session_id, database_name='chat_history', collection_name='message_store')[source]
Bases: langchain.schema.BaseChatMessageHistory
Chat message history that stores history in MongoDB.
Parameters
connection_string (str) – connection string to connect to MongoDB
session_id (str) – arbitrary key that is used to store the messages
of a single chat session.
database_name (str) – name of the database to use
collection_name (str) – name of the collection to use
property messages: List[langchain.schema.BaseMessage]
Retrieve the messages from MongoDB.
add_message(message)[source]
Append the message to the record in MongoDB.
Parameters
message (langchain.schema.BaseMessage) –
Return type
None
clear()[source]
Clear session memory from MongoDB.
Return type
None
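A minimal sketch with a placeholder MongoDB URI; the defaults above supply the database and collection names:
from langchain.memory import MongoDBChatMessageHistory

history = MongoDBChatMessageHistory(
    connection_string="mongodb://localhost:27017",  # placeholder URI
    session_id="user-123",
)
print(history.messages)  # all messages stored under this session key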
class langchain.memory.MotorheadMemory(*, chat_memory=None, output_key=None, input_key=None, return_messages=False, url='https://api.getmetal.io/v1/motorhead', session_id, context=None, api_key=None, client_id=None, timeout=3000, memory_key='history')[source]
Bases: langchain.memory.chat_memory.BaseChatMemory
Parameters
chat_memory (langchain.schema.BaseChatMessageHistory) –
output_key (Optional[str]) –
input_key (Optional[str]) –
return_messages (bool) –
url (str) –
session_id (str) –
context (Optional[str]) –
api_key (Optional[str]) –
client_id (Optional[str]) –
timeout (int) –
memory_key (str) –
Return type
None
attribute api_key: Optional[str] = None
attribute client_id: Optional[str] = None
attribute context: Optional[str] = None
attribute session_id: str [Required]
attribute url: str = 'https://api.getmetal.io/v1/motorhead'
delete_session()[source]
Delete a session.
Return type
None
async init()[source]
Return type
None
load_memory_variables(values)[source]
Return key-value pairs given the text input to the chain.
If None, return all memories.
Parameters
values (Dict[str, Any]) –
Return type
Dict[str, Any]
save_context(inputs, outputs)[source]
Save context from this conversation to buffer.
Parameters
inputs (Dict[str, Any]) –
outputs (Dict[str, str]) –
Return type
None
property memory_variables: List[str]
Input keys this memory class will load dynamically.
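Because init() is a coroutine, the memory should be initialized inside an event loop before use; a sketch with placeholder managed-Metal credentials:
import asyncio

from langchain.memory import MotorheadMemory

memory = MotorheadMemory(
    session_id="user-123",          # placeholder session
    api_key="<METAL_API_KEY>",      # placeholder credentials
    client_id="<METAL_CLIENT_ID>",  # placeholder credentials
)
asyncio.run(memory.init())  # pulls any existing session state from Motorhead
print(memory.load_memory_variables({}))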
class langchain.memory.PostgresChatMessageHistory(session_id, connection_string='postgresql://postgres:mypassword@localhost/chat_history', table_name='message_store')[source]
Bases: langchain.schema.BaseChatMessageHistory
Chat message history stored in a Postgres database.
Parameters
session_id (str) –
connection_string (str) –
table_name (str) –
property messages: List[langchain.schema.BaseMessage]
Retrieve the messages from PostgreSQL.
add_message(message)[source]
Append the message to the record in PostgreSQL.
Parameters
message (langchain.schema.BaseMessage) –
Return type
None
clear()[source]
Clear session memory from PostgreSQL.
Return type
None
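A sketch reusing the documented default connection string (a placeholder; substitute your own credentials):
from langchain.memory import PostgresChatMessageHistory
from langchain.schema import HumanMessage

history = PostgresChatMessageHistory(
    session_id="user-123",
    connection_string="postgresql://postgres:mypassword@localhost/chat_history",
)
history.add_message(HumanMessage(content="Remember this."))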
class langchain.memory.ReadOnlySharedMemory(*, memory)[source]
Bases: langchain.schema.BaseMemory
A memory wrapper that is read-only and cannot be changed.
Parameters
memory (langchain.schema.BaseMemory) –
Return type
None
attribute memory: langchain.schema.BaseMemory [Required]
clear()[source]
Nothing to clear, got a memory like a vault.
Return type
None
load_memory_variables(inputs)[source]
Load memory variables from memory.
Parameters
inputs (Dict[str, Any]) –
Return type
Dict[str, str]
save_context(inputs, outputs)[source]
Nothing should be saved or changed.
Parameters
inputs (Dict[str, Any]) –
outputs (Dict[str, str]) –
Return type
None
property memory_variables: List[str]
Return memory variables.
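A sketch of sharing one memory read-only, e.g. with a second chain; ConversationBufferMemory (not documented in this excerpt) stands in as the wrapped memory:
from langchain.memory import ConversationBufferMemory, ReadOnlySharedMemory

memory = ConversationBufferMemory(memory_key="chat_history")
memory.save_context({"input": "hi"}, {"output": "hello"})

readonly = ReadOnlySharedMemory(memory=memory)
print(readonly.load_memory_variables({}))               # delegates to the wrapped memory
readonly.save_context({"input": "x"}, {"output": "y"})  # deliberate no-op
readonly.clear()                                        # also a no-op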
class langchain.memory.RedisChatMessageHistory(session_id, url='redis://localhost:6379/0', key_prefix='message_store:', ttl=None)[source]
Bases: langchain.schema.BaseChatMessageHistory
Chat message history stored in a Redis database.
Parameters
session_id (str) –
url (str) –
key_prefix (str) –
ttl (Optional[int]) –
property key: str
Construct the record key to use.
property messages: List[langchain.schema.BaseMessage]
Retrieve the messages from Redis.
add_message(message)[source]
Append the message to the record in Redis.
Parameters
message (langchain.schema.BaseMessage) –
Return type
None
clear()[source]
Clear session memory from Redis.
Return type
None
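A construction sketch; the ttl is optional and in seconds:
from langchain.memory import RedisChatMessageHistory

history = RedisChatMessageHistory(
    session_id="user-123",
    url="redis://localhost:6379/0",
    ttl=3600,  # optional: expire the session key after an hour
)
print(history.key)  # "message_store:user-123" given the default key_prefix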
class langchain.memory.RedisEntityStore(session_id='default', url='redis://localhost:6379/0', key_prefix='memory_store', ttl=86400, recall_ttl=259200, *args, redis_client=None)[source]
Bases: langchain.memory.entity.BaseEntityStore
Redis-backed Entity store. Entities get a TTL of 1 day by default, and
that TTL is extended by 3 days every time the entity is read back.
Parameters
session_id (str) –
url (str) –
key_prefix (str) –
ttl (Optional[int]) –
recall_ttl (Optional[int]) –
args (Any) –
redis_client (Any) –
Return type
None
attribute key_prefix: str = 'memory_store'
attribute recall_ttl: Optional[int] = 259200
attribute redis_client: Any = None
attribute session_id: str = 'default'
attribute ttl: Optional[int] = 86400
clear()[source]
Delete all entities from store.
Return type
None
delete(key)[source]
Delete entity value from store.
Parameters
key (str) –
Return type
None
exists(key)[source]
Check if entity exists in store.
Parameters
key (str) –
Return type
bool
get(key, default=None)[source]
Get entity value from store.
Parameters
key (str) –
default (Optional[str]) –
Return type
Optional[str]
set(key, value)[source]
Set entity value in store.
Parameters
key (str) –
value (Optional[str]) –
Return type
None
property full_key_prefix: str
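A sketch of the TTL behavior described above; the entity value is illustrative:
from langchain.memory import RedisEntityStore

store = RedisEntityStore(session_id="user-123", url="redis://localhost:6379/0")
store.set("Alice", "Alice works at Acme.")  # stored with the default 1-day TTL
print(store.get("Alice"))                   # reading extends the TTL by recall_ttl
print(store.full_key_prefix)                # key_prefix qualified by the session id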
class langchain.memory.SQLChatMessageHistory(session_id, connection_string, table_name='message_store')[source]
Bases: langchain.schema.BaseChatMessageHistory
Chat message history stored in an SQL database.
Parameters
session_id (str) –
connection_string (str) –
table_name (str) –
property messages: List[langchain.schema.BaseMessage]
Retrieve all messages from db.
add_message(message)[source]
Append the message to the record in db.
Parameters
message (langchain.schema.BaseMessage) –
Return type
None
clear()[source]
Clear session memory from db.
Return type
None
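A sketch assuming the connection string is a SQLAlchemy-style database URL (so a local SQLite file works):
from langchain.memory import SQLChatMessageHistory

history = SQLChatMessageHistory(
    session_id="user-123",
    connection_string="sqlite:///chat_history.db",  # assumed SQLAlchemy-style URL
)
print(history.messages)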
class langchain.memory.SQLiteEntityStore(session_id='default', db_file='entities.db', table_name='memory_store', *args)[source]
Bases: langchain.memory.entity.BaseEntityStore
SQLite-backed Entity store.
Parameters
session_id (str) –
db_file (str) –
table_name (str) –
args (Any) –
Return type
None
attribute session_id: str = 'default'
attribute table_name: str = 'memory_store'
clear()[source]
Delete all entities from store.
Return type
None
delete(key)[source]
Delete entity value from store.
Parameters
key (str) –
Return type
None
exists(key)[source]
Check if entity exists in store.
Parameters
key (str) –
Return type
bool
get(key, default=None)[source]
Get entity value from store.
Parameters
key (str) –
default (Optional[str]) –
Return type
Optional[str]
set(key, value)[source]
Set entity value in store.
Parameters
key (str) –
value (Optional[str]) –
Return type
None
property full_table_name: str
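The same entity-store interface backed by a local SQLite file; a minimal sketch:
from langchain.memory import SQLiteEntityStore

store = SQLiteEntityStore(session_id="user-123", db_file="entities.db")
store.set("Alice", "Alice works at Acme.")
print(store.full_table_name)  # table name qualified by the session id
print(store.get("Alice"))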
class langchain.memory.SimpleMemory(*, memories={})[source]
Bases: langchain.schema.BaseMemory
Simple memory for storing context or other bits of information that shouldn't
ever change between prompts.
Parameters
memories (Dict[str, Any]) –
Return type
None
attribute memories: Dict[str, Any] = {}
clear()[source]
Nothing to clear, got a memory like a vault.
Return type
None
load_memory_variables(inputs)[source]
Return key-value pairs given the text input to the chain.
If None, return all memories.
Parameters
inputs (Dict[str, Any]) –
Return type
Dict[str, str]
save_context(inputs, outputs)[source]
Nothing should be saved or changed, my memory is set in stone.
Parameters
inputs (Dict[str, Any]) –
outputs (Dict[str, str]) –
Return type
None
property memory_variables: List[str]
Input keys this memory class will load dynamically.
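A sketch of the fixed-value behavior; the memory contents are illustrative:
from langchain.memory import SimpleMemory

memory = SimpleMemory(memories={"company": "Acme Corp", "tone": "formal"})
print(memory.memory_variables)           # the fixed keys: company, tone
print(memory.load_memory_variables({}))  # always returns the same values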
class langchain.memory.VectorStoreRetrieverMemory(*, retriever, memory_key='history', input_key=None, return_docs=False)[source]
Bases: langchain.schema.BaseMemory
Class for a VectorStore-backed memory object.
Parameters
retriever (langchain.vectorstores.base.VectorStoreRetriever) –
memory_key (str) –
input_key (Optional[str]) –
return_docs (bool) –
Return type
None
attribute input_key: Optional[str] = None
Key name to index the inputs to load_memory_variables.
attribute memory_key: str = 'history'
Key name to locate the memories in the result of load_memory_variables.
attribute retriever: langchain.vectorstores.base.VectorStoreRetriever [Required]
VectorStoreRetriever object to connect to.
attribute return_docs: bool = False
Whether or not to return the result of querying the database directly.
clear()[source]
Nothing to clear.
Return type
None
load_memory_variables(inputs)[source]
Return history buffer.
Parameters
inputs (Dict[str, Any]) –
Return type
Dict[str, Union[List[langchain.schema.Document], str]]
save_context(inputs, outputs)[source]
Save context from this conversation to buffer.
Parameters
inputs (Dict[str, Any]) –
outputs (Dict[str, str]) –
Return type
None
property memory_variables: List[str]
The list of keys emitted from the load_memory_variables method.
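A wiring sketch: Chroma (documented in the Vector Stores section below) stands in as the backing store, and as_retriever comes from the base VectorStore API, an assumption here; the saved context is illustrative:
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.memory import VectorStoreRetrieverMemory
from langchain.vectorstores import Chroma

vectorstore = Chroma("memory_store", OpenAIEmbeddings())
retriever = vectorstore.as_retriever(search_kwargs={"k": 2})  # assumed base-class helper

memory = VectorStoreRetrieverMemory(retriever=retriever)
memory.save_context({"input": "My favorite color is teal"}, {"output": "Noted."})
print(memory.load_memory_variables({"input": "what is my favorite color?"}))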
class langchain.memory.ZepChatMessageHistory(session_id, url='http://localhost:8000')[source]
Bases: langchain.schema.BaseChatMessageHistory
A ChatMessageHistory implementation that uses Zep as a backend.
Recommended usage:
# Set up Zep Chat History
zep_chat_history = ZepChatMessageHistory(
    session_id=session_id,
    url=ZEP_API_URL,
)
# Use a standard ConversationBufferMemory to encapsulate the Zep chat history
memory = ConversationBufferMemory(
    memory_key="chat_history", chat_memory=zep_chat_history
)
Zep provides long-term conversation storage for LLM apps. The server stores,
summarizes, embeds, indexes, and enriches conversational AI chat
histories, and exposes them via simple, low-latency APIs.
For server installation instructions and more, see: https://getzep.github.io/
This class is a thin wrapper around the zep-python package. Additional
Zep functionality is exposed via the zep_summary and zep_messages
properties.
For more information on the zep-python package, see:
https://github.com/getzep/zep-python
Parameters
session_id (str) –
url (str) –
Return type
None
property messages: List[langchain.schema.BaseMessage]
Retrieve messages from Zep memory.
property zep_messages: List[Message]
Retrieve the messages from Zep memory.
property zep_summary: Optional[str]
Retrieve the summary from Zep memory.
add_message(message)[source]
Append the message to the Zep memory history.
Parameters
message (langchain.schema.BaseMessage) –
Return type
None
search(query, metadata=None, limit=None)[source]
Search Zep memory for messages matching the query.
Parameters
query (str) –
metadata (Optional[Dict]) –
limit (Optional[int]) –
Return type
List[MemorySearchResult]
clear()[source]
Clear session memory from Zep. Note that Zep is long-term storage for memory
and this is not advised unless you have specific data retention requirements.
Return type
None
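Beyond the buffer wiring shown above, search() runs Zep's search over the session history; a sketch reusing zep_chat_history from the snippet above (the query is illustrative, and the result fields are assumed to follow zep-python's MemorySearchResult):
results = zep_chat_history.search("vacation plans", limit=3)
for result in results:
    print(result)  # zep-python MemorySearchResult objects (assumed shape)
print(zep_chat_history.zep_summary)  # server-side summary, if one exists yet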
Vector Stores
Wrappers on top of vector stores.
class langchain.vectorstores.AlibabaCloudOpenSearch(embedding, config, **kwargs)[source]
Bases: langchain.vectorstores.base.VectorStore
Alibaba Cloud OpenSearch Vector Store
Parameters
embedding (langchain.embeddings.base.Embeddings) –
config (langchain.vectorstores.alibabacloud_opensearch.AlibabaCloudOpenSearchSettings) –
kwargs (Any) –
Return type
None
add_texts(texts, metadatas=None, **kwargs)[source]
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) – Iterable of strings to add to the vectorstore.
metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts.
kwargs (Any) – vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
Return type
List[str]
similarity_search(query, k=4, search_filter=None, **kwargs)[source]
Return docs most similar to query.
Parameters
query (str) –
k (int) –
search_filter (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
List[langchain.schema.Document]
similarity_search_with_relevance_scores(query, k=4, search_filter=None, **kwargs)[source]
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query (str) – input text
k (int) – Number of Documents to return. Defaults to 4.
**kwargs – kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 and 1 to
filter the resulting set of retrieved docs
search_filter (Optional[dict]) –
kwargs (Any) –
Returns
List of Tuples of (doc, similarity_score)
Return type
List[Tuple[langchain.schema.Document, float]]
similarity_search_by_vector(embedding, k=4, search_filter=None, **kwargs)[source]
Return docs most similar to embedding vector.
Parameters
embedding (List[float]) – Embedding to look up documents similar to.
k (int) – Number of Documents to return. Defaults to 4.
search_filter (Optional[dict]) –
kwargs (Any) –
Returns
List of Documents most similar to the query vector.
Return type
List[langchain.schema.Document]
inner_embedding_query(embedding, search_filter=None, k=4)[source]
Parameters
embedding (List[float]) –
search_filter (Optional[Dict[str, Any]]) –
k (int) –
Return type
Dict[str, Any]
create_results(json_result)[source]
Parameters
json_result (Dict[str, Any]) –
Return type
List[langchain.schema.Document]
create_results_with_score(json_result)[source]
Parameters
json_result (Dict[str, Any]) –
Return type
List[Tuple[langchain.schema.Document, float]]
classmethod from_texts(texts, embedding, metadatas=None, config=None, **kwargs)[source]
Return VectorStore initialized from texts and embeddings.
Parameters
texts (List[str]) –
embedding (langchain.embeddings.base.Embeddings) –
metadatas (Optional[List[dict]]) –
config (Optional[langchain.vectorstores.alibabacloud_opensearch.AlibabaCloudOpenSearchSettings]) –
kwargs (Any) –
Return type
langchain.vectorstores.alibabacloud_opensearch.AlibabaCloudOpenSearch
classmethod from_documents(documents, embedding, ids=None, config=None, **kwargs)[source]
Return VectorStore initialized from documents and embeddings.
Parameters
documents (List[langchain.schema.Document]) –
embedding (langchain.embeddings.base.Embeddings) –
ids (Optional[List[str]]) –
config (Optional[langchain.vectorstores.alibabacloud_opensearch.AlibabaCloudOpenSearchSettings]) –
kwargs (Any) –
Return type
langchain.vectorstores.alibabacloud_opensearch.AlibabaCloudOpenSearch
class langchain.vectorstores.AlibabaCloudOpenSearchSettings(endpoint, instance_id, username, password, datasource_name, embedding_index_name, field_name_mapping)[source]
Bases: object
OpenSearch client configuration.
Attribute:
endpoint (str) : The endpoint of the OpenSearch instance. You can find it
in the console of Alibaba Cloud OpenSearch.
instance_id (str) : The identifier of the OpenSearch instance. You can find
it in the console of Alibaba Cloud OpenSearch.
datasource_name (str): The name of the data source specified when creating it.
username (str) : The username specified when purchasing the instance.
password (str) : The password specified when purchasing the instance.
embedding_index_name (str) : The name of the vector attribute specified
when configuring the instance attributes.
field_name_mapping (Dict) : A field-name mapping between the OpenSearch
vector store and the OpenSearch instance configuration table, e.g.:
{
    'id': 'The id field name map of index document.',
    'document': 'The text field name map of index document.',
    'embedding': 'In the embedding field of the opensearch instance,
    the values must be in float16 multivalue type and separated by commas.',
    'metadata_field_x': 'Metadata field mapping includes the mapped
    field name and operator in the mapping value, separated by a comma
    between the mapped field name and the operator.',
}
Parameters
endpoint (str) –
instance_id (str) –
username (str) –
password (str) –
datasource_name (str) –
embedding_index_name (str) –
field_name_mapping (Dict[str, str]) –
Return type
None
endpoint: str
instance_id: str
username: str
password: str
datasource_name: str
embedding_index_name: str
field_name_mapping: Dict[str, str] = {'document': 'document', 'embedding': 'embedding', 'id': 'id', 'metadata_field_x': 'metadata_field_x,operator'}
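A construction sketch tying the settings object to the store; every endpoint, credential, and field name below is a placeholder:
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import (
    AlibabaCloudOpenSearch,
    AlibabaCloudOpenSearchSettings,
)

settings = AlibabaCloudOpenSearchSettings(
    endpoint="ha-cn-xxxx.opensearch.aliyuncs.com",  # placeholders throughout
    instance_id="ha-cn-xxxx",
    username="user",
    password="password",
    datasource_name="my_datasource",
    embedding_index_name="embedding_index",
    field_name_mapping={"id": "id", "document": "document", "embedding": "embedding"},
)
store = AlibabaCloudOpenSearch(embedding=OpenAIEmbeddings(), config=settings)
docs = store.similarity_search("what is opensearch?", k=4)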
class langchain.vectorstores.AnalyticDB(connection_string, embedding_function, embedding_dimension=1536, collection_name='langchain_document', pre_delete_collection=False, logger=None)[source]
Bases: langchain.vectorstores.base.VectorStore
VectorStore implementation using AnalyticDB.
AnalyticDB is a distributed, cloud-native database with full PostgreSQL syntax.
- connection_string is a postgres connection string.
- embedding_function is any embedding function implementing the
langchain.embeddings.base.Embeddings interface.
- collection_name is the name of the collection to use. (default: langchain_document)
NOTE: This is not the name of the table, but the name of the collection. The tables will be created when initializing the store (if they do not exist),
so make sure the user has the right permissions to create tables.
- pre_delete_collection if True, will delete the collection if it exists. (default: False)
Useful for testing.
Parameters
connection_string (str) –
embedding_function (Embeddings) –
embedding_dimension (int) –
collection_name (str) –
pre_delete_collection (bool) –
logger (Optional[logging.Logger]) –
Return type
None
create_table_if_not_exists()[source]
Return type
None
create_collection()[source]
Return type
None
delete_collection()[source]
Return type
None
add_texts(texts, metadatas=None, ids=None, batch_size=500, **kwargs)[source]
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) – Iterable of strings to add to the vectorstore.
metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts.
kwargs (Any) – vectorstore specific parameters
ids (Optional[List[str]]) –
batch_size (int) –
Returns
List of ids from adding the texts into the vectorstore.
Return type
List[str]
similarity_search(query, k=4, filter=None, **kwargs)[source]
Run similarity search with AnalyticDB with distance.
Parameters
query (str) – Query text to search for.
k (int) – Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
kwargs (Any) –
Returns
List of Documents most similar to the query.
Return type
List[langchain.schema.Document]
similarity_search_with_score(query, k=4, filter=None)[source]
Return docs most similar to query.
Parameters
query (str) – Text to look up documents similar to.
k (int) – Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of Documents most similar to the query and score for each
Return type
List[Tuple[langchain.schema.Document, float]]
similarity_search_with_score_by_vector(embedding, k=4, filter=None)[source]
Parameters
embedding (List[float]) –
k (int) –
filter (Optional[dict]) –
Return type
List[Tuple[langchain.schema.Document, float]]
similarity_search_by_vector(embedding, k=4, filter=None, **kwargs)[source]
Return docs most similar to embedding vector.
Parameters
embedding (List[float]) – Embedding to look up documents similar to.
k (int) – Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
kwargs (Any) –
Returns
List of Documents most similar to the query vector.
Return type
List[langchain.schema.Document]
classmethod from_texts(texts, embedding, metadatas=None, embedding_dimension=1536, collection_name='langchain_document', ids=None, pre_delete_collection=False, **kwargs)[source]
Return VectorStore initialized from texts and embeddings.
A Postgres connection string is required:
either pass it as a parameter
or set the PG_CONNECTION_STRING environment variable.
Parameters
texts (List[str]) –
embedding (langchain.embeddings.base.Embeddings) –
metadatas (Optional[List[dict]]) –
embedding_dimension (int) –
collection_name (str) –
ids (Optional[List[str]]) –
pre_delete_collection (bool) –
kwargs (Any) –
Return type
langchain.vectorstores.analyticdb.AnalyticDB
classmethod get_connection_string(kwargs)[source]
Parameters
kwargs (Dict[str, Any]) –
Return type
str
classmethod from_documents(documents, embedding, embedding_dimension=1536, collection_name='langchain_document', ids=None, pre_delete_collection=False, **kwargs)[source]
Return VectorStore initialized from documents and embeddings.
A Postgres connection string is required:
either pass it as a parameter
or set the PG_CONNECTION_STRING environment variable.
Parameters
documents (List[langchain.schema.Document]) –
embedding (langchain.embeddings.base.Embeddings) –
embedding_dimension (int) –
collection_name (str) –
ids (Optional[List[str]]) –
pre_delete_collection (bool) –
kwargs (Any) –
Return type
langchain.vectorstores.analyticdb.AnalyticDB
classmethod connection_string_from_db_params(driver, host, port, database, user, password)[source]
Return connection string from database parameters.
Parameters
driver (str) –
host (str) –
port (int) –
database (str) –
user (str) –
password (str) –
Return type
str
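A sketch of building the connection string from parts and initializing the store from texts; the driver name and credentials are placeholders, and passing connection_string through from_texts kwargs is an assumption based on the note above:
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import AnalyticDB

conn_str = AnalyticDB.connection_string_from_db_params(
    driver="psycopg2",   # placeholder driver name
    host="localhost",
    port=5432,
    database="langchain",
    user="postgres",
    password="mypassword",
)
store = AnalyticDB.from_texts(
    ["AnalyticDB speaks PostgreSQL"],
    embedding=OpenAIEmbeddings(),
    connection_string=conn_str,  # assumed kwarg; PG_CONNECTION_STRING also works
)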
class langchain.vectorstores.Annoy(embedding_function, index, metric, docstore, index_to_docstore_id)[source]
Bases: langchain.vectorstores.base.VectorStore
Wrapper around Annoy vector database.
To use, you should have the annoy python package installed.
Example
from langchain import Annoy
db = Annoy(embedding_function, index, docstore, index_to_docstore_id)
Parameters
embedding_function (Callable) –
index (Any) –
metric (str) –
docstore (Docstore) –
index_to_docstore_id (Dict[int, str]) –
add_texts(texts, metadatas=None, **kwargs)[source]
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) – Iterable of strings to add to the vectorstore.
metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts.
kwargs (Any) – vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
Return type
List[str]
process_index_results(idxs, dists)[source]
Turns annoy results into a list of documents and scores.
Parameters
idxs (List[int]) – List of indices of the documents in the index.
dists (List[float]) – List of distances of the documents in the index.
Returns
List of Documents and scores.
Return type
List[Tuple[langchain.schema.Document, float]]
similarity_search_with_score_by_vector(embedding, k=4, search_k=-1)[source]
Return docs most similar to the embedding vector.
Parameters
embedding (List[float]) – Embedding to look up documents similar to.
k (int) – Number of Documents to return. Defaults to 4.
search_k (int) – inspect up to search_k nodes, which defaults
to n_trees * n if not provided
Returns
List of Documents most similar to the query and score for each
Return type
List[Tuple[langchain.schema.Document, float]]
similarity_search_with_score_by_index(docstore_index, k=4, search_k=-1)[source]
Return docs most similar to the indexed document.
Parameters
docstore_index (int) – Index of document in docstore
k (int) – Number of Documents to return. Defaults to 4.
search_k (int) – inspect up to search_k nodes, which defaults
to n_trees * n if not provided
Returns
List of Documents most similar to the query and score for each
Return type
List[Tuple[langchain.schema.Document, float]]
similarity_search_with_score(query, k=4, search_k=-1)[source]
Return docs most similar to query.
Parameters
query (str) – Text to look up documents similar to.
k (int) – Number of Documents to return. Defaults to 4.
search_k (int) – inspect up to search_k nodes, which defaults
to n_trees * n if not provided
Returns
List of Documents most similar to the query and score for each
Return type
List[Tuple[langchain.schema.Document, float]]
similarity_search_by_vector(embedding, k=4, search_k=-1, **kwargs)[source]
Return docs most similar to embedding vector.
Parameters
embedding (List[float]) – Embedding to look up documents similar to.
k (int) – Number of Documents to return. Defaults to 4.
search_k (int) – inspect up to search_k nodes, which defaults
to n_trees * n if not provided
kwargs (Any) –
Returns
List of Documents most similar to the embedding.
Return type
List[langchain.schema.Document]
similarity_search_by_index(docstore_index, k=4, search_k=-1, **kwargs)[source]
Return docs most similar to docstore_index.
Parameters
docstore_index (int) – Index of document in docstore
k (int) – Number of Documents to return. Defaults to 4.
search_k (int) – inspect up to search_k nodes, which defaults
to n_trees * n if not provided
kwargs (Any) –
Returns
List of Documents most similar to the embedding.
Return type
List[langchain.schema.Document]
similarity_search(query, k=4, search_k=-1, **kwargs)[source]
Return docs most similar to query.
Parameters
query (str) – Text to look up documents similar to.
k (int) – Number of Documents to return. Defaults to 4.
search_k (int) – inspect up to search_k nodes, which defaults
to n_trees * n if not provided
kwargs (Any) –
Returns
List of Documents most similar to the query.
Return type
List[langchain.schema.Document]
max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding (List[float]) – Embedding to look up documents similar to.
fetch_k (int) – Number of Documents to fetch to pass to MMR algorithm.
k (int) – Number of Documents to return. Defaults to 4.
lambda_mult (float) – Number between 0 and 1 that determines the degree
of diversity among the results, with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
kwargs (Any) –
Returns
List of Documents selected by maximal marginal relevance.
Return type
List[langchain.schema.Document]
max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query (str) – Text to look up documents similar to.
k (int) – Number of Documents to return. Defaults to 4.
fetch_k (int) – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult (float) – Number between 0 and 1 that determines the degree
of diversity among the results, with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
kwargs (Any) –
Returns
List of Documents selected by maximal marginal relevance.
Return type
List[langchain.schema.Document]
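A usage sketch of the lambda_mult knob described above, assuming db is an Annoy store built as in the Example blocks that follow:
# lambda_mult=1.0 reduces to plain similarity search; 0.0 maximizes diversity
docs = db.max_marginal_relevance_search(
    "query text", k=4, fetch_k=20, lambda_mult=0.5
)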
classmethod from_texts(texts, embedding, metadatas=None, metric='angular', trees=100, n_jobs=-1, **kwargs)[source]
Construct Annoy wrapper from raw documents.
Parameters
texts (List[str]) – List of documents to index.
embedding (langchain.embeddings.base.Embeddings) – Embedding function to use.
metadatas (Optional[List[dict]]) – List of metadata dictionaries to associate with documents.
metric (str) – Metric to use for indexing. Defaults to 'angular'.
trees (int) – Number of trees to use for indexing. Defaults to 100.
n_jobs (int) – Number of jobs to use for indexing. Defaults to -1.
kwargs (Any) –
Return type
langchain.vectorstores.annoy.Annoy
This is a user friendly interface that:
Embeds documents.
Creates an in memory docstore
Initializes the Annoy database
This is intended to be a quick way to get started.
Example
from langchain import Annoy
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
index = Annoy.from_texts(texts, embeddings)
classmethod from_embeddings(text_embeddings, embedding, metadatas=None, metric='angular', trees=100, n_jobs=-1, **kwargs)[source]
Construct Annoy wrapper from embeddings.
Parameters
text_embeddings (List[Tuple[str, List[float]]]) – List of tuples of (text, embedding)
embedding (langchain.embeddings.base.Embeddings) – Embedding function to use.
metadatas (Optional[List[dict]]) – List of metadata dictionaries to associate with documents.
metric (str) – Metric to use for indexing. Defaults to 'angular'.
trees (int) – Number of trees to use for indexing. Defaults to 100.
n_jobs (int) – Number of jobs to use for indexing. Defaults to -1.
kwargs (Any) –
Return type
langchain.vectorstores.annoy.Annoy
This is a user friendly interface that:
Creates an in memory docstore with provided embeddings
Initializes the Annoy database
This is intended to be a quick way to get started.
Example
from langchain import Annoy
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
text_embeddings = embeddings.embed_documents(texts)
text_embedding_pairs = list(zip(texts, text_embeddings))
db = Annoy.from_embeddings(text_embedding_pairs, embeddings)
save_local(folder_path, prefault=False)[source]
Save Annoy index, docstore, and index_to_docstore_id to disk.
Parameters
folder_path (str) – folder path to save index, docstore,
and index_to_docstore_id to.
prefault (bool) – Whether to pre-load the index into memory.
Return type
None
classmethod load_local(folder_path, embeddings)[source]
Load Annoy index, docstore, and index_to_docstore_id from disk.
Parameters
folder_path (str) – folder path to load index, docstore,
and index_to_docstore_id from.
embeddings (langchain.embeddings.base.Embeddings) – Embeddings to use when generating queries.
Return type
langchain.vectorstores.annoy.Annoy
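A persistence round trip combining save_local and load_local; the folder name is illustrative, and index/embeddings come from the from_texts example above:
index.save_local("annoy_index")  # writes index, docstore, index_to_docstore_id

restored = Annoy.load_local("annoy_index", embeddings)
docs = restored.similarity_search("query text", k=2)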
class langchain.vectorstores.AtlasDB(name, embedding_function=None, api_key=None, description='A description for your project', is_public=True, reset_project_if_exists=False)[source]
Bases: langchain.vectorstores.base.VectorStore
Wrapper around Atlas: Nomic's neural database and rhizomatic instrument.
To use, you should have the nomic python package installed.
Example
from langchain.vectorstores import AtlasDB
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = AtlasDB("my_project", embeddings.embed_query)
Parameters
name (str) –
embedding_function (Optional[Embeddings]) –
api_key (Optional[str]) –
description (str) –
is_public (bool) –
reset_project_if_exists (bool) –
Return type
None
add_texts(texts, metadatas=None, ids=None, refresh=True, **kwargs)[source]
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) – Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional) – Optional list of metadatas.
ids (Optional[List[str]]) – An optional list of ids.
refresh (bool) – Whether or not to refresh indices with the updated data.
Default True.
kwargs (Any) –
Returns
List of IDs of the added texts.
Return type
List[str]
create_index(**kwargs)[source]
Creates an index in your project.
See
https://docs.nomic.ai/atlas_api.html#nomic.project.AtlasProject.create_index
for full detail.
Parameters
kwargs (Any) –
Return type
Any
similarity_search(query, k=4, **kwargs)[source]
Run similarity search with AtlasDB.
Parameters
query (str) – Query text to search for.
k (int) – Number of results to return. Defaults to 4.
kwargs (Any) –
Returns
List of documents most similar to the query text.
Return type
List[Document]
classmethod from_texts(texts, embedding=None, metadatas=None, ids=None, name=None, api_key=None, description='A description for your project', is_public=True, reset_project_if_exists=False, index_kwargs=None, **kwargs)[source]
Create an AtlasDB vectorstore from raw documents.
Parameters
texts (List[str]) – The list of texts to ingest.
name (str) – Name of the project to create.
api_key (str) – Your nomic API key.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
metadatas (Optional[List[dict]]) – List of metadatas. Defaults to None.
ids (Optional[List[str]]) – Optional list of document IDs. If None,
ids will be auto created.
description (str) – A description for your project.
is_public (bool) – Whether your project is publicly accessible.
True by default.
reset_project_if_exists (bool) – Whether to reset this project if it
already exists. Default False.
Generally useful during development and testing.
index_kwargs (Optional[dict]) – Dict of kwargs for index creation.
See https://docs.nomic.ai/atlas_api.html
kwargs (Any) –
Returns
Nomic's neural database and finest rhizomatic instrument
Return type
AtlasDB
classmethod from_documents(documents, embedding=None, ids=None, name=None, api_key=None, persist_directory=None, description='A description for your project', is_public=True, reset_project_if_exists=False, index_kwargs=None, **kwargs)[source]
Create an AtlasDB vectorstore from a list of documents.
Parameters
name (str) – Name of the collection to create.
api_key (str) – Your nomic API key.
documents (List[Document]) – List of documents to add to the vectorstore.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
ids (Optional[List[str]]) – Optional list of document IDs. If None,
ids will be auto created.
description (str) – A description for your project.
is_public (bool) – Whether your project is publicly accessible.
True by default.
reset_project_if_exists (bool) – Whether to reset this project if
it already exists. Default False.
Generally useful during development and testing.
index_kwargs (Optional[dict]) – Dict of kwargs for index creation.
See https://docs.nomic.ai/atlas_api.html
persist_directory (Optional[str]) –
kwargs (Any) –
Returns
Nomic's neural database and finest rhizomatic instrument
Return type
AtlasDB
class langchain.vectorstores.AwaDB(table_name='langchain_awadb', embedding=None, log_and_data_dir=None, client=None)[source]
Bases: langchain.vectorstores.base.VectorStore
Interface implemented by AwaDB vector stores.
Parameters
table_name (str) –
embedding (Optional[Embeddings]) –
log_and_data_dir (Optional[str]) –
client (Optional[awadb.Client]) –
Return type
None
add_texts(texts, metadatas=None, is_duplicate_texts=None, **kwargs)[source]
Run more texts through the embeddings and add to the vectorstore.
:param texts: Iterable of strings to add to the vectorstore.
:param metadatas: Optional list of metadatas associated with the texts.
:param is_duplicate_texts: Optional, whether to allow duplicate texts.
:param kwargs: vectorstore specific parameters.
Returns
List of ids from adding the texts into the vectorstore.
Parameters
texts (Iterable[str]) –
metadatas (Optional[List[dict]]) –
is_duplicate_texts (Optional[bool]) –
kwargs (Any) –
Return type
List[str]
load_local(table_name, **kwargs)[source]
Parameters
table_name (str) –
kwargs (Any) –
Return type
bool
similarity_search(query, k=4, **kwargs)[source]
Return docs most similar to query.
Parameters
query (str) –
k (int) –
kwargs (Any) –
Return type
List[langchain.schema.Document]
similarity_search_with_score(query, k=4, **kwargs)[source]
Return docs and relevance scores, normalized on a scale from 0 to 1.
0 is dissimilar, 1 is most similar.
Parameters
query (str) –
k (int) –
kwargs (Any) –
Return type
List[Tuple[langchain.schema.Document, float]]
similarity_search_with_relevance_scores(query, k=4, **kwargs)[source]
Return docs and relevance scores, normalized on a scale from 0 to 1.
0 is dissimilar, 1 is most similar.
Parameters
query (str) –
k (int) –
kwargs (Any) –
Return type
List[Tuple[langchain.schema.Document, float]]
similarity_search_by_vector(embedding=None, k=4, scores=None, **kwargs)[source]
Return docs most similar to embedding vector.
Parameters
embedding (Optional[List[float]]) – Embedding to look up documents similar to.
k (int) – Number of Documents to return. Defaults to 4.
scores (Optional[list]) –
kwargs (Any) –
Returns
List of Documents most similar to the query vector.
Return type
List[langchain.schema.Document]
create_table(table_name, **kwargs)[source]
Create a new table.
Parameters
table_name (str) –
kwargs (Any) –
Return type
bool
use(table_name, **kwargs)[source]
Use the specified table. If you don't know the table names, invoke list_tables.
Parameters
table_name (str) –
kwargs (Any) –
Return type
bool
list_tables(**kwargs)[source]
List all the tables created by the client.
Parameters
kwargs (Any) –
Return type
List[str]
get_current_table(**kwargs)[source]
Get the current table.
Parameters
kwargs (Any) –
Return type
str
classmethod from_texts(texts, embedding=None, metadatas=None, table_name='langchain_awadb', log_and_data_dir=None, client=None, **kwargs)[source]
Create an AwaDB vectorstore from raw documents.
Parameters
texts (List[str]) – List of texts to add to the table.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
metadatas (Optional[List[dict]]) – List of metadatas. Defaults to None.
table_name (str) – Name of the table to create.
log_and_data_dir (Optional[str]) – Directory of logging and persistence.
client (Optional[awadb.Client]) – AwaDB client
kwargs (Any) –
Returns
AwaDB vectorstore.
Return type
AwaDB
classmethod from_documents(documents, embedding=None, table_name='langchain_awadb', log_and_data_dir=None, client=None, **kwargs)[source]
Create an AwaDB vectorstore from a list of documents.
If a log_and_data_dir is specified, the table will be persisted there.
Parameters
documents (List[Document]) – List of documents to add to the vectorstore.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
table_name (str) – Name of the table to create.
log_and_data_dir (Optional[str]) – Directory to persist the table.
client (Optional[awadb.Client]) – AwaDB client
kwargs (Any) –
Returns
AwaDB vectorstore.
Return type
AwaDB
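An end-to-end sketch with from_texts; the table name is the documented default, the texts are illustrative, and supplying an explicit embedding function is an assumption (embedding defaults to None above):
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import AwaDB

db = AwaDB.from_texts(
    ["AwaDB is an embedded vector store"],  # illustrative corpus
    embedding=OpenAIEmbeddings(),
    table_name="langchain_awadb",
)
docs = db.similarity_search("what is AwaDB?", k=1)
print(db.get_current_table())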
class langchain.vectorstores.AzureSearch(azure_search_endpoint, azure_search_key, index_name, embedding_function, search_type='hybrid', semantic_configuration_name=None, semantic_query_language='en-us', **kwargs)[source]
Bases: langchain.vectorstores.base.VectorStore
Parameters
azure_search_endpoint (str) –
azure_search_key (str) –
index_name (str) –
embedding_function (Callable) –
search_type (str) –
semantic_configuration_name (Optional[str]) –
semantic_query_language (str) –
kwargs (Any) –
add_texts(texts, metadatas=None, **kwargs)[source]
Add text data to an existing index.
Parameters
texts (Iterable[str]) –
metadatas (Optional[List[dict]]) –
kwargs (Any) –
Return type
List[str]
similarity_search(query, k=4, **kwargs)[source]
Return docs most similar to query.
Parameters
query (str) –
k (int) –
kwargs (Any) –
Return type
List[langchain.schema.Document]
vector_search(query, k=4, **kwargs)[source]
Returns the most similar indexed documents to the query text.
Parameters
query (str) – The query text for which to find similar documents.
k (int) – The number of documents to return. Default is 4.
kwargs (Any) –
Returns
A list of documents that are most similar to the query text.
Return type
List[Document]
vector_search_with_score(query, k=4, filters=None)[source]
Return docs most similar to query.
Parameters
query (str) – Text to look up documents similar to.
k (int) – Number of Documents to return. Defaults to 4.
filters (Optional[str]) –
Returns
List of Documents most similar to the query and score for each
Return type
List[Tuple[langchain.schema.Document, float]]
hybrid_search(query, k=4, **kwargs)[source]
Returns the most similar indexed documents to the query text.
Parameters
query (str) – The query text for which to find similar documents.
k (int) – The number of documents to return. Default is 4.
kwargs (Any) –
Returns
A list of documents that are most similar to the query text.
Return type
List[Document]
hybrid_search_with_score(query, k=4, filters=None)[source]
Return docs most similar to query with a hybrid query.
Parameters
query (str) – Text to look up documents similar to.
k (int) – Number of Documents to return. Defaults to 4.
filters (Optional[str]) –
Returns
List of Documents most similar to the query and score for each
Return type
List[Tuple[langchain.schema.Document, float]]
semantic_hybrid_search(query, k=4, **kwargs)[source]
Returns the most similar indexed documents to the query text.
Parameters
query (str) – The query text for which to find similar documents.
k (int) – The number of documents to return. Default is 4.
kwargs (Any) –
Returns
A list of documents that are most similar to the query text.
Return type
List[Document]
semantic_hybrid_search_with_score(query, k=4, filters=None)[source]
Return docs most similar to query with a hybrid query.
Parameters
query (str) – Text to look up documents similar to.
k (int) – Number of Documents to return. Defaults to 4.
filters (Optional[str]) –
Returns
List of Documents most similar to the query and score for each
Return type
List[Tuple[langchain.schema.Document, float]]
classmethod from_texts(texts, embedding, metadatas=None, azure_search_endpoint='', azure_search_key='', index_name='langchain-index', **kwargs)[source]
Return VectorStore initialized from texts and embeddings.
Parameters
texts (List[str]) –
embedding (langchain.embeddings.base.Embeddings) –
metadatas (Optional[List[dict]]) –
azure_search_endpoint (str) –
azure_search_key (str) –
index_name (str) –
kwargs (Any) –
Return type
langchain.vectorstores.azuresearch.AzureSearch
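A from_texts sketch; the endpoint and admin key are placeholders for your Azure Cognitive Search service:
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import AzureSearch

store = AzureSearch.from_texts(
    texts=["hello world"],
    embedding=OpenAIEmbeddings(),
    azure_search_endpoint="https://<service>.search.windows.net",  # placeholder
    azure_search_key="<admin-key>",                                # placeholder
    index_name="langchain-index",
)
docs = store.hybrid_search("hello", k=4)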
class langchain.vectorstores.Cassandra(embedding, session, keyspace, table_name, ttl_seconds=None)[source]
Bases: langchain.vectorstores.base.VectorStore
Wrapper around Cassandra embeddings platform.
There is no notion of a default table name, since each embedding
function implies its own vector dimension, which is part of the schema.
Example
from langchain.vectorstores import Cassandra
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
session = ...
keyspace = 'my_keyspace'
vectorstore = Cassandra(embeddings, session, keyspace, 'my_doc_archive')
Parameters
embedding (Embeddings) –
session (Session) –
keyspace (str) –
table_name (str) –
ttl_seconds (int | None) –
Return type
None
delete_collection()[source]
Just an alias for clear
(to better align with other VectorStore implementations).
Return type
None
clear()[source]
Empty the collection.
Return type
None
delete_by_document_id(document_id)[source]
Parameters
document_id (str) –
Return type
None
add_texts(texts, metadatas=None, ids=None, **kwargs)[source]
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) – Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional) – Optional list of metadatas.
ids (Optional[List[str]], optional) – Optional list of IDs.
kwargs (Any) –
Returns
List of IDs of the added texts.
Return type
List[str]
similarity_search_with_score_id_by_vector(embedding, k=4)[source]
Return docs most similar to embedding vector.
No support for filter query (on metadata) along with vector search.
Parameters
embedding (List[float]) – Embedding to look up documents similar to.
k (int) – Number of Documents to return. Defaults to 4.
Returns
List of (Document, score, id), the most similar to the query vector.
Return type
List[Tuple[langchain.schema.Document, float, str]]
similarity_search_with_score_id(query, k=4, **kwargs)[source]
Parameters
query (str) –
k (int) –
kwargs (Any) –
Return type
List[Tuple[langchain.schema.Document, float, str]]
similarity_search_with_score_by_vector(embedding, k=4)[source]
Return docs most similar to embedding vector.
No support for filter query (on metadata) along with vector search.
Parameters
embedding (List[float]) – Embedding to look up documents similar to.
k (int) – Number of Documents to return. Defaults to 4.
Returns
List of (Document, score), the most similar to the query vector.
Return type
List[Tuple[langchain.schema.Document, float]]
similarity_search(query, k=4, **kwargs)[source]
Return docs most similar to query.
Parameters
query (str) –
k (int) –
kwargs (Any) –
Return type
List[langchain.schema.Document]
similarity_search_by_vector(embedding, k=4, **kwargs)[source]
Return docs most similar to embedding vector.
Parameters
embedding (List[float]) – Embedding to look up documents similar to.
k (int) – Number of Documents to return. Defaults to 4.
kwargs (Any) –
Returns
List of Documents most similar to the query vector.
Return type
List[langchain.schema.Document]
similarity_search_with_score(query, k=4, **kwargs)[source]
Parameters
query (str) –
k (int) –
kwargs (Any) –
Return type
List[Tuple[langchain.schema.Document, float]]
max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
:param embedding: Embedding to look up documents similar to.
:param k: Number of Documents to return.
:param fetch_k: Number of Documents to fetch to pass to MMR algorithm.
:param lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results, with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Returns
List of Documents selected by maximal marginal relevance.
Parameters
embedding (List[float]) –
k (int) –
fetch_k (int) –
lambda_mult (float) –
kwargs (Any) –
Return type
List[langchain.schema.Document]
max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
:param query: Text to look up documents similar to.
:param k: Number of Documents to return.
:param fetch_k: Number of Documents to fetch to pass to MMR algorithm.
:param lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results, with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Optional.
Returns
List of Documents selected by maximal marginal relevance.
Parameters
query (str) –
k (int) –
fetch_k (int) –
lambda_mult (float) –
kwargs (Any) –
Return type
List[langchain.schema.Document]
classmethod from_texts(texts, embedding, metadatas=None, **kwargs)[source]
Create a Cassandra vectorstore from raw texts.
No support for specifying text IDs.
Returns
a Cassandra vectorstore.
Parameters
texts (List[str]) –
embedding (langchain.embeddings.base.Embeddings) –
metadatas (Optional[List[dict]]) –
kwargs (Any) –
Return type
langchain.vectorstores.cassandra.CVST
classmethod from_documents(documents, embedding, **kwargs)[source]
Create a Cassandra vectorstore from a document list.
No support for specifying text IDs.
Returns
a Cassandra vectorstore.
Parameters
documents (List[langchain.schema.Document]) –
embedding (langchain.embeddings.base.Embeddings) –
kwargs (Any) –
Return type
langchain.vectorstores.cassandra.CVST
class langchain.vectorstores.Chroma(collection_name='langchain', embedding_function=None, persist_directory=None, client_settings=None, collection_metadata=None, client=None)[source]ο
Bases: langchain.vectorstores.base.VectorStore | https://api.python.langchain.com/en/latest/modules/vectorstores.html |
81b746c66a31-25 | Bases: langchain.vectorstores.base.VectorStore
Wrapper around ChromaDB embeddings platform.
To use, you should have the chromadb python package installed.
Example
from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = Chroma("langchain_store", embeddings)
Parameters
collection_name (str) β
embedding_function (Optional[Embeddings]) β
persist_directory (Optional[str]) β
client_settings (Optional[chromadb.config.Settings]) β
collection_metadata (Optional[Dict]) β
client (Optional[chromadb.Client]) β
Return type
None
add_texts(texts, metadatas=None, ids=None, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional) β Optional list of metadatas.
ids (Optional[List[str]], optional) β Optional list of IDs.
kwargs (Any) β
Returns
List of IDs of the added texts.
Return type
List[str]
similarity_search(query, k=4, filter=None, **kwargs)[source]ο
Run similarity search with Chroma.
Parameters
query (str) β Query text to search for.
k (int) β Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) β Filter by metadata. Defaults to None.
kwargs (Any) β
Returns
List of documents most similar to the query text.
Return type
List[Document]
similarity_search_by_vector(embedding, k=4, filter=None, **kwargs)[source]ο
Return docs most similar to embedding vector. | https://api.python.langchain.com/en/latest/modules/vectorstores.html |
81b746c66a31-26 | Return docs most similar to embedding vector.
:param embedding: Embedding to look up documents similar to.
:type embedding: str
:param k: Number of Documents to return. Defaults to 4.
:type k: int
:param filter: Filter by metadata. Defaults to None.
:type filter: Optional[Dict[str, str]]
Returns
List of Documents most similar to the query vector.
Parameters
embedding (List[float]) β
k (int) β
filter (Optional[Dict[str, str]]) β
kwargs (Any) β
Return type
List[langchain.schema.Document]
similarity_search_with_score(query, k=4, filter=None, **kwargs)[source]ο
Run similarity search with Chroma with distance.
Parameters
query (str) β Query text to search for.
k (int) β Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) β Filter by metadata. Defaults to None.
kwargs (Any) β
Returns
List of documents most similar to
the query text and cosine distance in float for each.
Lower score represents more similarity.
Return type
List[Tuple[Document, float]]
max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, filter=None, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding (List[float]) β Embedding to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
fetch_k (int) β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult (float) – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
filter (Optional[Dict[str, str]]) β Filter by metadata. Defaults to None.
kwargs (Any) β
Returns
List of Documents selected by maximal marginal relevance.
Return type
List[langchain.schema.Document]
max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, filter=None, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
fetch_k (int) β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult (float) β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
filter (Optional[Dict[str, str]]) β Filter by metadata. Defaults to None.
kwargs (Any) β
Returns
List of Documents selected by maximal marginal relevance.
Return type
List[langchain.schema.Document]
delete_collection()[source]ο
Delete the collection.
Return type
None
get(ids=None, where=None, limit=None, offset=None, where_document=None, include=None)[source]ο
Gets the collection.
Parameters
ids (Optional[OneOrMany[ID]]) β The ids of the embeddings to get. Optional.
where (Optional[Where]) – A Where type dict used to filter results by.
E.g. {"color": "red", "price": 4.20}. Optional.
limit (Optional[int]) β The number of documents to return. Optional.
offset (Optional[int]) β The offset to start returning results from.
Useful for paging results with limit. Optional.
where_document (Optional[WhereDocument]) β A WhereDocument type dict used to filter by the documents.
E.g. {$contains: {"text": "hello"}}. Optional.
include (Optional[List[str]]) – A list of what to include in the results.
Can contain "embeddings", "metadatas", "documents".
Ids are always included.
Defaults to ["metadatas", "documents"]. Optional.
Return type
Dict[str, Any]
persist()[source]ο
Persist the collection.
This can be used to explicitly persist the data to disk.
It will also be called automatically when the object is destroyed.
Return type
None
update_document(document_id, document)[source]ο
Update a document in the collection.
Parameters
document_id (str) β ID of the document to update.
document (Document) β Document to update.
Return type
None
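A hedged sketch of update_document (the document ID is hypothetical; in practice use an ID returned by add_texts):
from langchain.schema import Document

vectorstore.update_document(
    document_id="doc-123",  # hypothetical ID
    document=Document(page_content="revised text", metadata={"source": "edit"}),
)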
classmethod from_texts(texts, embedding=None, metadatas=None, ids=None, collection_name='langchain', persist_directory=None, client_settings=None, client=None, **kwargs)[source]ο
Create a Chroma vectorstore from raw texts.
If a persist_directory is specified, the collection will be persisted there.
Otherwise, the data will be ephemeral in-memory.
Parameters
texts (List[str]) β List of texts to add to the collection.
collection_name (str) – Name of the collection to create.
persist_directory (Optional[str]) β Directory to persist the collection.
embedding (Optional[Embeddings]) β Embedding function. Defaults to None.
metadatas (Optional[List[dict]]) β List of metadatas. Defaults to None.
ids (Optional[List[str]]) β List of document IDs. Defaults to None.
client_settings (Optional[chromadb.config.Settings]) β Chroma client settings
client (Optional[chromadb.Client]) β
kwargs (Any) β
Returns
Chroma vectorstore.
Return type
Chroma
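A minimal sketch of from_texts with persistence (assumes the chromadb package and OpenAI credentials are available):
from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings

texts = ["Chroma can persist to disk.", "Collections are named."]
vectorstore = Chroma.from_texts(
    texts,
    embedding=OpenAIEmbeddings(),
    persist_directory="./chroma_store",  # omit for an ephemeral in-memory store
)
vectorstore.persist()  # flush to disk explicitly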
classmethod from_documents(documents, embedding=None, ids=None, collection_name='langchain', persist_directory=None, client_settings=None, client=None, **kwargs)[source]ο
Create a Chroma vectorstore from a list of documents.
If a persist_directory is specified, the collection will be persisted there.
Otherwise, the data will be ephemeral in-memory.
Parameters
collection_name (str) β Name of the collection to create.
persist_directory (Optional[str]) β Directory to persist the collection.
ids (Optional[List[str]]) β List of document IDs. Defaults to None.
documents (List[Document]) β List of documents to add to the vectorstore.
embedding (Optional[Embeddings]) β Embedding function. Defaults to None.
client_settings (Optional[chromadb.config.Settings]) β Chroma client settings
client (Optional[chromadb.Client]) β
kwargs (Any) β
Returns
Chroma vectorstore.
Return type
Chroma
delete(ids)[source]ο
Delete by vector IDs.
Parameters
ids (List[str]) β List of ids to delete.
Return type
None
class langchain.vectorstores.Clickhouse(embedding, config=None, **kwargs)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Wrapper around ClickHouse vector database
You need the clickhouse-connect python package and a valid account
to connect to ClickHouse.
ClickHouse can not only search with simple vector indexes;
it also supports complex queries with multiple conditions,
constraints, and even sub-queries.
For more information, please visit the
[ClickHouse official site](https://clickhouse.com/clickhouse).
Parameters
embedding (Embeddings) β
config (Optional[ClickhouseSettings]) β
kwargs (Any) β
Return type
None
escape_str(value)[source]ο
Parameters
value (str) β
Return type
str
add_texts(texts, metadatas=None, batch_size=32, ids=None, **kwargs)[source]ο
Insert more texts through the embeddings and add to the VectorStore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the VectorStore.
ids (Optional[Iterable[str]]) β Optional list of ids to associate with the texts.
batch_size (int) β Batch size of insertion
metadata β Optional column data to be inserted
metadatas (Optional[List[dict]]) β
kwargs (Any) β
Returns
List of ids from adding the texts into the VectorStore.
Return type
List[str]
classmethod from_texts(texts, embedding, metadatas=None, config=None, text_ids=None, batch_size=32, **kwargs)[source]ο
Create ClickHouse wrapper with existing texts
Parameters
embedding_function (Embeddings) β Function to extract text embedding
texts (Iterable[str]) β List or tuple of strings to be added
config (ClickHouseSettings, Optional) β ClickHouse configuration
text_ids (Optional[Iterable], optional) β IDs for the texts.
Defaults to None.
batch_size (int, optional) β Batchsize when transmitting data to ClickHouse.
Defaults to 32.
metadata (List[dict], optional) β metadata to texts. Defaults to None.
into (Other keyword arguments will pass) β [clickhouse-connect](https://clickhouse.com/docs/en/integrations/python#clickhouse-connect-driver-api)
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[Dict[Any, Any]]]) β
kwargs (Any) β
Returns
ClickHouse Index
Return type
langchain.vectorstores.clickhouse.Clickhouse
similarity_search(query, k=4, where_str=None, **kwargs)[source]ο
Perform a similarity search with ClickHouse
Parameters
query (str) β query string
k (int, optional) β Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) β where condition string.
Defaults to None.
NOTE – Do not let end users fill this in, and always be aware
of SQL injection. When dealing with metadatas, remember to
use {self.metadata_column}.attribute instead of attribute
alone. The default name for it is metadata.
kwargs (Any) β
Returns
List of Documents
Return type
List[Document]
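A hedged sketch of a filtered search (the metadata attribute name is hypothetical; build where_str only from trusted input, per the note above):
docs = store.similarity_search(
    "What is a vector index?",
    k=4,
    where_str=f"{store.metadata_column}.source = 'manual'",
)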
similarity_search_by_vector(embedding, k=4, where_str=None, **kwargs)[source]ο
Perform a similarity search with ClickHouse by vectors
Parameters
embedding (List[float]) – query embedding vector
k (int, optional) β Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) β where condition string.
Defaults to None.
NOTE – Do not let end users fill this in, and always be aware
of SQL injection. When dealing with metadatas, remember to
use {self.metadata_column}.attribute instead of attribute
alone. The default name for it is metadata.
embedding (List[float]) β
kwargs (Any) β
Returns
List of Documents most similar to the query vector
Return type
List[Document]
similarity_search_with_relevance_scores(query, k=4, where_str=None, **kwargs)[source]ο
Perform a similarity search with ClickHouse
Parameters
query (str) β query string
k (int, optional) β Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) β where condition string.
Defaults to None.
NOTE – Do not let end users fill this in, and always be aware
of SQL injection. When dealing with metadatas, remember to
use {self.metadata_column}.attribute instead of attribute
alone. The default name for it is metadata.
kwargs (Any) β
Returns
List of documents
Return type
List[Document]
drop()[source]ο
Helper function: Drop data
Return type
None
property metadata_column: strο
pydantic settings langchain.vectorstores.ClickhouseSettings[source]ο
Bases: pydantic.env_settings.BaseSettings
ClickHouse Client Configuration
Attribute:
clickhouse_host (str) : URL to connect to the ClickHouse backend. Defaults to 'localhost'.
clickhouse_port (int) : URL port to connect with HTTP. Defaults to 8123.
username (str) : Username to login. Defaults to None.
password (str) : Password to login. Defaults to None.
index_type (str) : index type string. Defaults to 'annoy'.
index_param (list) : index build parameters.
index_query_params (dict) : index query parameters.
database (str) : Database name to find the table. Defaults to 'default'.
table (str) : Table name to operate on. Defaults to 'langchain'.
metric (str) : Metric to compute distance, supported are ('angular',
'euclidean', 'manhattan', 'hamming', 'dot'). Defaults to 'angular'.
https://github.com/spotify/annoy/blob/main/src/annoymodule.cc#L149-L169
column_map (Dict) : Column type map to project column names onto langchain
semantics. Must have keys: text, id, vector; must be the same size as the
number of columns. For example:
.. code-block:: python

    {
        'id': 'text_id',
        'uuid': 'global_unique_id',
        'embedding': 'text_embedding',
        'document': 'text_plain',
        'metadata': 'metadata_dictionary_in_json',
    }

Defaults to identity map.
Show JSON schema{
"title": "ClickhouseSettings", | https://api.python.langchain.com/en/latest/modules/vectorstores.html |
81b746c66a31-34 | Show JSON schema{
"title": "ClickhouseSettings",
"description": "ClickHouse Client Configuration\n\nAttribute:\n clickhouse_host (str) : An URL to connect to MyScale backend.\n Defaults to 'localhost'.\n clickhouse_port (int) : URL port to connect with HTTP. Defaults to 8443.\n username (str) : Username to login. Defaults to None.\n password (str) : Password to login. Defaults to None.\n index_type (str): index type string.\n index_param (list): index build parameter.\n index_query_params(dict): index query parameters.\n database (str) : Database name to find the table. Defaults to 'default'.\n table (str) : Table name to operate on.\n Defaults to 'vector_table'.\n metric (str) : Metric to compute distance,\n supported are ('angular', 'euclidean', 'manhattan', 'hamming',\n 'dot'). Defaults to 'angular'.\n https://github.com/spotify/annoy/blob/main/src/annoymodule.cc#L149-L169\n\n column_map (Dict) : Column type map to project column name onto langchain\n semantics. Must have keys: `text`, `id`, `vector`,\n must be same size to number of columns. For example:\n .. code-block:: python\n\n {\n 'id': 'text_id',\n 'uuid': 'global_unique_id'\n 'embedding': 'text_embedding',\n 'document': 'text_plain',\n 'metadata': 'metadata_dictionary_in_json',\n }\n\n Defaults to identity map.",
"type": "object",
"properties": {
"host": { | https://api.python.langchain.com/en/latest/modules/vectorstores.html |
81b746c66a31-35 | "type": "object",
"properties": {
"host": {
"title": "Host",
"default": "localhost",
"env_names": "{'clickhouse_host'}",
"type": "string"
},
"port": {
"title": "Port",
"default": 8123,
"env_names": "{'clickhouse_port'}",
"type": "integer"
},
"username": {
"title": "Username",
"env_names": "{'clickhouse_username'}",
"type": "string"
},
"password": {
"title": "Password",
"env_names": "{'clickhouse_password'}",
"type": "string"
},
"index_type": {
"title": "Index Type",
"default": "annoy",
"env_names": "{'clickhouse_index_type'}",
"type": "string"
},
"index_param": {
"title": "Index Param",
"default": [
"'L2Distance'",
100
],
"env_names": "{'clickhouse_index_param'}",
"anyOf": [
{
"type": "array",
"items": {}
},
{
"type": "object"
}
]
},
"index_query_params": {
"title": "Index Query Params",
"default": {},
"env_names": "{'clickhouse_index_query_params'}",
"type": "object",
"additionalProperties": {
"type": "string"
}
},
"column_map": {
"title": "Column Map",
"default": {
"id": "id",
"uuid": "uuid",
"document": "document",
"embedding": "embedding",
"metadata": "metadata"
},
"env_names": "{'clickhouse_column_map'}",
"type": "object",
"additionalProperties": {
"type": "string"
}
},
"database": {
"title": "Database",
"default": "default",
"env_names": "{'clickhouse_database'}",
"type": "string"
},
"table": {
"title": "Table",
"default": "langchain",
"env_names": "{'clickhouse_table'}",
"type": "string"
},
"metric": {
"title": "Metric",
"default": "angular",
"env_names": "{'clickhouse_metric'}",
"type": "string"
}
},
"additionalProperties": false
}
Config
env_file: str = .env
env_file_encoding: str = utf-8
env_prefix: str = clickhouse_
Fields
column_map (Dict[str, str])
database (str)
host (str)
index_param (Optional[Union[List, Dict]])
index_query_params (Dict[str, str])
index_type (str)
metric (str)
password (Optional[str])
port (int)
table (str)
username (Optional[str])
attribute column_map: Dict[str, str] = {'document': 'document', 'embedding': 'embedding', 'id': 'id', 'metadata': 'metadata', 'uuid': 'uuid'}ο
attribute database: str = 'default'ο
attribute host: str = 'localhost'ο
attribute index_param: Optional[Union[List, Dict]] = ["'L2Distance'", 100]ο
attribute index_query_params: Dict[str, str] = {}ο
attribute index_type: str = 'annoy'ο
attribute metric: str = 'angular'ο
attribute password: Optional[str] = Noneο
attribute port: int = 8123ο
attribute table: str = 'langchain'ο
attribute username: Optional[str] = Noneο
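A minimal configuration sketch (assumes a reachable ClickHouse server; the values shown are the documented defaults). Each field can also be supplied via an environment variable with the clickhouse_ prefix, per the Config block above:
from langchain.vectorstores import Clickhouse, ClickhouseSettings
from langchain.embeddings.openai import OpenAIEmbeddings

settings = ClickhouseSettings(host="localhost", port=8123, table="langchain")
store = Clickhouse(embedding=OpenAIEmbeddings(), config=settings)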
class langchain.vectorstores.DeepLake(dataset_path='./deeplake/', token=None, embedding_function=None, read_only=False, ingestion_batch_size=1000, num_workers=0, verbose=True, exec_option='python', **kwargs)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Wrapper around Deep Lake, a data lake for deep learning applications.
We integrated Deep Lake's similarity search and filtering for fast prototyping.
Now, it supports Tensor Query Language (TQL) for production use cases
over billions of rows.
Why Deep Lake?
Not only stores embeddings, but also the original data with version control.
Serverless: doesn't require another service and can be used with major
cloud providers (S3, GCS, etc.)
More than just a multi-modal vector store: you can use the dataset
to fine-tune your own LLM models.
To use, you should have the deeplake python package installed.
Example
from langchain.vectorstores import DeepLake
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = DeepLake("langchain_store", embeddings.embed_query)
Parameters
dataset_path (str) β
token (Optional[str]) β
embedding_function (Optional[Embeddings]) β
read_only (bool) β
ingestion_batch_size (int) β
num_workers (int) β
verbose (bool) β
exec_option (str) β
kwargs (Any) β
Return type
None
add_texts(texts, metadatas=None, ids=None, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Examples
>>> ids = deeplake_vectorstore.add_texts(
... texts = <list_of_texts>,
... metadatas = <list_of_metadata_jsons>,
... ids = <list_of_ids>,
... )
Parameters
texts (Iterable[str]) β Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional) β Optional list of metadatas.
ids (Optional[List[str]], optional) β Optional list of IDs.
**kwargs β other optional keyword arguments.
kwargs (Any) β
Returns
List of IDs of the added texts.
Return type
List[str]
similarity_search(query, k=4, **kwargs)[source]ο
Return docs most similar to query.
Examples
>>> # Search using an embedding
>>> data = vector_store.similarity_search(
... query=<your_query>,
... k=<num_items>,
... exec_option=<preferred_exec_option>,
... )
>>> # Run tql search:
>>> data = vector_store.tql_search(
... tql_query="SELECT * WHERE id == <id>",
... exec_option="compute_engine",
... )
Parameters
k (int) β Number of Documents to return. Defaults to 4.
query (str) β Text to look up similar documents.
**kwargs – Additional keyword arguments include:
embedding (Callable): Embedding function to use. Defaults to None.
distance_metric (str): 'L2' for Euclidean, 'L1' for Nuclear, 'max'
for L-infinity, 'cos' for cosine, 'dot' for dot product.
Defaults to 'L2'.
filter (Union[Dict, Callable], optional): Additional filter before embedding search.
- Dict: Key-value search on tensors of htype json
(sample must satisfy all key-value filters).
Dict = {'tensor_1': {'key': value}, 'tensor_2': {'key': value}}
- Function: Compatible with deeplake.filter.
Defaults to None.
exec_option (str): Supports 3 ways to perform searching:
'python', 'compute_engine', or 'tensor_db'. Defaults to 'python'.
- 'python': Pure-python implementation for the client.
WARNING: not recommended for big datasets.
- 'compute_engine': C++ implementation of the Compute Engine for
the client. Not for in-memory or local datasets.
- 'tensor_db': Managed Tensor Database for storage and query.
Only for data in the Deep Lake Managed Database.
Use runtime = {'db_engine': True} during dataset creation.
kwargs (Any) β
Returns
List of Documents most similar to the query vector.
Return type
List[Document]
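A hedged sketch combining the keyword arguments above (assumes an existing Deep Lake vector store; the query is hypothetical):
docs = vector_store.similarity_search(
    "how do I fine-tune a model?",
    k=4,
    distance_metric="cos",  # cosine similarity instead of the default L2
    exec_option="python",   # fine for small datasets; see the warning above
)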
similarity_search_by_vector(embedding, k=4, **kwargs)[source]ο
Return docs most similar to embedding vector.
Examples
>>> # Search using an embedding
>>> data = vector_store.similarity_search_by_vector(
... embedding=<your_embedding>,
... k=<num_items_to_return>,
... exec_option=<preferred_exec_option>,
... )
Parameters
embedding (Union[List[float], np.ndarray]) β Embedding to find similar docs.
k (int) β Number of Documents to return. Defaults to 4.
**kwargs – Additional keyword arguments including:
filter (Union[Dict, Callable], optional):
Additional filter before embedding search.
- Dict - Key-value search on tensors of htype json. True
if all key-value filters are satisfied.
Dict = {'tensor_name_1': {'key': value},
'tensor_name_2': {'key': value}}
- Function - Any function compatible with deeplake.filter.
Defaults to None.
exec_option (str): Options for search execution include
'python', 'compute_engine', or 'tensor_db'. Defaults to 'python'.
- 'python' - Pure-python implementation running on the client.
Can be used for data stored anywhere. WARNING: using this
option with big datasets is discouraged due to potential
memory issues.
- 'compute_engine' - Performant C++ implementation of the Deep
Lake Compute Engine. Runs on the client and can be used for
any data stored in or connected to Deep Lake. It cannot be
used with in-memory or local datasets.
- 'tensor_db' - Performant, fully-hosted Managed Tensor Database.
Responsible for storage and query execution. Only available
for data stored in the Deep Lake Managed Database.
To store datasets in this database, specify
runtime = {'db_engine': True} during dataset creation.
distance_metric (str): 'L2' for Euclidean, 'L1' for Nuclear,
'max' for L-infinity distance, 'cos' for cosine similarity,
'dot' for dot product. Defaults to 'L2'.
kwargs (Any) β
Returns
List of Documents most similar to the query vector.
Return type
List[Document]
similarity_search_with_score(query, k=4, **kwargs)[source]ο
Run similarity search with Deep Lake with distance returned.
Examples:
>>> data = vector_store.similarity_search_with_score(
...     query=<your_query>,
...     embedding=<your_embedding_function>,
...     k=<number_of_items_to_return>,
...     exec_option=<preferred_exec_option>,
... )
Parameters
query (str) β Query text to search for.
k (int) β Number of results to return. Defaults to 4.
**kwargs – Additional keyword arguments. Some of these arguments are:
distance_metric: 'L2' for Euclidean, 'L1' for Nuclear, 'max' for L-infinity
distance, 'cos' for cosine similarity, 'dot' for dot product.
Defaults to 'L2'.
filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.
embedding_function (Callable): Embedding function to use. Defaults to None.
exec_option (str): DeepLakeVectorStore supports 3 ways to perform
searching. It could be either 'python', 'compute_engine' or
'tensor_db'. Defaults to 'python'.
- 'python' - Pure-python implementation running on the client.
Can be used for data stored anywhere. WARNING: using this
option with big datasets is discouraged due to potential
memory issues.
- 'compute_engine' - Performant C++ implementation of the Deep
Lake Compute Engine. Runs on the client and can be used for
any data stored in or connected to Deep Lake. It cannot be used
with in-memory or local datasets.
- 'tensor_db' - Performant, fully-hosted Managed Tensor Database.
Responsible for storage and query execution. Only available for
data stored in the Deep Lake Managed Database. To store datasets
in this database, specify runtime = {'db_engine': True}
during dataset creation.
kwargs (Any) β
Returns
List of documents most similar to the query text, with distance in float.
Return type
List[Tuple[Document, float]]
max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, exec_option=None, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance. Maximal marginal
relevance optimizes for similarity to query AND diversity among selected docs.
Examples:
>>> data = vector_store.max_marginal_relevance_search_by_vector(
...     embedding=<your_embedding>,
...     fetch_k=<elements_to_fetch_before_mmr_search>,
...     k=<number_of_items_to_return>,
...     exec_option=<preferred_exec_option>,
... )
Parameters
embedding (List[float]) β Embedding to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
fetch_k (int) β Number of Documents to fetch for MMR algorithm.
lambda_mult (float) β Number between 0 and 1 determining the degree of diversity.
0 corresponds to max diversity and 1 to min diversity. Defaults to 0.5.
exec_option (str) – DeepLakeVectorStore supports 3 ways for searching.
Could be 'python', 'compute_engine' or 'tensor_db'. Defaults to
'python'.
- 'python' - Pure-python implementation running on the client.
Can be used for data stored anywhere. WARNING: using this
option with big datasets is discouraged due to potential
memory issues.
- 'compute_engine' - Performant C++ implementation of the Deep
Lake Compute Engine. Runs on the client and can be used for
any data stored in or connected to Deep Lake. It cannot be used
with in-memory or local datasets.
- 'tensor_db' - Performant, fully-hosted Managed Tensor Database.
Responsible for storage and query execution. Only available for
data stored in the Deep Lake Managed Database. To store datasets
in this database, specify runtime = {'db_engine': True}
during dataset creation.
**kwargs β Additional keyword arguments.
kwargs (Any) β
Returns
List[Documents] - A list of documents.
Return type
List[langchain.schema.Document]
max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, exec_option=None, **kwargs)[source]ο
Return docs selected using maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Examples:
>>> # Search using an embedding
>>> data = vector_store.max_marginal_relevance_search(
...     query=<query_to_search>,
...     embedding_function=<embedding_function_for_query>,
...     k=<number_of_items_to_return>,
...     exec_option=<preferred_exec_option>,
... )
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
fetch_k (int) β Number of Documents for MMR algorithm.
lambda_mult (float) – Value between 0 and 1. 0 corresponds
to maximum diversity and 1 to minimum.
Defaults to 0.5.
exec_option (str) – Supports 3 ways to perform searching.
- 'python' - Pure-python implementation running on the client.
Can be used for data stored anywhere. WARNING: using this
option with big datasets is discouraged due to potential
memory issues.
- 'compute_engine' - Performant C++ implementation of the Deep
Lake Compute Engine. Runs on the client and can be used for
any data stored in or connected to Deep Lake. It cannot be
used with in-memory or local datasets.
- 'tensor_db' - Performant, fully-hosted Managed Tensor Database.
Responsible for storage and query execution. Only available
for data stored in the Deep Lake Managed Database. To store
datasets in this database, specify
runtime = {'db_engine': True} during dataset creation.
**kwargs β Additional keyword arguments
kwargs (Any) β
Returns
List of Documents selected by maximal marginal relevance.
Raises
ValueError – when MMR search is enabled but the embedding function is
not specified.
Return type
List[langchain.schema.Document]
classmethod from_texts(texts, embedding=None, metadatas=None, ids=None, dataset_path='./deeplake/', **kwargs)[source]ο
Create a Deep Lake dataset from raw texts.
If a dataset_path is specified, the dataset will be persisted in that location,
otherwise by default at ./deeplake
Examples:
>>> # Create a vector store from texts
>>> vector_store = DeepLake.from_texts(
...     texts=<the_texts_that_you_want_to_embed>,
...     embedding_function=<embedding_function_for_query>,
...     k=<number_of_items_to_return>,
...     exec_option=<preferred_exec_option>,
... )
Parameters
dataset_path (str) β
The full path to the dataset. Can be:
- Deep Lake cloud path of the form hub://username/dataset_name.
To write to Deep Lake cloud datasets, ensure that you are logged in
to Deep Lake (use 'activeloop login' from the command line).
- AWS S3 path of the form s3://bucketname/path/to/dataset.
Credentials are required in the environment.
- Google Cloud Storage path of the form gcs://bucketname/path/to/dataset.
Credentials are required in the environment.
- Local file system path of the form ./path/to/dataset or
~/path/to/dataset or path/to/dataset.
- In-memory path of the form mem://path/to/dataset, which doesn't
save the dataset but keeps it in memory instead.
Should be used only for testing as it does not persist.
texts (List[str]) – List of texts to add.
embedding (Optional[Embeddings]) β Embedding function. Defaults to None.
Note, in other places, it is called embedding_function.
metadatas (Optional[List[dict]]) β List of metadatas. Defaults to None.
ids (Optional[List[str]]) β List of document IDs. Defaults to None.
**kwargs β Additional keyword arguments.
kwargs (Any) β
Returns
Deep Lake dataset.
Return type
DeepLake
Raises
ValueError β If βembeddingβ is provided in kwargs. This is deprecated,
please use embedding_function instead.
delete(ids=None, filter=None, delete_all=None)[source]ο
Delete the entities in the dataset.
Parameters
ids (Optional[List[str]], optional) β The document_ids to delete.
Defaults to None.
filter (Optional[Dict[str, str]], optional) – The filter to delete by.
Defaults to None.
delete_all (Optional[bool], optional) β Whether to drop the dataset.
Defaults to None.
Returns
Whether the delete operation was successful.
Return type
bool
classmethod force_delete_by_path(path)[source]ο
Force delete dataset by path.
Parameters
path (str) β path of the dataset to delete.
Raises
ValueError β if deeplake is not installed.
Return type
None
delete_dataset()[source]ο
Delete the collection.
Return type
None
class langchain.vectorstores.DocArrayHnswSearch(doc_index, embedding)[source]ο
Bases: langchain.vectorstores.docarray.base.DocArrayIndex
Wrapper around HnswLib storage.
To use it, you should have the docarray package with version >=0.32.0 installed.
You can install it with pip install "langchain[docarray]".
Parameters
doc_index (BaseDocIndex) β
embedding (langchain.embeddings.base.Embeddings) β
classmethod from_params(embedding, work_dir, n_dim, dist_metric='cosine', max_elements=1024, index=True, ef_construction=200, ef=10, M=16, allow_replace_deleted=True, num_threads=1, **kwargs)[source]ο
Initialize DocArrayHnswSearch store.
Parameters
embedding (Embeddings) β Embedding function.
work_dir (str) β path to the location where all the data will be stored.
n_dim (int) β dimension of an embedding.
dist_metric (str) β Distance metric for DocArrayHnswSearch can be one of:
'cosine', 'ip', and 'l2'. Defaults to 'cosine'.
max_elements (int) β Maximum number of vectors that can be stored.
Defaults to 1024.
index (bool) β Whether an index should be built for this field.
Defaults to True.
ef_construction (int) β defines a construction time/accuracy trade-off.
Defaults to 200.
ef (int) β parameter controlling query time/accuracy trade-off.
Defaults to 10.
M (int) β parameter that defines the maximum number of outgoing
connections in the graph. Defaults to 16.
allow_replace_deleted (bool) β Enables replacing of deleted elements
with new added ones. Defaults to True.
num_threads (int) β Sets the number of cpu threads to use. Defaults to 1.
**kwargs β Other keyword arguments to be passed to the get_doc_cls method.
kwargs (Any) β
Return type
langchain.vectorstores.docarray.hnsw.DocArrayHnswSearch
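A minimal sketch of from_params (n_dim=1536 assumes OpenAI embeddings; adjust it to your embedding model's output size):
from langchain.vectorstores import DocArrayHnswSearch
from langchain.embeddings.openai import OpenAIEmbeddings

store = DocArrayHnswSearch.from_params(
    embedding=OpenAIEmbeddings(),
    work_dir="./hnsw_index",
    n_dim=1536,
    dist_metric="cosine",
)
store.add_texts(["hello", "world"])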
classmethod from_texts(texts, embedding, metadatas=None, work_dir=None, n_dim=None, **kwargs)[source]ο
Create a DocArrayHnswSearch store and insert data.
Parameters
texts (List[str]) β Text data.
embedding (Embeddings) β Embedding function.
metadatas (Optional[List[dict]]) β Metadata for each text if it exists.
Defaults to None.
work_dir (str) β path to the location where all the data will be stored.
n_dim (int) β dimension of an embedding.
**kwargs β Other keyword arguments to be passed to the __init__ method.
kwargs (Any) β
Returns
DocArrayHnswSearch Vector Store
Return type
langchain.vectorstores.docarray.hnsw.DocArrayHnswSearch
class langchain.vectorstores.DocArrayInMemorySearch(doc_index, embedding)[source]ο
Bases: langchain.vectorstores.docarray.base.DocArrayIndex
Wrapper around in-memory storage for exact search.
To use it, you should have the docarray package with version >=0.32.0 installed.
You can install it with pip install "langchain[docarray]".
Parameters
doc_index (BaseDocIndex) β
embedding (langchain.embeddings.base.Embeddings) β
classmethod from_params(embedding, metric='cosine_sim', **kwargs)[source]ο
Initialize DocArrayInMemorySearch store.
Parameters
embedding (Embeddings) β Embedding function.
metric (str) – metric for exact nearest-neighbor search.
Can be one of: 'cosine_sim', 'euclidean_dist' and 'sqeuclidean_dist'.
Defaults to 'cosine_sim'.
**kwargs β Other keyword arguments to be passed to the get_doc_cls method.
kwargs (Any) β
Return type
langchain.vectorstores.docarray.in_memory.DocArrayInMemorySearch
classmethod from_texts(texts, embedding, metadatas=None, **kwargs)[source]ο
Create a DocArrayInMemorySearch store and insert data.
Parameters
texts (List[str]) β Text data.
embedding (Embeddings) β Embedding function.
metadatas (Optional[List[Dict[Any, Any]]]) β Metadata for each text
if it exists. Defaults to None.
metric (str) – metric for exact nearest-neighbor search.
Can be one of: 'cosine_sim', 'euclidean_dist' and 'sqeuclidean_dist'.
Defaults to 'cosine_sim'.
kwargs (Any) β
Returns
DocArrayInMemorySearch Vector Store
Return type
langchain.vectorstores.docarray.in_memory.DocArrayInMemorySearch
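A minimal sketch (exact in-memory search, so best suited to small corpora and tests):
from langchain.vectorstores import DocArrayInMemorySearch
from langchain.embeddings.openai import OpenAIEmbeddings

store = DocArrayInMemorySearch.from_texts(
    ["foo", "bar"],
    OpenAIEmbeddings(),
    metric="cosine_sim",
)
docs = store.similarity_search("foo", k=1)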
class langchain.vectorstores.ElasticVectorSearch(elasticsearch_url, index_name, embedding, *, ssl_verify=None)[source]ο
Bases: langchain.vectorstores.base.VectorStore, abc.ABC
Wrapper around Elasticsearch as a vector database.
To connect to an Elasticsearch instance that does not require
login credentials, pass the Elasticsearch URL and index name along with the
embedding object to the constructor.
Example
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embedding = OpenAIEmbeddings()
elastic_vector_search = ElasticVectorSearch(
elasticsearch_url="http://localhost:9200",
index_name="test_index",
embedding=embedding
)
To connect to an Elasticsearch instance that requires login credentials,
including Elastic Cloud, use the Elasticsearch URL format
https://username:password@es_host:9243. For example, to connect to Elastic
Cloud, create the Elasticsearch URL with the required authentication details and
pass it to the ElasticVectorSearch constructor as the named parameter
elasticsearch_url.
You can obtain your Elastic Cloud URL and login credentials by logging in to the
Elastic Cloud console at https://cloud.elastic.co, selecting your deployment, and
navigating to the "Deployments" page.
To obtain your Elastic Cloud password for the default "elastic" user:
Log in to the Elastic Cloud console at https://cloud.elastic.co
Go to "Security" > "Users"
Locate the "elastic" user and click "Edit"
Click "Reset password"
Follow the prompts to reset the password
The format for Elastic Cloud URLs is
https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.
Example
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embedding = OpenAIEmbeddings()
elastic_host = "cluster_id.region_id.gcp.cloud.es.io"
elasticsearch_url = f"https://username:password@{elastic_host}:9243"
elastic_vector_search = ElasticVectorSearch(
elasticsearch_url=elasticsearch_url,
index_name="test_index",
embedding=embedding
)
Parameters
elasticsearch_url (str) β The URL for the Elasticsearch instance.
index_name (str) β The name of the Elasticsearch index for the embeddings.
embedding (Embeddings) β An object that provides the ability to embed text.
It should be an instance of a class that subclasses the Embeddings
abstract base class, such as OpenAIEmbeddings()
ssl_verify (Optional[Dict[str, Any]]) β
Raises
ValueError β If the elasticsearch python package is not installed.
add_texts(texts, metadatas=None, refresh_indices=True, ids=None, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the vectorstore.
metadatas (Optional[List[dict]]) β Optional list of metadatas associated with the texts.
refresh_indices (bool) β bool to refresh ElasticSearch indices
ids (Optional[List[str]]) β
kwargs (Any) β
Returns
List of ids from adding the texts into the vectorstore.
Return type
List[str]
similarity_search(query, k=4, filter=None, **kwargs)[source]ο
Return docs most similar to query.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
filter (Optional[dict]) β
kwargs (Any) β
Returns
List of Documents most similar to the query.
Return type
List[langchain.schema.Document]
similarity_search_with_score(query, k=4, filter=None, **kwargs)[source]ο
Return docs most similar to query.
:param query: Text to look up documents similar to.
:param k: Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
Parameters
query (str) β
k (int) β
filter (Optional[dict]) β
kwargs (Any) β
Return type
List[Tuple[langchain.schema.Document, float]]
classmethod from_texts(texts, embedding, metadatas=None, elasticsearch_url=None, index_name=None, refresh_indices=True, **kwargs)[source]ο
Construct ElasticVectorSearch wrapper from raw documents.
This is a user-friendly interface that:
Embeds documents.
Creates a new index for the embeddings in the Elasticsearch instance.
Adds the documents to the newly created Elasticsearch index.
This is intended to be a quick way to get started.
Example
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
elastic_vector_search = ElasticVectorSearch.from_texts(
texts,
embeddings,
elasticsearch_url="http://localhost:9200"
)
Parameters
texts (List[str]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
elasticsearch_url (Optional[str]) –
index_name (Optional[str]) β
refresh_indices (bool) β
kwargs (Any) β
Return type
langchain.vectorstores.elastic_vector_search.ElasticVectorSearch
create_index(client, index_name, mapping)[source]ο
Parameters
client (Any) β
index_name (str) β
mapping (Dict) β
Return type
None
client_search(client, index_name, script_query, size)[source]ο
Parameters
client (Any) β
index_name (str) β
script_query (Dict) β
size (int) β
Return type
Any
delete(ids)[source]ο
Delete by vector IDs.
Parameters
ids (List[str]) β List of ids to delete.
Return type
None
class langchain.vectorstores.FAISS(embedding_function, index, docstore, index_to_docstore_id, relevance_score_fn=<function _default_relevance_score_fn>, normalize_L2=False)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Wrapper around FAISS vector database.
To use, you should have the faiss python package installed.
Example
from langchain import FAISS
faiss = FAISS(embedding_function, index, docstore, index_to_docstore_id)
Parameters
embedding_function (Callable) β
index (Any) β
docstore (Docstore) β
index_to_docstore_id (Dict[int, str]) β
relevance_score_fn (Optional[Callable[[float], float]]) β
normalize_L2 (bool) β
add_texts(texts, metadatas=None, ids=None, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the vectorstore.
metadatas (Optional[List[dict]]) β Optional list of metadatas associated with the texts.
ids (Optional[List[str]]) β Optional list of unique IDs.
kwargs (Any) β
Returns
List of ids from adding the texts into the vectorstore.
Return type
List[str]
add_embeddings(text_embeddings, metadatas=None, ids=None, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
text_embeddings (Iterable[Tuple[str, List[float]]]) β Iterable pairs of string and embedding to
add to the vectorstore.
metadatas (Optional[List[dict]]) β Optional list of metadatas associated with the texts.
ids (Optional[List[str]]) β Optional list of unique IDs.
kwargs (Any) β
Returns
List of ids from adding the texts into the vectorstore.
Return type
List[str]
similarity_search_with_score_by_vector(embedding, k=4, filter=None, fetch_k=20, **kwargs)[source]ο
Return docs most similar to query.
Parameters
embedding (List[float]) β Embedding vector to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, Any]]) β Filter by metadata. Defaults to None.
fetch_k (int) β (Optional[int]) Number of Documents to fetch before filtering.
Defaults to 20.
**kwargs β kwargs to be passed to similarity search. Can include:
score_threshold: Optional, a floating point value between 0 to 1 to
filter the resulting set of retrieved docs
kwargs (Any) β
Returns
List of documents most similar to the query text and L2 distance
in float for each. Lower score represents more similarity.
Return type
List[Tuple[langchain.schema.Document, float]]
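A hedged sketch of vector-based scoring with the optional score_threshold kwarg documented above (assumes a populated index and an embeddings object):
query_vec = embeddings.embed_query("what is FAISS?")
results = faiss.similarity_search_with_score_by_vector(
    query_vec,
    k=4,
    score_threshold=0.8,  # filters the retrieved set, per the kwarg above
)
for doc, score in results:
    print(score, doc.page_content)  # lower L2 distance means more similar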
similarity_search_with_score(query, k=4, filter=None, fetch_k=20, **kwargs)[source]ο
Return docs most similar to query.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]) β Filter by metadata. Defaults to None.
fetch_k (int) β (Optional[int]) Number of Documents to fetch before filtering.
Defaults to 20.
kwargs (Any) β
Returns
List of documents most similar to the query text with
L2 distance in float. Lower score represents more similarity.
Return type
List[Tuple[langchain.schema.Document, float]]
similarity_search_by_vector(embedding, k=4, filter=None, fetch_k=20, **kwargs)[source]ο
Return docs most similar to embedding vector.
Parameters
embedding (List[float]) β Embedding to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]) β Filter by metadata. Defaults to None.
fetch_k (int) β (Optional[int]) Number of Documents to fetch before filtering.
Defaults to 20.
kwargs (Any) β
Returns
List of Documents most similar to the embedding.
Return type
List[langchain.schema.Document]
similarity_search(query, k=4, filter=None, fetch_k=20, **kwargs)[source]ο
Return docs most similar to query.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
fetch_k (int) β (Optional[int]) Number of Documents to fetch before filtering.
Defaults to 20.
kwargs (Any) β
Returns
List of Documents most similar to the query.
Return type
List[langchain.schema.Document]
max_marginal_relevance_search_with_score_by_vector(embedding, *, k=4, fetch_k=20, lambda_mult=0.5, filter=None)[source]ο
Return docs and their similarity scores selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding (List[float]) β Embedding to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
fetch_k (int) β Number of Documents to fetch before filtering to
pass to MMR algorithm.
lambda_mult (float) β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
filter (Optional[Dict[str, Any]]) β
Returns
List of Documents and similarity scores selected by maximal marginal relevance, with a score for each.
Return type
List[Tuple[langchain.schema.Document, float]]
max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, filter=None, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding (List[float]) β Embedding to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
fetch_k (int) β Number of Documents to fetch before filtering to
pass to MMR algorithm.
lambda_mult (float) β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
filter (Optional[Dict[str, Any]]) β
kwargs (Any) β
Returns
List of Documents selected by maximal marginal relevance.
Return type
List[langchain.schema.Document]
max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, filter=None, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
fetch_k (int) β Number of Documents to fetch before filtering (if needed) to
pass to MMR algorithm.
lambda_mult (float) β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
filter (Optional[Dict[str, Any]]) β
kwargs (Any) β
Returns
List of Documents selected by maximal marginal relevance.
Return type
List[langchain.schema.Document]
merge_from(target)[source]ο
Merge another FAISS object with the current one.
Add the target FAISS to the current one.
Parameters
target (langchain.vectorstores.faiss.FAISS) β FAISS object you wish to merge into the current one
Returns
None.
Return type
None
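A minimal sketch of merging two FAISS stores (assumes both were built with the same embedding function):
db1 = FAISS.from_texts(["foo"], embeddings)
db2 = FAISS.from_texts(["bar"], embeddings)
db1.merge_from(db2)  # db1 now contains both documents
print(len(db1.index_to_docstore_id))  # 2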
classmethod from_texts(texts, embedding, metadatas=None, ids=None, **kwargs)[source]ο
Construct FAISS wrapper from raw documents.
This is a user-friendly interface that:
Embeds documents.
Creates an in-memory docstore.
Initializes the FAISS database.
This is intended to be a quick way to get started.
Example
from langchain import FAISS
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
faiss = FAISS.from_texts(texts, embeddings)
Parameters
texts (List[str]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
ids (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.vectorstores.faiss.FAISS
classmethod from_embeddings(text_embeddings, embedding, metadatas=None, ids=None, **kwargs)[source]ο
Construct FAISS wrapper from raw documents.
This is a user-friendly interface that:
Embeds documents.
Creates an in-memory docstore.
Initializes the FAISS database.
This is intended to be a quick way to get started.
Example
from langchain import FAISS
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
text_embeddings = embeddings.embed_documents(texts)
text_embedding_pairs = list(zip(texts, text_embeddings))
faiss = FAISS.from_embeddings(text_embedding_pairs, embeddings)
Parameters
text_embeddings (List[Tuple[str, List[float]]]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
ids (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.vectorstores.faiss.FAISS
save_local(folder_path, index_name='index')[source]ο
Save FAISS index, docstore, and index_to_docstore_id to disk.
Parameters
folder_path (str) β folder path to save index, docstore,
and index_to_docstore_id to.
index_name (str) β for saving with a specific index file name
Return type
None
classmethod load_local(folder_path, embeddings, index_name='index')[source]ο
Load FAISS index, docstore, and index_to_docstore_id from disk.
Parameters
folder_path (str) β folder path to load index, docstore,
and index_to_docstore_id from.
embeddings (langchain.embeddings.base.Embeddings) β Embeddings to use when generating queries
index_name (str) β for saving with a specific index file name
Return type
langchain.vectorstores.faiss.FAISS
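A round-trip sketch of save_local / load_local (the folder name is arbitrary):
faiss.save_local("faiss_index")
restored = FAISS.load_local("faiss_index", OpenAIEmbeddings())
docs = restored.similarity_search("same data, new process", k=2)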
class langchain.vectorstores.Hologres(connection_string, embedding_function, ndims=1536, table_name='langchain_pg_embedding', pre_delete_table=False, logger=None)[source]ο
Bases: langchain.vectorstores.base.VectorStore
VectorStore implementation using Hologres.
connection_string is a Hologres connection string.
embedding_function is any embedding function implementing the
langchain.embeddings.base.Embeddings interface.
ndims is the number of dimensions of the embedding output.
table_name is the name of the table to store embeddings and data.
(default: langchain_pg_embedding)
- NOTE: The table will be created when initializing the store (if it does not exist).
So, make sure the user has the right permissions to create tables.
pre_delete_table: if True, will delete the table if it exists. (default: False)
- Useful for testing.
Parameters
connection_string (str) β
embedding_function (Embeddings) β
ndims (int) β
table_name (str) β
pre_delete_table (bool) β
logger (Optional[logging.Logger]) β
Return type
None
create_vector_extension()[source]ο
Return type
None
create_table()[source]ο
Return type
None
add_embeddings(texts, embeddings, metadatas, ids, **kwargs)[source]ο
Add embeddings to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the vectorstore.
embeddings (List[List[float]]) β List of list of embedding vectors.
metadatas (List[dict]) β List of metadatas associated with the texts.
kwargs (Any) β vectorstore specific parameters
ids (List[str]) β
Return type
None
add_texts(texts, metadatas=None, ids=None, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the vectorstore.
metadatas (Optional[List[dict]]) β Optional list of metadatas associated with the texts.
kwargs (Any) β vectorstore specific parameters
ids (Optional[List[str]]) β
Returns
List of ids from adding the texts into the vectorstore.
Return type
List[str]
similarity_search(query, k=4, filter=None, **kwargs)[source]ο
Run similarity search with Hologres with distance.
Parameters
query (str) β Query text to search for.
k (int) β Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) β Filter by metadata. Defaults to None.
kwargs (Any) β
Returns
List of Documents most similar to the query.
Return type
List[langchain.schema.Document]
similarity_search_by_vector(embedding, k=4, filter=None, **kwargs)[source]ο
Return docs most similar to embedding vector.
Parameters
embedding (List[float]) β Embedding to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]) β Filter by metadata. Defaults to None.
kwargs (Any) β
Returns
List of Documents most similar to the query vector.
Return type
List[langchain.schema.Document]
similarity_search_with_score(query, k=4, filter=None)[source]ο
Return docs most similar to query.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]) β Filter by metadata. Defaults to None.
Returns
List of Documents most similar to the query and score for each
Return type
List[Tuple[langchain.schema.Document, float]]
similarity_search_with_score_by_vector(embedding, k=4, filter=None)[source]ο
Parameters
embedding (List[float]) β
k (int) β
filter (Optional[dict]) β
Return type
List[Tuple[langchain.schema.Document, float]]
classmethod from_texts(texts, embedding, metadatas=None, ndims=1536, table_name='langchain_pg_embedding', ids=None, pre_delete_table=False, **kwargs)[source]ο
Return VectorStore initialized from texts and embeddings.
Postgres connection string is required.
Either pass it as a parameter
or set the HOLOGRES_CONNECTION_STRING environment variable.
Parameters
texts (List[str]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
ndims (int) β
table_name (str) β
ids (Optional[List[str]]) β
pre_delete_table (bool) β
kwargs (Any) β
Return type
langchain.vectorstores.hologres.Hologres
classmethod from_embeddings(text_embeddings, embedding, metadatas=None, ndims=1536, table_name='langchain_pg_embedding', ids=None, pre_delete_table=False, **kwargs)[source]ο
Construct Hologres wrapper from raw documents and pre-generated
embeddings.
Return VectorStore initialized from documents and embeddings.
Postgres connection string is required.
Either pass it as a parameter
or set the HOLOGRES_CONNECTION_STRING environment variable.
Example
from langchain import Hologres
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
text_embeddings = embeddings.embed_documents(texts)
text_embedding_pairs = list(zip(texts, text_embeddings))
hologres = Hologres.from_embeddings(text_embedding_pairs, embeddings)
Parameters
text_embeddings (List[Tuple[str, List[float]]]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
ndims (int) –
table_name (str) β
ids (Optional[List[str]]) β
pre_delete_table (bool) β
kwargs (Any) β
Return type
langchain.vectorstores.hologres.Hologres
classmethod from_existing_index(embedding, ndims=1536, table_name='langchain_pg_embedding', pre_delete_table=False, **kwargs)[source]ο
Get an instance of an existing Hologres store. This method will
return the instance of the store without inserting any new
embeddings.
Parameters
embedding (langchain.embeddings.base.Embeddings) β
ndims (int) β
table_name (str) β
pre_delete_table (bool) β
kwargs (Any) β
Return type
langchain.vectorstores.hologres.Hologres
classmethod get_connection_string(kwargs)[source]ο
Parameters
kwargs (Dict[str, Any]) β
Return type
str
classmethod from_documents(documents, embedding, ndims=1536, table_name='langchain_pg_embedding', ids=None, pre_delete_collection=False, **kwargs)[source]ο
Return VectorStore initialized from documents and embeddings.
Postgres connection string is required.
Either pass it as a parameter
or set the HOLOGRES_CONNECTION_STRING environment variable.
Parameters
documents (List[langchain.schema.Document]) β
embedding (langchain.embeddings.base.Embeddings) β
ndims (int) β
table_name (str) β
ids (Optional[List[str]]) β
pre_delete_collection (bool) β
kwargs (Any) β
Return type
langchain.vectorstores.hologres.Hologres
classmethod connection_string_from_db_params(host, port, database, user, password)[source]ο
Return connection string from database parameters.
Parameters
host (str) β
port (int) β
database (str) β
user (str) β
password (str) β
Return type
str
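As a quick orientation, a minimal sketch of the end-to-end flow follows; the host, port, and credentials are placeholders for your own Hologres instance, not defaults:
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Hologres
connection_string = Hologres.connection_string_from_db_params(
    host="localhost",   # placeholder host
    port=80,            # placeholder port
    database="postgres",
    user="user",
    password="password",
)
vectorstore = Hologres.from_texts(
    ["foo", "bar"],
    OpenAIEmbeddings(),
    connection_string=connection_string,  # or set HOLOGRES_CONNECTION_STRING
)
docs = vectorstore.similarity_search("foo", k=1)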
class langchain.vectorstores.LanceDB(connection, embedding, vector_key='vector', id_key='id', text_key='text')[source]ο
Bases: langchain.vectorstores.base.VectorStore
Wrapper around LanceDB vector database.
To use, you should have lancedb python package installed.
Example
db = lancedb.connect('./lancedb')
table = db.open_table('my_table')
vectorstore = LanceDB(table, embedding_function)
vectorstore.add_texts(['text1', 'text2'])
result = vectorstore.similarity_search('text1')
Parameters
connection (Any) β
embedding (Embeddings) β
vector_key (Optional[str]) β
id_key (Optional[str]) β
text_key (Optional[str]) β
add_texts(texts, metadatas=None, ids=None, **kwargs)[source]ο
Turn texts into embeddings and add them to the database.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the vectorstore.
metadatas (Optional[List[dict]]) β Optional list of metadatas associated with the texts.
ids (Optional[List[str]]) β Optional list of ids to associate with the texts.
kwargs (Any) β
Returns
List of ids of the added texts.
Return type
List[str]
similarity_search(query, k=4, **kwargs)[source]ο
Return documents most similar to the query
Parameters
query (str) β String to query the vectorstore with.
k (int) β Number of documents to return.
kwargs (Any) β
Returns
List of documents most similar to the query.
Return type
List[langchain.schema.Document]
classmethod from_texts(texts, embedding, metadatas=None, connection=None, vector_key='vector', id_key='id', text_key='text', **kwargs)[source]ο
Return VectorStore initialized from texts and embeddings.
Parameters
texts (List[str]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
connection (Any) β
vector_key (Optional[str]) β
id_key (Optional[str]) β
text_key (Optional[str]) β
kwargs (Any) β
Return type
langchain.vectorstores.lancedb.LanceDB
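The following minimal sketch shows the from_texts path; the local database path and seed row are placeholder assumptions (LanceDB infers the table schema from the first row):
import lancedb
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import LanceDB
embeddings = OpenAIEmbeddings()
db = lancedb.connect("./lancedb")  # placeholder local path
table = db.create_table(
    "my_table",
    data=[{"vector": embeddings.embed_query("seed"), "id": "1", "text": "seed"}],
    mode="overwrite",
)
vectorstore = LanceDB.from_texts(["text1", "text2"], embeddings, connection=table)
docs = vectorstore.similarity_search("text1", k=2)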
class langchain.vectorstores.MatchingEngine(project_id, index, endpoint, embedding, gcs_client, gcs_bucket_name, credentials=None)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Vertex Matching Engine implementation of the vector store.
While the embeddings are stored in the Matching Engine, the embedded
documents will be stored in GCS.
An existing Index and corresponding Endpoint are preconditions for
using this module.
See usage in docs/modules/indexes/vectorstores/examples/matchingengine.ipynb
Note that this implementation is mostly meant for reading if you are
planning a real-time deployment: while reading is a real-time
operation, updating the index takes close to one hour.
Parameters
project_id (str) β
index (MatchingEngineIndex) β
endpoint (MatchingEngineIndexEndpoint) β
embedding (Embeddings) β
gcs_client (storage.Client) β
gcs_bucket_name (str) β
credentials (Optional[Credentials]) β
add_texts(texts, metadatas=None, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the vectorstore.
metadatas (Optional[List[dict]]) β Optional list of metadatas associated with the texts.
kwargs (Any) β vectorstore specific parameters.
Returns
List of ids from adding the texts into the vectorstore.
Return type
List[str]
similarity_search(query, k=4, **kwargs)[source]ο
Return docs most similar to query.
Parameters
query (str) β The string that will be used to search for similar documents.
k (int) β The amount of neighbors that will be retrieved.
kwargs (Any) β
Returns
A list of k matching documents.
Return type
List[langchain.schema.Document]
classmethod from_texts(texts, embedding, metadatas=None, **kwargs)[source]ο
Use from_components instead.
Parameters
texts (List[str]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
kwargs (Any) β
Return type
langchain.vectorstores.matching_engine.MatchingEngine
classmethod from_components(project_id, region, gcs_bucket_name, index_id, endpoint_id, credentials_path=None, embedding=None)[source]ο
Takes the object creation out of the constructor.
Parameters
project_id (str) β The GCP project id.
region (str) β The default location making the API calls. It must have
the same location as the GCS bucket and must be regional.
gcs_bucket_name (str) β The location where the vectors will be stored in
order for the index to be created.
index_id (str) β The id of the created index.
endpoint_id (str) β The id of the created endpoint.
credentials_path (Optional[str]) β (Optional) The path of the Google
credentials on the local file system.
embedding (Optional[langchain.embeddings.base.Embeddings]) β The Embeddings
that will be used for embedding the texts.
Returns
A configured MatchingEngine with the texts added to the index.
Return type
langchain.vectorstores.matching_engine.MatchingEngine
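A minimal sketch of from_components follows; every id, bucket, and path below is a placeholder for resources you must have created in advance:
from langchain.vectorstores import MatchingEngine
vector_store = MatchingEngine.from_components(
    project_id="my-gcp-project",             # placeholder
    region="us-central1",                    # placeholder
    gcs_bucket_name="my-embeddings-bucket",  # placeholder
    index_id="1234567890",                   # placeholder
    endpoint_id="0987654321",                # placeholder
    credentials_path="/path/to/credentials.json",  # optional
)
vector_store.add_texts(["lorem ipsum"])
docs = vector_store.similarity_search("lorem", k=4)
Remember that add_texts updates the index, which can take close to an hour; searches remain real-time.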
class langchain.vectorstores.Milvus(embedding_function, collection_name='LangChainCollection', connection_args=None, consistency_level='Session', index_params=None, search_params=None, drop_old=False)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Wrapper around the Milvus vector database.
Parameters
embedding_function (Embeddings) β
collection_name (str) β
connection_args (Optional[dict[str, Any]]) β
consistency_level (str) β
index_params (Optional[dict]) β
search_params (Optional[dict]) β
drop_old (Optional[bool]) β
add_texts(texts, metadatas=None, timeout=None, batch_size=1000, **kwargs)[source]ο
Insert text data into Milvus.
Inserting data when the collection has not been made yet will result
in creating a new collection. The data of the first entity decides
the schema of the new collection: the dim is extracted from the first
embedding and the columns are decided by the first metadata dict.
Metadata keys will need to be present for all inserted values, as
at the moment there is no None equivalent in Milvus.
Parameters
texts (Iterable[str]) β The texts to embed, it is assumed
that they all fit in memory.
metadatas (Optional[List[dict]]) β Metadata dicts attached to each of
the texts. Defaults to None.
timeout (Optional[int]) β Timeout for each batch insert. Defaults
to None.
batch_size (int, optional) β Batch size to use for insertion.
Defaults to 1000.
kwargs (Any) β
Raises
MilvusException β Failure to add texts
Returns
The resulting keys for each inserted element.
Return type
List[str]
similarity_search(query, k=4, param=None, expr=None, timeout=None, **kwargs)[source]ο
Perform a similarity search against the query string.
Parameters
query (str) β The text to search.
k (int, optional) β How many results to return. Defaults to 4.
param (dict, optional) β The search params for the index type.
Defaults to None.
expr (str, optional) β Filtering expression. Defaults to None.
timeout (int, optional) β How long to wait before timeout error.
Defaults to None.
kwargs (Any) β Collection.search() keyword arguments.
Returns
Document results for search.
Return type
List[Document]
similarity_search_by_vector(embedding, k=4, param=None, expr=None, timeout=None, **kwargs)[source]ο
Perform a similarity search against the given embedding vector.
Parameters
embedding (List[float]) β The embedding vector to search.
k (int, optional) β How many results to return. Defaults to 4.
param (dict, optional) β The search params for the index type.
Defaults to None.
expr (str, optional) β Filtering expression. Defaults to None.
timeout (int, optional) β How long to wait before timeout error.
Defaults to None.
kwargs (Any) β Collection.search() keyword arguments.
Returns
Document results for search.
Return type
List[Document]
similarity_search_with_score(query, k=4, param=None, expr=None, timeout=None, **kwargs)[source]ο
Perform a search on a query string and return results with score.
For more information about the search parameters, take a look at the pymilvus
documentation found here:
https://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md
Parameters
query (str) β The text being searched.
k (int, optional) β The amount of results to return. Defaults to 4.
param (dict) β The search params for the specified index.
Defaults to None.
expr (str, optional) β Filtering expression. Defaults to None.
timeout (int, optional) β How long to wait before timeout error.
Defaults to None.
kwargs (Any) β Collection.search() keyword arguments.
Return type
List[Tuple[Document, float]]
similarity_search_with_score_by_vector(embedding, k=4, param=None, expr=None, timeout=None, **kwargs)[source]ο
Perform a search on an embedding vector and return results with score.
For more information about the search parameters, take a look at the pymilvus
documentation found here:
https://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md
Parameters
embedding (List[float]) β The embedding vector being searched.
k (int, optional) β The amount of results to return. Defaults to 4.
param (dict) β The search params for the specified index.
Defaults to None.
expr (str, optional) β Filtering expression. Defaults to None.
timeout (int, optional) β How long to wait before timeout error.
Defaults to None.
kwargs (Any) β Collection.search() keyword arguments.
Returns
Result doc and score.
Return type
List[Tuple[Document, float]]
max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, param=None, expr=None, timeout=None, **kwargs)[source]ο
Perform a search and return results that are reordered by MMR.
Parameters
query (str) β The text being searched.
k (int, optional) β How many results to give. Defaults to 4.
fetch_k (int, optional) β Total results to select k from.
Defaults to 20.
lambda_mult (float) β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5
param (dict, optional) β The search params for the specified index.
Defaults to None.
expr (str, optional) β Filtering expression. Defaults to None.
timeout (int, optional) β How long to wait before timeout error.
Defaults to None.
kwargs (Any) β Collection.search() keyword arguments.
Returns
Document results for search.
Return type
List[Document]
max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, param=None, expr=None, timeout=None, **kwargs)[source]ο
Perform a search and return results that are reordered by MMR.
Parameters
embedding (List[float]) β The embedding vector being searched.
k (int, optional) β How many results to give. Defaults to 4.
fetch_k (int, optional) β Total results to select k from.
Defaults to 20.
lambda_mult (float) β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5
param (dict, optional) β The search params for the specified index.
Defaults to None.
expr (str, optional) β Filtering expression. Defaults to None.
timeout (int, optional) β How long to wait before timeout error.
Defaults to None.
kwargs (Any) β Collection.search() keyword arguments.
Returns
Document results for search.
Return type
List[Document]
classmethod from_texts(texts, embedding, metadatas=None, collection_name='LangChainCollection', connection_args={'host': 'localhost', 'password': '', 'port': '19530', 'secure': False, 'user': ''}, consistency_level='Session', index_params=None, search_params=None, drop_old=False, **kwargs)[source]ο
Create a Milvus collection, index it with HNSW, and insert data.
Parameters
texts (List[str]) β Text data.
embedding (Embeddings) β Embedding function.
metadatas (Optional[List[dict]]) β Metadata for each text if it exists.
Defaults to None.
collection_name (str, optional) β Collection name to use. Defaults to
βLangChainCollectionβ.
connection_args (dict[str, Any], optional) β Connection args to use. Defaults
to DEFAULT_MILVUS_CONNECTION.
consistency_level (str, optional) β Which consistency level to use. Defaults
to βSessionβ.
index_params (Optional[dict], optional) β Which index_params to use. Defaults
to None.
search_params (Optional[dict], optional) β Which search params to use.
Defaults to None.
drop_old (Optional[bool], optional) β Whether to drop the collection with
that name if it exists. Defaults to False.
kwargs (Any) β
Returns
Milvus Vector Store
Return type
Milvus
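A minimal sketch tying these methods together; the connection args assume a Milvus instance on the default localhost:19530:
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Milvus
vector_db = Milvus.from_texts(
    ["the quick brown fox", "a slow green turtle"],
    OpenAIEmbeddings(),
    connection_args={"host": "localhost", "port": "19530"},
)
docs = vector_db.similarity_search("fast animals", k=2)
diverse = vector_db.max_marginal_relevance_search(  # MMR reranking
    "fast animals", k=2, fetch_k=10, lambda_mult=0.5
)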
class langchain.vectorstores.Zilliz(embedding_function, collection_name='LangChainCollection', connection_args=None, consistency_level='Session', index_params=None, search_params=None, drop_old=False)[source]ο
Bases: langchain.vectorstores.milvus.Milvus
Parameters
embedding_function (Embeddings) β
collection_name (str) β
connection_args (Optional[dict[str, Any]]) β
consistency_level (str) β
index_params (Optional[dict]) β
search_params (Optional[dict]) β
drop_old (Optional[bool]) β
classmethod from_texts(texts, embedding, metadatas=None, collection_name='LangChainCollection', connection_args={}, consistency_level='Session', index_params=None, search_params=None, drop_old=False, **kwargs)[source]ο
Create a Zilliz collection, index it with HNSW, and insert data.
Parameters
texts (List[str]) β Text data.
embedding (Embeddings) β Embedding function.
metadatas (Optional[List[dict]]) β Metadata for each text if it exists.
Defaults to None.
collection_name (str, optional) β Collection name to use. Defaults to
βLangChainCollectionβ.
connection_args (dict[str, Any], optional) β Connection args to use. Defaults
to DEFAULT_MILVUS_CONNECTION.
consistency_level (str, optional) β Which consistency level to use. Defaults
to βSessionβ.
index_params (Optional[dict], optional) β Which index_params to use.
Defaults to None.
search_params (Optional[dict], optional) β Which search params to use.
Defaults to None.
drop_old (Optional[bool], optional) β Whether to drop the collection with
that name if it exists. Defaults to False.
kwargs (Any) β
Returns
Zilliz Vector Store
Return type
Zilliz
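A minimal sketch for a managed Zilliz Cloud cluster; the URI and credentials are placeholders for your own cluster:
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Zilliz
vector_db = Zilliz.from_texts(
    ["foo", "bar"],
    OpenAIEmbeddings(),
    connection_args={
        "uri": "https://<cluster-id>.zillizcloud.com",  # placeholder
        "user": "db_admin",                             # placeholder
        "password": "********",                         # placeholder
        "secure": True,
    },
)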
class langchain.vectorstores.SingleStoreDB(embedding, *, distance_strategy=DistanceStrategy.DOT_PRODUCT, table_name='embeddings', content_field='content', metadata_field='metadata', vector_field='vector', pool_size=5, max_overflow=10, timeout=30, **kwargs)[source]ο
Bases: langchain.vectorstores.base.VectorStore
This class serves as a Pythonic interface to the SingleStore DB database.
The prerequisite for using this class is the installation of the singlestoredb
Python package.
The SingleStoreDB vectorstore can be created by providing an embedding function and
the relevant parameters for the database connection, connection pool, and
optionally, the names of the table and the fields to use.
Parameters
embedding (Embeddings) β
distance_strategy (DistanceStrategy) β
table_name (str) β
content_field (str) β
metadata_field (str) β
vector_field (str) β
pool_size (int) β
max_overflow (int) β
timeout (float) β
kwargs (Any) β
vector_fieldο
Pass the rest of the kwargs to the connection.
connection_kwargsο
Add program name and version to connection attributes.
add_texts(texts, metadatas=None, embeddings=None, **kwargs)[source]ο
Add more texts to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings/text to add to the vectorstore.
metadatas (Optional[List[dict]], optional) β Optional list of metadatas.
Defaults to None.
embeddings (Optional[List[List[float]]], optional) β Optional pre-generated
embeddings. Defaults to None.
kwargs (Any) β
Returns
empty list
Return type
List[str]
similarity_search(query, k=4, filter=None, **kwargs)[source]ο
Returns the most similar indexed documents to the query text.
Uses cosine similarity.
Parameters
query (str) β The query text for which to find similar documents.
k (int) β The number of documents to return. Default is 4.
filter (dict) β A dictionary of metadata fields and values to filter by.
kwargs (Any) β
Returns
A list of documents that are most similar to the query text.
Return type
List[Document]
Examples
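A minimal sketch; the connection URL below is a placeholder read from the SINGLESTOREDB_URL environment variable:
import os
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import SingleStoreDB
os.environ["SINGLESTOREDB_URL"] = "root:pass@localhost:3306/db"  # placeholder
vectorstore = SingleStoreDB(OpenAIEmbeddings(), table_name="notebook")
vectorstore.add_texts(
    ["Penguins live in Antarctica"], metadatas=[{"category": "wildlife"}]
)
docs = vectorstore.similarity_search(
    "Where do penguins live?", k=1, filter={"category": "wildlife"}
)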
similarity_search_with_score(query, k=4, filter=None)[source]ο
Return docs most similar to query. Uses cosine similarity.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
filter (Optional[dict]) β A dictionary of metadata fields and values to filter by.
Defaults to None.
Returns
List of Documents most similar to the query and score for each
Return type
List[Tuple[langchain.schema.Document, float]]
classmethod from_texts(texts, embedding, metadatas=None, distance_strategy=DistanceStrategy.DOT_PRODUCT, table_name='embeddings', content_field='content', metadata_field='metadata', vector_field='vector', pool_size=5, max_overflow=10, timeout=30, **kwargs)[source]ο
Create a SingleStoreDB vectorstore from raw documents.
This is a user-friendly interface that:
Embeds documents.
Creates a new table for the embeddings in SingleStoreDB.
Adds the documents to the newly created table.
This is intended to be a quick way to get started.
Example
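A minimal sketch, assuming the SINGLESTOREDB_URL environment variable (or connection kwargs) points at a reachable cluster; the table name is a placeholder:
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import SingleStoreDB
vectorstore = SingleStoreDB.from_texts(
    ["foo", "bar", "baz"],
    OpenAIEmbeddings(),
    table_name="quickstart_embeddings",  # placeholder table name
)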
Parameters
texts (List[str]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
distance_strategy (langchain.vectorstores.singlestoredb.DistanceStrategy) β
table_name (str) β
content_field (str) β
metadata_field (str) β
vector_field (str) β
pool_size (int) β
max_overflow (int) β
timeout (float) β
kwargs (Any) β
Return type
langchain.vectorstores.singlestoredb.SingleStoreDB
as_retriever(**kwargs)[source]ο
Parameters
kwargs (Any) β
Return type
langchain.vectorstores.singlestoredb.SingleStoreDBRetriever
class langchain.vectorstores.Clarifai(user_id=None, app_id=None, pat=None, number_of_docs=None, api_base=None)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Wrapper around Clarifai AI platformβs vector store.
To use, you should have the clarifai python package installed.
Example
from langchain.vectorstores import Clarifai
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = Clarifai("langchain_store", embeddings.embed_query)
Parameters
user_id (Optional[str]) β
app_id (Optional[str]) β
pat (Optional[str]) β
number_of_docs (Optional[int]) β
api_base (Optional[str]) β
Return type
None
add_texts(texts, metadatas=None, ids=None, **kwargs)[source]ο
Add texts to the Clarifai vectorstore. This will push the text
to a Clarifai application.
The application uses a base workflow that creates and stores an
embedding for each text. Make sure you are using a base workflow
that is compatible with text (such as Language Understanding).
Parameters
texts (Iterable[str]) β Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional) β Optional list of metadatas.
ids (Optional[List[str]], optional) β Optional list of IDs.
kwargs (Any) β
Returns
List of IDs of the added texts.
Return type
List[str]
similarity_search_with_score(query, k=4, filter=None, namespace=None, **kwargs)[source]ο
Run similarity search with score using Clarifai.
Parameters
query (str) β Query text to search for.
k (int) β Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) β Filter by metadata. Defaults to None.
namespace (Optional[str]) β
kwargs (Any) β
Returns
List of documents most similar to the query text.
Return type
List[Document]
similarity_search(query, k=4, **kwargs)[source]ο
Run similarity search using Clarifai.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
kwargs (Any) β
Returns
List of Documents most similar to the query.
Return type
List[langchain.schema.Document]
classmethod from_texts(texts, embedding=None, metadatas=None, user_id=None, app_id=None, pat=None, number_of_docs=None, api_base=None, **kwargs)[source]ο
Create a Clarifai vectorstore from a list of texts.
Parameters
user_id (str) β User ID.
app_id (str) β App ID.
texts (List[str]) β List of texts to add.
pat (Optional[str]) β Personal access token. Defaults to None.
number_of_docs (Optional[int]) β Number of documents to return
during vector search. Defaults to None.
api_base (Optional[str]) β API base. Defaults to None.
metadatas (Optional[List[dict]]) β Optional list of metadatas.
Defaults to None.
embedding (Optional[langchain.embeddings.base.Embeddings]) β
kwargs (Any) β
Returns
Clarifai vectorstore.
Return type
Clarifai
classmethod from_documents(documents, embedding=None, user_id=None, app_id=None, pat=None, number_of_docs=None, api_base=None, **kwargs)[source]ο
Create a Clarifai vectorstore from a list of documents.
Parameters
user_id (str) β User ID.
app_id (str) β App ID.
documents (List[Document]) β List of documents to add.
pat (Optional[str]) β Personal access token. Defaults to None.
number_of_docs (Optional[int]) β Number of documents to return
during vector search. Defaults to None.
api_base (Optional[str]) β API base. Defaults to None.
embedding (Optional[langchain.embeddings.base.Embeddings]) β
kwargs (Any) β
Returns
Clarifai vectorstore.
Return type
Clarifai
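A minimal sketch of the from_texts path; user_id, app_id, and the personal access token are placeholders for your Clarifai application:
from langchain.vectorstores import Clarifai
clarifai_vector_db = Clarifai.from_texts(
    user_id="USER_ID",   # placeholder
    app_id="APP_ID",     # placeholder
    texts=["some text", "more text"],
    pat="CLARIFAI_PAT",  # placeholder personal access token
    number_of_docs=2,
)
docs = clarifai_vector_db.similarity_search("some text")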
class langchain.vectorstores.OpenSearchVectorSearch(opensearch_url, index_name, embedding_function, **kwargs)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Wrapper around OpenSearch as a vector database.
Example
from langchain import OpenSearchVectorSearch
opensearch_vector_search = OpenSearchVectorSearch(
"http://localhost:9200",
"embeddings",
embedding_function
)
Parameters
opensearch_url (str) β
index_name (str) β
embedding_function (Embeddings) β
kwargs (Any) β
add_texts(texts, metadatas=None, ids=None, bulk_size=500, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the vectorstore.
metadatas (Optional[List[dict]]) β Optional list of metadatas associated with the texts.
ids (Optional[List[str]]) β Optional list of ids to associate with the texts.
bulk_size (int) β Bulk API request count; Default: 500
kwargs (Any) β
Returns
List of ids from adding the texts into the vectorstore.
Return type
List[str]
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to
βvector_fieldβ.
text_field: Document field the text of the document is stored in. Defaults
to βtextβ.
similarity_search(query, k=4, **kwargs)[source]ο
Return docs most similar to query.
By default, supports Approximate Search.
Also supports Script Scoring and Painless Scripting.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
kwargs (Any) β
Returns
List of Documents most similar to the query.
Return type
List[langchain.schema.Document]
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to
βvector_fieldβ.
text_field: Document field the text of the document is stored in. Defaults
to βtextβ.
metadata_field: Document field that metadata is stored in. Defaults to
βmetadataβ.
Can be set to a special value β*β to include the entire document.
Optional Args for Approximate Search:
search_type: βapproximate_searchβ; default: βapproximate_searchβ
boolean_filter: A Boolean filter consists of a Boolean query that
contains a k-NN query and a filter.
subquery_clause: Query clause on the knn vector field; default: βmustβ | https://api.python.langchain.com/en/latest/modules/vectorstores.html |
81b746c66a31-79 | subquery_clause: Query clause on the knn vector field; default: βmustβ
lucene_filter: the Lucene algorithm decides whether to perform an exact
k-NN search with pre-filtering or an approximate search with modified
post-filtering.
Optional Args for Script Scoring Search:
search_type: βscript_scoringβ; default: βapproximate_searchβ
space_type: βl2β, βl1β, βlinfβ, βcosinesimilβ, βinnerproductβ,
βhammingbitβ; default: βl2β
pre_filter: script_score query to pre-filter documents before identifying
nearest neighbors; default: {βmatch_allβ: {}}
Optional Args for Painless Scripting Search:
search_type: βpainless_scriptingβ; default: βapproximate_searchβ
space_type: βl2Squaredβ, βl1Normβ, βcosineSimilarityβ; default: βl2Squaredβ
pre_filter: script_score query to pre-filter documents before identifying
nearest neighbors; default: {βmatch_allβ: {}}
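A minimal sketch of switching between these strategies via keyword arguments; the URL, index name, and filter body are placeholders:
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import OpenSearchVectorSearch
docsearch = OpenSearchVectorSearch(
    "http://localhost:9200", "embeddings", OpenAIEmbeddings()
)
docs = docsearch.similarity_search("what is a vector?", k=4)  # approximate k-NN
docs = docsearch.similarity_search(  # brute-force script scoring with a pre-filter
    "what is a vector?",
    search_type="script_scoring",
    space_type="cosinesimil",
    pre_filter={"bool": {"filter": {"term": {"text": "vector"}}}},  # placeholder filter
)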
similarity_search_with_score(query, k=4, **kwargs)[source]ο
Return docs and their scores most similar to the query.
By default, supports Approximate Search.
Also supports Script Scoring and Painless Scripting.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
kwargs (Any) β
Returns
List of Documents along with its scores most similar to the query.
Return type
List[Tuple[langchain.schema.Document, float]]
Optional Args:
same as similarity_search
max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
fetch_k (int) β Number of Documents to fetch to pass to MMR algorithm.
Defaults to 20.
lambda_mult (float) β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
kwargs (Any) β
Returns
List of Documents selected by maximal marginal relevance.
Return type
list[langchain.schema.Document]
classmethod from_texts(texts, embedding, metadatas=None, bulk_size=500, **kwargs)[source]ο
Construct OpenSearchVectorSearch wrapper from raw documents.
Example
from langchain import OpenSearchVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
opensearch_vector_search = OpenSearchVectorSearch.from_texts(
texts,
embeddings,
opensearch_url="http://localhost:9200"
)
OpenSearch by default supports Approximate Search powered by the nmslib,
faiss, and lucene engines, which is recommended for large datasets. It also
supports brute-force search through Script Scoring and Painless Scripting.
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to
βvector_fieldβ.
text_field: Document field the text of the document is stored in. Defaults
to βtextβ.
Optional Keyword Args for Approximate Search:
engine: βnmslibβ, βfaissβ, βluceneβ; default: βnmslibβ
space_type: βl2β, βl1β, βcosinesimilβ, βlinfβ, βinnerproductβ; default: βl2β
ef_search: Size of the dynamic list used during k-NN searches. Higher values
lead to more accurate but slower searches; default: 512
ef_construction: Size of the dynamic list used during k-NN graph creation.
Higher values lead to more accurate graph but slower indexing speed;
default: 512
m: Number of bidirectional links created for each new element. Large impact
on memory consumption. Between 2 and 100; default: 16
Keyword Args for Script Scoring or Painless Scripting:
is_appx_search: False
Parameters
texts (List[str]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
bulk_size (int) β
kwargs (Any) β
Return type
langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch
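A minimal sketch of tuning the approximate k-NN index at creation time; the values are illustrative assumptions, not recommendations:
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import OpenSearchVectorSearch
texts = ["foo", "bar", "baz"]
docsearch = OpenSearchVectorSearch.from_texts(
    texts,
    OpenAIEmbeddings(),
    opensearch_url="http://localhost:9200",
    engine="faiss",
    space_type="innerproduct",
    ef_construction=256,
    m=48,
)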
class langchain.vectorstores.MongoDBAtlasVectorSearch(collection, embedding, *, index_name='default', text_key='text', embedding_key='embedding')[source]ο
Bases: langchain.vectorstores.base.VectorStore
Wrapper around MongoDB Atlas Vector Search.
To use, you should have both:
- the pymongo python package installed
- a connection string associated with a MongoDB Atlas Cluster having deployed an
Atlas Search index
Example
from langchain.vectorstores import MongoDBAtlasVectorSearch
from langchain.embeddings.openai import OpenAIEmbeddings
from pymongo import MongoClient
mongo_client = MongoClient("<YOUR-CONNECTION-STRING>")
collection = mongo_client["<db_name>"]["<collection_name>"]
embeddings = OpenAIEmbeddings()
vectorstore = MongoDBAtlasVectorSearch(collection, embeddings)
Parameters
collection (Collection[MongoDBDocumentType]) β
embedding (Embeddings) β
index_name (str) β
text_key (str) β
embedding_key (str) β
classmethod from_connection_string(connection_string, namespace, embedding, **kwargs)[source]ο
Parameters
connection_string (str) β
namespace (str) β
embedding (langchain.embeddings.base.Embeddings) β
kwargs (Any) β
Return type
langchain.vectorstores.mongodb_atlas.MongoDBAtlasVectorSearch
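A minimal sketch; the connection string and the namespace are placeholders, and the Atlas Search index must already exist:
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import MongoDBAtlasVectorSearch
vectorstore = MongoDBAtlasVectorSearch.from_connection_string(
    "mongodb+srv://<user>:<password>@cluster.example.mongodb.net",  # placeholder
    "my_db.my_collection",  # namespace: "<db_name>.<collection_name>"
    OpenAIEmbeddings(),
    index_name="default",
)
docs = vectorstore.similarity_search("query text", k=4)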
add_texts(texts, metadatas=None, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the vectorstore.
metadatas (Optional[List[Dict[str, Any]]]) β Optional list of metadatas associated with the texts.
kwargs (Any) β
Returns
List of ids from adding the texts into the vectorstore.
Return type
List
similarity_search_with_score(query, *, k=4, pre_filter=None, post_filter_pipeline=None)[source]ο
Return MongoDB documents most similar to query, along with scores.
Use the knnBeta Operator available in MongoDB Atlas Search
This feature is in early access and available only for evaluation purposes, to
validate functionality, and to gather feedback from a small closed group of
early access users. It is not recommended for production deployments as we
may introduce breaking changes.
For more: https://www.mongodb.com/docs/atlas/atlas-search/knn-beta
Parameters
query (str) β Text to look up documents similar to.
k (int) β Optional Number of Documents to return. Defaults to 4.
pre_filter (Optional[dict]) β Optional Dictionary of argument(s) to prefilter on document
fields.
post_filter_pipeline (Optional[List[Dict]]) β Optional Pipeline of MongoDB aggregation stages
following the knnBeta search.
Returns
List of Documents most similar to the query and score for each
Return type
List[Tuple[langchain.schema.Document, float]]
similarity_search(query, k=4, pre_filter=None, post_filter_pipeline=None, **kwargs)[source]ο
Return MongoDB documents most similar to query.
Use the knnBeta Operator available in MongoDB Atlas Search
This feature is in early access and available only for evaluation purposes, to
validate functionality, and to gather feedback from a small closed group of
early access users. It is not recommended for production deployments as we may
introduce breaking changes.
For more: https://www.mongodb.com/docs/atlas/atlas-search/knn-beta
Parameters
query (str) β Text to look up documents similar to.
k (int) β Optional Number of Documents to return. Defaults to 4.
pre_filter (Optional[dict]) β Optional Dictionary of argument(s) to prefilter on document
fields.
post_filter_pipeline (Optional[List[Dict]]) β Optional Pipeline of MongoDB aggregation stages
following the knnBeta search.
kwargs (Any) β
Returns
List of Documents most similar to the query.
Return type
List[langchain.schema.Document]
classmethod from_texts(texts, embedding, metadatas=None, collection=None, **kwargs)[source]ο
Construct MongoDBAtlasVectorSearch wrapper from raw documents.
This is a user-friendly interface that:
Embeds documents.
Adds the documents to a provided MongoDB Atlas Vector Search index (Lucene).
This is intended to be a quick way to get started.
Example
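A minimal sketch of this quick-start path; the connection string and names are placeholders:
from pymongo import MongoClient
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import MongoDBAtlasVectorSearch
client = MongoClient("<YOUR-CONNECTION-STRING>")  # placeholder
collection = client["my_db"]["my_collection"]     # placeholder names
vectorstore = MongoDBAtlasVectorSearch.from_texts(
    ["harrison worked at kensho"],
    OpenAIEmbeddings(),
    collection=collection,
)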
Parameters
texts (List[str]) β
embedding (Embeddings) β
metadatas (Optional[List[dict]]) β
collection (Optional[Collection[MongoDBDocumentType]]) β
kwargs (Any) β
Return type
MongoDBAtlasVectorSearch
class langchain.vectorstores.MyScale(embedding, config=None, **kwargs)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Wrapper around the MyScale vector database.
You need the clickhouse-connect python package and a valid account
to connect to MyScale.
MyScale can not only search with simple vector indexes;
it also supports complex queries with multiple conditions,
constraints, and even sub-queries.
For more information, please visit the
[MyScale official site](https://docs.myscale.com/en/overview/).
Parameters
embedding (Embeddings) β
config (Optional[MyScaleSettings]) β
kwargs (Any) β
Return type
None
escape_str(value)[source]ο
Parameters
value (str) β
Return type
str
add_texts(texts, metadatas=None, batch_size=32, ids=None, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the vectorstore.
ids (Optional[Iterable[str]]) β Optional list of ids to associate with the texts.
batch_size (int) β Batch size of insertion
metadata β Optional column data to be inserted
metadatas (Optional[List[dict]]) β
kwargs (Any) β
Returns
List of ids from adding the texts into the vectorstore.
Return type
List[str]
classmethod from_texts(texts, embedding, metadatas=None, config=None, text_ids=None, batch_size=32, **kwargs)[source]ο
Create MyScale wrapper with existing texts.
Parameters
embedding (Embeddings) β Function to extract text embeddings
texts (Iterable[str]) β List or tuple of strings to be added
config (MyScaleSettings, Optional) β Myscale configuration
text_ids (Optional[Iterable], optional) β IDs for the texts.
Defaults to None.
batch_size (int, optional) β Batch size when transmitting data to MyScale.
Defaults to 32.
metadata (List[dict], optional) β metadata to texts. Defaults to None.
Other keyword arguments will pass into
[clickhouse-connect](https://clickhouse.com/docs/en/integrations/python#clickhouse-connect-driver-api).
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[Dict[Any, Any]]]) β
kwargs (Any) β
Returns
MyScale Index
Return type
langchain.vectorstores.myscale.MyScale
similarity_search(query, k=4, where_str=None, **kwargs)[source]ο
Perform a similarity search with MyScale
Parameters
query (str) β query string
k (int, optional) β Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) β where condition string.
Defaults to None.
NOTE β Please do not let end users fill this in, and always be aware
of SQL injection. When dealing with metadata, remember to
use {self.metadata_column}.attribute instead of attribute
alone. The default name for the metadata column is metadata.
kwargs (Any) β
Returns
List of Documents
Return type
List[Document]
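A minimal sketch of a filtered search; the host and credentials are placeholders, and where_str must be built server-side, never from raw end-user input:
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import MyScale, MyScaleSettings
config = MyScaleSettings(
    host="msc-example.aws.myscale.com",  # placeholder
    port=443, username="user", password="password",  # placeholders
)
docsearch = MyScale.from_texts(
    ["Ankara is the capital of Turkey"],
    OpenAIEmbeddings(),
    metadatas=[{"doc_id": 0}],
    config=config,
)
meta = docsearch.metadata_column
docs = docsearch.similarity_search(
    "What is the capital of Turkey?", k=1,
    where_str=f"{meta}.doc_id < 10",
)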
similarity_search_by_vector(embedding, k=4, where_str=None, **kwargs)[source]ο
Perform a similarity search with MyScale by vectors
Parameters
query (str) β query string
k (int, optional) β Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) β where condition string.
Defaults to None.
NOTE β Please do not let end users fill this in, and always be aware
of SQL injection. When dealing with metadata, remember to
use {self.metadata_column}.attribute instead of attribute
alone. The default name for the metadata column is metadata.
embedding (List[float]) β
kwargs (Any) β
Returns
List of Documents
Return type
List[Document]
similarity_search_with_relevance_scores(query, k=4, where_str=None, **kwargs)[source]ο
Perform a similarity search with MyScale
Parameters
query (str) β query string
k (int, optional) β Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) β where condition string.
Defaults to None.
NOTE β Please do not let end users fill this in, and always be aware
of SQL injection. When dealing with metadata, remember to
use {self.metadata_column}.attribute instead of attribute
alone. The default name for the metadata column is metadata.
kwargs (Any) β
Returns
List of documents most similar to the query text
and cosine distance in float for each.
Lower score represents more similarity.
Return type
List[Document]
drop()[source]ο
Helper function: Drop data
Return type
None
property metadata_column: strο
pydantic settings langchain.vectorstores.MyScaleSettings[source]ο
Bases: pydantic.env_settings.BaseSettings
MyScale Client Configuration
Attribute:
myscale_host (str) : An URL to connect to MyScale backend. Defaults to βlocalhostβ.
myscale_port (int) : URL port to connect with HTTP. Defaults to 8443.
username (str) : Username to login. Defaults to None.
password (str) : Password to login. Defaults to None.
index_type (str) : index type string.
index_param (dict) : index build parameter.
database (str) : Database name to find the table. Defaults to βdefaultβ.
table (str) : Table name to operate on. Defaults to βvector_tableβ.
metric (str) : Metric to compute distance, supported are (βl2β, βcosineβ, βipβ).
Defaults to βcosineβ.
column_map (Dict) : Column type map to project column names onto langchain
semantics. Must have keys: text, id, vector; must be the same size as the
number of columns. For example:
{'id': 'text_id',
'vector': 'text_embedding',
'text': 'text_plain',
'metadata': 'metadata_dictionary_in_json'}
Defaults to identity map.
Show JSON schema{
"title": "MyScaleSettings", | https://api.python.langchain.com/en/latest/modules/vectorstores.html |
81b746c66a31-88 | Show JSON schema{
"title": "MyScaleSettings",
"description": "MyScale Client Configuration\n\nAttribute:\n myscale_host (str) : An URL to connect to MyScale backend.\n Defaults to 'localhost'.\n myscale_port (int) : URL port to connect with HTTP. Defaults to 8443.\n username (str) : Username to login. Defaults to None.\n password (str) : Password to login. Defaults to None.\n index_type (str): index type string.\n index_param (dict): index build parameter.\n database (str) : Database name to find the table. Defaults to 'default'.\n table (str) : Table name to operate on.\n Defaults to 'vector_table'.\n metric (str) : Metric to compute distance,\n supported are ('l2', 'cosine', 'ip'). Defaults to 'cosine'.\n column_map (Dict) : Column type map to project column name onto langchain\n semantics. Must have keys: `text`, `id`, `vector`,\n must be same size to number of columns. For example:\n .. code-block:: python\n\n {\n 'id': 'text_id',\n 'vector': 'text_embedding',\n 'text': 'text_plain',\n 'metadata': 'metadata_dictionary_in_json',\n }\n\n Defaults to identity map.",
"type": "object",
"properties": {
"host": {
"title": "Host",
"default": "localhost",
"env_names": "{'myscale_host'}",
"type": "string"
},
"port": {
"title": "Port", | https://api.python.langchain.com/en/latest/modules/vectorstores.html |
81b746c66a31-89 | },
"port": {
"title": "Port",
"default": 8443,
"env_names": "{'myscale_port'}",
"type": "integer"
},
"username": {
"title": "Username",
"env_names": "{'myscale_username'}",
"type": "string"
},
"password": {
"title": "Password",
"env_names": "{'myscale_password'}",
"type": "string"
},
"index_type": {
"title": "Index Type",
"default": "IVFFLAT",
"env_names": "{'myscale_index_type'}",
"type": "string"
},
"index_param": {
"title": "Index Param",
"env_names": "{'myscale_index_param'}",
"type": "object",
"additionalProperties": {
"type": "string"
}
},
"column_map": {
"title": "Column Map",
"default": {
"id": "id",
"text": "text",
"vector": "vector",
"metadata": "metadata"
},
"env_names": "{'myscale_column_map'}",
"type": "object",
"additionalProperties": {
"type": "string"
}
},
"database": {
"title": "Database",
"default": "default",
"env_names": "{'myscale_database'}",
"type": "string"
},
"table": {
"title": "Table", | https://api.python.langchain.com/en/latest/modules/vectorstores.html |