langchain_community.callbacks.arize_callback.ArizeCallbackHandler¶
class langchain_community.callbacks.arize_callback.ArizeCallbackHandler(model_id: Optional[str] = None, model_version: Optional[str] = None, SPACE_KEY: Optional[str] = None, API_KEY: Optional[str] = None)[source]¶
Callback Handler that logs to Arize.
Initialize callback handler.
Attributes
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
raise_error
run_inline
Methods
__init__([model_id, model_version, ...])
Initialize callback handler.
on_agent_action(action, **kwargs)
Do nothing.
on_agent_finish(finish, **kwargs)
Run on agent end.
on_chain_end(outputs, **kwargs)
Do nothing.
on_chain_error(error, **kwargs)
Do nothing.
on_chain_start(serialized, inputs, **kwargs)
Run when chain starts running.
on_chat_model_start(serialized, messages, *, ...)
Run when a chat model starts running.
on_llm_end(response, **kwargs)
Run when LLM ends running.
on_llm_error(error, **kwargs)
Do nothing.
on_llm_new_token(token, **kwargs)
Do nothing.
on_llm_start(serialized, prompts, **kwargs)
Run when LLM starts running.
on_retriever_end(documents, *, run_id[, ...])
Run when Retriever ends running.
on_retriever_error(error, *, run_id[, ...])
Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id)
Run when Retriever starts running.
on_retry(retry_state, *, run_id[, parent_run_id])
Run on a retry event.
on_text(text, **kwargs)
Run on arbitrary text.
on_tool_end(output[, observation_prefix, ...])
Run when tool ends running.
on_tool_error(error, **kwargs)
Run when tool errors.
on_tool_start(serialized, input_str, **kwargs)
Run when tool starts running.
Parameters
model_id (Optional[str]) –
model_version (Optional[str]) –
SPACE_KEY (Optional[str]) –
API_KEY (Optional[str]) –
__init__(model_id: Optional[str] = None, model_version: Optional[str] = None, SPACE_KEY: Optional[str] = None, API_KEY: Optional[str] = None) → None[source]¶
Initialize callback handler.
Parameters
model_id (Optional[str]) –
model_version (Optional[str]) –
SPACE_KEY (Optional[str]) –
API_KEY (Optional[str]) –
Return type
None
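The constructor above only stores identifiers and credentials; the interesting part is that LangChain's callback manager calls the handler's hooks as a run progresses. A minimal, library-free sketch of that contract (all names here are illustrative mocks, not the real Arize or LangChain API):

```python
# Mock of the callback contract: a handler exposing the same hook names as
# ArizeCallbackHandler, driven the way a callback manager would drive it.
# Illustration only -- the real handler logs to Arize, not to a list.
class RecordingHandler:
    def __init__(self, model_id=None, model_version=None):
        self.model_id = model_id
        self.model_version = model_version
        self.events = []

    def on_llm_start(self, serialized, prompts, **kwargs):
        self.events.append(("llm_start", prompts))

    def on_llm_new_token(self, token, **kwargs):
        pass  # ArizeCallbackHandler also does nothing per token

    def on_llm_end(self, response, **kwargs):
        self.events.append(("llm_end", response))


def fake_llm_run(handler, prompt):
    """Drive the handler's hooks around a stand-in generation."""
    handler.on_llm_start({"name": "fake-llm"}, [prompt])
    response = prompt.upper()  # stand-in for a model generation
    handler.on_llm_end(response)
    return response


handler = RecordingHandler(model_id="demo-model", model_version="1.0")
result = fake_llm_run(handler, "hello")
```

The real handler is attached via LangChain's `callbacks` mechanism rather than called directly; the sketch only shows the hook ordering.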
on_agent_action(action: AgentAction, **kwargs: Any) → Any[source]¶
Do nothing.
Parameters
action (AgentAction) –
kwargs (Any) –
Return type
Any
on_agent_finish(finish: AgentFinish, **kwargs: Any) → None[source]¶
Run on agent end.
Parameters
finish (AgentFinish) –
kwargs (Any) –
Return type
None
on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Do nothing.
Parameters
outputs (Dict[str, Any]) –
kwargs (Any) –
Return type
None
on_chain_error(error: BaseException, **kwargs: Any) → None[source]¶
Do nothing.
Parameters
error (BaseException) –
kwargs (Any) –
Return type
None
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Run when chain starts running.
Parameters
serialized (Dict[str, Any]) –
inputs (Dict[str, Any]) –
kwargs (Any) –
Return type
None
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when a chat model starts running.
ATTENTION: This method is called for chat models. If you’re implementing a handler for a non-chat model, you should use on_llm_start instead.
Parameters
serialized (Dict[str, Any]) –
messages (List[List[BaseMessage]]) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
Any
on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶
Run when LLM ends running.
Parameters
response (LLMResult) –
kwargs (Any) –
Return type
None
on_llm_error(error: BaseException, **kwargs: Any) → None[source]¶
Do nothing.
Parameters
error (BaseException) –
kwargs (Any) –
Return type
None
on_llm_new_token(token: str, **kwargs: Any) → None[source]¶
Do nothing.
Parameters
token (str) –
kwargs (Any) –
Return type
None
on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶
Run when LLM starts running.
ATTENTION: This method is called for non-chat models (regular LLMs). If you’re implementing a handler for a chat model, you should use on_chat_model_start instead.
Parameters
serialized (Dict[str, Any]) –
prompts (List[str]) –
kwargs (Any) –
Return type
None
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever ends running.
Parameters
documents (Sequence[Document]) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever errors.
Parameters
error (BaseException) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when Retriever starts running.
Parameters
serialized (Dict[str, Any]) –
query (str) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
Any
on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on a retry event.
Parameters
retry_state (RetryCallState) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_text(text: str, **kwargs: Any) → None[source]¶
Run on arbitrary text.
Parameters
text (str) –
kwargs (Any) –
Return type
None
on_tool_end(output: Any, observation_prefix: Optional[str] = None, llm_prefix: Optional[str] = None, **kwargs: Any) → None[source]¶
Run when tool ends running.
Parameters
output (Any) –
observation_prefix (Optional[str]) –
llm_prefix (Optional[str]) –
kwargs (Any) –
Return type
None
on_tool_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when tool errors.
Parameters
error (BaseException) –
kwargs (Any) –
Return type
None
on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) → None[source]¶
Run when tool starts running.
Parameters
serialized (Dict[str, Any]) –
input_str (str) –
kwargs (Any) –
Return type
None
Source: https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.arize_callback.ArizeCallbackHandler.html
langchain_community.callbacks.manager.get_openai_callback¶
langchain_community.callbacks.manager.get_openai_callback() → Generator[OpenAICallbackHandler, None, None][source]¶
Get the OpenAI callback handler in a context manager, which conveniently exposes token and cost information.
Returns
The OpenAI callback handler.
Return type
OpenAICallbackHandler
Example
>>> with get_openai_callback() as cb:
... # Use the OpenAI callback handler
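The example above can be fleshed out with a simplified, library-free sketch of the pattern: a context manager yields a handler object that accumulates usage as runs complete. The class and numbers below are illustrative mocks, not the real `OpenAICallbackHandler`:

```python
# Sketch of the get_openai_callback pattern: a context manager yielding a
# handler that accumulates token counts while it is active. Mock only.
from contextlib import contextmanager


class UsageHandler:
    def __init__(self):
        self.total_tokens = 0
        self.successful_requests = 0

    def on_llm_end(self, token_usage):
        # The real handler parses an LLMResult; here we take a bare count.
        self.total_tokens += token_usage
        self.successful_requests += 1


@contextmanager
def get_usage_handler():
    cb = UsageHandler()
    try:
        yield cb
    finally:
        pass  # the real helper also detaches the handler here


with get_usage_handler() as cb:
    cb.on_llm_end(42)  # a callback manager would invoke this per run
    cb.on_llm_end(8)
```

After the `with` block exits, `cb` still holds the accumulated totals, which is what makes the context-manager form convenient for cost reporting.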
Examples using get_openai_callback¶
AzureChatOpenAI
How to run custom functions
How to track token usage for LLMs
How to track token usage in ChatModels
Source: https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.manager.get_openai_callback.html
langchain_community.callbacks.aim_callback.AimCallbackHandler¶
class langchain_community.callbacks.aim_callback.AimCallbackHandler(repo: Optional[str] = None, experiment_name: Optional[str] = None, system_tracking_interval: Optional[int] = 10, log_system_params: bool = True)[source]¶
Callback Handler that logs to Aim.
Parameters
repo (str, optional) – Aim repository path or Repo object to which
Run object is bound. If skipped, default Repo is used.
experiment_name (str, optional) – Sets Run’s experiment property.
‘default’ if not specified. Can be used later to query runs/sequences.
system_tracking_interval (int, optional) – Sets the tracking interval
in seconds for system usage metrics (CPU, Memory, etc.). Set to None
to disable system metrics tracking.
log_system_params (bool, optional) – Enable/Disable logging of system
params such as installed packages, git info, environment variables, etc.
This handler will utilize the associated callback method called and formats
the input of each callback function with metadata regarding the state of LLM run
and then logs the response to Aim.
Initialize callback handler.
Attributes
always_verbose
Whether to call verbose callbacks even if verbose is False.
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
raise_error
run_inline
Methods
__init__([repo, experiment_name, ...])
Initialize callback handler.
flush_tracker([repo, experiment_name, ...])
Flush the tracker and reset the session.
get_custom_callback_meta()
on_agent_action(action, **kwargs)
Run on agent action.
on_agent_finish(finish, **kwargs)
Run when agent ends running.
on_chain_end(outputs, **kwargs)
Run when chain ends running.
on_chain_error(error, **kwargs)
Run when chain errors.
on_chain_start(serialized, inputs, **kwargs)
Run when chain starts running.
on_chat_model_start(serialized, messages, *, ...)
Run when a chat model starts running.
on_llm_end(response, **kwargs)
Run when LLM ends running.
on_llm_error(error, **kwargs)
Run when LLM errors.
on_llm_new_token(token, **kwargs)
Run when LLM generates a new token.
on_llm_start(serialized, prompts, **kwargs)
Run when LLM starts.
on_retriever_end(documents, *, run_id[, ...])
Run when Retriever ends running.
on_retriever_error(error, *, run_id[, ...])
Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id)
Run when Retriever starts running.
on_retry(retry_state, *, run_id[, parent_run_id])
Run on a retry event.
on_text(text, **kwargs)
Run when agent is ending.
on_tool_end(output, **kwargs)
Run when tool ends running.
on_tool_error(error, **kwargs)
Run when tool errors.
on_tool_start(serialized, input_str, **kwargs)
Run when tool starts running.
reset_callback_meta()
Reset the callback metadata.
setup(**kwargs)
__init__(repo: Optional[str] = None, experiment_name: Optional[str] = None, system_tracking_interval: Optional[int] = 10, log_system_params: bool = True) → None[source]¶
Initialize callback handler.
Parameters
repo (Optional[str]) –
experiment_name (Optional[str]) –
system_tracking_interval (Optional[int]) –
log_system_params (bool) –
Return type
None
flush_tracker(repo: Optional[str] = None, experiment_name: Optional[str] = None, system_tracking_interval: Optional[int] = 10, log_system_params: bool = True, langchain_asset: Optional[Any] = None, reset: bool = True, finish: bool = False) → None[source]¶
Flush the tracker and reset the session.
Parameters
repo (str, optional) – Aim repository path or Repo object to which
Run object is bound. If skipped, default Repo is used.
experiment_name (str, optional) – Sets Run’s experiment property.
‘default’ if not specified. Can be used later to query runs/sequences.
system_tracking_interval (int, optional) – Sets the tracking interval
in seconds for system usage metrics (CPU, Memory, etc.). Set to None
to disable system metrics tracking.
log_system_params (bool, optional) – Enable/Disable logging of system
params such as installed packages, git info, environment variables, etc.
langchain_asset (Optional[Any]) – The langchain asset to save.
reset (bool) – Whether to reset the session.
finish (bool) – Whether to finish the run.
Returns – None
Return type
None
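The `reset` and `finish` flags above control session lifecycle: flush the buffered records, then either start a fresh session or close the run entirely. A simplified, library-free sketch of those semantics (a mock, not the real Aim tracker):

```python
# Toy model of flush-and-reset semantics in the spirit of flush_tracker:
# `reset` clears the buffer for a new session, `finish` closes the run.
class MockTracker:
    def __init__(self):
        self.records = []
        self.finished = False

    def log(self, item):
        self.records.append(item)

    def flush_tracker(self, reset=True, finish=False):
        flushed = list(self.records)  # hand buffered records to the backend
        if reset or finish:
            self.records.clear()
        if finish:
            self.finished = True
        return flushed


t = MockTracker()
t.log("step-1")
t.log("step-2")
flushed = t.flush_tracker(reset=True)
```

With `reset=True, finish=False` (the defaults documented above), records are flushed and cleared but the run stays open for further logging.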
get_custom_callback_meta() → Dict[str, Any]¶
Return type
Dict[str, Any]
on_agent_action(action: AgentAction, **kwargs: Any) → Any[source]¶
Run on agent action.
Parameters
action (AgentAction) –
kwargs (Any) –
Return type
Any
on_agent_finish(finish: AgentFinish, **kwargs: Any) → None[source]¶
Run when agent ends running.
Parameters
finish (AgentFinish) –
kwargs (Any) –
Return type
None
on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Run when chain ends running.
Parameters
outputs (Dict[str, Any]) –
kwargs (Any) –
Return type
None
on_chain_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when chain errors.
Parameters
error (BaseException) –
kwargs (Any) –
Return type
None
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Run when chain starts running.
Parameters
serialized (Dict[str, Any]) –
inputs (Dict[str, Any]) –
kwargs (Any) –
Return type
None
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when a chat model starts running.
ATTENTION: This method is called for chat models. If you’re implementing a handler for a non-chat model, you should use on_llm_start instead.
Parameters
serialized (Dict[str, Any]) –
messages (List[List[BaseMessage]]) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
Any
on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶
Run when LLM ends running.
Parameters
response (LLMResult) –
kwargs (Any) –
Return type
None
on_llm_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when LLM errors.
Parameters
error (BaseException) –
kwargs (Any) –
Return type
None
on_llm_new_token(token: str, **kwargs: Any) → None[source]¶
Run when LLM generates a new token.
Parameters
token (str) –
kwargs (Any) –
Return type
None
on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶
Run when LLM starts.
Parameters
serialized (Dict[str, Any]) –
prompts (List[str]) –
kwargs (Any) –
Return type
None
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever ends running.
Parameters
documents (Sequence[Document]) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever errors.
Parameters
error (BaseException) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when Retriever starts running.
Parameters
serialized (Dict[str, Any]) –
query (str) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
Any
on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on a retry event.
Parameters
retry_state (RetryCallState) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_text(text: str, **kwargs: Any) → None[source]¶
Run when agent is ending.
Parameters
text (str) –
kwargs (Any) –
Return type
None
on_tool_end(output: Any, **kwargs: Any) → None[source]¶
Run when tool ends running.
Parameters
output (Any) –
kwargs (Any) –
Return type
None
on_tool_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when tool errors.
Parameters
error (BaseException) –
kwargs (Any) –
Return type
None
on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) → None[source]¶
Run when tool starts running.
Parameters
serialized (Dict[str, Any]) –
input_str (str) –
kwargs (Any) –
Return type
None
reset_callback_meta() → None¶
Reset the callback metadata.
Return type
None
setup(**kwargs: Any) → None[source]¶
Parameters
kwargs (Any) –
Return type
None
Examples using AimCallbackHandler¶
Aim
Source: https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.aim_callback.AimCallbackHandler.html
langchain_community.callbacks.streamlit.streamlit_callback_handler.LLMThought¶
class langchain_community.callbacks.streamlit.streamlit_callback_handler.LLMThought(parent_container: DeltaGenerator, labeler: LLMThoughtLabeler, expanded: bool, collapse_on_complete: bool)[source]¶
A thought in the LLM’s thought stream.
Initialize the LLMThought.
Parameters
parent_container (DeltaGenerator) – The container we’re writing into.
labeler (LLMThoughtLabeler) – The labeler to use for this thought.
expanded (bool) – Whether the thought should be expanded by default.
collapse_on_complete (bool) – Whether the thought should be collapsed.
Attributes
container
The container we're writing into.
last_tool
The last tool executed by this thought
Methods
__init__(parent_container, labeler, ...)
Initialize the LLMThought.
clear()
Remove the thought from the screen.
complete([final_label])
Finish the thought.
on_agent_action(action[, color])
on_llm_end(response, **kwargs)
on_llm_error(error, **kwargs)
on_llm_new_token(token, **kwargs)
on_llm_start(serialized, prompts)
on_tool_end(output[, color, ...])
on_tool_error(error, **kwargs)
on_tool_start(serialized, input_str, **kwargs)
__init__(parent_container: DeltaGenerator, labeler: LLMThoughtLabeler, expanded: bool, collapse_on_complete: bool)[source]¶
Initialize the LLMThought.
Parameters
parent_container (DeltaGenerator) – The container we’re writing into.
labeler (LLMThoughtLabeler) – The labeler to use for this thought.
expanded (bool) – Whether the thought should be expanded by default.
collapse_on_complete (bool) – Whether the thought should be collapsed.
clear() → None[source]¶
Remove the thought from the screen. A cleared thought can’t be reused.
Return type
None
complete(final_label: Optional[str] = None) → None[source]¶
Finish the thought.
Parameters
final_label (Optional[str]) –
Return type
None
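The lifecycle documented above (tokens stream in, `complete()` optionally collapses the expander, `clear()` makes the thought unusable) can be sketched as a tiny state machine. This is a library-free mock for illustration; the real class writes into a Streamlit container:

```python
# Toy state machine mirroring the LLMThought lifecycle. Mock only.
class ToyThought:
    def __init__(self, expanded=True, collapse_on_complete=True):
        self.expanded = expanded
        self.collapse_on_complete = collapse_on_complete
        self.text = ""
        self.label = None
        self.cleared = False

    def on_llm_new_token(self, token):
        if self.cleared:
            raise RuntimeError("a cleared thought can't be reused")
        self.text += token  # streamed tokens accumulate into the thought

    def complete(self, final_label=None):
        self.label = final_label or "Complete"
        if self.collapse_on_complete:
            self.expanded = False  # collapse the expander when done

    def clear(self):
        self.text = ""
        self.cleared = True  # per the docs, cleared thoughts are dead


th = ToyThought(expanded=True, collapse_on_complete=True)
for tok in ("Hel", "lo"):
    th.on_llm_new_token(tok)
th.complete("Done")
```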
on_agent_action(action: AgentAction, color: Optional[str] = None, **kwargs: Any) → Any[source]¶
Parameters
action (AgentAction) –
color (Optional[str]) –
kwargs (Any) –
Return type
Any
on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶
Parameters
response (LLMResult) –
kwargs (Any) –
Return type
None
on_llm_error(error: BaseException, **kwargs: Any) → None[source]¶
Parameters
error (BaseException) –
kwargs (Any) –
Return type
None
on_llm_new_token(token: str, **kwargs: Any) → None[source]¶
Parameters
token (str) –
kwargs (Any) –
Return type
None
on_llm_start(serialized: Dict[str, Any], prompts: List[str]) → None[source]¶
Parameters
serialized (Dict[str, Any]) –
prompts (List[str]) –
Return type
None
on_tool_end(output: Any, color: Optional[str] = None, observation_prefix: Optional[str] = None, llm_prefix: Optional[str] = None, **kwargs: Any) → None[source]¶
Parameters
output (Any) –
color (Optional[str]) –
observation_prefix (Optional[str]) –
llm_prefix (Optional[str]) –
kwargs (Any) –
Return type
None
on_tool_error(error: BaseException, **kwargs: Any) → None[source]¶
Parameters
error (BaseException) –
kwargs (Any) –
Return type
None
on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) → None[source]¶
Parameters
serialized (Dict[str, Any]) –
input_str (str) –
kwargs (Any) –
Return type
None
Source: https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.streamlit.streamlit_callback_handler.LLMThought.html
langchain_core.callbacks.manager.AsyncRunManager¶
class langchain_core.callbacks.manager.AsyncRunManager(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶
Async Run Manager.
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run.
Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
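The constructor parameters above amount to a run identity plus a set of handlers to fan events out to. A simplified, stdlib-only sketch of that dispatch (mock handlers, illustrative names — not the real `AsyncRunManager`, which inherits richer behavior):

```python
# Sketch of async callback dispatch in the spirit of AsyncRunManager.on_text:
# fan the event out to every registered handler, concurrently.
import asyncio
import uuid


class CollectingHandler:
    def __init__(self):
        self.seen = []

    async def on_text(self, text, **kwargs):
        self.seen.append(text)


class ToyAsyncRunManager:
    def __init__(self, run_id, handlers, parent_run_id=None):
        self.run_id = run_id
        self.handlers = handlers
        self.parent_run_id = parent_run_id

    async def on_text(self, text, **kwargs):
        # An async manager can await all handlers concurrently.
        await asyncio.gather(
            *(h.on_text(text, run_id=self.run_id) for h in self.handlers)
        )


h1, h2 = CollectingHandler(), CollectingHandler()
mgr = ToyAsyncRunManager(run_id=uuid.uuid4(), handlers=[h1, h2])
asyncio.run(mgr.on_text("hello"))
```

This also motivates `get_sync()`: code that cannot await needs an equivalent synchronous manager over the same handlers.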
Methods
__init__(*, run_id, handlers, ...[, ...])
Initialize the run manager.
get_noop_manager()
Return a manager that doesn't perform any operations.
get_sync()
Get the equivalent sync RunManager.
on_retry(retry_state, **kwargs)
Run on a retry event.
on_text(text, **kwargs)
Run when text is received.
__init__(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None) → None¶
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run.
Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
Return type
None
classmethod get_noop_manager() → BRM¶
Return a manager that doesn’t perform any operations.
Returns
The noop manager.
Return type
BaseRunManager
abstract get_sync() → RunManager[source]¶
Get the equivalent sync RunManager.
Returns
The sync RunManager.
Return type
RunManager
async on_retry(retry_state: RetryCallState, **kwargs: Any) → None[source]¶
Run on a retry event.
Parameters
retry_state (RetryCallState) –
kwargs (Any) –
Return type
None
async on_text(text: str, **kwargs: Any) → Any[source]¶
Run when text is received.
Parameters
text (str) – The received text.
kwargs (Any) –
Returns
The result of the callback.
Return type
Any
Source: https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.manager.AsyncRunManager.html
langchain_community.callbacks.mlflow_callback.construct_html_from_prompt_and_generation¶
langchain_community.callbacks.mlflow_callback.construct_html_from_prompt_and_generation(prompt: str, generation: str) → Any[source]¶
Construct an html element from a prompt and a generation.
Parameters
prompt (str) – The prompt.
generation (str) – The generation.
Returns
The html string.
Return type
(str)
Source: https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.mlflow_callback.construct_html_from_prompt_and_generation.html
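A rough sketch of what such a helper does: render the prompt and generation into escaped HTML. The exact markup is an assumption — the real function formats output for MLflow artifacts:

```python
# Hypothetical stand-in for construct_html_from_prompt_and_generation:
# escape both strings and wrap them in labeled paragraphs.
from html import escape


def sketch_html(prompt: str, generation: str) -> str:
    return (
        f"<p><b>Prompt:</b> {escape(prompt)}</p>"
        f"<p><b>Generation:</b> {escape(generation)}</p>"
    )


html_out = sketch_html("2+2?", "<b>4</b>")
```

Escaping matters because model output may itself contain markup, as in the example above.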
langchain_nvidia_ai_endpoints.callbacks.get_usage_callback¶
langchain_nvidia_ai_endpoints.callbacks.get_usage_callback(price_map: dict = {}, callback: Optional[UsageCallbackHandler] = None) → Generator[UsageCallbackHandler, None, None][source]¶
Get the usage callback handler in a context manager, which conveniently exposes token and cost information.
Returns
The usage callback handler.
Return type
UsageCallbackHandler
Parameters
price_map (dict) –
callback (Optional[UsageCallbackHandler]) –
Example
>>> with get_usage_callback() as cb:
... # Use the usage callback handler
Source: https://api.python.langchain.com/en/latest/callbacks/langchain_nvidia_ai_endpoints.callbacks.get_usage_callback.html
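The `price_map` parameter suggests the core idea: map model names to per-token prices and accumulate cost as runs complete. A simplified mock of that bookkeeping (names and prices are illustrative, not the real `UsageCallbackHandler`):

```python
# Toy usage callback: accumulate tokens and price them via a price_map.
class ToyUsageCallback:
    def __init__(self, price_map=None):
        self.price_map = price_map or {}
        self.total_tokens = 0
        self.total_cost = 0.0

    def record(self, model, tokens):
        self.total_tokens += tokens
        # Unknown models contribute zero cost in this sketch.
        self.total_cost += tokens * self.price_map.get(model, 0.0)


cb = ToyUsageCallback(price_map={"toy-model": 0.001})
cb.record("toy-model", 1000)
```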
langchain_community.callbacks.openai_info.OpenAICallbackHandler¶
class langchain_community.callbacks.openai_info.OpenAICallbackHandler[source]¶
Callback Handler that tracks OpenAI info.
Attributes
always_verbose
Whether to call verbose callbacks even if verbose is False.
completion_tokens
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
prompt_tokens
raise_error
run_inline
successful_requests
total_cost
total_tokens
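The attributes above are running totals that `on_llm_end` updates from each response's token usage. A simplified sketch of that accounting (the payload shape here is an assumption standing in for `LLMResult`; this is not the real handler):

```python
# Mock of on_llm_end-style accounting: pull token_usage out of a response
# payload and update the running totals listed in the attributes above.
class ToyOpenAICallback:
    def __init__(self):
        self.prompt_tokens = 0
        self.completion_tokens = 0
        self.total_tokens = 0
        self.successful_requests = 0

    def on_llm_end(self, response):
        usage = response.get("token_usage", {})
        self.prompt_tokens += usage.get("prompt_tokens", 0)
        self.completion_tokens += usage.get("completion_tokens", 0)
        self.total_tokens += usage.get("total_tokens", 0)
        self.successful_requests += 1


cb = ToyOpenAICallback()
cb.on_llm_end({"token_usage": {"prompt_tokens": 10,
                               "completion_tokens": 5,
                               "total_tokens": 15}})
```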
Methods
__init__()
on_agent_action(action, *, run_id[, ...])
Run on agent action.
on_agent_finish(finish, *, run_id[, ...])
Run on agent end.
on_chain_end(outputs, *, run_id[, parent_run_id])
Run when chain ends running.
on_chain_error(error, *, run_id[, parent_run_id])
Run when chain errors.
on_chain_start(serialized, inputs, *, run_id)
Run when chain starts running.
on_chat_model_start(serialized, messages, *, ...)
Run when a chat model starts running.
on_llm_end(response, **kwargs)
Collect token usage.
on_llm_error(error, *, run_id[, parent_run_id])
Run when LLM errors.
on_llm_new_token(token, **kwargs)
Print out the token.
on_llm_start(serialized, prompts, **kwargs)
Print out the prompts.
on_retriever_end(documents, *, run_id[, ...])
Run when Retriever ends running.
on_retriever_error(error, *, run_id[, ...])
Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id)
Run when Retriever starts running.
on_retry(retry_state, *, run_id[, parent_run_id])
Run on a retry event.
on_text(text, *, run_id[, parent_run_id])
Run on arbitrary text.
on_tool_end(output, *, run_id[, parent_run_id])
Run when tool ends running.
on_tool_error(error, *, run_id[, parent_run_id])
Run when tool errors.
on_tool_start(serialized, input_str, *, run_id)
Run when tool starts running.
__init__() → None[source]¶
Return type
None
on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on agent action.
Parameters
action (AgentAction) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on agent end.
Parameters
finish (AgentFinish) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when chain ends running.
Parameters
outputs (Dict[str, Any]) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_chain_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when chain errors.
Parameters
error (BaseException) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when chain starts running.
Parameters
serialized (Dict[str, Any]) –
inputs (Dict[str, Any]) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
Any
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when a chat model starts running.
ATTENTION: This method is called for chat models. If you’re implementing a handler for a non-chat model, you should use on_llm_start instead.
Parameters
serialized (Dict[str, Any]) –
messages (List[List[BaseMessage]]) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
Any
on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶
Collect token usage.
Parameters
response (LLMResult) –
kwargs (Any) –
Return type
None
on_llm_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when LLM errors.
:param error: The error that occurred.
:type error: BaseException
:param kwargs: Additional keyword arguments.
response (LLMResult): The response which was generated before the error occurred.
Parameters
error (BaseException) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_llm_new_token(token: str, **kwargs: Any) → None[source]¶
Print out the token.
Parameters | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.openai_info.OpenAICallbackHandler.html |
token (str) –
kwargs (Any) –
Return type
None
on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶
Print out the prompts.
Parameters
serialized (Dict[str, Any]) –
prompts (List[str]) –
kwargs (Any) –
Return type
None
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever ends running.
Parameters
documents (Sequence[Document]) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever errors.
Parameters
error (BaseException) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when Retriever starts running.
Parameters
serialized (Dict[str, Any]) –
query (str) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) – | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.openai_info.OpenAICallbackHandler.html |
kwargs (Any) –
Return type
Any
on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on a retry event.
Parameters
retry_state (RetryCallState) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on arbitrary text.
Parameters
text (str) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_tool_end(output: Any, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when tool ends running.
Parameters
output (Any) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_tool_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when tool errors.
Parameters
error (BaseException) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.openai_info.OpenAICallbackHandler.html |
on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inputs: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when tool starts running.
Parameters
serialized (Dict[str, Any]) –
input_str (str) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) –
inputs (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
Any | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.openai_info.OpenAICallbackHandler.html |
langchain_community.callbacks.mlflow_callback.analyze_text¶
langchain_community.callbacks.mlflow_callback.analyze_text(text: str, nlp: Optional[Any] = None, textstat: Optional[Any] = None) → dict[source]¶
Analyze text using textstat and spacy.
Parameters
text (str) – The text to analyze.
nlp (spacy.lang) – The spacy language model to use for visualization.
textstat (Optional[Any]) – The textstat library to use for complexity metrics calculation.
Returns
A dictionary containing the complexity metrics and visualization files serialized to an HTML string.
Return type
(dict) | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.mlflow_callback.analyze_text.html |
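The real analyze_text delegates to the textstat and spacy libraries. As a rough illustration of the kind of complexity-metrics dictionary it returns, here is a stdlib-only sketch; the function name and the specific metrics are assumptions for illustration, not the library's actual output keys.

```python
import re


def basic_text_metrics(text: str) -> dict:
    """Hypothetical stand-in for analyze_text: a few complexity metrics
    computed with the stdlib instead of textstat/spacy."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "word_count": len(words),
        "sentence_count": len(sentences),
        "avg_words_per_sentence": len(words) / max(len(sentences), 1),
    }


metrics = basic_text_metrics("Callbacks log events. Handlers react to them.")
```

A caller would merge such a dictionary into whatever record it logs per LLM generation.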
langchain_community.callbacks.infino_callback.InfinoCallbackHandler¶
class langchain_community.callbacks.infino_callback.InfinoCallbackHandler(model_id: Optional[str] = None, model_version: Optional[str] = None, verbose: bool = False)[source]¶
Callback Handler that logs to Infino.
Attributes
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
raise_error
run_inline
Methods
__init__([model_id, model_version, verbose])
on_agent_action(action, **kwargs)
Do nothing when agent takes a specific action.
on_agent_finish(finish, **kwargs)
Do nothing.
on_chain_end(outputs, **kwargs)
Do nothing when LLM chain ends.
on_chain_error(error, **kwargs)
Need to log the error.
on_chain_start(serialized, inputs, **kwargs)
Do nothing when LLM chain starts.
on_chat_model_start(serialized, messages, ...)
Run when LLM starts running.
on_llm_end(response, **kwargs)
Log the latency, error, token usage, and response to Infino.
on_llm_error(error, **kwargs)
Set the error flag.
on_llm_new_token(token, **kwargs)
Do nothing when a new token is generated.
on_llm_start(serialized, prompts, **kwargs)
Log the prompts to Infino, and set start time and error flag.
on_retriever_end(documents, *, run_id[, ...])
Run when Retriever ends running. | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.infino_callback.InfinoCallbackHandler.html |
on_retriever_error(error, *, run_id[, ...])
Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id)
Run when Retriever starts running.
on_retry(retry_state, *, run_id[, parent_run_id])
Run on a retry event.
on_text(text, **kwargs)
Do nothing.
on_tool_end(output[, observation_prefix, ...])
Do nothing when tool ends.
on_tool_error(error, **kwargs)
Do nothing when tool outputs an error.
on_tool_start(serialized, input_str, **kwargs)
Do nothing when tool starts.
Parameters
model_id (Optional[str]) –
model_version (Optional[str]) –
verbose (bool) –
__init__(model_id: Optional[str] = None, model_version: Optional[str] = None, verbose: bool = False) → None[source]¶
Parameters
model_id (Optional[str]) –
model_version (Optional[str]) –
verbose (bool) –
Return type
None
on_agent_action(action: AgentAction, **kwargs: Any) → Any[source]¶
Do nothing when agent takes a specific action.
Parameters
action (AgentAction) –
kwargs (Any) –
Return type
Any
on_agent_finish(finish: AgentFinish, **kwargs: Any) → None[source]¶
Do nothing.
Parameters
finish (AgentFinish) –
kwargs (Any) –
Return type
None
on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Do nothing when LLM chain ends.
Parameters
outputs (Dict[str, Any]) –
kwargs (Any) – | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.infino_callback.InfinoCallbackHandler.html |
Return type
None
on_chain_error(error: BaseException, **kwargs: Any) → None[source]¶
Need to log the error.
Parameters
error (BaseException) –
kwargs (Any) –
Return type
None
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Do nothing when LLM chain starts.
Parameters
serialized (Dict[str, Any]) –
inputs (Dict[str, Any]) –
kwargs (Any) –
Return type
None
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs: Any) → None[source]¶
Run when LLM starts running.
Parameters
serialized (Dict[str, Any]) –
messages (List[List[BaseMessage]]) –
kwargs (Any) –
Return type
None
on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶
Log the latency, error, token usage, and response to Infino.
Parameters
response (LLMResult) –
kwargs (Any) –
Return type
None
on_llm_error(error: BaseException, **kwargs: Any) → None[source]¶
Set the error flag.
Parameters
error (BaseException) –
kwargs (Any) –
Return type
None
on_llm_new_token(token: str, **kwargs: Any) → None[source]¶
Do nothing when a new token is generated.
Parameters
token (str) –
kwargs (Any) –
Return type
None | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.infino_callback.InfinoCallbackHandler.html |
on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶
Log the prompts to Infino, and set start time and error flag.
Parameters
serialized (Dict[str, Any]) –
prompts (List[str]) –
kwargs (Any) –
Return type
None
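The docstrings above describe the Infino handler's bookkeeping: on_llm_start records the prompts, a start time, and an error flag, and on_llm_end logs the latency. A self-contained sketch of that timing pattern, with an assumed class name and simplified signatures:

```python
import time


class LatencyTracker:
    """Illustrative handler that records a start time in on_llm_start and
    computes latency in on_llm_end, mirroring the described bookkeeping."""

    def __init__(self) -> None:
        self.start_time = None
        self.latencies = []
        self.error = False

    def on_llm_start(self, serialized: dict, prompts: list) -> None:
        # Reset the error flag and stamp the start of the run.
        self.error = False
        self.start_time = time.monotonic()

    def on_llm_end(self, response: dict) -> None:
        if self.start_time is not None:
            self.latencies.append(time.monotonic() - self.start_time)
            self.start_time = None

    def on_llm_error(self, error: BaseException) -> None:
        self.error = True


tracker = LatencyTracker()
tracker.on_llm_start({}, ["hello"])
tracker.on_llm_end({"text": "hi"})
```

Using a monotonic clock avoids negative latencies if the wall clock is adjusted mid-run.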
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever ends running.
Parameters
documents (Sequence[Document]) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever errors.
Parameters
error (BaseException) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when Retriever starts running.
Parameters
serialized (Dict[str, Any]) –
query (str) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) – | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.infino_callback.InfinoCallbackHandler.html |
metadata (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
Any
on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on a retry event.
Parameters
retry_state (RetryCallState) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_text(text: str, **kwargs: Any) → None[source]¶
Do nothing.
Parameters
text (str) –
kwargs (Any) –
Return type
None
on_tool_end(output: str, observation_prefix: Optional[str] = None, llm_prefix: Optional[str] = None, **kwargs: Any) → None[source]¶
Do nothing when tool ends.
Parameters
output (str) –
observation_prefix (Optional[str]) –
llm_prefix (Optional[str]) –
kwargs (Any) –
Return type
None
on_tool_error(error: BaseException, **kwargs: Any) → None[source]¶
Do nothing when tool outputs an error.
Parameters
error (BaseException) –
kwargs (Any) –
Return type
None
on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) → None[source]¶
Do nothing when tool starts.
Parameters
serialized (Dict[str, Any]) –
input_str (str) –
kwargs (Any) –
Return type
None
Examples using InfinoCallbackHandler¶
Infino | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.infino_callback.InfinoCallbackHandler.html |
langchain_community.callbacks.utils.BaseMetadataCallbackHandler¶
class langchain_community.callbacks.utils.BaseMetadataCallbackHandler[source]¶
Handle the metadata and associated function states for callbacks.
step¶
The current step.
Type
int
starts¶
The number of times the start method has been called.
Type
int
ends¶
The number of times the end method has been called.
Type
int
errors¶
The number of times the error method has been called.
Type
int
text_ctr¶
The number of times the text method has been called.
Type
int
ignore_llm_¶
Whether to ignore llm callbacks.
Type
bool
ignore_chain_¶
Whether to ignore chain callbacks.
Type
bool
ignore_agent_¶
Whether to ignore agent callbacks.
Type
bool
ignore_retriever_¶
Whether to ignore retriever callbacks.
Type
bool
always_verbose_¶
Whether to always be verbose.
Type
bool
chain_starts¶
The number of times the chain start method has been called.
Type
int
chain_ends¶
The number of times the chain end method has been called.
Type
int
llm_starts¶
The number of times the llm start method has been called.
Type
int
llm_ends¶
The number of times the llm end method has been called.
Type
int
llm_streams¶
The number of times the text method has been called.
Type
int
tool_starts¶
The number of times the tool start method has been called.
Type
int
tool_ends¶
The number of times the tool end method has been called.
Type
int
agent_ends¶
The number of times the agent end method has been called.
Type
int
on_llm_start_records¶ | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.utils.BaseMetadataCallbackHandler.html |
A list of records of the on_llm_start method.
Type
list
on_llm_token_records¶
A list of records of the on_llm_token method.
Type
list
on_llm_end_records¶
A list of records of the on_llm_end method.
Type
list
on_chain_start_records¶
A list of records of the on_chain_start method.
Type
list
on_chain_end_records¶
A list of records of the on_chain_end method.
Type
list
on_tool_start_records¶
A list of records of the on_tool_start method.
Type
list
on_tool_end_records¶
A list of records of the on_tool_end method.
Type
list
on_agent_finish_records¶
A list of records of the on_agent_end method.
Type
list
Attributes
always_verbose
Whether to call verbose callbacks even if verbose is False.
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_llm
Whether to ignore LLM callbacks.
Methods
__init__()
get_custom_callback_meta()
reset_callback_meta()
Reset the callback metadata.
__init__() → None[source]¶
Return type
None
get_custom_callback_meta() → Dict[str, Any][source]¶
Return type
Dict[str, Any]
reset_callback_meta() → None[source]¶
Reset the callback metadata.
Return type
None | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.utils.BaseMetadataCallbackHandler.html |
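The attribute list above is essentially a set of counters plus record lists that subclasses update from their callback hooks. A stdlib sketch of that bookkeeping (attribute names mirror the documented ones; the update logic is an assumption, not the library's implementation):

```python
class MetadataCounters:
    """Sketch of the counter state BaseMetadataCallbackHandler documents."""

    def __init__(self) -> None:
        self.step = 0
        self.starts = 0
        self.ends = 0
        self.errors = 0

    def on_start(self) -> None:
        # Every callback bumps the global step; start hooks also bump starts.
        self.step += 1
        self.starts += 1

    def on_end(self) -> None:
        self.step += 1
        self.ends += 1

    def get_custom_callback_meta(self) -> dict:
        return {
            "step": self.step,
            "starts": self.starts,
            "ends": self.ends,
            "errors": self.errors,
        }

    def reset_callback_meta(self) -> None:
        """Reset the callback metadata, as the documented method does."""
        self.step = self.starts = self.ends = self.errors = 0


meta = MetadataCounters()
meta.on_start()
meta.on_end()
snapshot = meta.get_custom_callback_meta()
meta.reset_callback_meta()
```

Snapshotting before reset is the usual pattern when the metadata is flushed to an external logger per run.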
langchain_community.callbacks.wandb_callback.analyze_text¶
langchain_community.callbacks.wandb_callback.analyze_text(text: str, complexity_metrics: bool = True, visualize: bool = True, nlp: Optional[Any] = None, output_dir: Optional[Union[str, Path]] = None) → dict[source]¶
Analyze text using textstat and spacy.
Parameters
text (str) – The text to analyze.
complexity_metrics (bool) – Whether to compute complexity metrics.
visualize (bool) – Whether to visualize the text.
nlp (spacy.lang) – The spacy language model to use for visualization.
output_dir (str) – The directory to save the visualization files to.
Returns
A dictionary containing the complexity metrics and visualization files serialized in a wandb.Html element.
Return type
(dict) | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.wandb_callback.analyze_text.html |
langchain_core.callbacks.manager.AsyncCallbackManagerForChainGroup¶
class langchain_core.callbacks.manager.AsyncCallbackManagerForChainGroup(handlers: List[BaseCallbackHandler], inheritable_handlers: Optional[List[BaseCallbackHandler]] = None, parent_run_id: Optional[UUID] = None, *, parent_run_manager: AsyncCallbackManagerForChainRun, **kwargs: Any)[source]¶
Async callback manager for the chain group.
Initialize callback manager.
Attributes
is_async
Return whether the handler is async.
Methods
__init__(handlers[, inheritable_handlers, ...])
Initialize callback manager.
add_handler(handler[, inherit])
Add a handler to the callback manager.
add_metadata(metadata[, inherit])
add_tags(tags[, inherit])
configure([inheritable_callbacks, ...])
Configure the async callback manager.
copy()
Copy the callback manager.
on_chain_end(outputs, **kwargs)
Run when traced chain group ends.
on_chain_error(error, **kwargs)
Run when chain errors.
on_chain_start(serialized, inputs[, run_id])
Run when chain starts running.
on_chat_model_start(serialized, messages[, ...])
Run when LLM starts running.
on_llm_start(serialized, prompts[, run_id])
Run when LLM starts running.
on_retriever_start(serialized, query[, ...])
Run when retriever starts running.
on_tool_start(serialized, input_str[, ...])
Run when tool starts running.
remove_handler(handler)
Remove a handler from the callback manager.
remove_metadata(keys)
remove_tags(tags)
set_handler(handler[, inherit])
Set handler as the only handler on the callback manager.
set_handlers(handlers[, inherit])
Set handlers as the only handlers on the callback manager.
Parameters | https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.manager.AsyncCallbackManagerForChainGroup.html |
handlers (List[BaseCallbackHandler]) –
inheritable_handlers (Optional[List[BaseCallbackHandler]]) –
parent_run_id (Optional[UUID]) –
parent_run_manager (AsyncCallbackManagerForChainRun) –
kwargs (Any) –
__init__(handlers: List[BaseCallbackHandler], inheritable_handlers: Optional[List[BaseCallbackHandler]] = None, parent_run_id: Optional[UUID] = None, *, parent_run_manager: AsyncCallbackManagerForChainRun, **kwargs: Any) → None[source]¶
Initialize callback manager.
Parameters
handlers (List[BaseCallbackHandler]) –
inheritable_handlers (Optional[List[BaseCallbackHandler]]) –
parent_run_id (Optional[UUID]) –
parent_run_manager (AsyncCallbackManagerForChainRun) –
kwargs (Any) –
Return type
None
add_handler(handler: BaseCallbackHandler, inherit: bool = True) → None¶
Add a handler to the callback manager.
Parameters
handler (BaseCallbackHandler) –
inherit (bool) –
Return type
None
add_metadata(metadata: Dict[str, Any], inherit: bool = True) → None¶
Parameters
metadata (Dict[str, Any]) –
inherit (bool) –
Return type
None
add_tags(tags: List[str], inherit: bool = True) → None¶
Parameters
tags (List[str]) –
inherit (bool) –
Return type
None | https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.manager.AsyncCallbackManagerForChainGroup.html |
classmethod configure(inheritable_callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, local_callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, verbose: bool = False, inheritable_tags: Optional[List[str]] = None, local_tags: Optional[List[str]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None, local_metadata: Optional[Dict[str, Any]] = None) → AsyncCallbackManager¶
Configure the async callback manager.
Parameters
inheritable_callbacks (Optional[Callbacks], optional) – The inheritable
callbacks. Defaults to None.
local_callbacks (Optional[Callbacks], optional) – The local callbacks.
Defaults to None.
verbose (bool, optional) – Whether to enable verbose mode. Defaults to False.
inheritable_tags (Optional[List[str]], optional) – The inheritable tags.
Defaults to None.
local_tags (Optional[List[str]], optional) – The local tags.
Defaults to None.
inheritable_metadata (Optional[Dict[str, Any]], optional) – The inheritable
metadata. Defaults to None.
local_metadata (Optional[Dict[str, Any]], optional) – The local metadata.
Defaults to None.
Returns
The configured async callback manager.
Return type
AsyncCallbackManager
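Conceptually, configure() merges two sources of handlers: inheritable callbacks that propagate to child runs, and local callbacks that apply only to the current manager. A simplified sketch of that merge, under the assumption that both inputs are plain handler lists (the real method also accepts an existing manager and merges tags and metadata):

```python
def merge_callbacks(inheritable=None, local=None):
    """Hypothetical sketch of configure()'s core merge: combine inheritable
    and local handler lists, keeping the inheritable ones marked for
    propagation to child runs."""
    inheritable = list(inheritable or [])
    local = list(local or [])
    return {
        # All handlers fire for this run...
        "handlers": inheritable + local,
        # ...but only inheritable ones are passed down to child runs.
        "inheritable_handlers": inheritable,
    }


state = merge_callbacks(inheritable=["tracer"], local=["stdout"])
```

This split is why a tracer attached at the chain level still sees nested LLM runs, while a one-off local handler does not.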
copy() → AsyncCallbackManagerForChainGroup[source]¶
Copy the callback manager.
Return type
AsyncCallbackManagerForChainGroup
async on_chain_end(outputs: Union[Dict[str, Any], Any], **kwargs: Any) → None[source]¶
Run when traced chain group ends.
Parameters
outputs (Union[Dict[str, Any], Any]) – The outputs of the chain.
kwargs (Any) –
Return type
None | https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.manager.AsyncCallbackManagerForChainGroup.html |
async on_chain_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when chain errors.
Parameters
error (Exception or KeyboardInterrupt) – The error.
kwargs (Any) –
Return type
None
async on_chain_start(serialized: Dict[str, Any], inputs: Union[Dict[str, Any], Any], run_id: Optional[UUID] = None, **kwargs: Any) → AsyncCallbackManagerForChainRun¶
Run when chain starts running.
Parameters
serialized (Dict[str, Any]) – The serialized chain.
inputs (Union[Dict[str, Any], Any]) – The inputs to the chain.
run_id (UUID, optional) – The ID of the run. Defaults to None.
kwargs (Any) –
Returns
The async callback manager for the chain run.
Return type
AsyncCallbackManagerForChainRun
async on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], run_id: Optional[UUID] = None, **kwargs: Any) → List[AsyncCallbackManagerForLLMRun]¶
Run when LLM starts running.
Parameters
serialized (Dict[str, Any]) – The serialized LLM.
messages (List[List[BaseMessage]]) – The list of messages.
run_id (UUID, optional) – The ID of the run. Defaults to None.
kwargs (Any) –
Returns
The list of async callback managers, one for each LLM Run
corresponding to each inner message list.
Return type
List[AsyncCallbackManagerForLLMRun]
async on_llm_start(serialized: Dict[str, Any], prompts: List[str], run_id: Optional[UUID] = None, **kwargs: Any) → List[AsyncCallbackManagerForLLMRun]¶ | https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.manager.AsyncCallbackManagerForChainGroup.html |
Run when LLM starts running.
Parameters
serialized (Dict[str, Any]) – The serialized LLM.
prompts (List[str]) – The list of prompts.
run_id (UUID, optional) – The ID of the run. Defaults to None.
kwargs (Any) –
Returns
The list of async callback managers, one for each LLM Run corresponding
to each prompt.
Return type
List[AsyncCallbackManagerForLLMRun]
async on_retriever_start(serialized: Dict[str, Any], query: str, run_id: Optional[UUID] = None, parent_run_id: Optional[UUID] = None, **kwargs: Any) → AsyncCallbackManagerForRetrieverRun¶
Run when retriever starts running.
Parameters
serialized (Dict[str, Any]) –
query (str) –
run_id (Optional[UUID]) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
AsyncCallbackManagerForRetrieverRun
async on_tool_start(serialized: Dict[str, Any], input_str: str, run_id: Optional[UUID] = None, parent_run_id: Optional[UUID] = None, **kwargs: Any) → AsyncCallbackManagerForToolRun¶
Run when tool starts running.
Parameters
serialized (Dict[str, Any]) – The serialized tool.
input_str (str) – The input to the tool.
run_id (UUID, optional) – The ID of the run. Defaults to None.
parent_run_id (UUID, optional) – The ID of the parent run.
Defaults to None.
kwargs (Any) –
Returns
The async callback manager for the tool run.
Return type
AsyncCallbackManagerForToolRun
remove_handler(handler: BaseCallbackHandler) → None¶
Remove a handler from the callback manager. | https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.manager.AsyncCallbackManagerForChainGroup.html |
Parameters
handler (BaseCallbackHandler) –
Return type
None
remove_metadata(keys: List[str]) → None¶
Parameters
keys (List[str]) –
Return type
None
remove_tags(tags: List[str]) → None¶
Parameters
tags (List[str]) –
Return type
None
set_handler(handler: BaseCallbackHandler, inherit: bool = True) → None¶
Set handler as the only handler on the callback manager.
Parameters
handler (BaseCallbackHandler) –
inherit (bool) –
Return type
None
set_handlers(handlers: List[BaseCallbackHandler], inherit: bool = True) → None¶
Set handlers as the only handlers on the callback manager.
Parameters
handlers (List[BaseCallbackHandler]) –
inherit (bool) –
Return type
None | https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.manager.AsyncCallbackManagerForChainGroup.html |
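The async manager's job is to fan one event out to every registered handler concurrently. A toy sketch of that dispatch using asyncio.gather; the PrintHandler class stands in for a real BaseCallbackHandler and is an assumption for illustration.

```python
import asyncio


class PrintHandler:
    """Toy async handler; stands in for a BaseCallbackHandler."""

    def __init__(self, name: str, log: list) -> None:
        self.name = name
        self.log = log

    async def on_chain_end(self, outputs: dict) -> None:
        self.log.append((self.name, outputs))


async def dispatch_chain_end(handlers, outputs: dict) -> None:
    # Mirrors the fan-out an async callback manager performs: every
    # registered handler's hook is awaited concurrently.
    await asyncio.gather(*(h.on_chain_end(outputs) for h in handlers))


log = []
handlers = [PrintHandler("a", log), PrintHandler("b", log)]
asyncio.run(dispatch_chain_end(handlers, {"result": "done"}))
```

Gathering rather than awaiting sequentially keeps a slow handler (e.g. one doing network I/O) from blocking the others.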
langchain_community.callbacks.clearml_callback.import_clearml¶
langchain_community.callbacks.clearml_callback.import_clearml() → Any[source]¶
Import the clearml python package and raise an error if it is not installed.
Return type
Any | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.clearml_callback.import_clearml.html |
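import_clearml follows a common soft-dependency pattern: try the import and raise an ImportError with installation guidance when the package is missing. A generic sketch of that pattern (the helper name and hint text are assumptions; the real function's exact error message may differ):

```python
import importlib


def import_optional(module_name: str, install_hint: str):
    """Sketch of the import_clearml pattern: import a soft dependency and
    raise a helpful ImportError when it is missing."""
    try:
        return importlib.import_module(module_name)
    except ImportError as exc:
        raise ImportError(
            f"Could not import {module_name}. Install it with `{install_hint}`."
        ) from exc


math_mod = import_optional("math", "pip install math")  # stdlib module, always present
try:
    import_optional("definitely_missing_pkg", "pip install definitely-missing-pkg")
except ImportError as err:
    message = str(err)
```

Deferring the import to call time keeps the integration package importable even when the optional dependency is absent.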
langchain_community.callbacks.streamlit.mutable_expander.ChildType¶
class langchain_community.callbacks.streamlit.mutable_expander.ChildType(value)[source]¶
Enumerator of the child type.
MARKDOWN = 'MARKDOWN'¶
EXCEPTION = 'EXCEPTION'¶ | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.streamlit.mutable_expander.ChildType.html |
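The ChildType(value) signature above reflects standard Enum value lookup. A short sketch with an illustrative enum of the same shape (ChildKind is a stand-in name, not the library class):

```python
from enum import Enum


class ChildKind(Enum):
    """Illustrative enum mirroring ChildType's string-valued members."""

    MARKDOWN = "MARKDOWN"
    EXCEPTION = "EXCEPTION"


# Value lookup, as the ChildType(value) constructor form suggests.
kind = ChildKind("MARKDOWN")
```

Looking up by value is handy when the child type arrives as a serialized string.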
langchain_community.callbacks.uptrain_callback.UpTrainCallbackHandler¶
class langchain_community.callbacks.uptrain_callback.UpTrainCallbackHandler(*, project_name: str = 'langchain', key_type: str = 'openai', api_key: str = 'sk-****************', model: str = 'gpt-3.5-turbo', log_results: bool = True)[source]¶
Callback Handler that logs evaluation results to uptrain and the console.
Parameters
project_name (str) – The project name to be shown in UpTrain dashboard.
key_type (str) – Type of key to use. Must be ‘uptrain’ or ‘openai’.
api_key (str) – API key for the UpTrain or OpenAI API. (This key is required to perform evaluations using GPT.)
model (str) –
log_results (bool) –
Raises
ValueError – If the key type is invalid.
ImportError – If the uptrain package is not installed.
Initializes the UpTrainCallbackHandler.
Attributes
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
raise_error
run_inline
Methods
__init__(*[, project_name, key_type, ...])
Initializes the UpTrainCallbackHandler.
on_agent_action(action, *, run_id[, ...])
Run on agent action.
on_agent_finish(finish, *, run_id[, ...])
Run on agent end.
on_chain_end(outputs, *, run_id[, parent_run_id])
Run when chain ends running. | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.uptrain_callback.UpTrainCallbackHandler.html |
on_chain_error(error, *, run_id[, parent_run_id])
Run when chain errors.
on_chain_start(serialized, inputs, *, run_id)
Do nothing when chain starts
on_chat_model_start(serialized, messages, *, ...)
Run when a chat model starts running.
on_llm_end(response, *, run_id[, parent_run_id])
Log records to uptrain when an LLM ends.
on_llm_error(error, *, run_id[, parent_run_id])
Run when LLM errors.
on_llm_new_token(token, *[, chunk, ...])
Run on new LLM token.
on_llm_start(serialized, prompts, *, run_id)
Run when LLM starts running.
on_retriever_end(documents, *, run_id[, ...])
Run when Retriever ends running.
on_retriever_error(error, *, run_id[, ...])
Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id)
Run when Retriever starts running.
on_retry(retry_state, *, run_id[, parent_run_id])
Run on a retry event.
on_text(text, *, run_id[, parent_run_id])
Run on arbitrary text.
on_tool_end(output, *, run_id[, parent_run_id])
Run when tool ends running.
on_tool_error(error, *, run_id[, parent_run_id]) | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.uptrain_callback.UpTrainCallbackHandler.html |
Run when tool errors.
on_tool_start(serialized, input_str, *, run_id)
Run when tool starts running.
uptrain_evaluate(evaluation_name, data, checks)
Run an evaluation on the UpTrain server using UpTrain client.
__init__(*, project_name: str = 'langchain', key_type: str = 'openai', api_key: str = 'sk-****************', model: str = 'gpt-3.5-turbo', log_results: bool = True) → None[source]¶
Initializes the UpTrainCallbackHandler.
Parameters
project_name (str) –
key_type (str) –
api_key (str) –
model (str) –
log_results (bool) –
Return type
None
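Per the class docs, the constructor raises ValueError when key_type is not 'uptrain' or 'openai'. A minimal sketch of that validation step (the helper function and exact message wording are assumptions for illustration):

```python
def validate_key_type(key_type: str) -> str:
    """Sketch of the ValueError the UpTrain handler raises for an invalid
    key_type; accepted values per the docs are 'uptrain' or 'openai'."""
    if key_type not in ("uptrain", "openai"):
        raise ValueError(
            f"Invalid key type: {key_type!r}. Must be 'uptrain' or 'openai'."
        )
    return key_type


ok = validate_key_type("openai")
try:
    validate_key_type("azure")
except ValueError as err:
    error_message = str(err)
```

Validating eagerly in __init__ surfaces configuration mistakes before any evaluation request is made.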
on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on agent action.
Parameters
action (AgentAction) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on agent end.
Parameters
finish (AgentFinish) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when chain ends running.
Parameters | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.uptrain_callback.UpTrainCallbackHandler.html |
outputs (Dict[str, Any]) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_chain_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when chain errors.
Parameters
error (BaseException) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, run_type: Optional[str] = None, name: Optional[str] = None, **kwargs: Any) → None[source]¶
Do nothing when chain starts
Parameters
serialized (Dict[str, Any]) –
inputs (Dict[str, Any]) –
run_id (UUID) –
tags (Optional[List[str]]) –
parent_run_id (Optional[UUID]) –
metadata (Optional[Dict[str, Any]]) –
run_type (Optional[str]) –
name (Optional[str]) –
kwargs (Any) –
Return type
None
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when a chat model starts running. | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.uptrain_callback.UpTrainCallbackHandler.html |
ATTENTION: This method is called for chat models. If you're implementing a handler for a non-chat model, you should use on_llm_start instead.
Parameters
serialized (Dict[str, Any]) –
messages (List[List[BaseMessage]]) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
Any
on_llm_end(response: LLMResult, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None[source]¶
Log records to uptrain when an LLM ends.
Parameters
response (LLMResult) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
None
on_llm_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when LLM errors.
:param error: The error that occurred.
:type error: BaseException
:param kwargs: Additional keyword arguments.
response (LLMResult): The response which was generated before the error occurred.
Parameters
error (BaseException) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_llm_new_token(token: str, *, chunk: Optional[Union[GenerationChunk, ChatGenerationChunk]] = None, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on new LLM token. Only available when streaming is enabled.
Parameters | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.uptrain_callback.UpTrainCallbackHandler.html |
token (str) – The new token.
chunk (GenerationChunk | ChatGenerationChunk) – The new generated chunk, containing content and other information.
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
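`on_llm_new_token` fires once per streamed token and is only called when streaming is enabled. A dependency-free stand-in (a hypothetical class, not the real handler base class) showing how a handler with this signature might buffer tokens per run:

```python
from typing import Any, Dict, List, Optional
from uuid import UUID, uuid4


class TokenCollector:
    """Stand-in handler that buffers streamed tokens per run."""

    def __init__(self) -> None:
        self.tokens: Dict[UUID, List[str]] = {}

    def on_llm_new_token(self, token: str, *, run_id: UUID,
                         parent_run_id: Optional[UUID] = None,
                         **kwargs: Any) -> None:
        # Called once per generated token for the run identified by run_id.
        self.tokens.setdefault(run_id, []).append(token)


collector = TokenCollector()
rid = uuid4()
for t in ["Hel", "lo", "!"]:
    collector.on_llm_new_token(t, run_id=rid)
streamed_text = "".join(collector.tokens[rid])
```

Joining the buffered tokens reconstructs the streamed completion for that run.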
on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when LLM starts running.
ATTENTION: This method is called for non-chat models (regular LLMs). If you’re implementing a handler for a chat model,
you should use on_chat_model_start instead.
Parameters
serialized (Dict[str, Any]) –
prompts (List[str]) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
Any
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any[source]¶
Run when Retriever ends running.
Parameters
documents (Sequence[Document]) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.uptrain_callback.UpTrainCallbackHandler.html |
Run when Retriever errors.
Parameters
error (BaseException) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None[source]¶
Run when Retriever starts running.
Parameters
serialized (Dict[str, Any]) –
query (str) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
None
on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on a retry event.
Parameters
retry_state (RetryCallState) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on arbitrary text.
Parameters
text (str) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_tool_end(output: Any, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when tool ends running. | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.uptrain_callback.UpTrainCallbackHandler.html |
Parameters
output (Any) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_tool_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when tool errors.
Parameters
error (BaseException) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inputs: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when tool starts running.
Parameters
serialized (Dict[str, Any]) –
input_str (str) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) –
inputs (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
Any
uptrain_evaluate(evaluation_name: str, data: List[Dict[str, Any]], checks: List[str]) → None[source]¶
Run an evaluation on the UpTrain server using UpTrain client.
Parameters
evaluation_name (str) –
data (List[Dict[str, Any]]) –
checks (List[str]) –
Return type
None
Examples using UpTrainCallbackHandler¶
UpTrain | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.uptrain_callback.UpTrainCallbackHandler.html |
langchain_community.callbacks.sagemaker_callback.save_json¶
langchain_community.callbacks.sagemaker_callback.save_json(data: dict, file_path: str) → None[source]¶
Save dict to local file path.
Parameters
data (dict) – The dictionary to be saved.
file_path (str) – Local file path.
Return type
None | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.sagemaker_callback.save_json.html |
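Per the signature above, the helper serializes a dict to a local file path. A minimal stdlib-only equivalent (a sketch of the documented behavior, not the library's actual implementation):

```python
import json
import os
import tempfile


def save_json(data: dict, file_path: str) -> None:
    # Serialize the dictionary as JSON and write it to the given local path.
    with open(file_path, "w") as f:
        json.dump(data, f)


# Round-trip check against a temporary file.
path = os.path.join(tempfile.mkdtemp(), "metrics.json")
save_json({"run": 1, "loss": 0.25}, path)
with open(path) as f:
    loaded = json.load(f)
```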
langchain_nvidia_ai_endpoints.callbacks.get_token_cost_for_model¶
langchain_nvidia_ai_endpoints.callbacks.get_token_cost_for_model(model_name: str, num_tokens: int, price_map: dict, is_completion: bool = False) → float[source]¶
Get the cost in USD for a given model and number of tokens.
Parameters
model_name (str) – Name of the model
num_tokens (int) – Number of tokens.
price_map (dict) – Map of model names to cost per 1000 tokens.
Defaults to AI Foundation Endpoint pricing per https://www.together.ai/pricing.
is_completion (bool) – Whether the model is used for completion or not.
Defaults to False.
Returns
Cost in USD.
Return type
float | https://api.python.langchain.com/en/latest/callbacks/langchain_nvidia_ai_endpoints.callbacks.get_token_cost_for_model.html |
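The documented contract is a lookup in `price_map` (USD per 1000 tokens) scaled by `num_tokens`. A hypothetical sketch of that contract — the real function's key handling (for example, completion-specific prices when `is_completion` is True) is internal:

```python
def token_cost(model_name: str, num_tokens: int, price_map: dict,
               is_completion: bool = False) -> float:
    # Look up the USD price per 1000 tokens and scale by the token count.
    # The real function may consult a different key when is_completion=True.
    price_per_1k = price_map[model_name]
    return price_per_1k * (num_tokens / 1000)


# Hypothetical model name and price, for illustration only.
cost = token_cost("example-model", 1500, {"example-model": 0.002})
```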
langchain_community.callbacks.openai_info.get_openai_token_cost_for_model¶
langchain_community.callbacks.openai_info.get_openai_token_cost_for_model(model_name: str, num_tokens: int, is_completion: bool = False) → float[source]¶
Get the cost in USD for a given model and number of tokens.
Parameters
model_name (str) – Name of the model
num_tokens (int) – Number of tokens.
is_completion (bool) – Whether the model is used for completion or not.
Defaults to False.
Returns
Cost in USD.
Return type
float | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.openai_info.get_openai_token_cost_for_model.html |
langchain_core.callbacks.base.ToolManagerMixin¶
class langchain_core.callbacks.base.ToolManagerMixin[source]¶
Mixin for tool callbacks.
Methods
__init__()
on_tool_end(output, *, run_id[, parent_run_id])
Run when tool ends running.
on_tool_error(error, *, run_id[, parent_run_id])
Run when tool errors.
__init__()¶
on_tool_end(output: Any, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any[source]¶
Run when tool ends running.
Parameters
output (Any) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_tool_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any[source]¶
Run when tool errors.
Parameters
error (BaseException) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any | https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.ToolManagerMixin.html |
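A handler using this mixin overrides `on_tool_end` and `on_tool_error`. A dependency-free stand-in (hypothetical class, not a real subclass of ToolManagerMixin) mirroring the documented signatures:

```python
from typing import Any, List, Optional, Tuple
from uuid import UUID, uuid4


class ToolLogger:
    """Stand-in mirroring the ToolManagerMixin hook signatures."""

    def __init__(self) -> None:
        self.events: List[Tuple[str, Any]] = []

    def on_tool_end(self, output: Any, *, run_id: UUID,
                    parent_run_id: Optional[UUID] = None,
                    **kwargs: Any) -> Any:
        # Record the tool's output when it finishes.
        self.events.append(("end", output))

    def on_tool_error(self, error: BaseException, *, run_id: UUID,
                      parent_run_id: Optional[UUID] = None,
                      **kwargs: Any) -> Any:
        # Record the error type when the tool fails.
        self.events.append(("error", type(error).__name__))


logger = ToolLogger()
logger.on_tool_end("42", run_id=uuid4())
logger.on_tool_error(ValueError("bad input"), run_id=uuid4())
```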
langchain_core.callbacks.file.FileCallbackHandler¶
class langchain_core.callbacks.file.FileCallbackHandler(filename: str, mode: str = 'a', color: Optional[str] = None)[source]¶
Callback Handler that writes to a file.
Initialize callback handler.
Attributes
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
raise_error
run_inline
Methods
__init__(filename[, mode, color])
Initialize callback handler.
on_agent_action(action[, color])
Run on agent action.
on_agent_finish(finish[, color])
Run on agent end.
on_chain_end(outputs, **kwargs)
Print out that we finished a chain.
on_chain_error(error, *, run_id[, parent_run_id])
Run when chain errors.
on_chain_start(serialized, inputs, **kwargs)
Print out that we are entering a chain.
on_chat_model_start(serialized, messages, *, ...)
Run when a chat model starts running.
on_llm_end(response, *, run_id[, parent_run_id])
Run when LLM ends running.
on_llm_error(error, *, run_id[, parent_run_id])
Run when LLM errors.
on_llm_new_token(token, *[, chunk, ...])
Run on new LLM token. | https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.file.FileCallbackHandler.html |
on_llm_start(serialized, prompts, *, run_id)
Run when LLM starts running.
on_retriever_end(documents, *, run_id[, ...])
Run when Retriever ends running.
on_retriever_error(error, *, run_id[, ...])
Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id)
Run when Retriever starts running.
on_retry(retry_state, *, run_id[, parent_run_id])
Run on a retry event.
on_text(text[, color, end])
Run when agent ends.
on_tool_end(output[, color, ...])
If not the final action, print out observation.
on_tool_error(error, *, run_id[, parent_run_id])
Run when tool errors.
on_tool_start(serialized, input_str, *, run_id)
Run when tool starts running.
Parameters
filename (str) –
mode (str) –
color (Optional[str]) –
__init__(filename: str, mode: str = 'a', color: Optional[str] = None) → None[source]¶
Initialize callback handler.
Parameters
filename (str) –
mode (str) –
color (Optional[str]) –
Return type
None
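FileCallbackHandler appends callback output to the file named in its constructor, using the given open mode. A dependency-free sketch of the same pattern (a hypothetical class, not the real implementation):

```python
import os
import tempfile
from typing import Any, Dict


class FileLoggerSketch:
    """Sketch of the FileCallbackHandler pattern: append callback
    output to a file opened with the configured mode."""

    def __init__(self, filename: str, mode: str = "a") -> None:
        self.filename = filename
        self.mode = mode

    def on_chain_start(self, serialized: Dict[str, Any],
                       inputs: Dict[str, Any], **kwargs: Any) -> None:
        # Print out that we are entering a chain, into the log file.
        with open(self.filename, self.mode) as f:
            f.write(f"Entering chain with inputs: {inputs}\n")


path = os.path.join(tempfile.mkdtemp(), "run.log")
handler = FileLoggerSketch(path)
handler.on_chain_start({}, {"question": "hi"})
with open(path) as f:
    logged = f.read()
```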
on_agent_action(action: AgentAction, color: Optional[str] = None, **kwargs: Any) → Any[source]¶
Run on agent action.
Parameters
action (AgentAction) –
color (Optional[str]) –
kwargs (Any) –
Return type
Any | https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.file.FileCallbackHandler.html |
on_agent_finish(finish: AgentFinish, color: Optional[str] = None, **kwargs: Any) → None[source]¶
Run on agent end.
Parameters
finish (AgentFinish) –
color (Optional[str]) –
kwargs (Any) –
Return type
None
on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Print out that we finished a chain.
Parameters
outputs (Dict[str, Any]) –
kwargs (Any) –
Return type
None
on_chain_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when chain errors.
Parameters
error (BaseException) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Print out that we are entering a chain.
Parameters
serialized (Dict[str, Any]) –
inputs (Dict[str, Any]) –
kwargs (Any) –
Return type
None
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when a chat model starts running. | https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.file.FileCallbackHandler.html |
ATTENTION: This method is called for chat models. If you’re implementing a handler for a non-chat model, you should use on_llm_start instead.
Parameters
serialized (Dict[str, Any]) –
messages (List[List[BaseMessage]]) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
Any
on_llm_end(response: LLMResult, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when LLM ends running.
Parameters
response (LLMResult) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_llm_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when LLM errors.
:param error: The error that occurred.
:type error: BaseException
:param kwargs: Additional keyword arguments.
response (LLMResult): The response which was generated before the error occurred.
Parameters
error (BaseException) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_llm_new_token(token: str, *, chunk: Optional[Union[GenerationChunk, ChatGenerationChunk]] = None, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on new LLM token. Only available when streaming is enabled.
Parameters | https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.file.FileCallbackHandler.html |
token (str) – The new token.
chunk (GenerationChunk | ChatGenerationChunk) – The new generated chunk, containing content and other information.
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when LLM starts running.
ATTENTION: This method is called for non-chat models (regular LLMs). If you’re implementing a handler for a chat model,
you should use on_chat_model_start instead.
Parameters
serialized (Dict[str, Any]) –
prompts (List[str]) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
Any
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever ends running.
Parameters
documents (Sequence[Document]) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ | https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.file.FileCallbackHandler.html |
Run when Retriever errors.
Parameters
error (BaseException) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when Retriever starts running.
Parameters
serialized (Dict[str, Any]) –
query (str) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
Any
on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on a retry event.
Parameters
retry_state (RetryCallState) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_text(text: str, color: Optional[str] = None, end: str = '', **kwargs: Any) → None[source]¶
Run when agent ends.
Parameters
text (str) –
color (Optional[str]) –
end (str) –
kwargs (Any) –
Return type
None
on_tool_end(output: str, color: Optional[str] = None, observation_prefix: Optional[str] = None, llm_prefix: Optional[str] = None, **kwargs: Any) → None[source]¶ | https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.file.FileCallbackHandler.html |
If not the final action, print out observation.
Parameters
output (str) –
color (Optional[str]) –
observation_prefix (Optional[str]) –
llm_prefix (Optional[str]) –
kwargs (Any) –
Return type
None
on_tool_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when tool errors.
Parameters
error (BaseException) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inputs: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when tool starts running.
Parameters
serialized (Dict[str, Any]) –
input_str (str) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) –
inputs (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
Any | https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.file.FileCallbackHandler.html |
langchain_community.callbacks.llmonitor_callback.identify¶
langchain_community.callbacks.llmonitor_callback.identify(user_id: str, user_props: Optional[Any] = None) → UserContextManager[source]¶
Builds an LLMonitor UserContextManager.
Parameters
user_id (-) – The user id.
user_props (-) – The user properties.
Returns
A context manager that sets the user context.
Return type
UserContextManager
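`identify` returns a context manager that scopes user metadata to the calls made inside the `with` block. A minimal sketch of that pattern using `contextvars` (hypothetical names; not the actual UserContextManager implementation):

```python
import contextvars
from contextlib import contextmanager
from typing import Any, Optional

# Hypothetical module-level user context that downstream calls could read.
_user_ctx: contextvars.ContextVar = contextvars.ContextVar(
    "user_ctx", default=None
)


@contextmanager
def identify(user_id: str, user_props: Optional[Any] = None):
    # Set the user context on entry, restore the previous value on exit.
    token = _user_ctx.set({"user_id": user_id, "user_props": user_props})
    try:
        yield
    finally:
        _user_ctx.reset(token)


with identify("user-123", user_props={"plan": "pro"}):
    inside = _user_ctx.get()
after = _user_ctx.get()
```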
Examples using identify¶
LLMonitor | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.llmonitor_callback.identify.html |
langchain_community.callbacks.llmonitor_callback.LLMonitorCallbackHandler¶
class langchain_community.callbacks.llmonitor_callback.LLMonitorCallbackHandler(app_id: Optional[str] = None, api_url: Optional[str] = None, verbose: bool = False)[source]¶
Callback Handler for LLMonitor.
#### Parameters:
- app_id: The app id of the app you want to report to. Defaults to
None, which means that LLMONITOR_APP_ID will be used.
- api_url: The url of the LLMonitor API. Defaults to None,
which means that either LLMONITOR_API_URL environment variable
or https://app.llmonitor.com will be used.
#### Raises:
- ValueError: if app_id is not provided either as an
argument or as an environment variable.
- ConnectionError: if the connection to the API fails.
#### Example:
```python
from langchain_community.llms import OpenAI
from langchain_community.callbacks import LLMonitorCallbackHandler
llmonitor_callback = LLMonitorCallbackHandler()
llm = OpenAI(callbacks=[llmonitor_callback],
metadata={"userId": "user-123"})
llm.invoke("Hello, how are you?")
```
Attributes
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
raise_error
run_inline
Methods
__init__([app_id, api_url, verbose])
on_agent_action(action, *, run_id[, ...])
Run on agent action.
on_agent_finish(finish, *, run_id[, ...])
Run on agent end. | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.llmonitor_callback.LLMonitorCallbackHandler.html |
on_chain_end(outputs, *, run_id[, parent_run_id])
Run when chain ends running.
on_chain_error(error, *, run_id[, parent_run_id])
Run when chain errors.
on_chain_start(serialized, inputs, *, run_id)
Run when chain starts running.
on_chat_model_start(serialized, messages, *, ...)
Run when a chat model starts running.
on_llm_end(response, *, run_id[, parent_run_id])
Run when LLM ends running.
on_llm_error(error, *, run_id[, parent_run_id])
Run when LLM errors.
on_llm_new_token(token, *[, chunk, ...])
Run on new LLM token.
on_llm_start(serialized, prompts, *, run_id)
Run when LLM starts running.
on_retriever_end(documents, *, run_id[, ...])
Run when Retriever ends running.
on_retriever_error(error, *, run_id[, ...])
Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id)
Run when Retriever starts running.
on_retry(retry_state, *, run_id[, parent_run_id])
Run on a retry event.
on_text(text, *, run_id[, parent_run_id])
Run on arbitrary text.
on_tool_end(output, *, run_id[, ...])
Run when tool ends running. | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.llmonitor_callback.LLMonitorCallbackHandler.html |
on_tool_error(error, *, run_id[, parent_run_id])
Run when tool errors.
on_tool_start(serialized, input_str, *, run_id)
Run when tool starts running.
Parameters
app_id (Optional[str]) –
api_url (Optional[str]) –
verbose (bool) –
__init__(app_id: Optional[str] = None, api_url: Optional[str] = None, verbose: bool = False) → None[source]¶
Parameters
app_id (Optional[str]) –
api_url (Optional[str]) –
verbose (bool) –
Return type
None
on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any[source]¶
Run on agent action.
Parameters
action (AgentAction) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any[source]¶
Run on agent end.
Parameters
finish (AgentFinish) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any[source]¶
Run when chain ends running.
Parameters
outputs (Dict[str, Any]) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.llmonitor_callback.LLMonitorCallbackHandler.html |
Any
on_chain_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any[source]¶
Run when chain errors.
Parameters
error (BaseException) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any[source]¶
Run when chain starts running.
Parameters
serialized (Dict[str, Any]) –
inputs (Dict[str, Any]) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
Any
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any[source]¶
Run when a chat model starts running.
ATTENTION: This method is called for chat models. If you’re implementing a handler for a non-chat model, you should use on_llm_start instead.
Parameters
serialized (Dict[str, Any]) –
messages (List[List[BaseMessage]]) –
run_id (UUID) –
parent_run_id (Optional[UUID]) – | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.llmonitor_callback.LLMonitorCallbackHandler.html |
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
Any
on_llm_end(response: LLMResult, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None[source]¶
Run when LLM ends running.
Parameters
response (LLMResult) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
None
on_llm_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any[source]¶
Run when LLM errors.
:param error: The error that occurred.
:type error: BaseException
:param kwargs: Additional keyword arguments.
response (LLMResult): The response which was generated before the error occurred.
Parameters
error (BaseException) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_llm_new_token(token: str, *, chunk: Optional[Union[GenerationChunk, ChatGenerationChunk]] = None, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on new LLM token. Only available when streaming is enabled.
Parameters
token (str) – The new token.
chunk (GenerationChunk | ChatGenerationChunk) – The new generated chunk, containing content and other information.
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.llmonitor_callback.LLMonitorCallbackHandler.html |
Any
on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None[source]¶
Run when LLM starts running.
ATTENTION: This method is called for non-chat models (regular LLMs). If you’re implementing a handler for a chat model,
you should use on_chat_model_start instead.
Parameters
serialized (Dict[str, Any]) –
prompts (List[str]) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
None
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever ends running.
Parameters
documents (Sequence[Document]) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever errors.
Parameters
error (BaseException) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.llmonitor_callback.LLMonitorCallbackHandler.html |
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when Retriever starts running.
Parameters
serialized (Dict[str, Any]) –
query (str) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
Any
on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on a retry event.
Parameters
retry_state (RetryCallState) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on arbitrary text.
Parameters
text (str) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_tool_end(output: Any, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None[source]¶
Run when tool ends running.
Parameters
output (Any) –
run_id (UUID) –
parent_run_id (Optional[UUID]) – | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.llmonitor_callback.LLMonitorCallbackHandler.html |
tags (Optional[List[str]]) –
kwargs (Any) –
Return type
None
on_tool_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any[source]¶
Run when tool errors.
Parameters
error (BaseException) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None[source]¶
Run when tool starts running.
Parameters
serialized (Dict[str, Any]) –
input_str (str) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
None
Examples using LLMonitorCallbackHandler¶
LLMonitor | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.llmonitor_callback.LLMonitorCallbackHandler.html |
langchain_community.callbacks.mlflow_callback.MlflowCallbackHandler¶
class langchain_community.callbacks.mlflow_callback.MlflowCallbackHandler(name: Optional[str] = 'langchainrun-%', experiment: Optional[str] = 'langchain', tags: Optional[Dict] = None, tracking_uri: Optional[str] = None, run_id: Optional[str] = None, artifacts_dir: str = '')[source]¶
Callback Handler that logs metrics and artifacts to mlflow server.
Parameters
name (str) – Name of the run.
experiment (str) – Name of the experiment.
tags (dict) – Tags to be attached for the run.
tracking_uri (str) – MLflow tracking server uri.
run_id (Optional[str]) –
artifacts_dir (str) –
This handler utilizes the associated callback method, formats
the input of each callback function with metadata regarding the state of the LLM run,
and adds the response to the list of records for both the {method}_records and
action. It then logs the response to the MLflow server.
Initialize callback handler.
Attributes
always_verbose
Whether to call verbose callbacks even if verbose is False.
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
raise_error
run_inline
Methods
__init__([name, experiment, tags, ...])
Initialize callback handler.
flush_tracker([langchain_asset, finish])
get_custom_callback_meta()
on_agent_action(action, **kwargs)
Run on agent action.
on_agent_finish(finish, **kwargs)
Run when agent ends running. | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.mlflow_callback.MlflowCallbackHandler.html |
on_chain_end(outputs, **kwargs)
Run when chain ends running.
on_chain_error(error, **kwargs)
Run when chain errors.
on_chain_start(serialized, inputs, **kwargs)
Run when chain starts running.
on_chat_model_start(serialized, messages, *, ...)
Run when a chat model starts running.
on_llm_end(response, **kwargs)
Run when LLM ends running.
on_llm_error(error, **kwargs)
Run when LLM errors.
on_llm_new_token(token, **kwargs)
Run when LLM generates a new token.
on_llm_start(serialized, prompts, **kwargs)
Run when LLM starts.
on_retriever_end(documents, **kwargs)
Run when Retriever ends running.
on_retriever_error(error, **kwargs)
Run when Retriever errors.
on_retriever_start(serialized, query, **kwargs)
Run when Retriever starts running.
on_retry(retry_state, *, run_id[, parent_run_id])
Run on a retry event.
on_text(text, **kwargs)
Run when text is received.
on_tool_end(output, **kwargs)
Run when tool ends running.
on_tool_error(error, **kwargs)
Run when tool errors.
on_tool_start(serialized, input_str, **kwargs)
Run when tool starts running.
reset_callback_meta()
Reset the callback metadata. | https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.mlflow_callback.MlflowCallbackHandler.html |
__init__(name: Optional[str] = 'langchainrun-%', experiment: Optional[str] = 'langchain', tags: Optional[Dict] = None, tracking_uri: Optional[str] = None, run_id: Optional[str] = None, artifacts_dir: str = '') → None[source]¶
Initialize callback handler.
Parameters
name (Optional[str]) –
experiment (Optional[str]) –
tags (Optional[Dict]) –
tracking_uri (Optional[str]) –
run_id (Optional[str]) –
artifacts_dir (str) –
Return type
None
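The constructor parameters above configure where runs are logged; the handler itself is driven by the lifecycle hooks documented below (on_llm_start, on_llm_end, and so on). As a rough sketch of that flow, the stand-in below shows a toy handler receiving those hooks from a fake run loop. DemoHandler and fake_llm_run are hypothetical illustrations for orientation only, not LangChain or MLflow APIs.

```python
# Simplified stand-in for the callback lifecycle -- NOT the real
# MlflowCallbackHandler; all names here are hypothetical.
from typing import Any, Dict, List


class DemoHandler:
    """Records which hooks fire, in order, like a tracking handler would."""

    def __init__(self) -> None:
        self.events: List[str] = []

    def on_llm_start(self, serialized: Dict[str, Any], prompts: List[str],
                     **kwargs: Any) -> None:
        self.events.append(f"llm_start:{len(prompts)} prompt(s)")

    def on_llm_end(self, response: str, **kwargs: Any) -> None:
        self.events.append("llm_end")


def fake_llm_run(prompt: str, callbacks: List[DemoHandler]) -> str:
    """Toy 'LLM' call that drives the hooks the way a chain run would."""
    for cb in callbacks:
        cb.on_llm_start({"name": "fake-llm"}, [prompt])
    response = prompt.upper()  # stand-in for model output
    for cb in callbacks:
        cb.on_llm_end(response)
    return response


handler = DemoHandler()
fake_llm_run("hello", callbacks=[handler])
print(handler.events)  # ['llm_start:1 prompt(s)', 'llm_end']
```

With the real handler, the same hooks fire automatically once it is passed via `callbacks=[...]` to an LLM or chain, and `flush_tracker` (below) pushes the accumulated records to MLflow.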
flush_tracker(langchain_asset: Optional[Any] = None, finish: bool = False) → None[source]¶
Parameters
langchain_asset (Optional[Any]) –
finish (bool) –
Return type
None
get_custom_callback_meta() → Dict[str, Any]¶
Return type
Dict[str, Any]
on_agent_action(action: AgentAction, **kwargs: Any) → Any[source]¶
Run on agent action.
Parameters
action (AgentAction) –
kwargs (Any) –
Return type
Any
on_agent_finish(finish: AgentFinish, **kwargs: Any) → None[source]¶
Run when agent ends running.
Parameters
finish (AgentFinish) –
kwargs (Any) –
Return type
None
on_chain_end(outputs: Union[Dict[str, Any], str, List[str]], **kwargs: Any) → None[source]¶
Run when chain ends running.
Parameters
outputs (Union[Dict[str, Any], str, List[str]]) –
kwargs (Any) –
Return type
None
on_chain_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when chain errors.
Parameters
error (BaseException) –
kwargs (Any) –
Return type
None
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Run when chain starts running.
Parameters
serialized (Dict[str, Any]) –
inputs (Dict[str, Any]) –
kwargs (Any) –
Return type
None
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when a chat model starts running.
ATTENTION: This method is called for chat models. If you’re implementing a handler for a non-chat model, you should use on_llm_start instead.
Parameters
serialized (Dict[str, Any]) –
messages (List[List[BaseMessage]]) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
Any
on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶
Run when LLM ends running.
Parameters
response (LLMResult) –
kwargs (Any) –
Return type
None
on_llm_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when LLM errors.
Parameters
error (BaseException) –
kwargs (Any) –
Return type
None
on_llm_new_token(token: str, **kwargs: Any) → None[source]¶
Run when LLM generates a new token.
Parameters
token (str) –
kwargs (Any) –
Return type
None
on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶
Run when LLM starts.
Parameters
serialized (Dict[str, Any]) –
prompts (List[str]) –
kwargs (Any) –
Return type
None
on_retriever_end(documents: Sequence[Document], **kwargs: Any) → Any[source]¶
Run when Retriever ends running.
Parameters
documents (Sequence[Document]) –
kwargs (Any) –
Return type
Any
on_retriever_error(error: BaseException, **kwargs: Any) → Any[source]¶
Run when Retriever errors.
Parameters
error (BaseException) –
kwargs (Any) –
Return type
Any
on_retriever_start(serialized: Dict[str, Any], query: str, **kwargs: Any) → Any[source]¶
Run when Retriever starts running.
Parameters
serialized (Dict[str, Any]) –
query (str) –
kwargs (Any) –
Return type
Any
on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on a retry event.
Parameters
retry_state (RetryCallState) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_text(text: str, **kwargs: Any) → None[source]¶
Run when text is received.
Parameters
text (str) –
kwargs (Any) –
Return type
None
on_tool_end(output: Any, **kwargs: Any) → None[source]¶
Run when tool ends running.
Parameters
output (Any) –
kwargs (Any) –
Return type
None
on_tool_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when tool errors.
Parameters
error (BaseException) –
kwargs (Any) –
Return type
None
on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) → None[source]¶
Run when tool starts running.
Parameters
serialized (Dict[str, Any]) –
input_str (str) –
kwargs (Any) –
Return type
None
reset_callback_meta() → None¶
Reset the callback metadata.
Return type
None
Examples using MlflowCallbackHandler¶
MLflow
langchain_community.callbacks.bedrock_anthropic_callback.BedrockAnthropicTokenUsageCallbackHandler¶
class langchain_community.callbacks.bedrock_anthropic_callback.BedrockAnthropicTokenUsageCallbackHandler[source]¶
Callback Handler that tracks bedrock anthropic info.
Attributes
always_verbose
Whether to call verbose callbacks even if verbose is False.
completion_tokens
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
prompt_tokens
raise_error
run_inline
successful_requests
total_cost
total_tokens
Methods
__init__()
on_agent_action(action, *, run_id[, ...])
Run on agent action.
on_agent_finish(finish, *, run_id[, ...])
Run on agent end.
on_chain_end(outputs, *, run_id[, parent_run_id])
Run when chain ends running.
on_chain_error(error, *, run_id[, parent_run_id])
Run when chain errors.
on_chain_start(serialized, inputs, *, run_id)
Run when chain starts running.
on_chat_model_start(serialized, messages, *, ...)
Run when a chat model starts running.
on_llm_end(response, **kwargs)
Collect token usage.
on_llm_error(error, *, run_id[, parent_run_id])
Run when LLM errors. :param error: The error that occurred. :type error: BaseException :param kwargs: Additional keyword arguments. - response (LLMResult): The response which was generated before the error occurred. :type kwargs: Any.
on_llm_new_token(token, **kwargs)
Print out the token.
on_llm_start(serialized, prompts, **kwargs)
Print out the prompts.
on_retriever_end(documents, *, run_id[, ...])
Run when Retriever ends running.
on_retriever_error(error, *, run_id[, ...])
Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id)
Run when Retriever starts running.
on_retry(retry_state, *, run_id[, parent_run_id])
Run on a retry event.
on_text(text, *, run_id[, parent_run_id])
Run on arbitrary text.
on_tool_end(output, *, run_id[, parent_run_id])
Run when tool ends running.
on_tool_error(error, *, run_id[, parent_run_id])
Run when tool errors.
on_tool_start(serialized, input_str, *, run_id)
Run when tool starts running.
__init__() → None[source]¶
Return type
None
on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on agent action.
Parameters
action (AgentAction) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on agent end.
Parameters
finish (AgentFinish) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when chain ends running.
Parameters
outputs (Dict[str, Any]) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_chain_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when chain errors.
Parameters
error (BaseException) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when chain starts running.
Parameters
serialized (Dict[str, Any]) –
inputs (Dict[str, Any]) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
Any
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when a chat model starts running.
ATTENTION: This method is called for chat models. If you’re implementing a handler for a non-chat model, you should use on_llm_start instead.
Parameters
serialized (Dict[str, Any]) –
messages (List[List[BaseMessage]]) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
Any
on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶
Collect token usage.
Parameters
response (LLMResult) –
kwargs (Any) –
Return type
None
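The sketch below illustrates the kind of bookkeeping such a handler performs in on_llm_end: read the usage block from the response, add to the running totals (prompt_tokens, completion_tokens, total_tokens, successful_requests), and accumulate a cost. The class name, the payload shape, and the per-token rates are illustrative assumptions, not the real handler or actual Bedrock/Anthropic pricing.

```python
# Simplified analogue of token-usage accumulation in on_llm_end.
# The rates below are made-up placeholders, NOT real pricing.
from typing import Any, Dict


class TokenUsageTracker:
    PROMPT_RATE = 0.000008      # assumed $/prompt token (placeholder)
    COMPLETION_RATE = 0.000024  # assumed $/completion token (placeholder)

    def __init__(self) -> None:
        self.prompt_tokens = 0
        self.completion_tokens = 0
        self.total_tokens = 0
        self.successful_requests = 0
        self.total_cost = 0.0

    def on_llm_end(self, response: Dict[str, Any], **kwargs: Any) -> None:
        """Collect token usage from the response's usage block."""
        usage = response.get("token_usage", {})
        p = usage.get("prompt_tokens", 0)
        c = usage.get("completion_tokens", 0)
        self.prompt_tokens += p
        self.completion_tokens += c
        self.total_tokens += p + c
        self.successful_requests += 1
        self.total_cost += p * self.PROMPT_RATE + c * self.COMPLETION_RATE


tracker = TokenUsageTracker()
tracker.on_llm_end({"token_usage": {"prompt_tokens": 100,
                                    "completion_tokens": 50}})
print(tracker.total_tokens)  # 150
```

The real handler exposes the same counters as the attributes listed above (completion_tokens, prompt_tokens, total_tokens, successful_requests, total_cost), updated once per successful LLM call.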
on_llm_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when LLM errors.
:param error: The error that occurred.
:type error: BaseException
:param kwargs: Additional keyword arguments.
response (LLMResult): The response which was generated before the error occurred.
Parameters
error (BaseException) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_llm_new_token(token: str, **kwargs: Any) → None[source]¶
Print out the token.
Parameters
token (str) –
kwargs (Any) –
Return type
None
on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶
Print out the prompts.
Parameters
serialized (Dict[str, Any]) –
prompts (List[str]) –
kwargs (Any) –
Return type
None
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever ends running.
Parameters
documents (Sequence[Document]) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever errors.
Parameters
error (BaseException) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when Retriever starts running.
Parameters
serialized (Dict[str, Any]) –
query (str) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
Any
on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on a retry event.
Parameters
retry_state (RetryCallState) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on arbitrary text.
Parameters
text (str) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_tool_end(output: Any, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when tool ends running.
Parameters
output (Any) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_tool_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when tool errors.
Parameters
error (BaseException) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inputs: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when tool starts running.
Parameters
serialized (Dict[str, Any]) –
input_str (str) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) –
inputs (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
Any
langchain.callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler¶
class langchain.callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler(*, answer_prefix_tokens: Optional[List[str]] = None, strip_tokens: bool = True, stream_prefix: bool = False)[source]¶
Callback handler that returns an async iterator.
Only the final output of the agent will be iterated.
Instantiate AsyncFinalIteratorCallbackHandler.
Parameters
answer_prefix_tokens (Optional[List[str]]) – Token sequence that prefixes the answer.
Default is [“Final”, “Answer”, “:”]
strip_tokens (bool) – Ignore white spaces and new lines when comparing
answer_prefix_tokens to last tokens? (to determine if answer has been
reached)
stream_prefix (bool) – Should answer prefix itself also be streamed?
Attributes
always_verbose
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
raise_error
run_inline
Methods
__init__(*[, answer_prefix_tokens, ...])
Instantiate AsyncFinalIteratorCallbackHandler.
aiter()
append_to_last_tokens(token)
check_if_answer_reached()
on_agent_action(action, *, run_id[, ...])
Run on agent action.
on_agent_finish(finish, *, run_id[, ...])
Run on agent end.
on_chain_end(outputs, *, run_id[, ...])
Run when chain ends running.
on_chain_error(error, *, run_id[, ...])
Run when chain errors.
on_chain_start(serialized, inputs, *, run_id)
Run when chain starts running.
on_chat_model_start(serialized, messages, *, ...)
Run when a chat model starts running.
on_llm_end(response, **kwargs)
Run when LLM ends running.
on_llm_error(error, **kwargs)
Run when LLM errors.
on_llm_new_token(token, **kwargs)
Run on new LLM token.
on_llm_start(serialized, prompts, **kwargs)
Run when LLM starts running.
on_retriever_end(documents, *, run_id[, ...])
Run on retriever end.
on_retriever_error(error, *, run_id[, ...])
Run on retriever error.
on_retriever_start(serialized, query, *, run_id)
Run on retriever start.
on_retry(retry_state, *, run_id[, parent_run_id])
Run on a retry event.
on_text(text, *, run_id[, parent_run_id, tags])
Run on arbitrary text.
on_tool_end(output, *, run_id[, ...])
Run when tool ends running.
on_tool_error(error, *, run_id[, ...])
Run when tool errors.
on_tool_start(serialized, input_str, *, run_id)
Run when tool starts running.
__init__(*, answer_prefix_tokens: Optional[List[str]] = None, strip_tokens: bool = True, stream_prefix: bool = False) → None[source]¶
Instantiate AsyncFinalIteratorCallbackHandler.
Parameters
answer_prefix_tokens (Optional[List[str]]) – Token sequence that prefixes the answer.
Default is [“Final”, “Answer”, “:”]
strip_tokens (bool) – Ignore white spaces and new lines when comparing
answer_prefix_tokens to last tokens? (to determine if answer has been
reached)
stream_prefix (bool) – Should answer prefix itself also be streamed?
Return type
None
async aiter() → AsyncIterator[str]¶
Return type
AsyncIterator[str]
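aiter() yields tokens that the callback side enqueues. A minimal stdlib-only sketch of that queue-backed pattern is below: the producer hooks put tokens on an asyncio.Queue, a sentinel marks the end, and the async iterator drains the queue. This is a simplified stand-in to show the mechanism, not the actual langchain implementation; TokenStream and demo are hypothetical names.

```python
# Minimal sketch of a queue-backed async token iterator -- a simplified
# stand-in, not the real AsyncFinalIteratorCallbackHandler internals.
import asyncio
from typing import AsyncIterator, List, Optional


class TokenStream:
    def __init__(self) -> None:
        self.queue: "asyncio.Queue[Optional[str]]" = asyncio.Queue()

    def on_llm_new_token(self, token: str) -> None:
        self.queue.put_nowait(token)   # called from the model callback side

    def on_llm_end(self) -> None:
        self.queue.put_nowait(None)    # sentinel: the stream is finished

    async def aiter(self) -> AsyncIterator[str]:
        while True:
            token = await self.queue.get()
            if token is None:
                return
            yield token


async def demo() -> List[str]:
    stream = TokenStream()
    for tok in ["The", " answer", " is", " 42"]:
        stream.on_llm_new_token(tok)
    stream.on_llm_end()
    return [tok async for tok in stream.aiter()]


print(asyncio.run(demo()))  # ['The', ' answer', ' is', ' 42']
```

The "final only" behavior layers on top of this: tokens are only enqueued once the answer prefix has been detected (see the helpers below).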
append_to_last_tokens(token: str) → None[source]¶
Parameters
token (str) –
Return type
None
check_if_answer_reached() → bool[source]¶
Return type
bool
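The two helpers above implement the prefix matching that decides when the final answer has started streaming: keep only the most recent tokens, and compare them (optionally with surrounding whitespace stripped) against answer_prefix_tokens. A standalone sketch of that logic, written as free functions rather than the handler's actual methods and assuming the window holds exactly len(answer_prefix_tokens) tokens:

```python
# Standalone sketch of answer-prefix detection -- illustrates the logic
# of append_to_last_tokens / check_if_answer_reached, not the handler's
# actual implementation.
from typing import List

DEFAULT_PREFIX = ["Final", "Answer", ":"]


def append_to_last_tokens(last_tokens: List[str], token: str, keep: int) -> None:
    """Append a token, keeping only the most recent `keep` tokens."""
    last_tokens.append(token)
    if len(last_tokens) > keep:
        del last_tokens[0]


def check_if_answer_reached(last_tokens: List[str], prefix: List[str],
                            strip_tokens: bool = True) -> bool:
    """True once the last tokens match the answer prefix sequence."""
    if strip_tokens:
        return [t.strip() for t in last_tokens] == [t.strip() for t in prefix]
    return last_tokens == prefix


last: List[str] = []
answered = []
for tok in ["Thought", " Final", " Answer", " :", " 42"]:
    append_to_last_tokens(last, tok, keep=len(DEFAULT_PREFIX))
    answered.append(check_if_answer_reached(last, DEFAULT_PREFIX))
print(answered)  # [False, False, False, True, False]
```

With strip_tokens=True (the default), tokens like " Final" still match "Final", which is why the flag exists: streamed tokens often carry leading whitespace.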
async on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶
Run on agent action.
Parameters
action (AgentAction) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
kwargs (Any) –
Return type
None
async on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶
Run on agent end.
Parameters
finish (AgentFinish) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
kwargs (Any) –
Return type
None
async on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶
Run when chain ends running.
Parameters
outputs (Dict[str, Any]) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
kwargs (Any) –
Return type
None
async on_chain_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶
Run when chain errors.
Parameters
error (BaseException) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
kwargs (Any) –
Return type
None
async on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶
Run when chain starts running.
Parameters
serialized (Dict[str, Any]) –
inputs (Dict[str, Any]) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
None
async on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when a chat model starts running.
ATTENTION: This method is called for chat models. If you’re implementing a handler for a non-chat model, you should use on_llm_start instead.
Parameters
serialized (Dict[str, Any]) –
messages (List[List[BaseMessage]]) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
Any
async on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶
Run when LLM ends running.
Parameters
response (LLMResult) –
kwargs (Any) –
Return type
None
async on_llm_error(error: BaseException, **kwargs: Any) → None¶
Run when LLM errors.
Parameters
error (BaseException) – The error that occurred.
kwargs (Any) – Additional keyword arguments.
- response (LLMResult): The response which was generated before
the error occurred.
Return type
None
async on_llm_new_token(token: str, **kwargs: Any) → None[source]¶
Run on new LLM token. Only available when streaming is enabled.
Parameters
token (str) –
kwargs (Any) –
Return type
None
async on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶
Run when LLM starts running.
ATTENTION: This method is called for non-chat models (regular LLMs). If you’re implementing a handler for a chat model,
you should use on_chat_model_start instead.
Parameters
serialized (Dict[str, Any]) –
prompts (List[str]) –
kwargs (Any) –
Return type
None
async on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶
Run on retriever end.
Parameters
documents (Sequence[Document]) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
kwargs (Any) –
Return type
None
async on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶
Run on retriever error.
Parameters
error (BaseException) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
kwargs (Any) –
Return type
None
async on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶
Run on retriever start.
Parameters
serialized (Dict[str, Any]) –
query (str) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
None
async on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on a retry event.
Parameters
retry_state (RetryCallState) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
async on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶
Run on arbitrary text.
Parameters
text (str) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
kwargs (Any) –
Return type
None
async on_tool_end(output: Any, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶
Run when tool ends running.
Parameters
output (Any) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
kwargs (Any) –
Return type
None
async on_tool_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶
Run when tool errors.
Parameters
error (BaseException) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
kwargs (Any) –
Return type
None
async on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inputs: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶
Run when tool starts running.
Parameters
serialized (Dict[str, Any]) –
input_str (str) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
tags (Optional[List[str]]) –
metadata (Optional[Dict[str, Any]]) –
inputs (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
None
langchain_community.callbacks.tracers.wandb.WandbTracer¶
class langchain_community.callbacks.tracers.wandb.WandbTracer(run_args: Optional[WandbRunArgs] = None, **kwargs: Any)[source]¶
Callback Handler that logs to Weights and Biases.
This handler will log the model architecture and run traces to Weights and Biases.
This will ensure that all LangChain activity is logged to W&B.
Initializes the WandbTracer.
Parameters
run_args (Optional[WandbRunArgs]) – (dict, optional) Arguments to pass to wandb.init(). If not
provided, wandb.init() will be called with no arguments. Please
refer to the wandb.init for more details.
kwargs (Any) –
To use W&B to monitor all LangChain activity, add this tracer like any other
LangChain callback:
```
from wandb.integration.langchain import WandbTracer
tracer = WandbTracer()
chain = LLMChain(llm, callbacks=[tracer])
# …end of notebook / script:
tracer.finish()
```
Attributes
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
raise_error
run_inline
Methods
__init__([run_args])
Initializes the WandbTracer.
finish()
Waits for all asynchronous processes to finish and data to upload.
on_agent_action(action, *, run_id[, ...])
Run on agent action.
on_agent_finish(finish, *, run_id[, ...])
Run on agent end.
on_chain_end(outputs, *, run_id[, inputs])
End a trace for a chain run.
on_chain_error(error, *[, inputs])
Handle an error for a chain run.
on_chain_start(serialized, inputs, *, run_id)
Start a trace for a chain run.
on_chat_model_start(serialized, messages, *, ...)
Start a trace for an LLM run.
on_llm_end(response, *, run_id, **kwargs)
End a trace for an LLM run.
on_llm_error(error, *, run_id, **kwargs)
Handle an error for an LLM run.
on_llm_new_token(token, *[, chunk, ...])
Run on new LLM token.
on_llm_start(serialized, prompts, *, run_id)
Start a trace for an LLM run.
on_retriever_end(documents, *, run_id, **kwargs)
Run when Retriever ends running.
on_retriever_error(error, *, run_id, **kwargs)
Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id)
Run when Retriever starts running.
on_retry(retry_state, *, run_id, **kwargs)
Run on a retry event.
on_text(text, *, run_id[, parent_run_id])
Run on arbitrary text.
on_tool_end(output, *, run_id, **kwargs)
End a trace for a tool run.
on_tool_error(error, *, run_id, **kwargs)
Handle an error for a tool run.
on_tool_start(serialized, input_str, *, run_id)
Start a trace for a tool run.
__init__(run_args: Optional[WandbRunArgs] = None, **kwargs: Any) → None[source]¶
Initializes the WandbTracer.
Parameters
run_args (Optional[WandbRunArgs]) – (dict, optional) Arguments to pass to wandb.init(). If not
provided, wandb.init() will be called with no arguments. Please
refer to the wandb.init for more details.
kwargs (Any) –
Return type
None
To use W&B to monitor all LangChain activity, add this tracer like any other
LangChain callback:
```
from wandb.integration.langchain import WandbTracer
tracer = WandbTracer()
chain = LLMChain(llm, callbacks=[tracer])
# …end of notebook / script:
tracer.finish()
```
finish() → None[source]¶
Waits for all asynchronous processes to finish and data to upload.
Proxy for wandb.finish().
Return type
None
on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on agent action.
Parameters
action (AgentAction) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on agent end.
Parameters
finish (AgentFinish) –
run_id (UUID) –
parent_run_id (Optional[UUID]) –
kwargs (Any) –
Return type
Any
on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, inputs: Optional[Dict[str, Any]] = None, **kwargs: Any) → Run¶
End a trace for a chain run.
Parameters
outputs (Dict[str, Any]) –
run_id (UUID) –
inputs (Optional[Dict[str, Any]]) –
kwargs (Any) –
Return type
Run
on_chain_error(error: BaseException, *, inputs: Optional[Dict[str, Any]] = None, run_id: UUID, **kwargs: Any) → Run¶
Handle an error for a chain run.
Parameters
error (BaseException) –
inputs (Optional[Dict[str, Any]]) –
run_id (UUID) –
kwargs (Any) –
Return type
Run
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, run_type: Optional[str] = None, name: Optional[str] = None, **kwargs: Any) → Run¶
Start a trace for a chain run.
Parameters
serialized (Dict[str, Any]) –
inputs (Dict[str, Any]) –
run_id (UUID) –
tags (Optional[List[str]]) –
parent_run_id (Optional[UUID]) –
metadata (Optional[Dict[str, Any]]) –
run_type (Optional[str]) –
name (Optional[str]) –
kwargs (Any) –
Return type
Run
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, name: Optional[str] = None, **kwargs: Any) → Run¶
Start a trace for an LLM run.
Parameters
serialized (Dict[str, Any]) –
messages (List[List[BaseMessage]]) –
run_id (UUID) –
tags (Optional[List[str]]) –
parent_run_id (Optional[UUID]) –
metadata (Optional[Dict[str, Any]]) –
name (Optional[str]) –
kwargs (Any) –
Return type
Run
on_llm_end(response: LLMResult, *, run_id: UUID, **kwargs: Any) → Run¶
End a trace for an LLM run.
Parameters
response (LLMResult) –
run_id (UUID) –
kwargs (Any) –
Return type
Run
on_llm_error(error: BaseException, *, run_id: UUID, **kwargs: Any) → Run¶
Handle an error for an LLM run.
Parameters
error (BaseException) –
run_id (UUID) –
kwargs (Any) –
Return type
Run
on_llm_new_token(token: str, *, chunk: Optional[Union[GenerationChunk, ChatGenerationChunk]] = None, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Run¶
Run on new LLM token. Only available when streaming is enabled.
Parameters
token (str) –