Util

MESSAGE_ROLE_TO_CLASS_MAP = {MessageRoles.SYSTEM: SystemMessage, MessageRoles.USER: HumanMessage, MessageRoles.ASSISTANT: AIMessage, MessageRoles.TOOL_CALL: AIMessage, MessageRoles.TOOL_RESPONSE: ToolMessage} module-attribute

logger = logging.getLogger(__name__) module-attribute

MessageSchemaPlaceholder

Bases: BaseMessagePromptTemplate

Prompt template that converts MessageSchema(s) to langchain message(s).

input_variables property

Input variables for this prompt template.


Returns a list of input variable names.

optional = False class-attribute instance-attribute

variable_name instance-attribute

Name of variable to use as message(s).

__init__(variable_name, *, optional=False, **kwargs)

format_messages(**kwargs)

Format messages from kwargs.


**kwargs: Keyword arguments to use for formatting.

Returns a list of BaseMessage objects.

get_lc_namespace() classmethod

Get the namespace of the langchain object.

pretty_repr(html=False)

RunnableFlatten

Bases: Runnable[dict[str, Any], dict[str, Any]]

Flatten one level of input dictionaries.

Helpful for recombining inputs after parallel processing of separate parts.

For example

RunnableFlatten({"a"}).invoke({"a": {"a_1": 1, "a_2": 2}, "d": [1, 2, 3]}) == {"a_1": 1, "a_2": 2, "d": [1, 2, 3]}

args_to_flatten = set(args_to_flatten) instance-attribute

__init__(args_to_flatten)

invoke(input, config=None, **kwargs)
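The flattening transform itself is simple. A minimal self-contained sketch of the behavior described above, using a plain function in place of the Runnable machinery:

```python
from typing import Any


def flatten_one_level(data: dict[str, Any], args_to_flatten: set[str]) -> dict[str, Any]:
    """Sketch of RunnableFlatten's transform: hoist the listed nested dicts up one level."""
    out: dict[str, Any] = {}
    for key, value in data.items():
        if key in args_to_flatten:
            # Merge the nested dict's items into the top level, dropping the outer key.
            out.update(value)
        else:
            out[key] = value
    return out


# Mirrors the example above:
flatten_one_level({"a": {"a_1": 1, "a_2": 2}, "d": [1, 2, 3]}, {"a"})
# == {"a_1": 1, "a_2": 2, "d": [1, 2, 3]}
```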

RunnableRemove

Bases: Runnable[dict[str, Any], dict[str, Any]]

Remove keys from input dictionary.

Helpful for removing keys that are no longer needed after e.g. RunnablePassthrough.assign(...)

For example

RunnableRemove({"b"}).invoke({"a": 1, "b": 2, "c": 3}) == {"a": 1, "c": 3}

args_to_remove = set(args_to_remove) instance-attribute

__init__(args_to_remove)

invoke(input, config=None, **kwargs)
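The removal transform can likewise be sketched as a plain function, with the Runnable wrapper omitted:

```python
from typing import Any


def remove_keys(data: dict[str, Any], args_to_remove: set[str]) -> dict[str, Any]:
    """Sketch of RunnableRemove's transform: drop the listed keys from the input dict."""
    return {key: value for key, value in data.items() if key not in args_to_remove}


# Mirrors the example above:
remove_keys({"a": 1, "b": 2, "c": 3}, {"b"})
# == {"a": 1, "c": 3}
```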

generate_lc_ai_tool_call_message(call_id, tool_name, tool_args)

Generate an AIMessage that represents a tool call.

This can be used to help provide responses to the FakeLLM class in tests.

Note: This function currently supports generating only a single tool call, but it could be extended to support multiple tool calls.


call_id: ID of the tool call (or list of IDs if multiple)
tool_name: Name of the tool being called (or tools if multiple)
tool_args: Arguments to the tool (or tools if multiple)

Returns an AIMessage representing the tool call.
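The essential structure such a helper assembles can be sketched with plain dicts, without importing langchain-core. The field names here (`name`, `args`, `id`) follow langchain-core's tool-call dict shape, which is an assumption rather than something this module documents:

```python
def build_tool_call_payload(call_id: str, tool_name: str, tool_args: dict) -> dict:
    """Sketch: the tool-call structure an AIMessage would carry (plain dicts, no langchain import)."""
    return {
        # Tool-call messages typically carry empty text content.
        "content": "",
        # A single tool call; the documented helper only supports one at a time.
        "tool_calls": [
            {"name": tool_name, "args": tool_args, "id": call_id},
        ],
    }


build_tool_call_payload("call-123", "search", {"query": "weather"})
```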

generate_lc_tool_response_message(call_id, tool_name, tool_response)

Generate a ToolMessage that represents a tool response.

This can be used to help provide responses to the FakeLLM class in tests.


call_id: ID of the tool call
tool_name: Name of the tool being called
tool_response: Response from the tool

Returns a ToolMessage representing the tool response.

generate_tool_call_message(call_id, tool_name, tool_args)

Generate a MessageSchema that represents a tool call.

PARAMETER DESCRIPTION
call_id

ID(s) of the tool call (e.g. UUID)

TYPE: str | list[str]

tool_name

Name(s) of the called tool(s)

TYPE: str | list[str]

tool_args

Arguments to the tool (or list of arguments if multiple)

TYPE: dict[str, str] | list[dict[str, str]]

generate_tool_response_message(call_id, tool_name, tool_response)

Generate a MessageSchema that represents a tool response.

PARAMETER DESCRIPTION
call_id

ID of the tool call

TYPE: str

tool_name

Name of the tool being called

TYPE: str

tool_response

Response from the tool

TYPE: str | dict

get_token_cost_for_model(provider, model_name, tokens, is_completion=False)

Convert token usage to cost in $.
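The conversion is simple arithmetic once a per-token price is known. A minimal sketch with a made-up price table; the real function presumably looks up prices per provider and model, and every number below is a placeholder, not an actual rate:

```python
# Hypothetical per-1K-token prices in USD; real values vary by provider and model.
PRICE_PER_1K = {
    ("openai", "gpt-4"): {"prompt": 0.03, "completion": 0.06},
}


def token_cost(provider: str, model_name: str, tokens: int, is_completion: bool = False) -> float:
    """Sketch of get_token_cost_for_model: tokens times the per-token price."""
    prices = PRICE_PER_1K[(provider, model_name)]
    rate = prices["completion"] if is_completion else prices["prompt"]
    return tokens / 1000 * rate


token_cost("openai", "gpt-4", 2000)                     # 2000 prompt tokens -> 0.06
token_cost("openai", "gpt-4", 500, is_completion=True)  # 500 completion tokens -> 0.03
```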

join_lc_messages(messages, add_separators=True)

Naively combine lc messages into a single string.
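A naive join like this can be sketched with plain (role, content) tuples standing in for langchain message objects; the `role: content` separator format is an assumption, not the module's documented output:

```python
def join_messages(messages: list[tuple[str, str]], add_separators: bool = True) -> str:
    """Sketch of join_lc_messages: concatenate message contents into one string."""
    if add_separators:
        # Prefix each content with its role so turns stay distinguishable.
        parts = [f"{role}: {content}" for role, content in messages]
    else:
        parts = [content for _, content in messages]
    return "\n".join(parts)


join_messages([("system", "Be brief."), ("user", "Hi!")])
# "system: Be brief.\nuser: Hi!"
```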

lc_message_from_chunk(chunk)

message_from_lc(message)

Convert a langchain message to a MessageSchema.

message_to_lc(message)

Convert a MessageSchema to a langchain message.

tiktoken_len(text=None, messages=None, model_name='gpt-4')

Get token len using tiktoken.

tokens_to_cost(model_name, prompt_tokens, completion_tokens)

Convert token usage to cost in $.

tool_call_message_to_generation_chunks(message, chunk_size)

Convert a lc message with tool_calls into a list of chunks.

This mimics how a real LLM would stream the tool calls.
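Streaming a tool call amounts to slicing its serialized arguments into fixed-size pieces. A sketch of just the slicing step, operating on the JSON-encoded args string; the real function emits langchain chunk objects rather than bare strings:

```python
import json


def chunk_tool_args(tool_args: dict, chunk_size: int) -> list[str]:
    """Sketch: split a tool call's JSON-encoded arguments into chunk_size pieces,
    mimicking how an LLM streams tool-call arguments a few characters at a time."""
    serialized = json.dumps(tool_args)
    return [serialized[i : i + chunk_size] for i in range(0, len(serialized), chunk_size)]


chunk_tool_args({"q": "hi"}, 4)
# ['{"q"', ': "h', 'i"}']
```

Joining the chunks back together recovers the original serialized arguments, which is exactly the invariant a streaming consumer relies on.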