Fake
FakeChatModel
Bases: BaseChatModel
echo_response = False (class-attribute, instance-attribute)
With echo_response=True, the model will echo the prompt back as a single message.
fixed_responses = Field(default_factory=list) (class-attribute, instance-attribute)
max_context = 8000 (class-attribute, instance-attribute)
Mimics the maximum context length that real models enforce.
model = Field(default='fake-chat-model') (class-attribute, instance-attribute)
private_response_index = 0 (class-attribute, instance-attribute)
stream_chunk_size = 3 (class-attribute, instance-attribute)
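A stream_chunk_size of 3 suggests the fake model yields its response in small fixed-size slices rather than all at once. A minimal sketch of that chunking (the chunk_text helper is illustrative, not part of this API):

```python
def chunk_text(text: str, chunk_size: int = 3):
    # Yield the response in fixed-size slices, mimicking token streaming.
    for start in range(0, len(text), chunk_size):
        yield text[start:start + chunk_size]

chunks = list(chunk_text("Hello, world!", 3))
# Each chunk is at most 3 characters; joining them recovers the full text.
```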
streaming (instance-attribute)
__get_next_message(prompt_messages)
Serve fixed responses first; once they are exhausted, switch to echo mode if echo_response is set.
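The selection order described above can be sketched as follows. This is a hedged illustration, not the library's implementation: the class and method names here are stand-ins, and raising FakeChatOutOfResponsesError when both sources are exhausted is an assumption inferred from that error's existence.

```python
from dataclasses import dataclass, field

class FakeChatOutOfResponsesError(Exception):
    """Raised (in this sketch) when no fixed responses remain and echo mode is off."""

@dataclass
class FakeChat:
    # Attribute names mirror the documented ones; the class itself is hypothetical.
    echo_response: bool = False
    fixed_responses: list = field(default_factory=list)
    _response_index: int = 0

    def _get_next_message(self, prompt_messages):
        # First, serve the fixed responses in order.
        if self._response_index < len(self.fixed_responses):
            message = self.fixed_responses[self._response_index]
            self._response_index += 1
            return message
        # Then fall back to echoing the prompt as a single message, if enabled.
        if self.echo_response:
            return "\n".join(prompt_messages)
        raise FakeChatOutOfResponsesError("fixed responses exhausted")

chat = FakeChat(echo_response=True, fixed_responses=["hi"])
assert chat._get_next_message(["hello"]) == "hi"      # fixed response first
assert chat._get_next_message(["hello"]) == "hello"   # then echo mode
```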
bind_tools(tools, **kwargs)
get_num_tokens_from_messages(messages, tools=None)
Get the number of tokens in a list of messages.
Note (2024-10-28): the base implementation does not count tool calls at all.
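The note above means tool-call payloads contribute nothing to the count. A naive sketch of such counting (the count_tokens name and the rough 4-characters-per-token heuristic are illustrative assumptions, not the library's method):

```python
def count_tokens(messages, tools=None):
    # Only message content is counted; tool calls are ignored entirely,
    # matching the caveat that the base implementation skips them.
    return sum(len(m.get("content", "")) // 4 for m in messages)

total = count_tokens([{"content": "x" * 40}, {"content": "y" * 8}])
# → 12 (10 tokens + 2 tokens under the 4-chars-per-token heuristic)
```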
FakeChatOutOfResponsesError
Bases: BackendError
FakeEmbeddings
Bases: Embeddings
__init__()
embed_documents(texts)
embed_query(text)
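A fake embedder typically just needs to be deterministic and correctly shaped. A sketch of how embed_documents and embed_query might behave (hashing text into a fixed-size vector is an assumption for illustration, not the documented implementation):

```python
import hashlib

def fake_embed(text: str, dim: int = 8) -> list[float]:
    # Hash the text so the same input always yields the same vector.
    digest = hashlib.sha256(text.encode()).digest()
    # Map bytes into [0, 1) to look like a normalized embedding.
    return [digest[i] / 256 for i in range(dim)]

def embed_documents(texts):
    return [fake_embed(t) for t in texts]

def embed_query(text):
    return fake_embed(text)

assert embed_query("hello") == embed_query("hello")  # deterministic
assert len(embed_documents(["a", "b"])) == 2
```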
MaxContextExceededError
Bases: BackendError
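Given the max_context = 8000 attribute above, this error presumably signals a prompt that exceeds the mimicked context window. A sketch of that gate (the check_context helper is hypothetical, and MaxContextExceededError subclasses Exception here for illustration; the real base is BackendError):

```python
MAX_CONTEXT = 8000  # mirrors the documented max_context default

class MaxContextExceededError(Exception):
    pass

def check_context(token_count: int, max_context: int = MAX_CONTEXT):
    # Reject requests longer than the mimicked context window.
    if token_count > max_context:
        raise MaxContextExceededError(
            f"{token_count} tokens exceeds max context of {max_context}"
        )

check_context(100)  # within the window: no error
try:
    check_context(9001)
except MaxContextExceededError:
    pass  # over the window: rejected
```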