# Get Started
With ChatLab, you can quickly augment Large Language Models with computational powers:
- 🐍 Write functions in Python, use any package
- 📗 Run in Jupyter, Colab, Kaggle, and more
- 🤖 Chat with your agents in the notebook
## Installation

```bash
pip install chatlab
```
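If you're working in a notebook (Jupyter, Colab, Kaggle), you can instead install into the active kernel with the standard `%pip` magic:

```python
# IPython magic: installs into the kernel the notebook is running on
%pip install chatlab
```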
## Configuration

Set your OpenAI API key as an environment variable:

```bash
export OPENAI_API_KEY=<your key>
```
You can find your API key on your OpenAI account page. Once you have your key, set it as the `OPENAI_API_KEY` environment variable. There are many ways to set `OPENAI_API_KEY`, both securely and insecurely. Learn more methods and avoid common pitfalls via Setting API Keys.
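For example, if you'd rather not put the key in your shell profile, a minimal sketch using only the Python standard library is to prompt for it at runtime:

```python
import os
from getpass import getpass

# Prompt for the key so it never appears in the notebook's source or history
os.environ["OPENAI_API_KEY"] = getpass("OpenAI API key: ")
```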
## First Chat ⚽️
Let's play a game of soccer. We'll write a function to flip a coin to determine who gets the first move. The assistant gets to be the referee.
```python
import random

from chatlab import Chat, system, user


def flip_a_coin():
    '''Returns heads or tails'''
    return random.choice(['heads', 'tails'])


chat = Chat(
    system("You are an experienced official soccer referee"),
    system("Form responses in Markdown and use emojis."),
    system("The home team captain steps up."),
)
chat.register(flip_a_coin)

await chat("**Kai**: We call tails.")
```
> ƒ Ran `flip_a_coin`
>
> **Referee**: It's tails! The first move goes to the home team. Good luck to both teams! Let's begin the game! ⚽️👍🏼
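Because the `Chat` keeps the full message history, you can keep playing by awaiting the chat again. A quick sketch (the referee's reply will vary from run to run):

```python
# Follow-up turn in the same conversation; prior messages are retained
await chat("**Kai**: Thanks, ref! We'll defend the south goal first.")
```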
## Conversation Roles

In the example above, we used the `system` function to create `Message`s for the assistant to interpret. We also implicitly created `user` `Message`s. These are the two most common roles you will use directly in a conversation. ChatLab captures not just these messages but also the assistant's responses and its function calls.

To understand what transpired above, let's look at each `Message` in `chat.messages`:

```python
chat.messages
```
```python
[
    {
        'role': 'system',
        'content': 'You are an experienced official soccer referee'
    },
    {
        'role': 'system',
        'content': 'Form responses in Markdown and use emojis.'
    },
    {
        'role': 'system',
        'content': 'The home team captain steps up.'
    },
    {
        'role': 'user',
        'content': '**Kai**: We call tails.'
    },
    {
        'role': 'assistant',
        'content': None,
        'function_call': {
            'name': 'flip_a_coin',
            'arguments': '{}'
        }
    },
    {
        'role': 'function',
        'content': 'tails',
        'name': 'flip_a_coin'
    },
    {
        'role': 'assistant',
        'content': "**Referee**: It's tails! The first move goes to the home team. Good luck to both teams! Let's begin the game! ⚽️👍🏼",
        'function_call': None
    }
]
```
As you can see, there are four roles in a chat. Let's break those down.
### user
The user is you, the human, the person, the player, etc.
### assistant
The assistant is the Artificial Intelligence (the AI). People colloquially call it "the model" as a shortening of Large Language Model (LLM). The assistant creates messages for display to the user as well as requests to call functions.
### function

The result of a function call is sent back by ChatLab with the role `function`. The `content` is the return value of the function.
### system

The system role sets the scene and steers the conversation. Think of it as a background narrator that informs the AI of the context of the conversation.

The `system` role is controlled by you. Use it to:
- Set the tone of the assistant
- Inform the assistant of conditions
The assistant tends to assume it is part of the overall system and is responsible for communicating with the user.
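Each role has a corresponding message helper. Assuming the `system` and `user` helpers mirror the `assistant` helper shown below, they simply build the dictionaries you saw in `chat.messages`:

```python
from chatlab import system, user

system("You are an experienced official soccer referee")
# {'role': 'system', 'content': 'You are an experienced official soccer referee'}

user("**Kai**: We call tails.")
# {'role': 'user', 'content': '**Kai**: We call tails.'}
```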
## Registering Functions

Any function with typed arguments can be registered quickly in a conversation. Registering a function allows the assistant to call it during the conversation.

```python
chat.register(flip_a_coin)
```
```json
{
    "name": "flip_a_coin",
    "description": "Returns heads or tails",
    "parameters": {
        "type": "object",
        "properties": {},
        "required": []
    }
}
```
Under the hood, ChatLab inspects your function and generates a JSON Schema for it. This schema is used to validate the arguments the assistant sends to your function.
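A function with typed parameters yields a richer schema. Here's a sketch using a hypothetical `roll_die` function (not part of the soccer example); the commented schema is roughly what inspecting the type annotations would produce:

```python
import random


def roll_die(sides: int) -> int:
    '''Rolls a fair die with the given number of sides'''
    return random.randint(1, sides)


chat.register(roll_die)
# Roughly the generated schema:
# {
#   "name": "roll_die",
#   "description": "Rolls a fair die with the given number of sides",
#   "parameters": {
#     "type": "object",
#     "properties": {"sides": {"type": "integer"}},
#     "required": ["sides"]
#   }
# }
```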
## Submitting Messages

Every time you run `submit`, ChatLab sends the conversation to the assistant and returns the response. The response is a `Message` with the `role` of `assistant`. Assistant messages can have either `content` or a `function_call`. The `content` is the text the assistant wants to send to the user. When `function_call` is set, the assistant is requesting to run a function.
```python
from chatlab.messaging import assistant

assistant("Let's play ball!")
```

```json
{
    "role": "assistant",
    "content": "Let's play ball!"
}
```
```python
from chatlab.messaging import assistant_function_call

assistant_function_call("flip_a_coin", arguments="{}")
```

```json
{
    "role": "assistant",
    "content": null,
    "function_call": {
        "name": "flip_a_coin",
        "arguments": "{}"
    }
}
```
The `content` is `null` because the assistant has decided to call a function. The `arguments` are empty because `flip_a_coin` doesn't take any arguments.
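For a function that does take parameters, `arguments` carries a JSON-encoded string matching the function's schema. A sketch reusing the hypothetical `roll_die` from above:

```python
from chatlab.messaging import assistant_function_call

# arguments is a JSON string, not a Python dict
assistant_function_call("roll_die", arguments='{"sides": 20}')
# {'role': 'assistant', 'content': None,
#  'function_call': {'name': 'roll_die', 'arguments': '{"sides": 20}'}}
```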
## Calling Functions

When the assistant calls a function, ChatLab sends back a `Message` with the role `function`. The `content` is the return value of the function.
```python
from chatlab.messaging import function_result

function_result(content="tails", name="flip_a_coin")
```

```json
{
    "role": "function",
    "content": "tails",
    "name": "flip_a_coin"
}
```
While you wouldn't normally create this message yourself, `function_result` is useful for testing how the LLM responds to your functions' output, saving you from running the entire conversation or a potentially expensive function. In fact, you don't even have to implement the function to test the assistant's response.
```python
from chatlab import Chat
from chatlab.messaging import (
    assistant,
    assistant_function_call,
    function_result,
    system,
)

seeded_chat = Chat(
    system("What's the tide like in Santa Cruz right now?"),
    assistant_function_call("tides", arguments='{"station": "Santa Cruz"}'),
    function_result(name="tides", content="The station is not a valid station."),
)

await seeded_chat.submit()
```
> I apologize, but I am unable to provide real-time information on the tide in Santa Cruz. It is recommended to check official tide websites, local tide charts, or consult with local authorities for the most accurate and up-to-date information.
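This makes seeded chats handy in automated tests. A minimal sketch, assuming the patterns above; since LLM output is nondeterministic, assert on message structure rather than exact wording:

```python
# After await seeded_chat.submit(), inspect the recorded messages
last = seeded_chat.messages[-1]
assert last['role'] == 'assistant'
assert last['content'] is not None  # the model replied with text, not a function call
```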