This tutorial will guide you through the basics of building chat applications using LangChain, focusing on setting up your environment, selecting language models (like OpenAI and Mistral AI), querying them, and using prompt templates.
Before we begin, we need to set up our environment by installing the necessary libraries and configuring API keys.
First, import the getpass and os modules. We’ll use getpass to securely obtain API keys and os to set them as environment variables.
import getpass
import os
For tracing and connecting to the LangSmith platform, you'll need to set the following environment variables:
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_ENDPOINT"]="https://api.smith.langchain.com"
os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()
os.environ["LANGCHAIN_PROJECT"] = "LangChain tutorial"
When prompted by getpass(), enter your actual LangSmith API key.
Selecting Language Models
LangChain provides integrations with various language model providers. Let’s look at how to set up OpenAI and Mistral AI.
To use OpenAI models, you need to install the langchain-openai library. You may also need to pin the httpx library below version 0.28 for compatibility.
pip install -qU langchain-openai
pip install --upgrade "httpx<0.28"
Next, set your OpenAI API key as an environment variable:
os.environ["OPENAI_API_KEY"] = getpass.getpass()
Then, you can import the ChatOpenAI class and instantiate a model, for example, gpt-4o-mini:
from langchain_openai import ChatOpenAI
model = ChatOpenAI(model="gpt-4o-mini")
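As a quick sanity check, invoke() also accepts a plain string, which LangChain wraps as a single human message (the exact reply text will vary by model):
reply = model.invoke("Reply with the single word: pong")
print(reply.content)  # e.g. "pong" -- actual wording varies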
Similarly, to use Mistral AI models, install the langchain-mistralai library and apply the same httpx pin.
pip install -qU langchain-mistralai
pip install --upgrade "httpx<0.28"
Set your Mistral API key as an environment variable:
os.environ["MISTRAL_API_KEY"] = getpass.getpass()
Then, import ChatMistralAI and instantiate a model like mistral-large-latest:
from langchain_mistralai import ChatMistralAI
model = ChatMistralAI(model="mistral-large-latest")
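Both ChatOpenAI and ChatMistralAI expose the same chat model interface, so the rest of this tutorial runs unchanged against either instance. As a sketch, recent LangChain versions also let you pick the provider at runtime with init_chat_model (check that your installed version supports it):
from langchain.chat_models import init_chat_model

# Choose a provider at runtime; both calls return the same interface.
model = init_chat_model("gpt-4o-mini", model_provider="openai")
# model = init_chat_model("mistral-large-latest", model_provider="mistralai")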
Once you have a language model instance, you can query it with messages. LangChain uses lists of BaseMessage objects, such as HumanMessage and SystemMessage, to structure conversations.
Here’s an example of sending a list of messages to a language model and getting back a complete (non-streaming) response:
from langchain_core.messages import HumanMessage, SystemMessage
messages = [
SystemMessage("""You are a travel agent. You operate in North America and Europe only.
Help in best possible way, but say sorry and no to anyone who asks for help outside the """),
HumanMessage("Plan a 3 day trip to Chicago"),
]
response = model.invoke(messages)
print(response.content)
The output for this query is a travel itinerary for Chicago, for instance:
Sure! Here’s a suggested 3-day itinerary for your trip to Chicago:
... (rest of the Chicago itinerary) ...
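The returned object is an AIMessage, which carries more than the text. The exact metadata keys vary by provider, so treat the fields below as illustrative:
# Besides .content, the AIMessage exposes provider metadata.
print(response.response_metadata)  # e.g. model name, finish reason
print(response.usage_metadata)     # e.g. input/output token counts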
LangChain also supports streaming responses, where you receive the model’s output token by token. This can be useful for improving the perceived responsiveness of your application.
Reusing the same messages list from above:
for token in model.stream(messages):
    print(token.content, end="")
This will print the Chicago itinerary as it’s being generated.
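If you also need the full text after streaming, the chunks can be added back together, since LangChain's message chunks support concatenation. A minimal sketch:
full = None
for chunk in model.stream(messages):
    print(chunk.content, end="")
    # AIMessageChunk supports "+", so we can rebuild the whole message
    full = chunk if full is None else full + chunk
print()
print(f"Streamed {len(full.content)} characters in total")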
LangChain’s prompt templates allow you to create reusable and dynamic prompts by defining placeholders.
First, import ChatPromptTemplate:
from langchain_core.prompts import ChatPromptTemplate
You can define separate templates for the system and user messages:
system_template = """You are a travel agent. You operate in {region} only.
Help in best possible way, but say sorry and no to anyone who asks for help outside the """
user_template = "Plan a {days} day trip to {city}"
Then, create a ChatPromptTemplate from these message templates:
prompt_template = ChatPromptTemplate.from_messages(
[("system", system_template), ("user", user_template)]
)
You can then invoke the template with specific values for the placeholders to create a PromptValue:
prompt = prompt_template.invoke({"region": "Europe", "days": 5, "city": "Berlin"})
print(prompt)
This will output the structured messages:
messages=[SystemMessage(content='You are a travel agent. You operate in Europe only.\n Help in best possible way, but say sorry and no to anyone who asks for help outside the area you operate.', additional_kwargs={}, response_metadata={}), HumanMessage(content='Plan a 5 day trip to Berlin', additional_kwargs={}, response_metadata={})]
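A PromptValue can also be converted explicitly, which is handy for debugging or for APIs that expect raw messages or a single string:
print(prompt.to_messages())  # the underlying list of SystemMessage/HumanMessage
print(prompt.to_string())    # the same content flattened into one string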
Finally, you can pass this PromptValue to your language model’s invoke method:
response = model.invoke(prompt)
print(response.content)
Since Berlin is in Europe, which falls within the configured {region}, the model responds with an itinerary:
I'd be delighted to help you plan a 5-day trip to Berlin! Here's a suggested itinerary that covers ... (rest of the Berlin itinerary) ...
Conversely, if the prompt asked for a city outside the configured region, such as Tokyo with region set to Europe, the model would decline with something like:
I apologize, but I can only assist with travel within Europe.
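Because prompt templates and chat models are both Runnables, you can also pipe them into a single chain and invoke it with the template variables directly. A minimal sketch:
# The "|" operator composes Runnables: the template's output (a
# PromptValue) flows straight into the model.
chain = prompt_template | model
response = chain.invoke({"region": "Europe", "days": 5, "city": "Berlin"})
print(response.content)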
This tutorial provides a foundational understanding of using chat models in LangChain. You can further explore more advanced features like output parsers, memory, and chains to build more sophisticated applications.