How to Build Your First AI Agent with LangChain
Introduction
If you have been playing with LLMs like GPT-4 or Claude, you know they are smart—but they are also “trapped” in a text box, unable to act on the outside world. To unlock their full potential and allow them to browse the web, check the weather, or run code, you need to learn how to build a LangChain AI Agent.
In this tutorial, we will build a simple AI Agent using LangChain. By the end of this post, you will have a Python script where an AI autonomously “reasons” about a problem and picks the right tools to solve it.
What is a LangChain Agent?
Standard LLM interactions are like a call-and-response:
- You: “What is 5 * 5?”
- AI: “25.”
An Agent is different. An Agent uses the LLM as a “reasoning engine” to decide what to do next.
- It looks at your question.
- It looks at a list of Tools you gave it (like a Calculator or Google Search).
- It decides which tool to use.
- It runs the tool and observes the output.
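The four steps above can be sketched in plain Python with no LangChain at all. This is a toy illustration, not real LangChain code: the fake_llm_decide function is a hypothetical stand-in for the LLM's reasoning step, and the calculator tool is invented for the demo.

```python
# A minimal sketch of the agent loop, with a hard-coded "decision"
# standing in for the LLM's reasoning.
def calculator(expression: str) -> str:
    # Toy tool: evaluate a simple arithmetic expression safely-ish.
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def fake_llm_decide(question: str) -> tuple[str, str]:
    # A real agent asks the LLM which tool to call and with what
    # input; here we fake that decision for the demo question.
    return ("calculator", "5 * 5")

def run_agent(question: str) -> str:
    tool_name, tool_input = fake_llm_decide(question)  # steps 1-3: reason, pick a tool
    observation = TOOLS[tool_name](tool_input)         # step 4: run it, observe output
    return f"The answer is {observation}."

print(run_agent("What is 5 * 5?"))  # The answer is 25.
```

A real agent repeats this loop until the LLM decides it has enough observations to answer.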
Let’s build one.
Prerequisites
You will need Python installed and an OpenAI API key.
Bash
pip install langchain langchain-openai
Step 1: Define Your Tools
Tools are the “hands” of your agent. In LangChain, we can easily create custom tools using the @tool decorator.
Let’s create a simple tool that calculates the length of a word (a task LLMs sometimes struggle with due to tokenization).
Python
from langchain.tools import tool

@tool
def get_word_length(word: str) -> int:
    """Returns the length of a word."""
    return len(word)

@tool
def get_weather(city: str) -> str:
    """Returns the current weather in a given city."""
    # In a real app, you would call a weather API here.
    # For this demo, we mock the response.
    return f"The weather in {city} is sunny and 25°C."

tools = [get_word_length, get_weather]
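Conceptually, the @tool decorator records the function's name and docstring so the LLM can see what the tool does and when to use it. Here is a rough plain-Python analogue of what gets captured (this is an illustration of the idea, not LangChain's actual internals):

```python
def get_word_length(word: str) -> int:
    """Returns the length of a word."""
    return len(word)

# Roughly what a tool decorator records for the model: the callable
# plus a name and description taken from the function itself.
tool_spec = {
    "name": get_word_length.__name__,
    "description": get_word_length.__doc__,
    "func": get_word_length,
}

print(tool_spec["name"])  # get_word_length
print(tool_spec["func"]("Supercalifragilisticexpialidocious"))  # 34
```

This is also why clear docstrings matter: the description is the only hint the LLM gets about when a tool applies.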
Step 2: Initialize the “Brain” (LLM)
We need an LLM to control these tools. We will use OpenAI’s GPT-4o (or GPT-3.5-turbo) because it is excellent at following tool instructions.
Python
import os
from langchain_openai import ChatOpenAI
# Make sure you set your OPENAI_API_KEY in your environment variables
# os.environ["OPENAI_API_KEY"] = "sk-..."
llm = ChatOpenAI(model="gpt-4o", temperature=0)
Note: Setting temperature=0 is important for agents. We want the model to be deterministic and precise when choosing tools, not creative.
Step 3: Create the Agent
In modern LangChain (v0.2+), the easiest way to create an agent is to use a pre-built constructor, which binds your tools to the model for you.
Python
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate

# 1. Create a prompt template
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant. You have access to tools. Use them when needed."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

# 2. Construct the agent
agent = create_tool_calling_agent(llm, tools, prompt)

# 3. Create the executor (the runtime that actually runs the agent)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
Step 4: Run Your Agent
Now for the magic. We will ask a question that requires the agent to use a tool.
Python
response = agent_executor.invoke({
    "input": "How many letters are in the word 'Supercalifragilisticexpialidocious' and is it sunny in London?"
})

print(response["output"])
What happens when you run this?
Because we set verbose=True, you will see the Agent “thinking” in your terminal:
- Thought: The user wants two things: a letter count and the weather.
- Action 1: Calls get_word_length('Supercalifragilisticexpialidocious') → returns 34.
- Action 2: Calls get_weather('London') → returns "The weather in London is sunny and 25°C."
- Final Answer: "The word has 34 letters, and yes, it is sunny in London."
Conclusion
You just built your first Autonomous AI Agent.
While this example is simple, you can replace the dummy weather function with real APIs—like Stripe for payments, Slack for messaging, or a SQL Database for querying data.
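As a concrete example of swapping in a real API, here is a hedged sketch of a get_weather replacement that calls the free Open-Meteo forecast endpoint (no API key needed). The hard-coded CITY_COORDS dict is a stand-in for a proper geocoding step, so this is a sketch under that assumption rather than production code:

```python
import json
import urllib.parse
import urllib.request

# Stand-in for a real geocoding step: map a city name to coordinates.
CITY_COORDS = {"London": (51.5072, -0.1276)}

def build_weather_url(city: str) -> str:
    # Build the Open-Meteo request URL for the city's coordinates.
    lat, lon = CITY_COORDS[city]
    params = urllib.parse.urlencode({
        "latitude": lat,
        "longitude": lon,
        "current_weather": "true",
    })
    return f"https://api.open-meteo.com/v1/forecast?{params}"

def get_weather(city: str) -> str:
    # Fetch and summarize the current temperature for the agent.
    with urllib.request.urlopen(build_weather_url(city)) as resp:
        data = json.load(resp)
    temp = data["current_weather"]["temperature"]
    return f"The current temperature in {city} is {temp}°C."
```

Decorate it with @tool exactly as before and the agent can use it without any other changes, since the function's name and docstring drive tool selection.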
The future of AI isn’t just chatbots; it’s agents that do things.