How I built a simple chat agent in TypeScript with LangGraph and Anthropic
Today, we will write the code for a simple chatbot that we can reuse in future projects. We will be using TypeScript, LangGraph, and LangChain. If you are a beginner, there are a few tools you will need to install before we start. Briefly:
The Prerequisites
- Node.js and npm, which we will use to install dependencies and run the project.
- An Anthropic API key. This will allow us to develop the agent for free using their evaluation plan.
We can start by creating a new folder for our project. Open the terminal and run the following commands. I'm naming the project Xander, just for fun.
mkdir agent-xander
cd agent-xander
npm init -y
git init
We will need our dependencies:
- @langchain/core and @langchain/langgraph contain the building blocks used to assemble an agent.
- @langchain/anthropic and @langchain/community give us the Anthropic model bindings and a few tools we may add to enhance the experience.
- dotenv helps load our .env variables so we avoid accidentally posting sensitive information.
You can install these dependencies by running the following npm command in your terminal:
npm install @langchain/core @langchain/langgraph @langchain/anthropic @langchain/community dotenv
Creating our Agent
Now that our dependencies are installed, we will create a new directory to hold some of the project's code and the file for our agent.
mkdir src && touch src/agent.ts
Copy the following code into src/agent.ts. This helper will let us create agents efficiently, which we can later add to our graph as nodes.
// ./src/agent.ts
import {ChatPromptTemplate, MessagesPlaceholder} from '@langchain/core/prompts';
import {Runnable} from '@langchain/core/runnables';
import {ChatAnthropic} from '@langchain/anthropic';

/**
 * Create an agent that can run a set of tools.
 */
export async function createAgent({
  llm,
  systemMessage,
}: {
  llm: ChatAnthropic;
  systemMessage: string;
}): Promise<Runnable> {
  let prompt = ChatPromptTemplate.fromMessages([
    [
      'system',
      'You are a helpful AI assistant, collaborating with other assistants.' +
        ' Use the provided tools to progress towards answering the question.' +
        " If you are unable to fully answer, that's OK, another assistant with different tools" +
        ' will help where you left off. Execute what you can to make progress.' +
        ' If you or any of the other assistants have the final answer or deliverable,' +
        ' prefix your response with FINAL ANSWER so the team knows to stop.' +
        '\n{system_message}',
    ],
    new MessagesPlaceholder('messages'),
  ]);
  prompt = await prompt.partial({
    system_message: systemMessage,
  });
  return prompt.pipe(llm);
}
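Conceptually, `prompt.partial` fills in some template variables now and leaves the rest (here, the `messages` placeholder) to be supplied at invocation time. Here is a toy sketch of the idea in plain TypeScript, not the LangChain API:

```typescript
// A toy stand-in for prompt templates (not the LangChain API): substitute
// any {variable} placeholders we already know, leaving the rest intact.
function renderTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (match, key) => vars[key] ?? match);
}

const template = 'You are a helpful AI assistant. {system_message}';
// "Partially" apply: bake in system_message up front, reuse the result later.
const partiallyApplied = renderTemplate(template, {
  system_message: 'Answer concisely.',
});
// partiallyApplied === 'You are a helpful AI assistant. Answer concisely.'
```

The real `ChatPromptTemplate` does the same thing with stronger typing and message-level structure.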
Creating our State
In the src directory, create a new file named state.ts. You can use the following command:
touch src/state.ts
Here, we will define the graph's state. For this example, we will be tracking messages and the recent sender, but it can be extended to track more or less information as needed.
//./src/state.ts
import {BaseMessage} from '@langchain/core/messages';
import {Annotation} from '@langchain/langgraph';

// This defines the object that is passed between each node
// in the graph. We will create different nodes for each agent and tool.
export const AgentState = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    reducer: (x, y) => x.concat(y),
    default: () => [],
  }),
  sender: Annotation<string>({
    reducer: (x, y) => y ?? x ?? 'user',
    default: () => 'user',
  }),
});
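To see what these reducers do, here is a minimal sketch in plain TypeScript, independent of LangGraph, of how each channel merges updates as nodes run: messages are concatenated, while sender keeps the latest non-null value.

```typescript
// Minimal stand-ins for the two reducers defined above.
const concatReducer = <T>(current: T[], update: T[]): T[] => current.concat(update);
const lastWriterWins = (current?: string, update?: string): string =>
  update ?? current ?? 'user';

// Simulate two updates flowing through the graph.
let messages: string[] = [];
let sender: string | undefined = 'user';

messages = concatReducer(messages, ['Who is Dr. Doom?']);
messages = concatReducer(messages, ['Dr. Doom is a Marvel villain...']);
sender = lastWriterWins(sender, 'Xander');

// messages now holds both turns in order; sender is 'Xander'.
```

LangGraph applies the reducers for us whenever a node returns a partial state update, which is why our nodes only need to return the new messages rather than the whole history.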
Our LLM
I will move the logic that handles our LLM into its own file so that it will be easier to support multiple LLMs and models in the future. You can create a new file in the src directory named llm.ts using the following command:
touch src/llm.ts
Once the file is created, you can copy the following logic into the file.
//./src/llm.ts
import {ChatAnthropic} from '@langchain/anthropic';

export async function llm({
  model,
  temperature,
  max_tokens,
}: {
  model: string;
  temperature: number;
  max_tokens: number;
}): Promise<ChatAnthropic> {
  return new ChatAnthropic({
    model,
    temperature,
    maxTokens: max_tokens,
  });
}
This logic will allow us to call our LLM where it is needed. So far, we've been building a small library to make this project modular, easy to update, and reuse in the future.
Xander's little helper
We will add our agent to our graph as a node. To do this, I created a helper function that lets us add as many nodes to the graph as we have agents.
// ./src/createNode.ts
import {HumanMessage} from '@langchain/core/messages';
import type {RunnableConfig} from '@langchain/core/runnables';
import {Runnable} from '@langchain/core/runnables';
import {AgentState} from './state';

// Helper function to run a node for a given agent
export async function createNode(props: {
  state: typeof AgentState.State;
  agent: Runnable;
  name: string;
  config?: RunnableConfig;
}) {
  const {state, agent, name, config} = props;
  let result = await agent.invoke(state, config);
  // We convert the agent output into a format that is suitable
  // to append to the global state
  if (!result?.tool_calls || result.tool_calls.length === 0) {
    // If the agent is NOT calling a tool, we want it to
    // look like a human message.
    result = new HumanMessage({...result, name: name});
  }
  return {
    messages: [result],
    // Since we have a strict workflow, we can
    // track the sender so we know who to pass to next.
    sender: name,
  };
}
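The branch in the middle is the important part. As a simplified, self-contained sketch (using toy types, not the real LangChain message classes): if the model's result carries no tool calls, we relabel it with the agent's name so it reads like a normal chat turn in the shared state; otherwise we pass it through untouched so a tool node can handle it.

```typescript
// Toy types standing in for the real LangChain message objects.
type ToyResult = {content: string; tool_calls?: unknown[]; name?: string};

function shapeForState(result: ToyResult, name: string): ToyResult {
  // No tool calls: tag the message with the agent's name.
  if (!result.tool_calls || result.tool_calls.length === 0) {
    return {...result, name};
  }
  // Tool calls present: pass the result through unchanged.
  return result;
}

const plain = shapeForState({content: 'FINAL ANSWER: ...'}, 'Xander');
// plain.name === 'Xander'
const withTool = shapeForState(
  {content: '', tool_calls: [{tool: 'search'}]},
  'Xander',
);
// withTool.name === undefined
```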
The Graph
This is where we will put everything together to create our graph and talk to our chatbot.
// ./src/graph.ts
import {END, START, StateGraph} from '@langchain/langgraph';
import {RunnableConfig} from '@langchain/core/runnables';
import {createAgent} from './agent';
import {createNode} from './createNode';
import {llm} from './llm';
import {AgentState} from './state';

// Define a graph that uses the agent and the state
export async function createGraph(props: {
  model: string;
  temperature: number;
  system_message: string;
  max_tokens: number;
}) {
  // Create an agent. Note we use ?? for temperature so a valid
  // value of 0 is not replaced by the default.
  const agent = await createAgent({
    llm: await llm({
      model: props.model || 'claude-3-haiku-20240307',
      temperature: props.temperature ?? 0.5,
      max_tokens: props.max_tokens || 250,
    }),
    systemMessage:
      props.system_message ||
      'Welcome to the team! You are a helpful AI assistant, collaborating with other assistants.',
  });

  // Define a node
  const node = (state: typeof AgentState.State, config?: RunnableConfig) =>
    createNode({
      state: state,
      agent,
      name: 'Xander',
      config,
    });

  // Define a new graph
  const workflow = new StateGraph(AgentState)
    .addNode('agent', node)
    .addEdge(START, 'agent') // __start__ is a special name for the entrypoint
    .addEdge('agent', END);

  // Finally, we compile it into a LangChain Runnable.
  return workflow.compile();
}
The .env
This is where you store your API key so we can interact with the Anthropic API. Make sure to add this file to .gitignore so it doesn't end up on GitHub.
ANTHROPIC_API_KEY=
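For reference, a minimal .gitignore at the project root (adjust to your project) might look like:

```
# .gitignore
node_modules/
.env
```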
main.ts
Now that we have all the pieces together, we can use the next bit of code to test what we've done so far. It also gives us a place to expand and tailor this AI to our needs.
// ./main.ts
import {HumanMessage} from '@langchain/core/messages';
import {createGraph} from './src/graph';
import 'dotenv/config';

async function main(props: {
  model: string;
  temperature?: number;
  system_message: string;
  max_tokens: number;
  prompt: string;
}) {
  // Use the agent
  const graph = await createGraph({
    model: props.model,
    temperature: props.temperature ?? 0.5,
    system_message: props.system_message,
    max_tokens: props.max_tokens,
  });
  const response = await graph.invoke({
    messages: [new HumanMessage(props.prompt)],
  });
  const answer = response.messages[response.messages.length - 1].content;
  console.log(answer);
  return answer;
}

main({
  model: 'claude-3-haiku-20240307',
  temperature: 0.5,
  system_message:
    'Welcome to the team! You are a helpful AI assistant, collaborating with other assistants.',
  max_tokens: 250,
  prompt: 'Who is Dr. Doom?',
});
Now, to run the code, make sure you are at the root of our project and type the following line into the terminal:
npx tsx main.ts
You may be prompted to install tsx; doing so will ensure you have the proper TypeScript dependencies installed. Afterward, the script should run and print the agent's response.
The Response
In this example, I asked the AI to tell me who Dr. Doom is.
Conclusion
This is a basic chatbot using LangGraph, LangChain, and Anthropic. Hopefully, this helps you make a chatbot of your own. I plan to add more to the agent as I develop ideas. Next, we will create a simple UI for the chatbot so that other users can use the project. The chat portion will follow a bring-your-own-key (BYOKey) model so that users can spend how they want to.
The Repo: