Updated on 2025-02-19T14:44:02.393Z
Written on 2025-02-18T08:39:51.768Z
LLMs served through API providers don't actually care much about you. They don't maintain ongoing relationships with users or remember previous conversations. If you ever start developing a chatbot, you will probably run into a scenario like this:
YOU: Hello! How do you say hello in Japanese?
AI: こんにちは (Konnichiwa) – Hello.
If you send a request again without including the previous conversation in the context, the interaction might look like this:
YOU: What about Italian?
AI: Could you clarify what you want in Italian? Are you asking about translating something, discussing your experience in Italian, or something else?
It feels almost like starting a completely new conversation with each prompt. So how does it actually work for ChatGPT, Claude, Gemini, or any other LLM chatbot? The answer is that they store your previous conversation for each chat session, either in your browser or in some server-side storage, and every time you send a new prompt, they include the past conversation along with it.
To put it simply, what is actually sent to the provider in the second turn might look like this:
YOU: Hello! How do you say hello in Japanese?
AI: こんにちは (Konnichiwa) – Hello.
YOU: What about Italian?
Then the answer will be:
AI: Ciao – Hello.
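The mechanism above can be sketched in a few lines of TypeScript. This makes no real API call, and the message shape is a simplification of what chat APIs actually accept; it only illustrates that the whole transcript travels with every request:

```typescript
// Minimal sketch: the client keeps the whole transcript and
// sends it with every new prompt.
type Message = { role: "user" | "assistant"; content: string };

// History kept on the client (browser state, or a store like Redis)
const history: Message[] = [
  { role: "user", content: "Hello! How do you say hello in Japanese?" },
  { role: "assistant", content: "こんにちは (Konnichiwa) – Hello." },
];

// Each new turn appends to the history first...
history.push({ role: "user", content: "What about Italian?" });

// ...then the ENTIRE history is sent as the request body, so the model
// sees the earlier Japanese exchange and can resolve "What about Italian?"
const requestBody = JSON.stringify({ messages: history });
console.log(history.length); // 3 messages go out, not just the latest one
```

With the full history in the payload, the model has the context to answer "Ciao" instead of asking for clarification.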
The above section provides an ELI5 overview of what I'll explain from a technical perspective in the next section. Here are the tools I use:
NOTE: This article will focus only on chat storage. To set up a basic chatbot app, you can follow Vercel's guide at: https://sdk.vercel.ai/docs/getting-started. This approach is based on the official guide for storing chat logs in local files, which you can find at: https://sdk.vercel.ai/docs/ai-sdk-ui/chatbot-message-persistence. This article extends that guide by showing you how to implement chat storage using Redis instead.
For the Redis instance, it doesn't have to be Upstash specifically - you can use Redis Cloud, Vercel KV, Azure Cache, or any other provider. If you want to try it out without paying, I recommend getting a free instance from any of these providers:
Let's get started. I assume you have already set up a basic functioning chatbot app. The first step is to ensure you have provisioned a Redis instance and your application can connect to it. In this case, I'm using Upstash's service, which comes with a convenient library. You only need to install it using npm i @upstash/redis and set up the URI and TOKEN in your environment variables. For detailed instructions, visit https://upstash.com/docs/redis/quickstarts/nextjs-app-router.
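For reference, the two variables the Upstash client reads (matching the code later in this article) look like this in .env.local, with placeholder values:

```
UPSTASH_REDIS_REST_URL="https://your-instance.upstash.io"
UPSTASH_REDIS_REST_TOKEN="your-token"
```

You can copy both values from the REST API section of your Upstash database dashboard.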
I also recommend using a GUI client application to monitor the data stored in Redis. Redis Insight is a free option that's compatible with instances provisioned from all the providers mentioned above: https://redis.io/insight/
Now that everything is set up, let's dive into the code. To connect to your Redis instance, you'll need to declare a singleton client instance here.
To store chat logs, you need an identifier for each chat session. This allows you to append messages from each interaction with the LLM and revisit the session later. While you could use a simple serial number or UUID, the Vercel AI SDK already provides a function specifically for this purpose. Let's utilize that to initialize a session.
Now that we have a function to create a session, let's go to the server-side page.tsx, where users interact with the chatbot. We'll use the /page route with a query parameter ?_id to identify each chat session.
NOTE: This is for demonstration purposes only. In practice, you might want to initiate a session when the first API call completes, rather than every time the /chat page is rendered.
With the code above, when you visit the /chat page, it will automatically redirect you to a URL like http://localhost:3000/chat?_id=AtxO0T24HKN4gdrA. If you check Redis Insight, you'll find an empty session. (I've implemented user authentication separately. You can either ignore this for now or add your own logic by passing the user's email as an argument to createChat.)
The next step is to create CRUD operations that will update the chat log by appending new messages during interactions with the chatbot.
Then head over to the API route that your client component calls to interact with the LLM. In this case, it will be /api/chat, located in /api/chat/route.ts.
Whenever you chat with your chatbot, messages will now be stored in a chat session, which you can monitor using Redis Insight.
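Since saveChat stores JSON.stringify({ _id, messages }), the value you'll see in Redis Insight under a chat:<id> key is a plain JSON document. A minimal sketch of that round-trip, using a made-up session with the same id format as generateId():

```typescript
// The session shape that createChat/saveChat write to Redis
type Message = { role: string; content: string };
type ChatSession = { _id: string; messages: Message[] };

// What saveChat writes: the whole session serialized as one JSON string
const session: ChatSession = {
  _id: "AtxO0T24HKN4gdrA", // placeholder id
  messages: [
    { role: "user", content: "Hello! How do you say hello in Japanese?" },
    { role: "assistant", content: "こんにちは (Konnichiwa) – Hello." },
  ],
};
const stored = JSON.stringify(session);

// What a reader gets back: parse the string and pull out the messages
const restored = JSON.parse(stored) as ChatSession;
console.log(restored.messages.length); // 2
```

Every turn overwrites this value with the full, updated messages array, so the key always holds the complete conversation.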
We're almost finished. The last step is adding logic to retrieve messages when users revisit a session. Let's create a new function called getChat.
Now let's return to the server-side page.tsx and add logic to retrieve past conversations. This if-else statement will retrieve messages from Redis when it detects an _id parameter, and create a new session when it doesn't.
Finally, the client component needs to receive the messages object and pass it to the useChat hook to render the past conversation.
NOTE: In production, you should handle these additional scenarios for better performance:
That wraps up our guide to implementing chat storage with Redis! I hope you find this article useful. You now have a chatbot that can store and retrieve conversation history. Feel free to experiment with other tools like MongoDB or other NoSQL databases (JSON makes it easy to work with React message states), add features like user authentication, or optimize the storage logic further. See you next time!
// lib/crud/redis.ts
import { Redis } from "@upstash/redis";

// Singleton client shared across the app; the non-null assertions assume
// both variables are set in your environment (see the Upstash quickstart)
export const redis = new Redis({
  url: process.env.UPSTASH_REDIS_REST_URL!,
  token: process.env.UPSTASH_REDIS_REST_TOKEN!,
});
// lib/create-chat.ts
import { generateId } from "ai";
import { redis } from "@/lib/crud/redis"; // Client instance from above

// Chat expiration in seconds; remove this if you want the log to stay indefinitely
const options = { ex: 600 };

export async function createChat() {
  const _id = generateId();
  await redis.set(
    `chat:${_id}`,
    JSON.stringify({ _id, messages: [] }),
    options
  );
  return _id;
}
// lib/save-chat.ts
import { Message } from "ai";
import { redis } from "@/lib/crud/redis"; // Client instance from above

// Expiration will be overridden upon save. Make sure this has the same configuration as createChat()
const options = { ex: 600 };

export async function saveChat({ _id, messages }: { _id: string; messages: Message[] }) {
  console.log("Saving chat: ", { _id, messages });
  await redis.set(
    `chat:${_id}`,
    JSON.stringify({ _id, messages }),
    options
  );
}
// api/chat/route.ts
import { NextRequest } from "next/server";
import { streamText, appendResponseMessages, createIdGenerator } from "ai";
import { google } from "@ai-sdk/google";
import { saveChat } from "@/lib/save-chat";

export async function POST(req: NextRequest) {
  const { messages, id } = await req.json();
  const result = streamText({
    // The default `google` provider reads GOOGLE_GENERATIVE_AI_API_KEY from the
    // environment; use createGoogleGenerativeAI() if you need custom settings
    model: google("gemini-2.0-flash"),
    messages: [...messages],
    experimental_generateMessageId: createIdGenerator({
      prefix: "msgs",
      size: 16,
    }),
    // Rest of the configuration
    async onFinish({ response }) {
      await saveChat({
        _id: id,
        // Append the model's response to the incoming messages before saving
        messages: appendResponseMessages({
          messages,
          responseMessages: response.messages,
        }),
      });
    },
  });
  return result.toDataStreamResponse();
}
// lib/get-chat.ts
import { Message } from "ai";
import { redis } from "@/lib/crud/redis"; // Client instance from above

export async function getChat(_id: string) {
  try {
    // The Upstash client deserializes the stored JSON automatically;
    // result is null when the key is missing or has expired
    const result = await redis.get<{ _id: string; messages: Message[] }>(`chat:${_id}`);
    const messages = result?.messages || [];
    return messages;
  } catch (error) {
    console.error("Error in getChat: ", error);
    return [];
  }
}
// app/chat/page.tsx
import { auth } from "@/auth";
import { redirect } from "next/navigation";
import { createChat } from "@/lib/create-chat";

export default async function Chat({ searchParams }) {
  const _id = await createChat();
  redirect(`/chat?_id=${_id}`);
  // The rest of the component
}
// app/chat/page.tsx
import { auth } from "@/auth";
import { redirect } from "next/navigation";
import { createChat } from "@/lib/create-chat";
import { getChat } from "@/lib/get-chat";
import { ChatComponent } from "@/components/Chat/";

export default async function Chat({ searchParams }) {
  let messages = [];
  let _id = (await searchParams)._id;
  console.log("_id: ", _id);
  if (_id) {
    messages = await getChat(_id);
  } else {
    _id = await createChat();
    redirect(`/chat?_id=${_id}`);
  }
  // Pass the messages to the client component where the useChat() hook is initialized
  return <ChatComponent initialMessages={messages} _id={_id} />;
}
// components/Chat/index.tsx
"use client";
import { useChat } from "@ai-sdk/react";
import { createIdGenerator } from "ai";

export function ChatComponent(props: any) {
  // `email` is only set if you pass it down from your own auth logic
  const { initialMessages, _id, email } = props;
  const {
    messages,
    isLoading,
    input,
    setInput,
    handleInputChange,
    handleSubmit,
  } = useChat({
    id: _id,
    initialMessages: initialMessages,
    sendExtraMessageFields: true,
    generateId: createIdGenerator({
      prefix: "msgc",
      size: 16,
    }),
    body: {
      user: email,
    },
  });
  // Render `messages` and wire `input`/`handleSubmit` into your chat UI here
  return null;
}