# AI Chat

A ready-to-use AI chat interface powered by the Vercel AI SDK. It demonstrates streaming responses, conversation persistence, and rate limiting.
## Key files

- `lib/ai/model.ts` - Provider-agnostic model configuration
- `app/api/ai/chat/route.ts` - Streaming chat endpoint
- `app/api/ai/conversations/route.ts` - Conversation management
- `lib/queries/chat.ts` - Database queries for chat
- `components/chat/` - UI components (ChatShell, ChatThread, ChatComposer, ConversationSidebar)
- `db/schema.ts` - Chat tables (`chat_conversations`, `chat_messages`)
- `middleware.ts` - Anonymous session cookie
## Environment variables

```bash
# Required for AI chat
AI_PROVIDER=openai              # or "anthropic"
AI_MODEL=gpt-4o-mini            # or "claude-3-5-sonnet-latest"
AI_API_KEY=sk-xxx               # your API key

# Optional
AI_CHAT_RATE_LIMIT_PER_HOUR=30  # default: 30
```
## Features
- Provider-agnostic: Switch between OpenAI and Anthropic with env vars
- Streaming responses: Real-time token streaming via Vercel AI SDK
- Conversation persistence: Messages stored in PostgreSQL
- Rate limiting: DB-backed rate limiting (30 requests/hour default)
- Anonymous sessions: Works without authentication via cookie-based identity
## How it works

- Anonymous users get a `cc_anon_id` cookie (set by middleware)
- Users can create conversations and send messages
- Messages stream from the AI provider in real time
- Both user and assistant messages are persisted to the database
- Rate limiting prevents abuse (30 requests/hour per anonymous user)
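The anonymous-identity step above can be sketched as a small helper: assume the middleware reuses an existing `cc_anon_id` cookie value or mints a fresh UUID. The helper and constant names here are illustrative, not the template's actual exports.

```ts
import { randomUUID } from "node:crypto";

// Cookie name from the docs; set by middleware.ts on first visit.
export const ANON_COOKIE = "cc_anon_id";

// Reuse the existing cookie value if present, otherwise mint a new id.
// Middleware would call this with request.cookies.get(ANON_COOKIE)?.value
// and set the cookie on the response when isNew is true.
export function ensureAnonId(existing: string | undefined): { id: string; isNew: boolean } {
  if (existing) return { id: existing, isNew: false };
  return { id: randomUUID(), isNew: true };
}
```

Because the id lives in a cookie rather than a session table, the same identity works across requests with no sign-in step.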
## Database schema

```ts
// Conversations
export const chatConversations = pgTable("chat_conversations", {
  id: uuid("id").primaryKey().defaultRandom(),
  ownerAnonId: text("owner_anon_id").notNull(),
  title: text("title"),
  createdAt: timestamp("created_at").notNull().defaultNow(),
  updatedAt: timestamp("updated_at").notNull().defaultNow(),
});

// Messages
export const chatMessages = pgTable("chat_messages", {
  id: uuid("id").primaryKey().defaultRandom(),
  conversationId: uuid("conversation_id").notNull().references(() => chatConversations.id),
  role: text("role").notNull(), // "user" | "assistant"
  content: text("content").notNull(),
  createdAt: timestamp("created_at").notNull().defaultNow(),
});
```
## API endpoints

- `GET /api/ai/conversations` - List all conversations for the current anonymous user.
- `POST /api/ai/conversations` - Create a new conversation.
- `DELETE /api/ai/conversations/[id]` - Delete a conversation.
- `POST /api/ai/chat` - Stream a chat completion. Body: `{ conversationId, messages }`.
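A client might call the streaming endpoint like this. The endpoint path and request body shape follow the list above; `readTextStream` and `sendChatMessage` are generic illustrative helpers, not part of the template.

```ts
// Generic helper: read a streamed text body chunk by chunk.
export async function readTextStream(
  body: ReadableStream<Uint8Array>,
  onChunk: (text: string) => void,
): Promise<string> {
  const reader = body.getReader();
  const decoder = new TextDecoder();
  let full = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    const text = decoder.decode(value, { stream: true });
    full += text;
    onChunk(text); // e.g. append to the UI as tokens arrive
  }
  return full;
}

// Illustrative client call against POST /api/ai/chat as documented above.
export async function sendChatMessage(
  conversationId: string,
  messages: Array<{ role: "user" | "assistant"; content: string }>,
  onChunk: (text: string) => void,
): Promise<string> {
  const res = await fetch("/api/ai/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ conversationId, messages }),
  });
  if (!res.ok || !res.body) throw new Error(`chat request failed: ${res.status}`);
  return readTextStream(res.body, onChunk);
}
```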
## Adding a system prompt

By default, no system prompt is included. To add one, modify `app/api/ai/chat/route.ts`:

```ts
const result = streamText({
  model,
  system: "You are a helpful assistant specialized in...",
  messages: filteredMessages,
  // ...
});
```
## Changing providers

Update your `.env.local`:

```bash
# For OpenAI
AI_PROVIDER=openai
AI_MODEL=gpt-4o-mini
AI_API_KEY=sk-xxx

# For Anthropic
AI_PROVIDER=anthropic
AI_MODEL=claude-3-5-sonnet-latest
AI_API_KEY=sk-ant-xxx
```
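Under the hood, `lib/ai/model.ts` presumably validates these variables before handing them to the Vercel AI SDK's provider factory. A minimal sketch of that resolution step, with hypothetical function and type names:

```ts
type Provider = "openai" | "anthropic";

interface AIConfig {
  provider: Provider;
  model: string;
  apiKey: string;
}

// Validate the three env vars described above and fail fast with a clear
// message; the result would be passed to the chosen provider factory.
export function resolveAIConfig(env: Record<string, string | undefined>): AIConfig {
  const provider = env.AI_PROVIDER;
  if (provider !== "openai" && provider !== "anthropic") {
    throw new Error(`AI_PROVIDER must be "openai" or "anthropic", got "${provider}"`);
  }
  if (!env.AI_MODEL || !env.AI_API_KEY) {
    throw new Error("AI_MODEL and AI_API_KEY are required");
  }
  return { provider, model: env.AI_MODEL, apiKey: env.AI_API_KEY };
}
```

Failing at startup on a bad `AI_PROVIDER` value is cheaper than discovering the misconfiguration on the first chat request.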
## Rate limiting

Rate limiting uses the existing DB-backed rate limiter from `lib/rate-limit.ts`:

- Default: 30 requests per anonymous user per hour
- Key format: `ai:chat:anon:{hashed_anon_id}`
- Configurable: Set `AI_CHAT_RATE_LIMIT_PER_HOUR` in env
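The behavior above can be sketched as a fixed-window counter. The real implementation in `lib/rate-limit.ts` is DB-backed, so treat the in-memory `Map` and the function names here as illustrative only; the key format is the one documented above.

```ts
import { createHash } from "node:crypto";

const WINDOW_MS = 60 * 60 * 1000; // one hour

// Key format from the docs: ai:chat:anon:{hashed_anon_id}.
// Hashing keeps the raw cookie value out of stored rate-limit keys.
export function rateLimitKey(anonId: string): string {
  const hashed = createHash("sha256").update(anonId).digest("hex");
  return `ai:chat:anon:${hashed}`;
}

// In-memory stand-in for the DB table: one window per key.
const windows = new Map<string, { start: number; count: number }>();

// Returns true if the request is allowed, false if the limit is exhausted.
export function checkRateLimit(key: string, limit = 30, now = Date.now()): boolean {
  const w = windows.get(key);
  if (!w || now - w.start >= WINDOW_MS) {
    windows.set(key, { start: now, count: 1 }); // new window
    return true;
  }
  if (w.count >= limit) return false;
  w.count += 1;
  return true;
}
```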
## Protecting the chat route

To require authentication, move the chat routes under `app/app/` and update the API routes to use `requireUser()` instead of the anonymous cookie.