AI products such as intelligent chatbots, automated assistants, and content-generation tools have become core tools for businesses and individual users. Beyond powerful backend models, they place unique demands on the frontend: real-time streaming responses, complex state and session management, context preservation, natural-language-driven interaction, and mobile/touch optimization. Users expect fast, fluid, and intelligent experiences, while developers must balance performance, feature complexity, and maintainability.
This article explores frontend best practices for AI products by building a complete AI chat application. Starting from requirements analysis, we will work through technology selection, feature implementation, performance optimization, and deployment, covering streaming responses, multi-model switching, context caching, session persistence, command-style UI, and mobile adaptation. Through detailed code examples and scenario analysis, you will experience the full workflow of building a responsive, streaming, and intelligently interactive AI product frontend.
Before writing any code, we need a clear picture of the project's functional requirements. A well-defined requirement list guides development and explains why each feature matters. The core requirements of the AI chat application are:
- Streaming responses: render the model's answer incrementally as it is generated.
- Multi-model switching: let users choose between models (e.g. gpt-4, grok, mistral).
- Session persistence: save, restore, edit, and clear chat history locally.
- Context and token awareness: track conversation length and cache repeated responses.
- Command-style UI: offer suggested prompts and natural-language-driven interaction.
- Mobile adaptation: responsive layout, touch-friendly controls, and voice input.
These requirements cover the core scenarios of AI product frontends and provide plenty of hands-on practice with React and AI technologies. They also map onto current trends such as streaming interaction, multimodal input, and increasingly intelligent UIs.
Before implementing features, we need to choose a technology stack. This project uses the following tools (installed in the next section), chosen for these reasons:
- React + TypeScript + Vite: a component model well suited to chat UIs, with a fast dev server and build.
- Tailwind CSS: utility-first styling and straightforward responsive, mobile-friendly layouts.
- @tanstack/react-query: caching and server-state management, used here to cache model responses.
- Framer Motion: lightweight animations for message entry and in-view detection.
- LocalForage: an IndexedDB-backed store for persisting chat history.
- Axios: a general HTTP client for non-streaming requests (the streaming path itself uses EventSource).
This combination is easy to pick up and reflects current best practices for AI product frontend development.
Now for the core part: implementation. Starting from project scaffolding, we will work step by step through component design, streaming responses, multi-model switching, session persistence, command UI, mobile adaptation, and deployment.
Create a React project with Vite:
npm create vite@latest ai-chat -- --template react-ts
cd ai-chat
npm install
npm run dev
Install the required dependencies:
npm install @tanstack/react-query framer-motion tailwindcss postcss autoprefixer localforage axios
Initialize Tailwind CSS:
npx tailwindcss init -p
Edit tailwind.config.js:
/** @type {import('tailwindcss').Config} */
export default {
content: [
"./index.html",
"./src/**/*.{js,ts,jsx,tsx}",
],
theme: {
extend: {},
},
plugins: [],
}
Import Tailwind in src/index.css:
@tailwind base;
@tailwind components;
@tailwind utilities;
We split the application into the following components:
src/
├── components/
│ ├── ChatWindow.tsx
│ ├── Message.tsx
│ ├── InputBox.tsx
│ ├── ModelSelector.tsx
│ └── StatusIndicator.tsx
├── hooks/
│ └── useAI.ts
├── App.tsx
├── main.tsx
└── index.css
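Before diving into each component, here is a minimal sketch of how App.tsx could mount the chat window. The centering wrapper and its classes are an illustrative assumption; only ChatWindow and the file layout above come from this project. main.tsx stays as generated by the Vite template for now and is revisited when React Query is introduced.
// src/App.tsx — mounts the chat window; the layout wrapper is an illustrative choice
import ChatWindow from './components/ChatWindow';

function App() {
  return (
    <div className="min-h-screen bg-gray-100 flex items-center justify-center p-4">
      <ChatWindow />
    </div>
  );
}

export default App;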
We implement streaming responses with Server-Sent Events (SSE).
First, create a simple Node.js backend that supports SSE:
mkdir backend
cd backend
npm init -y
npm install express cors dotenv openai
backend/index.js:
// Load the OpenAI API key from .env
require('dotenv').config();
const express = require('express');
const cors = require('cors');
const { OpenAI } = require('openai');

const app = express();
app.use(cors()); // the Vite dev server runs on a different origin than this backend
app.use(express.json());

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

app.get('/api/stream', async (req, res) => {
  // SSE headers: keep the connection open and disable caching
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');

  const { prompt, model } = req.query;
  try {
    const stream = await openai.chat.completions.create({
      model: model || 'gpt-4',
      messages: [{ role: 'user', content: prompt }],
      stream: true,
    });
    // Forward each token chunk to the client as an SSE "data:" event
    for await (const chunk of stream) {
      const data = chunk.choices[0]?.delta?.content || '';
      res.write(`data: ${JSON.stringify({ content: data })}\n\n`);
    }
    res.write('data: [DONE]\n\n');
  } catch (err) {
    res.write(`data: ${JSON.stringify({ error: err.message })}\n\n`);
  } finally {
    res.end(); // close the SSE connection when the stream finishes
  }
});

app.listen(3001, () => console.log('Server running on port 3001'));
Create a .env file:
OPENAI_API_KEY=your_openai_api_key
Run the backend:
node index.js
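Before wiring up the frontend, you can sanity-check the stream from the command line (assuming the backend above is running locally on port 3001; -N disables curl's output buffering so chunks appear as they arrive):
curl -N "http://localhost:3001/api/stream?prompt=Hello&model=gpt-4"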
src/hooks/useAI.ts:
import { useState, useEffect } from 'react';
import localForage from 'localforage';

interface Message {
  role: 'user' | 'assistant';
  content: string;
}

export function useAI(model: string) {
  const [messages, setMessages] = useState<Message[]>([]);
  const [status, setStatus] = useState<'idle' | 'streaming' | 'error'>('idle');
  const [error, setError] = useState<string | null>(null);

  // Restore persisted history on mount
  useEffect(() => {
    localForage.getItem<Message[]>('chatHistory').then((history) => {
      if (history) setMessages(history);
    });
  }, []);

  const sendMessage = (prompt: string) => {
    setMessages(prev => [...prev, { role: 'user', content: prompt }]);
    setStatus('streaming');

    // Connect to the SSE endpoint exposed by the backend (port 3001, see backend/index.js)
    const eventSource = new EventSource(
      `http://localhost:3001/api/stream?prompt=${encodeURIComponent(prompt)}&model=${model}`
    );
    let responseText = '';

    eventSource.onmessage = (event) => {
      // The backend sends a bare [DONE] marker when the stream finishes
      if (event.data === '[DONE]') {
        setStatus('idle');
        eventSource.close();
        return;
      }
      const data = JSON.parse(event.data);
      if (data.error) {
        setStatus('error');
        setError(data.error);
        eventSource.close();
        return;
      }
      responseText += data.content;
      setMessages(prev => {
        const newMessages = [...prev];
        const last = newMessages[newMessages.length - 1];
        if (last && last.role === 'assistant') {
          // Update the streaming assistant message in place (immutably)
          newMessages[newMessages.length - 1] = { ...last, content: responseText };
        } else {
          newMessages.push({ role: 'assistant', content: responseText });
        }
        localForage.setItem('chatHistory', newMessages);
        return newMessages;
      });
    };

    eventSource.onerror = () => {
      setStatus('error');
      setError('流式响应失败');
      eventSource.close();
    };
  };

  return { messages, status, error, sendMessage };
}
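The hook above talks to http://localhost:3001 directly, which is why the backend enables CORS. An alternative during development is to proxy /api through the Vite dev server so the frontend can use a relative URL. A minimal sketch, assuming the default ports:
// vite.config.ts — proxy /api requests to the local backend during development
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';

export default defineConfig({
  plugins: [react()],
  server: {
    proxy: {
      '/api': 'http://localhost:3001',
    },
  },
});
With this in place, the hook could create the EventSource with a relative path such as `/api/stream?prompt=...&model=...` instead of the absolute localhost URL.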
src/components/ModelSelector.tsx:
interface ModelSelectorProps {
model: string;
onChange: (model: string) => void;
}
function ModelSelector({ model, onChange }: ModelSelectorProps) {
const models = ['gpt-4', 'grok', 'mistral'];
return (
<div className="p-2 bg-white rounded-lg shadow">
<select
value={model}
onChange={(e) => onChange(e.target.value)}
className="p-2 border rounded-lg"
>
{models.map(m => (
<option key={m} value={m}>{m}</option>
))}
</select>
</div>
);
}
export default ModelSelector;
Use a rough token counter (simplified version):
function estimateTokens(text: string): number {
return Math.ceil(text.length / 4); // simplification: roughly 1 token per 4 characters
}
function useAI(model: string) {
// ... rest of the hook as above
const [tokenCount, setTokenCount] = useState(0);
useEffect(() => {
const totalTokens = messages.reduce((sum, msg) => sum + estimateTokens(msg.content), 0);
setTokenCount(totalTokens);
}, [messages]);
return { messages, status, error, sendMessage, tokenCount };
}
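The token count is not only for display: it can also be used to keep the conversation within the model's context window. Below is a minimal sketch of trimming old messages against a budget; MAX_CONTEXT_TOKENS is a hypothetical constant, not part of the original hook, and since the current backend only forwards a single prompt, this would only matter once the full history is sent along.
// Keep only the most recent messages whose estimated tokens fit the budget (hypothetical constant)
const MAX_CONTEXT_TOKENS = 3000;

function trimHistory(messages: Message[]): Message[] {
  const trimmed: Message[] = [];
  let total = 0;
  // Walk backwards from the newest message and stop once the budget is exceeded
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = estimateTokens(messages[i].content);
    if (total + cost > MAX_CONTEXT_TOKENS) break;
    trimmed.unshift(messages[i]);
    total += cost;
  }
  return trimmed;
}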
Persist the conversation with LocalForage.
src/hooks/useAI.ts (updated):
useEffect(() => {
localForage.setItem('chatHistory', messages);
}, [messages]);
const clearHistory = () => {
setMessages([]);
localForage.removeItem('chatHistory');
};
const editMessage = (index: number, newContent: string) => {
  setMessages(prev => {
    const newMessages = [...prev];
    // Replace the message object instead of mutating state in place
    newMessages[index] = { ...newMessages[index], content: newContent };
    localForage.setItem('chatHistory', newMessages);
    return newMessages;
  });
};
return { messages, status, error, sendMessage, tokenCount, clearHistory, editMessage };
src/components/InputBox.tsx:
import { useState } from 'react';
interface InputBoxProps {
onSend: (input: string) => void;
disabled: boolean;
}
function InputBox({ onSend, disabled }: InputBoxProps) {
const [input, setInput] = useState('');
const suggestions = ['查天气', '翻译文本', '生成代码'];
const handleSubmit = (e: React.FormEvent) => {
e.preventDefault();
if (!input.trim() || disabled) return;
onSend(input);
setInput('');
};
const handleSuggestion = (suggestion: string) => {
onSend(suggestion);
};
return (
<div className="p-4 border-t bg-white flex flex-col space-y-2">
<div className="flex space-x-2">
{suggestions.map(s => (
<button
key={s}
onClick={() => handleSuggestion(s)}
className="px-3 py-1 bg-gray-200 rounded-lg text-sm"
>
{s}
</button>
))}
</div>
<div className="flex items-center space-x-2">
<input
type="text"
value={input}
onChange={(e) => setInput(e.target.value)}
onKeyDown={(e) => e.key === 'Enter' && handleSubmit(e)}
className="flex-1 p-2 border rounded-lg focus:outline-none focus:ring-2 focus:ring-blue-500"
placeholder="输入您的请求..."
disabled={disabled}
/>
<button
onClick={handleSubmit}
className="px-4 py-2 bg-blue-600 text-white rounded-lg hover:bg-blue-700 disabled:bg-gray-400"
disabled={disabled}
>
发送
</button>
</div>
</div>
);
}
export default InputBox;
Use Tailwind CSS for a responsive layout:
src/components/ChatWindow.tsx:
import { useState } from 'react';
import { motion } from 'framer-motion';
import { useAI } from '../hooks/useAI';
import Message from './Message';
import InputBox from './InputBox';
import ModelSelector from './ModelSelector';
import StatusIndicator from './StatusIndicator';
function ChatWindow() {
const [model, setModel] = useState('gpt-4');
const { messages, status, error, sendMessage, tokenCount, clearHistory, editMessage } = useAI(model);
return (
<div className="w-full max-w-2xl bg-white rounded-lg shadow-lg flex flex-col h-[80vh] md:h-[70vh]">
<div className="p-2 border-b flex justify-between items-center">
<ModelSelector model={model} onChange={setModel} />
<button
onClick={clearHistory}
className="px-3 py-1 bg-red-500 text-white rounded-lg text-sm"
>
清空历史
</button>
</div>
<StatusIndicator status={status} error={error} tokenCount={tokenCount} />
<div className="flex-1 overflow-y-auto p-4 space-y-4">
{messages.map((msg, index) => (
<motion.div
key={index}
initial={{ opacity: 0, y: 20 }}
animate={{ opacity: 1, y: 0 }}
transition={{ duration: 0.3 }}
>
<Message
role={msg.role}
content={msg.content}
onEdit={(newContent) => editMessage(index, newContent)}
/>
</motion.div>
))}
</div>
<InputBox onSend={sendMessage} disabled={status === 'streaming'} />
</div>
);
}
export default ChatWindow;
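Chat UIs usually keep the newest message in view as the stream arrives. The small helper hook below is an illustrative addition (not part of the original code): ChatWindow could call it with messages and attach the returned ref to an empty div rendered after the mapped messages.
// src/hooks/useScrollToBottom.ts — scroll an anchor element into view whenever `dep` changes
import { useEffect, useRef } from 'react';

export function useScrollToBottom<T>(dep: T) {
  const bottomRef = useRef<HTMLDivElement>(null);
  useEffect(() => {
    // Smoothly scroll the anchor (placed after the message list) into view
    bottomRef.current?.scrollIntoView({ behavior: 'smooth' });
  }, [dep]);
  return bottomRef;
}
Usage sketch: const bottomRef = useScrollToBottom(messages); then render <div ref={bottomRef} /> right after the messages.map(...) block.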
src/components/Message.tsx:
import { useState } from 'react';
interface MessageProps {
role: 'user' | 'assistant';
content: string;
onEdit: (newContent: string) => void;
}
function Message({ role, content, onEdit }: MessageProps) {
const [isEditing, setIsEditing] = useState(false);
const [editText, setEditText] = useState(content);
const handleSave = () => {
onEdit(editText);
setIsEditing(false);
};
return (
<div className={`p-3 rounded-lg max-w-xs ${role === 'user' ? 'bg-blue-500 text-white ml-auto' : 'bg-gray-200'}`}>
{isEditing ? (
<div>
<textarea
value={editText}
onChange={(e) => setEditText(e.target.value)}
className="w-full p-2 border rounded-lg"
/>
<button
onClick={handleSave}
className="mt-2 px-3 py-1 bg-green-500 text-white rounded-lg"
>
保存
</button>
</div>
) : (
<div>
<p>{content}</p>
{role === 'user' && (
<button
onClick={() => setIsEditing(true)}
className="text-sm text-gray-500"
>
编辑
</button>
)}
</div>
)}
</div>
);
}
export default Message;
src/components/StatusIndicator.tsx:
interface StatusIndicatorProps {
status: 'idle' | 'streaming' | 'error';
error: string | null;
tokenCount: number;
}
function StatusIndicator({ status, error, tokenCount }: StatusIndicatorProps) {
const statusText = {
idle: '待机',
streaming: '正在生成...',
error: error || '错误',
};
return (
<div className="p-2 bg-gray-100 text-center text-sm text-gray-600">
<p>{statusText[status]}</p>
<p>Token 使用量: {tokenCount}</p>
</div>
);
}
export default StatusIndicator;
Cache API responses with React Query.
src/hooks/useAI.ts (updated):
import { useQueryClient } from '@tanstack/react-query';
function useAI(model: string) {
  const queryClient = useQueryClient();
  // ... rest of the hook as above
  const sendMessage = (prompt: string) => {
    const cacheKey = ['response', prompt, model];
    const cached = queryClient.getQueryData<Message>(cacheKey);
    if (cached) {
      // Reuse the cached answer instead of hitting the API again
      setMessages(prev => [...prev, { role: 'user', content: prompt }, cached]);
      return;
    }
    // ... SSE setup as above
    eventSource.onmessage = (event) => {
      if (event.data === '[DONE]') {
        // Cache the finished answer for identical prompt/model pairs
        queryClient.setQueryData(cacheKey, { role: 'assistant', content: responseText });
        setStatus('idle');
        eventSource.close();
        return;
      }
      const data = JSON.parse(event.data);
      responseText += data.content;
      setMessages(prev => {
        const newMessages = [...prev];
        const last = newMessages[newMessages.length - 1];
        if (last && last.role === 'assistant') {
          newMessages[newMessages.length - 1] = { ...last, content: responseText };
        } else {
          newMessages.push({ role: 'assistant', content: responseText });
        }
        localForage.setItem('chatHistory', newMessages);
        return newMessages;
      });
    };
  };
  return { messages, status, error, sendMessage, tokenCount, clearHistory, editMessage };
}
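useQueryClient only works inside a QueryClientProvider, so the app entry needs to be wrapped once. A minimal sketch of src/main.tsx under that assumption (the StrictMode wrapper comes from the standard Vite template):
// src/main.tsx — wrap the app in a QueryClientProvider so useQueryClient is available
import React from 'react';
import ReactDOM from 'react-dom/client';
import { QueryClient, QueryClientProvider } from '@tanstack/react-query';
import App from './App';
import './index.css';

const queryClient = new QueryClient();

ReactDOM.createRoot(document.getElementById('root')!).render(
  <React.StrictMode>
    <QueryClientProvider client={queryClient}>
      <App />
    </QueryClientProvider>
  </React.StrictMode>
);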
Render messages lazily: the simplified version below mounts the message list only when the scroll container is in view (for very long conversations, a list-virtualization library is the more robust option).
src/components/ChatWindow.tsx (updated):
import { useRef } from 'react';
import { useInView } from 'framer-motion';
function ChatWindow() {
const { messages, status, error, sendMessage, tokenCount, clearHistory, editMessage } = useAI('gpt-4');
const ref = useRef(null);
const isInView = useInView(ref);
return (
<div className="w-full max-w-2xl bg-white rounded-lg shadow-lg flex flex-col h-[80vh] md:h-[70vh]">
<div className="p-2 border-b flex justify-between items-center">
<ModelSelector model="gpt-4" onChange={() => {}} />
<button
onClick={clearHistory}
className="px-3 py-1 bg-red-500 text-white rounded-lg text-sm"
>
清空历史
</button>
</div>
<StatusIndicator status={status} error={error} tokenCount={tokenCount} />
<div className="flex-1 overflow-y-auto p-4 space-y-4" ref={ref}>
{isInView && messages.map((msg, index) => (
<motion.div
key={index}
initial={{ opacity: 0, y: 20 }}
animate={{ opacity: 1, y: 0 }}
transition={{ duration: 0.3 }}
>
<Message
role={msg.role}
content={msg.content}
onEdit={(newContent) => editMessage(index, newContent)}
/>
</motion.div>
))}
</div>
<InputBox onSend={sendMessage} disabled={status === 'streaming'} />
</div>
);
}
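The useInView check above is coarse: once the container is visible, every message still renders. A simpler complementary optimization is to cap how many messages are mounted at once. A sketch, where the window size of 50 is an arbitrary choice:
// Render only the most recent messages; older ones stay in state and in LocalForage
const VISIBLE_COUNT = 50; // arbitrary cap on mounted messages
const startIndex = Math.max(0, messages.length - VISIBLE_COUNT);
const visibleMessages = messages.slice(startIndex);
// In the JSX, map over visibleMessages and pass startIndex + i to editMessage
// so edits still target the correct entry in the full history.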
Build the frontend for production:
npm run build
The build output in dist/ can be deployed to any static hosting platform such as Vercel. The backend must be deployed separately, to Vercel or another platform.
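Because the frontend and backend are deployed separately, the hard-coded http://localhost:3001 URL in useAI.ts will not work in production. A common approach is a Vite environment variable; a sketch, where VITE_API_URL is a name chosen for this example and the backend URL is a placeholder:
// .env.production — VITE_-prefixed variables are exposed to the client by Vite
VITE_API_URL=https://your-backend.example.com

// In useAI.ts, build the stream URL from the variable, falling back to the local backend:
const API_URL = import.meta.env.VITE_API_URL || 'http://localhost:3001';
const eventSource = new EventSource(
  `${API_URL}/api/stream?prompt=${encodeURIComponent(prompt)}&model=${model}`
);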
To consolidate what you have learned, try an exercise: add voice input to the application.
Add speech recognition
Use the Web Speech API to implement voice input, and update InputBox with a voice-input button.
src/components/InputBox.tsx (updated):
import { useState } from 'react';
interface InputBoxProps {
onSend: (input: string) => void;
disabled: boolean;
}
function InputBox({ onSend, disabled }: InputBoxProps) {
const [input, setInput] = useState('');
const [isRecording, setIsRecording] = useState(false);
const suggestions = ['查天气', '翻译文本', '生成代码'];
const handleSubmit = (e: React.FormEvent) => {
e.preventDefault();
if (!input.trim() || disabled) return;
onSend(input);
setInput('');
};
const handleSuggestion = (suggestion: string) => {
onSend(suggestion);
};
const startRecording = () => {
  // SpeechRecognition is vendor-prefixed in Chromium-based browsers; TypeScript needs a cast
  const SpeechRecognitionImpl =
    (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;
  if (!SpeechRecognitionImpl) return; // speech recognition not supported in this browser
  const recognition = new SpeechRecognitionImpl();
  recognition.lang = 'zh-CN';
  recognition.onresult = (event: any) => {
    const transcript = event.results[0][0].transcript;
    setInput(transcript);
    setIsRecording(false);
  };
  recognition.onerror = () => setIsRecording(false);
  recognition.onend = () => setIsRecording(false);
  recognition.start();
  setIsRecording(true);
};
return (
<div className="p-4 border-t bg-white flex flex-col space-y-2">
<div className="flex space-x-2 overflow-x-auto">
{suggestions.map(s => (
<button
key={s}
onClick={() => handleSuggestion(s)}
className="px-3 py-1 bg-gray-200 rounded-lg text-sm whitespace-nowrap"
>
{s}
</button>
))}
</div>
<div className="flex items-center space-x-2">
<input
type="text"
value={input}
onChange={(e) => setInput(e.target.value)}
onKeyDown={(e) => e.key === 'Enter' && handleSubmit(e)}
className="flex-1 p-2 border rounded-lg focus:outline-none focus:ring-2 focus:ring-blue-500"
placeholder="输入或语音您的请求..."
disabled={disabled}
/>
<button
onClick={startRecording}
className={`px-3 py-2 rounded-lg ${isRecording ? 'bg-red-500' : 'bg-gray-200'} text-white`}
disabled={disabled}
>
{isRecording ? '录音中' : '语音'}
</button>
<button
onClick={handleSubmit}
className="px-4 py-2 bg-blue-600 text-white rounded-lg hover:bg-blue-700 disabled:bg-gray-400"
disabled={disabled}
>
发送
</button>
</div>
</div>
);
}
export default InputBox;
Through this exercise you learn to use the Web Speech API for voice input and make the AI product more interactive, especially on mobile.
With this AI chat application you have walked through the full workflow of an AI product frontend, from requirements analysis to deployment, and practiced the key techniques: streaming responses, multi-model switching, session persistence, command UI, and mobile adaptation. These skills form a solid foundation for building modern AI applications.
AI product frontends will continue to move toward multimodal interaction (voice, images) and smarter UIs. Keep exploring AI-driven frontend development, build innovative user experiences, and share what you build with the community.