The Agent is the most intelligent component in LangChain. Its core idea is based on the ReAct framework (Reasoning + Acting): autonomous decision-making achieved by interleaving Thought and Action steps. The core ReAct flow looks like this:
```python
from langchain.agents import Tool, AgentExecutor, ReActAgent

# Wrap an external weather lookup as a tool the agent can call
tools = [
    Tool(
        name="Weather API",
        func=lambda location: get_weather_data(location),
        description="Look up real-time weather for a given location"
    )
]

agent = ReActAgent(tools=tools)
executor = AgentExecutor(agent=agent, tools=tools)
result = executor.run("What is the weather in Beijing today?")
```
Key point: LangChain lets you wrap any external functionality via the `Tool` class. For example, a file-reading tool:
```python
import os
from langchain.agents import Tool

class FileReaderTool(Tool):
    def __init__(self):
        super().__init__(
            name="File Reader",
            func=self.read_file,
            description="Read the contents of a text file at the given path"
        )

    def read_file(self, file_path):
        if not os.path.exists(file_path):
            return f"File not found: {file_path}"
        with open(file_path, "r") as f:
            return f.read()
```
Errors raised during a tool call should be caught and returned to the agent rather than crashing the run:

```python
def safe_run(tool, tool_input):
    try:
        return tool.run(tool_input)
    except Exception as e:
        return f"Error while calling tool: {e}"
```
```python
import requests

def get_weather_data(location):
    # Query the WeatherAPI current-conditions endpoint (replace YOUR_API_KEY)
    url = f"https://api.weatherapi.com/v1/current.json?key=YOUR_API_KEY&q={location}"
    response = requests.get(url)
    response.raise_for_status()
    return response.json()
```
| Type | Advantages | Disadvantages |
|---|---|---|
| In-memory storage | Fast; no external service required | Data is lost when the session ends |
| Redis storage | Persistent; supports high concurrency | Requires maintaining a Redis service |
| Database storage | Supports complex queries; high security | Complex to implement; lower performance |
```python
from langchain.memory import Neo4jGraphMemory
from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "password"))
memory = Neo4jGraphMemory(driver=driver, session_id="user123")

# Save conversation history
memory.save_context({"input": "Hello"}, {"output": "Hi there!"})

# Load conversation history
history = memory.load_memory_variables({})
print(history)  # Output: {"history": "Human: Hello\nAI: Hi there!"}
```
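The `save_context` / `load_memory_variables` interface used above can be illustrated without any external service. The sketch below is a minimal in-memory stand-in (the class name is hypothetical, not part of LangChain):

```python
class InMemoryConversationMemory:
    """Minimal in-memory sketch of the save_context /
    load_memory_variables interface (hypothetical class)."""

    def __init__(self):
        self.messages = []  # list of (speaker, text) tuples

    def save_context(self, inputs, outputs):
        # Record one human/AI exchange
        self.messages.append(("Human", inputs["input"]))
        self.messages.append(("AI", outputs["output"]))

    def load_memory_variables(self, _):
        # Render history in the "Human: ...\nAI: ..." format
        history = "\n".join(f"{who}: {text}" for who, text in self.messages)
        return {"history": history}

memory = InMemoryConversationMemory()
memory.save_context({"input": "Hello"}, {"output": "Hi there!"})
print(memory.load_memory_variables({}))
# → {'history': 'Human: Hello\nAI: Hi there!'}
```

Swapping the list for a persistent backend (Redis, Neo4j, a database) changes only where `messages` lives, not the interface.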
Summarization strategy: once the conversation exceeds the token budget, older context must be compressed or dropped.

Code example: a dynamic context-truncation algorithm
```python
def truncate_context(context, max_tokens=2000):
    # Approximate token count by whitespace-splitting
    # (a real tokenizer would be more accurate)
    tokens = context.split()
    if len(tokens) > max_tokens:
        return " ".join(tokens[:max_tokens]) + "[...]"
    return context
```
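Truncating only from the front discards the most recent turns. A common variant keeps both the head (task setup) and the tail (latest turns) and drops the middle. A sketch, using the same whitespace-split token approximation (the function name and `head_ratio` parameter are illustrative, not from any library):

```python
def truncate_middle(context, max_tokens=2000, head_ratio=0.5):
    """Keep the start and end of the context, dropping the middle,
    so both the task setup and the most recent turns survive."""
    tokens = context.split()
    if len(tokens) <= max_tokens:
        return context
    head = int(max_tokens * head_ratio)   # tokens kept from the start
    tail = max_tokens - head              # tokens kept from the end
    return " ".join(tokens[:head]) + " [...] " + " ".join(tokens[-tail:])

print(truncate_middle("a b c d e f", max_tokens=4))
# → "a b [...] e f"
```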
A multi-agent system implements complex business logic through task decomposition and routing decisions. For example, an e-commerce customer-service system might include the following agents:
```python
from langchain.agents import AgentExecutor, ReActAgent
from langchain.tools import Tool

# Define tools
tools = [
    Tool(name="Return Policy Lookup", func=query_policy, description="Look up the return policy"),
    Tool(name="After-sales Service", func=contact_support, description="Transfer to a human agent")
]

# Define agents
intent_recognition_agent = ReActAgent(tools=tools)
policy_lookup_agent = ReActAgent(tools=tools)

# Routing logic
def route_query(query):
    if "return" in query:
        return policy_lookup_agent
    else:
        return intent_recognition_agent
```
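The routing pattern above can be exercised on its own without pre-built agents. The self-contained sketch below replaces the agents with stub handlers and generalizes the `if` chain into a keyword table (all names are hypothetical):

```python
def handle_returns(query):
    # Stub for the policy-lookup agent
    return "Return policy: items may be returned within 7 days."

def handle_general(query):
    # Stub for the intent-recognition agent
    return "Routing to intent recognition."

# keyword → handler table; first match wins
ROUTES = [("return", handle_returns)]

def route_query(query):
    for keyword, handler in ROUTES:
        if keyword in query:
            return handler
    return handle_general

handler = route_query("I want to return an item")
print(handler("I want to return an item"))
# → "Return policy: items may be returned within 7 days."
```

A table-driven router scales better than a chain of `if` statements as the number of intents grows.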
RAG (Retrieval-Augmented Generation) works in the following steps:
```python
from langchain.vectorstores import Qdrant
from langchain.embeddings import TongYiEmbeddings
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Build the vector store
db = Qdrant.from_documents(docs, embeddings, url="http://localhost:6333")

# Retrieve relevant documents
query = "What is the API rate limit?"
results = db.similarity_search(query, k=3)

# Generate the answer
prompt = PromptTemplate(
    input_variables=["context", "query"],
    template="Answer based on the following content: {context}\nQuestion: {query}"
)
chain = LLMChain(llm=llm, prompt=prompt)
answer = chain.run({"context": results, "query": query})
```
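The retrieval step can be illustrated without a vector database. The toy sketch below scores documents by word overlap with the query; this is only to show the retrieve-top-k shape, not how Qdrant or real embeddings score similarity (both function names are illustrative):

```python
def score(query, doc):
    """Toy relevance score: fraction of query words present in the doc."""
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def similarity_search(query, docs, k=3):
    """Return the k documents with the highest overlap score."""
    return sorted(docs, key=lambda doc: score(query, doc), reverse=True)[:k]

docs = [
    "The API rate limit is 60 requests per minute.",
    "Billing is monthly.",
    "Use HTTPS for all requests.",
]
results = similarity_search("what is the api rate limit", docs, k=1)
print(results[0])
# → "The API rate limit is 60 requests per minute."
```

A production system replaces `score` with cosine similarity over embedding vectors, but the retrieve-then-generate pipeline is the same.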
Use `ThreadPoolExecutor` to execute multiple tool calls concurrently:

```python
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor() as executor:
    futures = [executor.submit(tool.run, input) for tool in tools]
    results = [future.result() for future in futures]
```
Use `asyncio` for non-blocking calls:

```python
import asyncio

async def async_call_tool(tool, input):
    return await tool.arun(input)

async def main():
    tasks = [async_call_tool(tool, input) for tool in tools]
    return await asyncio.gather(*tasks)

results = asyncio.run(main())
```
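The asyncio pattern assumes LangChain tools exposing `arun`; it can be tried end-to-end with stub coroutines standing in for the tools (the stub name and delays are illustrative):

```python
import asyncio

async def fake_tool(name, delay):
    # Stand-in for tool.arun: waits, then returns a result
    await asyncio.sleep(delay)
    return f"{name} done"

async def main():
    # Both "tools" run concurrently, so total time ≈ the longest delay
    tasks = [fake_tool("weather", 0.01), fake_tool("policy", 0.01)]
    return await asyncio.gather(*tasks)

results = asyncio.run(main())
print(results)
# → ['weather done', 'policy done']
```

`asyncio.gather` preserves the order of the tasks it was given, regardless of completion order.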
```python
from collections import deque

class FIFOCache:
    """Fixed-size cache that evicts the oldest entry first."""

    def __init__(self, max_size=100):
        self.cache = {}
        self.queue = deque()  # insertion order, oldest on the left
        self.max_size = max_size

    def get(self, key):
        if key in self.cache:
            return self.cache[key]
        return None

    def set(self, key, value):
        if key in self.cache:
            # Update in place; keep the key's original queue position
            self.cache[key] = value
            return
        if len(self.cache) >= self.max_size:
            oldest = self.queue.popleft()
            del self.cache[oldest]
        self.cache[key] = value
        self.queue.append(key)
```
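FIFO eviction ignores access patterns: a hot entry is evicted as soon as it is oldest. For caching repeated queries, an LRU policy usually fits better. A self-contained sketch using `OrderedDict` (the class name is illustrative; Python's `functools.lru_cache` covers the function-memoization case):

```python
from collections import OrderedDict

class LRUCache:
    """Evicts the least recently *used* entry rather than the oldest insert."""

    def __init__(self, max_size=100):
        self.cache = OrderedDict()
        self.max_size = max_size

    def get(self, key):
        if key not in self.cache:
            return None
        self.cache.move_to_end(key)  # mark as recently used
        return self.cache[key]

    def set(self, key, value):
        if key in self.cache:
            self.cache.move_to_end(key)
        elif len(self.cache) >= self.max_size:
            self.cache.popitem(last=False)  # evict least recently used
        self.cache[key] = value

cache = LRUCache(max_size=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")     # touch "a" so "b" becomes least recently used
cache.set("c", 3)  # evicts "b"
print(cache.get("b"))  # → None
print(cache.get("a"))  # → 1
```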
Use the `logging` module to record logs at different levels (DEBUG/INFO/WARNING/ERROR):

```python
import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)
logger.debug("Debug message")
logger.info("Informational message")
```
Use `OpenTelemetry` for distributed tracing:

```python
from opentelemetry import trace

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("Agent Execution"):
    result = agent.run(input)
```
```python
from prometheus_client import start_http_server, Counter

REQUESTS = Counter('agent_requests_total', 'Total number of agent requests')
start_http_server(8000)  # expose metrics at http://localhost:8000/metrics

def run_agent(input):
    REQUESTS.inc()  # count each agent invocation
    return agent.run(input)
```
Copyright notice: This article is original CSDN blog content; please credit the source when reposting.