Agentic applications let a large language model decide the next steps to take in solving a problem. That flexibility is powerful, but the black-box nature of models makes it hard to predict how adjusting one part of an agent will affect the rest. Building production-ready agents requires thorough testing. There are a few approaches to testing agents:
  • Unit tests exercise small, deterministic pieces of an agent in isolation against in-memory mocks, so you can quickly and deterministically assert exact behavior (see the sketch below).
  • Integration tests run the agent with real network calls to confirm that components work together, that credentials and schemas line up, and that latency is acceptable.
Agentic applications tend to lean more heavily on integration tests, because they chain multiple components together and must cope with the flakiness that comes from the non-deterministic nature of LLMs.
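A minimal sketch of the unit-test style described above: the getWeather tool here is illustrative (it mirrors the tools used later on this page) and is exercised directly, with no model and no network call.

import { tool } from "@langchain/core/tools";
import * as z from "zod";

const getWeather = tool(
  async ({ city }: { city: string }) => {
    return `It's 75 degrees and sunny in ${city}.`;
  },
  {
    name: "get_weather",
    description: "Get weather information for a city.",
    schema: z.object({ city: z.string() }),
  }
);

// Unit test: the tool's output is fully deterministic, so we can assert it exactly.
async function testGetWeatherTool() {
  const result = await getWeather.invoke({ city: "SF" });
  expect(result).toBe("It's 75 degrees and sunny in SF.");
}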

Integration tests

Many agent behaviors only emerge when a real LLM is in the loop, such as which tool the agent decides to call, how it formats a response, or whether a prompt change affects the entire execution trajectory. LangChain's agentevals package provides evaluators designed specifically for testing agent trajectories with live models. AgentEvals lets you easily evaluate an agent's trajectory (the exact sequence of messages, including tool calls) either by trajectory matching or by using an LLM judge:

Trajectory match

Hard-code a reference trajectory for a given input and verify the run by step-by-step comparison. This is suited to testing well-defined workflows where you know the expected behavior. Use it when you have specific expectations about which tools should be called and in what order. The approach is deterministic, fast, and cost-effective because it requires no additional LLM calls.

LLM-as-judge

Use an LLM to qualitatively assess the agent's execution trajectory. A "judge" LLM reviews the agent's decisions against criteria in a prompt (which can include a reference trajectory). This is more flexible and can evaluate nuanced aspects such as efficiency and appropriateness, but it requires LLM calls and is less deterministic. Use it when you want to assess the overall quality and reasonableness of an agent's trajectory without strict requirements on which tools are called or in what order.

Install AgentEvals

npm install agentevals @langchain/core
Or, clone the AgentEvals repository directly.
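A minimal example, assuming the repository lives under the langchain-ai GitHub organization:

git clone https://github.com/langchain-ai/agentevals.git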

Trajectory match evaluator

AgentEvals provides the createTrajectoryMatchEvaluator function for matching an agent's trajectory against a reference trajectory. There are four modes to choose from:
  • strict: exact match of the same messages and tool calls in the same order. Use when testing a specific sequence (e.g., a policy lookup before authorization).
  • unordered: the same tool calls may appear in any order. Use when verifying that information was retrieved and order doesn't matter.
  • subset: the agent calls only tools that appear in the reference (no extras). Use to ensure the agent stays within the expected scope.
  • superset: the agent calls at least the tools in the reference (extras allowed). Use to verify that the required minimum actions were performed.
The strict mode ensures the trajectory contains identical messages and tool calls in the same order, though it allows differences in message content. This is useful when you need to enforce a specific sequence of operations, such as requiring a policy lookup before an authorization action.
import { createAgent, tool, HumanMessage, AIMessage, ToolMessage } from "langchain"
import { createTrajectoryMatchEvaluator } from "agentevals";
import * as z from "zod";

const getWeather = tool(
  async ({ city }: { city: string }) => {
    return `It's 75 degrees and sunny in ${city}.`;
  },
  {
    name: "get_weather",
    description: "Get weather information for a city.",
    schema: z.object({
      city: z.string(),
    }),
  }
);

const agent = createAgent({
  model: "openai:gpt-4o",
  tools: [getWeather]
});

const evaluator = createTrajectoryMatchEvaluator({  
  trajectoryMatchMode: "strict",  
});  

async function testWeatherToolCalledStrict() {
  const result = await agent.invoke({
    messages: [new HumanMessage("What's the weather in San Francisco?")]
  });

  const referenceTrajectory = [
    new HumanMessage("What's the weather in San Francisco?"),
    new AIMessage({
      content: "",
      tool_calls: [
        { id: "call_1", name: "get_weather", args: { city: "San Francisco" } }
      ]
    }),
    new ToolMessage({
      content: "It's 75 degrees and sunny in San Francisco.",
      tool_call_id: "call_1"
    }),
    new AIMessage("The weather in San Francisco is 75 degrees and sunny."),
  ];

  const evaluation = await evaluator({
    outputs: result.messages,
    referenceOutputs: referenceTrajectory
  });
  // {
  //     'key': 'trajectory_strict_match',
  //     'score': true,
  //     'comment': null,
  // }
  expect(evaluation.score).toBe(true);
}
The unordered mode allows the same tool calls to occur in any order, which is helpful when you want to verify that particular information was retrieved but don't care about the sequence. For example, an agent might need to check both the weather and the events in a city, but the order doesn't matter.
import { createAgent, tool, HumanMessage, AIMessage, ToolMessage } from "langchain"
import { createTrajectoryMatchEvaluator } from "agentevals";
import * as z from "zod";

const getWeather = tool(
  async ({ city }: { city: string }) => {
    return `It's 75 degrees and sunny in ${city}.`;
  },
  {
    name: "get_weather",
    description: "Get weather information for a city.",
    schema: z.object({ city: z.string() }),
  }
);

const getEvents = tool(
  async ({ city }: { city: string }) => {
    return `Concert at the park in ${city} tonight.`;
  },
  {
    name: "get_events",
    description: "Get events happening in a city.",
    schema: z.object({ city: z.string() }),
  }
);

const agent = createAgent({
  model: "openai:gpt-4o",
  tools: [getWeather, getEvents]
});

const evaluator = createTrajectoryMatchEvaluator({  
  trajectoryMatchMode: "unordered",  
});  

async function testMultipleToolsAnyOrder() {
  const result = await agent.invoke({
    messages: [new HumanMessage("What's happening in SF today?")]
  });

  // The reference lists the tool calls in a different order than the actual execution
  const referenceTrajectory = [
    new HumanMessage("What's happening in SF today?"),
    new AIMessage({
      content: "",
      tool_calls: [
        { id: "call_1", name: "get_events", args: { city: "SF" } },
        { id: "call_2", name: "get_weather", args: { city: "SF" } },
      ]
    }),
    new ToolMessage({
      content: "Concert at the park in SF tonight.",
      tool_call_id: "call_1"
    }),
    new ToolMessage({
      content: "It's 75 degrees and sunny in SF.",
      tool_call_id: "call_2"
    }),
    new AIMessage("Today in SF: 75 degrees and sunny with a concert at the park tonight."),
  ];

  const evaluation = await evaluator({
    outputs: result.messages,
    referenceOutputs: referenceTrajectory,
  });
  // {
  //     'key': 'trajectory_unordered_match',
  //     'score': true,
  // }
  expect(evaluation.score).toBe(true);
}
The superset and subset modes match partial trajectories. The superset mode verifies that the agent called at least the tools in the reference trajectory, allowing extra tool calls. The subset mode ensures the agent called no tools beyond those in the reference (a subset sketch follows the superset example below).
import { createAgent } from "langchain"
import { tool } from "@langchain/core/tools";
import { HumanMessage, AIMessage, ToolMessage } from "@langchain/core/messages";
import { createTrajectoryMatchEvaluator } from "agentevals";
import * as z from "zod";

const getWeather = tool(
  async ({ city }: { city: string }) => {
    return `It's 75 degrees and sunny in ${city}.`;
  },
  {
    name: "get_weather",
    description: "Get weather information for a city.",
    schema: z.object({ city: z.string() }),
  }
);

const getDetailedForecast = tool(
  async ({ city }: { city: string }) => {
    return `Detailed forecast for ${city}: sunny all week.`;
  },
  {
    name: "get_detailed_forecast",
    description: "Get detailed weather forecast for a city.",
    schema: z.object({ city: z.string() }),
  }
);

const agent = createAgent({
  model: "openai:gpt-4o",
  tools: [getWeather, getDetailedForecast]
});

const evaluator = createTrajectoryMatchEvaluator({  
  trajectoryMatchMode: "superset",  
});  

async function testAgentCallsRequiredToolsPlusExtra() {
  const result = await agent.invoke({
    messages: [new HumanMessage("What's the weather in Boston?")]
  });

  // The reference only requires getWeather, but the agent may call extra tools
  const referenceTrajectory = [
    new HumanMessage("What's the weather in Boston?"),
    new AIMessage({
      content: "",
      tool_calls: [
        { id: "call_1", name: "get_weather", args: { city: "Boston" } },
      ]
    }),
    new ToolMessage({
      content: "It's 75 degrees and sunny in Boston.",
      tool_call_id: "call_1"
    }),
    new AIMessage("The weather in Boston is 75 degrees and sunny."),
  ];

  const evaluation = await evaluator({
    outputs: result.messages,
    referenceOutputs: referenceTrajectory,
  });
  // {
  //     'key': 'trajectory_superset_match',
  //     'score': true,
  //     'comment': null,
  // }
  expect(evaluation.score).toBe(true);
}
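The example above exercises superset mode; subset mode differs only in the trajectoryMatchMode setting. A minimal sketch, reusing the agent and the referenceTrajectory from the superset example above (the result key is assumed to mirror the naming pattern of the other modes):

const subsetEvaluator = createTrajectoryMatchEvaluator({
  trajectoryMatchMode: "subset",
});

async function testAgentStaysWithinReference() {
  const result = await agent.invoke({
    messages: [new HumanMessage("What's the weather in Boston?")]
  });

  // Passes only if the agent called no tools beyond those in the reference.
  const evaluation = await subsetEvaluator({
    outputs: result.messages,
    referenceOutputs: referenceTrajectory,
  });
  // expected key: 'trajectory_subset_match' (assumed, by analogy with the other modes)
  expect(evaluation.score).toBe(true);
}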
You can also set the toolArgsMatchMode property and/or toolArgsMatchOverrides to customize how the evaluator decides whether tool calls in the actual trajectory are equal to those in the reference. By default, only calls to the same tool with the same arguments are considered equal. Visit the repository for more details.
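As a rough sketch of these options (the specific option values and the comparator signature shown here are assumptions; consult the repository for the exact API), an evaluator might ignore argument differences globally while applying a custom comparator to a single tool:

const lenientEvaluator = createTrajectoryMatchEvaluator({
  trajectoryMatchMode: "strict",
  // Assumed option value: treat tool calls as equal regardless of their arguments...
  toolArgsMatchMode: "ignore",
  // ...except for get_weather, where a custom comparator is applied (assumed shape).
  toolArgsMatchOverrides: {
    get_weather: (actualArgs: Record<string, any>, referenceArgs: Record<string, any>) =>
      typeof actualArgs.city === "string" &&
      typeof referenceArgs.city === "string" &&
      actualArgs.city.toLowerCase() === referenceArgs.city.toLowerCase(),
  },
});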

LLM-as-judge evaluator

You can also use an LLM to evaluate the agent's execution path with the createTrajectoryLLMAsJudge function. Unlike the trajectory match evaluators, it doesn't require a reference trajectory, though you can provide one if available.
import { createAgent } from "langchain"
import { tool } from "@langchain/core/tools";
import { HumanMessage, AIMessage, ToolMessage } from "@langchain/core/messages";
import { createTrajectoryLLMAsJudge, TRAJECTORY_ACCURACY_PROMPT } from "agentevals";
import * as z from "zod";

const getWeather = tool(
  async ({ city }: { city: string }) => {
    return `It's 75 degrees and sunny in ${city}.`;
  },
  {
    name: "get_weather",
    description: "Get weather information for a city.",
    schema: z.object({ city: z.string() }),
  }
);

const agent = createAgent({
  model: "openai:gpt-4o",
  tools: [getWeather]
});

const evaluator = createTrajectoryLLMAsJudge({  
  model: "openai:o3-mini",  
  prompt: TRAJECTORY_ACCURACY_PROMPT,  
});  

async function testTrajectoryQuality() {
  const result = await agent.invoke({
    messages: [new HumanMessage("What's the weather in Seattle?")]
  });

  const evaluation = await evaluator({
    outputs: result.messages,
  });
  // {
  //     'key': 'trajectory_accuracy',
  //     'score': true,
  //     'comment': 'The provided agent trajectory is reasonable...'
  // }
  expect(evaluation.score).toBe(true);
}
If you have a reference trajectory, you can add an extra variable to the prompt and pass the reference trajectory in. Below, we use the prebuilt TRAJECTORY_ACCURACY_PROMPT_WITH_REFERENCE prompt and wire up the reference_outputs variable:
import { TRAJECTORY_ACCURACY_PROMPT_WITH_REFERENCE } from "agentevals";

const evaluator = createTrajectoryLLMAsJudge({
  model: "openai:o3-mini",
  prompt: TRAJECTORY_ACCURACY_PROMPT_WITH_REFERENCE,
});

const evaluation = await evaluator({
  outputs: result.messages,
  referenceOutputs: referenceTrajectory,
});
For more configurability over how the LLM evaluates trajectories, visit the repository.

LangSmith integration

To track experiments over time, you can log evaluator results to LangSmith, a platform for building production-grade LLM applications that includes tracing, evaluation, and experimentation tooling. First, set up LangSmith by setting the required environment variables:
export LANGSMITH_API_KEY="your_langsmith_api_key"
export LANGSMITH_TRACING="true"
LangSmith offers two main ways to run evaluations: the Vitest/Jest integration and the evaluate function.
import * as ls from "langsmith/vitest";
// import * as ls from "langsmith/jest";

import { HumanMessage, AIMessage, ToolMessage } from "@langchain/core/messages";
import { createTrajectoryLLMAsJudge, TRAJECTORY_ACCURACY_PROMPT } from "agentevals";

// `agent` is the agent created in the examples above.

const trajectoryEvaluator = createTrajectoryLLMAsJudge({
  model: "openai:o3-mini",
  prompt: TRAJECTORY_ACCURACY_PROMPT,
});

ls.describe("trajectory accuracy", () => {
  ls.test("accurate trajectory", {
    inputs: {
      messages: [
        {
          role: "user",
          content: "What is the weather in SF?"
        }
      ]
    },
    referenceOutputs: {
      messages: [
        new HumanMessage("What is the weather in SF?"),
        new AIMessage({
          content: "",
          tool_calls: [
            { id: "call_1", name: "get_weather", args: { city: "SF" } }
          ]
        }),
        new ToolMessage({
          content: "It's 75 degrees and sunny in SF.",
          tool_call_id: "call_1"
        }),
        new AIMessage("The weather in SF is 75 degrees and sunny."),
      ],
    },
  }, async ({ inputs, referenceOutputs }) => {
    const result = await agent.invoke({
      messages: [new HumanMessage("What is the weather in SF?")]
    });

    ls.logOutputs({ messages: result.messages });

    await trajectoryEvaluator({
      inputs,
      outputs: result.messages,
      referenceOutputs,
    });
  });
});
Run the evaluation with your test runner:
vitest run test_trajectory.eval.ts
# or
jest test_trajectory.eval.ts
Alternatively, you can create a dataset in LangSmith and use the evaluate function:
import { evaluate } from "langsmith/evaluation";
import { createTrajectoryLLMAsJudge, TRAJECTORY_ACCURACY_PROMPT } from "agentevals";

const trajectoryEvaluator = createTrajectoryLLMAsJudge({
  model: "openai:o3-mini",
  prompt: TRAJECTORY_ACCURACY_PROMPT,
});

async function runAgent(inputs: any) {
  // `agent` is the agent created in the examples above.
  const result = await agent.invoke(inputs);
  return result.messages;
}

await evaluate(
  runAgent,
  {
    data: "your_dataset_name",
    evaluators: [trajectoryEvaluator],
  }
);
Results are automatically logged to LangSmith.
To learn more about evaluating agents, see the LangSmith docs.
