Update llm-api-benchmark
parent e1fe821544
commit f61999ccb2
@@ -55,8 +55,11 @@
pip install aiohttp pandas tiktoken matplotlib
```

Or, without the optional libraries:

```bash
pip install aiohttp pandas
```

## Configuration

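The two install commands above differ only in the optional libraries, which suggests the script treats `tiktoken` and `matplotlib` as optional while `aiohttp` and `pandas` remain required. A minimal sketch of how such imports might be guarded (the flag names are illustrative and not taken from the benchmark script):

```python
# Sketch: required vs. optional dependencies for the benchmark.
# Flag names are illustrative; the actual script may handle this differently.
import aiohttp        # required: async HTTP requests
import pandas as pd   # required: result tables

try:
    import tiktoken   # optional: accurate token counting
    HAS_TIKTOKEN = True
except ImportError:
    HAS_TIKTOKEN = False

try:
    import matplotlib.pyplot as plt  # optional: plotting the result charts
    HAS_MATPLOTLIB = True
except ImportError:
    HAS_MATPLOTLIB = False
```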
@@ -123,4 +126,20 @@ python benchmark_vllm.py \

* The accuracy of token counting depends on the API server correctly returning `prompt_tokens` and `completion_tokens` in the response's `usage` field.
* The effectiveness of the `min_tokens` parameter depends on whether the target API endpoint supports it; prompt augmentation serves as a fallback strategy.
* If `tiktoken` is not installed, token counts are approximated from character length, which is less accurate for non-English text or code.

## Examples

Use the command below to run the benchmark on an H200 HGX platform:

```bash
python vllm_benchmark.py --token-lengths 10 100 500 1000 2000 5000 10000 20000 30000 --concurrency-levels 1 16 64 128 256 512 result
```

The results are as follows:




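The notes above imply a fallback chain for token counting: trust the counts in the response's `usage` field when they are present, otherwise count locally with `tiktoken`, otherwise approximate from character length. A minimal sketch of that logic, assuming the `cl100k_base` encoding and a rough 4-characters-per-token heuristic (neither is taken from the benchmark script):

```python
# Illustrative token accounting following the notes above; names and the
# characters-per-token heuristic are assumptions, not the script's own code.
try:
    import tiktoken
    _ENCODER = tiktoken.get_encoding("cl100k_base")
except ImportError:
    _ENCODER = None


def estimate_tokens(text: str) -> int:
    """Count tokens with tiktoken if available, else approximate by length."""
    if _ENCODER is not None:
        return len(_ENCODER.encode(text))
    # Crude estimate; noticeably less accurate for non-English text or code.
    return max(1, len(text) // 4)


def extract_token_counts(response: dict, prompt: str, completion: str) -> tuple[int, int]:
    """Prefer prompt_tokens/completion_tokens from the usage field, else estimate."""
    usage = response.get("usage") or {}
    prompt_tokens = usage.get("prompt_tokens") or estimate_tokens(prompt)
    completion_tokens = usage.get("completion_tokens") or estimate_tokens(completion)
    return prompt_tokens, completion_tokens
```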
BIN  llm-api-benchmark/src/avg_tokens_per_second_vs_concurrency.png  (Normal file, 157 KiB)
BIN  llm-api-benchmark/src/success_rate_vs_concurrency.png  (Normal file, 110 KiB)
BIN  llm-api-benchmark/src/total_tokens_per_second_vs_concurrency.png  (Normal file, 122 KiB)
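For reference, charts like the three PNGs added in this commit could be produced from a pandas DataFrame of per-run results with matplotlib; the column names and grouping below are assumptions, not the benchmark script's actual output format.

```python
# Sketch: plot one metric against concurrency, one line per prompt token length.
# Column names ("concurrency", "token_length", metric) are assumed.
import pandas as pd
import matplotlib.pyplot as plt


def plot_metric(df: pd.DataFrame, metric: str, out_path: str) -> None:
    fig, ax = plt.subplots()
    for token_len, group in df.groupby("token_length"):
        group = group.sort_values("concurrency")
        ax.plot(group["concurrency"], group[metric], marker="o", label=f"{token_len} tokens")
    ax.set_xscale("log", base=2)  # log scale suits concurrency levels spanning 1 to 512
    ax.set_xlabel("Concurrency")
    ax.set_ylabel(metric)
    ax.legend()
    fig.savefig(out_path, bbox_inches="tight")
    plt.close(fig)


# e.g. plot_metric(results, "total_tokens_per_second",
#                  "total_tokens_per_second_vs_concurrency.png")
```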