How to Choose an LLM Model?

Xinzhibang (新知榜) Official Account

2023-12-08 16:22:35


Considering open-source traction, speed of deployment, and the domestic (mainland China) environment, entry-level LLM options include ChatGLM-6B, ChatGLM2-6B, Baichuan-13B, and InternLM-Chat-7B. For advanced commercial use, you can take a base model such as GPT/GLM and do your own pre-training and fine-tuning, or use the large-model APIs opened up by platform-level companies. Benchmark results are shown in the table below:

| Dataset / Model | InternLM-Chat-7B | ChatGLM2-6B | Baichuan-7B | LLaMA-7B | Alpaca-7B | Vicuna-7B |
|---|---|---|---|---|---|---|
| C-Eval (Val) | 53.2 | 50.9 | 42.7 | 24.2 | 28.9 | 31.2 |
| MMLU | 50.8 | 46.0 | 41.5 | 35.2* | 39.7 | 47.3 |
| AGIEval | 42.5 | 39.0 | 24.6 | 20.8 | 24.1 | 26.4 |
| CommonSenseQA | 75.2 | 60.0 | 58.8 | 65.0 | 68.7 | 66.7 |
| BUSTM | 74.3 | 55.0 | 51.3 | 48.5 | 48.8 | 62.5 |
| CLUEWSC | 78.6 | 59.8 | 52.8 | 50.3 | 50.3 | 52.2 |
| MATH | 6.4 | 6.6 | 3.0 | 2.8 | 2.2 | 2.8 |
| GSM8K | 34.5 | 29.2 | 9.7 | 10.1 | 6.0 | 15.3 |
| HumanEval | 14.0 | 9.2 | 9.2 | 14.0 | 9.2 | 11.0 |
| RACE (High) | 76.3 | 66.3 | 28.1 | 46.9* | 40.7 | 54.0 |
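As a rough way to compare the models above, the short Python sketch below averages each model's ten benchmark scores and ranks them. The values are transcribed from the table; note that this is a crude summary, since it weighs very different metrics (e.g. MATH pass rates and RACE accuracy) equally.

```python
# Mean score per model across the ten benchmarks listed above.
# Asterisked entries in the table (35.2*, 46.9*) are taken at face value.
scores = {
    "InternLM-Chat-7B": [53.2, 50.8, 42.5, 75.2, 74.3, 78.6, 6.4, 34.5, 14.0, 76.3],
    "ChatGLM2-6B":      [50.9, 46.0, 39.0, 60.0, 55.0, 59.8, 6.6, 29.2, 9.2, 66.3],
    "Baichuan-7B":      [42.7, 41.5, 24.6, 58.8, 51.3, 52.8, 3.0, 9.7, 9.2, 28.1],
    "LLaMA-7B":         [24.2, 35.2, 20.8, 65.0, 48.5, 50.3, 2.8, 10.1, 14.0, 46.9],
    "Alpaca-7B":        [28.9, 39.7, 24.1, 68.7, 48.8, 50.3, 2.2, 6.0, 9.2, 40.7],
    "Vicuna-7B":        [31.2, 47.3, 26.4, 66.7, 62.5, 52.2, 2.8, 15.3, 11.0, 54.0],
}

# Mean per model, then model names sorted best-first by that mean.
means = {m: round(sum(v) / len(v), 2) for m, v in scores.items()}
ranking = sorted(means, key=means.get, reverse=True)
```

On this crude average, InternLM-Chat-7B comes out on top, consistent with the article's observation below.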

Notably, InternLM-Chat-7B, a newly released 7-billion-parameter LLM, posts quite impressive benchmark results, nearly on par with the 13-billion-parameter Baichuan-13B. It remains to be seen how it performs in real applications, along with its high-performance 104B version, 书生·浦语 (InternLM) 104B.

We ran 5-shot evaluations on authoritative Chinese and English LLM benchmarks. The results are as follows:

C-Eval (5-shot):

| Model | Average | STEM | Social Sciences | Humanities | Others |
|---|---|---|---|---|---|
| Chinese-Alpaca-Plus-13B | 38.8 | 35.2 | 45.6 | 40.0 | 38.2 |
| Vicuna-13B | 32.8 | 30.5 | 38.2 | 32.5 | 32.5 |
| Chinese-LLaMA-Plus-13B | 32.1 | 30.3 | 38.0 | 32.9 | 29.1 |
| Ziya-LLaMA-13B-Pretrain | 30.0 | 27.6 | 34.4 | 32.0 | 28.6 |
| LLaMA-13B | 28.5 | 27.0 | 33.6 | 27.7 | 27.6 |
| moss-moon-003-base (16B) | 27.4 | 27.0 | 29.1 | 27.2 | 26.9 |
| Baichuan-7B | 42.8 | 38.2 | 52.0 | 46.2 | 39.3 |
| Baichuan-13B-Base | 52.4 | 45.9 | 63.5 | 57.2 | 49.3 |
| Baichuan-13B-Chat | 51.5 | 43.7 | 64.6 | 56.2 | 49.2 |

MMLU (5-shot):

| Model | Average | STEM | Social Sciences | Humanities | Others |
|---|---|---|---|---|---|
| Vicuna-13B | 52.0 | 40.4 | 60.5 | 49.5 | 58.4 |
| LLaMA-13B | 46.3 | 36.1 | 53.0 | 44.0 | 52.8 |
| Chinese-Alpaca-Plus-13B | 43.9 | 36.9 | 48.9 | 40.5 | 50.5 |
| Ziya-LLaMA-13B-Pretrain | 42.9 | 35.6 | 47.6 | 40.1 | 49.4 |
| Baichuan-7B | 42.3 | 35.6 | 48.9 | 38.4 | 48.1 |
| Chinese-LLaMA-Plus-13B | 39.2 | 33.1 | 42.8 | 37.0 | 44.6 |
| moss-moon-003-base (16B) | 23.6 | 22.4 | 22.8 | 24.2 | 24.4 |
| Baichuan-13B-Base | 51.6 | 41.6 | 60.9 | 47.4 | 58.5 |
| Baichuan-13B-Chat | 52.1 | 40.9 | 60.9 | 48.8 | 59.0 |
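The two result blocks above appear to report C-Eval (Chinese) and MMLU (English) averages, respectively. Under that reading, the sketch below contrasts each model's Chinese and English averages for the models clearly labeled in both blocks (values transcribed from the results above; the C-Eval/MMLU pairing is an interpretation of the layout):

```python
# (C-Eval average, MMLU average) per model, transcribed from the two
# result blocks above; a positive gap means stronger on the Chinese benchmark.
averages = {
    "LLaMA-13B":               (28.5, 46.3),
    "Chinese-LLaMA-Plus-13B":  (32.1, 39.2),
    "Chinese-Alpaca-Plus-13B": (38.8, 43.9),
    "Ziya-LLaMA-13B-Pretrain": (30.0, 42.9),
    "moss-moon-003-base(16B)": (27.4, 23.6),
    "Baichuan-7B":             (42.8, 42.3),
    "Baichuan-13B-Base":       (52.4, 51.6),
    "Baichuan-13B-Chat":       (51.5, 52.1),
}

gap = {m: round(ceval - mmlu, 1) for m, (ceval, mmlu) in averages.items()}
best_chinese = max(averages, key=lambda m: averages[m][0])  # top C-Eval model
best_english = max(averages, key=lambda m: averages[m][1])  # top MMLU model
```

The Chinese-enhanced LLaMA variants still show a negative gap (stronger in English), while the Baichuan models score roughly evenly on both benchmarks.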

Page URL: https://www.xinzhibang.net/article_detail-22301.html


Keywords

LLM models ChatGLM Baichuan InternLM Vicuna GPT/GLM

