A Brief Analysis of DAPO
Paper: https://arxiv.org/abs/2503.14476
Reference experiment: DAPO + vLLM v1 + VeRL (VOC performance comparison)
Motivation
There is no complete open framework for training an R1-style 32B model with GRPO.
Goals:
[*] Reduce the length of incorrect samples (token-level loss)
[*] Make training more stable (overlong filtering)
[*] Avoid collapse of the generation entropy (clip-higher)
[*] Improve training efficiency (dynamic sampling; see the sketch after this list)
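As a rough illustration of the dynamic-sampling point above: prompts whose G sampled answers are all correct or all wrong yield zero group-normalized advantage and hence no gradient signal, so such groups are skipped and resampled. Below is a minimal Python sketch under the assumption of a binary 0/1 rule-based reward; `generate_group` and `reward` are hypothetical helpers, not part of the paper or VeRL.

```python
def fill_batch(prompts, generate_group, reward, batch_size):
    """Dynamic-sampling sketch: keep only prompt groups with mixed outcomes."""
    batch = []
    for q, a in prompts:
        outputs = generate_group(q)                  # G responses for prompt q
        rewards = [reward(o, a) for o in outputs]    # binary 0/1 per response
        # Skip all-correct or all-wrong groups: their group-normalized
        # advantage is zero everywhere, so they contribute no gradient.
        if 0 < sum(rewards) < len(rewards):
            batch.append((q, outputs, rewards))
        if len(batch) == batch_size:
            break
    return batch
```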
Method
The overall optimization objective is as follows:
\[\mathcal{J}(\theta) = \mathbb{E}_{(q,a)\sim \mathcal{D},\,\{o_i\}_{i=1}^G\sim \pi_{old}(\cdot|q)} \left[\frac{1}{\sum_{i=1}^G|o_i|}\sum_{i=1}^G\sum_{t=1}^{|o_i|}\min\Bigl(r_{i,t}(\theta)A_{i,t},\ \mathrm{clip}\bigl(r_{i,t}(\theta),\,1-\epsilon_{low},\,1+\epsilon_{high}\bigr)A_{i,t}\Bigr)\right]\]
\[\text{s.t.}\quad 0 < \bigl|\{\,o_i \mid \mathrm{is\_equivalent}(a, o_i)\,\}\bigr| < G\]
Here $r_{i,t}(\theta)=\frac{\pi_\theta(o_{i,t}\mid q,o_{i,<t})}{\pi_{old}(o_{i,t}\mid q,o_{i,<t})}$ is the token-level importance ratio, $A_{i,t}$ is the group-normalized advantage, and the constraint (dynamic sampling) requires each sampled group to contain both correct and incorrect answers.
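A minimal PyTorch sketch of how this token-level clipped objective could be computed for one group of responses; the tensor shapes, argument names, and default $\epsilon$ values (0.2 / 0.28, as reported in the paper) are assumptions for illustration, not a fixed VeRL API.

```python
import torch

def dapo_token_level_loss(logp_new, logp_old, advantages, response_mask,
                          eps_low=0.2, eps_high=0.28):
    """Token-level DAPO loss for one group.

    Assumed shapes: [G, T] (G responses, T = max response length).
    `advantages` is the group-normalized advantage broadcast to every token
    of its response; `response_mask` is 1.0 for valid tokens, 0.0 for padding.
    """
    ratio = torch.exp(logp_new - logp_old)                       # r_{i,t}(theta)
    clipped = torch.clamp(ratio, 1.0 - eps_low, 1.0 + eps_high)  # asymmetric clip
    per_token = torch.minimum(ratio * advantages, clipped * advantages)
    # Token-level normalization: divide by the total number of response
    # tokens in the group (sum_i |o_i|), not per sample.
    loss = -(per_token * response_mask).sum() / response_mask.sum().clamp(min=1.0)
    return loss
```

Note that the token-level normalization is what makes overly long, incorrect responses contribute proportionally to their length, in line with the first goal listed above.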