RLHFuse: Efficient RLHF Training for Large Language Models with Inter- and Intra-Stage Fusion.
Yinmin Zhong, Zili Zhang, Bingyang Wu, Shengyu Liu, Yukun Chen, Changyi Wan, Hanpeng Hu, Lei Xia, Ranchen Ming, Yibo Zhu, Xin Jin.
In Preprint.
MegaScale: Scaling Large Language Model Training to More Than 10,000 GPUs.
Ziheng Jiang, Haibin Lin, Yinmin Zhong, Qi Huang, Yangrui Chen, Zhi Zhang, Yanghua Peng, Xiang Li, Cong Xie, Shibiao Nong, Yulu Jia, Sun He, Hongmin Chen, Zhihao Bai, Qi Hou, Shipeng Yan, Ding Zhou, Yiyao Sheng, Zhuo Jiang, Haohan Xu, Haoran Wei, Zhang Zhang, Pengfei Nie, Leqi Zou, Sida Zhao, Liang Xiang, Zherui Liu, Zhe Li, Xiaoying Jia, Jianxi Ye, Xin Jin, Xin Liu.
In NSDI 2024.
AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving.
Zhuohan Li, Lianmin Zheng, Yinmin Zhong, Vincent Liu, Ying Sheng, Xin Jin, Yanping Huang, Zhifeng Chen, Hao Zhang, Joseph E. Gonzalez, Ion Stoica.
In OSDI 2023.