木叶吟
Distributed Training
ASYRA: Automating Graph Scheduling for Communication-Computation Overlap in Efficient Model Parallelism
Scaling large models requires complex multi-dimensional (n-D) parallelism, yet this paradigm suffers from severe communication bubbles …
Lei Zhang, Zhisheng YE
PDF
AMSP: Super-Scaling LLM Training via Advanced Model States Partitioning
Large Language Models (LLMs) have demonstrated impressive performance across various downstream tasks. When training these models, …
Qiaoling Chen, Qinghao Hu, Zhisheng YE, Guoteng Wang, Peng Sun, Yonggang Wen, Tianwei Zhang
Preprint