Improve implementations of Dion2 #21
base: main
Conversation
dion/dion2.py (outdated):
```
Update momentum with gradient and compute the input to orthogonalization.
More specifically, it does the following steps:
- updates the momentum with gradient
- computes the top-k indices to determine submatrices
```
Maybe it's worth mentioning here or in the README that the top-k is based on the L1 norm, if I'm understanding the implementation correctly.
Yes, you are right. I have added more detailed comments.
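For readers skimming this thread, here is a minimal sketch of the top-k-by-L1-norm selection being discussed. The tensor `M` and the `fraction` hyperparameter are illustrative stand-ins, not the actual dion2.py code, which operates on stacked and possibly sharded momentum tensors.

```python
import torch

# Illustrative stand-ins; the real dion2.py works on stacked/sharded tensors.
M = torch.randn(1024, 512)   # momentum matrix
fraction = 0.25              # fraction of rows to orthogonalize
k = max(1, int(fraction * M.shape[0]))

# L1 norm of each row (sum of absolute values over the columns).
row_norms = M.norm(p=1, dim=1)

# Keep the k rows with the largest L1 norm; this submatrix is what gets
# passed on to the Newton-Schulz orthogonalization step.
topk_indices = torch.topk(row_norms, k).indices
submatrix = M[topk_indices]
```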
```python
if not state:
    state["momentum"] = torch.zeros_like(param)
    if algo == "adamw":
        state["variance"] = torch.zeros_like(param)
```
If param is bfloat16, momentum and variance will be bfloat16 too, right? Is that OK, or do you want fp32 here?
I see. I think master weights are usually kept in fp32. This _get_or_initialize_state mirrors muon.py, so I would keep it as is for consistency.
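For context on the dtype question: `torch.zeros_like` inherits the input's dtype, so bf16 parameters do give bf16 optimizer state here. A small sketch (not PR code) showing both the default behavior and the explicit fp32 override that would be the alternative:

```python
import torch

param = torch.randn(4, 4, dtype=torch.bfloat16)

# zeros_like inherits the parameter's dtype, so the state is bf16 here.
momentum = torch.zeros_like(param)
assert momentum.dtype == torch.bfloat16

# If fp32 state were wanted instead, the dtype can be overridden explicitly.
momentum_fp32 = torch.zeros_like(param, dtype=torch.float32)
assert momentum_fp32.dtype == torch.float32
```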
```python
)
M_work.mul_(ef_decay)
# Compute L1 norm along norm_dim (sum of absolute values)
slice_norms = M_stacked.norm(p=1, dim=norm_dim)
```
Comment: logging the norms would be interesting for tuning the fraction/k hyperparameter, to see whether the distribution is heavy-tailed or flat.
I agree that it would be nice to log that stat. I think users can easily add that feature themselves, so I would just leave it as is.
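For users who do want this, here is a sketch of the kind of logging the reviewer suggests. The stand-in `slice_norms` tensor and the stat names are illustrative, and `wandb.log` is just one possible sink:

```python
import torch

# Stand-in for the slice_norms computed in the diff above.
slice_norms = torch.rand(1024)

k_top = max(1, slice_norms.numel() // 10)
stats = {
    "slice_norm/min": slice_norms.min().item(),
    "slice_norm/median": slice_norms.median().item(),
    "slice_norm/max": slice_norms.max().item(),
    # Fraction of total norm mass in the top decile: values near 1 suggest
    # a heavy tail, i.e. a small fraction/k may already capture most mass.
    "slice_norm/top10pct_mass": (
        slice_norms.topk(k_top).values.sum() / slice_norms.sum()
    ).item(),
}
print(stats)  # or wandb.log(stats) inside a training loop
```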
In this pull request, a more efficient Dion2 implementation is provided.

The previous implementation had somewhat complicated logic for sharding the Newton-Schulz computation across the distributed mesh. I won't get into the details too much, but roughly speaking, it kept the momentum states "batch"-sharded across different iterates (i.e., each full momentum matrix was handled entirely by its owner device). This had the benefit of doing single-matrix math efficiently. However, it led to "asymmetric" optimizer states across devices, which in turn caused checkpointing issues (as pointed out by #18).

Hence, this new implementation takes a different approach. Let me describe the main differences for the FSDP case below.

- Like how muon.py handles the optimizer states, we simply keep the optimizer states sharded the same way the parameters are.
- We choose select_dim carefully according to how the matrices are sharded: in the case of row-sharding we select along rows, and in the case of column-sharding we select along columns. (This way, we don't have to communicate across devices to compute the row/column norms; see the sketch below.)

This leads not only to compute savings (because only the submatrix gets orthonormalized) but also to communication savings (the all-to-alls operate on the submatrices). Moreover, the optimizer states now no longer cause any issues with checkpointing.
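A minimal sketch of the dimension-selection idea described above, under the assumption of a 2D weight sharded along one dimension. The `choose_select_dim` helper is hypothetical, not the PR's actual code:

```python
import torch

def choose_select_dim(shard_dim: int) -> int:
    # Hypothetical helper: align the top-k selection dimension with the
    # sharding dimension, so every device owns whole rows (or columns)
    # and can compute their L1 norms without any communication.
    return 0 if shard_dim == 0 else 1  # row-sharded -> rows, else columns

# Example: this device holds rows 0..255 of a row-sharded momentum matrix.
local_shard = torch.randn(256, 512)
select_dim = choose_select_dim(shard_dim=0)
norm_dim = 1 - select_dim                          # reduce over the other dim
local_norms = local_shard.norm(p=1, dim=norm_dim)  # purely local compute
```

Because each device sees only complete rows (or columns), the subsequent all-to-alls only need to move the selected submatrices, which is where the communication savings come from.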
Other than the new Dion2 implementation, I have also made some changes to README.md and train.py. For train.py, I made it a bit easier for users to configure the distributed mesh, and also adjusted the naming convention for the wandb runs.