
add deepspeed example #610

Draft · kuizhiqing wants to merge 1 commit into master

Conversation

kuizhiqing (Member)

No description provided.


[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign terrytangyuan for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

template:
  spec:
    containers:
      - image: registry.cn-beijing.aliyuncs.com/acs/deepspeed:hello-deepspeed
Member

It would be useful to check in the Dockerfile as well.

kuizhiqing (Member Author)

You're right. This demo is intended to be used to test the upcoming feature; I'll manage it later.
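
For context, the quoted image line belongs to the worker pod template of an MPIJob manifest. A minimal sketch of how such a manifest might look with the kubeflow.org/v2beta1 API is below; the job name, replica count, and slotsPerWorker value are illustrative assumptions, not values taken from this PR.

apiVersion: kubeflow.org/v2beta1
kind: MPIJob
metadata:
  name: deepspeed-hello           # hypothetical name
spec:
  slotsPerWorker: 1               # assumed value
  mpiReplicaSpecs:
    Worker:
      replicas: 2                 # assumed value
      template:
        spec:
          containers:
            - name: deepspeed
              image: registry.cn-beijing.aliyuncs.com/acs/deepspeed:hello-deepspeed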

vsoch (Contributor) commented Apr 29, 2024

@kuizhiqing do you have a suggested place to run / test a setup like this? I've been trying to get just 3 nodes each with a single GPU on Google Cloud and I never get the allocation. I was able to get a single node with one GPU last week but it felt like luck. Will it work without GPU?

vsoch (Contributor) commented Apr 29, 2024

It looks like it defaults to CPU, but it's not clear to me how communication is set up. Is it just using a shared volume at /workspace? If that's the case, what's the point of an operator that supports MPI?

kuizhiqing (Member Author)

Hi @vsoch, actually I've tested it in the production environment of my affiliation. I have no idea where to run it on an open platform.

The workspace path of the example contains materials from https://github.com/microsoft/DeepSpeedExamples/tree/master/training/HelloDeepSpeed; it does not involve the communication process. The communication is set up by pdsh with the hostfile provided by the mpi-operator.

For the CPU version, I'm afraid I cannot provide more information since I have not worked with it.
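
To make the pdsh/hostfile point concrete: the mpi-operator provides a generated hostfile to the launcher pod (commonly mounted at /etc/mpi/hostfile), and the DeepSpeed launcher can consume it via its --hostfile and --launcher flags. Continuing the worker sketch above, here is a hedged sketch of a matching Launcher replica spec; the hostfile path and training script name are assumptions rather than values from this PR.

    Launcher:
      replicas: 1
      template:
        spec:
          containers:
            - name: deepspeed
              image: registry.cn-beijing.aliyuncs.com/acs/deepspeed:hello-deepspeed
              command:
                - deepspeed
                - --hostfile=/etc/mpi/hostfile    # path assumed; use whatever hostfile the operator mounts
                - --launcher=pdsh                 # pdsh launcher, as mentioned in the comment above
                - train.py                        # placeholder for the HelloDeepSpeed training script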

vsoch (Contributor) commented May 1, 2024

> https://github.com/microsoft/DeepSpeedExamples/tree/master/training/HelloDeepSpeed; it does not involve the communication process. The communication is set up by pdsh with the hostfile provided by the mpi-operator.

There would need to be some variant of an MPI run in there, and the communication (with the MPI operator) would happen via ssh bootstrap and then targeting that hostfile (which, if I remember correctly, is an env var). I'm trying to understand where that logic is here.
