Add GPU usage in helloworld ipynb
bichengying committed Jan 19, 2021
1 parent f88efcd commit c25967b
Showing 1 changed file with 53 additions and 16 deletions.
69 changes: 53 additions & 16 deletions examples/interactive_bluefog_helloworld.ipynb
@@ -142,10 +142,10 @@
"name": "stdout",
"output_type": "stream",
"text": [
"[stdout:0] Hello, I am 2 among 4 processes\n",
"[stdout:1] Hello, I am 3 among 4 processes\n",
"[stdout:2] Hello, I am 1 among 4 processes\n",
"[stdout:3] Hello, I am 0 among 4 processes\n"
"[stdout:0] Hello, I am 0 among 4 processes\n",
"[stdout:1] Hello, I am 1 among 4 processes\n",
"[stdout:2] Hello, I am 3 among 4 processes\n",
"[stdout:3] Hello, I am 2 among 4 processes\n"
]
}
],
@@ -182,15 +182,15 @@
"output_type": "stream",
"text": [
"==========================================\n",
"[stdout:0] Hello, I am 2 among 4 processes\n",
"[stdout:1] Hello, I am 3 among 4 processes\n",
"[stdout:2] Hello, I am 1 among 4 processes\n",
"[stdout:3] Hello, I am 0 among 4 processes\n",
"[stdout:0] Hello, I am 0 among 4 processes\n",
"[stdout:1] Hello, I am 1 among 4 processes\n",
"[stdout:2] Hello, I am 3 among 4 processes\n",
"[stdout:3] Hello, I am 2 among 4 processes\n",
"==========================================\n",
"[stdout:0] Hello, I am 2 among 4 processes\n",
"[stdout:1] Hello, I am 3 among 4 processes\n",
"[stdout:2] Hello, I am 1 among 4 processes\n",
"[stdout:3] Hello, I am 0 among 4 processes\n",
"[stdout:0] Hello, I am 0 among 4 processes\n",
"[stdout:1] Hello, I am 1 among 4 processes\n",
"[stdout:2] Hello, I am 3 among 4 processes\n",
"[stdout:3] Hello, I am 2 among 4 processes\n",
"==========================================\n"
]
}
@@ -257,7 +257,7 @@
{
"data": {
"text/plain": [
"tensor([2.])"
"tensor([0.])"
]
},
"execution_count": 6,
@@ -292,7 +292,7 @@
{
"data": {
"text/plain": [
"[tensor([3.]), tensor([1.])]"
"[tensor([1.]), tensor([3.])]"
]
},
"execution_count": 7,
@@ -318,7 +318,7 @@
{
"data": {
"text/plain": [
"[tensor([2.]), tensor([3.]), tensor([1.]), tensor([0.])]"
"[tensor([0.]), tensor([1.]), tensor([3.]), tensor([2.])]"
]
},
"execution_count": 8,
@@ -390,6 +390,43 @@
"print(\"I received seed as value: \", seed)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## (Optional) Working with multiple GPUs\n",
"\n",
"If you have multiple GPUs, it is typical to pin each worker to one device.\n",
"Then the tensor within each worker can be placed on a different GPU, as shown in the following code.\n",
"\n",
"*Note: If you want to pin multiple workers to one device, you cannot use NCCL as the communication backend.*"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[stdout:0] tensor([0.], device='cuda:0')\n",
"[stdout:1] tensor([1.], device='cuda:1')\n",
"[stdout:2] tensor([3.], device='cuda:3')\n",
"[stdout:3] tensor([2.], device='cuda:2')\n"
]
}
],
"source": [
"%%px\n",
"import torch\n",
"if torch.cuda.is_available():\n",
" torch.cuda.set_device(bf.local_rank())\n",
" x = torch.FloatTensor([bf.rank()]).cuda()\n",
" print(x)"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -754,7 +791,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.5"
"version": "3.7.9"
},
"toc": {
"base_numbering": 1,
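The device-pinning pattern in the new GPU cell can be sketched outside the notebook as plain Python. `pick_device` is a hypothetical helper, not part of Bluefog; it only illustrates the local-rank-to-GPU mapping that `torch.cuda.set_device(bf.local_rank())` relies on, plus the shared-GPU case the note warns about.

```python
def pick_device(local_rank: int, num_gpus: int) -> str:
    """Map a worker's local rank to a device string, round-robin over GPUs.

    Mirrors the notebook cell's idea of one device per worker; with no
    GPUs available the worker simply stays on the CPU.
    """
    if num_gpus == 0:
        return "cpu"
    return f"cuda:{local_rank % num_gpus}"

# Four workers on a 4-GPU machine each get their own device:
print([pick_device(r, 4) for r in range(4)])
# -> ['cuda:0', 'cuda:1', 'cuda:2', 'cuda:3']

# Two workers on a 1-GPU machine both map to cuda:0 -- the shared-device
# case where NCCL cannot be used as the communication backend:
print([pick_device(r, 1) for r in range(2)])
# -> ['cuda:0', 'cuda:0']
```

In the notebook itself this mapping is implicit: `bf.local_rank()` is already a per-machine index, so `set_device` pins each worker to a distinct GPU as long as there are at least as many GPUs as local workers.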