
gpumemusage [error for multi-gpu] #1

Closed
foolwood opened this issue Dec 6, 2017 · 1 comment
foolwood commented Dec 6, 2017

Thanks for sharing your code.
I noticed that the README says "Multiple GPU training is supported".
But I think this function only supports single-GPU machines. I added some code for my personal usage, but I think there are better solutions.

import math
import subprocess

def gpumemusage():
    # Scrape used/total memory (MiB) for every GPU from the nvidia-smi table;
    # decode() is needed because check_output returns bytes on Python 3.
    gpu_mem = subprocess.check_output(
        "nvidia-smi | grep MiB | cut -f 3 -d '|'", shell=True).decode().\
        replace(' ', '').replace('\n', '').replace('i', '').replace('MB', 'MB/').replace('//', '/')
    gpu_mem = gpu_mem[:-1]
    gpu_info = [float(a[:-2]) for a in gpu_mem.split('/')]
    curr = sum(gpu_info[0::2])  # used memory, summed over all GPUs
    tot = sum(gpu_info[1::2])   # total memory, summed over all GPUs
    util = "%1.2f" % (100 * curr / tot) + '%'
    cmem = str(int(math.ceil(curr / 1024.))) + 'GB'
    gmem = str(int(math.ceil(tot / 1024.))) + 'GB'
    return util + '--' + cmem + '/' + gmem
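A possibly more robust variant (an editor's sketch, not from the thread): `nvidia-smi` has a machine-readable query mode, `--query-gpu=... --format=csv`, which avoids scraping the human-oriented table and grepping for `MiB`. The helper names below (`parse_gpu_memory`, `summarize`, `query_gpu_memory`) are hypothetical; the parsing is split from the subprocess call so it can be tested without a GPU present.

```python
import math
import subprocess

def parse_gpu_memory(csv_text):
    """Parse 'used, total' CSV lines (one per GPU) into (used, total) pairs in MiB."""
    pairs = []
    for line in csv_text.strip().splitlines():
        used, total = (float(x) for x in line.split(','))
        pairs.append((used, total))
    return pairs

def summarize(pairs):
    """Format aggregate usage the same way as gpumemusage(): 'UU.UU%--XGB/YGB'."""
    curr = sum(u for u, _ in pairs)   # used memory, summed over all GPUs
    tot = sum(t for _, t in pairs)    # total memory, summed over all GPUs
    util = "%1.2f" % (100 * curr / tot) + '%'
    cmem = str(int(math.ceil(curr / 1024.))) + 'GB'
    gmem = str(int(math.ceil(tot / 1024.))) + 'GB'
    return util + '--' + cmem + '/' + gmem

def query_gpu_memory():
    """Query nvidia-smi's CSV mode and return the formatted summary string."""
    out = subprocess.check_output(
        ['nvidia-smi', '--query-gpu=memory.used,memory.total',
         '--format=csv,noheader,nounits']).decode()
    return summarize(parse_gpu_memory(out))
```

For example, two GPUs reporting `1234, 16280` and `2000, 16280` would summarize to `9.93%--4GB/32GB`.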
@fitsumreda
Contributor

Thanks @foolwood!
That function should now work for any number of GPUs.
