Large memory consumption 0.4 #766

Open
plooney opened this issue Nov 22, 2017 · 63 comments
Assignees
Labels
core:backend theme:performance Performance, scalability, large data sizes, slowness, etc.

Comments

@plooney

plooney commented Nov 22, 2017

I have just upgraded to TensorFlow 1.4 and TensorBoard 0.4. I had TensorBoard running for 20 hours, and it was consuming 10GB of memory. I shut it down and restarted it; its memory consumption is increasing steadily at ~10MB per second.

@lucasb-eyer

I observe the same behaviour, especially when there's a lot of data, be it many small experiments or a few long-running ones. I've had 64GB-memory systems start to swap after a while when opening 2-3 such TensorBoard instances.

@jart jart self-assigned this Nov 29, 2017
@zaxliu

zaxliu commented Dec 16, 2017

Same observation here.

@weberxie

What's the progress of this issue now? @jart can you elaborate on the reason for this problem?

@jart
Contributor

jart commented Dec 28, 2017

Any chance you guys could post tensorboard --inspect --logdir mylogdir?

@zaxliu

zaxliu commented Dec 29, 2017

Hi @jart , here's the shell output of tensorboard --inspect --logdir mylogdir for one of my experiments.
sample.txt

@mattphillipskitware

Checking in, I'm also getting this in spades and I have to kill tensorboard at least once a day to keep it from grinding everything to a halt.

@mattphillipskitware

tensorboard_log.txt

Here's one, this only got to about 2GB RAM before I shut it down. Other instances have gotten to 10GB as others have reported.

@Sylvus

Sylvus commented Jan 8, 2018

Same here (currently at 6GB). Is there a flag to disable loading the graph for example?

@igorgad

igorgad commented Feb 4, 2018

Hi, I am also observing this behavior. Is this fixed in the 1.5 version?

@mingdachen

Any updates? Or anybody found a workaround for this problem?

@plooney
Author

plooney commented Feb 20, 2018

In TensorBoard 1.5 the issue is still there. Memory consumption is increasing steadily at ~10MB per second. Here is the output of

tensorboard --inspect --logdir mylogdir

sample.txt

@jason-morgan

I am having this same issue. The model is a simple LSTM that uses a pre-trained 600k x 300 dimension word embedding. I have 16 model versions, and TensorBoard quickly consumes all 64GB of memory on my machine. I am running TensorBoard 1.5. Here is the inspection log.

inspection.txt

@Sylvus

Sylvus commented Mar 8, 2018

What helped in my case was never saving the graph. Make sure you do not add the graph anywhere, and also pass graph=None to the FileWriter. Not a real solution, but maybe it helps.
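A minimal sketch of this workaround, assuming the TF 1.x summary API (the logdir, tag, and value here are placeholders):

import tensorflow as tf

# Instead of tf.summary.FileWriter(logdir, graph=sess.graph), pass graph=None
# so that no GraphDef event is written into the log file.
writer = tf.summary.FileWriter("mylogdir", graph=None)

summary = tf.Summary(value=[tf.Summary.Value(tag="loss", simple_value=0.5)])
writer.add_summary(summary, global_step=0)
writer.close()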

@po0ya

po0ya commented Mar 14, 2018

+1

@cancan101

any news on this?

@jart
Contributor

jart commented May 4, 2018

We're currently working on having a DB storage layer that puts information like the graphdef on disk rather than in memory. We'd be happy to accept a contribution that, for example, adds a flag to not load the GraphDef into memory, or perhaps saves a pointer to its file in memory to load it on demand, since the GraphDef is usually the very first thing inside an event log file.
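As a rough illustration of the load-on-demand idea (not TensorBoard's actual code; the helper name is made up, and summary_iterator lives under tf.compat.v1 in TF 2.x):

import tensorflow as tf

def load_graph_def_on_demand(event_file_path):
    # The GraphDef is usually the very first event in the file, so this loop
    # typically returns after reading only one or two records.
    for event in tf.compat.v1.train.summary_iterator(event_file_path):
        if event.HasField("graph_def"):
            return event.graph_def  # serialized GraphDef bytes
    return None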

@inoryy

inoryy commented May 7, 2018

Unfortunately, passing graph=None to the FileWriter didn't solve the issue; memory still runs out quite quickly even with just a few models.

@sharvil

sharvil commented Aug 10, 2018

I'm also experiencing this issue with TensorBoard 1.9. Evicting GraphDef from memory might be an okay short-term solution but it's a fixed size, so it should only save a constant amount of memory. The problem for me is memory growth over time.

@jart is someone actively looking into this issue? It's fine if the answer is no, just want to understand where things are. Also, is there any additional information the community can provide to help diagnose what's going on?

@rom1504

rom1504 commented Aug 29, 2018

I'm seeing the same thing with TensorBoard 1.10.

@mzhaoshuai

mzhaoshuai commented Dec 16, 2018

I'm seeing the same thing with TensorBoard 1.12. TensorBoard occupies more and more memory as time goes by. I run it on a server, and it eventually occupied up to 60GB of memory...

Later I used a workaround:

# sleep time, hours
sleep_t=6
times=0

# while loop
while true
do
	tensorboard --logdir=${logdir} --port=${port} &
	last_pid=$!
	sleep ${sleep_t}h
	kill -9 ${last_pid}
	times=`expr ${times} + 1`
	echo "Restart tensorboard ${times} times."
done

Kill and restart TensorBoard periodically...

@Z-Zheng

Z-Zheng commented Jan 10, 2019

> I'm seeing the same thing with TensorBoard 1.12. TensorBoard occupies more and more memory as time goes by. I run it on a server, and it eventually occupied up to 60GB of memory...

I'm also hitting this problem, with 70+GB :(

@rex-yue-wu

Guess what? I encountered the same issue; the only difference here is that I ran TensorBoard on a server with 512GB of memory, and yeah, TensorBoard ate all of it!!!

@rom1504

rom1504 commented Feb 15, 2019 via email

@nfelt
Collaborator

nfelt commented Dec 19, 2019

Hi folks - we're trying to get to the bottom of this, and we're sorry it's been such a longstanding problem.

For those of you on the thread who have experienced this, it would really help if you can comment with the following information:

  • TensorBoard version
  • TensorFlow version
  • Python version via python -c "import sys; print(sys.version)"
  • OS and OS version (e.g. Ubuntu 16.04)
  • How you're installing and running TensorBoard (e.g. pip, conda, docker container, built from source, etc.)
  • Any details about the logdir you're running against - is it static? growing as you're running TensorBoard? mounted remotely? rough size? etc. - or ideally a copy of it like @paulguerrero above, thank you
  • Memory usage details:
    • Size at startup
    • Rate of increase - size after 1 hour? 24 hours?
    • How exactly you're checking the memory usage
  • Whether you have the TensorBoard tab open during the whole period, and if so, whether you have the auto-refresh behavior enabled (it is on by default)

@OscarVanL

OscarVanL commented Dec 20, 2019

Hi, I was about to open a new issue for this but found you're already working on it. In my case, Tensorboard used 12GB of RAM and 20% of my CPU resources. I'll provide the details you asked for.

  1. tensorboard==2.0.2

  2. tensorflow-gpu==2.0.0, tensorflow-estimator==2.0.1

  3. 3.7.5 (default, Oct 31 2019, 15:18:51) [MSC v.1916 64 bit (AMD64)]

  4. Windows 10 Education (same as Enterprise) Version 1909

  5. pip install within my conda environment

  6. It's static, I performed the tuning on a different machine, then copied my logdir hparam-tuning to my machine, then opened it with tensorboard --logdir C:\Users\Oscar\PycharmProjects\________\hparam-tuning on my own PC to view the results.
    I have attached the logdir
    hparam-tuning.zip
    It contains 255 tuning iterations and is 26.7MB unzipped.

  • Size at startup: Immediately blows up to 12GB usage within 10 seconds of starting the program. Stops expanding after ~30 seconds, but RAM usage stays high and CPU usage stays around 20%.
  • Rate of increase: Haven't been running it for longer than 5 minutes, I can't see how it could grow much more though... lol
  • How I'm checking memory usage: Task Manager in Windows.
  • I do have the tab open; the auto-refresh behaviour is every 30s.

Additional:

Diagnostics output
--- check: autoidentify
INFO: diagnose_tensorboard.py version d515ab103e2b1cfcea2b096187741a0eeb8822ef

--- check: general
INFO: sys.version_info: sys.version_info(major=3, minor=7, micro=5, releaselevel='final', serial=0)
INFO: os.name: nt
INFO: os.uname(): N/A
INFO: sys.getwindowsversion(): sys.getwindowsversion(major=10, minor=0, build=18363, platform=2, service_pack='')

--- check: package_management
INFO: has conda-meta: True
INFO: $VIRTUAL_ENV: None

--- check: installed_packages
WARNING: Could not generate requirement for distribution -ensorflow-gpu 2.0.0 (c:\users\oscar\anaconda3\envs\_________\lib\site-packages): Parse error at "'-ensorfl'": Expected W:(abcd...)
INFO: installed: tensorboard==2.0.2
INFO: installed: tensorflow-gpu==2.0.0
INFO: installed: tensorflow-estimator==2.0.1

--- check: tensorboard_python_version
INFO: tensorboard.version.VERSION: '2.0.2'

--- check: tensorflow_python_version
2019-12-20 09:58:34.346839: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
INFO: tensorflow.__version__: '2.0.0'
INFO: tensorflow.__git_version__: 'v2.0.0-rc2-26-g64c3d382ca'

--- check: tensorboard_binary_path
INFO: Could not find files for the given pattern(s).
INFO: which tensorboard: None

--- check: addrinfos
socket.has_ipv6 = True
socket.AF_UNSPEC = <AddressFamily.AF_UNSPEC: 0>
socket.SOCK_STREAM = <SocketKind.SOCK_STREAM: 1>
socket.AI_ADDRCONFIG = <AddressInfo.AI_ADDRCONFIG: 1024>
socket.AI_PASSIVE = <AddressInfo.AI_PASSIVE: 1>
Loopback flags: <AddressInfo.AI_ADDRCONFIG: 1024>
Loopback infos: [(<AddressFamily.AF_INET6: 23>, <SocketKind.SOCK_STREAM: 1>, 0, '', ('::1', 0, 0, 0)), (<AddressFamily.AF_INET: 2>, <SocketKind.SOCK_STREAM: 1>, 0, '', ('127.0.0.1', 0))]
Wildcard flags: <AddressInfo.AI_PASSIVE: 1>
Wildcard infos: [(<AddressFamily.AF_INET6: 23>, <SocketKind.SOCK_STREAM: 1>, 0, '', ('::', 0, 0, 0)), (<AddressFamily.AF_INET: 2>, <SocketKind.SOCK_STREAM: 1>, 0, '', ('0.0.0.0', 0))]

--- check: readable_fqdn
INFO: socket.getfqdn(): 'Oscar-XPS-Laptop.lan'

--- check: stat_tensorboardinfo
INFO: directory: C:\Users\Oscar\AppData\Local\Temp\.tensorboard-info
INFO: os.stat(...): os.stat_result(st_mode=16895, st_ino=9570149209514493, st_dev=2585985196, st_nlink=1, st_uid=0, st_gid=0, st_size=0, st_atime=1576835418, st_mtime=1576835418, st_ctime=1576764046)
INFO: mode: 0o40777

--- check: source_trees_without_genfiles
INFO: tensorboard_roots (1): ['C:\\Users\\Oscar\\Anaconda3\\envs\\____________\\lib\\site-packages']; bad_roots (0): []

--- check: full_pip_freeze
WARNING: Could not generate requirement for distribution -ensorflow-gpu 2.0.0 (c:\users\oscar\anaconda3\envs\________________\lib\site-packages): Parse error at "'-ensorfl'": Expected W:(abcd...)
INFO: pip freeze --all:
absl-py==0.8.1
astor==0.8.1
attrs==19.3.0
backcall==0.1.0
bleach==3.1.0
cachetools==3.1.1
certifi==2019.11.28
chardet==3.0.4
colorama==0.4.1
cycler==0.10.0
decorator==4.4.1
defusedxml==0.6.0
entrypoints==0.3
gast==0.2.2
google-auth==1.8.2
google-auth-oauthlib==0.4.1
google-pasta==0.1.8
grpcio==1.25.0
h5py==2.10.0
idna==2.8
importlib-metadata==1.2.0
ipykernel==5.1.3
ipython==7.10.1
ipython-genutils==0.2.0
ipywidgets==7.5.1
jedi==0.15.1
Jinja2==2.10.3
jsonschema==3.2.0
jupyter==1.0.0
jupyter-client==5.3.4
jupyter-console==5.2.0
jupyter-core==4.6.1
Keras-Applications==1.0.8
Keras-Preprocessing==1.1.0
kiwisolver==1.1.0
Markdown==3.1.1
MarkupSafe==1.1.1
matplotlib==3.1.2
mistune==0.8.4
more-itertools==7.2.0
nbconvert==5.6.1
nbformat==4.4.0
notebook==6.0.2
numpy==1.17.4
oauthlib==3.1.0
opt-einsum==3.1.0
pandas==0.25.3
pandocfilters==1.4.2
parso==0.5.1
pickleshare==0.7.5
pip==19.3.1
prometheus-client==0.7.1
prompt-toolkit==3.0.2
protobuf==3.11.1
pyasn1==0.4.8
pyasn1-modules==0.2.7
Pygments==2.5.2
pyparsing==2.4.5
pyrsistent==0.15.6
python-dateutil==2.8.1
pytz==2019.3
pywin32==223
pywinpty==0.5.5
pyzmq==18.1.0
qtconsole==4.6.0
requests==2.22.0
requests-oauthlib==1.3.0
rsa==4.0
Send2Trash==1.5.0
setuptools==42.0.2.post20191203
six==1.13.0
tensorboard==2.0.2
tensorflow-estimator==2.0.1
tensorflow-gpu==2.0.0
termcolor==1.1.0
terminado==0.8.3
testpath==0.4.4
tornado==6.0.3
traitlets==4.3.3
urllib3==1.25.7
wcwidth==0.1.7
webencodings==0.5.1
Werkzeug==0.16.0
wheel==0.33.6
widgetsnbextension==3.5.1
wincertstore==0.2
wrapt==1.11.2
zipp==0.6.0

Next steps

No action items identified. Please copy ALL of the above output,
including the lines containing only backticks, into your GitHub issue
or comment. Be sure to redact any sensitive information.

@bileschi
Collaborator

Assigning this to @nfelt who is actively looking into this. Please reassign or unassign as appropriate.

@nfelt
Collaborator

nfelt commented Dec 21, 2019

Quick update everyone - we think we've narrowed this down to a memory leak in tf.io.gfile.isdir() which we've reported in TensorFlow as tensorflow/tensorflow#35292.

In terms of a fix, it appears that by pure coincidence a change landed in TensorFlow yesterday that replaces the leaking code, so in our testing we're seeing at least a much lower rate of memory leakage when running TensorBoard against today's tf-nightly==2.1.0.dev20191220.

If you're still seeing the issue, please try running TensorBoard in an environment with that version of TensorFlow (the actual version of TensorFlow you use for generating the log data should not affect this) and let us know if it seems to resolve the issue or not.

We will see what we can do to work around the issue so that we can get a fix to you sooner than the next TF release that would include yesterday's change (2.2). If possible, we'll fix this on the TB side so that those who can't easily update TF to the most recent version still have access to a fix.
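If you want to try it, the test amounts to installing that nightly in the environment that runs TensorBoard (not the one that generated the logs), roughly:

pip install tf-nightly==2.1.0.dev20191220
tensorboard --logdir mylogdir

(mylogdir is a placeholder for your own log directory.)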

@zaxliu

zaxliu commented Dec 21, 2019

@nfelt hi, this is good news, thanks. Curious though: are you planning an independent TensorBoard release with this issue fixed?

@adizhol

adizhol commented Dec 25, 2019

Hi all,

I'm running TensorBoard without TensorFlow, and I no longer experience the huge memory consumption.

@perone

perone commented Jan 7, 2020

I tried the tf-nightly==2.1.0.dev20191220 version, but without success; the same problem remains.

  • TensorBoard version
    2.2.0a20200106
  • TensorFlow version
    2.1.0-dev20191220
  • Python version via python -c "import sys; print(sys.version)"
    3.7.3 (default, Mar 27 2019, 22:11:17) [GCC 7.3.0]
  • OS and OS version (e.g. Ubuntu 16.04)
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.3 LTS
Release:        18.04
Codename:       bionic
  • How you're installing and running TensorBoard (e.g. pip, conda, docker container, built from source, etc.)
    Installation: pip install tf-nightly==2.1.0.dev20191220
    Running: tensorboard --port 9090 --bind_all --logdir ./
  • Any details about the logdir you're running against - is it static? growing as you're running TensorBoard? mounted remotely? rough size? etc. - or ideally a copy of it like @paulguerrero above, thank you
    It doesn't matter if it is static or not; the same problem happens either way. The size of my logdir folder is 34MB, and resident memory reaches 16GB of RAM upon starting TensorBoard.
  • Memory usage details:
  • Size at startup: rapidly increases to 16GB of RAM (within about 5 seconds)
  • Rate of increase - size after 1 hour? 24 hours? Immediately.
  • How exactly you're checking the memory usage: using RES in htop.
  • Whether you have the TensorBoard tab open during the whole period, and if so, whether you have the auto-refresh behavior enabled (it is on by default)
    It doesn't matter if the browser is open or not.

I noticed that if I add a lot of files inside of the logdir folder, TensorBoard throws an exception:

TensorBoard 2.2.0a20200106 at http://anonymized:9090/ (Press CTRL+C to quit)
Exception in thread Reloader:
Traceback (most recent call last):
  File "/env/lib/python3.7/threading.py", line 917, in _bootstrap_inner
  File "/env/lib/python3.7/threading.py", line 865, in run
  File "/env/lib/python3.7/site-packages/tensorboard/backend/application.py", line 660, in _reload
  File "/env/lib/python3.7/site-packages/tensorboard/backend/event_processing/plugin_event_multiplexer.py", line 202, in AddRunsFromDirectory
  File "/env/lib/python3.7/site-packages/tensorboard/backend/event_processing/io_wrapper.py", line 213, in <genexpr>
  File "/env/lib/python3.7/site-packages/tensorboard/backend/event_processing/io_wrapper.py", line 164, in ListRecursivelyViaWalking
  File "/env/lib/python3.7/site-packages/tensorflow_core/python/lib/io/file_io.py", line 676, in walk_v2
  File "/env/lib/python3.7/site-packages/tensorflow_core/python/lib/io/file_io.py", line 606, in list_directory
  File "/env/lib/python3.7/site-packages/tensorflow_core/python/lib/io/file_io.py", line 635, in list_directory_v2
tensorflow.python.framework.errors_impl.ResourceExhaustedError: ./; Too many open files

The memory issue also happens without a lot of small files inside the logdir, but since this recursive process opens a lot of files, it might be one of the root causes of the quick memory growth that happens at startup (as related to tf.io.gfile.isdir()). If the fix really is in tf-nightly==2.1.0.dev20191220, then there might be another leak hidden somewhere in these directory/file handling routines.

@perone

perone commented Jan 7, 2020

Just to add another comment, if I run:

pip uninstall tf-nightly

As suggested by @adizhol, TensorBoard then works fine and takes only 310MB of resident memory, which really seems to solve the issue. So this definitely seems to be caused by TensorFlow code. It gives the warning:

TensorFlow installation not found - running with reduced feature set

This seems to limit the available features in TensorBoard.

@perone

perone commented Jan 7, 2020

Just adding more info, I think I found the culprit.

If you just use (on tensorboard/compat/__init__.py):

from tensorboard.compat.tensorflow_stub import pywrap_tensorflow

Forcing it to use the pywrap_tensorflow from TensorBoard itself makes the memory issue disappear. However, if you let it import tensorflow.python.pywrap_tensorflow, which seems to be a SWIG extension, the memory leak returns. That explains why removing TensorFlow solves the issue. It seems that one method in TensorFlow's pywrap_tensorflow is leaking a lot of memory.

It changes the memory usage from 16GB to around ~500MB.
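If you want to try this local edit, one illustrative way to locate the file to change:

python -c "import tensorboard.compat; print(tensorboard.compat.__file__)"

This prints the path of the installed tensorboard/compat/__init__.py.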

@mufathurrohman

I have 16 GB of RAM and also suffer from this memory leak problem. After running TensorBoard from the command prompt (Windows 10), it shows:

Unable to get first event timestamp for run W11-LSTM64-FC16L0D0-Run_0

W11-LSTM64-FC16L0D0-Run_0 is the name I assigned to my architecture, and I ran it approximately one month ago, roughly 200 models ago. I did shut down my PC and restart the whole process, so this is not because I kept the PC running.

There are lots of lines that show the same "Unable to get first event timestamp" message.

After I moved all the old logs away, the lines stopped appearing and the memory leak problem seems to have disappeared as well. I don't really know what happened, but I guess @ismael-elatifi's guess is correct.

@Foxigen

Foxigen commented Jul 25, 2020

One practical, easy thing I tried last night: I right-clicked on the C drive and chose Properties, then chose Disk Cleanup and selected Temporary files to be deleted from my computer.
By doing this, I freed up nearly 10 GB of my RAM, which was being used by TensorBoard.

wchargin added a commit to wchargin/tensorboard-data-server-go that referenced this issue Sep 9, 2020
This reads a single event file from start to end, parsing the frame of
each TFRecord, but not inspecting the payload at all. Unfortunately, the
TFRecord reading is already ~3× slower than RustBoard’s entire pipeline,
and ~13× slower than RustBoard’s TFRecord reading. :-(

Discussion here is on a 248 MiB event file, the `egraph_edge_cgan_003`
run from a user-supplied log directory:
<tensorflow/tensorboard#766 (comment)>

The effect of the buffer size is a bit strange. As expected, buffering
definitely helps (~3× improvement with default buffer size), and the
improvements taper off as the buffer size increases: 4 KiB and 1 MiB are
about the same. But then in the 4 MiB to 8 MiB range we start to see
sharp improvements: 1 MiB to 4 MiB is no change, but 4 MiB to 8 MiB is
25% faster. The improvements continue even up to 128 or 256 MiB on a
file that’s 248 MiB long. Compare to RustBoard, which sees similar
effects at low buffer sizes but no extra improvements for very large
buffers. (This all running with hot filesystem cache.)

Buffer size sweep:
<https://gist.github.com/wchargin/b73b5af3ef36b88e4e1aacf9a2453ea6>

CPU profiling shows that about 35% of time is spent in `make([]byte)` in
`ExtendTo`, which seems unfortunate, since due to the actual access
patterns we barely overallocate there (overallocating only 12 bytes per
record), so it’s not obvious how to avoid that cost. Another 50% of
total time is spent in `runtime.mallocgc`. And 20% of total time (not
necessarily disjoint) is spent in the `result := TFRecord{...}` final
allocation in `ReadRecord`, which is surprising to me since it just has
two small fields (a slice header and a `uint32`) and they’ve already
been computed above. (Inlining effects?)

Checksum validation is fast when enabled; runtime increases by ~10%.
@wchargin
Contributor

wchargin commented Jan 22, 2021

The next version of TensorBoard loads much faster (~100× throughput) and
should have fewer memory leaks. If you are interested in testing it, and
you are not using Windows, please see #4784.

TL;DR: Update to latest tb-nightly and pass --load_fast=true.
We’d love to hear any feedback.
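For example (mylogdir is a placeholder for your own log directory):

pip install --upgrade tb-nightly
tensorboard --logdir mylogdir --load_fast=true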

@brando90

I've been experiencing OOM errors and SIGKILLs when using PyTorch and TensorBoard. Unfortunately, I cannot guarantee that TensorBoard is causing the error, but I thought it would be good to mention it and give a reference:

https://stackoverflow.com/questions/67560357/can-writing-to-tensorboard-cause-memory-ram-oom-issues-especially-in-pytorch?noredirect=1#comment119422773_67560357

@stephanwlee
Contributor

@brando90 Sorry, but that Stack Overflow question is not related to this repository at all. For the summary writer for PyTorch, please seek help from https://github.com/lanpa/tensorboardX.

@bradiex

bradiex commented May 21, 2021

I'm also seeing a steady increase in memory usage of about 10MiB per hour which seems to go on forever.

I'm using the tensorflow 2.5.0 docker image (which uses the new data loader), and logs are stored on an S3 MinIO service.
Command:
docker run --rm -it --entrypoint tensorboard tensorflow/tensorflow:2.5.0 --logdir s3://<logs> --bind_all

However, memory keeps increasing even when there's no extra logs added. I guess something is not freed properly in this reload loop: https://github.com/tensorflow/tensorboard/blob/56be365/tensorboard/backend/event_processing/data_ingester.py#L93

@guodongfan-pts

Is there a solution to this problem now?

@stephanwlee
Contributor

For people who can use it, we are recommending https://pypi.org/project/tensorboard-data-server/, which should make log ingestion faster and keep memory from blowing up.
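A rough sketch of trying it (assuming a recent TensorBoard picks up the data server when --load_fast is enabled; flag defaults vary by version, and mylogdir is a placeholder):

pip install --upgrade tensorboard tensorboard-data-server
tensorboard --logdir mylogdir --load_fast=true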

@pindinagesh pindinagesh pinned this issue Nov 9, 2021
@pindinagesh pindinagesh unpinned this issue Nov 9, 2021
@ArfaSaif

ArfaSaif commented Nov 18, 2021

Hi, TensorBoard is eating up a lot of RAM (0.5GB/s at startup, and the system becomes unusable after a few minutes) when log_dir contains event files with training batch images saved per epoch... we think it's related to this bug. Has this issue been resolved?

@bileschi
Collaborator

@ArfaSaif are you able to use tensorboard-data-server?

@mweiden

mweiden commented Mar 25, 2022

@stephanwlee Is there documentation for tensorboard data server? I don't see any on the pypi page, nor in the project subdirectory.

@bileschi
Collaborator

Probably DEVELOPMENT.md in that same directory and rustboard.md are your best bets.

@DanielWicz

DanielWicz commented Jan 12, 2023

Here TensorBoard can eat up to tens of GB of RAM using the latest version.
