
EfficientZero high memory consumption / keeps increasing after replay buffer is full #26

Open · rPortelas opened this issue May 23, 2022 · 7 comments

rPortelas commented May 23, 2022

I am currently experimenting with scaling EfficientZero to high-data learning regimes.

As a first step, I am running experiments on Atari, with a replay buffer of 1M environment steps.
While doing this I observed that RAM consumption keeps increasing long after the replay buffer has reached its maximum size.

Here are TensorBoard plots on Breakout for a 600k-training-step run (20M environment steps / 80M environment frames):

[screenshot: breakout_high_mem]

I run these experiments on cluster nodes with 4 Tesla V100 GPUs, 40 CPUs and 187 GB of RAM.

As you can see, although the maximum replay buffer size ("total_node_num") is reached after 30k training steps, RAM usage (in %) keeps increasing until around 250k steps, from 80% to 85%.

Ideally, I would also like to increase the batch size, but the problem seems to get worse in that setting:

[screenshot: breakout_mem]

The orange curves are from the same Breakout experiment, but with a batch size of 512 (instead of 256) and a smaller replay buffer (0.1M). Here the maximum replay buffer size is reached at 4k training steps, yet memory keeps increasing until 100k+ steps.
I understand that a bigger batch means more RAM, since more data is processed when updating / doing MCTS, but that does not explain why memory keeps increasing after the replay buffer fills up.

Any ideas on what causes this high RAM consumption, and how we could mitigate it?
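
For reference, a minimal sketch of how the numbers behind the RAM (%) curve can be logged outside TensorBoard (assumes psutil is installed; this is not part of the EfficientZero codebase):

    import os
    import time

    import psutil  # third-party; assumed installed


    def log_memory(tag=""):
        """Log RSS of this process plus system-wide RAM usage (the % curve in the plots above)."""
        rss_gib = psutil.Process(os.getpid()).memory_info().rss / 2**30
        sys_pct = psutil.virtual_memory().percent
        print(f"[{tag}] process RSS: {rss_gib:.2f} GiB | system RAM: {sys_pct:.1f}%")


    if __name__ == "__main__":
        # Standalone monitor: print memory usage once a minute while training runs.
        while True:
            log_memory("monitor")
            time.sleep(60)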

Run details

Here are the parameters used for the first experiment I described (pink curves):

    Param: {'action_space_size': 4, 'num_actors': 2, 'do_consistency': True, 'use_value_prefix': True,
    'off_correction': True, 'gray_scale': False, 'auto_td_steps_ratio': 0.3, 'episode_life': True,
    'change_temperature': True, 'init_zero': True, 'state_norm': False, 'clip_reward': True, 'random_start': True,
    'cvt_string': True, 'image_based': True, 'max_moves': 27000, 'test_max_moves': 3000, 'history_length': 400,
    'num_simulations': 50, 'discount': 0.988053892081, 'max_grad_norm': 5, 'test_interval': 10000,
    'test_episodes': 32, 'value_delta_max': 0.01, 'root_dirichlet_alpha': 0.3, 'root_exploration_fraction': 0.25,
    'pb_c_base': 19652, 'pb_c_init': 1.25, 'training_steps': 900000, 'last_steps': 20000,
    'checkpoint_interval': 100, 'target_model_interval': 200, 'save_ckpt_interval': 100000, 'log_interval': 1000,
    'vis_interval': 1000, 'start_transitions': 2000, 'total_transitions': 30000000, 'transition_num': 1.0,
    'batch_size': 256, 'num_unroll_steps': 5, 'td_steps': 5, 'frame_skip': 4, 'stacked_observations': 4,
    'lstm_hidden_size': 512, 'lstm_horizon_len': 5, 'reward_loss_coeff': 1, 'value_loss_coeff': 0.25,
    'policy_loss_coeff': 1, 'consistency_coeff': 2, 'device': 'cuda', 'debug': False, 'seed': 0,
    'value_support': <core.config.DiscreteSupport object at 0x152644d101d0>,
    'reward_support': <core.config.DiscreteSupport object at 0x152644d10210>, 'use_adam': False,
    'weight_decay': 0.0001, 'momentum': 0.9, 'lr_warm_up': 0.01, 'lr_warm_step': 1000, 'lr_init': 0.2,
    'lr_decay_rate': 0.1, 'lr_decay_steps': 900000, 'mini_infer_size': 64, 'priority_prob_alpha': 0.6,
    'priority_prob_beta': 0.4, 'prioritized_replay_eps': 1e-06, 'image_channel': 3, 'proj_hid': 1024,
    'proj_out': 1024, 'pred_hid': 512, 'pred_out': 1024, 'bn_mt': 0.1, 'blocks': 1, 'channels': 64,
    'reduced_channels_reward': 16, 'reduced_channels_value': 16, 'reduced_channels_policy': 16,
    'resnet_fc_reward_layers': [32], 'resnet_fc_value_layers': [32], 'resnet_fc_policy_layers': [32],
    'downsample': True, 'env_name': 'BreakoutNoFrameskip-v4', 'obs_shape': (12, 96, 96), 'case': 'atari',
    'amp_type': 'torch_amp', 'use_priority': True, 'use_max_priority': True, 'cpu_actor': 14, 'gpu_actor': 20,
    'p_mcts_num': 128, 'use_root_value': False, 'auto_td_steps': 270000.0, 'use_augmentation': True,
    'augmentation': ['shift', 'intensity'], 'revisit_policy_search_rate': 0.99}

lezhang-thu commented May 31, 2022

In lines 240-241 of core/reanalyze_worker.py, try changing them to:

            trained_steps = ray.get(self.storage.get_counter.remote())
            target_weights = None

and change lines 252-253 to:

            if new_model_index > self.last_model_index:
                self.last_model_index = new_model_index
                target_weights = ray.get(self.storage.get_target_weights.remote())

Also, try explicitly calling gc.collect() periodically; a combined sketch of both changes follows below.
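
Putting the two pieces together, a condensed sketch of the resulting loop (simplified, not the exact reanalyze_worker.py code; the target_model_interval-based index and the surrounding loop structure are illustrative assumptions):

    import gc

    import ray


    class ReanalyzeWorkerSketch:
        """Illustrative loop: only fetch target weights when the target model index advances."""

        def __init__(self, storage, config):
            self.storage = storage          # Ray actor handle to the shared storage
            self.config = config
            self.last_model_index = -1

        def run(self):
            step = 0
            while True:
                trained_steps = ray.get(self.storage.get_counter.remote())
                target_weights = None       # only populated when a newer target model exists
                new_model_index = trained_steps // self.config.target_model_interval
                if new_model_index > self.last_model_index:
                    self.last_model_index = new_model_index
                    # Pull the weights only on an actual model update instead of
                    # on every iteration, so stale copies are not kept alive.
                    target_weights = ray.get(self.storage.get_target_weights.remote())

                # ... build reanalyze batches here; downstream code can skip reloading
                # the target model whenever target_weights is None ...

                step += 1
                if step % 100 == 0:
                    gc.collect()            # periodic explicit collection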

@lezhang-thu

By the way, in train/mean_score of your posted plot, 100K on the x-axis is not Atari 100K, but rather Atari 10M (i.e., 10M interactions with the env)?
Is that understanding right?

@rPortelas (Author)

The x-axis corresponds to training steps (not environment steps). My experiments were scheduled to run 900k training steps while performing 30M environment steps (I stopped them at around 600k). This means that for each 100k training steps on the x-axis, roughly 30/9 = 3.33M environment steps are processed.
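
In other words (a quick sanity check of the conversion, using the scheduled totals from my config):

    training_steps = 900_000      # scheduled training steps ('training_steps')
    env_steps = 30_000_000        # scheduled environment steps ('total_transitions')

    env_steps_per_training_step = env_steps / training_steps   # ~33.3
    print(100_000 * env_steps_per_training_step)               # ~3.33M env steps per 100k training steps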

Is that clearer?

@rPortelas (Author)

Thanks for your suggestions :).

I already tried adding periodic gc.collect(), which did not solve the issue. As for your other suggested modification, could you tell me a bit more about it? I see that it makes the code slightly more efficient, since it loads the target weights only when needed.
Did you solve this RAM issue on your side by modifying these lines?


lezhang-thu commented Jun 1, 2022

I did not try experiments at the scale you discussed.

But the change to the code related to target_weights makes train.sh runnable,
and decreasing gpu_actor really helps with RAM usage.

Lastly, in line 17 of storage.py, try changing it to self.queue = Queue(maxsize=size, actor_options={"num_cpus": 3}), or a value larger than 3. The bottleneck seems to be that the ray Queue is not fast enough to receive and send the data, rather than whether gpu_actor is 20 or some number less than the default 20.
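
For reference, a minimal sketch of what that change looks like (an illustrative stand-in for the queue wrapper in core/storage.py, not a verbatim copy; the actor_options argument of ray.util.queue.Queue is the relevant part):

    import ray
    from ray.util.queue import Queue


    class BatchQueueSketch:
        """Illustrative stand-in for the queue wrapper in core/storage.py."""

        def __init__(self, size=20):
            # Reserve dedicated CPUs for the underlying queue actor so it can keep up
            # with the batch producers/consumers; 3 is a starting point, tune upward.
            self.queue = Queue(maxsize=size, actor_options={"num_cpus": 3})

        def push(self, batch):
            if not self.queue.full():
                self.queue.put(batch)

        def pop(self):
            if not self.queue.empty():
                return self.queue.get()
            return None


    if __name__ == "__main__":
        ray.init()
        q = BatchQueueSketch(size=20)
        q.push({"dummy": 1})
        print(q.pop())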

@rPortelas (Author)

> But the change to the code related to target_weights makes train.sh runnable.

Hmm, interesting. Could it just be that you never actually load the target weights in your experiments, because they are shorter than the target model checkpoint interval (meaning you never enter the if statement at line 252)?

@lezhang-thu

No. It is just that this change saves RAM, so train.sh runs to the end without breaking.
