Hypergradient Descent #6

Closed
akaniklaus opened this issue Jan 3, 2019 · 5 comments


akaniklaus commented Jan 3, 2019

Thank you for sharing this. Would it be possible for you to also integrate the Hypergradient Descent technique into your AdamW implementation? It reduces the need to hand-tune the initial learning rate: https://github.com/gbaydin/hypergradient-descent

                if state['step'] > 1:
                    prev_bias_correction1 = 1 - beta1 ** (state['step'] - 1)
                    prev_bias_correction2 = 1 - beta2 ** (state['step'] - 1)
                    # Hypergradient for Adam:
                    h = torch.dot(grad.view(-1), torch.div(exp_avg, exp_avg_sq.sqrt().add_(group['eps'])).view(-1)) * math.sqrt(prev_bias_correction2) / prev_bias_correction1
                    # Hypergradient descent of the learning rate:
                    group['lr'] += group['hypergrad_lr'] * h
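
To make the quoted Adam fragment concrete, here is a minimal sketch of the same hypergradient idea applied to plain SGD (SGD-HD in the linked paper). The function name, the toy quadratic objective and the step counts below are my own illustrative assumptions, not code from the linked repo:

import torch

# Minimal SGD with hypergradient descent on the learning rate (SGD-HD):
# the learning rate itself is updated by gradient descent, using the dot
# product of the current gradient and the previous gradient.
def sgd_hd(theta, loss_fn, lr=0.01, hyper_lr=1e-3, steps=100):
    theta = theta.clone().requires_grad_(True)
    prev_grad = None
    for _ in range(steps):
        loss = loss_fn(theta)
        grad, = torch.autograd.grad(loss, theta)
        if prev_grad is not None:
            # The hypergradient of the loss w.r.t. lr is -grad . prev_grad,
            # so gradient descent on lr adds hyper_lr * grad . prev_grad
            lr += hyper_lr * torch.dot(grad.view(-1), prev_grad.view(-1)).item()
        theta = (theta - lr * grad).detach().requires_grad_(True)
        prev_grad = grad.detach()
    return theta, lr

# Toy usage on a quadratic (objective and values are illustrative only):
theta, final_lr = sgd_hd(torch.randn(10), lambda t: (t ** 2).sum())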

I have also read a lot of criticism of AMSGrad and haven't yet been able to get any improvement with that variant. Could I hear your thoughts on it? FYI, two other techniques I am currently experimenting with are Padam and QHAdam.

akaniklaus (Author) commented

I did the combination; I can share it upon request.


uyekt commented Jan 9, 2019

> I did the combination; I can share it upon request.

I would be interested :-)


akaniklaus commented Jan 9, 2019

@uyekt This code combines techniques from three papers (including the one from this repository).
The continuous partial parameter controls how closely the optimizer behaves like SGD (0.0) or Adam (1.0).

https://arxiv.org/abs/1806.06763
https://arxiv.org/abs/1711.05101

In my opinion it would be great if the partial parameter could be updated on the fly, similar to what has been done with the learning rate here, as there is also prior research suggesting that switching from Adam to SGD during training improves generalization: https://arxiv.org/abs/1712.07628

import math

import torch
from torch.optim.optimizer import Optimizer


class AdamComb(Optimizer):
    """Adam with decoupled weight decay (AdamW), a partially adaptive
    denominator (Padam) and hypergradient descent on the learning rate."""

    def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-5,
                 weight_decay=1e-5, hypergrad=1e-5, partial=0.5):
        defaults = dict(lr=lr, betas=betas, eps=eps, weight_decay=weight_decay,
                        hypergrad=hypergrad, partial=partial)
        super().__init__(params, defaults)

    def step(self, closure=None):
        loss = None if closure is None else closure()
        for group in self.param_groups:
            for p in group['params']:
                if p.grad is None:
                    continue
                grad = p.grad.data
                state = self.state[p]
                # State initialization
                if len(state) == 0:
                    state['step'] = 0
                    state['exp_avg'] = torch.zeros_like(p.data)     # first moment
                    state['exp_avg_sq'] = torch.zeros_like(p.data)  # second moment
                exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
                beta1, beta2 = group['betas']
                state['step'] += 1
                # Hypergradient descent on the learning rate (uses the moments
                # from the previous step, hence the previous bias corrections)
                if group['hypergrad'] > 0 and state['step'] > 1:
                    prev_bias_correction1 = 1 - beta1 ** (state['step'] - 1)
                    prev_bias_correction2 = 1 - beta2 ** (state['step'] - 1)
                    h = (torch.dot(grad.view(-1),
                                   torch.div(exp_avg, exp_avg_sq.sqrt().add_(group['eps'])).view(-1))
                         * math.sqrt(prev_bias_correction2) / prev_bias_correction1)
                    group['lr'] += group['hypergrad'] * h
                # Standard Adam moment updates
                exp_avg.mul_(beta1).add_(1 - beta1, grad)
                exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad)
                denom = exp_avg_sq.sqrt().add_(group['eps'])
                bias_correction1 = 1 - beta1 ** state['step']
                bias_correction2 = 1 - beta2 ** state['step']
                step_size = group['lr'] * math.sqrt(bias_correction2) / bias_correction1
                if group['weight_decay'] != 0:
                    # Decoupled weight decay (AdamW); Padam's partially adaptive
                    # denominator is denom ** partial
                    decayed_weights = torch.mul(p.data, group['weight_decay'])
                    p.data.addcdiv_(-step_size, exp_avg, denom ** group['partial'])
                    p.data.sub_(decayed_weights)
                else:
                    p.data.addcdiv_(-step_size, exp_avg, denom ** group['partial'])
        return loss
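
For completeness, a minimal usage sketch of the class above, assuming a PyTorch version contemporary with this thread (the positional add_/addcmul_/addcdiv_ signatures were deprecated later); the model, data and hyperparameter values are placeholders, not part of the thread:

import torch
import torch.nn as nn

# Placeholder model and data; any module and loss work the same way.
model = nn.Linear(20, 1)
criterion = nn.MSELoss()
optimizer = AdamComb(model.parameters(), lr=1e-3, weight_decay=1e-5,
                     hypergrad=1e-5, partial=0.5)

x, y = torch.randn(64, 20), torch.randn(64, 1)
for step in range(100):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    # The hypergradient term adapts the learning rate online; it can be inspected via
    current_lr = optimizer.param_groups[0]['lr']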


mpyrozhok commented Jan 9, 2019

@akaniklaus @uyekt I've done some experiments with hypergradient and for me it behaved pretty much like a simple linear lr decay with a small hypergrad_lr value. With large hypergrad_lr values it just added redundant stochasticity to the training process. Was your experience different, and if so, what problem/data/batch size/network architecture did you try it with?


akaniklaus commented Jan 10, 2019

@mpyrozhok Hello, it is quite normal that it behaves similarly to a decaying learning-rate schedule. The main benefit, I believe, is that you do not have to re-optimize the max and min learning rates of such a schedule each time you change something, e.g. batch size, architecture, etc. This matters a lot when you are doing hyperparameter tuning, as one would otherwise need to re-optimize both the initial and the minimum learning rate of a scheduler for each configuration, so updating the learning rate online saves a lot of resources.

Furthermore, I generally make a few trial runs first to decide on an initial learning rate: I start with a low learning rate, let it increase, and use the peak point where it starts to drop again as the initial learning rate of my actual run.

As for hypergrad_lr, the paper suggests 1e-5 and 1e-4. I found that the default value in the code (1e-8) is too low and makes the adaptation too slow, whereas a higher value sometimes causes divergence away from the minimum (and occasionally even a negative learning rate!).
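
One simple guard against the negative learning rates mentioned above (my own assumption, not something from the paper or this repo) is to clamp the adapted learning rate after each step:

# Inside the training loop, right after optimizer.step():
for group in optimizer.param_groups:
    # float() handles lr having become a 0-dim tensor after the hypergradient
    # update; the 1e-8 floor is an arbitrary choice that only prevents lr < 0.
    group['lr'] = max(float(group['lr']), 1e-8)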

@mpyrozhok Do you have any idea how we could change the code to use the gradient of the validation loss instead? Maybe warm restarts could also be implemented; what would be the best way to do that automatically?
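
On the warm-restart question, one straightforward option, sketched under my own assumptions rather than taken from this repo, is to periodically reset the hypergradient-adapted learning rate back to a base value, in the spirit of SGDR (https://arxiv.org/abs/1608.03983):

def maybe_warm_restart(optimizer, epoch, base_lr=1e-3, restart_every=50):
    # Every `restart_every` epochs, reset the (possibly hypergradient-adapted)
    # learning rate back to its base value; names and values are placeholders.
    if epoch > 0 and epoch % restart_every == 0:
        for group in optimizer.param_groups:
            group['lr'] = base_lr

# Called once per epoch in the training loop, e.g.:
#     maybe_warm_restart(optimizer, epoch)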
