Thanks for releasing this - when I saw the paper I really hoped it would be open sourced.
It seems the biggest gains can come from using the models pretrained on your internal JFT-300M dataset. Are there plans to release weights from the models pretrained on this dataset?
Cheers!
Unfortunately, this is an internal dataset and we are not allowed to release these weights. The ImageNet-21k models were released in anticipation of this, and we are actively looking at public datasets larger than ImageNet-21k so that we can release even better pre-trained models.
Happy to hear recommendations for large, public datasets we could try!
Surely there must be some subset you could train on and release without any licensing hassle. Or maybe the permissions your team needs are only a few emails away, if you and everyone here feel that releasing the model would be a great service to humanity.