This repository was archived by the owner on Jan 15, 2024. It is now read-only.

[v0.9][BUGFIX] Fix wd in finetune_squad.py #1223

Merged
merged 2 commits into dmlc:v0.9.x on May 8, 2020

Conversation

eric-haibin-lin
Member

Description

The command for finetune_squad.py originally used Adam without weight decay (wd), so this PR removes the wd setting from the script. Fixes #1204 #1211
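
For context, here is a minimal sketch of what the corrected optimizer setup looks like in MXNet Gluon; the stand-in model and hyperparameter values below are illustrative assumptions, not code taken from this PR's diff:

```python
# Minimal sketch (MXNet Gluon). The stand-in model and the learning rate
# are assumptions for illustration, not the actual finetune_squad.py code.
import mxnet as mx
from mxnet.gluon import nn

net = nn.Dense(2)  # stand-in for the BERT-based QA model
net.initialize()

# MXNet's plain Adam applies wd as coupled L2 regularization (wd * weight
# is added to the gradient), which is not what the documented fine-tuning
# command intended. Dropping wd (or setting it to 0.0) matches running
# Adam without weight decay.
trainer = mx.gluon.Trainer(
    net.collect_params(),
    'adam',
    {'learning_rate': 3e-5, 'wd': 0.0},
)
```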

Checklist

Essentials

  • PR's title starts with a category (e.g. [BUGFIX], [MODEL], [TUTORIAL], [FEATURE], [DOC], etc)
  • Changes are complete (i.e. I finished coding on this PR)
  • All changes have test coverage
  • Code is well-documented


cc @dmlc/gluon-nlp-team

@mli
Member

mli commented May 6, 2020

Job PR-1223/1 is complete.
Docs are uploaded to http://gluon-nlp-staging.s3-accelerate.dualstack.amazonaws.com/PR-1223/1/index.html

@codecov

codecov bot commented May 7, 2020

Codecov Report

Merging #1223 into v0.9.x will decrease coverage by 0.34%.
The diff coverage is n/a.

Impacted file tree graph

@@            Coverage Diff             @@
##           v0.9.x    #1223      +/-   ##
==========================================
- Coverage   88.86%   88.51%   -0.35%     
==========================================
  Files          72       72              
  Lines        6977     6977              
==========================================
- Hits         6200     6176      -24     
- Misses        777      801      +24     
Impacted Files                                   Coverage Δ
src/gluonnlp/data/word_embedding_evaluation.py   89.31% <0.00%> (-7.64%) ⬇️
src/gluonnlp/data/glue.py                        96.81% <0.00%> (-1.82%) ⬇️

@eric-haibin-lin eric-haibin-lin merged commit 6ee8f02 into dmlc:v0.9.x May 8, 2020
Development

Successfully merging this pull request may close these issues.

BERT fine-tuning on SQuAD 1.1 doesn't converge