
@joelgrus released this Mar 29, 2019 · 28 commits to master since this release

Highlighted Changes

  • This release includes code for working with the DROP dataset, including the official evaluation script, a DatasetReader, and the NAQANet model. (#2559, #2556 and #2560)
  • We added a no-op trainer that allows you to create AllenNLP model archives for programmatic baselines, alternatively trained models, etc. (#2610)

Breaking Changes

  1. In #2607 we changed the (default) SpacyWordSplitter to return allennlp Tokens (which are compact, efficient NamedTuples) rather than spacy.tokens.Tokens. This was done primarily to decrease memory usage for large datasets, and secondarily to play more nicely with the multiprocess dataset reader.

This is a breaking change in two ways, neither of which should affect most users:

  • In theory everyone should be programming to the Token abstraction that's shared between both implementations, but it's possible that someone is relying on having the actual spacy token, in which case they would need to configure their word splitter with keep_spacy_tokens=True.

  • A NamedTuple can't have different constructor parameters and field names. Our previous Token implementation used e.g. pos as the name of the constructor argument but pos_ as the name of the field. Converting it to a NamedTuple means the constructor argument now also has to be pos_. If you were for some reason generating your own tokens manually (which the wikitables dataset reader was doing), you would need to make the corresponding changes to that code; if you were only creating Tokens using our Tokenizers, nothing changes for you.

It's quite likely that neither of these changes will affect even a single user, but in theory they could.
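The second constraint can be seen in a minimal sketch using a plain typing.NamedTuple (this is an illustration, not the actual allennlp Token class):

```python
from typing import NamedTuple, Optional

# In a NamedTuple the field name *is* the constructor argument name,
# so the old split between a `pos` constructor argument and a `pos_`
# field is no longer possible.
class Token(NamedTuple):
    text: Optional[str] = None
    pos_: Optional[str] = None  # the old constructor called this `pos`

token = Token(text="cats", pos_="NOUN")
assert token.pos_ == "NOUN"
```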

List of Commits

baef953 bump version number to v0.8.3
0abefe2 Fix docstrings after inspection (#2655)
a80aac7 Move register to typical location. (#2662)
e1d70bb Add missing paren (#2661)
2bf0779 Fixed ELMO command's lack of encoding specification when reading from… (#2614)
e138d6c TextCat Reader skip_label_indexing Fix (#2653)
6e1ee2e uses open(file_path) where file_path is a URL (#2654)
263d340 Upgrade Dockerfile to stretch. (#2647)
fab4b15 Fix quarel explanations (#2648)
37a078a make things backward compatible with spacy 2.0 (#2644)
e79b713 add dependency parser config (#2639)
305bd35 final_state_tuple is a Tuple (#2645)
a4a4306 Checkpointer should check in case user deleted a serialized model (#2531)
3c889ca Update outdated doc (#2641)
12626ac fix sampled softmax tests (#2061)
c06e904 add option "-k" to limit tests in test-install command (#2635)
1357c7e Remove reference to from the README (#2633)
0bbd359 Add workaround for missing linking in spacy 2.1, remove (#2632)
1b07b48 Bump up spacy version pin to 2.1 (#2626)
e3038a3 bug fixes in drop evaluation and more test cases (#2594)
f8b10a9 Add a no-op trainer. (#2610)
9e72ee0 Fix TextClassificationJsonReader handling of unlabeled instances (#2621)
0106536 Add text classification model (#2591)
43b384d Move some scripts to allennlp/allennlp/tools (#2584)
fe80f9f Fix 'cuda_device' docstring in Trainer.init (#2613)
f19c0ee Enable Pruner class to keep different number of items for different entries in minibatch. (#2511)
3cdb7e2 Ensure contiguous initial state tensors in _EncoderBase(stateful=True) (#2451)
ca998b2 Feature Request: Add a dtype parameter to ArrayField (#2609)
ff90845 change pins to bounds (#2490)
0fffb9b Allow the transition from M to M in the BMES constraint type (#2611)
0542c5a make spacy word splitter return allennlp Tokens (now NamedTuples) by default (#2607)
9e3f405 Only log the keys in the "extras" dictionary when instantiating objects from_params (#2608)
720d306 Handle edge cases in beam search (#2557)
79936e5 Re-use .allennlp when running Docker commands (#2593)
55458f5 fix bugs in naqanet (#2604)
c163b63 Fixed memory error in make_vocab on big dataset. (#2606)
b61d511 context manager that allows predictor to capture model internals (#2581)
3e0fcf0 Update (#2601)
18312a0 Seq2seq dataset reader improvements (#2599)
1adb3e8 Interactive beam search (#2513)
0f7bcf5 Add support for overriding list elements (#2585)
9437b61 disable tutorial test (#2580)
6ea273e Allow checkpointer to be initialized from params (#2491)
b0ea7ab Make tutorial use GPU if available. (#2570)
41174da Fix unit test to work with GPUs. (#2574)
32defc3 fix a bug in (#2534)
cdbac6d Fix min padding length in pretrained NER predictors (#2541)
d0f7170 Make load_archive operate on serialization directories. (#2554)
31af01e Add missing requirement to (#2564)
c54fcc6 Add NAQANet model for DROP (#2560)
97f3578 add initializer to copynet (#2558)
bbb67e9 Add dataset reader for DROP (#2556)
4d5eade Add official DROP evaluation script (#2559)
3d5560f missing =overrides argument when instantiate Params despite a second time (#2553)
64a8e13 Scope DeprecationWarning errors to just allennlp-internal stuff (#2549)
321cf91 Clarify data_parallel implementation. (#2488)
540ebac Propose a deprecation policy. (#2424)
6d8da97 make archival take an optional output path (#2510)
fefc439 Restore tensorboard epoch metrics to pre-refactoring behavior (#2532)
0205c26 Bump version numbers to v0.8.3-unreleased


@schmmd released this Feb 19, 2019 · 88 commits to master since this release

List of Commits

4bac9b5 bump version number to v0.8.2
3a5373f upgrade huggingface pretrained bert (#2524)
2f2b481 Adding bag_of_embeddings as an alternate name for the boe encoder (Issue #2268) (#2521)
9f87aa5 fix typo (#2508)
751d2e1 Remove extended_vocab argument from extend_embedder_vocab API. (#2505)
cf6eff2 Allow vocab+embedding extension in evaluate command. (#2501)
39413f2 Add necessary implicit embedding extension for transfer-modules api and vocab extension (#2431)
47feb35 remove notebooks and juptyer dependency (#2499)
56c11e5 Revert "Less restrictive token characters indexer (#2494)" (#2497)
89056f1 Less restrictive token characters indexer (#2494)
43dc4cb upgrade pytorch-pretrained-bert (#2495)
b9a4003 Add momentum schedulers to trainer (#2469)
19d0f59 Add support to transfer submodules, and in different modes. (#2471)
ce83cb4 Variational dropout in Augmented LSTM (#2344)
2e7acb0 Mark QaNet test as flaky (#2481)
23ea590 Add support for pretrained embedding extension in fine-tuning. (#2395)
dbd7085 WIP: Make warnings errors and filter library warnings from pytest (#2479)
234fb18 make bag_of_word_counts token embedder ignore padding and UNK tokens (#2432)
e5a74ab [Feature Enhanced] Support sentence pair in BERT (#2279)
6f5f565 Fix CORS error for react in config explorer (#2476)
08a8c5e Add QaNet model (#2446)
e417486 Fixes unnecessary parameter clone in moving average. (#2468)
9a2f198 Add support to kwargs in TimeDistributed (#2439)
9a5ea1a Ten times faster than before in GPU get_best_span of bidaf (#2465)
84eb4d1 More help info for (#2327)
a8e0f15 Make mask in PassThroughEncoder work (#2448)
2b52492 Fix path issue for certain cases when using --include-package (#2464)
35f7f16 Update Running AllenNLP in (#2447)
dddcbf2 Cleanup global logging after training (#2458)
5560466 [Feature Enhanced]Add FeedforwardEncoder for Sequences (#2444)
5ff923c Generalize LR scheduler (#2345)
508cb73 Instantiate the class Activation properly when testing FeedForward. (#2443)
dc901a6 rename test case (#2441)
00c877b Fix edgecase of potentially missing best-val-metrics in metrics.json on training recovery (#2352)
4465a6e Support exponential moving average in the default trainer. (#2406)
b6cc9d3 Optimizing the memory usage in multi_head_self_attention and masked_softmax (#2405)
90f39f9 Mark Atis Parser tests as flaky. Fix pylint. (#2430)
8b07c42 Improve error message for "undocumented modules" (#2427)
585c19e Fixing BERT mask size (#2429)
b7d56ae Fix typo when loading train state (#2425)
cb9651a Add AUC metric. (#2358)
9b01d4c fix params duplicate bug (#2421)
c1aace7 Get rid of hardcoded Vocabulary class name. (#2418)
55b9bd0 serialization in the tutorial (#2412)
d6e3140 Replace scripts with entry_points.console_scripts (#2232)
3baec62 Remove outdated reference to custom extensions (#2401)
012e42d Ensure vocab is popped off params (#2409)
a9b3475 Fix docstring for PretrainedBertIndexer's pretrained_model param (#2399)
11d8327 Fix typing for python 3.7 (#2393)
7525c61 Remove scattering for multi-GPU training. (#2200)
d0a5a40 Remove unnecessary line (#2385)
9719b5c Allow embedding extension to load from pre-trained embeddings file. (#2387)
174f539 enable pickling for vocabulary (#2391)
2d29736 Add support for extending Embeddings/Embedders with Extended Vocabulary. (#2374)
7cc8ba0 Rollback PR #2308 "Check train/dev/test-file-path" (#2386)
4fe8fa0 move model to cuda in tests, add comment (#2384)
8eb2d75 Text classification JSON dataset reader (#2366)
e9b0aca Make DomainLanguage handle executing action sequences directly, with side arguments (#2375)
122a21a changed log-to-console flags (#2381)
385e66e pieces for multitask learning (#2369)
ae63a2a New WikiTables language (#2350)
fc91f3e Sparse Gradient Clipping (#2312)
7da19bc Disable requires_grad for Transformer ELMo parser. (#2336)
e9287d4 Bag of words token embedder (#2365)
e75b19b Use f-string (#2370)
e2af6b4 Install package in dockerfile (#2305)
9649f3a fix a typo (#2367)
e08ade8 trainer refactor (#2304)
be57ecc Add snippet for using LanguageModelTokenEmbedder. (#2359)
abc10ed Update howto by removing old allennlp configure (#2330)
083f49e Bump up pytorch-pretrained-bert to v0.4.0 (#2349)
aa8ed3b Added StackedBiDirectional to list of encoders (#2339)
51ac814 Add optional additional tokens to ELMo character indexer (#2364)
6bd5975 sanitize environment variables that can't be unicode encoded (#2357)
0f6796a Fix very minor docstring typo in (#2361)
0409371 New NLVR language (#2319)
632b14c Warn default value of min_padding_length (#2309)
4c5de57 adjust call to lisp_to_nested_expression (#2347)
43ea97c QuaRel Domain Language (#2321)
2368760 Remove pylint ignores for backslashes. (#2331)
e6ad6e9 Evaluate on a token-weighted basis. (#2183)
71ebcd8 add infer_and_cast (#2324)
059b057 remove custom extensions (#2332)
5f9fb41 Stacked Bi-Directional LSTM to be compatible with the Seq2Vec wrapper (#2318)
93250f0 Skip Custom Highway LSTM tests, since they're broken (#2307)
7ecf772 Fix backslash exceptions. (#2322)
5bce1d5 Display default values in help message for allennlp command (#2323)
36d7400 Added links for some tutorials and organized the How-to alphabetically. (#2315)
f160059 Add raw prefix to avoid warnings from regex (#2310)
8a7f808 Check train/dev/test-file-path before process (#2308)
7d3b130 Masked Flip (#2299)
b0e2956 Add CopyNet seq2seq model (#2237)
a4670ad Grammar induction from a python executor (#2281)
088f0bb Turn BidirectionalLM into a more-general LanguageModel class (#2264)
f76dc70 Bump version numbers to v0.8.2-unreleased


@schmmd released this Jan 8, 2019 · 183 commits to master since this release

Highlighted changes

  • We now include a script to easily update pre-existing model archives (#2223).
  • We fixed a circular dependency that caused errors when using AllenNLP from within iPython (#2257).

List of Commits

b019196 Don't deprecate bidirectional-language-model name (#2297)
01913fb Update the base image in the Dockerfiles. (#2298)
1bec3f9 Add repr to Vocabulary (#2293)
73f0e5b Update the find-lr subcommand help text. (#2289)
e13aae4 fix #2285 (#2286)
2f662cf Fix spelling in tutorial README (#2283)
e6b0f21 script for doing archive surgery (#2223)
259ce32 Add a Contributions section to (#2277)
e394b7a Fix type annotation for .forward(...) in tutorial (#2122)
cf6a7d7 Fix bug in uniform_unit_scaling #2239 (#2273)
511c846 Deprecate bidirectional-language-model for bidirectional_language_model (#2253)
b5141b2 fix doc (#2213)
0d54a7a Add a quick README for training transformer ELMo. (#2231)
07d193e clarify the "num_output_representations" more clear (#2256)
ee02ed0 Fix multiGPU peak gpu memory test (#2254)
fb58783 Check for existing vocabulary before creating a new one (#2240) (#2261)
3cff7d3 Remove deprecated SpanPruner (#2248)
a98481c Remove deprecated batch_average argument to sequence ce with logits (#2247)
cf67128 Remove EpochTrackingBucketIterator (#2249)
2f56765 fix get_output_dim() for ElmoTokenEmbedder (#2234)
42e0815 Fix circular dependency. (#2257)
b5b9b40 add some docstrings about parameter force (#2226)
e53b4f4 Allow loading model from path with ~ HOME (#2215)
3f0953d change mislabeled variable in description (#2242)
1938a5a fix completely masked case in BooleanAccuracy (#2230)
fe62658 Corrected documentation (#2229)
e2f66c0 Simplifies Subsequent Mask (#2224)
ce060ba Bidirectional LM Embedder (#2138)


@schmmd released this Dec 20, 2018 · 213 commits to master since this release

Major New Features

  • PyTorch 1.0 support

List of Commits

e142042 Update
4cc4b6b Remove constraint_type parameter from CRF Tagger (#2208)
eff25a3 Adding metadata and debug info for the NLVR demo (#2214)
050b767 Update
bb000a0 Allow NLVR predictor to handle string inputs for the structured representation (#2209)
6aaf76a Adding DatasetReader for the bAbI tasks and Qangaroo (#2194)
3fd224f fix lowercase-ization in bert indexer (#2205)
a2084fd Remove last_dim_softmax as it's deprecated and scheduled for removal. (#2207)
d1bae5c Remove deprecated dataset_readers.{nlvr, wikitables} components. (#2206)
0663762 Fix (most?) warnings from BasicTextFieldEmbedder (#2117)
c666357 Adding an initializer to initialize a model to pretrained weights (#2043)
3061298 Fix loaded model not taken to cuda device (#2190)
7e22b86 add CPU/GPU usage in metrics.json (#2136)
12d3297 Make bz2 and lzma optional dependencies (#2196)
a558f7f minor spelling tweaks (#2192)
89cef2f Clean up temporary archive directory at exit (#2184)
0ae699a empty time stamp should be a str instead of int 0 (#2153)
8e861e7 (upstream/add_memory_usage_into_metricsA) pytorch 1.0 (#2165)
d0d6310 add default argument for SpacyWordSplitter (#2174)
a89aeba Minor fixes to help evaluate ELMo port. (#2172)
da761e8 pre commit hook to verify file sizes (#2151)
41c7196 instantiate multiprocessing.log_to_stderr lazily (#2166)
021471a Transformer ELMo (#2119)
6c33005 fix bert input order (#2145)
a8adc6c Fixes broken link and rephrasing (#2162)
140c3ec make max dialog length configurable through json file (#2160)
2e412e3 fix mismatches (#2157)
a75cb0a Update the elmo command help text. (#2143)
7dbd7d3 add [CLS] and [SEP] tags to bert indexer (#2142)
411b5a2 upgrade pytorch-pretrained-bert + fix kwarg (#2130)

Dec 19, 2018

bump version number to v0.8.0

@schmmd released this Dec 3, 2018 · 245 commits to master since this release

Major New Features

  • You can now run allennlp configure to launch a GUI tool that helps you build a model configuration.
  • You can now use a BERT embedder within your model.

List of Commits

ca7e3cf release config explorer (#2118)
2783044 pin msgpack (#2125)
3eecd1a more informative message for BasicTextFieldEmbedder mismatched keys (#2124)
3a7e065 fix epoch tracking (#2121)
db0096f Fix more BasicTextFieldEmbedder warnings (#2114)
f757f7a Fix epoch tracking bucket iterator warnings. (#2108)
44269a1 Rename DATASET_CACHE to CACHE_DIRECTORY. (#2000)
3842820 Fix span pruner warning. (#2113)
92c71a1 Upgrade flask to remove werkzeug warning. (#2107)
240974f Pack batches more tightly with maximum_samples_per_batch. (#2111)
42d076e modify BERT embedder to deal with higher order inputs (#2104)
dcd1d25 Resolve some of the token_embedders warnings. (#2100)
2648d95 Add BLEU metric to simple_seq2seq (#2063)
582d0e4 add BERT token embedder (#2067)
d78daa4 Fix EVALB compilation by cd to directory path instead of binary path (#2090)
6b16222 calypso transformer (#2049)
5890111 Separate calculation of num_tokens per TokenIndexer (#2080)
193bb04 Specify path at which to compile evalb. (#2027)
af902a3 Add support of tokenized input for coref and srl predictors (#2076)
e3e8e1c Fix typo in IntraSentenceAttentionEncoder docstring (#2072)
d504089 Add wikitables predicator to the default predicators (#2071)
2a2d9c9 Improve server_simple by adding predict_batch and adding GPU option (#2064)
19c784f Add BLEU metric (#2001)
6ecd193 Fix logging statement so it has the proper scope. (#2059)
888c11a add flaky decorator + increase tolerance (#2060)
86da880 add sampled softmax loss (#2042)
07b5749 Enable multi-gpu training in (#2045)
43243ac Update issue templates
de7c013 Sentence splitter (#2036)
0701dbd Dynamic stopwords (#2037)
aa1b774 Test sql decoding (#2030)
a5c2d9e Catch nan loss in training loop (#2029)
82bbee7 (warnings) Add link to EMNLP tutorial from tutorial README.
68cbfb8 modify training_configs related issue #1954 (#1997)
481c181 Fix ArrayField.to_tensor not working with a scalar (#2008)
be36374 change stopping factor default (#2021)
b919f5a Add logging statement when EVALB is compiled. (#2018)
3ca6942 Support stdin for prediction jsonl. (#2003)
bae7758 Change log level to clean up allennlp command. (#2004)
1406a85 Added default predictor for bimpm model (#2014)
7751799 Ignore hidden files in vocabulary/ (#2002)
6af83e7 Prevent inspect_cache from swallowing exception. (#1999)
e4f9131 Untyped grammar (#1986)
a4b885c Add scalar_mix_parameters to Elmo.from_params (#1992)
1f782d3 Add --force option to find-lr command (#1991)
021f8bb use wordsplit with taggers (#1981)
c262ef5 Match vocab generation in currently online Event2Mind model. (#1978)
bc97ce8 combine_tensors_and_multiply_with_batch_size_one_and_seq_len_one (#1980)
5aa1c8f Fix for import_submodules (#1976)
02317e1 add min_padding_length to TokenCharactersIndexer (#1954) (#1967)
39e16c4 delete swp file (#1975)
ad729e3 Link tutorial site from tutorials/ (#1972)
2a8bd63 Extend GPU fixes from vidurj and MaxMotovilov (#1944)


@schmmd released this Nov 12, 2018 · 300 commits to master since this release

This is a minor release.

List of Commits

5512a8f Add config for non-elmo constituency parser, rename existing parser config (#1965)
dedc4ce Try compiling EVALB in metric if it doesn't exist (#1964)
26f09cf Disable tqdm when there isn't a TTY (#1927)
9a3e4b6 Add Windows support info to (#1962)
a3bd475 Update
19d106e minor bug fix in get_agenda (#1959)
4e44097 Add scalar_mix_parameters to ElmoTokenEmbedder.from_params (#1956)
0264002 Add scalar_mix_parameters to ElmoTokenEmbedder (#1955)
ce0bc55 Add training config for bidaf with elmo (#1953)
c6fb86d Various fixes related to the variable free wikitables world (#1917)
aeb2fc3 Fix small typo (#1939)
d089d52 Add a dimension check to DialogQA, fix example configuration (#1934)
3e2d795 Remove the report from pylint. (#1932)
53a555c Update (#1837)
afc36eb Add a --force command to train (#1913)
947bd16 make api more pythonic (#1926)
0e82106 clean up simple_seq2seq tests (#1928)
b529f6d Generalize beam search, improve simple_seq2seq (#1841)
b0ade1b add matplotlib to (#1919)
188e06d Passing the 'stateful' parameter to the 'PytorchSeq2SeqWrapper' (#1925)
360c3e1 Text2sql model (#1905)
f29839e fix BiMPM url in (#1923)
404b529 Enable setting scalar mix parameters for ScalarMix and Elmo (#1921)
9fcc795 Learning Rate Finder (#1776)
63836c4 Adding support for list, tuple, and set in from_params (#1914)
f224c62 Agenda improvements (#1897)
2ec52a5 Uptick cffi to 1.11.5 (#1846)
0e47d16 bidirectional LM + cnn highway encoder + gated cnn encoder (#1787)
158a29c Tentative port of LMDatatsetReader (#1881)
ae7b9a7 Move 'What is AllenNLP' from README to docs. (#1909)
a94a23e More closely emulate original Event2Mind implementation. (#1903)
d3a8f4f Track dev loss in ATIS model (#1907)
c450565 Update
3753b0b Multilayer decoder for semantic parsing framework (#1902)
7a707ea Make SlantedTriangular a little more robust (#1751)
4a22c29 # added optimizer parameter (#1766)
27fab84 Add more configuration options for ATIS semantic parser (#1821)
dc66c8f Fix QuaRel reference (#1899)
ae9c9c8 try to do some type inference on variables (#1898)
1691cb3 Run ActionSpaceWalker on the new variable free language for WikiTables (#1860)
0d9ad65 Var free grammar (#1893)
24e5547 feature-enhancement: make trainer registrable (#1884)
91bfb4c Global grammar values (#1888)
ffab320 strings in sql/prelinked entities (#1876)
043ff07 allow Model to use custom Vocabulary subclasses
c1dcd0f Reflects the updated code (#1873)
371fd80 input_size of phrase_layer: 1144 -> 1124 (#1875)


@joelgrus released this Oct 5, 2018

Major new features

  • A new framework for training state-machine-based models, and several examples of using this for semantic parsing. This still has a few rough edges, but we've successfully used it for enough models now that we're comfortable releasing it.
  • A model for neural open information extraction
  • A re-implementation of a graph-based semantic dependency parser.
  • A MultiProcessDataIterator for loading data on multiple threads on the CPU (we haven't actually used this much, though - if you have trouble with it, let us know).

Breaking Changes

  1. Previously if you were working on a GPU, you would specify a cuda_device at the time you called instance.as_tensor_dict(), and the tensors would be generated on the GPU initially. As we started to develop code for generating instances in parallel across multiple processes, we became concerned that over-generation of instances could potentially exhaust the GPU memory.

Accordingly, now instance.as_tensor_dict() (and all the field.as_tensor operations that underlie it) always return tensors on the CPU, and then the Trainer (or the evaluation loop, or whoever) moves them to the GPU right before sending them to the model.

Most likely this won't affect you (other than making your training loop a tiny bit slower), but if you've been creating your own custom Fields or Iterators, they'll require small changes, as in #1731.
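The new contract can be sketched roughly as follows (a simplified illustration, not the Trainer's actual code; move_to_device here is a hypothetical helper):

```python
import torch

# Batches now come off the iterator on the CPU; the training loop moves
# them to the target device just before calling the model.
def move_to_device(obj, device):
    if isinstance(obj, torch.Tensor):
        return obj.to(device)
    if isinstance(obj, dict):
        return {key: move_to_device(value, device) for key, value in obj.items()}
    if isinstance(obj, list):
        return [move_to_device(item, device) for item in obj]
    return obj

batch = {"tokens": torch.tensor([[1, 2, 3]])}          # created on CPU
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
batch = move_to_device(batch, device)                  # moved just-in-time
```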

List of commits

7ddc7f1 Automatically map function names to aliases for NLTK's logic parser (#1870)
3989000 Bug fix binary expression in updates to grammar (#1869)
3d78e46 Tutorial for the semantic parsing framework (#1853)
c530fde Update for Event2Mind. (#1866)
8ff8324 Add QuaRel semantic parser (#1857)
8236624 Compile and fix warning for regex in SQL action seq formatting (#1864)
5172c85 Create a markdown file that enumerates available models (#1802)
2111428 Use test-install rather than (#1849)
c635bc4 Graph parser for semantic dependencies (#1743)
c728951 Integrate new table context in variable free world (#1832)
3de6943 Avoid deprecation warnings (#1861)
a7da2ab Fast grammar generation (#1852)
d8b13e0 Simplified GrammarStatelet, made a new LambdaGrammarStatelet class for WikiTables (#1829)
1d50292 Use current_log_probs instead of log_probs in debug_info (#1855)
0934512 move ftfy to right section, fix req. check script (#1858)
6183c90 Remove wget in wikitables tests by using requests (#1854)
358c36b Raise RuntimeError if java is not available (#1856)
99308f6 Structured sql data (#1845)
53b166e Executor for WikiTables variable free language (#1762)
38c87e0 fix network issue (#1844)
0b852fb Add Open AI tokenizer, and ability to add special tokens to token indexer (#1836)
f7c4195 Faster data loading for SQL Context (#1838)
a1ec53d initial text2sql grammar with unconstrained context (#1834)
a6b4c30 Add per node beam size option to beam search (#1835)
63ba3fb make some sql context code more generic (#1831)
e84e496 Support adding to vocabulary from pretrained embeddings (#1822)
9c7d0d0 sql data updates (#1827)
b1db1c9 ATIS Predictor (#1818)
53bba3d SQL Executor (#1815)
8be358e Seq2Seq Test Cleanup (#1814)
02e2930 Model can store extra pretained embeddings (#1817)
f65ced5 Add an example of using ELMo interactively (#1771)
0459261 Atis model action refactored (#1792)
546242f Fixes for seq2seq model (#1808)
63fcada Make num_start_types optional in transition functions (#1811)
7833447 Load the pre-trained embeddings for all NER tokens (#1806)
c78bb36 Add a correctness test for Open AI transformer (#1801)
e64373c Update
01d2906 Add tags_to_spans_function param to SpanBasedF1Measure. (#1783)
3906981 Moving the WikiTables executor into semparse.executors (#1786)
7647de8 Fixing the table format for the WikiTables executor (#1785)
6039ac0 SQL Coverage script (#1750)
9306e97 Add minimal configuration to existing models. (#1770)
ae72f79 Better multi-word predicates in Open IE predictors (#1759)
cca99b9 Add a predictor for Event2Mind. (#1779)
63dbdf1 one tutorial to rule them all (#1613)
8759ea3 Event2Mind (#1679)
c5501c7 fix trainer initialization (#1761)
898cfed BiattentiveClassificationNetwork with ELMo and without GloVe (#1767)
421f9a4 add simplest possible pretrained model interface (#1768)
b9cbfd6 Add an example of how to read ELMo embeddings with h5py (#1772)
e4f41e7 use managers for queues (#1769)
64f253f Add a requirements check to scripts/ (#1699)
b5087e7 multiprocess dataset reader and iterator (#1760)
1761684 Fixed bug in _get_combination_and_multiply for batch size equal to 1 (#1764)
ffe037d Update (#1763)
606a61a output metrics.json every epoch (#1755)
49f43ec Remove large openie model file (#1756)
609babe Add logging of learning rates to tensorboard (#1745)
eda2ba5 Set up SlantedTriangular for first batch (#1744)
1d81d8b Grammar for a variable free language for WikiTableQuestions (#1709)
4674b01 Make bmes_tags_to_spans support ill-formed spans. (#1710)
4c99f8e Text2sql reader (#1738)
8867f2f create tensors on cpu, move them to gpu later (#1731)
0664893 Discriminative fine-tuning, gradual unfreezing, slanted triangular learning rates (ULMFiT) (#1636)
1532886 Rename and organize tutorials (#1741)
72f7b4b Remove bucket iterator shuffle warning. (#1742)
8bbde0d Fix index_with bug in basic iterator (#1715)
3f54fc8 make openai transformer byte pair indexer add to the vocab (#1705)
7bf930f Add Open Information Extraction (#1726)
6c1607e bump gevent to 1.3.6 (#1732)
934ee17 Update (#1721)
72c9e98 Sql text utils (#1717)
ec25acd Update
647e53e Removing contents of requirements.txt file (#1729)
a585994 Update (#1722)
a61aa67 Moving allennlp.nn.decoding to allennlp.state_machines (#1714)
0b7bb20 Fixes and updates to (#1718)
c47318b Add configs for tasks in ELMo paper, with and without ELMo (#1626)
f2884ad require torch 0.4.1 (#1708)
ef72e2e Save learning rate scheduler to training state (#1650)
6cb7005 Add average parameter to sequence cross entropy (#1702)
5e68d04 Rename SpanPruner -> Pruner, remove -infs (#1703)


@schmmd released this Aug 31, 2018 · 435 commits to master since this release

This release includes a new dependency parser model, a QUAC model, and a new NLI model, as well as many bugfixes and small improvements.

4920249 bump version number to v0.6.1
ec2d5a1 skip moto tests, unpin dependencies (#1697)
863ded8 Add option to keep sentence boundaries in Elmo (#1695)
e027478 Upgrade Flask to 0.12.4 (fixes bug) (#1694)
335d899 Make SpanBasedF1Measure support BMES (#1692)
d16f6c0 Add a default predictor for biaffine-parser. (#1677)
4b6f8d1 Remove inplace modification in Covariance (#1691)
a0506a7 remove print statement (#1690)
89729e0 Add BMES constrain to is_transition_allowed function (#1688)
3df54c8 Removing last_dim_*softmax (#1687)
d1f6748 minor memory improvements in _joint_likelihood() of ConditionalRandomField with advanced indexing (#1686)
2a45f44 Add Covariance and PearsonCorrelation metrics (#1684)
1b31320 Add MeanAbsoluteError metric (#1683)
e9710c8 Fix links in docs and improve (#1680)
cbeef92 Predictor for QUAC models (#1674)
279f325 SQL semantic parser entity improvements (#1658)
45e6a0f pin boto3 + awscli (#1671)
fcdbbd3 require Python 3.6.1 (#1667)
5a305e3 improve API, update tests (#1664)
6c7c807 Adding decoder to bimpm and improve demo server. (#1665)
9caac66 Fix lazy dataset reader bug in ModelTestCase (#1668)
0b3ebcf Reading comprehension model for QUAC dataset (#1625)
994b996 Make CrfTagger work with non-BIO tagging tasks (#1661)
c31d5f9 Fix conll2000 data reading (#1657)
abfb32d Refactoring how actions are handled in the semantic parsing code (#1294)
75abebb Standardize tagging datasets (#1656)
362060a add back sniff test for parser (#1654)
5e13d24 Fix Conll2003 reader docs to reflect true label namespace names (#1655)
fea0d0a Upgrade flask to avoid security vulnerability. (#1653)
d27770a Make replace_masked_values more efficient by using masked_fill (#1651)
4ade6e4 Fix module docstring for training.learning_rate_schedulers (#1649)
301f2d6 Implement cosine with restarts (#1647)
681a9cf make pos selection an option in dataset reader, use in predictor (#1648)
bca6c2a make max_vocab_size default to None for a given namespace in Vocabulary._extend (#1643)
1d43188 Add learning rate to logs (#1641)
b70e026 Make LinearAttention and LinearMatrixAttention memory-efficient (#1632)
be76b5c Add missing --recover flag to train docs (#1640)
2e47ac4 typing is part of standard library from python3.6 (gives errors on python3.7) (#1638)
4df4638 Data reader for QUAC (#1624)
bf75c9b Allow configurable label namespaces (#1621)
4c6731b Pip install the library in editable mode (-e) (#1592)
14aee14 fix calculation of estimated time remaining (#1631)
8c89b08 Parser decoding fix 2 (#1619)
7a9975e Fix bimpm config file for names num_perspective(s). (#1627)
0a5aea7 atis dataset reader (#1577)
659bf25 Allow files to be downloaded from S3 (#1620)
76a65a8 BiMPM model (#1594)
58119c0 make fine-tune not expand vocabulary by default (#1623)
3107a0c Don't error if fine-tune serialization dir already exists (#1622)
82686c1 Add IOB1 as allowed CRF constraint (#1615)
59132d2 Avoid divide by zero in CategoricalAccuracy (#1617)


@joelgrus released this Aug 15, 2018 · 487 commits to master since this release

AllenNLP v0.6.0 has been upgraded to use PyTorch 0.4.1. Accordingly, it should now run on Python 3.7.

It contains a handful of breaking changes, most of which probably won't affect you.

Breaking changes:

1. HOCON -> Jsonnet for Configuration files

Although our experiment configurations look like JSON, they were technically HOCON (which is a superset of JSON). In this release we changed the format to Jsonnet, which is a different superset of JSON.

If your configuration files are "JSON with comments", this change should not affect you. Your configuration files are valid Jsonnet and will work fine as is. We believe this describes 99+% of people using allennlp.

If you are using advanced features of HOCON, then these changes will be breaking for you. Probably the two most common issues will be

unquoted strings

JSON requires strings to be quoted. HOCON doesn't. Jsonnet does. So on the off chance that you have not been putting your strings in quotes, you'll need to start putting them in quotes.

environment variables

HOCON allows you to substitute in environment variables, like

    "root_directory": ${HOME}

Jsonnet only allows substitution of explicit variables, using a syntax like

    "root_directory": std.extVar("HOME")

These are in fact variables fed to the Jsonnet parser (not environment variables); however, the allennlp code reads all the environment variables and feeds them to the parser.

the elimination of ConfigTree

(you probably don't care about this)

Previously the AllenNLP Params object was a wrapper around a pyhocon ConfigTree, which is basically a fancy dict. After this change, Params.params is just a plain dict instead of a ConfigTree, so if you have code that relies on it being a ConfigTree, that code will break. This is very unlikely to affect you.

why did we make this change?

There is a bug in the Python HOCON parser that incorrectly handles backslashes in strings. This created issues involving initializer regexes being serialized and deserialized incorrectly. Once we determined that the bug was not simple enough for us to easily fix, we chose this as the next best solution.

(in addition, jsonnet has some nice features involving templates that you might find useful in your experiments)
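One such feature is local variables, which let a config factor out a shared value in one place. The field names below are illustrative only, not a real allennlp configuration:

```jsonnet
// Illustrative only -- not a real AllenNLP model configuration.
local embedding_dim = 100;

{
  "model": {
    "type": "my_classifier",
    "text_field_embedder": {"embedding_dim": embedding_dim},
    "encoder": {"input_size": embedding_dim, "hidden_size": 50},
  },
}
```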

2. Change to the Predictor API

The API for the _json_to_instance method of the Predictor used to be (json: JsonDict) -> Tuple[Instance, JsonDict], where the returned JsonDict contained information from the input that you wanted echoed back in the predictor's output. This is no longer allowed: _json_to_instance now returns only an Instance, so any additional information must be routed through your model via MetadataFields. This change makes Predictors agnostic about where the Instances they process come from, allowing us to generate predictions from an original dataset by using a DatasetReader to generate instances.

This means you can now do

    allennlp predict /path/to/original/dataset --use-dataset-reader

rather than having to format your data as .jsonl files.
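A pure-Python sketch of the new pattern (the names below are illustrative, not the exact AllenNLP API): information you want echoed back now rides inside the Instance itself, in a metadata field, instead of in a second return value.

```python
# Illustrative sketch of routing extra input information through the model
# via metadata, rather than returning it alongside the Instance.
def json_to_instance(json_dict):
    # Before: returned (instance, {"original_sentence": ...}) -- no longer allowed.
    # Now: stash anything you want echoed back inside the instance itself.
    return {
        "tokens": json_dict["sentence"].split(),
        "metadata": {"original_sentence": json_dict["sentence"]},
    }

def model_forward(instance):
    # The model's forward() copies metadata straight into its output dict,
    # so the Predictor never needs a side channel for input information.
    return {
        "num_tokens": len(instance["tokens"]),
        "original_sentence": instance["metadata"]["original_sentence"],
    }

output = model_forward(json_to_instance({"sentence": "AllenNLP is neat"}))
```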

3. Automatic implementation of from_params

It used to be the case that if you implemented your own Model or DatasetReader or whatever, you were required to implement a from_params classmethod that unpacked a Params object and called the constructor with the relevant values. In most cases this method was just boilerplate that didn't do anything interesting -- it popped off strings and ints and so on. And it opened you up to a class of subtle bugs if your from_params popped parameters with a different default value than the constructor used.

In the latest version, any class that inherits from FromParams (which automatically includes all Registrable classes) gets for free a from_params method that does the "right thing". If you need complex logic to instantiate your class from a JSON config, you'll still have to write your own method, but in most cases you won't need to.

There are some from_params methods that take additional parameters; for example, every Model constructor requires a Vocabulary, which will need to be supplied by its from_params method. To support this, the automatic from_params allows extra keyword-only arguments. That is, if you are calling the from_params method yourself (which you probably aren't), you have to do

    YourModel.from_params(params, vocab=vocab)

If you try to supply the extra arguments positionally (which you could when all of the from_params methods were defined explicitly), you will get an error. This is the "breaking" component of the change.
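A toy sketch of the idea (the real FromParams logic in AllenNLP is considerably more involved): read the constructor signature, pop matching keys out of the params, and accept extra keyword-only arguments like vocab that don't come from the config.

```python
import inspect

# Toy version of automatic from_params, for illustration only.
class FromParams:
    @classmethod
    def from_params(cls, params: dict, **extras):
        kwargs = dict(extras)  # extra keyword-only args, e.g. vocab=vocab
        for name in inspect.signature(cls.__init__).parameters:
            if name != "self" and name not in kwargs and name in params:
                kwargs[name] = params.pop(name)
        # Constructor defaults fill in anything not supplied, so defaults
        # live in exactly one place: the constructor.
        return cls(**kwargs)

class MyModel(FromParams):
    def __init__(self, vocab, hidden_size: int = 100):
        self.vocab = vocab
        self.hidden_size = hidden_size

model = MyModel.from_params({"hidden_size": 300}, vocab="a-vocabulary")
```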

4. Changes to TokenIndexers

Previously the interface for TokenIndexer was

    TokenIndexer.token_to_indices(self, token: Token, vocabulary: Vocabulary) -> TokenType:

This assumption of (one token) -> (one or more indices) turned out not to be general enough. There are cases where you want to generate indices that depend on multiple tokens, and where you want to generate multiple sets of (related) indices from one input text. Accordingly, we changed the API to

    TokenIndexer.tokens_to_indices(self, tokens: List[Token], vocabulary: Vocabulary, index_name: str) -> Dict[str, List[TokenType]]:

This is some real library-innards stuff, and it is unlikely to affect you or your code unless you have been writing your own TokenIndexer or Field subclasses (which most users are not). If this does describe you, look at the changes to TextField to see how to update your code.
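A toy indexer illustrating the new shape of the API (simplified: real indexers take a Vocabulary object rather than a plain dict): the whole token list goes in, and a dict of named index lists comes out, which lets one indexer produce several related index sequences at once.

```python
from typing import Dict, List

# Simplified illustration of the new tokens_to_indices signature;
# a real TokenIndexer receives an allennlp Vocabulary, not a dict.
def tokens_to_indices(tokens: List[str],
                      vocabulary: Dict[str, int],
                      index_name: str) -> Dict[str, List[int]]:
    ids = [vocabulary.get(token, 0) for token in tokens]  # 0 = out-of-vocab
    # A single call can return multiple related keys, e.g. a mask as well.
    return {index_name: ids, index_name + "-mask": [1] * len(tokens)}

vocab = {"the": 2, "cat": 3}
output = tokens_to_indices(["the", "cat", "sat"], vocab, "tokens")
```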

Other changes:

9540125 Tree decoding fix (#1606)
4eaeff7 Fix use of scalar tensors in ConllCorefScores (#1604)
982cedd Small typo fixes in tutorial (#1603)
45fff83 use empty list for no package not empty string (#1602)
e0e5f4a revert conllu changes (#1600)
49626cc filter out numpy 'size changed' warnings (#1601)
4fca028 (include-package-fix) Log number of parameters in optimizers (#1598)
b79d500 Add file friendly logging for elmo. (#1593)
152b590 Output details when running check-links. (#1569)
068407e make --include-package include all submodules (#1586)
12b74e5 Add some debugging echo commands to pip. (#1579)
9194b30 copy bin/ into image (#1587)
bf760b0 be friendlier to windows users (#1572)
4fa4dc2 fix and pin conllu dependency == 1.0 (#1581)
6b37dd2 Turn off shuffling during evaluation (#1578)
07bfc31 Demo features for the dependency parser (#1560)
025b5e7 Remove a step from verify. (#1565)
d2c0274 Don't use a hard-coded temp directory. (#1564)
15e3645 openai transformer LM embedder (#1525)
87b32bb Expose iterator shuffling through Trainer config (#1557)
52f44e2 Add parsimonious for SQL parsing to (#1558)
089d16d SQL action sequences and Atis World (#1524)
6f0fec1 make passing different feedforward modules more flexible (#1555)
dcba726 WIP: Skip tests that require Java in test-install (#1551)
8438f91 Remove the unused NltkWordSplitter and punkt model. (#1548)
09c2cc5 Dependency parser predictor (#1538)
c2e70ca upgrade to pytorch 0.4.1 + make work with python 3.7 (but still 3.6 also) (#1543)
c000ae2 made checklist updates more efficient (#1552)
2ec4c5c re-work dependency parser to use HEAD sentinel inside model (#1544)
10ac9ed Update
1c2a0de Remove requirements_test.txt (merge into requirements.txt) (#1541)
2154e72 Allow server start without field-names. (#1523)
e32f486 fix BasicTextFieldEmbedder.from_params to reflect the constructor (#1474)
dc1ff36 Fix the reported broken links. (#1533)
e049afc fix ud reader in case of implicit references (#1529)
ad265f8 Add output-file option in evaluate to save the computed metrics (#1512)
c37ff2c update config files to jsonnet format (#1479)
f3fce4c Move cache breaker to the end. (#1527)
c9385e7 Fixed broken link (#1508)
8cf893e Add a scirpt to report broken links in all markdowns. (#1522)
be69e52 Parser improvements (#1515)
e4b86b0 Show warning before ignoring key with unseparable batches from model.forward. (#1520)
2c9abf9 Minor change of a comment (#1500)
0722d7f DenseSparseAdam + CRF Feedforward layer (#1519)
1402b7c Add to_file method in Params and default preference ordering. (#1517)
e0581b6 Preserve best metrics (#1504)
de0d3f7 Dependency parser (#1465)
be0f0c2 Remove extra .params (#1513)
7df8275 Text field updates to support multiple arrays in TokenIndexer (#1506)
88c381a Make usage of make-vocab, dry-run consistent with train and allow 'extend' to be used by both (#1487)
34a92d0 Update (#1509)
8e5ee65 fix dumb domain filtering (#1505)
8a20820 Ensure Contiguous Hidden State Tensors in Encoders (#1493)
66b2c1c Bio to bioul (#1497)
f4eef6e [for discussion] change token_to_indices -> tokens_to_indices (in preparation for byte pair encoding) (#1499)
9ec3aa6 Fix start of tqdm logging in training. (#1492)
ee003d2 Fix SpanBasedF1Measure allowed label encodings comment (#1501)
d307a25 Add IOB1 support to SpanBasedF1Metric (#1494)
9c21696 fixing a bug in trainer for histograms (#1498)
5f2f539 Add option to have tie breaking in Categorical Accuracy (#1485)
7457710 Update (#1496)
5fc7a00 Fix SpanBasedF1Measure for tags without conll labels (#1491)
01ddd12 make tables nice in validation summary (#1490)
ba6f345 Crf ner tweaks (#1488)
f5bbe59 Move param import (#1484)
e50b102 fine grained ner reader (#1483)
d9e9861 don't call create_kwargs for a class that has no constructor (#1481)
52c0835 instantiate default activation functions in constructor (#1478)
580dc8b (mostly) remove from_params (#1191)
ff41dda Implementation of ESIM model (#1469)
e2edc9b unwrap tensors in avg metric (#1463)
77298a9 Fix logging of no-grad parameters. (#1448)
bef52ed Fix call to vocab.token_from_index -> self.label_namespace (#1459)
7cc3db1 fix Vocabulary.from_params to accept a dict for max_vocab_size (#1460)
59ecd3b Fix conll2003.from_params incorrect default (#1453)
f09ff87 Allow to use a different validation iterator from training iterator (#1455)
a56fa40 remove RegistrableVocabulary (#1454)
f136ae0 Fix a typo in embedding_tokens notebook. (#1449)
d4ee5db Make bucket iterator respect maximum_samples_per_batch (#1446)
f0ed1d4 Few feature additions (#1438)
74a30d0 update the look and feel of the config explorer (#1412)
fa34344 refactor iterators (#1157)
43fc89e Enables Predict to use dataset readers from models (#1434)
d2e3035 enable mypy on tests (#1437)
7664b12 Add support for selective finetune (freeze parameters by regex from config file) (#1427)
8855042 eliminate or make private most of the new Vocabulary methods (#1436)
a0c368a Fix an edge case for incompatible vocabulary extension. (#1435)
18d4fee remove adaptive iterator (#1433)
0312b16 Add support for configurable vocabulary extension (#1416)
5d38282 Avoid non-model state in predictors (#1422)
eaf5b7e Call before logging to tensorboard (#1423)
872acf9 Make evaluation tqdm description ignore metrics starting with _ (#1430)
36d91fd Make tqdm description ignore metrics starting with _ (#1425)
70d4d3c use sensible default for num_serialised_models_to_keep (#1420)
9dbba33 Fix chdir in ModelTestCase breaking downstream models (#1418)
1031815 duplicate config in Predictor.from_archive (#1413)
2bf1e28 fix a minor typo in docstring causing wrong api usage docs of vocabulary config. (#1415)
e16a6b5 Split off function to find latest checkpoint in Trainer (#1414)
70b4ffb (jonborchardt/master) replace hocon with jsonnet (#1409)
8a31494 (upstream/ratecalculus, jonborchardt/ratecalculus) Add --include-sentence-indices flag to ELMo command (#1404)
4bd8e7f Add support for prevention of parameter initialization which match the given regexes (#1405)
76deabb Remove frontend (#1407)
6800d76 fix elmo command to use line indices and disallow empty lines (#1397)
e903018 Fix multiple GPU training after upgrading to pytorch 0.4 (#1401)
db519af Update with ./allennlp/ (#1395)
3dff9c7 remove demo (#1338)
6da17d6 Adds support for reading pretrained embeddings (text format) from uncompressed files and archives (#1364)
5e38a08 Update Dockerfile.pip
da429d6 Update Dockerfile.pip
2b32a86 Update Dockerfile.pip
bb08b06 Add a Dockerfile for downstream usage of AllenNLP. (#1389)
bab565a Update (#1388)
3aa81e7 In get_from_cache(), allow redirections in head requests (#1387)
f4d8d07 Output answers in wikitables predictor when inputs are batched (#1384)
a807239 create ccgbank dataset reader (#1381)
