
Dataset

WSJ (the English Penn Treebank), CTB (the Penn Chinese Treebank), SPMRL, and UD are the most widely used evaluation treebanks. Check out XCFG for treebank preprocessing.
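
For quick experiments, the 10% WSJ sample that ships with NLTK is enough to smoke-test a pipeline. A minimal preprocessing sketch follows; the lowercasing, punctuation stripping, and length cap are common conventions in this literature, not XCFG's exact recipe:

```python
# Minimal treebank preprocessing sketch for unsupervised parsing,
# using NLTK's bundled 10% sample of the WSJ Penn Treebank.
import nltk
from nltk.corpus import treebank

nltk.download("treebank", quiet=True)  # fetch the sample corpus if missing

# PTB punctuation tags and the trace tag, dropped by convention.
DROP_TAGS = {",", ".", ":", "``", "''", "-LRB-", "-RRB-", "#", "$", "-NONE-"}

def clean_tokens(tree, max_len=40):
    """Lowercased, punctuation-free yield of one parse (None if too long)."""
    toks = [w.lower() for w, tag in tree.pos() if tag not in DROP_TAGS]
    return toks if 0 < len(toks) <= max_len else None

sentences = [t for t in map(clean_tokens, treebank.parsed_sents()) if t]
print(len(sentences), sentences[0][:8])
```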

Dependency Grammar Induction

Constituency Grammar Induction

Compound Probabilistic Context-Free Grammars for Grammar Induction, paper, code (see the inside-algorithm sketch after this list).

Unsupervised Recurrent Neural Network Grammars, paper, code.

Unsupervised Latent Tree Induction with Deep Inside-Outside Recursive Autoencoders, paper, code.

Unsupervised Learning of PCFGs with Normalizing Flow, paper, code.
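
All of the PCFG-style models above are trained by maximizing the marginal likelihood of raw sentences, computed bottom-up with the inside algorithm. Below is a minimal sketch on a toy grammar in Chomsky normal form; the grammar and probabilities are invented for illustration, while real systems such as Compound PCFG parameterize them with neural networks:

```python
# Inside algorithm on a toy CNF PCFG: chart[i][j][A] accumulates the
# total probability that nonterminal A yields words[i:j].
from collections import defaultdict

binary = {("S", ("NP", "VP")): 1.0,   # (parent, (left, right)) -> prob
          ("NP", ("Det", "N")): 1.0,
          ("VP", ("V", "NP")): 1.0}
lexical = {("Det", "the"): 1.0, ("N", "dog"): 0.5,  # (tag, word) -> prob
           ("N", "cat"): 0.5, ("V", "saw"): 1.0}

def inside(words):
    n = len(words)
    chart = [[defaultdict(float) for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):                   # width-1 spans
        for (A, word), p in lexical.items():
            if word == w:
                chart[i][i + 1][A] += p
    for span in range(2, n + 1):                    # wider spans
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):               # split point
                for (A, (B, C)), p in binary.items():
                    chart[i][j][A] += p * chart[i][k][B] * chart[k][j][C]
    return chart[0][n]["S"]  # marginal probability of the sentence

print(inside("the dog saw the cat".split()))  # 0.25
```

The gradient of the log of this quantity with respect to the rule probabilities is what replaces treebank supervision in these models.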

Emerging Topics in Unsupervised Grammar Induction

Visually Grounded Grammar Induction

  • Visually Grounded Neural Syntax Acquisition, paper, code.
  • Visually Grounded Compound PCFGs, paper, code.
  • VLGrammar: Grounded Grammar Induction of Vision and Language, paper, code.

Grammar Induction with Language Models

Specialized LMs

  • Neural Language Modeling by Jointly Learning Syntax and Lexicon, paper, code.
  • Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks, paper, code (see the cumax sketch below).
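
The key mechanism in ON-LSTM (Ordered Neurons) is the cumax activation, a cumulative sum over a softmax, which yields a monotone gate that orders hidden units by syntactic level. A minimal sketch; the surrounding master forget/input gates and the LSTM itself are omitted:

```python
# cumax(x) = cumsum(softmax(x)): a monotonically non-decreasing gate in
# [0, 1] that softly partitions units into "low" and "high" levels.
import torch

def cumax(x, dim=-1):
    return torch.cumsum(torch.softmax(x, dim=dim), dim=dim)

logits = torch.randn(8)
gate = cumax(logits)   # gate[0] <= gate[1] <= ... <= gate[-1] = 1
print(gate)
```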

General-Purpose LMs

  • Are Pre-trained Language Models Aware of Phrases? Simple but Strong Baselines for Grammar Induction, paper, code.
  • Perturbed Masking: Parameter-free Probing for Analyzing and Interpreting BERT, paper, code (a simplified sketch follows).
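
To make the probing idea concrete, here is a deliberately simplified sketch of perturbed-masking-style impact measurement with Hugging Face transformers: mask token i and record how far every other token's representation moves. The paper's actual two-pass masking scheme and the tree decoding on top of the impact matrix are omitted, and `bert-base-uncased` is just a common default choice:

```python
# Simplified perturbed-masking probe: f[i, j] measures how much masking
# token i shifts BERT's contextual representation of token j.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

def impact_matrix(sentence):
    enc = tok(sentence, return_tensors="pt")
    ids = enc["input_ids"]
    with torch.no_grad():
        base = model(**enc).last_hidden_state[0]   # unperturbed reps
    n = ids.size(1)
    f = torch.zeros(n, n)
    for i in range(1, n - 1):                      # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[0, i] = tok.mask_token_id
        with torch.no_grad():
            pert = model(input_ids=masked,
                         attention_mask=enc["attention_mask"]).last_hidden_state[0]
        f[i] = (base - pert).norm(dim=-1)
    return f

print(impact_matrix("the dog saw the cat").shape)
```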

Neural Lexicalized Grammar Induction

  • The Return of Lexical Dependencies: Neural Lexicalized PCFGs, paper, code.
  • Neural Bi-Lexicalized PCFG Induction, paper, code.

Transfer Learning for Grammar Induction

Cross-Lingual Transfer

  • Multilingual Grammar Induction with Continuous Language Identification, paper, code.

Cross-Domain Transfer

  • On the Transferability of Visually Grounded PCFGs, paper, code.

Language Acquisition

Bootstrapping Language Acquisition, paper, code.

Syntax for Downstream Tasks

Scalable Syntax-Aware Language Models Using Knowledge Distillation, paper, code.

Language Modeling with Shared Grammar, paper, code.

Learning to Compose Task-Specific Tree Structures, paper, code.

Learning Latent Trees with Stochastic Perturbations and Differentiable Dynamic Programming, paper, code.