Commit 37e35de

fix link to repo
1 parent 4fc1b5a commit 37e35de

File tree

1 file changed: +2 −2 lines


site/publications/picard.md

Lines changed: 2 additions & 2 deletions
@@ -6,7 +6,7 @@ author:
 - Dzmitry Bahdanau
 journal: "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing"
 date: Nov 1, 2021
-tldr: "Introducing PICARD - a simple and effective constrained beam search algorithm for any language model. PICARD helps with generating valid code, which is useful for program synthesis and semantic parsing. We achieve SoTA on both Spider and CoSQL."
+tldr: "Introducing PICARD - a simple and effective constrained beam search algorithm for any language model. PICARD helps to generate valid code, which is useful for program synthesis and semantic parsing. We achieve SoTA on both Spider and CoSQL."
 image: picard.jpg
 tags:
 items: [research, haskell]
@@ -16,4 +16,4 @@ talk: 'https://youtu.be/kTpixsr-37w'
 code: 'https://github.com/ElementAI/picard'
 ---
 
-Large pre-trained language models for textual data have an unconstrained output space; at each decoding step, they can produce any of 10,000s of sub-word tokens. When fine-tuned to target constrained formal languages like SQL, these models often generate invalid code, rendering it unusable. We propose PICARD (code and trained models available at this https URL), a method for constraining auto-regressive decoders of language models through incremental parsing. PICARD helps to find valid output sequences by rejecting inadmissible tokens at each decoding step. On the challenging Spider and CoSQL text-to-SQL translation tasks, we show that PICARD transforms fine-tuned T5 models with passable performance into state-of-the-art solutions.
+Large pre-trained language models for textual data have an unconstrained output space; at each decoding step, they can produce any of 10,000s of sub-word tokens. When fine-tuned to target constrained formal languages like SQL, these models often generate invalid code, rendering it unusable. We propose PICARD, a method for constraining auto-regressive decoders of language models through incremental parsing. PICARD helps to find valid output sequences by rejecting inadmissible tokens at each decoding step. On the challenging Spider and CoSQL text-to-SQL translation tasks, we show that PICARD transforms fine-tuned T5 models with passable performance into state-of-the-art solutions. Code and trained models are available [here](https://github.com/ElementAI/picard).
