
Commit 44afb95

Corrections: May 2024 (#3287)
* Paper Revision 2023.emnlp-main.502, closes #3276.
* Paper Metadata: 2024.eacl-demo.22, closes #3289.
* Paper Revision 2020.emnlp-main.26, closes #3288.
* Paper Metadata: 2022.emnlp-main.252, closes #3249.
* Author Metadata: Gerasimos Spanakis, closes #3294.
* Paper Metadata: 2023.sicon-1.5, closes #3291.
* Author Metadata: Marcio Lima Inácio, closes #3230.
* Paper Revision: 2022.findings-acl.278, closes #3295.
* Paper Metadata: 2024.eacl-long.34, closes #3189.
* Author Metadata: Yifan Peng, closes #3259.
* Paper Revision: 2024.lrec-main.855, closes #3308.
* Paper Metadata: 2024.lrec-main.16, closes #3311.
* Paper Metadata: 2024.lrec-main.85, closes #3310.
* Paper retraction for 2022.coling-1.306, per publication chair request.
* Author Metadata: Kexin Wang, closes #3231.
* Author Metadata: Weiwei Sun, closes #3274.
* Reingested IJCNLP demo for missing paper, closes #3253.
* Author Metadata: Chao Zhang, closes #3243.
* Paper Revision: 2023.inlg-main.34, closes #2889.
* Paper Metadata: 2024.eacl-demo.23, closes #3177.
* Author Metadata: Yoo Yeon Sung, closes #3215.
* Author Metadata: Pranjal A. Chitale, closes #3216.
* Author Metadata: Santosh T.y.s.s, closes #3217.
* Author Metadata: Katharina Hämmerl, closes #3219.
* Author Metadata: maria berger, closes #3225.
* Add MWE-UD 2024 proceedings to SIGLEX and SIGPARSE, closes #3321.
* Paper Metadata: 2024.lrec-main.1355, closes #3320.
* Paper Metadata: 2024.lrec-main.1397, closes #3315.
* Paper Metadata: 2024.lrec-main.346, closes #3316.
* Paper Revision: 2024.lrec-main.921, closes #3318.
* Paper Metadata: 2024.mwe-1.27, closes #3327.
* Paper Metadata: 2024.cawl-1.6, closes #3326.
* Paper Metadata: 2024.lrec-main.8, closes #3325.
* Paper Metadata: 2024.lrec-main.1109, closes #3323.
* Paper Revision: 2024.lrec-main.539, closes #3329.
* Paper Metadata: 2024.lrec-main.677, closes #3330.
* Paper Revision: 2024.ecnlp-1.13, closes #3332.
* Paper Metadata: 2024.ldl-1.6, closes #3333.
* Paper Metadata: 2024.lt4hala-1.9, closes #3334.
* Paper Revision: 2024.findings-eacl.1, closes #3336.
* Paper Revision: 2024.lrec-main.350, closes #3335.
* Paper Metadata: 2024.lrec-main.1463, closes #3339.
* Paper Metadata: 2024.mwe-1.0, closes #3343.
* Author order change for 2024.legal-1.9.
* Paper Revision: 2024.lrec-main.215, closes #3347.
* Paper Metadata: 2024.cogalex-1.17, closes #3345.
* Paper Metadata: 2024.lrec-main.1471, closes #3351.
* Paper Metadata: 2024.lrec-main.1392, closes #3350.
* Paper Metadata: J16-3007, closes #3349.
* Paper Revision: 2024.lrec-main.405, closes #3341.
1 parent 6538dc6 commit 44afb95

37 files changed: +129 −62 lines changed

data/xml/2020.coling.xml

Lines changed: 1 addition & 1 deletion
@@ -6270,7 +6270,7 @@
 </paper>
 <paper id="469">
 <title><fixed-case>S</fixed-case>a<fixed-case>SAKE</fixed-case>: Syntax and Semantics Aware Keyphrase Extraction from Research Papers</title>
-<author><first>Santosh</first><last>Tokala</last></author>
+<author><first>Santosh</first><last>T.y.s.s</last></author>
 <author><first>Debarshi</first><last>Kumar Sanyal</last></author>
 <author><first>Plaban Kumar</first><last>Bhowmick</last></author>
 <author><first>Partha Pratim</first><last>Das</last></author>

data/xml/2020.emnlp.xml

Lines changed: 3 additions & 1 deletion
@@ -409,14 +409,16 @@
 <author><first>David</first><last>Schlangen</last></author>
 <pages>357–374</pages>
 <abstract>While humans process language incrementally, the best language encoders currently used in NLP do not. Both bidirectional LSTMs and Transformers assume that the sequence that is to be encoded is available in full, to be processed either forwards and backwards (BiLSTMs) or as a whole (Transformers). We investigate how they behave under incremental interfaces, when partial output must be provided based on partial input seen up to a certain time step, which may happen in interactive systems. We test five models on various NLU datasets and compare their performance using three incremental evaluation metrics. The results support the possibility of using bidirectional encoders in incremental mode while retaining most of their non-incremental quality. The “omni-directional” BERT model, which achieves better non-incremental performance, is impacted more by the incremental access. This can be alleviated by adapting the training regime (truncated training), or the testing procedure, by delaying the output until some right context is available or by incorporating hypothetical right contexts generated by a language model like GPT-2.</abstract>
-<url hash="09d22bbc">2020.emnlp-main.26</url>
+<url hash="3ba95a3f">2020.emnlp-main.26</url>
 <doi>10.18653/v1/2020.emnlp-main.26</doi>
 <video href="https://slideslive.com/38938866"/>
 <bibkey>madureira-schlangen-2020-incremental</bibkey>
 <pwccode url="https://github.com/briemadu/inc-bidirectional" additional="false">briemadu/inc-bidirectional</pwccode>
 <pwcdataset url="https://paperswithcode.com/dataset/atis">ATIS</pwcdataset>
 <pwcdataset url="https://paperswithcode.com/dataset/ontonotes-5-0">OntoNotes 5.0</pwcdataset>
 <pwcdataset url="https://paperswithcode.com/dataset/snips">SNIPS</pwcdataset>
+<revision id="1" href="2020.emnlp-main.26v1" hash="09d22bbc"/>
+<revision id="2" href="2020.emnlp-main.26v2" hash="3ba95a3f" date="2024-05-07">Added a few missing citations and fixed results of a previously wrong implementation of one secondary evaluation metric.</revision>
 </paper>
 <paper id="27">
 <title>Augmented Natural Language for Generative Sequence Labeling</title>

data/xml/2020.lrec.xml

Lines changed: 1 addition & 1 deletion
@@ -5590,7 +5590,7 @@
 <paper id="446">
 <title><fixed-case>NMT</fixed-case> and <fixed-case>PBSMT</fixed-case> Error Analyses in <fixed-case>E</fixed-case>nglish to <fixed-case>B</fixed-case>razilian <fixed-case>P</fixed-case>ortuguese Automatic Translations</title>
 <author><first>Helena</first><last>Caseli</last></author>
-<author><first>Marcio</first><last>Inácio</last></author>
+<author><first>Marcio</first><last>Lima Inácio</last></author>
 <pages>3623–3629</pages>
 <abstract>Machine Translation (MT) is one of the most important natural language processing applications. Independently of the applied MT approach, a MT system automatically generates an equivalent version (in some target language) of an input sentence (in some source language). Recently, a new MT approach has been proposed: neural machine translation (NMT). NMT systems have already outperformed traditional phrase-based statistical machine translation (PBSMT) systems for some pairs of languages. However, any MT approach outputs errors. In this work we present a comparative study of MT errors generated by a NMT system and a PBSMT system trained on the same English – Brazilian Portuguese parallel corpus. This is the first study of this kind involving NMT for Brazilian Portuguese. Furthermore, the analyses and conclusions presented here point out the specific problems of NMT outputs in relation to PBSMT ones and also give lots of insights into how to implement automatic post-editing for a NMT system. Finally, the corpora annotated with MT errors generated by both PBSMT and NMT systems are also available.</abstract>
 <url hash="02cdcab2">2020.lrec-1.446</url>

data/xml/2021.ranlp.xml

Lines changed: 1 addition & 1 deletion
@@ -798,7 +798,7 @@
 </paper>
 <paper id="70">
 <title>Semantic-Based Opinion Summarization</title>
-<author><first>Marcio</first><last>Inácio</last></author>
+<author><first>Marcio</first><last>Lima Inácio</last></author>
 <author><first>Thiago</first><last>Pardo</last></author>
 <pages>619–628</pages>
 <abstract>The amount of information available online can be overwhelming for users to digest, specially when dealing with other users’ comments when making a decision about buying a product or service. In this context, opinion summarization systems are of great value, extracting important information from the texts and presenting them to the user in a more understandable manner. It is also known that the usage of semantic representations can benefit the quality of the generated summaries. This paper aims at developing opinion summarization methods based on Abstract Meaning Representation of texts in the Brazilian Portuguese language. Four different methods have been investigated, alongside some literature approaches. The results show that a Machine Learning-based method produced summaries of higher quality, outperforming other literature techniques on manually constructed semantic graphs. We also show that using parsed graphs over manually annotated ones harmed the output. Finally, an analysis of how important different types of information are for the summarization process suggests that using Sentiment Analysis features did not improve summary quality.</abstract>

data/xml/2022.emnlp.xml

Lines changed: 2 additions & 1 deletion
@@ -981,7 +981,7 @@
 </paper>
 <paper id="74">
 <title>Deconfounding Legal Judgment Prediction for <fixed-case>E</fixed-case>uropean Court of Human Rights Cases Towards Better Alignment with Experts</title>
-<author><first>T.y.s.s</first><last>Santosh</last><affiliation>Technical University of Munich</affiliation></author>
+<author><first>Santosh</first><last>T.y.s.s</last><affiliation>Technical University of Munich</affiliation></author>
 <author><first>Shanshan</first><last>Xu</last><affiliation>Technical University of Munich</affiliation></author>
 <author><first>Oana</first><last>Ichim</last><affiliation>Graduate Institute of International and Development Studies</affiliation></author>
 <author><first>Matthias</first><last>Grabmair</last><affiliation>Technical University of Munich</affiliation></author>
@@ -3307,6 +3307,7 @@
 <paper id="252">
 <title>How to disagree well: Investigating the dispute tactics used on <fixed-case>W</fixed-case>ikipedia</title>
 <author><first>Christine</first><last>De Kock</last><affiliation>University of Cambridge</affiliation></author>
+<author><first>Tom</first><last>Stafford</last><affiliation>University of Cambridge</affiliation></author>
 <author><first>Andreas</first><last>Vlachos</last><affiliation>University of Cambridge</affiliation></author>
 <pages>3824-3837</pages>
 <abstract>Disagreements are frequently studied from the perspective of either detecting toxicity or analysing argument structure. We propose a framework of dispute tactics which unifies these two perspectives, as well as other dialogue acts which play a role in resolving disputes, such as asking questions and providing clarification. This framework includes a preferential ordering among rebuttal-type tactics, ranging from ad hominem attacks to refuting the central argument. Using this framework, we annotate 213 disagreements (3,865 utterances) from Wikipedia Talk pages. This allows us to investigate research questions around the tactics used in disagreements; for instance, we provide empirical validation of the approach to disagreement recommended by Wikipedia. We develop models for multilabel prediction of dispute tactics in an utterance, achieving the best performance with a transformer-based label powerset model. Adding an auxiliary task to incorporate the ordering of rebuttal tactics further yields a statistically significant increase. Finally, we show that these annotations can be used to provide useful additional signals to improve performance on the task of predicting escalation.</abstract>

data/xml/2022.findings.xml

Lines changed: 3 additions & 1 deletion
@@ -4359,11 +4359,13 @@
 <author><first>Ashutosh</first><last>Modi</last></author>
 <pages>3521-3536</pages>
 <abstract>Many populous countries including India are burdened with a considerable backlog of legal cases. Development of automated systems that could process legal documents and augment legal practitioners can mitigate this. However, there is a dearth of high-quality corpora that is needed to develop such data-driven systems. The problem gets even more pronounced in the case of low resource languages such as Hindi. In this resource paper, we introduce the Hindi Legal Documents Corpus (HLDC), a corpus of more than 900K legal documents in Hindi. Documents are cleaned and structured to enable the development of downstream applications. Further, as a use-case for the corpus, we introduce the task of bail prediction. We experiment with a battery of models and propose a Multi-Task Learning (MTL) based model for the same. MTL models use summarization as an auxiliary task along with bail prediction as the main task. Experiments with different models are indicative of the need for further research in this area.</abstract>
-<url hash="0e114075">2022.findings-acl.278</url>
+<url hash="65199027">2022.findings-acl.278</url>
 <attachment type="software" hash="c763b971">2022.findings-acl.278.software.zip</attachment>
 <bibkey>kapoor-etal-2022-hldc</bibkey>
 <doi>10.18653/v1/2022.findings-acl.278</doi>
 <pwccode url="https://github.com/exploration-lab/hldc" additional="false">exploration-lab/hldc</pwccode>
+<revision id="1" href="2022.findings-acl.278v1" hash="0e114075"/>
+<revision id="2" href="2022.findings-acl.278v2" hash="65199027" date="2024-05-17">This revision updates funding information in the Acknowledgements section of the paper.</revision>
 </paper>
 <paper id="279">
 <title>Rethinking Document-level Neural Machine Translation</title>

data/xml/2022.iwslt.xml

Lines changed: 1 addition & 1 deletion
@@ -478,7 +478,7 @@
 <author><first>Patrick</first><last>Fernandes</last></author>
 <author><first>Siddharth</first><last>Dalmia</last></author>
 <author><first>Jiatong</first><last>Shi</last></author>
-<author><first>Yifan</first><last>Peng</last></author>
+<author id="yifan-peng-cmu"><first>Yifan</first><last>Peng</last></author>
 <author><first>Dan</first><last>Berrebbi</last></author>
 <author><first>Xinyi</first><last>Wang</last></author>
 <author><first>Graham</first><last>Neubig</last></author>

data/xml/2023.acl.xml

Lines changed: 5 additions & 5 deletions
@@ -6061,7 +6061,7 @@
 </paper>
 <paper id="424">
 <title>Answering Ambiguous Questions via Iterative Prompting</title>
-<author><first>Weiwei</first><last>Sun</last><affiliation>Shandong University</affiliation></author>
+<author id="weiwei-sun-sd"><first>Weiwei</first><last>Sun</last><affiliation>Shandong University</affiliation></author>
 <author><first>Hengyi</first><last>Cai</last><affiliation>JD.com</affiliation></author>
 <author><first>Hongshen</first><last>Chen</last><affiliation>JD.com</affiliation></author>
 <author><first>Pengjie</first><last>Ren</last><affiliation>Shandong University</affiliation></author>
@@ -10357,7 +10357,7 @@
 <paper id="719">
 <title><fixed-case>RADE</fixed-case>: Reference-Assisted Dialogue Evaluation for Open-Domain Dialogue</title>
 <author><first>Zhengliang</first><last>Shi</last><affiliation>Shandong University</affiliation></author>
-<author><first>Weiwei</first><last>Sun</last><affiliation>Shandong University</affiliation></author>
+<author id="weiwei-sun-sd"><first>Weiwei</first><last>Sun</last><affiliation>Shandong University</affiliation></author>
 <author><first>Shuo</first><last>Zhang</last><affiliation>Bloomberg</affiliation></author>
 <author><first>Zhen</first><last>Zhang</last><affiliation>Shandong University</affiliation></author>
 <author><first>Pengjie</first><last>Ren</last><affiliation>School of Computer Science and Technology, Shandong University</affiliation></author>
@@ -12556,7 +12556,7 @@
 <paper id="873">
 <title>Estimating the Uncertainty in Emotion Attributes using Deep Evidential Regression</title>
 <author><first>Wen</first><last>Wu</last><affiliation>University of Cambridge</affiliation></author>
-<author><first>Chao</first><last>Zhang</last><affiliation>Tsinghua University</affiliation></author>
+<author id="chao-zhang-tu"><first>Chao</first><last>Zhang</last><affiliation>Tsinghua University</affiliation></author>
 <author><first>Philip</first><last>Woodland</last><affiliation>University of Cambridge</affiliation></author>
 <pages>15681-15695</pages>
 <abstract>In automatic emotion recognition (AER), labels assigned by different human annotators to the same utterance are often inconsistent due to the inherent complexity of emotion and the subjectivity of perception. Though deterministic labels generated by averaging or voting are often used as the ground truth, it ignores the intrinsic uncertainty revealed by the inconsistent labels. This paper proposes a Bayesian approach, deep evidential emotion regression (DEER), to estimate the uncertainty in emotion attributes. Treating the emotion attribute labels of an utterance as samples drawn from an unknown Gaussian distribution, DEER places an utterance-specific normal-inverse gamma prior over the Gaussian likelihood and predicts its hyper-parameters using a deep neural network model. It enables a joint estimation of emotion attributes along with the aleatoric and epistemic uncertainties. AER experiments on the widely used MSP-Podcast and IEMOCAP datasets showed DEER produced state-of-the-art results for both the mean values and the distribution of emotion attributes.</abstract>
@@ -14871,7 +14871,7 @@
 </paper>
 <paper id="132">
 <title><fixed-case>MOSPC</fixed-case>: <fixed-case>MOS</fixed-case> Prediction Based on Pairwise Comparison</title>
-<author><first>Kexin</first><last>Wang</last><affiliation>Bytedance</affiliation></author>
+<author id="kexin-wang-bd"><first>Kexin</first><last>Wang</last><affiliation>Bytedance</affiliation></author>
 <author><first>Yunlong</first><last>Zhao</last><affiliation>Institute of Automation, Chinese Academy of Sciences</affiliation></author>
 <author><first>Qianqian</first><last>Dong</last><affiliation>ByteDance AI Lab</affiliation></author>
 <author><first>Tom</first><last>Ko</last><affiliation>ByteDance AI Lab</affiliation></author>
@@ -15923,7 +15923,7 @@
 <author><first>Jiatong</first><last>Shi</last><affiliation>Carnegie Mellon University</affiliation></author>
 <author><first>Yun</first><last>Tang</last><affiliation>Facebook</affiliation></author>
 <author><first>Hirofumi</first><last>Inaguma</last><affiliation>Meta AI</affiliation></author>
-<author><first>Yifan</first><last>Peng</last><affiliation>Carnegie Mellon University</affiliation></author>
+<author id="yifan-peng-cmu"><first>Yifan</first><last>Peng</last><affiliation>Carnegie Mellon University</affiliation></author>
 <author><first>Siddharth</first><last>Dalmia</last><affiliation>Google</affiliation></author>
 <author><first>Peter</first><last>Polák</last><affiliation>Charles University, MFF UFAL</affiliation></author>
 <author><first>Patrick</first><last>Fernandes</last><affiliation>Carnegie Mellon University, Instituto de Telecomunicações</affiliation></author>

data/xml/2023.ccl.xml

Lines changed: 1 addition & 1 deletion
@@ -1626,7 +1626,7 @@
 </paper>
 <paper id="3">
 <title>Studying Language Processing in the Human Brain with Speech and Language Models</title>
-<author><first>Zhang</first><last>Chao</last></author>
+<author id="chao-zhang-tu"><first>Zhang</first><last>Chao</last></author>
 <author><first>Thwaites</first><last>Andrew</last></author>
 <author><first>Wingfield</first><last>Cai</last></author>
 <pages>17–23</pages>
