<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>Research</title>
<meta name="author" content="Vlad" />
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/katex@0.12.0/dist/katex.min.css" integrity="sha384-AfEj0r4/OFrOo5t7NnNe46zW/tFgW6x/bCJG8FqQCEo3+Aro6EYUG4+cU+KJWu/X" crossorigin="anonymous">
<script defer src="https://cdn.jsdelivr.net/npm/katex@0.12.0/dist/katex.min.js"
integrity="sha384-g7c+Jr9ZivxKLnZTDUhnkOnsh30B4H0rpLUpJ4jAIKs4fnJI+sEnkvrMWph2EDg4"
crossorigin="anonymous"></script>
<link rel="stylesheet" type="text/css"
href="//vene.ro/theme/css/main.css" />
<link rel="stylesheet" type="text/css"
href="//vene.ro/theme/css/pygment.css" />
<link rel="stylesheet" type="text/css"
href="//vene.ro/theme/css/typogrify.css" />
<link rel="shortcut icon" href="//vene.ro/favicon.ico" />
<link href="//vene.ro/" type="application/atom+xml"
rel="alternate" title="Vlad Niculae ALL Atom Feed" />
<link href="//fonts.googleapis.com/css?family=PT+Mono|PT+Serif" rel="stylesheet">
<meta property="og:type" content="website" />
<meta name="twitter:creator" content="@vnfrombucharest" />
<meta name="twitter:card" content="summary">
<!-- OpenGraph Info -->
<meta property="og:title" content="Vlad Niculae"/>
<meta property="og:description" content=""/>
<meta property="og:url" content="//vene.ro"/>
<script src="//vene.ro/theme/js/main.js"></script>
</head>
<body>
<div id="container">
<header>
<nav class="navmenu" id="navmenu">
<ul>
<li id="homelink"><a href="/">Vlad Niculae</a>
</li><li class="menu"><a href="//vene.ro/papers.html">Research</a>
</li><li class="menu"><a href="//vene.ro/blog/">Blog</a>
</li><li class="menu"><a href="//vene.ro/teaching.html">Teaching</a>
</li><li class="menu"><a href="//vene.ro/students.html">Students</a>
</li>
</ul>
</nav>
</header>
<div id="main">
<h1 id="my-research">My research<a class="headerlink" href="#my-research" title="Permanent link">¶</a></h1>
<h2 id="projects">Projects<a class="headerlink" href="#projects" title="Permanent link">¶</a></h2>
<p>I am currently working on the following funded research projects:</p>
<ul>
<li><span class="caps">NWO</span> <span class="caps">VI</span>.Veni.212.228 “Rethinking Natural Language Generation: Bridging the Gap Between Discrete and Continuous Representations”</li>
<li>Horizon Europe 101070631 “<span class="caps">UTTER</span>: Unified Transcription and Translation for
Extended Reality”</li>
<li>Trustworthy <span class="caps">AI</span> for Media (<span class="caps">TAIM</span>)</li>
</ul>
<h2 id="publications">Publications<a class="headerlink" href="#publications" title="Permanent link">¶</a></h2>
<p>(<a href="https://scholar.google.com/citations?user=7_3UAgQAAAAJ">Google Scholar</a>,
<a href="https://www.semanticscholar.org/author/2114966">Semantic Scholar</a>,
<a href="https://aclweb.org/anthology/people/vlad-niculae"><span class="caps">ACL</span> Anthology</a> links.)</p>
<p><em>Book-like things</em>:</p>
<ul>
<li>
<p>Vlad Niculae, Caio F. Corro, Nikita Nangia, Tsvetomila Mihaylova,
André <span class="caps">F. T.</span> Martins. Discrete Latent Structure in Neural Networks. 2023.
[ <a href="https://arxiv.org/abs/2301.07473">arXiv preprint</a> ]</p>
</li>
<li>
<p>Vlad Niculae.
<em>Learning Deep Models with Linguistically-Inspired Structure.</em>
Doctoral dissertation, Cornell University. 2018.
[ <a href="https://doi.org/10.7298/X4SJ1HVQ">open access doi</a> ]</p>
</li>
</ul>
<p><em>Papers</em>:</p>
<ul>
<li>
<p>Evgeniia Tokarchuk, Vlad Niculae.
The unreasonable effectiveness of random target embeddings for
continuous-output neural machine translation.
<em><span class="caps">NAACL</span>, 2024 (to appear)</em>.
[ <a href="https://arxiv.org/abs/2310.20620">arXiv preprint</a> ]</p>
</li>
<li>
<p>Wafaa Mohammed, Vlad Niculae.
On measuring context utilization in document-level <span class="caps">MT</span> systems.
<em><span class="caps">EACL</span> Findings, 2024</em>.
[ <a href="https://aclanthology.org/2024.findings-eacl.113/">anthology</a> ]
[ <a href="https://arxiv.org/abs/2402.01404">arXiv</a> ]</p>
</li>
<li>
<p>Sergey Troshin, Vlad Niculae.
Wrapped β-Gaussians with compact support for exact probabilistic modeling on manifolds.
<em><span class="caps">TMLR</span> 2023</em>.
[ <a href="https://openreview.net/pdf?id=KrequDpWzt">pdf</a> ]
[ <a href="https://github.com/ltl-uva/wbg">code</a> ]
[ <a href="https://openreview.net/forum?id=KrequDpWzt">openreview</a> ]</p>
</li>
<li>
<p>Vlad Niculae.
Two derivations of Principal Component Analysis on datasets of distributions.
2023.
[ <a href="https://arxiv.org/abs/2306.13503">arXiv preprint</a> ]</p>
</li>
<li>
<p>Ali Araabi, Vlad Niculae, Christof Monz.
Joint Dropout: Improving generalizability in
low-resource neural machine translation through phrase pair variables.
In: <em><span class="caps">MT</span> Summit 2023</em>.
[ <a href="https://aclanthology.org/2023.mtsummit-research.2/">anthology</a> ]
[ <a href="https://arxiv.org/abs/2307.12835">arXiv</a> ]</p>
</li>
<li>
<p>David Stap, Vlad Niculae, Christof Monz.
Viewing knowledge transfer in multilingual machine translation
through a representational lens. <em>Findings of the <span class="caps">ACL</span>, 2023</em>.
[ <a href="https://arxiv.org/abs/2305.11550">arXiv</a> ]</p>
</li>
<li>
<p>Valentina Zantedeschi, Luca Franceschi, Jean Kaddour, Matt J Kusner, Vlad Niculae.
<span class="caps">DAG</span> learning on the Permutahedron.
In: <em><span class="caps">ICLR</span> 2023</em>.
[ <a href="https://arxiv.org/abs/2301.11898">arXiv</a> ]</p>
</li>
<li>
<p>André <span class="caps">F. T.</span> Martins, Marcos Treviso, António Farinhas, Pedro <span class="caps">M. Q.</span> Aguiar,
Mário <span class="caps">A. T.</span> Figueiredo, Mathieu Blondel, Vlad Niculae.
Sparse continuous distributions and Fenchel-Young losses. <em><span class="caps">JMLR</span> 2022</em>.
[ <a href="https://www.jmlr.org/papers/v23/21-0879.html">jmlr abs</a> ]
[ <a href="https://arxiv.org/abs/2108.01988">arXiv preprint</a> ]
[ <a href="https://github.com/deep-spin/sparse_continuous_distributions/">code</a> ]</p>
</li>
<li>
<p>Ali Araabi, Christof Monz, Vlad Niculae.
How effective is Byte Pair Encoding for out-of-vocabulary words in Neural Machine Translation?
In: <em><span class="caps">AMTA</span> 2022</em>.
[ <a href="https://aclanthology.org/2022.amta-research.9/">anthology</a> ]</p>
</li>
<li>
<p>Evgeniia Tokarchuk, Vlad Niculae.
On target representation in continuous-output Neural Machine Translation.
In: <em>Repl4NLP 2022: Workshop on Representation Learning for <span class="caps">NLP</span></em>.
[ <a href="https://aclanthology.org/2022.repl4nlp-1.24">anthology</a> ]</p>
</li>
<li>
<p>Valentina Zantedeschi, Jean Kaddour, Luca Franceschi, Matt Kusner, Vlad Niculae.
<span class="caps">DAG</span> learning on the Permutahedron.
In: <em><span class="caps">ICLR</span> 2022 Workshop on the Elements of Reasoning: Objects, Structure and Causality</em>.
[ <a href="https://openreview.net/pdf?id=S8X8vS_85gc">pdf</a> ]</p>
</li>
<li>
<p>Tsvetomila Mihaylova, Vlad Niculae, André <span class="caps">F. T.</span> Martins.
Modeling structure with Undirected Neural Networks.
In: Proc. of <em><span class="caps">ICML</span> 2022</em>.
[ <a href="https://arxiv.org/abs/2202.03760">arXiv</a> ]</p>
</li>
<li>
<p>António Farinhas, Wilker Aziz, Vlad Niculae, André <span class="caps">F. T.</span> Martins.
Sparse communication via mixed distributions.
In: Proc. of <em><span class="caps">ICLR</span> 2022</em>.
[ <a href="https://arxiv.org/abs/2108.02658">arXiv</a> ]</p>
</li>
<li>
<p>Valentina Zantedeschi, Matt J. Kusner, Vlad Niculae.
Learning binary trees by argmin differentiation.
In: Proc. of <em><span class="caps">ICML</span> 2021</em>.
[ <a href="https://arxiv.org/abs/2010.04627">arXiv</a> ]
[ <a href="https://github.com/vzantedeschi/LatentTrees">code</a> ]</p>
</li>
<li>
<p>Pedro Henrique Martins, Vlad Niculae, Zita Marinho, André <span class="caps">F. T.</span> Martins.
Sparse and structured visual attention.
In: Proc. of <em><span class="caps">ICIP</span> 2021</em>, <span class="caps">IEEE</span>.
[ <a href="https://arxiv.org/abs/2002.05556">arXiv</a> ]</p>
</li>
<li>
<p>André <span class="caps">F. T.</span> Martins, Marcos Treviso, António Farinhas, Vlad Niculae, Mário <span class="caps">A.
T.</span> Figueiredo, Pedro <span class="caps">M. Q.</span> Aguiar.
Sparse and Continuous Attention Mechanisms. In: Proc. of <em>NeurIPS 2020</em>.
[ <a href="https://arxiv.org/abs/2006.07214">arXiv</a> ]</p>
</li>
<li>
<p>Mathieu Blondel, André <span class="caps">F. T.</span> Martins, Vlad Niculae.
Learning with Fenchel-Young losses. <em><span class="caps">JMLR</span> 2020</em>.
[ <a href="https://arxiv.org/abs/1901.02324">arXiv</a> ]
[ <a href="https://github.com/mblondel/fenchel-young-losses">code</a> ]</p>
</li>
<li>
<p>Tsvetomila Mihaylova, Vlad Niculae, André <span class="caps">F. T.</span> Martins.
Understanding the mechanics of <span class="caps">SPIGOT</span>: Surrogate gradients for latent structure
learning.
In: Proc. of <em><span class="caps">EMNLP</span> 2020</em>.
[ <a href="https://www.aclweb.org/anthology/2020.emnlp-main.171/">anthology</a> ]</p>
</li>
<li>
<p>Gonçalo M. Correia, Vlad Niculae, Wilker Aziz, André <span class="caps">F. T.</span> Martins.
Efficient Marginalization of Discrete and Structured Latent Variables via Sparsity.
In: Proc. of <em>NeurIPS 2020</em>.
[ <a href="https://arxiv.org/abs/2007.01919">arXiv</a> ]
[ <a href="https://github.com/deep-spin/sparse-marginalization-lvm">code</a> ]</p>
</li>
<li>
<p>Vlad Niculae and André <span class="caps">F. T.</span> Martins.
<em><span class="caps">LP</span>-SparseMAP:</em> Differentiable relaxed optimization for sparse structured
prediction. In: Proc. of <em><span class="caps">ICML</span> 2020</em>
[ <a href="https://arxiv.org/abs/2001.04437">arXiv</a> ]
[ <a href="https://github.com/deep-spin/lp-sparsemap">code</a> ]</p>
</li>
<li>
<p>Gonçalo M. Correia, Vlad Niculae, André <span class="caps">F. T.</span> Martins.
Adaptively sparse transformers.
In: Proc. of <em><span class="caps">EMNLP</span>-<span class="caps">IJCNLP</span> 2019</em>
[ <a href="https://arxiv.org/abs/1909.00015">arXiv</a> ]
[ <a href="https://github.com/deep-spin/entmax">code</a> ]</p>
</li>
<li>
<p>Ben Peters, Vlad Niculae, André <span class="caps">F. T.</span> Martins.
Sparse sequence-to-sequence models.
In: Proc. of <em><span class="caps">ACL</span> 2019</em>.
[ <a href="https://www.aclweb.org/anthology/P19-1146/">anthology</a> ]
[ <a href="https://arxiv.org/abs/1905.05702">arXiv</a> ]
[ <a href="https://github.com/deep-spin/entmax">code</a> ]</p>
</li>
<li>
<p>Mathieu Blondel, André <span class="caps">F. T.</span> Martins, Vlad Niculae.
Learning classifiers with Fenchel-Young losses: Generalized entropies, margins,
and algorithms. In: Proc. of <em><span class="caps">AISTATS</span> 2019</em>.
[ <a href="https://arxiv.org/abs/1805.09717">arXiv</a> ]
[ <a href="https://github.com/mblondel/fenchel-young-losses">code</a> ]</p>
</li>
<li>
<p>Vlad Niculae, André <span class="caps">F. T.</span> Martins, Mathieu Blondel, Claire Cardie.
<em>SparseMAP:</em> Differentiable sparse structured inference.
In: Proc. of <em><span class="caps">ICML</span> 2018</em>.
[ <a href="https://arxiv.org/abs/1802.04223">arXiv</a> ]
[ <a href="https://github.com/vene/sparsemap">code</a> ]
[ <a href="/talks/sparsemap-icml18-talk.pdf">slides</a> ]
[ <a href="https://vimeo.com/294661122">video</a> ]</p>
</li>
<li>
<p>Vlad Niculae, André <span class="caps">F. T.</span> Martins, Claire Cardie.
Towards dynamic computation graphs via sparse latent structure.
In: Proc. of <em><span class="caps">EMNLP</span> 2018</em>.
[ <a href="https://aclweb.org/anthology/papers/D/D18/D18-1108/">anthology</a> ]
[ <a href="https://arxiv.org/abs/1809.00653">arXiv</a> ]
[ <a href="https://github.com/vene/sparsemap/tree/master/cpp">code</a> ]
[ <a href="/talks/18-sparsemap-emnlp.pdf">slides</a> ]
[ <a href="https://vimeo.com/305198410">video</a> ]</p>
</li>
<li>
<p>Vlad Niculae and Mathieu Blondel.
A regularized framework for sparse and structured neural attention.
In: Proc. of <em>NeurIPS 2017</em>.
[ <a href="https://arxiv.org/abs/1705.07704">arXiv</a> ]
[ <a href="https://github.com/vene/sparse-structured-attention">code</a> ]</p>
</li>
<li>
<p>Mathieu Blondel, Vlad Niculae, Takuma Otsuka, Naonori Ueda.
Multi-output polynomial networks and factorization machines. In: Proc.
of <em>NeurIPS 2017</em>.
[ <a href="https://arxiv.org/abs/1705.07603">arXiv</a> ]</p>
</li>
<li>
<p>Vlad Niculae, Joonsuk Park, Claire Cardie.
Argument mining with structured SVMs and RNNs. In: Proc. of <em><span class="caps">ACL</span> 2017</em>.
[ <a href="https://aclweb.org/anthology/papers/P/P17/P17-1091/">anthology</a> ]
[ <a href="https://arxiv.org/abs/1704.06869">arXiv</a> ]
[ <a href="https://github.com/vene/marseille">code</a> ]
[ <a href="http://joonsuk.org/">data</a> ]
[ <a href="https://vimeo.com/234957758">video</a> ]</p>
</li>
<li>
<p>Vlad Niculae and Cristian Danescu-Niculescu-Mizil.
Conversational markers of constructive discussions. In: Proc. of <em><span class="caps">NAACL</span>-<span class="caps">HLT</span> 2016</em>.
[ <a href="/constructive">website</a> ]
[ <a href="https://aclweb.org/anthology/papers/N/N16/N16-1070/">anthology</a> ]</p>
</li>
<li>
<p>Chenhao Tan, Vlad Niculae, Cristian Danescu-Niculescu-Mizil, Lillian Lee.
Winning Arguments: Interaction Dynamics and Persuasion Strategies in Good-faith Online Discussions. In: Proc. of <em><span class="caps">WWW</span> 2016</em>.
[ <a href="https://chenhaot.com/pages/changemyview.html">website</a> ]</p>
</li>
<li>
<p>Vlad Niculae, Srijan Kumar, Jordan Boyd-Graber, Cristian Danescu-Niculescu-Mizil. Linguistic harbingers of betrayal: A case study
on an online strategy game. In: Proc. of <em><span class="caps">ACL</span> 2015</em>.
[ <a href="/betrayal">website</a> ]
[ <a href="https://aclweb.org/anthology/papers/P/P15/P15-1159/">anthology</a> ]</p>
</li>
<li>
<p>Vlad Niculae*, Caroline Suen*, Justine Zhang*, Cristian Danescu-Niculescu-Mizil, Jure Leskovec. <em><span class="caps">QUOTUS</span>:</em> The structure of political media coverage as revealed by quoting patterns. In: Proc. of <em><span class="caps">WWW</span> 2015</em>.
[ <a href="http://snap.stanford.edu/quotus/">website</a> ]
[ <a href="papers/quotus-talk-vlad-web.pdf">slides</a> ]</p>
</li>
<li>
<p>Marcos Zampieri, Alina Maria Ciobanu, Vlad Niculae, Liviu P. Dinu.
<em><span class="caps">AMBRA</span>:</em> A ranking approach to temporal text classification.
In: Proc. of <em>Semeval 2015</em>.
[ <a href="https://aclweb.org/anthology/papers/S/S15/S15-2144/">anthology</a> ]
[ <a href="http://github.com/vene/ambra">code</a> ]</p>
</li>
<li>
<p>Vlad Niculae and Cristian Danescu-Niculescu-Mizil.
<em>Brighter than Gold:</em> Figurative language in user generated comparisons.
In: Proc. of <em><span class="caps">EMNLP</span> 2014</em>.
[ <a href="/figurative-comparisons">website</a> ]
[ <a href="https://www.aclweb.org/anthology/D14-1215/">anthology</a> ]</p>
</li>
<li>
<p>Vlad Niculae, Marcos Zampieri, Liviu P. Dinu, Alina Maria Ciobanu.
Temporal text ranking and automatic dating of texts. In: Proc. of <em><span class="caps">EACL</span> 2014</em>.
[ <a href="https://aclweb.org/anthology/papers/W/W13/W13-2714/">anthology</a> ]
[ <a href="papers/eacl14-temporal-slides.pdf">slides</a> ]</p>
</li>
<li>
<p>Vlad Niculae. Comparison pattern matching and creative simile recognition. In:
Proc. of <em><span class="caps">JSSP</span> 2013</em>.
[ <a href="http://aclweb.org/anthology/W/W13/W13-3829/">anthology</a> ]
[ <a href="papers/jssp13-similes-poster.pdf">poster</a> ]
[ <a href="https://github.com/vene/comparison-pattern">code</a> ]</p>
</li>
<li>
<p>Vlad Niculae and Octavian Popescu. Determining <em>is-a</em> relationships for textual
entailment. In: Proc. of <em><span class="caps">JSSP</span> 2013</em>.
[ <a href="http://aclweb.org/anthology/W/W13/W13-3830.pdf">anthology</a> ]
[ <a href="papers/jssp-rte-poster.pdf">poster</a> ]</p>
</li>
<li>
<p>Lars Buitinck, Gilles Louppe, Mathieu Blondel, Fabian Pedregosa, Andreas
Müller, Olivier Grisel, Vlad Niculae, Peter Prettenhofer, Alexandre Gramfort,
Jaques Grobler, Robert Layton, Jake Vanderplas, Arnaud Joly, Brian Holt and
Gaël Varoquaux.
<span class="caps">API</span> design for machine learning software: experiences from the scikit-learn
project. In: <em><span class="caps">ECML</span>/<span class="caps">PKDD</span> 2013 Workshop: Languages for Data Mining and Machine
Learning</em>.
[ <a href="http://orbi.ulg.ac.be/bitstream/2268/154357/1/paper.pdf"><span class="caps">PDF</span></a> ]</p>
</li>
<li>
<p>Vlad Niculae, Victoria Yaneva.
Computational considerations of comparisons and similes. In: <em>Proceedings of <span class="caps">ACL</span>
Student Research Workshop</em>, 2013.
[ <a href="https://aclweb.org/anthology/papers/P/P13/P13-3013/">anthology</a> ]
[ <a href="papers/aclsrw13-poster.pdf">poster</a> ]</p>
</li>
</ul>
</div>
<footer>
<p>Powered by <a href="http://pelican.readthedocs.org">Pelican</a>.
<a href="/privacy.html">Privacy policy</a>.</p>
</footer>
</div>
</body>
</html>