# supernova.yaml
- bibtex: |-
@inproceedings{wangFarsightFosteringResponsible2024,
title = {Farsight: {{Fostering Responsible AI Awareness During AI Application Prototyping}}},
booktitle = {{{CHI Conference}} on {{Human Factors}} in {{Computing Systems}}},
author = {Wang, Zijie J. and Kulkarni, Chinmay and Wilcox, Lauren and Terry, Michael and Madaio, Michael},
year = {2024}
}
bibtexKey: wangFarsightFosteringResponsible2024
communication: one-way
description:
"Prompt-based interfaces for Large Language Models (LLMs) have made
prototyping and building AI-powered applications easier than ever before.
However, identifying potential harms that may arise from AI applications
remains a challenge, particularly during prompt-based prototyping. To
address this, we present Farsight, a novel in situ interactive tool that
helps people identify potential harms from the AI applications they are
prototyping. Based on a user's prompt, Farsight highlights news articles
about relevant AI incidents and allows users to explore and edit
LLM-generated use cases, stakeholders, and harms. We report design insights
from a co-design study with 10 AI prototypers and findings from a user study
with 42 AI prototypers. After using Farsight, AI prototypers in our user
study are better able to independently identify potential harms associated
with a prompt and find our tool more useful and usable than existing
resources. Their qualitative feedback also highlights that Farsight
encourages them to focus on end-users and think beyond immediate harms. We
discuss these findings and reflect on their implications for designing AI
prototyping experiences that meaningfully engage with AI harms. Farsight is
publicly accessible at: https://pair-code.github.io/farsight."
githubURL: https://github.com/PAIR-code/farsight
implementation: nova
layouts:
- on-demand
materials:
- runtime
modularity: modular
name: farsight
nameDisplay: Farsight
otherURLs: []
paperURL: https://arxiv.org/abs/2402.15350
releaseYear: 2024
sourceType: paper
supportedNotebooks:
- jupyter
- lab
- colab
thumbnail: farsight.jpg
user: data scientist
- bibtex:
"@inproceedings{wuB2BridgingCode2020,\n title = {B2: {{Bridging Code}}\
\ and {{Interactive Visualization}} in {{Computational Notebooks}}},\n shorttitle\
\ = {B2},\n booktitle = {{{UIST}}},\n author = {Wu, Yifan and Hellerstein, Joseph\
\ M. and Satyanarayan, Arvind},\n year = {2020},\n doi = {10.1145/3379337.3415851},\n\
\ url = {https://dl.acm.org/doi/10.1145/3379337.3415851},\n urldate = {2021-09-15},\n\
\ langid = {english}\n}\n"
bibtexKey: wuB2BridgingCode2020
communication: two-way
description:
"Data scientists have embraced computational notebooks to author analysis
code and accompanying visualizations within a single document. Currently, although
these media may be interleaved, they remain siloed: interactive visualizations
must be manually specified as they are divorced from the analysis provenance expressed
via dataframes, while code cells have no access to users' interactions with visualizations,
and hence no way to operate on the results of interaction. To bridge this divide,
we present B2, a set of techniques grounded in treating data queries as a shared
representation between the code and interactive visualizations. B2 instruments
data frames to track the queries expressed in code and synthesize corresponding
visualizations. These visualizations are displayed in a dashboard to facilitate
interactive analysis. When an interaction occurs, B2 reifies it as a data query
and generates a history log in a new code cell. Subsequent cells can use this
log to further analyze interaction results and, when marked as reactive, to ensure
that code is automatically recomputed when new interaction occurs. In an evaluative
study with data scientists, we find that B2 promotes a tighter feedback loop between
coding and interacting with visualizations. All participants frequently moved
from code to visualization and vice-versa, which facilitated their exploratory
data analysis in the notebook."
githubURL: https://github.com/yifanwu/b2
implementation: ipywidget
layouts:
- always-on
materials:
- runtime
- code
modularity: monolithic
name: b2
nameDisplay: B2
otherURLs: []
paperURL: https://dl.acm.org/doi/abs/10.1145/3379337.3415851
releaseYear: 2020
sourceType: paper
supportedNotebooks:
- jupyter
- lab
thumbnail: b2.webp
user: data scientist
- bibtex:
"@inproceedings{bauerleSymphonyComposingInteractive2022,\n title = {Symphony:\
\ {{Composing Interactive Interfaces}} for {{Machine Learning}}},\n shorttitle\
\ = {Symphony},\n booktitle = {{{CHI}}},\n author = {B{\\\"a}uerle, Alex and\
\ Cabrera, {\\'A}ngel Alexander and Hohman, Fred and Maher, Megan and Koski, David\
\ and Suau, Xavier and Barik, Titus and Moritz, Dominik},\n year = {2022},\n\
\ doi = {10.1145/3491102.3502102},\n url = {https://dl.acm.org/doi/10.1145/3491102.3502102},\n\
\ urldate = {2022-08-24},\n langid = {english}\n}\n"
bibtexKey: bauerleSymphonyComposingInteractive2022
communication: one-way
description:
Interfaces for machine learning (ML), information and visualizations
about models or data, can help practitioners build robust and responsible ML systems.
Despite their benefits, recent studies of ML teams and our interviews with practitioners
(n=9) showed that ML interfaces have limited adoption in practice. While existing
ML interfaces are effective for specific tasks, they are not designed to be reused,
explored, and shared by multiple stakeholders in cross-functional teams. To enable
analysis and communication between different ML practitioners, we designed and
implemented Symphony, a framework for composing interactive ML interfaces with
task-specific, data-driven components that can be used across platforms such as
computational notebooks and web dashboards. We developed Symphony through participatory
design sessions with 10 teams (n=31), and discuss our findings from deploying
Symphony to 3 production ML projects at Apple. Symphony helped ML practitioners
discover previously unknown issues like data duplicates and blind spots in models
while enabling them to share insights with other stakeholders.
githubURL: ''
implementation: ipywidget
layouts:
- on-demand
materials:
- runtime
modularity: modular
name: symphony
nameDisplay: Symphony
otherURLs: []
paperURL: https://dl.acm.org/doi/abs/10.1145/3491102.3502102
releaseYear: 2022
sourceType: paper
supportedNotebooks:
- jupyter
- lab
thumbnail: symphony.webp
user: data scientist
- bibtex:
"@article{leeLuxAlwaysonVisualization2021,\n title = {Lux: Always-on Visualization\
\ Recommendations for Exploratory Dataframe Workflows},\n shorttitle = {Lux},\n\
\ author = {Lee, Doris Jung-Lin and Tang, Dixin and Agarwal, Kunal and Boonmark,\
\ Thyne and Chen, Caitlyn and Kang, Jake and Mukhopadhyay, Ujjaini and Song, Jerry\
\ and Yong, Micah and Hearst, Marti A. and Parameswaran, Aditya G.},\n year =\
\ {2021},\n journal = {VLDB Endowment},\n volume = {15},\n doi = {10.14778/3494124.3494151},\n\
\ url = {https://dl.acm.org/doi/10.14778/3494124.3494151},\n urldate = {2023-04-13},\n\
\ langid = {english}\n}\n"
bibtexKey: leeLuxAlwaysonVisualization2021
communication: one-way
description:
Exploratory data science largely happens in computational notebooks
with dataframe APIs, such as pandas, that support flexible means to transform,
clean, and analyze data. Yet, visually exploring data in dataframes remains tedious,
requiring substantial programming effort for visualization and mental effort to
determine what analysis to perform next. We propose Lux, an always-on framework
for accelerating visual insight discovery in dataframe workflows. When users print
a dataframe in their notebooks, Lux recommends visualizations to provide a quick
overview of the patterns and trends and suggests promising analysis directions.
Lux features a high-level language for generating visualizations on demand to
encourage rapid visual experimentation with data. We demonstrate that through
the use of a careful design and three system optimizations, Lux adds no more than
two seconds of overhead on top of pandas for over 98% of datasets in the UCI repository.
We evaluate Lux in terms of usability via a controlled first-use study and interviews
with early adopters, finding that Lux helps fulfill the needs of data scientists
for visualization support within their dataframe workflows. Lux has already been
embraced by data science practitioners, with over 3.1k stars on GitHub.
githubURL: https://github.com/lux-org/lux
implementation: html
layouts:
- always-on
materials:
- runtime
- code
modularity: monolithic
name: lux
nameDisplay: Lux
otherURLs: []
paperURL: https://arxiv.org/abs/2105.00121
releaseYear: 2021
sourceType: paper
supportedNotebooks:
- jupyter
- lab
thumbnail: lux.webp
user: data scientist
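# A minimal usage sketch for the Lux entry above, based on its description of the
# import-and-display workflow; the CSV path is a placeholder and the PyPI package
# name (lux-api) is an assumption rather than taken from this catalog.
#   import lux                      # activating Lux only requires importing it alongside pandas
#   import pandas as pd
#   df = pd.read_csv("cars.csv")    # hypothetical dataset
#   df                              # displaying the dataframe surfaces Lux's recommended charts
#   df.intent = ["Origin"]          # optionally steer recommendations toward a column of interest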
- bibtex:
"@article{scully-allisonDesigningInteractiveNotebookEmbedded2022,\n title\
\ = {Designing an {{Interactive}}, {{Notebook-Embedded}}, {{Tree Visualization}}\
\ to {{Support Exploratory Performance Analysis}}},\n author = {{Scully-Allison},\
\ Connor and Lumsden, Ian and Williams, Katy and Bartels, Jesse and Taufer, Michela\
\ and Brink, Stephanie and Bhatele, Abhinav and Pearce, Olga and Isaacs, Katherine\
\ E.},\n year = {2022},\n url = {http://arxiv.org/abs/2205.04557},\n urldate\
\ = {2023-04-03},\n archiveprefix = {arxiv},\n journal = {arXiv 2205.04557}\n\
}\n"
bibtexKey: scully-allisonDesigningInteractiveNotebookEmbedded2022
communication: two-way
description:
Interactive visualization via direct manipulation has inherent design
trade-offs in flexibility, discoverability, and ease-of-use. Scripting languages
can support a vast range of user queries and tasks, but may be more cumbersome
for free-form exploration. Embedding interactive visualization in a scripting
environment, such as a computational notebook, provides an opportunity for leveraging
the strengths of both direct manipulation and scripting. We conduct a design study
investigating this opportunity in the context of calling context trees as used
for performance analysis of parallel software. Our collaborators make new performance
analysis functionality available to users via Jupyter notebook examples, making
the project setting conducive to such an investigation. Through a series of semi-structured
interviews and regular meetings with project stakeholders, we produce a formal
task analysis grounded in the expectation that tasks may be supported by scripting,
interactive visualization, or both paradigms. We then design an interactive bivariate
calling context tree visualization for embedding in Jupyter notebooks with features
to pass data and state between the scripting and visualization contexts. We evaluated
our embedded design with seven high performance computing experts. The experts
were able to complete tasks and provided further feedback on the visualization
and the notebook-embedded interactive visualization paradigm. We reflect upon
the project and discuss factors in both the process and the design of the embedded
visualization.
githubURL: ''
implementation: extension
layouts:
- always-on
materials:
- runtime
- code
modularity: monolithic
name: calling-context-tree
nameDisplay: Calling Context Tree
otherURLs: []
paperURL: https://arxiv.org/abs/2205.04557
releaseYear: 2022
sourceType: paper
supportedNotebooks:
- lab
thumbnail: calling-context-tree.webp
user: data scientist
- bibtex:
"@article{liEDAssistantSupportingExploratory2023,\n title = {{{EDAssistant}}:\
\ {{Supporting Exploratory Data Analysis}} in {{Computational Notebooks}} with\
\ {{In Situ Code Search}} and {{Recommendation}}},\n shorttitle = {{{EDAssistant}}},\n\
\ author = {Li, Xingjun and Zhang, Yizhi and Leung, Justin and Sun, Chengnian\
\ and Zhao, Jian},\n year = {2023},\n journal = {ACM TiiS},\n volume = {13},\n\
\ doi = {10.1145/3545995},\n url = {https://dl.acm.org/doi/10.1145/3545995},\n\
\ urldate = {2023-04-03},\n langid = {english}\n}\n"
bibtexKey: liEDAssistantSupportingExploratory2023
communication: two-way
description:
"Using computational notebooks (e.g., Jupyter Notebook), data scientists\
\ rationalize their exploratory data analysis (EDA) based on their prior experience\
\ and external knowledge, such as online examples. For novices or data scientists\
\ who lack specific knowledge about the dataset or problem to investigate, effectively\
\ obtaining and understanding the external information is critical to carrying\
\ out EDA. This article presents EDAssistant, a JupyterLab extension that supports\
\ EDA with in situ search of example notebooks and recommendation of useful APIs,\
\ powered by novel interactive visualization of search results. The code search\
\ and recommendation are enabled by advanced machine learning models, trained\
\ on a large corpus of EDA notebooks collected online. A user study is conducted\
\ to investigate both EDAssistant and data scientists\u2019 current practice (i.e.,\
\ using external search engines). The results demonstrate the effectiveness and\
\ usefulness of EDAssistant, and participants appreciated its smooth and in-context\
\ support of EDA. We also report several design implications regarding code recommendation\
\ tools."
githubURL: ''
implementation: extension
layouts:
- always-on
materials:
- runtime
- code
modularity: monolithic
name: edassistant
nameDisplay: EDAssistant
otherURLs: []
paperURL: https://dl.acm.org/doi/abs/10.1145/3545995?casa_token=xZ3rdEMx9FAAAAAA:0i0ZuQX7M_gUMzBlOyh84uEsNdSP-ZEtk2yzuSObWb2Js9UQSXt5j9Doc0dmSKPD8RDjMmGaR2CHhg
releaseYear: 2023
sourceType: paper
supportedNotebooks:
- lab
thumbnail: edassistant.webp
user: data scientist
- bibtex:
"@mastersthesis{lauNbinteractGenerateInteractive2018,\n title = {Nbinteract:\
\ Generate Interactive Web Pages from {{Jupyter}} Notebooks},\n author = {Lau,\
\ Samuel and Hug, Joshua},\n year = {2018},\n url = {https://www.nbinteract.com/#},\n\
\ school = {University of California at Berkeley}\n}\n"
bibtexKey: lauNbinteractGenerateInteractive2018
communication: one-way
description:
"Nbinteract provides a Python library and a command-line tool to convert\
\ Jupyter notebooks to standalone, interactive HTML web pages. These web pages\
\ may be viewed by any web browser running JavaScript, regardless of whether the\
\ viewer has Python or Jupyter installed locally. nbinteract\u2019s built-in support\
\ for function-driven plotting makes interactive visualizations simpler to create\
\ by allowing authors to focus on declarative data changes instead of callbacks.\
\ nbinteract has use cases for data analysis, visualization, and especially education,\
\ where it is used for a prominent textbook on data science."
githubURL: https://github.com/SamLau95/nbinteract
implementation: html
layouts:
- on-demand
materials:
- runtime
modularity: monolithic
name: nbinteract
nameDisplay: nbinteract
otherURLs: []
paperURL: https://digitalassets.lib.berkeley.edu/techreports/ucb/text/EECS-2018-57.pdf
releaseYear: 2018
sourceType: paper
supportedNotebooks:
- jupyter
thumbnail: nbinteract.webp
user: data scientist
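# A rough sketch of the function-driven plotting that the nbinteract description
# mentions; the dataset, slider bounds, and exact API shape are illustrative
# assumptions, not verbatim from the project docs.
#   import numpy as np
#   import nbinteract as nbi
#   def roll(n=100):
#       return np.random.randint(1, 7, size=n)   # data expressed as a function of widget parameters
#   nbi.hist(roll, n=(10, 1000))                  # re-runs roll() as the n slider moves, no callbacks needed
# The companion command-line tool (e.g. `nbinteract my_notebook.ipynb`) then converts
# the notebook to a standalone interactive HTML page, as the description notes.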
- bibtex:
"@article{yuVizicJupyterbasedInteractive2017,\n title = {Vizic: {{A Jupyter-based}}\
\ Interactive Visualization Tool for Astronomical Catalogs},\n shorttitle = {Vizic},\n\
\ author = {Yu, W. and Carrasco Kind, M. and Brunner, R.J.},\n year = {2017},\n\
\ journal = {Astronomy and Computing},\n volume = {20},\n doi = {10.1016/j.ascom.2017.06.004},\n\
\ url = {https://linkinghub.elsevier.com/retrieve/pii/S2213133716301500},\n \
\ urldate = {2023-04-24},\n langid = {english}\n}\n"
bibtexKey: yuVizicJupyterbasedInteractive2017
communication: one-way
description:
The ever-growing datasets in observational astronomy have challenged
scientists in many aspects, including an efficient and interactive data exploration
and visualization. Many tools have been developed to confront this challenge.
However, they usually focus on displaying the actual images or focus on visualizing
patterns within catalogs in a predefined way. In this paper we introduce Vizic,
a Python visualization library that builds the connection between images and catalogs
through an interactive map of the sky region. Vizic visualizes catalog data over
a custom background canvas using the shape, size and orientation of each object
in the catalog. The displayed objects in the map are highly interactive and customizable
compared to those in the observation images. These objects can be filtered by
or colored by their property values, such as redshift and magnitude. They also
can be sub-selected using a lasso-like tool for further analysis using standard
Python functions and everything is done from inside a Jupyter notebook. Furthermore,
Vizic allows custom overlays to be appended dynamically on top of the sky map.
We have initially implemented several overlays, namely, Voronoi, Delaunay, Minimum
Spanning Tree and HEALPix grid layer, which are helpful for visualizing large-scale
structure. All these overlays can be generated, added or removed interactively
with just one line of code. The catalog data is stored in a non-relational database,
and the interfaces have been developed in JavaScript and Python to work within
Jupyter Notebook, which allows users to create customizable widgets and user-generated
scripts to analyze and plot the data selected/displayed in the interactive map.
This unique design makes Vizic a very powerful and flexible interactive analysis
tool. Vizic can be adopted in a variety of exercises, for example, data inspection,
clustering analysis, galaxy alignment studies, outlier identification or just
large scale visualizations.
githubURL: https://github.com/ywx649999311/vizic
implementation: html
layouts:
- on-demand
materials:
- runtime
modularity: monolithic
name: vizic
nameDisplay: Vizic
otherURLs: []
paperURL: https://www.sciencedirect.com/science/article/pii/S2213133716301500?casa_token=YymkOTIrOHYAAAAA:V6RiDRh3j1BDIvnSwIEEit0j2JSrDbNtW8dcGBBrrHFuV2qismMPDr4Mfv2ZuIksR6pPIRWCiBk
releaseYear: 2017
sourceType: paper
supportedNotebooks:
- jupyter
thumbnail: vizic.webp
user: scientist
- bibtex:
"@article{rosenthalInteractiveNetworkVisualization2018,\n title = {Interactive\
\ Network Visualization in {{Jupyter}} Notebooks: {{visJS2jupyter}}},\n shorttitle\
\ = {Interactive Network Visualization in {{Jupyter}} Notebooks},\n author =\
\ {Rosenthal, Sara Brin and Len, Julia and Webster, Mikayla and Gary, Aaron and\
\ Birmingham, Amanda and Fisch, Kathleen M},\n year = {2018},\n journal = {Bioinformatics},\n\
\ volume = {34},\n doi = {10.1093/bioinformatics/btx581},\n url = {https://academic.oup.com/bioinformatics/article/34/1/126/4158037},\n\
\ urldate = {2023-04-04},\n langid = {english}\n}\n"
bibtexKey: rosenthalInteractiveNetworkVisualization2018
communication: one-way
description:
Network biology is widely used to elucidate mechanisms of disease and
biological processes. The ability to interact with biological networks is important
for hypothesis generation and to give researchers an intuitive understanding of
the data. We present visJS2jupyter, a tool designed to embed interactive networks
in Jupyter notebooks to streamline network analysis and to promote reproducible
research. The tool provides functions for performing and visualizing useful network
operations in biology, including network overlap, network propagation around a
focal set of genes, and co-localization of two sets of seed genes. visJS2jupyter
uses the JavaScript library vis.js to create interactive networks displayed within
Jupyter notebook cells with features including drag, click, hover, and zoom. We
demonstrate the functionality of visJS2jupyter applied to a biological question,
by creating a network propagation visualization to prioritize risk-related genes
in autism. The visJS2jupyter package is distributed under the MIT License. The
source code, documentation and installation instructions are freely available
on GitHub at https://github.com/ucsd-ccbb/visJS2jupyter. The package can be downloaded
at https://pypi.python.org/pypi/visJS2jupyter.
githubURL: https://github.com/ucsd-ccbb/visJS2jupyter
implementation: html
layouts:
- on-demand
materials:
- runtime
modularity: monolithic
name: visjs2jupyter
nameDisplay: visJS2jupyter
otherURLs: []
paperURL: https://academic.oup.com/bioinformatics/article-abstract/34/1/126/4158037
releaseYear: 2018
sourceType: paper
supportedNotebooks:
- jupyter
thumbnail: visjs2jupyter.webp
user: scientist
- bibtex:
"@article{vanderplasAltairInteractiveStatistical2018,\n title = {Altair:\
\ {{Interactive Statistical Visualizations}} for {{Python}}},\n shorttitle =\
\ {Altair},\n author = {VanderPlas, Jacob and Granger, Brian and Heer, Jeffrey\
\ and Moritz, Dominik and Wongsuphasawat, Kanit and Satyanarayan, Arvind and Lees,\
\ Eitan and Timofeev, Ilia and Welsh, Ben and Sievert, Scott},\n year = {2018},\n\
\ journal = {Journal of Open Source Software},\n volume = {3},\n doi = {10.21105/joss.01057},\n\
\ url = {http://joss.theoj.org/papers/10.21105/joss.01057},\n urldate = {2023-04-13}\n\
}\n"
bibtexKey: vanderplasAltairInteractiveStatistical2018
communication: one-way
description:
Altair is a declarative statistical visualization library for Python.
Statistical visualization is a constrained subset of data visualization focused
on the creation of visualizations that are helpful in statistical modeling. The
constrained model of statistical visualization is usually expressed in terms of
a visualization grammar that specifies how input data is transformed and mapped
to visual properties.
githubURL: https://github.com/altair-viz/altair
implementation: html
layouts:
- on-demand
materials:
- runtime
modularity: modular
name: altair
nameDisplay: Altair
otherURLs: []
paperURL: http://www.theoj.org/joss-papers/joss.01057/10.21105.joss.01057.pdf
releaseYear: 2018
sourceType: paper
supportedNotebooks:
- jupyter
- lab
- colab
thumbnail: altair.webp
user: data scientist
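# A small illustrative Altair snippet showing the declarative grammar the entry
# describes (data mapped to marks and visual encodings); the dataframe is made up.
#   import altair as alt
#   import pandas as pd
#   df = pd.DataFrame({"x": [1, 2, 3, 4], "y": [3, 1, 4, 2]})
#   chart = alt.Chart(df).mark_line(point=True).encode(x="x", y="y")
#   chart                           # rendering the chart object displays it in the notebook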
- bibtex:
"@article{nguyenNGLviewInteractiveMolecular2018,\n title = {{{NGLview}}\\\
textendash Interactive Molecular Graphics for {{Jupyter}} Notebooks},\n author\
\ = {Nguyen, Hai and Case, David A and Rose, Alexander S},\n year = {2018},\n\
\ journal = {Bioinformatics},\n volume = {34},\n doi = {10.1093/bioinformatics/btx789},\n\
\ url = {https://academic.oup.com/bioinformatics/article/34/7/1241/4721781},\n\
\ urldate = {2023-04-14},\n langid = {english}\n}\n"
bibtexKey: nguyenNGLviewInteractiveMolecular2018
communication: one-way
description:
NGLview is a Jupyter/IPython widget to interactively view molecular
structures as well as trajectories from molecular dynamics simulations. Fast and
scalable molecular graphics are provided through the NGL Viewer. The widget supports
showing data from the file-system, online databases and from objects of many
popular analysis libraries including mdanalysis, mdtraj, pytraj, rdkit and more.
The source code is freely available under the MIT license at https://github.com/arose/nglview.
Python packages are available from PyPI and bioconda. NGLview uses Python on the
server-side and JavaScript on the client. The integration with Jupyter is done
through the ipywidgets package. The NGL Viewer is embedded client-side to provide
WebGL accelerated molecular graphics.
githubURL: https://github.com/nglviewer/nglview
implementation: ipywidget
layouts:
- on-demand
materials:
- runtime
modularity: monolithic
name: nglview
nameDisplay: NGLview
otherURLs: []
paperURL: https://academic.oup.com/bioinformatics/article-abstract/34/7/1241/4721781
releaseYear: 2018
sourceType: paper
supportedNotebooks:
- jupyter
- lab
thumbnail: nglview.webp
user: scientist
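# A brief, hedged example of the NGLview widget pattern described above; the PDB
# identifier is a placeholder.
#   import nglview as nv
#   view = nv.show_pdbid("3pqr")    # fetch and display a structure from the Protein Data Bank
#   view                            # the ipywidget renders WebGL-accelerated graphics in the output cell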
- bibtex:
"@techreport{pielawskiTissUUmapsImprovementsInteractive2022,\n type = {Preprint},\n\
\ title = {{{TissUUmaps}} 3: {{Improvements}} in Interactive Visualization, Exploration,\
\ and Quality Assessment of Large-Scale Spatial Omics Data},\n shorttitle = {{{TissUUmaps}}\
\ 3},\n author = {Pielawski, Nicolas and Andersson, Axel and Avenel, Christophe\
\ and Behanova, Andrea and Chelebian, Eduard and Klemm, Anna and Nysj{\\\"o},\
\ Fredrik and Solorzano, Leslie and W{\\\"a}hlby, Carolina},\n year = {2022},\n\
\ institution = {{Bioinformatics}},\n doi = {10.1101/2022.01.28.478131},\n \
\ url = {http://biorxiv.org/lookup/doi/10.1101/2022.01.28.478131},\n urldate\
\ = {2023-04-04},\n langid = {english}\n}\n"
bibtexKey: pielawskiTissUUmapsImprovementsInteractive2022
communication: one-way
description:
"Background and Objectives Spatially resolved techniques for exploring\
\ the molecular landscape of tissue samples, such as spatial transcriptomics,\
\ often result in millions of data points and images too large to view on a regular\
\ desktop computer, limiting the possibilities in visual interactive data exploration.\
\ TissUUmaps is a free, open-source browser-based tool for GPU-accelerated visualization\
\ and interactive exploration of 10^7+ data points overlaying tissue samples. Methods\
\ Herein we describe how TissUUmaps 3 provides instant multiresolution image viewing\
\ and can be customized, shared, and also integrated into Jupyter Notebooks. We\
\ introduce new modules where users can visualize markers and regions, explore\
\ spatial statistics, perform quantitative analyses of tissue morphology, and\
\ assess the quality of decoding in situ transcriptomics data. Results We show\
\ that thanks to targeted optimizations the time and cost associated with interactive\
\ data exploration were reduced, enabling TissUUmaps 3 to handle the scale of\
\ today\u2019s spatial transcriptomics methods. Conclusion TissUUmaps 3 provides\
\ significantly improved performance for large multiplex datasets as compared\
\ to previous versions. We envision TissUUmaps to contribute to broader dissemination\
\ and flexible sharing of large-scale spatial omics data."
githubURL: https://github.com/TissUUmaps/TissUUmapsCore
implementation: html
layouts:
- on-demand
materials:
- runtime
modularity: monolithic
name: tissuumaps
nameDisplay: TissUUmaps
otherURLs: []
paperURL: https://www.biorxiv.org/content/10.1101/2022.01.28.478131.abstract
releaseYear: 2022
sourceType: paper
supportedNotebooks:
- jupyter
thumbnail: tissuumaps.webp
user: scientist
- bibtex:
"@article{fernandezClustergrammerWebbasedHeatmap2017,\n title = {Clustergrammer,\
\ a Web-Based Heatmap Visualization and Analysis Tool for High-Dimensional Biological\
\ Data},\n author = {Fernandez, Nicolas F. and Gundersen, Gregory W. and Rahman,\
\ Adeeb and Grimes, Mark L. and Rikova, Klarisa and Hornbeck, Peter and Ma'ayan,\
\ Avi},\n year = {2017},\n journal = {Scientific Data},\n volume = {4},\n \
\ doi = {10.1038/sdata.2017.151},\n url = {https://www.nature.com/articles/sdata2017151},\n\
\ urldate = {2023-04-04},\n langid = {english}\n}\n"
bibtexKey: fernandezClustergrammerWebbasedHeatmap2017
communication: one-way
description:
'Most tools developed to visualize hierarchically clustered heatmaps
generate static images. Clustergrammer is a web-based visualization tool with
interactive features such as: zooming, panning, filtering, reordering, sharing,
performing enrichment analysis, and providing dynamic gene annotations. Clustergrammer
can be used to generate shareable interactive visualizations by uploading a data
table to a web-site, or by embedding Clustergrammer in Jupyter Notebooks. The
Clustergrammer core libraries can also be used as a toolkit by developers to generate
visualizations within their own applications. Clustergrammer is demonstrated using
gene expression data from the cancer cell line encyclopedia (CCLE), original post-translational
modification data collected from lung cancer cells lines by a mass spectrometry
approach, and original cytometry by time of flight (CyTOF) single-cell proteomics
data from blood. Clustergrammer enables producing interactive web based visualizations
for the analysis of diverse biological data.'
githubURL: https://github.com/MaayanLab/clustergrammer
implementation: html
layouts:
- on-demand
materials:
- runtime
modularity: monolithic
name: clustergrammar
nameDisplay: Clustergrammer
otherURLs: []
paperURL: https://www.nature.com/articles/sdata2017151
releaseYear: 2017
sourceType: paper
supportedNotebooks:
- jupyter
- lab
thumbnail: clustergrammar.webp
user: scientist
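# A sketch of the Clustergrammer Jupyter widget workflow implied by the description;
# the call pattern is reproduced from memory of the widget documentation and should
# be treated as an assumption, with df a placeholder pandas DataFrame.
#   from clustergrammer_widget import *
#   net = Network(clustergrammer_widget)   # Network object wired to the widget class
#   net.load_df(df)                        # load a matrix of high-dimensional data
#   net.cluster()                          # hierarchically cluster rows and columns
#   net.widget()                           # display the interactive heatmap widget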
- bibtex:
"@article{perroneNetworkVisualizationsPyvis2020,\n title = {Network Visualizations\
\ with {{Pyvis}} and {{VisJS}}},\n author = {Perrone, Giancarlo and Unpingco,\
\ Jose and Lu, Haw-minn},\n year = {2020},\n url = {http://arxiv.org/abs/2006.04951},\n\
\ urldate = {2023-04-04},\n archiveprefix = {arxiv},\n journal = {arXiv 2006.04951}\n\
}\n"
bibtexKey: perroneNetworkVisualizationsPyvis2020
communication: one-way
description:
Pyvis is a Python module that enables visualizing and interactively
manipulating network graphs in the Jupyter notebook, or as a standalone web application.
Pyvis is built on top of the powerful and mature VisJS JavaScript library, which
allows for fast and responsive interactions while also abstracting away the low-level
JavaScript and HTML. This means that elements of the rendered graph visualization,
such as node/edge attributes can be specified within Python and shipped to the
JavaScript layer for VisJS to render. This declarative approach makes it easy
to quickly explore graph visualizations and investigate data relationships. In
addition, Pyvis is highly customizable so that colors, sizes, and hover tooltips
can be assigned to the rendered graph. The network graph layout is controlled
by a front-end physics engine that is configurable from a Python interface, allowing
for the detailed placement of the graph elements. In this paper, we outline use
cases for Pyvis with specific examples to highlight key features for any analysis
workflow. A brief overview of Pyvis' implementation describes how the Python front-end
binding uses simple Pyvis calls.
githubURL: https://github.com/WestHealth/pyvis
implementation: html
layouts:
- on-demand
materials:
- runtime
modularity: modular
name: pyvis
nameDisplay: Pyvis
otherURLs: []
paperURL: https://arxiv.org/abs/2006.04951
releaseYear: 2020
sourceType: paper
supportedNotebooks:
- jupyter
- lab
- colab
thumbnail: pyvis.webp
user: data scientist
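# A minimal Pyvis example matching the description above (node/edge attributes set
# in Python and shipped to VisJS for rendering); labels and colors are arbitrary.
#   from pyvis.network import Network
#   net = Network(notebook=True)           # render inline in a Jupyter notebook
#   net.add_node(1, label="A", color="#97c2fc")
#   net.add_node(2, label="B")
#   net.add_edge(1, 2, title="A-B")        # title becomes a hover tooltip
#   net.show("example.html")               # writes the interactive graph and displays it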
- bibtex:
"@inproceedings{chenPI2EndtoendInteractive2022,\n title = {{{PI2}}: {{End-to-end\
\ Interactive Visualization Interface Generation}} from {{Queries}}},\n shorttitle\
\ = {{{PI2}}},\n booktitle = {Proceedings of the 2022 {{International Conference}}\
\ on {{Management}} of {{Data}}},\n author = {Chen, Yiru and Wu, Eugene},\n \
\ year = {2022},\n doi = {10.1145/3514221.3526166},\n url = {https://dl.acm.org/doi/10.1145/3514221.3526166},\n\
\ urldate = {2023-04-04},\n langid = {english}\n}\n"
bibtexKey: chenPI2EndtoendInteractive2022
communication: one-way
description:
Interactive visualization interfaces are critical in data analysis.
Yet creating new interfaces is challenging, as the developer must understand the
queries needed for the desired analysis task, and then design the appropriate
interface. Existing task models are too abstract to be used to automatically generate
interfaces, and visualization recommenders do not take the queries nor interactions
into account. PI2 is the first system to generate fully functional interactive
visualization interfaces from a representative sequence of task queries. PI2 analyzes
queries syntactically and proposes a novel Difftree representation that encodes
the systematic variations between query abstract syntax trees. PI2 then poses
interface generation as a schema mapping problem from each Difftree to a visualization
that renders its results, and the variations encoded in each Difftree to interactions
in the interface. Interface generation further takes the layout and screen size
into account. Our user studies show that PI2 interfaces are comparable to or better
than those designed by developers, and that PI2 can generate exploration interfaces
that are easier to use than the state-of-the-art SQL notebook products. What's
more, PI2 generates high-quality interfaces within a few seconds.
githubURL: ''
implementation: extension
layouts:
- always-on
materials:
- runtime
- code
modularity: modular
name: pi2
nameDisplay: PI2
otherURLs: []
paperURL: https://dl.acm.org/doi/pdf/10.1145/3514221.3526166
releaseYear: 2022
sourceType: paper
supportedNotebooks:
- lab
thumbnail: pi2.webp
user: data scientist
- bibtex:
"@article{simonneGwaihirJupyterNotebook2022,\n title = {{\\emph{Gwaihir}}\
\ : {{{\\emph{Jupyter Notebook}}}} Graphical User Interface for {{Bragg}} Coherent\
\ Diffraction Imaging},\n shorttitle = {{\\emph{Gwaihir}}},\n author = {Simonne,\
\ David and Carnis, J{\\'e}r{\\^o}me and Atlan, Cl{\\'e}ment and Chatelier, Corentin\
\ and {Favre-Nicolin}, Vincent and Dupraz, Maxime and Leake, Steven J. and Zatterin,\
\ Edoardo and Resta, Andrea and Coati, Alessandro and Richard, Marie-Ingrid},\n\
\ year = {2022},\n journal = {Journal of Applied Crystallography},\n volume\
\ = {55},\n doi = {10.1107/S1600576722005854},\n url = {https://scripts.iucr.org/cgi-bin/paper?S1600576722005854},\n\
\ urldate = {2023-04-04}\n}\n"
bibtexKey: simonneGwaihirJupyterNotebook2022
communication: one-way
description:
Bragg coherent X-ray diffraction is a nondestructive method for probing
material structure in three dimensions at the nanoscale, with unprecedented resolution
in displacement and strain fields. This work presents Gwaihir, a user-friendly
and open-source tool to process and analyze Bragg coherent X-ray diffraction data.
It integrates the functionalities of the existing packages bcdi and PyNX in the
same toolbox, creating a natural workflow and promoting data reproducibility.
Its graphical interface, based on Jupyter Notebook widgets, combines an interactive
approach for data analysis with a powerful environment designed to link large-scale
facilities and scientists.
githubURL: https://github.com/DSimonne/gwaihir
implementation: ipywidget
layouts:
- on-demand
materials:
- runtime
modularity: monolithic
name: gwaihir
nameDisplay: Gwaihir
otherURLs: []
paperURL: https://scripts.iucr.org/cgi-bin/paper?te5096
releaseYear: 2022
sourceType: paper
supportedNotebooks:
- jupyter
- lab
thumbnail: gwaihir.webp
user: scientist
- bibtex:
"@inproceedings{keryMageFluidMoves2020,\n title = {Mage: {{Fluid Moves\
\ Between Code}} and {{Graphical Work}} in {{Computational Notebooks}}},\n shorttitle\
\ = {Mage},\n booktitle = {{{CHI}}},\n author = {Kery, Mary Beth and Ren, Donghao\
\ and Hohman, Fred and Moritz, Dominik and Wongsuphasawat, Kanit and Patel, Kayur},\n\
\ year = {2020},\n doi = {10.1145/3379337.3415842},\n url = {https://dl.acm.org/doi/10.1145/3379337.3415842},\n\
\ urldate = {2021-09-15},\n langid = {english}\n}\n"
bibtexKey: keryMageFluidMoves2020
communication: two-way
description:
We aim to increase the flexibility at which a data worker can choose
the right tool for the job, regardless of whether the tool is a code library or
an interactive graphical user interface (GUI). To achieve this flexibility, we
extend computational notebooks with a new API mage, which supports tools that
can represent themselves as both code and GUI as needed. We discuss the design
of mage as well as design opportunities in the space of flexible code/GUI tools
for data work. To understand tooling needs, we conduct a study with nine professional
practitioners and elicit their feedback on mage and potential areas for flexible
code/GUI tooling. We then implement six client tools for mage that illustrate
the main themes of our study findings. Finally, we discuss open challenges in
providing flexible code/GUI interactions for data workers.
githubURL: ''
implementation: ipywidget
layouts:
- on-demand
materials:
- runtime
- code
modularity: modular
name: mage
nameDisplay: mage
otherURLs: []
paperURL: https://dl.acm.org/doi/abs/10.1145/3379337.3415842
releaseYear: 2020
sourceType: paper
supportedNotebooks:
- jupyter
- lab
thumbnail: mage.webp
user: data scientist
- bibtex:
"@article{crockettIvpyIconographicVisualization2021,\n title = {Ivpy: {{Iconographic\
\ Visualization Inside Computational Notebooks}}},\n author = {Crockett, Damon},\n\
\ year = {2021},\n journal = {International Journal for Digital Art History},\n\
\ doi = {10.11588/DAH.2019.4.66401},\n url = {https://journals.ub.uni-heidelberg.de/index.php/dah/article/view/66401},\n\
\ urldate = {2023-04-24},\n langid = {english}\n}\n"
bibtexKey: crockettIvpyIconographicVisualization2021
communication: one-way
description:
Iconographic Visualization in Python, or ivpy, is a software module,
written in the Python programming language, that provides a set of functions for
organizing iconographic representations of data, including images and glyphs.
The module also provides methods for extracting visual features from images; generating
and hand-tuning clusters of data points; and embedding high-dimensional data in
2D coordinate spaces. It is designed for use inside computational notebooks, so
that users working with data needn't leave the notebook environment in order to
generate visualizations. The software is designed primarily for those researchers
working with large image datasets in fields where human visual expertise cannot
be replaced with or superseded by machine vision, such as art history and media
studies.
githubURL: https://github.com/damoncrockett/ivpy
implementation: html
layouts:
- on-demand
materials:
- runtime
modularity: monolithic
name: ivpy
nameDisplay: ivpy
otherURLs: []
paperURL: https://journals.ub.uni-heidelberg.de/index.php/dah/article/view/66401
releaseYear: 2021
sourceType: paper
supportedNotebooks:
- jupyter
- lab
- colab
thumbnail: ivpy.webp
user: data scientist
- bibtex:
"@inproceedings{wangTimberTrekExploringCurating2022a,\n title = {{{TimberTrek}}:\
\ {{Exploring}} and {{Curating Trustworthy Decision Trees}} with {{Interactive\
\ Visualization}}},\n booktitle = {{{VIS}}},\n author = {Wang, Zijie J. and\
\ Zhong, Chudi and Xin, Rui and Takagi, Takuya and Chen, Zhi and Chau, Duen Horng\
\ and Rudin, Cynthia and Seltzer, Margo},\n year = {2022},\n urldate = {2022-06-19}\n\
}\n"
bibtexKey: wangTimberTrekExploringCurating2022a
communication: one-way
description:
"Given thousands of equally accurate machine learning (ML) models,
how can users choose among them? A recent ML technique enables domain experts
and data scientists to generate a complete Rashomon set for sparse decision trees, a
huge set of almost-optimal interpretable ML models. To help ML practitioners
identify models with desirable properties from this Rashomon set, we develop TimberTrek,
the first interactive visualization system that summarizes thousands of sparse
decision trees at scale. Two usage scenarios highlight how TimberTrek can empower
users to easily explore, compare, and curate models that align with their domain
knowledge and values. Our open-source tool runs directly in users' computational
notebooks and web browsers, lowering the barrier to creating more responsible
ML models. TimberTrek is available at the following public demo link: https://poloclub.github.io/timbertrek."
githubURL: https://github.com/poloclub/timbertrek
implementation: nova
layouts:
- on-demand
materials:
- runtime
modularity: monolithic
name: timbertrek
nameDisplay: TimberTrek
otherURLs: []
paperURL: https://ieeexplore.ieee.org/abstract/document/9973202/?casa_token=hsSE6q496VsAAAAA:RWHvwh_lulaE2FKfbNiATEZ3sBsy3HvIpY_Cez5ajt3TC7u2T0K-sOIYfqLh_WTfxqN3h9L_oQ
releaseYear: 2022
sourceType: paper
supportedNotebooks:
- jupyter
- lab
- colab
thumbnail: timbertrek.webp
user: data scientist
- bibtex:
"@inproceedings{munechikaVisualAuditorInteractive2022,\n title = {Visual\
\ {{Auditor}}: {{Interactive Visualization}} for {{Detection}} and {{Summarization}}\
\ of {{Model Biases}}},\n booktitle = {{{VIS}}},\n author = {Munechika, David\
\ and Wang, Zijie J. and Reidy, Jack and Rubin, Josh and Gade, Krishna and Kenthapadi,\
\ Krishnaram and Chau, Duen Horng},\n year = {2022},\n doi = {10.1109/VIS54862.2022.00018},\n\
\ urldate = {2022-06-19}\n}\n"
bibtexKey: munechikaVisualAuditorInteractive2022
communication: one-way
description:
As machine learning (ML) systems become increasingly widespread, it
is necessary to audit these systems for biases prior to their deployment. Recent
research has developed algorithms for effectively identifying intersectional bias
in the form of interpretable, underperforming subsets (or slices) of the data.
However, these solutions and their insights are limited without a tool for visually
understanding and interacting with the results of these algorithms. We propose
Visual Auditor, an interactive visualization tool for auditing and summarizing
model biases. Visual Auditor assists model validation by providing an interpretable
overview of intersectional bias (bias that is present when examining populations
defined by multiple features), details about relationships between problematic
data slices, and a comparison between underperforming and overperforming data
slices in a model. Our open-source tool runs directly in both computational notebooks
and web browsers, making model auditing accessible and easily integrated into
current ML development workflows. An observational user study in collaboration
with domain experts at Fiddler AI highlights that our tool can help ML practitioners
identify and understand model biases.
githubURL: https://github.com/poloclub/visual-auditor
implementation: nova
layouts:
- on-demand
materials:
- runtime
modularity: monolithic
name: visual-auditor
nameDisplay: Visual Auditor
otherURLs: []
paperURL: https://ieeexplore.ieee.org/abstract/document/9973204/?casa_token=yV4qNZ1nKtgAAAAA:IbRgePfyZOXzZ-lI00Z4e1X0x18Pg8eWtdjqySmPfN8GG-jKsiihsca3Zf4ciht5OJA8clk21w
releaseYear: 2022
sourceType: paper
supportedNotebooks:
- jupyter
- lab
- colab
thumbnail: visual-auditor.webp
user: data scientist
- bibtex:
"@inproceedings{wangInterpretabilityThenWhat2022a,\n title = {Interpretability,\
\ {{Then What}}? {{Editing Machine Learning Models}} to {{Reflect Human Knowledge}}\
\ and {{Values}}},\n booktitle = {{{KDD}}},\n author = {Wang, Zijie J. and Kale,\
\ Alex and Nori, Harsha and Stella, Peter and Nunnally, Mark E. and Chau, Duen\
\ Horng and Vorvoreanu, Mihaela and Wortman Vaughan, Jennifer and Caruana, Rich},\n\
\ year = {2022},\n doi = {10.1145/3534678.3539074},\n url = {https://doi.org/10.1145/3534678.3539074}\n\
}\n"
bibtexKey: wangInterpretabilityThenWhat2022a
communication: one-way
description:
Recent strides in interpretable machine learning (ML) research reveal
that models exploit undesirable patterns in the data to make predictions, which
potentially causes harms in deployment. However, it is unclear how we can fix
these models. We present our ongoing work, GAM Changer, an open-source interactive
system to help data scientists and domain experts easily and responsibly edit
their Generalized Additive Models (GAMs). With novel visualization techniques,
our tool puts interpretability into action -- empowering human users to analyze,
validate, and align model behaviors with their knowledge and values. Built using
modern web technologies, our tool runs locally in users' computational notebooks
or web browsers without requiring extra compute resources, lowering the barrier
to creating more responsible ML models. GAM Changer is available at this https
URL.
githubURL: https://github.com/interpretml/gam-changer
implementation: nova
layouts:
- on-demand
materials:
- runtime
modularity: monolithic
name: gam-changer
nameDisplay: GAM Changer
otherURLs: []
paperURL: https://arxiv.org/abs/2112.03245
releaseYear: 2022
sourceType: paper
supportedNotebooks:
- jupyter
- lab
- colab
thumbnail: gam-changer.webp
user: data scientist
- bibtex:
"@inproceedings{sivaramanEmblazeIlluminatingMachine2022,\n title = {Emblaze:\
\ {{Illuminating Machine Learning Representations}} through {{Interactive Comparison}}\
\ of {{Embedding Spaces}}},\n shorttitle = {Emblaze},\n booktitle = {27th {{International\
\ Conference}} on {{Intelligent User Interfaces}}},\n author = {Sivaraman, Venkatesh\
\ and Wu, Yiwei and Perer, Adam},\n year = {2022},\n doi = {10.1145/3490099.3511137},\n\
\ url = {https://dl.acm.org/doi/10.1145/3490099.3511137},\n urldate = {2023-02-14},\n\
\ langid = {english}\n}\n"
bibtexKey: sivaramanEmblazeIlluminatingMachine2022
communication: one-way
description:
Modern machine learning techniques commonly rely on complex, high-dimensional
embedding representations to capture underlying structure in the data and improve
performance. In order to characterize model flaws and choose a desirable representation,
model builders often need to compare across multiple embedding spaces, a challenging
analytical task supported by few existing tools. We first interviewed nine embedding
experts in a variety of fields to characterize the diverse challenges they face
and techniques they use when analyzing embedding spaces. Informed by these perspectives,
we developed a novel system called Emblaze that integrates embedding space comparison
within a computational notebook environment. Emblaze uses an animated, interactive
scatter plot with a novel Star Trail augmentation to enable visual comparison.
It also employs novel neighborhood analysis and clustering procedures to dynamically
suggest groups of points with interesting changes between spaces. Through a series
of case studies with ML experts, we demonstrate how interactive comparison with
Emblaze can help gain new insights into embedding space structure.
githubURL: https://github.com/cmudig/emblaze
implementation: ipywidget
layouts:
- on-demand
materials:
- runtime
modularity: monolithic
name: emblaze
nameDisplay: Emblaze
otherURLs: []
paperURL: https://arxiv.org/pdf/2202.02641.pdf
releaseYear: 2022
sourceType: paper
supportedNotebooks:
- jupyter
- lab
thumbnail: emblaze.webp
user: data scientist
- bibtex:
"@inproceedings{graserExploringMovementData2020,\n title = {Exploring Movement\
\ Data in Notebook Environments},\n booktitle = {{{IEEE VIS}} 2020 Workshop on\
\ Information Visualization of Geospatial Networks, Flows and Movement ({{MoVis}})},\n\
\ author = {Graser, Anita and Dragaschnig, Melitta},\n year = {2020},\n url\
\ = {http://move.geog.ucsb.edu/wp-content/uploads/2020/10/MoVIS20_paper_4.pdf}\n\
}\n"
bibtexKey: graserExploringMovementData2020
communication: one-way
description:
Notebook environments have become popular for data analysis, but their
default visualization capabilities are limited. We present ongoing work on the
open geospatial library MovingPandas that enables the exploration of movement
data in notebooks.
githubURL: https://github.com/movingpandas/movingpandas
implementation: other-package
layouts:
- on-demand
materials:
- runtime
modularity: modular
name: moving-pandas
nameDisplay: MovingPandas
otherURLs: []
paperURL: http://move.geog.ucsb.edu/wp-content/uploads/2020/10/MoVIS20_paper_4.pdf
releaseYear: 2020
sourceType: paper
supportedNotebooks:
- jupyter
- lab