---
title: "Report"
author: "Francisco Bischoff"
date: "on Jun 06, 2023"
output:
  bookdown::html_document2:
    base_format: workflowr::wflow_html
    toc: true
    fig_caption: yes
    number_sections: yes
bibliography: [../papers/references.bib]
link-citations: true
csl: ../thesis/csl/ama.csl
css: style.css
editor_options:
  chunk_output_type: console
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(
  echo = FALSE, fig.align = "center", autodep = TRUE,
  fig.height = 5, fig.width = 10,
  tidy = "styler",
  tidy.opts = list(strict = TRUE)
)

if (knitr::is_latex_output()) {
  knitr::opts_chunk$set(dev = "pdf")
} else {
  knitr::opts_chunk$set(dev = "svg")
}

my_graphics <- function(image_name, base_path = here::here("docs", "figure")) {
  file_path <- glue::glue("{base_path}/{image_name}")
  if (knitr::is_latex_output()) {
    if (file.exists(glue::glue("{file_path}.pdf"))) {
      file_path <- glue::glue("{file_path}.pdf")
    } else if (file.exists(glue::glue("{file_path}.png"))) {
      file_path <- glue::glue("{file_path}.png")
    } else {
      file_path <- glue::glue("{file_path}.jpg")
    }
  } else {
    if (file.exists(glue::glue("{file_path}.svg"))) {
      file_path <- glue::glue("{file_path}.svg")
    } else if (file.exists(glue::glue("{file_path}.png"))) {
      file_path <- glue::glue("{file_path}.png")
    } else {
      file_path <- glue::glue("{file_path}.jpg")
    }
  }
  knitr::include_graphics(file_path)
}

my_kable <- function(title, label, content) {
  res <- glue(r"(<br><table class="tg"><caption>)", "(\\#tab:{label}) {title}", r"(</caption>{content}</table>)")
  out <- structure(res, format = "html", class = "knitr_kable")
  attr(out, "format") <- "html"
  out
}

tkplot <- function(object, interactive = FALSE, res = 50) {
  ecg <- read_ecg_with_atr(here::here("inst/extdata/afib_regimes", object$record), resample_from = 200, resample_to = res)
  value <- ecg[[1]]$II
  prop <- 250 / res
  mask <- seq.int(50, 100)
  value[1:5] <- median(value[mask])
  value[(length(value) - 5):length(value)] <- median(value[mask])
  time <- seq(1, floor(length(value) * prop), length.out = length(value))
  data <- tibble::tibble(time = time, value = value)
  min_data <- min(data$value)
  max_data <- max(data$value)
  truth <- clean_truth(floor(attr(ecg[[1]], "regimes") * prop), floor(length(value) * prop)) # object$truth[[1]]
  preds <- object$pred[[1]]
  title <- glue::glue(
    "Recording: {object$record} ",
    "#truth: {length(truth)}, ",
    "#preds: {length(preds)}, ",
    "length: {floor(length(value) * prop)} ",
    "FLOSS Score: {round(object$score, 3)}"
  )
  subtitle <- glue::glue(
    "Parameters: ",
    "MP window: {object$window_size}, ",
    "MP threshold: {object$mp_threshold}, ",
    "Time constraint: {object$time_constraint}, ",
    "Regime threshold: {object$regime_threshold}, ",
    "Regime landmark: {object$regime_landmark}"
  )
  plot <- data %>%
    timetk::plot_time_series(
      time, value,
      .title = glue::glue(title, "<br><sup>{subtitle}</sup>"),
      .interactive = interactive,
      .smooth = FALSE,
      .line_alpha = 0.3,
      .line_size = 0.2,
      .plotly_slider = interactive
    )
  if (interactive) {
    plot <- plot %>%
      plotly::add_segments(
        x = preds, xend = preds, y = min_data,
        yend = max_data * 1.1,
        line = list(width = 2.5, color = "#0108c77f"),
        name = "Predicted"
      ) %>%
      plotly::add_segments(
        x = truth, xend = truth, y = min_data,
        yend = max_data,
        line = list(width = 2.5, color = "#ff00007f"),
        name = "Truth"
      )
  } else {
    plot <- plot +
      ggplot2::geom_segment(
        data = tibble::tibble(tru = truth),
        aes(
          x = tru, xend = tru,
          y = min_data, yend = max_data - (max_data - min_data) * 0.1
        ),
        linewidth = 2, color = "#ff00007f"
      ) +
      ggplot2::geom_segment(
        data = tibble::tibble(pre = preds),
        aes(
          x = pre, xend = pre,
          y = min_data, yend = max_data
        ),
        linewidth = 1, color = "#0108c77f"
      ) +
      ggplot2::theme_bw() +
      ggplot2::theme(
        legend.position = "none",
        plot.margin = margin(0, 0, 0, 10)
      ) +
      ggplot2::labs(title = title, subtitle = subtitle, y = ggplot2::element_blank())
  }
  plot
}

options(dplyr.summarise.inform = FALSE)
# tinytex::tlmgr_install("tabu")

library(here)
library(glue)
library(visNetwork)
library(tibble)
library(kableExtra)
library(patchwork)
library(targets)
library(ggplot2)

source(here::here("scripts", "common", "read_ecg.R"))
# knitr::opts_knit$set(
#   root.dir = here("docs"),
#   base.dir = here("protocol"),
#   verbose = TRUE
# )
```
Last Updated: 2023-06-12 12:51:54 UTC
# Objectives and the research question
While this research was inspired by the CinC/Physionet Challenge 2015, its purpose is not to beat
the state of the art on that challenge, but to identify, on streaming data, abnormal heart electrical
patterns, specifically those that are life-threatening, using low CPU and low memory requirements,
so that such detection can be generalized to lower-end devices outside the ICU, such as ward devices, home devices, and wearable devices.
The main question is: can we accomplish this objective using a minimalist approach (low CPU, low
memory) while maintaining robustness?
# Principles
This research is being conducted using the Research Compendium principles [@compendium2019]:
1. Stick with the convention of your peers;
2. Keep data, methods, and output separated;
3. Specify your computational environment as clearly as you can.
Data management follows the FAIR principle (findable, accessible, interoperable, reusable)
[@wilkinson2016]. In line with these principles, the dataset was converted from MATLAB format to
CSV, improving interoperability. Additionally, the whole project, including the dataset, conforms
to the CodeMeta Project [@CodeMeta2017].
# Materials and methods
## Software
### Pipeline management
All process steps are managed using the R package `targets` [@landau2021], from data extraction to
the final report. An example of a pipeline visualization created with `targets` is shown in Fig.
\@ref(fig:targets). This package records the random seeds (allowing reproducibility), detects
changes in any part of the code (or its dependencies) and re-runs only the branches that need to be
updated, among several other features that keep the workflow reproducible while avoiding unnecessary recomputation.
```{r targets, echo=FALSE, out.width="100%"}
#| fig.cap="Example of pipeline visualization using `targets`.
#| From left to right we see 'Stems' (steps that do not create branches) and 'Patterns'
#| (that contains two or more branches) and the flow of the information.
#| The green color means that the step is up to date to the current code and dependencies."
my_graphics("targets", "figure")
```
### Reports management
The report is available on the main webpage [@franz_website], allowing inspection of previous
versions, managed by the R package `workflowr` [@workflowr2021]. This package complements the
`targets` package by taking care of the versioning of every report. It works like a logbook that keeps
track of every important milestone of the project while summarizing the computational environment
where it was run. Fig. \@ref(fig:workflowr) shows only a fraction of the generated website,
where we can see that this version passed the required checks (the system is up-to-date, no caches,
session information was recorded, and others), and we see a table of previous versions.
```{r workflowr, echo=FALSE, out.width="100%"}
#| fig.cap="Fraction of the website generated by `workflowr`.
#| On top we see that this version passed all checks, and in the middle we see a table
#| referring to the previous versions of the report."
my_graphics("workflowr_print", "figure")
```
### Modeling and parameter tuning
The best-known package for data science in R is `caret` (short for **C**lassification
**A**nd **RE**gression **T**raining) [@JSSv028i05]. Nevertheless, the author of `caret` recognizes
several limitations of his (great) package and is now in charge of developing the `tidymodels`
[@tidymodels2020] collection. There are certainly other available frameworks and opinions
[@Thompson2020]. Notwithstanding, this project will follow the `tidymodels` road, for three
significant reasons: 1) it is constantly improving and constantly re-checked for bugs, with large
community contributions; 2) it allows plugging in a custom modeling algorithm, which in this case
will be the one developed for this work; 3) `caret` is no longer in active development.
### Continuous integration
Meanwhile, the project pipeline has been set up on GitHub [@bischoffrepo2021], leveraging
GitHub Actions [@gitactions2021] for the continuous integration lifecycle. The repository is
available at [@bischoffrepo2021], and the resulting report is available at [@franz_website]. This
thesis's roadmap and task statuses are also publicly available on Zenhub [@zenhub2021].
## Developed software
### Matrix Profile {#matrixprofile}
Matrix Profile (MP) [@Yeh2017a] is a state-of-the-art [@DePaepe2020; @Feremans2020] time series
analysis technique that, once computed, allows us to derive frameworks for all sorts of tasks, such
as motif discovery, anomaly detection, regime change detection, and others [@Yeh2017a].

Before MP, time series analysis relied on the *distance matrix* (DM), a matrix that stores all the
distances between the subsequences of two time series (or of one time series and itself, in the
case of a self-join). This was computationally very expensive, and several pruning and
dimensionality-reduction methods were researched [@Lin2007].

For brevity, let us just note that the MP and the companion Profile Index (PI) are two vectors that
hold one floating-point value and one integer value, respectively, for each index of the original
time series: (1) the similarity distance between the subsequence starting at that index and its
first nearest neighbor (1-NN), and (2) the index where this 1-NN is located. The original paper has
more detailed information [@Yeh2017a]. The MP is computed using a rolling window, but instead of
building a whole DM, only the minimum values and the indexes of these minima are stored (in the MP
and PI, respectively). We can get an idea of the relationship between both in Fig. \@ref(fig:thematrix).
```{r thematrix, echo=FALSE}
#| fig.cap="A distance matrix (top), and a matrix profile (bottom). The matrix profile stores only
#| the minimum values of the distance matrix."
my_graphics("mp_1", "figure")
```
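To make the MP/PI relationship concrete, here is a minimal brute-force self-join sketch in Python
(illustrative only; this thesis uses the R packages `tsmp` and `matrixprofiler`, which implement
far faster algorithms such as those from UCR). The toy series and window size below are made up
for the example:

```python
import math

def znorm(seq):
    """Z-normalize a subsequence (zero mean, unit variance)."""
    m = sum(seq) / len(seq)
    sd = math.sqrt(sum((v - m) ** 2 for v in seq) / len(seq))
    return [(v - m) / sd for v in seq] if sd > 0 else [0.0] * len(seq)

def naive_matrix_profile(ts, w):
    """Brute-force self-join: for each window of length w, the z-normalized
    Euclidean distance to its 1-NN (the MP) and that neighbour's index (the PI)."""
    n = len(ts) - w + 1
    subs = [znorm(ts[i:i + w]) for i in range(n)]
    excl = w // 2  # exclusion zone: skip trivial matches of a window with itself
    mp, pi = [math.inf] * n, [-1] * n
    for i in range(n):
        for j in range(n):
            if abs(i - j) <= excl:
                continue
            d = math.sqrt(sum((a - b) ** 2 for a, b in zip(subs[i], subs[j])))
            if d < mp[i]:
                mp[i], pi[i] = d, j
    return mp, pi

# The motif [0, 1, 2, 1] occurs at indexes 0 and 9, so the MP drops to
# zero there and the PI entries point at each other.
ts = [0, 1, 2, 1, 0, 5, 6, 5, 9, 0, 1, 2, 1, 0]
mp, pi = naive_matrix_profile(ts, 4)
```

The quadratic loop is exactly the distance-matrix computation described above; the MP/PI keep only
its row-wise minima and their positions.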
This research has already yielded two R packages implementing the MP algorithms from UCR [@mpucr].
The first package is called `tsmp`, and a paper about it has been published in the R Journal
[@RJ-2020-021] (2020 Journal Impact Factor™ of 3.984). The second package is called
`matrixprofiler` and enhances the first one, using low-level code to improve computational speed.
The author has also joined the Matrix Profile Foundation as a co-founder, together with
contributors from the Python and Go languages [@mpf2020; @VanBenschoten2020].

This R implementation is being used for computing the MP and the MP-based algorithms of this thesis.
## The data {#the-data}
The dataset currently used is the CinC/Physionet Challenge 2015 public dataset, modified to include
only the actual data and the header files so they can be read by the pipeline; it is hosted on
Zenodo [@bischoff2021] under the same license as Physionet.

The dataset comprises 750 patients with at least five minutes of recording each. All signals have
been resampled (using anti-alias filters) to 12-bit, 250 Hz, and FIR band-pass (0.05 to 40 Hz) and
mains notch filters were applied to remove noise. Pacemaker and other artifacts are still present
in the ECG [@Clifford2015]. Furthermore, this dataset contains at least two ECG derivations and one
or more variables like arterial blood pressure, photoplethysmograph readings, and respiration movements.
The _events_ we seek to identify are the life-threatening arrhythmias as defined by Physionet in
Table \@ref(tab:alarms).
```{r alarms, echo=FALSE}
alarms <- tribble(
  ~Alarm, ~Definition,
  "Asystole", "No QRS for at least 4 seconds",
  "Extreme Bradycardia", "Heart rate lower than 40 bpm for 5 consecutive beats",
  "Extreme Tachycardia", "Heart rate higher than 140 bpm for 17 consecutive beats",
  "Ventricular Tachycardia", "5 or more ventricular beats with heart rate higher than 100 bpm",
  "Ventricular Flutter/Fibrillation", "Fibrillatory, flutter, or oscillatory waveform for at least 4 seconds"
)
kbl(alarms,
  booktabs = TRUE,
  caption = "Definition of the five alarm types used in CinC/Physionet Challenge 2015.",
  align = "ll",
  position = "ht",
  linesep = "\\addlinespace"
) |>
  row_spec(0, bold = TRUE) |>
  kable_styling(full_width = TRUE)
```
The fifth minute is precisely where the alarm has been triggered on the original recording set. To
meet the ANSI/AAMI EC13 Cardiac Monitor Standards [@AAMI2002], the onset of the event is within 10
seconds of the alarm (i.e., between 4:50 and 5:00 of the record). That doesn't mean that there have
been no other arrhythmias before.
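As a trivial arithmetic sketch (Python, illustrative only; the variable names are made up for this
example), the onset window maps to the following sample indices at the dataset's 250 Hz rate:

```python
# Locating the alarm onset window in sample indices at 250 Hz: the alarm is
# triggered at the fifth minute, and the onset lies within the preceding 10 s.
FS = 250                            # sampling rate, Hz
alarm_s = 5 * 60                    # alarm triggered at 5:00 (300 s)
onset_start = (alarm_s - 10) * FS   # 4:50 of the record
onset_end = alarm_s * FS            # 5:00 of the record
```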
For comparison, on Table \@ref(tab:challenge) we collected the score of the five best participants
of the challenge [@plesinger2015; @kalidas2015; @couto2015; @fallet2015; @hoogantink2015].
```{r challenge, echo=FALSE}
challenge <- tribble(
  ~Score, ~Authors,
  "81.39", "Filip Plesinger, Petr Klimes, Josef Halamek, Pavel Jurak",
  "79.44", "Vignesh Kalidas",
  "79.02", "Paula Couto, Ruben Ramalho, Rui Rodrigues",
  "76.11", "Sibylle Fallet, Sasan Yazdani, Jean-Marc Vesin",
  "75.55", "Christoph Hoog Antink, Steffen Leonhardt"
)
kbl(challenge,
  booktabs = TRUE,
  caption = "Challenge Results on real-time data. The scores were multiplied by 100.",
  align = "cl",
  position = "ht"
) |>
  row_spec(0, bold = TRUE) |>
  # column_spec(1, width = "5em") |>
  # column_spec(2, width = "30em") |>
  kable_styling(full_width = TRUE)
```
The equation used in this challenge to compute the score of the algorithms is shown in Equation
$\eqref{score}$. It is the accuracy formula, with false negatives penalized five-fold. The
reasoning pointed out by the authors [@Clifford2015] is the clinical impact of dismissing an actual
life-threatening event. Accuracy is known to be misleading when there is high class imbalance [@Akosa2017].
\
$$
Score = \frac{TP+TN}{TP+TN+FP+5 \times FN} \tag{1} \label{score}
$$
\
Since this is a finite dataset, the pathological cases (1) $\lim_{TP \to \infty}$ (whenever there
is an event, it is positive) or (2) $\lim_{TN \to \infty}$ (whenever there is an event, it is
false) cannot happen. This dataset has 292 true alarms and 458 false alarms. Experimentally, this
equation yields:

- 0.24 if all guesses are on the False class
- 0.28 for random guesses
- 0.39 if all guesses are on the True class
- 0.45 for no false positives plus random guesses on the True class
- 0.69 for no false negatives plus random guesses on the False class

This small experiment (knowing the data in advance) shows that an algorithm taking "a single line
of code and a few minutes of effort" [@Wu2020] could achieve at most a score of 0.39 in this
challenge (for the last two cases, the algorithm must already be very good on one of the classes).

Nevertheless, this equation will be helpful only for comparing the results of this thesis with
those of other algorithms.
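The extreme scores above can be reproduced directly from Equation (1); a minimal Python sketch
(illustrative only, not part of the pipeline):

```python
def challenge_score(tp, tn, fp, fn):
    """CinC/Physionet 2015 score: accuracy with false negatives weighted 5x."""
    return (tp + tn) / (tp + tn + fp + 5 * fn)

# The dataset has 292 true alarms and 458 false alarms.
all_false = challenge_score(tp=0, tn=458, fp=0, fn=292)  # suppress every alarm
all_true = challenge_score(tp=292, tn=0, fp=458, fn=0)   # sound every alarm
```

Rounding to two decimals gives 0.24 and 0.39, matching the first and third cases listed above.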
## Work structure
### Project start
The project started with a literature survey on the databases Scopus, PubMed, Web of Science, and
Google Scholar with the following query (the syntax was adapted for each database):
\
TITLE-ABS-KEY ( algorithm OR 'point of care' OR 'signal processing' OR 'computer
assisted' OR 'support vector machine' OR 'decision support system*' OR 'neural
network*' OR 'automatic interpretation' OR 'machine learning') AND TITLE-ABS-KEY
( electrocardiography OR cardiography OR 'electrocardiographic tracing' OR ecg
OR electrocardiogram OR cardiogram ) AND TITLE-ABS-KEY ( 'Intensive care unit' OR
'cardiologic care unit' OR 'intensive care center' OR 'cardiologic care center' )
\
The inclusion and exclusion criteria were defined as in Table \@ref(tab:criteria).
```{r criteria, echo=FALSE}
criteria <- tribble(
  ~"Inclusion criteria", ~"Exclusion criteria",
  "ECG automatic interpretation", "Manual interpretation",
  "ECG anomaly detection", "Publication older than ten years",
  "ECG context change detection", "Do not attempt to identify life-threatening arrhythmias, namely asystole, extreme bradycardia, extreme tachycardia, ventricular tachycardia, and ventricular flutter/fibrillation",
  "Online Stream ECG analysis", "No performance measurements reported",
  "Specific diagnosis (like a flutter, hyperkalemia, etc.)", ""
)
kbl(criteria,
  booktabs = TRUE,
  caption = "Literature review criteria.",
  align = "ll",
  position = "ht",
  linesep = "\\addlinespace"
) |>
  row_spec(0, bold = TRUE) |>
  kable_styling(full_width = TRUE)
```
The survey is being conducted with peer review; all articles in the full-text phase were obtained
and assessed for the extraction phase, except for five articles that were not available. Due to
external factors, the survey is currently stalled at the Data Extraction phase.
Fig. \@ref(fig:prisma) shows the flow diagram of the resulting screening using PRISMA format.
```{r prisma, echo=FALSE, out.width="70%", fig.cap="Flowchart of the literature survey."}
my_graphics("PRISMA", "figure")
```
The peer review is being conducted by the author of this thesis and another colleague, Dr. Andrew
Van Benschoten, from the Matrix Profile Foundation [@mpf2020].
Table \@ref(tab:kappa) shows the Inter-rater Reliability (IRR) of the screening phases, using
Cohen's $\kappa$ statistic. The bottom line shows the estimated accuracy after correcting for
possible confounders [@Bakeman2011].
```{r kappa, echo=FALSE, eval=!knitr::is_latex_output()}
my_kable(
title = "Inter-rater Reliability on the literature survey process.",
label = "kappa",
content = r"(<thead> <tr> <th class="tg-top" colspan="2"> </th> <th class="tg-top"
colspan="2"> Title-Abstract<br>(2388 articles) </th> <th class="tg-top"> </th> <th class="tg-top"
colspan="2"> Full-Review<br>(303 articles) </th> </tr></thead> <tbody> <tr> <td
colspan="2"> </td><td class="tg-second-top" colspan="2"> Reviewer #2 </td><td></td><td
class="tg-second-top" colspan="2"> Reviewer #2 </td></tr><tr> <td colspan="2"> </td><td
class="tg-second-top"> Include </td><td class="tg-second-top"> Exclude </td><td></td><td
class="tg-second-top"> Include </td><td class="tg-second-top"> Exclude </td></tr><tr> <td class="tg-cross-top-low"
rowspan="2"> Reviewer #1 </td><td class="tg-cross-top"> Include </td><td class="tg-cross-top"> 185 </td><td
class="tg-cross-top"> 381 </td><td class="tg-cross-top"></td><td class="tg-cross-top"> 63 </td><td
class="tg-cross-top"> 58 </td></tr><tr> <td class="tg-cross-low"> Exclude </td><td class="tg-cross-low"> 129
</td><td class="tg-cross-low"> 1693 </td><td class="tg-cross-low"> </td><td class="tg-cross-low"> 13
</td><td class="tg-cross-low"> 169 </td></tr><tr> <td class="tg-body-left" colspan="2"> Cohen’s omnibus
<span class="math inline">\(\kappa\)</span> </td><td class="tg-body-center" colspan="2"> 0.30 </td><td
class="tg-body-center"> </td><td class="tg-body-center" colspan="2"> 0.48 </td></tr><tr> <td class="tg-body-left"
colspan="2"> Maximum possible <span class="math inline">\(\kappa\)</span> </td><td class="tg-body-center"
colspan="2"> 0.66 </td><td class="tg-body-center"> </td><td class="tg-body-center" colspan="2"> 0.67
</td></tr><tr> <td class="tg-body-left" colspan="2"> Std Err for <span class="math
inline">\(\kappa\)</span> </td><td class="tg-body-center" colspan="2"> 0.02 </td><td class="tg-body-center">
</td><td class="tg-body-center" colspan="2"> 0.05 </td></tr><tr> <td class="tg-body-left" colspan="2"> Observed
Agreement </td><td class="tg-body-center" colspan="2"> 79% </td><td class="tg-body-center"> </td><td
class="tg-body-center" colspan="2"> 77% </td></tr><tr> <td class="tg-body-left" colspan="2"> Random Agreement
</td><td class="tg-body-center" colspan="2"> 69% </td><td class="tg-body-center"> </td><td class="tg-body-center"
colspan="2"> 55% </td></tr><tr> <td class="tg-bottom-left" colspan="2"> Agreement corrected with KappaAcc
</td><td class="tg-bottom" colspan="2"> 82% </td><td class="tg-bottom"> </td><td class="tg-bottom"
colspan="2"> 85% </td></tr></tbody>)"
)
```
\
```{r kappaa, echo = FALSE, eval=knitr::is_latex_output(), results="asis"}
cat(r"(
\begin{table}[ht]
\caption{\label{tab:kappa}Inter-rater Reliability on the literature survey process.}
\centering
\begin{tabular}{llcclcc}
\toprule
& & \multicolumn{2}{c}{\textbf{\begin{tabular}[c]{@{}c@{}}Title-Abstract\\ (2388 articles)\end{tabular}}} & \textbf{} & \multicolumn{2}{c}{\textbf{\begin{tabular}[c]{@{}c@{}}Full-Review\\ (303 articles)\end{tabular}}} \\ \cline{3-4} \cline{6-7}
& & \multicolumn{2}{c}{Reviewer \#2} & & \multicolumn{2}{c}{Reviewer \#2} \\ \cline{3-4} \cline{6-7}
& & \multicolumn{1}{l}{Include} & \multicolumn{1}{l}{Exclude} & & \multicolumn{1}{l}{Include} & \multicolumn{1}{l}{Exclude} \\ \hline
\multicolumn{1}{r}{\multirow{2}{*}{Reviewer \#1}} & \multicolumn{1}{r}{Include} & 185 & 381 & & 63 & 58 \\
\multicolumn{1}{r}{} & \multicolumn{1}{r}{Exclude} & 129 & 1693 & & 13 & 169 \\ \hline
Cohen's omnibus $\kappa$ & & \multicolumn{2}{c}{0.30} & & \multicolumn{2}{c}{0.48} \\
Maximum possible $\kappa$ & & \multicolumn{2}{c}{0.66} & & \multicolumn{2}{c}{0.67} \\
Std Err for $\kappa$ & & \multicolumn{2}{c}{0.02} & & \multicolumn{2}{c}{0.05} \\
Observed Agreement & & \multicolumn{2}{c}{79\%} & & \multicolumn{2}{c}{77\%} \\
Random Agreement & & \multicolumn{2}{c}{69\%} & & \multicolumn{2}{c}{55\%} \\ \hline\addlinespace
\multicolumn{2}{l}{\textbf{Agreement corrected with KappaAcc}} & \multicolumn{2}{c}{\textbf{82\%}} & \textbf{} & \multicolumn{2}{c}{\textbf{85\%}} \\ \bottomrule
\end{tabular}
\end{table}
)")
```
The purpose of using Cohen's $\kappa$ in such a review is to allow us to gauge the agreement of both
reviewers on the task of selecting the articles according to the goal of the survey. The most naive
way to verify this would be simply to measure the overall agreement (the number of articles included
and excluded by both, divided by the total number of articles). Nevertheless, this would not take
into account the agreement we could expect purely by chance.
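Cohen's $\kappa$ corrects the observed agreement by the agreement implied by each reviewer's
marginal totals. A minimal Python sketch (illustrative only; the actual analysis used KappaAcc),
applied to the Title-Abstract counts from the table above:

```python
def cohens_kappa(table):
    """Cohen's kappa from a square contingency table:
    table[i][j] counts reviewer 1 assigning code i and reviewer 2 code j."""
    n = sum(sum(row) for row in table)
    p_obs = sum(table[i][i] for i in range(len(table))) / n
    rows = [sum(row) for row in table]                 # reviewer 1 marginals
    cols = [sum(col) for col in zip(*table)]           # reviewer 2 marginals
    p_chance = sum(r * c for r, c in zip(rows, cols)) / n**2
    return (p_obs - p_chance) / (1 - p_chance)

# Title-Abstract screening counts (include/exclude) from the table above.
kappa = cohens_kappa([[185, 381], [129, 1693]])
```

This reproduces the reported omnibus $\kappa$ of 0.30, together with the 79% observed and 69%
chance agreement.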
However, the $\kappa$ statistic must be assessed carefully. Since this topic is beyond the scope of
this work, it will only be explained briefly.

While widely used, the $\kappa$ statistic is also widely criticized. The direct interpretation of
its value depends on several assumptions that are often violated: (1) both reviewers are assumed to
have the same level of experience; (2) the "codes" (include, exclude) are identified with the same
accuracy; (3) the "codes" prevalences are the same; (4) there is no reviewer bias towards one of
the choices [@Sim2005; @Bakeman1997].

In addition, the number of "codes" affects the relationship between the value of $\kappa$ and the
actual agreement between the reviewers. For example, given equiprobable "codes" and reviewers who
are 85% accurate, the value of $\kappa$ is 0.49, 0.60, 0.66, and 0.69 when the number of codes is
2, 3, 5, and 10, respectively [@Bakeman1997; @Morgan2019].

To account for these limitations, the agreement between reviewers was calculated using KappaAcc
[@Bakeman2011], by Professor Emeritus Roger Bakeman (Georgia State University), which computes the
estimated accuracy of simulated reviewers.
### RAW data
To better understand the data acquisition process, a Single Lead Heart Rate Monitor breakout from
SparkFun™ [@sparkfun2021], built around the AD8232 [@AnalogDevices2020] microchip from Analog
Devices Inc. and compatible with Arduino^®^ [@arduino2021], was acquired for an in-house experiment
(Fig. \@ref(fig:ad8232)).
```{r ad8232, echo=FALSE, out.width="40%", fig.show="hold", fig.cap="Single Lead Heart Rate Monitor."}
knitr::include_graphics(c("figure/sparkfun.jpg", "figure/FullSetup.jpg"))
```
The output gives us a RAW signal, as shown in Fig. \@ref(fig:rawsignal).
```{r rawsignal, echo=FALSE, out.width="50%", fig.cap="RAW output from Arduino at ~300 Hz."}
my_graphics("arduino_plot", "figure")
```
After applying the same settings as the Physionet database (collecting the data at 500 Hz,
resampling to 250 Hz, band-pass filter, and notch filter), the signal is much better, as shown in
Fig. \@ref(fig:filtersignal).
```{r filtersignal, echo=FALSE, out.width="90%", fig.cap="Gray is RAW, Red is filtered."}
my_graphics("filtered_ecg", "figure")
```
### Preparing the data
Usually, data obtained from sensors needs to be "cleaned" for proper evaluation. That is different
from the initial filtering process, whose purpose is to enhance the signal. Here we are dealing
with artifacts, disconnected cables, wandering baselines, and others.

Several SQIs (Signal Quality Indexes) are used in the literature [@eerikainen2015], some of them
trivial measures such as _kurtosis_, _skewness_, and median local noise level, others more complex,
such as the pcaSQI (the ratio of the sum of the five largest eigenvalues associated with the
principal components over the sum of all eigenvalues obtained by principal component analysis
applied to the time-aligned ECG segments in the window). An assessment of several different methods
to estimate electrocardiogram signal quality was performed by Del Rio *et al.* [@DelRio2011].

By experimentation (yet to be validated), a simple formula that gives us the "complexity" of the
signal and correlates well with noisy data is shown in Equation $\eqref{complex}$ [@Batista2014].
\
$$
\sqrt{\sum_{i=1}^w((x_{i+1}-x_i)^2)}, \quad \text{where}\; w \; \text{is the window size} \tag{2} \label{complex}
$$
\
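A minimal Python sketch of Equation (2) as a rolling measure (illustrative only; the pipeline's own
implementation lives in `scripts/common/sqi.R`, and the function name and toy signals below are
made up for the example):

```python
import math

def window_complexity(x, w):
    """Rolling version of Eq. (2): square root of the summed squared
    first differences inside each window of length w."""
    d2 = [(x[i + 1] - x[i]) ** 2 for i in range(len(x) - 1)]
    return [math.sqrt(sum(d2[i:i + w])) for i in range(len(d2) - w + 1)]

flat = [0.0] * 50                         # clean signal: zero "complexity"
noisy = [(-1.0) ** i for i in range(50)]  # alternating signal: high "complexity"
```

A flat segment scores zero, while the maximally rough alternating signal scores
$\sqrt{4w}$ per window, so thresholding this value flags the noisy regions.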
Fig. \@ref(fig:sqi) shows some SQIs and their relation with the data.
```{r sqi, echo=FALSE, out.width="100%", fig.cap="Green line is the \"complexity\" of the signal."}
my_graphics("noise", "figure")
```
```{r createfilter, include=FALSE}
source(here("scripts", "common", "read_ecg.R"))
source(here("scripts", "common", "sqi.R"))

filter_w <- 200
limit <- 8
file <- "a104s"
size_w <- 16
size_h <- 5

if (file.exists(here("output/createfilter.rds"))) {
  data <- readRDS(here("output/createfilter.rds"))
} else {
  cli::cli_abort("File not found.")
  data <- read_ecg_csv(here(glue("inst/extdata/physionet/{file}.hea")))
  data <- data[[file]]$II
  saveRDS(data, here("output/createfilter.rds"), compress = "xz")
}

norm_data <- tsmp:::znorm(data)
filter <- win_complex(norm_data, filter_w)
filter <- filter > limit

if (knitr::is_latex_output()) {
  grDevices::pdf(here("docs/figure/regime_filter.pdf"),
    width = size_w, height = size_h
  )
} else {
  svglite::svglite(here("docs/figure/regime_filter.svg"),
    width = size_w, height = size_h
  )
}
plot(norm_data, main = "", type = "l", ylab = "", xlab = "index", lwd = 0.2)
points(cbind(which(filter), 0), col = "blue", pch = 19)
dev.off()
```
Fig. \@ref(fig:datafilter) shows that noisy data (probably patient muscle movements) are marked
with a blue point and thus are ignored by the algorithm.
```{r datafilter, echo=FALSE, out.width="100%", fig.cap="Noisy data marked by the \"complexity\" filter."}
my_graphics("regime_filter", "figure")
```
Although this "cleaning" step is often used, we will also test whether it is really necessary, and
the performance with and without "cleaning" will be reported.
### Detecting regime changes
The regime change approach uses the _Arc Counts_ concept from the FLUSS (Fast Low-cost Unipotent
Semantic Segmentation) algorithm, as explained by Gharghabi _et al._ [@gharghabi2018]. The FLUSS
(and FLOSS, the online version) algorithm is built on top of the Matrix Profile (MP) [@Yeh2017a],
described in Section \@ref(matrixprofile). Recall that the MP and the companion Profile Index (PI)
are two vectors holding information about the 1-NN. One can imagine several "arcs" going from one
"index" to another. The algorithm is based on the assumption that, between two regimes, the most
similar shape (the nearest neighbor) is located on "the same side", so the number of "arcs"
crossing a given point decreases when there is a regime change and increases again afterwards, as
shown in Fig. \@ref(fig:arcsoriginal). This drop in the _Arc Counts_ signals that a change in the
shape of the signal has happened.
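The arc-counting idea can be sketched in a few lines of Python (illustrative only; FLUSS
additionally normalizes these counts by the count expected from a random PI to obtain its corrected
arc curve). The toy profile index below is constructed so that each half only points within itself:

```python
def arc_counts(profile_index):
    """Count, at each position, how many nearest-neighbour 'arcs' cross it;
    an arc joins index i to its 1-NN at profile_index[i]."""
    counts = [0] * len(profile_index)
    for i, j in enumerate(profile_index):
        for p in range(min(i, j), max(i, j)):  # positions the arc spans
            counts[p] += 1
    return counts

# Two regimes: indices 0-4 find their 1-NN on the left half, 5-9 on the
# right half, so no arc crosses the regime boundary at position 4.
pi = [1, 0, 3, 2, 3, 6, 5, 8, 7, 8]
ac = arc_counts(pi)
```

The count dropping to zero at the boundary is the signature the segmentation looks for.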
```{r arcsoriginal, echo=FALSE, out.width="100%", fig.cap="FLUSS algorithm, using arc counts."}
my_graphics("fluss_arcs", "figure")
```
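To make the arc-count idea concrete, the raw counts can be computed from the Profile Index alone. The sketch below is a simplified illustration (not the `tsmp` implementation); it uses a difference array so that each arc is counted over the span it crosses:

```r
# Raw arc counts from a Profile Index `prof_idx`, where prof_idx[i] is the
# index of the nearest neighbor of the subsequence starting at i. An arc
# spans from min(i, prof_idx[i]) to max(i, prof_idx[i]); the count at
# position j is the number of arcs crossing over j.
arc_counts <- function(prof_idx) {
  n <- length(prof_idx)
  delta <- numeric(n)
  for (i in seq_len(n)) {
    lo <- min(i, prof_idx[i])
    hi <- max(i, prof_idx[i])
    delta[lo] <- delta[lo] + 1  # arc starts covering at `lo`
    delta[hi] <- delta[hi] - 1  # and stops covering at `hi`
  }
  cumsum(delta)
}
```

With `prof_idx <- c(2, 1, 4, 3)` (two pairs of mutual nearest neighbors), the counts drop to zero exactly between positions 2 and 3, which is where FLUSS would suspect a regime boundary. In practice the raw counts are normalized by the idealized parabola expected from a series with no regime changes, yielding the corrected arc curve.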
The choice of the FLOSS algorithm (the online version of FLUSS) is based on the following arguments:
- **Domain Agnosticism:** the algorithm makes no assumptions about the data as opposed to most
available algorithms to date.
- **Streaming:** the algorithm can provide real-time information.
- **Real-World Data Suitability:** the objective is not to _explain_ all the data. Therefore, areas
marked as "don't know" areas are acceptable.
- **FLOSS is not:** a change point detection algorithm [@aminikhanghahi2016]. The interest here is
changes in the shapes of a sequence of measurements.
Other algorithms worth citing are based on Hidden Markov Models (HMM), which require at least two
parameters to be set by domain experts: cardinality and dimensionality reduction. The most
attractive alternative could be Autoplait [@Matsubara2014], which is also domain agnostic and
parameter-free. It segments the time series using the Minimum Description Length (MDL) principle
and recursively tests whether a region is best modeled by one or two HMMs. However, Autoplait is
designed for batch operation, not streaming, and also requires discrete data. FLOSS was
demonstrated to be superior on several datasets in its original paper. In addition, FLOSS is
robust to several changes in the data, such as downsampling, bit depth reduction, baseline
wandering, noise, smoothing, and even deleting 3% of the data and filling the gaps with simple
interpolation. Finally, and most importantly, the algorithm is lightweight and suitable for
low-power devices.
In the MP domain, it is also worth mentioning another possible algorithm: Time Series Snippets
[@Imani2018], based on MPdist [@gharghabi2018b]. MPdist measures the distance between two
sequences by considering how many similar subsequences they share, regardless of the matching
order. It has proved to be a useful measure (though not a metric) for meaningfully clustering
similar sequences. Time Series Snippets exploits MPdist properties to summarize a dataset by
extracting the $k$ sequences that represent most of the data. The final result may look like an
alternative for detecting regime changes, but it is not: the purpose of this algorithm is to find
which pattern(s) explain most of the dataset, and it is not suitable for streaming data. Lastly,
MPdist is quite expensive compared to the trivial Euclidean distance.
The regime change detection will be evaluated following the criteria explained in section
\@ref(evaluation).
### Classification of the new regime {#classregime}
The next step towards the objective of this work is to verify whether the new regime detected in
the previous step is indeed a life-threatening pattern for which we should trigger the alarm.
First, let's dismiss some apparent solutions: (1) Clustering. It is well understood that we cannot
meaningfully cluster time series subsequences with any distance measure or any algorithm
[@Keogh2005]. The main argument is that, in a meaningful algorithm, the output depends on the
input, and this has been shown not to hold for time series subsequence clustering [@Keogh2005].
(2) Anomaly detection. In this work, we are not looking for surprises but for patterns that are
known to be life-threatening. (3) Forecasting. We may be tempted to make predictions, but this is
clearly not the idea here.
The method of choice is classification. The simplest algorithm could be a `TRUE`/`FALSE` binary
classification. Nevertheless, the five life-threatening patterns have well-defined
characteristics, so it may be more plausible to classify the new regime using some kind of
ensemble of binary classifiers or a "six-class" classifier (the sixth class being the `FALSE`
class).

Since the model doesn't know which life-threatening pattern will be present in the regime (or
whether it will be a `FALSE` case), it will need to check for all five `TRUE` cases, and if none
of them is identified, it will classify the regime as `FALSE`.
To avoid exceeding processor capacity, an initial set of shapelets [@Rakthanmanon2013] can be
sufficient to build the `TRUE`/`FALSE` classifier. To build such a set of shapelets, leveraging
the MP, we will use the Contrast Profile [@Mercer2021].
The Contrast Profile (CP) looks for patterns that are, at the same time, very *similar* to their
neighbors in class *A* and very *different* from their nearest neighbor in class *B*. In other
words, such a pattern represents class *A* well and may be taken as a "signature" of that class.
In this case, we need to compute two MPs: one self-join MP using the *positive* class,
$MP^{(++)}$ (the class that has the signature we want to find), and one AB-join MP using the
*positive* and *negative* classes, $MP^{(+-)}$. Then we subtract the former, $MP^{(++)}$, from
the latter, $MP^{(+-)}$, resulting in the $CP$. The high values in the $CP$ are the locations of
the signature candidates we are looking for (the authors of the CP call these segments *Platos*).

Due to the nature of this approach, the MPs (containing values in Euclidean distance) are
truncated at $\sqrt{2w}$, where $w$ is the window size, because values above this threshold are
negatively correlated in the Pearson correlation space. Finally, we normalize the values by
$\sqrt{2w}$. Equation $\eqref{contrast}$ synthesizes this computation.
\
$$
CP_w = \frac{MP_{w}^{(+-)} - MP_{w}^{(++)}}{\sqrt{2w}} \quad \text{where}\; w \; \text{is the window size} \tag{3} \label{contrast}
$$
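Equation $\eqref{contrast}$ translates almost directly into R. The sketch below assumes the two Matrix Profiles were already computed elsewhere (e.g., with the `tsmp` package) and applies only the truncation, subtraction, and normalization steps:

```r
# Contrast Profile from an AB-join MP (positive vs. negative class) and a
# self-join MP (positive class), both in Euclidean distance, for window
# size `w`.
contrast_profile <- function(mp_ab, mp_self, w) {
  lim <- sqrt(2 * w)
  mp_ab <- pmin(mp_ab, lim)     # distances above sqrt(2w) are truncated:
  mp_self <- pmin(mp_self, lim) # such pairs are negatively correlated
  (mp_ab - mp_self) / lim       # normalize so the CP is at most 1
}
```

Values of the result near 1 mark subsequences that are close to a neighbor in the positive class but far from anything in the negative class; these are the *Plato* candidates.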
\
For a more complete understanding of the process, Fig. \@ref(fig:contrast) shows a practical example
from the original article [@Mercer2021].
\
```{r contrast, echo=FALSE, out.width="100%"}
#| fig.cap = "Top to bottom: two weakly-labeled snippets of a larger time series. T(-) contains
#| only normal beats. T(+) also contains PVC (premature ventricular contractions).
#| Next, two Matrix Profiles with window size 91; AB-join is in red and self-join in blue.
#| Bottom, the Contrast Profile showing the highest location."
my_graphics("contrast", "figure")
```
After extracting candidates for each class signature, a classification algorithm will be fitted
and evaluated using the criteria explained in section \@ref(evaluation).
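Once the shapelet candidates are extracted, a typical way to feed them to a classifier is to use, as a feature, the minimal z-normalized distance between each shapelet and the series being classified. The sketch below illustrates that feature with a naive sliding window (the function names are illustrative, not this thesis' code):

```r
znorm <- function(x) (x - mean(x)) / sd(x)

# Minimal z-normalized Euclidean distance between `shapelet` and every
# subsequence of `series` with the same length. A value near zero means
# the series contains a (scaled/shifted) copy of the shapelet.
min_shapelet_dist <- function(series, shapelet) {
  w <- length(shapelet)
  s <- znorm(shapelet)
  starts <- seq_len(length(series) - w + 1)
  # na.rm skips constant windows, whose z-normalization is undefined
  min(vapply(starts, function(i) {
    sqrt(sum((znorm(series[i:(i + w - 1)]) - s)^2))
  }, numeric(1)), na.rm = TRUE)
}
```

Each shapelet then contributes one such feature (or a thresholded `TRUE`/`FALSE` value), and a standard classifier can be fitted on top of the resulting feature matrix.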
### Summary of the methodology {#methodology}
To summarize the steps taken in this thesis to accomplish the main objective, Figs.
\@ref(fig:regimedetection), \@ref(fig:shapelets) and \@ref(fig:fullmodel) show an overview of the
processes involved.
First, let us introduce the concept of Nested Resampling [@Bischl2012]. It is known that, as
model complexity increases, overfitting on the training set becomes more likely [@Hastie2009].
This work must counteract this issue, as many steps require parameter tuning, even for almost
parameter-free algorithms like the MP.

The rule to follow is simple: *do not* evaluate a model on the same resampling split used to
perform its own parameter tuning. With simple cross-validation, information about the test set
"leaks" into the evaluation, leading to overfitting/overtuning and an optimistically biased
estimate of the performance. Bischl _et al._ [@Bischl2012] describe these factors in more depth
and also provide a countermeasure: (1) use the training set for everything from preprocessing to
model selection; (2) touch the test set only once, in the evaluation step; (3) repeat. This
guarantees that "new", held-out data is only used *after* the model is trained/tuned.
Fig. \@ref(fig:nestedresampling) shows us this principle. The steps (1) and (2) described above
are part of the **Outer resampling**, which in each loop splits the data into two sets: the training
set and the test set. The training set is then used in the **Inner resampling** where, for example,
the usual cross-validation may be used (creating an *Analysis set* and an *Assessment set* to avoid
terminology conflict), and the best model/parameters are selected. Then, this best model is
evaluated against the unseen test set created for this resampling.
The resulting (aggregated) performance over all outer samples gives us a more honest estimate
of the expected performance on new data.
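The principle can be sketched in a few lines of base R. This is a generic illustration, where the `fit` and `score` functions and the parameter grid are placeholders, not this thesis' actual pipeline:

```r
# Generic nested resampling: the outer loop creates train/test splits,
# the inner loop tunes parameters using only the training set, and the
# test set is touched exactly once per outer split.
nested_resampling <- function(data, grid, fit, score, outer_k = 5, inner_k = 5) {
  outer_folds <- split(seq_len(nrow(data)), rep_len(seq_len(outer_k), nrow(data)))
  sapply(outer_folds, function(test_idx) {
    train <- data[-test_idx, , drop = FALSE]
    # Inner resampling: cross-validate each parameter on the training set only
    inner_scores <- sapply(grid, function(p) {
      folds <- split(seq_len(nrow(train)), rep_len(seq_len(inner_k), nrow(train)))
      mean(sapply(folds, function(assess_idx) {
        score(fit(train[-assess_idx, , drop = FALSE], p),
              train[assess_idx, , drop = FALSE])
      }))
    })
    best <- grid[[which.max(inner_scores)]]
    # Evaluate the tuned model once on the held-out test set
    score(fit(train, best), data[test_idx, , drop = FALSE])
  })
}
```

Aggregating the returned vector (e.g., with the median) gives the honest performance estimate mentioned above.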
```{r nestedresampling, echo=FALSE, out.width="70%"}
#| fig.cap = "Nested resampling.
#| The full dataset is resampled several times (outer resampling), so each branch has its
#| own Test set (yellow). On each branch, the Training set is used as if it were a full dataset,
#| being resampled again (inner resampling); here, the Assessment set (blue) is used to test the
#| learning model and tune parameters. The best model is finally evaluated on its own
#| Test set."
my_graphics("draw-nested-resampling", "figure")
```
\
With the Nested Resampling [@Bischl2012] concept in mind, the following flowcharts can be better
interpreted. Fig. \@ref(fig:regimedetection) starts with the "Full Dataset", which contains all
time series from the dataset described in section \@ref(the-data). Each time series corresponds
to one file from the database, i.e., one patient.
The regime change detection will use subsampling (bootstrapping can lead to substantial bias
toward more complex models) in the Outer resampling and cross-validation in the Inner resampling.
The evaluation procedure, and the rationale for cross-validation, are explained in section
\@ref(evaluation).
```{r regimedetection, echo=FALSE, out.width="90%"}
#| fig.cap = "Pipeline for regime change detection.
#| The full dataset (containing several patients) is divided into a Training set and a Test set.
#| The Training set is then resampled in an Analysis set and an Assessment set. The former is
#| used for training/parameter tuning and the latter for assessing the result. The best parameters
#| are then used for evaluation on the Test set. This may be repeated several times."
my_graphics("draw-regime-model", "figure")
```
Fig. \@ref(fig:shapelets) shows the process for training the classification model. First, the
last ten seconds of each time series will be identified (the event occurs in this segment). Then
the dataset will be grouped by class (type of event) and by `TRUE`/`FALSE` (alarm), so that the
Outer/Inner resampling produces Training/Analysis and Test/Assessment sets with class frequencies
similar to the full dataset.
The next step will be to extract shapelet candidates using the Contrast Profile and train the
classifier.
This pipeline will use subsampling (for the same reason as above) in the Outer resampling and
cross-validation in the Inner resampling. The evaluation procedure, and the rationale for
cross-validation, are explained in section \@ref(evaluation).
```{r shapelets, echo=FALSE, out.width="60%"}
#| fig.cap = "Pipeline for alarm classification.
#| The full dataset (containing several patients) is grouped by class and by TRUE/FALSE alarm.
#| This grouping allows resampling to keep a similar frequency of classes and TRUE/FALSE of the full dataset.
#| Then the full dataset is divided on a Training set and a Test set.
#| The Training set is then resampled in an Analysis set and an Assessment set. The former is
#| used for extracting shapelets, training the model and parameter tuning; the latter for assessing
#| the performance of the model. Finally, the best model is evaluated on the Test set.
#| This may be repeated several times."
my_graphics("draw-classif-model", "figure")
```
Finally, Fig. \@ref(fig:fullmodel) shows how the final model will be used in the field. In a
streaming scenario, the data will be collected and processed in real time to maintain an
up-to-date Matrix Profile. The FLOSS algorithm will be watching for a regime change. When one is
detected, a sample of the new regime will be presented to the trained classifier, which will
evaluate whether this new regime is a life-threatening condition or not.
```{r fullmodel, echo=FALSE, out.width="60%"}
#| fig.cap = "Pipeline of the final process.
#| The streaming data, coming from one patient, is processed to create its Matrix Profile.
#| Then, the FLOSS algorithm is computed for detecting a regime change. When a new regime is
#| detected, a sample of this new regime is analysed by the model and a decision is made. If
#| the new regime is life-threatening, the alarm will be fired."
my_graphics("draw-global-model", "figure")
```
## Evaluation of the algorithms {#evaluation}
The resampling method used for model selection in both tasks, regime change detection and
classification, will be cross-validation, as the learning task will be performed in batches.
Other options were dismissed, following [@Bischl2012]:
* Leave-One-Out Cross-Validation: has better properties for regression than for classification.
  It has a high variance as an estimator of the mean loss, is asymptotically inconsistent, and
  tends to select overly complex models. It has been demonstrated empirically that 10-fold CV is
  often superior.
* Bootstrapping: while it has low variance, it may be optimistically biased for more complex
  models. Also, resampling with replacement can leak information into the assessment set.
* Subsampling: like bootstrapping, but without replacement. The only argument against it is that
  cross-validation guarantees that all the data is used for both analysis and assessment.
### Regime change
A detailed discussion of the evaluation of segmentation algorithms is given by the FLUSS/FLOSS
authors [@gharghabi2018]. Previous research has used precision/recall or derived measures of
performance. The main issue is deciding when the algorithm was correct: is it a miss if the
ground truth says the change occurred at location 10,000 and the algorithm detects a change at
location 10,001?

As pointed out by the authors, several independent researchers have suggested a temporal
tolerance. That solves one issue, but then any tiny miss just beyond this tolerance is penalized
as hard as a gross one.
The second issue is the over-penalization of an algorithm whose detections are mostly good but
one (or a few) is poor.
The authors propose the solution depicted in Fig. \@ref(fig:flosseval). It gives 0 as the best
score and 1 as the worst. The function sums the distances between the ground truth locations and
the locations suggested by the algorithm. The sum is then divided by
<!--the product of the number of segments, and then -->
the length of the time series to normalize the score to the range [0, 1]. The goal is to minimize
this score.
<!--
TODO: review the problem when there are too many detections
-->
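A minimal sketch of this scoring function, assuming each location reported by the algorithm is mapped to its nearest ground-truth location (as in Fig. \@ref(fig:flosseval)):

```r
# Segmentation score: sum of distances from each reported location to its
# nearest ground-truth location, normalized by the series length.
# 0 is a perfect score; the goal is to minimize it.
regime_score <- function(truth, reported, ts_length) {
  dists <- vapply(reported, function(r) min(abs(truth - r)), numeric(1))
  sum(dists) / ts_length
}
```

For instance, with a single true change at 10,000 and a detection at 10,001 in a series of length 100,000, the score is a tiny 1e-05 rather than a hard "miss".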
```{r flosseval, echo=FALSE, out.width="100%"}
#| fig.cap="Regime change evaluation. The top line illustrates the ground truth, and the
#| bottom line the locations reported by the algorithm. Note that multiple proposed locations
#| can be mapped to a single ground truth point."
my_graphics("floss_eval", "figure")
```
### Classification {#classification-criteria}
As described in section \@ref(classregime), the model for classification will use a set of shapelets
to identify if we have a `TRUE` (life-threatening) regime or a `FALSE` (non-life-threatening) regime.
Although the implementation of the final process will be using streaming data, the classification
algorithm will work in batches because it will not be applied on every single data point but on
samples that are extracted when a regime change is detected. During the training phase, the data is
also analyzed in batches.
One important factor we must consider is that, in the real world, the majority of regime changes
will be `FALSE` (i.e., not life-threatening). Thus, a performance measure that is robust to class
imbalance is needed if we want to assess the model after it is deployed in the field.
It is well known that the *Accuracy* measure is not reliable for unbalanced data
[@Akosa2017; @Bekkar2013], as it returns optimistic results for a classifier biased toward the
majority class. A description of common measures used in classification is available in
[@Akosa2017; @Chicco2020]. Here we will focus on three candidate measures: the F-score (well
discussed in [@Chicco2020]), the Matthews Correlation Coefficient (MCC) [@Matthews1975], and the
$\kappa_m$ statistic [@Bifet2015].
The F-score (abbreviated here as F~1~, its most common setting) is widely used in *information
retrieval*, where items are usually classified as "relevant" or "irrelevant", and combines
*recall* (also known as sensitivity) with *precision* (the positive predictive value). *Recall*
assesses how well the algorithm retrieves relevant examples among the (usually few) relevant
items in the dataset. In contrast, *precision* assesses the proportion of retrieved examples that
are indeed relevant. It ranges over [0, 1]. It completely ignores the irrelevant items that were
not retrieved (usually, this set contains many items). In classification tasks, its main weakness
is not evaluating the True Negatives: if a random classifier is biased towards the `TRUE` class
(significantly increasing the False Positives), this score actually improves, making it
unsuitable for our case. The F~1~ score is defined in Equation $\eqref{fscore}$.
$$
F_1 = \frac{2 \cdot TP}{2 \cdot TP + FP + FN} = 2 \cdot \frac{precision \cdot recall}{precision + recall} \tag{4} \label{fscore}
$$
The MCC is a good alternative to the F~1~ when we do care about the True Negatives (both were
considered to "provide more realistic estimates of real-world model performance" [@Dubey2018]).
It is a method to compute the *Pearson product-moment correlation coefficient* [@Delgado2019]
between the actual and predicted values. It ranges over [-1, 1]. The MCC is the only binary
classification rate that gives a high score only if the classifier correctly classifies the
majority of both the positive and the negative instances [@Chicco2020]. One may argue that
Cohen's $\kappa$ has the same behavior, but there are two main differences: (1) MCC is
*undefined* for a *majority voter*, whereas Cohen's $\kappa$ does not discriminate this case from
a random classifier ($\kappa$ is zero for both); (2) it is proven that, in the special case where
the classifier increases the False Negatives, Cohen's $\kappa$ does not get worse as expected;
MCC does not have this issue [@Delgado2019]. MCC is defined in Equation $\eqref{mccval}$.
$$
MCC = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP + FP) \cdot (TP + FN) \cdot (TN + FP) \cdot (TN + FN)}} \tag{5} \label{mccval}
$$
The $\kappa_m$ statistic [@Bifet2015] is a measure that considers not the *random classifier* but
the *majority voter* (a classifier that always votes for the larger class). It was introduced by
Bifet *et al.* [@Bifet2015] for use in online settings, where the class balance may change over
time. It is defined in Equation $\eqref{kappam}$, where $p_0$ is the observed accuracy and $p_m$
is the accuracy of the majority voter. Theoretically, the score ranges over ($-\infty$, 1]. In
practice, negative values mean the classifier performs worse than the majority voter, and
positive values mean it performs better, up to the maximum of 1 when the classifier is optimal.
$$
\kappa_m = \frac{p_0 - p_m}{1 - p_m} \tag{6} \label{kappam}
$$
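The three candidate measures follow directly from the confusion matrix. The helper functions below are a straightforward transcription of Equations $\eqref{fscore}$, $\eqref{mccval}$ and $\eqref{kappam}$:

```r
# Classification measures computed from the binary confusion matrix
# counts (TP, TN, FP, FN).
f1_score <- function(tp, tn, fp, fn) {
  2 * tp / (2 * tp + fp + fn)
}

mcc <- function(tp, tn, fp, fn) {
  num <- tp * tn - fp * fn
  den <- sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
  num / den  # NaN (undefined) for a majority voter: a whole margin is zero
}

kappa_m <- function(tp, tn, fp, fn) {
  n <- tp + tn + fp + fn
  p0 <- (tp + tn) / n              # observed accuracy
  pm <- max(tp + fn, tn + fp) / n  # accuracy of the majority voter
  (p0 - pm) / (1 - pm)
}
```

For example, with `tp = 40, tn = 45, fp = 5, fn = 10`, the majority voter's accuracy is 0.5, so `kappa_m(40, 45, 5, 10)` evaluates to 0.7, while `mcc(50, 0, 50, 0)` (a classifier that always votes `TRUE`) is undefined, as expected.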
In the inner resampling (model training/tuning), the classification will be binary, and in our
case we know that the data is slightly unbalanced (60% false alarms). For this step, the metric
for model selection will be the MCC. Nevertheless, during the optimization process, the algorithm
will seek to minimize the False Negative Rate ($FNR = \frac{FN}{TP+FN}$), and ties will be broken
in favor of the smaller FNR. In the outer resampling, the MCC and $\kappa_m$ of all winning
models will be aggregated and reported using the median and interquartile range.
For comparing different classifiers, we will use Wilcoxon's signed-rank test. This method is
known to have low Type I and Type II errors for this kind of comparison [@Bifet2015].
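In R, this comparison is directly available via `stats::wilcox.test()`. A toy example with made-up per-fold MCC values for two classifiers:

```r
# Paired comparison of two classifiers over the same outer folds
# (the MCC values below are made up for illustration).
mcc_a <- c(0.71, 0.68, 0.74, 0.70, 0.69)
mcc_b <- c(0.65, 0.67, 0.70, 0.63, 0.66)
wilcox.test(mcc_a, mcc_b, paired = TRUE)
```

A small p-value indicates that the per-fold differences systematically favor one classifier; with only a handful of folds and no ties, the exact test is used automatically.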
### Full model (streaming setting)
For the final assessment, the best and the average models from the previous pipelines will be
assembled and tested using the whole original dataset.

The algorithm will be tested on each of the five life-threatening event types individually, in
order to evaluate its strengths and weaknesses.

For transparency, the whole confusion matrix will be reported, as well as the MCC, the
$\kappa_m$, and the FLOSS evaluation score.
# Current results
## Regime change detection
As we have seen previously, the FLOSS algorithm is built on top of the Matrix Profile (MP). Thus,
we have proposed several parameters that may or may not impact the FLOSS prediction performance.
The variables for building the MP are:
- **`mp_threshold`**: the minimum similarity value to be considered for 1-NN.
- **`time_constraint`**: the maximum distance to look for the nearest neighbor.
- **`window_size`**: the parameter that is always required to build an MP.
Later, the FLOSS algorithm also has parameters that need tuning to optimize the prediction:
- **`regime_threshold`**: the threshold below which a regime change is considered.
- **`regime_landmark`**: the point in time where the regime threshold is applied.
Using the `tidymodels` framework, we performed a basic grid search over all these parameters.
Fig. \@ref(fig:thepipeline) shows the workflow using Nested Resampling, as described in section \@ref(methodology).
Fig. \@ref(fig:flossregime) shows an example of the regime change detection pipeline. The graph on top shows
the ECG stream; the blue line marks the ten seconds before the original alarm was fired; the red line
marks the time constraint used in the example; the dark red line marks the limit for taking a decision
in this case of Asystole; the blue horizontal line represents the size of the sliding window. The graph
in the middle shows the Arc Counts as seen by the algorithm (with the corrected distribution); the red