#LyX 2.0 created this file. For more info see http://www.lyx.org/
\lyxformat 413
\begin_document
\begin_header
\textclass IEEEtran
\use_default_options true
\maintain_unincluded_children false
\language english
\language_package default
\inputencoding auto
\fontencoding global
\font_roman default
\font_sans default
\font_typewriter default
\font_default_family default
\use_non_tex_fonts false
\font_sc false
\font_osf false
\font_sf_scale 100
\font_tt_scale 100
\graphics default
\default_output_format default
\output_sync 0
\bibtex_command default
\index_command default
\paperfontsize default
\spacing single
\use_hyperref false
\papersize default
\use_geometry false
\use_amsmath 1
\use_esint 1
\use_mhchem 1
\use_mathdots 1
\cite_engine basic
\use_bibtopic false
\use_indices false
\paperorientation portrait
\suppress_date false
\use_refstyle 1
\index Index
\shortcut idx
\color #008000
\end_index
\secnumdepth 3
\tocdepth 3
\paragraph_separation indent
\paragraph_indentation default
\quotes_language english
\papercolumns 1
\papersides 1
\paperpagestyle default
\tracking_changes false
\output_changes false
\html_math_output 0
\html_css_as_file 0
\html_be_strict false
\end_header
\begin_body
\begin_layout Title
Detecting sleep apnea events in snore-related sounds using a convolutional
neural network
\end_layout
\begin_layout Author
Greg Maslov
\family typewriter
<maslov@cs.unc.edu>
\end_layout
\begin_layout Abstract
I propose a convolutional neural network model to automatically detect and
classify sleep apneas and hypopneas using only unsophisticated, noisy audio
recordings of snore-related sounds captured with a general-purpose microphone.
I evaluate the model on a data set from the UNC Sleep Lab, obtain a disappointing
result, and discuss future work.
\end_layout
\begin_layout Section
Motivation
\end_layout
\begin_layout Standard
Obstructive sleep apnea (OSA) is a common condition whose symptoms can include
daytime sleepiness, hypertension, cardiovascular morbidity, and impaired
cognitive function.
It is estimated that 5% of adults suffer from OSA, and up to 20% have mild
or asymptomatic OSA.
However, OSA is usually unrecognized and undiagnosed, and is likely to
result in a large population-level health care burden
\begin_inset CommandInset citation
LatexCommand cite
key "Young2002"
\end_inset
.
\end_layout
\begin_layout Standard
Unfortunately, OSA is expensive to diagnose.
The
\begin_inset Quotes eld
\end_inset
gold standard
\begin_inset Quotes erd
\end_inset
diagnosis requires several nights of instrumented sleep in a hospital and
manual examination of the resulting polysomnogram (PSG) traces.
It is not even easy to tell when a sleep study would be useful, as the
symptoms are often dismissed or mistaken as being caused by something else.
There are portable monitoring devices which can mitigate the cost and inconvenience
of a sleep study, achieving diagnostic agreement of between 75% and
91%
\begin_inset CommandInset citation
LatexCommand cite
key "Santos-Silva2009"
\end_inset
, but these still represent a significant cost in sensor hardware and are
not widely used.
\end_layout
\begin_layout Standard
Snoring sounds carry information about sleep apnea events
\begin_inset CommandInset citation
LatexCommand cite
key "Ng2008,Fiz2010,Sola-Soler2007"
\end_inset
.
Hardware capable of recording and processing audio is ubiquitous in the
form of smartphones, tablets, and notebooks.
If a reliable algorithm to detect apnea-hypopnea events based only on snoring
sounds could be developed, it could be widely deployed to allow easy
self-screening for a large fraction of the population.
\end_layout
\begin_layout Standard
Convolutional neural networks (CNNs) are a type of deep neural architecture
which has been successfully applied to many different classification and
recognition tasks.
I propose the use of such a network to recognize sleep apnea events in
snore-related sound recordings.
\end_layout
\begin_layout Section
Related Work
\end_layout
\begin_layout Standard
Current approaches to detecting sleep apnea events are mostly based on spectral
analysis; they include:
\end_layout
\begin_layout Itemize
linear predictive coding to model formant frequencies (88% sensitivity,
82% specificity);
\begin_inset CommandInset citation
LatexCommand cite
key "Ng2008"
\end_inset
\end_layout
\begin_layout Itemize
logistic regression on snore waveform parameters: pitch, power spectral
density, formant frequency and amplitude, 1st derivatives of these, etc.
(>93% sensitivity, 73-88% specificity);
\begin_inset CommandInset citation
LatexCommand cite
key "Sola-Soler2007"
\end_inset
\end_layout
\begin_layout Itemize
ad hoc analysis of power spectrum peaks (93% sensitivity; 67% specificity)
\begin_inset CommandInset citation
LatexCommand cite
key "Nakano2004"
\end_inset
.
\end_layout
\begin_layout Standard
Higher specificity would be desirable to reduce false-positive costs in
a method intended for wide application.
Furthermore, the above cited methods all use the signal from a tracheal
microphone, the use of which is not feasible in the intended application
(since a tracheal microphone is not a common household item).
\end_layout
\begin_layout Section
Model
\end_layout
\begin_layout Standard
\begin_inset Float figure
wide false
sideways false
status collapsed
\begin_layout Plain Layout
\align center
\begin_inset Graphics
filename figures/lenet-figure.png
width 100col%
\end_inset
\end_layout
\begin_layout Plain Layout
\begin_inset Caption
\begin_layout Plain Layout
\begin_inset CommandInset label
LatexCommand label
name "fig:LeNet"
\end_inset
An example of a LeNet model for image recognition.
\end_layout
\end_inset
\end_layout
\end_inset
\end_layout
\begin_layout Standard
I used a simplified LeNet-like
\begin_inset CommandInset citation
LatexCommand cite
key "LeCun1998"
\end_inset
convolutional neural network.
The input is a 300x300 real-valued image computed using a Short-Time Fourier
Transform (STFT) of the raw audio (see
\begin_inset CommandInset ref
LatexCommand formatted
reference "sec:Data"
\end_inset
).
The first four hidden layers alternate between convolution and subsampling
layers.
Above these is one fully-connected sigmoidal layer followed by a final
logistic regression layer.
A similar model is shown in
\begin_inset CommandInset ref
LatexCommand formatted
reference "fig:LeNet"
\end_inset
.
\end_layout
\begin_layout Standard
Part of my motivation for choosing this type of model was that it works
well for image recognition, and a spectrogram (i.e., an STFT window) is an
image that can be used to isolate spectral features of an audio recording.
The other part was that this type of deep architecture seemed the easiest
to understand and implement given my background (as opposed to, say, RBM-based
models).
\end_layout
\begin_layout Standard
Each convolution layer consists of a small fixed number of feature maps.
Each feature map produces an output image by convolving its input image(s)
by a parameter kernel
\begin_inset Formula $W$
\end_inset
, adding a parameter bias vector
\begin_inset Formula $b$
\end_inset
, and applying a sigmoidal (tanh) nonlinearity.
The convolution kernel of a feature map is thus 3-dimensional, having the
axes <input image index, x-position, y-position>, and the weights of a
layer (composed of several feature maps) form a 4-dimensional tensor.
\end_layout
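The convolution layer just described can be sketched in plain NumPy (an illustrative stand-in for the actual Theano implementation; the shapes and names here are assumptions, and a deliberately naive loop is used for clarity rather than speed):

```python
import numpy as np

def conv_layer(inputs, W, b):
    """One convolution layer as described above: `inputs` is a stack of
    images with shape (n_in, H, Wd); `W` is the layer's 4-D weight tensor
    with shape (n_out, n_in, kh, kw), i.e. one 3-D kernel per feature map;
    `b` is a bias vector of length n_out.  Each feature map correlates every
    input image with its kernel, sums the results, adds its bias, and
    applies a tanh nonlinearity."""
    n_out, n_in, kh, kw = W.shape
    H, Wd = inputs.shape[1], inputs.shape[2]
    oh, ow = H - kh + 1, Wd - kw + 1          # "valid" convolution output size
    out = np.zeros((n_out, oh, ow))
    for o in range(n_out):
        for i in range(n_in):
            for y in range(oh):
                for x in range(ow):
                    out[o, y, x] += np.sum(inputs[i, y:y+kh, x:x+kw] * W[o, i])
        out[o] += b[o]
    return np.tanh(out)
```

(Strictly speaking this computes cross-correlation rather than convolution, which is the usual convention in CNN implementations and makes no difference once the kernels are learned.)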
\begin_layout Standard
The sub-sampling layers have no parameters.
They merely implement a
\begin_inset Quotes eld
\end_inset
max-pooling
\begin_inset Quotes erd
\end_inset
operation.
Each subsampling layer has one hyperparameter, the pool size, which in
most models is simply two.
Max-pooling consists of dividing each input image into a nonoverlapping
uniform grid, each cell being pool-size wide in both dimensions.
The maximum of each cell's elements becomes the output of that cell.
With a pool size of
\begin_inset Formula $m$
\end_inset
, this effectively scales down each input image by a factor of
\begin_inset Formula $1/m$
\end_inset
.
\end_layout
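A minimal NumPy sketch of this max-pooling operation (illustrative only; the report's version lived inside the Theano graph):

```python
import numpy as np

def max_pool(img, m=2):
    """Non-overlapping max-pooling: divide `img` into an m-by-m cell grid
    and keep the maximum of each cell, shrinking each axis by a factor
    of m (i.e. scaling the image by 1/m)."""
    h, w = img.shape
    # Reshape so each m-by-m cell becomes its own pair of axes, then
    # take the maximum over those axes.  Trailing rows/columns that do
    # not fill a whole cell are dropped.
    cells = img[:h - h % m, :w - w % m].reshape(h // m, m, w // m, m)
    return cells.max(axis=(1, 3))
```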
\begin_layout Standard
The max-pooling step is actually implemented using a softmax function, which
permits the use of gradient descent through the subsampling layers.
\end_layout
\begin_layout Standard
After the final subsampling layer is a tanh-activation layer fully connected
with the outputs of the layer below it, just as you would see in any multilayer
perceptron.
On top of that is a logistic regression layer, which performs the final
classification.
\end_layout
\begin_layout Section
\begin_inset CommandInset label
LatexCommand label
name "sec:Data"
\end_inset
Data
\end_layout
\begin_layout Standard
The training data consists of a set of audio recordings of snoring and sleep
sounds, annotated with the approximate time of each apnea or hypopnea event.
The duration of an event is not provided.
The audio is formatted as a stream of 16-bit signed integer samples at
a rate of 12,000 Hz.
\end_layout
\begin_layout Standard
I performed some preprocessing to extract (hopefully) useful features from
the raw audio data.
The first step is to compute a Short-Time Fourier Transform (STFT) of the
audio signal.
This turns the one-dimensional time-domain data into a two-dimensional
time- and frequency-domain image, known as a spectral waterfall or spectrogram.
The first Fourier coefficient (the DC offset) is discarded from each row,
as well as all of the high-frequency coefficients above the frequency range
of snoring and breathing sounds.
This truncated image is then subdivided into sequential, overlapping windows
of around 15 seconds each.
Each window image is then padded on the left and right sides (along the
time axis) with 10 rows of zeroes.
This ensures that features near the edge of the image can still be detected
by a convolution kernel.
See
\begin_inset CommandInset ref
LatexCommand formatted
reference "sec:Parameters"
\end_inset
for the exact parameters of these steps; I tried to choose them so that
the resulting windows are square, reasonably small, and informative.
\end_layout
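The preprocessing pipeline above can be sketched as follows (an illustrative NumPy version; the frame length, stride, and coefficient counts are placeholders for the values listed under the preprocessing parameters, and the helper names are my own):

```python
import numpy as np

def spectrogram(signal, frame_len, stride):
    """STFT magnitude: slide a Hamming-windowed frame along the signal
    and take |rFFT| of each frame, giving one row per time step."""
    n_frames = 1 + (len(signal) - frame_len) // stride
    win = np.hamming(frame_len)
    rows = [np.abs(np.fft.rfft(win * signal[i*stride : i*stride + frame_len]))
            for i in range(n_frames)]
    return np.array(rows)                  # shape: (time, frequency)

def make_windows(spec, n_discard_high, win_rows, win_stride, pad=10):
    """Drop the DC coefficient and the high-frequency coefficients, cut
    the spectrogram into sequential overlapping windows along the time
    axis, and zero-pad each window's time edges so convolution kernels
    can still see features near the boundary."""
    spec = spec[:, 1:spec.shape[1] - n_discard_high]   # drop DC + highs
    windows = []
    for start in range(0, spec.shape[0] - win_rows + 1, win_stride):
        w = spec[start:start + win_rows]
        w = np.pad(w, ((pad, pad), (0, 0)))            # pad along time axis
        windows.append(w)
    return windows
```

For example, at a 12,000 Hz sample rate a 150 ms frame is 1,800 samples, whose real FFT has 901 coefficients; discarding the DC term and the top 600 leaves 300 frequency columns per row.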
\begin_layout Standard
Finally, the windows are classified according to which type of sleep apnea
event, if any, occurs inside them.
Although windows overlap, events never occurred close enough together in
the data set to cause ambiguity.
See figure
\begin_inset CommandInset ref
LatexCommand vref
reference "fig:examples"
\end_inset
for an example of what they look like.
\end_layout
\begin_layout Subsection
UNC Sleep Lab
\end_layout
\begin_layout Standard
The UNC-CH Sleep Lab performs routine sleep studies and collects polysomnograms
(PSGs) with audio and video channels.
The PSGs are manually annotated for apnea-hypopnea events based on the
criteria listed in
\begin_inset CommandInset ref
LatexCommand formatted
reference "tab:Rules-for-scoring"
\end_inset
.
Dr.
Heidi Roth kindly made available one month's worth of data, comprising
29 full-night recordings with a total of 218 hours of audio, of which 103
are annotated.
I did not make use of the unannotated recordings.
\end_layout
\begin_layout Standard
Most subjects are recorded for two nights: one to adjust to sleeping in
the hospital environment, and one used for an actual diagnosis.
Some subjects are healthy, and others have varying types and severities
of sleep apnea.
The audio data was recorded using a microphone (of unknown specifications)
mounted in the ceiling 56 inches directly above the pillow, pointing down.
In addition to sleep sounds, the recordings also contain ambient noise
from air conditioning, hospital systems, television, and conversations.
\end_layout
\begin_layout Standard
The annotations consist of several different types of events: obstructive
apnea (OA), obstructive apnea with arousal (OAa), obstructive hypopnea
(OH), obstructive hypopnea with arousal (OHa), central apnea (CA), central
apnea with arousal (CAa), central hypopnea (CH), central hypopnea with
arousal (CHa), respiratory effort related arousal (RERA), and body position
changes (Supine, Upright, Left, Right).
\end_layout
\begin_layout Standard
The annotated data set did not contain any CH events.
The body position changes were irrelevant.
This leaves the distribution of training examples (after preprocessing
and windowing) shown in
\begin_inset CommandInset ref
LatexCommand formatted
reference "tab:class-counts"
\end_inset
.
An example of each class is shown in figure
\begin_inset CommandInset ref
LatexCommand vref
reference "fig:examples"
\end_inset
.
\end_layout
\begin_layout Standard
\begin_inset Float table
wide false
sideways false
status collapsed
\begin_layout Description
Apnea 90% or more decrease on NASAL channel for at least 10 seconds
\end_layout
\begin_deeper
\begin_layout Itemize
\emph on
Central
\emph default
- absence of airflow and effort.
\end_layout
\begin_layout Itemize
\emph on
Obstructive
\emph default
- absence of airflow and decreased effort.
\end_layout
\begin_layout Itemize
\emph on
Mixed
\emph default
- initial absence of airflow and effort followed by effort in Abdomen and
Chest channels.
\end_layout
\end_deeper
\begin_layout Description
Hypopnea Decrease in NASAL, but partly reduced (at least 30%) in PNASAL
and 4% desaturation in SpO2 channel within 10-20 seconds after the respiratory
event.
\end_layout
\begin_layout Description
RERA Not flat in NASAL, 50% reduction in PNASAL, ending in either an arousal
or 3% desaturation.
In REM, arousal is a chin EMG change; in NREM, need 3 seconds of alpha
waves in EEG.
\begin_inset Caption
\begin_layout Plain Layout
\begin_inset CommandInset label
LatexCommand label
name "tab:Rules-for-scoring"
\end_inset
Rules for scoring respiratory events
\end_layout
\end_inset
\end_layout
\end_inset
\end_layout
\begin_layout Standard
\begin_inset Float table
wide false
sideways false
status collapsed
\begin_layout Plain Layout
\align center
\begin_inset Tabular
<lyxtabular version="3" rows="11" columns="2">
<features tabularvalignment="middle">
<column alignment="center" valignment="top" width="0">
<column alignment="center" valignment="top" width="0">
<row>
<cell alignment="center" valignment="top" topline="true" bottomline="true" leftline="true" usebox="none">
\begin_inset Text
\begin_layout Plain Layout
Class
\end_layout
\end_inset
</cell>
<cell alignment="center" valignment="top" topline="true" bottomline="true" leftline="true" rightline="true" usebox="none">
\begin_inset Text
\begin_layout Plain Layout
Count
\end_layout
\end_inset
</cell>
</row>
<row>
<cell alignment="center" valignment="top" topline="true" leftline="true" usebox="none">
\begin_inset Text
\begin_layout Plain Layout
(no event)
\end_layout
\end_inset
</cell>
<cell alignment="center" valignment="top" topline="true" leftline="true" rightline="true" usebox="none">
\begin_inset Text
\begin_layout Plain Layout
131725
\end_layout
\end_inset
</cell>
</row>
<row>
<cell alignment="center" valignment="top" topline="true" leftline="true" usebox="none">
\begin_inset Text
\begin_layout Plain Layout
RERA
\end_layout
\end_inset
</cell>
<cell alignment="center" valignment="top" topline="true" leftline="true" rightline="true" usebox="none">
\begin_inset Text
\begin_layout Plain Layout
4263
\end_layout
\end_inset
</cell>
</row>
<row>
<cell alignment="center" valignment="top" topline="true" leftline="true" usebox="none">
\begin_inset Text
\begin_layout Plain Layout
OAa
\end_layout
\end_inset
</cell>
<cell alignment="center" valignment="top" topline="true" leftline="true" rightline="true" usebox="none">
\begin_inset Text
\begin_layout Plain Layout
7707
\end_layout
\end_inset
</cell>
</row>
<row>
<cell alignment="center" valignment="top" topline="true" leftline="true" usebox="none">
\begin_inset Text
\begin_layout Plain Layout
CAa
\end_layout
\end_inset
</cell>
<cell alignment="center" valignment="top" topline="true" leftline="true" rightline="true" usebox="none">
\begin_inset Text
\begin_layout Plain Layout
774
\end_layout
\end_inset
</cell>
</row>
<row>
<cell alignment="center" valignment="top" topline="true" leftline="true" usebox="none">
\begin_inset Text
\begin_layout Plain Layout
OHa
\end_layout
\end_inset
</cell>
<cell alignment="center" valignment="top" topline="true" leftline="true" rightline="true" usebox="none">
\begin_inset Text
\begin_layout Plain Layout
3954
\end_layout
\end_inset
</cell>
</row>
<row>
<cell alignment="center" valignment="top" topline="true" leftline="true" usebox="none">
\begin_inset Text
\begin_layout Plain Layout
CHa
\end_layout
\end_inset
</cell>
<cell alignment="center" valignment="top" topline="true" leftline="true" rightline="true" usebox="none">
\begin_inset Text
\begin_layout Plain Layout
78
\end_layout
\end_inset
</cell>
</row>
<row>
<cell alignment="center" valignment="top" topline="true" leftline="true" usebox="none">
\begin_inset Text
\begin_layout Plain Layout
OA
\end_layout
\end_inset
</cell>
<cell alignment="center" valignment="top" topline="true" leftline="true" rightline="true" usebox="none">
\begin_inset Text
\begin_layout Plain Layout
142
\end_layout
\end_inset
</cell>
</row>
<row>
<cell alignment="center" valignment="top" topline="true" leftline="true" usebox="none">
\begin_inset Text
\begin_layout Plain Layout
CA
\end_layout
\end_inset
</cell>
<cell alignment="center" valignment="top" topline="true" leftline="true" rightline="true" usebox="none">
\begin_inset Text
\begin_layout Plain Layout
41
\end_layout
\end_inset
</cell>
</row>
<row>
<cell alignment="center" valignment="top" topline="true" leftline="true" usebox="none">
\begin_inset Text
\begin_layout Plain Layout
OH
\end_layout
\end_inset
</cell>
<cell alignment="center" valignment="top" topline="true" leftline="true" rightline="true" usebox="none">
\begin_inset Text
\begin_layout Plain Layout
754
\end_layout
\end_inset
</cell>
</row>
<row>
<cell alignment="center" valignment="top" topline="true" bottomline="true" leftline="true" usebox="none">
\begin_inset Text
\begin_layout Plain Layout
CH
\end_layout
\end_inset
</cell>
<cell alignment="center" valignment="top" topline="true" bottomline="true" leftline="true" rightline="true" usebox="none">
\begin_inset Text
\begin_layout Plain Layout
0
\end_layout
\end_inset
</cell>
</row>
</lyxtabular>
\end_inset
\end_layout
\begin_layout Plain Layout
\begin_inset Caption
\begin_layout Plain Layout
\begin_inset CommandInset label
LatexCommand label
name "tab:class-counts"
\end_inset
Number of training examples in each class
\end_layout
\end_inset
\end_layout
\end_inset
\end_layout
\begin_layout Standard
\begin_inset Float figure
placement p
wide true
sideways false
status collapsed
\begin_layout Plain Layout
\align center
\begin_inset Graphics
filename figures/real-examples.png
width 90text%
\end_inset
\end_layout
\begin_layout Plain Layout
\begin_inset Caption
\begin_layout Plain Layout
\begin_inset CommandInset label
LatexCommand label
name "fig:examples"
\end_inset
One randomly chosen example from each class (except CH, which was not present
in the data).
Starting from the upper left and proceeding by rows, the classes are: no
event, RERA, OAa, CAa, OHa, CHa, OA, CA, OH.
The horizontal axis is frequency.
The vertical axis is time.
\end_layout
\end_inset
\end_layout
\end_inset
\end_layout
\begin_layout Subsection
\begin_inset CommandInset label
LatexCommand label
name "sub:Synthetic"
\end_inset
Synthetic
\end_layout
\begin_layout Standard
I used freely available audio processing software to construct a simple
synthetic data set for testing.
The construction was as follows:
\end_layout
\begin_layout Itemize
A background of Brownian noise at amplitude 0.1.
This type of noise was chosen because it sounds similar to the background
noise in the real data.
\end_layout
\begin_layout Itemize
DTMF tones, amplitude 0.01, duty cycle 30%, pattern
\begin_inset Quotes eld
\end_inset
1,2,1,2,1,2,1,...
\begin_inset Quotes erd
\end_inset
.
This is intended to approximately imitate soft breathing sounds: a slow
repeating pattern, alternating between two frequencies, with pauses.
The chosen amplitude is between 0-10dB above the noise floor in the DTMF
frequency range.
DTMF tones were used because they were easy to generate with this particular
software package.
\end_layout
\begin_layout Itemize
To simulate apnea events, I chose some arbitrary sections of the
\begin_inset Quotes eld
\end_inset
breathing
\begin_inset Quotes erd
\end_inset
pattern to delete.
\end_layout
\begin_layout Standard
This data should be relatively easy to model, since the
\begin_inset Quotes eld
\end_inset
breathing
\begin_inset Quotes erd
\end_inset
tones are above the noise floor, and an apnea event can be detected simply
by finding gaps in the pattern.
Unlike the real data, there is no variation in the breathing pattern's
rate, pitch, timbre, or whatever unknown quality might denote an imminent
apnea event.
There is no structured environmental noise and no tossing-and-turning sound.
There are only two classes of event: apnea and normal breathing, as opposed
to the nine classes in the real data set.
\end_layout
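The construction above can be sketched programmatically (a hypothetical NumPy reimplementation; the original was built interactively in an audio editor, and the pure sine "breathing" tones here stand in for the full dual-frequency DTMF tones):

```python
import numpy as np

rng = np.random.default_rng(0)
rate = 12000                          # sample rate matching the real data

def brownian_noise(n, amplitude=0.1):
    """Background: Brownian (integrated white) noise, rescaled to the
    chosen peak amplitude."""
    x = np.cumsum(rng.standard_normal(n))
    return amplitude * x / np.max(np.abs(x))

def breathing(n, amplitude=0.01, period=2.0, duty=0.3):
    """Imitation breathing: a slow repeating pattern at 30% duty cycle,
    alternating between two tones with pauses in between.  The two
    frequencies are stand-ins for the DTMF '1' and '2' tones."""
    t = np.arange(n) / rate
    freqs = np.array([697.0, 770.0])
    out = np.zeros(n)
    active = (t % period) / period < duty              # tone vs. pause
    which = ((t // period) % 2).astype(int)            # alternate 1,2,1,2,...
    out[active] = amplitude * np.sin(2*np.pi*freqs[which[active]]*t[active])
    return out

def with_apneas(breath, gaps):
    """Simulate apnea events by silencing arbitrary (start, stop)
    sample ranges of the breathing pattern."""
    breath = breath.copy()
    for a, b in gaps:
        breath[a:b] = 0.0
    return breath

# 20 seconds of audio with one simulated apnea from t=8s to t=14s.
audio = brownian_noise(rate*20) + with_apneas(breathing(rate*20),
                                              [(rate*8, rate*14)])
```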
\begin_layout Section
Training
\end_layout
\begin_layout Standard
The overall training algorithm is stochastic gradient descent (SGD), with
the cost function being the negative log-likelihood of a batch of predictions
from the logistic regression output layer;
\begin_inset Formula $LL(\theta)=-\sum_{i}\log p_{\theta}(y_{i}|x_{i})$
\end_inset
.
I made use of the symbolic differentiation capabilities of Theano
\begin_inset CommandInset citation
LatexCommand cite
key "bergstra+al:2010-scipy"
\end_inset
to derive the gradient of each parameter with respect to this cost throughout
the model.
\end_layout
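For concreteness, the output layer's cost and a single SGD step can be written out in NumPy (an illustrative sketch; in the report the gradient was derived symbolically by Theano rather than coded by hand as below):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)    # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(W, b, X, y):
    """Negative log-likelihood of a batch under the softmax output
    layer: -sum_i log p(y_i | x_i)."""
    p = softmax(X @ W + b)
    return -np.sum(np.log(p[np.arange(len(y)), y]))

def sgd_step(W, b, X, y, lr=0.05):
    """One stochastic gradient descent step using the analytic
    softmax-NLL gradient: dL/dz = p - onehot(y)."""
    p = softmax(X @ W + b)
    p[np.arange(len(y)), y] -= 1.0
    return W - lr * (X.T @ p), b - lr * p.sum(axis=0)
```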
\begin_layout Standard
In my initial tests using simple gradient descent (plain backpropagation),
I found that the performance of this model on the synthetic data set
was extremely sensitive to the learning rate and batch size parameters,
so much so that I was having trouble getting reliable convergence at all.
Implementing the
\shape smallcaps
Rprop
\shape default
algorithm
\begin_inset CommandInset citation
LatexCommand cite
key "Riedmiller1994"
\end_inset
solved my convergence issues and eliminated the need to carefully tune
the learning rate.
\end_layout
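The appeal of Rprop is that it adapts a separate step size per parameter from the sign of the gradient alone, so no global learning rate needs tuning. A minimal sketch of one update, following the Riedmiller & Braun scheme (my Theano version may differ in detail):

```python
import numpy as np

def rprop_step(param, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=1.0):
    """One Rprop update.  Each parameter keeps its own step size, which
    grows while its gradient keeps the same sign and shrinks when the
    sign flips; only the sign of the gradient determines the direction
    of the move."""
    same = np.sign(grad) * np.sign(prev_grad)
    step = np.where(same > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(same < 0, np.maximum(step * eta_minus, step_min), step)
    grad = np.where(same < 0, 0.0, grad)   # skip the move after a sign flip
    return param - np.sign(grad) * step, grad, step
```

For instance, iterating this rule on the gradient of a one-dimensional quadratic drives the parameter toward the minimum without any learning-rate search.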
\begin_layout Standard
The complete set of hyperparameters is listed below.
\end_layout
\begin_layout Subsection*
\begin_inset CommandInset label
LatexCommand label
name "sec:Parameters"
\end_inset
Data preprocessing parameters
\end_layout
\begin_layout Itemize
STFT width: 150 milliseconds.
\end_layout
\begin_layout Itemize
STFT stride: 50 milliseconds.
\end_layout
\begin_layout Itemize
STFT windowing function: Hamming (raised cosine).
This was chosen to maximize frequency resolution.
\end_layout
\begin_layout Itemize
Number of high-frequency STFT coefficients to discard: 600 (that is, two-thirds
of them).
This step combined with the wide STFT has the effect of magnifying all
the features which occur in the low- to mid-frequency range.
Hopefully those are the interesting ones.