From 07902164755be9faf42f9c78911c61aa247bcedc Mon Sep 17 00:00:00 2001
From: Kelly Sovacool
Date: Fri, 17 Mar 2023 10:44:50 -0400
Subject: [PATCH 1/3] Link to Discussions instead of rstudio community

---
 .github/SUPPORT.md | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/.github/SUPPORT.md b/.github/SUPPORT.md
index ef09e386..225711e0 100644
--- a/.github/SUPPORT.md
+++ b/.github/SUPPORT.md
@@ -14,11 +14,12 @@ For additional reprex pointers, check out the [Get help!](https://www.tidyverse.
 
 Armed with your reprex, the next step is to figure out [where to ask](https://www.tidyverse.org/help/#where-to-ask).
 
-* If it's a question: start with [community.rstudio.com](https://community.rstudio.com/), and/or StackOverflow. There are more people there to answer questions.
+* If it's a question: start with [Discussions](https://github.com/SchlossLab/mikropml/discussions), and/or StackOverflow. There are more people there to answer questions.
 
 * If it's a bug: you're in the right place, [file an issue](https://github.com/SchlossLab/mikropml/issues/new).
 
-* If you're not sure: let the community help you figure it out!
+* If you're not sure: let the community help you figure it out by first asking
+  in Discussions!
 
 If your problem _is_ a bug or a feature request, you can easily return here and report it. Before opening a new issue, be sure to [search issues and pull requests](https://github.com/SchlossLab/mikropml/issues) to make sure the bug hasn't been reported and/or already fixed in the development version.

From 22b56c340eb79d8aa489041d6bccf53fe1c782b1 Mon Sep 17 00:00:00 2001
From: Kelly Sovacool
Date: Fri, 17 Mar 2023 10:47:32 -0400
Subject: [PATCH 2/3] Remove end date from license

Let's not bump it every year
---
 LICENSE.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/LICENSE.md b/LICENSE.md
index fba4fabe..43fcb95e 100644
--- a/LICENSE.md
+++ b/LICENSE.md
@@ -1,6 +1,6 @@
 # MIT License
 
-Copyright (c) 2019-2021 Begüm D. Topçuoğlu, Zena Lapp, Kelly L. Sovacool, Evan Snitkin, Jenna Wiens, and Patrick D. Schloss
+Copyright (c) 2019 Begüm D. Topçuoğlu, Zena Lapp, Kelly L. Sovacool, Evan Snitkin, Jenna Wiens, and Patrick D. Schloss
 
 Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal

From 0180355bf5c1bdff4bd8f210c3cdbbb7f67627e8 Mon Sep 17 00:00:00 2001
From: "github-actions[bot]" <41898282+github-actions[bot]@users.noreply.github.com>
Date: Fri, 17 Mar 2023 15:29:14 +0000
Subject: [PATCH 3/3] =?UTF-8?q?=F0=9F=93=91=20Build=20docs=20site?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
 docs/dev/LICENSE.html                         |  2 +-
 docs/dev/SUPPORT.html                         |  4 +--
 docs/dev/articles/parallel.html               | 28 +++++++--------
 .../figure-html/feat_imp_plot-1.png           | Bin 56929 -> 57037 bytes
 docs/dev/pkgdown.yml                          |  2 +-
 docs/dev/reference/bootstrap_performance.html | 34 +++++++++---------
 docs/dev/reference/get_perf_metric_fn.html    |  6 ++--
 docs/dev/search.json                          |  2 +-
 8 files changed, 39 insertions(+), 39 deletions(-)

diff --git a/docs/dev/LICENSE.html b/docs/dev/LICENSE.html
index 0a4afa24..833b5ba7 100644
--- a/docs/dev/LICENSE.html
+++ b/docs/dev/LICENSE.html
@@ -59,7 +59,7 @@
-Copyright (c) 2019-2021 Begüm D. Topçuoğlu, Zena Lapp, Kelly L. Sovacool, Evan Snitkin, Jenna Wiens, and Patrick D. Schloss
+Copyright (c) 2019 Begüm D. Topçuoğlu, Zena Lapp, Kelly L. Sovacool, Evan Snitkin, Jenna Wiens, and Patrick D. Schloss

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

diff --git a/docs/dev/SUPPORT.html b/docs/dev/SUPPORT.html
index e942589b..ad6febd0 100644
--- a/docs/dev/SUPPORT.html
+++ b/docs/dev/SUPPORT.html
@@ -67,9 +67,9 @@

Make a reprex

Where to ask?

Armed with your reprex, the next step is to figure out where to ask.

-  • If it’s a question: start with community.rstudio.com, and/or StackOverflow. There are more people there to answer questions.
+  • If it’s a question: start with Discussions, and/or StackOverflow. There are more people there to answer questions.

   • If it’s a bug: you’re in the right place, file an issue.

-  • If you’re not sure: let the community help you figure it out! If your problem is a bug or a feature request, you can easily return here and report it.
+  • If you’re not sure: let the community help you figure it out by first asking in Discussions! If your problem is a bug or a feature request, you can easily return here and report it.

    Before opening a new issue, be sure to search issues and pull requests to make sure the bug hasn’t been reported and/or already fixed in the development version. By default, the search will be pre-populated with is:issue is:open. You can edit the qualifiers (e.g. is:pr, is:closed) as needed. For example, you’d simply remove is:open to search all issues in the repo, open or closed.
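The "Make a reprex" step above is easiest to see with a concrete sketch. This note is illustrative, not part of the patch: the reprex package is the tidyverse tool the support page links to, and the run_ml() call stands in for whatever code triggers your question.

library(reprex)

reprex({
  library(mikropml)
  # the smallest data and call that reproduce the behavior you're asking about
  result <- run_ml(otu_mini_bin, "glmnet", outcome_colname = "dx", seed = 2019)
  result$performance
})
# reprex() renders the code together with its output so it can be
# pasted directly into a GitHub Discussion.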

diff --git a/docs/dev/articles/parallel.html b/docs/dev/articles/parallel.html
index b3119999..e8c56aaa 100644
--- a/docs/dev/articles/parallel.html
+++ b/docs/dev/articles/parallel.html
@@ -165,21 +165,21 @@

Bootstrap performance

 #> # A tibble: 15 × 6
 #>    term              .lower .estimate .upper .alpha .method
 #>    <chr>              <dbl>     <dbl>  <dbl>  <dbl> <chr>
-#>  1 AUC                0.489     0.656  0.825   0.05 percentile
-#>  2 Accuracy           0.448     0.597  0.769   0.05 percentile
-#>  3 Balanced_Accuracy  0.441     0.597  0.753   0.05 percentile
-#>  4 Detection_Rate     0.166     0.307  0.462   0.05 percentile
-#>  5 F1                 0.412     0.597  0.759   0.05 percentile
-#>  6 Kappa             -0.119     0.189  0.507   0.05 percentile
-#>  7 Neg_Pred_Value     0.333     0.583  0.774   0.05 percentile
-#>  8 Pos_Pred_Value     0.425     0.610  0.812   0.05 percentile
-#>  9 Precision          0.425     0.610  0.812   0.05 percentile
-#> 10 Recall             0.387     0.598  0.810   0.05 percentile
-#> 11 Sensitivity        0.387     0.598  0.810   0.05 percentile
-#> 12 Specificity        0.342     0.596  0.789   0.05 percentile
+#>  1 AUC                0.434     0.639  0.820   0.05 percentile
+#>  2 Accuracy           0.422     0.583  0.744   0.05 percentile
+#>  3 Balanced_Accuracy  0.431     0.586  0.749   0.05 percentile
+#>  4 Detection_Rate     0.179     0.299  0.449   0.05 percentile
+#>  5 F1                 0.412     0.585  0.762   0.05 percentile
+#>  6 Kappa             -0.132     0.167  0.486   0.05 percentile
+#>  7 Neg_Pred_Value     0.375     0.572  0.807   0.05 percentile
+#>  8 Pos_Pred_Value     0.395     0.599  0.855   0.05 percentile
+#>  9 Precision          0.395     0.599  0.855   0.05 percentile
+#> 10 Recall             0.375     0.584  0.824   0.05 percentile
+#> 11 Sensitivity        0.375     0.584  0.824   0.05 percentile
+#> 12 Specificity        0.379     0.587  0.823   0.05 percentile
 #> 13 cv_metric_AUC      0.622     0.622  0.622   0.05 percentile
-#> 14 logLoss            0.655     0.681  0.705   0.05 percentile
-#> 15 prAUC              0.465     0.592  0.736   0.05 percentile
+#> 14 logLoss            0.660     0.685  0.714   0.05 percentile
+#> 15 prAUC              0.442     0.583  0.734   0.05 percentile
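For context, a minimal sketch of the calls behind the table above, following the workflow of the parallel vignette this hunk modifies; the future plan and worker count are assumptions, not part of the patch.

library(mikropml)

# register a parallel backend so mikropml's internal foreach loops can use it
doFuture::registerDoFuture()
future::plan(future::multisession, workers = 2)

# train a model, then bootstrap the test-set performance;
# .alpha = 0.05 in the table corresponds to 95% percentile CIs
result <- run_ml(otu_mini_bin, "glmnet", outcome_colname = "dx", seed = 2019)
boot_ci <- bootstrap_performance(result,
  outcome_colname = "dx",
  bootstrap_times = 10, # a small number keeps the vignette fast
  alpha = 0.05
)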
diff --git a/docs/dev/articles/parallel_files/figure-html/feat_imp_plot-1.png b/docs/dev/articles/parallel_files/figure-html/feat_imp_plot-1.png
index 7a43acfd35d8ab9ef09085f313b51de4ecf53c24..dee2624a2df06cf03ce5e06af5b0254f5d025445 100644
GIT binary patch
delta 42031
[base85-encoded binary delta omitted]

delta 41927
[base85-encoded binary delta omitted]

diff --git a/docs/dev/reference/bootstrap_performance.html b/docs/dev/reference/bootstrap_performance.html

Examples

 #> Warning: Recommend at least 1000 non-missing bootstrap resamples for terms: `AUC`, `Accuracy`, `Balanced_Accuracy`, `Detection_Rate`, `F1`, `Kappa`, `Neg_Pred_Value`, `Pos_Pred_Value`, `Precision`, `Recall`, `Sensitivity`, `Specificity`, `cv_metric_AUC`, `logLoss`, `prAUC`.
 #> # A tibble: 15 × 6
-#>    term              .lower .estimate .upper .alpha .method
-#>    <chr>              <dbl>     <dbl>  <dbl>  <dbl> <chr>
-#>  1 AUC               0.496      0.659  0.838    0.1 percentile
-#>  2 Accuracy          0.510      0.613  0.718    0.1 percentile
-#>  3 Balanced_Accuracy 0.490      0.611  0.728    0.1 percentile
-#>  4 Detection_Rate    0.254      0.351  0.464    0.1 percentile
-#>  5 F1                0.539      0.641  0.752    0.1 percentile
-#>  6 Kappa            -0.0121     0.213  0.421    0.1 percentile
-#>  7 Neg_Pred_Value    0.301      0.561  0.810    0.1 percentile
-#>  8 Pos_Pred_Value    0.539      0.654  0.785    0.1 percentile
-#>  9 Precision         0.539      0.654  0.785    0.1 percentile
-#> 10 Recall            0.521      0.642  0.807    0.1 percentile
-#> 11 Sensitivity       0.521      0.642  0.807    0.1 percentile
-#> 12 Specificity       0.44       0.579  0.661    0.1 percentile
-#> 13 cv_metric_AUC     0.622      0.622  0.622    0.1 percentile
-#> 14 logLoss           0.656      0.682  0.702    0.1 percentile
-#> 15 prAUC             0.468      0.598  0.743    0.1 percentile
+#>    term              .lower .estimate .upper .alpha .method
+#>    <chr>              <dbl>     <dbl>  <dbl>  <dbl> <chr>
+#>  1 AUC                0.483     0.650  0.824    0.1 percentile
+#>  2 Accuracy           0.405     0.587  0.695    0.1 percentile
+#>  3 Balanced_Accuracy  0.432     0.592  0.700    0.1 percentile
+#>  4 Detection_Rate     0.205     0.310  0.413    0.1 percentile
+#>  5 F1                 0.422     0.597  0.684    0.1 percentile
+#>  6 Kappa             -0.139     0.180  0.389    0.1 percentile
+#>  7 Neg_Pred_Value     0.387     0.603  0.752    0.1 percentile
+#>  8 Pos_Pred_Value     0.354     0.579  0.730    0.1 percentile
+#>  9 Precision          0.354     0.579  0.730    0.1 percentile
+#> 10 Recall             0.489     0.634  0.75     0.1 percentile
+#> 11 Sensitivity        0.489     0.634  0.75     0.1 percentile
+#> 12 Specificity        0.321     0.550  0.706    0.1 percentile
+#> 13 cv_metric_AUC      0.622     0.622  0.622    0.1 percentile
+#> 14 logLoss            0.667     0.688  0.712    0.1 percentile
+#> 15 prAUC              0.468     0.602  0.740    0.1 percentile
 if (FALSE) {
 outcome_colname <- "dx"
 run_ml(otu_mini_bin, "rf", outcome_colname = "dx") %>%
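The warning above fires when bootstrap_times is small; a sketch of a call sized to avoid it, where the bootstrap_times value and the argument defaults are assumptions rather than something shown in this diff:

library(mikropml)
result <- run_ml(otu_mini_bin, "rf", outcome_colname = "dx", seed = 2019)
bootstrap_performance(result,
  outcome_colname = "dx",
  bootstrap_times = 10000, # >= 1000 resamples avoids the warning above
  alpha = 0.1              # .alpha = 0.1 gives 90% CIs, as in the example
)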

diff --git a/docs/dev/reference/get_perf_metric_fn.html b/docs/dev/reference/get_perf_metric_fn.html
index f8e7828b..b31eaf84 100644
--- a/docs/dev/reference/get_perf_metric_fn.html
+++ b/docs/dev/reference/get_perf_metric_fn.html
@@ -93,7 +93,7 @@

Examples

 #>     data$obs <- factor(data$obs, levels = lev)
 #>     postResample(data[, "pred"], data[, "obs"])
 #> }
-#> <bytecode: 0x7fc310e7de38>
+#> <bytecode: 0x7fcb762258d0>
 #> <environment: namespace:caret>
 get_perf_metric_fn("binary")
 #> function (data, lev = NULL, model = NULL)

@@ -151,7 +151,7 @@

Examples

 #>     stats <- stats[c(stat_list)]
 #>     return(stats)
 #> }
-#> <bytecode: 0x7fc30e979690>
+#> <bytecode: 0x7fcb726cad18>
 #> <environment: namespace:caret>
 get_perf_metric_fn("multiclass")
 #> function (data, lev = NULL, model = NULL)

@@ -209,7 +209,7 @@

Examples

 #>     stats <- stats[c(stat_list)]
 #>     return(stats)
 #> }
-#> <bytecode: 0x7fc30e979690>
+#> <bytecode: 0x7fcb726cad18>
 #> <environment: namespace:caret>
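Since these hunks only show the tail of each printed function, a minimal usage sketch may help: get_perf_metric_fn() returns a caret-style summary function with the signature f(data, lev, model), where data holds obs and pred columns. The toy data frame below is hypothetical.

library(mikropml)

perf_fn <- get_perf_metric_fn("continuous") # wraps caret::defaultSummary
toy <- data.frame(
  obs  = c(1.0, 2.0, 3.0), # hypothetical observed values
  pred = c(1.1, 1.9, 3.2)  # hypothetical predictions
)
perf_fn(toy) # returns RMSE, Rsquared, and MAE via caret::postResample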

z+qS6*oDxXC(BcRH1f<&QR+<t!wkq&(0UBPEV2??3ffB%_rjgp?$H5UL>SDyMgGkoEx-9S z(5)TcoYyS;#&&OhtfM`frpTXKtSsf|>)0!@Mvo)syEfJ* zH|$s3DUO{N1NsZO?D6Q!r*3ltH>ovw9~iY?*EYGfQo!h2(e#(PJ-%P=%{ZH8UQ4-^ z$x0-n)=ljN$2iVR&faGWab7kmsr1GC{AIJ>Q z!Aj7Y<`@R;zopo2#@F9pCB7X?uDIQpZPc7?+9U9wJ0C?fW6;z0kkiHkX&QpPH)mIT zlwHH{3z119lIc0;#D7x$mTYwgC2(3~^hPl#~=n5g=n*{d&ch zs9)t}|NZgaBXo7x&{i}`OoSC1`|jO{YS}W?j|t35F4BK6MY~7^1O$ZM(}ph{^$-Ux zstn|*hNtY&uS&pbk@Zec!T>I%3JmIvQPxiG|i`bDO7B2^I;hRK8k~imQ-Or_JVRC>3qB4G1th&#YINp zN84hF(m8CUtM5*Z2Jh`p`^2wZ1EkLkGHG6qbuc27pfx1wmN~C;WJ#WXusf}5Tm?2S z+7jW|`cmSr=B6h6mwURZAeot$ALKJ!U7X&m-u2YCBKRP7%MeIEkj897tdWrkj~{}{ zp7OP8&wyIQ#)a8SLdKu&SpKWOK6|s=h5(6_@Fc+fi;;jo96esXoYuJr`*>u~7+vq= zL!@{^pl$p8#Rry_pMm826=2YIMsd|0&q7EtGAr6YR$_s+!?WMOBLkW9*+8bkLWV}K|nWRaI z6OoGMK<;~Lg$L2|;9Ml@oW>;h+Z`WPa8*tpm|RG`$#+Aq)t$jXq^#Tc>UY3xYXDsE z1Ro5t!;xY2_Ey~0+tTam*GqF*T5+k) z^{f=_z1n9~o{HV${S~p7g5DPo?-JiI4+)sMb5buuZFW0MgBW1o3(;=#)}J~@xLG1bSCpFs~t5*Xg-$H(W- zb0x&ZF?fg@C-_X2V0lM6Z5C~hyKYMPo6icB4azC6V_a8=voOp?pI#j9EOjc1@hGug znHxPzKw=gwXZ#mPMcw(2h%u0b|0(j~lmwvA2%41rkLT2Q0!VjlX3jGwCPnJ!K>?13 z#18r(=3(5HoDeG&AX#{9^>1w(st+!zM1!v(35YPJ7EQ2bmq2mEe!k-#6T!~TPEuOB z1tbzYA^Q3&h59hzI#yDULLtz3F^Hx@{d2T7ko#XkcK;a@di2s0WgF{^PFDBo!%`qO z7(+l2c=#Q23A*?o5p5@n!LJ7XJ8@!%kyNoz{u#o(VTC)DbLu`c5-vn;Q1;W+&WYz* zO>_r}QEv_p6iPl=k=kzUS(n6Gqhr;N3Z+H8>x|{^$h{sOyj-DY{~lCsqK^*(JSOn0 z)wQ)sAo-yOg@uO$ zc1%Y{$4Glxc(2ofw%K=3(NX@zDP6a^a-~HGY2x#P;q69libP9^JI@l;Q!2dbYzlG) z#|q3-4nGDBKQAKUy~EBNh~$Q_f?(q= z2Qm^g3e02wDKnsgo`>r{wL0uSi*>m6##;Sh4Ps#biOE6ikPvI;P%-D%idcl+U1+YnQ5`Ay}6tjp!5QD zO;nW~Mo|_xixkgUw$Y!g;iyK3L6Po#9AF<~SZl->hy`D^Vl++;?H+HTKQBPNjfJ|m z(PVE483jezNC;Hw2t*Jh?rfSHj!;p>LbZxeXuF%NdSc0yi$}dSN7c4lnU$>{TkHPcx!pHF07;gtYjJBZZZgj%zS*R)Q?7XL6E@67Rg`!*hMWsvfwg#xzh z=;yyC-o?d%k}RsKXcw6Iq&*r2Y8e7Q5;;tk{`7YiV|H19(4s*fL^TU|L=bJ$Aq_GG z*BKbFpv?iF)$c%fW^G9Qf{fT~Ia>>@Ob~d~b(-`08Y*}e6iz5FB7X@mW{MwM;uY?Y z7{GhZ;JSB!l@Y_(9G;;eLnk0M_5ibw(5}`LHoeAPANnKuS770j?SLZIuYb%H#)cfQ z<=HUXD^RqYGqowuLT-|bfQ{q*qd_WDz0ND+#!zOkYu%2RCtW;G@X2ySr$j~EX)9Gnm&kx>91aMg!x>aZN3SDg^poS#qBqE^_2G5tM~m3 z>^)o|Vt)Sd@O9Isxds;@c|#bxEI9LZxxc%EJkDCcJ#6I(fn-MCTUZ6(T`?z4p@S3i z!>pHQjR1k%AR+_@ab>Z95k1X5T0l%iWm55f_Ca`6?(`O2j|c5ds?fHTbh8@gVlww3 zh5fg}kE9NpP~o`LA|Dyig`HN$!zC(p9``Bu5|pnzJU<;(HoYxPiI4c%V zW7WzPUr?bb}j;?dSQ$}Dd#?N<+{wT}= zKLG0~G}JC+zkSOieUaE1{n4F#V4MwZ^)TVRM&@=R!A;#J1uP}DdrrmU9gWV0>y@sL zlzBCJ$DZe|rK-p{bAV7QZFho`o;v6u_vzphL*HVHqTiUW^Zk9F4rVhmK`|C!lC z@F$7c2wWrm9sk_bb$Lq}#7Vjx-NL{)+@XlS@A6-7B98qZKL3+7Z5Q>XrlxEtvX1wb zXbuk#!wN*!2w<44u~O8$_rEptF>Sxtd)@>~lC&Pv_GI-eUteFPJGzpR?g|PD*|)I@ zGE{x@IU}QBYik?HMea7|y1e+encyla|3Rk8v7P>Do`hhX28impW__ukZYM&K5^-we z1xjjNxH94{P-U7o3W`fJDsY{a@cMlIK#yg!%t%Az(}|)<5h!N_tUcc0GI|b*%z*5P z_#OvrvLr|pD-CDceA0;Dtjee~&1*fT2Ie?skbMARHfh`R#De@rPynYUl!j*^8!!Vo zh?ekf6Wv(43H0oUfLBdFv8JCjuMp~5Q9v5MoIfy`3(5=A*|Vd+yhYn|*Z)xO-n)12 zrx1`xUmfSZe(!4ti)9D~vLUchR4zo%5yWGp;3`3eODzLa6B|4xvE`l22{i3faY6x7 z;UOZ?zq6LizsS{08ILF?`5Ql1O5C_e0)*$C!rXUoc|`)NYl*nL6RlY&@0wB3(Xo3} zFhW5HUchRyR~r>YVbtQfyS2c@R)w_$LAE(_GwfXX`Ia&aLwFME-3>%0sTRm{<9%hi zAc&E@?jCamzntFJrwCoVeg8UpXpxk+^G(d- zK8A+bXrtwtMoVF$`Hh2RvQ2=j%ulEaL%em-RSI(9V`G!j#B%Sy!J};Y=FOW5uUG$v zMa^Qs{x)6zR@VG~scM1@#6*B(uN;aASOLi{u>fI)4U#o**b%VVsM=M++qk&vb$VCv zEQ=*(YJX*1`J>HIV*5M(I)gOTaqxr0zhISlKT2vN2;MsY4BA$XAn8F^!4qHy_*iD} zJ_Wn2*CR;X0+^l8y<@Im6Nt3l6Y!_C^9L*lfBZq(OX;7n#q{*^$ga4_0A1dKJCp{> z$Fe}jz@)o8r;(xpocTxx1w4-u&&!Si`~E5(bv#x0ca7Qk>A#jwj{crwgAt8*22T_G z_AN63G&jwcgSM3uKo5ztwcPcFt%=IS7=Rw=x@jfz8i6K|`nTyS*7AAqEpkQ>EjJ5r zahF{6m;uz96;Hdup#J>z%W2PhxVM-%_o!N74BMTefU(deOdp~bD^c=M9;V>{8>w86 
zA%+c7vy&HiNUIp;klnSYZ`cx~NipCmszT(oKhz_`0s;ot>~O4EK+^2WY>IkTCm#2O z3Szx|OiG5baX#USzF%ThJNtkC2A$-y(Xr)of|{8_#-wTZ;jXMf%c()1#44HgLnnnM z$1N-@*0f0F%t4iZZoUVHsj9N{25K!oWdKC;(+-LYM5Xw9rgnd3BU8ZNlA$j&XTSa- z<+Fuj!D_$+Y#bO&HYn$ne&dIl004lBi8Sq^TqyfQ&L7KP=P={jp`;S9LI2B!V)fNV zgBG1=T)s-!0)Lzq2fw7%5@EVFk0%LtEnen6@>cg%lOdG(%?d#{K92J~QXaWGuL;!He86d!+JBhBHIZvIM{!cG0vu9_UgypP5e#;weC2D(KTdd`eVF=~S-Y zZ*eJ976zVKK%ns3Lxy{rBAA+{+Hl=~a9-kZ zP2d_`J70kv6DDxIyg{x9vbciVN0+j1`<zB!)55!f62|06%g9erLWTzdH~srt@-kgZGf8k8d%i73uI}; zENJq!$M4}gBKS_Xr&FVEUZo>B*#(}B>SipdEtApDPcMP4LWU?p+C0JtP&{p_;(%xD zbDVh7@n^=&ycztq(!ZwgUhM$f{toH}(Gb{GUJu*{*qL9(Ci zN~caDup%;Y@-~QcIo9JE{Iujyb4I!kXeeQk-6 z_e_l=9@u2JjCwzgwCaWCKIW;5u=rD=!VHwZNSHmyR(D0tp34wKZP;X>f9+j%?3Ha@<9cA*{Bz-a-wT?Fa8tuGcw65SNb z*Vx#I4CmwwBFPY7OhB-x{`~nh__+Z>Y}(t08m}T{bN<*rH2}>23|IntOf%GYz=@AC z6cQt$TLsB-k9Y1(1oN@|T&t=ReHxheH8StKL}pcFPlp9uIG)?p0zCi~6%|Vd0@+4F zKnoZNwdrj{?E@W^~~zJx>lGy zw0Y1c_R1z5nV-OVlFrBk#X}Vf)$RWA>;Cr(ZvOfIJ@5FN5p3#zVlHI!s9|Aov+M}M z&yCHB;KtD{9?60TN@6w)ZXzT;VhmIS{~;p(#U=Z1!?a13;v?Rz*-EFyZ73NAH=lR+ z>8DZN{jE@0H$XraTRJ(3ZZO{^i>=aVt+s&0o@};PK*Jy z1F>&HMNBdusx&KNNH^(2Y@xQmSwx<(hgXh}+z+lXs7m4q zfM}dkH(%1FDN=lMLjV1CaR2|1HU94xUi>W#CdWlIHZIO&(_964@hS*p3l`#NuyDS# zn-5r1U5z>_Fb)FE`CDRQdy5(FgtoLWcrO)sy6`1_IJj(4ehdrPB*{!@vxqUh`DHmG zwr+<79G-HvzO=R_ulgPn-e%I<%fdy{P{&7;r}W1w&<~_#`9l(CTKU9tDbjRrdT3tc z?ho1Ex075e?J}#*Ko`f}OaCKc{GabCV8d{&b(BkCH{T!vxSjl4prPB;5!!%EA?c6E z;aY-50k?~C)7QSe|Nf2l|Cgo0r@j-X<>MHj$q*_qK6#~ZAq3!TB9!PABLDmx7uglLyW%On^ zC@dst`j)=-EiW$4(H`|ka6@)mx#wtiq!N@KzSb#-eN$G$%)OPPHm&N6r%$hG$#;}2 zVTxx+%=m|n&`~v{8KH4e-v2*sro?Oz0pg(y0A9a5s0j^EnxpUGn5odyCT0UA7%tmb z1E>!k#(6WY&0qRb{;hZY9q7=H|3hEd8t2XNCrNDTy~EDVE(7v(E5JLyurLk)!63LG zZ32p+^570w)833ja2YKhN$RQ2ace-b{~Nw_j=Vy|pTWap0uWjNm=Wr&!2szcz$;{G zIB~{69UBM$GIS%$>{;0mIk-hD^lu7Jz2E8n@+C{xQ@~f~6QT6~lTd;Eccr2>1ahwt z_%EIhgj{-b<;2DN4PYvr1^GM?@b+EXn5{>_`CgA5`B3!m#}!aaP=lUEv2BJk6t7vq z4*;cf_y-W~2-Lwrwtj#tOc1OQOi17b;D*VB38}m4#bX=m%iw26?+*DV9_oIbWmp*$ z{*Rou01tTtFqrD|2M$7?X$^GDL9rW5=BcNyy51lnWCzgaFM65n=LtyUAu$8=4`p{o zLXe14AuKizx)b2N0f9iBm<{f5c+wk%#}0wp0uyVh$lX0XJq^kW(WV#3#q35$(ad6e z;>yOa$uAd3NuS+1RL!{WMdm)`G8V^*C*oRD``wwZWyI#XXKN%;scJdj63@8%DD4k> ztf}Ek=I^d*M|M4VksQUSCy&?jVc*|zh8x;~`vT9@5(8`Te0+Ao4BkTY<=qxv*J=YT z>aozJhl8kfz!>}oBwZzQZJNC1zi%ijDJ_O+Lfs0QouIZZ+5_%2RBZ#T1_U1D>w>XE z8V?f_Q*L|~fOYtbp`RX{ZC35tao~J{Uqd&AVeMP4!tYP^;|Y+m%1mslwuk6iQscSC z?=lnQwAdv%ureiNI6^0(FZ6W5z#9R@Eu8N(0ULGyw5{V^s3)+pu{mvAto%G~t*RLm zktil5=mO$!)qA*p@6d0*sc1{QHj3_gD7wwHpu=qpeW=!_6RMyz%nhZNJYHS0Ba$@X~p=oYAsmjJtwuejqW z53CcP01K9+W)@mRpkUsa`yd$nAtx;92%QC(MI65M?KK;L&{KIk#jZ9MSlknyYf&V| zK$nT;BBNmTygC*|v~v?OI^$<-=@wS=LG~GYHcP^KqX$mbaw{?2PZ)$ z6d9WL^gQ6_?+XwH za4l?=LSr%6+1Z_GNhB@F@_5@M`b8EN7QOZvodjj8x^6X?IUhWv;=Sy@Ugf2yr+;WC zt@t%9z1pv?iLh6q~W=-Vp%mpm0SFvz^R`xhl8jbrGo zQXspCI*P7NJ%t9o_$mfW3ygM`YtEiIvq$^r6`~i$APUac!z%Y*2WQ#_`cOK91TnR>qzspF~_8Gv7NuBZ5fvj9N^j0TR-n<_lX+ zqYp2&uNabuwV-G|;OO<9Kt@L9Nc}YXeg|>aj-Y4m?qpO)k6swiV*^8- zr|QWoug*9$!;VHZS13hd^_Jjja8teqb_8O~OuhFy8HK(lIYX*5Z;;K)r+@yK!^YlK zJpB#WB@-bo)LP?dn&Mmzheq}d@LIjWYdg)gR##WM8`>SfO}UTuw2vHW`24n7GVnp) zqVS}l-zkX|@}t+DE#LB@_3sEUgW-fUSYpCM+Uvnlcw@2@TS?vQYx)yVmdw?W@3eyS z7Zwo_9e}>3*H#3UB(;Y>wWgjG4TtI~*p?8f1KWg)Zj=%cb$FmATnSjzp*JQ(zzd8z z?CocVA|YIx$FjS`r&wcvq=Iie5deylWL&{6F^vaG61rFN5`)c3)lmZs z9va^zz=8TU=C7K+R?9XPHnO2bwM+nt5-L!pfXUznD`i<{P%e;!X5oeIzs`FBpB@3t zY~CaKR2#H9p++4nCTLPf2m+JL+w-UH1m1!(6H=u(3a;67KaxlpIW5o|oD9*!!iTpd z(A`M!4tc2Bpx`iPy&CO7n&4YfJf9ZIT$pHRG7_bD@CO(Y79T%l_u6+*rZqS@)V!KS zE7|4D2L~Ts2vT=IdV($Fca%fR(2CjR0Z-wW4D=-OqW*i-`XBE2=Ot>Q2b(T`sBEZJ 
[GIT binary patch data truncated: base85 delta for figure-html/feat_imp_plot-1.png]

diff --git a/docs/dev/pkgdown.yml b/docs/dev/pkgdown.yml
index 9acc9029..2cfd3319 100644
--- a/docs/dev/pkgdown.yml
+++ b/docs/dev/pkgdown.yml
@@ -7,7 +7,7 @@ articles:
   parallel: parallel.html
   preprocess: preprocess.html
   tuning: tuning.html
-last_built: 2023-02-01T18:07Z
+last_built: 2023-03-17T15:06Z
 urls:
   reference: http://www.schlosslab.org/mikropml/reference
   article: http://www.schlosslab.org/mikropml/articles
diff --git a/docs/dev/reference/bootstrap_performance.html b/docs/dev/reference/bootstrap_performance.html
index 97b9a7a7..4541a9e6 100644
--- a/docs/dev/reference/bootstrap_performance.html
+++ b/docs/dev/reference/bootstrap_performance.html
@@ -110,23 +110,23 @@

diff --git a/docs/dev/search.json b/docs/dev/search.json index 829cadb8..bd163184 100644 --- a/docs/dev/search.json +++ b/docs/dev/search.json @@ -1 +1 @@ -[{"path":"http://www.schlosslab.org/mikropml/dev/CODE_OF_CONDUCT.html","id":null,"dir":"","previous_headings":"","what":"Contributor Covenant Code of Conduct","title":"Contributor Covenant Code of Conduct","text":"document adapted Tidyverse Code Conduct.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CODE_OF_CONDUCT.html","id":"our-pledge","dir":"","previous_headings":"","what":"Our Pledge","title":"Contributor Covenant Code of Conduct","text":"members, contributors, leaders pledge make participation community harassment-free experience everyone, regardless age, body size, visible invisible disability, ethnicity, sex characteristics, gender identity expression, level experience, education, socio-economic status, nationality, personal appearance, race, religion, sexual identity orientation. pledge act interact ways contribute open, welcoming, diverse, inclusive, healthy community.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CODE_OF_CONDUCT.html","id":"our-standards","dir":"","previous_headings":"","what":"Our Standards","title":"Contributor Covenant Code of Conduct","text":"Examples behavior contributes positive environment community include: Demonstrating empathy kindness toward people respectful differing opinions, viewpoints, experiences Giving gracefully accepting constructive feedback Accepting responsibility apologizing affected mistakes, learning experience Focusing best just us individuals, overall community Examples unacceptable behavior include: use sexualized language imagery, sexual attention advances kind Trolling, insulting derogatory comments, personal political attacks Public private harassment Publishing others’ private information, physical email address, without explicit permission conduct reasonably considered inappropriate professional setting","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CODE_OF_CONDUCT.html","id":"enforcement-responsibilities","dir":"","previous_headings":"","what":"Enforcement Responsibilities","title":"Contributor Covenant Code of Conduct","text":"Community leaders responsible clarifying enforcing standards acceptable behavior take appropriate fair corrective action response behavior deem inappropriate, threatening, offensive, harmful. Community leaders right responsibility remove, edit, reject comments, commits, code, wiki edits, issues, contributions aligned Code Conduct, communicate reasons moderation decisions appropriate.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CODE_OF_CONDUCT.html","id":"scope","dir":"","previous_headings":"","what":"Scope","title":"Contributor Covenant Code of Conduct","text":"Code Conduct applies within community spaces, also applies individual officially representing community public spaces. Examples representing community include using official e-mail address, posting via official social media account, acting appointed representative online offline event.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CODE_OF_CONDUCT.html","id":"enforcement","dir":"","previous_headings":"","what":"Enforcement","title":"Contributor Covenant Code of Conduct","text":"Instances abusive, harassing, otherwise unacceptable behavior may reported community leaders responsible enforcement [INSERT CONTACT METHOD]. complaints reviewed investigated promptly fairly. 
community leaders obligated respect privacy security reporter incident.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CODE_OF_CONDUCT.html","id":"enforcement-guidelines","dir":"","previous_headings":"","what":"Enforcement Guidelines","title":"Contributor Covenant Code of Conduct","text":"Community leaders follow Community Impact Guidelines determining consequences action deem violation Code Conduct:","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CODE_OF_CONDUCT.html","id":"id_1-correction","dir":"","previous_headings":"Enforcement Guidelines","what":"1. Correction","title":"Contributor Covenant Code of Conduct","text":"Community Impact: Use inappropriate language behavior deemed unprofessional unwelcome community. Consequence: private, written warning community leaders, providing clarity around nature violation explanation behavior inappropriate. public apology may requested.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CODE_OF_CONDUCT.html","id":"id_2-warning","dir":"","previous_headings":"Enforcement Guidelines","what":"2. Warning","title":"Contributor Covenant Code of Conduct","text":"Community Impact: violation single incident series actions. Consequence: warning consequences continued behavior. interaction people involved, including unsolicited interaction enforcing Code Conduct, specified period time. includes avoiding interactions community spaces well external channels like social media. Violating terms may lead temporary permanent ban.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CODE_OF_CONDUCT.html","id":"id_3-temporary-ban","dir":"","previous_headings":"Enforcement Guidelines","what":"3. Temporary Ban","title":"Contributor Covenant Code of Conduct","text":"Community Impact: serious violation community standards, including sustained inappropriate behavior. Consequence: temporary ban sort interaction public communication community specified period time. public private interaction people involved, including unsolicited interaction enforcing Code Conduct, allowed period. Violating terms may lead permanent ban.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CODE_OF_CONDUCT.html","id":"id_4-permanent-ban","dir":"","previous_headings":"Enforcement Guidelines","what":"4. Permanent Ban","title":"Contributor Covenant Code of Conduct","text":"Community Impact: Demonstrating pattern violation community standards, including sustained inappropriate behavior, harassment individual, aggression toward disparagement classes individuals. Consequence: permanent ban sort public interaction within community.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CODE_OF_CONDUCT.html","id":"attribution","dir":"","previous_headings":"","what":"Attribution","title":"Contributor Covenant Code of Conduct","text":"Code Conduct adapted Contributor Covenant, version 2.0, available https://www.contributor-covenant.org/version/2/0/ code_of_conduct.html. Community Impact Guidelines inspired Mozilla’s code conduct enforcement ladder. answers common questions code conduct, see FAQ https://www.contributor-covenant.org/faq. 
Translations available https:// www.contributor-covenant.org/translations.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CONTRIBUTING.html","id":null,"dir":"","previous_headings":"","what":"Contributing to mikropml","title":"Contributing to mikropml","text":"document adapted Tidyverse Contributing guide.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CONTRIBUTING.html","id":"fixing-typos","dir":"","previous_headings":"","what":"Fixing typos","title":"Contributing to mikropml","text":"can fix typos, spelling mistakes, grammatical errors documentation directly using GitHub web interface, long changes made source file. generally means ’ll need edit roxygen2 comments .R, .Rd file. can find .R file generates .Rd reading comment first line.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CONTRIBUTING.html","id":"bigger-changes","dir":"","previous_headings":"","what":"Bigger changes","title":"Contributing to mikropml","text":"want make bigger change, ’s good idea first file issue make sure someone team agrees ’s needed. ’ve found bug, please file issue illustrates bug minimal reprex (also help write unit test, needed).","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CONTRIBUTING.html","id":"pull-request-process","dir":"","previous_headings":"Bigger changes","what":"Pull request process","title":"Contributing to mikropml","text":"Fork package clone onto computer. haven’t done , recommend using usethis::create_from_github(\"SchlossLab/mikropml\", fork = TRUE). Install development dependences devtools::install_dev_deps(), make sure package passes R CMD check running devtools::check(). R CMD check doesn’t pass cleanly, ’s good idea ask help continuing. Create Git branch pull request (PR). recommend using usethis::pr_init(\"brief-description--change\"). Make changes, commit git, create PR running usethis::pr_push(), following prompts browser. title PR briefly describe change. body PR contain Fixes #issue-number. user-facing changes, add bullet top NEWS.md (.e. just first header). Follow style described https://style.tidyverse.org/news.html.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CONTRIBUTING.html","id":"code-style","dir":"","previous_headings":"Bigger changes","what":"Code style","title":"Contributing to mikropml","text":"New code follow tidyverse style guide. can use styler package apply styles, please don’t restyle code nothing PR. use roxygen2, Markdown syntax, documentation. use testthat unit tests. Contributions test cases included easier accept.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CONTRIBUTING.html","id":"code-of-conduct","dir":"","previous_headings":"","what":"Code of Conduct","title":"Contributing to mikropml","text":"Please note mikropml project released Contributor Code Conduct. contributing project agree abide terms.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/LICENSE.html","id":null,"dir":"","previous_headings":"","what":"MIT License","title":"MIT License","text":"Copyright (c) 2019-2021 Begüm D. Topçuoğlu, Zena Lapp, Kelly L. Sovacool, Evan Snitkin, Jenna Wiens, Patrick D. 
Schloss Permission hereby granted, free charge, person obtaining copy software associated documentation files (“Software”), deal Software without restriction, including without limitation rights use, copy, modify, merge, publish, distribute, sublicense, /sell copies Software, permit persons Software furnished , subject following conditions: copyright notice permission notice shall included copies substantial portions Software. SOFTWARE PROVIDED “”, WITHOUT WARRANTY KIND, EXPRESS IMPLIED, INCLUDING LIMITED WARRANTIES MERCHANTABILITY, FITNESS PARTICULAR PURPOSE NONINFRINGEMENT. EVENT SHALL AUTHORS COPYRIGHT HOLDERS LIABLE CLAIM, DAMAGES LIABILITY, WHETHER ACTION CONTRACT, TORT OTHERWISE, ARISING , CONNECTION SOFTWARE USE DEALINGS SOFTWARE.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/SUPPORT.html","id":null,"dir":"","previous_headings":"","what":"Getting help with mikropml","title":"Getting help with mikropml","text":"Thanks using mikropml! filing issue, places explore pieces put together make process smooth possible.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/SUPPORT.html","id":"make-a-reprex","dir":"","previous_headings":"","what":"Make a reprex","title":"Getting help with mikropml","text":"Start making minimal reproducible example using reprex package. haven’t heard used reprex , ’re treat! Seriously, reprex make R-question-asking endeavors easier (pretty insane ROI five ten minutes ’ll take learn ’s ). additional reprex pointers, check Get help! section tidyverse site.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/SUPPORT.html","id":"where-to-ask","dir":"","previous_headings":"","what":"Where to ask?","title":"Getting help with mikropml","text":"Armed reprex, next step figure ask. ’s question: start community.rstudio.com, /StackOverflow. people answer questions. ’s bug: ’re right place, file issue. ’re sure: let community help figure ! problem bug feature request, can easily return report . opening new issue, sure search issues pull requests make sure bug hasn’t reported /already fixed development version. default, search pre-populated :issue :open. can edit qualifiers (e.g. :pr, :closed) needed. example, ’d simply remove :open search issues repo, open closed.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/SUPPORT.html","id":"what-happens-next","dir":"","previous_headings":"","what":"What happens next?","title":"Getting help with mikropml","text":"efficient possible, development tidyverse packages tends bursty, shouldn’t worry don’t get immediate response. Typically don’t look repo sufficient quantity issues accumulates, ’s burst intense activity focus efforts. makes development efficient avoids expensive context switching problems, cost taking longer get back . process makes good reprex particularly important might multiple months initial report start working . can’t reproduce bug, can’t fix !","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"its-running-so-slow","dir":"Articles","previous_headings":"","what":"It’s running so slow!","title":"Introduction to mikropml","text":"Since assume lot won’t read entire vignette, ’m going say beginning. run_ml() function running super slow, consider parallelizing. 
See vignette(\"parallel\") examples.","code":""},{"path":[]},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"the-input-data","dir":"Articles","previous_headings":"Understanding the inputs","what":"The input data","title":"Introduction to mikropml","text":"input data run_ml() dataframe row sample observation. One column (assumed first) outcome interest, columns features. package otu_mini_bin small example dataset mikropml. , dx outcome column (normal cancer), 10 features (Otu00001 Otu00010). 2 outcomes, performing binary classification majority examples . bottom, also briefly provide examples multi-class continuous outcomes. ’ll see, run way binary classification! feature columns amount Operational Taxonomic Unit (OTU) microbiome samples patients cancer without cancer. goal predict dx, stands diagnosis. diagnosis can cancer based individual’s microbiome. need understand exactly means, ’re interested can read original paper (Topçuoğlu et al. 2020). real machine learning applications ’ll need use features, purposes vignette ’ll stick example dataset everything runs faster.","code":"# install.packages(\"devtools\") # devtools::install_github(\"SchlossLab/mikropml\") library(mikropml) head(otu_mini_bin) #> dx Otu00001 Otu00002 Otu00003 Otu00004 Otu00005 Otu00006 Otu00007 #> 1 normal 350 268 213 1 208 230 70 #> 2 normal 568 1320 13 293 671 103 48 #> 3 normal 151 756 802 556 145 271 57 #> 4 normal 299 30 1018 0 25 99 75 #> 5 normal 1409 174 0 3 2 1136 296 #> 6 normal 167 712 213 4 332 534 139 #> Otu00008 Otu00009 Otu00010 #> 1 230 235 64 #> 2 204 119 115 #> 3 176 37 710 #> 4 78 255 197 #> 5 1 537 533 #> 6 251 155 122"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"the-methods-we-support","dir":"Articles","previous_headings":"Understanding the inputs","what":"The methods we support","title":"Introduction to mikropml","text":"methods use supported great ML wrapper package caret, use train machine learning models. methods tested (backend packages) : Logistic/multiclass/linear regression (\"glmnet\") Random forest (\"rf\") Decision tree (\"rpart2\") Support vector machine radial basis kernel (\"svmRadial\") xgboost (\"xgbTree\") documentation methods, well many others, can look available models (see list tag). vetted models used caret, function general enough others might work. can’t promise can help models, feel free [start new discussion GitHub]https://github.com/SchlossLab/mikropml/discussions) questions models might able help. first focus glmnet, default implementation L2-regularized logistic regression. cover examples towards end.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"before-running-ml","dir":"Articles","previous_headings":"","what":"Before running ML","title":"Introduction to mikropml","text":"execute run_ml(), consider preprocessing data, either preprocess_data() function. can learn preprocessing vignette: vignette(\"preprocess\").","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"the-simplest-way-to-run_ml","dir":"Articles","previous_headings":"","what":"The simplest way to run_ml()","title":"Introduction to mikropml","text":"mentioned , minimal input dataset (dataset) machine learning model want use (method). may also want provide: outcome column name. default run_ml() pick first column, ’s best practice specify column name explicitly. seed results reproducible, get results see (.e train/test split). 
Say want use logistic regression, method use glmnet. , run ML pipeline : ’ll notice things: takes little run. parameters use. message stating ‘dx’ used outcome column. want, ’s nice sanity check! warning. Don’t worry warning right now - just means hyperparameters aren’t good fit - ’re interested learning , see vignette(\"tuning\"). Now, let’s dig output bit. results list 4 things: trained_model trained model caret. bunch info won’t get , can learn caret::train() documentation. test_data partition dataset used testing. machine learning, ’s always important held-test dataset used training stage. pipeline using run_ml() split data training testing sets. training data used build model (e.g. tune hyperparameters, learn data) test data used evaluate well model performs. performance dataframe (mainly) performance metrics (1 column cross-validation performance metric, several test performance metrics, 2 columns end ML method seed): using logistic regression binary classification, area receiver-operator characteristic curve (AUC) useful metric evaluate model performance. , ’s default use mikropml. However, crucial evaluate model performance using multiple metrics. can find information performance metrics use package. cv_metric_AUC AUC cross-validation folds training data. gives us sense well model performs training data. columns performance metrics test data — data wasn’t used build model. , can see AUC test data much 0.5, suggesting model predict much better chance, model overfit cross-validation AUC (cv_metric_AUC, measured training) much higher testing AUC. isn’t surprising since ’re using features example dataset, don’t discouraged. default option also provides number performance metrics might interested , including area precision-recall curve (prAUC). last columns results$performance method seed (set one) help combining results multiple runs (see vignette(\"parallel\")). feature_importance information feature importance values find_feature_importance = TRUE (default FALSE). 
Since used defaults, ’s nothing :","code":"results <- run_ml(otu_mini_bin, \"glmnet\", outcome_colname = \"dx\", seed = 2019 ) names(results) #> [1] \"trained_model\" \"test_data\" \"performance\" #> [4] \"feature_importance\" names(results$trained_model) #> [1] \"method\" \"modelInfo\" \"modelType\" \"results\" \"pred\" #> [6] \"bestTune\" \"call\" \"dots\" \"metric\" \"control\" #> [11] \"finalModel\" \"preProcess\" \"trainingData\" \"ptype\" \"resample\" #> [16] \"resampledCM\" \"perfNames\" \"maximize\" \"yLimits\" \"times\" #> [21] \"levels\" head(results$test_data) #> dx Otu00009 Otu00005 Otu00010 Otu00001 Otu00008 Otu00004 Otu00003 #> 9 normal 119 142 248 256 363 112 871 #> 14 normal 60 209 70 86 96 1 123 #> 16 cancer 205 5 180 1668 95 22 3 #> 17 normal 188 356 107 381 1035 915 315 #> 27 normal 4 21 161 7 1 27 8 #> 30 normal 13 166 5 31 33 5 58 #> Otu00002 Otu00007 Otu00006 #> 9 995 0 137 #> 14 426 54 40 #> 16 20 590 570 #> 17 357 253 341 #> 27 25 322 5 #> 30 179 6 30 results$performance #> # A tibble: 1 × 17 #> cv_metric_AUC logLoss AUC prAUC Accuracy Kappa F1 Sensi…¹ Speci…² Pos_P…³ #> #> 1 0.622 0.684 0.647 0.606 0.590 0.179 0.6 0.6 0.579 0.6 #> # … with 7 more variables: Neg_Pred_Value , Precision , Recall , #> # Detection_Rate , Balanced_Accuracy , method , seed , #> # and abbreviated variable names ¹​Sensitivity, ²​Specificity, ³​Pos_Pred_Value results$feature_importance #> [1] \"Skipped feature importance\""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"customizing-parameters","dir":"Articles","previous_headings":"","what":"Customizing parameters","title":"Introduction to mikropml","text":"arguments allow change execute run_ml(). ’ve chosen reasonable defaults , encourage change think something else better data.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"changing-kfold-cv_times-and-training_frac","dir":"Articles","previous_headings":"Customizing parameters","what":"Changing kfold, cv_times, and training_frac","title":"Introduction to mikropml","text":"kfold: number folds run cross-validation (default: 5). cv_times: number times run repeated cross-validation (default: 100). training_frac: fraction data training set (default: 0.8). rest data used testing. ’s example change default parameters: might noticed one ran faster — ’s reduced kfold cv_times. okay testing things may even necessary smaller datasets. general may better larger numbers parameters; think defaults good starting point (Topçuoğlu et al. 2020).","code":"results_custom <- run_ml(otu_mini_bin, \"glmnet\", kfold = 2, cv_times = 5, training_frac = 0.5, seed = 2019 ) #> Using 'dx' as the outcome column. #> Training the model... #> Loading required package: ggplot2 #> Loading required package: lattice #> #> Attaching package: 'caret' #> The following object is masked from 'package:mikropml': #> #> compare_models #> Warning in (function (w) : `caret::train()` issued the following warning: #> #> simpleWarning in nominalTrainWorkflow(x = x, y = y, wts = weights, info = trainInfo, : There were missing values in resampled performance measures. #> #> This warning usually means that the model didn't converge in some cross-validation folds because it is predicting something close to a constant. As a result, certain performance metrics can't be calculated. This suggests that some of the hyperparameters chosen are doing very poorly. 
#> Training complete."},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"custom-training-indices","dir":"Articles","previous_headings":"Customizing parameters > Changing kfold, cv_times, and training_frac","what":"Custom training indices","title":"Introduction to mikropml","text":"training_frac fraction 0 1, random sample observations dataset chosen training set satisfy training_frac using get_partition_indices(). However, cases might wish control exactly observations training set. can instead assign training_frac vector indices correspond rows dataset go training set (remaining sequences go testing set). ’s example ~80% data training set:","code":"n_obs <- otu_mini_bin %>% nrow() training_size <- 0.8 * n_obs training_rows <- sample(n_obs, training_size) results_custom_train <- run_ml(otu_mini_bin, \"glmnet\", kfold = 2, cv_times = 5, training_frac = training_rows, seed = 2019 ) #> Using 'dx' as the outcome column. #> Using the custom training set indices provided by `training_frac`. #> The fraction of data in the training set will be 0.8 #> Training the model... #> Training complete."},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"changing-the-performance-metric","dir":"Articles","previous_headings":"Customizing parameters","what":"Changing the performance metric","title":"Introduction to mikropml","text":"two arguments allow change performance metric use model evaluation, performance metrics calculate using test data. perf_metric_function function used calculate performance metrics. default classification caret::multiClassSummary() default regression caret::defaultSummary(). ’d suggest changing unless really know ’re . perf_metric_name column name output perf_metric_function. chose reasonable defaults (AUC binary, logLoss multiclass, RMSE continuous), default functions calculate bunch different performance metrics, can choose different one ’d like. default performance metrics available classification : default performance metrics available regression : ’s example using prAUC instead AUC: ’ll see cross-validation metric prAUC, instead default AUC:","code":"#> [1] \"logLoss\" \"AUC\" \"prAUC\" #> [4] \"Accuracy\" \"Kappa\" \"Mean_F1\" #> [7] \"Mean_Sensitivity\" \"Mean_Specificity\" \"Mean_Pos_Pred_Value\" #> [10] \"Mean_Neg_Pred_Value\" \"Mean_Precision\" \"Mean_Recall\" #> [13] \"Mean_Detection_Rate\" \"Mean_Balanced_Accuracy\" #> [1] \"RMSE\" \"Rsquared\" \"MAE\" results_pr <- run_ml(otu_mini_bin, \"glmnet\", cv_times = 5, perf_metric_name = \"prAUC\", seed = 2019 ) #> Using 'dx' as the outcome column. #> Training the model... #> Warning in (function (w) : `caret::train()` issued the following warning: #> #> simpleWarning in nominalTrainWorkflow(x = x, y = y, wts = weights, info = trainInfo, : There were missing values in resampled performance measures. #> #> This warning usually means that the model didn't converge in some cross-validation folds because it is predicting something close to a constant. As a result, certain performance metrics can't be calculated. This suggests that some of the hyperparameters chosen are doing very poorly. #> Training complete. 
results_pr$performance #> # A tibble: 1 × 17 #> cv_metric_p…¹ logLoss AUC prAUC Accur…² Kappa F1 Sensi…³ Speci…⁴ Pos_P…⁵ #> #> 1 0.577 0.691 0.663 0.605 0.538 0.0539 0.690 1 0.0526 0.526 #> # … with 7 more variables: Neg_Pred_Value , Precision , Recall , #> # Detection_Rate , Balanced_Accuracy , method , seed , #> # and abbreviated variable names ¹​cv_metric_prAUC, ²​Accuracy, ³​Sensitivity, #> # ⁴​Specificity, ⁵​Pos_Pred_Value"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"using-groups","dir":"Articles","previous_headings":"Customizing parameters","what":"Using groups","title":"Introduction to mikropml","text":"optional groups vector groups keep together splitting data train test sets cross-validation. Sometimes ’s important split data based grouping instead just randomly. allows control similarities within groups don’t want skew predictions (.e. batch effects). example, biological data may samples collected multiple hospitals, might like keep observations hospital partition. ’s example split data train/test sets based groups: one difference run_ml() report much data training set run code chunk. can little finicky depending many samples groups . won’t exactly specify training_frac, since include one group either training set test set.","code":"# make random groups set.seed(2019) grps <- sample(LETTERS[1:8], nrow(otu_mini_bin), replace = TRUE) results_grp <- run_ml(otu_mini_bin, \"glmnet\", cv_times = 2, training_frac = 0.8, groups = grps, seed = 2019 ) #> Using 'dx' as the outcome column. #> Fraction of data in the training set: 0.795 #> Groups in the training set: A B D F G H #> Groups in the testing set: C E #> Groups will be kept together in CV partitions #> Training the model... #> Training complete."},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"controlling-how-groups-are-assigned-to-partitions","dir":"Articles","previous_headings":"Customizing parameters > Using groups","what":"Controlling how groups are assigned to partitions","title":"Introduction to mikropml","text":"use groups parameter , default run_ml() assume want observations group placed partition train/test split. makes sense want use groups control batch effects. However, cases might prefer control exactly groups end partition, might even okay observations group assigned different partitions. example, say want groups B used training, C D testing, don’t preference happens groups. can give group_partitions parameter named list specify groups go training set go testing set. case, observations & B used training, C & D used testing, remaining groups randomly assigned one satisfy training_frac closely possible. another scenario, maybe want groups F used training, also want allow observations selected training F used testing: need even control , take look setting custom training indices. might also prefer provide train control scheme cross_val parameter run_ml().","code":"results_grp_part <- run_ml(otu_mini_bin, \"glmnet\", cv_times = 2, training_frac = 0.8, groups = grps, group_partitions = list( train = c(\"A\", \"B\"), test = c(\"C\", \"D\") ), seed = 2019 ) #> Using 'dx' as the outcome column. #> Fraction of data in the training set: 0.785 #> Groups in the training set: A B E F G H #> Groups in the testing set: C D #> Groups will not be kept together in CV partitions because the number of groups in the training set is not larger than `kfold` #> Training the model... #> Training complete. 
results_grp_trainA <- run_ml(otu_mini_bin, \"glmnet\", cv_times = 2, kfold = 2, training_frac = 0.5, groups = grps, group_partitions = list( train = c(\"A\", \"B\", \"C\", \"D\", \"E\", \"F\"), test = c(\"A\", \"B\", \"C\", \"D\", \"E\", \"F\", \"G\", \"H\") ), seed = 2019 ) #> Using 'dx' as the outcome column. #> Fraction of data in the training set: 0.5 #> Groups in the training set: A B C D E F #> Groups in the testing set: A B C D E F G H #> Groups will be kept together in CV partitions #> Training the model... #> Training complete."},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"more-arguments","dir":"Articles","previous_headings":"Customizing parameters","what":"More arguments","title":"Introduction to mikropml","text":"ML methods take optional arguments, ntree randomForest-based models case weights. additional arguments give run_ml() forwarded along caret::train() can leverage options.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"case-weights","dir":"Articles","previous_headings":"Customizing parameters > More arguments","what":"Case weights","title":"Introduction to mikropml","text":"want use case weights, also need use custom indices training data (.e. perform partition run_ml() ). ’s one way weights calculated proportion class data set, ~70% data training set: See caret docs list models accept case weights.","code":"set.seed(20221016) library(dplyr) train_set_indices <- get_partition_indices(otu_mini_bin %>% pull(dx), training_frac = 0.70 ) case_weights_dat <- otu_mini_bin %>% count(dx) %>% mutate(p = n / sum(n)) %>% select(dx, p) %>% right_join(otu_mini_bin, by = \"dx\") %>% select(-starts_with(\"Otu\")) %>% mutate( row_num = row_number(), in_train = row_num %in% train_set_indices ) %>% filter(in_train) #> Warning in right_join(., otu_mini_bin, by = \"dx\"): Each row in `x` is expected to match at most 1 row in `y`. #> ℹ Row 1 of `x` matches multiple rows. #> ℹ If multiple matches are expected, set `multiple = \"all\"` to silence this #> warning. head(case_weights_dat) #> dx p row_num in_train #> 1 cancer 0.49 1 TRUE #> 2 cancer 0.49 2 TRUE #> 3 cancer 0.49 3 TRUE #> 4 cancer 0.49 4 TRUE #> 5 cancer 0.49 5 TRUE #> 6 cancer 0.49 6 TRUE tail(case_weights_dat) #> dx p row_num in_train #> 136 normal 0.51 194 TRUE #> 137 normal 0.51 195 TRUE #> 138 normal 0.51 196 TRUE #> 139 normal 0.51 197 TRUE #> 140 normal 0.51 198 TRUE #> 141 normal 0.51 200 TRUE nrow(case_weights_dat) / nrow(otu_mini_bin) #> [1] 0.705 results_weighted <- run_ml(otu_mini_bin, \"glmnet\", outcome_colname = \"dx\", seed = 2019, training_frac = case_weights_dat %>% pull(row_num), weights = case_weights_dat %>% pull(p) )"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"finding-feature-importance","dir":"Articles","previous_headings":"","what":"Finding feature importance","title":"Introduction to mikropml","text":"find features contributing predictive power, can use find_feature_importance = TRUE. use permutation importance determine feature importance described (Topçuoğlu et al. 2020). Briefly, permutes features individually (correlated ones together) evaluates much performance metric decreases. performance decreases feature randomly shuffled, important feature . default FALSE takes run useful want know features important predicting outcome. Let’s look feature importance results: Now, can check feature importances: several columns: perf_metric: performance value permuted feature. 
perf_metric_diff: difference performance actual permuted data (.e. test performance minus permuted performance). Features larger perf_metric_diff important. pvalue: probability obtaining actual performance value null hypothesis. lower: lower bound 95% confidence interval perf_metric. upper: upper bound 95% confidence interval perf_metric. feat: feature (group correlated features) permuted. method: ML method used. perf_metric_name: name performance metric represented perf_metric & perf_metric_diff. seed: seed (set). can see , differences negligible (close zero), makes sense since model isn’t great. ’re interested feature importance, ’s especially useful run multiple different train/test splits, shown example snakemake workflow. can also choose permute correlated features together using corr_thresh (default: 1). features correlation threshold permuted together; .e. perfectly correlated features permuted together using default value. can see features permuted together feat column. 3 features permuted together (doesn’t really make sense, ’s just example). previously executed run_ml() without feature importance now wish find feature importance fact, see example code get_feature_importance() documentation. get_feature_importance() can show live progress bar, see vignette(\"parallel\") examples.","code":"results_imp <- run_ml(otu_mini_bin, \"rf\", outcome_colname = \"dx\", find_feature_importance = TRUE, seed = 2019 ) results_imp$feature_importance #> perf_metric perf_metric_diff pvalue lower upper feat method #> 1 0.5459125 0.0003375 0.51485149 0.49125 0.60250 Otu00001 rf #> 2 0.5682625 -0.0220125 0.73267327 0.50625 0.63125 Otu00002 rf #> 3 0.5482875 -0.0020375 0.56435644 0.50500 0.59000 Otu00003 rf #> 4 0.6314375 -0.0851875 1.00000000 0.55250 0.71250 Otu00004 rf #> 5 0.4991750 0.0470750 0.08910891 0.44125 0.57125 Otu00005 rf #> 6 0.5364875 0.0097625 0.28712871 0.50125 0.57375 Otu00006 rf #> 7 0.5382875 0.0079625 0.39603960 0.47500 0.58750 Otu00007 rf #> 8 0.5160500 0.0302000 0.09900990 0.46750 0.55750 Otu00008 rf #> 9 0.5293375 0.0169125 0.17821782 0.49500 0.55625 Otu00009 rf #> 10 0.4976500 0.0486000 0.12871287 0.41000 0.56250 Otu00010 rf #> perf_metric_name seed #> 1 AUC 2019 #> 2 AUC 2019 #> 3 AUC 2019 #> 4 AUC 2019 #> 5 AUC 2019 #> 6 AUC 2019 #> 7 AUC 2019 #> 8 AUC 2019 #> 9 AUC 2019 #> 10 AUC 2019 results_imp_corr <- run_ml(otu_mini_bin, \"glmnet\", cv_times = 5, find_feature_importance = TRUE, corr_thresh = 0.2, seed = 2019 ) #> Using 'dx' as the outcome column. #> Training the model... #> Warning in (function (w) : `caret::train()` issued the following warning: #> #> simpleWarning in nominalTrainWorkflow(x = x, y = y, wts = weights, info = trainInfo, : There were missing values in resampled performance measures. #> #> This warning usually means that the model didn't converge in some cross-validation folds because it is predicting something close to a constant. As a result, certain performance metrics can't be calculated. This suggests that some of the hyperparameters chosen are doing very poorly. #> Training complete. #> Finding feature importance... #> Feature importance complete. 
results_imp_corr$feature_importance #> perf_metric perf_metric_diff pvalue lower upper #> 1 0.4941842 0.1531842 0.05940594 0.3236842 0.6473684 #> feat #> 1 Otu00001|Otu00002|Otu00003|Otu00004|Otu00005|Otu00006|Otu00007|Otu00008|Otu00009|Otu00010 #> method perf_metric_name seed #> 1 glmnet AUC 2019"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"tuning-hyperparameters-using-the-hyperparameter-argument","dir":"Articles","previous_headings":"","what":"Tuning hyperparameters (using the hyperparameter argument)","title":"Introduction to mikropml","text":"important, whole vignette . bottom line provide default hyperparameters can start , ’s important tune hyperparameters. information default hyperparameters , tune hyperparameters, see vignette(\"tuning\").","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"other-models","dir":"Articles","previous_headings":"","what":"Other models","title":"Introduction to mikropml","text":"examples train evaluate models. output similar, won’t go details.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"random-forest","dir":"Articles","previous_headings":"Other models","what":"Random forest","title":"Introduction to mikropml","text":"rf engine takes optional argument ntree: number trees use random forest. can’t tuned using rf package implementation random forest. Please refer caret documentation interested packages random forest implementations.","code":"results_rf <- run_ml(otu_mini_bin, \"rf\", cv_times = 5, seed = 2019 ) results_rf_nt <- run_ml(otu_mini_bin, \"rf\", cv_times = 5, ntree = 1000, seed = 2019 )"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"decision-tree","dir":"Articles","previous_headings":"Other models","what":"Decision tree","title":"Introduction to mikropml","text":"","code":"results_dt <- run_ml(otu_mini_bin, \"rpart2\", cv_times = 5, seed = 2019 )"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"svm","dir":"Articles","previous_headings":"Other models","what":"SVM","title":"Introduction to mikropml","text":"get message “maximum number iterations reached”, see issue caret.","code":"results_svm <- run_ml(otu_mini_bin, \"svmRadial\", cv_times = 5, seed = 2019 )"},{"path":[]},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"multiclass-data","dir":"Articles","previous_headings":"Other data","what":"Multiclass data","title":"Introduction to mikropml","text":"provide otu_mini_multi multiclass outcome (three outcomes): ’s example running multiclass data: performance metrics slightly different, format everything else :","code":"otu_mini_multi %>% dplyr::pull(\"dx\") %>% unique() #> [1] \"adenoma\" \"carcinoma\" \"normal\" results_multi <- run_ml(otu_mini_multi, outcome_colname = \"dx\", seed = 2019 ) results_multi$performance #> # A tibble: 1 × 17 #> cv_metric…¹ logLoss AUC prAUC Accur…² Kappa Mean_F1 Mean_…³ Mean_…⁴ Mean_…⁵ #> #> 1 1.07 1.11 0.506 0.353 0.382 0.0449 NA 0.360 0.682 NaN #> # … with 7 more variables: Mean_Neg_Pred_Value , Mean_Precision , #> # Mean_Recall , Mean_Detection_Rate , Mean_Balanced_Accuracy , #> # method , seed , and abbreviated variable names #> # ¹​cv_metric_logLoss, ²​Accuracy, ³​Mean_Sensitivity, ⁴​Mean_Specificity, #> # ⁵​Mean_Pos_Pred_Value"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"continuous-data","dir":"Articles","previous_headings":"Other 
data","what":"Continuous data","title":"Introduction to mikropml","text":"’s example running continuous data, outcome column numerical: , performance metrics slightly different, format rest :","code":"results_cont <- run_ml(otu_mini_bin[, 2:11], \"glmnet\", outcome_colname = \"Otu00001\", seed = 2019 ) results_cont$performance #> # A tibble: 1 × 6 #> cv_metric_RMSE RMSE Rsquared MAE method seed #> #> 1 622. 731. 0.0893 472. glmnet 2019"},{"path":[]},{"path":"http://www.schlosslab.org/mikropml/dev/articles/paper.html","id":"summary","dir":"Articles","previous_headings":"","what":"Summary","title":"mikropml: User-Friendly R Package for Supervised Machine Learning Pipelines","text":"Machine learning (ML) classification prediction based set features used make decisions healthcare, economics, criminal justice . However, implementing ML pipeline including preprocessing, model selection, evaluation can time-consuming, confusing, difficult. , present mikropml (pronounced “meek-ROPE em el”), easy--use R package implements ML pipelines using regression, support vector machines, decision trees, random forest, gradient-boosted trees. package available GitHub, CRAN, conda.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/paper.html","id":"statement-of-need","dir":"Articles","previous_headings":"","what":"Statement of need","title":"mikropml: User-Friendly R Package for Supervised Machine Learning Pipelines","text":"applications machine learning (ML) require reproducible steps data pre-processing, cross-validation, testing, model evaluation, often interpretation model makes particular predictions. Performing steps important, failure implement can result incorrect misleading results (Teschendorff 2019; Wiens et al. 2019). Supervised ML widely used recognize patterns large datasets make predictions outcomes interest. Several packages including caret (Kuhn 2008) tidymodels (Kuhn, Wickham, RStudio 2020) R, scikitlearn (Pedregosa et al. 2011) Python, H2O autoML platform (H2O.ai 2020) allow scientists train ML models variety algorithms. packages provide tools necessary ML step, implement complete ML pipeline according good practices literature. makes difficult practitioners new ML easily begin perform ML analyses. enable broader range researchers apply ML problem domains, created mikropml, easy--use R package (R Core Team 2020) implements ML pipeline created Topçuoğlu et al. (Topçuoğlu et al. 2020) single function returns trained model, model performance metrics feature importance. mikropml leverages caret package support several ML algorithms: linear regression, logistic regression, support vector machines radial basis kernel, decision trees, random forest, gradient boosted trees. incorporates good practices ML training, testing, model evaluation (Topçuoğlu et al. 2020; Teschendorff 2019). Furthermore, provides data preprocessing steps based FIDDLE (FlexIble Data-Driven pipeLinE) framework outlined Tang et al. (Tang et al. 2020) post-training permutation importance steps estimate importance feature models trained (Breiman 2001; Fisher, Rudin, Dominici 2018). mikropml can used starting point application ML datasets many different fields. already applied microbiome data categorize patients colorectal cancer (Topçuoğlu et al. 2020), identify differences genomic clinical features associated bacterial infections (Lapp et al. 2020), predict gender-based biases academic publishing (Hagan et al. 
2020).","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/paper.html","id":"mikropml-package","dir":"Articles","previous_headings":"","what":"mikropml package","title":"mikropml: User-Friendly R Package for Supervised Machine Learning Pipelines","text":"mikropml package includes functionality preprocess data, train ML models, evaluate model performance, quantify feature importance (Figure 1). also provide vignettes example Snakemake workflow (Köster Rahmann 2012) showcase run ideal ML pipeline multiple different train/test data splits. results can visualized using helper functions use ggplot2 (Wickham 2016). mikropml allows users get started quickly facilitates reproducibility, replacement understanding ML workflow still necessary interpreting results (Pollard et al. 2019). facilitate understanding enable one tailor code application, heavily commented code provided supporting documentation can read online.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/paper.html","id":"preprocessing-data","dir":"Articles","previous_headings":"mikropml package","what":"Preprocessing data","title":"mikropml: User-Friendly R Package for Supervised Machine Learning Pipelines","text":"provide function preprocess_data() preprocess features using several different functions caret package. preprocess_data() takes continuous categorical data, re-factors categorical data binary features, provides options normalize continuous data, remove features near-zero variance, keep one instance perfectly correlated features. set default options based implemented FIDDLE (Tang et al. 2020). details use preprocess_data() can found accompanying vignette.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/paper.html","id":"running-ml","dir":"Articles","previous_headings":"mikropml package","what":"Running ML","title":"mikropml: User-Friendly R Package for Supervised Machine Learning Pipelines","text":"main function mikropml, run_ml(), minimally takes model choice data frame outcome column feature columns. model choice, mikropml currently supports logistic linear regression (glmnet: Friedman, Hastie, Tibshirani 2010), support vector machines radial basis kernel (kernlab: Karatzoglou et al. 2004), decision trees (rpart: Therneau et al. 2019), random forest (randomForest: Liaw Wiener 2002), gradient-boosted trees (xgboost: Chen et al. 2020). run_ml() randomly splits data train test sets maintaining distribution outcomes found full dataset. also provides option split data train test sets based categorical variables (e.g. batch, geographic location, etc.). mikropml uses caret package (Kuhn 2008) train evaluate models, optionally quantifies feature importance. output includes best model built based tuning hyperparameters internal repeated cross-validation step, model evaluation metrics, optional feature importances. Feature importances calculated using permutation test, breaks relationship feature true outcome test data, measures change model performance. provides intuitive metric individual features influence model performance comparable across model types, particularly useful model interpretation (Topçuoğlu et al. 2020). introductory vignette contains comprehensive tutorial use run_ml(). 
mikropml pipeline","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/paper.html","id":"ideal-workflow-for-running-mikropml-with-many-different-traintest-splits","dir":"Articles","previous_headings":"mikropml package","what":"Ideal workflow for running mikropml with many different train/test splits","title":"mikropml: User-Friendly R Package for Supervised Machine Learning Pipelines","text":"investigate variation model performance depending train test set used (Topçuoğlu et al. 2020; Lapp et al. 2020), provide examples run_ml() many times different train/test splits get summary information model performance local computer high-performance computing cluster using Snakemake workflow.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/paper.html","id":"tuning-visualization","dir":"Articles","previous_headings":"mikropml package","what":"Tuning & visualization","title":"mikropml: User-Friendly R Package for Supervised Machine Learning Pipelines","text":"One particularly important aspect ML hyperparameter tuning. provide reasonable range default hyperparameters model type. However practitioners explore whether range appropriate data, customize hyperparameter range. Therefore, provide function plot_hp_performance() plot cross-validation performance metric single model models built using different train/test splits. helps evaluate hyperparameter range searched exhaustively allows user pick ideal set. also provide summary plots test performance metrics many train/test splits different models using plot_model_performance(). Examples described accompanying vignette hyperparameter tuning.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/paper.html","id":"dependencies","dir":"Articles","previous_headings":"mikropml package","what":"Dependencies","title":"mikropml: User-Friendly R Package for Supervised Machine Learning Pipelines","text":"mikropml written R (R Core Team 2020) depends several packages: dplyr (Wickham et al. 2020), rlang (Henry, Wickham, RStudio 2020) caret (Kuhn 2008). ML algorithms supported mikropml require: glmnet (Friedman, Hastie, Tibshirani 2010), e1071 (Meyer et al. 2020), MLmetrics (Yan 2016) logistic regression, rpart2 (Therneau et al. 2019) decision trees, randomForest (Liaw Wiener 2002) random forest, xgboost (Chen et al. 2020) xgboost, kernlab (Karatzoglou et al. 2004) support vector machines. also allow parallelization cross-validation steps using foreach, doFuture, future.apply, future packages (Bengtsson Team 2020). Finally, use ggplot2 plotting (Wickham 2016).","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/paper.html","id":"acknowledgments","dir":"Articles","previous_headings":"","what":"Acknowledgments","title":"mikropml: User-Friendly R Package for Supervised Machine Learning Pipelines","text":"thank members Schloss Lab participated code clubs related initial development pipeline, made documentation improvements, provided general feedback. also thank Nick Lesniak designing mikropml logo. thank US Research Software Sustainability Institute (NSF #1743188) providing training KLS Winter School Research Software Engineering.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/paper.html","id":"funding","dir":"Articles","previous_headings":"","what":"Funding","title":"mikropml: User-Friendly R Package for Supervised Machine Learning Pipelines","text":"Salary support PDS came NIH grant 1R01CA215574. KLS received support NIH Training Program Bioinformatics (T32 GM070449). 
ZL received support from the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE 1256260. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/paper.html","id":"author-contributions","dir":"Articles","previous_headings":"","what":"Author contributions","title":"mikropml: User-Friendly R Package for Supervised Machine Learning Pipelines","text":"BDT, ZL, and KLS contributed equally. Author order among the co-first authors was determined by time since joining the project. BDT, ZL, and KLS conceptualized the study and wrote the code. KLS structured the code in R package form. BDT, ZL, JW, and PDS developed the methodology. PDS, ES, and JW supervised the project. BDT, ZL, and KLS wrote the original draft. All authors reviewed and edited the manuscript.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/paper.html","id":"conflicts-of-interest","dir":"Articles","previous_headings":"","what":"Conflicts of interest","title":"mikropml: User-Friendly R Package for Supervised Machine Learning Pipelines","text":"None.","code":""},{"path":[]},{"path":"http://www.schlosslab.org/mikropml/dev/articles/parallel.html","id":"speed-up-single-runs","dir":"Articles","previous_headings":"","what":"Speed up single runs","title":"Parallel processing","text":"By default, preprocess_data(), run_ml(), and compare_models() use only one process in series. If you’d like to parallelize various steps of the pipeline to make them run faster, install foreach, future, future.apply, and doFuture. Then, register a future plan prior to calling these functions: Above, we used a multicore plan to split the work across 2 cores. See the future documentation for picking the best plan for your use case. Notably, multicore does not work inside RStudio or on Windows; you will need to use multisession instead in those cases. After registering a future plan, you can call preprocess_data() and run_ml() as usual, and they will run certain tasks in parallel. There’s also a parallel version of the rf engine called parRF which trains the trees in the forest in parallel. See the caret docs for more information.","code":"doFuture::registerDoFuture() future::plan(future::multicore, workers = 2) otu_data_preproc <- preprocess_data(otu_mini_bin, \"dx\")$dat_transformed result1 <- run_ml(otu_data_preproc, \"glmnet\", seed = 2019)"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/parallel.html","id":"bootstrap-performance","dir":"Articles","previous_headings":"Speed up single runs","what":"Bootstrap performance","title":"Parallel processing","text":"If you intend to call run_ml() only once to generate one train/test split (e.g. a temporal split of your dataset), you can evaluate the model performance by bootstrapping the test set. Here we show how to generate 100 bootstraps and calculate a confidence interval for the model performance. 
use 100 computation speed, recommended generate 10000 bootstraps precise estimation confidence interval.","code":"boot_perf <- bootstrap_performance(result1, outcome_colname = \"dx\", bootstrap_times = 100, alpha = 0.05 ) boot_perf #> # A tibble: 15 × 6 #> term .lower .estimate .upper .alpha .method #> #> 1 AUC 0.489 0.656 0.825 0.05 percentile #> 2 Accuracy 0.448 0.597 0.769 0.05 percentile #> 3 Balanced_Accuracy 0.441 0.597 0.753 0.05 percentile #> 4 Detection_Rate 0.166 0.307 0.462 0.05 percentile #> 5 F1 0.412 0.597 0.759 0.05 percentile #> 6 Kappa -0.119 0.189 0.507 0.05 percentile #> 7 Neg_Pred_Value 0.333 0.583 0.774 0.05 percentile #> 8 Pos_Pred_Value 0.425 0.610 0.812 0.05 percentile #> 9 Precision 0.425 0.610 0.812 0.05 percentile #> 10 Recall 0.387 0.598 0.810 0.05 percentile #> 11 Sensitivity 0.387 0.598 0.810 0.05 percentile #> 12 Specificity 0.342 0.596 0.789 0.05 percentile #> 13 cv_metric_AUC 0.622 0.622 0.622 0.05 percentile #> 14 logLoss 0.655 0.681 0.705 0.05 percentile #> 15 prAUC 0.465 0.592 0.736 0.05 percentile"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/parallel.html","id":"call-run_ml-multiple-times-in-parallel-in-r","dir":"Articles","previous_headings":"","what":"Call run_ml() multiple times in parallel in R","title":"Parallel processing","text":"can use functions future.apply package call run_ml() multiple times parallel different parameters. first need run future::plan() haven’t already. , call run_ml() multiple seeds using future_lapply(): call run_ml() different seed uses different random split data training testing sets. Since using seeds, must set future.seed TRUE (see future.apply documentation blog post details parallel-safe random seeds). example uses seeds speed simplicity, real data recommend using many seeds get better estimate model performance. examples, used functions future.apply package run_ml() parallel, can accomplish thing parallel versions purrr::map() functions using furrr package (e.g. furrr::future_map_dfr()). Extract performance results combine one dataframe seeds:","code":"# NOTE: use more seeds for real-world data results_multi <- future.apply::future_lapply(seq(100, 102), function(seed) { run_ml(otu_data_preproc, \"glmnet\", seed = seed) }, future.seed = TRUE) #> Using 'dx' as the outcome column. #> Training the model... #> Loading required package: lattice #> #> Attaching package: 'caret' #> The following object is masked from 'package:mikropml': #> #> compare_models #> Training complete. #> Using 'dx' as the outcome column. #> Training the model... #> Loading required package: lattice #> #> Attaching package: 'caret' #> The following object is masked from 'package:mikropml': #> #> compare_models #> Training complete. #> Using 'dx' as the outcome column. #> Training the model... #> Training complete. perf_df <- future.apply::future_lapply(results_multi, function(result) { result[[\"performance\"]] %>% select(cv_metric_AUC, AUC, method) }, future.seed = TRUE ) %>% dplyr::bind_rows() perf_df #> # A tibble: 3 × 3 #> cv_metric_AUC AUC method #> #> 1 0.630 0.634 glmnet #> 2 0.591 0.608 glmnet #> 3 0.671 0.471 glmnet"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/parallel.html","id":"multiple-ml-methods","dir":"Articles","previous_headings":"Call run_ml() multiple times in parallel in R","what":"Multiple ML methods","title":"Parallel processing","text":"may also wish compare performance different ML methods. 
mapply() can iterate multiple lists vectors, future_mapply() works way:","code":"# NOTE: use more seeds for real-world data param_grid <- expand.grid( seeds = seq(100, 103), methods = c(\"glmnet\", \"rf\") ) results_mtx <- future.apply::future_mapply( function(seed, method) { run_ml(otu_data_preproc, method, seed = seed, find_feature_importance = TRUE ) }, param_grid$seeds, param_grid$methods %>% as.character(), future.seed = TRUE ) #> Using 'dx' as the outcome column. #> Training the model... #> Loading required package: lattice #> #> Attaching package: 'caret' #> The following object is masked from 'package:mikropml': #> #> compare_models #> Training complete. #> Finding feature importance... #> Feature importance complete. #> Using 'dx' as the outcome column. #> Training the model... #> Training complete. #> Finding feature importance... #> Feature importance complete. #> Using 'dx' as the outcome column. #> Training the model... #> Training complete. #> Finding feature importance... #> Feature importance complete. #> Using 'dx' as the outcome column. #> Training the model... #> Training complete. #> Finding feature importance... #> Feature importance complete. #> Using 'dx' as the outcome column. #> Training the model... #> Loading required package: lattice #> #> Attaching package: 'caret' #> The following object is masked from 'package:mikropml': #> #> compare_models #> Training complete. #> Finding feature importance... #> Feature importance complete. #> Using 'dx' as the outcome column. #> Training the model... #> Training complete. #> Finding feature importance... #> Feature importance complete. #> Using 'dx' as the outcome column. #> Training the model... #> Training complete. #> Finding feature importance... #> Feature importance complete. #> Using 'dx' as the outcome column. #> Training the model... #> Training complete. #> Finding feature importance... #> Feature importance complete."},{"path":"http://www.schlosslab.org/mikropml/dev/articles/parallel.html","id":"visualize-the-results","dir":"Articles","previous_headings":"Call run_ml() multiple times in parallel in R","what":"Visualize the results","title":"Parallel processing","text":"ggplot2 required use plotting functions . can also create plots however like using results data.","code":""},{"path":[]},{"path":"http://www.schlosslab.org/mikropml/dev/articles/parallel.html","id":"mean-auc","dir":"Articles","previous_headings":"Call run_ml() multiple times in parallel in R > Visualize the results > Performance","what":"Mean AUC","title":"Parallel processing","text":"plot_model_performance() returns ggplot2 object. 
can add layers customize plot:","code":"perf_df <- lapply( results_mtx[\"performance\", ], function(x) { x %>% select(cv_metric_AUC, AUC, method) } ) %>% dplyr::bind_rows() perf_boxplot <- plot_model_performance(perf_df) perf_boxplot perf_boxplot + theme_classic() + scale_color_brewer(palette = \"Dark2\") + coord_flip()"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/parallel.html","id":"roc-and-prc-curves","dir":"Articles","previous_headings":"Call run_ml() multiple times in parallel in R > Visualize the results > Performance","what":"ROC and PRC curves","title":"Parallel processing","text":"First calculate sensitivity, specificity, precision models.","code":"get_sensspec_seed <- function(colnum) { result <- results_mtx[, colnum] trained_model <- result$trained_model test_data <- result$test_data seed <- result$performance$seed method <- result$trained_model$method sensspec <- calc_model_sensspec( trained_model, test_data, \"dx\" ) %>% mutate(seed = seed, method = method) return(sensspec) } sensspec_dat <- purrr::map_dfr( seq(1, dim(results_mtx)[2]), get_sensspec_seed ) #> Using 'dx' as the outcome column. #> Using 'dx' as the outcome column. #> Using 'dx' as the outcome column. #> Using 'dx' as the outcome column. #> Using 'dx' as the outcome column. #> Using 'dx' as the outcome column. #> Using 'dx' as the outcome column. #> Using 'dx' as the outcome column."},{"path":"http://www.schlosslab.org/mikropml/dev/articles/parallel.html","id":"plot-curves-for-a-single-model","dir":"Articles","previous_headings":"","what":"Parallel processing","title":"Parallel processing","text":"","code":"sensspec_1 <- sensspec_dat %>% filter(seed == 100, method == \"glmnet\") sensspec_1 %>% ggplot(aes(x = specificity, y = sensitivity, )) + geom_line() + geom_abline( intercept = 1, slope = 1, linetype = \"dashed\", color = \"grey50\" ) + coord_equal() + scale_x_reverse(expand = c(0, 0), limits = c(1.01, -0.01)) + scale_y_continuous(expand = c(0, 0), limits = c(-0.01, 1.01)) + labs(x = \"Specificity\", y = \"Sensitivity\") + theme_bw() + theme(legend.title = element_blank()) baseline_precision_otu <- calc_baseline_precision( otu_data_preproc, \"dx\", \"cancer\" ) #> Using 'dx' as the outcome column. sensspec_1 %>% rename(recall = sensitivity) %>% ggplot(aes(x = recall, y = precision, )) + geom_line() + geom_hline( yintercept = baseline_precision_otu, linetype = \"dashed\", color = \"grey50\" ) + coord_equal() + scale_x_continuous(expand = c(0, 0), limits = c(-0.01, 1.01)) + scale_y_continuous(expand = c(0, 0), limits = c(-0.01, 1.01)) + labs(x = \"Recall\", y = \"Precision\") + theme_bw() + theme(legend.title = element_blank())"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/parallel.html","id":"plot-mean-roc-and-prc-for-all-models","dir":"Articles","previous_headings":"","what":"Parallel processing","title":"Parallel processing","text":"","code":"sensspec_dat %>% calc_mean_roc() %>% plot_mean_roc() sensspec_dat %>% calc_mean_prc() %>% plot_mean_prc(baseline_precision = baseline_precision_otu)"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/parallel.html","id":"feature-importance","dir":"Articles","previous_headings":"Call run_ml() multiple times in parallel in R > Visualize the results","what":"Feature importance","title":"Parallel processing","text":"perf_metric_diff feature importance data frame contains differences performance actual test data performance permuted test data (.e. test minus permuted). feature important model performance, expect perf_metric_diff positive. 
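As a toy illustration of where such a difference comes from (this is not mikropml's internal implementation; the model and data below are invented for demonstration): shuffle one feature in the test data, re-score the model, and compare.

set.seed(2019)
toy <- data.frame(
  y = factor(rep(c("a", "b"), each = 25)),
  x1 = c(rnorm(25, 0), rnorm(25, 2)), # informative feature
  x2 = rnorm(50) # noise feature
)
fit <- glm(y ~ x1 + x2, data = toy, family = "binomial")
accuracy <- function(dat) {
  mean((predict(fit, dat, type = "response") > 0.5) == (dat$y == "b"))
}
permuted <- toy
permuted$x1 <- sample(permuted$x1) # break the feature-outcome relationship
accuracy(toy) - accuracy(permuted) # positive difference => x1 is important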
words, features resulted largest decrease performance permuted important features.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/parallel.html","id":"feature-importance-for-multiple-models","dir":"Articles","previous_headings":"Call run_ml() multiple times in parallel in R > Visualize the results > Feature importance","what":"Feature importance for multiple models","title":"Parallel processing","text":"can select top n important features models plot like : See docs get_feature_importance() details values computed.","code":"feat_df <- results_mtx[\"feature_importance\", ] %>% dplyr::bind_rows() top_n <- 5 top_feats <- feat_df %>% group_by(method, feat) %>% summarize(mean_diff = median(perf_metric_diff)) %>% filter(mean_diff > 0) %>% slice_max(order_by = mean_diff, n = top_n) #> `summarise()` has grouped output by 'method'. You can override using the #> `.groups` argument. feat_df %>% right_join(top_feats, by = c(\"method\", \"feat\")) %>% mutate(features = forcats::fct_reorder(factor(feat), mean_diff)) %>% ggplot(aes(x = perf_metric_diff, y = features, color = method)) + geom_boxplot() + geom_vline(xintercept = 0, linetype = \"dashed\") + labs( x = \"Decrease in performance (actual minus permutation)\", y = \"Features\", caption = \"Features which have a lower performance when permuted have a difference in performance above zero. The features with the greatest decrease are the most important for model performance.\" %>% stringr::str_wrap(width = 100) ) + theme_bw() + theme(plot.caption = element_text(hjust = 0))"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/parallel.html","id":"feature-importance-for-a-single-model","dir":"Articles","previous_headings":"Call run_ml() multiple times in parallel in R > Visualize the results > Feature importance","what":"Feature importance for a single model","title":"Parallel processing","text":"can also plot feature importance single model. report actual performance, permutation performance, empirical 95% confidence interval permutation performance.","code":"feat_imp_1 <- results_mtx[, 1][[\"feature_importance\"]] perf_metric_name <- results_mtx[, 1][[\"trained_model\"]]$metric perf_actual <- results_mtx[, 1][[\"performance\"]] %>% pull(perf_metric_name) feat_imp_1 %>% filter(perf_metric_diff > 0) %>% mutate(feat = if_else(pvalue < 0.05, paste0(\"*\", feat), as.character(feat)) %>% as.factor() %>% forcats::fct_reorder(perf_metric_diff)) %>% ggplot(aes(x = perf_metric, xmin = lower, xmax = upper, y = feat)) + geom_pointrange() + geom_vline(xintercept = perf_actual, linetype = \"dashed\") + labs( x = \"Permutation performance\", y = \"Features\", caption = \"The dashed line represents the actual performance on the test set. Features which have a lower performance when permuted are important for model performance. Significant features (pvalue < 0.05) are marked with an asterisk (*). Error bars represent the 95% confidence interval.\" %>% stringr::str_wrap(width = 110) ) + theme_bw() + theme(plot.caption = element_text(hjust = 0))"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/parallel.html","id":"live-progress-updates","dir":"Articles","previous_headings":"","what":"Live progress updates","title":"Parallel processing","text":"preprocess_data() get_feature_importance() support reporting live progress updates using progressr package. format , recommend using progress bar like : Note future backends support “near-live” progress updates, meaning progress may reported immediately parallel processing futures. 
Read progressr vignette. progressr customize format progress updates, see progressr docs.","code":"# optionally, specify the progress bar format with the `progress` package. progressr::handlers(progressr::handler_progress( format = \":message :bar :percent | elapsed: :elapsed | eta: :eta\", clear = FALSE, show_after = 0 )) # tell progressr to always report progress in any functions that use it. # set this to FALSE to turn it back off again. progressr::handlers(global = TRUE) # run your code and watch the live progress updates. dat <- preprocess_data(otu_mini_bin, \"dx\")$dat_transformed #> Using 'dx' as the outcome column. #> preprocessing ========================>------- 78% | elapsed: 1s | eta: 0s results <- run_ml(dat, \"glmnet\", kfold = 2, cv_times = 2, find_feature_importance = TRUE ) #> Using 'dx' as the outcome column. #> Training the model... #> Training complete. #> Feature importance =========================== 100% | elapsed: 37s | eta: 0s"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/parallel.html","id":"parallelizing-with-snakemake","dir":"Articles","previous_headings":"","what":"Parallelizing with Snakemake","title":"Parallel processing","text":"parallelizing multiple calls run_ml() R examples , results objects held memory. isn’t big deal small dataset run seeds. However, large datasets run parallel , say, 100 seeds (recommended), may run problems trying store objects memory . Using workflow manager Snakemake Nextflow highly recommend maximize scalability reproducibility computational analyses. created template Snakemake workflow can use starting point ML project.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/preprocess.html","id":"its-running-so-slow","dir":"Articles","previous_headings":"","what":"It’s running so slow!","title":"Preprocessing data","text":"Since assume lot won’t read entire vignette, ’m going say beginning. preprocess_data() function running super slow, consider parallelizing goes faster! preprocess_data() also can report live progress updates. See vignette(\"parallel\") details.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/preprocess.html","id":"examples","dir":"Articles","previous_headings":"","what":"Examples","title":"Preprocessing data","text":"’re going start simple get complicated, want whole shebang , just scroll bottom. First, load mikropml:","code":"library(mikropml)"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/preprocess.html","id":"binary-data","dir":"Articles","previous_headings":"Examples","what":"Binary data","title":"Preprocessing data","text":"Let’s start binary variables: addition dataframe , provide name outcome column preprocess_data(). ’s preprocessed data looks like: output list: dat_transformed transformed data, grp_feats list grouped features, removed_feats list features removed. , grp_feats NULL perfectly correlated features (e.g. c(0,1,0) c(0,1,0), c(0,1,0) c(1,0,1) - see details). first column (var1) dat_transformed character changed var1_yes zeros () ones (yes). values second column (var2) stay ’s already binary, name changes var2_1. 
third column (var3) factor also changed binary b 1 0, denoted new column name var3_b.","code":"# raw binary dataset bin_df <- data.frame( outcome = c(\"normal\", \"normal\", \"cancer\"), var1 = c(\"no\", \"yes\", \"no\"), var2 = c(0, 1, 1), var3 = factor(c(\"a\", \"a\", \"b\")) ) bin_df #> outcome var1 var2 var3 #> 1 normal no 0 a #> 2 normal yes 1 a #> 3 cancer no 1 b # preprocess raw binary data preprocess_data(dataset = bin_df, outcome_colname = \"outcome\") #> Using 'outcome' as the outcome column. #> $dat_transformed #> # A tibble: 3 × 4 #> outcome var1_yes var2_1 var3_b #> #> 1 normal 0 0 0 #> 2 normal 1 1 0 #> 3 cancer 0 1 1 #> #> $grp_feats #> NULL #> #> $removed_feats #> character(0)"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/preprocess.html","id":"categorical-data","dir":"Articles","previous_headings":"Examples","what":"Categorical data","title":"Preprocessing data","text":"non-binary categorical data: can see, variable split 3 different columns - one type (, b, c). , grp_feats NULL.","code":"# raw categorical dataset cat_df <- data.frame( outcome = c(\"normal\", \"normal\", \"cancer\"), var1 = c(\"a\", \"b\", \"c\") ) cat_df #> outcome var1 #> 1 normal a #> 2 normal b #> 3 cancer c # preprocess raw categorical data preprocess_data(dataset = cat_df, outcome_colname = \"outcome\") #> Using 'outcome' as the outcome column. #> $dat_transformed #> # A tibble: 3 × 4 #> outcome var1_a var1_b var1_c #> #> 1 normal 1 0 0 #> 2 normal 0 1 0 #> 3 cancer 0 0 1 #> #> $grp_feats #> NULL #> #> $removed_feats #> character(0)"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/preprocess.html","id":"continuous-data","dir":"Articles","previous_headings":"Examples","what":"Continuous data","title":"Preprocessing data","text":"Now, looking continuous variables: Wow! numbers change? default normalize data using \"center\" \"scale\". often best practice, may want normalize data, may want normalize data different way. don’t want normalize data, can use method=NULL: can also normalize data different ways. can choose method supported method argument caret::preProcess() (see caret::preProcess() docs details). Note methods applied continuous variables. Another feature preprocess_data() provide continuous variables characters, converted numeric: don’t want happen, want character data remain character data even can converted numeric, can use to_numeric=FALSE kept categorical: can see output, case features treated groups rather numbers (e.g. normalized).","code":"# raw continuous dataset cont_df <- data.frame( outcome = c(\"normal\", \"normal\", \"cancer\"), var1 = c(1, 2, 3) ) cont_df #> outcome var1 #> 1 normal 1 #> 2 normal 2 #> 3 cancer 3 # preprocess raw continuous data preprocess_data(dataset = cont_df, outcome_colname = \"outcome\") #> Using 'outcome' as the outcome column. 
#> $dat_transformed #> # A tibble: 3 × 2 #> outcome var1 #> #> 1 normal -1 #> 2 normal 0 #> 3 cancer 1 #> #> $grp_feats #> NULL #> #> $removed_feats #> character(0) # preprocess raw continuous data, no normalization preprocess_data(dataset = cont_df, outcome_colname = \"outcome\", method = NULL) # raw continuous dataset as characters cont_char_df <- data.frame( outcome = c(\"normal\", \"normal\", \"cancer\"), var1 = c(\"1\", \"2\", \"3\") ) cont_char_df #> outcome var1 #> 1 normal 1 #> 2 normal 2 #> 3 cancer 3 # preprocess raw continuous character data as numeric preprocess_data(dataset = cont_char_df, outcome_colname = \"outcome\") # preprocess raw continuous character data as characters preprocess_data(dataset = cont_char_df, outcome_colname = \"outcome\", to_numeric = FALSE) #> Using 'outcome' as the outcome column. #> $dat_transformed #> # A tibble: 3 × 4 #> outcome var1_1 var1_2 var1_3 #> #> 1 normal 1 0 0 #> 2 normal 0 1 0 #> 3 cancer 0 0 1 #> #> $grp_feats #> NULL #> #> $removed_feats #> character(0)"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/preprocess.html","id":"collapse-perfectly-correlated-features","dir":"Articles","previous_headings":"Examples","what":"Collapse perfectly correlated features","title":"Preprocessing data","text":"default, preprocess_data() collapses features perfectly positively negatively correlated. multiple copies features add information machine learning, makes run_ml faster. can see, end one variable, 3 grouped together. Also, second element list longer NULL. Instead, tells grp1 contains var1, var2, var3. want group positively correlated features, negatively correlated features (e.g. interpretability, another downstream application), can using group_neg_corr=FALSE: , var3 kept ’s ’s negatively correlated var1 var2. can also choose keep features separate, even perfectly correlated, using collapse_corr_feats=FALSE: case, grp_feats always NULL.","code":"# raw correlated dataset corr_df <- data.frame( outcome = c(\"normal\", \"normal\", \"cancer\"), var1 = c(\"no\", \"yes\", \"no\"), var2 = c(0, 1, 0), var3 = c(1, 0, 1) ) corr_df #> outcome var1 var2 var3 #> 1 normal no 0 1 #> 2 normal yes 1 0 #> 3 cancer no 0 1 # preprocess raw correlated dataset preprocess_data(dataset = corr_df, outcome_colname = \"outcome\") #> Using 'outcome' as the outcome column. #> $dat_transformed #> # A tibble: 3 × 2 #> outcome grp1 #> #> 1 normal 0 #> 2 normal 1 #> 3 cancer 0 #> #> $grp_feats #> $grp_feats$grp1 #> [1] \"var1_yes\" \"var3_1\" #> #> #> $removed_feats #> [1] \"var2\" # preprocess raw correlated dataset; don't group negatively correlated features preprocess_data(dataset = corr_df, outcome_colname = \"outcome\", group_neg_corr = FALSE) #> Using 'outcome' as the outcome column. #> $dat_transformed #> # A tibble: 3 × 3 #> outcome var1_yes var3_1 #> #> 1 normal 0 1 #> 2 normal 1 0 #> 3 cancer 0 1 #> #> $grp_feats #> NULL #> #> $removed_feats #> [1] \"var2\" # preprocess raw correlated dataset; don't group negatively correlated features preprocess_data(dataset = corr_df, outcome_colname = \"outcome\", collapse_corr_feats = FALSE) #> Using 'outcome' as the outcome column. 
#> $dat_transformed #> # A tibble: 3 × 3 #> outcome var1_yes var3_1 #> #> 1 normal 0 1 #> 2 normal 1 0 #> 3 cancer 0 1 #> #> $grp_feats #> NULL #> #> $removed_feats #> [1] \"var2\""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/preprocess.html","id":"data-with-near-zero-variance","dir":"Articles","previous_headings":"Examples","what":"Data with near-zero variance","title":"Preprocessing data","text":"variables zero, “”? ones won’t contribute information, remove : , var3, var4, var5 variability, variables removed preprocessing: can read caret::preProcess() documentation information. default, remove features “near-zero variance” (remove_var='nzv'). uses default arguments caret::nearZeroVar(). However, particularly smaller datasets, might want remove features near-zero variance. want remove features zero variance, can use remove_var='zv': want include features, can use argument remove_zv=NULL. work, collapse correlated features (otherwise errors underlying caret function use). want nuanced remove near-zero variance features (e.g. change default 10% cutoff percentage distinct values total number samples), can use caret::preProcess() function running preprocess_data remove_var=NULL (see caret::nearZeroVar() function information).","code":"# raw dataset with non-variable features nonvar_df <- data.frame( outcome = c(\"normal\", \"normal\", \"cancer\"), var1 = c(\"no\", \"yes\", \"no\"), var2 = c(0, 1, 1), var3 = c(\"no\", \"no\", \"no\"), var4 = c(0, 0, 0), var5 = c(12, 12, 12) ) nonvar_df #> outcome var1 var2 var3 var4 var5 #> 1 normal no 0 no 0 12 #> 2 normal yes 1 no 0 12 #> 3 cancer no 1 no 0 12 # remove features with near-zero variance preprocess_data(dataset = nonvar_df, outcome_colname = \"outcome\") #> Using 'outcome' as the outcome column. #> $dat_transformed #> # A tibble: 3 × 3 #> outcome var1_yes var2_1 #> #> 1 normal 0 0 #> 2 normal 1 1 #> 3 cancer 0 1 #> #> $grp_feats #> NULL #> #> $removed_feats #> [1] \"var4\" \"var3\" \"var5\" # remove features with zero variance preprocess_data(dataset = nonvar_df, outcome_colname = \"outcome\", remove_var = \"zv\") #> Using 'outcome' as the outcome column. #> $dat_transformed #> # A tibble: 3 × 3 #> outcome var1_yes var2_1 #> #> 1 normal 0 0 #> 2 normal 1 1 #> 3 cancer 0 1 #> #> $grp_feats #> NULL #> #> $removed_feats #> [1] \"var4\" \"var3\" \"var5\" # don't remove features with near-zero or zero variance preprocess_data(dataset = nonvar_df, outcome_colname = \"outcome\", remove_var = NULL, collapse_corr_feats = FALSE) #> Using 'outcome' as the outcome column. #> $dat_transformed #> # A tibble: 3 × 5 #> outcome var1_yes var2_1 var3 var5 #> #> 1 normal 0 0 0 12 #> 2 normal 1 1 0 12 #> 3 cancer 0 1 0 12 #> #> $grp_feats #> NULL #> #> $removed_feats #> [1] \"var4\""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/preprocess.html","id":"missing-data","dir":"Articles","previous_headings":"Examples","what":"Missing data","title":"Preprocessing data","text":"preprocess_data() also deals missing data. : Removes missing outcome variables. Maintains zero variability feature already variability (.e. feature removed removing features near-zero variance). Replaces missing binary categorical variables zero (splitting multiple columns). Replaces missing continuous data median value feature. 
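A hypothetical sketch, anticipating the advice that follows: if you prefer different behavior, impute before calling preprocess_data(). Here, mean rather than median imputation; the data frame and column names are invented for illustration.

library(dplyr)
my_df <- data.frame(
  outcome = c("normal", "normal", "cancer"),
  var1 = c(1, NA, 3)
)
# replace the missing continuous value with the feature mean, then preprocess
my_df_imputed <- my_df %>%
  mutate(var1 = ifelse(is.na(var1), mean(var1, na.rm = TRUE), var1))
preprocess_data(dataset = my_df_imputed, outcome_colname = "outcome")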
’d like deal missing data different way, please prior inputting data preprocess_data().","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/preprocess.html","id":"remove-missing-outcome-variables","dir":"Articles","previous_headings":"Examples > Missing data","what":"Remove missing outcome variables","title":"Preprocessing data","text":"","code":"# raw dataset with missing outcome value miss_oc_df <- data.frame( outcome = c(\"normal\", \"normal\", \"cancer\", NA), var1 = c(\"no\", \"yes\", \"no\", \"no\"), var2 = c(0, 1, 1, 1) ) miss_oc_df #> outcome var1 var2 #> 1 normal no 0 #> 2 normal yes 1 #> 3 cancer no 1 #> 4 no 1 # preprocess raw dataset with missing outcome value preprocess_data(dataset = miss_oc_df, outcome_colname = \"outcome\") #> Using 'outcome' as the outcome column. #> Removed 1/4 (25%) of samples because of missing outcome value (NA). #> $dat_transformed #> # A tibble: 3 × 3 #> outcome var1_yes var2_1 #> #> 1 normal 0 0 #> 2 normal 1 1 #> 3 cancer 0 1 #> #> $grp_feats #> NULL #> #> $removed_feats #> character(0)"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/preprocess.html","id":"maintain-zero-variability-in-a-feature-if-it-already-has-no-variability","dir":"Articles","previous_headings":"Examples > Missing data","what":"Maintain zero variability in a feature if it already has no variability","title":"Preprocessing data","text":", non-variable feature missing data removed removed features near-zero variance. maintained feature, ’d ones:","code":"# raw dataset with missing value in non-variable feature miss_nonvar_df <- data.frame( outcome = c(\"normal\", \"normal\", \"cancer\"), var1 = c(\"no\", \"yes\", \"no\"), var2 = c(NA, 1, 1) ) miss_nonvar_df #> outcome var1 var2 #> 1 normal no NA #> 2 normal yes 1 #> 3 cancer no 1 # preprocess raw dataset with missing value in non-variable feature preprocess_data(dataset = miss_nonvar_df, outcome_colname = \"outcome\") #> Using 'outcome' as the outcome column. #> There are 1 missing value(s) in features with no variation. Missing values were replaced with the non-varying value. #> $dat_transformed #> # A tibble: 3 × 2 #> outcome var1_yes #> #> 1 normal 0 #> 2 normal 1 #> 3 cancer 0 #> #> $grp_feats #> NULL #> #> $removed_feats #> [1] \"var2\" # preprocess raw dataset with missing value in non-variable feature preprocess_data(dataset = miss_nonvar_df, outcome_colname = \"outcome\", remove_var = NULL, collapse_corr_feats = FALSE) #> Using 'outcome' as the outcome column. #> There are 1 missing value(s) in features with no variation. Missing values were replaced with the non-varying value. 
#> $dat_transformed #> # A tibble: 3 × 3 #> outcome var1_yes var2 #> #> 1 normal 0 1 #> 2 normal 1 1 #> 3 cancer 0 1 #> #> $grp_feats #> NULL #> #> $removed_feats #> character(0)"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/preprocess.html","id":"replace-missing-binary-and-categorical-variables-with-zero","dir":"Articles","previous_headings":"Examples > Missing data","what":"Replace missing binary and categorical variables with zero","title":"Preprocessing data","text":"binary variable split two, missing value considered zero .","code":"# raw dataset with missing value in categorical feature miss_cat_df <- data.frame( outcome = c(\"normal\", \"normal\", \"cancer\"), var1 = c(\"no\", \"yes\", NA), var2 = c(NA, 1, 0) ) miss_cat_df #> outcome var1 var2 #> 1 normal no NA #> 2 normal yes 1 #> 3 cancer 0 # preprocess raw dataset with missing value in non-variable feature preprocess_data(dataset = miss_cat_df, outcome_colname = \"outcome\") #> Using 'outcome' as the outcome column. #> 2 categorical missing value(s) (NA) were replaced with 0. Note that the matrix is not full rank so missing values may be duplicated in separate columns. #> $dat_transformed #> # A tibble: 3 × 3 #> outcome var1_no var1_yes #> #> 1 normal 1 0 #> 2 normal 0 1 #> 3 cancer 0 0 #> #> $grp_feats #> NULL #> #> $removed_feats #> [1] \"var2\""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/preprocess.html","id":"replace-missing-continuous-data-with-the-median-value-of-that-feature","dir":"Articles","previous_headings":"Examples > Missing data","what":"Replace missing continuous data with the median value of that feature","title":"Preprocessing data","text":"’re normalizing continuous features ’s easier see ’s going (.e. median value used):","code":"# raw dataset with missing value in continuous feature miss_cont_df <- data.frame( outcome = c(\"normal\", \"normal\", \"cancer\", \"normal\"), var1 = c(1, 2, 2, NA), var2 = c(1, 2, 3, NA) ) miss_cont_df #> outcome var1 var2 #> 1 normal 1 1 #> 2 normal 2 2 #> 3 cancer 2 3 #> 4 normal NA NA # preprocess raw dataset with missing value in continuous feature preprocess_data(dataset = miss_cont_df, outcome_colname = \"outcome\", method = NULL) #> Using 'outcome' as the outcome column. #> 2 missing continuous value(s) were imputed using the median value of the feature. #> $dat_transformed #> # A tibble: 4 × 3 #> outcome var1 var2 #> #> 1 normal 1 1 #> 2 normal 2 2 #> 3 cancer 2 3 #> 4 normal 2 2 #> #> $grp_feats #> NULL #> #> $removed_feats #> character(0)"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/preprocess.html","id":"putting-it-all-together","dir":"Articles","previous_headings":"Examples","what":"Putting it all together","title":"Preprocessing data","text":"’s complicated example raw data puts everything discussed together: Let’s throw preprocessing function default values: can see, got several messages: One samples (row 4) removed outcome value missing. One variables feature variation missing value replaced non-varying value (var11). Four categorical missing values replaced zero (var9). 4 missing rather just 1 (like raw data) split categorical variable 4 different columns first. One missing continuous value imputed using median value feature (var8). Additionally, can see continuous variables normalized, categorical variables changed binary, several features grouped together. 
variables group can found grp_feats.","code":"test_df <- data.frame( outcome = c(\"normal\", \"normal\", \"cancer\", NA), var1 = 1:4, var2 = c(\"a\", \"b\", \"c\", \"d\"), var3 = c(\"no\", \"yes\", \"no\", \"no\"), var4 = c(0, 1, 0, 0), var5 = c(0, 0, 0, 0), var6 = c(\"no\", \"no\", \"no\", \"no\"), var7 = c(1, 1, 0, 0), var8 = c(5, 6, NA, 7), var9 = c(NA, \"x\", \"y\", \"z\"), var10 = c(1, 0, NA, NA), var11 = c(1, 1, NA, NA), var12 = c(\"1\", \"2\", \"3\", \"4\") ) test_df #> outcome var1 var2 var3 var4 var5 var6 var7 var8 var9 var10 var11 var12 #> 1 normal 1 a no 0 0 no 1 5 1 1 1 #> 2 normal 2 b yes 1 0 no 1 6 x 0 1 2 #> 3 cancer 3 c no 0 0 no 0 NA y NA NA 3 #> 4 4 d no 0 0 no 0 7 z NA NA 4 preprocess_data(dataset = test_df, outcome_colname = \"outcome\") #> Using 'outcome' as the outcome column. #> Removed 1/4 (25%) of samples because of missing outcome value (NA). #> There are 1 missing value(s) in features with no variation. Missing values were replaced with the non-varying value. #> 2 categorical missing value(s) (NA) were replaced with 0. Note that the matrix is not full rank so missing values may be duplicated in separate columns. #> 1 missing continuous value(s) were imputed using the median value of the feature. #> $dat_transformed #> # A tibble: 3 × 6 #> outcome grp1 var2_a grp2 grp3 var8 #> #> 1 normal -1 1 0 0 -0.707 #> 2 normal 0 0 1 0 0.707 #> 3 cancer 1 0 0 1 0 #> #> $grp_feats #> $grp_feats$grp1 #> [1] \"var1\" \"var12\" #> #> $grp_feats$var2_a #> [1] \"var2_a\" #> #> $grp_feats$grp2 #> [1] \"var2_b\" \"var3_yes\" \"var9_x\" #> #> $grp_feats$grp3 #> [1] \"var2_c\" \"var7_1\" \"var9_y\" #> #> $grp_feats$var8 #> [1] \"var8\" #> #> #> $removed_feats #> [1] \"var4\" \"var5\" \"var10\" \"var6\" \"var11\""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/preprocess.html","id":"next-step-train-and-evaluate-your-model","dir":"Articles","previous_headings":"Examples","what":"Next step: train and evaluate your model!","title":"Preprocessing data","text":"preprocess data (either using preprocess_data() preprocessing data ), ’re ready train evaluate machine learning models! Please see run_ml() information training models.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/tuning.html","id":"the-simplest-way-to-run_ml","dir":"Articles","previous_headings":"","what":"The simplest way to run_ml()","title":"Hyperparameter tuning","text":"mentioned , minimal input dataset (dataset) machine learning model want use (method). run_ml(), default 100 times repeated, 5-fold cross-validation, evaluate hyperparameters 500 total iterations. Say want run L2 regularized logistic regression. : ’ll probably get warning run dataset small. want learn , check introductory vignette training evaluating ML model: vignette(\"introduction\"). default, run_ml() selects hyperparameters depending dataset method used. can see, alpha hyperparameter set 0, specifies L2 regularization. glmnet gives us option run L1 L2 regularization. change alpha 1, run L1-regularized logistic regression. can also tune alpha specifying variety values 0 1. use value 0 1, running elastic net. default hyperparameter lambda adjusts L2 regularization penalty range values 10^-4 10. look 100 repeated cross-validation performance metrics AUC, Accuracy, prAUC tested lambda value, see appropriate dataset better others.","code":"results <- run_ml(dat, \"glmnet\", outcome_colname = \"dx\", cv_times = 100, seed = 2019 ) #> Using 'dx' as the outcome column. #> Training the model... 
#> Loading required package: ggplot2 #> Loading required package: lattice #> #> Attaching package: 'caret' #> The following object is masked from 'package:mikropml': #> #> compare_models #> Training complete. results$trained_model #> glmnet #> #> 161 samples #> 10 predictor #> 2 classes: 'cancer', 'normal' #> #> No pre-processing #> Resampling: Cross-Validated (5 fold, repeated 100 times) #> Summary of sample sizes: 128, 129, 129, 129, 129, 130, ... #> Resampling results across tuning parameters: #> #> lambda logLoss AUC prAUC Accuracy Kappa F1 #> 1e-04 0.7113272 0.6123301 0.5725828 0.5853927 0.17080523 0.5730989 #> 1e-03 0.7113272 0.6123301 0.5725828 0.5853927 0.17080523 0.5730989 #> 1e-02 0.7112738 0.6123883 0.5726478 0.5854514 0.17092470 0.5731635 #> 1e-01 0.6819806 0.6210744 0.5793961 0.5918756 0.18369829 0.5779616 #> 1e+00 0.6803749 0.6278273 0.5827655 0.5896356 0.17756961 0.5408139 #> 1e+01 0.6909820 0.6271894 0.5814202 0.5218000 0.02920942 0.1875293 #> Sensitivity Specificity Pos_Pred_Value Neg_Pred_Value Precision #> 0.5789667 0.5920074 0.5796685 0.5977166 0.5796685 #> 0.5789667 0.5920074 0.5796685 0.5977166 0.5796685 #> 0.5789667 0.5921250 0.5797769 0.5977182 0.5797769 #> 0.5805917 0.6032353 0.5880165 0.6026963 0.5880165 #> 0.5057833 0.6715588 0.6005149 0.5887829 0.6005149 #> 0.0607250 0.9678676 0.7265246 0.5171323 0.7265246 #> Recall Detection_Rate Balanced_Accuracy #> 0.5789667 0.2839655 0.5854870 #> 0.5789667 0.2839655 0.5854870 #> 0.5789667 0.2839636 0.5855458 #> 0.5805917 0.2847195 0.5919135 #> 0.5057833 0.2478291 0.5886711 #> 0.0607250 0.0292613 0.5142963 #> #> Tuning parameter 'alpha' was held constant at a value of 0 #> AUC was used to select the optimal model using the largest value. #> The final values used for the model were alpha = 0 and lambda = 1. 
results$trained_model$results #> alpha lambda logLoss AUC prAUC Accuracy Kappa F1 #> 1 0 1e-04 0.7113272 0.6123301 0.5725828 0.5853927 0.17080523 0.5730989 #> 2 0 1e-03 0.7113272 0.6123301 0.5725828 0.5853927 0.17080523 0.5730989 #> 3 0 1e-02 0.7112738 0.6123883 0.5726478 0.5854514 0.17092470 0.5731635 #> 4 0 1e-01 0.6819806 0.6210744 0.5793961 0.5918756 0.18369829 0.5779616 #> 5 0 1e+00 0.6803749 0.6278273 0.5827655 0.5896356 0.17756961 0.5408139 #> 6 0 1e+01 0.6909820 0.6271894 0.5814202 0.5218000 0.02920942 0.1875293 #> Sensitivity Specificity Pos_Pred_Value Neg_Pred_Value Precision Recall #> 1 0.5789667 0.5920074 0.5796685 0.5977166 0.5796685 0.5789667 #> 2 0.5789667 0.5920074 0.5796685 0.5977166 0.5796685 0.5789667 #> 3 0.5789667 0.5921250 0.5797769 0.5977182 0.5797769 0.5789667 #> 4 0.5805917 0.6032353 0.5880165 0.6026963 0.5880165 0.5805917 #> 5 0.5057833 0.6715588 0.6005149 0.5887829 0.6005149 0.5057833 #> 6 0.0607250 0.9678676 0.7265246 0.5171323 0.7265246 0.0607250 #> Detection_Rate Balanced_Accuracy logLossSD AUCSD prAUCSD AccuracySD #> 1 0.2839655 0.5854870 0.085315967 0.09115229 0.07296554 0.07628572 #> 2 0.2839655 0.5854870 0.085315967 0.09115229 0.07296554 0.07628572 #> 3 0.2839636 0.5855458 0.085276565 0.09122242 0.07301412 0.07637123 #> 4 0.2847195 0.5919135 0.048120032 0.09025695 0.07329214 0.07747312 #> 5 0.2478291 0.5886711 0.012189172 0.09111917 0.07505095 0.07771171 #> 6 0.0292613 0.5142963 0.001610008 0.09266875 0.07640896 0.03421597 #> KappaSD F1SD SensitivitySD SpecificitySD Pos_Pred_ValueSD #> 1 0.15265728 0.09353786 0.13091452 0.11988406 0.08316345 #> 2 0.15265728 0.09353786 0.13091452 0.11988406 0.08316345 #> 3 0.15281903 0.09350099 0.13073501 0.12002481 0.08329024 #> 4 0.15485134 0.09308733 0.12870031 0.12037225 0.08554483 #> 5 0.15563046 0.10525917 0.13381009 0.11639614 0.09957685 #> 6 0.06527242 0.09664720 0.08010494 0.06371495 0.31899811 #> Neg_Pred_ValueSD PrecisionSD RecallSD Detection_RateSD Balanced_AccuracySD #> 1 0.08384956 0.08316345 0.13091452 0.06394409 0.07640308 #> 2 0.08384956 0.08316345 0.13091452 0.06394409 0.07640308 #> 3 0.08385838 0.08329024 0.13073501 0.06384692 0.07648207 #> 4 0.08427362 0.08554483 0.12870031 0.06272897 0.07748791 #> 5 0.07597766 0.09957685 0.13381009 0.06453637 0.07773039 #> 6 0.02292294 0.31899811 0.08010494 0.03803159 0.03184136"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/tuning.html","id":"customizing-hyperparameters","dir":"Articles","previous_headings":"","what":"Customizing hyperparameters","title":"Hyperparameter tuning","text":"example, want change lambda values provide better range test cross-validation step. don’t want use defaults provide named list new values. example: Now let’s run L2 logistic regression new lambda values: time, cover larger different range lambda settings cross-validation. know lambda value best one? answer , need run ML pipeline multiple data splits look mean cross-validation performance lambda across modeling experiments. describe run pipeline multiple data splits vignette(\"parallel\"). train model new lambda range defined . run 3 times different seed, result different splits data training testing sets. can use plot_hp_performance see lambda gives us largest mean AUC value across modeling experiments. can see, get mean maxima 0.03 best lambda value dataset run 3 data splits. fact seeing maxima middle range edges, shows providing large enough range exhaust lambda search build model. recommend user use plot make sure best hyperparameter edges provided list. 
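The alpha hyperparameter can be customized the same way. A sketch of an elastic net search; these grid values are illustrative choices, not package defaults:

enet_hp <- list(
  alpha = seq(0, 1, by = 0.25), # 0 = ridge (L2), 1 = lasso (L1), in between = elastic net
  lambda = c(0.001, 0.01, 0.1, 1)
)
results_enet <- run_ml(dat, "glmnet",
  outcome_colname = "dx",
  hyperparameters = enet_hp,
  seed = 2019
)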
better understanding global maxima, better run data splits using seeds. picked 3 seeds keep runtime vignette, real-world data recommend using many seeds.","code":"new_hp <- list( alpha = 1, lambda = c(0.00001, 0.0001, 0.001, 0.01, 0.015, 0.02, 0.025, 0.03, 0.04, 0.05, 0.06, 0.1) ) new_hp #> $alpha #> [1] 1 #> #> $lambda #> [1] 0.00001 0.00010 0.00100 0.01000 0.01500 0.02000 0.02500 0.03000 0.04000 #> [10] 0.05000 0.06000 0.10000 results <- run_ml(dat, \"glmnet\", outcome_colname = \"dx\", cv_times = 100, hyperparameters = new_hp, seed = 2019 ) #> Using 'dx' as the outcome column. #> Training the model... #> Training complete. results$trained_model #> glmnet #> #> 161 samples #> 10 predictor #> 2 classes: 'cancer', 'normal' #> #> No pre-processing #> Resampling: Cross-Validated (5 fold, repeated 100 times) #> Summary of sample sizes: 128, 129, 129, 129, 129, 130, ... #> Resampling results across tuning parameters: #> #> lambda logLoss AUC prAUC Accuracy Kappa F1 #> 0.00001 0.7215038 0.6112253 0.5720005 0.5842184 0.1684871 0.5726974 #> 0.00010 0.7215038 0.6112253 0.5720005 0.5842184 0.1684871 0.5726974 #> 0.00100 0.7209099 0.6112771 0.5719601 0.5845329 0.1691285 0.5730414 #> 0.01000 0.6984432 0.6156112 0.5758977 0.5830960 0.1665062 0.5759265 #> 0.01500 0.6913332 0.6169396 0.5770496 0.5839720 0.1683912 0.5786347 #> 0.02000 0.6870103 0.6177313 0.5779563 0.5833645 0.1673234 0.5796891 #> 0.02500 0.6846387 0.6169757 0.5769305 0.5831907 0.1669901 0.5792840 #> 0.03000 0.6834369 0.6154763 0.5754118 0.5821394 0.1649081 0.5786336 #> 0.04000 0.6833322 0.6124776 0.5724802 0.5786224 0.1578750 0.5735757 #> 0.05000 0.6850454 0.6069059 0.5668928 0.5732197 0.1468699 0.5624480 #> 0.06000 0.6880861 0.5974311 0.5596714 0.5620224 0.1240112 0.5375824 #> 0.10000 0.6944846 0.5123565 0.3034983 0.5120114 0.0110144 0.3852423 #> Sensitivity Specificity Pos_Pred_Value Neg_Pred_Value Precision #> 0.5798500 0.5888162 0.5780748 0.5971698 0.5780748 #> 0.5798500 0.5888162 0.5780748 0.5971698 0.5780748 #> 0.5801167 0.5891912 0.5784544 0.5974307 0.5784544 #> 0.5883667 0.5783456 0.5755460 0.5977390 0.5755460 #> 0.5929750 0.5756471 0.5763123 0.5987220 0.5763123 #> 0.5967167 0.5708824 0.5748385 0.5990649 0.5748385 #> 0.5970250 0.5702721 0.5743474 0.5997928 0.5743474 #> 0.5964500 0.5687721 0.5734044 0.5982451 0.5734044 #> 0.5904500 0.5677353 0.5699817 0.5943308 0.5699817 #> 0.5734833 0.5736176 0.5668523 0.5864448 0.5668523 #> 0.5360333 0.5881250 0.5595918 0.5722851 0.5595918 #> 0.1145917 0.8963456 0.5255752 0.5132665 0.5255752 #> Recall Detection_Rate Balanced_Accuracy #> 0.5798500 0.28441068 0.5843331 #> 0.5798500 0.28441068 0.5843331 #> 0.5801167 0.28453770 0.5846539 #> 0.5883667 0.28860521 0.5833561 #> 0.5929750 0.29084305 0.5843110 #> 0.5967167 0.29264681 0.5837995 #> 0.5970250 0.29278708 0.5836485 #> 0.5964500 0.29248583 0.5826110 #> 0.5904500 0.28951992 0.5790926 #> 0.5734833 0.28119862 0.5735505 #> 0.5360333 0.26270204 0.5620792 #> 0.1145917 0.05585777 0.5054686 #> #> Tuning parameter 'alpha' was held constant at a value of 1 #> AUC was used to select the optimal model using the largest value. #> The final values used for the model were alpha = 1 and lambda = 0.02. results <- lapply(seq(100, 102), function(seed) { run_ml(dat, \"glmnet\", seed = seed, hyperparameters = new_hp) }) #> Using 'dx' as the outcome column. #> Training the model... #> Training complete. #> Using 'dx' as the outcome column. #> Training the model... #> Training complete. #> Using 'dx' as the outcome column. #> Training the model... 
#> Training complete. models <- lapply(results, function(x) x$trained_model) hp_metrics <- combine_hp_performance(models) plot_hp_performance(hp_metrics$dat, lambda, AUC)"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/tuning.html","id":"hyperparameter-options","dir":"Articles","previous_headings":"","what":"Hyperparameter options","title":"Hyperparameter tuning","text":"can see default hyperparameters used dataset get_hyperparams_list(). examples built-datasets provide: hyperparameters tuned modeling methods. output similar, won’t go details.","code":"get_hyperparams_list(otu_mini_bin, \"glmnet\") #> $lambda #> [1] 1e-04 1e-03 1e-02 1e-01 1e+00 1e+01 #> #> $alpha #> [1] 0 get_hyperparams_list(otu_mini_bin, \"rf\") #> $mtry #> [1] 2 3 6 get_hyperparams_list(otu_small, \"rf\") #> $mtry #> [1] 4 8 16"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/tuning.html","id":"regression","dir":"Articles","previous_headings":"Hyperparameter options","what":"Regression","title":"Hyperparameter tuning","text":"mentioned , glmnet uses alpha parameter lambda hyperparameter. alpha 0 L2 regularization (ridge). alpha 1 L1 regularization (lasso). alpha elastic net. can also tune alpha like hyperparameter. Please refer original glmnet documentation information: https://web.stanford.edu/~hastie/glmnet/glmnet_alpha.html default hyperparameters chosen run_ml() fixed glmnet.","code":"#> $lambda #> [1] 1e-04 1e-03 1e-02 1e-01 1e+00 1e+01 #> #> $alpha #> [1] 0"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/tuning.html","id":"random-forest","dir":"Articles","previous_headings":"Hyperparameter options","what":"Random forest","title":"Hyperparameter tuning","text":"run rf parRF, using randomForest package implementation. tuning mtry hyperparameter. number features randomly collected sampled tree node. number needs less number features dataset. Please refer original documentation information: https://cran.r-project.org/web/packages/randomForest/randomForest.pdf default, take square root number features dataset provide range [sqrt_features / 2, sqrt_features, sqrt_features * 2]. example number features 1000: Similar glmnet method, can provide mtry range.","code":"#> $mtry #> [1] 16 32 64"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/tuning.html","id":"decision-tree","dir":"Articles","previous_headings":"Hyperparameter options","what":"Decision tree","title":"Hyperparameter tuning","text":"run rpart2, running rpart package implementation decision tree. tuning maxdepth hyperparameter. maximum depth node final tree. Please refer original documentation information maxdepth: https://cran.r-project.org/web/packages/rpart/rpart.pdf default, provide range less number features dataset. example 1000 features: 10 features:","code":"#> $maxdepth #> [1] 1 2 4 8 16 30 #> $maxdepth #> [1] 1 2 4 8"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/tuning.html","id":"svm-with-radial-basis-kernel","dir":"Articles","previous_headings":"Hyperparameter options","what":"SVM with radial basis kernel","title":"Hyperparameter tuning","text":"run svmRadial method, tuning C sigma hyperparameters. sigma defines far influence single training example reaches C behaves regularization parameter. 
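For example, to search a custom grid for both hyperparameters (the values here are illustrative, not the defaults):

svm_hp <- list(
  C = c(0.1, 1, 10),
  sigma = c(1e-4, 1e-3, 1e-2)
)
results_svm <- run_ml(otu_mini_bin, "svmRadial",
  outcome_colname = "dx",
  hyperparameters = svm_hp,
  seed = 2019
)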
Please refer great sklearn resource information hyperparameters: https://scikit-learn.org/stable/auto_examples/svm/plot_rbf_parameters.html default, provide 2 separate range values two hyperparameters.","code":"#> $C #> [1] 1e-03 1e-02 1e-01 1e+00 1e+01 1e+02 #> #> $sigma #> [1] 1e-06 1e-05 1e-04 1e-03 1e-02 1e-01"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/tuning.html","id":"xgboost","dir":"Articles","previous_headings":"Hyperparameter options","what":"XGBoost","title":"Hyperparameter tuning","text":"run xgbTree method, tuning nrounds, gamma, eta max_depth, colsample_bytree, min_child_weight subsample hyperparameters. can read hyperparameters : https://xgboost.readthedocs.io/en/latest/parameter.html default, set nrounds, gamma, colsample_bytree min_child_weight fixed values provide range values eta, max_depth subsample. can changed optimized user supplying custom named list hyperparameters run_ml().","code":"#> $nrounds #> [1] 100 #> #> $gamma #> [1] 0 #> #> $eta #> [1] 0.001 0.010 0.100 1.000 #> #> $max_depth #> [1] 1 2 4 8 16 30 #> #> $colsample_bytree #> [1] 0.8 #> #> $min_child_weight #> [1] 1 #> #> $subsample #> [1] 0.4 0.5 0.6 0.7"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/tuning.html","id":"other-ml-methods","dir":"Articles","previous_headings":"","what":"Other ML methods","title":"Hyperparameter tuning","text":"ML methods tested set default hyperparameters , theory may able use methods supported caret run_ml(). Take look available models caret (see list tag). need give run_ml() custom hyperparameters just like examples :","code":"run_ml(otu_mini_bin, \"regLogistic\", hyperparameters = list( cost = 10^seq(-4, 1, 1), epsilon = c(0.01), loss = c(\"L2_primal\") ) )"},{"path":"http://www.schlosslab.org/mikropml/dev/authors.html","id":null,"dir":"","previous_headings":"","what":"Authors","title":"Authors and Citation","text":"Begüm Topçuoğlu. Author. Zena Lapp. Author. Kelly Sovacool. Author, maintainer. Evan Snitkin. Author. Jenna Wiens. Author. Patrick Schloss. Author. Nick Lesniak. Contributor. Courtney Armour. Contributor. Sarah Lucas. Contributor.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/authors.html","id":"citation","dir":"","previous_headings":"","what":"Citation","title":"Authors and Citation","text":"Topçuoğlu et al., (2021). mikropml: User-Friendly R Package Supervised Machine Learning Pipelines. Journal Open Source Software, 6(61), 3073, https://doi.org/10.21105/joss.03073","code":"@Article{, title = {{mikropml}: User-Friendly R Package for Supervised Machine Learning Pipelines}, author = {Begüm D. Topçuoğlu and Zena Lapp and Kelly L. Sovacool and Evan Snitkin and Jenna Wiens and Patrick D. Schloss}, journal = {Journal of Open Source Software}, year = {2021}, month = {May}, volume = {6}, number = {61}, pages = {3073}, doi = {10.21105/joss.03073}, url = {https://joss.theoj.org/papers/10.21105/joss.03073}, }"},{"path":"http://www.schlosslab.org/mikropml/dev/index.html","id":"mikropml-","dir":"","previous_headings":"","what":"User-Friendly R Package for Supervised Machine Learning Pipelines","title":"User-Friendly R Package for Supervised Machine Learning Pipelines","text":"meek-ROPE em el User-Friendly R Package Supervised Machine Learning Pipelines interface build machine learning models classification regression problems. mikropml implements ML pipeline described Topçuoğlu et al. (2020) reasonable default options data preprocessing, hyperparameter tuning, cross-validation, testing, model evaluation, interpretation steps. 
See website information, documentation, examples.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/index.html","id":"installation","dir":"","previous_headings":"","what":"Installation","title":"User-Friendly R Package for Supervised Machine Learning Pipelines","text":"can install latest release CRAN: development version GitHub: install terminal using conda mamba:","code":"install.packages('mikropml') # install.packages(\"devtools\") devtools::install_github(\"SchlossLab/mikropml\") mamba install -c conda-forge r-mikropml"},{"path":"http://www.schlosslab.org/mikropml/dev/index.html","id":"dependencies","dir":"","previous_headings":"Installation","what":"Dependencies","title":"User-Friendly R Package for Supervised Machine Learning Pipelines","text":"Imports: caret, dplyr, e1071, glmnet, kernlab, MLmetrics, randomForest, rlang, rpart, stats, utils, xgboost Suggests: assertthat, doFuture, forcats, foreach, future, future.apply, furrr, ggplot2, knitr, progress, progressr, purrr, rmarkdown, rsample, testthat, tidyr","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/index.html","id":"usage","dir":"","previous_headings":"","what":"Usage","title":"User-Friendly R Package for Supervised Machine Learning Pipelines","text":"Check introductory vignette quick start tutorial. -depth discussion, read vignettes /take look reference documentation. can watch Riffomonas Project series video tutorials covering mikropml skills related machine learning. also provide Snakemake workflow running mikropml locally HPC. highly recommend running mikropml Snakemake another workflow management system reproducibility scalability ML analyses.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/index.html","id":"help--contributing","dir":"","previous_headings":"","what":"Help & Contributing","title":"User-Friendly R Package for Supervised Machine Learning Pipelines","text":"come across bug, open issue include minimal reproducible example. questions, create new post Discussions. ’d like contribute, see guidelines .","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/index.html","id":"code-of-conduct","dir":"","previous_headings":"","what":"Code of Conduct","title":"User-Friendly R Package for Supervised Machine Learning Pipelines","text":"Please note mikropml project released Contributor Code Conduct. contributing project, agree abide terms.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/index.html","id":"license","dir":"","previous_headings":"","what":"License","title":"User-Friendly R Package for Supervised Machine Learning Pipelines","text":"mikropml package licensed MIT license. Text images included repository, including mikropml logo, licensed CC 4.0 license.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/index.html","id":"citation","dir":"","previous_headings":"","what":"Citation","title":"User-Friendly R Package for Supervised Machine Learning Pipelines","text":"cite mikropml publications, use: Topçuoğlu BD, Lapp Z, Sovacool KL, Snitkin E, Wiens J, Schloss PD (2021). “mikropml: User-Friendly R Package Supervised Machine Learning Pipelines.” Journal Open Source Software, 6(61), 3073. doi:10.21105/joss.03073, https://joss.theoj.org/papers/10.21105/joss.03073. BibTeX entry LaTeX users :","code":"@Article{, title = {{mikropml}: User-Friendly R Package for Supervised Machine Learning Pipelines}, author = {Begüm D. Topçuoğlu and Zena Lapp and Kelly L. Sovacool and Evan Snitkin and Jenna Wiens and Patrick D. 
{"path":"http://www.schlosslab.org/mikropml/dev/index.html","id":"why-the-name","dir":"","previous_headings":"","what":"Why the name?","title":"User-Friendly R Package for Supervised Machine Learning Pipelines","text":"The word “mikrop” (pronounced “meek-ROPE”) is Turkish for “microbe”. This package was originally implemented as a machine learning pipeline for microbiome-based classification problems (see Topçuoğlu et al. 2020). We realized that these methods are applicable in many other fields too, but we stuck with the name because we like it!","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/pull_request_template.html","id":"issues","dir":"","previous_headings":"","what":"Issues","title":"NA","text":"Resolves # .","code":""},{"path":[]},
{"path":"http://www.schlosslab.org/mikropml/dev/pull_request_template.html","id":"checklist","dir":"","previous_headings":"","what":"Checklist","title":"NA","text":"(Strikethrough any points that are not applicable.) Write unit tests for any new functionality or bug fixes. Update roxygen comments and vignettes if needed. Update NEWS.md if this includes any user-facing changes. The check workflow must succeed on your most recent commit; this is always required before a PR can be merged.","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/bootstrap_performance.html","id":null,"dir":"Reference","previous_headings":"","what":"Calculate a bootstrap confidence interval for the performance on a single train/test split — bootstrap_performance","title":"Calculate a bootstrap confidence interval for the performance on a single train/test split — bootstrap_performance","text":"Uses rsample::bootstraps(), rsample::int_pctl(), and furrr::future_map().","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/bootstrap_performance.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Calculate a bootstrap confidence interval for the performance on a single train/test split — bootstrap_performance","text":"","code":"bootstrap_performance( ml_result, outcome_colname, bootstrap_times = 10000, alpha = 0.05 )"},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/bootstrap_performance.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Calculate a bootstrap confidence interval for the performance on a single train/test split — bootstrap_performance","text":"ml_result The result returned by a single run_ml() call. outcome_colname Column name as a string of the outcome variable (default NULL; the first column is chosen automatically). bootstrap_times The number of bootstraps to create (default: 10000). alpha The alpha level for the confidence interval (default 0.05 to create a 95% confidence interval).","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/bootstrap_performance.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Calculate a bootstrap confidence interval for the performance on a single train/test split — bootstrap_performance","text":"A data frame with the estimate (.estimate), lower bound (.lower), and upper bound (.upper) of the performance metric (term).","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/bootstrap_performance.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Calculate a bootstrap confidence interval for the performance on a single train/test split — bootstrap_performance","text":"Kelly Sovacool, sovacool@umich.edu","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/bootstrap_performance.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Calculate a bootstrap confidence interval for the performance on a single train/test split — bootstrap_performance","text":"","code":"bootstrap_performance(otu_mini_bin_results_glmnet, \"dx\", bootstrap_times = 10, alpha = 0.10 ) #> Warning: Recommend at least 1000 non-missing bootstrap resamples for terms: `AUC`, `Accuracy`, `Balanced_Accuracy`, `Detection_Rate`, `F1`, `Kappa`, `Neg_Pred_Value`, `Pos_Pred_Value`, `Precision`, `Recall`, `Sensitivity`, `Specificity`, `cv_metric_AUC`, `logLoss`, `prAUC`. #> # A tibble: 15 × 6 #> term .lower .estimate .upper .alpha .method #> #> 1 AUC 0.496 0.659 0.838 0.1 percentile #> 2 Accuracy 0.510 0.613 0.718 0.1 percentile #> 3 Balanced_Accuracy 0.490 0.611 0.728 0.1 percentile #> 4 Detection_Rate 0.254 0.351 0.464 0.1 percentile #> 5 F1 0.539 0.641 0.752 0.1 percentile #> 6 Kappa -0.0121 0.213 0.421 0.1 percentile #> 7 Neg_Pred_Value 0.301 0.561 0.810 0.1 percentile #> 8 Pos_Pred_Value 0.539 0.654 0.785 0.1 percentile #> 9 Precision 0.539 0.654 0.785 0.1 percentile #> 10 Recall 0.521 0.642 0.807 0.1 percentile #> 11 Sensitivity 0.521 0.642 0.807 0.1 percentile #> 12 Specificity 0.44 0.579 0.661 0.1 percentile #> 13 cv_metric_AUC 0.622 0.622 0.622 0.1 percentile #> 14 logLoss 0.656 0.682 0.702 0.1 percentile #> 15 prAUC 0.468 0.598 0.743 0.1 percentile if (FALSE) { outcome_colname <- \"dx\" run_ml(otu_mini_bin, \"rf\", outcome_colname = \"dx\") %>% bootstrap_performance(outcome_colname, bootstrap_times = 10000, alpha = 0.05 ) }"},
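Because bootstrap_performance() maps over the resamples with furrr::future_map(), the bootstraps can presumably be parallelized by registering a future plan first. A minimal sketch, assuming the otu_mini_bin_results_glmnet object from the examples above:

library(mikropml)
library(future)
plan(multisession, workers = 2) # furrr picks up the registered plan
ci <- bootstrap_performance(otu_mini_bin_results_glmnet, "dx",
  bootstrap_times = 1000, alpha = 0.05
)
plan(sequential) # reset to sequential processing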
{"path":"http://www.schlosslab.org/mikropml/dev/reference/bounds.html","id":null,"dir":"Reference","previous_headings":"","what":"Get the lower and upper bounds for an empirical confidence interval — lower_bound","title":"Get the lower and upper bounds for an empirical confidence interval — lower_bound","text":"Get the lower and upper bounds for an empirical confidence interval.","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/bounds.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Get the lower and upper bounds for an empirical confidence interval — lower_bound","text":"","code":"lower_bound(x, alpha) upper_bound(x, alpha)"},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/bounds.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Get the lower and upper bounds for an empirical confidence interval — lower_bound","text":"x A vector of test statistics, such as from permutation tests or bootstraps. alpha The alpha level for the confidence interval (default: 0.05 to obtain a 95% confidence interval).","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/bounds.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Get the lower and upper bounds for an empirical confidence interval — lower_bound","text":"The value of the lower or upper bound of the confidence interval.","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/bounds.html","id":"functions","dir":"Reference","previous_headings":"","what":"Functions","title":"Get the lower and upper bounds for an empirical confidence interval — lower_bound","text":"lower_bound(): Get the lower bound of an empirical confidence interval. upper_bound(): Get the upper bound of an empirical confidence interval.","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/bounds.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Get the lower and upper bounds for an empirical confidence interval — lower_bound","text":"","code":"if (FALSE) { x <- 1:10000 lower_bound(x, 0.05) upper_bound(x, 0.05) }"},
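For intuition, an empirical confidence interval like this one comes from the quantiles of the statistic's distribution. The sketch below illustrates the concept with stats::quantile(); it is an illustration of the idea, not necessarily the package's exact implementation:

# empirical 95% CI: the alpha/2 and 1 - alpha/2 quantiles
x <- rnorm(10000) # stand-in for bootstrap or permutation statistics
alpha <- 0.05
quantile(x, probs = c(alpha / 2, 1 - alpha / 2))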
{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_baseline_precision.html","id":null,"dir":"Reference","previous_headings":"","what":"Calculate the fraction of positives, i.e. baseline precision for a PRC curve — calc_baseline_precision","title":"Calculate the fraction of positives, i.e. baseline precision for a PRC curve — calc_baseline_precision","text":"Calculate the fraction of positives, i.e. the baseline precision for a PRC curve.","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_baseline_precision.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Calculate the fraction of positives, i.e. baseline precision for a PRC curve — calc_baseline_precision","text":"","code":"calc_baseline_precision(dataset, outcome_colname = NULL, pos_outcome = NULL)"},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_baseline_precision.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Calculate the fraction of positives, i.e. baseline precision for a PRC curve — calc_baseline_precision","text":"dataset Data frame with an outcome variable and other columns as features. outcome_colname Column name as a string of the outcome variable (default NULL; the first column is chosen automatically). pos_outcome The positive outcome from outcome_colname, e.g. \"cancer\" for the otu_mini_bin dataset.","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_baseline_precision.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Calculate the fraction of positives, i.e. baseline precision for a PRC curve — calc_baseline_precision","text":"The baseline precision based on the fraction of positives.","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_baseline_precision.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Calculate the fraction of positives, i.e. baseline precision for a PRC curve — calc_baseline_precision","text":"Kelly Sovacool, sovacool@umich.edu","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_baseline_precision.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Calculate the fraction of positives, i.e. baseline precision for a PRC curve — calc_baseline_precision","text":"","code":"# calculate the baseline precision data.frame(y = c(\"a\", \"b\", \"a\", \"b\")) %>% calc_baseline_precision(\"y\", \"a\") #> Using 'y' as the outcome column. #> [1] 0.5 calc_baseline_precision(otu_mini_bin, outcome_colname = \"dx\", pos_outcome = \"cancer\" ) #> Using 'dx' as the outcome column. #> [1] 0.49 # if you're not sure which outcome was used as the 'positive' outcome during # model training, you can access it from the trained model and pass it along: calc_baseline_precision(otu_mini_bin, outcome_colname = \"dx\", pos_outcome = otu_mini_bin_results_glmnet$trained_model$levels[1] ) #> Using 'dx' as the outcome column. #> [1] 0.49"},
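Since the baseline precision is just the fraction of samples with the positive outcome, computing it by hand should match the example output above (0.49 for otu_mini_bin):

library(mikropml)
# fraction of positives, same value calc_baseline_precision() returns
mean(otu_mini_bin$dx == "cancer")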
{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_mean_perf.html","id":null,"dir":"Reference","previous_headings":"","what":"Generic function to calculate mean performance curves for multiple models — calc_mean_perf","title":"Generic function to calculate mean performance curves for multiple models — calc_mean_perf","text":"Generic function to calculate mean performance curves for multiple models.","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_mean_perf.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Generic function to calculate mean performance curves for multiple models — calc_mean_perf","text":"","code":"calc_mean_perf(sensspec_dat, group_var = specificity, sum_var = sensitivity)"},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_mean_perf.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generic function to calculate mean performance curves for multiple models — calc_mean_perf","text":"sensspec_dat Data frame created by concatenating results of calc_model_sensspec() for multiple models. group_var The variable to group by (e.g. specificity or recall). sum_var The variable to summarize (e.g. sensitivity or precision).","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_mean_perf.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Generic function to calculate mean performance curves for multiple models — calc_mean_perf","text":"A data frame with the mean & standard deviation of sum_var summarized over group_var.","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_mean_perf.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Generic function to calculate mean performance curves for multiple models — calc_mean_perf","text":"Courtney Armour and Kelly Sovacool","code":""},
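A minimal sketch of the intended workflow, assuming two hypothetical run_ml() results (results_glmnet, results_rf) and calc_model_sensspec()'s documented interface of trained model, test data, and outcome column:

library(mikropml)
library(dplyr)
# per-model sensitivity/specificity curves
sensspec_glmnet <- calc_model_sensspec(results_glmnet$trained_model, results_glmnet$test_data, "dx")
sensspec_rf <- calc_model_sensspec(results_rf$trained_model, results_rf$test_data, "dx")
# mean ROC curve across the two models
mean_roc <- bind_rows(sensspec_glmnet, sensspec_rf) %>%
  calc_mean_perf(group_var = specificity, sum_var = sensitivity)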
{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_perf_bootstrap_split.html","id":null,"dir":"Reference","previous_headings":"","what":"Calculate performance for a single split from rsample::bootstraps() — calc_perf_bootstrap_split","title":"Calculate performance for a single split from rsample::bootstraps() — calc_perf_bootstrap_split","text":"Used by bootstrap_performance().","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_perf_bootstrap_split.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Calculate performance for a single split from rsample::bootstraps() — calc_perf_bootstrap_split","text":"","code":"calc_perf_bootstrap_split( test_data_split, trained_model, outcome_colname, perf_metric_function, perf_metric_name, class_probs, method, seed )"},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_perf_bootstrap_split.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Calculate performance for a single split from rsample::bootstraps() — calc_perf_bootstrap_split","text":"test_data_split A single bootstrap of the test set from rsample::bootstraps(). trained_model Trained model from caret::train(). outcome_colname Column name as a string of the outcome variable (default NULL; the first column is chosen automatically). perf_metric_function Function to calculate the performance metric to be used for cross-validation and test performance. Some functions are provided by caret (see caret::defaultSummary()). Defaults: binary classification = twoClassSummary, multi-class classification = multiClassSummary, regression = defaultSummary. perf_metric_name The column name from the output of the function provided to perf_metric_function that is to be used as the performance metric. Defaults: binary classification = \"ROC\", multi-class classification = \"logLoss\", regression = \"RMSE\". class_probs Whether to use class probabilities (TRUE for categorical outcomes, FALSE for numeric outcomes). method ML method. Options: c(\"glmnet\", \"rf\", \"rpart2\", \"svmRadial\", \"xgbTree\"). glmnet: linear, logistic, or multiclass regression. rf: random forest. rpart2: decision tree. svmRadial: support vector machine. xgbTree: xgboost. seed Random seed (default: NA). Your results will only be reproducible if you set a seed.","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_perf_bootstrap_split.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Calculate performance for a single split from rsample::bootstraps() — calc_perf_bootstrap_split","text":"A long data frame of performance metrics for rsample::int_pctl().","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_perf_bootstrap_split.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Calculate performance for a single split from rsample::bootstraps() — calc_perf_bootstrap_split","text":"Kelly Sovacool, sovacool@umich.edu","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_perf_metrics.html","id":null,"dir":"Reference","previous_headings":"","what":"Get performance metrics for test data — calc_perf_metrics","title":"Get performance metrics for test data — calc_perf_metrics","text":"Get performance metrics for test data.","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_perf_metrics.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Get performance metrics for test data — calc_perf_metrics","text":"","code":"calc_perf_metrics( test_data, trained_model, outcome_colname, perf_metric_function, class_probs )"},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_perf_metrics.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Get performance metrics for test data — calc_perf_metrics","text":"test_data Held-out test data: a dataframe of the outcome and features. trained_model Trained model from caret::train(). outcome_colname Column name as a string of the outcome variable (default NULL; the first column is chosen automatically). perf_metric_function Function to calculate the performance metric to be used for cross-validation and test performance. Some functions are provided by caret (see caret::defaultSummary()). Defaults: binary classification = twoClassSummary, multi-class classification = multiClassSummary, regression = defaultSummary. class_probs Whether to use class probabilities (TRUE for categorical outcomes, FALSE for numeric outcomes).","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_perf_metrics.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Get performance metrics for test data — calc_perf_metrics","text":"Dataframe of performance metrics.","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_perf_metrics.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Get performance metrics for test data — calc_perf_metrics","text":"Zena Lapp, zenalapp@umich.edu","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_perf_metrics.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Get performance metrics for test data — calc_perf_metrics","text":"","code":"if (FALSE) { results <- run_ml(otu_small, \"glmnet\", kfold = 2, cv_times = 2) calc_perf_metrics(results$test_data, results$trained_model, \"dx\", multiClassSummary, class_probs = TRUE ) }"},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/combine_hp_performance.html","id":null,"dir":"Reference","previous_headings":"","what":"Combine hyperparameter performance metrics for multiple train/test splits — combine_hp_performance","title":"Combine hyperparameter performance metrics for multiple train/test splits — combine_hp_performance","text":"Combine hyperparameter performance metrics for multiple train/test splits generated by, for instance, looping in R or using a snakemake workflow on a high-performance computer.","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/combine_hp_performance.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Combine hyperparameter performance metrics for multiple train/test splits — combine_hp_performance","text":"","code":"combine_hp_performance(trained_model_lst)"},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/combine_hp_performance.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Combine hyperparameter performance metrics for multiple train/test splits — combine_hp_performance","text":"trained_model_lst List of trained models.","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/combine_hp_performance.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Combine hyperparameter performance metrics for multiple train/test splits — combine_hp_performance","text":"Named list: dat: Dataframe of the performance metric for each group of hyperparameters. params: Hyperparameters tuned. Metric: Performance metric used.","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/combine_hp_performance.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Combine hyperparameter performance metrics for multiple train/test splits — combine_hp_performance","text":"Zena Lapp, zenalapp@umich.edu","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/combine_hp_performance.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Combine hyperparameter performance metrics for multiple train/test splits — combine_hp_performance","text":"","code":"if (FALSE) { results <- lapply(seq(100, 102), function(seed) { run_ml(otu_small, \"glmnet\", seed = seed, cv_times = 2, kfold = 2) }) models <- lapply(results, function(x) x$trained_model) combine_hp_performance(models) }"},
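Following the example above, the combined output is presumably most useful for plotting how performance varies across the tuned hyperparameter; the plot_hp_performance() call below assumes glmnet's lambda parameter and the AUC metric:

library(mikropml)
results <- lapply(seq(100, 102), function(seed) {
  run_ml(otu_small, "glmnet", seed = seed, cv_times = 2, kfold = 2)
})
hp_metrics <- combine_hp_performance(lapply(results, function(x) x$trained_model))
# visualize performance across the lambda grid (assumed param & metric names)
plot_hp_performance(hp_metrics$dat, lambda, AUC)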
{"path":"http://www.schlosslab.org/mikropml/dev/reference/compare_models.html","id":null,"dir":"Reference","previous_headings":"","what":"Perform permutation tests to compare the performance metric\nacross all pairs of a group variable. — compare_models","title":"Perform permutation tests to compare the performance metric\nacross all pairs of a group variable. — compare_models","text":"A wrapper for permute_p_value().","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/compare_models.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Perform permutation tests to compare the performance metric\nacross all pairs of a group variable. — compare_models","text":"","code":"compare_models(merged_data, metric, group_name, nperm = 10000)"},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/compare_models.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Perform permutation tests to compare the performance metric\nacross all pairs of a group variable. — compare_models","text":"merged_data The concatenated performance data from run_ml(). metric The metric to compare; must be numeric. group_name The column with the group variables to compare. nperm Number of permutations (default: 10000).","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/compare_models.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Perform permutation tests to compare the performance metric\nacross all pairs of a group variable. — compare_models","text":"A table of p-values for all pairs of the group variable.","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/compare_models.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Perform permutation tests to compare the performance metric\nacross all pairs of a group variable. — compare_models","text":"Courtney R Armour, armourc@umich.edu","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/compare_models.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Perform permutation tests to compare the performance metric\nacross all pairs of a group variable. — compare_models","text":"","code":"df <- dplyr::tibble( model = c(\"rf\", \"rf\", \"glmnet\", \"glmnet\", \"svmRadial\", \"svmRadial\"), AUC = c(.2, 0.3, 0.8, 0.9, 0.85, 0.95) ) set.seed(123) compare_models(df, \"AUC\", \"model\", nperm = 10) #> group1 group2 p_value #> 1 glmnet svmRadial 0.7272727 #> 2 rf glmnet 0.2727273 #> 3 rf svmRadial 0.5454545"},
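Because compare_models() wraps permute_p_value(), a single pair of groups can presumably be tested directly; this sketch reuses the df from the example above and assumes permute_p_value()'s documented arguments:

set.seed(123)
# p-value for one pair instead of all pairwise comparisons
permute_p_value(df,
  metric = "AUC", group_name = "model",
  group_1 = "rf", group_2 = "glmnet", nperm = 10
)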
{"path":"http://www.schlosslab.org/mikropml/dev/reference/define_cv.html","id":null,"dir":"Reference","previous_headings":"","what":"Define cross-validation scheme and training parameters — define_cv","title":"Define cross-validation scheme and training parameters — define_cv","text":"Define the cross-validation scheme and training parameters.","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/define_cv.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Define cross-validation scheme and training parameters — define_cv","text":"","code":"define_cv( train_data, outcome_colname, hyperparams_list, perf_metric_function, class_probs, kfold = 5, cv_times = 100, groups = NULL, group_partitions = NULL )"},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/define_cv.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Define cross-validation scheme and training parameters — define_cv","text":"train_data Dataframe for training the model. outcome_colname Column name as a string of the outcome variable (default NULL; the first column is chosen automatically). hyperparams_list Named list of lists of hyperparameters. perf_metric_function Function to calculate the performance metric to be used for cross-validation and test performance. Some functions are provided by caret (see caret::defaultSummary()). Defaults: binary classification = twoClassSummary, multi-class classification = multiClassSummary, regression = defaultSummary. class_probs Whether to use class probabilities (TRUE for categorical outcomes, FALSE for numeric outcomes). kfold Fold number for k-fold cross-validation (default: 5). cv_times Number of cross-validation partitions to create (default: 100). groups Vector of groups to keep together when splitting the data into train and test sets. If the number of groups in the training set is larger than kfold, the groups will also be kept together for cross-validation. Length matches the number of rows in the dataset (default: NULL). group_partitions Specify how to assign groups to the training and testing partitions (default: NULL). If groups specifies that some samples belong to group \"A\" and some belong to group \"B\", then setting group_partitions = list(train = c(\"A\", \"B\"), test = c(\"B\")) will result in all samples from group \"A\" being placed in the training set, some samples from \"B\" also in the training set, and the remaining samples from \"B\" in the testing set (see the sketch below). The partition sizes will be as close to training_frac as possible. If the number of groups in the training set is larger than kfold, the groups will also be kept together for cross-validation.","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/define_cv.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Define cross-validation scheme and training parameters — define_cv","text":"Caret object for trainControl that controls cross-validation.","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/define_cv.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Define cross-validation scheme and training parameters — define_cv","text":"Begüm Topçuoğlu, topcuoglu.begum@gmail.com, Kelly Sovacool, sovacool@umich.edu","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/define_cv.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Define cross-validation scheme and training parameters — define_cv","text":"","code":"training_inds <- get_partition_indices(otu_small %>% dplyr::pull(\"dx\"), training_frac = 0.8, groups = NULL ) train_data <- otu_small[training_inds, ] test_data <- otu_small[-training_inds, ] cv <- define_cv(train_data, outcome_colname = \"dx\", hyperparams_list = get_hyperparams_list(otu_small, \"glmnet\"), perf_metric_function = caret::multiClassSummary, class_probs = TRUE, kfold = 5 )"},
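To make groups and group_partitions concrete, here is a sketch of passing them through run_ml(), which forwards them to the partitioning and cross-validation setup; the "A"/"B" labels are illustrative:

library(mikropml)
set.seed(2019)
groups <- sample(c("A", "B"), nrow(otu_mini_bin), replace = TRUE)
results <- run_ml(otu_mini_bin, "glmnet",
  groups = groups,
  # all of "A" trains; "B" is split between training and testing
  group_partitions = list(train = c("A", "B"), test = c("B")),
  training_frac = 0.8, seed = 2019
)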
{"path":"http://www.schlosslab.org/mikropml/dev/reference/find_permuted_perf_metric.html","id":null,"dir":"Reference","previous_headings":"","what":"Get permuted performance metric difference for a single feature\n(or group of features) — find_permuted_perf_metric","title":"Get permuted performance metric difference for a single feature\n(or group of features) — find_permuted_perf_metric","text":"Requires the future.apply package.","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/find_permuted_perf_metric.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Get permuted performance metric difference for a single feature\n(or group of features) — find_permuted_perf_metric","text":"","code":"find_permuted_perf_metric( test_data, trained_model, outcome_colname, perf_metric_function, perf_metric_name, class_probs, feat, test_perf_value, nperms = 100, alpha = 0.05, progbar = NULL )"},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/find_permuted_perf_metric.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Get permuted performance metric difference for a single feature\n(or group of features) — find_permuted_perf_metric","text":"test_data Held-out test data: a dataframe of the outcome and features. trained_model Trained model from caret::train(). outcome_colname Column name as a string of the outcome variable (default NULL; the first column is chosen automatically). perf_metric_function Function to calculate the performance metric to be used for cross-validation and test performance. Some functions are provided by caret (see caret::defaultSummary()). Defaults: binary classification = twoClassSummary, multi-class classification = multiClassSummary, regression = defaultSummary. perf_metric_name The column name from the output of the function provided to perf_metric_function that is to be used as the performance metric. Defaults: binary classification = \"ROC\", multi-class classification = \"logLoss\", regression = \"RMSE\". class_probs Whether to use class probabilities (TRUE for categorical outcomes, FALSE for numeric outcomes). feat The feature or group of correlated features to permute. test_perf_value The value of the true performance metric on the held-out test data. nperms The number of permutations to perform (default: 100). alpha The alpha level for the confidence interval (default: 0.05 to obtain a 95% confidence interval). progbar Optional progress bar (default: NULL).","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/find_permuted_perf_metric.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Get permuted performance metric difference for a single feature\n(or group of features) — find_permuted_perf_metric","text":"A vector of the mean permuted performance and the mean difference between the test and permuted performance (test minus permuted performance).","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/find_permuted_perf_metric.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Get permuted performance metric difference for a single feature\n(or group of features) — find_permuted_perf_metric","text":"Begüm Topçuoğlu, topcuoglu.begum@gmail.com, Zena Lapp, zenalapp@umich.edu, Kelly Sovacool, sovacool@umich.edu","code":""},
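find_permuted_perf_metric() is internal; the user-facing entry point for permutation feature importance is get_feature_importance(), which runs the permutations through future.apply and so can presumably be parallelized with a future plan. A sketch with assumed argument names (check ?get_feature_importance for the exact interface):

library(mikropml)
future::plan(future::multisession, workers = 2)
# assumed interface: trained model, test data, then metric options
feat_imp <- get_feature_importance(
  otu_mini_bin_results_glmnet$trained_model,
  otu_mini_bin_results_glmnet$test_data,
  outcome_colname = "dx",
  perf_metric_function = caret::twoClassSummary,
  perf_metric_name = "ROC",
  class_probs = TRUE, method = "glmnet", nperms = 10
)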
{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_caret_processed_df.html","id":null,"dir":"Reference","previous_headings":"","what":"Get preprocessed dataframe for continuous variables — get_caret_processed_df","title":"Get preprocessed dataframe for continuous variables — get_caret_processed_df","text":"Get preprocessed dataframe for continuous variables.","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_caret_processed_df.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Get preprocessed dataframe for continuous variables — get_caret_processed_df","text":"","code":"get_caret_processed_df(features, method)"},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_caret_processed_df.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Get preprocessed dataframe for continuous variables — get_caret_processed_df","text":"features Dataframe of features for machine learning. method Methods to preprocess the data, as described in caret::preProcess() (default: c(\"center\",\"scale\"); use NULL for no normalization).","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_caret_processed_df.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Get preprocessed dataframe for continuous variables — get_caret_processed_df","text":"Named list: processed: Dataframe of processed features. removed: Names of any features removed during preprocessing.","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_caret_processed_df.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Get preprocessed dataframe for continuous variables — get_caret_processed_df","text":"Zena Lapp, zenalapp@umich.edu","code":""},
{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_caret_processed_df.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Get preprocessed dataframe for continuous variables — get_caret_processed_df","text":"","code":"get_caret_processed_df(mikropml::otu_small[, 2:ncol(otu_small)], c(\"center\", \"scale\")) #> $processed #> Otu00001 Otu00002 Otu00003 Otu00004 Otu00005 #> 1 -0.4198476322 -0.218855527 -0.174296240 -0.59073845 -0.048774220 #> 2 -0.1045750483 1.754032339 -0.718419364 0.03805034 1.537072974 #> 3 -0.7076423302 0.696324396 1.428146361 0.60439092 -0.264559044 #> [output truncated; the remaining 200 rows of centered and scaled values for the other OTU features follow]
-0.368282453 -0.483063838 -0.7656793055 0.66435713 #> 68 -0.352491399 -0.528380398 1.198976630 -0.0003327594 0.05429802 #> 69 -0.102583260 -0.396871372 -0.792195600 2.5352936239 -0.49779167 #> 70 -0.421014598 -0.385435804 1.417187285 2.0228441973 -0.48951032 #> 71 -0.489537798 1.158365814 0.357956396 -0.1800228180 -0.50331257 #> 72 -0.574184103 0.003373492 -0.792195600 2.9346048653 -0.12513113 #> 73 -0.545968668 -0.133853318 0.862568536 1.2042561523 -0.28799759 #> 74 -0.203352671 0.071986898 0.467061724 1.5370155202 0.84102583 #> 75 -0.493568574 -0.351129101 2.640076167 0.0262879901 -0.48674988 #> 76 2.344098033 -1.014392019 0.953489643 -0.6192651836 -0.27143490 #> 77 -0.654799631 -0.494073696 -0.778557434 -0.1999883801 -0.48951032 #> 78 2.194959305 -1.014392019 -0.787649545 3.3339161068 -0.50055212 #> 79 -0.489537798 -0.768527317 0.621627604 0.4854959177 0.23648762 #> 80 -0.731384383 2.416278244 -0.787649545 -0.9387141768 -0.50331257 #> 81 -0.719292054 -0.762809533 -0.437603285 0.6452204143 -0.31836252 #> 82 -0.558060997 0.346440519 -0.792195600 -0.9653349262 -0.50331257 #> 83 -0.574184103 0.986832301 -0.210300519 1.8431541387 -0.01747364 #> 84 0.514125534 -0.842858506 -0.587623111 -0.9520245515 -0.50055212 #> 85 -0.650768855 -0.814269587 -0.469425672 -0.1667124433 -0.50331257 #> 86 0.514125534 -1.014392019 -0.792195600 -0.9387141768 -0.06992216 #> 87 -0.392799163 0.552280735 1.153516077 1.2841184006 -0.43706180 #> 88 -0.441168480 -0.471202561 -0.792195600 0.6052892902 -0.41221777 #> 89 -0.231568106 0.134882519 1.435371507 -0.1334365065 -0.50331257 #> 90 0.280340501 3.136718999 0.989858085 -0.1134709444 -0.50331257 #> 91 -0.674953513 0.026244628 -0.792195600 0.8648415971 -0.47294763 #> 92 -0.320245187 0.043397979 0.639811826 2.3223276284 -0.44534315 #> 93 1.646773711 -0.133853318 -0.792195600 0.5387374166 -0.23002817 #> 94 -0.610461090 -0.842858506 0.357956396 -0.6858170572 1.29926027 #> 95 -0.340399070 -0.516944831 2.621891945 -0.4728510618 -0.47294763 #> 96 -0.767661371 -1.014392019 -0.792195600 -0.7190929940 -0.44534315 #> 97 -0.263814317 2.730756352 -0.792195600 -0.6458859330 1.69400440 #> 98 -0.287998976 -0.196748940 1.176246353 -0.2066435675 -0.16929830 #> 99 5.508257532 1.512868408 -0.769465323 -0.8721623032 -0.43154091 #> 100 -0.751538266 -1.014392019 -0.783103489 -0.0003327594 -0.50055212 #> 101 -0.582245655 0.112011384 -0.764919268 0.2459091729 3.51866083 #> 102 -0.203352671 -0.728502830 -0.755827157 -0.9520245515 -0.48122898 #> 103 2.106282224 -0.196748940 -0.701274494 0.5786685407 -0.50331257 #> 104 -0.421014598 0.134882519 -0.792195600 -0.9453693641 1.54494019 #> 105 -0.263814317 0.300698249 2.976484260 0.1660469246 -0.50331257 #> 106 -0.683015066 -0.202466723 0.903483034 -0.6725066825 0.38279139 #> 107 -0.267845094 -0.202466723 -0.251215017 1.0578420305 -0.18034010 #> 108 0.312586712 -0.276797912 -0.787649545 1.0179109063 -0.44810360 #> 109 0.115078667 -0.522662615 -0.751281102 -0.6325755583 -0.50055212 #> 110 -0.646738078 -0.133853318 -0.651267885 -0.6658514951 -0.07268261 #> 111 -0.570153326 -0.516944831 2.126371915 0.3989784821 -0.01195274 #> 112 0.288402054 -0.322540183 -0.792195600 1.1510146535 -0.40393642 #> 113 -0.412953045 -0.665607209 0.685272379 2.0960512583 -0.41773866 #> 114 -0.662861184 -0.762809533 -0.664906051 0.6252548522 -0.26867445 #> 115 -0.433106927 -0.333975750 1.989990256 1.0844627799 -0.28523714 #> 116 -0.392799163 -0.030933210 -0.646721830 0.4056336694 -0.20794458 #> 117 -0.425045375 -0.591276020 -0.792195600 -0.7656793055 0.21716448 #> 118 
-0.521784009 -0.282515696 0.271581345 -0.1933331927 0.04049578 #> 119 0.151355655 -0.625582722 2.549155060 1.6434985179 -0.50055212 #> 120 -0.231568106 0.603740788 -0.792195600 -0.8588519285 0.26409210 #> 121 -0.703168948 -0.848576290 -0.133017579 -0.3197817525 -0.50055212 #> 122 0.941387835 1.284157057 0.062462800 1.2109113397 -0.27971624 #> 123 -0.594337985 -1.014392019 -0.410326953 -0.7324033687 -0.49227077 #> 124 -0.493568574 1.186954733 0.307949787 2.1958790686 2.14947840 #> 125 0.933326283 0.409336140 -0.573984945 0.8781519718 -0.46466629 #> 126 -0.421014598 0.746685383 1.939983647 0.2392539855 -0.48122898 #> 127 -0.296060529 -0.728502830 -0.092103081 -0.5460581227 -0.47294763 #> 128 -0.723322831 -0.882882992 -0.724004770 -0.9187486147 -0.45914539 #> 129 0.006247703 -0.968649749 -0.323951902 -0.7856448676 -0.36529014 #> 130 -0.404891492 -0.568404885 2.108187694 -0.8388863664 -0.50331257 #> 131 0.058647797 -0.242491210 -0.351228234 0.7982897235 0.86034897 #> 132 -0.445199257 1.524303976 -0.787649545 0.4389096062 -0.13065203 #> 133 0.264217395 0.129164735 -0.605807332 -0.7923000549 -0.20242369 #> 134 -0.199321895 -0.151006669 3.244701524 0.1527365499 -0.50331257 #> 135 0.393202241 4.720545104 -0.783103489 -0.7324033687 -0.32388342 #> 136 0.123140220 -0.002344291 -0.273945294 0.4189440442 -0.36805059 #> 137 -0.038090837 0.792427653 1.785417766 -0.9453693641 -0.50331257 #> 138 3.795177548 -0.145288886 1.271713515 0.5919789155 -0.50331257 #> 139 -0.723322831 -0.934343046 -0.623991553 -0.8322311791 1.30478117 #> 140 0.824495319 -1.008674235 1.008042307 1.8564645134 -0.49503122 #> 141 0.868833860 -0.213902291 -0.442149340 -0.7324033687 -0.50331257 #> 142 -0.735415160 -0.962931965 -0.037550417 -0.8521967411 -0.45362449 #> 143 -0.723322831 -0.922907479 0.671634213 -0.7590241181 -0.30732073 #> 144 -0.598368761 -0.562687101 -0.696728438 0.1527365499 -0.35424835 #> 145 -0.658830408 -1.002956451 -0.783103489 3.7132617861 -0.41497822 #> 146 -0.638676525 -0.837140722 -0.783103489 -0.7457137434 -0.50331257 #> 147 -0.634645749 -1.008674235 -0.787649545 -0.9653349262 -0.50055212 #> 148 -0.715261278 -0.837140722 0.507976221 -0.8189208043 -0.11408933 #> 149 0.921233953 -0.940060830 -0.423965119 -0.8921278653 -0.50331257 #> 150 -0.106614037 -1.014392019 -0.792195600 -0.9653349262 -0.50331257 #> 151 -0.416983822 -0.408306939 -0.223938685 -0.3131265652 -0.42049911 #> 152 3.017237697 0.180624789 -0.546708613 0.4122888568 -0.41773866 #> 153 -0.566122550 -0.922907479 2.344582571 0.1993228614 -0.50331257 #> 154 -0.344429846 -1.014392019 -0.664906051 -0.9586797389 1.93140297 #> 155 1.134865104 -0.614147155 -0.783103489 1.1310490914 -0.45638494 #> 156 1.219511409 -0.419742507 -0.319405847 -0.9586797389 -0.44534315 #> 157 -0.767661371 1.890242137 -0.783103489 -0.9653349262 -0.49779167 #> 158 4.012839476 2.439149379 -0.351228234 0.1727021119 -0.49779167 #> 159 0.514125534 -0.968649749 -0.787649545 -0.8255759917 0.72232655 #> 160 0.485910099 0.929654463 -0.583077055 -0.4994718112 -0.16377741 #> 161 -0.715261278 1.106905760 -0.792195600 0.6984619132 -0.50331257 #> 162 -0.731384383 0.603740788 -0.792195600 1.6368433306 0.95144377 #> 163 -0.594337985 0.780992085 -0.687636328 0.0129776153 -0.48674988 #> 164 -0.545968668 0.060551330 -0.528524391 1.2907735880 -0.49227077 #> 165 -0.477445468 2.216155812 -0.787649545 -0.6791618698 2.69604719 #> 166 -0.646738078 -1.008674235 -0.792195600 -0.9653349262 -0.49503122 #> 167 -0.529845562 -0.431178074 0.017002247 0.9912901569 -0.45914539 #> 168 0.961541718 -1.002956451 
-0.792195600 -0.8987830526 -0.49503122 #> 169 0.308555936 -0.682760560 -0.746735047 -0.8189208043 0.49596977 #> 170 -0.634645749 -1.008674235 -0.419419064 -0.9387141768 -0.02299454 #> 171 -0.469383915 -0.499791479 2.426411566 0.0861846763 -0.38185283 #> 172 0.183601866 -0.871447425 -0.755827157 -0.6991274319 8.63929272 #> 173 -0.191260342 -0.854294073 -0.792195600 -0.9520245515 1.62499319 #> 174 1.155018986 -0.299669047 -0.787649545 0.0395983648 -0.38737373 #> 175 0.227940407 0.981114517 0.021548302 0.7117722879 -0.32112297 #> 176 -0.384737610 0.186342573 -0.774011379 -0.9254038021 -0.50331257 #> 177 -0.541937891 -0.791398452 0.785285596 0.2126332361 -0.50331257 #> 178 1.183234421 0.352158303 -0.701274494 0.5254270419 1.07566395 #> 179 -0.235598882 -0.213902291 -0.792195600 -0.9320589894 -0.50055212 #> 180 -0.751538266 -0.677042776 -0.787649545 0.8714967845 -0.23830952 #> 181 -0.122737142 -0.728502830 -0.628537609 0.0994950510 -0.50055212 #> 182 -0.150952577 -0.048086562 -0.714912660 -0.6791618698 -0.44534315 #> 183 -0.469383915 0.094858033 -0.533070447 0.3257714212 0.23372717 #> 184 -0.654799631 -0.877165208 -0.619445498 -0.2399195042 -0.40669687 #> 185 -0.271875870 0.060551330 -0.787649545 -0.2665402537 -0.50331257 #> 186 -0.715261278 -0.962931965 0.648903936 2.6218110595 -0.48951032 #> 187 1.803973992 0.918218896 -0.655813940 -0.9653349262 3.58767204 #> 188 -0.545968668 0.415053924 -0.792195600 -0.8721623032 2.44484638 #> 189 -0.038090837 -0.940060830 -0.660359996 -0.8455415538 -0.50331257 #> 190 -0.638676525 -0.333975750 0.007910136 0.3856681074 0.21992493 #> 191 0.078801679 2.730756352 -0.678544217 -0.7324033687 -0.48674988 #> 192 -0.416983822 0.094858033 -0.792195600 -0.9586797389 -0.48398943 #> 193 -0.400860716 1.152648031 2.117279805 -0.1667124433 0.36070780 #> 194 4.726286904 -0.191031156 -0.683090272 -0.7190929940 0.57602278 #> 195 -0.154983354 -0.516944831 2.149102192 -0.2598850663 -0.41221777 #> 196 0.631018050 0.317851600 -0.792195600 -0.9653349262 -0.50331257 #> 197 1.195326751 0.826734356 0.821654039 -0.7390585561 -0.50331257 #> 198 -0.719292054 2.136106839 -0.792195600 -0.6458859330 -0.13341247 #> 199 -0.497599350 1.381359381 0.280673455 -0.8056104296 0.18403910 #> 200 -0.283968200 1.124059112 0.703456600 1.6501537053 -0.44258270 #> Otu00017 Otu00018 Otu00019 Otu00020 Otu00021 Otu00022 #> 1 0.47611468 0.399615523 0.55293856 0.554816232 -0.35537010 1.647612103 #> 2 -0.32110972 -0.679309939 0.61541514 -0.360008658 0.15159833 -0.375705829 #> 3 0.49083266 -0.679309939 -0.13846893 -0.529188603 -0.63100342 -0.081618920 #> 4 -0.26714376 0.030253653 0.08644676 -0.266019799 0.74224116 -0.187490207 #> 5 -0.52961456 -0.674449915 -0.64244668 -0.685836701 -0.63100342 -0.367863511 #> 6 3.30687454 -0.008626544 -0.08432256 0.172594874 0.78161735 -0.356100035 #> 7 -0.50263159 -0.518929127 -0.52165862 -0.403870125 -0.63100342 -0.026722697 #> 8 -0.53452056 0.419055622 0.69871725 -0.027914691 -0.60639331 -0.207096001 #> 9 1.29296306 -0.679309939 0.29053693 -0.673304853 -0.63100342 0.141887131 #> 10 -0.52225557 -0.436308709 -0.03017619 0.918239819 -0.52271890 -0.281598018 #> 11 -0.53452056 -0.679309939 -0.35921951 1.005962753 -0.63100342 -0.383548146 #> 12 2.65928302 -0.664729865 -0.21344082 -0.641975234 0.46660784 -0.273755700 #> 13 -0.44375963 -0.650149792 -0.64244668 -0.522922680 -0.62608140 -0.371784670 #> 14 0.94709032 -0.120407110 -0.34255909 -0.479061212 3.60193686 -0.277676859 #> 15 0.68216652 -0.280787922 -0.30923825 -0.585581919 -0.11911297 -0.360021194 #> 16 -0.53452056 2.304745168 
-0.35921951 1.087419764 -0.62608140 -0.301203812 #> 17 2.23246135 -0.674449915 -0.23426635 -0.535454527 0.23035070 -0.340415400 #> 18 1.37881799 0.146894244 0.02813529 -0.165765017 0.69302092 -0.163963254 #> 19 0.70914950 0.137174194 0.40299477 -0.159499093 -0.16341118 0.185019877 #> 20 -0.50508458 2.960848490 -0.39670546 -0.234690180 -0.61623735 0.628110819 #> 21 -0.53452056 -0.664729865 -0.63828157 -0.679570777 -0.62115938 -0.379626987 #> 22 -0.53206756 0.224654637 0.28637182 0.673868786 -0.47842069 -0.367863511 #> 23 -0.53452056 0.278114908 0.60291983 2.033574274 -0.63100342 -0.003195744 #> 24 -0.52716157 -0.674449915 -0.64244668 -0.485327136 -0.62115938 -0.379626987 #> 25 -0.35299870 1.157779362 0.69455215 0.254051885 0.41738760 0.185019877 #> 26 2.12943543 0.900198058 -0.44668673 -0.604379690 -0.23231951 -0.352178876 #> 27 -0.53452056 -0.669589890 -0.64244668 -0.685836701 -0.63100342 -0.379626987 #> 28 -0.53452056 -0.679309939 5.46359780 2.321806774 -0.63100342 -0.336494241 #> 29 -0.51489658 -0.674449915 -0.38004504 0.442029602 -0.63100342 -0.293361494 #> 30 1.07709922 -0.679309939 4.20990108 -0.660773005 -0.29630582 -0.367863511 #> 31 -0.53452056 -0.023206617 -0.55081436 -0.585581919 -0.62115938 1.173151890 #> 32 0.40252473 -0.314808094 -0.56330968 -0.441465669 -0.63100342 0.604583867 #> 33 -0.53452056 -0.679309939 0.01980508 -0.071776158 -0.56701712 -0.379626987 #> 34 -0.53452056 -0.679309939 -0.64244668 -0.679570777 1.28366375 0.216389147 #> 35 0.31176380 -0.188447454 -0.18428509 -0.585581919 -0.26677368 -0.383548146 #> 36 -0.51980257 4.146694494 -0.57997010 -0.554252299 -0.63100342 -0.371784670 #> 37 1.22673211 0.389895474 -0.24676167 -0.660773005 -0.02559452 -0.152199778 #> 38 -0.53452056 -0.674449915 -0.63411647 -0.259753876 -0.61131533 -0.375705829 #> 39 -0.53452056 0.176054391 -0.49250288 -0.447731593 -0.53748498 -0.352178876 #> 40 2.04358049 -0.674449915 0.93612826 -0.197094636 0.03346976 -0.261992224 #> 41 0.24553285 0.559996335 -0.24676167 2.240349763 -0.62608140 -0.379626987 #> 42 -0.46093062 -0.329388168 -0.23843146 -0.410136049 1.79063218 -0.332573082 #> 43 -0.46093062 0.219794613 -0.64244668 -0.685836701 -0.62115938 -0.375705829 #> 44 1.26843308 0.195494490 1.00693505 -0.510390832 -0.60639331 0.024252367 #> 45 0.51536265 -0.679309939 -0.57997010 -0.240956104 -0.38982427 -0.379626987 #> 46 -0.50753758 -0.402288537 -0.17178977 -0.190828713 -0.62115938 -0.332573082 #> 47 0.75820946 -0.679309939 -0.54664925 0.078606015 0.89974591 -0.348257717 #> 48 -0.53452056 -0.105827036 0.02813529 3.430875305 -0.58670521 -0.328651923 #> 49 1.34692902 -0.343968241 -0.55081436 -0.610645614 0.80622746 0.024252367 #> 50 3.17195964 2.469986005 -0.22177104 -0.547986375 1.48054469 -0.367863511 #> 51 -0.53206756 -0.679309939 -0.41336588 0.968367210 -0.62608140 -0.265913383 #> 52 0.13514793 -0.207887552 -0.11347830 -0.529188603 0.72747509 -0.363942352 #> 53 -0.36526369 -0.679309939 -0.64244668 -0.598113766 -0.40951236 -0.360021194 #> 54 -0.53452056 -0.664729865 -0.36754972 -0.353742734 -0.55225105 0.094833225 #> 55 2.23491435 -0.368268364 0.18224419 -0.522922680 0.82099353 -0.254149906 #> 56 -0.51244358 0.885617984 -0.64244668 2.722825904 -0.49810879 -0.375705829 #> 57 -0.48055460 -0.431448684 -0.32173356 -0.366274582 0.53059414 -0.312967288 #> 58 -0.51734957 -0.679309939 -0.62995136 -0.679570777 -0.63100342 -0.363942352 #> 59 -0.51980257 -0.363408340 0.80700999 0.003414929 0.45184176 1.631927468 #> 60 0.14005393 1.138339263 -0.05100172 0.028478624 -0.38490224 -0.332573082 #> 61 -0.53452056 
-0.679309939 -0.03434129 -0.472795288 -0.62608140 -0.383548146 #> 62 -0.03901494 -0.679309939 -0.55914457 -0.598113766 1.13108102 -0.301203812 #> 63 -0.52225557 0.788417492 -0.36754972 -0.303615343 -0.62608140 -0.363942352 #> 64 -0.53452056 -0.159287306 -0.09681787 1.156344927 -0.24216356 -0.132593984 #> 65 -0.47810160 -0.679309939 1.00276994 -0.616911538 -0.63100342 -0.171805572 #> 66 -0.53452056 -0.674449915 1.28183200 0.636273243 0.37308939 -0.332573082 #> 67 -0.48546060 -0.562669349 -0.35505441 -0.347476810 -0.62608140 -0.246307589 #> 68 -0.53206756 -0.008626544 -0.49250288 -0.052978387 -0.63100342 -0.293361494 #> 69 -0.53452056 -0.669589890 1.39845495 -0.491593060 -0.01575048 -0.258071065 #> 70 3.36819949 1.269559928 -0.62995136 -0.623177462 1.17045721 0.008567732 #> 71 0.32402879 -0.679309939 -0.20511061 -0.479061212 -0.55717307 0.012488891 #> 72 -0.53452056 0.321855129 1.36513411 0.141265254 -0.63100342 0.290891164 #> 73 1.25862108 0.083713924 -0.64244668 -0.134435397 2.44033929 0.118360178 #> 74 0.65273054 -0.679309939 1.11939289 -0.410136049 -0.25692963 -0.297282653 #> 75 2.94383081 -0.679309939 0.50295730 -0.372540506 1.28366375 -0.367863511 #> 76 1.98716153 1.775002486 -0.03017619 -0.397604201 -0.62608140 -0.379626987 #> 77 -0.29903274 -0.679309939 -0.50499820 -0.648241158 2.05149943 0.761430218 #> 78 -0.53452056 0.195494490 -0.64244668 -0.685836701 0.71763104 0.204625671 #> 79 0.99615028 -0.275927897 -0.24676167 -0.554252299 0.07776797 -0.371784670 #> 80 -0.53206756 -0.679309939 6.88389873 -0.679570777 -0.62608140 -0.383548146 #> 81 0.06646398 0.005953530 -0.36754972 -0.629443386 -0.63100342 -0.277676859 #> 82 -0.28186175 -0.674449915 -0.64244668 0.128733407 4.36977254 -0.046328491 #> 83 0.49573866 0.200354514 -0.55914457 -0.491593060 0.13683226 -0.344336558 #> 84 -0.53452056 -0.674449915 -0.64244668 -0.178296865 -0.62608140 7.537192593 #> 85 -0.53206756 -0.664729865 -0.64244668 -0.685836701 -0.63100342 -0.316888447 #> 86 -0.53452056 2.192964602 1.78164465 -0.679570777 -0.63100342 -0.234544113 #> 87 0.40743073 -0.475188906 -0.28008251 -0.422667897 0.31894713 0.377156657 #> 88 -0.53452056 -0.193307479 -0.05100172 -0.090573930 2.66183035 0.702612836 #> 89 -0.24016078 -0.679309939 0.47380156 0.254051885 -0.46857665 1.141782620 #> 90 -0.53452056 -0.679309939 -0.47167736 0.924505743 -0.63100342 0.561451120 #> 91 -0.29412674 -0.679309939 -0.64244668 -0.497858984 -0.62608140 -0.379626987 #> 92 -0.53452056 -0.679309939 0.44048072 -0.504124908 -0.62608140 -0.371784670 #> 93 -0.53452056 -0.679309939 0.27387650 1.782937318 -0.63100342 -0.383548146 #> 94 -0.53452056 2.601206669 1.18603458 -0.259753876 -0.08958083 -0.250228748 #> 95 3.55708035 -0.664729865 1.49008727 -0.598113766 1.48546672 -0.211017160 #> 96 -0.46828961 -0.655009816 -0.64244668 -0.679570777 4.06952910 0.020331208 #> 97 -0.53452056 -0.679309939 -0.45501694 -0.667038929 -0.62608140 -0.383548146 #> 98 0.78519244 -0.455748807 -0.05516682 -0.103105778 -0.63100342 -0.281598018 #> 99 -0.53452056 -0.669589890 3.29774300 0.354306667 -0.62608140 -0.383548146 #> 100 -0.53206756 -0.679309939 -0.52582373 0.147531178 -0.60639331 -0.383548146 #> 101 -0.40451166 1.002258574 -0.63411647 -0.065510234 1.30335184 -0.371784670 #> 102 -0.52225557 -0.679309939 -0.45918204 -0.604379690 -0.63100342 -0.379626987 #> 103 -0.43885363 2.800467678 -0.10514809 0.166328950 -0.62115938 -0.383548146 #> 104 -0.53452056 0.161474318 -0.52165862 -0.178296865 -0.61131533 0.549687644 #> 105 2.59305208 -0.674449915 0.31552756 -0.529188603 0.41246558 0.345787387 #> 
106 1.42787796 -0.679309939 1.39012474 -0.673304853 0.20574059 -0.301203812 #> 107 -0.53452056 -0.188447454 0.50712240 -0.272285723 0.61919057 2.274997508 #> 108 -0.25978477 0.681496950 0.22389524 0.222722265 -0.62608140 1.337840559 #> 109 -0.52470857 -0.217607602 2.99785542 2.096233513 -0.60639331 -0.352178876 #> 110 -0.50263159 -0.382848438 -0.41336588 -0.203360560 -0.61623735 -0.269834542 #> 111 -0.53206756 -0.421728635 -0.62578626 -0.416401973 -0.62608140 -0.199253683 #> 112 -0.21072481 -0.669589890 -0.64244668 0.454561450 -0.62608140 -0.383548146 #> 113 -0.53452056 -0.032926667 -0.41336588 0.053542320 2.00227919 -0.316888447 #> 114 -0.40941766 -0.412008586 -0.06349703 -0.491593060 -0.54240700 0.286970005 #> 115 -0.53206756 0.054553776 -0.08848766 -0.052978387 -0.43412248 -0.128672825 #> 116 -0.45111862 1.211239632 0.01147487 0.015946776 0.82591556 -0.336494241 #> 117 -0.53452056 -0.013486568 0.57792920 -0.685836701 -0.39966831 -0.371784670 #> 118 -0.16902384 -0.465468857 0.42798540 0.028478624 0.34847927 0.094833225 #> 119 -0.53452056 -0.679309939 0.72370788 1.739075850 -0.63100342 -0.383548146 #> 120 -0.53452056 0.244094736 -0.21344082 -0.159499093 -0.63100342 -0.383548146 #> 121 -0.52716157 -0.679309939 -0.44252162 -0.679570777 -0.23724154 -0.383548146 #> 122 -0.53452056 -0.679309939 0.23639056 -0.522922680 0.03346976 -0.383548146 #> 123 -0.53452056 4.550076536 -0.48417267 1.544832209 -0.56701712 -0.340415400 #> 124 -0.53206756 -0.421728635 -0.48833778 0.009680852 -0.15356714 -0.352178876 #> 125 -0.48055460 -0.139847208 -0.13846893 -0.215892408 -0.63100342 -0.375705829 #> 126 -0.53452056 -0.309948069 -0.03017619 0.141265254 0.65364473 -0.348257717 #> 127 -0.47319561 -0.596689521 -0.45085183 -0.516656756 1.18522328 -0.156120937 #> 128 -0.49772559 1.687522044 -0.63828157 -0.140701321 -0.63100342 -0.332573082 #> 129 0.10571196 0.919638156 -0.57580499 2.716559980 0.73239711 -0.238465271 #> 130 1.58486984 -0.023206617 0.17391397 -0.660773005 -0.63100342 -0.383548146 #> 131 -0.51489658 0.419055622 -0.64244668 0.084871939 -0.25200761 -0.301203812 #> 132 -0.52470857 -0.669589890 1.18186948 -0.604379690 -0.54732902 -0.379626987 #> 133 -0.53452056 0.030253653 0.86115636 -0.234690180 -0.52764093 -0.285519177 #> 134 3.26762657 -0.650149792 0.57376409 -0.485327136 1.72172385 -0.328651923 #> 135 -0.53452056 0.880757959 1.11106268 2.478454871 -0.59654926 -0.324730765 #> 136 0.11552395 -0.679309939 -0.13430382 -0.547986375 0.70778699 0.118360178 #> 137 -0.53452056 -0.679309939 -0.64244668 -0.667038929 -0.61623735 -0.379626987 #> 138 -0.53206756 -0.460608832 0.26138119 -0.685836701 4.39438266 0.032094685 #> 139 0.17439590 0.380175425 -0.54248415 -0.109371702 -0.62115938 -0.324730765 #> 140 -0.52716157 -0.674449915 -0.63411647 -0.259753876 0.83083758 -0.265913383 #> 141 -0.53452056 0.428775671 0.59042451 -0.009116919 0.05807988 0.141887131 #> 142 -0.37262268 -0.523789152 -0.56330968 -0.673304853 0.61919057 2.714167291 #> 143 -0.53452056 -0.538369226 -0.35921951 -0.109371702 -0.61623735 -0.277676859 #> 144 -0.49527259 0.973098427 -0.53831904 0.786655417 -0.63100342 -0.277676859 #> 145 -0.08807490 -0.528649176 -0.63411647 -0.566784147 3.53302853 -0.352178876 #> 146 -0.51244358 -0.222467626 -0.60079562 -0.435199745 -0.62115938 -0.363942352 #> 147 -0.53452056 -0.679309939 -0.64244668 -0.466529364 -0.62608140 3.682693510 #> 148 0.14741292 -0.081526913 -0.50499820 -0.366274582 -0.62608140 2.231864761 #> 149 -0.53452056 -0.655009816 0.59042451 5.498630194 -0.49810879 -0.383548146 #> 150 -0.53452056 
-0.679309939 -0.64244668 -0.554252299 -0.20770940 0.443816357 #> 151 -0.43394764 -0.679309939 -0.39254036 -0.360008658 -0.60147128 -0.261992224 #> 152 -0.48546060 -0.314808094 -0.62162115 0.091137863 1.57898517 -0.352178876 #> 153 -0.53452056 -0.596689521 -0.58413520 -0.591847843 0.34847927 0.130123654 #> 154 -0.52961456 -0.679309939 -0.63828157 4.320636500 0.09745607 -0.191411366 #> 155 -0.53452056 0.214934588 0.20306971 1.024760525 -0.57193914 -0.379626987 #> 156 -0.52470857 0.030253653 -0.63828157 -0.353742734 -0.63100342 -0.328651923 #> 157 -0.53206756 -0.679309939 -0.64244668 -0.685836701 -0.63100342 -0.383548146 #> 158 -0.53452056 -0.091246962 4.23489171 -0.673304853 -0.62608140 -0.211017160 #> 159 -0.53452056 2.523446276 -0.63828157 -0.328679038 0.54043819 1.333919400 #> 160 -0.53452056 1.002258574 0.05312592 1.569895905 -0.63100342 -0.371784670 #> 161 -0.52225557 0.428775671 -0.57997010 0.066074168 -0.63100342 -0.344336558 #> 162 -0.53452056 1.998563618 -0.64244668 0.066074168 -0.63100342 7.666590833 #> 163 -0.53206756 -0.266207848 -0.25925698 2.459657100 -0.63100342 -0.383548146 #> 164 -0.51244358 -0.674449915 -0.62578626 -0.228424256 -0.61623735 -0.371784670 #> 165 -0.51489658 0.351015277 0.32385777 -0.103105778 -0.63100342 -0.375705829 #> 166 -0.53452056 -0.674449915 -0.64244668 -0.648241158 0.11222214 -0.383548146 #> 167 -0.49036659 -0.514069103 -0.63828157 0.279115580 1.49038874 -0.258071065 #> 168 -0.53452056 -0.412008586 0.18224419 -0.159499093 -0.62608140 -0.360021194 #> 169 -0.53206756 -0.679309939 -0.63828157 -0.504124908 -0.63100342 -0.383548146 #> 170 -0.04882693 -0.679309939 -0.63828157 -0.685836701 -0.63100342 -0.261992224 #> 171 3.46877241 -0.407148561 1.34847369 -0.009116919 1.17045721 -0.132593984 #> 172 -0.50753758 1.109179116 -0.31340335 -0.616911538 -0.52764093 -0.167884413 #> 173 -0.53452056 -0.562669349 -0.60912584 2.171424600 -0.62115938 -0.309046129 #> 174 -0.45602462 0.423915646 -0.36754972 0.698932482 -0.63100342 -0.175726731 #> 175 0.17439590 0.039973702 -0.54248415 -0.554252299 0.23527273 -0.258071065 #> 176 0.70914950 -0.679309939 -0.64244668 -0.121903550 2.44526132 -0.375705829 #> 177 0.95444931 -0.271067872 -0.38004504 -0.585581919 -0.06989273 -0.344336558 #> 178 -0.11996387 1.279279977 -0.64244668 -0.685836701 3.24755116 -0.136515143 #> 179 -0.53452056 -0.679309939 -0.19261530 0.435763678 -0.61131533 -0.360021194 #> 180 -0.48546060 -0.518929127 -0.26342209 -0.479061212 -0.63100342 -0.320809606 #> 181 -0.49772559 -0.635569718 -0.56747478 -0.673304853 -0.60639331 2.278918667 #> 182 -0.53206756 1.964543446 -0.63411647 0.391902211 -0.06004869 -0.375705829 #> 183 -0.52716157 -0.169007356 -0.42169609 3.180238349 -0.62608140 -0.383548146 #> 184 -0.32601572 -0.314808094 -0.50499820 -0.610645614 -0.13387904 -0.062013126 #> 185 -0.51489658 3.373950582 -0.27591741 -0.510390832 -0.61131533 -0.383548146 #> 186 -0.51980257 -0.679309939 -0.63411647 -0.641975234 -0.29630582 0.651637772 #> 187 0.38535374 0.783557467 -0.64244668 -0.504124908 1.10154888 -0.371784670 #> 188 -0.53452056 1.993703594 0.05729102 0.084871939 -0.63100342 -0.383548146 #> 189 -0.49281959 -0.353688291 -0.55081436 4.583805304 -0.60639331 3.910120720 #> 190 -0.37262268 -0.339108217 -0.08015745 -0.347476810 -0.62608140 -0.062013126 #> 191 -0.53452056 1.532001256 1.58588470 -0.428933821 -0.57193914 -0.081618920 #> 192 -0.53452056 -0.669589890 -0.27175230 -0.266019799 -0.63100342 -0.379626987 #> 193 3.84898713 -0.518929127 -0.16345956 -0.510390832 0.37308939 -0.348257717 #> 194 -0.52716157 
0.715517123 0.39466456 -0.497858984 -0.21755344 -0.379626987 #> 195 3.26026757 0.268394859 -0.03017619 0.153797102 0.67825485 -0.211017160 #> 196 -0.48546060 4.652137053 0.77785425 -0.416401973 -0.63100342 -0.383548146 #> 197 -0.51244358 0.351015277 -0.14679914 -0.685836701 0.41738760 -0.367863511 #> 198 -0.53452056 -0.679309939 -0.63828157 -0.623177462 -0.63100342 -0.383548146 #> 199 1.06483423 -0.674449915 -0.53831904 -0.667038929 -0.18309928 -0.375705829 #> 200 -0.53452056 -0.552949299 0.14059313 -0.002850995 0.27957094 0.196783353 #> Otu00023 Otu00024 Otu00025 Otu00026 Otu00027 Otu00028 #> 1 -0.0069254588 -0.177204415 -0.24303824 -0.22202016 -0.24641906 -0.292554022 #> 2 -0.6642571429 -0.678440995 -0.43616774 -0.29146475 -0.38539990 -0.307394436 #> 3 -0.3747181868 0.177117995 0.04157367 -0.47086329 -0.41259180 -0.168883908 #> 4 -0.3199405465 0.954898895 -0.28369708 0.43770350 -0.36425064 -0.314814643 #> 5 -0.9068438359 -0.695725015 -0.39550890 -0.61553953 -0.06816104 -0.314814643 #> 6 -0.3434166781 0.851194775 0.03649131 -0.45350214 -0.38842122 -0.319761448 #> 7 0.4078195324 -0.669798985 -0.42600303 0.87751927 -0.23131245 -0.295027425 #> 8 -0.0851792307 -0.592020895 -0.35485005 -0.57503018 0.01945732 -0.322234850 #> 9 -0.8990184587 -0.393254665 -0.45141481 -0.62132658 -0.31288816 -0.319761448 #> 10 -0.4060196956 -0.341402605 1.42397434 -0.62132658 -0.40957048 0.214493446 #> 11 0.1965343482 3.962318375 -0.07023815 0.46085170 -0.20412055 -0.322234850 #> 12 1.2451348919 0.324032165 -0.14647348 -0.58660428 0.02852128 -0.319761448 #> 13 0.0713283131 0.488230355 -0.30402650 -0.37248345 -0.39748519 -0.314814643 #> 14 -0.5625272394 -0.280908535 -0.26845001 1.35205733 -0.37935725 -0.322234850 #> 15 -0.6955586517 0.107981915 -0.37009712 -0.26252951 -0.31288816 -0.312341241 #> 16 1.6911813918 -0.713009035 -0.43616774 -0.01368637 -0.32497345 -0.307394436 #> 17 -0.1399568711 0.099339905 0.21437375 -0.25095541 -0.38237857 -0.314814643 #> 18 -0.4138450728 -0.030290245 0.21437375 -0.22780721 -0.39144254 -0.183724322 #> 19 -0.7581616692 -0.021648235 -0.37517948 0.53608334 -0.12556616 -0.307394436 #> 20 0.8538660323 -0.592020895 -0.45141481 -0.54030789 -0.30986683 -0.312341241 #> 21 -0.8911930815 -0.704367025 5.62708227 -0.62132658 -0.41259180 -0.297500827 #> 22 0.7756122604 -0.704367025 0.61587983 -0.32618705 -0.31288816 -0.205984942 #> 23 0.3686926464 -0.721651045 -0.45649716 0.48978694 0.23699254 -0.299974229 #> 24 -0.1243061167 0.203044025 -0.40059125 -0.62132658 0.44848511 -0.314814643 #> 25 1.1434049884 -0.013006225 -0.29386179 -0.62132658 -0.41863444 -0.235665770 #> 26 -0.8285900640 0.168475985 -0.03974402 -0.58660428 0.33367486 -0.089735035 #> 27 -0.8677169499 -0.721651045 -0.14139113 -0.62132658 -0.41561312 1.485822222 #> 28 0.2200104798 -0.678440995 -0.44125010 2.96085712 -0.42467709 4.458851770 #> 29 -0.4216704500 -0.522884815 -0.43616774 -0.10049212 -0.32195212 -0.319761448 #> 30 -0.7816378008 -0.142636375 -0.37517948 -0.58660428 -0.40654915 -0.314814643 #> 31 -0.4920988447 1.680827735 -0.42600303 -0.60396543 -0.40352783 -0.317288045 #> 32 -0.6642571429 1.853667935 -0.31419121 -0.41299279 -0.40957048 -0.210931747 #> 33 1.3546901726 -0.721651045 -0.34976770 -0.59239133 0.49682627 -0.228245563 #> 34 -0.8990184587 -0.410538685 3.72119899 -0.49979854 -0.05909707 -0.260399793 #> 35 -0.2729882833 4.938865505 -0.18204997 -0.52873379 -0.33101609 -0.309867838 #> 36 2.7789088215 -0.661156975 1.47988025 -0.61553953 -0.15275807 -0.314814643 #> 37 -0.5234003535 2.026508135 0.45324446 -0.58081723 
0.09801170 -0.314814643 #> 38 -0.9068438359 -0.721651045 0.34143264 -0.59817838 -0.36122932 -0.307394436 #> 39 -0.0069254588 -0.661156975 -0.26845001 -0.43614099 0.49984759 -0.287607218 #> 40 -0.6407810114 0.038845835 -0.25320295 -0.21623311 -0.37935725 -0.314814643 #> 41 1.1825318744 -0.609304915 -0.42092068 -0.61553953 0.26418444 -0.317288045 #> 42 -0.4529719588 0.073413875 -0.42092068 -0.37248345 -0.37935725 5.443265880 #> 43 3.1388761724 -0.721651045 -0.37517948 -0.62132658 -0.34914403 -0.297500827 #> 44 0.4391210411 0.090697895 -0.34976770 -0.59817838 -0.31288816 -0.295027425 #> 45 0.5252001902 -0.410538685 1.46971554 -0.61553953 -0.09535294 -0.317288045 #> 46 1.3077379094 -0.436464715 -0.24303824 0.16571217 -0.37633593 -0.210931747 #> 47 0.5173748130 0.393168245 0.04665602 -0.60396543 0.54818875 -0.317288045 #> 48 1.4877215849 -0.661156975 -0.33960299 -0.62132658 -0.41561312 -0.314814643 #> 49 -0.8442408184 0.151191965 -0.24812059 -0.60396543 -0.41863444 -0.290080620 #> 50 -0.6720825201 0.747490655 -0.18204997 -0.58660428 -0.38842122 -0.267820000 #> 51 -0.3590674325 -0.574736875 -0.44125010 1.11478830 -0.42467709 1.305263855 #> 52 -0.6407810114 0.427736285 -0.21762646 -0.60975248 -0.35518667 -0.302447632 #> 53 1.7459590322 -0.704367025 6.00825892 -0.60975248 0.58746594 -0.223298758 #> 54 1.4877215849 -0.522884815 1.16985657 -0.41877984 -0.36425064 -0.262873195 #> 55 -0.7425109149 0.254896085 -0.17188526 0.50714809 -0.10441691 -0.314814643 #> 56 0.8225645235 -0.713009035 0.03649131 -0.61553953 -0.36727196 -0.314814643 #> 57 -0.3590674325 -0.557452855 -0.45141481 1.07427895 0.25209915 -0.109522253 #> 58 -0.8911930815 -0.669798985 1.25117426 -0.62132658 -0.42467709 0.738854731 #> 59 -0.1008299851 0.445020305 -0.45141481 -0.38984460 0.56027404 -0.312341241 #> 60 0.0165506728 -0.254982505 0.61587983 0.62867613 0.19167270 -0.277713609 #> 61 -0.4294958272 -0.488316775 -0.45649716 -0.28567770 -0.37331461 -0.317288045 #> 62 -0.2338613974 -0.427822705 0.39733855 -0.40720575 -0.17390732 2.002763299 #> 63 1.9259427076 -0.592020895 -0.44633245 0.99904731 -0.42165577 -0.230718965 #> 64 -0.3981943184 -0.713009035 0.88524467 0.14256397 0.11613964 -0.317288045 #> 65 -0.6564317657 -0.531526825 -0.47174423 -0.55188199 8.52145880 0.006727654 #> 66 -0.6955586517 -0.177204415 -0.47174423 -0.62132658 -0.23433377 -0.322234850 #> 67 -0.5625272394 -0.687083005 -0.47174423 2.85669023 0.33367486 -0.322234850 #> 68 -0.3121151693 0.393168245 -0.45649716 0.17728626 -0.39748519 -0.319761448 #> 69 1.1590557428 -0.721651045 0.02124425 1.73400261 0.03758525 -0.309867838 #> 70 0.1808835938 1.940088035 -0.43616774 -0.54030789 -0.38539990 -0.319761448 #> 71 1.0181989533 -0.358686625 1.11395066 -0.61553953 -0.31893080 -0.304921034 #> 72 -0.3355913009 -0.721651045 -0.30910886 1.01640846 -0.16182203 -0.275240206 #> 73 -0.5860033710 -0.038932255 -0.42092068 -0.23359426 -0.26756832 -0.314814643 #> 74 -0.5781779938 -0.177204415 -0.36501477 0.14256397 0.83521439 0.006727654 #> 75 -0.4686227131 0.894404825 0.01107953 -0.30882590 -0.35216535 -0.304921034 #> 76 -0.6486063886 0.531440405 -0.44125010 -0.52294674 -0.36727196 -0.307394436 #> 77 -0.4842734675 0.721564625 -0.47174423 2.76409744 -0.37029328 -0.309867838 #> 78 -0.9068438359 1.015392965 0.94115058 -0.23938131 -0.39446386 -0.292554022 #> 79 -0.4451465816 -0.237698485 -0.26336766 -0.08313097 -0.28569625 -0.314814643 #> 80 0.0791536903 -0.721651045 0.36176206 -0.61553953 -0.42467709 -0.248032781 #> 81 -0.7190347833 -0.687083005 -0.29894415 0.60552794 -0.30986683 -0.322234850 
#> 82 0.0087252956 1.145023115 -0.39042654 -0.23938131 -0.11045955 -0.270293402 #> 83 1.9885457251 -0.315476575 -0.33452063 -0.60396543 -0.40654915 -0.257926390 #> 84 0.2747881201 -0.721651045 -0.32943828 2.66571759 2.25221464 -0.314814643 #> 85 -0.8833677043 -0.229056475 -0.46157952 1.49673357 0.05269186 0.911992891 #> 86 -0.9068438359 -0.626588935 -0.45141481 1.59511342 1.12224003 -0.322234850 #> 87 -0.2495121518 5.517880175 -0.38534419 -0.61553953 -0.40352783 -0.309867838 #> 88 -0.2886390377 0.721564625 -0.08040286 -0.22780721 -0.21922716 -0.275240206 #> 89 -0.5234003535 0.133907945 -0.30910886 -0.19308491 -0.41561312 -0.173830713 #> 90 0.0008999184 0.082055885 -0.41075596 0.40876825 -0.42165577 -0.302447632 #> 91 -0.7659870464 -0.393254665 -0.44633245 0.45506465 -0.33705874 -0.302447632 #> 92 -0.7738124236 0.954898895 0.85983289 -0.30882590 -0.41561312 1.837045346 #> 93 0.1417567078 -0.721651045 6.81127108 -0.62132658 -0.14369410 -0.302447632 #> 94 -0.6016541254 -0.341402605 -0.46157952 1.02798256 -0.10743823 -0.149096690 #> 95 0.7286599972 0.254896085 -0.07532051 -0.53452084 -0.30080287 -0.319761448 #> 96 -0.9068438359 0.194402015 -0.46157952 -0.34354820 -0.42467709 -0.322234850 #> 97 1.9181173304 -0.704367025 -0.27353237 -0.62132658 0.98325919 -0.248032781 #> 98 -0.4529719588 0.142549955 0.31093850 0.24094381 -0.35820799 -0.277713609 #> 99 0.7286599972 -0.713009035 -0.07023815 -0.59239133 0.11311831 -0.280187011 #> 100 -0.5234003535 -0.704367025 -0.46666187 -0.60396543 0.06175583 3.006964628 #> 101 0.0243760500 0.514156385 -0.28369708 -0.61553953 3.79913175 -0.322234850 #> 102 5.4160609352 -0.609304915 -0.43108539 -0.61553953 5.83248179 -0.275240206 #> 103 1.1512303656 -0.609304915 -0.44125010 -0.54609494 0.83823571 -0.205984942 #> 104 -0.9068438359 -0.574736875 -0.28369708 0.40298120 -0.42467709 -0.319761448 #> 105 0.1495820850 0.254896085 -0.11597935 -0.59817838 -0.22526980 -0.282660413 #> 106 -0.7972885552 -0.056216275 -0.21254410 -0.59239133 0.43942114 -0.312341241 #> 107 -0.2260360202 -0.229056475 -0.34468534 0.61710203 -0.30080287 0.169972205 #> 108 -0.5468764851 1.335147335 -0.45141481 1.46779833 -0.12254484 -0.309867838 #> 109 1.1121034796 -0.678440995 -0.39550890 -0.59817838 -0.32195212 -0.312341241 #> 110 0.7599615060 -0.479674765 -0.45141481 0.94696386 -0.05305442 -0.309867838 #> 111 -0.6407810114 -0.289550545 1.47479789 0.06154527 -0.40957048 0.058669102 #> 112 -0.5468764851 -0.721651045 -0.25320295 -0.40141870 -0.07722500 -0.314814643 #> 113 -0.8990184587 -0.721651045 -0.24303824 -0.61553953 -0.42165577 -0.314814643 #> 114 -0.6486063886 -0.082142305 -0.30910886 -0.20465901 -0.22829113 -0.319761448 #> 115 -0.4842734675 0.073413875 -0.41583832 -0.62132658 0.20980063 -0.277713609 #> 116 0.1261059534 0.583292465 -0.43108539 -0.60396543 -0.40352783 -0.025426576 #> 117 0.0243760500 -0.514242805 -0.45141481 -0.62132658 -0.39748519 0.763588754 #> 118 -0.0304015904 -0.721651045 -0.27861472 -0.15257556 0.01945732 -0.319761448 #> 119 -0.7033840289 2.389472555 -0.45141481 -0.62132658 -0.38237857 -0.317288045 #> 120 1.8320381813 -0.652514965 -0.20237939 -0.61553953 0.10103302 -0.309867838 #> 121 -0.5547018623 -0.548810845 -0.47174423 -0.44771509 0.03154261 -0.272766804 #> 122 -0.1869091342 -0.254982505 3.03508101 -0.53452084 -0.31893080 -0.250506184 #> 123 -0.2260360202 -0.462390745 -0.46157952 2.06965148 -0.42467709 6.323797094 #> 124 0.1652328394 1.170949145 -0.44125010 -0.60975248 -0.42467709 3.514012096 #> 125 -0.9068438359 -0.531526825 -0.33960299 4.84743529 -0.38842122 
-0.299974229 #> 126 -0.6329556342 3.564785915 -0.24812059 -0.52294674 -0.39748519 -0.245559379 #> 127 -0.9068438359 -0.367328635 -0.40059125 0.37983300 -0.36727196 -0.314814643 #> 128 1.6677052603 0.185760005 3.05032807 0.39140710 0.28533370 -0.314814643 #> 129 -0.0851792307 -0.522884815 -0.16680290 5.25252877 0.85032100 -0.280187011 #> 130 -0.6251302570 -0.695725015 0.10764429 -0.60975248 -0.27663229 -0.322234850 #> 131 -0.9068438359 -0.419180695 -0.42600303 -0.51715969 -0.02586252 -0.317288045 #> 132 1.4407693217 -0.592020895 -0.44125010 -0.55188199 1.61169427 -0.285133816 #> 133 0.4547717955 -0.488316775 0.03649131 -0.17572376 -0.21318451 -0.248032781 #> 134 -0.2808136605 0.427736285 0.24486788 -0.45928919 -0.29476022 -0.314814643 #> 135 -0.0695284764 -0.678440995 -0.33452063 -0.59239133 0.91679010 -0.317288045 #> 136 0.3217403832 -0.280908535 -0.39550890 -0.54030789 0.65997768 0.031461677 #> 137 0.4547717955 0.868478795 -0.44125010 0.07890642 -0.36727196 -0.136729678 #> 138 -0.5312257307 0.453662315 -0.47174423 -0.44192804 -0.40957048 1.082657649 #> 139 0.0400268043 -0.133994365 -0.41583832 1.91918820 0.06477715 -0.322234850 #> 140 -0.9068438359 2.795647025 -0.44125010 -0.55188199 -0.41561312 -0.317288045 #> 141 -0.4920988447 -0.583378885 -0.47174423 2.26062412 0.17656609 -0.116942460 #> 142 -0.7894631780 -0.237698485 -0.21762646 -0.42456689 -0.42467709 -0.099628644 #> 143 -0.5155749763 0.038845835 -0.24812059 0.23515676 -0.42467709 -0.015532966 #> 144 0.1417567078 0.142549955 0.09239722 1.66455801 -0.27663229 0.320849745 #> 145 -0.8833677043 -0.315476575 -0.15155584 -0.61553953 -0.40050651 5.809329418 #> 146 -0.3668928096 -0.609304915 -0.44633245 0.68075958 -0.42467709 -0.292554022 #> 147 -0.8990184587 -0.713009035 -0.44125010 -0.60975248 -0.31893080 -0.314814643 #> 148 -0.1869091342 -0.073500295 -0.41075596 1.02798256 0.45452776 -0.223298758 #> 149 -0.1008299851 -0.626588935 -0.39042654 -0.11785327 -0.39748519 -0.299974229 #> 150 0.0322014271 2.372188535 -0.39042654 0.42612940 -0.40352783 -0.322234850 #> 151 -0.2495121518 1.231443215 -0.46157952 -0.60396543 -0.42467709 -0.304921034 #> 152 0.3921687780 1.352431355 -0.20746175 -0.46507624 -0.41259180 -0.280187011 #> 153 -0.8442408184 0.548724425 -0.43108539 0.60552794 -0.34008006 -0.307394436 #> 154 1.2060080059 -0.617946925 -0.36501477 -0.62132658 0.43639982 -0.245559379 #> 155 0.9086436726 -0.531526825 -0.22779117 -0.56924313 0.30648295 0.706700501 #> 156 -0.4686227131 -0.522884815 -0.42092068 -0.61553953 -0.42165577 -0.314814643 #> 157 -0.8911930815 -0.687083005 0.98180942 -0.62132658 -0.33705874 -0.210931747 #> 158 0.9947228218 -0.220414465 0.74293871 0.07311937 -0.41561312 -0.295027425 #> 159 -0.6564317657 -0.125352355 -0.40567361 2.60784710 -0.41561312 -0.277713609 #> 160 -0.6877332745 -0.713009035 -0.34468534 -0.59239133 0.64184975 -0.139203081 #> 161 0.4078195324 -0.669798985 -0.47174423 3.04187582 -0.41561312 -0.314814643 #> 162 -0.8990184587 -0.721651045 -0.14647348 -0.62132658 -0.37633593 -0.285133816 #> 163 1.1121034796 -0.721651045 -0.35993241 0.74441713 -0.29173890 -0.290080620 #> 164 0.9712466902 -0.168562405 -0.32435592 -0.59817838 0.79895852 -0.272766804 #> 165 0.2356612341 -0.566094865 -0.33960299 -0.49979854 5.67839434 -0.297500827 #> 166 -0.3434166781 1.369715375 -0.46157952 -0.60975248 -0.41561312 4.716085608 #> 167 -0.5468764851 0.419094275 -0.46666187 3.73053472 -0.40654915 -0.307394436 #> 168 -0.5155749763 -0.721651045 -0.40567361 -0.59817838 -0.34008006 -0.287607218 #> 169 3.5849226723 -0.704367025 0.95639764 
-0.53452084 0.37597337 -0.304921034 #> 170 -0.9068438359 -0.687083005 -0.39042654 -0.62132658 -0.41863444 -0.312341241 #> 171 -0.5390511079 0.617860505 -0.07532051 -0.37827050 -0.37633593 -0.314814643 #> 172 -0.4529719588 -0.626588935 -0.46157952 -0.26252951 2.99243865 -0.077368024 #> 173 -0.8207646868 -0.687083005 -0.40567361 -0.62132658 0.99836580 0.019094666 #> 174 0.4312956639 1.741321805 -0.39042654 -0.51137264 -0.15275807 -0.290080620 #> 175 -0.0695284764 0.107981915 -0.45649716 -0.50558559 -0.29778154 -0.295027425 #> 176 0.4547717955 4.307998775 1.64759798 -0.58660428 -0.37029328 -0.304921034 #> 177 -0.1321314939 -0.220414465 -0.24812059 0.70969483 -0.38842122 -0.319761448 #> 178 -0.9068438359 -0.410538685 -0.45649716 -0.62132658 -0.42165577 -0.299974229 #> 179 0.2982642517 -0.574736875 -0.16680290 -0.06576982 0.68414826 -0.319761448 #> 180 -0.5077495991 0.280822115 -0.44633245 -0.33776115 -0.37029328 0.244174274 #> 181 -0.6877332745 -0.522884815 0.01616189 0.77335237 -0.08931029 -0.302447632 #> 182 -0.5938287482 0.436378295 -0.46157952 1.04534371 -0.20109922 -0.196091333 #> 183 -0.4451465816 -0.367328635 -0.22779117 -0.19308491 -0.30684551 0.273855101 #> 184 -0.7738124236 0.151191965 0.03649131 -0.51137264 -0.36727196 1.483348819 #> 185 3.0997492864 -0.617946925 -0.42092068 -0.56924313 0.18260873 -0.314814643 #> 186 -0.8677169499 0.393168245 -0.47174423 0.21200856 -0.39144254 -0.069947817 #> 187 -0.9068438359 -0.609304915 -0.46157952 -0.61553953 -0.42165577 -0.309867838 #> 188 2.7710834443 -0.721651045 -0.34468534 -0.60396543 -0.08628897 0.773482363 #> 189 -0.8755423271 -0.047574265 -0.43108539 -0.43614099 -0.41863444 0.187286021 #> 190 -0.3355913009 -0.246340495 -0.40567361 1.58353932 -0.11650220 -0.302447632 #> 191 -0.6094795026 -0.479674765 -0.42092068 -0.45350214 -0.41259180 -0.245559379 #> 192 0.1104551991 -0.721651045 0.80900933 -0.59239133 -0.40957048 -0.307394436 #> 193 -0.5077495991 0.609218495 0.12289135 -0.56924313 -0.14671542 -0.297500827 #> 194 3.4518912600 -0.687083005 -0.40567361 1.55460407 0.06175583 -0.260399793 #> 195 -0.4842734675 0.315390155 2.58783373 -0.52873379 0.17958741 -0.282660413 #> 196 2.4658937338 -0.721651045 1.35282136 -0.16414966 -0.42467709 -0.322234850 #> 197 -0.0382269676 -0.669798985 -0.39550890 -0.58660428 -0.40352783 -0.161463701 #> 198 -0.9068438359 -0.721651045 0.15338549 -0.62132658 -0.41561312 -0.297500827 #> 199 -0.8598915727 0.107981915 0.40750326 -0.60396543 -0.27058964 -0.299974229 #> 200 -0.0304015904 0.004277795 -0.14647348 -0.55766903 -0.23131245 -0.317288045 #> Otu00029 Otu00030 Otu00031 Otu00032 Otu00033 #> 1 0.695821495 0.39193166 0.2730666130 1.850227727 -0.352365855 #> 2 -0.252260766 0.44720466 -0.1402887916 -0.493938512 0.152851091 #> 3 0.066720182 -0.59377025 -0.4629076438 -0.357825634 -0.288065517 #> 4 -0.473775313 -0.71352842 1.5937875395 -0.501500339 -0.435037719 #> 5 -0.571241714 0.33665866 -0.5637260352 -0.577118604 0.952012441 #> 6 -0.216818439 -0.52928508 -0.2411071829 0.337862411 0.079364989 #> 7 3.079318020 0.19847615 -0.3520074134 -0.395634767 -0.618752972 #> 8 0.031277854 -0.17001055 -0.3822529308 -0.357825634 -0.444223482 #> 9 -0.730732188 -0.11473754 0.3335576478 -0.070476224 -0.168650602 #> 10 0.137604837 -0.76880143 -0.4830713221 -0.516623992 0.740739900 #> 11 -0.305424257 0.16162748 -0.5939715526 -0.577118604 -0.600381447 #> 12 -0.730732188 -0.54770941 -0.5233986787 0.148816747 0.465167021 #> 13 -0.269981930 -0.62140675 -0.2209435046 0.103445788 -0.453409245 #> 14 -0.526938804 0.54853851 0.1420027042 
0.572279035 -0.646310260 #> 15 -0.535799386 -0.33582956 -0.2411071829 0.436166157 -0.655496023 #> 16 -0.340866585 -0.38189040 -0.4729894830 -0.569556778 1.071427356 #> 17 -0.181376111 1.20260239 -0.4427439656 1.071359589 -0.582009922 #> 18 0.279374147 0.65908451 0.0109387955 -0.100723530 0.106922277 #> 19 0.270513565 0.72356969 -0.0797977567 0.466413463 -0.232950941 #> 20 1.431249791 0.85254003 0.4646215565 -0.546871298 0.446795495 #> 21 -0.730732188 -0.76880143 -0.5939715526 -0.569556778 1.787916843 #> 22 2.937548710 -0.28055656 -0.5536441961 -0.456129379 -0.159464840 #> 23 -0.004164473 0.04186930 -0.3217618960 0.141254920 -0.673867548 #> 24 0.146465418 1.07363205 -0.5838897135 0.504222596 0.116108040 #> 25 -0.730732188 0.79726702 -0.1806161481 -0.577118604 -0.021678400 #> 26 -0.730732188 -0.70431626 -0.5637260352 -0.138532663 4.424230724 #> 27 -0.686429278 -0.76880143 -0.5838897135 -0.531747645 1.705244979 #> 28 0.562912767 -0.76880143 -0.5939715526 -0.577118604 -0.490152295 #> 29 0.279374147 -0.52928508 -0.1402887916 -0.357825634 1.098984644 #> 30 -0.721871606 7.25499635 -0.5637260352 0.020265695 -0.692239074 #> 31 -0.128212620 1.34078490 1.6643604135 -0.569556778 -0.012492637 #> 32 1.378086300 -0.06867671 -0.5838897135 2.530792119 -0.627938735 #> 33 0.075580763 -0.43716340 -0.5939715526 -0.577118604 0.428423970 #> 34 -0.243400184 -0.76880143 -0.5838897135 -0.577118604 -0.223765178 #> 35 0.199628910 0.76041836 0.3033121304 -0.441005726 -0.407480431 #> 36 2.388192634 3.49643206 -0.5939715526 -0.509062165 -0.407480431 #> 37 -0.695289860 -0.67667975 -0.4830713221 0.821819312 -0.701424836 #> 38 -0.721871606 -0.03182804 -0.5939715526 -0.577118604 -0.012492637 #> 39 -0.234539602 2.08697046 0.5251125913 -0.350263807 -0.591195684 #> 40 -0.323145421 0.04186930 -0.1402887916 0.065636655 -0.609567210 #> 41 1.316062227 -0.34504173 -0.5233986787 -0.448567553 0.290637530 #> 42 -0.367448331 -0.06867671 -0.2713527003 -0.123409010 -0.692239074 #> 43 -0.721871606 -0.76880143 -0.5738078743 -0.577118604 -0.609567210 #> 44 0.748984986 0.39193166 1.3316597220 -0.478814859 -0.379923143 #> 45 1.989466449 -0.75037709 -0.4931531613 -0.289769194 2.936137175 #> 46 -0.057327965 -0.76880143 -0.4729894830 -0.569556778 2.467663279 #> 47 -0.730732188 -0.73195276 -0.3217618960 -0.297331021 -0.141093314 #> 48 3.495765369 -0.20685922 -0.5435623569 -0.524185818 -0.058421450 #> 49 -0.385169494 -0.72274059 -0.2108616655 -0.229274582 0.492724309 #> 50 -0.624405205 -0.63983108 -0.4124984482 0.489098943 0.042621939 #> 51 -0.588962878 2.18830430 -0.4830713221 -0.561994951 3.110666665 #> 52 -0.137073202 0.12477881 0.6662583392 1.056235936 -0.232950941 #> 53 -0.730732188 -0.76880143 -0.5939715526 -0.561994951 -0.692239074 #> 54 -0.305424257 -0.75037709 -0.5738078743 -0.577118604 -0.398294669 #> 55 -0.535799386 -0.63983108 -0.4225802873 0.050513002 -0.591195684 #> 56 -0.730732188 0.92623737 -0.5536441961 -0.478814859 0.446795495 #> 57 -0.367448331 2.16066779 -0.2511890220 5.563084576 -0.600381447 #> 58 -0.721871606 -0.75037709 -0.5838897135 -0.546871298 0.042621939 #> 59 -0.721871606 -0.23449572 2.7128716834 -0.577118604 1.622573115 #> 60 0.376840547 0.43799250 -0.4024166090 -0.115847183 -0.122721789 #> 61 0.111023091 0.09714230 4.3360477841 -0.055352571 -0.582009922 #> 62 -0.562381132 0.13399097 -0.2209435046 -0.577118604 -0.021678400 #> 63 1.750230739 0.22611265 -0.5133168395 -0.463691206 -0.554452634 #> 64 -0.314284839 0.36429516 2.6422988095 0.254682319 0.079364989 #> 65 -0.721871606 -0.75958926 -0.3923347699 -0.577118604 
#> [data frame print truncated: standardized (centered and scaled) OTU
#> abundance values for samples 1-200, feature columns through Otu00060]
#> 73 0.50848928 0.242551039 -0.607593649 -0.40900254 -0.41506494 #> 74 4.62191375 0.013323831 0.182092286 0.63770902 3.72373115 #> 75 0.81480812 0.748761123 0.491773045 1.42274270 -0.41506494 #> 76 -0.82618568 -0.426028317 5.431181150 -0.40900254 0.02943583 #> 77 -0.69490618 -0.426028317 0.213060362 1.06686076 -0.40518715 #> 78 -0.56362667 -0.426028317 -0.607593649 -0.40900254 -0.41506494 #> 79 1.58060523 -0.091738639 0.940810146 1.19246615 -0.41506494 #> 80 -0.82618568 -0.426028317 -0.607593649 -0.40900254 -0.41506494 #> 81 0.59600895 -0.426028317 1.699528005 0.20855728 -0.41506494 #> 82 3.28723879 0.939783796 -0.607593649 -0.39853543 -0.41506494 #> 83 0.83668804 -0.034431837 -0.545657497 -0.25199581 -0.40518715 #> 84 -0.76054593 -0.426028317 -0.390817118 -0.40900254 -0.16812007 #> 85 -0.43234717 -0.426028317 2.427277789 -0.40900254 -0.41506494 #> 86 -0.82618568 -0.139494307 -0.251460776 -0.40900254 -0.40518715 #> 87 -0.06038857 0.051528366 -0.390817118 -0.36713408 -0.41506494 #> 88 1.01172738 -0.426028317 6.546031883 -0.40900254 -0.41506494 #> 89 1.79940441 -0.359170381 0.151124210 -0.31479850 -0.41506494 #> 90 0.13653068 6.603606053 -0.174040587 -0.28339716 -0.41506494 #> 91 -0.23542791 -0.378272648 -0.344365004 2.80440196 0.95794856 #> 92 -0.76054593 -0.426028317 2.009208764 -0.40900254 0.41466983 #> 93 -0.82618568 -0.426028317 -0.530173459 -0.40900254 -0.41506494 #> 94 -0.80430576 -0.426028317 0.228544400 2.50085561 -0.38543156 #> 95 1.03360730 1.054397400 0.274996514 0.55397210 -0.41506494 #> 96 -0.82618568 -0.426028317 -0.576625573 -0.40900254 -0.41506494 #> 97 -0.78242585 -0.426028317 -0.592109611 -0.40900254 -0.41506494 #> 98 -0.16978816 -0.426028317 -0.468237308 1.63208501 -0.41506494 #> 99 -0.78242585 -0.406926049 -0.592109611 -0.40900254 -0.41506494 #> 100 2.41204209 -0.397374916 -0.499205383 -0.39853543 -0.37555376 #> 101 1.79940441 -0.177698842 -0.576625573 -0.40900254 -0.41506494 #> 102 -0.80430576 -0.426028317 -0.607593649 -0.36713408 -0.41506494 #> 103 -0.19166808 -0.301863579 -0.421785194 -0.40900254 -0.41506494 #> 104 -0.82618568 1.025743999 0.011767869 -0.40900254 -0.39530935 #> 105 0.18029052 0.509982781 0.027251907 0.47023517 0.07882480 #> 106 0.04901101 0.309408975 -0.235976738 0.03061631 -0.39530935 #> 107 0.20217044 -0.426028317 -0.034684245 -0.40900254 0.33564747 #> 108 0.81480812 -0.426028317 1.838884347 -0.40900254 0.80978163 #> 109 -0.62926642 -0.129943173 -0.251460776 -0.38806831 -0.41506494 #> 110 2.08384333 -0.397374916 -0.205008662 -0.27293004 -0.40518715 #> 111 0.53036920 -0.426028317 -0.220492700 -0.40900254 -0.41506494 #> 112 0.50848928 -0.426028317 0.259512476 -0.40900254 0.13809157 #> 113 -0.21354799 -0.426028317 0.569193235 -0.38806831 -0.41506494 #> 114 0.35532986 -0.378272648 1.637591853 -0.15779177 1.13574887 #> 115 0.44284953 -0.426028317 1.467267436 -0.40900254 -0.06934212 #> 116 2.01820358 -0.215903376 -0.174040587 -0.40900254 -0.41506494 #> 117 -0.03850865 -0.426028317 -0.607593649 -0.40900254 2.64705149 #> 118 0.18029052 -0.426028317 -0.514689421 -0.40900254 -0.41506494 #> 119 -0.82618568 -0.426028317 -0.050168283 -0.40900254 -0.41506494 #> 120 -0.32294758 -0.387823782 -0.607593649 -0.38806831 -0.34592038 #> 121 -0.34482750 0.414471445 1.002746297 0.35509690 4.63248828 #> 122 0.24593027 -0.416477183 -0.576625573 -0.40900254 -0.41506494 #> 123 -0.82618568 -0.426028317 -0.545657497 -0.39853543 -0.41506494 #> 124 0.02713110 -0.426028317 -0.530173459 -0.40900254 -0.41506494 #> 125 -0.60738651 -0.426028317 0.089188059 3.14981678 2.73595165 #> 126 
0.63976878 -0.426028317 1.064682449 -0.40900254 -0.41506494 #> 127 -0.27918775 -0.378272648 -0.545657497 -0.31479850 -0.39530935 #> 128 -0.78242585 -0.426028317 -0.576625573 -0.40900254 -0.06934212 #> 129 -0.80430576 -0.110840906 -0.483721345 0.26089286 -0.41506494 #> 130 -0.47610700 -0.426028317 -0.344365004 -0.40900254 -0.40518715 #> 131 -0.56362667 -0.426028317 -0.390817118 -0.40900254 -0.41506494 #> 132 1.47120565 -0.426028317 -0.421785194 -0.40900254 -0.20763125 #> 133 -0.67302626 -0.426028317 -0.530173459 -0.26246293 -0.41506494 #> 134 0.46472945 0.739209989 1.869852422 1.54834808 -0.40518715 #> 135 -0.82618568 -0.406926049 -0.437269232 -0.39853543 -0.41506494 #> 136 0.85856796 -0.426028317 0.011767869 -0.40900254 -0.41506494 #> 137 -0.16978816 2.085919835 -0.468237308 -0.40900254 1.15550446 #> 138 0.88044788 -0.426028317 -0.220492700 -0.40900254 -0.40518715 #> 139 -0.71678609 -0.416477183 -0.468237308 0.11435324 -0.41506494 #> 140 -0.82618568 -0.426028317 -0.220492700 -0.40900254 -0.41506494 #> 141 -0.65114634 -0.426028317 -0.174040587 1.51694674 -0.03970874 #> 142 -0.56362667 1.617914285 0.693065539 -0.40900254 -0.41506494 #> 143 -0.73866601 -0.005778436 -0.607593649 -0.06358773 -0.41506494 #> 144 -0.58550659 1.149908736 -0.468237308 0.88891980 -0.41506494 #> 145 0.61788887 -0.196801109 -0.607593649 -0.40900254 -0.41506494 #> 146 0.81480812 -0.426028317 -0.592109611 -0.06358773 -0.40518715 #> 147 -0.82618568 -0.426028317 -0.592109611 -0.39853543 -0.41506494 #> 148 -0.73866601 -0.426028317 -0.359849042 -0.40900254 -0.41506494 #> 149 -0.71678609 0.185244237 -0.452753270 -0.40900254 -0.41506494 #> 150 -0.82618568 -0.426028317 -0.607593649 -0.40900254 -0.41506494 #> 151 1.66812490 0.834721326 0.878873994 -0.40900254 -0.41506494 #> 152 1.05548722 -0.168147708 -0.576625573 -0.40900254 -0.41506494 #> 153 -0.67302626 -0.426028317 0.058219983 0.45976806 -0.41506494 #> 154 -0.82618568 -0.426028317 -0.607593649 1.78909174 -0.41506494 #> 155 -0.69490618 -0.426028317 -0.545657497 5.65145742 -0.41506494 #> 156 -0.19166808 0.643698653 -0.483721345 -0.40900254 0.16772496 #> 157 -0.82618568 -0.416477183 -0.607593649 -0.40900254 -0.23726464 #> 158 1.53684540 -0.426028317 2.597602206 -0.40900254 -0.37555376 #> 159 -0.78242585 0.041977232 -0.437269232 -0.40900254 -0.41506494 #> 160 -0.80430576 -0.426028317 -0.592109611 -0.40900254 -0.41506494 #> 161 -0.65114634 -0.426028317 0.352416704 -0.40900254 -0.41506494 #> 162 -0.32294758 -0.426028317 -0.468237308 -0.40900254 0.28625850 #> 163 0.66164870 -0.378272648 0.816937842 3.22308659 -0.41506494 #> 164 -0.80430576 -0.416477183 -0.576625573 -0.40900254 2.05438380 #> 165 -0.71678609 -0.406926049 -0.576625573 -0.40900254 2.11365057 #> 166 -0.82618568 -0.426028317 -0.607593649 -0.40900254 -0.41506494 #> 167 0.48660936 3.585447818 -0.328880966 -0.40900254 -0.27677581 #> 168 -0.82618568 -0.426028317 -0.406301156 -0.40900254 -0.41506494 #> 169 -0.80430576 -0.426028317 -0.530173459 -0.38806831 1.61976082 #> 170 -0.82618568 -0.426028317 -0.607593649 -0.40900254 1.05672651 #> 171 -0.47610700 0.701005455 0.646613425 0.81564999 -0.41506494 #> 172 -0.76054593 -0.426028317 -0.437269232 -0.40900254 -0.01995315 #> 173 -0.82618568 -0.426028317 -0.592109611 -0.40900254 -0.39530935 #> 174 -0.78242585 -0.416477183 -0.421785194 -0.31479850 4.01018720 #> 175 2.43392201 -0.215903376 -0.034684245 -0.40900254 -0.40518715 #> 176 1.07736713 -0.426028317 -0.127588473 -0.39853543 -0.41506494 #> 177 0.20217044 -0.034431837 0.538225159 0.05155054 -0.41506494 #> 178 -0.82618568 
-0.426028317 0.182092286 -0.40900254 -0.41506494 #> 179 -0.80430576 -0.426028317 -0.607593649 -0.40900254 -0.41506494 #> 180 -0.25730783 0.844272459 -0.065652321 -0.10545619 -0.41506494 #> 181 -0.67302626 -0.416477183 -0.576625573 0.78424864 -0.41506494 #> 182 0.26781019 -0.426028317 -0.452753270 0.86798557 -0.41506494 #> 183 -0.41046725 -0.263659045 0.027251907 0.54350498 -0.41506494 #> 184 -0.36670742 -0.273210178 -0.174040587 -0.36713408 -0.30640920 #> 185 2.43392201 -0.378272648 -0.561141535 -0.40900254 -0.41506494 #> 186 -0.78242585 -0.416477183 -0.545657497 -0.37760120 -0.41506494 #> 187 0.31157002 0.548187316 -0.607593649 -0.40900254 -0.15824228 #> 188 -0.82618568 -0.426028317 -0.592109611 -0.40900254 -0.35579817 #> 189 -0.71678609 -0.340068114 -0.514689421 -0.40900254 -0.26689802 #> 190 0.81480812 0.739209989 -0.297912890 -0.25199581 -0.40518715 #> 191 0.00525118 -0.426028317 -0.499205383 -0.40900254 1.41232712 #> 192 1.12112697 -0.426028317 -0.561141535 -0.40900254 -0.41506494 #> 193 1.47120565 1.130806469 0.383384780 0.66911037 -0.05946433 #> 194 -0.56362667 -0.387823782 -0.576625573 0.02014920 0.52332558 #> 195 -0.21354799 0.901579261 0.491773045 0.50163652 -0.39530935 #> 196 -0.82618568 -0.426028317 -0.592109611 -0.40900254 -0.41506494 #> 197 -0.80430576 1.608363152 -0.514689421 -0.38806831 -0.37555376 #> 198 -0.80430576 -0.426028317 -0.530173459 -0.40900254 -0.25702023 #> 199 1.71188474 0.204346505 -0.421785194 -0.19966023 0.06894701 #> 200 3.72483714 -0.426028317 1.869852422 -0.40900254 -0.32616479 #> #> $removed #> character(0) #>"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_feature_importance.html","id":null,"dir":"Reference","previous_headings":"","what":"Get feature importance using the permutation method — get_feature_importance","title":"Get feature importance using the permutation method — get_feature_importance","text":"Calculates feature importance using trained model test data. Requires future.apply package.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_feature_importance.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Get feature importance using the permutation method — get_feature_importance","text":"","code":"get_feature_importance( trained_model, test_data, outcome_colname, perf_metric_function, perf_metric_name, class_probs, method, seed = NA, corr_thresh = 1, groups = NULL, nperms = 100, corr_method = \"spearman\" )"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_feature_importance.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Get feature importance using the permutation method — get_feature_importance","text":"trained_model Trained model caret::train(). test_data Held test data: dataframe outcome features. outcome_colname Column name string outcome variable (default NULL; first column chosen automatically). perf_metric_function Function calculate performance metric used cross-validation test performance. functions provided caret (see caret::defaultSummary()). Defaults: binary classification = twoClassSummary, multi-class classification = multiClassSummary, regression = defaultSummary. perf_metric_name column name output function provided perf_metric_function used performance metric. Defaults: binary classification = \"ROC\", multi-class classification = \"logLoss\", regression = \"RMSE\". class_probs Whether use class probabilities (TRUE categorical outcomes, FALSE numeric outcomes). method ML method. 
Options: c(\"glmnet\", \"rf\", \"rpart2\", \"svmRadial\", \"xgbTree\"). glmnet: linear, logistic, multiclass regression rf: random forest rpart2: decision tree svmRadial: support vector machine xgbTree: xgboost seed Random seed (default: NA). results reproducible set seed. corr_thresh feature importance, group correlations equal corr_thresh (range 0 1; default: 1). groups Vector feature names group together permutation. element string feature names separated pipe character (|). NULL (default), correlated features grouped together based corr_thresh. nperms number permutations perform (default: 100). corr_method correlation method. options supported stats::cor: spearman, pearson, kendall. (default: spearman)","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_feature_importance.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Get feature importance using the permutation method — get_feature_importance","text":"Data frame performance metrics feature (group correlated features; feat) permuted (perf_metric), differences actual test performance metric permuted performance metric (perf_metric_diff; test minus permuted performance), p-value (pvalue: probability obtaining actual performance value null hypothesis). Features larger perf_metric_diff important. performance metric name (perf_metric_name) seed (seed) also returned.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_feature_importance.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Get feature importance using the permutation method — get_feature_importance","text":"permutation tests, p-value number permutation statistics greater test statistic, divided number permutations. case, permutation statistic model performance (e.g. AUROC) randomizing order observations one feature, test statistic actual performance test data. default perform 100 permutations per feature; increasing increase precision estimating null distribution, also increases runtime. p-value represents probability obtaining actual performance event null hypothesis true, null hypothesis feature important model performance. strongly recommend providing multiple cores speed computation time. See vignette parallel processing details.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_feature_importance.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Get feature importance using the permutation method — get_feature_importance","text":"Begüm Topçuoğlu, topcuoglu.begum@gmail.com Zena Lapp, zenalapp@umich.edu Kelly Sovacool, sovacool@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_feature_importance.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Get feature importance using the permutation method — get_feature_importance","text":"","code":"if (FALSE) { # If you called `run_ml()` with `feature_importance = FALSE` (the default), # you can use `get_feature_importance()` later as long as you have the # trained model and test data. results <- run_ml(otu_small, \"glmnet\", kfold = 2, cv_times = 2) names(results$trained_model$trainingData)[1] <- \"dx\" feat_imp <- get_feature_importance(results$trained_model, results$trained_model$trainingData, results$test_data, \"dx\", multiClassSummary, \"AUC\", class_probs = TRUE, method = \"glmnet\" ) # We strongly recommend providing multiple cores to speed up computation time. 
# We strongly recommend providing multiple cores to speed up computation time. # Do this before calling `get_feature_importance()`. doFuture::registerDoFuture() future::plan(future::multicore, workers = 2) # Optionally, you can group features together with a custom grouping feat_imp <- get_feature_importance(results$trained_model, results$test_data, \"dx\", multiClassSummary, \"AUC\", class_probs = TRUE, method = \"glmnet\", groups = c( \"Otu00007\", \"Otu00008\", \"Otu00009\", \"Otu00011\", \"Otu00012\", \"Otu00015\", \"Otu00016\", \"Otu00018\", \"Otu00019\", \"Otu00020\", \"Otu00022\", \"Otu00023\", \"Otu00025\", \"Otu00028\", \"Otu00029\", \"Otu00030\", \"Otu00035\", \"Otu00036\", \"Otu00037\", \"Otu00038\", \"Otu00039\", \"Otu00040\", \"Otu00047\", \"Otu00050\", \"Otu00052\", \"Otu00054\", \"Otu00055\", \"Otu00056\", \"Otu00060\", \"Otu00003|Otu00002|Otu00005|Otu00024|Otu00032|Otu00041|Otu00053\", \"Otu00014|Otu00021|Otu00017|Otu00031|Otu00057\", \"Otu00013|Otu00006\", \"Otu00026|Otu00001|Otu00034|Otu00048\", \"Otu00033|Otu00010\", \"Otu00042|Otu00004\", \"Otu00043|Otu00027|Otu00049\", \"Otu00051|Otu00045\", \"Otu00058|Otu00044\", \"Otu00059|Otu00046\" ) ) # the function can show a progress bar if you have the `progressr` package installed. ## optionally, specify the progress bar format: progressr::handlers(progressr::handler_progress( format = \":message :bar :percent | elapsed: :elapsed | eta: :eta\", clear = FALSE, show_after = 0 )) ## tell progressr to always report progress progressr::handlers(global = TRUE) ## run the function and watch the live progress updates feat_imp <- get_feature_importance(results$trained_model, results$test_data, \"dx\", multiClassSummary, \"AUC\", class_probs = TRUE, method = \"glmnet\" ) # You can specify any correlation method supported by `stats::cor`: feat_imp <- get_feature_importance(results$trained_model, results$test_data, \"dx\", multiClassSummary, \"AUC\", class_probs = TRUE, method = \"glmnet\", corr_method = \"pearson\" ) }"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_hp_performance.html","id":null,"dir":"Reference","previous_headings":"","what":"Get hyperparameter performance metrics — get_hp_performance","title":"Get hyperparameter performance metrics — get_hp_performance","text":"Get hyperparameter performance metrics","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_hp_performance.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Get hyperparameter performance metrics — get_hp_performance","text":"","code":"get_hp_performance(trained_model)"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_hp_performance.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Get hyperparameter performance metrics — get_hp_performance","text":"trained_model trained model (e.g. run_ml())","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_hp_performance.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Get hyperparameter performance metrics — get_hp_performance","text":"Named list: dat: Dataframe performance metric group hyperparameters. params: Hyperparameters tuned.
metric: Performance metric used.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_hp_performance.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Get hyperparameter performance metrics — get_hp_performance","text":"Zena Lapp, zenalapp@umich.edu Kelly Sovacool sovacool@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_hp_performance.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Get hyperparameter performance metrics — get_hp_performance","text":"","code":"get_hp_performance(otu_mini_bin_results_glmnet$trained_model) #> $dat #> alpha lambda AUC #> 1 0 1e-04 0.6082552 #> 2 0 1e-03 0.6082552 #> 3 0 1e-02 0.6086458 #> 4 0 1e-01 0.6166789 #> 5 0 1e+00 0.6221737 #> 6 0 1e+01 0.6187408 #> #> $params #> [1] \"lambda\" #> #> $metric #> [1] \"AUC\" #>"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_hyperparams_list.html","id":null,"dir":"Reference","previous_headings":"","what":"Set hyperparameters based on ML method and dataset characteristics — get_hyperparams_list","title":"Set hyperparameters based on ML method and dataset characteristics — get_hyperparams_list","text":"details see vignette hyperparameter tuning.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_hyperparams_list.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Set hyperparameters based on ML method and dataset characteristics — get_hyperparams_list","text":"","code":"get_hyperparams_list(dataset, method)"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_hyperparams_list.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Set hyperparameters based on ML method and dataset characteristics — get_hyperparams_list","text":"dataset Data frame outcome variable columns features. method ML method. Options: c(\"glmnet\", \"rf\", \"rpart2\", \"svmRadial\", \"xgbTree\"). glmnet: linear, logistic, multiclass regression rf: random forest rpart2: decision tree svmRadial: support vector machine xgbTree: xgboost","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_hyperparams_list.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Set hyperparameters based on ML method and dataset characteristics — get_hyperparams_list","text":"Named list hyperparameters.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_hyperparams_list.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Set hyperparameters based on ML method and dataset characteristics — get_hyperparams_list","text":"Kelly Sovacool, sovacool@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_hyperparams_list.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Set hyperparameters based on ML method and dataset characteristics — get_hyperparams_list","text":"","code":"get_hyperparams_list(otu_mini_bin, \"rf\") #> $mtry #> [1] 2 3 6 #> get_hyperparams_list(otu_small, \"rf\") #> $mtry #> [1] 4 8 16 #> get_hyperparams_list(otu_mini_bin, \"rpart2\") #> $maxdepth #> [1] 1 2 4 8 16 30 #> get_hyperparams_list(otu_small, \"rpart2\") #> $maxdepth #> [1] 1 2 4 8 16 30 #>"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_outcome_type.html","id":null,"dir":"Reference","previous_headings":"","what":"Get outcome type. 
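# A minimal sketch (an editorial addition; assumes the companion function
# plot_hp_performance() from this package and that ggplot2 is installed) of
# visualizing the tuned hyperparameter returned by get_hp_performance():
hp_metrics <- get_hp_performance(otu_mini_bin_results_glmnet$trained_model)
plot_hp_performance(hp_metrics$dat, lambda, AUC)  # AUC across lambda values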
— get_outcome_type","title":"Get outcome type. — get_outcome_type","text":"outcome numeric, type continuous. Otherwise, outcome type binary two outcomes, multiclass more than two outcomes.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_outcome_type.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Get outcome type. — get_outcome_type","text":"","code":"get_outcome_type(outcomes_vec)"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_outcome_type.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Get outcome type. — get_outcome_type","text":"outcomes_vec Vector outcomes.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_outcome_type.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Get outcome type. — get_outcome_type","text":"Outcome type (continuous, binary, multiclass).","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_outcome_type.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Get outcome type. — get_outcome_type","text":"Zena Lapp, zenalapp@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_outcome_type.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Get outcome type. — get_outcome_type","text":"","code":"get_outcome_type(c(1, 2, 1)) #> [1] \"continuous\" get_outcome_type(c(\"a\", \"b\", \"b\")) #> [1] \"binary\" get_outcome_type(c(\"a\", \"b\", \"c\")) #> [1] \"multiclass\""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_partition_indices.html","id":null,"dir":"Reference","previous_headings":"","what":"Select indices to partition the data into training & testing sets. — get_partition_indices","title":"Select indices to partition the data into training & testing sets. — get_partition_indices","text":"Use function get row indices training set.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_partition_indices.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Select indices to partition the data into training & testing sets. — get_partition_indices","text":"","code":"get_partition_indices( outcomes, training_frac = 0.8, groups = NULL, group_partitions = NULL )"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_partition_indices.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Select indices to partition the data into training & testing sets. — get_partition_indices","text":"outcomes vector outcomes training_frac Fraction data training set (default: 0.8). Rows dataset randomly selected training set, remaining rows used testing set. Alternatively, provide vector integers, used row indices training set. remaining rows used testing set. groups Vector groups keep together splitting data train test sets. number groups training set larger kfold, groups also kept together cross-validation. Length matches number rows dataset (default: NULL). group_partitions Specify assign groups training testing partitions (default: NULL). groups specifies samples belong group \"A\" belong group \"B\", setting group_partitions = list(train = c(\"A\", \"B\"), test = c(\"B\")) result samples group \"A\" placed training set, samples \"B\" also training set, remaining samples \"B\" testing set. partition sizes close training_frac possible.
number groups training set larger kfold, groups also kept together cross-validation.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_partition_indices.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Select indices to partition the data into training & testing sets. — get_partition_indices","text":"Vector row indices training set.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_partition_indices.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Select indices to partition the data into training & testing sets. — get_partition_indices","text":"groups NULL, uses createDataPartition. Otherwise, uses create_grouped_data_partition(). Set seed prior calling function like data partitions reproducible (recommended).","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_partition_indices.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Select indices to partition the data into training & testing sets. — get_partition_indices","text":"Kelly Sovacool, sovacool@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_partition_indices.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Select indices to partition the data into training & testing sets. — get_partition_indices","text":"","code":"training_inds <- get_partition_indices(otu_mini_bin$dx) train_data <- otu_mini_bin[training_inds, ] test_data <- otu_mini_bin[-training_inds, ]"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_perf_metric_fn.html","id":null,"dir":"Reference","previous_headings":"","what":"Get default performance metric function — get_perf_metric_fn","title":"Get default performance metric function — get_perf_metric_fn","text":"Get default performance metric function","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_perf_metric_fn.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Get default performance metric function — get_perf_metric_fn","text":"","code":"get_perf_metric_fn(outcome_type)"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_perf_metric_fn.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Get default performance metric function — get_perf_metric_fn","text":"outcome_type Type outcome (one : \"continuous\",\"binary\",\"multiclass\").","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_perf_metric_fn.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Get default performance metric function — get_perf_metric_fn","text":"Performance metric function.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_perf_metric_fn.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Get default performance metric function — get_perf_metric_fn","text":"Zena Lapp, zenalapp@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_perf_metric_fn.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Get default performance metric function — get_perf_metric_fn","text":"","code":"get_perf_metric_fn(\"continuous\") #> function (data, lev = NULL, model = NULL) #> { #> if (is.character(data$obs)) #> data$obs <- factor(data$obs, levels = lev) #> postResample(data[, \"pred\"], data[, \"obs\"]) #> 
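# A minimal sketch (an editorial addition, not from the reference page) of a
# grouped train/test split with get_partition_indices(); the "A"/"B" sample
# group labels are hypothetical.
set.seed(2019)  # set a seed first so the partition is reproducible
sample_groups <- rep(c("A", "B"), length.out = nrow(otu_mini_bin))
train_inds <- get_partition_indices(otu_mini_bin$dx,
  training_frac = 0.8,
  groups = sample_groups,
  group_partitions = list(train = c("A", "B"), test = c("B"))
)  # all "A" samples go to training; "B" is split to approach training_frac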
} #> #> get_perf_metric_fn(\"binary\") #> function (data, lev = NULL, model = NULL) #> { #> if (!all(levels(data[, \"pred\"]) == levels(data[, \"obs\"]))) #> stop(\"levels of observed and predicted data do not match\") #> has_class_probs <- all(lev %in% colnames(data)) #> if (has_class_probs) { #> lloss <- mnLogLoss(data = data, lev = lev, model = model) #> requireNamespaceQuietStop(\"pROC\") #> requireNamespaceQuietStop(\"MLmetrics\") #> prob_stats <- lapply(levels(data[, \"pred\"]), function(x) { #> obs <- ifelse(data[, \"obs\"] == x, 1, 0) #> prob <- data[, x] #> roc_auc <- try(pROC::roc(obs, data[, x], direction = \"<\", #> quiet = TRUE), silent = TRUE) #> roc_auc <- if (inherits(roc_auc, \"try-error\")) #> NA #> else roc_auc$auc #> pr_auc <- try(MLmetrics::PRAUC(y_pred = data[, x], #> y_true = obs), silent = TRUE) #> if (inherits(pr_auc, \"try-error\")) #> pr_auc <- NA #> res <- c(ROC = roc_auc, AUC = pr_auc) #> return(res) #> }) #> prob_stats <- do.call(\"rbind\", prob_stats) #> prob_stats <- colMeans(prob_stats, na.rm = TRUE) #> } #> CM <- confusionMatrix(data[, \"pred\"], data[, \"obs\"], mode = \"everything\") #> if (length(levels(data[, \"pred\"])) == 2) { #> class_stats <- CM$byClass #> } #> else { #> class_stats <- colMeans(CM$byClass) #> names(class_stats) <- paste(\"Mean\", names(class_stats)) #> } #> overall_stats <- if (has_class_probs) #> c(CM$overall, logLoss = as.numeric(lloss), AUC = unname(prob_stats[\"ROC\"]), #> prAUC = unname(prob_stats[\"AUC\"])) #> else CM$overall #> stats <- c(overall_stats, class_stats) #> stats <- stats[!names(stats) %in% c(\"AccuracyNull\", \"AccuracyLower\", #> \"AccuracyUpper\", \"AccuracyPValue\", \"McnemarPValue\", \"Mean Prevalence\", #> \"Mean Detection Prevalence\")] #> names(stats) <- gsub(\"[[:blank:]]+\", \"_\", names(stats)) #> stat_list <- c(\"Accuracy\", \"Kappa\", \"Mean_F1\", \"Mean_Sensitivity\", #> \"Mean_Specificity\", \"Mean_Pos_Pred_Value\", \"Mean_Neg_Pred_Value\", #> \"Mean_Precision\", \"Mean_Recall\", \"Mean_Detection_Rate\", #> \"Mean_Balanced_Accuracy\") #> if (has_class_probs) #> stat_list <- c(\"logLoss\", \"AUC\", \"prAUC\", stat_list) #> if (length(levels(data[, \"pred\"])) == 2) #> stat_list <- gsub(\"^Mean_\", \"\", stat_list) #> stats <- stats[c(stat_list)] #> return(stats) #> } #> #> get_perf_metric_fn(\"multiclass\") #> function (data, lev = NULL, model = NULL) #> { #> if (!all(levels(data[, \"pred\"]) == levels(data[, \"obs\"]))) #> stop(\"levels of observed and predicted data do not match\") #> has_class_probs <- all(lev %in% colnames(data)) #> if (has_class_probs) { #> lloss <- mnLogLoss(data = data, lev = lev, model = model) #> requireNamespaceQuietStop(\"pROC\") #> requireNamespaceQuietStop(\"MLmetrics\") #> prob_stats <- lapply(levels(data[, \"pred\"]), function(x) { #> obs <- ifelse(data[, \"obs\"] == x, 1, 0) #> prob <- data[, x] #> roc_auc <- try(pROC::roc(obs, data[, x], direction = \"<\", #> quiet = TRUE), silent = TRUE) #> roc_auc <- if (inherits(roc_auc, \"try-error\")) #> NA #> else roc_auc$auc #> pr_auc <- try(MLmetrics::PRAUC(y_pred = data[, x], #> y_true = obs), silent = TRUE) #> if (inherits(pr_auc, \"try-error\")) #> pr_auc <- NA #> res <- c(ROC = roc_auc, AUC = pr_auc) #> return(res) #> }) #> prob_stats <- do.call(\"rbind\", prob_stats) #> prob_stats <- colMeans(prob_stats, na.rm = TRUE) #> } #> CM <- confusionMatrix(data[, \"pred\"], data[, \"obs\"], mode = \"everything\") #> if (length(levels(data[, \"pred\"])) == 2) { #> class_stats <- CM$byClass #> } #> else { #> class_stats <- 
colMeans(CM$byClass) #> names(class_stats) <- paste(\"Mean\", names(class_stats)) #> } #> overall_stats <- if (has_class_probs) #> c(CM$overall, logLoss = as.numeric(lloss), AUC = unname(prob_stats[\"ROC\"]), #> prAUC = unname(prob_stats[\"AUC\"])) #> else CM$overall #> stats <- c(overall_stats, class_stats) #> stats <- stats[!names(stats) %in% c(\"AccuracyNull\", \"AccuracyLower\", #> \"AccuracyUpper\", \"AccuracyPValue\", \"McnemarPValue\", \"Mean Prevalence\", #> \"Mean Detection Prevalence\")] #> names(stats) <- gsub(\"[[:blank:]]+\", \"_\", names(stats)) #> stat_list <- c(\"Accuracy\", \"Kappa\", \"Mean_F1\", \"Mean_Sensitivity\", #> \"Mean_Specificity\", \"Mean_Pos_Pred_Value\", \"Mean_Neg_Pred_Value\", #> \"Mean_Precision\", \"Mean_Recall\", \"Mean_Detection_Rate\", #> \"Mean_Balanced_Accuracy\") #> if (has_class_probs) #> stat_list <- c(\"logLoss\", \"AUC\", \"prAUC\", stat_list) #> if (length(levels(data[, \"pred\"])) == 2) #> stat_list <- gsub(\"^Mean_\", \"\", stat_list) #> stats <- stats[c(stat_list)] #> return(stats) #> } #> #> "},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_perf_metric_name.html","id":null,"dir":"Reference","previous_headings":"","what":"Get default performance metric name — get_perf_metric_name","title":"Get default performance metric name — get_perf_metric_name","text":"Get default performance metric name cross-validation.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_perf_metric_name.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Get default performance metric name — get_perf_metric_name","text":"","code":"get_perf_metric_name(outcome_type)"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_perf_metric_name.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Get default performance metric name — get_perf_metric_name","text":"outcome_type Type outcome (one : \"continuous\",\"binary\",\"multiclass\").","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_perf_metric_name.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Get default performance metric name — get_perf_metric_name","text":"Performance metric name.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_perf_metric_name.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Get default performance metric name — get_perf_metric_name","text":"Zena Lapp, zenalapp@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_perf_metric_name.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Get default performance metric name — get_perf_metric_name","text":"","code":"get_perf_metric_name(\"continuous\") #> [1] \"RMSE\" get_perf_metric_name(\"binary\") #> [1] \"AUC\" get_perf_metric_name(\"multiclass\") #> [1] \"logLoss\""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_performance_tbl.html","id":null,"dir":"Reference","previous_headings":"","what":"Get model performance metrics as a one-row tibble — get_performance_tbl","title":"Get model performance metrics as a one-row tibble — get_performance_tbl","text":"Get model performance metrics one-row tibble","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_performance_tbl.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Get model performance metrics as a one-row tibble — 
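# A minimal sketch (an editorial addition; assumes run_ml() accepts the
# perf_metric_function/perf_metric_name arguments documented for this package)
# of overriding the defaults returned by get_perf_metric_fn()/_name():
results_acc <- run_ml(otu_mini_bin, "glmnet",
  perf_metric_function = caret::defaultSummary,
  perf_metric_name = "Accuracy",  # a column returned by defaultSummary
  seed = 2019
)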
get_performance_tbl","text":"","code":"get_performance_tbl( trained_model, test_data, outcome_colname, perf_metric_function, perf_metric_name, class_probs, method, seed = NA )"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_performance_tbl.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Get model performance metrics as a one-row tibble — get_performance_tbl","text":"trained_model Trained model caret::train(). test_data Held test data: dataframe outcome features. outcome_colname Column name string outcome variable (default NULL; first column chosen automatically). perf_metric_function Function calculate performance metric used cross-validation test performance. functions provided caret (see caret::defaultSummary()). Defaults: binary classification = twoClassSummary, multi-class classification = multiClassSummary, regression = defaultSummary. perf_metric_name column name output function provided perf_metric_function used performance metric. Defaults: binary classification = \"ROC\", multi-class classification = \"logLoss\", regression = \"RMSE\". class_probs Whether use class probabilities (TRUE categorical outcomes, FALSE numeric outcomes). method ML method. Options: c(\"glmnet\", \"rf\", \"rpart2\", \"svmRadial\", \"xgbTree\"). glmnet: linear, logistic, multiclass regression rf: random forest rpart2: decision tree svmRadial: support vector machine xgbTree: xgboost seed Random seed (default: NA). results reproducible set seed.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_performance_tbl.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Get model performance metrics as a one-row tibble — get_performance_tbl","text":"one-row tibble column cross-validation performance, columns performance metrics test data, plus method, seed.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_performance_tbl.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Get model performance metrics as a one-row tibble — get_performance_tbl","text":"Kelly Sovacool, sovacool@umich.edu Zena Lapp, zenalapp@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_performance_tbl.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Get model performance metrics as a one-row tibble — get_performance_tbl","text":"","code":"if (FALSE) { results <- run_ml(otu_small, \"glmnet\", kfold = 2, cv_times = 2) names(results$trained_model$trainingData)[1] <- \"dx\" get_performance_tbl(results$trained_model, results$test_data, \"dx\", multiClassSummary, \"AUC\", class_probs = TRUE, method = \"glmnet\" ) }"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_tuning_grid.html","id":null,"dir":"Reference","previous_headings":"","what":"Generate the tuning grid for tuning hyperparameters — get_tuning_grid","title":"Generate the tuning grid for tuning hyperparameters — get_tuning_grid","text":"Generate tuning grid tuning hyperparameters","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_tuning_grid.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Generate the tuning grid for tuning hyperparameters — get_tuning_grid","text":"","code":"get_tuning_grid(hyperparams_list, 
method)"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_tuning_grid.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generate the tuning grid for tuning hyperparameters — get_tuning_grid","text":"hyperparams_list Named list lists hyperparameters. method ML method. Options: c(\"glmnet\", \"rf\", \"rpart2\", \"svmRadial\", \"xgbTree\"). glmnet: linear, logistic, multiclass regression rf: random forest rpart2: decision tree svmRadial: support vector machine xgbTree: xgboost","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_tuning_grid.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Generate the tuning grid for tuning hyperparameters — get_tuning_grid","text":"tuning grid.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_tuning_grid.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Generate the tuning grid for tuning hyperparameters — get_tuning_grid","text":"Begüm Topçuoğlu, topcuoglu.begum@gmail.com Kelly Sovacool, sovacool@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_tuning_grid.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Generate the tuning grid for tuning hyperparameters — get_tuning_grid","text":"","code":"ml_method <- \"glmnet\" hparams_list <- get_hyperparams_list(otu_small, ml_method) get_tuning_grid(hparams_list, ml_method) #> lambda alpha #> 1 1e-04 0 #> 2 1e-03 0 #> 3 1e-02 0 #> 4 1e-01 0 #> 5 1e+00 0 #> 6 1e+01 0"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/group_correlated_features.html","id":null,"dir":"Reference","previous_headings":"","what":"Group correlated features — group_correlated_features","title":"Group correlated features — group_correlated_features","text":"Group correlated features","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/group_correlated_features.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Group correlated features — group_correlated_features","text":"","code":"group_correlated_features( features, corr_thresh = 1, group_neg_corr = TRUE, corr_method = \"spearman\" )"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/group_correlated_features.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Group correlated features — group_correlated_features","text":"features dataframe column feature ML corr_thresh feature importance, group correlations equal corr_thresh (range 0 1; default: 1). group_neg_corr Whether group negatively correlated features together (e.g. c(0,1) c(1,0)). corr_method correlation method. options supported stats::cor: spearman, pearson, kendall. 
(default: spearman)","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/group_correlated_features.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Group correlated features — group_correlated_features","text":"vector element group correlated features separated pipes (|)","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/group_correlated_features.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Group correlated features — group_correlated_features","text":"Kelly Sovacool, sovacool@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/group_correlated_features.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Group correlated features — group_correlated_features","text":"","code":"features <- data.frame( a = 1:3, b = 2:4, c = c(1, 0, 1), d = (5:7), e = c(5, 1, 4), f = c(-1, 0, -1) ) group_correlated_features(features) #> [1] \"a|b|d\" \"c|f\" \"e\""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/mikropml.html","id":null,"dir":"Reference","previous_headings":"","what":"mikropml: User-Friendly R Package for Robust Machine Learning Pipelines — mikropml","title":"mikropml: User-Friendly R Package for Robust Machine Learning Pipelines — mikropml","text":"mikropml implements supervised machine learning pipelines using regression, support vector machines, decision trees, random forest, gradient-boosted trees. main functions preprocess_data() process data prior running machine learning, run_ml() run machine learning.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/mikropml.html","id":"authors","dir":"Reference","previous_headings":"","what":"Authors","title":"mikropml: User-Friendly R Package for Robust Machine Learning Pipelines — mikropml","text":"Begüm D. Topçuoğlu (ORCID) Zena Lapp (ORCID) Kelly L. Sovacool (ORCID) Evan Snitkin (ORCID) Jenna Wiens (ORCID) Patrick D. 
Schloss (ORCID)","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/mikropml.html","id":"see-vignettes","dir":"Reference","previous_headings":"","what":"See vignettes","title":"mikropml: User-Friendly R Package for Robust Machine Learning Pipelines — mikropml","text":"Introduction Preprocessing data Hyperparameter tuning Parallel processing mikropml paper","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_data_preproc.html","id":null,"dir":"Reference","previous_headings":"","what":"Mini OTU abundance dataset - preprocessed — otu_data_preproc","title":"Mini OTU abundance dataset - preprocessed — otu_data_preproc","text":"result running preprocess_data(\"otu_mini_bin\")","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_data_preproc.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Mini OTU abundance dataset - preprocessed — otu_data_preproc","text":"","code":"otu_data_preproc"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_data_preproc.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Mini OTU abundance dataset - preprocessed — otu_data_preproc","text":"object class list length 3.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin.html","id":null,"dir":"Reference","previous_headings":"","what":"Mini OTU abundance dataset — otu_mini_bin","title":"Mini OTU abundance dataset — otu_mini_bin","text":"dataset containing relatives abundances OTUs human stool samples binary outcome, dx. subset otu_small.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Mini OTU abundance dataset — otu_mini_bin","text":"","code":"otu_mini_bin"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Mini OTU abundance dataset — otu_mini_bin","text":"data frame dx column diagnosis: healthy cancerous (colorectal). 
columns OTU relative abundances.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin_results_glmnet.html","id":null,"dir":"Reference","previous_headings":"","what":"Results from running the pipeline with L2 logistic regression on otu_mini_bin with feature importance and grouping — otu_mini_bin_results_glmnet","title":"Results from running the pipeline with L2 logistic regression on otu_mini_bin with feature importance and grouping — otu_mini_bin_results_glmnet","text":"Results running pipeline L2 logistic regression otu_mini_bin feature importance grouping","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin_results_glmnet.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Results from running the pipeline with L2 logistic regression on otu_mini_bin with feature importance and grouping — otu_mini_bin_results_glmnet","text":"","code":"otu_mini_bin_results_glmnet"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin_results_glmnet.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Results from running the pipeline with L2 logistic regression on otu_mini_bin with feature importance and grouping — otu_mini_bin_results_glmnet","text":"object class list length 4.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin_results_rf.html","id":null,"dir":"Reference","previous_headings":"","what":"Results from running the pipeline with random forest on otu_mini_bin — otu_mini_bin_results_rf","title":"Results from running the pipeline with random forest on otu_mini_bin — otu_mini_bin_results_rf","text":"Results running pipeline random forest otu_mini_bin","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin_results_rf.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Results from running the pipeline with random forest on otu_mini_bin — otu_mini_bin_results_rf","text":"","code":"otu_mini_bin_results_rf"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin_results_rf.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Results from running the pipeline with random forest on otu_mini_bin — otu_mini_bin_results_rf","text":"object class list length 4.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin_results_rpart2.html","id":null,"dir":"Reference","previous_headings":"","what":"Results from running the pipeline with rpart2 on otu_mini_bin — otu_mini_bin_results_rpart2","title":"Results from running the pipeline with rpart2 on otu_mini_bin — otu_mini_bin_results_rpart2","text":"Results running pipeline rpart2 otu_mini_bin","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin_results_rpart2.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Results from running the pipeline with rpart2 on otu_mini_bin — otu_mini_bin_results_rpart2","text":"","code":"otu_mini_bin_results_rpart2"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin_results_rpart2.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Results from running the pipeline with rpart2 on otu_mini_bin — otu_mini_bin_results_rpart2","text":"object class list length 
4.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin_results_svmRadial.html","id":null,"dir":"Reference","previous_headings":"","what":"Results from running the pipeline with svmRadial on otu_mini_bin — otu_mini_bin_results_svmRadial","title":"Results from running the pipeline with svmRadial on otu_mini_bin — otu_mini_bin_results_svmRadial","text":"Results running pipeline svmRadial otu_mini_bin","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin_results_svmRadial.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Results from running the pipeline with svmRadial on otu_mini_bin — otu_mini_bin_results_svmRadial","text":"","code":"otu_mini_bin_results_svmRadial"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin_results_svmRadial.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Results from running the pipeline with svmRadial on otu_mini_bin — otu_mini_bin_results_svmRadial","text":"object class list length 4.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin_results_xgbTree.html","id":null,"dir":"Reference","previous_headings":"","what":"Results from running the pipeline with xbgTree on otu_mini_bin — otu_mini_bin_results_xgbTree","title":"Results from running the pipeline with xbgTree on otu_mini_bin — otu_mini_bin_results_xgbTree","text":"Results running pipeline xbgTree otu_mini_bin","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin_results_xgbTree.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Results from running the pipeline with xbgTree on otu_mini_bin — otu_mini_bin_results_xgbTree","text":"","code":"otu_mini_bin_results_xgbTree"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin_results_xgbTree.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Results from running the pipeline with xbgTree on otu_mini_bin — otu_mini_bin_results_xgbTree","text":"object class list length 4.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_cont_results_glmnet.html","id":null,"dir":"Reference","previous_headings":"","what":"Results from running the pipeline with glmnet on otu_mini_bin with Otu00001\nas the outcome — otu_mini_cont_results_glmnet","title":"Results from running the pipeline with glmnet on otu_mini_bin with Otu00001\nas the outcome — otu_mini_cont_results_glmnet","text":"Results running pipeline glmnet otu_mini_bin Otu00001 outcome","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_cont_results_glmnet.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Results from running the pipeline with glmnet on otu_mini_bin with Otu00001\nas the outcome — otu_mini_cont_results_glmnet","text":"","code":"otu_mini_cont_results_glmnet"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_cont_results_glmnet.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Results from running the pipeline with glmnet on otu_mini_bin with Otu00001\nas the outcome — otu_mini_cont_results_glmnet","text":"object class list length 4.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_cont_results_nocv.html","id":null,"dir":"Reference","previous_headings":"","what":"Results from running the pipeline with glmnet on otu_mini_bin with 
Otu00001\nas the outcome column,\nusing a custom train control scheme that does not perform cross-validation — otu_mini_cont_results_nocv","title":"Results from running the pipeline with glmnet on otu_mini_bin with Otu00001\nas the outcome column,\nusing a custom train control scheme that does not perform cross-validation — otu_mini_cont_results_nocv","text":"Results running pipeline glmnet otu_mini_bin Otu00001 outcome column, using custom train control scheme perform cross-validation","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_cont_results_nocv.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Results from running the pipeline with glmnet on otu_mini_bin with Otu00001\nas the outcome column,\nusing a custom train control scheme that does not perform cross-validation — otu_mini_cont_results_nocv","text":"","code":"otu_mini_cont_results_nocv"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_cont_results_nocv.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Results from running the pipeline with glmnet on otu_mini_bin with Otu00001\nas the outcome column,\nusing a custom train control scheme that does not perform cross-validation — otu_mini_cont_results_nocv","text":"object class list length 4.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_cv.html","id":null,"dir":"Reference","previous_headings":"","what":"Cross validation on train_data_mini with grouped features. — otu_mini_cv","title":"Cross validation on train_data_mini with grouped features. — otu_mini_cv","text":"Cross validation train_data_mini grouped features.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_cv.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Cross validation on train_data_mini with grouped features. — otu_mini_cv","text":"","code":"otu_mini_cv"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_cv.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Cross validation on train_data_mini with grouped features. — otu_mini_cv","text":"object class list length 27.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_multi.html","id":null,"dir":"Reference","previous_headings":"","what":"Mini OTU abundance dataset with 3 categorical variables — otu_mini_multi","title":"Mini OTU abundance dataset with 3 categorical variables — otu_mini_multi","text":"dataset containing relative abundances OTUs human stool samples","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_multi.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Mini OTU abundance dataset with 3 categorical variables — otu_mini_multi","text":"","code":"otu_mini_multi"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_multi.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Mini OTU abundance dataset with 3 categorical variables — otu_mini_multi","text":"data frame dx column colorectal cancer diagnosis: adenoma, carcinoma, normal. 
columns OTU relative abundances.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_multi_group.html","id":null,"dir":"Reference","previous_headings":"","what":"Groups for otu_mini_multi — otu_mini_multi_group","title":"Groups for otu_mini_multi — otu_mini_multi_group","text":"Groups otu_mini_multi","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_multi_group.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Groups for otu_mini_multi — otu_mini_multi_group","text":"","code":"otu_mini_multi_group"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_multi_group.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Groups for otu_mini_multi — otu_mini_multi_group","text":"object class character length 490.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_multi_results_glmnet.html","id":null,"dir":"Reference","previous_headings":"","what":"Results from running the pipeline with glmnet on otu_mini_multi for\nmulticlass outcomes — otu_mini_multi_results_glmnet","title":"Results from running the pipeline with glmnet on otu_mini_multi for\nmulticlass outcomes — otu_mini_multi_results_glmnet","text":"Results running pipeline glmnet otu_mini_multi multiclass outcomes","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_multi_results_glmnet.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Results from running the pipeline with glmnet on otu_mini_multi for\nmulticlass outcomes — otu_mini_multi_results_glmnet","text":"","code":"otu_mini_multi_results_glmnet"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_multi_results_glmnet.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Results from running the pipeline with glmnet on otu_mini_multi for\nmulticlass outcomes — otu_mini_multi_results_glmnet","text":"object class list length 4.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_small.html","id":null,"dir":"Reference","previous_headings":"","what":"Small OTU abundance dataset — otu_small","title":"Small OTU abundance dataset — otu_small","text":"dataset containing relative abundances 60 OTUs 60 human stool samples. subset data provided extdata/otu_large.csv, used Topçuoğlu et al. 2020.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_small.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Small OTU abundance dataset — otu_small","text":"","code":"otu_small"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_small.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Small OTU abundance dataset — otu_small","text":"data frame 60 rows 61 variables. dx column diagnosis: healthy cancerous (colorectal). 
columns OTU relative abundances.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/permute_p_value.html","id":null,"dir":"Reference","previous_headings":"","what":"Calculate a permuted p-value comparing two models — permute_p_value","title":"Calculate a permuted p-value comparing two models — permute_p_value","text":"Calculate permuted p-value comparing two models","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/permute_p_value.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Calculate a permuted p-value comparing two models — permute_p_value","text":"","code":"permute_p_value( merged_data, metric, group_name, group_1, group_2, nperm = 10000 )"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/permute_p_value.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Calculate a permuted p-value comparing two models — permute_p_value","text":"merged_data concatenated performance data run_ml metric metric compare, must numeric group_name column group variables compare group_1 name one group compare group_2 name group compare nperm number permutations, default=10000","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/permute_p_value.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Calculate a permuted p-value comparing two models — permute_p_value","text":"numeric p-value comparing two models","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/permute_p_value.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Calculate a permuted p-value comparing two models — permute_p_value","text":"Begüm Topçuoğlu, topcuoglu.begum@gmail.com Courtney R Armour, armourc@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/permute_p_value.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Calculate a permuted p-value comparing two models — permute_p_value","text":"","code":"df <- dplyr::tibble( model = c(\"rf\", \"rf\", \"glmnet\", \"glmnet\", \"svmRadial\", \"svmRadial\"), AUC = c(.2, 0.3, 0.8, 0.9, 0.85, 0.95) ) set.seed(123) permute_p_value(df, \"AUC\", \"model\", \"rf\", \"glmnet\", nperm = 100) #> [1] 0.3663366"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/plot_curves.html","id":null,"dir":"Reference","previous_headings":"","what":"Plot ROC and PRC curves — plot_mean_roc","title":"Plot ROC and PRC curves — plot_mean_roc","text":"Plot ROC PRC curves","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/plot_curves.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Plot ROC and PRC curves — plot_mean_roc","text":"","code":"plot_mean_roc(dat, ribbon_fill = \"#C6DBEF\", line_color = \"#08306B\") plot_mean_prc( dat, baseline_precision = NULL, ribbon_fill = \"#C7E9C0\", line_color = \"#00441B\" )"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/plot_curves.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Plot ROC and PRC curves — plot_mean_roc","text":"dat sensitivity, specificity, precision data calculated calc_mean_roc() ribbon_fill ribbon fill color (default: \"#D9D9D9\") line_color line color (default: \"#000000\") baseline_precision baseline precision 
calc_baseline_precision()","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/plot_curves.html","id":"functions","dir":"Reference","previous_headings":"","what":"Functions","title":"Plot ROC and PRC curves — plot_mean_roc","text":"plot_mean_roc(): Plot mean sensitivity specificity plot_mean_prc(): Plot mean precision recall","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/plot_curves.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Plot ROC and PRC curves — plot_mean_roc","text":"Courtney Armour Kelly Sovacool sovacool@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/plot_curves.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Plot ROC and PRC curves — plot_mean_roc","text":"","code":"if (FALSE) { library(dplyr) # get performance for multiple models get_sensspec_seed <- function(seed) { ml_result <- run_ml(otu_mini_bin, \"glmnet\", seed = seed) sensspec <- calc_model_sensspec( ml_result$trained_model, ml_result$test_data, \"dx\" ) %>% mutate(seed = seed) return(sensspec) } sensspec_dat <- purrr::map_dfr(seq(100, 102), get_sensspec_seed) # plot ROC & PRC sensspec_dat %>% calc_mean_roc() %>% plot_mean_roc() baseline_prec <- calc_baseline_precision(otu_mini_bin, \"dx\", \"cancer\") sensspec_dat %>% calc_mean_prc() %>% plot_mean_prc(baseline_precision = baseline_prec) }"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/plot_hp_performance.html","id":null,"dir":"Reference","previous_headings":"","what":"Plot hyperparameter performance metrics — plot_hp_performance","title":"Plot hyperparameter performance metrics — plot_hp_performance","text":"Plot hyperparameter performance metrics","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/plot_hp_performance.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Plot hyperparameter performance metrics — plot_hp_performance","text":"","code":"plot_hp_performance(dat, param_col, metric_col)"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/plot_hp_performance.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Plot hyperparameter performance metrics — plot_hp_performance","text":"dat dataframe hyperparameters performance metric (e.g. get_hp_performance() combine_hp_performance()) param_col hyperparameter plotted. must column dat. metric_col performance metric. 
must column dat.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/plot_hp_performance.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Plot hyperparameter performance metrics — plot_hp_performance","text":"ggplot hyperparameter performance.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/plot_hp_performance.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Plot hyperparameter performance metrics — plot_hp_performance","text":"Zena Lapp, zenalapp@umich.edu Kelly Sovacool sovacool@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/plot_hp_performance.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Plot hyperparameter performance metrics — plot_hp_performance","text":"","code":"# plot for a single `run_ml()` call hp_metrics <- get_hp_performance(otu_mini_bin_results_glmnet$trained_model) hp_metrics #> $dat #> alpha lambda AUC #> 1 0 1e-04 0.6082552 #> 2 0 1e-03 0.6082552 #> 3 0 1e-02 0.6086458 #> 4 0 1e-01 0.6166789 #> 5 0 1e+00 0.6221737 #> 6 0 1e+01 0.6187408 #> #> $params #> [1] \"lambda\" #> #> $metric #> [1] \"AUC\" #> plot_hp_performance(hp_metrics$dat, lambda, AUC) if (FALSE) { # plot for multiple `run_ml()` calls results <- lapply(seq(100, 102), function(seed) { run_ml(otu_small, \"glmnet\", seed = seed) }) models <- lapply(results, function(x) x$trained_model) hp_metrics <- combine_hp_performance(models) plot_hp_performance(hp_metrics$dat, lambda, AUC) }"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/plot_model_performance.html","id":null,"dir":"Reference","previous_headings":"","what":"Plot performance metrics for multiple ML runs with different parameters — plot_model_performance","title":"Plot performance metrics for multiple ML runs with different parameters — plot_model_performance","text":"ggplot2 required use function.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/plot_model_performance.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Plot performance metrics for multiple ML runs with different parameters — plot_model_performance","text":"","code":"plot_model_performance(performance_df)"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/plot_model_performance.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Plot performance metrics for multiple ML runs with different parameters — plot_model_performance","text":"performance_df dataframe performance results multiple calls run_ml()","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/plot_model_performance.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Plot performance metrics for multiple ML runs with different parameters — plot_model_performance","text":"ggplot2 plot performance.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/plot_model_performance.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Plot performance metrics for multiple ML runs with different parameters — plot_model_performance","text":"Begüm Topçuoğlu, topcuoglu.begum@gmail.com Kelly Sovacool, sovacool@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/plot_model_performance.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Plot performance metrics for multiple ML runs 
with different parameters — plot_model_performance","text":"","code":"if (FALSE) { # call `run_ml()` multiple times with different seeds results_lst <- lapply(seq(100, 104), function(seed) { run_ml(otu_small, \"glmnet\", seed = seed) }) # extract and combine the performance results perf_df <- lapply(results_lst, function(result) { result[[\"performance\"]] }) %>% dplyr::bind_rows() # plot the performance results p <- plot_model_performance(perf_df) # call `run_ml()` with different ML methods param_grid <- expand.grid( seeds = seq(100, 104), methods = c(\"glmnet\", \"rf\") ) results_mtx <- mapply( function(seed, method) { run_ml(otu_mini_bin, method, seed = seed, kfold = 2) }, param_grid$seeds, param_grid$methods ) # extract and combine the performance results perf_df2 <- dplyr::bind_rows(results_mtx[\"performance\", ]) # plot the performance results p <- plot_model_performance(perf_df2) # you can continue adding layers to customize the plot p + theme_classic() + scale_color_brewer(palette = \"Dark2\") + coord_flip() }"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/preprocess_data.html","id":null,"dir":"Reference","previous_headings":"","what":"Preprocess data prior to running machine learning — preprocess_data","title":"Preprocess data prior to running machine learning — preprocess_data","text":"Function preprocess data input run_ml().","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/preprocess_data.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Preprocess data prior to running machine learning — preprocess_data","text":"","code":"preprocess_data( dataset, outcome_colname, method = c(\"center\", \"scale\"), remove_var = \"nzv\", collapse_corr_feats = TRUE, to_numeric = TRUE, group_neg_corr = TRUE, prefilter_threshold = 1 )"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/preprocess_data.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Preprocess data prior to running machine learning — preprocess_data","text":"dataset Data frame outcome variable columns features. outcome_colname Column name string outcome variable (default NULL; first column chosen automatically). method Methods preprocess data, described caret::preProcess() (default: c(\"center\",\"scale\"), use NULL normalization). remove_var Whether remove variables near-zero variance ('nzv'; default), zero variance ('zv'), none (NULL). collapse_corr_feats Whether keep one perfectly correlated features. to_numeric Whether change features numeric possible. group_neg_corr Whether group negatively correlated features together (e.g. c(0,1) c(1,0)). prefilter_threshold Remove features non-zero & non-NA values N rows fewer (default: 1). Set -1 keep columns step. step also skipped to_numeric set FALSE.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/preprocess_data.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Preprocess data prior to running machine learning — preprocess_data","text":"Named list including: dat_transformed: Preprocessed data. grp_feats: features grouped together, named list features corresponding group. removed_feats: features removed preprocessing (e.g. zero variance near-zero variance features). 
progressr package installed, progress bar time elapsed estimated time completion can displayed.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/preprocess_data.html","id":"more-details","dir":"Reference","previous_headings":"","what":"More details","title":"Preprocess data prior to running machine learning — preprocess_data","text":"See preprocessing vignette details. Note values outcome_colname contain spaces, converted underscores compatibility caret.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/preprocess_data.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Preprocess data prior to running machine learning — preprocess_data","text":"Zena Lapp, zenalapp@umich.edu Kelly Sovacool, sovacool@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/preprocess_data.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Preprocess data prior to running machine learning — preprocess_data","text":"","code":"preprocess_data(mikropml::otu_small, \"dx\") #> Using 'dx' as the outcome column. #> $dat_transformed #> # A tibble: 200 × 61 #> dx Otu00001 Otu00002 Otu00003 Otu00004 Otu00005 Otu00006 Otu00…¹ Otu00008 #> #> 1 normal -0.420 -0.219 -0.174 -0.591 -0.0488 -0.167 -0.569 -0.0624 #> 2 normal -0.105 1.75 -0.718 0.0381 1.54 -0.573 -0.643 -0.132 #> 3 normal -0.708 0.696 1.43 0.604 -0.265 -0.0364 -0.612 -0.207 #> 4 normal -0.494 -0.665 2.02 -0.593 -0.676 -0.586 -0.552 -0.470 #> 5 normal 1.11 -0.395 -0.754 -0.586 -0.754 2.73 0.191 -0.676 #> 6 normal -0.685 0.614 -0.174 -0.584 0.376 0.804 -0.337 -0.00608 #> 7 cancer -0.770 -0.496 -0.318 0.159 -0.658 2.20 -0.717 0.0636 #> 8 normal -0.424 -0.478 -0.397 -0.556 -0.391 -0.0620 0.376 -0.0222 #> 9 normal -0.556 1.14 1.62 -0.352 -0.275 -0.465 -0.804 0.294 #> 10 cancer 1.46 -0.451 -0.694 -0.0567 -0.706 0.689 -0.370 1.59 #> # … with 190 more rows, 52 more variables: Otu00009 , Otu00010 , #> # Otu00011 , Otu00012 , Otu00013 , Otu00014 , #> # Otu00015 , Otu00016 , Otu00017 , Otu00018 , #> # Otu00019 , Otu00020 , Otu00021 , Otu00022 , #> # Otu00023 , Otu00024 , Otu00025 , Otu00026 , #> # Otu00027 , Otu00028 , Otu00029 , Otu00030 , #> # Otu00031 , Otu00032 , Otu00033 , Otu00034 , … #> #> $grp_feats #> NULL #> #> $removed_feats #> character(0) #> # the function can show a progress bar if you have the progressr package installed ## optionally, specify the progress bar format progressr::handlers(progressr::handler_progress( format = \":message :bar :percent | elapsed: :elapsed | eta: :eta\", clear = FALSE, show_after = 0 )) ## tell progressor to always report progress if (FALSE) { progressr::handlers(global = TRUE) ## run the function and watch the live progress updates dat_preproc <- preprocess_data(mikropml::otu_small, \"dx\") }"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/randomize_feature_order.html","id":null,"dir":"Reference","previous_headings":"","what":"Randomize feature order to eliminate any position-dependent effects — randomize_feature_order","title":"Randomize feature order to eliminate any position-dependent effects — randomize_feature_order","text":"Randomize feature order eliminate position-dependent effects","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/randomize_feature_order.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Randomize feature order to eliminate any position-dependent effects — 
randomize_feature_order","text":"","code":"randomize_feature_order(dataset, outcome_colname)"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/randomize_feature_order.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Randomize feature order to eliminate any position-dependent effects — randomize_feature_order","text":"dataset Data frame outcome variable columns features. outcome_colname Column name string outcome variable (default NULL; first column chosen automatically).","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/randomize_feature_order.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Randomize feature order to eliminate any position-dependent effects — randomize_feature_order","text":"Dataset feature order randomized.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/randomize_feature_order.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Randomize feature order to eliminate any position-dependent effects — randomize_feature_order","text":"Nick Lesniak, nlesniak@umich.edu Kelly Sovacool, sovacool@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/randomize_feature_order.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Randomize feature order to eliminate any position-dependent effects — randomize_feature_order","text":"","code":"dat <- data.frame( outcome = c(\"1\", \"2\", \"3\"), a = 4:6, b = 7:9, c = 10:12, d = 13:15 ) randomize_feature_order(dat, \"outcome\") #> outcome c b a d #> 1 1 10 7 4 13 #> 2 2 11 8 5 14 #> 3 3 12 9 6 15"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/reexports.html","id":null,"dir":"Reference","previous_headings":"","what":"caret contr.ltfr — reexports","title":"caret contr.ltfr — reexports","text":"objects imported packages. Follow links see documentation. caret contr.ltfr dplyr %>% rlang :=, !!, .data","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/remove_singleton_columns.html","id":null,"dir":"Reference","previous_headings":"","what":"Remove columns appearing in only threshold row(s) or fewer. — remove_singleton_columns","title":"Remove columns appearing in only threshold row(s) or fewer. — remove_singleton_columns","text":"Removes columns non-zero & non-NA values threshold row(s) fewer.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/remove_singleton_columns.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Remove columns appearing in only threshold row(s) or fewer. — remove_singleton_columns","text":"","code":"remove_singleton_columns(dat, threshold = 1)"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/remove_singleton_columns.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Remove columns appearing in only threshold row(s) or fewer. — remove_singleton_columns","text":"dat dataframe threshold Number rows. column non-zero & non-NA values threshold row(s) fewer, removed.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/remove_singleton_columns.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Remove columns appearing in only threshold row(s) or fewer. 
— remove_singleton_columns","text":"dataframe without singleton columns","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/remove_singleton_columns.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Remove columns appearing in only threshold row(s) or fewer. — remove_singleton_columns","text":"Kelly Sovacool, sovacool@umich.edu Courtney Armour","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/remove_singleton_columns.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Remove columns appearing in only threshold row(s) or fewer. — remove_singleton_columns","text":"","code":"remove_singleton_columns(data.frame(a = 1:3, b = c(0, 1, 0), c = 4:6)) #> $dat #> a c #> 1 1 4 #> 2 2 5 #> 3 3 6 #> #> $removed_feats #> [1] \"b\" #> remove_singleton_columns(data.frame(a = 1:3, b = c(0, 1, 0), c = 4:6), threshold = 0) #> $dat #> a b c #> 1 1 0 4 #> 2 2 1 5 #> 3 3 0 6 #> #> $removed_feats #> character(0) #> remove_singleton_columns(data.frame(a = 1:3, b = c(0, 1, NA), c = 4:6)) #> $dat #> a c #> 1 1 4 #> 2 2 5 #> 3 3 6 #> #> $removed_feats #> [1] \"b\" #> remove_singleton_columns(data.frame(a = 1:3, b = c(1, 1, 1), c = 4:6)) #> $dat #> a b c #> 1 1 1 4 #> 2 2 1 5 #> 3 3 1 6 #> #> $removed_feats #> character(0) #>"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/replace_spaces.html","id":null,"dir":"Reference","previous_headings":"","what":"Replace spaces in all elements of a character vector with underscores — replace_spaces","title":"Replace spaces in all elements of a character vector with underscores — replace_spaces","text":"Replace spaces elements character vector underscores","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/replace_spaces.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Replace spaces in all elements of a character vector with underscores — replace_spaces","text":"","code":"replace_spaces(x, new_char = \"_\")"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/replace_spaces.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Replace spaces in all elements of a character vector with underscores — replace_spaces","text":"x character vector new_char character replace spaces (default: _)","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/replace_spaces.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Replace spaces in all elements of a character vector with underscores — replace_spaces","text":"character vector spaces replaced new_char","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/replace_spaces.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Replace spaces in all elements of a character vector with underscores — replace_spaces","text":"Kelly Sovacool, sovacool@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/replace_spaces.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Replace spaces in all elements of a character vector with underscores — replace_spaces","text":"","code":"dat <- data.frame( dx = c(\"outcome 1\", \"outcome 2\", \"outcome 1\"), a = 1:3, b = c(5, 7, 1) ) dat$dx <- replace_spaces(dat$dx) dat #> dx a b #> 1 outcome_1 1 5 #> 2 outcome_2 2 7 #> 3 outcome_1 3 
1"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/run_ml.html","id":null,"dir":"Reference","previous_headings":"","what":"Run the machine learning pipeline — run_ml","title":"Run the machine learning pipeline — run_ml","text":"function splits data set train & test set, trains machine learning (ML) models using k-fold cross-validation, evaluates best model held-test set, optionally calculates feature importance using framework outlined Topçuoğlu et al. 2020 (doi:10.1128/mBio.00434-20 ). Required inputs data frame (must contain outcome variable columns features) ML method. See vignette('introduction') details.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/run_ml.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Run the machine learning pipeline — run_ml","text":"","code":"run_ml( dataset, method, outcome_colname = NULL, hyperparameters = NULL, find_feature_importance = FALSE, calculate_performance = TRUE, kfold = 5, cv_times = 100, cross_val = NULL, training_frac = 0.8, perf_metric_function = NULL, perf_metric_name = NULL, groups = NULL, group_partitions = NULL, corr_thresh = 1, seed = NA, ... )"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/run_ml.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Run the machine learning pipeline — run_ml","text":"dataset Data frame outcome variable columns features. method ML method. Options: c(\"glmnet\", \"rf\", \"rpart2\", \"svmRadial\", \"xgbTree\"). glmnet: linear, logistic, multiclass regression rf: random forest rpart2: decision tree svmRadial: support vector machine xgbTree: xgboost outcome_colname Column name string outcome variable (default NULL; first column chosen automatically). hyperparameters Dataframe hyperparameters (default NULL; sensible defaults chosen automatically). find_feature_importance Run permutation importance (default: FALSE). TRUE recommended like identify features important predicting outcome, resource-intensive. calculate_performance Whether calculate performance metrics (default: TRUE). might choose skip perform cross-validation model training. kfold Fold number k-fold cross-validation (default: 5). cv_times Number cross-validation partitions create (default: 100). cross_val custom cross-validation scheme caret::trainControl() (default: NULL, uses kfold cross validation repeated cv_times). kfold cv_times ignored user provides custom cross-validation scheme. See caret::trainControl() docs information use . training_frac Fraction data training set (default: 0.8). Rows dataset randomly selected training set, remaining rows used testing set. Alternatively, provide vector integers, used row indices training set. remaining rows used testing set. perf_metric_function Function calculate performance metric used cross-validation test performance. functions provided caret (see caret::defaultSummary()). Defaults: binary classification = twoClassSummary, multi-class classification = multiClassSummary, regression = defaultSummary. perf_metric_name column name output function provided perf_metric_function used performance metric. Defaults: binary classification = \"ROC\", multi-class classification = \"logLoss\", regression = \"RMSE\". groups Vector groups keep together splitting data train test sets. number groups training set larger kfold, groups also kept together cross-validation. Length matches number rows dataset (default: NULL). 
group_partitions Specify assign groups training testing partitions (default: NULL). groups specifies samples belong group \"\" belong group \"B\", setting group_partitions = list(train = c(\"\", \"B\"), test = c(\"B\")) result samples group \"\" placed training set, samples \"B\" also training set, remaining samples \"B\" testing set. partition sizes close training_frac possible. number groups training set larger kfold, groups also kept together cross-validation. corr_thresh feature importance, group correlations equal corr_thresh (range 0 1; default: 1). seed Random seed (default: NA). results reproducible set seed. ... additional arguments passed caret::train(), case weights via weights argument ntree rf models. See caret::train() docs details.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/run_ml.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Run the machine learning pipeline — run_ml","text":"Named list results: trained_model: Output caret::train(), including best model. test_data: Part data used testing. performance: Data frame performance metrics. first column cross-validation performance metric, last two columns ML method used seed (one set), respectively. columns performance metrics calculated test data. contains one row, can easily combine performance data frames multiple calls run_ml() (see vignette(\"parallel\")). feature_importance: feature importances calculated, data frame row feature correlated group. columns performance metric permuted data, difference true performance metric performance metric permuted data (true - permuted), feature name, ML method, performance metric name, seed (provided). AUC RMSE, higher perf_metric_diff , important feature predicting outcome. log loss, lower perf_metric_diff , important feature predicting outcome.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/run_ml.html","id":"more-details","dir":"Reference","previous_headings":"","what":"More details","title":"Run the machine learning pipeline — run_ml","text":"details, please see vignettes.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/run_ml.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Run the machine learning pipeline — run_ml","text":"Begüm Topçuoğlu, topcuoglu.begum@gmail.com Zena Lapp, zenalapp@umich.edu Kelly Sovacool, sovacool@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/run_ml.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Run the machine learning pipeline — run_ml","text":"","code":"if (FALSE) { # regression run_ml(otu_small, \"glmnet\", seed = 2019 ) # random forest w/ feature importance run_ml(otu_small, \"rf\", outcome_colname = \"dx\", find_feature_importance = TRUE ) # custom cross validation & hyperparameters run_ml(otu_mini_bin[, 2:11], \"glmnet\", outcome_colname = \"Otu00001\", seed = 2019, hyperparameters = list(lambda = c(1e-04), alpha = 0), cross_val = caret::trainControl(method = \"none\"), calculate_performance = FALSE ) }"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/sensspec.html","id":null,"dir":"Reference","previous_headings":"","what":"Calculate and summarize performance for ROC and PRC plots — calc_model_sensspec","title":"Calculate and summarize performance for ROC and PRC plots — calc_model_sensspec","text":"Use functions calculate cumulative sensitivity, specificity, recall, etc. 
single models, concatenate results together multiple models, compute mean ROC PRC. can plot mean ROC PRC curves visualize results. Note: functions assume binary outcome.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/sensspec.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Calculate and summarize performance for ROC and PRC plots — calc_model_sensspec","text":"","code":"calc_model_sensspec(trained_model, test_data, outcome_colname = NULL) calc_mean_roc(sensspec_dat) calc_mean_prc(sensspec_dat)"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/sensspec.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Calculate and summarize performance for ROC and PRC plots — calc_model_sensspec","text":"trained_model Trained model caret::train(). test_data Held test data: dataframe outcome features. outcome_colname Column name string outcome variable (default NULL; first column chosen automatically). sensspec_dat data frame created concatenating results calc_model_sensspec() multiple models.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/sensspec.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Calculate and summarize performance for ROC and PRC plots — calc_model_sensspec","text":"data frame summarized performance","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/sensspec.html","id":"functions","dir":"Reference","previous_headings":"","what":"Functions","title":"Calculate and summarize performance for ROC and PRC plots — calc_model_sensspec","text":"calc_model_sensspec(): Get sensitivity, specificity, precision model. calc_mean_roc(): Calculate mean sensitivity specificity multiple models calc_mean_prc(): Calculate mean precision recall multiple models","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/sensspec.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Calculate and summarize performance for ROC and PRC plots — calc_model_sensspec","text":"Courtney Armour Kelly Sovacool, sovacool@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/sensspec.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Calculate and summarize performance for ROC and PRC plots — calc_model_sensspec","text":"","code":"if (FALSE) { library(dplyr) # get cumulative performance for a single model sensspec_1 <- calc_model_sensspec( otu_mini_bin_results_glmnet$trained_model, otu_mini_bin_results_glmnet$test_data, \"dx\" ) head(sensspec_1) # get performance for multiple models get_sensspec_seed <- function(seed) { ml_result <- run_ml(otu_mini_bin, \"glmnet\", seed = seed) sensspec <- calc_model_sensspec( ml_result$trained_model, ml_result$test_data, \"dx\" ) %>% mutate(seed = seed) return(sensspec) } sensspec_dat <- purrr::map_dfr(seq(100, 102), get_sensspec_seed) # calculate mean sensitivity over specificity roc_dat <- calc_mean_roc(sensspec_dat) head(roc_dat) # calculate mean precision over recall prc_dat <- calc_mean_prc(sensspec_dat) head(prc_dat) # plot ROC & PRC roc_dat %>% plot_mean_roc() baseline_prec <- calc_baseline_precision(otu_mini_bin, \"dx\", \"cancer\") prc_dat %>% plot_mean_prc(baseline_precision = baseline_prec) }"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/shared_ggprotos.html","id":null,"dir":"Reference","previous_headings":"","what":"Get plot layers shared by plot_mean_roc and 
plot_mean_prc — shared_ggprotos","title":"Get plot layers shared by plot_mean_roc and plot_mean_prc — shared_ggprotos","text":"Get plot layers shared plot_mean_roc plot_mean_prc","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/shared_ggprotos.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Get plot layers shared by plot_mean_roc and plot_mean_prc — shared_ggprotos","text":"","code":"shared_ggprotos(ribbon_fill = \"#D9D9D9\", line_color = \"#000000\")"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/shared_ggprotos.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Get plot layers shared by plot_mean_roc and plot_mean_prc — shared_ggprotos","text":"ribbon_fill ribbon fill color (default: \"#D9D9D9\") line_color line color (default: \"#000000\")","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/shared_ggprotos.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Get plot layers shared by plot_mean_roc and plot_mean_prc — shared_ggprotos","text":"list ggproto objects add ggplot","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/shared_ggprotos.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Get plot layers shared by plot_mean_roc and plot_mean_prc — shared_ggprotos","text":"Kelly Sovacool sovacool@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/tidy_perf_data.html","id":null,"dir":"Reference","previous_headings":"","what":"Tidy the performance dataframe — tidy_perf_data","title":"Tidy the performance dataframe — tidy_perf_data","text":"Used plot_model_performance().","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/tidy_perf_data.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Tidy the performance dataframe — tidy_perf_data","text":"","code":"tidy_perf_data(performance_df)"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/tidy_perf_data.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Tidy the performance dataframe — tidy_perf_data","text":"performance_df dataframe performance results multiple calls run_ml()","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/tidy_perf_data.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Tidy the performance dataframe — tidy_perf_data","text":"Tidy dataframe model performance metrics.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/tidy_perf_data.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Tidy the performance dataframe — tidy_perf_data","text":"Begüm Topçuoğlu, topcuoglu.begum@gmail.com Kelly Sovacool, sovacool@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/tidy_perf_data.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Tidy the performance dataframe — tidy_perf_data","text":"","code":"if (FALSE) { # call `run_ml()` multiple times with different seeds results_lst <- lapply(seq(100, 104), function(seed) { run_ml(otu_small, \"glmnet\", seed = seed) }) # extract and combine the performance results perf_df <- lapply(results_lst, function(result) { result[[\"performance\"]] }) %>% dplyr::bind_rows() # make it pretty! 
tidy_perf_data(perf_df) }"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/train_model.html","id":null,"dir":"Reference","previous_headings":"","what":"Train model using caret::train(). — train_model","title":"Train model using caret::train(). — train_model","text":"Train model using caret::train().","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/train_model.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Train model using caret::train(). — train_model","text":"","code":"train_model( train_data, outcome_colname, method, cv, perf_metric_name, tune_grid, ... )"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/train_model.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Train model using caret::train(). — train_model","text":"train_data Training data. Expected subset full dataset. outcome_colname Column name string outcome variable (default NULL; first column chosen automatically). method ML method. Options: c(\"glmnet\", \"rf\", \"rpart2\", \"svmRadial\", \"xgbTree\"). glmnet: linear, logistic, multiclass regression rf: random forest rpart2: decision tree svmRadial: support vector machine xgbTree: xgboost cv Cross-validation caret scheme define_cv(). perf_metric_name column name output function provided perf_metric_function used performance metric. Defaults: binary classification = \"ROC\", multi-class classification = \"logLoss\", regression = \"RMSE\". tune_grid Tuning grid get_tuning_grid(). ... additional arguments passed caret::train(), case weights via weights argument ntree rf models. See caret::train() docs details.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/train_model.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Train model using caret::train(). — train_model","text":"Trained model caret::train().","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/train_model.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Train model using caret::train(). — train_model","text":"Zena Lapp, zenalapp@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/train_model.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Train model using caret::train(). — train_model","text":"","code":"if (FALSE) { training_data <- otu_mini_bin_results_glmnet$trained_model$trainingData %>% dplyr::rename(dx = .outcome) method <- \"rf\" hyperparameters <- get_hyperparams_list(otu_mini_bin, method) cross_val <- define_cv(training_data, \"dx\", hyperparameters, perf_metric_function = caret::multiClassSummary, class_probs = TRUE, cv_times = 2 ) tune_grid <- get_tuning_grid(hyperparameters, method) rf_model <- train_model( training_data, \"dx\", method, cross_val, \"AUC\", tune_grid, ntree = 1000 ) rf_model$results %>% dplyr::select(mtry, AUC, prAUC) }"},{"path":"http://www.schlosslab.org/mikropml/dev/news/index.html","id":"mikropml-150","dir":"Changelog","previous_headings":"","what":"mikropml 1.5.0","title":"mikropml 1.5.0","text":"CRAN release: 2023-01-16 New example showing plot feature importances parallel vignette (#310, @kelly-sovacool). can now use parRF, parallel implementation rf method, default hyperparameters rf set automatically (#306, @kelly-sovacool). calc_model_sensspec() - calculate sensitivity, specificity, precision model. 
calc_mean_roc() & plot_mean_roc() - calculate & plot specificity mean sensitivity multiple models. calc_mean_prc() & plot_mean_prc() - calculate & plot recall mean precision multiple models.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/news/index.html","id":"mikropml-140","dir":"Changelog","previous_headings":"","what":"mikropml 1.4.0","title":"mikropml 1.4.0","text":"CRAN release: 2022-10-16 Users can now pass model-specific arguments (e.g. weights) caret::train(), allowing greater flexibility. Improved tests (#298, #300, #303 #kelly-sovacool) Minor documentation improvements.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/news/index.html","id":"mikropml-130","dir":"Changelog","previous_headings":"","what":"mikropml 1.3.0","title":"mikropml 1.3.0","text":"CRAN release: 2022-05-20 mikropml now requires R version 4.1.0 greater due update randomForest package (#292). New function compare_models() compares performance two models permutation test (#295, @courtneyarmour). Fixed bug cv_times affect reported repeats cross-validation (#291, @kelly-sovacool). Made minor documentation improvements (#293, @kelly-sovacool)","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/news/index.html","id":"mikropml-122","dir":"Changelog","previous_headings":"","what":"mikropml 1.2.2","title":"mikropml 1.2.2","text":"CRAN release: 2022-02-03 minor patch fixes test failure platforms long doubles. actual package code remains unchanged.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/news/index.html","id":"mikropml-121","dir":"Changelog","previous_headings":"","what":"mikropml 1.2.1","title":"mikropml 1.2.1","text":"CRAN release: 2022-01-30 using groups parameter, groups kept together cross-validation partitions kfold <= number groups training set. Previously, error thrown condition met. Now, enough groups training set groups kept together CV, groups allowed split across CV partitions. Report p-values permutation feature importance (#288, @kelly-sovacool).","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/news/index.html","id":"mikropml-120","dir":"Changelog","previous_headings":"","what":"mikropml 1.2.0","title":"mikropml 1.2.0","text":"CRAN release: 2021-11-10 Also added new parameter calculate_performance, controls whether performance metrics calculated (default: TRUE). Users may wish skip performance calculations training models cross-validation. New parameter group_partitions added run_ml() allows users control groups go partition train/test split (#281, @kelly-sovacool). default, training_frac fraction 0 1 specifies much dataset used training fraction train/test split. Users can instead give training_frac vector indices correspond rows dataset go training fraction train/test split. 
gives users direct control exactly observations training fraction desired.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/news/index.html","id":"mikropml-111","dir":"Changelog","previous_headings":"","what":"mikropml 1.1.1","title":"mikropml 1.1.1","text":"CRAN release: 2021-09-14 Also, group_correlated_features() now user-facing function.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/news/index.html","id":"mikropml-110","dir":"Changelog","previous_headings":"","what":"mikropml 1.1.0","title":"mikropml 1.1.0","text":"CRAN release: 2021-08-10 default still “spearman”, now can use methods supported stats::cor corr_method parameter: get_feature_importance(corr_method = \"pearson\") now video tutorials covering mikropml skills related machine learning, created @pschloss (#270). Fixed bug preprocess_data() converted outcome column character vector (#273, @kelly-sovacool, @ecmaggioncalda).","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/news/index.html","id":"mikropml-100","dir":"Changelog","previous_headings":"","what":"mikropml 1.0.0","title":"mikropml 1.0.0","text":"CRAN release: 2021-05-13 mikropml now logo created @NLesniak! Made documentation improvements (#238, #231 @kelly-sovacool; #256 @BTopcuoglu). Remove features appear N=prefilter_threshold fewer rows data. Created function remove_singleton_columns() called preprocess_data() carry . Provide custom groups features permute together permutation importance. groups NULL default; case, correlated features corr_thresh grouped together. preprocess_data() now replaces spaces outcome column underscores (#247, @kelly-sovacool, @JonnyTran). Clarify intro vignette support multi-label outcomes. (#254, @zenalapp) Optional progress bar preprocess_data() get_feature_importance() using progressr package (#257, @kelly-sovacool, @JonnyTran, @FedericoComoglio). mikropml paper soon published JOSS!","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/news/index.html","id":"mikropml-002","dir":"Changelog","previous_headings":"","what":"mikropml 0.0.2","title":"mikropml 0.0.2","text":"CRAN release: 2020-12-03 Fixed test failure Solaris. Fixed multiple test failures R 3.6.2 due stringsAsFactors behavior. Made minor documentation improvements. Moved rpart Suggests Imports consistency packages used model training.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/news/index.html","id":"mikropml-001","dir":"Changelog","previous_headings":"","what":"mikropml 0.0.1","title":"mikropml 0.0.1","text":"CRAN release: 2020-11-23 first release version mikropml! 🎉 Added NEWS.md file track changes package. 
run_ml() preprocess_data() plot_model_performance() plot_hp_performance() glmnet: logistic linear regression rf: random forest rpart2: decision trees svmRadial: support vector machines xgbTree: gradient-boosted trees Introduction Preprocess data Hyperparameter tuning Parallel processing mikropml paper","code":""}] +[{"path":"http://www.schlosslab.org/mikropml/dev/CODE_OF_CONDUCT.html","id":null,"dir":"","previous_headings":"","what":"Contributor Covenant Code of Conduct","title":"Contributor Covenant Code of Conduct","text":"document adapted Tidyverse Code Conduct.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CODE_OF_CONDUCT.html","id":"our-pledge","dir":"","previous_headings":"","what":"Our Pledge","title":"Contributor Covenant Code of Conduct","text":"members, contributors, leaders pledge make participation community harassment-free experience everyone, regardless age, body size, visible invisible disability, ethnicity, sex characteristics, gender identity expression, level experience, education, socio-economic status, nationality, personal appearance, race, religion, sexual identity orientation. pledge act interact ways contribute open, welcoming, diverse, inclusive, healthy community.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CODE_OF_CONDUCT.html","id":"our-standards","dir":"","previous_headings":"","what":"Our Standards","title":"Contributor Covenant Code of Conduct","text":"Examples behavior contributes positive environment community include: Demonstrating empathy kindness toward people respectful differing opinions, viewpoints, experiences Giving gracefully accepting constructive feedback Accepting responsibility apologizing affected mistakes, learning experience Focusing best just us individuals, overall community Examples unacceptable behavior include: use sexualized language imagery, sexual attention advances kind Trolling, insulting derogatory comments, personal political attacks Public private harassment Publishing others’ private information, physical email address, without explicit permission conduct reasonably considered inappropriate professional setting","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CODE_OF_CONDUCT.html","id":"enforcement-responsibilities","dir":"","previous_headings":"","what":"Enforcement Responsibilities","title":"Contributor Covenant Code of Conduct","text":"Community leaders responsible clarifying enforcing standards acceptable behavior take appropriate fair corrective action response behavior deem inappropriate, threatening, offensive, harmful. Community leaders right responsibility remove, edit, reject comments, commits, code, wiki edits, issues, contributions aligned Code Conduct, communicate reasons moderation decisions appropriate.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CODE_OF_CONDUCT.html","id":"scope","dir":"","previous_headings":"","what":"Scope","title":"Contributor Covenant Code of Conduct","text":"Code Conduct applies within community spaces, also applies individual officially representing community public spaces. 
Examples representing community include using official e-mail address, posting via official social media account, acting appointed representative online offline event.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CODE_OF_CONDUCT.html","id":"enforcement","dir":"","previous_headings":"","what":"Enforcement","title":"Contributor Covenant Code of Conduct","text":"Instances abusive, harassing, otherwise unacceptable behavior may reported community leaders responsible enforcement [INSERT CONTACT METHOD]. complaints reviewed investigated promptly fairly. community leaders obligated respect privacy security reporter incident.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CODE_OF_CONDUCT.html","id":"enforcement-guidelines","dir":"","previous_headings":"","what":"Enforcement Guidelines","title":"Contributor Covenant Code of Conduct","text":"Community leaders follow Community Impact Guidelines determining consequences action deem violation Code Conduct:","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CODE_OF_CONDUCT.html","id":"id_1-correction","dir":"","previous_headings":"Enforcement Guidelines","what":"1. Correction","title":"Contributor Covenant Code of Conduct","text":"Community Impact: Use inappropriate language behavior deemed unprofessional unwelcome community. Consequence: private, written warning community leaders, providing clarity around nature violation explanation behavior inappropriate. public apology may requested.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CODE_OF_CONDUCT.html","id":"id_2-warning","dir":"","previous_headings":"Enforcement Guidelines","what":"2. Warning","title":"Contributor Covenant Code of Conduct","text":"Community Impact: violation single incident series actions. Consequence: warning consequences continued behavior. interaction people involved, including unsolicited interaction enforcing Code Conduct, specified period time. includes avoiding interactions community spaces well external channels like social media. Violating terms may lead temporary permanent ban.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CODE_OF_CONDUCT.html","id":"id_3-temporary-ban","dir":"","previous_headings":"Enforcement Guidelines","what":"3. Temporary Ban","title":"Contributor Covenant Code of Conduct","text":"Community Impact: serious violation community standards, including sustained inappropriate behavior. Consequence: temporary ban sort interaction public communication community specified period time. public private interaction people involved, including unsolicited interaction enforcing Code Conduct, allowed period. Violating terms may lead permanent ban.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CODE_OF_CONDUCT.html","id":"id_4-permanent-ban","dir":"","previous_headings":"Enforcement Guidelines","what":"4. Permanent Ban","title":"Contributor Covenant Code of Conduct","text":"Community Impact: Demonstrating pattern violation community standards, including sustained inappropriate behavior, harassment individual, aggression toward disparagement classes individuals. Consequence: permanent ban sort public interaction within community.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CODE_OF_CONDUCT.html","id":"attribution","dir":"","previous_headings":"","what":"Attribution","title":"Contributor Covenant Code of Conduct","text":"Code Conduct adapted Contributor Covenant, version 2.0, available https://www.contributor-covenant.org/version/2/0/code_of_conduct.html. 
Community Impact Guidelines inspired Mozilla’s code conduct enforcement ladder. answers common questions code conduct, see FAQ https://www.contributor-covenant.org/faq. Translations available https:// www.contributor-covenant.org/translations.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CONTRIBUTING.html","id":null,"dir":"","previous_headings":"","what":"Contributing to mikropml","title":"Contributing to mikropml","text":"document adapted Tidyverse Contributing guide.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CONTRIBUTING.html","id":"fixing-typos","dir":"","previous_headings":"","what":"Fixing typos","title":"Contributing to mikropml","text":"can fix typos, spelling mistakes, grammatical errors documentation directly using GitHub web interface, long changes made source file. generally means ’ll need edit roxygen2 comments .R, .Rd file. can find .R file generates .Rd reading comment first line.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CONTRIBUTING.html","id":"bigger-changes","dir":"","previous_headings":"","what":"Bigger changes","title":"Contributing to mikropml","text":"want make bigger change, ’s good idea first file issue make sure someone team agrees ’s needed. ’ve found bug, please file issue illustrates bug minimal reprex (also help write unit test, needed).","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CONTRIBUTING.html","id":"pull-request-process","dir":"","previous_headings":"Bigger changes","what":"Pull request process","title":"Contributing to mikropml","text":"Fork package clone onto computer. haven’t done , recommend using usethis::create_from_github(\"SchlossLab/mikropml\", fork = TRUE). Install development dependences devtools::install_dev_deps(), make sure package passes R CMD check running devtools::check(). R CMD check doesn’t pass cleanly, ’s good idea ask help continuing. Create Git branch pull request (PR). recommend using usethis::pr_init(\"brief-description--change\"). Make changes, commit git, create PR running usethis::pr_push(), following prompts browser. title PR briefly describe change. body PR contain Fixes #issue-number. user-facing changes, add bullet top NEWS.md (.e. just first header). Follow style described https://style.tidyverse.org/news.html.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CONTRIBUTING.html","id":"code-style","dir":"","previous_headings":"Bigger changes","what":"Code style","title":"Contributing to mikropml","text":"New code follow tidyverse style guide. can use styler package apply styles, please don’t restyle code nothing PR. use roxygen2, Markdown syntax, documentation. use testthat unit tests. Contributions test cases included easier accept.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/CONTRIBUTING.html","id":"code-of-conduct","dir":"","previous_headings":"","what":"Code of Conduct","title":"Contributing to mikropml","text":"Please note mikropml project released Contributor Code Conduct. contributing project agree abide terms.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/LICENSE.html","id":null,"dir":"","previous_headings":"","what":"MIT License","title":"MIT License","text":"Copyright (c) 2019 Begüm D. Topçuoğlu, Zena Lapp, Kelly L. Sovacool, Evan Snitkin, Jenna Wiens, Patrick D. 
Schloss Permission hereby granted, free charge, person obtaining copy software associated documentation files (“Software”), deal Software without restriction, including without limitation rights use, copy, modify, merge, publish, distribute, sublicense, /sell copies Software, permit persons Software furnished , subject following conditions: copyright notice permission notice shall included copies substantial portions Software. SOFTWARE PROVIDED “”, WITHOUT WARRANTY KIND, EXPRESS IMPLIED, INCLUDING LIMITED WARRANTIES MERCHANTABILITY, FITNESS PARTICULAR PURPOSE NONINFRINGEMENT. EVENT SHALL AUTHORS COPYRIGHT HOLDERS LIABLE CLAIM, DAMAGES LIABILITY, WHETHER ACTION CONTRACT, TORT OTHERWISE, ARISING , CONNECTION SOFTWARE USE DEALINGS SOFTWARE.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/SUPPORT.html","id":null,"dir":"","previous_headings":"","what":"Getting help with mikropml","title":"Getting help with mikropml","text":"Thanks using mikropml! filing issue, places explore pieces put together make process smooth possible.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/SUPPORT.html","id":"make-a-reprex","dir":"","previous_headings":"","what":"Make a reprex","title":"Getting help with mikropml","text":"Start making minimal reproducible example using reprex package. haven’t heard used reprex , ’re treat! Seriously, reprex make R-question-asking endeavors easier (pretty insane ROI five ten minutes ’ll take learn ’s ). additional reprex pointers, check Get help! section tidyverse site.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/SUPPORT.html","id":"where-to-ask","dir":"","previous_headings":"","what":"Where to ask?","title":"Getting help with mikropml","text":"Armed reprex, next step figure ask. ’s question: start Discussions, /StackOverflow. people answer questions. ’s bug: ’re right place, file issue. ’re sure: let community help figure first asking Discussions! problem bug feature request, can easily return report . opening new issue, sure search issues pull requests make sure bug hasn’t reported /already fixed development version. default, search pre-populated :issue :open. can edit qualifiers (e.g. :pr, :closed) needed. example, ’d simply remove :open search issues repo, open closed.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/SUPPORT.html","id":"what-happens-next","dir":"","previous_headings":"","what":"What happens next?","title":"Getting help with mikropml","text":"efficient possible, development tidyverse packages tends bursty, shouldn’t worry don’t get immediate response. Typically don’t look repo sufficient quantity issues accumulates, ’s burst intense activity focus efforts. makes development efficient avoids expensive context switching problems, cost taking longer get back . process makes good reprex particularly important might multiple months initial report start working . can’t reproduce bug, can’t fix !","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"its-running-so-slow","dir":"Articles","previous_headings":"","what":"It’s running so slow!","title":"Introduction to mikropml","text":"Since assume lot won’t read entire vignette, ’m going say beginning. run_ml() function running super slow, consider parallelizing. 
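As a quick illustration of the parallelization advice above — a minimal sketch, assuming the future and doFuture packages are installed (the full setup is covered in the parallel vignette referenced next). `multisession` is used here because, as noted later in that vignette, `multicore` does not work inside RStudio or on Windows:

```r
library(mikropml)

doFuture::registerDoFuture()                      # let the pipeline's internal loops use futures
future::plan(future::multisession, workers = 2)   # parallel-safe choice, incl. RStudio/Windows

# run_ml() is then called as usual; certain steps now run in parallel
result <- run_ml(otu_mini_bin, "glmnet", outcome_colname = "dx", seed = 2019)
```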
See vignette(\"parallel\") examples.","code":""},{"path":[]},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"the-input-data","dir":"Articles","previous_headings":"Understanding the inputs","what":"The input data","title":"Introduction to mikropml","text":"input data run_ml() dataframe row sample observation. One column (assumed first) outcome interest, columns features. package otu_mini_bin small example dataset mikropml. , dx outcome column (normal cancer), 10 features (Otu00001 Otu00010). 2 outcomes, performing binary classification majority examples . bottom, also briefly provide examples multi-class continuous outcomes. ’ll see, run way binary classification! feature columns amount Operational Taxonomic Unit (OTU) microbiome samples patients cancer without cancer. goal predict dx, stands diagnosis. diagnosis can cancer based individual’s microbiome. need understand exactly means, ’re interested can read original paper (Topçuoğlu et al. 2020). real machine learning applications ’ll need use features, purposes vignette ’ll stick example dataset everything runs faster.","code":"# install.packages(\"devtools\") # devtools::install_github(\"SchlossLab/mikropml\") library(mikropml) head(otu_mini_bin) #> dx Otu00001 Otu00002 Otu00003 Otu00004 Otu00005 Otu00006 Otu00007 #> 1 normal 350 268 213 1 208 230 70 #> 2 normal 568 1320 13 293 671 103 48 #> 3 normal 151 756 802 556 145 271 57 #> 4 normal 299 30 1018 0 25 99 75 #> 5 normal 1409 174 0 3 2 1136 296 #> 6 normal 167 712 213 4 332 534 139 #> Otu00008 Otu00009 Otu00010 #> 1 230 235 64 #> 2 204 119 115 #> 3 176 37 710 #> 4 78 255 197 #> 5 1 537 533 #> 6 251 155 122"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"the-methods-we-support","dir":"Articles","previous_headings":"Understanding the inputs","what":"The methods we support","title":"Introduction to mikropml","text":"methods use supported great ML wrapper package caret, use train machine learning models. methods tested (backend packages) : Logistic/multiclass/linear regression (\"glmnet\") Random forest (\"rf\") Decision tree (\"rpart2\") Support vector machine radial basis kernel (\"svmRadial\") xgboost (\"xgbTree\") documentation methods, well many others, can look available models (see list tag). vetted models used caret, function general enough others might work. can’t promise can help models, feel free [start new discussion GitHub]https://github.com/SchlossLab/mikropml/discussions) questions models might able help. first focus glmnet, default implementation L2-regularized logistic regression. cover examples towards end.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"before-running-ml","dir":"Articles","previous_headings":"","what":"Before running ML","title":"Introduction to mikropml","text":"execute run_ml(), consider preprocessing data, either preprocess_data() function. can learn preprocessing vignette: vignette(\"preprocess\").","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"the-simplest-way-to-run_ml","dir":"Articles","previous_headings":"","what":"The simplest way to run_ml()","title":"Introduction to mikropml","text":"mentioned , minimal input dataset (dataset) machine learning model want use (method). may also want provide: outcome column name. default run_ml() pick first column, ’s best practice specify column name explicitly. seed results reproducible, get results see (.e train/test split). 
Say want use logistic regression, method use glmnet. , run ML pipeline : ’ll notice things: takes little run. parameters use. message stating ‘dx’ used outcome column. want, ’s nice sanity check! warning. Don’t worry warning right now - just means hyperparameters aren’t good fit - ’re interested learning , see vignette(\"tuning\"). Now, let’s dig output bit. results list 4 things: trained_model trained model caret. bunch info won’t get , can learn caret::train() documentation. test_data partition dataset used testing. machine learning, ’s always important held-test dataset used training stage. pipeline using run_ml() split data training testing sets. training data used build model (e.g. tune hyperparameters, learn data) test data used evaluate well model performs. performance dataframe (mainly) performance metrics (1 column cross-validation performance metric, several test performance metrics, 2 columns end ML method seed): using logistic regression binary classification, area receiver-operator characteristic curve (AUC) useful metric evaluate model performance. , ’s default use mikropml. However, crucial evaluate model performance using multiple metrics. can find information performance metrics use package. cv_metric_AUC AUC cross-validation folds training data. gives us sense well model performs training data. columns performance metrics test data — data wasn’t used build model. , can see AUC test data much 0.5, suggesting model predict much better chance, model overfit cross-validation AUC (cv_metric_AUC, measured training) much higher testing AUC. isn’t surprising since ’re using features example dataset, don’t discouraged. default option also provides number performance metrics might interested , including area precision-recall curve (prAUC). last columns results$performance method seed (set one) help combining results multiple runs (see vignette(\"parallel\")). feature_importance information feature importance values find_feature_importance = TRUE (default FALSE). 
Since used defaults, ’s nothing :","code":"results <- run_ml(otu_mini_bin, \"glmnet\", outcome_colname = \"dx\", seed = 2019 ) names(results) #> [1] \"trained_model\" \"test_data\" \"performance\" #> [4] \"feature_importance\" names(results$trained_model) #> [1] \"method\" \"modelInfo\" \"modelType\" \"results\" \"pred\" #> [6] \"bestTune\" \"call\" \"dots\" \"metric\" \"control\" #> [11] \"finalModel\" \"preProcess\" \"trainingData\" \"ptype\" \"resample\" #> [16] \"resampledCM\" \"perfNames\" \"maximize\" \"yLimits\" \"times\" #> [21] \"levels\" head(results$test_data) #> dx Otu00009 Otu00005 Otu00010 Otu00001 Otu00008 Otu00004 Otu00003 #> 9 normal 119 142 248 256 363 112 871 #> 14 normal 60 209 70 86 96 1 123 #> 16 cancer 205 5 180 1668 95 22 3 #> 17 normal 188 356 107 381 1035 915 315 #> 27 normal 4 21 161 7 1 27 8 #> 30 normal 13 166 5 31 33 5 58 #> Otu00002 Otu00007 Otu00006 #> 9 995 0 137 #> 14 426 54 40 #> 16 20 590 570 #> 17 357 253 341 #> 27 25 322 5 #> 30 179 6 30 results$performance #> # A tibble: 1 × 17 #> cv_metric_AUC logLoss AUC prAUC Accuracy Kappa F1 Sensi…¹ Speci…² Pos_P…³ #> #> 1 0.622 0.684 0.647 0.606 0.590 0.179 0.6 0.6 0.579 0.6 #> # … with 7 more variables: Neg_Pred_Value , Precision , Recall , #> # Detection_Rate , Balanced_Accuracy , method , seed , #> # and abbreviated variable names ¹​Sensitivity, ²​Specificity, ³​Pos_Pred_Value results$feature_importance #> [1] \"Skipped feature importance\""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"customizing-parameters","dir":"Articles","previous_headings":"","what":"Customizing parameters","title":"Introduction to mikropml","text":"arguments allow change execute run_ml(). ’ve chosen reasonable defaults , encourage change think something else better data.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"changing-kfold-cv_times-and-training_frac","dir":"Articles","previous_headings":"Customizing parameters","what":"Changing kfold, cv_times, and training_frac","title":"Introduction to mikropml","text":"kfold: number folds run cross-validation (default: 5). cv_times: number times run repeated cross-validation (default: 100). training_frac: fraction data training set (default: 0.8). rest data used testing. ’s example change default parameters: might noticed one ran faster — ’s reduced kfold cv_times. okay testing things may even necessary smaller datasets. general may better larger numbers parameters; think defaults good starting point (Topçuoğlu et al. 2020).","code":"results_custom <- run_ml(otu_mini_bin, \"glmnet\", kfold = 2, cv_times = 5, training_frac = 0.5, seed = 2019 ) #> Using 'dx' as the outcome column. #> Training the model... #> Loading required package: ggplot2 #> Loading required package: lattice #> #> Attaching package: 'caret' #> The following object is masked from 'package:mikropml': #> #> compare_models #> Warning in (function (w) : `caret::train()` issued the following warning: #> #> simpleWarning in nominalTrainWorkflow(x = x, y = y, wts = weights, info = trainInfo, : There were missing values in resampled performance measures. #> #> This warning usually means that the model didn't converge in some cross-validation folds because it is predicting something close to a constant. As a result, certain performance metrics can't be calculated. This suggests that some of the hyperparameters chosen are doing very poorly. 
#> Training complete."},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"custom-training-indices","dir":"Articles","previous_headings":"Customizing parameters > Changing kfold, cv_times, and training_frac","what":"Custom training indices","title":"Introduction to mikropml","text":"training_frac fraction 0 1, random sample observations dataset chosen training set satisfy training_frac using get_partition_indices(). However, cases might wish control exactly observations training set. can instead assign training_frac vector indices correspond rows dataset go training set (remaining sequences go testing set). ’s example ~80% data training set:","code":"n_obs <- otu_mini_bin %>% nrow() training_size <- 0.8 * n_obs training_rows <- sample(n_obs, training_size) results_custom_train <- run_ml(otu_mini_bin, \"glmnet\", kfold = 2, cv_times = 5, training_frac = training_rows, seed = 2019 ) #> Using 'dx' as the outcome column. #> Using the custom training set indices provided by `training_frac`. #> The fraction of data in the training set will be 0.8 #> Training the model... #> Training complete."},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"changing-the-performance-metric","dir":"Articles","previous_headings":"Customizing parameters","what":"Changing the performance metric","title":"Introduction to mikropml","text":"two arguments allow change performance metric use model evaluation, performance metrics calculate using test data. perf_metric_function function used calculate performance metrics. default classification caret::multiClassSummary() default regression caret::defaultSummary(). ’d suggest changing unless really know ’re . perf_metric_name column name output perf_metric_function. chose reasonable defaults (AUC binary, logLoss multiclass, RMSE continuous), default functions calculate bunch different performance metrics, can choose different one ’d like. default performance metrics available classification : default performance metrics available regression : ’s example using prAUC instead AUC: ’ll see cross-validation metric prAUC, instead default AUC:","code":"#> [1] \"logLoss\" \"AUC\" \"prAUC\" #> [4] \"Accuracy\" \"Kappa\" \"Mean_F1\" #> [7] \"Mean_Sensitivity\" \"Mean_Specificity\" \"Mean_Pos_Pred_Value\" #> [10] \"Mean_Neg_Pred_Value\" \"Mean_Precision\" \"Mean_Recall\" #> [13] \"Mean_Detection_Rate\" \"Mean_Balanced_Accuracy\" #> [1] \"RMSE\" \"Rsquared\" \"MAE\" results_pr <- run_ml(otu_mini_bin, \"glmnet\", cv_times = 5, perf_metric_name = \"prAUC\", seed = 2019 ) #> Using 'dx' as the outcome column. #> Training the model... #> Warning in (function (w) : `caret::train()` issued the following warning: #> #> simpleWarning in nominalTrainWorkflow(x = x, y = y, wts = weights, info = trainInfo, : There were missing values in resampled performance measures. #> #> This warning usually means that the model didn't converge in some cross-validation folds because it is predicting something close to a constant. As a result, certain performance metrics can't be calculated. This suggests that some of the hyperparameters chosen are doing very poorly. #> Training complete. 
results_pr$performance #> # A tibble: 1 × 17 #> cv_metric_p…¹ logLoss AUC prAUC Accur…² Kappa F1 Sensi…³ Speci…⁴ Pos_P…⁵ #> #> 1 0.577 0.691 0.663 0.605 0.538 0.0539 0.690 1 0.0526 0.526 #> # … with 7 more variables: Neg_Pred_Value , Precision , Recall , #> # Detection_Rate , Balanced_Accuracy , method , seed , #> # and abbreviated variable names ¹​cv_metric_prAUC, ²​Accuracy, ³​Sensitivity, #> # ⁴​Specificity, ⁵​Pos_Pred_Value"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"using-groups","dir":"Articles","previous_headings":"Customizing parameters","what":"Using groups","title":"Introduction to mikropml","text":"optional groups vector groups keep together splitting data train test sets cross-validation. Sometimes ’s important split data based grouping instead just randomly. allows control similarities within groups don’t want skew predictions (.e. batch effects). example, biological data may samples collected multiple hospitals, might like keep observations hospital partition. ’s example split data train/test sets based groups: one difference run_ml() report much data training set run code chunk. can little finicky depending many samples groups . won’t exactly specify training_frac, since include one group either training set test set.","code":"# make random groups set.seed(2019) grps <- sample(LETTERS[1:8], nrow(otu_mini_bin), replace = TRUE) results_grp <- run_ml(otu_mini_bin, \"glmnet\", cv_times = 2, training_frac = 0.8, groups = grps, seed = 2019 ) #> Using 'dx' as the outcome column. #> Fraction of data in the training set: 0.795 #> Groups in the training set: A B D F G H #> Groups in the testing set: C E #> Groups will be kept together in CV partitions #> Training the model... #> Training complete."},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"controlling-how-groups-are-assigned-to-partitions","dir":"Articles","previous_headings":"Customizing parameters > Using groups","what":"Controlling how groups are assigned to partitions","title":"Introduction to mikropml","text":"use groups parameter , default run_ml() assume want observations group placed partition train/test split. makes sense want use groups control batch effects. However, cases might prefer control exactly groups end partition, might even okay observations group assigned different partitions. example, say want groups B used training, C D testing, don’t preference happens groups. can give group_partitions parameter named list specify groups go training set go testing set. case, observations & B used training, C & D used testing, remaining groups randomly assigned one satisfy training_frac closely possible. another scenario, maybe want groups F used training, also want allow observations selected training F used testing: need even control , take look setting custom training indices. might also prefer provide train control scheme cross_val parameter run_ml().","code":"results_grp_part <- run_ml(otu_mini_bin, \"glmnet\", cv_times = 2, training_frac = 0.8, groups = grps, group_partitions = list( train = c(\"A\", \"B\"), test = c(\"C\", \"D\") ), seed = 2019 ) #> Using 'dx' as the outcome column. #> Fraction of data in the training set: 0.785 #> Groups in the training set: A B E F G H #> Groups in the testing set: C D #> Groups will not be kept together in CV partitions because the number of groups in the training set is not larger than `kfold` #> Training the model... #> Training complete. 
results_grp_trainA <- run_ml(otu_mini_bin, \"glmnet\", cv_times = 2, kfold = 2, training_frac = 0.5, groups = grps, group_partitions = list( train = c(\"A\", \"B\", \"C\", \"D\", \"E\", \"F\"), test = c(\"A\", \"B\", \"C\", \"D\", \"E\", \"F\", \"G\", \"H\") ), seed = 2019 ) #> Using 'dx' as the outcome column. #> Fraction of data in the training set: 0.5 #> Groups in the training set: A B C D E F #> Groups in the testing set: A B C D E F G H #> Groups will be kept together in CV partitions #> Training the model... #> Training complete."},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"more-arguments","dir":"Articles","previous_headings":"Customizing parameters","what":"More arguments","title":"Introduction to mikropml","text":"ML methods take optional arguments, ntree randomForest-based models case weights. additional arguments give run_ml() forwarded along caret::train() can leverage options.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"case-weights","dir":"Articles","previous_headings":"Customizing parameters > More arguments","what":"Case weights","title":"Introduction to mikropml","text":"want use case weights, also need use custom indices training data (.e. perform partition run_ml() ). ’s one way weights calculated proportion class data set, ~70% data training set: See caret docs list models accept case weights.","code":"set.seed(20221016) library(dplyr) train_set_indices <- get_partition_indices(otu_mini_bin %>% pull(dx), training_frac = 0.70 ) case_weights_dat <- otu_mini_bin %>% count(dx) %>% mutate(p = n / sum(n)) %>% select(dx, p) %>% right_join(otu_mini_bin, by = \"dx\") %>% select(-starts_with(\"Otu\")) %>% mutate( row_num = row_number(), in_train = row_num %in% train_set_indices ) %>% filter(in_train) #> Warning in right_join(., otu_mini_bin, by = \"dx\"): Each row in `x` is expected to match at most 1 row in `y`. #> ℹ Row 1 of `x` matches multiple rows. #> ℹ If multiple matches are expected, set `multiple = \"all\"` to silence this #> warning. head(case_weights_dat) #> dx p row_num in_train #> 1 cancer 0.49 1 TRUE #> 2 cancer 0.49 2 TRUE #> 3 cancer 0.49 3 TRUE #> 4 cancer 0.49 4 TRUE #> 5 cancer 0.49 5 TRUE #> 6 cancer 0.49 6 TRUE tail(case_weights_dat) #> dx p row_num in_train #> 136 normal 0.51 194 TRUE #> 137 normal 0.51 195 TRUE #> 138 normal 0.51 196 TRUE #> 139 normal 0.51 197 TRUE #> 140 normal 0.51 198 TRUE #> 141 normal 0.51 200 TRUE nrow(case_weights_dat) / nrow(otu_mini_bin) #> [1] 0.705 results_weighted <- run_ml(otu_mini_bin, \"glmnet\", outcome_colname = \"dx\", seed = 2019, training_frac = case_weights_dat %>% pull(row_num), weights = case_weights_dat %>% pull(p) )"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"finding-feature-importance","dir":"Articles","previous_headings":"","what":"Finding feature importance","title":"Introduction to mikropml","text":"find features contributing predictive power, can use find_feature_importance = TRUE. use permutation importance determine feature importance described (Topçuoğlu et al. 2020). Briefly, permutes features individually (correlated ones together) evaluates much performance metric decreases. performance decreases feature randomly shuffled, important feature . default FALSE takes run useful want know features important predicting outcome. Let’s look feature importance results: Now, can check feature importances: several columns: perf_metric: performance value permuted feature. 
perf_metric_diff: difference performance actual permuted data (.e. test performance minus permuted performance). Features larger perf_metric_diff important. pvalue: probability obtaining actual performance value null hypothesis. lower: lower bound 95% confidence interval perf_metric. upper: upper bound 95% confidence interval perf_metric. feat: feature (group correlated features) permuted. method: ML method used. perf_metric_name: name performance metric represented perf_metric & perf_metric_diff. seed: seed (set). can see , differences negligible (close zero), makes sense since model isn’t great. ’re interested feature importance, ’s especially useful run multiple different train/test splits, shown example snakemake workflow. can also choose permute correlated features together using corr_thresh (default: 1). features correlation threshold permuted together; .e. perfectly correlated features permuted together using default value. can see features permuted together feat column. 3 features permuted together (doesn’t really make sense, ’s just example). previously executed run_ml() without feature importance now wish find feature importance fact, see example code get_feature_importance() documentation. get_feature_importance() can show live progress bar, see vignette(\"parallel\") examples.","code":"results_imp <- run_ml(otu_mini_bin, \"rf\", outcome_colname = \"dx\", find_feature_importance = TRUE, seed = 2019 ) results_imp$feature_importance #> perf_metric perf_metric_diff pvalue lower upper feat method #> 1 0.5459125 0.0003375 0.51485149 0.49125 0.60250 Otu00001 rf #> 2 0.5682625 -0.0220125 0.73267327 0.50625 0.63125 Otu00002 rf #> 3 0.5482875 -0.0020375 0.56435644 0.50500 0.59000 Otu00003 rf #> 4 0.6314375 -0.0851875 1.00000000 0.55250 0.71250 Otu00004 rf #> 5 0.4991750 0.0470750 0.08910891 0.44125 0.57125 Otu00005 rf #> 6 0.5364875 0.0097625 0.28712871 0.50125 0.57375 Otu00006 rf #> 7 0.5382875 0.0079625 0.39603960 0.47500 0.58750 Otu00007 rf #> 8 0.5160500 0.0302000 0.09900990 0.46750 0.55750 Otu00008 rf #> 9 0.5293375 0.0169125 0.17821782 0.49500 0.55625 Otu00009 rf #> 10 0.4976500 0.0486000 0.12871287 0.41000 0.56250 Otu00010 rf #> perf_metric_name seed #> 1 AUC 2019 #> 2 AUC 2019 #> 3 AUC 2019 #> 4 AUC 2019 #> 5 AUC 2019 #> 6 AUC 2019 #> 7 AUC 2019 #> 8 AUC 2019 #> 9 AUC 2019 #> 10 AUC 2019 results_imp_corr <- run_ml(otu_mini_bin, \"glmnet\", cv_times = 5, find_feature_importance = TRUE, corr_thresh = 0.2, seed = 2019 ) #> Using 'dx' as the outcome column. #> Training the model... #> Warning in (function (w) : `caret::train()` issued the following warning: #> #> simpleWarning in nominalTrainWorkflow(x = x, y = y, wts = weights, info = trainInfo, : There were missing values in resampled performance measures. #> #> This warning usually means that the model didn't converge in some cross-validation folds because it is predicting something close to a constant. As a result, certain performance metrics can't be calculated. This suggests that some of the hyperparameters chosen are doing very poorly. #> Training complete. #> Finding feature importance... #> Feature importance complete. 
results_imp_corr$feature_importance #> perf_metric perf_metric_diff pvalue lower upper #> 1 0.4941842 0.1531842 0.05940594 0.3236842 0.6473684 #> feat #> 1 Otu00001|Otu00002|Otu00003|Otu00004|Otu00005|Otu00006|Otu00007|Otu00008|Otu00009|Otu00010 #> method perf_metric_name seed #> 1 glmnet AUC 2019"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"tuning-hyperparameters-using-the-hyperparameter-argument","dir":"Articles","previous_headings":"","what":"Tuning hyperparameters (using the hyperparameter argument)","title":"Introduction to mikropml","text":"important, whole vignette . bottom line provide default hyperparameters can start , ’s important tune hyperparameters. information default hyperparameters , tune hyperparameters, see vignette(\"tuning\").","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"other-models","dir":"Articles","previous_headings":"","what":"Other models","title":"Introduction to mikropml","text":"examples train evaluate models. output similar, won’t go details.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"random-forest","dir":"Articles","previous_headings":"Other models","what":"Random forest","title":"Introduction to mikropml","text":"rf engine takes optional argument ntree: number trees use random forest. can’t tuned using rf package implementation random forest. Please refer caret documentation interested packages random forest implementations.","code":"results_rf <- run_ml(otu_mini_bin, \"rf\", cv_times = 5, seed = 2019 ) results_rf_nt <- run_ml(otu_mini_bin, \"rf\", cv_times = 5, ntree = 1000, seed = 2019 )"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"decision-tree","dir":"Articles","previous_headings":"Other models","what":"Decision tree","title":"Introduction to mikropml","text":"","code":"results_dt <- run_ml(otu_mini_bin, \"rpart2\", cv_times = 5, seed = 2019 )"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"svm","dir":"Articles","previous_headings":"Other models","what":"SVM","title":"Introduction to mikropml","text":"get message “maximum number iterations reached”, see issue caret.","code":"results_svm <- run_ml(otu_mini_bin, \"svmRadial\", cv_times = 5, seed = 2019 )"},{"path":[]},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"multiclass-data","dir":"Articles","previous_headings":"Other data","what":"Multiclass data","title":"Introduction to mikropml","text":"provide otu_mini_multi multiclass outcome (three outcomes): ’s example running multiclass data: performance metrics slightly different, format everything else :","code":"otu_mini_multi %>% dplyr::pull(\"dx\") %>% unique() #> [1] \"adenoma\" \"carcinoma\" \"normal\" results_multi <- run_ml(otu_mini_multi, outcome_colname = \"dx\", seed = 2019 ) results_multi$performance #> # A tibble: 1 × 17 #> cv_metric…¹ logLoss AUC prAUC Accur…² Kappa Mean_F1 Mean_…³ Mean_…⁴ Mean_…⁵ #> #> 1 1.07 1.11 0.506 0.353 0.382 0.0449 NA 0.360 0.682 NaN #> # … with 7 more variables: Mean_Neg_Pred_Value , Mean_Precision , #> # Mean_Recall , Mean_Detection_Rate , Mean_Balanced_Accuracy , #> # method , seed , and abbreviated variable names #> # ¹​cv_metric_logLoss, ²​Accuracy, ³​Mean_Sensitivity, ⁴​Mean_Specificity, #> # ⁵​Mean_Pos_Pred_Value"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/introduction.html","id":"continuous-data","dir":"Articles","previous_headings":"Other 
data","what":"Continuous data","title":"Introduction to mikropml","text":"’s example running continuous data, outcome column numerical: , performance metrics slightly different, format rest :","code":"results_cont <- run_ml(otu_mini_bin[, 2:11], \"glmnet\", outcome_colname = \"Otu00001\", seed = 2019 ) results_cont$performance #> # A tibble: 1 × 6 #> cv_metric_RMSE RMSE Rsquared MAE method seed #> #> 1 622. 731. 0.0893 472. glmnet 2019"},{"path":[]},{"path":"http://www.schlosslab.org/mikropml/dev/articles/paper.html","id":"summary","dir":"Articles","previous_headings":"","what":"Summary","title":"mikropml: User-Friendly R Package for Supervised Machine Learning Pipelines","text":"Machine learning (ML) classification prediction based set features used make decisions healthcare, economics, criminal justice . However, implementing ML pipeline including preprocessing, model selection, evaluation can time-consuming, confusing, difficult. , present mikropml (pronounced “meek-ROPE em el”), easy--use R package implements ML pipelines using regression, support vector machines, decision trees, random forest, gradient-boosted trees. package available GitHub, CRAN, conda.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/paper.html","id":"statement-of-need","dir":"Articles","previous_headings":"","what":"Statement of need","title":"mikropml: User-Friendly R Package for Supervised Machine Learning Pipelines","text":"applications machine learning (ML) require reproducible steps data pre-processing, cross-validation, testing, model evaluation, often interpretation model makes particular predictions. Performing steps important, failure implement can result incorrect misleading results (Teschendorff 2019; Wiens et al. 2019). Supervised ML widely used recognize patterns large datasets make predictions outcomes interest. Several packages including caret (Kuhn 2008) tidymodels (Kuhn, Wickham, RStudio 2020) R, scikitlearn (Pedregosa et al. 2011) Python, H2O autoML platform (H2O.ai 2020) allow scientists train ML models variety algorithms. packages provide tools necessary ML step, implement complete ML pipeline according good practices literature. makes difficult practitioners new ML easily begin perform ML analyses. enable broader range researchers apply ML problem domains, created mikropml, easy--use R package (R Core Team 2020) implements ML pipeline created Topçuoğlu et al. (Topçuoğlu et al. 2020) single function returns trained model, model performance metrics feature importance. mikropml leverages caret package support several ML algorithms: linear regression, logistic regression, support vector machines radial basis kernel, decision trees, random forest, gradient boosted trees. incorporates good practices ML training, testing, model evaluation (Topçuoğlu et al. 2020; Teschendorff 2019). Furthermore, provides data preprocessing steps based FIDDLE (FlexIble Data-Driven pipeLinE) framework outlined Tang et al. (Tang et al. 2020) post-training permutation importance steps estimate importance feature models trained (Breiman 2001; Fisher, Rudin, Dominici 2018). mikropml can used starting point application ML datasets many different fields. already applied microbiome data categorize patients colorectal cancer (Topçuoğlu et al. 2020), identify differences genomic clinical features associated bacterial infections (Lapp et al. 2020), predict gender-based biases academic publishing (Hagan et al. 
2020).","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/paper.html","id":"mikropml-package","dir":"Articles","previous_headings":"","what":"mikropml package","title":"mikropml: User-Friendly R Package for Supervised Machine Learning Pipelines","text":"mikropml package includes functionality preprocess data, train ML models, evaluate model performance, quantify feature importance (Figure 1). also provide vignettes example Snakemake workflow (Köster Rahmann 2012) showcase run ideal ML pipeline multiple different train/test data splits. results can visualized using helper functions use ggplot2 (Wickham 2016). mikropml allows users get started quickly facilitates reproducibility, replacement understanding ML workflow still necessary interpreting results (Pollard et al. 2019). facilitate understanding enable one tailor code application, heavily commented code provided supporting documentation can read online.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/paper.html","id":"preprocessing-data","dir":"Articles","previous_headings":"mikropml package","what":"Preprocessing data","title":"mikropml: User-Friendly R Package for Supervised Machine Learning Pipelines","text":"provide function preprocess_data() preprocess features using several different functions caret package. preprocess_data() takes continuous categorical data, re-factors categorical data binary features, provides options normalize continuous data, remove features near-zero variance, keep one instance perfectly correlated features. set default options based implemented FIDDLE (Tang et al. 2020). details use preprocess_data() can found accompanying vignette.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/paper.html","id":"running-ml","dir":"Articles","previous_headings":"mikropml package","what":"Running ML","title":"mikropml: User-Friendly R Package for Supervised Machine Learning Pipelines","text":"main function mikropml, run_ml(), minimally takes model choice data frame outcome column feature columns. model choice, mikropml currently supports logistic linear regression (glmnet: Friedman, Hastie, Tibshirani 2010), support vector machines radial basis kernel (kernlab: Karatzoglou et al. 2004), decision trees (rpart: Therneau et al. 2019), random forest (randomForest: Liaw Wiener 2002), gradient-boosted trees (xgboost: Chen et al. 2020). run_ml() randomly splits data train test sets maintaining distribution outcomes found full dataset. also provides option split data train test sets based categorical variables (e.g. batch, geographic location, etc.). mikropml uses caret package (Kuhn 2008) train evaluate models, optionally quantifies feature importance. output includes best model built based tuning hyperparameters internal repeated cross-validation step, model evaluation metrics, optional feature importances. Feature importances calculated using permutation test, breaks relationship feature true outcome test data, measures change model performance. provides intuitive metric individual features influence model performance comparable across model types, particularly useful model interpretation (Topçuoğlu et al. 2020). introductory vignette contains comprehensive tutorial use run_ml(). 
mikropml pipeline","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/paper.html","id":"ideal-workflow-for-running-mikropml-with-many-different-traintest-splits","dir":"Articles","previous_headings":"mikropml package","what":"Ideal workflow for running mikropml with many different train/test splits","title":"mikropml: User-Friendly R Package for Supervised Machine Learning Pipelines","text":"investigate variation model performance depending train test set used (Topçuoğlu et al. 2020; Lapp et al. 2020), provide examples run_ml() many times different train/test splits get summary information model performance local computer high-performance computing cluster using Snakemake workflow.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/paper.html","id":"tuning-visualization","dir":"Articles","previous_headings":"mikropml package","what":"Tuning & visualization","title":"mikropml: User-Friendly R Package for Supervised Machine Learning Pipelines","text":"One particularly important aspect ML hyperparameter tuning. provide reasonable range default hyperparameters model type. However practitioners explore whether range appropriate data, customize hyperparameter range. Therefore, provide function plot_hp_performance() plot cross-validation performance metric single model models built using different train/test splits. helps evaluate hyperparameter range searched exhaustively allows user pick ideal set. also provide summary plots test performance metrics many train/test splits different models using plot_model_performance(). Examples described accompanying vignette hyperparameter tuning.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/paper.html","id":"dependencies","dir":"Articles","previous_headings":"mikropml package","what":"Dependencies","title":"mikropml: User-Friendly R Package for Supervised Machine Learning Pipelines","text":"mikropml written R (R Core Team 2020) depends several packages: dplyr (Wickham et al. 2020), rlang (Henry, Wickham, RStudio 2020) caret (Kuhn 2008). ML algorithms supported mikropml require: glmnet (Friedman, Hastie, Tibshirani 2010), e1071 (Meyer et al. 2020), MLmetrics (Yan 2016) logistic regression, rpart2 (Therneau et al. 2019) decision trees, randomForest (Liaw Wiener 2002) random forest, xgboost (Chen et al. 2020) xgboost, kernlab (Karatzoglou et al. 2004) support vector machines. also allow parallelization cross-validation steps using foreach, doFuture, future.apply, future packages (Bengtsson Team 2020). Finally, use ggplot2 plotting (Wickham 2016).","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/paper.html","id":"acknowledgments","dir":"Articles","previous_headings":"","what":"Acknowledgments","title":"mikropml: User-Friendly R Package for Supervised Machine Learning Pipelines","text":"thank members Schloss Lab participated code clubs related initial development pipeline, made documentation improvements, provided general feedback. also thank Nick Lesniak designing mikropml logo. thank US Research Software Sustainability Institute (NSF #1743188) providing training KLS Winter School Research Software Engineering.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/paper.html","id":"funding","dir":"Articles","previous_headings":"","what":"Funding","title":"mikropml: User-Friendly R Package for Supervised Machine Learning Pipelines","text":"Salary support PDS came NIH grant 1R01CA215574. KLS received support NIH Training Program Bioinformatics (T32 GM070449). 
ZL received support National Science Foundation Graduate Research Fellowship Program Grant . DGE 1256260. opinions, findings, conclusions recommendations expressed material authors necessarily reflect views National Science Foundation.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/paper.html","id":"author-contributions","dir":"Articles","previous_headings":"","what":"Author contributions","title":"mikropml: User-Friendly R Package for Supervised Machine Learning Pipelines","text":"BDT, ZL, KLS contributed equally. Author order among co-first authors determined time since joining project. BDT, ZL, KLS conceptualized study wrote code. KLS structured code R package form. BDT, ZL, JW, PDS developed methodology. PDS, ES, JW supervised project. BDT, ZL, KLS wrote original draft. authors reviewed edited manuscript.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/paper.html","id":"conflicts-of-interest","dir":"Articles","previous_headings":"","what":"Conflicts of interest","title":"mikropml: User-Friendly R Package for Supervised Machine Learning Pipelines","text":"None.","code":""},{"path":[]},{"path":"http://www.schlosslab.org/mikropml/dev/articles/parallel.html","id":"speed-up-single-runs","dir":"Articles","previous_headings":"","what":"Speed up single runs","title":"Parallel processing","text":"default, preprocess_data(), run_ml(), compare_models() use one process series. ’d like parallelize various steps pipeline make run faster, install foreach, future, future.apply, doFuture. , register future plan prior calling functions: , used multicore plan split work across 2 cores. See future documentation picking best plan use case. Notably, multicore work inside RStudio Windows; need use multisession instead cases. registering future plan, can call preprocess_data() run_ml() usual, run certain tasks parallel. ’s also parallel version rf engine called parRF trains trees forest parallel. See caret docs information.","code":"doFuture::registerDoFuture() future::plan(future::multicore, workers = 2) otu_data_preproc <- preprocess_data(otu_mini_bin, \"dx\")$dat_transformed result1 <- run_ml(otu_data_preproc, \"glmnet\", seed = 2019)"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/parallel.html","id":"bootstrap-performance","dir":"Articles","previous_headings":"Speed up single runs","what":"Bootstrap performance","title":"Parallel processing","text":"intend call run_ml() generate one train/test split (e.g. temporal split dataset), can evaluate model performance bootstrapping test set. show generate 100 bootstraps calculate confidence interval model performance. 
use 100 computation speed, recommended generate 10000 bootstraps precise estimation confidence interval.","code":"boot_perf <- bootstrap_performance(result1, outcome_colname = \"dx\", bootstrap_times = 100, alpha = 0.05 ) boot_perf #> # A tibble: 15 × 6 #> term .lower .estimate .upper .alpha .method #> #> 1 AUC 0.434 0.639 0.820 0.05 percentile #> 2 Accuracy 0.422 0.583 0.744 0.05 percentile #> 3 Balanced_Accuracy 0.431 0.586 0.749 0.05 percentile #> 4 Detection_Rate 0.179 0.299 0.449 0.05 percentile #> 5 F1 0.412 0.585 0.762 0.05 percentile #> 6 Kappa -0.132 0.167 0.486 0.05 percentile #> 7 Neg_Pred_Value 0.375 0.572 0.807 0.05 percentile #> 8 Pos_Pred_Value 0.395 0.599 0.855 0.05 percentile #> 9 Precision 0.395 0.599 0.855 0.05 percentile #> 10 Recall 0.375 0.584 0.824 0.05 percentile #> 11 Sensitivity 0.375 0.584 0.824 0.05 percentile #> 12 Specificity 0.379 0.587 0.823 0.05 percentile #> 13 cv_metric_AUC 0.622 0.622 0.622 0.05 percentile #> 14 logLoss 0.660 0.685 0.714 0.05 percentile #> 15 prAUC 0.442 0.583 0.734 0.05 percentile"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/parallel.html","id":"call-run_ml-multiple-times-in-parallel-in-r","dir":"Articles","previous_headings":"","what":"Call run_ml() multiple times in parallel in R","title":"Parallel processing","text":"can use functions future.apply package call run_ml() multiple times parallel different parameters. first need run future::plan() haven’t already. , call run_ml() multiple seeds using future_lapply(): call run_ml() different seed uses different random split data training testing sets. Since using seeds, must set future.seed TRUE (see future.apply documentation blog post details parallel-safe random seeds). example uses seeds speed simplicity, real data recommend using many seeds get better estimate model performance. examples, used functions future.apply package run_ml() parallel, can accomplish thing parallel versions purrr::map() functions using furrr package (e.g. furrr::future_map_dfr()). Extract performance results combine one dataframe seeds:","code":"# NOTE: use more seeds for real-world data results_multi <- future.apply::future_lapply(seq(100, 102), function(seed) { run_ml(otu_data_preproc, \"glmnet\", seed = seed) }, future.seed = TRUE) #> Using 'dx' as the outcome column. #> Training the model... #> Loading required package: lattice #> #> Attaching package: 'caret' #> The following object is masked from 'package:mikropml': #> #> compare_models #> Training complete. #> Using 'dx' as the outcome column. #> Training the model... #> Loading required package: lattice #> #> Attaching package: 'caret' #> The following object is masked from 'package:mikropml': #> #> compare_models #> Training complete. #> Using 'dx' as the outcome column. #> Training the model... #> Training complete. perf_df <- future.apply::future_lapply(results_multi, function(result) { result[[\"performance\"]] %>% select(cv_metric_AUC, AUC, method) }, future.seed = TRUE ) %>% dplyr::bind_rows() perf_df #> # A tibble: 3 × 3 #> cv_metric_AUC AUC method #> #> 1 0.630 0.634 glmnet #> 2 0.591 0.608 glmnet #> 3 0.671 0.471 glmnet"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/parallel.html","id":"multiple-ml-methods","dir":"Articles","previous_headings":"Call run_ml() multiple times in parallel in R","what":"Multiple ML methods","title":"Parallel processing","text":"may also wish compare performance different ML methods. 
mapply() can iterate multiple lists vectors, future_mapply() works way:","code":"# NOTE: use more seeds for real-world data param_grid <- expand.grid( seeds = seq(100, 103), methods = c(\"glmnet\", \"rf\") ) results_mtx <- future.apply::future_mapply( function(seed, method) { run_ml(otu_data_preproc, method, seed = seed, find_feature_importance = TRUE ) }, param_grid$seeds, param_grid$methods %>% as.character(), future.seed = TRUE ) #> Using 'dx' as the outcome column. #> Training the model... #> Loading required package: lattice #> #> Attaching package: 'caret' #> The following object is masked from 'package:mikropml': #> #> compare_models #> Training complete. #> Finding feature importance... #> Feature importance complete. #> Using 'dx' as the outcome column. #> Training the model... #> Training complete. #> Finding feature importance... #> Feature importance complete. #> Using 'dx' as the outcome column. #> Training the model... #> Training complete. #> Finding feature importance... #> Feature importance complete. #> Using 'dx' as the outcome column. #> Training the model... #> Training complete. #> Finding feature importance... #> Feature importance complete. #> Using 'dx' as the outcome column. #> Training the model... #> Loading required package: lattice #> #> Attaching package: 'caret' #> The following object is masked from 'package:mikropml': #> #> compare_models #> Training complete. #> Finding feature importance... #> Feature importance complete. #> Using 'dx' as the outcome column. #> Training the model... #> Training complete. #> Finding feature importance... #> Feature importance complete. #> Using 'dx' as the outcome column. #> Training the model... #> Training complete. #> Finding feature importance... #> Feature importance complete. #> Using 'dx' as the outcome column. #> Training the model... #> Training complete. #> Finding feature importance... #> Feature importance complete."},{"path":"http://www.schlosslab.org/mikropml/dev/articles/parallel.html","id":"visualize-the-results","dir":"Articles","previous_headings":"Call run_ml() multiple times in parallel in R","what":"Visualize the results","title":"Parallel processing","text":"ggplot2 required use plotting functions . can also create plots however like using results data.","code":""},{"path":[]},{"path":"http://www.schlosslab.org/mikropml/dev/articles/parallel.html","id":"mean-auc","dir":"Articles","previous_headings":"Call run_ml() multiple times in parallel in R > Visualize the results > Performance","what":"Mean AUC","title":"Parallel processing","text":"plot_model_performance() returns ggplot2 object. 
can add layers customize plot:","code":"perf_df <- lapply( results_mtx[\"performance\", ], function(x) { x %>% select(cv_metric_AUC, AUC, method) } ) %>% dplyr::bind_rows() perf_boxplot <- plot_model_performance(perf_df) perf_boxplot perf_boxplot + theme_classic() + scale_color_brewer(palette = \"Dark2\") + coord_flip()"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/parallel.html","id":"roc-and-prc-curves","dir":"Articles","previous_headings":"Call run_ml() multiple times in parallel in R > Visualize the results > Performance","what":"ROC and PRC curves","title":"Parallel processing","text":"First calculate sensitivity, specificity, precision models.","code":"get_sensspec_seed <- function(colnum) { result <- results_mtx[, colnum] trained_model <- result$trained_model test_data <- result$test_data seed <- result$performance$seed method <- result$trained_model$method sensspec <- calc_model_sensspec( trained_model, test_data, \"dx\" ) %>% mutate(seed = seed, method = method) return(sensspec) } sensspec_dat <- purrr::map_dfr( seq(1, dim(results_mtx)[2]), get_sensspec_seed ) #> Using 'dx' as the outcome column. #> Using 'dx' as the outcome column. #> Using 'dx' as the outcome column. #> Using 'dx' as the outcome column. #> Using 'dx' as the outcome column. #> Using 'dx' as the outcome column. #> Using 'dx' as the outcome column. #> Using 'dx' as the outcome column."},{"path":"http://www.schlosslab.org/mikropml/dev/articles/parallel.html","id":"plot-curves-for-a-single-model","dir":"Articles","previous_headings":"","what":"Parallel processing","title":"Parallel processing","text":"","code":"sensspec_1 <- sensspec_dat %>% filter(seed == 100, method == \"glmnet\") sensspec_1 %>% ggplot(aes(x = specificity, y = sensitivity, )) + geom_line() + geom_abline( intercept = 1, slope = 1, linetype = \"dashed\", color = \"grey50\" ) + coord_equal() + scale_x_reverse(expand = c(0, 0), limits = c(1.01, -0.01)) + scale_y_continuous(expand = c(0, 0), limits = c(-0.01, 1.01)) + labs(x = \"Specificity\", y = \"Sensitivity\") + theme_bw() + theme(legend.title = element_blank()) baseline_precision_otu <- calc_baseline_precision( otu_data_preproc, \"dx\", \"cancer\" ) #> Using 'dx' as the outcome column. sensspec_1 %>% rename(recall = sensitivity) %>% ggplot(aes(x = recall, y = precision, )) + geom_line() + geom_hline( yintercept = baseline_precision_otu, linetype = \"dashed\", color = \"grey50\" ) + coord_equal() + scale_x_continuous(expand = c(0, 0), limits = c(-0.01, 1.01)) + scale_y_continuous(expand = c(0, 0), limits = c(-0.01, 1.01)) + labs(x = \"Recall\", y = \"Precision\") + theme_bw() + theme(legend.title = element_blank())"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/parallel.html","id":"plot-mean-roc-and-prc-for-all-models","dir":"Articles","previous_headings":"","what":"Parallel processing","title":"Parallel processing","text":"","code":"sensspec_dat %>% calc_mean_roc() %>% plot_mean_roc() sensspec_dat %>% calc_mean_prc() %>% plot_mean_prc(baseline_precision = baseline_precision_otu)"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/parallel.html","id":"feature-importance","dir":"Articles","previous_headings":"Call run_ml() multiple times in parallel in R > Visualize the results","what":"Feature importance","title":"Parallel processing","text":"perf_metric_diff feature importance data frame contains differences performance actual test data performance permuted test data (.e. test minus permuted). feature important model performance, expect perf_metric_diff positive. 
words, features resulted largest decrease performance permuted important features.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/parallel.html","id":"feature-importance-for-multiple-models","dir":"Articles","previous_headings":"Call run_ml() multiple times in parallel in R > Visualize the results > Feature importance","what":"Feature importance for multiple models","title":"Parallel processing","text":"can select top n important features models plot like : See docs get_feature_importance() details values computed.","code":"feat_df <- results_mtx[\"feature_importance\", ] %>% dplyr::bind_rows() top_n <- 5 top_feats <- feat_df %>% group_by(method, feat) %>% summarize(mean_diff = median(perf_metric_diff)) %>% filter(mean_diff > 0) %>% slice_max(order_by = mean_diff, n = top_n) #> `summarise()` has grouped output by 'method'. You can override using the #> `.groups` argument. feat_df %>% right_join(top_feats, by = c(\"method\", \"feat\")) %>% mutate(features = forcats::fct_reorder(factor(feat), mean_diff)) %>% ggplot(aes(x = perf_metric_diff, y = features, color = method)) + geom_boxplot() + geom_vline(xintercept = 0, linetype = \"dashed\") + labs( x = \"Decrease in performance (actual minus permutation)\", y = \"Features\", caption = \"Features which have a lower performance when permuted have a difference in performance above zero. The features with the greatest decrease are the most important for model performance.\" %>% stringr::str_wrap(width = 100) ) + theme_bw() + theme(plot.caption = element_text(hjust = 0))"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/parallel.html","id":"feature-importance-for-a-single-model","dir":"Articles","previous_headings":"Call run_ml() multiple times in parallel in R > Visualize the results > Feature importance","what":"Feature importance for a single model","title":"Parallel processing","text":"can also plot feature importance single model. report actual performance, permutation performance, empirical 95% confidence interval permutation performance.","code":"feat_imp_1 <- results_mtx[, 1][[\"feature_importance\"]] perf_metric_name <- results_mtx[, 1][[\"trained_model\"]]$metric perf_actual <- results_mtx[, 1][[\"performance\"]] %>% pull(perf_metric_name) feat_imp_1 %>% filter(perf_metric_diff > 0) %>% mutate(feat = if_else(pvalue < 0.05, paste0(\"*\", feat), as.character(feat)) %>% as.factor() %>% forcats::fct_reorder(perf_metric_diff)) %>% ggplot(aes(x = perf_metric, xmin = lower, xmax = upper, y = feat)) + geom_pointrange() + geom_vline(xintercept = perf_actual, linetype = \"dashed\") + labs( x = \"Permutation performance\", y = \"Features\", caption = \"The dashed line represents the actual performance on the test set. Features which have a lower performance when permuted are important for model performance. Significant features (pvalue < 0.05) are marked with an asterisk (*). Error bars represent the 95% confidence interval.\" %>% stringr::str_wrap(width = 110) ) + theme_bw() + theme(plot.caption = element_text(hjust = 0))"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/parallel.html","id":"live-progress-updates","dir":"Articles","previous_headings":"","what":"Live progress updates","title":"Parallel processing","text":"preprocess_data() get_feature_importance() support reporting live progress updates using progressr package. format , recommend using progress bar like : Note future backends support “near-live” progress updates, meaning progress may reported immediately parallel processing futures. 
Read more in the progressr vignette. For more on progressr and how to customize the format of progress updates, see the progressr docs.","code":"# optionally, specify the progress bar format with the `progress` package. progressr::handlers(progressr::handler_progress( format = \":message :bar :percent | elapsed: :elapsed | eta: :eta\", clear = FALSE, show_after = 0 )) # tell progressr to always report progress in any functions that use it. # set this to FALSE to turn it back off again. progressr::handlers(global = TRUE) # run your code and watch the live progress updates. dat <- preprocess_data(otu_mini_bin, \"dx\")$dat_transformed #> Using 'dx' as the outcome column. #> preprocessing ========================>------- 78% | elapsed: 1s | eta: 0s results <- run_ml(dat, \"glmnet\", kfold = 2, cv_times = 2, find_feature_importance = TRUE ) #> Using 'dx' as the outcome column. #> Training the model... #> Training complete. #> Feature importance =========================== 100% | elapsed: 37s | eta: 0s"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/parallel.html","id":"parallelizing-with-snakemake","dir":"Articles","previous_headings":"","what":"Parallelizing with Snakemake","title":"Parallel processing","text":"When parallelizing multiple calls to run_ml() in R as in the examples above, all of the results objects are held in memory. This isn't a big deal for a small dataset run with only a few seeds. However, for large datasets run in parallel with, say, 100 seeds (recommended), you may run into problems trying to store all of those objects in memory at once. Using a workflow manager such as Snakemake or Nextflow is highly recommended to maximize the scalability and reproducibility of computational analyses. We created a template Snakemake workflow that you can use as a starting point for your ML project.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/preprocess.html","id":"its-running-so-slow","dir":"Articles","previous_headings":"","what":"It’s running so slow!","title":"Preprocessing data","text":"Since we assume a lot of you won't read the entire vignette, I'm going to say this at the beginning. If the preprocess_data() function is running super slow, consider parallelizing it so it goes faster! preprocess_data() can also report live progress updates. See vignette(\"parallel\") for details.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/preprocess.html","id":"examples","dir":"Articles","previous_headings":"","what":"Examples","title":"Preprocessing data","text":"We're going to start simple and get more complicated as we go, but if you want the whole shebang at once, just scroll to the bottom. First, load mikropml:","code":"library(mikropml)"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/preprocess.html","id":"binary-data","dir":"Articles","previous_headings":"Examples","what":"Binary data","title":"Preprocessing data","text":"Let's start with binary variables: In addition to the dataframe itself, we have to provide the name of the outcome column to preprocess_data(). Here's what the preprocessed data looks like: The output is a list: dat_transformed is the transformed data, grp_feats is a list of grouped features, and removed_feats is a list of features that were removed. Here, grp_feats is NULL because there are no perfectly correlated features (e.g. c(0,1,0) and c(0,1,0), or c(0,1,0) and c(1,0,1) - see below for more details). The first column (var1) of dat_transformed is a character and was changed to var1_yes, which has zeros (no) and ones (yes). The values in the second column (var2) stay the same, since it's already binary, but the name changes to var2_1. The third column (var3) is a factor and was also changed to binary, with b being 1 and a being 0, as denoted by the new column name var3_b.","code":"# raw binary dataset bin_df <- data.frame( outcome = c(\"normal\", \"normal\", \"cancer\"), var1 = c(\"no\", \"yes\", \"no\"), var2 = c(0, 1, 1), var3 = factor(c(\"a\", \"a\", \"b\")) ) bin_df #> outcome var1 var2 var3 #> 1 normal no 0 a #> 2 normal yes 1 a #> 3 cancer no 1 b # preprocess raw binary data preprocess_data(dataset = bin_df, outcome_colname = \"outcome\") #> Using 'outcome' as the outcome column. #> $dat_transformed #> # A tibble: 3 × 4 #> outcome var1_yes var2_1 var3_b #> #> 1 normal 0 0 0 #> 2 normal 1 1 0 #> 3 cancer 0 1 1 #> #> $grp_feats #> NULL #> #> $removed_feats #> character(0)"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/preprocess.html","id":"categorical-data","dir":"Articles","previous_headings":"Examples","what":"Categorical data","title":"Preprocessing data","text":"Now let's look at non-binary categorical data: As you can see, the variable was split into 3 different columns - one for each type (a, b, and c). And again, grp_feats is NULL.","code":"# raw categorical dataset cat_df <- data.frame( outcome = c(\"normal\", \"normal\", \"cancer\"), var1 = c(\"a\", \"b\", \"c\") ) cat_df #> outcome var1 #> 1 normal a #> 2 normal b #> 3 cancer c # preprocess raw categorical data preprocess_data(dataset = cat_df, outcome_colname = \"outcome\") #> Using 'outcome' as the outcome column. #> $dat_transformed #> # A tibble: 3 × 4 #> outcome var1_a var1_b var1_c #> #> 1 normal 1 0 0 #> 2 normal 0 1 0 #> 3 cancer 0 0 1 #> #> $grp_feats #> NULL #> #> $removed_feats #> character(0)"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/preprocess.html","id":"continuous-data","dir":"Articles","previous_headings":"Examples","what":"Continuous data","title":"Preprocessing data","text":"Now, let's look at continuous variables: Wow! Why did the numbers change? By default, we normalize the data using \"center\" and \"scale\". While this is often best practice, you may not want to normalize the data, or you may want to normalize the data in a different way. If you don't want to normalize the data, you can use method=NULL: You can also normalize the data in different ways. You can choose any method supported by the method argument of caret::preProcess() (see the caret::preProcess() docs for details). Note that these methods are only applied to continuous variables. Another feature of preprocess_data() is that if you provide continuous variables as characters, they will be converted to numeric: If you don't want this to happen, and you want character data to remain character data even if it can be converted to numeric, you can use to_numeric=FALSE and they will be kept as categorical: As you can see from the output, in this case the features are treated as groups rather than numbers (e.g. they are not normalized).","code":"# raw continuous dataset cont_df <- data.frame( outcome = c(\"normal\", \"normal\", \"cancer\"), var1 = c(1, 2, 3) ) cont_df #> outcome var1 #> 1 normal 1 #> 2 normal 2 #> 3 cancer 3 # preprocess raw continuous data preprocess_data(dataset = cont_df, outcome_colname = \"outcome\") #> Using 'outcome' as the outcome column. #> $dat_transformed #> # A tibble: 3 × 2 #> outcome var1 #> #> 1 normal -1 #> 2 normal 0 #> 3 cancer 1 #> #> $grp_feats #> NULL #> #> $removed_feats #> character(0) # preprocess raw continuous data, no normalization preprocess_data(dataset = cont_df, outcome_colname = \"outcome\", method = NULL) # raw continuous dataset as characters cont_char_df <- data.frame( outcome = c(\"normal\", \"normal\", \"cancer\"), var1 = c(\"1\", \"2\", \"3\") ) cont_char_df #> outcome var1 #> 1 normal 1 #> 2 normal 2 #> 3 cancer 3 # preprocess raw continuous character data as numeric preprocess_data(dataset = cont_char_df, outcome_colname = \"outcome\") # preprocess raw continuous character data as characters preprocess_data(dataset = cont_char_df, outcome_colname = \"outcome\", to_numeric = FALSE) #> Using 'outcome' as the outcome column. #> $dat_transformed #> # A tibble: 3 × 4 #> outcome var1_1 var1_2 var1_3 #> #> 1 normal 1 0 0 #> 2 normal 0 1 0 #> 3 cancer 0 0 1 #> #> $grp_feats #> NULL #> #> $removed_feats #> character(0)"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/preprocess.html","id":"collapse-perfectly-correlated-features","dir":"Articles","previous_headings":"Examples","what":"Collapse perfectly correlated features","title":"Preprocessing data","text":"By default, preprocess_data() collapses features that are perfectly positively or negatively correlated. This is because having multiple copies of those features does not add information to machine learning, and it makes run_ml faster. As you can see, we end up with one variable, as all 3 are grouped together. Also, the second element of the list is no longer NULL. Instead, it tells us that grp1 contains var1, var2, and var3. If you want to group only positively correlated features, but not negatively correlated features (e.g. for interpretability, or for another downstream application), you can do that by using group_neg_corr=FALSE: Here, var3 is kept on its own because it's negatively correlated with var1 and var2. You can also choose to keep all features separate, even if they are perfectly correlated, by using collapse_corr_feats=FALSE: In this case, grp_feats will always be NULL.","code":"# raw correlated dataset corr_df <- data.frame( outcome = c(\"normal\", \"normal\", \"cancer\"), var1 = c(\"no\", \"yes\", \"no\"), var2 = c(0, 1, 0), var3 = c(1, 0, 1) ) corr_df #> outcome var1 var2 var3 #> 1 normal no 0 1 #> 2 normal yes 1 0 #> 3 cancer no 0 1 # preprocess raw correlated dataset preprocess_data(dataset = corr_df, outcome_colname = \"outcome\") #> Using 'outcome' as the outcome column. #> $dat_transformed #> # A tibble: 3 × 2 #> outcome grp1 #> #> 1 normal 0 #> 2 normal 1 #> 3 cancer 0 #> #> $grp_feats #> $grp_feats$grp1 #> [1] \"var1_yes\" \"var3_1\" #> #> #> $removed_feats #> [1] \"var2\" # preprocess raw correlated dataset; don't group negatively correlated features preprocess_data(dataset = corr_df, outcome_colname = \"outcome\", group_neg_corr = FALSE) #> Using 'outcome' as the outcome column. #> $dat_transformed #> # A tibble: 3 × 3 #> outcome var1_yes var3_1 #> #> 1 normal 0 1 #> 2 normal 1 0 #> 3 cancer 0 1 #> #> $grp_feats #> NULL #> #> $removed_feats #> [1] \"var2\" # preprocess raw correlated dataset; don't collapse correlated features preprocess_data(dataset = corr_df, outcome_colname = \"outcome\", collapse_corr_feats = FALSE) #> Using 'outcome' as the outcome column.
#> $dat_transformed #> # A tibble: 3 × 3 #> outcome var1_yes var3_1 #> #> 1 normal 0 1 #> 2 normal 1 0 #> 3 cancer 0 1 #> #> $grp_feats #> NULL #> #> $removed_feats #> [1] \"var2\""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/preprocess.html","id":"data-with-near-zero-variance","dir":"Articles","previous_headings":"Examples","what":"Data with near-zero variance","title":"Preprocessing data","text":"What if we have variables that are all zero, or all “no”? Those ones won't contribute any information, so we remove them: Here, var3, var4, and var5 have no variability, so these variables are removed during preprocessing: You can read the caret::preProcess() documentation for more information. By default, we remove features with “near-zero variance” (remove_var='nzv'). This uses the default arguments of caret::nearZeroVar(). However, particularly with smaller datasets, you might not want to remove features with near-zero variance. If you want to remove only features with zero variance, you can use remove_var='zv': If you want to include all features, you can use the argument remove_var=NULL. For this to work, you cannot collapse correlated features (otherwise it errors out because of the underlying caret function we use). If you want to be more nuanced in how you remove near-zero variance features (e.g. change the default 10% cutoff for the percentage of distinct values out of the total number of samples), you can use the caret::preProcess() function after running preprocess_data with remove_var=NULL (see the caret::nearZeroVar() function for more information).","code":"# raw dataset with non-variable features nonvar_df <- data.frame( outcome = c(\"normal\", \"normal\", \"cancer\"), var1 = c(\"no\", \"yes\", \"no\"), var2 = c(0, 1, 1), var3 = c(\"no\", \"no\", \"no\"), var4 = c(0, 0, 0), var5 = c(12, 12, 12) ) nonvar_df #> outcome var1 var2 var3 var4 var5 #> 1 normal no 0 no 0 12 #> 2 normal yes 1 no 0 12 #> 3 cancer no 1 no 0 12 # remove features with near-zero variance preprocess_data(dataset = nonvar_df, outcome_colname = \"outcome\") #> Using 'outcome' as the outcome column. #> $dat_transformed #> # A tibble: 3 × 3 #> outcome var1_yes var2_1 #> #> 1 normal 0 0 #> 2 normal 1 1 #> 3 cancer 0 1 #> #> $grp_feats #> NULL #> #> $removed_feats #> [1] \"var4\" \"var3\" \"var5\" # remove features with zero variance preprocess_data(dataset = nonvar_df, outcome_colname = \"outcome\", remove_var = \"zv\") #> Using 'outcome' as the outcome column. #> $dat_transformed #> # A tibble: 3 × 3 #> outcome var1_yes var2_1 #> #> 1 normal 0 0 #> 2 normal 1 1 #> 3 cancer 0 1 #> #> $grp_feats #> NULL #> #> $removed_feats #> [1] \"var4\" \"var3\" \"var5\" # don't remove features with near-zero or zero variance preprocess_data(dataset = nonvar_df, outcome_colname = \"outcome\", remove_var = NULL, collapse_corr_feats = FALSE) #> Using 'outcome' as the outcome column. #> $dat_transformed #> # A tibble: 3 × 5 #> outcome var1_yes var2_1 var3 var5 #> #> 1 normal 0 0 0 12 #> 2 normal 1 1 0 12 #> 3 cancer 0 1 0 12 #> #> $grp_feats #> NULL #> #> $removed_feats #> [1] \"var4\""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/preprocess.html","id":"missing-data","dir":"Articles","previous_headings":"Examples","what":"Missing data","title":"Preprocessing data","text":"preprocess_data() also deals with missing data. It: Removes missing outcome variables. Maintains zero variability in a feature if it already has no variability (i.e. the feature is removed if you remove features with near-zero variance). Replaces missing binary and categorical variables with zero (after splitting into multiple columns). Replaces missing continuous data with the median value of that feature (a minimal sketch of this rule follows).
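As a plain base-R illustration of that median-imputation rule (this is the arithmetic being described, not mikropml internals):

# median imputation as described above: NAs in a continuous feature
# are replaced with the median of the observed values.
x <- c(1, 2, 3, NA)
x[is.na(x)] <- median(x, na.rm = TRUE)
x
#> [1] 1 2 3 2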
If you'd like to deal with missing data in a different way, please do so prior to inputting the data to preprocess_data().","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/preprocess.html","id":"remove-missing-outcome-variables","dir":"Articles","previous_headings":"Examples > Missing data","what":"Remove missing outcome variables","title":"Preprocessing data","text":"","code":"# raw dataset with missing outcome value miss_oc_df <- data.frame( outcome = c(\"normal\", \"normal\", \"cancer\", NA), var1 = c(\"no\", \"yes\", \"no\", \"no\"), var2 = c(0, 1, 1, 1) ) miss_oc_df #> outcome var1 var2 #> 1 normal no 0 #> 2 normal yes 1 #> 3 cancer no 1 #> 4 <NA> no 1 # preprocess raw dataset with missing outcome value preprocess_data(dataset = miss_oc_df, outcome_colname = \"outcome\") #> Using 'outcome' as the outcome column. #> Removed 1/4 (25%) of samples because of missing outcome value (NA). #> $dat_transformed #> # A tibble: 3 × 3 #> outcome var1_yes var2_1 #> #> 1 normal 0 0 #> 2 normal 1 1 #> 3 cancer 0 1 #> #> $grp_feats #> NULL #> #> $removed_feats #> character(0)"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/preprocess.html","id":"maintain-zero-variability-in-a-feature-if-it-already-has-no-variability","dir":"Articles","previous_headings":"Examples > Missing data","what":"Maintain zero variability in a feature if it already has no variability","title":"Preprocessing data","text":"Here, the non-variable feature with the missing data is removed because we removed features with near-zero variance. If we had maintained the feature, it'd be all ones:","code":"# raw dataset with missing value in non-variable feature miss_nonvar_df <- data.frame( outcome = c(\"normal\", \"normal\", \"cancer\"), var1 = c(\"no\", \"yes\", \"no\"), var2 = c(NA, 1, 1) ) miss_nonvar_df #> outcome var1 var2 #> 1 normal no NA #> 2 normal yes 1 #> 3 cancer no 1 # preprocess raw dataset with missing value in non-variable feature preprocess_data(dataset = miss_nonvar_df, outcome_colname = \"outcome\") #> Using 'outcome' as the outcome column. #> There are 1 missing value(s) in features with no variation. Missing values were replaced with the non-varying value. #> $dat_transformed #> # A tibble: 3 × 2 #> outcome var1_yes #> #> 1 normal 0 #> 2 normal 1 #> 3 cancer 0 #> #> $grp_feats #> NULL #> #> $removed_feats #> [1] \"var2\" # preprocess raw dataset with missing value in non-variable feature preprocess_data(dataset = miss_nonvar_df, outcome_colname = \"outcome\", remove_var = NULL, collapse_corr_feats = FALSE) #> Using 'outcome' as the outcome column. #> There are 1 missing value(s) in features with no variation. Missing values were replaced with the non-varying value.
#> $dat_transformed #> # A tibble: 3 × 3 #> outcome var1_yes var2 #> #> 1 normal 0 1 #> 2 normal 1 1 #> 3 cancer 0 1 #> #> $grp_feats #> NULL #> #> $removed_feats #> character(0)"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/preprocess.html","id":"replace-missing-binary-and-categorical-variables-with-zero","dir":"Articles","previous_headings":"Examples > Missing data","what":"Replace missing binary and categorical variables with zero","title":"Preprocessing data","text":"Because the binary variable is split into two columns, the missing value is considered zero in both of them.","code":"# raw dataset with missing value in categorical feature miss_cat_df <- data.frame( outcome = c(\"normal\", \"normal\", \"cancer\"), var1 = c(\"no\", \"yes\", NA), var2 = c(NA, 1, 0) ) miss_cat_df #> outcome var1 var2 #> 1 normal no NA #> 2 normal yes 1 #> 3 cancer <NA> 0 # preprocess raw dataset with missing value in categorical feature preprocess_data(dataset = miss_cat_df, outcome_colname = \"outcome\") #> Using 'outcome' as the outcome column. #> 2 categorical missing value(s) (NA) were replaced with 0. Note that the matrix is not full rank so missing values may be duplicated in separate columns. #> $dat_transformed #> # A tibble: 3 × 3 #> outcome var1_no var1_yes #> #> 1 normal 1 0 #> 2 normal 0 1 #> 3 cancer 0 0 #> #> $grp_feats #> NULL #> #> $removed_feats #> [1] \"var2\""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/preprocess.html","id":"replace-missing-continuous-data-with-the-median-value-of-that-feature","dir":"Articles","previous_headings":"Examples > Missing data","what":"Replace missing continuous data with the median value of that feature","title":"Preprocessing data","text":"Here we're not normalizing the continuous features so it's easier to see what's going on (i.e. the median value is used):","code":"# raw dataset with missing value in continuous feature miss_cont_df <- data.frame( outcome = c(\"normal\", \"normal\", \"cancer\", \"normal\"), var1 = c(1, 2, 2, NA), var2 = c(1, 2, 3, NA) ) miss_cont_df #> outcome var1 var2 #> 1 normal 1 1 #> 2 normal 2 2 #> 3 cancer 2 3 #> 4 normal NA NA # preprocess raw dataset with missing value in continuous feature preprocess_data(dataset = miss_cont_df, outcome_colname = \"outcome\", method = NULL) #> Using 'outcome' as the outcome column. #> 2 missing continuous value(s) were imputed using the median value of the feature. #> $dat_transformed #> # A tibble: 4 × 3 #> outcome var1 var2 #> #> 1 normal 1 1 #> 2 normal 2 2 #> 3 cancer 2 3 #> 4 normal 2 2 #> #> $grp_feats #> NULL #> #> $removed_feats #> character(0)"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/preprocess.html","id":"putting-it-all-together","dir":"Articles","previous_headings":"Examples","what":"Putting it all together","title":"Preprocessing data","text":"Here's a more complicated example of raw data that puts everything we discussed together: Let's throw it at the preprocessing function with the default values: As you can see, we got several messages: One of the samples (row 4) was removed because the outcome value was missing. One of the variables in a feature with no variation had a missing value, which was replaced with the non-varying value (var11). Four categorical missing values were replaced with zero (var9). There are 4 missing rather than just 1 (like in the raw data) because we split the categorical variable into 4 different columns first. One missing continuous value was imputed using the median value of the feature (var8). Additionally, you can see that the continuous variables were normalized, the categorical variables were changed to binary, and several features were grouped together.
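That grouping bookkeeping is easiest to inspect by saving the result and indexing into it; a minimal sketch using the test_df defined in the code block that follows:

# which raw features were collapsed into each group, and what was dropped
prep <- preprocess_data(dataset = test_df, outcome_colname = "outcome")
prep$grp_feats$grp2 # e.g. returns "var2_b" "var3_yes" "var9_x"
prep$removed_feats # features removed during preprocessing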
The variables in each group can be found in grp_feats.","code":"test_df <- data.frame( outcome = c(\"normal\", \"normal\", \"cancer\", NA), var1 = 1:4, var2 = c(\"a\", \"b\", \"c\", \"d\"), var3 = c(\"no\", \"yes\", \"no\", \"no\"), var4 = c(0, 1, 0, 0), var5 = c(0, 0, 0, 0), var6 = c(\"no\", \"no\", \"no\", \"no\"), var7 = c(1, 1, 0, 0), var8 = c(5, 6, NA, 7), var9 = c(NA, \"x\", \"y\", \"z\"), var10 = c(1, 0, NA, NA), var11 = c(1, 1, NA, NA), var12 = c(\"1\", \"2\", \"3\", \"4\") ) test_df #> outcome var1 var2 var3 var4 var5 var6 var7 var8 var9 var10 var11 var12 #> 1 normal 1 a no 0 0 no 1 5 <NA> 1 1 1 #> 2 normal 2 b yes 1 0 no 1 6 x 0 1 2 #> 3 cancer 3 c no 0 0 no 0 NA y NA NA 3 #> 4 <NA> 4 d no 0 0 no 0 7 z NA NA 4 preprocess_data(dataset = test_df, outcome_colname = \"outcome\") #> Using 'outcome' as the outcome column. #> Removed 1/4 (25%) of samples because of missing outcome value (NA). #> There are 1 missing value(s) in features with no variation. Missing values were replaced with the non-varying value. #> 2 categorical missing value(s) (NA) were replaced with 0. Note that the matrix is not full rank so missing values may be duplicated in separate columns. #> 1 missing continuous value(s) were imputed using the median value of the feature. #> $dat_transformed #> # A tibble: 3 × 6 #> outcome grp1 var2_a grp2 grp3 var8 #> #> 1 normal -1 1 0 0 -0.707 #> 2 normal 0 0 1 0 0.707 #> 3 cancer 1 0 0 1 0 #> #> $grp_feats #> $grp_feats$grp1 #> [1] \"var1\" \"var12\" #> #> $grp_feats$var2_a #> [1] \"var2_a\" #> #> $grp_feats$grp2 #> [1] \"var2_b\" \"var3_yes\" \"var9_x\" #> #> $grp_feats$grp3 #> [1] \"var2_c\" \"var7_1\" \"var9_y\" #> #> $grp_feats$var8 #> [1] \"var8\" #> #> #> $removed_feats #> [1] \"var4\" \"var5\" \"var10\" \"var6\" \"var11\""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/preprocess.html","id":"next-step-train-and-evaluate-your-model","dir":"Articles","previous_headings":"Examples","what":"Next step: train and evaluate your model!","title":"Preprocessing data","text":"After you preprocess your data (either using preprocess_data() or by preprocessing the data yourself), you're ready to train and evaluate machine learning models! Please see run_ml() for more information on training models.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/articles/tuning.html","id":"the-simplest-way-to-run_ml","dir":"Articles","previous_headings":"","what":"The simplest way to run_ml()","title":"Hyperparameter tuning","text":"As mentioned above, the minimal input is your dataset (dataset) and the machine learning model you want to use (method). When we run_ml(), by default we do a 100 times repeated, 5-fold cross-validation, where we evaluate the hyperparameters in these 500 total iterations. Say we want to run L2 regularized logistic regression. We do this with: You'll probably get a warning when you run this because the dataset is so small. If you want to learn more about that, check out the introductory vignette about training and evaluating a ML model: vignette(\"introduction\"). By default, run_ml() selects hyperparameters depending on the dataset and method used. As you can see, the alpha hyperparameter is set to 0, which specifies L2 regularization. glmnet gives us the option to run both L1 and L2 regularization. If we change alpha to 1, we would run L1-regularized logistic regression. You can also tune alpha by specifying a variety of values between 0 and 1. When you use a value between 0 and 1, you are running elastic net. The default hyperparameter lambda, which adjusts the L2 regularization penalty, is a range of values between 10^-4 and 10. When we look at the 100 repeated cross-validation performance metrics such as AUC, Accuracy, and prAUC for each tested lambda value, we see that some are more appropriate for this dataset than others.","code":"results <- run_ml(dat, \"glmnet\", outcome_colname = \"dx\", cv_times = 100, seed = 2019 ) #> Using 'dx' as the outcome column. #> Training the model...
#> Loading required package: ggplot2 #> Loading required package: lattice #> #> Attaching package: 'caret' #> The following object is masked from 'package:mikropml': #> #> compare_models #> Training complete. results$trained_model #> glmnet #> #> 161 samples #> 10 predictor #> 2 classes: 'cancer', 'normal' #> #> No pre-processing #> Resampling: Cross-Validated (5 fold, repeated 100 times) #> Summary of sample sizes: 128, 129, 129, 129, 129, 130, ... #> Resampling results across tuning parameters: #> #> lambda logLoss AUC prAUC Accuracy Kappa F1 #> 1e-04 0.7113272 0.6123301 0.5725828 0.5853927 0.17080523 0.5730989 #> 1e-03 0.7113272 0.6123301 0.5725828 0.5853927 0.17080523 0.5730989 #> 1e-02 0.7112738 0.6123883 0.5726478 0.5854514 0.17092470 0.5731635 #> 1e-01 0.6819806 0.6210744 0.5793961 0.5918756 0.18369829 0.5779616 #> 1e+00 0.6803749 0.6278273 0.5827655 0.5896356 0.17756961 0.5408139 #> 1e+01 0.6909820 0.6271894 0.5814202 0.5218000 0.02920942 0.1875293 #> Sensitivity Specificity Pos_Pred_Value Neg_Pred_Value Precision #> 0.5789667 0.5920074 0.5796685 0.5977166 0.5796685 #> 0.5789667 0.5920074 0.5796685 0.5977166 0.5796685 #> 0.5789667 0.5921250 0.5797769 0.5977182 0.5797769 #> 0.5805917 0.6032353 0.5880165 0.6026963 0.5880165 #> 0.5057833 0.6715588 0.6005149 0.5887829 0.6005149 #> 0.0607250 0.9678676 0.7265246 0.5171323 0.7265246 #> Recall Detection_Rate Balanced_Accuracy #> 0.5789667 0.2839655 0.5854870 #> 0.5789667 0.2839655 0.5854870 #> 0.5789667 0.2839636 0.5855458 #> 0.5805917 0.2847195 0.5919135 #> 0.5057833 0.2478291 0.5886711 #> 0.0607250 0.0292613 0.5142963 #> #> Tuning parameter 'alpha' was held constant at a value of 0 #> AUC was used to select the optimal model using the largest value. #> The final values used for the model were alpha = 0 and lambda = 1. 
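# Editor's sketch: because the trained model is a caret::train() object,
# the winning hyperparameter combination is also stored directly in
# bestTune; this simply mirrors the final values printed above.
results$trained_model$bestTune
#>   alpha lambda
#> 1     0      1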
results$trained_model$results #> alpha lambda logLoss AUC prAUC Accuracy Kappa F1 #> 1 0 1e-04 0.7113272 0.6123301 0.5725828 0.5853927 0.17080523 0.5730989 #> 2 0 1e-03 0.7113272 0.6123301 0.5725828 0.5853927 0.17080523 0.5730989 #> 3 0 1e-02 0.7112738 0.6123883 0.5726478 0.5854514 0.17092470 0.5731635 #> 4 0 1e-01 0.6819806 0.6210744 0.5793961 0.5918756 0.18369829 0.5779616 #> 5 0 1e+00 0.6803749 0.6278273 0.5827655 0.5896356 0.17756961 0.5408139 #> 6 0 1e+01 0.6909820 0.6271894 0.5814202 0.5218000 0.02920942 0.1875293 #> Sensitivity Specificity Pos_Pred_Value Neg_Pred_Value Precision Recall #> 1 0.5789667 0.5920074 0.5796685 0.5977166 0.5796685 0.5789667 #> 2 0.5789667 0.5920074 0.5796685 0.5977166 0.5796685 0.5789667 #> 3 0.5789667 0.5921250 0.5797769 0.5977182 0.5797769 0.5789667 #> 4 0.5805917 0.6032353 0.5880165 0.6026963 0.5880165 0.5805917 #> 5 0.5057833 0.6715588 0.6005149 0.5887829 0.6005149 0.5057833 #> 6 0.0607250 0.9678676 0.7265246 0.5171323 0.7265246 0.0607250 #> Detection_Rate Balanced_Accuracy logLossSD AUCSD prAUCSD AccuracySD #> 1 0.2839655 0.5854870 0.085315967 0.09115229 0.07296554 0.07628572 #> 2 0.2839655 0.5854870 0.085315967 0.09115229 0.07296554 0.07628572 #> 3 0.2839636 0.5855458 0.085276565 0.09122242 0.07301412 0.07637123 #> 4 0.2847195 0.5919135 0.048120032 0.09025695 0.07329214 0.07747312 #> 5 0.2478291 0.5886711 0.012189172 0.09111917 0.07505095 0.07771171 #> 6 0.0292613 0.5142963 0.001610008 0.09266875 0.07640896 0.03421597 #> KappaSD F1SD SensitivitySD SpecificitySD Pos_Pred_ValueSD #> 1 0.15265728 0.09353786 0.13091452 0.11988406 0.08316345 #> 2 0.15265728 0.09353786 0.13091452 0.11988406 0.08316345 #> 3 0.15281903 0.09350099 0.13073501 0.12002481 0.08329024 #> 4 0.15485134 0.09308733 0.12870031 0.12037225 0.08554483 #> 5 0.15563046 0.10525917 0.13381009 0.11639614 0.09957685 #> 6 0.06527242 0.09664720 0.08010494 0.06371495 0.31899811 #> Neg_Pred_ValueSD PrecisionSD RecallSD Detection_RateSD Balanced_AccuracySD #> 1 0.08384956 0.08316345 0.13091452 0.06394409 0.07640308 #> 2 0.08384956 0.08316345 0.13091452 0.06394409 0.07640308 #> 3 0.08385838 0.08329024 0.13073501 0.06384692 0.07648207 #> 4 0.08427362 0.08554483 0.12870031 0.06272897 0.07748791 #> 5 0.07597766 0.09957685 0.13381009 0.06453637 0.07773039 #> 6 0.02292294 0.31899811 0.08010494 0.03803159 0.03184136"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/tuning.html","id":"customizing-hyperparameters","dir":"Articles","previous_headings":"","what":"Customizing hyperparameters","title":"Hyperparameter tuning","text":"In this example, we want to change the lambda values to provide a better range to test in the cross-validation step. We don't want to use the defaults, so we provide a named list with the new values. For example: Now let's run L2 logistic regression with the new lambda values: This time, we cover a larger and different range of lambda settings in cross-validation. How do we know which lambda value is the best one? To answer that, we need to run the ML pipeline on multiple data splits and look at the mean cross-validation performance of each lambda across those modeling experiments. We describe how to run the pipeline with multiple data splits in vignette(\"parallel\"). Here we train the model with the new lambda range we defined above. We run it 3 times, each with a different seed, which results in different splits of the data into training and testing sets. Then we can use plot_hp_performance to see which lambda gives us the largest mean AUC value across the modeling experiments. As you can see, we get a mean maxima at 0.03, which is the best lambda value for this dataset when we run 3 data splits. The fact that we are seeing this maxima in the middle of the range and not at the edges shows that we are providing a large enough range to exhaust our lambda search as we build the model. We recommend that the user use this plot to make sure the best hyperparameter is not at the edges of the provided list.
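If you also want to search over alpha (elastic net), the same named-list mechanism applies; a minimal sketch with an illustrative grid (the values are examples, not recommendations):

# tune alpha and lambda together; run_ml() evaluates every combination
# of the supplied values during cross-validation.
elastic_hp <- list(
  alpha = c(0, 0.5, 1),
  lambda = c(0.01, 0.03, 0.1)
)
results_enet <- run_ml(dat, "glmnet",
  outcome_colname = "dx",
  hyperparameters = elastic_hp,
  seed = 2019
)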
For a better understanding of the global maxima, it would be better to run more data splits by using more seeds. We picked 3 seeds to keep the runtime of this vignette down, but for real-world data we recommend using many more seeds.","code":"new_hp <- list( alpha = 1, lambda = c(0.00001, 0.0001, 0.001, 0.01, 0.015, 0.02, 0.025, 0.03, 0.04, 0.05, 0.06, 0.1) ) new_hp #> $alpha #> [1] 1 #> #> $lambda #> [1] 0.00001 0.00010 0.00100 0.01000 0.01500 0.02000 0.02500 0.03000 0.04000 #> [10] 0.05000 0.06000 0.10000 results <- run_ml(dat, \"glmnet\", outcome_colname = \"dx\", cv_times = 100, hyperparameters = new_hp, seed = 2019 ) #> Using 'dx' as the outcome column. #> Training the model... #> Training complete. results$trained_model #> glmnet #> #> 161 samples #> 10 predictor #> 2 classes: 'cancer', 'normal' #> #> No pre-processing #> Resampling: Cross-Validated (5 fold, repeated 100 times) #> Summary of sample sizes: 128, 129, 129, 129, 129, 130, ... #> Resampling results across tuning parameters: #> #> lambda logLoss AUC prAUC Accuracy Kappa F1 #> 0.00001 0.7215038 0.6112253 0.5720005 0.5842184 0.1684871 0.5726974 #> 0.00010 0.7215038 0.6112253 0.5720005 0.5842184 0.1684871 0.5726974 #> 0.00100 0.7209099 0.6112771 0.5719601 0.5845329 0.1691285 0.5730414 #> 0.01000 0.6984432 0.6156112 0.5758977 0.5830960 0.1665062 0.5759265 #> 0.01500 0.6913332 0.6169396 0.5770496 0.5839720 0.1683912 0.5786347 #> 0.02000 0.6870103 0.6177313 0.5779563 0.5833645 0.1673234 0.5796891 #> 0.02500 0.6846387 0.6169757 0.5769305 0.5831907 0.1669901 0.5792840 #> 0.03000 0.6834369 0.6154763 0.5754118 0.5821394 0.1649081 0.5786336 #> 0.04000 0.6833322 0.6124776 0.5724802 0.5786224 0.1578750 0.5735757 #> 0.05000 0.6850454 0.6069059 0.5668928 0.5732197 0.1468699 0.5624480 #> 0.06000 0.6880861 0.5974311 0.5596714 0.5620224 0.1240112 0.5375824 #> 0.10000 0.6944846 0.5123565 0.3034983 0.5120114 0.0110144 0.3852423 #> Sensitivity Specificity Pos_Pred_Value Neg_Pred_Value Precision #> 0.5798500 0.5888162 0.5780748 0.5971698 0.5780748 #> 0.5798500 0.5888162 0.5780748 0.5971698 0.5780748 #> 0.5801167 0.5891912 0.5784544 0.5974307 0.5784544 #> 0.5883667 0.5783456 0.5755460 0.5977390 0.5755460 #> 0.5929750 0.5756471 0.5763123 0.5987220 0.5763123 #> 0.5967167 0.5708824 0.5748385 0.5990649 0.5748385 #> 0.5970250 0.5702721 0.5743474 0.5997928 0.5743474 #> 0.5964500 0.5687721 0.5734044 0.5982451 0.5734044 #> 0.5904500 0.5677353 0.5699817 0.5943308 0.5699817 #> 0.5734833 0.5736176 0.5668523 0.5864448 0.5668523 #> 0.5360333 0.5881250 0.5595918 0.5722851 0.5595918 #> 0.1145917 0.8963456 0.5255752 0.5132665 0.5255752 #> Recall Detection_Rate Balanced_Accuracy #> 0.5798500 0.28441068 0.5843331 #> 0.5798500 0.28441068 0.5843331 #> 0.5801167 0.28453770 0.5846539 #> 0.5883667 0.28860521 0.5833561 #> 0.5929750 0.29084305 0.5843110 #> 0.5967167 0.29264681 0.5837995 #> 0.5970250 0.29278708 0.5836485 #> 0.5964500 0.29248583 0.5826110 #> 0.5904500 0.28951992 0.5790926 #> 0.5734833 0.28119862 0.5735505 #> 0.5360333 0.26270204 0.5620792 #> 0.1145917 0.05585777 0.5054686 #> #> Tuning parameter 'alpha' was held constant at a value of 1 #> AUC was used to select the optimal model using the largest value. #> The final values used for the model were alpha = 1 and lambda = 0.02. results <- lapply(seq(100, 102), function(seed) { run_ml(dat, \"glmnet\", seed = seed, hyperparameters = new_hp) }) #> Using 'dx' as the outcome column. #> Training the model... #> Training complete. #> Using 'dx' as the outcome column. #> Training the model... #> Training complete. #> Using 'dx' as the outcome column. #> Training the model...
#> Training complete. models <- lapply(results, function(x) x$trained_model) hp_metrics <- combine_hp_performance(models) plot_hp_performance(hp_metrics$dat, lambda, AUC)"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/tuning.html","id":"hyperparameter-options","dir":"Articles","previous_headings":"","what":"Hyperparameter options","title":"Hyperparameter tuning","text":"You can see which default hyperparameters would be used for your dataset with get_hyperparams_list(). Here are some examples with the built-in datasets we provide: These are the hyperparameters that are tuned for each of the modeling methods. The output is very similar for all of them, so we won't go into those details.","code":"get_hyperparams_list(otu_mini_bin, \"glmnet\") #> $lambda #> [1] 1e-04 1e-03 1e-02 1e-01 1e+00 1e+01 #> #> $alpha #> [1] 0 get_hyperparams_list(otu_mini_bin, \"rf\") #> $mtry #> [1] 2 3 6 get_hyperparams_list(otu_small, \"rf\") #> $mtry #> [1] 4 8 16"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/tuning.html","id":"regression","dir":"Articles","previous_headings":"Hyperparameter options","what":"Regression","title":"Hyperparameter tuning","text":"As mentioned above, glmnet uses the alpha parameter and the lambda hyperparameter. alpha of 0 specifies L2 regularization (ridge). alpha of 1 specifies L1 regularization (lasso). Anything in between is elastic net. You can also tune alpha like you would any other hyperparameter. Please refer to the original glmnet documentation for more information: https://web.stanford.edu/~hastie/glmnet/glmnet_alpha.html The default hyperparameters chosen by run_ml() are fixed for glmnet.","code":"#> $lambda #> [1] 1e-04 1e-03 1e-02 1e-01 1e+00 1e+01 #> #> $alpha #> [1] 0"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/tuning.html","id":"random-forest","dir":"Articles","previous_headings":"Hyperparameter options","what":"Random forest","title":"Hyperparameter tuning","text":"When we run rf or parRF, we are using the randomForest package implementation. We are tuning the mtry hyperparameter. This is the number of features that are randomly collected to be sampled at each tree node. This number needs to be less than the number of features in the dataset. Please refer to the original documentation for more information: https://cran.r-project.org/web/packages/randomForest/randomForest.pdf By default, we take the square root of the number of features in the dataset and provide a range that is [sqrt_features / 2, sqrt_features, sqrt_features * 2]. For example, if the number of features is 1000: Similar to the glmnet method, you can provide your own mtry range.","code":"#> $mtry #> [1] 16 32 64"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/tuning.html","id":"decision-tree","dir":"Articles","previous_headings":"Hyperparameter options","what":"Decision tree","title":"Hyperparameter tuning","text":"When we run rpart2, we are running the rpart package implementation of the decision tree. We are tuning the maxdepth hyperparameter. This is the maximum depth of any node of the final tree. Please refer to the original documentation for more information on maxdepth: https://cran.r-project.org/web/packages/rpart/rpart.pdf By default, we provide a range that is less than the number of features in the dataset. For example, with 1000 features: Whereas with 10 features:","code":"#> $maxdepth #> [1] 1 2 4 8 16 30 #> $maxdepth #> [1] 1 2 4 8"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/tuning.html","id":"svm-with-radial-basis-kernel","dir":"Articles","previous_headings":"Hyperparameter options","what":"SVM with radial basis kernel","title":"Hyperparameter tuning","text":"When we run the svmRadial method, we are tuning the C and sigma hyperparameters. sigma defines how far the influence of a single training example reaches, while C behaves as a regularization parameter.
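As with glmnet, you can override these defaults by passing your own named list to run_ml(); a minimal sketch with illustrative ranges (values are examples, not recommendations):

# supply custom C and sigma grids for the radial-basis SVM
svm_hp <- list(
  C = c(0.1, 1, 10),
  sigma = c(0.001, 0.01, 0.1)
)
results_svm <- run_ml(otu_mini_bin, "svmRadial",
  outcome_colname = "dx",
  hyperparameters = svm_hp,
  seed = 2019
)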
Please refer to this great sklearn resource for more information on these hyperparameters: https://scikit-learn.org/stable/auto_examples/svm/plot_rbf_parameters.html By default, we provide 2 separate ranges of values for the two hyperparameters.","code":"#> $C #> [1] 1e-03 1e-02 1e-01 1e+00 1e+01 1e+02 #> #> $sigma #> [1] 1e-06 1e-05 1e-04 1e-03 1e-02 1e-01"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/tuning.html","id":"xgboost","dir":"Articles","previous_headings":"Hyperparameter options","what":"XGBoost","title":"Hyperparameter tuning","text":"When we run the xgbTree method, we are tuning the nrounds, gamma, eta, max_depth, colsample_bytree, min_child_weight, and subsample hyperparameters. You can read more about these hyperparameters here: https://xgboost.readthedocs.io/en/latest/parameter.html By default, we set nrounds, gamma, colsample_bytree, and min_child_weight to fixed values and provide a range of values for eta, max_depth, and subsample. All of these can be changed and optimized by the user by supplying a custom named list of hyperparameters to run_ml().","code":"#> $nrounds #> [1] 100 #> #> $gamma #> [1] 0 #> #> $eta #> [1] 0.001 0.010 0.100 1.000 #> #> $max_depth #> [1] 1 2 4 8 16 30 #> #> $colsample_bytree #> [1] 0.8 #> #> $min_child_weight #> [1] 1 #> #> $subsample #> [1] 0.4 0.5 0.6 0.7"},{"path":"http://www.schlosslab.org/mikropml/dev/articles/tuning.html","id":"other-ml-methods","dir":"Articles","previous_headings":"","what":"Other ML methods","title":"Hyperparameter tuning","text":"While the ML methods above are the ones we have tested and set default hyperparameters for, in theory you may be able to use other methods supported by caret with run_ml(). Take a look at the available models in caret (or see this list by tag). You will need to give run_ml() your own custom hyperparameters, just like in the examples above:","code":"run_ml(otu_mini_bin, \"regLogistic\", hyperparameters = list( cost = 10^seq(-4, 1, 1), epsilon = c(0.01), loss = c(\"L2_primal\") ) )"},{"path":"http://www.schlosslab.org/mikropml/dev/authors.html","id":null,"dir":"","previous_headings":"","what":"Authors","title":"Authors and Citation","text":"Begüm Topçuoğlu. Author. Zena Lapp. Author. Kelly Sovacool. Author, maintainer. Evan Snitkin. Author. Jenna Wiens. Author. Patrick Schloss. Author. Nick Lesniak. Contributor. Courtney Armour. Contributor. Sarah Lucas. Contributor.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/authors.html","id":"citation","dir":"","previous_headings":"","what":"Citation","title":"Authors and Citation","text":"Topçuoğlu et al., (2021). mikropml: User-Friendly R Package for Supervised Machine Learning Pipelines. Journal of Open Source Software, 6(61), 3073, https://doi.org/10.21105/joss.03073","code":"@Article{, title = {{mikropml}: User-Friendly R Package for Supervised Machine Learning Pipelines}, author = {Begüm D. Topçuoğlu and Zena Lapp and Kelly L. Sovacool and Evan Snitkin and Jenna Wiens and Patrick D. Schloss}, journal = {Journal of Open Source Software}, year = {2021}, month = {May}, volume = {6}, number = {61}, pages = {3073}, doi = {10.21105/joss.03073}, url = {https://joss.theoj.org/papers/10.21105/joss.03073}, }"},{"path":"http://www.schlosslab.org/mikropml/dev/index.html","id":"mikropml-","dir":"","previous_headings":"","what":"User-Friendly R Package for Supervised Machine Learning Pipelines","title":"User-Friendly R Package for Supervised Machine Learning Pipelines","text":"meek-ROPE em el An interface to build machine learning models for classification and regression problems. mikropml implements the ML pipeline described in Topçuoğlu et al. (2020) with reasonable default options for data preprocessing, hyperparameter tuning, cross-validation, testing, model evaluation, and interpretation steps.
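For orientation, the entire default pipeline is a single function call; a minimal sketch using the packaged otu_mini_bin dataset (the same quick-start call shown in the package's introductory material):

library(mikropml)
# train and evaluate an L2-regularized logistic regression model;
# the seed makes the train/test split reproducible.
results <- run_ml(otu_mini_bin, "glmnet", outcome_colname = "dx", seed = 2019)
results$performance # test-set performance metrics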
See the website for more information, documentation, and examples.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/index.html","id":"installation","dir":"","previous_headings":"","what":"Installation","title":"User-Friendly R Package for Supervised Machine Learning Pipelines","text":"You can install the latest release from CRAN: or the development version from GitHub: or install from a terminal using conda or mamba:","code":"install.packages('mikropml') # install.packages(\"devtools\") devtools::install_github(\"SchlossLab/mikropml\") mamba install -c conda-forge r-mikropml"},{"path":"http://www.schlosslab.org/mikropml/dev/index.html","id":"dependencies","dir":"","previous_headings":"Installation","what":"Dependencies","title":"User-Friendly R Package for Supervised Machine Learning Pipelines","text":"Imports: caret, dplyr, e1071, glmnet, kernlab, MLmetrics, randomForest, rlang, rpart, stats, utils, xgboost Suggests: assertthat, doFuture, forcats, foreach, future, future.apply, furrr, ggplot2, knitr, progress, progressr, purrr, rmarkdown, rsample, testthat, tidyr","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/index.html","id":"usage","dir":"","previous_headings":"","what":"Usage","title":"User-Friendly R Package for Supervised Machine Learning Pipelines","text":"Check out the introductory vignette for a quick start tutorial. For a more in-depth discussion, read all the vignettes and/or take a look at the reference documentation. You can watch the Riffomonas Project series of video tutorials covering mikropml and skills related to machine learning. We also provide a Snakemake workflow for running mikropml locally or on an HPC. We highly recommend running mikropml with Snakemake or another workflow management system for reproducibility and scalability of ML analyses.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/index.html","id":"help--contributing","dir":"","previous_headings":"","what":"Help & Contributing","title":"User-Friendly R Package for Supervised Machine Learning Pipelines","text":"If you come across a bug, open an issue and include a minimal reproducible example. If you have questions, create a new post in Discussions. If you'd like to contribute, see our guidelines.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/index.html","id":"code-of-conduct","dir":"","previous_headings":"","what":"Code of Conduct","title":"User-Friendly R Package for Supervised Machine Learning Pipelines","text":"Please note that the mikropml project is released with a Contributor Code of Conduct. By contributing to this project, you agree to abide by its terms.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/index.html","id":"license","dir":"","previous_headings":"","what":"License","title":"User-Friendly R Package for Supervised Machine Learning Pipelines","text":"The mikropml package is licensed under the MIT license. Text and images included in this repository, including the mikropml logo, are licensed under the CC BY 4.0 license.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/index.html","id":"citation","dir":"","previous_headings":"","what":"Citation","title":"User-Friendly R Package for Supervised Machine Learning Pipelines","text":"To cite mikropml in publications, use: Topçuoğlu BD, Lapp Z, Sovacool KL, Snitkin E, Wiens J, Schloss PD (2021). “mikropml: User-Friendly R Package for Supervised Machine Learning Pipelines.” Journal of Open Source Software, 6(61), 3073. doi:10.21105/joss.03073, https://joss.theoj.org/papers/10.21105/joss.03073. A BibTeX entry for LaTeX users is:","code":"@Article{, title = {{mikropml}: User-Friendly R Package for Supervised Machine Learning Pipelines}, author = {Begüm D. Topçuoğlu and Zena Lapp and Kelly L. Sovacool and Evan Snitkin and Jenna Wiens and Patrick D. Schloss}, journal = {Journal of Open Source Software}, year = {2021}, month = {May}, volume = {6}, number = {61}, pages = {3073}, doi = {10.21105/joss.03073}, url = {https://joss.theoj.org/papers/10.21105/joss.03073}, }"},{"path":"http://www.schlosslab.org/mikropml/dev/index.html","id":"why-the-name","dir":"","previous_headings":"","what":"Why the name?","title":"User-Friendly R Package for Supervised Machine Learning Pipelines","text":"The word “mikrop” (pronounced “meek-ROPE”) is Turkish for “microbe”. This package was originally implemented as a machine learning pipeline for microbiome-based classification problems (see Topçuoğlu et al. 2020). We realized that these methods are applicable in many other fields too, but we stuck with the name because we like it!","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/pull_request_template.html","id":"issues","dir":"","previous_headings":"","what":"Issues","title":"NA","text":"Resolves # .","code":""},{"path":[]},{"path":"http://www.schlosslab.org/mikropml/dev/pull_request_template.html","id":"checklist","dir":"","previous_headings":"","what":"Checklist","title":"NA","text":"(Strikethrough any points that are not applicable.) Write unit tests for any new functionality or bug fixes. Update roxygen comments and vignettes if needed. Update NEWS.md if this includes any user-facing changes. The check workflow must succeed on your most recent commit. This is always required before the PR can be merged.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/bootstrap_performance.html","id":null,"dir":"Reference","previous_headings":"","what":"Calculate a bootstrap confidence interval for the performance on a single train/test split — bootstrap_performance","title":"Calculate a bootstrap confidence interval for the performance on a single train/test split — bootstrap_performance","text":"Uses rsample::bootstraps(), rsample::int_pctl(), and furrr::future_map()","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/bootstrap_performance.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Calculate a bootstrap confidence interval for the performance on a single train/test split — bootstrap_performance","text":"","code":"bootstrap_performance( ml_result, outcome_colname, bootstrap_times = 10000, alpha = 0.05 )"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/bootstrap_performance.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Calculate a bootstrap confidence interval for the performance on a single train/test split — bootstrap_performance","text":"ml_result the result returned by a single run_ml() call. outcome_colname Column name as a string of the outcome variable (default NULL; the first column is chosen automatically).
bootstrap_times the number of bootstraps to create (default: 10000). alpha the alpha level for the confidence interval (default 0.05 to create a 95% confidence interval).","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/bootstrap_performance.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Calculate a bootstrap confidence interval for the performance on a single train/test split — bootstrap_performance","text":"A data frame with the estimate (.estimate), lower bound (.lower), and upper bound (.upper) of each performance metric (term).","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/bootstrap_performance.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Calculate a bootstrap confidence interval for the performance on a single train/test split — bootstrap_performance","text":"Kelly Sovacool, sovacool@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/bootstrap_performance.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Calculate a bootstrap confidence interval for the performance on a single train/test split — bootstrap_performance","text":"","code":"bootstrap_performance(otu_mini_bin_results_glmnet, \"dx\", bootstrap_times = 10, alpha = 0.10 ) #> Warning: Recommend at least 1000 non-missing bootstrap resamples for terms: `AUC`, `Accuracy`, `Balanced_Accuracy`, `Detection_Rate`, `F1`, `Kappa`, `Neg_Pred_Value`, `Pos_Pred_Value`, `Precision`, `Recall`, `Sensitivity`, `Specificity`, `cv_metric_AUC`, `logLoss`, `prAUC`. #> # A tibble: 15 × 6 #> term .lower .estimate .upper .alpha .method #> #> 1 AUC 0.483 0.650 0.824 0.1 percentile #> 2 Accuracy 0.405 0.587 0.695 0.1 percentile #> 3 Balanced_Accuracy 0.432 0.592 0.700 0.1 percentile #> 4 Detection_Rate 0.205 0.310 0.413 0.1 percentile #> 5 F1 0.422 0.597 0.684 0.1 percentile #> 6 Kappa -0.139 0.180 0.389 0.1 percentile #> 7 Neg_Pred_Value 0.387 0.603 0.752 0.1 percentile #> 8 Pos_Pred_Value 0.354 0.579 0.730 0.1 percentile #> 9 Precision 0.354 0.579 0.730 0.1 percentile #> 10 Recall 0.489 0.634 0.75 0.1 percentile #> 11 Sensitivity 0.489 0.634 0.75 0.1 percentile #> 12 Specificity 0.321 0.550 0.706 0.1 percentile #> 13 cv_metric_AUC 0.622 0.622 0.622 0.1 percentile #> 14 logLoss 0.667 0.688 0.712 0.1 percentile #> 15 prAUC 0.468 0.602 0.740 0.1 percentile if (FALSE) { outcome_colname <- \"dx\" run_ml(otu_mini_bin, \"rf\", outcome_colname = \"dx\") %>% bootstrap_performance(outcome_colname, bootstrap_times = 10000, alpha = 0.05 ) }"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/bounds.html","id":null,"dir":"Reference","previous_headings":"","what":"Get the lower and upper bounds for an empirical confidence interval — lower_bound","title":"Get the lower and upper bounds for an empirical confidence interval — lower_bound","text":"Get the lower and upper bounds for an empirical confidence interval","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/bounds.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Get the lower and upper bounds for an empirical confidence interval — lower_bound","text":"","code":"lower_bound(x, alpha) upper_bound(x, alpha)"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/bounds.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Get the lower and upper bounds for an empirical confidence interval — lower_bound","text":"x a vector of test statistics, such as from permutation tests or bootstraps. alpha the alpha level of the confidence interval (default: 0.05 to obtain a 95% confidence interval).","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/bounds.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Get the lower and upper bounds for an empirical confidence interval — lower_bound","text":"the value of the lower or upper bound of the confidence interval","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/bounds.html","id":"functions","dir":"Reference","previous_headings":"","what":"Functions","title":"Get the lower and upper bounds for an empirical confidence interval — lower_bound","text":"lower_bound(): Get the lower bound for an empirical confidence interval. upper_bound(): Get the upper bound for an empirical confidence interval.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/bounds.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Get the lower and upper bounds for an empirical confidence interval — lower_bound","text":"","code":"if (FALSE) { x <- 1:10000 lower_bound(x, 0.05) upper_bound(x, 0.05) }"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_baseline_precision.html","id":null,"dir":"Reference","previous_headings":"","what":"Calculate the fraction of positives, i.e. baseline precision for a PRC curve — calc_baseline_precision","title":"Calculate the fraction of positives, i.e. baseline precision for a PRC curve — calc_baseline_precision","text":"Calculate the fraction of positives, i.e. the baseline precision for a PRC curve","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_baseline_precision.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Calculate the fraction of positives, i.e. baseline precision for a PRC curve — calc_baseline_precision","text":"","code":"calc_baseline_precision(dataset, outcome_colname = NULL, pos_outcome = NULL)"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_baseline_precision.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Calculate the fraction of positives, i.e. baseline precision for a PRC curve — calc_baseline_precision","text":"dataset Data frame with an outcome variable and other columns as features. outcome_colname Column name as a string of the outcome variable (default NULL; the first column is chosen automatically). pos_outcome the positive outcome from outcome_colname, e.g. \"cancer\" for the otu_mini_bin dataset.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_baseline_precision.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Calculate the fraction of positives, i.e. baseline precision for a PRC curve — calc_baseline_precision","text":"the baseline precision based on the fraction of positives","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_baseline_precision.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Calculate the fraction of positives, i.e. baseline precision for a PRC curve — calc_baseline_precision","text":"Kelly Sovacool, sovacool@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_baseline_precision.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Calculate the fraction of positives, i.e.
baseline precision for a PRC curve — calc_baseline_precision","text":"","code":"# calculate the baseline precision data.frame(y = c(\"a\", \"b\", \"a\", \"b\")) %>% calc_baseline_precision(\"y\", \"a\") #> Using 'y' as the outcome column. #> [1] 0.5 calc_baseline_precision(otu_mini_bin, outcome_colname = \"dx\", pos_outcome = \"cancer\" ) #> Using 'dx' as the outcome column. #> [1] 0.49 # if you're not sure which outcome was used as the 'positive' outcome during # model training, you can access it from the trained model and pass it along: calc_baseline_precision(otu_mini_bin, outcome_colname = \"dx\", pos_outcome = otu_mini_bin_results_glmnet$trained_model$levels[1] ) #> Using 'dx' as the outcome column. #> [1] 0.49"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_mean_perf.html","id":null,"dir":"Reference","previous_headings":"","what":"Generic function to calculate mean performance curves for multiple models — calc_mean_perf","title":"Generic function to calculate mean performance curves for multiple models — calc_mean_perf","text":"Generic function to calculate mean performance curves for multiple models","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_mean_perf.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Generic function to calculate mean performance curves for multiple models — calc_mean_perf","text":"","code":"calc_mean_perf(sensspec_dat, group_var = specificity, sum_var = sensitivity)"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_mean_perf.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generic function to calculate mean performance curves for multiple models — calc_mean_perf","text":"sensspec_dat data frame created by concatenating results of calc_model_sensspec() from multiple models. group_var the variable to group by (e.g. specificity or recall). sum_var the variable to summarize (e.g. sensitivity or precision).","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_mean_perf.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Generic function to calculate mean performance curves for multiple models — calc_mean_perf","text":"a data frame with the mean & standard deviation of sum_var summarized over group_var","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_mean_perf.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Generic function to calculate mean performance curves for multiple models — calc_mean_perf","text":"Courtney Armour and Kelly Sovacool","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_perf_bootstrap_split.html","id":null,"dir":"Reference","previous_headings":"","what":"Calculate performance for a single split from rsample::bootstraps() — calc_perf_bootstrap_split","title":"Calculate performance for a single split from rsample::bootstraps() — calc_perf_bootstrap_split","text":"Used by bootstrap_performance().","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_perf_bootstrap_split.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Calculate performance for a single split from rsample::bootstraps() — calc_perf_bootstrap_split","text":"","code":"calc_perf_bootstrap_split( test_data_split, trained_model, outcome_colname, perf_metric_function, perf_metric_name, class_probs, method, seed )"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_perf_bootstrap_split.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Calculate performance for a single split from rsample::bootstraps() — calc_perf_bootstrap_split","text":"test_data_split a single bootstrap of the test set from rsample::bootstraps(). trained_model Trained model from caret::train(). outcome_colname Column name as a string of the outcome variable (default NULL; the first column is chosen automatically). perf_metric_function Function to calculate the performance metric to be used for cross-validation and test performance. Some functions are provided by caret (see caret::defaultSummary()). Defaults: binary classification = twoClassSummary, multi-class classification = multiClassSummary, regression = defaultSummary. perf_metric_name The column name from the output of the function provided to perf_metric_function that is to be used as the performance metric. Defaults: binary classification = \"ROC\", multi-class classification = \"logLoss\", regression = \"RMSE\". class_probs Whether to use class probabilities (TRUE for categorical outcomes, FALSE for numeric outcomes). method ML method. Options: c(\"glmnet\", \"rf\", \"rpart2\", \"svmRadial\", \"xgbTree\"). glmnet: linear, logistic, or multiclass regression. rf: random forest. rpart2: decision tree. svmRadial: support vector machine. xgbTree: xgboost. seed Random seed (default: NA).
Your results will only be reproducible if you set the seed.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_perf_bootstrap_split.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Calculate performance for a single split from rsample::bootstraps() — calc_perf_bootstrap_split","text":"a long data frame of performance metrics for rsample::int_pctl()","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_perf_bootstrap_split.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Calculate performance for a single split from rsample::bootstraps() — calc_perf_bootstrap_split","text":"Kelly Sovacool, sovacool@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_perf_metrics.html","id":null,"dir":"Reference","previous_headings":"","what":"Get performance metrics for test data — calc_perf_metrics","title":"Get performance metrics for test data — calc_perf_metrics","text":"Get performance metrics for test data","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_perf_metrics.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Get performance metrics for test data — calc_perf_metrics","text":"","code":"calc_perf_metrics( test_data, trained_model, outcome_colname, perf_metric_function, class_probs )"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_perf_metrics.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Get performance metrics for test data — calc_perf_metrics","text":"test_data Held out test data: dataframe of the outcome and features. trained_model Trained model from caret::train(). outcome_colname Column name as a string of the outcome variable (default NULL; the first column is chosen automatically). perf_metric_function Function to calculate the performance metric to be used for cross-validation and test performance. Some functions are provided by caret (see caret::defaultSummary()). Defaults: binary classification = twoClassSummary, multi-class classification = multiClassSummary, regression = defaultSummary.
class_probs Whether use class probabilities (TRUE categorical outcomes, FALSE numeric outcomes).","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_perf_metrics.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Get performance metrics for test data — calc_perf_metrics","text":"Dataframe performance metrics.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_perf_metrics.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Get performance metrics for test data — calc_perf_metrics","text":"Zena Lapp, zenalapp@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/calc_perf_metrics.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Get performance metrics for test data — calc_perf_metrics","text":"","code":"if (FALSE) { results <- run_ml(otu_small, \"glmnet\", kfold = 2, cv_times = 2) calc_perf_metrics(results$test_data, results$trained_model, \"dx\", multiClassSummary, class_probs = TRUE ) }"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/combine_hp_performance.html","id":null,"dir":"Reference","previous_headings":"","what":"Combine hyperparameter performance metrics for multiple train/test splits — combine_hp_performance","title":"Combine hyperparameter performance metrics for multiple train/test splits — combine_hp_performance","text":"Combine hyperparameter performance metrics multiple train/test splits generated , instance, looping R using snakemake workflow high-performance computer.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/combine_hp_performance.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Combine hyperparameter performance metrics for multiple train/test splits — combine_hp_performance","text":"","code":"combine_hp_performance(trained_model_lst)"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/combine_hp_performance.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Combine hyperparameter performance metrics for multiple train/test splits — combine_hp_performance","text":"trained_model_lst List trained models.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/combine_hp_performance.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Combine hyperparameter performance metrics for multiple train/test splits — combine_hp_performance","text":"Named list: dat: Dataframe performance metric group hyperparameters params: Hyperparameters tuned. 
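The dat element of the returned list is what mikropml's plotting helper consumes; a minimal sketch of visualizing a combined hyperparameter search, assuming a models list built as in the combine_hp_performance example that follows, with glmnet's lambda as the tuned parameter and AUC as the metric:

hp_metrics <- combine_hp_performance(models)
# the parameter and metric are given as bare column names of hp_metrics$dat
plot_hp_performance(hp_metrics$dat, lambda, AUC)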
Metric: Performance metric used.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/combine_hp_performance.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Combine hyperparameter performance metrics for multiple train/test splits — combine_hp_performance","text":"Zena Lapp, zenalapp@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/combine_hp_performance.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Combine hyperparameter performance metrics for multiple train/test splits — combine_hp_performance","text":"","code":"if (FALSE) { results <- lapply(seq(100, 102), function(seed) { run_ml(otu_small, \"glmnet\", seed = seed, cv_times = 2, kfold = 2) }) models <- lapply(results, function(x) x$trained_model) combine_hp_performance(models) }"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/compare_models.html","id":null,"dir":"Reference","previous_headings":"","what":"Perform permutation tests to compare the performance metric\nacross all pairs of a group variable. — compare_models","title":"Perform permutation tests to compare the performance metric\nacross all pairs of a group variable. — compare_models","text":"wrapper permute_p_value().","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/compare_models.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Perform permutation tests to compare the performance metric\nacross all pairs of a group variable. — compare_models","text":"","code":"compare_models(merged_data, metric, group_name, nperm = 10000)"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/compare_models.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Perform permutation tests to compare the performance metric\nacross all pairs of a group variable. — compare_models","text":"merged_data concatenated performance data run_ml metric metric compare, must numeric group_name column group variables compare nperm number permutations, default=10000","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/compare_models.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Perform permutation tests to compare the performance metric\nacross all pairs of a group variable. — compare_models","text":"table p-values pairs group variable","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/compare_models.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Perform permutation tests to compare the performance metric\nacross all pairs of a group variable. — compare_models","text":"Courtney R Armour, armourc@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/compare_models.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Perform permutation tests to compare the performance metric\nacross all pairs of a group variable. 
— compare_models","text":"","code":"df <- dplyr::tibble( model = c(\"rf\", \"rf\", \"glmnet\", \"glmnet\", \"svmRadial\", \"svmRadial\"), AUC = c(.2, 0.3, 0.8, 0.9, 0.85, 0.95) ) set.seed(123) compare_models(df, \"AUC\", \"model\", nperm = 10) #> group1 group2 p_value #> 1 glmnet svmRadial 0.7272727 #> 2 rf glmnet 0.2727273 #> 3 rf svmRadial 0.5454545"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/define_cv.html","id":null,"dir":"Reference","previous_headings":"","what":"Define cross-validation scheme and training parameters — define_cv","title":"Define cross-validation scheme and training parameters — define_cv","text":"Define cross-validation scheme training parameters","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/define_cv.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Define cross-validation scheme and training parameters — define_cv","text":"","code":"define_cv( train_data, outcome_colname, hyperparams_list, perf_metric_function, class_probs, kfold = 5, cv_times = 100, groups = NULL, group_partitions = NULL )"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/define_cv.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Define cross-validation scheme and training parameters — define_cv","text":"train_data Dataframe training model. outcome_colname Column name string outcome variable (default NULL; first column chosen automatically). hyperparams_list Named list lists hyperparameters. perf_metric_function Function calculate performance metric used cross-validation test performance. functions provided caret (see caret::defaultSummary()). Defaults: binary classification = twoClassSummary, multi-class classification = multiClassSummary, regression = defaultSummary. class_probs Whether use class probabilities (TRUE categorical outcomes, FALSE numeric outcomes). kfold Fold number k-fold cross-validation (default: 5). cv_times Number cross-validation partitions create (default: 100). groups Vector groups keep together splitting data train test sets. number groups training set larger kfold, groups also kept together cross-validation. Length matches number rows dataset (default: NULL). group_partitions Specify assign groups training testing partitions (default: NULL). groups specifies samples belong group \"A\" belong group \"B\", setting group_partitions = list(train = c(\"A\", \"B\"), test = c(\"B\")) result samples group \"A\" placed training set, samples \"B\" also training set, remaining samples \"B\" testing set. partition sizes close training_frac possible (see the sketch below). 
number groups training set larger kfold, groups also kept together cross-validation.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/define_cv.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Define cross-validation scheme and training parameters — define_cv","text":"Caret object trainControl controls cross-validation","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/define_cv.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Define cross-validation scheme and training parameters — define_cv","text":"Begüm Topçuoğlu, topcuoglu.begum@gmail.com Kelly Sovacool, sovacool@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/define_cv.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Define cross-validation scheme and training parameters — define_cv","text":"","code":"training_inds <- get_partition_indices(otu_small %>% dplyr::pull(\"dx\"), training_frac = 0.8, groups = NULL ) train_data <- otu_small[training_inds, ] test_data <- otu_small[-training_inds, ] cv <- define_cv(train_data, outcome_colname = \"dx\", hyperparams_list = get_hyperparams_list(otu_small, \"glmnet\"), perf_metric_function = caret::multiClassSummary, class_probs = TRUE, kfold = 5 )"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/find_permuted_perf_metric.html","id":null,"dir":"Reference","previous_headings":"","what":"Get permuted performance metric difference for a single feature\n(or group of features) — find_permuted_perf_metric","title":"Get permuted performance metric difference for a single feature\n(or group of features) — find_permuted_perf_metric","text":"Requires future.apply package","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/find_permuted_perf_metric.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Get permuted performance metric difference for a single feature\n(or group of features) — find_permuted_perf_metric","text":"","code":"find_permuted_perf_metric( test_data, trained_model, outcome_colname, perf_metric_function, perf_metric_name, class_probs, feat, test_perf_value, nperms = 100, alpha = 0.05, progbar = NULL )"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/find_permuted_perf_metric.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Get permuted performance metric difference for a single feature\n(or group of features) — find_permuted_perf_metric","text":"test_data Held test data: dataframe outcome features. trained_model Trained model caret::train(). outcome_colname Column name string outcome variable (default NULL; first column chosen automatically). perf_metric_function Function calculate performance metric used cross-validation test performance. functions provided caret (see caret::defaultSummary()). Defaults: binary classification = twoClassSummary, multi-class classification = multiClassSummary, regression = defaultSummary. perf_metric_name column name output function provided perf_metric_function used performance metric. Defaults: binary classification = \"ROC\", multi-class classification = \"logLoss\", regression = \"RMSE\". class_probs Whether use class probabilities (TRUE categorical outcomes, FALSE numeric outcomes). feat feature group correlated features permute. test_perf_value value true performance metric held-test data. nperms number permutations perform (default: 100). 
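Beyond calling define_cv() directly as in the example above, groups and group_partitions are more commonly passed through run_ml(), which forwards them when splitting and cross-validating. A minimal sketch with a hypothetical two-site grouping (the site vector is invented for illustration):

# hypothetical grouping, e.g. samples collected at two sites
set.seed(2019)
site <- sample(c("A", "B"), size = nrow(otu_mini_bin), replace = TRUE)
# keep all of "A" in training; let "B" samples fill out both partitions
results_grp <- run_ml(otu_mini_bin, "glmnet",
  kfold = 2, cv_times = 2, seed = 2019, groups = site,
  group_partitions = list(train = c("A", "B"), test = c("B"))
)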
alpha alpha level confidence interval (default: 0.05 obtain 95% confidence interval) progbar optional progress bar (default: NULL)","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/find_permuted_perf_metric.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Get permuted performance metric difference for a single feature\n(or group of features) — find_permuted_perf_metric","text":"vector mean permuted performance mean difference test permuted performance (test minus permuted performance)","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/find_permuted_perf_metric.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Get permuted performance metric difference for a single feature\n(or group of features) — find_permuted_perf_metric","text":"Begüm Topçuoğlu, topcuoglu.begum@gmail.com Zena Lapp, zenalapp@umich.edu Kelly Sovacool, sovacool@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_caret_processed_df.html","id":null,"dir":"Reference","previous_headings":"","what":"Get preprocessed dataframe for continuous variables — get_caret_processed_df","title":"Get preprocessed dataframe for continuous variables — get_caret_processed_df","text":"Get preprocessed dataframe continuous variables","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_caret_processed_df.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Get preprocessed dataframe for continuous variables — get_caret_processed_df","text":"","code":"get_caret_processed_df(features, method)"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_caret_processed_df.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Get preprocessed dataframe for continuous variables — get_caret_processed_df","text":"features Dataframe features machine learning method Methods preprocess data, described caret::preProcess() (default: c(\"center\",\"scale\"), use NULL normalization).","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_caret_processed_df.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Get preprocessed dataframe for continuous variables — get_caret_processed_df","text":"Named list: processed: Dataframe processed features. 
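find_permuted_perf_metric() is applied per feature (or per group of correlated features) by the exported get_feature_importance(); a minimal sketch, assuming a results object from run_ml() as above and argument names as documented here (the future plan is optional but enables parallel permutations):

# optional: permute features in parallel via the future backend
future::plan(future::multisession, workers = 2)
feat_imp <- get_feature_importance(results$trained_model, results$test_data,
  outcome_colname = "dx",
  perf_metric_function = caret::multiClassSummary,
  perf_metric_name = "AUC",
  class_probs = TRUE, method = "glmnet", nperms = 100
)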
removed: Names features removed preprocessing.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_caret_processed_df.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Get preprocessed dataframe for continuous variables — get_caret_processed_df","text":"Zena Lapp, zenalapp@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_caret_processed_df.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Get preprocessed dataframe for continuous variables — get_caret_processed_df","text":"","code":"get_caret_processed_df(mikropml::otu_small[, 2:ncol(otu_small)], c(\"center\", \"scale\")) #> $processed #> Otu00001 Otu00002 Otu00003 Otu00004 Otu00005 #> 1 -0.4198476322 -0.218855527 -0.174296240 -0.59073845 -0.048774220 #> 2 -0.1045750483 1.754032339 -0.718419364 0.03805034 1.537072974 #> 3 -0.7076423302 0.696324396 1.428146361 0.60439092 -0.264559044 #> [printed output truncated: 200 rows of centered and scaled values across the remaining OTU feature columns]
-0.679309939 -0.03434129 -0.472795288 -0.62608140 -0.383548146 #> 62 -0.03901494 -0.679309939 -0.55914457 -0.598113766 1.13108102 -0.301203812 #> 63 -0.52225557 0.788417492 -0.36754972 -0.303615343 -0.62608140 -0.363942352 #> 64 -0.53452056 -0.159287306 -0.09681787 1.156344927 -0.24216356 -0.132593984 #> 65 -0.47810160 -0.679309939 1.00276994 -0.616911538 -0.63100342 -0.171805572 #> 66 -0.53452056 -0.674449915 1.28183200 0.636273243 0.37308939 -0.332573082 #> 67 -0.48546060 -0.562669349 -0.35505441 -0.347476810 -0.62608140 -0.246307589 #> 68 -0.53206756 -0.008626544 -0.49250288 -0.052978387 -0.63100342 -0.293361494 #> 69 -0.53452056 -0.669589890 1.39845495 -0.491593060 -0.01575048 -0.258071065 #> 70 3.36819949 1.269559928 -0.62995136 -0.623177462 1.17045721 0.008567732 #> 71 0.32402879 -0.679309939 -0.20511061 -0.479061212 -0.55717307 0.012488891 #> 72 -0.53452056 0.321855129 1.36513411 0.141265254 -0.63100342 0.290891164 #> 73 1.25862108 0.083713924 -0.64244668 -0.134435397 2.44033929 0.118360178 #> 74 0.65273054 -0.679309939 1.11939289 -0.410136049 -0.25692963 -0.297282653 #> 75 2.94383081 -0.679309939 0.50295730 -0.372540506 1.28366375 -0.367863511 #> 76 1.98716153 1.775002486 -0.03017619 -0.397604201 -0.62608140 -0.379626987 #> 77 -0.29903274 -0.679309939 -0.50499820 -0.648241158 2.05149943 0.761430218 #> 78 -0.53452056 0.195494490 -0.64244668 -0.685836701 0.71763104 0.204625671 #> 79 0.99615028 -0.275927897 -0.24676167 -0.554252299 0.07776797 -0.371784670 #> 80 -0.53206756 -0.679309939 6.88389873 -0.679570777 -0.62608140 -0.383548146 #> 81 0.06646398 0.005953530 -0.36754972 -0.629443386 -0.63100342 -0.277676859 #> 82 -0.28186175 -0.674449915 -0.64244668 0.128733407 4.36977254 -0.046328491 #> 83 0.49573866 0.200354514 -0.55914457 -0.491593060 0.13683226 -0.344336558 #> 84 -0.53452056 -0.674449915 -0.64244668 -0.178296865 -0.62608140 7.537192593 #> 85 -0.53206756 -0.664729865 -0.64244668 -0.685836701 -0.63100342 -0.316888447 #> 86 -0.53452056 2.192964602 1.78164465 -0.679570777 -0.63100342 -0.234544113 #> 87 0.40743073 -0.475188906 -0.28008251 -0.422667897 0.31894713 0.377156657 #> 88 -0.53452056 -0.193307479 -0.05100172 -0.090573930 2.66183035 0.702612836 #> 89 -0.24016078 -0.679309939 0.47380156 0.254051885 -0.46857665 1.141782620 #> 90 -0.53452056 -0.679309939 -0.47167736 0.924505743 -0.63100342 0.561451120 #> 91 -0.29412674 -0.679309939 -0.64244668 -0.497858984 -0.62608140 -0.379626987 #> 92 -0.53452056 -0.679309939 0.44048072 -0.504124908 -0.62608140 -0.371784670 #> 93 -0.53452056 -0.679309939 0.27387650 1.782937318 -0.63100342 -0.383548146 #> 94 -0.53452056 2.601206669 1.18603458 -0.259753876 -0.08958083 -0.250228748 #> 95 3.55708035 -0.664729865 1.49008727 -0.598113766 1.48546672 -0.211017160 #> 96 -0.46828961 -0.655009816 -0.64244668 -0.679570777 4.06952910 0.020331208 #> 97 -0.53452056 -0.679309939 -0.45501694 -0.667038929 -0.62608140 -0.383548146 #> 98 0.78519244 -0.455748807 -0.05516682 -0.103105778 -0.63100342 -0.281598018 #> 99 -0.53452056 -0.669589890 3.29774300 0.354306667 -0.62608140 -0.383548146 #> 100 -0.53206756 -0.679309939 -0.52582373 0.147531178 -0.60639331 -0.383548146 #> 101 -0.40451166 1.002258574 -0.63411647 -0.065510234 1.30335184 -0.371784670 #> 102 -0.52225557 -0.679309939 -0.45918204 -0.604379690 -0.63100342 -0.379626987 #> 103 -0.43885363 2.800467678 -0.10514809 0.166328950 -0.62115938 -0.383548146 #> 104 -0.53452056 0.161474318 -0.52165862 -0.178296865 -0.61131533 0.549687644 #> 105 2.59305208 -0.674449915 0.31552756 -0.529188603 0.41246558 0.345787387 #> 
106 1.42787796 -0.679309939 1.39012474 -0.673304853 0.20574059 -0.301203812 #> 107 -0.53452056 -0.188447454 0.50712240 -0.272285723 0.61919057 2.274997508 #> 108 -0.25978477 0.681496950 0.22389524 0.222722265 -0.62608140 1.337840559 #> 109 -0.52470857 -0.217607602 2.99785542 2.096233513 -0.60639331 -0.352178876 #> 110 -0.50263159 -0.382848438 -0.41336588 -0.203360560 -0.61623735 -0.269834542 #> 111 -0.53206756 -0.421728635 -0.62578626 -0.416401973 -0.62608140 -0.199253683 #> 112 -0.21072481 -0.669589890 -0.64244668 0.454561450 -0.62608140 -0.383548146 #> 113 -0.53452056 -0.032926667 -0.41336588 0.053542320 2.00227919 -0.316888447 #> 114 -0.40941766 -0.412008586 -0.06349703 -0.491593060 -0.54240700 0.286970005 #> 115 -0.53206756 0.054553776 -0.08848766 -0.052978387 -0.43412248 -0.128672825 #> 116 -0.45111862 1.211239632 0.01147487 0.015946776 0.82591556 -0.336494241 #> 117 -0.53452056 -0.013486568 0.57792920 -0.685836701 -0.39966831 -0.371784670 #> 118 -0.16902384 -0.465468857 0.42798540 0.028478624 0.34847927 0.094833225 #> 119 -0.53452056 -0.679309939 0.72370788 1.739075850 -0.63100342 -0.383548146 #> 120 -0.53452056 0.244094736 -0.21344082 -0.159499093 -0.63100342 -0.383548146 #> 121 -0.52716157 -0.679309939 -0.44252162 -0.679570777 -0.23724154 -0.383548146 #> 122 -0.53452056 -0.679309939 0.23639056 -0.522922680 0.03346976 -0.383548146 #> 123 -0.53452056 4.550076536 -0.48417267 1.544832209 -0.56701712 -0.340415400 #> 124 -0.53206756 -0.421728635 -0.48833778 0.009680852 -0.15356714 -0.352178876 #> 125 -0.48055460 -0.139847208 -0.13846893 -0.215892408 -0.63100342 -0.375705829 #> 126 -0.53452056 -0.309948069 -0.03017619 0.141265254 0.65364473 -0.348257717 #> 127 -0.47319561 -0.596689521 -0.45085183 -0.516656756 1.18522328 -0.156120937 #> 128 -0.49772559 1.687522044 -0.63828157 -0.140701321 -0.63100342 -0.332573082 #> 129 0.10571196 0.919638156 -0.57580499 2.716559980 0.73239711 -0.238465271 #> 130 1.58486984 -0.023206617 0.17391397 -0.660773005 -0.63100342 -0.383548146 #> 131 -0.51489658 0.419055622 -0.64244668 0.084871939 -0.25200761 -0.301203812 #> 132 -0.52470857 -0.669589890 1.18186948 -0.604379690 -0.54732902 -0.379626987 #> 133 -0.53452056 0.030253653 0.86115636 -0.234690180 -0.52764093 -0.285519177 #> 134 3.26762657 -0.650149792 0.57376409 -0.485327136 1.72172385 -0.328651923 #> 135 -0.53452056 0.880757959 1.11106268 2.478454871 -0.59654926 -0.324730765 #> 136 0.11552395 -0.679309939 -0.13430382 -0.547986375 0.70778699 0.118360178 #> 137 -0.53452056 -0.679309939 -0.64244668 -0.667038929 -0.61623735 -0.379626987 #> 138 -0.53206756 -0.460608832 0.26138119 -0.685836701 4.39438266 0.032094685 #> 139 0.17439590 0.380175425 -0.54248415 -0.109371702 -0.62115938 -0.324730765 #> 140 -0.52716157 -0.674449915 -0.63411647 -0.259753876 0.83083758 -0.265913383 #> 141 -0.53452056 0.428775671 0.59042451 -0.009116919 0.05807988 0.141887131 #> 142 -0.37262268 -0.523789152 -0.56330968 -0.673304853 0.61919057 2.714167291 #> 143 -0.53452056 -0.538369226 -0.35921951 -0.109371702 -0.61623735 -0.277676859 #> 144 -0.49527259 0.973098427 -0.53831904 0.786655417 -0.63100342 -0.277676859 #> 145 -0.08807490 -0.528649176 -0.63411647 -0.566784147 3.53302853 -0.352178876 #> 146 -0.51244358 -0.222467626 -0.60079562 -0.435199745 -0.62115938 -0.363942352 #> 147 -0.53452056 -0.679309939 -0.64244668 -0.466529364 -0.62608140 3.682693510 #> 148 0.14741292 -0.081526913 -0.50499820 -0.366274582 -0.62608140 2.231864761 #> 149 -0.53452056 -0.655009816 0.59042451 5.498630194 -0.49810879 -0.383548146 #> 150 -0.53452056 
-0.679309939 -0.64244668 -0.554252299 -0.20770940 0.443816357 #> 151 -0.43394764 -0.679309939 -0.39254036 -0.360008658 -0.60147128 -0.261992224 #> 152 -0.48546060 -0.314808094 -0.62162115 0.091137863 1.57898517 -0.352178876 #> 153 -0.53452056 -0.596689521 -0.58413520 -0.591847843 0.34847927 0.130123654 #> 154 -0.52961456 -0.679309939 -0.63828157 4.320636500 0.09745607 -0.191411366 #> 155 -0.53452056 0.214934588 0.20306971 1.024760525 -0.57193914 -0.379626987 #> 156 -0.52470857 0.030253653 -0.63828157 -0.353742734 -0.63100342 -0.328651923 #> 157 -0.53206756 -0.679309939 -0.64244668 -0.685836701 -0.63100342 -0.383548146 #> 158 -0.53452056 -0.091246962 4.23489171 -0.673304853 -0.62608140 -0.211017160 #> 159 -0.53452056 2.523446276 -0.63828157 -0.328679038 0.54043819 1.333919400 #> 160 -0.53452056 1.002258574 0.05312592 1.569895905 -0.63100342 -0.371784670 #> 161 -0.52225557 0.428775671 -0.57997010 0.066074168 -0.63100342 -0.344336558 #> 162 -0.53452056 1.998563618 -0.64244668 0.066074168 -0.63100342 7.666590833 #> 163 -0.53206756 -0.266207848 -0.25925698 2.459657100 -0.63100342 -0.383548146 #> 164 -0.51244358 -0.674449915 -0.62578626 -0.228424256 -0.61623735 -0.371784670 #> 165 -0.51489658 0.351015277 0.32385777 -0.103105778 -0.63100342 -0.375705829 #> 166 -0.53452056 -0.674449915 -0.64244668 -0.648241158 0.11222214 -0.383548146 #> 167 -0.49036659 -0.514069103 -0.63828157 0.279115580 1.49038874 -0.258071065 #> 168 -0.53452056 -0.412008586 0.18224419 -0.159499093 -0.62608140 -0.360021194 #> 169 -0.53206756 -0.679309939 -0.63828157 -0.504124908 -0.63100342 -0.383548146 #> 170 -0.04882693 -0.679309939 -0.63828157 -0.685836701 -0.63100342 -0.261992224 #> 171 3.46877241 -0.407148561 1.34847369 -0.009116919 1.17045721 -0.132593984 #> 172 -0.50753758 1.109179116 -0.31340335 -0.616911538 -0.52764093 -0.167884413 #> 173 -0.53452056 -0.562669349 -0.60912584 2.171424600 -0.62115938 -0.309046129 #> 174 -0.45602462 0.423915646 -0.36754972 0.698932482 -0.63100342 -0.175726731 #> 175 0.17439590 0.039973702 -0.54248415 -0.554252299 0.23527273 -0.258071065 #> 176 0.70914950 -0.679309939 -0.64244668 -0.121903550 2.44526132 -0.375705829 #> 177 0.95444931 -0.271067872 -0.38004504 -0.585581919 -0.06989273 -0.344336558 #> 178 -0.11996387 1.279279977 -0.64244668 -0.685836701 3.24755116 -0.136515143 #> 179 -0.53452056 -0.679309939 -0.19261530 0.435763678 -0.61131533 -0.360021194 #> 180 -0.48546060 -0.518929127 -0.26342209 -0.479061212 -0.63100342 -0.320809606 #> 181 -0.49772559 -0.635569718 -0.56747478 -0.673304853 -0.60639331 2.278918667 #> 182 -0.53206756 1.964543446 -0.63411647 0.391902211 -0.06004869 -0.375705829 #> 183 -0.52716157 -0.169007356 -0.42169609 3.180238349 -0.62608140 -0.383548146 #> 184 -0.32601572 -0.314808094 -0.50499820 -0.610645614 -0.13387904 -0.062013126 #> 185 -0.51489658 3.373950582 -0.27591741 -0.510390832 -0.61131533 -0.383548146 #> 186 -0.51980257 -0.679309939 -0.63411647 -0.641975234 -0.29630582 0.651637772 #> 187 0.38535374 0.783557467 -0.64244668 -0.504124908 1.10154888 -0.371784670 #> 188 -0.53452056 1.993703594 0.05729102 0.084871939 -0.63100342 -0.383548146 #> 189 -0.49281959 -0.353688291 -0.55081436 4.583805304 -0.60639331 3.910120720 #> 190 -0.37262268 -0.339108217 -0.08015745 -0.347476810 -0.62608140 -0.062013126 #> 191 -0.53452056 1.532001256 1.58588470 -0.428933821 -0.57193914 -0.081618920 #> 192 -0.53452056 -0.669589890 -0.27175230 -0.266019799 -0.63100342 -0.379626987 #> 193 3.84898713 -0.518929127 -0.16345956 -0.510390832 0.37308939 -0.348257717 #> 194 -0.52716157 
0.715517123 0.39466456 -0.497858984 -0.21755344 -0.379626987 #> 195 3.26026757 0.268394859 -0.03017619 0.153797102 0.67825485 -0.211017160 #> 196 -0.48546060 4.652137053 0.77785425 -0.416401973 -0.63100342 -0.383548146 #> 197 -0.51244358 0.351015277 -0.14679914 -0.685836701 0.41738760 -0.367863511 #> 198 -0.53452056 -0.679309939 -0.63828157 -0.623177462 -0.63100342 -0.383548146 #> 199 1.06483423 -0.674449915 -0.53831904 -0.667038929 -0.18309928 -0.375705829 #> 200 -0.53452056 -0.552949299 0.14059313 -0.002850995 0.27957094 0.196783353 #> Otu00023 Otu00024 Otu00025 Otu00026 Otu00027 Otu00028 #> 1 -0.0069254588 -0.177204415 -0.24303824 -0.22202016 -0.24641906 -0.292554022 #> 2 -0.6642571429 -0.678440995 -0.43616774 -0.29146475 -0.38539990 -0.307394436 #> 3 -0.3747181868 0.177117995 0.04157367 -0.47086329 -0.41259180 -0.168883908 #> 4 -0.3199405465 0.954898895 -0.28369708 0.43770350 -0.36425064 -0.314814643 #> 5 -0.9068438359 -0.695725015 -0.39550890 -0.61553953 -0.06816104 -0.314814643 #> 6 -0.3434166781 0.851194775 0.03649131 -0.45350214 -0.38842122 -0.319761448 #> 7 0.4078195324 -0.669798985 -0.42600303 0.87751927 -0.23131245 -0.295027425 #> 8 -0.0851792307 -0.592020895 -0.35485005 -0.57503018 0.01945732 -0.322234850 #> 9 -0.8990184587 -0.393254665 -0.45141481 -0.62132658 -0.31288816 -0.319761448 #> 10 -0.4060196956 -0.341402605 1.42397434 -0.62132658 -0.40957048 0.214493446 #> 11 0.1965343482 3.962318375 -0.07023815 0.46085170 -0.20412055 -0.322234850 #> 12 1.2451348919 0.324032165 -0.14647348 -0.58660428 0.02852128 -0.319761448 #> 13 0.0713283131 0.488230355 -0.30402650 -0.37248345 -0.39748519 -0.314814643 #> 14 -0.5625272394 -0.280908535 -0.26845001 1.35205733 -0.37935725 -0.322234850 #> 15 -0.6955586517 0.107981915 -0.37009712 -0.26252951 -0.31288816 -0.312341241 #> 16 1.6911813918 -0.713009035 -0.43616774 -0.01368637 -0.32497345 -0.307394436 #> 17 -0.1399568711 0.099339905 0.21437375 -0.25095541 -0.38237857 -0.314814643 #> 18 -0.4138450728 -0.030290245 0.21437375 -0.22780721 -0.39144254 -0.183724322 #> 19 -0.7581616692 -0.021648235 -0.37517948 0.53608334 -0.12556616 -0.307394436 #> 20 0.8538660323 -0.592020895 -0.45141481 -0.54030789 -0.30986683 -0.312341241 #> 21 -0.8911930815 -0.704367025 5.62708227 -0.62132658 -0.41259180 -0.297500827 #> 22 0.7756122604 -0.704367025 0.61587983 -0.32618705 -0.31288816 -0.205984942 #> 23 0.3686926464 -0.721651045 -0.45649716 0.48978694 0.23699254 -0.299974229 #> 24 -0.1243061167 0.203044025 -0.40059125 -0.62132658 0.44848511 -0.314814643 #> 25 1.1434049884 -0.013006225 -0.29386179 -0.62132658 -0.41863444 -0.235665770 #> 26 -0.8285900640 0.168475985 -0.03974402 -0.58660428 0.33367486 -0.089735035 #> 27 -0.8677169499 -0.721651045 -0.14139113 -0.62132658 -0.41561312 1.485822222 #> 28 0.2200104798 -0.678440995 -0.44125010 2.96085712 -0.42467709 4.458851770 #> 29 -0.4216704500 -0.522884815 -0.43616774 -0.10049212 -0.32195212 -0.319761448 #> 30 -0.7816378008 -0.142636375 -0.37517948 -0.58660428 -0.40654915 -0.314814643 #> 31 -0.4920988447 1.680827735 -0.42600303 -0.60396543 -0.40352783 -0.317288045 #> 32 -0.6642571429 1.853667935 -0.31419121 -0.41299279 -0.40957048 -0.210931747 #> 33 1.3546901726 -0.721651045 -0.34976770 -0.59239133 0.49682627 -0.228245563 #> 34 -0.8990184587 -0.410538685 3.72119899 -0.49979854 -0.05909707 -0.260399793 #> 35 -0.2729882833 4.938865505 -0.18204997 -0.52873379 -0.33101609 -0.309867838 #> 36 2.7789088215 -0.661156975 1.47988025 -0.61553953 -0.15275807 -0.314814643 #> 37 -0.5234003535 2.026508135 0.45324446 -0.58081723 
0.09801170 -0.314814643 #> 38 -0.9068438359 -0.721651045 0.34143264 -0.59817838 -0.36122932 -0.307394436 #> 39 -0.0069254588 -0.661156975 -0.26845001 -0.43614099 0.49984759 -0.287607218 #> 40 -0.6407810114 0.038845835 -0.25320295 -0.21623311 -0.37935725 -0.314814643 #> 41 1.1825318744 -0.609304915 -0.42092068 -0.61553953 0.26418444 -0.317288045 #> 42 -0.4529719588 0.073413875 -0.42092068 -0.37248345 -0.37935725 5.443265880 #> 43 3.1388761724 -0.721651045 -0.37517948 -0.62132658 -0.34914403 -0.297500827 #> 44 0.4391210411 0.090697895 -0.34976770 -0.59817838 -0.31288816 -0.295027425 #> 45 0.5252001902 -0.410538685 1.46971554 -0.61553953 -0.09535294 -0.317288045 #> 46 1.3077379094 -0.436464715 -0.24303824 0.16571217 -0.37633593 -0.210931747 #> 47 0.5173748130 0.393168245 0.04665602 -0.60396543 0.54818875 -0.317288045 #> 48 1.4877215849 -0.661156975 -0.33960299 -0.62132658 -0.41561312 -0.314814643 #> 49 -0.8442408184 0.151191965 -0.24812059 -0.60396543 -0.41863444 -0.290080620 #> 50 -0.6720825201 0.747490655 -0.18204997 -0.58660428 -0.38842122 -0.267820000 #> 51 -0.3590674325 -0.574736875 -0.44125010 1.11478830 -0.42467709 1.305263855 #> 52 -0.6407810114 0.427736285 -0.21762646 -0.60975248 -0.35518667 -0.302447632 #> 53 1.7459590322 -0.704367025 6.00825892 -0.60975248 0.58746594 -0.223298758 #> 54 1.4877215849 -0.522884815 1.16985657 -0.41877984 -0.36425064 -0.262873195 #> 55 -0.7425109149 0.254896085 -0.17188526 0.50714809 -0.10441691 -0.314814643 #> 56 0.8225645235 -0.713009035 0.03649131 -0.61553953 -0.36727196 -0.314814643 #> 57 -0.3590674325 -0.557452855 -0.45141481 1.07427895 0.25209915 -0.109522253 #> 58 -0.8911930815 -0.669798985 1.25117426 -0.62132658 -0.42467709 0.738854731 #> 59 -0.1008299851 0.445020305 -0.45141481 -0.38984460 0.56027404 -0.312341241 #> 60 0.0165506728 -0.254982505 0.61587983 0.62867613 0.19167270 -0.277713609 #> 61 -0.4294958272 -0.488316775 -0.45649716 -0.28567770 -0.37331461 -0.317288045 #> 62 -0.2338613974 -0.427822705 0.39733855 -0.40720575 -0.17390732 2.002763299 #> 63 1.9259427076 -0.592020895 -0.44633245 0.99904731 -0.42165577 -0.230718965 #> 64 -0.3981943184 -0.713009035 0.88524467 0.14256397 0.11613964 -0.317288045 #> 65 -0.6564317657 -0.531526825 -0.47174423 -0.55188199 8.52145880 0.006727654 #> 66 -0.6955586517 -0.177204415 -0.47174423 -0.62132658 -0.23433377 -0.322234850 #> 67 -0.5625272394 -0.687083005 -0.47174423 2.85669023 0.33367486 -0.322234850 #> 68 -0.3121151693 0.393168245 -0.45649716 0.17728626 -0.39748519 -0.319761448 #> 69 1.1590557428 -0.721651045 0.02124425 1.73400261 0.03758525 -0.309867838 #> 70 0.1808835938 1.940088035 -0.43616774 -0.54030789 -0.38539990 -0.319761448 #> 71 1.0181989533 -0.358686625 1.11395066 -0.61553953 -0.31893080 -0.304921034 #> 72 -0.3355913009 -0.721651045 -0.30910886 1.01640846 -0.16182203 -0.275240206 #> 73 -0.5860033710 -0.038932255 -0.42092068 -0.23359426 -0.26756832 -0.314814643 #> 74 -0.5781779938 -0.177204415 -0.36501477 0.14256397 0.83521439 0.006727654 #> 75 -0.4686227131 0.894404825 0.01107953 -0.30882590 -0.35216535 -0.304921034 #> 76 -0.6486063886 0.531440405 -0.44125010 -0.52294674 -0.36727196 -0.307394436 #> 77 -0.4842734675 0.721564625 -0.47174423 2.76409744 -0.37029328 -0.309867838 #> 78 -0.9068438359 1.015392965 0.94115058 -0.23938131 -0.39446386 -0.292554022 #> 79 -0.4451465816 -0.237698485 -0.26336766 -0.08313097 -0.28569625 -0.314814643 #> 80 0.0791536903 -0.721651045 0.36176206 -0.61553953 -0.42467709 -0.248032781 #> 81 -0.7190347833 -0.687083005 -0.29894415 0.60552794 -0.30986683 -0.322234850 
#> 82 0.0087252956 1.145023115 -0.39042654 -0.23938131 -0.11045955 -0.270293402 #> 83 1.9885457251 -0.315476575 -0.33452063 -0.60396543 -0.40654915 -0.257926390 #> 84 0.2747881201 -0.721651045 -0.32943828 2.66571759 2.25221464 -0.314814643 #> 85 -0.8833677043 -0.229056475 -0.46157952 1.49673357 0.05269186 0.911992891 #> 86 -0.9068438359 -0.626588935 -0.45141481 1.59511342 1.12224003 -0.322234850 #> 87 -0.2495121518 5.517880175 -0.38534419 -0.61553953 -0.40352783 -0.309867838 #> 88 -0.2886390377 0.721564625 -0.08040286 -0.22780721 -0.21922716 -0.275240206 #> 89 -0.5234003535 0.133907945 -0.30910886 -0.19308491 -0.41561312 -0.173830713 #> 90 0.0008999184 0.082055885 -0.41075596 0.40876825 -0.42165577 -0.302447632 #> 91 -0.7659870464 -0.393254665 -0.44633245 0.45506465 -0.33705874 -0.302447632 #> 92 -0.7738124236 0.954898895 0.85983289 -0.30882590 -0.41561312 1.837045346 #> 93 0.1417567078 -0.721651045 6.81127108 -0.62132658 -0.14369410 -0.302447632 #> 94 -0.6016541254 -0.341402605 -0.46157952 1.02798256 -0.10743823 -0.149096690 #> 95 0.7286599972 0.254896085 -0.07532051 -0.53452084 -0.30080287 -0.319761448 #> 96 -0.9068438359 0.194402015 -0.46157952 -0.34354820 -0.42467709 -0.322234850 #> 97 1.9181173304 -0.704367025 -0.27353237 -0.62132658 0.98325919 -0.248032781 #> 98 -0.4529719588 0.142549955 0.31093850 0.24094381 -0.35820799 -0.277713609 #> 99 0.7286599972 -0.713009035 -0.07023815 -0.59239133 0.11311831 -0.280187011 #> 100 -0.5234003535 -0.704367025 -0.46666187 -0.60396543 0.06175583 3.006964628 #> 101 0.0243760500 0.514156385 -0.28369708 -0.61553953 3.79913175 -0.322234850 #> 102 5.4160609352 -0.609304915 -0.43108539 -0.61553953 5.83248179 -0.275240206 #> 103 1.1512303656 -0.609304915 -0.44125010 -0.54609494 0.83823571 -0.205984942 #> 104 -0.9068438359 -0.574736875 -0.28369708 0.40298120 -0.42467709 -0.319761448 #> 105 0.1495820850 0.254896085 -0.11597935 -0.59817838 -0.22526980 -0.282660413 #> 106 -0.7972885552 -0.056216275 -0.21254410 -0.59239133 0.43942114 -0.312341241 #> 107 -0.2260360202 -0.229056475 -0.34468534 0.61710203 -0.30080287 0.169972205 #> 108 -0.5468764851 1.335147335 -0.45141481 1.46779833 -0.12254484 -0.309867838 #> 109 1.1121034796 -0.678440995 -0.39550890 -0.59817838 -0.32195212 -0.312341241 #> 110 0.7599615060 -0.479674765 -0.45141481 0.94696386 -0.05305442 -0.309867838 #> 111 -0.6407810114 -0.289550545 1.47479789 0.06154527 -0.40957048 0.058669102 #> 112 -0.5468764851 -0.721651045 -0.25320295 -0.40141870 -0.07722500 -0.314814643 #> 113 -0.8990184587 -0.721651045 -0.24303824 -0.61553953 -0.42165577 -0.314814643 #> 114 -0.6486063886 -0.082142305 -0.30910886 -0.20465901 -0.22829113 -0.319761448 #> 115 -0.4842734675 0.073413875 -0.41583832 -0.62132658 0.20980063 -0.277713609 #> 116 0.1261059534 0.583292465 -0.43108539 -0.60396543 -0.40352783 -0.025426576 #> 117 0.0243760500 -0.514242805 -0.45141481 -0.62132658 -0.39748519 0.763588754 #> 118 -0.0304015904 -0.721651045 -0.27861472 -0.15257556 0.01945732 -0.319761448 #> 119 -0.7033840289 2.389472555 -0.45141481 -0.62132658 -0.38237857 -0.317288045 #> 120 1.8320381813 -0.652514965 -0.20237939 -0.61553953 0.10103302 -0.309867838 #> 121 -0.5547018623 -0.548810845 -0.47174423 -0.44771509 0.03154261 -0.272766804 #> 122 -0.1869091342 -0.254982505 3.03508101 -0.53452084 -0.31893080 -0.250506184 #> 123 -0.2260360202 -0.462390745 -0.46157952 2.06965148 -0.42467709 6.323797094 #> 124 0.1652328394 1.170949145 -0.44125010 -0.60975248 -0.42467709 3.514012096 #> 125 -0.9068438359 -0.531526825 -0.33960299 4.84743529 -0.38842122 
-0.299974229 #> 126 -0.6329556342 3.564785915 -0.24812059 -0.52294674 -0.39748519 -0.245559379 #> 127 -0.9068438359 -0.367328635 -0.40059125 0.37983300 -0.36727196 -0.314814643 #> 128 1.6677052603 0.185760005 3.05032807 0.39140710 0.28533370 -0.314814643 #> 129 -0.0851792307 -0.522884815 -0.16680290 5.25252877 0.85032100 -0.280187011 #> 130 -0.6251302570 -0.695725015 0.10764429 -0.60975248 -0.27663229 -0.322234850 #> 131 -0.9068438359 -0.419180695 -0.42600303 -0.51715969 -0.02586252 -0.317288045 #> 132 1.4407693217 -0.592020895 -0.44125010 -0.55188199 1.61169427 -0.285133816 #> 133 0.4547717955 -0.488316775 0.03649131 -0.17572376 -0.21318451 -0.248032781 #> 134 -0.2808136605 0.427736285 0.24486788 -0.45928919 -0.29476022 -0.314814643 #> 135 -0.0695284764 -0.678440995 -0.33452063 -0.59239133 0.91679010 -0.317288045 #> 136 0.3217403832 -0.280908535 -0.39550890 -0.54030789 0.65997768 0.031461677 #> 137 0.4547717955 0.868478795 -0.44125010 0.07890642 -0.36727196 -0.136729678 #> 138 -0.5312257307 0.453662315 -0.47174423 -0.44192804 -0.40957048 1.082657649 #> 139 0.0400268043 -0.133994365 -0.41583832 1.91918820 0.06477715 -0.322234850 #> 140 -0.9068438359 2.795647025 -0.44125010 -0.55188199 -0.41561312 -0.317288045 #> 141 -0.4920988447 -0.583378885 -0.47174423 2.26062412 0.17656609 -0.116942460 #> 142 -0.7894631780 -0.237698485 -0.21762646 -0.42456689 -0.42467709 -0.099628644 #> 143 -0.5155749763 0.038845835 -0.24812059 0.23515676 -0.42467709 -0.015532966 #> 144 0.1417567078 0.142549955 0.09239722 1.66455801 -0.27663229 0.320849745 #> 145 -0.8833677043 -0.315476575 -0.15155584 -0.61553953 -0.40050651 5.809329418 #> 146 -0.3668928096 -0.609304915 -0.44633245 0.68075958 -0.42467709 -0.292554022 #> 147 -0.8990184587 -0.713009035 -0.44125010 -0.60975248 -0.31893080 -0.314814643 #> 148 -0.1869091342 -0.073500295 -0.41075596 1.02798256 0.45452776 -0.223298758 #> 149 -0.1008299851 -0.626588935 -0.39042654 -0.11785327 -0.39748519 -0.299974229 #> 150 0.0322014271 2.372188535 -0.39042654 0.42612940 -0.40352783 -0.322234850 #> 151 -0.2495121518 1.231443215 -0.46157952 -0.60396543 -0.42467709 -0.304921034 #> 152 0.3921687780 1.352431355 -0.20746175 -0.46507624 -0.41259180 -0.280187011 #> 153 -0.8442408184 0.548724425 -0.43108539 0.60552794 -0.34008006 -0.307394436 #> 154 1.2060080059 -0.617946925 -0.36501477 -0.62132658 0.43639982 -0.245559379 #> 155 0.9086436726 -0.531526825 -0.22779117 -0.56924313 0.30648295 0.706700501 #> 156 -0.4686227131 -0.522884815 -0.42092068 -0.61553953 -0.42165577 -0.314814643 #> 157 -0.8911930815 -0.687083005 0.98180942 -0.62132658 -0.33705874 -0.210931747 #> 158 0.9947228218 -0.220414465 0.74293871 0.07311937 -0.41561312 -0.295027425 #> 159 -0.6564317657 -0.125352355 -0.40567361 2.60784710 -0.41561312 -0.277713609 #> 160 -0.6877332745 -0.713009035 -0.34468534 -0.59239133 0.64184975 -0.139203081 #> 161 0.4078195324 -0.669798985 -0.47174423 3.04187582 -0.41561312 -0.314814643 #> 162 -0.8990184587 -0.721651045 -0.14647348 -0.62132658 -0.37633593 -0.285133816 #> 163 1.1121034796 -0.721651045 -0.35993241 0.74441713 -0.29173890 -0.290080620 #> 164 0.9712466902 -0.168562405 -0.32435592 -0.59817838 0.79895852 -0.272766804 #> 165 0.2356612341 -0.566094865 -0.33960299 -0.49979854 5.67839434 -0.297500827 #> 166 -0.3434166781 1.369715375 -0.46157952 -0.60975248 -0.41561312 4.716085608 #> 167 -0.5468764851 0.419094275 -0.46666187 3.73053472 -0.40654915 -0.307394436 #> 168 -0.5155749763 -0.721651045 -0.40567361 -0.59817838 -0.34008006 -0.287607218 #> 169 3.5849226723 -0.704367025 0.95639764 
-0.53452084 0.37597337 -0.304921034 #> 170 -0.9068438359 -0.687083005 -0.39042654 -0.62132658 -0.41863444 -0.312341241 #> 171 -0.5390511079 0.617860505 -0.07532051 -0.37827050 -0.37633593 -0.314814643 #> 172 -0.4529719588 -0.626588935 -0.46157952 -0.26252951 2.99243865 -0.077368024 #> 173 -0.8207646868 -0.687083005 -0.40567361 -0.62132658 0.99836580 0.019094666 #> 174 0.4312956639 1.741321805 -0.39042654 -0.51137264 -0.15275807 -0.290080620 #> 175 -0.0695284764 0.107981915 -0.45649716 -0.50558559 -0.29778154 -0.295027425 #> 176 0.4547717955 4.307998775 1.64759798 -0.58660428 -0.37029328 -0.304921034 #> 177 -0.1321314939 -0.220414465 -0.24812059 0.70969483 -0.38842122 -0.319761448 #> 178 -0.9068438359 -0.410538685 -0.45649716 -0.62132658 -0.42165577 -0.299974229 #> 179 0.2982642517 -0.574736875 -0.16680290 -0.06576982 0.68414826 -0.319761448 #> 180 -0.5077495991 0.280822115 -0.44633245 -0.33776115 -0.37029328 0.244174274 #> 181 -0.6877332745 -0.522884815 0.01616189 0.77335237 -0.08931029 -0.302447632 #> 182 -0.5938287482 0.436378295 -0.46157952 1.04534371 -0.20109922 -0.196091333 #> 183 -0.4451465816 -0.367328635 -0.22779117 -0.19308491 -0.30684551 0.273855101 #> 184 -0.7738124236 0.151191965 0.03649131 -0.51137264 -0.36727196 1.483348819 #> 185 3.0997492864 -0.617946925 -0.42092068 -0.56924313 0.18260873 -0.314814643 #> 186 -0.8677169499 0.393168245 -0.47174423 0.21200856 -0.39144254 -0.069947817 #> 187 -0.9068438359 -0.609304915 -0.46157952 -0.61553953 -0.42165577 -0.309867838 #> 188 2.7710834443 -0.721651045 -0.34468534 -0.60396543 -0.08628897 0.773482363 #> 189 -0.8755423271 -0.047574265 -0.43108539 -0.43614099 -0.41863444 0.187286021 #> 190 -0.3355913009 -0.246340495 -0.40567361 1.58353932 -0.11650220 -0.302447632 #> 191 -0.6094795026 -0.479674765 -0.42092068 -0.45350214 -0.41259180 -0.245559379 #> 192 0.1104551991 -0.721651045 0.80900933 -0.59239133 -0.40957048 -0.307394436 #> 193 -0.5077495991 0.609218495 0.12289135 -0.56924313 -0.14671542 -0.297500827 #> 194 3.4518912600 -0.687083005 -0.40567361 1.55460407 0.06175583 -0.260399793 #> 195 -0.4842734675 0.315390155 2.58783373 -0.52873379 0.17958741 -0.282660413 #> 196 2.4658937338 -0.721651045 1.35282136 -0.16414966 -0.42467709 -0.322234850 #> 197 -0.0382269676 -0.669798985 -0.39550890 -0.58660428 -0.40352783 -0.161463701 #> 198 -0.9068438359 -0.721651045 0.15338549 -0.62132658 -0.41561312 -0.297500827 #> 199 -0.8598915727 0.107981915 0.40750326 -0.60396543 -0.27058964 -0.299974229 #> 200 -0.0304015904 0.004277795 -0.14647348 -0.55766903 -0.23131245 -0.317288045 #> Otu00029 Otu00030 Otu00031 Otu00032 Otu00033 #> 1 0.695821495 0.39193166 0.2730666130 1.850227727 -0.352365855 #> 2 -0.252260766 0.44720466 -0.1402887916 -0.493938512 0.152851091 #> 3 0.066720182 -0.59377025 -0.4629076438 -0.357825634 -0.288065517 #> 4 -0.473775313 -0.71352842 1.5937875395 -0.501500339 -0.435037719 #> 5 -0.571241714 0.33665866 -0.5637260352 -0.577118604 0.952012441 #> 6 -0.216818439 -0.52928508 -0.2411071829 0.337862411 0.079364989 #> 7 3.079318020 0.19847615 -0.3520074134 -0.395634767 -0.618752972 #> 8 0.031277854 -0.17001055 -0.3822529308 -0.357825634 -0.444223482 #> 9 -0.730732188 -0.11473754 0.3335576478 -0.070476224 -0.168650602 #> 10 0.137604837 -0.76880143 -0.4830713221 -0.516623992 0.740739900 #> 11 -0.305424257 0.16162748 -0.5939715526 -0.577118604 -0.600381447 #> 12 -0.730732188 -0.54770941 -0.5233986787 0.148816747 0.465167021 #> 13 -0.269981930 -0.62140675 -0.2209435046 0.103445788 -0.453409245 #> 14 -0.526938804 0.54853851 0.1420027042 
0.572279035 -0.646310260 #> 15 -0.535799386 -0.33582956 -0.2411071829 0.436166157 -0.655496023 #> 16 -0.340866585 -0.38189040 -0.4729894830 -0.569556778 1.071427356 #> 17 -0.181376111 1.20260239 -0.4427439656 1.071359589 -0.582009922 #> 18 0.279374147 0.65908451 0.0109387955 -0.100723530 0.106922277 #> 19 0.270513565 0.72356969 -0.0797977567 0.466413463 -0.232950941 #> 20 1.431249791 0.85254003 0.4646215565 -0.546871298 0.446795495 #> 21 -0.730732188 -0.76880143 -0.5939715526 -0.569556778 1.787916843 #> 22 2.937548710 -0.28055656 -0.5536441961 -0.456129379 -0.159464840 #> 23 -0.004164473 0.04186930 -0.3217618960 0.141254920 -0.673867548 #> 24 0.146465418 1.07363205 -0.5838897135 0.504222596 0.116108040 #> 25 -0.730732188 0.79726702 -0.1806161481 -0.577118604 -0.021678400 #> 26 -0.730732188 -0.70431626 -0.5637260352 -0.138532663 4.424230724 #> 27 -0.686429278 -0.76880143 -0.5838897135 -0.531747645 1.705244979 #> 28 0.562912767 -0.76880143 -0.5939715526 -0.577118604 -0.490152295 #> 29 0.279374147 -0.52928508 -0.1402887916 -0.357825634 1.098984644 #> 30 -0.721871606 7.25499635 -0.5637260352 0.020265695 -0.692239074 #> 31 -0.128212620 1.34078490 1.6643604135 -0.569556778 -0.012492637 #> 32 1.378086300 -0.06867671 -0.5838897135 2.530792119 -0.627938735 #> 33 0.075580763 -0.43716340 -0.5939715526 -0.577118604 0.428423970 #> 34 -0.243400184 -0.76880143 -0.5838897135 -0.577118604 -0.223765178 #> 35 0.199628910 0.76041836 0.3033121304 -0.441005726 -0.407480431 #> 36 2.388192634 3.49643206 -0.5939715526 -0.509062165 -0.407480431 #> 37 -0.695289860 -0.67667975 -0.4830713221 0.821819312 -0.701424836 #> 38 -0.721871606 -0.03182804 -0.5939715526 -0.577118604 -0.012492637 #> 39 -0.234539602 2.08697046 0.5251125913 -0.350263807 -0.591195684 #> 40 -0.323145421 0.04186930 -0.1402887916 0.065636655 -0.609567210 #> 41 1.316062227 -0.34504173 -0.5233986787 -0.448567553 0.290637530 #> 42 -0.367448331 -0.06867671 -0.2713527003 -0.123409010 -0.692239074 #> 43 -0.721871606 -0.76880143 -0.5738078743 -0.577118604 -0.609567210 #> 44 0.748984986 0.39193166 1.3316597220 -0.478814859 -0.379923143 #> 45 1.989466449 -0.75037709 -0.4931531613 -0.289769194 2.936137175 #> 46 -0.057327965 -0.76880143 -0.4729894830 -0.569556778 2.467663279 #> 47 -0.730732188 -0.73195276 -0.3217618960 -0.297331021 -0.141093314 #> 48 3.495765369 -0.20685922 -0.5435623569 -0.524185818 -0.058421450 #> 49 -0.385169494 -0.72274059 -0.2108616655 -0.229274582 0.492724309 #> 50 -0.624405205 -0.63983108 -0.4124984482 0.489098943 0.042621939 #> 51 -0.588962878 2.18830430 -0.4830713221 -0.561994951 3.110666665 #> 52 -0.137073202 0.12477881 0.6662583392 1.056235936 -0.232950941 #> 53 -0.730732188 -0.76880143 -0.5939715526 -0.561994951 -0.692239074 #> 54 -0.305424257 -0.75037709 -0.5738078743 -0.577118604 -0.398294669 #> 55 -0.535799386 -0.63983108 -0.4225802873 0.050513002 -0.591195684 #> 56 -0.730732188 0.92623737 -0.5536441961 -0.478814859 0.446795495 #> 57 -0.367448331 2.16066779 -0.2511890220 5.563084576 -0.600381447 #> 58 -0.721871606 -0.75037709 -0.5838897135 -0.546871298 0.042621939 #> 59 -0.721871606 -0.23449572 2.7128716834 -0.577118604 1.622573115 #> 60 0.376840547 0.43799250 -0.4024166090 -0.115847183 -0.122721789 #> 61 0.111023091 0.09714230 4.3360477841 -0.055352571 -0.582009922 #> 62 -0.562381132 0.13399097 -0.2209435046 -0.577118604 -0.021678400 #> 63 1.750230739 0.22611265 -0.5133168395 -0.463691206 -0.554452634 #> 64 -0.314284839 0.36429516 2.6422988095 0.254682319 0.079364989 #> 65 -0.721871606 -0.75958926 -0.3923347699 -0.577118604 
-0.085978738 #> 66 0.252792401 -0.54770941 -0.5939715526 -0.569556778 -0.333994330 #> 67 -0.358587749 -0.54770941 -0.4024166090 -0.554433125 -0.471780770 #> 68 -0.677568696 0.15241531 0.6965038566 0.012703869 -0.315622805 #> 69 0.642658004 -0.19764705 -0.0596340785 0.156378574 -0.517709583 #> 70 0.155326000 0.24453698 2.8741811096 -0.577118604 -0.499338058 #> 71 0.935057206 -0.48322424 -0.5939715526 0.942808538 -0.389108906 #> 72 -0.491496477 0.21690048 0.1117571868 -0.577118604 -0.343180093 #> 73 -0.730732188 -0.02261587 -0.4729894830 0.186625880 -0.673867548 #> 74 0.048999018 -0.46479990 -0.4225802873 -0.191465449 -0.425851957 #> 75 -0.145933784 1.34078490 -0.3217618960 0.436166157 -0.232950941 #> 76 -0.730732188 1.31314840 4.7393213494 0.141254920 -0.453409245 #> 77 -0.730732188 -0.05025237 4.3864569797 1.404079959 0.079364989 #> 78 -0.730732188 -0.76880143 -0.1302069524 -0.289769194 2.081861248 #> 79 -0.243400184 0.63144801 -0.3520074134 -0.168779969 -0.673867548 #> 80 6.614690190 0.31823432 -0.5939715526 -0.577118604 -0.389108906 #> 81 -0.394030076 -0.05025237 -0.5334805178 -0.342701980 -0.664681786 #> 82 1.759091320 -0.76880143 -0.5939715526 -0.577118604 0.162036853 #> 83 2.007187613 -0.28055656 -0.5334805178 -0.350263807 0.520281597 #> 84 -0.730732188 0.35508299 -0.5939715526 -0.478814859 -0.205393653 #> 85 -0.633265787 -0.08710104 -0.1201251133 -0.577118604 -0.710610599 #> 86 -0.101630874 0.08793014 -0.3419255742 -0.577118604 -0.269693992 #> 87 1.218595826 0.21690048 0.2125755781 1.094045069 -0.131907552 #> 88 -0.721871606 -0.40031473 -0.1906979872 -0.577118604 0.125293803 #> 89 -0.207957857 -0.45558774 -0.5939715526 -0.509062165 -0.425851957 #> 90 -0.730732188 -0.30819306 0.8376496045 -0.577118604 0.667253799 #> 91 -0.730732188 -0.76880143 1.7450151266 -0.093161703 -0.067607213 #> 92 -0.544659968 -0.17001055 -0.1503706307 -0.078038050 -0.582009922 #> 93 0.881893714 -0.76880143 -0.3520074134 -0.577118604 -0.398294669 #> 94 -0.137073202 -0.73195276 -0.1402887916 -0.577118604 -0.554452634 #> 95 -0.624405205 -0.29898089 -0.2612708612 0.383233371 -0.333994330 #> 96 -0.730732188 -0.76880143 -0.5939715526 2.349308281 -0.591195684 #> 97 0.243931819 -0.59377025 -0.5939715526 -0.577118604 2.807536497 #> 98 -0.482635895 0.42878033 1.4223962743 2.530792119 -0.159464840 #> 99 -0.730732188 -0.69510409 -0.5939715526 -0.561994951 -0.600381447 #> 100 -0.730732188 0.40114383 0.1420027042 -0.569556778 -0.600381447 #> 101 -0.704150442 0.91702520 -0.5637260352 -0.561994951 -0.389108906 #> 102 -0.491496477 2.38175981 -0.5939715526 -0.577118604 -0.683053311 #> 103 -0.243400184 -0.30819306 -0.4326621264 -0.569556778 -0.370737381 #> 104 1.316062227 -0.76880143 -0.5939715526 -0.009981611 -0.343180093 #> 105 0.040138436 0.56696284 -0.1201251133 0.156378574 -0.232950941 #> 106 -0.668708114 -0.23449572 -0.4528258047 0.020265695 -0.710610599 #> 107 0.261652983 1.19339022 0.4444578782 -0.138532663 -0.600381447 #> 108 -0.730732188 0.74199402 -0.5838897135 0.564717209 -0.582009922 #> 109 -0.704150442 -0.55692158 -0.4931531613 -0.561994951 -0.040049925 #> 110 -0.261121348 1.46975524 0.3133939695 -0.183903622 -0.288065517 #> 111 -0.367448331 -0.22528355 3.8823650230 -0.055352571 -0.572824159 #> 112 -0.721871606 -0.75958926 -0.5939715526 -0.531747645 -0.710610599 #> 113 -0.128212620 0.83411569 3.5496643316 0.678144607 -0.315622805 #> 114 -0.650986951 -0.10552538 -0.4830713221 -0.546871298 -0.664681786 #> 115 -0.500357059 0.99072254 3.0052450183 0.715953740 0.033436176 #> 116 -0.243400184 -0.56613375 -0.3419255742 
-0.259521888 -0.361551618 #> 117 0.917336042 -0.76880143 -0.4427439656 -0.365387460 2.100232773 #> 118 0.616076258 0.43799250 0.7569948914 3.377716696 -0.563638396 #> 119 -0.225679020 -0.76880143 1.0090408698 2.939130754 0.703996850 #> 120 2.512240780 0.53932634 -0.5838897135 -0.546871298 -0.131907552 #> 121 -0.394030076 0.44720466 -0.4830713221 -0.531747645 -0.683053311 #> 122 0.111023091 -0.41873907 1.2409231698 0.950370364 -0.333994330 #> 123 -0.721871606 -0.75037709 -0.2915163786 -0.448567553 -0.683053311 #> 124 0.261652983 0.06029364 -0.3520074134 -0.161218143 -0.609567210 #> 125 -0.721871606 0.94466170 -0.3822529308 0.247120493 -0.012492637 #> 126 0.137604837 -0.75958926 -0.4225802873 -0.569556778 -0.058421450 #> 127 -0.713011024 -0.56613375 0.1117571868 -0.554433125 -0.232950941 #> 128 0.075580763 -0.51086074 -0.5233986787 -0.168779969 3.955756829 #> 129 -0.500357059 -0.56613375 -0.4427439656 -0.463691206 -0.471780770 #> 130 -0.642126369 -0.05946454 -0.5939715526 -0.456129379 -0.333994330 #> 131 2.972991038 -0.66746759 -0.5233986787 0.050513002 1.493972438 #> 132 -0.730732188 0.35508299 -0.4024166090 -0.040228917 0.823411764 #> 133 2.078072268 -0.70431626 0.0109387955 -0.463691206 -0.040049925 #> 134 -0.473775313 -0.54770941 -0.1402887916 0.315176932 -0.517709583 #> 135 2.645149508 -0.53849724 -0.5838897135 -0.561994951 1.319442948 #> 136 0.350258802 -0.45558774 1.1804321350 1.313338040 -0.049235688 #> 137 -0.269981930 -0.20685922 3.0254086966 1.857789554 -0.591195684 #> 138 0.093301927 -0.54770941 -0.4528258047 2.583724905 -0.683053311 #> 139 0.607215676 -0.66746759 -0.2209435046 7.158629984 -0.517709583 #> 140 -0.730732188 0.83411569 2.2087797267 -0.577118604 3.312753443 #> 141 -0.110491456 1.50660391 0.2125755781 0.368109718 -0.600381447 #> 142 -0.305424257 -0.75037709 -0.1705343090 -0.569556778 -0.710610599 #> 143 -0.278842512 -0.06867671 -0.3217618960 0.179064053 -0.683053311 #> 144 -0.571241714 0.50247767 -0.0293885611 2.349308281 -0.582009922 #> 145 1.271759317 -0.29898089 -0.4427439656 -0.365387460 -0.710610599 #> 146 -0.110491456 0.47484117 0.0008569563 0.549593556 0.051807701 #> 147 -0.730732188 -0.76880143 -0.5838897135 -0.577118604 -0.673867548 #> 148 -0.367448331 0.19847615 1.9164063918 0.632773648 -0.710610599 #> 149 -0.642126369 -0.74116493 -0.4326621264 -0.569556778 -0.701424836 #> 150 -0.730732188 4.27025412 -0.5939715526 -0.577118604 -0.701424836 #> 151 -0.402890658 -0.38189040 -0.4629076438 -0.577118604 0.805040239 #> 152 0.740124404 -0.36346606 -0.2511890220 0.050513002 -0.609567210 #> 153 -0.580102296 -0.65825542 0.0109387955 1.162101508 1.025498543 #> 154 -0.704150442 -0.74116493 -0.2209435046 2.825703355 -0.655496023 #> 155 0.004696108 0.90781303 -0.5133168395 -0.448567553 0.005878888 #> 156 0.846451387 -0.07788888 -0.2612708612 -0.561994951 -0.664681786 #> 157 -0.713011024 -0.76880143 -0.5838897135 -0.561994951 -0.710610599 #> 158 -0.367448331 -0.76880143 -0.0797977567 0.156378574 -0.637124498 #> 159 -0.163654947 -0.40031473 2.0676339788 -0.569556778 -0.646310260 #> 160 0.004696108 -0.48322424 -0.5738078743 -0.539309471 -0.370737381 #> 161 1.094547680 -0.48322424 -0.3923347699 -0.433443899 -0.591195684 #> 162 -0.730732188 0.41956816 -0.5939715526 -0.577118604 1.319442948 #> 163 0.181907746 -0.61219458 -0.5637260352 -0.569556778 -0.444223482 #> 164 -0.721871606 -0.25292005 -0.4830713221 -0.501500339 0.465167021 #> 165 -0.030746219 0.01423280 -0.5838897135 -0.554433125 -0.223765178 #> 166 -0.713011024 -0.76880143 0.6662583392 -0.577118604 -0.710610599 #> 167 
-0.713011024 4.09522294 1.1602684568 -0.577118604 2.302319551 #> 168 2.388192634 -0.70431626 -0.5939715526 -0.577118604 1.007127017 #> 169 0.270513565 -0.76880143 -0.5738078743 -0.539309471 0.593767698 #> 170 -0.730732188 -0.76880143 0.1016753477 -0.569556778 -0.710610599 #> 171 -0.571241714 -0.61219458 -0.1100432742 0.534469902 -0.600381447 #> 172 -0.287703094 -0.48322424 -0.4225802873 -0.524185818 -0.407480431 #> 173 1.422389209 -0.61219458 -0.5738078743 -0.577118604 2.752421921 #> 174 0.456585784 0.14320314 -0.1705343090 -0.546871298 1.806288368 #> 175 -0.296563675 -0.39110257 -0.0697159176 -0.493938512 -0.627938735 #> 176 0.562912767 1.38684574 -0.5939715526 0.587402689 -0.012492637 #> 177 0.952778369 -0.48322424 -0.1604524698 -0.244398235 -0.683053311 #> 178 -0.721871606 -0.75037709 -0.5838897135 -0.214150929 1.705244979 #> 179 0.217350073 -0.52928508 -0.5435623569 -0.577118604 5.278506651 #> 180 -0.261121348 0.88017653 -0.1604524698 0.557155382 -0.673867548 #> 181 -0.039606801 -0.54770941 -0.1604524698 0.111007614 -0.627938735 #> 182 -0.083909710 -0.64904325 -0.2612708612 -0.577118604 -0.306437042 #> 183 -0.199097275 1.20260239 -0.2108616655 -0.123409010 -0.554452634 #> 184 -0.668708114 -0.30819306 -0.3116800568 1.600687450 -0.572824159 #> 185 0.297095310 2.55679099 -0.5939715526 -0.554433125 -0.627938735 #> 186 -0.713011024 -0.62140675 -0.0293885611 -0.380511113 -0.701424836 #> 187 -0.721871606 -0.75958926 -0.4225802873 -0.085599877 -0.609567210 #> 188 2.990712202 -0.41873907 -0.5939715526 -0.554433125 1.392929049 #> 189 -0.730732188 -0.56613375 -0.4326621264 -0.380511113 -0.710610599 #> 190 0.102162509 -0.25292005 0.0815116694 -0.304892848 -0.609567210 #> 191 -0.668708114 -0.25292005 -0.5133168395 -0.554433125 -0.343180093 #> 192 -0.730732188 -0.32661739 0.6158491435 -0.577118604 -0.205393653 #> 193 0.057859600 -0.63061892 -0.3822529308 0.413480677 -0.278879754 #> 194 -0.509217641 0.14320314 -0.4528258047 -0.577118604 0.162036853 #> 195 -0.668708114 0.11556664 -0.3721710916 0.526908076 -0.692239074 #> 196 -0.730732188 -0.76880143 -0.5838897135 -0.577118604 0.906083628 #> 197 -0.154794365 -0.47401207 2.1079613354 -0.093161703 -0.572824159 #> 198 -0.721871606 -0.67667975 -0.5939715526 -0.577118604 -0.627938735 #> 199 -0.713011024 -0.74116493 -0.4225802873 -0.161218143 -0.232950941 #> 200 -0.730732188 -0.47401207 -0.3217618960 0.511784423 -0.278879754 #> Otu00034 Otu00035 Otu00036 Otu00037 Otu00038 #> 1 -0.1482914828 -0.28857253 -0.337797955 -0.28026882 -0.269009738 #> 2 -0.1507314908 1.32771762 -0.337797955 -0.40104181 -0.269009738 #> 3 -0.1360914431 -0.09645535 -0.309626997 5.43380328 -0.251964926 #> 4 -0.1507314908 -0.24263146 -0.337797955 -0.28781713 -0.254805728 #> 5 0.0469091527 -0.38463111 -0.332163763 -0.55200805 -0.269009738 #> 6 -0.1507314908 -0.31363129 -0.337797955 -0.02362622 -0.269009738 #> 7 -0.1507314908 -0.38880757 3.099058896 -0.19723739 -0.269009738 #> 8 -0.1507314908 -0.25098438 -0.337797955 -0.13685089 -0.266168936 #> 9 -0.0775312524 -0.38880757 -0.337797955 0.32359613 -0.084357613 #> 10 -0.0604511968 -0.30110191 0.811577123 -0.51426649 -0.254805728 #> 11 -0.1507314908 1.31518824 -0.337797955 0.52740055 -0.269009738 #> 12 0.6935112580 -0.25098438 -0.337797955 -0.54445974 -0.266168936 #> 13 -0.1458514749 5.21182571 -0.337797955 -0.55200805 -0.257646530 #> 14 -0.1507314908 -0.31780775 -0.337797955 -0.43878337 -0.269009738 #> 15 -0.1507314908 -0.20921978 0.158010902 -0.40859012 -0.269009738 #> 16 -0.0824112683 -0.36792527 -0.337797955 1.16145875 -0.269009738 #> 17 
-0.1507314908 -0.38880757 0.963700295 -0.29536544 0.049160077 #> 18 -0.1507314908 -0.17580810 -0.337797955 0.01411534 -0.200830492 #> 19 -0.1458514749 0.28360254 -0.337797955 -0.43123506 -0.269009738 #> 20 -0.1482914828 -0.36792527 -0.337797955 1.87100007 -0.269009738 #> 21 0.3616701775 -0.38880757 -0.337797955 7.21520489 -0.251964926 #> 22 -0.1214513954 -0.38463111 -0.337797955 0.18772652 -0.232079313 #> 23 -0.1507314908 0.35460236 -0.337797955 -0.25007557 -0.269009738 #> 24 -0.1507314908 -0.38880757 -0.337797955 0.06695353 -0.260487332 #> 25 -0.1360914431 -0.23010208 1.746852922 -0.54445974 0.270742627 #> 26 0.9887522192 -0.38463111 -0.337797955 -0.51426649 -0.260487332 #> 27 13.8524741014 -0.38880757 -0.337797955 -0.55200805 -0.266168936 #> 28 -0.1507314908 -0.38880757 -0.337797955 -0.55200805 -0.101402425 #> 29 -0.1507314908 0.05807368 -0.337797955 -0.31801038 -0.266168936 #> 30 -0.1458514749 -0.38880757 -0.337797955 -0.46897662 -0.260487332 #> 31 -0.1141313716 1.80383409 -0.320895380 0.42927250 0.301991448 #> 32 -0.1482914828 -0.38045465 -0.332163763 -0.33310700 -0.269009738 #> 33 -0.1507314908 -0.30945483 0.929895146 1.22184525 -0.269009738 #> 34 0.3836302490 -0.38880757 -0.337797955 -0.55200805 -0.269009738 #> 35 -0.1434114669 -0.38880757 -0.337797955 0.05940521 -0.266168936 #> 36 0.0542291766 -0.38880757 -0.337797955 -0.55200805 -0.254805728 #> 37 -0.1068113478 -0.38880757 -0.337797955 -0.52936311 2.219532746 #> 38 0.0883892878 -0.38463111 -0.337797955 -0.55200805 0.196881777 #> 39 -0.1507314908 -0.31780775 -0.337797955 -0.20478570 -0.226397709 #> 40 -0.1507314908 -0.27604314 -0.337797955 -0.14439921 0.114498521 #> 41 -0.1385314510 -0.38463111 -0.332163763 0.98029927 -0.269009738 #> 42 -0.0848512763 -0.30945483 -0.072990952 -0.01607790 -0.146855255 #> 43 -0.0360511174 -0.38880757 -0.337797955 -0.55200805 -0.269009738 #> 44 -0.1434114669 -0.38880757 -0.337797955 -0.55200805 -0.269009738 #> 45 -0.1019313319 -0.38880757 -0.337797955 -0.46142831 -0.266168936 #> 46 -0.1409714590 -0.38880757 3.262450451 0.53494886 -0.266168936 #> 47 -0.0214110697 -0.38880757 -0.337797955 0.82933303 -0.269009738 #> 48 -0.1312114272 -0.35121943 -0.337797955 2.98060192 -0.266168936 #> 49 -0.1287714193 -0.38880757 2.969472490 -0.52936311 -0.192308086 #> 50 -0.0946113080 -0.38880757 -0.337797955 -0.49162155 -0.269009738 #> 51 -0.1458514749 -0.18833748 -0.337797955 -0.44633168 -0.135492048 #> 52 -0.1458514749 3.57047681 -0.337797955 -0.54445974 0.392897110 #> 53 0.0493491607 -0.38880757 -0.337797955 1.64455071 -0.229238511 #> 54 0.1249894069 -0.38880757 -0.337797955 -0.54445974 -0.149696057 #> 55 -0.1482914828 -0.19251394 -0.337797955 -0.41613843 -0.269009738 #> 56 -0.0311711015 -0.38880757 -0.337797955 -0.55200805 -0.266168936 #> 57 -0.1507314908 -0.07139659 -0.337797955 -0.43123506 -0.254805728 #> 58 -0.0287310935 -0.37210173 -0.326529572 -0.54445974 -0.269009738 #> 59 -0.1092513557 -0.38880757 -0.337797955 -0.48407324 0.017911256 #> 60 -0.1507314908 -0.11733765 -0.337797955 -0.41613843 -0.269009738 #> 61 -0.1409714590 -0.38880757 -0.337797955 -0.32555869 0.071886493 #> 62 -0.1287714193 -0.28439607 -0.005380653 0.23301639 1.310476131 #> 63 -0.0458111492 -0.38880757 -0.332163763 -0.04627115 -0.007655961 #> 64 -0.1507314908 0.63442520 -0.281456039 0.48965899 -0.226397709 #> 65 -0.1507314908 -0.38880757 -0.337797955 -0.55200805 -0.220716105 #> 66 -0.1409714590 1.92912790 -0.337797955 -0.55200805 -0.090039217 #> 67 -0.1482914828 -0.32198421 -0.337797955 -0.09910934 -0.269009738 #> 68 -0.1507314908 0.04972076 
#> [remainder of the standardized feature table truncated for brevity: 200 rows of transformed values for columns through Otu00060]
-0.426028317 0.182092286 -0.40900254 -0.41506494 #> 179 -0.80430576 -0.426028317 -0.607593649 -0.40900254 -0.41506494 #> 180 -0.25730783 0.844272459 -0.065652321 -0.10545619 -0.41506494 #> 181 -0.67302626 -0.416477183 -0.576625573 0.78424864 -0.41506494 #> 182 0.26781019 -0.426028317 -0.452753270 0.86798557 -0.41506494 #> 183 -0.41046725 -0.263659045 0.027251907 0.54350498 -0.41506494 #> 184 -0.36670742 -0.273210178 -0.174040587 -0.36713408 -0.30640920 #> 185 2.43392201 -0.378272648 -0.561141535 -0.40900254 -0.41506494 #> 186 -0.78242585 -0.416477183 -0.545657497 -0.37760120 -0.41506494 #> 187 0.31157002 0.548187316 -0.607593649 -0.40900254 -0.15824228 #> 188 -0.82618568 -0.426028317 -0.592109611 -0.40900254 -0.35579817 #> 189 -0.71678609 -0.340068114 -0.514689421 -0.40900254 -0.26689802 #> 190 0.81480812 0.739209989 -0.297912890 -0.25199581 -0.40518715 #> 191 0.00525118 -0.426028317 -0.499205383 -0.40900254 1.41232712 #> 192 1.12112697 -0.426028317 -0.561141535 -0.40900254 -0.41506494 #> 193 1.47120565 1.130806469 0.383384780 0.66911037 -0.05946433 #> 194 -0.56362667 -0.387823782 -0.576625573 0.02014920 0.52332558 #> 195 -0.21354799 0.901579261 0.491773045 0.50163652 -0.39530935 #> 196 -0.82618568 -0.426028317 -0.592109611 -0.40900254 -0.41506494 #> 197 -0.80430576 1.608363152 -0.514689421 -0.38806831 -0.37555376 #> 198 -0.80430576 -0.426028317 -0.530173459 -0.40900254 -0.25702023 #> 199 1.71188474 0.204346505 -0.421785194 -0.19966023 0.06894701 #> 200 3.72483714 -0.426028317 1.869852422 -0.40900254 -0.32616479 #> #> $removed #> character(0) #>"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_feature_importance.html","id":null,"dir":"Reference","previous_headings":"","what":"Get feature importance using the permutation method — get_feature_importance","title":"Get feature importance using the permutation method — get_feature_importance","text":"Calculates feature importance using trained model test data. Requires future.apply package.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_feature_importance.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Get feature importance using the permutation method — get_feature_importance","text":"","code":"get_feature_importance( trained_model, test_data, outcome_colname, perf_metric_function, perf_metric_name, class_probs, method, seed = NA, corr_thresh = 1, groups = NULL, nperms = 100, corr_method = \"spearman\" )"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_feature_importance.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Get feature importance using the permutation method — get_feature_importance","text":"trained_model Trained model caret::train(). test_data Held test data: dataframe outcome features. outcome_colname Column name string outcome variable (default NULL; first column chosen automatically). perf_metric_function Function calculate performance metric used cross-validation test performance. functions provided caret (see caret::defaultSummary()). Defaults: binary classification = twoClassSummary, multi-class classification = multiClassSummary, regression = defaultSummary. perf_metric_name column name output function provided perf_metric_function used performance metric. Defaults: binary classification = \"ROC\", multi-class classification = \"logLoss\", regression = \"RMSE\". class_probs Whether use class probabilities (TRUE categorical outcomes, FALSE numeric outcomes). method ML method. 
Options: c(\"glmnet\", \"rf\", \"rpart2\", \"svmRadial\", \"xgbTree\"). glmnet: linear, logistic, multiclass regression rf: random forest rpart2: decision tree svmRadial: support vector machine xgbTree: xgboost seed Random seed (default: NA). results reproducible set seed. corr_thresh feature importance, group correlations equal corr_thresh (range 0 1; default: 1). groups Vector feature names group together permutation. element string feature names separated pipe character (|). NULL (default), correlated features grouped together based corr_thresh. nperms number permutations perform (default: 100). corr_method correlation method. options supported stats::cor: spearman, pearson, kendall. (default: spearman)","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_feature_importance.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Get feature importance using the permutation method — get_feature_importance","text":"Data frame performance metrics feature (group correlated features; feat) permuted (perf_metric), differences actual test performance metric permuted performance metric (perf_metric_diff; test minus permuted performance), p-value (pvalue: probability obtaining actual performance value null hypothesis). Features larger perf_metric_diff important. performance metric name (perf_metric_name) seed (seed) also returned.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_feature_importance.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Get feature importance using the permutation method — get_feature_importance","text":"permutation tests, p-value number permutation statistics greater test statistic, divided number permutations. case, permutation statistic model performance (e.g. AUROC) randomizing order observations one feature, test statistic actual performance test data. default perform 100 permutations per feature; increasing increase precision estimating null distribution, also increases runtime. p-value represents probability obtaining actual performance event null hypothesis true, null hypothesis feature important model performance. strongly recommend providing multiple cores speed computation time. See vignette parallel processing details.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_feature_importance.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Get feature importance using the permutation method — get_feature_importance","text":"Begüm Topçuoğlu, topcuoglu.begum@gmail.com Zena Lapp, zenalapp@umich.edu Kelly Sovacool, sovacool@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_feature_importance.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Get feature importance using the permutation method — get_feature_importance","text":"","code":"if (FALSE) { # If you called `run_ml()` with `feature_importance = FALSE` (the default), # you can use `get_feature_importance()` later as long as you have the # trained model and test data. results <- run_ml(otu_small, \"glmnet\", kfold = 2, cv_times = 2) names(results$trained_model$trainingData)[1] <- \"dx\" feat_imp <- get_feature_importance(results$trained_model, results$trained_model$trainingData, results$test_data, \"dx\", multiClassSummary, \"AUC\", class_probs = TRUE, method = \"glmnet\" ) # We strongly recommend providing multiple cores to speed up computation time. 
# Do this before calling `get_feature_importance()`. doFuture::registerDoFuture() future::plan(future::multicore, workers = 2) # Optionally, you can group features together with a custom grouping feat_imp <- get_feature_importance(results$trained_model, results$test_data, \"dx\", multiClassSummary, \"AUC\", class_probs = TRUE, method = \"glmnet\", groups = c( \"Otu00007\", \"Otu00008\", \"Otu00009\", \"Otu00011\", \"Otu00012\", \"Otu00015\", \"Otu00016\", \"Otu00018\", \"Otu00019\", \"Otu00020\", \"Otu00022\", \"Otu00023\", \"Otu00025\", \"Otu00028\", \"Otu00029\", \"Otu00030\", \"Otu00035\", \"Otu00036\", \"Otu00037\", \"Otu00038\", \"Otu00039\", \"Otu00040\", \"Otu00047\", \"Otu00050\", \"Otu00052\", \"Otu00054\", \"Otu00055\", \"Otu00056\", \"Otu00060\", \"Otu00003|Otu00002|Otu00005|Otu00024|Otu00032|Otu00041|Otu00053\", \"Otu00014|Otu00021|Otu00017|Otu00031|Otu00057\", \"Otu00013|Otu00006\", \"Otu00026|Otu00001|Otu00034|Otu00048\", \"Otu00033|Otu00010\", \"Otu00042|Otu00004\", \"Otu00043|Otu00027|Otu00049\", \"Otu00051|Otu00045\", \"Otu00058|Otu00044\", \"Otu00059|Otu00046\" ) ) # the function can show a progress bar if you have the `progressr` package installed. ## optionally, specify the progress bar format: progressr::handlers(progressr::handler_progress( format = \":message :bar :percent | elapsed: :elapsed | eta: :eta\", clear = FALSE, show_after = 0 )) ## tell progressr to always report progress progressr::handlers(global = TRUE) ## run the function and watch the live progress updates feat_imp <- get_feature_importance(results$trained_model, results$test_data, \"dx\", multiClassSummary, \"AUC\", class_probs = TRUE, method = \"glmnet\" ) # You can specify any correlation method supported by `stats::cor`: feat_imp <- get_feature_importance(results$trained_model, results$test_data, \"dx\", multiClassSummary, \"AUC\", class_probs = TRUE, method = \"glmnet\", corr_method = \"pearson\" ) }"},
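The Details entry above defines the permuted p-value as the number of permutation statistics greater than the test statistic, divided by the number of permutations. A minimal sketch of that calculation in plain R, using made-up performance values (the package's internal implementation may differ in details such as tie handling or small-sample corrections):

actual_perf <- 0.80                      # test performance with the feature intact
perm_perfs <- c(0.78, 0.81, 0.74, 0.79)  # performance after each permutation of that feature
pvalue <- sum(perm_perfs > actual_perf) / length(perm_perfs)
perf_metric_diff <- actual_perf - mean(perm_perfs)  # larger values suggest a more important feature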
{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_hp_performance.html","id":null,"dir":"Reference","previous_headings":"","what":"Get hyperparameter performance metrics — get_hp_performance","title":"Get hyperparameter performance metrics — get_hp_performance","text":"Get hyperparameter performance metrics","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_hp_performance.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Get hyperparameter performance metrics — get_hp_performance","text":"","code":"get_hp_performance(trained_model)"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_hp_performance.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Get hyperparameter performance metrics — get_hp_performance","text":"trained_model trained model (e.g. run_ml())","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_hp_performance.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Get hyperparameter performance metrics — get_hp_performance","text":"Named list: dat: Dataframe performance metric group hyperparameters. params: Hyperparameters tuned. metric: Performance metric used.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_hp_performance.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Get hyperparameter performance metrics — get_hp_performance","text":"Zena Lapp, zenalapp@umich.edu Kelly Sovacool sovacool@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_hp_performance.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Get hyperparameter performance metrics — get_hp_performance","text":"","code":"get_hp_performance(otu_mini_bin_results_glmnet$trained_model) #> $dat #> alpha lambda AUC #> 1 0 1e-04 0.6082552 #> 2 0 1e-03 0.6082552 #> 3 0 1e-02 0.6086458 #> 4 0 1e-01 0.6166789 #> 5 0 1e+00 0.6221737 #> 6 0 1e+01 0.6187408 #> #> $params #> [1] \"lambda\" #> #> $metric #> [1] \"AUC\" #>"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_hyperparams_list.html","id":null,"dir":"Reference","previous_headings":"","what":"Set hyperparameters based on ML method and dataset characteristics — get_hyperparams_list","title":"Set hyperparameters based on ML method and dataset characteristics — get_hyperparams_list","text":"details see vignette hyperparameter tuning.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_hyperparams_list.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Set hyperparameters based on ML method and dataset characteristics — get_hyperparams_list","text":"","code":"get_hyperparams_list(dataset, method)"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_hyperparams_list.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Set hyperparameters based on ML method and dataset characteristics — get_hyperparams_list","text":"dataset Data frame outcome variable columns features. method ML method. Options: c(\"glmnet\", \"rf\", \"rpart2\", \"svmRadial\", \"xgbTree\"). glmnet: linear, logistic, multiclass regression rf: random forest rpart2: decision tree svmRadial: support vector machine xgbTree: xgboost","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_hyperparams_list.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Set hyperparameters based on ML method and dataset characteristics — get_hyperparams_list","text":"Named list hyperparameters.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_hyperparams_list.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Set hyperparameters based on ML method and dataset characteristics — get_hyperparams_list","text":"Kelly Sovacool, sovacool@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_hyperparams_list.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Set hyperparameters based on ML method and dataset characteristics — get_hyperparams_list","text":"","code":"get_hyperparams_list(otu_mini_bin, \"rf\") #> $mtry #> [1] 2 3 6 #> get_hyperparams_list(otu_small, \"rf\") #> $mtry #> [1] 4 8 16 #> get_hyperparams_list(otu_mini_bin, \"rpart2\") #> $maxdepth #> [1] 1 2 4 8 16 30 #> get_hyperparams_list(otu_small, \"rpart2\") #> $maxdepth #> [1] 1 2 4 8 16 30 #>"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_outcome_type.html","id":null,"dir":"Reference","previous_headings":"","what":"Get outcome type. 
— get_outcome_type","title":"Get outcome type. — get_outcome_type","text":"outcome numeric, type continuous. Otherwise, outcome type binary two outcomes multiclass two outcomes.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_outcome_type.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Get outcome type. — get_outcome_type","text":"","code":"get_outcome_type(outcomes_vec)"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_outcome_type.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Get outcome type. — get_outcome_type","text":"outcomes_vec Vector outcomes.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_outcome_type.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Get outcome type. — get_outcome_type","text":"Outcome type (continuous, binary, multiclass).","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_outcome_type.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Get outcome type. — get_outcome_type","text":"Zena Lapp, zenalapp@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_outcome_type.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Get outcome type. — get_outcome_type","text":"","code":"get_outcome_type(c(1, 2, 1)) #> [1] \"continuous\" get_outcome_type(c(\"a\", \"b\", \"b\")) #> [1] \"binary\" get_outcome_type(c(\"a\", \"b\", \"c\")) #> [1] \"multiclass\""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_partition_indices.html","id":null,"dir":"Reference","previous_headings":"","what":"Select indices to partition the data into training & testing sets. — get_partition_indices","title":"Select indices to partition the data into training & testing sets. — get_partition_indices","text":"Use function get row indices training set.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_partition_indices.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Select indices to partition the data into training & testing sets. — get_partition_indices","text":"","code":"get_partition_indices( outcomes, training_frac = 0.8, groups = NULL, group_partitions = NULL )"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_partition_indices.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Select indices to partition the data into training & testing sets. — get_partition_indices","text":"outcomes vector outcomes training_frac Fraction data training set (default: 0.8). Rows dataset randomly selected training set, remaining rows used testing set. Alternatively, provide vector integers, used row indices training set. remaining rows used testing set. groups Vector groups keep together splitting data train test sets. number groups training set larger kfold, groups also kept together cross-validation. Length matches number rows dataset (default: NULL). group_partitions Specify assign groups training testing partitions (default: NULL). groups specifies samples belong group \"\" belong group \"B\", setting group_partitions = list(train = c(\"\", \"B\"), test = c(\"B\")) result samples group \"\" placed training set, samples \"B\" also training set, remaining samples \"B\" testing set. partition sizes close training_frac possible. 
number groups training set larger kfold, groups also kept together cross-validation.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_partition_indices.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Select indices to partition the data into training & testing sets. — get_partition_indices","text":"Vector row indices training set.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_partition_indices.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Select indices to partition the data into training & testing sets. — get_partition_indices","text":"groups NULL, uses createDataPartition. Otherwise, uses create_grouped_data_partition(). Set seed prior calling function like data partitions reproducible (recommended).","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_partition_indices.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Select indices to partition the data into training & testing sets. — get_partition_indices","text":"Kelly Sovacool, sovacool@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_partition_indices.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Select indices to partition the data into training & testing sets. — get_partition_indices","text":"","code":"training_inds <- get_partition_indices(otu_mini_bin$dx) train_data <- otu_mini_bin[training_inds, ] test_data <- otu_mini_bin[-training_inds, ]"},
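The groups argument described above keeps related samples together when partitioning. A hedged sketch building on the example above (the grps vector is randomly generated here purely for illustration; it is not part of the package docs):

set.seed(2019)
grps <- sample(LETTERS[1:8], nrow(otu_mini_bin), replace = TRUE)  # hypothetical group labels
training_inds <- get_partition_indices(otu_mini_bin$dx,
  training_frac = 0.8, groups = grps
)
intersect(grps[training_inds], grps[-training_inds])  # expect character(0): no group straddles the split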
{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_perf_metric_fn.html","id":null,"dir":"Reference","previous_headings":"","what":"Get default performance metric function — get_perf_metric_fn","title":"Get default performance metric function — get_perf_metric_fn","text":"Get default performance metric function","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_perf_metric_fn.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Get default performance metric function — get_perf_metric_fn","text":"","code":"get_perf_metric_fn(outcome_type)"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_perf_metric_fn.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Get default performance metric function — get_perf_metric_fn","text":"outcome_type Type outcome (one : \"continuous\",\"binary\",\"multiclass\").","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_perf_metric_fn.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Get default performance metric function — get_perf_metric_fn","text":"Performance metric function.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_perf_metric_fn.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Get default performance metric function — get_perf_metric_fn","text":"Zena Lapp, zenalapp@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_perf_metric_fn.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Get default performance metric function — get_perf_metric_fn","text":"","code":"get_perf_metric_fn(\"continuous\") #> function (data, lev = NULL, model = NULL) #> { #> if (is.character(data$obs)) #> data$obs <- factor(data$obs, levels = lev) #> postResample(data[, \"pred\"], data[, \"obs\"]) #> } #> #> get_perf_metric_fn(\"binary\") #> function (data, lev = NULL, model = NULL) #> { #> if (!all(levels(data[, \"pred\"]) == levels(data[, \"obs\"]))) #> stop(\"levels of observed and predicted data do not match\") #> has_class_probs <- all(lev %in% colnames(data)) #> if (has_class_probs) { #> lloss <- mnLogLoss(data = data, lev = lev, model = model) #> requireNamespaceQuietStop(\"pROC\") #> requireNamespaceQuietStop(\"MLmetrics\") #> prob_stats <- lapply(levels(data[, \"pred\"]), function(x) { #> obs <- ifelse(data[, \"obs\"] == x, 1, 0) #> prob <- data[, x] #> roc_auc <- try(pROC::roc(obs, data[, x], direction = \"<\", #> quiet = TRUE), silent = TRUE) #> roc_auc <- if (inherits(roc_auc, \"try-error\")) #> NA #> else roc_auc$auc #> pr_auc <- try(MLmetrics::PRAUC(y_pred = data[, x], #> y_true = obs), silent = TRUE) #> if (inherits(pr_auc, \"try-error\")) #> pr_auc <- NA #> res <- c(ROC = roc_auc, AUC = pr_auc) #> return(res) #> }) #> prob_stats <- do.call(\"rbind\", prob_stats) #> prob_stats <- colMeans(prob_stats, na.rm = TRUE) #> } #> CM <- confusionMatrix(data[, \"pred\"], data[, \"obs\"], mode = \"everything\") #> if (length(levels(data[, \"pred\"])) == 2) { #> class_stats <- CM$byClass #> } #> else { #> class_stats <- colMeans(CM$byClass) #> names(class_stats) <- paste(\"Mean\", names(class_stats)) #> } #> overall_stats <- if (has_class_probs) #> c(CM$overall, logLoss = as.numeric(lloss), AUC = unname(prob_stats[\"ROC\"]), #> prAUC = unname(prob_stats[\"AUC\"])) #> else CM$overall #> stats <- c(overall_stats, class_stats) #> stats <- stats[!names(stats) %in% c(\"AccuracyNull\", \"AccuracyLower\", #> \"AccuracyUpper\", \"AccuracyPValue\", \"McnemarPValue\", \"Mean Prevalence\", #> \"Mean Detection Prevalence\")] #> names(stats) <- gsub(\"[[:blank:]]+\", \"_\", names(stats)) #> stat_list <- c(\"Accuracy\", \"Kappa\", \"Mean_F1\", \"Mean_Sensitivity\", #> \"Mean_Specificity\", \"Mean_Pos_Pred_Value\", \"Mean_Neg_Pred_Value\", #> \"Mean_Precision\", \"Mean_Recall\", \"Mean_Detection_Rate\", #> \"Mean_Balanced_Accuracy\") #> if (has_class_probs) #> stat_list <- c(\"logLoss\", \"AUC\", \"prAUC\", stat_list) #> if (length(levels(data[, \"pred\"])) == 2) #> stat_list <- gsub(\"^Mean_\", \"\", stat_list) #> stats <- stats[c(stat_list)] #> return(stats) #> } #> #> get_perf_metric_fn(\"multiclass\") #> [function body identical to the get_perf_metric_fn(\"binary\") output above] #> #> "},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_perf_metric_name.html","id":null,"dir":"Reference","previous_headings":"","what":"Get default performance metric name — get_perf_metric_name","title":"Get default performance metric name — get_perf_metric_name","text":"Get default performance metric name cross-validation.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_perf_metric_name.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Get default performance metric name — get_perf_metric_name","text":"","code":"get_perf_metric_name(outcome_type)"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_perf_metric_name.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Get default performance metric name — get_perf_metric_name","text":"outcome_type Type outcome (one : \"continuous\",\"binary\",\"multiclass\").","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_perf_metric_name.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Get default performance metric name — get_perf_metric_name","text":"Performance metric name.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_perf_metric_name.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Get default performance metric name — get_perf_metric_name","text":"Zena Lapp, zenalapp@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_perf_metric_name.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Get default performance metric name — get_perf_metric_name","text":"","code":"get_perf_metric_name(\"continuous\") #> [1] \"RMSE\" get_perf_metric_name(\"binary\") #> [1] \"AUC\" get_perf_metric_name(\"multiclass\") #> [1] \"logLoss\""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_performance_tbl.html","id":null,"dir":"Reference","previous_headings":"","what":"Get model performance metrics as a one-row tibble — get_performance_tbl","title":"Get model performance metrics as a one-row tibble — get_performance_tbl","text":"Get model performance metrics one-row tibble","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_performance_tbl.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Get model performance metrics as a one-row tibble — 
get_performance_tbl","text":"","code":"get_performance_tbl( trained_model, test_data, outcome_colname, perf_metric_function, perf_metric_name, class_probs, method, seed = NA )"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_performance_tbl.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Get model performance metrics as a one-row tibble — get_performance_tbl","text":"trained_model Trained model caret::train(). test_data Held test data: dataframe outcome features. outcome_colname Column name string outcome variable (default NULL; first column chosen automatically). perf_metric_function Function calculate performance metric used cross-validation test performance. functions provided caret (see caret::defaultSummary()). Defaults: binary classification = twoClassSummary, multi-class classification = multiClassSummary, regression = defaultSummary. perf_metric_name column name output function provided perf_metric_function used performance metric. Defaults: binary classification = \"ROC\", multi-class classification = \"logLoss\", regression = \"RMSE\". class_probs Whether use class probabilities (TRUE categorical outcomes, FALSE numeric outcomes). method ML method. Options: c(\"glmnet\", \"rf\", \"rpart2\", \"svmRadial\", \"xgbTree\"). glmnet: linear, logistic, multiclass regression rf: random forest rpart2: decision tree svmRadial: support vector machine xgbTree: xgboost seed Random seed (default: NA). results reproducible set seed.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_performance_tbl.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Get model performance metrics as a one-row tibble — get_performance_tbl","text":"one-row tibble column cross-validation performance, columns performance metrics test data, plus method, seed.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_performance_tbl.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Get model performance metrics as a one-row tibble — get_performance_tbl","text":"Kelly Sovacool, sovacool@umich.edu Zena Lapp, zenalapp@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_performance_tbl.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Get model performance metrics as a one-row tibble — get_performance_tbl","text":"","code":"if (FALSE) { results <- run_ml(otu_small, \"glmnet\", kfold = 2, cv_times = 2) names(results$trained_model$trainingData)[1] <- \"dx\" get_performance_tbl(results$trained_model, results$test_data, \"dx\", multiClassSummary, \"AUC\", class_probs = TRUE, method = \"glmnet\" ) }"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_tuning_grid.html","id":null,"dir":"Reference","previous_headings":"","what":"Generate the tuning grid for tuning hyperparameters — get_tuning_grid","title":"Generate the tuning grid for tuning hyperparameters — get_tuning_grid","text":"Generate tuning grid tuning hyperparameters","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_tuning_grid.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Generate the tuning grid for tuning hyperparameters — get_tuning_grid","text":"","code":"get_tuning_grid(hyperparams_list, 
method)"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_tuning_grid.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generate the tuning grid for tuning hyperparameters — get_tuning_grid","text":"hyperparams_list Named list lists hyperparameters. method ML method. Options: c(\"glmnet\", \"rf\", \"rpart2\", \"svmRadial\", \"xgbTree\"). glmnet: linear, logistic, multiclass regression rf: random forest rpart2: decision tree svmRadial: support vector machine xgbTree: xgboost","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_tuning_grid.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Generate the tuning grid for tuning hyperparameters — get_tuning_grid","text":"tuning grid.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_tuning_grid.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Generate the tuning grid for tuning hyperparameters — get_tuning_grid","text":"Begüm Topçuoğlu, topcuoglu.begum@gmail.com Kelly Sovacool, sovacool@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/get_tuning_grid.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Generate the tuning grid for tuning hyperparameters — get_tuning_grid","text":"","code":"ml_method <- \"glmnet\" hparams_list <- get_hyperparams_list(otu_small, ml_method) get_tuning_grid(hparams_list, ml_method) #> lambda alpha #> 1 1e-04 0 #> 2 1e-03 0 #> 3 1e-02 0 #> 4 1e-01 0 #> 5 1e+00 0 #> 6 1e+01 0"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/group_correlated_features.html","id":null,"dir":"Reference","previous_headings":"","what":"Group correlated features — group_correlated_features","title":"Group correlated features — group_correlated_features","text":"Group correlated features","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/group_correlated_features.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Group correlated features — group_correlated_features","text":"","code":"group_correlated_features( features, corr_thresh = 1, group_neg_corr = TRUE, corr_method = \"spearman\" )"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/group_correlated_features.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Group correlated features — group_correlated_features","text":"features dataframe column feature ML corr_thresh feature importance, group correlations equal corr_thresh (range 0 1; default: 1). group_neg_corr Whether group negatively correlated features together (e.g. c(0,1) c(1,0)). corr_method correlation method. options supported stats::cor: spearman, pearson, kendall. 
(default: spearman)","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/group_correlated_features.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Group correlated features — group_correlated_features","text":"vector element group correlated features separated pipes (|)","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/group_correlated_features.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Group correlated features — group_correlated_features","text":"Kelly Sovacool, sovacool@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/group_correlated_features.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Group correlated features — group_correlated_features","text":"","code":"features <- data.frame( a = 1:3, b = 2:4, c = c(1, 0, 1), d = (5:7), e = c(5, 1, 4), f = c(-1, 0, -1) ) group_correlated_features(features) #> [1] \"a|b|d\" \"c|f\" \"e\""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/mikropml.html","id":null,"dir":"Reference","previous_headings":"","what":"mikropml: User-Friendly R Package for Robust Machine Learning Pipelines — mikropml","title":"mikropml: User-Friendly R Package for Robust Machine Learning Pipelines — mikropml","text":"mikropml implements supervised machine learning pipelines using regression, support vector machines, decision trees, random forest, gradient-boosted trees. main functions preprocess_data() process data prior running machine learning, run_ml() run machine learning.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/mikropml.html","id":"authors","dir":"Reference","previous_headings":"","what":"Authors","title":"mikropml: User-Friendly R Package for Robust Machine Learning Pipelines — mikropml","text":"Begüm D. Topçuoğlu (ORCID) Zena Lapp (ORCID) Kelly L. Sovacool (ORCID) Evan Snitkin (ORCID) Jenna Wiens (ORCID) Patrick D. 
Schloss (ORCID)","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/mikropml.html","id":"see-vignettes","dir":"Reference","previous_headings":"","what":"See vignettes","title":"mikropml: User-Friendly R Package for Robust Machine Learning Pipelines — mikropml","text":"Introduction Preprocessing data Hyperparameter tuning Parallel processing mikropml paper","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_data_preproc.html","id":null,"dir":"Reference","previous_headings":"","what":"Mini OTU abundance dataset - preprocessed — otu_data_preproc","title":"Mini OTU abundance dataset - preprocessed — otu_data_preproc","text":"result running preprocess_data(\"otu_mini_bin\")","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_data_preproc.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Mini OTU abundance dataset - preprocessed — otu_data_preproc","text":"","code":"otu_data_preproc"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_data_preproc.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Mini OTU abundance dataset - preprocessed — otu_data_preproc","text":"object class list length 3.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin.html","id":null,"dir":"Reference","previous_headings":"","what":"Mini OTU abundance dataset — otu_mini_bin","title":"Mini OTU abundance dataset — otu_mini_bin","text":"dataset containing relatives abundances OTUs human stool samples binary outcome, dx. subset otu_small.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Mini OTU abundance dataset — otu_mini_bin","text":"","code":"otu_mini_bin"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Mini OTU abundance dataset — otu_mini_bin","text":"data frame dx column diagnosis: healthy cancerous (colorectal). 
{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_data_preproc.html","id":null,"dir":"Reference","previous_headings":"","what":"Mini OTU abundance dataset - preprocessed — otu_data_preproc","title":"Mini OTU abundance dataset - preprocessed — otu_data_preproc","text":"result running preprocess_data(\"otu_mini_bin\")","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_data_preproc.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Mini OTU abundance dataset - preprocessed — otu_data_preproc","text":"","code":"otu_data_preproc"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_data_preproc.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Mini OTU abundance dataset - preprocessed — otu_data_preproc","text":"object class list length 3.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin.html","id":null,"dir":"Reference","previous_headings":"","what":"Mini OTU abundance dataset — otu_mini_bin","title":"Mini OTU abundance dataset — otu_mini_bin","text":"dataset containing relative abundances OTUs human stool samples binary outcome, dx. subset otu_small.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Mini OTU abundance dataset — otu_mini_bin","text":"","code":"otu_mini_bin"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Mini OTU abundance dataset — otu_mini_bin","text":"data frame dx column diagnosis: healthy cancerous (colorectal). columns OTU relative abundances.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin_results_glmnet.html","id":null,"dir":"Reference","previous_headings":"","what":"Results from running the pipeline with L2 logistic regression on otu_mini_bin with feature importance and grouping — otu_mini_bin_results_glmnet","title":"Results from running the pipeline with L2 logistic regression on otu_mini_bin with feature importance and grouping — otu_mini_bin_results_glmnet","text":"Results running pipeline L2 logistic regression otu_mini_bin feature importance grouping","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin_results_glmnet.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Results from running the pipeline with L2 logistic regression on otu_mini_bin with feature importance and grouping — otu_mini_bin_results_glmnet","text":"","code":"otu_mini_bin_results_glmnet"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin_results_glmnet.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Results from running the pipeline with L2 logistic regression on otu_mini_bin with feature importance and grouping — otu_mini_bin_results_glmnet","text":"object class list length 4.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin_results_rf.html","id":null,"dir":"Reference","previous_headings":"","what":"Results from running the pipeline with random forest on otu_mini_bin — otu_mini_bin_results_rf","title":"Results from running the pipeline with random forest on otu_mini_bin — otu_mini_bin_results_rf","text":"Results running pipeline random forest otu_mini_bin","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin_results_rf.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Results from running the pipeline with random forest on otu_mini_bin — otu_mini_bin_results_rf","text":"","code":"otu_mini_bin_results_rf"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin_results_rf.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Results from running the pipeline with random forest on otu_mini_bin — otu_mini_bin_results_rf","text":"object class list length 4.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin_results_rpart2.html","id":null,"dir":"Reference","previous_headings":"","what":"Results from running the pipeline with rpart2 on otu_mini_bin — otu_mini_bin_results_rpart2","title":"Results from running the pipeline with rpart2 on otu_mini_bin — otu_mini_bin_results_rpart2","text":"Results running pipeline rpart2 otu_mini_bin","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin_results_rpart2.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Results from running the pipeline with rpart2 on otu_mini_bin — otu_mini_bin_results_rpart2","text":"","code":"otu_mini_bin_results_rpart2"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin_results_rpart2.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Results from running the pipeline with rpart2 on otu_mini_bin — otu_mini_bin_results_rpart2","text":"object class list length 
4.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin_results_svmRadial.html","id":null,"dir":"Reference","previous_headings":"","what":"Results from running the pipeline with svmRadial on otu_mini_bin — otu_mini_bin_results_svmRadial","title":"Results from running the pipeline with svmRadial on otu_mini_bin — otu_mini_bin_results_svmRadial","text":"Results running pipeline svmRadial otu_mini_bin","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin_results_svmRadial.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Results from running the pipeline with svmRadial on otu_mini_bin — otu_mini_bin_results_svmRadial","text":"","code":"otu_mini_bin_results_svmRadial"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin_results_svmRadial.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Results from running the pipeline with svmRadial on otu_mini_bin — otu_mini_bin_results_svmRadial","text":"object class list length 4.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin_results_xgbTree.html","id":null,"dir":"Reference","previous_headings":"","what":"Results from running the pipeline with xbgTree on otu_mini_bin — otu_mini_bin_results_xgbTree","title":"Results from running the pipeline with xbgTree on otu_mini_bin — otu_mini_bin_results_xgbTree","text":"Results running pipeline xbgTree otu_mini_bin","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin_results_xgbTree.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Results from running the pipeline with xbgTree on otu_mini_bin — otu_mini_bin_results_xgbTree","text":"","code":"otu_mini_bin_results_xgbTree"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_bin_results_xgbTree.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Results from running the pipeline with xbgTree on otu_mini_bin — otu_mini_bin_results_xgbTree","text":"object class list length 4.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_cont_results_glmnet.html","id":null,"dir":"Reference","previous_headings":"","what":"Results from running the pipeline with glmnet on otu_mini_bin with Otu00001\nas the outcome — otu_mini_cont_results_glmnet","title":"Results from running the pipeline with glmnet on otu_mini_bin with Otu00001\nas the outcome — otu_mini_cont_results_glmnet","text":"Results running pipeline glmnet otu_mini_bin Otu00001 outcome","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_cont_results_glmnet.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Results from running the pipeline with glmnet on otu_mini_bin with Otu00001\nas the outcome — otu_mini_cont_results_glmnet","text":"","code":"otu_mini_cont_results_glmnet"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_cont_results_glmnet.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Results from running the pipeline with glmnet on otu_mini_bin with Otu00001\nas the outcome — otu_mini_cont_results_glmnet","text":"object class list length 4.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_cont_results_nocv.html","id":null,"dir":"Reference","previous_headings":"","what":"Results from running the pipeline with glmnet on otu_mini_bin with 
Otu00001\nas the outcome column,\nusing a custom train control scheme that does not perform cross-validation — otu_mini_cont_results_nocv","title":"Results from running the pipeline with glmnet on otu_mini_bin with Otu00001\nas the outcome column,\nusing a custom train control scheme that does not perform cross-validation — otu_mini_cont_results_nocv","text":"Results running pipeline glmnet otu_mini_bin Otu00001 outcome column, using custom train control scheme does not perform cross-validation","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_cont_results_nocv.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Results from running the pipeline with glmnet on otu_mini_bin with Otu00001\nas the outcome column,\nusing a custom train control scheme that does not perform cross-validation — otu_mini_cont_results_nocv","text":"","code":"otu_mini_cont_results_nocv"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_cont_results_nocv.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Results from running the pipeline with glmnet on otu_mini_bin with Otu00001\nas the outcome column,\nusing a custom train control scheme that does not perform cross-validation — otu_mini_cont_results_nocv","text":"object class list length 4.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_cv.html","id":null,"dir":"Reference","previous_headings":"","what":"Cross validation on train_data_mini with grouped features. — otu_mini_cv","title":"Cross validation on train_data_mini with grouped features. — otu_mini_cv","text":"Cross validation train_data_mini grouped features.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_cv.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Cross validation on train_data_mini with grouped features. — otu_mini_cv","text":"","code":"otu_mini_cv"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_cv.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Cross validation on train_data_mini with grouped features. — otu_mini_cv","text":"object class list length 27.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_multi.html","id":null,"dir":"Reference","previous_headings":"","what":"Mini OTU abundance dataset with 3 categorical variables — otu_mini_multi","title":"Mini OTU abundance dataset with 3 categorical variables — otu_mini_multi","text":"dataset containing relative abundances OTUs human stool samples","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_multi.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Mini OTU abundance dataset with 3 categorical variables — otu_mini_multi","text":"","code":"otu_mini_multi"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_multi.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Mini OTU abundance dataset with 3 categorical variables — otu_mini_multi","text":"data frame dx column colorectal cancer diagnosis: adenoma, carcinoma, normal. 
columns OTU relative abundances.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_multi_group.html","id":null,"dir":"Reference","previous_headings":"","what":"Groups for otu_mini_multi — otu_mini_multi_group","title":"Groups for otu_mini_multi — otu_mini_multi_group","text":"Groups otu_mini_multi","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_multi_group.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Groups for otu_mini_multi — otu_mini_multi_group","text":"","code":"otu_mini_multi_group"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_multi_group.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Groups for otu_mini_multi — otu_mini_multi_group","text":"object class character length 490.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_multi_results_glmnet.html","id":null,"dir":"Reference","previous_headings":"","what":"Results from running the pipeline with glmnet on otu_mini_multi for\nmulticlass outcomes — otu_mini_multi_results_glmnet","title":"Results from running the pipeline with glmnet on otu_mini_multi for\nmulticlass outcomes — otu_mini_multi_results_glmnet","text":"Results running pipeline glmnet otu_mini_multi multiclass outcomes","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_multi_results_glmnet.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Results from running the pipeline with glmnet on otu_mini_multi for\nmulticlass outcomes — otu_mini_multi_results_glmnet","text":"","code":"otu_mini_multi_results_glmnet"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_mini_multi_results_glmnet.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Results from running the pipeline with glmnet on otu_mini_multi for\nmulticlass outcomes — otu_mini_multi_results_glmnet","text":"object class list length 4.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_small.html","id":null,"dir":"Reference","previous_headings":"","what":"Small OTU abundance dataset — otu_small","title":"Small OTU abundance dataset — otu_small","text":"dataset containing relative abundances 60 OTUs 60 human stool samples. subset data provided extdata/otu_large.csv, used Topçuoğlu et al. 2020.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_small.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Small OTU abundance dataset — otu_small","text":"","code":"otu_small"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/otu_small.html","id":"format","dir":"Reference","previous_headings":"","what":"Format","title":"Small OTU abundance dataset — otu_small","text":"data frame 60 rows 61 variables. dx column diagnosis: healthy cancerous (colorectal). 
columns OTU relative abundances.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/permute_p_value.html","id":null,"dir":"Reference","previous_headings":"","what":"Calculate a permuted p-value comparing two models — permute_p_value","title":"Calculate a permuted p-value comparing two models — permute_p_value","text":"Calculate permuted p-value comparing two models","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/permute_p_value.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Calculate a permuted p-value comparing two models — permute_p_value","text":"","code":"permute_p_value( merged_data, metric, group_name, group_1, group_2, nperm = 10000 )"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/permute_p_value.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Calculate a permuted p-value comparing two models — permute_p_value","text":"merged_data concatenated performance data run_ml metric metric compare, must numeric group_name column group variables compare group_1 name one group compare group_2 name group compare nperm number permutations, default=10000","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/permute_p_value.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Calculate a permuted p-value comparing two models — permute_p_value","text":"numeric p-value comparing two models","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/permute_p_value.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Calculate a permuted p-value comparing two models — permute_p_value","text":"Begüm Topçuoğlu, topcuoglu.begum@gmail.com Courtney R Armour, armourc@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/permute_p_value.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Calculate a permuted p-value comparing two models — permute_p_value","text":"","code":"df <- dplyr::tibble( model = c(\"rf\", \"rf\", \"glmnet\", \"glmnet\", \"svmRadial\", \"svmRadial\"), AUC = c(.2, 0.3, 0.8, 0.9, 0.85, 0.95) ) set.seed(123) permute_p_value(df, \"AUC\", \"model\", \"rf\", \"glmnet\", nperm = 100) #> [1] 0.3663366"},
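A hedged sketch of the permutation idea behind permute_p_value(), reusing df from the example above (this is not the package's exact internals; the +1 continuity correction in the final line is an assumption):

obs_diff <- abs(mean(df$AUC[df$model == "rf"]) - mean(df$AUC[df$model == "glmnet"]))
perm_diffs <- replicate(100, {
  shuffled <- sample(df$model)  # shuffle the model labels
  abs(mean(df$AUC[shuffled == "rf"]) - mean(df$AUC[shuffled == "glmnet"]))
})
(sum(perm_diffs >= obs_diff) + 1) / (100 + 1)  # permuted p-value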
calc_baseline_precision()","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/plot_curves.html","id":"functions","dir":"Reference","previous_headings":"","what":"Functions","title":"Plot ROC and PRC curves — plot_mean_roc","text":"plot_mean_roc(): Plot mean sensitivity specificity plot_mean_prc(): Plot mean precision recall","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/plot_curves.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"Plot ROC and PRC curves — plot_mean_roc","text":"Courtney Armour Kelly Sovacool sovacool@umich.edu","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/plot_curves.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Plot ROC and PRC curves — plot_mean_roc","text":"","code":"if (FALSE) { library(dplyr) # get performance for multiple models get_sensspec_seed <- function(seed) { ml_result <- run_ml(otu_mini_bin, \"glmnet\", seed = seed) sensspec <- calc_model_sensspec( ml_result$trained_model, ml_result$test_data, \"dx\" ) %>% mutate(seed = seed) return(sensspec) } sensspec_dat <- purrr::map_dfr(seq(100, 102), get_sensspec_seed) # plot ROC & PRC sensspec_dat %>% calc_mean_roc() %>% plot_mean_roc() baseline_prec <- calc_baseline_precision(otu_mini_bin, \"dx\", \"cancer\") sensspec_dat %>% calc_mean_prc() %>% plot_mean_prc(baseline_precision = baseline_prec) }"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/plot_hp_performance.html","id":null,"dir":"Reference","previous_headings":"","what":"Plot hyperparameter performance metrics — plot_hp_performance","title":"Plot hyperparameter performance metrics — plot_hp_performance","text":"Plot hyperparameter performance metrics","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/plot_hp_performance.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Plot hyperparameter performance metrics — plot_hp_performance","text":"","code":"plot_hp_performance(dat, param_col, metric_col)"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/plot_hp_performance.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Plot hyperparameter performance metrics — plot_hp_performance","text":"dat dataframe hyperparameters performance metric (e.g. get_hp_performance() combine_hp_performance()) param_col hyperparameter plotted. must column dat. metric_col performance metric. 
### plot_hp_performance: Plot hyperparameter performance metrics

(http://www.schlosslab.org/mikropml/dev/reference/plot_hp_performance.html)

Plot hyperparameter performance metrics.

Usage:

```r
plot_hp_performance(dat, param_col, metric_col)
```

Arguments:

* `dat`: dataframe of hyperparameters and the performance metric (e.g. from `get_hp_performance()` or `combine_hp_performance()`)
* `param_col`: the hyperparameter to be plotted; must be a column in `dat`
* `metric_col`: the performance metric; must be a column in `dat`

Value: a ggplot of hyperparameter performance.

Authors: Zena Lapp (zenalapp@umich.edu), Kelly Sovacool (sovacool@umich.edu)

Examples:

```r
# plot for a single `run_ml()` call
hp_metrics <- get_hp_performance(otu_mini_bin_results_glmnet$trained_model)
hp_metrics
#> $dat
#>   alpha lambda       AUC
#> 1     0  1e-04 0.6082552
#> 2     0  1e-03 0.6082552
#> 3     0  1e-02 0.6086458
#> 4     0  1e-01 0.6166789
#> 5     0  1e+00 0.6221737
#> 6     0  1e+01 0.6187408
#>
#> $params
#> [1] "lambda"
#>
#> $metric
#> [1] "AUC"
plot_hp_performance(hp_metrics$dat, lambda, AUC)

if (FALSE) {
  # plot for multiple `run_ml()` calls
  results <- lapply(seq(100, 102), function(seed) {
    run_ml(otu_small, "glmnet", seed = seed)
  })
  models <- lapply(results, function(x) x$trained_model)
  hp_metrics <- combine_hp_performance(models)
  plot_hp_performance(hp_metrics$dat, lambda, AUC)
}
```
### plot_model_performance: Plot performance metrics for multiple ML runs with different parameters

(http://www.schlosslab.org/mikropml/dev/reference/plot_model_performance.html)

ggplot2 is required to use this function.

Usage:

```r
plot_model_performance(performance_df)
```

Arguments:

* `performance_df`: dataframe of performance results from multiple calls to `run_ml()`

Value: a ggplot2 plot of performance.

Authors: Begüm Topçuoğlu (topcuoglu.begum@gmail.com), Kelly Sovacool (sovacool@umich.edu)

Examples:

```r
if (FALSE) {
  # call `run_ml()` multiple times with different seeds
  results_lst <- lapply(seq(100, 104), function(seed) {
    run_ml(otu_small, "glmnet", seed = seed)
  })
  # extract and combine the performance results
  perf_df <- lapply(results_lst, function(result) {
    result[["performance"]]
  }) %>%
    dplyr::bind_rows()
  # plot the performance results
  p <- plot_model_performance(perf_df)

  # call `run_ml()` with different ML methods
  param_grid <- expand.grid(
    seeds = seq(100, 104),
    methods = c("glmnet", "rf")
  )
  results_mtx <- mapply(
    function(seed, method) {
      run_ml(otu_mini_bin, method, seed = seed, kfold = 2)
    },
    param_grid$seeds, param_grid$methods
  )
  # extract and combine the performance results
  perf_df2 <- dplyr::bind_rows(results_mtx["performance", ])
  # plot the performance results
  p <- plot_model_performance(perf_df2)

  # you can continue adding layers to customize the plot
  p +
    theme_classic() +
    scale_color_brewer(palette = "Dark2") +
    coord_flip()
}
```
### preprocess_data: Preprocess data prior to running machine learning

(http://www.schlosslab.org/mikropml/dev/reference/preprocess_data.html)

Function to preprocess your data for input into `run_ml()`; a sketch of the typical pipeline follows this entry.

Usage:

```r
preprocess_data(
  dataset,
  outcome_colname,
  method = c("center", "scale"),
  remove_var = "nzv",
  collapse_corr_feats = TRUE,
  to_numeric = TRUE,
  group_neg_corr = TRUE,
  prefilter_threshold = 1
)
```

Arguments:

* `dataset`: data frame with an outcome variable and other columns as features
* `outcome_colname`: column name as a string of the outcome variable (default NULL; the first column is chosen automatically)
* `method`: methods used to preprocess the data, as described in `caret::preProcess()` (default: `c("center", "scale")`; use NULL for no normalization)
* `remove_var`: whether to remove variables with near-zero variance ('nzv'; default), zero variance ('zv'), or none (NULL)
* `collapse_corr_feats`: whether to keep only one of perfectly correlated features
* `to_numeric`: whether to change features to numeric where possible
* `group_neg_corr`: whether to group negatively correlated features together (e.g. `c(0, 1)` and `c(1, 0)`)
* `prefilter_threshold`: remove features which only have non-zero & non-NA values in N rows or fewer (default: 1). Set to -1 to keep all columns at this step. This step is also skipped if `to_numeric` is set to FALSE.

Value: named list including `dat_transformed` (the preprocessed data), `grp_feats` (if features were grouped together, a named list of the features and their corresponding group), and `removed_feats` (any features that were removed during preprocessing, e.g. zero-variance and near-zero-variance features). If the progressr package is installed, a progress bar with time elapsed and estimated time to completion can be displayed.

More details: see the preprocessing vignette. Note that if any values in `outcome_colname` contain spaces, they will be converted to underscores for compatibility with caret.

Authors: Zena Lapp (zenalapp@umich.edu), Kelly Sovacool (sovacool@umich.edu)

Examples:

```r
preprocess_data(mikropml::otu_small, "dx")
#> Using 'dx' as the outcome column.
#> $dat_transformed
#> # A tibble: 200 × 61
#>    dx     Otu00001 Otu00002 Otu00003 Otu00004 Otu00005 Otu00006 Otu00…¹ Otu00008
#>    <chr>     <dbl>    <dbl>    <dbl>    <dbl>    <dbl>    <dbl>   <dbl>    <dbl>
#>  1 normal   -0.420   -0.219   -0.174  -0.591   -0.0488  -0.167   -0.569  -0.0624
#>  2 normal   -0.105    1.75    -0.718   0.0381   1.54    -0.573   -0.643  -0.132
#>  3 normal   -0.708    0.696    1.43    0.604   -0.265   -0.0364  -0.612  -0.207
#>  4 normal   -0.494   -0.665    2.02   -0.593   -0.676   -0.586   -0.552  -0.470
#>  5 normal    1.11    -0.395   -0.754  -0.586   -0.754    2.73     0.191  -0.676
#>  6 normal   -0.685    0.614   -0.174  -0.584    0.376    0.804   -0.337  -0.00608
#>  7 cancer   -0.770   -0.496   -0.318   0.159   -0.658    2.20    -0.717   0.0636
#>  8 normal   -0.424   -0.478   -0.397  -0.556   -0.391   -0.0620   0.376  -0.0222
#>  9 normal   -0.556    1.14     1.62   -0.352   -0.275   -0.465   -0.804   0.294
#> 10 cancer    1.46    -0.451   -0.694  -0.0567  -0.706    0.689   -0.370   1.59
#> # … with 190 more rows, 52 more variables: Otu00009 <dbl>, Otu00010 <dbl>,
#> #   Otu00011 <dbl>, Otu00012 <dbl>, Otu00013 <dbl>, Otu00014 <dbl>,
#> #   Otu00015 <dbl>, Otu00016 <dbl>, Otu00017 <dbl>, Otu00018 <dbl>,
#> #   Otu00019 <dbl>, Otu00020 <dbl>, Otu00021 <dbl>, Otu00022 <dbl>,
#> #   Otu00023 <dbl>, Otu00024 <dbl>, Otu00025 <dbl>, Otu00026 <dbl>,
#> #   Otu00027 <dbl>, Otu00028 <dbl>, Otu00029 <dbl>, Otu00030 <dbl>,
#> #   Otu00031 <dbl>, Otu00032 <dbl>, Otu00033 <dbl>, Otu00034 <dbl>, …
#>
#> $grp_feats
#> NULL
#>
#> $removed_feats
#> character(0)

# the function can show a progress bar if you have the progressr package installed
## optionally, specify the progress bar format
progressr::handlers(progressr::handler_progress(
  format = ":message :bar :percent | elapsed: :elapsed | eta: :eta",
  clear = FALSE,
  show_after = 0
))
## tell progressor to always report progress
if (FALSE) {
  progressr::handlers(global = TRUE)
  ## run the function and watch the live progress updates
  dat_preproc <- preprocess_data(mikropml::otu_small, "dx")
}
```
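A minimal sketch of the typical two-step pipeline: preprocess first, then pass the transformed data on to `run_ml()`. This mirrors the flow described above using the package's built-in `otu_mini_bin` dataset; the object names `preproc` and `result` are illustrative.

```r
library(mikropml)
# step 1: preprocess (center/scale, drop near-zero-variance features, etc.)
preproc <- preprocess_data(otu_mini_bin, outcome_colname = "dx")
# step 2: train and evaluate a model on the transformed data
result <- run_ml(preproc$dat_transformed, "glmnet",
  outcome_colname = "dx", seed = 2019
)
result$performance
```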
### randomize_feature_order: Randomize feature order to eliminate any position-dependent effects

(http://www.schlosslab.org/mikropml/dev/reference/randomize_feature_order.html)

Randomize the feature order to eliminate any position-dependent effects.

Usage:

```r
randomize_feature_order(dataset, outcome_colname)
```

Arguments:

* `dataset`: data frame with an outcome variable and other columns as features
* `outcome_colname`: column name as a string of the outcome variable (default NULL; the first column is chosen automatically)

Value: dataset with the feature order randomized.

Authors: Nick Lesniak (nlesniak@umich.edu), Kelly Sovacool (sovacool@umich.edu)

Examples:

```r
dat <- data.frame(
  outcome = c("1", "2", "3"),
  a = 4:6,
  b = 7:9,
  c = 10:12,
  d = 13:15
)
randomize_feature_order(dat, "outcome")
#>   outcome  c b a  d
#> 1       1 10 7 4 13
#> 2       2 11 8 5 14
#> 3       3 12 9 6 15
```

### reexports: caret contr.ltfr, dplyr %>%, rlang :=, !!, .data

(http://www.schlosslab.org/mikropml/dev/reference/reexports.html)

These objects are imported from other packages. Follow the links to see their documentation: caret (`contr.ltfr`), dplyr (`%>%`), rlang (`:=`, `!!`, `.data`).
### remove_singleton_columns: Remove columns appearing in only threshold row(s) or fewer

(http://www.schlosslab.org/mikropml/dev/reference/remove_singleton_columns.html)

Removes columns which only have non-zero & non-NA values in `threshold` row(s) or fewer.

Usage:

```r
remove_singleton_columns(dat, threshold = 1)
```

Arguments:

* `dat`: dataframe
* `threshold`: number of rows. If a column only has non-zero & non-NA values in `threshold` row(s) or fewer, it will be removed.

Value: dataframe without singleton columns.

Authors: Kelly Sovacool (sovacool@umich.edu), Courtney Armour

Examples:

```r
remove_singleton_columns(data.frame(a = 1:3, b = c(0, 1, 0), c = 4:6))
#> $dat
#>   a c
#> 1 1 4
#> 2 2 5
#> 3 3 6
#>
#> $removed_feats
#> [1] "b"
remove_singleton_columns(data.frame(a = 1:3, b = c(0, 1, 0), c = 4:6), threshold = 0)
#> $dat
#>   a b c
#> 1 1 0 4
#> 2 2 1 5
#> 3 3 0 6
#>
#> $removed_feats
#> character(0)
remove_singleton_columns(data.frame(a = 1:3, b = c(0, 1, NA), c = 4:6))
#> $dat
#>   a c
#> 1 1 4
#> 2 2 5
#> 3 3 6
#>
#> $removed_feats
#> [1] "b"
remove_singleton_columns(data.frame(a = 1:3, b = c(1, 1, 1), c = 4:6))
#> $dat
#>   a b c
#> 1 1 1 4
#> 2 2 1 5
#> 3 3 1 6
#>
#> $removed_feats
#> character(0)
```
1"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/run_ml.html","id":null,"dir":"Reference","previous_headings":"","what":"Run the machine learning pipeline — run_ml","title":"Run the machine learning pipeline — run_ml","text":"function splits data set train & test set, trains machine learning (ML) models using k-fold cross-validation, evaluates best model held-test set, optionally calculates feature importance using framework outlined Topçuoğlu et al. 2020 (doi:10.1128/mBio.00434-20 ). Required inputs data frame (must contain outcome variable columns features) ML method. See vignette('introduction') details.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/reference/run_ml.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Run the machine learning pipeline — run_ml","text":"","code":"run_ml( dataset, method, outcome_colname = NULL, hyperparameters = NULL, find_feature_importance = FALSE, calculate_performance = TRUE, kfold = 5, cv_times = 100, cross_val = NULL, training_frac = 0.8, perf_metric_function = NULL, perf_metric_name = NULL, groups = NULL, group_partitions = NULL, corr_thresh = 1, seed = NA, ... )"},{"path":"http://www.schlosslab.org/mikropml/dev/reference/run_ml.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Run the machine learning pipeline — run_ml","text":"dataset Data frame outcome variable columns features. method ML method. Options: c(\"glmnet\", \"rf\", \"rpart2\", \"svmRadial\", \"xgbTree\"). glmnet: linear, logistic, multiclass regression rf: random forest rpart2: decision tree svmRadial: support vector machine xgbTree: xgboost outcome_colname Column name string outcome variable (default NULL; first column chosen automatically). hyperparameters Dataframe hyperparameters (default NULL; sensible defaults chosen automatically). find_feature_importance Run permutation importance (default: FALSE). TRUE recommended like identify features important predicting outcome, resource-intensive. calculate_performance Whether calculate performance metrics (default: TRUE). might choose skip perform cross-validation model training. kfold Fold number k-fold cross-validation (default: 5). cv_times Number cross-validation partitions create (default: 100). cross_val custom cross-validation scheme caret::trainControl() (default: NULL, uses kfold cross validation repeated cv_times). kfold cv_times ignored user provides custom cross-validation scheme. See caret::trainControl() docs information use . training_frac Fraction data training set (default: 0.8). Rows dataset randomly selected training set, remaining rows used testing set. Alternatively, provide vector integers, used row indices training set. remaining rows used testing set. perf_metric_function Function calculate performance metric used cross-validation test performance. functions provided caret (see caret::defaultSummary()). Defaults: binary classification = twoClassSummary, multi-class classification = multiClassSummary, regression = defaultSummary. perf_metric_name column name output function provided perf_metric_function used performance metric. Defaults: binary classification = \"ROC\", multi-class classification = \"logLoss\", regression = \"RMSE\". groups Vector groups keep together splitting data train test sets. number groups training set larger kfold, groups also kept together cross-validation. Length matches number rows dataset (default: NULL). 
### run_ml: Run the machine learning pipeline

(http://www.schlosslab.org/mikropml/dev/reference/run_ml.html)

This function splits the data set into a train & test set, trains machine learning (ML) models using k-fold cross-validation, evaluates the best model on the held-out test set, and optionally calculates feature importance using the framework outlined in Topçuoğlu et al. 2020 (doi:10.1128/mBio.00434-20). Required inputs are a data frame (which must contain an outcome variable with all other columns as features) and the ML method. See `vignette('introduction')` for more details.

Usage:

```r
run_ml(
  dataset,
  method,
  outcome_colname = NULL,
  hyperparameters = NULL,
  find_feature_importance = FALSE,
  calculate_performance = TRUE,
  kfold = 5,
  cv_times = 100,
  cross_val = NULL,
  training_frac = 0.8,
  perf_metric_function = NULL,
  perf_metric_name = NULL,
  groups = NULL,
  group_partitions = NULL,
  corr_thresh = 1,
  seed = NA,
  ...
)
```

Arguments:

* `dataset`: data frame with an outcome variable and other columns as features
* `method`: ML method. Options: `c("glmnet", "rf", "rpart2", "svmRadial", "xgbTree")`. glmnet: linear, logistic, or multiclass regression; rf: random forest; rpart2: decision tree; svmRadial: support vector machine; xgbTree: xgboost
* `outcome_colname`: column name as a string of the outcome variable (default NULL; the first column is chosen automatically)
* `hyperparameters`: dataframe of hyperparameters (default NULL; sensible defaults are chosen automatically)
* `find_feature_importance`: run permutation importance (default: FALSE). TRUE is recommended if you would like to identify features that are important for predicting your outcome, but it is resource-intensive.
* `calculate_performance`: whether to calculate performance metrics (default: TRUE). You might choose to skip this if you do not perform cross-validation during model training.
* `kfold`: fold number for k-fold cross-validation (default: 5)
* `cv_times`: number of cross-validation partitions to create (default: 100)
* `cross_val`: a custom cross-validation scheme from `caret::trainControl()` (default: NULL, which uses kfold cross-validation repeated cv_times). `kfold` and `cv_times` are ignored if the user provides a custom cross-validation scheme; see the `caret::trainControl()` docs for information on how to use it.
* `training_frac`: fraction of the data for the training set (default: 0.8). Rows from the dataset will be randomly selected for the training set, and all remaining rows will be used in the testing set. Alternatively, if you provide a vector of integers, these will be used as the row indices for the training set and all remaining rows will be used in the testing set (see the sketch after this entry).
* `perf_metric_function`: function to calculate the performance metric to be used for cross-validation and test performance. Some functions are provided by caret (see `caret::defaultSummary()`). Defaults: binary classification = `twoClassSummary`, multi-class classification = `multiClassSummary`, regression = `defaultSummary`.
* `perf_metric_name`: the column name from the output of the function provided to `perf_metric_function` that is to be used as the performance metric. Defaults: binary classification = "ROC", multi-class classification = "logLoss", regression = "RMSE".
* `groups`: vector of groups to keep together when splitting the data into train and test sets. If the number of groups in the training set is larger than `kfold`, the groups will also be kept together for cross-validation. Its length matches the number of rows in the dataset (default: NULL).
* `group_partitions`: specify how to assign groups to the training and testing partitions (default: NULL). If `groups` specifies that some samples belong to group "A" and some belong to group "B", then setting `group_partitions = list(train = c("A", "B"), test = c("B"))` will result in all samples from group "A" being placed in the training set, some samples from "B" also in the training set, and the remaining samples from "B" in the testing set. The partition sizes will be as close to `training_frac` as possible. If the number of groups in the training set is larger than `kfold`, the groups will also be kept together for cross-validation.
* `corr_thresh`: for feature importance, group correlations above or equal to `corr_thresh` (range 0 to 1; default: 1)
* `seed`: random seed (default: NA). Your results will only be reproducible if you set a seed.
* `...`: all additional arguments are passed on to `caret::train()`, such as case weights via the `weights` argument or `ntree` for rf models. See the `caret::train()` docs for more details.

Value: named list with results:

* `trained_model`: output of `caret::train()`, including the best model.
* `test_data`: the part of the data that was used for testing.
* `performance`: a data frame of performance metrics. The first column is the cross-validation performance metric, and the last two columns are the ML method used and the seed (if one was set), respectively. All other columns are performance metrics calculated on the test data. This contains only one row, so you can easily combine performance data frames from multiple calls to `run_ml()` (see `vignette("parallel")`).
* `feature_importance`: if feature importances were calculated, a data frame where each row is a feature or correlated group. The columns are the performance metric of the permuted data, the difference between the true performance metric and the performance metric of the permuted data (true - permuted), the feature name, the ML method, the performance metric name, and the seed (if provided). For AUC and RMSE, the higher the perf_metric_diff is, the more important that feature is for predicting the outcome. For log loss, the lower the perf_metric_diff is, the more important that feature is for predicting the outcome.

More details: for more details, please see the vignettes.

Authors: Begüm Topçuoğlu (topcuoglu.begum@gmail.com), Zena Lapp (zenalapp@umich.edu), Kelly Sovacool (sovacool@umich.edu)

Examples:

```r
if (FALSE) {
  # regression
  run_ml(otu_small, "glmnet",
    seed = 2019
  )

  # random forest w/ feature importance
  run_ml(otu_small, "rf",
    outcome_colname = "dx",
    find_feature_importance = TRUE
  )

  # custom cross validation & hyperparameters
  run_ml(otu_mini_bin[, 2:11],
    "glmnet",
    outcome_colname = "Otu00001",
    seed = 2019,
    hyperparameters = list(lambda = c(1e-04), alpha = 0),
    cross_val = caret::trainControl(method = "none"),
    calculate_performance = FALSE
  )
}
```
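A short sketch of the alternative `training_frac` usage described above: passing a vector of row indices instead of a fraction, e.g. to reproduce a custom train/test split. The variable names `train_rows` and `result` are illustrative.

```r
library(mikropml)
n <- nrow(otu_mini_bin)
set.seed(2019)
# pick ~80% of the rows ourselves; run_ml() will use exactly these for training
train_rows <- sample(n, size = round(0.8 * n))
result <- run_ml(otu_mini_bin, "glmnet",
  outcome_colname = "dx",
  training_frac = train_rows,
  seed = 2019
)
```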
### calc_model_sensspec, calc_mean_roc, calc_mean_prc: Calculate and summarize performance for ROC and PRC plots

(http://www.schlosslab.org/mikropml/dev/reference/sensspec.html)

Use these functions to calculate cumulative sensitivity, specificity, recall, etc. on single models, concatenate the results together from multiple models, and compute the mean ROC and PRC. You can then plot the mean ROC and PRC curves to visualize the results. Note: these functions assume a binary outcome.

Usage:

```r
calc_model_sensspec(trained_model, test_data, outcome_colname = NULL)

calc_mean_roc(sensspec_dat)

calc_mean_prc(sensspec_dat)
```

Arguments:

* `trained_model`: trained model from `caret::train()`
* `test_data`: held-out test data: a dataframe of the outcome and features
* `outcome_colname`: column name as a string of the outcome variable (default NULL; the first column is chosen automatically)
* `sensspec_dat`: data frame created by concatenating results of `calc_model_sensspec()` from multiple models

Value: a data frame of summarized performance.

Functions:

* `calc_model_sensspec()`: get sensitivity, specificity, and precision for a model
* `calc_mean_roc()`: calculate mean sensitivity over specificity for multiple models
* `calc_mean_prc()`: calculate mean precision over recall for multiple models

Authors: Courtney Armour, Kelly Sovacool (sovacool@umich.edu)

Examples:

```r
if (FALSE) {
  library(dplyr)
  # get cumulative performance for a single model
  sensspec_1 <- calc_model_sensspec(
    otu_mini_bin_results_glmnet$trained_model,
    otu_mini_bin_results_glmnet$test_data,
    "dx"
  )
  head(sensspec_1)

  # get performance for multiple models
  get_sensspec_seed <- function(seed) {
    ml_result <- run_ml(otu_mini_bin, "glmnet", seed = seed)
    sensspec <- calc_model_sensspec(
      ml_result$trained_model,
      ml_result$test_data,
      "dx"
    ) %>%
      mutate(seed = seed)
    return(sensspec)
  }
  sensspec_dat <- purrr::map_dfr(seq(100, 102), get_sensspec_seed)

  # calculate mean sensitivity over specificity
  roc_dat <- calc_mean_roc(sensspec_dat)
  head(roc_dat)

  # calculate mean precision over recall
  prc_dat <- calc_mean_prc(sensspec_dat)
  head(prc_dat)

  # plot ROC & PRC
  roc_dat %>% plot_mean_roc()
  baseline_prec <- calc_baseline_precision(otu_mini_bin, "dx", "cancer")
  prc_dat %>% plot_mean_prc(baseline_precision = baseline_prec)
}
```

### shared_ggprotos: Get plot layers shared by plot_mean_roc and plot_mean_prc

(http://www.schlosslab.org/mikropml/dev/reference/shared_ggprotos.html)

Get the plot layers shared by `plot_mean_roc` and `plot_mean_prc`.

Usage:

```r
shared_ggprotos(ribbon_fill = "#D9D9D9", line_color = "#000000")
```

Arguments:

* `ribbon_fill`: ribbon fill color (default: "#D9D9D9")
* `line_color`: line color (default: "#000000")

Value: a list of ggproto objects to add to a ggplot (the idiom is sketched below).

Author: Kelly Sovacool (sovacool@umich.edu)
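The pattern `shared_ggprotos()` relies on is that ggplot2 lets you add a whole list of layers in one step. Here is a self-contained sketch of that idiom with hypothetical layers (`my_layers` is not part of the package):

```r
library(ggplot2)

# returning a list of layers lets several plots share the same styling
my_layers <- function(line_color = "#000000") {
  list(
    geom_line(color = line_color),
    theme_bw(),
    coord_equal()
  )
}

dat <- data.frame(
  specificity = seq(0, 1, 0.1),
  mean_sensitivity = seq(0, 1, 0.1)
)
# adding the list applies every layer at once
ggplot(dat, aes(x = 1 - specificity, y = mean_sensitivity)) +
  my_layers("#08306B")
```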
### tidy_perf_data: Tidy the performance dataframe

(http://www.schlosslab.org/mikropml/dev/reference/tidy_perf_data.html)

Used by `plot_model_performance()`.

Usage:

```r
tidy_perf_data(performance_df)
```

Arguments:

* `performance_df`: dataframe of performance results from multiple calls to `run_ml()`

Value: a tidy dataframe with model performance metrics.

Authors: Begüm Topçuoğlu (topcuoglu.begum@gmail.com), Kelly Sovacool (sovacool@umich.edu)

Examples:

```r
if (FALSE) {
  # call `run_ml()` multiple times with different seeds
  results_lst <- lapply(seq(100, 104), function(seed) {
    run_ml(otu_small, "glmnet", seed = seed)
  })
  # extract and combine the performance results
  perf_df <- lapply(results_lst, function(result) {
    result[["performance"]]
  }) %>%
    dplyr::bind_rows()
  # make it pretty!
  tidy_perf_data(perf_df)
}
```
### train_model: Train model using caret::train()

(http://www.schlosslab.org/mikropml/dev/reference/train_model.html)

Train a model using `caret::train()`.

Usage:

```r
train_model(
  train_data,
  outcome_colname,
  method,
  cv,
  perf_metric_name,
  tune_grid,
  ...
)
```

Arguments:

* `train_data`: training data; expected to be a subset of the full dataset
* `outcome_colname`: column name as a string of the outcome variable (default NULL; the first column is chosen automatically)
* `method`: ML method. Options: `c("glmnet", "rf", "rpart2", "svmRadial", "xgbTree")`. glmnet: linear, logistic, or multiclass regression; rf: random forest; rpart2: decision tree; svmRadial: support vector machine; xgbTree: xgboost
* `cv`: cross-validation caret scheme from `define_cv()`
* `perf_metric_name`: the column name from the output of the function provided to `perf_metric_function` that is to be used as the performance metric. Defaults: binary classification = "ROC", multi-class classification = "logLoss", regression = "RMSE".
* `tune_grid`: tuning grid from `get_tuning_grid()`
* `...`: all additional arguments are passed on to `caret::train()`, such as case weights via the `weights` argument or `ntree` for rf models. See the `caret::train()` docs for more details.

Value: trained model from `caret::train()`.

Author: Zena Lapp (zenalapp@umich.edu)

Examples:

```r
if (FALSE) {
  training_data <- otu_mini_bin_results_glmnet$trained_model$trainingData %>%
    dplyr::rename(dx = .outcome)
  method <- "rf"
  hyperparameters <- get_hyperparams_list(otu_mini_bin, method)
  cross_val <- define_cv(training_data,
    "dx",
    hyperparameters,
    perf_metric_function = caret::multiClassSummary,
    class_probs = TRUE,
    cv_times = 2
  )
  tune_grid <- get_tuning_grid(hyperparameters, method)

  rf_model <- train_model(
    training_data,
    "dx",
    method,
    cross_val,
    "AUC",
    tune_grid,
    ntree = 1000
  )
  rf_model$results %>% dplyr::select(mtry, AUC, prAUC)
}
```

## Changelog

(http://www.schlosslab.org/mikropml/dev/news/index.html)

### mikropml 1.5.0

CRAN release: 2023-01-16

* New example showing how to plot feature importances in the parallel vignette (#310, @kelly-sovacool).
* You can now use parRF, a parallel implementation of the rf method, with the default hyperparameters for rf set automatically (#306, @kelly-sovacool); a sketch of a typical parallel setup follows this entry.
* New functions:
  * `calc_model_sensspec()`: calculate sensitivity, specificity, and precision for a model.
  * `calc_mean_roc()` & `plot_mean_roc()`: calculate & plot specificity and mean sensitivity for multiple models.
  * `calc_mean_prc()` & `plot_mean_prc()`: calculate & plot recall and mean precision for multiple models.
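A minimal sketch of running mikropml with a parallel backend, in the spirit of `vignette("parallel")`; it assumes the future and doFuture packages are installed, and the worker count is illustrative.

```r
# register a foreach backend so caret can train in parallel
doFuture::registerDoFuture()
future::plan(future::multisession, workers = 2)

result <- mikropml::run_ml(mikropml::otu_mini_bin, "glmnet", seed = 2019)
```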
calc_mean_roc() & plot_mean_roc() - calculate & plot specificity mean sensitivity multiple models. calc_mean_prc() & plot_mean_prc() - calculate & plot recall mean precision multiple models.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/news/index.html","id":"mikropml-140","dir":"Changelog","previous_headings":"","what":"mikropml 1.4.0","title":"mikropml 1.4.0","text":"CRAN release: 2022-10-16 Users can now pass model-specific arguments (e.g. weights) caret::train(), allowing greater flexibility. Improved tests (#298, #300, #303 #kelly-sovacool) Minor documentation improvements.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/news/index.html","id":"mikropml-130","dir":"Changelog","previous_headings":"","what":"mikropml 1.3.0","title":"mikropml 1.3.0","text":"CRAN release: 2022-05-20 mikropml now requires R version 4.1.0 greater due update randomForest package (#292). New function compare_models() compares performance two models permutation test (#295, @courtneyarmour). Fixed bug cv_times affect reported repeats cross-validation (#291, @kelly-sovacool). Made minor documentation improvements (#293, @kelly-sovacool)","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/news/index.html","id":"mikropml-122","dir":"Changelog","previous_headings":"","what":"mikropml 1.2.2","title":"mikropml 1.2.2","text":"CRAN release: 2022-02-03 minor patch fixes test failure platforms long doubles. actual package code remains unchanged.","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/news/index.html","id":"mikropml-121","dir":"Changelog","previous_headings":"","what":"mikropml 1.2.1","title":"mikropml 1.2.1","text":"CRAN release: 2022-01-30 using groups parameter, groups kept together cross-validation partitions kfold <= number groups training set. Previously, error thrown condition met. Now, enough groups training set groups kept together CV, groups allowed split across CV partitions. Report p-values permutation feature importance (#288, @kelly-sovacool).","code":""},{"path":"http://www.schlosslab.org/mikropml/dev/news/index.html","id":"mikropml-120","dir":"Changelog","previous_headings":"","what":"mikropml 1.2.0","title":"mikropml 1.2.0","text":"CRAN release: 2021-11-10 Also added new parameter calculate_performance, controls whether performance metrics calculated (default: TRUE). Users may wish skip performance calculations training models cross-validation. New parameter group_partitions added run_ml() allows users control groups go partition train/test split (#281, @kelly-sovacool). default, training_frac fraction 0 1 specifies much dataset used training fraction train/test split. Users can instead give training_frac vector indices correspond rows dataset go training fraction train/test split. 
### mikropml 1.2.0

CRAN release: 2021-11-10

* New parameter `calculate_performance`, which controls whether performance metrics are calculated (default: TRUE). Users may wish to skip performance calculations when training models with no cross-validation.
* New parameter `group_partitions` added to `run_ml()` allows users to control which groups should go to which partition of the train/test split (#281, @kelly-sovacool); see the sketch at the end of this changelog.
* By default, `training_frac` is a fraction between 0 and 1 that specifies how much of the dataset should be used in the training fraction of the train/test split. Users can instead give `training_frac` a vector of indices that correspond to which rows of the dataset should go in the training fraction of the train/test split. This gives users more direct control over exactly which observations are in the training fraction if desired.

### mikropml 1.1.1

CRAN release: 2021-09-14

* `group_correlated_features()` is now a user-facing function.

### mikropml 1.1.0

CRAN release: 2021-08-10

* The default correlation method is still "spearman", but now you can use other methods supported by `stats::cor` with the `corr_method` parameter: `get_feature_importance(corr_method = "pearson")`.
* There are now video tutorials covering mikropml and skills related to machine learning, created by @pschloss (#270).
* Fixed a bug where `preprocess_data()` converted the outcome column to a character vector (#273, @kelly-sovacool, @ecmaggioncalda).

### mikropml 1.0.0

CRAN release: 2021-05-13

* mikropml now has a logo created by @NLesniak!
* Made documentation improvements (#238, #231 @kelly-sovacool; #256 @BTopcuoglu).
* Remove features which appear in N = `prefilter_threshold` or fewer rows of the data. Created the function `remove_singleton_columns()`, called by `preprocess_data()`, to carry this out.
* Provide custom groups of features to permute together during permutation importance. `groups` is NULL by default; in this case, correlated features above `corr_thresh` are grouped together.
* `preprocess_data()` now replaces spaces in the outcome column with underscores (#247, @kelly-sovacool, @JonnyTran).
* Clarified in the intro vignette that multi-label outcomes are not supported (#254, @zenalapp).
* Optional progress bar for `preprocess_data()` and `get_feature_importance()` using the progressr package (#257, @kelly-sovacool, @JonnyTran, @FedericoComoglio).
* The mikropml paper will soon be published in JOSS!

### mikropml 0.0.2

CRAN release: 2020-12-03

* Fixed a test failure on Solaris.
* Fixed multiple test failures with R 3.6.2 due to `stringsAsFactors` behavior.
* Made minor documentation improvements.
* Moved rpart from Suggests to Imports for consistency with other packages used during model training.

### mikropml 0.0.1

CRAN release: 2020-11-23

* This is the first release version of mikropml! 🎉
* Added a NEWS.md file to track changes to the package.
* Functions: `run_ml()`, `preprocess_data()`, `plot_model_performance()`, `plot_hp_performance()`.
* Models: glmnet (logistic and linear regression), rf (random forest), rpart2 (decision trees), svmRadial (support vector machines), xgbTree (gradient-boosted trees).
* Vignettes: Introduction; Preprocessing data; Hyperparameter tuning; Parallel processing; The mikropml paper.
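Finally, the `group_partitions` sketch referenced from the 1.2.0 entry: keeping sample groups together across the train/test split, using the semantics documented in the `run_ml()` reference above. `sample_groups` is a hypothetical vector with one group label per row of the dataset.

```r
library(mikropml)
# hypothetical grouping: alternate rows between groups "A" and "B"
sample_groups <- rep(c("A", "B"), length.out = nrow(otu_mini_bin))
result <- run_ml(otu_mini_bin, "glmnet",
  outcome_colname = "dx",
  groups = sample_groups,
  # all of group "A" goes to training; group "B" is split between train and test
  group_partitions = list(train = c("A", "B"), test = c("B")),
  seed = 2019
)
```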