From fe645ef262a5fdda980998b6ce067b906ae125d1 Mon Sep 17 00:00:00 2001
From: Christian Opperman
Date: Fri, 27 Oct 2023 09:55:24 -0400
Subject: [PATCH 01/52] Project proposal submission

---
 TeamCAHJ_ProjectProposal.pdf | Bin 0 -> 111123 bytes
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 TeamCAHJ_ProjectProposal.pdf

diff --git a/TeamCAHJ_ProjectProposal.pdf b/TeamCAHJ_ProjectProposal.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2dada5861ea1fa629f87a2ea2b9a2ff5bdf6524f
GIT binary patch
literal 111123

[base85-encoded binary PDF data omitted]
z{5TKxY@(N441*+HvdkL0-|uQjmJ%xe7nhICvEwlKPR7*ngY~tiQwZ+Y3yHz8HD_`4 z3$zR0mHSa{XUg86z{8tdo$sSr!pa>4TH=Wj`s}n!_WbM(qSFk)yo~5u)9d9L$Sz$A z4Hx}!TktVC`<=!qE-r>~klxdcjJuBf(Y_DcAct7))M&t%R3M0HBt&W)qcDT5Tjd}4 zXHZ#kg9R&GkQ=o*m!&nM4~!c)3YrcnWtVV3)oep%~dR&x!!ISYkx{ax7~kC zwJYo5@Z0ue>hs-)U;aU$FQM~$0?g4~4ZdaWl64)I>N1cDKOaT8AQ-ILlZ1X^S@-U{F z>-a;Wg`J$|S=w_nQKLZqGFt4zmPKu<-Azj2xA}2>0eXy=cQ{Nc+Qw<_W1`McHd&TP z%M9~llipp_N>{V*N#Z@fXF_kQIEL;C_jMr)xp`Fc=@6zorY%Wa(^00@Yqb6~J$dc^ zW+4iK?`k!6P{Lvp&@WoC#NJ`l-LUj1T;ymafSN=}&(P!aCMO_E)Oi!z}n~><2 zPK7*rE${oQ>$$N9k5QDIV|17BlPPW0{B5W2>1!q>TUy+FJicp!Q-8q5_$+%vnUU2Y z?DPdJ@|R5Qoo+JDJrPUryVsIC{7si+n7f_C&XP}l3}K?EPj|UF4~<+_dfQESQbA=5 zbn=zkj$;gM`T081+XqhEmyM(EB20%262_ur@^7jwJ||5)yuPrvBz@RTQ{0v)@>RF6 z!F#%qwM-vvU!2P%j`0j#G$svPq;{dx3RLj7pP%QV{*+zDT(oI4(UFSbKsFzHNyTf&`u62yHlRa#i>pr;r=yR6w z%7m20Y=lPNo1uc1729zLp_&LrB6&V8-c zL*1`6zCD1yPEVHhTF|=9)E2OnF<(EqU%dL6oZDpK$9v+|zP3gq@(!2#lG9`YQlCYLw~F8PyESvhswvXgyw7p7jDX7<)#id zXUQ>QeM!q<0~CxV^?WKq_NJBlad3g~0thI>eDTg`EQS=-E*M~#z#VaZIx(jR9q3#z|9}Vg5R#Y6C;H! 
zGFIYk^~{C2bD)CqZ9>9+8)l~$%ph|HTh71f`x{a3z16xoD4cH#o(J$Xx4pEEPD(dD zWPL;lH9!5fC-7r0eNNPyT#aqxBRh_=4Z8ZtJIIMjQ^b$S9kcqm$R+9e)C>!TOT*(Y z?*;bA&Az~{&u99U)sWJ5pG(8vn$7;US1d}MIS`0`>aVe2w}CbDQtu2&C}lm?oz48B zCgF729AsC8L4m)9*Xg?4+R0)!by6W7R!`{a*<3NYn`Zy3Z^8QRq^4t9S`~gRH}a|U z;6xO2>2hmQzskaS&Q-n4`=W|#gYeC*aRZ5S>j@ZGdN}RZ=1H9@^V@o-w_-iT$JNKr zM@r7u{NbVFemhxNI;)o&I*Y5u#H#ZpSF8qm>$p_n5)S8f`}GDtx2LD1sLDWj6o?5c zg2i72^?H27`)~o{x82%5Fb6n;UTcU|%gDe+FEs-SRvh zd)S-}t(U)ZmI6pe1A7` z;#@*Xb+7n?7W-7zaea_IDff8kUGJ>fzvdT-pR-mlJoXi$h$*IwWy8kP(tRSx7qROe z=V@*L+kF@PE^>iTuZ_oL^aT@>!$fC=wR-#A+x+6bv&^Etk8k@HeV3(@lV4&Qx0br} z=&VP6b0ZrU>1w?5EHWTHkd>61{GyeY+UrD5alLD9yfBE4P+(zcPcax|BdplBqek2) ztbmXct=so{E@ar5^DLZtVNyWT^`}Q$r`u-ms^=^E{m{809@OD)p0g zqoRn(M%znR!P(n@vDH>6st(G1Bc#Ben{0H*Ao(X*_~J08LHKtF_FA^ChD(z{ftO;$ zGwtQZ8qIS=a?&FNQXwISbleM-gT!Mp?%uMAd#_HGN6Z^~mtGXGoE`LuLfC`f;Pq#6 ziH^rX5iatX+z2a0{abuyW$-kt8b2r1GHz7G>gmQ4WyP1l9gY-ok;XE^Nq>JrqWT7U zf_;Fn-iAKtH-gr<*Ny=e$D!T+ zN-k#G?)M(sSM&0xS<%P39P?3r2HUmz^U`lH$<-3&ge?9SjS+F<=KPVC?$@GAyFC2_ zN7|L`U7Meq*RXLo75Yd__I^IW7l~ngW#&G#LcG-~m)8C+M8gdY{AF^ilF#@2|zJqo}rwQ}DO9TU$s~4K+f) zC2CJPNxv3pZP8BAMew!M7`u5Grsam60+%G!j~BG!@Ya2*O$oGca&n%UDAY39tM+L2 zQ>4H{$kJJ-VPV7LlpweahWY}Km^th6uAxe=0x68-%!CQ{A~5r4k$CNd`9V!I6+uly z#4v;nX592hp4V^=v)!6af3^fLv=gKNR#Xgo@OHz2UBQ>gp~ zZa#$&c&V%^Se~hwfrTwpFT8MBWUHTJW#3LMs`jybttrp=l?6Ce|8!B$Tl_Z1mLa(B zdaY?x+ox$D*{AkOqP448`P4AVD`957QJJ*<5N6bQGb_u6_UbEh-|4|a=BBrh|D=wi zf6k6N*XAepr5$&7j=ibd-b8R%=XdW7T5Bg6;2jkDC_qO$yAjcrwm~+)>A1|~-Uiw#Fi*F0cLe-38 zbxXNy)m5|Y8cw~I*>B*@GQU(sw5I6Z{dQ~_cUR-S5(pUC6J%&+QuUjzk4+d{aVhs# znxzz6A4|Sw>ydS%_`5Jrz2v@i(=KI@2{70g%>r@As~|ZJm@N;G!JwglQBHONpN1Rf zS_n%e)k2_*NG9Sk8;(FV;4$=DjUW;LI5?O%SvXi&nIrln62h98QwiROiTc+lJ|8Sc z_D4OB5p93B$>cS#_+Re914H^-_;Wpwt}$LcS53zWX)U)U?_P4!#i$-%B;;Uh9#!y% znoEvto=;6o1E#gsU)9lZ+l8dpZEwxjV(M-pSSfR3p=Cg&ZBklnFHHj+ibeb0-;?x+o$pPZ*o7 zjhpy**ljLGN^@;Z!LIJzozb~!zV!B8n&D3Ca3(7+GjVotGBvdQ2ihB1!NV}K0En1~ z{(;&=Oxi^3Oq{wzM6B9GEG!&E?3`R*6=uK}nU(b~`3teJ|E009X%l_@4PS`;i^j_Q 
zmj>Ycl4Jg>&h^FpmmD+GmrXXnUn0w2Nw$A1vT(9|!GF|Q032VC3-Bet%EtB8EGyGr z$ie&<{)ZeRV+H(GXZeTsuQNOQm#hDyF#S{e-+D|$9L&uBSmR`2`MUn={8#EP`j@4D z>HLes!NK{D^ndGdv9Wzo{+s?^E&Of!ukHW5va+)NL;FkrYx{5c%l~@?e2w5Q{Xfg! zQT^rrm)>8@`EQ*s{6Bs4cNqWNa=g5ZVwN^8rcR7vHij;yBBsXnCZ>!srgr8o7DOxn zPJVuPnE$$CJ+gEa*X%-wkbDnmJ=Xbc6x3}wnJLO`Q3<^yH~a$nEF@`=1|UDXbSi99 zz(%;75m+bU+dOT+IMX_;CAFD6wu3Cw_6w-D3B(u_vJ8gGR^kPY=cz?1T6N7ql}RBY zB0-W6akFCduK)CzAi9TlSVCk(ArtbSF7CJ~p5Rg-L5P>qF$eeZ-_4dS*@4J(oG&xs z9-8rKP3P{k(ey+NJ5qg@f~Y^CI+BYHh#aXx!JKwWh5~DkpzK{hyeps6^Q}9fk}8vUHx?#|&8&G6RrLO4;rQVaarY>6~N8CMriUd@|s8rbo4%D<8rdbqm=Lz_w$ zv2**Gu|B;H^w2aTYQ4rRK)yw@M8;r1Sd=){y8Z#&7oPIx|NV5c{=HNF|E^~HdX`xK z?RRD(E&za)<$t>Vj1SgBWpKgV?4t8k&Q55=TKK^(g{}`I5Nsq^!Y&6284Ae=162|# z@Ct?LNlfP#R28EYy{Hy=rP=@uC!zyrgTl4qOt`AGwtn91ODXWGR<6mTfU`Si$l}NO zr|c+Ob`Y4*?}-@MmzatLcTs0+ISC`r0`GSp8Rl8MbEX>lO?|K$X+&cRMQIwK~@= z0eiUjEq?U2j|(T6A)l8T?D)B!diEofJ%!qkR! z1%w1^@Z5;N%>eQ646`PU0!P#;0c`9W0N7YK>`#WYD?E~w`48IV)Wr9KCo2bzQC1n^ zY?^ols*tDy48*JSJ_YIm6nZ?tm=73+V0)l8+(kz`D$KqMiLTs3A)X0gUV=(-Zob56 zK^ubkD$3Od#%~p{_ff7jsy+x9q?^^v{)BFlp*1Muh=%RLAcFj$m?r-aY%i|9Qr5j1-JDKm@@Y-s{gFubcn=X@~!zh;;tx?-KJ3&Il>kpAre`oRee> zDJqjW{2$7{#Q&f0=`iq$RpU+8=i}1gbD{0a?x%i$&8NHdzZ&_RL^^wqx?xRt!wfI^ zxPN(LhdTe=gOYEe;MtM01q2(4e0Y6W%a+DlnpJ*ys>lHOtI+2fTZ5tthJFY0igeE9 z3Hd<40F5H5_$ponO12MOK{O{=dj+#5BIA$RnK?yeAm9(q@`&pnUk6XU;ZJW$L-<%n zS{u|@!Qd<4gL0&p8rg~78|-B<{_nOG_+n1jZb;A5hVPVVal>+xUfAy~Wx0gP#43H>rU{C)}V8W4~zh(lyxjxVVX6=Dp z>XSPn-nHa^hu3B-iX0AE}w@f@w;d`!NY6 zM2W3Of5eC!RV2GX19*oR_&XUefv@E;P(RCk?-_eOCIU%IUZ z7o<6>Y`cvnPgGN@b!vT9W}C>Pv@tmr^_4iQ-4-X58coJR2a}>;U~eepM#k|47|hSo zW^xos8L|FM{T~2OK(D_Z?#gm1j?7^h_Vl#Wl;ojFLlP6@_&A%@Vm28Kk|;1rV05Lc zs!7q@O`70#U3AeHUUoGjqj?acNmGzjb-s+IH0ff>`I0J1F8&KiH7uzPND^5ohT<`! 
zl}eYQ^;Wo)0GU6#9_c$OTwX=%)9D#Hy+x;ONIRX_K&eb^t57u3q*Q8ESGD;on<}uo z4vV?mRo-eIGa5R~79=f5X(_JN9VBH6(FvMTS=2$nXhWs6bXP^CmgcJ98fYxLvU!nK zGrPXB!tQi>$BfoUd5deI2CgzK-lK~^xju|0m1_ol7-cD+4cw-5j6UqYJs`uvCXaQI zYf*DUy~dioe2fGSj$5InTz4|{ToF4@EU(`*xYo}6m8nY=UiAAnDca82^@D4jJoS39 z4{S@btD5{(IP~pk#Ok>Ujz%|n>ou|whgA4X_^iV7Yjss}PSY|)Gr7uKZT@9VxHRd0 z4dz_y>`YHrdqO85y;AYl*1McqiQVOGuE^*}0{@(AyV6u8?fjZCqdVk;uwgsm;v4WtZarGoS~!`aU_I(;VSQ+47%(liW`RTn+1^wC3?>qegKTF_hz)qK;Ga z^29NtuL@9?YqhLkhUSABv~9DuC#CkTzh0^IH$`k) zU3*?BTwgFy7ooHv<@L-?BNVkWy%wFaVL+G{>aCiPjUP#O>O}#A5xr995T#0!n=T5c zUbEBr4>k&f&hS=xcCJ}OO|8gtetGhF<@40D`WY%LxM_9me81m(ULCqxc))3q3_Tud z>zzuu2J>)dX5%LiI$XffYu8jXSveO#{|mDs#q$L15!#Euy>rayDhzaff0awA@;CXL z10ml+mm<6TJ@hI1lz(+)Q`B(+q5ZenwW`~_Xsb3-G-kAm*ZBR5I)G*4Fsi+S=tO?` zZC-7b$LrD-dR$IdeJjqTqX?|d+NN@(D3o=$$mZD{D%m`DetnOOXV>Q1`c6t{c~hCU zV>s5;_b7N&=v>OVoXHCcFF-Xx({xg!E^qHq0epI`pfmJxOMrmRGe&s?S^_l8lfwhK z^?_8pbF>76aE%(37LaEQ^L%0H3nJ1+tdV)mexUru#w)|nfrZ+7)ts*usgufvzc}gxEW<&pA3fKztpOReO)#)0~K;4<$LzH^4*9x!9nNaJ=xu=bf9u zy!rKAQvmzdQS@Jp+B>T+xb6b-(J$ z;4x%iwCkgzU4f%{cvi*JHcYUmlPi4Fig6pchH$0JtunZpYEz$gc2O_(4B~j+s&}l`f|jMQFBeOEesAw1qrr@$-9QW0MChk>F4CD}{dC z@QpJ6>A9A=CX8!}tW+ubQa9VNghrUgOJ3|KENIH+g4kaFxV4p@jz&ZFIgb8W;>HuRr=fXO^eQ`Vjk~Bea%_a#PvcGP z(FF@PLyk9k0HD8jWRuM`gCk*smRLRKQac)UOh@g;QVSc+ADRWjz`;Ch6cl4Vs#6-~ z2(Syf+^LS253-SP0+F(jot|Ni9yWpv>zwRR1I*QxI5aoDbPQ841n20fg6S&6J&1=G zf9+YoGLbK1dLyC_aS!4l#Fr8A^#M~}t01mI+<|z4*RWwMqf>Fnr6X7xwoSt+#IqDQ zjTk~?sAmd}ngX*B7a-n+xC2qr*;*E%h5YOl}Dog3SI}deF>AX$Px|XfT)r-yH zVnd@|?7GAo&S%UH=M~e!Vnt!G@e{)Av1Q?W#OQFIn4Rn6x!IO`xHOq1<0O+&@zt0T z`ZUB70uI>8hJuDjnH1ruEU|02J9oz+CIDfS5m*F{&|yY8Z3(%hW*VZWArTz(6zvPw z(7vv?gxnpa)9HJ#2k{UhqwnGKE`1knq$jw|F)cydfp`e<7~*L}iJrjcI6m*tcOag= z4LOJdYRG?w8tGywnk( z?{_I4$IjAm^fk~B`8Nr#;mEInf>?vtgt!_}Lh5y-UI!oI7Q~&18X|fbrZS>Jk0QQ^ z_&SV3R1s?sjr8SC940`Ib-K$OrOEUq`W&R7#e3<`_54Npte(F>pV9N@G0(*OD1Ejw z(*dOxtOsl(V=iN!gY_c)S=aDHN2oM`9zxqWFwH?ML7auS0P!wFi5{X^or@fa*vSDn 
zY6O(*gpc(6NARE#)MXCUU5*Z@@WfqoAySy^PfU-U7J1^n^KPVg;=cKIq1O)AK}QKwVWuW@$_o6($#3CtI#OEfyY~mYEd6eu<%*Z0mF%aFh-u--Ap6J&zQEn`C2N0PT8tB4ikO zJ^?c5W0>c5X4GM`_)hm|$9@vWTkUgvnQ_wbQAU6w`C-NZ$FG!tKsp`2M9yRT9Is{E z>UcgUU_{PAcYt8FU(rQ-G721zAJs)SV8!-M$9kUabKHPK=_E(Xy(4n?tpZgpo8J9VV!%`D@n|+RPsDdXuGLh5mm^g1Fbp%;4N1mCqil>bT8(C-(I^=OBQ=6CDG)lLdiWQSNs`Pn z{wt^i1f7;CPx*J2JoFJGh3TLTVb!#HZW*c84!1z{LPh&*t}8&y_~IwJ$_SpUP+eQ5 z6?m!xhR_@>-&3s_YA&tsAmmOjay7acUtMbJ10=**8}0lnmL4D^VdEWko?o!>4zCwd zldmdCElHe`P*_#*4M|fZ^_+v$^XOq^+Scm1^_>%U?;cj>)pB(@6hgXM`wssqric8I zoT;qnA%EbRx4wr>A%Co#!})AVh1XjhAa%MNDC7?)hpzJnUCxN7G?xR#m>HJa9+u0- z_9!=;XDDYfLAEZJZ8GU{0^xEU?ZYc8I))F|Wl|Jq*JavMl)*Abvr#5HTbD`p!BJi2 zXtIyXXj62Nj0_aX%+N(hI%Mb~86;g7sXHf<6A`&}K;%|^Bu383WQ1jGC!#X86DZ^P zN1?UM;~`y>y)6y=Ypf<$Wh-KncH32Lsr*K+bhLOQ-*35_7PhqU9PgxFS8IjV;;K+O zCO3RTbpx-S?5gO1hRWLdjs~^0qI0r3xzg2K;q97QGa>)Hv2Gn0YeLO8bXLPVo507K zn*R;e`Mi26A1j}amCwhTs!r9%0^QYX>N|{3<}Ggs=UvodMz?Lkdwf~4yn2f6!;_t< z>+SpT>aq(g9M4dUkgyJjhb7nmEg;7hrxi{CO##tD9W=~ zDg&XzVdhw57I7wH1A?4a%y}kLMDVZI|H?>S&Ub;29_S)!CRq#ZUZ!PM*HVlKwUIA{ z_T$BhpB3$1oM1cgkoIUF5w$%Y2$zA+If`qqj8G9jdhKsUGV z^23l8dK#rNpb^?&JM1SD$yDkT8$vTey^xG!T!X#d3!ShJp8)&>-XK=-Oz5G|8AyZC zFb(I^4KIq4(V5{!p=sO=-z z@^|EOx*ne!*|S1bs0`xJF5l+XhG*bil1_5SEK)~D(pB_+b_E!5#PRqnf~9EhZP@!e z=#2ZQl^$b16dn`4mWBmSgyL{T-0*$44}L~$I8%kRlW&pN$@{dNE}-A1@3DJ?M}=1m z%{c4JU?toEkHP08kra^GZwHSc$eZgDi^&1FRk~`AJAj?yi7l#U$PXI zg)h4kSuyjnRjAb_wgsQP>|G&UI3|S9&bi`Nafi4|d`x^wJR?~R-@*&Si)Vk>KdS$o zAOtrDw+1_d-Jy43D6S!%LHG_=jM_KjvkcdJD|*}>c$HYucIjjknL;i`%Pb(v$Qp7D zTJmPHojk1T{scLQR{9k=jXK+ChOYBiI+2#qS@>K=Tj?6S+ucpO>Fe|>W?&W;&xW#5 zY$|JHt!yp3mThGkdy&1(-eaG!v-pGrv)~Z21h?Q3rV0y$D~0=o4}=fI2Jr>)w~|>} zDcvFkq(9&lX^Nr7Fx${*xXZB5@S3p+z4Hj{g(nC9d5@f68(1aV3wP2yAr0S(UP6Cf z0E^fRik?b$k%Es z!V%~b4&vNi!k({@tYkesEm@(H=)dz%o?+tz4|@ULV8@9;cmUoM%p`^M(MMPfdcjYG zDPlc1*$?0ewuamQdub(r`D^3t=*Kh3Zj1}HB$s@_Lij>F6TK*(y$?6Ra{6oN!+pFN zzDE`bOW;n(BkSM;_z~`kk>cgjC}}8po-P&qbO`ANDm==6E1e7{OiY5CNh8}Xou)Vl!~@_Leg$)(By@Y|m+0LWV1&IF7UJ#qB+l{_j&Kn> 
z40*wsv?El-R^xn+!|c!_AqO!-TWAH$f`jm|L4;<52iI65ui$*IhgLc#w3f97m!ci+ zLM!n9kHnQ2e{L1l2sa5|!tJ={w_?26iTh?Z?i{}NLH`mSgQ3X({BD4W;{M{9@~6c!`ir3iY56QBO z;3XffwH3SxlnQ7y(|$!sbTilrh# za!&T2gp&T>mDoEn^jbI8QjpA+7nrF?ED(yJ0OeRQr3xYB1+&?*!TG>8yr`pAjm0zM zKKUeeaZ)}7B_%WD{@>vxy-UOk6p@SNVy}1nI6E$rL6z{m15!&$(tC4r#*Oz5VF`H& zj7`iN`a%Bj2|qkWRxlH(3?4Z9dGMazUYx^atczZ)&%pxU;d4MUhQ8>^%AOzwLSLv^ z?vWELk{Oo=Ujju@vYav*jf_z+7>mvECZCC#@TQ{FJi- z-Z5{F?;XdwNXnTr!H>czf8SacoK2g=S3!oEd=9F4L_EHbB^r#f93XjJup`chxtd_u z5qB9dnJJ91A1B;*J74(5{?Gd4&-zfg5?pZ7Xzzk}1K}#*@Wf3MCge}b&yx)J43!Bv z{@s^m%s;r{+7TDJJj4^6eUN-X;!eHM|MkmW|JDOP2|9wx`5IlNYh=BEULaGGStgKZ z;yRgkFoJnE>|mG01wv=KWtq-H;*0M1c%3-eZL{gbCu+ReOy|YNIpS#Cj}s$W^DEY0 zX-?uzQO$?AAYsA?H$HjEDak`++P?t}ops@e>oy#mKjT<%HaS7wJ=n9=KmV1l``!&B>K3roc;VF7S^`lz2ZHlii^aFDKnAtD3p z??qQfx0)CFKu8b|67T7ExRN??N=>s^B;KXWIUrlDJmtKBF3FF1>DfGd_JlrBXTcDOkaZ=Ba~}&>wNfTO#xqcrUgQRZXZ%X*a1(*MxEC zMHMOR`TT;^G4BRlCRI&Ut zbOMqQ6A|MP3pfx&uh(nuuq6dpPWK9%O-M(gbA^D{ z!aYPz&dZ&YAiG^z25H{%S9f03xwdTCs}H<(?YDa#UAOMhN7t{L-bi010=e+V3%Y`# zH-f?7k;k_^N$v}N@AMhGz%Dzr^cM8S0R_*qwr&am_QOKGl|4X$x*XzT{B-NApDOgfzSy(ejxF&nxMODhVHI$ zy1Sya)HLplQIFHTFWoAt7`H{&iQBp)u`nmS7q>oI-04b?B*R49=y~+(?$TFlzxQ6w zTH*RB>l{x^J-Ps0Bpq)mGM+?cz*{Z1-9fP8%arc0-E}r8Vm>iyM=45}1}qyT#HeMZ zC}A3~b(Am|Bf`cgVTk&s(KsLzSr}uK5Tk??B}`EwvWjYcU1Ghp&AQ$CsP%cPcrm-! zc8|ak2_9@#i5Wz*g&EKnY__9JNMcN2HlS9UU|{QJl2q*$a%|d`Keo_?8 z>M+Lyb99l-;fSIWr}WTb4v>7+W>B+S6AV7*M8lSN$`{pQOPT;wrV3@0H|A}SIJu9v zp?l*3M=7-^uzC<8A1(;)KFN{ixnF)+5{?KOG+*kBhE%h7Bv$U;WW5P)1F%s+K~lucpwdx(lsY8JLeL41YO5zTvwu z@YwfR1(U>VG{AD7TFwZk zRLo9ZUBLG^ZBtEACGqM|&GCDy1~Yeiv;x^FrXPmC@#f`X5az5K>zNHhZnDXFCn7Y zb^*rlz33D6q%1wGC)wu5{i)T6%CvcmLHd-~4XyAaWlqEIrUacQ}FXw{C(^3MRgr{^k=)CI{S9xZg1fJZF~0MDm7yiOBNqN+ux?& zSmK_x5dlBcD41}o2j~HH93_IuYHP=9G+&lkbOBw3J2XAs*lzkg%tEhRKv@Z9tH_Ob z-lX9^y_@e-{vxoZc*duFGv&{C9G20V{8gnep)h>CB5NA$-6ktTUtxKjD~Q1DlBZt= zCpw2pz$AlfQet9$GuwN6u&;Vjd=LB9pKcYtdi?h91`~r{2i|;~d`zDE0qz=n2_Ux! 
zZv8DPe4|!yA4&!jm5K#cOe6tsVL4C&l%GBi7$f(-HC&@US-UVqlUQsZDikDJ6 z2ugZ&Rq!T{*n^ic=I!l0`-uJ?qTf695#wNoK4MP00h(5{m_0xefRlC>e?cbAhEu%n?cist81QAU>*S zFjIs}Az>+?a618vBzL^LZ1<89ar7i)(M>>KdI-^a2Ys)Q^gI z@l-QF9QE^gl@0@fdo#Lc&7rdcv}q7u97GFIdeC7d2?M8=)%w#&ldr2oOQ|3Yl$^NSfWR+*&IDrql76gsz#J!2y>pSBXRMve$4!_J3@S+ z$2ZC&s+Y$KeqKxSL_N)s$H_~KZKfu9Gut9RFFq?Bme0r*qv$1dv_@{TX!4(|f3p24 z&Lmg`n-Iq=W|JrgR$H7=G8pidAs8hqzO4a&S&i4PJc_}ZgjJL=&Kt^kOcAU}*vgbC zipETdNddZAHG$FkkxB`r`-uhLqb+KpRe@Fmn^Pkk6OIdP3wlg|5Y z{tsne0w6_otzGxlzE@Xo)qD3$PtVf805dQ&jG00NMMTDhW|5XLhzpCdh-(tZM8GYI z=1E)<#kd3)!ZI+7;EPE3jm8-L<8Iu318$MXGlVEGJ@4LIRXsh?_a_hNs#DcHHC^|f z^PO|Pb8cagy_5bTw`%O$r|sLfmUZ?>?WS#!&ziO+dB$7^8t2a5&gitj?;1FS6LW%$ zV8!y%x46&BpV(G`>!U~6$`V1K08(E;0T6wkmW~dLyEy)>1udYK&d~l0ovY_~8n*mj zIQ!kl`|o|=E%0A=PpOHwu)B^<0Z)}rf^)(8C$ISN&B!v>BW(K=>9UV_)At;<04bQ= zh%*?5nNnMfNDt-Uto2i9u6-Ci+ zCYzcX8>x6Ai6$tOO!_FnLO)xs{H#^w>#Ttz=sf;esc7Xm1qVqS3X<$P?6^4-yKIib ziogl@n{q_;xBym5?(DiD*B$CZz3Upqx||Zn*Ht8ZH3ciPg0H1ShoSHI4gV>5?lGAr z0eaev&X17nk_c{Oesnw0T1ViVSY0CVc_}|(o>hsxOYQwQI6IvjPbc2goo**lLXH5x z(EPR|WGmnZ#fJ$NV1~UoJ+@6Wn~-bRvBWe1l|<6^C6oXXj<@;(=C;}VZ_8sWR>tSx=3o9?UaP@7ox}UkF{1b4(b>mK*Hsy!+mHz~m zo|~IAcl_CR-csJg?wa%D!gC*Kseh{b!mVAS=retx3#L!KwDH&mUKzJ|%9&S?JOPcsv~M1pSO8hlOhbn0Za7?>3-MlNC(C$2GXHt(VD*6xXK1P{TDzSlHA z6`-OST_uXiJ^FS z`w|p2_>r(}i9$Pbg(kZN%~uPKc)}c0Ta!g|VrE-LGhsf5gButOm^jp7w(mWm{F?&@ z%WvGf8BE&y4j4A!>6X1edGycomVUJ6!4DxE^~JGgz$Je?0A_Ce;DzBE?)XLdi`#dX zKfUQG%$wYYeArxM_d1#vCmJx*DKJTJtkY*CbxH^iv7({uf@ei3JTM|i*@RaiCLv0M zZL!E1VpPOKL{kaF8&*Sd6D-3CQ;|^_|F%*6!!xR{Jfli}i&43Ur6HCyYUHG=?6Gu= z7dU|x7=hs;R@8!=EFlb#Xf7BEg#00zi_zf>Fg3KXgm?yoBwvQgTY37`x4*c$ycHB~e|Xg7=|5d^ z`lj;l*@=pRi<%;IfCinAKcm`J@s=BtKdlY(#? 
z4|txU89Zy0!2c`_(=vonnGs>duY37&1;E6AIo7u9Fk`NXmcM~x(gaWM-9ex3L0M_~ zekX>-=>WzpOr@~`8l{~}SQelWW2#5YbzW%Df*JDKNWXjh0Nm2w$?n=z9>1yo0`yIn zA`E#FVMrbLCAs%#ED#IAu6l5;;0Gq1%~BaN4C|;QBt{;@-xdHaoYd${k`n>Q)z@W* zXmhqoo3j;dR*Ts*O`|ca?;`p2eljv7@NpCEw}}>#z(>Qs1+M6>2lWYWuoK>3Cn~^^ z$fYG9Re(bxbd40rEjVWga7;H!N8FJ#$WUUp?&YC*G@R%dLqB@qya4I(Nla}l9*sxh zG^gZ@x?nDq6Y7{;ZJnhiGE_+SXV8ZO{xpw{YS_9Ah|5T!13t7#ikS?RrO}488z9BT zv2rEYz$#9aVmoO&o6Tq>_1Lil0FBF#9G(+Rzh4Uz*wP4Wjc#snyo`^qLm0cp)cJ;> zS18PnfC#=Y3#t2$ZgJixA)T3d#}ihTMOXr zKREOq*zpAD|6s++C&9UIz7AGyzp(E=MlS1~e#Z4PZn%G6`K#{p+JFz~$U_LxYLI=% zV2TsPs7Mn+{$MLZCq-$4^s)pc7D8D-mOn%lgep}KJXP3H6w*A;VK#;Y97qLhnFJk# zGvIJ$A0&YXAc04zTLmhVJw1>;J&+we*jbd*E~~f4me7~_H;hkkjjsytLXPn%X%(c^ zPPI#2rZN-eSSS~-(0LX7bJQu{?r0M!C=^P~1Ox#JCus@Is;w@eX$qi_QeYu5$d18y zvc-pt61{5C_MyG-=-$2k9J{Ok5jgkg6xh>09o@~-2$-%zV@`uk!i+-i{%KEuygLvy z5+2 zv#+gl_KBpkx1tU?+N2t-X?7jEnML-FeDZD725Jk%G*dR!Nxe@UqF6JHPOhV9)}h_- z+*t0MeeTWKU%fdyVjE7PLgwrj%QgLxm#l@T?{TsH>QxtM?-@s>tsU!lvYuRxNL~vp`e8wi!kz7jqLE3NoJN;dL z#t(8h%=7&BXP!R4<2sXRX0JC~2aVG5Bb5jkC0WFKA3T^xBZUxY2O33DXNc=X_ejPn z??y}hHo^dn9fJmrP(!&1oB3r6rY>sOJLj1nKJy~jU~T-~q?JFQzdRo4``x1VuunzV z4|^uY40tvf*CbujRuIJT_0kw_taP$8i(W&&LGzbOZ_#g|@BpJe32PeITbP^J$C%Fq zR${;y=5+?wOh2$iGt)|^@dkyQJxamEXM51G;O-f`Pmukd9y5f`zh_U1px;?ncY+{B zA}26SmUSXZZFo@<1&O5@Ce2C#mPJPhkvNnFbCN`{kO9cFklji&lmW%`!SS{}k_8*s zE$kll1D0V=74SKEBoERkt=+=Yd>>rX^FJaKa!qpjf1B2P>4gKzgpD|5&JV_1{iWpx zF?)m^OFM?2_ICV2*n;zB4d=})N}MqP%?n1m(2lZX3uVc%!7LdkqHm6yXm zCTvZ2Q52`6J1}U+8wgFZ1LzP5Phw1Qi25fnv^Y*q&{Q7@rx21NfyMRI1}rHiew)=5 zTBHCJL|LN*5lS*gFvxHL=x8qiAcMl?6Zg))^6vcj_91m$T!ih|v&VS(Kj&Q) zp}ab8iYMUBku6QO4B4_|D`0J}#S$b@F$%hH_@@oBUlQ5!9yuo90Zw5l#{wlSnXNk6 zuqe|5noJ33FiEusUyTUlBD-OcGARQVZPg9GId8hiWhsE~<#6+1lGY%Ta&D$nFSA^bQipvUeB)=HqyW3ITPxdHj;%1h@haT>PS{VqM)3qp9yRLp&*EojGUyV8F-$$ z%x)8!yWmFTK1OY0(?TB@wL=jlkC=4XM_VjPX42IjA^$TN zCLD2m)XpN6)|VN;=!34vOGLg;KlKPkaaF?9frP>TrJv>u)>h zS@Hfxje~^CTl;-N<)Iz&H5JNk_J2|e3Os~?*C{ygI-%;OZBfvB4b?!U4t%v$lkl5v z9qrZLOhFr#A+@yLwt_agjfgvjqqAHP 
zc*P&eo!(_%r8FuA94|L5s)K+L2*kp%7{eG$Kn}|>=FxDk_N+#S!&VHY6Si-Le@57j z&Sqzevy7R(`Tn`#`BpJHD|U1EZfHc3G;JnjF*qc82v$W8K`(mfEe3OGl%@WdTVRFfaYRhajE{CFKtlIGQ1?JkujE>K4r5@YAHhw! z{6&hZvZP3S!C>y{u{NbjIehv*Hz|}#%VS$*lh8G?NJmKCL(^4#2UKP>gR^~-@3a|I z%V^5i3UjqJ)BkVjfWy-;{RmT4$^s*t7-gYvaR5#;rUlM1&IvHGl0>FS zg)N5xFo#TtSv4W1S3T@0nmJ)NnjVLn)rtb)KLx{uxx==J$4Q8$qX7MjTBd`wViT%bh84r^DkxN&145)2*69Cunz^ud>F#oW{}VL0<=S~a3aXeXn>}zV z`&VQ?L8<}LWCre>pSwQ?BUW1w%5f%z)9^qlP|G#3!^8R91hzdqK6e^>T6k)%#Lldp zox7C1hQ6A;g}#Nole&j~ho@!zzuuO@)!&;wvEyvV_nsTk7 zLT;)xH9k2txprEvD9rZF49<RBaMUhI%uxb-C)zRR>HQr{7AS!`C5$Ujps`){YtICFuQz3%nq^?#p_lsmrXW= zLRlT!G|gv;ZS$g@B(|-)w%y`1x-{brigw(ft}YbjbM;&mwSi3^ogM4iybl9FTdtn@ z``Q(S`+ol5|M_kCsm)u!@fG5g+Ew~$=?49$>Z5v}{)zUn zZXhgA`}Ba%r~7n8G-EIm4N08oGgO7OL@^YOMv~!Q5A3O8oA)?eOE^qrY6#9*NE0<7 zIYe)hReGBoOmdSs?Pr|V1IT&ELjAWEcpPVdT6P@)r!6nrQbGSN%3;{^jHJw{Cmy(-zxoY*!og2_EwUJ{8= zKu19ds3C^o;;X|D{Zm0H918nu=@AeiY%M`s(uu(t+^Zl#nmbR--z6kf;%^Ucb z`3w6BHIkl4D(yb1-t%K!YJN|h{E$?+;V=E*4A zvRs9Lo&X%dZYVCG7ceX7%NVAvehgiRPok&trzIw*PRdTHKZ~Bj&r8f|xY4iGVk#Lc zPu3IaJR#=^^`1~m^ve0LBh-09&J*e}_cR4d4Qeh6vvhskSiQCOq`Jw?bJNAznRQF# zMe1Vhg1|!SD*0;lYW@4hs_e?THS|sLjp|MMkB!yYAJyHVuGiNGlWrn3Jd-nHxu}?H z1UZUojGD~oQ8{WMf*y7FRk0gmFjf~*hbQanKph)mD+L}lIb2L8Lo^AUFcDmG7;C&M z0i2~YA9Vi4?BR7;O_kY9JdunEJj$aX2kNpl=p4r;W5c61wxQcl>~Sze4JVX7Ne2y( z2A!Y_ECcHR2l~JkTN$2A`~4@*#J`xuCavNl{8i8$oGK1!r5Z|txysU86mwJ~XvATs zrooww_NUrH{@e23?gnodm;89vi7QWW&!=S&MU~VeMSas8;qkqF=_W;W0aGfInCOGRNa{6p$w(z{bg!IkoN~Iu9^9Q}B!!=Gf zN-k|&N#01BkjHrf=R;%LyMz-dC_5Y-P7Xke&&}5eUreZDAJet>{PM9^|4=@_8m&tm5?qPu^aX0;>9J_qrc>CqdS&v_nsjqF!?u z{+Qc?2C`ng1_@PEoo(WZEy*k;iWLPGK(_ha7masc^tH554&90SW3$aHXrdbEI;mM1 zsdOnf3O9=Dls(EJMNTW73S^)xK*zuA6oDcWCb6S~v`nF&mP9cvumM3}DP(+*4M50> z=vRH3mMB47D1e1fAZ3__LZ<+_g>?ct2EbOK-B6eh!ENw<2q8Y>OS7FU9LaXE>)1W) zA(mzP;0--;*G7jNT8?|)@yo)Efhfv~Mh;pX?U?>?JGpQpmqY#p1__>Rl#Z0}zqW}c zz&jy;8B4M-fzUw>=+k4*m`m1K;V17A;sCd~ezPN*0Cq1+34+HMHyvcaY&K%yO0DISTE0L)sJ2@AIN64PC<1Nxe!x^(qIf47`edjp{Zw 
zy0YC;Y@~9&Ml)v>sIkabjkTsyC;O(FCtI_rS-#okEX%k{xJ!pV3l~NVvUMYs76U{h z`Cu#Cs+`20q)ZE*&7Q5C6Fi?iUs)WyjJ-^`CaAMP%(t2-h}9t(^A3ll50jt-YhRLP zSQc_T(mn~nkEm+8t_1w184QIj3+2`AJuGFV@m?`~cyG@Q3SydK@wz^Q!+^3_K}ZIz zKrm>TiYO+7CK64bqU&kH7cdN;DJp^$WObi`z!-fcmbMIC7e%B7=sQ`a>GM%SG#ri^ z6Gd-5PBPx`2mF?h5l zP8QttcwS@T^lFlG=nkpnTFuzE4{h4_c~bQTskx5S+*f53req=Pc1)lSz1LLDh!RV1 zX;f|SjUL5jZM>ET5CVQh$?2;Z`X^?qiixlQO&Nee!R>RzWT;~+e#ch)CffO1KrO)i zjsViKY>{5-<)j-TH7mn{8Wf3*Dh-)`Pc z|LW)zrgTes;mOAybP;_jvMfJ6n`9a?WmFN+x+iZh`YJuQ+MwN-m6O6?%Pz}ROvigy z9n2>DmUmM8Mv7rK6;Xg00U;SSnvMQ(pp70Uj1$MHUq&t~DVW;^l^Stl|^QQ2Y_PY5o^NH|@_K`V^BMX%jABxjM zcqTXeo^xK)0Bu@v`9l$O)GRjQE22|Xc|JlswxU4 zGzUQtSz21L6^L+b{?4?tPTC{UQXlBsG2itheYV7Hw~bEYC4)B5hioZLMFPRS8SF{& zrys`lSF#R74wep6-?aF7%TXl9fO9kH&zg8VgXw}Efj9|o1yRt zf`}xVL6VHio+C_jO|u{?iOdOrN@Ut)2}=-5ia(6b`NQZOmLQUbT4}Y}&H>B+IWdXt zR0UiMf^K0H(*3}c)9*w&E1M9_xD z8xiwu7BS~H?FmR=8qmf>RxC70%^GtNxQM$*evf0YeQ*LVa-7J~qNLzX%(Ns2BuVCI zPNXqW5W;8aGz2KX0-Pdq0Hqv2?t>9qlq6&_D9+RRpk<4Sc&06NOAr~w4qKIFB~8(1 z&VaWO?|$9X)++>mq?2J=!Fd9 zy{!?w^>eU-%qkIwzlaE{K2FzY{K{ zr_DCjqGmN7kR1SkHL2vgp)!b ziHQmyPLE7;aP7qy4Ei&$lnSS;{II6h0#g_^l{sZtQJ9mzK)5)6xw2MyUin)6I`3<1 z)c|8OXIsOgGXZOU4 zlZspZzyX{$@6UG}M2BQCU)2vUceuO);qkDk5nIBYFx2wcPKwUw6-8_8qle>yT)bYQ zM-!Qrt#CLVAWQntG<%(*9b`cQDXHLs3Tf8DTvQwRJEj${^W*sX??6IMg@VCAC|sMP zIbMTE#I2)a$I$KPKe=f0Qzx%HWz6EYF9a=5}0m%vo+Nzj>N{-l-$5JalaJ3FE-K4RNDk zdh;n==gc_a3T($~kR7LRKMR$BCb8pd0ITR(b__e2Wjj(^QZSXOiMPa0j4w;AOL61< z?VP7k`p-ZBBQg115hu@BT=>I(YdF0QD4^jiENSbZd zn*$@+4&7!?(>vJ<*tZjZXO0?*5!4tCQn5I4ky0?O$#|*EAlYqIrB#(>Gj>hU%DyZE zL$>8Exm#wOu6&u;tZcbGGe^9-&mpgA&v~eUj90Caf*YYgu`0=zA;5E$(B~*&Z_(F6 znO=dNQN02?u{3;)Y`CS8Akyyo=}3{TgK!T3>%a!E1snoQ3Uq)OfCiX4!$ymQ<8}g@ z7a-aV2;T%utlfZUHx4AXW3wV34dJg0EYj#r7X5%o^5nK5)DTwO6;3xbIkz7vfB#T} za0-VVWKoUdK$fpS3cVdjh884iQ#G04Vq|S9$2$)($rn6|t`=$V*p|^r7&NHbntD1A z9$bP7hW~Q=iml(>yxcB-`JYcMhOIMizx;`ZS6%)DyQ}~2x6QcicPqR){ISwiC*`%0tn3sXENiwa1^C;2CZ z&+?xY?(%nq?}hi$_oxpU4@DJ0jYx~&#q=U}m9k9jRv%Gzh`rJdMF}Zuls`iNP(ZK0 
zTC16_U#efL(>g$Iz|M^%eYRcbYptU;P#;i-Q2M3oGBx-evA9jqU0)u-#DuV;vMuqftj z)tVX)df*-OR6FQ`ccvK3zQltR-@!v(Bb_S}z5q|WG9F-un^s)&;YY<<_f-gDM-8i* zw^z82@qZEpj3|Ld@2)tEYuqcmr2q&zjnaPfkJJT_!p^BYssXPpGLeTc8H;Ev3SLbD zy)s`}2UfMVh^(G&CC|*L6acUS)9qUmU;OIr@;_F5dgG>dQkx^!&b{HWhpu0AD_9-g z`4UI~=?MU@+x$Rm@si)X^7`H%BBY#xH1~aH6(RH!r2L))8MRJrRZmjcF@Z7hS@3M> z%)nXk3*q_fLUBQ$E50Z7D*KxM-N*s|fxs8xzeElYL=2@;`6!0bX;F-+{0Nv;M})@1 zG3qopS)CG?8lNQ<)eF@F+{dA#;IL+ZAg#%Uj*wF3eH4O9hoGwdt?HI7>UD=gwN~p^LgETzTMupL%@2O>1mniX_@J0S#QzI@?I?y*YvUGO(;Iizr?@K5AY1u zz8O5tCy5D@z<^IWrcE@G1ZRAdXk;YW+F6B@IBO%Uc?BBDNvHK1;br@Nmr%#AL7>E) zYq6~~21yh+(+E;DN|GIN%0ke2>Jl0A{q~tyO2o-i8qBxYqk$t)W|TQT zGAcgNoF1JRKf|0CIWvBqxiorS{7UZ1;1PJlGN=&H)o{2ogi9TCD6X$FHW<(_m{?rm zsa^0fY^$CZ+hdcdHW23A=|?z*7ttO1|39Lqrx>cb_BRE?*@* z)iFCk5f{~sya7AX74}ds<1C*sLRiPtB?e`TG@ze0B*yQ;n-nAPCh^|V$@%A&n)AMv z=F;*ih;#b!0(^!Pc|e9dMuMZy-8Jm5Pkvhd0tDW94QSx_C(^dn3vTIu8=j$zE8h6M zM?o?C;C7HgF^U2j%I}rGHqx7ST?Fo2GwGs7a0SSZ)Uca-XN;x9+OyeM{2mPG{>#Lb{U> zh@?RvU<5=ovLu2EC=f(e*;E)&bVNj)8NhKqaCz!Di$KC6>X>gtToL@-o;oP=6CKov z!<)x2zhvCupAxY7w8!c)cP3J_O6OdP)p156?jvNXIONIo}Ce_qtx zNpq4)>P#eP7GTvHwXgBIR9(6O=cP zETTw0Zz_?Jkc`O71qso4CZ&TDh5SSd`Ey#x zPqYw5a3Sx0WoRM17e;h3wUFPakw*klnBUoC)OY4Nl{Wu!6(&He$YRJMiy@0lhV-`; zmAt4PS3gzzRi;R7RwpAH&k`%Dsi}svsYI-(A0j4mL+DLgl5-IL_raWa~>noohJA*c->^y>@Gpp5PC>`Gyv`hODuVG#}$k+bFdTL zbb1!TUpo2{mM5D4pwE!J{%c16q7OaaUO(=4Ih7%veS)c@$I^!#40+4z52pQs zuIL_`g+Pt4yhoOl0rrey<0{3nZ8BwTNeKxq>=B`l7&E>MY27X-OAlrq5rg#972`-(>sz0n!sOiIgMw)Ai{+zOz2+X5Vh#9$&wY z@lhU&i98mgcr5<%B#ahG4PD+@47wjh56vmSkiuaz-uPX{Pox=RbxC5{sxCGhLqHMn zMG<#&lDFmZfZfJklC)YRmn6>((i*ZW~C(&x+BNaAYJ+2dW;k^2$?%R4+{=T()#lp+)YeMee-?w!< z{&3$c>XA)rr*65m?>Qv3?nVsJg!?K1Pf12+r8GD%n@4#;#HgSdWd6YxFw4`djb<#m zyN!}9KiwiYi{KHF`_ev}ca^7E8_n2q%#1m@v_)_h!6U+UrTN%=o@Q+{lMC}Tw2Ae2 zoF|K0#ogi_u}Az=JS*}*ED~=P*NeN%w@!)uqEsXzhmU6{TIA^G`g_c8sBEKegMec> zM&kGc3z%KZZe|bD!<=Hc9_B1V0Vc|vKwAuB#2Kiom>jDygw+@cUjjpvCSxg0#;W~c za8yFVY{yIz1~c4Jf|QEcDrpBv?O1A0nGAe|4>A(VVU!{QduiZ*Vvr)lQg?TEcQaof zKmNU!DgFK4Avt!64W$F 
z!-={jpm`z={3J;;eQrebRBDqZ7$6Z&K~qZwO$tNN$%ldQ#1AUB4e!-?^wej>PF
Y-q)BgSLBNOK!afhCJ z`d_#ol_v4VE#u*4wL4Fl|i$uA&sGB6B&UU%VaAN9gH(I+`@&-vfpUrGU?{EymGF*e$ zg4lt(NSY?yLa&fMqCeyKCpZ}AO8JD)$PE*l9g`i?nd#g#e!6%ovzpy09_9Xrd7nGY zeaZim`&RI}B#EVIhQiITqJTC;K}Z-aur$LYjJ8+_J?0FqCSq7z!Xe8*Vh%wyBeD!h zi^K%H<&Bc~jApbEZAFe$zW>!?V_7lbuH*#BJw7?Mr*cRoR02X|Yd3?*NK)gL=b=O$ z5&i^58#s{^I82Jv4~YW-60QP4#qsCjmU(v0B#ddA(6}^o$$4BFdM??UOC}??+uw*Q zlo+dN7I)9`h*O0onk>!Kx}#NuMdC)95(5rBjnvbU=?1W4MiQ$E8bzU?pb2-o?km7e zukY@QlGWZ~6TXr3qAdYt%9-o$*;m|1B3lYs?Uk)wHXMb*r*fTS zpy2VLvw1vCWPxk$_XY65f8854c849)jTAz@oRxIIsS=Xf3}6JLQW?%rrAD!i)lEwh z$2^jN@aZqJ3*d{NWFNVmJ^aJ-a8Gt+-yEuFO%`Je-+}aJ1HmQOaH00hC1ig@R}RvN zxm|xm0}j%3wr{;auN`b-iTWZ41P#LsB!YCEv91|vtPObXo=gJSd@5VS?qWY>naOBz zmZgi>o7wejKg*z7B2lzqB;ju&#*!C#N4o&*0cSDJcHUz8SKeYOuvtt7a~i&%U^;$Q z{c3-|RnclHGcbu6sLZf7!zm*Zy?|_8pu4(&WDK4;YrAn}Rt8hg?W}@s!Cz#XBY4Okg&!g;n?D;6=7ou@75RQs8 z9WRVI{inl8!E&WK2oi(t}w<$8z6~*+i#E z^pfaUo#aY&tA#Hg!#o5#x&l_vDc>|>CWjntfyCA@43jg3b7u;ZGlg-(w~o&gCMJBC z#LmN5suHIntl)Eot^8~lo$3IfQgQ1faZ5bMP4Vn$izna-0C8G@DgszY(?m!YTp*;0 z5dnOr=kS9S138!R5EsH=f2Se0OA?3Rs;!uxk=0IN=^{dWV!+-vG-*#%>s#TXi^g!Q-miPJvdaix&3le2cIR@JYF~qm4HMm1N5jYj-4=_=|$E-j&YTMyX>*nY zh{qZr-m6UfR+;#%a?Wp+iM*3zWDk7HwShH_5^wYOn|0RUZ2H#K8qWDmfhte68e0%uA9&cKkB+lgIrhL=DISb zJnqT32D`$O{MvbYt(D=-o+Iw;fDe)uiV8vhY3|MYvp!B2B|+jPn$t>k&KZWPpI4iJR>m|qTf17_Rk86s0uEP>`?Do8$X@lsG} z;l>MX+;rgvZoVL-wc)PezPi9zt<}})8yje6+r=xi4p)coiohavkvK zMUHjMq_1SJlx8ZO^jvnX)Tv1R2*c}0LV5cpZD4$Y3*%2&%oZZe)gJ#&5v(^S1VR z4B!k*IA?okZOr5My!(fipOOp@KGs3g#Fp(iSW^Awf;bNVBE&5)&nC z70sA!FkcuIbyC1;i?RH(99W1^vLWH8L?taU%b`0(rSb_xOC*_@#7wZwl>p$!D=@6W z3(Zz|p$#a!I6t`Xf)tw}T}28nidq^1a6sinuJ9s>F`4u4n!Ws_Tf1>!$pvugP9F)+ zI%OnsxX2;;qMQ~Z0X57kmJaiIJ$R0X(lVX4G0djNhi|*_?h^}Ee7bhV{X_I8R<8QZ zlgpRw&CX|E`011>oBMY>mi_+T%ZB%TPd|R_$Q$py@%jgdYsP2i)29&EXdnXrK)9w; zrjk@;USC-zAuAcgnyt^$T&Qqh|D`VPlQ3N;&-ON0B!*T z3K{aDe}EIvD%8}BI$K?@GKD1cUr6{xC82*c7y4I8=wBtFf0YlV`tp8O-@Z_`UI$T1J9Xvmbe@g+q3k>Ea0g8RJBaI}qVVL{>Gu)Z19w}ter zfd09r3;Dxf56E~3doZ$-bI2jW9`Zo;fVx@KP;Z9o2V|M6Dz{(tLiTTq-?{DOC6D$M 
z|8~`~C!Sfk;<4;}N*FN-4uSmc>>W>R`R)?>cgK#s^vb*Mzk=jBYa=I3nW**sMm_o3q6lCb?RU{8-rgwAOKMDTe68_qfrT3j%1bZE_Oin~tTh zBYY4WmB^Nq57G5DTheYv8mTxQ>HIq%9XY>w<~1Wnj~;Q2r;sUqWXbs9PnNYb&tBU1 zF20A&{b%UC=pNQEg~S`rr8jfFLJ-Hf8VK8$+a7^38(lhJQiMERVx!|WI%cDbZFDqe zM6PRNVxHJ=@ltV2Nn31gY@N78+*tC2`!`j8q#dF^6!6!yR=w|M!_-w2rPV?y&@Qx# z?NYnkuCzNA2n)mo(gJybvcS<@+FhpNmfDhv`jQ#aba_tcobu)IM2SU|8(}zeA56q8qQoK!a23C+uyKY^mQW-n z6fN~K@{ocMPCdqgRisQM*c_Z3oE3Z~cs$6d!J^>e;HN>RD0qL63ci5Y$cs3fR4`;b zcy|rsIce|&qyW?)K^E-t_|l}Z0mH08IHbLxvw$jyczMPss3oEDFRXC*7a2F^MJ6&t zE(*a=NigFMq-*ivwIn$mFqT+M1j+nM!6-g57{x~gNrWy)$_((QqeMe(#n{?SBQy&_#&gGc=WYJOH`sLRq{jBs3~*5!!9JA-HV7?BgJVn z9c1)2HTJ0$JPZys>nM;sv?G5bJ0sb=cb5>ls&qcFI8 z9ItXXv3|qW&y-zM7-v0IrMl+Q+#1cr9MLck%lR;5FG7n#585nt#={^McPK)I6o%zx zqQoVcFeuUruvcK9!;%GYMk|vWHf#Vk*^M(99r-t);hX{p3*^$$f*}kM+(YDGDC8|5 z_MbOj46Nfq-~kYhxB$3p2vwJ^Z@8d89sR?QJ|qp>yidJn?YdQUi3gAFoIGk+aM_D$1-2$JlI=u14?QQdq-g^8)cyVOm(z#Jp2o_$7k9K{Wv!=DWR4BXI|%QkeFRsBmN9~GZAR>b$l}nQl1Oy7 z#F42?YHO=aMAKFUjT^i6?{_E53m@5~jhy=^b^T_@ zcV_R~-1p$bsu0#QcOX4;3eQOl4;mTt5H6hZ`qNa@?ZW^yXEQ;UC!KV|62a|LpxY-S zHLauj4^qAXQ3s#RCiB^%4!%UdkHZ5Y5_|BIsDs}{N-c6t2Y#ZX{keF9-(wbCn0azP z32FIpyupEQdA}d__@RGNh;;AcP($c!h`KqnJG3X%A7VneG+Qo#XC~*ma2`KrIN-_M zqT&hhl*ovd;UMM=2Q!5)k;Fae1R}|L5^E4inp~U|91!R=bKw^T6$~SSR8BWFnzPg( zF%x1mr$cq%K3a^ePIDTggn>iQjpSKnn@T(^wc*4K64(3UAv@4uUB{Qe`v1tDBV zfw(}Tt~aWvZQb;?CVKGWP4tv67s|IA<3hd?8zUjutE~O303?BfT&_XB1V4Rjb<*A^ zPuk=U*0B)8^hOD5ZHL|{`dpE;fEN@}Rb6Ne&9#JMykcQ-8kD02@$NQaQlW^h^dB zNn|rFUz$a1oHjE~B1Q+`u(6BTT_q3=fap7XF9-%030mj7=U@aHBwi2_qT~@p3D6XkFxsnGNdl75E6wqeNCU`LD^Q6EnM}r*uM~#&XV~>D z#Ud&rMyU*p$zS~sX6N2eu&<-9BNXU8-C<5)l*_E?jRSEXaTA%@){=kO+xT(kJ{fC! 
z=RWC1cBdfxeYW!trxQhilIuIgO%2ijf#yGSe!C9@g zKlxM-tYj)A>Qa5CzD1|?s4*{~Ik_BT5js|EVo`Bgixe1X`OLwhk~G69qMHkgK^Mya zhLc6vDY!J?ragQ_2+IY|5|H34g`_hL>iFTp2Kpo&vJ*=1FnB?-;3q4UJl9~WzKT7%vJ3f1{z$q2zLoP==s~r^>j1F7QHvR})Sg zx^nWg11br}q5k(X4OopikaVad@hJ!NxH(DIOQocKC3A(;u6OF|^q=aIE-^?};NdU^ zW-#_cb@E)b0l1`{#{Ue{8T}7uJT$?C;8{_U1X)oeP1lhH(b~lV7jgt9Wade#Gy0Ow z3sGKoxsojJVOidZ#yR0|dK?a?psQ+950BXGC)Dz#R#gdI348jT*w9x1fIvD z&81=_Pswv$b3hF3zurM3ZhkT&MJG#ev2;7`#-*;xh?8}=SifDzpgvb+WQ~QhNqU$@ z!tcrb@Vxu{JmORYC!Xu*2q32b{l`MEBkxe2*>iPat$e}!ip8pfwphk_ltO)=~r*7+>^wc~~gwTItFJA-6`^8gR_oVC~5<>r} zy?oU8&^2%hEevOO;sA2F&Vaqnx1S(Y&}B4i%|3Vd*Ue1o z*U#>%yZFE}+3x3lUGV`@7r#8MzfRrMx8sdt)V%LMqSo#I;W)NxRb|EZ!Ut6sRAWH{3;pt7;|%o| z%rAtU&WF?|A;!KJGKhh`x^U`i#(50XT27a~8*<2c&De&9rc{-CNF z27gdhhn7Qr&?HW!R7pDuj*66)$YDPmbwEeL4*$Su1pYyq?1HWdj@y)&q+ZL3w`C+` zG91ismvigMByyKzoKgBVDmEG2`U(0v5<=@ZXIK%)h|oUQ&h=h^^e`mZVLm1uw`vB| zgQ{*K=nhAWC0g+W@uA6NwQgsippk$u$OVl`%-=|(KfJxKxKSfeS>8q%D{d4sksJ>F zbds1Ox1kA@ZOUj z4bOf_mBZ|}Ej6Q4-)EJ+x8SAIvmKb{inCMbZxGLgsFz8^4vm^sv776Ivyz_{p)wZy zlq;{Yoz|xgi%@d==Mq{M)EZ(3$NH)qTPuUuxA1QRX15RfSxX9<0BVk4zqi$4bHJJMmMo`w3LndlaRYgPE z1=+$ukK3JaNglMJDoA}KWXU7Tk{dVw%N+g!5YZx3b>zhe6*)vT?^n?s$#@P?S7zj9 zSH?xna=qxHT!-N311gMxv0({+MDAH8s?;cx75Z|ep8%qa?y6SN9ip1M!_jp}m7rVL zho?bA+>asD1KPRMK|F_aZz!Pk5(;r%Z~jpb>}6{BSumkaTK71S3=ZUF@N)+KX8##1 z;K`H?LKKRM9QD8(8Yok&dV_crP#IO%lkhvMlMO#Zjo=*jTjAfBTjBXtH}w|hoZHI4V{rL3G?^v9FQSW)t`?ri6%lA3jG>Ff=^ z*2+u53)KRqyl>}<4eM4?3%`H$nbFgy;-{pO{t{lyzJ?g|9+DdG`1NEa(>KX;kIPOyyfKOCy8kXp&aA4h zqnUrb{lD%SF{Pr#H)|@o?xX19*N}Nx*OD3eDD1$oxIK={ip)MnD$GEXlsBHXQV@Ub zb{H^-XY<7W@26>N23N4rS@GrFeFnWy0q z_NVBYE;8oWr67%N4!SjOFdzPdIC7vG-v{(9`+)I_t$ABN|CD+9)G7Qe3#pUvPwbmO z1j~$haxl6LjsAE%X(la#L$IhzpwBResK+uY&!aIB>mWJ>y+XSBSy&D5iJH!n>b52g zH=Q(@b8ZpaJdEeiEy{>|25BP*dEBOienxdFr1cunsJN<=7AZ^ zOlA_`k!9ltMW75+g9gwHTEJv*HJAl%0E@vb;CAq8=K32uFTe82_Gzou4Qsl2d3n|B zIVG2kSA;Pc1_#bL{E#PYdo43h)lY4<wV)#4!R;TV1W4IHny#rZA$8 zVbcCb_Jh^NT6@dbf6&es?2qn%CmyRyr)ta0wcA|#&9(J8KWKaKdhq9iw-bZD-#Q=t 
zZcRE}^E15oXKH9_XbGmW4Yla?yVTI3De4Nm>8GP7> z>6-KqL|{%L9Sle&sbIblyQPqYI7Cp=PEL$4D&U}l<{Sx-0h#o$G{|@ZX%#UKH5v5d zIzJ??bd5|vj;(ZvG}FYsBeSmQz+p;bzB5K74LXeKq?O3+hUPBR&XODpeFhws!(mxD}4uA@UtMF*gQ`Z6s z1kjQJ)xCe}h1I>E;D-R8DXOO%>QhlVZA@eCC2$c9^)NMI@-MT;&;q6&_QbBeHs&#w z@aw&EI^lY>$gW@BFl*MZWM_@a6!M%{ai;^$bF6ew4yu%?XD9~B z3d>zhLs~vWUE2jk7J%wwCy(^ib{s(xt@=o+<5*)yN8b^|pbd4!br7d@<6c^)$7DCa zpRc*`#;sqhduT2EDC_Th2y&3mev8igHu}xg=)693>nCMYzh1#cKm~+yMFhY?bfS!t zXZfxCZXWr$kf(#sp*x`hn!0wcTLsmB!gY`Map}-gh)SWmldA63{?wa7w+N?`aYdQm z=QZw@hNsbl9+D7>$6&!O*Yyt%xqVgZ6<1HaJ^Kb6xwv&?cE#Nzr`&%x^$b7p{c$75 ztXeUC3|s_%HZql6v}xR^I{?gngYJ}@o{q!?lG_wuK?Ngl6uQ#A48RwJZpE9uZ{l{< zGM5`e?o#yjKyP+El{gn$2S>s9>dUjIv-cxah>^(s%*DWse*5@L)j}7X;erblI75MR z1b7wtPtBz$cG*@x^!tkomPL^f#ukR>*X2`}C+(kPyyOtTjq|i4yK}#m=;AIB+U$3ul z-sz;~IiZtd7ASDM0u=$BV4uv1=yctXQ79)m72>%g%_;O-tB=(pjyrZlJ5pOq&KlPZ z#w*t2kQs5byYFUd&(Dr#?_#rw>{@u|6ZCyQEJGggHgpBop}Swf{x{MXaj+wELpnYt zzA;Xh#^I(ITp5F#qHsbGmUtoS^?IrKZn#Nb-Wg_|94xZ_d>HCBXI@^g4<4h^T$ za2SO8GM5XDRv3oEiDKYih9{uVU4?~6j|Du1PBu^|IE-0*8>>^jZ`I<#tF0x&hi+bO z1nn<~*cVR*VRW4r6I_jWjMAZPRAFH47u`R6_x3y451<0U<6i{2hrV*pD{$PcJ9pl3 z%Wt2i&iyfa{`lM3kI^;W4>!Th&vt8HX1~dP*thAm`|f-0nfo6gQGw>{?esY&fkvb` zQ*a*-JNb1y)ymK0sdDt-A>LN#WsD#&fImz<2P8D?G#-my^fOY`_$jPLZU%ax2af}E z+yyZ6)8{_O{xS3{2g9HCrRe?iL*Mj`&)y4@)EkicJ?QTTSD{hR*&gI;x&huEftcfJ zwg>5QPk%dkrxpOFoIQ;8@CWcV9=A%g2M2;)yxodE2ket*JMMkIqCN&jTVP9zb3(MV zxcK5yXK^%D=<|&Z7naha3mH*Fk33>?WBkbQ;b9{xB9%JQTRUg~i{quGWlq|94ja(J zkjC?sIw`(vBuzkmbdRTyhEAGy(q0cnQXfgxv5P)Y144QGs-1IJzwnbWjT`^>u4g}} zESz%R;#r%o8=>}ira$=h=Kp$cLp>|H!u_CR{_Xcqxc;{Bk0ApM_)|+NeKpnvf6mr zjGD%4#w5~nw_Rh-@3a`*!9rw7?aC~=s}#;jKtAD5Q2&p;F9C0(Iv1UDX0%I|Y{|PU zk8H_HB-^qb$96VL-gg`C4l#*r$+jFvawIv%N!b#JNkSpCB@H3H4PBt=>rH{CCAb&b zfLq`z&{7_yv<2GILYGI|*K)ah5R3PpGb33}0$*R>>%HxFofAuE{m;Lj|C|}kOy!_~ zbRo>NZ*?_K`t$d0q`-hJmAr~BW3RMxux%;f1;3ohQo z&gy^kP0zh^E$I2Vhd<_=dK%?E@&%M|VaoS-mh3p@K5%9%|J&b?XYbs1tn=EZ?(Ak;7aj?f4c_^!iHomE7GEkKbXlFB 
z`uB4ezbs-SKMUBbXQrI>x0Ir}QdD5Cvg7_-)R>E!GEigsKss(nL$@fGlGF-1rVa`6Pz8sy8lJdam5Jdq2~~;&xf)bDCI!LUQfg3Q)aJ-5X)AI?TTMjQ z6z{l$NTJ}uHx&HNeRqHqj&(~GFFt(hVvx)~$@1?tzF0Qs1UdZW96LnhK>XH|xyyBPwD>#KoCReg1@}TJDCHN0|z5L3O?v7T;h9^U-S5lM8%p%B@+GhvB_o%~d2(S>DVM*aKr%(X zVotH3cvvA(D3VYPDnqOic27!-ct&Xe0E}9Oc}^k4G8vL$@{(REYv4$=4v<=2f_>+L zO~+hVAg_W!d&<+m0Lw|ywjjT52~qPin-P&Yc?qRtXXNeWq{kn}uOVhR5x>2xmMku} z<8#aH!0??P?3AX0UZ%lb>?P;N50;>TVw7Br4i%xMqQN5EREP!&P;vp17NB7Z%CI1X z1vTfQ`D`?mjjFTJQ4^Xnp~*}%l!h8oP-=2hG9K3=pAt=R=n%w)DXCehS$UaEn$c*q znbY!8Q}ua_S!d?Wtl3;@BX5Y)%CcoxmST4I;dz`3WLGNl$XD{q+2d4dv%G1 zP+|;KCV2Df7M7M!+FLsEP3}JV(0^9?P7CUKETH%ttkW`-i^?k5^3)Y?Pdq!f=dMTZ zjVzxZn_NWrp7Dq8xo)T=Jb_YwxBRDl-gkd@x2qYqANu;#^dk%HEjOR9><&JK5>9;` z@lWk49XWL7>EOA@E1t-=QBf(%dZ6tmNAGy$L}%-XH$QX350CfKXJx&~z6Nn*GE>M* zJ6kIYnhJ1bPE!u9GBulURa$cz9!Nw(36lwUNIogYnVHE}WH1PK;~|^XBK$OZhl*1r zOL=3GPL^V1r798$kmU~kN5f3A(3h9Qe1RFPhMaf_D;xyc}y}*N2CDalqmJazsXrDbASprhor zpTp`1RnTVJ~KJn*?e@f&M^^9C%RH#*;t8bMNs`e~D zS4Osd^o1*xsUq89$FoIZD9lYNNJ%TGGw$$~$j+gn{HokU342;DFCCa`;tmr2bw>KQ zbD$#@rivMJ)>KwDRpQFhrczv0)LevzRX!CS$FSi*nMQ-vvTj|H#j~Y$Ynlc_M6K(M=CsxbVhz^QkBwTG^8puMv0chHzfA2Aha%(@cGa)+KkKRAu zVC^|DlKCZb+y3FZuD-f1bZQ%(U4CKcroO_`k%fLt-~2{)pjn`UCWw|p&i4LNM0H{~ zA8j)uUo!F}q6sycRo$Y(9u=CCE=aMoc%SKr2@gX^7h;%%&@l_LSk@PZ_%xJOXp%yd zoNdypz$k0PeNJqP3hs4e1ojAVp#*}+SGGZT;$_^PwX%}ER6F(@8NY3$T3>v(XXh#Z zX4k%d-97v8^|c#<4}NTW%T`~9-QMZjQs?io6?gBJEZ#A7a?V{=Tw6PLzi0fxn_AmG z{=FN$k3aLujXU4FP%^Nutt;GNEo`4@Y`*U5^?%2;`#;gbVIL$u;RXC)J40j(|F}Ke z9`KjR%vt9(+62AQkj!#=u1b0W#0VLyL~4*oTO_DTf@BgTk?6RRIO}L;(R4iDAOVQz^qZ8tXC@a$*_Tu%FeU4a}rMUkV{}oiS#TJTro;7 zA(Dn*Vu4)o*Ci}TV0ndbK9R;3BeWKc=wPu;SV~Mn%Nv_Qz@fcl94^+Y-a@Gq$GOcG(vxLOP;!7VyX6;$-C=F&RQN--(08?vu3t(%l0iM z=id7#BW38Rr=LzOX`_nke^>WT>dsn>KNaWo>{y5st*{vQ&-hGP&{~DcOJKA z;aF~SZFO;iWvjzcZIWO7erAa?FW*&6?@-0K!jYFg+2I)4-)4SsWFpDKAW1oB z@=WN3z(>w4rjhx zuHrbS3YDr5qtby6u__hPNEKEF@eo+aYzhUE*f3Eer47Z@=&h&@5muH1Yr*%!4vf5- zi~e!B7iU>N1kwx3^Z0pg&*{kbPDj2V&dxpovnyc71MZu=XpB=~iIv>7jb+%zaZ+R> z=VcE{m7@CoLQ%{xU3W_c{_80 
z)?_oAGZT$qNVbPEjhHEk-`c;mviz;uTUw>&s50?-S)MtjO5{x0%$@&Db{%Y@H zl!M$5m7l|zq_Qe7mT2UxRU)@x%qeB$s1b3*ht(`4QRWGPS`v;l%`~htCzf+V=a(N^ zT*RH{v2ytx(3NWRqyQcD0Cbkga4J|(gCBqzu=IeG$OBPDB+qz8J&TK!--RFi0lyBp zB&NWru9Bi8t4?Q7Sma4C=<9NYiW(s-3pK*dmFO{|>ttRX+Fw z8|!lwi)d={&TWO?`l|kyR%gEhzZa$Z7E1R%=SCUqkCiN-29D3bFyQ)>EckPVQOXqx ztCBS+Sy;uDJlSLt?l2Nwf+Z}X{P|kc$=56;R<9@*4TQx3nNh z(}__rg53g=MFV5*mbW6~XV0ItOXma&h%5UlNXXX=F!k4TOKX*j8aAR!obPg@62I+|!AYFD=mD;!tRLmGCpP0(MLG9TIs2WJ{a}PLv<;ya zgyiJfGDuu5Gst1fqq6E0NTJZ+sB~|`Xcyj(F_w`7eW&`M9C_qn@Th`0l#Ptgk0@DS zx(hmC;f|tDQ7+HHY%h zAs!tuqQfa@V^V7pZd0R;YE&yn8|0``hUzh@z(`_Y(sk*4I-8yj+ktFy3B;&WnT#KC z2}D?xF1ro_ab;XYiBI0}!%z429)ID6n_s-EyYIvc*A3m~uE3RJNBa6dK3-ih_HpiD z|2;px`QVRF_6^_jgSp#(ezNBxw>fayJN~)ArVW8J;q4~^Tg27L3_MR@(wSOk{W)x){$M+t+qtU=`V2a0;e;X^~|rG zCl8gQTguSkVsy9wRb(_|;L7x-bi66KEg82NP_+Tov#63qjg;YLhUdvCZJtA|QhBv# z5LnL1kz8V#fKY#k9Ocq?S+IF|8yuGGRFrzig65#fXtp(*hD_MR8;waSn_7LV7WuUE zT0EjfZCbQZiwXhMIyq9wL67CoB_>1&2ZaOP5pwlz`PFJL>uN#W=&=K&x5`F90n19t zg#&l;v(n4OEtLsB?aE&{mC%zHXIG|>J9I9S{$N*`x2&S&(B_)kuZn6xZ*yB`e)IkU zFMF9P-1>4CUy^t*wcf#-S5zXaSbIxSRgVArSI~>6K`&Ano*8l$8B)kEc`Z76Vc9}% zt>TTW(WuNxlUoI6n`|(o*<>;WZ&T5QNI@4Ox?&39S5!^Ph}Wx55+mheHkW!0kpamm zsYrM;eXf~%`d2Snx74(jer^#PJ*Q_j|Ld9kyOys(n+_k_e>n00s@%|ION{8aJ$%dD z(1CkXxYCoTbHLL@??q|_YfT3o+QfX*8QxosW~-q2wD((40x zv1l|^4AVg|@+-+D0wD;@FJ@z*D_f&Dd}Gy)#(ZPrz8%+`o`oIceV!XH9IWRa;;Odo zt#wYe7G!qa5gPx*cH8Ee&yIQSzqXNMv&W{B3pZKKWd#h|U19!4M`3F6=B}|V^`YL9;-MRR2!|o0WZ$Gav@EAWhODs0ayBW67chwz$Zqqw zh7w@#f3 z?7weURXY2KL~pIxUOO?rHSwG1wTe(+tj%85;|omL@F$jMY<+F6l8n-A*R|tgfVCWK zV<%v(g*bE5AVLGM15s({TbnUwWwC+9EQ>W-25AtuPGn$EH!&P^5TPJK9)u9qaturD z1PcKh>?*g=%S3czxpI(ABa--HH9dkLPkk$gotv=IayuiX-;Hc}8db>EO1T8f4O&#s z?RozWSiHWQoMSc{a_jThjfCSShLLUp+^uMX^IX0T=@K+*ojSp)Q5!VsPil}xqdu%g zlWLT&Mrze!Dk?}t8L3E>itbV%az>9+(-o>zmBN~yYDiC2q~AkBk7gBGr$T8EfTXvk zqx6hiAUhXXDe(;AjOZkX3OHoU;II*RlwmYv7`cp7lq(a2A5UO(9-~iTG=b4DMwJ*9 zV1y?!<}wyC*bNy2892k5k)UDHNwlKENFCj)FA1!Rti1!2CQH*U+P0=`PTRI^PTRKa 
zY1_7qY1{U+rfu8S?U}vz_wVnli?hzUtE=9Q%#3*Q$;f(Z)mu>+)uOAL(=A9~9IAzY zpY8;4u>C0EJ}At1#SKcA3<(@N*0(Ec<@VH~K+aB0yAW?`LMeG}idLALY#qk+v^U(= z3W6$7A22*%BjkNKoD5T{z^9&$@AkHLQL2cT7!rT%bPy7;I1+X&WFk!%ra855O~z#DRjccg!@`wN4kV>G8YQPoi8&(ehWoCLXP5&5{FR$ z)&eCy9IR~}qk-b?FOVx6_+vUpTz3w}m^~4*6yH-BD`&W%ObNRiaYDPgRzu;I`R59!KrBf#HN<^8FToB4B zE&yD#P}0?IktFFHnc%|WoZSbk-5862X5}@?U1cKs4C*}K-Yu@!*8#CT*^Np2ujZC+ z7l2bHgiD8LkE**e8XnfliYUv{lliCsfp#NfY_bldy9ML2JPQK_F|1*>4CAc`+qimX zn?P2>%77gXJda47C?7UoXO(i_;o<1Rs`*82#Zgc|e$n_|GQNr`55C6gB{^U$3Bs_~ z6GmkiFSyr!Kq^kwo^RM8)GOtZrn%}d0)E53I z?Aq)375Xb#BYDMqWq)Pe;$X$*Jcgg*BIOvu`L$(sGQ*PAG}?#cvA*J z``7qFOfj5)xW8W^_0eHn8NfVspgm1%fOYr>?S9EGAWnKmr2))FyrEKzu134j)Wm~~ z55hycQ8@fSqNg?Bs^PI8{J!z!25LV}^GGkZu)=fq%|05wqG@{?OI%Ozg{1Tde zih(@{FU*amDmp+OYaR7QK_m&=OC%wJJn}A<(d^lyzunJFEGg725{9%XeuK<`$_*vG zCVrndW&xu**NwdLVYdAy)TQsjz|Ej-3?SG_LaNV>0=XvX`g80w1Ph4{N8pFwZ0dQtlN<`gqAu32CwBs!k) z0MilZ4ei4YG#>8yVlV~91d-fdW8?hNXo2U!ZuvL$z_)I`+|ahJzE}V$JjO8B1~J*+ z8+9#ld4f9p$%8euE0jhS$>FSh>43f5QWjuu5YP?AX=-dJ6G(EC>L-Dsa?zy&;CUp5 zZiBZOW!!og8{}{UQ58;#GE}AW0Z8!`MrYNGbDbh)BqeqLJm3bw%qFOAujMAMD_)v3vv*%WgRp6cvU_U$o_eNlk61)y&LXlqw2f)M2 z%#Cl95`f(@4jC;pg%v3ot2^Vpar2?9l9kT&wdP`^>_UglhztM@4%cRfr9RcJ`eI76s}fXWT_~J^3YeIns5pmhxshns78|diH8*Q zGL>#=3K)CW=9P_#Bh6;(M8`dQxMvEIWI0)xKU-qSzwZ{}rjFNyKU32tmSG)31e4^5 z)>jGzwCo_#s*$N*gf&47u%Qm@e;DIDQ@-)ow1bFvAcJM2%qU-tR{T;W#qe4 z9nMD__5@-KbKqA{zNZv5rzYOLD)`*H#0Wz z#~WG98MYu%-=NP-Nhz-sp0K@*qEfb$LiPFU z48IRiew)CYc$OYJ&$&ZA`Q5#0u=)}x@rrW7JX3qCAA4`-yh}c6dfr#a-#eCZoehTT zHh#RNH<|p99<|9ifVUS9I%`9aJfzAMOU2EOl@CTP+F|ceJ>(;=d>qOrk8%krdfRLC zSf1dR5=1RB%u&Z}!pSE@$e;dLP)YI)xm8!VcND8D@1qhEL2Dx|6>NEVfa&upPI(Mh zeHO^+tM%FUBtWuxK`ZXLUBPJW8t!Zf^O!2ZiJ#TkRdzrTxR0j+*K~Jwkub7j1#@6; zoPekpE8Q^D?3e)Rv}a3#dM%G8d7WX3ph$AFI$`sL8NO>(XvJ`6rY6E1_7mdNx@^0;4yL5y*9=VT-c8u2>Y za&>4kH-yblf1DLCF$uKsNdgb}j%!lQP}~x|VjyUQL_m0-#VD89%i}UBS}H%PfN!k% zvVY$Isq}ITJoJmbOPJ9f?Es5430QVkSI?D*QNk$0%a(kt$j>;*Bj(3+s-F`z5>`LUqw*lbNuJ~oW3KsrZ!aI_VSqU!)}YeO zQ;WN({&YmakQ(65D_xVw(c<0g?9(2T+-}0~Sf^3_t 
zyMSRSulP;`UYyV8!Vz{Yyz*x|Kl51APC|GA(UIdpje`xfUozunIe>S(R($q)ttTKV ziMMf%b@ME^F6}vdve(fj+JPIr&?gJs&_0BP3H@vg@!b(Fn)UQvIyg*=A5RK|vH8jN z=h@Y{Gq}su_bd1VnR@$ty4oR5dd@SrYfzld8eQKUWwuS&pkb!O0XXopxoI?iAq*8f z$El}RB+rFMHr*k#EAH>|+1VGjk(N20W}ACq+y_k2A+*wTm#Grq0I3 z%iwd7;ICWy-fVPZ7tT{Rb$li%wh2l%TL8SMx2=8JoEKMt#=c|D>t_#_jQn`@QT%AM zo^67!5v8RqH`IMU|fxddY!Wlp&}aM>8Bxm zf3%3_P{Gx)r_3GTrDxqC420?9UCK_~fOZP-JKL=R_KNCJhNN9G9>RX<{6b$T#TPWB zYn6vhcG*LvcVSI9n*(or2eSV<3)p1B{ICjC&Ntwj0U!mMUAKiH_boa(M*LaIz1T$A z0eO6gm)4mS2++4XTYRGg-j?Qsje9X^}9fCTYRhv1UOKz!{oR02PAKnEkqj<~=#xzlMG_jbNTE);+? z2#a=)j#*u=gtc<3)F1YV%oa>*LTm(OZ+xsjx}oz){H!-4h9;xIdl>jV21(X>AahbL z(Op3i1GhYNk*Q{6Q?`rNzm}Q!aHD__xR4Z-jB&6x--u+OOOWm}10EJEIp$+6qn2|v0e{11IrM~3Hf=iXo}av7fc@)fSdZ&fgPr^;vcMZN zV=fPlu=R7Sb2m>_+GGbSPd!PU`&I8exbC;j94|pnRU+&2?Bh)hj=xr(f^yHZg4VS= z%Y$u>4y|p>d|Q(yTkqeWlO{)MIw7V+0?ZA#w7Ea4A3v{XaeKP>$hxT5088n>>vC#vV0q{!1w{o-cIu~~ z96tsDl3jL8RNXssB$ng-xX68+cXEP%sApn+xSG&9yMvWeTLDOEF<-$7DD^FAbykrK z&a8HEYEHvEETQq85o`-xgk@p8C-pT=q;iQ%6#VuD4oXG7}lj~320qeXz zKRY9dKC?Irx(UgtCxiwHw8Mb~LKZOpN)gJ3rf_~?rP@>2*)tFK!+m7)adH4*0pt)j z^i<{5k(Oa}oCx-mttjVJqg5lU@geL53w|x$bus?b|1|%!Y4hTXbL`t_8703@b-ApC z$=P4q$DPSp@7+wp&Ard$jWfTyg~|DUwq*C428W0FgLC=wip-AcKih|YX2;N(?wVs{ zNRkh~Oa1ycPR)xlZmP8bdy=XIi=DJ)>Pt1)jF;ua}U z+LIF_ZUjL!5kn8wD61F$5)7xL-=BHzq`CKFXda!{q;-1OvZyqiV;R7RP%Wh?ZkKKd za0iAKXSVn#ZiWQc2kf+aYZL!oZ6?=kd{Oz=h)B z{XWx#yf>PYiQY06cd3Bl5(Sm|XouVo33Y@4R064ujo?mm}=7VIJ==_4@FhWy(4 z+PtznM=a#>?XakH+}AeUB4;x3Hi@a$)?Z9sp(-B`Vv%5cI{`E45B03wHC73+*?> zMBl}&7S07VpElGRin^3cM8u6v1g+S(n})=lT_! 
z)@Ww>4T6ktH1V@7k_JbpKq8T% z1CmmU&rISAg0N7U7!rtP>@#uB@N>(?`NL2acLvhr>PTiws@wVoqr=ATae|lQIV2me zDln-A{B>fXSncV-5ahCw!ptf{kE8)ReQeQ*uYxcQPnfv@`#|l%zO$AD;m|eYN#!pJ z8{yXyHKn`TnyAWN9O_37Ne%+(#(@S=)H-FPS6Y+0Rx=jWBs<5TU8Qr7MiP_Z13TxU zMXll=@*O{a>}hg!{IuZTvUGGOto=^+nCRyz>LMh1p5W-p06}B*xw<|*Q1P2o%#8k; za#yA>V`BXYBz-7%U;dptp#rgB^1_^Q8W3oZZr))hZwLY-VKV@cu^7fk4wGD2Y5Y3T z3QqDTEDQ#~b-2Box^FT9Ct!hL;y9zxtLE($PaE8CchfIIPzCrB< zhz3APYHohSM^Hjx11IyL1>W3b+~Bypy?y-#o<#GJZ^Q}^)S(SWru=;)Edb#aISQdd zqB+p%Y;4$NI{$8*TC&(g;aYKc>ZfRT>fvfi?dGL@vL4pjt(}x^|E$0sCJq3S;jQ$K8F11+$Am-OMsK9Tz@j7jv`31H z1ut^ZBo!S$iVq)fJo^%G%o`c7$!GVub6)Vh;GNbC8Buv`Jq$1N^hCO2!p&`kIKao@ z&09dMU4P=uI$o|vQQ$JZse`2&h3?6}%&Yb2m_REG`b=D>F(;y&U&x~ZF17qu+?i%Q z-Yw z>5VWCJFoXaP(C8d<^Fx|j|J;pz7!Y7{vX`AZ|i2QW|~4wW&J zoz5dK8x6^zdgSDVJ^03(1IT-vT$D`J3v&N3Xud&27(LM7@dC&TAwUQu!1fCS6SrUY z4S;c_J;-+`pQRobsP|}^RRF}(z&^elZZh=lC-Mw}cU=THT-e!?cY_HymPjh<>SReQ z&MC~GG+wX51&x`bAK!X$%ulp4R%IC4E$pQ-+AvfA?`f7e3)ahQ>#YGjvnoa6CM3bM zYOf#<^feXG1>v{WIt{_xa`1!qkBEr-raXJMj;6u%E019cE3}ZOL|)0Egp?&zIrEkr z-g<{x(2FY31q(Mlo)H>QgMDFk1-=#L6lKwCvpWfa=etExj?DdlCuO&37rBn}g173)S zr74Oc-Ur&gmYoWX-yu0uumWZf*2znC@ejIJw?GntbRjNPM=XU(^Y=)4(}0?)8=F?9 z1a|{{@WK&GDNj*Rx{v ztHxyKYlaN}JTEBH~6TAxBOMX{NrNq|wiF-_LO8 zlGhg3=qND1#K1xzIMM4+~^vI9uBoB3W#85>*ZW>7GK6QhHNSRp%JSQ2tr(Qj^VWdqg#H#MNI1Z!<`C2a0Xi*Z6kzBkNKZ^7lT z;kl2yI>R}kl)2neYXYZD3Mvbg7>`ZWlRCVXI1q#HVda%PJ($6VGWL zdr^0i=d58IMm=7Pt6{cPdAgA|%U@>^-l^MC(7U&OEra4a-3;0a?13i*U||9OVD`hr zsdr8kkZNU^WG5~+(Bx8nf}Ewje4RA%f*t295)wYl#%*?7w?Io6FA+xw5CkNnnQu2z ztgt}(DNW0LMC^ZgDvrU6V@o=VtT`d3-uXsc2y~nAP1xmv()z@}c>BVWwm(7)m(N=a zTNA+V$tB>ROH)h#YC;SyQ2!PUMsD+q_YH_S!aW4sX0mIyaTj_w?M!l5#qh_q8h2Fy z2HYac1xW1V8uaW^>i$yEM>E$Od3(4S!<5l3a(OFe#ZKR5lJEX^@*IE&c%xyXVGzlW zH>WGT2IwMDs?i5n{)Z3(GT<=Z*DrzyA3neO7pDkZejaaV!W#^x5fW6)LlDw(u&8`H zWq)4Pkc3r&b8%o1itmQXX4Q_Y3WQfM7S5S4zw22$9;cfqpwF(0e|9Pa8-YQJ$l1)} zsGh&<9GK0Cj*pxcYQ~8jji=<8Gr$b2nXQX;7VMZ}@QXr{kA-b7l6dvp zXQlL)0PI<+=I`NFcgIu6+^#|vFM)eMU%U1FH56gOP8leQJ{3P%p?Li@ZNw+=k=#eC 
zT61rFVc=w5ap}pLu;KV@(lQ|fx8J0mD1(+_RDNDXIjL05rC{v*$-VBeNFG=j&ybvU z+Nq$X_I70AYGS^U^5GIb3g7D+yIbOtG|l7*^+Bh!#2Lrl;^xp^kjrU4xTIp34|MW* zpFQWuPbv6WieM>K>sjDJ6p5isy4}REHM`0vh(?+6Af-W#1XNw|?=#TT4YO2?^F~_x z-?Uqs{TXa0@|tV0t9wTyTwPjKYMOMn@b}P#JA6 zCEv1}0KEx5Ai`n+Y!PC(d`R(qP9YG`R}e&j2ob@H2wHW702a5qIt{MCKbl;8Yt{h= zUBOBAp;TD3$j~##_DtVZ6#ghvU8nW9;o^nuSGvLD99qSTzcT-Z{kVe03 zqg)M%yBgsj3gS})oQBIO*o14aZqwB1kV^}$rPsAE(PiZ+^7n73KbQGd_Xq#DHF^!K z8gp!RP{zfJSqxw29wf4|4e`jGP-k^NOOPj-Xx+|t*+vzHLnYL1!Pwsg zqMck{N^uXDP|pz>&gl>_M|;XcBvjLu$E5T&z@Jb4sB2xfLj4k@ z3<(~d&JleciEuLSflz{(0s!Cea&Du!o?LoYcUcT6k&{HsdqNRgSM{HOe> zFths4)JwsQU&v(@7YR|^tb36aRu$Rw?i}qbnS14lD zMydH2>Bdd=%bR6xZ}$MOL*F{z{IJ}Z9Hhvb4yzZPU3jRv)DK*12`K;qq_6fH`!qN; z>{fh8a=iKuwLVk(@1if(zB(ND+o`$UkiI(Dn=UW5F`Ee8v;Z;h>p18_M7Js{RM4Uh zD&wSe!s)H9>6|USR;xd|m-@iEeJ{noG7?2mz}?dWE@EAiyp*>3&sDJ&?LJcDR`}QPflAB*Lm%Q)x8dB@To?5O<>$ z&t_DeIK&{OXE&K9r#;&1YXob{!rDPaXjz|tRNnQ%7LiZZ@|BFMHLNoO?n>{WoUpL* zWiTPEcWO>S=f(N4j@?l-{NViJsN+9e3oJS!MZTc0v52W=hUg-Yhnf2Iih}=;PLVRs z*}hO~vqeuGMZ)h8a(ah!fS3`bTBKC49vP-Tc2d)kb!?%9W&Ju^zfroP8NqyKGeu?R zmpdvNEn5E=^&Ty(VjZD%mC_5>7q9r9*ymP(tf6r=?Xj22!DZu~%830u$ex|*1r}v9 zhWHketi7+dUQ8^>sSN#HE|O4FNY!}NtC^u?({uIidlN|RraR5!iVWw`Hq@YKNZwOS zVR$_llS-D&KayeuOY)>*cgN;}`)0MX;)p6Y-4u35_>`4fqv21AuC05US&4!uv|oe=WHuzOnF z-1lrKR1O5pO3x-mXvs{*%j+^d(4#h{pYi?WmC<(~R#6PW-Pand`l33k^XiQqGHRo9 z*8dGI)`g^)_9a`Sv8pLPzahLdK9coDb{*ypdQz0V_b;+K&!GJ>GGF&UxqlB_-aJUTLLuSBh)yCQQ4P)mgK*y^ygp z&u%JNwTRWI6bre3|3N>{fazT122OGogJQMEWE2wKj8}^p)h&nDOb(&3p5ZCCX|HMV zaEHR}>{?zaP48R8r8s_(A_lV6y}96Z>isWFo(4;7ll?G90|HjHVo1E*o)$6)C+_kH zXFi+SbkfydY)dQv^3r!9rU}`AyFB_9F5Jd&K1{N;%A3nv9|1KiG;vJ8vU6-il?gHN zfU9_HjY4v+;#R(7T|TUF*Dy5*123F;<@#pujBh=wDSMDF(d@l~kN&0f!BJzh#$jfesr05zcvYnA8fFR5cDA+bKkx$PKDJY1_iRqY9Z5>aKzeV zt%GvDq_u7rQzI!;?%!TWP8<>6Y%tty@_h88E^ChxZ*$SKu0OV=97Lq&tk0W1mnAoRxesyU46w(}&&N{Vy zU5>Fc$Q>`|&l3(Y->jHwv6z9`#hkD*VnBk4M|8PAsiwIHh>TcP(9-hlK5va=H&Zgm z6Pa|h$o&08_p<$cZ2qx8!PFUP=zyvCJ*&lpk=!p_uPJqu=;dp<%Xah%vvbSjj{0Pr 
zSEi}bPeQL_j?2J`PR8!ydg-j^Yoal*u^(9!yWx4BgAR87cljNi`6d{1CcydCaMa;J zUW7V!F$}GKsO}{f5vjf(079h3FR)NX)`kv__C|VEe?%L7b66;PMn-%({6C@wKAi?W zD>I!IK0c!cJ_9S;r$A4yfzSA<^11&mG5@EY^*<8320k-0%kTYDgW=QiZ|PI_U-h3_ zpNM~>GO}pkv#_xK*;yGF@IUuYTPB9j?mu;Y*ZkIHru$8W>9eQbIBb6yF#o3b+4gDu z*~ZNDHx51h-x$nn8u;u?OrLEmpVHs`|3>AR?C(zL{_F2=r~aw?tM#wXf4cpx z{agMY;=gNt>-;y?|Ki3!c>jwN|Hk+|xKHYTIq#3&AKSlN_gVgQ+n?J1=J^kwKg0dU zZU4C9v;6;y3;*pk=1&rTACCV$(*Nf5zhC-2%71JBpQHZIeEfB|{6_sf{{Nn%e;+Wv zHU1pj{~qD*N&PcIE-qRTGfPJ!ds-1oJx3!UBLf>lBU&jVYZFJ)Phb`v9$2VIq8Z7qozscPjj1lPT5(Ku4N!Ja;NuQZ-# zv^cLJZ?V8v4E(0!4K#a4QT4h<;yF|{+oe5JtU1$5>bu_>zh-b^`kQ0^ydb0rWo6@P zq1%){QgI&6B4P8?c0YPy9<<4bMN}%Y>e(Xm*OqEe#o$LfonxQ&T9)a3W#?%OkxPDm z#To<)mSjR*`Wf(f&t2U#t57{~_x*^IIpTiI)|rhIIj@5%fB_7)2rBjewmqyIc<(Xp}pxAP`A zpk0;bQ$JdwQ%AlM#K*Wmw8W=B7?40t1&EQ*@&OZk1!VM%8yS~2){RS@ltNUTBMj3Q zN>k>wn=fnl6{gw&7@PYo*WAiQrO6uav~l?t_(?t`^#;wcx|H#U!~Q<#=W835Y&>Mo z**`wAKVB~!&O3nl0CaJkfs`sPekd>DeS`!0HpjM(EL~hCd$m~tk)jb<)j5!TO>B4d zhNp2w=RzA?KBfW4&@Q&JcF{aIRd`&&Z4sk$tRfKQMXKEZ zlpqnJ0>k#4SR;Dg1&Mz{w%O~J^ZM?)z2SKI;sj8G0^o^IQcd_Uk#}-Y zzMLfYl1s3{_q6F@k(JBRTm3;Nj~psh#nf&QBFvO@H47~&c)BZb$|{ZfJg4vK;%8yNNo${(4boDo7RhE_oLGA<{;;3 ztO7}M^Qp}0sTS=6#PZ{#aSIWx_{`UE^N5e(>Q~PxEHK*HmoVsUv;}!Pv+HB_#X(w{ zi@IuyPAGy?#O@$3Y;-z--lf&LlgZXeuu0r-A%Mv=lOF+14fpZT`G$2)zbe}+oQ0+D zu8ls|o|lzX{7@eqlaLGxhl0Mo+10E|sH5}XS2h-LS2vfqek~AE>}`K+RyBy=uSNpR zB9i+>RehW^=V$$mEM}pCytw(4&(8ogS0gc|-kc_y!ikZ^jHVct-Gg?sUYOl#HlzRR zynvNk>Kv8iF{N}ZGD$kCBY~$!yk+tnBaHTtZmuuERY=N2vw*pb3#&NAQBvCcH^KAi z{d1B8R$L3q1S}K-H;f1@gD(-nV$$(%J72Ac^#GE8X);cJC%~;#VB9M*W)`&0BPsFp zuInpov^q4m9%V>B+cTmta~$QS7=|?$AUF6fQY13ZT;^URaxLew$zs-ix_j991Jb=! 
za!0dvtfoP7568KTs0R6trPNG^H>_HD%gX8Pm;8~4JsUdt>LJ8?yd5KThsucO!3Nf>Ybty6?ELS$&-(OSzL;?b>v`E*FN#ifEMKVnIX;Cfs-p< zxx6B5;p|;micMVMG&~?BAJ0vXW|(09;k7g#5U5$E)B@(DY>&BFsk(xIq|inPyD*jt zgQvLLl3Yr8@*4>xI1FCBo+ZvbCS}luWf5d2wWU9<(3}=lYhE)5w_0$IF?B<^CTk`; zJ9B((?Br=$TB`OswE;2j0EkmM`S!C)VdBXCsD7XxDU3f#D|TFkJ~P<> z5z(z;kaU{{kz)dyq=e5%MFgp^N4Sv8lR5-0LNx7sMu+mahaq&)r%vJ<#pi7m7Dpx@5a(nHKG;8wdu;D!qjx?jyIjCAHKfSWH| zAg+|E(4Kb;t&4sM*$^Va<(scFR>!!c4^x)2EDMpYb0=KZ7mm#8?+K_0B*@?8#*e=7nF}4m54Ww zPcOK7&P+~O7cV~9E3#W$T3ur9oGoI)scM>M9g8A)u`?y+-XVTDC~0$&ShFJ|;6{^# ziN~d=4KTM{^MQ;rOPlqb*zfL-%lw>p5h;Xr@|6Z}Rzzn^HOh z0lU1l?@Z4ObQOLFe#g?RkWKxr>+aX46I%tDZ>1f`{pC3dn$Ya7w3`sd0B-gkyLY`HRMW-vobK$GC)?&^gMMDPlf4cFAj_MEyWeaD&{$#i#6C!{`z{)J z`H;L>?PJ8NK*{a#PW^$Jzu*R#6ENB%|LXM!0uc-3Dk0$c$q(i2uqLY=JeCV;RMD+6 zqe0;N23FuBm9zr`NTeCKd=cQW#r}x)NMx8J1f4lq^b-Qdv`>Z&&IbXje?^za z6Z-S2WsCF@uU`O{#Ao{hn;p}Q{W`n344p!c-10lFHhb!a_E*MQ#%t5v zgC78uyaC!DAl_lEJcNi#LK&+>ufw}KM8L6vr;$v7>d}?B9cFl*S=T(db8y|4P2QH_t*qX+6g@Y&yP(OQM9k=`h?iDak0 zN*_KR7A+xy8#g^m`~kdCQZ1@+;oD}r=OVulzBt(nq9&mnlelX?Bhx8vO|n7y(U)jn zZFlhG<_V-TxbkHEuKrH_j^zKD+x)tFw>-9Sm>FG?D~c}b3B7^jFracX7$2su;Rkf( zO0*SBz6(Tp*cI3JGtfuw_Q*3FS9qT83-d2tOmT>oD2vJnSS@~*UsuHNAme~{%@9eZ zVzc9TV;xrYt;tm;Di4-hcqXR;o9Sit23wF;&Ca`~M=?87;jcF~;jgA{cn^F=KH$pV zp?U~h1NgUO_))xxGEgQV;|W6;Si%nx@G@LYYmMwB3-D_RXOLRB8G_EZBd!E)ivXae zvsuQ=klbtRUl%)rtUwH!z`0wY&xaPL54mCWT*1n3-Nr8@FDYK(-d?!dLGk)X(Xo$p zpy=c66+*8%A}R(OF>aWIs)*Z$z3t%#-7>HToa5S~v}N*w)7dy(m|6R^#?~frHqZB> z97tOfQ$iuC`=knP%AZi*CA^`1nDMqRwlDTSzISE&_`=_iJ|%<5lzdqZ@U_jSBjNhW zMgA)Fn0VQ$N>GI$T2s0$f2DSXqZLOZ*`&IV3w`1g^rf#YNSqCb*WZrKeTx)nNm9xX z+gogX57Zm&fni}_z`l6cdJC-pfl*G9ZxtT+QpZ$}<%zf{LUV+3#5iKZdPcIb+vO4T zRqz$%vX+gHyY<()Y|R)@y)~&2^v@v5b-f%$jW$ z5!{_D|E})uBeDUko9ySziy_59@n;zyy&bF(q!T2M3E@L;bKeg_@TQ7*UfWOmE=+Jw zOaiTPWiI-wx#exIb>~85vJMywZvEoq#bE<_14Nb{8c?t4QrfiRR2u3QOacIB$opT4pEX@ zQB`AO<8qiDl1)v*y8T*slj_V7UtF14zR&Zr`fGWGZ~5>y!*@c$c;Opi#;pyW!!*}l z+*bw<1n04#>0&rOeyi^b#r+NO{0IHLhbmp?-)?_Gbu}bbH)@;V)UI2iAnpdzx3sWd 
z_eQL)+PmrLS<{fxGVP9%xamn$QH?-D+wa-#Y7EVEQxs}c{Gz*BvNSsyF*7?VC99*O zM(LSTg2`c?NTVrCbVy?AH7O`Avi;TThweFB_YTG}LoqwU@R^!NFY4AFXsZMvpPV2+ z@7l=WP2SVr+^Eq1o8U>YnwdK|(uOjeBhTuBObDVTfzdQ%Mv^W^B5)_(C?IUVL5 zzY#?$Mo;C>ztUTs9J4(G>S~&lGdSR>{F-+iM}^B)L&z#C1A$aW&C@W-M^U!BXbz*G znQ|CSy-ew)8Y54B86lk(=~9oZM^#YWzwsI)243U_<99uulh`LhNHg7)5toI zervy=-YKc}{aqtC=)QyDX{fI30MF%wL5I!xw14X2=+RjiH++1Tf{Xpz>XGnebERo6 zHK$P_U~TP9WB*BAxq^P68?o8UdVjS3BfS7aSU^plx}cPZUCKDtMl=?l#qPM7?kR8d<|v9#2BeDk~6#b(v$DJ zfP|!ULxr_%r=V&~njN^hc#Qw``VW{X6gTqnVk32>JE{s# z`!duHAZ4A81P3x_jkJEi!E>TZ(}g3$3ntC z!c()*3M-4HscuxS>fkuf8x+CMd7O^Vfs31;4_%>*zL{A;tiNe6*Cct}f;GaAjd@Fsr5v5N$KsXM^oHK%D>51@0W< zE$N|u#94dS@Mc>@Ospo%B+Onx-H!A$D6<}uWnTd4)8F0}c10%18*l&nfYkuUknN#;aA(8~)jc!QfEy+BMFaOj6gj;#8|$_YO! zMR~G{ODi{j=>jW4T*e|h7chV6>})kip|X*x`NJJ2SiwA?@Dc$#j<@Xa7OX@RVDj!A zEG1wuQAvV$M^NHz1OHOB43yff*`6=0Za6*BPS4?2}EY2*o;WQ-7}Da9Asb#r$fIS zAb+ta287#BakE9V#U9fre8r86hdhr%>tWI@3i4rnQ9-&8sVqQpbOCG8?g2l<`es1_ zpU8O*E-uTX1${Gqa%Gp24U04QX`lASd+;XV0ada)=qhyOcCl}ZaYv^TJ^Z|wB|;4v zU)yXDI}3Ycy26N4h&GD|`Y$Nd?}YH?kmPTU z#h_dz^Ltphm(SQ$Ez=G{f`LY%=Z~sh&tQV@Ju$lqCFu!6m9B3NMCP%_;37=UX$Z(6 zS9TumT&6b%Xix(c>X&MRxhS3pzTU57x7)!75c=G70^3(T^*C1#S6QhSz6epko4q^$ zH1D|H89qt8-yiJWMPGM*hMv4N*Phfv-mRdw|mG-(9iw^_is)UF`or3Pu@I+N}$&mxRv-)@G@O@-JYr3^t(4932=j30l@e}W9w zw9Ph7cwo@L*P$BOifqZbXl|YvT=`IoGR+Z-aI+Pix@LO~nH!{f_Jczmt1E zLxC<-hGYacIbs$vAlNfKu`4mI6DD4vkU^EDo#8O4F3d6TempHAic8`WKqJyI#%m{6mFdMs|o~*xl>y?8L>}5%7TeA+iD^Kl)u0= z1;w#ZC48JNzR3VJTdc+c)!xYq=S=%TnDg1%v z&MW54_1Gvj`;2poB@@jA83xbJOT;m}n>yBk>yp%@DR>I+XzD+`ji+XsYSy$8Nj1D#q@1QnY z*vfl`%CaMI%y|u#$^!xTyhP^l?OnsRI|X{IUtD)TIxd8{Y4#5a^<@`aF_Z#c2iPbo z;_~cT5^hUibyyl1XS8NL*xdaBRrif2(mmAlaXL{eMD{EPqAU7(PROOe~+F!2gb}vHzp{8AVZYvo)fX zHgeQ6)N|CMRdmvK{9U0cE%i?@g^`Y){@(!sE?!q#Jp&6PM|^!F6EkaW!i%;pLVPnr zZbB7iX*y|JK_gQ$F?V|-MRyq`19uApc0)p59xhi7S1Vhq&u;Kttt_n_I9$02^*+NC zMjW5=?`m2?{68v=7Tko4H1yQ0bo7KgP+azg#vBSlB7YlwmbeK`9UW~sXlXy+y-ef6 zNMmDfLQBui&Q43mK+C{D{i#9i;AZWp=SpquK=coSkdcFdy_v0}nT<96Z$v$P8z)C@ 
zLPGq%ac%#FYwbYuhZv24jTP-@11&ubBkgY>+TWJ{Li(HCziR$(u?-FWL)q5J-trG& zLjzhPOCu{IYe$Dqc=|u^h6Wr4pW$Uk8+)EVqgFJs`%ADi;}JBqH*;__)3e5xwY4>} zx6-rzi_K@#U&0?{+J8g*AF}Q zkEOJYp_#GU|BmIqH2+Vh>yxUq^#8jHtgQa=g0wV;kd1-U?}-!_;&J+XPL4x_PDqGf zM2Mc6zE;07kI|c3p}*sRnheW*NXOw@>)h|*eksH6{#ux+9w z{*c;)q=^mjMPqDit0qWn;+aC}&OOJNHL;-R*iE8iQ&p2cR0KuMno!adoR*sJ^T3xt{hHpuE}DmFRJ`MErTzy9KN^ z7nwXw`{pI0t;OvKw{{J=8aH`(_a2RFrUspT;g;Us1bkpxOL%LnqgOM$wlDa-eosw3 zhhnKd%1qOOQpuEFt(0CW)%~*uOz`)eV204s?=FLHi3_vgrD0z@aD4$R7s! zcE;B?JaZ%&`R?Gc_YQ@6latL8SEk?CH+3%bV(|FD%!c9e(tGAVT&P>$T{wJcVqtRn zZjF5tDAStrXD>pQ+;Ck1QXOINR}RC5sf8ku`0jrBzLZ?p%KVSaRD$L|i;vQogXZY;ijGbW`5m&civ2 zqYoyFl4I6yr|J^MaK+t^%>{Vp?!Iduyt93C`K5PrhfaUJ9J zZMc%Fwzlat$@<5$Z4=o#(n5NYNi?tII+p1+ik~zDY;68{rEs93*DqttFmX;ohN3(* zEJ6ky*=jx;&})rk63bxYMS2i>EYk@Yv5p|7ep}9UhTaEbux9*v5D=5#FbNKm;D9)z z@n9`r8Nne44ne#Rp|XL`&`OQxAUFiUAqWnO;IIe|i{P*b4#;QZAM9&bMsRqY{%Q_c z1cya%*aQcxUo@UgaM%QgO>n>-hQ}i~Y=XmUr&rk=)n~|Eh%;KNu-9Q*f&jaDqc8I3V_DJovf=mJu8}!2xHyY75z+$l#r$nuc=;=cPYS1(XMB?h4&N zH4@0YFWz|1YE39$J5Xx|p-Whoem!Y9Qni(~4rj8WXA3yrR2dW|s?1?(Z2^g^gJ(Wl zPo~y6kfes8oWGtdy{=2d+q-()cqjn>l3BS!>Hqn~Y58i@9#1Tn=-`tyStzh+Q^U?^ lC~&K82g(29UAQUk_G`;dg}uvVYvQ2y5#pvg&e?6p{{lyx7m@$~ literal 0 HcmV?d00001 From 6cc75397f5476292044b0958c3d8faf1dd83b014 Mon Sep 17 00:00:00 2001 From: Jinfeng Date: Thu, 16 Nov 2023 20:47:36 -0800 Subject: [PATCH 02/52] Scrape tool created. Subtitles scraped. 
---
 .idea/.gitignore                                 |    3 +
 .../CS410_Fall2023_CourseProject_TeamCAHJ.iml    |    8 +
 .../inspectionProfiles/profiles_settings.xml     |    6 +
 .idea/misc.xml                                   |    7 +
 .idea/modules.xml                                |    8 +
 .idea/vcs.xml                                    |    6 +
 ScrapeSubtitle.py                                |  132 +
 geckodriver.log                                  | 1011 ++++
 subtitles.json                                   | 5087 +++++++++++++++++
 9 files changed, 6268 insertions(+)
 create mode 100644 .idea/.gitignore
 create mode 100644 .idea/CS410_Fall2023_CourseProject_TeamCAHJ.iml
 create mode 100644 .idea/inspectionProfiles/profiles_settings.xml
 create mode 100644 .idea/misc.xml
 create mode 100644 .idea/modules.xml
 create mode 100644 .idea/vcs.xml
 create mode 100644 ScrapeSubtitle.py
 create mode 100644 geckodriver.log
 create mode 100644 subtitles.json

diff --git a/.idea/.gitignore b/.idea/.gitignore
new file mode 100644
index 0000000000..26d33521af
--- /dev/null
+++ b/.idea/.gitignore
@@ -0,0 +1,3 @@
+# Default ignored files
+/shelf/
+/workspace.xml
diff --git a/.idea/CS410_Fall2023_CourseProject_TeamCAHJ.iml b/.idea/CS410_Fall2023_CourseProject_TeamCAHJ.iml
new file mode 100644
index 0000000000..0f66176ec7
--- /dev/null
+++ b/.idea/CS410_Fall2023_CourseProject_TeamCAHJ.iml
@@ -0,0 +1,8 @@
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/.idea/inspectionProfiles/profiles_settings.xml b/.idea/inspectionProfiles/profiles_settings.xml
new file mode 100644
index 0000000000..105ce2da2d
--- /dev/null
+++ b/.idea/inspectionProfiles/profiles_settings.xml
@@ -0,0 +1,6 @@
+
+
+
+
+
+
\ No newline at end of file
diff --git a/.idea/misc.xml b/.idea/misc.xml
new file mode 100644
index 0000000000..cad43f9617
--- /dev/null
+++ b/.idea/misc.xml
@@ -0,0 +1,7 @@
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/.idea/modules.xml b/.idea/modules.xml
new file mode 100644
index 0000000000..0afb6bbefa
--- /dev/null
+++ b/.idea/modules.xml
@@ -0,0 +1,8 @@
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/.idea/vcs.xml b/.idea/vcs.xml
new file mode 100644
index 0000000000..35eb1ddfbb
--- /dev/null
+++ b/.idea/vcs.xml
@@ -0,0 +1,6 @@
+
+
+
+
+
+
\ No newline at end of file
diff --git a/ScrapeSubtitle.py b/ScrapeSubtitle.py
new file mode 100644
index 0000000000..40bc82865b
--- /dev/null
+++ b/ScrapeSubtitle.py
@@ -0,0 +1,132 @@
+import time
+import json
+from selenium import webdriver
+from selenium.webdriver.firefox.options import Options
+from bs4 import BeautifulSoup
+import re
+
+# The driver is created below, once the Firefox options have been configured
+coursera_url = "https://www.coursera.org"
+# ---------------------------------------------------------
+
+def get_soup(url):
+    driver.get(url)
+    time.sleep(20)
+
+    # Get the page source and parse the HTML content
+    page_source = driver.page_source
+    soup = BeautifulSoup(page_source, 'html.parser')
+    return soup
+
+def login():
+    # Open the login page
+    login_url = coursera_url + "/?authMode=login"
+    driver.get(login_url)
+
+    # Wait until the user has logged in manually
+    input("Log in and navigate to the course page, then press 'Enter' here")
+
+def get_week_urls():
+
+    # current_url = driver.current_url
+    # print("Current: " + current_url)  # To Be Done
+    current_url = "https://www.coursera.org/learn/text-mining/home/week/1"
+
+    if "https://www.coursera.org/learn/" in current_url:
+        week_urls = []
+
+        soup = get_soup(current_url)
+        # Count the number of weeks this course has
+        # num_weeks = len(soup.find_all())  # To Be Done
+
+
+        for i in range(6):
+            week_url = current_url[:-1] + str(i + 1)
+            week_urls.append(week_url)
+        print(week_urls)
+        return week_urls
+    else:
+        input("Navigate to the right page, then press 'Enter'.")
+        return get_week_urls()
+
+def get_lecture_urls(week_url):
+    lecture_urls = []
+    soup = get_soup(week_url)
+
+    elements = soup.find_all("div", attrs={"data-test": "WeekSingleItemDisplay-lecture"})
+    for element in elements:
+        a_tag = element.find('a')
+        if a_tag and 'href' in a_tag.attrs:
+            href_value = a_tag['href']
+            lecture_urls.append(coursera_url + href_value)
+        else:
+            print("href attribute not found")
+    print(lecture_urls)
+    return lecture_urls
+
+def get_lecture_subtitles(lecture_url):
+
+    soup = get_soup(lecture_url)
+    subtitles = []
+
+    # Find all div elements that contain subtitles
+    pattern = re.compile(r'\bcss-1shylkf\b')
+    elements = soup.find_all('div', class_=pattern)
+    if len(elements) == 0:
+        print("No value retrieved")
+    else:
+        print("Retrieved")
+
+    for element in elements:
+        # Extract the timestamp
+        button = element.find('button', class_='timestamp')
+        timestamp = button.contents[-1].strip()
+
+        # Extract all phrase elements and concatenate the text of all subtitles
+        phrases = element.find_all('div', class_='phrases')
+        text_content = ' '.join(phrase.get_text().strip() for phrase in phrases)
+
+        # Append the subtitles to the list as a dictionary
+        subtitles.append({timestamp: text_content})
+
+    # Print or process the subtitles
+    # print(subtitles)
+    return subtitles
+
+# ---------------------------------------------------------
+
+# Set up Firefox options
+options = Options()
+# options.add_argument('--headless')
+options.add_argument('--disk-cache-dir=/Users/jnfng_w/Documents/Cofig')
+# Disable image loading (Firefox preference) to speed up page loads
+options.set_preference("permissions.default.image", 2)
+driver = webdriver.Firefox(options=options)
+
+
+login()
+
+# Get course name  # To Be Done
+course_name = "Text Mining and Analytics"
+week_urls = get_week_urls()
+subtitles_of_course = []
+for week_url in week_urls:
+
+    # Which week
+    week = "Week " + week_url.rsplit("/", 2)[-1]
+    lecture_urls = get_lecture_urls(week_url)
+    subtitles_of_week = []
+    for lecture_url in lecture_urls:
+
+        # Get the lecture title
+        lecture_title = lecture_url.rsplit("/", 2)[-1]
+        lecture_subtitles = get_lecture_subtitles(lecture_url)
+        subtitles_of_week.append({lecture_title: lecture_subtitles})
+    subtitles_of_course.append({week: subtitles_of_week})
+subtitle_package = {course_name: subtitles_of_course}
+
+# Writing a JSON file
+with open('subtitles.json', 'w') as json_file:
+    json.dump(subtitle_package, json_file,
indent=4) + + diff --git a/geckodriver.log b/geckodriver.log new file mode 100644 index 0000000000..1ea71146b5 --- /dev/null +++ b/geckodriver.log @@ -0,0 +1,1011 @@ +1700127513505 geckodriver INFO Listening on 127.0.0.1:60211 +1700127514535 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... "--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofile09LM5N" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700127530927 Marionette INFO Marionette enabled +1700127531414 Marionette INFO Listening on port 60294 +Read port: 60294 +1700127531653 RemoteAgent WARN TLS certificate errors will be ignored for this session +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +1700127561075 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +1700127679934 geckodriver INFO Listening on 127.0.0.1:61026 +1700127680957 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileTUbkKw" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700127681465 Marionette INFO Marionette enabled +1700127681835 Marionette INFO Listening on port 61045 +Read port: 61045 +1700127682056 RemoteAgent WARN TLS certificate errors will be ignored for this session +1700127685268 Marionette INFO Stopped listening on port 60294 +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +1700127711505 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. +1700127761093 geckodriver INFO Listening on 127.0.0.1:61458 +1700127762117 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofilelCJAXc" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700127762611 Marionette INFO Marionette enabled +1700127762906 Marionette INFO Listening on port 61477 +Read port: 61477 +1700127763113 RemoteAgent WARN TLS certificate errors will be ignored for this session +1700127766234 Marionette INFO Stopped listening on port 61045 +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +1700127792894 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript error: resource://devtools/shared/network-observer/NetworkResponseListener.sys.mjs, line 583: NS_ERROR_INVALID_CONTENT_ENCODING: Component returned failure code: 0x804b001b (NS_ERROR_INVALID_CONTENT_ENCODING) [nsIStreamListener.onDataAvailable] +JavaScript error: resource://devtools/shared/network-observer/NetworkResponseListener.sys.mjs, line 583: NS_ERROR_INVALID_CONTENT_ENCODING: Component returned failure code: 0x804b001b (NS_ERROR_INVALID_CONTENT_ENCODING) [nsIStreamListener.onDataAvailable] +console.error: "Unable to find target with innerWindowId:6442450967" +console.error: "Unable to find target 
with innerWindowId:6442450967" +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +console.warn: "Listener for event 'frame' did not return a promise." +JavaScript error: resource://devtools/shared/network-observer/NetworkResponseListener.sys.mjs, line 583: NS_ERROR_INVALID_CONTENT_ENCODING: Component returned failure code: 0x804b001b (NS_ERROR_INVALID_CONTENT_ENCODING) [nsIStreamListener.onDataAvailable] +console.warn: "Listener for event 'frame' did not return a promise." +JavaScript error: , line 0: NotFoundError: No such JSWindowActor 'DevToolsFrame' +JavaScript error: , line 0: NotFoundError: No such JSWindowActor 'DevToolsFrame' +JavaScript error: , line 0: NotFoundError: No such JSWindowActor 'DevToolsFrame' +1700128567696 Marionette INFO Stopped listening on port 61477 +1700128577366 geckodriver INFO Listening on 127.0.0.1:65228 +1700128578390 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileft1KDm" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700128579108 Marionette INFO Marionette enabled +1700128579382 Marionette INFO Listening on port 65243 +Read port: 65243 +1700128579566 RemoteAgent WARN TLS certificate errors will be ignored for this session +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +1700128609169 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +1700130672665 Marionette INFO Stopped listening on port 65243 +JavaScript error: , line 0: NotFoundError: No such JSWindowActor 'DevToolsFrame' +JavaScript error: , line 0: NotFoundError: No such JSWindowActor 'DevToolsFrame' +JavaScript error: , line 0: NotFoundError: No such JSWindowActor 'DevToolsFrame' +1700130675184 geckodriver INFO Listening on 127.0.0.1:55089 +1700130676207 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileWd3gJk" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700130677702 Marionette INFO Marionette enabled +1700130678074 Marionette INFO Listening on port 55112 +Read port: 55112 +1700130678209 RemoteAgent WARN TLS certificate errors will be ignored for this session +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +1700130710738 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript error: resource://devtools/shared/network-observer/NetworkResponseListener.sys.mjs, line 583: NS_ERROR_INVALID_CONTENT_ENCODING: Component returned failure code: 0x804b001b (NS_ERROR_INVALID_CONTENT_ENCODING) [nsIStreamListener.onDataAvailable] +console.error: (new TypeError("container.node.targetFront is null", "resource://devtools/client/inspector/markup/markup.js", 2401)) +JavaScript error: resource://devtools/shared/network-observer/NetworkResponseListener.sys.mjs, line 583: NS_ERROR_INVALID_CONTENT_ENCODING: Component returned failure code: 0x804b001b (NS_ERROR_INVALID_CONTENT_ENCODING) 
[nsIStreamListener.onDataAvailable] +console.error: (new TypeError("container.node.targetFront is null", "resource://devtools/client/inspector/markup/markup.js", 2401)) +JavaScript error: resource://devtools/shared/network-observer/NetworkResponseListener.sys.mjs, line 583: NS_ERROR_INVALID_CONTENT_ENCODING: Component returned failure code: 0x804b001b (NS_ERROR_INVALID_CONTENT_ENCODING) [nsIStreamListener.onDataAvailable] +JavaScript error: resource://devtools/shared/network-observer/NetworkResponseListener.sys.mjs, line 583: NS_ERROR_INVALID_CONTENT_ENCODING: Component returned failure code: 0x804b001b (NS_ERROR_INVALID_CONTENT_ENCODING) [nsIStreamListener.onDataAvailable] +1700132043223 geckodriver INFO Listening on 127.0.0.1:58607 +1700132044251 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... "--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileskhVAw" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700132046005 Marionette INFO Marionette enabled +1700132046515 Marionette INFO Listening on port 58638 +Read port: 58638 +1700132046674 RemoteAgent WARN TLS certificate errors will be ignored for this session +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +1700132050940 Marionette INFO Stopped listening on port 55112 
+JavaScript error: , line 0: NotFoundError: No such JSWindowActor 'DevToolsFrame' +JavaScript error: , line 0: NotFoundError: No such JSWindowActor 'DevToolsFrame' +JavaScript error: , line 0: NotFoundError: No such JSWindowActor 'DevToolsFrame' +1700132077951 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. 
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +1700132236346 geckodriver INFO Listening on 127.0.0.1:59547 +1700132237372 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... "--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileUu8vtV" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700132237985 Marionette INFO Marionette enabled +1700132238551 Marionette INFO Listening on port 59566 +Read port: 59566 +1700132238765 RemoteAgent WARN TLS certificate errors will be ignored for this session +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +1700132244245 Marionette INFO Stopped listening on port 58638 +1700132268071 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is 
not a valid URL. +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +1700132521360 Marionette INFO Stopped listening on port 59566 +1700132524457 geckodriver INFO Listening on 127.0.0.1:61052 +1700132525481 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofilegxA5jU" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700132526794 Marionette INFO Marionette enabled +1700132527158 Marionette INFO Listening on port 61075 +Read port: 61075 +1700132527392 RemoteAgent WARN TLS certificate errors will be ignored for this session +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. 
+JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +1700132556826 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +1700132897583 geckodriver INFO Listening on 127.0.0.1:62794 +1700132898611 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileFrNLXR" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700132899502 Marionette INFO Marionette enabled +1700132900026 Marionette INFO Listening on port 62813 +Read port: 62813 +1700132900172 RemoteAgent WARN TLS certificate errors will be ignored for this session +1700132903514 Marionette INFO Stopped listening on port 61075 +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +1700132929815 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +1700132982291 geckodriver INFO Listening on 127.0.0.1:63343 +1700132983326 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofile8P1p8x" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700132983850 Marionette INFO Marionette enabled +1700132984132 Marionette INFO Listening on port 63358 +Read port: 63358 +1700132984305 RemoteAgent WARN TLS certificate errors will be ignored for this session +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://browser.sentry-cdn.com/7.43.0/bundle.tracing.es5.min.js, line 2: Script terminated by timeout at: +sentryWrapped@https://browser.sentry-cdn.com/7.43.0/bundle.tracing.es5.min.js:2:56209 + +1700132988326 Marionette INFO Stopped listening on port 62813 +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +1700133014703 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://browser.sentry-cdn.com/7.43.0/bundle.tracing.es5.min.js, line 2: Script terminated by timeout at: +sentryWrapped@https://browser.sentry-cdn.com/7.43.0/bundle.tracing.es5.min.js:2:56209 + +1700133092197 Marionette INFO Stopped listening on port 63358 +1700133097250 geckodriver INFO Listening on 127.0.0.1:64018 +1700133098273 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileP5WYJN" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700133098763 Marionette INFO Marionette enabled +1700133098938 Marionette INFO Listening on port 64037 +Read port: 64037 +1700133099167 RemoteAgent WARN TLS certificate errors will be ignored for this session +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +1700133128888 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript error: resource://devtools/shared/network-observer/NetworkResponseListener.sys.mjs, line 583: NS_ERROR_INVALID_CONTENT_ENCODING: Component returned failure code: 0x804b001b (NS_ERROR_INVALID_CONTENT_ENCODING) [nsIStreamListener.onDataAvailable] +JavaScript error: , line 0: NotFoundError: No such JSWindowActor 'DevToolsFrame' +JavaScript error: , line 0: NotFoundError: No such JSWindowActor 'DevToolsFrame' +JavaScript error: , line 0: NotFoundError: No such JSWindowActor 'DevToolsFrame' +1700176420420 geckodriver INFO Listening on 127.0.0.1:59096 +1700176421449 mozrunner::runner INFO Running command: 
MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... "--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofile8nTq3D" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700176423235 Marionette INFO Marionette enabled +1700176423710 Marionette INFO Listening on port 59119 +Read port: 59119 +1700176423809 RemoteAgent WARN TLS certificate errors will be ignored for this session +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +1700176453318 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://browser.sentry-cdn.com/7.43.0/bundle.tracing.es5.min.js, line 2: Script terminated by timeout at: +sentryWrapped@https://browser.sentry-cdn.com/7.43.0/bundle.tracing.es5.min.js:2:56209 + +1700176498297 Marionette INFO Stopped listening on port 59119 +1700177955176 geckodriver INFO Listening on 127.0.0.1:49464 +1700177956182 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofilej1Beo0" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700177957409 Marionette INFO Marionette enabled +1700177957791 Marionette INFO Listening on port 49494 +Read port: 49494 +1700177958032 RemoteAgent WARN TLS certificate errors will be ignored for this session +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +1700177988696 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +1700178080698 geckodriver INFO Listening on 127.0.0.1:50151 +1700178081712 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileuKhj7f" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700178082496 Marionette INFO Marionette enabled +1700178082813 Marionette INFO Listening on port 50170 +Read port: 50170 +1700178082998 RemoteAgent WARN TLS certificate errors will be ignored for this session +1700178087077 Marionette INFO Stopped listening on port 49494 +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +1700178112729 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: 
https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://browser.sentry-cdn.com/7.43.0/bundle.tracing.es5.min.js, line 2: Script terminated by timeout at: +sentryWrapped@https://browser.sentry-cdn.com/7.43.0/bundle.tracing.es5.min.js:2:56209 + +JavaScript error: resource:///modules/sessionstore/SessionStore.sys.mjs, line 3741: TypeError: this._windows[aWindow.__SSi] is undefined +1700178301520 Marionette INFO Stopped listening on port 50170 +console.warn: TopSitesFeed: Failed to fetch data from Contile server: NetworkError when attempting to fetch resource. +console.error: "Failed to fetch https://spocs.getpocket.com/user:" "NetworkError when attempting to fetch resource." 
+JavaScript error: resource:///modules/sessionstore/SessionStore.sys.mjs, line 3741: TypeError: this._windows[aWindow.__SSi] is undefined +JavaScript warning: https://browser.sentry-cdn.com/7.43.0/bundle.tracing.es5.min.js, line 2: Script terminated by timeout at: +sentryWrapped@https://browser.sentry-cdn.com/7.43.0/bundle.tracing.es5.min.js:2:56209 + +1700178350270 Marionette INFO Stopped listening on port 64037 +console.warn: TopSitesFeed: Failed to fetch data from Contile server: NetworkError when attempting to fetch resource. +console.error: "Failed to fetch https://spocs.getpocket.com/user:" "NetworkError when attempting to fetch resource." +1700178355107 geckodriver INFO Listening on 127.0.0.1:51502 +1700178356129 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... "--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileIRu7sp" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700178356860 Marionette INFO Marionette enabled +1700178357228 Marionette INFO Listening on port 51513 +Read port: 51513 +1700178357398 RemoteAgent WARN TLS certificate errors will be ignored for this session +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +console.warn: LoginRecipes: "Falling back to a synchronous message for: 
https://www.coursera.org." +1700178387183 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. 
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: 
https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +1700178742776 Marionette INFO Stopped listening on port 51513 +JavaScript error: , line 0: NotFoundError: No such JSWindowActor 'DevToolsFrame' +JavaScript error: , line 0: NotFoundError: No such JSWindowActor 'DevToolsFrame' +1700180031920 geckodriver INFO Listening on 127.0.0.1:56930 +1700180032936 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... "--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofile75aoM8" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700180033746 Marionette INFO Marionette enabled +1700180034157 Marionette INFO Listening on port 56949 +Read port: 56949 +1700180034301 RemoteAgent WARN TLS certificate errors will be ignored for this session +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +1700180063922 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript error: resource:///modules/sessionstore/SessionStore.sys.mjs, line 3741: TypeError: this._windows[aWindow.__SSi] is undefined +JavaScript warning: https://browser.sentry-cdn.com/7.43.0/bundle.tracing.es5.min.js, line 2: Script terminated by timeout at: +sentryWrapped@https://browser.sentry-cdn.com/7.43.0/bundle.tracing.es5.min.js:2:56317 + +Exiting due to channel error. +Exiting due to channel error. +Exiting due to channel error. +Exiting due to channel error. 
+1700180180136 geckodriver INFO Listening on 127.0.0.1:57652 +1700180181162 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... "--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileLqGiwD" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700180181767 Marionette INFO Marionette enabled +1700180181980 Marionette INFO Listening on port 57671 +Read port: 57671 +1700180182051 RemoteAgent WARN TLS certificate errors will be ignored for this session +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +1700180211987 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript error: resource:///modules/sessionstore/SessionStore.sys.mjs, line 3741: 
TypeError: this._windows[aWindow.__SSi] is undefined +console.error: (new TypeError("watcher.browserElement.browsingContext is null", "resource://devtools/server/actors/watcher/target-helpers/frame-helper.js", 193)) +1700180481274 geckodriver INFO Listening on 127.0.0.1:59023 +1700180482297 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... "--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofilexmWM72" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700180482935 Marionette INFO Marionette enabled +1700180483175 Marionette INFO Listening on port 59038 +Read port: 59038 +1700180483330 RemoteAgent WARN TLS certificate errors will be ignored for this session +1700180486554 Marionette INFO Stopped listening on port 57671 +console.warn: TopSitesFeed: Failed to fetch data from Contile server: NetworkError when attempting to fetch resource. +console.error: "Failed to fetch https://spocs.getpocket.com/user:" "NetworkError when attempting to fetch resource." +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
+1700180513101 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. 
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +1700180735537 geckodriver INFO Listening on 127.0.0.1:60179 +1700180736564 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileaC3zVC" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700180737953 Marionette INFO Marionette enabled +1700180738370 Marionette INFO Listening on port 60202 +Read port: 60202 +1700180738622 RemoteAgent WARN TLS certificate errors will be ignored for this session +1700180741302 Marionette INFO Stopped listening on port 59038 +JavaScript error: , line 0: NotFoundError: No such JSWindowActor 'DevToolsFrame' +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +1700180768095 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +1700180870626 Marionette INFO Stopped listening on port 60202 +1700180873669 
geckodriver INFO Listening on 127.0.0.1:60886 +1700180874693 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... "--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileR9cihB" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700180875190 Marionette INFO Marionette enabled +1700180875471 Marionette INFO Listening on port 60905 +Read port: 60905 +1700180875676 RemoteAgent WARN TLS certificate errors will be ignored for this session +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +1700180905340 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +1700180992552 geckodriver INFO Listening on 127.0.0.1:61491 +1700180993576 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileM1Cv6l" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700180994104 Marionette INFO Marionette enabled +1700180994298 Marionette INFO Listening on port 61510 +Read port: 61510 +1700180994544 RemoteAgent WARN TLS certificate errors will be ignored for this session +1700180997818 Marionette INFO Stopped listening on port 60905 +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. 
+JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +1700181024188 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +1700181099096 geckodriver INFO Listening on 127.0.0.1:62056 +1700181100121 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofile6iWxmJ" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700181100705 Marionette INFO Marionette enabled +1700181101051 Marionette INFO Listening on port 62078 +Read port: +1700181101052 geckodriver::browser WARN Failed fo convert to u16 +Read port: 62078 +1700181101290 RemoteAgent WARN TLS certificate errors will be ignored for this session +1700181104360 Marionette INFO Stopped listening on port 61510 +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +1700181131450 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +1700183897304 geckodriver INFO Listening on 127.0.0.1:50678 +1700183898326 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileJcqKfb" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700183899996 Marionette INFO Marionette enabled +1700183900435 Marionette INFO Listening on port 50701 +Read port: 50701 +1700183900730 RemoteAgent WARN TLS certificate errors will be ignored for this session +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +1700183930044 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. 
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +1700184032541 geckodriver INFO Listening on 127.0.0.1:51371 +1700184033569 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofile0dZr3G" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700184034484 Marionette INFO Marionette enabled +1700184034969 Marionette INFO Listening on port 51394 +Read port: 51394 +1700184035167 RemoteAgent WARN TLS certificate errors will be ignored for this session +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +1700184039174 Marionette INFO Stopped listening on port 62078 +1700184042444 Marionette INFO Stopped listening on port 50701 +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +1700184065056 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +1700184610126 geckodriver INFO Listening on 127.0.0.1:53854 +1700184611150 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileOzEfYH" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700184611905 Marionette INFO Marionette enabled +1700184612297 Marionette INFO Listening on port 53873 +Read port: 53873 +1700184612558 RemoteAgent WARN TLS certificate errors will be ignored for this session +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +1700184643561 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +1700184799256 geckodriver INFO Listening on 127.0.0.1:54787 +1700184800279 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofilesKufTW" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700184801779 Marionette INFO Marionette enabled +1700184802144 Marionette INFO Listening on port 54813 +Read port: 54813 +1700184802398 RemoteAgent WARN TLS certificate errors will be ignored for this session +1700184804214 Marionette INFO Stopped listening on port 51394 +1700184806995 Marionette INFO Stopped listening on port 53873 +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +1700184831986 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +1700184941801 geckodriver INFO Listening on 127.0.0.1:55546 +1700184942826 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileuryE4b" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700184943387 Marionette INFO Marionette enabled +1700184943672 Marionette INFO Listening on port 55566 +Read port: 55566 +1700184943874 RemoteAgent WARN TLS certificate errors will be ignored for this session +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. 
+JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +1700184973441 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +1700185071848 Marionette INFO Stopped listening on port 54813 +1700185074740 Marionette INFO Stopped listening on port 55566 +1700185269583 geckodriver INFO Listening on 127.0.0.1:57020 +1700185270606 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileSmwtry" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700185272191 Marionette INFO Marionette enabled +1700185272537 Marionette INFO Listening on port 57043 +Read port: 57043 +1700185272777 RemoteAgent WARN TLS certificate errors will be ignored for this session +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +1700185302252 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. 
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +1700186871774 geckodriver INFO Listening on 127.0.0.1:63647 +1700186872800 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileDRdGGG" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700186873902 Marionette INFO Marionette enabled +1700186874342 Marionette INFO Listening on port 63666 +Read port: 63666 +1700186874438 RemoteAgent WARN TLS certificate errors will be ignored for this session +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +1700186903996 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +1700186921289 Marionette INFO Stopped listening on port 57043 +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +1700187101503 geckodriver INFO Listening on 127.0.0.1:64706 +1700187102531 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofiledehudU" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700187103335 Marionette INFO Marionette enabled +1700187103743 Marionette INFO Listening on port 64725 +Read port: 64725 +1700187103832 RemoteAgent WARN TLS certificate errors will be ignored for this session +1700187105923 Marionette INFO Stopped listening on port 63666 +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +1700187133443 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +Exiting due to channel error. +Exiting due to channel error. +Exiting due to channel error. +Exiting due to channel error. +Exiting due to channel error. +1700187540440 geckodriver INFO Listening on 127.0.0.1:50229 +1700187541460 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileSI75X3" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700187542282 Marionette INFO Marionette enabled +1700187542781 Marionette INFO Listening on port 50248 +Read port: 50248 +1700187542977 RemoteAgent WARN TLS certificate errors will be ignored for this session +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +1700187572378 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. 
+JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +1700189346255 geckodriver INFO Listening on 127.0.0.1:55607 +1700189347279 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofile8T0yyG" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700189348465 Marionette INFO Marionette enabled +1700189348799 Marionette INFO Listening on port 55618 +Read port: 55618 +1700189348965 RemoteAgent WARN TLS certificate errors will be ignored for this session +1700189352236 Marionette INFO Stopped listening on port 50248 +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +1700189379187 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +1700189470259 geckodriver INFO Listening on 127.0.0.1:55786 +1700189471284 
mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... "--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofilekoIluN" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700189471877 Marionette INFO Marionette enabled +1700189472185 Marionette INFO Listening on port 55797 +Read port: 55797 +1700189472360 RemoteAgent WARN TLS certificate errors will be ignored for this session +1700189474896 Marionette INFO Stopped listening on port 55618 +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +1700189501902 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +1700189635856 geckodriver INFO Listening on 127.0.0.1:55928 +1700189636879 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofilefcM2Pu" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700189637501 Marionette INFO Marionette enabled +1700189637724 Marionette INFO Listening on port 55941 +Read port: 55941 +1700189637966 RemoteAgent WARN TLS certificate errors will be ignored for this session +1700189640551 Marionette INFO Stopped listening on port 55797 +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +1700189667685 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +1700192432865 geckodriver INFO Listening on 127.0.0.1:56305 +1700192433879 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileqWnPm6" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700192434440 Marionette INFO Marionette enabled +1700192434642 Marionette INFO Listening on port 56316 +Read port: 56316 +1700192434855 RemoteAgent WARN TLS certificate errors will be ignored for this session +1700192438296 Marionette INFO Stopped listening on port 55941 +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +1700192464531 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. 
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: 
https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +[Child 26684, MediaDecoderStateMachine #1] WARNING: Decoder=11a8f0500 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26684, MediaDecoderStateMachine #1] WARNING: Decoder=11a8f0500 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26684, MediaDecoderStateMachine #1] WARNING: Decoder=11a8f0500 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26684, MediaDecoderStateMachine #1] WARNING: Decoder=11a8f0500 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26684, MediaDecoderStateMachine #1] WARNING: Decoder=11a8f0500 Decode error: 
NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26684, MediaDecoderStateMachine #1] WARNING: Decoder=11a8f0500 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26684, MediaDecoderStateMachine #1] WARNING: Decoder=11a8f0500 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26684, MediaDecoderStateMachine #1] WARNING: Decoder=1177ce300 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26684, MediaDecoderStateMachine #1] WARNING: Decoder=1177cf200 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26684, MediaDecoderStateMachine #1] WARNING: Decoder=1177cf200 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26684, MediaDecoderStateMachine #1] WARNING: Decoder=1177cf200 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26684, MediaDecoderStateMachine #1] WARNING: Decoder=1177cf200 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26684, MediaDecoderStateMachine #1] WARNING: Decoder=1177cf200 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26684, MediaDecoderStateMachine #1] WARNING: Decoder=1177cf200 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file 
/builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26684, MediaDecoderStateMachine #1] WARNING: Decoder=1177cf200 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26684, MediaDecoderStateMachine #1] WARNING: Decoder=1177cf200 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +Exiting due to channel error. +Exiting due to channel error. +Exiting due to channel error. +Exiting due to channel error. +Exiting due to channel error. +1700194457241 geckodriver INFO Listening on 127.0.0.1:56698 +1700194458264 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... "--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileXHQJda" +console.warn: services.settings: Ignoring preference override of remote settings server +console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment +1700194459573 Marionette INFO Marionette enabled +1700194459924 Marionette INFO Listening on port 56709 +Read port: 56709 +1700194460236 RemoteAgent WARN TLS certificate errors will be ignored for this session +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement +console.warn: LoginRecipes: "Falling back to a synchronous message for: 
https://www.coursera.org." +1700194489848 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. +console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. +JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. 
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +JavaScript warning: 
https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11cabfd00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11cabfd00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11cabfd00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11cabfd00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11cabfd00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11cabfd00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11cabfd00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11cabfd00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=118e94200 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file 
/builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=118e94200 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=118e94200 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=118e94200 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=118475000 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=118475000 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=118475000 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=118475000 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=118475000 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=118475000 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 
+[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=118475000 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=118475000 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=118475000 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11a1db700 state=DECODING_METADATA Decode metadata failed, shutting down decoder: file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachine.cpp:372 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11a1db700 Decode error: NS_ERROR_DOM_MEDIA_METADATA_ERR (0x806e0006): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11b7d6100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11b7d6100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11b7d6100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11b7d6100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11b7d6100 
Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11b7d6100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11b7d6100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11a128300 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11a128300 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11a128300 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11a128300 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11a128300 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11a128300 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11a128300 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file 
/builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=117d26100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=117d26100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=117d26100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=117d26100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=117d26100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=117d26100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=119327100 state=DECODING_METADATA Decode metadata failed, shutting down decoder: file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachine.cpp:372 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=119327100 Decode error: NS_ERROR_DOM_MEDIA_METADATA_ERR (0x806e0006): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11b7d6a00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file 
/builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11b7d6a00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=1193dff00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=1193dff00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=1193dff00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=1193dff00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=1193dff00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=1193dff00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=1193dff00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=1193dff00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 
+[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11a128900 state=DECODING_METADATA Decode metadata failed, shutting down decoder: file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachine.cpp:372 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11a128900 Decode error: NS_ERROR_DOM_MEDIA_METADATA_ERR (0x806e0006): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11a127100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 +[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11b7d4000 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 diff --git a/subtitles.json b/subtitles.json new file mode 100644 index 0000000000..59a536ca87 --- /dev/null +++ b/subtitles.json @@ -0,0 +1,5087 @@ +{ + "Text Mining and Analytics": [ + { + "Week 1": [ + { + "introduction-to-text-mining-and-analytics": [ + { + "0:00": "[SOUND] Hello. Welcome to the course Text Mining and Analytics. My name is ChengXiang Zhai. I have a nickname, Cheng. I am a professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign. This course is a part of a data mining specialization offered by the University of Illinois at Urbana-Champaign. In addition to this course, there are four other courses offered by" + }, + { + "0:39": "Professor Jiawei Han, Professor John Hart, and me, followed by a capstone project course that all of us will teach together." + }, + { + "0:51": "This course is particularly related to another course in the specialization, namely text retrieval and search engines, in that both courses are about text data." 
+ }, + { + "1:07": "In contrast, pattern discovery and cluster analysis are about algorithms more applicable to all kinds of data in general. The visualization course is also relatively general in that the techniques can be applied to all kinds of data." + }, + { + "1:28": "This course addresses a pressing need for harnessing big text data." + }, + { + "1:35": "Text data has been growing dramatically recently, mostly because of the advance of technologies deployed on the web that enable people to quickly generate text data." + }, + { + "1:50": "So, I listed some of the examples on this slide" + }, + { + "1:57": "that can show a variety of text data that are available today. For example, if you think about the data on the internet, on the web," + }, + { + "2:07": "every day we are seeing many web pages being created." + }, + { + "2:13": "Blogs are another kind of new text data that are being generated quickly by people. Anyone can write a blog article on the web. News articles, of course, have always been a main kind of text data being generated every day." + }, + { + "2:31": "Emails are yet another kind of text data. And literature also represents a large portion of text data. It's also especially important because of the high quality of the data. That is, we encode our knowledge about the world using text data represented by all the literature articles. There's a vast amount of knowledge in" + }, + { + "3:08": "all the text data in these literature articles." + }, + { + "3:14": "Twitter is another representative kind of text data, representing social media. Of course there are forums as well." + }, + { + "3:24": "People are generating tweets very quickly; indeed, as we are speaking, perhaps many people have already written many tweets. So, as you can see, there are all kinds of text data being generated very quickly." + }, + { + "3:38": "Now these text data present some challenges for people."
+ }, + { + "3:43": "It's very hard for anyone to digest all the text data quickly. In particular, it's impossible for scientists to read all of the literature, for example, or for anyone to read all the tweets." + }, + { + "4:01": "So there's a need for tools to help people digest text data more efficiently." + }, + { + "4:09": "There is also another interesting opportunity provided by such big text data, and that is it's possible to leverage the amount of text data to discover interesting patterns, to turn text data into actionable knowledge that can be useful for decision making." + }, + { + "4:27": "So for example, product managers may be interested in knowing the feedback of customers about their products, knowing how well their products are being received as compared with the products of competitors. This can be a good opportunity for leveraging text data, as we have seen a lot of reviews of products on the web. So if we can develop and master text mining techniques to tap into such a [INAUDIBLE] to extract the knowledge and opinions of people about these products, then we can help these product managers to gain business intelligence, or to essentially get feedback from their customers." + }, + { + "5:18": "In scientific research, for example, scientists are interested in knowing the trends of research topics, knowing" + }, + { + "5:29": "about what related fields have discovered. This problem is especially important in biology research as well. Different communities tend to use different terminologies, yet they're studying very similar problems. So how can we integrate the knowledge that is covered in different communities to help study a particular problem? It's very important, and it can speed up scientific discovery." + }, + { + "5:57": "So there are many such examples where we can leverage the text data to discover usable knowledge to optimize our decisions." + }, + { + "6:06": "The main techniques for harnessing big text data are text retrieval and text mining.
So these are two very much related technologies. Yet, they have somewhat different purposes. These two kinds of techniques are covered in two courses in this specialization. So, Text Retrieval and Search Engines covers text retrieval, and this is necessary to turn big text data into a much smaller amount of more relevant text data, which is often the data that we need to handle a particular problem or to optimize a particular decision. This course covers text mining, which is the second step in this pipeline and can be used to further process the small amount of relevant data to extract the knowledge or to help people digest the text data easily. So the two courses are clearly related; in fact, some of the techniques are shared by both text retrieval and text mining. If you have already taken the text retrieval course, then you might see some of the content being repeated in this text mining course, although we'll be talking about the techniques from a very different perspective. If you have not taken the text retrieval course, it's also fine because this course is self-contained and you can certainly understand all of the materials without a problem. Of course, you might find it beneficial to take both courses, and that will give you a very complete set of skills to handle big text data." + }, + { + "8:02": "[MUSIC]" + } ] }, + { + "course-prerequisites-completion": [ + { + "0:00": "[MUSIC]" + }, + { + "0:07": "This lecture is a brief introduction to the course. We'll cover the objectives of the course, the prerequisites, the course format, the reference books, and how to complete the course. The objectives of the course are the following. First we would like to cover the basic concepts and practical techniques in text data mining." + }, + { + "0:33": "So this means we will not be able to cover some advanced techniques in detail, but rather we are going to choose the practical, useful techniques and then treat them in more depth.
We are going to also cover the basic concepts that are very useful for many applications. The second objective is to cover more general techniques for text data mining. So we emphasize the coverage of general techniques that are applicable to any text in any natural language. We also prefer techniques that either work on problems automatically without any human effort or require only minimum human effort." + }, + { + "1:26": "So these criteria have helped us to choose techniques that can be applied to many applications. This is in contrast to some more detailed analysis of text data, particularly using natural language processing techniques. Now, such techniques are also very important. And they are indeed necessary for some of the applications where we would like to go in depth to understand the text data in more detail." + }, + { + "2:01": "Such detailed understanding techniques, however, are generally not scalable, and they tend to require a lot of human effort. So they cannot be easily applied to any domain. So as you can imagine, in practice it would be beneficial to combine both kinds of techniques, using the general techniques that we'll be covering in this course as a basis, and improving these techniques by using more human effort whenever it's appropriate. We also would like to provide hands-on experience to you in three aspects. First, you can gain some experience with using a text mining toolkit and implementing text mining algorithms. Second, you have the opportunity to experiment with some algorithms for text mining and analytics, to try them on some data sets and to understand how to do experiments. And finally you have the opportunity to participate in a competition on a text-based prediction task. You are expected to know the basic concepts of computer science, for example, data structures and some other really basic concepts in computer science."
+ }, + { + "3:25": "You are also expected to be familiar with programming and comfortable with programming, particularly with C++. This course, however, is not about programming, so you are not expected to do a lot of coding. But we're going to give you a C++ toolkit that's fairly sophisticated, so you have to be comfortable with handling such a toolkit." + }, + { + "3:49": "And you may be asked to write a small amount of code." + }, + { + "3:56": "It's also useful if you know some concepts and techniques in probability and statistics, but it is not necessary; knowing such concepts will help you understand some of the algorithms in more depth." + }, + { + "4:16": "The format of the course is lectures plus quizzes that will be given to you on a regular basis." + }, + { + "4:29": "And there is also an optional programming assignment." + }, + { + "4:33": "Now, we've made the programming assignment optional not because it's not important, but because we suspect that not all of you will have the needed computing resources to do the programming assignment. So naturally, we would encourage all of you to try to do the programming assignment if possible, as that would be a great way to learn the knowledge that we teach in this course." + }, + { + "5:05": "The main reference book for this course is a recent book that Sean Massung and I have co-authored. The title is Text Data Management and Analysis: A Practical Introduction to Information Retrieval and Text Mining. However, this reference book is not required, in the sense that if you follow all the lecture videos closely then you should have little problem working on the quiz problems and the programming assignment to pass the course. However, the book would be useful to give you a high-level and systematic description of all the topics covered in this course plus some others. It would also help you understand some topics in more depth.
So if you have problems following some of the videos, the book may be useful to you." + }, + { + "5:59": "The book is also the reference book for another course. If you are interested in buying the book, there's a link here. And there should be a substantial discount for the students of this course. There are also quite a few other useful reference books and readings. And they are available through the link at the bottom of this slide. [MUSIC]" + } ] }, + { + "1-1-overview-text-mining-and-analytics-part-1": [ + { + "0:00": "[SOUND] In this lecture we give an overview of Text Mining and Analytics." + }, + { + "0:13": "First, let's define the term text mining, and the term text analytics. The title of this course is Text Mining and Analytics." + }, + { + "0:25": "But the two terms text mining, and text analytics are actually roughly the same."
Turn these text data into something more useful to us than the raw text data." + }, + { + "1:57": "And here we distinguish two different results. One is high-quality information, the other is actionable knowledge." + }, + { + "2:05": "Sometimes the boundary between the two is not so clear." + }, + { + "2:09": "But I also want to say a little bit about" + }, + { + "2:12": "these two different angles of the result of text field mining." + }, + { + "2:19": "In the case of high quality information, we refer to more concise information about the topic." + }, + { + "2:28": "Which might be much easier for humans to digest than the raw text data. For example, you might face a lot of reviews of a product." + }, + { + "2:38": "A more concise form of information would be a very concise summary of the major opinions about the features of the product. Positive about, let's say battery life of a laptop." + }, + { + "2:53": "Now this kind of results are very useful to help people digest the text data." + }, + { + "2:59": "And so this is to minimize a human effort in consuming text data in some sense." + }, + { + "3:06": "The other kind of output is actually more knowledge. Here we emphasize the utility of the information or knowledge we discover from text data." + }, + { + "3:18": "It's actionable knowledge for some decision problem, or some actions to take." + }, + { + "3:24": "For example, we might be able to determine which product is more appealing to us, or a better choice for a shocking decision." + }, + { + "3:38": "Now, such an outcome could be called actionable knowledge, because a consumer can take the knowledge and make a decision, and act on it. So, in this case text mining supplies knowledge for optimal decision making. But again, the two are not so clearly distinguished, so we don't necessarily have to make a distinction." + }, + { + "4:06": "Text mining is also related to text retrieval, which is a essential component in many text mining systems." 
+ }, + { + "4:15": "Now, text retrieval refers to finding relevant information from a large amount of text data." + }, + { + "4:24": "So I've taught another separate MOOC on text retrieval and search engines." + }, + { + "4:31": "Where we discussed various techniques for text retrieval." + }, + { + "4:36": "If you have taken that MOOC, and you will find some overlap." + }, + { + "4:42": "And it will be useful To know the background of text retrieval of understanding some of the topics in text mining." + }, + { + "4:51": "But, if you have not taken that MOOC, it's also fine because in this MOOC on text mining and analytics, we're going to repeat some of the key concepts that are relevant for text mining. But they're at the high level and they also explain the relation between text retrieval and text mining." + }, + { + "5:12": "Text retrieval is very useful for text mining in two ways. First, text retrieval can be a preprocessor for text mining. Meaning that it can help us turn big text data into a relatively small amount of most relevant text data. Which is often what's needed for solving a particular problem." + }, + { + "5:36": "And in this sense, text retrieval also helps minimize human effort." + }, + { + "5:43": "Text retrieval is also needed for knowledge provenance. And this roughly corresponds to the interpretation of text mining as turning text data into actionable knowledge. Once we find the patterns in text data, or actionable knowledge, we generally would have to verify the knowledge. By looking at the original text data. So the users would have to have some text retrieval support, go back to the original text data to interpret the pattern or to better understand an analogy or to verify whether a pattern is really reliable. So this is a high level introduction to the concept of text mining, and the relationship between text mining and retrieval." + }, + { + "6:32": "Next, let's talk about text data as a special kind of data." 
+ }, + { + "6:39": "Now it's interesting to view text data as data generated by humans as subjective sensors." + }, + { + "6:53": "So, this slide shows an analogy between text data and non-text data. And between humans as subjective sensors and physical sensors, such as a network sensor or a thermometer." + }, + { + "7:16": "So in general a sensor would monitor the real world in some way. It would sense some signal from the real world, and then would report the signal as data, in various forms. For example, a thermometer would watch the temperature of real world and then we report the temperature being a particular format." + }, + { + "7:44": "Similarly, a geo sensor would sense the location and then report. The location specification, for example, in the form of longitude value and latitude value. A network sends over the monitor network traffic, or activities in the network and are reported. Some digital format of data. Similarly we can think of humans as subjective sensors. That will observe the real world and from some perspective. And then humans will express what they have observed in the form of text data. So, in this sense, human is actually a subjective sensor that would also sense what's happening in the world and then express what's observed in the form of data, in this case, text data. Now, looking at the text data in this way has an advantage of being able to integrate all types of data together. And that's indeed needed in most data mining problems." + }, + { + "8:56": "So here we are looking at the general problem of data mining." + }, + { + "9:02": "And in general we would Be dealing with a lot of data about our world that are related to a problem. And in general it will be dealing with both non-text data and text data. And of course the non-text data are usually produced by physical senses. And those non-text data can be also of different formats." 
+ }, + { + "9:27": "Numerical data, categorical, or relational data, or multi-media data like video or speech." + }, + { + "9:36": "So, these non text data are often very important in some problems. But text data is also very important, mostly because they contain a lot of symmetrical content. And they often contain knowledge about the users, especially preferences and opinions of users." + }, + { + "10:01": "So, but by treating text data as the data observed from human sensors, we can treat all this data together in the same framework. So the data mining problem is basically to turn such data, turn all the data in your actionable knowledge to that we can take advantage of it to change the real world of course for better. So this means the data mining problem is basically taking a lot of data as input and giving actionable knowledge as output. Inside of the data mining module, you can also see we have a number of different kind of mining algorithms. And this is because, for different kinds of data, we generally need different algorithms for mining the data." + }, + { + "10:56": "For example, video data might require computer vision to understand video content. And that would facilitate the more effective mining. And we also have a lot of general algorithms that are applicable to all kinds of data and those algorithms, of course, are very useful. Although, for a particular kind of data, we generally want to also develop a special algorithm. So this course will cover specialized algorithms that are particularly useful for mining text data. [MUSIC]" + } + ] + }, + { + "1-2-overview-text-mining-and-analytics-part-2": [ + { + "0:00": "[SOUND] So, looking at the text mining problem more closely, we see that the problem is similar to general data mining, except that we'll be focusing more on text data." 
+ }, + { + "0:21": "And we're going to have text mining algorithms to help us to turn text data into actionable knowledge that we can use in real world, especially for decision making, or for completing whatever tasks that require text data to support. Because, in general, in many real world problems of data mining we also tend to have other kinds of data that are non-textual. So a more general picture would be to include non-text data as well." + }, + { + "0:56": "And for this reason we might be concerned with joint mining of text and non-text data. And so in this course we're going to focus more on text mining, but we're also going to also touch how do to joint analysis of both text data and non-text data. With this problem definition we can now look at the landscape of the topics in text mining and analytics." + }, + { + "1:21": "Now this slide shows the process of generating text data in more detail." + }, + { + "1:27": "More specifically, a human sensor or human observer would look at the word from some perspective." + }, + { + "1:34": "Different people would be looking at the world from different angles and they'll pay attention to different things. The same person at different times might also pay attention to different aspects of the observed world. And so the humans are able to perceive the world from some perspective. And that human, the sensor, would then form a view of the world. And that can be called the Observed World. Of course, this would be different from the Real World because of the perspective that the person has taken can often be biased also." + }, + { + "2:16": "Now the Observed World can be represented as, for example, entity-relation graphs or in a more general way, using knowledge representation language. But in general, this is basically what a person has in mind about the world. And we don't really know what exactly it looks like, of course. 
But then the human would express what the person has observed using a natural language, such as English. And the result is text data." + }, + { + "2:55": "Of course a person could have used a different language to express what he or she has observed. In that case we might have text data of mixed languages or different languages." + }, + { + "3:10": "The main goal of text mining Is actually to revert this process of generating text data. We hope to be able to uncover some aspect in this process." + }, + { + "3:28": "Specifically, we can think about mining, for example, knowledge about the language." + }, + { + "3:35": "And that means by looking at text data in English, we may be able to discover something about English, some usage of English, some patterns of English." + }, + { + "3:47": "So this is one type of mining problems, where the result is some knowledge about language which may be useful in various ways." + }, + { + "3:58": "If you look at the picture, we can also then mine knowledge about the observed world. And so this has much to do with mining the content of text data." + }, + { + "4:11": "We're going to look at what the text data are about, and then try to get the essence of it or extracting high quality information about a particular aspect of the world that we're interested in." + }, + { + "4:26": "For example, everything that has been said about a particular person or a particular entity. And this can be regarded as mining content to describe the observed world in the user's mind or the person's mind." + }, + { + "4:45": "If you look further, then you can also imagine we can mine knowledge about this observer, himself or herself. So this has also to do with using text data to infer some properties of this person." + }, + { + "5:03": "And these properties could include the mood of the person or sentiment of the person." 
+ }, + { + "5:10": "And note that we distinguish the observed word from the person because text data can't describe what the person has observed in an objective way. But the description can be also subjected with sentiment and so, in general, you can imagine the text data would contain some factual descriptions of the world plus some subjective comments. So that's why it's also possible to do text mining to mine knowledge about the observer. Finally, if you look at the picture to the left side of this picture, then you can see we can certainly also say something about the real world. Right? So indeed we can do text mining to infer other real world variables. And this is often called a predictive analytics." + }, + { + "6:00": "And we want to predict the value of certain interesting variable. So, this picture basically covered multiple types of knowledge that we can mine from text in general." + }, + { + "6:14": "When we infer other real world variables we could also use some of the results from mining text data as intermediate results to help the prediction. For example, after we mine the content of text data we might generate some summary of content. And that summary could be then used to help us predict the variables of the real world. Now of course this is still generated from the original text data, but I want to emphasize here that often the processing of text data to generate some features that can help with the prediction is very important." + }, + { + "7:04": "And that's why here we show the results of some other mining tasks, including mining the content of text data and mining knowledge about the observer, can all be very helpful for prediction." + }, + { + "7:21": "In fact, when we have non-text data, we could also use the non-text data to help prediction, and of course it depends on the problem. In general, non-text data can be very important for such prediction tasks. 
For example, if you want to predict stock prices or changes of stock prices based on discussion in the news articles or in social media, then this is an example of using text data to predict some other real world variables. But in this case, obviously, the historical stock price data would be very important for this prediction. And so that's an example of non-text data that would be very useful for the prediction. And we're going to combine both kinds of data to make the prediction. Now non-text data can be also used for analyzing text by supplying context." + }, + { + "8:25": "When we look at the text data alone, we'll be mostly looking at the content and/or opinions expressed in the text." + }, + { + "8:32": "But text data generally also has context associated." + }, + { + "8:37": "For example, the time and the location that associated are with the text data. And these are useful context information." + }, + { + "8:48": "And the context can provide interesting angles for analyzing text data. For example, we might partition text data into different time periods because of the availability of the time. Now we can analyze text data in each time period and then make a comparison. Similarly we can partition text data based on locations or any meta data that's associated to form interesting comparisons in areas. So, in this sense, non-text data can actually provide interesting angles or perspectives for text data analysis. And it can help us make context-sensitive analysis of content or the language usage or" + }, + { + "9:36": "the opinions about the observer or the authors of text data. We could analyze the sentiment in different contexts. So this is a fairly general landscape of the topics in text mining and analytics. In this course we're going to selectively cover some of those topics. We actually hope to cover most of these general topics." 
+ }, + { + "10:06": "First we're going to cover natural language processing very briefly because this has to do with understanding text data and this determines how we can represent text data for text mining. Second, we're going to talk about how to mine word associations from text data. And word associations is a form of use for lexical knowledge about a language. Third, we're going to talk about topic mining and analysis. And this is only one way to analyze content of text, but it's a very useful ways of analyzing content. It's also one of the most useful techniques in text mining." + }, + { + "10:53": "Then we're going to talk about opinion mining and sentiment analysis. So this can be regarded as one example of mining knowledge about the observer." + }, + { + "11:07": "And finally we're going to cover text-based prediction problems where we try to predict some real world variable based on text data." + }, + { + "11:17": "So this slide also serves as a road map for this course. And we're going to use this as an outline for the topics that we'll cover in the rest of this course. [MUSIC]" + } + ] + }, + { + "1-3-natural-language-content-analysis-part-1": [ + { + "0:00": "[SOUND]" + }, + { + "0:09": "This lecture is about natural language content analysis. Natural language content analysis is the foundation of text mining. So we're going to first talk about this." + }, + { + "0:24": "And in particular, natural language processing with a factor how we can present text data." + }, + { + "0:33": "And this determines what algorithms can be used to analyze and mine text data." + }, + { + "0:40": "We're going to take a look at the basic concepts in natural language first." + }, + { + "0:46": "And I'm going to explain these concepts using a similar example that you've all seen here. A dog is chasing a boy on the playground. Now this is a very simple sentence. When we read such a sentence we don't have to think about it to get the meaning of it. 
But when a computer has to understand the sentence, the computer has to go through several steps." + }, + { + "1:13": "First, the computer needs to know what the words are, how to segment the words in English. And this is very easy; we can just look at the spaces. And then the computer will need to know the categories of these words, syntactic categories. So for example, dog is a noun, chasing is a verb, boy is another noun, etc. And this is called lexical analysis. In particular, tagging these words with these syntactic categories is called part-of-speech tagging." + }, + { + "1:45": "After that the computer also needs to figure out the relationship between these words. So a and dog would form a noun phrase. On the playground would be a prepositional phrase, etc. And there is a certain way for them to be connected together in order for them to create meaning. Some other combinations may not make sense." + }, + { + "2:07": "And this is called syntactic parsing, or syntactic analysis, parsing of a natural language sentence. The outcome is a parse tree that you are seeing here, that tells us the structure of the sentence, so that we know how we can interpret this sentence. But this is not semantics yet. So in order to get the meaning we would have to map these phrases and these structures into some real world entities that we have in our mind. So dog is a concept that we know, and boy is a concept that we know. So connecting these phrases with what we know is understanding." + }, + { + "2:52": "Now, a computer would have to formally represent these entities by using symbols. So dog(d1) means d1 is a dog." + }, + { + "3:04": "Boy(b1) means b1 refers to a boy, etc. And we also represent the chasing action as a predicate. So, chasing is a predicate here with three arguments, d1, b1, and p1, which is the playground. So this is a formal rendition of the semantics of this sentence. Once we reach that level of understanding, we might also make inferences.
For example, if we assume there's a rule that says if someone's being chased then the person can get scared, then we can infer this boy might be scared. This is the inferred meaning, based on additional knowledge. And finally, we might even further infer what this sentence is requesting, or why the person who said the sentence is saying it. And so, this has to do with the purpose of saying the sentence. This is called speech act analysis, or pragmatic analysis, which refers to the use of language. So, in this case a person saying this may be reminding another person to bring back the dog." + }, + { + "4:35": "So this means when saying a sentence, the person actually takes an action. So the action here is to make a request." + }, + { + "4:46": "Now, this slide clearly shows that in order to really understand a sentence there are a lot of things that a computer has to do. Now, in general it's very hard for a computer to do everything, especially if you would want it to do everything correctly. This is very difficult." + }, + { + "5:08": "Now, the main reason why natural language processing is very difficult is because it's designed to make human communication efficient." + }, + { + "5:15": "As a result, for example, we omit a lot of common sense knowledge." + }, + { + "5:21": "Because we assume all of us have this knowledge, there's no need to encode this knowledge." + }, + { + "5:29": "That makes communication efficient." + }, + { + "5:32": "We also keep a lot of ambiguities, like ambiguities of words." + }, + { + "5:39": "And this is again because we assume we have the ability to disambiguate the word. So, there's no problem with having the same word mean possibly different things in different contexts." + }, + { + "5:52": "Yet for a computer this would be very difficult because a computer does not have the common sense knowledge that we do. So the computer will be confused indeed. And this makes it hard for natural language processing.
Indeed, it makes it very hard for every step in the slide that I showed you earlier." + }, + { + "6:16": "Ambiguity is a main killer. Meaning that in every step there are multiple choices, and the computer would have to decide what's the right choice, and that decision can be very difficult as you will see also in a moment." + }, + { + "6:31": "And in general, we need common sense reasoning in order to fully understand the natural language. And computers today don't yet have that. That's why it's very hard for computers to precisely understand the natural language at this point." + }, + { + "6:48": "So here are some specific examples of challenges. Think about the word-level ambiguity. A word like design can be a noun or a verb, so we've got an ambiguous part-of-speech tag." + }, + { + "7:00": "Root also has multiple meanings, it can be of a mathematical sense, like in the square root of, or it can be the root of a plant." + }, + { + "7:12": "Syntactic ambiguity refers to different interpretations" + }, + { + "7:19": "of a sentence in terms of structure. So for example, natural language processing can actually be interpreted in two ways." + }, + { + "7:28": "So one is the ordinary meaning that we will be getting as we're talking about this topic. So, it's processing of natural language. But there is also another possible interpretation, which is to say language processing is natural." + }, + { + "7:48": "Now we don't generally have this problem, but imagine for the computer to determine the structure, the computer would have to make a choice between the two." + }, + { + "7:59": "Another classic example is a man saw a boy with a telescope. And this ambiguity lies in the question who had the telescope? This is called a prepositional phrase attachment ambiguity." + }, + { + "8:14": "Meaning where to attach this prepositional phrase with the telescope. Should it modify the boy? Or should it modify the verb, saw? Another problem is anaphora resolution. 
In John persuaded Bill to buy a TV for himself, does himself refer to John or Bill?" + }, + { + "8:39": "Presupposition is another difficulty. He has quit smoking implies that he smoked before, and we need to have such knowledge in order to understand the language." + }, + { + "8:52": "Because of these problems, the state-of-the-art natural language processing techniques cannot do anything perfectly. Even for the simplest part-of-speech tagging, we still cannot solve the whole problem. The accuracies listed here, which are about 97%, were just taken from some earlier studies." + }, + { + "9:17": "And these studies obviously have to be using particular data sets, so the numbers here are not really meaningful if you take them out of the context of the data sets that are used for evaluation. But I show these numbers mainly to give you some sense about the accuracy, or how well we can do things like this. It doesn't mean any data set's accuracy would be precisely 97%. But, in general, we can do part-of-speech tagging fairly well, although not perfectly." + }, + { + "9:53": "Parsing would be more difficult, but for partial parsing, meaning to get some phrases correct, we can probably achieve 90% or better accuracy." + }, + { + "10:06": "But to get the complete parse tree correct is still very, very difficult." + }, + { + "10:13": "For semantic analysis, we can also do some aspects of semantic analysis, particularly extraction of entities and relations. For example, recognizing this is a person, that's a location, and this person and that person met in some place, etc. We can also do word sense disambiguation to some extent." + }, + { + "10:38": "The occurrence of root in this sentence refers to the mathematical sense, etc. Sentiment analysis is another aspect of semantic analysis that we can do." + }, + { + "10:50": "That means we can tag the sentences as generally positive when they're talking about the product or talking about the person." 
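The word-level ambiguity discussed above (design as noun or verb) can be illustrated with a toy sketch. This is not the course's method and the lexicon and rule below are invented for illustration: a tiny dictionary lists possible tags per word, and a single context heuristic (a noun reading is preferred after a determiner) picks among them.

```python
# Toy illustration (hypothetical, not the lecture's tagger): a word
# like "design" has several possible part-of-speech tags, and a
# tagger must pick one from context.
POS_LEXICON = {
    "design": {"NOUN", "VERB"},
    "the": {"DET"},
    "we": {"PRON"},
    "a": {"DET"},
}

def tag(words):
    tags = []
    for i, w in enumerate(words):
        choices = POS_LEXICON.get(w.lower(), {"NOUN"})
        if len(choices) == 1:
            tags.append(next(iter(choices)))
        # Heuristic: after a determiner, prefer the noun reading.
        elif i > 0 and tags[i - 1] == "DET":
            tags.append("NOUN")
        else:
            tags.append("VERB")
    return tags

print(tag(["the", "design"]))  # "design" read as a noun
print(tag(["we", "design"]))   # "design" read as a verb
```

Real taggers replace the single heuristic with a statistical model trained on annotated text, which is why even the best ones top out around the accuracies the lecture mentions.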
+ }, + { + "11:02": "Inference, however, is very hard, and we generally cannot do that for any big domain; it's only feasible for a very limited domain. And that's a generally difficult problem in artificial intelligence. Speech act analysis is also very difficult, and we can only do this probably for very specialized cases, and with a lot of help from humans to annotate enough data for the computers to learn from." + }, + { + "11:36": "So the slide also shows that computers are far from being able to understand natural language precisely. And that also explains why the text mining problem is difficult, because we cannot rely on mechanical approaches or computational methods to understand the language precisely. Therefore, we have to use whatever we have today, in particular statistical machine learning methods or statistical analysis methods, to try to get as much meaning out from the text as possible. And, later you will see that there are actually" + }, + { + "12:20": "many such algorithms that can indeed extract interesting models from text even though we cannot really fully understand the meaning of all the natural language sentences precisely. [MUSIC]" + } + ] + }, + { + "1-4-natural-language-content-analysis-part-2": [ + { + "0:00": "[SOUND]" + }, + { + "0:10": "So here are some specific examples of what we can't do today, and part-of-speech tagging is still not easy to do 100% correctly. So in the example, he turned off the highway versus he turned off the fan, the two offs actually have somewhat different syntactic categories, and also it's very difficult to get the complete parsing correct. Again, the example a man saw a boy with a telescope can actually be very difficult to parse depending on the context. Precise deep semantic analysis is also very hard. For example, to define the meaning of own precisely is very difficult in a sentence like John owns a restaurant. So the state of the art can be summarized as follows. 
Robust and general NLP tends to be shallow, while a deep understanding does not scale up." + }, + { + "1:12": "For this reason, in this course the techniques that we cover are, in general, shallow techniques for analyzing text data and mining text data, and they are generally based on statistical analysis. So they are robust and general, and they are in" + }, + { + "1:36": "the category of shallow analysis. So such techniques have the advantage of being able to be applied to any text data in any natural language about any topic. But the downside is that they don't give us a deeper understanding of text. For that, we have to rely on deeper natural language analysis." + }, + { + "2:00": "That typically would require human effort to annotate a lot of examples of the analysis that we would like to do, and then computers can use machine learning techniques and learn from these training examples to do the task. So in practical applications, we generally combine the two kinds of techniques, with the general statistical methods as the backbone, as the basis. These can be applied to any text data. And on top of that, we're going to use humans to annotate more data and to use supervised machine learning to do some tasks as well as we can, especially for those important tasks, to bring humans into the loop to analyze text data more precisely. But this course will cover the general statistical approaches that generally don't require much human effort. So they're practically more useful than some of the deeper analysis techniques that require a lot of human effort to annotate the text data. So to summarize, the main takeaway points are: first, NLP is the foundation for text mining. So obviously, the better we can understand the text data, the better we can do text mining." + }, + { + "3:30": "Computers today are far from being able to understand the natural language. Deep NLP requires common sense knowledge and inferences. 
Thus, it only works for very limited domains and is not feasible for large-scale text mining. Shallow NLP based on statistical methods can be done at large scale and is the main topic of this course, and such methods are generally applicable to a lot of applications. They are in some sense also more useful techniques. In practice, we use statistical NLP as the basis and we'll have humans help as needed in various ways. [MUSIC]" + } + ] + }, + { + "1-5-text-representation-part-1": [ + { + "0:06": "This lecture is about text representation. In this lecture, we are going to discuss text representation, and discuss how natural language processing can allow us to represent text in many different ways. Let's take a look at this example sentence again. We can represent this sentence in many different ways. First, we can always represent such a sentence as a string of characters. This is true for all languages when we store them in the computer. When we store a natural language sentence as a string of characters, we have perhaps the most general way of representing text, since we always use this approach to represent any text data. But unfortunately, using such a representation will not help us to do semantic analysis, which is often needed for many applications of text mining. The reason is because we're not even recognizing words. So as a string, we're going to keep all the spaces and these ASCII symbols. We can perhaps count what's the most frequent character in English text, or the correlation between those characters, but we can't really analyze semantics. Yet, this is the most general way of representing text because we can use this to represent any natural language text. If we try to do a little bit more natural language processing by doing word segmentation, then we can obtain a representation of the same text, but in the form of a sequence of words. So here we see that we can identify words like a dog is chasing, etc. 
Now with this level of representation, we certainly can do a lot of things, and this is mainly because words are the basic units of human communication in natural language, so they are very powerful. By identifying words, we can for example easily count what are the most frequent words in this document or in the whole collection, etc. These words can be used to form topics when we combine related words together, and some words are positive, some words negative, so we can also do sentiment analysis. So representing text data as a sequence of words opens up a lot of interesting analysis possibilities. However, this level of representation is slightly less general than string of characters because in some languages, such as Chinese, it's actually not that easy to identify all the word boundaries, because in such a language you see text as a sequence of characters with no space in between. So you'll have to rely on some special techniques to identify words. In such a language, of course, we might make mistakes in segmenting words. So the sequence-of-words representation is not as robust as string of characters. But in English, it's very easy to obtain this level of representation, so we can do that all the time." + }, + { + "4:00": "Now, if we go further to do natural language processing, we can add part-of-speech tags. Now once we do that, we can count, for example, the most frequent nouns, or what kind of nouns are associated with what kind of verbs, etc. So this opens up more interesting opportunities for further analysis. Note that I use a plus sign here because by representing text as a sequence of part-of-speech tags, we don't necessarily replace the original word sequence representation. Instead, we add this as an additional way of representing text data, so that now the data is represented as both a sequence of words and a sequence of part-of-speech tags. This enriches the representation of text data, and thus also enables more interesting analysis." 
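The first two representation levels described above can be sketched in a few lines. This is a minimal illustration (the example sentence is from the lecture; the code itself is not): the raw character string, and the word sequence obtained by the whitespace segmentation that works for English but not for unsegmented languages like Chinese.

```python
# Two views of the same text, per the levels described above:
# a string of characters, and a sequence of words obtained by
# simple whitespace segmentation (adequate for English).
sentence = "a dog is chasing a boy on the playground"

chars = list(sentence)    # string-of-characters view
words = sentence.split()  # sequence-of-words view

print(len(chars), len(words))
print(words[:4])
```

A part-of-speech layer would then be a second sequence, parallel to `words`, added alongside rather than replacing it, as the plus sign in the lecture indicates.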
+ }, + { + "5:00": "If we go further, then we'll be parsing the sentence to obtain a syntactic structure. Now this of course further opens up more interesting analysis of, for example, the writing styles, or correcting grammar mistakes. If we go further for semantic analysis, then we might be able to recognize dog as an animal, and we also can recognize a boy as a person, and playground as a location. We can further analyze their relations, for example, the dog is chasing the boy and the boy is on the playground. Now this will add more entities and relations through entity-relation recognition. At this level, then, we can do even more interesting things. For example, now we can easily count the most frequent person mentioned in this whole collection of news articles, or see that whenever you mention this person, you also tend to see mentions of another person, etc. So this is a very useful representation, and it's also related to the knowledge graph that some of you may have heard of that Google is doing as a more semantic way of representing text data. However, it's also less robust than sequence of words or even syntactic analysis because it's not always easy to identify all the entities with the right types, and we might make mistakes, and relations are even harder to find, and we might make mistakes. So this makes this level of representation less robust, yet it's very useful. Now if we move further to logical representation, then we can have predicates and even inference rules. With inference rules, we can infer interesting derived facts from the text, so that's very useful. But unfortunately, this level of representation is even less robust, and we can make mistakes, and we can't do that all the time for all kinds of sentences. Finally, speech acts would add yet another level of representation of the intent of saying this sentence. So in this case, it might be a request. 
So knowing that would allow us to analyze even more interesting things about the speaker or the author of this sentence. What's the intention of saying that? What scenarios? What kind of actions would be made? So this is another level of analysis that would be very interesting. So this picture shows that if we move down, we generally see more sophisticated natural language processing techniques being used. Unfortunately, such techniques would require more human effort, and they are less accurate. That means there are mistakes. So if we analyze text at the levels that represent deeper analysis of language, then we have to tolerate the errors. So that also means it's still necessary to combine such deep analysis with shallow analysis based on, for example, sequence of words. On the right side, you'll see the arrow points down to indicate that, as we go down, our representation of text is closer to the knowledge representation in our mind that is needed for solving a lot of problems. Now this is desirable because if we can represent text at the level of knowledge, we can easily extract the knowledge. That's the purpose of text mining. So there is a trade-off here between doing a deeper analysis, which might have errors but would give us direct knowledge that can be extracted from text, and doing shallow analysis, which is more robust but wouldn't actually give us the necessary deeper representation of knowledge. I should also say that text data are generated by humans and are meant to be consumed by humans. So as a result, in text data analysis and text mining, humans play a very important role; they are always in the loop. Meaning that we should optimize the collaboration of humans and computers. 
So in that sense, it's okay that computers may not be able to compute an accurate representation of text data, and the patterns that are extracted from text data can be interpreted by humans, and humans can guide the computers to do more accurate analysis by annotating more data, by providing features to guide machine learning programs to make them work more effectively." + } + ] + }, + { + "1-6-text-representation-part-2": [ + { + "0:00": "[SOUND]. So, as we explained, different text representations tend to enable different analyses." + }, + { + "0:16": "In particular, we can gradually add more and more deeper analysis results to represent text data. And that would open up more interesting representation" + }, + { + "0:29": "opportunities and also analysis capacities. So, this table summarizes what we have just seen. So the first column shows the text representation. The second column visualizes the generality of such a representation, meaning whether we can do this kind of representation accurately for all the text data or only some of them. And the third column shows the enabled analysis techniques." + }, + { + "0:56": "And the final column shows some examples of applications that can be achieved through this level of representation. So let's take a look at them. So as a string, text can only be processed by string processing algorithms. It's very robust, it's general." + }, + { + "1:15": "And there are still some interesting applications that can be done at this level. For example, compression of text doesn't necessarily need to know the word boundaries, although knowing word boundaries might actually also help." + }, + { + "1:28": "Word-based representation is a very important level of representation. It's quite general and relatively robust, enabling a lot of analysis techniques, such as word relation analysis, topic analysis and sentiment analysis. And there are many applications that can be enabled by this kind of analysis. 
For example, thesaurus discovery has to do with discovering related words. And topic and opinion related applications abound. For example, people might be interested in knowing the major topics covered in a collection of texts. And this can be the case in research literature, where scientists want to know what are the most important research topics today. Or customer service people might want to know what the major complaints from their customers are by mining their e-mail messages. And business intelligence people might be interested in understanding consumers' opinions about their products and the competitors' products to figure out what are the winning features of their products." + }, + { + "2:43": "And, in general, there are many applications that can be enabled by the representation at this level." + }, + { + "2:53": "Now, moving down, we'll see we can gradually add additional representations. By adding syntactic structures, we can enable, of course, syntactic graph analysis. We can use graph mining algorithms to analyze syntactic graphs. And some applications are related to this kind of representation. For example, stylistic analysis generally requires syntactic structure representation." + }, + { + "3:22": "We can also generate structure-based features, and those are features that might help us classify the text objects into different categories; by looking at the structures, sometimes the classification can be more accurate. For example, if you want to classify articles into" + }, + { + "3:45": "different categories corresponding to different authors, and you want to figure out which of the k authors has actually written this article, then you generally need to look at the syntactic structures." + }, + { + "4:03": "When we add entities and relations, then we can enable other techniques such as knowledge graph analysis, or information network analysis in general. And this analysis enables applications about entities." 
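The word-based analyses described above (frequent words, topics, opinions about a feature) start from simple counting over word sequences. A minimal sketch, using an invented toy review corpus rather than any data from the course:

```python
from collections import Counter

# Once text is represented as word sequences, frequent-word
# counting (a building block for topic and opinion analysis)
# is a one-liner. The reviews below are made up for illustration.
docs = [
    "the battery life is great",
    "the battery died after a day",
    "great screen but weak battery",
]
counts = Counter(w for d in docs for w in d.split())
print(counts.most_common(3))  # "battery" dominates this tiny corpus
```

Real applications would add stop-word removal and work over far larger collections, but the representation being mined is the same sequence-of-words level.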
+ }, + { + "4:22": "For example, discovery of all the knowledge and opinions about real-world entities." + }, + { + "4:28": "You can also use this level of representation to integrate everything about anything from scattered resources." + }, + { + "4:37": "Finally, when we add logical predicates, that would enable logic inference, of course. And this can be very useful for integrating analysis of scattered knowledge." + }, + { + "4:50": "For example, we can also add an ontology on top of the" + }, + { + "4:54": "information extracted from text, to make inferences." + }, + { + "4:59": "A good example of an application enabled by this level of representation is a knowledge assistant for biologists. And this is a program that can help a biologist manage all the relevant knowledge from literature about a research problem such as understanding functions of genes." + }, + { + "5:22": "And the computer can make inferences about some of the hypotheses that the biologist might be interested in. For example, whether a gene has a certain function; the intelligent program can read the literature to extract the relevant facts, doing information extraction, and then use a logic system to actually find the answers to the researcher's questions about what genes are related to what functions." + }, + { + "5:57": "So in order to support this level of application we need to go as far as logical representation. Now, this course covers techniques mainly based on word-based representation." + }, + { + "6:12": "And these techniques are general and robust, and they are more widely used in various applications." + }, + { + "6:21": "In fact, in virtually all the text mining applications you need this level of representation and the techniques that support analysis of text at this level." + }, + { + "6:35": "But obviously all these other levels can be combined and should be combined in order to support the sophisticated applications. 
So to summarize, here are the major takeaway points. Text representation determines what kind of mining algorithms can be applied. And there are multiple ways to represent the text: strings, words, syntactic structures, entity-relation graphs, knowledge predicates, etc. And these different representations should in general be combined in real applications to the extent we can. For example, even if we cannot do accurate representation of syntactic structures, we can still get partial structures correctly. And if we can recognize some entities, that would be great. So in general we want to do as much as we can." + }, + { + "7:34": "And when different levels are combined together, we can enable richer analysis, more powerful analysis." + }, + { + "7:42": "This course, however, focuses on word-based representation. Such techniques also have several advantages: first, they are general and robust, so they are applicable to any natural language. That's a big advantage over other approaches that rely on more fragile natural language processing techniques. Secondly, they do not require much manual effort, or sometimes do not require any manual effort. So that's, again, an important benefit, because that means you can apply them directly to any application." + }, + { + "8:20": "Third, these techniques are actually surprisingly powerful and effective for many applications." + }, + { + "8:29": "Although not for all, of course, as I just explained." + }, + { + "8:34": "Now they are very effective partly because words are invented by humans as basic units for communication." + }, + { + "8:45": "So they are actually quite sufficient for representing all kinds of semantics." + }, + { + "8:53": "So that makes this kind of word-based representation also powerful. And finally, such a word-based representation and the techniques enabled by such a representation can be combined with many other sophisticated approaches." 
+ }, + { + "9:14": "So they're not competing with each other. [MUSIC]" + } + ] + }, + { + "1-7-word-association-mining-and-analysis": [ + { + "0:00": "[SOUND] This lecture is about word association mining and analysis. In this lecture, we're going to talk about how to mine associations of words from text. Now this is an example of knowledge about the natural language that we can mine from text data." + }, + { + "0:33": "Here's the outline. We're going to first talk about what word association is, and then explain why discovering such relations is useful, and finally we're going to talk about some general ideas about how to mine word associations. In general there are two word relations, and these are quite basic." + }, + { + "0:56": "One is called a paradigmatic relation. The other is syntagmatic relation. A and B have a paradigmatic relation if they can be substituted for each other. That means the two words that have a paradigmatic relation would be in the same semantic class, or syntactic class. And we can in general replace one by the other without affecting the understanding of the sentence. That means we would still have a valid sentence. For example, cat and dog, these two words have a paradigmatic relation because they are in the same class of animal. And in general, if you replace cat with dog in a sentence, the sentence would still be a valid sentence that you can make sense of." + }, + { + "1:58": "Similarly, Monday and Tuesday have a paradigmatic relation." + }, + { + "2:04": "The second kind of relation is called syntagmatic relation." + }, + { + "2:10": "In this case, the two words that have this relation can be combined with each other. So A and B have a syntagmatic relation if they can be combined with each other in a sentence; that means these two words are semantically related." + }, + { + "2:30": "So for example, cat and sit are related because a cat can sit somewhere." 
+ }, + { + "2:38": "Similarly, car and drive are related semantically and they can be combined with each other to convey meaning. However, in general, we cannot replace cat with sit in a sentence, or car with drive in a sentence, and still get a valid sentence, meaning that if we do that, the sentence will become somewhat meaningless. So this is different from paradigmatic relation. And these two relations are in fact so fundamental that they can be" + }, + { + "3:17": "generalized to capture basic relations between units in arbitrary sequences. And definitely they can be generalized to describe relations of any items in a language. So, A and B don't have to be words; they can be phrases, for example." + }, + { + "3:37": "And they can even be more complex phrases than just a noun phrase. If you think about the general problem of sequence mining, then we can think about the units in the sequence data. Then we think of paradigmatic relations as relations that apply to units that tend to occur in similar locations in a sentence, or in a sequence of data elements in general. So they occur in similar locations relative to the neighbors in the sequence. Syntagmatic relation, on the other hand, is related to co-occurring elements that tend to show up in the same sequence." + }, + { + "4:33": "So these two are complementary and are basic relations of words. And we're interested in discovering them automatically from text data. Discovering such word relations has many applications." + }, + { + "4:47": "First, such relations can be directly useful for improving accuracy of many NLP tasks, and this is because this is part of our knowledge about a language. So if you know these two words are synonyms, for example, then that can help a lot of tasks." + }, + { + "5:05": "And grammar learning can also be done by using such techniques. Because if we can learn paradigmatic relations, then we form classes of words, syntactic classes for example. 
And if we learn syntagmatic relations, then we would be able to know the rules for putting together a larger expression based on component expressions. So we learn the structure and what can go with what else." + }, + { + "5:39": "Word relations can also be very useful for many applications in text retrieval and mining. For example, in search and text retrieval, we can use word associations to modify a query, and this can be used to introduce additional related words into a query and make the query more effective." + }, + { + "6:01": "It's often called query expansion." + }, + { + "6:05": "Or you can use related words to suggest related queries to the user to explore the information space." + }, + { + "6:12": "Another application is to use word associations to automatically construct a topic map for browsing. We can have words as nodes and associations as edges. A user could navigate from one word to another to" + }, + { + "6:28": "find information in the information space." + }, + { + "6:33": "Finally, such word associations can also be used to compare and summarize opinions. For example, we might be interested in understanding positive and negative opinions about the iPhone 6. In order to do that, we can look at what words are most strongly associated with a feature word like battery in positive versus negative reviews. Such syntagmatic relations would help us show the detailed opinions about the product." + }, + { + "7:16": "So, how can we discover such associations automatically? Now, here are some intuitions about how to do that. Now let's first look at the paradigmatic relation." + }, + { + "7:29": "Here we essentially can take advantage of similar contexts." + }, + { + "7:34": "So here you see some simple sentences about cat and dog. You can see they generally occur in similar contexts, and that after all is the definition of paradigmatic relation." 
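The context-similarity intuition just described can be sketched concretely. This is an illustrative sketch, not the course's exact method (the course develops its own similarity measures later), and the toy sentences are invented in the spirit of the lecture's cat/dog examples: represent each word by the bag of words surrounding its occurrences, then compare bags with cosine similarity.

```python
import math
from collections import Counter

# Hypothetical toy corpus; "cat" and "dog" share contexts,
# "computer" does not.
sentences = [
    "my cat eats fish on saturday",
    "his cat eats turkey on tuesday",
    "my dog eats meat on sunday",
    "his dog eats turkey on tuesday",
    "the computer runs programs on monday",
]

def context(word):
    """Bag of words occurring around `word` across all sentences."""
    bag = Counter()
    for s in sentences:
        toks = s.split()
        if word in toks:
            bag.update(t for t in toks if t != word)
    return bag

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

print(cosine(context("cat"), context("dog")))       # high
print(cosine(context("cat"), context("computer")))  # low
```

A high score suggests a paradigmatic relation (cat/dog), a low score suggests none (cat/computer), matching the intuition in the lecture.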
+ }, + { + "7:49": "On the right side you can see I have extracted explicitly the context of cat and dog from this small sample of text data." + }, + { + "8:00": "I've taken away cat and dog from these sentences, so that you can see just the context." + }, + { + "8:08": "Now, of course we can have different perspectives to look at the context." + }, + { + "8:13": "For example, we can look at what words occur in the left part of this context. So we can call this the left context. What words occur before we see cat or dog? So, you can see in this case, clearly dog and cat have similar left contexts." + }, + { + "8:41": "You generally say his cat or my cat, and you also say my dog and his dog. So that makes them similar in the left context." + }, + { + "8:53": "Similarly, if you look at the words that occur after cat and dog, which we can call the right context, they are also very similar in this case. Of course, it's an extreme case, where you only see eats." + }, + { + "9:08": "And in general, you'll see many other words, of course, that can follow cat and dog." + }, + { + "9:17": "You can also even look at the general context. And that might include all the words in the sentence or in sentences around this word." + }, + { + "9:27": "And even in the general context, you also see similarity between the two words." + }, + { + "9:35": "So this suggests that we can discover paradigmatic relations by looking at the similarity of contexts of words. So, for example, think about the following questions. How similar are the context of cat and the context of dog?" + }, + { + "9:56": "In contrast, how similar are the context of cat and the context of computer?" + }, + { + "10:02": "Now, intuitively, we would imagine the context of cat and the context of dog would be more similar than the context of cat and the context of computer. 
That means, in the first case the similarity value would be high," + }, + { + "10:21": "between the context of cat and dog, whereas in the second, the similarity between the context of cat and computer would be low, because they do not have a paradigmatic relation. Imagine what words occur after computer in general. It would be very different from what words occur after cat." + }, + { + "10:46": "So this is the basic idea of discovering paradigmatic relations." + }, + { + "10:52": "What about the syntagmatic relation? Well, here we're going to explore the correlated occurrences, again based on the definition of syntagmatic relation." + }, + { + "11:03": "Here you see the same sample of text." + }, + { + "11:06": "But here we're interested in knowing what other words are correlated with the verb eats and what words can go with eats." + }, + { + "11:16": "And if you look at the right side of this slide, you see I've taken away the two words around eats." + }, + { + "11:27": "I've taken away the word to its left and also the word to its right in each sentence." + }, + { + "11:35": "And then we ask the question, what words tend to occur to the left of eats?" + }, + { + "11:43": "And what words tend to occur to the right of eats?" + }, + { + "11:49": "Now thinking about this question would help us discover syntagmatic relations, because syntagmatic relations essentially capture such correlations." + }, + { + "12:03": "So the important question to ask for syntagmatic relation is, whenever eats occurs, what other words also tend to occur?" + }, + { + "12:16": "So the question here has to do with whether there are some other words that tend to co-occur together with eats. Meaning that whenever you see eats, you tend to see the other words." + }, + { + "12:29": "And if you don't see eats, probably you don't see the other words often either." + }, + { + "12:36": "So this intuition can help discover syntagmatic relations." 
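The co-occurrence intuition above can also be made concrete. The sketch below is illustrative rather than the course's exact measure: it scores the association of two words by comparing their co-occurrence rate against their individual occurrence rates using pointwise mutual information over sentences, on an invented toy corpus.

```python
import math

# Hypothetical toy corpus; "eats" co-occurs with "meat" but
# never with "text".
sentences = [
    "the cat eats meat",
    "the dog eats meat",
    "the boy eats bread",
    "the boy reads text",
    "the girl writes text",
]

def pmi(w1, w2):
    """Pointwise mutual information of two words over sentences."""
    n = len(sentences)
    p1 = sum(w1 in s.split() for s in sentences) / n
    p2 = sum(w2 in s.split() for s in sentences) / n
    p12 = sum(w1 in s.split() and w2 in s.split() for s in sentences) / n
    return math.log(p12 / (p1 * p2)) if p12 else float("-inf")

print(pmi("eats", "meat"))  # positive: they tend to co-occur
print(pmi("eats", "text"))  # never co-occur in this corpus
```

A positive score (eats/meat) signals a candidate syntagmatic relation; words that never co-occur (eats/text) score at the minimum, matching the prediction intuition in the lecture.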
+ }, + { + "12:41": "Now again, consider an example." + }, + { + "12:44": "How helpful is the occurrence of eats for predicting the occurrence of meat?" + }, + { + "12:49": "Right. All right, so knowing whether eats occurs in a sentence would generally help us predict whether meat also occurs. Indeed, if we see eats occur in the sentence, that should increase the chance that meat would also occur." + }, + { + "13:08": "In contrast, if you look at the question in the bottom, how helpful is the occurrence of eats for predicting the occurrence of text?" + }, + { + "13:17": "Because eats and text are not really related, knowing whether eats occurred in the sentence doesn't really help us predict whether text also occurs in the sentence. So this is in contrast to the question about eats and meat." + }, + { + "13:35": "This also helps explain the intuition behind the methods for discovering syntagmatic relations. Namely, we need to capture the correlation between the occurrences of two words." + }, + { + "13:50": "So to summarize, the general ideas for discovering word associations are the following." + }, + { + "13:56": "For paradigmatic relation, we represent each word by its context. And then compute its context similarity. We're going to assume the words that have high context similarity to have a paradigmatic relation." + }, + { + "14:14": "For syntagmatic relation, we will count how many times two words occur together in a context, which can be a sentence, a paragraph, or a document even. And we're going to compare their co-occurrences with their individual occurrences." + }, + { + "14:33": "We're going to assume words with high co-occurrences but relatively low individual occurrences to have syntagmatic relations because they tend to occur together and they don't usually occur alone. 
Note that the paradigmatic relation and the syntagmatic relation are actually closely related in that paradigmatically related words tend to have a syntagmatic relation with the same word. They tend to be associated with the same word, and that suggests that we can also do joint discovery of the two relations. So these general ideas can be implemented in many different ways. And the course won't cover all of them, but we will cover at least some of the methods that are effective for discovering these relations. [MUSIC]" + } + ] + }, + { + "1-8-paradigmatic-relation-discovery-part-1": [ + { + "0:00": "[SOUND] This lecture is about Paradigmatic Relation Discovery. In this lecture we are going to talk about how to discover a particular kind of word association called a paradigmatic relation." + }, + { + "0:25": "By definition, two words are paradigmatically related if they share a similar context. Namely, they occur in similar positions in text. So naturally our idea of discovering such a relation is to look at the context of each word and then try to compute the similarity of those contexts." + }, + { + "0:50": "So here is an example of context of a word, cat." + }, + { + "0:55": "Here I have taken the word cat out of the context and you can see we are seeing some remaining words in the sentences that contain cat." + }, + { + "1:09": "Now, we can do the same thing for another word like dog." + }, + { + "1:13": "So in general we would like to capture such a context and then try to assess the similarity of the context of cat and the context of a word like dog." + }, + { + "1:24": "So now the question is how can we formally represent the context and then define the similarity function." + }, + { + "1:33": "So first, we note that the context actually contains a lot of words. So, they can be regarded as a pseudo document, an imagined document, but there are also different ways of looking at the context. 
For example, we can look at the word that occurs before the word cat. We can call this context Left1 context. All right, so in this case you will see words like my, his, or big, a, the, et cetera. These are the words that can occur to the left of the word cat. So we say my cat, his cat, big cat, a cat, et cetera. Similarly, we can also collect the words that occur right after the word cat. We can call this context Right1, and here we see words like eats, ate, is, has, et cetera. Or, more generally, we can look at all the words in the window of text around the word cat. Here, let's say we can take a window of 8 words around the word cat. We call this context Window8." + }, + { + "2:49": "Now, of course, you can use all the words from the left or from the right, and so we'll have a bag of words in general to represent the context." + }, + { + "3:01": "Now, such a word based representation would actually give us an interesting way to define the perspective of measuring the similarity. Because if you look at just the similarity of Left1, then we'll only consider the words in the left context, and we kind of ignore the other words that are also in the general context. So that gives us one perspective to measure the similarity, and similarly, if we only use the Right1 context, we will capture the similarity from another perspective. Using both the Left1 and Right1 of course would allow us to capture the similarity with even more strict criteria." + }, + { + "3:49": "So in general, context may contain adjacent words, like eats and my, that you see here, or non-adjacent words, like Saturday, Tuesday, or some other words in the context." + }, + { + "4:05": "And this flexibility also allows us to measure the similarity in somewhat different ways. Sometimes this is useful, as we might want to capture similarity based on general content. That would give us loosely related paradigmatic relations. 
Whereas if you use only the words immediately to the left and to the right of the word, then you likely will capture words that are very much related by their syntactical categories and semantics." + }, + { + "4:41": "So the general idea of discovering paradigmatic relations is to compute the similarity of context of two words. So here, for example, we can measure the similarity of cat and dog based on the similarity of their context. In general, we can combine all kinds of views of the context. And so the similarity function is, in general, a combination of similarities on different contexts. And of course, we can also assign weights to these different similarities to allow us to focus more on a particular kind of context. And this would be naturally application specific, but again, here the main idea for discovering paradigmatically related words is to compute the similarity of their context. So next let's see how we exactly compute these similarity functions. Now to answer this question, it is useful to think of the bag of words representation as vectors in a vector space model." + }, + { + "5:48": "Now those of you who are familiar with information retrieval or text retrieval techniques would realize that the vector space model has been used frequently for modeling documents and queries for search. But here we also find it convenient to model the context of a word for paradigmatic relation discovery. So the idea of this approach is to view each word in our vocabulary as defining one dimension in a high dimensional space. So if we have N words in total in the vocabulary, then we have N dimensions, as illustrated here. And on the bottom, you can see a frequency vector representing a context, and here we see that eats occurred 5 times in this context, ate occurred 3 times, et cetera. So this vector can then be placed in this vector space model. 
So in general, we can represent a pseudo document or context of cat as one vector, d1, and another word, dog, might give us a different context, so d2. And then we can measure the similarity of these two vectors. So by viewing context in the vector space model, we convert the problem of paradigmatic relation discovery into the problem of computing the vectors and their similarity." + }, + { + "7:20": "So the two questions that we have to address are first, how to compute each vector, and that is how to compute xi or yi." + }, + { + "7:31": "And the other question is how do you compute the similarity." + }, + { + "7:35": "Now in general, there are many approaches that can be used to solve the problem, and most of them are developed for information retrieval. And they have been shown to work well for matching a query vector and a document vector. But we can adapt many of the ideas to compute a similarity of context documents for our purpose here. So let's first look at one plausible approach, where we try to measure the similarity of context based on the expected overlap of words in context, and we call this EOWC." + }, + { + "8:17": "So the idea here is to represent a context by a word vector where each word has a weight that's equal to the probability that a randomly picked word from this document is this word. So in other words, xi is defined as the normalized count of word wi in the context, and this can be interpreted as the probability that you would actually pick this word from d1 if you randomly picked a word." + }, + { + "8:56": "Now, of course these xi's would sum to one because they are normalized frequencies," + }, + { + "9:02": "and this means the vector is actually a probability distribution over words." + }, + { + "9:10": "So, the vector d2 can also be computed in the same way, and this would give us then two probability distributions representing two contexts." 
+ }, + { + "9:24": "So, that addresses the problem of how to compute the vectors, and next let's see how we can define similarity in this approach. Well, here, we simply define the similarity as a dot product of two vectors, and this is defined as a sum of the products" + }, + { + "9:41": "of the corresponding elements of the two vectors." + }, + { + "9:46": "Now, it's interesting to see that this similarity function actually has a nice interpretation, and that is this. The dot product in fact gives us the probability that two randomly picked words from the two contexts are identical. That means if we try to pick a word from one context and try to pick another word from another context, we can then ask the question, are they identical? If the two contexts are very similar, then we should expect we frequently will see the two words picked from the two contexts are identical. If they are very different, then the chance of seeing identical words being picked from the two contexts would be small. So this intuitively makes sense, right, for measuring similarity of contexts." + }, + { + "10:41": "Now you might want to also take a look at the exact formulas and see why this can be interpreted as the probability that two randomly picked words are identical." + }, + { + "10:57": "So if you just stare at the formula to check what's inside this sum, then you will see basically in each case it gives us the probability that we will see an overlap on a particular word, wi. And xi gives us the probability that we will pick this particular word from d1, and yi gives us the probability of picking this word from d2. And when we pick the same word from the two contexts, then we have an identical pick. So that's one possible approach, EOWC, expected overlap of words in context. Now as always, we would like to assess whether this approach would work well. 
Now of course, ultimately we have to test the approach with real data and see if it gives us really semantically related words." + }, + { + "11:57": "Really gives us paradigmatic relations, but analytically we can also analyze this formula a little bit. So first, as I said, it does make sense, right, because this formula will give a higher score if there is more overlap between the two contexts. So that's exactly what we want. But if you analyze the formula more carefully, then you also see there might be some potential problems, and specifically there are two potential problems. First, it might favor matching one frequent term very well, over matching more distinct terms." + }, + { + "12:36": "And that is because in the dot product, if one element has a high value and this element is shared by both contexts and it contributes a lot to the overall sum," + }, + { + "12:51": "it might indeed make the score higher than in another case, where the two vectors actually have a lot of overlap in different terms. But each term has a relatively low frequency, so this may not be desirable. Of course, this might be desirable in some other cases. But in our case, we should intuitively prefer a case where we match more different terms in the context, so that we have more confidence in saying that the two words indeed occur in similar context. If you only rely on one term, that's a little bit questionable; it may not be robust." + }, + { + "13:34": "Now the second problem is that it treats every word equally, right. So if you match a word like the, it will be the same as matching a word like eats, but intuitively we know matching the isn't really surprising because the occurs everywhere. So matching the is not as strong evidence as matching a word like eats, which doesn't occur frequently. So this is another problem of this approach." + }, + { + "14:13": "In the next lecture we are going to talk about how to address these problems. 
[MUSIC]" + } + ] + }, + { + "1-9-paradigmatic-relation-discovery-part-2": [ + { + "0:05": "In this lecture, we continue discussing Paradigmatic Relation Discovery. Earlier we introduced a method called Expected Overlap of Words in Context. In this method, we represent each context by a word vector that represents the probability of a word in the context. We measure the similarity by using the dot product, which can be interpreted as the probability that two randomly picked words from the two contexts are identical. We also discussed the two problems of this method. The first is that it favors matching one frequent term very well over matching more distinct terms. It puts too much emphasis on matching one term very well. The second is that it treats every word equally. Even a common word like the will contribute equally as a content word like eats. So now we are going to talk about how to solve these problems. More specifically, we're going to introduce some retrieval heuristics used in text retrieval. These heuristics can effectively solve these problems, as these problems also occur in text retrieval when we match a query vector with a document vector. So to address the first problem, we can use a sublinear transformation of term frequency. That is, we don't have to use the raw frequency count of a term to represent the context. We can transform it into some form that wouldn't emphasize so much on the raw frequency. To address the second problem, we can put more weight on rare terms. That is, we can reward matching a rare word. This heuristic is called the IDF term weighting in text retrieval. IDF stands for Inverse Document Frequency. So now, we're going to talk about the two heuristics in more detail. First let's talk about the TF Transformation. That is to convert the raw count of a word in the document into some weight that reflects our belief about how important this word is in the document. So that will be denoted by TF of w,d. That's shown on the y-axis. 
Now, in general, there are many ways to map that. Let's first look at the simple way of mapping. In this case, we're going to say, well, any non-zero counts will be mapped to one and the zero count will be mapped to zero. So with this mapping all the frequencies will be mapped to only two values: zero or one. The mapping function is shown here as a flat line here. Now, this is naive because it ignores the frequency of words. However, this actually has the advantage of emphasizing matching all the words in the context. So it does not allow the frequency of a word to dominate the matching. Now, the approach that we have taken earlier in the expected overlap count approach is a linear transformation. We basically take y as the same as x. So we use the raw count as a representation. That created the problem that we just talked about, namely, it emphasizes too much on just matching one frequent term. Matching one frequent term can contribute a lot. So we can have a lot of other interesting transformations in between the two extremes, and they generally form a sublinear transformation. So for example, one possibility is to take the logarithm of the raw count, and this will give us a curve that looks like this, that you are seeing here. In this case, you can see the high frequency counts are penalized a little bit, so the curve is a sublinear curve and it brings down the weight of those really high counts. This is what we want, because it prevents those terms from dominating the scoring function." + }, + { + "4:48": "Now, there is also another interesting transformation called a BM25 transformation which has been shown to be very effective for retrieval. In this transformation, we have a form that looks like this. So it's k plus one multiplied by x divided by x plus k, where k is a parameter, x is the count, the raw count of a word. Now, the transformation is very interesting in that it can actually go from one extreme to the other extreme by varying k. 
It is also interesting that it has an upper bound, k plus one in this case. So this puts a very strict constraint on high frequency terms, because their weight would never exceed k plus one. As we vary k, we can simulate the two extremes. So when k is set to zero, we roughly have the 0,1 vector. Whereas when we set k to a very large value, it will behave more like the linear transformation. So this transformation function is by far the most effective transformation function for text retrieval and it also makes sense for our problem setup. So we just talked about how to solve the problem of overemphasizing a frequent term. Now let's look at the second problem, and that is how we can penalize popular terms. Matching \"the\" is not surprising, because \"the\" occurs everywhere. But matching \"eats\" would count a lot. So how can we address that problem? Now in this case, we can use the IDF weighting. That's commonly used in retrieval. IDF stands for Inverse Document Frequency. Document frequency means the count of the total number of documents that contain a particular word. So here we show that the IDF measure is defined as a logarithm function of the number of documents that contain a term, or the document frequency. So K is the number of documents containing the word, or the document frequency, and M here is the total number of documents in the collection. The IDF function is giving a higher value for a lower K, meaning that it rewards rare terms. The maximum value is log of M plus one. That's when the word occurred in just one document. So that's a very rare term, the rarest term in the whole collection. The lowest value you can see here is when K reaches its maximum which would be M. So that would be a very low value, close to zero in fact. So this measure is of course used in search where we naturally have a collection. In our case, what would be our collection? Well, we can use the collected context documents as our collection. 
That is to say, a word that's popular in the collection in general, would also have a low IDF. Because depending on the dataset, we can construct the context vectors in different ways. But in the end if a term is very frequent in the original dataset, then it will still be frequent in the collected context documents. So how can we add these heuristics to improve our similarity function? Well, here's one way and there are many other ways that are possible. But this is a reasonable way, where we can adapt the BM25 retrieval model for paradigmatic relation mining." + }, + { + "9:14": "In this case, we define the document vector as containing elements representing normalized BM25 values. So in this normalization function, we take a sum over all the words and we normalize the weight of each word by the sum of the weights of all the words. This is to again ensure all the xi's will sum to one in this vector. So this would be very similar to what we had before, in that this vector is actually something similar to a word distribution, all the xi's will sum to one. Now, the weight of BM25 for each word is defined here. If you compare this with our old definition where we just have a normalized count of this one, right? So we only have this one and the document length, or the total count of words in that context document, and that's what we had before. But now with the BM25 transformation, we introduced something else. First, of course, this extra occurrence of this count is just to achieve the sub-linear transformation. But we also see we introduced the parameter, k, here, and this parameter is generally a non-negative number, although zero is also possible. But this controls the upper bound, and also controls to what extent it simulates the linear transformation. So this is one parameter, but we also see there is another parameter here, b, and this would be within zero and one. This is a parameter to control length normalization. 
In this case, the normalization formula has the average document length here. This is computed by taking the average of the lengths of all the documents in the collection. In this case, the lengths of all the context documents that we're considering. So this average document length will be a constant for any given collection. So it actually is only affecting the effect of the parameter, b, here because this is a constant. But I kept it here because it's a constant that's used in retrieval where it would give us a stable interpretation of parameter, b. But for our purpose, this will be a constant so it would only be affecting the length normalization together with parameter, b. Now, with this definition then, we have a new way to define our document vectors, and we can compute the vector d2 in the same way. The difference is that the high-frequency terms will now have somewhat lower weights. This would help us control the influence of these high-frequency terms. Now, the IDF can be added here in the scoring function. That means we'll introduce a weight for matching each term. So you may recall this sum indicates all the possible words that can overlap between the two contexts. The x_i and the y_i are probabilities of picking the word from both contexts. Therefore, it indicates how likely we'll see a match on this word. Now, IDF would give us the importance of matching this word. A common word will be worth less than a rare word. So we emphasize more on matching rare words now. So with this modification, then the new function will likely address those two problems. Now, interestingly we can also use this approach to discover syntagmatic relations. In general, when we represent a context with a term vector, we would likely see some terms have high weights and other terms have low weights. 
Depending on how we assign weights to these terms, we might be able to use these weights to discover the words that are strongly associated with the candidate word in the context. So let's take a look at the term vector in more detail here. We have each x_i defined as the normalized weight of BM25. Now, this weight alone only reflects how frequently the word occurs in the context. But we can't just say any frequent term in the context is correlated with the candidate word, because many common words like 'the' will occur frequently in all the contexts. But if we apply IDF weighting as you see here, we can then re-weight these terms based on IDF. That means the words that are common like 'the' will get penalized. So now the highest weighted terms will not be those common terms because they have lower IDFs. Instead, those terms would be the terms that are frequent in the context, but not frequent in the collection. So those are clearly the words that tend to occur in the context of the candidate word, for example, cat. So for this reason, the highly weighted terms in this IDF-weighted vector can also be assumed to be candidates for syntagmatic relations. Now, of course, this is only a by-product of our approach for discovering paradigmatic relations. In the next lecture, we're going to talk more about how to discover syntagmatic relations. But it clearly shows the relation between discovering the two relations. Indeed they can be discovered in a joint manner by leveraging such associations. So to summarize, the main idea for discovering paradigmatic relations is to collect the context of a candidate word to form a pseudo document. This is typically represented as a bag of words. Then compute the similarity of the corresponding context documents of two candidate words. Then we can take the highly similar word pairs, and treat them as having paradigmatic relations. These are the words that share similar contexts. 
There are many different ways to implement this general idea. We just talked about some of the approaches. More specifically, we talked about using text retrieval models to help us design an effective similarity function to compute the paradigmatic relations. More specifically, we have used the BM25 and IDF weighting to discover paradigmatic relations. These approaches also represent the state of the art in text retrieval techniques. Finally, syntagmatic relations can also be discovered as a by-product when we discover paradigmatic relations." + } + ] + } + ] + }, + { + "Week 2": [ + { + "2-1-syntagmatic-relation-discovery-entropy": [ + { + "0:00": "[SOUND] This lecture is about syntagmatic relation discovery and entropy. In this lecture, we're going to continue talking about word association mining. In particular, we're going to talk about how to discover syntagmatic relations. And we're going to start with the introduction of entropy, which is the basis for designing some measures for discovering such relations." + }, + { + "0:32": "By definition, syntagmatic relations hold between words that have correlated co-occurrences. That means, when we see one word occurs in context, we tend to see the occurrence of the other word." + }, + { + "0:48": "So, take a more specific example, here. We can ask the question, whenever eats occurs, what other words also tend to occur?" + }, + { + "1:01": "Looking at the sentences on the left, we see some words that might occur together with eats, like cat, dog, or fish, right. But if I take them out and if you look at the right side where we only show eats and some other words, the question then is, can you predict what other words occur to the left or to the right?" + }, + { + "1:28": "Right, so this would force us to think about what other words are associated with eats. If they are associated with eats, they tend to occur in the context of eats." 
+ }, + { + "1:38": "More specifically our prediction problem is to take any text segment which can be a sentence, a paragraph, or a document. And then ask the question, is a particular word present or absent in this segment?" + }, + { + "1:54": "Right here we ask about the word W. Is W present or absent in this segment?" + }, + { + "2:02": "Now what's interesting is that some words are actually easier to predict than other words." + }, + { + "2:10": "If you take a look at the three words shown here, meat, the, and unicorn, which one do you think is easier to predict?" + }, + { + "2:20": "Now if you think about it for a moment you might conclude that" + }, + { + "2:24": "the is easier to predict because it tends to occur everywhere. So I can just say, well, the would be in the sentence." + }, + { + "2:31": "Unicorn is also relatively easy because unicorn is rare, is very rare. And I can bet that it doesn't occur in this sentence." + }, + { + "2:42": "But meat is somewhere in between in terms of frequency. And it makes it harder to predict because it's possible that it occurs in a sentence or the segment, more accurately." + }, + { + "2:53": "But it may also not occur in the sentence, so now let's study this problem more formally." + }, + { + "3:02": "So the problem can be formally defined as predicting the value of a binary random variable. Here we denote it by X sub w, w denotes a word, so this random variable is associated with precisely one word." + }, + { + "3:18": "When the value of the variable is 1, it means this word is present. When it's 0, it means the word is absent. And naturally, the probabilities for 1 and 0 should sum to 1, because a word is either present or absent in a segment." + }, + { + "3:35": "There's no other choice." + }, + { + "3:38": "So the intuition discussed earlier can be formally stated as follows. The more random this random variable is, the more difficult the prediction will be." 
+ }, + { + "3:49": "Now the question is how does one quantitatively measure the randomness of a random variable like X sub w?" + }, + { + "3:56": "How, in general, can we quantify the randomness of a variable? That's why we need a measure called entropy, and this measure was introduced in information theory to measure the randomness of X. There is also some connection with information here but that is beyond the scope of this course." + }, + { + "4:17": "So for our purpose we just treat the entropy function as a function defined on a random variable. In this case, it is a binary random variable, although the definition can be easily generalized for a random variable with multiple values." + }, + { + "4:32": "Now the function form looks like this, there's a sum over all the possible values for this random variable. Inside the sum for each value we have a product of the probability" + }, + { + "4:45": "that the random variable equals this value and log of this probability." + }, + { + "4:53": "And note that there is also a negative sign there." + }, + { + "4:56": "Now entropy in general is non-negative. And that can be mathematically proved." + }, + { + "5:02": "So if we expand this sum, we'll see that the equation looks like the second one. Where I explicitly plugged in the two values, 0 and 1. And sometimes when we have 0 log of 0, we would generally define that as 0, because log of 0 is undefined." + }, + { + "5:28": "So this is the entropy function. And this function will give a different value for different distributions of this random variable." + }, + { + "5:37": "And it clearly depends on the probability that the random variable takes a value of 1 or 0. If we plot this function against the probability that the random variable is equal to 1." + }, + { + "5:56": "And then the function looks like this." 
+ }, + { + "6:01": "At the two ends, that means when the probability of X" + }, + { + "6:07": "equals 1 is very small or very large, then the entropy function has a low value. When it's 0.5 in the middle then it reaches the maximum." + }, + { + "6:20": "Now if we plot the function against the probability that X" + }, + { + "6:25": "is taking a value of 0 and the function would show exactly the same curve here, and you can imagine why. And so that's because" + }, + { + "6:42": "the two probabilities are symmetric, and completely symmetric." + }, + { + "6:48": "So an interesting question you can think about in general is for what kind of X does entropy reach maximum or minimum. And we can in particular think about some special cases. For example, in one case, we might have a random variable that" + }, + { + "7:08": "always takes a value of 1. The probability is 1." + }, + { + "7:16": "Or there's a random variable that" + }, + { + "7:19": "is equally likely to take a value of one or zero. So in this case the probability that X equals 1 is 0.5." + }, + { + "7:30": "Now which one has a higher entropy?" + }, + { + "7:34": "It's easier to look at the problem by thinking of a simple example" + }, + { + "7:40": "using coin tossing." + }, + { + "7:43": "So when we think about random experiments like tossing a coin," + }, + { + "7:48": "it gives us a random variable that can represent the result. It can be head or tail. So we can define a random variable X sub coin, so that it's 1 when the coin shows up as head, it's 0 when the coin shows up as tail." + }, + { + "8:09": "So now we can compute the entropy of this random variable. And this entropy indicates how difficult it is to predict the outcome" + }, + { + "8:22": "of a coin toss." + }, + { + "8:25": "So we can think about the two cases. One is a fair coin, it's completely fair. The coin shows up as head or tail equally likely. So the two probabilities would be a half. Right? So both are equal to one half." 
+ }, + { + "8:44": "Another extreme case is a completely biased coin, where the coin always shows up as heads. So it's a completely biased coin." + }, + { + "8:54": "Now let's think about the entropies in the two cases. And if you plug in these values you can see the entropies would be as follows. For a fair coin, we see the entropy reaches its maximum; that's 1." + }, + { + "9:11": "For the completely biased coin, we see it's 0. And that intuitively makes a lot of sense. Because a fair coin is most difficult to predict." + }, + { + "9:22": "Whereas a completely biased coin is very easy to predict. We can always say, well, it's a head. Because it is a head all the time. So they can be shown on the curve as follows. The fair coin corresponds to the middle point, where it's very uncertain. The completely biased coin corresponds to the end point, where we have a probability of 1.0 and the entropy is 0. So, now let's see how we can use entropy for word prediction. Our problem is to predict whether W is present or absent in a segment. Again, think about the three words, and particularly think about their entropies." + }, + { + "10:06": "Now we can assume high entropy words are harder to predict." + }, + { + "10:11": "And so we now have a quantitative way to tell us which word is harder to predict." + }, + { + "10:20": "Now if you look at the three words meat, the, unicorn again, we clearly would expect meat to have a higher entropy than the or unicorn. In fact, if you look at the entropy of the, it's close to zero. Because it occurs everywhere. So it's like a completely biased coin." + }, + { + "10:44": "Therefore the entropy is zero." + }, + { + "10:48": "[MUSIC]" + } + ] + }, + { + "2-2-syntagmatic-relation-discovery-conditional-entropy": [ + { + "0:00": "[SOUND] This lecture is about syntagmatic relation discovery and conditional entropy. In this lecture, we're going to continue the discussion of word association mining and analysis."
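The binary entropy function the transcript walks through (with the 0 log 0 = 0 convention and the coin examples) can be sketched in a few lines of Python. This is an illustrative sketch, not code from the course; the function name and example probabilities are my own.

```python
import math

def entropy(p: float) -> float:
    """Entropy H(X), in bits, of a binary random variable with P(X=1) = p.

    H(X) = -p*log2(p) - (1-p)*log2(1-p), using the convention 0*log2(0) = 0,
    since log of 0 is undefined (as the transcript notes).
    """
    h = 0.0
    for q in (p, 1.0 - p):
        if q > 0.0:
            h -= q * math.log2(q)
    return h

# A fair coin (P(head) = 0.5) is hardest to predict: entropy is 1 bit.
print(entropy(0.5))       # 1.0
# A completely biased coin (always heads) is trivial to predict.
print(entropy(1.0))       # 0.0
# The curve is symmetric in P(X=1) and P(X=0).
print(abs(entropy(0.2) - entropy(0.8)) < 1e-9)   # True
```

Plotting `entropy(p)` for p between 0 and 1 reproduces the curve described around 5:56: zero at both ends and a maximum of 1 bit at p = 0.5.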
+ }, + { + "0:18": "We're going to talk about the conditional entropy, which is useful for discovering syntagmatic relations. Earlier, we talked about using entropy to capture how easy it is to predict the presence or absence of a word." + }, + { + "0:34": "Now, we'll address a different scenario where we assume that we know something about the text segment. So now the question is, suppose we know that eats occurred in the segment. How would that help us predict the presence or absence of a word like meat? And in particular, we want to know whether the presence of eats has helped us predict the presence of meat." + }, + { + "1:02": "And if we frame this using entropy, that would mean we are interested in knowing whether knowing the presence of eats could reduce uncertainty about meat. Or, reduce the entropy of the random variable corresponding to the presence or absence of meat. We can also ask the question, what if we know of the absence of eats?" + }, + { + "1:28": "Would that also help us predict the presence or absence of meat?" + }, + { + "1:34": "These questions can be addressed by using another concept called conditional entropy. So to explain this concept, let's first look at the scenario we had before, when we know nothing about the segment. So we have these probabilities indicating whether a word like meat occurs, or it doesn't occur in the segment. And we have an entropy function that looks like what you see on the slide." + }, + { + "2:03": "Now suppose we know eats is present, so now we know the value of another random variable that denotes eats." + }, + { + "2:12": "Now, that would change all these probabilities to conditional probabilities. Where we look at the presence or absence of meat," + }, + { + "2:21": "given that we know eats occurred in the context. So as a result, if we replace these probabilities with their corresponding conditional probabilities in the entropy function, we'll get the conditional entropy."
+ }, + { + "2:37": "So this equation here would be the conditional entropy, conditional on the presence of eats." + }, + { + "2:52": "So, you can see this is essentially the same entropy function as you have seen before, except that all the probabilities now have a condition." + }, + { + "3:04": "And this then tells us the entropy of meat, after we have known that eats occurs in the segment." + }, + { + "3:14": "And of course, we can also define this conditional entropy for the scenario where we don't see eats. So if we know it did not occur in the segment, then this conditional entropy would capture the uncertainty of meat in that condition. So now, putting the different scenarios together, we have the complete definition of conditional entropy as follows." + }, + { + "3:39": "Basically, we're going to consider both scenarios of the value of eats, zero and one, and this gives us the probability that eats is equal to zero or one. Basically, whether eats is present or absent. And this of course, is the conditional entropy of meat in that particular scenario." + }, + { + "4:05": "So if you expand this entropy, then you have the following equation." + }, + { + "4:15": "Where you see the involvement of those conditional probabilities." + }, + { + "4:21": "Now in general, for any discrete random variables x and y, we have" + }, + { + "4:27": "the conditional entropy is no larger than the entropy of the variable x. So basically, this is an upper bound for the conditional entropy. That means by knowing more information about the segment, we won't be able to increase uncertainty. We can only reduce uncertainty. And that intuitively makes sense, because as we know more information, it should always help us make the prediction. And it cannot hurt the prediction in any case." + }, + { + "5:05": "Now, what's interesting here is also to think about what's the minimum possible value of this conditional entropy? Now, we know that the maximum value is the entropy of X."
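The definition at 3:39 — conditional entropy as the entropy of X computed with conditional probabilities, averaged over the values of the conditioning variable — can be sketched as follows. This is a minimal illustration, not course code; the joint probabilities for "meat" and "eats" are made-up numbers, not figures from the lecture.

```python
import math

def h(probs):
    """Entropy, in bits, of a distribution given as a list of probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def conditional_entropy(joint):
    """H(X|Y) from a joint table joint[(x, y)] = P(X=x, Y=y).

    H(X|Y) = sum over y of P(Y=y) * H(X | Y=y).
    """
    total = 0.0
    for y in {y for (_, y) in joint}:
        p_y = sum(p for (_, y2), p in joint.items() if y2 == y)
        if p_y > 0:
            cond = [p / p_y for (_, y2), p in joint.items() if y2 == y]
            total += p_y * h(cond)
    return total

# X = presence of "meat", Y = presence of "eats" (hypothetical numbers).
joint = {(1, 1): 0.20, (1, 0): 0.05, (0, 1): 0.10, (0, 0): 0.65}
p_meat = joint[(1, 1)] + joint[(1, 0)]      # P(meat present) = 0.25
print(h([p_meat, 1 - p_meat]))              # H(meat), the upper bound
print(conditional_entropy(joint))           # H(meat | eats), smaller
```

With these numbers, H(meat | eats) is about 0.54 bits against H(meat) of about 0.81 bits: knowing eats reduces, and can never increase, the uncertainty about meat.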
+ }, + { + "5:17": "But what about the minimum, so what do you think?" + }, + { + "5:22": "I hope you can reach the conclusion that the minimum possible value would be zero. And it will be interesting to think about under what situation we will achieve this." + }, + { + "5:34": "So, let's see how we can use conditional entropy to capture syntagmatic relations." + }, + { + "5:39": "Now of course, this conditional entropy gives us directly one way to measure the association of two words. Because it tells us to what extent we can predict one word given that we know the presence or absence of another word. Now before we look at the intuition of conditional entropy in capturing syntagmatic relations, it's useful to think of a very special case, listed here. That is, the conditional entropy of a word given itself." + }, + { + "6:19": "So here, we listed this conditional entropy in the middle. So, it's here." + }, + { + "6:33": "So, what is the value of this?" + }, + { + "6:36": "Now, this means we know whether meat occurs in the segment. And we hope to predict whether meat occurs in the segment. And of course, this is 0, because there's no uncertainty anymore. Once we know whether the word occurs in the segment, we already know the answer of the prediction. So this is zero. And that's also when this conditional entropy reaches the minimum." + }, + { + "7:06": "So now, let's look at some other cases." + }, + { + "7:09": "So this is a case of knowing the and trying to predict meat. And this is a case of knowing eats and trying to predict meat. Which one do you think is smaller? No doubt a smaller entropy means it's easier for prediction." + }, + { + "7:31": "And which one do you think is higher?" + }, + { + "7:36": "Well, if you look at the uncertainty, then in the first case, the doesn't really tell us much about meat. So knowing the occurrence of the doesn't really help us reduce entropy that much.
So it stays fairly close to the original entropy of meat. Whereas in the case of eats, eats is related to meat. So knowing the presence of eats or the absence of eats would help us predict whether meat occurs. So it can help us reduce the entropy of meat. So we should expect the second term, namely this one, to have a smaller entropy." + }, + { + "8:21": "And that means there is a stronger association between meat and eats." + }, + { + "8:29": "So we now also know that when this W is the same as meat, the conditional entropy would reach its minimum, which is 0. And for what kind of words would it reach its maximum? Well, that's when the word is not really related to meat. Like the, for example; it would be very close to the maximum, which is the entropy of meat itself." + }, + { + "8:59": "So this suggests that when you use conditional entropy for mining syntagmatic relations, the algorithm would look as follows." + }, + { + "9:10": "For each word W1, we're going to enumerate over all the other words W2. And then, we can compute the conditional entropy of W1 given W2." + }, + { + "9:22": "We sort all the candidate words in ascending order of the conditional entropy, because we want to favor a word that has a small entropy, meaning that it helps us predict the presence or absence of the word W1. And then, we're going to take the top-ranked candidate words as words that have potential syntagmatic relations with W1." + }, + { + "9:41": "Note that we need to use a threshold to find these words. The threshold can be the number of top candidates to take, or an absolute value of the conditional entropy." + }, + { + "9:55": "Now, this would allow us to mine the most strongly correlated words with a particular word, W1 here." + }, + { + "10:06": "But, this algorithm does not help us mine the strongest K syntagmatic relations from an entire collection. Because in order to do that, we have to ensure that these conditional entropies are comparable across different words.
In this case of discovering the syntagmatic relations for a target word like W1, we only need to compare the conditional entropies" + }, + { + "10:34": "for W1, given different words. And in this case, they are comparable." + }, + { + "10:41": "All right. So, the conditional entropy of W1, given W2, and the conditional entropy of W1, given W3, are comparable." + }, + { + "10:51": "They all measure how hard it is to predict W1. But, if we think about two pairs, where we share W2 as the condition, and we try to predict W1 and W3, then the conditional entropies are actually not comparable. You can think about this question: why are they not comparable? Well, that is because they have different upper bounds. Right? Those upper bounds are precisely the entropy of W1 and the entropy of W3. And since they have different upper bounds, we cannot really compare them in this way. So how do we address this problem?" + }, + { + "11:38": "Later, we'll discuss how we can use mutual information to solve this problem. [MUSIC]" + } + ] + }, + { + "2-3-syntagmatic-relation-discovery-mutual-information-part-1": [ + { + "0:00": "[SOUND]. This lecture is about syntagmatic relation discovery and mutual information." + }, + { + "0:13": "In this lecture we are going to continue discussing syntagmatic relation discovery. In particular, we are going to talk about another concept from information theory, called mutual information, and how it can be used to discover syntagmatic relations. Before, we talked about a problem with conditional entropy: conditional entropy computed on different pairs of words is not really comparable, and that makes it harder to discover strong syntagmatic relations globally from the corpus.
So now we are going to introduce mutual information, which is another concept in information theory that allows us to normalize the conditional entropy to make it more comparable across different pairs." + }, + { + "1:04": "In particular, mutual information, denoted by I(X;Y), measures the entropy reduction of X obtained from knowing Y. More specifically, the question we are interested in here is how much reduction in the entropy of X we can obtain by knowing Y." + }, + { + "1:27": "So mathematically it can be defined as the difference between the original entropy of X, and the conditional entropy of X given Y." + }, + { + "1:37": "And as you can see here, it can also be defined as the reduction of the entropy of Y because of knowing X." + }, + { + "1:48": "Now, normally the two conditional entropies, H of X given Y and H of Y given X, are not equal, but interestingly, the reduction of entropy by knowing one of them is actually equal. So, this quantity is called mutual information, denoted by I here. And this function has some interesting properties. First, it is non-negative. This is easy to understand, because the original entropy is always" + }, + { + "2:22": "not going to be lower than the possibly reduced conditional entropy. In other words, the conditional entropy will never exceed the original entropy. Knowing some information can always help us potentially, but will not hurt us in predicting X." + }, + { + "2:41": "The second property is that it is symmetric. While conditional entropy is not symmetric, mutual information is. And the third property is that it reaches its minimum, zero, if and only if the two random variables are completely independent. That means knowing one of them does not tell us anything about the other. And this last property can be verified by simply looking at the equation above: it reaches 0 if and only if the conditional entropy of X given Y is exactly the same as the original entropy of X.
So that means knowing Y did not help at all, and that is when X and Y are completely independent." + }, + { + "3:32": "Now, when we fix X, ranking different Ys using conditional entropy would give the same order as ranking based on mutual information, because in the function here, H(X) is fixed since X is fixed. So ranking based on mutual information is exactly the same as ranking based on the conditional entropy of X given Y, but mutual information also allows us to compare different pairs of X and Y. That is why mutual information is more general and, in general, more useful." + }, + { + "4:10": "So, let us examine the intuition of using mutual information for syntagmatic relation mining." + }, + { + "4:17": "Now, the question we ask for syntagmatic relation mining is, whenever \"eats\" occurs, what other words also tend to occur?" + }, + { + "4:25": "So this question can be framed as a mutual information question, that is, which words have high mutual information with eats? So we compute the mutual information between eats and other words." + }, + { + "4:39": "And if we do that, and this is basically based on the same intuition as conditional entropy, we will see that words that are strongly associated with eats will have high mutual information, whereas words that are not related will have lower mutual information. I will give some examples here. The mutual information between \"eats\" and \"meat\", which is the same as between \"meat\" and \"eats\" because mutual information is symmetric, is expected to be higher than the mutual information between eats and the, because knowing the does not really help us predict eats. Similarly, knowing eats does not help us predict the, either. And you can also easily see that the mutual information between a word and itself is the largest, which is equal to the entropy of this word. That's because in this case the reduction is maximal: knowing one allows us to predict the other completely.
So the conditional entropy is zero, and therefore the mutual information reaches its maximum. It is going to be larger than or equal to the mutual information between eats and any other word. In other words, if you pick any other word and compute the mutual information between eats and that word, you will not get a value larger than the mutual information between eats and itself." + }, + { + "6:16": "So now let us look at how to compute the mutual information. Now in order to do that, we often" + }, + { + "6:25": "use a different form of mutual information, and we can mathematically rewrite the mutual information into the form shown on this slide. Here we essentially see a formula that computes what is called KL-divergence, or Kullback-Leibler divergence. This is another term in information theory. It measures the divergence between two distributions." + }, + { + "6:50": "Now, if you look at the formula, it is also a sum over many combinations of different values of the two random variables, but inside the sum, mainly we are doing a comparison between two joint distributions. The numerator has the actual observed joint distribution of the two random variables." + }, + { + "7:12": "The bottom part, or the denominator, can be interpreted as the expected joint distribution of the two random variables if they were independent, because when two random variables are independent, their joint distribution is equal to the product of the two probabilities." + }, + { + "7:35": "So this comparison will tell us whether the two variables are indeed independent. If they are indeed independent, then we would expect that the two are the same," + }, + { + "7:44": "but if the numerator is different from the denominator, that would mean the two variables are not independent, and that helps measure the association." + }, + { + "7:56": "The sum simply takes into consideration all of the combinations of the values of these two random variables.
In our case, each random variable can choose one of the two values, zero or one, so we have four combinations here. If we look at this form of mutual information, it shows that the mutual information measures the divergence of the actual joint distribution from the expected distribution under the independence assumption. The larger this divergence is, the higher the mutual information would be." + }, + { + "8:33": "So now let us further look at what exactly are the probabilities involved in this formula of mutual information." + }, + { + "8:41": "And here, these are all the probabilities involved, and it is easy for you to verify that. Basically, we first have the marginal probabilities corresponding to the presence or absence of each word. So, for w1, we have two probabilities shown here." + }, + { + "9:02": "They should sum to one, because a word can either be present or absent in the segment. And similarly, for the second word, we also have two probabilities representing the presence or absence of this word, and they sum to one as well." + }, + { + "9:21": "And finally, we have a lot of joint probabilities that represent the scenarios of co-occurrences of the two words, and they are shown here." + }, + { + "9:34": "And they sum to one, because the two words can only have these four possible scenarios. Either they both occur, so in that case both variables will have a value of one, or one of them occurs. There are two such scenarios." + }, + { + "9:51": "In these two cases one of the random variables will be equal to one and the other will be zero, and finally we have the scenario when none of them occurs. This is when the two variables both take a value of zero." + }, + { + "10:07": "So these are the probabilities involved in the calculation of mutual information, over here." + }, + { + "10:16": "Once we know how to calculate these probabilities, we can easily calculate the mutual information."
+ }, + { + "10:24": "It is also interesting to know that there are actually some relations or constraints among these probabilities, and we already saw two of them, right? So in the previous slide, you have seen that the marginal probabilities of these words sum to one, and we have also seen this constraint, which says the two words have these four scenarios of co-occurrence. But we also have some additional constraints listed at the bottom." + }, + { + "10:58": "For example, this one means if we add up the probability that we observe the two words occurring together and the probability that the first word occurs and the second word does not occur, we get exactly the probability that the first word is observed. In other words, when the first word is observed, there are only two scenarios, depending on whether the second word is also observed. So, this probability captures the first scenario, when the second word is also observed, and this captures the second scenario, when the second word is not observed. So, we only see the first word. And it is easy to see that the other equations also follow the same reasoning." + }, + { + "11:46": "Now these equations allow us to compute some probabilities based on other probabilities, and this can simplify the computation." + }, + { + "11:55": "So more specifically, if we know the probability that a word is present, like in this case, so if we know this, and if we know the probability of the presence of the second word, then we can easily compute the absence probability, right? It is very easy to use this equation to do that, and so we take care of the computation of these probabilities of presence and absence of each word. Now let's look at the joint distribution. Let us assume that we also have available the probability that they occurred together. Now it is easy to see that we can actually compute all the rest of these probabilities based on these."
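The bookkeeping described above — only P(w1 = 1), P(w2 = 1), and the co-occurrence probability need to be known, with the remaining joint probabilities derived from the constraints — can be sketched as below, together with the KL-divergence form of mutual information. This is a minimal sketch with made-up probabilities, not code from the course.

```python
import math

def mutual_information(p_w1, p_w2, p_both):
    """I(X1;X2), in bits, for two binary word-presence variables.

    Inputs are only P(X1=1), P(X2=1) and P(X1=1, X2=1); the remaining
    joint probabilities follow from the constraints in the transcript,
    e.g. p(1,1) + p(1,0) = P(X1=1).
    """
    joint = {
        (1, 1): p_both,
        (1, 0): p_w1 - p_both,
        (0, 1): p_w2 - p_both,
        (0, 0): 1.0 - p_w1 - p_w2 + p_both,
    }
    marg1 = {1: p_w1, 0: 1.0 - p_w1}
    marg2 = {1: p_w2, 0: 1.0 - p_w2}
    # KL divergence of the observed joint from the independence baseline.
    mi = 0.0
    for (x, y), pxy in joint.items():
        if pxy > 0:
            mi += pxy * math.log2(pxy / (marg1[x] * marg2[y]))
    return mi

# Independent words: the joint equals the product of marginals, so I is 0.
print(mutual_information(0.5, 0.4, 0.2))
# A word with itself: I is maximal and equals the word's entropy H(0.25).
print(mutual_information(0.25, 0.25, 0.25))
```

The second call illustrates the special case discussed earlier: the mutual information between a word and itself equals the entropy of that word.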
+ }, + { + "12:46": "Specifically, for example, using this equation we can compute the probability that the first word occurred and the second word did not, because we know these probabilities in the boxes. And similarly, using this equation we can compute the probability that we observe only the second word. And then finally, this probability can be calculated by using this equation, because now this is known, this is also known, and this is already known, right? So this can be easily calculated." + }, + { + "13:26": "So this slide shows that we only need to know how to compute these three probabilities that are shown in the boxes, namely the presence of each word and the co-occurrence of both words in a segment. [MUSIC]" + } + ] + }, + { + "2-4-syntagmatic-relation-discovery-mutual-information-part-2": [ + { + "0:00": "[SOUND]" + }, + { + "0:06": "In general, we can use the empirical count of events in the observed data to estimate the probabilities. A commonly used technique is called maximum likelihood estimation, where we simply normalize the observed counts. So if we do that, we can compute these probabilities as follows. For estimating the probability that we see a word occurring in a segment, we simply normalize the count of segments that contain this word. So let's first take a look at the data here. On the right side, you see a list of some hypothetical data. These are segments. And in some segments you see both words occur; they are indicated as ones in both columns. In some other cases only one word occurs, so only that column has a one and the other column has a zero. And of course, in some other cases none of the words occur, so they are both zeros. And for estimating these probabilities, we simply need to collect three counts." + }, + { + "1:20": "So the first of the three counts is the count of W1. And that's the total number of segments that contain word W1.
It's just the ones in the column of W1; we can count how many ones we have seen there. The second count is for word W2, and we just count the ones in the second column. This gives us the total number of segments that contain W2. The third count is for when both words occur. So this time, we're going to count the segments where both columns have ones." + }, + { + "1:56": "And then, so this would give us the total number of segments where we have seen both W1 and W2. Once we have these counts, we can just normalize these counts by N, which is the total number of segments, and this will give us the probabilities that we need to compute the mutual information. Now, there is a small problem when we sometimes have zero counts. And in this case, we don't want a zero probability, because our data may be a small sample, and in general, we would believe that it's potentially possible for a word to occur in any context. So, to address this problem, we can use a technique called smoothing. And that's basically to add some small constant to these counts, so that we don't get a zero probability in any case. Now, the best way to understand smoothing is to imagine that we actually observed more data than we actually have, because we pretend we observed some pseudo-segments, as illustrated at the top right of the slide. And these pseudo-segments would contribute additional counts of these words so that no event will have zero probability. Now, in particular we introduce four pseudo-segments, each weighted at one quarter. And these represent the four different combinations of occurrences of these words. So now each event, each combination, will have at least a non-zero count from these pseudo-segments. So, in the actual segments that we observe, it's okay if we haven't observed all of the combinations.
So more specifically, you can see the 0.5 here comes from the two ones in the two pseudo-segments, because each is weighted at one quarter. We add them up, and we get 0.5. And similarly, the 0.25 comes from the single pseudo-segment that indicates the two words occurring together." + }, + { + "4:09": "And of course, in the denominator we add the total weight of the pseudo-segments; in this case, we added four pseudo-segments, each weighted at one quarter, so the sum is one. So, that's why in the denominator you'll see a one there." + }, + { + "4:25": "So, this basically concludes the discussion of how to compute these probabilities for syntagmatic relation discovery." + }, + { + "4:36": "Now, to summarize, syntagmatic relations can generally be discovered by measuring correlations between occurrences of two words. We've introduced three concepts from information theory. Entropy, which measures the uncertainty of a random variable X. Conditional entropy, which measures the entropy of X given that we know Y. And mutual information of X and Y, which measures the entropy reduction of X due to knowing Y, or the entropy reduction of Y due to knowing X. They are the same. These three concepts are actually very useful for other applications as well. That's why we spent some time explaining them in detail. But in particular, they are also very useful for discovering syntagmatic relations. In particular, mutual information is a principled way of discovering such relations. It allows us to have values computed on different pairs of words that are comparable, so we can rank these pairs and discover the strongest syntagmatic relations from a collection of documents. Now, note that there is some relation between syntagmatic relation discovery and paradigmatic relation discovery.
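The smoothing scheme just described — four pseudo-segments, each weighted one quarter, so every presence/absence combination gets a non-zero count — can be sketched as below. The segment data, the word choices, and the function name are hypothetical illustrations, not course materials; the weights (two quarter-weight pseudo-segments per single word, one for the co-occurrence, total added weight one) follow the transcript.

```python
def smoothed_probs(segments, w1, w2):
    """Smoothed estimates of P(w1=1), P(w2=1) and P(w1=1, w2=1).

    segments is a list of sets of words. We pretend we also observed four
    pseudo-segments, one per presence/absence combination, each weighted
    1/4: each word appears in two of them (+0.5 to its count), the pair
    in one (+0.25), and the total added weight in the denominator is 1.
    """
    n = len(segments)
    c1 = sum(1 for s in segments if w1 in s)
    c2 = sum(1 for s in segments if w2 in s)
    c12 = sum(1 for s in segments if w1 in s and w2 in s)
    return (c1 + 0.5) / (n + 1.0), (c2 + 0.5) / (n + 1.0), (c12 + 0.25) / (n + 1.0)

# Toy data: 4 segments over the words "eats", "meat", "the".
segments = [{"eats", "meat"}, {"eats"}, {"the", "meat"}, {"the"}]
print(smoothed_probs(segments, "eats", "meat"))   # (0.5, 0.5, 0.25)
```

Even a pair that never co-occurs in the data gets a small positive co-occurrence probability, 0.25 / (n + 1), so the mutual information computation never takes the log of zero.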
So we already discussed the possibility of using BM25 to achieve weighting for terms in the context to potentially also suggest candidates that have syntagmatic relations with the candidate word. But here, once we use mutual information to discover syntagmatic relations, we can also represent the context with this mutual information as weights. So this would give us another way to represent the context of a word, like cat. And if we do the same for all the words, then we can cluster these words or compare the similarity between these words based on their context similarity. So this provides yet another way to do term weighting for paradigmatic relation discovery. And so to summarize this whole part about word association mining: we introduced two basic associations, called paradigmatic and syntagmatic relations. These are fairly general; they apply to any items in any language, so the units don't have to be words, they can be phrases or entities." + }, + { + "7:11": "We introduced multiple statistical approaches for discovering them, mainly showing that pure statistical approaches are viable for discovering both kinds of relations. And they can be combined to perform joint analysis as well. These approaches can be applied to any text with no human effort, mostly because they are based on counting of words, yet they can actually discover interesting relations of words." + }, + { + "7:44": "We can also use different ways of defining context and segment, and this would lead us to some interesting variations of applications. For example, the context can be very narrow, like a few words around a word, or a sentence, or maybe paragraphs. Using different contexts allows us to discover different flavors of paradigmatic relations. And similarly, we can count co-occurrences using, let's say, mutual information to discover syntagmatic relations.
We also have to define the segment, and the segment can be defined as a narrow text window or a longer text article. And this would give us different kinds of associations. These discovered associations can support many other applications, in both information retrieval and text and data mining. So here are some recommended readings, if you want to know more about the topic. The first is a book with a chapter on collocations, which is quite relevant to the topic of these lectures. The second is an article about using various statistical measures to discover lexical atoms. Those are phrases that are non-compositional. For example, hot dog is not really a dog that's hot," + }, + { + "9:08": "blue chip is not a chip that's blue. And the paper has a discussion about some techniques for discovering such phrases." + }, + { + "9:17": "The third one is a new paper on a unified way to discover both paradigmatic relations and syntagmatic relations, using random walks on word graphs. [SOUND]" + } + ] + }, + { + "2-5-topic-mining-and-analysis-motivation-and-task-definition": [ + { + "0:00": "[SOUND] >> This lecture is about topic mining and analysis. We're going to talk about its motivation and task definition." + }, + { + "0:17": "In this lecture we're going to talk about a different kind of mining task." + }, + { + "0:23": "As you see on this road map, we have just covered mining knowledge about language, namely discovery of word associations such as paradigmatic relations and syntagmatic relations." + }, + { + "0:39": "Now, starting from this lecture, we're going to talk about mining another kind of knowledge, which is content mining, trying to discover knowledge about the main topics in the text." + }, + { + "0:56": "And we call that topic mining and analysis." + }, + { + "0:59": "In this lecture, we're going to talk about its motivation and the task definition. So first of all, let's look at the concept of a topic.
So a topic is something that we all understand, I think, but it's actually not that easy to formally define. Roughly speaking, a topic is the main idea discussed in text data. And you can think of this as a theme or subject of a discussion or conversation. It can also have different granularities. For example, we can talk about the topic of a sentence, the topic of an article, the topic of a paragraph, or the topic of all the research articles in a research library. Different granularities of topics obviously have different applications." + }, + { + "1:46": "Indeed, there are many applications that require discovery of topics in text and then analyzing them. Here are some examples. For example, we might be interested in knowing what Twitter users are talking about today. Are they talking about NBA sports, or are they talking about some international events, etc.? Or we are interested in knowing about research topics. For example, one might be interested in knowing what the current research topics in data mining are, and how they differ from those five years ago. Now this involves discovery of topics in the data mining literature, where we want to discover topics both in today's literature and in that of the past. And then we can make a comparison. We might also be interested in knowing what people like about some products, like the iPhone 6, and what they dislike. And this involves discovering topics in positive opinions about the iPhone 6 and also in negative reviews about it. Or perhaps we're interested in knowing what the major topics debated in the 2012 presidential election were." + }, + { + "2:59": "And all these have to do with discovering topics in text and analyzing them, and we're going to talk about a lot of techniques for doing this. In general we can view a topic as some knowledge about the world. So from text data we expect to discover a number of topics, and then these topics generally provide a description about the world.
And it tells us something about the world: about a product, about a person, etc." + }, + { + "3:29": "Now when we have some non-text data, then we can have more context for analyzing the topics. For example, we might know the time associated with the text data, or locations where the text data were produced, or the authors of the text, or the sources of the text, etc. All such metadata, or context variables, can be associated with the topics that we discover, and then we can use these context variables to help us analyze patterns of topics. For example, looking at topics over time, we would be able to discover whether there's a trending topic, or some topics might be fading away." + }, + { + "4:15": "Similarly, by looking at topics in different locations, we might gain some insights about people's opinions in different locations." + }, + { + "4:26": "So that's why mining topics is very important. Now, let's look at the tasks of topic mining and analysis. In general, it would involve first discovering a number of topics, in this case, k topics. And then we also would like to know which topics are covered in which documents, and to what extent. So for example, in document one, we might see that Topic 1 is covered a lot, while Topic 2 and Topic k are covered with a small portion." + }, + { + "4:58": "And other topics, perhaps, are not covered. Document two, on the other hand, covers Topic 2 very well, but it does not cover Topic 1 at all, and it also covers Topic k to some extent, etc. So now you can see there are generally two different tasks, or sub-tasks. The first is to discover k topics from a collection of text data: what are these k topics, the major topics in the text data? The second task is to figure out which documents cover which topics to what extent. So more formally, we can define the problem as follows. First, we have, as input, a collection of N text documents. Here we can denote the text collection as C, and denote a text article as d sub i. 
And we generally also need to have as input the number of topics, k. There may be techniques that can automatically suggest a number of topics, but in the techniques that we will discuss, which are also the most useful techniques, we often need to specify the number of topics." + }, + { + "6:14": "Now the output would then be the k topics that we would like to discover, denoted as theta sub one through theta sub k." + }, + { + "6:24": "Also we want to generate the coverage of topics in each document d sub i, and this is denoted by pi sub ij." + }, + { + "6:33": "And pi sub ij is the probability of document d sub i covering topic theta sub j. So obviously for each document, we have a set of such values to indicate to what extent the document covers each topic." + }, + { + "6:48": "And we can assume that these probabilities sum to one, because a document won't be able to cover topics outside of the topics that we discovered. So now, the question is, how do we define theta sub i? How do we define the topic? Now this problem has not been completely defined until we define what exactly theta is." + }, + { + "7:16": "So in the next few lectures, we're going to talk about different ways to define theta. [MUSIC]" + } + ] + }, + { + "2-6-topic-mining-and-analysis-term-as-topic": [ + { + "0:00": "[MUSIC]" + }, + { + "0:07": "This lecture is about topic mining and analysis." + }, + { + "0:12": "We're going to talk about using a term as a topic. This is a slide that you have seen in an earlier lecture, where we defined the task of topic mining and analysis. We also raised the question, how do we exactly define the topic theta?" + }, + { + "0:31": "So in this lecture, we're going to offer one way to define it, and that's our initial idea. Our idea here is to define a topic simply as a term." + }, + { + "0:42": "A term can be a word or a phrase." + }, + { + "0:45": "And in general, we can use these terms to describe topics. 
So our first thought is just to define a topic as one term. For example, we might have terms like sports, travel, or science, as you see here. Now if we define a topic in this way, we can then analyze the coverage of such topics in each document. Here for example, we might want to discover to what extent document one covers sports. And we found that 30% of the content of document one is about sports. And 12% is about travel, etc. We might also discover that document two does not cover sports at all. So the coverage is zero, etc." + }, + { + "1:32": "So now, of course, as we discussed in the task definition for topic mining and analysis, we have two tasks. One is to discover the topics. And the second is to analyze coverage. So let's first think about how we can discover topics if we represent each topic by a term. That means we need to mine k topical terms from a collection." + }, + { + "2:01": "Now there are, of course, many different ways of doing that." + }, + { + "2:05": "And we're going to talk about a natural way of doing that, which is also likely effective. So first of all, we're going to parse the text data in the collection to obtain candidate terms. Here candidate terms can be words or phrases. The simplest solution is to just take each word as a term. These words then become candidate topics. Then we're going to design a scoring function to measure how good each term is as a topic." + }, + { + "2:35": "So how can we design such a function? Well there are many things that we can consider. For example, we can use pure statistics to design such a scoring function." + }, + { + "2:45": "Intuitively, we would like to favor representative terms, meaning terms that can represent a lot of content in the collection. So that would mean we want to favor frequent terms. However, if we simply use the frequency to design the scoring function, then the highest scored terms would be general terms or functional terms like the, etc. 
Those terms occur very frequently in English." + }, + { + "3:14": "So we also want to avoid having such words on the top, so we want to penalize such words. In general, we would like to favor terms that are fairly frequent but not too frequent. So a particular approach could be based on TF-IDF weighting from information retrieval." + }, + { + "3:35": "TF stands for term frequency, and IDF stands for inverse document frequency. We talked about some of these ideas in the lectures about the discovery of word associations. So these are statistical methods, meaning that the function is defined mostly based on statistics. So the scoring function would be very general; it can be applied to any language, any text. But when we apply such an approach to a particular problem, we might also be able to leverage some domain-specific heuristics. For example, in news data we might favor title words, because the authors tend to use the title to describe the topic of an article." + }, + { + "4:27": "If we're dealing with tweets, we could also favor hashtags, which are invented to denote topics. So naturally, hashtags can be good candidates for representing topics." + }, + { + "4:44": "Anyway, after we have designed this scoring function, we can discover the k topical terms by simply picking the k terms with the highest scores. Now, of course, we might encounter a situation where the highest scored terms are all very similar. They're semantically similar, or closely related, or even synonyms. So that's not desirable. We also want to have coverage over all the content in the collection. So we would like to remove redundancy. And one way to do that is to use a greedy algorithm, which is sometimes called maximal marginal relevance ranking. Basically, the idea is to go down the list based on our scoring function and gradually take terms to collect the k topical terms. The first term, of course, will be picked. 
When we pick the next term, we're going to look at what terms have already been picked and try to avoid picking a term that's too similar. So while we are considering the ranking of a term in the list, we are also considering the redundancy of the candidate term with respect to the terms that we already picked." + }, + { + "5:58": "And with some thresholding, we can then get a balance between redundancy removal and the high score of a term. Okay, so after this we will get k topical terms. And those can be regarded as the topics that we discovered from the collection. Next, let's think about how we're going to compute the topic coverage pi sub ij." + }, + { + "6:23": "So looking at this picture, we have sports, travel, and science as the topics. And now suppose you are given a document. How should we compute the coverage of each topic in the document?" + }, + { + "6:36": "Well, one approach can be to simply count occurrences of these terms. So for example, sports might have occurred four times in this document and travel occurred twice, etc. And then we can just normalize these counts as our estimate of the coverage probability for each topic. So in general, the formula would be to collect the counts of all the terms that represent the topics, and then simply normalize them so that the coverage of all the topics in the document would sum to one." + }, + { + "7:15": "This forms a distribution of the topics for the document to characterize the coverage of different topics in the document. Now, as always, when we think about an idea for solving a problem, we have to ask the question, how good is this one? Or is this the best way of solving the problem?" + }, + { + "7:38": "So now let's examine this approach. In general, we have to do some empirical evaluation" + }, + { + "7:46": "by using actual data sets and to see how well it works." + }, + { + "7:52": "Well, in this case let's take a look at a simple example here. And we have a text document that's about an NBA basketball game." 
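The term-as-topic pipeline described so far (TF-IDF scoring of candidate words, greedy redundancy-aware selection of k topical terms, and topic coverage as normalized term counts) can be sketched as follows. This is a minimal illustration, not the lecture's exact procedure: the toy corpus, the similarity function, and the threshold are all made-up assumptions.

```python
import math
from collections import Counter

def tfidf_scores(docs):
    """Score each word by raw term frequency times inverse document frequency
    (one simple TF-IDF variant); common everywhere words score near zero."""
    tf, df = Counter(), Counter()
    for doc in docs:
        words = doc.lower().split()
        tf.update(words)
        df.update(set(words))
    n = len(docs)
    return {w: tf[w] * math.log(n / df[w]) for w in tf}

def select_topics(scores, k, similar, threshold=0.5):
    """Greedy MMR-style selection: walk down the ranked list, skipping any
    candidate that is too similar to an already-picked term."""
    picked = []
    for term, _ in sorted(scores.items(), key=lambda x: -x[1]):
        if all(similar(term, p) < threshold for p in picked):
            picked.append(term)
        if len(picked) == k:
            break
    return picked

def coverage(doc, topics):
    """pi_ij: counts of each topical term in the document, normalized to sum to one."""
    words = doc.lower().split()
    counts = [words.count(t) for t in topics]
    total = sum(counts)
    return [c / total if total else 0.0 for c in counts]
```

A real similarity function might use word embeddings or co-occurrence statistics; here any function returning a value in [0, 1] works.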
+ }, + { + "8:04": "So in terms of the content, it's about sports." + }, + { + "8:08": "But if we simply count these words that represent our topics, we will find that the word sports actually did not occur in the article, even though the content is about sports." + }, + { + "8:22": "So the count of sports is zero. That means the coverage of sports would be estimated as zero. Now of course, the term science also did not occur in the document, and its estimate is also zero. And that's okay. But sports certainly is not okay, because we know the content is about sports. So this estimate has a problem." + }, + { + "8:50": "What's worse, the term travel actually occurred in the document. So when we estimate the coverage of the topic travel, we have got a non-zero count. So its estimated coverage will be non-zero. So this obviously is also not desirable." + }, + { + "9:08": "So this simple example illustrates some problems of this approach. First, when we count which words belong to the topic, we also need to consider related words. We can't simply just count the topic word sports. In this case, it did not occur at all. But there are many related words like basketball, game, etc. So we need to count the related words also. The second problem is that a word like star can be actually ambiguous. So here it probably means a basketball star, but we can imagine it might also mean a star in the sky. So in that case, the star might actually suggest, perhaps, a topic of science." + }, + { + "9:54": "So we need to deal with that as well. Finally, a main restriction of this approach is that we have only one term to describe the topic, so it cannot really describe complicated topics. For example, a very specialized topic in sports would be harder to describe by using just one word or one phrase. We need to use more words. So this example illustrates some general problems with this approach of treating a term as a topic. First, it lacks expressive power. 
Meaning that it can only represent simple, general topics, but it cannot represent complicated topics that might require more words to describe." + }, + { + "10:37": "Second, it's incomplete in vocabulary coverage, meaning that the topic itself is only represented as one term. It does not suggest what other terms are related to the topic. Even if we're talking about sports, there are many terms that are related. So it does not allow us to easily count related terms to compute the coverage of this topic. Finally, there is the problem of word sense disambiguation. A topical term or related term can be ambiguous. For example, basketball star versus star in the sky." + }, + { + "11:10": "So in the next lecture, we're going to talk about how to solve these problems with a better representation of a topic. [MUSIC]" + } + ] + }, + { + "2-7-topic-mining-and-analysis-probabilistic-topic-models": [ + { + "0:06": "This lecture is about Probabilistic Topic Models for topic mining and analysis." + }, + { + "0:13": "In this lecture, we're going to continue talking about topic mining and analysis." + }, + { + "0:18": "We're going to introduce probabilistic topic models." + }, + { + "0:22": "So this is a slide that you have seen earlier, where we discussed the problems with using a term as a topic. So, to solve these problems, intuitively we need to use more words to describe the topic. And this will address the problem of lack of expressive power. When we have more words that we can use to describe the topic, we can describe complicated topics. To address the second problem, we need to introduce weights on words. This is what allows us to distinguish subtle differences in topics, and to introduce semantically related words in a fuzzy manner. Finally, to solve the problem of word ambiguity, we need to split an ambiguous word, so that we can disambiguate its topic." + }, + { + "1:15": "It turns out that all these can be done by using a probabilistic topic model. 
And that's why we're going to spend a lot of lectures talking about this topic. So the basic idea here is to improve the representation of a topic to be a word distribution. What you see now is the old representation, where we represented each topic with just one word, or one term, or one phrase. But now we're going to use a word distribution to describe the topic. So here you see that for sports, we're going to use a word distribution over, theoretically speaking, all the words in our vocabulary." + }, + { + "1:54": "So for example, the high probability words here are sports, game, basketball, football, play, star, etc. These are sports related terms. And of course it would also give a non-zero probability to some other word like travel, which might be related to sports in general, but not so much related to this topic." + }, + { + "2:18": "In general we can imagine a non-zero probability for all the words. And some words that are not related would have very, very small probabilities. And these probabilities will sum to one." + }, + { + "2:31": "So that it forms a distribution over all the words." + }, + { + "2:36": "Now intuitively, this distribution represents a topic in that if we sample words from the distribution, we tend to see words that are related to the topic." + }, + { + "2:48": "You can also see, as a very special case, if the probability mass is concentrated entirely on just one word, sports, then this basically degenerates to the simple representation of a topic as just one word." + }, + { + "3:04": "But as a distribution, this representation of a topic can, in general, involve many words to describe a topic and can model subtle differences in the semantics of a topic. Similarly we can model Travel and Science with their respective distributions. In the distribution for Travel we see top words like attraction, trip, flight, etc." + }, + { + "3:31": "Whereas in Science we see scientist, spaceship, telescope, or genomics, and, you know, science related terms. 
Now that doesn't mean sports related terms will necessarily have zero probabilities for science. In general we can imagine all of these words have non-zero probabilities. It's just that for a particular topic, some words have very, very small probabilities." + }, + { + "3:58": "Now you can also see there are some words that are shared by these topics. When I say shared, it just means even with some probability threshold, you can still see one word occurring in multiple topics. In this case I mark them in black. So you can see travel, for example, occurred in all the three topics here, but with different probabilities. It has the highest probability for the Travel topic, 0.05, but with much smaller probabilities for Sports and Science, which makes sense. And similarly, you can see Star also occurred in Sports and Science with reasonably high probabilities, because it might be actually related to the two topics. So this representation addresses the three problems that I mentioned earlier. First, it now uses multiple words to describe a topic. So it allows us to describe fairly complicated topics. Second, it assigns weights to terms. So now we can model subtle differences of semantics. And you can bring in related words together to model a topic. Third, because we have probabilities for the same word in different topics, we can disambiguate the sense of a word in the text to decode its underlying topic. So this addresses all three problems with this new way of representing a topic. So now of course our problem definition has been refined just slightly. The slide is very similar to what you've seen before, except we have added a refinement for what a topic is. Now each topic is a word distribution, and for each word distribution we know that all the probabilities should sum to one over all the words in the vocabulary. So you see a constraint here. And we still have another constraint on the topic coverage, namely the pi's. 
So all the pi sub ij's must sum to one for the same document." + }, + { + "5:59": "So how do we solve this problem? Well, let's look at this problem as a computation problem. So we clearly specify its input and output, and illustrate it here on this slide. The input of course is our text data. C is our collection, but we also generally assume we know the number of topics, k. Or we hypothesize a number and then try to mine k topics, even though we don't know the exact topics that exist in the collection. And V is the vocabulary, a set of words that determines what units would be treated as the basic units for analysis. In most cases we'll use words as the basis for analysis. And that means each word is a unit." + }, + { + "6:47": "Now the output would consist of, first, a set of topics represented by theta i's. Each theta i is a word distribution." + }, + { + "6:56": "And we also want to know the coverage of topics in each document. So those are the same pi ij's that we have seen before." + }, + { + "7:07": "So given a set of text data, we would like to compute all these distributions and all these coverages, as you have seen on this slide." + }, + { + "7:18": "Now of course there may be many different ways of solving this problem. In theory, you can write the [INAUDIBLE] program to solve this problem, but here we're going to introduce a general way of solving this problem called a generative model. And this is, in fact, a very general idea and a principled way of using statistical modeling to solve text mining problems. And here I dimmed the picture that you have seen before in order to show the generation process. So the idea of this approach is actually to first design a model for our data. So we design a probabilistic model to model how the data are generated. Of course, this is based on our assumption. The actual data aren't necessarily generated this way. So that gives us a probability distribution of the data that you are seeing on this slide. 
Given a particular model and parameters that are denoted by lambda. So this lambda actually consists of all the parameters that we're interested in. And these parameters in general will control the behavior of the probabilistic model, meaning that if you set these parameters to different values, it will give some data points higher probabilities than others. Now in this case of course, for our text mining problem, or more precisely our topic mining problem, we have the following parameters. First of all we have theta i's, which are word distributions, and then we have a set of pi's for each document. And since we have n documents, we have n sets of pi's, and in each set, the pi values will sum to one. So this is to say that we first would pretend we already have these word distributions and the coverage numbers. And then we can see how we can generate data by using such distributions. So how do we model the data in this way? We assume that the data are actually samples drawn from such a model that depends on these parameters. Now one interesting question here is to" + }, + { + "9:32": "think about how many parameters there are in total. Now obviously we can already see n multiplied by k parameters for the pi's. We also see k theta i's. But each theta i is actually a set of probability values, right? It's a distribution over words. So I leave this as an exercise for you to figure out exactly how many parameters there are here. Now once we set up the model, then we can fit the model to our data, meaning that we can estimate the parameters or infer the parameters based on the data. In other words, we would like to adjust these parameter values until we give our data set the maximum probability. As I just said, depending on the parameter values, some data points will have higher probabilities than others. What we're interested in, here, is what parameter values will give our data set the highest probability? 
So I also illustrate the problem with a picture that you see here. On the X axis I just illustrate lambda, the parameters, as a one-dimensional variable. It's an oversimplification, obviously, but it suffices to show the idea. And the Y axis shows the probability of the observed data. This probability obviously depends on the setting of lambda. So that's why it varies as you change the value of lambda. What we're interested in here is to find the lambda star" + }, + { + "11:05": "that would maximize the probability of the observed data." + }, + { + "11:10": "So this would be, then, our estimate of the parameters. And note that these parameters are precisely what we hope to discover from text data. So we treat these parameters as actually the outcome or the output of the data mining algorithm. So this is the general idea of using a generative model for text mining. First, we design a model with some parameter values to fit the data as well as we can. After we have fit the data, we will recover some parameter values. We will use the specific parameter values, and those would be the output of the algorithm. And we'll treat those as actually the discovered knowledge from text data. By varying the model, of course, we can discover different knowledge. So to summarize, we introduced a new way of representing a topic, namely representing it as a word distribution, and this has the advantage of using multiple words to describe a complicated topic. It also allows us to assign weights to words, so we can model subtle variations of semantics. We talked about the task of topic mining and analysis when we define a topic as a distribution. So the input is a collection of text articles, a number of topics, and a vocabulary set, and the output is a set of topics, each a word distribution, and also the coverage of all the topics in each document. And these are formally represented by theta i's and pi ij's. And we have two constraints here for these parameters. 
The first is the constraint on the word distributions. In each word distribution, the probabilities of all the words must sum to one, over all the words in the vocabulary. The second constraint is on the topic coverage in each document. A document is not allowed to cover a topic outside of the set of topics that we are discovering. So, the coverage of each of these k topics would sum to one for a document. We also introduced a general idea of using a generative model for text mining. And the idea here is, first we design a model to model the generation of the data. We simply assume that the data are generated in this way. And inside the model we embed some parameters that we're interested in, denoted by lambda." + }, + { + "13:36": "And then we can infer the most likely parameter values, lambda star, given a particular data set." + }, + { + "13:43": "And we can then take the lambda star as knowledge discovered from the text for our problem." + }, + { + "13:50": "And we can adjust the design of the model and the parameters to discover various kinds of knowledge from text, as you will see later in the other lectures. [MUSIC]" + } + ] + }, + { + "2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1": [ + { + "0:00": "[SOUND] >> This lecture is about the Overview of Statistical Language Models, which covers probabilistic topic models as special cases. In this lecture we're going to give an overview of Statistical Language Models. These models are general models that cover probabilistic topic models as special cases. So first off, what is a Statistical Language Model?" + }, + { + "0:31": "A Statistical Language Model is basically a probability distribution over word sequences. So, for example, we might have a distribution that gives today is Wednesday a probability of .001. It might give today Wednesday is, which is a non-grammatical sentence, a very, very small probability as shown here." 
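The two constraints just summarized, and the generative view of a document (each word generated by first picking a topic with probability pi, then drawing the word from that topic's distribution), can be sketched as follows. All the specific words and probability numbers are made up for illustration; they are not from the lecture's slides.

```python
import math

# Hypothetical topics (theta's), each a word distribution summing to one.
topics = {
    "sports": {"sports": 0.4, "game": 0.3, "star": 0.2, "travel": 0.1},
    "travel": {"trip": 0.5, "flight": 0.3, "travel": 0.15, "star": 0.05},
}

# Topic coverage pi for one document; also constrained to sum to one.
pi = {"sports": 0.7, "travel": 0.3}

# Check the two constraints from the lecture.
for theta in topics.values():
    assert abs(sum(theta.values()) - 1.0) < 1e-9
assert abs(sum(pi.values()) - 1.0) < 1e-9

def log_likelihood(doc_words, topics, pi):
    """log p(d | lambda): each word is generated by choosing topic j with
    probability pi_j, then drawing the word from theta_j."""
    ll = 0.0
    for w in doc_words:
        p = sum(pi[t] * topics[t].get(w, 0.0) for t in pi)
        ll += math.log(p)
    return ll
```

Fitting the model would mean adjusting `topics` and `pi` (the lambda) to maximize this log-likelihood over the whole collection, which is exactly the estimation problem the lecture sets up.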
+ }, + { + "0:54": "And similarly another sentence, the eigenvalue is positive, might get the probability of .00001. So as you can see, such a distribution clearly is Context Dependent. It depends on the Context of Discussion. Some Word Sequences might have higher probabilities than others, but the same Sequence of Words might have different probabilities in different contexts." + }, + { + "1:20": "And so this suggests that such a distribution can actually characterize a topic;" + }, + { + "1:26": "such a model can also be regarded as a Probabilistic Mechanism for generating text." + }, + { + "1:33": "And that just means we can view text data as data observed from such a model. For this reason, we call such a model a Generative Model. So, now given a model we can then sample sequences of words. So, for example, based on the distribution that I have shown here on this slide, we might sample a sequence like today is Wednesday, because it has a relatively high probability. We might often get such a sequence. We might also get the eigenvalue is positive sometimes, with a smaller probability, and very, very occasionally we might get today Wednesday is, because its probability is so small." + }, + { + "2:24": "So in general, in order to characterize such a distribution, we must specify probability values for all these different sequences of words. Obviously, it's impossible to specify that, because it's impossible to enumerate all of the possible sequences of words. So in practice, we will have to simplify the model in some way. So, the simplest language model is called the Unigram Language Model. In such a case, we simply assume the text is generated by generating each word independently." + }, + { + "3:02": "But in general, the words may not be generated independently. But after we make this assumption, we can significantly simplify the language model." 
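The unigram assumption just introduced can be sketched concretely: with one probability per word, the probability of any sequence is just a product, and generating text means drawing words independently. The toy distribution below uses made-up numbers, echoing the "fake numbers" the lecture mentions.

```python
import random

# A toy unigram language model: one probability per word, summing to one.
unigram = {"today": 0.3, "is": 0.3, "Wednesday": 0.2,
           "eigenvalue": 0.1, "positive": 0.1}

def sequence_probability(words, model):
    """Under the unigram (independence) assumption, p(w1..wn) is simply the
    product of the individual word probabilities; word order is ignored."""
    p = 1.0
    for w in words:
        p *= model.get(w, 0.0)
    return p

def sample_text(model, length, rng=random):
    """Generate text by drawing each word independently from the distribution."""
    words, probs = zip(*model.items())
    return rng.choices(words, weights=probs, k=length)
```

Note that "today is Wednesday" and "today Wednesday is" get the same probability here, which is exactly why a unigram model suffices for topic analysis but not for tasks like speech recognition that care about word order.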
+ }, + { + "3:12": "Basically, now the probability of a sequence of words, w1 through wn, will be just the product of the probability of each word." + }, + { + "3:24": "So for such a model, we have as many parameters as the number of words in our vocabulary. So here we assume we have N words, so we have N probabilities, one for each word, and they sum to 1. So, now we assume that our text is a sample drawn according to this word distribution. That just means we're going to draw a word each time, and then eventually we'll get a text." + }, + { + "3:53": "So for example, now again, we can try to sample words according to a distribution. We might get Wednesday often, or today often." + }, + { + "4:06": "And some other words like eigenvalue might have a small probability, etcetera. But with this, we actually can also compute the probability of every sequence, even though our model only specifies the probabilities of words. And this is because of the independence. So specifically, we can compute the probability of today is Wednesday." + }, + { + "4:34": "Because it's just a product of the probability of today, the probability of is, and the probability of Wednesday. For example, I show some fake numbers here, and when you multiply these numbers together you get the probability of today is Wednesday. So as you can see, with N probabilities, one for each word, we actually can characterize the probability distribution over all kinds of sequences of words. And so, this is a very simple model that ignores the word order. So it may not be sufficient for some problems, such as speech recognition, where you may care about the order of words. But it turns out to be quite sufficient for many tasks that involve topic analysis. And that's also what we're interested in here. So when we have a model, we generally have two problems that we can think about. One is, given a model, how likely are we to observe a certain kind of data points? That is, we are interested in the Sampling Process. 
The other is the Estimation Process. And that is to infer the parameters of a model given some observed data, and we're going to talk about that in a moment. Let's first talk about the sampling. So, here I show two examples of Word Distributions, or Unigram Language Models. The first one has higher probabilities for words like text, mining, association, etc." + }, + { + "6:10": "Now this signals a topic about text mining, because when we sample words from such a distribution, we tend to see words that often occur in a text mining context." + }, + { + "6:23": "So in this case, if we ask the question about what is the probability of generating a particular document, then we likely will see text that looks like a text mining paper. Of course, the text that we generate by drawing words from this distribution is unlikely to be coherent, although the probability of generating a text mining [INAUDIBLE] published in the top conference is non-zero, assuming that no word has a zero probability in the distribution. And that just means we can essentially generate all kinds of text documents, including very meaningful text documents." + }, + { + "7:07": "Now, the second distribution, shown on the bottom, has different words with high probabilities. So food, [INAUDIBLE], healthy, [INAUDIBLE], etcetera. So this clearly indicates a different topic. In this case it's probably about health. So if we sample a word from such a distribution, then the probability of observing a text mining paper would be very, very small." + }, + { + "7:32": "On the other hand, the probability of observing a text that looks like a food nutrition paper would be relatively higher." + }, + { + "7:41": "So that just means a given distribution makes some texts more likely than others. Now let's look at the estimation problem. In this case, we're going to assume that we have observed the data. We know exactly what the text data looks like. 
In this case, let's assume we have a text mining paper. In fact, it's the abstract of a paper, so the total number of words is 100. And I've shown some counts of individual words here." + }, + { + "8:12": "Now, if we ask the question, what is the most likely" + }, + { + "8:17": "Language Model that has been used to generate this text data? Assuming that the text is observed from some Language Model, what's our best guess of this Language Model?" + }, + { + "8:30": "Okay, so the problem now is just to estimate the probabilities of these words, as I've shown here." + }, + { + "8:37": "So what do you think? What would be your guess?" + }, + { + "8:40": "Would you guess text has a very small probability, or a relatively large probability?" + }, + { + "8:48": "What about query? Well, your guess probably would depend on how many times we have observed this word in the text data, right? And if you think about it for a moment, and if you are like many others, you would have guessed that, well, text has a probability of 10 out of 100, because we've observed text 10 times in the text that has a total of 100 words. And similarly, mining has 5 out of 100. And query has a relatively small probability; it's observed just once. So it's 1 out of 100. Right, so that, intuitively, is a reasonable guess. But the question is, is this our best guess or best estimate of the parameters?" + }, + { + "9:37": "Of course, in order to answer this question, we have to define what we mean by best. In this case, it turns out that our guesses are indeed the best in some sense, and this is called the Maximum Likelihood Estimate. It's the best in that it gives the observed data the maximum probability." + }, + { + "10:01": "Meaning that, if you change the estimate somehow, even slightly, then the probability of the observed text data will be somewhat smaller. And this is called a Maximum Likelihood Estimate. 
[MUSIC]" + } + ] + }, + { + "2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2": [ + { + "0:00": "[MUSIC] So now let's talk about the problem a little bit more, and specifically let's talk about the two different ways of estimating the parameters. One is called the Maximum Likelihood estimate that I already just mentioned. The other is Bayesian estimation. So in maximum likelihood estimation, we define best as meaning the data likelihood has reached the maximum. So formally it's given by this expression here, where we define the estimate as the arg max of the probability of X given theta." + }, + { + "0:46": "So, arg max here just means it's actually a function that returns the argument that gives the function its maximum value. So the value of arg max is not the value of this function, but rather the argument that makes the function reach its maximum. So in this case the value of arg max is theta. It's the theta that makes the probability of X, given theta, reach its maximum. So this estimate intuitively makes sense and it's often very useful, and it seeks the parameters that best explain the data. But it has a problem when the data is too small: when there are very few data points, the sample is small, and if we trust the data entirely and try to fit it, then we'll be biased. So in the case of text data, let's say the observed 100 words did not contain another word related to text mining. Now, our maximum likelihood estimator will give that word a zero probability, because giving it a non-zero probability would take away probability mass from some observed word, which obviously is not optimal in terms of maximizing the likelihood of the observed data." + }, + { + "2:11": "But this zero probability for all the unseen words may not be reasonable sometimes. Especially, if we want the distribution to characterize the topic of text mining.
So one way to address this problem is actually to use Bayesian estimation, where we would look at both the data and our prior knowledge about the parameters. We assume that we have some prior belief about the parameters. Now in this case of course, we are not" + }, + { + "2:47": "going to look at just the data, but also look at the prior." + }, + { + "2:54": "So the prior here is defined by P of theta, and this means we will impose some preference on certain thetas over others." + }, + { + "3:06": "And by using Bayes Rule, that I have shown here," + }, + { + "3:12": "we can then combine the likelihood function with the prior to give us this" + }, + { + "3:23": "posterior probability of the parameter. Now, a full explanation of Bayes rule, and some of these things related to Bayesian reasoning, would be outside the scope of this course. But I just gave a brief introduction because this is general knowledge that might be useful to you. The Bayes Rule is basically defined here, and allows us to write down one conditional probability of X given Y in terms of the conditional probability of Y given X. And you can see the two probabilities differ in the order of the two variables." + }, + { + "4:09": "But often the rule is used for making inferences about a variable, so let's take a look at it again. We can assume that p(X) encodes our prior belief about X. That means before we observe any other data, that's our belief about X, whether we believe some X values have higher probability than others." + }, + { + "4:40": "And this probability of X given Y is a conditional probability, and this is our posterior belief about X, because this is our belief about X values after we have observed Y. Given that we have observed Y, now what do we believe about X? Do we believe some values have higher probabilities than others?"
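Written out, the rule being described here is:

```latex
p(X \mid Y) \;=\; \frac{p(Y \mid X)\, p(X)}{p(Y)},
\qquad\text{so that, for the parameters,}\qquad
\underbrace{p(\theta \mid X)}_{\text{posterior}}
\;\propto\;
\underbrace{p(X \mid \theta)}_{\text{likelihood}}\,
\underbrace{p(\theta)}_{\text{prior}} .
```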
+ }, + { + "5:09": "Now the two probabilities are related through this one, which can be regarded as the probability of" + }, + { + "5:19": "the observed evidence Y, given a particular X. So you can think about X as our hypothesis, and we have some prior belief about which hypothesis to choose. And after we have observed Y, we will update our belief, and this updating formula is based on the combination of our prior" + }, + { + "5:48": "and the likelihood of observing this Y if X is indeed true." + }, + { + "5:57": "So much for the detour about Bayes Rule. In our case, what we are interested in is inferring the theta values. So, we have a prior here that encodes our prior knowledge about the parameters." + }, + { + "6:15": "And then we have the data likelihood here, which tells us which parameter values can explain the data well. The posterior probability combines both of them," + }, + { + "6:30": "so it represents a compromise of the two preferences. And in such a case, we can maximize this posterior probability to find the theta that maximizes this posterior probability, and this estimator is called a Maximum a Posteriori, or MAP, estimate." + }, + { + "6:55": "And this estimator is a more general estimator than the maximum likelihood estimator, because if we define our prior as a noninformative prior, meaning that it's uniform over all the theta values, no preference, then we basically go back to the maximum likelihood estimate, because in such a case it's mainly going to be determined by this likelihood value, the same as here." + }, + { + "7:28": "But if we have some informative prior, some bias towards different values, then the MAP estimator allows us to incorporate that. But the problem here, of course, is how to define the prior." + }, + { + "7:44": "There is no free lunch, and if we want to solve the problem with more knowledge, we have to have that knowledge. And that knowledge, ideally, should be reliable.
Otherwise, your estimate may not necessarily be more accurate than the maximum likelihood estimate." + }, + { + "8:01": "So, now let's look at the Bayesian estimation in more detail." + }, + { + "8:08": "So, I show the theta values as just a one-dimensional value, and that's a simplification of course. And so, we're interested in which value of theta is optimal. So now, first we have the Prior. The Prior tells us that some of the values are more likely than others, according to our belief. For example, these values are more likely than the values over here, or here, or other places." + }, + { + "8:42": "So this is our Prior, and then we have our data likelihood. And in this case, the data also tells us which values of theta are more likely. And that just means those values can best explain our data." + }, + { + "9:01": "And then when we combine the two we get the posterior distribution, and that's just a compromise of the two. It would say that it's somewhere in between. So, we can now look at some interesting points here. This point represents the mode of the prior; that means the most likely parameter value according to our prior, before we observe any data." + }, + { + "9:25": "This point is the maximum likelihood estimate; it represents the theta that gives the data the maximum probability." + }, + { + "9:32": "Now this point is interesting, it's the posterior mode." + }, + { + "9:38": "It's the most likely value of theta given by the posterior distribution. And it represents a good compromise of the prior mode and the maximum likelihood estimate." + }, + { + "9:51": "Now in general in Bayesian inference, we are interested in the distribution over all these parameter values, as you see here. It's a distribution over theta values that you can see here, P of theta given X."
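The lecture does not commit to a specific prior. As one concrete possibility, a pseudocount prior (a common choice for word distributions, assumed here purely for illustration, with invented numbers) makes the compromise between the prior and the maximum likelihood estimate easy to see:

```python
# Sketch of the prior/likelihood compromise using an assumed pseudocount prior.
counts = {"text": 10, "mining": 5, "query": 1, "the": 84}   # observed counts
total = sum(counts.values())

prior_pseudocounts = {"text": 5, "mining": 20, "query": 5, "the": 5}
mu = sum(prior_pseudocounts.values())

mle = {w: c / total for w, c in counts.items()}              # data only
prior = {w: c / mu for w, c in prior_pseudocounts.items()}   # belief only
smoothed = {w: (counts[w] + prior_pseudocounts[w]) / (total + mu)
            for w in counts}                                 # the compromise

# For every word, the smoothed estimate lies between the prior probability
# and the maximum likelihood estimate, mirroring the "compromise" above.
```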
+ }, + { + "10:09": "So the problem of Bayesian inference is" + }, + { + "10:14": "to infer this posterior distribution, and also to infer other interesting quantities that might depend on theta. So, I show f of theta here as an interesting quantity that we want to compute. But in order to compute this value, we need to know the value of theta. In Bayesian inference, we treat theta as an uncertain variable. So we think about all the possible values of theta. Therefore, we can estimate the value of this function f as the expected value of f, according to the posterior distribution of theta, given the observed evidence X." + }, + { + "10:58": "As a special case, we can assume f of theta is just equal to theta. In this case, we get the expected value of theta, which is basically the posterior mean. That gives us also one point estimate of theta, and it's sometimes the same as the posterior mode, but it's not always the same. So, it gives us another way to estimate the parameter." + }, + { + "11:24": "So, this is a general illustration of Bayesian estimation and inference. And later, you will see this can be useful for topic mining where we want to inject some prior knowledge about the topics. So to summarize, we've used the language model, which is basically a probability distribution over text. It's also called a generative model for text data. The simplest language model is the Unigram Language Model; it's basically a word distribution." + }, + { + "11:54": "We introduced the concept of the likelihood function, which is the probability of the data given some model." + }, + { + "12:02": "And this function is very important," + }, + { + "12:05": "given a particular set of parameter values this function can tell us which X, which data point, has a higher likelihood, higher probability."
+ }, + { + "12:16": "Given a data sample X, we can use this function to determine which parameter values would maximize the probability of the observed data, and this is the maximum likelihood estimate." + }, + { + "12:31": "We also talked about Bayesian estimation or inference. In this case, we must define a prior on the parameters, p of theta. And then we're interested in computing the posterior distribution of the parameters, which is proportional to the prior times the likelihood." + }, + { + "12:48": "And this distribution would then allow us to infer any derived quantities of theta. [MUSIC]" + } + ] + }, + { + "2-10-probabilistic-topic-models-mining-one-topic": [ + { + "0:00": "[SOUND] This lecture is a continued discussion of probabilistic topic models. In this lecture, we're going to continue discussing probabilistic models. We're going to talk about a very simple case where we are interested in just mining one topic from one document." + }, + { + "0:30": "So in this simple setup, we are interested in analyzing one document and trying to discover just one topic. So this is the simplest case of topic model. The input now no longer has k, which is the number of topics, because we know there is only one topic, and the collection has only one document, also. In the output, we also no longer have coverage, because we assumed that the document covers this topic 100%. So the main goal is just to discover the word probabilities for this single topic, as shown here." + }, + { + "1:14": "As always, when we think about using a generative model to solve such a problem, we start with thinking about what kind of data we are going to model, or from what perspective we're going to model the data, or the data representation. And then we're going to design a specific model for generating the data from our perspective.
Where our perspective just means we want to take a particular angle of looking at the data, so that the model will have the right parameters for discovering the knowledge that we want. And then we'll write down the likelihood function to capture more formally how likely a data point will be obtained from this model." + }, + { + "2:05": "And the likelihood function will have some parameters in the function. And then we are interested in estimating those parameters, for example, by maximizing the likelihood, which will lead to the maximum likelihood estimate. These estimated parameters will then become the output of the mining algorithm, which means we'll take the estimated parameters as the knowledge that we discover from the text. So let's look at these steps for this very simple case. Later we'll look at this procedure for some more complicated cases. So our data, in this case, is just a document, which is a sequence of words. Each word here is denoted by x sub i. Our model is a Unigram language model, a word distribution that we hope will denote a topic, and that's our goal. So we will have as many parameters as words in our vocabulary, in this case M." + }, + { + "3:09": "And for convenience we're going to use theta sub i to denote the probability of word w sub i." + }, + { + "3:20": "And obviously these theta sub i's will sum to 1." + }, + { + "3:24": "Now what does the likelihood function look like? Well, this is just the probability of generating this whole document given such a model. Because we assume independence in generating each word, the probability of the document will be just the product of the probabilities of the individual words." + }, + { + "3:42": "And since some words might have repeated occurrences, we can also rewrite this product in a different form." + }, + { + "3:52": "So in this line, we have rewritten the formula into a product over all the unique words in the vocabulary, w sub 1 through w sub M.
Now this is different from the previous line, where the product is over the different positions of words in the document." + }, + { + "4:15": "Now when we do this transformation, we then need to introduce a count function here. This denotes the count of word w sub 1 in the document, and similarly this is the count of word w sub M in the document, because these words might have repeated occurrences. You can also see that if a word did not occur in the document," + }, + { + "4:41": "it will have a zero count, and therefore the corresponding term will disappear. So this is a very useful form of writing down the likelihood function that we will often use later. So I want you to pay attention to this, just get familiar with this notation. It just changes the product to range over all the different words in the vocabulary. So in the end, of course, we'll use theta sub i to express this likelihood function, and it would look like this. Next, we're going to find the theta values, or probabilities of these words, that would maximize this likelihood function. So now let's take a look at the maximum likelihood estimation problem more closely." + }, + { + "5:32": "This line is copied from the previous slide. It's just our likelihood function." + }, + { + "5:38": "So our goal is to maximize this likelihood function. We will find it often easier to" + }, + { + "5:47": "maximize the log likelihood instead of the original likelihood. And this is purely for mathematical convenience, because after the logarithm transformation our function becomes a sum instead of a product. And we also have constraints over these probabilities. The sum makes it easier to take derivatives, which is often needed for finding the optimal solution of this function. So please take a look at this sum again, here. And this is a form of function that you will often see later in the more general topic models. So it's a sum over all the words in the vocabulary. And inside the sum there is a count of a word in the document.
And this is multiplied by the logarithm of a probability." + }, + { + "6:55": "So let's see how we can solve this problem." + }, + { + "6:58": "Now at this point the problem is purely a mathematical problem, because we are going to just find the optimal solution of a constrained maximization problem. The objective function is the likelihood function, and the constraint is that all these probabilities must sum to one. So, one way to solve the problem is to use the Lagrange multiplier approach." + }, + { + "7:24": "Now this is beyond the scope of this course, but since the Lagrange multiplier is a very useful approach, I would like to just give a brief introduction to it, for those of you who are interested." + }, + { + "7:39": "So in this approach we will construct a Lagrange function, here. And this function will combine our objective function with another term that encodes our constraint, and we introduce a Lagrange multiplier here, lambda, so it's an additional parameter. Now, the idea of this approach is just to turn the constrained optimization into, in some sense, an unconstrained optimization problem. Now we are just interested in optimizing this Lagrange function." + }, + { + "8:19": "As you may recall from calculus, an optimal point would be achieved when the derivative is set to zero. This is a necessary condition; it's not sufficient, though. So if we do that you will see the partial derivative, with respect to theta i here, is equal to this. And this part comes from the derivative of the logarithm function, and this lambda is simply taken from here. And when we set it to zero we can easily see theta sub i is related to lambda in this way." + }, + { + "9:06": "Since we know all the theta i's must sum to one, we can plug this into the constraint here. And this will allow us to solve for lambda." + }, + { + "9:16": "And this is just the negative sum of all the counts.
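Spelled out with the notation used in this lecture (c(w_i, d) for the count of word w_i in document d), the Lagrange argument is:

```latex
\mathcal{L}(\theta,\lambda) \;=\; \sum_{i=1}^{M} c(w_i,d)\,\log\theta_i \;+\; \lambda\Big(\sum_{i=1}^{M}\theta_i - 1\Big),
\qquad
\frac{\partial \mathcal{L}}{\partial \theta_i} \;=\; \frac{c(w_i,d)}{\theta_i} + \lambda \;=\; 0
\;\;\Rightarrow\;\;
\theta_i = -\frac{c(w_i,d)}{\lambda}.

\sum_{i=1}^{M}\theta_i = 1
\;\;\Rightarrow\;\;
\lambda = -\sum_{j=1}^{M} c(w_j,d),
\qquad
\hat{\theta}_i = \frac{c(w_i,d)}{\sum_{j=1}^{M} c(w_j,d)}.
```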
And this further allows us to then solve the optimization problem, eventually, to find the optimal setting for theta sub i. And if you look at this formula, it turns out that it's actually very intuitive, because this is just the count of the word normalized by the document length, which is also the sum of all the counts of words in the document. So, after all this mess, we have just obtained something that's very intuitive, and this matches our intuition, where we want to maximize the likelihood of the data by assigning as much probability mass as possible to the observed words here. And you might also notice that this is the general result of the maximum likelihood estimator. In general, the estimator amounts to normalizing counts; it's just that sometimes the counting has to be done in a particular way, as you will also see later. So this is basically an analytical solution to our optimization problem. In general though, when the likelihood function is very complicated, we're not going to be able to solve the optimization problem with a closed-form formula. Instead we have to use some numerical algorithms, and we're going to see such cases later, also. So imagine what we would get if we used such a maximum likelihood estimator to estimate one topic for a single document d here. Let's imagine this document is a text mining paper. Now, what you might see is something that looks like this. On the top, you will see the high probability words tend to be those very common words, often function words in English. And this will be followed by some content words that really characterize the topic well, like text, mining, etc. And then in the end, you also see some words that are not really related to the topic, but they might be extraneously mentioned in the document. As a topic representation, you will see this is not ideal, right? That's because the high probability words are function words; they are not really characterizing the topic.
So my question is how can we get rid of such common words?" + }, + { + "11:59": "Now this is the topic of the next module. We're going to talk about how to use probabilistic models to somehow get rid of these common words. [MUSIC]" + } + ] + } + ] + }, + { + "Week 3": [ + { + "3-1-probabilistic-topic-models-mixture-of-unigram-language-models": [ + { + "0:00": "[MUSIC]" + }, + { + "0:06": "This lecture is about the mixture of unigram language models." + }, + { + "0:11": "In this lecture we will continue discussing probabilistic topic models. In particular, we will introduce a mixture of unigram language models. This is a slide that you have seen earlier, where we talked about how to get rid of the background words that appear on top of the word distribution estimated from one document." + }, + { + "0:36": "So if we want to solve the problem, it would be useful to think about why we end up having this problem. Well, this is obviously because these words are very frequent in our data and we are using a maximum likelihood estimate. The estimate obviously has to assign high probabilities to these words in order to maximize the likelihood. So, in order to get rid of them, we'd have to do something differently here." + }, + { + "1:05": "In particular we'll have to say this distribution doesn't have to explain all the words in the text data. What we're going to say is that these common words should not be explained by this distribution. So one natural way to solve the problem is to think about using another distribution to account for just these common words. This way, the two distributions can be mixed together to generate the text data. And we'll let the other model, which we'll call the background topic model, generate the common words. This way our target topic theta here will only be generating the content words that characterize the content of the document." + }, + { + "1:52": "So, how does this work?
Well, it is just a small modification of the previous setup where we have just one distribution. Since we now have two distributions, we have to decide which distribution to use when we generate a word. Each word will still be a sample from one of the two distributions." + }, + { + "2:13": "Text data is still generated the same way. Namely, we generate one word at a time, and eventually we generate a lot of words. When we generate a word, however, we're going to first decide which of the two distributions to use. And this is controlled by another probability, the probability of theta sub d and the probability of theta sub B here." + }, + { + "2:41": "So this is the probability of selecting the topic word distribution. This is the probability of selecting the background word" + }, + { + "2:52": "distribution, denoted by theta sub B." + }, + { + "2:55": "In this case I just give an example where we can set both to 0.5. So you're going to basically flip a coin, a fair coin, to decide which one to use. But in general these probabilities don't have to be equal. So you might bias toward using one topic more than the other. So now the process of generating a word would be: first we flip a coin, based on these probabilities of choosing each model. If, let's say, the coin shows up as heads, which means we're going to use the topic word distribution, then we're going to use this word distribution to generate a word. Otherwise, we would be going along this path." + }, + { + "3:41": "And we're going to use the background word distribution to generate a word." + }, + { + "3:46": "So in such a case, we have a model that has some uncertainty associated with the use of a word distribution. But we can still think of this as a model for generating text data. And such a model is called a mixture model." + }, + { + "4:02": "So now let's see. In this case, what's the probability of observing a word w? Now here I show some words, like \"the\" and \"text\".
So as in all cases, once we set up a model, we are interested in computing the likelihood function. The basic question is, what's the probability of observing a specific word here? Now we know that the word can be observed from each of the two distributions, so we have to consider two cases. Therefore it's a sum over these two cases." + }, + { + "4:34": "The first case is to use the topic word distribution to generate the word. In such a case, the probability would be the probability of theta sub d, which is the probability of choosing this model, multiplied by the probability of actually observing the word from that model. Both events must happen in order to observe the word. We first must have chosen the topic theta sub d, and then we also must have actually sampled the word \"the\" from the distribution. And similarly, the second part accounts for the other way of generating the word, from the background." + }, + { + "5:15": "Now obviously the probability of \"text\" is similar, right? We again consider the two ways of generating \"text\". And in each case, it's a product of the probability of choosing a particular distribution multiplied by the probability of observing the word from that distribution." + }, + { + "5:35": "Now, as you will see, this is actually a general form. So you might want to make sure that you have really understood this expression here. And you should convince yourself that this is indeed the probability of observing \"text\". So to summarize what we observed here: the probability of a word from a mixture model is in general a sum over the different ways of generating the word." + }, + { + "6:00": "In each case, it's a product of the probability of selecting that component model multiplied by the probability of actually observing the data point from that component model. And this is something quite general, and you will see this occurring often later. So the basic idea of a mixture model is just to treat these two distributions together as one model.
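The two-case sum just described, p(w) = p(theta_d) p(w | theta_d) + p(theta_B) p(w | theta_B), can be sketched directly; the component word probabilities below are made up for illustration.

```python
# Probability of a word under a two-component mixture model.
p_topic, p_background = 0.5, 0.5  # probabilities of selecting each component

# Illustrative word distributions (assumed numbers, not from the slides).
topic_lm = {"text": 0.04, "mining": 0.035, "the": 0.000001}
background_lm = {"text": 0.000006, "mining": 0.000002, "the": 0.03}

def p_word(w):
    """Sum over the two ways of generating w: select a component, then draw w."""
    return (p_topic * topic_lm.get(w, 0.0)
            + p_background * background_lm.get(w, 0.0))

# p_word("the") is dominated by the background component, while
# p_word("text") is dominated by the topic component.
```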
So I used a box to bring all these components together. So if you view this whole box as one model, it's just like any other generative model. It would just give us the probability of a word." + }, + { + "6:42": "But the way it determines this probability is quite different from when we have just one distribution." + }, + { + "6:50": "And this is basically a more complicated model, more complicated than just one distribution. And it's called a mixture model." + }, + { + "7:00": "So as I just said we can treat this as a generative model. And it's often useful to think of it just as a likelihood function. The illustration that you have seen before, which is dimmer now, is just the illustration of this generative model. So mathematically, this model is nothing but the following generative model, where the probability of a word is assumed to be a sum over the two cases" + }, + { + "7:26": "of generating the word. And the form you are seeing now is a more general form than what you have seen in the calculation earlier. Here I just use the symbol w to denote any word, but you can still see this is basically first a sum, right? And this sum is due to the fact that the word can be generated in multiple ways, two ways in this case. And inside the sum, each term is a product of two terms. The two terms are, first, the probability of selecting a component, like theta sub d; second, the probability of actually observing the word from this component model. So this is a very general description of all mixture models. I just want to make sure that you understand this, because this is really the basis for understanding all kinds of topic models." + }, + { + "8:28": "So now, once we set up the model, we can write down the likelihood function as we see here. The next question is, how can we estimate the parameters given the data? Well, in general, we can use some of the text data to estimate the model parameters.
And this estimation would allow us to discover interesting knowledge about the text. So, in this case, what do we discover? Well, these are represented by our parameters, and we will have two kinds of parameters. One is the two word distributions, which represent the topics, and the other is the coverage of each topic." + }, + { + "9:12": "The coverage of each topic is determined by the probability of theta sub d and the probability of theta sub B, and these sum to one. Now, what's interesting is also to think about special cases, like: if we set one of them to one, what would happen? Well, the other would be zero, right? And if you look at the likelihood function," + }, + { + "9:36": "it will then degenerate to the special case of just one distribution. Okay, so you can easily verify that by assuming one of these two is 1.0 and the other is zero." + }, + { + "9:49": "So in this sense, the mixture model is more general than the previous model where we have just one distribution. It can cover that as a special case." + }, + { + "9:59": "So to summarize, we talked about the mixture of two Unigram Language Models, and the data we're considering here is just one document. And the model is a mixture model with two components, two unigram language models, specifically theta sub d, which is intended to denote the topic of document d, and theta sub B, which represents a background topic that we can set to attract the common words, because common words would be assigned a high probability in this model." + }, + { + "10:33": "So the parameters can be collectively called Lambda, which I show here. You can again" + }, + { + "10:41": "think about the question of how many parameters we are talking about exactly. This is usually a good exercise to do, because it allows you to see the model in depth and to have a complete understanding of what's going on in this model. And we have the mixing weights, of course, also." + }, + { + "10:59": "So what does the likelihood function look like?
Well, it looks very similar to what we had before. So for the document, first it's a product over all the words in the document, exactly the same as before. The only difference is that inside here now there's a sum instead of just one term. You might recall that before we just had this one term there." + }, + { + "11:25": "But now we have this sum because of the mixture model. And because of the mixture model we also have to introduce the probability of choosing that particular component distribution." + }, + { + "11:39": "And this is just another way of writing it, using a product over all the unique words in our vocabulary instead of a product over all the positions in the document. And this form, where we look at the different unique words, is a convenient form for computing the maximum likelihood estimate later. And the maximum likelihood estimator is, as usual, just to find the parameters that would maximize the likelihood function. And the constraints here are of course of two kinds. One is that the word probabilities in each [INAUDIBLE] must sum to 1; the other is that the choice of each [INAUDIBLE] must sum to 1. [MUSIC]" + } + ] + }, + { + "3-2-probabilistic-topic-models-mixture-model-estimation-part-1": [ + { + "0:06": "This lecture is about mixture model estimation. In this lecture, we're going to continue discussing probabilistic topic models. In particular, we're going to talk about how to estimate the parameters of a mixture model. So let's first look at our motivation for using a mixture model: we hope to factor out the background words from the topic word distribution. So the idea is to assume that the text data actually contain two kinds of words. One kind is from the background here, so \"the\", \"is\", \"we\", etc. The other kind is from our topic word distribution that we're interested in. So in order to solve this problem of factoring out background words, we can set up our mixture model as follows.
We are going to assume that we already know the values of all the parameters in the mixture model except for the word distribution of Theta sub d, which is our target. So this is a case of customizing a probabilistic model so that we embed the unknown variables that we are interested in, but we're going to simplify other things. We're going to assume we have knowledge about the others, and this is a powerful way of customizing a model for a particular need. Now you can imagine we could have assumed that we also don't know the background word distribution, but in this case, our goal is to factor out precisely those high-probability background words. So we assume the background model is already fixed. The problem here is, how can we adjust Theta sub d in order to maximize the probability of the observed document here, assuming all the other parameters are known? Now, although we designed the model heuristically to try to factor out these background words, it's unclear whether, if we use the maximum likelihood estimator, we will actually end up with a word distribution where common words like \"the\" will indeed have smaller probabilities than before. Now, in this case, it turns out that the answer is yes. When we set up the probabilistic model this way and use the maximum likelihood estimator, we will end up with a word distribution where the common words are factored out by the use of the background distribution. So to understand why this is so, it's useful to examine the behavior of a mixture model. We're going to look at a very simple case in order to understand some interesting behaviors of a mixture model. The observed patterns here actually generalize to mixture models in general, but it's much easier to understand this behavior when we use a very simple case like what we're seeing here.
So specifically in this case, let's assume that the probability of choosing each of the two models is exactly the same. So we're going to flip a fair coin to decide which model to use. Furthermore, we are going to assume there are precisely two words, \"the\" and \"text.\" Obviously, this is a very naive oversimplification of actual text, but again, it is useful to examine the behavior in such a special case. We further assume that the background model gives a probability of 0.9 to the word \"the\" and 0.1 to \"text.\" Now, let's also assume that our data is extremely simple. The document has just two words, \"text\" and then \"the.\" So now, let's write down the likelihood function in such a case. First, what's the probability of \"text\" and what's the probability of \"the\"? I hope by this point you will be able to write it down. The probability of \"text\" is basically a sum of two cases, where each case corresponds to one of the word distributions, and it accounts for the two ways of generating \"text.\" Inside each case, we have the probability of choosing the model, which is 0.5, multiplied by the probability of observing \"text\" from that model. Similarly, \"the\" would have a probability of the same form, just with different exact probabilities. So naturally, our likelihood function is just the product of the two. The likelihood is very easy to write down once you understand the probability of each word, which is also why it's so important to understand exactly what the probability of observing each word from such a mixture model is. Now, the interesting question is, how can we then optimize this likelihood? Well, you will notice that there are only two variables. They are precisely the two probabilities of the two words \"text\" and \"the\" given by Theta sub d. This is because we have assumed that all the other parameters are known. So now, the question is a very simple algebra question.
So we have a simple expression with two variables, and we hope to choose the values of these two variables to maximize this function. This is similar to simple algebra problems that we have seen before, and note that the two probabilities must sum to one, so there is a constraint. If there were no constraint, of course, we would set both probabilities to their maximum value, which would be one, but we can't do that because the probabilities of \"text\" and \"the\" must sum to one. We can't give both a probability of one. So now the question is, how should we allocate the probability mass between the two words? What do you think? Now, it will be useful to look at this formula for a moment and see intuitively what we should do in order to set these probabilities to maximize the value of this function. If we look into this further, then we'll see some interesting behavior of the two component models in that they will be collaborating to maximize the probability of the observed data, which is dictated by the maximum likelihood estimator, but they're also competing in some way. In particular, they will be competing on the words, and they will tend to bet high probabilities on different words to avoid this competition in some sense, or to gain advantage in this competition. So again, looking at this objective function, and given the constraint on the two probabilities, if you look at the formula intuitively, you might feel that you want to set the probability of \"text\" to be somewhat larger than that of \"the\". This intuition can be well supported by a mathematical fact, which is that when the sum of two variables is a constant, their product is maximized when they are equal, and this is a fact that we know from algebra. Now, if we apply that here, it would mean that we have to make the two probabilities of observing \"text\" and \"the\" equal.
When we make them equal, and then if we consider the constraint, we can easily solve this problem, and the solution is that the probability of \"text\" would be 0.9 and the probability of \"the\" 0.1. As you can see, indeed the probability of \"text\" is now much larger than the probability of \"the\", and this would not be the case if we had just one distribution. This is clearly because of the use of the background model, which assigns a very high probability to \"the\" and a low probability to \"text\". If you look at the equation, you will see obviously some interaction of the two distributions here. In particular, you will see that in order to make them equal, the probability assigned by Theta sub d must be higher for a word that has a smaller probability given by the background. This is obvious from examining this equation, because the background part for \"text\" is small. So in order to compensate for that, we must make the probability of \"text\" given by Theta sub d somewhat larger so that the two sides can be balanced. This is in fact a very general behavior of mixture models. That is, if one distribution assigns a higher probability to one word than to another, then the other distribution would tend to do the opposite. Basically, it would discourage other distributions from doing the same, and this is to balance them out so that we can account for all the words. This also means that, by using a background model that is fixed to assign high probabilities to background words, we can indeed encourage the unknown topic word distribution to assign smaller probabilities to such common words and instead put more probability mass on the content words that cannot be explained well by the background model, meaning that they have a very small probability from the background model, like \"text\" here."
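The two-word solution can be verified numerically; a minimal sketch, where the 0.5/0.5 choice probabilities and the 0.9/0.1 background are the lecture's numbers and the grid search is just for illustration:

```python
# Numerical check of the two-word example: the background gives "the" 0.9
# and "text" 0.1, each model is chosen with probability 0.5, and the
# document is just ["text", "the"]. A grid search over p = P(text|theta_d)
# recovers the analytic optimum p = 0.9.
def likelihood(p):
    p_text = 0.5 * p + 0.5 * 0.1          # two ways to generate "text"
    p_the = 0.5 * (1 - p) + 0.5 * 0.9     # two ways to generate "the"
    return p_text * p_the

best_p = max((i / 1000 for i in range(1001)), key=likelihood)
# best_p comes out at 0.9, matching the solution in the lecture
```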
+ } + ] + }, + { + "3-3-probabilistic-topic-models-mixture-model-estimation-part-2": [ + { + "0:00": "[SOUND] Now let's look at another behavior of the mixture model, and in this case let's look at its response to data frequencies. So what you are seeing now is basically the likelihood function for the two-word document, and we know in this case the solution is: \"text\" has a probability of 0.9 and \"the\" a probability of 0.1. Now it's interesting to think about a scenario where we start adding more words to the document. So what would happen if we add many \"the\"s to the document?" + }, + { + "0:41": "Now this would change the game, right? So, how? Well, picture what the likelihood function would look like now. It starts with the likelihood function for the two words, right? As we add more words, we just have to multiply the likelihood function by additional terms to account for the additional occurrences. Since in this case all the additional words are \"the\", we're going to just multiply by this term, the probability of \"the\"." + }, + { + "1:12": "And if we have another occurrence of \"the\", we'd multiply again by the same term, and so on and so forth, adding as many terms as the number of \"the\"s that we add to the document d'. Now this obviously changes the likelihood function. So what's interesting is to think about how that would change our solution. So what's the optimal solution now?" + }, + { + "1:38": "Now, intuitively you'd know the original solution, 0.9 versus 0.1, will no longer be optimal for this new function. Right?" + }, + { + "1:48": "But the question is how we should change it. The two probabilities still have to sum to one. So we know we must take away some probability mass from one word and add it to the other word. The question is which word should have a reduced probability and which word should have a larger probability. And in particular, let's think about the probability of \"the\".
Should it be increased to more than 0.1? Or should we decrease it to less than 0.1? What do you think?" + }, + { + "2:19": "Now you might want to pause the video a moment to think more about this question, because it has to do with understanding an important behavior of a mixture model, and indeed, of the maximum likelihood estimator in general. Now if you look at the formula for a moment, then you will see that the objective function is now more influenced by \"the\" than by \"text\". So as you can imagine, it would make sense to actually assign a smaller probability to \"text\" to make room for a larger probability for \"the\". Why? Because \"the\" is repeated many times. If we increase it a little bit, it will have more positive impact, whereas a slight decrease of \"text\" will have relatively small impact because it occurred just once, right? So this means there is another behavior that we observe here. That is, high frequency words tend to get high probabilities from all the distributions. And this is no surprise at all, because after all, we are maximizing the likelihood of the data. So the more a word occurs, the more sense it makes to give such a word a higher probability, because the impact on the likelihood function would be larger. This is in fact a very general phenomenon of the maximum likelihood estimator. But in this case, we can see that as we see more occurrences of a term, it also encourages the unknown distribution theta sub d to assign a somewhat higher probability to this word." + }, + { + "4:07": "Now it's also interesting to think about the impact of the probability of theta sub B, the probability of choosing one of the two component models. Now we've been so far assuming that each model is equally likely, and that gives us 0.5. But you can again look at this likelihood function and try to picture what would happen if we increase the probability of choosing the background model.
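The frequency response just described can be sketched numerically: start from the lecture's two-word document and add n extra \"the\"s; the optimal P(the | theta_d) found by grid search then grows with n. The background 0.9/0.1 and 0.5/0.5 choice probabilities are the lecture's numbers; the specific values of n are illustrative:

```python
# As more "the"s are added to the document, the maximum likelihood
# estimate of theta_d shifts probability mass toward "the".
from math import log

def best_p_text(n_extra_the):
    def ll(p):
        # one "text" and (1 + n_extra_the) occurrences of "the"
        return log(0.5 * p + 0.05) + (1 + n_extra_the) * log(0.5 * (1 - p) + 0.45)
    return max((i / 1000 for i in range(1001)), key=ll)

p_the = [1 - best_p_text(n) for n in (0, 2, 5)]
# optimal P(the|theta_d) increases with repetitions: 0.1, then 0.6, then ~0.81
```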
Now you will see that for these terms for \"the\", we have a different form, where the probability would be" + }, + { + "4:40": "even larger, because the background has a high probability for the word, and the coefficient in front of the 0.9, which is now 0.5, would be even larger. When this is larger, the overall result would be larger, and that also makes it less important for theta sub d to increase the probability of \"the\", because it's already very large. So the impact of increasing the probability of \"the\" is somewhat regulated by this coefficient, the probability of choosing the background model. If it's larger for the background, then it becomes less important to increase the value. So this means the behavior here, which is that high frequency words tend to get high probabilities, is affected or regularized somewhat by the probability of choosing each component. The more likely a component is to be chosen, the more important it is for it to have higher values for these frequent words. If it has a very small probability of being chosen, then the incentive is less. So to summarize, we have just discussed the mixture model. We discussed the estimation problem of the mixture model, and in particular some general behaviors of the estimator, which means we can expect our estimator to capture these intuitions. First, every component model attempts to assign high probabilities to highly frequent words in the data, and this is to collaboratively maximize the likelihood. Second, different component models tend to bet high probabilities on different words, and this is to avoid competition, or waste of probability, and it allows them to collaborate more efficiently to maximize the likelihood." + }, + { + "6:33": "So, the probability of choosing each component regulates the collaboration and the competition between component models. It allows some component models to respond more to the change, for example, of the frequency of a term in the data."
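This regulating effect of the choice probability can also be sketched numerically. The fixed background (P(the)=0.9, P(text)=0.1) follows the lecture; the document of one \"text\" plus three \"the\"s and the specific choice probabilities are made-up numbers for illustration:

```python
# Raising lam_B, the probability of choosing the background, lowers the
# optimal P(the | theta_d): the background already explains "the".
from math import log

def best_p_the(lam_B):
    lam_d = 1 - lam_B
    def ll(p_text):
        # document: one "text" plus three "the"s
        return (log(lam_d * p_text + lam_B * 0.1)
                + 3 * log(lam_d * (1 - p_text) + lam_B * 0.9))
    return 1 - max((i / 1000 for i in range(1001)), key=ll)

vals = [best_p_the(l) for l in (0.3, 0.5, 0.7)]
# the larger lam_B is, the less theta_d needs to raise P(the):
# vals comes out strictly decreasing
```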
+ }, + { + "6:53": "We also talked about the special case of fixing one component to a background word distribution, right? This distribution can be estimated by using a large collection of English documents, using just one distribution: we just normalize the frequencies of terms to give us the probabilities of all these words. Now when we use such a specialized mixture model, we showed that we can effectively get rid of the background words in the other component." + }, + { + "7:23": "And that would make the discovered topic more discriminative." + }, + { + "7:27": "This is also an example of imposing a prior on the model parameters, and the prior here basically means one model must be exactly the same as the background language model. If you recall what we talked about in Bayesian estimation, this prior will allow us to favor a model that is consistent with our prior. In fact, if a model is not consistent, we're going to say it is impossible, so it has a zero prior probability. That effectively excludes such a scenario. This is also an issue that we'll talk more about later. [MUSIC]" + } + ] + }, + { + "3-4-probabilistic-topic-models-expectation-maximization-algorithm-part-1": [ + { + "0:06": "This lecture is about the expectation-maximization algorithm, also called the EM algorithm. In this lecture, we're going to continue the discussion of probabilistic topic models. In particular, we're going to introduce the EM algorithm, which is a family of useful algorithms for computing the maximum likelihood estimate of mixture models. So this is the now-familiar scenario of using a two-component mixture model to try to factor out the background words from one topic word distribution. We're interested in computing this estimate, and we're going to try to adjust these probability values to maximize the probability of the observed document. Note that we assume that all the other parameters are known.
So the only thing unknown is the word probabilities given by theta sub d. In this lecture, we're going to look into how to compute this maximum likelihood estimate. Now, let's start with the idea of separating the words in the text data into two groups. One group would be explained by the background model; the other group would be explained by the unknown topic word distribution. After all, this is the basic idea of the mixture model. But suppose we actually knew which word is from which distribution. That would mean, for example, that the words \"the\", \"is\", and \"we\" are known to be from the background word distribution. On the other hand, the other words, \"text\", \"mining\", \"clustering\", etc., are known to be from the topic word distribution. If you can see the color, then these are shown in blue. These blue words are then assumed to be from the topic word distribution. If we already knew how to separate these words, then the problem of estimating the word distribution would be extremely simple. If you think about this for a moment, you'll realize that, well, we can simply take all these words that are known to be from this word distribution theta sub d and normalize their counts. So indeed this problem would be very easy to solve if we had known precisely which words are from which distribution, and this in fact makes the model no longer a mixture model, because we can already observe which distribution has been used to generate which part of the data. So we actually go back to the single word distribution problem. In this case let's call these words that are known to be from theta d a pseudo-document d prime, and now all we need to do is just normalize these word counts for each word w_i. That's fairly straightforward; it's just dictated by the maximum likelihood estimator.
Now, this idea however doesn't work, because in practice we don't really know which word is from which distribution, but it gives us the idea that perhaps we can guess which word is from which distribution. Specifically, given all the parameters, can we infer the distribution a word is from? So let's assume that we actually know tentative probabilities for these words in theta sub d. So now all the parameters are known for this mixture model, and now let's consider a word like \"text\". The question is, do you think \"text\" is more likely to have been generated from theta sub d or from theta sub b? In other words, we want to infer which distribution has been used to generate this \"text\". Now, this inference process is a typical Bayesian inference situation, where we have some prior about these two distributions. So can you see what our prior is here? Well, the prior here is the probability of each distribution. So the prior is given by these two probabilities. In this case, the prior is saying that each model is equally likely, but we can imagine perhaps a different prior is possible. This is called a prior because it is our guess of which distribution has been used to generate a word before we even observe the word. That's why we call it the prior. If we haven't observed the word, our best guess is to say, well, they're equally likely. All right, so it's just flipping a coin. Now, in Bayesian inference we typically update our belief after we have observed the evidence. So what is the evidence here? Well, the evidence here is the word \"text\". Now we know we're interested in the word \"text\", so \"text\" can be regarded as evidence, and if we use Bayes rule to combine the prior and the data likelihood, what we will end up with is to combine the prior with the likelihood that you see here, which is basically the probability of the word \"text\" from each distribution.
We see that in both cases \"text\" is possible. Note that even from the background it is still possible; it just has a very small probability. So intuitively, what would be your guess in this case? Now, if you're like many others, your guess is that \"text\" is probably from theta sub d; it's more likely from theta sub d. Why? You will probably say that it's because \"text\" has a much higher probability under theta sub d than under the background model, which gives it a very small probability. By this reasoning we're going to say, well, \"text\" is more likely from theta sub d. So you see, our guess of which distribution has been used to generate \"text\" depends on how high the probability of \"text\" is in each word distribution. We tend to guess the distribution that gives the word a higher probability, and this is likely to maximize the likelihood. So we're going to choose the distribution that gives the word a higher likelihood. In other words, we're going to compare these two probabilities of the word given by each distribution. But our guess must also be affected by the prior, so we also need to compare these two priors. Why? Because imagine we adjust these priors so that the probability of choosing the background model is almost 100 percent. Now, if you have that kind of strong prior, then that would affect your guess. You might think, well, wait a moment, maybe \"text\" could have been from the background as well. Although the probability is very small here, the prior is very high. So in the end, we have to combine the two, and the Bayes formula provides us a solid and principled way of quantifying this kind of guess. So more specifically, let's think about the probability that this word has in fact been generated from theta sub d. Well, in order for \"text\" to be generated from theta sub d, two things must happen. First, theta sub d must have been selected, so we have the selection probability here.
Secondly, we also have to actually have observed \"text\" from the distribution. So when we multiply the two together, we get the probability that \"text\" has in fact been generated from theta sub d. Similarly, for the background model, the probability of generating \"text\" is another product of a similar form. Now, we also introduce the latent variable z here to denote whether the word is from the background or the topic. When z is zero, it means it's from the topic theta sub d; when it's one, it means it's from the background theta sub b. So now we have the probability that \"text\" is generated from each. Then we can simply normalize them to have an estimate of the probability that the word \"text\" is from theta sub d or from theta sub b, or equivalently, the probability that z is equal to zero given that the observed evidence is \"text\". So this is an application of Bayes rule. But this step is very crucial for understanding the EM algorithm, because if we can do this, then we will be able to first initialize the parameter values somewhat randomly, and then take a guess of these z values: which distribution has been used to generate which word. The initialized parameter values give us a complete specification of the mixture model, which further allows us to apply Bayes rule to infer which distribution is more likely to have generated each word. This prediction essentially helps us to separate the words from the two distributions. Although we can't separate them for sure, we can separate them probabilistically, as shown here." + } + ] + }, + { + "3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2": [ + { + "0:00": "[SOUND] So this is indeed the general idea of the Expectation-Maximization, or EM, Algorithm." + }, + { + "0:14": "So in all EM algorithms we introduce a hidden variable to help us solve the problem more easily. In our case the hidden variable is a binary variable for each occurrence of a word.
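The Bayes-rule inference just described can be sketched with small numbers. The equal priors and the background P(text | theta_B) = 0.1 follow the lecture's example; the tentative P(text | theta_d) = 0.5 is an assumed initialization:

```python
# Inferring the hidden variable z for the word "text" by Bayes rule.
prior_d, prior_B = 0.5, 0.5     # P(choosing theta_d), P(choosing theta_B)
p_text_d, p_text_B = 0.5, 0.1   # tentative P(text|theta_d); background P(text|theta_B)

joint_d = prior_d * p_text_d    # theta_d selected AND "text" observed from it
joint_B = prior_B * p_text_B    # theta_B selected AND "text" observed from it

# P(z = 0 | "text"): normalize over the two ways "text" could be generated
p_z0_given_text = joint_d / (joint_d + joint_B)
# comes out to 5/6 ~ 0.83: "text" is much more likely from theta_d
```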
And this binary variable would indicate whether the word has been generated from theta sub d or theta sub b. And here we show some possible values of these variables. For example, for \"the\", if it's from the background, the z value is one. \"Text\", on the other hand, is from the topic, so its z value is zero, etc." + }, + { + "0:53": "Now, of course, we don't observe these z values; we just imagine there are such values of z attached to all the words." + }, + { + "1:02": "And that's why we call these hidden variables." + }, + { + "1:06": "Now, the idea that we talked about before, predicting which word distribution has been used to generate a word, is exactly predicting the value of this hidden variable. And so the EM algorithm would work as follows. First, we'll initialize all the parameters with random values. In our case, the parameters are mainly the probabilities of words given by theta sub d. So this is the initialization stage. These initialized values would allow us to use Bayes rule to take a guess at these z values, so we'd guess these values. We can't say for sure whether \"text\" is from the background or not, but we can have our guess. This is given by this formula; it's called the E-step. And so the algorithm would then use the E-step to guess these z values. After that, it would invoke another step that's called the M-step. In this step we simply take advantage of the inferred z values and then just group words that are believed to be from the same distribution, like these from the background, including this one as well." + }, + { + "2:27": "We can then normalize the counts to estimate the probabilities, or to revise our estimate of the parameters." + }, + { + "2:36": "So let me also illustrate that we can group the words that are believed to have come from theta sub d, and that's \"text\", \"mining\", \"algorithm\", and \"clustering\", for example." + }, + { + "2:51": "And we group them together to help us re-estimate the parameters that we're interested in.
So these will help us estimate these parameters." + }, + { + "3:06": "Note that before, we just set these parameter values randomly. But with this guess, we will have a somewhat improved estimate. Of course, we don't know exactly whether each z is zero or one, so we're not going to do the split in a hard way; rather, we're going to do a softer split. And this is what happens here." + }, + { + "3:29": "So we're going to adjust the count by the probability that we believe this word has been generated using theta sub d." + }, + { + "3:39": "And you can see, where does this come from? Well, this has come from here, right? From the E-step. So the EM Algorithm iteratively improves our initial estimate of parameters by using the E-step first and then the M-step. The E-step is to augment the data with additional information, like z. And the M-step is to take advantage of the additional information to separate the data, to split the data counts, and then collect the right counts to re-estimate our parameters. And then once we have a new generation of parameters, we're going to repeat this. We do the E-step again to improve our estimate of the hidden variables, and that leads to another generation of re-estimated parameters" + }, + { + "4:34": "for the word distribution that we are interested in." + }, + { + "4:39": "Okay, so, as I said, the bridge between the two is really the hidden variable z, which indicates how likely this word is from the topic word distribution, theta sub d." + }, + { + "4:56": "So, this slide has a lot of content and you may need to pause the video to digest it. But this basically captures the essence of the EM Algorithm. We start with initial values that are often just random, and then we invoke the E-step followed by the M-step to get an improved setting of parameters. And then we repeat this; so this is a hill-climbing algorithm that would gradually improve the estimate of parameters.
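The E-step/M-step loop just described can be sketched end to end. The word counts and background probabilities below are illustrative stand-ins for the slide's numbers; as in the lecture, the background distribution and the 0.5/0.5 choice probabilities stay fixed and only theta_d is re-estimated:

```python
# Minimal EM loop for the two-component mixture with a fixed background.
from math import log

counts = {"the": 4, "paper": 2, "text": 4, "mining": 2}
p_B = {"the": 0.5, "paper": 0.3, "text": 0.1, "mining": 0.1}  # fixed background
lam = 0.5                                                     # P(theta_d)

V = list(counts)
p_d = {w: 1 / len(V) for w in V}   # uniform initialization

def log_lik():
    return sum(c * log(lam * p_d[w] + (1 - lam) * p_B[w])
               for w, c in counts.items())

history = [log_lik()]
for _ in range(20):
    # E-step: probability each word came from theta_d (z = 0)
    z0 = {w: lam * p_d[w] / (lam * p_d[w] + (1 - lam) * p_B[w]) for w in V}
    # M-step: re-estimate theta_d from the probabilistically split counts
    alloc = {w: counts[w] * z0[w] for w in V}
    total = sum(alloc.values())
    p_d = {w: alloc[w] / total for w in V}
    history.append(log_lik())
# the log-likelihood never decreases across iterations, and "text" ends
# up with more probability than "the", which the background explains
```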
As I will explain later, there is some guarantee of reaching a local maximum of the log-likelihood function. So let's take a look at the computation for a specific case. These formulas are the EM formulas that you saw before, and you can also see there are superscripts, like n here, to indicate the generation of parameters. Here, for example, we have n plus one; that means from here to here we have an improvement. So in this setting we have assumed the two models have equal probabilities, and the background model is known. So what are the relevant statistics? Well, these are the word counts. So assume we have just four words, and their counts are like this. And this is our background model that assigns high probabilities to common words like \"the\"." + }, + { + "6:25": "And in the first iteration, you can picture what will happen. Well, first we initialize all the values. So here, this probability that we're interested in is initialized to a uniform distribution over all the words." + }, + { + "6:40": "And then the E-step would give us a guess of the distribution that has been used to generate each word. We can see we have different probabilities for different words. Why? Well, that's because these words have different probabilities in the background. So even though the two distributions are equally likely and our initialization is a uniform distribution, because of the differences in the background distribution, we have different guessed probabilities. So these words are believed to be more likely from the topic." + }, + { + "7:15": "These, on the other hand, are less likely; they are probably from the background." + }, + { + "7:20": "So once we have these z values, we know that in the M-step these probabilities will be used to adjust the counts. So four must be multiplied by this 0.33 in order to get the allocated counts toward the topic." + }, + { + "7:39": "And this is done by this multiplication.
Note that if our guess says this is 100 percent, that is, if this is 1.0," + }, + { + "7:52": "then we just get the full count of this word for this topic. In general it's not going to be 1.0, so we're just going to get some percentage of the counts toward this topic. Then we simply normalize these counts to have a new generation of parameter estimates. So you can see, compare this with the old one, which is here." + }, + { + "8:18": "So compare this with this one and we'll see the probability is different. Not only that, we also see that some words that are believed to have come from the topic will have a higher probability, like this one, \"text\"." + }, + { + "8:32": "And of course, this new generation of parameters would allow us to further adjust the inferred latent variable or hidden variable values. So we have a new generation of z values from the E-step, based on the new generation of parameters. And these new inferred values of z will then give us another generation of estimates of the word probabilities, and so on and so forth. So this is what would actually happen when we compute these probabilities using the EM Algorithm. As you can see in the last row, where we show the log-likelihood, the likelihood is increasing as we iterate. And note that the log-likelihood is negative because the probabilities are between 0 and 1; when you take the logarithm, it becomes a negative value. Now, what's also interesting is the last column. These are the inferred word splits: the probabilities that a word is believed to have come from one distribution, in this case the topical distribution. And you might wonder whether this would also be useful, because our main goal is to estimate these word distributions. So this is our primary goal: we hope to have a more discriminative word distribution. But the last column, a by-product, can actually also be very useful. You can think about that.
We can use it, for example, to estimate to what extent this document has covered background words. When we add these up or take the average, we will roughly know to what extent it has covered background words versus content words that are not explained well by the background. [MUSIC]" + } + ] + }, + { + "3-6-probabilistic-topic-models-expectation-maximization-algorithm-part-3": [ + { + "0:07": "So, I just showed you that empirically the likelihood will converge, but it can also be proved theoretically that the EM algorithm will converge to a local maximum. So here's just an illustration of what happens; a detailed explanation would require knowledge of some inequalities that we haven't really covered yet." + }, + { + "0:39": "So here, what you see on the x dimension is the theta value; this is the parameter that we have. On the y axis we see the likelihood function. So this curve is the original likelihood function, and this is the one that we hope to maximize. We hope to find the theta value, at this point, that maximizes it. But in the case of a mixture model we cannot easily find an analytic solution to the problem, so we have to resort to numerical algorithms, and the EM algorithm is such an algorithm. It's a hill-climbing algorithm. That means you start with some random guess. Let's say you start from here; that's your starting point. And then you try to improve this by moving to another point where you have a higher likelihood. So that's the idea of hill climbing. In the EM algorithm, the way we achieve this is to do two things. First, we fit a lower bound of the likelihood function. So this is the lower bound, see here." + }, + { + "1:51": "And once we fit the lower bound, we can then maximize the lower bound. And of course, the reason why this works is that the lower bound is much easier to optimize. So we know our current guess is here, and by maximizing the lower bound, we'll move this point to the top, to here."
+ }, + { + "2:13": "Right? And when we then map back to the original likelihood function, we find this point. Because it's a lower bound, we are guaranteed to improve this guess, right? Because we improve our lower bound, and the original likelihood curve, which is above this lower bound, will definitely be improved as well." + }, + { + "2:36": "So we already know we're improving the lower bound, so we definitely improve the original likelihood function, which is above this lower bound. In our example, the current guess is the parameter value given by the current generation, and the next guess is the re-estimated parameter values. From this illustration you can see the next guess is always better than the current guess, unless it has reached the maximum, where it will be stuck; there the two would be equal. So, the E-step is basically to compute this lower bound. We don't directly compute the likelihood function; instead we compute the latent variable values, and these basically determine the lower bound. The M-step, on the other hand, is to maximize the lower bound. It allows us to move the parameters to a new point. And that's why the EM algorithm is guaranteed to converge to a local maximum." + }, + { + "3:42": "Now, as you can imagine, when we have many local maxima, we also have to repeat the EM algorithm multiple times in order to figure out which one is the actual global maximum. And this in general is a difficult problem in numerical optimization. So here, for example, had we started from here, then we would gradually just climb up to this top. That's not optimal; we'd like to climb all the way up to here, and the only way to climb up to this peak is to start from somewhere here or here. So, with the EM algorithm, we generally would have to start from different points, or have some other way to determine a good initial starting point."
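The dependence on the starting point can be illustrated with a toy hill climb on a made-up one-dimensional function with two peaks (a stand-in for a multimodal likelihood surface, not the EM lower bound itself):

```python
# Greedy hill climbing: different starting points reach different maxima.
from math import exp

def f(x):
    # two bumps: a lower local maximum near 0.25, the global one near 0.75
    return 0.5 * exp(-((x - 0.25) / 0.1) ** 2) + exp(-((x - 0.75) / 0.1) ** 2)

def hill_climb(x, step=1e-3):
    # move to the better neighbor until neither neighbor improves
    while True:
        nxt = max(x - step, x + step, key=f)
        if f(nxt) <= f(x):
            return x
        x = nxt

local_peak = hill_climb(0.20)    # gets stuck on the smaller hill
global_peak = hill_climb(0.70)   # climbs the higher hill
```

Restarting from several initial points and keeping the best result is the usual workaround the lecture alludes to.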
+ }, + { + "4:29": "To summarize, in this lecture we introduced the EM algorithm. This is a general algorithm for computing the maximum likelihood estimate of all kinds of models, so not just for our simple model. And it's a hill-climbing algorithm, so it can only converge to a local maximum, and the result will depend on the initial point." + }, + { + "4:49": "The general idea is that we will have two steps to improve the estimate of the parameters. In the E-step we augment the data by predicting values of useful hidden variables that we would use to simplify the estimation. In our case, this is the distribution that has been used to generate the word. In the M-step we then exploit such augmented data, which makes it easier to estimate the distribution, to improve the estimate of the parameters. Here improvement is guaranteed in terms of the likelihood function. Note that it's not necessary that we will have a stable convergence of parameter values even though the likelihood function is ensured to increase. There are some properties that have to be satisfied in order for the parameters also to converge to some stable values." + }, + { + "5:47": "Now here data augmentation is done probabilistically. That means we're not going to say exactly what the value of a hidden variable is. Instead, we're going to have a probability distribution over the possible values of these hidden variables. So this causes a split of counts of events probabilistically." + }, + { + "6:07": "And in our case we'll split the word counts between the two distributions. [MUSIC]" + } + ] + }, + { + "3-7-probabilistic-latent-semantic-analysis-plsa-part-1": [ + { + "0:00": "[SOUND] This lecture is about Probabilistic Latent Semantic Analysis, or PLSA." + }, + { + "0:12": "In this lecture we're going to introduce probabilistic latent semantic analysis, often called PLSA. This is the most basic topic model, and also one of the most useful topic models.
Now this kind of model can in general be used to mine multiple topics from text documents, and PLSA is one of the most basic topic models for doing this. So let's first examine this problem in a little more detail. Here I show a sample article, which is a blog article about Hurricane Katrina." + }, + { + "0:48": "And I show some sample topics. For example, government response, flooding of the city of New Orleans, donation, and the background." + }, + { + "0:59": "You can see in the article we use words from all these distributions." + }, + { + "1:05": "So we first, for example, see there's a criticism of the government response, and this is followed by discussion of flooding of the city and donation, et cetera. We also see background words mixed with them." + }, + { + "1:18": "So the overall task of topic analysis here is to try to decode these topics behind the text: to segment the topics, to figure out which words are from which distribution, and to figure out, first, what are these topics? How do we know there's a topic about government response, and a topic about flooding of the city? So these are the tasks of the topic model." + }, + { + "1:42": "If we have discovered these topics, we can color these words, as you see here, to separate the different topics. Then you can do a lot of things, such as summarization, or segmentation of the topics, or clustering of the sentences, etc. So the formal definition of the problem of mining multiple topics from text is shown here. And this is a slide that you have seen in an earlier lecture. So the input is a collection, the number of topics, and a vocabulary set, and of course the text data." + }, + { + "2:16": "And then the output is of two kinds. One is the topic characterization, the theta i's. Each theta i is a word distribution. And second, the topic coverage for each document. These are the pi sub i j's, and they tell us which document covers which topic to what extent. So we hope to generate these as output.
Because there are many useful applications if we can do that." + }, + { + "2:42": "So the idea of PLSA is actually very similar to the two-component mixture model that we have already introduced. The only difference is that we are going to have more than two topics. Otherwise, it is essentially the same. So here I illustrate how we can generate text that has multiple topics, and naturally, as in all cases of probabilistic modeling, we want to figure out the likelihood function. So we would also ask the question, what's the probability of observing a word from such a mixture model? Now if you look at this picture and compare it with the picture that we have seen earlier, you will see the only difference is that we have added more topics here." + }, + { + "3:26": "So, before, we had just one topic besides the background topic. But now we have more topics. Specifically, we have k topics now. All these are topics that we assume exist in the text data. So the consequence is that our switch for choosing a topic is now a multiway switch. Before, it was just a two-way switch; we could think of it as flipping a coin. But now we have multiple ways. First we can flip a coin to decide whether to talk about the background. So it's the background, lambda sub B, versus non-background; 1 minus lambda sub B gives us the probability of actually choosing a non-background topic. After we have made this decision, we have to make another decision to choose one of these k distributions. So there is a k-way switch here. And this is characterized by the pi's, and these sum to one." + }, + { + "4:31": "This is the only difference in the design, which is a little bit more complicated. But once we decide which distribution to use, the rest is the same: we are going to just generate a word by using one of these distributions, as shown here." + }, + { + "4:46": "So now let's look at the question about the likelihood. So what's the probability of observing a word from such a distribution?
What do you think? Now, we've seen this problem many times, and as you may recall, it's generally a sum over all the different possibilities of generating a word. So let's first look at how the word can be generated from the background model. Well, the probability that the word is generated from the background model is lambda sub B multiplied by the probability of the word given the background model. Two things must happen: first, we have to have chosen the background model, and that's the probability lambda sub B. Second, we must have actually obtained the word w from the background, and that's the probability of w given theta sub B." + }, + { + "5:40": "Okay, so similarly, we can figure out the probability of observing the word from another topic, like the topic theta sub k. Now notice that here there's a product of three terms, and that's because the choice of topic theta sub k only happens if two things happen. One is that we decide not to talk about the background; that's a probability of 1 minus lambda sub B. Second, we also have to actually choose theta sub k among these k topics; that's the probability of theta sub k, or pi sub k." + }, + { + "6:17": "And similarly, the probabilities of generating a word from the second topic and the first topic are as you see here. And so in the end the probability of observing the word is just a sum of all these cases. And I have to stress again, this is a very important formula to know, because it is really key to understanding all the topic models, and indeed a lot of mixture models. So make sure that you really understand that the probability" + }, + { + "6:49": "of w is indeed the sum of these terms." + }, + { + "6:56": "So, next, once we have the likelihood function, we would be interested in estimating the parameters. But first, let's put all these together to have the complete likelihood function for PLSA.
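The sum over all ways of generating a word can be written out directly. A tiny sketch of the key formula, p(w) = lambda_B * p(w|theta_B) + (1 - lambda_B) * sum_j pi_j * p(w|theta_j); all the numbers below are made up for illustration, not from the slides.

```python
lambda_B = 0.5                 # probability of choosing the background
pi = [0.4, 0.6]                # coverage of the k = 2 topics (sums to one)
p_w_background = 0.10          # p(w | theta_B)
p_w_topic = [0.05, 0.002]      # p(w | theta_1), p(w | theta_2)

# sum of all the cases: background, plus each of the k topics
p_w = lambda_B * p_w_background + (1 - lambda_B) * sum(
    pi_j * p_wj for pi_j, p_wj in zip(pi, p_w_topic))
# p_w = 0.5*0.10 + 0.5*(0.4*0.05 + 0.6*0.002) = 0.0606
```

Each term is the probability of one complete way of producing w: choose the switch setting, then emit w from the chosen distribution.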
The first line shows the probability of a word, as illustrated on the previous slide. And this is an important formula, as I said." + }, + { + "7:22": "So let's take a closer look at this. This actually contains all the important parameters. So first of all we see lambda sub B here. This represents the percentage of background words" + }, + { + "7:32": "that we believe exist in the text data. And this can be a known value that we set empirically." + }, + { + "7:41": "Second, we see the background language model, and typically we also assume this is known. We can use a large collection of text, or use all the text that we have available, to estimate this word distribution." + }, + { + "7:52": "Next in this formula you see two interesting kinds of parameters, and those are the most important parameters that we are interested in. One is the pi's, and these are the coverage of each topic in the document." + }, + { + "8:11": "And the other is the word distributions that characterize all the topics." + }, + { + "8:18": "So the next line is simply to plug this in to calculate the probability of a document. This is, again, of the familiar form where you have a sum over words, a count of a word in the document, and then the log of a probability. Now it's a little bit more complicated than the two-component case, because now we have more components, so the sum involves more terms. And then this line is just the likelihood for the whole collection. It's very similar, just accounting for more documents in the collection." + }, + { + "8:52": "So what are the unknown parameters? I already said that there are two kinds. One is the coverage, one is the word distributions. Again, it's a useful exercise for you to think about exactly how many parameters there are here." + }, + { + "9:05": "How many unknown parameters are there? Trying to answer that question will help you understand the model in more detail.
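As a rough way to work through that exercise, one can count the free parameters under the usual formulation, assuming lambda_B and the background model are known: each of the k word distributions over an M-word vocabulary has M - 1 free values (it must sum to one), and each of the N documents has k - 1 free coverage values. The example sizes below are made up.

```python
def plsa_free_parameters(k, M, N):
    # k word distributions: k * (M - 1) free values
    # topic coverage for N documents: N * (k - 1) free values
    return k * (M - 1) + N * (k - 1)

# e.g. 10 topics, a 20,000-word vocabulary, 1,000 documents
n = plsa_free_parameters(k=10, M=20000, N=1000)   # 208,990 free parameters
```

Note that the document-coverage parameters grow with the collection size, which is one reason the model has so many parameters.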
And it will also allow you to understand what the output is that we generate when we use PLSA to analyze text data, because these are precisely the unknown parameters." + }, + { + "9:24": "So after we have obtained the likelihood function shown here, the next step is to worry about the parameter estimation." + }, + { + "9:32": "And we can do the usual thing, the maximum likelihood estimator. So again, it's a constrained optimization problem, like what we have seen before, only now we have a collection of text and we have more parameters to estimate. And we still have two kinds of constraints. One is on the word distributions." + }, + { + "9:51": "All the words must have probabilities that sum to one for each distribution. The other is on the topic coverage distribution: a document has to cover precisely these k topics, so the probabilities of covering each topic have to sum to one. So at this point it's basically a well-defined applied math problem; you just need to figure out the solution to the optimization problem. There's a function with many variables, and we need to figure out the values of these variables that make the function reach its maximum. >> [MUSIC]" + } + ] + }, + { + "3-8-probabilistic-latent-semantic-analysis-plsa-part-2": [ + { + "0:00": "[SOUND]" + }, + { + "0:08": "We can compute this maximum likelihood estimate by using the EM algorithm. So in the E-step, we now have to introduce more hidden variables because we have more topics. Our hidden variable z, which is a topic indicator, can now take more than two values. Specifically, it will take k plus one values, with B denoting the background, and one through k denoting the other k topics." + }, + { + "0:36": "So now the E-step, as you can recall, is to augment the data by predicting the values of the hidden variable. So we're going to predict, for a word, whether the word has come from one of these k plus one distributions.
This equation allows us to predict the probability that the word w in document d is generated from topic theta sub j." + }, + { + "1:03": "And the bottom one is the predicted probability that this word has been generated from the background. Note that we use document d here to index the word. Why? Because whether a word is from a particular topic actually depends on the document. Can you see why? Well, it's through the pi's. The pi's are tied to each document. Each document can have potentially different pi's, right? The pi's will then affect our prediction. So, the pi's are here, and this depends on the document." + }, + { + "1:38": "And that might give a different guess for a word in different documents, and that's desirable." + }, + { + "1:46": "In both cases we are using Bayes' rule, as I explained, basically assessing the likelihood of generating the word from each of these distributions and then normalizing." + }, + { + "1:57": "What about the M-step? Well, you may recall that in the M-step we take advantage of the inferred z values to split the counts, and then collect the right counts to re-estimate the parameters. So in this case, we can re-estimate our coverage probability, and this is re-estimated based on collecting all the words in the document." + }, + { + "2:22": "And that's why we have the count of the word in the document, and sum over all the words. And then we're going to look at to what extent this word belongs to" + }, + { + "2:34": "the topic theta sub j. And this part is our guess from the E-step." + }, + { + "2:40": "This tells us how likely it is that this word is actually from theta sub j. And when we multiply them together, we get the discounted count that's allocated to topic theta sub j. And when we normalize this over all the topics, we get the distribution over all the topics, which indicates the coverage. And similarly, the bottom one is the estimated probability of a word for a topic.
And in this case we are using exactly the same count. You can see this is the same discounted count; it tells us to what extent we should allocate this word to this topic, but the normalization is different. Because in this case we are interested in the word distribution, we simply normalize this over all the words. This is different; in contrast, here we normalize over all the topics. It would be useful to make a comparison between the two." + }, + { + "3:37": "This gives us different distributions, and these tell us how to improve the parameters." + }, + { + "3:48": "And as I just explained, in both formulas we have a maximum likelihood estimate based on the allocated word counts. Now this is actually a general phenomenon in all EM algorithms. In the M-step, you generally compute the expected count of an event based on the E-step result, and then you normalize it. So, in terms of the computation of this EM algorithm, we can actually just keep counting various events and then normalize the counts. And when we think in this way, we also have a more concise way of presenting the EM algorithm, which actually helps us better understand the formulas. So I'm going to go over this in some detail. As an algorithm, we first initialize all the unknown parameters randomly. So, in our case, we are interested in all of those coverage parameters, the pi's, and the word distributions, the thetas, and we just initialize them randomly. This is the initialization step, and then we will repeat until the likelihood converges. Now how do we know whether the likelihood converges? We can compute the likelihood at each step and compare the current likelihood with the previous likelihood. If it doesn't change much, we're going to say it has stopped, right?" + }, + { + "5:19": "So, in each step we're going to do an E-step and an M-step. In the E-step we're going to augment the data by predicting the hidden variables.
In this case, the hidden variable z sub d,w indicates whether the word w in d is from a topic or the background, and if it's from a topic, which topic. So if you look at the E-step formulas, essentially we're normalizing these probabilities of observing the word from each distribution. So you can see, basically the prediction that a word is from topic theta sub j is based on the probability of selecting theta sub j as the word distribution to generate the word, multiplied by the probability of observing the word from that distribution." + }, + { + "6:17": "And I said it's proportional to this because in the implementation of the EM algorithm you can keep a counter for this quantity, and in the end just normalize it. So the normalization here is over all the topics, and then you would get a probability." + }, + { + "6:36": "Now, in the M-step, we do the same, and we are going to collect these" + }, + { + "6:43": "allocated counts for each topic." + }, + { + "6:47": "And we split the words among the topics." + }, + { + "6:50": "And then we're going to normalize them in different ways to obtain the re-estimate. For example, we can normalize among all the topics to get the re-estimate of pi, the coverage. Or we can normalize over all the words, and that would give us a word distribution." + }, + { + "7:10": "So it's useful to think of the algorithm in this way, because when it's implemented, you can just use variables to keep track of these quantities in each case." + }, + { + "7:23": "And then you just normalize these variables to make them distributions." + }, + { + "7:32": "Now, I did not put in the constraint for this one; I intentionally leave this as an exercise for you. Can you see what the normalizer is for this one? It's of a slightly different form, but it's essentially the same as the one that you have seen here.
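The counts-then-normalize view just described can be sketched as a compact EM loop. This is an illustrative toy implementation, not the lecture's code: the background component is omitted for brevity, and the documents and vocabulary below are made up. In each iteration it keeps counters of the allocated (split) word counts, then normalizes them across words for the thetas and across topics for the pi's.

```python
import random

def plsa_em(docs, vocab, k, iters=30, seed=0):
    rng = random.Random(seed)
    norm = lambda xs: [x / sum(xs) for x in xs]
    # random initialization of word distributions and per-document coverage
    theta = [norm([rng.random() + 1e-3 for _ in vocab]) for _ in range(k)]
    pi = [norm([rng.random() + 1e-3 for _ in range(k)]) for _ in docs]
    for _ in range(iters):
        # counters for the allocated counts
        theta_cnt = [[0.0] * len(vocab) for _ in range(k)]
        pi_cnt = [[0.0] * k for _ in docs]
        for d, doc in enumerate(docs):
            for w, c in doc.items():
                wi = vocab.index(w)
                # E-step: p(z = j | w, d) proportional to pi_dj * p(w|theta_j)
                p_z = norm([pi[d][j] * theta[j][wi] for j in range(k)])
                for j in range(k):
                    theta_cnt[j][wi] += c * p_z[j]   # split the count c
                    pi_cnt[d][j] += c * p_z[j]
        # M-step: normalize the same counters in two different ways
        theta = [norm(row) for row in theta_cnt]   # across words
        pi = [norm(row) for row in pi_cnt]         # across topics
    return theta, pi

docs = [{"battery": 5, "life": 3}, {"screen": 6, "battery": 1}]  # toy counts
vocab = ["battery", "life", "screen"]
theta, pi = plsa_em(docs, vocab, k=2)
```

Note how a single counter per (topic, word) feeds both re-estimates; only the normalizer differs, exactly the contrast the lecture points out.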
So in general, in implementations of EM algorithms, you will see that you accumulate various counts and then normalize them." + }, + { + "8:01": "So to summarize, we introduced the PLSA model, which is a mixture model with k unigram language models representing k topics." + }, + { + "8:11": "And we also added a pre-determined background language model to help discover discriminative topics, because this background language model can help attract the common terms." + }, + { + "8:23": "And we use the maximum likelihood estimator so that we can discover topical knowledge from text data. In this case PLSA allows us to discover two things: one is the k word distributions, each one representing a topic, and the other is the proportion of each topic in each document." + }, + { + "8:41": "And such a detailed characterization of the coverage of topics in documents can enable a lot of further analysis. For example, we can aggregate the documents in a particular time period to assess the coverage of a particular topic in that period. That would allow us to generate the temporal trends of topics. We can also aggregate topics covered in documents associated with a particular author, and then we can characterize the topics written by this author, etc. In addition, we can also cluster terms and cluster documents. In fact, each topic can be regarded as a cluster, so we already have the term clusters. The high-probability words can be regarded as" + }, + { + "9:29": "belonging to one cluster represented by the topic. Similarly, documents can be clustered in the same way. We can assign a document to the topic cluster that's covered most in the document. So remember, the pi's indicate to what extent each topic is covered in the document; we can assign the document to the topical cluster that has the highest pi." + }, + { + "9:57": "And in general there are many useful applications of this technique."
+ }, + { + "10:03": "[MUSIC]" + } + ] + }, + { + "3-9-latent-dirichlet-allocation-lda-part-1": [ + { + "0:07": "This lecture is about Latent Dirichlet Allocation, or LDA. In this lecture, we are going to continue talking about topic models. In particular, we are going to talk about some extensions of PLSA, and one of them is LDA, or Latent Dirichlet Allocation. So the plan for this lecture is to cover two things. One is to extend PLSA with prior knowledge, and that would allow us to have, in some sense, a user-controlled PLSA, so it doesn't just listen to the data, but also listens to our needs. The second is to extend PLSA to a fully generative model. This has led to the development of Latent Dirichlet Allocation, or LDA. So first, let's talk about PLSA with prior knowledge. Now in practice, when we apply PLSA to analyze text data, we might have additional knowledge that we want to inject to guide the analysis. The standard PLSA is going to blindly listen to the data by using the maximum likelihood estimator. We are going to just fit the data as much as we can and get some insight about the data. This is also very useful, but sometimes a user might have some expectations about which topics to analyze. For example, we might expect to see retrieval models as a topic in information retrieval, or we may also be interested in certain aspects, such as battery and memory, when looking at opinions about a laptop, because the user is particularly interested in these aspects. A user may also have knowledge about topic coverage, and we may know which topic is definitely covering, or not covering, which document. For example, we might have seen topic tags assigned to documents, and those tags could be treated as topics. If we do that, then a document would be generated using topics corresponding to the tags already assigned to the document.
If the document is not assigned a tag, we're going to say there is no way to use that topic to generate the document; the document must be generated by using the topics corresponding to the assigned tags. So the question is, how can we incorporate such knowledge into PLSA? It turns out that there is a very elegant way of doing that, and that is to incorporate such knowledge as priors on the model parameters. As you may recall, in Bayesian inference we use a prior together with the data to estimate parameters, and this is precisely what would happen. So in this case, we can use the maximum a posteriori estimate, also called the MAP estimate, and the formula is given here. Basically, this is to maximize the posterior probability, and this is a combination of the likelihood of the data and the prior. So what would happen is that we are going to have an estimate that listens to the data and also listens to our prior preferences. We can use this prior, which is denoted as p of lambda, to encode all kinds of preferences and constraints. For example, we can use this to encode the need of having a precise background topic. This could be encoded as a prior because we can say the prior for the parameters is non-zero only if the parameters contain one topic that is equivalent to the background language model. In other words, if it is not like that, the prior says it is impossible; the probability of that kind of model would be zero according to our prior. We can also, for example, use the prior to force a particular choice of topic to have a probability of a certain value. For example, we can force document D to choose topic one with a probability of one half, or we can prevent a topic from being used in generating a document. So we can say the third topic should not be used in generating document D, and we will set the pi to zero for that topic.
We can also use the prior to favor a set of parameters with topics that assign high probability to some particular words. In this case, we are not going to say it is impossible, but we can just strongly favor certain kinds of distributions, and you will see an example later. The MAP estimate can be computed using a similar EM algorithm to the one we have used for the maximum likelihood estimate, with just some modifications so that the estimated parameters reflect the prior preferences. In such an estimate, if we use a special form of the prior called a conjugate prior, then the functional form of the prior will be similar to that of the data likelihood. As a result, we can combine the two, and the consequence is that you can basically convert the influence of the prior into the influence of having additional pseudo data, because the two functional forms are the same and they can be combined. So the effect is as if we had more data, and this is convenient for computation. It does not mean a conjugate prior is the best way to define a prior. So now let us look at a specific example. Suppose the user is particularly interested in the battery life of a laptop, and we are analyzing reviews. So the prior says that the model should contain one distribution that assigns high probability to battery and life. So we could say, well, there is a distribution that is kind of concentrated on battery life, and the prior says that one of the distributions should be very similar to this. Now if we use the MAP estimate with a conjugate prior, which here is a Dirichlet distribution defined based on this preference, then the only difference in the EM algorithm is that when we re-estimate the word distributions, we are going to add additional counts to reflect our prior. So here you can see the pseudo counts are defined based on the probability of words in the prior. So battery obviously would have high pseudo counts, and similarly life would also have high pseudo counts.
All the other words would have zero pseudo counts because their probability is zero in the prior. We see this is also controlled by a parameter mu, and we are going to add mu multiplied by the probability of w given the prior distribution to the collected counts when we re-estimate this word distribution. So this is the only step that is changed, and the change is happening here. Before, we just collected the counts of words that we believe have been generated from this topic, but now we force this distribution to give more probability to these words by adding the pseudo counts. So in effect we artificially inflate their probabilities. To make this a distribution, we also need to add the total of the pseudo counts to the denominator; this is the sum of all the pseudo counts we have added for all the words, which makes this again a valid distribution. Now this is an intuitively very reasonable way of modifying EM, and theoretically speaking, this works and it computes the MAP estimate. It is useful to think about two specific extreme cases of mu. Think about what would happen if we set mu to zero. Well, that essentially removes the prior. So mu in some sense indicates the strength of the prior. Now what would happen if we set mu to positive infinity? Well, that is to say the prior is so strong that we are not going to listen to the data at all. So in the end, you see, in this case we are going to fix one of the distributions to the prior. Do you see why? When mu is infinite, we basically let this term dominate; in fact we are going to set this one to precisely the prior distribution. And that is why we said the background language model is in fact a way of imposing a prior, because it forces one distribution to be exactly the same as what we give, that is, the background distribution. So in this case, we can even force the distribution to entirely focus on battery life.
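The modified M-step with pseudo counts can be sketched in a few lines. The prior, the allocated counts, and mu below are all made-up illustrative values; the point is only the shape of the update, (allocated count + mu * p(w|prior)) normalized by (total count + mu).

```python
# prior concentrated on "battery" and "life"; zero elsewhere
prior = {"battery": 0.5, "life": 0.5, "screen": 0.0}
alloc = {"battery": 2.0, "life": 1.0, "screen": 7.0}  # E-step allocated counts
mu = 10.0                                             # strength of the prior

# total pseudo counts added over all words is exactly mu,
# so the denominator grows by mu and theta remains a distribution
denom = sum(alloc.values()) + mu
theta = {w: (alloc[w] + mu * prior[w]) / denom for w in alloc}

# mu = 0 removes the prior entirely; as mu grows toward infinity,
# theta approaches the prior distribution itself
```

With these toy numbers, "battery" is lifted from 0.2 (pure maximum likelihood) to 0.35, while "screen", which the prior ignores, drops from 0.7 to 0.35.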
But of course this would not work well, because then it cannot attract other words, and that would affect the accuracy of modeling the topic about battery life. So in practice, mu is set somewhere in between, of course. So this is one way to impose a prior. We can also impose some other constraints. For example, we can set any parameter to a constant, including zero, as needed. For example, we may want to set one of the pi's to zero, and this would mean we do not allow that topic to participate in generating that document. And this is only reasonable, of course, when we have prior knowledge that strongly suggests it." + } + ] + }, + { + "3-10-latent-dirichlet-allocation-lda-part-2": [ + { + "0:00": "[SOUND] So now let's talk about the extension of PLSA to LDA, and to motivate that, we need to talk about some deficiencies of PLSA. First, it's not really a generative model, because we can't compute the probability of a new document. You can see why: the pi's are needed to generate the document, but the pi's are tied to the documents that we have in the training data. So we can't compute the pi's for a future document." + }, + { + "0:34": "There are some heuristic workarounds, though. Secondly, it has many parameters. I've asked you to compute how many parameters exactly there are in PLSA, and you will see there are many parameters. That means the model is very complex, and this also means that there are many local maxima and it's prone to overfitting. It's very hard to find a good local maximum" + }, + { + "1:02": "that is close to the global maximum. And in terms of explaining future data, we might find that the model overfits the training data because of its complexity. The model is so flexible that it fits precisely what the training data looks like, and then it doesn't generalize well to other data."
+ }, + { + "1:23": "This, however, is not necessarily a problem for text mining, because here we're often only interested in fitting the training documents that we have. We are not always interested in modeling future data. But in other cases, when we do care about generality, we would worry about this overfitting." + }, + { + "1:42": "So LDA proposes to improve that, basically to make PLSA a generative model by imposing a Dirichlet prior on the model parameters. Dirichlet is just a special distribution that we can use to specify a prior. So in this sense, LDA is just a Bayesian version of PLSA, and the parameters are now much more regularized. You will see there are many fewer parameters, and you can achieve the same goal as PLSA for text mining: it can compute the topic coverage and topic word distributions as in PLSA. However, note that the parameters in LDA are much fewer than in PLSA, and in order to compute the topic coverage and word distributions, we again face a problem of inference of these variables, because they are no longer parameters of the model. So the inference part again faces the local maximum problem. So essentially they are doing something very similar, but theoretically, LDA is a more elegant way of looking at the topic modeling problem. So let's see how we can generalize the standard PLSA to LDA. Now, a full treatment of LDA is beyond the scope of this course, and we just don't have time to go in depth on that. But here, I just want to give you a brief idea about what's extended and what it enables, all right? So this is the picture of LDA. Now, I removed the background model just for simplicity." + }, + { + "3:15": "Now, in this model, all these parameters are free to change, and we do not impose any prior. So these word distributions are now represented as theta vectors. So these are word distributions, here. And the other set of parameters are the pi's.
And we represent them as vectors also, as this is more convenient for introducing LDA. We have one pi vector for each document, and in the case of theta, we have one vector for each topic." + }, + { + "3:50": "Now, the difference between LDA and PLSA is that in LDA, we're not going to allow them to change freely. Instead, we're going to force them to be drawn from another distribution." + }, + { + "4:03": "More specifically, they will be drawn from two Dirichlet distributions, respectively. A Dirichlet distribution is a distribution over vectors, so it gives us a probability for a particular choice of a vector. Take, for example, the pi's. This Dirichlet distribution tells us which vector of pi's is more likely, and this distribution itself is controlled by another vector of parameters, the alphas." + }, + { + "4:31": "Depending on the alphas, we can characterize the distribution in different ways, forcing certain choices of pi's to be more likely than others. For example, you might favor the choice of a relatively uniform distribution over all the topics, or you might favor generating a skewed coverage of topics, and this is controlled by alpha. And similarly here, the topic word distributions are drawn from another Dirichlet distribution with beta parameters. And note that here, alpha has k parameters, corresponding to the k values of the pi's for a document, whereas beta has M parameters, corresponding to the M words in our vocabulary." + }, + { + "5:17": "Now once we impose these priors, the generation process is different. We start by drawing the pi's from the Dirichlet distribution, and these pi's will tell us the topic choice probabilities." + }, + { + "5:35": "And then, we're going to use the pi's to further choose which topic to use, and this is of course very similar to the PLSA model." + }, + { + "5:47": "And similarly here, we're not going to have these distributions free.
Instead, we're going to draw one from the Dirichlet distribution. And then from this distribution, we're going to further sample a word. And the rest is very similar to PLSA. The likelihood function now is more complicated for LDA. But there's a close connection between the likelihood function of LDA and that of PLSA. So I'm going to illustrate the difference here. So on the top, you see the PLSA likelihood function that you have already seen before. It's copied from a previous slide, only that I dropped the background model for simplicity." + }, + { + "6:27": "So in the LDA formulas you see very similar things. You see the first equation is essentially the same. And this is the probability of generating a word from multiple word distributions." + }, + { + "6:40": "And this formula is a sum over all the possibilities of generating a word. Inside the sum is a product of the probability of choosing a topic multiplied by the probability of observing the word from that topic." + }, + { + "6:55": "So this is a very important formula, as I've stressed multiple times. And this is actually the core assumption in all the topic models. And you might see other topic models that are extensions of LDA or PLSA. And they all rely on this. So it's very important to understand this. And this gives us the probability of getting a word from a mixture model. Now, next, in the probability of a document, we see there is a PLSA component in the LDA formula, but the LDA formula will add an integral here. And that's to account for the fact that the pis are not fixed. So they are drawn from the Dirichlet distribution, and that's shown here. That's why we have to take an integral, to consider all the possible pis that we could possibly draw from this Dirichlet distribution. And similarly in the likelihood for the whole collection, we also see further components added, another integral here." + }, + { + "7:58": "Right?
So basically in LDA we're just adding these integrals to account for the uncertainties, and we added, of course, the Dirichlet distributions to govern the choices of these parameters, pis and thetas." + }, + { + "8:12": "So this is the likelihood function for LDA. Now, next, let's talk about parameter estimation and inference. Now the parameters can be estimated using exactly the same approach, the maximum likelihood estimate, for LDA. Now you might think about how many parameters there are in LDA versus PLSA. You'll see there are fewer parameters in LDA, because in this case the only parameters are the alphas and the betas. So we can use the maximum likelihood estimator to compute that. Of course, it's more complicated, because the form of the likelihood function is more complicated. But what's also important to notice is that now the variables that we are interested in, namely the topics and the coverage, are no longer parameters in LDA. In this case we have to use Bayesian inference, or posterior inference, to compute them based on the parameters alpha and beta. Unfortunately, this computation is intractable. So we generally have to resort to approximate inference." + }, + { + "9:18": "And there are many methods available for that and I'm sure you will see them when you use different tool kits for LDA, or when you read papers about" + }, + { + "9:30": "these different extensions of LDA. Now here we, of course, can't give an in-depth introduction to that, but just know that they are computed by inference using the parameters alphas and betas. But the math [INAUDIBLE], actually, in the end, is very similar to PLSA. And, especially when we use an algorithm called collapsed Gibbs sampling, the algorithm looks very similar to the EM algorithm. So in the end, they are doing something very similar."
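The generative story just described, drawing the coverage vector pi from a Dirichlet and then repeatedly picking a topic and a word, can be sketched in a few lines. This is a minimal illustration, not the lecture's own code: the two toy topics, the vocabulary, and the symmetric alpha values are all made up for the example, and the Dirichlet draw uses the standard Gamma-normalization trick.

```python
import random

def sample_dirichlet(alphas, rng):
    """Draw one probability vector from a Dirichlet distribution by
    normalizing independent Gamma(alpha_i, 1) draws."""
    draws = [rng.gammavariate(a, 1.0) for a in alphas]
    total = sum(draws)
    return [d / total for d in draws]

def generate_document(alpha, topics, doc_length, rng):
    """LDA's generative story for one document:
    1) draw the topic coverage pi ~ Dirichlet(alpha), once per document;
    2) for each word position, pick a topic from pi, then sample a word
       from that topic's word distribution."""
    pi = sample_dirichlet(alpha, rng)
    words = []
    for _ in range(doc_length):
        topic = rng.choices(range(len(topics)), weights=pi)[0]
        vocab = list(topics[topic])
        probs = list(topics[topic].values())
        words.append(rng.choices(vocab, weights=probs)[0])
    return pi, words

rng = random.Random(0)
# Two toy topic word distributions over a 4-word vocabulary (word -> prob).
topics = [
    {"text": 0.5, "mining": 0.4, "the": 0.05, "of": 0.05},
    {"the": 0.45, "of": 0.45, "text": 0.05, "mining": 0.05},
]
alpha = [0.5, 0.5]  # k = 2 Dirichlet parameters, one per topic
pi, doc = generate_document(alpha, topics, doc_length=20, rng=rng)
print(sum(pi), len(doc))
```

A small alpha (below 1) makes skewed coverages more likely, so most sampled documents lean heavily on one topic; a large alpha favors near-uniform coverage.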
+ }, + { + "10:10": "So to summarize our discussion of properties of topic models, these models provide a general principle or way of mining and analyzing topics in text with many applications. The best basic task setup is to take test data as input and we're going to output the k topics. Each topic is characterized by word distribution. And we're going to also output proportions of these topics covered in each document." + }, + { + "10:38": "And PLSA is the basic topic model, and in fact the most basic of the topic model. And this is often adequate for most applications. That's why we spend a lot of time to explain PLSA in detail." + }, + { + "10:53": "Now LDA improves over PLSA by imposing priors. This has led to theoretically more appealing models. However, in practice, LDA and PLSA tend to give similar performance, so in practice PLSA and LDA would work equally well for most of the tasks." + }, + { + "11:12": "Now here are some suggested readings if you want to know more about the topic. First is a nice review of probabilistic topic models." + }, + { + "11:20": "The second has a discussion about how to automatically label a topic model. Now I've shown you some distributions and they intuitively suggest a topic. But what exactly is a topic? Can we use phrases to label the topic? To make it the more easy to understand and this paper is about the techniques for doing that. The third one is empirical comparison of LDA and the PLSA for various tasks. The conclusion is that they tend to perform similarly. [MUSIC]" + } + ] + } + ] + }, + { + "Week 4": [ + { + "4-1-text-clustering-motivation": [ + { + "0:00": "[SOUND] This lecture is the first one about the text clustering." + }, + { + "0:14": "In this lecture, we are going to talk about the text clustering." + }, + { + "0:18": "This is a very important technique for doing topic mining and analysis. In particular, in this lecture we're going to start with some basic questions about the clustering." 
+ }, + { + "0:31": "And that is, what is text clustering and why we are interested in text clustering." + }, + { + "0:38": "In the following lectures, we are going to talk about how to do text clustering. How to evaluate the clustering results?" + }, + { + "0:47": "So what is text clustering?" + }, + { + "0:49": "Well, clustering actually is a very general technique for data mining as you might have learned in some other courses." + }, + { + "0:56": "The idea is to discover natural structures in the data." + }, + { + "1:01": "In another words, we want to group similar objects together. In our case, these objects are of course, text objects. For example, they can be documents, terms, passages, sentences, or websites, and then I'll go group similar text objects together. So let's see an example, well, here you don't really see text objects, but I just used some shapes to denote objects that can be grouped together." + }, + { + "1:33": "Now if I ask you, what are some natural structures or natural groups where you, if you look at it and you might agree that we can group these objects based on chips, or their locations on this two dimensional space." + }, + { + "1:53": "So we got the three clusters in this case." + }, + { + "1:56": "And they may not be so much this agreement about these three clusters but it really depends on the perspective to look at the objects." + }, + { + "2:07": "Maybe some of you have also seen thing in a different way, so we might get different clusters. And you'll see another example about this ambiguity more clearly. But the main point of here is, the problem is actually not so well defined." + }, + { + "2:29": "And the problem lies in how to define similarity. And what do you mean by similar objects?" + }, + { + "2:38": "Now this problem has to be clearly defined in order to have a well defined clustering problem." + }, + { + "2:46": "And the problem is in general that any two objects can be similar depending on how you look at them. 
So for example, consider two words like car and horse." + }, + { + "3:00": "So are the two words similar? Well, it depends on how we look at them. If we look at the physical" + }, + { + "3:11": "properties of car and horse, they are very different. But if you look at them functionally, a car and a horse can both be transportation tools. So in that sense, they may be similar. So as we can see, it really depends on our perspective to look at the objects." + }, + { + "3:32": "And so, in order to make the clustering problem well defined, a user must define the perspective for assessing similarity." + }, + { + "3:44": "And we call this perspective the clustering bias." + }, + { + "3:49": "And when you define a clustering problem, it's important to specify" + }, + { + "3:55": "your perspective for similarity, or for defining the similarity that will be used to group similar objects. Because otherwise, the similarity is not well defined, and one can have different ways to group objects. So let's look at the example here. You are seeing some objects, or some shapes, that are very similar to what you have seen on the first slide, but if I ask you to group these objects, again, you might" + }, + { + "4:38": "feel there is more here than on the previous slide. For example, you might think, well, we can group by shapes, and that would give us a cluster that looks like this. However, you might also feel that, well, maybe the objects can be grouped based on their sizes. So that would give us a different way to cluster the data, if we look at the similarity in size." + }, + { + "5:12": "So as you can see clearly here, depending on the perspective, we'll get different clustering results. So that also clearly tells us that in order to evaluate the clustering result, we must use a perspective. Without a perspective, it's very hard to define what is the best clustering result." + }, + { + "5:36": "So there are many examples of text clustering setup."
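The shape-versus-size ambiguity above can be made concrete with a tiny example: the same objects partition differently depending on which attribute the clustering bias treats as "similar". The four objects and their attributes below are hypothetical, chosen only to illustrate the point.

```python
from collections import defaultdict

# Toy objects, each with two attributes we could base similarity on.
# (These objects and attribute values are illustrative, not from the lecture.)
objects = [
    {"name": "a", "shape": "circle", "size": "small"},
    {"name": "b", "shape": "circle", "size": "large"},
    {"name": "c", "shape": "square", "size": "small"},
    {"name": "d", "shape": "square", "size": "large"},
]

def cluster_by(objs, perspective):
    """Group objects that are 'similar' under the chosen perspective
    (the clustering bias): here, similar simply means equal attribute value."""
    clusters = defaultdict(list)
    for obj in objs:
        clusters[obj[perspective]].append(obj["name"])
    return {key: sorted(names) for key, names in clusters.items()}

print(cluster_by(objects, "shape"))  # {'circle': ['a', 'b'], 'square': ['c', 'd']}
print(cluster_by(objects, "size"))   # {'small': ['a', 'c'], 'large': ['b', 'd']}
```

Both partitions are perfectly valid; only the stated perspective makes one of them "the" right clustering.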
+ }, + { + "5:42": "And so for example, we can cluster documents in the whole text collection. So in this case, documents are the units to be clustered." + }, + { + "5:52": "We may be able to cluster terms. In this case, terms are objects. And a cluster of terms can be used to define concept, or theme, or a topic. In fact, there's a topic models that you have seen some previous lectures can give you cluster of terms in some sense if you take terms with high probabilities from word distribution. Another example is just to cluster any text segments, for example, passages, sentences, or any segments that you can extract the former larger text objects." + }, + { + "6:32": "For example, we might extract the order text segments about the topic, let's say, by using a topic model. Now once we've got those text objects then we can" + }, + { + "6:45": "cluster the segments that we've got to discover interesting clusters that might also ripple in the subtopics. So this is a case of combining text clustering with some other techniques. And in general you will see a lot of text mining" + }, + { + "7:05": "can be accurate combined in a flexible way to achieve the goal of doing more sophisticated mining and analysis of text data." + }, + { + "7:16": "We can also cluster fairly large text objects and by that, I just mean text objects may contain a lot of documents. So for example, we might cluster websites. Each website is actually compose of multiple documents. Similarly, we can also cluster articles written by the same author, for example. So we can trigger all the articles published by also as one unit for clustering. In this way, we might group authors together based on whether they're published papers or similar." + }, + { + "7:55": "For the more text clusters will be for the cluster to generate a hierarchy. That's because we can in general cluster any text object at different levels." + }, + { + "8:08": "So more generally why is text clustering interesting? 
Well, it's because it's a very useful technique for text mining, particularly exploratory text analysis." + }, + { + "8:20": "And so a typical scenario is that you are given a lot of text data, let's say all the email messages from customers in some time period, or all the literature articles, etc. And then you hope to get a sense about the overall content of the collection. So for example, you might be interested in getting a sense about the major topics, or what are some typical or representative documents in the collection. And clustering can help us achieve this goal. We sometimes also want to link similar text objects together. And these objects might be duplicated content, for example. And in that case, such a technique can help us remove redundancy and remove duplicate documents." + }, + { + "9:10": "Sometimes they are about the same topic, and by linking them together we can have more complete coverage of a topic." + }, + { + "9:19": "We may also use text clustering to create a structure on the text data, and sometimes we can create a hierarchy of structures, and this is very useful for browsing." + }, + { + "9:31": "We may also use text clustering to induce additional features to represent text data. When we cluster documents together, we can treat each cluster as a feature. And then we can say, when a document is in this cluster, the feature value would be one. And if a document is not in this cluster, then the feature value is zero. And this helps provide additional discrimination that might be used for text classification, as we will discuss later." + }, + { + "9:59": "So there are, in general, many applications of text clustering, and I'll just mention two very specific ones. One is to cluster search results. For example, [INAUDIBLE] a search engine can cluster search results so that the user can see the overall structure of the results returned for a query.
And when the query is ambiguous, this is particularly useful, because the clusters likely represent the different senses of the ambiguous word." + }, + { + "10:28": "Another application is to understand the major complaints from customers based on their emails, right. So in this case, we can cluster email messages and then find the major clusters. From there, we can understand what the major complaints are. [MUSIC]" + } + ] + }, + { + "4-2-text-clustering-generative-probabilistic-models-part-1": [ + { + "0:00": "[SOUND] This lecture is about generative probabilistic models for text clustering." + }, + { + "0:13": "In this lecture, we're going to continue discussing text clustering, and we're going to introduce generative probabilistic models as a way to do text clustering. So this is the overall plan for covering text clustering. In the previous lecture, we have talked about what text clustering is and why text clustering is interesting. In this lecture, we're going to talk about how to do text clustering. In general, as you see on this slide, there are two kinds of approaches. One is generative probabilistic models, which is the topic of this lecture. And later, we'll also discuss similarity-based approaches." + }, + { + "0:53": "So to talk about generative models for text clustering, it would be useful to revisit the topic mining problem using topic models, because the two problems are very similar. This is a slide that you have seen earlier in the lecture on topic models. Here we show that we have as input a text collection C, a number of topics k, and a vocabulary V. And we hope to generate as output two things. One is a set of topics denoted by theta i's, each a word distribution, and the other is pi i j. These are the probabilities that each document covers each topic. So this is the topic coverage, and it's also visualized here on this slide. You can see that this is what we can get by using a topic model.
Now, the main difference between this and the text clustering problem is that here, a document is assumed to possibly cover multiple topics. And indeed, in general, a document will be covering more than one topic with nonzero probabilities. In text clustering, however, we only allow a document to cover one topic, if we assume one topic corresponds to one cluster." + }, + { + "2:24": "So that means if we change the problem definition just slightly, by assuming that each document can only be generated by using precisely one topic," + }, + { + "2:37": "then we'll have a definition of the clustering problem, as you see here. So here the output is changed so that we no longer have the detailed coverage distributions pi i j. But instead, we're going to have cluster assignment decisions, C i. And C sub i is a decision for document i; it's going to take a value from 1 through k to indicate one of the k clusters." + }, + { + "3:09": "And it basically tells us which cluster d i is in. As illustrated here, we no longer have multiple topics covered in each document. It is precisely one topic, although which topic is still uncertain. There is also a connection with" + }, + { + "3:29": "the problem of mining one topic that we discussed earlier. So here again is a slide that you have seen before, and here we hope to estimate a topic, or word distribution, based on precisely one document. And that's when we assume that this document covers precisely one topic." + }, + { + "3:52": "But we can also consider some variations of the problem. For example, we can consider that there are N documents, each covering a different topic, so that's N documents and N topics. Of course, in this case, these documents are independent, and these topics are also independent. But we can further allow these documents to share topics, and we can assume there are fewer topics than the number of documents, so these documents must share some topics.
And if we have N documents that share k topics, then we'll again have precisely the document clustering problem." + }, + { + "4:34": "So because of these connections, naturally we can think about how to use a probabilistic generative model to solve the problem of text clustering." + }, + { + "4:43": "So the question now is, what generative model can be used to do clustering?" + }, + { + "4:49": "As in all cases of designing a generative model, we hope the generative model would encode the output that we hope to generate, or the structure that we hope to model. So in this case, this is a clustering structure: the topics, and each document covering one topic. And we hope to embed such preferences in the generative model. But if you think about the main difference between this problem and the topic model that we talked about earlier, you will see a main requirement is, how can we force every document to be generated from precisely one topic, instead of k topics, as in the topic model?" + }, + { + "5:35": "So let's revisit the topic model again in more detail. So this is a detailed view of a two-component mixture model. When we have k components, it looks similar. So here we see that when we generate a document," + }, + { + "5:53": "we generate each word independently." + }, + { + "5:57": "And when we generate each word, we first make a choice between these distributions. We decide to use one of them with some probability. So p of theta 1 is the probability of choosing the distribution on the top. Now we first make this decision regarding which distribution should be used to generate the word. And then we're going to use this distribution to sample a word. Now note that in such a generative model, the decision on which distribution to use for each word is independent. So that means, for example, the word 'the' here could have been generated from the second distribution, theta 2, whereas 'text' is more likely generated from the first one on the top."
+ }, + { + "6:49": "That means the words in the document that could have been generated in general from multiple distributions." + }, + { + "6:58": "Now this is not what we want, as we said, for text clustering, for document clustering, where we hoped this document will be generated from precisely one topic." + }, + { + "7:09": "So now that means we need to modify the model. But how? Well, let's first think about why this model cannot be used for clustering. And as I just said, the reason is because it has allowed multiple topics to contribute a word to the document." + }, + { + "7:28": "And that causes confusion because we're not going to know which cluster this document is from. And it's, more importantly it's violating our assumption about the partitioning of documents in the clusters. If we really have one topic to correspond it to one cluster of documents, then we would have a document that we generate from precisely one topic. That means all the words in the document must have been generated from precisely one distribution. And this is not true for such a topic model that we're seeing here. And that's why this cannot be used for clustering because it did not ensure that only one distribution has been used to generate all the words in one document." + }, + { + "8:15": "So if you realize this problem, then we can naturally design alternative mixture model for doing clustering. So this is what you're seeing here. And we again have to make a decision regarding which distribution to use to generate this document because the document could potentially be generated from any of the k word distributions that we have. But this time, once we have made a decision to choose one of the topics, we're going to stay with this regime to generate all the words in the document." 
+ }, + { + "8:49": "And that means, once we have made a choice of the distribution in generating the first word, we're going to go stay with this distribution in generating all of the other words in the document. So, in other words, we only make the choice once for, basically, we make the decision once for this document and this state was just to generate all the words. Similarly if I had choosing the second distribution, theta sub 2 here, you can see which state was this one. And then generate the entire document of d. Now, if you compare this picture with the previous one, you will see the decision of using a particular distribution is made just once for this document, in the case of document clustering. But in the case of topic model, we have to make as many decisions as the number of words in the document. Because for each word, we can make a potentially different decision. And that's the key difference between the two models." + }, + { + "9:58": "But this is obviously also a mixed model so we can just group them together as one box to show that this is the model that will give us a probability of the document. Now, inside of this model, there is also this switch of choosing a different distribution. And we don't observe that so that's a mixture model. And of course a main problem in document clustering is to infer which distribution has been used to generate a document and that would allow us to recover the cluster identity of a document." + }, + { + "10:37": "So it will be useful to think about the difference from the topic model as I have also mentioned multiple times." + }, + { + "10:46": "And there are mainly two differences, one is the choice of" + }, + { + "10:56": "using that particular distribution is made just once for document clustering. Whereas in the topic model, it's made it multiple times for different words. The second is that word distribution, here, is going to be used to regenerate all the words for a document." 
+ }, + { + "11:19": "But in the case of the topic model, one distribution doesn't have to generate all the words in the document; multiple distributions could have been used to generate the words in the document." + }, + { + "11:34": "Let's also think about a special case, when the probability of choosing a particular distribution is equal to 1. Now that just means we have no uncertainty; we just stick with one particular distribution. Now in that case, clearly, we will see this is no longer a mixture model, because there's no uncertainty here, and we can just use precisely one of the distributions for generating a document. And we're going back to the case of estimating one word distribution based on one document." + }, + { + "12:12": "So that's the connection that we discussed earlier. Now you can see it more clearly. So as in all cases of using a generative model to solve a problem, we first look at the data and then think about how to design the model. But once we design the model, the next step is to write down the likelihood function. And after that, we're going to look at how to estimate the parameters." + }, + { + "12:36": "So in this case, what's the likelihood function? It's going to be very similar to what you have seen before in topic models, but it will also be different." + }, + { + "12:45": "Now if you still recall what the likelihood function looks like, then you will realize that in general, the probability of observing a data point from a mixture model is going to be a sum over all the possibilities of generating the data." + }, + { + "13:00": "In this case, it's going to be a sum over the k topics, because every one could have been used to generate the document. And then, inside the sum, you can still recall what the formula looks like: it's going to be the product of two probabilities. One is the probability of choosing the distribution; the other is the probability of observing a particular data point from that distribution."
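The two likelihood forms being contrasted here can be computed side by side. For the clustering model, the product over words sits inside the sum over distributions (one choice per document); for the topic model, the sum over distributions sits inside the product over words (one choice per word). The toy distributions and the smoothing constant below are illustrative assumptions, not values from the lecture.

```python
import math

def clustering_log_likelihood(doc, priors, dists):
    """p(d) = sum_k p(theta_k) * prod_w p(w | theta_k): the distribution is
    chosen once per document, so the product sits inside the sum."""
    total = 0.0
    for prior, dist in zip(priors, dists):
        total += prior * math.prod(dist[w] for w in doc)
    return math.log(total)

def topic_model_log_likelihood(doc, priors, dists):
    """p(d) = prod_w sum_k p(theta_k) * p(w | theta_k): each word makes its
    own choice, so the sum sits inside the product."""
    return sum(math.log(sum(p * dist[w] for p, dist in zip(priors, dists)))
               for w in doc)

dists = [{"text": 0.6, "mining": 0.3, "the": 0.1},
         {"loan": 0.5, "bank": 0.4, "the": 0.1}]
# Give every vocabulary word a tiny nonzero probability under both
# distributions so both likelihoods stay well defined on this toy example.
for dist in dists:
    for w in ("text", "mining", "the", "loan", "bank"):
        dist.setdefault(w, 1e-4)
priors = [0.5, 0.5]
doc = ["text", "mining", "the"]
print(clustering_log_likelihood(doc, priors, dists))
print(topic_model_log_likelihood(doc, priors, dists))
```

Running this on a document whose words all fit one distribution shows the clustering likelihood concentrating on that single component, while the topic-model likelihood mixes the components word by word.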
+ }, + { + "13:27": "So if you map this kind of formula to our problem here, you will see the probability of observing a document d" + }, + { + "13:37": "is basically a sum in this case of two different distributions because we have a very simplified situation of just two clusters. And so in this case, you can see it's a sum of two cases. In each case, it's indeed the probability of choosing the distribution either theta 1 or theta 2. And then, the probability is multiplied by the probability of observing this document from this particular distribution." + }, + { + "14:16": "And if you further expanded this probability of observing the whole document, we see that it's a product of observing each word X sub i. And here we made the assumption that each word is generated independently, so the probability of the whole document is just a product of the probability of each word in the document." + }, + { + "14:40": "So this form should be very similar to the topic model. But it's also useful to think about the difference and for that purpose, I am also copying the probability of topic model of these two components here. So here you can see the formula looks very similar or in many ways, they are similar." + }, + { + "15:02": "But there is also some difference." + }, + { + "15:06": "And in particular, the difference is on the top. You see for the mixture model for document clustering, we first take a product, and then take a sum." + }, + { + "15:16": "And that's corresponding to our assumption of first make a choice of choosing one distribution and then stay with the distribution, it'll generate all the words. And that's why we have the product inside the sum." + }, + { + "15:30": "The sum corresponds to the choice. Now, in topic model, we see that the sum is actually inside the product. And that's because we generated each word independently. 
And that's why we have the product outside; but when we generate each word, we have to make a decision regarding which distribution to use, so we have a sum there for each word. But in general, these are all mixture models, and we can estimate these models by using the EM algorithm, as we will discuss more later. [MUSIC]" + } + ] + }, + { + "4-3-text-clustering-generative-probabilistic-models-part-2": [ + { + "0:00": "[SOUND] This lecture is a continuing discussion of Generative Probabilistic Models for text clustering." + }, + { + "0:13": "In this lecture, we are going to continue talking about text clustering, particularly the generative probabilistic models." + }, + { + "0:23": "So this is a slide that you have seen earlier, where we have written down the likelihood function for a document with two distributions, that is, a two-component mixture model for document clustering." + }, + { + "0:39": "Now in this lecture, we're going to generalize this to k clusters. Now if you look at the formula and think about the question, how to generalize it, you'll realize that all we need is to add more terms, like what you have seen here." + }, + { + "0:57": "So you can just add more thetas, the probabilities of thetas, and the probabilities of generating d from those thetas. So this is precisely what we are going to use, and this is the general form of the mixture model for document clustering." + }, + { + "1:19": "So, as in most cases, we follow these steps in using a generative model. First, think about our data. And so in this case our data is a collection of documents, N documents denoted by d sub i. And then we think about the design of the model. In this case, we design a mixture of k unigram language models. It's a little bit different from the topic model, but we have similar parameters. We have a set of theta i's that denote the word distributions corresponding to the k unigram language models.
We have p of theta i as the probability of selecting each of the k distributions to generate a document. Now note that although our goal is to find the clusters, we have actually used a more general notion of a probability of each cluster, and this, as you will see later, will allow us to assign a document to the cluster that has the highest probability of having generated the document." + }, + { + "2:31": "So as a result, we can also recover some other interesting" + }, + { + "2:36": "properties, as you will see later." + }, + { + "2:42": "So the model basically would make the following assumption about the generation of a document. We first choose a theta i according to the probability of theta i, and then generate all the words in the document using this distribution. Note that it's important that we use this distribution to generate all the words in the document. This is very different from the topic model. So the likelihood function would be like what you are seeing here." + }, + { + "3:10": "So you can take a look at the formula here. We have used a different notation in the second line of this equation. You are going to see that the notation has been changed to use each unique word in the vocabulary in the product, instead of each particular position in the document. So from X sub i to w is a change of notation, and this change allows us to show the estimation formulas more easily. And you have seen this change also in the topic model presentation, but it's basically still just a product of the probabilities of all the words." + }, + { + "4:10": "And so with the likelihood function, now we can talk about how to do parameter estimation. Here we can simply use the maximum likelihood estimator. So that's just a standard way of doing things, and it should all be familiar to you now; it's just a different model. So after we have estimated the parameters, how can we then allocate clusters to the documents? Well, let's take a look at this situation more closely.
So we just repeated the parameters here for this mixture model." + }, + { + "4:43": "Now if you think about what we can get by estimating such a model, we can actually get more information than what we need for doing clustering, right? So theta i, for example, represents the content of cluster i. This is actually a by-product; it can help us summarize what the cluster is about. If you look at the top terms in this word distribution, they will tell us what the cluster is about." + }, + { + "5:11": "p of theta i can be interpreted as indicating the size of cluster i, because it tells us how likely the cluster would be used to generate a document. The more likely a cluster is used to generate a document, the larger we can assume the cluster size is." + }, + { + "5:30": "Note that unlike in PLSA, this probability of theta i does not depend on d." + }, + { + "5:37": "Now you may recall that the topic choice in each document actually depends on d there. That means each document can have a potentially different choice of topics, but here we have a generic choice probability for all the documents. But of course, for a particular document, we still have to infer which topic is more likely to have generated the document. So in that sense, we can still have a document-dependent probability of clusters." + }, + { + "6:10": "So now let's look at the key problem of assigning documents to clusters, or assigning clusters to documents." + }, + { + "6:17": "So that's to compute c sub d here, and this will take one of the values in the range of one through k to indicate which cluster should be assigned to d." + }, + { + "6:28": "Now first, you might think about a way to use the likelihood only, and that is to assign d to the cluster corresponding to the theta i that most likely has been used to generate d." + }, + { + "6:42": "So that means we're going to choose one of those distributions that gives d the highest probability.
In other words, we see which distribution has the content that matches our d at the [INAUDIBLE]. Intuitively that makes sense; however, this approach does not consider the size of clusters, which is also available to us. So a better way is to use the likelihood together with the prior, in this case p of theta i. That is, we're going to use the Bayes formula to compute the posterior probability of theta given d." + }, + { + "7:25": "And if we choose theta based on this posterior probability, we would have the following formula that you see on the bottom of this slide. In this case, we're going to choose the theta that has a large p of theta i, meaning a large cluster, and also a high probability of generating d. So we're going to favor a cluster that's large and also consistent with the document. That intuitively makes sense, because the chance of a document being in a large cluster is generally higher than of being in a small cluster." + }, + { + "8:07": "So this means once we can estimate the parameters of the model, we can easily solve the problem of document clustering. So next, we'll have to discuss how to actually compute the estimate of the model. [MUSIC]" + } + ] + }, + { + "4-4-text-clustering-generative-probabilistic-models-part-3": [ + { + "0:00": "[SOUND] This lecture is a continuing discussion of generative probabilistic models for text clustering." + }, + { + "0:14": "In this lecture, we're going to finish the discussion of generative probabilistic models for text clustering." + }, + { + "0:21": "So this is a slide that you have seen before; here we show how we define the mixture model for text clustering and what the likelihood function looks like. And we can also compute the maximum likelihood estimate to estimate the parameters. In this lecture, we're going to talk more about how exactly we're going to compute the maximum likelihood estimate.
As in most cases, the EM algorithm can be used to solve this problem for mixture models. So here's the detail of this EM algorithm for document clustering. Now, if you have understood how the EM algorithm works for topic models like PLSA, then this will be very similar; we just need to adapt it a little bit to this new mixture model. As you may recall, the EM algorithm starts with an initialization of all the parameters. This is the same as what happened before for topic models." + }, + { + "1:28": "And then we're going to repeat until the likelihood converges, and in each iteration we'll do an E-step and an M-step. In the E-step, we're going to infer which distribution has been used to generate each document. So we have to introduce a hidden variable Zd for each document, and this variable can take a value from the range of 1 through k, representing the k different distributions." + }, + { + "1:59": "More specifically, we're going to apply Bayes' rule to infer which distribution is more likely to have generated this document, that is, to compute the posterior probability of the distribution given the document." + }, + { + "2:17": "And we know it's proportional to the probability of selecting this distribution, p of Z equals i, times the probability of generating this whole document from the distribution, which is the product of the probabilities of the words in this document, as you see here. Now, as usual, it's useful to remember" + }, + { + "2:45": "the normalizer, or the constraint on this probability. In this case, we know the constraint on this probability in the E-step is that all the probabilities of Z equals i must sum to 1, because the document must have been generated from precisely one of these k topics. So the probabilities of being generated from each of them should sum to 1. And if you know this constraint, then you can easily compute this distribution as long as you know what it is proportional to.
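The E-step just described — an unnormalized posterior, the selection probability times the product of word probabilities, normalized so the k values sum to 1 — might look like this. A sketch with my own names; a practical implementation would also guard against the underflow discussed later in the lecture.

```python
def e_step(doc_counts, priors, word_dists):
    """Posterior p(Z_d = i | d) over the k distributions for one document.

    doc_counts: dict word -> count c(w, d)
    priors:     list of p(theta_i)
    word_dists: list of dicts, word_dists[i][w] = p(w | theta_i)
    """
    scores = []
    for prior, dist in zip(priors, word_dists):
        s = prior  # probability of selecting this distribution
        for w, c in doc_counts.items():
            s *= dist[w] ** c  # probability of generating each word occurrence
        scores.append(s)
    z = sum(scores)  # the constraint: the k posterior probabilities must sum to 1
    return [s / z for s in scores]
```

Given two equally likely distributions, one favoring "text" and "mining", the posterior for a document with those words concentrates almost entirely on that distribution.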
So once you compute this product that you see here, then you simply normalize" + }, + { + "3:31": "these probabilities to make them sum to 1 over all the topics. So that's the E-step. After the E-step, we know which distribution is more likely to have generated this document d, and which is unlikely." + }, + { + "3:45": "Then in the M-step, we're going to re-estimate all the parameters based on the inferred z values, or the inferred knowledge about which distribution has been used to generate which document. The re-estimation involves two kinds of parameters. One is p of theta, the probability of selecting a particular distribution. Before we observe anything, we don't have any knowledge about which cluster is more likely. But after we have observed these documents, we can collect the evidence to infer which cluster is more likely. And so this is proportional to the sum over documents of the probability that Z sub d j equals i." + }, + { + "4:34": "This gives us all the evidence for using topic theta i to generate a document. We pool these together and, again, normalize them into probabilities." + }, + { + "4:50": "So this is for p of theta sub i." + }, + { + "4:54": "Now the other kind of parameters are the probabilities of words in each distribution, in each cluster. This is very similar to the case of PLSA; here we just collect the counts of words in documents that are inferred to have been generated from a particular topic theta i. This then allows us to estimate how many words have actually been generated from theta i, and then we normalize these counts into probabilities so that the probabilities of all the words sum to 1. Note that it's very important to understand these constraints, as they precisely determine the normalizers in all these formulas. And it's also important to know what each distribution is over.
+ }, + { + "5:54": "For example, the probability of theta is over all the k topics; that's why these k probabilities will sum to 1. Whereas the probability of a word given theta is a probability distribution over all the words, so there are many probabilities, and they have to sum to 1. So now, let's take a look at a simple example with two clusters. For the two clusters, I've assumed some initialized values for the two distributions. And let's assume we initialize the probability of selecting each cluster as 0.5, so they are equally likely. Then let's consider the one document that you see here. There are two occurrences of text and two occurrences of mining, so there are four words in total, and medical and health do not occur in this document. So let's think about the hidden variable." + }, + { + "6:50": "Now for each document we must use a hidden variable. Before, in PLSA, we used one hidden variable for each word, because a word is the output of the mixture model there. In our case, the output from the mixture model, or the observation from the mixture model, is a document, not a word. So now we have one hidden variable attached to the document. That hidden variable must tell us which distribution has been used to generate the document, so it's going to take two values, one and two, to indicate the two topics." + }, + { + "7:25": "So now how do we infer which distribution has been used to generate d? Well, we use Bayes' rule, so it looks like this. In order for the first topic theta 1 to generate the document, two things must happen. First, theta sub 1 must have been selected; this is given by p of theta 1. Second, it must also have generated the four words in the document, namely two occurrences of text and two occurrences of mining. And that's why you see the numerator has the product of the probability of selecting theta 1 and the probability of generating the document from theta 1.
So the denominator is just the sum of the two possibilities of generating this document. And you can plug in the numerical values to verify that indeed, in this case, the document is much more likely to be generated from theta 1 than from theta 2. Once we have this probability, we can easily compute the probability of Z equals 2 given this document. How? Well, we can use the constraint: it's going to be 1 minus 100 over 101. Now it's important to note that in such a computation there is a potential problem of underflow, and that is because the original numerator and denominator involve the computation of a product of many small probabilities. Imagine a document with many words; the product will be a very small value, and that can cause the problem of underflow. To solve the problem, we can use a normalizer. So here you see that we take an average of these two word distributions to compute an average distribution, called theta bar." + }, + { + "9:24": "And this average distribution would be comparable to each of these distributions in terms of magnitude. So we can then divide the numerator and the denominator both by this normalizer. Basically, this normalizes by the probability of generating this document using this average word distribution. So you can see the normalizer here. And since we have used exactly the same normalizer for the numerator and the denominator, the whole value of this expression is not changed, but by doing this normalization we can make the numerators and denominators more manageable, in that the overall value is not going to be very small for each. And thus we can avoid the underflow problem." + }, + { + "10:24": "At other times, we also use the logarithm of the product to convert it into a sum of log probabilities. This can help preserve precision as well, but in this case we cannot use the logarithm alone to solve the problem.
That is because there is a sum in the denominator. But this kind of normalizer can be effective for solving the problem, so it's a technique that's sometimes useful in other situations as well. Now let's look at the M-step. From the E-step, we have our estimate of which distribution is more likely to have generated each document d. You can see d1 is more likely generated from the first topic, whereas d2 is more likely from the second topic, etc. Now, let's think about what we need to compute in the M-step: basically, we need to re-estimate all the parameters. First, look at p of theta 1 and p of theta 2. How do we estimate them? Intuitively, we can just pool together these z probabilities from the E-step. So if all of these documents say they're more likely from theta 1, then we intuitively would give a higher probability to theta 1. In this case, we can just take an average of these probabilities that you see here, and we obtain 0.6 for theta 1. So theta 1 is more likely than theta 2, and you can see the probability of theta 2 would naturally be 0.4. What about the word probabilities? Well, we do the same, and the intuition is the same. In order to estimate the probabilities of words in theta 1, we're going to look at which documents have been generated from theta 1, and we're going to pool together the words in those documents and normalize them. So this is basically what I just said." + }, + { + "12:20": "More specifically, we're going to, for example, use all the counts of text in these documents for estimating the probability of text given theta 1. But we're not going to use their raw counts or total counts. Instead, we discount them by the probability that each document has been generated from theta 1. This gives us fractional counts, and these counts are then normalized in order to get the probability. Now, how do we normalize them?
Well, the probabilities of these words must sum to 1. So, to summarize our discussion of generative models for clustering: we showed that a slight variation of the topic model can be used for clustering documents, and this also shows the power of generative models in general. By changing the generation assumption and changing the model slightly, we can achieve different goals, and we can capture different patterns and types of data. In this case, each cluster is represented by a unigram language model, a word distribution, and that is similar to the topic model. So here you can see the word distribution actually gives us a term cluster as a by-product. A document is generated by first choosing a unigram language model and then generating all the words in the document using just that single language model. This is very different from, again, the topic model, where we can generate the words in a document using multiple unigram language models." + }, + { + "13:56": "And then the estimated model parameters give us both a topic characterization of each cluster and a probabilistic assignment of each document to a cluster." + }, + { + "14:07": "And this probabilistic assignment is sometimes useful for some applications. But if we want to achieve hard clusters, namely to" + }, + { + "14:16": "partition documents into disjoint clusters, then we can just force a document into the cluster corresponding to the word distribution that's most likely to have generated the document. We've also shown that the EM algorithm can be used to compute the maximum likelihood estimate. And in this case, we need to use a special normalization technique to avoid underflow. [MUSIC]" + } + ] + }, + { + "4-5-text-clustering-similarity-based-approaches": [ + { + "0:00": "[MUSIC] This lecture is about the similarity-based approaches to text clustering." + }, + { + "0:13": "In this lecture we're going to continue the discussion of how to do text clustering.
+ }, + { + "0:18": "In particular, we're going to cover a different kind of approach than generative models, and that is similarity-based approaches. The general idea of similarity-based clustering is to explicitly specify a similarity function to measure the similarity between two text objects. This is in contrast with a generative model, where we implicitly define the clustering bias by using a particular objective function like a [INAUDIBLE] function." + }, + { + "0:52": "The whole process is driven by optimizing the [INAUDIBLE], but here we explicitly provide a view of what we think is similar. This is often very useful, because it allows us to inject any particular view of similarity into the clustering process. So once we have a similarity function, we can then aim at optimally partitioning the data into clusters, or into different groups, and try to maximize the intra-group similarity and minimize the inter-group similarity. That is, to ensure that objects put into the same group are similar, but objects put into different groups are not similar. These are the general goals of clustering, and there is often a trade-off between achieving both goals. Now, there are many different methods for doing similarity-based clustering, and in general I think we can distinguish two strategies at a high level. One is to progressively construct a hierarchy of clusters, and this often leads to hierarchical clustering. We can further distinguish two ways to construct a hierarchy, depending on whether we start with the whole collection and divide the collection, or start with individual objects and gradually group them together. The first is bottom-up, which can be called agglomerative: we gradually group similar objects into larger and larger clusters.
We do this until we group everything together. The other is top-down, or divisive: in this case we gradually partition the whole data set into smaller and smaller clusters. The other general strategy is to start with an initial tentative clustering and then iteratively improve it. This often leads to a flat clustering; one example is k-Means. As I just said, there are many different clustering methods available, and a full coverage of all the clustering methods would be beyond the scope of this course. But here we are going to talk about two representative methods in some detail:" + }, + { + "3:14": "one is Hierarchical Agglomerative Clustering, or HAC; the other is k-Means. So first let's look at agglomerative hierarchical clustering. In this case, we're given a similarity function to measure the similarity between two objects, and then we can gradually group similar objects together in a bottom-up fashion to form larger and larger groups. They always form a hierarchy, and we can stop when some stopping criterion is met. It could be either that some number of clusters has been reached or that a threshold on similarity has been reached." + }, + { + "3:52": "There are different variations here, and they mainly differ in the way they compute group similarity based on the similarity of individual objects. So let's illustrate how we can induce a structure based on just similarity. We start with all the text objects, and we can then measure the similarity between them, of course based on the provided similarity function. Then we can see which pair has the highest similarity and just group them together, and then we're going to see which pair is the next one to group." + }, + { + "4:30": "Maybe these two now have the highest similarity, and then we're going to group them together. Each time, we're going to pick the pair with the highest similarity to group.
+ }, + { + "4:45": "This will eventually give us a binary tree that groups everything together." + }, + { + "4:50": "Now, depending on our application, we can use the whole hierarchy as a structure for browsing, for example. Or we can choose a cutoff, let's say cut here to get four clusters, or use a threshold to cut, or cut at this high level to get just two clusters. So this is the general idea. Now, if you think about how to implement this algorithm, you'll realize that we have everything specified except for how to compute group similarity." + }, + { + "5:24": "We are only given the similarity function on two objects, but as we group groups together, we also need to assess the similarity between two groups. There are different ways to do that, and there are three popular methods: single-link, complete-link, and average-link. Given two groups, the single-link algorithm defines the group similarity as the similarity of the closest pair between the two groups. Complete-link defines the similarity of the two groups as the similarity of the farthest pair. Average-link defines the similarity as the average similarity of all the pairs between the two groups. It's much easier to understand the methods by illustrating them, so here are two groups, g1 and g2, with some objects in each group. We know how to compute the similarity between two objects, but the question now is, how can we compute the similarity between the two groups?" + }, + { + "6:29": "In general, we can base this on the similarities of the objects in the two groups." + }, + { + "6:35": "In terms of single-link, we're just looking at the closest pair, so in this case these two paired objects define the similarity of the two groups." + }, + { + "6:47": "As long as they are very close, we're going to say the two groups are very" + }, + { + "6:51": "close, so it is an optimistic view of similarity.
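The three group-similarity definitions, and the bottom-up merging loop they plug into, can be sketched as follows. This is an illustrative sketch; the function names and the stop-at-n-clusters criterion are my own choices, not the lecture's.

```python
from itertools import combinations

def single_link(g1, g2, sim):
    # Similarity of the closest pair across the two groups (optimistic view).
    return max(sim(a, b) for a in g1 for b in g2)

def complete_link(g1, g2, sim):
    # Similarity of the farthest pair across the two groups (pessimistic view).
    return min(sim(a, b) for a in g1 for b in g2)

def average_link(g1, g2, sim):
    # Average similarity over all cross-group pairs.
    sims = [sim(a, b) for a in g1 for b in g2]
    return sum(sims) / len(sims)

def agglomerate(objects, sim, group_sim, num_clusters):
    """Bottom-up merging: repeatedly merge the most similar pair of groups."""
    groups = [[o] for o in objects]
    while len(groups) > num_clusters:
        # Find the pair of groups with the highest group similarity.
        i, j = max(combinations(range(len(groups)), 2),
                   key=lambda p: group_sim(groups[p[0]], groups[p[1]], sim))
        groups[i].extend(groups.pop(j))  # j > i, so popping j leaves i valid
    return groups
```

For example, clustering the 1-D points [0, 0.1, 5, 5.1] with similarity defined as negative distance and single-link yields the two obvious groups.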
+ }, + { + "6:57": "Complete-link, on the other hand, is in some sense pessimistic, taking the similarity of the farthest pair as the similarity of the two groups. So we are going to make sure that if the two groups have a high similarity, then every pair of objects in the two groups is ensured to have a high similarity." + }, + { + "7:29": "Average-link is in between: it takes the average over all these pairs. These different ways of computing group similarity will lead to different clustering algorithms, and they would in general give different results, so it's useful to take a look at their differences and make a comparison." + }, + { + "7:53": "First, single-link can be expected to generally produce loose clusters. The reason is that as long as two objects, one in each group, are very similar, it will bring the two groups together." + }, + { + "8:09": "If you think about this as similar to having parties with people, then it just means two groups of people would be partying together as long as in each group there is a person that" + }, + { + "8:27": "is well connected with the other group. So if the two leaders of the two groups have a good relationship with each other, they will bring the two groups together. In this case, the cluster is loose, because there's no guarantee that other members of the two groups are actually very close to each other; sometimes they may be very far away. In this case it's also based on an individual decision, so it could be sensitive to outliers. Complete-link is in the opposite situation, where we can expect the clusters to be tight, and it's also based on an individual decision, so it can also be sensitive to outliers. Again, to continue the analogy of a party of people, complete-link would mean that when two groups come together.
They want to ensure that even" + }, + { + "9:21": "the people that are unlikely to talk to each other would be comfortable talking to each other, ensuring the whole cluster is coherent. Average-link clustering is in between, and it is based on a group decision, so it's" + }, + { + "9:37": "going to be insensitive to outliers. Now, in practice, which one is the best? Well, this would depend on the application. Sometimes you need loose clusters and want to aggressively cluster objects together, and then maybe single-link is good. Other times you might need tight clusters, and complete-link might be better. In general, you have to empirically evaluate these methods for your application to know which one is better." + }, + { + "10:07": "Next, let's look at another example of a method for similarity-based clustering, in this case one called k-Means clustering. We will represent each text object as a term vector and assume a similarity function defined on two objects. We're going to start with a tentative clustering result by just taking k randomly selected vectors as the centroids of k clusters and treating them as if they each represent a cluster. This gives us the initial tentative clustering; then we're going to iteratively improve it. The process goes like this: once we have decided these centroids, we're going to assign each vector to the cluster whose centroid is closest to it. So basically we're going to measure the distance between this vector and each of the centroids, see which one is the closest, and then just put this object into that cluster. This gives a tentative assignment of objects into clusters, and we're going to partition all the objects into k clusters based on our tentative clustering and centroids." + }, + { + "11:28": "Then we can recompute the centroids based on the objects allocated to each cluster.
This adjusts the centroids, and then we can repeat this process until the similarity-based objective function, in this case the within-cluster sum of squares, converges. Theoretically, we can show that this process is actually going to minimize the within-cluster sum of squares, a well-defined objective function, given k clusters. It can also be shown that this process will converge to a local minimum. If you think about this process for a moment, it might remind you of the EM algorithm for the mixture model." + }, + { + "12:13": "Indeed, this algorithm is very similar to the EM algorithm for the mixture model for clustering. More specifically, we also initialize the parameters in the EM algorithm, so the random initialization is similar." + }, + { + "12:34": "And then in the EM algorithm, you may recall, we're going to repeat the E-step and M-step to improve our parameter estimates. Here, we're going to improve the clustering result iteratively by also doing two steps. In fact, the two steps are very similar to the EM algorithm, in that allocating a vector to one of the clusters based on our tentative clustering is very similar to inferring which distribution has been used to generate a document in the mixture model. So it is essentially similar to the E-step. What's the difference? Well, the difference is that here we don't make a probabilistic allocation as in the E-step; rather, we make a hard choice. If a vector is closest to, say, cluster two, then we're going to say it is in cluster two. There is no probabilistic split; we're just going to put each object into precisely one cluster. In the E-step, however, we do a probabilistic allocation, so we split the counts, and we're not going to say exactly which distribution has been used to generate a data point.
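The k-Means loop described above — random initial centroids, assign each vector to the nearest centroid, recompute centroids, repeat until the assignment stabilizes — can be sketched as follows. An illustrative sketch using squared Euclidean distance on plain lists; the names are my own.

```python
import random

def kmeans(vectors, k, max_iters=100, seed=0):
    """Minimal k-Means on lists of floats; returns (assignments, centroids)."""
    rng = random.Random(seed)
    # Initial tentative centroids: k randomly selected vectors.
    centroids = [list(v) for v in rng.sample(vectors, k)]
    prev = None
    for _ in range(max_iters):
        # Assignment step: each vector goes to the cluster whose centroid is closest.
        assign = [min(range(k),
                      key=lambda i: sum((a - b) ** 2
                                        for a, b in zip(v, centroids[i])))
                  for v in vectors]
        if assign == prev:
            break  # converged: assignments no longer change
        # Update step: recompute each centroid as the mean of its members.
        for i in range(k):
            members = [v for v, c in zip(vectors, assign) if c == i]
            if members:
                centroids[i] = [sum(dim) / len(members) for dim in zip(*members)]
        prev = assign
    return assign, centroids
```

Each iteration makes a hard choice for every vector, in contrast to the probabilistic split of the E-step, which is exactly the difference discussed here.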
Next, we're going to adjust the centroids, and this is very similar to the M-step, where we re-estimate the parameters. There we obtain a better estimate of the parameters; here we obtain a better clustering result by adjusting the centroids. Note that a centroid is the average of the vectors in the cluster, so this is also similar to the M-step, where we pool together counts and then normalize them. The difference, of course, follows from the difference in the E-step: we're not going to consider probabilities when we count the points. In k-Means, we count only the objects allocated to this cluster, which is only a subset of the data points, but in the EM algorithm we in principle consider all the data points, based on their probabilistic allocations." + }, + { + "14:56": "But in nature they are very similar, and that's why k-Means is also optimizing a well-defined objective function and is guaranteed to converge to a local minimum. So, to summarize our discussion of clustering methods: we first discussed model-based approaches, mainly the mixture model. Here we use an implicit similarity function to define the clustering bias; there is no explicitly defined similarity function. The model defines the clustering bias, and the clustering structure is built into the generative model. That's why we can potentially use a different model to recover a different structure." + }, + { + "15:40": "Complex generative models can be used to discover complex clustering structures. We did not talk about this in full, but we can easily design a generative model to generate hierarchical clusters. We can also use priors to further customize the clustering algorithm, for example to control the topic of one cluster or multiple clusters. However, one disadvantage of this approach is that there is no easy way to directly control the similarity measure.
+ }, + { + "16:11": "Sometimes we want to do that, but it's very hard to inject such a special definition of similarity into such a model." + }, + { + "16:20": "We also talked about similarity-based approaches; these approaches are more flexible in that we can actually specify similarity functions." + }, + { + "16:29": "But one major disadvantage is that their objective function is not always very clear." + }, + { + "16:35": "The k-Means algorithm has a clearly defined objective function, but it's also very similar to a model-based approach. For the hierarchical clustering algorithm, on the other hand, it is harder to specify the objective function, so it's not clear what exactly is being optimized." + }, + { + "17:00": "Both approaches can generate term clusters as well as document clusters. Term clusters can in general be generated by representing each term with some text content, for example by taking the context of each term as its representation, as we have done in semantic relation learning. Then we can certainly cluster terms based on actual text [INAUDIBLE]. Of course, term clusters can be generated by using generative models as well, as we've seen. [MUSIC]" + } + ] + }, + { + "4-6-text-clustering-evaluation": [ + { + "0:00": "[MUSIC] This lecture is about the evaluation of text clustering." + }, + { + "0:12": "So far we have talked about multiple ways of doing text clustering, but how do we know which method works the best?" + }, + { + "0:22": "This has to do with evaluation. To talk about evaluation, one must go back to the clustering bias that we introduced at the beginning." + }, + { + "0:32": "Because two objects can be similar depending on how you look at them," + }, + { + "0:37": "we must clearly specify the perspective of similarity. Without that, the problem of clustering is not well defined. So this perspective is also very important for evaluation.
If you look at this slide, you can see we have two different ways to cluster these shapes, and if you ask which one is the best, or which one is better, you will see there's no way to answer this question without knowing whether we'd like to cluster based on shapes or cluster based on sizes. That's precisely why the perspective of the clustering bias is crucial for evaluation." + }, + { + "1:19": "In general, we can evaluate text clusters in two ways: one is direct evaluation, and the other is indirect evaluation. In direct evaluation, we want to answer the following question: how close are the system-generated clusters to the ideal clusters generated by humans?" + }, + { + "1:38": "The closeness here can be assessed" + }, + { + "1:44": "from multiple perspectives, and that will help us characterize the quality of the clustering result from multiple angles, which is sometimes desirable." + }, + { + "1:56": "We also want to quantify the closeness, because this would allow us to easily compare different methods based on their performance figures." + }, + { + "2:09": "And finally, you can see that in this case we essentially inject the clustering bias" + }, + { + "2:15": "by using humans; basically, humans bring in the needed or desired clustering bias. Now, how do we do that exactly? Well, the general procedure would look like this." + }, + { + "2:28": "Given a test set that consists of many text objects, we can have humans create the ideal clustering result; that is, we're going to ask humans to partition the objects to create a gold standard. They will use their judgment, based on the needs of a particular application, to generate what they think is the best clustering result, and this will then be compared with the system-generated clusters for the same test set.
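Quantifying how close a system-generated partition is to the gold standard can be done, for example, with cluster purity, one of the measures this lecture discusses: each system cluster is credited with its majority gold label. A minimal sketch with my own function names:

```python
from collections import Counter

def purity(system_clusters, gold_labels):
    """Fraction of objects whose system cluster's majority gold label
    matches their own gold label.  Both arguments are parallel lists,
    one entry per object."""
    correct = 0
    for c in set(system_clusters):
        # Gold labels of the objects the system put into cluster c.
        labels = [g for s, g in zip(system_clusters, gold_labels) if s == c]
        correct += Counter(labels).most_common(1)[0][1]  # size of the majority label
    return correct / len(gold_labels)
```

A perfect match gives purity 1.0. Note that purity alone rewards many tiny clusters (all-singleton clusterings always score 1.0), which is one reason a measure like normalized mutual information is also used.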
+ }, + { + "3:01": "Ideally, we want the system results to be the same as the human-generated results, but in general they are not going to be the same. So we would then like to quantify the similarity between the system-generated clusters and the gold standard clusters. This similarity can also be measured from multiple perspectives, and this gives us various measures to quantitatively evaluate a clustering result. Some of the commonly used measures include purity, which measures whether a cluster contains only objects from the same cluster in the gold standard. Normalized mutual information is another commonly used measure, which basically measures, based on the identity of an object's cluster in the system result, how well you can predict the cluster of the object in the gold standard, or vice versa. Mutual information captures the correlation between these cluster labels, and normalized mutual information is often used to quantify the similarity for this evaluation purpose. The F measure is another possible measure." + }, + { + "4:21": "Again, a thorough discussion of these evaluation issues would be beyond the scope of this course." + }, + { + "4:29": "I've suggested some readings at the end that you can take a look at to learn more about this." + }, + { + "4:36": "So here I just want to discuss some high-level ideas that allow you to think about how to do evaluation in your applications. The second way to evaluate text clusters is indirect evaluation. In this case, the question to answer is: how useful are the clustering results for the intended application? This, of course, is an application-specific question, so usefulness is going to depend on the specific application." + }, + { + "5:07": "In this case, the clustering bias is imposed by the intended application as well, so what counts as the best clustering result depends on the application.
+ }, + { + "5:19": "Now procedure-wise, we also would create a test set with text objects for the intended application to quantify the performance of the system." + }, + { + "5:32": "In this case, what we care about is the contribution of clustering to some application, so we often have a baseline system to compare with." + }, + { + "5:45": "This could be the current system for doing something, and then you hope to add clustering to improve it. Or the baseline system could be using a different clustering method than the one you are experimenting with, and you hope to show that your clustering method is better. So in any case you have a baseline system to work with, and then you add a clustering algorithm to the baseline system to produce a clustering system." + }, + { + "6:11": "And then we have to compare the performance of your clustering system and the baseline system in terms of the performance measure for that particular application." + }, + { + "6:21": "So in this case we call it indirect evaluation of clusters because there's no explicit assessment of the quality of clusters, but rather it's to assess the contribution of clusters to a particular application." + }, + { + "6:37": "So, to summarize text clustering," + }, + { + "6:41": "it's a very useful unsupervised general text mining technique, and it's particularly useful for obtaining an overall picture of the text content. And this is often needed to explore text data, and this is often the first step when you deal with a lot of text data." + }, + { + "7:01": "The second kind of application is to discover interesting clustering structures in text data, and these structures can be very meaningful." + }, + { + "7:13": "There are many approaches that can be used to perform text clustering, and we discussed model-based approaches and similarity-based approaches. In general, strong clusters tend to show up no matter what method is used.
Also, the effectiveness of a method highly depends on whether the desired clustering bias is captured appropriately, and this can be done either through using the right generative model, a model designed appropriately for the clustering task, or the right similarity function to explicitly define the bias. Deciding the optimal number of clusters is a very difficult problem for all clustering methods, and that's because these are unsupervised algorithms, and there's no training data to guide us to select the best number of clusters." + }, + { + "8:05": "Now sometimes you may see some methods that can automatically determine the number of clusters, but in general that has some implied clustering bias there that's just not specified. Without clearly defining a clustering bias, it's just impossible to say what the optimal number of clusters is, so this is important to keep in mind." + }, + { + "8:31": "And I should also say sometimes we can also use the application to determine the number of clusters. For example, if you're clustering search results, then obviously you don't want to generate 100 clusters, so the number can be dictated by the interface design." + }, + { + "8:46": "In other situations, we might be able to use the fitness to the data to assess whether we've got a good number of clusters to explain our data well. And to do that, you can vary the number of clusters and watch how well you can fit the data." + }, + { + "9:07": "In general, when you add more components to a mixture model, you should fit the data better, because you can always set the probability of using the new component to zero. So you can't in general fit the data worse than before. But the question is, as you add more components, would you be able to significantly improve the fit to the data? And that can be used to determine the right number of clusters.
And finally, evaluation of clustering results can be done both directly and indirectly, and we often would like to do both in order to get a good sense about how well our method works. So here's some suggested reading, and this is particularly useful to better understand how these measures are calculated, and clustering in general. [MUSIC]" + } + ] + }, + { + "4-7-text-categorization-motivation": [ + { + "0:00": "[SOUND]" + }, + { + "0:06": "This lecture is about text categorization." + }, + { + "0:11": "In this lecture, we're going to talk about text categorization." + }, + { + "0:16": "This is a very important technique for text data mining and analytics." + }, + { + "0:22": "It is relevant to discovery of various different kinds of knowledge, as shown here. First, it's related to topic mining and analysis, and that's because it has to do with analyzing text data based on some predefined topics. Secondly, it's also related to opinion mining and sentiment analysis, which has to do with discovering knowledge about the observer, the human sensor. Because we can categorize the authors, for example, based on the content of the articles that they have written, right? We can, in general, categorize the observer based on the content that they produce." + }, + { + "1:12": "Finally, it's also related to text-based prediction, because we can often use text categorization techniques to predict some variables in the real world that are only remotely related to text data." + }, + { + "1:27": "And so, this is a very important technique for text data mining." + }, + { + "1:34": "This is the overall plan for covering the topic. First, we're going to talk about what text categorization is and why we're interested in doing that in this lecture. And then, we're going to talk about how to do text categorization and how to evaluate the categorization results. So, the problem of text categorization is defined as follows.
We're given a set of predefined categories, possibly forming a hierarchy. And often, we're also given a set of training examples, or a training set of labeled text objects, which means the text objects have already been labeled with known categories. And then, the task is to classify any text object into one or more of these predefined categories. So, the picture on this slide shows what happens." + }, + { + "2:30": "When we do text categorization, we have a lot of text objects to be processed by a categorization system, and the system will, in general, assign categories to these documents, as shown on the right in the categorization results. And we often assume the availability of training examples, and these are the documents that are tagged with known categories. These examples are very important for helping the system to learn patterns in different categories. And, this would further help the system then know how to recognize" + }, + { + "3:11": "the categories of new text objects that it has not seen. So, here are some specific examples of text categorization. And in fact, there are many examples, here are just a few." + }, + { + "3:27": "So first, text objects can vary, so we can categorize a document, or a passage, or a sentence, or collections of text. As in the case of clustering, the units to be analyzed can vary a lot, so this creates a lot of possibilities. Secondly, categories can also vary. In general, there are two major kinds of categories. One is internal categories. These are categories that characterize the content of the text object. For example, topic categories or sentiment categories; they generally have to do with the content of the text objects, and they are a categorization of the content." + }, + { + "4:08": "The other kind is external categories that can characterize an entity associated with the text object. For example, authors are entities associated with the content that they produce.
And so, we can use their content in determining which author has written which part, for example, and that's called author attribution." + }, + { + "4:33": "Or, we can have any other external categories associated with text data, as long as there is a meaningful connection between the entity and the text data. For example, we might collect a lot of reviews about a restaurant or a lot of reviews about a product, and then this text data can help us infer properties of a product or a restaurant. In that case, we can treat this as a categorization problem. We can categorize restaurants or categorize products based on their corresponding reviews. So, this is an example of an external category. Here are some specific examples of the applications. News categorization is very common and has been studied a lot. News agencies would like to assign predefined categories to categorize news generated every day. And, research article categorization is another important application. For example, in the biomedical domain, there are MeSH annotations. MeSH stands for Medical Subject Headings, and this is an ontology of terms" + }, + { + "5:49": "characterizing the content of literature articles in detail." + }, + { + "5:54": "Another example of an application is spam email detection or filtering, right? So, we often have a spam filter to help us distinguish spam from legitimate emails, and this is clearly a binary classification problem." + }, + { + "6:14": "Sentiment categorization of product reviews or tweets is yet another kind of application, where we can categorize comments into positive or negative, or positive, negative, and neutral." + }, + { + "6:27": "So, you can have sentiment categories assigned to text content." + }, + { + "6:35": "Another application is automatic email routing or sorting. So, you might want to automatically sort your emails into different folders, and that's one application of text categorization where each folder is a category."
+ }, + { + "6:48": "A related application is routing emails to the right person to handle. So, in a helpdesk, email messages are generally routed to a particular person to handle. Different people tend to handle different kinds of requests. And in many cases, a person would manually assign the messages to the right people. But, as you can imagine, you could use an automatic text categorization system to help route requests. This would classify an incoming request into one of the categories, where each category actually corresponds to a person who handles the request. And finally, author attribution, as I just mentioned, is yet another application, and it's another example of using text to actually infer properties of" + }, + { + "7:41": "some other entities. And, there are also many variants of the problem formulation. And so, first, we have the simplest case, which is binary categorization, where there are only two categories. And, there are many examples like that, such as information retrieval or search engine" + }, + { + "7:59": "applications, where one is distinguishing relevant documents from non-relevant documents for a particular query." + }, + { + "8:06": "Spam filtering is just distinguishing spam from non-spam, so, also two categories. Sometimes, classification of opinions can be in two categories, positive and negative." + }, + { + "8:19": "A more general case would be K-category categorization, and there are also many applications like that, where there could be more than two categories. So, topic categorization is often such an example, where you can have multiple topics. Email routing would be another example, where you may have multiple folders, or if you route the email to the right person to handle it, then there are multiple people to classify into. So, in all these cases, there are more than two categories."
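A K-category task like the ones above can be reduced to binary decisions, one per category. Here is a minimal sketch (not from the lecture) where hypothetical keyword-counting scorers stand in for trained binary classifiers; each scorer answers "how strongly is this document in my category?", and we pick the most confident one.

```python
def one_vs_rest_classify(doc, binary_scorers):
    """K-category categorization via binary categorization:
    run one 'in this category or not?' scorer per category
    and pick the category whose scorer is most confident.
    `binary_scorers` maps category -> function(doc) -> score."""
    return max(binary_scorers, key=lambda cat: binary_scorers[cat](doc))

# Hypothetical keyword-counting scorers standing in for trained classifiers.
scorers = {
    "sports":   lambda d: d.count("game") + d.count("sports"),
    "politics": lambda d: d.count("election") + d.count("senate"),
}
doc = ["the", "game", "was", "great", "sports", "news"]
print(one_vs_rest_classify(doc, scorers))  # sports
```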
+ }, + { + "8:49": "Another variation is to have hierarchical categorization, where categories form a hierarchy. Again, a topical hierarchy is very common." + }, + { + "8:58": "Yet another variation is joint categorization. That's when you have multiple categorization tasks that are related, and then you hope to do joint categorization to further leverage the dependency of these tasks to improve accuracy for each individual task. Among all these, binary categorization is the most fundamental, partly because it's simple and partly because it can actually be used to perform all the other categorization tasks. For example, a K-category categorization task can actually be performed by using binary categorization." + }, + { + "9:40": "Basically, we can look at each category separately, and then the binary categorization problem is whether an object is in this category or not, meaning it's in one of the other categories." + }, + { + "9:53": "And, hierarchical categorization can also be done by progressively doing flat categorization at each level. So, first, we categorize all the objects into, let's say, a small number of high-level categories, and inside each category, we further categorize into sub-categories, etc." + }, + { + "10:15": "So, why is text categorization important? Well, I already showed you several applications, but, in general, there are several reasons. One is that text categorization helps enrich text representation, and that's to achieve more understanding of text data, which is always useful for text analysis. So, with categorization, text can be represented at multiple levels. The keyword representation is often used for a lot of text processing tasks, but we can now also add categories, and they provide an additional level of representation." + }, + { + "10:55": "Semantic categories assigned can also be directly or indirectly useful for an application.
So, for example, semantic categories could be already very useful, or author attribution might be directly useful. Another example is when semantic categories can facilitate aggregation of text content, and this is another case of applications of text categorization." + }, + { + "11:25": "For example, if we want to know the overall opinions about a product, we" + }, + { + "11:32": "could first categorize the opinions in each individual review as positive or negative, and then that would allow us to easily aggregate all the sentiments, and it would tell us that about 70% of the reviews are positive and 30% are negative, etc." + }, + { + "11:53": "So, without doing categorization, it would be much harder to aggregate such opinions. Categorization provides a concise way of coding text, in some sense, based on a controlled vocabulary. And, sometimes you may see in some applications that text categorization is called text coding, encoding text with some controlled vocabulary. The second kind of reason is to use text categorization to infer properties of entities; text categories allow us to infer the properties of entities that are associated with text data. So, this means we can use text categorization to discover knowledge about the world. In general, as long as we can associate an entity with text data, we can always use the text data to help categorize the corresponding entities. So, it's useful to think of an information network that connects various entities with text data. The obvious entities that can be directly connected are authors. But, you can also imagine the author's affiliations or the author's age and other things can actually be connected to text data indirectly. Once we have made the connection, then we can make a prediction about those values. So, this is a general way to allow us to use text mining, through text categorization, to discover knowledge about the world.
Very useful, especially in big text data analytics, where we are often just using text data as an extra set of data extracted from humans to infer certain decision factors, often together with non-textual data. Specifically, we can also think of examples of inferring properties of entities. For example, discovery of non-native speakers of a language. And, this can be done by categorizing the content produced by the speakers." + }, + { + "14:00": "Another example is to predict the party affiliation of a politician based on their political speeches. And, this is again an example of using text data to infer some knowledge about the real world. In nature, these problems are all the same as what we defined: a text categorization problem. [MUSIC]" + } + ] + }, + { + "4-8-text-categorization-methods": [ + { + "0:06": "This lecture is about the methods for text categorization." + }, + { + "0:12": "So in this lecture we're going to discuss how to do text categorization." + }, + { + "0:19": "First, there are many methods for text categorization." + }, + { + "0:25": "In a rule-based method, the idea is to determine the category based on some rules that we design carefully to reflect the domain knowledge about the category prediction problem. So for example, if you want to do topic categorization for news articles, you can say, well, if the news article mentions words like 'game' and 'sports' three times, then we're going to say it's about sports, things like that, and this would allow us to deterministically decide which category a document should be put into." + }, + { + "1:02": "Now such a strategy would work well if the following conditions hold. First, the categories must be very well defined, and this allows a person to clearly decide the category based on some clear rules." + }, + { + "1:21": "And certainly, the categories" + }, + { + "1:25": "have to be easy to distinguish based on surface features in text.
So that means surface features like keywords or punctuation, or whatever you can easily identify in the text data." + }, + { + "1:41": "For example, if there is some special vocabulary that is known to only occur in a particular category, that would be most effective, because we can easily use such a vocabulary, or a pattern of such a vocabulary, to recognize this category." + }, + { + "1:57": "Now we also should have sufficient knowledge for designing these rules, and if that's the case, then such an approach can be effective. And it does work in some domains and sometimes. However, in general, there are several problems with this approach. First, it's labor-intensive, as it requires a lot of manual work. Obviously, we can't do this for all kinds of categorization problems. We have to do it from scratch for each different problem, because the rules are problem-specific. So it doesn't scale up well." + }, + { + "2:41": "Secondly, it cannot handle uncertainty in rules; often the rules aren't 100% reliable. Take for example looking at occurrences of words in text and trying to decide the topic." + }, + { + "2:57": "It's actually very hard to have a 100% correct rule. So for example you can say, well, if it has 'game', 'sports', 'basketball', then for sure it's about sports. But one can also imagine some types of articles that mention these keywords but may not be exactly about sports, or only marginally touch on sports. The main topic could be a different topic than sports." + }, + { + "3:27": "So that's one disadvantage of this approach. And then finally, the rules may be inconsistent, and this would lead to robustness problems. More specifically, sometimes the results of categorization may differ depending on which rule is applied, so in that case you are facing uncertainty. And you will also have to decide on an order of applying the rules, or a combination of results that are contradictory.
So all these are problems with this approach. And it turns out that these problems can be solved or alleviated by using machine learning." + }, + { + "4:07": "So these machine learning methods are more 'automatic'. But I still put automatic in quotation marks, because they are not really completely automatic; they still require manual work. More specifically, we have to use human experts to help in two ways. First, the human experts must annotate data with category labels, which would tell the computer which documents should receive which categories. And this is called training data." + }, + { + "4:38": "And then secondly, the human experts also need to provide a set of features to represent each text object, features that can potentially provide a clue about the category. So, we need to provide some basic features for the computer to look into." + }, + { + "4:55": "In the case of text, a natural choice would be the words. So, using each word as a feature is a very common choice to start with, but of course there are other sophisticated features like phrases, or even part-of-speech tags, or even syntactic structures. So once human experts can provide this, then we can use machine learning to learn soft rules for categorization from the training data. Soft rules just means we're still going to decide which category should be assigned to a document, but we're not going to use a rule that is deterministic. So we might use something similar to saying that if it mentions 'game' and 'sports' many times, it's likely to be sports. But we're not going to say that for sure; instead, we're going to use probabilities or weights, so that we can combine much more evidence. So, the learning process basically is going to figure out which features are most useful for separating different categories. And it's going to also figure out how to optimally combine features to minimize errors of the categorization on the training data.
So the training data, as you can see here, is very important. It's the basis for learning. And then, the trained classifier can be applied to a new text object to predict the most likely category, and that's to simulate the prediction of what a human would assign to this text object, if the human were to make a judgment. So when we use machine learning for text categorization, we can also talk about the problem in the general setting of supervised learning. The setup is to learn a classifier to map a value of X into a value of Y. Here X is the set of all the text objects, and Y is the set of all the categories. So the classifier will take any value in X as input and will generate a value in Y as output. We hope that output y is the right category for x. And here correct, of course, is judged based on the training data. So that's the general goal in machine learning problems, or supervised learning problems, where you are given some examples of input and output for a function, and then the computer is going to figure out how the function behaves based on these examples, and then try to be able to compute the values for future x's that we have not seen." + }, + { + "7:38": "So in general, all methods would rely on discriminative features of text objects to distinguish different categories. So that's why these features are very important, and they have to be provided by humans. And they will also combine multiple features with weights that are optimized to minimize errors on the training data. So the learning process is an optimization problem, where the objective function is often tied to the errors on the training data." + }, + { + "8:12": "Different methods tend to vary in their ways of measuring the errors on the training data. They might optimize a different objective function, which is often also called a loss function or cost function." + }, + { + "8:26": "They also tend to vary in their ways of combining the features.
So a linear combination, for example, is simple and often used, but it's not as powerful as a nonlinear combination. Nonlinear models might be more complex for training, though, so there are tradeoffs as well. And that would lead to" + }, + { + "8:50": "many variations of these learning methods. So in general we can distinguish two kinds of classifiers at a high level. One is called generative classifiers. The other is called discriminative classifiers. The generative classifiers try to learn what the data looks like in each category. So they attempt to model the joint distribution of the data and the label, X and Y, and this can then be factored into a product of p(Y), the distribution of labels, and the conditional probability of X given Y. So we first model the distribution of labels, and then we model how the data is generated given a particular label." + }, + { + "9:48": "And once we can estimate these models, then we can compute this conditional probability of the label given the data, based on the probability of the data given the label" + }, + { + "10:02": "and the label distribution here, by using Bayes' rule." + }, + { + "10:07": "Now this is the most important thing, because this conditional probability of the label can then be used directly to decide which label is most likely." + }, + { + "10:18": "So in such approaches, the objective function is actually the likelihood. So, we model how the data are generated, and this only indirectly captures the training errors. But if we can model the data in each category accurately, then we can also classify accurately." + }, + { + "10:38": "One example is the Naive Bayes classifier in this case. The other kind of approaches are called discriminative classifiers, and these classifiers try to learn what features separate the categories. So they directly attack the problem of categorization as a problem of separating classes."
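The generative recipe just described, factoring the joint distribution into p(Y) p(X|Y) and inverting it with Bayes' rule to get p(Y|X), can be illustrated numerically. The priors and likelihoods below are made up for illustration, over a single discrete feature:

```python
def posterior(priors, likelihoods, x):
    """Bayes' rule: p(y|x) = p(y) * p(x|y) / sum over y' of p(y') * p(x|y').
    `priors[y]` is p(y); `likelihoods[y][x]` is p(x|y)."""
    joint = {y: priors[y] * likelihoods[y][x] for y in priors}
    z = sum(joint.values())  # p(x), the normalizing constant
    return {y: joint[y] / z for y in joint}

# Hypothetical two-class example over a tiny discrete feature.
priors = {"spam": 0.3, "ham": 0.7}
likelihoods = {
    "spam": {"offer": 0.8, "meeting": 0.2},
    "ham":  {"offer": 0.1, "meeting": 0.9},
}
post = posterior(priors, likelihoods, "offer")
print(post)  # spam: 0.24/0.31, about 0.774, so "spam" is the more likely label
```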
+ }, + { + "11:04": "So, these discriminative classifiers attempt to model the conditional probability of the label given the data point directly." + }, + { + "11:17": "So, the objective function tends to directly measure the errors of categorization on the training data." + }, + { + "11:24": "Some examples include logistic regression, support vector machines, and k-nearest neighbors. We will cover some of these classifiers in detail in the next few lectures. [MUSIC]" + } + ] + }, + { + "4-9-text-categorization-generative-probabilistic-models": [ + { + "0:00": "[SOUND] This lecture is about how to use generative probabilistic models for text categorization." + }, + { + "0:14": "There are in general two kinds of approaches to text categorization by using machine learning. One is generative probabilistic models. The other is discriminative approaches. In this lecture, we're going to talk about the generative models. In the next lecture, we're going to talk about discriminative approaches. So the problem of text categorization is actually very similar to document clustering, in that we assume that each document belongs to one category or one cluster. The main difference is that in clustering we don't really know what the predefined categories, the clusters, are. In fact, that's the goal of text clustering." + }, + { + "0:55": "We want to find such clusters in the data." + }, + { + "0:59": "But in the case of categorization, we are given the categories. So we have pre-defined categories, and then based on these categories and training data, we would like to allocate a document to one of these categories, or sometimes multiple categories. But because of the similarity of the two problems, we can actually adapt the document clustering models for text categorization. And we can understand how to use generative models to do text categorization from the perspective of clustering.
And so, this is a slide that we've talked about before, about text clustering, where we assume there are multiple topics represented by word distributions. Each topic is one cluster. So once we've estimated such a model, we face the problem of deciding which cluster document d should belong to. And this question boils down to deciding which theta i has been used to generate d." + }, + { + "2:06": "Now, suppose d has L words, represented as xi here. Now, how can you compute the probability that a particular topic word distribution theta i has been used to generate this document?" + }, + { + "2:27": "Well, in general, we use Bayes' rule to make this inference, and you can see this prior information here that we need to consider: if a topic or cluster has a higher prior, then it's more likely that the document has come from this cluster, and so we should favor such a cluster. The other is the likelihood part, it's this part." + }, + { + "2:56": "And this has to do with whether the topic word distribution can explain the content of this document well. And we want to pick a topic that scores high on both values. So more specifically, we just multiply them together and then choose the topic that has the highest product. So more rigorously, this is what we'd be doing. So we're going to choose the topic that would maximize this posterior probability of the topic given the document. It's called posterior because this one, p of theta i, is the prior. That's our belief about which topic is more likely before we observe any document. But this conditional probability here is the posterior probability of the topic after we have observed the document d." + }, + { + "3:49": "And Bayes' rule allows us to update this probability based on the prior, and I have shown the details below; here you can see how the prior is related to the posterior on the left-hand side." + }, + { + "4:05": "And this is related to how well this word distribution explains the document here, and the two are related in this way.
So to find the topic that has the highest posterior probability here, it's equivalent to maximizing this product, as we have seen also multiple times in this course." + }, + { + "4:32": "And we can then decompose the probability of the document into a product of the probabilities of each word, and that's just because we've made an assumption of independence in generating each word. So this is just something that you have seen in document clustering." + }, + { + "4:50": "And we now can see clearly how we can assign a document to a category based on the information about the word distributions for these categories and the prior on these categories. So this idea can be directly adapted to do categorization. And this is precisely what a Naive Bayes classifier is doing. So here it's mostly the same information, except that we're looking at the categorization problem now. So we assume that theta i represents category i accurately; that means the word distribution characterizes the content of documents in category i accurately. Then, what we can do is precisely what we did for text clustering. Namely, we're going to assign document d to the category that has the highest probability of generating this document. In other words, we're going to maximize this posterior probability as well." + }, + { + "5:56": "And this is related to the prior and the likelihood, as you have seen on the previous slide. And so, naturally we can decompose this likelihood into a product, as you see here. Now, here, I changed the notation so that we write down the product as a product over all the words in the vocabulary, even though the document doesn't contain all the words. And the product still accurately represents the product over all the words in the document, because of this count here." + }, + { + "6:37": "When a word doesn't occur in the document, the count would be 0, so that term will just disappear. So effectively we'll just have the product over the words in the document.
So basically, with a Naive Bayes classifier, we're going to score each category for the document by this function." + }, + { + "6:56": "Now, you may notice that this involves a product of a lot of small probabilities, and this can cause an underflow problem. So one way to solve the problem is to take the logarithm of this function, which doesn't change the ordering of these categories but helps us preserve precision. And so, this is often the function that we actually use to score each category, and then we're going to choose the category that has the highest score by this function. So this is called a Naive Bayes classifier. Now, the keyword Bayes is understandable, because we are applying Bayes' rule here when we go from the posterior probability of the topic to a product of the likelihood and the prior." + }, + { + "7:47": "Now, it's also called naive because we've made an assumption that every word in the document is generated independently, and this is indeed a naive assumption, because in reality they're not generated independently. Once you see some words, then other words will more likely occur. For example, if you have seen a word like 'text', then words like 'clustering' are more likely to appear than if you have not seen 'text'." + }, + { + "8:15": "But this assumption allows us to simplify the problem, and it's actually quite effective for many text categorization tasks. But you should know that this kind of model doesn't have to make this assumption. We could, for example, assume that words may be dependent on each other. That would make it a bigram language model or a trigram language model. And of course you can even use a mixture model to model what the document looks like in each category. So in nature, they would all be using Bayes' rule to do classification, but the actual generative model for documents in each category can vary. And here, we just talked about a very simple case, perhaps the simplest case."
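The log-space scoring function just described, log of the prior plus, for each word, its count in the document times the log of its probability in the category's word distribution, can be sketched as follows. The category models here are made up for illustration, not from the lecture's slides:

```python
import math
from collections import Counter

def nb_log_score(doc_words, prior, word_probs):
    """Naive Bayes score in log space to avoid underflow:
    log p(theta) + sum over words w of c(w, d) * log p(w | theta)."""
    counts = Counter(doc_words)
    return math.log(prior) + sum(c * math.log(word_probs[w]) for w, c in counts.items())

# Hypothetical category models: (prior, word distribution) per category.
categories = {
    "sports":   (0.5, {"game": 0.5, "vote": 0.1, "the": 0.4}),
    "politics": (0.5, {"game": 0.1, "vote": 0.5, "the": 0.4}),
}
doc = ["the", "game", "game"]
best = max(categories, key=lambda c: nb_log_score(doc, *categories[c]))
print(best)  # sports: 'game' has much higher probability under the sports model
```

Working in log space preserves the ordering of categories (log is monotonic) while keeping the numbers in a range floating point can represent.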
+ }, + { + "9:00": "So now the question is, how can we make sure theta i actually represents category i accurately? Now in clustering, we learned the word distributions for each category i from the data. But in our case, what can we do to make sure this theta i indeed represents category i?" + }, + { + "9:25": "Well, if you think about the question, you'll likely come up with the idea of using the training data." + }, + { + "9:34": "Indeed, in the textbook we typically assume that there is training data available, and those are the documents that are known to have been generated from each category. In other words, these are documents with known categories assigned, and of course human experts must do that. Here, you see that T1 represents the set of documents that are known to have been generated from category 1. And T2 represents the documents that are known to have been generated from category 2, etc. Now if you look at this picture, you'll see that the model here is really a simplified unigram language model. It's no longer a mixture model. Why? Because we already know which distribution has been used to generate which documents. There's no uncertainty here; there's no mixing of different categories here." + }, + { + "10:30": "So the estimation problem of course would be simplified. But in general, you can imagine what we want to do is estimate these probabilities that I marked here. And what are the probabilities that we have to estimate in order to do classification? Well, there are two kinds. One is the prior, the probability of theta i, and this indicates how popular each category is, or how likely we are to observe a document in that category. The other kind is the word distributions, and we want to know what words have high probabilities for each category." + }, + { + "11:11": "So the idea then is to just use the observed training data to estimate these two probabilities."
+ }, + { + "11:18": "And in general, we can do this separately for the different categories. That's just because these documents are known to be generated from a specific category. So once we know that, it's in some sense irrelevant what other categories we are also dealing with." + }, + { + "11:37": "So now this is a statistical estimation problem. We have observed some data from some model, and we want to guess the parameters of this model. We want to take our best guess of the parameters." + }, + { + "11:51": "And this is a problem that we have seen several times in this course. Now, if you haven't thought about this problem and haven't seen a Naive Bayes classifier, it would be very useful for you to pause the video for a moment and think about how to solve this problem. So let me state the problem again. Let's just think about category 1. We know there is one word distribution that has been used to generate documents, and we generate each word in the document independently. And we know that we have observed a set of N sub 1 documents in the set T1. These documents have all been generated from category 1; namely, they have all been generated using this same word distribution. Now the question is, what would be your guess or estimate of the probability of each word in this distribution? And what would be your guess of the prior probability of this category? Of course, this prior probability depends on how likely you are to see documents in other categories." + }, + { + "12:55": "So think for a moment: how do you use all this training data, including all these documents that are known to be in these k categories, to estimate all these parameters? Spending some time to think about this will help you understand the following few slides. So do spend some time to try to solve this problem, or do your best to solve the problem yourself. Now if you have thought about it, then you will realize the following."
+ }, + { + "13:29": "First, what's the basis for estimating the prior, or the probability of each category? Well, this has to do with whether you have observed a lot of documents from that category. Intuitively, if you have seen a lot of documents in sports and very few in medical science, then your guess is that the probability of the sports category is larger, or your prior on that category would be larger." + }, + { + "13:57": "And what about the basis for estimating the probability of a word in each category? Well, the same: we'll just assume that words that are observed frequently in the documents that are known to be generated from a category will likely have a higher probability. And that's just the maximum likelihood estimate. Indeed, that's what we can do. So to estimate the probability of each category, and to answer the question of which category is most popular, we can simply normalize the count of documents in each category. So here you see N sub i denotes the number of documents in category i." + }, + { + "14:37": "And we simply normalize these counts to make this a probability. In other words, we make this probability proportional to the size of the training set in each category, that is, the size of the set T sub i." + }, + { + "14:55": "Now what about the word distribution? Well, we do the same. Again, this time we can do this for each category. So let's say we're considering category i, or theta i. So which word has a higher probability? Well, we simply count the word occurrences in the documents that are known to be generated from theta i." + }, + { + "15:20": "And then we put together all the counts of the same word in the set, and then we just normalize these counts to make this a distribution over all the words, making all the probabilities of these words sum to 1. So in this case, you're going to see this is proportional to the count of the word in the collection of training documents T sub i, and that's denoted by c of w and T sub i."
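The two maximum likelihood estimates just described (priors proportional to per-category document counts N sub i, and word probabilities proportional to word counts c(w, T sub i)) can be sketched as follows. The function name and the data layout (a dict mapping each category to its list of documents, each document a list of words) are our own assumptions for illustration.

```python
from collections import Counter

def estimate_nb(training_sets, vocab):
    """Maximum likelihood estimates of Naive Bayes parameters.
    training_sets: {category: [documents, each a list of words]}."""
    total_docs = sum(len(docs) for docs in training_sets.values())
    priors, word_dists = {}, {}
    for cat, docs in training_sets.items():
        # p(theta_i) is proportional to N_i, normalized by the total doc count
        priors[cat] = len(docs) / total_docs
        # p(w | theta_i) is proportional to c(w, T_i), normalized to sum to 1
        counts = Counter(w for d in docs for w in d)
        total = sum(counts.values())
        word_dists[cat] = {w: counts[w] / total for w in vocab}
    return priors, word_dists
```

Note that this unsmoothed estimate assigns probability zero to any vocabulary word unseen in a category's training documents, which is exactly the problem smoothing addresses later in the lecture.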
+ }, + { + "15:49": "Now, you may notice that we often write down probability estimates in the form of being proportional to certain counts. And this is often sufficient, because we have some constraints on these distributions, so the normalizer is dictated by the constraint. In this case, it will be useful for you to think about what the constraints are on these two kinds of probabilities. Once you figure out the answer to this question, you will know how to normalize these counts. So this is a good exercise to work on if it's not obvious to you. There is another issue in Naive Bayes, which is smoothing. In fact, smoothing is a general problem in all estimation of language models, and it has to do with what would happen if you have observed only a small amount of data. So smoothing is an important technique to address data sparseness. In our case, the training data can be small, and when the data set is small and we use the maximum likelihood estimator, we often face the problem of zero probability. That means if an event is not observed, then the estimated probability would be zero. In this case, if we have not seen a word in the training documents for, let's say, category i, then our estimate would be zero for the probability of this word in this category, and this is generally not accurate. So we have to do smoothing to make sure there's no zero probability. The other reason for smoothing is that it is a way to bring in prior knowledge, and this is also generally true for a lot of situations of smoothing. When the data set is small, we tend to rely on some prior knowledge to solve the problem. So in this case our [INAUDIBLE] says that no word should have zero probability. So smoothing allows us to inject this prior knowledge that no word has a true zero probability." + }, + { + "17:54": "There is also a third reason, which is sometimes not very obvious, but we'll explain it in a moment. And that is to help achieve discriminative weighting of terms. 
And this is also called IDF weighting, inverse document frequency weighting, which you have seen in mining word relations." + }, + { + "18:14": "So how do we do smoothing? Well, in general we add pseudocounts to these events to make sure that no event has a 0 count." + }, + { + "18:22": "So one possible way of smoothing the probability of a category is to simply add a small nonnegative constant delta to the count. Let's pretend that every category actually has some extra number of documents, represented by delta." + }, + { + "18:40": "And in the denominator we also add k multiplied by delta, because we want the probabilities to sum to 1. In total we've added delta k times, because we have k categories. Therefore in the denominator, we have to also add k multiplied by delta as the total pseudocount added to the estimate." + }, + { + "19:06": "Now, it's interesting to think about the influence of delta. Obviously, delta is a smoothing parameter here, meaning that the larger delta is, the more smoothing we will do, and the more we will rely on the pseudocounts. And we might indeed ignore the actual counts if delta is set to infinity. Imagine what would happen as delta approaches positive infinity. Well, we are going to say every category has an infinite number of documents, and then there's no distinction among them, so the prior becomes just uniform." + }, + { + "19:44": "What if delta is 0? Well, we just go back to the original estimate, based only on the observed training data, to estimate the probability of each category. Now we can do the same for the word distribution. But in this case, sometimes we find it useful to use a nonuniform pseudocount for each word. So here you see we add a pseudocount to each word, and that's mu multiplied by the probability of the word given by a background language model, theta sub B. Now that background model in general can be estimated by using a large collection of text. 
Or in this case, we will use the whole set of all the training data to estimate this background language model. But we don't have to use this one; we can use larger text data that are available from somewhere else." + }, + { + "20:36": "Now if we use such a background language model for the pseudocounts, we'll find that some words will receive more pseudocounts. So what are those words? Well, those are the common words, because they get a high probability from the background language model. So the pseudocounts added for such words will be higher. Rare words, on the other hand, will have smaller pseudocounts. Now this addition of a background model causes a nonuniform smoothing of these word distributions. We're going to bring the probability of those common words to a higher level because of the background model. Now this helps make the difference in the probability of such words smaller across categories, because every category gets some help from the background model for words like 'the' and 'a', which have high probabilities. Therefore, it no longer matters so much whether each category has documents that contain a lot of occurrences of such words, because the estimate is more influenced by the background model. And the consequence is that when we do categorization, such words tend not to influence the decision as much as words that have small probabilities in the background language model. Those words don't get much help from the background language model, so their differences would be primarily due to the differences in their occurrences in the training documents in different categories." + }, + { + "22:05": "We also see another smoothing parameter, mu, here, which controls the amount of smoothing, just like delta does for the other probability." + }, + { + "22:14": "And you can easily understand why we add mu to the denominator, because that represents the sum of all the pseudocounts that we add for all the words."
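The two smoothing formulas described above (delta pseudocounts for the category prior, and mu times the background probability for each word) can be sketched directly. The function names and argument names are our own; the formulas follow the lecture's description.

```python
def smoothed_prior(n_i, n_total, k, delta):
    """p(theta_i) = (N_i + delta) / (N + k * delta).
    Each of the k categories pretends to have delta extra documents,
    so the denominator grows by k * delta to keep the priors summing to 1."""
    return (n_i + delta) / (n_total + k * delta)

def smoothed_word_prob(count_w_in_Ti, total_count_Ti, p_w_background, mu):
    """p(w | theta_i) = (c(w, T_i) + mu * p(w | theta_B)) / (|T_i| + mu).
    Each word gets mu * p(w | theta_B) pseudocounts, so common words
    receive larger pseudocounts than rare words; the denominator grows
    by mu, the total pseudocount added across the vocabulary."""
    return (count_w_in_Ti + mu * p_w_background) / (total_count_Ti + mu)
```

Setting delta or mu to 0 recovers the unsmoothed maximum likelihood estimates, while very large values push the priors toward uniform and the word distributions toward the background model, matching the special cases discussed in the lecture.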
+ }, + { + "22:25": "So mu is also a nonnegative constant, and it's [INAUDIBLE] set to control smoothing. Now there are some interesting special cases to think about as well. First, let's think about what would happen when mu approaches infinity. Well, in this case the estimate would approach" + }, + { + "22:43": "the background language model. So we will bring every word distribution to the same background language model, and that essentially removes the difference between these categories. Obviously, we don't want to do that. The other special case is to think about the background model, and suppose we actually set it to a uniform distribution, let's say 1 over the size of the vocabulary. So each word has the same probability; then this smoothing formula is going to be very similar to the one on the top where we add delta, because we're going to add a constant pseudocount to every word." + }, + { + "23:29": "So in general, in Naive Bayes categorization, we have to do such smoothing. And then once we have these probabilities, we can compute the score for each category for a document, and then choose the category with the highest score, as we discussed earlier." + }, + { + "23:49": "Now, it's useful to further understand whether the Naive Bayes scoring function actually makes sense, and also to understand why adding a background model will actually achieve the effect of IDF weighting and penalize common words. So suppose we have just two categories, and we're going to score based on the ratio of probabilities, right?" + }, + { + "24:24": "Let's say this is our scoring function for two categories, right? So, this is the score of a document for these two categories, and we're going to score based on this probability ratio. So if the ratio is larger, then it means the document is more likely to be in category one. 
So the larger the score is, the more likely the document is in category one. So by using Bayes' rule, we can write down this ratio as follows, and you have seen this before." + }, + { + "25:09": "Now, we generally take the logarithm of this ratio to avoid small probabilities. And this would then give us the formula in the second line. And here we see something really interesting, because this is our scoring function for deciding between the two categories." + }, + { + "25:30": "And if you look at this function, we'll see it has several parts. The first part here is actually the log of the prior probability ratio, and so this is a category bias." + }, + { + "25:41": "It doesn't really depend on the document. It just says which category is more likely, and we would then favor this category slightly, right? The second part is a sum over all the words. These are the words that are observed in the document, but in general we can consider all the words in the vocabulary. So here we're going to collect the evidence about which category is more likely. Inside the sum you can see there is a product of two things. The first is the count of the word, and this count of the word serves as a feature to represent the document." + }, + { + "26:27": "And this is what we can collect from the document. The second part is the weight of this feature, here the weight on this word. This weight tells us to what extent observing this word contributes to our decision to put this document in category one. Now remember, the higher the scoring function is, the more likely the document is in category one. Now if you look at this weight, it's basically based on the ratio of the probability of the word from each of the two distributions. Essentially we're comparing the probability of the word from the two distributions. And if it's higher according to theta 1 than according to theta 2, then this weight would be positive. 
And therefore, it means when we observe such a word, we will say that the document is more likely to be from category one. And the more we observe such a word, the more likely the document will be classified as theta 1." + }, + { + "27:35": "If, on the other hand, the probability of the word from theta 1 is smaller than the probability of the word from theta 2, then you can see that this weight is negative. Therefore, this is negative evidence for supporting category one. That means the more we observe such a word, the more likely the document is actually from theta 2." + }, + { + "27:58": "So this formula now makes a lot of sense, right? We're going to aggregate all the evidence from the document; we take a sum over all the words. We can call these the features that we collect from the document that help us make the decision. And then each feature has a weight that tells us how" + }, + { + "28:19": "much this feature supports category one or category two. And this is estimated as the log of the probability ratio here in na\u00efve Bayes." + }, + { + "28:32": "And then finally we have this constant, the bias, here. So this formula actually can be generalized to accommodate more features, and that's why I have introduced some other symbols here: beta 0 to denote the bias, f sub i to denote each feature, and beta sub i to denote the weight on each feature. Once we do this generalization, what we see is that in general we can represent the document by a feature vector f. Here, of course, each f sub i is the count of a word, but in general, we can put in any features that we think are relevant for categorization, for example, document length, font size, or counts of other patterns in the document." + }, + { + "29:27": "And then our scoring function can be defined as the sum of a constant beta 0 and the sum of the feature weights over all the features."
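The rewriting just described, where two-category Naive Bayes becomes a linear scoring function beta 0 plus the sum of count features times log-probability-ratio weights, can be sketched as follows. The word distributions below are hypothetical toy values (the same assumed numbers one might use in any small example), and the helper name is our own.

```python
import math

def linear_score(counts, beta0, betas):
    """score(d) = beta_0 + sum over words of f_w * beta_w, with f_w = c(w, d).
    A positive score favors category one, a negative score category two."""
    return beta0 + sum(counts.get(w, 0) * b for w, b in betas.items())

# Hypothetical two-category word distributions (assumed values).
p1 = {"text": 0.5, "mining": 0.4, "game": 0.1}
p2 = {"text": 0.1, "mining": 0.1, "game": 0.8}

# With equal priors the bias beta_0 = log(p(theta1)/p(theta2)) = 0, and each
# word's weight is the log probability ratio log(p(w|theta1)/p(w|theta2)).
beta0 = math.log(0.5 / 0.5)
betas = {w: math.log(p1[w] / p2[w]) for w in p1}

score = linear_score({"text": 2, "mining": 1}, beta0, betas)
```

Words more probable under theta 1 get positive weights and words more probable under theta 2 get negative weights, so the sign of the aggregated score decides the category, exactly as the transcript explains.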
+ }, + { + "29:42": "So if each f sub i is a feature value, then we multiply the value by the corresponding weight, beta sub i, and we just take the sum. And this is the aggregation of all the evidence that we can collect from all these features. And of course there are parameters here. So what are the parameters? Well, these are the betas. These betas are weights. And with a proper setting of the weights, we can expect such a scoring function to work well to classify documents, just like in the case of Naive Bayes. We can clearly see the Naive Bayes classifier as a special case of this general classifier. Actually, this general form is very close to a classifier called logistic regression, and this is actually one of those conditional approaches, or discriminative approaches, to classification." + }, + { + "30:32": "And we're going to talk more about such approaches later, but here I want you to note that there is a strong connection, a close connection, between the two kinds of approaches. And this slide shows how the Naive Bayes classifier can be connected to logistic regression. And you can also see that in discriminative classifiers, which tend to use the more general form at the bottom, we can accommodate more features to solve the problem. [MUSIC]" + } + ] + } + ] + }, + { + "Week 5": [ + { + "5-1-text-categorization-discriminative-classifier-part-1": [ + { + "0:00": "[SOUND] This lecture is about discriminative classifiers for text categorization." + }, + { + "0:13": "In this lecture we're going to continue talking about how to do text categorization and cover discriminative approaches. This is a slide that you have seen from the discussion of the Naive Bayes classifier, where we have shown that although the Naive Bayes classifier tries to model the generation of text data from each category, we can actually use Bayes' rule to eventually rewrite the scoring function as you see on this slide. 
And this scoring function is basically a weighted combination of a lot of word features, where the feature values are word counts, and the feature weights are the log of the probability ratios of the word given by the two distributions here." + }, + { + "0:57": "Now this kind of scoring function can actually be a general scoring function, where we represent text data as a feature vector. Of course, the features don't have to be all the words; the features can be other signals that we want to use. And we mentioned that this is very similar to logistic regression. So, in this lecture we're going to introduce some discriminative classifiers. They try to model the conditional distribution of labels given the data directly, rather than using Bayes' rule to compute that indirectly, as we have seen in Naive Bayes. So the general idea of logistic regression is to model the dependency of a binary response variable Y on some predictors that are denoted as X. So here we have also changed the notation to X for feature values." + }, + { + "2:07": "You may recall in the previous slides we used f sub i to represent the feature values." + }, + { + "2:13": "And here we use the notation of a vector X, which is more common when we introduce such discriminative algorithms. So, X is our input. It's a vector with n features, and each feature has a value x sub i here. And we want to model the dependency of this binary response variable on these features. So in our categorization problem, we have two categories, theta 1 and theta 2, and we can use the Y value to denote the two categories: when Y is 1, it means the category of the document is the first class, theta 1. Now, the goal here is to model the conditional probability of Y given X directly, as opposed to modeling the generation of X and Y as in the case of Naive Bayes. 
And another advantage of this kind of approach is that it allows many features other than words to be used in this vector, since we're not modeling the generation of this vector, and we can plug in any signals that we want. So this is potentially advantageous for doing text categorization. More specifically, in logistic regression, we assume the functional form of Y depending on X is the following. And this is very closely related to the log odds that I introduced in Naive Bayes, or the log of the probability ratio of the two categories that you have seen on the previous slide." + }, + { + "3:57": "So this is what I meant. In the case of Naive Bayes, we compute this by using those words, and eventually we reach a formula that looks like this." + }, + { + "4:12": "But here we would explicitly assume that we model the probability of Y given X" + }, + { + "4:29": "directly as a function of these features." + }, + { + "4:37": "So, more specifically, we assume that the ratio of the probability of Y equals 1 and the probability of Y equals 0 is a function of X." + }, + { + "4:54": "All right, so it's a function of X, and it's a linear combination of these feature values controlled by the beta values." + }, + { + "5:02": "And since we know that the probability of Y equals 0 is one minus the probability of Y equals 1, this can also be written in this way. So this is the log odds ratio here." + }, + { + "5:22": "And so in logistic regression, we're basically assuming that the probability of Y equals 1, given X, is dependent on this linear combination of all these features. It's just one of the many possible ways of assuming the dependency, but this particular form has been quite useful and it also has some nice properties." + }, + { + "5:47": "So if we rewrite this equation to actually express the probability of Y given X in terms of X, by getting rid of the logarithm, we get this functional form, and this is called the logistic function. 
It's a transformation of X into Y, as you see" + }, + { + "6:08": "on the right side here, so that the X's will be mapped into a range of values from 0 to 1, as you can see. And that's precisely what we want, since we have a probability here." + }, + { + "6:24": "And the functional form looks like this." + }, + { + "6:28": "So this is the basic idea of logistic regression. And it's a very useful classifier that can be used to do a lot of classification tasks, including text categorization." + }, + { + "6:41": "So as in all cases of modeling, we would be interested in estimating the parameters. In fact, in all of these machine learning approaches, once you set up the model and set up the objective function, the next step is to compute the parameter values. In general, we're going to adjust these parameter values to optimize the performance of the classifier on the training data. So in our case, let's assume we have the training data here, xi and yi, and each pair is basically a feature vector x and a known label for that x. Y is either 1 or 0. So in our case we are interested in maximizing this conditional likelihood." + }, + { + "7:31": "The conditional likelihood here is basically to model Y given the observed X. So we're not modeling X, but rather we're going to model this. Note that this is a conditional probability of Y given X, and this is also precisely what we want for classification. Now the likelihood function would be just a product over all the training cases. And in each case, this is the model of the probability of observing this particular training case. So given a particular Xi, how likely are we to observe the corresponding Yi? Of course, Yi could be 1 or 0, and in fact, the functional form here would vary depending on whether Yi is 1 or 0. If it's 1, it takes this form, and that's basically the logistic regression function. But what about if it's 0? Well, if it's 0, then we have to use a different form, and that's this one."
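The logistic function just described, which squashes the linear score beta 0 plus the weighted sum of features into the (0, 1) range required of a probability, can be sketched in a few lines. The function name and the toy inputs in the usage note are our own assumptions for illustration.

```python
import math

def logistic(x, beta0, betas):
    """p(Y = 1 | x) = 1 / (1 + exp(-(beta_0 + sum over i of beta_i * x_i))).
    The linear score z can be any real number; the logistic transform maps
    it into (0, 1), with z = 0 corresponding to probability exactly 0.5."""
    z = beta0 + sum(b * xi for b, xi in zip(betas, x))
    return 1.0 / (1.0 + math.exp(-z))
```

For example, a document whose linear score is zero would receive probability 0.5 of being in category one, and larger scores monotonically increase that probability without ever leaving the (0, 1) interval.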
+ }, + { + "8:48": "Now, how do we get this one? Well, that's just 1 minus the probability of Y=1, right?" + }, + { + "8:55": "And you can easily see this. Now the key point here is that the functional form depends on the observed Yi: if it's 1, it has a different form than when it's 0. And if you think about when we want to maximize this probability, we're basically going to want this probability to be as high as possible when the label is 1, that is, when the document is in category 1. But if the document is not, we're going to maximize this value, and what's going to happen is actually to make the other value as small as possible, because the two sum to 1. Maximizing this one is equivalent to minimizing that one." + }, + { + "9:48": "So you can see, basically, if we maximize the conditional likelihood, we're going to try to make the predictions on the training data as accurate as possible." + }, + { + "10:00": "So when you compute the maximum likelihood estimate, basically you'll find a set of beta values that maximize this conditional likelihood." + }, + { + "10:12": "And this, again, gives us a standard optimization problem. In this case, it can be solved in many ways. Newton's method is a popular way to solve this problem; there are other methods as well. But in the end, we will obtain a set of beta values. Once we have the beta values, then we have a scoring function to help us classify a document." + }, + { + "10:39": "So what's the function? Well, it's this one. See, if we have all the beta values, then they are known. All we need is to compute the Xi for that document and then plug in those values. That will give us an estimated probability that the document is in category one." + }, + { + "10:59": "Okay so, so much for logistic regression. Let's also introduce another discriminative classifier called K-Nearest Neighbors. 
Now in general, I should say there are many such approaches, and a thorough introduction to all of them is clearly beyond the scope of this course. You should take a machine learning course or read more about machine learning to know about them. Here, I just want to include a basic introduction to some of the most commonly used classifiers, since you might use them often for text categorization. So the second classifier is called K-Nearest Neighbors. In this approach, we're going to also estimate the conditional probability of the label given the data, but in a very different way. So the idea is to keep all the training examples, and then once we see a text object that we want to classify, we're going to find the K examples in the training set that are most similar to this text object. Basically, this is to find the neighbors of this text object in the training data set. So once we have found the neighborhood and the objects that are close to the object we are interested in classifying, let's say we have found the K nearest neighbors, and that's why this method is called K-Nearest Neighbors. Then we're going to assign the category that's most common among these neighbors. Basically, we're going to allow these neighbors to vote for the category of the object that we're interested in classifying." + }, + { + "12:33": "Now that means if most of them have a particular category, say category one, they're going to say this current object has category one." + }, + { + "12:43": "This approach can also be improved by considering the distance between a neighbor and the current object. Basically, we can assume a close neighbor would have more say about the category of the object, so we can give such a neighbor more influence on the vote, and we can discount the votes based on the distances." + }, + { + "13:06": "But the general idea is to look at the neighborhood, and then try to assess the category based on the categories of the neighbors. 
Intuitively, this makes a lot of sense. But mathematically, this can also be regarded as a way to directly estimate the conditional probability of the label given the data, that is, p of Y given X." + }, + { + "13:28": "Now I'm going to explain this intuition in a moment, but before we proceed, let me emphasize that we do need a similarity function here in order for this to work. Note that in the Naive Bayes classifier, we did not need a similarity function. And in logistic regression, we did not talk about a similarity function either, but here we explicitly require a similarity function. Now this similarity function actually is a good opportunity for us to inject any of our insights about the features. Basically, effective features are those that would make the objects in the same category look more similar, while distinguishing objects in different categories. So the design of this similarity function is closely tied to the design of the features in logistic regression and other classifiers. So let's illustrate how K-NN works. Now suppose we have a lot of training instances here, and I've colored them differently to show different categories. Now suppose we have a new object in the center that we want to classify. So according to this approach, we work on finding the neighbors. Now, let's first think of a special case of finding just one neighbor, the closest neighbor." + }, + { + "14:53": "Now in this case, let's assume the closest neighbor is the box filled with diamonds. So then we're going to say, well, since this object is in the diamond category, let's say, we're going to assign the same category to our text object. But let's also look at another possibility of finding a larger neighborhood, so let's think about the four nearest neighbors." + }, + { + "15:26": "In this case, we're going to include a lot of other solid filled boxes in red or pink, right? 
So in this case now, we're going to notice that among the four neighbors, there are three neighbors in a different category. So if we take a vote, then we'll conclude the object is actually of a different category. So this illustrates both how K-Nearest Neighbors works and some potential problems of this classifier. Basically, the result might depend on K, and indeed, K is an important parameter to optimize. Now, you can intuitively imagine, if we have a lot of neighbors around this object, then we'd be okay, because we have a lot of neighbors who will help us decide the category. But if we have only a few, then the decision may not be reliable. So on the one hand, we want to find more neighbors, right? And then we have more votes. But on the other hand, as we try to find more neighbors, we actually risk getting neighbors that are not really similar to this instance. They might actually be far away as you try to get more neighbors. So although you get more neighbors, those neighbors aren't necessarily so helpful because they are not very similar to the object. So the parameter still has to be set empirically. And typically, you can optimize such a parameter by using cross validation. Basically, you're going to separate your training data into two parts, and then you're going to use one part to actually help you choose the parameter K here, or some other parameters in other classifiers. And then you're going to assume the setting that works well on your training data will actually also be best for your future data." + }, + { + "17:23": "So as I mentioned, K-NN can actually be regarded as an estimate of the conditional probability of y given x, and that's why we put this in the category of discriminative approaches. So the key assumption that we make in this approach is that the distribution of the label given the document, for example the probability of theta i given document d, is locally smooth. 
And that just means we're going to assume that this probability is the same for all the documents in this region R here. So suppose we draw a neighborhood; we're going to assume that in this neighborhood, since the data instances are very similar, the conditional distribution of the label given the data will be roughly the same. That is, if two documents are very similar, then we're going to assume that the probability of the category given the document would also be similar. So that's a very key assumption, and it's actually an important assumption that allows us to use a lot of machinery. But in reality, whether this is true, of course, would depend on how we define similarity. Because the neighborhood is largely determined by our similarity function. If our similarity function captures objects that do follow similar distributions, then this assumption is okay, but if our similarity function could not capture that, obviously this assumption would be a problem and then the classifier would not be accurate." + }, + { + "18:59": "Okay, let's proceed with this assumption. Then what we are saying is that, in order to estimate the probability of a category given a document, we can try to estimate the probability of the category given that entire region. Now, this has a benefit, of course, of bringing additional data points to help us estimate this probability." + }, + { + "19:22": "And so this is precisely the idea of k-NN. Basically, now we can use the known categories of all the documents in this region to estimate this probability. And I have even given a formula here, where you can see we just count the topics in this region and then normalize that by the total number of documents in the region. So the numerator that you see here, c of theta i and R, is the count of the documents in region R with category theta i. Since these are training documents, we know their categories. We can simply count how many times we see each category here, etc.
And then the denominator is just the total number of training documents in this region. So this gives us a rough estimate of which category is most popular in this neighborhood. And we are going to assign the most popular category to our data object, since it falls into this region. [MUSIC]" + } + ] + }, + { + "5-2-text-categorization-discriminative-classifier-part-2": [ + { + "0:07": "[SOUND] This lecture is a continued discussion of Discriminative Classifiers for Text Categorization. So, in this lecture, we're going to introduce yet another discriminative classifier called the Support Vector Machine, or SVM. It is a very popular classification method, and it has also been shown to be effective for text categorization." + }, + { + "0:31": "So to introduce this classifier, let's also think about the simple case of two categories. We have two topic categories, theta 1 and theta 2 here. And we want to classify documents into these two categories, and we're going to represent again a document by a feature vector x here." + }, + { + "0:53": "Now, the idea of this classifier is to design also a linear separator" + }, + { + "0:59": "here that you'll see, and it's very similar to what you have seen in logistic regression, right? And we're going to also say that if the sign of this function value is positive, then we're going to say the object is in category one. Otherwise, we're going to say it's in category two. So the value 0 marks the decision boundary between the two categories." + }, + { + "1:28": "So, in general, in a high-dimensional space, setting this function to 0 corresponds to a hyperplane." + }, + { + "1:38": "Now I've shown you a simple case of a two-dimensional space with just X1 and X2, and in this case this corresponds to a line that you can see here." + }, + { + "1:51": "So, this is a line defined by just three parameters here, beta zero, beta one, and beta two." + }, + { + "2:02": "Now, this line is heading in this direction, so it shows that as we increase X1, X2 will also increase.
So we know that beta one and beta two have different signs; one is negative and the other is positive." + }, + { + "2:20": "So let's just assume that beta one is negative and beta two is positive." + }, + { + "2:28": "Now, it's interesting to examine, then, the data instances on the two sides of the line. So, here, the data instances are visualized as circles for one class and diamonds for the other class." + }, + { + "2:43": "Now, one question is to take a point like this one and to ask the question: what's the value of this expression, or this classifier, for this data point?" + }, + { + "2:55": "So what do you think? Basically, we're going to evaluate its value by using this function." + }, + { + "3:01": "And as we said, if this value is positive we're going to say this is in category one, and if it's negative, it's going to be in category two. Intuitively, this line separates these two categories, so we expect the points on one side to be positive and the points on the other side to be negative. Our question is, under the assumption that I just mentioned, let's examine a particular point like this one." + }, + { + "3:27": "So what do you think is the sign of this expression?" + }, + { + "3:31": "Well, to examine the sign we can simply look at this expression here. And we can compare this with, let's say," + }, + { + "3:42": "a value on the line; let's see, compare this with this point." + }, + { + "3:48": "Well, they have identical X1, but one has a higher value for X2." + }, + { + "3:54": "Now, let's look at the sign of the coefficient for X2. Well, we know this is positive." + }, + { + "4:02": "So, what that means is that the f value for this point should be higher than the f value for this point on the line; that means this will be positive, right?" + }, + { + "4:16": "So we know in general that for all points on this side," + }, + { + "4:20": "the function's value will be positive, and you can also verify that all the points on the other side will be negative.
And so this is how this kind of linear classifier, or linear separator, can separate the points in the two categories." + }, + { + "4:37": "So, now the natural question is, which linear separator is the best? Now, I've given you one line here that can separate the two classes. And this line, of course, is determined by the vector beta, the coefficients. Different coefficients will give us different lines. So, we could imagine there are other lines that can do the same job. Gamma, for example, could give us another line that acts as a separator for these instances." + }, + { + "5:06": "Of course, there are also lines that won't separate them, and those are bad lines. But the question is, when we have multiple lines that can separate both classes, which line is the best? In fact, you can imagine there are many different ways of choosing the line. So, the logistic regression classifier that you have seen earlier actually uses some criterion to determine where this line should be, as it is a linear separator as well. It uses the conditional likelihood on the training data to determine which line is the best. But in SVM we're going to look at another criterion for determining which line is the best. And this time, the criterion is more tied to the classification error, as you will see." + }, + { + "5:49": "So, the basic idea is to choose the separator to maximize the margin. So what is a margin? So, I've shown some dotted lines here to indicate the boundaries of the data points in each class. And the margin is simply the distance between the line, the separator, and the closest point from each class." + }, + { + "6:18": "So you can see the margin on this side, as I've shown here, and you can also define the margin on the other side." + }, + { + "6:27": "In order for the separator to maximize the margin, it has to be kind of in the middle of the two boundaries; you don't want the separator to be very close to one side, and that intuition makes a lot of sense."
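The sign argument just described can be checked numerically. The coefficients below are illustrative, chosen only to satisfy the lecture's assumption that beta one is negative and beta two is positive:

```python
def f(x1, x2, beta0=1.0, beta1=-1.0, beta2=1.0):
    """Linear separator f(x) = beta0 + beta1*x1 + beta2*x2.
    The coefficient values are illustrative assumptions, matching the
    lecture's setup where beta1 is negative and beta2 is positive."""
    return beta0 + beta1 * x1 + beta2 * x2

# A point on the line has f = 0; keeping x1 fixed and raising x2 (whose
# coefficient is positive) pushes f positive, lowering x2 pushes f negative.
on_line = f(2.0, 1.0)    # 1 - 2 + 1 = 0, a point on the decision boundary
above   = f(2.0, 3.0)    # same x1, larger x2 -> positive -> category one
below   = f(2.0, -1.0)   # same x1, smaller x2 -> negative -> category two
print(on_line, above, below)
```

This mirrors the comparison in the lecture: two points with identical X1 differ only through the X2 term, so the sign of beta two decides which side comes out positive.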
+ }, + { + "6:44": "So this is the basic idea of SVM. We're going to choose a linear separator to maximize the margin." + }, + { + "6:52": "Now on this slide, I've also changed the notation so that I'm not going to use beta to denote the parameters. But instead, I'm going to use w, although w was used to denote words before, so don't be confused here. w here is actually a weight, a set of weights." + }, + { + "7:12": "I'm also using lowercase b to denote the beta 0, a bias constant." + }, + { + "7:20": "And the data instances are represented as x, and I also use the vector form of multiplication here. So we see the transpose of the w vector multiplied by the feature vector." + }, + { + "7:35": "So b is a bias constant, and w is a set of weights with one weight for each feature. We have m features, and so we have m weights, which we represent as a vector." + }, + { + "7:47": "And similarly, the data instance here, the text object, is represented by a feature vector with the same number of elements. Xi is a feature value, for example a word count. And you can verify that when we multiply these two vectors together, taking the dot product, we get the same form of the linear separator as you have seen before. It's just a different way of representing it. Now I use this notation so that it's more consistent with the notation people usually use when they talk about SVM. This way you can better connect the slides with some other readings you might do." + }, + { + "8:31": "Okay, so when we maximize the margin of a separator, it just means the boundary of the separator is only determined by a few data points, and these are the data points that we call support vectors. So here illustrated are two support vectors for one class and two for the other class. And these points define the margin, basically, and you can imagine that once we know which are the support vectors, then this" + }, + { + "9:06": "center separator line will be determined by them.
So the other data points actually don't really matter that much. And you can see that if you change the other data points, it won't really affect the margin, so the separator will stay the same. It's mainly affected by the support vectors, and that's why it's called a support vector machine. Okay, so now the next question is, of course, how can we set it up to optimize the line? How can we actually find the line, or the separator? Now this is equivalent to finding values for w and b, because they will determine where exactly the separator is." + }, + { + "9:58": "So in the simplest case, the linear SVM is just a simple optimization problem. So again, let's recall that our classifier is such a linear separator, where we have weights for all the features, and the main goal is to learn these weights w and b. And the classifier will say x is in category theta 1 if the value is positive. Otherwise, it's going to say it's in the other category. So this is our assumption, our setup. So in the linear SVM, we are going to seek these parameter values to optimize the margin and the training error." + }, + { + "10:38": "The training data would be basically like in other classifiers. We have a set of training points where we know the x vector, and then we also know the corresponding label, y i. And here we define y i as two values, but these values are not 0 and 1 as you have seen before, but rather -1 and positive 1, and they correspond to these two categories, as I've shown here. Now you might wonder why we don't define them as 0 and 1 instead of having -1 and 1. And this is purely for mathematical convenience, as you will see in a moment." + }, + { + "11:16": "So the goal of optimization first is to make sure the labeling of the training data is all correct. So that just means if y i, the known label for instance x i, is 1, we would like this classifier value to be large. And here we just choose a threshold of 1.
But if you use another threshold, you can easily fit that constant into the parameter values b and w to make the right-hand side just 1." + }, + { + "11:48": "Now if, on the other hand, y i is -1, that means it's in a different class, then we want this classifier to give us a very small value, in fact a negative value, and we want this value to be less than or equal to -1. Now these are two different kinds of cases. How can we combine them together? Now this is where it's convenient to have chosen y i as -1 for the other category, because it turns out that we can easily combine the two into one constraint." + }, + { + "12:26": "y i multiplied by the classifier value must be larger than or equal to 1." + }, + { + "12:33": "And obviously when y i is just 1, you see this is the same as the constraint on the left-hand side. But when y i is -1, you also see that this is equivalent to the other inequality. So this one actually captures both constraints in a unified way, and that's a convenient way of capturing these constraints. What's our second goal? Well, that's to maximize the margin, so we want to ensure the separator can do well on the training data. But then, among all the cases where we can separate the data, we also would like to choose the separator that has the largest margin. Now the margin can be shown to be related to the magnitude of the weights. And so w transpose multiplied by w would give us basically the sum of squares of all those weights. So to have a small value for this expression, all the w i's must be small." + }, + { + "13:42": "So we've just assumed that we have a constraint for" + }, + { + "13:46": "getting the data in the training set to be classified correctly. Now we also have the objective that's tied to the maximization of the margin, and this is simply to minimize w transpose multiplied by w, and we often denote this by phi of w. So now you can see this is basically an optimization problem.
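As a sketch of the hard-margin formulation just described, the code below checks the unified constraint y_i (w . x_i + b) >= 1 on a toy data set and computes the per-side margin 1/||w||, which is why minimizing w . w maximizes the margin. The data and the particular feasible (w, b) are illustrative assumptions, not the optimum a quadratic-program solver would return:

```python
import math

def satisfies_constraints(w, b, data):
    """Check the hard-margin SVM constraints y_i * (w . x_i + b) >= 1
    for every training pair (x_i, y_i), with labels in {-1, +1}."""
    dot = lambda u, v: sum(a * c for a, c in zip(u, v))
    return all(y * (dot(w, x) + b) >= 1 for x, y in data)

def margin(w):
    """Geometric margin on each side of the separator: 1 / ||w||.
    Minimizing w . w (phi of w) therefore maximizes this margin."""
    return 1.0 / math.sqrt(sum(wi * wi for wi in w))

data = [((2.0, 2.0), +1), ((3.0, 3.0), +1),   # toy, linearly separable
        ((0.0, 0.0), -1), ((-1.0, 0.0), -1)]
w, b = (1.0, 1.0), -3.0   # a hypothetical feasible solution, not the optimum
print(satisfies_constraints(w, b, data))  # True: all constraints hold
print(margin(w))                          # 1/sqrt(2)
```

A real SVM would search over all feasible (w, b) for the one maximizing `margin(w)`, which is the quadratic program described above.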
We have some variables to optimize, and these are the weights and b, and we have some constraints. These are linear constraints, and the objective function is a quadratic function of the weights. So this is a quadratic program with linear constraints, and there are standard algorithms available for solving this problem. And once we solve the problem, we obtain the weights w and b. And then this would give us a well-defined classifier. So we can then use this classifier to classify any new text objects. Now the previous formulation did not allow any error in the classification, but sometimes the data may not be linearly separable. That means they may not look as nice as what you have seen on the previous slide, where a line can separate all of them. And what would happen if we allowed some errors? Well, the principle can stay. We want to minimize the training error but try to also maximize the margin. But in this case we have a soft margin, because the data points may not be completely separable." + }, + { + "15:17": "So it turns out that we can easily modify SVM to accommodate this. So what you see here is very similar to what you have seen before, but we have introduced the extra variable xi i. And we in fact will have one for each data instance, and this is going to model the error that we allow for each instance. But the optimization problem would be very similar. So specifically, you will see we have added something to the optimization problem. First, we have added some error to the constraint so that we now allow the classifier to make some mistakes here. So, this xi i is the allowed error. If we set xi i to 0, then we go back to the original constraint: we want every instance to be classified accurately. But if we allow this to be non-zero, then we allow some errors here. In fact, if the value of xi i is very large, the error can be very, very large. So naturally, we don't want this to happen. So we want to then also minimize this xi i.
So, xi i needs to be minimized in order to control the error." + }, + { + "16:42": "And so, as a result, in the objective function, we also add more to the original one, which involved only w, by basically ensuring that we not only minimize the weights, but also minimize the errors, as you see here. Here we simply take a sum over all the instances. Each one has a xi i to model the error allowed for that instance. And when we combine them together, we basically want to minimize the errors on all of them." + }, + { + "17:16": "Now you see there's a parameter C here, and that's a constant to control the trade-off between minimizing the errors and maximizing the margin. If C is set to zero, you can see we go back to the original objective function, where we only maximize the margin." + }, + { + "17:34": "We don't really optimize the training errors, and then xi i can be set to a very large value to make the constraints easy to satisfy. That's not very good, of course, so C should be set to a non-zero, positive value. But when C is set to a very, very large value, the objective function will be dominated mostly by the training errors, and so the optimization of the margin will then play a secondary role. If that happens, what would happen is" + }, + { + "18:07": "that we will try to do our best to minimize the training errors, but then we're not going to take care of the margin, and that affects the generalization effectiveness of the classifier for future data. So it's also not good. So this parameter C has to be set carefully. And this is just like in the case of k-nearest neighbors, where you need to optimize the number of neighbors. Here you need to optimize C. And this is, in general, also achievable by doing cross-validation. Basically, you look at the empirical data and see what value C should be set to in order to optimize the performance."
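The soft-margin objective just described, w . w + C * sum(xi_i), can be evaluated directly to see the trade-off that C controls, with each slack xi_i = max(0, 1 - y_i (w . x_i + b)). The toy data and weights below are illustrative assumptions, not outputs of a solver:

```python
def soft_margin_objective(w, b, data, C):
    """Soft-margin SVM objective: w . w + C * sum(xi_i), where the slack
    xi_i = max(0, 1 - y_i * (w . x_i + b)) is the error allowed for
    instance i. C trades off training error against margin size."""
    dot = lambda u, v: sum(a * c for a, c in zip(u, v))
    slack = [max(0.0, 1.0 - y * (dot(w, x) + b)) for x, y in data]
    return dot(w, w) + C * sum(slack)

data = [((2.0, 0.0), +1), ((0.5, 0.0), +1),   # second point violates the margin
        ((-2.0, 0.0), -1)]
w, b = (1.0, 0.0), 0.0
print(soft_margin_objective(w, b, data, C=0.0))   # only the margin term: 1.0
print(soft_margin_objective(w, b, data, C=10.0))  # slack dominates as C grows: 6.0
```

With C = 0 only the margin term w . w remains, and as C grows the slack term dominates, exactly the behavior the lecture warns about when choosing C.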
+ }, + { + "18:49": "Now with this modification, the problem is still quadratic programming with linear constraints, so the same optimization algorithms can actually be applied to solve this version of the program." + }, + { + "19:02": "Again, once we have obtained the weights and the bias, then we have a classifier that's ready for classifying new objects. So that's the basic idea of SVM." + }, + { + "19:16": "So to summarize the text categorization methods: we introduced many methods, and some are generative models, some are discriminative methods. And these tend to perform similarly when optimized. So there's still no clear winner, although each one has its pros and cons. And the performance might also vary on different data sets for different problems. And one reason is also that the feature representation is very critical" + }, + { + "19:52": "and these methods all require effective feature representation. And to design an effective feature set, we need domain knowledge, and humans definitely play an important role here, although there are new machine learning methods and algorithms for representation learning that can help with learning features." + }, + { + "20:12": "And another common thing is that they might perform similarly on a data set, but with different mistakes. And so, their performance might be similar, but the mistakes they make might be different. So that means it's useful to compare different methods for a particular problem and then maybe combine multiple methods, because this can improve the robustness, as they won't make the same mistakes. So ensemble approaches that combine different methods tend to be more robust and can be useful in practice. Most techniques that we introduced use supervised machine learning, which is a very general method. So that means these methods can actually be applied to any text categorization problem.
As long as we have humans to help annotate some training data sets and design features, then supervised machine learning and all these classifiers can be easily applied to those problems to solve the categorization problem, to allow us to characterize the content of text concisely with categories, or to predict some properties of real-world variables that are associated with text data. The computers, of course, here are trying to optimize the combination of the features provided by humans. And as I said, there are many different ways of combining them, and they also optimize different objective functions." + }, + { + "21:58": "But in order to achieve good performance, they all require effective features and also plenty of training data." + }, + { + "22:04": "So as a general rule, if you can improve the feature representation and provide more training data, then you can generally do better. Performance is often much more affected by the effectiveness of features than by the choice of a specific classifier. So feature design tends to be more important than the choice of a specific classifier." + }, + { + "22:30": "So, how do we design effective features? Well, unfortunately, this is very application-specific. So there's not really much general advice to give here. But we can do some analysis of the categorization problem and try to understand what kind of features might help us distinguish categories. And in general, we can use a lot of domain knowledge to help us design features." + }, + { + "23:01": "And another way to figure out effective features is to do error analysis on the categorization results. You could, for example, look at which category tends to be confused with which other categories. And you can use a confusion matrix to examine the errors systematically across categories. And then, you can look into specific instances to see why a mistake has been made and what features could prevent the mistake.
And this can allow you to obtain insights for designing new features. So error analysis is very important in general, and that's where you can get the insights about your specific problem." + }, + { + "23:42": "And finally, we can leverage machine learning techniques. So, for example, feature selection is a technique that we haven't really talked about, but it is very important. And it has to do with trying to select the most useful features before you actually train a full classifier. Sometimes training a classifier will also help you identify which features have high weights. There are also other ways to enforce sparsity of the model, meaning to regularize the weights. For example, the SVM actually tries to minimize the weights on features. But you can further force the classifier to use only a small number of features." + }, + { + "24:21": "There are also techniques for dimension reduction. And that's to reduce a high-dimensional feature space into a low-dimensional space, typically by clustering features in various ways. So matrix factorization has been used to do such a job, and some of these techniques are actually very similar to the topic models that we discussed. So topic models like PLSA or LDA can actually help us reduce the dimension of features. Imagine the words are our original features, but they can be mapped to the topic space. Let's say we have k topics. So a document can now be represented as a vector of just k values corresponding to the topics. So we can let each topic define one dimension, so we have a k-dimensional space instead of the original high-dimensional space corresponding to words. And this is often another way to learn effective features. Especially, we could also use the categories to supervise the learning of such low-dimensional structures."
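The mapping into a k-dimensional topic space described above can be sketched as a simple projection. The two topic word distributions below are made-up numbers, and real topic models such as PLSA or LDA would infer the topic weights probabilistically rather than by a raw dot product:

```python
def to_topic_space(doc_word_vec, topic_word_dists):
    """Map a word-count vector into a k-dimensional topic space by scoring
    the document against each topic's word distribution (a plain dot
    product). This is only a sketch of the dimension-reduction idea."""
    return [sum(w * p for w, p in zip(doc_word_vec, topic))
            for topic in topic_word_dists]

# Vocabulary of 4 words, k = 2 hypothetical topics
topics = [[0.7, 0.3, 0.0, 0.0],   # topic 0 puts its mass on words 0-1
          [0.0, 0.0, 0.6, 0.4]]   # topic 1 puts its mass on words 2-3
doc = [3, 1, 0, 1]                # word counts for one document
print(to_topic_space(doc, topics))  # 2 topic scores instead of 4 word counts
```

The resulting k values can serve as features on their own or be concatenated with the original word features, which is the multi-resolution idea mentioned next.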
+ }, + { + "25:29": "And so, the original word features can also be combined with such low-dimensional features to provide a multi-resolution representation, which is often very useful. Deep learning is a newer technique that has been developed in machine learning." + }, + { + "25:51": "It's particularly useful for learning representations. So deep learning refers to deep neural networks, another kind of classifier, where you can have intermediate features embedded in the models. It is a highly non-linear classifier, and some recent advances have allowed us to train such complex networks effectively. And the technique has been shown to be quite effective for speech recognition and computer vision, and recently it has been applied to text as well. It has shown some promise. And one important advantage of this approach, in" + }, + { + "26:34": "relation to feature design, is that it can learn intermediate representations, or compound features, automatically. And this is very valuable for learning effective representations for text categorization. Although in the text domain, words are already a good representation of text content, because they are humans' invention for communication. And they are generally sufficient for representing content for many tasks. If there were a need for some new representation, people would have invented a new word. So because of this, we think the value of deep learning for text processing tends to be lower than for [INAUDIBLE] and speech recognition, where there are no human-designed units that work as features." + }, + { + "27:31": "But deep learning still seems very promising for learning effective features, especially for complicated tasks. For sentiment analysis, it has been shown to be effective" + }, + { + "27:41": "because it can provide a representation that goes beyond words." + }, + { + "27:47": "Now regarding the training examples.
It's generally hard to get a lot of training examples because it involves human labor." + }, + { + "27:56": "But there are also some ways to help with this. So one is to assume that some low-quality training examples can also be used. Those can be called pseudo training examples. For example, if you take reviews from the internet, they might have overall ratings. So, to train a sentiment categorizer, meaning we want to distinguish positive from negative and categorize these reviews into these two categories, we could assume five-star reviews are all positive training examples and one-star reviews are negative. But of course, sometimes even five-star reviews will also mention negative opinions, so the training examples are not of that high quality, but they can still be useful." + }, + { + "28:45": "Another idea is to exploit the unlabeled data, and there are techniques called semi-supervised machine learning techniques that can allow you to combine labeled data with unlabeled data. So, in our case, it's easy to see that a mixture model can be used for both text clustering and categorization. So you can imagine, if you have a lot of unlabeled text data for categorization, then you can actually do clustering on these text data to learn categories, and then try to somehow align these categories with the categories defined by the training data, where we already know which documents are in which category. So you can in fact use the EM algorithm to actually combine both. That would allow you essentially to also pick up useful words from the unlabeled data. You can think of this in another way. Basically, we can use, let's say, a classifier to classify all of the unlabeled text documents, and then we're going to assume the high-confidence classification results are actually reliable. Then you suddenly have more training data, because from the unlabeled data we now know some are labeled as category one and some are labeled as category two. Although the labels are not completely reliable, they can still be useful.
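The self-training loop just described (classify unlabeled documents, keep only high-confidence predictions, and treat them as extra training data) can be sketched as follows; the classifier, the threshold, and all names here are illustrative assumptions:

```python
def self_train(classify, labeled, unlabeled, confidence_threshold):
    """One round of self-training: classify unlabeled documents, keep the
    high-confidence predictions as pseudo-labeled examples, and merge them
    with the true training set. `classify` is any classifier returning a
    (label, confidence) pair; all names here are illustrative."""
    pseudo = []
    for doc in unlabeled:
        label, conf = classify(doc)
        if conf >= confidence_threshold:   # only trust confident predictions
            pseudo.append((doc, label))
    return labeled + pseudo

# A toy stand-in classifier: "positive" if the document mentions "good"
def toy_classify(doc):
    if "good" in doc:
        return "positive", 0.9
    return "negative", 0.6   # below threshold -> left out of the pseudo set

labeled = [("great product, good value", "positive")]
unlabeled = ["good service", "terrible"]
expanded = self_train(toy_classify, labeled, unlabeled, 0.8)
print(len(expanded))  # 2: one true example plus one pseudo-labeled example
```

In practice the expanded set would be used to retrain the classifier, possibly for several rounds, with the caveat from the lecture that the pseudo-labels are not completely reliable.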
So we treat them as actual labeled training examples, and then we combine them with the true training examples to improve the categorization method. And so this idea is very powerful." + }, + { + "30:23": "When the unlabeled data and the training data are very different, we might need to use other advanced machine learning techniques called domain adaptation or transfer learning. This is when we can borrow some training examples from a related problem that may be different, or from a categorization task" + }, + { + "30:46": "that follows a very different distribution from what we are working on. But basically, when the two domains are very different, then we need to be careful not to overfit the training domain. But we still want to use some signals from the related training data. So for example, training a categorizer on news might not give you an effective classifier for classifying topics in tweets. But you can still learn something from news to help categorize tweets. So there are machine learning techniques that can help you do that effectively. Here's a suggested reading where you can find more details about some of the methods that we have covered. [MUSIC]" + } + ] + }, + { + "5-3-text-categorization-evaluation-part-1": [ + { + "0:00": "[SOUND] This lecture is about the Evaluation of Text Categorization. So we've talked about many different methods for text categorization. But how do you know which method works better?" + }, + { + "0:19": "And for a particular application, how do you know this is the best way of solving your problem? To understand these questions, we have to" + }, + { + "0:29": "know how to evaluate categorization results. So first, some general thoughts about evaluation." + }, + { + "0:38": "In general, for the evaluation of this kind of empirically defined task, such as categorization, we use a methodology that was developed in the 1960s by information retrieval researchers, called the Cranfield Evaluation Methodology.
The basic idea is to have humans create a test collection," + }, + { + "0:59": "where we already know that every document is tagged with the desired categories. Or, in the case of search, for each query, which documents should have been retrieved; and this is called a ground truth. Now, with this ground truth test collection, we can then reuse the collection to test many different systems and then compare different systems. We can also turn off some components in a system to see what's going to happen. Basically, it provides a way to do controlled experiments to compare different methods." + }, + { + "1:36": "So this methodology has been used for virtually all the tasks that involve empirically defined problems." + }, + { + "1:45": "So in our case, then, we are going to compare our system's categorization results with the categorization ground truth created by humans." + }, + { + "1:56": "And we're going to compare our system's decisions," + }, + { + "2:00": "which documents should get which category, with what" + }, + { + "2:06": "categories have been assigned to those documents by humans. And we want to quantify the similarity of these decisions, or equivalently, to measure the difference between the system output and the desired ideal output generated by the humans." + }, + { + "2:25": "So obviously, the higher the similarity, the better the results are." + }, + { + "2:30": "The similarity could be measured in different ways. And that would lead to different measures. And sometimes it's desirable also to measure the similarity from different perspectives, just to have a better understanding of the results in detail. For example, we might also be interested in knowing which category performs better and which category is easy to categorize, etc. In general, different categorization mistakes, however, have different costs for specific applications. So some errors might be more serious than others.
So ideally, we would like to model such differences, but if you read many papers on categorization, you will see that they generally don't do that. Instead, they will use a simplified measure, and that's because it's often okay not to consider such cost variation when we compare methods and when we are interested in knowing the relative difference between the methods. So it's okay to introduce some bias, as long as the bias does not favor a particular method, and then we should expect the more effective method to perform better than a less effective one, even though the measure is not perfect." + }, + { + "3:53": "So the first measure that we'll introduce is called classification accuracy, and this is basically to measure the percentage of correct decisions. So here you see that there are categories denoted by c1 through ck, and there are n documents, denoted by d1 through dn. And for each pair of a category and a document, we can then look at the situation." + }, + { + "4:16": "And see if the system has said yes to this pair, basically has assigned this category to this document, or no. So this is denoted by Y or N; that's the system's decision. And similarly, we can look at the human's decisions: if the human has assigned the category to the document, then there will be a plus sign here. That just means that the human thinks this assignment is correct; if incorrect, then it's a minus. So we'll see all combinations of these yeses and nos, minuses and pluses. There are four combinations in total. And two of them are correct, and that's when we have Y(+) or N(-), and then there are also two kinds of errors. So the measure of classification accuracy is simply to count how many of these decisions are correct and normalize that by the total number of decisions we have made. So, we know that the total number of decisions is n multiplied by k." + }, + { + "5:20": "And the number of correct decisions is basically of two kinds. One is the Y pluses.
And the other is the n-minuses. We just add up the counts. Now, this is a very convenient measure that gives us one number to characterize the performance of a method. And the higher, the better, of course." + }, + { + "5:41": "But the measure also has some problems. First, it treats all the decisions equally. But in reality, some decision errors are more serious than others. For example, it may be more important to get the decisions right on some documents than others." + }, + { + "5:58": "Or maybe more important to get the decisions right on some categories than others, and this would call for a more detailed evaluation of the results to understand the strengths and weaknesses" + }, + { + "6:12": "of different methods, and to understand the performance of these methods in detail, on a per-category or per-document basis. One example that clearly shows that decision errors have different costs is spam filtering, which can be treated as a two-category categorization problem." + }, + { + "6:36": "Missing a legitimate email is one type of error, but letting spam come into your inbox is another type of error. The two types of errors are clearly very different, because it's very important not to miss a legitimate email, while it's okay to occasionally let a spam email come into your inbox. So the first error, missing a legitimate email, is of high cost; it's a very serious mistake, and classification accuracy does not address this issue." + }, + { + "7:14": "There's also another problem with imbalanced test sets. Imagine a skewed test set where most instances are in category one, say 98% of instances, and only 2% are in category two. In such a case, we can have a very simple baseline that appears to perform very well: simply put all instances in the majority category." + }, + { + "7:36": "That will get us 98% accuracy in this case.
It's going to appear very effective, but in reality, this is obviously not a good result." + }, + { + "7:47": "And so, in general, when we use classification accuracy as a measure, we want to ensure that the classes are balanced," + }, + { + "7:54": "with about an equal number of instances in each class, for example; otherwise the minority categories or classes tend to be overlooked in the evaluation of classification accuracy. So, to address these problems, we of course would like to also evaluate the results in other ways. As I said, it's beneficial to look at the results from multiple perspectives. So for example, we can take the perspective of each document. The question here is, how good are the decisions on this document?" + }, + { + "8:29": "Now, as in the general case of all decisions, we can think about four combinations of possibilities, depending on whether the system has said yes or no and whether the human has said it is correct or incorrect. The first of the four combinations is when both the system and the human said yes, and that's a true positive. When the system says yes, it's a positive; when the human confirms that it is indeed correct, that becomes a true positive." + }, + { + "9:07": "When the system says yes, but the human says no, that's incorrect, that's a false positive, or FP." + }, + { + "9:15": "And when the system says no, but the human says yes, then it's a false negative; we missed one assignment. When both the system and the human say no, then it's also a correct decision, and those are the true negatives. All right, so then we can have some measures to better characterize the performance by using these four numbers, and two popular measures are precision and recall.
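The skewed-test-set problem described above can be sketched in a few lines. This is a minimal illustration with made-up data; the `accuracy` helper is hypothetical, not something from the lecture materials:

```python
# Minimal sketch: classification accuracy rewards a trivial
# majority-class baseline on a skewed test set.

def accuracy(predicted, truth):
    """Fraction of decisions on which the system agrees with the human."""
    correct = sum(p == t for p, t in zip(predicted, truth))
    return correct / len(truth)

# A skewed test set: 98 instances of category "c1", only 2 of "c2".
truth = ["c1"] * 98 + ["c2"] * 2

# Baseline that ignores the input and always predicts the majority category.
baseline = ["c1"] * 100

print(accuracy(baseline, truth))  # 0.98 -- looks strong, but is useless
```

The 0.98 here is exactly the 98% figure from the lecture: the baseline never identifies a single "c2" instance, yet accuracy alone cannot tell.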
And these were proposed by information retrieval researchers in the 1960s for evaluating search results, but now they have become standard measures, used everywhere. So when the system says yes, we can ask the question: how many are correct? What's the percentage of correct decisions when the system says yes? That's called precision. It's the true positives divided by all the cases when the system says yes, all the positives. The other measure is called recall, and this measures" + }, + { + "10:14": "whether the document has all the categories it should have. In this case we divide the true positives by the sum of true positives and false negatives. These are all the cases where the human says the document should have this category, so this represents all the categories that the document should have received. Recall thus tells us whether the system has indeed assigned all the categories that it should have to this document." + }, + { + "10:46": "This gives us a detailed view of the document, and then we can aggregate them later." + }, + { + "10:52": "If we're interested in some subset of documents, this will tell us how well we did on those documents; some subsets might be more interesting than others, for example. And this allows us to analyze errors in more detail as well. We can separate documents with certain characteristics from others, and then look at the errors. You might see that a method works well for one kind of document, say long documents, but not as well for short documents." + }, + { + "11:18": "And this gives you some insight for improving the method. Similarly, we can look at the per-category evaluation. In this case, we're going to look at how good the decisions are on a particular category. As in the previous case, we can define precision and recall, and they basically answer the questions from a different perspective." + }, + { + "11:39": "So when the system says yes, how many are correct?
That means looking at this category to see if all the documents that are assigned this category are indeed in this category, right? And recall would tell us whether the category has actually been assigned to all the documents that should have this category." + }, + { + "12:00": "It's sometimes also useful to combine precision and recall into one measure, and this is often done by using the F measure. This is just a weighted harmonic mean of precision and recall, as defined on this slide. And it's controlled by a parameter beta to" + }, + { + "12:20": "indicate whether precision is more important or recall is more important. When beta is set to 1, we have a measure called F1, and in this case, we just put equal weight on both precision and recall." + }, + { + "12:34": "F1 is very often used as a measure for categorization." + }, + { + "12:39": "Now, as in all cases when we combine results, you should always think about the best way of combining them. In this case, I don't know if you have thought about it, but we could have combined them just with the arithmetic mean, right? That would still give us the same range of values, but obviously there's a reason why we didn't do that and why F1 is more popular, and it's actually useful to think about the difference. When you think about that, you'll see that there is indeed some difference and some undesirable property of the arithmetic mean. Basically, it will be obvious to you if you think about a case where the system says yes for all the category and document pairs, and then try to compute the precision and recall in that case and see what would happen." + }, + { + "13:28": "Basically, the arithmetic mean is not going to be as reasonable as F1, because F1 prefers a trade-off where the two values are comparable. In the extreme case where you have 0 for one value and 1 for the other, F1 will be low, but the arithmetic mean would still be reasonably high.
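The F1-versus-arithmetic-mean point above can be checked numerically. This is a minimal sketch; the function names are illustrative, and the 0.02 precision is a made-up value standing in for a system that says "yes" to everything:

```python
# Minimal sketch: precision, recall, and the weighted F measure from
# the contingency-table view, plus the extreme case from the lecture
# where the arithmetic mean stays high while F1 collapses.

def precision(tp, fp):
    """Of all the cases where the system said yes, how many were correct."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Of all the cases the human marked correct, how many the system found."""
    return tp / (tp + fn)

def f_measure(p, r, beta=1.0):
    """Weighted harmonic mean; beta > 1 favors recall, beta < 1 precision."""
    return (1 + beta**2) * p * r / (beta**2 * p + r)

# A system that says "yes" to every (document, category) pair gets
# perfect recall but very poor precision.
p, r = 0.02, 1.0
arithmetic = (p + r) / 2   # 0.51 -- still looks "reasonably high"
f1 = f_measure(p, r)       # ~0.039 -- punishes the imbalance
print(arithmetic, f1)
```

The harmonic mean drags the combined score toward the smaller of the two values, which is exactly why F1 is preferred over the arithmetic mean here.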
+ }, + { + "14:01": "[MUSIC]" + } + ] + }, + { + "5-4-text-categorization-evaluation-part-2": [ + { + "0:00": "[SOUND] This lecture is a continued discussion of the evaluation of text categorization. Earlier we introduced measures that can be used to compute precision and recall for each category and each document. Now in this lecture we're going to" + }, + { + "0:27": "further examine how to combine the performance on different categories or different documents: how to aggregate them, how do we take an average? You see in the title here I indicated it's called a macro average, and this is in contrast to the micro average that we'll talk more about later." + }, + { + "0:47": "So, again, for each category we're going to compute the precision, recall, and F1. For example, for category c1 we have precision p1, recall r1, and F value f1. And similarly we can do that for category 2 and all the other categories. Now once we compute these, we can aggregate them; for example, we can aggregate all the precision values for all the categories to compute an overall precision. This is often very useful to summarize what we have seen in the whole data set. Aggregation can be done in many different ways. Again, as I said, when you need to aggregate different values, it's always good to think about the best way of doing the aggregation. For example, we can consider the arithmetic mean, which is very commonly used, or the geometric mean, which would have different behavior. Depending on the way you aggregate, you might get different conclusions in terms of which method works better, so it's important to consider these differences and choose the right one, or a more suitable one, for your task. The difference, for example, between the arithmetic mean and the geometric mean is that the arithmetic mean is dominated by high values whereas the geometric mean is more affected by low values.
So whether you want to emphasize low values or high values is a question related to your application. Similarly, we can do that for recall and F score. So that's how we can generate the overall precision, recall, and F score." + }, + { + "2:31": "Now we can do the same for aggregation over all the documents. All right, it's exactly the same situation: for each document we compute precision, recall, and F. And then after we have completed the computation for all these documents, we're going to aggregate them to generate the overall precision, overall recall, and overall F score." + }, + { + "2:53": "These are, again, ways of examining the results from different angles. Which one is more useful will depend on your application. In general, it's beneficial to look at the results from all these perspectives. Especially if you compare different methods along different dimensions, it might reveal which method is better on which measure or in what situations, and this provides insight into the strengths or weaknesses of a method, which in turn provides further insight for improving it." + }, + { + "3:28": "So as I mentioned, there is also the micro average, in contrast to the macro average that we talked about earlier. In this case, what we do is pool together all the decisions, and then compute the precision and recall." + }, + { + "3:45": "So we can compute the overall precision and recall by just counting how many cases are true positives, how many cases are false positives, etc.; that is, computing the values in the contingency table, and then we can compute the precision and recall just once." + }, + { + "4:06": "In contrast, in macro-averaging, we're going to do that for each category first and then aggregate over these categories, or we do that for each document and then aggregate over all the documents; but here we pool them together.
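The macro/micro contrast just described can be sketched with hypothetical per-category counts (the category names and numbers below are made up for illustration):

```python
# Minimal sketch: macro-averaging computes precision per category and
# then averages; micro-averaging pools all decisions into one
# contingency table first. A large, easy category dominates the micro
# average but counts no more than any other category in the macro average.

# (tp, fp) per category -- hypothetical numbers.
per_category = {
    "sports":   (90, 10),  # big category, precision 0.90
    "politics": (5, 5),    # small category, precision 0.50
}

# Macro: average of per-category precisions, each category weighted equally.
macro_p = sum(tp / (tp + fp) for tp, fp in per_category.values()) / len(per_category)

# Micro: pool all counts, then compute precision once.
total_tp = sum(tp for tp, _ in per_category.values())
total_fp = sum(fp for _, fp in per_category.values())
micro_p = total_tp / (total_tp + total_fp)

print(macro_p)  # 0.70 -- the small category pulls the average down
print(micro_p)  # ~0.864 -- dominated by the big category
```

The same pooling-versus-averaging choice applies to recall and F score, and to per-document instead of per-category aggregation.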
+ }, + { + "4:21": "Now this would be very similar to the classification accuracy that we used earlier, and one problem here of course is that it treats all the instances, all the decisions, equally." + }, + { + "4:32": "And this may not be desirable." + }, + { + "4:36": "But it may be appropriate for some applications, especially if we associate, for example, a cost with each combination. Then we can actually compute, for example, a weighted classification accuracy, where you associate a different cost or utility with each specific decision," + }, + { + "4:56": "so there could be variations of these methods that would be more useful. But in general the macro average tends to be more informative than the micro average, just because it might reflect the need for understanding performance" + }, + { + "5:14": "on each category or performance on each document, which are needed in applications. But macro averaging and micro averaging are both very common, and you might see both reported in research papers on categorization. Also, sometimes categorization results might actually be evaluated from a ranking perspective." + }, + { + "5:40": "And this is because categorization results are sometimes, or often indeed, passed to a human for various purposes. For example, they might be passed to humans for further editing: news articles can be tentatively categorized by a system, and human editors would then correct them." + }, + { + "6:02": "Or email messages might be routed to the right person for handling at a help desk, and in such a case the categorization helps prioritize the tasks for a particular customer service person." + }, + { + "6:19": "So, in this case the results have to be prioritized," + }, + { + "6:26": "and if the system can give a confidence score for each categorization decision, then we can use the scores to rank these decisions and then evaluate the results as a ranked list, just as in a search engine.
This is like search evaluation, where you rank the documents in response to a query." + }, + { + "6:49": "So for example, the discovery of spam emails can be evaluated" + }, + { + "6:55": "based on ranking emails for the spam category. This is useful if you want people to verify whether each one is really spam, right? The person would then go down the ranked list to check them one by one and verify whether each is indeed spam. So to reflect the utility for humans in such a task, it's better to evaluate the ranking, and this is basically similar to a search engine." + }, + { + "7:25": "And in such a case, often the problem can be better formulated as a ranking problem instead of a categorization problem. For example, ranking documents in a search engine can also be framed as a binary categorization problem, distinguishing the relevant documents that are useful to users from those that are not useful, but typically we frame this as a ranking problem, and we evaluate it as a ranked list. That's because people tend to examine the results, so" + }, + { + "7:52": "ranking evaluation better reflects utility from the user's perspective." + }, + { + "7:58": "So to summarize categorization evaluation: first, evaluation is always very important for all these tasks, so get it right." + }, + { + "8:07": "If you don't get it right, you might get misleading results, and you might be misled to believe one method is better than another, which is in fact not true. So it's very important to get it right." + }, + { + "8:18": "Measures must also reflect the intended use of the results for a particular application. For example, in spam filtering and news categorization the results are used in different ways." + }, + { + "8:30": "So then we would need to consider the difference and design measures appropriately." + }, + { + "8:36": "We generally need to consider how the results will be further processed by the user, and think from the user's perspective: what aspect of quality is important?
+ }, + { + "8:49": "Sometimes there are trade-offs between multiple aspects, like precision and recall, and so we need to know whether for this application high recall is more important or high precision is more important." + }, + { + "8:59": "Ideally, we associate a different cost with each different decision error, and this of course has to be designed in an application-specific way." + }, + { + "9:08": "Some commonly used measures for the relative comparison of methods are the following. Classification accuracy is very commonly used, especially for balanced test sets. Precision, recall, and F scores are commonly reported, characterizing performance from different angles, and they can be computed on a per-category or per-document basis and then averaged in different ways, micro versus macro. In general, you want to look at the results from multiple perspectives, and for particular applications some perspectives will be more important than others; but for diagnosis and analysis of categorization methods, it's generally useful to look at as many perspectives as possible to see subtle differences between methods, or to see where a method might be weak, from which you can obtain insight for improving the method." + }, + { + "10:04": "Finally, sometimes ranking may be more appropriate, so be careful: sometimes a categorization task may be better framed as a ranking task, and there are machine learning methods for optimizing ranking measures as well." + }, + { + "10:17": "So here are two suggested readings. One is some chapters of a book where you can find more discussion of evaluation measures. The second is a paper comparing different approaches to text categorization, which also has an excellent discussion of how to evaluate text categorization. [MUSIC]
In this lecture, we're going to start talking about mining a different kind of knowledge, namely, knowledge about the observers, the humans that have generated the text data. In particular, we're going to talk about opinion mining and sentiment analysis." + }, + { + "0:32": "As we discussed earlier, text data can be regarded as data generated by humans acting as subjective sensors." + }, + { + "0:43": "In contrast, we have other devices, such as a video recorder, that can report what's happening in the real world objectively, generating video data, for example." + }, + { + "0:58": "Now, the main difference between text data and other data, like video data, is that it has rich opinions, and the content tends to be subjective because it's generated by humans." + }, + { + "1:16": "Now, this is actually a unique advantage of text data as compared with other data, because it offers a great opportunity to understand the observers. We can mine text data to understand their opinions, understand people's preferences, how people think about something." + }, + { + "1:37": "So this lecture and the following lectures will be mainly about how we can mine and analyze opinions buried in a lot of text data." + }, + { + "1:49": "So let's start with the concept of an opinion. It's not that easy to formally define an opinion, but mostly we would define an opinion as a subjective statement describing what a person believes or thinks about something." + }, + { + "2:08": "Now, I highlighted quite a few words here, and that's because it's worth thinking a little bit more about these words. That will help us better understand what's in an opinion, and this further helps us to define opinion more formally, which is always needed for a computational solution to the problem of opinion mining. So let's first look at the keyword subjective here. This is in contrast with an objective statement, or factual statement." + }, + { + "2:40": "Those statements can be proved right or wrong.
+ }, + { + "2:45": "And this is a key differentiating factor from opinions, which tend to be not easy to prove wrong or right, because they reflect what the person thinks about something." + }, + { + "2:59": "So in contrast, an objective statement can usually be proved wrong or correct." + }, + { + "3:07": "For example, you might say this computer has a screen and a battery." + }, + { + "3:16": "Now, that's something you can check: it either has a battery or not." + }, + { + "3:23": "But in contrast with this, think about sentences such as 'this laptop has the best battery' or 'this laptop has a nice screen'. Now, these statements are more subjective, and it's very hard to prove whether they are wrong or correct." + }, + { + "3:45": "So an opinion is a subjective statement." + }, + { + "3:50": "Next, let's look at the keyword person here, and that indicates there is an opinion holder, because when we talk about an opinion, it's an opinion held by someone. And then we notice that there is a something here; that is the target of the opinion. The opinion is expressed on this something." + }, + { + "4:11": "And now, of course, believes or thinks implies that an opinion will depend on the culture, the background, and the context in general, because a person might think differently in a different context, and people from different backgrounds may also think in different ways. So this analysis shows that there are multiple elements that we need to include in order to characterize an opinion." + }, + { + "4:38": "So, what's a basic opinion representation like? Well, it should include at least three elements, right? First, it has to specify the opinion holder: whose opinion is this? Second, it must also specify the target: what is this opinion about?" + }, + { + "4:57": "And third, of course, we want the opinion content: what exactly is the opinion? If we can identify these, we get a basic understanding of the opinion, and that can already be useful sometimes.
If we want to understand further, we want an enriched opinion representation." + }, + { + "5:15": "And that means we also want to understand, for example, the context of the opinion: in what situation was the opinion expressed? For example, at what time was it expressed? We would also like to understand the opinion sentiment, and this is to understand what the opinion tells us about the opinion holder's feeling. For example, is this opinion positive or negative? Or perhaps the opinion holder was happy or sad. Such understanding obviously goes beyond just extracting the opinion content; it needs some analysis." + }, + { + "6:00": "So let's take a simple example of a product review. In this case, the opinion holder and the target are actually explicit. It's obvious who the opinion holder is, and that's just the reviewer; and it's also often very clear what the opinion target is, and that's the product being reviewed, for example an iPhone 6. When the review is posted, you can usually obtain such information easily." + }, + { + "6:27": "Now, the content, of course, is the review text, which is, in general, also easy to obtain. So you can see product reviews are fairly easy to analyze in terms of obtaining a basic opinion representation. But of course, if you want to get more information, you might want to know the context, for example that the review was written in 2015, or we might want to know that the sentiment of this review is positive. This additional understanding of course adds value to mining the opinions." + }, + { + "7:04": "Now, you can see that in this case the task is relatively easy, and that's because the opinion holder and the opinion target have already been identified." + }, + { + "7:14": "Now let's take a look at a sentence in the news. In this case, we have an implicit holder and an implicit target, and the task is in general harder. So, we can identify the opinion holder here, and that's the governor of Connecticut.
+ }, + { + "7:32": "We can also identify the target. One target is Hurricane Sandy, but there is also another target mentioned, which is the hurricane of 1938. So what's the opinion? Well, there's a negative sentiment here that's indicated by words like bad and worst." + }, + { + "7:53": "And we can also, then, identify the context, New England in this case." + }, + { + "8:00": "Now, unlike in the product review, all these elements must be extracted by using natural language processing techniques. So, the task is much harder, and we need deeper natural language processing." + }, + { + "8:14": "And these examples also" + }, + { + "8:17": "suggest that a lot of work can easily be done for product reviews. That's indeed what has happened. Analyzing sentiment in news is still quite difficult; it's more difficult than the analysis of opinions in product reviews." + }, + { + "8:36": "Now, there are also some other interesting variations. In fact, here we're going to examine the variations of opinions more systematically. First, let's think about the opinion holder." + }, + { + "8:47": "The holder could be an individual, or it could be a group of people. Sometimes the opinion is from a committee, or from a whole country of people." + }, + { + "8:56": "The opinion target can also vary a lot. It can be about one entity, a particular person, a particular product, a particular policy, etc. But it could be about a group of products, or about the products from a company in general." + }, + { + "9:11": "It could also be very specific, about one attribute of the entity; for example, just about the battery of an iPhone. It could be someone else's opinion, and one person might comment on another person's opinion, etc. So, you can see there is a lot of variation here that will cause the problem to vary a lot. Now, the opinion content, of course, can also vary a lot on the surface: you can identify a one-sentence opinion or a one-phrase opinion.
But you can also have longer text to express an opinion, like a whole article." + }, + { + "9:48": "And furthermore, we can identify variation in the sentiment or emotion dimension, which is about the feeling of the opinion holder. So, we can distinguish positive versus negative versus neutral, or happy versus sad, etc." + }, + { + "10:03": "Finally, the opinion context can also vary. We can have a simple context, like a different time or different location, but there can also be complex contexts, such as some background on the topic being discussed. When an opinion is expressed in a particular discourse context, it has to be interpreted differently than when it's expressed in another context. So the context can range up to the entire discourse context of the opinion. From a computational perspective, we're mostly interested in what opinions can be extracted from text data. So, it turns out that we can also distinguish different kinds of opinions in text data from a computational perspective. First, the observer might make a comment about an opinion target in the observed world, in which case we have the author's opinion. For example, 'I don't like this phone at all'; that's an opinion of the author." + }, + { + "10:59": "In contrast, the text might also report opinions of others. So the person could also make an observation about another person's opinion and report this opinion. For example, 'I believe he loves the painting.' That opinion is really expressed by another person here, so it doesn't mean this author loves that painting." + }, + { + "11:33": "So clearly, the two kinds of opinions need to be analyzed in different ways. In product reviews, you can see that although mostly the opinions are from the reviewer, sometimes a reviewer might mention the opinions of his or her friend.
+ }, + { + "11:51": "Another complication is that there may be indirect opinions, or inferred opinions, that can be obtained by making inferences on what's expressed in the text, which might not necessarily look like an opinion. For example, one statement might be: this phone ran out of battery in just one hour. Now, this is in a way a factual statement, because it's either true or false, right? You can even verify that. But from this statement, one can also infer a negative opinion about the quality of the battery of this phone, or the feeling of the opinion holder about the battery. The opinion holder clearly wished that the battery would last longer." + }, + { + "12:42": "So these are interesting variations that we need to pay attention to when we extract opinions. Also, because of such indirect opinions," + }, + { + "12:53": "it's often very useful to extract whatever the person has said about the product, and sometimes factual sentences like these are also very useful. So, from a practical viewpoint, sometimes we don't necessarily extract only the subjective sentences; instead, all the sentences that are about opinions are useful for understanding the person or understanding the product being commented on." + }, + { + "13:19": "So the task of opinion mining can be defined as taking text data as input to generate a set of opinion representations. In each representation we should identify the opinion holder, target, content, and context. Ideally, we can also infer the opinion sentiment from the content and the context to better understand" + }, + { + "13:43": "the opinion." + }, + { + "13:44": "Now, often some elements of the representation are already known. I just gave a good example in the case of product reviews, where the opinion holder and the opinion target are often explicitly identified, and that's why this turns out to be one of the simplest opinion mining tasks. Now, it's interesting to think about other tasks that might also be simple.
Because those are the cases where you can easily build applications by using opinion mining techniques." + }, + { + "14:17": "So now that we have talked about what opinion mining is, and we have defined the task, let's also talk a little bit about why opinion mining is very important and very useful. Here, I identify three major, broad reasons. The first is that it can help with decision support; it can help us optimize our decisions. We often look at other people's opinions, read the reviews, in order to make decisions like buying a product or using a service." + }, + { + "14:52": "We would also be interested in others' opinions when we decide whom to vote for, for example." + }, + { + "15:00": "And policy makers may also want to know people's opinions when designing a new policy. So that's one general kind of application, and it's very broad, of course. The second application is to understand people, and this is also very important. For example, it could help us understand people's preferences, and this could help us better serve people. For example, we can optimize a product search engine or a recommender system if we know what people are interested in and what people think about products." + }, + { + "15:35": "It can also help with advertising, of course, and we can have targeted advertising if we know what kind of people tend to like what kind of product." + }, + { + "15:48": "Now, the third kind of application can be called voluntary surveys. A lot of important research used to be done by conducting manual surveys, with questions and answers; people need to fill in forms to answer the questions. Now, this is directly related to humans as sensors, and we can usually aggregate opinions from a lot of humans to kind of assess the general opinion. This would be very useful for business intelligence, where manufacturers want to know where their products have advantages over others.
+ }, + { + "16:31": "What are the winning features of their products, and the winning features of competing products?" + }, + { + "16:37": "Market research has to do with understanding consumers' opinions, and this creates very useful data for that. Data-driven social science research can benefit from this, because researchers can do text mining to understand people's opinions. And if you can aggregate a lot of opinions from social media, from a lot of people's" + }, + { + "16:58": "information, then you can actually study some questions. For example, we can study the behavior of people on social media, on social networks, and these can be regarded as a voluntary survey done by those people." + }, + { + "17:19": "In general, we can gain a lot of advantage in any prediction task, because we can leverage the text data as extra data about any problem. And so we can use text-based prediction techniques to help make predictions or improve the accuracy of prediction. [MUSIC]" + } + ] + }, + { + "5-6-opinion-mining-and-sentiment-analysis-sentiment-classification": [ + { + "0:00": "[NOISE] This lecture is about sentiment classification." + }, + { + "0:11": "If we assume that" + }, + { + "0:13": "most of the elements in the opinion representation are already known, then our only task may be just sentiment classification, as shown in this case. So suppose we know who the opinion holder is and what the opinion target is, and we also know the content and the context of the opinion; then we mainly need to decide the sentiment of the opinion. So this is a case of just using sentiment classification for understanding opinions." + }, + { + "0:46": "Sentiment classification can be defined more specifically as follows. The input is an opinionated text object; the output is typically a sentiment label, or a sentiment tag, and that can be designed in two ways. One is polarity analysis, where we have categories such as positive, negative, or neutral.
+ }, + { + "1:08": "The other is emotion analysis that can go beyond polarity to characterize the feeling of the opinion holder." + }, + { + "1:21": "In the case of polarity analysis, we sometimes also have numerical ratings, as you often see in some reviews on the web." + }, + { + "1:30": "Five might denote the most positive, and one may be the most negative, for example. In general, you have just these ordered categories to characterize the sentiment." + }, + { + "1:43": "In emotion analysis, of course, there are also different ways to design the categories." + }, + { + "1:49": "The six most frequently used categories are happy, sad, fearful, angry, surprised, and disgusted." + }, + { + "1:59": "So as you can see, the task is essentially a classification task, or categorization task, as we've seen before, so it's a special case of text categorization. This also means any text categorization method can be used to do sentiment classification." + }, + { + "2:15": "Now of course, if you just do that, the accuracy may not be good, because sentiment classification does require some improvements over a regular text categorization technique, or simple text categorization technique. In particular, it needs two kinds of improvements. One is to use more sophisticated features that may be more appropriate for sentiment tagging, as I will discuss in a moment." + }, + { + "2:41": "The other is to consider the order of these categories. Especially in polarity analysis, it's very clear there's an order here, and so these categories are not really independent. There's an order among them, and so it's useful to consider the order. For example, we could use ordinal regression to do that, and that's something that we'll talk more about later. So now, let's talk about some features that are often very useful for text categorization and text mining in general, but some of them are especially needed for sentiment analysis."
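The character and word n-gram features discussed next can be sketched in a few lines of Python. This is a toy illustration, not from the lecture: the helper names and the sample strings are made up, but it shows why a bigram like "not good" captures negation that the unigram "good" misses, and why character n-grams survive misspellings.

```python
from collections import Counter

def char_ngrams(text, n=3):
    """Character n-grams: language-independent and robust to misspellings."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def word_ngrams(text, n=2):
    """Word n-grams: a bigram like ('not', 'good') captures negation
    that the unigram 'good' alone would miss."""
    words = text.lower().split()
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

review = "The room was not good"
print(("not", "good") in word_ngrams(review))   # the bigram catches the negation
# a one-character misspelling still shares character 3-grams with the correct form
print(set(char_ngrams("separate")) & set(char_ngrams("seperate")))
```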
+ }, + { + "3:18": "So let's start from the simplest one, which is character n-grams. You can just have a sequence of characters as a unit, and they can be mixed with different n's, different lengths. This is a very general and very robust way to represent the text data. And you could do that for pretty much any language." + }, + { + "3:42": "And this is also robust to spelling errors or recognition errors, right? So if you misspell a word by one character, this representation would still allow you to match this word when it occurs in the text correctly; the misspelled word and the correct form can be matched because they contain some common n-grams of characters. But of course, such a representation would not be as discriminating as words." + }, + { + "4:10": "So next, we have word n-grams, a sequence of words, and again, we can mix them with different n's. Unigrams are actually often very effective for a lot of text processing tasks, mostly because words are well-designed features created by humans for communication, and so they are often good enough for many tasks. But they're not good enough, or not sufficient, for sentiment analysis, clearly. For example, we might see a sentence like, it's not good, or it's not as good as something else, right? In such a case, if you just take the unigram 'good', that would suggest positive, but 'not good' is negative, so it's not accurate. But if you take the bigram 'not good' together, then it's more accurate. So longer n-grams are generally more discriminative, and they're more specific. If you match a longer n-gram, it says a lot, and it's unlikely to be ambiguous. But it may cause overfitting, because with such very unique features, the machine learning program can easily pick up such features from the training set and rely on such unique features to distinguish the categories.
And obviously, that kind of classifier would not generalize well to future data where such discriminative features will not necessarily occur. So that's the problem of overfitting, and that's not desirable. We can also consider part-of-speech tag n-grams, if we can do part-of-speech tagging; for example, adjective and noun could form a pair. We can also mix n-grams of words and n-grams of part-of-speech tags. For example, the word great might be followed by a noun, and this could become a feature, a hybrid feature, that could be useful for sentiment analysis." + }, + { + "6:06": "So next we can also have word classes. These classes can be syntactic, like part-of-speech tags, or semantic, and they might represent concepts in a thesaurus or ontology, like WordNet. Or they can be recognized named entities, like people or places, and these categories can be used to enrich the representation as additional features. We can also learn word clusters automatically; for example, we've talked about mining associations of words. And so we can have clusters of paradigmatically related words or syntagmatically related words, and these clusters can be features to supplement the word-based representation. Furthermore, we can also have frequent patterns in text, and these could be frequent word sets, where the words that form the pattern do not necessarily occur together or next to each other. But we can also have patterns where the words may occur more closely together, and such patterns provide more discriminative features than words, obviously." + }, + { + "7:14": "And they may also generalize better than just regular n-grams because they are frequent, so you expect them to occur also in test data. So they have a lot of advantages, but they might still face the problem of overfitting as the features become more complex.
This is a problem in general, and the same is true for parse tree-based features, where you can use a parse tree to derive features such as frequent subtrees, or paths; those are even more discriminating, but they are also more likely to cause overfitting. And in general, pattern discovery algorithms are very useful for feature construction, because they allow us to search in a large space of possible features that are more complex than words and that are sometimes useful. So in general, natural language processing is very important for deriving complex features, and they can enrich text representation. So for example, this is a simple sentence that I showed you a long time ago in another lecture. From these words we can only derive simple word n-gram representations or character n-grams. But with NLP, we can enrich the representation with a lot of other information, such as part-of-speech tags, parse trees, entities, or even speech acts. Now with such enriching information, of course, we can generate a lot of other features, more complex features, like mixed grams of words and part-of-speech tags, or even a part of a parse tree." + }, + { + "8:55": "So in general, feature design actually affects categorization accuracy significantly, and it's a very important part of any machine learning application. In general, I think it would be most effective if you can combine machine learning, error analysis, and domain knowledge in designing features. So first you want to use domain knowledge, your understanding of the problem, to design seed features, and you can also define a basic feature space with a lot of possible features for the machine learning program to work on. Machine learning can be applied to select the most effective features or construct new features. That's feature learning, and these features can then be further analyzed by humans through error analysis.
And you can look at the categorization errors and then further analyze what features can help you recover from those errors, or what features cause overfitting and cause those errors. And so this can lead to feature revision that will revise the feature set, and then you can iterate. And we might consider using a different feature space." + }, + { + "10:07": "So NLP enriches text representation, as I just said, and because it enriches the feature space, it allows a much larger space of features, and there are many, many more features that can be very useful for a lot of tasks. But be careful not to use a lot of complex features, because they can cause overfitting; otherwise you would have to train carefully not to let overfitting happen. So a main challenge in designing features, a common challenge, is to optimize a trade-off between exhaustivity and specificity, and this trade-off turns out to be very difficult. Now exhaustivity means we want the features to have high coverage of a lot of documents, and so in that sense, you want the features to be frequent. Specificity requires the features to be discriminative, so naturally, infrequent features tend to be more discriminative. So this really causes a trade-off between frequent versus infrequent features. And that's why feature design is usually hard. And that's probably the most important part in applying machine learning to any problem, particularly in our case, text categorization, or more specifically sentiment classification. [MUSIC]" + } + ] + }, + { + "5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression": [ + { + "0:00": "[NOISE] This lecture is about ordinal logistic regression for sentiment analysis. So, this is our problem setup for a typical sentiment classification problem, or more specifically, rating prediction.
We have an opinionated text document d as input, and we want to generate as output a rating in the range of 1 through k, so it's a discrete rating, and this is a categorization problem. We have k categories here. Now we could use a regular text categorization technique to solve this problem. But such a solution would not consider the order and dependency of the categories. Intuitively, the features that can distinguish rating 2 from 1 may be similar to those that can distinguish k from k-1. For example, positive words generally suggest a higher rating. When we train a categorization model by treating these categories as independent, we would not capture this." + }, + { + "1:17": "So what's the solution? Well, in general we can adapt the classifier to consider the order, and there are many different approaches. And here we're going to talk about one of them, called ordinal logistic regression. Now, let's first think about how we use logistic regression for a binary sentiment categorization problem. So suppose we just want to distinguish positive from negative, and that is just a two-category categorization problem. So the predictors are represented as X, and these are the features. And there are M features altogether. The feature values are real numbers. And this can be a representation of a text document." + }, + { + "1:56": "And Y has two values; it's a binary response variable, 0 or 1. 1 means X is positive, 0 means X is negative. And then of course this is a standard two-category categorization problem, and we can apply logistic regression. You may recall that in logistic regression, the log odds that Y is equal to one is assumed to be a linear function of these features, as shown here. So this allows us to also write the probability of Y equals one, given X" + }, + { + "2:36": "in the equation that you are seeing on the bottom."
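The binary logistic regression formula just described can be sketched as follows. This is a minimal illustration, not the lecture's own code: the two-feature setup and the weight values are hypothetical, chosen so that 'good' pushes toward positive and 'not' pushes toward negative.

```python
import math

def p_positive(x, beta0, beta):
    """P(Y=1 | X) under logistic regression:
    log-odds(Y=1) = beta0 + sum_i beta_i * x_i."""
    z = beta0 + sum(b * xi for b, xi in zip(beta, x))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical two-feature representation: x = [count('good'), count('not')].
beta = [2.0, -1.5]                      # 'good' positive weight, 'not' negative weight
print(p_positive([1, 0], -0.5, beta))   # 'good' present, no negation: high probability
print(p_positive([1, 1], -0.5, beta))   # 'not good': probability drops
```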
+ }, + { + "2:43": "So that's the logistic function, and you can see it relates this probability, the probability that Y=1, to the feature values. And of course the beta i's are parameters here, so this is just a direct application of logistic regression for binary categorization." + }, + { + "3:08": "What if we have multiple categories, multiple levels? Well, we can use such binary logistic regression classifiers to solve this multi-level rating prediction." + }, + { + "3:21": "And the idea is that we can introduce multiple binary classifiers. In each case we ask the classifier to predict whether the rating is j or above, or the rating is lower than j. So when Yj is equal to 1, it means the rating is j or above. When it's 0, that means the rating is lower than j." + }, + { + "3:45": "So basically, if we want to predict a rating in the range of 1 to k, we first have one classifier to distinguish k versus the others. And that's our Classifier 1. And then we're going to have another classifier to distinguish k-1 or above from the rest. That's Classifier 2. And in the end, we need a classifier to distinguish between 2 and 1. So altogether we'll have k-1 classifiers." + }, + { + "4:17": "Now if we do that, of course, then we can also solve this problem, and the logistic regression formulation will also be very straightforward, as you have just seen on the previous slide. Only that here we have more parameters, because for each classifier, we need a different set of parameters. So now the logistic regression classifier is indexed by j, which corresponds to a rating level." + }, + { + "4:46": "And I have also used alpha sub j to replace beta 0, and this is to make the notation more consistent with what we will show in ordinal logistic regression. So here we now have basically k minus one regular logistic regression classifiers, each with its own set of parameters. So now with this approach, we can do rating prediction as follows."
+ }, + { + "5:19": "After we have trained these k-1 logistic regression classifiers, separately of course, we can take a new instance and then invoke the classifiers sequentially to make the decision. So first let's look at the classifier that corresponds to rating level k. This classifier will tell us whether this object should have a rating of k or above. If the probability according to this logistic regression classifier is larger than 0.5, we're going to say yes, the rating is k." + }, + { + "6:02": "Now, what if it's not as large as 0.5? Well, that means the rating's below k, right?" + }, + { + "6:11": "So now, we need to invoke the next classifier, which tells us whether it's at least k minus one." + }, + { + "6:18": "And if the probability is larger than 0.5, then we'll say, well, then it's k-1. What if it says no? Well, that means the rating would be even below k-1. And so we're going to just keep invoking these classifiers. And we hit the end when we need to decide whether it's two or one. So this would help us solve the problem, right? So we can have a classifier that would actually give us a prediction of a rating in the range of 1 through k. Now unfortunately, such a strategy is not an optimal way of solving this problem, and specifically there are two problems with this approach. So these equations are the same as you have seen before." + }, + { + "7:06": "Now the first problem is that there are just too many parameters. Now, can you count how many parameters we have exactly here? This may be an interesting exercise to do. So you might want to just pause the video and try to figure out the solution. How many parameters do we have for each classifier?" + }, + { + "7:28": "And how many classifiers do we have?"
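The sequential decision procedure just described can be sketched as a short loop. This is a toy illustration: `prob_at_least` stands in for the k-1 trained binary classifiers, and the stub probabilities below are made up.

```python
def predict_rating(x, prob_at_least, k):
    """Sequentially invoke the k-1 binary classifiers, where
    prob_at_least(j, x) = P(rating >= j | x). Start at level k and
    walk down; the first classifier answering 'yes' (> 0.5) wins."""
    for j in range(k, 1, -1):
        if prob_at_least(j, x) > 0.5:
            return j
    return 1  # every classifier said no: lowest rating

# Stub classifiers pretending the rating is at least 3 but not at least 4.
stub = lambda j, x: 0.9 if j <= 3 else 0.2
print(predict_rating(None, stub, 5))  # → 3
```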
+ }, + { + "7:31": "Well, you can see that for each classifier we have M plus one parameters, and we have k minus one classifiers altogether, so the total number of parameters is k minus one multiplied by M plus one. That's a lot of parameters, and when the classifier has a lot of parameters, we would in general need a lot of training data to help us decide the optimal parameters of such a complex model." + }, + { + "8:04": "So that's not ideal." + }, + { + "8:07": "Now the second problem is that these problems, these k minus 1 classifiers, are not really independent. These problems are actually dependent." + }, + { + "8:18": "In general, words that are positive would make the rating higher" + }, + { + "8:25": "for any of these classifiers. For all these classifiers. So we should be able to take advantage of this fact." + }, + { + "8:33": "Now the idea of ordinal logistic regression is precisely that. The key idea is an improvement over the k-1 independent logistic regression classifiers, and that idea is to tie the beta parameters. These are the parameters that indicate the influence of those features, and we're going to assume these beta values are the same for all the k-1 classifiers. And this just encodes our intuition that positive words in general would make a higher rating more likely." + }, + { + "9:19": "So this is an intuitive assumption, reasonable for our problem setup, since we have this order in these categories." + }, + { + "9:28": "Now in fact, this gives us two benefits. One is it's going to reduce the number of parameters significantly." + }, + { + "9:38": "And the other is to allow us to share the training data, because all these parameters are assumed to be equal. So the training data for different classifiers can then be shared to help us set the optimal value for beta."
+ }, + { + "9:56": "So we have more data to help us choose a good beta value." + }, + { + "10:01": "So what's the consequence? Well, the formula looks very similar to what you have seen before, only that now the beta parameter has just one index that corresponds to the feature. It no longer has the other index that corresponds to the level of rating." + }, + { + "10:19": "So that means we tie them together, and there's only one set of beta values for all the classifiers. However, each classifier still has a distinct alpha value, the alpha parameter, which is different for each classifier. And this is of course needed to predict the different levels of ratings. So alpha sub j depends on j; a different j has a different alpha value. But the rest of the parameters, the beta i's, are the same. So now you can also ask the question, how many parameters do we have now? Again, that's an interesting question to think about. If you think about it for a moment, you will see that now we have far fewer parameters. Specifically, we have M plus k minus one, because we have M beta values plus k minus one alpha values." + }, + { + "11:15": "So that's basically the main idea of ordinal logistic regression." + }, + { + "11:24": "So, now, let's see how we can use such a method to actually assign ratings. It turns out that with this idea of tying all the beta parameters, we end up with a similar way to make decisions. More specifically, the criterion of whether the predicted probability is at least 0.5 is now equivalent to whether the score of the object is larger than or equal to negative alpha sub j, as shown here. Now, the scoring function is just taking the linear combination of all the features with the learned beta values."
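With the betas tied, the decision reduces to computing one score and comparing it against the alpha thresholds. The sketch below is a toy illustration: the alpha and beta values are made up, chosen so the thresholds -alpha_j increase with the rating level as the lecture's decision rule requires.

```python
def ordinal_predict(x, beta, alphas):
    """Ordinal logistic regression decision rule with tied betas:
    score(x) = sum_i beta_i * x_i, and
    P(rating >= j | x) >= 0.5  <=>  score(x) >= -alpha_j.
    alphas maps rating level j (2..k) to its alpha_j; beta is shared."""
    score = sum(b * xi for b, xi in zip(beta, x))
    for j in sorted(alphas, reverse=True):   # check level k first, then k-1, ...
        if score >= -alphas[j]:
            return j
    return 1

beta = [2.0, -1.5]                           # one shared set of feature weights
alphas = {2: 1.0, 3: 0.0, 4: -1.0, 5: -2.0}  # made-up thresholds, decreasing in j
print(ordinal_predict([1, 1], beta, alphas)) # score 0.5 falls in the bracket for rating 3
```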
+ }, + { + "12:15": "So, this means we can simply make the rating decision by looking at the value of this scoring function and seeing which bracket it falls into. Now you can see the general decision rule is thus: when the score is in a particular range of alpha values, then we will assign the corresponding rating to that text object." + }, + { + "12:49": "So in this approach, we're going to score the object" + }, + { + "12:55": "by using the features and trained parameter values." + }, + { + "13:00": "This score will then be compared with a set of trained alpha values to see which range the score is in. And then, using the range, we can decide which rating the object should be getting, because these ranges of alpha values correspond to the different levels of ratings, and that's from the way we train these alpha values. Each is tied to some level of rating. [MUSIC]" + } + ] + } + ] + }, + { + "Week 6": [ + { + "6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1": [ + { + "0:01": "[MUSIC] This lecture is about Latent Aspect Rating Analysis for Opinion Mining and Sentiment Analysis." + }, + { + "0:14": "In this lecture, we're going to continue discussing Opinion Mining and Sentiment Analysis." + }, + { + "0:19": "In particular, we're going to introduce Latent Aspect Rating Analysis, which allows us to perform detailed analysis of reviews with overall ratings." + }, + { + "0:34": "So, first, the motivation." + }, + { + "0:37": "Here are two reviews that you often see on the net about a hotel. And you see some overall ratings. In this case, both reviewers have given five stars. And, of course, there is also the review text." + }, + { + "0:53": "Now, if you just look at these reviews, it's not very clear whether the hotel is good for its location or for its service. It's also unclear why a reviewer liked this hotel."
+ }, + { + "1:06": "What we want to do is to decompose this overall rating into ratings on different aspects such as value, rooms, location, and service." + }, + { + "1:18": "So, if we can decompose the overall ratings into ratings on these different aspects, then we can obtain a more detailed understanding of the reviewer's opinions about the hotel." + }, + { + "1:30": "And this would also allow us to rank hotels along different dimensions such as value or rooms. But, in general, such detailed understanding will reveal more information about the user's preferences, the reviewer's preferences. And also, we can understand better how the reviewers view this hotel from different perspectives. Now, not only do we want to infer these aspect ratings, we also want to infer the aspect weights. So, some reviewers may care more about value as opposed to service. And that would be a case like what's shown on the left for the weight distribution, where you can see a lot of weight is placed on value." + }, + { + "2:18": "But others care more about service. And therefore, they might place more weight on service than value." + }, + { + "2:25": "The reason why this is also important is because, if you think about a five-star on value, the hotel might still be very expensive if the reviewer cares a lot about service, right? For this kind of service, the price is good, so the reviewer might give it a five-star. But if a reviewer really cares about the value of the hotel, then the five-star most likely would mean really cheap prices. So, in order to interpret the ratings on different aspects accurately, we also need to know these aspect weights. When they're combined together, we can have a more detailed understanding of the opinion. So the task here is to take these reviews and their overall ratings as input, and then generate both the decomposed aspect ratings and the aspect weights as output. And this is a problem called Latent Aspect Rating Analysis."
+ }, + { + "3:31": "So the task, in general, is: given a set of review articles about a topic with overall ratings, we hope to generate three things. One is the major aspects commented on in the reviews. Second is ratings on each aspect, such as value and room service." + }, + { + "3:53": "And third is the relative weights placed on different aspects by the reviewers. And this task has a lot of applications; if you can do this, it will enable a lot of applications. I just listed some here, and later, I will show you some results. For example, we can do opinion-based entity ranking. We can generate an aspect-level opinion summary. We can also analyze reviewers' preferences, compare them, or compare their preferences on different hotels. And we can do personalized recommendation of products." + }, + { + "4:29": "So, of course, the question is how can we solve this problem? Now, as in other cases of these advanced topics, we won\u2019t have time to really cover the technique in detail. But I\u2019m going to give you a brief, basic introduction to the techniques developed for this problem. So, first, we\u2019re going to talk about how to solve the problem in two stages. Later, we\u2019re going to also mention that we can do this with a unified model. Now, take this review with the overall rating as input. What we want to do is, first, we're going to segment the aspects. So we're going to pick out what words are talking about location, and what words are talking about room condition, etc." + }, + { + "5:13": "So with this, we would be able to obtain aspect segments. In particular, we're going to obtain the counts of all the words in each segment, and this is denoted by c sub i of w and d. Now this can be done by using seed words like location and room or price to retrieve the [INAUDIBLE] in the segments.
And then, from those segments, we can further mine words correlated with these seed words, and that would allow us to segment the text into segments discussing different aspects. But, of course, later, as we will see, we can also use [INAUDIBLE] models to do the segmentation. But anyway, that's the first stage, where we obtain the counts of words in each segment. In the second stage, which is called Latent Rating Regression, we're going to use these words and their frequencies in different aspects to predict the overall rating. And this prediction happens in two steps." + }, + { + "6:17": "In the first step, we're going to use the [INAUDIBLE] and the weights of these words in each aspect to predict the aspect rating. So, for example, if in the discussion of location, you see a word like amazing mentioned many times, and it has a high weight, for example, here, 3.9, then it will increase the aspect rating for location. But another word like far, which has a negative weight, if it's mentioned many times, will decrease the rating. So the aspect rating is assumed to be a weighted combination of these word frequencies, where the weights are the sentiment weights of the words. Of course, these sentiment weights might be different for different aspects. So we have, for each aspect, a set of term sentiment weights, as shown here. And that's denoted by beta sub i and w." + }, + { + "7:18": "In the second step, we're going to assume that the overall rating is simply a weighted combination of these aspect ratings. So we're going to assume we have aspect weights, denoted by alpha sub i of d, and these will be used to take a weighted average of the aspect ratings, which are denoted by r sub i of d." + }, + { + "7:42": "And we're going to assume the overall rating is simply a weighted average of these aspect ratings. So this setup allows us to predict the overall rating based on the observable frequencies.
So on the left side, you will see all the observed information, the r sub d and the counts." + }, + { + "8:03": "But on the right side, you see that all the information is actually latent." + }, + { + "8:09": "So, we hope to discover that. Now, this is a typical case of a generative model, where we embed the interesting variables in the model. And then, we're going to set up a generation probability for the overall rating given the observed words. And then, of course, we can adjust these parameter values, including the betas, r's, and alpha i's, in order to maximize the probability of the data. In this case, the conditional probability of the observed rating given the document. We have seen such cases before, for example, in PLSA, where we predict the text data. But here, we're predicting the rating, and the parameters, of course, are very different. But we can see, if we can uncover these parameters, it would be nice, because r sub i of d is precisely the ratings that we want to get, and these are the decomposed ratings on different aspects. Alpha sub i of d is precisely the aspect weights that we hope to get. As a byproduct, we also get the beta values, and these are the sentiment weights of words." + }, + { + "9:31": "So more formally," + }, + { + "9:33": "the data we are modeling here is a set of review documents with overall ratings. Each review document is denoted by d, and the overall rating is denoted by r sub d. And d is pre-segmented into k aspect segments. And we're going to use ci(w,d) to denote the count of word w in aspect segment i. Of course, it's zero if the word doesn't occur in the segment." + }, + { + "10:01": "Now, the model is going to predict the rating based on d. So, we're interested in the conditional probability of r sub d given d. And this model is set up as follows.
So r sub d is assumed to follow a normal distribution whose mean is a weighted average of the aspect ratings r sub i of d, as shown here. This normal distribution has a variance of delta squared. Now, of course, this is just our assumption. The actual rating is not necessarily generated this way. But as always, when we make this assumption, we have a formal way to model the problem, and that allows us to compute the quantities of interest, in this case, the aspect ratings and the aspect weights." + }, + { + "10:52": "Now, the aspect rating, as you see here, is assumed to be a weighted sum of the word counts, where the weights are the sentiment weights of the words." + }, + { + "11:04": "So as I said, the overall rating is assumed to be a weighted average of aspect ratings." + }, + { + "11:15": "Now, these alpha values, alpha sub i of d, are denoted together by an alpha vector that depends on d; these are the document-specific aspect weights. And we\u2019re going to assume that this vector itself is drawn from another multivariate Gaussian distribution, with mean denoted by a mu vector, and covariance matrix sigma here." + }, + { + "11:43": "Now, this means, when we generate an overall rating, we're going to first draw" + }, + { + "11:49": "a set of alpha values from this multivariate Gaussian prior distribution. And once we get these alpha values, we're going to use the weighted average of aspect ratings as the mean of the normal distribution to generate the overall rating." + }, + { + "12:13": "Now, the aspect rating, as I just said, is the sum of the sentiment weights of the words in the aspect segment; note that here the sentiment weights are specific to the aspect. So, beta is indexed by i, and that's for the aspect. And that gives us a way to model the different sentiment of a word in different aspects." + }, + { + "12:36": "This is needed because the same word might have positive sentiment for one aspect but negative sentiment for another aspect.
It's also useful to see what parameters we have here: beta sub i and w gives us the aspect-specific sentiment of w. So, obviously, that's one of the important parameters. But, in general, we can see we have these parameters: the beta values, the delta, the mu, and sigma." + }, + { + "13:12": "So, next, the question is, how can we estimate these parameters? We collectively denote all the parameters by lambda here. Now, we can, as usual, use the maximum likelihood estimate, and this will give us the settings of these parameters that maximize the probability of the observed ratings conditioned on their respective reviews. And, of course, this would then give us all the useful variables that we are interested in computing." + }, + { + "13:45": "So, more specifically, once we estimate the parameters, we can easily compute the aspect rating for aspect i, r sub i of d. And that's simply to take all of the words that occur in segment i, take their counts, multiply them by the sentiment weight of each word, and take a sum. Of course, this term would be zero for words that do not occur in the segment, and that's why we're going to take the sum over all the words in the vocabulary." + }, + { + "14:17": "Now what about the aspect weights, alpha sub i of d? Well, they're not part of our parameters, so we have to compute them. And in this case, we can use maximum a posteriori estimation to compute these alpha values. Basically, we're going to maximize the product of the prior of alpha, according to our assumed multivariate Gaussian distribution, and the likelihood. In this case, the likelihood is the probability of generating this observed overall rating given this particular alpha value and the other parameters, as you see here. So for more details about this model, you can read this paper cited here."
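The two quantities described in this lecture, the aspect rating r_i(d) as a sentiment-weighted sum of word counts and the overall rating as an alpha-weighted average of aspect ratings, can be sketched as below. These are toy numbers: only the 3.9 weight for 'amazing' echoes the lecture's example; the -0.7 weight, counts, and aspect weight are made up for illustration.

```python
def aspect_ratings(counts, sentiment):
    """r_i(d) = sum_w beta_{i,w} * c_i(w, d): each aspect rating is a
    sentiment-weighted sum of the word counts in that aspect's segment."""
    return {i: sum(sentiment[i].get(w, 0.0) * c for w, c in cw.items())
            for i, cw in counts.items()}

def overall_rating(r, alpha):
    """r(d) = sum_i alpha_i(d) * r_i(d): alpha-weighted average of aspect ratings."""
    return sum(alpha[i] * r[i] for i in r)

sentiment = {"location": {"amazing": 3.9, "far": -0.7}}  # beta_{i,w}; -0.7 is made up
counts = {"location": {"amazing": 2, "far": 1}}          # c_i(w, d) from the segment
alpha = {"location": 1.0}                                # aspect weights alpha_i(d)
r = aspect_ratings(counts, sentiment)
print(r["location"], overall_rating(r, alpha))           # 3.9*2 - 0.7*1 = 7.1
```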
+ }, + { + "15:05": "[MUSIC]" + } + ] + }, + { + "6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2": [ + { + "0:00": "[SOUND] This lecture is a continued discussion of Latent Aspect Rating Analysis. Earlier, we talked about how to solve the problem of LARA in two stages, where we first do segmentation of the different aspects, and then use a latent regression model to learn the aspect ratings and the aspect weights. Now it's also possible to develop a unified generative model for solving this problem. That is, we not only model the generation of the overall rating based on text; we also model the generation of the text itself, and so a natural solution would be to use a topic model. So given the entity, we can assume there are aspects that are described by word distributions, that is, topics. And then we can use a topic model to model the generation of the review text." + }, + { + "1:01": "We will assume words in the review text are drawn from these distributions," + }, + { + "1:08": "in the same way as we assumed for a generative model like PLSA." + }, + { + "1:13": "And then we can plug in the latent regression model to use the text to further predict the overall rating. That means we first predict the aspect ratings and then combine them with the aspect weights to predict the overall rating. So this would give us a unified generative model, where we model both the generation of text and the overall rating conditioned on text." + }, + { + "1:40": "So we don't have time to discuss this model in detail, as in many other cases in this part of the course where we discuss cutting-edge topics, but there's a reference cited here where you can find more details." + }, + { + "1:57": "So now I'm going to show you some sample results that you can get by using these kinds of generative models. First, it's about rating decomposition. So here, what you see are the decomposed ratings for three hotels that have the same overall rating. 
So if you just look at the overall rating, you can't really tell much difference between these hotels. But by decomposing these ratings into aspect ratings, we can see some hotels have higher ratings for some dimensions, like value, but others might score better in other dimensions, like location. And so this can give you detailed opinions at the aspect level." + }, + { + "2:38": "Now here, the ground truth is shown in parentheses, so it also allows you to see whether the prediction is accurate. It's not always accurate, but it mostly still reflects some of the trends." + }, + { + "2:53": "In the second result, we compare different reviewers on the same hotel. So the table shows the decomposed ratings for two reviewers of the same hotel. Again, their overall ratings are the same. So if you just look at the overall ratings, you don't really get that much information about the difference between the two reviewers. But after you decompose the ratings, you can see clearly that they have high scores on different dimensions. So this shows that the model can reveal differences in the opinions of different reviewers, and such a detailed understanding can help us understand the reviewers better and also their feedback on the hotel. This next result is something very interesting, because it is in some sense a byproduct. In our problem formulation, we did not really have to do this, but the design of the generative model has this component. These are the sentiment weights for words in different aspects. And you can see the highly positively weighted words versus the negatively weighted words here for each of the four dimensions: value, rooms, location, and cleanliness. The top words clearly make sense, and the bottom words also make sense." + }, + { + "4:10": "So this shows that with this approach, we can also learn sentiment information directly from the data. 
Now, this kind of lexicon is very useful because, in general, a word like long, let's say, may have different sentiment polarities in different contexts. So if I say the battery life of this laptop is long, then that's positive. But if I say the rebooting time for the laptop is long, that's bad, right? So even for reviews about the same product, a laptop, the word long is ambiguous; it could be positive or it could be negative. But this kind of lexicon, which we can learn by using this kind of generative model, can show whether a word is positive for a particular aspect. So this is clearly very useful, and in fact such a lexicon can be directly used to tag other reviews about hotels, or tag comments about hotels in social media like tweets." + }, + { + "5:08": "And what's also interesting is that since this is almost completely unsupervised (well, assuming reviews with overall ratings are available), this can allow us to learn from a potentially large amount of data on the internet to enrich the sentiment lexicon." + }, + { + "5:28": "And here are some results on validating the inferred preference weights. Remember, the model can infer whether a reviewer cares more about service or about price. Now, how do we know whether the inferred weights are correct? This poses a very difficult challenge for evaluation. Here we show an interesting way of evaluating them." + }, + { + "5:50": "What you see here are the prices of hotels in different cities, and these are the prices of hotels that are favored by different groups of reviewers. The top ten are the reviewers with the highest inferred ratio of the value weight to the other aspect weights." + }, + { + "6:09": "So, for example, value versus location, value versus room, etcetera. Now, the top ten are the reviewers that have the highest ratios by this measure. And that means these reviewers tend to put a lot of weight on value as compared with the other dimensions. So that means they really put emphasis on value."
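The aspect-dependent polarity of a word like "long" can be made concrete with a tiny aspect-aware lexicon. This is a hypothetical sketch with invented weights, not the lexicon learned by the model:

```python
# A single global polarity score for "long" cannot capture that it is
# positive for battery life but negative for boot time. An aspect-aware
# lexicon (weights invented for illustration) can.

lexicon = {
    "battery":   {"long": 1.1, "short": -0.9},
    "boot_time": {"long": -1.3, "short": 0.8},
}

def segment_sentiment(aspect, words):
    """Score a pre-segmented piece of text under that aspect's weights."""
    return sum(lexicon[aspect].get(w, 0.0) for w in words)

print(segment_sentiment("battery", ["long"]))    # positive
print(segment_sentiment("boot_time", ["long"]))  # negative
```

Tagging a new review then reduces to segmenting it by aspect and looking up each word under the matching aspect, which is how such a learned lexicon could be applied to other hotel reviews or social-media comments.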
+ }, + { + "6:30": "The bottom ten, on the other hand, are the reviewers with the lowest ratio. What does that mean? Well, it means these reviewers put higher weights on other aspects than on value. So those are people that cared about another dimension and didn't care so much about the value, in some sense, at least as compared with the top-ten group." + }, + { + "6:52": "Now, these ratios are computed based on the inferred weights from the model." + }, + { + "6:57": "So now you can see the average prices of hotels favored by the top-ten reviewers are indeed much cheaper than those that are favored by the bottom ten. And this provides an indirect way of validating the inferred weights. It just means the weights are not random; they are actually meaningful here. In comparison with the average price in these three cities, you can actually see the top ten tend to favor hotels below average in price, whereas the bottom ten, who care a lot about other things like service or room condition, tend to favor hotels that have higher prices than average. So with these results we can build a lot of interesting applications. For example, a direct application would be to generate a rated aspect summary, and because of the decomposition, we can now generate summaries for each aspect: the positive sentences and the negative sentences about each aspect. It's more informative than the original review that just has an overall rating and review text. Here are some other results about the aspects discovered from reviews with no ratings. These are mp3 player reviews, and these results show that the model can discover some interesting aspects commented on in reviews with low overall ratings versus those with higher overall ratings, and the two groups care more about different aspects." + }, + { + "8:22": "Or they comment more on different aspects. So that can help us discover, for example, consumers' trends in appreciating different features of products. 
For example, one might discover the trend that people tend to like larger screens on cell phones, or lighter-weight laptops, etcetera. Such knowledge can be useful for manufacturers to design their next generation of products. Here are some interesting results on analyzing users' rating behavior. So what you see are the average weights along different dimensions from different groups of reviewers. On the left side, you see the weights of reviewers that liked the expensive hotels. They gave the expensive hotels five stars, and you can see their average weights tend to be higher for service. That suggests that people like expensive hotels because of good service, and that's not surprising. It's also another way to validate the inferred weights." + }, + { + "9:34": "If you look at the right side, at the column of five stars, these are the reviewers that liked the cheaper hotels, and they gave cheaper hotels five stars. As we expected, they put more weight on value, and that's why they liked the cheaper hotels." + }, + { + "9:52": "But if you look at the cases when they didn't like the expensive hotels, or the cheaper hotels, then you'll see that they tended to put more weight on the condition of the room and cleanliness." + }, + { + "10:04": "So this shows that by using this model, we can infer some information that's very hard to obtain even if you read all the reviews. Even if you read all the reviews, it's very hard to infer such preferences or such emphasis. So this is a case where text mining algorithms can go beyond what humans can do, to reveal interesting patterns in the data. And this, of course, can be very useful. You can compare different hotels, compare the opinions from different consumer groups in different locations. And of course, the model is general. It can be applied to any reviews with overall ratings. So this is a very useful technique that can support a lot of text mining applications." 
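The validation idea described earlier, ranking reviewers by how much relative weight they put on value and then comparing the prices of the hotels each group favored, can be sketched roughly as follows. All reviewer weights and prices below are invented; this is only an illustration of the procedure, not the paper's data:

```python
# Rank reviewers by their inferred value-to-other-aspects weight ratio,
# then compare average prices of hotels favored by the top vs. bottom group.
# If the inferred weights are meaningful, the value-focused (top) group
# should favor cheaper hotels. All numbers are hypothetical.

reviewers = [
    # (inferred aspect weights, price of the hotel this reviewer favored)
    ({"value": 0.7, "service": 0.2, "room": 0.1}, 80),
    ({"value": 0.6, "service": 0.3, "room": 0.1}, 95),
    ({"value": 0.2, "service": 0.5, "room": 0.3}, 210),
    ({"value": 0.1, "service": 0.6, "room": 0.3}, 250),
]

def value_ratio(weights):
    # Weight on value relative to the combined weight on other aspects.
    return weights["value"] / (weights["service"] + weights["room"])

ranked = sorted(reviewers, key=lambda rv: value_ratio(rv[0]), reverse=True)
half = len(ranked) // 2
top_avg = sum(price for _, price in ranked[:half]) / half
bottom_avg = sum(price for _, price in ranked[half:]) / half
print(top_avg, bottom_avg)  # 87.5 230.0
```

The point of the comparison is exactly the lecture's: the groups are formed purely from inferred weights, so a consistent price gap between them is indirect evidence that the weights are meaningful rather than random.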
+ }, + { + "10:50": "Finally, here are the results of applying this model for personalized ranking or recommendation of entities." + }, + { + "10:57": "Because we can infer the reviewers' weights on different dimensions, we can allow a user to actually say what they care about. So, for example, I have a query here that says 90% of the weight should be on value and 10% on the others. That just means I don't care about the other aspects; I just care about getting a cheaper hotel. My emphasis is on the value dimension. Now, what we can do with such a query is use reviewers that we believe have a similar preference to recommend hotels for you. How can we know that? Well, we can infer the weights of those reviewers on different aspects. We can find the reviewers whose inferred weights are similar to yours, and then use those reviewers to recommend hotels for you. This is what we call personalized, or rather query-specific, recommendation. Now, the non-personalized recommendations are shown on the top, and you can see the top results generally have much higher prices than the lower group. That's because when the reviewers cared more about the value, as dictated by this query, they tended to really favor low-price hotels. So this is yet another application of this technique." + }, + { + "12:18": "It shows that by doing text mining we can understand users better. And once we understand users better, we can serve these users better. So to summarize our discussion of opinion mining in general: this is a very important topic with a lot of applications." + }, + { + "12:33": "Sentiment analysis can be readily done by using just text categorization, but standard techniques tend to not be enough, and so we need to have enriched feature representation." + }, + { + "12:45": "We also need to consider the order of those categories, and we talked about ordinal regression for some of these problems. 
We have also seen that generative models are powerful for mining latent user preferences, in particular the generative model for latent rating regression. We embed some interesting preference information and sentiment weights of words in the model, and as a result we can learn this useful information when fitting the model to the data. Now, most approaches have been proposed and evaluated for product reviews, and that is because in such a context, the opinion holder and the opinion target are clear, and they are easy to analyze. There are, of course, also a lot of practical applications. Opinion mining from news and social media is also important, but that's more difficult than analyzing review data, mainly because the opinion holders and opinion targets are often unclear. So that calls for natural language processing techniques to uncover them accurately." + }, + { + "13:50": "Here are some suggested readings. The first two are small books that are surveys of this topic, where you can find a lot of discussion about other variations of the problem and techniques proposed for solving it." + }, + { + "14:08": "The next two papers are about generative models for latent aspect rating analysis. The first one is about solving the problem in two stages, and the second one is about a unified model where the topic model is integrated with the regression model to solve the problem in a unified way." + }, + { + "14:30": "[MUSIC]" + } + ] + }, + { + "6-3-text-based-prediction": [ + { + "0:00": "[SOUND] This lecture is about Text-Based Prediction. In this lecture, we're going to start talking about mining a different kind of knowledge, as you can see here on this slide. Namely, we're going to use text data to infer values of some other variables in the real world that may not be directly related to the text, or only remotely related to the text data. 
So this is very different from content analysis or topic mining, where we directly characterize the content of text. It's also different from opinion mining or sentiment analysis, which still has to do with characterizing mostly the content, only that we focus more on the subjective content, which reflects what we know about the opinion holder." + }, + { + "1:05": "But this only provides a limited view of what we can predict." + }, + { + "1:10": "In this lecture and the following lectures, we're going to talk more about how we can predict more information about the world. How can we discover sophisticated patterns in text together with other kinds of data?" + }, + { + "1:28": "It would be useful first to take a look at the big picture of prediction, and data mining in general, and I call this the data mining loop. So in the picture that you are seeing right now, there are multiple sensors, including human sensors, that report what we have seen in the real world in the form of data; of course, the data is in the form of non-text data and text data." + }, + { + "1:51": "And our goal is to see if we can predict the values of important real-world variables that matter to us. For example, someone's health condition, or the weather, etc. These variables are important because we might want to act on them. We might want to make decisions based on them. So how can we get from the data to these predicted values? Well, in general, we'll first have to do data mining and analysis of the data." + }, + { + "2:23": "Because we, in general, should treat all the data that we collected" + }, + { + "2:30": "in such a prediction problem setup, we are very much interested in the joint mining of non-text and text data, which combines all the data together." + }, + { + "2:41": "And then, through analysis, we generate multiple predictors of this variable that is interesting to us. And we call these features. 
And these features can then be put into a predictive model to actually predict the value of the interesting variable." + }, + { + "3:02": "So this then allows us to change the world. And so this basically is the general process for making a prediction based on data, including the text data." + }, + { + "3:17": "Now, it's important to emphasize that humans actually play a very important role in this process," + }, + { + "3:24": "especially because of the involvement of text data. So humans first would be involved in the mining of the data. They would control the generation of these features. And they would also help us understand the text data, because text data are created to be consumed by humans. Humans are the best at consuming or interpreting text data." + }, + { + "3:48": "But when there are, of course, a lot of text data, then machines have to help, and that's why we need to do text data mining." + }, + { + "3:55": "Sometimes machines can see patterns in a lot of data that humans may not see. But in general, humans play an important role in analyzing the text data in such applications. Next, humans also must be involved in predictive model building, adjusting, and testing. In particular, we may have a lot of domain knowledge about the prediction problem that we can build into this predictive model. And then, of course, when we have predicted values for the variables, humans would be involved in taking actions to change the world or make decisions based on these predicted values." + }, + { + "4:36": "And finally, it's interesting that humans could be involved in controlling the sensors," + }, + { + "4:43": "and this is so that we can adjust the sensors to collect the most useful data for prediction." + }, + { + "4:52": "So that's why I call this the data mining loop. Because as we perturb the sensors to collect new and more useful data, we will obtain more data for prediction. 
And this data generally will help us improve the prediction accuracy. And in this loop, humans will recognize what additional data need to be collected. And machines, of course, help humans identify what data should be collected next. In general, we want to collect data that is most useful for learning. And there is actually a subarea in machine learning called active learning that has to do with this: how do you identify data" + }, + { + "5:32": "points that would be most helpful in machine learning if you can label them, right?" + }, + { + "5:38": "So, in general, you can see there is a loop here, from data acquisition to data analysis, or data mining, to prediction of values, and to taking actions to change the world, and then observing what happens. And then you can decide what additional data have to be collected by adjusting the sensors. Or, from the prediction errors, you can also note what additional data we need to acquire in order to improve the accuracy of prediction. And this big picture is actually very general, and it reflects a lot of important applications of big data." + }, + { + "6:16": "So, it's useful to keep that in mind while we are looking at some text mining techniques." + }, + { + "6:22": "So from a text mining perspective, we're interested in text-based prediction. Of course, sometimes text alone can make predictions. And this is most useful for prediction about human behavior or human preferences or opinions. But in general, text data will be put together with non-text data. So the interesting questions here would be, first, how can we design effective predictors? And how do we generate such effective predictors from text?" + }, + { + "6:53": "And this question has been addressed to some extent in some previous lectures, where we talked about what kind of features we can design for text data. And it has also been addressed to some extent when talking about the other kinds of knowledge that we can mine from text. 
So, for example, topic mining can be very useful to generate patterns or topic-based indicators or predictors that can be further fed into a predictive model. So topics can be an intermediate representation of text that allows us to design high-level features or predictors that are useful for the prediction of some other variable. Compared with features generated from the original text data, such a representation captures the problem much better, and it serves as a more effective predictor." + }, + { + "7:46": "And similarly, sentiment analysis can lead to such predictors as well. So, those other data mining or text mining algorithms can be used to generate predictors." + }, + { + "7:58": "The other question is, how can we jointly mine text and non-text data together? Now, this is a question that we have not addressed yet. So, in this lecture and in the following lectures, we're going to address this problem, because this is where we can generate much more enriched features for prediction, and it allows us to reveal a lot of interesting knowledge about the world. The patterns that are generated from text and non-text data can themselves sometimes already be useful for prediction. But when they are put together with many other predictors, they can really help improve the prediction." + }, + { + "8:39": "Basically, you can see text-based prediction can actually serve as a unified framework to combine many text mining and analysis techniques, including topic mining and other content mining techniques, or sentiment analysis." + }, + { + "8:55": "The goal here is mainly to infer values of real-world variables. But in order to achieve the goal, we can do some other preparations, and these are subtasks. So one subtask could be to mine the content of text data, like topic mining. And the other could be to mine knowledge about the observer, so sentiment analysis and opinion mining." + }, + { + "9:21": "And both can help provide predictors for the prediction problem."
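The feature-combination step described here can be sketched very simply: text-derived predictors (e.g. topic proportions) and non-text variables are concatenated into one feature vector that any predictive model consumes. This is a minimal hypothetical sketch; the topic proportions, non-text values, and model weights below are all invented, and a hand-coded linear predictor stands in for "any predictive model":

```python
# Join text-derived predictors with non-text predictors, then feed the
# combined vector into a (here, trivially linear) predictive model.

def make_features(topic_props, non_text):
    """Concatenate topic proportions mined from text with non-text data."""
    return topic_props + non_text

# e.g. 3 topic proportions from topic mining + 2 non-text variables
x = make_features([0.5, 0.3, 0.2], [1.0, 40.0])

# Hypothetical learned weights and bias of a linear predictor.
w = [2.0, -1.0, 0.5, 0.3, 0.01]
b = 0.1

prediction = b + sum(wi * xi for wi, xi in zip(w, x))
print(round(prediction, 2))  # 1.6
```

The point is only structural: once both data sources are expressed as numbers in one vector, the downstream model does not care which predictors came from text and which did not.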
+ }, + { + "9:27": "And of course, we can also add non-text data directly to the predictive model, but the non-text data also helps provide a context for text analysis, and that further improves the topic mining and the opinion analysis. Such improvement often leads to more effective predictors for our problems. It enlarges the space of patterns, opinions, and topics that we can mine from text, and we'll discuss that more later. So the joint analysis of text and non-text data can actually be understood from two perspectives." + }, + { + "10:05": "In one perspective, non-text data can help with text mining." + }, + { + "10:11": "Because non-text data can provide a context for mining text data, it provides a way to partition the data in different ways. And this leads to a number of techniques for contextual text mining, that is, mining text in the context defined by non-text data. And you see this reference here for a large body of work in this direction. And I will highlight some of it in the next lectures." + }, + { + "10:39": "Now, the other perspective is that text data can help with non-text data mining as well. And this is because text data can help interpret patterns discovered from non-text data. Let's say you discover some frequent patterns from non-text data. Now we can use the text data associated with instances where the pattern occurs, as well as text data that is associated with instances where the pattern doesn't occur. And this gives us two sets of text data. And then we can see what's the difference. And this difference in text data is interpretable, because text content is easy to digest. And that difference might suggest some meaning for this pattern that we found in the non-text data. So, it helps interpret such patterns. And this technique is called pattern annotation." + }, + { + "11:32": "And you can see this reference listed here for more detail." + }, + { + "11:38": "So here are the references that I just mentioned. 
The first is the reference for pattern annotation. The second is Qiaozhu Mei's dissertation on contextual text mining. It contains a large body of work on contextual text mining techniques." + }, + { + "11:56": "[MUSIC]" + } + ] + }, + { + "6-4-contextual-text-mining-motivation": [ + { + "0:00": "[SOUND] This lecture is about contextual text mining." + }, + { + "0:11": "Contextual text mining is related to multiple kinds of knowledge that we mine from text data, as I'm showing here. It's related to topic mining because you can make topics associated with context, like time or location. And similarly, we can make opinion mining more contextualized, connecting opinions to context." + }, + { + "0:34": "It's related to text-based prediction because it allows us to combine non-text data with text data to derive sophisticated predictors for the prediction problem. So more specifically, why are we interested in contextual text mining? Well, that's first because text often has rich context information. And this can include direct context such as metadata, and also indirect context. So, the direct context includes metadata such as time, location, authors, and source of the text data. And they're almost always available to us." + }, + { + "1:14": "Indirect context refers to additional data related to the metadata. So for example, from the authors, we can further obtain additional context such as the social network of the author, or the author's age." + }, + { + "1:30": "Such information is not, in general, directly related to the text, yet through the authors, we can connect them. There could be other text data from the same source, and through the source, that other text can be connected with this text as well. So in general, any related data can be regarded as context, so there can be very rich context information." + }, + { + "1:55": "And so what's the use? What is text context used for? Well, context can be used to partition text data in many interesting ways. 
It allows us to partition text data in almost any way we need. And this is very important because it allows us to do interesting comparative analyses. It also, in general, provides meaning for the discovered topics, if we associate the text with context." + }, + { + "2:25": "So here's an illustration of how context can enable interesting ways of partitioning text data. Here I just show some research papers published in different years," + }, + { + "2:41": "at different venues; the conference names are listed on the bottom here, like SIGIR or ACL, etc." + }, + { + "2:49": "Now, such text data can be partitioned in many interesting ways because we have context." + }, + { + "2:56": "So the context here just includes time and the conference venues, but perhaps we could include some other variables as well." + }, + { + "3:06": "But let's see how we can partition this in interesting ways. First, we can treat each paper as a separate unit. In this case, each paper has its own ID and its own context; it's independent. But we can also treat all the papers from 1998 as one group, and this is only possible because of the availability of time. And we can partition the data in this way. This would allow us to compare topics, for example, in different years." + }, + { + "3:39": "Similarly, we can partition the data based on the venues. We can get all the SIGIR papers and compare those papers with the rest, or compare SIGIR papers with KDD papers, with ACL papers." + }, + { + "3:52": "We can also partition the data to obtain the papers written by authors in the U.S., and that, of course, uses the additional context of the authors. And this would allow us to then compare such a subset with another set of papers written by authors in other countries." + }, + { + "4:13": "Or we can obtain a set of papers about text mining, and this can be compared with papers about another topic. 
And note that these partitionings can also be intersected with each other to generate even more complicated partitions." + }, + { + "4:29": "And so in general, this enables the discovery of knowledge associated with different contexts as needed." + }, + { + "4:37": "And in particular, we can compare different contexts. And this often gives us a lot of useful knowledge. For example, by comparing topics over time, we can see trends of topics. Comparing topics in different contexts can also reveal differences between the two contexts. So there are many interesting questions that require contextual text mining. Here I list some very specific ones. For example, what topics have been getting increasing attention recently in data mining research? Now, to answer this question, obviously we need to analyze text in the context of time." + }, + { + "5:13": "So time is the context in this case. Is there any difference in the responses of people in different regions to an event, to any event? This is a very broad question; in this case, of course, location is the context. What are the common research interests of two researchers? In this case, authors can be the context. Is there any difference in the research topics published by authors in the USA and those outside? Now in this case, the context would include the authors and their affiliation and location." + }, + { + "5:47": "So this goes beyond just the author himself or herself. We need to look at the additional information connected to the author. Is there any difference in the opinions about a topic expressed on one social network and another? In this case, the social networks of authors and the topic can be the context." + }, + { + "6:06": "Are there topics in news data that are correlated with sudden changes in stock prices? In this case, we can use a time series, such as stock prices, as context." + }, + { + "6:17": "What issues mattered in the 2012 presidential campaign, or presidential election? 
Now in this case, time serves again as the context. So, as you can see, the list can go on and on. Basically, contextual text mining can have many applications. [MUSIC]" + } + ] + }, + { + "6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis": [ + { + "0:00": "[MUSIC] This lecture is about a specific technique for contextual text mining called Contextual Probabilistic Latent Semantic Analysis." + }, + { + "0:19": "In this lecture, we're going to continue discussing contextual text mining, and we're going to introduce Contextual Probabilistic Latent Semantic Analysis as an extension of PLSA for doing contextual text mining." + }, + { + "0:34": "Recall that in contextual text mining, we hope to analyze topics in text in consideration of the context, so that we can associate the topics with the properties of the context we are interested in." + }, + { + "0:48": "So in this approach, Contextual Probabilistic Latent Semantic Analysis, or CPLSA, the main idea is to add interesting context variables into the generative model." + }, + { + "1:03": "Recall that before, when we generate the text, we generally assume we start with some topics and then sample words from these topics. But here, we're going to add context variables, so that the coverage of topics and also the content of topics would be tied to context. Or in other words, we're going to let the context influence both the coverage and the content of a topic." + }, + { + "1:31": "The consequence is that this will enable us to discover contextualized topics, making the topics more interesting, more meaningful. Because we can then have topics that can be interpreted as specific to a particular context that we are interested in. For example, a particular time period." + }, + { + "1:52": "As an extension of the PLSA model, CPLSA makes the following changes. First, it models the conditional likelihood of text given context."
+ }, + { + "2:07": "That clearly suggests that the generation of text would then depend on context, and that allows us to bring context into the generative model." + }, + { + "2:18": "Second, it makes two specific assumptions about the dependency of topics on context. One is to assume that depending on the context, depending on different time periods or different locations, there are different views of a topic, or different versions of the word distributions that characterize a topic." + }, + { + "2:38": "And this assumption allows us to discover different variations of the same topic in different contexts." + }, + { + "2:46": "The other is that we assume the topic coverage also depends on the context." + }, + { + "2:55": "That means depending on the time or location, we might cover topics differently." + }, + { + "3:00": "Again, this dependency would then allow us to capture the association of topics with specific contexts. We can still use the EM algorithm to solve the problem of parameter estimation." + }, + { + "3:16": "And in this case, the estimated parameters would naturally contain context variables, and in particular, a lot of conditional probabilities of topics given certain contexts. And this is what allows you to do contextual text mining. So this is the basic idea." + }, + { + "3:35": "Now, we don't have time to introduce this model in detail, but there are references here that you can look into for more detail. Here I just want to explain the high-level ideas in more detail. Particularly, I want to explain the generation process of text data with associated context in such a model." + }, + { + "4:01": "So as you see here, we can assume there are still multiple topics. For example, some topics might represent themes like government response, donation, or the city of New Orleans. Now, this example is in the context of Hurricane Katrina, which hit New Orleans."
+ }, + { + "4:22": "Now as you can see, we assume there are different views associated with each of the topics, and these are shown as View 1, View 2, View 3. Each view is a different version of the word distributions. And these views are tied to some context variables. For example, tied to the location Texas, or the time July 2005, or the occupation of the author being a sociologist." + }, + { + "4:56": "Now, on the right side, we assume the document has context information. So the time is known to be July 2005, the location is Texas, etc. And such context information is what we hope to model as well. So we're not going to just model the text." + }, + { + "5:15": "And so one idea here is to model the variations of topic content under various contexts. And this gives us the different views of the word distributions." + }, + { + "5:27": "Now on the bottom, you will see the theme coverage, or topic coverage, might also vary according to these contexts, because in the case of a location like Texas, people might want to cover the red topic more; that's New Orleans. That's visualized here. But in a certain time period, maybe a particular topic will be covered more. So this variation is also considered in CPLSA. So to generate such a document with context, we first choose a view." + }, + { + "6:08": "And this view, of course, could be from any of these contexts. Let's say we have taken the view that depends on the time, in the middle. So now, we will have a specific version of word distributions. Now, you can see some probabilities of words for each topic." + }, + { + "6:26": "Now, once we have chosen a view, the situation will be very similar to what happens in standard PLSA. We assume we have a word distribution associated with each topic, right?"
+ }, + { + "6:39": "And then next, we will also choose a coverage from the bottom. So we're going to choose a particular coverage. Before, in PLSA, the coverage was fixed and assigned to a particular document; each document has just one coverage distribution." + }, + { + "6:58": "Now here, because we consider context, the distribution of topics, or the coverage of topics, can vary depending on the context that influences the coverage." + }, + { + "7:10": "So, for example, we might pick a particular coverage. Let's say in this case we picked a document-specific coverage." + }, + { + "7:20": "Now with the coverage and these word distributions, we can generate a document in exactly the same way as in PLSA. So what this means is, we're going to use the coverage to choose a topic, to choose one of these three topics. Let's say we have picked the yellow topic. Then we'll draw a word from this particular topic on the top." + }, + { + "7:44": "Okay, so we might get a word like government. And then next time we might choose a different topic, and we'll get donate, etc., until we generate all the words. And this is basically the same process as in PLSA." + }, + { + "8:00": "So the main difference is, when we obtain the coverage and the word distributions, we let the context influence our choice. So in other words, we have extra switches that are tied to these contexts that will control the choices of different views of topics and the choices of coverage." + }, + { + "8:22": "And naturally the model has more parameters to estimate. But once we can estimate those parameters that involve the context, then we will be able to understand the context-specific views of topics, or context-specific coverages of topics. And this is precisely what we want in contextual text mining." + }, + { + "8:40": "So here are some sample results from using such a model. Not necessarily exactly the same model, but similar models.
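The two-stage generation story above (a context selects a view and a topic coverage, then words are drawn topic by topic exactly as in PLSA) can be sketched in a few lines. This is only a toy illustration: the contexts, topics, words, and probabilities are invented for the example and are not taken from the lecture's actual model.

```python
import random

# Toy CPLSA-style generation: a context picks a "view" (context-specific
# word distributions) and a topic coverage, then words are drawn as in PLSA.
views = {
    "Texas":     {"flood":    {"water": 0.7, "city": 0.3},
                  "response": {"government": 0.6, "aid": 0.4}},
    "July-2005": {"flood":    {"levee": 0.5, "water": 0.5},
                  "response": {"government": 0.7, "aid": 0.3}},
}
coverage = {"Texas": {"flood": 0.8, "response": 0.2},
            "July-2005": {"flood": 0.4, "response": 0.6}}

def sample(dist, rng):
    """Draw one key from a {item: probability} distribution."""
    r, acc = rng.random(), 0.0
    for item, p in dist.items():
        acc += p
        if r <= acc:
            return item
    return item  # guard against floating-point rounding

def generate_doc(context, length, seed=0):
    """Pick the context's view and coverage, then repeatedly choose a
    topic from the coverage and a word from that topic's distribution."""
    rng = random.Random(seed)
    view, cov = views[context], coverage[context]
    return [sample(view[sample(cov, rng)], rng) for _ in range(length)]

print(generate_doc("Texas", 5))
```

Swapping the context string changes both which word distributions are used and how often each topic appears, which is exactly the two dependencies CPLSA adds on top of PLSA.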
So on this slide, you see some sample results of comparing news articles about the Iraq War and the Afghanistan War." + }, + { + "8:56": "Now we have about 30 articles on the Iraq War and 26 articles on the Afghanistan War. And in this case, the goal is to reveal the common topics covered in both sets of articles, and the variations of each topic in each of the two collections." + }, + { + "9:18": "So in this case the context is explicitly specified by the topic or collection." + }, + { + "9:25": "And we see the results here show that there is a common theme corresponding to Cluster 1 here in this column. And there is a common theme indicating that the United Nations was involved in both wars. It's a common topic covered in both sets of articles, and that's indicated by the high-probability words shown here, united and nations." + }, + { + "9:51": "Now if you know the background, of course, this is not surprising, and this topic is indeed very relevant to both wars. If you look further down the column, what's interesting is that the next two cells of word distributions actually tell us collection-specific variations of the topic of the United Nations. It indicates that in the Iraq War, the United Nations was more involved in weapons inspections, whereas in the Afghanistan War it was more involved in maybe aid to the Northern Alliance. It's a different variation of the topic of the United Nations." + }, + { + "10:30": "So this shows that by bringing in the context, in this case the different wars, or the different collections of text, we can have topic variations tied to these contexts, to reveal the differences in the coverage of the United Nations in the two wars." + }, + { + "10:46": "Now similarly, if you look at the second cluster, Cluster 2, it has to do with the killing of people. And again, it's not surprising if you know the background about wars. All the wars involve killing of people, but imagine if you are not familiar with the text collections.
We have a lot of text articles, and such a technique can reveal the common topics covered in both sets of articles. It can be used to reveal common topics in multiple sets of articles as well. If you look further down the column of Cluster 2, you see variations of the killing-of-people topic, and these correspond to the different contexts." + }, + { + "11:28": "And here is another example of results, obtained from blog articles about Hurricane Katrina." + }, + { + "11:37": "In this case, what you see here is a visualization of the trends of topics over time." + }, + { + "11:47": "And the top one shows just the temporal trends of two topics. One is oil price, and one is about the flooding of the city of New Orleans." + }, + { + "12:00": "Now these topics are obtained from blog articles about Hurricane Katrina." + }, + { + "12:07": "And people talk about these topics, along with some other topics. But the visualization shows that with this technique, we can obtain the conditional distribution of time given a topic. So this allows us to plot this conditional probability as curves like what you're seeing here. We see that, initially, the two curves track each other very well. But later we see the topic of New Orleans was mentioned again, but oil price was not. And this turns out to be the time period when another hurricane, Hurricane Rita, hit the region. And that apparently triggered more discussion about the flooding of the city." + }, + { + "12:54": "The bottom curve shows the coverage of this topic about the flooding of the city by blog articles in different locations. And it also shows some shift of coverage that might be related to people migrating from the state of Louisiana to Texas, for example." + }, + { + "13:20": "So in this case we can see that time can be used as context to reveal trends of topics." + }, + { + "13:27": "These are some additional results, on spatial patterns. In this case it was about the topic of government response.
And there was some criticism about the slow response of the government in the case of Hurricane Katrina." + }, + { + "13:44": "And the discussion was covered in different locations. These visualizations show the coverage in different weeks of the event. Initially it's covered mostly in the victim states, in the South, but then it gradually spread into other locations. But in week four, which is shown on the bottom left, we see a pattern that's very similar to the first week on the top left. And that's when, again, Hurricane Rita hit the region. So such a technique would allow us to use location as context to examine the spread of topics. And of course the model is completely general, so you can apply this to any other collections of text to reveal spatiotemporal patterns." + }, + { + "14:34": "Here you see another application of this kind of model, where we look at the use of the model for event impact analysis." + }, + { + "14:43": "So here we're looking at research articles in information retrieval, IR, particularly SIGIR papers. And the topic we are focusing on is about retrieval models. And you can see the top words with high probability for this topic on the left." + }, + { + "14:59": "And then we hope to examine the impact of two events. One is the start of TREC, the Text REtrieval Conference. This is a major evaluation effort sponsored by the U.S. government, and it was launched in 1992 or around that time. And it is known to have made an impact on the topics of research in information retrieval." + }, + { + "15:23": "The other is the publication of a seminal paper by Ponte and Croft. This is about a language model approach to information retrieval. It's also known to have made a high impact on information retrieval research. So we hope to use this kind of model to understand the impact. The idea here is simply to use time as context, and use these events to divide the time into a period before the event and another after the event.
And then we can compare the differences of the topics and their variations, etc. So in this case, the results show that before TREC, the study of retrieval models mostly involved the vector space model, the Boolean model, etc. But after TREC, apparently the study of retrieval models involved a lot of other words, and that seems to suggest some different retrieval tasks. For example, email was used in the enterprise search task, and subtopic retrieval was another task later introduced by TREC." + }, + { + "16:28": "On the bottom, we see the variations that are correlated with the publication of the language model paper. Before, we have the classic probabilistic models, the logic model, the Boolean model, etc. But after 1998, we see a clear dominance of language models as probabilistic models, and we see words like language model, estimation of parameters, etc. So this technique can use events as context to understand the impact of an event. Again, the technique is general, so you can use it to analyze the impact of any event. Here are some suggested readings." + }, + { + "17:11": "The first is a paper about a simple extension of PLSA to enable cross-collection comparison." + }, + { + "17:21": "It's to perform comparative text mining, which allows us to extract common topics shared by multiple collections and their variations in each collection." + }, + { + "17:31": "The second one is the main paper about the CPLSA model, with a discussion of a lot of applications. The third one has a lot of details about the spatiotemporal patterns for the Hurricane Katrina example. [MUSIC]" + } + ] + }, + { + "6-6-contextual-text-mining-mining-topics-with-social-network-context": [ + { + "0:00": "[SOUND] This lecture is about how to mine text data with a social network as context. In this lecture we're going to continue discussing contextual text mining. In particular, we're going to look at the social network of authors as context."
+ }, + { + "0:26": "So first, what's our motivation for using network context for the analysis of text?" + }, + { + "0:32": "The context of a text article can form a network." + }, + { + "0:37": "For example, the authors of research articles might form collaboration networks." + }, + { + "0:44": "And authors of social media content might form social networks. For example, on Twitter people might follow each other, or on Facebook people might connect as friends, etc. So such context connects the content of the authors. Similarly, locations associated with text can also be connected to form a geographical network. In general, you can imagine the metadata of the text data can form some kind of network if they have some relations." + }, + { + "1:24": "Now there is some benefit in jointly analyzing text and its social network context, or network context in general. And that's because we can use the network to impose some constraints on the topics of text." + }, + { + "1:41": "So for example, it's reasonable to assume that authors connected in a collaboration network tend to write about similar topics." + }, + { + "1:53": "So such heuristics can be used to guide us in analyzing topics. Text also can help characterize the content associated with each subnetwork. And this is to say that both" + }, + { + "2:11": "kinds of data, the network and text, can help each other." + }, + { + "2:16": "So for example, the difference in opinions expressed in two subnetworks can be revealed by doing this type of joint analysis." + }, + { + "2:30": "So here we briefly introduce a model called the network supervised topic model." + }, + { + "2:40": "In this slide we're going to give some general ideas, and in the next slide we're going to give some more details." + }, + { + "2:48": "But in general, in this part of the course we don't have enough time to cover these frontier topics in detail.
But we provide references that would allow you to read more about the topic to know the details." + }, + { + "3:05": "But it should still be useful to know the general ideas, and to know what they can do, so you know when you might be able to use them. So the general idea of the network supervised topic model is the following. Let's start with viewing regular topic models, like PLSA and LDA, as solving an optimization problem. Of course, in this case, the optimization objective function is a likelihood function. So we often use the maximum likelihood estimator to obtain the parameters. And these parameters will give us the useful information that we want to obtain from text data, for example, topics. So we want to maximize the probability of the text data given the parameters, generally denoted by lambda. The main idea of incorporating the network is to think about the constraints that can be imposed based on the network. In general, the idea is to use the network to impose some constraints on the model parameters, lambda here. For example, the text at adjacent nodes of the network can be expected to cover similar topics. Indeed, in many cases, they tend to cover similar topics." + }, + { + "4:34": "So we may be able to smooth the topic distributions" + }, + { + "4:39": "on the graph of the network, so that adjacent nodes will have very similar topic distributions. So they will share a common distribution over the topics, or have just slight variations of the topic distributions, of the coverage." + }, + { + "5:02": "So, technically, what we can do is simply to add a network-induced regularizer to the likelihood objective function, as shown here. So instead of just optimizing the probability of the text data given parameters lambda, we're going to optimize another function F." + }, + { + "5:19": "This function combines the likelihood with a regularizer function called R here. And the regularizer is defined on the parameters lambda and the network.
It tells us basically what kind of parameters are preferred from the network constraint perspective. So you can easily see this is in effect implementing the idea of imposing some prior on the model parameters. Only we're not necessarily using a probabilistic model, but the idea is the same: we're going to combine the two in one single objective function." + }, + { + "5:57": "So, the advantage of this idea is that it's quite general. Here the topic model can be any generative model for text." + }, + { + "6:07": "It doesn't have to be PLSA or LDA, or other current topic models." + }, + { + "6:12": "And similarly, the network can also be any network, any graph that connects these text objects." + }, + { + "6:22": "This regularizer can also be any regularizer. We can be flexible in capturing different heuristics that we want to capture." + }, + { + "6:32": "And finally, the function F can also vary, so there can be many different ways to combine them. So, this general idea is actually quite powerful. It offers a general approach to combining these different types of data in a single optimization framework. And this general idea can really be applied to many problems." + }, + { + "6:56": "But here, in the paper referenced here, a particular instantiation called NetPLSA was studied. In this case, it's just an instantiation of PLSA to incorporate the simple constraint imposed by the network. And the prior here is that neighbors on the network should have similar topic distributions. They should cover similar topics in similar ways. And that's basically what it says in English." + }, + { + "7:24": "So technically we just have a modified objective function here. It's defined on both the text and the network graph G here." + }, + { + "7:34": "And if you look at this formula, you can actually recognize some parts fairly easily." + }, + { + "7:40": "Because they are, they should be fairly familiar to you by now.
So can you recognize which part is the likelihood for the text given the topic model?" + }, + { + "7:52": "Well, if you look at it, you will see this part is precisely the PLSA log-likelihood that we want to maximize when we estimate parameters for PLSA alone. But the second part shows some additional constraints on the parameters. In particular, we'll see here it measures the difference between the topic coverage at node u and node v, two adjacent nodes on the network. We want their distributions to be similar. So here we are computing the square of their differences, and we want to minimize this difference. And note that there's a negative sign in front of this sum, this whole sum here. So this makes it possible to find parameters that both maximize the PLSA log-likelihood, which means the parameters will fit the data well, and also respect this constraint from the network." + }, + { + "9:06": "And this is the negative sign that I just mentioned. Because of this negative sign, when we maximize this objective function, we'll actually minimize this second term here." + }, + { + "9:19": "So if we look further into this formula, we'll see there is a weight of the edge between u and v here, and that comes from our network. If an edge has a high weight, that says, well, these two nodes are strong collaborators as researchers, or these two are strong connections between two people in a social network. Then that means it would be more important that their topic coverages are similar. And that's basically what it says here." + }, + { + "9:55": "And finally, you see a parameter lambda here. This is a new parameter to control the influence of the network constraint. We can easily see, if lambda is set to 0, we just go back to the standard PLSA. But when lambda is set to a larger value, then we will let the network influence the estimated models more. So as you can see, the effect here is that we're going to do basically PLSA.
But we're going to also try to make the topic coverages of two nodes that are strongly connected similar, and we ensure their coverages are similar." + }, + { + "10:33": "So here are some sample results from that paper. This slide shows the results of using PLSA. The data here is DBLP data, bibliographic data about research articles. And the experiments have to do with four communities of publications: IR for information retrieval, DM for data mining, ML for machine learning, and Web. There are four communities of articles, and we were hoping" + }, + { + "11:06": "to see that the topic mining can help us uncover these four communities. But from the sample topics that you see here, generated by PLSA, PLSA is unable to generate the four communities that correspond to our intuition. The reason is that they are all mixed together, and there are many words that are shared by these communities. So it's not that easy to use four topics to separate them. If we used more topics, perhaps we would have more coherent topics." + }, + { + "11:42": "But what's interesting is that if we use NetPLSA, where the network, in this case the collaboration network of authors, is used to impose constraints, and in this case we also use four topics, then NetPLSA gives much more meaningful topics. So here we'll see that these topics correspond well to the four communities. The first is information retrieval. The second is data mining. The third is machine learning. And the fourth is Web. So that separation was mostly because of the influence of the network, where we leverage the collaboration network information. Essentially, the people that form a collaboration network are assumed to write about similar topics, and that's why we're going to have more coherent topics. And if you just use the text data alone, based on word occurrences, you won't get such coherent topics.
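The modified objective just described, a PLSA log-likelihood combined with a negatively signed, edge-weighted smoothness penalty on per-node topic coverages and traded off by lambda, can be sketched as follows. The tiny graph and the placeholder log-likelihood value are made up for illustration; a real NetPLSA implementation would compute the likelihood from the text itself.

```python
# Sketch of a NetPLSA-style objective: likelihood plus network regularizer.
def network_regularizer(theta, edges):
    """Sum over edges (u, v, weight) of the weighted squared difference
    between the topic-coverage vectors at the two endpoints."""
    total = 0.0
    for u, v, w in edges:
        total += w * sum((pu - pv) ** 2 for pu, pv in zip(theta[u], theta[v]))
    return total

def netplsa_objective(log_likelihood, theta, edges, lam=0.5):
    """Combined objective: (1 - lam) * likelihood minus lam * penalty.
    lam = 0 recovers plain PLSA; larger lam trusts the network more."""
    return (1 - lam) * log_likelihood - lam * network_regularizer(theta, edges)

# Toy graph: u and v are connected and have similar coverages; x differs.
theta = {"u": [0.9, 0.1], "v": [0.8, 0.2], "x": [0.1, 0.9]}
edges = [("u", "v", 1.0), ("v", "x", 1.0)]
print(netplsa_objective(-100.0, theta, edges, lam=0.0))  # plain PLSA: -100.0
```

Maximizing this objective pulls the coverages of strongly connected nodes (u and v above) toward each other while still fitting the text, which is exactly the smoothing effect described in the lecture.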
Even though a topic model like PLSA or LDA should also be able to pick up co-occurring words, so in general the topics they generate represent words that co-occur with each other, they still cannot generate results as coherent as NetPLSA's, showing that the network context is very useful here." + }, + { + "13:08": "Now, a similar model can also be used to characterize the content associated with each subnetwork of collaborations." + }, + { + "13:19": "So a more general view of text mining in the context of a network is to treat text as living in a rich information network environment. And that means we can connect all the related data together as a big network, and text data can be associated with various structures of the network. For example, text data can be associated with the nodes of the network, and that's basically what we just discussed in NetPLSA. But text data can be associated with edges as well, or paths, or even subnetworks. And such a way to represent text in the big environment of all the context information is very powerful, because it allows us to analyze all the data, all the information, together. So in general, analysis of text should use the entire network information that's related to the text data. So here's one suggested reading. This is the paper about NetPLSA, where you can find more details about the model and how to apply such a model. [MUSIC]" + } + ] + }, + { + "6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision": [ + { + "0:00": "[SOUND]" + }, + { + "0:07": "This lecture is about using a time series as context to potentially discover causal topics in text. In this lecture, we're going to continue discussing contextual text mining. In particular, we're going to look at a time series as context for analyzing text, to potentially discover causal topics. As usual, let's start with the motivation. In this case, we hope to use text mining to understand a time series.
Here, what you are seeing is the Dow Jones Industrial Average stock price curve. And you'll see a sudden drop here, right? So one would be interested in knowing what might have caused the stock market to crash." + }, + { + "0:48": "Well, if you know the background, you might be able to figure it out if you look at the time stamp, or there are other data that can help us think about it. But the question here is, can we get some clues about this from the companion news stream? And we have a lot of news data that was generated during that period." + }, + { + "1:08": "So if we do that, we might actually discover that the crash happened at the time of the September 11 attack, and that's the time when there was a sudden rise of the topic about September 11 in the news articles." + }, + { + "1:26": "Here's another scenario, where we want to analyze a presidential election. And this is a time series from a presidential prediction market. For example, the Iowa Electronic Markets would have stocks for each candidate. And if you believe one candidate will win, then you tend to buy the stock for that candidate, causing the price for that candidate to increase. So that's a nice way to actually survey people's opinions about these candidates." + }, + { + "2:00": "Now, suppose you see a sudden drop of the price for one candidate. And you might also want to know what might have caused the sudden drop." + }, + { + "2:10": "Or in a social science study, you might be interested in knowing what mattered in this election, what issues really mattered to people. Now again, in this case, we can look at the companion news stream and ask the question: are there any clues in the news stream that might provide insight about this? So for example, we might discover the mention of tax cut" + }, + { + "2:35": "has been increasing since that point. So maybe that's related to the drop of the price.
So all these cases are special cases of the general problem of joint analysis of text and time series data to discover causal topics. The input in this case is a time series plus text data produced in the same time period, the companion text stream." + }, + { + "3:02": "And this is different from standard topic models, where we have just a text collection. That's why we see the time series here; it serves as context." + }, + { + "3:13": "Now, the output that we want to generate is the topics whose coverage in the text stream has strong correlations with the time series." + }, + { + "3:22": "For example, whenever the topic is mentioned, the price tends to go down, etc." + }, + { + "3:28": "Now we call these topics Causal Topics. Of course, they're not, strictly speaking, causal topics. We are never going to be able to verify whether they are causal, or whether there's a true causal relationship here. That's why we put causal in quotation marks. But at least they are correlated topics that might potentially explain the cause, and humans can certainly further analyze such topics to understand the issue better." + }, + { + "3:59": "And the output would contain topics, just like in topic modeling. But we hope that these topics are not just the regular topics. These topics don't have to explain the text data the best, but they still have to explain the data in the text, meaning that they have to represent meaningful topics in the text. But also, more importantly, they should be correlated with the external time series that's given as context. So to understand how we solve this problem, let's first attempt to solve it with a regular topic model, for example PLSA. We can apply this to the text stream, with some extension like CPLSA, or Contextual PLSA. Then we can discover the topics in the collection and also their coverage over time."
+ }, + { + "4:53": "So, one simple solution is to choose, from this set, the topics that have the strongest correlation with the external time series." + }, + { + "5:05": "But this approach is not going to be very good. Why? Because we are restricted to the topics discovered by PLSA or LDA, and that means the choice of topics will be very limited. And we know these models try to maximize the likelihood of the text data. So those topics tend to be the major topics that explain the text data well, and they are not necessarily correlated with the time series. Even if we get the best one, the most correlated topics might still not be so" + }, + { + "5:34": "interesting from a causal perspective." + }, + { + "5:37": "So in the work cited here, a better approach was proposed. This approach is called Iterative Causal Topic Modeling. The idea is to do an iterative adjustment of the topics discovered by topic models, using the time series to induce a prior." + }, + { + "5:57": "So here's an illustration of how this works. We take the text stream as input and then apply regular topic modeling to generate a number of topics, let's say four topics, shown here." + }, + { + "6:09": "And then we're going to use the external time series to assess which topic is more causally related, or correlated, with the external time series. So we have a measure to rank them. And we might find that Topic 1 and Topic 4 are more correlated, and Topic 2 and Topic 3 are not. Now, we could have stopped here, and that would be just like the simple approach that I talked about earlier: we can take these topics and call them causal topics. But as I also explained, these topics are unlikely to be very good, because they are general topics that explain the whole text collection. They are not necessarily the topics best correlated with our time series."
+ }, + { + "6:51": "So what we can do in this approach is to first zoom into the word level. We can look at each word in the top-ranked word list for each topic. Let's say we take Topic 1 as the target to examine. We know Topic 1 is correlated with the time series, or is at least the best that we could get from this set of topics so far." + }, + { + "7:18": "And we're going to look at the words in this topic, the top words." + }, + { + "7:23": "And if the topic is correlated with the time series, there must be some words that are highly correlated with the time series. So here, for example, we might discover that W1 and W3 are positively correlated with the time series, but W2 and W4 are negatively correlated." + }, + { + "7:41": "So, as a topic, it's not good to mix words with different correlations. So we can then further separate these words. We are going to take all the red words that indicate positive correlations, W1 and W3, and we're going to also get another subtopic" + }, + { + "8:00": "if you want, that represents the negatively correlated words, W2 and W4." + }, + { + "8:07": "Now, these subtopics, or these variations of topics based on the correlation analysis, are topics that are still quite related to the original topic, Topic 1. But they are already deviating, because of the use of the time series information for biased selection of words. So they are, in some sense, and we should expect so, more correlated with the time series than the original Topic 1, because Topic 1 has mixed words, and here we separate them." + }, + { + "8:42": "So each of these two subtopics" + }, + { + "8:46": "can be expected to be better correlated with the time series. However, they may not be as coherent semantically. So the idea here is to go back to the topic model, using each of these as a prior to further guide the topic modeling. That is to say, we ask the topic model to now discover topics that are very similar to each of these two subtopics.
And this will cause a bias toward topics that are more correlated with the time series. Of course, then we can apply the topic model to get another generation of topics, and these can be further ranked based on the time series to select the highly correlated topics. And then we can further analyze the component words in each topic, do word-level correlation analysis, and get even more correlated subtopics that can be fed back into the process as priors to drive the topic model discovery." + }, + { + "9:46": "So this whole process is just a heuristic way of optimizing causality and coherence, and that's our ultimate goal, right? So here you see the pure topic models will be very good at maximizing topic coherence; the topics will all be meaningful." + }, + { + "10:02": "If we only use a causality test, or a correlation measure, then we might get a set of words that are strongly correlated with the time series, but they may not necessarily mean anything. They might not be semantically connected. So that would be at the other extreme, on the top." + }, + { + "10:21": "Now, the ideal is to get causal topics that score high in both topic coherence and causal relation. This approach can be regarded as alternating maximization along both dimensions. When we apply the topic models, we're maximizing the coherence. But when we decompose the topics into sets of words, selecting the words most strongly correlated with the time series, we are pushing the model back toward the causality dimension, to make it better in causal scoring. And then, when we apply the selected words as priors to guide the topic modeling, we again go back to optimizing coherence, because the topic model will ensure the next generation of topics is coherent. And we can iterate, alternating the optimization in this way, as shown in this picture."
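The word-level step of the iteration just described, splitting a topic's words into positively and negatively correlated subsets against the external time series, can be sketched like this. The words and series below are toy data, and a full implementation would also feed the resulting subsets back into the topic model as priors, which is not shown here.

```python
# Word-level correlation analysis from iterative causal topic modeling.
def pearson(x, y):
    """Plain Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def split_topic_by_correlation(word_series, series):
    """Separate a topic's words by the sign of their correlation
    with the external time series."""
    pos, neg = [], []
    for word, ts in word_series.items():
        (pos if pearson(ts, series) >= 0 else neg).append(word)
    return pos, neg

stock = [10, 9, 7, 4, 2]                  # external series, falling
word_series = {"tax":   [1, 2, 3, 4, 5],  # rises as the stock falls
               "cut":   [2, 3, 4, 5, 6],
               "rally": [5, 4, 3, 2, 1]}  # falls with the stock
pos, neg = split_topic_by_correlation(word_series, stock)
print(pos, neg)  # ['rally'] ['tax', 'cut']
```

Each subset is a purer "subtopic" in the causality dimension; re-running the topic model with these as priors restores semantic coherence, which is the alternation the lecture describes.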
+ }, + { + "11:20": "So the only component, I think, that you haven't seen in such a framework is how to measure the causality, because the rest is just topic modeling. So let's have a little bit of discussion of that. Here we show that. Let's say we have a topic about government response here, and then with topic modeling we can get the coverage of the topic over time. So we have a time series, X sub t." + }, + { + "11:43": "Now, we also have, or are given, a time series that represents external information. It's a non-text time series, Y sub t. It's the stock prices. Now the question here is: does Xt cause Yt?" + }, + { + "11:58": "Well, in other words, we want to measure the causality relation between the two, or maybe just measure the correlation of the two." + }, + { + "12:08": "There are many measures that we can use in this framework. For example, Pearson correlation is a commonly used measure. And we've got to consider a time lag here so that we can try to capture the causal relation, using the data in the past" + }, + { + "12:26": "to try to correlate with the data points of Y that represent the future, for example. And by introducing such a lag, we can hopefully capture some causal relation even by using correlation measures like Pearson correlation." + }, + { + "12:45": "But a commonly used measure for causality here is the Granger Causality Test." + }, + { + "12:52": "And the idea of this test is actually quite simple. Basically, you're going to have an autoregressive model that uses the history information of Y to predict itself. And this is the best we could do without any other information. So we're going to build such a model, and then we're going to add some history information of X into the model to see if we can improve the prediction of Y. If we can do that with a statistically significant difference, then we say X has some causal influence on Y; otherwise it wouldn't have improved the prediction of Y.
+ }, + { + "13:32": "If, on the other hand, the difference is insignificant, that would mean X does not really have a causal relation with Y. So that's the basic idea. Now, we don't have time to explain this in detail, but you can read the cited reference here to know more about this measure. It's a very convenient measure that has many applications." + }, + { + "13:55": "So next, let's look at some sample results generated by this approach. Here the data is the New York Times, in the time period of June 2000 through December of 2011. And here the time series we used is the stock prices of two companies, American Airlines and Apple, and the goal is to see, if we inject some time series context, whether we can actually get topics that are biased toward the time series. Imagine if we don't use any input, we don't use any context. Then the topics from the New York Times discovered by PLSA would be just the general topics that people talk about in news. All right, those major topics in the news events." + }, + { + "14:41": "But here you see these topics are indeed biased toward each time series. In particular, if you look at the underlined words here in the American Airlines result, you see airlines, airport, air, united trade, or terrorism, etc. So it clearly has topics that are more correlated with the external time series. On the right side, you see that some of the topics are clearly related to Apple, right. So you can see computer, technology, software, internet, com, web, etc. That just means the time series has effectively served as a context to bias the discovery of topics. From another perspective, these results tell us what people have talked about in each case. So not just what people have talked about, but what are some topics that might be correlated with their stock prices. And so these topics can serve as a starting point for people to further look into issues and you'll find the true causal relations.
Here are some other results from analyzing presidential election time series. The time series data here is from the Iowa Electronic Markets, and that's a prediction market. The text data is the same, the New York Times, from May 2000 to October 2000. That's for the 2000 presidential election campaign. Now, what you see here are the top three words in significant topics from the New York Times." + }, + { + "16:21": "And if you look at these topics, they are indeed quite related to the campaign. Actually, they are very much related to the important issues of this presidential election. Now here I should mention that the text data has been filtered by using only the articles that mention these candidate names." + }, + { + "16:45": "It's a subset of these news articles. Very different from the previous experiment." + }, + { + "16:53": "But the results here clearly show that the approach can uncover some important issues in that presidential election. Tax cut, oil energy, abortion, and gun control are all known to be important issues in that presidential election, and that was supported by some literature in political science." + }, + { + "17:17": "And it was also discussed in Wikipedia, right. So basically the results show that the approach can effectively discover possibly causal topics based on the time series data." + }, + { + "17:35": "So there are two suggested readings here. One is the paper about this iterative topic modeling with time series feedback, where you can find more details about how this approach works. And the second one is a reading about the Granger Causality Test." + }, + { + "17:55": "So in the end, let's summarize the discussion of text-based prediction. Now, text-based prediction is generally very useful for big data applications that involve text, because it can help us infer new knowledge about the world, and that knowledge can go beyond what's discussed in the text." + }, + { + "18:17": "As a result, it can also support optimizing our decision making.
And this has widespread applications." + }, + { + "18:28": "Text data is often combined with non-text data for prediction, because for this purpose, the prediction purpose, we generally would like to combine non-text data and text data together as much as possible for prediction. And so, as a result, the joint analysis of text and non-text data is very necessary and also very useful. Now, when we analyze text data together with non-text data, we can see they can help each other. Non-text data provides a context for mining text data, and we discussed a number of techniques for contextual text mining. On the other hand, text data can also help interpret patterns discovered from non-text data, and this is called pattern annotation." + }, + { + "19:14": "In general, this is a very active research topic, and there are new papers being published. And there are also many open challenges that have to be solved. [MUSIC]" + } + ] + }, + { + "6-8-course-summary": [ + { + "0:06": "This lecture is a summary of this whole course." + }, + { + "0:10": "First, let's revisit the topics that we covered in this course. In the beginning, we talked about natural language processing and how it can enrich text representation. We then talked about how to mine knowledge about the language, the natural language used to express what's observed about the world, in text data." + }, + { + "0:34": "In particular, we talked about how to mine word associations. We then talked about how to analyze topics in text: how to discover topics and analyze them." + }, + { + "0:47": "This can be regarded as knowledge about the observed world, and then we talked about how to mine knowledge about the observer, and particularly about how to mine opinions and do sentiment analysis. And finally, we talked about text-based prediction, which has to do with predicting values of other real-world variables based on text data.
And in discussing this, we also discussed the role of non-text data, which can contribute additional predictors for the prediction problem and can also provide context for analyzing text data; in particular, we talked about how to use context to analyze topics." + }, + { + "1:33": "So here are the key high-level take-away messages from this course. I'm going to go over these major topics and point out the key take-away messages that you should remember." + }, + { + "1:47": "First, NLP and text representation." + }, + { + "1:53": "You should realize that NLP is always very important for any text application because it enriches text representation. The more NLP, the better the text representation we can have. And this further enables more accurate knowledge discovery, to discover deeper knowledge buried in text." + }, + { + "2:12": "However, the current state of the art of natural language processing is still not robust enough. So, as a result, the robust text mining technologies today tend to be based on word [INAUDIBLE], and tend to rely a lot on statistical analysis, as we've discussed in this course. And you may recall we've mostly used word-based representations, and we've relied a lot on statistical techniques, statistical learning techniques particularly." + }, + { + "2:47": "In word-association mining and analysis, the important points are: first, we introduced two concepts for two basic and complementary relations of words, paradigmatic and syntagmatic relations. These are actually very general relations between elements in sequences, if you take them as meaning elements that occur in similar contexts in the sequence and elements that tend to co-occur with each other. These relations might also be meaningful for other sequences of data." + }, + { + "3:25": "We also talked a lot about text similarity. We then discussed how to discover paradigmatic relations by comparing the contexts of words, to discover words that share similar contexts.
At that point, we talked about representing text data with a vector space model. And we talked about some retrieval techniques such as BM25 for measuring similarity of text, and about assigning weights to terms, tf-idf weighting, et cetera. This part is well connected to text retrieval; there are other techniques that can be relevant here also." + }, + { + "4:03": "The next point is about co-occurrence analysis of text, where we introduced some information theory concepts such as entropy, conditional entropy, and mutual information. These are not only very useful for measuring the co-occurrences of words; they are also very useful for analyzing other kinds of data, and they are useful, for example, for feature selection in text categorization as well." + }, + { + "4:30": "So this is another important concept, good to know." + }, + { + "4:35": "And then we talked about topic mining and analysis, and that's where we introduced the probabilistic topic model. We spent a lot of time explaining the basic topic model, PLSA, in detail, and those are the basics for understanding LDA, which is, theoretically, a more appealing model, but we did not have enough time to really go in depth in introducing LDA." + }, + { + "5:02": "But in practice, PLSA seems as effective as LDA, and it's simpler to implement and also more efficient." + }, + { + "5:11": "In this part of the videos there are some general concepts that would be useful to know. One is the generative model, and this is a general method for modeling text data and modeling other kinds of data as well." + }, + { + "5:24": "And we talked about maximum likelihood estimation and the EM algorithm for solving the problem of computing the maximum likelihood estimator. These are all general techniques that tend to be very useful in other scenarios as well." + }, + { + "5:40": "Then we talked about text clustering and text categorization. Those are two important building blocks in any text mining application system.
In text clustering, we talked about how we can solve the problem by using a slightly different mixture model than the probabilistic topic model, and we then also briefly reviewed the similarity-based approaches to text clustering." + }, + { + "6:11": "In categorization, we also talked about two kinds of approaches. One is generative classifiers that rely on Bayes' rule to" + }, + { + "6:20": "infer the conditional probability of a category given text data; in particular, we introduced [INAUDIBLE] Bayes in detail." + }, + { + "6:29": "This is a practically useful technique for a lot of text categorization tasks." + }, + { + "6:37": "We also introduced some discriminative classifiers, particularly logistic regression, k-nearest neighbors, and SVM. They are also very important; they are very popular and very useful for text categorization as well." + }, + { + "6:52": "In both parts, we also discussed how to evaluate the results. Evaluation is quite important, because if the measures you use don't really reflect the utility of the method, they will give you misleading results, so it's very important to get the evaluation right. And we talked about evaluation of categorization in detail, with a lot of specific measures." + }, + { + "7:18": "Then we talked about sentiment analysis and opinion mining, and that's where we introduced the sentiment classification problem. Although it's a special case of text categorization, we talked about how to extend or improve the text categorization methods by using the more sophisticated features that would be needed for sentiment analysis. We did a review of some commonly used complex features for text analysis, and then we also talked about how to capture the order of categories in sentiment classification; in particular, we introduced ordinal logistic regression. Then we also talked about Latent Aspect Rating Analysis.
This is an unsupervised way of using a generative model to understand review data in more detail. In particular, it allows us to understand the decomposed ratings of" + }, + { + "8:14": "a reviewer on different aspects of a topic. So given text reviews with overall ratings, the method allows us to infer further ratings on different aspects. And it also allows us to infer the reviewers' latent weights on these aspects, or which aspects are more important to a reviewer can be revealed as well. And this enables a lot of interesting applications." + }, + { + "8:41": "Finally, in the discussion of prediction, we mainly talked about the joint mining of text and non-text data, as they are both very important for prediction." + }, + { + "8:51": "We particularly talked about how text data can help non-text data and vice versa." + }, + { + "8:58": "In the case of using non-text data to help text data analysis, we talked about contextual text mining. We introduced contextual PLSA as a generalized model of PLSA that allows us to incorporate context variables, such as time and location. And this is a general way to reveal a lot of interesting topical patterns in text data. We also introduced NetPLSA; in this case we used a social network, or a network of text data in general, to help analyze topics." + }, + { + "9:31": "And finally we talked about how time series can be used as context to mine potentially causal topics in text data." + }, + { + "9:43": "Now, in the other direction of using text to" + }, + { + "9:47": "help interpret patterns discovered from non-text data, we did not really discuss anything in detail but just provided a reference. But I should stress that that's actually a very important direction to know about if you want to build practical text mining systems, because understanding and interpreting patterns is quite important.
+ }, + { + "10:13": "So this is a summary of the key take-away messages, and I hope these will be very useful to you for building any text mining applications or for studying these algorithms. And this should provide a good basis for you to read frontier research papers, to know more about advanced algorithms, or to invent new algorithms yourself." + }, + { + "10:40": "To know more about this topic, I would suggest you look into other areas in more depth." + }, + { + "10:48": "During the short period of time of this course, we could only touch the basic concepts, the basic principles, of text mining, and we emphasized the coverage of practical algorithms. And this is at the cost of covering more algorithms; in many cases we omitted the discussion of a lot of algorithms. So to learn more about the subject, you should definitely learn more about natural language processing, because it is the foundation for all text-based applications. The more NLP you can do, the better the representation of text that you can get, and then the deeper the knowledge you can discover. So this is very important." + }, + { + "11:37": "The second area you should look into is statistical machine learning." + }, + { + "11:41": "And these techniques are now the backbone techniques for" + }, + { + "11:46": "not just text analysis applications but also for NLP. A lot of NLP techniques are nowadays actually based on supervised machine learning." + }, + { + "11:56": "So they are very important, because they are key to understanding some advanced NLP techniques as well, and naturally they will provide more tools for doing text analysis in general." + }, + { + "12:09": "Now, a particularly interesting area called deep learning has attracted a lot of attention recently. It has also shown promise in many application areas, especially in speech and vision, and has been applied to text data as well.
So, for example, recently there has been work on using deep learning to do sentiment analysis to achieve better accuracy. That's one example of [INAUDIBLE] techniques that we weren't able to cover, but that's also very important." + }, + { + "12:41": "And another area that has emerged in statistical learning is the word embedding technique, which can learn better representations of words. And then these better representations will allow you to compute the similarity of words. As you can see, this directly provides a way to discover the paradigmatic relations of words. And the results that people have got so far are very impressive. That's another promising technique that we did not have time to touch," + }, + { + "13:12": "but, of course, whether these new techniques would lead to practically useful techniques that work much better than the current technologies is still an open question that has to be examined. No serious evaluation has been done yet in, for example, examining the practical value of word embeddings other than for word similarity and basic evaluation." + }, + { + "13:36": "But nevertheless, these are advanced techniques that surely will make an impact in text mining in the future, so it's very important to know more about them. Statistical learning is also the key to predictive modeling, which is very crucial for many big data applications. We did not talk about that predictive modeling component, but it is mostly about regression or categorization techniques, and this is another reason why statistical learning is important." + }, + { + "14:07": "We also suggest that you learn more about data mining, and that's simply because general data mining algorithms can always be applied to text data, which can be regarded as a special case of general data." + }, + { + "14:23": "So there are many applications of data mining techniques.
In particular, for example, pattern discovery would be very useful for generating interesting features for text analysis, and the recent information network mining techniques can also be used to analyze text information networks." + }, + { + "14:42": "So these are all good to know in order to develop effective text analysis techniques. And finally, we also recommend you learn more about text retrieval, information retrieval, or search engines. This is especially important if you are interested in building practical text application systems, as a search engine would be an essential system component in any text-based application. And that's because text data is created for humans to consume, so humans are in the best position to understand text data, and it's important to have a human in the loop in big text data applications. A search engine can in particular help text mining systems in two ways. One is to effectively reduce the data size from a large collection to a small collection of the most relevant text data that matters for the particular application. The other is to provide a way to annotate and explain patterns, and this has to do with knowledge provenance. Once we discover some knowledge, we have to figure out whether or not the discovery is really reliable, so we need to go back to the original text to verify that. And that is why the search engine is very important." + }, + { + "16:04": "Moreover, some techniques of information retrieval, for example BM25 and the vector space model, are also very useful for text data mining. We only mentioned some of them, but if you know more about text retrieval you'll see that there are many more techniques that are useful. Another useful technique is indexing, which enables quick responses of a search engine to a user's query, and such techniques can be very useful for building efficient text mining systems as well.
+ }, + { + "16:35": "So, finally, I want to remind you of this big picture for harnessing big text data that I showed you at the beginning of the semester." + }, + { + "16:45": "In general, to build a big text application system, we need two kinds of techniques: text retrieval and text mining." + }, + { + "16:53": "Text retrieval, as I explained, is to help convert big text data into a small amount of the most relevant data for a particular problem, and it can also help provide knowledge provenance and help interpret patterns later. Text mining has to do with further analyzing the relevant data to discover the actionable knowledge that can be directly useful for decision making or many other tasks. So this course covers text mining, and there's a companion course called Text Retrieval and Search Engines that covers text retrieval. If you haven't taken that course, it would be useful for you to take it, especially if you are interested in building a practical text application system. Taking both courses will give you a complete set of practical skills for building such a system. So in [INAUDIBLE], I just would like to thank you for taking this course. I hope you have learned useful knowledge and skills in text mining and [INAUDIBLE]. As you can see from our discussions, there are a lot of opportunities for this kind of technique, and there are also a lot of open challenges. I hope you can use what you have learned to build a lot of useful applications that will benefit society, and to also join the research community to discover new techniques for text mining. Thank you.
[MUSIC]" + } + ] + } + ] + } + ] +} \ No newline at end of file From a4d325ee81d7b23fbcaeb9067845ec121b533ab0 Mon Sep 17 00:00:00 2001 From: himangshu81 <145715398+himangshu81@users.noreply.github.com> Date: Fri, 17 Nov 2023 13:34:10 -0800 Subject: [PATCH 03/52] Chrome extension - initial commit --- .gitignore | 2 + README.md | 6 +++ .../CS410_Fall2023_CourseProject_TeamCAHJ.png | Bin 0 -> 38673 bytes extension/index.html | 43 ++++++++++++++++++ extension/js/es_client.js | 18 ++++++++ extension/js/search.js | 6 +++ extension/manifest.json | 10 ++++ 7 files changed, 85 insertions(+) create mode 100644 .gitignore create mode 100644 extension/CS410_Fall2023_CourseProject_TeamCAHJ.png create mode 100644 extension/index.html create mode 100644 extension/js/es_client.js create mode 100644 extension/js/search.js create mode 100644 extension/manifest.json diff --git a/.gitignore b/.gitignore new file mode 100644 index 0000000000..29b636a486 --- /dev/null +++ b/.gitignore @@ -0,0 +1,2 @@ +.idea +*.iml \ No newline at end of file diff --git a/README.md b/README.md index a7b40d2cc8..3b227a7cc5 100644 --- a/README.md +++ b/README.md @@ -1,3 +1,9 @@ # CourseProject Please fork this repository and paste the github link of your fork on Microsoft CMT. Detailed instructions are on Coursera under Week 1: Course Project Overview/Week 9 Activities. + + +## Steps to load the extension in Chrome +1. Go to the Extensions page by entering chrome://extensions +2. Enable Developer Mode by clicking the toggle switch next to Developer mode. +3. Click the Load unpacked button and select the /extension directory. 
\ No newline at end of file diff --git a/extension/CS410_Fall2023_CourseProject_TeamCAHJ.png b/extension/CS410_Fall2023_CourseProject_TeamCAHJ.png new file mode 100644 index 0000000000000000000000000000000000000000..e8408f738b307df7a41a07506827963fb23d700b GIT binary patch literal 38673 zcmeIbd0f)X7e8udi&okee3;K?=FFTqbCx-8#v6wZxoXYR zou{UzrnPVH?jvey8dx>8nPzimt4a{-(%-A8EdZU}wd?S{UAv4AM}!5PJ%2_`ZSRfy z;5nX0>sRKUx_jF>_|XAPvgR7AOZOI^958t6vRD1;(m>-EUt0FeVwkP*+XddUyztVO zCAIG!e~||4*yE)=Kf2#--arKb8CDfGEEC}XN;_Ck=qk0(hc51Z?0wAOlB0jTt=Za3 zK*97e7y}dc`)|PUI56`t|DF@sGy&{asO)_I@*6c%%q3u4FuZ%i_RhI0qOX z;`GA=#y~YSNb`yt^VN*^#|rXit=W{YBKU^?rN!oVzaO;2KlmPL*KI=FcRg@N=j=;& z=OvoZG@OM?`+i*OJow_#*G0vtw>K}hwP#0rCD~VHBWW$`kyu& zOu3ohx$OHQ_1q@IfO~h>X}aC}mY2|pT-mhhWBivJRLup6UlxD&^AA0C^|CvCu?BH< zf`dlR`i*`YoD1iStU;p>limfT#zbVl)Bc*iI>=&$%q!WG4d(Ky)^50y{}Gc@3pE|H zXnS0E_4M^ki%%_|clWXJiECdhKpTMG{(xs2Z>_L8ZSvs#K33@^+hp?vnzwm@*H(S+ zvAu>m3sHPT7Q1<=N8eNDn||0o_daJ%2rJ_BNKLt4+QiPF@3$9%Fz4TY^p25L8;d)O zp-BG#(oW8m7skeifGeM0j|f~oUbYHkv%4W{$6%(f4ngPEh85LXn;2BU?pYR^qqZZD z&mZ!{IvOJFuB&ZXJa@oZJ;XkJ!Rz>kL_+?e?s~-K8FQ}hSl=^q&MbiC8n1SftBjb# z8j7pe7mX%TXMKnhKU_TOo9HEOn{oO>-25{p5!K7C3!H5~ToWzr*x383bnQ=_?Uykz zoa%;aBkXsZhm<1OVrh>5&h6qj0!*Ok))JY7d?EYXLw2K9Q6Xr>zQ3W z1AYyiUS>2lINR&Nn$WW+20&+(7iNPB6W%2c?sloKIE{JTOgA|`bgV1k_>*#l1!&Z> zHBr#4J1p)1@e&6GetHVxx;JybrIZA2D29DmQmFZ{nB@s*k2kpgy|+u~`KB^~R9pYO zKHmN^WgAiZ|a>$=mw~^ z+7qU>n8$M~9IT8Vh(vPO${CmFBX8fntznkGVyO)ZFQq6S@w2sNqV4a^uti)qU83D! 
z=N-4Yy7a0p^3zzn#kM)t7lE{DgP56^#TxINE%&Nz@>_!i&L4(Zq|Xbw9Q#n?UVOiw z)(Z{2w<}-G9Goq6US_L)=rW*3y-Wii?{HBxc1io85osoFiN@;H$F3#}n5@6PE+EC$ zWWjrF&7Es*+pe9pZ`bxEtJmxZS|hl#{q(vcH_p49yF>o4x^Hd7!XG9ZQu ztUbKq*nF#l(zUS=qbwPnoJ?&0hX$#zXF^AyS|}>(1O1yDzI!9* z0rh+@gTFicbRlU`E`zG2BVihw-`@6f|GDOsW}s%q)%>eVciQ>u^&0nDA|J%fm?$$THvsbbfad*+C%=fET+}^*Jx3_XHbzj;( zrzeZ{H|$@(pSJ(){`~zrp9VaQc$%}Hde6+%>A2+Ji4CXjl0USZHjK58T@`Cuehqo2 zbC=>^%R|qj$c>u!&Nn#jpY=31A~Paw)WyrI<6y^8yAxgy9l_;0V^c=G`sXwl9k)C_ zc+7J%>-gHu2adlf$Ua>DbTncA(~}!JNfkM zi#RXgQ(dp-BT4zz9z}bLyf09`?kNFAy{@=#e;_9hc#6DVep{j?oS~Ij7n~HFMefc07mqa8mJi%#NlJDm_afqSwO+@{%}3 z9Fly5JewR6ha3AL*&(r$n2u$Q@mKZCj-FjS`>2++PPX-c-Xp7b2o3n|AJzt_FP%S+mXSLPTeuolh3+ z3FnjPz=bjN(Os>FkKXtZ)Ox6u9@HLmBd9h=D@fF|iM>FuK+wi6V#`;qUwLOGU@Lb^ z=&8F~S3U505VG~#R^}8i*(w6+^p2BqxjYn&mHotToaSb47;{VN{BfCMtOty<@~mFU(O^SAU2wU z5#TD`Y(`e7W$E71YoSGrB zPVzj`c&xmCs3(U-Zk2ZE@GN8&r>Ksj7J}Z1^W#6ppGu#JLq~_k0YcV9+l14Yjff)- z6JQ$tI1>bbTlmhloppWY!r3HfY;jQWA&r3e)$vc`b9Q9zxUu8@4$+n7%YZAq%hFw) zJAEqGGqnHyv{yoe^ak#!1;scj}NAnwS)s$Us-2IQoqCNCDI74HxIt(R2~3YmN8Z{*=9czuW$<2lm^{f;%(AomgFUX;=#I zZQQ#*G&$>a4>sk*MBe?nmi9z`_Ua#ff8?C?wQb4Pr8(O>knPF1#_#z4svfc6gdk1~ zl--v-Uj@eep|2So@-rx;I#SXB~k7pnXiNPQ*6CSTaD^*x0&7#XJPsr)>_~7UAt%=YJR~y z&_TD%3QT65W$z!!KcZW;-7zPZdyjovfSe6?;T>$v5Cj3!YqLvYt+zX%qIvoPyZP*~ zCasLhK*KW-8i{!;=O%b1B?sq7ITPP_@c=<)Ka0S{ly7057*B_%T zFFzblD)>=w08H@#oNX?TV2mb(YzkQyQfazob0)*9@Kn|CfioxiOI}`jnQ*R#TieEG z2-pyI<5XqqMb2ycKMoyXREEg#2P$X6Z5R%p!$Lov-S}*mmXdCmZ~qb%*G*ite8yvZl#e?J zEvTE&>#~Wp!F4r|h1}@gAv&Dhy{>aB`8k>(tLl|!g3c?02cr2viSf@J)S8JzxM8>v zIVM{)-MM0pYKTYG$Zyp%uMjY9iMb|`C-Q?MyRxVs=HxubCYj{~*fZ-lMPca$!J zCO!S-;qUc32x_xUGw;l(<_)PGKC1TctzgxeY?*9>>2+Dt=3PA?fAK1h6P+_&Br5sE zsGofHY&FE2cP#DCV7*1o3szjxaKBVhwt(@7m^tXZY3uHogY+&tsCFUZYJZL{inuA0USUA0-Ns~M^{aE9Jr*Dfit1d5`iS##)xo|!dW*_6U=GPPUs_V&O^EJl59*GQb zTI1<<*mzf1#2I62vu$SE)&S=j8yh=DoDQ--vfJf%b=5bgHNlaQ7wyf>;c&Pa9AFj} z0X4UGA|C%)5ybD^F(5iXVKfBxgnCep{Ws|QNX{tXPr#>gI4?7)6 
zIiiw7XSIF1cN~kEG1#?(X6X~HJ+UZ1YspN&*_D1@F0~y2>jld;0CRZ=4&3r8wcxP4ka5)7AV4SFI9aZz>Wo(6vVV16bw|r2 zW&O`sxTK^?^Uc(Y=t$*l$`0}f&CARQC<5Y`Jx|Gn8P^0;u zJC_pfSpCh|KWV>O@^RbOi?*x(bLZo>*R%gi(pp{DMQ`DT|8hXrrTsswy;MET^l7F~ zOZq>S{_kZstzM?p%d}3PHl)*r^j|KHL8;XkIbHc9jUs8JOk^B+ zU^*3g{B%=Sj`}LaHc4ZZcG!d#evnV|&0jHs3^Mmv%>Ywbi4h_!z^QAeVh{E##ZSJx zMHRjYAgoyx-Rm|!8$^%J-Tl4%qe~sy< zT70Lrs-gLh{Xg3*kDp?w=i1wPHT!qe1beJraNw68YiK;hv>nq-Q_bEq)21cuFMTw< zJg0Tq+{M#|VA^_4ThD2iW7<#q3%G`7Z#qU%U#-Odim65yjnf`4h4HXVA3x zPlUpVMj&d#bTk^g@l94BqJl&2hFqTkDT z6a^myQT8lS5PTIqVal9Sb&`fJgr)kdZl{z=7V4VVG!2Z?B7zwt>7W^o7`~lKEDplk z-SXw9NDHjtXzlT0p3Ib`JXqZL-89*w@uqx*P#mF{b;-Wjk$jbZ0n2=XjY!q^e6s$x*egRL+ zty0`mWIJByD97mv*7HZ`P7zqy3uY_(L2|xq{H0rkXq~$*UHs~bfRRE-N`*X59NsjT5n-n6F;WP874;ywvTg{Xx!7~F7er zV3>+?YIzgcwM`-b6_A$^vYG9b1S83r(7<7W9!Dy)?FJ!odOR{!GoH+ytKormJe6Rm-gb8tp|M=)BCcADL-{KL#Zl*b8#hTVk? z^jWqu!fwAS$2vbob`e= z$h>Ta@MLR4-y4SWNXroK5u&(zHG`l_&_TfSiXe7xZgCy6y!VBuAuoAUpUuMPjkB&W zk>fea+o^UzLoPr7!8a$29rI=-6mM5Bk}i5d_qf0%W+>g>wZ!R%$=yITyuChKEy3 zDfKw?oaEfV96m-dyK5x-17BRpL5cg3me_IA2D^AGb+Ml4M_5~C6VC2sLd!1h-9_2^ z^H;*1ChBUWTbku4=%PlX8sr99nOK^U$nnC94;wMOJJx{q=RZ_#}uC*8|1}U12 zAi-l-RMshO2I6)lg}<>LXYu(t(b z^ZNu(1kzD?1_-uWD{s!{bw<|rX&dGv-rL5-q|ob*5fr;vWy+1?*Tut8@4xukWF0_{ zxMCw-W=AXo2`(CmQW%~3O-&Xy8*rb`VG%+rYED$8y^BcFG}08o&h+3DbzEcFx(IL@ z=}iDxpfART`}+6_3i2WdjzP)ZR|o+qj#Uk3)-wtieVyEHu)yJ_)48ZKG}myF$Vwf+ zxuGnqR%DXpa6Cb56c8vW>49#ian1tw{LgC+X#22~e(|L{IV~bwEYn74j z#M`>4Wv_SWWL?e-v}+4d7#|zVNJOx2mp6ZfQ=3KzGAFjL$UD4pyNl?nYHToeW&`WI z6RtnzsTmHrVLQLl2nY_qJZ;Pvdd}yF3Ie?4Xm%NxlOvKyc^VNzxhRfrQZDmbB?U~V zlcVHYD5>rOTFGJI4;e&O)h_BH5fxDvpyng>VG%$?L6w)loe?Oq2@ZQh4_As6j&4@i zaeJC(;gT{|=@~aRUH^DZaJbLtj-&a=+S!|o&^gU-pC6Ns@OpvCrsB_O6vYN*m|5SV zowX!xTDgxnF^9x&(61wbVw>Gs;7PGo^rbo zmP-s)S>UUN$+Y&GwlX~CfwM?T^4-8J0pgxcpgNGkFgNJ>_MNU_!`;Z0NKse>o~-YH z*#+$Q)+b*DKh#%B*q@id%*#$oXbD5aC05UiN9H1HtKw?2PbBC2|8YbK4&+?xf797 z{)W~rX~~qWRpeNG$R38c=R^T{5s^N$d?2WR+Cqkt!7y_=ZfGK7w2CFoW`1*^fT2;! 
[... remainder of GIT binary patch data for TeamCAHJ_ProjectProposal.pdf omitted ...]

literal 0
HcmV?d00001

diff --git a/extension/index.html b/extension/index.html
new file mode 100644
index 0000000000..af25248055
--- /dev/null
+++ b/extension/index.html
@@ -0,0 +1,43 @@
+
+
+
+
+
+
+Simple Search Platform
+
+
+

+CS410 Fall2023 TeamCAHJ
+
+Search Coursera Lectures
+
+
+
+
+
+
+
diff --git a/extension/js/es_client.js b/extension/js/es_client.js
new file mode 100644
index 0000000000..29ce152e3f
--- /dev/null
+++ b/extension/js/es_client.js
@@ -0,0 +1,18 @@
+const { Client } = require('@elastic/elasticsearch')
+const client = new Client({
+  cloud: {
+    id: ''
+  },
+  auth: {
+    username: 'elastic',
+    password: 'changeme'
+  }
+})
+
+
+const result = await client.search({
+  index: 'my-index',
+  query: {
+    match: { hello: 'world' }
+  }
+})
\ No newline at end of file
diff --git a/extension/js/search.js b/extension/js/search.js
new file mode 100644
index 0000000000..1bc67afc88
--- /dev/null
+++ b/extension/js/search.js
@@ -0,0 +1,6 @@
+
+async function search(){
+    return {
+        "lecture" : "Week: 2, Lecture: Title: introduction-to-text-mining-and-analytics, Time: 8:22"
+    }
+}
\ No newline at end of file
diff --git a/extension/manifest.json b/extension/manifest.json
new file mode 100644
index 0000000000..d7c94b8af5
--- /dev/null
+++ b/extension/manifest.json
@@ -0,0 +1,10 @@
+{
+    "name": "CS410_Fall2023_CourseProject_TeamCAHJ",
+    "description": "Base Level Extension",
+    "version": "1.0",
+    "manifest_version": 3,
+    "action": {
+        "default_popup": "index.html",
+        "default_icon": "CS410_Fall2023_CourseProject_TeamCAHJ.png"
+    }
+}
\ No newline at end of file

From c91948c42bcf275f341ebd2bb71eec99e5b4d65c Mon Sep 17 00:00:00 2001
From: Jinfeng
Date: Fri, 17 Nov 2023 21:07:05 -0800
Subject: [PATCH 04/52] Update the subtitle dataset and scrape code

---
 ScrapeSubtitle.py |    2 +-
 geckodriver.log   |   94 +
 subtitles.json    | 6452 +++++++++++++++++++++++++++++++++------------
 3 files changed, 4934 insertions(+), 1614 deletions(-)

diff --git a/ScrapeSubtitle.py b/ScrapeSubtitle.py
index 40bc82865b..0d79346559 100644
--- a/ScrapeSubtitle.py
+++ b/ScrapeSubtitle.py
@@ -87,7 +87,7 @@ def get_lecture_subtitles(lecture_url):
             text_content = ' '.join(phrase.get_text().strip() for phrase in phrases)
 
             # Append the subtitles to the list as a dictionary
-            subtitles.append({timestamp: text_content})
+            subtitles.append({'time': timestamp, 'text': text_content, 'url': lecture_url})
 
             # Print or process the subtitles
             # print(subtitles)
diff --git a/geckodriver.log b/geckodriver.log
index 1ea71146b5..76bf244068 100644
--- a/geckodriver.log
+++ b/geckodriver.log
@@ -1009,3 +1009,97 @@ JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/o
 [Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11a128900 Decode error: NS_ERROR_DOM_MEDIA_METADATA_ERR (0x806e0006): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
 [Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11a127100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
 [Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11b7d4000 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+JavaScript warning: https://browser.sentry-cdn.com/7.43.0/bundle.tracing.es5.min.js, line 2: Script terminated by timeout at:
+sentryWrapped@https://browser.sentry-cdn.com/7.43.0/bundle.tracing.es5.min.js:2:56209
+
+1700197719100 Marionette INFO Stopped listening on port 56709
+1700282301441 geckodriver INFO Listening on 127.0.0.1:51098
+1700282302430 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ...
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofilevwP3Vf"
+console.warn: services.settings: Ignoring preference override of remote settings server
+console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment
+1700282304372 Marionette INFO Marionette enabled
+1700282304859 Marionette INFO Listening on port 51114
+Read port: 51114
+1700282305123 RemoteAgent WARN TLS certificate errors will be ignored for this session
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement
+1700282335095 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL.
+console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org."
+console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org."
+
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement
+JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER.
+JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER.
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.6620ac81c5edf1fef492.js, line 2: unreachable code after return statement
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.6620ac81c5edf1fef492.js, line 2: unreachable code after return statement
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.6620ac81c5edf1fef492.js, line 2: unreachable code after return statement
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.6620ac81c5edf1fef492.js, line 2: unreachable code after return statement
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.6620ac81c5edf1fef492.js, line 2: unreachable code after return statement
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.6620ac81c5edf1fef492.js, line 2: unreachable code after return statement
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.6620ac81c5edf1fef492.js, line 2: unreachable code after return statement
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.6620ac81c5edf1fef492.js, line 2: unreachable code after return statement
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.6620ac81c5edf1fef492.js, line 2: unreachable code after return statement
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=116320900 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=116320900 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=116320900 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.6620ac81c5edf1fef492.js, line 2: unreachable code after return statement
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.6620ac81c5edf1fef492.js, line 2: unreachable code after return statement
+JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.6620ac81c5edf1fef492.js, line 2: unreachable code after return statement
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=115ba4700 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=117110e00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=115bd6f00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=10c151c00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=10c151c00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=10c151c00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=10c151c00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=10c151c00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=10c151c00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=10c151c00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=10c160d00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=10c158400 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=10c158400 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=10c158400 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=117111400 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=117111400 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=117111400 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=117111400 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=117111400 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=117111400 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=117111400 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=10c158a00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=10c158a00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=10c158a00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=10c158a00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=10c158a00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=10c158a00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=12e583800 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=12e583800 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=12e583800 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=12e583800 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=11ae4ba00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=11ae4ba00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=11ae4ba00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=11ae4ba00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=11ae4ba00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=11ae4ba00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=11ae4ba00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=11ae4ba00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=115bd5a00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=115bd5a00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=115bd5a00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=10c159300 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=12e52cd00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=12e52cd00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=12e52cd00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=11ae48100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=11ae48100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=11ae48100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=11ae48100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=11ae48100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=11ae48100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=11ae48100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=11ae48100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=11ae4b400 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=11ae4b400 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
+[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=11ae4b400 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
diff --git a/subtitles.json b/subtitles.json
index 59a536ca87..36591f0e4c 100644
--- a/subtitles.json
+++ b/subtitles.json
@@ -5,899 +5,1469 @@
 {
     "introduction-to-text-mining-and-analytics": [
         {
-            "0:00": "[SOUND] Hello. Welcome to the course Text Mining and Analytics.
My name is ChengXiang Zhai. I have a nickname, Cheng. I am a professor of the Department of Computer Science at the University of Illinois at Urbana-Champaign. This course is a part of a data mining specialization offered by the University of Illinois at Urbana-Champaign. In addition to this course, there are four other courses offered by"
+            "time": "0:00",
+            "text": "[SOUND] Hello. Welcome to the course Text Mining and Analytics. My name is ChengXiang Zhai. I have a nickname, Cheng. I am a professor of the Department of Computer Science at the University of Illinois at Urbana-Champaign. This course is a part of a data mining specialization offered by the University of Illinois at Urbana-Champaign. In addition to this course, there are four other courses offered by",
+            "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics"
         },
         {
-            "0:39": "Professor Jiawei Han, Professor John Hart and me, followed by a capstone project course that all of us will teach together."
+            "time": "0:39",
+            "text": "Professor Jiawei Han, Professor John Hart and me, followed by a capstone project course that all of us will teach together.",
+            "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics"
         },
         {
-            "0:51": "This course is particularly related to another course in the specialization, mainly text retrieval and search engines in that both courses are about text data."
+            "time": "0:51",
+            "text": "This course is particularly related to another course in the specialization, mainly text retrieval and search engines in that both courses are about text data.",
+            "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics"
         },
         {
-            "1:07": "In contrast, pattern discovery and cluster analysis are about algorithms more applicable to all kinds of data in general. The visualization course is also relatively general in that the techniques can be applied to all kinds of data."
+            "time": "1:07",
+            "text": "In contrast, pattern discovery and cluster analysis are about algorithms more applicable to all kinds of data in general. The visualization course is also relatively general in that the techniques can be applied to all kinds of data.",
+            "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics"
         },
         {
-            "1:28": "This course addresses a pressing need for harnessing big text data."
+            "time": "1:28",
+            "text": "This course addresses a pressing need for harnessing big text data.",
+            "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics"
         },
         {
-            "1:35": "Text data has been growing dramatically recently, mostly because of the advance of technologies deployed on the web that would enable people to quickly generate text data."
+            "time": "1:35",
+            "text": "Text data has been growing dramatically recently, mostly because of the advance of technologies deployed on the web that would enable people to quickly generate text data.",
+            "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics"
         },
         {
-            "1:50": "So, I listed some of the examples on this slide"
+            "time": "1:50",
+            "text": "So, I listed some of the examples on this slide",
+            "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics"
         },
         {
-            "1:57": "that can show a variety of text data that are available today. For example, if you think about the data on the internet, on the web,"
+            "time": "1:57",
+            "text": "that can show a variety of text data that are available today. For example, if you think about the data on the internet, on the web,",
+            "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics"
         },
         {
-            "2:07": "everyday we are seeing many web pages being created."
+            "time": "2:07",
+            "text": "everyday we are seeing many web pages being created.",
+            "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics"
         },
         {
-            "2:13": "Blogs are another kind of new text data that are being generated quickly by people. Anyone can write a blog article on the web. New articles of course have always been a main kind of text data that being generated everyday."
+            "time": "2:13",
+            "text": "Blogs are another kind of new text data that are being generated quickly by people. Anyone can write a blog article on the web. New articles of course have always been a main kind of text data that being generated everyday.",
+            "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics"
         },
         {
-            "2:31": "Emails are yet another kind of text data. And literature is also representing a large portion of text data. It's also especially very important because of the high quality in the data. That is, we encode our knowledge about the word using text data represented by all the literature articles. It's a vast amount of knowledge of"
+            "time": "2:31",
+            "text": "Emails are yet another kind of text data. And literature is also representing a large portion of text data. It's also especially very important because of the high quality in the data. That is, we encode our knowledge about the word using text data represented by all the literature articles. It's a vast amount of knowledge of",
+            "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics"
         },
         {
-            "3:08": "all the text and data in these literature articles."
+            "time": "3:08",
+            "text": "all the text and data in these literature articles.",
+            "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics"
         },
         {
-            "3:14": "Twitter is another representative text data representing social media. Of course there are forums as well."
+            "time": "3:14",
+            "text": "Twitter is another representative text data representing social media. Of course there are forums as well.",
+            "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics"
         },
         {
-            "3:24": "People are generating tweets very quickly indeed as we are speaking perhaps many people have already written many tweets. So, as you can see there are all kinds of text data that are being generated very quickly."
+            "time": "3:24",
+            "text": "People are generating tweets very quickly indeed as we are speaking perhaps many people have already written many tweets. So, as you can see there are all kinds of text data that are being generated very quickly.",
+            "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics"
         },
         {
-            "3:38": "Now these text data present some challenges for people."
+            "time": "3:38",
+            "text": "Now these text data present some challenges for people.",
+            "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics"
         },
         {
-            "3:43": "It's very hard for anyone to digest all the text data quickly. In particular, it's impossible for scientists to read all of the for example or for anyone to read all the tweets."
+            "time": "3:43",
+            "text": "It's very hard for anyone to digest all the text data quickly. In particular, it's impossible for scientists to read all of the for example or for anyone to read all the tweets.",
+            "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics"
         },
         {
-            "4:01": "So there's a need for tools to help people digest text data more efficiently."
+            "time": "4:01",
+            "text": "So there's a need for tools to help people digest text data more efficiently.",
+            "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics"
         },
         {
-            "4:09": "There is also another interesting opportunity provided by such big text data, and that is it's possible to leverage the amount of text data to discover interesting patterns to turn text data into actionable knowledge that can be useful for decision making."
+            "time": "4:09",
+            "text": "There is also another interesting opportunity provided by such big text data, and that is it's possible to leverage the amount of text data to discover interesting patterns to turn text data into actionable knowledge that can be useful for decision making.",
+            "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics"
         },
         {
-            "4:27": "So for example, product managers may be interested in knowing the feedback of customers about their products, knowing how well their products are being received as compared with the products of competitors. This can be a good opportunity for leveraging text data as we have seen a lot of reviews of product on the web. So if we can develop a master text mining techniques to tap into such a [INAUDIBLE] to extract the knowledge and opinions of people about these products, then we can help these product managers to gain business intelligence or to essentially feedback from their customers."
+            "time": "4:27",
+            "text": "So for example, product managers may be interested in knowing the feedback of customers about their products, knowing how well their products are being received as compared with the products of competitors. This can be a good opportunity for leveraging text data as we have seen a lot of reviews of product on the web. So if we can develop a master text mining techniques to tap into such a [INAUDIBLE] to extract the knowledge and opinions of people about these products, then we can help these product managers to gain business intelligence or to essentially feedback from their customers.",
+            "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics"
         },
         {
-            "5:18": "In scientific research, for example, scientists are interested in knowing the trends of research topics, knowing"
+            "time": "5:18",
+            "text": "In scientific research, for example, scientists are interested in knowing the trends of research topics, knowing",
+            "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics"
         },
         {
-            "5:29": "about what related fields have discovered. This problem is especially important in biology research as well. Different communities tend to use different terminologies, yet they're starting very similar problems. So how can we integrate the knowledge that is covered in different communities to help study a particular problem? It's very important, and it can speed up scientific discovery."
+            "time": "5:29",
+            "text": "about what related fields have discovered. This problem is especially important in biology research as well. Different communities tend to use different terminologies, yet they're starting very similar problems. So how can we integrate the knowledge that is covered in different communities to help study a particular problem? It's very important, and it can speed up scientific discovery.",
+            "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics"
         },
         {
-            "5:57": "So there are many such examples where we can leverage the text data to discover useable knowledge to optimize our decision."
+            "time": "5:57",
+            "text": "So there are many such examples where we can leverage the text data to discover useable knowledge to optimize our decision.",
+            "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics"
         },
         {
-            "6:06": "The main techniques for harnessing big text data are text retrieval and text mining. So these are two very much related technologies.Yet, they have somewhat different purposes. These two kinds of techniques are covered in the tool in this specialization. So, text retrieval on search engines covers text retrieval, and this is necessary to turn big text data into a much smaller but more relevant text data, which are often the data that we need to handle a particular problem or to optimize a particular decision. This course covers text mining which is a second step in this pipeline that can be used to further process the small amount of relevant data to extract the knowledge or to help people digest the text data easily. So the two courses are clearly related, in fact, some of the techniques are shared by both text retrieval and text mining. If you have already taken the text retrieval course, then you might see some of the content being repeated in this text mining course, although we'll be talking about the techniques from a very different perspective. If you have not taken the text retrieval course, it's also fine because this course is self-contained and you can certainly understand all of the materials without a problem. Of course, you might find it beneficial to take both courses and that will give you a very complete set of skills to handle big text data."
+ "time": "6:06", + "text": "The main techniques for harnessing big text data are text retrieval and text mining. So these are two very much related technologies.Yet, they have somewhat different purposes. These two kinds of techniques are covered in the tool in this specialization. So, text retrieval on search engines covers text retrieval, and this is necessary to turn big text data into a much smaller but more relevant text data, which are often the data that we need to handle a particular problem or to optimize a particular decision. This course covers text mining which is a second step in this pipeline that can be used to further process the small amount of relevant data to extract the knowledge or to help people digest the text data easily. So the two courses are clearly related, in fact, some of the techniques are shared by both text retrieval and text mining. If you have already taken the text retrieval course, then you might see some of the content being repeated in this text mining course, although we'll be talking about the techniques from a very different perspective. If you have not taken the text retrieval course, it's also fine because this course is self-contained and you can certainly understand all of the materials without a problem. Of course, you might find it beneficial to take both courses and that will give you a very complete set of skills to handle big text data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics" }, { - "8:02": "[MUSIC]" + "time": "8:02", + "text": "[MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics" } ] }, { "course-prerequisites-completion": [ { - "0:00": "[MUSIC]" + "time": "0:00", + "text": "[MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/1gdlJ/course-prerequisites-completion" }, { - "0:07": "This lecture is a brief introduction to the course. 
We'll then do cover the objectives of the course. The prerequisites and the course format, the reference books and how to complete the course. The objectives of the course are the following. First we would like to cover the basic concepts and practical techniques in text data mining." + "time": "0:07", + "text": "This lecture is a brief introduction to the course. We'll then do cover the objectives of the course. The prerequisites and the course format, the reference books and how to complete the course. The objectives of the course are the following. First we would like to cover the basic concepts and practical techniques in text data mining.", + "url": "https://www.coursera.org/learn/text-mining/lecture/1gdlJ/course-prerequisites-completion" }, { - "0:33": "So this means we will not be able to cover some advance the techniques in detail but rather we are going to choose the practical useful techniques and then treat them in more depth. We are going to also cover the basic concepts that are very useful for many applications. The second objective is to cover more general techniques for text data mining. So we emphasise the coverage of general techniques that can be applicable to any text in any natural language. We also hope that these techniques to either automatically work on problems without any human effort or only requiring minimum human effort." + "time": "0:33", + "text": "So this means we will not be able to cover some advance the techniques in detail but rather we are going to choose the practical useful techniques and then treat them in more depth. We are going to also cover the basic concepts that are very useful for many applications. The second objective is to cover more general techniques for text data mining. So we emphasise the coverage of general techniques that can be applicable to any text in any natural language. 
We also hope that these techniques to either automatically work on problems without any human effort or only requiring minimum human effort.", + "url": "https://www.coursera.org/learn/text-mining/lecture/1gdlJ/course-prerequisites-completion" }, { - "1:26": "So these criteria have helped us to choose techniques that can be applied to many applications. This is in contrast to some more detailed analysis of text data particularly using natural language processing techniques. Now, such techniques are also very important. And they are indeed necessary for some of the protections where we would like to go in depth to understand the text data in more detail." + "time": "1:26", + "text": "So these criteria have helped us to choose techniques that can be applied to many applications. This is in contrast to some more detailed analysis of text data particularly using natural language processing techniques. Now, such techniques are also very important. And they are indeed necessary for some of the protections where we would like to go in depth to understand the text data in more detail.", + "url": "https://www.coursera.org/learn/text-mining/lecture/1gdlJ/course-prerequisites-completion" }, { - "2:01": "Such detailed understanding techniques however are generally not scalable and they tend to require a lot of human effort. So they cannot be easy to applied into any domain. So as you can imagine in practice it would be beneficial to combine both kinds of techniques using the general techniques that we'll be covering in this course as a basis. And improve these techniques by using more human effort whenever it's appropriate We also would like to provide a hands on experience to you in Macro aspects. First, you can look at some experience was using a text mining toolkit and implementing text mining algorithms. Second, you have the opportunity to experiment with some algorithms for text reminding and physics to try them on some data sets and to understand how to do experiments. 
And finally you have opportunity to participate in a competition of a text-based prediction task. You are expected to know the basic concept of computer science for example data structures and some other really basic concepts in computers science." + "time": "2:01", + "text": "Such detailed understanding techniques however are generally not scalable and they tend to require a lot of human effort. So they cannot be easy to applied into any domain. So as you can imagine in practice it would be beneficial to combine both kinds of techniques using the general techniques that we'll be covering in this course as a basis. And improve these techniques by using more human effort whenever it's appropriate We also would like to provide a hands on experience to you in Macro aspects. First, you can look at some experience was using a text mining toolkit and implementing text mining algorithms. Second, you have the opportunity to experiment with some algorithms for text reminding and physics to try them on some data sets and to understand how to do experiments. And finally you have opportunity to participate in a competition of a text-based prediction task. You are expected to know the basic concept of computer science for example data structures and some other really basic concepts in computers science.", + "url": "https://www.coursera.org/learn/text-mining/lecture/1gdlJ/course-prerequisites-completion" }, { - "3:25": "You are also expected to be familiar with programming and comfortable with programming, particularly with C++. This course however is not about programming, so you are not expected to do a lot of coding. but we're going to do give you a C++ toolkit that's fairly sophisticated so you have to be comfortable with handling such a toolkit." + "time": "3:25", + "text": "You are also expected to be familiar with programming and comfortable with programming, particularly with C++. 
This course however is not about programming, so you are not expected to do a lot of coding. but we're going to do give you a C++ toolkit that's fairly sophisticated so you have to be comfortable with handling such a toolkit.", + "url": "https://www.coursera.org/learn/text-mining/lecture/1gdlJ/course-prerequisites-completion" }, { - "3:49": "And you may be asked to write a small amount of code." + "time": "3:49", + "text": "And you may be asked to write a small amount of code.", + "url": "https://www.coursera.org/learn/text-mining/lecture/1gdlJ/course-prerequisites-completion" }, { - "3:56": "It's also useful if you know some concepts and techniques in probability and statistics, but it is not necessary, knowing such analogy will help you understand some of the algorithm in more depth." + "time": "3:56", + "text": "It's also useful if you know some concepts and techniques in probability and statistics, but it is not necessary, knowing such analogy will help you understand some of the algorithm in more depth.", + "url": "https://www.coursera.org/learn/text-mining/lecture/1gdlJ/course-prerequisites-completion" }, { - "4:16": "The format of the course is lectures plus quizzes that will be given to you in a regular basis." + "time": "4:16", + "text": "The format of the course is lectures plus quizzes that will be given to you in a regular basis.", + "url": "https://www.coursera.org/learn/text-mining/lecture/1gdlJ/course-prerequisites-completion" }, { - "4:29": "And there is also an optional programming assignment." + "time": "4:29", + "text": "And there is also an optional programming assignment.", + "url": "https://www.coursera.org/learn/text-mining/lecture/1gdlJ/course-prerequisites-completion" }, { - "4:33": "Now, we've made programming assignments optional not because it's not important, but because we suspect that not all of you will have the need for computing resources to do the programming assignment. 
So naturally, we would encourage all of you to try to do the programming assignment. If possible as that would be a great way to learn about the knowledge that would teach in this course." + "time": "4:33", + "text": "Now, we've made programming assignments optional not because it's not important, but because we suspect that not all of you will have the need for computing resources to do the programming assignment. So naturally, we would encourage all of you to try to do the programming assignment. If possible as that would be a great way to learn about the knowledge that would teach in this course.", + "url": "https://www.coursera.org/learn/text-mining/lecture/1gdlJ/course-prerequisites-completion" }, { - "5:05": "The main reference book for this course is a recent book that Sean Massung and I have co-authored. The title is Text Data Management and Analysis A practical introduction to information retrieval and text mining. However this reference book is not required in the sense that if we follow all the lecture videos closely then you should have little problems because working on the quiz problems and will program your assignment to pass the course. However the book would be useful to give you a high level and systematic description of all the topics covered in this course plus some others. It would also help you understand some topics in more depth. So if you problems with following some of the videos the book may be useful to you." + "time": "5:05", + "text": "The main reference book for this course is a recent book that Sean Massung and I have co-authored. The title is Text Data Management and Analysis A practical introduction to information retrieval and text mining. However this reference book is not required in the sense that if we follow all the lecture videos closely then you should have little problems because working on the quiz problems and will program your assignment to pass the course. 
However the book would be useful to give you a high level and systematic description of all the topics covered in this course plus some others. It would also help you understand some topics in more depth. So if you problems with following some of the videos the book may be useful to you.", + "url": "https://www.coursera.org/learn/text-mining/lecture/1gdlJ/course-prerequisites-completion" }, { - "5:59": "The book is also the reference book for another book. If you are interested in buying the book, there's a link here. And there should be substantial discount for the students of this course. There are also quite a few other useful reference books and ratings. And they are available though the link at the bottom of this slide. [MUSIC]" + "time": "5:59", + "text": "The book is also the reference book for another book. If you are interested in buying the book, there's a link here. And there should be substantial discount for the students of this course. There are also quite a few other useful reference books and ratings. And they are available though the link at the bottom of this slide. [MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/1gdlJ/course-prerequisites-completion" } ] }, { "1-1-overview-text-mining-and-analytics-part-1": [ { - "0:00": "[SOUND] In this lecture we give an overview of Text Mining and Analytics." + "time": "0:00", + "text": "[SOUND] In this lecture we give an overview of Text Mining and Analytics.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "0:13": "First, let's define the term text mining, and the term text analytics. The title of this course is called Text Mining and Analytics." + "time": "0:13", + "text": "First, let's define the term text mining, and the term text analytics. 
The title of this course is called Text Mining and Analytics.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "0:25": "But the two terms text mining, and text analytics are actually roughly the same." + "time": "0:25", + "text": "But the two terms text mining, and text analytics are actually roughly the same.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "0:32": "So we are not really going to really distinguish them, and we're going to use them interchangeably. But the reason that we have chosen to use both terms in the title is because there is also some subtle difference, if you look at the two phrases literally." + "time": "0:32", + "text": "So we are not really going to really distinguish them, and we're going to use them interchangeably. But the reason that we have chosen to use both terms in the title is because there is also some subtle difference, if you look at the two phrases literally.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "0:52": "Mining emphasizes more on the process. So it gives us a error rate medical view of the problem. Analytics, on the other hand emphasizes more on the result," + "time": "0:52", + "text": "Mining emphasizes more on the process. So it gives us a error rate medical view of the problem. Analytics, on the other hand emphasizes more on the result,", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "1:07": "or having a problem in mind. We are going to look at text data to help us solve a problem." + "time": "1:07", + "text": "or having a problem in mind. 
We are going to look at text data to help us solve a problem.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "1:16": "But again as I said, we can treat these two terms roughly the same." + "time": "1:16", + "text": "But again as I said, we can treat these two terms roughly the same.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "1:21": "And I think in the literature you probably will find the same. So we're not going to really distinguish that in the course." + "time": "1:21", + "text": "And I think in the literature you probably will find the same. So we're not going to really distinguish that in the course.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "1:29": "Both text mining and text analytics mean that we want to turn text data into high quality information, or actionable knowledge." + "time": "1:29", + "text": "Both text mining and text analytics mean that we want to turn text data into high quality information, or actionable knowledge.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "1:42": "So in both cases, we" + "time": "1:42", + "text": "So in both cases, we", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "1:45": "have the problem of dealing with a lot of text data and we hope to. Turn these text data into something more useful to us than the raw text data." + "time": "1:45", + "text": "have the problem of dealing with a lot of text data and we hope to. 
Turn these text data into something more useful to us than the raw text data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "1:57": "And here we distinguish two different results. One is high-quality information, the other is actionable knowledge." + "time": "1:57", + "text": "And here we distinguish two different results. One is high-quality information, the other is actionable knowledge.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "2:05": "Sometimes the boundary between the two is not so clear." + "time": "2:05", + "text": "Sometimes the boundary between the two is not so clear.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "2:09": "But I also want to say a little bit about" + "time": "2:09", + "text": "But I also want to say a little bit about", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "2:12": "these two different angles of the result of text field mining." + "time": "2:12", + "text": "these two different angles of the result of text field mining.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "2:19": "In the case of high quality information, we refer to more concise information about the topic." + "time": "2:19", + "text": "In the case of high quality information, we refer to more concise information about the topic.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "2:28": "Which might be much easier for humans to digest than the raw text data. For example, you might face a lot of reviews of a product." + "time": "2:28", + "text": "Which might be much easier for humans to digest than the raw text data. 
For example, you might face a lot of reviews of a product.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "2:38": "A more concise form of information would be a very concise summary of the major opinions about the features of the product. Positive about, let's say battery life of a laptop." + "time": "2:38", + "text": "A more concise form of information would be a very concise summary of the major opinions about the features of the product. Positive about, let's say battery life of a laptop.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "2:53": "Now this kind of results are very useful to help people digest the text data." + "time": "2:53", + "text": "Now this kind of results are very useful to help people digest the text data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "2:59": "And so this is to minimize a human effort in consuming text data in some sense." + "time": "2:59", + "text": "And so this is to minimize a human effort in consuming text data in some sense.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "3:06": "The other kind of output is actually more knowledge. Here we emphasize the utility of the information or knowledge we discover from text data." + "time": "3:06", + "text": "The other kind of output is actually more knowledge. Here we emphasize the utility of the information or knowledge we discover from text data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "3:18": "It's actionable knowledge for some decision problem, or some actions to take." 
+ "time": "3:18", + "text": "It's actionable knowledge for some decision problem, or some actions to take.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "3:24": "For example, we might be able to determine which product is more appealing to us, or a better choice for a shocking decision." + "time": "3:24", + "text": "For example, we might be able to determine which product is more appealing to us, or a better choice for a shocking decision.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "3:38": "Now, such an outcome could be called actionable knowledge, because a consumer can take the knowledge and make a decision, and act on it. So, in this case text mining supplies knowledge for optimal decision making. But again, the two are not so clearly distinguished, so we don't necessarily have to make a distinction." + "time": "3:38", + "text": "Now, such an outcome could be called actionable knowledge, because a consumer can take the knowledge and make a decision, and act on it. So, in this case text mining supplies knowledge for optimal decision making. But again, the two are not so clearly distinguished, so we don't necessarily have to make a distinction.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "4:06": "Text mining is also related to text retrieval, which is a essential component in many text mining systems." + "time": "4:06", + "text": "Text mining is also related to text retrieval, which is a essential component in many text mining systems.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "4:15": "Now, text retrieval refers to finding relevant information from a large amount of text data." 
+ "time": "4:15", + "text": "Now, text retrieval refers to finding relevant information from a large amount of text data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "4:24": "So I've taught another separate MOOC on text retrieval and search engines." + "time": "4:24", + "text": "So I've taught another separate MOOC on text retrieval and search engines.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "4:31": "Where we discussed various techniques for text retrieval." + "time": "4:31", + "text": "Where we discussed various techniques for text retrieval.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "4:36": "If you have taken that MOOC, and you will find some overlap." + "time": "4:36", + "text": "If you have taken that MOOC, and you will find some overlap.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "4:42": "And it will be useful To know the background of text retrieval of understanding some of the topics in text mining." + "time": "4:42", + "text": "And it will be useful To know the background of text retrieval of understanding some of the topics in text mining.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "4:51": "But, if you have not taken that MOOC, it's also fine because in this MOOC on text mining and analytics, we're going to repeat some of the key concepts that are relevant for text mining. But they're at the high level and they also explain the relation between text retrieval and text mining." 
+ "time": "4:51", + "text": "But, if you have not taken that MOOC, it's also fine because in this MOOC on text mining and analytics, we're going to repeat some of the key concepts that are relevant for text mining. But they're at the high level and they also explain the relation between text retrieval and text mining.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "5:12": "Text retrieval is very useful for text mining in two ways. First, text retrieval can be a preprocessor for text mining. Meaning that it can help us turn big text data into a relatively small amount of most relevant text data. Which is often what's needed for solving a particular problem." + "time": "5:12", + "text": "Text retrieval is very useful for text mining in two ways. First, text retrieval can be a preprocessor for text mining. Meaning that it can help us turn big text data into a relatively small amount of most relevant text data. Which is often what's needed for solving a particular problem.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "5:36": "And in this sense, text retrieval also helps minimize human effort." + "time": "5:36", + "text": "And in this sense, text retrieval also helps minimize human effort.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "5:43": "Text retrieval is also needed for knowledge provenance. And this roughly corresponds to the interpretation of text mining as turning text data into actionable knowledge. Once we find the patterns in text data, or actionable knowledge, we generally would have to verify the knowledge. By looking at the original text data. 
So the users would have to have some text retrieval support, go back to the original text data to interpret the pattern or to better understand an analogy or to verify whether a pattern is really reliable. So this is a high level introduction to the concept of text mining, and the relationship between text mining and retrieval." + "time": "5:43", + "text": "Text retrieval is also needed for knowledge provenance. And this roughly corresponds to the interpretation of text mining as turning text data into actionable knowledge. Once we find the patterns in text data, or actionable knowledge, we generally would have to verify the knowledge. By looking at the original text data. So the users would have to have some text retrieval support, go back to the original text data to interpret the pattern or to better understand an analogy or to verify whether a pattern is really reliable. So this is a high level introduction to the concept of text mining, and the relationship between text mining and retrieval.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "6:32": "Next, let's talk about text data as a special kind of data." + "time": "6:32", + "text": "Next, let's talk about text data as a special kind of data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "6:39": "Now it's interesting to view text data as data generated by humans as subjective sensors." + "time": "6:39", + "text": "Now it's interesting to view text data as data generated by humans as subjective sensors.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "6:53": "So, this slide shows an analogy between text data and non-text data. And between humans as subjective sensors and physical sensors, such as a network sensor or a thermometer." 
+ "time": "6:53", + "text": "So, this slide shows an analogy between text data and non-text data. And between humans as subjective sensors and physical sensors, such as a network sensor or a thermometer.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "7:16": "So in general a sensor would monitor the real world in some way. It would sense some signal from the real world, and then would report the signal as data, in various forms. For example, a thermometer would watch the temperature of real world and then we report the temperature being a particular format." + "time": "7:16", + "text": "So in general a sensor would monitor the real world in some way. It would sense some signal from the real world, and then would report the signal as data, in various forms. For example, a thermometer would watch the temperature of real world and then we report the temperature being a particular format.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "7:44": "Similarly, a geo sensor would sense the location and then report. The location specification, for example, in the form of longitude value and latitude value. A network sends over the monitor network traffic, or activities in the network and are reported. Some digital format of data. Similarly we can think of humans as subjective sensors. That will observe the real world and from some perspective. And then humans will express what they have observed in the form of text data. So, in this sense, human is actually a subjective sensor that would also sense what's happening in the world and then express what's observed in the form of data, in this case, text data. Now, looking at the text data in this way has an advantage of being able to integrate all types of data together. And that's indeed needed in most data mining problems." 
+ "time": "7:44", + "text": "Similarly, a geo sensor would sense the location and then report. The location specification, for example, in the form of longitude value and latitude value. A network sends over the monitor network traffic, or activities in the network and are reported. Some digital format of data. Similarly we can think of humans as subjective sensors. That will observe the real world and from some perspective. And then humans will express what they have observed in the form of text data. So, in this sense, human is actually a subjective sensor that would also sense what's happening in the world and then express what's observed in the form of data, in this case, text data. Now, looking at the text data in this way has an advantage of being able to integrate all types of data together. And that's indeed needed in most data mining problems.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "8:56": "So here we are looking at the general problem of data mining." + "time": "8:56", + "text": "So here we are looking at the general problem of data mining.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "9:02": "And in general we would Be dealing with a lot of data about our world that are related to a problem. And in general it will be dealing with both non-text data and text data. And of course the non-text data are usually produced by physical senses. And those non-text data can be also of different formats." + "time": "9:02", + "text": "And in general we would Be dealing with a lot of data about our world that are related to a problem. And in general it will be dealing with both non-text data and text data. And of course the non-text data are usually produced by physical senses. 
And those non-text data can be also of different formats.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "9:27": "Numerical data, categorical, or relational data, or multi-media data like video or speech." + "time": "9:27", + "text": "Numerical data, categorical, or relational data, or multi-media data like video or speech.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "9:36": "So, these non-text data are often very important in some problems. But text data is also very important, mostly because they contain a lot of semantic content. And they often contain knowledge about the users, especially preferences and opinions of users." + "time": "9:36", + "text": "So, these non-text data are often very important in some problems. But text data is also very important, mostly because they contain a lot of semantic content. And they often contain knowledge about the users, especially preferences and opinions of users.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "10:01": "So, but by treating text data as the data observed from human sensors, we can treat all this data together in the same framework. So the data mining problem is basically to turn such data, turn all the data into actionable knowledge so that we can take advantage of it to change the real world, of course for the better. So this means the data mining problem is basically taking a lot of data as input and giving actionable knowledge as output. Inside of the data mining module, you can also see we have a number of different kinds of mining algorithms. And this is because, for different kinds of data, we generally need different algorithms for mining the data."
+ "time": "10:01", + "text": "So, but by treating text data as the data observed from human sensors, we can treat all this data together in the same framework. So the data mining problem is basically to turn such data, turn all the data in your actionable knowledge to that we can take advantage of it to change the real world of course for better. So this means the data mining problem is basically taking a lot of data as input and giving actionable knowledge as output. Inside of the data mining module, you can also see we have a number of different kind of mining algorithms. And this is because, for different kinds of data, we generally need different algorithms for mining the data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" }, { - "10:56": "For example, video data might require computer vision to understand video content. And that would facilitate the more effective mining. And we also have a lot of general algorithms that are applicable to all kinds of data and those algorithms, of course, are very useful. Although, for a particular kind of data, we generally want to also develop a special algorithm. So this course will cover specialized algorithms that are particularly useful for mining text data. [MUSIC]" + "time": "10:56", + "text": "For example, video data might require computer vision to understand video content. And that would facilitate the more effective mining. And we also have a lot of general algorithms that are applicable to all kinds of data and those algorithms, of course, are very useful. Although, for a particular kind of data, we generally want to also develop a special algorithm. So this course will cover specialized algorithms that are particularly useful for mining text data. 
[MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/7zA4L/1-1-overview-text-mining-and-analytics-part-1" } ] }, { "1-2-overview-text-mining-and-analytics-part-2": [ { - "0:00": "[SOUND] So, looking at the text mining problem more closely, we see that the problem is similar to general data mining, except that we'll be focusing more on text data." + "time": "0:00", + "text": "[SOUND] So, looking at the text mining problem more closely, we see that the problem is similar to general data mining, except that we'll be focusing more on text data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/hgSh4/1-2-overview-text-mining-and-analytics-part-2" }, { - "0:21": "And we're going to have text mining algorithms to help us to turn text data into actionable knowledge that we can use in real world, especially for decision making, or for completing whatever tasks that require text data to support. Because, in general, in many real world problems of data mining we also tend to have other kinds of data that are non-textual. So a more general picture would be to include non-text data as well." + "time": "0:21", + "text": "And we're going to have text mining algorithms to help us to turn text data into actionable knowledge that we can use in real world, especially for decision making, or for completing whatever tasks that require text data to support. Because, in general, in many real world problems of data mining we also tend to have other kinds of data that are non-textual. So a more general picture would be to include non-text data as well.", + "url": "https://www.coursera.org/learn/text-mining/lecture/hgSh4/1-2-overview-text-mining-and-analytics-part-2" }, { - "0:56": "And for this reason we might be concerned with joint mining of text and non-text data. And so in this course we're going to focus more on text mining, but we're also going to also touch how do to joint analysis of both text data and non-text data. 
With this problem definition we can now look at the landscape of the topics in text mining and analytics." + "time": "0:56", + "text": "And for this reason we might be concerned with joint mining of text and non-text data. And so in this course we're going to focus more on text mining, but we're also going to touch on how to do joint analysis of both text data and non-text data. With this problem definition we can now look at the landscape of the topics in text mining and analytics.", + "url": "https://www.coursera.org/learn/text-mining/lecture/hgSh4/1-2-overview-text-mining-and-analytics-part-2" }, { - "1:21": "Now this slide shows the process of generating text data in more detail." + "time": "1:21", + "text": "Now this slide shows the process of generating text data in more detail.", + "url": "https://www.coursera.org/learn/text-mining/lecture/hgSh4/1-2-overview-text-mining-and-analytics-part-2" }, { - "1:27": "More specifically, a human sensor or human observer would look at the world from some perspective." + "time": "1:27", + "text": "More specifically, a human sensor or human observer would look at the world from some perspective.", + "url": "https://www.coursera.org/learn/text-mining/lecture/hgSh4/1-2-overview-text-mining-and-analytics-part-2" }, { - "1:34": "Different people would be looking at the world from different angles and they'll pay attention to different things. The same person at different times might also pay attention to different aspects of the observed world. And so the humans are able to perceive the world from some perspective. And that human, the sensor, would then form a view of the world. And that can be called the Observed World. Of course, this would be different from the Real World because the perspective that the person has taken can often be biased." + "time": "1:34", + "text": "Different people would be looking at the world from different angles and they'll pay attention to different things.
The same person at different times might also pay attention to different aspects of the observed world. And so the humans are able to perceive the world from some perspective. And that human, the sensor, would then form a view of the world. And that can be called the Observed World. Of course, this would be different from the Real World because the perspective that the person has taken can often be biased.", + "url": "https://www.coursera.org/learn/text-mining/lecture/hgSh4/1-2-overview-text-mining-and-analytics-part-2" }, { - "2:16": "Now the Observed World can be represented as, for example, entity-relation graphs or in a more general way, using a knowledge representation language. But in general, this is basically what a person has in mind about the world. And we don't really know what exactly it looks like, of course. But then the human would express what the person has observed using a natural language, such as English. And the result is text data." + "time": "2:16", + "text": "Now the Observed World can be represented as, for example, entity-relation graphs or in a more general way, using a knowledge representation language. But in general, this is basically what a person has in mind about the world. And we don't really know what exactly it looks like, of course. But then the human would express what the person has observed using a natural language, such as English. And the result is text data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/hgSh4/1-2-overview-text-mining-and-analytics-part-2" }, { - "2:55": "Of course a person could have used a different language to express what he or she has observed.
In that case we might have text data of mixed languages or different languages.", + "url": "https://www.coursera.org/learn/text-mining/lecture/hgSh4/1-2-overview-text-mining-and-analytics-part-2" }, { - "3:10": "The main goal of text mining is actually to revert this process of generating text data. We hope to be able to uncover some aspect in this process." + "time": "3:10", + "text": "The main goal of text mining is actually to revert this process of generating text data. We hope to be able to uncover some aspect in this process.", + "url": "https://www.coursera.org/learn/text-mining/lecture/hgSh4/1-2-overview-text-mining-and-analytics-part-2" }, { - "3:28": "Specifically, we can think about mining, for example, knowledge about the language." + "time": "3:28", + "text": "Specifically, we can think about mining, for example, knowledge about the language.", + "url": "https://www.coursera.org/learn/text-mining/lecture/hgSh4/1-2-overview-text-mining-and-analytics-part-2" }, { - "3:35": "And that means by looking at text data in English, we may be able to discover something about English, some usage of English, some patterns of English." + "time": "3:35", + "text": "And that means by looking at text data in English, we may be able to discover something about English, some usage of English, some patterns of English.", + "url": "https://www.coursera.org/learn/text-mining/lecture/hgSh4/1-2-overview-text-mining-and-analytics-part-2" }, { - "3:47": "So this is one type of mining problem, where the result is some knowledge about language which may be useful in various ways." + "time": "3:47", + "text": "So this is one type of mining problem, where the result is some knowledge about language which may be useful in various ways.", + "url": "https://www.coursera.org/learn/text-mining/lecture/hgSh4/1-2-overview-text-mining-and-analytics-part-2" }, { - "3:58": "If you look at the picture, we can also then mine knowledge about the observed world.
And so this has much to do with mining the content of text data." + "time": "3:58", + "text": "If you look at the picture, we can also then mine knowledge about the observed world. And so this has much to do with mining the content of text data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/hgSh4/1-2-overview-text-mining-and-analytics-part-2" }, { - "4:11": "We're going to look at what the text data are about, and then try to get the essence of it or extract high quality information about a particular aspect of the world that we're interested in." + "time": "4:11", + "text": "We're going to look at what the text data are about, and then try to get the essence of it or extract high quality information about a particular aspect of the world that we're interested in.", + "url": "https://www.coursera.org/learn/text-mining/lecture/hgSh4/1-2-overview-text-mining-and-analytics-part-2" }, { - "4:26": "For example, everything that has been said about a particular person or a particular entity. And this can be regarded as mining content to describe the observed world in the user's mind or the person's mind." + "time": "4:26", + "text": "For example, everything that has been said about a particular person or a particular entity. And this can be regarded as mining content to describe the observed world in the user's mind or the person's mind.", + "url": "https://www.coursera.org/learn/text-mining/lecture/hgSh4/1-2-overview-text-mining-and-analytics-part-2" }, { - "4:45": "If you look further, then you can also imagine we can mine knowledge about this observer, himself or herself. So this has also to do with using text data to infer some properties of this person."
So this has also to do with using text data to infer some properties of this person.", + "url": "https://www.coursera.org/learn/text-mining/lecture/hgSh4/1-2-overview-text-mining-and-analytics-part-2" }, { - "5:03": "And these properties could include the mood of the person or sentiment of the person." + "time": "5:03", + "text": "And these properties could include the mood of the person or sentiment of the person.", + "url": "https://www.coursera.org/learn/text-mining/lecture/hgSh4/1-2-overview-text-mining-and-analytics-part-2" }, { - "5:10": "And note that we distinguish the observed world from the person because text data can describe what the person has observed in an objective way. But the description can also be subjective, with sentiment, and so, in general, you can imagine the text data would contain some factual descriptions of the world plus some subjective comments. So that's why it's also possible to do text mining to mine knowledge about the observer. Finally, if you look at the picture to the left side of this picture, then you can see we can certainly also say something about the real world. Right? So indeed we can do text mining to infer other real world variables. And this is often called predictive analytics." + "time": "5:10", + "text": "And note that we distinguish the observed world from the person because text data can describe what the person has observed in an objective way. But the description can also be subjective, with sentiment, and so, in general, you can imagine the text data would contain some factual descriptions of the world plus some subjective comments. So that's why it's also possible to do text mining to mine knowledge about the observer. Finally, if you look at the picture to the left side of this picture, then you can see we can certainly also say something about the real world. Right? So indeed we can do text mining to infer other real world variables.
And this is often called predictive analytics.", + "url": "https://www.coursera.org/learn/text-mining/lecture/hgSh4/1-2-overview-text-mining-and-analytics-part-2" }, { - "6:00": "And we want to predict the value of a certain interesting variable. So, this picture basically covered multiple types of knowledge that we can mine from text in general." + "time": "6:00", + "text": "And we want to predict the value of a certain interesting variable. So, this picture basically covered multiple types of knowledge that we can mine from text in general.", + "url": "https://www.coursera.org/learn/text-mining/lecture/hgSh4/1-2-overview-text-mining-and-analytics-part-2" }, { - "6:14": "When we infer other real world variables we could also use some of the results from mining text data as intermediate results to help the prediction. For example, after we mine the content of text data we might generate some summary of content. And that summary could be then used to help us predict the variables of the real world.
Now of course this is still generated from the original text data, but I want to emphasize here that often the processing of text data to generate some features that can help with the prediction is very important.", + "url": "https://www.coursera.org/learn/text-mining/lecture/hgSh4/1-2-overview-text-mining-and-analytics-part-2" }, { - "7:04": "And that's why here we show the results of some other mining tasks, including mining the content of text data and mining knowledge about the observer, can all be very helpful for prediction." + "time": "7:04", + "text": "And that's why here we show the results of some other mining tasks, including mining the content of text data and mining knowledge about the observer, can all be very helpful for prediction.", + "url": "https://www.coursera.org/learn/text-mining/lecture/hgSh4/1-2-overview-text-mining-and-analytics-part-2" }, { - "7:21": "In fact, when we have non-text data, we could also use the non-text data to help prediction, and of course it depends on the problem. In general, non-text data can be very important for such prediction tasks. For example, if you want to predict stock prices or changes of stock prices based on discussion in the news articles or in social media, then this is an example of using text data to predict some other real world variables. But in this case, obviously, the historical stock price data would be very important for this prediction. And so that's an example of non-text data that would be very useful for the prediction. And we're going to combine both kinds of data to make the prediction. Now non-text data can be also used for analyzing text by supplying context." + "time": "7:21", + "text": "In fact, when we have non-text data, we could also use the non-text data to help prediction, and of course it depends on the problem. In general, non-text data can be very important for such prediction tasks. 
For example, if you want to predict stock prices or changes of stock prices based on discussion in the news articles or in social media, then this is an example of using text data to predict some other real world variables. But in this case, obviously, the historical stock price data would be very important for this prediction. And so that's an example of non-text data that would be very useful for the prediction. And we're going to combine both kinds of data to make the prediction. Now non-text data can be also used for analyzing text by supplying context.", + "url": "https://www.coursera.org/learn/text-mining/lecture/hgSh4/1-2-overview-text-mining-and-analytics-part-2" }, { - "8:25": "When we look at the text data alone, we'll be mostly looking at the content and/or opinions expressed in the text." + "time": "8:25", + "text": "When we look at the text data alone, we'll be mostly looking at the content and/or opinions expressed in the text.", + "url": "https://www.coursera.org/learn/text-mining/lecture/hgSh4/1-2-overview-text-mining-and-analytics-part-2" }, { - "8:32": "But text data generally also has context associated." + "time": "8:32", + "text": "But text data generally also has context associated.", + "url": "https://www.coursera.org/learn/text-mining/lecture/hgSh4/1-2-overview-text-mining-and-analytics-part-2" }, { - "8:37": "For example, the time and the location that are associated with the text data. And this is useful context information." + "time": "8:37", + "text": "For example, the time and the location that are associated with the text data. And this is useful context information.", + "url": "https://www.coursera.org/learn/text-mining/lecture/hgSh4/1-2-overview-text-mining-and-analytics-part-2" }, { - "8:48": "And the context can provide interesting angles for analyzing text data. For example, we might partition text data into different time periods because of the availability of the time.
Now we can analyze text data in each time period and then make a comparison. Similarly we can partition text data based on locations or any meta data that's associated to form interesting comparisons in areas. So, in this sense, non-text data can actually provide interesting angles or perspectives for text data analysis. And it can help us make context-sensitive analysis of content or the language usage or" + "time": "8:48", + "text": "And the context can provide interesting angles for analyzing text data. For example, we might partition text data into different time periods because of the availability of the time. Now we can analyze text data in each time period and then make a comparison. Similarly we can partition text data based on locations or any meta data that's associated to form interesting comparisons in areas. So, in this sense, non-text data can actually provide interesting angles or perspectives for text data analysis. And it can help us make context-sensitive analysis of content or the language usage or", + "url": "https://www.coursera.org/learn/text-mining/lecture/hgSh4/1-2-overview-text-mining-and-analytics-part-2" }, { - "9:36": "the opinions about the observer or the authors of text data. We could analyze the sentiment in different contexts. So this is a fairly general landscape of the topics in text mining and analytics. In this course we're going to selectively cover some of those topics. We actually hope to cover most of these general topics." + "time": "9:36", + "text": "the opinions about the observer or the authors of text data. We could analyze the sentiment in different contexts. So this is a fairly general landscape of the topics in text mining and analytics. In this course we're going to selectively cover some of those topics. 
We actually hope to cover most of these general topics.", + "url": "https://www.coursera.org/learn/text-mining/lecture/hgSh4/1-2-overview-text-mining-and-analytics-part-2" }, { - "10:06": "First we're going to cover natural language processing very briefly because this has to do with understanding text data and this determines how we can represent text data for text mining. Second, we're going to talk about how to mine word associations from text data. And word associations are a form of useful lexical knowledge about a language. Third, we're going to talk about topic mining and analysis. And this is only one way to analyze content of text, but it's a very useful way of analyzing content. It's also one of the most useful techniques in text mining." + "time": "10:06", + "text": "First we're going to cover natural language processing very briefly because this has to do with understanding text data and this determines how we can represent text data for text mining. Second, we're going to talk about how to mine word associations from text data. And word associations are a form of useful lexical knowledge about a language. Third, we're going to talk about topic mining and analysis. And this is only one way to analyze content of text, but it's a very useful way of analyzing content. It's also one of the most useful techniques in text mining.", + "url": "https://www.coursera.org/learn/text-mining/lecture/hgSh4/1-2-overview-text-mining-and-analytics-part-2" }, { - "10:53": "Then we're going to talk about opinion mining and sentiment analysis.
So this can be regarded as one example of mining knowledge about the observer.", + "url": "https://www.coursera.org/learn/text-mining/lecture/hgSh4/1-2-overview-text-mining-and-analytics-part-2" }, { - "11:07": "And finally we're going to cover text-based prediction problems where we try to predict some real world variable based on text data." + "time": "11:07", + "text": "And finally we're going to cover text-based prediction problems where we try to predict some real world variable based on text data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/hgSh4/1-2-overview-text-mining-and-analytics-part-2" }, { - "11:17": "So this slide also serves as a road map for this course. And we're going to use this as an outline for the topics that we'll cover in the rest of this course. [MUSIC]" + "time": "11:17", + "text": "So this slide also serves as a road map for this course. And we're going to use this as an outline for the topics that we'll cover in the rest of this course. [MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/hgSh4/1-2-overview-text-mining-and-analytics-part-2" } ] }, { "1-3-natural-language-content-analysis-part-1": [ { - "0:00": "[SOUND]" + "time": "0:00", + "text": "[SOUND]", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "0:09": "This lecture is about natural language content analysis. Natural language content analysis is the foundation of text mining. So we're going to first talk about this." + "time": "0:09", + "text": "This lecture is about natural language content analysis. Natural language content analysis is the foundation of text mining. So we're going to first talk about this.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "0:24": "And in particular, natural language processing will affect how we can represent text data."
+ "time": "0:24", + "text": "And in particular, natural language processing with a factor how we can present text data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "0:33": "And this determines what algorithms can be used to analyze and mine text data." + "time": "0:33", + "text": "And this determines what algorithms can be used to analyze and mine text data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "0:40": "We're going to take a look at the basic concepts in natural language first." + "time": "0:40", + "text": "We're going to take a look at the basic concepts in natural language first.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "0:46": "And I'm going to explain these concepts using a similar example that you've all seen here. A dog is chasing a boy on the playground. Now this is a very simple sentence. When we read such a sentence we don't have to think about it to get the meaning of it. But when a computer has to understand the sentence, the computer has to go through several steps." + "time": "0:46", + "text": "And I'm going to explain these concepts using a similar example that you've all seen here. A dog is chasing a boy on the playground. Now this is a very simple sentence. When we read such a sentence we don't have to think about it to get the meaning of it. But when a computer has to understand the sentence, the computer has to go through several steps.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "1:13": "First, the computer needs to know what are the words, how to segment the words in English. And this is very easy, we can just look at the space. And then the computer will need the know the categories of these words, syntactical categories. 
So for example, dog is a noun, chasing's a verb, boy is another noun etc. And this is called lexical analysis. In particular, tagging these words with these syntactic categories is called part-of-speech tagging." + "time": "1:13", + "text": "First, the computer needs to know what are the words, how to segment the words in English. And this is very easy, we can just look at the space. And then the computer will need to know the categories of these words, syntactical categories. So for example, dog is a noun, chasing's a verb, boy is another noun etc. And this is called lexical analysis. In particular, tagging these words with these syntactic categories is called part-of-speech tagging.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "1:45": "After that the computer also needs to figure out the relationship between these words. So a and dog would form a noun phrase. On the playground would be a prepositional phrase, etc. And there are certain ways for them to be connected together in order for them to create meaning. Some other combinations may not make sense." + "time": "1:45", + "text": "After that the computer also needs to figure out the relationship between these words. So a and dog would form a noun phrase. On the playground would be a prepositional phrase, etc. And there are certain ways for them to be connected together in order for them to create meaning. Some other combinations may not make sense.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "2:07": "And this is called syntactical parsing, or syntactical analysis, parsing of a natural language sentence. The outcome is a parse tree that you are seeing here. That tells us the structure of the sentence, so that we know how we can interpret this sentence. But this is not semantics yet.
So in order to get the meaning we would have to map these phrases and these structures into some real world antithesis that we have in our mind. So dog is a concept that we know, and boy is a concept that we know. So connecting these phrases that we know is understanding." + "time": "2:07", + "text": "And this is called syntactical parsing, or syntactical analysis, parsing of a natural language sentence. The outcome is a parse tree that you are seeing here. That tells us the structure of the sentence, so that we know how we can interpret this sentence. But this is not semantics yet. So in order to get the meaning we would have to map these phrases and these structures into some real world entities that we have in our mind. So dog is a concept that we know, and boy is a concept that we know. So connecting these phrases that we know is understanding.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "2:52": "Now for a computer, would have to formally represent these entities by using symbols. So dog, d1 means d1 is a dog." + "time": "2:52", + "text": "Now a computer would have to formally represent these entities by using symbols. So dog, d1 means d1 is a dog.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "3:04": "Boy, b1 means b1 refers to a boy etc. And also represents the chasing action as a predicate. So, chasing is a predicate here with three arguments, d1, b1, and p1. Which is playground. So this formal rendition of the semantics of this sentence. Once we reach that level of understanding, we might also make inferences. For example, if we assume there's a rule that says if someone's being chased then the person can get scared, then we can infer this boy might be scared. This is the inferred meaning, based on additional knowledge.
And finally, we might even further infer what this sentence is requesting, or why the person who say it in a sentence, is saying the sentence. And so, this has to do with purpose of saying the sentence. This is called speech act analysis or pragmatic analysis. Which first to the use of language. So, in this case a person saying this may be reminding another person to bring back the dog." + "time": "3:04", + "text": "Boy, b1 means b1 refers to a boy etc. And also represents the chasing action as a predicate. So, chasing is a predicate here with three arguments, d1, b1, and p1. Which is playground. So this is a formal rendition of the semantics of this sentence. Once we reach that level of understanding, we might also make inferences. For example, if we assume there's a rule that says if someone's being chased then the person can get scared, then we can infer this boy might be scared. This is the inferred meaning, based on additional knowledge. And finally, we might even further infer what this sentence is requesting, or why the person who says the sentence is saying it. And so, this has to do with the purpose of saying the sentence. This is called speech act analysis or pragmatic analysis. Which refers to the use of language. So, in this case a person saying this may be reminding another person to bring back the dog.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "4:35": "So this means when saying a sentence, the person actually takes an action. So the action here is to make a request." + "time": "4:35", + "text": "So this means when saying a sentence, the person actually takes an action. So the action here is to make a request.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "4:46": "Now, this slide clearly shows that in order to really understand a sentence there are a lot of things that a computer has to do.
Now, in general it's very hard for a computer will do everything, especially if you would want it to do everything correctly. This is very difficult." + "time": "4:46", + "text": "Now, this slide clearly shows that in order to really understand a sentence there are a lot of things that a computer has to do. Now, in general it's very hard for a computer to do everything, especially if you would want it to do everything correctly. This is very difficult.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "5:08": "Now, the main reason why natural language processing is very difficult, it's because it's designed it will make human communications efficient." + "time": "5:08", + "text": "Now, the main reason why natural language processing is very difficult, it's because it's designed to make human communications efficient.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "5:15": "As a result, for example, with only a lot of common sense knowledge." + "time": "5:15", + "text": "As a result, for example, we omit a lot of common sense knowledge.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "5:21": "Because we assume all of us have this knowledge, there's no need to encode this knowledge." + "time": "5:21", + "text": "Because we assume all of us have this knowledge, there's no need to encode this knowledge.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "5:29": "That makes communication efficient." + "time": "5:29", + "text": "That makes communication efficient.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "5:32": "We also keep a lot of ambiguities, like, ambiguities of words."
+ "time": "5:32", + "text": "We also keep a lot of ambiguities, like, ambiguities of words.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "5:39": "And this is again, because we assume we have the ability to disambiguate the word. So, there's no problem with having the same word to mean possibly different things in different context." + "time": "5:39", + "text": "And this is again, because we assume we have the ability to disambiguate the word. So, there's no problem with having the same word to mean possibly different things in different contexts.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "5:52": "Yet for a computer this would be very difficult because a computer does not have the common sense knowledge that we do. So the computer will be confused indeed. And this makes it hard for natural language processing. Indeed, it makes it very hard for every step in the slide that I showed you earlier." + "time": "5:52", + "text": "Yet for a computer this would be very difficult because a computer does not have the common sense knowledge that we do. So the computer will be confused indeed. And this makes it hard for natural language processing. Indeed, it makes it very hard for every step in the slide that I showed you earlier.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "6:16": "Ambiguity is a main killer. Meaning that in every step there are multiple choices, and the computer would have to decide whats the right choice and that decision can be very difficult as you will see also in a moment." + "time": "6:16", + "text": "Ambiguity is a main killer.
Meaning that in every step there are multiple choices, and the computer would have to decide what's the right choice and that decision can be very difficult as you will see also in a moment.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "6:31": "And in general, we need common sense reasoning in order to fully understand the natural language. And computers today don't yet have that. That's why it's very hard for computers to precisely understand the natural language at this point." + "time": "6:31", + "text": "And in general, we need common sense reasoning in order to fully understand the natural language. And computers today don't yet have that. That's why it's very hard for computers to precisely understand the natural language at this point.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "6:48": "So here are some specific examples of challenges. Think about the world-level ambiguity. A word like design can be a noun or a verb, so we've got ambiguous part of speech tag." + "time": "6:48", + "text": "So here are some specific examples of challenges. Think about the word-level ambiguity. A word like design can be a noun or a verb, so we've got an ambiguous part of speech tag.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "7:00": "Root also has multiple meanings, it can be of mathematical sense, like in the square of, or can be root of a plant."
+ "time": "7:00", + "text": "Root also has multiple meanings, it can be of the mathematical sense, like in the square root of, or can be the root of a plant.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "7:12": "Syntactic ambiguity refers to different interpretations" + "time": "7:12", + "text": "Syntactic ambiguity refers to different interpretations", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "7:19": "of a sentence in terms structures. So for example, natural language processing can actually be interpreted in two ways." + "time": "7:19", + "text": "of a sentence in terms of structures. So for example, natural language processing can actually be interpreted in two ways.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "7:28": "So one is the ordinary meaning that we will be getting as we're talking about this topic. So, it's processing of natural language. But there's is also another possible interpretation which is to say language processing is natural." + "time": "7:28", + "text": "So one is the ordinary meaning that we will be getting as we're talking about this topic. So, it's processing of natural language. But there is also another possible interpretation which is to say language processing is natural.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "7:48": "Now we don't generally have this problem, but imagine for the computer to determine the structure, the computer would have to make a choice between the two."
+ "time": "7:48", + "text": "Now we don't generally have this problem, but imagine for the computer to determine the structure, the computer would have to make a choice between the two.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "7:59": "Another classic example is a man saw a boy with a telescope. And this ambiguity lies in the question who had the telescope? This is called a prepositional phrase attachment ambiguity." + "time": "7:59", + "text": "Another classic example is a man saw a boy with a telescope. And this ambiguity lies in the question of who had the telescope. This is called a prepositional phrase attachment ambiguity.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "8:14": "Meaning where to attach this prepositional phrase with the telescope. Should it modify the boy? Or should it be modifying, saw, the verb. Another problem is anaphora resolution. In John persuaded Bill to buy a TV for himself. Does himself refer to John or Bill?" + "time": "8:14", + "text": "Meaning where to attach this prepositional phrase with the telescope. Should it modify the boy? Or should it be modifying saw, the verb? Another problem is anaphora resolution. In John persuaded Bill to buy a TV for himself. Does himself refer to John or Bill?", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "8:39": "Presupposition is another difficulty.
He has quit smoking implies that he smoked before, and we need to have such a knowledge in order to understand the languages." + "time": "8:39", + "text": "Presupposition is another difficulty. He has quit smoking implies that he smoked before, and we need to have such knowledge in order to understand the language.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "8:52": "Because of these problems, the state of the art natural language processing techniques can not do anything perfectly. Even for the simplest part of speech tagging, we still can not solve the whole problem. The accuracy that are listed here, which is about 97%, was just taken from some studies earlier." + "time": "8:52", + "text": "Because of these problems, the state of the art natural language processing techniques cannot do anything perfectly. Even for the simplest part of speech tagging, we still cannot solve the whole problem. The accuracy listed here, which is about 97%, was just taken from some studies earlier.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "9:17": "And these studies obviously have to be using particular data sets so the numbers here are not really meaningful if you take it out of the context of the data set that are used for evaluation. But I show these numbers mainly to give you some sense about the accuracy, or how well we can do things like this. It doesn't mean any data set accuracy would be precisely 97%. But, in general, we can do parsing speech tagging fairly well although not perfect." + "time": "9:17", + "text": "And these studies obviously have to be using particular data sets so the numbers here are not really meaningful if you take them out of the context of the data sets that are used for evaluation. But I show these numbers mainly to give you some sense about the accuracy, or how well we can do things like this. It doesn't mean any data set accuracy would be precisely 97%.
But, in general, we can do part-of-speech tagging fairly well although not perfect.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "9:53": "Parsing would be more difficult, but for partial parsing, meaning to get some phrases correct, we can probably achieve 90% or better accuracy." + "time": "9:53", + "text": "Parsing would be more difficult, but for partial parsing, meaning to get some phrases correct, we can probably achieve 90% or better accuracy.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "10:06": "But to get the complete parse tree correctly is still very, very difficult." + "time": "10:06", + "text": "But to get the complete parse tree correctly is still very, very difficult.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "10:13": "For semantic analysis, we can also do some aspects of semantic analysis, particularly, extraction of entities and relations. For example, recognizing this is the person, that's a location, and this person and that person met in some place etc. We can also do word sense to some extent." + "time": "10:13", + "text": "For semantic analysis, we can also do some aspects of semantic analysis, particularly, extraction of entities and relations. For example, recognizing this is the person, that's a location, and this person and that person met in some place etc. We can also do word sense disambiguation to some extent.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "10:38": "The occurrence of root in this sentence refers to the mathematical sense etc.
Sentiment analysis is another aspect of semantic analysis that we can do." + "time": "10:38", + "text": "The occurrence of root in this sentence refers to the mathematical sense etc. Sentiment analysis is another aspect of semantic analysis that we can do.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "10:50": "That means we can tag the senses as generally positive when it's talking about the product or talking about the person." + "time": "10:50", + "text": "That means we can tag the sentences as generally positive when it's talking about the product or talking about the person.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "11:02": "Inference, however, is very hard, and we generally cannot do that for any big domain and if it's only feasible for a very limited domain. And that's a generally difficult problem in artificial intelligence. Speech act analysis is also very difficult and we can only do this probably for very specialized cases. And with a lot of help from humans to annotate enough data for the computers to learn from." + "time": "11:02", + "text": "Inference, however, is very hard, and we generally cannot do that for any big domain and it's only feasible for a very limited domain. And that's a generally difficult problem in artificial intelligence. Speech act analysis is also very difficult and we can only do this probably for very specialized cases. And with a lot of help from humans to annotate enough data for the computers to learn from.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "11:36": "So the slide also shows that computers are far from being able to understand natural language precisely. And that also explains why the text mining problem is difficult. Because we cannot rely on mechanical approaches or computational methods to understand the language precisely. Therefore, we have to use whatever we have today.
A particular statistical machine learning method of statistical analysis methods to try to get as much meaning out from the text as possible. And, later you will see that there are actually" + "time": "11:36", + "text": "So the slide also shows that computers are far from being able to understand natural language precisely. And that also explains why the text mining problem is difficult. Because we cannot rely on mechanical approaches or computational methods to understand the language precisely. Therefore, we have to use whatever we have today. In particular, statistical machine learning methods or statistical analysis methods to try to get as much meaning out from the text as possible. And, later you will see that there are actually", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" }, { - "12:20": "many such algorithms that can indeed extract interesting model from text even though we cannot really fully understand it. Meaning of all the natural language sentences precisely. [MUSIC]" + "time": "12:20", + "text": "many such algorithms that can indeed extract interesting models from text even though we cannot really fully understand the meaning of all the natural language sentences precisely. [MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/qNSPo/1-3-natural-language-content-analysis-part-1" } ] }, { "1-4-natural-language-content-analysis-part-2": [ { - "0:00": "[SOUND]" + "time": "0:00", + "text": "[SOUND]", + "url": "https://www.coursera.org/learn/text-mining/lecture/07UZq/1-4-natural-language-content-analysis-part-2" }, { - "0:10": "So here are some specific examples of what we can't do today and part of speech tagging is still not easy to do 100% correctly. So in the example, he turned off the highway verses he turned off the fan and the two offs actually have somewhat a differentness in their active categories and also its very difficult to get a complete the parsing correct.
Again, the example, a man saw a boy with a telescope can actually be very difficult to parse depending on the context. Precise deep semantic analysis is also very hard. For example, to define the meaning of own, precisely is very difficult in the sentence, like John owns a restaurant. So the state of the off can be summarized as follows. Robust and general NLP tends to be shallow while a deep understanding does not scale up." + "time": "0:10", + "text": "So here are some specific examples of what we can't do today and part of speech tagging is still not easy to do 100% correctly. So in the example, he turned off the highway versus he turned off the fan, the two offs actually have somewhat different syntactic categories and also it's very difficult to get the complete parsing correct. Again, the example, a man saw a boy with a telescope can actually be very difficult to parse depending on the context. Precise deep semantic analysis is also very hard. For example, to define the meaning of own precisely is very difficult in a sentence like John owns a restaurant. So the state of the art can be summarized as follows. Robust and general NLP tends to be shallow while a deep understanding does not scale up.", + "url": "https://www.coursera.org/learn/text-mining/lecture/07UZq/1-4-natural-language-content-analysis-part-2" }, { - "1:12": "For this reason in this course, the techniques that we cover are in general, shallow techniques for analyzing text data and mining text data and they are generally based on statistical analysis. So there are robust and general and they are in" + "time": "1:12", + "text": "For this reason in this course, the techniques that we cover are in general, shallow techniques for analyzing text data and mining text data and they are generally based on statistical analysis.
So they are robust and general and they are in", + "url": "https://www.coursera.org/learn/text-mining/lecture/07UZq/1-4-natural-language-content-analysis-part-2" }, { - "1:36": "the in category of shallow analysis. So such techniques have the advantage of being able to be applied to any text data in any natural about any topic. But the downside is that, they don't give use a deeper understanding of text. For that, we have to rely on deeper natural language analysis." + "time": "1:36", + "text": "in the category of shallow analysis. So such techniques have the advantage of being able to be applied to any text data in any natural language about any topic. But the downside is that they don't give us a deeper understanding of text. For that, we have to rely on deeper natural language analysis.", + "url": "https://www.coursera.org/learn/text-mining/lecture/07UZq/1-4-natural-language-content-analysis-part-2" }, { - "2:00": "That typically would require a human effort to annotate a lot of examples of analysis that would like to do and then computers can use machine learning techniques and learn from these training examples to do the task. So in practical applications, we generally combine the two kinds of techniques with the general statistical and methods as a backbone as the basis. These can be applied to any text data. And on top of that, we're going to use humans to, and you take more data and to use supervised machine learning to do some tasks as well as we can, especially for those important tasks to bring humans into the loop to analyze text data more precisely. But this course will cover the general statistical approaches that generally, don't require much human effort. So they're practically, more useful that some of the deeper analysis techniques that require a lot of human effort to annotate the text today. So to summarize, the main points we take are first NLP is the foundation for text mining.
So obviously, the better we can understand the text data, the better we can do text mining." + "time": "2:00", + "text": "That typically would require a human effort to annotate a lot of examples of the analysis that we would like to do and then computers can use machine learning techniques and learn from these training examples to do the task. So in practical applications, we generally combine the two kinds of techniques with the general statistical methods as a backbone, as the basis. These can be applied to any text data. And on top of that, we're going to use humans to annotate more data and to use supervised machine learning to do some tasks as well as we can, especially for those important tasks, to bring humans into the loop to analyze text data more precisely. But this course will cover the general statistical approaches that generally don't require much human effort. So they're practically more useful than some of the deeper analysis techniques that require a lot of human effort to annotate the text data. So to summarize, the main points to take away are: first, NLP is the foundation for text mining. So obviously, the better we can understand the text data, the better we can do text mining.", + "url": "https://www.coursera.org/learn/text-mining/lecture/07UZq/1-4-natural-language-content-analysis-part-2"
[MUSIC]" + "time": "3:30", + "text": "Computers today are far from being able to understand the natural language. Deep NLP requires common sense knowledge and inferences. Thus, it only works for very limited domains and is not feasible for large scale text mining. Shallow NLP based on statistical methods can be done in large scale and is the main topic of this course and they are generally applicable to a lot of applications. They are, in some sense, also more useful techniques. In practice, we use statistical NLP as the basis and we'll have humans for help as needed in various ways. [MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/07UZq/1-4-natural-language-content-analysis-part-2" } ] }, { "1-5-text-representation-part-1": [ { - "0:06": "This lecture is about the textual representation. In this lecture, we are going to discuss textual representation, and discuss how natural language processing can allow us to represent text in many different ways. Let's take a look at this example sentence again. We can represent this sentence in many different ways. First, we can always represent such a sentence as a string of characters. This is true for all the languages when we store them in the computer. When we store a natural language sentence as a string of characters, we have perhaps the most general way of representing text since we always use this approach to represent any text data. But unfortunately, using such a representation will not help us to do semantic analysis, which is often needed for many applications of text mining. The reason is because we're not even recognizing words. So as a string, we're going to keep all the spaces and these ASCII symbols. We can perhaps count what's the most frequent character in English text, or the correlation between those characters, but we can't really analyze semantics. Yet, this is the most general way of representing text because we can use this to represent any natural language text.
If we try to do a little bit more natural language processing by doing word segmentation, then we can obtain a representation of the same text, but in the form of a sequence of words. So here we see that we can identify words like a dog is chasing etc. Now with this level of representation, we certainly can do a lot of things, and this is mainly because words are the basic units of human communication in natural language, so they are very powerful. By identifying words, we can for example easily count what are the most frequent words in this document or in the whole collection etc. These words can be used to form topics when we combine related words together, and some words are positive, some words negative, so we can also do sentiment analysis. So representing text data as a sequence of words opens up a lot of interesting analysis possibilities. However, this level of representation is slightly less general than string of characters because in some languages such as Chinese, it's actually not that easy to identify all the word boundaries because in such a language, you see text as a sequence of characters with no space in between. So you'll have to rely on some special techniques to identify words. In such a language, of course then, we might make mistakes in segmenting words. So the sequence of words representation is not as robust as string of characters. But in English, it's very easy to obtain this level of representation, so we can do that all the time." + "time": "0:06", + "text": "This lecture is about the textual representation. In this lecture, we are going to discuss textual representation, and discuss how natural language processing can allow us to represent text in many different ways. Let's take a look at this example sentence again. We can represent this sentence in many different ways. First, we can always represent such a sentence as a string of characters. This is true for all the languages when we store them in the computer. 
When we store a natural language sentence as a string of characters, we have perhaps the most general way of representing text since we always use this approach to represent any text data. But unfortunately, using such a representation will not help us to do semantic analysis, which is often needed for many applications of text mining. The reason is because we're not even recognizing words. So as a string, we're going to keep all the spaces and these ASCII symbols. We can perhaps count what's the most frequent character in English text, or the correlation between those characters, but we can't really analyze semantics. Yet, this is the most general way of representing text because we can use this to represent any natural language text. If we try to do a little bit more natural language processing by doing word segmentation, then we can obtain a representation of the same text, but in the form of a sequence of words. So here we see that we can identify words like a dog is chasing etc. Now with this level of representation, we certainly can do a lot of things, and this is mainly because words are the basic units of human communication in natural language, so they are very powerful. By identifying words, we can for example easily count what are the most frequent words in this document or in the whole collection etc. These words can be used to form topics when we combine related words together, and some words are positive, some words negative, so we can also do sentiment analysis. So representing text data as a sequence of words opens up a lot of interesting analysis possibilities. However, this level of representation is slightly less general than string of characters because in some languages such as Chinese, it's actually not that easy to identify all the word boundaries because in such a language, you see text as a sequence of characters with no space in between. So you'll have to rely on some special techniques to identify words. 
In such a language, of course then, we might make mistakes in segmenting words. So the sequence of words representation is not as robust as string of characters. But in English, it's very easy to obtain this level of representation, so we can do that all the time.", + "url": "https://www.coursera.org/learn/text-mining/lecture/6T38K/1-5-text-representation-part-1" }, { - "4:00": "Now, if we go further to do natural language processing, we can add part-of-speech tags. Now once we do that, we can count, for example, the most frequent nouns or what kind of nouns are associated with what kind of verbs etc. So this opens up a little bit more interesting opportunities for further analysis. Note that I use a plus sign here because by representing text as a sequence of part-of-speech tags, we don't necessarily replace the original word sequence. Instead, we add this as an additional way of representing text data, so that now the data is represented as both a sequence of words and a sequence of part-of-speech tags. This enriches the representation of text data, and thus also enables more interesting analysis." + "time": "4:00", + "text": "Now, if we go further to do natural language processing, we can add part-of-speech tags. Now once we do that, we can count, for example, the most frequent nouns or what kind of nouns are associated with what kind of verbs etc. So this opens up a little bit more interesting opportunities for further analysis. Note that I use a plus sign here because by representing text as a sequence of part-of-speech tags, we don't necessarily replace the original word sequence. Instead, we add this as an additional way of representing text data, so that now the data is represented as both a sequence of words and a sequence of part-of-speech tags.
This enriches the representation of text data, and thus also enables more interesting analysis.", + "url": "https://www.coursera.org/learn/text-mining/lecture/6T38K/1-5-text-representation-part-1" }, { - "5:00": "If we go further, then we'll be parsing the sentence to obtain a syntactic structure. Now this of course further opens up more interesting analysis of, for example, the writing styles or correcting grammar mistakes. If we go further for semantic analysis, then we might be able to recognize dog as an animal, and we also can recognize a boy as a person, and playground as a location. We can further analyze their relations, for example, dog is chasing the boy and the boy is on the playground. Now this will add more entities and relations through entity-relation recognition. At this level, then we can do even more interesting things. For example, now we can count easily the most frequent person mentioned in this whole collection of news articles, or whenever you mention this person, you also tend to see mentioning of another person etc. So this is a very useful representation, and it's also related to the knowledge graph that some of you may have heard of that Google is doing as a more semantic way of representing text data. However, it's also less robust than sequence of words or even syntactical analysis because it's not always easy to identify all the entities with the right types, and we might make mistakes, and relations are even harder to find, and we might make mistakes. So this makes this level of representation less robust, yet it's very useful. Now if we move further to logical representation, then we can have predicates and even inference rules. With inference rules, we can infer interesting derived facts from the text, so that's very useful. But unfortunately, this level of representation is even less robust and we can make mistakes and we can't do that all the time for all kinds of sentences.
Finally, speech acts would add yet another level of representation of the intent of saying this sentence. So in this case, it might be a request. So knowing that would allow us to analyze even more interesting things about the observer or the author of this sentence. What's the intention of saying that? In what scenarios? What kind of actions would be made? So this is another level of analysis that would be very interesting. So this picture shows that if we move down, we generally see more sophisticated natural language processing techniques to be used. Unfortunately, such techniques would require more human effort, and they are less accurate. That means there are mistakes. So if we analyze text at the levels that represent deeper analysis of language, then we have to tolerate the errors. So that also means it's still necessary to combine such deep analysis with shallow analysis based on, for example, sequence of words. On the right side, you'll see the arrow points down to indicate that, as we go down, our representation of text is closer to the knowledge representation in our mind that's needed for solving a lot of problems. Now this is desirable because if we can represent text at the level of knowledge, we can easily extract the knowledge. That's the purpose of text mining. So there is a trade-off here between doing a deeper analysis that might have errors but would give us direct knowledge that can be extracted from text, and doing shallow analysis, which is more robust but wouldn't actually give us the necessary deeper representation of knowledge. I should also say that text data are generated by humans and are meant to be consumed by humans. So as a result, in text data analysis and text mining, humans play a very important role; they are always in the loop.
So in that sense, it's okay that computers may not be able to compute an accurate representation of text data, and the patterns that are extracted from text data can be interpreted by humans, and humans can guide the computers to do more accurate analysis by annotating more data, by providing features to guide machine learning programs to make them work more effectively." + "time": "5:00", + "text": "If we go further, then we'll be parsing the sentence to obtain a syntactic structure. Now this of course further opens up more interesting analysis of, for example, the writing styles or correcting grammar mistakes. If we go further for semantic analysis, then we might be able to recognize dog as an animal, and we also can recognize a boy as a person, and playground as a location. We can further analyze their relations, for example, dog is chasing the boy and the boy is on the playground. Now this will add more entities and relations through entity-relation recognition. At this level, then we can do even more interesting things. For example, now we can count easily the most frequent person mentioned in this whole collection of news articles, or whenever you mention this person, you also tend to see mentioning of another person etc. So this is a very useful representation, and it's also related to the knowledge graph that some of you may have heard of that Google is doing as a more semantic way of representing text data. However, it's also less robust than sequence of words or even syntactical analysis because it's not always easy to identify all the entities with the right types, and we might make mistakes, and relations are even harder to find, and we might make mistakes. So this makes this level of representation less robust, yet it's very useful. Now if we move further to logical representation, then we can have predicates and even inference rules. With inference rules, we can infer interesting derived facts from the text, so that's very useful.
But unfortunately, this level of representation is even less robust and we can make mistakes and we can't do that all the time for all kinds of sentences. Finally, speech acts would add yet another level of representation of the intent of saying this sentence. So in this case, it might be a request. So knowing that would allow us to analyze even more interesting things about the observer or the author of this sentence. What's the intention of saying that? In what scenarios? What kind of actions would be made? So this is another level of analysis that would be very interesting. So this picture shows that if we move down, we generally see more sophisticated natural language processing techniques to be used. Unfortunately, such techniques would require more human effort, and they are less accurate. That means there are mistakes. So if we analyze text at the levels that represent deeper analysis of language, then we have to tolerate the errors. So that also means it's still necessary to combine such deep analysis with shallow analysis based on, for example, sequence of words. On the right side, you'll see the arrow points down to indicate that, as we go down, our representation of text is closer to the knowledge representation in our mind that's needed for solving a lot of problems. Now this is desirable because if we can represent text at the level of knowledge, we can easily extract the knowledge. That's the purpose of text mining. So there is a trade-off here between doing a deeper analysis that might have errors but would give us direct knowledge that can be extracted from text, and doing shallow analysis, which is more robust but wouldn't actually give us the necessary deeper representation of knowledge. I should also say that text data are generated by humans and are meant to be consumed by humans. So as a result, in text data analysis and text mining, humans play a very important role; they are always in the loop.
Meaning that we should optimize the collaboration of humans and computers. So in that sense, it's okay that computers may not be able to compute an accurate representation of text data, and the patterns that are extracted from text data can be interpreted by humans, and humans can guide the computers to do more accurate analysis by annotating more data, by providing features to guide machine learning programs to make them work more effectively.", + "url": "https://www.coursera.org/learn/text-mining/lecture/6T38K/1-5-text-representation-part-1" } ] }, { "1-6-text-representation-part-2": [ { - "0:00": "[SOUND]. So, as we explained, different text representations tend to enable different analyses." + "time": "0:00", + "text": "[SOUND]. So, as we explained, different text representations tend to enable different analyses.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PK3Gd/1-6-text-representation-part-2" }, { - "0:16": "In particular, we can gradually add more and more deep analysis results to represent text data. And that would open up more interesting representation" + "time": "0:16", + "text": "In particular, we can gradually add more and more deep analysis results to represent text data. And that would open up more interesting representation", + "url": "https://www.coursera.org/learn/text-mining/lecture/PK3Gd/1-6-text-representation-part-2" }, { - "0:29": "opportunities and also analysis capacities. So, this table summarizes what we have just seen. So the first column shows the text representation.
The second column indicates the generality of such a representation, meaning whether we can do this kind of representation accurately for all the text data or only some of them. And the third column shows the enabled analysis techniques." + "time": "0:29", + "text": "opportunities and also analysis capacities. So, this table summarizes what we have just seen. So the first column shows the text representation. The second column indicates the generality of such a representation, meaning whether we can do this kind of representation accurately for all the text data or only some of them. And the third column shows the enabled analysis techniques.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PK3Gd/1-6-text-representation-part-2" }, { - "0:56": "And the final column shows some examples of applications that can be achieved through this level of representation. So let's take a look at them. So as a string, text can only be processed by string processing algorithms. It's very robust, it's general." + "time": "0:56", + "text": "And the final column shows some examples of applications that can be achieved through this level of representation. So let's take a look at them. So as a string, text can only be processed by string processing algorithms. It's very robust, it's general.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PK3Gd/1-6-text-representation-part-2" }, { - "1:15": "And there are still some interesting applications that can be done at this level. For example, compression of text doesn't necessarily need to know the word boundaries. Although knowing word boundaries might actually also help." + "time": "1:15", + "text": "And there are still some interesting applications that can be done at this level. For example, compression of text doesn't necessarily need to know the word boundaries. Although knowing word boundaries might actually also help.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PK3Gd/1-6-text-representation-part-2" }, { - "1:28": "Word-based representation is a very important level of representation. It's quite general and relatively robust, enabling a lot of analysis techniques, such as word relation analysis, topic analysis and sentiment analysis. And there are many applications that can be enabled by this kind of analysis. For example, thesaurus discovery has to do with discovering related words.
And topic and opinion related applications abound. For example, people might be interested in knowing the major topics covered in the collection of texts. And this can be the case in research literature. And scientists want to know what are the most important research topics today. Or customer service people might want to know what are the major complaints from their customers by mining their e-mail messages. And business intelligence people might be interested in understanding consumers' opinions about their products and the competitors' products to figure out what are the winning features of their products." + "time": "1:28", + "text": "Word-based representation is a very important level of representation. It's quite general and relatively robust, enabling a lot of analysis techniques, such as word relation analysis, topic analysis and sentiment analysis. And there are many applications that can be enabled by this kind of analysis. For example, thesaurus discovery has to do with discovering related words. And topic and opinion related applications abound. For example, people might be interested in knowing the major topics covered in the collection of texts. And this can be the case in research literature. And scientists want to know what are the most important research topics today. Or customer service people might want to know what are the major complaints from their customers by mining their e-mail messages. And business intelligence people might be interested in understanding consumers' opinions about their products and the competitors' products to figure out what are the winning features of their products.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PK3Gd/1-6-text-representation-part-2" }, { - "2:43": "And, in general, there are many applications that can be enabled by the representation at this level."
+ "time": "2:43", + "text": "And, in general, there are many applications that can be enabled by the representation at this level.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PK3Gd/1-6-text-representation-part-2" }, { - "2:53": "Now, moving down, we'll see we can gradually add additional representations. By adding syntactical structures, we can enable, of course, syntactical graph analysis. We can use graph mining algorithms to analyze syntactic graphs. And some applications are related to this kind of representation. For example, stylistic analysis generally requires syntactical structure representation." + "time": "2:53", + "text": "Now, moving down, we'll see we can gradually add additional representations. By adding syntactical structures, we can enable, of course, syntactical graph analysis. We can use graph mining algorithms to analyze syntactic graphs. And some applications are related to this kind of representation. For example, stylistic analysis generally requires syntactical structure representation.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PK3Gd/1-6-text-representation-part-2" }, { - "3:22": "We can also generate structure-based features. And those are features that might help us classify the text objects into different categories; by looking at the structures, sometimes the classification can be more accurate. For example, if you want to classify articles into" + "time": "3:22", + "text": "We can also generate structure-based features. And those are features that might help us classify the text objects into different categories; by looking at the structures, sometimes the classification can be more accurate. For example, if you want to classify articles into", + "url": "https://www.coursera.org/learn/text-mining/lecture/PK3Gd/1-6-text-representation-part-2" }, { - "3:45": "different categories corresponding to different authors.
If you want to figure out which of the k authors has actually written this article, then you generally need to look at the syntactic structures." + "time": "3:45", + "text": "different categories corresponding to different authors. If you want to figure out which of the k authors has actually written this article, then you generally need to look at the syntactic structures.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PK3Gd/1-6-text-representation-part-2" }, { - "4:03": "When we add entities and relations, then we can enable other techniques such as knowledge graph analysis, or information network analysis in general. And this analysis enables applications about entities." + "time": "4:03", + "text": "When we add entities and relations, then we can enable other techniques such as knowledge graph analysis, or information network analysis in general. And this analysis enables applications about entities.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PK3Gd/1-6-text-representation-part-2" }, { - "4:22": "For example, discovery of all the knowledge and opinions about real world entities." + "time": "4:22", + "text": "For example, discovery of all the knowledge and opinions about real world entities.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PK3Gd/1-6-text-representation-part-2" }, { - "4:28": "You can also use this level of representation to integrate everything about anything from scattered resources." + "time": "4:28", + "text": "You can also use this level of representation to integrate everything about anything from scattered resources.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PK3Gd/1-6-text-representation-part-2" }, { - "4:37": "Finally, when we add logical predicates, that would enable logic inference, of course. And this can be very useful for integrating analysis of scattered knowledge."
+ "time": "4:37", + "text": "Finally, when we add logical predicates, that would enable logic inference, of course. And this can be very useful for integrating analysis of scattered knowledge.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PK3Gd/1-6-text-representation-part-2" }, { - "4:50": "For example, we can also add ontology on top of the" + "time": "4:50", + "text": "For example, we can also add ontology on top of the", + "url": "https://www.coursera.org/learn/text-mining/lecture/PK3Gd/1-6-text-representation-part-2" }, { - "4:54": "extracted information from text, to make inferences." + "time": "4:54", + "text": "extracted information from text, to make inferences.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PK3Gd/1-6-text-representation-part-2" }, { - "4:59": "A good example of an application enabled by this level of representation is a knowledge assistant for biologists. This is a program that can help a biologist manage all the relevant knowledge from literature about a research problem such as understanding functions of genes." + "time": "4:59", + "text": "A good example of an application enabled by this level of representation is a knowledge assistant for biologists. This is a program that can help a biologist manage all the relevant knowledge from literature about a research problem such as understanding functions of genes.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PK3Gd/1-6-text-representation-part-2" }, { - "5:22": "And the computer can make inferences about some of the hypotheses that the biologist might be interested in. For example, whether a gene has a certain function, and then the intelligent program can read the literature to extract the relevant facts, doing compilation and information extraction. And then use a logic system to actually find the answers to the researcher's questions about what genes are related to what functions."
+ "time": "5:22", + "text": "And the computer can make inferences about some of the hypotheses that the biologist might be interested in. For example, whether a gene has a certain function, and then the intelligent program can read the literature to extract the relevant facts, doing compilation and information extraction. And then use a logic system to actually find the answers to the researcher's questions about what genes are related to what functions.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PK3Gd/1-6-text-representation-part-2" }, { - "5:57": "So in order to support this level of application we need to go as far as logical representation. Now, this course is covering techniques mainly based on word-based representation." + "time": "5:57", + "text": "So in order to support this level of application we need to go as far as logical representation. Now, this course is covering techniques mainly based on word-based representation.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PK3Gd/1-6-text-representation-part-2" }, { - "6:12": "And these techniques are general and robust and thus more widely used in various applications." + "time": "6:12", + "text": "And these techniques are general and robust and thus more widely used in various applications.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PK3Gd/1-6-text-representation-part-2" }, { - "6:21": "In fact, in virtually all the text mining applications you need this level of representation and the techniques that support analysis of text at this level."
+ "time": "6:21", + "text": "In fact, in virtually all the text mining applications you need this level of representation and the techniques that support analysis of text at this level.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PK3Gd/1-6-text-representation-part-2" }, { - "6:35": "But obviously all these other levels can be combined and should be combined in order to support the sophisticated applications. So to summarize, here are the major takeaway points. Text representation determines what kind of mining algorithms can be applied. And there are multiple ways to represent the text, strings, words, syntactic structures, entity-relation graphs, knowledge predicates, etc. And these different representations should in general be combined in real applications to the extent we can. For example, even if we cannot do accurate representations of syntactic structures, we can still use partial structures. And if we can recognize some entities, that would be great. So in general we want to do as much as we can." + "time": "6:35", + "text": "But obviously all these other levels can be combined and should be combined in order to support the sophisticated applications. So to summarize, here are the major takeaway points. Text representation determines what kind of mining algorithms can be applied. And there are multiple ways to represent the text, strings, words, syntactic structures, entity-relation graphs, knowledge predicates, etc. And these different representations should in general be combined in real applications to the extent we can. For example, even if we cannot do accurate representations of syntactic structures, we can still use partial structures. And if we can recognize some entities, that would be great.
So in general we want to do as much as we can.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PK3Gd/1-6-text-representation-part-2" }, { - "7:34": "And when different levels are combined together, we can enable a richer analysis, more powerful analysis." + "time": "7:34", + "text": "And when different levels are combined together, we can enable a richer analysis, more powerful analysis.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PK3Gd/1-6-text-representation-part-2" }, { - "7:42": "This course however focuses on word-based representation. Such techniques also have several advantages: first, they are general and robust, so they are applicable to any natural language. That's a big advantage over other approaches that rely on more fragile natural language processing techniques. Secondly, it does not require much manual effort, or sometimes, it does not require any manual effort. So that's, again, an important benefit, because that means that you can apply it directly to any application." + "time": "7:42", + "text": "This course however focuses on word-based representation. Such techniques also have several advantages: first, they are general and robust, so they are applicable to any natural language. That's a big advantage over other approaches that rely on more fragile natural language processing techniques. Secondly, it does not require much manual effort, or sometimes, it does not require any manual effort. So that's, again, an important benefit, because that means that you can apply it directly to any application.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PK3Gd/1-6-text-representation-part-2" }, { - "8:20": "Third, these techniques are actually surprisingly powerful and effective for many applications."
+ "time": "8:20", + "text": "Third, these techniques are actually surprisingly powerful and effective for many applications.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PK3Gd/1-6-text-representation-part-2" }, { - "8:29": "Although not for all, of course, as I just explained." + "time": "8:29", + "text": "Although not for all, of course, as I just explained.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PK3Gd/1-6-text-representation-part-2" }, { - "8:34": "Now they are very effective partly because the words are invented by humans as basic units for communication." + "time": "8:34", + "text": "Now they are very effective partly because the words are invented by humans as basic units for communication.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PK3Gd/1-6-text-representation-part-2" }, { - "8:45": "So they are actually quite sufficient for representing all kinds of semantics." + "time": "8:45", + "text": "So they are actually quite sufficient for representing all kinds of semantics.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PK3Gd/1-6-text-representation-part-2" }, { - "8:53": "So that makes this kind of word-based representation also powerful. And finally, such a word-based representation and the techniques enabled by such a representation can be combined with many other sophisticated approaches." + "time": "8:53", + "text": "So that makes this kind of word-based representation also powerful. And finally, such a word-based representation and the techniques enabled by such a representation can be combined with many other sophisticated approaches.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PK3Gd/1-6-text-representation-part-2" }, { - "9:14": "So they're not competing with each other.
[MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/PK3Gd/1-6-text-representation-part-2" } ] }, { "1-7-word-association-mining-and-analysis": [ { - "0:00": "[SOUND] This lecture is about word association mining and analysis. In this lecture, we're going to talk about how to mine associations of words from text. Now this is an example of knowledge about the natural language that we can mine from text data." + "time": "0:00", + "text": "[SOUND] This lecture is about word association mining and analysis. In this lecture, we're going to talk about how to mine associations of words from text. Now this is an example of knowledge about the natural language that we can mine from text data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "0:33": "Here's the outline. We're going to first talk about what is word association and then explain why discovering such relations is useful and finally we're going to talk about some general ideas about how to mine word associations. In general there are two word relations and these are quite basic." + "time": "0:33", + "text": "Here's the outline. We're going to first talk about what is word association and then explain why discovering such relations is useful and finally we're going to talk about some general ideas about how to mine word associations. In general there are two word relations and these are quite basic.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "0:56": "One is called a paradigmatic relation. The other is syntagmatic relation. A and B have paradigmatic relation if they can be substituted for each other. That means the two words that have paradigmatic relation would be in the same semantic class, or syntactic class. And we can in general replace one by the other without affecting the understanding of the sentence.
That means we would still have a valid sentence. For example, cat and dog, these two words have a paradigmatic relation because they are in the same class of animal. And in general, if you replace cat with dog in a sentence, the sentence would still be a valid sentence that you can make sense of." + "time": "0:56", + "text": "One is called a paradigmatic relation. The other is syntagmatic relation. A and B have paradigmatic relation if they can be substituted for each other. That means the two words that have paradigmatic relation would be in the same semantic class, or syntactic class. And we can in general replace one by the other without affecting the understanding of the sentence. That means we would still have a valid sentence. For example, cat and dog, these two words have a paradigmatic relation because they are in the same class of animal. And in general, if you replace cat with dog in a sentence, the sentence would still be a valid sentence that you can make sense of.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "1:58": "Similarly, Monday and Tuesday have a paradigmatic relation." + "time": "1:58", + "text": "Similarly, Monday and Tuesday have a paradigmatic relation.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "2:04": "The second kind of relation is called a syntagmatic relation." + "time": "2:04", + "text": "The second kind of relation is called a syntagmatic relation.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "2:10": "In this case, the two words that have this relation can be combined with each other. So A and B have syntagmatic relation if they can be combined with each other in a sentence, that means these two words are semantically related."
+ "time": "2:10", + "text": "In this case, the two words that have this relation, can be combined with each other. So A and B have syntagmatic relation if they can be combined with each other in a sentence, that means these two words are semantically related.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "2:30": "So for example, cat and sit are related because a cat can sit somewhere." + "time": "2:30", + "text": "So for example, cat and sit are related because a cat can sit somewhere.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "2:38": "Similarly, car and drive are related semantically and they can be combined with each other to convey meaning. However, in general, we can not replace cat with sit in a sentence or car with drive in the sentence to still get a valid sentence, meaning that if we do that, the sentence will become somewhat meaningless. So this is different from paradigmatic relation. And these two relations are in fact so fundamental that they can be" + "time": "2:38", + "text": "Similarly, car and drive are related semantically and they can be combined with each other to convey meaning. However, in general, we can not replace cat with sit in a sentence or car with drive in the sentence to still get a valid sentence, meaning that if we do that, the sentence will become somewhat meaningless. So this is different from paradigmatic relation. And these two relations are in fact so fundamental that they can be", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "3:17": "generalized to capture basic relations between units in arbitrary sequences. And definitely they can be generalized to describe relations of any items in a language. So, A and B don't have to be words and they can be phrases, for example." 
+ "time": "3:17", + "text": "generalized to capture basic relations between units in arbitrary sequences. And definitely they can be generalized to describe relations of any items in a language. So, A and B don't have to be words and they can be phrases, for example.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "3:37": "And they can even be more complex phrases than just a non-phrase. If you think about the general problem of the sequence mining then we can think about the units being and the sequence data. Then we think of paradigmatic relation as relations that are applied to units that tend to occur in a singular locations in a sentence, or in a sequence of data elements in general. So they occur in similar locations relative to the neighbors in the sequence. Syntagmatical relation on the other hand is related to co-occurrent elements that tend to show up in the same sequence." + "time": "3:37", + "text": "And they can even be more complex phrases than just a non-phrase. If you think about the general problem of the sequence mining then we can think about the units being and the sequence data. Then we think of paradigmatic relation as relations that are applied to units that tend to occur in a singular locations in a sentence, or in a sequence of data elements in general. So they occur in similar locations relative to the neighbors in the sequence. Syntagmatical relation on the other hand is related to co-occurrent elements that tend to show up in the same sequence.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "4:33": "So these two are complimentary and are basic relations of words. And we're interested in discovering them automatically from text data. Discovering such worded relations has many applications." + "time": "4:33", + "text": "So these two are complimentary and are basic relations of words. 
And we're interested in discovering them automatically from text data. Discovering such word relations has many applications.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "4:47": "First, such relations can be directly useful for improving accuracy of many NLP tasks, and this is because this is part of our knowledge about a language. So if you know these two words are synonyms, for example, then that can help a lot of tasks." + "time": "4:47", + "text": "First, such relations can be directly useful for improving accuracy of many NLP tasks, and this is because this is part of our knowledge about a language. So if you know these two words are synonyms, for example, then that can help a lot of tasks.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "5:05": "And grammar learning can also be done by using such techniques. Because if we can learn paradigmatic relations, then we form classes of words, syntactic classes for example. And if we learn syntagmatic relations, then we would be able to know the rules for putting together a larger expression based on component expressions. So we learn the structure and what can go with what else." + "time": "5:05", + "text": "And grammar learning can also be done by using such techniques. Because if we can learn paradigmatic relations, then we form classes of words, syntactic classes for example. And if we learn syntagmatic relations, then we would be able to know the rules for putting together a larger expression based on component expressions. So we learn the structure and what can go with what else.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "5:39": "Word relations can also be very useful for many applications in text retrieval and mining.
For example, in search and text retrieval, we can use word associations to modify a query, and this can be used to introduce additional related words into a query and make the query more effective." + "time": "5:39", + "text": "Word relations can also be very useful for many applications in text retrieval and mining. For example, in search and text retrieval, we can use word associations to modify a query, and this can be used to introduce additional related words into a query and make the query more effective.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "6:01": "It's often called query expansion." + "time": "6:01", + "text": "It's often called query expansion.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "6:05": "Or you can use related words to suggest related queries to the user to explore the information space." + "time": "6:05", + "text": "Or you can use related words to suggest related queries to the user to explore the information space.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "6:12": "Another application is to use word associations to automatically construct a topic map for browsing. We can have words as nodes and associations as edges. A user could navigate from one word to another to" + "time": "6:12", + "text": "Another application is to use word associations to automatically construct a topic map for browsing. We can have words as nodes and associations as edges. A user could navigate from one word to another to
+ "time": "6:28", + "text": "find information in the information space.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "6:33": "Finally, such word associations can also be used to compare and summarize opinions. For example, we might be interested in understanding positive and negative opinions about the iPhone 6. In order to do that, we can look at what words are most strongly associated with a feature word like battery in positive versus negative reviews. Such a syntagmatical relations would help us show the detailed opinions about the product." + "time": "6:33", + "text": "Finally, such word associations can also be used to compare and summarize opinions. For example, we might be interested in understanding positive and negative opinions about the iPhone 6. In order to do that, we can look at what words are most strongly associated with a feature word like battery in positive versus negative reviews. Such a syntagmatical relations would help us show the detailed opinions about the product.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "7:16": "So, how can we discover such associations automatically? Now, here are some intuitions about how to do that. Now let's first look at the paradigmatic relation." + "time": "7:16", + "text": "So, how can we discover such associations automatically? Now, here are some intuitions about how to do that. Now let's first look at the paradigmatic relation.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "7:29": "Here we essentially can take advantage of similar context." 
+ "time": "7:29", + "text": "Here we essentially can take advantage of similar context.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "7:34": "So here you see some simple sentences about cat and dog. You can see they generally occur in similar context, and that after all is the definition of paradigmatic relation." + "time": "7:34", + "text": "So here you see some simple sentences about cat and dog. You can see they generally occur in similar context, and that after all is the definition of paradigmatic relation.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "7:49": "On the right side you can kind of see I extracted expressly the context of cat and dog from this small sample of text data." + "time": "7:49", + "text": "On the right side you can kind of see I extracted expressly the context of cat and dog from this small sample of text data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "8:00": "I've taken away cat and dog from these sentences, so that you can see just the context." + "time": "8:00", + "text": "I've taken away cat and dog from these sentences, so that you can see just the context.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "8:08": "Now, of course we can have different perspectives to look at the context." + "time": "8:08", + "text": "Now, of course we can have different perspectives to look at the context.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "8:13": "For example, we can look at what words occur in the left part of this context. So we can call this left context. What words occur before we see cat or dog? So, you can see in this case, clearly dog and cat have similar left context." 
+ "time": "8:13", + "text": "For example, we can look at what words occur in the left part of this context. So we can call this left context. What words occur before we see cat or dog? So, you can see in this case, clearly dog and cat have similar left context.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "8:41": "You generally say his cat or my cat and you say also, my dog and his dog. So that makes them similar in the left context." + "time": "8:41", + "text": "You generally say his cat or my cat and you say also, my dog and his dog. So that makes them similar in the left context.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "8:53": "Similarly, if you look at the words that occur after cat and dog, which we can call right context, they are also very similar in this case. Of course, it's an extreme case, where you only see eats." + "time": "8:53", + "text": "Similarly, if you look at the words that occur after cat and dog, which we can call right context, they are also very similar in this case. Of course, it's an extreme case, where you only see eats.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "9:08": "And in general, you'll see many other words, of course, that can't follow cat and dog." + "time": "9:08", + "text": "And in general, you'll see many other words, of course, that can't follow cat and dog.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "9:17": "You can also even look at the general context. And that might include all the words in the sentence or in sentences around this word." + "time": "9:17", + "text": "You can also even look at the general context. 
And that might include all the words in the sentence or in sentences around this word.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "9:27": "And even in the general context, you also see similarity between the two words." + "time": "9:27", + "text": "And even in the general context, you also see similarity between the two words.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "9:35": "So this was just a suggestion that we can discover paradigmatic relation by looking at the similarity of context of words. So, for example, if we think about the following questions. How similar are context of cat and context of dog?" + "time": "9:35", + "text": "So this was just a suggestion that we can discover paradigmatic relation by looking at the similarity of context of words. So, for example, if we think about the following questions. How similar are context of cat and context of dog?", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "9:56": "In contrast, how similar are context of cat and context of computer?" + "time": "9:56", + "text": "In contrast, how similar are context of cat and context of computer?", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "10:02": "Now, intuitively, we would imagine the context of cat and the context of dog would be more similar than the context of cat and context of the computer. That means, in the first case the similarity value would be high," + "time": "10:02", + "text": "Now, intuitively, we would imagine the context of cat and the context of dog would be more similar than the context of cat and context of the computer.
That means, in the first case the similarity value would be high,", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "10:21": "between the context of cat and dog, whereas in the second, the similarity between context of cat and computer would be low because they do not have a paradigmatic relationship. And imagine what words occur after computer in general. It would be very different from what words occur after cat." + "time": "10:21", + "text": "between the context of cat and dog, whereas in the second, the similarity between context of cat and computer would be low because they do not have a paradigmatic relationship. And imagine what words occur after computer in general. It would be very different from what words occur after cat.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "10:46": "So this is the basic idea of discovering paradigmatic relations." + "time": "10:46", + "text": "So this is the basic idea of discovering paradigmatic relations.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "10:52": "What about the syntagmatic relation? Well, here we're going to explore the correlated occurrences, again based on the definition of syntagmatic relation." + "time": "10:52", + "text": "What about the syntagmatic relation? Well, here we're going to explore the correlated occurrences, again based on the definition of syntagmatic relation.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "11:03": "Here you see the same sample of text."
+ "time": "11:03", + "text": "Here you see the same sample of text.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "11:06": "But here we're interested in knowing what other words are correlated with the verb eats and what words can go with eats." + "time": "11:06", + "text": "But here we're interested in knowing what other words are correlated with the verb eats and what words can go with eats.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "11:16": "And if you look at the right side of this slide and you see, I've taken away the two words around eats." + "time": "11:16", + "text": "And if you look at the right side of this slide and you see, I've taken away the two words around eats.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "11:27": "I've taken away the word to its left and also the word to its right in each sentence." + "time": "11:27", + "text": "I've taken away the word to its left and also the word to its right in each sentence.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "11:35": "And then we ask the question, what words tend to occur to the left of eats?" + "time": "11:35", + "text": "And then we ask the question, what words tend to occur to the left of eats?", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "11:43": "And what words tend to occur to the right of eats?" 
+ "time": "11:43", + "text": "And what words tend to occur to the right of eats?", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "11:49": "Now thinking about this question would help us discover syntagmatic relations because syntagmatic relations essentially captures such correlations." + "time": "11:49", + "text": "Now thinking about this question would help us discover syntagmatic relations because syntagmatic relations essentially captures such correlations.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "12:03": "So the important question to ask for syntagmatical relation is, whenever eats occurs, what other words also tend to occur?" + "time": "12:03", + "text": "So the important question to ask for syntagmatical relation is, whenever eats occurs, what other words also tend to occur?", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "12:16": "So the question here has to do with whether there are some other words that tend to co-occur together with each. Meaning that whenever you see eats you tend to see the other words." + "time": "12:16", + "text": "So the question here has to do with whether there are some other words that tend to co-occur together with each. Meaning that whenever you see eats you tend to see the other words.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "12:29": "And if you don't see eats, probably, you don't see other words often either." + "time": "12:29", + "text": "And if you don't see eats, probably, you don't see other words often either.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "12:36": "So this intuition can help discover syntagmatic relations." 
+ "time": "12:36", + "text": "So this intuition can help discover syntagmatic relations.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "12:41": "Now again, consider example." + "time": "12:41", + "text": "Now again, consider example.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "12:44": "How helpful is occurrence of eats for predicting occurrence of meat?" + "time": "12:44", + "text": "How helpful is occurrence of eats for predicting occurrence of meat?", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "12:49": "Right. All right, so knowing whether eats occurs in a sentence would generally help us predict whether meat also occurs indeed. And if we see eats occur in the sentence, and that should increase the chance that meat would also occur." + "time": "12:49", + "text": "Right. All right, so knowing whether eats occurs in a sentence would generally help us predict whether meat also occurs indeed. And if we see eats occur in the sentence, and that should increase the chance that meat would also occur.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "13:08": "In contrast, if you look at the question in the bottom, how helpful is the occurrence of eats for predicting of occurrence of text?" + "time": "13:08", + "text": "In contrast, if you look at the question in the bottom, how helpful is the occurrence of eats for predicting of occurrence of text?", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "13:17": "Because eats and text are not really related, so knowing whether eats occurred in the sentence doesn't really help us predict the weather, text also occurs in the sentence. 
So this is in contrast to the question about eats and meat." + "time": "13:17", + "text": "Because eats and text are not really related, so knowing whether eats occurred in the sentence doesn't really help us predict whether text also occurs in the sentence. So this is in contrast to the question about eats and meat.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "13:35": "This also helps explain the intuition behind the methods for discovering syntagmatic relations. Namely, we need to capture the correlation between the occurrences of two words." + "time": "13:35", + "text": "This also helps explain the intuition behind the methods for discovering syntagmatic relations. Namely, we need to capture the correlation between the occurrences of two words.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "13:50": "So to summarize, the general ideas for discovering word associations are the following." + "time": "13:50", + "text": "So to summarize, the general ideas for discovering word associations are the following.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "13:56": "For paradigmatic relation, we represent each word by its context. And then compute its context similarity. We're going to assume the words that have high context similarity to have paradigmatic relation." + "time": "13:56", + "text": "For paradigmatic relation, we represent each word by its context. And then compute its context similarity.
We're going to assume the words that have high context similarity to have paradigmatic relation.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "14:14": "For syntagmatic relation, we will count how many times two words occur together in a context, which can be a sentence, a paragraph, or a document even. And we're going to compare their co-occurrences with their individual occurrences." + "time": "14:14", + "text": "For syntagmatic relation, we will count how many times two words occur together in a context, which can be a sentence, a paragraph, or a document even. And we're going to compare their co-occurrences with their individual occurrences.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" }, { - "14:33": "We're going to assume words with high co-occurrences but relatively low individual occurrences to have syntagmatic relations because they tend to occur together and they don't usually occur alone. Note that the paradigmatic relation and the syntagmatic relation are actually closely related in that paradigmatically related words tend to have syntagmatic relation with the same word. They tend to be associated with the same word, and that suggests that we can also do joint discovery of the two relations. So these general ideas can be implemented in many different ways. And the course won't cover all of them, but we will cover at least some of the methods that are effective for discovering these relations. [MUSIC]" + "time": "14:33", + "text": "We're going to assume words with high co-occurrences but relatively low individual occurrences to have syntagmatic relations because they tend to occur together and they don't usually occur alone. Note that the paradigmatic relation and the syntagmatic relation are actually closely related in that paradigmatically related words tend to have syntagmatic relation with the same word.
They tend to be associated with the same word, and that suggests that we can also do joint discovery of the two relations. So these general ideas can be implemented in many different ways. And the course won't cover all of them, but we will cover at least some of the methods that are effective for discovering these relations. [MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/Uufkz/1-7-word-association-mining-and-analysis" } ] }, { "1-8-paradigmatic-relation-discovery-part-1": [ { - "0:00": "[SOUND] This lecture is about Paradigmatic Relation Discovery. In this lecture we are going to talk about how to discover a particular kind of word association called a paradigmatical relation." + "time": "0:00", + "text": "[SOUND] This lecture is about Paradigmatic Relation Discovery. In this lecture we are going to talk about how to discover a particular kind of word association called a paradigmatical relation.", + "url": "https://www.coursera.org/learn/text-mining/lecture/wBtIp/1-8-paradigmatic-relation-discovery-part-1" }, { - "0:25": "By definition, two words are paradigmatically related if they share a similar context. Namely, they occur in similar positions in text. So naturally our idea of discovering such a relation is to look at the context of each word and then try to compute the similarity of those contexts." + "time": "0:25", + "text": "By definition, two words are paradigmatically related if they share a similar context. Namely, they occur in similar positions in text. So naturally our idea of discovering such a relation is to look at the context of each word and then try to compute the similarity of those contexts.", + "url": "https://www.coursera.org/learn/text-mining/lecture/wBtIp/1-8-paradigmatic-relation-discovery-part-1" }, { - "0:50": "So here is an example of the context of a word, cat."
+ "time": "0:50", + "text": "So here is an example of context of a word, cat.", + "url": "https://www.coursera.org/learn/text-mining/lecture/wBtIp/1-8-paradigmatic-relation-discovery-part-1" }, { - "0:55": "Here I have taken the word cat out of the context and you can see we are seeing some remaining words in the sentences that contain cat." + "time": "0:55", + "text": "Here I have taken the word cat out of the context and you can see we are seeing some remaining words in the sentences that contain cat.", + "url": "https://www.coursera.org/learn/text-mining/lecture/wBtIp/1-8-paradigmatic-relation-discovery-part-1" }, { - "1:09": "Now, we can do the same thing for another word like dog." + "time": "1:09", + "text": "Now, we can do the same thing for another word like dog.", + "url": "https://www.coursera.org/learn/text-mining/lecture/wBtIp/1-8-paradigmatic-relation-discovery-part-1" }, { - "1:13": "So in general we would like to capture such a context and then try to assess the similarity of the context of cat and the context of a word like dog." + "time": "1:13", + "text": "So in general we would like to capture such a context and then try to assess the similarity of the context of cat and the context of a word like dog.", + "url": "https://www.coursera.org/learn/text-mining/lecture/wBtIp/1-8-paradigmatic-relation-discovery-part-1" }, { - "1:24": "So now the question is how can we formally represent the context and then define the similarity function." + "time": "1:24", + "text": "So now the question is how can we formally represent the context and then define the similarity function.", + "url": "https://www.coursera.org/learn/text-mining/lecture/wBtIp/1-8-paradigmatic-relation-discovery-part-1" }, { - "1:33": "So first, we note that the context actually contains a lot of words. So, they can be regarded as a pseudo document, a imagine document, but there are also different ways of looking at the context. 
For example, we can look at the word that occurs before the word cat. We can call this context Left1 context. All right, so in this case you will see words like my, his, or big, a, the, et cetera. These are the words that can occur to the left of the word cat. So we say my cat, his cat, big cat, a cat, et cetera. Similarly, we can also collect the words that occur right after the word cat. We can call this context Right1, and here we see words like eats, ate, is, has, et cetera. Or, more generally, we can look at all the words in a window of text around the word cat. Here, let's say we can take a window of 8 words around the word cat. We call this context Window8." + "time": "1:33", + "text": "So first, we note that the context actually contains a lot of words. So, they can be regarded as a pseudo document, an imagined document, but there are also different ways of looking at the context. For example, we can look at the word that occurs before the word cat. We can call this context Left1 context. All right, so in this case you will see words like my, his, or big, a, the, et cetera. These are the words that can occur to the left of the word cat. So we say my cat, his cat, big cat, a cat, et cetera. Similarly, we can also collect the words that occur right after the word cat. We can call this context Right1, and here we see words like eats, ate, is, has, et cetera. Or, more generally, we can look at all the words in a window of text around the word cat. Here, let's say we can take a window of 8 words around the word cat. We call this context Window8.", + "url": "https://www.coursera.org/learn/text-mining/lecture/wBtIp/1-8-paradigmatic-relation-discovery-part-1" }, { - "2:49": "Now, of course, you can see all the words from left or from right, and so we'll have a bag of words in general to represent the context."
+ "time": "2:49", + "text": "Now, of course, you can see all the words from left or from right, and so we'll have a bag of words in general to represent the context.", + "url": "https://www.coursera.org/learn/text-mining/lecture/wBtIp/1-8-paradigmatic-relation-discovery-part-1" }, { - "3:01": "Now, such a word based representation would actually give us an interesting way to define the perspective of measuring the similarity. Because if you look at just the similarity of Left1, then we'll see words that share just the words in the left context, and we kind of ignored the other words that are also in the general context. So that gives us one perspective to measure the similarity, and similarly, if we only use the Right1 context, we will capture this narrative from another perspective. Using both the Left1 and Right1 of course would allow us to capture the similarity with even more strict criteria." + "time": "3:01", + "text": "Now, such a word based representation would actually give us an interesting way to define the perspective of measuring the similarity. Because if you look at just the similarity of Left1, then we'll see words that share just the words in the left context, and we kind of ignored the other words that are also in the general context. So that gives us one perspective to measure the similarity, and similarly, if we only use the Right1 context, we will capture this narrative from another perspective. Using both the Left1 and Right1 of course would allow us to capture the similarity with even more strict criteria.", + "url": "https://www.coursera.org/learn/text-mining/lecture/wBtIp/1-8-paradigmatic-relation-discovery-part-1" }, { - "3:49": "So in general, context may contain adjacent words, like eats and my, that you see here, or non-adjacent words, like Saturday, Tuesday, or some other words in the context." 
+ "time": "3:49", + "text": "So in general, context may contain adjacent words, like eats and my, that you see here, or non-adjacent words, like Saturday, Tuesday, or some other words in the context.", + "url": "https://www.coursera.org/learn/text-mining/lecture/wBtIp/1-8-paradigmatic-relation-discovery-part-1" }, { - "4:05": "And this flexibility also allows us to match the similarity in somewhat different ways. Sometimes this is useful, as we might want to capture similarity base on general content. That would give us loosely related paradigmatical relations. Whereas if you use only the words immediately to the left and to the right of the word, then you likely will capture words that are very much related by their syntactical categories and semantics." + "time": "4:05", + "text": "And this flexibility also allows us to match the similarity in somewhat different ways. Sometimes this is useful, as we might want to capture similarity base on general content. That would give us loosely related paradigmatical relations. Whereas if you use only the words immediately to the left and to the right of the word, then you likely will capture words that are very much related by their syntactical categories and semantics.", + "url": "https://www.coursera.org/learn/text-mining/lecture/wBtIp/1-8-paradigmatic-relation-discovery-part-1" }, { - "4:41": "So the general idea of discovering paradigmatical relations is to compute the similarity of context of two words. So here, for example, we can measure the similarity of cat and dog based on the similarity of their context. In general, we can combine all kinds of views of the context. And so the similarity function is, in general, a combination of similarities on different context. And of course, we can also assign weights to these different similarities to allow us to focus more on a particular kind of context. 
And this would be naturally application specific, but again, here the main idea for discovering pardigmatically related words is to computer the similarity of their context. So next let's see how we exactly compute these similarity functions. Now to answer this question, it is useful to think of bag of words representation as vectors in a vector space model." + "time": "4:41", + "text": "So the general idea of discovering paradigmatical relations is to compute the similarity of context of two words. So here, for example, we can measure the similarity of cat and dog based on the similarity of their context. In general, we can combine all kinds of views of the context. And so the similarity function is, in general, a combination of similarities on different context. And of course, we can also assign weights to these different similarities to allow us to focus more on a particular kind of context. And this would be naturally application specific, but again, here the main idea for discovering paradigmatically related words is to compute the similarity of their context. So next let's see how we exactly compute these similarity functions. Now to answer this question, it is useful to think of bag of words representation as vectors in a vector space model.", + "url": "https://www.coursera.org/learn/text-mining/lecture/wBtIp/1-8-paradigmatic-relation-discovery-part-1" }, { - "5:48": "Now those of you who have been familiar with information retrieval or textual retrieval techniques would realize that vector space model has been used frequently for modeling documents and queries for search. But here we also find it convenient to model the context of a word for paradigmatic relation discovery. So the idea of this approach is to view each word in our vocabulary as defining one dimension in a high dimensional space. So we have N words in total in the vocabulary, then we have N dimensions, as illustrated here.
And on the bottom, you can see a frequency vector representing a context, and here we see where eats occurred 5 times in this context, ate occurred 3 times, et cetera. So this vector can then be placed in this vector space model. So in general, we can represent a pseudo document or context of cat as one vector, d1, and another word, dog, might give us a different context, so d2. And then we can measure the similarity of these two vectors. So by viewing context in the vector space model, we convert the problem of paradigmatical relation discovery into the problem of computing the vectors and their similarity." + "time": "5:48", + "text": "Now those of you who have been familiar with information retrieval or textual retrieval techniques would realize that vector space model has been used frequently for modeling documents and queries for search. But here we also find it convenient to model the context of a word for paradigmatic relation discovery. So the idea of this approach is to view each word in our vocabulary as defining one dimension in a high dimensional space. So we have N words in total in the vocabulary, then we have N dimensions, as illustrated here. And on the bottom, you can see a frequency vector representing a context, and here we see where eats occurred 5 times in this context, ate occurred 3 times, et cetera. So this vector can then be placed in this vector space model. So in general, we can represent a pseudo document or context of cat as one vector, d1, and another word, dog, might give us a different context, so d2. And then we can measure the similarity of these two vectors. 
So by viewing context in the vector space model, we convert the problem of paradigmatical relation discovery into the problem of computing the vectors and their similarity.", + "url": "https://www.coursera.org/learn/text-mining/lecture/wBtIp/1-8-paradigmatic-relation-discovery-part-1" }, { - "7:20": "So the two questions that we have to address are first, how to compute each vector, and that is how to compute xi or yi." + "time": "7:20", + "text": "So the two questions that we have to address are first, how to compute each vector, and that is how to compute xi or yi.", + "url": "https://www.coursera.org/learn/text-mining/lecture/wBtIp/1-8-paradigmatic-relation-discovery-part-1" }, { - "7:31": "And the other question is how do you compute the similarity." + "time": "7:31", + "text": "And the other question is how do you compute the similarity.", + "url": "https://www.coursera.org/learn/text-mining/lecture/wBtIp/1-8-paradigmatic-relation-discovery-part-1" }, { - "7:35": "Now in general, there are many approaches that can be used to solve the problem, and most of them are developed for information retrieval. And they have been shown to work well for matching a query vector and a document vector. But we can adapt many of the ideas to compute a similarity of context documents for our purpose here. So let's first look at the one plausible approach, where we try to match the similarity of context based on the expected overlap of words, and we call this EOWC." + "time": "7:35", + "text": "Now in general, there are many approaches that can be used to solve the problem, and most of them are developed for information retrieval. And they have been shown to work well for matching a query vector and a document vector. But we can adapt many of the ideas to compute a similarity of context documents for our purpose here. 
So let's first look at the one plausible approach, where we try to match the similarity of context based on the expected overlap of words, and we call this EOWC.", + "time": "7:35", + "text": "Now in general, there are many approaches that can be used to solve the problem, and most of them are developed for information retrieval. And they have been shown to work well for matching a query vector and a document vector. But we can adapt many of the ideas to compute a similarity of context documents for our purpose here. So let's first look at one plausible approach, where we try to match the similarity of context based on the expected overlap of words, and we call this EOWC.", + "url": "https://www.coursera.org/learn/text-mining/lecture/wBtIp/1-8-paradigmatic-relation-discovery-part-1" }, { - "8:17": "So the idea here is to represent a context by a word vector where each word has a weight that's equal to the probability that a randomly picked word from this document vector, is this word. So in other words, xi is defined as the normalized account of word wi in the context, and this can be interpreted as the probability that you would actually pick this word from d1 if you randomly picked a word." + "time": "8:17", + "text": "So the idea here is to represent a context by a word vector where each word has a weight that's equal to the probability that a randomly picked word from this document vector, is this word. So in other words, xi is defined as the normalized count of word wi in the context, and this can be interpreted as the probability that you would actually pick this word from d1 if you randomly picked a word.", + "url": "https://www.coursera.org/learn/text-mining/lecture/wBtIp/1-8-paradigmatic-relation-discovery-part-1" }, { - "8:56": "Now, of course these xi's would sum to one because they are normalized frequencies," + "time": "8:56", + "text": "Now, of course these xi's would sum to one because they are normalized frequencies,", + "url": "https://www.coursera.org/learn/text-mining/lecture/wBtIp/1-8-paradigmatic-relation-discovery-part-1" }, { - "9:02": "and this means the vector is actually probability of the distribution over words."
+ "time": "9:02", + "text": "and this means the vector is actually probability of the distribution over words.", + "url": "https://www.coursera.org/learn/text-mining/lecture/wBtIp/1-8-paradigmatic-relation-discovery-part-1" }, { - "9:10": "So, the vector d2 can be also computed in the same way, and this would give us then two probability distributions representing two contexts." + "time": "9:10", + "text": "So, the vector d2 can be also computed in the same way, and this would give us then two probability distributions representing two contexts.", + "url": "https://www.coursera.org/learn/text-mining/lecture/wBtIp/1-8-paradigmatic-relation-discovery-part-1" }, { - "9:24": "So, that addresses the problem how to compute the vectors, and next let's see how we can define similarity in this approach. Well, here, we simply define the similarity as a dot product of two vectors, and this is defined as a sum of the products" + "time": "9:24", + "text": "So, that addresses the problem how to compute the vectors, and next let's see how we can define similarity in this approach. Well, here, we simply define the similarity as a dot product of two vectors, and this is defined as a sum of the products", + "url": "https://www.coursera.org/learn/text-mining/lecture/wBtIp/1-8-paradigmatic-relation-discovery-part-1" }, { - "9:41": "of the corresponding elements of the two vectors." + "time": "9:41", + "text": "of the corresponding elements of the two vectors.", + "url": "https://www.coursera.org/learn/text-mining/lecture/wBtIp/1-8-paradigmatic-relation-discovery-part-1" }, { - "9:46": "Now, it's interesting to see that this similarity function actually has a nice interpretation, and that is this. Dot product, in fact that gives us the probability that two randomly picked words from the two contexts are identical. That means if we try to pick a word from one context and try to pick another word from another context, we can then ask the question, are they identical? 
If the two contexts are very similar, then we should expect we frequently will see the two words picked from the two contexts are identical. If they are very different, then the chance of seeing identical words being picked from the two contexts would be small. So this intuitively makes sense, right, for measuring similarity of contexts." + "time": "9:46", + "text": "Now, it's interesting to see that this similarity function actually has a nice interpretation, and that is this. Dot product, in fact that gives us the probability that two randomly picked words from the two contexts are identical. That means if we try to pick a word from one context and try to pick another word from another context, we can then ask the question, are they identical? If the two contexts are very similar, then we should expect we frequently will see the two words picked from the two contexts are identical. If they are very different, then the chance of seeing identical words being picked from the two contexts would be small. So this intuitively makes sense, right, for measuring similarity of contexts.", + "url": "https://www.coursera.org/learn/text-mining/lecture/wBtIp/1-8-paradigmatic-relation-discovery-part-1" }, { - "10:41": "Now you might want to also take a look at the exact formulas and see why this can be interpreted as the probability that two randomly picked words are identical." + "time": "10:41", + "text": "Now you might want to also take a look at the exact formulas and see why this can be interpreted as the probability that two randomly picked words are identical.", + "url": "https://www.coursera.org/learn/text-mining/lecture/wBtIp/1-8-paradigmatic-relation-discovery-part-1" }, { - "10:57": "So if you just stare at the formula to check what's inside this sum, then you will see basically in each case it gives us the probability that we will see an overlap on a particular word, wi. 
And where xi gives us a probability that we will pick this particular word from d1, and yi gives us the probability of picking this word from d2. And when we pick the same word from the two contexts, then we have an identical pick, right so. That's one possible approach, EOWC, extracted overlap of words in context. Now as always, we would like to assess whether this approach it would work well. Now of course, ultimately we have to test the approach with real data and see if it gives us really semantically related words." + "time": "10:57", + "text": "So if you just stare at the formula to check what's inside this sum, then you will see basically in each case it gives us the probability that we will see an overlap on a particular word, wi. And where xi gives us a probability that we will pick this particular word from d1, and yi gives us the probability of picking this word from d2. And when we pick the same word from the two contexts, then we have an identical pick, right so. That's one possible approach, EOWC, expected overlap of words in context. Now as always, we would like to assess whether this approach would work well. Now of course, ultimately we have to test the approach with real data and see if it gives us really semantically related words.", + "url": "https://www.coursera.org/learn/text-mining/lecture/wBtIp/1-8-paradigmatic-relation-discovery-part-1" }, { - "11:57": "Really give us paradigmatical relations, but analytically we can also analyze this formula a little bit. So first, as I said, it does make sense, right, because this formula will give a higher score if there is more overlap between the two contexts. So that's exactly what we want. But if you analyze the formula more carefully, then you also see there might be some potential problems, and specifically there are two potential problems. First, it might favor matching one frequent term very well, over matching more distinct terms."
+ "time": "11:57", + "text": "Really give us paradigmatical relations, but analytically we can also analyze this formula a little bit. So first, as I said, it does make sense, right, because this formula will give a higher score if there is more overlap between the two contexts. So that's exactly what we want. But if you analyze the formula more carefully, then you also see there might be some potential problems, and specifically there are two potential problems. First, it might favor matching one frequent term very well, over matching more distinct terms.", + "url": "https://www.coursera.org/learn/text-mining/lecture/wBtIp/1-8-paradigmatic-relation-discovery-part-1" }, { - "12:36": "And that is because in the dot product, if one element has a high value and this element is shared by both contexts and it contributes a lot to the overall sum," + "time": "12:36", + "text": "And that is because in the dot product, if one element has a high value and this element is shared by both contexts and it contributes a lot to the overall sum,", + "url": "https://www.coursera.org/learn/text-mining/lecture/wBtIp/1-8-paradigmatic-relation-discovery-part-1" }, { - "12:51": "it might indeed make the score higher than in another case, where the two vectors actually have a lot of overlap in different terms. But each term has a relatively low frequency, so this may not be desirable. Of course, this might be desirable in some other cases. But in our case, we should intuitively prefer a case where we match more different terms in the context, so that we have more confidence in saying that the two words indeed occur in similar context. If you only rely on one term and that's a little bit questionable, it may not be robust." + "time": "12:51", + "text": "it might indeed make the score higher than in another case, where the two vectors actually have a lot of overlap in different terms. But each term has a relatively low frequency, so this may not be desirable. 
Of course, this might be desirable in some other cases. But in our case, we should intuitively prefer a case where we match more different terms in the context, so that we have more confidence in saying that the two words indeed occur in similar context. If you only rely on one term and that's a little bit questionable, it may not be robust." + "time": "12:51", + "text": "it might indeed make the score higher than in another case, where the two vectors actually have a lot of overlap in different terms. But each term has a relatively low frequency, so this may not be desirable. Of course, this might be desirable in some other cases. But in our case, we should intuitively prefer a case where we match more different terms in the context, so that we have more confidence in saying that the two words indeed occur in similar contexts. If you only rely on one term, that's a little bit questionable; it may not be robust.", + "url": "https://www.coursera.org/learn/text-mining/lecture/wBtIp/1-8-paradigmatic-relation-discovery-part-1" }, { - "13:34": "Now the second problem is that it treats every word equally, right. So if you match a word like the and it will be the same as matching a word like eats, but intuitively we know matching the isn't really surprising because the occurs everywhere. So matching the is not as such strong evidence as matching what a word like eats, which doesn't occur frequently. So this is another problem of this approach." + "time": "13:34", + "text": "Now the second problem is that it treats every word equally, right. So matching a word like the will be the same as matching a word like eats, but intuitively we know matching the isn't really surprising because the occurs everywhere. So matching the is not as strong evidence as matching a word like eats, which doesn't occur frequently. So this is another problem of this approach.", + "url": "https://www.coursera.org/learn/text-mining/lecture/wBtIp/1-8-paradigmatic-relation-discovery-part-1" }, { - "14:13": "In the next chapter we are going to talk about how to address these problems. [MUSIC]" + "time": "14:13", + "text": "In the next chapter we are going to talk about how to address these problems. [MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/wBtIp/1-8-paradigmatic-relation-discovery-part-1" } ] }, { "1-9-paradigmatic-relation-discovery-part-2": [ { - "0:05": "In this lecture, we continue discussing Paradigmatical Relation Discovery. Earlier we introduced a method called Expected Overlap of Words in Context.
In this method, we represent each context by a word vector that represents the probability of a word in the context. We measure the similarity by using the.product, which can be interpreted as the probability that two randomly picked words from the two contexts are identical. We also discussed the two problems of this method. The first is that it favors matching one frequent term very well over matching more distinct terms. It put too much emphasis on matching one term very well. The second is that it treats every word equally. Even a common word like the will contribute equally as content word like eats. So now we are going to talk about how to solve these problems. More specifically, we're going to introduce some retrieval heuristics used in text retrieval. These heuristics can effectively solve these problems, as these problems also occur in text retrieval when we match a query that though with a document vector. So to address the first problem, we can use a sublinear transformation of tone frequency. That is, we don't have to use the raw frequency count of a term to represent the context. We can transform it into some form that wouldn't emphasize so much on the raw frequency. To address the synchronous problem, we can put more weight on rare terms. That is we can reward matching a real-world. This heuristic is called the IDF term weighting in text retrieval. IDF stands for Inverse Document Frequency. So now, we're going to talk about the two heuristics in more detail. First let's talk about the TF Transformation. That is to convert the raw count of a word in the document into some weight that reflects our belief about how important this word in the document. So that will be denoted by TF of w,d. That's shown in the y-axis. Now, in general, there are many ways to map that. Let's first look at the simple way of mapping. In this case, we're going to say, well, any non-zero counts will be mapped to one and the zero count will be mapped to zero. 
So with this mapping all the frequencies will be mapped to only two values; zero or one. The mapping function is shown here as a flat line here. Now, this is naive because it's not the frequency of words. However, this actually has the advantage of emphasizing matching all the words in the context. So it does not allow a frequency of word to dominate the matching. Now, the approach that we have taken earlier in the expected overlap count approach, is a linear transformation. We basically, take y as the same as x. So we use the raw count as a representation. That created the problem that we just talked about namely; it emphasize too much on just matching one frequent term. Matching one frequent term can contribute a lot. So we can have a lot of other interesting transformations in between the two extremes, and they generally form a sublinear transformation. So for example, one possibility is to take logarithm of the raw count, and this will give us curve that looks like this, that you are seeing here. In this case, you can see the high frequency counts. The high counts are penalize a little bit, so the curve is a sublinear curve and it brings down the weight of those really high counts. This is what we want, because it prevents that terms from dominating the scoring function." + "time": "0:05", + "text": "In this lecture, we continue discussing Paradigmatical Relation Discovery. Earlier we introduced a method called Expected Overlap of Words in Context. In this method, we represent each context by a word vector that represents the probability of a word in the context. We measure the similarity by using the dot product, which can be interpreted as the probability that two randomly picked words from the two contexts are identical. We also discussed the two problems of this method. The first is that it favors matching one frequent term very well over matching more distinct terms. It puts too much emphasis on matching one term very well.
The second is that it treats every word equally. Even a common word like the will contribute equally as a content word like eats. So now we are going to talk about how to solve these problems. More specifically, we're going to introduce some retrieval heuristics used in text retrieval. These heuristics can effectively solve these problems, as these problems also occur in text retrieval when we match a query vector with a document vector. So to address the first problem, we can use a sublinear transformation of term frequency. That is, we don't have to use the raw frequency count of a term to represent the context. We can transform it into some form that wouldn't emphasize so much on the raw frequency. To address the second problem, we can put more weight on rare terms. That is, we can reward matching a rare word. This heuristic is called the IDF term weighting in text retrieval. IDF stands for Inverse Document Frequency. So now, we're going to talk about the two heuristics in more detail. First let's talk about the TF Transformation. That is to convert the raw count of a word in the document into some weight that reflects our belief about how important this word is in the document. So that will be denoted by TF of w,d. That's shown on the y-axis. Now, in general, there are many ways to map that. Let's first look at a simple way of mapping. In this case, we're going to say, well, any non-zero counts will be mapped to one and the zero count will be mapped to zero. So with this mapping all the frequencies will be mapped to only two values; zero or one. The mapping function is shown here as a flat line here. Now, this is naive because it ignores the actual frequency of words. However, this actually has the advantage of emphasizing matching all the words in the context. So it does not allow a frequent word to dominate the matching. Now, the approach that we have taken earlier in the expected overlap count approach is a linear transformation.
We basically take y as the same as x. So we use the raw count as a representation. That created the problem that we just talked about, namely, it emphasizes too much on just matching one frequent term. Matching one frequent term can contribute a lot. So we can have a lot of other interesting transformations in between the two extremes, and they generally form a sublinear transformation. So for example, one possibility is to take the logarithm of the raw count, and this will give us a curve that looks like this, that you are seeing here. In this case, you can see the high frequency counts. The high counts are penalized a little bit, so the curve is a sublinear curve and it brings down the weight of those really high counts. This is what we want, because it prevents these terms from dominating the scoring function.", + "url": "https://www.coursera.org/learn/text-mining/lecture/CV8fN/1-9-paradigmatic-relation-discovery-part-2" }, { - "4:48": "Now, there is also another interesting transformation called a BM25 transformation which has been shown to be very effective for retrieval. In this transformation, we have a form that looks like this. So it's k plus one multiplied by x divided by x plus k, where k is a parameter, x is the count, the raw count of a word. Now, the transformation is very interesting in that it can actually go from one extreme to the other extreme by varying k. It also interesting that it has upper bound, k plus one in this case. So this puts a very strict constraint on high frequency terms, because their weight would never exceed k plus one. As we vary k, if we can simulate the two extremes. So when k is set to zero, we roughly have the 0,1 vector. Whereas when we set k to a very large value, it will behave more like the linear transformation. So this transformation function is by far the most effective transformation function for text retrieval and it also makes sense for our problem setup.
So we just talked about how to solve the problem of overemphasizing a frequency term Now let's look at the second problem, and that is how we can penalize popular terms. Matching \"the\" is not surprising, because \"the\" occurs everywhere. But matching \"eats\" would count a lot. So how can we address that problem? Now in this case, we can use the IDF weighting. That's commonly used in retrieval. IDF stands for Inverse Document Frequency. Document frequency means the count of the total number of documents that contain a particular word. So here we show that the IDF measure is defined as a logarithm function of the number of documents that match a term or document frequency. So K is the number of documents containing word or document frequency and M here is the total number of documents in the collection. The IDF function is giving a higher value for a lower K, meaning that it rewards rare term. The maximum value is log of M plus one. That's when the word occurred just once in a context. So that's a very rare term, the rare is term in the whole collection. The lowest value you can see here is when K reaches its maximum which would be M. So that would be a very low value, close to zero in fact. So this of course measure is used in search where we naturally have a collection. In our case, what would be our collection? Well, we can also use the context that we can collect all the words as our collection. That is to say, a word that's popular in the collection in general, would also have a low IDF. Because depending on the dataset, we can construct the context vectors in different ways. But in the end if a term is very frequent in the original dataset, then it will still be frequent in the collective context documents. So how can we add these heuristics to improve our similarity function? Well, here's one way and there are many other ways that are possible. But this is a reasonable way, where we can adapt the BM25 retrieval model for paradigmatical relation mining." 
+ "time": "4:48", + "text": "Now, there is also another interesting transformation called a BM25 transformation which has been shown to be very effective for retrieval. In this transformation, we have a form that looks like this. So it's k plus one multiplied by x divided by x plus k, where k is a parameter, x is the count, the raw count of a word. Now, the transformation is very interesting in that it can actually go from one extreme to the other extreme by varying k. It also interesting that it has upper bound, k plus one in this case. So this puts a very strict constraint on high frequency terms, because their weight would never exceed k plus one. As we vary k, if we can simulate the two extremes. So when k is set to zero, we roughly have the 0,1 vector. Whereas when we set k to a very large value, it will behave more like the linear transformation. So this transformation function is by far the most effective transformation function for text retrieval and it also makes sense for our problem setup. So we just talked about how to solve the problem of overemphasizing a frequency term Now let's look at the second problem, and that is how we can penalize popular terms. Matching \"the\" is not surprising, because \"the\" occurs everywhere. But matching \"eats\" would count a lot. So how can we address that problem? Now in this case, we can use the IDF weighting. That's commonly used in retrieval. IDF stands for Inverse Document Frequency. Document frequency means the count of the total number of documents that contain a particular word. So here we show that the IDF measure is defined as a logarithm function of the number of documents that match a term or document frequency. So K is the number of documents containing word or document frequency and M here is the total number of documents in the collection. The IDF function is giving a higher value for a lower K, meaning that it rewards rare term. The maximum value is log of M plus one. 
That's when the word occurred just once in a context. So that's a very rare term, the rarest term in the whole collection. The lowest value you can see here is when K reaches its maximum which would be M. So that would be a very low value, close to zero in fact. So this measure is, of course, used in search where we naturally have a collection. In our case, what would be our collection? Well, we can also use the contexts that we have collected for all the words as our collection. That is to say, a word that's popular in the collection in general would also have a low IDF. Because depending on the dataset, we can construct the context vectors in different ways. But in the end if a term is very frequent in the original dataset, then it will still be frequent in the collected context documents. So how can we add these heuristics to improve our similarity function? Well, here's one way and there are many other ways that are possible. But this is a reasonable way, where we can adapt the BM25 retrieval model for paradigmatical relation mining.", + "url": "https://www.coursera.org/learn/text-mining/lecture/CV8fN/1-9-paradigmatic-relation-discovery-part-2" }, { - "9:14": "In this case, we define the document vector as containing elements representing normalized BM25 values. So in this normalization function, we take sum over all the words and we normalize the weight of each word by the sum of the weights of all the words. This is to again ensure all the xi's will sum to one in this vector. So this would be very similar to what we had before, in that this vector is actually something similar to a word distribution, all the xi's will sum to one. Now, the weight of BM25 for each word is defined here. If you compare this with our old definition where we just have a normalized count of this one, right? So we only have this one and the document length or the total count of words in that context document, and that's what we had before.
But now with the BM25 transformation, we introduced something else. First, of course, this extra occurrence of this count is just to achieve the sub-linear normalization. But we also see we introduced the parameter, k, here, and this parameter is generally a non-negative number, although zero is also possible. But this controls the upper bound, and also controls to what extent it simulates the linear transformation. So this is one parameter, but we also see there is another parameter here, b, and this would be within zero and one. This is a parameter to control length normalization. In this case, the normalization formula has an average document length here. This is computed by taking the average of the lengths of all the documents in the collection. In this case, the lengths of all the context documents that we're considering. So this average document length will be a constant for any given collection. So it actually is only affecting the effect of the parameter, b, here because this is a constant. But I kept it here because it's a constant that's used in retrieval where it would give us a stabilized interpretation of parameter, b. But for our purpose, this will be a constant so it would only be affecting the length normalization together with parameter, b. Now, with this definition then, we have a new way to define our document vectors, and we can compute the vector d2 in the same way. The difference is that the high-frequency terms will now have somewhat lower weights. This would help us control the influence of these high-frequency terms. Now, the IDF can be added here in the scoring function. That means we'll introduce a weight for matching each term. So you may recall this sum indicates all the possible words that can overlap between the two contexts. The x_i and the y_i are probabilities of picking the word from both contexts. Therefore, it indicates how likely we'll see a match on this word. Now, IDF would give us the importance of matching this word.
A common word will be worth less than a rare word. So we emphasize more on matching rare words now. So with this modification, then the new function will likely address those two problems. Now, interestingly we can also use this approach to discover syntagmatic relations. In general, when we represent a context with a term vector, we would likely see some terms have high weights and other terms have low weights. Depending on how we assign weights to these terms, we might be able to use these weights to discover the words that are strongly associated with the candidate word in the context. So let's take a look at the term vector in more detail here. We have each x_i defined as the normalized weight of BM25. Now, this weight alone only reflects how frequently the word occurs in the context. But we can't just say any frequent term in the context would be correlated with the candidate word, because many common words like 'the' will occur frequently in all the contexts. But if we apply IDF weighting as you see here, we can then re-weight these terms based on IDF. That means the words that are common like 'the' will get penalized. So now the highest weighted terms will not be those common terms because they have lower IDFs. Instead, those terms would be the terms that are frequent in the context, but not frequent in the collection. So those are clearly the words that tend to occur in the context of the candidate word, for example, cat. So for this reason, the highly weighted terms in this IDF-weighted vector can also be assumed to be candidates for syntagmatic relations. Now, of course, this is only a by-product of our approach for discovering paradigmatic relations. In the next lecture, we're going to talk more about how to discover syntagmatic relations. But it clearly shows the relation between discovering the two relations. Indeed they can be discovered in a joint manner by leveraging such associations.
So to summarize, the main idea for discovering paradigmatic relations is to collect the context of a candidate word to form a pseudo document. This is typically represented as a bag of words. Then compute the similarity of the corresponding context documents of two candidate words. Then we can take the highly similar word pairs, and treat them as having paradigmatic relations. These are the words that share similar contexts. There are many different ways to implement this general idea. We just talked about some of the approaches. More specifically, we talked about using text retrieval models to help us design an effective similarity function to compute the paradigmatic relations. More specifically, we have used the BM25 and IDF weighting to discover paradigmatic relations. These approaches also represent the state of the art in text retrieval techniques. Finally, syntagmatic relations can also be discovered as a by-product when we discover paradigmatic relations." + "time": "9:14", + "text": "In this case, we define the document vector as containing elements representing normalized BM25 values. So in this normalization function, we take a sum over all the words and we normalize the weight of each word by the sum of the weights of all the words. This is to again ensure all the xi's will sum to one in this vector. So this would be very similar to what we had before, in that this vector is actually something similar to a word distribution, all the xi's will sum to one. Now, the weight of BM25 for each word is defined here. If you compare this with our old definition where we just have a normalized count of this one, right? So we only have this one and the document length, or the total count of words in that context document, and that's what we had before. But now with the BM25 transformation, we introduced something else. First, of course, this extra occurrence of this count is just to achieve the sub-linear normalization.
But we also see we introduced the parameter, k, here, and this parameter is generally a non-negative number, although zero is also possible. But this controls the upper bound, and also controls to what extent it simulates the linear transformation. So this is one parameter, but we also see there is another parameter here, b, and this would be within zero and one. This is a parameter to control length normalization. In this case, the normalization formula has an average document length here. This is computed by taking the average of the lengths of all the documents in the collection. In this case, the lengths of all the context documents that we're considering. So this average document length will be a constant for any given collection. So it actually is only affecting the effect of the parameter, b, here because this is a constant. But I kept it here because it's a constant that's used in retrieval where it would give us a stabilized interpretation of parameter, b. But for our purpose, this will be a constant so it would only be affecting the length normalization together with parameter, b. Now, with this definition then, we have a new way to define our document vectors, and we can compute the vector d2 in the same way. The difference is that the high-frequency terms will now have somewhat lower weights. This would help us control the influence of these high-frequency terms. Now, the IDF can be added here in the scoring function. That means we'll introduce a weight for matching each term. So you may recall this sum indicates all the possible words that can overlap between the two contexts. The x_i and the y_i are probabilities of picking the word from both contexts. Therefore, it indicates how likely we'll see a match on this word. Now, IDF would give us the importance of matching this word. A common word will be worth less than a rare word. So we emphasize more on matching rare words now.
So with this modification, then the new function will likely address those two problems. Now, interestingly we can also use this approach to discover syntagmatic relations. In general, when we represent a context with a term vector, we would likely see some terms have high weights and other terms have low weights. Depending on how we assign weights to these terms, we might be able to use these weights to discover the words that are strongly associated with the candidate word in the context. So let's take a look at the term vector in more detail here. We have each x_i defined as the normalized weight of BM25. Now, this weight alone only reflects how frequently the word occurs in the context. But we can't just say any frequent term in the context would be correlated with the candidate word, because many common words like 'the' will occur frequently in all the contexts. But if we apply IDF weighting as you see here, we can then re-weight these terms based on IDF. That means the words that are common like 'the' will get penalized. So now the highest weighted terms will not be those common terms because they have lower IDFs. Instead, those terms would be the terms that are frequent in the context, but not frequent in the collection. So those are clearly the words that tend to occur in the context of the candidate word, for example, cat. So for this reason, the highly weighted terms in this IDF-weighted vector can also be assumed to be candidates for syntagmatic relations. Now, of course, this is only a by-product of our approach for discovering paradigmatic relations. In the next lecture, we're going to talk more about how to discover syntagmatic relations. But it clearly shows the relation between discovering the two relations. Indeed they can be discovered in a joint manner by leveraging such associations. So to summarize, the main idea for discovering paradigmatic relations is to collect the context of a candidate word to form a pseudo document.
This is typically represented as a bag of words. Then compute the similarity of the corresponding context documents of two candidate words. Then we can take the highly similar word pairs, and treat them as having paradigmatic relations. These are the words that share similar contexts. There are many different ways to implement this general idea. We just talked about some of the approaches. More specifically, we talked about using text retrieval models to help us design an effective similarity function to compute the paradigmatic relations. More specifically, we have used the BM25 and IDF weighting to discover paradigmatic relations. These approaches also represent the state of the art in text retrieval techniques. Finally, syntagmatic relations can also be discovered as a by-product when we discover paradigmatic relations.", + "url": "https://www.coursera.org/learn/text-mining/lecture/CV8fN/1-9-paradigmatic-relation-discovery-part-2" } ] } @@ -908,934 +1478,1530 @@ { "2-1-syntagmatic-relation-discovery-entropy": [ { - "0:00": "[SOUND]. This lecture is about syntagmatic relation discovery and entropy. In this lecture, we're going to continue talking about word association mining. In particular, we're going to talk about how to discover syntagmatic relations." + "time": "0:00", + "text": "[SOUND]. This lecture is about syntagmatic relation discovery and entropy. In this lecture, we're going to continue talking about word association mining. In particular, we're going to talk about how to discover syntagmatic relations.
And we're going to start with the introduction of entropy, which is the basis for designing some measures for discovering such relations.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "0:32": "By definition, syntagmatic relations hold between words that have correlated co-occurrences. That means, when we see one word occur in context, we tend to see the occurrence of the other word." + "time": "0:32", + "text": "By definition, syntagmatic relations hold between words that have correlated co-occurrences. That means, when we see one word occur in context, we tend to see the occurrence of the other word.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "0:48": "So, take a more specific example here. We can ask the question, whenever eats occurs, what other words also tend to occur?" + "time": "0:48", + "text": "So, take a more specific example here. We can ask the question, whenever eats occurs, what other words also tend to occur?", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "1:01": "Looking at the sentences on the left, we see some words that might occur together with eats, like cat, dog, or fish, right? But if I take them out and if you look at the right side where we only show eats and some other words, the question then is: can you predict what other words occur to the left or to the right?" + "time": "1:01", + "text": "Looking at the sentences on the left, we see some words that might occur together with eats, like cat, dog, or fish, right? But if I take them out and if you look at the right side where we only show eats and some other words, the question then is:
Can you predict what other words occur to the left or to the right?", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "1:28": "Right so this would force us to think about what other words are associated with eats. If they are associated with eats, they tend to occur in the context of eats." + "time": "1:28", + "text": "Right so this would force us to think about what other words are associated with eats. If they are associated with eats, they tend to occur in the context of eats.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "1:38": "More specifically our prediction problem is to take any text segment which can be a sentence, a paragraph, or a document. And then ask the question, is a particular word present or absent in this segment?" + "time": "1:38", + "text": "More specifically our prediction problem is to take any text segment which can be a sentence, a paragraph, or a document. And then ask the question, is a particular word present or absent in this segment?", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "1:54": "Right here we ask about the word W. Is W present or absent in this segment?" + "time": "1:54", + "text": "Right here we ask about the word W. Is W present or absent in this segment?", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "2:02": "Now what's interesting is that some words are actually easier to predict than other words."
+ "time": "2:02", + "text": "Now what's interesting is that some words are actually easier to predict than other words.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "2:10": "If you take a look at the three words shown here, meat, the, and unicorn, which one do you think is easier to predict?" + "time": "2:10", + "text": "If you take a look at the three words shown here, meat, the, and unicorn, which one do you think is easier to predict?", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "2:20": "Now if you think about it for a moment you might conclude that" + "time": "2:20", + "text": "Now if you think about it for a moment you might conclude that", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "2:24": "the is easier to predict because it tends to occur everywhere. So I can just say, well that would be in the sentence." + "time": "2:24", + "text": "the is easier to predict because it tends to occur everywhere. So I can just say, well that would be in the sentence.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "2:31": "Unicorn is also relatively easy because unicorn is rare, is very rare. And I can bet that it doesn't occur in this sentence." + "time": "2:31", + "text": "Unicorn is also relatively easy because unicorn is rare, is very rare. And I can bet that it doesn't occur in this sentence.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "2:42": "But meat is somewhere in between in terms of frequency. And it makes it harder to predict because it's possible that it occurs in a sentence or the segment, more accurately." + "time": "2:42", + "text": "But meat is somewhere in between in terms of frequency. 
And it makes it harder to predict because it's possible that it occurs in a sentence or the segment, more accurately.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "2:53": "But it may also not occur in the sentence, so now let's study this problem more formally." + "time": "2:53", + "text": "But it may also not occur in the sentence, so now let's study this problem more formally.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "3:02": "So the problem can be formally defined as predicting the value of a binary random variable. Here we denote it by X sub w, w denotes a word, so this random variable is associated with precisely one word." + "time": "3:02", + "text": "So the problem can be formally defined as predicting the value of a binary random variable. Here we denote it by X sub w, w denotes a word, so this random variable is associated with precisely one word.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "3:18": "When the value of the variable is 1, it means this word is present. When it's 0, it means the word is absent. And naturally, the probabilities for 1 and 0 should sum to 1, because a word is either present or absent in a segment." + "time": "3:18", + "text": "When the value of the variable is 1, it means this word is present. When it's 0, it means the word is absent. And naturally, the probabilities for 1 and 0 should sum to 1, because a word is either present or absent in a segment.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "3:35": "There's no other choice." 
+ "time": "3:35", + "text": "There's no other choice.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "3:38": "So the intuition with this concept earlier can be formally stated as follows. The more random this random variable is, the more difficult the prediction will be." + "time": "3:38", + "text": "So the intuition with this concept earlier can be formally stated as follows. The more random this random variable is, the more difficult the prediction will be.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "3:49": "Now the question is how does one quantitatively measure the randomness of a random variable like X sub w?" + "time": "3:49", + "text": "Now the question is how does one quantitatively measure the randomness of a random variable like X sub w?", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "3:56": "How in general, can we quantify the randomness of a variable and that's why we need a measure called entropy and this measure introduced in information theory to measure the randomness of X. There is also some connection with information here but that is beyond the scope of this course." + "time": "3:56", + "text": "How in general, can we quantify the randomness of a variable and that's why we need a measure called entropy and this measure introduced in information theory to measure the randomness of X. There is also some connection with information here but that is beyond the scope of this course.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "4:17": "So for our purpose we just treat entropy function as a function defined on a random variable. 
In this case, it is a binary random variable, although the definition can be easily generalized for a random variable with multiple values." + "time": "4:17", + "text": "So for our purpose we just treat the entropy function as a function defined on a random variable. In this case, it is a binary random variable, although the definition can be easily generalized for a random variable with multiple values.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "4:32": "Now the function form looks like this, there's the sum of all the possible values for this random variable. Inside the sum for each value we have a product of the probability" + "time": "4:32", + "text": "Now the function form looks like this, there's the sum of all the possible values for this random variable. Inside the sum for each value we have a product of the probability", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "4:45": "that the random variable equals this value and log of this probability." + "time": "4:45", + "text": "that the random variable equals this value and log of this probability.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "4:53": "And note that there is also a negative sign there." + "time": "4:53", + "text": "And note that there is also a negative sign there.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "4:56": "Now entropy in general is non-negative. And that can be mathematically proved." + "time": "4:56", + "text": "Now entropy in general is non-negative.
And that can be mathematically proved.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "5:02": "So if we expand this sum, we'll see that the equation looks like the second one. Where I explicitly plugged in the two values, 0 and 1. And sometimes when we have 0 log of 0, we would generally define that as 0, because log of 0 is undefined." + "time": "5:02", + "text": "So if we expand this sum, we'll see that the equation looks like the second one. Where I explicitly plugged in the two values, 0 and 1. And sometimes when we have 0 log of 0, we would generally define that as 0, because log of 0 is undefined.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "5:28": "So this is the entropy function. And this function will give a different value for different distributions of this random variable." + "time": "5:28", + "text": "So this is the entropy function. And this function will give a different value for different distributions of this random variable.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "5:37": "And it clearly depends on the probability that the random variable taking value of 1 or 0. If we plot this function against the probability that the random variable is equal to 1." + "time": "5:37", + "text": "And it clearly depends on the probability that the random variable taking value of 1 or 0. If we plot this function against the probability that the random variable is equal to 1.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "5:56": "And then the function looks like this." 
+ "time": "5:56", + "text": "And then the function looks like this.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "6:01": "At the two ends, that means when the probability of X" + "time": "6:01", + "text": "At the two ends, that means when the probability of X", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "6:07": "equals 1 is very small or very large, then the entropy function has a low value. When it's 0.5 in the middle then it reaches the maximum." + "time": "6:07", + "text": "equals 1 is very small or very large, then the entropy function has a low value. When it's 0.5 in the middle then it reaches the maximum.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "6:20": "Now if we plot the function against the probability that X" + "time": "6:20", + "text": "Now if we plot the function against the probability that X", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "6:25": "is taking a value of 0 and the function would show exactly the same curve here, and you can imagine why. And so that's because" + "time": "6:25", + "text": "is taking a value of 0 and the function would show exactly the same curve here, and you can imagine why. And so that's because", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "6:42": "the two probabilities are symmetric, and completely symmetric." + "time": "6:42", + "text": "the two probabilities are symmetric, and completely symmetric.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "6:48": "So an interesting question you can think about in general is for what kind of X does entropy reach maximum or minimum. 
And we can in particular think about some special cases. For example, in one case, we might have a random variable that" + "time": "6:48", + "text": "So an interesting question you can think about in general is for what kind of X does entropy reach maximum or minimum. And we can in particular think about some special cases. For example, in one case, we might have a random variable that", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "7:08": "always takes a value of 1. The probability is 1." + "time": "7:08", + "text": "always takes a value of 1. The probability is 1.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "7:16": "Or there's a random variable that" + "time": "7:16", + "text": "Or there's a random variable that", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "7:19": "is equally likely taking a value of one or zero. So in this case the probability that X equals 1 is 0.5." + "time": "7:19", + "text": "is equally likely taking a value of one or zero. So in this case the probability that X equals 1 is 0.5.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "7:30": "Now which one has a higher entropy?" + "time": "7:30", + "text": "Now which one has a higher entropy?", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "7:34": "It's easier to look at the problem by thinking of a simple example" + "time": "7:34", + "text": "It's easier to look at the problem by thinking of a simple example", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "7:40": "using coin tossing." 
+ "time": "7:40", + "text": "using coin tossing.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "7:43": "So when we think about random experiments like tossing a coin," + "time": "7:43", + "text": "So when we think about random experiments like tossing a coin,", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "7:48": "it gives us a random variable, that can represent the result. It can be head or tail. So we can define a random variable X sub coin, so that it's 1 when the coin shows up as head, it's 0 when the coin shows up as tail." + "time": "7:48", + "text": "it gives us a random variable, that can represent the result. It can be head or tail. So we can define a random variable X sub coin, so that it's 1 when the coin shows up as head, it's 0 when the coin shows up as tail.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "8:09": "So now we can compute the entropy of this random variable. And this entropy indicates how difficult it is to predict the outcome" + "time": "8:09", + "text": "So now we can compute the entropy of this random variable. And this entropy indicates how difficult it is to predict the outcome", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "8:22": "of a coin toss." + "time": "8:22", + "text": "of a coin toss.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "8:25": "So we can think about the two cases. One is a fair coin, it's completely fair. The coin shows up as head or tail equally likely. So the two probabilities would be a half. Right? So both are equal to one half." + "time": "8:25", + "text": "So we can think about the two cases. One is a fair coin, it's completely fair. 
The coin shows up as head or tail equally likely. So the two probabilities would be a half. Right? So both are equal to one half.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "8:44": "Another extreme case is a completely biased coin, where the coin always shows up as heads. So it's a completely biased coin." + "time": "8:44", + "text": "Another extreme case is a completely biased coin, where the coin always shows up as heads. So it's a completely biased coin.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "8:54": "Now let's think about the entropies in the two cases. And if you plug in these values you can see the entropies would be as follows. For a fair coin we see the entropy reaches its maximum, that's 1." + "time": "8:54", + "text": "Now let's think about the entropies in the two cases. And if you plug in these values you can see the entropies would be as follows. For a fair coin we see the entropy reaches its maximum, that's 1.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "9:11": "For the completely biased coin, we see it's 0. And that intuitively makes a lot of sense. Because a fair coin is most difficult to predict." + "time": "9:11", + "text": "For the completely biased coin, we see it's 0. And that intuitively makes a lot of sense. Because a fair coin is most difficult to predict.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "9:22": "Whereas a completely biased coin is very easy to predict. We can always say, well, it's a head. Because it is a head all the time. So they can be shown on the curve as follows. So the fair coin corresponds to the middle point where it's very uncertain.
The completely biased coin corresponds to the end point where we have a probability of 1.0 and the entropy is 0. So, now let's see how we can use entropy for word prediction. Let's think about our problem, which is to predict whether W is present or absent in this segment. Again, think about the three words, particularly think about their entropies." + "time": "9:22", + "text": "Whereas a completely biased coin is very easy to predict. We can always say, well, it's a head. Because it is a head all the time. So they can be shown on the curve as follows. So the fair coin corresponds to the middle point where it's very uncertain. The completely biased coin corresponds to the end point where we have a probability of 1.0 and the entropy is 0. So, now let's see how we can use entropy for word prediction. Let's think about our problem, which is to predict whether W is present or absent in this segment. Again, think about the three words, particularly think about their entropies.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "10:06": "Now we can assume high entropy words are harder to predict." + "time": "10:06", + "text": "Now we can assume high entropy words are harder to predict.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "10:11": "And so we now have a quantitative way to tell us which word is harder to predict." + "time": "10:11", + "text": "And so we now have a quantitative way to tell us which word is harder to predict.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "10:20": "Now if you look at the three words meat, the, unicorn, again, and we clearly would expect meat to have a higher entropy than the unicorn. In fact if you look at the entropy of the, it's close to zero. Because it occurs everywhere. So it's like a completely biased coin."
+ "time": "10:20", + "text": "Now if you look at the three words meat, the, unicorn again, we clearly would expect meat to have a higher entropy than the unicorn. In fact if you look at the entropy of the, it's close to zero. Because it occurs everywhere. So it's like a completely biased coin.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "10:44": "Therefore the entropy is zero." + "time": "10:44", + "text": "Therefore the entropy is zero.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" }, { - "10:48": "[MUSIC]" + "time": "10:48", + "text": "[MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/qGZrA/2-1-syntagmatic-relation-discovery-entropy" } ] }, { "2-2-syntagmatic-relation-discovery-conditional-entropy": [ { - "0:00": "[SOUND] This lecture is about the syntagmatic relation discovery and conditional entropy. In this lecture, we're going to continue the discussion of word association mining and analysis." + "time": "0:00", + "text": "[SOUND] This lecture is about the syntagmatic relation discovery and conditional entropy. In this lecture, we're going to continue the discussion of word association mining and analysis.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "0:18": "We're going to talk about the conditional entropy, which is useful for discovering syntagmatic relations. Earlier, we talked about using entropy to capture how easy it is to predict the presence or absence of a word." + "time": "0:18", + "text": "We're going to talk about the conditional entropy, which is useful for discovering syntagmatic relations.
Earlier, we talked about using entropy to capture how easy it is to predict the presence or absence of a word.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "0:34": "Now, we'll address a different scenario where we assume that we know something about the text segment. So now the question is, suppose we know that eats occurred in the segment. How would that help us predict the presence or absence of a word like meat? And in particular, we want to know whether the presence of eats has helped us predict the presence of meat." + "time": "0:34", + "text": "Now, we'll address a different scenario where we assume that we know something about the text segment. So now the question is, suppose we know that eats occurred in the segment. How would that help us predict the presence or absence of a word like meat? And in particular, we want to know whether the presence of eats has helped us predict the presence of meat.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "1:02": "And if we frame this using entropy, that would mean we are interested in knowing whether knowing the presence of eats could reduce uncertainty about meat. Or, reduce the entropy of the random variable corresponding to the presence or absence of meat. We can also ask the question, what if we know of the absence of eats?" + "time": "1:02", + "text": "And if we frame this using entropy, that would mean we are interested in knowing whether knowing the presence of eats could reduce uncertainty about meat. Or, reduce the entropy of the random variable corresponding to the presence or absence of meat.
We can also ask the question, what if we know of the absence of eats?", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "1:28": "Would that also help us predict the presence or absence of meat?" + "time": "1:28", + "text": "Would that also help us predict the presence or absence of meat?", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "1:34": "These questions can be addressed by using another concept called conditional entropy. So to explain this concept, let's first look at the scenario we had before, when we know nothing about the segment. So we have these probabilities indicating whether a word like meat occurs, or it doesn't occur in the segment. And we have an entropy function that looks like what you see on the slide." + "time": "1:34", + "text": "These questions can be addressed by using another concept called conditional entropy. So to explain this concept, let's first look at the scenario we had before, when we know nothing about the segment. So we have these probabilities indicating whether a word like meat occurs, or it doesn't occur in the segment. And we have an entropy function that looks like what you see on the slide.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "2:03": "Now suppose we know eats is present, so now we know the value of another random variable that denotes eats." + "time": "2:03", + "text": "Now suppose we know eats is present, so now we know the value of another random variable that denotes eats.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "2:12": "Now, that would change all these probabilities to conditional probabilities.
Where we look at the presence or absence of meat," + "time": "2:12", + "text": "Now, that would change all these probabilities to conditional probabilities. Where we look at the presence or absence of meat,", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "2:21": "given that we know eats occurred in the context. So as a result, if we replace these probabilities with their corresponding conditional probabilities in the entropy function, we'll get the conditional entropy." + "time": "2:21", + "text": "given that we know eats occurred in the context. So as a result, if we replace these probabilities with their corresponding conditional probabilities in the entropy function, we'll get the conditional entropy.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "2:37": "So this equation now here would be the conditional entropy. Conditional on the presence of eats." + "time": "2:37", + "text": "So this equation now here would be the conditional entropy. Conditional on the presence of eats.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "2:52": "So, you can see this is essentially the same entropy function as you have seen before, except that all the probabilities now have a condition." + "time": "2:52", + "text": "So, you can see this is essentially the same entropy function as you have seen before, except that all the probabilities now have a condition.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "3:04": "And this then tells us the entropy of meat, after we have known eats occurring in the segment." 
+ "time": "3:04", + "text": "And this then tells us the entropy of meat, after we have known eats occurring in the segment.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "3:14": "And of course, we can also define this conditional entropy for the scenario where we don't see eats. So if we know it did not occur in the segment, then this conditional entropy would capture the uncertainty of meat in that condition. So now, putting different scenarios together, we have the complete definition of conditional entropy as follows." + "time": "3:14", + "text": "And of course, we can also define this conditional entropy for the scenario where we don't see eats. So if we know it did not occur in the segment, then this conditional entropy would capture the uncertainty of meat in that condition. So now, putting different scenarios together, we have the complete definition of conditional entropy as follows.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "3:39": "Basically, we're going to consider both scenarios of the value of eats, zero and one, and this gives us the probability that eats is equal to zero or one. Basically, whether eats is present or absent. And this of course, is the conditional entropy of meat in that particular scenario." + "time": "3:39", + "text": "Basically, we're going to consider both scenarios of the value of eats, zero and one, and this gives us the probability that eats is equal to zero or one. Basically, whether eats is present or absent. And this of course, is the conditional entropy of meat in that particular scenario.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "4:05": "So if you expand this entropy, then you have the following equation."
+ "time": "4:05", + "text": "So if you expand this entropy, then you have the following equation.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "4:15": "Where you see the involvement of those conditional probabilities." + "time": "4:15", + "text": "Where you see the involvement of those conditional probabilities.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "4:21": "Now in general, for any discrete random variables x and y, we have" + "time": "4:21", + "text": "Now in general, for any discrete random variables x and y, we have", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "4:27": "the conditional entropy is no larger than the entropy of the variable x. So basically, this is an upper bound for the conditional entropy. That means by knowing more information about the segment, we won't be able to increase uncertainty. We can only reduce uncertainty. And that intuitively makes sense because as we know more information, it should always help us make the prediction. And cannot hurt the prediction in any case." + "time": "4:27", + "text": "the conditional entropy is no larger than the entropy of the variable x. So basically, this is an upper bound for the conditional entropy. That means by knowing more information about the segment, we won't be able to increase uncertainty. We can only reduce uncertainty. And that intuitively makes sense because as we know more information, it should always help us make the prediction. And cannot hurt the prediction in any case.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "5:05": "Now, what's interesting here is also to think about what's the minimum possible value of this conditional entropy?
Now, we know that the maximum value is the entropy of X.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "5:17": "But what about the minimum, so what do you think?" + "time": "5:17", + "text": "But what about the minimum, so what do you think?", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "5:22": "I hope you can reach the conclusion that the minimum possible value would be zero. And it will be interesting to think about under what situation we will achieve this." + "time": "5:22", + "text": "I hope you can reach the conclusion that the minimum possible value would be zero. And it will be interesting to think about under what situation we will achieve this.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "5:34": "So, let's see how we can use conditional entropy to capture syntagmatic relations." + "time": "5:34", + "text": "So, let's see how we can use conditional entropy to capture syntagmatic relations.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "5:39": "Now of course, this conditional entropy gives us directly one way to measure the association of two words. Because it tells us to what extent we can predict one word given that we know the presence or absence of another word. Now before we look at the intuition of conditional entropy in capturing syntagmatic relations, it's useful to think of a very special case, listed here. That is, the conditional entropy of the word given itself."
+ "time": "5:39", + "text": "Now of course, this conditional entropy gives us directly one way to measure the association of two words. Because it tells us to what extent we can predict one word given that we know the presence or absence of another word. Now before we look at the intuition of conditional entropy in capturing syntagmatic relations, it's useful to think of a very special case, listed here. That is, the conditional entropy of the word given itself.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "6:19": "So here, we listed this conditional entropy in the middle. So, it's here." + "time": "6:19", + "text": "So here, we listed this conditional entropy in the middle. So, it's here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "6:33": "So, what is the value of this?" + "time": "6:33", + "text": "So, what is the value of this?", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "6:36": "Now, this means we know whether meat occurs in the segment. And we hope to predict whether meat occurs in the segment. And of course, this is 0 because there's no uncertainty anymore. Once we know whether the word occurs in the segment, we'll already know the answer of the prediction. So this is zero. And that's also when this conditional entropy reaches the minimum." + "time": "6:36", + "text": "Now, this means we know whether meat occurs in the segment. And we hope to predict whether meat occurs in the segment. And of course, this is 0 because there's no uncertainty anymore. Once we know whether the word occurs in the segment, we'll already know the answer of the prediction. So this is zero.
And that's also when this conditional entropy reaches the minimum.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "7:06": "So now, let's look at some other cases." + "time": "7:06", + "text": "So now, let's look at some other cases.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "7:09": "So this is a case of knowing the and trying to predict the meat. And this is a case of knowing eats and trying to predict the meat. Which one do you think is smaller? No doubt smaller entropy means easier for prediction." + "time": "7:09", + "text": "So this is a case of knowing the and trying to predict the meat. And this is a case of knowing eats and trying to predict the meat. Which one do you think is smaller? No doubt smaller entropy means easier for prediction.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "7:31": "Which one do you think is higher? Which one is not smaller?" + "time": "7:31", + "text": "Which one do you think is higher? Which one is not smaller?", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "7:36": "Well, if you look at the uncertainty, then in the first case, the doesn't really tell us much about the meat. So knowing the occurrence of the doesn't really help us reduce entropy that much. So it stays fairly close to the original entropy of meat. Whereas in the case of eats, eats is related to meat. So knowing presence of eats or absence of eats, would help us predict whether meat occurs. So it can help us reduce entropy of meat. So we should expect the second term, namely this one, to have a smaller entropy."
+ "time": "7:36", + "text": "Well, if you look at the uncertainty, then in the first case, the doesn't really tell us much about the meat. So knowing the occurrence of the doesn't really help us reduce entropy that much. So it stays fairly close to the original entropy of meat. Whereas in the case of eats, eats is related to meat. So knowing presence of eats or absence of eats, would help us predict whether meat occurs. So it can help us reduce entropy of meat. So we should expect the second term, namely this one, to have a smaller entropy.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "8:21": "And that means there is a stronger association between meat and eats." + "time": "8:21", + "text": "And that means there is a stronger association between meat and eats.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "8:29": "So we now also know when this w is the same as this meat, then the conditional entropy would reach its minimum, which is 0. And for what kind of words would it reach its maximum? Well, that's when the word is not really related to meat. A word like the, for example, would be very close to the maximum, which is the entropy of meat itself." + "time": "8:29", + "text": "So we now also know when this w is the same as this meat, then the conditional entropy would reach its minimum, which is 0. And for what kind of words would it reach its maximum? Well, that's when the word is not really related to meat. A word like the, for example, would be very close to the maximum, which is the entropy of meat itself.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "8:59": "So this suggests that when you use conditional entropy for mining syntagmatic relations, the algorithm would look as follows."
+ "time": "8:59", + "text": "So this suggests that when you use conditional entropy for mining syntagmatic relations, the algorithm would look as follows.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "9:10": "For each word W1, we're going to enumerate over all the other words W2. And then, we can compute the conditional entropy of W1 given W2." + "time": "9:10", + "text": "For each word W1, we're going to enumerate over all the other words W2. And then, we can compute the conditional entropy of W1 given W2.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "9:22": "We sort all the candidate words in ascending order of the conditional entropy, because we want to favor a word that has a small entropy, meaning that it helps us predict the target word W1. And then, we're going to take the top-ranked candidate words as words that have potential syntagmatic relations with W1." + "time": "9:22", + "text": "We sort all the candidate words in ascending order of the conditional entropy, because we want to favor a word that has a small entropy, meaning that it helps us predict the target word W1. And then, we're going to take the top-ranked candidate words as words that have potential syntagmatic relations with W1.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "9:41": "Note that we need to use a threshold to find these words. The threshold can be the number of top candidates to take, or an absolute value for the conditional entropy." + "time": "9:41", + "text": "Note that we need to use a threshold to find these words.
The threshold can be the number of top candidates to take, or an absolute value for the conditional entropy.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "9:55": "Now, this would allow us to mine the most strongly correlated words with a particular word, W1 here." + "time": "9:55", + "text": "Now, this would allow us to mine the most strongly correlated words with a particular word, W1 here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "10:06": "But, this algorithm does not help us mine the strongest K syntagmatic relations from an entire collection. Because in order to do that, we have to ensure that these conditional entropies are comparable across different words. In this case of discovering the syntagmatic relations for a target word like W1, we only need to compare the conditional entropies" + "time": "10:06", + "text": "But, this algorithm does not help us mine the strongest K syntagmatic relations from an entire collection. Because in order to do that, we have to ensure that these conditional entropies are comparable across different words. In this case of discovering the syntagmatic relations for a target word like W1, we only need to compare the conditional entropies", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "10:34": "for W1, given different words. And in this case, they are comparable." + "time": "10:34", + "text": "for W1, given different words. And in this case, they are comparable.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "10:41": "All right.
So, the conditional entropy of W1, given W2, and the conditional entropy of W1, given W3 are comparable.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "10:51": "They all measure how hard it is to predict the W1. But, if we think about the two pairs, where we share W2 in the same condition, and we try to predict the W1 and W3. Then, the conditional entropies are actually not comparable. You can think about this question. Why? So why are they not comparable? Well, that is because they have different upper bounds. Right? So those upper bounds are precisely the entropy of W1 and the entropy of W3. And they have different upper bounds. So we cannot really compare them in this way. So how do we address this problem?" + "time": "10:51", + "text": "They all measure how hard it is to predict the W1. But, if we think about the two pairs, where we share W2 in the same condition, and we try to predict the W1 and W3. Then, the conditional entropies are actually not comparable. You can think about this question. Why? So why are they not comparable? Well, that is because they have different upper bounds. Right? So those upper bounds are precisely the entropy of W1 and the entropy of W3. And they have different upper bounds. So we cannot really compare them in this way. So how do we address this problem?", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" }, { - "11:38": "Well, later we'll discuss how we can use mutual information to solve this problem. [MUSIC]" + "time": "11:38", + "text": "Well, later we'll discuss how we can use mutual information to solve this problem. [MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZAjmz/2-2-syntagmatic-relation-discovery-conditional-entropy" } ] }, { "2-3-syntagmatic-relation-discovery-mutual-information-part-1": [ { - "0:00": "[SOUND].
This lecture is about the syntagmatic relation discovery and mutual information." + "time": "0:00", + "text": "[SOUND]. This lecture is about the syntagmatic relation discovery and mutual information.", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" }, { - "0:13": "In this lecture we are going to continue discussing syntagmatic relation discovery. In particular, we are going to talk about another concept in information theory, called mutual information, and how it can be used to discover syntagmatic relations. Before, we talked about the problem with conditional entropy, that is, the conditional entropy computed for different pairs of words is not really comparable, so that makes it harder to discover strong syntagmatic relations globally from the corpus. So now we are going to introduce mutual information, which is another concept in information theory that allows us to, sometimes, normalize the conditional entropy to make it more comparable across different pairs." + "time": "0:13", + "text": "In this lecture we are going to continue discussing syntagmatic relation discovery. In particular, we are going to talk about another concept in information theory, called mutual information, and how it can be used to discover syntagmatic relations. Before, we talked about the problem with conditional entropy, that is, the conditional entropy computed for different pairs of words is not really comparable, so that makes it harder to discover strong syntagmatic relations globally from the corpus.
So now we are going to introduce mutual information, which is another concept in information theory that allows us to, sometimes, normalize the conditional entropy to make it more comparable across different pairs.", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" }, { - "1:04": "In particular, mutual information, denoted by I(X;Y), measures the entropy reduction of X obtained from knowing Y. More specifically the question we are interested in here is how much reduction of the entropy of X can we obtain by knowing Y." + "time": "1:04", + "text": "In particular, mutual information, denoted by I(X;Y), measures the entropy reduction of X obtained from knowing Y. More specifically the question we are interested in here is how much reduction of the entropy of X can we obtain by knowing Y.", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" }, { - "1:27": "So mathematically it can be defined as the difference between the original entropy of X, and the conditional entropy of X given Y." + "time": "1:27", + "text": "So mathematically it can be defined as the difference between the original entropy of X, and the conditional entropy of X given Y.", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" }, { - "1:37": "And as you can see here, it can also be defined as the reduction of the entropy of Y because of knowing X."
+ "time": "1:37", + "text": "And as you can see here, it can also be defined as the reduction of the entropy of Y because of knowing X.", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" }, { - "1:48": "Now normally the two conditional entropies, H of X given Y and H of Y given X, are not equal, but interestingly, the reduction of entropy by knowing one of them, is actually equal. So, this quantity is called Mutual Information, denoted by I here. And this function has some interesting properties, first it is also non-negative. This is easy to understand because the original entropy is always" + "time": "1:48", + "text": "Now normally the two conditional entropies, H of X given Y and H of Y given X, are not equal, but interestingly, the reduction of entropy by knowing one of them, is actually equal. So, this quantity is called Mutual Information, denoted by I here. And this function has some interesting properties, first it is also non-negative. This is easy to understand because the original entropy is always", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" }, { - "2:22": "not going to be lower than the possibly reduced conditional entropy. In other words, the conditional entropy will never exceed the original entropy. Knowing some information can always help us potentially, but will not hurt us in predicting x." + "time": "2:22", + "text": "not going to be lower than the possibly reduced conditional entropy. In other words, the conditional entropy will never exceed the original entropy.
Knowing some information can always help us potentially, but will not hurt us in predicting x.", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" }, { - "2:41": "The second property is that it is symmetric. While conditional entropy is not symmetric, mutual information is. And the third property is that it reaches its minimum, zero, if and only if the two random variables are completely independent. That means knowing one of them does not tell us anything about the other, and this last property can be verified by simply looking at the equation above, and it reaches 0 if and only if the conditional entropy of X [INAUDIBLE] Y is exactly the same as the original entropy of X. So that means knowing Y did not help at all, and that is when X and Y are completely independent." + "time": "2:41", + "text": "The second property is that it is symmetric. While conditional entropy is not symmetric, mutual information is. And the third property is that it reaches its minimum, zero, if and only if the two random variables are completely independent. That means knowing one of them does not tell us anything about the other, and this last property can be verified by simply looking at the equation above, and it reaches 0 if and only if the conditional entropy of X [INAUDIBLE] Y is exactly the same as the original entropy of X. So that means knowing Y did not help at all, and that is when X and Y are completely independent.", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" }, { - "3:32": "Now, when we fix X, ranking different Ys using conditional entropy would give the same order as ranking based on mutual information because in the function here, H(X) is fixed because X is fixed.
So ranking based on mutual information is exactly the same as ranking based on the conditional entropy of X given Y, but the mutual information allows us to compare different pairs of x and y. So, that is why mutual information is more general and in general, more useful." + "time": "3:32", + "text": "Now, when we fix X, ranking different Ys using conditional entropy would give the same order as ranking based on mutual information because in the function here, H(X) is fixed because X is fixed. So ranking based on mutual information is exactly the same as ranking based on the conditional entropy of X given Y, but the mutual information allows us to compare different pairs of x and y. So, that is why mutual information is more general and in general, more useful.", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" }, { - "4:10": "So, let us examine the intuition of using mutual information for Syntagmatic Relation Mining." + "time": "4:10", + "text": "So, let us examine the intuition of using mutual information for Syntagmatic Relation Mining.", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" }, { - "4:17": "Now, the question we ask for syntagmatic relation mining is, whenever \"eats\" occurs, what other words also tend to occur?" + "time": "4:17", + "text": "Now, the question we ask for syntagmatic relation mining is, whenever \"eats\" occurs, what other words also tend to occur?", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" }, { - "4:25": "So this question can be framed as a mutual information question, that is, which words have high mutual information with eats, so compute the mutual information between eats and other words."
+ "time": "4:25", + "text": "So this question can be framed as a mutual information question, that is, which words have high mutual information with eats, so compute the mutual information between eats and other words.", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" }, { - "4:39": "And if we do that, and it is basically based on the same reasoning as conditional entropy, we will see that words that are strongly associated with eats will have high mutual information, whereas words that are not related will have lower mutual information. For this, I will give some examples here. The mutual information between \"eats\" and \"meats\", which is the same as between \"meats\" and \"eats,\" because mutual information is symmetric, is expected to be higher than the mutual information between eats and the, because knowing the does not really help us predict eats. Similarly, knowing eats does not help us predict the as well. And you also can easily see that the mutual information between a word and itself is the largest, which is equal to the entropy of this word, because in this case the reduction is maximum: knowing one allows us to predict the other completely. So the conditional entropy is zero, therefore the mutual information reaches its maximum. It is going to be larger than or equal to the mutual information between eats and any other word. In other words, picking any other word and computing the mutual information between eats and that word, you will not get a value larger than the mutual information between eats and itself." + "time": "4:39", + "text": "And if we do that, and it is basically based on the same reasoning as conditional entropy, we will see that words that are strongly associated with eats will have high mutual information, whereas words that are not related will have lower mutual information. For this, I will give some examples here.
The mutual information between \"eats\" and \"meats\", which is the same as between \"meats\" and \"eats,\" because mutual information is symmetric, is expected to be higher than the mutual information between eats and the, because knowing the does not really help us predict eats. Similarly, knowing eats does not help us predict the as well. And you also can easily see that the mutual information between a word and itself is the largest, which is equal to the entropy of this word, because in this case the reduction is maximum: knowing one allows us to predict the other completely. So the conditional entropy is zero, therefore the mutual information reaches its maximum. It is going to be larger than or equal to the mutual information between eats and any other word. In other words, picking any other word and computing the mutual information between eats and that word, you will not get a value larger than the mutual information between eats and itself.", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" }, { - "6:16": "So now let us look at how to compute the mutual information. Now in order to do that, we often" + "time": "6:16", + "text": "So now let us look at how to compute the mutual information. Now in order to do that, we often", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" }, { - "6:25": "use a different form of mutual information, and we can mathematically rewrite the mutual information into the form shown on this slide. Where we essentially see a formula that computes what is called a KL-divergence, or Kullback-Leibler divergence. This is another term in information theory. It measures the divergence between two distributions." + "time": "6:25", + "text": "use a different form of mutual information, and we can mathematically rewrite the mutual information into the form shown on this slide.
Where we essentially see a formula that computes what is called a KL-divergence, or Kullback-Leibler divergence. This is another term in information theory. It measures the divergence between two distributions.", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" }, { - "6:50": "Now, if you look at the formula, it is also a sum over many combinations of different values of the two random variables, but inside the sum, mainly we are doing a comparison between two joint distributions. The numerator has the actual observed joint distribution of the two random variables." + "time": "6:50", + "text": "Now, if you look at the formula, it is also a sum over many combinations of different values of the two random variables, but inside the sum, mainly we are doing a comparison between two joint distributions. The numerator has the actual observed joint distribution of the two random variables.", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" }, { - "7:12": "The bottom part, or the denominator, can be interpreted as the expected joint distribution of the two random variables if they were independent, because when two random variables are independent, their joint distribution is equal to the product of the two probabilities." + "time": "7:12", + "text": "The bottom part, or the denominator, can be interpreted as the expected joint distribution of the two random variables if they were independent, because when two random variables are independent, their joint distribution is equal to the product of the two probabilities.", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" }, { - "7:35": "So this comparison will tell us whether the two variables are indeed independent.
If they are indeed independent, then we would expect that the two are the same," + "time": "7:35", + "text": "So this comparison will tell us whether the two variables are indeed independent. If they are indeed independent, then we would expect that the two are the same,", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" }, { - "7:44": "but if the numerator is different from the denominator, that would mean the two variables are not independent, and that helps measure the association." + "time": "7:44", + "text": "but if the numerator is different from the denominator, that would mean the two variables are not independent, and that helps measure the association.", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" }, { - "7:56": "The sum is simply to take into consideration all of the combinations of the values of these two random variables. In our case, each random variable can choose one of the two values, zero or one, so we have four combinations here. If we look at this form of mutual information, it shows that the mutual information measures the divergence of the actual joint distribution from the expected distribution under the independence assumption. The larger this divergence is, the higher the mutual information would be." + "time": "7:56", + "text": "The sum is simply to take into consideration all of the combinations of the values of these two random variables. In our case, each random variable can choose one of the two values, zero or one, so we have four combinations here. If we look at this form of mutual information, it shows that the mutual information measures the divergence of the actual joint distribution from the expected distribution under the independence assumption.
The larger this divergence is, the higher the mutual information would be.", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" }, { - "8:33": "So now let us further look at what exactly are the probabilities involved in this formula of mutual information." + "time": "8:33", + "text": "So now let us further look at what exactly are the probabilities involved in this formula of mutual information.", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" }, { - "8:41": "And here, these are all the probabilities involved, and it is easy for you to verify that. Basically, we have first two [INAUDIBLE] probabilities corresponding to the presence or absence of each word. So, for w1, we have two probabilities shown here." + "time": "8:41", + "text": "And here, these are all the probabilities involved, and it is easy for you to verify that. Basically, we have first two [INAUDIBLE] probabilities corresponding to the presence or absence of each word. So, for w1, we have two probabilities shown here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" }, { - "9:02": "They should sum to one, because a word can either be present or absent in the segment. And similarly for the second word, we also have two probabilities representing presence or absence of this word, and they sum to one as well." + "time": "9:02", + "text": "They should sum to one, because a word can either be present or absent in the segment.
And similarly for the second word, we also have two probabilities representing presence or absence of this word, and they sum to one as well.", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" }, { - "9:21": "And finally, we have a lot of joint probabilities that represent the scenarios of co-occurrences of the two words, and they are shown here." + "time": "9:21", + "text": "And finally, we have a lot of joint probabilities that represent the scenarios of co-occurrences of the two words, and they are shown here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" }, { - "9:34": "And they sum to one because the two words can only have these four possible scenarios. Either they both occur, so in that case both variables will have a value of one, or one of them occurs. There are two scenarios." + "time": "9:34", + "text": "And they sum to one because the two words can only have these four possible scenarios. Either they both occur, so in that case both variables will have a value of one, or one of them occurs. There are two scenarios.", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" }, { - "9:51": "In these two cases, one of the random variables will be equal to one and the other will be zero, and finally we have the scenario when none of them occurs. This is when both variables take a value of zero." + "time": "9:51", + "text": "In these two cases, one of the random variables will be equal to one and the other will be zero, and finally we have the scenario when none of them occurs.
This is when both variables take a value of zero.", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" }, { - "10:07": "So these are the probabilities involved in the calculation of mutual information, over here." + "time": "10:07", + "text": "So these are the probabilities involved in the calculation of mutual information, over here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" }, { - "10:16": "Once we know how to calculate these probabilities, we can easily calculate the mutual information." + "time": "10:16", + "text": "Once we know how to calculate these probabilities, we can easily calculate the mutual information.", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" }, { - "10:24": "It is also interesting to know that there are actually some relations or constraints among these probabilities, and we already saw two of them, right?
So in the previous slide, you have seen that the marginal probabilities of these words sum to one, and we also have seen this constraint that says the two words have these four scenarios of co-occurrence, but we also have some additional constraints listed at the bottom." + "time": "10:24", + "text": "It is also interesting to know that there are actually some relations or constraints among these probabilities, and we already saw two of them, right? So in the previous slide, you have seen that the marginal probabilities of these words sum to one, and we also have seen this constraint that says the two words have these four scenarios of co-occurrence, but we also have some additional constraints listed at the bottom.", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" }, { - "10:58": "For example, this one means if we add up the probability that we observe the two words occur together and the probability that the first word occurs and the second word does not occur, we get exactly the probability that the first word is observed. In other words, when the first word is observed, there are only two scenarios, depending on whether the second word is also observed. So, this probability captures the first scenario, when the second word actually is also observed, and this captures the second scenario, when the second word is not observed.
So, we only see the first word, and it is easy to see the other equations also follow the same reasoning." + "time": "10:58", + "text": "For example, this one means if we add up the probability that we observe the two words occur together and the probability that the first word occurs and the second word does not occur, we get exactly the probability that the first word is observed. In other words, when the first word is observed, there are only two scenarios, depending on whether the second word is also observed. So, this probability captures the first scenario, when the second word actually is also observed, and this captures the second scenario, when the second word is not observed. So, we only see the first word, and it is easy to see the other equations also follow the same reasoning.", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" }, { - "11:46": "Now these equations allow us to compute some probabilities based on other probabilities, and this can simplify the computation." + "time": "11:46", + "text": "Now these equations allow us to compute some probabilities based on other probabilities, and this can simplify the computation.", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" }, { - "11:55": "So more specifically, if we know the probability that a word is present, like in this case, so if we know this, and if we know the probability of the presence of the second word, then we can easily compute the absence probability, right? It is very easy to use this equation to do that, and so we take care of the computation of these probabilities of presence and absence of each word. Now let's look at the joint distribution. Let us assume that we also have available the probability that they occurred together. Now it is easy to see that we can actually compute all the rest of these probabilities based on these." + "time": "11:55", + "text": "So more specifically, if we know the probability that a word is present, like in this case, so if we know this, and if we know the probability of the presence of the second word, then we can easily compute the absence probability, right? It is very easy to use this equation to do that, and so we take care of the computation of these probabilities of presence and absence of each word. Now let's look at the joint distribution. Let us assume that we also have available the probability that they occurred together.
Now it is easy to see that we can actually compute all the rest of these probabilities based on these.", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" }, { - "12:46": "Specifically, for example, using this equation we can compute the probability that the first word occurred and the second word did not, because we know these probabilities in the boxes, and similarly, using this equation we can compute the probability that we observe only the second word. And then finally, this probability can be calculated by using this equation, because now this is known, and this is also known, and this is already known, right? So this can be easier to calculate. So now this can be calculated." + "time": "12:46", + "text": "Specifically, for example, using this equation we can compute the probability that the first word occurred and the second word did not, because we know these probabilities in the boxes, and similarly, using this equation we can compute the probability that we observe only the second word. And then finally, this probability can be calculated by using this equation, because now this is known, and this is also known, and this is already known, right? So this can be easier to calculate. So now this can be calculated.", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" }, { - "13:26": "So this slide shows that we only need to know how to compute these three probabilities that are shown in the boxes, namely the presence of each word and the co-occurrence of both words, in a segment. [MUSIC]" + "time": "13:26", + "text": "So this slide shows that we only need to know how to compute these three probabilities that are shown in the boxes, namely the presence of each word and the co-occurrence of both words, in a segment.
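Putting these steps together: from the three probabilities in the boxes (the presence of each word and their co-occurrence in a segment), the constraints determine the full joint distribution, and the KL-divergence form of mutual information can then be evaluated directly. A minimal sketch; the three input values below are made up for illustration:

```python
import math

# Assumed inputs: presence probability of each word and their co-occurrence.
p_w1 = 0.4    # p(X_w1 = 1)
p_w2 = 0.3    # p(X_w2 = 1)
p_11 = 0.2    # p(X_w1 = 1, X_w2 = 1)

# The remaining joint probabilities follow from the constraints in the text:
p_10 = p_w1 - p_11                # only w1 occurs
p_01 = p_w2 - p_11                # only w2 occurs
p_00 = 1.0 - p_11 - p_10 - p_01   # neither occurs

marg1 = {1: p_w1, 0: 1.0 - p_w1}
marg2 = {1: p_w2, 0: 1.0 - p_w2}
joint = {(1, 1): p_11, (1, 0): p_10, (0, 1): p_01, (0, 0): p_00}

# Mutual information as the KL-divergence of the observed joint
# distribution from the product of the marginals (the expected joint
# distribution under independence).
mi = sum(p * math.log2(p / (marg1[x] * marg2[y]))
         for (x, y), p in joint.items() if p > 0)
```

With estimates of the three inputs obtained from segment counts, this is the quantity used to rank candidate words against a fixed word.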
[MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/b1ZFI/2-3-syntagmatic-relation-discovery-mutual-information-part-1" } ] }, { "2-4-syntagmatic-relation-discovery-mutual-information-part-2": [ { - "0:00": "[SOUND]" + "time": "0:00", + "text": "[SOUND]", + "url": "https://www.coursera.org/learn/text-mining/lecture/8d6Wn/2-4-syntagmatic-relation-discovery-mutual-information-part-2" }, { - "0:06": "In general, we can use the empirical count of events in the observed data to estimate the probabilities. And a commonly used technique is called a maximum likelihood estimate, where we simply normalize the observed counts. So if we do that, we can compute these probabilities as follows. For estimating the probability that we see a word occurring in a segment, we simply normalize the count of segments that contain this word. So let's first take a look at the data here. On the right side, you see a list of some hypothesized data. These are segments. And in some segments you see both words occur; they are indicated as ones for both columns. In some other cases only one will occur, so only that column has one and the other column has zero. And of course, in some other cases none of the words occur, so they are both zeros. And for estimating these probabilities, we simply need to collect the three counts." + "time": "0:06", + "text": "In general, we can use the empirical count of events in the observed data to estimate the probabilities. And a commonly used technique is called a maximum likelihood estimate, where we simply normalize the observed counts. So if we do that, we can compute these probabilities as follows. For estimating the probability that we see a word occurring in a segment, we simply normalize the count of segments that contain this word. So let's first take a look at the data here. On the right side, you see a list of some hypothesized data. These are segments.
And in some segments you see both words occur; they are indicated as ones for both columns. In some other cases only one will occur, so only that column has one and the other column has zero. And of course, in some other cases none of the words occur, so they are both zeros. And for estimating these probabilities, we simply need to collect the three counts.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8d6Wn/2-4-syntagmatic-relation-discovery-mutual-information-part-2" }, { - "1:20": "So the three counts are first, the count of W1. And that's the total number of segments that contain word W1. That's just the ones in the column of W1. We can count how many ones we have seen there. The second count is for word W2, and we just count the ones in the second column. And this will give us the total number of segments that contain W2. The third count is when both words occur. So this time, we're going to count the segments where both columns have ones." + "time": "1:20", + "text": "So the three counts are first, the count of W1. And that's the total number of segments that contain word W1. That's just the ones in the column of W1. We can count how many ones we have seen there. The second count is for word W2, and we just count the ones in the second column. And this will give us the total number of segments that contain W2. The third count is when both words occur. So this time, we're going to count the segments where both columns have ones.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8d6Wn/2-4-syntagmatic-relation-discovery-mutual-information-part-2" }, { - "1:56": "And then, so this would give us the total number of segments where we have seen both W1 and W2. Once we have these counts, we can just normalize these counts by N, which is the total number of segments, and this will give us the probabilities that we need to compute mutual information. Now, there is a small problem, when we have zero counts sometimes.
And in this case, we don't want a zero probability because our data may be a small sample, and in general, we would believe that it's potentially possible for a [INAUDIBLE] to occur in any context. So, to address this problem, we can use a technique called smoothing. And that's basically to add some small constant to these counts, so that we don't get a zero probability in any case. Now, the best way to understand smoothing is to imagine that we actually observed more data than we actually have, because we'll pretend we observed some pseudo-segments, illustrated on the top right side of the slide. And these pseudo-segments would contribute additional counts of these words so that no event will have zero probability. Now, in particular we introduce four pseudo-segments. Each is weighted at one quarter. And these represent the four different combinations of occurrences of these words. So now each event, each combination, will have at least one count, or at least a non-zero count, from these pseudo-segments. So, in the actual segments that we observe, it's okay if we haven't observed all of the combinations. So more specifically, you can see the 0.5 here comes from the two ones in the two pseudo-segments, because each is weighted at one quarter. We add them up, we get 0.5. And similar to this, 0.25 comes from the one single pseudo-segment that indicates the two words occur together." + "time": "1:56", + "text": "And then, so this would give us the total number of segments where we have seen both W1 and W2. Once we have these counts, we can just normalize these counts by N, which is the total number of segments, and this will give us the probabilities that we need to compute mutual information. Now, there is a small problem, when we have zero counts sometimes. And in this case, we don't want a zero probability because our data may be a small sample, and in general, we would believe that it's potentially possible for a [INAUDIBLE] to occur in any context.
So, to address this problem, we can use a technique called smoothing. And that's basically to add some small constant to these counts, so that we don't get a zero probability in any case. Now, the best way to understand smoothing is to imagine that we actually observed more data than we actually have, because we'll pretend we observed some pseudo-segments, illustrated on the top right side of the slide. And these pseudo-segments would contribute additional counts of these words so that no event will have zero probability. Now, in particular we introduce four pseudo-segments. Each is weighted at one quarter. And these represent the four different combinations of occurrences of these words. So now each event, each combination, will have at least one count, or at least a non-zero count, from these pseudo-segments. So, in the actual segments that we observe, it's okay if we haven't observed all of the combinations. So more specifically, you can see the 0.5 here comes from the two ones in the two pseudo-segments, because each is weighted at one quarter. We add them up, we get 0.5. And similar to this, 0.25 comes from the one single pseudo-segment that indicates the two words occur together.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8d6Wn/2-4-syntagmatic-relation-discovery-mutual-information-part-2" }, { - "4:09": "And of course in the denominator we add the total number of pseudo-segments that we add; in this case, we added four pseudo-segments. Each is weighted at one quarter, so the total of the sum is one.
So, that's why in the denominator you'll see a one there." + "time": "4:09", + "text": "And of course in the denominator we add the total number of pseudo-segments that we add; in this case, we added four pseudo-segments. Each is weighted at one quarter, so the total of the sum is one. So, that's why in the denominator you'll see a one there.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8d6Wn/2-4-syntagmatic-relation-discovery-mutual-information-part-2" }, { - "4:25": "So, this basically concludes the discussion of how to compute these probabilities for syntagmatic relation discovery." + "time": "4:25", + "text": "So, this basically concludes the discussion of how to compute these probabilities for syntagmatic relation discovery.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8d6Wn/2-4-syntagmatic-relation-discovery-mutual-information-part-2" }, { - "4:36": "Now, so to summarize, syntagmatic relations can generally be discovered by measuring correlations between occurrences of two words. We've introduced the three concepts from information theory. Entropy, which measures the uncertainty of a random variable X. Conditional entropy, which measures the entropy of X given we know Y. And mutual information of X and Y, which measures the entropy reduction of X due to knowing Y, or entropy reduction of Y due to knowing X. They are the same. So these three concepts are actually very useful for other applications as well. That's why we spent some time to explain this in detail. But in particular, they are also very useful for discovering syntagmatic relations. In particular, mutual information is a principled way for discovering such a relation. It allows us to have values computed on different pairs of words that are comparable and so we can rank these pairs and discover the strongest syntagmatic relations from a collection of documents. Now, note that there is some relation between syntagmatic relation discovery and paradigmatic relation discovery. So we already discussed the possibility of using BM25 to achieve term weighting for terms in the context to potentially also suggest the candidates that have syntagmatic relations with the candidate word.
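The smoothed estimates described above can be written out directly: the four pseudo-segments, each weighted at one quarter, add 0.5 to each single-word count, 0.25 to the co-occurrence count, and 1 to the total number of segments. A minimal sketch with made-up segment data:

```python
# Each segment is a pair (w1 present?, w2 present?); made-up example data.
segments = [(1, 1), (1, 0), (0, 1), (1, 1), (0, 0)]
N = len(segments)

count_w1 = sum(x for x, _ in segments)        # segments containing w1
count_w2 = sum(y for _, y in segments)        # segments containing w2
count_both = sum(x * y for x, y in segments)  # segments containing both

# Smoothed maximum likelihood estimates: the pseudo-segments contribute
# 0.5, 0.5, and 0.25 to the three counts respectively, and 1 to the
# denominator (four pseudo-segments weighted 1/4 each).
p_w1 = (count_w1 + 0.5) / (N + 1)
p_w2 = (count_w2 + 0.5) / (N + 1)
p_both = (count_both + 0.25) / (N + 1)
```

No probability is zero even when a combination never appears in the sample, which keeps the logarithms in the mutual information formula well defined.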
But here, once we use mutual information to discover syntagmatic relations, we can also represent the context with this mutual information as weights. So this would give us another way to represent the context of a word, like \"cat\". And if we do the same for all the words, then we can cluster these words or compare the similarity between these words based on their context similarity. So this provides yet another way to do term weighting for paradigmatic relation discovery. And so, to summarize this whole part about word association mining: we introduced two basic associations, called paradigmatic and syntagmatic relations. These are fairly general; they apply to any items in any language, so the units don't have to be words, they can be phrases or entities." + "time": "4:36", + "text": "Now, so to summarize, syntagmatic relations can generally be discovered by measuring correlations between occurrences of two words. We've introduced the three concepts from information theory. Entropy, which measures the uncertainty of a random variable X. Conditional entropy, which measures the entropy of X given we know Y. And mutual information of X and Y, which measures the entropy reduction of X due to knowing Y, or entropy reduction of Y due to knowing X. They are the same. So these three concepts are actually very useful for other applications as well. That's why we spent some time to explain this in detail. But in particular, they are also very useful for discovering syntagmatic relations. In particular, mutual information is a principled way for discovering such a relation. It allows us to have values computed on different pairs of words that are comparable and so we can rank these pairs and discover the strongest syntagmatic relations from a collection of documents. Now, note that there is some relation between syntagmatic relation discovery and paradigmatic relation discovery.
So we already discussed the possibility of using BM25 to achieve term weighting for terms in the context to potentially also suggest the candidates that have syntagmatic relations with the candidate word. But here, once we use mutual information to discover syntagmatic relations, we can also represent the context with this mutual information as weights. So this would give us another way to represent the context of a word, like \"cat\". And if we do the same for all the words, then we can cluster these words or compare the similarity between these words based on their context similarity. So this provides yet another way to do term weighting for paradigmatic relation discovery. And so, to summarize this whole part about word association mining: we introduced two basic associations, called paradigmatic and syntagmatic relations. These are fairly general; they apply to any items in any language, so the units don't have to be words, they can be phrases or entities.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8d6Wn/2-4-syntagmatic-relation-discovery-mutual-information-part-2" }, { - "7:11": "We introduced multiple statistical approaches for discovering them, mainly showing that pure statistical approaches are viable for discovering both kinds of relations. And they can be combined to perform joint analysis, as well.
These approaches can be applied to any text with no human effort, mostly because they are based on counting of words, yet they can actually discover interesting relations of words.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8d6Wn/2-4-syntagmatic-relation-discovery-mutual-information-part-2" }, { - "7:44": "We can also use different ways with defining context and segment, and this would lead us to some interesting variations of applications. For example, the context can be very narrow like a few words, around a word, or a sentence, or maybe paragraphs, as using differing contexts would allows to discover different flavors of paradigmatical relations. And similarly, counting co-occurrences using let's say, visual information to discover syntagmatical relations. We also have to define the segment, and the segment can be defined as a narrow text window or a longer text article. And this would give us different kinds of associations. These discovery associations can support many other applications, in both information retrieval and text and data mining. So here are some recommended readings, if you want to know more about the topic. The first is a book with a chapter on collocations, which is quite relevant to the topic of these lectures. The second is an article about using various statistical measures to discover lexical atoms. Those are phrases that are non-compositional. For example, hot dog is not really a dog that's hot," + "time": "7:44", + "text": "We can also use different ways with defining context and segment, and this would lead us to some interesting variations of applications. For example, the context can be very narrow like a few words, around a word, or a sentence, or maybe paragraphs, as using differing contexts would allows to discover different flavors of paradigmatical relations. And similarly, counting co-occurrences using let's say, visual information to discover syntagmatical relations. 
We also have to define the segment, and the segment can be defined as a narrow text window or a longer text article. And this would give us different kinds of associations. These discovered associations can support many other applications, in both information retrieval and text and data mining. So here are some recommended readings, if you want to know more about the topic. The first is a book with a chapter on collocations, which is quite relevant to the topic of these lectures. The second is an article about using various statistical measures to discover lexical atoms. Those are phrases that are non-compositional. For example, hot dog is not really a dog that's hot,", + "url": "https://www.coursera.org/learn/text-mining/lecture/8d6Wn/2-4-syntagmatic-relation-discovery-mutual-information-part-2" }, { - "9:08": "blue chip is not a chip that's blue. And the paper has a discussion about some techniques for discovering such phrases." + "time": "9:08", + "text": "blue chip is not a chip that's blue. And the paper has a discussion about some techniques for discovering such phrases.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8d6Wn/2-4-syntagmatic-relation-discovery-mutual-information-part-2" }, { - "9:17": "The third one is a new paper on a unified way to discover both paradigmatical relations and a syntagmatical relations, using random works on word graphs. [SOUND]" + "time": "9:17", + "text": "The third one is a new paper on a unified way to discover both paradigmatic relations and syntagmatic relations, using random walks on word graphs. [SOUND]", + "url": "https://www.coursera.org/learn/text-mining/lecture/8d6Wn/2-4-syntagmatic-relation-discovery-mutual-information-part-2" } ] }, { "2-5-topic-mining-and-analysis-motivation-and-task-definition": [ { - "0:00": "[SOUND] >> This lecture is about topic mining and analysis. We're going to talk about its motivation and task definition."
+ "time": "0:00", + "text": "[SOUND] >> This lecture is about topic mining and analysis. We're going to talk about its motivation and task definition.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" }, { - "0:17": "In this lecture we're going to talk about different kind of mining task." + "time": "0:17", + "text": "In this lecture we're going to talk about different kind of mining task.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" }, { - "0:23": "As you see on this road map, we have just covered mining knowledge about language, namely discovery of word associations such as paradigmatic and relations and syntagmatic relations." + "time": "0:23", + "text": "As you see on this road map, we have just covered mining knowledge about language, namely discovery of word associations such as paradigmatic and relations and syntagmatic relations.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" }, { - "0:39": "Now, starting from this lecture, we're going to talk about mining another kind of knowledge, which is content mining, and trying to discover knowledge about the main topics in the text." + "time": "0:39", + "text": "Now, starting from this lecture, we're going to talk about mining another kind of knowledge, which is content mining, and trying to discover knowledge about the main topics in the text.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" }, { - "0:56": "And we call that topic mining and analysis." 
+ "time": "0:56", + "text": "And we call that topic mining and analysis.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" }, { - "0:59": "In this lecture, we're going to talk about its motivation and the task definition. So first of all, let's look at the concept of topic. So topic is something that we all understand, I think, but it's actually not that easy to formally define. Roughly speaking, topic is the main idea discussed in text data. And you can think of this as a theme or subject of a discussion or conversation. It can also have different granularities. For example, we can talk about the topic of a sentence. A topic of article, aa topic of paragraph or the topic of all the research articles in the research library, right, so different grand narratives of topics obviously have different applications." + "time": "0:59", + "text": "In this lecture, we're going to talk about its motivation and the task definition. So first of all, let's look at the concept of topic. So topic is something that we all understand, I think, but it's actually not that easy to formally define. Roughly speaking, topic is the main idea discussed in text data. And you can think of this as a theme or subject of a discussion or conversation. It can also have different granularities. For example, we can talk about the topic of a sentence. A topic of article, aa topic of paragraph or the topic of all the research articles in the research library, right, so different grand narratives of topics obviously have different applications.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" }, { - "1:46": "Indeed, there are many applications that require discovery of topics in text, and they're analyzed then. Here are some examples. For example, we might be interested in knowing about what are Twitter users are talking about today? 
Are they talking about NBA sports, or are they talking about some international events, etc.? Or we are interested in knowing about research topics. For example, one might be interested in knowing what are the current research topics in data mining, and how are they different from those five years ago? Now this involves discovery of topics in data mining literatures and also we want to discover topics in today's literature and those in the past. And then we can make a comparison. We might also be also interested in knowing what do people like about some products like the iPhone 6, and what do they dislike? And this involves discovering topics in positive opinions about iPhone 6 and also negative reviews about it. Or perhaps we're interested in knowing what are the major topics debated in 2012 presidential election?" + "time": "1:46", + "text": "Indeed, there are many applications that require discovering topics in text and then analyzing them. Here are some examples. For example, we might be interested in knowing what Twitter users are talking about today. Are they talking about NBA sports, or are they talking about some international events, etc.? Or we are interested in knowing about research topics. For example, one might be interested in knowing what are the current research topics in data mining, and how are they different from those five years ago? Now this involves discovery of topics in the data mining literature, and also we want to discover topics in today's literature and those in the past. And then we can make a comparison. We might also be interested in knowing what people like about some products like the iPhone 6, and what they dislike. And this involves discovering topics in positive opinions about the iPhone 6 and also negative reviews about it.
Or perhaps we're interested in knowing what are the major topics debated in the 2012 presidential election?", + "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" }, { - "2:59": "And all these have to do with discovering topics in text and analyzing them, and we're going to talk about a lot of techniques for doing this. In general we can view a topic as some knowledge about the world. So from text data we expect to discover a number of topics, and then these topics generally provide a description about the world. And it tells us something about the world. About a product, about a person etc." + "time": "2:59", + "text": "And all these have to do with discovering topics in text and analyzing them, and we're going to talk about a lot of techniques for doing this. In general we can view a topic as some knowledge about the world. So from text data we expect to discover a number of topics, and then these topics generally provide a description of the world. And it tells us something about the world: about a product, about a person, etc.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" }, { - "3:29": "Now when we have some non-text data, then we can have more context for analyzing the topics.
For example, we might know the time associated with the text data, or locations where the text data were produced, or the authors of the text, or the sources of the text, etc. All such meta data, or context variables can be associated with the topics that we discover, and then we can use these context variables help us analyze patterns of topics. For example, looking at topics over time, we would be able to discover whether there's a trending topic, or some topics might be fading away." + "time": "3:29", + "text": "Now when we have some non-text data, then we can have more context for analyzing the topics. For example, we might know the time associated with the text data, or locations where the text data were produced, or the authors of the text, or the sources of the text, etc. All such metadata, or context variables, can be associated with the topics that we discover, and then we can use these context variables to help us analyze patterns of topics. For example, looking at topics over time, we would be able to discover whether there's a trending topic, or some topics might be fading away.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" }, { - "4:15": "Soon you are looking at topics in different locations. We might know some insights about people's opinions in different locations." + "time": "4:15", + "text": "So when you are looking at topics in different locations, we might gain some insights about people's opinions in different locations.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" }, { - "4:26": "So that's why mining topics is very important. Now, let's look at the tasks of topic mining and analysis. In general, it would involve first discovering a lot of topics, in this case, k topics. And then we also would like to know, which topics are covered in which documents, to what extent.
So for example, in document one, we might see that Topic 1 is covered a lot, Topic 2 and Topic k are covered with a small portion.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" }, { - "4:58": "And other topics, perhaps, are not covered. Document two, on the other hand, covered Topic 2 very well, but it did not cover Topic 1 at all, and it also covers Topic k to some extent, etc., right? So now you can see there are generally two different tasks, or sub-tasks, the first is to discover k topics from a collection of text laid out. What are these k topics? Okay, major topics in the text they are. The second task is to figure out which documents cover which topics to what extent. So more formally, we can define the problem as follows. First, we have, as input, a collection of N text documents. Here we can denote the text collection as C, and denote text article as d i." + "time": "4:58", + "text": "And other topics, perhaps, are not covered. Document two, on the other hand, covered Topic 2 very well, but it did not cover Topic 1 at all, and it also covers Topic k to some extent, etc., right? So now you can see there are generally two different tasks, or sub-tasks: the first is to discover k topics from a collection of text data. What are these k topics? Okay, the major topics in the text data. The second task is to figure out which documents cover which topics to what extent. So more formally, we can define the problem as follows. First, we have, as input, a collection of N text documents. Here we can denote the text collection as C, and denote a text article as d i.
And, we generally also need to have as input the number of topics, k. But there may be techniques that can automatically suggest a number of topics. But in the techniques that we will discuss, which are also the most useful techniques, we often need to specify a number of topics.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" }, { - "6:14": "Now the output would then be the k topics that we would like to discover, in order as theta sub one through theta sub k." + "time": "6:14", + "text": "Now the output would then be the k topics that we would like to discover, denoted as theta sub one through theta sub k.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" }, { - "6:24": "Also we want to generate the coverage of topics in each document of d sub i And this is denoted by pi sub i j." + "time": "6:24", + "text": "Also we want to generate the coverage of topics in each document d sub i. And this is denoted by pi sub i j.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" }, { - "6:33": "And pi sub ij is the probability of document d sub i covering topic theta sub j. So obviously for each document, we have a set of such values to indicate to what extent the document covers, each topic." + "time": "6:33", + "text": "And pi sub ij is the probability of document d sub i covering topic theta sub j. So obviously for each document, we have a set of such values to indicate to what extent the document covers each topic.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" }, { - "6:48": "And we can assume that these probabilities sum to one. Because a document won't be able to cover other topics outside of the topics that we discussed, that we discovered.
So now, the question is, how do we define theta sub i, how do we define the topic? Now this problem has not been completely defined until we define what is exactly theta." + "time": "6:48", + "text": "And we can assume that these probabilities sum to one. Because a document won't be able to cover other topics outside of the topics that we discovered. So now, the question is, how do we define theta sub i, how do we define the topic? Now this problem has not been completely defined until we define what exactly theta is.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" }, { - "7:16": "So in the next few lectures, we're going to talk about different ways to define theta. [MUSIC]" + "time": "7:16", + "text": "So in the next few lectures, we're going to talk about different ways to define theta. [MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" } ] }, { "2-6-topic-mining-and-analysis-term-as-topic": [ { - "0:00": "[MUSIC]" + "time": "0:00", + "text": "[MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" }, { - "0:07": "This lecture is about topic mining and analysis." + "time": "0:07", + "text": "This lecture is about topic mining and analysis.", + "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" }, { - "0:12": "We're going to talk about using a term as topic. This is a slide that you have seen in a earlier lecture where we define the task of topic mining and analysis.
We also raised the question, how do we exactly define the topic of theta?" + "time": "0:12", + "text": "We're going to talk about using a term as a topic. This is a slide that you have seen in an earlier lecture where we defined the task of topic mining and analysis. We also raised the question, how do we exactly define the topic theta?", + "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" }, { - "0:31": "So in this lecture, we're going to offer one way to define it, and that's our initial idea. Our idea here is defining a topic simply as a term." + "time": "0:31", + "text": "So in this lecture, we're going to offer one way to define it, and that's our initial idea. Our idea here is defining a topic simply as a term.", + "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" }, { - "0:42": "A term can be a word or a phrase." + "time": "0:42", + "text": "A term can be a word or a phrase.", + "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" }, { - "0:45": "And in general, we can use these terms to describe topics. So our first thought is just to define a topic as one term. For example, we might have terms like sports, travel, or science, as you see here. Now if we define a topic in this way, we can then analyze the coverage of such topics in each document. Here for example, we might want to discover to what extent document one covers sports. And we found that 30% of the content of document one is about sports. And 12% is about the travel, etc. We might also discover document two does not cover sports at all. So the coverage is zero, etc." + "time": "0:45", + "text": "And in general, we can use these terms to describe topics. So our first thought is just to define a topic as one term. For example, we might have terms like sports, travel, or science, as you see here. Now if we define a topic in this way, we can then analyze the coverage of such topics in each document. Here for example, we might want to discover to what extent document one covers sports. And we found that 30% of the content of document one is about sports. And 12% is about travel, etc.
We might also discover document two does not cover sports at all. So the coverage is zero, etc.", + "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" }, { - "1:32": "So now, of course, as we discussed in the task definition for topic mining and analysis, we have two tasks. One is to discover the topics. And the second is to analyze coverage. So let's first think about how we can discover topics if we represent each topic by a term. So that means we need to mine k topical terms from a collection." + "time": "1:32", + "text": "So now, of course, as we discussed in the task definition for topic mining and analysis, we have two tasks. One is to discover the topics. And the second is to analyze coverage. So let's first think about how we can discover topics if we represent each topic by a term. So that means we need to mine k topical terms from a collection.", + "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" }, { - "2:01": "Now there are, of course, many different ways of doing that." + "time": "2:01", + "text": "Now there are, of course, many different ways of doing that.", + "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" }, { - "2:05": "And we're going to talk about a natural way of doing that, which is also likely effective. So first of all, we're going to parse the text data in the collection to obtain candidate terms. Here candidate terms can be words or phrases. Let's say the simplest solution is to just take each word as a term. These words then become candidate topics. Then we're going to design a scoring function to match how good each term is as a topic." + "time": "2:05", + "text": "And we're going to talk about a natural way of doing that, which is also likely effective. So first of all, we're going to parse the text data in the collection to obtain candidate terms. 
Here candidate terms can be words or phrases. Let's say the simplest solution is to just take each word as a term. These words then become candidate topics. Then we're going to design a scoring function to match how good each term is as a topic." + "time": "2:05", + "text": "And we're going to talk about a natural way of doing that, which is also likely effective. So first of all, we're going to parse the text data in the collection to obtain candidate terms. Here candidate terms can be words or phrases. Let's say the simplest solution is to just take each word as a term. These words then become candidate topics. Then we're going to design a scoring function to measure how good each term is as a topic.", + "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" }, { - "2:35": "So how can we design such a function? Well there are many things that we can consider. For example, we can use pure statistics to design such a scoring function." + "time": "2:35", + "text": "So how can we design such a function? Well there are many things that we can consider. For example, we can use pure statistics to design such a scoring function.", + "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" }, { - "2:45": "Intuitively, we would like to favor representative terms, meaning terms that can represent a lot of content in the collection. So that would mean we want to favor a frequent term. However, if we simply use the frequency to design the scoring function, then the highest scored terms would be general terms or functional terms like the, etc. Those terms occur very frequently English." + "time": "2:45", + "text": "Intuitively, we would like to favor representative terms, meaning terms that can represent a lot of content in the collection. So that would mean we want to favor a frequent term. However, if we simply use the frequency to design the scoring function, then the highest scored terms would be general terms or functional terms like the, etc. Those terms occur very frequently in English.", + "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" }, { - "3:14": "So we also want to avoid having such words on the top so we want to penalize such words. But in general, we would like to favor terms that are fairly frequent but not so frequent.
So a particular approach could be based on TF-IDF weighting from retrieval." + "time": "3:14", + "text": "So we also want to avoid having such words on the top, so we want to penalize such words. But in general, we would like to favor terms that are fairly frequent but not too frequent. So a particular approach could be based on TF-IDF weighting from retrieval.", + "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" }, { - "3:35": "And TF stands for term frequency. IDF stands for inverse document frequency. We talked about some of these ideas in the lectures about the discovery of word associations. So these are statistical methods, meaning that the function is defined mostly based on statistics. So the scoring function would be very general. It can be applied to any language, any text. But when we apply such a approach to a particular problem, we might also be able to leverage some domain-specific heuristics. For example, in news we might favor title words actually general. We might want to favor title words because the authors tend to use the title to describe the topic of an article." + "time": "3:35", + "text": "And TF stands for term frequency. IDF stands for inverse document frequency. We talked about some of these ideas in the lectures about the discovery of word associations. So these are statistical methods, meaning that the function is defined mostly based on statistics. So the scoring function would be very general. It can be applied to any language, any text. But when we apply such an approach to a particular problem, we might also be able to leverage some domain-specific heuristics. For example, in news we might favor title words.
We might want to favor title words because the authors tend to use the title to describe the topic of an article.", + "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" }, { - "4:27": "If we're dealing with tweets, we could also favor hashtags, which are invented to denote topics. So naturally, hashtags can be good candidates for representing topics." + "time": "4:27", + "text": "If we're dealing with tweets, we could also favor hashtags, which are invented to denote topics. So naturally, hashtags can be good candidates for representing topics.", + "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" }, { - "4:44": "Anyway, after we have this design scoring function, then we can discover the k topical terms by simply picking k terms with the highest scores. Now, of course, we might encounter situation where the highest scored terms are all very similar. They're semantically similar, or closely related, or even synonyms. So that's not desirable. So we also want to have coverage over all the content in the collection. So we would like to remove redundancy. And one way to do that is to do a greedy algorithm, which is sometimes called a maximal marginal relevance ranking. Basically, the idea is to go down the list based on our scoring function and gradually take terms to collect the k topical terms. The first term, of course, will be picked. When we pick the next term, we're going to look at what terms have already been picked and try to avoid picking a term that's too similar. So while we are considering the ranking of a term in the list, we are also considering the redundancy of the candidate term with respect to the terms that we already picked." + "time": "4:44", + "text": "Anyway, after we have this design scoring function, then we can discover the k topical terms by simply picking k terms with the highest scores. 
Now, of course, we might encounter situation where the highest scored terms are all very similar. They're semantically similar, or closely related, or even synonyms. So that's not desirable. So we also want to have coverage over all the content in the collection. So we would like to remove redundancy. And one way to do that is to do a greedy algorithm, which is sometimes called a maximal marginal relevance ranking. Basically, the idea is to go down the list based on our scoring function and gradually take terms to collect the k topical terms. The first term, of course, will be picked. When we pick the next term, we're going to look at what terms have already been picked and try to avoid picking a term that's too similar. So while we are considering the ranking of a term in the list, we are also considering the redundancy of the candidate term with respect to the terms that we already picked." + "time": "4:44", + "text": "Anyway, after we have designed this scoring function, then we can discover the k topical terms by simply picking k terms with the highest scores. Now, of course, we might encounter a situation where the highest scored terms are all very similar. They're semantically similar, or closely related, or even synonyms. So that's not desirable. So we also want to have coverage over all the content in the collection. So we would like to remove redundancy. And one way to do that is to do a greedy algorithm, which is sometimes called maximal marginal relevance ranking. Basically, the idea is to go down the list based on our scoring function and gradually take terms to collect the k topical terms. The first term, of course, will be picked. When we pick the next term, we're going to look at what terms have already been picked and try to avoid picking a term that's too similar. So while we are considering the ranking of a term in the list, we are also considering the redundancy of the candidate term with respect to the terms that we already picked.", + "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" }, { - "5:58": "And with some thresholding, then we can get a balance of the redundancy removal and also high score of a term. Okay, so after this that will get k topical terms. And those can be regarded as the topics that we discovered from the connection. Next, let's think about how we're going to compute the topic coverage pi sub ij." + "time": "5:58", + "text": "And with some thresholding, then we can get a balance of the redundancy removal and also high score of a term. Okay, so after this we will get k topical terms. And those can be regarded as the topics that we discovered from the collection. Next, let's think about how we're going to compute the topic coverage pi sub ij.", + "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" }, { - "6:23": "So looking at this picture, we have sports, travel and science and these topics. And now suppose you are give a document.
How should we pick out coverage of each topic in the document?" + "time": "6:23", + "text": "So looking at this picture, we have sports, travel and science as the topics. And now suppose you are given a document. How should we pick out the coverage of each topic in the document?", + "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" }, { - "6:36": "Well, one approach can be to simply count occurrences of these terms. So for example, sports might have occurred four times in this this document and travel occurred twice, etc. And then we can just normalize these counts as our estimate of the coverage probability for each topic. So in general, the formula would be to collect the counts of all the terms that represent the topics. And then simply normalize them so that the coverage of each topic in the document would add to one." + "time": "6:36", + "text": "Well, one approach can be to simply count occurrences of these terms. So for example, sports might have occurred four times in this document and travel occurred twice, etc. And then we can just normalize these counts as our estimate of the coverage probability for each topic. So in general, the formula would be to collect the counts of all the terms that represent the topics. And then simply normalize them so that the coverage of each topic in the document would add to one.", + "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" }, { - "7:15": "This forms a distribution of the topics for the document to characterize coverage of different topics in the document.
Now, as always, when we think about idea for solving problem, we have to ask the question, how good is this one? Or is this the best way of solving problem?" + "time": "7:15", + "text": "This forms a distribution of the topics for the document to characterize coverage of different topics in the document. Now, as always, when we think about an idea for solving a problem, we have to ask the question, how good is this one? Or is this the best way of solving the problem?", + "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" }, { - "7:38": "So now let's examine this approach. In general, we have to do some empirical evaluation" + "time": "7:38", + "text": "So now let's examine this approach. In general, we have to do some empirical evaluation", + "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" }, { - "7:46": "by using actual data sets and to see how well it works." + "time": "7:46", + "text": "by using actual data sets to see how well it works.", + "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" }, { - "7:52": "Well, in this case let's take a look at a simple example here. And we have a text document that's about a NBA basketball game." + "time": "7:52", + "text": "Well, in this case let's take a look at a simple example here. And we have a text document that's about an NBA basketball game.", + "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" }, { - "8:04": "So in terms of the content, it's about sports." + "time": "8:04", + "text": "So in terms of the content, it's about sports.", + "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" }, { - "8:08": "But if we simply count these words that represent our topics, we will find that the word sports actually did not occur in the article, even though the content is about the sports."
+ "time": "8:08", + "text": "But if we simply count these words that represent our topics, we will find that the word sports actually did not occur in the article, even though the content is about sports.", + "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" }, { - "8:22": "So the count of sports is zero. That means the coverage of sports would be estimated as zero. Now of course, the term science also did not occur in the document and its estimate is also zero. And that's okay. But sports certainly is not okay because we know the content is about sports. So this estimate has a problem." + "time": "8:22", + "text": "So the count of sports is zero. That means the coverage of sports would be estimated as zero. Now of course, the term science also did not occur in the document and its estimate is also zero. And that's okay. But sports certainly is not okay because we know the content is about sports. So this estimate has a problem.", + "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" }, { - "8:50": "What's worse, the term travel actually occurred in the document. So when we estimate the coverage of the topic travel, we have got a non-zero count. So its estimated coverage will be non-zero. So this obviously is also not desirable." + "time": "8:50", + "text": "What's worse, the term travel actually occurred in the document. So when we estimate the coverage of the topic travel, we have got a non-zero count. So its estimated coverage will be non-zero. So this obviously is also not desirable.", + "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" }, { - "9:08": "So this simple example illustrates some problems of this approach. First, when we count what words belong to the topic, we also need to consider related words. We can't simply just count the topic word sports.
In this case, it did not occur at all. But there are many related words like basketball, game, etc. So we need to count the related words also. The second problem is that a word like star can be actually ambiguous. So here it probably means a basketball star, but we can imagine it might also mean a star in the sky. So in that case, the star might actually suggest, perhaps, a topic of science." + "time": "9:08", + "text": "So this simple example illustrates some problems of this approach. First, when we count what words belong to the topic, we also need to consider related words. We can't simply just count the topic word sports. In this case, it did not occur at all. But there are many related words like basketball, game, etc. So we need to count the related words also. The second problem is that a word like star can be actually ambiguous. So here it probably means a basketball star, but we can imagine it might also mean a star in the sky. So in that case, the star might actually suggest, perhaps, a topic of science.", + "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" }, { - "9:54": "So we need to deal with that as well. Finally, a main restriction of this approach is that we have only one term to describe the topic, so it cannot really describe complicated topics. For example, a very specialized topic in sports would be harder to describe by using just a word or one phrase. We need to use more words. So this example illustrates some general problems with this approach of treating a term as a topic. First, it lacks expressive power. Meaning that it can only represent simple, general topics, but it cannot represent the complicated topics that might require more words to describe." + "time": "9:54", + "text": "So we need to deal with that as well. Finally, a main restriction of this approach is that we have only one term to describe the topic, so it cannot really describe complicated topics.
For example, a very specialized topic in sports would be harder to describe by using just a word or one phrase. We need to use more words. So this example illustrates some general problems with this approach of treating a term as a topic. First, it lacks expressive power. Meaning that it can only represent simple, general topics, but it cannot represent the complicated topics that might require more words to describe.", + "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" }, { - "10:37": "Second, it's incomplete in vocabulary coverage, meaning that the topic itself is only represented as one term. It does not suggest what other terms are related to the topic. Even if we're talking about sports, there are many terms that are related. So it does not allow us to easily count related terms in order to compute the coverage of this topic. Finally, there is this problem of word sense disambiguation. A topical term or related term can be ambiguous. For example, basketball star versus star in the sky." + "time": "10:37", + "text": "Second, it's incomplete in vocabulary coverage, meaning that the topic itself is only represented as one term. It does not suggest what other terms are related to the topic. Even if we're talking about sports, there are many terms that are related. So it does not allow us to easily count related terms in order to compute the coverage of this topic. Finally, there is this problem of word sense disambiguation. A topical term or related term can be ambiguous. For example, basketball star versus star in the sky.", + "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" }, { - "11:10": "So in the next lecture, we're going to talk about how to solve these problems with a better representation of a topic.
[MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" } ] }, { "2-7-topic-mining-and-analysis-probabilistic-topic-models": [ { - "0:06": "This lecture is about Probabilistic Topic Models for topic mining and analysis." + "time": "0:06", + "text": "This lecture is about Probabilistic Topic Models for topic mining and analysis.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" }, { - "0:13": "In this lecture, we're going to continue talking about the topic mining and analysis." + "time": "0:13", + "text": "In this lecture, we're going to continue talking about the topic mining and analysis.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" }, { - "0:18": "We're going to introduce probabilistic topic models." + "time": "0:18", + "text": "We're going to introduce probabilistic topic models.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" }, { - "0:22": "So this is a slide that you have seen earlier, where we discussed the problems with using a term as a topic. So, to solve these problems intuitively we need to use more words to describe the topic. And this will address the problem of lack of expressive power. When we have more words that we can use to describe the topic, that we can describe complicated topics. To address the second problem we need to introduce weights on words. This is what allows you to distinguish subtle differences in topics, and to introduce semantically related words in a fuzzy manner. Finally, to solve the problem of word ambiguity, we need to split ambiguous word, so that we can disambiguate its topic." + "time": "0:22", + "text": "So this is a slide that you have seen earlier, where we discussed the problems with using a term as a topic. 
So, to solve these problems intuitively we need to use more words to describe the topic. And this will address the problem of lack of expressive power. When we have more words that we can use to describe the topic, then we can describe complicated topics. To address the second problem we need to introduce weights on words. This is what allows you to distinguish subtle differences in topics, and to introduce semantically related words in a fuzzy manner. Finally, to solve the problem of word ambiguity, we need to split an ambiguous word, so that we can disambiguate its topic." + "time": "0:22", + "text": "So this is a slide that you have seen earlier, where we discussed the problems with using a term as a topic. So, to solve these problems intuitively we need to use more words to describe the topic. And this will address the problem of lack of expressive power. When we have more words that we can use to describe the topic, then we can describe complicated topics. To address the second problem we need to introduce weights on words. This is what allows you to distinguish subtle differences in topics, and to introduce semantically related words in a fuzzy manner. Finally, to solve the problem of word ambiguity, we need to split an ambiguous word, so that we can disambiguate its topic.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" }, { - "1:15": "It turns out that all these can be done by using a probabilistic topic model. And that's why we're going to spend a lot of lectures to talk about this topic. So the basic idea here is to improve the representation of a topic to be one distribution. So what you see now is the old representation, where we represented each topic with just one word, or one term, or one phrase. But now we're going to use a word distribution to describe the topic. So here you see that for sports. We're going to use a word distribution over, theoretically speaking, all the words in our vocabulary." + "time": "1:15", + "text": "It turns out that all these can be done by using a probabilistic topic model. And that's why we're going to spend a lot of lectures to talk about this topic. So the basic idea here is to improve the representation of a topic to be one distribution. So what you see now is the old representation, where we represented each topic with just one word, or one term, or one phrase. But now we're going to use a word distribution to describe the topic. So here you see that for sports.
We're going to use a word distribution over, theoretically speaking, all the words in our vocabulary.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" }, { - "1:54": "So for example, the high probability words here are sports, game, basketball, football, play, star, etc. These are sports related terms. And of course it would also give a non-zero probability to some other word like travel, which might be related to sports in general, but not so much related to this topic." + "time": "1:54", + "text": "So for example, the high probability words here are sports, game, basketball, football, play, star, etc. These are sports related terms. And of course it would also give a non-zero probability to some other word like travel, which might be related to sports in general, but not so much related to this topic.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" }, { - "2:18": "In general we can imagine a non-zero probability for all the words. And some words that are not related would have very, very small probabilities. And these probabilities will sum to one." + "time": "2:18", + "text": "In general we can imagine a non-zero probability for all the words. And some words that are not related would have very, very small probabilities. And these probabilities will sum to one.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" }, { - "2:31": "So that it forms a distribution over all the words." + "time": "2:31", + "text": "So that it forms a distribution over all the words.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" }, { - "2:36": "Now intuitively, this distribution represents a topic in that if we sample words from the distribution, we tend to see words that are related to sports."
+ "time": "2:36", + "text": "Now intuitively, this distribution represents a topic in that if we sample words from the distribution, we tend to see words that are related to sports.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" }, { - "2:48": "You can also see, as a very special case, if the probability mass is concentrated entirely on just one word, sports. And this basically degenerates to the simple representation of a topic as just one word." + "time": "2:48", + "text": "You can also see, as a very special case, if the probability mass is concentrated entirely on just one word, sports. And this basically degenerates to the simple representation of a topic as just one word.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" }, { - "3:04": "But as a distribution, this representation of a topic can, in general, involve many words to describe a topic and can model subtle differences in the semantics of a topic. Similarly we can model Travel and Science with their respective distributions. In the distribution for Travel we see top words like attraction, trip, flight etc." + "time": "3:04", + "text": "But as a distribution, this representation of a topic can, in general, involve many words to describe a topic and can model subtle differences in the semantics of a topic. Similarly we can model Travel and Science with their respective distributions. In the distribution for Travel we see top words like attraction, trip, flight etc.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" }, { - "3:31": "Whereas in Science we see scientist, spaceship, telescope, or genomics, and, you know, science related terms. Now that doesn't mean sports related terms will necessarily have zero probabilities for science.
In general we can imagine that all of these words have non-zero probabilities. It's just that for a particular topic, some words have very, very small probabilities." + "time": "3:31", + "text": "Whereas in Science we see scientist, spaceship, telescope, or genomics, and, you know, science related terms. Now that doesn't mean sports related terms will necessarily have zero probabilities for science. In general we can imagine that all of these words have non-zero probabilities. It's just that for a particular topic, some words have very, very small probabilities.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" }, { - "3:58": "Now you can also see there are some words that are shared by these topics. When I say shared it just means even with some probability threshold, you can still see one word occurring in multiple topics. In this case I mark them in black. So you can see travel, for example, occurred in all the three topics here, but with different probabilities. It has the highest probability for the Travel topic, 0.05. But with much smaller probabilities for Sports and Science, which makes sense. And similarly, you can see a Star also occurred in Sports and Science with reasonably high probabilities. Because they might be actually related to the two topics. So with this representation, it addresses the three problems that I mentioned earlier. First, it now uses multiple words to describe a topic. So it allows us to describe fairly complicated topics. Second, it assigns weights to terms. So now we can model subtle differences of semantics. And you can bring in related words together to model a topic. Third, because we have probabilities for the same word in different topics, we can disambiguate the sense of a word in the text to decode its underlying topic, addressing all these three problems with this new way of representing a topic.
So now of course our problem definition has been refined just slightly. The slide is very similar to what you've seen before except we have added a refinement for what a topic is. Now each topic is a word distribution, and for each word distribution we know that all the probabilities should sum to one over all the words in the vocabulary. So you see a constraint here. And we still have another constraint on the topic coverage, namely the pi's. So all the pi sub ij's must sum to one for the same document." + "time": "3:58", + "text": "Now you can also see there are some words that are shared by these topics. When I say shared it just means even with some probability threshold, you can still see one word occurring in multiple topics. In this case I mark them in black. So you can see travel, for example, occurred in all the three topics here, but with different probabilities. It has the highest probability for the Travel topic, 0.05. But with much smaller probabilities for Sports and Science, which makes sense. And similarly, you can see a Star also occurred in Sports and Science with reasonably high probabilities. Because they might be actually related to the two topics. So with this representation, it addresses the three problems that I mentioned earlier. First, it now uses multiple words to describe a topic. So it allows us to describe fairly complicated topics. Second, it assigns weights to terms. So now we can model subtle differences of semantics. And you can bring in related words together to model a topic. Third, because we have probabilities for the same word in different topics, we can disambiguate the sense of a word in the text to decode its underlying topic, addressing all these three problems with this new way of representing a topic. So now of course our problem definition has been refined just slightly. The slide is very similar to what you've seen before except we have added a refinement for what a topic is.
Now each topic is a word distribution, and for each word distribution we know that all the probabilities should sum to one over all the words in the vocabulary. So you see a constraint here. And we still have another constraint on the topic coverage, namely the pi's. So all the pi sub ij's must sum to one for the same document.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" }, { - "5:59": "So how do we solve this problem? Well, let's look at this problem as a computation problem. So we clearly specify its input and output and illustrate it here on this slide. Input of course is our text data. C is our collection but we also generally assume we know the number of topics, k. Or we hypothesize a number and then try to mine k topics, even though we don't know the exact topics that exist in the collection. And V is the vocabulary, a set of words that determines what units would be treated as the basic units for analysis. In most cases we'll use words as the basis for analysis. And that means each word is a unit." + "time": "5:59", + "text": "So how do we solve this problem? Well, let's look at this problem as a computation problem. So we clearly specify its input and output and illustrate it here on this slide. Input of course is our text data. C is our collection but we also generally assume we know the number of topics, k. Or we hypothesize a number and then try to mine k topics, even though we don't know the exact topics that exist in the collection. And V is the vocabulary, a set of words that determines what units would be treated as the basic units for analysis. In most cases we'll use words as the basis for analysis.
And that means each word is a unit.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" }, { - "6:47": "Now the output would consist of, first, a set of topics represented by theta i's. Each theta i is a word distribution." + "time": "6:47", + "text": "Now the output would consist of, first, a set of topics represented by theta i's. Each theta i is a word distribution.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" }, { - "6:56": "And we also want to know the coverage of topics in each document. So those are the same pi ij's that we have seen before." + "time": "6:56", + "text": "And we also want to know the coverage of topics in each document. So those are the same pi ij's that we have seen before.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" }, { - "7:07": "So given a set of text data we would like to compute all these distributions and all these coverages as you have seen on this slide." + "time": "7:07", + "text": "So given a set of text data we would like to compute all these distributions and all these coverages as you have seen on this slide.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" }, { - "7:18": "Now of course there may be many different ways of solving this problem. In theory, you can write the [INAUDIBLE] program to solve this problem, but here we're going to introduce a general way of solving this problem called a generative model. And this is, in fact, a very general idea and it's a principled way of using statistical modeling to solve text mining problems. And here I dimmed the picture that you have seen before in order to show the generation process. So the idea of this approach is actually to first design a model for our data.
So we design a probabilistic model to model how the data are generated. Of course, this is based on our assumption. The actual data aren't necessarily generated this way. So that gives us a probability distribution of the data that you are seeing on this slide. Given a particular model and parameters that are denoted by lambda. So this lambda actually consists of all the parameters that we're interested in. And these parameters in general will control the behavior of the probabilistic model. Meaning that if you set these parameters to different values, it will give some data points higher probabilities than others. Now in this case of course, for our text mining problem or more precisely topic mining problem we have the following parameters. First of all we have theta i's, each of which is a word distribution, and then we have a set of pi's for each document. And since we have n documents, so we have n sets of pi's, and in each set, the pi values will sum to one. So this is to say that we first would pretend we already have these word distributions and the coverage numbers. And then we can see how we can generate data by using such distributions. So how do we model the data in this way? And we assume that the data are actually samples drawn from such a model that depends on these parameters. Now one interesting question here is to" + "time": "7:18", + "text": "Now of course there may be many different ways of solving this problem. In theory, you can write the [INAUDIBLE] program to solve this problem, but here we're going to introduce a general way of solving this problem called a generative model. And this is, in fact, a very general idea and it's a principled way of using statistical modeling to solve text mining problems. And here I dimmed the picture that you have seen before in order to show the generation process. So the idea of this approach is actually to first design a model for our data.
So we design a probabilistic model to model how the data are generated. Of course, this is based on our assumption. The actual data aren't necessarily generated this way. So that gives us a probability distribution of the data that you are seeing on this slide. Given a particular model and parameters that are denoted by lambda. So this lambda actually consists of all the parameters that we're interested in. And these parameters in general will control the behavior of the probabilistic model. Meaning that if you set these parameters to different values, it will give some data points higher probabilities than others. Now in this case of course, for our text mining problem or more precisely topic mining problem we have the following parameters. First of all we have theta i's, each of which is a word distribution, and then we have a set of pi's for each document. And since we have n documents, so we have n sets of pi's, and in each set, the pi values will sum to one. So this is to say that we first would pretend we already have these word distributions and the coverage numbers. And then we can see how we can generate data by using such distributions. So how do we model the data in this way? And we assume that the data are actually samples drawn from such a model that depends on these parameters. Now one interesting question here is to", + "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" }, { - "9:32": "think about how many parameters are there in total? Now obviously we can already see n multiplied by k parameters for the pi's. We also see k theta i's. But each theta i is actually a set of probability values, right? It's a distribution of words. So I leave this as an exercise for you to figure out exactly how many parameters there are here. Now once we set up the model then we can fit the model to our data. Meaning that we can estimate the parameters or infer the parameters based on the data.
In other words we would like to adjust these parameter values until we give our data set the maximum probability. I just said, depending on the parameter values, some data points will have higher probabilities than others. What we're interested in, here, is what parameter values will give our data set the highest probability? So I also illustrate the problem with a picture that you see here. On the X axis I just illustrate lambda, the parameters, as a one dimensional variable. It's an oversimplification, obviously, but it suffices to show the idea. And the Y axis shows the probability of the data we observe. This probability obviously depends on this setting of lambda. So that's why it varies as you change the value of lambda. What we're interested in here is to find the lambda star." + "time": "9:32", + "text": "think about how many parameters are there in total? Now obviously we can already see n multiplied by k parameters for the pi's. We also see k theta i's. But each theta i is actually a set of probability values, right? It's a distribution of words. So I leave this as an exercise for you to figure out exactly how many parameters there are here. Now once we set up the model then we can fit the model to our data. Meaning that we can estimate the parameters or infer the parameters based on the data. In other words we would like to adjust these parameter values until we give our data set the maximum probability. I just said, depending on the parameter values, some data points will have higher probabilities than others. What we're interested in, here, is what parameter values will give our data set the highest probability? So I also illustrate the problem with a picture that you see here. On the X axis I just illustrate lambda, the parameters, as a one dimensional variable. It's an oversimplification, obviously, but it suffices to show the idea. And the Y axis shows the probability of the data we observe. This probability obviously depends on this setting of lambda.
So that's why it varies as you change the value of lambda. What we're interested in here is to find the lambda star.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" }, { - "11:05": "That would maximize the probability of the observed data." + "time": "11:05", + "text": "That would maximize the probability of the observed data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" }, { - "11:10": "So this would be, then, our estimate of the parameters. And these parameters, note, are precisely what we hope to discover from text data. So we'd treat these parameters as actually the outcome or the output of the data mining algorithm. So this is the general idea of using a generative model for text mining. First, we design a model with some parameter values to fit the data as well as we can. After we have fit the data, we will recover some parameter values. We will use the specific parameter values, and those would be the output of the algorithm. And we'll treat those as actually the discovered knowledge from text data. By varying the model of course we can discover different knowledge. So to summarize, we introduced a new way of representing a topic, namely representing it as a word distribution, and this has the advantage of using multiple words to describe a complicated topic. It also allows us to assign weights on words so we can model subtle variations of semantics. We talked about the task of topic mining and analysis, where we define a topic as a distribution. So the input is a collection of text articles, a number of topics, and a vocabulary set, and the output is a set of topics.
The first is the constraints on the worded distributions. In each worded distribution the probability of all the words must sum to 1, all the words in the vocabulary. The second constraint is on the topic coverage in each document. A document is not allowed to recover a topic outside of the set of topics that we are discovering. So, the coverage of each of these k topics would sum to one for a document. We also introduce a general idea of using a generative model for text mining. And the idea here is, first we're design a model to model the generation of data. We simply assume that they are generative in this way. And inside the model we embed some parameters that we're interested in denoted by lambda." + "time": "11:10", + "text": "So this would be, then, our estimate of the parameters. And these parameters, note that are precisely what we hoped to discover from text data. So we'd treat these parameters as actually the outcome or the output of the data mining algorithm. So this is the general idea of using a generative model for text mining. First, we design a model with some parameter values to fit the data as well as we can. After we have fit the data, we will recover some parameter value. We will use the specific parameter value And those would be the output of the algorithm. And we'll treat those as actually the discovered knowledge from text data. By varying the model of course we can discover different knowledge. So to summarize, we introduced a new way of representing topic, namely representing as word distribution and this has the advantage of using multiple words to describe a complicated topic.It also allow us to assign weights on words so we have more than several variations of semantics. We talked about the task of topic mining, and answers. When we define a topic as distribution. So the importer is a clashing of text articles and a number of topics and a vocabulary set and the output is a set of topics. 
Each is a word distribution and also the coverage of all the topics in each document. And these are formally represented by theta i's and pi i's. And we have two constraints here for these parameters. The first is the constraints on the word distributions. In each word distribution the probability of all the words must sum to one over all the words in the vocabulary. The second constraint is on the topic coverage in each document. A document is not allowed to cover a topic outside of the set of topics that we are discovering. So, the coverage of each of these k topics would sum to one for a document. We also introduce a general idea of using a generative model for text mining. And the idea here is, first we design a model to model the generation of the data. We simply assume that the data are generated in this way. And inside the model we embed some parameters that we're interested in, denoted by lambda.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" }, { - "13:36": "And then we can infer the most likely parameter values lambda star, given a particular data set." + "time": "13:36", + "text": "And then we can infer the most likely parameter values lambda star, given a particular data set.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" }, { - "13:43": "And we can then take the lambda star as knowledge discovered from the text for our problem." + "time": "13:43", + "text": "And we can then take the lambda star as knowledge discovered from the text for our problem.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" }, { - "13:50": "And we can adjust the design of the model and the parameters to discover various kinds of knowledge from text. As you will see later in the other lectures.
[MUSIC]" + "time": "13:50", + "text": "And we can adjust the design of the model and the parameters to discover various kinds of knowledge from text, as you will see later in other lectures. [MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" } ] }, { "2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1": [ { - "0:00": "[SOUND] >> This lecture is about the Overview of Statistical Language Models, which cover probabilistic topic models as special cases. In this lecture we're going to give an overview of Statistical Language Models. These models are general models that cover probabilistic topic models as special cases. So first off, what is a Statistical Language Model?" + "time": "0:00", + "text": "[SOUND] >> This lecture is about the Overview of Statistical Language Models, which cover probabilistic topic models as special cases. In this lecture we're going to give an overview of Statistical Language Models. These models are general models that cover probabilistic topic models as special cases. So first off, what is a Statistical Language Model?", + "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" }, { - "0:31": "A Statistical Language Model is basically a probability distribution over word sequences. So, for example, we might have a distribution that gives, today is Wednesday, a probability of .001. It might give today Wednesday is, which is a non-grammatical sentence, a very, very small probability as shown here." + "time": "0:31", + "text": "A Statistical Language Model is basically a probability distribution over word sequences. So, for example, we might have a distribution that gives, today is Wednesday, a probability of .001. 
It might give today Wednesday is, which is a non-grammatical sentence, a very, very small probability as shown here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" }, { - "0:54": "And similarly another sentence, the eigenvalue is positive, might get a probability of .00001. So as you can see such a distribution clearly is Context Dependent. It depends on the Context of Discussion. Some Word Sequences might have higher probabilities than others, but the same Sequence of Words might have different probabilities in different contexts." + "time": "0:54", + "text": "And similarly another sentence, the eigenvalue is positive, might get a probability of .00001. So as you can see such a distribution clearly is Context Dependent. It depends on the Context of Discussion. Some Word Sequences might have higher probabilities than others, but the same Sequence of Words might have different probabilities in different contexts.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" }, { - "1:20": "And so this suggests that such a distribution can actually characterize a topic" + "time": "1:20", + "text": "And so this suggests that such a distribution can actually characterize a topic", + "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" }, { - "1:26": "such a model can also be regarded as a Probabilistic Mechanism for generating text." + "time": "1:26", + "text": "such a model can also be regarded as a Probabilistic Mechanism for generating text.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" }, { - "1:33": "And that just means we can view text data as data observed from such a model. 
For this reason, we call such a model a Generative Model. So, now given a model we can then sample sequences of words. So, for example, based on the distribution that I have shown here on this slide, we may sample a sequence like today is Wednesday because it has a relatively high probability. We might often get such a sequence. We might also get the eigenvalue is positive sometimes, with a smaller probability, and very, very occasionally we might get today Wednesday is, because its probability is so small." + "time": "1:33", + "text": "And that just means we can view text data as data observed from such a model. For this reason, we call such a model a Generative Model. So, now given a model we can then sample sequences of words. So, for example, based on the distribution that I have shown here on this slide, we may sample a sequence like today is Wednesday because it has a relatively high probability. We might often get such a sequence. We might also get the eigenvalue is positive sometimes, with a smaller probability, and very, very occasionally we might get today Wednesday is, because its probability is so small.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" }, { - "2:24": "So in general, in order to characterize such a distribution we must specify probability values for all these different sequences of words. Obviously, it's impossible to specify that because it's impossible to enumerate all of the possible sequences of words. So in practice, we will have to simplify the model in some way. So, the simplest language model is called the Unigram Language Model. In such a case, we simply assume that the text is generated by generating each word independently." + "time": "2:24", + "text": "So in general, in order to characterize such a distribution we must specify probability values for all these different sequences of words. 
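The sampling process described here (drawing each word independently from a word distribution) can be sketched in Python. This is a minimal illustration; the vocabulary and probability values below are made-up toy numbers, not the lecture's actual figures:

```python
import random

# Toy unigram language model: a probability distribution over a tiny vocabulary.
# These values are illustrative only; they must sum to 1.
unigram_lm = {"today": 0.25, "is": 0.30, "wednesday": 0.15,
              "eigenvalue": 0.05, "positive": 0.05, "the": 0.20}

assert abs(sum(unigram_lm.values()) - 1.0) < 1e-9

def sample_text(model, length, seed=None):
    """Generate `length` words by drawing each word independently from the model."""
    rng = random.Random(seed)
    words = list(model)
    weights = [model[w] for w in words]
    return [rng.choices(words, weights=weights)[0] for _ in range(length)]

print(sample_text(unigram_lm, 5, seed=0))
```

High-probability words such as "today" and "is" appear often in the samples, while low-probability words appear only occasionally, which is exactly the behavior the transcript describes.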
Obviously, it's impossible to specify that because it's impossible to enumerate all of the possible sequences of words. So in practice, we will have to simplify the model in some way. So, the simplest language model is called the Unigram Language Model. In such a case, we simply assume that the text is generated by generating each word independently.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" }, { - "3:02": "But in general, the words may not be generated independently. But after we make this assumption, we can significantly simplify the language model." + "time": "3:02", + "text": "But in general, the words may not be generated independently. But after we make this assumption, we can significantly simplify the language model.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" }, { - "3:12": "Basically, now the probability of a sequence of words, w1 through wn, will be just the product of the probability of each word." + "time": "3:12", + "text": "Basically, now the probability of a sequence of words, w1 through wn, will be just the product of the probability of each word.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" }, { - "3:24": "So for such a model, we have as many parameters as the number of words in our vocabulary. So here we assume we have n words, so we have n probabilities, one for each word, and they sum to 1. So, now we assume that our text is a sample drawn according to this word distribution. That just means, we're going to draw a word each time and then eventually we'll get a text." + "time": "3:24", + "text": "So for such a model, we have as many parameters as the number of words in our vocabulary. 
So here we assume we have n words, so we have n probabilities, one for each word, and they sum to 1. So, now we assume that our text is a sample drawn according to this word distribution. That just means, we're going to draw a word each time and then eventually we'll get a text.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" }, { - "3:53": "So for example, now again, we can try to sample words according to the distribution. We might get Wednesday often or today often." + "time": "3:53", + "text": "So for example, now again, we can try to sample words according to the distribution. We might get Wednesday often or today often.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" }, { - "4:06": "And some other words like eigenvalue might have a small probability, etcetera. But with this, we actually can also compute the probability of every sequence, even though our model only specifies the probabilities of words. And this is because of the independence. So specifically, we can compute the probability of today is Wednesday." + "time": "4:06", + "text": "And some other words like eigenvalue might have a small probability, etcetera. But with this, we actually can also compute the probability of every sequence, even though our model only specifies the probabilities of words. And this is because of the independence. So specifically, we can compute the probability of today is Wednesday.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" }, { - "4:34": "Because it's just a product of the probability of today, the probability of is, and the probability of Wednesday. 
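The product computation described here is easy to make concrete. The probabilities below play the role of the lecture's "fake numbers" but are invented for this sketch, not taken from the slide:

```python
# Toy unigram probabilities (illustrative values, not the lecture's actual numbers).
p = {"today": 0.002, "is": 0.001, "wednesday": 0.000015}

def sequence_probability(model, words):
    """Under a unigram model, P(w1..wn) is just the product of each word's probability."""
    prob = 1.0
    for w in words:
        prob *= model[w]
    return prob

# 0.002 * 0.001 * 0.000015
print(sequence_probability(p, ["today", "is", "wednesday"]))
```

With n word probabilities we can therefore score any sequence over the vocabulary, which is the point the transcript makes about characterizing the whole distribution with only n parameters.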
For example, I show some fake numbers here, and when you multiply these numbers together you get the probability of today is Wednesday. So as you can see, with N probabilities, one for each word, we actually can characterize the probability distribution over all kinds of sequences of words. And so, this is a very simple model. It ignores the word order. So it may not be sufficient, in fact, for some problems, such as speech recognition, where you may care about the order of words. But it turns out to be quite sufficient for many tasks that involve topic analysis. And that's also what we're interested in here. So when we have a model, we generally have two problems that we can think about. One is, given a model, how likely are we to observe a certain kind of data point? That is, we are interested in the Sampling Process. The other is the Estimation Process. And that is to estimate the parameters of a model given some observed data, and we're going to talk about that in a moment. Let's first talk about the sampling. So, here I show two examples of Word Distributions or Unigram Language Models. The first one has higher probabilities for words like text, mining, association, etcetera." + "time": "4:34", + "text": "Because it's just a product of the probability of today, the probability of is, and the probability of Wednesday. For example, I show some fake numbers here, and when you multiply these numbers together you get the probability of today is Wednesday. So as you can see, with N probabilities, one for each word, we actually can characterize the probability distribution over all kinds of sequences of words. And so, this is a very simple model. It ignores the word order. So it may not be sufficient, in fact, for some problems, such as speech recognition, where you may care about the order of words. But it turns out to be quite sufficient for many tasks that involve topic analysis. And that's also what we're interested in here. 
So when we have a model, we generally have two problems that we can think about. One is, given a model, how likely are we to observe a certain kind of data point? That is, we are interested in the Sampling Process. The other is the Estimation Process. And that is to estimate the parameters of a model given some observed data, and we're going to talk about that in a moment. Let's first talk about the sampling. So, here I show two examples of Word Distributions or Unigram Language Models. The first one has higher probabilities for words like text, mining, association, etcetera.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" }, { - "6:10": "Now this signals a topic about text mining because when we sample words from such a distribution, we tend to see words that often occur in a text mining context." + "time": "6:10", + "text": "Now this signals a topic about text mining because when we sample words from such a distribution, we tend to see words that often occur in a text mining context.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" }, { - "6:23": "So in this case, if we ask the question about what is the probability of generating a particular document. 
Then, we likely will see text that looks like a text mining paper. Of course, the text that we generate by drawing words from this distribution is unlikely to be coherent. Although, the probability of generating a text mining [INAUDIBLE] published in the top conference is non-zero, assuming that no word has a zero probability in the distribution. And that just means, we can essentially generate all kinds of text documents, including very meaningful text documents." + "time": "6:23", + "text": "So in this case, if we ask the question about what is the probability of generating a particular document. Then, we likely will see text that looks like a text mining paper. Of course, the text that we generate by drawing words from this distribution is unlikely to be coherent. Although, the probability of generating a text mining [INAUDIBLE] published in the top conference is non-zero, assuming that no word has a zero probability in the distribution. And that just means, we can essentially generate all kinds of text documents, including very meaningful text documents.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" }, { - "7:07": "Now, the second distribution shown on the bottom has different words with high probabilities. So food, [INAUDIBLE], healthy, [INAUDIBLE], etcetera. So this clearly indicates a different topic. In this case it's probably about health. So if we sample a word from such a distribution, then the probability of observing a text mining paper would be very, very small." + "time": "7:07", + "text": "Now, the second distribution shown on the bottom has different words with high probabilities. So food, [INAUDIBLE], healthy, [INAUDIBLE], etcetera. So this clearly indicates a different topic. In this case it's probably about health. So if we sample a word from such a distribution, then the probability of observing a text mining paper would be very, very small.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" }, { - "7:32": "On the other hand, the probability of observing a text that looks like a food nutrition paper would be high, relatively higher." 
+ "time": "7:32", + "text": "On the other hand, the probability of observing a text that looks like a food nutrition paper would be high, relatively higher.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" }, { - "7:41": "So that just means, different distributions will generate different text. Now let's look at the estimation problem. In this case, we're going to assume that we have observed the data. We know exactly what the text data looks like. In this case, let's assume we have a text mining paper. In fact, it's the abstract of a paper, so the total number of words is 100. And I've shown some counts of individual words here." + "time": "7:41", + "text": "So that just means, different distributions will generate different text. Now let's look at the estimation problem. In this case, we're going to assume that we have observed the data. We know exactly what the text data looks like. In this case, let's assume we have a text mining paper. In fact, it's the abstract of a paper, so the total number of words is 100. And I've shown some counts of individual words here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" }, { - "8:12": "Now, if we ask the question, what is the most likely" + "time": "8:12", + "text": "Now, if we ask the question, what is the most likely", + "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" }, { - "8:17": "Language Model that has been used to generate this text data? 
Assuming that the text is observed from some Language Model, what's our best guess of this Language Model?", + "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" }, { - "8:30": "Okay, so the problem now is just to estimate the probabilities of these words. As I've shown here." + "time": "8:30", + "text": "Okay, so the problem now is just to estimate the probabilities of these words. As I've shown here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" }, { - "8:37": "So what do you think? What would be your guess?" + "time": "8:37", + "text": "So what do you think? What would be your guess?", + "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" }, { - "8:40": "Would you guess text has a very small probability, or a relatively large probability?" + "time": "8:40", + "text": "Would you guess text has a very small probability, or a relatively large probability?", + "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" }, { - "8:48": "What about query? Well, your guess probably would be dependent on how many times we have observed this word in the text data, right? And if you think about it for a moment. And if you are like many others, you would have guessed that, well, text has a probability of 10 out of 100 because I've observed the text 10 times in the text that has a total of 100 words. And similarly, mining has 5 out of 100. And query has a relatively small probability, just observed for once. So it's 1 out of 100. Right, so that, intuitively, is a reasonable guess. But the question is, is this our best guess or best estimate of the parameters?" + "time": "8:48", + "text": "What about query? 
Well, your guess probably would be dependent on how many times we have observed this word in the text data, right? And if you think about it for a moment. And if you are like many others, you would have guessed that, well, text has a probability of 10 out of 100, because I've observed text 10 times in the text that has a total of 100 words. And similarly, mining has 5 out of 100. And query has a relatively small probability, just observed once. So it's 1 out of 100. Right, so that, intuitively, is a reasonable guess. But the question is, is this our best guess or best estimate of the parameters?" + "time": "8:48", + "text": "What about query? Well, your guess probably would be dependent on how many times we have observed this word in the text data, right? And if you think about it for a moment. And if you are like many others, you would have guessed that, well, text has a probability of 10 out of 100, because I've observed text 10 times in the text that has a total of 100 words. And similarly, mining has 5 out of 100. And query has a relatively small probability, just observed once. So it's 1 out of 100. Right, so that, intuitively, is a reasonable guess. But the question is, is this our best guess or best estimate of the parameters?", + "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" }, { - "9:37": "Of course, in order to answer this question, we have to define what we mean by best. In this case, it turns out that our guesses are indeed the best in some sense, and this is called the Maximum Likelihood Estimate. And it's the best in that it will give the observed data the maximum probability." + "time": "9:37", + "text": "Of course, in order to answer this question, we have to define what we mean by best. In this case, it turns out that our guesses are indeed the best in some sense, and this is called the Maximum Likelihood Estimate. And it's the best in that it will give the observed data the maximum probability.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" }, { - "10:01": "Meaning that, if you change the estimate somehow, even slightly, then the probability of the observed text data will be somewhat smaller. And this is called a Maximum Likelihood Estimate. 
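The intuitive guess described here is exactly the maximum likelihood estimate for a unigram model: p(w) = count(w) / total words. A minimal sketch using the counts the lecture gives (text 10 times, mining 5 times, query once, in a 100-word abstract):

```python
# Maximum likelihood estimate for a unigram language model: p(w) = count(w) / N.
# Counts are from the lecture's example (a 100-word abstract).
counts = {"text": 10, "mining": 5, "query": 1}
total_words = 100

mle = {w: c / total_words for w, c in counts.items()}

print(mle["text"])   # -> 0.1
print(mle["mining"]) # -> 0.05
print(mle["query"])  # -> 0.01
```

Any word not observed in the abstract gets probability 0 under this estimate, which is the zero-probability problem the next lecture raises.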
[MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" } ] }, { "2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2": [ { - "0:00": "[MUSIC] So now let's talk about the problem a little bit more, and specifically let's talk about the two different ways of estimating the parameters. One is called the Maximum Likelihood estimate that I already just mentioned. The other is Bayesian estimation. So in maximum likelihood estimation, we define best as meaning the data likelihood has reached the maximum. So formally it's given by this expression here, where we define the estimate as a arg max of the probability of x given theta." + "time": "0:00", + "text": "[MUSIC] So now let's talk about the problem a little bit more, and specifically let's talk about the two different ways of estimating the parameters. One is called the Maximum Likelihood estimate that I already just mentioned. The other is Bayesian estimation. So in maximum likelihood estimation, we define best as meaning the data likelihood has reached the maximum. So formally it's given by this expression here, where we define the estimate as a arg max of the probability of x given theta.", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "0:46": "So, arg max here just means its actually a function that will turn. The argument that gives the function maximum value, adds the value. So the value of arg max is not the value of this function. But rather, the argument that has made it the function reaches maximum. So in this case the value of arg max is theta. It's the theta that makes the probability of X, given theta, reach it's maximum. So this estimate that in due it also makes sense and it's often very useful, and it seeks the premise that best explains the data. 
But it has a problem, when the data is too small because when the data points are too small, there are very few data points. The sample is small, then if we trust data in entirely and try to fit the data and then we'll be biased. So in the case of text data, let's say, all observed 100 words did not contain another word related to text mining. Now, our maximum likelihood estimator will give that word a zero probability. Because giving the non-zero probability would take away probability mass from some observer word. Which obviously is not optimal in terms of maximizing the likelihood of the observer data." + "time": "0:46", + "text": "So, arg max here just means its actually a function that will turn. The argument that gives the function maximum value, adds the value. So the value of arg max is not the value of this function. But rather, the argument that has made it the function reaches maximum. So in this case the value of arg max is theta. It's the theta that makes the probability of X, given theta, reach it's maximum. So this estimate that in due it also makes sense and it's often very useful, and it seeks the premise that best explains the data. But it has a problem, when the data is too small because when the data points are too small, there are very few data points. The sample is small, then if we trust data in entirely and try to fit the data and then we'll be biased. So in the case of text data, let's say, all observed 100 words did not contain another word related to text mining. Now, our maximum likelihood estimator will give that word a zero probability. Because giving the non-zero probability would take away probability mass from some observer word. 
Which obviously is not optimal in terms of maximizing the likelihood of the observer data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "2:11": "But this zero probability for all the unseen words may not be reasonable sometimes. Especially, if we want the distribution to characterize the topic of text mining. So one way to address this problem is actually to use Bayesian estimation, where we actually would look at the both the data, and our prior knowledge about the parameters. We assume that we have some prior belief about the parameters. Now in this case of course, so we are not" + "time": "2:11", + "text": "But this zero probability for all the unseen words may not be reasonable sometimes. Especially, if we want the distribution to characterize the topic of text mining. So one way to address this problem is actually to use Bayesian estimation, where we actually would look at the both the data, and our prior knowledge about the parameters. We assume that we have some prior belief about the parameters. Now in this case of course, so we are not", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "2:47": "going to look at just the data, but also look at the prior." + "time": "2:47", + "text": "going to look at just the data, but also look at the prior.", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "2:54": "So the prior here is defined by P of theta, and this means, we will impose some preference on certain theta's of others." 
+ "time": "2:54", + "text": "So the prior here is defined by P of theta, and this means, we will impose some preference on certain theta's over others.", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "3:06": "And by using Bayes Rule, that I have shown here," + "time": "3:06", + "text": "And by using Bayes Rule, that I have shown here,", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "3:12": "we can then combine the likelihood function with the prior to give us this" + "time": "3:12", + "text": "we can then combine the likelihood function with the prior to give us this", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "3:23": "posterior probability of the parameter. Now, a full explanation of Bayes rule, and some of these things related to Bayesian reasoning, would be outside the scope of this course. But I just gave a brief introduction because this is general knowledge that might be useful to you. The Bayes Rule is basically defined here, and allows us to write down one conditional probability of X given Y in terms of the conditional probability of Y given X. And you can see the two probabilities are different in the order of the two variables." + "time": "3:23", + "text": "posterior probability of the parameter. Now, a full explanation of Bayes rule, and some of these things related to Bayesian reasoning, would be outside the scope of this course. But I just gave a brief introduction because this is general knowledge that might be useful to you. The Bayes Rule is basically defined here, and allows us to write down one conditional probability of X given Y in terms of the conditional probability of Y given X. 
And you can see the two probabilities are different in the order of the two variables.", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "4:09": "But often the rule is used for making inferences about the variable, so let's take a look at it again. We can assume that p(X) encodes our prior belief about X. That means before we observe any other data, that's our belief about X; we believe some X values have higher probability than others." + "time": "4:09", + "text": "But often the rule is used for making inferences about the variable, so let's take a look at it again. We can assume that p(X) encodes our prior belief about X. That means before we observe any other data, that's our belief about X; we believe some X values have higher probability than others.", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "4:40": "And this probability of X given Y is a conditional probability, and this is our posterior belief about X. Because this is our belief about X values after we have observed the Y. Given that we have observed the Y, now what do we believe about X? 
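The prior-to-posterior update via Bayes rule can be computed directly for a discrete hypothesis space. The hypotheses and numbers below are hypothetical, invented purely to illustrate the formula p(X|Y) = p(Y|X) p(X) / p(Y):

```python
# Bayes rule over a discrete hypothesis space X, given observed evidence Y.
# All values are hypothetical, for illustration only.
prior = {"topic_A": 0.5, "topic_B": 0.5}          # p(X): belief before seeing Y
likelihood = {"topic_A": 0.08, "topic_B": 0.02}   # p(Y|X): prob. of the evidence under each X

evidence = sum(likelihood[x] * prior[x] for x in prior)              # p(Y)
posterior = {x: likelihood[x] * prior[x] / evidence for x in prior}  # p(X|Y)

print(posterior)  # topic_A is now about 0.8, topic_B about 0.2
```

Observing evidence that is four times more likely under topic_A shifts an even prior to a 4-to-1 posterior, which is the belief-updating behavior the transcript describes.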
Now, do we believe some values have higher probabilities than others?", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "5:09": "Now the two probabilities are related through this one, this can be regarded as the probability of" + "time": "5:09", + "text": "Now the two probabilities are related through this one, this can be regarded as the probability of", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "5:19": "the observed evidence Y, given a particular X. So you can think about X as our hypothesis, and we have some prior belief about which hypothesis to choose. And after we have observed Y, we will update our belief and this updating formula is based on the combination of our prior." + "time": "5:19", + "text": "the observed evidence Y, given a particular X. So you can think about X as our hypothesis, and we have some prior belief about which hypothesis to choose. And after we have observed Y, we will update our belief and this updating formula is based on the combination of our prior.", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "5:48": "And the likelihood of observing this Y if X is indeed true," + "time": "5:48", + "text": "And the likelihood of observing this Y if X is indeed true,", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "5:57": "So much for the detour about Bayes Rule. 
In our case, what we are interested in is inferring the theta values. So, we have a prior here that includes our prior knowledge about the parameters.", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "6:15": "And then we have the data likelihood here, that would tell us which parameter value can explain the data well. The posterior probability combines both of them," + "time": "6:15", + "text": "And then we have the data likelihood here, that would tell us which parameter value can explain the data well. The posterior probability combines both of them,", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "6:30": "so it represents a compromise of the two preferences. And in such a case, we can maximize this posterior probability. To find this theta that would maximize this posterior probability, and this estimator is called a Maximum a Posteriori, or MAP estimate." + "time": "6:30", + "text": "so it represents a compromise of the two preferences. And in such a case, we can maximize this posterior probability. To find this theta that would maximize this posterior probability, and this estimator is called a Maximum a Posteriori, or MAP estimate.", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "6:55": "And this estimator is a more general estimator than the maximum likelihood estimator. Because if we define our prior as a noninformative prior, meaning that it's uniform over all the theta values, no preference, then we basically would go back to the maximum likelihood estimate. Because in such a case, it's mainly going to be determined by this likelihood value, the same as here." 
+ "time": "6:55", + "text": "And this estimator is a more general estimator than the maximum likelihood estimator. Because if we define our prior as a noninformative prior, meaning that it's uniform over all the theta values, no preference, then we basically would go back to the maximum likelihood estimate. Because in such a case, it's mainly going to be determined by this likelihood value, the same as here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "7:28": "But if we have some informative prior, some bias towards the different values, then the MAP estimator can allow us to incorporate that. But the problem here of course, is how to define the prior." + "time": "7:28", + "text": "But if we have some informative prior, some bias towards the different values, then the MAP estimator can allow us to incorporate that. But the problem here of course, is how to define the prior.", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "7:44": "There is no free lunch and if you want to solve the problem with more knowledge, we have to have that knowledge. And that knowledge, ideally, should be reliable. Otherwise, your estimate may not necessarily be more accurate than that maximum likelihood estimate." + "time": "7:44", + "text": "There is no free lunch and if you want to solve the problem with more knowledge, we have to have that knowledge. And that knowledge, ideally, should be reliable. Otherwise, your estimate may not necessarily be more accurate than that maximum likelihood estimate.", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "8:01": "So, now let's look at the Bayesian estimation in more detail." 
+ "time": "8:01", + "text": "So, now let's look at the Bayesian estimation in more detail.", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "8:08": "So, I show the theta values as just a one-dimensional value and that's a simplification of course. And so, we're interested in which value of theta is optimal. So now, first we have the Prior. The Prior tells us that some of the values are more likely than others, we believe. For example, these values are more likely than the values over here, or here, or other places." + "time": "8:08", + "text": "So, I show the theta values as just a one-dimensional value and that's a simplification of course. And so, we're interested in which value of theta is optimal. So now, first we have the Prior. The Prior tells us that some of the values are more likely than others, we believe. For example, these values are more likely than the values over here, or here, or other places.", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "8:42": "So this is our Prior, and then we have our data likelihood. And in this case, the data also tells us which values of theta are more likely. And that just means these values can best explain our data." + "time": "8:42", + "text": "So this is our Prior, and then we have our data likelihood. And in this case, the data also tells us which values of theta are more likely. And that just means these values can best explain our data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "9:01": "And then when we combine the two we get the posterior distribution, and that's just a compromise of the two. It would say that it's somewhere in-between. 
So, we can now look at some interesting points here. This point represents the mode of the prior, that means the most likely parameter value according to our prior, before we observe any data." + "time": "9:01", + "text": "And then when we combine the two we get the posterior distribution, and that's just a compromise of the two. It would say that it's somewhere in-between. So, we can now look at some interesting points here. This point represents the mode of the prior, that means the most likely parameter value according to our prior, before we observe any data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "9:25": "This point is the maximum likelihood estimator, it represents the theta that gives the data the maximum probability." + "time": "9:25", + "text": "This point is the maximum likelihood estimator, it represents the theta that gives the data the maximum probability.", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "9:32": "Now this point is interesting, it's the posterior mode." + "time": "9:32", + "text": "Now this point is interesting, it's the posterior mode.", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "9:38": "It's the most likely value of the theta given by the posterior distribution. And it represents a good compromise of the prior mode and the maximum likelihood estimate." + "time": "9:38", + "text": "It's the most likely value of the theta given by the posterior distribution. 
And it represents a good compromise of the prior mode and the maximum likelihood estimate.", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "9:51": "Now in general in Bayesian inference, we are interested in the distribution of all these parameter values as you see here. It's a distribution over theta values that you can see here: P of theta given X." + "time": "9:51", + "text": "Now in general in Bayesian inference, we are interested in the distribution of all these parameter values as you see here. It's a distribution over theta values that you can see here: P of theta given X.", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "10:09": "So the problem of Bayesian inference is" + "time": "10:09", + "text": "So the problem of Bayesian inference is", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "10:14": "to infer this posterior distribution, and also to infer other interesting quantities that might depend on theta. So, I show f of theta here as an interesting variable that we want to compute. But in order to compute this value, we need to know the value of theta. In Bayesian inference, we treat theta as an uncertain variable. So we think about all the possible values of theta. Therefore, we can estimate the value of this function f as the expected value of f, according to the posterior distribution of theta, given the observed evidence X." + "time": "10:14", + "text": "to infer this posterior distribution, and also to infer other interesting quantities that might depend on theta. So, I show f of theta here as an interesting variable that we want to compute. But in order to compute this value, we need to know the value of theta. 
In Bayesian inference, we treat theta as an uncertain variable. So we think about all the possible values of theta. Therefore, we can estimate the value of this function f as the expected value of f, according to the posterior distribution of theta, given the observed evidence X.", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "10:58": "As a special case, we can assume f of theta is just equal to theta. In this case, we get the expected value of the theta, that's basically the posterior mean. That gives us also one point of theta, and it's sometimes the same as the posterior mode, but it's not always the same. So, it gives us another way to estimate the parameter." + "time": "10:58", + "text": "As a special case, we can assume f of theta is just equal to theta. In this case, we get the expected value of the theta, that's basically the posterior mean. That gives us also one point of theta, and it's sometimes the same as the posterior mode, but it's not always the same. So, it gives us another way to estimate the parameter.", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "11:24": "So, this is a general illustration of Bayesian estimation and inference. And later, you will see this can be useful for topic mining where we want to inject some prior knowledge about the topics. So to summarize, we've used the language model which is basically a probability distribution over text. It's also called a generative model for text data. The simplest language model is Unigram Language Model, it's basically a word distribution." + "time": "11:24", + "text": "So, this is a general illustration of Bayesian estimation and inference. And later, you will see this can be useful for topic mining where we want to inject some prior knowledge about the topics. 
So to summarize, we've used the language model which is basically a probability distribution over text. It's also called a generative model for text data. The simplest language model is Unigram Language Model, it's basically a word distribution.", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "11:54": "We introduced the concept of the likelihood function, which is the probability of the data given some model." + "time": "11:54", + "text": "We introduced the concept of the likelihood function, which is the probability of the data given some model.", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "12:02": "And this function is very important," + "time": "12:02", + "text": "And this function is very important,", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "12:05": "given a particular set of parameter values this function can tell us which X, which data point has a higher likelihood, higher probability." + "time": "12:05", + "text": "given a particular set of parameter values this function can tell us which X, which data point has a higher likelihood, higher probability.", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "12:16": "Given a data sample X, we can use this function to determine which parameter values would maximize the probability of the observed data, and this is the maximum likelihood estimate." 
+ "time": "12:16", + "text": "Given a data sample X, we can use this function to determine which parameter values would maximize the probability of the observed data, and this is the maximum likelihood estimate.", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "12:31": "We also talked about the Bayesian estimation or inference. In this case, we must define a prior on the parameters p of theta. And then we're interested in computing the posterior distribution of the parameters, which is proportional to the prior and the likelihood." + "time": "12:31", + "text": "We also talked about the Bayesian estimation or inference. In this case, we must define a prior on the parameters p of theta. And then we're interested in computing the posterior distribution of the parameters, which is proportional to the prior and the likelihood.", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" }, { - "12:48": "And this distribution would allow us then to infer any derived values from theta. [MUSIC]" + "time": "12:48", + "text": "And this distribution would allow us then to infer any derived values from theta. [MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" } ] }, { "2-10-probabilistic-topic-models-mining-one-topic": [ { - "0:00": "[SOUND] This lecture is a continued discussion of probabilistic topic models. In this lecture, we're going to continue discussing probabilistic models. We're going to talk about a very simple case where we are interested in just mining one topic from one document." + "time": "0:00", + "text": "[SOUND] This lecture is a continued discussion of probabilistic topic models. In this lecture, we're going to continue discussing probabilistic models. 
We're going to talk about a very simple case where we are interested in just mining one topic from one document.", + "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" }, { - "0:30": "So in this simple setup, we are interested in analyzing one document and trying to discover just one topic. So this is the simplest case of topic model. The input now no longer has k, which is the number of topics because we know there is only one topic and the collection has only one document, also. In the output, we also no longer have coverage because we assumed that the document covers this topic 100%. So the main goal is just to discover the word probabilities for this single topic, as shown here." + "time": "0:30", + "text": "So in this simple setup, we are interested in analyzing one document and trying to discover just one topic. So this is the simplest case of topic model. The input now no longer has k, which is the number of topics because we know there is only one topic and the collection has only one document, also. In the output, we also no longer have coverage because we assumed that the document covers this topic 100%. So the main goal is just to discover the word probabilities for this single topic, as shown here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" }, { - "1:14": "As always, when we think about using a generative model to solve such a problem, we start with thinking about what kind of data we are going to model or from what perspective we're going to model the data or data representation. And then we're going to design a specific model for the generation of the data, from our perspective. Where our perspective just means we want to take a particular angle of looking at the data, so that the model will have the right parameters for discovering the knowledge that we want. 
And then we'll be thinking about the likelihood function or write down the likelihood function to capture more formally how likely a data point will be obtained from this model." + "time": "1:14", + "text": "As always, when we think about using a generative model to solve such a problem, we start with thinking about what kind of data we are going to model or from what perspective we're going to model the data or data representation. And then we're going to design a specific model for the generation of the data, from our perspective. Where our perspective just means we want to take a particular angle of looking at the data, so that the model will have the right parameters for discovering the knowledge that we want. And then we'll be thinking about the likelihood function or write down the likelihood function to capture more formally how likely a data point will be obtained from this model.", + "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" }, { - "2:05": "And the likelihood function will have some parameters in the function. 
And then we are interested in estimating those parameters, for example, by maximizing the likelihood, which will lead to the maximum likelihood estimate. These estimated parameters will then become the output of the mining algorithm, which means we'll take the estimated parameters as the knowledge that we discover from the text. So let's look at these steps for this very simple case. Later we'll look at this procedure for some more complicated cases. So our data, in this case, is just a document, which is a sequence of words. Each word here is denoted by x sub i. Our model is a Unigram language model, a word distribution that we hope denotes a topic, and that's our goal. So we will have as many parameters as words in our vocabulary, in this case M." + "time": "2:05", + "text": "And the likelihood function will have some parameters in the function. And then we are interested in estimating those parameters, for example, by maximizing the likelihood, which will lead to the maximum likelihood estimate. These estimated parameters will then become the output of the mining algorithm, which means we'll take the estimated parameters as the knowledge that we discover from the text. So let's look at these steps for this very simple case. Later we'll look at this procedure for some more complicated cases. So our data, in this case, is just a document, which is a sequence of words. Each word here is denoted by x sub i. Our model is a Unigram language model, a word distribution that we hope denotes a topic, and that's our goal. So we will have as many parameters as words in our vocabulary, in this case M.", + "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" }, { - "3:09": "And for convenience we're going to use theta sub i to denote the probability of word w sub i." + "time": "3:09", + "text": "And for convenience we're going to use theta sub i to denote the probability of word w sub i.", + "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" }, { - "3:20": "And obviously these theta sub i's will sum to 1." + "time": "3:20", + "text": "And obviously these theta sub i's will sum to 1.", + "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" }, { - "3:24": "Now what does a likelihood function look like? Well, this is just the probability of generating this whole document, given such a model. 
Because we assume the independence in generating each word, so the probability of the document will be just a product of the probability of each word." + "time": "3:24", + "text": "Now what does a likelihood function look like? Well, this is just the probability of generating this whole document, given such a model. Because we assume the independence in generating each word, so the probability of the document will be just a product of the probability of each word.", + "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" }, { - "3:42": "And since some words might have repeated occurrences, we can also rewrite this product in a different form." + "time": "3:42", + "text": "And since some words might have repeated occurrences, we can also rewrite this product in a different form.", + "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" }, { - "3:52": "So in this line, we have rewritten the formula into a product over all the unique words in the vocabulary, w sub 1 through w sub M. Now this is different from the previous line. Well, the product is over different positions of words in the document." + "time": "3:52", + "text": "So in this line, we have rewritten the formula into a product over all the unique words in the vocabulary, w sub 1 through w sub M. Now this is different from the previous line. Well, the product is over different positions of words in the document.", + "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" }, { - "4:15": "Now when we do this transformation, we then would need to introduce a count function here. This denotes the count of word w sub 1 in the document and similarly this is the count of word w sub M in the document because these words might have repeated occurrences. You can also see if a word did not occur in the document." + "time": "4:15", + "text": "Now when we do this transformation, we then would need to introduce a count function here. This denotes the count of word w sub 1 in the document and similarly this is the count of word w sub M in the document because these words might have repeated occurrences. 
You can also see if a word did not occur in the document.", + "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" }, { - "4:41": "It will have a zero count, therefore that corresponding term will disappear. So this is a very useful form of writing down the likelihood function that we will often use later. So I want you to pay attention to this, just get familiar with this notation. It's just to change the product over all the different words in the vocabulary. So in the end, of course, we'll use theta sub i to express this likelihood function and it would look like this. Next, we're going to find the theta values or probabilities of these words that would maximize this likelihood function. So now let's take a look at the maximum likelihood estimate problem more closely." + "time": "4:41", + "text": "It will have a zero count, therefore that corresponding term will disappear. So this is a very useful form of writing down the likelihood function that we will often use later. So I want you to pay attention to this, just get familiar with this notation. It's just to change the product over all the different words in the vocabulary. So in the end, of course, we'll use theta sub i to express this likelihood function and it would look like this. Next, we're going to find the theta values or probabilities of these words that would maximize this likelihood function. So now let's take a look at the maximum likelihood estimate problem more closely.", + "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" }, { - "5:32": "This line is copied from the previous slide. It's just our likelihood function." + "time": "5:32", + "text": "This line is copied from the previous slide. 
It's just our likelihood function.", + "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" }, { - "5:38": "So our goal is to maximize this likelihood function. We will find it often easy to" + "time": "5:38", + "text": "So our goal is to maximize this likelihood function. We will find it often easy to", + "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" }, { - "5:47": "maximize the log-likelihood instead of the original likelihood. And this is purely for mathematical convenience because after the logarithm transformation our function will become a sum instead of a product. And we also have constraints over these probabilities. The sum makes it easier to take the derivative, which is often needed for finding the optimal solution of this function. So please take a look at this sum again, here. And this is a form of a function that you will often see later also, in the more general topic models. So it's a sum over all the words in the vocabulary. And inside the sum there is a count of a word in the document. And this is multiplied by the logarithm of a probability." + "time": "5:47", + "text": "maximize the log-likelihood instead of the original likelihood. And this is purely for mathematical convenience because after the logarithm transformation our function will become a sum instead of a product. And we also have constraints over these probabilities. The sum makes it easier to take the derivative, which is often needed for finding the optimal solution of this function. So please take a look at this sum again, here. And this is a form of a function that you will often see later also, in the more general topic models. So it's a sum over all the words in the vocabulary. And inside the sum there is a count of a word in the document. 
And this is multiplied by the logarithm of a probability.", + "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" }, { - "6:55": "So let's see how we can solve this problem." + "time": "6:55", + "text": "So let's see how we can solve this problem.", + "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" }, { - "6:58": "Now at this point the problem is purely a mathematical problem because we are going to just find the optimal solution of a constrained maximization problem. The objective function is the likelihood function and the constraint is that all these probabilities must sum to one. So, one way to solve the problem is to use the Lagrange multiplier approach." + "time": "6:58", + "text": "Now at this point the problem is purely a mathematical problem because we are going to just find the optimal solution of a constrained maximization problem. The objective function is the likelihood function and the constraint is that all these probabilities must sum to one. So, one way to solve the problem is to use the Lagrange multiplier approach.", + "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" }, { - "7:24": "Now this content is beyond the scope of this course but since the Lagrange multiplier is a very useful approach, I also would like to just give a brief introduction to this, for those of you who are interested." + "time": "7:24", + "text": "Now this content is beyond the scope of this course but since the Lagrange multiplier is a very useful approach, I also would like to just give a brief introduction to this, for those of you who are interested.", + "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" }, { - "7:39": "So in this approach we will construct a Lagrange function, here. 
And this function will combine our objective function with another term that encodes our constraint and we introduce the Lagrange multiplier here, lambda, so it's an additional parameter. Now, the idea of this approach is just to turn the constrained optimization problem into, in some sense, an unconstrained optimization problem. Now we are just interested in optimizing this Lagrange function." + "time": "7:39", + "text": "So in this approach we will construct a Lagrange function, here. And this function will combine our objective function with another term that encodes our constraint and we introduce the Lagrange multiplier here, lambda, so it's an additional parameter. Now, the idea of this approach is just to turn the constrained optimization problem into, in some sense, an unconstrained optimization problem. Now we are just interested in optimizing this Lagrange function.", + "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" }, { - "8:19": "As you may recall from calculus, an optimal point would be achieved when the derivative is set to zero. This is a necessary condition. It's not sufficient, though. So if we do that you will see the partial derivative, with respect to theta i here, is equal to this. And this part comes from the derivative of the logarithm function and this lambda is simply taken from here. And when we set it to zero we can easily see theta sub i is related to lambda in this way." + "time": "8:19", + "text": "As you may recall from calculus, an optimal point would be achieved when the derivative is set to zero. This is a necessary condition. It's not sufficient, though. So if we do that you will see the partial derivative, with respect to theta i here, is equal to this. And this part comes from the derivative of the logarithm function and this lambda is simply taken from here. 
And when we set it to zero we can easily see theta sub i is related to lambda in this way.", + "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" }, { - "9:06": "Since we know all the theta i's must sum to one we can plug this into this constraint, here. And this will allow us to solve for lambda." + "time": "9:06", + "text": "Since we know all the theta i's must sum to one we can plug this into this constraint, here. And this will allow us to solve for lambda.", + "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" }, { - "9:16": "And this is just the sum of all the counts. And this further allows us to then solve the optimization problem, eventually, to find the optimal setting for theta sub i. And if you look at this formula it turns out that it's actually very intuitive because this is just the normalized count of these words by the document length, which is also the sum of all the counts of words in the document. So, after all this mess, after all, we have just obtained something that's very intuitive and this will be just our intuition where we want to maximize the likelihood of the data by assigning as much probability mass as possible to all the observed words here. And you might also notice that this is the general result of the maximum likelihood estimator. In general, the estimator would be to normalize counts and it's just sometimes the counts have to be done in a particular way, as you will also see later. So this is basically an analytical solution to our optimization problem. In general though, when the likelihood function is very complicated, we're not going to be able to solve the optimization problem by having a closed-form formula. Instead we have to use some numerical algorithms and we're going to see such cases later, also. 
So what would we get if we use such a maximum likelihood estimator to estimate one topic for a single document d here? Let's imagine this document is a text mining paper. Now, what you might see is something that looks like this. On the top, you will see the high probability words tend to be those very common words, often functional words in English. And this will be followed by some content words that really characterize the topic well like text, mining, etc. And then in the end, you also see some probability mass on words that are not really related to the topic but they might be extraneously mentioned in the document. As a topic representation, you will see this is not ideal, right? That's because the high probability words are functional words, they are not really characterizing the topic. So my question is how can we get rid of such common words?" + "time": "9:16", + "text": "And this is just the negative sum of all the counts. And this further allows us to then solve the optimization problem, eventually, to find the optimal setting for theta sub i. And if you look at this formula it turns out that it's actually very intuitive because this is just the normalized count of these words by the document length, which is also a sum of all the counts of words in the document. So, after all this mess, we have just obtained something that's very intuitive, and this matches our intuition: we want to maximize the likelihood of the data by assigning as much probability mass as possible to all the observed words here. And you might also notice that this is the general result of the maximum likelihood estimator. In general, the estimator would be to normalize counts, and it's just that sometimes the counts have to be done in a particular way, as you will also see later. So this is basically an analytical solution to our optimization problem. 
In general though, when the likelihood function is very complicated, we're not going to be able to solve the optimization problem by having a closed form formula. Instead we have to use some numerical algorithms and we're going to see such cases later, also. So what would we get if we use such a maximum likelihood estimator to estimate one topic for a single document d here? Let's imagine this document is a text mining paper. Now, what you might see is something that looks like this. On the top, you will see the high probability words tend to be those very common words, often functional words in English. And this will be followed by some content words that really characterize the topic well like text, mining, etc. And then in the end, you also see some probability mass on words that are not really related to the topic but they might be extraneously mentioned in the document. As a topic representation, you will see this is not ideal, right? That's because the high probability words are functional words, they are not really characterizing the topic. So my question is how can we get rid of such common words?", + "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" }, { - "11:59": "Now this is the topic of the next module. We're going to talk about how to use probabilistic models to somehow get rid of these common words. [MUSIC]" + "time": "11:59", + "text": "Now this is the topic of the next module. We're going to talk about how to use probabilistic models to somehow get rid of these common words. 
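The problem described here is easy to reproduce. The sketch below (with a tiny made-up stand-in "document", not data from the lecture) computes the closed-form estimate theta_i = c(w_i, d)/|d| and shows the common function words dominating the estimated topic, which is exactly the issue the next module addresses:

```python
from collections import Counter

# Toy stand-in for a "text mining paper"; real documents are much longer.
doc = ("the text mining the the of text the clustering of the mining "
       "algorithm the of text").split()

counts = Counter(doc)
n_d = len(doc)  # document length |d| = sum of all word counts

# Maximum likelihood estimate: theta_i = c(w_i, d) / |d| (normalized counts).
theta = {w: c / n_d for w, c in counts.items()}

for w, p in sorted(theta.items(), key=lambda kv: -kv[1]):
    print(f"{w:12s} {p:.3f}")
# Function words like "the" and "of" float to the top of the distribution,
# above content words like "mining" and "clustering".
```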
[MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" } ] } @@ -1846,511 +3012,825 @@ { "3-1-probabilistic-topic-models-mixture-of-unigram-language-models": [ { - "0:00": "[MUSIC]" + "time": "0:00", + "text": "[MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" }, { - "0:06": "This lecture is about the mixture of unigram language models." + "time": "0:06", + "text": "This lecture is about the mixture of unigram language models.", + "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" }, { - "0:11": "In this lecture we will continue discussing probabilistic topic models. In particular, we will introduce a mixture of unigram language models. This is a slide that you have seen earlier, where we talked about how to get rid of the background words that we have on top for one document." + "time": "0:11", + "text": "In this lecture we will continue discussing probabilistic topic models. In particular, we will introduce a mixture of unigram language models. This is a slide that you have seen earlier, where we talked about how to get rid of the background words that we have on top for one document.", + "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" }, { - "0:36": "So if you want to solve the problem, it would be useful to think about why we end up having this problem. Well, this is obviously because these words are very frequent in our data and we are using maximum likelihood to estimate. Then the estimate obviously would have to assign high probabilities to these words in order to maximize the likelihood. So, in order to get rid of them, that would mean we'd have to do something differently here."
+ "time": "0:36", + "text": "So if you want to solve the problem, it would be useful to think about why we end up having this problem. Well, this is obviously because these words are very frequent in our data and we are using maximum likelihood to estimate. Then the estimate obviously would have to assign high probabilities to these words in order to maximize the likelihood. So, in order to get rid of them, that would mean we'd have to do something differently here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" }, { - "1:05": "In particular we'll have to say this distribution doesn't have to explain all the words in the text data. What we're going to say is that these common words should not be explained by this distribution. So one natural way to solve the problem is to think about using another distribution to account for just these common words. This way, the two distributions can be mixed together to generate the text data. And we'll let the other model, which we'll call the background topic model, generate the common words. This way our target topic theta here will be only generating the content words that characterize the content of the document." + "time": "1:05", + "text": "In particular we'll have to say this distribution doesn't have to explain all the words in the text data. What we're going to say is that these common words should not be explained by this distribution. So one natural way to solve the problem is to think about using another distribution to account for just these common words. This way, the two distributions can be mixed together to generate the text data. And we'll let the other model, which we'll call the background topic model, generate the common words. 
This way our target topic theta here will be only generating the content words that characterize the content of the document.", + "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" }, { - "1:52": "So, how does this work? Well, it is just a small modification of the previous setup where we have just one distribution. Since we now have two distributions, we have to decide which distribution to use when we generate the word. Each word will still be a sample from one of the two distributions." + "time": "1:52", + "text": "So, how does this work? Well, it is just a small modification of the previous setup where we have just one distribution. Since we now have two distributions, we have to decide which distribution to use when we generate the word. Each word will still be a sample from one of the two distributions.", + "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" }, { - "2:13": "Text data is still generated the same way. Namely, we generate one word at a time, and eventually we generate a lot of words. When we generate the word, however, we're going to first decide which of the two distributions to use. And this is controlled by another probability, the probability of theta sub d and the probability of theta sub B here." + "time": "2:13", + "text": "Text data is still generated the same way. Namely, we generate one word at a time, and eventually we generate a lot of words. When we generate the word, however, we're going to first decide which of the two distributions to use. 
And this is controlled by another probability, the probability of theta sub d and the probability of theta sub B here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" }, { - "2:41": "So this is the probability of selecting the topic word distribution. This is the probability of selecting the background word" + "time": "2:41", + "text": "So this is the probability of selecting the topic word distribution. This is the probability of selecting the background word", + "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" }, { - "2:52": "distribution, denoted by theta sub B." + "time": "2:52", + "text": "distribution, denoted by theta sub B.", + "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" }, { - "2:55": "In this case I just give an example where we can set both to 0.5. So you're going to basically flip a coin, a fair coin, to decide which one you want to use. But in general these probabilities don't have to be equal. So you might bias toward using one topic more than the other. So now the process of generating a word would be: first we flip a coin. 
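The coin-flip generation process described here can be sketched as follows; the two component distributions and the 0.5 selection probability below are illustrative stand-ins, not the slide's actual values:

```python
import random

random.seed(0)

# Hypothetical component word distributions (illustrative values only).
theta_d = {"text": 0.5, "mining": 0.4, "the": 0.1}   # topic word distribution
theta_B = {"the": 0.7, "is": 0.2, "text": 0.1}       # background word distribution
p_theta_d = 0.5  # probability of selecting the topic component; p(theta_B) = 1 - p_theta_d

def sample_word():
    """Generate one word: first choose a component, then sample a word from it."""
    dist = theta_d if random.random() < p_theta_d else theta_B
    words, probs = zip(*dist.items())
    return random.choices(words, weights=probs)[0]

# Generate a ten-word "document" one word at a time.
doc = [sample_word() for _ in range(10)]
print(doc)
```

Biasing `p_theta_d` away from 0.5 corresponds to the remark that the two selection probabilities don't have to be equal.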
Based on these probabilities we choose one model, and if, let's say, the coin shows up as heads, which means we're going to use the topic word distribution, then we're going to use this word distribution to generate a word. Otherwise we would go along this path." + "time": "2:55", + "text": "In this case I just give an example where we can set both to 0.5. So you're going to basically flip a coin, a fair coin, to decide which one you want to use. But in general these probabilities don't have to be equal. So you might bias toward using one topic more than the other. So now the process of generating a word would be: first we flip a coin. Based on these probabilities we choose one model, and if, let's say, the coin shows up as heads, which means we're going to use the topic word distribution, then we're going to use this word distribution to generate a word. Otherwise we would go along this path.", + "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" }, { - "3:41": "And we're going to use the background word distribution to generate a word." + "time": "3:41", + "text": "And we're going to use the background word distribution to generate a word.", + "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" }, { - "3:46": "So in such a case, we have a model that has some uncertainty associated with the use of a word distribution. But we can still think of this as a model for generating text data. And such a model is called a mixture model." + "time": "3:46", + "text": "So in such a case, we have a model that has some uncertainty associated with the use of a word distribution. But we can still think of this as a model for generating text data. And such a model is called a mixture model.", + "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" }, { - "4:02": "So now let's see. In this case, what's the probability of observing a word w? Now here I showed some words, 
like \"the\" and \"text\". So as in all cases, once we set up a model we are interested in computing the likelihood function. The basic question is, what's the probability of observing a specific word here? Now we know that the word can be observed from each of the two distributions, so we have to consider two cases. Therefore it's a sum over these two cases." + "time": "4:02", + "text": "So now let's see. In this case, what's the probability of observing a word w? Now here I showed some words, like \"the\" and \"text\". So as in all cases, once we set up a model we are interested in computing the likelihood function. The basic question is, what's the probability of observing a specific word here? Now we know that the word can be observed from each of the two distributions, so we have to consider two cases. Therefore it's a sum over these two cases.", + "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" }, { - "4:34": "The first case is to use the topic word distribution to generate the word. And in such a case then the probability would be the probability of theta sub d, which is the probability of choosing the model, multiplied by the probability of actually observing the word from that model. Both events must happen in order to observe the word. We first must have chosen the topic theta sub d, and then we also must have actually sampled the word \"the\" from the distribution. And similarly, the second part accounts for a different way of generating the word from the background." + "time": "4:34", + "text": "The first case is to use the topic word distribution to generate the word. And in such a case then the probability would be the probability of theta sub d, which is the probability of choosing the model, multiplied by the probability of actually observing the word from that model. Both events must happen in order to observe the word. We first must have chosen the topic theta sub d, and then we also must have actually sampled the word \"the\" from the distribution. And similarly, the second part accounts for a different way of generating the word from the background.", + "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" }, { - "5:15": "Now obviously the probability of \"text\" is similar, right? So we also can see the two ways of generating the text. 
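The two-case sum can be written out numerically. The selection probabilities below use 0.5 each, as in the example; the component word probabilities are hypothetical:

```python
# Mixing weights: probability of selecting each component (0.5 each, as in the example).
p_theta_d, p_theta_B = 0.5, 0.5

# Hypothetical component word distributions for the two words discussed.
p_w_given_d = {"text": 0.5, "the": 0.1}   # topic word distribution theta_d
p_w_given_B = {"text": 0.1, "the": 0.7}   # background word distribution theta_B

def p_word(w):
    # p(w) = p(theta_d) p(w | theta_d) + p(theta_B) p(w | theta_B):
    # a sum over the two ways of generating w, each term a product of the
    # probability of selecting a component and of observing w from it.
    return p_theta_d * p_w_given_d[w] + p_theta_B * p_w_given_B[w]

print(p_word("text"))  # 0.5 * 0.5 + 0.5 * 0.1
print(p_word("the"))   # 0.5 * 0.1 + 0.5 * 0.7
```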
And in each case, it's a product of the probability of choosing a particular distribution multiplied by the probability of observing the word from that distribution." + "time": "5:15", + "text": "Now obviously the probability of \"text\" is similar, right? So we also can see the two ways of generating the text. And in each case, it's a product of the probability of choosing a particular distribution multiplied by the probability of observing the word from that distribution.", + "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" }, { - "5:35": "Now what you will see is that this is actually a general form. So you might want to make sure that you have really understood this expression here. And you should convince yourself that this is indeed the probability of observing the text. So to summarize what we observed here: the probability of a word from a mixture model is in general a sum over the different ways of generating the word." + "time": "5:35", + "text": "Now what you will see is that this is actually a general form. So you might want to make sure that you have really understood this expression here. And you should convince yourself that this is indeed the probability of observing the text. So to summarize what we observed here: the probability of a word from a mixture model is in general a sum over the different ways of generating the word.", + "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" }, { - "6:00": "In each case, it's a product of the probability of selecting that component model, multiplied by the probability of actually observing the data point from that component of the model. And this is something quite general and you will see this occurring often later. So the basic idea of a mixture model is just to treat these two distributions together as one model. So I used a box to bring all these components together. 
So if you view this whole box as one model, it's just like any other generative model. It would just give us the probability of a word." + "time": "6:00", + "text": "In each case, it's a product of the probability of selecting that component model, multiplied by the probability of actually observing the data point from that component of the model. And this is something quite general and you will see this occurring often later. So the basic idea of a mixture model is just to treat these two distributions together as one model. So I used a box to bring all these components together. So if you view this whole box as one model, it's just like any other generative model. It would just give us the probability of a word.", + "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" }, { - "6:42": "But the way it determines this probability is quite different from when we have just one distribution." + "time": "6:42", + "text": "But the way it determines this probability is quite different from when we have just one distribution.", + "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" }, { - "6:50": "And this is basically a more complicated model, in that it has more than just one distribution. And it's called a mixture model." + "time": "6:50", + "text": "And this is basically a more complicated model, in that it has more than just one distribution. And it's called a mixture model.", + "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" }, { - "7:00": "So as I just said we can treat this as a generative model. And it's often useful to think of it just as a likelihood function. The illustration that you have seen before, which is dimmer now, is just the illustration of this generative model. 
So mathematically, this model is nothing but the following generative model, where the probability of a word is assumed to be a sum over two cases" + "time": "7:00", + "text": "So as I just said we can treat this as a generative model. And it's often useful to think of it just as a likelihood function. The illustration that you have seen before, which is dimmer now, is just the illustration of this generative model. So mathematically, this model is nothing but the following generative model, where the probability of a word is assumed to be a sum over two cases", + "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" }, { - "7:26": "of generating the word. And the form you are seeing now is a more general form than what you have seen in the calculation earlier. Well I just use the symbol w to denote any word, but you can still see this is basically first a sum. Right? And this sum is due to the fact that the word can be generated in multiple ways, two ways in this case. And inside the sum, each term is a product of two terms. 
And the two terms are first the probability of selecting a component, like theta sub d. Second, the probability of actually observing the word from this component of the model. So this is a very general description of all the mixture models. I just want to make sure that you understand this because this is really the basis for understanding all kinds of topic models." + "time": "7:26", + "text": "of generating the word. And the form you are seeing now is a more general form than what you have seen in the calculation earlier. Well I just use the symbol w to denote any word, but you can still see this is basically first a sum. Right? And this sum is due to the fact that the word can be generated in multiple ways, two ways in this case. And inside the sum, each term is a product of two terms. And the two terms are first the probability of selecting a component, like theta sub d. Second, the probability of actually observing the word from this component of the model. So this is a very general description of all the mixture models. I just want to make sure that you understand this because this is really the basis for understanding all kinds of topic models.", + "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" }, { - "8:28": "So now once we set up the model, we can write down the likelihood function as we see here. The next question is, how can we estimate the parameters, or what to do with the parameters, given the data. Well, in general, we can use some of the text data to estimate the model parameters. And this estimation would allow us to discover the interesting knowledge about the text. So, in this case, what do we discover? Well, these are represented by our parameters and we will have two kinds of parameters. One is the two word distributions that represent the topics, and the other is the coverage of each topic in each document." + "time": "8:28", + "text": "So now once we set up the model, we can write down the likelihood function as we see here. The next question is, how can we estimate the parameters, or what to do with the parameters, given the data. Well, in general, we can use some of the text data to estimate the model parameters. And this estimation would allow us to discover the interesting knowledge about the text. So, in this case, what do we discover? Well, these are represented by our parameters and we will have two kinds of parameters. One is the two word distributions that represent the topics, and the other is the coverage of each topic in each document.
And this is determined by probability of C less of D and probability of theta, so this is to one. Now, what's interesting is also to think about special cases like when we send one of them to want what would happen? Well with the other, with the zero right? And if you look at the likelihood function," + "time": "9:12", + "text": "The coverage of each topic. And this is determined by probability of C less of D and probability of theta, so this is to one. Now, what's interesting is also to think about special cases like when we send one of them to want what would happen? Well with the other, with the zero right? And if you look at the likelihood function,", + "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" }, { - "9:36": "it will then degenerate to the special case of just one distribution. Okay so you can easily verify that by assuming one of these two is 1.0 and the other is Zero." + "time": "9:36", + "text": "it will then degenerate to the special case of just one distribution. Okay so you can easily verify that by assuming one of these two is 1.0 and the other is Zero.", + "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" }, { - "9:49": "So in this sense, the mixture model is more general than the previous model where we have just one distribution. It can cover that as a special case." + "time": "9:49", + "text": "So in this sense, the mixture model is more general than the previous model where we have just one distribution. It can cover that as a special case.", + "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" }, { - "9:59": "So to summarize, we talked about the mixture of two Unigram Language Models and the data we're considering here is just One document. 
And the model is a mixture model with two components, two unigram LM models, specifically theta sub d, which is intended to denote the topic of document d, and theta sub B, which is representing a background topic that we can set to attract the common words, because common words would be assigned a high probability in this model." + "time": "9:59", + "text": "So to summarize, we talked about the mixture of two unigram language models and the data we're considering here is just one document. And the model is a mixture model with two components, two unigram LM models, specifically theta sub d, which is intended to denote the topic of document d, and theta sub B, which is representing a background topic that we can set to attract the common words, because common words would be assigned a high probability in this model.", + "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" }, { - "10:33": "So the parameters can be collectively called Lambda, which I show here. You can again" + "time": "10:33", + "text": "So the parameters can be collectively called Lambda, which I show here. You can again", + "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" }, { - "10:41": "think about the question of how many parameters we are talking about exactly. This is usually a good exercise to do because it allows you to see the model in depth and to have a complete understanding of what's going on in this model. And we have mixing weights, of course, also." + "time": "10:41", + "text": "think about the question of how many parameters we are talking about exactly. This is usually a good exercise to do because it allows you to see the model in depth and to have a complete understanding of what's going on in this model. 
And we have mixing weights, of course, also.", + "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" }, { - "10:59": "So what does the likelihood function look like? Well, it looks very similar to what we had before. So for the document, first it's a product over all the words in the document, exactly the same as before. The only difference is that inside here now it's a sum instead of just one term. So you might recall that before we just had this one term there." + "time": "10:59", + "text": "So what does the likelihood function look like? Well, it looks very similar to what we had before. So for the document, first it's a product over all the words in the document, exactly the same as before. The only difference is that inside here now it's a sum instead of just one term. So you might recall that before we just had this one term there.", + "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" }, { - "11:25": "But now we have this sum because of the mixture model. And because of the mixture model we also have to introduce the probability of choosing that particular component distribution." + "time": "11:25", + "text": "But now we have this sum because of the mixture model. And because of the mixture model we also have to introduce the probability of choosing that particular component distribution.", + "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" }, { - "11:39": "And so this is just another way of writing it, using a product over all the unique words in our vocabulary instead of having that product over all the positions in the document. And this form, where we look at the distinct unique words, is a convenient form for computing the maximum likelihood estimate later. 
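The equivalence between the product over all positions and the product over unique words (with counts as exponents) can be checked directly; all numbers below are illustrative stand-ins:

```python
import math
from collections import Counter

p_theta_d, p_theta_B = 0.5, 0.5
theta_d = {"text": 0.5, "mining": 0.4, "the": 0.1}   # hypothetical topic distribution
theta_B = {"text": 0.1, "mining": 0.1, "the": 0.8}   # hypothetical background distribution
doc = ["text", "the", "text", "mining", "the", "the"]

def p_w(w):
    """Mixture probability of one word."""
    return p_theta_d * theta_d[w] + p_theta_B * theta_B[w]

# Likelihood as a product over all positions in the document ...
by_position = math.prod(p_w(w) for w in doc)

# ... equals a product over unique words, each raised to its count c(w, d).
by_unique = math.prod(p_w(w) ** c for w, c in Counter(doc).items())

print(by_position, by_unique)  # the same value, written two ways
```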
And the maximum likelihood estimator is, as usual, just to find the parameters that would maximize the likelihood function. And the constraints here are of course of two kinds. One is that the word probabilities in each [INAUDIBLE] must sum to 1; the other is that the choice of each [INAUDIBLE] must sum to 1. [MUSIC]" + "time": "11:39", + "text": "And so this is just another way of writing it, using a product over all the unique words in our vocabulary instead of having that product over all the positions in the document. And this form, where we look at the distinct unique words, is a convenient form for computing the maximum likelihood estimate later. And the maximum likelihood estimator is, as usual, just to find the parameters that would maximize the likelihood function. And the constraints here are of course of two kinds. One is that the word probabilities in each [INAUDIBLE] must sum to 1; the other is that the choice of each [INAUDIBLE] must sum to 1. [MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" } ] }, { "3-2-probabilistic-topic-models-mixture-model-estimation-part-1": [ { - "0:06": "This lecture is about mixture model estimation. In this lecture, we're going to continue discussing probabilistic topic models. In particular, we're going to talk about how to estimate the parameters of a mixture model. So let's first look at our motivation for using a mixture model: we hope to factor out the background words from the topic word distribution. So the idea is to assume that the text data actually contain two kinds of words. One kind is from the background here, so the \"is\", \"we\" etc. The other kind is from our topic word distribution that we're interested in. So in order to solve this problem of factoring out background words, we can set up our mixture model as follows. 
We are going to assume that we already know the values for all the parameters in the mixture model except for the word distribution of Theta sub d, which is our target. So this is a case of customizing a probabilistic model so that we embed the unknown variables that we are interested in, but we're going to simplify other things. We're going to assume we have knowledge about the others, and this is a powerful way of customizing a model for a particular need. Now you can imagine, we could have assumed that we also don't know the background word distribution, but in this case, our goal is to factor out precisely those high-probability background words. So we assume the background model is already fixed. The problem here is, how can we adjust Theta sub d in order to maximize the probability of the observed document here, where we assume all the other parameters are known? Now, although we designed the model heuristically to try to factor out these background words, it's unclear whether, if we use the maximum likelihood estimator, we will actually end up having a word distribution where the common words like \"the\" will indeed have smaller probabilities than before. Now, in this case, it turns out that the answer is yes. When we set up the probabilistic model this way and use the maximum likelihood estimator, we will end up having a word distribution where the common words are factored out by the use of the background distribution. So to understand why this is so, it's useful to examine the behavior of a mixture model. So we're going to look at a very simple case in order to understand some interesting behaviors of a mixture model. The observed patterns here actually are generalizable to mixture models in general, but it's much easier to understand this behavior when we use a very simple case like what we're seeing here. 
So specifically in this case, let's assume that the probability of choosing each of the two models is exactly the same. So we're going to flip a fair coin to decide which model to use. Furthermore, we are going to assume there are precisely two words, \"the\" and \"text.\" Obviously, this is a very naive oversimplification of the actual text, but again, it is useful to examine the behavior in such a special case. So we further assume that the background model gives probability 0.9 to the word \"the\" and 0.1 to \"text\". Now, let's also assume that our data is extremely simple. The document has just two words, \"text\" and then \"the.\" So now, let's write down the likelihood function in such a case. First, what's the probability of \"text\" and what's the probability of \"the\"? I hope by this point, you will be able to write it down. So the probability of \"text\" is basically a sum of two cases, where each case corresponds to one of the word distributions, and it accounts for the two ways of generating \"text\". Inside each case, we have the probability of choosing the model, which is 0.5, multiplied by the probability of observing \"text\" from that model. Similarly, \"the\" would have a probability of the same form, just with different exact probabilities. So naturally, our likelihood function is just the product of the two. So the likelihood is very easy to write down once you understand the probability of each word, which is also why it's so important to understand exactly the probability of observing each word from such a mixture model. Now, the interesting question is, how can we then optimize this likelihood? Well, you will notice that there are only two variables. They are precisely the two probabilities of the two words \"text\" and \"the\" given by Theta sub d. This is because we have assumed that all the other parameters are known. So now, the question is a very simple algebra question.
So we have a simple expression with two variables, and we hope to choose the values of these two variables to maximize this function. This is like the simple algebra problems that we have seen in exercises, and note that the two probabilities must sum to one. So there's some constraint. If there were no constraint, of course, we would set both probabilities to their maximum value, which would be one, to maximize this, but we can't do that because the probabilities of \"text\" and \"the\" must sum to one. We can't give both a probability of one. So now the question is, how should we allocate the probability mass between the two words? What do you think? Now, it will be useful to look at this formula for a moment and to see intuitively what we should do in order to set these probabilities to maximize the value of this function. If we look into this further, then we'll see some interesting behavior of the two component models in that they will be collaborating to maximize the probability of the observed data, which is dictated by the maximum likelihood estimator, but they're also competing in some way. In particular, they would be competing on the words, and they will tend to bet high probabilities on different words to avoid this competition in some sense, or to gain advantage in this competition. So again, looking at this objective function, where we have a constraint on the two probabilities, now if you look at the formula intuitively, you might feel that you want to set the probability of \"text\" to be somewhat larger than \"the\". This intuition can be well supported by a mathematical fact, which is: when the sum of two variables is a constant, their product is maximized when they are equal, and this is a fact that we know from algebra. Now, if we plug that in, it would mean that we have to make the two probabilities equal.
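The two-word setup in the transcript is concrete enough to check numerically. This is a minimal sketch (a brute-force grid search, which is not part of the lecture) using the lecture's numbers: background probabilities 0.9 and 0.1, equal mixing weights, and the document "text the".

```python
# Two-word mixture likelihood from the lecture's example.
# Background (fixed): p_B("the") = 0.9, p_B("text") = 0.1.
# Topic model theta_d: p_d("text") = p, p_d("the") = 1 - p.
# Each model is chosen with probability 0.5; document = ("text", "the").

def likelihood(p):
    p_text = 0.5 * 0.1 + 0.5 * p        # mixture probability of "text"
    p_the = 0.5 * 0.9 + 0.5 * (1 - p)   # mixture probability of "the"
    return p_text * p_the

# Brute-force grid search over p = p_d("text").
best_p = max((i / 1000 for i in range(1001)), key=likelihood)
print(best_p)
```

The two factors always sum to 1 in this setup, so the product is maximized when they are equal, which happens at p = 0.9, matching the solution the lecture gives.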
When we make them equal and consider the constraint, we can easily solve this problem, and the solution is that the probability of \"text\" would be 0.9 and the probability of \"the\" is 0.1. As you can see, indeed, the probability of \"text\" is now much larger than the probability of \"the\", and this would not be the case if we had just one distribution. This is clearly because of the use of the background model, which assigns a very high probability to \"the\" and a low probability to \"text\". If you look at the equation, you will see obviously some interaction of the two distributions here. In particular, you will see that, in order to make them equal, the probability assigned by Theta sub d must be higher for a word that has a smaller probability given by the background. This is obvious from examining this equation, because the background part for \"text\" is small. So in order to compensate for that, we must make the probability of \"text\" that's given by Theta sub d somewhat larger so that the two sides can be balanced. So this is in fact a very general behavior of this mixture model. That is, if one distribution assigns a higher probability to one word than to another, then the other distribution would tend to do the opposite. Basically, it would discourage other distributions to do the same, and this is to balance them out so that we can account for all words. This also means that, by using a background model that is fixed to assign high probabilities to background words, we can indeed encourage the unknown topic word distribution to assign smaller probabilities to such common words. Instead, it puts more probability mass on the content words that cannot be explained well by the background model, meaning that they have a very small probability from the background model, like \"text\" here." + "time": "0:06", + "text": "This lecture is about mixture model estimation. In this lecture, we're going to continue discussing probabilistic topic models.
In particular, we're going to talk about how to estimate the parameters of a mixture model. So let's first look at our motivation for using a mixture model, where we hope to factor out the background words from the topic word distribution. So the idea is to assume that the text data actually contain two kinds of words. One kind is from the background here, so the \"is\", \"we\" etc. The other kind is from our topic word distribution that we're interested in. So in order to solve this problem of factoring out background words, we can set up our mixture model as follows. We are going to assume that we already know the values of all the parameters in the mixture model except for the word distribution Theta sub d, which is our target. So this is a case of customizing a probabilistic model so that we embed the unknown variables that we are interested in, but we're going to simplify other things. We're going to assume we have knowledge about the others, and this is a powerful way of customizing a model for a particular need. Now you can imagine, we could have assumed that we also don't know the background word distribution, but in this case, our goal is to factor out precisely those high-probability background words. So we assume the background model is already fixed. The problem here is, how can we adjust the Theta sub d in order to maximize the probability of the observed document here, assuming all the other parameters are known? Now, although we designed the model heuristically to try to factor out these background words, it's unclear whether, if we use the maximum likelihood estimator, we will actually end up having a word distribution where the common words like \"the\" will indeed have smaller probabilities than before. So now, in this case, it turns out that the answer is yes.
When we set up the probabilistic model this way and we use the maximum likelihood estimator, we will end up having a word distribution where the common words are factored out by the use of the background distribution. So to understand why this is so, it's useful to examine the behavior of a mixture model. So we're going to look at a very simple case. In order to understand some interesting behaviors of a mixture model, the observed patterns here actually are generalizable to mixture models in general, but it's much easier to understand this behavior when we use a very simple case like what we're seeing here. So specifically in this case, let's assume that the probability of choosing each of the two models is exactly the same. So we're going to flip a fair coin to decide which model to use. Furthermore, we are going to assume there are precisely two words, \"the\" and \"text.\" Obviously, this is a very naive oversimplification of the actual text, but again, it is useful to examine the behavior in such a special case. So we further assume that the background model gives probability 0.9 to the word \"the\" and 0.1 to \"text\". Now, let's also assume that our data is extremely simple. The document has just two words, \"text\" and then \"the.\" So now, let's write down the likelihood function in such a case. First, what's the probability of \"text\" and what's the probability of \"the\"? I hope by this point, you will be able to write it down. So the probability of \"text\" is basically a sum of two cases, where each case corresponds to one of the word distributions, and it accounts for the two ways of generating \"text\". Inside each case, we have the probability of choosing the model, which is 0.5, multiplied by the probability of observing \"text\" from that model. Similarly, \"the\" would have a probability of the same form, just with different exact probabilities. So naturally, our likelihood function is just the product of the two.
So the likelihood is very easy to write down once you understand the probability of each word, which is also why it's so important to understand exactly the probability of observing each word from such a mixture model. Now, the interesting question is, how can we then optimize this likelihood? Well, you will notice that there are only two variables. They are precisely the two probabilities of the two words \"text\" and \"the\" given by Theta sub d. This is because we have assumed that all the other parameters are known. So now, the question is a very simple algebra question. So we have a simple expression with two variables, and we hope to choose the values of these two variables to maximize this function. This is like the simple algebra problems that we have seen in exercises, and note that the two probabilities must sum to one. So there's some constraint. If there were no constraint, of course, we would set both probabilities to their maximum value, which would be one, to maximize this, but we can't do that because the probabilities of \"text\" and \"the\" must sum to one. We can't give both a probability of one. So now the question is, how should we allocate the probability mass between the two words? What do you think? Now, it will be useful to look at this formula for a moment and to see intuitively what we should do in order to set these probabilities to maximize the value of this function. If we look into this further, then we'll see some interesting behavior of the two component models in that they will be collaborating to maximize the probability of the observed data, which is dictated by the maximum likelihood estimator, but they're also competing in some way. In particular, they would be competing on the words, and they will tend to bet high probabilities on different words to avoid this competition in some sense, or to gain advantage in this competition.
So again, looking at this objective function, where we have a constraint on the two probabilities, now if you look at the formula intuitively, you might feel that you want to set the probability of \"text\" to be somewhat larger than \"the\". This intuition can be well supported by a mathematical fact, which is: when the sum of two variables is a constant, their product is maximized when they are equal, and this is a fact that we know from algebra. Now, if we plug that in, it would mean that we have to make the two probabilities equal. When we make them equal and consider the constraint, we can easily solve this problem, and the solution is that the probability of \"text\" would be 0.9 and the probability of \"the\" is 0.1. As you can see, indeed, the probability of \"text\" is now much larger than the probability of \"the\", and this would not be the case if we had just one distribution. This is clearly because of the use of the background model, which assigns a very high probability to \"the\" and a low probability to \"text\". If you look at the equation, you will see obviously some interaction of the two distributions here. In particular, you will see that, in order to make them equal, the probability assigned by Theta sub d must be higher for a word that has a smaller probability given by the background. This is obvious from examining this equation, because the background part for \"text\" is small. So in order to compensate for that, we must make the probability of \"text\" that's given by Theta sub d somewhat larger so that the two sides can be balanced. So this is in fact a very general behavior of this mixture model. That is, if one distribution assigns a higher probability to one word than to another, then the other distribution would tend to do the opposite. Basically, it would discourage other distributions to do the same, and this is to balance them out so that we can account for all words.
This also means that, by using a background model that is fixed to assign high probabilities to background words, we can indeed encourage the unknown topic word distribution to assign smaller probabilities to such common words. Instead, it puts more probability mass on the content words that cannot be explained well by the background model, meaning that they have a very small probability from the background model, like \"text\" here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/QnGYn/3-2-probabilistic-topic-models-mixture-model-estimation-part-1" } ] }, { "3-3-probabilistic-topic-models-mixture-model-estimation-part-2": [ { - "0:00": "[SOUND] Now let's look at another behavior of the mixture model, and in this case let's look at the response to data frequencies. So what you are seeing now is basically the likelihood function for the two-word document, and we know in this case the solution is 'text' with a probability of 0.9 and 'the' with a probability of 0.1. Now it's interesting to think about a scenario where we start adding more words to the document. So what would happen if we add many the's to the document?" + "time": "0:00", + "text": "[SOUND] Now let's look at another behavior of the mixture model, and in this case let's look at the response to data frequencies. So what you are seeing now is basically the likelihood function for the two-word document, and we know in this case the solution is 'text' with a probability of 0.9 and 'the' with a probability of 0.1. Now it's interesting to think about a scenario where we start adding more words to the document. So what would happen if we add many the's to the document?", + "url": "https://www.coursera.org/learn/text-mining/lecture/cMSgR/3-3-probabilistic-topic-models-mixture-model-estimation-part-2" }, { - "0:41": "Now this would change the game, right? So, how? Well, picture what the likelihood function would look like now. Well, it starts with the likelihood function for the two words, right? As we add more words, we know that.
But we have to just multiply the likelihood function by additional terms to account for the additional occurrences of 'the'. Since in this case all the additional terms are 'the', we're going to just multiply by this term, right? For the probability of 'the'." + "time": "0:41", + "text": "Now this would change the game, right? So, how? Well, picture what the likelihood function would look like now. Well, it starts with the likelihood function for the two words, right? As we add more words, we know that. But we have to just multiply the likelihood function by additional terms to account for the additional occurrences of 'the'. Since in this case all the additional terms are 'the', we're going to just multiply by this term, right? For the probability of 'the'.", + "url": "https://www.coursera.org/learn/text-mining/lecture/cMSgR/3-3-probabilistic-topic-models-mixture-model-estimation-part-2" }, { - "1:12": "And if we have another occurrence of 'the', we'd multiply again by the same term, and so on and so forth. We add as many terms as the number of the's that we add to the document d'. Now this obviously changes the likelihood function. So what's interesting now is to think about how that would change our solution. So what's the optimal solution now?" + "time": "1:12", + "text": "And if we have another occurrence of 'the', we'd multiply again by the same term, and so on and so forth. We add as many terms as the number of the's that we add to the document d'. Now this obviously changes the likelihood function. So what's interesting now is to think about how that would change our solution. So what's the optimal solution now?", + "url": "https://www.coursera.org/learn/text-mining/lecture/cMSgR/3-3-probabilistic-topic-models-mixture-model-estimation-part-2" }, { - "1:38": "Now, intuitively you'd know the original solution, 0.9 versus 0.1, will no longer be optimal for this new function, right?"
+ "time": "1:38", + "text": "Now, intuitively you'd know the original solution, 0.9 versus 0.1, will no longer be optimal for this new function, right?", + "url": "https://www.coursera.org/learn/text-mining/lecture/cMSgR/3-3-probabilistic-topic-models-mixture-model-estimation-part-2" }, { - "1:48": "But the question is, how should we change it? The general constraint is still that they must sum to one. So we know we must take away some probability mass from one word and add probability mass to the other word. The question is which word should have a reduced probability and which word should have a larger probability. And in particular, let's think about the probability of 'the'. Should it be increased to be more than 0.1? Or should we decrease it to less than 0.1? What do you think?" + "time": "1:48", + "text": "But the question is, how should we change it? The general constraint is still that they must sum to one. So we know we must take away some probability mass from one word and add probability mass to the other word. The question is which word should have a reduced probability and which word should have a larger probability. And in particular, let's think about the probability of 'the'. Should it be increased to be more than 0.1? Or should we decrease it to less than 0.1? What do you think?", + "url": "https://www.coursera.org/learn/text-mining/lecture/cMSgR/3-3-probabilistic-topic-models-mixture-model-estimation-part-2" }, { - "2:19": "Now you might want to pause the video a moment to think more about this question, because this has to do with understanding an important behavior of a mixture model, and indeed of the maximum likelihood estimator. Now if you look at the formula for a moment, then you will see that the objective function is now more influenced by 'the' than by 'text'. So now, as you can imagine, it would make sense to actually assign a smaller probability to 'text' to make room for a larger probability for 'the'. Why? Because 'the' is repeated many times.
If we increase it a little bit, it will have a more positive impact, whereas a slight decrease of 'text' will have a relatively small impact, because it occurred just once, right? So this means there is another behavior that we observe here. That is, high-frequency words get high probabilities from all the distributions. And this is no surprise at all, because after all, we are maximizing the likelihood of the data. So the more a word occurs, the more sense it makes to give such a word a higher probability, because the impact on the likelihood function would be larger. This is in fact a very general phenomenon of the maximum likelihood estimator. But in this case, we can see that as we see more occurrences of a term, it also encourages the unknown distribution theta sub d to assign a somewhat higher probability to this word." + "time": "2:19", + "text": "Now you might want to pause the video a moment to think more about this question, because this has to do with understanding an important behavior of a mixture model, and indeed of the maximum likelihood estimator. Now if you look at the formula for a moment, then you will see that the objective function is now more influenced by 'the' than by 'text'. So now, as you can imagine, it would make sense to actually assign a smaller probability to 'text' to make room for a larger probability for 'the'. Why? Because 'the' is repeated many times. If we increase it a little bit, it will have a more positive impact, whereas a slight decrease of 'text' will have a relatively small impact, because it occurred just once, right? So this means there is another behavior that we observe here. That is, high-frequency words get high probabilities from all the distributions. And this is no surprise at all, because after all, we are maximizing the likelihood of the data.
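The effect the lecture describes here can be checked numerically. A minimal sketch (grid search; the repeat counts are chosen for illustration, the rest follows the lecture's setup): as we add more copies of 'the' to the document, the maximum likelihood estimate of p_d('the') grows.

```python
# Sketch: repeated "the"s pull theta_d toward "the".
# Setup as in the lecture: p_B("the") = 0.9, p_B("text") = 0.1,
# mixing weights 0.5/0.5; document = one "text" plus n copies of "the".
import math

def log_likelihood(p, n):
    # p = p_d("text"); each copy of "the" contributes one more factor
    return math.log(0.5 * 0.1 + 0.5 * p) + n * math.log(0.5 * 0.9 + 0.5 * (1 - p))

opt_the = []
for n in (1, 2, 5, 10):
    best_p = max((i / 1000 for i in range(1001)), key=lambda p: log_likelihood(p, n))
    opt_the.append(1 - best_p)      # optimal p_d("the")
    print(n, round(1 - best_p, 3))
```

With a single 'the' the optimum stays at 0.1, but it rises steadily as 'the' is repeated, exactly the behavior described in the transcript.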
So the more a word occurs, the more sense it makes to give such a word a higher probability, because the impact on the likelihood function would be larger. This is in fact a very general phenomenon of the maximum likelihood estimator. But in this case, we can see that as we see more occurrences of a term, it also encourages the unknown distribution theta sub d to assign a somewhat higher probability to this word.", + "url": "https://www.coursera.org/learn/text-mining/lecture/cMSgR/3-3-probabilistic-topic-models-mixture-model-estimation-part-2" }, { - "4:07": "Now it's also interesting to think about the impact of the probability of Theta sub B, the probability of choosing one of the two component models. Now we've so far been assuming that each model is equally likely, and that gives us 0.5. But you can again look at this likelihood function and try to picture what would happen if we increase the probability of choosing the background model. Now you will see that in these terms for 'the', we have a different form, where the probability would be" + "time": "4:07", + "text": "Now it's also interesting to think about the impact of the probability of Theta sub B, the probability of choosing one of the two component models. Now we've so far been assuming that each model is equally likely, and that gives us 0.5. But you can again look at this likelihood function and try to picture what would happen if we increase the probability of choosing the background model. Now you will see that in these terms for 'the', we have a different form, where the probability would be", + "url": "https://www.coursera.org/learn/text-mining/lecture/cMSgR/3-3-probabilistic-topic-models-mixture-model-estimation-part-2" }, { - "4:40": "even larger, because the background has a high probability for the word, and the coefficient in front of 0.9, which is now 0.5, would be even larger. When this is larger, the overall result would be larger.
And that also makes it less important for theta sub d to increase the probability of 'the', because it's already very large. So the impact here of increasing the probability of 'the' is somewhat regulated by this coefficient, the probability of choosing the background model. If it's larger for the background, then it becomes less important to increase the value. So this means the behavior here, which is that high-frequency words tend to get high probabilities, is affected or regularized somewhat by the probability of choosing each component. The more likely a component is to be chosen, the more important it is for it to have higher values for these frequent words. If it has a very small probability of being chosen, then the incentive is less. So to summarize, we have just discussed the mixture model. We discussed the estimation problem of the mixture model, and in particular we discussed some general behaviors of the estimator, which means we can expect our estimator to capture these intuitions. First, every component model attempts to assign high probabilities to highly frequent words in the data, and this is to collaboratively maximize the likelihood. Second, different component models tend to bet high probabilities on different words, and this is to avoid competition, or a waste of probability mass, and this would allow them to collaborate more efficiently to maximize the likelihood." + "time": "4:40", + "text": "even larger, because the background has a high probability for the word, and the coefficient in front of 0.9, which is now 0.5, would be even larger. When this is larger, the overall result would be larger. And that also makes it less important for theta sub d to increase the probability of 'the', because it's already very large. So the impact here of increasing the probability of 'the' is somewhat regulated by this coefficient, the probability of choosing the background model. If it's larger for the background, then it becomes less important to increase the value.
So this means the behavior here, which is that high-frequency words tend to get high probabilities, is affected or regularized somewhat by the probability of choosing each component. The more likely a component is to be chosen, the more important it is for it to have higher values for these frequent words. If it has a very small probability of being chosen, then the incentive is less. So to summarize, we have just discussed the mixture model. We discussed the estimation problem of the mixture model, and in particular we discussed some general behaviors of the estimator, which means we can expect our estimator to capture these intuitions. First, every component model attempts to assign high probabilities to highly frequent words in the data, and this is to collaboratively maximize the likelihood. Second, different component models tend to bet high probabilities on different words, and this is to avoid competition, or a waste of probability mass, and this would allow them to collaborate more efficiently to maximize the likelihood.", + "url": "https://www.coursera.org/learn/text-mining/lecture/cMSgR/3-3-probabilistic-topic-models-mixture-model-estimation-part-2" }, { - "6:33": "So, the probability of choosing each component regulates the collaboration and the competition between component models. It would allow some component models to respond more to the change, for example, of the frequency of the data points in the data." + "time": "6:33", + "text": "So, the probability of choosing each component regulates the collaboration and the competition between component models. It would allow some component models to respond more to the change, for example, of the frequency of the data points in the data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/cMSgR/3-3-probabilistic-topic-models-mixture-model-estimation-part-2" }, { - "6:53": "We also talked about the special case of fixing one component to a background word distribution, right?
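The regulating role of the mixing weight described in this summary can be sketched the same way (grid search; the choice of five 'the's and the two weights are illustrative, not from the lecture): raising the probability of choosing the background model reduces how much 'the' the topic model theta_d needs to absorb.

```python
# Sketch: the background mixing weight regulates theta_d.
# Background fixed at p_B("the") = 0.9, p_B("text") = 0.1;
# document = one "text" plus five "the"s (illustrative counts).
import math

def log_likelihood(p, lam_b):
    # lam_b = probability of choosing the background model
    p_text = lam_b * 0.1 + (1 - lam_b) * p
    p_the = lam_b * 0.9 + (1 - lam_b) * (1 - p)
    return math.log(p_text) + 5 * math.log(p_the)

opt = {}
for lam_b in (0.5, 0.9):
    best_p = max((i / 1000 for i in range(1001)), key=lambda p: log_likelihood(p, lam_b))
    opt[lam_b] = 1 - best_p         # optimal p_d("the")
    print(lam_b, round(opt[lam_b], 3))
```

With lam_b = 0.9 the background already explains most occurrences of 'the', so the incentive for theta_d to raise p_d('the') shrinks, as the transcript says.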
And this distribution can be estimated by using a collection of documents, a large collection of English documents, using just one distribution, and then we'll just use the normalized frequencies of terms to give us the probabilities of all these words. Now when we use such a specialized mixture model, we show that we can effectively get rid of the common words in the other component." + "time": "6:53", + "text": "We also talked about the special case of fixing one component to a background word distribution, right? And this distribution can be estimated by using a collection of documents, a large collection of English documents, using just one distribution, and then we'll just use the normalized frequencies of terms to give us the probabilities of all these words. Now when we use such a specialized mixture model, we show that we can effectively get rid of the common words in the other component.", + "url": "https://www.coursera.org/learn/text-mining/lecture/cMSgR/3-3-probabilistic-topic-models-mixture-model-estimation-part-2" }, { - "7:23": "And that would make the discovered topic more discriminative." + "time": "7:23", + "text": "And that would make the discovered topic more discriminative.", + "url": "https://www.coursera.org/learn/text-mining/lecture/cMSgR/3-3-probabilistic-topic-models-mixture-model-estimation-part-2" }, { - "7:27": "This is also an example of imposing a prior on the model parameters. The prior here basically means one model must be exactly the same as the background language model, and if you recall what we talked about in Bayesian estimation, this prior will allow us to favor a model that is consistent with our prior. In fact, if it's not consistent, we're going to say the model is impossible, so it has a zero prior probability. That effectively excludes such a scenario. This is also an issue that we'll talk more about later.
[MUSIC]" + "time": "7:27", + "text": "This is also an example of imposing a prior on the model parameters. The prior here basically means one model must be exactly the same as the background language model, and if you recall what we talked about in Bayesian estimation, this prior will allow us to favor a model that is consistent with our prior. In fact, if it's not consistent, we're going to say the model is impossible, so it has a zero prior probability. That effectively excludes such a scenario. This is also an issue that we'll talk more about later. [MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/cMSgR/3-3-probabilistic-topic-models-mixture-model-estimation-part-2" } ] }, { "3-4-probabilistic-topic-models-expectation-maximization-algorithm-part-1": [ { - "0:06": "This lecture is about the expectation-maximization algorithm, also called the EM algorithm. In this lecture, we're going to continue the discussion of probabilistic topic models. In particular, we're going to introduce the EM algorithm, which is a family of useful algorithms for computing the maximum likelihood estimate of mixture models. So this is the now familiar scenario of using a two-component mixture model to try to factor out the background words from the topic word distribution here. So we're interested in computing this estimate, and we're going to try to adjust these probability values to maximize the probability of the observed document. Note that we assume that all the other parameters are known. So the only thing unknown is the word probabilities given by theta sub d. In this lecture, we're going to look into how to compute this maximum likelihood estimate. Now, let's start with the idea of separating the words in the text data into two groups. One group would be explained by the background model. The other group would be explained by the unknown topic word distribution. After all, this is the basic idea of a mixture model.
But suppose we actually knew which word is from which distribution. So that would mean, for example, these words \"the\", \"is\", and \"we\" are known to be from the background word distribution. On the other hand, the other words \"text\", \"mining\", \"clustering\", etc. are known to be from the topic word distribution. If you can see the color, then these are shown in blue. These blue words are then assumed to be from the topic word distribution. If we already knew how to separate these words, then the problem of estimating the word distribution would be extremely simple. If you think about this for a moment, you'll realize that, well, we can simply take all these words that are known to be from this word distribution theta sub d and normalize them. So indeed this problem would be very easy to solve if we had known precisely which words are from which distribution, and this in fact makes the model no longer a mixture model, because we can already observe which distribution has been used to generate which part of the data. So we actually go back to the single word distribution problem. In this case, let's call these words that are known to be from theta d a pseudo document d prime, and now all we need to do is just normalize these word counts for each word w_i. That's fairly straightforward. It's just dictated by the maximum likelihood estimator. Now, this idea however doesn't work, because in practice we don't really know which word is from which distribution, but it gives us the idea that perhaps we can guess which word is from which distribution. Specifically, given all the parameters, can we infer which distribution a word is from? So let's assume that we actually know tentative probabilities for these words in theta sub d. So now all the parameters are known for this mixture model, and now let's consider a word like \"text\". So the question is, do you think \"text\" is more likely to have been generated from theta sub d or from theta sub b?
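The "pseudo document" estimate described above is just count normalization. A minimal sketch (the word counts are made up for illustration; 'text', 'mining', 'clustering' are the lecture's example words):

```python
# Sketch of the pseudo-document idea: if we knew which words came from
# theta_d, its maximum likelihood estimate is just normalized counts.
from collections import Counter

# Hypothetical words assumed (for illustration) to come from theta_d:
d_prime = ["text", "mining", "clustering", "text"]

counts = Counter(d_prime)
total = sum(counts.values())
theta_d = {w: c / total for w, c in counts.items()}
print(theta_d)
```

Each probability is simply count(w, d') divided by the length of d', so the estimated distribution sums to one by construction.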
So in other words, we want to infer which distribution has been used to generate this text. Now, this inference process is a typical Bayesian inference situation where we have some prior about these two distributions. So can you see what is our prior here? Well, the prior here is the probability of each distribution. So the prior is given by these two probabilities. In this case, the prior is saying that each model is equally likely, but we can imagine perhaps a different prior is possible. So this is called a prior because this is our guess of which distribution has been used to generate a word before we even observe the word. So that's why we call it the prior. So before we observe the word, we don't know what word will be observed. Our best guess is to say well, they're equally likely. All right. So it's just flipping a coin. Now in Bayesian inference we typically update our belief after we have observed the evidence. So what is the evidence here? Well, the evidence here is the word text. Now that we know we're interested in the word text. So text can be regarded as evidence, and if we use Bayes rule to combine the prior and the data likelihood, what we will end up with is to combine the prior with the likelihood that you see here, which is basically the probability of the word text from each distribution. We see that in both cases the word text is possible. Note that even in the background it is still possible, it just has a very small probability. So intuitively what would be your guess in this case? Now if you're like many others, your guess is that text is probably from theta sub d. It's more likely from theta sub d. Why? You will probably see that it's because text has a much higher probability here under theta sub d than under the background model, which has a very small probability. By this we're going to say, well, text is more likely from theta sub d.
So you see our guess of which distribution has been used to generate the text would depend on how high the probability of the text is in each word distribution. We tend to guess the distribution that gives the word a higher probability, and this is likely to maximize the likelihood. So we're going to choose the distribution that gives a higher likelihood. So in other words, we're going to compare these two probabilities of the word given by each distribution. But our guess must also be affected by the prior. So we also need to compare these two priors. Why? Because imagine if we adjust these probabilities, we're going to say the probability of choosing a background model is almost 100 percent. Now, if you have that kind of strong prior, then that would affect your guess. You might think, well, wait a moment, maybe text could have been from the background as well. Although the probability is very small here, the prior is very high. So in the end, we have to combine the two, and the Bayes formula provides us with a solid and principled way of making this kind of guess, to quantify that. So more specifically, let's think about the probability that this word has been generated in fact from theta sub d. Well, in order for text to be generated from theta sub d, two things must happen. First, theta sub d must have been selected, so we have the selection probability here. Secondly, we also have to actually have observed text from the distribution. So when we multiply the two together, we get the probability that text has in fact been generated from theta sub d. Similarly, for the background model, the probability of generating text is another product of a similar form. Now, we also introduced the latent variable z here to denote whether the word is from the background or the topic. When z is zero, it means it's from the topic theta sub d. When it's one, it means it's from the background theta sub b. So now we have the probability that text is generated from each.
Then we can simply normalize them to have an estimate of the probability that the word text is from theta sub d or from theta sub b. Then equivalently, the probability that z is equal to zero given that the observed evidence is text. So this is an application of Bayes rule. But this step is very crucial for understanding the EM algorithm because if we can do this, then we would be able to first initialize the parameter values somewhat randomly, and then we're going to take a guess of these z values. Which distribution has been used to generate which word? The initialized parameter values would allow us to have a complete specification of the mixture model which further allows us to apply Bayes rule to infer which distribution is more likely to generate each word. This prediction essentially helps us to separate the words from the two distributions. Although we can't separate them for sure, we can separate them probabilistically as shown here." + "time": "0:06", + "text": "This lecture is about the expectation-maximization algorithm, also called the EM algorithm. In this lecture, we're going to continue the discussion of probabilistic topic models. In particular, we're going to introduce the EM algorithm, which is a family of useful algorithms for computing the maximum likelihood estimate of mixture models. So this is now a familiar scenario of using a two-component mixture model to try to factor out the background words from one topic word distribution here. So we're interested in computing this estimate, and we're going to try to adjust these probability values to maximize the probability of the observed document. Note that we assume that all the other parameters are known. So the only thing unknown is the word probabilities given by theta sub d. In this lecture, we're going to look into how to compute this maximum likelihood estimate. Now, let's start with the idea of separating the words in the text data into two groups.
One group would be explained by the background model. The other group would be explained by the unknown topic word distribution. After all, this is the basic idea of mixture model. But suppose we actually know which word is from which distribution? So that would mean, for example, these words the, is, and we are known to be from this background word distribution. On the other hand, the other words text, mining, clustering etc are known to be from the topic word distribution. If you can see the color, then these are shown in blue. These blue words are then assumed to be from the topic word distribution. If we already know how to separate these words, then the problem of estimating the word distribution would be extremely simple. If you think about this for a moment, you'll realize that, well, we can simply take all these words that are known to be from this word distribution theta sub d and normalize them. So indeed this problem would be very easy to solve if we had known which words are from which distribution precisely, and this is in fact making this model no longer a mixture model because we can already observe which distribution has been used to generate which part of the data. So we actually go back to the single word distribution problem. In this case let's call these words that are known to be from theta d a pseudo document d prime, and now all we need to do is just normalize these word counts for each word w_i. That's fairly straightforward. It's just dictated by the maximum likelihood estimator. Now, this idea however doesn't work because we in practice don't really know which word is from which distribution, but this gives us the idea that perhaps we can guess which word is from which distribution. Specifically, given all the parameters, can we infer the distribution a word is from? So let's assume that we actually know tentative probabilities for these words in theta sub d.
So now all the parameters are known for this mixture model, and now let's consider a word like \"text\". So the question is, do you think \"text\" is more likely to have been generated from theta sub d or from theta sub b? So in other words, we want to infer which distribution has been used to generate this text. Now, this inference process is a typical Bayesian inference situation where we have some prior about these two distributions. So can you see what is our prior here? Well, the prior here is the probability of each distribution. So the prior is given by these two probabilities. In this case, the prior is saying that each model is equally likely, but we can imagine perhaps a different prior is possible. So this is called a prior because this is our guess of which distribution has been used to generate a word before we even observe the word. So that's why we call it the prior. So before we observe the word, we don't know what word will be observed. Our best guess is to say well, they're equally likely. All right. So it's just flipping a coin. Now in Bayesian inference we typically update our belief after we have observed the evidence. So what is the evidence here? Well, the evidence here is the word text. Now that we know we're interested in the word text. So text can be regarded as evidence, and if we use Bayes rule to combine the prior and the data likelihood, what we will end up with is to combine the prior with the likelihood that you see here, which is basically the probability of the word text from each distribution. We see that in both cases the word text is possible. Note that even in the background it is still possible, it just has a very small probability. So intuitively what would be your guess in this case? Now if you're like many others, your guess is that text is probably from theta sub d. It's more likely from theta sub d. Why?
You will probably see that it's because text has a much higher probability here under theta sub d than under the background model, which has a very small probability. By this we're going to say, well, text is more likely from theta sub d. So you see our guess of which distribution has been used to generate the text would depend on how high the probability of the text is in each word distribution. We tend to guess the distribution that gives the word a higher probability, and this is likely to maximize the likelihood. So we're going to choose the distribution that gives a higher likelihood. So in other words, we're going to compare these two probabilities of the word given by each distribution. But our guess must also be affected by the prior. So we also need to compare these two priors. Why? Because imagine if we adjust these probabilities, we're going to say the probability of choosing a background model is almost 100 percent. Now, if you have that kind of strong prior, then that would affect your guess. You might think, well, wait a moment, maybe text could have been from the background as well. Although the probability is very small here, the prior is very high. So in the end, we have to combine the two, and the Bayes formula provides us with a solid and principled way of making this kind of guess, to quantify that. So more specifically, let's think about the probability that this word has been generated in fact from theta sub d. Well, in order for text to be generated from theta sub d, two things must happen. First, theta sub d must have been selected, so we have the selection probability here. Secondly, we also have to actually have observed text from the distribution. So when we multiply the two together, we get the probability that text has in fact been generated from theta sub d. Similarly, for the background model, the probability of generating text is another product of a similar form.
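The inference the transcript describes, combining the prior (the probability of selecting each distribution) with the likelihood (the word's probability under each distribution) via Bayes rule, can be sketched as a few lines of Python. The specific numbers are hypothetical, chosen only so that "text" is far more probable under the topic model than under the background, as in the lecture's example.

```python
# Posterior probability that a word was generated by the topic model theta_d
# rather than the background theta_b, via Bayes rule. All numbers below are
# hypothetical illustrations, not values from the lecture's slides.
def p_z_topic(p_theta_d, p_w_given_d, p_theta_b, p_w_given_b):
    """P(z = 0 | w): prior * likelihood for theta_d, normalized."""
    num = p_theta_d * p_w_given_d          # selection prob * word prob (topic)
    den = num + p_theta_b * p_w_given_b    # plus the background alternative
    return num / den

# Equal priors (0.5 each); "text" is likely under the topic, rare in background.
post = p_z_topic(0.5, 0.04, 0.5, 0.000006)
```

With these numbers the posterior comes out very close to 1, matching the intuition that "text" is almost surely from theta sub d; a strong prior toward the background would pull it back down.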
Now, we also introduced the latent variable z here to denote whether the word is from the background or the topic. When z is zero, it means it's from the topic theta sub d. When it's one, it means it's from the background theta sub b. So now we have the probability that text is generated from each. Then we can simply normalize them to have an estimate of the probability that the word text is from theta sub d or from theta sub b. Then equivalently, the probability that z is equal to zero given that the observed evidence is text. So this is an application of Bayes rule. But this step is very crucial for understanding the EM algorithm because if we can do this, then we would be able to first initialize the parameter values somewhat randomly, and then we're going to take a guess of these z values. Which distribution has been used to generate which word? The initialized parameter values would allow us to have a complete specification of the mixture model which further allows us to apply Bayes rule to infer which distribution is more likely to generate each word. This prediction essentially helps us to separate the words from the two distributions. Although we can't separate them for sure, we can separate them probabilistically as shown here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/f82s5/3-4-probabilistic-topic-models-expectation-maximization-algorithm-part-1" } ] }, { "3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2": [ { - "0:00": "[SOUND] So this is indeed a general idea of the Expectation-Maximization, or EM, Algorithm." + "time": "0:00", + "text": "[SOUND] So this is indeed a general idea of the Expectation-Maximization, or EM, Algorithm.", + "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" }, { - "0:14": "So in all the EM algorithms we introduce a hidden variable to help us solve the problem more easily.
In our case the hidden variable is a binary variable for each occurrence of a word. And this binary variable would indicate whether the word has been generated from theta sub d or theta sub b. And here we show some possible values of these variables. For example, for the, since it's from the background, the z value is one. And text, on the other hand, is from the topic, so it's zero for z, etc." + "time": "0:14", + "text": "So in all the EM algorithms we introduce a hidden variable to help us solve the problem more easily. In our case the hidden variable is a binary variable for each occurrence of a word. And this binary variable would indicate whether the word has been generated from theta sub d or theta sub b. And here we show some possible values of these variables. For example, for the, since it's from the background, the z value is one. And text, on the other hand, is from the topic, so it's zero for z, etc.", + "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" }, { - "0:53": "Now, of course, we don't observe these z values, we just imagine there are such values of z attached to the words." + "time": "0:53", + "text": "Now, of course, we don't observe these z values, we just imagine there are such values of z attached to the words.", + "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" }, { - "1:02": "And that's why we call these hidden variables." + "time": "1:02", + "text": "And that's why we call these hidden variables.", + "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" }, { - "1:06": "Now, the idea that we talked about before for predicting the word distribution that has been used to generate the word is exactly to predict the value of this hidden variable. And, so, the EM algorithm then, would work as follows.
First, we'll initialize all the parameters with random values. In our case, the parameters are mainly the probabilities of words given by theta sub d. So this is an initialization stage. These initialized values would allow us to use Bayes rule to take a guess of these z values, so we'd guess these values. We can't say for sure whether text is from the background or not. But we can have our guess. This is given by this formula. It's called an E-step. And so the algorithm would then try to use the E-step to guess these z values. After that, it would then invoke another step that's called the M-step. In this step we simply take advantage of the inferred z values and then just group words that are from the same distribution, like these from the background, including this as well." + "time": "1:06", + "text": "Now, the idea that we talked about before for predicting the word distribution that has been used to generate the word is exactly to predict the value of this hidden variable. And, so, the EM algorithm then, would work as follows. First, we'll initialize all the parameters with random values. In our case, the parameters are mainly the probabilities of words given by theta sub d. So this is an initialization stage. These initialized values would allow us to use Bayes rule to take a guess of these z values, so we'd guess these values. We can't say for sure whether text is from the background or not. But we can have our guess. This is given by this formula. It's called an E-step. And so the algorithm would then try to use the E-step to guess these z values. After that, it would then invoke another step that's called the M-step.
In this step we simply take advantage of the inferred z values and then just group words that are from the same distribution, like these from the background, including this as well.", + "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" }, { - "2:27": "We can then normalize the count to estimate the probabilities or to revise our estimate of the parameters." + "time": "2:27", + "text": "We can then normalize the count to estimate the probabilities or to revise our estimate of the parameters.", + "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" }, { - "2:36": "So let me also illustrate that we can group the words that are believed to have come from theta sub d, and that's text, mining, algorithm, for example, and clustering." + "time": "2:36", + "text": "So let me also illustrate that we can group the words that are believed to have come from theta sub d, and that's text, mining, algorithm, for example, and clustering.", + "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" }, { - "2:51": "And we group them together to help us re-estimate the parameters that we're interested in. So these will help us estimate these parameters." + "time": "2:51", + "text": "And we group them together to help us re-estimate the parameters that we're interested in. So these will help us estimate these parameters.", + "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" }, { - "3:06": "Note that before we just set these parameter values randomly. But with this guess, we will have a somewhat improved estimate of this. Of course, we don't know exactly whether it's zero or one. So we're not going to really do the split in a hard way.
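The M-step the transcript describes, weighting each word's count by the E-step's posterior and then normalizing to revise the estimate of theta sub d, can be sketched as follows. The counts and posteriors here are hypothetical illustrations, not values from the lecture's slides.

```python
# M-step sketch: each word's count is weighted by the E-step posterior
# p(z = 0 | w) (word generated by the topic), then the weighted counts
# are normalized into the revised estimate of p(w | theta_d).
# All numbers are hypothetical.
counts = {"text": 4, "mining": 2, "clustering": 1, "the": 5}
p_z0 = {"text": 0.9, "mining": 0.8, "clustering": 0.7, "the": 0.1}  # from E-step

weighted = {w: counts[w] * p_z0[w] for w in counts}   # split the counts softly
total = sum(weighted.values())
theta_d = {w: weighted[w] / total for w in weighted}  # normalize to a distribution
```

Because "the" gets a low posterior, most of its count is allocated to the background, so the revised theta_d concentrates on the content words.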
But rather we're going to do a softer split. And this is what happened here." + "time": "3:06", + "text": "Note that before we just set these parameter values randomly. But with this guess, we will have a somewhat improved estimate of this. Of course, we don't know exactly whether it's zero or one. So we're not going to really do the split in a hard way. But rather we're going to do a softer split. And this is what happened here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" }, { - "3:29": "So we're going to adjust the count by the probability that we believe this word has been generated using theta sub d." + "time": "3:29", + "text": "So we're going to adjust the count by the probability that we believe this word has been generated using theta sub d.", + "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" }, { - "3:39": "And you can see this, where does this come from? Well, this has come from here, right? From the E-step. So the EM Algorithm would iteratively improve our initial estimate of parameters by using the E-step first and then the M-step. The E-step is to augment the data with additional information, like z. And the M-step is to take advantage of the additional information to separate the data. To split the data counts and then collect the right data counts to re-estimate our parameters. And then once we have a new generation of parameters, we're going to repeat this. We are going to do the E-step again. To improve our estimate of the hidden variables. And then that would lead to another generation of re-estimated parameters." + "time": "3:39", + "text": "And you can see this, where does this come from? Well, this has come from here, right? From the E-step.
So the EM Algorithm would iteratively improve our initial estimate of parameters by using the E-step first and then the M-step. The E-step is to augment the data with additional information, like z. And the M-step is to take advantage of the additional information to separate the data. To split the data counts and then collect the right data counts to re-estimate our parameters. And then once we have a new generation of parameters, we're going to repeat this. We are going to do the E-step again. To improve our estimate of the hidden variables. And then that would lead to another generation of re-estimated parameters.", + "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" }, { - "4:34": "For the word distribution that we are interested in." + "time": "4:34", + "text": "For the word distribution that we are interested in.", + "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" }, { - "4:39": "Okay, so, as I said, the bridge between the two is really the variable z, a hidden variable, which indicates how likely this word is from the topic word distribution, theta sub d." + "time": "4:39", + "text": "Okay, so, as I said, the bridge between the two is really the variable z, a hidden variable, which indicates how likely this word is from the topic word distribution, theta sub d.", + "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" }, { - "4:56": "So, this slide has a lot of content and you may need to pause the video to digest it. But this basically captures the essence of the EM Algorithm. Start with initial values that are often random themselves. And then we invoke the E-step followed by the M-step to get an improved setting of parameters.
And then we repeat this, so this is a Hill-Climbing algorithm that would gradually improve the estimate of parameters. As I will explain later there is some guarantee for reaching a local maximum of the log-likelihood function. So let's take a look at the computation for a specific case, so these formulas are the EM formulas that you saw before, and you can also see there are superscripts, here, like here, n, to indicate the generation of parameters. Like here for example we have n plus one. That means we have improved. From here to here we have an improvement. So in this setting we have assumed the two models have equal probabilities and the background model is known. So what are the relevant statistics? Well, these are the word counts." + "time": "4:56", + "text": "So, this slide has a lot of content and you may need to pause the video to digest it. But this basically captures the essence of the EM Algorithm. Start with initial values that are often random themselves. And then we invoke the E-step followed by the M-step to get an improved setting of parameters. And then we repeat this, so this is a Hill-Climbing algorithm that would gradually improve the estimate of parameters. As I will explain later there is some guarantee for reaching a local maximum of the log-likelihood function. So let's take a look at the computation for a specific case, so these formulas are the EM formulas that you saw before, and you can also see there are superscripts, here, like here, n, to indicate the generation of parameters. Like here for example we have n plus one. That means we have improved. From here to here we have an improvement. So in this setting we have assumed the two models have equal probabilities and the background model is known. So what are the relevant statistics? Well, these are the word counts.
So assume we have just four words, and their counts are like this. And this is our background model that assigns high probabilities to common words like the.", + "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" }, { - "6:25": "And in the first iteration, you can picture what will happen. Well first we initialize all the values. So here, this probability that we're interested in is normalized into a uniform distribution of all the words." + "time": "6:25", + "text": "And in the first iteration, you can picture what will happen. Well first we initialize all the values. So here, this probability that we're interested in is normalized into a uniform distribution of all the words.", + "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" }, { - "6:40": "And then the E-step would give us a guess of the distribution that has been used to generate each word. We can see we have different probabilities for different words. Why? Well, that's because these words have different probabilities in the background. So even though the two distributions are equally likely, and our initialization is a uniform distribution, because of the difference in the background distribution we have different guessed probabilities. So these words are believed to be more likely from the topic." + "time": "6:40", + "text": "And then the E-step would give us a guess of the distribution that has been used to generate each word. We can see we have different probabilities for different words. Why? Well, that's because these words have different probabilities in the background. So even though the two distributions are equally likely, and our initialization is a uniform distribution, because of the difference in the background distribution we have different guessed probabilities.
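The iteration walked through here can be put together into a small end-to-end sketch: a hypothetical four-word vocabulary with made-up counts, a fixed known background model, equal mixing weights, and a uniform initialization of the topic distribution, alternating E- and M-steps while tracking the log-likelihood. The word counts and background probabilities below are invented for illustration; only the structure follows the lecture.

```python
import math

# Hypothetical document and known background model (counts/probs are made up).
counts = {"the": 4, "paper": 2, "text": 4, "mining": 2}
p_bg = {"the": 0.5, "paper": 0.3, "text": 0.1, "mining": 0.1}  # fixed, known
lam = 0.5  # probability of choosing the background model (equal weights)

# Initialization: uniform topic word distribution theta_d.
theta_d = {w: 1.0 / len(counts) for w in counts}

def log_likelihood(theta_d):
    return sum(c * math.log(lam * p_bg[w] + (1 - lam) * theta_d[w])
               for w, c in counts.items())

lls = []
for _ in range(10):
    lls.append(log_likelihood(theta_d))
    # E-step: posterior that each word occurrence came from the topic.
    p_z0 = {w: (1 - lam) * theta_d[w] /
               ((1 - lam) * theta_d[w] + lam * p_bg[w]) for w in counts}
    # M-step: re-estimate theta_d from posterior-weighted counts.
    total = sum(counts[w] * p_z0[w] for w in counts)
    theta_d = {w: counts[w] * p_z0[w] / total for w in counts}
```

Running this, the log-likelihood values in `lls` are negative (logs of probabilities below one) and non-decreasing across iterations, and the estimated theta_d shifts probability mass toward "text" and "mining" and away from "the", exactly the behavior the lecture describes.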
So these words are believed to be more likely from the topic.", + "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" }, { - "7:15": "These, on the other hand, are less likely; probably from the background." + "time": "7:15", + "text": "These, on the other hand, are less likely; probably from the background.", + "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" }, { - "7:20": "So once we have these z values, we know in the M-step these probabilities will be used to adjust the counts. So four must be multiplied by this 0.33 in order to get the allocated counts toward the topic." + "time": "7:20", + "text": "So once we have these z values, we know in the M-step these probabilities will be used to adjust the counts. So four must be multiplied by this 0.33 in order to get the allocated counts toward the topic.", + "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" }, { - "7:39": "And this is done by this multiplication. Note that if our guess says this is 100%, if this is one point zero," + "time": "7:39", + "text": "And this is done by this multiplication. Note that if our guess says this is 100%, if this is one point zero,", + "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" }, { - "7:52": "then we just get the full count of this word for this topic. In general it's not going to be one point zero. So we're just going to get some percentage of this count toward this topic. Then we simply normalize these counts to have a new generation of parameter estimates. So you can see, compare this with the older one, which is here." + "time": "7:52", + "text": "then we just get the full count of this word for this topic.
In general it's not going to be one point zero. So we're just going to get some percentage of this count toward this topic. Then we simply normalize these counts to have a new generation of parameter estimates. So you can see, compare this with the older one, which is here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" }, { - "8:18": "So compare this with this one and we'll see the probability is different. Not only that, we also see some words that are believed to have come from the topic will have a higher probability. Like this one, text." + "time": "8:18", + "text": "So compare this with this one and we'll see the probability is different. Not only that, we also see some words that are believed to have come from the topic will have a higher probability. Like this one, text.", + "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" }, { - "8:32": "And of course, this new generation of parameters would allow us to further adjust the inferred latent variable or hidden variable values. So we have a new generation of values, because of the E-step based on the new generation of parameters. And these new inferred values of Zs will then give us another generation of the estimated word probabilities. And so on and so forth. So this is what would actually happen when we compute these probabilities using the EM Algorithm. As you can see in the last row, where we show the log-likelihood, the likelihood is increasing as we do the iterations. And note that the log-likelihood is negative because the probability is between 0 and 1; when you take a logarithm, it becomes a negative value. Now what's also interesting is, you'll note the last column. And these are the inferred word splits.
And these are the probabilities that a word is believed to have come from one distribution, in this case the topical distribution, all right. And you might wonder whether this would also be useful. Because our main goal is to estimate these word distributions. So this is our primary goal. We hope to have a more discriminative word distribution. But the last column is also a by-product. This also can actually be very useful. You can think about that. We can use it, for example, to estimate to what extent this document has covered background words. And when we add this up or take the average, we will kind of know to what extent it has covered background versus content words that are not explained well by the background. [MUSIC]" + "time": "8:32", + "text": "And of course, this new generation of parameters would allow us to further adjust the inferred latent variable or hidden variable values. So we have a new generation of values, because of the E-step based on the new generation of parameters. And these new inferred values of Zs will then give us another generation of the estimated word probabilities. And so on and so forth. So this is what would actually happen when we compute these probabilities using the EM Algorithm. As you can see in the last row, where we show the log-likelihood, the likelihood is increasing as we do the iterations. And note that the log-likelihood is negative because the probability is between 0 and 1; when you take a logarithm, it becomes a negative value. Now what's also interesting is, you'll note the last column. And these are the inferred word splits. And these are the probabilities that a word is believed to have come from one distribution, in this case the topical distribution, all right. And you might wonder whether this would also be useful. Because our main goal is to estimate these word distributions. So this is our primary goal. We hope to have a more discriminative word distribution.
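The by-product use suggested here, estimating to what extent a document is covered by background words, can be sketched by averaging the inferred posteriors p(z=1 | w) (word from the background) over the document's word occurrences. All counts and posteriors below are hypothetical.

```python
# Count-weighted average of the E-step posteriors p(z = 1 | w): an estimate
# of how much of the document is background. Numbers are hypothetical.
counts = {"the": 4, "is": 3, "text": 4, "mining": 2}
p_z1 = {"the": 0.9, "is": 0.85, "text": 0.05, "mining": 0.1}  # from E-step

total = sum(counts.values())
bg_coverage = sum(counts[w] * p_z1[w] for w in counts) / total
```

Here `bg_coverage` is a value between 0 and 1; a document dominated by common words would score near 1, while one dominated by topical content words would score near 0.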
But the last column is also a by-product. This also can actually be very useful. You can think about that. One use, for example, is to estimate to what extent this document has covered background words. And this, when we add this up or take the average, we will kind of know to what extent it has covered background versus content words that are not explained well by the background. [MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" } ] }, { "3-6-probabilistic-topic-models-expectation-maximization-algorithm-part-3": [ { - "0:07": "So, I just showed you that empirically the likelihood will converge, but theoretically it can also be proved that the EM algorithm will converge to a local maximum. So here's just an illustration of what happens; a detailed explanation would require more knowledge about some inequalities that we haven't really covered yet." + "time": "0:07", + "text": "So, I just showed you that empirically the likelihood will converge, but theoretically it can also be proved that the EM algorithm will converge to a local maximum. So here's just an illustration of what happens; a detailed explanation would require more knowledge about some inequalities that we haven't really covered yet.", + "url": "https://www.coursera.org/learn/text-mining/lecture/N5cBh/3-6-probabilistic-topic-models-expectation-maximization-algorithm-part-3" }, { - "0:39": "So here what you see is on the X dimension, we have a c0 value. This is a parameter that we have. On the y axis we see the likelihood function. So this curve is the original likelihood function, and this is the one that we hope to maximize. And we hope to find a c0 value at this point to maximize this. But in the case of a mixture model we cannot easily find an analytic solution to the problem. So, we have to resort to numerical algorithms, and the EM algorithm is such an algorithm.
It's a hill-climbing algorithm. That would mean you start with some random guess. Let's say you start from here, that's your starting point. And then you try to improve this by moving this to another point where you can have a higher likelihood. So that's the idea of hill climbing. And in the EM algorithm, the way we achieve this is to do two things. First, we'll fix a lower bound of the likelihood function. So this is the lower bound. See here." + "time": "0:39", + "text": "So here what you see is on the X dimension, we have a c0 value. This is a parameter that we have. On the y axis we see the likelihood function. So this curve is the original likelihood function, and this is the one that we hope to maximize. And we hope to find a c0 value at this point to maximize this. But in the case of a mixture model we cannot easily find an analytic solution to the problem. So, we have to resort to numerical algorithms, and the EM algorithm is such an algorithm. It's a hill-climbing algorithm. That would mean you start with some random guess. Let's say you start from here, that's your starting point. And then you try to improve this by moving this to another point where you can have a higher likelihood. So that's the idea of hill climbing. And in the EM algorithm, the way we achieve this is to do two things. First, we'll fix a lower bound of the likelihood function. So this is the lower bound. See here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/N5cBh/3-6-probabilistic-topic-models-expectation-maximization-algorithm-part-3" }, { - "1:51": "And once we fix the lower bound, we can then maximize the lower bound.
And of course, the reason why this works is because the lower bound is much easier to optimize. So we know our current guess is here. And by maximizing the lower bound, we'll move this point to the top. To here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/N5cBh/3-6-probabilistic-topic-models-expectation-maximization-algorithm-part-3" }, { - "2:13": "Right? And we can then map back to the original likelihood function, where we find this point. Because it's a lower bound, we are guaranteed to improve this guess, right? Because we improve our lower bound, and then the original likelihood curve, which is above this lower bound, will definitely be improved as well." + "time": "2:13", + "text": "Right? And we can then map back to the original likelihood function, where we find this point. Because it's a lower bound, we are guaranteed to improve this guess, right? Because we improve our lower bound, and then the original likelihood curve, which is above this lower bound, will definitely be improved as well.", + "url": "https://www.coursera.org/learn/text-mining/lecture/N5cBh/3-6-probabilistic-topic-models-expectation-maximization-algorithm-part-3" }, { - "2:36": "So we already know it's improving the lower bound. So we definitely improve this original likelihood function, which is above this lower bound. So, in our example, the current guess is the parameter value given by the current generation. And then the next guess is the re-estimated parameter values. From this illustration you can see the next guess is always better than the current guess. Unless it has reached the maximum, where it will be stuck there. So the two would be equal. So, the E-step is basically to compute this lower bound. We don't directly just compute this likelihood function, but we compute the latent variable values, and these are basically a part of this lower bound. This helps determine the lower bound. The M-step on the other hand is to maximize the lower bound.
It allows us to move parameters to a new point. And that's why the EM algorithm is guaranteed to converge to a local maximum." + "time": "2:36", + "text": "So we already know it's improving the lower bound. So we definitely improve this original likelihood function, which is above this lower bound. So, in our example, the current guess is the parameter value given by the current generation. And then the next guess is the re-estimated parameter values. From this illustration you can see the next guess is always better than the current guess. Unless it has reached the maximum, where it will be stuck there. So the two would be equal. So, the E-step is basically to compute this lower bound. We don't directly just compute this likelihood function, but we compute the latent variable values, and these are basically a part of this lower bound. This helps determine the lower bound. The M-step on the other hand is to maximize the lower bound. It allows us to move parameters to a new point. And that's why the EM algorithm is guaranteed to converge to a local maximum.", + "url": "https://www.coursera.org/learn/text-mining/lecture/N5cBh/3-6-probabilistic-topic-models-expectation-maximization-algorithm-part-3" }, { - "3:42": "Now, as you can imagine, when we have many local maxima, we also have to repeat the EM algorithm multiple times. In order to figure out which one is the actual global maximum. And this actually in general is a difficult problem in numerical optimization. So here for example, had we started from here, then we would gradually just climb up to this top. So, that's not optimal, and we'd like to climb up all the way to here, so the only way to climb up to this peak is to start from somewhere here or here. So, in the EM algorithm, we generally would have to start from different points or have some other way to determine a good initial starting point."
+ "time": "3:42", + "text": "Now, as you can imagine, when we have many local maxima, we also have to repeat the EM algorithm multiple times. In order to figure out which one is the actual global maximum. And this actually in general is a difficult problem in numerical optimization. So here for example, had we started from here, then we would gradually just climb up to this top. So, that's not optimal, and we'd like to climb up all the way to here, so the only way to climb up to this peak is to start from somewhere here or here. So, in the EM algorithm, we generally would have to start from different points or have some other way to determine a good initial starting point.", + "url": "https://www.coursera.org/learn/text-mining/lecture/N5cBh/3-6-probabilistic-topic-models-expectation-maximization-algorithm-part-3" }, { - "4:29": "To summarize, in this lecture we introduced the EM algorithm. This is a general algorithm for computing maximum likelihood estimates of all kinds of models, so not just for our simple model. And it's a hill-climbing algorithm, so it can only converge to a local maximum and it will depend on initial points." + "time": "4:29", + "text": "To summarize, in this lecture we introduced the EM algorithm. This is a general algorithm for computing maximum likelihood estimates of all kinds of models, so not just for our simple model. And it's a hill-climbing algorithm, so it can only converge to a local maximum and it will depend on initial points.", + "url": "https://www.coursera.org/learn/text-mining/lecture/N5cBh/3-6-probabilistic-topic-models-expectation-maximization-algorithm-part-3" }, { - "4:49": "The general idea is that we will have two steps to improve the estimate of parameters. In the E-step we roughly [INAUDIBLE] how many there are by predicting values of useful hidden variables that we would use to simplify the estimation. In our case, this is the distribution that has been used to generate the word.
In the M-step then we would exploit such augmented data, which would make it easier to estimate the distribution, to improve the estimate of parameters. Here, improvement is guaranteed in terms of the likelihood function. Note that it's not necessary that we will have a stable convergence of parameter values even though the likelihood function is ensured to increase. There are some properties that have to be satisfied in order for the parameters also to converge to some stable value." + "time": "4:49", + "text": "The general idea is that we will have two steps to improve the estimate of parameters. In the E-step we roughly [INAUDIBLE] how many there are by predicting values of useful hidden variables that we would use to simplify the estimation. In our case, this is the distribution that has been used to generate the word. In the M-step then we would exploit such augmented data, which would make it easier to estimate the distribution, to improve the estimate of parameters. Here, improvement is guaranteed in terms of the likelihood function. Note that it's not necessary that we will have a stable convergence of parameter values even though the likelihood function is ensured to increase. There are some properties that have to be satisfied in order for the parameters also to converge to some stable value.", + "url": "https://www.coursera.org/learn/text-mining/lecture/N5cBh/3-6-probabilistic-topic-models-expectation-maximization-algorithm-part-3" }, { - "5:47": "Now here data augmentation is done probabilistically. That means, we're not going to just say exactly what's the value of a hidden variable.
But we're going to have a probability distribution over the possible values of these hidden variables. So this causes a split of counts of events probabilistically." + "time": "5:47", + "text": "Now here data augmentation is done probabilistically. That means, we're not going to just say exactly what's the value of a hidden variable. But we're going to have a probability distribution over the possible values of these hidden variables. So this causes a split of counts of events probabilistically.", + "url": "https://www.coursera.org/learn/text-mining/lecture/N5cBh/3-6-probabilistic-topic-models-expectation-maximization-algorithm-part-3" }, { - "6:07": "And in our case we'll split the word counts between the two distributions. [MUSIC]" + "time": "6:07", + "text": "And in our case we'll split the word counts between the two distributions. [MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/N5cBh/3-6-probabilistic-topic-models-expectation-maximization-algorithm-part-3" } ] }, { "3-7-probabilistic-latent-semantic-analysis-plsa-part-1": [ { - "0:00": "[SOUND] This lecture is about probabilistic latent semantic analysis, or PLSA." + "time": "0:00", + "text": "[SOUND] This lecture is about probabilistic latent semantic analysis, or PLSA.", + "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" }, { - "0:12": "In this lecture we're going to introduce probabilistic latent semantic analysis, often called PLSA. This is the most basic topic model, also one of the most useful topic models. Now this kind of model can in general be used to mine multiple topics from text documents. And PLSA is one of the most basic topic models for doing this. So let's first examine this problem in more detail. Here I show a sample article which is a blog article about Hurricane Katrina." + "time": "0:12", + "text": "In this lecture we're going to introduce probabilistic latent semantic analysis, often called PLSA. This is the most basic topic model, also one of the most useful topic models. Now this kind of model can in general be used to mine multiple topics from text documents. And PLSA is one of the most basic topic models for doing this. So let's first examine this problem in more detail.
Here I show a sample article which is a blog article about Hurricane Katrina.", + "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" }, { - "0:48": "And I show some simple topics. For example, government response, flooding of the city of New Orleans, donation, and the background." + "time": "0:48", + "text": "And I show some simple topics. For example, government response, flooding of the city of New Orleans, donation, and the background.", + "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" }, { - "0:59": "You can see in the article we use words from all these distributions." + "time": "0:59", + "text": "You can see in the article we use words from all these distributions.", + "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" }, { - "1:05": "So we first for example see there's a criticism of government response and this is followed by discussion of flooding of the city and donation et cetera. We also see background words mixed with them." + "time": "1:05", + "text": "So we first for example see there's a criticism of government response and this is followed by discussion of flooding of the city and donation et cetera. We also see background words mixed with them.", + "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" }, { - "1:18": "So the overall goal of topic analysis here is to try to decode these topics behind the text, to segment the topics, to figure out which words are from which distribution and to figure out first, what are these topics? How do we know there's a topic about government response, and a topic about a flood in the city? So these are the tasks of the topic model."
+ "time": "1:18", + "text": "So the overall goal of topic analysis here is to try to decode these topics behind the text, to segment the topics, to figure out which words are from which distribution and to figure out first, what are these topics? How do we know there's a topic about government response, and a topic about a flood in the city? So these are the tasks of the topic model.", + "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" }, { - "1:42": "If we have discovered these topics, we can color these words, as you see here, to separate the different topics. Then you can do a lot of things, such as summarization, or segmentation, of the topics, clustering of the sentences, etc. So the formal definition of the problem of mining multiple topics from text is shown here. And this is after a slide that you have seen in an earlier lecture. So the input is a collection, the number of topics, and a vocabulary set, and of course the text data." + "time": "1:42", + "text": "If we have discovered these topics, we can color these words, as you see here, to separate the different topics. Then you can do a lot of things, such as summarization, or segmentation, of the topics, clustering of the sentences, etc. So the formal definition of the problem of mining multiple topics from text is shown here. And this is after a slide that you have seen in an earlier lecture. So the input is a collection, the number of topics, and a vocabulary set, and of course the text data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" }, { - "2:16": "And then the output is of two kinds. One is the topic characterization, theta i's. Each theta i is a word distribution. And second, it's the topic coverage for each document. These are pi sub i j's. And they tell us which document covers which topic to what extent. So we hope to generate these as output.
Because there are many useful applications if we can do that." + "time": "2:16", + "text": "And then the output is of two kinds. One is the topic characterization, theta i's. Each theta i is a word distribution. And second, it's the topic coverage for each document. These are pi sub i j's. And they tell us which document covers which topic to what extent. So we hope to generate these as output. Because there are many useful applications if we can do that.", + "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" }, { - "2:42": "So the idea of PLSA is actually very similar to the two-component mixture model that we have already introduced. The only difference is that we are going to have more than two topics. Otherwise, it is essentially the same. So here I illustrate how we can generate the text that has multiple topics, and naturally, as in all cases of probabilistic modeling, we would want to figure out the likelihood function. So we would also ask the question, what's the probability of observing a word from such a mixture model? Now if you look at this picture and compare this with the picture that we have seen earlier, you will see the only difference is that we have added more topics here." + "time": "2:42", + "text": "So the idea of PLSA is actually very similar to the two-component mixture model that we have already introduced. The only difference is that we are going to have more than two topics. Otherwise, it is essentially the same. So here I illustrate how we can generate the text that has multiple topics, and naturally, as in all cases of probabilistic modeling, we would want to figure out the likelihood function. So we would also ask the question, what's the probability of observing a word from such a mixture model?
Now if you look at this picture and compare this with the picture that we have seen earlier, you will see the only difference is that we have added more topics here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" }, { - "3:26": "So, before, we had just one topic besides the background topic. But now we have more topics. Specifically, we have k topics now. All these are topics that we assume exist in the text data. So the consequence is that our switch for choosing a topic is now a multiway switch. Before, it was just a two-way switch. We can think of it as flipping a coin. But now we have multiple ways. First we can flip a coin to decide whether we're talking about the background. So it's the background lambda sub B versus non-background. 1 minus lambda sub B gives us the probability of actually choosing a non-background topic. After we have made this decision, we have to make another decision to choose one of these K distributions. So there is a K-way switch here.
And this is characterized by pi, and these sum to one." + "time": "3:26", + "text": "So, before, we had just one topic besides the background topic. But now we have more topics. Specifically, we have k topics now. All these are topics that we assume exist in the text data. So the consequence is that our switch for choosing a topic is now a multiway switch. Before, it was just a two-way switch. We can think of it as flipping a coin. But now we have multiple ways. First we can flip a coin to decide whether we're talking about the background. So it's the background lambda sub B versus non-background. 1 minus lambda sub B gives us the probability of actually choosing a non-background topic. After we have made this decision, we have to make another decision to choose one of these K distributions. So there is a K-way switch here. And this is characterized by pi, and these sum to one.", + "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" }, { - "4:31": "This is just a difference in design, which is a little bit more complicated. But once we decide which distribution to use, the rest is the same: we are going to just generate a word by using one of these distributions as shown here." + "time": "4:31", + "text": "This is just a difference in design, which is a little bit more complicated. But once we decide which distribution to use, the rest is the same: we are going to just generate a word by using one of these distributions as shown here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" }, { - "4:46": "So now let's look at the question about the likelihood. So what's the probability of observing a word from such a distribution? What do you think? Now we've seen this problem many times now, and if you can recall, it's generally a sum of all the different possibilities of generating a word. So let's first look at how the word can be generated from the background model. Well, the probability that the word is generated from the background model is lambda multiplied by the probability of the word from the background model, right. Two things must happen. First, we have to have chosen the background model, and that's the probability lambda sub B. Then second, we must have actually obtained the word w from the background, and that's the probability of w given theta sub b." + "time": "4:46", + "text": "So now let's look at the question about the likelihood. So what's the probability of observing a word from such a distribution? What do you think? Now we've seen this problem many times now, and if you can recall, it's generally a sum of all the different possibilities of generating a word.
So let's first look at how the word can be generated from the background model. Well, the probability that the word is generated from the background model is lambda multiplied by the probability of the word from the background model, right. Two things must happen. First, we have to have chosen the background model, and that's the probability lambda sub B. Then second, we must have actually obtained the word w from the background, and that's the probability of w given theta sub b.", + "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" }, { - "5:40": "Okay, so similarly, we can figure out the probability of observing the word from another topic. Like the topic theta sub k. Now notice that here's the product of three terms. And that's because the choice of topic theta sub k only happens if two things happen. One is we decide not to talk about the background. So, that's a probability of 1 minus lambda sub B. Second, we also have to actually choose theta sub K among these K topics. So that's the probability of theta sub K, or pi." + "time": "5:40", + "text": "Okay, so similarly, we can figure out the probability of observing the word from another topic. Like the topic theta sub k. Now notice that here's the product of three terms. And that's because the choice of topic theta sub k only happens if two things happen. One is we decide not to talk about the background. So, that's a probability of 1 minus lambda sub B. Second, we also have to actually choose theta sub K among these K topics. So that's the probability of theta sub K, or pi.", + "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" }, { - "6:17": "And similarly, the probabilities of generating a word from the second topic and the first topic are like what you are seeing here. And so in the end the probability of observing the word is just a sum of all these cases.
And I have to stress again this is a very important formula to know, because this is really key to understanding all the topic models and indeed a lot of mixture models. So make sure that you really understand the probability" + "time": "6:17", + "text": "And similarly, the probabilities of generating a word from the second topic and the first topic are like what you are seeing here. And so in the end the probability of observing the word is just a sum of all these cases. And I have to stress again this is a very important formula to know, because this is really key to understanding all the topic models and indeed a lot of mixture models. So make sure that you really understand the probability", + "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" }, { - "6:49": "of w is indeed the sum of these terms." + "time": "6:49", + "text": "of w is indeed the sum of these terms.", + "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" }, { - "6:56": "So, next, once we have the likelihood function, we would be interested in knowing the parameters. All right, so to estimate the parameters. But firstly, let's put all these together to have the complete likelihood function for PLSA. The first line shows the probability of a word as illustrated on the previous slide. And this is an important formula as I said." + "time": "6:56", + "text": "So, next, once we have the likelihood function, we would be interested in knowing the parameters. All right, so to estimate the parameters. But firstly, let's put all these together to have the complete likelihood function for PLSA. The first line shows the probability of a word as illustrated on the previous slide.
And this is an important formula as I said.", + "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" }, { - "7:22": "So let's take a closer look at this. This actually contains all the important parameters. So first of all we see lambda sub b here. This represents a percentage of background words" + "time": "7:22", + "text": "So let's take a closer look at this. This actually contains all the important parameters. So first of all we see lambda sub b here. This represents a percentage of background words", + "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" }, { - "7:32": "that we believe exist in the text data. And this can be a known value that we set empirically." + "time": "7:32", + "text": "that we believe exist in the text data. And this can be a known value that we set empirically.", + "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" }, { - "7:41": "Second, we see the background language model, and typically we also assume this is known. We can use a large collection of text, or use all the text that we have available, to estimate the word distribution." + "time": "7:41", + "text": "Second, we see the background language model, and typically we also assume this is known. We can use a large collection of text, or use all the text that we have available, to estimate the word distribution.", + "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" }, { - "7:52": "Now, next, let's look at the rest of this formula. [COUGH] Excuse me. You see two interesting kinds of parameters; those are the most important parameters that we are interested in. So one is the pi's. And these are the coverage of a topic in the document." + "time": "7:52", + "text": "Now, next, let's look at the rest of this formula. [COUGH] Excuse me.
You see two interesting kinds of parameters; those are the most important parameters that we are interested in. So one is the pi's. And these are the coverage of a topic in the document.", + "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" }, { - "8:11": "And the other is word distributions that characterize all the topics." + "time": "8:11", + "text": "And the other is word distributions that characterize all the topics.", + "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" }, { - "8:18": "So the next line, then, is simply to plug this in to calculate the probability of a document. This is, again, of the familiar form where you have a sum and you have a count of a word in the document. And then log of a probability. Now it's a little bit more complicated than the two-component case, because now we have more components, so the sum involves more terms. And then this line is just the likelihood for the whole collection. And it's very similar, just accounting for more documents in the collection." + "time": "8:18", + "text": "So the next line, then, is simply to plug this in to calculate the probability of a document. This is, again, of the familiar form where you have a sum and you have a count of a word in the document. And then log of a probability. Now it's a little bit more complicated than the two-component case, because now we have more components, so the sum involves more terms. And then this line is just the likelihood for the whole collection. And it's very similar, just accounting for more documents in the collection.", + "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" }, { - "8:52": "So what are the unknown parameters? I already said that there are two kinds. One is coverage, one is word distributions. Again, it's a useful exercise for you to think about.
Exactly how many parameters there are here." + "time": "8:52", + "text": "So what are the unknown parameters? I already said that there are two kinds. One is coverage, one is word distributions. Again, it's a useful exercise for you to think about. Exactly how many parameters there are here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" }, { - "9:05": "How many unknown parameters are there? Now, trying to answer that question will help you understand the model in more detail. And it will also allow you to understand what would be the output that we generate when we use PLSA to analyze text data. And these are precisely the unknown parameters." + "time": "9:05", + "text": "How many unknown parameters are there? Now, trying to answer that question will help you understand the model in more detail. And it will also allow you to understand what would be the output that we generate when we use PLSA to analyze text data. And these are precisely the unknown parameters.", + "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" }, { - "9:24": "So after we have obtained the likelihood function shown here, the next step is to worry about the parameter estimation." + "time": "9:24", + "text": "So after we have obtained the likelihood function shown here, the next step is to worry about the parameter estimation.", + "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" }, { - "9:32": "And we can do the usual thing, the maximum likelihood estimator. So again, it's a constrained optimization problem, like what we have seen before. Only that we have a collection of text and we have more parameters to estimate. And we still have two constraints, two kinds of constraints. One is the word distributions." + "time": "9:32", + "text": "And we can do the usual thing, the maximum likelihood estimator.
So again, it's a constrained optimization problem, like what we have seen before. Only that we have a collection of text and we have more parameters to estimate. And we still have two constraints, two kinds of constraints. One is the word distributions.", + "time": "9:32", + "text": "And we can do the usual thing, maximum likelihood estimation. So again, it's a constrained optimization problem, like what we have seen before, only now we have a collection of text and more parameters to estimate. And we still have two kinds of constraints. One is on the word distributions.", + "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" }, { - "9:51": "All the words must have probabilities that's sum to one for one distribution. The other is the topic coverage distribution and a document will have to cover precisely these k topics so the probability of covering each topic that would have to sum to 1. So at this point though it's basically a well defined applied math problem, you just need to figure out the solutions to optimization problem. There's a function with many variables. and we need to just figure out the patterns of these variables to make the function reach its maximum. >> [MUSIC]" + "time": "9:51", + "text": "All the words in one distribution must have probabilities that sum to one. The other is on the topic coverage: a document has to cover precisely these k topics, so the probabilities of covering each topic must also sum to 1. So at this point it's basically a well-defined applied math problem: you just need to solve the optimization problem. There's a function with many variables, and we need to figure out the values of these variables that make the function reach its maximum. >> [MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" } ] }, { "3-8-probabilistic-latent-semantic-analysis-plsa-part-2": [ { - "0:00": "[SOUND]" + "time": "0:00", + "text": "[SOUND]", + "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" }, { - "0:08": "We can compute this maximum estimate by using the EM algorithm. 
So in the e step, we now have to introduce more hidden variables because we have more topics, so our hidden variable z now, which is a topic indicator can take more than two values. So specifically will take a k plus one values, with b in the noting the background. And once locate, to denote other k topics, right." + "time": "0:08", + "text": "We can compute this maximum estimate by using the EM algorithm. So in the e step, we now have to introduce more hidden variables because we have more topics, so our hidden variable z now, which is a topic indicator can take more than two values. So specifically will take a k plus one values, with b in the noting the background. And once locate, to denote other k topics, right.", + "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" }, { - "0:36": "So, now the e step, as you can recall is your augmented data, and by predicting the values of the hidden variable. So we're going to predict for a word, whether the word has come from one of these k plus one distributions. This equation allows us to predict the probability that the word w in document d is generated from topic zero sub j." + "time": "0:36", + "text": "So, now the e step, as you can recall is your augmented data, and by predicting the values of the hidden variable. So we're going to predict for a word, whether the word has come from one of these k plus one distributions. This equation allows us to predict the probability that the word w in document d is generated from topic zero sub j.", + "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" }, { - "1:03": "And the bottom one is the predicted probability that this word has been generated from the background. Note that we use document d here to index the word. Why? Because whether a word is from a particular topic actually depends on the document. Can you see why? 
Well, it's through the pi's. The pi's are tied to each document. Each document can have potentially different pi's, right. The pi's will then affect our prediction. So, the pi's are here. And this depends on the document." + "time": "1:03", + "text": "And the bottom one is the predicted probability that this word has been generated from the background. Note that we use document d here to index the word. Why? Because whether a word is from a particular topic actually depends on the document. Can you see why? Well, it's through the pi's. The pi's are tied to each document. Each document can have potentially different pi's, right. The pi's will then affect our prediction. So, the pi's are here. And this depends on the document.", + "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" }, { - "1:38": "And that might give a different guess for a word in different documents, and that's desirable." + "time": "1:38", + "text": "And that might give a different guess for a word in different documents, and that's desirable.", + "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" }, { - "1:46": "In both cases we are using the Baye's Rule, as I explained, basically assessing the likelihood of generating word from each of this division and there's normalize." + "time": "1:46", + "text": "In both cases we are using the Baye's Rule, as I explained, basically assessing the likelihood of generating word from each of this division and there's normalize.", + "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" }, { - "1:57": "What about the m step? Well, we may recall the m step is we take advantage of the inferred z values. To split the counts. And then collected the right counts to re-estimate the parameters. So in this case, we can re-estimate our coverage of probability. 
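The E-step prediction described in this passage can be sketched compactly. The following is a minimal illustration, not code from the lecture, and all function and variable names are mine: for one word w in one document d, Bayes' rule gives the posterior over the k topics (which depends on d only through that document's pi's), plus the posterior that w came from the fixed background model.

```python
def e_step_word(w, pi_d, topics, background, lambda_b):
    """Posterior topic assignment for one word w in one document d.

    pi_d:       {j: pi_{d,j}} -- this document's topic coverage (sums to 1)
    topics:     {j: {word: P(word | theta_j)}} -- topic word distributions
    background: {word: P(word | background)} -- fixed background model
    lambda_b:   probability of choosing the background model
    """
    # P(z = j, w | d) for each topic j: choose topic j (not background),
    # then draw w from theta_j. Note the dependence on d through pi_d.
    joint = {j: (1.0 - lambda_b) * pi_d[j] * topics[j][w] for j in pi_d}
    topic_mass = sum(joint.values())
    # Bayes' rule: normalize over the k topics.
    p_topic = {j: joint[j] / topic_mass for j in joint}
    # Posterior that w was generated by the background distribution.
    p_bg = lambda_b * background[w] / (lambda_b * background[w] + topic_mass)
    return p_topic, p_bg
```

Because pi_d enters the joint probability, the same word can get a different topic posterior in different documents, exactly as the transcript notes.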
And this is re-estimated based on collecting all the words in the document." + "time": "1:57", + "text": "What about the m step? Well, we may recall the m step is we take advantage of the inferred z values. To split the counts. And then collected the right counts to re-estimate the parameters. So in this case, we can re-estimate our coverage of probability. And this is re-estimated based on collecting all the words in the document.", + "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" }, { - "2:22": "And that's why we have the count of the word in document. And sum over all the words. And then we're going to look at to what extent this word belongs to" + "time": "2:22", + "text": "And that's why we have the count of the word in document. And sum over all the words. And then we're going to look at to what extent this word belongs to", + "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" }, { - "2:34": "the topic theta sub j. And this part is our guess from each step." + "time": "2:34", + "text": "the topic theta sub j. And this part is our guess from each step.", + "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" }, { - "2:40": "This tells us how likely this word is actually from theta sub j. And when we multiply them together, we get the discounted count that's located for topic theta sub j. And when we normalize this over all the topics, we get the distribution of all the topics to indicate the coverage. And similarly, the bottom one is the estimated probability of word for a topic. And in this case we are using exact the same count, you can see this is the same discounted account, ] it tells us to what extend we should allocate this word [INAUDIBLE] but then normalization is different. 
Because in this case we are interested in the word distribution, so we simply normalize this over all the words. This is different, in contrast here we normalize the amount all the topics. It would be useful to take a comparison between the two." + "time": "2:40", + "text": "This tells us how likely this word is actually from theta sub j. And when we multiply them together, we get the discounted count that's allocated to topic theta sub j. And when we normalize this over all the topics, we get the distribution over all the topics to indicate the coverage. And similarly, the bottom one is the estimated probability of a word for a topic. And in this case we are using exactly the same count; you can see this is the same discounted count, and it tells us to what extent we should allocate this word [INAUDIBLE] but then the normalization is different. Because in this case we are interested in the word distribution, so we simply normalize this over all the words. This is different: in contrast, here we normalize among all the topics. It would be useful to make a comparison between the two.", + "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" }, { - "3:37": "This give us different distributions. And these tells us how to improve the parameters." + "time": "3:37", + "text": "This gives us different distributions. And these tell us how to improve the parameters.", + "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" }, { - "3:48": "And as I just explained, in both the formula is we have a maximum estimate based on allocated word counts [INAUDIBLE]. Now this phenomena is actually general phenomena in all the EM algorithms. In the m-step, you general with the computer expect an account of the event based on the e-step result, and then you just and then count to four, particular normalize it, typically. 
So, in terms of computation of this EM algorithm, we can actually just keep accounting various events and then normalize them. And when we thinking this way, we also have a more concise way of presenting the EM Algorithm. It actually helps us better understand the formulas. So I'm going to go over this in some detail. So as a algorithm we first initialize all the unknown perimeters randomly, all right. So, in our case, we are interested in all of those coverage perimeters, pi's and awarded distributions [INAUDIBLE], and we just randomly normalize them. This is the initialization step and then we will repeat until likelihood converges. Now how do we know whether likelihood converges? We can do compute likelihood at each step and compare the current likelihood with the previous likelihood. If it doesn't change much and we're going to say it stopped, right." + "time": "3:48", + "text": "And as I just explained, in both the formula is we have a maximum estimate based on allocated word counts [INAUDIBLE]. Now this phenomena is actually general phenomena in all the EM algorithms. In the m-step, you general with the computer expect an account of the event based on the e-step result, and then you just and then count to four, particular normalize it, typically. So, in terms of computation of this EM algorithm, we can actually just keep accounting various events and then normalize them. And when we thinking this way, we also have a more concise way of presenting the EM Algorithm. It actually helps us better understand the formulas. So I'm going to go over this in some detail. So as a algorithm we first initialize all the unknown perimeters randomly, all right. So, in our case, we are interested in all of those coverage perimeters, pi's and awarded distributions [INAUDIBLE], and we just randomly normalize them. This is the initialization step and then we will repeat until likelihood converges. Now how do we know whether likelihood converges? 
We can do compute likelihood at each step and compare the current likelihood with the previous likelihood. If it doesn't change much and we're going to say it stopped, right.", + "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" }, { - "5:19": "So, in each step we're going to do e-step and m-step. In the e-step we're going to do augment the data by predicting the hidden variables. In this case, the hidden variable, z sub d, w, indicates whether the word w in d is from a topic or background. And if it's from a topic, which topic. So if you look at the e-step formulas, essentially we're actually normalizing these counts, sorry, these probabilities of observing the word from each distribution. So you can see, basically the prediction of word from topic zero sub j is based on the probability of selecting that theta sub j as a word distribution to generate the word. Multiply by the probability of observing the word from that distribution." + "time": "5:19", + "text": "So, in each step we're going to do e-step and m-step. In the e-step we're going to do augment the data by predicting the hidden variables. In this case, the hidden variable, z sub d, w, indicates whether the word w in d is from a topic or background. And if it's from a topic, which topic. So if you look at the e-step formulas, essentially we're actually normalizing these counts, sorry, these probabilities of observing the word from each distribution. So you can see, basically the prediction of word from topic zero sub j is based on the probability of selecting that theta sub j as a word distribution to generate the word. 
Multiply by the probability of observing the word from that distribution.", + "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" }, { - "6:17": "And I said it's proportional to this because in the implementation of EM algorithm you can keep counter for this quantity, and in the end it just normalizes it. So the normalization here is over all the topics and then you would get a probability." + "time": "6:17", + "text": "And I said it's proportional to this because in the implementation of EM algorithm you can keep counter for this quantity, and in the end it just normalizes it. So the normalization here is over all the topics and then you would get a probability.", + "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" }, { - "6:36": "Now, in the m-step, we do the same, and we are going to collect these." + "time": "6:36", + "text": "Now, in the m-step, we do the same, and we are going to collect these.", + "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" }, { - "6:43": "Allocated account for each topic." + "time": "6:43", + "text": "Allocated account for each topic.", + "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" }, { - "6:47": "And we split words among the topics." + "time": "6:47", + "text": "And we split words among the topics.", + "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" }, { - "6:50": "And then we're going to normalize them in different ways to obtain the real estimate. So for example, we can normalize among all the topics to get the re-estimate of pi, the coverage. Or we can re-normalize based on all the words. And that would give us a word distribution." 
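The two M-step re-estimates described here — the same allocated (discounted) counts, normalized once over the k topics to update a document's pi's and once over the vocabulary to update a topic's word distribution — can be sketched as follows. This is an illustrative reconstruction with hypothetical names, assuming the E-step posteriors have already been computed; it is not the lecture's own code.

```python
def m_step(docs, post_topic, post_bg, k):
    """Re-estimate pi and the topic word distributions from discounted counts.

    docs:       {d: {word: count}} -- word counts per document
    post_topic: {d: {word: [P(z=j | d, word) for j in range(k)]}}
    post_bg:    {d: {word: P(z=background | d, word)}}
    """
    pi = {}
    topic_counts = [dict() for _ in range(k)]
    for d, counts in docs.items():
        alloc = [0.0] * k
        for w, c in counts.items():
            for j in range(k):
                # Discounted count: c(w, d) * P(not background) * P(z=j | d, w)
                a = c * (1.0 - post_bg[d][w]) * post_topic[d][w][j]
                alloc[j] += a
                topic_counts[j][w] = topic_counts[j].get(w, 0.0) + a
        total = sum(alloc)
        pi[d] = [x / total for x in alloc]  # normalize over the k topics
    topics = []
    for j in range(k):
        z = sum(topic_counts[j].values())
        # The very same counts, normalized over the vocabulary instead.
        topics.append({w: v / z for w, v in topic_counts[j].items()})
    return pi, topics
```

In a full EM loop one would alternate this with the E-step and stop when the log-likelihood stops changing much between iterations, as the transcript describes.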
+ "time": "6:50", + "text": "And then we're going to normalize them in different ways to obtain the real estimate. So for example, we can normalize among all the topics to get the re-estimate of pi, the coverage. Or we can re-normalize based on all the words. And that would give us a word distribution.", + "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" }, { - "7:10": "So it's useful to think algorithm in this way because when implemented, you can just use variables, but keep track of these quantities in each case." + "time": "7:10", + "text": "So it's useful to think algorithm in this way because when implemented, you can just use variables, but keep track of these quantities in each case.", + "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" }, { - "7:23": "And then you just normalize these variables to make them distribution." + "time": "7:23", + "text": "And then you just normalize these variables to make them distribution.", + "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" }, { - "7:32": "Now I did not put the constraint for this one. And I intentionally leave this as an exercise for you. And you can see, what's the normalizer for this one? It's of a slightly different form but it's essentially the same as the one that you have seen here in this one. So in general in the envisioning of EM algorithms you will see you accumulate the counts, various counts and then you normalize them." + "time": "7:32", + "text": "Now I did not put the constraint for this one. And I intentionally leave this as an exercise for you. And you can see, what's the normalizer for this one? It's of a slightly different form but it's essentially the same as the one that you have seen here in this one. 
So in general in the envisioning of EM algorithms you will see you accumulate the counts, various counts and then you normalize them.", + "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" }, { - "8:01": "So to summarize, we introduced the PLSA model. Which is a mixture model with k unigram language models representing k topics." + "time": "8:01", + "text": "So to summarize, we introduced the PLSA model. Which is a mixture model with k unigram language models representing k topics.", + "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" }, { - "8:11": "And we also added a pre-determined background language model to help discover discriminative topics, because this background language model can help attract the common terms." + "time": "8:11", + "text": "And we also added a pre-determined background language model to help discover discriminative topics, because this background language model can help attract the common terms.", + "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" }, { - "8:23": "And we select the maximum estimate that we cant discover topical knowledge from text data. In this case PLSA allows us to discover two things, one is k worded distributions, each one representing a topic and the other is the proportion of each topic in each document." + "time": "8:23", + "text": "And we select the maximum estimate that we cant discover topical knowledge from text data. 
In this case PLSA allows us to discover two things, one is k worded distributions, each one representing a topic and the other is the proportion of each topic in each document.", + "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" }, { - "8:41": "And such detailed characterization of coverage of topics in documents can enable a lot of photo analysis. For example, we can aggregate the documents in the particular pan period to assess the coverage of a particular topic in a time period. That would allow us to generate the temporal chains of topics. We can also aggregate topics covered in documents associated with a particular author and then we can categorize the topics written by this author, etc. And in addition to this, we can also cluster terms and cluster documents. In fact, each topic can be regarded as a cluster. So we already have the term clusters. In the higher probability, the words can be regarded as" + "time": "8:41", + "text": "And such detailed characterization of coverage of topics in documents can enable a lot of photo analysis. For example, we can aggregate the documents in the particular pan period to assess the coverage of a particular topic in a time period. That would allow us to generate the temporal chains of topics. We can also aggregate topics covered in documents associated with a particular author and then we can categorize the topics written by this author, etc. And in addition to this, we can also cluster terms and cluster documents. In fact, each topic can be regarded as a cluster. So we already have the term clusters. In the higher probability, the words can be regarded as", + "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" }, { - "9:29": "belonging to one cluster represented by the topic. Similarly, documents can be clustered in the same way. 
We can assign a document to the topic cluster that's covered most in the document. So remember, pi's indicate to what extent each topic is covered in the document, we can assign the document to the topical cluster that has the highest pi." + "time": "9:29", + "text": "belonging to one cluster represented by the topic. Similarly, documents can be clustered in the same way. We can assign a document to the topic cluster that's covered most in the document. So remember, pi's indicate to what extent each topic is covered in the document, we can assign the document to the topical cluster that has the highest pi.", + "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" }, { - "9:57": "And in general there are many useful applications of this technique." + "time": "9:57", + "text": "And in general there are many useful applications of this technique.", + "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" }, { - "10:03": "[MUSIC]" + "time": "10:03", + "text": "[MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" } ] }, { "3-9-latent-dirichlet-allocation-lda-part-1": [ { - "0:07": "This lecture is about that Latent Dirichlet Allocation or LDA. In this lecture, we are going to continue talking about topic models. In particular, we are going to talk about some extension of PLSA, and one of them is LDA or Latent Dirichlet Allocation. So the plan for this lecture is to cover two things. One is to extend the PLSA with prior knowledge and that would allow us to have in some sense a user-controlled PLSA, so it doesn't apply to they just listen to data, but also would listen to our needs. The second is to extend the PLSA as a generative model, a fully generative model. This has led to the development of Latent Dirichlet Allocation or LDA. 
So first, let's talk about the PLSA with prior knowledge. Now in practice, when we apply PLSA to analyze text data, we might have additional knowledge that we want to inject to guide the analysis. The standard PLSA is going to blindly listen to the data by using maximum [inaudible]. We are going to just fit data as much as we can and get some insight about data. This is also very useful, but sometimes a user might have some expectations about which topics to analyze. For example, we might expect to see retrieval models as a topic in information retrieval or we also may be interesting in certain aspects, such as battery and memory, when looking at opinions about a laptop because the user is particularly interested in these aspects. A user may also have knowledge about topic coverage and we may know which topic is definitely not covering which document or is covering the document. For example, we might have seen those tags, topic tags assigned to documents. And those tags could be treated as topics. If we do that then a document account will be generated using topics corresponding to the tags already assigned to the document. If the document is not assigned a tag, we're going to say there is no way for using that topic to generate document. The document must be generated by using the topics corresponding to that assigned tags. So question is how can we incorporate such knowledge into PLSA. It turns out that there is a very elegant way of doing that and that would incorporate such knowledge as priors on the models. And you may recall in Bayesian inference, we use prior together with data to estimate parameters and this is precisely what would happen. So in this case, we can use maximum a posteriori estimate also called MAP estimate and the formula is given here. Basically, this is to maximize the posteriori distribution probability. And this is a combination of the likelihood of data and the prior. 
So what would happen is that we are going to have an estimate that listens to the data and also listens to our prior preferences. We can use this prior which is denoted as p of lambda to encode all kinds of preferences and the constraints. So for example, we can use this to encode the need of having precise background of the topic. Now this could be encoded as a prior because we can say the prior for the parameters is only a non-zero if the parameters contain one topic that is equivalent to the background language model. In other words, in other cases if it is not like that, we are going to say the prior says it is impossible. So the probability of that kind of models I think would be zero according to our prior. So now we can also for example use the prior to force particular choice of topic to have a probability of a certain number. For example, we can force document D to choose topic one with probability of one half or we can prevent topic from being used in generating document. So we can say the third topic should not be used in generating document D, we will set to the Pi zero for that topic. We can also use the prior to favor a set of parameters with topics that assign high probability to some particular words. In this case, we are not going to say it is impossible but we can just strongly favor certain kind of distributions and you will see example later. The MAP can be computed using a similar EM algorithm as we have used for the maximum likelihood estimate. With just some modifications, most of the parameters would reflect the prior preferences and in such an estimate if we use a special form of the prior code or conjugate the prior, then the functional form of the prior will be similar to the data. As a result, we can combine the two and the consequence is that you can basically convert the inference of the prior into the inference of having additional pseudo data because the two functional forms are the same and they can be combined. 
So the effect is as if we had more data and this is convenient for computation. It does not mean conjugate prior is the best way to define prior. So now let us look at the specific example. Suppose the user is particularly interested in battery life of a laptop and we are analyzing reviews. So the prior says that the distribution should contain one distribution that would assign high probability to battery and life. So we could say well there is distribution that is kind of concentrated on battery life and prior says that one of distributions should be very similar to this. Now if we use MAP estimate with conjugate prior, which is the original prior, the original distribution based on this preference, then the only difference in the EM is that when we re-estimate words distributions, we are going to add additional counts to reflect our prior. So here you can see the pseudo counts are defined based on the probability of words in a prior. So battery obviously would have high pseudo counts and similarly life would have also high pseudo counts. All the other words would have zero pseudo counts because their probability is zero in the prior and we see this is also controlled by a parameter mu and we are going to add a mu much by the probability of W given prior distribution to the connected accounts when we re-estimate this word distribution. So this is the only step that is changed and the change is happening here. And before we just connect the counts of words that we believe have been generated from this topic but now we force this distribution to give more probabilities to these words by adding them to the pseudo counts. So in fact we artificially inflated their probabilities. To make this distribution, we also need to add this many pseudo counts to the denominator. This is total sum of all the pseudo counts we have added for all the words This would make this a gamma distribution. 
Now this is intuitively very reasonable way of modifying EM and theoretically speaking, this works and it computes the MAP estimate. It is useful to think about the two specific extreme cases of mu. Now, [inaudible] the picture. Think about what would happen if we set mu to zero. Well that essentially to remove this prior. So mu in some sense indicates our strengths on prior. Now what would happen if we set mu to positive infinity. Well that is to say that prior is so strong that we are not going to listen to the data at all. So in the end, you see in this case we are going to make one of the distributions fixed to the prior. You see why? When mu is infinitive, we basically let this one dominate. In fact we are going to set this one to precise this distribution. So in this case, it is this distribution. And that is why we said the background language model is in fact a way to impose the prior because it would force one distribution to be exactly the same as what we give, that is background distribution. So in this case, we can even force the distribution to entirely focus on battery life. But of course this would not work well because it cannot attract other words. It would affect the accuracy of counting topics about battery life. So in practice, mu is set somewhere in between of course. So this is one way to impose a prior. We can also impose some other constraints. For example, we can set any parameters that will constantly include zero as needed. For example, we may want to set one of the Pi's to zero and this would mean we do not allow that topic to participate in generating that document. And this is only reasonable of course when we have prior analogy that strongly suggests this." + "time": "0:07", + "text": "This lecture is about that Latent Dirichlet Allocation or LDA. In this lecture, we are going to continue talking about topic models. In particular, we are going to talk about some extension of PLSA, and one of them is LDA or Latent Dirichlet Allocation. 
So the plan for this lecture is to cover two things. One is to extend the PLSA with prior knowledge and that would allow us to have in some sense a user-controlled PLSA, so it doesn't apply to they just listen to data, but also would listen to our needs. The second is to extend the PLSA as a generative model, a fully generative model. This has led to the development of Latent Dirichlet Allocation or LDA. So first, let's talk about the PLSA with prior knowledge. Now in practice, when we apply PLSA to analyze text data, we might have additional knowledge that we want to inject to guide the analysis. The standard PLSA is going to blindly listen to the data by using maximum [inaudible]. We are going to just fit data as much as we can and get some insight about data. This is also very useful, but sometimes a user might have some expectations about which topics to analyze. For example, we might expect to see retrieval models as a topic in information retrieval or we also may be interesting in certain aspects, such as battery and memory, when looking at opinions about a laptop because the user is particularly interested in these aspects. A user may also have knowledge about topic coverage and we may know which topic is definitely not covering which document or is covering the document. For example, we might have seen those tags, topic tags assigned to documents. And those tags could be treated as topics. If we do that then a document account will be generated using topics corresponding to the tags already assigned to the document. If the document is not assigned a tag, we're going to say there is no way for using that topic to generate document. The document must be generated by using the topics corresponding to that assigned tags. So question is how can we incorporate such knowledge into PLSA. It turns out that there is a very elegant way of doing that and that would incorporate such knowledge as priors on the models. 
And you may recall that in Bayesian inference, we use a prior together with the data to estimate parameters, and this is precisely what would happen here. So in this case, we can use the maximum a posteriori estimate, also called the MAP estimate, and the formula is given here. Basically, this is to maximize the posterior probability, which is a combination of the likelihood of the data and the prior. So what would happen is that we are going to have an estimate that listens to the data and also listens to our prior preferences. We can use this prior, which is denoted as p of lambda, to encode all kinds of preferences and constraints. So for example, we can use this to encode the requirement of having a precise background topic. Now this could be encoded as a prior because we can say the prior on the parameters is non-zero only if the parameters contain one topic that is equivalent to the background language model. In other words, in all other cases, we are going to say the prior makes it impossible; the probability of that kind of model would be zero according to our prior. We can also, for example, use the prior to force a particular choice of topic to have a certain probability. For example, we can force document D to choose topic one with probability one half, or we can prevent a topic from being used in generating a document. So if we say the third topic should not be used in generating document D, we will set the Pi for that topic to zero. We can also use the prior to favor a set of parameters with topics that assign high probability to some particular words. In this case, we are not going to say anything is impossible, but we can strongly favor certain kinds of distributions, and you will see an example later. The MAP estimate can be computed using a similar EM algorithm as we have used for the maximum likelihood estimate. 
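The MAP criterion referred to as "the formula given here" can be written out; this is the standard formulation (the lambda symbol for the parameters follows the lecture's notation, the rest is our rendering):

```latex
\hat{\lambda} = \arg\max_{\lambda} \, p(\lambda \mid \text{Data})
              = \arg\max_{\lambda} \, p(\text{Data} \mid \lambda)\, p(\lambda)
```

With a uniform $p(\lambda)$ this reduces to the maximum likelihood estimate, which is why MAP can be read as maximum likelihood plus a preference over parameters.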
With just some modifications, the estimated parameters would reflect the prior preferences. In such an estimate, if we use a special form of the prior called a conjugate prior, then the functional form of the prior will be similar to that of the data likelihood. As a result, we can combine the two, and the consequence is that you can basically convert the influence of the prior into the effect of having additional pseudo data, because the two functional forms are the same and they can be combined. So the effect is as if we had more data, and this is convenient for computation. It does not mean a conjugate prior is the best way to define a prior. So now let us look at a specific example. Suppose the user is particularly interested in the battery life of a laptop and we are analyzing reviews. So the prior says that the model should contain one distribution that assigns high probability to battery and life. So we could say, well, there is a distribution that is concentrated on battery life, and the prior says that one of the distributions should be very similar to this. Now if we use the MAP estimate with a conjugate prior, a Dirichlet prior based on this preference, then the only difference in the EM algorithm is that when we re-estimate the word distributions, we are going to add additional counts to reflect our prior. So here you can see the pseudo counts are defined based on the probabilities of words in the prior. So battery obviously would have high pseudo counts, and similarly life would also have high pseudo counts. All the other words would have zero pseudo counts because their probability is zero in the prior. We see this is also controlled by a parameter mu: we are going to add mu multiplied by the probability of w given the prior distribution to the collected counts when we re-estimate this word distribution. So this is the only step that is changed, and the change is happening here. 
And before, we just collected the counts of words that we believe have been generated from this topic, but now we force this distribution to give more probability to these words by adding the pseudo counts. So in fact we artificially inflate their probabilities. To normalize this distribution, we also need to add the same pseudo counts to the denominator; this is the total sum of all the pseudo counts we have added for all the words, and it makes this a proper probability distribution. Now this is intuitively a very reasonable way of modifying EM, and theoretically speaking it works and computes the MAP estimate. It is useful to think about the two specific extreme cases of mu. Now, [inaudible] the picture. Think about what would happen if we set mu to zero. Well, that is essentially to remove this prior. So mu in some sense indicates the strength of our prior. Now what would happen if we set mu to positive infinity? Well, that is to say the prior is so strong that we are not going to listen to the data at all. So in the end, you see in this case we are going to make one of the distributions fixed to the prior. You see why? When mu is infinite, we basically let this term dominate. In fact we are going to set this distribution to be precisely the prior distribution. So in this case, it is this distribution. And that is why we said the background language model is in fact a way to impose a prior, because it would force one distribution to be exactly the same as what we give, that is, the background distribution. So in this case, we can even force the distribution to entirely focus on battery life. But of course this would not work well, because it cannot attract other words, and it would affect the accuracy of counting topics about battery life. So in practice, mu is set somewhere in between, of course. So this is one way to impose a prior. We can also impose some other constraints. For example, we can set any parameters to a constant, including zero, as needed. 
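The modified M-step just described — add mu times p(w | prior) to the counts collected for a topic, and add mu to the denominator — can be sketched in a few lines. This is a sketch under our own naming: `reestimate_topic`, the toy counts, and the prior are illustrative, not from the lecture's slides.

```python
# Sketch of the M-step for MAP estimation with prior pseudo counts.
def reestimate_topic(counts, prior, mu):
    """counts: expected word counts assigned to this topic by the E-step.
    prior: p(w | prior distribution); mu: prior strength (pseudo-count mass)."""
    vocab = set(counts) | set(prior)
    # Numerator: collected counts plus mu * p(w | prior) pseudo counts.
    numer = {w: counts.get(w, 0.0) + mu * prior.get(w, 0.0) for w in vocab}
    # Denominator: total real counts plus the total pseudo-count mass mu,
    # so the re-estimated probabilities again sum to one.
    denom = sum(counts.values()) + mu
    return {w: c / denom for w, c in numer.items()}

counts = {"battery": 3.0, "life": 2.0, "screen": 5.0}   # toy E-step output
prior = {"battery": 0.5, "life": 0.5}                   # concentrated on "battery life"
theta = reestimate_topic(counts, prior, mu=10.0)
```

Setting mu to zero recovers the plain maximum likelihood re-estimate, while a very large mu pins the distribution to the prior, matching the two extreme cases discussed in the lecture.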
For example, we may want to set one of the Pi's to zero, and this would mean we do not allow that topic to participate in generating that document. And this is only reasonable, of course, when we have prior knowledge that strongly suggests this.", + "url": "https://www.coursera.org/learn/text-mining/lecture/deiXc/3-9-latent-dirichlet-allocation-lda-part-1" } ] }, { "3-10-latent-dirichlet-allocation-lda-part-2": [ { - "0:00": "[SOUND] So now let's talk about the extension of PLSA to LDA, and to motivate that, we need to talk about some deficiencies of PLSA. First, it's not really a generative model because we cannot compute the probability of a new document. You can see why, and that's because the pis are needed to generate the document, but the pis are tied to the documents that we have in the training data. So we can't compute the pis for a future document." + "time": "0:00", + "text": "[SOUND] So now let's talk about the extension of PLSA to LDA, and to motivate that, we need to talk about some deficiencies of PLSA. First, it's not really a generative model because we cannot compute the probability of a new document. You can see why, and that's because the pis are needed to generate the document, but the pis are tied to the documents that we have in the training data. So we can't compute the pis for a future document.", + "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" }, { - "0:34": "And there's some heuristic workaround, though. Secondly, it has many parameters, and I've asked you to compute how many parameters exactly there are in PLSA; you will see there are many. That means the model is very complex. This also means that there are many local maxima and it's prone to overfitting. And that makes it very hard to find a good local maximum." + "time": "0:34", + "text": "And there's some heuristic workaround, though. 
Secondly, it has many parameters, and I've asked you to compute how many parameters exactly there are in PLSA; you will see there are many. That means the model is very complex. This also means that there are many local maxima and it's prone to overfitting. And that makes it very hard to find a good local maximum.", + "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" }, { - "1:02": "Let alone reaching the global maximum. And in terms of explaining future data, we might find that it overfits the training data because of the complexity of the model. The model is so flexible that it fits precisely what the training data looks like, and then it doesn't allow us to generalize the model to other data." + "time": "1:02", + "text": "Let alone reaching the global maximum. And in terms of explaining future data, we might find that it overfits the training data because of the complexity of the model. The model is so flexible that it fits precisely what the training data looks like, and then it doesn't allow us to generalize the model to other data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" }, { - "1:23": "This, however, is not necessarily a problem for text mining, because here we're often only interested in fitting the training documents that we have. We are not always interested in modeling future data, but in other cases, or if we do care about generality, we would worry about this overfitting." + "time": "1:23", + "text": "This, however, is not necessarily a problem for text mining, because here we're often only interested in fitting the training documents that we have. 
We are not always interested in modeling future data, but in other cases, or if we do care about generality, we would worry about this overfitting.", + "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" }, { - "1:42": "So LDA improves on that, basically making PLSA a generative model by imposing a Dirichlet prior on the model parameters. The Dirichlet is just a special distribution that we can use to specify a prior. So in this sense, LDA is just a Bayesian version of PLSA, and the parameters are now much more regularized. You will see there are far fewer parameters, and you can achieve the same goal as PLSA for text mining, meaning it can compute the topic coverage and topic word distributions as in PLSA. However, while the parameters of LDA are much fewer, in order to compute the topic coverage and word distributions, we now face a problem of inferring these variables, because they are no longer parameters of the model. And the inference step again faces the local maximum problem. So essentially they are doing something very similar, but theoretically, LDA is a more elegant way of looking at the topic modeling problem. So let's see how we can generalize PLSA to LDA, or extend standard PLSA to obtain LDA. Now a full treatment of LDA is beyond the scope of this course, and we just don't have time to go into depth talking about that. But here, I just want to give you a brief idea about what's extended and what it enables, all right. So this is the picture of LDA. Now, I have removed the background model just for simplicity." + "time": "1:42", + "text": "So LDA improves on that, basically making PLSA a generative model by imposing a Dirichlet prior on the model parameters. The Dirichlet is just a special distribution that we can use to specify a prior. 
So in this sense, LDA is just a Bayesian version of PLSA, and the parameters are now much more regularized. You will see there are far fewer parameters, and you can achieve the same goal as PLSA for text mining, meaning it can compute the topic coverage and topic word distributions as in PLSA. However, while the parameters of LDA are much fewer, in order to compute the topic coverage and word distributions, we now face a problem of inferring these variables, because they are no longer parameters of the model. And the inference step again faces the local maximum problem. So essentially they are doing something very similar, but theoretically, LDA is a more elegant way of looking at the topic modeling problem. So let's see how we can generalize PLSA to LDA, or extend standard PLSA to obtain LDA. Now a full treatment of LDA is beyond the scope of this course, and we just don't have time to go into depth talking about that. But here, I just want to give you a brief idea about what's extended and what it enables, all right. So this is the picture of LDA. Now, I have removed the background model just for simplicity.", + "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" }, { - "3:15": "Now, in this model, all these parameters are free to change and we do not impose any prior. So these word distributions are now represented as theta vectors. So these are word distributions, so here. And the other set of parameters are the pis, and we represent them as a vector also; this is more convenient for introducing LDA. We have one vector for each document. And in this case, for theta, we have one vector for each topic." + "time": "3:15", + "text": "Now, in this model, all these parameters are free to change and we do not impose any prior. So these word distributions are now represented as theta vectors. So these are word distributions, so here. 
And the other set of parameters are the pis, and we represent them as a vector also; this is more convenient for introducing LDA. We have one vector for each document. And in this case, for theta, we have one vector for each topic.", + "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" }, { - "3:50": "Now, the difference between LDA and PLSA is that in LDA, we're not going to allow them to change freely. Instead, we're going to force them to be drawn from another distribution." + "time": "3:50", + "text": "Now, the difference between LDA and PLSA is that in LDA, we're not going to allow them to change freely. Instead, we're going to force them to be drawn from another distribution.", + "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" }, { - "4:03": "So more specifically, they will be drawn from two Dirichlet distributions respectively. The Dirichlet distribution is a distribution over vectors, so it gives us a probability for a particular choice of a vector. Take, for example, the pis. This Dirichlet distribution tells us which vectors of pis are more likely. And this distribution in itself is controlled by another vector of parameters, the alphas." + "time": "4:03", + "text": "So more specifically, they will be drawn from two Dirichlet distributions respectively. The Dirichlet distribution is a distribution over vectors, so it gives us a probability for a particular choice of a vector. Take, for example, the pis. This Dirichlet distribution tells us which vectors of pis are more likely. 
And this distribution in itself is controlled by another vector of parameters, the alphas.", + "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" }, { - "4:31": "Depending on the alphas, we can characterize the distribution in different ways, to favor certain choices of pis over others. For example, you might favor the choice of a relatively uniform distribution over all the topics. Or you might favor generating a skewed coverage of topics, and this is controlled by alpha. And similarly here, the topic word distributions are drawn from another Dirichlet distribution with beta parameters. And note that here, alpha has k parameters, corresponding to the k values of pis for a document. Whereas here, beta has M values, corresponding to the M words in our vocabulary." + "time": "4:31", + "text": "Depending on the alphas, we can characterize the distribution in different ways, to favor certain choices of pis over others. For example, you might favor the choice of a relatively uniform distribution over all the topics. Or you might favor generating a skewed coverage of topics, and this is controlled by alpha. And similarly here, the topic word distributions are drawn from another Dirichlet distribution with beta parameters. And note that here, alpha has k parameters, corresponding to the k values of pis for a document. Whereas here, beta has M values, corresponding to the M words in our vocabulary.", + "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" }, { - "5:17": "Now once we impose this prior, the generation process will be different. We start by drawing the pis from the Dirichlet distribution, and the pis will tell us these probabilities." + "time": "5:17", + "text": "Now once we impose this prior, the generation process will be different. 
We start by drawing the pis from the Dirichlet distribution, and the pis will tell us these probabilities.", + "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" }, { - "5:35": "And then, we're going to use the pi to further choose which topic to use, and this is of course very similar to the PLSA model." + "time": "5:35", + "text": "And then, we're going to use the pi to further choose which topic to use, and this is of course very similar to the PLSA model.", + "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" }, { - "5:47": "And similarly here, we're not going to have these word distributions free. Instead, we're going to draw one from the other Dirichlet distribution, and then from this distribution we're going to further sample a word. And the rest is very similar to PLSA. The likelihood function now is more complicated for LDA, but there's a close connection between the likelihood functions of LDA and PLSA. So I'm going to illustrate the difference here. At the top, you see the PLSA likelihood function that you have already seen before. It's copied from the previous slide." + "time": "5:47", + "text": "And similarly here, we're not going to have these word distributions free. Instead, we're going to draw one from the other Dirichlet distribution, and then from this distribution we're going to further sample a word. And the rest is very similar to PLSA. The likelihood function now is more complicated for LDA, but there's a close connection between the likelihood functions of LDA and PLSA. So I'm going to illustrate the difference here. At the top, you see the PLSA likelihood function that you have already seen before. It's copied from the previous slide. 
Only that I dropped the background for simplicity.", + "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" }, { - "6:27": "So in the LDA formulas you see very similar things. You see the first equation is essentially the same. And this is the probability of generating a word from multiple word distributions." + "time": "6:27", + "text": "So in the LDA formulas you see very similar things. You see the first equation is essentially the same. And this is the probability of generating a word from multiple word distributions.", + "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" }, { - "6:40": "And this formula is a sum over all the possibilities of generating a word. Inside the sum is a product of the probability of choosing a topic multiplied by the probability of observing the word from that topic." + "time": "6:40", + "text": "And this formula is a sum over all the possibilities of generating a word. Inside the sum is a product of the probability of choosing a topic multiplied by the probability of observing the word from that topic.", + "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" }, { - "6:55": "So this is a very important formula, as I've stressed multiple times. And this is actually the core assumption in all the topic models. And you might see other topic models that are extensions of LDA or PLSA. And they all rely on this. So it's very important to understand this. And this gives us the probability of getting a word from a mixture model. Now, next, in the probability of a document, we see there is a PLSA component in the LDA formula, but the LDA formula will add an integral here. And that's to account for the fact that the pis are not fixed. They are drawn from the Dirichlet distribution, and that's shown here. 
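The generative process just described (draw each topic's word distribution from a Dirichlet governed by beta, draw a document's coverage pi from a Dirichlet governed by alpha, then repeatedly pick a topic and then a word) can be sketched with standard-library sampling; the toy vocabulary, hyperparameter values, and helper name below are our own illustrative choices:

```python
# Toy sketch of LDA's generative process (illustrative only).
import random

random.seed(0)

def sample_dirichlet(alpha):
    """Draw one vector from Dirichlet(alpha) by normalizing Gamma draws."""
    draws = [random.gammavariate(a, 1.0) for a in alpha]
    total = sum(draws)
    return [d / total for d in draws]

vocab = ["battery", "life", "screen", "price"]
k, doc_len = 2, 8
alpha = [0.5] * k            # governs each document's topic coverage pi
beta = [0.1] * len(vocab)    # governs each topic's word distribution theta

# Draw the word distribution theta_j for each of the k topics.
thetas = [sample_dirichlet(beta) for _ in range(k)]

# For one document: draw its coverage pi, then repeatedly pick a topic, then a word.
pi = sample_dirichlet(alpha)
doc = []
for _ in range(doc_len):
    z = random.choices(range(k), weights=pi)[0]       # choose a topic
    w = random.choices(vocab, weights=thetas[z])[0]   # choose a word from that topic
    doc.append(w)
print(doc)
```

Dropping the two Dirichlet draws and treating pi and the thetas as fixed free parameters recovers the PLSA generation process, which is exactly the relationship the lecture describes.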
That's why we have to take an integral, to consider all the possible pis that we could possibly draw from this Dirichlet distribution. And similarly, in the likelihood for the whole collection, we also see further components added, another integral here." + "time": "6:55", + "text": "So this is a very important formula, as I've stressed multiple times. And this is actually the core assumption in all the topic models. And you might see other topic models that are extensions of LDA or PLSA. And they all rely on this. So it's very important to understand this. And this gives us the probability of getting a word from a mixture model. Now, next, in the probability of a document, we see there is a PLSA component in the LDA formula, but the LDA formula will add an integral here. And that's to account for the fact that the pis are not fixed. They are drawn from the Dirichlet distribution, and that's shown here. That's why we have to take an integral, to consider all the possible pis that we could possibly draw from this Dirichlet distribution. And similarly, in the likelihood for the whole collection, we also see further components added, another integral here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" }, { - "7:58": "Right? So basically in LDA we're just adding these integrals to account for the uncertainty, and we added of course the Dirichlet distributions to govern the choice of these parameters, the pis and thetas." + "time": "7:58", + "text": "Right? So basically in LDA we're just adding these integrals to account for the uncertainty, and we added of course the Dirichlet distributions to govern the choice of these parameters, the pis and thetas.", + "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" }, { - "8:12": "So this is the likelihood function for LDA. Now, next, let's talk about parameter estimation and inference. 
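The core mixture assumption stressed in this lecture — the probability of a word is a sum over topics of the probability of choosing the topic times the probability of the word under that topic — is tiny in code (the toy probabilities below are purely illustrative):

```python
# p(w) under a mixture of topic word distributions:
# sum over topics of p(topic) * p(word | topic).
def word_prob(w, pi, thetas):
    return sum(p * theta.get(w, 0.0) for p, theta in zip(pi, thetas))

pi = [0.7, 0.3]  # topic choice probabilities for one document
thetas = [{"battery": 0.6, "life": 0.4},   # topic 1 word distribution
          {"price": 0.9, "battery": 0.1}]  # topic 2 word distribution
print(word_prob("battery", pi, thetas))  # approximately 0.7*0.6 + 0.3*0.1 = 0.45
```

This single line of arithmetic is the shared core of PLSA, LDA, and the other topic models the lecture mentions as extensions.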
Now the parameters can be estimated using exactly the same approach, the maximum likelihood estimate, for LDA. Now you might think about how many parameters there are in LDA versus PLSA. You'll see there are far fewer parameters in LDA, because in this case the only parameters are the alphas and the betas. So we can use the maximum likelihood estimator to compute them. Of course, it's more complicated, because the form of the likelihood function is more complicated. But what's also important to notice is that the quantities we are interested in, namely the topics and the coverage, are no longer parameters in LDA. In this case we have to use Bayesian inference, or posterior inference, to compute them based on the parameters alpha and beta. Unfortunately, this computation is intractable. 
So we generally have to resort to approximate inference.", + "time": "8:12", + "text": "So this is the likelihood function for LDA. Now, next, let's talk about parameter estimation and inference. Now the parameters can be estimated using exactly the same approach, the maximum likelihood estimate, for LDA. Now you might think about how many parameters there are in LDA versus PLSA. You'll see there are far fewer parameters in LDA, because in this case the only parameters are the alphas and the betas. So we can use the maximum likelihood estimator to compute them. Of course, it's more complicated, because the form of the likelihood function is more complicated. But what's also important to notice is that the quantities we are interested in, namely the topics and the coverage, are no longer parameters in LDA. In this case we have to use Bayesian inference, or posterior inference, to compute them based on the parameters alpha and beta. Unfortunately, this computation is intractable. So we generally have to resort to approximate inference.", + "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" }, { - "9:18": "And there are many methods available for that, and I'm sure you will see them when you use different toolkits for LDA, or when you read papers about" + "time": "9:18", + "text": "And there are many methods available for that, and I'm sure you will see them when you use different toolkits for LDA, or when you read papers about", + "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" }, { - "9:30": "these different extensions of LDA. Now here we, of course, can't give an in-depth introduction to that, but just know that they are computed by inference using the parameters alphas and betas. But the math [INAUDIBLE], actually, in the end, is very similar to PLSA. And especially when we use an algorithm called collapsed Gibbs sampling, the algorithm looks very similar to the EM algorithm. So in the end, they are doing something very similar." + "time": "9:30", + "text": "these different extensions of LDA. Now here we, of course, can't give an in-depth introduction to that, but just know that they are computed by inference using the parameters alphas and betas. But the math [INAUDIBLE], actually, in the end, is very similar to PLSA. And especially when we use an algorithm called collapsed Gibbs sampling, the algorithm looks very similar to the EM algorithm. So in the end, they are doing something very similar.", + "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" }, { - "10:10": "So to summarize our discussion of the properties of topic models, these models provide a general and principled way of mining and analyzing topics in text, with many applications. 
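The parameter-count question raised earlier in this lecture can be made concrete. Under the usual accounting (a word distribution over M words has M - 1 free values, a coverage vector over k topics has k - 1), and with the lecture's reading that LDA's only parameters are alpha (k values) and beta (M values), a hypothetical corpus gives:

```python
# Hypothetical sizes, purely illustrative.
N, M, k = 1000, 20000, 10            # documents, vocabulary size, topics

# PLSA: k word distributions plus one coverage vector per document.
plsa = k * (M - 1) + N * (k - 1)
# LDA: just the Dirichlet hyperparameters alpha (k values) and beta (M values).
lda = k + M

print(plsa, lda)
```

This is why the lecture says LDA's parameters are much more regularized: the per-document coverages and per-topic word distributions become latent variables to be inferred rather than free parameters to be fit.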
The most basic task setup is to take text data as input, and we're going to output k topics. Each topic is characterized by a word distribution. And we're going to also output the proportions of these topics covered in each document." + "time": "10:10", + "text": "So to summarize our discussion of the properties of topic models, these models provide a general and principled way of mining and analyzing topics in text, with many applications. The most basic task setup is to take text data as input, and we're going to output k topics. Each topic is characterized by a word distribution. And we're going to also output the proportions of these topics covered in each document.", + "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" }, { - "10:38": "And PLSA is the basic topic model, in fact the most basic of the topic models, and it is often adequate for most applications. That's why we spent a lot of time explaining PLSA in detail." + "time": "10:38", + "text": "And PLSA is the basic topic model, in fact the most basic of the topic models, and it is often adequate for most applications. That's why we spent a lot of time explaining PLSA in detail.", + "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" }, { - "10:53": "Now LDA improves over PLSA by imposing priors. This has led to theoretically more appealing models. However, in practice, LDA and PLSA tend to give similar performance, so in practice PLSA and LDA would work equally well for most of the tasks." + "time": "10:53", + "text": "Now LDA improves over PLSA by imposing priors. This has led to theoretically more appealing models. 
However, in practice, LDA and PLSA tend to give similar performance, so in practice PLSA and LDA would work equally well for most of the tasks.", + "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" }, { - "11:12": "Now here are some suggested readings if you want to know more about the topic. First is a nice review of probabilistic topic models." + "time": "11:12", + "text": "Now here are some suggested readings if you want to know more about the topic. First is a nice review of probabilistic topic models.", + "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" }, { - "11:20": "The second has a discussion about how to automatically label a topic model. Now I've shown you some distributions and they intuitively suggest a topic. But what exactly is a topic? Can we use phrases to label the topic? To make it the more easy to understand and this paper is about the techniques for doing that. The third one is empirical comparison of LDA and the PLSA for various tasks. The conclusion is that they tend to perform similarly. [MUSIC]" + "time": "11:20", + "text": "The second has a discussion about how to automatically label a topic model. Now I've shown you some distributions and they intuitively suggest a topic. But what exactly is a topic? Can we use phrases to label the topic? To make it the more easy to understand and this paper is about the techniques for doing that. The third one is empirical comparison of LDA and the PLSA for various tasks. The conclusion is that they tend to perform similarly. [MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" } ] } @@ -2361,927 +3841,1521 @@ { "4-1-text-clustering-motivation": [ { - "0:00": "[SOUND] This lecture is the first one about the text clustering." 
+ "time": "0:00", + "text": "[SOUND] This lecture is the first one about the text clustering.", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "0:14": "In this lecture, we are going to talk about the text clustering." + "time": "0:14", + "text": "In this lecture, we are going to talk about the text clustering.", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "0:18": "This is a very important technique for doing topic mining and analysis. In particular, in this lecture we're going to start with some basic questions about the clustering." + "time": "0:18", + "text": "This is a very important technique for doing topic mining and analysis. In particular, in this lecture we're going to start with some basic questions about the clustering.", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "0:31": "And that is, what is text clustering and why we are interested in text clustering." + "time": "0:31", + "text": "And that is, what is text clustering and why we are interested in text clustering.", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "0:38": "In the following lectures, we are going to talk about how to do text clustering. How to evaluate the clustering results?" + "time": "0:38", + "text": "In the following lectures, we are going to talk about how to do text clustering. How to evaluate the clustering results?", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "0:47": "So what is text clustering?" 
+ "time": "0:47", + "text": "So what is text clustering?", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "0:49": "Well, clustering actually is a very general technique for data mining, as you might have learned in some other courses." + "time": "0:49", + "text": "Well, clustering actually is a very general technique for data mining, as you might have learned in some other courses.", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "0:56": "The idea is to discover natural structures in the data." + "time": "0:56", + "text": "The idea is to discover natural structures in the data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "1:01": "In other words, we want to group similar objects together. In our case, these objects are, of course, text objects. For example, they can be documents, terms, passages, sentences, or websites, and the goal is to group similar text objects together. So let's see an example. Well, here you don't really see text objects; I just used some shapes to denote objects that can be grouped together." + "time": "1:01", + "text": "In other words, we want to group similar objects together. In our case, these objects are, of course, text objects. For example, they can be documents, terms, passages, sentences, or websites, and the goal is to group similar text objects together. So let's see an example. Well, here you don't really see text objects; I just used some shapes to denote objects that can be grouped together.", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "1:33": "Now if I ask you what some natural structures or natural groups are here, if you look at it, you might agree that we can group these objects based on shape, or their locations in this two-dimensional space." 
+ "time": "1:33", + "text": "Now if I ask you, what are some natural structures or natural groups here? If you look at it, you might agree that we can group these objects based on shapes, or their locations on this two-dimensional space.", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "1:53": "So we got the three clusters in this case." + "time": "1:53", + "text": "So we get three clusters in this case.", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "1:56": "And they may not be so much this agreement about these three clusters but it really depends on the perspective to look at the objects." + "time": "1:56", + "text": "And there may not be so much disagreement about these three clusters, but it really depends on the perspective from which we look at the objects.", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "2:07": "Maybe some of you have also seen thing in a different way, so we might get different clusters. And you'll see another example about this ambiguity more clearly. But the main point of here is, the problem is actually not so well defined." + "time": "2:07", + "text": "Maybe some of you have seen things in a different way, so we might get different clusters. And you'll see another example of this ambiguity more clearly. But the main point here is, the problem is actually not so well defined.", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "2:29": "And the problem lies in how to define similarity. And what do you mean by similar objects?" + "time": "2:29", + "text": "And the problem lies in how to define similarity. 
And what do you mean by similar objects?", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "2:38": "Now this problem has to be clearly defined in order to have a well defined clustering problem." + "time": "2:38", + "text": "Now this problem has to be clearly defined in order to have a well-defined clustering problem.", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "2:46": "And the problem is in general that any two objects can be similar depending on how you look at them. So for example, this will kept the two words like car and horse." + "time": "2:46", + "text": "And the problem is in general that any two objects can be similar depending on how you look at them. So for example, consider the two words car and horse.", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "3:00": "So are the two words similar? Well, it depends on how if we look at the physical" + "time": "3:00", + "text": "So are the two words similar? Well, it depends. If we look at the physical", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "3:11": "properties of a car and a horse, they are very different; but if you look at them functionally, a car and a horse can both be transportation tools. So in that sense, they may be similar. 
So as we can see, it really depends on our perspective to look at the objects.", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "3:32": "And so it ought to make the clustering problem well defined. A user must define the perspective for assessing similarity." + "time": "3:32", + "text": "And so, in order to make the clustering problem well defined, a user must define the perspective for assessing similarity.", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "3:44": "And we call this perspective the clustering bias." + "time": "3:44", + "text": "And we call this perspective the clustering bias.", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "3:49": "And when you define a clustering problem, it's important to specify" + "time": "3:49", + "text": "And when you define a clustering problem, it's important to specify", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "3:55": "your perspective for similarity or for defining the similarity that will be used to group similar objects. because otherwise, the similarity is not well defined and one can have different ways to group objects. So let's look at the example here. You are seeing some objects, or some shapes, that are very similar to what you have seen on the first slide, but if I ask you to group these objects, again, you might" + "time": "3:55", + "text": "your perspective for similarity, or for defining the similarity that will be used to group similar objects. Because otherwise, the similarity is not well defined and one can have different ways to group objects. So let's look at the example here. 
You are seeing some objects, or some shapes, that are very similar to what you have seen on the first slide, but if I ask you to group these objects, again, you might", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "4:38": "feel there is more than here than on the previous slide. For example, you might think, well, we can steer a group by ships, so that would give us cluster that looks like this. However, you might also feel that, well, maybe the objects can be grouped based on their sizes. So that would give us a different way to cluster the data if we look at the size and look at the similarity in size." + "time": "4:38", + "text": "feel there is more ambiguity here than on the previous slide. For example, you might think, well, we can still group by shapes, so that would give us clusters that look like this. However, you might also feel that, well, maybe the objects can be grouped based on their sizes. So that would give us a different way to cluster the data if we look at the size and look at the similarity in size.", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "5:12": "So as you can see clearly here, depending on the perspective, we'll get different clustering result. So that also clearly tells us that in order to evaluate the clustering without, we must use perspective. Without perspective, it's very hard to define what is the best clustering result." + "time": "5:12", + "text": "So as you can see clearly here, depending on the perspective, we'll get different clustering results. So that also clearly tells us that in order to evaluate the clustering result, we must use a perspective. Without a perspective, it's very hard to define what is the best clustering result.", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "5:36": "So there are many examples of text clustering setup." 
+ "time": "5:36", + "text": "So there are many examples of text clustering setups.", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "5:42": "And so for example, we can cluster documents in the whole text collection. So in this case, documents are the units to be clustered." + "time": "5:42", + "text": "And so for example, we can cluster documents in the whole text collection. So in this case, documents are the units to be clustered.", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "5:52": "We may be able to cluster terms. In this case, terms are objects. And a cluster of terms can be used to define concept, or theme, or a topic. In fact, there's a topic models that you have seen some previous lectures can give you cluster of terms in some sense if you take terms with high probabilities from word distribution. Another example is just to cluster any text segments, for example, passages, sentences, or any segments that you can extract the former larger text objects." + "time": "5:52", + "text": "We may be able to cluster terms. In this case, terms are objects. And a cluster of terms can be used to define a concept, a theme, or a topic. In fact, the topic models that you have seen in some previous lectures can give you clusters of terms in some sense, if you take terms with high probabilities from a word distribution. Another example is just to cluster any text segments, for example, passages, sentences, or any segments that you can extract from larger text objects.", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "6:32": "For example, we might extract the order text segments about the topic, let's say, by using a topic model. 
Now once we've got those text objects then we can" + "time": "6:32", + "text": "For example, we might extract all the text segments about a topic, let's say, by using a topic model. Now once we've got those text objects, then we can", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "6:45": "cluster the segments that we've got to discover interesting clusters that might also ripple in the subtopics. So this is a case of combining text clustering with some other techniques. And in general you will see a lot of text mining" + "time": "6:45", + "text": "cluster the segments that we've got to discover interesting clusters that might also reveal some subtopics. So this is a case of combining text clustering with some other techniques. And in general you will see a lot of text mining", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "7:05": "can be accurate combined in a flexible way to achieve the goal of doing more sophisticated mining and analysis of text data." + "time": "7:05", + "text": "techniques can actually be combined in a flexible way to achieve the goal of doing more sophisticated mining and analysis of text data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "7:16": "We can also cluster fairly large text objects and by that, I just mean text objects may contain a lot of documents. 
So for example, we might cluster websites. Each website is actually composed of multiple documents. Similarly, we can also cluster articles written by the same author, for example. So we can treat all the articles published by an author as one unit for clustering. In this way, we might group authors together based on whether their published papers are similar.", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "7:55": "For the more text clusters will be for the cluster to generate a hierarchy. That's because we can in general cluster any text object at different levels." + "time": "7:55", + "text": "Furthermore, text clustering can be used to generate a hierarchy. That's because we can in general cluster any text object at different levels.", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "8:08": "So more generally why is text clustering interesting? Well, it's because it's a very useful technique for text mining, particularly exploratory text analysis." + "time": "8:08", + "text": "So more generally, why is text clustering interesting? Well, it's because it's a very useful technique for text mining, particularly for exploratory text analysis.", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "8:20": "And so a typical scenario is that you were getting a lot of text data, let's say all the email messages from customers in some time period, all the literature articles, etc. And then you hope to get a sense about what are the overall content of the connection, so for example, you might be interested in getting a sense about major topics, or what are some typical or representative documents in the connection. And clustering to help us achieve this goal. We sometimes also want to link a similar text objects together. And these objects might be duplicated content, for example. 
And in that case, such a technique can help us remove redundancy and remove duplicate documents." + "time": "8:20", + "text": "And so a typical scenario is that you are given a lot of text data, let's say all the email messages from customers in some time period, all the literature articles, etc. And then you hope to get a sense of the overall content of the collection, so for example, you might be interested in getting a sense about major topics, or what are some typical or representative documents in the collection. And clustering can help us achieve this goal. We sometimes also want to link similar text objects together. And these objects might be duplicated content, for example. And in that case, such a technique can help us remove redundancy and remove duplicate documents.", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "9:10": "Sometimes they are about the same topic and by linking them together we can have more complete coverage of a topic." + "time": "9:10", + "text": "Sometimes they are about the same topic and by linking them together we can have more complete coverage of a topic.", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "9:19": "We may also used text clustering to create a structure on the text data and sometimes we can create a hierarchy of structures and this is very useful for problems." + "time": "9:19", + "text": "We may also use text clustering to create a structure on the text data, and sometimes we can create a hierarchy of structures, and this is very useful for browsing.", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "9:31": "We may also use text clustering to induce additional features to represent text data when we cluster documents together, we can treat each cluster as a feature. 
And then we can say when a document is in this cluster and then the feature value would be one. And if a document is not in this cluster, then the feature value is zero. And this helps provide additional discrimination that might be used for text classification as we will discuss later." + "time": "9:31", + "text": "We may also use text clustering to induce additional features to represent text data: when we cluster documents together, we can treat each cluster as a feature. And then we can say when a document is in this cluster, the feature value would be one. And if a document is not in this cluster, then the feature value is zero. And this helps provide additional discrimination that might be used for text classification as we will discuss later.", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "9:59": "So there are, in general, many applications of text clustering. And I just thought of two very specific ones. One is to cluster search results, for example, [INAUDIBLE] search engine can cluster such results so that the user can see overall structure of the results of return the fall query. And when the query's ambiguous this is particularly useful because the clusters likely represent different senses of ambiguous word." + "time": "9:59", + "text": "So there are, in general, many applications of text clustering. And I just thought of two very specific ones. One is to cluster search results, for example, [INAUDIBLE] search engine can cluster search results so that the user can see the overall structure of the results returned for a query. And when the query is ambiguous, this is particularly useful because the clusters likely represent different senses of the ambiguous word.", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, { - "10:28": "Another application is to understand the major complaints from customers based on their emails, right. 
So in this case, we can cluster email messages and then find in the major clusters from there, we can understand what are the major complaints about them. [MUSIC]" + "time": "10:28", + "text": "Another application is to understand the major complaints from customers based on their emails, right. So in this case, we can cluster email messages and then find the major clusters; from there, we can understand what the major complaints are. [MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" }, ] }, { "4-2-text-clustering-generative-probabilistic-models-part-1": [ { - "0:00": "[SOUND] This lecture is about generating probabilistic models for text clustering." + "time": "0:00", + "text": "[SOUND] This lecture is about generative probabilistic models for text clustering.", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "0:13": "In this lecture, we're going to continue discussing text clustering, and we're going to introduce generative probabilistic models as a way to do text clustering. So this is the overall plan for covering text clustering. In the previous lecture, we have talked about what is text clustering and why text clustering is interesting. 
In this lecture, we're going to talk about how to do text clustering. In general, as you see on this slide, there are two kinds of approaches. One is generative probabilistic models, which is the topic of this lecture. And later, we'll also discuss similarity-based approaches.", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "0:53": "So to talk about generating models for text clustering, it would be useful to revisit the topic mining problem using topic models, because the two problems are very similar. This is a slide that you have seen earlier in the lecture on topic model. Here we show that we have input of a text collection C and a number of topics k, and vocabulary V. And we hope to generate as output two things. One is a set of topics denoted by Theta i's, each is awarded distribution and the other is pi i j. These are the probabilities that each document covers each topic. So this is a topic coverage and it's also visualized here on this slide. You can see that this is what we can get by using a topic model. Now, the main difference between this and the text clustering problem is that here, a document is assumed to possibly cover multiple topics. And indeed, in general, a document will be covering more than one topic with nonzero probabilities. In text clustering, however, we only allow a document to cover one topic, if we assume one topic is a cluster." + "time": "0:53", + "text": "So to talk about generative models for text clustering, it would be useful to revisit the topic mining problem using topic models, because the two problems are very similar. This is a slide that you have seen earlier in the lecture on topic models. Here we show that we have input of a text collection C and a number of topics k, and vocabulary V. And we hope to generate as output two things. One is a set of topics denoted by Theta i's, each is a word distribution, and the other is pi i j. 
These are the probabilities that each document covers each topic. So this is a topic coverage and it's also visualized here on this slide. You can see that this is what we can get by using a topic model. Now, the main difference between this and the text clustering problem is that here, a document is assumed to possibly cover multiple topics. And indeed, in general, a document will be covering more than one topic with nonzero probabilities. In text clustering, however, we only allow a document to cover one topic, if we assume one topic is a cluster.", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "2:24": "So that means if we change the problem definition just slightly by assuming that each document that can only be generated by using precisely one topic." + "time": "2:24", + "text": "So that means we change the problem definition just slightly by assuming that each document can only be generated by using precisely one topic.", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "2:37": "Then we'll have a definition of the clustering problem as you see here. So here the output is changed so that we no longer have the detailed coverage distributions pi i j. But instead, we're going to have cluster assignment decisions, Ci. And Ci is a decision for the document i. 
And C sub i is going to take a value from 1 through k to indicate one of the k clusters.", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "3:09": "And basically tells us that d i is in which cluster. As illustrated here, we no longer have multiple topics covered in each document. It is precisely one topic. Although which topic is still uncertain. There is also a connection with" + "time": "3:09", + "text": "And it basically tells us which cluster d i is in. As illustrated here, we no longer have multiple topics covered in each document. It is precisely one topic, although which topic is still uncertain. There is also a connection with", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "3:29": "the problem of mining one topic that we discussed earlier. So here again, it's a slide that you have seen before and here we hope to estimate a topic model or distribution based on precisely one document. And that's when we assume that this document, it covers precisely one topic." + "time": "3:29", + "text": "the problem of mining one topic that we discussed earlier. So here again, it's a slide that you have seen before, and here we hope to estimate a topic model or distribution based on precisely one document. And that's when we assume that this document covers precisely one topic.", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "3:52": "But we can also consider some variations of the problem. For example, we can consider there are N documents, each covers a different topic, so that's N documents, and topics. Of course, in this case, these documents are independent, and these topics are also independent. 
But, we can further allow these documents with share topics, and we can also assume that we are going to assume there are fewer topics than the number of documents, so these documents must share some topics. And if we have N documents that share k topics, then we'll again have precisely the document clustering problem." + "time": "3:52", + "text": "But we can also consider some variations of the problem. For example, we can consider there are N documents, each covers a different topic, so that's N documents and N topics. Of course, in this case, these documents are independent, and these topics are also independent. But we can further allow these documents to share topics, and we can also assume there are fewer topics than the number of documents, so these documents must share some topics. And if we have N documents that share k topics, then we'll again have precisely the document clustering problem.", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "4:34": "So because of these connections, naturally we can think about how to use a probabilistically generative model to solve the problem of text clustering." + "time": "4:34", + "text": "So because of these connections, naturally we can think about how to use a probabilistic generative model to solve the problem of text clustering.", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "4:43": "So the question now is what generative model can be used to do clustering?" 
+ "time": "4:43", + "text": "So the question now is what generative model can be used to do clustering?", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "4:49": "As in all cases of designing a generative model, we hope the generative model would adopt the output that we hope to generate or the structure that we hope to model. So in this case, this is a clustering structure, the topics and each document that covers one topic. And we hope to embed such preferences in the generative model. But, if you think about the main difference between this problem and the topic model that we talked about earlier. And you will see a main requirement is how can we force every document to be generated from precisely one topic, instead of k topics, as in the topic model?" + "time": "4:49", + "text": "As in all cases of designing a generative model, we hope the generative model would adopt the output that we hope to generate, or the structure that we hope to model. So in this case, this is a clustering structure: the topics, and each document covering one topic. And we hope to embed such preferences in the generative model. But if you think about the main difference between this problem and the topic model that we talked about earlier, you will see a main requirement: how can we force every document to be generated from precisely one topic, instead of k topics, as in the topic model?", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "5:35": "So let's revisit the topic model again in more detail. So this is a detailed view of a two component mixture model. 
When we have k components, it looks similar. So here we see that when we generate a document," + "time": "5:35", + "text": "So let's revisit the topic model again in more detail. So this is a detailed view of a two-component mixture model. When we have k components, it looks similar. So here we see that when we generate a document,", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "5:53": "we generate each word independent." + "time": "5:53", + "text": "we generate each word independently.", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "5:57": "And when we generate each word, we first make a choice between these distributions. We decide to use one of them with some probability. So p of theta 1 is the probability of choosing the distribution on the top. Now we first make this decision regarding which distribution should be used to generate the word. And then we're going to use this distribution to sample a word. Now note that in such a generative model, the decision on which distribution to use for each word is independent. 
So that means, for example, 'the' here could have been generated from the second distribution, theta 2, whereas 'text' is more likely generated from the first one on the top.", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "6:49": "That means the words in the document that could have been generated in general from multiple distributions." + "time": "6:49", + "text": "That means the words in the document could have been generated, in general, from multiple distributions.", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "6:58": "Now this is not what we want, as we said, for text clustering, for document clustering, where we hoped this document will be generated from precisely one topic." + "time": "6:58", + "text": "Now this is not what we want, as we said, for text clustering, for document clustering, where we hoped this document would be generated from precisely one topic.", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "7:09": "So now that means we need to modify the model. But how? Well, let's first think about why this model cannot be used for clustering. And as I just said, the reason is because it has allowed multiple topics to contribute a word to the document." + "time": "7:09", + "text": "So now that means we need to modify the model. But how? Well, let's first think about why this model cannot be used for clustering. And as I just said, the reason is because it has allowed multiple topics to contribute a word to the document.", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "7:28": "And that causes confusion because we're not going to know which cluster this document is from. 
And, more importantly, it's violating our assumption about the partitioning of documents in the clusters. If we really have one topic to correspond to one cluster of documents, then we would have a document that we generate from precisely one topic. That means all the words in the document must have been generated from precisely one distribution. And this is not true for such a topic model that we're seeing here. And that's why this cannot be used for clustering because it did not ensure that only one distribution has been used to generate all the words in one document." + "time": "7:28", + "text": "And that causes confusion because we're not going to know which cluster this document is from. And, more importantly, it's violating our assumption about the partitioning of documents in the clusters. If we really have one topic to correspond to one cluster of documents, then we would have a document that we generate from precisely one topic. That means all the words in the document must have been generated from precisely one distribution. And this is not true for such a topic model that we're seeing here. And that's why this cannot be used for clustering because it did not ensure that only one distribution has been used to generate all the words in one document.", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "8:15": "So if you realize this problem, then we can naturally design an alternative mixture model for doing clustering. So this is what you're seeing here. And we again have to make a decision regarding which distribution to use to generate this document because the document could potentially be generated from any of the k word distributions that we have. But this time, once we have made a decision to choose one of the topics, we're going to stay with this regime to generate all the words in the document."
+ "time": "8:15", + "text": "So if you realize this problem, then we can naturally design an alternative mixture model for doing clustering. So this is what you're seeing here. And we again have to make a decision regarding which distribution to use to generate this document because the document could potentially be generated from any of the k word distributions that we have. But this time, once we have made a decision to choose one of the topics, we're going to stay with this regime to generate all the words in the document.", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "8:49": "And that means, once we have made a choice of the distribution in generating the first word, we're going to stay with this distribution in generating all of the other words in the document. So, in other words, we only make the choice once; basically, we make the decision once for this document and then stay with it to generate all the words. Similarly, if I had chosen the second distribution, theta sub 2 here, you can see we'd stay with this one. And then generate the entire document d. Now, if you compare this picture with the previous one, you will see the decision of using a particular distribution is made just once for this document, in the case of document clustering. But in the case of the topic model, we have to make as many decisions as the number of words in the document. Because for each word, we can make a potentially different decision. And that's the key difference between the two models." + "time": "8:49", + "text": "And that means, once we have made a choice of the distribution in generating the first word, we're going to stay with this distribution in generating all of the other words in the document. So, in other words, we only make the choice once; basically, we make the decision once for this document and then stay with it to generate all the words.
Similarly, if I had chosen the second distribution, theta sub 2 here, you can see we'd stay with this one. And then generate the entire document d. Now, if you compare this picture with the previous one, you will see the decision of using a particular distribution is made just once for this document, in the case of document clustering. But in the case of the topic model, we have to make as many decisions as the number of words in the document. Because for each word, we can make a potentially different decision. And that's the key difference between the two models.", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "9:58": "But this is obviously also a mixture model so we can just group them together as one box to show that this is the model that will give us a probability of the document. Now, inside of this model, there is also this switch of choosing a different distribution. And we don't observe that so that's a mixture model. And of course a main problem in document clustering is to infer which distribution has been used to generate a document and that would allow us to recover the cluster identity of a document." + "time": "9:58", + "text": "But this is obviously also a mixture model so we can just group them together as one box to show that this is the model that will give us a probability of the document. Now, inside of this model, there is also this switch of choosing a different distribution. And we don't observe that so that's a mixture model.
And of course a main problem in document clustering is to infer which distribution has been used to generate a document and that would allow us to recover the cluster identity of a document.", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "10:37": "So it will be useful to think about the difference from the topic model as I have also mentioned multiple times." + "time": "10:37", + "text": "So it will be useful to think about the difference from the topic model as I have also mentioned multiple times.", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "10:46": "And there are mainly two differences, one is the choice of" + "time": "10:46", + "text": "And there are mainly two differences, one is the choice of", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "10:56": "using that particular distribution is made just once for document clustering. Whereas in the topic model, it's made multiple times for different words. The second is that word distribution, here, is going to be used to generate all the words for a document." + "time": "10:56", + "text": "using that particular distribution is made just once for document clustering. Whereas in the topic model, it's made multiple times for different words. The second is that word distribution, here, is going to be used to generate all the words for a document.", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "11:19": "But, in the case of the topic model, one distribution doesn't have to generate all the words in the document. Multiple distributions could have been used to generate the words in the document."
+ "time": "11:19", + "text": "But, in the case of the topic model, one distribution doesn't have to generate all the words in the document. Multiple distributions could have been used to generate the words in the document.", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "11:34": "Let's also think about a special case, when the probability of choosing one particular distribution is equal to 1. Now that just means we have no uncertainty now. We just stick with one particular distribution. Now in that case, clearly, we will see this is no longer a mixture model, because there's no uncertainty here and we can just use precisely one of the distributions for generating a document. And we're going back to the case of estimating one word distribution based on one document." + "time": "11:34", + "text": "Let's also think about a special case, when the probability of choosing one particular distribution is equal to 1. Now that just means we have no uncertainty now. We just stick with one particular distribution. Now in that case, clearly, we will see this is no longer a mixture model, because there's no uncertainty here and we can just use precisely one of the distributions for generating a document. And we're going back to the case of estimating one word distribution based on one document.", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "12:12": "So that's a connection that we discussed earlier. Now you can see it more clearly. So as in all cases of using a generative model to solve a problem, we first look at data and then think about how to design the model. But once we design the model, the next step is to write down the likelihood function. And after that we're going to look at how to estimate the parameters."
Now you can see it more clearly. So as in all cases of using a generative model to solve a problem, we first look at data and then think about how to design the model. But once we design the model, the next step is to write down the likelihood function. And after that we're going to look at how to estimate the parameters.", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "12:36": "So in this case, what's the likelihood function? It's going to be very similar to what you have seen before in topic models but it will be also different." + "time": "12:36", + "text": "So in this case, what's the likelihood function? It's going to be very similar to what you have seen before in topic models but it will be also different.", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "12:45": "Now if you still recall what the likelihood function looks like, then you will realize that in general, the probability of observing a data point from a mixture model is going to be a sum of all the possibilities of generating the data." + "time": "12:45", + "text": "Now if you still recall what the likelihood function looks like, then you will realize that in general, the probability of observing a data point from a mixture model is going to be a sum of all the possibilities of generating the data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "13:00": "In this case, so it's going to be a sum over these k topics, because every one could have been used to generate the document. And then inside the sum, you can still recall what the formula looks like, and it's going to be the product of two probabilities. One is the probability of choosing the distribution, the other is the probability of observing a particular datapoint from that distribution."
+ "time": "13:00", + "text": "In this case, so it's going to be a sum over these k topics, because every one could have been used to generate the document. And then inside the sum, you can still recall what the formula looks like, and it's going to be the product of two probabilities. One is the probability of choosing the distribution, the other is the probability of observing a particular datapoint from that distribution.", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "13:27": "So if you map this kind of formula to our problem here, you will see the probability of observing a document d" + "time": "13:27", + "text": "So if you map this kind of formula to our problem here, you will see the probability of observing a document d", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "13:37": "is basically a sum in this case of two different distributions because we have a very simplified situation of just two clusters. And so in this case, you can see it's a sum of two cases. In each case, it's indeed the probability of choosing the distribution either theta 1 or theta 2.
And then, the probability is multiplied by the probability of observing this document from this particular distribution.", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "14:16": "And if you further expanded this probability of observing the whole document, we see that it's a product of observing each word X sub i. And here we made the assumption that each word is generated independently, so the probability of the whole document is just a product of the probability of each word in the document." + "time": "14:16", + "text": "And if you further expanded this probability of observing the whole document, we see that it's a product of observing each word X sub i. And here we made the assumption that each word is generated independently, so the probability of the whole document is just a product of the probability of each word in the document.", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "14:40": "So this form should be very similar to the topic model. But it's also useful to think about the difference and for that purpose, I am also copying the probability of topic model of these two components here. So here you can see the formula looks very similar or in many ways, they are similar." + "time": "14:40", + "text": "So this form should be very similar to the topic model. But it's also useful to think about the difference and for that purpose, I am also copying the probability of topic model of these two components here. So here you can see the formula looks very similar or in many ways, they are similar.", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "15:02": "But there is also some difference." 
+ "time": "15:02", + "text": "But there is also some difference.", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "15:06": "And in particular, the difference is on the top. You see for the mixture model for document clustering, we first take a product, and then take a sum." + "time": "15:06", + "text": "And in particular, the difference is on the top. You see for the mixture model for document clustering, we first take a product, and then take a sum.", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "15:16": "And that's corresponding to our assumption of first making a choice of one distribution and then staying with that distribution to generate all the words. And that's why we have the product inside the sum." + "time": "15:16", + "text": "And that's corresponding to our assumption of first making a choice of one distribution and then staying with that distribution to generate all the words. And that's why we have the product inside the sum.", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" }, { - "15:30": "The sum corresponds to the choice. Now, in the topic model, we see that the sum is actually inside the product. And that's because we generated each word independently.
And that's why we have the product outside, but when we generate each word we have to make a decision regarding which distribution we use so we have a sum there for each word. But in general, these are all mixture models and we can estimate these models by using the EM Algorithm, as we will discuss more later. [MUSIC]" + "time": "15:30", + "text": "The sum corresponds to the choice. Now, in the topic model, we see that the sum is actually inside the product. And that's because we generated each word independently. And that's why we have the product outside, but when we generate each word we have to make a decision regarding which distribution we use so we have a sum there for each word. But in general, these are all mixture models and we can estimate these models by using the EM Algorithm, as we will discuss more later. [MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" } ] }, { "4-3-text-clustering-generative-probabilistic-models-part-2": [ { - "0:00": "[SOUND] This lecture is a continuing discussion of Generative Probabilistic Models for text clustering." + "time": "0:00", + "text": "[SOUND] This lecture is a continuing discussion of Generative Probabilistic Models for text clustering.", + "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" }, { - "0:13": "In this lecture, we are going to continue talking about the text clustering, particularly, the Generative Probabilistic Models." + "time": "0:13", + "text": "In this lecture, we are going to continue talking about the text clustering, particularly, the Generative Probabilistic Models.", + "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" }, { - "0:23": "So this is a slide that you have seen earlier where we have written down the likelihood function for a document with two distributions, being a two-component mixture model for document clustering." + "time": "0:23", + "text": "So this is a slide that you have seen earlier where we have written down the likelihood function for a document with two distributions, being a two-component mixture model for document clustering.", + "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" }, { - "0:39": "Now in this lecture, we're going to generalize this to include the k clusters.
Now if you look at the formula and think about the question, how to generalize it, you'll realize that all we need is to add more terms, like what you have seen here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" }, { - "0:57": "So you can just add more thetas and the probabilities of thetas and the probabilities of generating d from those thetas. So this is precisely what we are going to use and this is the general presentation of the mixture model for document clustering." + "time": "0:57", + "text": "So you can just add more thetas and the probabilities of thetas and the probabilities of generating d from those thetas. So this is precisely what we are going to use and this is the general presentation of the mixture model for document clustering.", + "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" }, { - "1:19": "So as in most cases, we would follow these steps in using a generative model. First, think about our data. And so in this case our data is a collection of documents, n documents denoted by d sub i, and then we think about the modeling. In this case, we design a mixture of k unigram language models. It's a little bit different from the topic model, but we have similar parameters. We have a set of theta i's that denote the distributions corresponding to the k unigram language models. We have p of each theta i as the probability of selecting each of the k distributions when we generate the document. Now note that although our goal is to find the clusters, we actually have used a more general notion of a probability of each cluster, and this, as you will see later, will allow us to assign a document to the cluster that has the highest probability of being able to generate the document." + "time": "1:19", + "text": "So as in most cases, we would follow these steps in using a generative model. First, think about our data. And so in this case our data is a collection of documents, n documents denoted by d sub i, and then we think about the modeling. In this case, we design a mixture of k unigram language models. It's a little bit different from the topic model, but we have similar parameters. We have a set of theta i's that denote the distributions corresponding to the k unigram language models. We have p of each theta i as the probability of selecting each of the k distributions when we generate the document.
Now note that although our goal is to find the clusters, we actually have used a more general notion of a probability of each cluster, and this, as you will see later, will allow us to assign a document to the cluster that has the highest probability of being able to generate the document.", + "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" }, { - "2:31": "So as a result, we can also recover some other interesting" + "time": "2:31", + "text": "So as a result, we can also recover some other interesting", + "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" }, { - "2:36": "properties, as you will see later."
+ "time": "2:36", + "text": "properties, as you will see later.", + "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" }, { - "2:42": "So the model basically would make the following assumption about the generation of a document. We first choose a theta i according to probability of theta i, and then generate all the words in the document using this distribution. Note that it's important that we use this distribution to generate all the words in the document. This is very different from the topic model. So the likelihood function would be like what you are seeing here." + "time": "2:42", + "text": "So the model basically would make the following assumption about the generation of a document. We first choose a theta i according to probability of theta i, and then generate all the words in the document using this distribution. Note that it's important that we use this distribution to generate all the words in the document. This is very different from the topic model. So the likelihood function would be like what you are seeing here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" }, { - "3:10": "So you can take a look at the formula here, we have used a different notation here in the second line of this equation. You are going to see now the notation has been changed to use each unique word in the vocabulary in the product, instead of each particular position in the document. So from x sub i to w is a change of notation and this change allows us to show the estimation formulas more easily. And you have seen this change also in the topic model presentation, but it's basically still just a product of the probabilities of all the words." + "time": "3:10", + "text": "So you can take a look at the formula here, we have used a different notation here in the second line of this equation.
You are going to see now the notation has been changed to use each unique word in the vocabulary in the product, instead of each particular position in the document. So from x sub i to w is a change of notation and this change allows us to show the estimation formulas more easily. And you have seen this change also in the topic model presentation, but it's basically still just a product of the probabilities of all the words.", + "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" }, { - "4:10": "And so with the likelihood function, now we can talk about how to do parameter estimation. Here we can simply use the maximum likelihood estimator. So that's just a standard way of doing things. So all should be familiar to you now. It's just a different model. So after we have estimated parameters, how can we then allocate clusters to the documents? Well, let's take a look at this situation more closely. So we just repeated the parameters here for this mixture model." + "time": "4:10", + "text": "And so with the likelihood function, now we can talk about how to do parameter estimation. Here we can simply use the maximum likelihood estimator. So that's just a standard way of doing things. So all should be familiar to you now. It's just a different model. So after we have estimated parameters, how can we then allocate clusters to the documents? Well, let's take a look at this situation more closely. So we just repeated the parameters here for this mixture model.", + "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" }, { - "4:43": "Now if you think about what we can get by estimating such a model, we can actually get more information than what we need for doing clustering, right? So theta i for example, represents the content of cluster i, this is actually a by-product, it can help us summarize what the cluster is about.
If you look at the top terms in this cluster or in this word distribution, they will tell us what the cluster is about.", + "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" }, { - "5:11": "p of theta i can be interpreted as indicating the size of the cluster because it tells us how likely the cluster would be used to generate the document. The more likely a cluster is used to generate a document, we can assume the larger the cluster size is." + "time": "5:11", + "text": "p of theta i can be interpreted as indicating the size of the cluster because it tells us how likely the cluster would be used to generate the document. The more likely a cluster is used to generate a document, we can assume the larger the cluster size is.", + "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" }, { - "5:30": "Note that, unlike in PLSA, this probability of theta i is not dependent on d." + "time": "5:30", + "text": "Note that, unlike in PLSA, this probability of theta i is not dependent on d.", + "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" }, { - "5:37": "Now you may recall that the topic choice for each document actually depends on d. That means each document can have a potentially different choice of topics, but here we have a generic choice probability for all the documents. But of course, even for a particular document, we still have to infer which topic is more likely to generate the document. So in that sense, we can still have a document dependent probability of clusters." + "time": "5:37", + "text": "Now you may recall that the topic choice for each document actually depends on d. That means each document can have a potentially different choice of topics, but here we have a generic choice probability for all the documents.
But of course, even for a particular document, we still have to infer which topic is more likely to generate the document. So in that sense, we can still have a document dependent probability of clusters.", + "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" }, { - "6:10": "So now let's look at the key problem of assigning documents to clusters or assigning clusters to documents." + "time": "6:10", + "text": "So now let's look at the key problem of assigning documents to clusters or assigning clusters to documents.", + "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" }, { - "6:17": "So that's to compute c sub d here and this will take one of the values in the range of one through k to indicate which cluster should be assigned to d." + "time": "6:17", + "text": "So that's to compute c sub d here and this will take one of the values in the range of one through k to indicate which cluster should be assigned to d.", + "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" }, { - "6:28": "Now first you might think about a way to use likelihood only, and that is to assign d to the cluster corresponding to the topic of theta i, that most likely has been used to generate d."
+ "time": "6:28", + "text": "Now first you might think about a way to use likelihood only, and that is to assign d to the cluster corresponding to the topic of theta i, that most likely has been used to generate d.", + "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" }, { - "6:42": "So that means we're going to choose one of those distributions that gives d the highest probability. In other words, we see which distribution has the content that matches our d at the [INAUDIBLE]. Intuitively that makes sense, however, this approach does not consider the size of clusters, which is also available to us, and so a better way is to use the likelihood together with the prior, in this case the prior is p of theta i. And together, that is, we're going to use the Bayes formula to compute the posterior probability of theta, given d." + "time": "6:42", + "text": "So that means we're going to choose one of those distributions that gives d the highest probability. In other words, we see which distribution has the content that matches our d at the [INAUDIBLE]. Intuitively that makes sense, however, this approach does not consider the size of clusters, which is also available to us, and so a better way is to use the likelihood together with the prior, in this case the prior is p of theta i. And together, that is, we're going to use the Bayes formula to compute the posterior probability of theta, given d.", + "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" }, { - "7:25": "And if we choose theta based on this posterior probability, we would have the following formula that you see here on the bottom of this slide. And in this case, we're going to choose the theta that has a large P of theta i, that means a large cluster and also a high probability of generating d.
So we're going to favor a cluster that's large and also consistent with the document. And that intuitively makes sense, because the chance of a document being in a large cluster is generally higher than in a small cluster." + "time": "7:25", + "text": "And if we choose theta based on this posterior probability, we would have the following formula that you see here on the bottom of this slide. And in this case, we're going to choose the theta that has a large P of theta i, that means a large cluster, and also a high probability of generating d. So we're going to favor a cluster that's large and also consistent with the document. And that intuitively makes sense, because the chance of a document being in a large cluster is generally higher than in a small cluster.", + "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" }, { - "8:07": "So this means once we can estimate the parameters of the model, then we can easily solve the problem of document clustering. So next, we'll have to discuss how to actually compute the estimate of the model. [MUSIC]" + "time": "8:07", + "text": "So this means once we can estimate the parameters of the model, then we can easily solve the problem of document clustering. So next, we'll have to discuss how to actually compute the estimate of the model. [MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" } ] }, { "4-4-text-clustering-generative-probabilistic-models-part-3": [ { - "0:00": "[SOUND] This lecture is a continuing discussion of generative probabilistic models for text clustering."
+ "time": "0:00", + "text": "[SOUND] This lecture is a continuing discussion of generative probabilistic models for text clustering.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" }, { - "0:14": "In this lecture, we're going to finish the discussion of generative probabilistic models for text clustering." + "time": "0:14", + "text": "In this lecture, we're going to finish the discussion of generative probabilistic models for text clustering.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" }, { - "0:21": "So this is a slide that you have seen before, and here we show how we define the mixture model for text clustering and what the likelihood function looks like. And we can also compute the maximum likelihood estimate, to estimate the parameters. In this lecture, we're going to talk more about how exactly we're going to compute the maximum likelihood estimate. As in most cases, the EM algorithm can be used to solve this problem for mixture models.
So here's the detail of the EM algorithm for document clustering. Now, if you have understood how the EM algorithm works for topic models like PLSA, then I think this would be very similar. And we just need to adapt a little bit to this new mixture model. So as you may recall, the EM algorithm starts with initialization of all the parameters. So this is the same as what happened before for topic models.", + "time": "0:21", + "text": "So this is a slide that you have seen before, and here we show how we define the mixture model for text clustering and what the likelihood function looks like. And we can also compute the maximum likelihood estimate, to estimate the parameters. In this lecture, we're going to talk more about how exactly we're going to compute the maximum likelihood estimate. As in most cases, the EM algorithm can be used to solve this problem for mixture models. So here's the detail of the EM algorithm for document clustering. Now, if you have understood how the EM algorithm works for topic models like PLSA, then I think this would be very similar. And we just need to adapt a little bit to this new mixture model. So as you may recall, the EM algorithm starts with initialization of all the parameters. So this is the same as what happened before for topic models.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" }, { - "1:28": "And then we're going to repeat until the likelihood converges, and in each step we'll do an E step and an M step. In the E step, we're going to infer which distribution has been used to generate each document. So we have to introduce a hidden variable Zd for each document, and this variable could take a value from the range of 1 through k, representing the k different distributions." + "time": "1:28", + "text": "And then we're going to repeat until the likelihood converges, and in each step we'll do an E step and an M step. In the E step, we're going to infer which distribution has been used to generate each document. So we have to introduce a hidden variable Zd for each document, and this variable could take a value from the range of 1 through k, representing the k different distributions.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" }, { - "1:59": "More specifically, we're going to apply Bayes' rule to infer which distribution is more likely to have generated this document, that is, to compute the posterior probability of the distribution given the document."
+ "time": "1:59", + "text": "More specifically, we're going to apply Bayes' rule to infer which distribution is more likely to have generated this document, that is, to compute the posterior probability of the distribution given the document.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" }, { - "2:17": "And we know it's proportional to the probability of selecting this distribution, p of Z equals i, and the probability of generating this whole document from the distribution, which is the product of the probabilities of the words in this document, as you see here. Now, as in all cases of applying Bayes' rule, you need to remember" + "time": "2:17", + "text": "And we know it's proportional to the probability of selecting this distribution, p of Z equals i, and the probability of generating this whole document from the distribution, which is the product of the probabilities of the words in this document, as you see here. Now, as in all cases of applying Bayes' rule, you need to remember", + "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" }, { - "2:45": "the normalizer, or the constraint on this probability. So in this case, we know the constraint on this probability in the E-Step is that all the probabilities of Z equals i must sum to 1.
Because the document must have been generated from precisely one of these k topics. So the probability of being generated from each of them should sum to 1. And if you know this constraint, then you can easily compute this distribution as long as you know what it is proportional to. So once you compute this product that you see here, then you simply normalize", + "time": "2:45", + "text": "the normalizer, or the constraint on this probability. So in this case, we know the constraint on this probability in the E-Step is that all the probabilities of Z equals i must sum to 1. Because the document must have been generated from precisely one of these k topics. So the probability of being generated from each of them should sum to 1. And if you know this constraint, then you can easily compute this distribution as long as you know what it is proportional to. So once you compute this product that you see here, then you simply normalize", + "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" }, { - "3:31": "these probabilities, to make them sum to 1 over all the topics. So that's the E-Step. After the E-Step, we want to know which distribution is more likely to have generated this document d, and which is unlikely." + "time": "3:31", + "text": "these probabilities, to make them sum to 1 over all the topics. So that's the E-Step. After the E-Step, we want to know which distribution is more likely to have generated this document d, and which is unlikely.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" }, { - "3:45": "And then in the M-Step, we're going to re-estimate all the parameters based on the inferred z values, or the inferred knowledge about which distribution has been used to generate which document.
So the re-estimation involves two kinds of parameters: one is p of theta, and this is the probability of selecting a particular distribution. Before we observe anything, we don't have any knowledge about which cluster is more likely. But after we have observed these documents, then we can collect the evidence to infer which cluster is more likely. And so this is proportional to the sum of the probability of Z sub d j is equal to i.", + "time": "3:45", + "text": "And then in the M-Step, we're going to re-estimate all the parameters based on the inferred z values, or the inferred knowledge about which distribution has been used to generate which document. So the re-estimation involves two kinds of parameters: one is p of theta, and this is the probability of selecting a particular distribution. Before we observe anything, we don't have any knowledge about which cluster is more likely. But after we have observed these documents, then we can collect the evidence to infer which cluster is more likely. And so this is proportional to the sum of the probability of Z sub d j is equal to i.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" }, { - "4:34": "And so this gives us all the evidence about using topic i, theta i, to generate a document. Pull them together and again, we normalize them into probabilities." + "time": "4:34", + "text": "And so this gives us all the evidence about using topic i, theta i, to generate a document. Pull them together and again, we normalize them into probabilities.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" }, { - "4:50": "So this is for p of theta sub i." + "time": "4:50", + "text": "So this is for p of theta sub i.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" }, { - "4:54": "Now the other kind of parameters are the probabilities of words in each distribution, in each cluster. And this is very similar to the case of PLSA, and here we just pool together the counts of words that are in documents that are inferred to have been generated from a particular topic theta i here. This allows us to then estimate how many words have actually been generated from theta i. And then we'll again normalize these counts into probabilities, so that the probabilities of all the words would sum to one. Note that it's very important to understand these constraints, as they are precisely the normalizers in all these formulas.
And it's also important to know what the distribution is over." + "time": "4:54", + "text": "Now the other kind of parameters are the probabilities of words in each distribution, in each cluster. And this is very similar to the case of PLSA, and here we just pool together the counts of words that are in documents that are inferred to have been generated from a particular topic theta i here. This allows us to then estimate how many words have actually been generated from theta i. And then we'll again normalize these counts into probabilities, so that the probabilities of all the words would sum to one. Note that it's very important to understand these constraints, as they are precisely the normalizers in all these formulas. And it's also important to know what the distribution is over.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" }, { - "5:54": "For example, the probability of theta is over all the k topics, that's why these k probabilities will sum to 1. Whereas the probability of a word given theta is a probability distribution over all the words.
So there are many probabilities and they have to sum to 1. So now, let's take a look at a simple example of two clusters. With two clusters, I've assumed some initial values for the two distributions. And let's assume we randomly initialize the probability of selecting each cluster as 0.5, so equally likely. And then let's consider one document that you have seen here. There are two occurrences of text and two occurrences of mining. So there are four words together, and medical and health did not occur in this document. So let's think about the hidden variable." + "time": "5:54", + "text": "For example, the probability of theta is over all the k topics, that's why these k probabilities will sum to 1. Whereas the probability of a word given theta is a probability distribution over all the words. So there are many probabilities and they have to sum to 1. So now, let's take a look at a simple example of two clusters. With two clusters, I've assumed some initial values for the two distributions. And let's assume we randomly initialize the probability of selecting each cluster as 0.5, so equally likely. And then let's consider one document that you have seen here. There are two occurrences of text and two occurrences of mining. So there are four words together, and medical and health did not occur in this document. So let's think about the hidden variable.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" }, { - "6:50": "Now for each document we must use a hidden variable. And before, in PLSA, we used one hidden variable for each word, because that's the output from the mixture model. So in our case the output from the mixture model, or the observation from the mixture model, is a document, not a word. So now we have one hidden variable attached to the document. Now that hidden variable must tell us which distribution has been used to generate the document. So it's going to take two values, one and two, to indicate the two topics." + "time": "6:50", + "text": "Now for each document we must use a hidden variable. And before, in PLSA, we used one hidden variable for each word, because that's the output from the mixture model. So in our case the output from the mixture model, or the observation from the mixture model, is a document, not a word. So now we have one hidden variable attached to the document. Now that hidden variable must tell us which distribution has been used to generate the document. So it's going to take two values, one and two, to indicate the two topics.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" }, { - "7:25": "So now how do we infer which distribution has been used to generate d?
Well, we can use Bayes' rule, so it looks like this. In order for the first topic theta 1 to generate a document, two things must happen. First, theta sub 1 must have been selected. So it's given by p of theta 1. Second, it must also have generated the four words in the document. Namely, two occurrences of text and two occurrences of mining. And that's why you see the numerator has the product of the probability of selecting theta 1 and the probability of generating the document from theta 1. So the denominator is just the sum of the two possibilities of generating this document. And you can plug in the numerical values to verify that indeed in this case, the document is more likely to be generated from theta 1, much more likely than from theta 2. So once we have this probability, we can easily compute the probability of Z equals 2, given this document. How? Well, we can use the constraint. That's going to be 1 minus 100 over 101. So now it's important to note that in such a computation there is a potential problem of underflow. And that is because if you look at the original numerator and the denominator, it involves the computation of a product of many small probabilities. Imagine if a document has many words; it's going to be a very small value here that can cause the problem of underflow. So to solve the problem, we can use a normalizer. So here you see that we take an average of these two word distributions to compute an average distribution, called theta bar here." + "time": "7:25", + "text": "So now how do we infer which distribution has been used to generate d? Well, we can use Bayes' rule, so it looks like this. In order for the first topic theta 1 to generate a document, two things must happen. First, theta sub 1 must have been selected. So it's given by p of theta 1. Second, it must also have generated the four words in the document. Namely, two occurrences of text and two occurrences of mining.
And that's why you see the numerator has the product of the probability of selecting theta 1 and the probability of generating the document from theta 1. So the denominator is just the sum of the two possibilities of generating this document. And you can plug in the numerical values to verify that indeed in this case, the document is more likely to be generated from theta 1, much more likely than from theta 2. So once we have this probability, we can easily compute the probability of Z equals 2, given this document. How? Well, we can use the constraint. That's going to be 1 minus 100 over 101. So now it's important to note that in such a computation there is a potential problem of underflow. And that is because if you look at the original numerator and the denominator, it involves the computation of a product of many small probabilities. Imagine if a document has many words; it's going to be a very small value here that can cause the problem of underflow. So to solve the problem, we can use a normalizer. So here you see that we take an average of these two word distributions to compute an average distribution, called theta bar here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" }, { - "9:24": "And this average distribution would be comparable to each of these distributions in terms of the quantities or the magnitude. So we can then divide the numerator and the denominator both by this normalizer. So basically this normalizes the probability of generating this document by using this average word distribution. So you can see the normalizer is here. And since we have used exactly the same normalizer for the numerator and the denominator, the whole value of this expression is not changed, but by doing this normalization you can see we can make the numerators and the denominators more manageable in that the overall value is not going to be very small for each.
And thus we can avoid the underflow problem." + "time": "9:24", + "text": "And this average distribution would be comparable to each of these distributions in terms of the quantities or the magnitude. So we can then divide the numerator and the denominator both by this normalizer. So basically this normalizes the probability of generating this document by using this average word distribution. So you can see the normalizer is here. And since we have used exactly the same normalizer for the numerator and the denominator, the whole value of this expression is not changed, but by doing this normalization you can see we can make the numerators and the denominators more manageable in that the overall value is not going to be very small for each. And thus we can avoid the underflow problem.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" }, { - "10:24": "Sometimes we also use the logarithm of the product to convert this into a sum of log probabilities. This can help preserve precision as well, but in this case we cannot use the logarithm to solve the problem, because there is a sum in the denominator; but this kind of normalizer can be effective for solving this problem. So it's a technique that's sometimes useful in other situations as well. Now let's look at the M-Step. So from the E-Step we can see our estimate of which distribution is more likely to have generated a document d. And you can see d1 is more likely generated from the first topic, whereas d2 is more likely from the second topic, etc. Now, let's think about what we need to compute in the M-step. Well, basically we need to re-estimate all the parameters. First, look at p of theta 1 and p of theta 2. How do we estimate that? Intuitively you can just pool together these z probabilities from the E-step.
So if all of these documents say, well, they're more likely from theta 1, then we intuitively would give a higher probability to theta 1. In this case, we can just take an average of these probabilities that you see here, and we obtain 0.6 for theta 1. So theta 1 is more likely than theta 2. So you can see the probability of theta 2 would naturally be 0.4. What about these word probabilities? Well, we do the same, and the intuition is the same. So we're going to see, in order to estimate the probabilities of words in theta 1, we're going to look at which documents have been generated from theta 1. And we're going to pull together the words in those documents and normalize them. So this is basically what I just said." + "time": "10:24", + "text": "Sometimes we also use the logarithm of the product to convert this into a sum of log probabilities. This can help preserve precision as well, but in this case we cannot use the logarithm to solve the problem, because there is a sum in the denominator; but this kind of normalizer can be effective for solving this problem. So it's a technique that's sometimes useful in other situations as well. Now let's look at the M-Step. So from the E-Step we can see our estimate of which distribution is more likely to have generated a document d. And you can see d1 is more likely generated from the first topic, whereas d2 is more likely from the second topic, etc. Now, let's think about what we need to compute in the M-step. Well, basically we need to re-estimate all the parameters. First, look at p of theta 1 and p of theta 2. How do we estimate that? Intuitively you can just pool together these z probabilities from the E-step. So if all of these documents say, well, they're more likely from theta 1, then we intuitively would give a higher probability to theta 1. In this case, we can just take an average of these probabilities that you see here, and we obtain 0.6 for theta 1. So theta 1 is more likely than theta 2.
So you can see the probability of theta 2 would naturally be 0.4. What about these word probabilities? Well, we do the same, and the intuition is the same. So we're going to see, in order to estimate the probabilities of words in theta 1, we're going to look at which documents have been generated from theta 1. And we're going to pull together the words in those documents and normalize them. So this is basically what I just said.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" }, { - "12:20": "More specifically, we're going to, for example, use all the counts of text in these documents for estimating the probability of text given theta 1. But we're not going to use their raw counts or total counts. Instead, we can discount them by the probability that each document has been generated from theta 1. So this gives us some fractional counts. And then these counts would be normalized in order to get the probability. Now, how do we normalize them? Well, the probabilities of these words must sum to 1. So to summarize our discussion of generative models for clustering. Well, we showed that a slight variation of the topic model can be used for clustering documents. And this also shows the power of generative models in general. By changing the generation assumption and changing the model slightly, we can achieve different goals, and we can capture different patterns and types of data. So in this case, each cluster is represented by a unigram language model, a word distribution, and that is similar to a topic model. So here you can see the word distribution actually generates a term cluster as a by-product. A document is generated by first choosing a unigram language model, and then generating all the words in the document using just a single language model.
And this is very different from, again, a topic model, where we can generate the words in the document by using multiple unigram language models." + "time": "12:20", + "text": "More specifically, we're going to, for example, use all the counts of text in these documents for estimating the probability of text given theta 1. But we're not going to use their raw counts or total counts. Instead, we can discount them by the probability that each document has been generated from theta 1. So this gives us some fractional counts. And then these counts would be normalized in order to get the probability. Now, how do we normalize them? Well, the probabilities of these words must sum to 1. So to summarize our discussion of generative models for clustering. Well, we showed that a slight variation of the topic model can be used for clustering documents. And this also shows the power of generative models in general. By changing the generation assumption and changing the model slightly, we can achieve different goals, and we can capture different patterns and types of data. So in this case, each cluster is represented by a unigram language model, a word distribution, and that is similar to a topic model. So here you can see the word distribution actually generates a term cluster as a by-product. A document is generated by first choosing a unigram language model, and then generating all the words in the document using just a single language model. And this is very different from, again, a topic model, where we can generate the words in the document by using multiple unigram language models.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" }, { - "13:56": "And then the estimated model parameters give us both a topic characterization of each cluster and a probabilistic assignment of each document into a cluster."
+ "time": "13:56", + "text": "And then the estimated model parameters give us both a topic characterization of each cluster and a probabilistic assignment of each document into a cluster.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" }, { - "14:07": "And this probabilistic assignment is sometimes useful for some applications. But if we want to achieve hard clustering, namely to" + "time": "14:07", + "text": "And this probabilistic assignment is sometimes useful for some applications. But if we want to achieve hard clustering, namely to", + "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" }, { - "14:16": "partition documents into disjoint clusters, then we can just force a document into the cluster corresponding to the word distribution that's most likely to have generated the document. We've also shown that the EM algorithm can be used to compute the maximum likelihood estimate. And in this case, we need to use a special normalization technique to avoid underflow. [MUSIC]" + "time": "14:16", + "text": "partition documents into disjoint clusters, then we can just force a document into the cluster corresponding to the word distribution that's most likely to have generated the document. We've also shown that the EM algorithm can be used to compute the maximum likelihood estimate. And in this case, we need to use a special normalization technique to avoid underflow. [MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" } ] }, { "4-5-text-clustering-similarity-based-approaches": [ { - "0:00": "[MUSIC] This lecture is about the similarity-based approaches to text clustering."
+ "time": "0:00", + "text": "[MUSIC] This lecture is about the similarity-based approaches to text clustering.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" }, { - "0:13": "In this lecture we're going to continue the discussion of how to do text clustering." + "time": "0:13", + "text": "In this lecture we're going to continue the discussion of how to do text clustering.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" }, { - "0:18": "In particular, we're going to cover a different kind of approach from generative models, and that is, similarity-based approaches. So the general idea of similarity-based clustering is to explicitly specify a similarity function to measure the similarity between two text objects. Now this is in contrast with a generative model, where we implicitly define the clustering bias by using a particular objective function like a [INAUDIBLE] function." + "time": "0:18", + "text": "In particular, we're going to cover a different kind of approach from generative models, and that is, similarity-based approaches. So the general idea of similarity-based clustering is to explicitly specify a similarity function to measure the similarity between two text objects. Now this is in contrast with a generative model, where we implicitly define the clustering bias by using a particular objective function like a [INAUDIBLE] function.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" }, { - "0:52": "The whole process is driven by optimizing the [INAUDIBLE], but here we explicitly provide a view of what we think is similar. And this is often very useful, because then it allows us to inject any particular view of similarity into the clustering program.
So once we have a similarity function, we can then aim at optimally partitioning the data into clusters or into different groups, and try to maximize the intra-group similarity and minimize the inter-group similarity. That is, to ensure that the objects that are put into the same group are similar, but the objects that are put into different groups are not similar. And these are the general goals of clustering, and there is often a trade off between achieving both goals. Now there are many different methods for doing similarity-based clustering, and in general I think we can distinguish two strategies at a high level. One is to progressively construct a hierarchy of clusters, and so this often leads to hierarchical clustering. And we can further distinguish two ways to construct the hierarchy, depending on whether we start with the whole collection and divide it, or start with individual objects and gradually group them together. So one is bottom-up, which can be called agglomerative, where we gradually group similar objects into larger and larger clusters until we group everything together. The other is top-down or divisive; in this case we gradually partition the whole data set into smaller and smaller clusters. The other general strategy is to start with an initial tentative clustering and then iteratively improve it. And this often leads to a flat clustering; one example is k-Means. So as I just said, there are many different clustering methods available, and a full coverage of all the clustering methods would be beyond the scope of this course. But here we are going to talk about two representative methods in some detail" + "time": "0:52", + "text": "The whole process is driven by optimizing the [INAUDIBLE], but here we explicitly provide a view of what we think is similar. And this is often very useful, because then it allows us to inject any particular view of similarity into the clustering program.
So once we have a similarity function, we can then aim at optimally partitioning the data into clusters or into different groups, and try to maximize the intra-group similarity and minimize the inter-group similarity. That is to ensure that objects put into the same group are similar, but objects put into different groups are not similar. And these are the general goals of clustering, and there is often a trade-off between achieving both goals. Now there are many different methods for doing similarity-based clustering, and in general I think we can distinguish two strategies at a high level. One is to progressively construct a hierarchy of clusters, and this often leads to hierarchical clustering. And we can further distinguish two ways to construct a hierarchy, depending on whether we start with the whole collection and divide the collection, or start with individual objects and gradually group them together. So one is bottom-up, which can be called agglomerative, where we gradually group similar objects into larger and larger clusters until we group everything together. The other is top-down or divisive; in this case we gradually partition the whole data set into smaller and smaller clusters. The other general strategy is to start with an initial tentative clustering and then iteratively improve it. And this often leads to a flat clustering; one example is k-Means. So as I just said, there are many different clustering methods available, and a full coverage of all the clustering methods would be beyond the scope of this course. But here we are going to talk about two representative methods in some detail", + "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" }, { - "3:14": "one is Hierarchical Agglomerative Clustering or HAC, the other is k-Means.
So first of it we'll get the agglomerative hierarchical clustering, in this case, we're given a similarity function to measure similarity between two objects. And then we can gradually group similar objects together in a bottom-up fashion to form larger and larger groups. And they always form a hierarchy, and then we can stop when some stopping criterion is met. It could be either some number of clusters has been achieved or the threshold for similarity has been reached." + "time": "3:14", + "text": "one is Hierarchical Agglomerative Clustering or HAC, the other is k-Means. So first let's look at agglomerative hierarchical clustering. In this case, we're given a similarity function to measure the similarity between two objects, and then we can gradually group similar objects together in a bottom-up fashion to form larger and larger groups, which always form a hierarchy. We can then stop when some stopping criterion is met; it could be either that some number of clusters has been achieved or that a threshold for similarity has been reached.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" }, { - "3:52": "There are different variations here, and they mainly differ in the ways to compute a group similarity. Based on the individual objects similarity, so let's illustrate how again induced a structure based on just similarity.
So start with all the text objects and we can then measure the similarity between them. Of course based on the provided similarity function, and then we can see which pair has the highest similarity. And then just group them together, and then we're going to see which pair is the next one to group.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" }, { - "4:30": "Maybe these two now have the highest similarity, and then we're going to gradually group them together. And then every time we're going to pick the highest similarity, the similarity of pairs to group." + "time": "4:30", + "text": "Maybe these two now have the highest similarity, and then we're going to gradually group them together. And then every time we're going to pick the highest similarity, the similarity of pairs to group.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" }, { - "4:45": "This will give us a binary tree eventually to group everything together." + "time": "4:45", + "text": "This will give us a binary tree eventually to group everything together.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" }, { - "4:50": "Now, depending on our applications, we can use the whole hierarchy as a structure for browsing, for example. Or we can choose a cutoff, let's say cut here to get four clusters, or we can use a threshold to cut. Or we can cut at this high level to get just two clusters, so this is a general idea, now if you think about how to implement this algorithm. You'll realize that we have everything specified except for how to compute group similarity." + "time": "4:50", + "text": "Now, depending on our applications, we can use the whole hierarchy as a structure for browsing, for example. Or we can choose a cutoff, let's say cut here to get four clusters, or we can use a threshold to cut. 
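The bottom-up grouping just described — repeatedly find the most similar pair, merge it, and continue until a stopping criterion (such as a target number of clusters) is met — can be sketched in a few lines of Python. Everything below is an illustrative assumption rather than course code: the toy one-dimensional points, the negative-distance similarity function, and the use of the closest cross-group pair (single-link, one of several possible group-similarity choices discussed next) inside the merge loop.

```python
from itertools import combinations

def hac(objects, sim, k):
    """Agglomerative clustering sketch: repeatedly merge the two most
    similar clusters until only k clusters remain."""
    clusters = [[o] for o in objects]  # start: every object is its own cluster
    while len(clusters) > k:
        # pick the pair of clusters with the highest group similarity;
        # here group similarity = closest pair (a single-link choice)
        i, j = max(combinations(range(len(clusters)), 2),
                   key=lambda p: max(sim(a, b)
                                     for a in clusters[p[0]]
                                     for b in clusters[p[1]]))
        clusters[i] = clusters[i] + clusters[j]  # merge cluster j into cluster i
        del clusters[j]
    return clusters

# Toy 1-D "vectors"; similarity = negative distance (higher means more similar)
points = [0.0, 0.1, 0.2, 5.0, 5.1, 9.0]
result = hac(points, lambda a, b: -abs(a - b), k=3)
print(sorted(sorted(c) for c in result))  # [[0.0, 0.1, 0.2], [5.0, 5.1], [9.0]]
```

Recording the order of merges instead of stopping at k would yield the binary tree (dendrogram) mentioned in the transcript, which can then be cut at any level.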
Or we can cut at this high level to get just two clusters, so this is a general idea, now if you think about how to implement this algorithm. You'll realize that we have everything specified except for how to compute group similarity.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" }, { - "5:24": "We are only given the similarity function of two objects, but as we group groups together, we also need to assess the similarity between two groups. There are also different ways to do that and there are the three popular methods. Single-link, complete-link, and average-link, so given two groups and the single-link algorithm. Is going to define the group similarity as the similarity of the closest pair of the two groups. Complete-link defines the similarity of the two groups as the similarity of the farthest system pair. Average-link defines the similarity as average of similarity of all the pairs of the two groups. So it's much easier to understand the methods by illustrating them, so here are two groups, g1 and g2 with some objects in each group. And we know how to compute the similarity between two objects, but the question now is, how can we compute the similarity between the two groups?" + "time": "5:24", + "text": "We are only given the similarity function of two objects, but as we group groups together, we also need to assess the similarity between two groups. There are also different ways to do that and there are the three popular methods. Single-link, complete-link, and average-link, so given two groups and the single-link algorithm. Is going to define the group similarity as the similarity of the closest pair of the two groups. Complete-link defines the similarity of the two groups as the similarity of the farthest system pair. Average-link defines the similarity as average of similarity of all the pairs of the two groups. 
So it's much easier to understand the methods by illustrating them, so here are two groups, g1 and g2 with some objects in each group. And we know how to compute the similarity between two objects, but the question now is, how can we compute the similarity between the two groups?" + "time": "5:24", + "text": "We are only given the similarity function of two objects, but as we group groups together, we also need to assess the similarity between two groups. There are different ways to do that, and there are three popular methods: single-link, complete-link, and average-link. Given two groups, the single-link algorithm is going to define the group similarity as the similarity of the closest pair of the two groups. Complete-link defines the similarity of the two groups as the similarity of the farthest pair. Average-link defines the similarity as the average similarity of all the pairs of the two groups. So it's much easier to understand the methods by illustrating them, so here are two groups, g1 and g2 with some objects in each group. And we know how to compute the similarity between two objects, but the question now is, how can we compute the similarity between the two groups?", + "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" }, { - "6:29": "And then we can in general base this on the similarities of the objects in the two groups." + "time": "6:29", + "text": "And then we can in general base this on the similarities of the objects in the two groups.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" }, { - "6:35": "So, in terms of single-link and we're just looking at the closest pair so in this case, these two paired objects will defined the similarities of the two groups." + "time": "6:35", + "text": "So, in terms of single-link, we're just looking at the closest pair, so in this case these two paired objects will define the similarity of the two groups.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" }, { - "6:47": "As long as they are very close, we're going to say the two groups are very" + "time": "6:47", + "text": "As long as they are very close, we're going to say the two groups are very", + "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" }, { - "6:51": "close so it is an optimistic view of similarity." + "time": "6:51", + "text": "close so it is an optimistic view of similarity.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" }, { - "6:57": "The complete link on the other hand were in some sense pessimistic, and by taking the similarity of the two farthest pair as the similarity for the two groups.
So we are going to make sure that if the two groups are having a high similarity. Then every pair of the two groups, or the objects in the two groups will have, will be ensured to have high similarity." + "time": "6:57", + "text": "The complete link on the other hand were in some sense pessimistic, and by taking the similarity of the two farthest pair as the similarity for the two groups. So we are going to make sure that if the two groups are having a high similarity. Then every pair of the two groups, or the objects in the two groups will have, will be ensured to have high similarity.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" }, { - "7:29": "Now average link is in between, so it takes the average of all these pairs. Now these different ways of computing group similarities will lead to different clustering algorithms. And they would in general give different results, so it's useful to take a look at their differences and to make a comparison." + "time": "7:29", + "text": "Now average link is in between, so it takes the average of all these pairs. Now these different ways of computing group similarities will lead to different clustering algorithms. And they would in general give different results, so it's useful to take a look at their differences and to make a comparison.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" }, { - "7:53": "First, single-link can be expected to generally the loose clusters, the reason is because as long as two objects are very similar in the two groups, it will bring the two groups together." 
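The three group-similarity definitions the lecture has just walked through — optimistic single-link (closest pair), pessimistic complete-link (farthest pair), and average-link in between — are mechanical enough to state directly in code. This is a minimal sketch; the toy one-dimensional points and the negative-distance similarity function are assumptions for illustration, not data from the course.

```python
def single_link(g1, g2, sim):
    """Optimistic: group similarity = similarity of the closest pair."""
    return max(sim(a, b) for a in g1 for b in g2)

def complete_link(g1, g2, sim):
    """Pessimistic: group similarity = similarity of the farthest pair."""
    return min(sim(a, b) for a in g1 for b in g2)

def average_link(g1, g2, sim):
    """In between: average similarity over all cross-group pairs."""
    pairs = [sim(a, b) for a in g1 for b in g2]
    return sum(pairs) / len(pairs)

# Toy example: 1-D points, similarity = negative absolute distance
sim = lambda a, b: -abs(a - b)
g1, g2 = [0.0, 1.0], [2.0, 4.0]
print(single_link(g1, g2, sim))    # closest pair (1.0, 2.0)  -> -1.0
print(complete_link(g1, g2, sim))  # farthest pair (0.0, 4.0) -> -4.0
print(average_link(g1, g2, sim))   # mean of -2, -4, -1, -3   -> -2.5
```

Swapping one of these functions into the merge loop of an agglomerative clusterer is what produces the loose versus tight cluster behavior the lecture compares next.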
+ "time": "7:53", + "text": "First, single-link can be expected to generally the loose clusters, the reason is because as long as two objects are very similar in the two groups, it will bring the two groups together.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" }, { - "8:09": "If you think about this as similar to having parties with people, then it just means two groups of people would be partying together. As long as in each group there is a person that" + "time": "8:09", + "text": "If you think about this as similar to having parties with people, then it just means two groups of people would be partying together. As long as in each group there is a person that", + "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" }, { - "8:27": "is well connected with the other group. So the two leaders of the two groups can have a good relationship with each other and then they will bring together the two groups. In this case, the cluster is loose, because there's no guarantee that other members of the two groups are actually very close to each other. Sometimes they may be very far away, now in this case it's also based on individual decisions, so it could be sensitive to outliers. The complete-link is in the opposite situation, where we can expect the clusters to be tight. And it's also based on individual decision so it can be sensitive to outliers. Again to continue the analogy to having a party of people, then complete-link would mean when two groups come together. They want to ensure that even" + "time": "8:27", + "text": "is well connected with the other group. So the two leaders of the two groups can have a good relationship with each other and then they will bring together the two groups. In this case, the cluster is loose, because there's no guarantee that other members of the two groups are actually very close to each other. 
Sometimes they may be very far away, now in this case it's also based on individual decisions, so it could be sensitive to outliers. The complete-link is in the opposite situation, where we can expect the clusters to be tight. And it's also based on individual decision so it can be sensitive to outliers. Again to continue the analogy to having a party of people, then complete-link would mean when two groups come together. They want to ensure that even", + "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" }, { - "9:21": "the people that are unlikely to talk to each other would be comfortable. Always talking to each other, so ensure the whole class to be coherent. The average link of clusters in between and as group decision, so it's" + "time": "9:21", + "text": "the people that are unlikely to talk to each other would be comfortable. Always talking to each other, so ensure the whole class to be coherent. The average link of clusters in between and as group decision, so it's", + "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" }, { - "9:37": "going to be insensitive to outliers, now in practice which one is the best. Well, this would depend on the application and sometimes you need a lose clusters. And aggressively cluster objects together that maybe single-link is good. But other times you might need a tight clusters and a complete-link might be better. But in general, you have to empirically evaluate these methods for your application to know which one is better." + "time": "9:37", + "text": "going to be insensitive to outliers, now in practice which one is the best. Well, this would depend on the application and sometimes you need a lose clusters. And aggressively cluster objects together that maybe single-link is good. But other times you might need a tight clusters and a complete-link might be better. 
But in general, you have to empirically evaluate these methods for your application to know which one is better.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" }, { - "10:07": "Now, next let's look at another example of a method for similarity-based clustering. In this case, which is called k-Means clustering, we will represent each text object as a term vector. And then assume a similarity function defined on two objects, now we're going to start with some tentative clustering results by just selecting k randomly. selected vectors as centroids of k clusters and treat them as centers as if they represent, they each represent a cluster. So this gives us the initial tentative cluster, then we're going to iteratively improve it. And the process goes like this, and once we have these centroids Decide. We're going to assign a vector to the cluster whose centroid is closest to the current vector. So basically we're going to measure the distance between this vector, and each of the centroids, and see which one is the closest to this one. And then just put this object into that cluster, this is to have tentative assignment of objects into clusters. And we're going to partition all the objects into k clusters based on our tentative clustering and centroids." + "time": "10:07", + "text": "Now, next let's look at another example of a method for similarity-based clustering. In this case, which is called k-Means clustering, we will represent each text object as a term vector. And then assume a similarity function defined on two objects, now we're going to start with some tentative clustering results by just selecting k randomly. selected vectors as centroids of k clusters and treat them as centers as if they represent, they each represent a cluster. So this gives us the initial tentative cluster, then we're going to iteratively improve it. 
And the process goes like this, and once we have these centroids Decide. We're going to assign a vector to the cluster whose centroid is closest to the current vector. So basically we're going to measure the distance between this vector, and each of the centroids, and see which one is the closest to this one. And then just put this object into that cluster, this is to have tentative assignment of objects into clusters. And we're going to partition all the objects into k clusters based on our tentative clustering and centroids.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" }, { - "11:28": "Then we can do re-compute the centroid based on the locate the object in each cluster. And this is to adjust the centroid, and then we can repeat this process until the similarity-based objective function. In this case, it's within cluster sum of squares converges, and theoretically we can show that. This process actually is going to minimize the within cluster sum of squares where define object and function. Given k clusters, so it can be also shown, this process will converge to a local minimum. I think about this process for a moment, it might remind you the Algorithm for mixture model." + "time": "11:28", + "text": "Then we can do re-compute the centroid based on the locate the object in each cluster. And this is to adjust the centroid, and then we can repeat this process until the similarity-based objective function. In this case, it's within cluster sum of squares converges, and theoretically we can show that. This process actually is going to minimize the within cluster sum of squares where define object and function. Given k clusters, so it can be also shown, this process will converge to a local minimum. 
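The k-Means procedure described here — pick initial centroids, hard-assign each vector to its closest centroid, re-compute each centroid as the mean of its assigned vectors, and repeat until the within-cluster sum of squares can no longer drop — can be sketched as follows. The one-dimensional toy data and the fixed (rather than random) initial centroids are assumptions chosen so the run is reproducible; they are not from the lecture.

```python
def kmeans(vectors, centroids, iters=100):
    """k-Means sketch: hard assignment to the closest centroid, then
    centroid re-estimation, repeated until the assignment stabilizes
    (at which point the within-cluster sum of squares has converged)."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for v in vectors:
            # hard assignment: each vector goes into exactly one cluster
            closest = min(range(len(centroids)),
                          key=lambda i: (v - centroids[i]) ** 2)
            clusters[closest].append(v)
        # re-compute each centroid as the mean of its assigned vectors
        new_centroids = [sum(c) / len(c) if c else centroids[i]
                         for i, c in enumerate(clusters)]
        if new_centroids == centroids:  # converged to a local minimum
            break
        centroids = new_centroids
    return centroids, clusters

data = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
centroids, clusters = kmeans(data, centroids=[0.0, 6.0])
print(centroids)  # [2.0, 11.0]
```

The hard assignment in the inner loop is exactly the contrast with the EM algorithm drawn in the following paragraphs: EM would split each point probabilistically across clusters, while k-Means commits each point to a single cluster.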
I think about this process for a moment, it might remind you the Algorithm for mixture model.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" }, { - "12:13": "Indeed this algorithm is very similar to the Algorithm for the mixture model for clustering. More specifically we also initialize these parameters in the Algorithm so the random initialization is similar." + "time": "12:13", + "text": "Indeed this algorithm is very similar to the Algorithm for the mixture model for clustering. More specifically we also initialize these parameters in the Algorithm so the random initialization is similar.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" }, { - "12:34": "And then in the Algorithm, you may recall that, we're going to repeat E-step and M-step to improve our parameter estimation. In this case, we're going to improve the clustering result iteratively by also doing two steps. And in fact that the two steps are very similar to Algorithm, in that when we locate the vector into one of the clusters based on our tentative clustering. It's very similar to inferring the distribution that has been used to generate the document, the mixture model. So it is essentially similar to E-step, so what's the difference, well the difference is here. We don't make a probabilistic allocation as in the case of E-step, the brother will make a choice. We're going to make a call if this, there upon this closest to cluster two, then we're going to say you are in cluster two. So there's no choice, and we're not going to say, you assume the set is belonging to a cluster two. And so we're not going to have a probability, but we're just going to put one object into precisely one cluster. In the E-step however, we do a probability location, so we split in counts. And we're not going to say exactly which distribution has been used to generate a data point. 
Now next, we're going to adjust the centroid, and this is very similar to M-step where we re-estimate the parameters. That's when we'll have a better estimate of the parameter, so here we'll have a better clustering result by adjusting the centroid. And note that centroid is based on the average of the vectors in the cluster. So this is also similar to the M-step where we do counts,pull together counts and then normalize them. The difference of course is also because of the difference in the E-step, and we're not going to consider probabilities when we count the points. In this case, k-Means we're going to all make count of the objects as allocated to this cluster. And this is only a subset of data points, but in the Algorithm, we in principle consider all the data points based on probabilistic allocations." + "time": "12:34", + "text": "And then in the Algorithm, you may recall that, we're going to repeat E-step and M-step to improve our parameter estimation. In this case, we're going to improve the clustering result iteratively by also doing two steps. And in fact that the two steps are very similar to Algorithm, in that when we locate the vector into one of the clusters based on our tentative clustering. It's very similar to inferring the distribution that has been used to generate the document, the mixture model. So it is essentially similar to E-step, so what's the difference, well the difference is here. We don't make a probabilistic allocation as in the case of E-step, the brother will make a choice. We're going to make a call if this, there upon this closest to cluster two, then we're going to say you are in cluster two. So there's no choice, and we're not going to say, you assume the set is belonging to a cluster two. And so we're not going to have a probability, but we're just going to put one object into precisely one cluster. In the E-step however, we do a probability location, so we split in counts. 
And we're not going to say exactly which distribution has been used to generate a data point. Now next, we're going to adjust the centroid, and this is very similar to M-step where we re-estimate the parameters. That's when we'll have a better estimate of the parameter, so here we'll have a better clustering result by adjusting the centroid. And note that centroid is based on the average of the vectors in the cluster. So this is also similar to the M-step where we do counts,pull together counts and then normalize them. The difference of course is also because of the difference in the E-step, and we're not going to consider probabilities when we count the points. In this case, k-Means we're going to all make count of the objects as allocated to this cluster. And this is only a subset of data points, but in the Algorithm, we in principle consider all the data points based on probabilistic allocations." + "time": "12:34", + "text": "And then in the EM algorithm, you may recall, we're going to repeat the E-step and M-step to improve our parameter estimation. In this case, we're going to improve the clustering result iteratively by also doing two steps. And in fact the two steps are very similar to the EM algorithm, in that when we allocate the vector into one of the clusters based on our tentative clustering, it's very similar to inferring the distribution that has been used to generate the document in the mixture model. So it is essentially similar to the E-step. So what's the difference? Well, the difference is here: we don't make a probabilistic allocation as in the case of the E-step, but rather we make a choice. We're going to make a call: if this data point is closest to cluster two, then we're going to say you are in cluster two. So there's no probabilistic choice; we're not going to say you are assumed to partially belong to cluster two. And so we're not going to have a probability, but we're just going to put one object into precisely one cluster. In the E-step however, we do a probabilistic allocation, so we split the counts, and we're not going to say exactly which distribution has been used to generate a data point. Now next, we're going to adjust the centroid, and this is very similar to the M-step where we re-estimate the parameters. That's when we'll have a better estimate of the parameters; so here we'll have a better clustering result by adjusting the centroid. And note that the centroid is based on the average of the vectors in the cluster. So this is also similar to the M-step, where we do counts, pull together counts and then normalize them. The difference of course is also because of the difference in the E-step: we're not going to consider probabilities when we count the points. In the case of k-Means, we're only going to count the objects allocated to this cluster, and this is only a subset of data points, but in the EM algorithm, we in principle consider all the data points based on probabilistic allocations.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" }, { - "14:56": "But in nature they are very similar and that's why it's also maximizing well defined object of functions. And it's guaranteed to convert local minimum, so to summarize our discussion of clustering methods. We first discussed model based approaches, mainly the mixture model.
Here we use the implicit similarity function to define the clustering bias. There is no explicit define similarity function, the model defines clustering bias and the clustering structure is built into a generative model. That's why we can use potentially a different model to recover different structure." + "time": "14:56", + "text": "But in nature they are very similar, and that's why k-Means is also optimizing a well-defined objective function and is guaranteed to converge to a local minimum. So, to summarize our discussion of clustering methods: we first discussed model-based approaches, mainly the mixture model. Here we use an implicit similarity function to define the clustering bias. There is no explicitly defined similarity function; the model defines the clustering bias, and the clustering structure is built into a generative model. That's why we can potentially use a different model to recover a different structure.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" }, { - "15:40": "Complex generative models can be used to discover complex clustering structures. We do not talk about in full, but we can easily design, generate a model to generate a hierarchical clusters. We can also use prior to further customize the clustering algorithm to for example control the topic of one cluster or multiple clusters. However one disadvantage of this approach is that there is no easy way to directly control the similarity measure." + "time": "15:40", + "text": "Complex generative models can be used to discover complex clustering structures. We did not talk about this in full, but we can easily design a generative model to generate hierarchical clusters. We can also use priors to further customize the clustering algorithm, for example to control the topic of one cluster or multiple clusters. However, one disadvantage of this approach is that there is no easy way to directly control the similarity measure.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" }, { - "16:11": "Sometimes we want to that, but it's very hard to inject such a special definition of similarity into such a model."
+ "time": "16:11", + "text": "Sometimes we want to that, but it's very hard to inject such a special definition of similarity into such a model.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" }, { - "16:20": "We also talked about similarity-based approaches, these approaches are more flexible to actually specify similarity functions." + "time": "16:20", + "text": "We also talked about similarity-based approaches, these approaches are more flexible to actually specify similarity functions.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" }, { - "16:29": "But one major disadvantage is that their objective function is not always very clear." + "time": "16:29", + "text": "But one major disadvantage is that their objective function is not always very clear.", + "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" }, { - "16:35": "The k-Means algorithm has clearly defined the objective function, but it's also very similar to a model based approach. The hierarchical clustering algorithm on the other hand is harder to specify the objective function. So it's not clear what exactly is being optimized," + "time": "16:35", + "text": "The k-Means algorithm has clearly defined the objective function, but it's also very similar to a model based approach. The hierarchical clustering algorithm on the other hand is harder to specify the objective function. So it's not clear what exactly is being optimized,", + "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" }, { - "17:00": "both approaches can generate term clusters. And document clusters, and term clusters can be in general, generated by representing each term with some text content. 
For example, take the context of each term as a representation of each term, as we have done in semantic relation learning. And then we can certainly cluster terms, based on actual text [INAUDIBLE]. Of course, term clusters can be generated by using generative models as well, as we've seen. [MUSIC]" + "time": "17:00", + "text": "both approaches can generate term clusters. And document clusters, and term clusters can be in general, generated by representing each term with some text content. For example, take the context of each term as a representation of each term, as we have done in semantic relation learning. And then we can certainly cluster terms, based on actual text [INAUDIBLE]. Of course, term clusters can be generated by using generative models as well, as we've seen. [MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" } ] }, { "4-6-text-clustering-evaluation": [ { - "0:00": "[MUSIC] This lecture is about evaluation of text clustering." + "time": "0:00", + "text": "[MUSIC] This lecture is about evaluation of text clustering.", + "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" }, { - "0:12": "So far we have talked about multiple ways of doing text clustering but how do we know which method works the best?" + "time": "0:12", + "text": "So far we have talked about multiple ways of doing text clustering but how do we know which method works the best?", + "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" }, { - "0:22": "So this has to do with evaluation. Now to talk about evaluation one must go back to the clustering bias that we introduced at the beginning." + "time": "0:22", + "text": "So this has to do with evaluation. 
Now to talk about evaluation one must go back to the clustering bias that we introduced at the beginning.", + "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" }, { - "0:32": "Because two objects can be similar depending on how you look at them," + "time": "0:32", + "text": "Because two objects can be similar depending on how you look at them,", + "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" }, { - "0:37": "we must clearly specify the perspective of similarity. Without that, the problem of clustering is not well defined. So this perspective is also very important for evaluation. If you look at this slide, and you can see we have two different ways to cluster these shapes, and if you ask a question, which one is the best, or which one is better? You actually see, there's no way to answer this question without knowing whether we'd like to cluster based on shapes, or cluster based on sizes. And that's precisely why the perspective on clustering bias is crucial for evaluation." + "time": "0:37", + "text": "we must clearly specify the perspective of similarity. Without that, the problem of clustering is not well defined. So this perspective is also very important for evaluation. If you look at this slide, and you can see we have two different ways to cluster these shapes, and if you ask a question, which one is the best, or which one is better? You actually see, there's no way to answer this question without knowing whether we'd like to cluster based on shapes, or cluster based on sizes. And that's precisely why the perspective on clustering bias is crucial for evaluation.", + "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" }, { - "1:19": "In general, we can evaluate text clusters in two ways, one is direct evaluation, and the other indirect evaluation. 
So in direct evaluation, we want to answer the following question: how close are the system-generated clusters to the ideal clusters that are generated by humans?" + "time": "1:19", + "text": "In general, we can evaluate text clusters in two ways, one is direct evaluation, and the other indirect evaluation. So in direct evaluation, we want to answer the following question: how close are the system-generated clusters to the ideal clusters that are generated by humans?", + "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" }, { - "1:38": "So the closeness here can be assessed" + "time": "1:38", + "text": "So the closeness here can be assessed", + "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" }, { - "1:44": "from multiple perspectives and that will help us characterize the quality of a clustering result from multiple angles, and this is sometimes desirable." + "time": "1:44", + "text": "from multiple perspectives and that will help us characterize the quality of a clustering result from multiple angles, and this is sometimes desirable.", + "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" }, { - "1:56": "Now we also want to quantify the closeness because this would allow us to easily compare different measures based on their performance figures."
+ "time": "1:56", + "text": "Now we also want to quantify the closeness because this would allow us to easily compare different measures based on their performance figures.", + "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" }, { - "2:09": "And finally, you can see, in this case, we essentially inject the clustering bias" + "time": "2:09", + "text": "And finally, you can see, in this case, we essentially inject the clustering bias", + "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" }, { - "2:15": "by using humans, basically humans would bring in the needed or desired clustering bias. Now, how do we do that exactly? Well, the general procedure would look like this." + "time": "2:15", + "text": "by using humans, basically humans would bring in the needed or desired clustering bias. Now, how do we do that exactly? Well, the general procedure would look like this.", + "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" }, { - "2:28": "Given a test set which consists of a lot of text objects, we can have humans create the ideal clustering result, that is, we're going to ask humans to partition the objects to create the gold standard. And they will use their judgments based on the need of a particular application to generate what they think are the best clustering results, and this would be then used to compare with the system generated clusters from the same test set." + "time": "2:28", + "text": "Given a test set which consists of a lot of text objects, we can have humans create the ideal clustering result, that is, we're going to ask humans to partition the objects to create the gold standard.
And they will use their judgments based on the need of a particular application to generate what they think are the best clustering results, and this would be then used to compare with the system generated clusters from the same test set.", + "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" }, { - "3:01": "And ideally, we want the system results to be the same as the human generated results, but in general, they are not going to be the same. So we would like to then quantify the similarity between the system-generated clusters and the gold standard clusters. And this similarity can also be measured from multiple perspectives and this will give us various measures to quantitatively evaluate a cluster, a clustering result. And some of the commonly used measures include the purity, which measures whether a cluster has a similar object from the same cluster in the gold standard. And normalized mutual information is a commonly used measure which basically measures based on the identity of the cluster of an object in the system-generated result. How well can you predict the cluster of the object in the gold standard or vice versa? And mutual information captures the correlation between these cluster labels and normalized mutual information is often used for quantifying the similarity for this evaluation purpose, F measure is another possible measure." + "time": "3:01", + "text": "And ideally, we want the system results to be the same as the human generated results, but in general, they are not going to be the same. So we would like to then quantify the similarity between the system-generated clusters and the gold standard clusters. And this similarity can also be measured from multiple perspectives and this will give us various measures to quantitatively evaluate a cluster, a clustering result.
And some of the commonly used measures include the purity, which measures whether a cluster has a similar object from the same cluster in the gold standard. And normalized mutual information is a commonly used measure which basically measures based on the identity of the cluster of an object in the system-generated result. How well can you predict the cluster of the object in the gold standard or vice versa? And mutual information captures the correlation between these cluster labels and normalized mutual information is often used for quantifying the similarity for this evaluation purpose, F measure is another possible measure.", + "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" }, { - "4:21": "Now again a thorough discussion of this evaluation and these evaluation issues would be beyond the scope of this course." + "time": "4:21", + "text": "Now again a thorough discussion of this evaluation and these evaluation issues would be beyond the scope of this course.", + "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" }, { - "4:29": "I've suggested some reading in the end that you can take a look at to know more about that." + "time": "4:29", + "text": "I've suggested some reading in the end that you can take a look at to know more about that.", + "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" }, { - "4:36": "So here I just want to discuss some high level ideas that would allow you to think about how to do evaluation in your applications. The second way to evaluate text clusters is to do indirect evaluation. So in this case the question to answer is, how useful are the clustering results for the intended applications? Now this of course is an application-specific question, so usefulness is going to depend on specific applications."
+ "time": "4:36", + "text": "So here I just want to discuss some high level ideas that would allow you to think about how to do evaluation in your applications. The second way to evaluate text clusters is to do indirect evaluation. So in this case the question to answer is, how useful are the clustering results for the intended applications? Now this of course is an application-specific question, so usefulness is going to depend on specific applications.", + "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" }, { - "5:07": "In this case, the clustering bias is imposed by the intended application as well, so what counts as a best cluster result would be dependent on the application." + "time": "5:07", + "text": "In this case, the clustering bias is imposed by the intended application as well, so what counts as a best cluster result would be dependent on the application.", + "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" }, { - "5:19": "Now procedure-wise we also would create a test set with text objects for the intended application to quantify the performance of the system." + "time": "5:19", + "text": "Now procedure-wise we also would create a test set with text objects for the intended application to quantify the performance of the system.", + "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" }, { - "5:32": "In this case, what we care about is the contribution of clustering to some application so we often have a baseline system to compare with."
+ "time": "5:32", + "text": "In this case, what we care about is the contribution of clustering to some application so we often have a baseline system to compare with.", + "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" }, { - "5:45": "This could be the current system for doing something, and then you hope to add a clustering to improve it, or the baseline system could be using a different clustering method. And then what you are trying to experiment with, and you hope to have better idea of word clustering. So in any case you have a baseline system work with, and then you add a clustering algorithm to the baseline system to produce a clustering system." + "time": "5:45", + "text": "This could be the current system for doing something, and then you hope to add a clustering to improve it, or the baseline system could be using a different clustering method. And then what you are trying to experiment with, and you hope to have better idea of word clustering. So in any case you have a baseline system work with, and then you add a clustering algorithm to the baseline system to produce a clustering system.", + "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" }, { - "6:11": "And then we have to compare the performance of your clustering system and the baseline system in terms of the performance measure for that particular application." + "time": "6:11", + "text": "And then we have to compare the performance of your clustering system and the baseline system in terms of the performance measure for that particular application.", + "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" }, { - "6:21": "So in this case we call it indirect evaluation of clusters because there's no explicit assessment of the quality of clusters, but rather it's to assess the contribution of clusters to a particular application." 
+ "time": "6:21", + "text": "So in this case we call it indirect evaluation of clusters because there's no explicit assessment of the quality of clusters, but rather it's to assess the contribution of clusters to a particular application.", + "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" }, { - "6:37": "So, to summarize text clustering," + "time": "6:37", + "text": "So, to summarize text clustering,", + "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" }, { - "6:41": "it's a very useful unsupervised general text mining technique, and it's particularly useful for obtaining an overall picture of the text content. And this is often needed to explore text data, and this is often the first step when you deal with a lot of text data." + "time": "6:41", + "text": "it's a very useful unsupervised general text mining technique, and it's particularly useful for obtaining an overall picture of the text content. And this is often needed to explore text data, and this is often the first step when you deal with a lot of text data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" }, { - "7:01": "The second application or second kind of applications is to discover interesting clustering structures in text data and these structures can be very meaningful." + "time": "7:01", + "text": "The second application or second kind of applications is to discover interesting clustering structures in text data and these structures can be very meaningful.", + "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" }, { - "7:13": "There are many approaches that can be used to perform text clustering and we discussed model-based approaches and some similarity-based approaches. In general, strong clusters tend to show up no matter what method is used.
Also the effectiveness of a method highly depends on whether the desired clustering bias is captured appropriately, and this can be done either through using the right generative model, the model design appropriate for the clustering, or the right similarity function to explicitly define the bias. Deciding the optimal number of clusters is a very difficult problem for all clustering methods, and that's because it's an unsupervised algorithm, and there's no training data to guide us to select the best number of clusters." + "time": "7:13", + "text": "There are many approaches that can be used to perform text clustering and we discussed model-based approaches and some similarity-based approaches. In general, strong clusters tend to show up no matter what method is used. Also the effectiveness of a method highly depends on whether the desired clustering bias is captured appropriately, and this can be done either through using the right generative model, the model design appropriate for the clustering, or the right similarity function to explicitly define the bias. Deciding the optimal number of clusters is a very difficult problem for all clustering methods, and that's because it's an unsupervised algorithm, and there's no training data to guide us to select the best number of clusters.", + "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" }, { - "8:05": "Now sometimes you may see some methods that can automatically determine the number of clusters, but in general that has some implied application of clustering bias there and that's just not specified.
Without clearly defining a clustering bias, it's just impossible to say what the optimal number of clusters is, so this is important to keep in mind." + "time": "8:05", + "text": "Now sometimes you may see some methods that can automatically determine the number of clusters, but in general that has some implied application of clustering bias there and that's just not specified. Without clearly defining a clustering bias, it's just impossible to say what the optimal number of clusters is, so this is important to keep in mind.", + "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" }, { - "8:31": "And I should also say sometimes we can also use the application to determine the number of clusters, for example, if you're clustering search results, then obviously you don't want to generate 100 clusters, so the number can be dictated by the interface design." + "time": "8:31", + "text": "And I should also say sometimes we can also use the application to determine the number of clusters, for example, if you're clustering search results, then obviously you don't want to generate 100 clusters, so the number can be dictated by the interface design.", + "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" }, { - "8:46": "In other situations, we might be able to use the fitness to the data to assess whether we've got a good number of clusters to explain our data well. And to do that, you can vary the number of clusters and watch how well you can fit the data." + "time": "8:46", + "text": "In other situations, we might be able to use the fitness to the data to assess whether we've got a good number of clusters to explain our data well. And to do that, you can vary the number of clusters and watch how well you can fit the data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" }, { - "9:07": "In general when you add more components to a mixture model you should fit the data better because you can always set the probability of using the new component as zero. So you can't in general fit the data worse than before, but the question is as you add more components would you be able to significantly improve the fitness of the data and that can be used to determine the right number of clusters.
And finally evaluation of clustering results, this can be done both directly and indirectly, and we often would like to do both in order to get a good sense about how well our method works. So here's some suggested reading and this is particularly useful to better understand how these measures are calculated and clustering in general [MUSIC]" + "time": "9:07", + "text": "In general when you add more components to a mixture model you should fit the data better because you can always set the probability of using the new component as zero. So you can't in general fit the data worse than before, but the question is as you add more components would you be able to significantly improve the fitness of the data and that can be used to determine the right number of clusters. And finally evaluation of clustering results, this can be done both directly and indirectly, and we often would like to do both in order to get a good sense about how well our method works. So here's some suggested reading and this is particularly useful to better understand how these measures are calculated and clustering in general [MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" } ] }, { "4-7-text-categorization-motivation": [ { - "0:00": "[SOUND]" + "time": "0:00", + "text": "[SOUND]", + "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" }, { - "0:06": "This lecture is about text categorization." + "time": "0:06", + "text": "This lecture is about text categorization.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" }, { - "0:11": "In this lecture, we're going to talk about text categorization."
+ "time": "0:11", + "text": "In this lecture, we're going to talk about text categorization.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" }, { - "0:16": "This is a very important technique for text data mining and analytics." + "time": "0:16", + "text": "This is a very important technique for text data mining and analytics.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" }, { - "0:22": "It is relevant to discovery of various different kinds of knowledge as shown here. First, it's related to topic mining and analysis. And, that's because it has to do with analyzing text data based on some predefined topics. Secondly, it's also related to opinion mining and sentiment analysis, which has to do with discovering knowledge about the observer, the human sensor. Because we can categorize the authors, for example, based on the content of the articles that they have written, right? We can, in general, categorize the observer based on the content that they produce." + "time": "0:22", + "text": "It is relevant to discovery of various different kinds of knowledge as shown here. First, it's related to topic mining and analysis. And, that's because it has to do with analyzing text data based on some predefined topics. Secondly, it's also related to opinion mining and sentiment analysis, which has to do with discovering knowledge about the observer, the human sensor. Because we can categorize the authors, for example, based on the content of the articles that they have written, right? We can, in general, categorize the observer based on the content that they produce.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" }, { - "1:12": "Finally, it's also related to text-based prediction.
Because, we can often use text categorization techniques to predict some variables in the real world that are only remotely related to text data." + "time": "1:12", + "text": "Finally, it's also related to text-based prediction. Because, we can often use text categorization techniques to predict some variables in the real world that are only remotely related to text data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" }, { - "1:27": "And so, this is a very important technique for text data mining." + "time": "1:27", + "text": "And so, this is a very important technique for text data mining.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" }, { - "1:34": "This is the overall plan for covering the topic. First, we're going to talk about what is text categorization and why we're interested in doing that in this lecture? And now, we're going to talk about how to do text categorization and how to evaluate the categorization results. So, the problem of text categorization is defined as follows. We're given a set of predefined categories possibly forming a hierarchy or so.
And often, also a set of training examples or training set of labeled text objects which means the text objects have already been labeled with known categories. And then, the task is to classify any text object into one or more of these predefined categories. So, the picture on this slide shows what happens." + "time": "1:34", + "text": "This is the overall plan for covering the topic. First, we're going to talk about what is text categorization and why we're interested in doing that in this lecture? And now, we're going to talk about how to do text categorization and how to evaluate the categorization results. So, the problem of text categorization is defined as follows. We're given a set of predefined categories possibly forming a hierarchy or so. And often, also a set of training examples or training set of labeled text objects which means the text objects have already been labeled with known categories. And then, the task is to classify any text object into one or more of these predefined categories. So, the picture on this slide shows what happens.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" }, { - "2:30": "When we do text categorization, we have a lot of text objects to be processed by a categorization system and the system will, in general, assign categories to these documents, as shown on the right in the categorization results. And we often assume the availability of training examples and these are the documents that are tagged with known categories. And these examples are very important for helping the system to learn patterns in different categories. And, this would further help the system then know how to recognize" + "time": "2:30", + "text": "When we do text categorization, we have a lot of text objects to be processed by a categorization system and the system will, in general, assign categories to these documents, as shown on the right in the categorization results. And we often assume the availability of training examples and these are the documents that are tagged with known categories. And these examples are very important for helping the system to learn patterns in different categories. And, this would further help the system then know how to recognize"
And in fact, there are many examples, here are just a few.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" }, { - "3:27": "So first, text objects can vary, so we can categorize a document, or a passage, or a sentence, or collections of text. As in the case of clustering, the units to be analyzed can vary a lot, so this creates a lot of possibilities. Secondly, categories can also vary. Allocate in general, there's two major kinds of categories. One is internal categories. These are categories that categorize content of text object. For example, topic categories or sentiment categories and they generally have to do with the content of the text objects throughout the categorization of the content." + "time": "3:27", + "text": "So first, text objects can vary, so we can categorize a document, or a passage, or a sentence, or collections of text. As in the case of clustering, the units to be analyzed can vary a lot, so this creates a lot of possibilities. Secondly, categories can also vary. Allocate in general, there's two major kinds of categories. One is internal categories. These are categories that categorize content of text object. For example, topic categories or sentiment categories and they generally have to do with the content of the text objects throughout the categorization of the content.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" }, { - "4:08": "The other kind is external categories that can characterize an entity associated with the text object. For example, authors are entities associated with the content that they produce. And so, we can use their content in determining which author has written, which part, for example, and that's called author attribution." + "time": "4:08", + "text": "The other kind is external categories that can characterize an entity associated with the text object. 
For example, authors are entities associated with the content that they produce. And so, we can use their content in determining which author has written, which part, for example, and that's called author attribution.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" }, { - "4:33": "Or, we can have any other mininal categories associate with text data as long as there is minimal connection between the entity and text data. For example, we might collect a lot of reviews about a restaurant or a lot of reviews about a product, and then, this text data can help us infer properties of a product or a restaurant. In that case, we can treat this as a categorization problem. We can categorize restaurants or categorize products based on their corresponding reviews. So, this is an example for external category. Here are some specific examples of the applications. News categorization is very common as being started a lot. News agencies would like to assign predefined categories to categorize news generated everyday. And, these virtual article categorizations are not important aspect. For example, in the biomedical domain, there's MeSH annotations. MeSH stands for Medical Subject Heading, and this is ontology of terms," + "time": "4:33", + "text": "Or, we can have any other mininal categories associate with text data as long as there is minimal connection between the entity and text data. For example, we might collect a lot of reviews about a restaurant or a lot of reviews about a product, and then, this text data can help us infer properties of a product or a restaurant. In that case, we can treat this as a categorization problem. We can categorize restaurants or categorize products based on their corresponding reviews. So, this is an example for external category. Here are some specific examples of the applications. News categorization is very common as being started a lot. 
News agencies would like to assign predefined categories to categorize news generated everyday. And, these virtual article categorizations are not important aspect. For example, in the biomedical domain, there's MeSH annotations. MeSH stands for Medical Subject Heading, and this is ontology of terms,", + "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" }, { - "5:49": "characterize content of literature articles in detail." + "time": "5:49", + "text": "characterize content of literature articles in detail.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" }, { - "5:54": "Another example of application is spam email detection or filtering, right? So, we often have a spam filter to help us distinguish spams from legitimate emails and this is clearly a binary classification problem." + "time": "5:54", + "text": "Another example of application is spam email detection or filtering, right? So, we often have a spam filter to help us distinguish spams from legitimate emails and this is clearly a binary classification problem.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" }, { - "6:14": "Sentiment categorization of product reviews or tweets is yet another kind of applications where we can categorize, comparing to positive or negative or positive and negative or neutral." + "time": "6:14", + "text": "Sentiment categorization of product reviews or tweets is yet another kind of applications where we can categorize, comparing to positive or negative or positive and negative or neutral.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" }, { - "6:27": "So, you can have send them to categories, assign the two text content." 
+ "time": "6:27", + "text": "So, you can have sentiment categories assigned to the text content.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" }, { - "6:35": "Another application is automatic email routing or sorting, so, you might want to automatically sort your emails into different folders and that's one application of text categorization where each folder is a category." + "time": "6:35", + "text": "Another application is automatic email routing or sorting, so, you might want to automatically sort your emails into different folders and that's one application of text categorization where each folder is a category.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" }, { - "6:48": "The results are another important kind of applications of routing emails to the right person to handle, so, in helpdesk, email messaging is generally routed to a particular person to handle. Different people tend to handle different kinds of requests. And in many cases, a person would manually assign the messages to the right people.
But, as you can imagine, we can use an automatic text categorization system to help route the requests. And, this is to classify the incoming request into one of the categories where each category actually corresponds to a person to handle the request. And finally, author attribution, as I just mentioned, is yet another application, and it's another example of using text to actually infer properties of" + "time": "6:48", + "text": "The results are another important kind of applications of routing emails to the right person to handle, so, in helpdesk, email messaging is generally routed to a particular person to handle. Different people tend to handle different kinds of requests. And in many cases, a person would manually assign the messages to the right people. But, as you can imagine, we can use an automatic text categorization system to help route the requests. And, this is to classify the incoming request into one of the categories where each category actually corresponds to a person to handle the request. And finally, author attribution, as I just mentioned, is yet another application, and it's another example of using text to actually infer properties of", + "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" }, { - "7:41": "some other entities. And, there are also many variants of the problem formulation. And so, first, we have the simplest case, which is a binary categorization, where there are only two categories. And, there are many examples like that, information retrieval or search engine." + "time": "7:41", + "text": "some other entities. And, there are also many variants of the problem formulation. And so, first, we have the simplest case, which is a binary categorization, where there are only two categories. And, there are many examples like that, information retrieval or search engine.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" }, { - "7:59": "Applications where one distinguishes relevant documents from non-relevant documents for a particular query." + "time": "7:59", + "text": "Applications where one distinguishes relevant documents from non-relevant documents for a particular query.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" }, { - "8:06": "Spam filtering just distinguishing spams from non-spams, so, also two categories.
Sometimes, classifications of opinions can be in two categories, positive and a negative.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" }, { - "8:19": "A more general case would be K-category categorization and there are also many applications like that, there could be more than two categories. So, topic categorization is often such an example where you can have multiple topics. Email routing would be another example when you may have multiple folders or if you route the email to the right person to handle it, then there are multiple people to classify. So, in all these cases, there are more than two kinds of categories." + "time": "8:19", + "text": "A more general case would be K-category categorization and there are also many applications like that, there could be more than two categories. So, topic categorization is often such an example where you can have multiple topics. Email routing would be another example when you may have multiple folders or if you route the email to the right person to handle it, then there are multiple people to classify. So, in all these cases, there are more than two kinds of categories.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" }, { - "8:49": "Another variation is to have hierarchical categorization where categories form a hierarchy. Again, topical hierarchy is very common." + "time": "8:49", + "text": "Another variation is to have hierarchical categorization where categories form a hierarchy. Again, topical hierarchy is very common.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" }, { - "8:58": "Yet another variation is joint categorization. That's when you have multiple categorization tasks that are related and then you hope to kind of join the categorization. Further leverage the dependency of these tasks to improve accuracy for each individual task. 
Among all these binary categorizations is most fundamental and part of it also is because it's simple and probably it's because it can actually be used to perform all the other categorization tasks. For example, a K-category categorization task can be actually performed by using binary categorization." + "time": "8:58", + "text": "Yet another variation is joint categorization. That's when you have multiple categorization tasks that are related and then you hope to do the categorization jointly, to further leverage the dependency of these tasks to improve accuracy for each individual task. Among all these, binary categorization is the most fundamental, partly because it's simple and partly because it can actually be used to perform all the other categorization tasks. For example, a K-category categorization task can actually be performed by using binary categorization.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" }, { - "9:40": "Basically, we can look at each category separately and then the binary categorization problem is whether object is in this category or not, meaning in other categories." + "time": "9:40", + "text": "Basically, we can look at each category separately, and then the binary categorization problem is whether the object is in this category or not, meaning it is in one of the other categories.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" }, { - "9:53": "And, the hierarchical categorization can also be done by progressively doing flat categorization at each level. So, we have, first, we categorize all the objects into, let's say, a small number of high-level categories, and inside each category, we have further categorized to sub-categories, etc." + "time": "9:53", + "text": "And, the hierarchical categorization can also be done by progressively doing flat categorization at each level. 
So, first, we categorize all the objects into, let's say, a small number of high-level categories, and inside each category, we further categorize into sub-categories, etc.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" }, { - "10:15": "So, why is text categorization important? Well, I already showed that you, several applications but, in general, there are several reasons. One is text categorization helps enrich text representation and that's to achieve more understanding of text data that's all it was useful for text analysis. So, now with categorization text can be represented in multiple levels. The keyword conditions that's often used for a lot text processing tasks. But we can now also add categories and they provide two levels of transition." + "time": "10:15", + "text": "So, why is text categorization important? Well, I already showed you several applications but, in general, there are several reasons. One is that text categorization helps enrich text representation, and that's to achieve more understanding of text data, which is always useful for text analysis. So, now with categorization, text can be represented at multiple levels: the keyword representation that's often used for a lot of text processing tasks, but we can now also add categories, and they provide two levels of representation.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" }, { - "10:55": "Semantic categories assigned can also be directly or indirectly useful for application. 
So, for example, semantic categories could be already very useful or other attribution might be directly useful. Another example is when semantic categories can facilitate aggregation of text content and this is another case of applications of text categorization." + "time": "10:55", + "text": "Semantic categories assigned can also be directly or indirectly useful for an application. So, for example, sentiment categories could be already very useful, or author attribution might be directly useful. Another example is when semantic categories can facilitate aggregation of text content, and this is another case of applications of text categorization.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" }, { - "11:25": "For example, if we want to know the overall opinions about a product, we" + "time": "11:25", + "text": "For example, if we want to know the overall opinions about a product, we", + "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" }, { - "11:32": "could first categorize the opinions in each individual view as positive or negative and then, that would allow us to easy to aggregate all the sentiment, and it would tell us about the 70% of the views are positive and 30% are negative, etc." + "time": "11:32", + "text": "could first categorize the opinions in each individual review as positive or negative, and then that would allow us to easily aggregate all the sentiment, and it would tell us that about 70% of the reviews are positive and 30% are negative, etc.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" }, { - "11:53": "So, without doing categorization, it will be much harder to aggregate such opinions to provide a concise way of coding text in some sense based on all of the vocabulary. And, sometimes you may see in some applications, text with categorizations called a text coded, encoded with some control of vocabulary. The second kind of reasons is to use text categorization to infer properties of entities, and text categories allows us to infer the properties of such entities that are associate with text data. So, this means we can use text categorization to discover knowledge about the world. 
In general, as long as we can associate the entity with text of data, we can always the text of data to help categorize the corresponding entities. So, it's used for single information network that will connect the other entities with text data. The obvious entities that can be directly connected are authors. But, you can also imagine the author's affiliations or the author's age and other things can be actually connected to text data indirectly. Once we have made the connection, then we can make a prediction about those values. So, this is a general way to allow us to use text mining through, so the text categorization to discover knowledge about the world. Very useful, especially in big text data analytics where we are often just using text data as extra sets of data extracted from humans to infer certain decision factors often together with non-textual data. Specifically with text, for example, we can also think of examples of inferring properties of entities. For example, discovery of non-native speakers of a language. And, this can be done by categorizing the content of speakers." + "time": "11:53", + "text": "So, without doing categorization, it will be much harder to aggregate such opinions to provide a concise way of coding text, in some sense, based on all of the vocabulary. And, sometimes you may see in some applications, text with categories assigned called coded text, encoded with some controlled vocabulary. The second kind of reason is to use text categorization to infer properties of entities, and text categorization allows us to infer the properties of such entities that are associated with text data. So, this means we can use text categorization to discover knowledge about the world. In general, as long as we can associate the entity with text data, we can always use the text data to help categorize the corresponding entities. So, it's useful to think of an information network that will connect these entities with text data. 
The obvious entities that can be directly connected are authors. But, you can also imagine the author's affiliations or the author's age and other things can be actually connected to text data indirectly. Once we have made the connection, then we can make a prediction about those values. So, this is a general way to allow us to use text mining, through text categorization, to discover knowledge about the world. Very useful, especially in big text data analytics, where we are often just using text data as extra sets of data extracted from humans to infer certain decision factors, often together with non-textual data. Specifically with text, for example, we can also think of examples of inferring properties of entities. For example, discovery of non-native speakers of a language. And, this can be done by categorizing the content of speakers.", + "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" }, { - "14:00": "Another example is to predict the party affiliation of a politician based on the political speech. And, this is again an example of using text data to infer some knowledge about the real world. In nature, the problems are all the same, and that's as we defined and it's a text categorization problem. [MUSIC]" + "time": "14:00", + "text": "Another example is to predict the party affiliation of a politician based on the political speech. And, this is again an example of using text data to infer some knowledge about the real world. In nature, these problems are all the same as what we defined: a text categorization problem. [MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" } ] }, { "4-8-text-categorization-methods": [ { - "0:06": "This lecture is about the methods for text categorization." 
+ "time": "0:06", + "text": "This lecture is about the methods for text categorization.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" }, { - "0:12": "So in this lecture we're going to discuss how to do text for categorization." + "time": "0:12", + "text": "So in this lecture we're going to discuss how to do text categorization.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" }, { - "0:19": "First, there're many methods for text categorization." + "time": "0:19", + "text": "First, there are manual methods for text categorization.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" }, { - "0:25": "In such a method the idea is to determine the category based on some rules that we design carefully to reflect the domain knowledge about the category prediction problem. So for example, if you want to do topic categorization for news articles you can say well, if the news article mentions word like a game and sports three times. That we're going to say it's about sports things like that and this would allow us to deterministically decide which category a document that should be put into." + "time": "0:25", + "text": "In such a method the idea is to determine the category based on some rules that we design carefully to reflect the domain knowledge about the category prediction problem. So for example, if you want to do topic categorization for news articles, you can say well, if the news article mentions words like game and sports three times, then we're going to say it's about sports, things like that, and this would allow us to deterministically decide which category a document should be put into.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" }, { - "1:02": "Now such a strategy would work well if the following conditions hold. 
First the categories must be very well defined and this allows the person to clearly decide the category based on some clear rules." + "time": "1:02", + "text": "Now such a strategy would work well if the following conditions hold. First, the categories must be very well defined, and this allows the person to clearly decide the category based on some clear rules.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" }, { - "1:21": "A certainly the categories as" + "time": "1:21", + "text": "And certainly the categories", + "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" }, { - "1:25": "half to be easy to distinguished at the based on a surface features in text. So that means some official features like keywords or punctuations or whatever, you can easily identify in text to data." + "time": "1:25", + "text": "have to be easy to distinguish based on surface features in text. So that means some superficial features like keywords or punctuation or whatever you can easily identify in text data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" }, { - "1:41": "For example, if there is some special vocabulary that is known to only occur in a particular category. And that would be most effective because we can easily use such a vocabulary or padding of such a vocabulary to recognize this category." + "time": "1:41", + "text": "For example, if there is some special vocabulary that is known to only occur in a particular category. And that would be most effective because we can easily use such a vocabulary, or patterns of such a vocabulary, to recognize this category.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" }, { - "1:57": "Now we also should have sufficient knowledge for designing these words, and so if that's the case then such a can be effective. 
And so it does have a in some domains and sometimes. However, in general, there are several problems with this approach. First off, because it's label intensive it requires a lot of manual work. Obviously, we can't do this for all kinds of categorization problems. We have to do it from scratch for a different problem. problem because given the rules, what they need. So it doesn't scale up well." + "time": "1:57", + "text": "Now we also should have sufficient knowledge for designing these rules, and so if that's the case then such an approach can be effective. And so it does work in some domains sometimes. However, in general, there are several problems with this approach. First off, because it's labor intensive it requires a lot of manual work. Obviously, we can't do this for all kinds of categorization problems. We have to do it from scratch for each different problem, designing the rules that it needs. So it doesn't scale up well.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" }, { - "2:41": "Secondly, it cannot handle uncertainty in rules, often the rules Aren't 100% reliable. Take for example looking at occurrences of words in texts and trying to decide the topic." + "time": "2:41", + "text": "Secondly, it cannot handle uncertainty in rules; often the rules aren't 100% reliable. Take for example looking at occurrences of words in texts and trying to decide the topic.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" }, { - "2:57": "It's actually very hard to have 100% correct rule. So for example you can say well, if it has game, sports, basketball Then for sure it's about sports. But one can also imagine some types of articles that mention these cures, but may not be exactly about sports or only marginally touching sports. The main topic could be another topic, a different topic than sports." 
+ "time": "2:57", + "text": "It's actually very hard to have a 100% correct rule. So for example you can say well, if it has game, sports, basketball, then for sure it's about sports. But one can also imagine some types of articles that mention these keywords, but may not be exactly about sports or only marginally touching sports. The main topic could be another topic, a different topic than sports.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" }, { - "3:27": "So that's one disadvantage of this approach. And then finally, the rules maybe inconsistent and this would lead to robustness. More specifically, and sometimes, the results of categorization may be different that depending on which rule to be applied. So as in that case that you are facing uncertainty. And you will also have to decide an order of applying the rules, or combination of results that are contradictory. So all these are problems with this approach. And it turns out that both problems can be solved or alleviated by using machine learning." + "time": "3:27", + "text": "So that's one disadvantage of this approach. And then finally, the rules may be inconsistent, and this would lead to robustness problems. More specifically, sometimes the results of categorization may be different depending on which rule is applied. So in that case you are facing uncertainty. And you will also have to decide an order of applying the rules, or a combination of results that are contradictory. So all these are problems with this approach. And it turns out that both problems can be solved or alleviated by using machine learning.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" }, { - "4:07": "So these machine learning methods are more automatic. But, I still put automatic in quotation marks because they are not really completely automatic cause it still require many work. 
More specifically we have to use a human experts to help in two ways. First the human experts must annotate data cells was category labels. And would tell the computer which documents should receive which categories. And this is called training data." + "time": "4:07", + "text": "So these machine learning methods are more automatic. But, I still put automatic in quotation marks because they are not really completely automatic, because they still require manual work. More specifically, we have to use human experts to help in two ways. First, the human experts must annotate the data with category labels, and that would tell the computer which documents should receive which categories. And this is called training data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" }, { - "4:38": "And then secondly, the human experts also need to provide a set of features to represent each text object. That can potentially provide a clue about the category. So, we need to provide some basic features for the computers to look into." + "time": "4:38", + "text": "And then secondly, the human experts also need to provide a set of features to represent each text object that can potentially provide a clue about the category. So, we need to provide some basic features for the computers to look into.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" }, { - "4:55": "In the case of tax a natural choice would be the words. 
So, soft rules just means, we're going to get decided which category we should be assigned for a document, but it's not going to be use using a rule that is deterministic. So we might use something similar to saying that if it matches games, sports many times, it's likely to be sports. But, we're not going to say exactly for sure but instead, we're going to use probabilities or weights. So that we can combine much more evidences. So, the learning process, basically is going to figure out which features are most useful for separating different categories. And it's going to also figure out how to optimally combine features to minimize errors of the categorization of the training data. So the training data, as you can see here, is very important. It's the basis for learning. And then, the trained classifier can be applied to a new text object to predict the most likely category. And that's to simulate the prediction of what human Would assign to this text object. If the human were to make a judgement. So when we use machine learning for text categorization we can also talk about the problem in the general setting of supervisement. So the set up is to learn a classifier to map a value of X. Into a map of Y so here X is all the text objects and Y is all the categories, a set of categories. So the class phi will take any value in x as input and would generate a value in y as output. We hope that output y with this right category for x. And here correct, of course, is judged based on the training data. So that's a general goal in machine learning problems or supervised learning problems where you are given some examples of input and output for a function. And then the computer's going to figure out the, how the function behaves like based on this examples. And then try to be able to compute the values for future x's that when we have not seen." + "time": "4:55", + "text": "In the case of text, a natural choice would be the words. 
So, using each word as a feature is a very common choice to start with, but of course there are other sophisticated features like phrases or even part-of-speech tags or even syntactic structures. So once human experts can provide this, then we can use machine learning to learn soft rules for categorization from the training data. So, soft rules just means, we're going to decide which category should be assigned to a document, but it's not going to be done using a rule that is deterministic. So we might use something similar to saying that if it matches games, sports many times, it's likely to be sports. But, we're not going to say exactly for sure; instead, we're going to use probabilities or weights, so that we can combine much more evidence. So, the learning process, basically, is going to figure out which features are most useful for separating different categories. And it's going to also figure out how to optimally combine features to minimize errors of the categorization on the training data. So the training data, as you can see here, is very important. It's the basis for learning. And then, the trained classifier can be applied to a new text object to predict the most likely category. And that's to simulate the prediction of what a human would assign to this text object, if the human were to make a judgement. So when we use machine learning for text categorization we can also talk about the problem in the general setting of supervised learning. So the setup is to learn a classifier to map a value of X into a value of Y, so here X is all the text objects and Y is all the categories, a set of categories. So the classifier will take any value in x as input and would generate a value in y as output. We hope that output y is the right category for x. And here correct, of course, is judged based on the training data. 
So that's the general goal in machine learning or supervised learning problems, where you are given some examples of input and output for a function. And then the computer's going to figure out how the function behaves based on these examples, and then try to be able to compute the values for future x's that we have not seen.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" }, { - "7:38": "So in general all methods would rely on discriminative features of text objects to distinguish different categories. So that's why these features are very important and they have to be provided by humans. And they will also combine multiple features in a weight map with weights to be optimized to minimize errors on the training data. So after the learning processes optimization problem. An objective function is often tied into the errors on the training data." + "time": "7:38", + "text": "So in general all methods would rely on discriminative features of text objects to distinguish different categories. So that's why these features are very important and they have to be provided by humans. And they will also combine multiple features in a weighted manner, with weights to be optimized to minimize errors on the training data. So the learning process is an optimization problem; an objective function is often tied to the errors on the training data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" }, { - "8:12": "Different methods tend to vary in their ways of measuring the errors on the training data. 
They might optimize a different objective function, which is often also called a loss function or cost function.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" }, { - "8:26": "They also tend to vary in their ways of combining the features. So a linear combination for example is simple, is often used. But they are not as powerful as nonlinear combinations. But nonlinear models might be more complex for training, so there are tradeoffs as well. But that would lead to different variations of" + "time": "8:26", + "text": "They also tend to vary in their ways of combining the features. So a linear combination for example is simple, is often used. But they are not as powerful as nonlinear combinations. But nonlinear models might be more complex for training, so there are tradeoffs as well. But that would lead to different variations of", + "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" }, { - "8:50": "many variations of these learning methods. So in general we can distinguish two kinds of classifiers at a high level. One is called generative classifiers. The other is called discriminative classifiers. The generative classifiers try to learn what the data looks like in each category. So it attempts to model the joint distribution of the data and the label x and y and this can then be factored out to a product of why the distribution of labels. And the joint probability of sorry the conditional probability of X given Y, so it's Y. So we first model the distribution of labels and then we model how the data is generate a particular label here." + "time": "8:50", + "text": "many variations of these learning methods. So in general we can distinguish two kinds of classifiers at a high level. One is called generative classifiers. The other is called discriminative classifiers. The generative classifiers try to learn what the data looks like in each category. 
So it attempts to model the joint distribution of the data and the label x and y and this can then be factored out to a product of why the distribution of labels. And the joint probability of sorry the conditional probability of X given Y, so it's Y. So we first model the distribution of labels and then we model how the data is generate a particular label here." + "time": "8:50", + "text": "many variations of these learning methods. So in general we can distinguish two kinds of classifiers at a high level. One is called generative classifiers. The other is called discriminative classifiers. The generative classifiers try to learn what the data looks like in each category. So it attempts to model the joint distribution of the data and the label, x and y, and this can then be factored into a product of p(y), the distribution of labels, and the conditional probability of x given y, p(x|y). So we first model the distribution of labels and then we model how the data is generated given a particular label here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" }, { - "9:48": "And once we can estimate these models, then we can compute this conditional probability of label given data based on the probability of data given label." + "time": "9:48", + "text": "And once we can estimate these models, then we can compute this conditional probability of label given data based on the probability of data given label.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" }, { - "10:02": "And the label distribution here by using the Bayes Rule." + "time": "10:02", + "text": "And the label distribution here by using the Bayes Rule.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" }, { - "10:07": "Now this is the most important thing, because this conditional probability of the label can then be used directly to decide which label is most likely." + "time": "10:07", + "text": "Now this is the most important thing, because this conditional probability of the label can then be used directly to decide which label is most likely.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" }, { - "10:18": "So in such approaches objective function is actually likelihood. And so, we model how the data are generated. So it only indirectly captures the training errors. But if we can model the data in each category accurately, then we can also classify accurately." 
+ "time": "10:18", + "text": "So in such approaches the objective function is actually the likelihood. And so, we model how the data are generated. So it only indirectly captures the training errors. But if we can model the data in each category accurately, then we can also classify accurately.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" }, { - "10:38": "One example is Na\u00efve Bayes classifier, in this case. The other kind of approaches are called discriminative classifies, and these classifies try to learn what features separate categories. So they direct or attack the problem of categorization for separation of classes. So sorry for the problem." + "time": "10:38", + "text": "One example is the Na\u00efve Bayes classifier, in this case. The other kind of approaches are called discriminative classifiers, and these classifiers try to learn what features separate categories. So they directly attack the problem of categorization, or the problem of separation of classes.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" }, { - "11:04": "So, these discriminative classifiers attempt to model the conditional probability of the label given the data point directly." + "time": "11:04", + "text": "So, these discriminative classifiers attempt to model the conditional probability of the label given the data point directly.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" }, { - "11:17": "So, the objective function tends to directly measure the errors of categorization on the training data." 
+ "time": "11:17", + "text": "So, the objective function tends to directly measure the errors of categorization on the training data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" }, { - "11:24": "Some examples include a logistical regression, support vector machines, and k-nearest neighbors. We will cover some of these classifiers in detail in the next few lectures. [MUSIC]" + "time": "11:24", + "text": "Some examples include logistic regression, support vector machines, and k-nearest neighbors. We will cover some of these classifiers in detail in the next few lectures. [MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" } ] }, { "4-9-text-categorization-generative-probabilistic-models": [ { - "0:00": "[SOUND] This lecture is about how to use generative probabilistic models for text categorization." + "time": "0:00", + "text": "[SOUND] This lecture is about how to use generative probabilistic models for text categorization.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "0:14": "There are in general about two kinds of approaches to text categorization by using machine learning. One is by generating probabilistic models. The other is discriminative approaches. In this lecture, we're going to talk about the generative models. In the next lecture, we're going to talk about discriminative approaches. So the problem of text categorization is actually a very similar to document clustering. In that, we'll assume that each document it belongs to one category or one cluster. The main difference is that in clustering we don't really know what are the predefined categories are, what are the clusters. In fact, that's the goal of text clustering." + "time": "0:14", + "text": "There are in general two kinds of approaches to text categorization by using machine learning. 
One is generative probabilistic models. The other is discriminative approaches. In this lecture, we're going to talk about the generative models. In the next lecture, we're going to talk about discriminative approaches. So the problem of text categorization is actually very similar to document clustering. In that, we'll assume that each document belongs to one category or one cluster. The main difference is that in clustering we don't really know what the predefined categories are, what the clusters are. In fact, that's the goal of text clustering.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "0:55": "We want to find such clusters in the data." + "time": "0:55", + "text": "We want to find such clusters in the data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "0:59": "But in the case of categorization, we are given the categories. So we kind of have pre-defined categories and then based on these categories and training data, we would like to allocate a document to one of these categories or sometimes multiple categories. But because of the similarity of the two problems, we can actually get the document clustering models for text categorization. And we understand how we can use generated models to do text categorization from the perspective of clustering. And so, this is a slide that we've talked about before, about text clustering, where we assume there are multiple topics represented by word distributions. Each topic is one cluster. So once we estimated such a model, we faced a problem of deciding which cluster document d should belong to. And this question boils down to decide which theta i has been used to generate d." + "time": "0:59", + "text": "But in the case of categorization, we are given the categories. 
So we kind of have pre-defined categories and then based on these categories and training data, we would like to allocate a document to one of these categories or sometimes multiple categories. But because of the similarity of the two problems, we can actually adapt the document clustering models for text categorization. And we can understand how we can use generative models to do text categorization from the perspective of clustering. And so, this is a slide that we've talked about before, about text clustering, where we assume there are multiple topics represented by word distributions. Each topic is one cluster. So once we have estimated such a model, we face the problem of deciding which cluster document d should belong to. And this question boils down to deciding which theta i has been used to generate d.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "2:06": "Now, suppose d has L words represented as xi here. Now, how can you compute the probability that a particular topic word distribution zeta i has been used to generate this document?" + "time": "2:06", + "text": "Now, suppose d has L words represented as xi here. Now, how can you compute the probability that a particular topic word distribution theta i has been used to generate this document?", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "2:27": "Well, in general, we use base wall to make this influence and you can see this prior information here that we need to consider if a topic or cluster has a higher prior then it's more likely that the document has been from this cluster. And so, we should favor such a cluster. The other is a likelihood part, it's this part."
+ "time": "2:27", + "text": "Well, in general, we use Bayes rule to make this inference, and you can see this prior information here that we need to consider: if a topic or cluster has a higher prior, then it's more likely that the document has come from this cluster. And so, we should favor such a cluster. The other is a likelihood part, it's this part.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "2:56": "And this has to do with whether the topic word of distribution can explain the content of this document well. And we want to pick a topic that's high by both values. So more specifically, we just multiply them together and then choose which topic has the highest product. So more rigorously, this is what we'd be doing. So we're going to choose the topic that would maximize. This posterior probability at the top of a given document gets posterior because this one, p of the i, is the prior. That's our belief about which topic is more likely, before we observe any document. 
But this conditional probability here is the posterior probability of the topic after we have observed the document of d." + "time": "2:56", + "text": "And this has to do with whether the topic word distribution can explain the content of this document well. And we want to pick a topic that scores high on both values. So more specifically, we just multiply them together and then choose which topic has the highest product. So more rigorously, this is what we'd be doing. So we're going to choose the topic that would maximize this posterior probability of the topic given the document. It's called posterior because this one, p of theta i, is the prior. That's our belief about which topic is more likely, before we observe any document. But this conditional probability here is the posterior probability of the topic after we have observed the document d.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "3:49": "And base wall allows us to update this probability based on the prior and I have shown the details, below here you can see how the prior here is related to the posterior, on the left-hand side." + "time": "3:49", + "text": "And Bayes rule allows us to update this probability based on the prior, and I have shown the details; below here you can see how the prior here is related to the posterior, on the left-hand side.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "4:05": "And this is related to how well this word distribution explains the document here, and the two are related in this way. So to find the topic that has the higher posterior probability here it's equivalent to maximize this product as we have seen also, multiple times in this course." + "time": "4:05", + "text": "And this is related to how well this word distribution explains the document here, and the two are related in this way. So to find the topic that has the higher posterior probability here, it's equivalent to maximizing this product, as we have seen also multiple times in this course.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "4:32": "And we can then change the probability of document in your product of the probability of each word, and that's just because we've made an assumption about independence in generating each word. So this is just something that you have seen in document clustering."
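The decision rule in these entries, choosing the theta_i that maximizes the prior p(theta_i) times the likelihood of the document, can be sketched as follows; the two topics, their word probabilities, and the documents are invented for illustration only:

```python
# Hypothetical posterior-maximization rule for assigning a document to a
# topic: pick the theta_i maximizing p(theta_i) * product of p(w | theta_i).

priors = [0.5, 0.5]                        # p(theta_i) for two topics
word_dists = [                             # p(w | theta_i), one dict per topic
    {"text": 0.05, "mining": 0.04, "game": 0.001},
    {"text": 0.01, "mining": 0.001, "game": 0.06},
]

def best_topic(doc_words):
    """Index of the topic with the highest prior-times-likelihood score."""
    scores = []
    for prior, dist in zip(priors, word_dists):
        score = prior
        for w in doc_words:
            score *= dist[w]               # independent word likelihoods
        scores.append(score)
    return scores.index(max(scores))

print(best_topic(["text", "mining"]))      # topic 0 explains these words best
```

Both the prior and the likelihood matter: a topic that explains the words well can still lose to one with a much larger prior.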
+ "time": "4:32", + "text": "And we can then decompose the probability of the document into a product of the probabilities of each word, and that's just because we've made an assumption about independence in generating each word. So this is just something that you have seen in document clustering.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "4:50": "And we now can see clearly how we can assign a document to a category based on the information about word distributions for these categories and the prior on these categories. So this idea can be directly adapted to do categorization. And this is precisely what a Naive Bayes Classifier is doing. So here it's most really the same information except that we're looking at the categorization problem now. So we assume that if theta i represents category i accurately, that means the word distribution characterizes the content of documents in category i accurately. Then, what we can do is precisely like what we did for text clustering. 
Namely we're going to assign document d to the category that has the highest probability of generating this document. In other words, we're going to maximize this posterior probability as well.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "5:56": "And this is related to the prior and the [INAUDIBLE] as you have seen on the previous slide. And so, naturally we can decompose this [INAUDIBLE] into a product as you see here. Now, here, I change the notation so that we will write down the product as product of all the words in the vocabulary, and even though the document doesn't contain all the words. And the product is still accurately representing the product of all the words in the document because of this count here." + "time": "5:56", + "text": "And this is related to the prior and the [INAUDIBLE] as you have seen on the previous slide. And so, naturally we can decompose this [INAUDIBLE] into a product as you see here. Now, here, I change the notation so that we will write down the product as product of all the words in the vocabulary, and even though the document doesn't contain all the words. And the product is still accurately representing the product of all the words in the document because of this count here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "6:37": "When a word, it doesn't occur in the document. The count would be 0, so this time will just disappear. So if actively we'll just have the product over other words in the document. So basically, with Naive Bayes Classifier, we're going to score each category for the document by this function." + "time": "6:37", + "text": "When a word, it doesn't occur in the document. The count would be 0, so this time will just disappear. So if actively we'll just have the product over other words in the document. 
So basically, with Naive Bayes Classifier, we're going to score each category for the document by this function." + "time": "6:37", + "text": "When a word doesn't occur in the document, the count would be 0, so this term will just disappear. So effectively we'll just have the product over the words in the document. So basically, with a Naive Bayes Classifier, we're going to score each category for the document by this function.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "6:56": "Now, you may notice that here it involves a product of a lot of small probabilities. And this can cause and the four problem. So one way to solve the problem is thru take logarithm of this function, which it doesn't changes all the often these categories. But will helps us preserve precision. And so, this is often the function that we actually use to score each category and then we're going to choose the category that has the highest score by this function. 
So this is called an Naive Bayes Classifier, now the keyword base is understandable because we are applying a base rule here when we go from the posterior probability of the topic to a product of the likelihood and the prior." + "time": "6:56", + "text": "Now, you may notice that here it involves a product of a lot of small probabilities. And this can cause an underflow problem. So one way to solve the problem is to take the logarithm of this function, which doesn't change the ordering of these categories but helps us preserve precision. And so, this is often the function that we actually use to score each category, and then we're going to choose the category that has the highest score by this function. So this is called a Naive Bayes Classifier; the keyword Bayes is understandable because we are applying Bayes rule here when we go from the posterior probability of the topic to a product of the likelihood and the prior.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "7:47": "Now, it's also called a naive because we've made an assumption that every word in the document is generated independently, and this is indeed a naive assumption because in reality they're not generating independently. Once you see some word, then other words will more likely occur. For example, if you have seen a word like a text. Than that mixed category, they see more clustering more likely to appear than if you have not the same text." + "time": "7:47", + "text": "Now, it's also called naive because we've made an assumption that every word in the document is generated independently, and this is indeed a naive assumption because in reality they're not generated independently. Once you see some word, then other words will more likely occur. For example, if you have seen a word like 'text', then in that text category a word like 'clustering' is more likely to appear than if you have not seen 'text'.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "8:15": "But this assumption allows us to simplify the problem. And it's actually quite effective for many text categorization tasks. But you should know that this kind of model doesn't have to make this assumption. We could for example, assume that words may be dependent on each other. So that would make it a bigram analogy model or a trigram analogy model. And of course you can even use a mixture model to model what the document looks like in each category. So in nature, they will be all using base rule to do classification. 
But the actual generating model for documents in each category can vary. And here, we just talk about very simple case perhaps, the simplest case." + "time": "8:15", + "text": "But this assumption allows us to simplify the problem. And it's actually quite effective for many text categorization tasks. But you should know that this kind of model doesn't have to make this assumption. We could, for example, assume that words may be dependent on each other. So that would make it a bigram language model or a trigram language model. And of course you can even use a mixture model to model what the document looks like in each category. So in nature, they will all be using Bayes rule to do classification. But the actual generative model for documents in each category can vary. And here, we just talk about a very simple case, perhaps the simplest case.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "9:00": "So now the question is, how can we make sure theta i actually represents category i accurately? Now in clustering, we learned that this category i or what are the distributions for category i from the data. But in our case, what can we do to make sure this theta i represents indeed category i." + "time": "9:00", + "text": "So now the question is, how can we make sure theta i actually represents category i accurately? Now in clustering, we learned this category i, or the word distributions for category i, from the data. But in our case, what can we do to make sure this theta i indeed represents category i?", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "9:25": "Well if you think about the question, and you likely come up with the idea of using the training data."
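The log-space scoring function described a few entries back (log of the prior plus the sum of word counts times log word probabilities, which avoids underflow without changing the ranking of categories) might look like this in practice; the categories, documents, and probabilities are made up for illustration:

```python
import math
from collections import Counter

# Hypothetical log-space Naive Bayes score:
# score(i, d) = log p(theta_i) + sum over w of c(w, d) * log p(w | theta_i).
# Logarithms turn the product of many small probabilities into a sum,
# preserving precision while keeping the same argmax.

def nb_score(doc_words, prior, word_dist):
    counts = Counter(doc_words)                 # c(w, d)
    return math.log(prior) + sum(
        c * math.log(word_dist[w]) for w, c in counts.items()
    )

prior = {"sports": 0.6, "science": 0.4}
dist = {
    "sports": {"game": 0.05, "theory": 0.001},
    "science": {"game": 0.005, "theory": 0.02},
}
doc = ["game", "game", "theory"]
scores = {y: nb_score(doc, prior[y], dist[y]) for y in prior}
print(max(scores, key=scores.get))
```

Note the scores are log-probabilities, so they are negative; only their relative order matters for classification.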
+ "time": "9:25", + "text": "Well if you think about the question, and you likely come up with the idea of using the training data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "9:34": "Indeed in the textbook, we typically assume that there is training data available and those are the documents that unknown to have generator from which category. In other words, these are the documents with known categories assigned and of course human experts must do that. In here, you see that T1 represents the set of documents that are known to have the generator from category 1. And T2 represents the documents that are known to have been generated from category 2, etc. Now if you look at this picture, you'll see that the model here is really a simplified unigram language model. It's no longer mixed modal, why? Because we already know which distribution has been used to generate which documents. There's no uncertainty here, there's no mixing of different categories here." + "time": "9:34", + "text": "Indeed in the textbook, we typically assume that there is training data available and those are the documents that unknown to have generator from which category. In other words, these are the documents with known categories assigned and of course human experts must do that. In here, you see that T1 represents the set of documents that are known to have the generator from category 1. And T2 represents the documents that are known to have been generated from category 2, etc. Now if you look at this picture, you'll see that the model here is really a simplified unigram language model. It's no longer mixed modal, why? Because we already know which distribution has been used to generate which documents. 
There's no uncertainty here, there's no mixing of different categories here." + "time": "9:34", + "text": "Indeed in the textbook, we typically assume that there is training data available, and those are the documents that are known to have been generated from which category. In other words, these are the documents with known categories assigned, and of course human experts must do that. In here, you see that T1 represents the set of documents that are known to have been generated from category 1. And T2 represents the documents that are known to have been generated from category 2, etc. Now if you look at this picture, you'll see that the model here is really a simplified unigram language model. It's no longer a mixture model. Why? Because we already know which distribution has been used to generate which documents. There's no uncertainty here, there's no mixing of different categories here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "10:30": "So the estimation problem of course would be simplified. But in general, you can imagine what we want to do is estimate these probabilities that I marked here. And what other probability is that we have to estimate it in order to do relation. Well there are two kinds. So one is the prior, the probability of theta i and this indicates how popular each category is or how likely will it have observed the document in that category. The other kind is the water distributions and we want to know what words have high probabilities for each category." + "time": "10:30", + "text": "So the estimation problem of course would be simplified. But in general, you can imagine what we want to do is estimate these probabilities that I marked here. And what are the probabilities that we have to estimate in order to do classification? Well there are two kinds. So one is the prior, the probability of theta i, and this indicates how popular each category is, or how likely we are to have observed a document in that category. The other kind is the word distributions, and we want to know what words have high probabilities for each category.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "11:11": "So the idea then is to just use observe the training data to estimate these two probabilities." + "time": "11:11", + "text": "So the idea then is to just use the observed training data to estimate these two probabilities.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "11:18": "And in general, we can do this separately for the different categories. 
That's just because these documents are known to be generated from a specific category. So once we know that, it's in some sense irrelevant of what other categories we are also dealing with." + "time": "11:18", + "text": "And in general, we can do this separately for the different categories. That's just because these documents are known to be generated from a specific category. So once we know that, it's in some sense irrelevant of what other categories we are also dealing with.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "11:37": "So now this is a statistical estimation problem. We have observed some data from some model and we want to guess the parameters of this model. We want to take our best guess of the parameters." + "time": "11:37", + "text": "So now this is a statistical estimation problem. We have observed some data from some model and we want to guess the parameters of this model. We want to take our best guess of the parameters.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "11:51": "And this is a problem that we have seen also several times in this course. Now, if you haven't thought about this problem, haven't seen life based classifier. It would be very useful for you to pause the video for a moment and to think about how to solve this problem. So let me state the problem again. So let's just think about with category 1, we know there is one word of distribution that has been used to generate documents. And we generate each word in the document independently, and we know that we have observed a set of n sub 1 documents in the set of Q1. These documents have been all generated from category 1. Namely have been all generated using this same word distribution. Now the question is, what would be your guess or estimate of the probability of each word in this distribution? 
And what would be your guess of the entire probability of this category? Of course, this singular probability depends on how likely are you to see documents in other categories?" + "time": "11:51", + "text": "And this is a problem that we have seen also several times in this course. Now, if you haven't thought about this problem, or haven't seen a Naive Bayes classifier, it would be very useful for you to pause the video for a moment and to think about how to solve this problem. So let me state the problem again. So let's just think about category 1. We know there is one word distribution that has been used to generate documents. And we generate each word in the document independently, and we know that we have observed a set of n sub 1 documents in the set of Q1. These documents have been all generated from category 1. Namely, they have all been generated using this same word distribution. Now the question is, what would be your guess or estimate of the probability of each word in this distribution? And what would be your guess of the entire probability of this category? Of course, this category probability depends on how likely you are to see documents in other categories.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "12:55": "So think for a moment, how do you use all this training data including all these documents that are known to be in these k categories, to estimate all these parameters? Now, if you spend some time to think about this and it would help you understand the following few slides. So do spend some time to make sure that you can try to solve this problem, or do you best to solve the problem yourself. Now if you have thought about and then you will realize the following to it."
+ "time": "12:55", + "text": "So think for a moment, how do you use all this training data, including all these documents that are known to be in these k categories, to estimate all these parameters? Now, if you spend some time to think about this, it would help you understand the following few slides. So do spend some time to make sure that you try to solve this problem, or do your best to solve the problem yourself. Now if you have thought about it, then you will realize the following.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "13:29": "First, what's the bases for estimating the prior or the probability of each category. Well this has to do with whether you have observed a lot of documents form that category. Intuitively, you have seen a lot of documents in sports and very few in medical science. Then you guess is that the probability of the sports category is larger or your prior on the category would be larger." + "time": "13:29", + "text": "First, what's the basis for estimating the prior, or the probability of each category? Well this has to do with whether you have observed a lot of documents from that category. Intuitively, if you have seen a lot of documents in sports and very few in medical science, then your guess is that the probability of the sports category is larger, or your prior on the category would be larger.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "13:57": "And what about the basis for estimating the probability of where each category? Well the same, and you'll be just assuming that words that are observed frequently in the documents that are known to be generated from a category will likely have a higher probability. And that's just a maximum Naive Bayes made of. 
Indeed, that's what we can do, so this made the probability of which category and to answer the question, which category is most popular? Then we can simply normalize, the count of documents in each category. So here you see N sub i denotes the number of documents in each category." + "time": "13:57", + "text": "And what about the basis for estimating the probability of words in each category? Well the same, and you'll be just assuming that words that are observed frequently in the documents that are known to be generated from a category will likely have a higher probability. And that's just the maximum likelihood estimate. Indeed, that's what we can do. So to estimate the probability of each category, and to answer the question of which category is most popular, we can simply normalize the count of documents in each category. So here you see N sub i denotes the number of documents in each category.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "14:37": "And we simply just normalize these counts to make this a probability. In other words, we make this probability proportional to the size of training intercept in each category that's a size of the set t sub i." + "time": "14:37", + "text": "And we simply just normalize these counts to make this a probability. In other words, we make this probability proportional to the size of the training set in each category, that's the size of the set T sub i.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "14:55": "Now what about the word distribution? Well, we do the same. Again this time we can do this for each category. So let's say, we're considering category i or theta i. So which word has a higher probability? Well, we simply count the word occurrences in the documents that are known to be generated from theta i."
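The two maximum likelihood estimates described here, the prior proportional to the number of training documents N_i and p(w | theta_i) proportional to the count of w in the training set T_i, can be sketched as follows; the toy training set, categories, and words are all invented:

```python
from collections import Counter

# Hypothetical training sets T_i: a list of tokenized documents per category.
training = {
    "sports": [["game", "score"], ["game", "win"]],
    "science": [["theory", "data"]],
}

# Prior: p(theta_i) = N_i / sum over j of N_j.
n_total = sum(len(docs) for docs in training.values())
prior = {c: len(docs) / n_total for c, docs in training.items()}

# Word distributions: p(w | theta_i) = c(w, T_i) / total count in T_i.
word_dist = {}
for c, docs in training.items():
    counts = Counter(w for d in docs for w in d)   # c(w, T_i)
    total = sum(counts.values())
    word_dist[c] = {w: n / total for w, n in counts.items()}

print(prior["sports"])              # 2 of the 3 training documents
print(word_dist["sports"]["game"])  # 2 of the 4 word occurrences in T_sports
```

The normalizers here are exactly the constraints the lecture mentions: the priors sum to 1 over categories, and each word distribution sums to 1 over the vocabulary.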
+ "time": "14:55", + "text": "Now what about the word distribution? Well, we do the same. Again this time we can do this for each category. So let's say, we're considering category i or theta i. So which word has a higher probability? Well, we simply count the word occurrences in the documents that are known to be generated from theta i.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "15:20": "And then we put together all the counts of the same word in the set. And then we just normalize these counts to make this distribution of all the words make all the probabilities off these words to 1. So in this case, you're going to see this is a proportional through the count of the word in the collection of training documents T sub i and that's denoted by c of w and T sub i." + "time": "15:20", + "text": "And then we put together all the counts of the same word in the set. And then we just normalize these counts to make this distribution of all the words make all the probabilities off these words to 1. So in this case, you're going to see this is a proportional through the count of the word in the collection of training documents T sub i and that's denoted by c of w and T sub i.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "15:49": "Now, you may notice that we often write down probable estimate in the form of being proportional for certain numbers. And this is often sufficient, because we have some constraints on these distributions. So the normalizer is dictated by the constraint. So in this case, it will be useful for you to think about what are the constraints on these two kinds of probabilities? So once you figure out the answer to this question, and you will know how to normalize these accounts. And so this is a good exercise to work on if it's not obvious to you. 
There is another issue in Naive Bayes which is a smoothing. In fact the smoothing is a general problem in older estimate of language morals. And this has to do with, what would happen if you have observed a small amount of data? So smoothing is an important technique to address that outsmarts this. In our case, the training data can be small and when the data set is small when we use maximum likely estimator we often face the problem of zero probability. That means if an event is not observed then the estimated probability would be zero. In this case, if we have not seen a word in the training documents for let's say, category i. Then our estimator would be zero for the probability of this one in this category, and this is generally not accurate. So we have to do smoothing to make sure it's not zero probability. The other reason for smoothing is that this is a way to bring prior knowledge, and this is also generally true for a lot of situations of smoothing. When the data set is small, we tend to rely on some prior knowledge to solve the problem. So in this case our [INAUDIBLE] says that no word should have zero probability. So smoothing allows us to inject these to prior initial that no order has a real zero probability." + "time": "15:49", + "text": "Now, you may notice that we often write down probable estimate in the form of being proportional for certain numbers. And this is often sufficient, because we have some constraints on these distributions. So the normalizer is dictated by the constraint. So in this case, it will be useful for you to think about what are the constraints on these two kinds of probabilities? So once you figure out the answer to this question, and you will know how to normalize these accounts. And so this is a good exercise to work on if it's not obvious to you. There is another issue in Naive Bayes which is a smoothing. In fact the smoothing is a general problem in older estimate of language morals. 
And this has to do with what would happen if you have observed a small amount of data. So smoothing is an important technique to address data sparseness. In our case, the training data can be small, and when the data set is small and we use the maximum likelihood estimator, we often face the problem of zero probability. That means if an event is not observed, then the estimated probability would be zero. In this case, if we have not seen a word in the training documents for, let's say, category i, then our estimate would be zero for the probability of this word in this category, and this is generally not accurate. So we have to do smoothing to make sure it's not a zero probability. The other reason for smoothing is that this is a way to bring in prior knowledge, and this is also generally true for a lot of situations of smoothing. When the data set is small, we tend to rely on some prior knowledge to solve the problem. So in this case our [INAUDIBLE] says that no word should have zero probability. So smoothing allows us to inject this prior knowledge that no word has a true zero probability.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "17:54": "There is also a third reason, which is sometimes not very obvious, but we'll explain it in a moment. And that is to help achieve discriminative weighting of terms." + "time": "17:54", + "text": "There is also a third reason, which is sometimes not very obvious, but we'll explain it in a moment. And that is to help achieve discriminative weighting of terms. 
And this is also called IDF weighting, inverse document frequency weighting, which you have seen in mining word relations.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "18:14": "So how do we do smoothing? Well, in general we add pseudocounts to these events to make sure that no event has a 0 count." + "time": "18:14", + "text": "So how do we do smoothing? Well, in general we add pseudocounts to these events to make sure that no event has a 0 count.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "18:22": "So one possible way of smoothing the probability of the category is to simply add a small non-negative constant delta to the count. Let's pretend that every category has actually some extra number of documents represented by delta." + "time": "18:22", + "text": "So one possible way of smoothing the probability of the category is to simply add a small non-negative constant delta to the count. Let's pretend that every category has actually some extra number of documents represented by delta.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "18:40": "And in the denominator we also add k multiplied by delta, because we want the probabilities to sum to 1. So in total we've added delta k times, because we have k categories. 
Therefore in this sum, we have to also add k multiplied by delta as the total pseudocount that we add to the estimate." + "time": "18:40", + "text": "And in the denominator we also add k multiplied by delta, because we want the probabilities to sum to 1. So in total we've added delta k times, because we have k categories. Therefore in this sum, we have to also add k multiplied by delta as the total pseudocount that we add to the estimate.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "19:06": "Now, it's interesting to think about the influence of delta. Obviously delta is a smoothing parameter here, meaning that the larger delta is, the more smoothing we will do, and that means we'll rely more on pseudocounts. And we might indeed ignore the actual counts if delta is set to infinity. Imagine what would happen if delta approaches positive infinity? Well, we are going to say every category has an infinite number of documents. And then there's no distinction between them, so it becomes just uniform." + "time": "19:06", + "text": "Now, it's interesting to think about the influence of delta. Obviously delta is a smoothing parameter here, meaning that the larger delta is, the more smoothing we will do, and that means we'll rely more on pseudocounts. And we might indeed ignore the actual counts if delta is set to infinity. Imagine what would happen if delta approaches positive infinity? Well, we are going to say every category has an infinite number of documents. And then there's no distinction between them, so it becomes just uniform.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "19:44": "What if delta is 0? Well, we just go back to the original estimate based on the observed training data to estimate the probability of each category. Now we can do the same for the word distribution. But in this case, sometimes we find it useful to use a nonuniform pseudocount for the word. So here you'll see we'll add a pseudocount to each word, and that's mu multiplied by the probability of the word given by a background language model, theta sub b. 
Now that background model in general can be estimated by using a large collection of text. Or in this case we will use the whole set of all the training data to estimate this background language model. But we don't have to use this one; we can use larger text data that are available from somewhere else." + "time": "19:44", + "text": "What if delta is 0? Well, we just go back to the original estimate based on the observed training data to estimate the probability of each category. Now we can do the same for the word distribution. But in this case, sometimes we find it useful to use a nonuniform pseudocount for the word. So here you'll see we'll add a pseudocount to each word, and that's mu multiplied by the probability of the word given by a background language model, theta sub b. Now that background model in general can be estimated by using a large collection of text. Or in this case we will use the whole set of all the training data to estimate this background language model. But we don't have to use this one; we can use larger text data that are available from somewhere else.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "20:36": "Now if we use such a background language model for the pseudocounts, we'll find that some words will receive more pseudocounts. So what are those words? Well, those are the common words, because they get a high probability from the background language model. So the pseudocounts added for such words will be higher. Rare words, on the other hand, will have smaller pseudocounts. Now this addition of the background model would cause a nonuniform smoothing of these word distributions. We're going to bring the probability of those common words to a higher level, because of the background model. Now this helps make the difference in the probability of such words smaller across categories. 
Because every category gets some help from the background for words like 'the' and 'a', which have high probabilities. Therefore, it's not always so important that each category has documents that contain a lot of occurrences of such words, as the estimate is more influenced by the background model. And the consequence is that when we do categorization, such words tend not to influence the decision as much as words that have small probabilities from the background language model. Those words don't get much help from the background language model. So the difference would be primarily because of the differences in the occurrences in the training documents in different categories." + "time": "20:36", + "text": "Now if we use such a background language model for the pseudocounts, we'll find that some words will receive more pseudocounts. So what are those words? Well, those are the common words, because they get a high probability from the background language model. So the pseudocounts added for such words will be higher. Rare words, on the other hand, will have smaller pseudocounts. Now this addition of the background model would cause a nonuniform smoothing of these word distributions. We're going to bring the probability of those common words to a higher level, because of the background model. Now this helps make the difference in the probability of such words smaller across categories. Because every category gets some help from the background for words like 'the' and 'a', which have high probabilities. Therefore, it's not always so important that each category has documents that contain a lot of occurrences of such words, as the estimate is more influenced by the background model. And the consequence is that when we do categorization, such words tend not to influence the decision as much as words that have small probabilities from the background language model. Those words don't get much help from the background language model. 
So the difference would be primarily because of the differences in the occurrences in the training documents in different categories.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "22:05": "We also see another smoothing parameter, mu, here, which controls the amount of smoothing, just like delta does for the other probability." + "time": "22:05", + "text": "We also see another smoothing parameter, mu, here, which controls the amount of smoothing, just like delta does for the other probability.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "22:14": "And you can easily understand why we add mu to the denominator, because that represents the sum of all the pseudocounts that we add for all the words." + "time": "22:14", + "text": "And you can easily understand why we add mu to the denominator, because that represents the sum of all the pseudocounts that we add for all the words.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "22:25": "So mu is also a non-negative constant and it's [INAUDIBLE] set to control smoothing. Now there are some interesting special cases to think about as well. First, let's think about when mu approaches infinity, what would happen? 
Well, in this case the estimate would approach", + "time": "22:25", + "text": "So mu is also a non-negative constant and it's [INAUDIBLE] set to control smoothing. Now there are some interesting special cases to think about as well. First, let's think about when mu approaches infinity, what would happen? Well, in this case the estimate would approach", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "22:43": "the background language model. So we will bring every word distribution to the same background language model, and that essentially removes the difference between these categories. Obviously, we don't want to do that. The other special case is to think about the background model, and suppose we actually set it to a uniform distribution, let's say, 1 over the size of the vocabulary. So each word has the same probability; then this smoothing formula is going to be very similar to the one on the top where we add delta, because we're going to add a constant pseudocount to every word." + "time": "22:43", + "text": "the background language model. So we will bring every word distribution to the same background language model, and that essentially removes the difference between these categories. Obviously, we don't want to do that. The other special case is to think about the background model, and suppose we actually set it to a uniform distribution, let's say, 1 over the size of the vocabulary. So each word has the same probability; then this smoothing formula is going to be very similar to the one on the top where we add delta, because we're going to add a constant pseudocount to every word.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "23:29": "So in general, in Naive Bayes categorization we have to do such smoothing. And then once we have these probabilities, we can compute the score for each category for a document, and then choose the category with the highest score, as we discussed earlier."
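The two smoothed estimates the transcript describes — add-delta smoothing for the category prior, and background-model smoothing with parameter mu for the word distribution — can be sketched in code. This is a minimal illustration of the formulas as described, not code from the lecture; all function and variable names are ours:

```python
from collections import Counter

def smoothed_category_prob(n_docs_in_category, n_docs_total, n_categories, delta=1.0):
    """Add-delta smoothing: pretend each of the k categories has delta extra documents.

    The k * delta in the denominator is the total pseudocount added, so the
    k category probabilities still sum to 1.
    """
    return (n_docs_in_category + delta) / (n_docs_total + n_categories * delta)

def smoothed_word_prob(word, counts, background, mu=100.0):
    """Background smoothing: add mu * p(w|B) pseudocounts for word w.

    The pseudocounts sum to mu over the vocabulary (if the background model
    sums to 1), which is why mu appears in the denominator.
    """
    total = sum(counts.values())
    return (counts.get(word, 0) + mu * background[word]) / (total + mu)
```

As delta (or mu) grows, the estimate approaches the uniform (or background) distribution; at 0 it reduces to the unsmoothed maximum likelihood estimate, matching the special cases discussed in the transcript.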
+ "time": "23:29", + "text": "So in general, in Naive Bayes categorization we have to do such a small thing. And then once we have these probabilities, then we can compute the score for each category. For a document and then choose the category where it was the highest score as we discussed earlier.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "23:49": "Now, it's useful to further understand whether the Naive Bayes scoring function actually makes sense. So to understand that, and also to understand why adding a background model will actually achieve the effect of IDF weighting and to penalize common words. So suppose we have just two categories and we're going to score based on their ratio of probability, right? So this is the." + "time": "23:49", + "text": "Now, it's useful to further understand whether the Naive Bayes scoring function actually makes sense. So to understand that, and also to understand why adding a background model will actually achieve the effect of IDF weighting and to penalize common words. So suppose we have just two categories and we're going to score based on their ratio of probability, right? So this is the.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "24:24": "Lets say this is our scoring function for two categories, right? So, this is a score of a document for these two categories. And we're going to score based on this probability ratio. So if the ratio is larger, then it means it's more likely to be in category one. So the larger the score is the more likely the document is in category one. So by using Bayes' rule, we can write down this ratio as follows, and you have seen this before." + "time": "24:24", + "text": "Lets say this is our scoring function for two categories, right? So, this is a score of a document for these two categories. 
And we're going to score based on this probability ratio. So if the ratio is larger, then it means it's more likely to be in category one. So the larger the score is, the more likely the document is in category one. So by using Bayes' rule, we can write down this ratio as follows, and you have seen this before." + "time": "24:24", + "text": "Let's say this is our scoring function for two categories, right? So, this is the score of a document for these two categories. And we're going to score based on this probability ratio. So if the ratio is larger, then it means it's more likely to be in category one. So the larger the score is, the more likely the document is in category one. So by using Bayes' rule, we can write down this ratio as follows, and you have seen this before.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "25:09": "Now, we generally take the logarithm of this ratio to avoid small probabilities. And this would then give us the formula in the second line. And here we see something really interesting, because this is our scoring function for deciding between the two categories." + "time": "25:09", + "text": "Now, we generally take the logarithm of this ratio to avoid small probabilities. And this would then give us the formula in the second line. And here we see something really interesting, because this is our scoring function for deciding between the two categories.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "25:30": "And if you look at this function, we'll see it has several parts. The first part here is actually a log of a probability ratio. And so this is a category bias." + "time": "25:30", + "text": "And if you look at this function, we'll see it has several parts. The first part here is actually a log of a probability ratio. And so this is a category bias.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "25:41": "It doesn't really depend on the document. It just says which category is more likely, and then we would favor this category slightly, right? So, the second part has a sum over all the words, right? So, these are the words that are observed in the document, but in general we can consider all the words in the vocabulary. 
So here we're going to collect the evidence about which category is more likely, right? So inside of the sum you can see there is a product of two things. The first is the count of the word. And this count of the word serves as a feature to represent the document." + "time": "25:41", + "text": "It doesn't really depend on the document. It just says which category is more likely, and then we would favor this category slightly, right? So, the second part has a sum over all the words, right? So, these are the words that are observed in the document, but in general we can consider all the words in the vocabulary. So here we're going to collect the evidence about which category is more likely, right? So inside of the sum you can see there is a product of two things. The first is the count of the word. And this count of the word serves as a feature to represent the document.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "26:27": "And this is what we can collect from the document. The second part is the weight of this feature, here it's the weight on this word, right? This weight tells us to what extent observing this word contributes to our decision to put this document in category one. Now remember, the higher the scoring function is, the more likely it's in category one. Now if you look at this weight, it's basically based on the ratio of the probabilities of the word from each of the two distributions. Essentially we're comparing the probability of the word from the two distributions. And if it's higher according to theta 1 than according to theta 2, then this weight would be positive. And therefore it means when we observe such a word, we will say that it's more likely to be from category one. And the more we observe such a word, the more likely the document will be classified as theta 1."
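The two-category scoring function being described — a category bias plus, for each word, a count times a log probability ratio weight — can be sketched as follows. This is our own minimal illustration of the decomposition, with made-up numbers, not code from the lecture:

```python
import math
from collections import Counter

def nb_log_odds(doc_words, prior1, prior2, word_probs1, word_probs2):
    """Score log[p(theta1|d) / p(theta2|d)] as bias + sum of count * weight.

    The weight of word w is log[p(w|theta1) / p(w|theta2)]: positive weights
    are evidence for category one, negative weights for category two.
    """
    score = math.log(prior1 / prior2)  # category bias, independent of the document
    for word, count in Counter(doc_words).items():
        # feature value: word count; feature weight: log probability ratio
        score += count * math.log(word_probs1[word] / word_probs2[word])
    return score
```

A positive score favors category one, a negative score category two; words that are equally likely under both distributions get weight 0 and contribute no evidence.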
+ "time": "26:27", + "text": "And this is what we can collect from document. The second part is the weight of this feature, here it's the weight on which word, right? This weight tells us to what extent observing this word helps contribute in our decision to put this document in category one. Now remember, the higher the scoring function is, the more likely it's in category one. Now if you look at this ratio, basically, sorry this weight it's basically based on the ratio of the probability of the word from each of the two distributions. Essentially we're comparing the probability of the word from the two distributions. And if it's a higher according to theta 1, then according to theta 2, then this weight would be positive. And therefore it means when we observe such a word, we will say that it's more likely to be from category one. And the more we observe such a word, the more likely the document will be classified as theta 1.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "27:35": "If, on the other hand, the probability of the word from theta 1 is smaller than the probability of the word from theta 2, then you can see that this word is negative. Therefore, this is negative evidence for supporting category one. That means the more we observe such a word, the more likely the document is actually from theta 2." + "time": "27:35", + "text": "If, on the other hand, the probability of the word from theta 1 is smaller than the probability of the word from theta 2, then you can see that this word is negative. Therefore, this is negative evidence for supporting category one. That means the more we observe such a word, the more likely the document is actually from theta 2.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "27:58": "So this formula now makes a little sense, right? 
So we're going to aggregate all the evidence from the document; we take a sum over all the words. We can call these the features that we collected from the document that would help us make the decision. And then each feature has a weight that tells us how" + "time": "27:58", + "text": "So this formula now makes sense, right? So we're going to aggregate all the evidence from the document; we take a sum over all the words. We can call these the features that we collected from the document that would help us make the decision. And then each feature has a weight that tells us how", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "28:19": "much this feature supports category one or category two. And this is estimated as the log of a probability ratio here in na\u00efve Bayes." + "time": "28:19", + "text": "much this feature supports category one or category two. And this is estimated as the log of a probability ratio here in na\u00efve Bayes.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "28:32": "And then finally we have this constant bias here. 
So that formula actually is a formula that can be generalized to accommodate more features, and that's why I have introduced some other symbols here: beta 0 to denote the bias, f sub i to denote each feature, and beta sub i to denote the weight on each feature. Now once we do this generalization, what we see is that in general we can represent the document by a feature vector; here of course, in this case, f sub i is the count of a word. But in general, we can put in any features that we think are relevant for categorization. For example, document length, or font size, or counts of other patterns in the document." + "time": "28:32", + "text": "And then finally we have this constant bias here. So that formula actually is a formula that can be generalized to accommodate more features, and that's why I have introduced some other symbols here: beta 0 to denote the bias, f sub i to denote each feature, and beta sub i to denote the weight on each feature. Now once we do this generalization, what we see is that in general we can represent the document by a feature vector; here of course, in this case, f sub i is the count of a word. But in general, we can put in any features that we think are relevant for categorization. For example, document length, or font size, or counts of other patterns in the document.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "29:27": "And then our scoring function can be defined as a sum of a constant beta 0 and the weighted sum of all the features." + "time": "29:27", + "text": "And then our scoring function can be defined as a sum of a constant beta 0 and the weighted sum of all the features.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "29:42": "So if each f sub i is a feature value, then we multiply the value by the corresponding weight, beta sub i, and we just take the sum. And this is the aggregate of all the evidence that we can collect from all these features. And of course there are parameters here. So what are the parameters? Well, these are the betas. These betas are weights. And with a proper setting of the weights, we can expect such a scoring function to work well to classify documents, just like in the case of naive Bayes. We can clearly see the naive Bayes classifier as a special case of this general classifier. Actually, this general form is very close to a classifier called logistic regression, and this is actually one of those conditional approaches or discriminative approaches to classification."
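The generalized scoring function described here — a bias beta 0 plus a weighted sum over arbitrary features — is essentially a one-liner. The feature names and weights below are invented purely for illustration:

```python
def linear_score(features, beta0, betas):
    """score(d) = beta_0 + sum_i beta_i * f_i: aggregate evidence from all features."""
    return beta0 + sum(betas[name] * value for name, value in features.items())
```

With word counts as the only features and log probability ratios as the betas, this reduces to the naive Bayes scoring function, which is why naive Bayes is a special case of this general form.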
+ "time": "29:42", + "text": "So if each f sub i is a feature value then we multiply the value by the corresponding weight, beta sub i, and we just take the sum. And this is the aggregate of all evidence that we can collect from all these features. And of course there are parameters here. So what are the parameters? Well, these are the betas. These betas are weights. And with a proper setting of the weights, then we can expect such a scoring function to work well to classify documents, just like in the case of naive Bayes. We can clearly see naive Bayes classifier as a special case of this general classifier. Actually, this general form is very close to a classifier called a logistical regression, and this is actually one of those conditional approaches or discriminative approaches to classification.", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" }, { - "30:32": "And we're going to talk more about such approaches later, but here I want you to note that there is a strong connection, a close connection between the two kinds of approaches. And this slide shows how naive Bayes classifier can be connected to a logistic regression. And you can also see that in discriminative classifiers that tend to use more general form on the bottom, we can accommodate more features to solve the problem. [MUSIC]" + "time": "30:32", + "text": "And we're going to talk more about such approaches later, but here I want you to note that there is a strong connection, a close connection between the two kinds of approaches. And this slide shows how naive Bayes classifier can be connected to a logistic regression. And you can also see that in discriminative classifiers that tend to use more general form on the bottom, we can accommodate more features to solve the problem. 
[MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" } ] } @@ -3292,853 +5366,1403 @@ { "5-1-text-categorization-discriminative-classifier-part-1": [ { - "0:00": "[SOUND] This lecture is about the discriminative classifiers for text categorization." + "time": "0:00", + "text": "[SOUND] This lecture is about the discriminative classifiers for text categorization.", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" }, { - "0:13": "In this lecture we're going to continue talking about how to do text categorization and cover discriminative approaches. This is a slide that you have seen from the discussion of Naive Bayes Classifier, where we have shown that although Naive Bayes Classifier tries to model the generation of text data, from each categories, we can actually use Bayes' rule to eventually rewrite the scoring function as you see on this slide. And this scoring function is basically a weighted combination of a lot of word features, where the feature values are word counts, and the feature weights are the log of probability ratios of the word given by two distributions here." + "time": "0:13", + "text": "In this lecture we're going to continue talking about how to do text categorization and cover discriminative approaches. This is a slide that you have seen from the discussion of Naive Bayes Classifier, where we have shown that although Naive Bayes Classifier tries to model the generation of text data, from each categories, we can actually use Bayes' rule to eventually rewrite the scoring function as you see on this slide. 
And this scoring function is basically a weighted combination of a lot of word features, where the feature values are word counts, and the feature weights are the logs of the probability ratios of the words given by two distributions here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" }, { - "0:57": "Now this kind of scoring function can actually be a general scoring function, where we can in general represent text data as a feature vector. Of course the features don't have to all be words. The features can be other signals that we want to use. And we mentioned that this is precisely similar to logistic regression. So, in this lecture we're going to introduce some discriminative classifiers. They try to model the conditional distribution of labels given the data directly, rather than using Bayes' rule to compute it indirectly as we have seen in naive Bayes. So the general idea of logistic regression is to model the dependency of a binary response variable Y on some predictors that are denoted as X. 
So here we have also changed the notation to X for feature values." + "time": "0:57", + "text": "Now this kind of scoring function can actually be a general scoring function, where we can in general represent text data as a feature vector. Of course the features don't have to all be words. The features can be other signals that we want to use. And we mentioned that this is precisely similar to logistic regression. So, in this lecture we're going to introduce some discriminative classifiers. They try to model the conditional distribution of labels given the data directly, rather than using Bayes' rule to compute it indirectly as we have seen in naive Bayes. So the general idea of logistic regression is to model the dependency of a binary response variable Y on some predictors that are denoted as X. So here we have also changed the notation to X for feature values.", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" }, { - "2:07": "You may recall in the previous slides we have used f sub i to represent the feature values." + "time": "2:07", + "text": "You may recall in the previous slides we have used f sub i to represent the feature values.", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" }, { - "2:13": "And here we use the notation of an X vector, which is more common when we introduce such discriminative algorithms. So, X is our input. 
It's a vector with n features and each feature has a value x sub i here. And we will model the dependency of this binary response variable on these features. So in our categorization problem where we have two categories theta 1 and theta 2, and we can use the Y value to denote the two categories when Y is 1, it means the category of the document, the first class, is theta 1. Now, the goal here is to model the conditional probability of Y given X directly, as opposed to modeling the generation of X and Y as in the case of Naive Bayes. And another advantage of this kind of approach is that it would allow many other features than words to be used in this vector since we're not modeling the generation of this vector. And we can plug in any signals that we want. So this is potentially advantageous for doing text categorization. So more specifically, in logistic regression, we assume the functional form of Y depending on X is the following. And this is very closely related to the log odds that I introduced in the Naive Bayes or log of probability ratio of the two categories that you have seen on the previous slide.", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" }, { - "3:57": "So this is what I meant. So in the case of Naive Bayes, we compute this by using those words and eventually we have reached a formula that looks like this." + "time": "3:57", + "text": "So this is what I meant. 
So in the case of Naive Bayes, we compute this by using those words and eventually we have reached a formula that looks like this.", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" }, { - "4:12": "But here we actually would assume explicitly that we with the model our probability of Y given X" + "time": "4:12", + "text": "But here we actually would assume explicitly that we will model our probability of Y given X", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" }, { - "4:29": "directly as a function of these features." + "time": "4:29", + "text": "directly as a function of these features.", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" }, { - "4:37": "So, most specifically we assume that the ratio of the probability of Y equals 1 and the probability of Y equals 0 is a function of X." + "time": "4:37", + "text": "So, more specifically we assume that the ratio of the probability of Y equals 1 and the probability of Y equals 0 is a function of X.", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" }, { - "4:54": "All right, so it's a function of x and it's a linear combination of these feature values controlled by theta values." + "time": "4:54", + "text": "All right, so it's a function of x and it's a linear combination of these feature values controlled by theta values.", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" }, { - "5:02": "And it seems we know that the probability of Y equals zero is one minus probability of Y equals one and this can be also written in this way. So this is a log out ratio here." 
+ "time": "5:02", + "text": "And since we know that the probability of Y equals zero is one minus probability of Y equals one, this can be also written in this way. So this is a log odds ratio here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" }, { - "5:22": "And so in logistic regression, we're basically assuming that the probability of Y equals 1. Okay my X is dependent on this linear combination of all these features. So it's just one of the many possible ways, assuming that the dependency. But this particular form has been quite useful and it also has some nice properties." + "time": "5:22", + "text": "And so in logistic regression, we're basically assuming that the probability of Y equals 1 given X is dependent on this linear combination of all these features. So it's just one of the many possible ways of assuming the dependency. But this particular form has been quite useful and it also has some nice properties.", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" }, { - "5:47": "So if we rewrite this equation to actually express the probability of Y given X. In terms of X by getting rid of the logarithm we get this functional form, and this is called a logistical function. It's a transformation of X into Y, as you see" + "time": "5:47", + "text": "So if we rewrite this equation to actually express the probability of Y given X in terms of X by getting rid of the logarithm, we get this functional form, and this is called a logistic function. It's a transformation of X into Y, as you see", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" }, { - "6:08": "on the right side here, so that the X's will be map into a range of values from 0 to 1.0, you can see. 
And that's precisely what we want since we have a probability here." + "time": "6:08", + "text": "on the right side here, so that the X's will be mapped into a range of values from 0 to 1.0, you can see. And that's precisely what we want since we have a probability here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" }, { - "6:24": "And the function form looks like this." + "time": "6:24", + "text": "And the functional form looks like this.", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" }, { - "6:28": "So this is the basic idea of logistic regression. And it's a very useful classifier that can be used to do a lot of classification tasks including text categorization." + "time": "6:28", + "text": "So this is the basic idea of logistic regression. And it's a very useful classifier that can be used to do a lot of classification tasks including text categorization.", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" }, { - "6:41": "So as in all cases of model we would be interested in estimating the parameters. And in fact in all of the machine running programs, once you set up with the model, set up object and function to model the file, then the next step is to compute the parameter values. In general, we're going to adjust to these parameter values. Optimize the performance of classify on the training data. So in our case just assume we have the training data here, xi and yi, and each pair is basically a future vector of x and a known label for that x. Y is either 1 or 0. So in our case we are interested maximize this conditional likelihood." + "time": "6:41", + "text": "So as in all cases of modeling, we would be interested in estimating the parameters. 
And in fact in all of the machine learning programs, once you set up the model and set up the objective function, then the next step is to compute the parameter values. In general, we're going to adjust these parameter values to optimize the performance of the classifier on the training data. So in our case just assume we have the training data here, xi and yi, and each pair is basically a feature vector x and a known label for that x. Y is either 1 or 0. So in our case we are interested in maximizing this conditional likelihood.", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" }, { - "7:31": "The conditional likelihood here is basically to model why given observe the x, so it's not like a moderate x, but rather we're going to model this. Note that this is a conditional probability of Y given X and this is also precisely what we wanted For classification. Now so the likelihood function would be just a product of all the training cases. And in each case, this is the model of the probability of observing this particular training case. So given a particular Xi, how likely we are to observe the corresponding Yi? Of course, Yi could be 1 or 0, and in fact, the function found here would vary depending on whether Yi is 1 or 0. If it's a 1, we'll be taking this form. And that's basically the logistic regression function. But what about this, if it's 0? Well, if it's 0, then we have to use a different form, and that's this one." + "time": "7:31", + "text": "The conditional likelihood here is basically to model Y given the observed X, so it's not like modeling X, but rather we're going to model this. Note that this is a conditional probability of Y given X and this is also precisely what we want for classification. Now so the likelihood function would be just a product of all the training cases. 
And in each case, this is the model of the probability of observing this particular training case. So given a particular Xi, how likely we are to observe the corresponding Yi? Of course, Yi could be 1 or 0, and in fact, the functional form here would vary depending on whether Yi is 1 or 0. If it's a 1, we'll be taking this form. And that's basically the logistic regression function. But what about this, if it's 0? Well, if it's 0, then we have to use a different form, and that's this one.", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" }, { - "8:48": "Now, how do we get this one? Well, that's just a 1 minus the probability of Y=1, right?" + "time": "8:48", + "text": "Now, how do we get this one? Well, that's just a 1 minus the probability of Y=1, right?", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" }, { - "8:55": "And you can easily see this. Now the key point in here is that the function form here depends on the observer Yi, if it's a 1, it has a different form than when it's 0. And if you think about when we want to maximize this probability, we're basically going to want this probability to be as high as possible. When the label is 1, that means the document is in probability 1. But if the document is not, we're going to maximize this value, and what's going to happen is actually to make this value as small as possible because this sum's 1. When I maximize this one, it's equivalent to minimize this one." + "time": "8:55", + "text": "And you can easily see this. Now the key point here is that the functional form here depends on the observed Yi, if it's a 1, it has a different form than when it's 0. And if you think about when we want to maximize this probability, we're basically going to want this probability to be as high as possible. When the label is 1, that means the document is in category 1. 
But if the document is not, we're going to maximize this value, and what's going to happen is actually to make this value as small as possible because these sum to 1. When we maximize this one, it's equivalent to minimizing this one.", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" }, { - "9:48": "So you can see basically, if we maximize the conditional likelihood, we're going to basically try to make the prediction on the training data as accurate as possible." + "time": "9:48", + "text": "So you can see basically, if we maximize the conditional likelihood, we're going to basically try to make the prediction on the training data as accurate as possible.", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" }, { - "10:00": "So as another occasion, when you compute the maximum likelihood data, basically you'll find a beta value, a set of beta values that would maximize this conditional likelihood." + "time": "10:00", + "text": "So, as in other cases, when you compute the maximum likelihood estimate, basically you'll find a set of beta values that would maximize this conditional likelihood.", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" }, { - "10:12": "And this, again, then gives us a standard optimization problem. In this case, it can be also solved in many ways. 
Newton's method is a popular way to solve this problem, there are other methods as well. But in the end, we will get a set of beta values. Once we have the beta values, then we have a way to find the scoring function to help us classify a document.", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" }, { - "10:39": "So what's the function? Well, it's this one. See, if we have all the beta values, are they known? All we need is to compute the Xi for that document and then plug in those values. That will give us an estimated probability that the document is in category one." + "time": "10:39", + "text": "So what's the function? Well, it's this one. See, if we have all the beta values, then they are known. All we need is to compute the Xi for that document and then plug in those values. That will give us an estimated probability that the document is in category one.", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" }, { - "10:59": "Okay so, so much for logistical regression. Let's also introduce another discriminative classifier called K-Nearest Neighbors. Now in general, I should say there are many such approaches, and a thorough introduction to all of them is clearly beyond the scope of this course. And you should take a machine learning course or read more about machine learning to know about them. Here, I just want to include the basic introduction to some of the most commonly used classifiers, since you might use them often for text calculation. So the second classifier is called K-Nearest Neighbors. In this approach, we're going to also estimate the conditional probability of label given data, but in a very different way. 
So the idea is to keep all the training examples and then once we see a text object that we want to classify, we're going to find the K examples in the training set and that are most similar to this text object. Basically, this is to find the neighbors of this text objector in the training data set. So once we found the neighborhood and we found the object that are close to the object we are interested in classifying, and let's say we have found the K-Nearest Neighbors. That's why this method is called K-Nearest Neighbors. Then we're going to assign the category that's most common in these neighbors. Basically we're going to allow these neighbors to vote for the category of the objective that we're interested in classifying." + "time": "10:59", + "text": "Okay so, so much for logistic regression. Let's also introduce another discriminative classifier called K-Nearest Neighbors. Now in general, I should say there are many such approaches, and a thorough introduction to all of them is clearly beyond the scope of this course. And you should take a machine learning course or read more about machine learning to know about them. Here, I just want to include the basic introduction to some of the most commonly used classifiers, since you might use them often for text categorization. So the second classifier is called K-Nearest Neighbors. In this approach, we're going to also estimate the conditional probability of label given data, but in a very different way. So the idea is to keep all the training examples and then once we see a text object that we want to classify, we're going to find the K examples in the training set that are most similar to this text object. Basically, this is to find the neighbors of this text object in the training data set. So once we found the neighborhood and we found the objects that are close to the object we are interested in classifying, and let's say we have found the K-Nearest Neighbors. 
That's why this method is called K-Nearest Neighbors. Then we're going to assign the category that's most common in these neighbors. Basically we're going to allow these neighbors to vote for the category of the object that we're interested in classifying.", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" }, { - "12:33": "Now that means if most of them have a particular category and it's a category one, they're going to say this current object will have category one." + "time": "12:33", + "text": "Now that means if most of them have a particular category and it's a category one, they're going to say this current object will have category one.", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" }, { - "12:43": "This approach can also be improved by considering the distance of a neighbor and of a current object. Basically, we can assume a closed neighbor would have more say about the category of the subject. So, we can give such a neighbor more influence on the vote. And we can take away some of the votes based on the distances." + "time": "12:43", + "text": "This approach can also be improved by considering the distance between a neighbor and the current object. Basically, we can assume a close neighbor would have more say about the category of the object. So, we can give such a neighbor more influence on the vote. And we can discount some of the votes based on the distances.", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" }, { - "13:06": "But the general idea is look at the neighborhood, and then try to assess the category based on the categories of the neighbors. Intuitively, this makes a lot of sense. 
But mathematically, this can also be regarded as a way to directly estimate there's a conditional probability of label given data, that is p of Y given X." + "time": "13:06", + "text": "But the general idea is to look at the neighborhood, and then try to assess the category based on the categories of the neighbors. Intuitively, this makes a lot of sense. But mathematically, this can also be regarded as a way to directly estimate the conditional probability of label given data, that is p of Y given X.", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" }, { - "13:28": "Now I'm going to explain this intuition in a moment, but before we proceed, let me emphasize that we do need a similarity function here in order for this to work. Note that in naive base class five, we did not need a similarity function. And in logistical regression, we did not talk about those similarity function either, but here we explicitly require a similarity function. Now this similarity function actually is a good opportunity for us to inject any of our insights about the features. Basically effective features are those that would make the objects that are on the same category look more similar, but distinguishing objects in different categories. So the design of this similarity function is closely tied it to the design of the features in logistical regression and other classifiers. So let's illustrate how K-NN works. Now suppose we have a lot of training instances here. And I've colored them differently and to show just different categories. Now suppose we have a new object in the center that we want to classify. So according to this approach, you work on finding the neighbors. Now, let's first think of a special case of finding just one neighbor, the closest neighbor." 
+ "time": "13:28", + "text": "Now I'm going to explain this intuition in a moment, but before we proceed, let me emphasize that we do need a similarity function here in order for this to work. Note that in the Naive Bayes classifier, we did not need a similarity function. And in logistic regression, we did not talk about a similarity function either, but here we explicitly require a similarity function. Now this similarity function actually is a good opportunity for us to inject any of our insights about the features. Basically effective features are those that would make the objects that are in the same category look more similar, but distinguish objects in different categories. So the design of this similarity function is closely tied to the design of the features in logistic regression and other classifiers. So let's illustrate how K-NN works. Now suppose we have a lot of training instances here. And I've colored them differently to show the different categories. Now suppose we have a new object in the center that we want to classify. So according to this approach, you work on finding the neighbors. Now, let's first think of a special case of finding just one neighbor, the closest neighbor.", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" }, { - "14:53": "Now in this case, let's assume the closest neighbor is the box filled with diamonds. 
And so then we're going to say, well, since this is in this object that is in category of diamonds, let's say. Then we're going to say, well, we're going to assign the same category to our text object. But let's also look at another possibility of finding a larger neighborhood, so let's think about the four neighbors." + "time": "14:53", + "text": "Now in this case, let's assume the closest neighbor is the box filled with diamonds. And so then we're going to say, well, since this object is in the category of diamonds, let's say. Then we're going to say, well, we're going to assign the same category to our text object. But let's also look at another possibility of finding a larger neighborhood, so let's think about the four neighbors.", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" }, { - "15:26": "In this case, we're going to include a lot of other solid field boxes in red or pink, right? So in this case now, we're going to notice that among the four neighbors, there are three neighbors in a different category. So if we take a vote, then we'll conclude the object is actually of a different category. So this both illustrates how can nearest neighbor works and also it illustrates some potential problems of this classifier. Basically, the results might depend on the K and indeed, k's an important parameter to optimize. Now, you can intuitively imagine if we have a lot of neighbors around this object, and then we'd be okay because we have a lot of neighbors who will help us decide the categories. But if we have only a few, then the decision may not be reliable. So on the one hand, we want to find more neighbor, right? And then we have more votes. But on the other hand, as we try to find more neighbors we actually could risk on getting neighbors that are not really similar to this instance. They might actually be far away as you try to get more neighbors. So although you get more neighbors but those neighbors aren't necessarily so helpful because they are not very similar to the object. So the parameter still has to be set empirically. And typically, you can optimize such a parameter by using cross validation. 
Basically, you're going to separate your training data into two parts and then you're going to use one part to actually help you choose the parameter k here or some other parameters in other class files. And then you're going to assume this number that works well on your training that will be actually be the best for your future data." + "time": "15:26", + "text": "In this case, we're going to include a lot of other solid filled boxes in red or pink, right? So in this case now, we're going to notice that among the four neighbors, there are three neighbors in a different category. So if we take a vote, then we'll conclude the object is actually of a different category. So this both illustrates how K-Nearest Neighbors works and also it illustrates some potential problems of this classifier. Basically, the results might depend on K and indeed, K is an important parameter to optimize. Now, you can intuitively imagine if we have a lot of neighbors around this object, then we'd be okay because we have a lot of neighbors who will help us decide the categories. But if we have only a few, then the decision may not be reliable. So on the one hand, we want to find more neighbors, right? And then we have more votes. But on the other hand, as we try to find more neighbors we actually risk getting neighbors that are not really similar to this instance. They might actually be far away as you try to get more neighbors. So although you get more neighbors, those neighbors aren't necessarily so helpful because they are not very similar to the object. So the parameter still has to be set empirically. And typically, you can optimize such a parameter by using cross validation. Basically, you're going to separate your training data into two parts and then you're going to use one part to actually help you choose the parameter k here or some other parameters in other classifiers. 
And then you're going to assume that the number that works well on your training data will actually be the best for your future data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" }, { - "17:23": "So as I mentioned, K-NN can be actually regarded as estimate of conditional problem within y given x an that's why we put this in the category of discriminative approaches. So the key assumption that we made in this approach is that the distribution of the label given the document probability a category given for example probability of theta i given document d is locally smooth. And that just means we're going to assume that this probability is the same for all the documents in these region R here. And suppose we draw a neighborhood and we're going to assume in this neighborhood since the data instances are very similar we're going to assume that the conditional distribution of the label given the data will be roughly the same. If these are very different then we're going to assume that the probability of c doc given d would be also similar. So that's a very key assumption. And that's actually important assumption that would allow us to do a lot of machinery. But in reality, whether this is true of course, would depend on how we define similarity. Because neighborhood is largely determined by our similarity function. If our similarity function captures objects that do follow similar distributions then these assumptions are okay but if our similarity function could not capture that, obviously these assumption would be a problem and then the classifier would not be accurate." + "time": "17:23", + "text": "So as I mentioned, K-NN can be actually regarded as an estimate of the conditional probability of y given x, and that's why we put this in the category of discriminative approaches. 
So the key assumption that we made in this approach is that the distribution of the label given the document, for example the probability of theta i given document d, is locally smooth. And that just means we're going to assume that this probability is the same for all the documents in this region R here. And suppose we draw a neighborhood and we're going to assume in this neighborhood, since the data instances are very similar, the conditional distribution of the label given the data will be roughly the same. If two documents are very similar, then we're going to assume that the probability of the category given d would be also similar. So that's a very key assumption. And that's actually an important assumption that would allow us to do a lot of machinery. But in reality, whether this is true of course, would depend on how we define similarity. Because the neighborhood is largely determined by our similarity function. If our similarity function captures objects that do follow similar distributions then these assumptions are okay, but if our similarity function could not capture that, obviously this assumption would be a problem and then the classifier would not be accurate.", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" }, { - "18:59": "Okay, let's proceed with these assumption. Then what we are saying is that, in order to estimate the probability of category given a document. We can try to estimate the probability of the category given that entire region. Now, this has a benefit, of course, of bringing additional data points to help us estimate this probability." + "time": "18:59", + "text": "Okay, let's proceed with this assumption. Then what we are saying is that, in order to estimate the probability of a category given a document, we can try to estimate the probability of the category given that entire region. 
Now, this has a benefit, of course, of bringing additional data points to help us estimate this probability.", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" }, { - "19:22": "And so this is precisely the idea of K-NN. Basically now we can use the known categories of all the documents in this region to estimate this probability. And I have even given a formula here where you can see we just count the topics in this region and then normalize that by the total number of documents in the region. So the numerator that you see here, c of theta i and r, is a counter of the documents in region R was category theta i. Since these are training document and we know they are categories. We can simply count how many times it was since here. How many times we have the same signs, etc. And then the denominator is just the total number of training documents in this region. So this gives us a rough estimate of which categories most popular in this neighborhood. And we are going to assign the popular category to our data object since it falls into this region. [MUSIC]" + "time": "19:22", + "text": "And so this is precisely the idea of K-NN. Basically now we can use the known categories of all the documents in this region to estimate this probability. And I have even given a formula here where you can see we just count the topics in this region and then normalize that by the total number of documents in the region. So the numerator that you see here, c of theta i and R, is a count of the documents in region R with category theta i. Since these are training documents, we know their categories. We can simply count how many times theta i occurs here, how many times the other categories occur, etc. And then the denominator is just the total number of training documents in this region. So this gives us a rough estimate of which category is most popular in this neighborhood. 
And we are going to assign the most popular category to our data object since it falls into this region. [MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" } ] }, { "5-2-text-categorization-discriminative-classifier-part-2": [ { - "0:07": "[SOUND] This lecture is a continued discussion of Discriminative Classifiers for Text Categorization. So, in this lecture, we're going to introduce, yet another Discriminative Classifier called the Support Vector Machine or SVM. Which is a very popular classification method and it has been also shown to be effective for text categorization." + "time": "0:07", + "text": "[SOUND] This lecture is a continued discussion of Discriminative Classifiers for Text Categorization. So, in this lecture, we're going to introduce yet another Discriminative Classifier called the Support Vector Machine or SVM, which is a very popular classification method and it has been also shown to be effective for text categorization.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "0:31": "So to introduce this classifier, let's also think about the simple case of two categories. We have two topic categories, 01 and 02 here. 
And we want to classify documents into these two categories and we're going to represent again a document by a feature factor x here.", + "time": "0:31", + "text": "So to introduce this classifier, let's also think about the simple case of two categories. We have two topic categories, theta 1 and theta 2 here. And we want to classify documents into these two categories, and we're going to represent again a document by a feature vector x here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "0:53": "Now, the idea of this classifier is to design also a linear separator" + "time": "0:53", + "text": "Now, the idea of this classifier is to also design a linear separator", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "0:59": "here that you'll see and it's very similar to what you have seen not just for regression, right? And we're going to do also say that if the sign of this function value is positive then we're going to say the objective is in category one. Otherwise, we're going to say it's in category 2. So that makes 0 that is the decision boundary between the few categories." + "time": "0:59", + "text": "here, as you'll see, and it's very similar to what you have seen in logistic regression, right? And we're also going to say that if the sign of this function value is positive then we're going to say the object is in category one. Otherwise, we're going to say it's in category 2. So that makes f(x) = 0 the decision boundary between the two categories.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "1:28": "So, in generally hiding marginal space such as, 0. corresponds to a hyper plain." + "time": "1:28", + "text": "So, in general, in a high-dimensional space, f(x) = 0 corresponds to a hyperplane.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "1:38": "Now I've shown you a simple case of two dimensional space it was just X1 and X2 and this case this corresponds to a line that you can see here."
+ "time": "1:38", + "text": "Now I've shown you a simple case of a two-dimensional space, with just X1 and X2, and in this case this corresponds to a line that you can see here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "1:51": "So, this is a line defined by just three parameters here, beta zero, beta one, and beta two." + "time": "1:51", + "text": "So, this is a line defined by just three parameters here, beta zero, beta one, and beta two.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "2:02": "Now, this line is heading in this direction so it shows that as we increase X1, X2 will also increase. So we know that beta one and beta two have different assigns, one is negative and the other is positive." + "time": "2:02", + "text": "Now, this line is heading in this direction so it shows that as we increase X1, X2 will also increase. So we know that beta one and beta two have different signs, one is negative and the other is positive.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "2:20": "So let's just assume that beta one is negative and beta two Is positive." + "time": "2:20", + "text": "So let's just assume that beta one is negative and beta two is positive.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "2:28": "Now, it's interesting to examine, then, the data instances on the two sides of the slide. 
So, here, the data instance are visualized as circles for one class and diamonds for the other class.", + "time": "2:28", + "text": "Now, it's interesting to examine, then, the data instances on the two sides of the line. So, here, the data instances are visualized as circles for one class and diamonds for the other class.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "2:43": "Now, one question is to take a point like this one and to ask the question what's the value of this expression, or this classifier, for this data point?" + "time": "2:43", + "text": "Now, one question is to take a point like this one and to ask the question what's the value of this expression, or this classifier, for this data point?", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "2:55": "So what do you think? Basically, we're going to evaluate its value by using this function." + "time": "2:55", + "text": "So what do you think? Basically, we're going to evaluate its value by using this function.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "3:01": "And as we said, if this value's positive we're going to say this is in category one, and if it's negative, it's going to be in category two. Intuitively, this line separates these two categories, so we expect the points on one side would be positive and the points on the other side would be negative. Our question is under the assumption that I just mentioned, let's examine a particular point like this one." + "time": "3:01", + "text": "And as we said, if this value's positive we're going to say this is in category one, and if it's negative, it's going to be in category two. Intuitively, this line separates these two categories, so we expect the points on one side would be positive and the points on the other side would be negative. 
Our question is, under the assumption that I just mentioned, let's examine a particular point like this one.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "3:27": "So what do you think is the sine of this expression?" + "time": "3:27", + "text": "So what do you think is the sign of this expression?", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "3:31": "Well, to examine the sine we can simply look at this expression here. And we can compare this with let's say," + "time": "3:31", + "text": "Well, to examine the sign we can simply look at this expression here. And we can compare this with, let's say,", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "3:42": "value on the line, let's see, compare this with this point." + "time": "3:42", + "text": "value on the line, let's see, compare this with this point.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "3:48": "While they have identical X1, but then one has a higher value for X2." + "time": "3:48", + "text": "Well, they have identical X1, but one has a higher value for X2.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "3:54": "Now, let's look at the sin of the coefficient for X2. Well, we know this is a positive." + "time": "3:54", + "text": "Now, let's look at the sign of the coefficient for X2. 
Well, we know this is a positive.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "4:02": "So, what that means is that the f value for this point should be higher than the f value for this point on the line that means this will be positive, right?" + "time": "4:02", + "text": "So, what that means is that the f value for this point should be higher than the f value for this point on the line that means this will be positive, right?", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "4:16": "So we know in general of all points on this side," + "time": "4:16", + "text": "So we know in general of all points on this side,", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "4:20": "the function's value will be positive and you can also verify all the points on this side will be negative. And so this is how this kind of linear classifier or linear separator can then separate the points in the two categories." + "time": "4:20", + "text": "the function's value will be positive and you can also verify all the points on this side will be negative. And so this is how this kind of linear classifier or linear separator can then separate the points in the two categories.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "4:37": "So, now the natural question is, which linear separator is the best? Now, I've get you one line here that can separate the two classes. And this line, of course, is determined by the vector beta, the coefficients. Different coefficients will give us different lines. So, we could imagine there are other lines that can do the same job. Gamma, for example, could give us another line that counts a separator to these instances." 
+ "time": "4:37", + "text": "So, now the natural question is, which linear separator is the best? Now, I've given you one line here that can separate the two classes. And this line, of course, is determined by the vector beta, the coefficients. Different coefficients will give us different lines. So, we could imagine there are other lines that can do the same job. Gamma, for example, could give us another line that can also separate these instances.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "5:06": "Of course, there are also lines that won't separate to them and those are bad lines. But, the question is, when we have multiple lines that can separate both clauses, which align the best? In fact, you can imagine, there are many different ways of choosing the line. So, the logistical regression classifier that you have seen earlier actually uses some criteria to determine where this line should be and so linear separate as well. And uses a conditional likelihood on the training that it determines which line is the best. But in SVM we're going to look at another criteria for determining which line is the best. And this time, the criteria is more tied to the classification arrow as you will see." + "time": "5:06", + "text": "Of course, there are also lines that won't separate them, and those are bad lines. But, the question is, when we have multiple lines that can separate both classes, which line is the best? In fact, you can imagine, there are many different ways of choosing the line. So, the logistic regression classifier that you have seen earlier actually uses some criteria to determine where this line should be, and it's a linear separator as well. It uses the conditional likelihood of the training data to determine which line is the best. But in SVM we're going to look at another criterion for determining which line is the best. 
And this time, the criterion is more tied to the classification error, as you will see.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "5:49": "So, the basic idea is to choose the separator to maximize the margin. So what is a margin? So, I choose some dotted lines here to indicate the boundaries of those data points in each class. And the margin is simply the distance between the line, the separator, and the closest point from each class." + "time": "5:49", + "text": "So, the basic idea is to choose the separator to maximize the margin. So what is a margin? So, I use some dotted lines here to indicate the boundaries of those data points in each class. And the margin is simply the distance between the line, the separator, and the closest point from each class.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "6:18": "So you can see the margin of this side is as I've shown here and you can also define the margin on the other side." + "time": "6:18", + "text": "So you can see the margin on this side is as I've shown here, and you can also define the margin on the other side.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "6:27": "In order for the separator to maximize the margin, it has to be kind of in the middle of the two boundaries and you don't want this separator to be very close to one side, and that in intuition makes a lot of sense."
+ "time": "6:27", + "text": "In order for the separator to maximize the margin, it has to be kind of in the middle of the two boundaries and you don't want this separator to be very close to one side, and that intuition makes a lot of sense.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "6:44": "So this is basic idea of SVM. We're going to choose a linear separator to maximize the margin." + "time": "6:44", + "text": "So this is the basic idea of SVM. We're going to choose a linear separator to maximize the margin.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "6:52": "Now on this slide, I've also changed the notation so that I'm not going to use beta to denote the parameters. But instead, I'm going to use w although w was used to denote the words before so don't be confused here. W here is actually a width, a certain width." + "time": "6:52", + "text": "Now on this slide, I've also changed the notation so that I'm not going to use beta to denote the parameters. But instead, I'm going to use w, although w was used to denote the words before, so don't be confused here. W here is actually a weight.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "7:12": "So I'm also using lowercase b to denote the beta 0, a biased constant." + "time": "7:12", + "text": "So I'm also using lowercase b to denote the beta 0, a bias constant.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "7:20": "And there are instances do represent that as x and I also use the vector form of multiplication here. So we see a transpose of w vector multiply by the future vector."
+ "time": "7:20", + "text": "And the data instances are represented as x, and I also use the vector form of multiplication here. So we see the transpose of the w vector multiplied by the feature vector.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "7:35": "So b is a bias constant and w is a set of weights with one way for each feature. We have m features and so we have m weights and that will represent as a vector." + "time": "7:35", + "text": "So b is a bias constant and w is a set of weights, with one weight for each feature. We have m features and so we have m weights, and we represent that as a vector.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "7:47": "And similarly, the data instance here, the text object, is represented by also a feature vector of the same number of elements. Xi is a feature value. For example, word count and you can verify, when we. Multiply these two vectors together, take the dot product, we get the same form of the linear separator as you have seen before. It's just a different way of representing this. Now I use this way so that it's more consistent with what notations people usually use when they talk about SVM. This way you can better connect the slides with some other readings you might do." + "time": "7:47", + "text": "And similarly, the data instance here, the text object, is also represented by a feature vector with the same number of elements. Xi is a feature value, for example, a word count. And you can verify that when we multiply these two vectors together, taking the dot product, we get the same form of the linear separator as you have seen before. It's just a different way of representing this. Now I use this way so that it's more consistent with the notation people usually use when they talk about SVM. 
This way you can better connect the slides with some other readings you might do.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "8:31": "Okay, so when we maximize the margins of a separator, it just means the boundary of the separator is only determined by a few data points, and these are the data points that we call support vectors. So here illustrated are two support vectors for one class and two for the other class. And these quotas define the margin basically, and you can imagine once we know which are supportive vectors then this" + "time": "8:31", + "text": "Okay, so when we maximize the margin of a separator, it just means the boundary of the separator is only determined by a few data points, and these are the data points that we call support vectors. So here illustrated are two support vectors for one class and two for the other class. And these points basically define the margin, and you can imagine once we know which are the support vectors then this", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "9:06": "center separator line will be determined by them. So the other data points actually don't really matter that much. And you can see if you change the other data points it won't really affect the margin, so the separator will stay the same. Mainly affected by the the support vector machines. Sorry, it's mainly affected by the support vectors and that's why it's called a support vector machine. Okay, so now the next question is, of course, how can we set it up to optimize the line? How can we actually find the line or the separator? Now this is equivalent to finding values for w and b, because they will determine where exactly the separator is." + "time": "9:06", + "text": "center separator line will be determined by them. 
So the other data points actually don't really matter that much. And you can see if you change the other data points it won't really affect the margin, so the separator will stay the same. It's mainly affected by the support vectors, and that's why it's called a support vector machine. Okay, so now the next question is, of course, how can we set it up to optimize the line? How can we actually find the line or the separator? Now this is equivalent to finding values for w and b, because they will determine where exactly the separator is.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "9:58": "So in the simplest case, the linear SVM is just a simple optimization problem. So again, let's recall that our classifier is such a linear separator, where we have weights for all the features, and the main goal is remove these weights w and b. And the classifier will say X is in category theta 1 if it's positive. Otherwise, it's going to say it's in the other category. So this is our assumption, our setup. So in the linear SVM, we are going to then seek these parameter values to optimize the margins and then the training error." + "time": "9:58", + "text": "So in the simplest case, the linear SVM is just a simple optimization problem. So again, let's recall that our classifier is such a linear separator, where we have weights for all the features, and the main goal is to learn these weights w and b. And the classifier will say X is in category theta 1 if it's positive. Otherwise, it's going to say it's in the other category. So this is our assumption, our setup. 
So in the linear SVM, we are going to then seek these parameter values to optimize the margin and the training error.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "10:38": "The training data would be basically like in other classifiers. We have a set of training points where we know the x vector, and then we also know the corresponding label, y i. And here we define y i as two values, but these values are not 0, 1 as you have seen before, but rather -1 and positive 1, and they're corresponding to these two categories, as I've shown here. Now you might wonder why we don't define them as 0 and 1 instead of having -1, 1. And this is purely for mathematical convenience, as you will see in a moment." + "time": "10:38", + "text": "The training data would be basically like in other classifiers. We have a set of training points where we know the x vector, and then we also know the corresponding label, y i. And here we define y i as two values, but these values are not 0, 1 as you have seen before, but rather -1 and positive 1, and they correspond to these two categories, as I've shown here. Now you might wonder why we don't define them as 0 and 1 instead of having -1, 1. And this is purely for mathematical convenience, as you will see in a moment.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "11:16": "So the goal of optimization first is to make sure the labeling of training data is all correct. So that just means if y i, the norm label for instance x i, is 1, we would like this classified value to be large. And here we just choose a threshold of 1 here. But if you use another threshold, you can easily fit that constant into the parameter values b and w to make the right-hand side just 1."
+ "time": "11:16", + "text": "So the goal of optimization first is to make sure the labeling of the training data is all correct. So that just means if y i, the known label for instance x i, is 1, we would like this classifier value to be large. And here we just choose a threshold of 1 here. But if you use another threshold, you can easily fit that constant into the parameter values b and w to make the right-hand side just 1.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "11:48": "Now if, on the other hand, y i is -1, that means it's in a different class, then we want this classifier to give us a very small value, in fact a negative value, and we want this value to be less than or equal to -1. Now these are the two different instances, different kinds of cases. How can we combine them together? Now this is where it's convenient when we have chosen y i as -1 for the other category, because it turns out that we can either combine the two into one constraint." + "time": "11:48", + "text": "Now if, on the other hand, y i is -1, that means it's in a different class, then we want this classifier to give us a very small value, in fact a negative value, and we want this value to be less than or equal to -1. Now these are two different kinds of cases. How can we combine them together? Now this is where it's convenient that we have chosen y i as -1 for the other category, because it turns out that we can easily combine the two into one constraint.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "12:26": "y i multiplied by the classifier value must be larger than or equal to 1."
+ "time": "12:26", + "text": "y i multiplied by the classifier value must be larger than or equal to 1.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "12:33": "And obviously when y i is just 1, you see this is the same as the constraint on the left-hand side. But when y i is -1, you also see that this is equivalent to the other inequality. So this one actually captures both constraints in a unified way, and that's a convenient way of capturing these constraints. What's our second goal? Well, that's to maximize margin, so we want to ensure that separator can do well on the training data. But then, among all the cases where we can separate the data, we also would like to choose the separator that has the largest margin. Now the margin can be assumed to be related to the magnitude of the weight. And so w transform multiplied by w would give us basically the sum of squares of all those weights. So to have a small value for this expression, it means all the w i's must be small." + "time": "12:33", + "text": "And obviously when y i is just 1, you see this is the same as the constraint on the left-hand side. But when y i is -1, you also see that this is equivalent to the other inequality. So this one actually captures both constraints in a unified way, and that's a convenient way of capturing these constraints. What's our second goal? Well, that's to maximize the margin, so we want to ensure the separator can do well on the training data. But then, among all the cases where we can separate the data, we also would like to choose the separator that has the largest margin. Now the margin can be assumed to be related to the magnitude of the weights. And so w transpose multiplied by w would give us basically the sum of squares of all those weights. 
So to have a small value for this expression, it means all the w i's must be small.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "13:42": "So we've just assumed that we have a constraint for" + "time": "13:42", + "text": "So we've just assumed that we have a constraint for", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "13:46": "getting the data on the training set to be classified correctly. Now we also have the objective that's tied into a maximization of margin, and this is simply to minimize w transpose multiplied by w, and we often denote this by phi of w. So now you can see this is basically a optimization problem. We have some variables to optimize, and these are the weights and b and we have some constraints. These are linear constraints and the objective function is a quadratic function of the weights. So this a quadratic program with linear constraints, and there are standard algorithm that are variable for solving this problem. And once we solve the problem we obtain the weights w and b. And then this would give us a well-defined classifier. So we can then use this classifier to classify any new text objects. Now the previous formulation did not allow any error in the classification, but sometimes the data may not be linear to the separator. That means that they may not look as nice as you have seen on the previous slide where a line can separate all of them. And what would happen if we allowed some errors? Well, the principle can stay. We want to minimize the training error but try to also maximize the margin. But in this case we have a soft margin, because the data points may not be completely separable." + "time": "13:46", + "text": "getting the data on the training set to be classified correctly. 
Now we also have the objective that's tied into a maximization of margin, and this is simply to minimize w transpose multiplied by w, and we often denote this by phi of w. So now you can see this is basically an optimization problem. We have some variables to optimize, and these are the weights and b, and we have some constraints. These are linear constraints and the objective function is a quadratic function of the weights. So this is a quadratic program with linear constraints, and there are standard algorithms available for solving this problem. And once we solve the problem we obtain the weights w and b. And then this would give us a well-defined classifier. So we can then use this classifier to classify any new text objects. Now the previous formulation did not allow any error in the classification, but sometimes the data may not be linearly separable. That means that they may not look as nice as you have seen on the previous slide where a line can separate all of them. And what would happen if we allowed some errors? Well, the principle can stay. We want to minimize the training error but try to also maximize the margin. But in this case we have a soft margin, because the data points may not be completely separable.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "15:17": "So it turns out that we can easily modify SVM to accommodate this. So what you see here is very similar to what you have seen before, but we have introduced the extra variable xi i. And we in fact will have one for each data instance, and this is going to model the error that we allow for each instance. But the optimization problem would be very similar. So specifically, you will see we have added something to the optimization problem. First we have added some error to the constraint so that now we allow a Allow the classifier to make some mistakes here. So, this Xi i is allowed an error. 
If we set Xi i to 0, then we go back to the original constraint. We want every instance to be classified accurately. But, if we allow this to be non-zero, then we allow some errors here. In fact, if the length of the Xi i is very large, the error can be very, very large. So naturally, we don't want this to happen. So we want to then also minimize this Xi i. So, because Xi i needs to be minimized in order to control the error." + "time": "15:17", + "text": "So it turns out that we can easily modify SVM to accommodate this. So what you see here is very similar to what you have seen before, but we have introduced the extra variable xi i. And we in fact will have one for each data instance, and this is going to model the error that we allow for each instance. But the optimization problem would be very similar. So specifically, you will see we have added something to the optimization problem. First we have added some error to the constraint so that now we allow the classifier to make some mistakes here. So, this Xi i is the allowed error. If we set Xi i to 0, then we go back to the original constraint. We want every instance to be classified accurately. But, if we allow this to be non-zero, then we allow some errors here. In fact, if the value of Xi i is very large, the error can be very, very large. So naturally, we don't want this to happen, so we want to then also minimize this Xi i, because Xi i needs to be minimized in order to control the error.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "16:42": "And so, as a result, in the objective function, we also add more to the original one, which is only W, by basically ensuring that we not only minimize the weights, but also minimize the errors, as you see here. Here we simply take a sum over all the instances. Each one has a Xi i to model the error allowed for that instance. 
And when we combine them together, we basically want to minimize the errors on all of them." + "time": "16:42", + "text": "And so, as a result, in the objective function, we also add more to the original one, which is only W, by basically ensuring that we not only minimize the weights, but also minimize the errors, as you see here. Here we simply take a sum over all the instances. Each one has a Xi i to model the error allowed for that instance. And when we combine them together, we basically want to minimize the errors on all of them.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "17:16": "Now you see there's a parameter C here, and that's a constant to control the trade-off between minimizing the errors and maximizing the margin. If C is set to zero, you can see, we go back to the original object function where we only maximize the margin." + "time": "17:16", + "text": "Now you see there's a parameter C here, and that's a constant to control the trade-off between minimizing the errors and maximizing the margin. If C is set to zero, you can see, we go back to the original objective function where we only maximize the margin.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "17:34": "We don't really optimize the training errors and then Xi i can be set to a very large value to make the constraints easy to satisfy. 
That's not very good of course, so C should be set to a non-zero value, a positive value. But when C is set to a very, very large value, we'll see the objective function will be dominated mostly by the training errors and so the optimization of margin will then play a secondary role. So if that happens, what would happen is", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "18:07": "then we will try to do our best to minimize the training errors, but then we're not going to take care of the margin and that affects the generalization of the classifier for future data. So it's also not good. So in particular, this parameter C has to be actually set carefully. And this is just like in the case of k-nearest neighbor where you need to optimize the number of neighbors. Here you need to optimize the C. And this is, in general, also achievable by doing cross-validation." + "time": "18:07", + "text": "then we will try to do our best to minimize the training errors, but then we're not going to take care of the margin and that affects the generalization of the classifier for future data. So it's also not good. So in particular, this parameter C has to be actually set carefully. And this is just like in the case of k-nearest neighbor where you need to optimize the number of neighbors. Here you need to optimize the C. And this is, in general, also achievable by doing cross-validation.
Basically, you look at the empirical data and see what value C should be set to in order to optimize the performance.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "18:49": "Now with this modification, the problem is still quadratic programming with linear constraints so the optimization algorithm can be actually applied to solve this different version of the problem." + "time": "18:49", + "text": "Now with this modification, the problem is still quadratic programming with linear constraints so the optimization algorithm can be actually applied to solve this different version of the problem.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "19:02": "Again, once we have obtained the weights and the bias, then we can have a classifier that's ready for classifying new objects. So that's the basic idea of SVM." + "time": "19:02", + "text": "Again, once we have obtained the weights and the bias, then we can have a classifier that's ready for classifying new objects. So that's the basic idea of SVM.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "19:16": "So to summarize the text categorization methods, we introduced many methods, and some are generative models. Some are discriminative methods. And these tend to perform similarly when optimized. So there's still no clear winner, although each one has its pros and cons. And the performance might also vary on different data sets for different problems. And one reason is also because the feature representation is very critical" + "time": "19:16", + "text": "So to summarize the text categorization methods, we introduced many methods, and some are generative models. Some are discriminative methods. And these tend to perform similarly when optimized.
So there's still no clear winner, although each one has its pros and cons. And the performance might also vary on different data sets for different problems. And one reason is also because the feature representation is very critical", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "19:52": "and these methods all require effective feature representation. And to design an effective feature set, we need domain knowledge and humans definitely play an important role here, although there are new machine learning methods and algorithms for representation learning that can help with learning features." + "time": "19:52", + "text": "and these methods all require effective feature representation. And to design an effective feature set, we need domain knowledge and humans definitely play an important role here, although there are new machine learning methods and algorithms for representation learning that can help with learning features.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "20:12": "And another common thing is that they might be performing similarly on the data set, but with different mistakes. And so, their performance might be similar, but then the mistakes they make might be different. So that means it's useful to compare different methods for a particular problem and then maybe combine multiple methods because this can improve the robustness and they won't make the same mistakes. So ensemble approaches that would combine different methods tend to be more robust and can be useful in practice. Most techniques that we introduced use supervised machine learning, which is a very general method. So that means that these methods can be actually applied to any text categorization problem.
As long as we have humans to help annotate some training data sets and design features, then supervised machine learning and all these classifiers can be easily applied to those problems to solve the categorization problem to allow us to characterize content of text concisely with categories. Or to predict some properties of real world variables that are associated with text data. The computers, of course, here are trying to optimize the combinations of the features provided by humans. And as I said, there are many different ways of combining them and they also optimize different objective functions." + "time": "20:12", + "text": "And another common thing is that they might be performing similarly on the data set, but with different mistakes. And so, their performance might be similar, but then the mistakes they make might be different. So that means it's useful to compare different methods for a particular problem and then maybe combine multiple methods because this can improve the robustness and they won't make the same mistakes. So ensemble approaches that would combine different methods tend to be more robust and can be useful in practice. Most techniques that we introduced use supervised machine learning, which is a very general method. So that means that these methods can be actually applied to any text categorization problem. As long as we have humans to help annotate some training data sets and design features, then supervised machine learning and all these classifiers can be easily applied to those problems to solve the categorization problem to allow us to characterize content of text concisely with categories. Or to predict some properties of real world variables that are associated with text data. The computers, of course, here are trying to optimize the combinations of the features provided by humans.
And as I said, there are many different ways of combining them and they also optimize different objective functions.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "21:58": "But in order to achieve good performance, they all require effective features and also plenty of training data." + "time": "21:58", + "text": "But in order to achieve good performance, they all require effective features and also plenty of training data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "22:04": "So as a general rule, if you can improve the feature representation and then provide more training data, then you can generally do better. Performance is often much more affected by the effectiveness of features than by the choice of specific classifiers. So feature design tends to be more important than the choice of specific classifier." + "time": "22:04", + "text": "So as a general rule, if you can improve the feature representation and then provide more training data, then you can generally do better. Performance is often much more affected by the effectiveness of features than by the choice of specific classifiers. So feature design tends to be more important than the choice of specific classifier.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "22:30": "So, how do we design effective features?
Well, unfortunately, this is very application-specific. So there's not really much to say in general here. But we can do some analysis of the categorization problem and try to understand what kind of features might help us distinguish categories. And in general, we can use a lot of domain knowledge to help us design features.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "23:01": "And another way to figure out the effective features is to do error analysis on the categorization results. You could, for example, look at which category tends to be confused with which other categories. And you can use a confusion matrix to examine the errors systematically across categories. And then, you can look into specific instances to see why the mistake has been made and what features can prevent the mistake. And this can allow you to obtain insights for designing new features. So error analysis is very important in general, and that's where you can get the insights about your specific problem." + "time": "23:01", + "text": "And another way to figure out the effective features is to do error analysis on the categorization results. You could, for example, look at which category tends to be confused with which other categories. And you can use a confusion matrix to examine the errors systematically across categories. And then, you can look into specific instances to see why the mistake has been made and what features can prevent the mistake. And this can allow you to obtain insights for designing new features. So error analysis is very important in general, and that's where you can get the insights about your specific problem.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "23:42": "And finally, we can leverage machine learning techniques.
So, for example, feature selection is a technique that we haven't really talked about, but is very important. And it has to do with trying to select the most useful features before you actually train a full classifier. Sometimes training a classifier will also help you identify which features have high values. There are also other ways to ensure the sparsity of the model, meaning to regularize the weights. For example, the SVM actually tries to minimize the weights on features. But you can further force the model to use only a small number of features." + "time": "23:42", + "text": "And finally, we can leverage machine learning techniques. So, for example, feature selection is a technique that we haven't really talked about, but is very important. And it has to do with trying to select the most useful features before you actually train a full classifier. Sometimes training a classifier will also help you identify which features have high values. There are also other ways to ensure the sparsity of the model, meaning to regularize the weights. For example, the SVM actually tries to minimize the weights on features. But you can further force the model to use only a small number of features.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "24:21": "There are also techniques for dimension reduction. And that's to reduce a high dimensional feature space into a low dimensional space typically by clustering of features in various ways. So matrix factorization has been used to do such a job, and some of these techniques are actually very similar to the topic models that we'll discuss. So topic models like PLSA or LDA can actually help us reduce the dimension of features. Imagine the words are our original features, but they can be mapped to the topic space. Let's say we have k topics.
So a document can now be represented as a vector of just k values corresponding to the topics. So we can let each topic define one dimension, so we have a k dimensional space instead of the original high dimensional space corresponding to words. And this is often another way to learn effective features. Especially, we could also use the categories to supervise the learning of such low dimensional structures." + "time": "24:21", + "text": "There are also techniques for dimension reduction. And that's to reduce a high dimensional feature space into a low dimensional space typically by clustering of features in various ways. So matrix factorization has been used to do such a job, and some of these techniques are actually very similar to the topic models that we'll discuss. So topic models like PLSA or LDA can actually help us reduce the dimension of features. Imagine the words are our original features, but they can be mapped to the topic space. Let's say we have k topics. So a document can now be represented as a vector of just k values corresponding to the topics. So we can let each topic define one dimension, so we have a k dimensional space instead of the original high dimensional space corresponding to words. And this is often another way to learn effective features. Especially, we could also use the categories to supervise the learning of such low dimensional structures.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "25:29": "And so, the original word features can also be combined with such lower dimensional space features to provide a multi-resolution representation, which is often very useful. Deep learning is a new technique that has been developed in machine learning."
+ "time": "25:29", + "text": "And so, the original word features can also be combined with such lower dimensional space features to provide a multi-resolution representation, which is often very useful. Deep learning is a new technique that has been developed in machine learning.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "25:51": "It's particularly useful for learning representations. So deep learning refers to deep neural networks, another kind of classifier, where you can have intermediate features embedded in the models. These are highly non-linear transformations, and some recent advances have allowed us to train such complex networks effectively. And the technique has been shown to be quite effective for speech recognition, computer vision, and recently has been applied to text as well. It has shown some promise. And one important advantage of this approach in" + "time": "25:51", + "text": "It's particularly useful for learning representations. So deep learning refers to deep neural networks, another kind of classifier, where you can have intermediate features embedded in the models. These are highly non-linear transformations, and some recent advances have allowed us to train such complex networks effectively. And the technique has been shown to be quite effective for speech recognition, computer vision, and recently has been applied to text as well. It has shown some promise. And one important advantage of this approach in", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "26:34": "relationship with the feature design, is that they can learn intermediate representations or compound features automatically. And this is very valuable for learning effective representations for text categorization.
Although in the text domain, words are already an excellent representation of text content, because they are humans' invention for communication. And they are generally sufficient for representing content for many tasks. If there's a need for some new representation, people would have invented a new word. So because of this we think the value of deep learning for text processing tends to be lower than for [INAUDIBLE] and speech recognition, where it is much harder to design effective features." + "time": "26:34", + "text": "relationship with the feature design, is that they can learn intermediate representations or compound features automatically. And this is very valuable for learning effective representations for text categorization. Although in the text domain, words are already an excellent representation of text content, because they are humans' invention for communication. And they are generally sufficient for representing content for many tasks. If there's a need for some new representation, people would have invented a new word. So because of this we think the value of deep learning for text processing tends to be lower than for [INAUDIBLE] and speech recognition, where it is much harder to design effective features.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "27:31": "But deep learning is still very promising for learning effective features, especially for complicated tasks. For sentiment analysis, for example, it has been shown to be effective" + "time": "27:31", + "text": "But deep learning is still very promising for learning effective features, especially for complicated tasks. For sentiment analysis, for example, it has been shown to be effective", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "27:41": "because it can provide a representation that goes beyond that of words."
+ "time": "27:41", + "text": "because it can provide a representation that goes beyond that of words.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "27:47": "Now regarding the training examples. It's generally hard to get a lot of training examples because it involves human labor." + "time": "27:47", + "text": "Now regarding the training examples. It's generally hard to get a lot of training examples because it involves human labor.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "27:56": "But there are also some ways to help with this. So one is to assume that some low quality training examples can also be used. So, those can be called pseudo training examples. For example, if you take reviews from the internet, they might have overall ratings. So, to train a sentiment categorizer, meaning we want to distinguish positive or negative, we can categorize these reviews into these two categories. Then we could assume five star reviews are all positive training samples. One star reviews are negative." + "time": "27:56", + "text": "But there are also some ways to help with this. So one is to assume that some low quality training examples can also be used. So, those can be called pseudo training examples. For example, if you take reviews from the internet, they might have overall ratings. So, to train a sentiment categorizer, meaning we want to distinguish positive or negative, we can categorize these reviews into these two categories. Then we could assume five star reviews are all positive training samples. One star reviews are negative.
But of course, sometimes even five star reviews will also mention negative opinions so the training sample is not all of that high quality, but they can still be useful.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "28:45": "Another idea is to exploit the unlabeled data and there are techniques called the semi-supervised machine learning techniques that can allow you to combine labeled data with unlabeled data. So, in other case it's easy to see the next model can be used For both text plus read and the categorization. So you can imagine, if you have a lot of unlabeled text data for categorization, then you can actually do clustering on these text data, learn categories. And then try to somehow align these categories. With the categories defined by the training data, where we already know which documents are in which category. So you can in fact use the Algorithm to actually combine both. That would allow you essentially also pick up useful words and label the data. You can think of this in another way. Basically, we can use let's say a to classify all of the unlabeled text documents, and then we're going to assume the high confidence Classification results are actually liable. Then you suddenly have more training data because from the enabler that we now know some are labeled as category one, some are labeled as category two. All though the label is not completely reliable But then they can still be useful. So let's assume they are actually training label examples, and then we combine them with true training examples through improved categorization method. And so this idea is very powerful." + "time": "28:45", + "text": "Another idea is to exploit the unlabeled data and there are techniques called the semi-supervised machine learning techniques that can allow you to combine labeled data with unlabeled data. 
So, in our case, it's easy to see the mixture model can be used for both text clustering and categorization. So you can imagine, if you have a lot of unlabeled text data for categorization, then you can actually do clustering on these text data, learn categories. And then try to somehow align these categories with the categories defined by the training data, where we already know which documents are in which category. So you can in fact use the EM algorithm to actually combine both. That would allow you essentially also pick up useful words and label the data. You can think of this in another way. Basically, we can use, let's say, a classifier to classify all of the unlabeled text documents, and then we're going to assume the high confidence classification results are actually reliable. Then you suddenly have more training data, because from the unlabeled data we now know some are labeled as category one, some are labeled as category two. Although the labels are not completely reliable, they can still be useful. So let's assume they are actually labeled training examples, and then we combine them with true training examples to improve the categorization method. And so this idea is very powerful.", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "30:23": "When the unlabeled data and the training data are very different, then we might need to use other advanced machine learning techniques called domain adaptation or transfer learning. This is when we can borrow some training examples from a related problem that may be different. Or, from a categorization problem" + "time": "30:23", + "text": "When the unlabeled data and the training data are very different, then we might need to use other advanced machine learning techniques called domain adaptation or transfer learning. This is when we can borrow some training examples from a related problem that may be different.
Or, from a categorization problem", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }, { - "30:46": "that follows a very different distribution from what we are working on. But basically, when the two domains are very different, then we need to be careful and not overfit the training domain. But we may still want to use some signals from the related training data. So for example, training a categorizer on news might not give you an effective classifier for classifying topics in tweets. But you can still learn something from news to help with categorizing tweets. So there are machine learning techniques that can help you do that effectively. Here's a suggested reading where you can find more details about some of the methods that we have covered. [MUSIC]" + "time": "30:46", + "text": "that follows a very different distribution from what we are working on. But basically, when the two domains are very different, then we need to be careful and not overfit the training domain. But we may still want to use some signals from the related training data. So for example, training a categorizer on news might not give you an effective classifier for classifying topics in tweets. But you can still learn something from news to help with categorizing tweets. So there are machine learning techniques that can help you do that effectively. Here's a suggested reading where you can find more details about some of the methods that we have covered. [MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" }] }, { "5-3-text-categorization-evaluation-part-1": [ { - "0:00": "[SOUND] This lecture is about the Evaluation of Text Categorization. So we've talked about many different methods for text categorization. But how do you know which method works better?"
+ "time": "0:00", + "text": "[SOUND] This lecture is about the Evaluation of Text Categorization. So we've talked about many different methods for text categorization. But how do you know which method works better?", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "0:19": "And for a particular application, how do you know this is the best way of solving your problem? To understand these, we have to" + "time": "0:19", + "text": "And for a particular application, how do you know this is the best way of solving your problem? To understand these, we have to", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "0:29": "know how to evaluate categorization results. So first some general thoughts about the evaluation." + "time": "0:29", + "text": "know how to evaluate categorization results. So first some general thoughts about the evaluation.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "0:38": "In general, for evaluation of this kind of empirical tasks such as categorization, we use a methodology that was developed in the 1960s by information retrieval researchers, called the Cranfield Evaluation Methodology. The basic idea is to have humans create a test collection," + "time": "0:38", + "text": "In general, for evaluation of this kind of empirical tasks such as categorization, we use a methodology that was developed in the 1960s by information retrieval researchers, called the Cranfield Evaluation Methodology. The basic idea is to have humans create a test collection,", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "0:59": "where, we already know, every document is tagged with the desired categories.
Or, in the case of search, for each query, which documents should have been retrieved, and this is called the ground truth. Now, with this ground truth test collection, we can then reuse the collection to test many different systems and then compare different systems. We can also turn off some components in the system to see what's going to happen. Basically it provides a way to do controlled experiments to compare different methods." + "time": "0:59", + "text": "where, we already know, every document is tagged with the desired categories. Or, in the case of search, for each query, which documents should have been retrieved, and this is called the ground truth. Now, with this ground truth test collection, we can then reuse the collection to test many different systems and then compare different systems. We can also turn off some components in the system to see what's going to happen. Basically it provides a way to do controlled experiments to compare different methods.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "1:36": "So this methodology has been used for virtually all the tasks that involve empirically defined problems." + "time": "1:36", + "text": "So this methodology has been used for virtually all the tasks that involve empirically defined problems.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "1:45": "So in our case, then, we are going to compare our system's categorization results with the categorization ground truth created by humans."
+ "time": "1:45", + "text": "So in our case, then, we are going to compare our system's categorization results with the categorization ground truth created by humans.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "1:56": "And we're going to compare our system's decisions," + "time": "1:56", + "text": "And we're going to compare our system's decisions,", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "2:00": "which documents should get which category with what" + "time": "2:00", + "text": "which documents should get which category with what", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "2:06": "categories have been assigned to those documents by humans. And we want to quantify the similarity of these decisions or equivalently, to measure the difference between the system output and the desired ideal output generated by the humans." + "time": "2:06", + "text": "categories have been assigned to those documents by humans. And we want to quantify the similarity of these decisions or equivalently, to measure the difference between the system output and the desired ideal output generated by the humans.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "2:25": "So obviously, the higher the similarity is, the better the results are." + "time": "2:25", + "text": "So obviously, the higher the similarity is, the better the results are.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "2:30": "The similarity could be measured in different ways. And that would lead to different measures. And sometimes it's desirable also to measure the similarity from different perspectives just to have a better understanding of the results in detail.
For example, we might be also interested in knowing which category performs better and which category is easy to categorize, etc. In general, different categorization mistakes, however, have different costs for specific applications. So some errors might be more serious than others. So ideally, we would like to model such differences, but if you read many papers on categorization you will see that they don't generally do that. Instead, they will use a simplified measure, and that's because it's often okay not to consider such a cost variation when we compare methods and when we are interested in knowing the relative difference of these methods. So it's okay to introduce some bias, as long as the bias is not aligned with a particular method, and then we should expect the more effective method to perform better than a less effective one, even though the measure is not perfect." + "time": "2:30", + "text": "The similarity could be measured in different ways. And that would lead to different measures. And sometimes it's desirable also to measure the similarity from different perspectives just to have a better understanding of the results in detail. For example, we might be also interested in knowing which category performs better and which category is easy to categorize, etc. In general, different categorization mistakes, however, have different costs for specific applications. So some errors might be more serious than others. So ideally, we would like to model such differences, but if you read many papers on categorization you will see that they don't generally do that. Instead, they will use a simplified measure, and that's because it's often okay not to consider such a cost variation when we compare methods and when we are interested in knowing the relative difference of these methods.
So it's okay to introduce some bias, as long as the bias is not aligned with a particular method, and then we should expect the more effective method to perform better than a less effective one, even though the measure is not perfect.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "3:53": "So the first measure that we'll introduce is called classification accuracy, and this is basically a measure of the percentage of correct decisions. So here you see that there are categories denoted by c1 through ck and there are n documents, denoted by d1 through dN. And for each pair of category and document, we can then look at the situation." + "time": "3:53", + "text": "So the first measure that we'll introduce is called classification accuracy, and this is basically a measure of the percentage of correct decisions. So here you see that there are categories denoted by c1 through ck and there are n documents, denoted by d1 through dN. And for each pair of category and document, we can then look at the situation.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "4:16": "And see if the system has said yes to this pair, basically has assigned this category to this document. Or no, so this is denoted by Y or N; that's the system's decision. And similarly, we can look at the human's decisions also; if the human has assigned a category to the document, then there will be a plus sign here. That just means that the human thinks this assignment is correct, and if incorrect then it's a minus. So we'll see all combinations of these yeses and nos, minuses and pluses. There are four combinations in total. And two of them are correct, and that's when we have y(+) or n(-), and then there are also two kinds of errors. So the measure of classification accuracy is simply to count how many of these decisions are correct.
And normalize that by the total number of decisions we have made. So, we know that the total number of decisions is n, multiplied by k." + "time": "4:16", + "text": "And see if the system has said yes to this pair, basically has assigned this category to this document. Or no, so this is denoted by Y or N; that's the system's decision. And similarly, we can look at the human's decisions also; if the human has assigned a category to the document, then there will be a plus sign here. That just means that the human thinks this assignment is correct, and if incorrect then it's a minus. So we'll see all combinations of these yeses and nos, minuses and pluses. There are four combinations in total. And two of them are correct, and that's when we have y(+) or n(-), and then there are also two kinds of errors. So the measure of classification accuracy is simply to count how many of these decisions are correct. And normalize that by the total number of decisions we have made. So, we know that the total number of decisions is n, multiplied by k.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "5:20": "And, the number of correct decisions are basically of two kinds. One is the y pluses, and the other is the n minuses. We just put together the counts. Now, this is a very convenient measure that will give us one number to characterize the performance of a method. And the higher, the better, of course." + "time": "5:20", + "text": "And, the number of correct decisions are basically of two kinds. One is the y pluses, and the other is the n minuses. We just put together the counts. Now, this is a very convenient measure that will give us one number to characterize the performance of a method. And the higher, the better, of course.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "5:41": "But the method also has some problems.
First it has treated all the decisions equally. But in reality, some decision errors are more serious than others. For example, it may be more important to get the decisions right on some documents than others." + "time": "5:41", + "text": "But the method also has some problems. First it has treated all the decisions equally. But in reality, some decision errors are more serious than others. For example, it may be more important to get the decisions right on some documents than others.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "5:58": "Or maybe more important to get the decisions right on some categories than others, and this would call for some detailed evaluation of the results to understand the strengths and" + "time": "5:58", + "text": "Or maybe more important to get the decisions right on some categories than others, and this would call for some detailed evaluation of the results to understand the strengths and", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "6:12": "weaknesses of different methods, and to understand the performance of these methods in detail, on a per-category or per-document basis. One example that clearly shows that decision errors have different costs is spam filtering, which can be treated as a two-category categorization problem." + "time": "6:12", + "text": "weaknesses of different methods, and to understand the performance of these methods in detail, on a per-category or per-document basis. One example that clearly shows that decision errors have different costs is spam filtering, which can be treated as a two-category categorization problem.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "6:36": "Missing a legitimate email is one type of error. But letting spam come into your folder is another type of error.
The two types of errors are clearly very different, because it's very important not to miss a legitimate email. It's okay to occasionally let a spam email come into your inbox. So the first error, missing a legitimate email, is of high cost. It's a very serious mistake, and classification accuracy does not address this issue." + "time": "6:36", + "text": "Missing a legitimate email is one type of error. But letting spam come into your folder is another type of error. The two types of errors are clearly very different, because it's very important not to miss a legitimate email. It's okay to occasionally let a spam email come into your inbox. So the first error, missing a legitimate email, is of high cost. It's a very serious mistake, and classification accuracy does not address this issue.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "7:14": "There's also another problem with an imbalanced test set. Imagine there's a skewed test set where most instances are in category one: 98% of instances are in category one and only 2% are in category two. In such a case, we can have a very simple baseline that apparently performs very well, and that baseline simply puts all instances in the majority category." + "time": "7:14", + "text": "There's also another problem with an imbalanced test set. Imagine there's a skewed test set where most instances are in category one: 98% of instances are in category one and only 2% are in category two. In such a case, we can have a very simple baseline that apparently performs very well, and that baseline simply puts all instances in the majority category.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "7:36": "That will get us 98% accuracy in this case.
It's going to appear to be very effective, but in reality, this is obviously not a good result." + "time": "7:36", + "text": "That will get us 98% accuracy in this case. It's going to appear to be very effective, but in reality, this is obviously not a good result.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "7:47": "And so, in general, when we use classification accuracy as a measure, we want to ensure that the classes are balanced." + "time": "7:47", + "text": "And so, in general, when we use classification accuracy as a measure, we want to ensure that the classes are balanced.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "7:54": "And have about an equal number of instances, for example, in each class; otherwise the minority categories or classes tend to be overlooked in the evaluation of classification accuracy. So, to address these problems, we of course would like to also evaluate the results in other, different ways. As I said, it's beneficial to look at them from multiple perspectives. So for example, we can look at the results from the perspective of each document." + "time": "7:54", + "text": "And have about an equal number of instances, for example, in each class; otherwise the minority categories or classes tend to be overlooked in the evaluation of classification accuracy. So, to address these problems, we of course would like to also evaluate the results in other, different ways. As I said, it's beneficial to look at them from multiple perspectives. So for example, we can look at the results from the perspective of each document.
So the question here is, how good are the decisions on this document?", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "8:29": "Now, as in the general case of all decisions, we can think about four combinations of possibilities, depending on whether the system has said yes, and depending on whether the human has said it's correct or incorrect, or said yes or no. And so of the four combinations, the first is when both the human and the system said yes, and that's the true positives. So, when the system says yes, it's a positive. And when the human confirms that it is indeed correct, that becomes a true positive." + "time": "8:29", + "text": "Now, as in the general case of all decisions, we can think about four combinations of possibilities, depending on whether the system has said yes, and depending on whether the human has said it's correct or incorrect, or said yes or no. And so of the four combinations, the first is when both the human and the system said yes, and that's the true positives. So, when the system says yes, it's a positive. And when the human confirms that it is indeed correct, that becomes a true positive.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "9:07": "When the system says yes, but the human says no, that's incorrect; that's a false positive, FP." + "time": "9:07", + "text": "When the system says yes, but the human says no, that's incorrect; that's a false positive, FP.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "9:15": "And when the system says no, but the human says yes, then it's a false negative. We missed one assignment. When both the system and the human say no, then it's also correct; those are the true negatives.
All right, so then we can have some measures to just better characterize the performance by using these four numbers, and so two popular measures are precision and recall. And these were also proposed by information retrieval researchers in the 1960s for evaluating search results, but now they have become standard measures, used everywhere. So when the system says yes, we can ask the question, how many are correct? What's the percentage of correct decisions when the system says yes? That's called precision. It's the true positives divided by all the cases when the system says yes, all the positives. The other measure is called recall, and this measures" + "time": "9:15", + "text": "And when the system says no, but the human says yes, then it's a false negative. We missed one assignment. When both the system and the human say no, then it's also correct; those are the true negatives. All right, so then we can have some measures to just better characterize the performance by using these four numbers, and so two popular measures are precision and recall. And these were also proposed by information retrieval researchers in the 1960s for evaluating search results, but now they have become standard measures, used everywhere. So when the system says yes, we can ask the question, how many are correct? What's the percentage of correct decisions when the system says yes? That's called precision. It's the true positives divided by all the cases when the system says yes, all the positives. The other measure is called recall, and this measures", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "10:14": "whether the document has all the categories it should have. So in this case we divide the true positives by the true positives plus the false negatives. So these are all the cases where the human says the document should have this category.
So this represents all the categories that it should have got, and so recall tells us whether the system has actually indeed assigned all the categories that it should have to this document." + "time": "10:14", + "text": "whether the document has all the categories it should have. So in this case we divide the true positives by the true positives plus the false negatives. So these are all the cases where the human says the document should have this category. So this represents all the categories that it should have got, and so recall tells us whether the system has actually indeed assigned all the categories that it should have to this document.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "10:46": "This gives us a detailed view of the document, and then we can aggregate them later." + "time": "10:46", + "text": "This gives us a detailed view of the document, and then we can aggregate them later.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "10:52": "And if we're interested in some documents, this will tell us how well we did on those documents, or subsets of them, which might be more interesting than others, for example. And this allows us to analyze errors in more detail as well. We can separate the documents with certain characteristics from others, and then look at the errors. You might see a pattern that it works well for this kind of document, say long documents. It doesn't do as well for short documents." + "time": "10:52", + "text": "And if we're interested in some documents, this will tell us how well we did on those documents, or subsets of them, which might be more interesting than others, for example. And this allows us to analyze errors in more detail as well. We can separate the documents with certain characteristics from others, and then look at the errors. You might see a pattern that it works well for this kind of document, say long documents.
It doesn't do as well for short documents.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "11:18": "And this gives you some insight for improving the method. Similarly, we can look at the per-category evaluation. In this case, we're going to look at how good the decisions are on a particular category. As in the previous case we can define precision and recall. And it would just basically answer the questions from a different perspective." + "time": "11:18", + "text": "And this gives you some insight for improving the method. Similarly, we can look at the per-category evaluation. In this case, we're going to look at how good the decisions are on a particular category. As in the previous case we can define precision and recall. And it would just basically answer the questions from a different perspective.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "11:39": "So when the system says yes, how many are correct? That means looking at this category to see if all the documents that are assigned with this category are indeed in this category, right? And recall would tell us, has the category been actually assigned to all the documents that should have this category." + "time": "11:39", + "text": "So when the system says yes, how many are correct? That means looking at this category to see if all the documents that are assigned with this category are indeed in this category, right? And recall would tell us, has the category been actually assigned to all the documents that should have this category.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "12:00": "It's sometimes also useful to combine precision and recall as one measure, and this is often done by using the F measure. And this is just a harmonic mean of precision.
Precision and recall are as defined on this slide. And it's also controlled by a parameter beta to" + "time": "12:00", + "text": "It's sometimes also useful to combine precision and recall as one measure, and this is often done by using the F measure. And this is just a harmonic mean of precision and recall, as defined on this slide. And it's also controlled by a parameter beta to", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "12:20": "indicate whether precision is more important or recall is more important. When beta is set to 1, we have a measure called F1, and in this case, we just put equal weight on both precision and recall." + "time": "12:20", + "text": "indicate whether precision is more important or recall is more important. When beta is set to 1, we have a measure called F1, and in this case, we just put equal weight on both precision and recall.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "12:34": "F1 is very often used as a measure for categorization." + "time": "12:34", + "text": "F1 is very often used as a measure for categorization.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "12:39": "Now, as in all cases, when we combine results, you always should think about the best way of combining them. So in this case, I don't know if you have thought about it, but we could have combined them just with the arithmetic mean, right? So that would still give us the same range of values, but obviously there's a reason why we didn't do that and why F1 is more popular, and it's actually useful to think about the difference. If we think about that, you'll see that there is indeed some difference and some undesirable property of the arithmetic mean. Basically, it will be obvious to you if you think about a case when the system says yes for all the category and document pairs.
And then try to compute the precision and recall in that case, and see what would happen." + "time": "12:39", + "text": "Now, as in all cases, when we combine results, you always should think about the best way of combining them. So in this case, I don't know if you have thought about it, but we could have combined them just with the arithmetic mean, right? So that would still give us the same range of values, but obviously there's a reason why we didn't do that and why F1 is more popular, and it's actually useful to think about the difference. If we think about that, you'll see that there is indeed some difference and some undesirable property of the arithmetic mean. Basically, it will be obvious to you if you think about a case when the system says yes for all the category and document pairs. And then try to compute the precision and recall in that case, and see what would happen.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "13:28": "And basically, this kind of measure, the arithmetic mean, is not going to be as reasonable as F1 minus one [INAUDIBLE] trade off, so that the two values are equal. There is an extreme case where you have 0 for one measure and 1 for the other." + "time": "13:28", + "text": "And basically, this kind of measure, the arithmetic mean, is not going to be as reasonable as F1 minus one [INAUDIBLE] trade off, so that the two values are equal. There is an extreme case where you have 0 for one measure and 1 for the other.
Then F1 will be low, but the mean would still be reasonably high.", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" }, { - "14:01": "[MUSIC]" + "time": "14:01", + "text": "[MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" } ] }, { "5-4-text-categorization-evaluation-part-2": [ { - "0:00": "[SOUND] This lecture is a continued discussion of the evaluation of text categorization. Earlier we have introduced measures that can be used to compute precision and recall for each category and each document. Now in this lecture we're going to" + "time": "0:00", + "text": "[SOUND] This lecture is a continued discussion of the evaluation of text categorization. Earlier we have introduced measures that can be used to compute precision and recall for each category and each document. Now in this lecture we're going to", + "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" }, { - "0:27": "further examine how to combine the performance on the different categories or different documents, how to aggregate them, how do we take the average? You see on the title here I indicated it's called a macro average, and this is in contrast to micro average that we'll talk more about later." + "time": "0:27", + "text": "further examine how to combine the performance on the different categories or different documents, how to aggregate them, how do we take the average? You see on the title here I indicated it's called a macro average, and this is in contrast to micro average that we'll talk more about later.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" }, { - "0:47": "So, again, for each category we're going to compute the precision, recall and F1; so for example for category c1 we have precision p1, recall r1 and F value f1.
And similarly we can do that for category 2 and all the other categories. Now once we compute that, we can aggregate them; so for example we can aggregate all the precision values for all the categories, computing an overall precision. And this is often very useful to summarize what we have seen in the whole data set. And aggregation can be done in many different ways. Again as I said, in a case when you need to aggregate different values, it's always good to think about what's the best way of doing the aggregation. For example, we can consider the arithmetic mean, which is very commonly used, or you can use the geometric mean, which would have different behavior. Depending on the way you aggregate, you might get different conclusions in terms of which method works better, so it's important to consider these differences and choose the right one, or a more suitable one, for your task. So the difference, for example, between the arithmetic mean and the geometric mean is that the arithmetic mean would be dominated by high values whereas the geometric mean would be more affected by low values. And so whether you want to emphasize low values or high values would be a question related to your application. And similarly we can do that for recall and F score. So that's how we can generate the overall precision, recall and F score." + "time": "0:47", + "text": "So, again, for each category we're going to compute the precision, recall and F1; so for example for category c1 we have precision p1, recall r1 and F value f1. And similarly we can do that for category 2 and all the other categories. Now once we compute that, we can aggregate them; so for example we can aggregate all the precision values for all the categories, computing an overall precision. And this is often very useful to summarize what we have seen in the whole data set. And aggregation can be done in many different ways.
Again as I said, in a case when you need to aggregate different values, it's always good to think about what's the best way of doing the aggregation. For example, we can consider the arithmetic mean, which is very commonly used, or you can use the geometric mean, which would have different behavior. Depending on the way you aggregate, you might get different conclusions in terms of which method works better, so it's important to consider these differences and choose the right one, or a more suitable one, for your task. So the difference, for example, between the arithmetic mean and the geometric mean is that the arithmetic mean would be dominated by high values whereas the geometric mean would be more affected by low values. And so whether you want to emphasize low values or high values would be a question related to your application. And similarly we can do that for recall and F score. So that's how we can generate the overall precision, recall and F score.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" }, { - "2:31": "Now we can do the same for aggregation over all the documents. All right, so it's exactly the same situation: for each document we compute precision, recall, and F. And then after we have completed the computation for all these documents, we're going to aggregate them to generate the overall precision, overall recall, and overall F score." + "time": "2:31", + "text": "Now we can do the same for aggregation over all the documents. All right, so it's exactly the same situation: for each document we compute precision, recall, and F. And then after we have completed the computation for all these documents, we're going to aggregate them to generate the overall precision, overall recall, and overall F score.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" }, { - "2:53": "These are, again, examining the results from different angles.
Which one's more useful will depend on your application. In general, it's beneficial to look at the results from all these perspectives. And especially if you compare different methods in different dimensions, it might reveal which method is better in which measure or in what situations, and this provides an insightful understanding of the strengths or weaknesses of a method, and this provides further insight for improving them." + "time": "2:53", + "text": "These are, again, examining the results from different angles. Which one's more useful will depend on your application. In general, it's beneficial to look at the results from all these perspectives. And especially if you compare different methods in different dimensions, it might reveal which method is better in which measure or in what situations, and this provides an insightful understanding of the strengths or weaknesses of a method, and this provides further insight for improving them.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" }, { - "3:28": "So as I mentioned, there is also micro-average, in contrast to the macro average that we talked about earlier. In this case, what we do is pool together all the decisions, and then compute the precision and recall." + "time": "3:28", + "text": "So as I mentioned, there is also micro-average, in contrast to the macro average that we talked about earlier. In this case, what we do is pool together all the decisions, and then compute the precision and recall.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" }, { - "3:45": "So we can compute the overall precision and recall by just counting how many cases are true positives, how many cases are false positives, etc. That is, computing the values in the contingency table, and then we can compute the precision and recall just once."
+ "time": "3:45", + "text": "So we can compute the overall precision and recall by just counting how many cases are true positives, how many cases are false positives, etc. That is, computing the values in the contingency table, and then we can compute the precision and recall just once.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" }, { - "4:06": "In contrast, in macro-averaging, we're going to do that for each category first. And then aggregate over these categories, or we do that for each document and then aggregate over all the documents; but here we pool them together." + "time": "4:06", + "text": "In contrast, in macro-averaging, we're going to do that for each category first. And then aggregate over these categories, or we do that for each document and then aggregate over all the documents; but here we pool them together.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" }, { - "4:21": "Now this would be very similar to the classification accuracy that we used earlier, and one problem here of course is that it treats all the instances, all the decisions, equally." + "time": "4:21", + "text": "Now this would be very similar to the classification accuracy that we used earlier, and one problem here of course is that it treats all the instances, all the decisions, equally.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" }, { - "4:32": "And this may not be desirable." + "time": "4:32", + "text": "And this may not be desirable.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" }, { - "4:36": "But it may be appropriate for some applications, especially if we associate, for example, a cost with each combination. Then we can actually compute, for example, a weighted classification accuracy.
Where you associate the different cost or utility for each specific decision," + "time": "4:36", + "text": "But it may be appropriate for some applications, especially if we associate, for example, a cost with each combination. Then we can actually compute, for example, weighted classification accuracy, where you associate a different cost or utility with each specific decision,", + "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" }, { - "4:56": "so there could be variations of these methods that would be more useful. But in general macro average tends to be more information than micro average, just because it might reflect the need for understanding performance" + "time": "4:56", + "text": "so there could be variations of these methods that would be more useful. But in general macro average tends to be more informative than micro average, just because it might reflect the need for understanding performance", + "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" }, { - "5:14": "on each category or performance on each document which are needed in applications. But macro averaging and micro averaging, they are both very common, and you might see both reported in research papers on Categorization. Also sometimes categorization results might actually be evaluated from ranking prospective." + "time": "5:14", + "text": "on each category or performance on each document, which are needed in applications. But macro averaging and micro averaging, they are both very common, and you might see both reported in research papers on categorization. 
Also, sometimes categorization results might actually be evaluated from a ranking perspective.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" }, { - "5:40": "And this is because categorization results are sometimes or often indeed passed it to a human for various purposes. For example, it might be passed to humans for further editing. For example, news articles can be tempted to be categorized by using a system and then human editors would then correct them." + "time": "5:40", + "text": "And this is because categorization results are sometimes or often indeed passed to a human for various purposes. For example, they might be passed to humans for further editing. For example, news articles can be tentatively categorized by using a system, and human editors would then correct them.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" }, { - "6:02": "And all the email messages might be throughout to the right person for handling in the help desk. And in such a case the categorizations will help prioritizing the task for particular customer service person." + "time": "6:02", + "text": "Or the email messages might be routed to the right person for handling in the help desk. 
And in such a case the categorization will help prioritize the tasks for a particular customer service person.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" }, { - "6:19": "So, in this case the results have to be prioritized" + "time": "6:19", + "text": "So, in this case the results have to be prioritized", + "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" }, { - "6:26": "and if the system can't give a score to the categorization decision for confidence then we can use the scores to rank these decisions and then evaluate the results as a rank list, just as in a search engine. Evaluation where you rank the documents in responsible query." + "time": "6:26", + "text": "and if the system can give a score to the categorization decision for confidence, then we can use the scores to rank these decisions and then evaluate the results as a ranked list, just as in search engine evaluation, where you rank the documents in response to a query.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" }, { - "6:49": "So for example a discovery of spam emails can be evaluated" + "time": "6:49", + "text": "So for example the discovery of spam emails can be evaluated", + "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" }, { - "6:55": "based on ranking emails for the spam category. And this is useful if you want people to to verify whether this is really spam, right? The person would then take the rank To check one by one and then verify whether this is indeed a spam. So to reflect the utility for humans in such a task, it's better to evaluate Ranking Chris and this is basically similar to a search again." + "time": "6:55", + "text": "based on ranking emails for the spam category. 
And this is useful if you want people to verify whether this is really spam, right? The person would then take the ranked list to check one by one and then verify whether each is indeed spam. So to reflect the utility for humans in such a task, it's better to evaluate the ranking, and this is basically similar to a search engine.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" }, { - "7:25": "And in such a case often the problem can be better formulated as a ranking problem instead of a categorization problem. So for example, ranking documents in a search engine can also be framed as a binary categorization problem, distinguish the relevant documents that are useful to users from those that are not useful, but typically we frame this as a ranking problem, and we evaluate it as a rank list. That's because people tend to examine the results so" + "time": "7:25", + "text": "And in such a case often the problem can be better formulated as a ranking problem instead of a categorization problem. So for example, ranking documents in a search engine can also be framed as a binary categorization problem, distinguishing the relevant documents that are useful to users from those that are not useful, but typically we frame this as a ranking problem, and we evaluate it as a ranked list. That's because people tend to examine the results, so", + "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" }, { - "7:52": "ranking evaluation more reflects utility from user's perspective." + "time": "7:52", + "text": "ranking evaluation better reflects utility from the user's perspective.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" }, { - "7:58": "So to summarize categorization evaluation, first evaluation is always very important for all these tasks. So get it right." 
+ "time": "7:58", + "text": "So to summarize categorization evaluation, first evaluation is always very important for all these tasks. So get it right.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" }, { - "8:07": "If you don't get it right, you might get misleading results. And you might be misled to believe one method is better than the other, which is in fact not true. So it's very important to get it right." + "time": "8:07", + "text": "If you don't get it right, you might get misleading results. And you might be misled to believe one method is better than the other, which is in fact not true. So it's very important to get it right.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" }, { - "8:18": "Measures must also reflect the intended use of the results for a particular application. For example, in spam filtering and news categorization the results are used in maybe different ways." + "time": "8:18", + "text": "Measures must also reflect the intended use of the results for a particular application. For example, in spam filtering and news categorization the results are used in maybe different ways.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" }, { - "8:30": "So then we would need to consider the difference and design measures appropriately." + "time": "8:30", + "text": "So then we would need to consider the difference and design measures appropriately.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" }, { - "8:36": "We generally need to consider how will the results be further processed by the user and think from a user's perspective. What quality is important? What aspect of quality is important?" 
+ "time": "8:36", + "text": "We generally need to consider how will the results be further processed by the user and think from a user's perspective. What quality is important? What aspect of quality is important?", + "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" }, { - "8:49": "Sometimes there are trade offs between multiple aspects like precision and recall and so we need to know for this application is high recall more important, or high precision is more important." + "time": "8:49", + "text": "Sometimes there are trade offs between multiple aspects like precision and recall and so we need to know for this application is high recall more important, or high precision is more important.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" }, { - "8:59": "Ideally we associate the different cost with each different decision arrow. And this of course has to be designed in an application specific way." + "time": "8:59", + "text": "Ideally we associate the different cost with each different decision arrow. And this of course has to be designed in an application specific way.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" }, { - "9:08": "Some commonly used measures for relative comparison methods are the following. Classification accuracy, it's very commonly used for especially balance. [INAUDIBLE] preceding [INAUDIBLE] Scores are common and report characterizing performances, given angles and give us some [INAUDIBLE] like a [INAUDIBLE] Per document basis [INAUDIBLE] And then take a average of all of them, different ways micro versus macro [INAUDIBLE]. In general, you want to look at the results from multiple perspectives and for particular applications some perspectives would be more important than others but diagnoses and analysis of categorization methods. 
It's generally useful to look at as many perspectives as possible to see subtle differences between methods or tow see where a method might be weak from which you can obtain sight for improving a method." + "time": "9:08", + "text": "Some commonly used measures for relative comparison of methods are the following. Classification accuracy, it's very commonly used, especially for balanced [INAUDIBLE] preceding [INAUDIBLE] Scores are common and report characterizing performances, given angles and give us some [INAUDIBLE] like a [INAUDIBLE] Per document basis [INAUDIBLE] And then take an average of all of them, different ways micro versus macro [INAUDIBLE]. In general, you want to look at the results from multiple perspectives, and for particular applications some perspectives would be more important than others, but for diagnosis and analysis of categorization methods, it's generally useful to look at as many perspectives as possible to see subtle differences between methods or to see where a method might be weak, from which you can obtain insight for improving a method.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" }, { - "10:04": "Finally sometimes ranking may be more appropriate so be careful sometimes categorization has got may be better frame as a ranking tasks and there're machine running methods for optimizing ranking measures as well." + "time": "10:04", + "text": "Finally, sometimes ranking may be more appropriate, so be careful: sometimes a categorization task may be better framed as a ranking task, and there are machine learning methods for optimizing ranking measures as well.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" }, { - "10:17": "So here are two suggested readings. One is some chapters of this book where you can find more discussion about evaluation measures. 
The second is a paper about comparison of different approaches to text categorization and it also has an excellent discussion of how to evaluate textual categorization. [MUSIC]" + "time": "10:17", + "text": "So here are two suggested readings. One is some chapters of this book where you can find more discussion about evaluation measures. The second is a paper about the comparison of different approaches to text categorization, and it also has an excellent discussion of how to evaluate text categorization. [MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" }ine ] }, { "5-5-opinion-mining-and-sentiment-analysis-motivation": [ { - "0:00": "[SOUND] This lecture is about, Opinion Mining and Sentiment Analysis, covering, Motivation. In this lecture, we're going to start, talking about, mining a different kind of knowledge. Namely, knowledge about the observer or humans that have generated the text data. In particular, we're going to talk about the opinion mining and sentiment analysis." + "time": "0:00", + "text": "[SOUND] This lecture is about Opinion Mining and Sentiment Analysis, covering Motivation. In this lecture, we're going to start talking about mining a different kind of knowledge. Namely, knowledge about the observer, or humans that have generated the text data. In particular, we're going to talk about opinion mining and sentiment analysis.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "0:32": "As we discussed earlier, text data can be regarded as data generated from humans as subjective sensors." 
+ "time": "0:32", + "text": "As we discussed earlier, text data can be regarded as data generated from humans as subjective sensors.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "0:43": "In contrast, we have other devices such as video recorder that can report what's happening in the real world objective to generate the viewer data for example." + "time": "0:43", + "text": "In contrast, we have other devices such as video recorder that can report what's happening in the real world objective to generate the viewer data for example.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "0:58": "Now the main difference between test data and other data, like video data, is that it has rich opinions, and the content tends to be subjective because it's generated from humans." + "time": "0:58", + "text": "Now the main difference between test data and other data, like video data, is that it has rich opinions, and the content tends to be subjective because it's generated from humans.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "1:16": "Now, this is actually a unique advantaged of text data, as compared with other data, because the office is a great opportunity to understand the observers. We can mine text data to understand their opinions. Understand people's preferences, how people think about something." + "time": "1:16", + "text": "Now, this is actually a unique advantaged of text data, as compared with other data, because the office is a great opportunity to understand the observers. We can mine text data to understand their opinions. 
Understand people's preferences, how people think about something.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "1:37": "So this lecture and the following lectures will be mainly about how we can mine and analyze opinions buried in a lot of text data." + "time": "1:37", + "text": "So this lecture and the following lectures will be mainly about how we can mine and analyze opinions buried in a lot of text data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "1:49": "So let's start with the concept of opinion. It's not that easy to formally define opinion, but mostly we would define opinion as a subjective statement describing what a person believes or thinks about something." + "time": "1:49", + "text": "So let's start with the concept of opinion. It's not that easy to formally define opinion, but mostly we would define opinion as a subjective statement describing what a person believes or thinks about something.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "2:08": "Now, I highlighted quite a few words here. And that's because it's worth thinking a little bit more about these words. And that will help us better understand what's in an opinion. And this further helps us to define opinion more formally. Which is always needed to computation to resolve the problem of opinion mining. So let's first look at the key word of subjective here. This is in contrast with objective statement or factual statement." + "time": "2:08", + "text": "Now, I highlighted quite a few words here. And that's because it's worth thinking a little bit more about these words. And that will help us better understand what's in an opinion. And this further helps us to define opinion more formally. 
Which is always needed to computationally solve the problem of opinion mining. So let's first look at the keyword subjective here. This is in contrast with an objective statement, or factual statement.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "2:40": "Those statements can be proved right or wrong." + "time": "2:40", + "text": "Those statements can be proved right or wrong.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "2:45": "And this is a key differentiating factor from opinions which tends to be not easy to prove wrong or right, because it reflects what the person thinks about something." + "time": "2:45", + "text": "And this is a key differentiating factor from opinions, which tend to be not easy to prove wrong or right, because they reflect what the person thinks about something.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "2:59": "So in contrast, objective statement can usually be proved wrong or correct." + "time": "2:59", + "text": "So in contrast, an objective statement can usually be proved wrong or correct.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "3:07": "For example, you might say this computer has a screen and a battery." + "time": "3:07", + "text": "For example, you might say this computer has a screen and a battery.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "3:16": "Now that's something you can check. It's either having a battery or not." + "time": "3:16", + "text": "Now that's something you can check. 
It either has a battery or not.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "3:23": "But in contrast with this, think about the sentence such as, this laptop has the best battery or this laptop has a nice screen. Now these statements are more subjective and it's very hard to prove whether it's wrong or correct." + "time": "3:23", + "text": "But in contrast with this, think about a sentence such as, this laptop has the best battery, or, this laptop has a nice screen. Now these statements are more subjective and it's very hard to prove whether they're wrong or correct.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "3:45": "So opinion, is a subjective statement." + "time": "3:45", + "text": "So opinion is a subjective statement.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "3:50": "And next lets look at the keyword person here. And that indicates that is an opinion holder. Because when we talk about opinion, it's about an opinion held by someone. And then we notice that there is something here. So that is the target of the opinion. The opinion is expressed on this something." + "time": "3:50", + "text": "And next let's look at the keyword person here. And that indicates that there is an opinion holder. Because when we talk about opinion, it's about an opinion held by someone. And then we notice that there is something here. So that is the target of the opinion. The opinion is expressed on this something.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "4:11": "And now, of course, believes or thinks implies that an opinion will depend on the culture or background and the context in general. 
Because a person might think different in a different context. People from different background may also think in different ways. So this analysis shows that there are multiple elements that we need to include in order to characterize opinion." + "time": "4:11", + "text": "And now, of course, believes or thinks implies that an opinion will depend on the culture or background, and the context in general. Because a person might think differently in a different context. People from different backgrounds may also think in different ways. So this analysis shows that there are multiple elements that we need to include in order to characterize opinion.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "4:38": "So, what's a basic opinion representation like? Well, it should include at least three elements, right? Firstly, it has to specify what's the opinion holder. So whose opinion is this? Second, it must also specify the target, what's this opinion about?" + "time": "4:38", + "text": "So, what's a basic opinion representation like? Well, it should include at least three elements, right? First, it has to specify the opinion holder. So whose opinion is this? Second, it must also specify the target: what's this opinion about?", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "4:57": "And third, of course, we want opinion content. And so what exactly is opinion? If you can identify these, we get a basic understanding of opinion and can already be useful sometimes. You want to understand further, we want enriched opinion representation." + "time": "4:57", + "text": "And third, of course, we want the opinion content. So what exactly is the opinion? If you can identify these, we get a basic understanding of the opinion, which can already be useful sometimes. 
If you want to understand further, we want an enriched opinion representation.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "5:15": "And that means we also want to understand that, for example, the context of the opinion and what situation was the opinion expressed. For example, what time was it expressed? We, also, would like to, people understand the opinion sentiment, and this is to understand that what the opinion tells us about the opinion holder's feeling. For example, is this opinion positive, or negative? Or perhaps the opinion holder was happy or was sad, and so such understanding obvious to those beyond just Extracting the opinion content, it needs some analysis." + "time": "5:15", + "text": "And that means we also want to understand, for example, the context of the opinion, and in what situation the opinion was expressed. For example, what time was it expressed? We would also like to understand the opinion sentiment, and this is to understand what the opinion tells us about the opinion holder's feeling. For example, is this opinion positive, or negative? Or perhaps the opinion holder was happy or was sad, and such understanding obviously goes beyond just extracting the opinion content; it needs some analysis.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "6:00": "So let's take a simple example of a product review. In this case, this actually expressed the opinion holder, and expressed the target. So its obviously whats opinion holder and that's just reviewer and its also often very clear whats the opinion target and that's the product review for example iPhone 6. When the review is posted usually you can't such information easier." + "time": "6:00", + "text": "So let's take a simple example of a product review. 
In this case, the opinion holder and the target are explicitly expressed. So it's obvious who the opinion holder is, and that's just the reviewer, and it's also often very clear what the opinion target is, and that's the product reviewed, for example iPhone 6. When the review is posted, usually you can get such information easily.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "6:27": "Now the content, of course, is a review text that's, in general, also easy to obtain. So you can see product reviews are fairly easy to analyze in terms of obtaining a basic opinion of representation. But of course, if you want to get more information, you might know the Context, for example. The review was written in 2015. Or, we want to know that the sentiment of this review is positive. So, this additional understanding of course adds value to mining the opinions." + "time": "6:27", + "text": "Now the content, of course, is the review text that's, in general, also easy to obtain. So you can see product reviews are fairly easy to analyze in terms of obtaining a basic opinion representation. But of course, if you want to get more information, you might want to know the context, for example, that the review was written in 2015. Or, we want to know that the sentiment of this review is positive. So, this additional understanding of course adds value to mining the opinions.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "7:04": "Now, you can see in this case the task is relatively easy and that's because the opinion holder and the opinion target have already been identified." 
+ "time": "7:04", + "text": "Now, you can see in this case the task is relatively easy and that's because the opinion holder and the opinion target have already been identified.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "7:14": "Now let's take a look at the sentence in the news. In this case, we have a implicit holder and a implicit target. And the tasker is in general harder. So, we can identify opinion holder here, and that's the governor of Connecticut." + "time": "7:14", + "text": "Now let's take a look at the sentence in the news. In this case, we have a implicit holder and a implicit target. And the tasker is in general harder. So, we can identify opinion holder here, and that's the governor of Connecticut.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "7:32": "We can also identify the target. So one target is Hurricane Sandy, but there is also another target mentioned which is hurricane of 1938. So what's the opinion? Well, there's a negative sentiment here that's indicated by words like bad and worst." + "time": "7:32", + "text": "We can also identify the target. So one target is Hurricane Sandy, but there is also another target mentioned which is hurricane of 1938. So what's the opinion? Well, there's a negative sentiment here that's indicated by words like bad and worst.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "7:53": "And we can also, then, identify context, New England in this case." 
+ "time": "7:53", + "text": "And we can also, then, identify context, New England in this case.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "8:00": "Now, unlike in the playoff review, all these elements must be extracted by using natural RAM processing techniques. So, the task Is much harder. And we need a deeper natural language processing." + "time": "8:00", + "text": "Now, unlike in the playoff review, all these elements must be extracted by using natural RAM processing techniques. So, the task Is much harder. And we need a deeper natural language processing.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "8:14": "And these examples also" + "time": "8:14", + "text": "And these examples also", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "8:17": "suggest that a lot of work can be easy to done for product reviews. That's indeed what has happened. Analyzing and assembling news is still quite difficult, it's more difficult than the analysis of opinions in product reviews." + "time": "8:17", + "text": "suggest that a lot of work can be easy to done for product reviews. That's indeed what has happened. Analyzing and assembling news is still quite difficult, it's more difficult than the analysis of opinions in product reviews.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "8:36": "Now there are also some other interesting variations. In fact, here we're going to examine the variations of opinions, more systematically. First, let's think about the opinion holder." + "time": "8:36", + "text": "Now there are also some other interesting variations. In fact, here we're going to examine the variations of opinions, more systematically. 
First, let's think about the opinion holder.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "8:47": "The holder could be an individual or it could be group of people. Sometimes, the opinion was from a committee. Or from a whole country of people." + "time": "8:47", + "text": "The holder could be an individual, or it could be a group of people. Sometimes, the opinion was from a committee, or from a whole country of people.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "8:56": "Opinion target accounts will vary a lot. It can be about one entity, a particular person, a particular product, a particular policy, ect. But it could be about a group of products. Could be about the products from a company in general." + "time": "8:56", + "text": "The opinion target can also vary a lot. It can be about one entity, a particular person, a particular product, a particular policy, etc. But it could be about a group of products. Could be about the products from a company in general.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "9:11": "Could also be very specific about one attribute, though. An attribute of the entity. For example, it's just about the battery of iPhone. 
It could be someone else's opinion. And one person might comment on another person's opinion, etc. So, you can see there is a lot of variation here that will cause the problem to vary a lot. Now, opinion content, of course, can also vary a lot on the surface, you can identify a one-sentence opinion or a one-phrase opinion. But you can also have longer text to express an opinion, like a whole article.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "9:48": "And furthermore we identify the variation in the sentiment or emotion dimension, that's about the feeling of the opinion holder. So, we can distinguish positive versus negative or neutral, or happy versus sad, etc." + "time": "9:48", + "text": "And furthermore we identify the variation in the sentiment or emotion dimension, that's about the feeling of the opinion holder. So, we can distinguish positive versus negative or neutral, or happy versus sad, etc.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "10:03": "Finally, the opinion context can also vary. We can have a simple context, like different times or different locations. But there could also be complex contexts, such as some background of the topic being discussed. So when an opinion is expressed in a particular discourse context, it has to be interpreted in different ways than when it's expressed in another context. So the context can be very [INAUDIBLE] to the entire discourse context of the opinion. From a computational perspective, we're mostly interested in what opinions can be extracted from text data. So, it turns out that we can also differentiate, distinguish, different kinds of opinions in text data from a computational perspective. First, the observer might make a direct comment about the opinion target being observed. So in this case we have the author's opinion. For example, I don't like this phone at all. 
And that's an opinion of this author." + "time": "10:03", + "text": "Finally, the opinion context can also vary. We can have a simple context, like different times or different locations. But there could also be complex contexts, such as some background of the topic being discussed. So when an opinion is expressed in a particular discourse context, it has to be interpreted in different ways than when it's expressed in another context. So the context can be very [INAUDIBLE] to the entire discourse context of the opinion. From a computational perspective, we're mostly interested in what opinions can be extracted from text data. So, it turns out that we can also differentiate, distinguish, different kinds of opinions in text data from a computational perspective. First, the observer might make a direct comment about the opinion target being observed. So in this case we have the author's opinion. For example, I don't like this phone at all. And that's an opinion of this author.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "10:59": "In contrast, the text might also report opinions about others. So the person could also make an observation about another person's opinion and report this opinion. So for example, I believe he loves the painting. And that opinion is really expressed by another person here. So, it doesn't mean this author loves that painting." + "time": "10:59", + "text": "In contrast, the text might also report opinions about others. So the person could also make an observation about another person's opinion and report this opinion. So for example, I believe he loves the painting. And that opinion is really expressed by another person here. 
So, it doesn't mean this author loves that painting.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "11:33": "So clearly, the two kinds of opinions need to be analyzed in different ways, and sometimes in product reviews, you can see, although mostly the opinions are from this reviewer. Sometimes, a reviewer might mention opinions of his friend or her friend." + "time": "11:33", + "text": "So clearly, the two kinds of opinions need to be analyzed in different ways, and sometimes in product reviews, you can see, although mostly the opinions are from this reviewer. Sometimes, a reviewer might mention opinions of his friend or her friend.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "11:51": "Another complication is that there may be indirect opinions or inferred opinions that can be obtained by making inferences on what's expressed in the text that might not necessarily look like an opinion. For example, one statement might be, this phone ran out of battery in just one hour. Now, this is in a way a factual statement because it's either true or false, right? 
You can even verify that, but from this statement, one can also infer some negative opinions about the quality of the battery of this phone, or the feeling of the opinion holder about the battery. The opinion holder clearly wished that the battery would last longer." + "time": "11:51", + "text": "Another complication is that there may be indirect opinions or inferred opinions that can be obtained by making inferences on what's expressed in the text that might not necessarily look like an opinion. For example, one statement might be, this phone ran out of battery in just one hour. Now, this is in a way a factual statement because it's either true or false, right? You can even verify that, but from this statement, one can also infer some negative opinions about the quality of the battery of this phone, or the feeling of the opinion holder about the battery. The opinion holder clearly wished that the battery would last longer.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "12:42": "So these are interesting variations that we need to pay attention to when we extract opinions. Also, for this reason about indirect opinions," + "time": "12:42", + "text": "So these are interesting variations that we need to pay attention to when we extract opinions. Also, for this reason about indirect opinions,", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "12:53": "it's often also very useful to extract whatever the person has said about the product, and sometimes factual sentences like these are also very useful. So, from a practical viewpoint, sometimes we don't necessarily extract just the subjective sentences. Instead, again, all the sentences that are about the opinions are useful for understanding the person or understanding the product being commented on." + "time": "12:53", + "text": "it's often also very useful to extract whatever the person has said about the product, and sometimes factual sentences like these are also very useful. So, from a practical viewpoint, sometimes we don't necessarily extract just the subjective sentences. Instead, again, all the sentences that are about the opinions are useful for understanding the person or understanding the product being commented on.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "13:19": "So the task of opinion mining can be defined as taking textual input to generate a set of opinion representations. 
In each representation, we should identify the opinion holder, target, content, and the context. Ideally we can also infer the opinion sentiment from the content and the context to better understand." + "time": "13:19", + "text": "So the task of opinion mining can be defined as taking textual input to generate a set of opinion representations. In each representation, we should identify the opinion holder, target, content, and the context. Ideally we can also infer the opinion sentiment from the content and the context to better understand.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "13:43": "The opinion." + "time": "13:43", + "text": "The opinion.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "13:44": "Now often, some elements of the representation are already known. I just gave a good example in the case of product reviews, where the opinion holder and the opinion target are often explicitly identified. And that's why this turns out to be one of the simplest opinion mining tasks. Now, it's interesting to think about the other tasks that might also be simple. Because those are the cases where you can easily build applications by using opinion mining techniques." + "time": "13:44", + "text": "Now often, some elements of the representation are already known. I just gave a good example in the case of product reviews, where the opinion holder and the opinion target are often explicitly identified. And that's why this turns out to be one of the simplest opinion mining tasks. Now, it's interesting to think about the other tasks that might also be simple. 
Because those are the cases where you can easily build applications by using opinion mining techniques.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "14:17": "So now that we have talked about what is opinion mining, we have defined the task. Let's also just talk a little bit about why opinion mining is very important and why it's very useful. So here, I identify three major reasons, three broad reasons. The first is it can help decision support. It can help us optimize our decisions. We often look at other people's opinions and read the reviews in order to make decisions like buying a product or using a service." + "time": "14:17", + "text": "So now that we have talked about what is opinion mining, we have defined the task. Let's also just talk a little bit about why opinion mining is very important and why it's very useful. So here, I identify three major reasons, three broad reasons. The first is it can help decision support. It can help us optimize our decisions. We often look at other people's opinions and read the reviews in order to make decisions like buying a product or using a service.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "14:52": "We also would be interested in others' opinions when we decide whom to vote for, for example." + "time": "14:52", + "text": "We also would be interested in others' opinions when we decide whom to vote for, for example.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "15:00": "And policy makers may also want to know people's opinions when designing a new policy. So that's one general kind of application. And it's very broad, of course. The second application is to understand people, and this is also very important. 
For example, it could help understand people's preferences. And this could help us better serve people. For example, we can optimize a product search engine or optimize a recommender system if we know what people are interested in, what people think about a product." + "time": "15:00", + "text": "And policy makers may also want to know people's opinions when designing a new policy. So that's one general kind of application. And it's very broad, of course. The second application is to understand people, and this is also very important. For example, it could help understand people's preferences. And this could help us better serve people. For example, we can optimize a product search engine or optimize a recommender system if we know what people are interested in, what people think about a product.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "15:35": "It can also help with advertising, of course, and we can have targeted advertising if we know what kind of people tend to like what kind of product." + "time": "15:35", + "text": "It can also help with advertising, of course, and we can have targeted advertising if we know what kind of people tend to like what kind of product.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "15:48": "Now the third kind of application can be called a voluntary survey. Now, this kind of research used to be done by doing surveys, manual surveys with questionnaires. People need to fill in forms to answer the questions. Now this is directly related to humans as sensors, and we can usually aggregate opinions from a lot of humans to kind of assess the general opinion. Now this would be very useful for business intelligence where manufacturers want to know where their products have advantages over others." 
+ "time": "15:48", + "text": "Now the third kind of application can be called a voluntary survey. Now, this kind of research used to be done by doing surveys, manual surveys with questionnaires. People need to fill in forms to answer the questions. Now this is directly related to humans as sensors, and we can usually aggregate opinions from a lot of humans to kind of assess the general opinion. Now this would be very useful for business intelligence where manufacturers want to know where their products have advantages over others.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "16:31": "What are the winning features of their products, winning features of competitive products." + "time": "16:31", + "text": "What are the winning features of their products, winning features of competitive products.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "16:37": "Market research has to do with understanding consumers' opinions. And this can provide very useful data for that. Data-driven social science research can benefit from this because they can do text mining to understand people's opinions. And if you can aggregate a lot of opinions from social media, from a lot of popular" + "time": "16:37", + "text": "Market research has to do with understanding consumers' opinions. And this can provide very useful data for that. Data-driven social science research can benefit from this because they can do text mining to understand people's opinions. And if you can aggregate a lot of opinions from social media, from a lot of popular", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "16:58": "information then you can actually do some study of some questions. 
For example, we can study the behavior of people on social media on social networks. And these can be regarded as voluntary survey done by those people." + "time": "16:58", + "text": "information then you can actually do some study of some questions. For example, we can study the behavior of people on social media on social networks. And these can be regarded as voluntary survey done by those people.", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" }, { - "17:19": "In general, we can gain a lot of advantage in any prediction task because we can leverage the text data as extra data above any problem. And so we can use text based prediction techniques to help you make predictions or improve the accuracy of prediction. [MUSIC]" + "time": "17:19", + "text": "In general, we can gain a lot of advantage in any prediction task because we can leverage the text data as extra data above any problem. And so we can use text based prediction techniques to help you make predictions or improve the accuracy of prediction. [MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" } ] }, { "5-6-opinion-mining-and-sentiment-analysis-sentiment-classification": [ { - "0:00": "[NOISE] This lecture is about the sentiment classification." + "time": "0:00", + "text": "[NOISE] This lecture is about the sentiment classification.", + "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" }, { - "0:11": "If we assume that" + "time": "0:11", + "text": "If we assume that", + "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" }, { - "0:13": "most of the elements in the opinion representation are all ready known, then our only task may be just a sentiment classification, as shown in this case. 
So suppose we know who's the opinion holder and what's the opinion target, and also know the content and the context of the opinion, then we mainly need to decide the opinion sentiment of the review. So this is a case of just using sentiment classification for understanding opinion." + "time": "0:13", + "text": "most of the elements in the opinion representation are already known, then our only task may be just a sentiment classification, as shown in this case. So suppose we know who's the opinion holder and what's the opinion target, and also know the content and the context of the opinion, then we mainly need to decide the opinion sentiment of the review. So this is a case of just using sentiment classification for understanding opinion.", + "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" }, { - "0:46": "Sentiment classification can be defined more specifically as follows. The input is an opinionated text object, the output is typically a sentiment label, or a sentiment tag, and that can be designed in two ways. One is polarity analysis, where we have categories such as positive, negative, or neutral." + "time": "0:46", + "text": "Sentiment classification can be defined more specifically as follows. The input is an opinionated text object, the output is typically a sentiment label, or a sentiment tag, and that can be designed in two ways. One is polarity analysis, where we have categories such as positive, negative, or neutral.", + "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" }, { - "1:08": "The other is emotion analysis that can go beyond a polarity to characterize the feeling of the opinion holder." 
+ "time": "1:08", + "text": "The other is emotion analysis that can go beyond a polarity to characterize the feeling of the opinion holder.", + "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" }, { - "1:21": "In the case of polarity analysis, we sometimes also have numerical ratings as you often see in some reviews on the web." + "time": "1:21", + "text": "In the case of polarity analysis, we sometimes also have numerical ratings as you often see in some reviews on the web.", + "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" }, { - "1:30": "Five might denote the most positive, and one may be the most negative, for example. In general, you have just discrete, ordered categories to characterize the sentiment." + "time": "1:30", + "text": "Five might denote the most positive, and one may be the most negative, for example. In general, you have just discrete, ordered categories to characterize the sentiment.", + "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" }, { - "1:43": "In emotion analysis, of course, there are also different ways for designing the categories." + "time": "1:43", + "text": "In emotion analysis, of course, there are also different ways for designing the categories.", + "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" }, { - "1:49": "The six most frequently used categories are happy, sad, fearful, angry, surprised, and disgusted." 
+ "time": "1:49", + "text": "The six most frequently used categories are happy, sad, fearful, angry, surprised, and disgusted.", + "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" }, { - "1:59": "So as you can see, the task is essentially a classification task, or categorization task, as we've seen before, so it's a special case of text categorization. This also means any text categorization method can be used to do sentiment classification." + "time": "1:59", + "text": "So as you can see, the task is essentially a classification task, or categorization task, as we've seen before, so it's a special case of text categorization. This also means any text categorization method can be used to do sentiment classification.", + "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" }, { - "2:15": "Now of course if you just do that, the accuracy may not be good because sentiment classification does require some improvements over a regular text categorization technique, or a simple text categorization technique. In particular, it needs two kinds of improvements. One is to use more sophisticated features that may be more appropriate for sentiment tagging as I will discuss in a moment." + "time": "2:15", + "text": "Now of course if you just do that, the accuracy may not be good because sentiment classification does require some improvements over a regular text categorization technique, or a simple text categorization technique. In particular, it needs two kinds of improvements. 
One is to use more sophisticated features that may be more appropriate for sentiment tagging as I will discuss in a moment.", + "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" }, { - "2:41": "The other is to consider the order of these categories, and especially in polarity analysis, it's very clear there's an order here, and so these categories are not all that independent. There's order among them, and so it's useful to consider the order. For example, we could use ordinal regression to do that, and that's something that we'll talk more about later. So now, let's talk about some features that are often very useful for text categorization and text mining in general, but some of them are especially also needed for sentiment analysis." + "time": "2:41", + "text": "The other is to consider the order of these categories, and especially in polarity analysis, it's very clear there's an order here, and so these categories are not all that independent. There's order among them, and so it's useful to consider the order. For example, we could use ordinal regression to do that, and that's something that we'll talk more about later. So now, let's talk about some features that are often very useful for text categorization and text mining in general, but some of them are especially also needed for sentiment analysis.", + "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" }, { - "3:18": "So let's start from the simplest one, which is character n-grams. You can just have a sequence of characters as a unit, and they can be mixed with different n's, different lengths. All right, and this is a very general way and very robust way to represent the text data. And you could do that for any language, pretty much." + "time": "3:18", + "text": "So let's start from the simplest one, which is character n-grams. 
You can just have a sequence of characters as a unit, and they can be mixed with different n's, different lengths. All right, and this is a very general way and very robust way to represent the text data. And you could do that for any language, pretty much.", + "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" }, { - "3:42": "And this is also robust to spelling errors or recognition errors, right? So if you misspell a word by one character, this representation would still allow you to match this word when it occurs in the text correctly. Right, so the misspelled word and the correct form can be matched because they contain some common n-grams of characters. But of course such a representation would not be as discriminative as words." + "time": "3:42", + "text": "And this is also robust to spelling errors or recognition errors, right? So if you misspell a word by one character, this representation would still allow you to match this word when it occurs in the text correctly. Right, so the misspelled word and the correct form can be matched because they contain some common n-grams of characters. But of course such a representation would not be as discriminative as words.", + "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" }, { - "4:10": "So next, we have word n-grams, a sequence of words and again, we can mix them with different n's. Unigrams are actually often very effective for a lot of text processing tasks, and it's mostly because words are well-designed features by humans for communication, and so they are often good enough for many tasks. But it's not good, or not sufficient for sentiment analysis clearly. For example, we might see a sentence like, it's not good or it's not as good as something else, right? 
So in such a case, if you just take 'good', that would suggest positive, and that's not good, all right, so it's not accurate. But if you take the bigram 'not good' together, then it's more accurate. So longer n-grams are generally more discriminative, and they're more specific. If you match it, it says a lot, and it's accurate; it's unlikely to be ambiguous. But it may cause overfitting because with such very unique features, the machine learning program can easily pick up such features from the training set and rely on such unique features to distinguish the categories. And obviously, that kind of classifier would not generalize well to future data where such discriminative features will not necessarily occur. So that's a problem of overfitting that's not desirable. We can also consider part of speech tag n-grams if we can do part of speech tagging and, for example, adjective plus noun could form a pair. We can also mix n-grams of words and n-grams of part of speech tags. For example, the word great might be followed by a noun, and this could become a feature, a hybrid feature, that could be useful for sentiment analysis." + "time": "4:10", + "text": "So next, we have word n-grams, a sequence of words and again, we can mix them with different n's. Unigrams are actually often very effective for a lot of text processing tasks, and it's mostly because words are well-designed features by humans for communication, and so they are often good enough for many tasks. But it's not good, or not sufficient for sentiment analysis clearly. For example, we might see a sentence like, it's not good or it's not as good as something else, right? So in such a case, if you just take 'good', that would suggest positive, and that's not good, all right, so it's not accurate. But if you take the bigram 'not good' together, then it's more accurate. So longer n-grams are generally more discriminative, and they're more specific. 
If you match it, it says a lot, and it's accurate; it's unlikely to be ambiguous. But it may cause overfitting because with such very unique features, the machine learning program can easily pick up such features from the training set and rely on such unique features to distinguish the categories. And obviously, that kind of classifier would not generalize well to future data where such discriminative features will not necessarily occur. So that's a problem of overfitting that's not desirable. We can also consider part of speech tag n-grams if we can do part of speech tagging and, for example, adjective plus noun could form a pair. We can also mix n-grams of words and n-grams of part of speech tags. For example, the word great might be followed by a noun, and this could become a feature, a hybrid feature, that could be useful for sentiment analysis.", + "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" }, { - "6:06": "So next we can also have word classes. So these classes can be syntactic, like part of speech tags, or could be semantic, and they might represent concepts in a thesaurus or ontology, like WordNet. Or they can be recognized named entities, like people or places, and these categories can be used to enrich the representation as additional features. We can also learn word clusters; for example, we've talked about mining associations of words. And so we can have clusters of paradigmatically related words or syntagmatically related words, and these clusters can be features to supplement the word-based representation. Furthermore, we can also have frequent patterns in text, and these could be frequent word sets, where the words that form the pattern do not necessarily occur together or next to each other. But we will also have collocations where the words may occur more closely together, and such patterns provide more discriminative features than words, obviously." 
+ "time": "6:06", + "text": "So next we can also have word classes. So these classes can be syntactic, like part of speech tags, or could be semantic, and they might represent concepts in a thesaurus or ontology, like WordNet. Or they can be recognized named entities, like people or places, and these categories can be used to enrich the representation as additional features. We can also learn word clusters; for example, we've talked about mining associations of words. And so we can have clusters of paradigmatically related words or syntagmatically related words, and these clusters can be features to supplement the word-based representation. Furthermore, we can also have frequent patterns in text, and these could be frequent word sets, where the words that form the pattern do not necessarily occur together or next to each other. But we will also have collocations where the words may occur more closely together, and such patterns provide more discriminative features than words, obviously.", + "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" }, { - "7:14": "And they may also generalize better than just regular n-grams because they are frequent. So you expect them to occur also in test data. So they have a lot of advantages, but they might still face the problem of overfitting as the features become more complex. This is a problem in general, and the same is true for parse tree-based features, where you can use a parse tree to derive features such as frequent subtrees, or paths, and those are even more discriminating, but they are also more likely to cause overfitting. And in general, pattern discovery algorithms are very useful for feature construction because they allow us to search in a large space of possible features that are more complex than words and that are sometimes useful. 
So in general, natural language processing is very important in that it derives complex features, which can enrich text representation. So for example, this is a simple sentence that I showed you a long time ago in another lecture. So from these words we can only derive simple word n-gram representations or character n-grams. But with NLP, we can enrich the representation with a lot of other information such as part of speech tags, parse trees, entities, or even speech acts. Now with such enriching information, of course, we can then generate a lot of other features, more complex features like mixed grams of words and part of speech tags, or even a part of a parse tree." + "time": "7:14", + "text": "And they may also generalize better than just regular n-grams because they are frequent. So you expect them to also occur in test data. So they have a lot of advantages, but they might still face the problem of overfitting as the features become more complex. This is a problem in general, and the same is true for parse tree-based features, where you can use a parse tree to derive features such as frequent subtrees, or paths, and those are even more discriminating, but they're also more likely to cause overfitting. And in general, pattern discovery algorithms are very useful for feature construction because they allow us to search in a large space of possible features that are more complex than words and that are sometimes useful. So in general, natural language processing is very important in that it derives complex features, which can enrich text representation. So for example, this is a simple sentence that I showed you a long time ago in another lecture. So from these words we can only derive simple word n-gram representations or character n-grams. But with NLP, we can enrich the representation with a lot of other information such as part of speech tags, parse trees, entities, or even speech acts.
Now with such enriching information, of course, we can then generate a lot of other features, more complex features like mixed grams of words and part of speech tags, or even a part of a parse tree.", + "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" }, { - "8:55": "So in general, feature design actually affects categorization accuracy significantly, and it's a very important part of any machine learning application. In general, I think it would be most effective if you can combine machine learning, error analysis, and domain knowledge in designing features.
So first you want to use domain knowledge, your understanding of the problem, to design seed features, and you can also define a basic feature space with a lot of possible features for the machine learning program to work on, and machine learning can be applied to select the most effective features or construct new features. That's feature learning, and these features can then be further analyzed by humans through error analysis. And you can look at the categorization errors, and then further analyze what features can help you recover from those errors, or what features cause overfitting and cause those errors. And so this can lead to feature validation that will revise the feature set, and then you can iterate. And we might consider using a different feature space." + "time": "8:55", + "text": "So in general, feature design actually affects categorization accuracy significantly, and it's a very important part of any machine learning application. In general, I think it would be most effective if you can combine machine learning, error analysis, and domain knowledge in designing features. So first you want to use domain knowledge, your understanding of the problem, to design seed features, and you can also define a basic feature space with a lot of possible features for the machine learning program to work on, and machine learning can be applied to select the most effective features or construct new features. That's feature learning, and these features can then be further analyzed by humans through error analysis. And you can look at the categorization errors, and then further analyze what features can help you recover from those errors, or what features cause overfitting and cause those errors. And so this can lead to feature validation that will revise the feature set, and then you can iterate. And we might consider using a different feature space.", + "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" }, { - "10:07": "So NLP enriches text representation as I just said, and because it enriches the feature space, it allows a much larger space of features, and there are also many, many more features that can be very useful for a lot of tasks. But be careful not to use a lot of such features because it can cause overfitting, or otherwise you would have to train carefully not to let overfitting happen. So a main challenge in designing features, a common challenge, is to optimize a trade-off between exhaustivity and specificity, and this trade-off turns out to be very difficult. Now exhaustivity means we want the features to actually have high coverage of a lot of documents. And so in that sense, you want the features to be frequent. Specificity requires the features to be discriminative, so naturally infrequent features tend to be more discriminative. So this really causes a trade-off between frequent versus infrequent features. And that's why feature design is usually hard.
And that's probably the most important part in applying machine learning to any problem, and particularly in our case, text categorization or, more specifically, sentiment classification. [MUSIC]" + "time": "10:07", + "text": "So NLP enriches text representation as I just said, and because it enriches the feature space, it allows a much larger space of features, and there are also many, many more features that can be very useful for a lot of tasks. But be careful not to use a lot of such features because it can cause overfitting, or otherwise you would have to train carefully not to let overfitting happen. So a main challenge in designing features, a common challenge, is to optimize a trade-off between exhaustivity and specificity, and this trade-off turns out to be very difficult. Now exhaustivity means we want the features to actually have high coverage of a lot of documents. And so in that sense, you want the features to be frequent. Specificity requires the features to be discriminative, so naturally infrequent features tend to be more discriminative. So this really causes a trade-off between frequent versus infrequent features. And that's why feature design is usually hard. And that's probably the most important part in applying machine learning to any problem, and particularly in our case, text categorization or, more specifically, sentiment classification. [MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" }, ] }, { "5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression": [ { - "0:00": "[NOISE] This lecture is about ordinal logistic regression for sentiment analysis. So, this is our problem setup for a typical sentiment classification problem, or more specifically, rating prediction.
We have an opinionated text document d as input, and we want to generate as output a rating in the range of 1 through k, so it's a discrete rating, and this is a categorization problem. We have k categories here. Now we could use a regular text categorization technique to solve this problem. But such a solution would not consider the order and dependency of the categories. Intuitively, the features that can distinguish category 2 from 1, or rather rating 2 from 1, may be similar to those that can distinguish k from k-1. For example, positive words generally suggest a higher rating. If we train a categorizer by treating these categories as independent, we would not capture this." + "time": "0:00", + "text": "[NOISE] This lecture is about ordinal logistic regression for sentiment analysis. So, this is our problem setup for a typical sentiment classification problem, or more specifically, rating prediction. We have an opinionated text document d as input, and we want to generate as output a rating in the range of 1 through k, so it's a discrete rating, and this is a categorization problem. We have k categories here. Now we could use a regular text categorization technique to solve this problem. But such a solution would not consider the order and dependency of the categories. Intuitively, the features that can distinguish category 2 from 1, or rather rating 2 from 1, may be similar to those that can distinguish k from k-1. For example, positive words generally suggest a higher rating. If we train a categorizer by treating these categories as independent, we would not capture this.", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" }, { - "1:17": "So what's the solution? Well, in general, we can do ordinal classification, and there are many different approaches. And here we're going to talk about one of them, called ordinal logistic regression.
Now, let's first think about how we use logistic regression for a binary sentiment categorization problem. So suppose we just want to distinguish positive from negative, and that is just a two-category categorization problem. So the predictors are represented as X, and these are the features. And there are M features altogether. The feature value is a real number. And this can be a representation of a text document." + "time": "1:17", + "text": "So what's the solution? Well, in general, we can do ordinal classification, and there are many different approaches. And here we're going to talk about one of them, called ordinal logistic regression. Now, let's first think about how we use logistic regression for a binary sentiment categorization problem. So suppose we just want to distinguish positive from negative, and that is just a two-category categorization problem. So the predictors are represented as X, and these are the features. And there are M features altogether. The feature value is a real number. And this can be a representation of a text document.", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" }, { - "1:56": "And Y has two values, a binary response variable, 0 or 1. 1 means X is positive, 0 means X is negative. And then of course this is a standard two-category categorization problem. We can apply logistic regression.
You may recall that in logistic regression, the log odds of Y being equal to one is assumed to be a linear function of these features, as shown here. So this would allow us to also write the probability of Y equals one, given X" + "time": "1:56", + "text": "And Y has two values, a binary response variable, 0 or 1. 1 means X is positive, 0 means X is negative. And then of course this is a standard two-category categorization problem. We can apply logistic regression. You may recall that in logistic regression, the log odds of Y being equal to one is assumed to be a linear function of these features, as shown here. So this would allow us to also write the probability of Y equals one, given X", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" }, { - "2:36": "in this equation that you are seeing on the bottom." + "time": "2:36", + "text": "in this equation that you are seeing on the bottom.", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" }, { - "2:43": "So that's a logistic function, and you can see it relates this probability, the probability that y=1, to the feature values. And of course the beta i's are parameters here, so this is just a direct application of logistic regression for binary categorization." + "time": "2:43", + "text": "So that's a logistic function, and you can see it relates this probability, the probability that y=1, to the feature values. And of course the beta i's are parameters here, so this is just a direct application of logistic regression for binary categorization.", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" }, { - "3:08": "What if we have multiple categories, multiple levels? Well, we can use such binary logistic regression classifiers to solve this multi-level rating prediction." + "time": "3:08", + "text": "What if we have multiple categories, multiple levels? Well, we can use such binary logistic regression classifiers to solve this multi-level rating prediction.", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" }, { - "3:21": "And the idea is we can introduce multiple binary classifiers.
In each case we ask the classifier to predict whether the rating is j or above, or the rating is lower than j. So when Yj is equal to 1, it means the rating is j or above. When it's 0, that means the rating is lower than j." + "time": "3:21", + "text": "And the idea is we can introduce multiple binary classifiers. In each case we ask the classifier to predict whether the rating is j or above, or the rating is lower than j. So when Yj is equal to 1, it means the rating is j or above. When it's 0, that means the rating is lower than j.", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" }, { - "3:45": "So basically if we want to predict a rating in the range of 1-k, we first have one classifier to distinguish k versus the others. And that's our classifier one. And then we're going to have another classifier to distinguish k-1 from the rest. That's Classifier 2. And in the end, we need a classifier to distinguish between 2 and 1. So altogether we'll have k-1 classifiers." + "time": "3:45", + "text": "So basically if we want to predict a rating in the range of 1-k, we first have one classifier to distinguish k versus the others. And that's our classifier one. And then we're going to have another classifier to distinguish k-1 from the rest. That's Classifier 2. And in the end, we need a classifier to distinguish between 2 and 1. So altogether we'll have k-1 classifiers.", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" }, { - "4:17": "Now if we do that, of course, then we can also solve this problem, and the logistic regression formulation will also be very straightforward, as you have just seen on the previous slide. Only that here we have more parameters. Because for each classifier, we need a different set of parameters.
So now the logistic regression classifiers are indexed by j, which corresponds to a rating level." + "time": "4:17", + "text": "Now if we do that, of course, then we can also solve this problem, and the logistic regression formulation will also be very straightforward, as you have just seen on the previous slide. Only that here we have more parameters. Because for each classifier, we need a different set of parameters. So now the logistic regression classifiers are indexed by j, which corresponds to a rating level.", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" }, { - "4:46": "And I have also used alpha j to replace beta 0. And this is to make the notation more consistent with what we can show in the ordinal logistic regression. So here we now have basically k minus one regular logistic regression classifiers. Each has its own set of parameters. So now with this approach, we can do ratings as follows." + "time": "4:46", + "text": "And I have also used alpha j to replace beta 0. And this is to make the notation more consistent with what we can show in the ordinal logistic regression. So here we now have basically k minus one regular logistic regression classifiers. Each has its own set of parameters. So now with this approach, we can do ratings as follows.", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" }, { - "5:19": "After we have trained these k-1 logistic regression classifiers, separately of course, then we can take a new instance and then invoke the classifiers sequentially to make the decision. So first let's look at the classifier that corresponds to the level of rating K. So this classifier will tell us whether this object should have a rating of K or above. If the probability according to this logistic regression classifier is larger than 0.5, we're going to say yes.
The rating is K." + "time": "5:19", + "text": "After we have trained these k-1 logistic regression classifiers, separately of course, then we can take a new instance and then invoke the classifiers sequentially to make the decision. So first let's look at the classifier that corresponds to the level of rating K. So this classifier will tell us whether this object should have a rating of K or above. If the probability according to this logistic regression classifier is larger than 0.5, we're going to say yes. The rating is K.", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" }, { - "6:02": "Now, what if it's not as large as 0.5? Well, that means the rating's below K, right?" + "time": "6:02", + "text": "Now, what if it's not as large as 0.5? Well, that means the rating's below K, right?", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" }, { - "6:11": "So now, we need to invoke the next classifier, which tells us whether it's above K minus one." + "time": "6:11", + "text": "So now, we need to invoke the next classifier, which tells us whether it's above K minus one.", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" }, { - "6:18": "It's at least K minus one. And if the probability is larger than 0.5, then we'll say, well, then it's k-1. What if it says no? Well, that means the rating would be even below k-1. And so we're going to just keep invoking these classifiers. And here we hit the end when we need to decide whether it's two or one. So this would help us solve the problem. Right? So we can have a classifier that would actually give us a prediction of a rating in the range of 1 through k. Now unfortunately such a strategy is not an optimal way of solving this problem.
And specifically there are two problems with this approach. So these equations are the same as you have seen before." + "time": "6:18", + "text": "It's at least K minus one. And if the probability is larger than 0.5, then we'll say, well, then it's k-1. What if it says no? Well, that means the rating would be even below k-1. And so we're going to just keep invoking these classifiers. And here we hit the end when we need to decide whether it's two or one. So this would help us solve the problem. Right? So we can have a classifier that would actually give us a prediction of a rating in the range of 1 through k. Now unfortunately such a strategy is not an optimal way of solving this problem. And specifically there are two problems with this approach. So these equations are the same as you have seen before.", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" }, { - "7:06": "Now the first problem is that there are just too many parameters. Now, can you count how many parameters we have exactly here? This may be an interesting exercise to do. So you might want to just pause the video and try to figure out the solution. How many parameters do I have for each classifier?" + "time": "7:06", + "text": "Now the first problem is that there are just too many parameters. Now, can you count how many parameters we have exactly here? This may be an interesting exercise to do. So you might want to just pause the video and try to figure out the solution. How many parameters do I have for each classifier?", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" }, { - "7:28": "And how many classifiers do we have?"
+ "time": "7:28", + "text": "And how many classifiers do we have?", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" }, { - "7:31": "Well, you can see that for each classifier we have M plus one parameters, and we have k minus one classifiers altogether, so the total number of parameters is k minus one multiplied by M plus one. That's a lot of parameters, so when the classifier has a lot of parameters, we would in general need a lot of training data to help us decide the optimal parameters of such a complex model." + "time": "7:31", + "text": "Well, you can see that for each classifier we have M plus one parameters, and we have k minus one classifiers altogether, so the total number of parameters is k minus one multiplied by M plus one. That's a lot of parameters, so when the classifier has a lot of parameters, we would in general need a lot of training data to help us decide the optimal parameters of such a complex model.", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" }, { - "8:04": "So that's not ideal." + "time": "8:04", + "text": "So that's not ideal.", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" }, { - "8:07": "Now the second problem is that these k minus 1 classifiers are not really independent. These problems are actually dependent." + "time": "8:07", + "text": "Now the second problem is that these k minus 1 classifiers are not really independent.
These problems are actually dependent.", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" }, { - "8:18": "In general, words that are positive would make the rating higher" + "time": "8:18", + "text": "In general, words that are positive would make the rating higher", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" }, { - "8:25": "for any of these classifiers, for all these classifiers. So we should be able to take advantage of this fact." + "time": "8:25", + "text": "for any of these classifiers, for all these classifiers. So we should be able to take advantage of this fact.", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" }, { - "8:33": "Now the idea of ordinal logistic regression is precisely that. The key idea is just an improvement over the k-1 independent logistic regression classifiers. And that idea is to tie these beta parameters. That means we are going to assume the beta parameters, the parameters that indicate the influence of those words, are the same for all the K-1 classifiers. And this just encodes our intuition that positive words in general would make a higher rating more likely." + "time": "8:33", + "text": "Now the idea of ordinal logistic regression is precisely that. The key idea is just an improvement over the k-1 independent logistic regression classifiers. And that idea is to tie these beta parameters. That means we are going to assume the beta parameters, the parameters that indicate the influence of those words, are the same for all the K-1 classifiers.
And this just encodes our intuition that positive words in general would make a higher rating more likely.", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" }, { - "9:19": "So this is intuitively a reasonable assumption for our problem setup, where we have this order among the categories." + "time": "9:19", + "text": "So this is intuitively a reasonable assumption for our problem setup, where we have this order among the categories.", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" }, { - "9:28": "Now in fact, this would allow us to have two positive benefits. One is it's going to reduce the number of parameters significantly." + "time": "9:28", + "text": "Now in fact, this would allow us to have two positive benefits. One is it's going to reduce the number of parameters significantly.", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" }, { - "9:38": "And the other is to allow us to share the training data. Because all these parameters are assumed to be equal, the training data for the different classifiers can then be shared to help us set the optimal value for beta." + "time": "9:38", + "text": "And the other is to allow us to share the training data. Because all these parameters are assumed to be equal, the training data for the different classifiers can then be shared to help us set the optimal value for beta.", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" }, { - "9:56": "So we have more data to help us choose a good beta value."
+ "time": "9:56", + "text": "So we have more data to help us choose a good beta value.", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" }, { - "10:01": "So what's the consequence? Well, the formula would look very similar to what you have seen before, only that now the beta parameter has just one index that corresponds to the feature. It no longer has the other index that corresponds to the level of rating." + "time": "10:01", + "text": "So what's the consequence? Well, the formula would look very similar to what you have seen before, only that now the beta parameter has just one index that corresponds to the feature. It no longer has the other index that corresponds to the level of rating.", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" }, { - "10:19": "So that means we tie them together. And there's only one set of beta values for all the classifiers. However, each classifier still has a distinct alpha value, the alpha parameter, which is different. And this is of course needed to predict the different levels of ratings. So alpha sub j depends on j; a different j has a different alpha value. But the rest of the parameters, the beta i's, are the same. So now you can also ask the question, how many parameters do we have now? Again, that's an interesting question to think about. So if you think about it for a moment, you will see that now we have far fewer parameters. Specifically we have M plus K minus one, because we have M beta values, plus K minus one alpha values." + "time": "10:19", + "text": "So that means we tie them together. And there's only one set of beta values for all the classifiers. However, each classifier still has a distinct alpha value, the alpha parameter, which is different.
And this is of course needed to predict the different levels of ratings. So alpha sub j depends on j; a different j has a different alpha value. But the rest of the parameters, the beta i's, are the same. So now you can also ask the question, how many parameters do we have now? Again, that's an interesting question to think about. So if you think about it for a moment, you will see that now we have far fewer parameters. Specifically we have M plus K minus one, because we have M beta values, plus K minus one alpha values.", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" }, { - "11:15": "So that's basically the main idea of ordinal logistic regression." + "time": "11:15", + "text": "So that's basically the main idea of ordinal logistic regression.", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" }, { - "11:24": "So, now, let's see how we can use such a method to actually assign ratings. It turns out that with this idea of tying all the beta parameters, we also end up with a similar way to make decisions. And more specifically now, the criterion of whether the predicted probability is at least 0.5 is equivalent to whether the score of the object is larger than or equal to negative alpha sub j, as shown here. Now, the scoring function is just taking a linear combination of all the features with the trained beta values." + "time": "11:24", + "text": "So, now, let's see how we can use such a method to actually assign ratings. It turns out that with this idea of tying all the beta parameters, we also end up with a similar way to make decisions.
And more specifically now, the criterion of whether the predicted probability is at least 0.5 is equivalent to whether the score of the object is larger than or equal to negative alpha sub j, as shown here. Now, the scoring function is just taking a linear combination of all the features with the trained beta values.", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" }, { - "12:15": "So, this means now we can simply make a rating decision by looking at the value of this scoring function and seeing which bracket it falls into. Now you can see the general decision rule is thus: when the score is in a particular range of alpha values, then we will assign the corresponding rating to that text object." + "time": "12:15", + "text": "So, this means now we can simply make a rating decision by looking at the value of this scoring function and seeing which bracket it falls into. Now you can see the general decision rule is thus: when the score is in a particular range of alpha values, then we will assign the corresponding rating to that text object.", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" }, { - "12:49": "So in this approach, we're going to score the object" + "time": "12:49", + "text": "So in this approach, we're going to score the object", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" }, { - "12:55": "by using the features and trained parameter values." + "time": "12:55", + "text": "
+ "time": "12:55", + "text": "by using the features and trained parameter values.", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" }, { - "13:00": "This score will then be compared with a set of trained alpha values to see which range the score is in. And then, using the range, we can decide which rating the object should be getting. Because these ranges of alpha values correspond to the different levels of ratings, and that's from the way we train these alpha values. Each is tied to some level of rating. [MUSIC]" + "time": "13:00", + "text": "This score will then be compared with a set of trained alpha values to see which range the score is in. And then, using the range, we can decide which rating the object should be getting. Because these ranges of alpha values correspond to the different levels of ratings, and that's from the way we train these alpha values. Each is tied to some level of rating. [MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" } ] } @@ -4149,935 +6773,1537 @@ { "6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1": [ { - "0:01": "[MUSIC] This lecture is about the Latent Aspect Rating Analysis for Opinion Mining and Sentiment Analysis." + "time": "0:01", + "text": "[MUSIC] This lecture is about the Latent Aspect Rating Analysis for Opinion Mining and Sentiment Analysis.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" }, { - "0:14": "In this lecture, we're going to continue discussing Opinion Mining and Sentiment Analysis." 
+ "time": "0:14", + "text": "In this lecture, we're going to continue discussing Opinion Mining and Sentiment Analysis.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" }, { - "0:19": "In particular, we're going to introduce Latent Aspect Rating Analysis, which allows us to perform detailed analysis of reviews with overall ratings." + "time": "0:19", + "text": "In particular, we're going to introduce Latent Aspect Rating Analysis, which allows us to perform detailed analysis of reviews with overall ratings.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" }, { - "0:34": "So, first is motivation." + "time": "0:34", + "text": "So, first is motivation.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" }, { - "0:37": "Here are two reviews that you often see on the net about a hotel. And you see some overall ratings. In this case, both reviewers have given five stars. And, of course, there are also reviews that are in text." + "time": "0:37", + "text": "Here are two reviews that you often see on the net about a hotel. And you see some overall ratings. In this case, both reviewers have given five stars. And, of course, there are also reviews that are in text.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" }, { - "0:53": "Now, if you just look at these reviews, it's not very clear whether the hotel is good for its location or for its service. 
It's also unclear why a reviewer liked this hotel.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" }, { - "1:06": "What we want to do is to decompose this overall rating into ratings on different aspects such as value, rooms, location, and service." + "time": "1:06", + "text": "What we want to do is to decompose this overall rating into ratings on different aspects such as value, rooms, location, and service.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" }, { - "1:18": "So, if we can decompose the overall ratings into the ratings on these different aspects, then we can obtain a more detailed understanding of the reviewer's opinions about the hotel." + "time": "1:18", + "text": "So, if we can decompose the overall ratings into the ratings on these different aspects, then we can obtain a more detailed understanding of the reviewer's opinions about the hotel.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" }, { - "1:30": "And this would also allow us to rank hotels along different dimensions such as value or rooms. But, in general, such detailed understanding will reveal more information about the user's preferences, the reviewer's preferences. And also, we can understand better how the reviewers view this hotel from different perspectives. Now, not only do we want to infer these aspect ratings, we also want to infer the aspect weights. So, some reviewers may care more about value as opposed to the service. And that would be a case like what's shown on the left for the weight distribution, where you can see a lot of weight is placed on value." + "time": "1:30", + "text": "And this would also allow us to rank hotels along different dimensions such as value or rooms. 
But, in general, such detailed understanding will reveal more information about the user's preferences, the reviewer's preferences. And also, we can understand better how the reviewers view this hotel from different perspectives. Now, not only do we want to infer these aspect ratings, we also want to infer the aspect weights. So, some reviewers may care more about value as opposed to the service. And that would be a case like what's shown on the left for the weight distribution, where you can see a lot of weight is placed on value.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" }, { - "2:18": "But others care more for service. And therefore, they might place more weight on service than value." + "time": "2:18", + "text": "But others care more for service. And therefore, they might place more weight on service than value.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" }, { - "2:25": "The reason why this is also important is because, if you think about a five star on value, it might still be very expensive if the reviewer cares a lot about service, right? For this kind of service, this price is good, so the reviewer might give it a five star. But if a reviewer really cares about the value of the hotel, then the five star, most likely, would mean really cheap prices. So, in order to interpret the ratings on different aspects accurately, we also need to know these aspect weights. When they're combined together, we can have a more detailed understanding of the opinion. So the task here is to take these reviews and their overall ratings as input, and then generate both the decomposed aspect ratings and the aspect weights as output. And this is a problem called Latent Aspect Rating Analysis."
+ "time": "2:25", + "text": "The reason why this is also important is because, if you think about a five star on value, it might still be very expensive if the reviewer cares a lot about service, right? For this kind of service, this price is good, so the reviewer might give it a five star. But if a reviewer really cares about the value of the hotel, then the five star, most likely, would mean really cheap prices. So, in order to interpret the ratings on different aspects accurately, we also need to know these aspect weights. When they're combined together, we can have a more detailed understanding of the opinion. So the task here is to take these reviews and their overall ratings as input, and then generate both the decomposed aspect ratings and the aspect weights as output. And this is a problem called Latent Aspect Rating Analysis.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" }, { - "3:31": "So the task, in general, is given a set of review articles about the topic with overall ratings, and we hope to generate three things. One is the major aspects commented on in the reviews. Second is ratings on each aspect, such as value and room service." + "time": "3:31", + "text": "So the task, in general, is given a set of review articles about the topic with overall ratings, and we hope to generate three things. One is the major aspects commented on in the reviews. Second is ratings on each aspect, such as value and room service.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" }, { - "3:53": "And third is the relative weights placed on different aspects by the reviewers. And this task has a lot of applications; if you can do this, it will enable a lot of applications. I just listed some here. And later, I will show you some results. 
And, for example, we can do opinion-based entity ranking. We can generate an aspect-level opinion summary. We can also analyze reviewers' preferences, compare them, or compare their preferences on different hotels. And we can do personalized recommendations of products." + "time": "3:53", + "text": "And third is the relative weights placed on different aspects by the reviewers. And this task has a lot of applications; if you can do this, it will enable a lot of applications. I just listed some here. And later, I will show you some results. And, for example, we can do opinion-based entity ranking. We can generate an aspect-level opinion summary. We can also analyze reviewers' preferences, compare them, or compare their preferences on different hotels. And we can do personalized recommendations of products.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" }, { - "4:29": "So, of course, the question is how can we solve this problem? Now, as in other cases of these advanced topics, we won\u2019t have time to really cover the technique in detail. But I\u2019m going to give you a brisk, basic introduction to the technique development for this problem. So, first step, we\u2019re going to talk about how to solve the problem in two stages. Later, we\u2019re going to also mention that we can do this in a unified model. Now, take this review with the overall rating as input. What we want to do is, first, we're going to segment the aspects. So we're going to pick out what words are talking about location, and what words are talking about room condition, etc." + "time": "4:29", + "text": "So, of course, the question is how can we solve this problem? Now, as in other cases of these advanced topics, we won\u2019t have time to really cover the technique in detail. But I\u2019m going to give you a brisk, basic introduction to the technique development for this problem. 
So, first step, we\u2019re going to talk about how to solve the problem in two stages. Later, we\u2019re going to also mention that we can do this in a unified model. Now, take this review with the overall rating as input. What we want to do is, first, we're going to segment the aspects. So we're going to pick out what words are talking about location, and what words are talking about room condition, etc.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" }, { - "5:13": "So with this, we would be able to obtain aspect segments. In particular, we're going to obtain the counts of all the words in each segment, and this is denoted by C sub i of w and d. Now this can be done by using seed words like location and room or price to retrieve the [INAUDIBLE] in the segments. And then, from those segments, we can further mine correlated words with these seed words, and that would allow us to segment the text into segments discussing different aspects. 
But, of course, later, as we will see, we can also use [INAUDIBLE] models to do the segmentation. But anyway, that's the first stage, where we obtain the counts of words in each segment. In the second stage, which is called Latent Rating Regression, we're going to use these words and their frequencies in different aspects to predict the overall rating. And this prediction happens in two stages." + "time": "5:13", + "text": "So with this, we would be able to obtain aspect segments. In particular, we're going to obtain the counts of all the words in each segment, and this is denoted by C sub i of w and d. Now this can be done by using seed words like location and room or price to retrieve the [INAUDIBLE] in the segments. And then, from those segments, we can further mine correlated words with these seed words, and that would allow us to segment the text into segments discussing different aspects. But, of course, later, as we will see, we can also use [INAUDIBLE] models to do the segmentation. But anyway, that's the first stage, where we obtain the counts of words in each segment. In the second stage, which is called Latent Rating Regression, we're going to use these words and their frequencies in different aspects to predict the overall rating. And this prediction happens in two stages.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" }, { - "6:17": "In the first stage, we're going to use the [INAUDIBLE] and the weights of these words in each aspect to predict the aspect rating. So, for example, if in your discussion of location, you see a word like amazing mentioned many times, and it has a high weight, for example, here, 3.9, then it will increase the aspect rating for location. But another word like far, which has a negative weight, if it's mentioned many times, will decrease the rating. 
So the aspect rating is assumed to be a weighted combination of these word frequencies, where the weights are the sentiment weights of the words. Of course, these sentiment weights might be different for different aspects. So we have, for each aspect, a set of term sentiment weights as shown here. And that's denoted by beta sub i and w." + "time": "6:17", + "text": "In the first stage, we're going to use the [INAUDIBLE] and the weights of these words in each aspect to predict the aspect rating. So, for example, if in your discussion of location, you see a word like amazing mentioned many times, and it has a high weight, for example, here, 3.9, then it will increase the aspect rating for location. But another word like far, which has a negative weight, if it's mentioned many times, will decrease the rating. So the aspect rating is assumed to be a weighted combination of these word frequencies, where the weights are the sentiment weights of the words. Of course, these sentiment weights might be different for different aspects. So we have, for each aspect, a set of term sentiment weights as shown here. And that's denoted by beta sub i and w.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" }, { - "7:18": "In the second stage, or second step, we're going to assume that the overall rating is simply a weighted combination of these aspect ratings. So we're going to assume we have aspect weights, the alpha sub i of d, and this will be used to take a weighted average of the aspect ratings, which are denoted by r sub i of d." + "time": "7:18", + "text": "In the second stage, or second step, we're going to assume that the overall rating is simply a weighted combination of these aspect ratings. So we're going to assume we have aspect weights, the alpha sub i of d, and this will be used to take a weighted average of the aspect ratings, which are denoted by r sub i of d.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" }, { - "7:42": "And we're going to assume the overall rating is simply a weighted average of these aspect ratings. So this setup allows us to predict the overall rating based on the observable frequencies. 
So on the left side, you will see all this observed information, the r sub d and the counts.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" }, { - "8:03": "But on the right side, you see all the information in that range is actually latent." + "time": "8:03", + "text": "But on the right side, you see all the information in that range is actually latent.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" }, { - "8:09": "So, we hope to discover that. Now, this is a typical case of a generative model where we embed the interesting variables in the generative model. And then, we're going to set up a generation probability for the overall rating given the observed words. 
And then, of course, we can adjust these parameter values, including the beta values, the r values, and the alpha values, in order to maximize the probability of the data. In this case, the conditional probability of the observed rating given the document. So we have seen such cases before in, for example, PLSA, where we predict text data. But here, we're predicting the rating, and the parameters, of course, are very different. But we can see, if we can uncover these parameters, it would be nice, because r sub i of d is precisely the ratings that we want to get. And these are the decomposed ratings on different aspects. Alpha sub i of d is precisely the aspect weights that we hope to get. As a byproduct, we also get the beta values, and these are the sentiment weights of words." + "time": "8:09", + "text": "So, we hope to discover that. Now, this is a typical case of a generative model where we embed the interesting variables in the generative model. And then, we're going to set up a generation probability for the overall rating given the observed words. And then, of course, we can adjust these parameter values, including the beta values, the r values, and the alpha values, in order to maximize the probability of the data. In this case, the conditional probability of the observed rating given the document. So we have seen such cases before in, for example, PLSA, where we predict text data. But here, we're predicting the rating, and the parameters, of course, are very different. But we can see, if we can uncover these parameters, it would be nice, because r sub i of d is precisely the ratings that we want to get. And these are the decomposed ratings on different aspects. Alpha sub i of d is precisely the aspect weights that we hope to get. As a byproduct, we also get the beta values, and these are the sentiment weights of words.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" }, { - "9:31": "So more formally," + "time": "9:31", + "text": "So more formally,", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" }, { - "9:33": "the data we are modeling here is a set of review documents with overall ratings. And each review document is denoted by d, and the overall rating is denoted by r sub d. And d is pre-segmented into k aspect segments. And we're going to use ci(w,d) to denote the count of word w in aspect segment i. 
Of course, it's zero if the word doesn't occur in the segment.", + "time": "9:33", + "text": "the data we are modeling here is a set of review documents with overall ratings. And each review document is denoted by d, and the overall rating is denoted by r sub d. And d is pre-segmented into k aspect segments. And we're going to use ci(w,d) to denote the count of word w in aspect segment i. Of course, it's zero if the word doesn't occur in the segment.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" }, { - "10:01": "Now, the model is going to predict the rating based on d. So, we're interested in the conditional probability of r sub d given d. And this model is set up as follows. So r sub d is assumed to follow a normal distribution whose mean is actually a weighted average of the aspect ratings r sub i of d, as shown here. This normal distribution has a variance of delta squared. Now, of course, this is just our assumption. The actual rating is not necessarily generated this way. But as always, when we make this assumption, we have a formal way to model the problem, and that allows us to compute the interesting quantities. In this case, the aspect ratings and the aspect weights." + "time": "10:01", + "text": "Now, the model is going to predict the rating based on d. So, we're interested in the conditional probability of r sub d given d. And this model is set up as follows. So r sub d is assumed to follow a normal distribution whose mean is actually a weighted average of the aspect ratings r sub i of d, as shown here. This normal distribution has a variance of delta squared. Now, of course, this is just our assumption. The actual rating is not necessarily generated this way. But as always, when we make this assumption, we have a formal way to model the problem, and that allows us to compute the interesting quantities. In this case, the aspect ratings and the aspect weights.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" }, { - "10:52": "Now, the aspect rating, as you see on the [INAUDIBLE], is assumed to be a weighted sum of these word counts, where the weight is just the sentiment weight of the word." 
+ "time": "10:52", + "text": "Now, the aspect rating, as you see on the [INAUDIBLE], is assumed to be a weighted sum of these word counts, where the weight is just the sentiment weight of the word.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" }, { - "11:04": "So as I said, the overall rating is assumed to be a weighted average of aspect ratings." + "time": "11:04", + "text": "So as I said, the overall rating is assumed to be a weighted average of aspect ratings.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" }, { - "11:15": "Now, these alpha values, alpha sub i of d, are denoted together by an alpha vector that depends on d, and that denotes the document-specific weights. 
And we\u2019re going to assume that this vector itself is drawn from another Multivariate Gaussian distribution, with mean denoted by a mu vector, and covariance matrix sigma here." + "time": "11:15", + "text": "Now, these alpha values, alpha sub i of d, are denoted together by an alpha vector that depends on d, and that denotes the document-specific weights. And we\u2019re going to assume that this vector itself is drawn from another Multivariate Gaussian distribution, with mean denoted by a mu vector, and covariance matrix sigma here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" }, { - "11:43": "Now, so this means, when we generate our overall rating, we're going to first draw" + "time": "11:43", + "text": "Now, so this means, when we generate our overall rating, we're going to first draw", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" }, { - "11:49": "a set of alpha values from this Multivariate Gaussian prior distribution. And once we get these alpha values, we're going to use the weighted average of aspect ratings as the mean here to use the normal distribution to generate the overall rating." + "time": "11:49", + "text": "a set of alpha values from this Multivariate Gaussian prior distribution. And once we get these alpha values, we're going to use the weighted average of aspect ratings as the mean here to use the normal distribution to generate the overall rating.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" }, { - "12:13": "Now, the aspect rating, as I just said, is the sum of the sentiment weights of words in the aspect. Note that here the sentiment weights are specific to the aspect. So, beta is indexed by i, and that's for the aspect. 
And that gives us a way to model the different sentiment of a word in different aspects.", + "time": "12:13", + "text": "Now, the aspect rating, as I just said, is the sum of the sentiment weights of words in the aspect. Note that here the sentiment weights are specific to the aspect. So, beta is indexed by i, and that's for the aspect. And that gives us a way to model the different sentiment of a word in different aspects.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" }, { - "12:36": "This is needed because the same word might have a different sentiment for another aspect. It's also useful to see what parameters we have here: beta sub i and w gives us the aspect-specific sentiment of w. So, obviously, that's one of the important parameters. But, in general, we can see we have these parameters: the beta values, the delta, and the mu, and sigma." + "time": "12:36", + "text": "This is needed because the same word might have a different sentiment for another aspect. It's also useful to see what parameters we have here: beta sub i and w gives us the aspect-specific sentiment of w. So, obviously, that's one of the important parameters. But, in general, we can see we have these parameters: the beta values, the delta, and the mu, and sigma.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" }, { - "13:12": "So, next, the question is, how can we estimate these parameters? And so we collectively denote all the parameters by lambda here. Now, we can, as usual, use the maximum likelihood estimate, and this will give us the settings of these parameters that maximize the probability of the observed ratings conditioned on their respective reviews. 
And, of course, this would then give us all the useful variables that we are interested in computing.", + "time": "13:12", + "text": "So, next, the question is, how can we estimate these parameters? And so we collectively denote all the parameters by lambda here. Now, we can, as usual, use the maximum likelihood estimate, and this will give us the settings of these parameters that maximize the probability of the observed ratings conditioned on their respective reviews. And, of course, this would then give us all the useful variables that we are interested in computing.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" }, { - "13:45": "So, more specifically, once we estimate the parameters, we can easily compute the aspect rating for aspect i, r sub i of d. And that's simply to take all of the words that occurred in the segment i, and then take their counts, and then multiply that by the sentiment weight of each word, and take a sum. So, of course, this term would be zero for words that are not occurring in the segment, and that's why we're going to take the sum over all the words in the vocabulary." + "time": "13:45", + "text": "So, more specifically, once we estimate the parameters, we can easily compute the aspect rating for aspect i, r sub i of d. And that's simply to take all of the words that occurred in the segment i, and then take their counts, and then multiply that by the sentiment weight of each word, and take a sum. So, of course, this term would be zero for words that are not occurring in the segment, and that's why we're going to take the sum over all the words in the vocabulary.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" }, { - "14:17": "Now what about the aspect weights? Alpha sub i of d, well, it's not part of our parameters, right? So we have to use that to compute it. And in this case, we can use the Maximum a Posteriori to compute this alpha value. Basically, we're going to maximize the product of the prior of alpha, according to our assumed Multivariate Gaussian Distribution, and the likelihood. In this case, the likelihood part is the probability of generating this observed overall rating given this particular alpha value and some other parameters, as you see here. 
So for more details about this model, you can read this paper cited here." + "time": "14:17", + "text": "Now what about the aspect weights? Alpha sub i of d, well, it's not part of our parameters, right? So we have to use that to compute it. And in this case, we can use the Maximum a Posteriori to compute this alpha value. Basically, we're going to maximize the product of the prior of alpha, according to our assumed Multivariate Gaussian Distribution, and the likelihood. In this case, the likelihood part is the probability of generating this observed overall rating given this particular alpha value and some other parameters, as you see here. So for more details about this model, you can read this paper cited here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" }, { - "15:05": "[MUSIC]" + "time": "15:05", + "text": "[MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" } ] }, { "6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2": [ { - "0:00": "[SOUND] This lecture is a continued discussion of Latent Aspect Rating Analysis. Earlier, we talked about how to solve the problem of LARA in two stages, where we first do segmentation of different aspects, and then we use a latent regression model to learn the aspect ratings and the aspect weights. Now it's also possible to develop a unified generative model for solving this problem; that is, we not only model the generation of the overall rating based on text, we also model the generation of text. And so a natural solution would be to use a topic model. So given the entity, we can assume there are aspects that are described by word distributions, topics. And then we can use a topic model to model the generation of the review text." 
+ "time": "0:00", + "text": "[SOUND] This lecture is a continued discussion of Latent Aspect Rating Analysis. Earlier, we talked about how to solve the problem of LARA in two stages. Where we first do segmentation of different aspects. And then we use a latent regression model to learn the aspect ratings and then later the weights. Now it's also possible to develop a unified generative model for solving this problem, and that is we not only model the generation of the overall rating based on text. We also model the generation of text, and so a natural solution would be to use a topic model. So given the entity, we can assume there are aspects that are described by word distributions. Topics. And then we can use a topic model to model the generation of the review text.", + "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" }, { - "1:01": "I will assume words in the review text are drawn from these distributions." + "time": "1:01", + "text": "I will assume words in the review text are drawn from these distributions.", + "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" }, { - "1:08": "In the same way as we assumed for a generative model like PLSA." + "time": "1:08", + "text": "In the same way as we assumed for a generative model like PLSA.", + "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" }, { - "1:13": "And then we can then plug in the latent regression model to use the text to further predict the overall rating. And that means we first predict the aspect ratings and then combine them with aspect weights to predict the overall rating. So this would give us a unified generative model, where we model both the generation of text and the overall rating conditioned on text."
+ "time": "1:13", + "text": "And then we can then plug in the latent regression model to use the text to further predict the overall rating. And that means we first predict the aspect ratings and then combine them with aspect weights to predict the overall rating. So this would give us a unified generative model, where we model both the generation of text and the overall rating conditioned on text.", + "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" }, { - "1:40": "So we don't have time to discuss this model in detail as in many other cases in this part of the course where we discuss the cutting-edge topics, but there's a reference cited here where you can find more details." + "time": "1:40", + "text": "So we don't have time to discuss this model in detail as in many other cases in this part of the course where we discuss the cutting-edge topics, but there's a reference cited here where you can find more details.", + "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" }, { - "1:57": "So now I'm going to show you some simple results that you can get by using these kinds of generative models. First, it's about rating decomposition. So here, what you see are the decomposed ratings for three hotels that have the same overall rating. So if you just look at the overall rating, you can't really tell much difference between these hotels. But by decomposing these ratings into aspect ratings we can see some hotels have higher ratings for some dimensions, like value, but others might score better in other dimensions, like location. And so this can give you detailed opinions at the aspect level." + "time": "1:57", + "text": "So now I'm going to show you some simple results that you can get by using these kinds of generative models. First, it's about rating decomposition.
So here, what you see are the decomposed ratings for three hotels that have the same overall rating. So if you just look at the overall rating, you can't really tell much difference between these hotels. But by decomposing these ratings into aspect ratings we can see some hotels have higher ratings for some dimensions, like value, but others might score better in other dimensions, like location. And so this can give you detailed opinions at the aspect level.", + "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" }, { - "2:38": "Now here, the ground-truth is shown in the parentheses, so it also allows you to see whether the prediction is accurate. It's not always accurate but it's mostly still reflecting some of the trends." + "time": "2:38", + "text": "Now here, the ground-truth is shown in the parentheses, so it also allows you to see whether the prediction is accurate. It's not always accurate but it's mostly still reflecting some of the trends.", + "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" }, { - "2:53": "The second result compares different reviewers on the same hotel. So the table shows the decomposed ratings for two reviewers about the same hotel. Again their high level overall ratings are the same. So if you just look at the overall ratings, you don't really get that much information about the difference between the two reviewers. But after you decompose the ratings, you can see clearly that they have high scores on different dimensions. So this shows that the model can reveal differences in opinions of different reviewers and such a detailed understanding can help us understand reviewers better and also better understand their feedback on the hotel. This is something very interesting, because this is in some sense a byproduct.
In our problem formulation, we did not really have to do this. But the design of the generative model has this component. And these are the sentiment weights for words in different aspects. And you can see the highly weighted words versus the negatively weighted words here for each of the four dimensions. Value, rooms, location, and cleanliness. The top words clearly make sense, and the bottom words also make sense." + "time": "2:53", + "text": "The second result compares different reviewers on the same hotel. So the table shows the decomposed ratings for two reviewers about the same hotel. Again their high level overall ratings are the same. So if you just look at the overall ratings, you don't really get that much information about the difference between the two reviewers. But after you decompose the ratings, you can see clearly that they have high scores on different dimensions. So this shows that the model can reveal differences in opinions of different reviewers and such a detailed understanding can help us understand reviewers better and also better understand their feedback on the hotel. This is something very interesting, because this is in some sense a byproduct. In our problem formulation, we did not really have to do this. But the design of the generative model has this component. And these are the sentiment weights for words in different aspects. And you can see the highly weighted words versus the negatively weighted words here for each of the four dimensions. Value, rooms, location, and cleanliness. The top words clearly make sense, and the bottom words also make sense.", + "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" }, { - "4:10": "So this shows that with this approach, we can also learn sentiment information directly from the data.
Now, this kind of lexicon is very useful because in general, a word like long, let's say, may have different sentiment polarities for different contexts. So if I say the battery life of this laptop is long, then that's positive. But if I say the rebooting time for the laptop is long, that's bad, right? So even for reviews about the same product, laptop, the word long is ambiguous, it could mean positive or it could mean negative. But this kind of lexicon, that we can learn by using this kind of generative model, can show whether a word is positive for a particular aspect. So this is clearly very useful, and in fact such a lexicon can be directly used to tag other reviews about hotels or tag comments about hotels in social media like Tweets." + "time": "4:10", + "text": "So this shows that with this approach, we can also learn sentiment information directly from the data. Now, this kind of lexicon is very useful because in general, a word like long, let's say, may have different sentiment polarities for different contexts. So if I say the battery life of this laptop is long, then that's positive. But if I say the rebooting time for the laptop is long, that's bad, right? So even for reviews about the same product, laptop, the word long is ambiguous, it could mean positive or it could mean negative. But this kind of lexicon, that we can learn by using this kind of generative model, can show whether a word is positive for a particular aspect.
So this is clearly very useful, and in fact such a lexicon can be directly used to tag other reviews about hotels or tag comments about hotels in social media like Tweets.", + "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" }, { - "5:08": "And what's also interesting is that since this is almost completely unsupervised, well, assuming the reviews whose overall ratings are available. And then this can allow us to learn from a potentially larger amount of data on the internet to enrich the sentiment lexicon." + "time": "5:08", + "text": "And what's also interesting is that since this is almost completely unsupervised, well, assuming the reviews whose overall ratings are available. And then this can allow us to learn from a potentially larger amount of data on the internet to enrich the sentiment lexicon.", + "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" }, { - "5:28": "And here are some results to validate the preference weights. Remember the model can infer whether a reviewer cares more about service or the price. Now how do we know whether the inferred weights are correct? And this poses a very difficult challenge for evaluation.
Now here we show some interesting ways of evaluating." + "time": "5:28", + "text": "And here are some results to validate the preference weights. Remember the model can infer whether a reviewer cares more about service or the price. Now how do we know whether the inferred weights are correct? And this poses a very difficult challenge for evaluation. Now here we show some interesting ways of evaluating.", + "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" }, { - "5:50": "What you see here are the prices of hotels in different cities, and these are the prices of hotels that are favored by different groups of reviewers. The top ten are the reviewers with the highest inferred value-to-other-aspect ratio." + "time": "5:50", + "text": "What you see here are the prices of hotels in different cities, and these are the prices of hotels that are favored by different groups of reviewers. The top ten are the reviewers with the highest inferred value-to-other-aspect ratio.", + "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" }, { - "6:09": "So for example value versus location, value versus room, etcetera. Now the top ten are the reviewers that have the highest ratios by this measure. And that means these reviewers tend to put a lot of weight on value as compared with other dimensions. So that means they really emphasize value." + "time": "6:09", + "text": "So for example value versus location, value versus room, etcetera. Now the top ten are the reviewers that have the highest ratios by this measure. And that means these reviewers tend to put a lot of weight on value as compared with other dimensions. So that means they really emphasize value.", + "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" }, { - "6:30": "The bottom ten, on the other hand, are the reviewers with the lowest ratios. What does that mean? Well it means these reviewers have put higher weights on other aspects than value. So those are people that cared about another dimension and they didn't care so much about the value in some sense, at least as compared with the top ten group."
+ "time": "6:30", + "text": "The bottom ten, on the other hand, are the reviewers with the lowest ratios. What does that mean? Well it means these reviewers have put higher weights on other aspects than value. So those are people that cared about another dimension and they didn't care so much about the value in some sense, at least as compared with the top ten group.", + "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" }, { - "6:52": "Now these ratios are computed based on the inferred weights from the model." + "time": "6:52", + "text": "Now these ratios are computed based on the inferred weights from the model.", + "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" }, { - "6:57": "So now you can see the average prices of hotels favored by the top ten reviewers are indeed much cheaper than those that are favored by the bottom ten. And this provides some indirect way of validating the inferred weights. It just means the weights are not random. They are actually meaningful here. In comparison with the average price in these three cities, you can actually see the top ten tend to have below-average prices, whereas the bottom half, where they care a lot about other things like service or room condition, tend to have hotels that have higher prices than average. So with these results we can build a lot of interesting applications. For example, a direct application would be to generate a rated aspect summary, and because of the decomposition we can now generate the summaries for each aspect: the positive sentences and the negative sentences about each aspect. It's more informative than the original review that just has an overall rating and review text. Here are some other results about the aspects that are discovered from reviews with no ratings.
These are mp3 player reviews, and these results show that the model can discover some interesting aspects commented on in reviews with low overall ratings versus those with higher overall ratings. And they care more about the different aspects." + "time": "6:57", + "text": "So now you can see the average prices of hotels favored by the top ten reviewers are indeed much cheaper than those that are favored by the bottom ten. And this provides some indirect way of validating the inferred weights. It just means the weights are not random. They are actually meaningful here. In comparison with the average price in these three cities, you can actually see the top ten tend to have below-average prices, whereas the bottom half, where they care a lot about other things like service or room condition, tend to have hotels that have higher prices than average. So with these results we can build a lot of interesting applications. For example, a direct application would be to generate a rated aspect summary, and because of the decomposition we can now generate the summaries for each aspect: the positive sentences and the negative sentences about each aspect. It's more informative than the original review that just has an overall rating and review text. Here are some other results about the aspects that are discovered from reviews with no ratings. These are mp3 player reviews, and these results show that the model can discover some interesting aspects commented on in reviews with low overall ratings versus those with higher overall ratings. And they care more about the different aspects.", + "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" }, { - "8:22": "Or they comment more on the different aspects. So that can help us discover, for example, consumers' trends in appreciating different features of products.
For example, one might have discovered the trend that people tend to like larger screens of cell phones or light weight of laptops, etcetera. Such knowledge can be useful for manufacturers to design their next generation of products. Here are some interesting results on analyzing users' rating behavior. So what you see is average weights along different dimensions by different groups of reviewers. And on the left side you see the weights of reviewers that like the expensive hotels. They gave the expensive hotels 5 Stars, and you can see their average weights tend to be more on service. And that suggests that people like expensive hotels because of good service, and that's not surprising. That's also another way to validate the inferred weights." + "time": "8:22", + "text": "Or they comment more on the different aspects. So that can help us discover, for example, consumers' trends in appreciating different features of products. For example, one might have discovered the trend that people tend to like larger screens of cell phones or light weight of laptops, etcetera. Such knowledge can be useful for manufacturers to design their next generation of products. Here are some interesting results on analyzing users' rating behavior. So what you see is average weights along different dimensions by different groups of reviewers. And on the left side you see the weights of reviewers that like the expensive hotels. They gave the expensive hotels 5 Stars, and you can see their average weights tend to be more on service. And that suggests that people like expensive hotels because of good service, and that's not surprising. That's also another way to validate the inferred weights.", + "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" }, { - "9:34": "If you look at the right side, look at the column of 5 Stars.
These are the reviewers that like the cheaper hotels, and they gave cheaper hotels five stars. As we expected, they put more weight on value, and that's why they like the cheaper hotels." + "time": "9:34", + "text": "If you look at the right side, look at the column of 5 Stars. These are the reviewers that like the cheaper hotels, and they gave cheaper hotels five stars. As we expected, they put more weight on value, and that's why they like the cheaper hotels.", + "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" }, { - "9:52": "But if you look at when they didn't like the expensive hotels, or cheaper hotels, then you'll see that they tended to have more weight on the condition of the room, the cleanliness." + "time": "9:52", + "text": "But if you look at when they didn't like the expensive hotels, or cheaper hotels, then you'll see that they tended to have more weight on the condition of the room, the cleanliness.", + "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" }, { - "10:04": "So this shows that by using this model, we can infer some information that's very hard to obtain even if you read all the reviews. Even if you read all the reviews it's very hard to infer such preferences or such emphasis. So this is a case where text mining algorithms can go beyond what humans can do, to reveal interesting patterns in the data. And this of course can be very useful. You can compare different hotels, compare the opinions from different consumer groups, in different locations. And of course, the model is general. It can be applied to any reviews with overall ratings. So this is a very useful technique that can support a lot of text mining applications."
+ "time": "10:04", + "text": "So this shows that by using this model, we can infer some information that's very hard to obtain even if you read all the reviews. Even if you read all the reviews it's very hard to infer such preferences or such emphasis. So this is a case where text mining algorithms can go beyond what humans can do, to reveal interesting patterns in the data. And this of course can be very useful. You can compare different hotels, compare the opinions from different consumer groups, in different locations. And of course, the model is general. It can be applied to any reviews with overall ratings. So this is a very useful technique that can support a lot of text mining applications.", + "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" }, { - "10:50": "Finally, the results of applying this model for personalized ranking or recommendation of entities." + "time": "10:50", + "text": "Finally, the results of applying this model for personalized ranking or recommendation of entities.", + "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" }, { - "10:57": "So because we can infer the reviewers' weights on different dimensions, we can allow a user to actually say what you care about. So for example, I have a query here that shows 90% of the weight should be on value and 10% on others. So that just means I don't care about other aspects. I just care about getting a cheaper hotel. My emphasis is on the value dimension. Now what we can do with such a query is we can use reviewers that we believe have a similar preference to recommend hotels for you. How can we know that? Well, we can infer the weights of those reviewers on different aspects. We can find the reviewers whose weights, or more precisely whose inferred weights, are similar to yours.
And then use those reviewers to recommend hotels for you and this is what we call personalized or rather query specific recommendations. Now the non-personalized recommendations are shown on the top, and you can see the top results generally have much higher prices than the lower group and that's because when the reviewers cared more about the value, as dictated by this query, they tended to really favor low-price hotels. So this is yet another application of this technique." + "time": "10:57", + "text": "So because we can infer the reviewers' weights on different dimensions, we can allow a user to actually say what you care about. So for example, I have a query here that shows 90% of the weight should be on value and 10% on others. So that just means I don't care about other aspects. I just care about getting a cheaper hotel. My emphasis is on the value dimension. Now what we can do with such a query is we can use reviewers that we believe have a similar preference to recommend hotels for you. How can we know that? Well, we can infer the weights of those reviewers on different aspects. We can find the reviewers whose weights, or more precisely whose inferred weights, are similar to yours. And then use those reviewers to recommend hotels for you and this is what we call personalized or rather query specific recommendations. Now the non-personalized recommendations are shown on the top, and you can see the top results generally have much higher prices than the lower group and that's because when the reviewers cared more about the value, as dictated by this query, they tended to really favor low-price hotels. So this is yet another application of this technique.", + "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" }, { - "12:18": "It shows that by doing text mining we can understand the users better. And once we understand users better, we can serve these users better.
So to summarize our discussion of opinion mining in general, this is a very important topic with a lot of applications." + "time": "12:18", + "text": "It shows that by doing text mining we can understand the users better. And once we understand users better, we can serve these users better. So to summarize our discussion of opinion mining in general, this is a very important topic with a lot of applications.", + "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" }, { - "12:33": "And sentiment analysis can be readily done by using just text categorization. But standard techniques tend to not be enough. And so we need to have enriched feature representation." + "time": "12:33", + "text": "And sentiment analysis can be readily done by using just text categorization. But standard techniques tend to not be enough. And so we need to have enriched feature representation.", + "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" }, { - "12:45": "And we also need to consider the order of those categories. And we'll talk about ordinal regression for some of these problems. We have also seen that generative models are powerful for mining latent user preferences. In particular, the generative model for latent rating regression. And we embed some interesting preference information in the sentiment weights of words in the model, and as a result we can learn the most useful information when fitting the model to the data. Now most approaches have been proposed and evaluated for product reviews, and that was because in such a context, the opinion holder and the opinion target are clear. And they are easy to analyze. And there are, of course, also a lot of practical applications.
But opinion mining from news and social media is also important, but that's more difficult than analyzing review data, mainly because the opinion holders and opinion targets are often implicit. So that calls for natural language processing techniques to uncover them accurately." + "time": "12:45", + "text": "And we also need to consider the order of those categories. And we'll talk about ordinal regression for some of these problems. We have also seen that generative models are powerful for mining latent user preferences. In particular, the generative model for latent rating regression. And we embed some interesting preference information in the sentiment weights of words in the model, and as a result we can learn the most useful information when fitting the model to the data. Now most approaches have been proposed and evaluated for product reviews, and that was because in such a context, the opinion holder and the opinion target are clear. And they are easy to analyze. And there are, of course, also a lot of practical applications. But opinion mining from news and social media is also important, but that's more difficult than analyzing review data, mainly because the opinion holders and opinion targets are often implicit. So that calls for natural language processing techniques to uncover them accurately.", + "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" }, { - "13:50": "Here are some suggested readings.
The first two are small books that survey this topic, where you can find a lot of discussion about other variations of the problem and techniques proposed for solving the problem." + "time": "13:50", + "text": "Here are some suggested readings. The first two are small books that survey this topic, where you can find a lot of discussion about other variations of the problem and techniques proposed for solving the problem.", + "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" }, { - "14:08": "The next two papers are about generative models for latent aspect rating analysis. The first one is about solving the problem using two stages, and the second one is about a unified model where the topic model is integrated with the regression model to solve the problem using a unified model." + "time": "14:08", + "text": "The next two papers are about generative models for latent aspect rating analysis. The first one is about solving the problem using two stages, and the second one is about a unified model where the topic model is integrated with the regression model to solve the problem using a unified model.", + "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" }, { - "14:30": "[MUSIC]" + "time": "14:30", + "text": "[MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" } ] }, { "6-3-text-based-prediction": [ { - "0:00": "[SOUND] This lecture is about Text-Based Prediction. In this lecture, we're going to start talking about mining a different kind of knowledge, as you can see here on this slide. Namely we're going to use text data to infer values of some other variables in the real world that may not be directly related to the text. Or only remotely related to text data. So this is very different from content analysis or topic mining where we directly characterize the content of text. It's also different from opinion mining or sentiment analysis, which still has to do with characterizing mostly the content.
Only that we focus more on the subjective content, which reflects what we know about the opinion holder." + "time": "0:00", + "text": "[SOUND] This lecture is about Text-Based Prediction. In this lecture, we're going to start talking about mining a different kind of knowledge, as you can see here on this slide. Namely we're going to use text data to infer values of some other variables in the real world that may not be directly related to the text. Or only remotely related to text data. So this is very different from content analysis or topic mining where we directly characterize the content of text. It's also different from opinion mining or sentiment analysis, which still has to do with characterizing mostly the content. Only that we focus more on the subjective content, which reflects what we know about the opinion holder.", + "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" }, { - "1:05": "But this only provides a limited view of what we can predict." + "time": "1:05", + "text": "But this only provides a limited view of what we can predict.", + "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" }, { - "1:10": "In this lecture and the following lectures, we're going to talk more about how we can predict more information about the world. How can we get the sophisticated patterns of text together with other kinds of data?" + "time": "1:10", + "text": "In this lecture and the following lectures, we're going to talk more about how we can predict more information about the world. How can we get the sophisticated patterns of text together with other kinds of data?", + "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" }, { - "1:28": "It would be useful first to take a look at the big picture of prediction, and data mining in general, and I call this data mining loop.
So the picture that you are seeing right now is that there are multiple sensors, including human sensors, to report what we have seen in the real world in the form of data. Of course the data in the form of non-text data, and text data." + "time": "1:28", + "text": "It would be useful first to take a look at the big picture of prediction, and data mining in general, and I call this data mining loop. So the picture that you are seeing right now is that there are multiple sensors, including human sensors, to report what we have seen in the real world in the form of data. Of course the data in the form of non-text data, and text data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" }, { - "1:51": "And our goal is to see if we can predict some values of important real world variables that matter to us. For example, someone's house condition, or the weather, or etc. And so these variables would be important because we might want to act on that. We might want to make decisions based on that. So how can we get from the data to these predicted values? Well in general we'll first have to do data mining and analysis of the data." + "time": "1:51", + "text": "And our goal is to see if we can predict some values of important real world variables that matter to us. For example, someone's house condition, or the weather, or etc. And so these variables would be important because we might want to act on that. We might want to make decisions based on that. So how can we get from the data to these predicted values? 
Well in general we'll first have to do data mining and analysis of the data." + "time": "1:51", + "text": "And our goal is to see if we can predict some values of important real world variables that matter to us. For example, someone's health condition, or the weather, etc. And so these variables would be important because we might want to act on that. We might want to make decisions based on that. So how can we get from the data to these predicted values? Well in general we'll first have to do data mining and analysis of the data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" }, { - "2:23": "Because we, in general, should treat all the data that we collected" + "time": "2:23", + "text": "Because we, in general, should treat all the data that we collected", + "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" }, { - "2:30": "in such a prediction problem set up. We are very much interested in joint mining of non-text and text data, which should combine all the data together." + "time": "2:30", + "text": "in such a prediction problem set up. We are very much interested in joint mining of non-text and text data, which should combine all the data together.", + "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" }, { - "2:41": "And then, through analysis, generally there are multiple predictors of this interesting variable to us. And we call these features. And these features can then be put into a predictive model, to actually predict the value of any interesting variable." + "time": "2:41", + "text": "And then, through analysis, generally there are multiple predictors of this interesting variable to us. And we call these features. And these features can then be put into a predictive model, to actually predict the value of any interesting variable.", + "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" }, { - "3:02": "So this then allows us to change the world.
And so this basically is the general process for making a prediction based on data, including the test data." + "time": "3:02", + "text": "So this then allows us to change the world. And so this basically is the general process for making a prediction based on data, including the text data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" }, { - "3:17": "Now it's important to emphasize that a human actually plays a very important role in this process." + "time": "3:17", + "text": "Now it's important to emphasize that a human actually plays a very important role in this process.", + "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" }, { - "3:24": "Especially because of the involvement of text data. So human first would be involved in the mining of the data. It would control the generation of these features. And it would also help us understand the text data, because text data are created to be consumed by humans. Humans are the best in consuming or interpreting text data." + "time": "3:24", + "text": "Especially because of the involvement of text data. So humans first would be involved in the mining of the data. They would control the generation of these features. And they would also help us understand the text data, because text data are created to be consumed by humans. Humans are the best at consuming or interpreting text data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" }, { - "3:48": "But when there are, of course, a lot of text data then machines have to help and that's why we need to do text data mining." + "time": "3:48", + "text": "But when there are, of course, a lot of text data then machines have to help and that's why we need to do text data mining.", + "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" }, { - "3:55": "Sometimes machines can see patterns in a lot of data that humans may not see. But in general human would play an important role in analyzing some text data, or applications.
Next, human also must be involved in predictive model building and adjusting or testing. So in particular, we will have a lot of domain knowledge about the problem of prediction that we can build into this predictive model. And then next, of course, when we have predictive values for the variables, then humans would be involved in taking actions to change a word or make decisions based on these particular values." + "time": "3:55", + "text": "Sometimes machines can see patterns in a lot of data that humans may not see. But in general humans would play an important role in analyzing some text data, or applications. Next, humans also must be involved in predictive model building and adjusting or testing. So in particular, we will have a lot of domain knowledge about the problem of prediction that we can build into this predictive model. And then next, of course, when we have predicted values for the variables, then humans would be involved in taking actions to change the world or make decisions based on these particular values.", + "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" }, { - "4:36": "And finally it's interesting that a human could be involved in controlling the sensors." + "time": "4:36", + "text": "And finally it's interesting that a human could be involved in controlling the sensors.", + "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" }, { - "4:43": "And this is so that we can adjust to the sensors to collect the most useful data for prediction." + "time": "4:43", + "text": "And this is so that we can adjust the sensors to collect the most useful data for prediction.", + "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" }, { - "4:52": "So that's why I call this data mining loop. Because as we perturb the sensors, it'll collect the new data and more useful data then we will obtain more data for prediction.
And this data generally will help us improve the predicting accuracy. And in this loop, humans will recognize what additional data will need to be collected. And machines, of course, help humans identify what data should be collected next. In general, we want to collect data that is most useful for learning. And there was actually a subarea in machine learning called active learning that has to do with this. How do you identify data" + "time": "4:52", + "text": "So that's why I call this data mining loop. Because as we perturb the sensors, it'll collect the new data and more useful data then we will obtain more data for prediction. And this data generally will help us improve the predicting accuracy. And in this loop, humans will recognize what additional data will need to be collected. And machines, of course, help humans identify what data should be collected next. In general, we want to collect data that is most useful for learning. And there was actually a subarea in machine learning called active learning that has to do with this. How do you identify data", + "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" }, { - "5:32": "points that would be most helpful in machine learning programs? If you can label them, right?" + "time": "5:32", + "text": "points that would be most helpful in machine learning programs? If you can label them, right?", + "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" }, { - "5:38": "So, in general, you can see there is a loop here from data acquisition to data analysis. Or data mining to prediction of values. And to take actions to change the word, and then observe what happens. And then you can then decide what additional data have to be collected by adjusting the sensors. Or from the prediction arrows, you can also note what additional data we need to acquire in order to improve the accuracy of prediction. 
And this big picture is actually very general and it's reflecting a lot of important applications of big data." + "time": "5:38", + "text": "So, in general, you can see there is a loop here from data acquisition to data analysis. Or data mining to prediction of values. And to take actions to change the world, and then observe what happens. And then you can decide what additional data have to be collected by adjusting the sensors. Or from the prediction errors, you can also note what additional data we need to acquire in order to improve the accuracy of prediction. And this big picture is actually very general and it's reflecting a lot of important applications of big data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" }, { - "6:16": "So, it's useful to keep that in mind while we are looking at some text mining techniques." + "time": "6:16", + "text": "So, it's useful to keep that in mind while we are looking at some text mining techniques.", + "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" }, { - "6:22": "So from text mining perspective and we're interested in text based prediction. Of course, sometimes texts alone can make predictions. And this is most useful for prediction about human behavior or human preferences or opinions. But in general text data will be put together as non-text data. So the interesting questions here would be, first, how can we design effective predictors?
And how do we generate such effective predictors from text?" + "time": "6:22", + "text": "So from a text mining perspective, we're interested in text based prediction. Of course, sometimes texts alone can make predictions. And this is most useful for prediction about human behavior or human preferences or opinions. But in general text data will be put together with non-text data. So the interesting questions here would be, first, how can we design effective predictors? And how do we generate such effective predictors from text?", + "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" }, { - "6:53": "And this question has been addressed to some extent in some previous lectures where we talked about what kind of features we can design for text data. And it has also been addressed to some extent by talking about the other knowledge that we can mine from text. So, for example, topic mining can be very useful to generate the patterns or topic based indicators or predictors that can be further fed into a predictive model. So topics can be intermediate recognition of text. That would allow us to do design high level features or predictors that are useful for prediction of some other variable. It may be also generated from original text data, it provides a much better implementation of the problem and it serves as more effective predictors." + "time": "6:53", + "text": "And this question has been addressed to some extent in some previous lectures where we talked about what kind of features we can design for text data. And it has also been addressed to some extent by talking about the other knowledge that we can mine from text. So, for example, topic mining can be very useful to generate the patterns or topic based indicators or predictors that can be further fed into a predictive model. So topics can be an intermediate representation of text. That would allow us to design high level features or predictors that are useful for prediction of some other variable. It may be also generated from the original text data; it provides a much better representation of the problem and it serves as more effective predictors.", + "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" }, { - "7:46": "And similarly similar analysis can lead to such predictors, as well. So, those other data mining or text mining algorithms can be used to generate predictors."
+ "time": "7:46", + "text": "And similarly, sentiment analysis can lead to such predictors, as well. So, those other data mining or text mining algorithms can be used to generate predictors.", + "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" }, { - "7:58": "The other question is, how can we join the mine text and non-text data together? Now, this is a question that we have not addressed yet. So, in this lecture, and in the following lectures, we're going to address this problem. Because this is where we can generate much more enriched features for prediction. And allows us to review a lot of interesting knowledge about the world. These patterns that are generated from text and non-text data themselves can sometimes, already be useful for prediction. But, when they are put together with many other predictors they can really help improving the prediction." + "time": "7:58", + "text": "The other question is, how can we jointly mine text and non-text data? Now, this is a question that we have not addressed yet. So, in this lecture, and in the following lectures, we're going to address this problem. Because this is where we can generate much more enriched features for prediction. And it allows us to reveal a lot of interesting knowledge about the world. These patterns that are generated from text and non-text data themselves can sometimes already be useful for prediction. But, when they are put together with many other predictors they can really help improve the prediction.", + "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" }, { - "8:39": "Basically, you can see text-based prediction can actually serve as a unified framework to combine many text mining and analysis techniques. Including topic mining and any content mining techniques or segment analysis."
+ "time": "8:39", + "text": "Basically, you can see text-based prediction can actually serve as a unified framework to combine many text mining and analysis techniques. Including topic mining and any content mining techniques or sentiment analysis.", + "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" }, { - "8:55": "The goal here is mainly to evoke values of real-world variables. But in order to achieve the goal we can do some other preparations. And these are subtasks. So one subtask could mine the content of text data, like topic mining. And the other could be to mine knowledge about the observer. So sentiment analysis, opinion." + "time": "8:55", + "text": "The goal here is mainly to infer values of real-world variables. But in order to achieve the goal we can do some other preparations. And these are subtasks. So one subtask could be to mine the content of text data, like topic mining. And the other could be to mine knowledge about the observer, such as sentiment analysis and opinion mining.", + "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" }, { - "9:21": "And both can help provide predictors for the prediction problem." + "time": "9:21", + "text": "And both can help provide predictors for the prediction problem.", + "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" }, { - "9:27": "And of course we can also add non-text data directly to the predicted model, but then non-text data also helps provide a context for text analyst. And that further improves the topic mining and the opinion analysis. And such improvement often leads to more effective predictors for our problems. It would enlarge the space of patterns of opinions of topics that we can mine from text and that we'll discuss more later. So the joint analysis of text and non-text data can be actually understood from two perspectives."
+ "time": "9:27", + "text": "And of course we can also add non-text data directly to the predictive model, but non-text data also helps provide a context for text analysis. And that further improves the topic mining and the opinion analysis. And such improvement often leads to more effective predictors for our problems. It would enlarge the space of patterns, opinions, and topics that we can mine from text, and we'll discuss that more later. So the joint analysis of text and non-text data can actually be understood from two perspectives.", + "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" }, { - "10:05": "One perspective, we have non-text can help with testimony." + "time": "10:05", + "text": "One perspective is that non-text data can help with text mining.", + "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" }, { - "10:11": "Because non-text data can provide a context for mining text data provide a way to partition data in different ways. And this leads to a number of type of techniques for contextual types of mining. And that's the mine text in the context defined by non-text data. And you see this reference here, for a large body of work, in this direction. And I will need to highlight some of them, in the next lectures." + "time": "10:11", + "text": "Because non-text data can provide a context for mining text data, and provide a way to partition the data in different ways. And this leads to a number of techniques for contextual text mining, that is, to mine text in the context defined by non-text data. And you see this reference here, for a large body of work in this direction. And I will highlight some of them in the next lectures.", + "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" }, { - "10:39": "Now, the other perspective is text data can help with non-text data mining as well.
And this is because text data can help interpret patterns discovered from non-text data. Let's say you discover some frequent patterns from non-text data. Now we can use the text data associated with instances where the pattern occurs as well as text data that is associated with instances where the pattern doesn't look up. And this gives us two sets of text data. And then we can see what's the difference. And this difference in text data is interpretable because text content is easy to digest. And that difference might suggest some meaning for this pattern that we found from non-text data. So, it helps interpret such patterns. And this technique is called pattern annotation." + "time": "10:39", + "text": "Now, the other perspective is text data can help with non-text data mining as well. And this is because text data can help interpret patterns discovered from non-text data. Let's say you discover some frequent patterns from non-text data. Now we can use the text data associated with instances where the pattern occurs as well as text data that is associated with instances where the pattern doesn't occur. And this gives us two sets of text data. And then we can see what's the difference. And this difference in text data is interpretable because text content is easy to digest. And that difference might suggest some meaning for this pattern that we found from non-text data. So, it helps interpret such patterns. And this technique is called pattern annotation.", + "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" }, { - "11:32": "And you can see this reference listed here for more detail." + "time": "11:32", + "text": "And you can see this reference listed here for more detail.", + "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" }, { - "11:38": "So here are the references that I just mentioned. The first is reference for pattern annotation.
The second is, Qiaozhu Mei's dissertation on contextual text mining. It contains a large body of work on contextual text mining techniques." + "time": "11:38", + "text": "So here are the references that I just mentioned. The first is reference for pattern annotation. The second is, Qiaozhu Mei's dissertation on contextual text mining. It contains a large body of work on contextual text mining techniques.", + "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" }, { - "11:56": "[MUSIC]" + "time": "11:56", + "text": "[MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" } ] }, { "6-4-contextual-text-mining-motivation": [ { - "0:00": "[SOUND] This lecture is about the contextual text mining." + "time": "0:00", + "text": "[SOUND] This lecture is about the contextual text mining.", + "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" }, { - "0:11": "Contextual text mining is related to multiple kinds of knowledge that we mine from text data, as I'm showing here. It's related to topic mining because you can make topics associated with context, like time or location. And similarly, we can make opinion mining more contextualized, making opinions connected to context." + "time": "0:11", + "text": "Contextual text mining is related to multiple kinds of knowledge that we mine from text data, as I'm showing here. It's related to topic mining because you can make topics associated with context, like time or location. And similarly, we can make opinion mining more contextualized, making opinions connected to context.", + "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" }, { - "0:34": "It's related to text based prediction because it allows us to combine non-text data with text data to derive sophisticated predictors for the prediction problem. 
So more specifically, why are we interested in contextual text mining? Well, that's first because text often has rich context information. And this can include direct context such as meta-data, and also indirect context. So, the direct context can grow the meta-data such as time, location, authors, and source of the text data. And they're almost always available to us." + "time": "0:34", + "text": "It's related to text based prediction because it allows us to combine non-text data with text data to derive sophisticated predictors for the prediction problem. So more specifically, why are we interested in contextual text mining? Well, that's first because text often has rich context information. And this can include direct context such as meta-data, and also indirect context. So, the direct context can include the meta-data such as time, location, authors, and source of the text data. And they're almost always available to us.", + "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" }, { - "1:14": "Indirect context refers to additional data related to the meta-data. So for example, from office, we can further obtain additional context such as social network of the author, or the author's age." + "time": "1:14", + "text": "Indirect context refers to additional data related to the meta-data. So for example, from the authors, we can further obtain additional context such as the social network of the author, or the author's age.", + "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" }, { - "1:30": "Such information is not in general directly related to the text, yet through the process, we can connect them. There could be other text data from the same source, as this one through the other text can be connected with this text as well. So in general, any related data can be regarded as context. So there could be removed or rated for context.
+ "time": "1:30", + "text": "Such information is not in general directly related to the text, yet through the process, we can connect them. There could be other text data from the same source as this one, so the other text can be connected with this text as well. So in general, any related data can be regarded as context. So the context could be more remotely related as well.", + "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" }, { - "1:55": "And so what's the use? What is text context used for? Well, context can be used to partition text data in many interesting ways. It can almost allow us to partition text data in other ways as we need. And this is very important because this allows us to do interesting comparative analyses. It also in general, provides meaning to the discovered topics, if we associate the text with context." + "time": "1:55", + "text": "And so what's the use? What is text context used for? Well, context can be used to partition text data in many interesting ways. It can almost allow us to partition text data in any way we need. And this is very important because this allows us to do interesting comparative analyses. It also in general provides meaning to the discovered topics, if we associate the text with context.", + "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" }, { - "2:25": "So here's illustration of how context can be regarded as interesting ways of partitioning of text data.
So here I just showed some research papers published in different years." + "time": "2:25", + "text": "So here's an illustration of how context can enable interesting ways of partitioning text data. So here I just showed some research papers published in different years.", + "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" }, { - "2:41": "On different venues, different conference names here listed on the bottom like the SIGIR or ACL, etc." + "time": "2:41", + "text": "On different venues, different conference names here listed on the bottom like the SIGIR or ACL, etc.", + "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" }, { - "2:49": "Now such text data can be partitioned in many interesting ways because we have context." + "time": "2:49", + "text": "Now such text data can be partitioned in many interesting ways because we have context.", + "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" }, { - "2:56": "So the context here just includes time and the conference venues. But perhaps we can include some other variables as well." + "time": "2:56", + "text": "So the context here just includes time and the conference venues. But perhaps we can include some other variables as well.", + "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" }, { - "3:06": "But let's see how we can partition this interesting of ways. First, we can treat each paper as a separate unit. So in this case, a paper ID and the, each paper has its own context. It's independent.
But we can also treat all the papers within 1998 as one group and this is only possible because of the availability of time. And we can partition data in this way. This would allow us to compare topics for example, in different years." + "time": "3:06", + "text": "But let's see how we can partition this data in interesting ways. First, we can treat each paper as a separate unit. So in this case, each paper has its own context. It's independent. But we can also treat all the papers within 1998 as one group and this is only possible because of the availability of time. And we can partition data in this way. This would allow us to compare topics, for example, in different years.", + "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" }, { - "3:39": "Similarly, we can partition the data based on the menus. We can get all the SIGIR papers and compare those papers with the rest. Or compare SIGIR papers with KDD papers, with ACL papers." + "time": "3:39", + "text": "Similarly, we can partition the data based on the venues. We can get all the SIGIR papers and compare those papers with the rest. Or compare SIGIR papers with KDD papers, with ACL papers.", + "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" }, { - "3:52": "We can also partition the data to obtain the papers written by authors in the U.S., and that of course, uses additional context of the authors. And this would allow us to then compare such a subset with another set of papers written by also seen in other countries." + "time": "3:52", + "text": "We can also partition the data to obtain the papers written by authors in the U.S., and that of course, uses additional context of the authors. And this would allow us to then compare such a subset with another set of papers written by authors in other countries.", + "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" }, { - "4:13": "Or we can obtain a set of papers about text mining, and this can be compared with papers about another topic.
And note that these partitionings can be also intersected with each other to generate even more complicated partitions.", + "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" }, { - "4:29": "And so in general, this enables discovery of knowledge associated with different context as needed." + "time": "4:29", + "text": "And so in general, this enables discovery of knowledge associated with different context as needed.", + "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" }, { - "4:37": "And in particular, we can compare different contexts. And this often gives us a lot of useful knowledge. For example, comparing topics over time, we can see trends of topics. Comparing topics in different contexts can also reveal differences about the two contexts. So there are many interesting questions that require contextual text mining. Here I list some very specific ones. For example, what topics have been getting increasing attention recently in data mining research? Now to answer this question, obviously we need to analyze text in the context of time." + "time": "4:37", + "text": "And in particular, we can compare different contexts. And this often gives us a lot of useful knowledge. For example, comparing topics over time, we can see trends of topics. Comparing topics in different contexts can also reveal differences about the two contexts. So there are many interesting questions that require contextual text mining. Here I list some very specific ones. For example, what topics have been getting increasing attention recently in data mining research? Now to answer this question, obviously we need to analyze text in the context of time.", + "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" }, { - "5:13": "So time is context in this case. 
Is there any difference in the responses of people in different regions to the event, to any event? So this is a very broad question. In this case of course, location is the context. What are the common research interests of two researchers? In this case, authors can be the context. Is there any difference in the research topics published by authors in the USA and those outside? Now in this case, the context would include the authors and their affiliation and location." + "time": "5:13", + "text": "So time is context in this case. Is there any difference in the responses of people in different regions to the event, to any event? So this is a very broad question. In this case of course, location is the context. What are the common research interests of two researchers? In this case, authors can be the context. Is there any difference in the research topics published by authors in the USA and those outside? Now in this case, the context would include the authors and their affiliation and location.", + "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" }, { - "5:47": "So this goes beyond just the author himself or herself. We need to look at the additional information connected to the author. Is there any difference in the opinions about the topics expressed on one social network and another? In this case, the social network of authors and the topic can be a context." + "time": "5:47", + "text": "So this goes beyond just the author himself or herself. We need to look at the additional information connected to the author. Is there any difference in the opinions about the topics expressed on one social network and another? 
In this case, the social network of authors and the topic can be a context.", + "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" }, { - "6:06": "Are there topics in news data that are correlated with sudden changes in stock prices? In this case, we can use a time series such as stock prices as context." + "time": "6:06", + "text": "Are there topics in news data that are correlated with sudden changes in stock prices? In this case, we can use a time series such as stock prices as context.", + "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" }, { - "6:17": "What issues mattered in the 2012 presidential campaign, or presidential election? Now in this case, time serves again as context. So, as you can see, the list can go on and on. Basically, contextual text mining can have many applications. [MUSIC]" + "time": "6:17", + "text": "What issues mattered in the 2012 presidential campaign, or presidential election? Now in this case, time serves again as context. So, as you can see, the list can go on and on. Basically, contextual text mining can have many applications. [MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" } ] }, { "6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis": [ { - "0:00": "[MUSIC] This lecture is about a specific technique for Contextual Text Mining called Contextual Probabilistic Latent Semantic Analysis." + "time": "0:00", + "text": "[MUSIC] This lecture is about a specific technique for Contextual Text Mining called Contextual Probabilistic Latent Semantic Analysis.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "0:19": "In this lecture, we're going to continue discussing Contextual Text Mining. 
And we're going to introduce Contextual Probabilistic Latent Semantic Analysis as an extension of PLSA for doing contextual text mining." + "time": "0:19", + "text": "In this lecture, we're going to continue discussing Contextual Text Mining. And we're going to introduce Contextual Probabilistic Latent Semantic Analysis as an extension of PLSA for doing contextual text mining.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "0:34": "Recall that in contextual text mining we hope to analyze topics in text, in consideration of the context so that we can associate the topics with properties of the context we are interested in." + "time": "0:34", + "text": "Recall that in contextual text mining we hope to analyze topics in text, in consideration of the context so that we can associate the topics with properties of the context we are interested in.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "0:48": "So in this approach, contextual probabilistic latent semantic analysis, or CPLSA, the main idea is to add interesting context variables into the generative model." + "time": "0:48", + "text": "So in this approach, contextual probabilistic latent semantic analysis, or CPLSA, the main idea is to add interesting context variables into the generative model.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "1:03": "Recall that before when we generate the text we generally assume we'll start with some topics, and then sample words from these topics. But here, we're going to add context variables, so that the coverage of topics, and also the content of topics would be tied to the context. 
Or in other words, we're going to let the context influence both coverage and the content of a topic." + "time": "1:03", + "text": "Recall that before when we generate the text we generally assume we'll start with some topics, and then sample words from these topics. But here, we're going to add context variables, so that the coverage of topics, and also the content of topics would be tied to the context. Or in other words, we're going to let the context influence both coverage and the content of a topic.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "1:31": "The consequence is that this will enable us to discover contextualized topics, making the topics more interesting and more meaningful. Because we can then have topics that can be interpreted specifically with respect to a particular context that we are interested in. For example, a particular time period." + "time": "1:31", + "text": "The consequence is that this will enable us to discover contextualized topics, making the topics more interesting and more meaningful. Because we can then have topics that can be interpreted specifically with respect to a particular context that we are interested in. For example, a particular time period.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "1:52": "As an extension of the PLSA model, CPLSA does the following changes. Firstly, it would model the conditional likelihood of text given context." + "time": "1:52", + "text": "As an extension of the PLSA model, CPLSA does the following changes. 
Firstly, it would model the conditional likelihood of text given context.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "2:07": "That clearly suggests that the generation of text would then depend on context, and that allows us to bring context into the generative model." + "time": "2:07", + "text": "That clearly suggests that the generation of text would then depend on context, and that allows us to bring context into the generative model.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "2:18": "Secondly, it makes two specific assumptions about the dependency of topics on context. One is to assume that, depending on the context, that is, different time periods or different locations, there are different views of a topic, or different versions of word distributions that characterize a topic." + "time": "2:18", + "text": "Secondly, it makes two specific assumptions about the dependency of topics on context. One is to assume that, depending on the context, that is, different time periods or different locations, there are different views of a topic, or different versions of word distributions that characterize a topic.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "2:38": "And this assumption allows us to discover different variations of the same topic in different contexts." 
+ "time": "2:38", + "text": "And this assumption allows us to discover different variations of the same topic in different contexts.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "2:46": "The other is that we assume the topic coverage also depends on the context." + "time": "2:46", + "text": "The other is that we assume the topic coverage also depends on the context.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "2:55": "That means depending on the time or location, we might cover topics differently." + "time": "2:55", + "text": "That means depending on the time or location, we might cover topics differently.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "3:00": "Again, this dependency would then allow us to capture the association of topics with specific contexts. We can still use the EM algorithm to solve the problem of parameter estimation." + "time": "3:00", + "text": "Again, this dependency would then allow us to capture the association of topics with specific contexts. We can still use the EM algorithm to solve the problem of parameter estimation.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "3:16": "And in this case, the estimated parameters would naturally contain context variables. 
And in particular, a lot of conditional probabilities of topics given certain contexts. And this is what allows you to do contextual text mining. So this is the basic idea." + "time": "3:16", + "text": "And in this case, the estimated parameters would naturally contain context variables. And in particular, a lot of conditional probabilities of topics given certain contexts. And this is what allows you to do contextual text mining. So this is the basic idea.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "3:35": "Now, we don't have time to introduce this model in detail, but there are references here that you can look into to know more detail. Here I just want to explain the high level ideas in more detail. Particularly, I want to explain the generation process of text data with associated context in such a model." + "time": "3:35", + "text": "Now, we don't have time to introduce this model in detail, but there are references here that you can look into to know more detail. Here I just want to explain the high level ideas in more detail. Particularly, I want to explain the generation process of text data with associated context in such a model.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "4:01": "So as you see here, we can assume there are still multiple topics. For example, some topics might represent themes like government response, donation, or the city of New Orleans. Now this example is in the context of Hurricane Katrina, which hit New Orleans." + "time": "4:01", + "text": "So as you see here, we can assume there are still multiple topics. For example, some topics might represent themes like government response, donation, or the city of New Orleans. Now this example is in the context of Hurricane Katrina, which hit New Orleans.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "4:22": "Now as you can see we assume there are different views associated with each of the topics. 
And these are shown as View 1, View 2, View 3. Each view is a different version of word distributions. And these views are tied to some context variables. For example, tied to the location Texas, or the time July 2005, or the occupation of the author being a sociologist." + "time": "4:22", + "text": "Now as you can see we assume there are different views associated with each of the topics. And these are shown as View 1, View 2, View 3. Each view is a different version of word distributions. And these views are tied to some context variables. For example, tied to the location Texas, or the time July 2005, or the occupation of the author being a sociologist.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "4:56": "Now, on the right side, we assume the document has context information. So the time is known to be July 2005. The location is Texas, etc. And such context information is what we hope to model as well. So we're not going to just model the text." + "time": "4:56", + "text": "Now, on the right side, we assume the document has context information. So the time is known to be July 2005. The location is Texas, etc. And such context information is what we hope to model as well. So we're not going to just model the text.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "5:15": "And so one idea here is to model the variations of topic content. And this gives us different views of the word distributions." + "time": "5:15", + "text": "And so one idea here is to model the variations of topic content. 
And this gives us different views of the word distributions.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "5:27": "Now on the bottom you will see that the theme coverage, or topic coverage, might also vary according to these contexts, because in the case of a location like Texas, people might want to cover the red topics more. That's New Orleans. That's visualized here. But in a certain time period, maybe a particular topic will be covered more. So this variation is also considered in CPLSA. So to generate such a document with context, we first choose a view." + "time": "5:27", + "text": "Now on the bottom you will see that the theme coverage, or topic coverage, might also vary according to these contexts, because in the case of a location like Texas, people might want to cover the red topics more. That's New Orleans. That's visualized here. But in a certain time period, maybe a particular topic will be covered more. So this variation is also considered in CPLSA. So to generate such a document with context, we first choose a view.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "6:08": "And this view of course now could be from any of these contexts. Let's say we have taken the view that depends on the time, in the middle. So now, we will have a specific version of word distributions. Now, you can see some probabilities of words for each topic." + "time": "6:08", + "text": "And this view of course now could be from any of these contexts. Let's say we have taken the view that depends on the time, in the middle. So now, we will have a specific version of word distributions. 
Now, you can see some probabilities of words for each topic.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "6:26": "Now, once we have chosen a view, now the situation will be very similar to what happened in standard PLSA. We assume we have got a word distribution associated with each topic, right?" + "time": "6:26", + "text": "Now, once we have chosen a view, now the situation will be very similar to what happened in standard PLSA. We assume we have got a word distribution associated with each topic, right?", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "6:39": "And then next, we will also choose a coverage from the bottom, so we're going to choose a particular coverage, and that coverage, before, was fixed in PLSA and assigned to a particular document. Each document has just one coverage distribution." + "time": "6:39", + "text": "And then next, we will also choose a coverage from the bottom, so we're going to choose a particular coverage, and that coverage, before, was fixed in PLSA and assigned to a particular document. Each document has just one coverage distribution.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "6:58": "Now here, because we consider context, the distribution of topics, or the coverage of topics, can vary depending on the context that has influenced the coverage." 
+ "time": "6:58", + "text": "Now here, because we consider context, the distribution of topics, or the coverage of topics, can vary depending on the context that has influenced the coverage.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "7:10": "So, for example, we might pick a particular coverage. Let's say in this case we picked a document specific coverage." + "time": "7:10", + "text": "So, for example, we might pick a particular coverage. Let's say in this case we picked a document specific coverage.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "7:20": "Now with the coverage and these word distributions we can generate a document in exactly the same way as in PLSA. So what it means is that we're going to use the coverage to choose a topic, to choose one of these three topics. Let's say we have picked the yellow topic. Then we'll draw a word from this particular topic on the top." + "time": "7:20", + "text": "Now with the coverage and these word distributions we can generate a document in exactly the same way as in PLSA. So what it means is that we're going to use the coverage to choose a topic, to choose one of these three topics. Let's say we have picked the yellow topic. Then we'll draw a word from this particular topic on the top.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "7:44": "Okay, so we might get a word like government. And then next time we might choose a different topic, and we'll get donate, etc. 
Until we generate all the words. And this is basically the same process as in PLSA." + "time": "7:44", + "text": "Okay, so we might get a word like government. And then next time we might choose a different topic, and we'll get donate, etc. Until we generate all the words. And this is basically the same process as in PLSA.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "8:00": "So the main difference is that when we obtain the coverage and the word distributions, we let the context influence our choice. So in other words, we have extra switches that are tied to these contexts that will control the choices of different views of topics and the choices of coverage." + "time": "8:00", + "text": "So the main difference is that when we obtain the coverage and the word distributions, we let the context influence our choice. So in other words, we have extra switches that are tied to these contexts that will control the choices of different views of topics and the choices of coverage.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "8:22": "And naturally the model will have more parameters to estimate. But once we can estimate those parameters that involve the context, then we will be able to understand the context specific views of topics, or context specific coverages of topics. And this is precisely what we want in contextual text mining." + "time": "8:22", + "text": "And naturally the model will have more parameters to estimate. But once we can estimate those parameters that involve the context, then we will be able to understand the context specific views of topics, or context specific coverages of topics. And this is precisely what we want in contextual text mining.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "8:40": "So here are some simple results from using such a model. Not necessarily exactly the same model, but similar models. 
So on this slide you see some sample results of comparing news articles about Iraq War and Afghanistan War." + "time": "8:40", + "text": "So here are some simple results from using such a model. Not necessarily exactly the same model, but similar models. So on this slide you see some sample results of comparing news articles about Iraq War and Afghanistan War.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "8:56": "Now we have about 30 articles on Iraq war, and 26 articles on Afghanistan war. And in this case, the goal is to reveal the common topics covered in both sets of articles, and the differences or variations of the topics in each of the two collections." + "time": "8:56", + "text": "Now we have about 30 articles on Iraq war, and 26 articles on Afghanistan war. And in this case, the goal is to reveal the common topics covered in both sets of articles, and the differences or variations of the topics in each of the two collections.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "9:18": "So in this case the context is explicitly specified by the topic or collection." + "time": "9:18", + "text": "So in this case the context is explicitly specified by the topic or collection.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "9:25": "And we see the results here show that there is a common theme that's corresponding to Cluster 1 here in this column. And there is a common theme indicating that the United Nations is involved in both wars. It's a common topic covered in both sets of articles. And that's indicated by the high probability words shown here, united and nations." 
+ "time": "9:25", + "text": "And we see the results here show that there is a common theme that's corresponding to Cluster 1 here in this column. And there is a common theme indicating that the United Nations is involved in both wars. It's a common topic covered in both sets of articles. And that's indicated by the high probability words shown here, united and nations.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "9:51": "Now if you know the background, of course this is not surprising and this topic is indeed very relevant to both wars. If you look at the column further, what's interesting is that the next two cells of word distributions actually tell us collection specific variations of the topic of United Nations. So it indicates that in the Iraq War, the United Nations was more involved in weapons inspections, whereas in the Afghanistan War it was more involved in maybe aid to the Northern Alliance. It's a different variation of the topic of United Nations." + "time": "9:51", + "text": "Now if you know the background, of course this is not surprising and this topic is indeed very relevant to both wars. If you look at the column further, what's interesting is that the next two cells of word distributions actually tell us collection specific variations of the topic of United Nations. So it indicates that in the Iraq War, the United Nations was more involved in weapons inspections, whereas in the Afghanistan War it was more involved in maybe aid to the Northern Alliance. It's a different variation of the topic of United Nations.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "10:30": "So this shows that by bringing in the context, in this case the different wars or the different collections of text. 
We can have topical variations tied to these contexts, to reveal the differences of coverage of the United Nations in the two wars." + "time": "10:30", + "text": "So this shows that by bringing in the context, in this case the different wars or the different collections of text, we can have topical variations tied to these contexts, to reveal the differences of coverage of the United Nations in the two wars.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "10:46": "Now similarly if you look at the second cluster, cluster two, it has to do with the killing of people, and, again, it's not surprising if you know the background about wars. All the wars involve killing of people, but imagine if you are not familiar with the text collections. We have a lot of text articles, and such a technique can reveal the common topics covered in both sets of articles. It can be used to reveal common topics in multiple sets of articles as well. If you look further at that column of cluster two, you see variations of killing of people, and that corresponds to different contexts." + "time": "10:46", + "text": "Now similarly if you look at the second cluster, cluster two, it has to do with the killing of people, and, again, it's not surprising if you know the background about wars. All the wars involve killing of people, but imagine if you are not familiar with the text collections. We have a lot of text articles, and such a technique can reveal the common topics covered in both sets of articles. It can be used to reveal common topics in multiple sets of articles as well. 
If you look further at that column of cluster two, you see variations of killing of people, and that corresponds to different contexts.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "11:28": "And here is another example of results obtained from blog articles about Hurricane Katrina." + "time": "11:28", + "text": "And here is another example of results obtained from blog articles about Hurricane Katrina.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "11:37": "In this case, what you see here is visualization of the trends of topics over time." + "time": "11:37", + "text": "In this case, what you see here is visualization of the trends of topics over time.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "11:47": "And the top one shows just the temporal trends of two topics. One is oil price, and one is about the flooding of the city of New Orleans." + "time": "11:47", + "text": "And the top one shows just the temporal trends of two topics. One is oil price, and one is about the flooding of the city of New Orleans.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "12:00": "Now these topics are obtained from blog articles about Hurricane Katrina." + "time": "12:00", + "text": "Now these topics are obtained from blog articles about Hurricane Katrina.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "12:07": "And people talk about these topics, and end up switching to some other topics. 
But the visualization shows that with this technique, we can have a conditional distribution of time given a topic. So this allows us to plot this conditional probability as curves like what you're seeing here. We see that, initially, the two curves tracked each other very well. But later we see the topic of New Orleans was mentioned again but oil price was not. And this turns out to be the time period when another hurricane, Hurricane Rita, hit the region. And that apparently triggered more discussion about the flooding of the city." + "time": "12:07", + "text": "And people talk about these topics, and end up switching to some other topics. But the visualization shows that with this technique, we can have a conditional distribution of time given a topic. So this allows us to plot this conditional probability as curves like what you're seeing here. We see that, initially, the two curves tracked each other very well. But later we see the topic of New Orleans was mentioned again but oil price was not. And this turns out to be the time period when another hurricane, Hurricane Rita, hit the region. And that apparently triggered more discussion about the flooding of the city.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "12:54": "The bottom curve shows the coverage of this topic about flooding of the city by blog articles in different locations. And it also shows some shift of coverage that might be related to people migrating from the state of Louisiana to Texas for example." + "time": "12:54", + "text": "The bottom curve shows the coverage of this topic about flooding of the city by blog articles in different locations. 
And it also shows some shift of coverage that might be related to people migrating from the state of Louisiana to Texas for example.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "13:20": "So in this case we can see the time can be used as context to reveal trends of topics." + "time": "13:20", + "text": "So in this case we can see the time can be used as context to reveal trends of topics.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "13:27": "These are some additional results on spatial patterns. In this case it was about the topic of government response. And there was some criticism about the slow response of government in the case of Hurricane Katrina." + "time": "13:27", + "text": "These are some additional results on spatial patterns. In this case it was about the topic of government response. And there was some criticism about the slow response of government in the case of Hurricane Katrina.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "13:44": "And the discussion now is covered in different locations. And these visualizations show the coverage in different weeks of the event. And initially it's covered mostly in the victim states, in the South, but then gradually spread into other locations. But in week four, which is shown on the bottom left, we see a pattern that's very similar to the first week on the top left. And that's when Hurricane Rita again hit the region. So such a technique would allow us to use location as context to examine the variations of topics. And of course the model is completely general, so you can apply this to any other collections of text to reveal spatial-temporal patterns." 
+ "time": "13:44", + "text": "And the discussion now is covered in different locations. And these visualizations show the coverage in different weeks of the event. And initially it's covered mostly in the victim states, in the South, but then gradually spread into other locations. But in week four, which is shown on the bottom left, we see a pattern that's very similar to the first week on the top left. And that's when again Hurricane Rita hit in the region. So such a technique would allow us to use location as context to examine their issues of topics. And of course the moral is completely general so you can apply this to any other connections of text. To review spatial temporal patterns.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "14:34": "His view found another application of this kind of model, where we look at the use of the model for event impact analysis." + "time": "14:34", + "text": "His view found another application of this kind of model, where we look at the use of the model for event impact analysis.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "14:43": "So here we're looking at the research articles information retrieval. IR, particularly SIGIR papers. And the topic we are focusing on is about the retrieval models. And you can see the top words with high probability about this model on the left." + "time": "14:43", + "text": "So here we're looking at the research articles information retrieval. IR, particularly SIGIR papers. And the topic we are focusing on is about the retrieval models. 
And you can see the top words with high probability about this model on the left.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "14:59": "And then we hope to examine the impact of two events. One is the start of TREC, the Text REtrieval Conference. This is a major evaluation sponsored by the U.S. government, and was launched in 1992 or around that time. And that is known to have made an impact on the topics of research in information retrieval." + "time": "14:59", + "text": "And then we hope to examine the impact of two events. One is the start of TREC, the Text REtrieval Conference. This is a major evaluation sponsored by the U.S. government, and was launched in 1992 or around that time. And that is known to have made an impact on the topics of research in information retrieval.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "15:23": "The other is the publication of a seminal paper, by Croft and Ponte. This is about a language model approach to information retrieval. It's also known to have made a high impact on information retrieval research. So we hope to use this kind of model to understand impact. The idea here is simply to use the time as context. And use these events to divide the time periods into a period before the event and another after the event. And then we can compare the differences of the topics and the variations, etc. So in this case, the results show that before TREC the study of retrieval models was mostly about the vector space model, Boolean model etc. But after TREC, apparently the study of retrieval models involved a lot of other words. That seems to suggest some different retrieval tasks, so for example, email was used in the enterprise search tasks and subtopical retrieval was another task later introduced by TREC."
+ "time": "15:23", + "text": "The other is the publication of a seminal paper, by Croft and Porte. This is about a language model approach to information retrieval. It's also known to have made a high impact on information retrieval research. So we hope to use this kind of model to understand impact. The idea here is simply to use the time as context. And use these events to divide the time periods into a period before. For the event and another after this event. And then we can compare the differences of the topics. The and the variations, etc. So in this case, the results show before track the study of retrieval models was mostly a vector space model, Boolean model etc. But the after Trec, apparently the study of retrieval models have involved a lot of other words. That seems to suggest some different retrieval tasks, so for example, email was used in the enterprise search tasks and subtopical retrieval was another task later introduced by Trec.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "16:28": "On the bottom, we see the variations that are correlated with the propagation of the language model paper. Before, we have those classic probability risk model, logic model, Boolean etc., but after 1998, we see clear dominance of language model as probabilistic models. And we see words like language model, estimation of parameters, etc. So this technique here can use events as context to understand the impact of event. Again the technique is generals so you can use this to analyze the impact of any event. Here are some suggested readings." + "time": "16:28", + "text": "On the bottom, we see the variations that are correlated with the propagation of the language model paper. Before, we have those classic probability risk model, logic model, Boolean etc., but after 1998, we see clear dominance of language model as probabilistic models. 
And we see words like language model, estimation of parameters, etc. So this technique here can use events as context to understand the impact of an event. Again the technique is general so you can use this to analyze the impact of any event. Here are some suggested readings." + "time": "16:28", + "text": "On the bottom, we see the variations that are correlated with the publication of the language model paper. Before, we have those classic probability risk model, logic model, Boolean etc., but after 1998, we see clear dominance of language models as probabilistic models. And we see words like language model, estimation of parameters, etc. So this technique here can use events as context to understand the impact of an event. Again the technique is general so you can use this to analyze the impact of any event. Here are some suggested readings.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "17:11": "The first is a paper about a simple extension of PLSA to enable cross-collection comparison." + "time": "17:11", + "text": "The first is a paper about a simple extension of PLSA to enable cross-collection comparison.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "17:21": "It's to perform comparative text mining to allow us to extract common topics shared by multiple collections. And there are variations in each collection." + "time": "17:21", + "text": "It's to perform comparative text mining to allow us to extract common topics shared by multiple collections. And there are variations in each collection.", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" }, { - "17:31": "The second one is the main paper about the CPLSA model. With a discussion of a lot of applications. The third one has a lot of details about the spatial temporal patterns for the Hurricane Katrina example. [MUSIC]" + "time": "17:31", + "text": "The second one is the main paper about the CPLSA model. With a discussion of a lot of applications. The third one has a lot of details about the spatial temporal patterns for the Hurricane Katrina example.
[MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" } ] }, { "6-6-contextual-text-mining-mining-topics-with-social-network-context": [ { - "0:00": "[SOUND] This lecture is about how to mine text data with social network as context. In this lecture we're going to continue discussing contextual text mining. In particular, we're going to look at the social network of others as context." + "time": "0:00", + "text": "[SOUND] This lecture is about how to mine text data with social network as context. In this lecture we're going to continue discussing contextual text mining. In particular, we're going to look at the social network of others as context.", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "0:26": "So first, what's our motivation for using network context for analysis of text?" + "time": "0:26", + "text": "So first, what's our motivation for using network context for analysis of text?", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "0:32": "The context of a text article can form a network." + "time": "0:32", + "text": "The context of a text article can form a network.", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "0:37": "For example the authors of research articles might form collaboration networks." + "time": "0:37", + "text": "For example the authors of research articles might form collaboration networks.", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "0:44": "But authors of social media content might form social networks. 
For example, in Twitter people might follow each other. Or in Facebook people might claim friends with others, etc. So such context connects the content of the authors. Similarly, locations associated with text can also be connected to form geographical network. But in general you can imagine the metadata of the text data can form some kind of network if they have some relations." + "time": "0:44", + "text": "But authors of social media content might form social networks. For example, in Twitter people might follow each other. Or in Facebook people might claim friends with others, etc. So such context connects the content of the authors. Similarly, locations associated with text can also be connected to form geographical network. But in general you can imagine the metadata of the text data can form some kind of network if they have some relations.", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "1:24": "Now there is some benefit in jointly analyzing text and its social network context or network context in general. And that's because we can use network to impose some constraints on topics of text." + "time": "1:24", + "text": "Now there is some benefit in jointly analyzing text and its social network context or network context in general. And that's because we can use network to impose some constraints on topics of text.", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "1:41": "So for example it's reasonable to assume that authors connected in collaboration networks tend to write about similar topics."
+ "time": "1:41", + "text": "So for example it's reasonable to assume that authors connected in collaboration networks tend to write about the similar topics.", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "1:53": "So such heuristics can be used to guide us in analyzing topics. Text also can help characterize the content associated with each subnetwork. And this is to say that both" + "time": "1:53", + "text": "So such heuristics can be used to guide us in analyzing topics. Text also can help characterize the content associated with each subnetwork. And this is to say that both", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "2:11": "kinds of data, the network and text, can help each other." + "time": "2:11", + "text": "kinds of data, the network and text, can help each other.", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "2:16": "So for example the difference in opinions expressed that are in two subnetworks can be reviewed by doing this type of joint analysis." + "time": "2:16", + "text": "So for example the difference in opinions expressed that are in two subnetworks can be reviewed by doing this type of joint analysis.", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "2:30": "So here briefly you could use a model called a network supervised topic model." + "time": "2:30", + "text": "So here briefly you could use a model called a network supervised topic model.", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "2:40": "In this slide we're going to give some general ideas. 
And then in the next slide we're going to give some more details." + "time": "2:40", + "text": "In this slide we're going to give some general ideas. And then in the next slide we're going to give some more details.", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "2:48": "But in general in this part of the course we don't have enough time to cover these frontier topics in detail. But we provide references that would allow you to read more about the topic to know the details." + "time": "2:48", + "text": "But in general in this part of the course we don't have enough time to cover these frontier topics in detail. But we provide references that would allow you to read more about the topic to know the details.", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "3:05": "But it should still be useful to know the general ideas. And to know what they can do to know when you might be able to use them. So the general idea of network supervised topic model is the following. Let's start with viewing the regular topic models, like PLSA and LDA, as solving an optimization problem. Of course, in this case, the optimization objective function is a likelihood function. So we often use maximum likelihood estimator to obtain the parameters. And these parameters will give us useful information that we want to obtain from text data. For example, topics. So we want to maximize the probability of text data given the parameters, generally denoted by lambda. The main idea of incorporating network is to think about the constraints that can be imposed based on the network. In general, the idea is to use the network to impose some constraints on the model parameters, lambda here. For example, the text at adjacent nodes of the network can be expected to cover similar topics.
Indeed, in many cases, they tend to cover similar topics." + "time": "3:05", + "text": "But it should still be useful to know the general ideas. And to know what they can do to know when you might be able to use them. So the general idea of network supervised topic model is the following. Let's start with viewing the regular topic models, like PLSA and LDA, as solving an optimization problem. Of course, in this case, the optimization objective function is a likelihood function. So we often use maximum likelihood estimator to obtain the parameters. And these parameters will give us useful information that we want to obtain from text data. For example, topics. So we want to maximize the probability of text data given the parameters, generally denoted by lambda. The main idea of incorporating network is to think about the constraints that can be imposed based on the network. In general, the idea is to use the network to impose some constraints on the model parameters, lambda here. For example, the text at adjacent nodes of the network can be expected to cover similar topics. Indeed, in many cases, they tend to cover similar topics.", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "4:34": "So we may be able to smooth the topic distributions" + "time": "4:34", + "text": "So we may be able to smooth the topic distributions", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "4:39": "on the graph on the network so that adjacent nodes will have very similar topic distributions. So they will share a common distribution on the topics. Or have just slight variations of the topic distributions, of the coverage." + "time": "4:39", + "text": "on the graph on the network so that adjacent nodes will have very similar topic distributions.
So they will share a common distribution on the topics. Or have just slight variations of the topic distributions, of the coverage.", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "5:02": "So, technically, what we can do is simply to add a network-induced regularizer to the likelihood objective function as shown here. So instead of just optimizing the probability of text data given parameters lambda, we're going to optimize another function F." + "time": "5:02", + "text": "So, technically, what we can do is simply to add a network-induced regularizer to the likelihood objective function as shown here. So instead of just optimizing the probability of text data given parameters lambda, we're going to optimize another function F.", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "5:19": "This function combines the likelihood with a regularizer function called R here. And the regularizer is defined on the parameters lambda and the network. It tells us basically what kind of parameters are preferred from a network constraint perspective. So you can easily see this is in effect implementing the idea of imposing some prior on the model parameters. Only that we're not necessarily having a probabilistic model, but the idea is the same.
We're going to combine the two in one single objective function." + "time": "5:19", + "text": "This function combines the likelihood with a regularizer function called R here. And the regularizer is defined on the parameters lambda and the network. It tells us basically what kind of parameters are preferred from a network constraint perspective. So you can easily see this is in effect implementing the idea of imposing some prior on the model parameters. Only that we're not necessarily having a probabilistic model, but the idea is the same. We're going to combine the two in one single objective function.", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "5:57": "So, the advantage of this idea is that it's quite general. Here the topic model can be any generative model for text." + "time": "5:57", + "text": "So, the advantage of this idea is that it's quite general. Here the topic model can be any generative model for text.", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "6:07": "It doesn't have to be PLSA or LDA, or the current topic models." + "time": "6:07", + "text": "It doesn't have to be PLSA or LDA, or the current topic models.", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "6:12": "And similarly, the network can also be any network. Any graph that connects these text objects." + "time": "6:12", + "text": "And similarly, the network can also be any network. Any graph that connects these text objects.", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "6:22": "This regularizer can also be any regularizer. We can be flexible in capturing different heuristics that we want to capture." + "time": "6:22", + "text": "This regularizer can also be any regularizer. We can be flexible in capturing different heuristics that we want to capture.", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "6:32": "And finally, the function F can also vary, so there can be many different ways to combine them. So, this general idea is actually quite, quite powerful.
It offers a general approach to combining these different types of data in a single optimization framework. And this general idea can really be applied for any problem." + "time": "6:32", + "text": "And finally, the function F can also vary, so there can be many different ways to combine them. So, this general idea is actually quite, quite powerful. It offers a general approach to combining these different types of data in a single optimization framework. And this general idea can really be applied for any problem.", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "6:56": "But here in this paper referenced here, a particular instantiation called NetPLSA was studied. In this case, it's just an instantiation of PLSA to incorporate this simple constraint imposed by the network. And the prior here is the neighbors on the network must have similar topic distribution. They must cover similar topics in similar ways. And that's basically what it says in English." + "time": "6:56", + "text": "But here in this paper referenced here, a particular instantiation called NetPLSA was studied. In this case, it's just an instantiation of PLSA to incorporate this simple constraint imposed by the network. And the prior here is the neighbors on the network must have similar topic distribution. They must cover similar topics in similar ways. And that's basically what it says in English.", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "7:24": "So technically we just have a modified objective function here. It's defined on both the text collection and the network graph G here." + "time": "7:24", + "text": "So technically we just have a modified objective function here.
It's defined on both the text collection and the network graph G here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "7:34": "And if you look at this formula, you can actually recognize some part fairly familiarly." + "time": "7:34", + "text": "And if you look at this formula, you can actually recognize some part fairly familiarly.", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "7:40": "Because they are, they should be fairly familiar to you by now. So can you recognize which part is the likelihood for the text given the topic model?" + "time": "7:40", + "text": "Because they are, they should be fairly familiar to you by now. So can you recognize which part is the likelihood for the text given the topic model?", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "7:52": "Well if you look at it, you will see this part is precisely the PLSA log-likelihood that we want to maximize when we estimate parameters for PLSA alone. But the second equation shows some additional constraints on the parameters. And in particular, we'll see here it's to measure the difference between the topic coverage at node u and node v. The two adjacent nodes on the network. We want their distributions to be similar. So here we are computing the square of their differences and we want to minimize this difference. And note that there's a negative sign in front of this sum, this whole sum here. So this makes it possible to find the parameters that both maximize the PLSA log-likelihood. That means the parameters will fit the data well and also respect this constraint from the network."
+ "time": "7:52", + "text": "Well if you look at it, you will see this part is precisely the PLSA log-likelihood that we want to maximize when we estimate parameters for PLSA alone. But the second equation shows some additional constraints on the parameters. And in particular, we'll see here it's to measure the difference between the topic coverage at node u and node v. The two adjacent nodes on the network. We want their distributions to be similar. So here we are computing the square of their differences and we want to minimize this difference. And note that there's a negative sign in front of this sum, this whole sum here. So this makes it possible to find the parameters that are both to maximize the PLSA log-likelihood. That means the parameters will fit the data well and, also to respect that this constraint from the network.", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "9:06": "And this is the negative sign that I just mentioned. Because this is an negative sign, when we maximize this object in function we'll actually minimize this statement term here." + "time": "9:06", + "text": "And this is the negative sign that I just mentioned. Because this is an negative sign, when we maximize this object in function we'll actually minimize this statement term here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "9:19": "So if we look further in this picture we'll see the results will weight of edge between u and v here. And that space from out network. If you have a weight that says well, these two nodes are strong collaborators of researchers. These two are strong connections between two people in a social network. And they would have weight. Then that means it would be more important that they're topic coverages are similar. And that's basically what it says here." 
+ "time": "9:19", + "text": "So if we look further in this picture we'll see the results will weight of edge between u and v here. And that space from out network. If you have a weight that says well, these two nodes are strong collaborators of researchers. These two are strong connections between two people in a social network. And they would have weight. Then that means it would be more important that they're topic coverages are similar. And that's basically what it says here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "9:55": "And finally you see a parameter lambda here. This is a new parameter to control the influence of network constraint. We can see easily, if lambda is set to 0, we just go back to the standard PLSA. But when lambda is set to a larger value, then we will let the network influence the estimated models more. So as you can see, the effect here is that we're going to do basically PLSA. But we're going to also try to make the topic coverages on the two nodes that are strongly connected to be similar. And we ensure their coverages are similar." + "time": "9:55", + "text": "And finally you see a parameter lambda here. This is a new parameter to control the influence of network constraint. We can see easily, if lambda is set to 0, we just go back to the standard PLSA. But when lambda is set to a larger value, then we will let the network influence the estimated models more. So as you can see, the effect here is that we're going to do basically PLSA. But we're going to also try to make the topic coverages on the two nodes that are strongly connected to be similar. And we ensure their coverages are similar.", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "10:33": "So here are some of the several results, from that paper. 
This slide shows the results of using PLSA. And the data here is DBLP data, bibliographic data, about research articles. And the experiments have to do with using four communities of publications. IR for information retrieval. DM stands for data mining. ML for machine learning, and Web. There are four communities of articles, and we were hoping" + "time": "10:33", + "text": "So here are some of the results from that paper. This slide shows the results of using PLSA. And the data here is DBLP data, bibliographic data, about research articles. And the experiments have to do with using four communities of publications. IR for information retrieval. DM stands for data mining. ML for machine learning, and Web. There are four communities of articles, and we were hoping", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "11:06": "to see that the topic mining can help us uncover these four communities. But from these assembled topics that you have seen here that are generated by PLSA, PLSA is unable to generate the four communities that correspond to our intuition. The reason was because they are all mixed together and there are many words that are shared by these communities. So it's not that easy to use four topics to separate them. If we use more topics, perhaps we will have more coherent topics." + "time": "11:06", + "text": "to see that the topic mining can help us uncover these four communities. But from these assembled topics that you have seen here that are generated by PLSA, PLSA is unable to generate the four communities that correspond to our intuition. The reason was because they are all mixed together and there are many words that are shared by these communities. So it's not that easy to use four topics to separate them.
If we use more topics, perhaps we will have more coherent topics.", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "11:42": "But what's interesting is that if we use the NetPLSA where the network, the collaboration network in this case of authors is used to impose constraints. And in this case we also use four topics. But Ned Pierre said we gave much more meaningful topics. So here we'll see that these topics correspond well to the four communities. The first is information retrieval. The second is data mining. Third is machine learning. And the fourth is web. So that separation was mostly because of the influence of network where with leverage is a collaboration network information. Essentially the people that form a collaborating network would then be kind of assumed to write about similar topics. And that's why we're going to have more coherent topics. And if you just listen to text data alone based on the occurrences, you won't get such coherent topics. Even though a topic model, like PLSA or LDA also should be able to pick up co-occurring words. So in general the topics that they generate represent words that co-occur each other. But still they cannot generate such a coherent results as NetPLSA, showing that the network contest is very useful here." + "time": "11:42", + "text": "But what's interesting is that if we use the NetPLSA where the network, the collaboration network in this case of authors is used to impose constraints. And in this case we also use four topics. But Ned Pierre said we gave much more meaningful topics. So here we'll see that these topics correspond well to the four communities. The first is information retrieval. The second is data mining. Third is machine learning. And the fourth is web. So that separation was mostly because of the influence of network where with leverage is a collaboration network information. 
Essentially the people that form a collaboration network would then be kind of assumed to write about similar topics. And that's why we're going to have more coherent topics. And if you just use the text data alone based on the occurrences, you won't get such coherent topics. Even though a topic model, like PLSA or LDA, also should be able to pick up co-occurring words. So in general the topics that they generate represent words that co-occur with each other. But still they cannot generate such coherent results as NetPLSA, showing that the network context is very useful here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "13:08": "Now a similar model could also be useful to characterize the content associated with each subnetwork of collaborations." + "time": "13:08", + "text": "Now a similar model could also be useful to characterize the content associated with each subnetwork of collaborations.", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" }, { - "13:19": "So a more general view of text mining in the context of a network is to treat text as living in a rich information network environment. And that means we can connect all the related data together as a big network. And text data can be associated with a lot of structures in the network. For example, text data can be associated with the nodes of the network, and that's basically what we just discussed in NetPLSA. But text data can be associated with edges as well, or paths, or even subnetworks. And such a way to represent text in the big environment of all the context information is very powerful. Because it allows us to analyze all the data, all the information together. And so in general, analysis of text should be using the entire network information that's related to the text data. 
So here's one suggested reading. And this is the paper about NetPLSA where you can find more details about the model and how to use such a model. [MUSIC]" + "time": "13:19", + "text": "So a more general view of text mining in the context of a network is to treat text as living in a rich information network environment. And that means we can connect all the related data together as a big network. And text data can be associated with a lot of structures in the network. For example, text data can be associated with the nodes of the network, and that's basically what we just discussed in NetPLSA. But text data can be associated with edges as well, or paths, or even subnetworks. And such a way to represent text in the big environment of all the context information is very powerful. Because it allows us to analyze all the data, all the information together. And so in general, analysis of text should be using the entire network information that's related to the text data. So here's one suggested reading. And this is the paper about NetPLSA where you can find more details about the model and how to use such a model. [MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" } ] }, { "6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision": [ { - "0:00": "[SOUND]" + "time": "0:00", + "text": "[SOUND]", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "0:07": "This lecture is about using a time series as context to potentially discover causal topics in text. In this lecture, we're going to continue discussing Contextual Text Mining. In particular, we're going to look at the time series as a context for analyzing text, to potentially discover causal topics. As usual, let's start with the motivation. 
In this case, we hope to use text mining to understand a time series. Here, what you are seeing is the Dow Jones Industrial Average stock price curve. And you'll see a sudden drop here. Right. So one would be interested in knowing what might have caused the stock market to crash." + "time": "0:07", + "text": "This lecture is about using a time series as context to potentially discover causal topics in text. In this lecture, we're going to continue discussing Contextual Text Mining. In particular, we're going to look at the time series as a context for analyzing text, to potentially discover causal topics. As usual, let's start with the motivation. In this case, we hope to use text mining to understand a time series. Here, what you are seeing is the Dow Jones Industrial Average stock price curve. And you'll see a sudden drop here. Right. So one would be interested in knowing what might have caused the stock market to crash.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "0:48": "Well, if you know the background, you might be able to figure it out if you look at the time stamp, or there are other data that can help us think about it. But the question here is can we get some clues about this from the companion news stream? And we have a lot of news data that was generated during that period." + "time": "0:48", + "text": "Well, if you know the background, you might be able to figure it out if you look at the time stamp, or there are other data that can help us think about it. But the question here is can we get some clues about this from the companion news stream? And we have a lot of news data that was generated during that period.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "1:08": "So if you do that, we might actually discover the crash. 
It happened at the time of the September 11 attack. And that's the time when there was a sudden rise of the topic about September 11 in news articles." + "time": "1:08", + "text": "So if you do that, we might actually discover that the crash happened at the time of the September 11 attack. And that's the time when there was a sudden rise of the topic about September 11 in news articles.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "1:26": "Here's another scenario where we want to analyze the Presidential Election. And this is a time series from a Presidential Prediction Market. For example, such a market would have stocks for each candidate. And if you believe one candidate will win, then you tend to buy the stock for that candidate, causing the price of that candidate to increase. So, that's a nice way to actually do a survey of people's opinions about these candidates." + "time": "1:26", + "text": "Here's another scenario where we want to analyze the Presidential Election. And this is a time series from a Presidential Prediction Market. For example, such a market would have stocks for each candidate. And if you believe one candidate will win, then you tend to buy the stock for that candidate, causing the price of that candidate to increase. So, that's a nice way to actually do a survey of people's opinions about these candidates.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "2:00": "Now, suppose you see a sudden drop of price for one candidate. And you might also want to know what might have caused the sudden drop." + "time": "2:00", + "text": "Now, suppose you see a sudden drop of price for one candidate. 
And you might also want to know what might have caused the sudden drop.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "2:10": "Or in a social science study, you might be interested in knowing what mattered in this election, what issues really matter to people. Now again in this case, we can look at the companion news stream and ask the question. Are there any clues in the news stream that might provide insight about this? So for example, we might discover the mention of tax cut" + "time": "2:10", + "text": "Or in a social science study, you might be interested in knowing what mattered in this election, what issues really matter to people. Now again in this case, we can look at the companion news stream and ask the question. Are there any clues in the news stream that might provide insight about this? So for example, we might discover the mention of tax cut", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "2:35": "has been increasing since that point. So maybe, that's related to the drop of the price. So all these cases are special cases of a general problem of joint analysis of text and time series data to discover causal topics. 
The input in this case is a time series plus text data that are produced in the same time period, the companion text stream.", + "time": "2:35", + "text": "has been increasing since that point. So maybe, that's related to the drop of the price. So all these cases are special cases of a general problem of joint analysis of text and time series data to discover causal topics. The input in this case is a time series plus text data that are produced in the same time period, the companion text stream.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "3:02": "And this is different from the standard topic models, where we have just a text collection. That's why we say the time series here serves as context." + "time": "3:02", + "text": "And this is different from the standard topic models, where we have just a text collection. That's why we say the time series here serves as context.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "3:13": "Now, the output that we want to generate is the topics whose coverage in the text stream has strong correlations with the time series." + "time": "3:13", + "text": "Now, the output that we want to generate is the topics whose coverage in the text stream has strong correlations with the time series.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "3:22": "For example, whenever the topic is mentioned, the price tends to go down, etc." + "time": "3:22", + "text": "For example, whenever the topic is mentioned, the price tends to go down, etc.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "3:28": "Now we call these topics Causal Topics. Of course, they're not, strictly speaking, causal topics. We are never going to be able to verify whether they are causal, or whether there's a true causal relationship here. That's why we put causal in quotation marks. 
But at least they are correlated topics that might potentially explain the cause, and humans can certainly further analyze such topics to understand the issue better." + "time": "3:28", + "text": "Now we call these topics Causal Topics. Of course, they're not, strictly speaking, causal topics. We are never going to be able to verify whether they are causal, or whether there's a true causal relationship here. That's why we put causal in quotation marks. But at least they are correlated topics that might potentially explain the cause, and humans can certainly further analyze such topics to understand the issue better.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "3:59": "And the output would contain topics just like in topic modeling. But we hope that these topics are not just the regular topics. These topics don't have to explain the text data the best, but rather they have to explain the data in the text, meaning that they have to represent the meaningful topics in the text. But also, more importantly, they should be correlated with the external time series that's given as a context. So to understand how we solve this problem, let's first just try to solve the problem with a regular topic model, for example PLSA. And we can apply this to the text stream, with some extension like CPLSA, or Contextual PLSA. Then we can discover these topics and also discover their coverage over time." + "time": "3:59", + "text": "And the output would contain topics just like in topic modeling. But we hope that these topics are not just the regular topics. These topics don't have to explain the text data the best, but rather they have to explain the data in the text, meaning that they have to represent the meaningful topics in the text. 
But also, more importantly, they should be correlated with the external time series that's given as a context. So to understand how we solve this problem, let's first just try to solve the problem with a regular topic model, for example PLSA. And we can apply this to the text stream, with some extension like CPLSA, or Contextual PLSA. Then we can discover these topics and also discover their coverage over time.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "4:53": "So, one simple solution is to choose the topics from this set that have the strongest correlation with the external time series." + "time": "4:53", + "text": "So, one simple solution is to choose the topics from this set that have the strongest correlation with the external time series.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "5:05": "But this approach is not going to be very good. Why? Because we are restricted to the topics that will be discovered by PLSA or LDA. And that means the choice of topics will be very limited. And we know these models try to maximize the likelihood of the text data. So those topics tend to be the major topics that explain the text data well. And they are not necessarily correlated with the time series. Even if we get the best one, the most correlated topics might still not be so" + "time": "5:05", + "text": "But this approach is not going to be very good. Why? Because we are restricted to the topics that will be discovered by PLSA or LDA. And that means the choice of topics will be very limited. And we know these models try to maximize the likelihood of the text data. So those topics tend to be the major topics that explain the text data well. And they are not necessarily correlated with the time series. Even if we get the best one, the most correlated topics might still not be so
Even if we get the best one, the most correlated topics might still not be so", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "5:34": "interesting from causal perspective." + "time": "5:34", + "text": "interesting from causal perspective.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "5:37": "So here in this work site here, a better approach was proposed. And this approach is called Iterative Causal Topic Modeling. The idea is to do an iterative adjustment of topic, discovered by topic models using time series to induce a product." + "time": "5:37", + "text": "So here in this work site here, a better approach was proposed. And this approach is called Iterative Causal Topic Modeling. The idea is to do an iterative adjustment of topic, discovered by topic models using time series to induce a product.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "5:57": "So here's an illustration on how this work, how this works. Take the text stream as input and then apply regular topic modeling to generate a number of topics. Let's say four topics. Shown here." + "time": "5:57", + "text": "So here's an illustration on how this work, how this works. Take the text stream as input and then apply regular topic modeling to generate a number of topics. Let's say four topics. Shown here.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "6:09": "And then we're going to use external time series to assess which topic is more causally related or correlated with the external time series. So we have something that rank them. 
And we might think that topic one and topic four are more correlated and topic two and topic three are not. Now we could have stopped here, and that would be just like the simple approach that I talked about earlier; then we can take these topics and call them causal topics. But as I also explained, these topics are unlikely to be very good, because they are general topics that explain the whole text collection. They are not necessarily the best topics correlated with our time series." + "time": "6:09", + "text": "And then we're going to use the external time series to assess which topic is more causally related or correlated with the external time series. So we have some way to rank them. And we might think that topic one and topic four are more correlated and topic two and topic three are not. Now we could have stopped here, and that would be just like the simple approach that I talked about earlier; then we can take these topics and call them causal topics. But as I also explained, these topics are unlikely to be very good, because they are general topics that explain the whole text collection. They are not necessarily the best topics correlated with our time series.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "6:51": "So what we can do in this approach is to first zoom into the word level, and we can look at each word in the top-ranked word list for each topic. Let's say we take Topic 1 as the target to examine. We know Topic 1 is correlated with the time series. 
Or it's at least the best that we could get from this set of topics so far.", + "time": "6:51", + "text": "So what we can do in this approach is to first zoom into the word level, and we can look at each word in the top-ranked word list for each topic. Let's say we take Topic 1 as the target to examine. We know Topic 1 is correlated with the time series. Or it's at least the best that we could get from this set of topics so far.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "7:18": "And we're going to look at the words in this topic, the top words." + "time": "7:18", + "text": "And we're going to look at the words in this topic, the top words.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "7:23": "And if the topic is correlated with the Time Series, there must be some words that are highly correlated with the Time Series. So here, for example, we might discover W1 and W3 are positively correlated with the Time Series, but W2 and W4 are negatively correlated." + "time": "7:23", + "text": "And if the topic is correlated with the Time Series, there must be some words that are highly correlated with the Time Series. So here, for example, we might discover W1 and W3 are positively correlated with the Time Series, but W2 and W4 are negatively correlated.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "7:41": "So, as a topic, it's not good to mix these words with different correlations. So we can then further separate these words. We are going to get all the red words that indicate positive correlations, W1 and W3. And we're going to also get another subtopic." + "time": "7:41", + "text": "So, as a topic, it's not good to mix these words with different correlations. So we can then further separate these words. We are going to get all the red words that indicate positive correlations, W1 and W3. 
And we're going to also get another subtopic.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "8:00": "If you want, that represents the negatively correlated words, W2 and W4." + "time": "8:00", + "text": "If you want, that represents the negatively correlated words, W2 and W4.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "8:07": "Now, these subtopics, or these variations of topics based on the correlation analysis, are topics that are still quite related to the original topic, Topic 1. But they are already deviating, because of the use of time series information for a biased selection of words. So they are then, as we should expect, in some sense more correlated with the time series than the original Topic 1, because Topic 1 has mixed words, and here we separate them." + "time": "8:07", + "text": "Now, these subtopics, or these variations of topics based on the correlation analysis, are topics that are still quite related to the original topic, Topic 1. But they are already deviating, because of the use of time series information for a biased selection of words. So they are then, as we should expect, in some sense more correlated with the time series than the original Topic 1, because Topic 1 has mixed words, and here we separate them.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "8:42": "So each of these two subtopics" + "time": "8:42", + "text": "So each of these two subtopics", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "8:46": "can be expected to be better correlated with this time series. 
However, they may not be so coherent semantically. So the idea here is to go back to the topic model by using each of these as a prior to further guide the topic modeling. And that's to say we ask our topic model to now discover topics that are very similar to each of these two subtopics. And this will cause a bias toward topics that are more correlated with the time series. Of course, then we can apply topic models to get another generation of topics. And those can be further ranked based on the time series to select the highly correlated topics. And then we can further analyze the component words in the topics and then try to analyze word-level correlations. And then get the even more correlated subtopics that can be further fed into the process as priors to drive the topic model discovery." + "time": "8:46", + "text": "can be expected to be better correlated with this time series. However, they may not be so coherent semantically. So the idea here is to go back to the topic model by using each of these as a prior to further guide the topic modeling. And that's to say we ask our topic model to now discover topics that are very similar to each of these two subtopics. And this will cause a bias toward topics that are more correlated with the time series. Of course, then we can apply topic models to get another generation of topics. And those can be further ranked based on the time series to select the highly correlated topics. And then we can further analyze the component words in the topics and then try to analyze word-level correlations. And then get the even more correlated subtopics that can be further fed into the process as priors to drive the topic model discovery.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "9:46": "So this whole process is just a heuristic way of optimizing causality and coherence, and that's our ultimate goal. Right? 
So here you see the pure topic models will be very good at maximizing topic coherence; the topics will all be meaningful.", + "time": "9:46", + "text": "So this whole process is just a heuristic way of optimizing causality and coherence, and that's our ultimate goal. Right? So here you see the pure topic models will be very good at maximizing topic coherence; the topics will all be meaningful.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "10:02": "If we only use a causality test, or a correlation measure, then we might get a set of words that are strongly correlated with the time series, but they may not necessarily mean anything. They might not be semantically connected. So, that would be at the other extreme, on the top." + "time": "10:02", + "text": "If we only use a causality test, or a correlation measure, then we might get a set of words that are strongly correlated with the time series, but they may not necessarily mean anything. They might not be semantically connected. So, that would be at the other extreme, on the top.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "10:21": "Now, the ideal is to get causal topics that score high, both in topic coherence and also in causal relation. This approach can be regarded as an alternating way to maximize both dimensions. So when we apply the topic models, we're maximizing the coherence. But when we decompose the topic model words into sets of words that are strongly correlated with the time series, we select the words most strongly correlated with the time series. We are pushing the model back to the causal dimension to make it better in causal scoring. And then, when we apply the selected words as a prior to guide the topic modeling, we again go back to optimize the coherence. 
Because topic models will ensure the next generation of topics to be coherent, and we can iterate to optimize in this way, as shown in this picture." + "time": "10:21", + "text": "Now, the ideal is to get causal topics that score high, both in topic coherence and also in causal relation. This approach can be regarded as an alternating way to maximize both dimensions. So when we apply the topic models, we're maximizing the coherence. But when we decompose the topic model words into sets of words that are strongly correlated with the time series, we select the words most strongly correlated with the time series. We are pushing the model back to the causal dimension to make it better in causal scoring. And then, when we apply the selected words as a prior to guide the topic modeling, we again go back to optimize the coherence. Because topic models will ensure the next generation of topics to be coherent, and we can iterate to optimize in this way, as shown in this picture.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "11:20": "So the only component, I think, that you haven't seen in such a framework is how to measure the causality, because the rest is just topic modeling. So let's have a little bit of discussion of that. So here we show an example. Let's say we have a topic about government response here. And then with topic modeling we can get the coverage of the topic over time. 
So, we have a time series, X sub t.", + "time": "11:20", + "text": "So the only component, I think, that you haven't seen in such a framework is how to measure the causality, because the rest is just topic modeling. So let's have a little bit of discussion of that. So here we show an example. Let's say we have a topic about government response here. And then with topic modeling we can get the coverage of the topic over time. So, we have a time series, X sub t.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "11:43": "Now, we are also given a time series that represents external information. It's a non-text time series, Y sub t. It's the stock prices. Now the question here is does Xt cause Yt?" + "time": "11:43", + "text": "Now, we are also given a time series that represents external information. It's a non-text time series, Y sub t. It's the stock prices. Now the question here is does Xt cause Yt?", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "11:58": "Well in other words, we want to measure the causality relation between the two. Or maybe just measure the correlation of the two." + "time": "11:58", + "text": "Well in other words, we want to measure the causality relation between the two. Or maybe just measure the correlation of the two.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "12:08": "There are many measures that we can use in this framework. For example, Pearson correlation is a commonly used measure. And we've got to consider a time lag here so that we can try to capture the causal relation. 
Using somewhat older data, that is, using the data in the past", + "time": "12:08", + "text": "There are many measures that we can use in this framework. For example, Pearson correlation is a commonly used measure. And we've got to consider a time lag here so that we can try to capture the causal relation. Using somewhat older data, that is, using the data in the past", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "12:26": "to try to correlate with the data points of Y that represent the future, for example. And by introducing such a lag, we can hopefully capture some causal relation even by using correlation measures like Pearson correlation." + "time": "12:26", + "text": "to try to correlate with the data points of Y that represent the future, for example. And by introducing such a lag, we can hopefully capture some causal relation even by using correlation measures like Pearson correlation.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "12:45": "But a commonly used measure for causality here is the Granger Causality Test." + "time": "12:45", + "text": "But a commonly used measure for causality here is the Granger Causality Test.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "12:52": "And the idea of this test is actually quite simple. Basically you're going to have an autoregressive model that uses the history information of Y to predict itself. 
And this is the best we could without any other information. So we're going to build such a model. And then we're going to add some history information of X into such model. To see if we can improve the prediction of Y. If we can do that with a statistically significant difference. Then we just say X has some causal inference on Y, or otherwise it wouldn't have causal improvement of prediction of Y." + "time": "12:52", + "text": "And the idea of this test is actually quite simple. Basically you're going to have an autoregressive model to use the history information of Y to predict itself. And this is the best we could do without any other information. So we're going to build such a model, and then we're going to add some history information of X into such a model, to see if we can improve the prediction of Y. If we can do that with a statistically significant difference, then we just say X has some causal influence on Y; otherwise it wouldn't give a causal improvement of the prediction of Y.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "13:32": "If, on the other hand, the difference is insignificant and that would mean X does not really have a cause or relation why. So that's the basic idea. Now, we don't have time to explain this in detail so you could read, but you would read at this cited reference here to know more about this measure. It's a very convenient used measure. Has many applications." + "time": "13:32", + "text": "If, on the other hand, the difference is insignificant, that would mean X does not really have a causal relation with Y. So that's the basic idea. Now, we don't have time to explain this in detail, but you could read this cited reference here to know more about this measure. It's a very convenient measure. It has many applications.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "13:55": "So next, let's look at some simple results generated by this approach. And here the data is the New York Times and in the time period of June 2000 through December of 2011. And here the time series we used is stock prices of two companies. American Airlines and Apple and the goal is to see if we inject the sum time series contest, whether we can actually get topics that are wise for the time series. Imagine if we don't use any input, we don't use any context. 
Then the topics from New York times discovered by PRSA would be just general topics that people talk about in news. All right. Those major topics in the news event." + "time": "13:55", + "text": "So next, let's look at some simple results generated by this approach. And here the data is the New York Times, in the time period of June 2000 through December of 2011. And here the time series we used is the stock prices of two companies, American Airlines and Apple, and the goal is to see, if we inject some time series context, whether we can actually get topics that are biased toward the time series. Imagine if we don't use any input, we don't use any context. Then the topics from the New York Times discovered by PLSA would be just general topics that people talk about in news, right, those major topics in the news events.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "14:41": "But here you see these topics are indeed biased toward each time series. And particularly if you look at the underlined words here in the American Airlines result, and you see airlines, airport, air, united trade, or terrorism, etc. So it clearly has topics that are more correlated with the external time series. On the right side, you see that some of the topics are clearly related to Apple, right. So you can see computer, technology, software, internet, com, web, etc. So that just means the time series has effectively served as a context to bias the discovery of topics. From another perspective, these results help us on what people have talked about in each case. So not just the people, what people have talked about, but what are some topics that might be correlated with their stock prices. And so these topics can serve as a starting point for people to further look into issues and you'll find the true causal relations. 
Here are some other results from analyzing Presidential Election time series. The time series data here is from Iowa Electronic market. And that's a prediction market. And the data is the same. New York Times from May 2000 to October 2000. That's for 2000 presidential campaign election. Now, what you see here are the top three words in significant topics from New York Times." + "time": "14:41", + "text": "But here you see these topics are indeed biased toward each time series. And particularly if you look at the underlined words here in the American Airlines result, you see airlines, airport, air, united, trade, or terrorism, etc. So it clearly has topics that are more correlated with the external time series. On the right side, you see that some of the topics are clearly related to Apple, right. So you can see computer, technology, software, internet, com, web, etc. So that just means the time series has effectively served as a context to bias the discovery of topics. From another perspective, these results help us learn what people have talked about in each case. So not just what people have talked about, but what are some topics that might be correlated with their stock prices. And so these topics can serve as a starting point for people to further look into the issues and find the true causal relations. Here are some other results, from analyzing presidential election time series. The time series data here is from the Iowa Electronic Markets, and that's a prediction market. And the text data is the same, New York Times, from May 2000 to October 2000. That's for the 2000 presidential election campaign. Now, what you see here are the top three words in significant topics from the New York Times.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "16:21": "And if you look at these topics, and they are indeed quite related to the campaign. 
Actually the issues are very much related to the important issues of this presidential election. Now here I should mention that the text data has been filtered by using only the articles that mention these candidate names." + "time": "16:21", + "text": "And if you look at these topics, they are indeed quite related to the campaign. Actually the issues are very much related to the important issues of this presidential election. Now here I should mention that the text data has been filtered by using only the articles that mention these candidate names.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "16:45": "It's a subset of these news articles. Very different from the previous experiment." + "time": "16:45", + "text": "It's a subset of these news articles, very different from the previous experiment.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "16:53": "But the results here clearly show that the approach can uncover some important issues in that presidential election. So tax cut, oil energy, abortion and gun control are all known to be important issues in that presidential election. And that was supported by some literature in political science." + "time": "16:53", + "text": "But the results here clearly show that the approach can uncover some important issues in that presidential election. So tax cut, oil energy, abortion, and gun control are all known to be important issues in that presidential election. And that was supported by some literature in political science.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "17:17": "And also I was discussing Wikipedia, right. 
So basically the results show that the approach can effectively discover possibly causal topics based on the time series data." + "time": "17:17", + "text": "And also I was discussing Wikipedia, right. So basically the results show that the approach can effectively discover possibly causal topics based on the time series data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "17:35": "So there are two suggested readings here. One is the paper about this iterative topic modeling with time series feedback. Where you can find more details about how this approach works. And the second one is reading about Granger Casuality text." + "time": "17:35", + "text": "So there are two suggested readings here. One is the paper about this iterative topic modeling with time series feedback, where you can find more details about how this approach works. And the second one is a reading about the Granger Causality test.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "17:55": "So in the end, let's summarize the discussion of Text-based Prediction. Now, Text-based prediction is generally very useful for big data applications that involve text. Because they can help us inform new knowledge about the world. And the knowledge can go beyond what's discussed in the text." + "time": "17:55", + "text": "So in the end, let's summarize the discussion of text-based prediction. Now, text-based prediction is generally very useful for big data applications that involve text, because it can help us infer new knowledge about the world. 
And the knowledge can go beyond what's discussed in the text.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "18:17": "As a result can also support optimizing of our decision making. And this has a wider spread application." + "time": "18:17", + "text": "As a result, it can also support optimizing our decision making. And this has widespread applications.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "18:28": "Text data is often combined with non-text data for prediction. because, for this purpose, the prediction purpose, we generally would like to combine non-text data and text data together, as much cruel as possible for prediction. And so as a result during the analysis of text and non-text is very necessary and it's also very useful. Now when we analyze text data together with non-text data, we can see they can help each other. So non-text data, provide a context for mining text data, and we discussed a number of techniques for contextual text mining. 
And on the other hand, a text data can also help interpret patterns discovered from non-text data, and this is called a pattern annotation." + "time": "18:28", + "text": "Text data is often combined with non-text data for prediction, because for this purpose, the prediction purpose, we generally would like to combine non-text data and text data together, as much as possible, for prediction. And so, as a result, the joint analysis of text and non-text data is very necessary and it's also very useful. Now when we analyze text data together with non-text data, we can see they can help each other. So non-text data provide a context for mining text data, and we discussed a number of techniques for contextual text mining. And on the other hand, text data can also help interpret patterns discovered from non-text data, and this is called pattern annotation.", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" }, { - "19:14": "In general, this is a very active research topic, and there are new papers being published. And there are also many open challenges that have to be solved. [MUSIC]" + "time": "19:14", + "text": "In general, this is a very active research topic, and there are new papers being published. And there are also many open challenges that have to be solved. [MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" } ] }, { "6-8-course-summary": [ { - "0:06": "This lecture is a summary of this whole course." + "time": "0:06", + "text": "This lecture is a summary of this whole course.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "0:10": "First, let's revisit the topics that we covered in this course. In the beginning, we talked about the natural language processing and how it can enrich text representation. We then talked about how to mine knowledge about the language, natural language used to express the, what's observing the world in text and data." + "time": "0:10", + "text": "First, let's revisit the topics that we covered in this course. In the beginning, we talked about natural language processing and how it can enrich text representation. We then talked about how to mine knowledge about the language, the natural language used to express what's observed about the world, in text data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "0:34": "In particular, we talked about how to mine word associations. We then talked about how to analyze topics in text. 
How to discover topics and analyze them." + "time": "0:34", + "text": "In particular, we talked about how to mine word associations. We then talked about how to analyze topics in text. How to discover topics and analyze them.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "0:47": "This can be regarded as knowledge about observed world, and then we talked about how to mine knowledge about the observer and particularly talk about the, how to mine opinions and do sentiment analysis. And finally, we will talk about the text-based prediction, which has to do with predicting values of other real world variables based on text data. And in discussing this, we will also discuss the role of non-text data, which can contribute additional predictors for the prediction problem, and also can provide context for analyzing text data, and in particular we talked about how to use context to analyze topics." + "time": "0:47", + "text": "This can be regarded as knowledge about the observed world, and then we talked about how to mine knowledge about the observer, and particularly talked about how to mine opinions and do sentiment analysis. And finally, we talked about text-based prediction, which has to do with predicting values of other real world variables based on text data. And in discussing this, we also discussed the role of non-text data, which can contribute additional predictors for the prediction problem, and also can provide context for analyzing text data, and in particular we talked about how to use context to analyze topics.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "1:33": "So here are the key high-level take away messages from this cost. 
I going to go over these major topics and point out what are the key take-away messages that you should remember." + "time": "1:33", + "text": "So here are the key high-level take-away messages from this course. I'm going to go over these major topics and point out what are the key take-away messages that you should remember.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "1:47": "First the NLP and text representation." + "time": "1:47", + "text": "First, NLP and text representation.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "1:53": "You should realize that NLP is always very important for any text replication because it enriches text representation. The more NLP the better text representation we can have. And this further enables more accurate knowledge discovery, to discover deeper knowledge, buried in text." + "time": "1:53", + "text": "You should realize that NLP is always very important for any text application because it enriches text representation. The more NLP, the better text representation we can have. And this further enables more accurate knowledge discovery, to discover deeper knowledge buried in text.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "2:12": "However, the current estate of art of natural energy processing is, still not robust enough. So, as an result, the robust text mining technologies today, tend to be based on world [INAUDIBLE]. And tend to rely a lot on statistical analysis, as we've discussed in this course. And you may recall we've mostly used word based representations. And we've relied a lot on statistical techniques, statistical learning techniques particularly." + "time": "2:12", + "text": "However, the current state of the art of natural language processing is still not robust enough. So, as a result, the robust text mining technologies today tend to be based on word [INAUDIBLE], and tend to rely a lot on statistical analysis, as we've discussed in this course. And you may recall we've mostly used word-based representations. 
And we've relied a lot on statistical techniques, statistical learning techniques particularly.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "2:47": "In word-association mining and analysis the important points first, we are introduced the two concepts for two basic and complementary relations of words, paradigmatic and syntagmatic relations. These are actually very general relations between elements sequences. If you take it as meaning elements that occur in similar context in the sequence and elements that tend to co-occur with each other. And these relations might be also meaningful for other sequences of data." + "time": "2:47", + "text": "In word-association mining and analysis, the important points: first, we introduced the concepts of two basic and complementary relations of words, paradigmatic and syntagmatic relations. These are actually very general relations between elements in sequences, if you take them as meaning elements that occur in similar contexts in the sequence and elements that tend to co-occur with each other. And these relations might be also meaningful for other sequences of data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "3:25": "We also talked a lot about test the similarity then we discuss how to discover paradynamic similarities compare the context of words discover words that share similar context. At that point level, we talked about representing text data with a vector space model. And we talked about some retrieval techniques such as BM25 for measuring similarity of text and for assigning weights to terms, tf-idf weighting, et cetera. And this part is well-connected to text retrieval. There are other techniques that can be relevant here also." 
+ "time": "3:25", + "text": "We also talked a lot about text similarity. Then we discussed how to discover paradigmatic relations: compare the contexts of words to discover words that share similar contexts. At that point, we talked about representing text data with a vector space model. And we talked about some retrieval techniques such as BM25 for measuring similarity of text and for assigning weights to terms, tf-idf weighting, et cetera. And this part is well-connected to text retrieval. There are other techniques that can be relevant here also.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "4:03": "The next point is about co-occurrence analysis of text, and we introduce some information theory concepts such as entropy, conditional entropy, and mutual information. These are not only very useful for measuring the co-occurrences of words, they are also very useful for analyzing other kind of data, and they are useful for, for example, for feature selection in text categorization as well." + "time": "4:03", + "text": "The next point is about co-occurrence analysis of text, and we introduced some information theory concepts such as entropy, conditional entropy, and mutual information. These are not only very useful for measuring the co-occurrences of words, they are also very useful for analyzing other kinds of data, and they are useful, for example, for feature selection in text categorization as well.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "4:30": "So this is another important concept, good to know." + "time": "4:30", + "text": "So this is another important concept, good to know.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "4:35": "And then we talked about the topic mining and analysis, and that's where we introduce in the probabilistic topic model. 
We spent a lot of time to explain the basic topic model, PLSA in detail and this is, those are the basics for understanding LDA which is. Theoretically, a more opinion model, but we did not have enough time to really go in depth in introducing LDA." + "time": "4:35", + "text": "And then we talked about topic mining and analysis, and that's where we introduced the probabilistic topic model. We spent a lot of time to explain the basic topic model, PLSA, in detail, and those are the basics for understanding LDA, which is theoretically a more appealing model, but we did not have enough time to really go in depth in introducing LDA.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "5:02": "But in practice, PLSA seems as effective as LDA and it's simpler to implement and it's also more efficient." + "time": "5:02", + "text": "But in practice, PLSA seems as effective as LDA and it's simpler to implement and it's also more efficient.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "5:11": "In this part of Wilson videos is some general concepts that would be useful to know, one is generative model, and this is a general method for modeling text data and modeling other kinds of data as well." + "time": "5:11", + "text": "In this part of the lecture videos there are some general concepts that would be useful to know. One is the generative model, and this is a general method for modeling text data and modeling other kinds of data as well.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "5:24": "And we talked about the maximum life erase data, the EM algorithm for solving the problem of computing maximum estimator. So, these are all general techniques that tend to be very useful in other scenarios as well." 
+ "time": "5:24", + "text": "And we talked about maximum likelihood estimation, and the EM algorithm for solving the problem of computing the maximum likelihood estimator. So, these are all general techniques that tend to be very useful in other scenarios as well.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "5:40": "Then we talked about the text clustering and the text categorization. Those are two important building blocks in any text mining application systems. In text with clustering we talked about how we can solve the problem by using a slightly different mixture module than the probabilistic topic model. and we then also prefer to view the similarity based approaches to test for cuss word." + "time": "5:40", + "text": "Then we talked about text clustering and text categorization. Those are two important building blocks in any text mining application systems. In text clustering we talked about how we can solve the problem by using a slightly different mixture model than the probabilistic topic model, and we then also briefly reviewed the similarity-based approaches to text clustering.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "6:11": "In categorization we also talk about the two kinds of approaches. One is generative classifies that rely on to base word to" + "time": "6:11", + "text": "In categorization we also talked about two kinds of approaches. One is generative classifiers that rely on the Bayes rule to", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "6:20": "infer the condition of or probability of a category given text data, in deeper we'll introduce you should use [INAUDIBLE] base in detail." 
+ "time": "6:20", + "text": "infer the conditional probability of a category given text data; in particular we introduced [INAUDIBLE] Bayes in detail.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "6:29": "This is the practical use for technique, for a lot of text, capitalization tasks." + "time": "6:29", + "text": "This is a practically useful technique for a lot of text categorization tasks.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "6:37": "We also introduce the some discriminative classifiers, particularly logistical regression, can nearest labor and SBN. They also very important, they are very popular, they are very useful for text capitalization as well." + "time": "6:37", + "text": "We also introduced some discriminative classifiers, particularly logistic regression, k-nearest neighbors, and SVM. They are also very important, very popular, and very useful for text categorization as well.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "6:52": "In both parts, we'll also discuss how to evaluate the results. Evaluation is quite important because if the matches that you use don't really reflect the volatility of the method then it would give you misleading results so its very important to get the variation right. 
And we talked about variation of categorization in detail was a lot of specific measures." + "time": "6:52", + "text": "In both parts, we also discussed how to evaluate the results. Evaluation is quite important because if the metrics that you use don't really reflect the utility of the method, then they would give you misleading results, so it's very important to get the evaluation right. And we talked about evaluation of categorization in detail, with a lot of specific measures.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "7:18": "Then we talked about the sentiment analysis and the paradigm and that's where we introduced sentiment classification problem. And although it's a special case of text recalculation, but we talked about how to extend or improve the text recalculation method by using more sophisticated features that would be needed for sentiment analysis. We did a review of some common use for complex features for text analysis, and then we also talked about how to capture the order of these categories, in sentiment classification, and in particular we introduced ordinal logistical regression then we also talked about Latent Aspect Rating Analysis. This is an unsupervised way of using a generative model to understand and review data in more detail. In particular, it allows us to understand the composed ratings of" + "time": "7:18", + "text": "Then we talked about sentiment analysis and opinion mining, and that's where we introduced the sentiment classification problem. And although it's a special case of text categorization, we talked about how to extend or improve the text categorization method by using more sophisticated features that would be needed for sentiment analysis. We did a review of some commonly used complex features for text analysis, and then we also talked about how to capture the order of these categories in sentiment classification, and in particular we introduced ordinal logistic regression. Then we also talked about Latent Aspect Rating Analysis. This is an unsupervised way of using a generative model to understand review data in more detail. In particular, it allows us to understand the decomposed ratings of", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "8:14": "a reviewer on different aspects of a topic. 
So given text reviews with overall ratings, the method allows even further ratings on different aspects. And it also allows us to infer, the viewers laying their weights on these aspects or which aspects are more important to a viewer can be revealed as well. And this enables a lot of interesting applications." + "time": "8:14", + "text": "a reviewer on different aspects of a topic. So given text reviews with overall ratings, the method allows us to infer ratings on different aspects. And it also allows us to infer the reviewers' latent weights on these aspects; which aspects are more important to a reviewer can be revealed as well. And this enables a lot of interesting applications.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "8:41": "Finally, in the discussion of prediction, we mainly talk about the joint mining of text and non text data, as they are both very important for prediction." + "time": "8:41", + "text": "Finally, in the discussion of prediction, we mainly talked about the joint mining of text and non-text data, as they are both very important for prediction.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "8:51": "We particularly talked about how text data can help non-text data and vice versa." + "time": "8:51", + "text": "We particularly talked about how text data can help non-text data and vice versa.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "8:58": "In the case of using non-text data to help text data analysis, we talked about the contextual text mining. We introduced the contextual PLSA as a generalizing or generalized model of PLSA to allows us to incorporate the context of variables, such as time and location. And this is a general way to allow us to reveal a lot of interesting topic of patterns in text data. 
We also introduced the net PLSA, in this case we used social network or network in general of text data to help analyze puppets." + "time": "8:58", + "text": "In the case of using non-text data to help text data analysis, we talked about contextual text mining. We introduced contextual PLSA as a generalized model of PLSA that allows us to incorporate context variables, such as time and location. And this is a general way to allow us to reveal a lot of interesting topical patterns in text data. We also introduced NetPLSA; in this case we used a social network, or a network of text data in general, to help analyze topics.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "9:31": "And finally we talk about how can be used as context to mine potentially causal Topics in text layer." + "time": "9:31", + "text": "And finally we talked about how time series can be used as context to mine potentially causal topics in text data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "9:43": "Now, in the other way of using text to" + "time": "9:43", + "text": "Now, in the other way of using text to", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "9:47": "help interpret patterns discovered from LAM text data, we did not really discuss anything in detail but just provide a reference but I should stress that that's after a very important direction to know about, if you want to build a practical text mining systems, because understanding and interpreting patterns is quite important." 
+ "time": "9:47", + "text": "help interpret patterns discovered from non-text data, we did not really discuss anything in detail but just provided a reference, but I should stress that that's actually a very important direction to know about if you want to build practical text mining systems, because understanding and interpreting patterns is quite important.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "10:13": "So this is a summary of the key take-away messages, and I hope these will be very useful to you for building any text mining applications or for studying these algorithms. And this should provide a good basis for you to read frontier research papers, to know more about more advanced algorithms, or to invent new algorithms yourself." + "time": "10:13", + "text": "So this is a summary of the key take-away messages, and I hope these will be very useful to you for building any text mining applications or for studying these algorithms. And this should provide a good basis for you to read frontier research papers, to know more about more advanced algorithms, or to invent new algorithms yourself.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "10:40": "So to know more about this topic, I would suggest you look into other areas in more depth." + "time": "10:40", + "text": "So to know more about this topic, I would suggest you look into other areas in more depth.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "10:48": "And during the short period of this course, we could only touch the basic concepts and basic principles of text mining, and we emphasized the coverage of practical algorithms. And this is at the cost of covering more algorithms, and in many cases we omitted the discussion of a lot of algorithms.
So to learn more about the subject you should definitely learn more about natural language processing, because this is the foundation for all text-based applications. The more NLP you can do, the better the understanding of the text you can get, and then the deeper knowledge you can discover. So this is very important." + "time": "10:48", + "text": "And during the short period of this course, we could only touch the basic concepts and basic principles of text mining, and we emphasized the coverage of practical algorithms. And this is at the cost of covering more algorithms, and in many cases we omitted the discussion of a lot of algorithms. So to learn more about the subject you should definitely learn more about natural language processing, because this is the foundation for all text-based applications. The more NLP you can do, the better the understanding of the text you can get, and then the deeper knowledge you can discover. So this is very important.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "11:37": "The second area you should look into is Statistical Machine Learning." + "time": "11:37", + "text": "The second area you should look into is Statistical Machine Learning.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "11:41": "And these techniques are now the backbone techniques for" + "time": "11:41", + "text": "And these techniques are now the backbone techniques for", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "11:46": "not just text analysis applications but also for NLP. A lot of NLP techniques are nowadays actually based on supervised machine learning." + "time": "11:46", + "text": "not just text analysis applications but also for NLP. 
A lot of NLP techniques are nowadays actually based on supervised machine learning.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "11:56": "So, they are very important because they are a key to also understanding some advanced NLP techniques, and naturally they will provide more tools for doing text analysis in general." + "time": "11:56", + "text": "So, they are very important because they are a key to also understanding some advanced NLP techniques, and naturally they will provide more tools for doing text analysis in general.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "12:09": "Now, a particularly interesting area called deep learning has attracted a lot of attention recently. It has also shown promise in many application areas, especially in speech and vision, and has been applied to text data as well. So, for example, recently there has been work on using deep learning to do sentiment analysis to achieve better accuracy. So that's one example of [INAUDIBLE] techniques that we weren't able to cover, but that's also very important." + "time": "12:09", + "text": "Now, a particularly interesting area called deep learning has attracted a lot of attention recently. It has also shown promise in many application areas, especially in speech and vision, and has been applied to text data as well. So, for example, recently there has been work on using deep learning to do sentiment analysis to achieve better accuracy. So that's one example of [INAUDIBLE] techniques that we weren't able to cover, but that's also very important.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "12:41": "And another area that has emerged in statistical learning is the word embedding technique, where we can learn better representations of words. And then these better representations will allow you to compute the similarity of words. 
As you can see, this directly provides a way to discover the paradigmatic relations of words. And the results that people have got so far are very impressive. That's another promising technique that we did not have time to touch," + "time": "12:41", + "text": "And another area that has emerged in statistical learning is the word embedding technique, where we can learn better representations of words. And then these better representations will allow you to compute the similarity of words. As you can see, this directly provides a way to discover the paradigmatic relations of words. And the results that people have got so far are very impressive. That's another promising technique that we did not have time to touch,", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "13:12": "but, of course, whether these new techniques will lead to practically useful techniques that work much better than the current technologies is still an open question that has to be examined. And no serious evaluation has been done yet in, for example, examining the practical value of word embeddings other than word similarity-based evaluation." + "time": "13:12", + "text": "but, of course, whether these new techniques will lead to practically useful techniques that work much better than the current technologies is still an open question that has to be examined. And no serious evaluation has been done yet in, for example, examining the practical value of word embeddings other than word similarity-based evaluation.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "13:36": "But nevertheless, these are advanced techniques that surely will make an impact in text mining in the future. So it's very important to know more about these. 
Statistical learning is also the key to predictive modeling, which is very crucial for many big data applications. We did not talk about the predictive modeling component, but this is mostly about regression or categorization techniques, and this is another reason why statistical learning is important." + "time": "13:36", + "text": "But nevertheless, these are advanced techniques that surely will make an impact in text mining in the future. So it's very important to know more about these. Statistical learning is also the key to predictive modeling, which is very crucial for many big data applications. We did not talk about the predictive modeling component, but this is mostly about regression or categorization techniques, and this is another reason why statistical learning is important.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "14:07": "We also suggest that you learn more about data mining, and that's simply because general data mining algorithms can always be applied to text data, which can be regarded as a special case of general data." + "time": "14:07", + "text": "We also suggest that you learn more about data mining, and that's simply because general data mining algorithms can always be applied to text data, which can be regarded as a special case of general data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "14:23": "So there are many applications of data mining techniques. In particular, for example, pattern discovery would be very useful to generate interesting features for text analysis, and similarly, information network mining techniques can also be used to analyze text information networks." + "time": "14:23", + "text": "So there are many applications of data mining techniques. 
In particular, for example, pattern discovery would be very useful to generate interesting features for text analysis, and similarly, information network mining techniques can also be used to analyze text information networks.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "14:42": "So these are all good to know in order to develop effective text analysis techniques. And finally, we also recommend you to learn more about text retrieval, information retrieval, or search engines. This is especially important if you are interested in building practical text application systems. And a search engine would be an essential system component in any text-based application. And that's because text data are created for humans to consume. So humans are in the best position to understand text data, and it's important to have humans in the loop in big text data applications, so a search engine can in particular help text mining systems in two ways. One is to effectively reduce the data size from a large collection to a small collection with the most relevant text data that matter for the particular problem. The other is to provide a way to annotate, to explain patterns, and this has to do with knowledge provenance. Once we discover some knowledge, we have to figure out whether or not the discovery is really reliable. So we need to go back to the original text to verify that. And that is why the search engine is very important." + "time": "14:42", + "text": "So these are all good to know in order to develop effective text analysis techniques. And finally, we also recommend you to learn more about text retrieval, information retrieval, or search engines. This is especially important if you are interested in building practical text application systems. And a search engine would be an essential system component in any text-based application. 
And that's because text data are created for humans to consume. So humans are in the best position to understand text data, and it's important to have humans in the loop in big text data applications, so a search engine can in particular help text mining systems in two ways. One is to effectively reduce the data size from a large collection to a small collection with the most relevant text data that matter for the particular problem. The other is to provide a way to annotate, to explain patterns, and this has to do with knowledge provenance. Once we discover some knowledge, we have to figure out whether or not the discovery is really reliable. So we need to go back to the original text to verify that. And that is why the search engine is very important.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "16:04": "Moreover, some techniques of information retrieval, for example BM25 or the vector space model, are also very useful for text data mining. We only mention some of them, but if you know more about text retrieval you'll see that there are many techniques that can be useful here. Another useful technique is indexing, which enables quick responses of a search engine to a user's query, and such techniques can be very useful for building efficient text mining systems as well." + "time": "16:04", + "text": "Moreover, some techniques of information retrieval, for example BM25 or the vector space model, are also very useful for text data mining. We only mention some of them, but if you know more about text retrieval you'll see that there are many techniques that can be useful here. 
Another useful technique is indexing, which enables quick responses of a search engine to a user's query, and such techniques can be very useful for building efficient text mining systems as well.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "16:35": "So, finally, I want to remind you of this big picture for harnessing big text data that I showed you at the beginning of the course." + "time": "16:35", + "text": "So, finally, I want to remind you of this big picture for harnessing big text data that I showed you at the beginning of the course.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "16:45": "So in general, to deal with a big text application system, we need two kinds of techniques: text retrieval and text mining." + "time": "16:45", + "text": "So in general, to deal with a big text application system, we need two kinds of techniques: text retrieval and text mining.", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" }, { - "16:53": "And text retrieval, as I explained, is to help convert big text data into a small amount of the most relevant data for a particular problem, and it can also help provide knowledge provenance and help interpret patterns later. Text mining has to do with further analyzing the relevant data to discover actionable knowledge that can be directly useful for decision making or many other tasks. So this course covers text mining. And there's a companion course called Text Retrieval and Search Engines that covers text retrieval. If you haven't taken that course, it would be useful for you to take it, especially if you are interested in building a text application system. And taking both courses will give you a complete set of practical skills for building such a system. So in [INAUDIBLE] I just would like to thank you for taking this course. 
I hope you have learned useful knowledge and skills in text mining and [INAUDIBLE]. As you can see from our discussions, there are a lot of opportunities for these kinds of techniques, and there are also a lot of open challenges. So I hope you can use what you have learned to build a lot of useful applications that will benefit society, and to also join the research community to discover new techniques for text mining and beyond. Thank you. [MUSIC]" + "time": "16:53", + "text": "And text retrieval, as I explained, is to help convert big text data into a small amount of the most relevant data for a particular problem, and it can also help provide knowledge provenance and help interpret patterns later. Text mining has to do with further analyzing the relevant data to discover actionable knowledge that can be directly useful for decision making or many other tasks. So this course covers text mining. And there's a companion course called Text Retrieval and Search Engines that covers text retrieval. If you haven't taken that course, it would be useful for you to take it, especially if you are interested in building a text application system. And taking both courses will give you a complete set of practical skills for building such a system. So in [INAUDIBLE] I just would like to thank you for taking this course. I hope you have learned useful knowledge and skills in text mining and [INAUDIBLE]. As you can see from our discussions, there are a lot of opportunities for these kinds of techniques, and there are also a lot of open challenges. So I hope you can use what you have learned to build a lot of useful applications that will benefit society, and to also join the research community to discover new techniques for text mining and beyond. Thank you. 
[MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" } ] } From ce06271f5b0e45059535b97dc2333608509cc130 Mon Sep 17 00:00:00 2001 From: Christian Opperman Date: Sat, 18 Nov 2023 21:06:43 -0500 Subject: [PATCH 05/52] Add progress report to Github --- TeamCAHJ_ProjectProgressReport.pdf | Bin 0 -> 74523 bytes 1 file changed, 0 insertions(+), 0 deletions(-) create mode 100644 TeamCAHJ_ProjectProgressReport.pdf diff --git a/TeamCAHJ_ProjectProgressReport.pdf b/TeamCAHJ_ProjectProgressReport.pdf new file mode 100644 index 0000000000000000000000000000000000000000..06f2c215f3957f7723ad244cd9fe4af157332409 GIT binary patch literal 74523 zcmdSAWmKHYwl0hW2oNB+ySr;baBJLxI}J20je8P;ySqbx;O-8=-Gc{rmmps!Yp=D} zI{Tb^?-=*~`4~0&t*)wBHEY(ar|Ow+L#-q($;{5ei-JU53;9DqVkcuGvop3tL3;a^ zRn6TV$SP(8HnO%eXH_yX2Re~)K&n+(<&132>43KMWUQ)SM`shTnj;XXVrK^?>ykbt(PzZLn_@*f)g)`V5e&e;}B#?C4WGIi1+`z@tM_IJ5oMgJiOnFeGah@!ve zp#pTWb9RIb3sF?k&KCUZ!A{2gyH1jWjQx*RlAL7hTz`~tk+Jjt%0Z?g;Rcpc1sj2Z z5Fsg5b~Z9d`ehqPo}CPm{%ED-XlJ4d1naOu?98eLbOW=>*gy;^`o}};kB7`J#UZva z1sRFjx#>Ve*~oYxgY)up=#fDtY72&_>_qm54pNSG&i0Toe=pAOFRZGpYK}&>PWHd1 zX5!8&rphV~bOD(FRis2A{i+!mJ3*TKu~L6b--(Rv@0R-2@t>BG1X+WDj;xZ_keP`C zP3%m8ta3nGbFc*&2R|?OFDWOmBhbhO1<5UKM0d%Sh!?{fH_LZdO5P82F?om#(K1|8 z5+}9ySiAEbmAk#Q7V6qJeFL1khf<2rIkU0LVS64cVFG#HR6V8)RdpsDy;irYy(=eY z@5hb6ksDWR&$GF?F~ymg&5QefjS`2iwkJF@?X0YtBd4n!61~MZKCU0Xyc5#be>iB~ zzdPT!UO(UH6)b;ILb9T#Z*S4Vpzpw9Sx_8N*@QXMeKNkg#W-4A$U1IvW!mO=e}8_y zQE1iIfH#NYReaQta5_L0dn5PET<>ZUNJ=>M!p?aK{Tb_s?v{H~ZcSrlC34Vv*JX*> z*(js;q^ZR7qPE#w#o5G8Tg)}A4MqpDWaf!j7ludRr~=Swudv=M*KVB`uone zlXf+`QZh^`!Kp@H5V$GN)oVjmRBU|b)zrhJdASv1Oa^rXy#MgXm*>_6iGK- zLTnuJU^F0kfkR*p=>$W(Uz?D{2tsy%sW);k|ADs17VdclJ5$Ga8IM@WBN!Hyo z^H)jBD08#ucpcUa83(ZP$~-GcJ?TabugO;ev#JXF<#+c5(S@tpGd>}|P^;4AEE{U; z=SRfo1P}&}UP6UaR*~AiMWn9vsAUFhA-_P}q*SWP*|A_rvqBNnZ|*)-q!>7iD&@~5 zhsT)=5jE1z>Z}U-MAsIMo~$X38xVkHBs{Vl zM3^sq#N7xb(>|y7SJS$2uz$S{=Q)gQj0Y1ZeF0wJXkV$k*yV=iU|8Mb$Z=lHX4$kL 
zX_C02{sq*sPY~b6_Gx6qHG^_rXXD$q8$}a}P+haG_uY$BEM?qhbiuFq2Z($+{=SM> zIe0LCQ*F!mOr=umE8puj#cTOWaLPJ$KZ;%$rKqjY3{i{H%27*Iz?Dl!5xM6rH`PSc zcORVSaPRqeO_cg)?#Gc0)2MtaG(%QpSEK4}MyMb1WZLC?9Io^{ouOx~|Tx zC@fnScHF-;X)?(l!wc}o?_OPo5Jo`{jBg*LzDbM_GsI3HU<++CXU0*Tz_`W$e*`!S zm>TaYUby@QN~{*;3IX`~@;UUUA~OqrEg4xen3{E~*iNd8VNgDs!&%s`myGb2Oc&#J zGhG=K%G{Q^QR;Rc-C{n#!OP}T(UKm%M%hk=>YwjuG}{oNl7RmipL#`t6?N2*k!aLB~a@_H7mV+O67`R zhY=d+jx~y35$r-N9yd|)JL*9Tdw1AV*zv)czIVIL_R-$g)-BFuyKnjY(!LWnc5uOb zAfGEl{gLkNcTXk9{ea{7dVSB1&heam&-M9A&c12q#$N`K4()xnP3{?SQ-{L-$kyD< zdpC7>1#{LH&XPJZxMthX$l{iKw6v`|IMUzcbWU}w*%gcKSUY{k=8RSotp9fVdjkE{ z#I~FKeU&#C28*P|nMo#WqwBn#Be<>v%Ezaver!XKQ;yeYiEUEG5Xa81T*Nj4R}UFmuFRlI&xo0cFuJ&; z6~j}s<28|%nIh%?r1&_xQrz3qwjrIgH&pz}dXYZy2{HffO}$nPTFXESz16=OE8Va* zkenDFk1Hi4$?^5$NhL#kMG;AK--%rvdb7>Qa#jP%<5`>**P1<3dy)eYcKY5YZ#?+) zz2kgzD7jB7ZP?M)U&m!7#r5~>9yTY}<71DHSFUNT8z549??d z#YS_<88yE}x@P%;KJ~Y4E9!TSNpqrwQ7VzgKT&5mPSohED7Av5`$oERrA_%3 z-KQXUg3iHIasYlyasvajueM+VfH%(QNNh)ksbA+Cfpo&w04JyP&>FmZ#1|f`?%af^wrx6F9awG+ zUM#?%2*aH>4Z;hra1!*1zR|%n>fxM-x(E-fXcWFtNGlz^m0h@E+ge|f7%e=ZTYjfL z(tY#a+?-#S&D$B0RM9%MHo1QHSaJIrUub=KBDU@Jai21yThnWE?twL9am59E^P7;5 zsr1lLiX1~f(L2d#@K&BRSxvkf8Vx%(8aGCJBmB@!4@{F?3hm$g+2GD~5k6V$Bo8T{ zH6%d!CdkX*uRj8IL;ggFPU1;iD?D33i1xUg9=`qulI^HcR)(PNk&s6b>(T~7Zr1{D zE6IKX=N7S4=X4TL8M z5;7SAF9Ea=V3NI_RZwJ)b*47TW`89)PNJ~uAoy{et z`<7cLuvsJ5U2Bb~?s~s*)TbvWBDl(XbYi5O z>91y{oGsGQ3N$+)qlLoAP=@zIUI#wk)O~6aIUZ-k$%#?in@Mc%P7x;=4dQC9$ZEcc zP2S}w3mpBtA@T4Wv38(dAiI*${DFD>_Smh6R(_WvMr$-uBgd$GK2< zxIUs-ZPhQ8(C&Syk)&kX-hC&IpO4}A8BzDVWZ5Bk{rWUg z)3nZ@Y%s;)-I*w_A5Wn( zovbT`)b%q8Wx=K7K$unbX0%7q|0!+GkV;+~Q8f7lydPVtJ6-GnF;*7a<^d_P__DHG zvq*f+Es(ZIjtLi(>S#%uo`|uwK-D%y$Ws}1ebUW%`BWM=)QQcO4tbVZwmV=Ydx!;7 zO@4t?Yc(3XH)3;k_6&xXTf^w+;DFZ{jCc(y3Q#u+K3=QUYOL9*zBf-^X%Fx19gDM+ zMyJ!75FXdY%Qf;lkS0~YNq%E)vN$i?3#s(fu^|FMN1{4!Mwq5_T)$+ZgPxXn4{<8nXY|r+bax!0+N0 zc}fDP{xL$4o)BJ@3_=5%S_Y=xf#85!fT>%e?NYQa`>!Iedrc&Gjt}sZiITyoe~gi- zCureS#Z)w4snOn+cY;h!ey3~Uo|(?A>pKOV3L#IYi`Vt1#tQCuX~UL{rFe4g$Xui{ 
zn6xnzh3g?gz13W*l(Z#Eb6Yl*;=r_Cgi>O6S|ff7bn@K+w>jJ!jddsdq4dag!#y`t zI}BzUXJ7>m;a3Vy!4mHD#CsBfP-=KHA#s!5M;{;_CVnTOYW0Jk5POL2kmjSrB6#^s zBxyPFAr!0)vQBp2=$=sv%UVYFsHX2GrjJzowZrB9?R(3Uugz_k+cmd1M`h=-b6dNf zi|w7*`f7RlLu}1ic@RZCT4~m5-Tud+1=j(bY&d4?)Bio9MY%4PIB$Scu zY6d6&?8ZzjT=5!f_Da5)GOArdhO6wfMLBll^zKr(AD?=E@ol6dxhCFJ@A%6LO+_l?^l9;=4d((UOMiPl$1lwcaE z{zSi**i9V6KGNLiXkm938oYDH6Dtsf5d1RE&+M)&EU3F;Gqw3jAIe_pD$^ZPo{CUY zkIPOE%A=w57NpEYIeT_V-r<1LW~|)B{7g-puSmTZMX4LLP!rYTiU)+!WvC}))7leV z@^rjP?)vK6XcZz;gW<})YG@tWi-yyj=}w%*dQQr1{0*L(Ih~%F$92la zwMGy8B6`xBiJ8i1WoRf+>8y~Ic~iR9Q!yh&jZSZGQE(*77k5bM;EO1A-`FEAlg=r1 zLsYm5si!JE^Cm<{NJg_vMrM1P`XiC%&@R-K^1s3r`%qV+XcLx~U&&6WzN_J-OnlK- zA8jExNinQ1u`uL{--(5LWSr>OL)NV#dAga^z)2NL2?C^AB->byGf|M14oI@5#n%`Se`U)pqxZ?+O zHL=n4%?ma0=F0pyX65|!FzWxDDBnQBeMn0w{h|yneJwj@S5}>CR;U*CH?sfCE)}g+ zGd7%u_p^T;{_V6%$T z{<1|>Z^9PwjARjMQn85YQK`_s(}DiZezJd-aMz+06*2)f*;SMcKvA0Ok>$iPgoOv6C%GGU1@dTSTpi$NH_PKuz1{v&WWijtx#o{(uOs@0VaRnE` z`p&h8P1C4Q*;T4^uExaGEqZI259H%9g!4dswXtI8FLncc97Sbu6qUu%p-^$Wm=Ndi zA};vDIVryN%buKH_T=E}8#p3HzaUr6uY|~cIqvi0-DZ|dHLj_HP_qB%u%0a%wC(_%d4j!!%Kroy3L-ebdDTZ z&shB^*`{r*7~Z~RX81i1-bjWTd9>fp?pzn0-$EX4W;Fp3y8K7#2&7^MFv_Umm)|Ja z(E__4pX3tt1E2JZ4+t_fZ1StMx$KT#&5aYZF8`*~hmPOkL5v4+4kTz0O27}+gg_(& zg6P*`5JZ9?fIkg@NC3oHa9a=r5j=sGw^f1x7%#(jua>mX$qpi13G@!}D~S3xzzc(k zYEbX0>ZcUAKTLZfy#uc-rltP?tuIR4L1)RN-Dim);Ang!J-kt)4+$4A4b@FH&P?MN z!j_dD1p(1VQbzzuk@ZU$E7F&6npE}gsAzN^p4}U^XjOF;^(Rti3A*h*o}oUg(&`M% zpZ`0;hE!=xI%=!ItfipblkkW3O_#wy{v6>Qk{UhYOV8y$()U5WtOt3CFXNZLURnc* z(2q)ylKr3$y+*Sj76h>Xi1|Ux2V!0j_ke^7Bq=aB1cnB|PzDTl04`ty)0|ucfL=>F zDE?dk_Q0|7$LO_WK+8esV|KKP+aa#9bpgP-UC9q4ZzXk~-5 z2$lDA+}=U1ho9P~u>bgdMDraNi^WTj<`QRe(wYT<&-Q7o{9=}ryrWf?^Km1GoCfM^B>5y|bR~WMIDA+C1GH_>KlOy5gIBj*8>Zxv>uA9jVbl2}nZC zoP-jPBR(fdKps+oM+h{?vSpD?8_EI))P-{hUC{yTvh@;_j|8Aanvz1fJ9N!30s6zB zR}b_7lpaBR*6Z_~$!b{7d#ijS1u{_N%|HT7ArPyO;z+s!Ii+uH-Uc120AEup0*AK1 zhOX0C;(lM)rdR#&2P(ZSl$E*2fSgcPIc*dm5%qjy-_o%t^c9JWHHbb z!r0tXe;mR4(guqOoCjYpS&S-5tz+t^z^odbc^-%aaMatqMjR4q6gU| 
zH%PjH1E&)mAZF?{k#Q|>p=G*37jX6?vO0mk>eN}Px=Gb6F)63<(5X~u$($gwf}r(U z8_CHzM2a9KQKHWhL-_RF4R~FPv<=P5kX`Rp=L@a?Ppa?Ka#a1FG+e~z3t6=^0Mr|5 zJ{0gocy;|O$I}L@2@I&rI%>PkVWKGwYkV_HzG%_YXih|yVjDq_KcLPNX$jl>D+C8e zDxh-^WB4@09Bk9A9n+H9D1QNvPdO`W9H(U_8z7@QE6 z6JgXxl0Q&uOqw@ew7HSQgOAofz**2)x1>tX8yNIi{V@2v2F3$r1S;wYhD?Is#M(yNL`jNc=VS*%;g41#sON0##aH0NYAlHu`8F z;I*yKQSq;+idiA z#iGL*pT=l54eeetwqwBc;km?6%v|W`C`QPi84P-3XEZ5v6jRyt86oUWd5mToANE+R zK8Lq=I`=_?5D?<=NF35igtYpM+Cs#LK4LqjmCoP!4A_cNg5aSNPgwjqQ^(2Bl5?Nw zVAm=7kdlBJ7?2_SF;P&fP{&D8YDagS5@Aav-88Tao@Tl4BFQ@=XF+XJvWff*!Uyvd zvKfav#0e%^&AF^r7e^Pib(P-L@A{b0Ux-A@xx{!fT<~+$>*b{lX`K*u3>%E7G?DVq z2)!Kss6Us9<~Hmdi;ndKlD%KcX8hgj5XLSXxr25Roy44cvR3p095Ji29ruPl$u3bR zIuf5|4oPQHW1M#B7_E}vwH+6|Z0M6ByVOCQ5IdN}r$y#a+W}hmEU^YhTNcLE5?k?z zNe8K7Lhl;-@I&dzn@6naM9i*NQ^-6qxq>yUGEKRd9SNh)~F~r zC(`vz*=(SDT{u$i38s5c9(Ej#c|X+mD3K-Z#5GA-7!}=r#?ABY*afh)WGCE~VjKG##lVt7`>5zR z)HtRK>38edRt?lI!PX`OP0TNHzspiTYEG`r$BKoZPNy-2()qr?qmRZXZW$}XKlZa?qw))o$~fOJYGk=2wAT}{0INj;q^Fr`{sIF8m-1?B`6{VUMKfJogF5o zi0g@;OBtPwV2nr*W-te*wDn-q`Ft9FMcr#IfK8_vVj3AUH?bL9S4u=x#|920HXbY4 zHrPm8kv*;9i0d<7$hm+}LuVU(`YQss^d;Q;Fh;PczaZ;^Mw8A-OwrxIqwK@Xva;>;2rK4CYVN-u@>*$>hO zmBQ?S*5hPgi?r#K*l43vV-K_?ZOrE?Q5}w;}t9 zpYx*hps|h%$TJpWCGR3_Jt5LquI0yVCXaii++y#_4-N=(rqU~91^9e=jatjv_-KS- z-4-+$LnxOEhO@aK`CR8ruTRY|1`Fr0Q#6$sAiJ*^jU-JsKxyhGwxO|%>WMt~2s+&i z5?p`-I}5!;9k_tbI75|~i=KVFB3H6lNvE9Q`s&seEw!2r*M?d_ocfxznv6vd$Qu~5 zM-V)`mj2#ynL4A=nO(S?Mr-=qxAdA8;mXwiWd*?sy`lq!l506TzaNj45U_~kV-SX#ZK1cK3vZhkMg7a>f*pD@}Hw-7+)7fDj35tg( z!!2gKA_ED&}qe}yUQ)2rC%PJYKqQ%SK zBZ-pA=!C}J-e19iVQK2_39L(~S(nA;W|=}yXl+Byo#C*(u#m5Aak#s3Imh5k-nzP~ zFMm%u4DjFDu54e<6Lab#Yv%f8Njw6n)ghme72%t9%|v)iC*w;Idtgx^kW^WStQsbb zDQ-F`wzO!eBfJVVnxjV+?FeG11rD{hKRljLh~NwdorPUpLv#5yYgR==I(Eg7@L-@N zTMvD%C*Wu-(Cd=VX0?I`0B}H$ztMSR(96{7oR6_f;ZI7@Ld)5#tn)GGG}GZvD&fIf zm+SSCPb6n=t;{Y~L-DD@GZXu((T}pxHQCgfgcZHcY@uFXw>X!HPwg9DcVJ^ja&lia zG2Y|zbdPtW%H7@t^t2*)gZvD9S`_8nD6Qgm>Ja&DILfgSHWcFMI&*#%?Ox;#Hv%10 
z$5m?$bOb5NX*O+*bx`Zp5iKvRYw+UASf{wK#?_HZ*cAb!O#&Bkb=@%BjCCrRL^%=Z zLb1;H(rj95@mlL|(&!W`HT0uuNIfBo@}0TaTX3wC?*3XPi((xZ6~OV|A&(zN{inV9 zJ}D~5fd@(!i@q=JRhhzzsgsVS?#E4s=)+QAP{f6Xjle)q+Bs4uMaPn<`*9Jsm12Wp zdtsrn8GKpMn_y#>2M3a&fhsuPQ`*+=3Xbj?ab`NQ9yKfQL8lZR1WkGv_oDECRmKc7 zFtC#&1RVKEB=}N$e1J6%d>5|=l<_`~kc(hZSYUAJF2;jMqM zUEjw23A;w6QJc_NKWoWnK@E+R`oyj8VDy^tccNO4AZ>FXl5ieQY|I}0~-szuDDB2 z!@tmKA(HbKVCXSXOi{O|*Jd1gmNnRIMvH;w%xoy`3EJ(6Qcrw1Eoco!9sCP#ppBd{ z>GFjgLKL5z9ftIJl750HNHNh|)3o>c1uUy$9kc}B0- z8>ZEA?E4Y;PXCcW$`w(-MOc0Xk%VbLx0)oJ#E(_#TnFd8)l$I(H4oBCgH9P5bp;(aoKU|z`OtQk0% zl0IlP4_P(Lnl;uUT8ZQtYwu!ijg@jI7TpJxkx9q8HjXgE$CfE)n4$B`Fb|hs*LuF| z@{8A|mzN@EGS+(Da`+z-e-l-7#gS#{KZQ&9}>vf4d7L%xqBZvzbH zwAq{{HT6M~ycKA8iA;Ld)Iz2`21EVvCeciszCkpD0X!iIfJR;4p_ktw?M*;$X$z!b zjyNcJ7a?$la1aqVTWhlp_(j^bA98paB%K_sS)iLtJq?-3S96lD% zBx}^GG)CPiQedqH70DR%^}{6iD5HZarMGYdp1HmYeovdAUT=^y4&aC9d{+2Hgf4)W zN_uUOKjA#UhCfB$3MZV}HriLqmTgI1B=3vtIEmvpiL)op1|bUxA|WIM z!jcdmw3JdP5W3y|N?-4Lv}_@978lBEDFzDEw9u5&UMSG^*X`ci_P*XWC5`3xoiihE zNhsXg_j_@y(VRImKJ%UL{J-U#?>ktBq?l7Aidvv7R7GiD?_I|^GlTOl|9n(4t&W?S zJjti*{rTvm$6qsS(c)5&a`A{^R1c{v9>vc#gwcHV(DY|tPoBML#{xi` zwW$gmlb8r8ovxyzXTgefI{k`KMkZ&>?~6H2HnXQ96iTXQvziQrDm-R%T`4ModD@~_ zSPtOwZ_SY^pCI_EB9W9w5IiYN*)!v3&=JE1(!wTfw7cj{RG4zo%_jw^656|sg2qZC zB}?KYF3kJz9-CfVKFtWvUaHZDjW7~jK8ZszXOH`RQM=J>cEtn!gxhR(C;Wl9%Zze( zHxdRu=y8k0v#cOm&W8i#svxN4fk1_a=RFm`)0M@aqutbdl#dD!StYprlyn;B%oq~# z@Ey?FJE~8C8I2nDHrgkevKw?d`7YLK_t<4Oj|_(dHSYDq-NL=0Y&zw64G$kY$&2i_ z`oh=}4+EW^89$5eVGfgMtvYSbRL3xV(ixCg@(xCbl3xCdvRGvRPG?!D@8G=-0;v5JUR0Nv@eSiC8q%;%t$JAg6*mDJkE z4Ud24qy;SiZtkaW1FN3HWHRTOe$qH?Q@lx5-jV5OOXJ_G=VsEg;TK~_C379x&E5d? 
z_UZJV3(wNYJ-756^f9T;CPMX@l=_;{Lgv)X$bP^ZRya-x;}v0*%p34Ge@t`C8fqQ1 z;ZX4r^e6UiDyC6C?_}_#I)j5a7&3LrI77lAsuM2Z~mk>_^L$~SP-%7 zme=2UXV-hdUcYuB$|Z=fZ=+v%50^w#TW z&#rggdVLFP;%HWoEw7+Qe)1-I~mskGBo9~ldq>sY6nwd&Z$3^M+Tz_BxJcBi;DjvJQRClLbYP!1;j5V(l z&WN4&6_!gYz5V z{5RzM0B|f#-OX&FzX6EC%F|x z6*)KWh`7~=!;Ffbm{wMkVD6j zDGzcT6W4fqEnC4C*A)v0~1#vA70d*ZK!1a$B0&2*$-So`<+fMCm#K(ifdmE3% z`)^pfVE3Ymc>ncF7wlP7LEEo>>GOk29(d`x0zTgV(t(wqyC&au)%`11KC}yt4`3T} z{2a|0Dqy^|5v+3y!?bsZk}bk8>K^j10kPZJIW~&zZA|VVOWu#K950Yd*yx%N5}K)H z1UgpDwg?gILDPVR-fG5Hr<}g76mXkt z+I<(xi{`OrtIK9Gm_;jc&K<0`tF`m%J2v&z2%G>KGHa4sS6kc4?XrQ{k@9@n8!iuXfTA{+WR>DkwY-1lob1MJd*^)HtnK@- zxL9QN0euFkdo}vh)ARDyf#nqg-4z1^72OQp)qUG3%4To3x2sLVNX_%}m#4?0QFi`X zbr^Y$vX^T?c$qWJk{0?!94y*mUNAd|#zO9%Pj0ssmPga$1@ZvAtGfqS_i{brE>oq) z++8Xw3#Q)W>XA~(#pvk_GR(f#YaiV?d-uvF3z!>N?G>_1cFyj;q9p3_qKWE-rPX0bz@&058ASUHkh2P%dhSn{N!LN+_9#mbH~!m{>67++2#s`1gkUX zvU@}>k_bmTR@67F>WmsCuhS03f2?zLLvmh2D3*+|lFuc&WSgTb=1woWZch8QB~2EZ zty{7KMtyXgH`EytR3(+xJ(B5&G)!~H%MAu9HB7UoTvbwJP1;vJjNB)rb^K+9b(EgK z5+`B;cEpMf9#yBbiW07QvY=O0^$eAI5+*Jdo0u-mO>ko=kA{rZju?s<>g>w*@doT_ z#BLlL78EOoQzxUpvW`e96srx|zAw6pSJK54`tjUo*_(L7R~9VWH-W+2j*#CkFS9S%|HgXKoN8W%~oG^dpRpCjmXa zVn{+SE!IXylJ@3BIVcP8L#cKpwk9bLBS#4@-T|;H1!s|MwCzeBs&Kf@Bs_TE8J!6Y z^^uE!ABjU@zQXZVhB5Kpk zQ{fOQR^pl`3Q(2HO#{NIyS7Y6!NdjoJx_w*Ke1*2!JN%08W~=+AhmC>R=WI(_ABPs zSy{o%@~+Oo-JPost*-X;>=`^mXJHJ_1i;SA2j;{V42CNrCfRTIMqKfjr(%AgyK(b& z9sCfr6|nDqpuwsV_SMpTdF#TZiG}&Z!i9-EWAhEu+X4GphS`30PFfF*w^r1gueSb_B?u3*REA9qKRW{R&#FY&Ytc~J<+PA z`xe{;^fGdS&Af{MUv_b> z%@J_MeW64s+OfR8Vf7^JMf0m0lf4b0D8QcRbIESSVvEOpldxyZF1`w5L3o^Jexc*u z`bpe#>$r#h47kTf)|C6wc;p)}ngQ(9UljI8J)FJ3eWIY%1k|hl*r><+!lB%6vPa0C zk7M5lWu3!uy6B9-5l)37H6ihS$yIz972D7YGf*$+b_BhumFYKH$a-bd`MV}iZ!4hQ z+cQzGk)U29K|R|saSOnj?I^qLLe#SX>TSWOX9EV=_R&%A?Pd^oSAVULYq{ppYw+>O zin$wGdN$9Bme1YTijTB<@U{E;yY6`P(+6L>w?BW!_aEHz<;yBswm-BMj>*>T58-qq z5`%r}FbzMXL&Fqy}uX_=%7{gt0%!1XrE>P?66|C01cnPhpv>E`t z&4ZY6hn%KgNlpuhX5RFHV0ZfzH6$BVlV*;Zn9C0Gv^C(C-Ll2-C8Jq$n?n260(Mmy 
z>ZmqqX9>G5pVaAYE9rFEll3kxy#*_Dxue`xO`&VPv_hB7n+w_zZ`YnR zkD#{|d5-K=%p8m(XVBxezTUHQus;%QtyXX5BcFyO47zo|L9Y4=?#|3^7q z*9`ntMzHMm5|%YxfMxl-s|02r0FeC>YDdzYjnw-<*HJHj4M(BsW?S5(eUdpMiNp?GYp zw`O)S+?Z6oQ7>!r*hQyg6@z|9Y}Uq(y5+M{Ji}IYuZB^b7(dS(GQ_EBsz%q_Y9it= z8b4mnvQ%ameIqZr%6&Cw4XKDE@sZ8^m1JTyqcmd-n+9Nc3DB$NY+*`OwNw{CthDm& z!x0Q@eGFlT6!FVON63X^&WhC_GFdF^(OsM)R27V-gVry@aN>$zp^KkK+fi#I`hLmH z_dY8HRK>5lZS-Qn#%rM!*Bd1dJuvnj#<79%e`glKSi?4SN69bv`uPwjsAigZWe@JcwQ+6#waG~5FSQ4I)| zrWTd-Hsw@M29-^)E5j&%GPXG>|5?WbY@v>jj=@H%Js-n0@_Yg6aPprgP+*F&I-#}I zUs7wUX3YXO_B1Qt6y5^+?k{Dh#Q@f<37J@%;#Jh3!hPEaSGyFCXf%o*Me#^R`Y*Sc zkr;BzF3@Pc#uym9pv!oEH+0DOIJFtxDhqwVF+Vh858Xx>^Jy6K9P_#v_f^aIIi_%- zRe?&<^`8lMrq3FxJ{otXfjg`I8{DbM*MeORJcS+;xtms!A0o7%pHwrR|eKe143)Az*eq8dKuo22ZRZx8z zUpnaw@MVb39HV&eFyfDico#|jD7!-E&XM;snuNtyLBcL&@pX`}D=sNvofo=JI`=<& z%grbDHns2n)`45#c&u{Kp1xH#^hJ^byXUXGVSa=@c=$gb->~8vXTSRSvqv_pc=YVU zmit~g*t+1ZXLsqxlJA6dgN&`$I5a}bd9zy#e5*xv5zbuxTP-mtp~1ag9D1VKJN5#q3fdW-CC~>KZ4@~+YC)5$H}=>+Zttmu@@3?xD&S0~lPx0H$Mk(lW1l?{c;@94P!^Wzr1ufBNIam$> zr_C+fq_Vg-xAw-_9XH?iHl*#}gbVmo9nkhg+H)&+cCEQ<(rJRWFaFflR!1;wvpRyV7#I&Td?unXnqSq3eJ0Ul1TPD*x)o80 z#C@4%d**`0-Yn2;-7?K*a?W629R%IBOro2$gl>NuauK@$2(6>m5`Ot50d7dM9fZ74 zm{R*<7&j#r{F0s%gF|`6MIdckEW zG$rvM&f@975#Ek4jzL!Ow`b@S7T{V!p0GYS64tZDh4nCRIQ|yTTg7oi0G-O4`7kIw zd5Xaa{qknGBf~G^X^KQ3izG29jgy|@OkVJXr}$L0Peb9dDJYC%C7W@|eHeid!_>|38)wZB!`Sx~K zuxoQ~)AIHT)l7ft@L!HzzVaJqAAayGIX-^*7xyl2^bCCZTZPZQdTUF3*7|D?Q6)bR zpmiEqNH4_WK@<<5xF5xQDDFjZ55nP*Zj>awqu_|q8nTB0?vNTpVd7IsJqJ=!CwWqj ztWWABPwFA5$zhNMwxEh{QU&}e$T;p;B*EgRhv1nqPPrzL^HTYD@FPetxf$+wSVl5d zbvlP)OOnzsGL`@uYdd;?_bA@9o+Rb1dV$cMs+x>6URP3uI!_ef5gdNTBvxNc8TyC2 zu$Z?F^O_4w!XnCGK%f_Jf4djEuyhX=kyEdbKCvbnP*o5m130!jkBRTb#Lpua9t|!s zMd=EYh+jH2A<(oQX?~hMKnWTN1susO;5?`wSUQPG)bj=faFdezSE}<2N0yG6C@;?BKgPxU2Hnr+w_rS!`*F%&a<`RRMf3`63W|( z$gt3dN1n%pA443IKyDI?f4ZuaPmoAwoRAm`=n`b%rQNMaU_P#}p3I%_|Han_riHy_ z`Of@QUmv=5^}Smfoq>>zIQ#tJGLYBH>l#<(r^xHojTLx~qk=g5L{O0Iz$xiv*Xi=w zm>akXWLFJXBR|sQH6pXG`4*gjA%)7TQM??*6R6CO5yD 
zt8Oqq!Y)gPmPT#t_ZG>@;fkt?p1hjDecmur^XxgAz$=?Ku z{^yBQBn5WoFx{Q!&F;4<_F1yeGdEKD{-c`JMrdoTaKBZ6%S`t90#r~-`-J}txMnrH;lh&VGlkyB zTj$J+&uNO!nGD&`INskxc@$ZWXj{Eooi0wDxL0S$l<^AZxE~9TL5~);zrUiXh=JT%xtF zY;yHgwy^;-+|<=qYn50o%9ur$&833U(%jri(-ze(yKEea=HmA6_S}AOe+RUT#vr$X-?f5x#JAFSObyEqckK}m* z)#v>O*n`t>8dOC9>hlG1;X~IcO`!DcCfVo230g2p8xSzyGX}HODLFkNE8~zR8n3$k z>p~-AcF0i98+ITXAz%QE(rpw)dx4$})UpyvpFW9EI)!1{keAGo8@Z2E9I%G6D0_st z71Oqk5O?ZIalq>%1*pVij})NJ1YVbY9K7~g`QjB85iblBghOR@L2E-gnk)B$bY~gX zR-WsM0cCx1a$e&G6tG!yeh=WaU34m<`Nmi_UF|8alk6CBWQRj?C>CE`PcqUuYe8y& z;B^XkM}l`;alqHu-N;RnF(OID+N`5^gijGi- zopm7OK#s?B_9A$akk$Ys6C)+>ipi+DD4Y% z@rLQdo7BVszom-aEgjL$Y?;WJ%_e_kTV<>^Br4?{)!jy33vKI~+m~$)WRn5T03eST zmX9~gN;EBL^x0$e!HV{z>zk>*Y`|=jC6Cvqh_X%cL}Yi|YsI;ZM7v<9PsK8lT{Kwi zHbJzQc)M)%RJE4{>XHGI!COIcf;h&{(e3m-1jjc`;`p5<96yj(T;;w{8l@lA@p(Ug za1zRQU350!(ZZCv9}&y#Q?Q)DSl(_C=If~ijKyDIHy9Z8+`ln4cTDw{d-(ah_21~1 zULN-PcVV9*4vpDm0CbXEio~Ps2BQ}ISlr`Avx{$FowbR+f!@ppK##8J8N zh90RV>&O;v-m|H{C!6i*-?ZoEg3-Tft?KW)cJIQy-8b%^v%hZpj)ooH!S%uQ%90hX z6?97n*TGk&ZRx%HcdTF0kxqB4SifWcUSnd*#%O}djAlk2*_ z&4c*kY<+EgZ5)3qHrBzvC+cf!>**!d9xAo<#h=vI*S-rU=yUL81%BF1@PVGK%jCv-;oISwnk*gGD;16K<+u2@ zpJr>aY4`%X>ZeEP)9mlSf*>S_kPD=#%Upsd_zcHKitSHYdrtsCGl`yS0q3)>oU1 zj3~qOokPhHlNR);rRhhRGg@LNyI32l{Youk)oLq|kpixN!jv2-nDpS!X>JS7ICWbO zP=`p=XB2desi{{QN%jHyBn;r5RLgvHdPEu{G_Y9N zkpk3F4I_n8EmhlAcp}5ml<-y(wt)kmvE1z9<>G&gOIE8i9xP7;yIP_wn%7T=`W(5e&h+IX$iJB&V&w^CNIUaWUFA8qrpNz7DkJ!QbeUK~JF zEjjcVqL`5C$>yU4{e}~)dm?eb#6}yFnU;u3jd?1_=oXZ&WNYCUK>re-;n-@d)zU4M z^ES7(t@GOyGvgM#79mt#6KL&qC+dCi-qv_o_h5rBmyQd($z^d1j*e7A&Rt&TkN35d zF+M>C4_b_>9l--ev(+PGgEB{xA+GCD!h7f9^9-!H1eXz&iZD7W92KXjb z36-su7;u3FxZq;EO_D84%cn7A`}WCX-IL+?qgp~e(OCQuC;9A35C;wh6+4c2OqH(D z@8JL^>>ES<#p9*#^j}Kfia$cB(s%8hF%DBss#5buQfQFkDG4Nlo>1rvc^$n#%PE8| zXeLr;wrH{)lYbljo{sZWLXU97Vtn-pAu=NEW`RHyNtRAXcv1rSte2`> ziDfM&cX^1?5ddTl<<)!>M~&ziv>LQ!7jz&Rv63MVbQvXtfx;731Or?f*CZwTc_>^__EcgPG7Be`s5^fp39Np<495JCGKci1CwFEc+fDo&5m$J;Gt`j_x>@DF$* 
zKlUvK=ff~?2KfH`MERRuCpOnDQ4gs+NzuSW)PBYK?@9-HTJbf-}cspyk|p6cx4 zisG75+DrN^)ydgB_TuY)VCYo+tPpy-cp8|2~MRyOGoLC;+ zITRY1$jEf&G#6)AId?rwpaxLsn)JGM=SjQE!%Zq=QV7<3I4udVx)R!x(SB=H<&ay1L zi>^0P4OEYomuD#5Ni~9y97_5-Wvm&f{&eREB_nOWwfE#d%B(EAJ4V8o(@NRC^lZb; z7~7=NXGUa_;v#rMPiGepTj_k-#J)fDJz~hLQCI7f?nDfhnC|caff-&=rV=v;+rEkFqib zR+;?)cISbfV4%xLl=A-)-J)^qx`}4r4cd_?tAM}~-d9G~YH^ZLl1Y%K!oagtlnXuu zDgjamWZ;lbA7Y)QA@)q^(h1f#F~kU{TxxsuDG>4`UAb`A+^#}@H7EI<3P=NYRcoTW zHR)z$uifFdnV3KIUbCpYtZ!#8`h5uw{x5r90vA_x|9j58v)tKW*q5+e$O1_S3?U(e zU@`*?frJ182_YI~m>FP_VP>2e0)FudQzyf7|9o7CkYcl0U5ldi8i29Nl`P)P) z!e22{5&m8gejBe#$xTnoOVM!$xlvA=oSTu651p43I596PD;EwdU*d1#R0&WeGZR(Z zUHomlGBGcMeA1lf-qvWztI1joK68eX+uR9qbmtTr_37uiD)Lzg@i{1mNQN~+7J zzFfauTz}Q2Z60fWe#2GmZC-2s=h8|nB^8!MiRmS^@Oxnb_u`db_FhuuzRTY8Wp7Q@ z&bxMAzPD|D`IcP`m+fs^P`+iDcxx4|`Rx>2h?CS+$7q~E49V0%_-YF2g{*E66`&Cf_v zUkfL6?dj??kh&1B<~AyZ;f$OAxREbRp(qsc%c6yo<9f98m>1MZU;fHV_JTUM(U5Q^ zD=8%*^}dx2i<35NoPSCAd`+4UE+$K;RSw6;=|fd3YJup>V+tDxw9CXD91KV!Fo7=cem)>U8xX;-|8xm#1+md2yP3 z`@G~d9F7#o+XAj5t1T}#H~Y+_E>X~6gRp4Hvc+v#uXATG;?kGhr-M`SeaQv+3zG2b zNd{xW=~H;}jG5@N7Cj6Jzk}wD2zg`CQnVAf6G%z^c!DDTDq;tVPfebpd26n$tG%o` zJA2Lc`r7T)+5O22Di zMSk}}d|ZK^=G43sa%bM7`iw)|YmY_c&(BE|tb_bn_$m3L)(K~B*68(`gLfnw)Vc>$ z8L7F+dj#Fdw4~(Jv}FA87F}{?YO-oTp%8@AkJ0~Kp%s0P%isr5DJps*k;^DZW>O2d z6OU=Q64v1)6_!?zJX)A~k!h5Xer9JvqAqEF$?C#{OV;L96z2*_DxJE}w5Fi4ZDp2m z>4xfE_>%N@mKd|Ma}xJ2TVGS2UUo@RTGHaARE;V*Nk7kIUQl3e+Pd0LHT^o5&hJEZ zsNykmHk^BpJ>|~MR+d#Sd6sHfO&yrZ9eWI_;349{iltQLvZbppB6TX~&K&xkmt6Wn zYwy~^HD!6QA1A7FmR2upxpb?oUXy6hS{k+*a!SPfRZC$~0Q2UXY6={Eo7S~&$HS!? 
zD|2;;NpM8YSZGW!)>f{uB&Met>sF^P$}uLVB`2mQrwFQ)Bwbf)?bZZN*b4Dkik{?F z@phUki%72IgQ?$^Q{`#yHKZo~Sa;!UNr}#u3T{<;XGZqCv@@^k64b(sAQlxDbfzD} z?|*=IE!r<=wZi_yoUFVAd_#f`if{+sb7qW02Rb{f_z-0wJxY31&ppFU0nU1E5|PwM zTUoF?jlHC-cq>t(S=80zT(SS&tfXu{QJj~A@Yx5HUsM>CAHr!P;Zc!!%&!55?zG;q64-=O|; zAKekZ&j`EEPn=>7y6arxGx=Z0q_~yRJ9Clp3W*LyNOiS(VHAC9ZW5{~eZoldeuT8W z5`8i^>F&M|>C?|kUp8b#kxQai&!vPF3D3!txRTP2Su{lHXyQ@hMxX-`N=i+-DT;nS zHzjWdI+5aJlqy6~mqbs_MQJH%56P5Xm42H{8Jt8vJ0E3UpH&{A?238oa+-lYVU+vy zxs=zMzh1ntpslcE{+9V?&!q)3=hDI#7QV3P&7xz)rxxGIXvw~%&4fw@mW{|%S`?=5 z5h|-YhepaqOpT^rmsbJ(iqP_s2vrnS{3x1MxaOvnnLt0P>}0fRO@yj;NOX$P>gQL# zGMlQeT(hF4t>*ICbcy+rx8|a?53hSfCUdL#IhiagW9V3IW9{R0%j&*WUuaFWj#_`# zu%_XOhTpCat$(L+ZR4{|mo!amsM_$gjY%7anlqa3Xga5IhbTSP_VoGbQvE-Lc3k@1_T~i(Ts|*~{?GOYuW($syQd?H zF7I*nczT9=Lg&!_o?CkU(^g~qW^YID(b;6ru=m?19f^+XoU5GQjH0KU-*+B$zU=(p zar%FU`inl5%7DHJ^yGkgAaNjLAb+3;sBB=>z`B9;1Fb+^1HT{m^T6qW(=KJ4;uNPi z#VJnz-;yItuex5{VcYTg&Xk=_pa!@h$TGhwr_MrK>8h3S4#6U*jL0MfXs8HcoMhQ=H-y zr#QtaPH~D;oZ=LxIQ4b3?wZVN} zeflU4BC3UcFU*ip8$}`3M#jf0St?Q~LojQn_R8oE!@YhwG6bs}6g3gX2k6)))C4*a zbX@d455;v2z28Y!IME=;Bq%%RqbO-4uNCQdfcn%0y2##(*g@ulpoI(W3;+ss)EXDH z#1*!~20w#v2h)(*W1})e$=5F6NVwWy9pSYbdc)8PQa=R1zFug%sZM^11+mu!kNpx- zjBAkE#AJG?MRuyGhe`=hKQ&RW^uY*;$03Sffbw-pzHm{S7#0B<6#?LClj4wDiEpAeG8^1}IVy4TwY^wT0z|j4jjMO{0cH!hjUHM5{qqYXkW~+VfCf z%W=&xV^WwOJ(68sifu1l8PPb}B7)ioy^mSF6Z)kzccZ;oM7a%84!bE1LsFJT<1I&@ zM~Y-(Kl6*9MuQxqPU>qCxxTP9Ovip{O#pgVOWcEC1@q9Zu$OE!Mo0z>#@HoS9y@4a zqw?*N{8Cz7`l+wT%(?Zd`W&mbNRcB)ekE{O4s`xV4^o*98i_>8&ae+6nR#xV?3d!& z7hX*wmwDU+>zy=q{%h*3HePZ6Wh!nnXktgjG|LuC^CDVLqryuy3j+Bdvde&mDV5SH z8yh*LQoNNxdpC`rei}i`>@ zH$|RRn%xv>hDAUCd|~y;LH9A2 zR9hGyveHef|M_E-RdJ8xnFx0|LtIg(iUC^hToTTtsvBs%bV@eJCkz?o*r+a{p0Uxk z1Ig#B*>!GqCKS)A3r~a_nlD@;-k;M z+%(<>qIOuObuZP$DeY84VP8hq66V7)$>RZRl%=swg4>o(Ipj`3)R z#}?>!(;C-+MA|34jj+B2c#(UpXe*Ur1#TU5Wjp1)2}YaYw^dq4?x=&2&Cn<9^|T5y zc`a}kdk)bg?TJj&P8b)%7RGATL?z0aZGv7q@Nbl6Ex@~p@+0~a`3dF_}vEUI;eaLwV7$#LTzY(d1i~1YCt5HO183f1fMO^4Dk`suNf#} zm4#x^NVSRJR|mh_Ky$)>JY=>} 
ze%6PvYp0J)m-Z-bvDkId`N%3}pGA7CqsX<=9`i;W?R8RLk=YW-m+jP!*;2b`BwFbz z3$?f-93c%fdYSfebTS#OQJOG$#JAD9$Wbj`AV(M%d499x>2q-*s9Pu|M6Zspu@>WXE2R=0Npt#&rUL|gF+5O&tw=XExdHp`G-xhRvJ*A?> z?H1cz{R6>(*zOEC{kxowQbVJ&*Y6w_TYXMX=Wd@V4JPxN{3=TNOO--F*v&-)E1e|NcfYT{D2Ya0khtnat z*{JAn2JC*9kC;bgIGjP7%N;1Kv$J(5aifA0pQ|y^@)A9L6>{CIP3}z zh=HNrpxY_>y&%`+=?A4?Y0xGI_<&@Ie4Ghx+_3n6<+a zC;Ya78=d|^S1<_ddUsQ+ij#DQbT9xNf@xC`zrA@|4vz(2zS zF8e@~X2T%V<*~bmAoe5Ldp!^-#jeFHEuxkKhYM-TvW$ImbY;!*tE*!i;K<(Ejd0c+M}D#AzE! zE7TheW=TRgs?>;KoV(c|@`Uwx#0~>`C~E;RMPg^SqQA&!z#>8_|Z7`?m; zRy<*-Ikn680bL$+5hATZ{Wk8aQa*rgH)^fi{Mbd+G`s{FN_>-Q4Bldyw+Z9Q#9(#N zgK+NUUp%s5MFTZra9o8weCKY$*{rJ}X?f>r#DSDw8tOw9oHU4@&Wi&c!}|04LF6?J zfAfkZaS1_{P=mrx_Fn}gr&!j_4%41^el>$dCe@mgl2WAzgYFK-s!uHbk+20m0iXd7 z$FingGz*_6OaZ4y9YuLX!jkW{#@ubyzhtyn2s^!olAfqHzDSx&7ms4cz$ShXu4AKg zwo?v!j(ikC2^xoFguCR!0t?vFDf)V5X?Jwi*sL>Liqufk<0`A#)1pt`=pxitE_P;p zVr%<*gJvMgO;*;TZDoyETf1{OR-WOWqR`bwr>mt%)Yc9wi?fA;OUtObqNLW-*;jSa zs{!-E#wyge2bT3qx?5|MSpHDu_?mFKU1RGqhsd(Z`ohM-svapPyK5^-E@J-zX{**6 zZ`KJXm(JR7>-4!@4S_m}0kW-5rZ(=HaFlCuO9@)*1|~LIjZQw98@qO9Rt5E>`On^* z6OHAyrQe&)y$!0iwsy5`1BNvi>`e=tIvLc=m9?vL%@tzRZMKcoZF61PfwH`Mlvc}& zmL;OZvtS=5JXzX05jd_j3_0gN89?dQc?a9vI9xSl#bmuzp+SoadN5Yd06TH&6A%|R z4Xlh1)+{4r^&xj$?wVws3pxzb{Z+vm_&x0l4lFvE^aZkXWsYtQMfjqLN>IH^NW8|z zJmTSeylsyCoX86dIJKqhB%F42oGXJuVWi9x&d(EQ|5>7hesYh!-e+;R zR#A@p6ypHcXHUAH7>U-)OA%YDT4Qwcm4U0)xFRCEp@D!$lli+9xz)0?mHi_HL%@8{sjJ@L<7)Q z$k)QJsPCwL1cH%UKaw9seX#RJRCU4pawi8yJH%h1YO}YdYWBO@q5SCFY5Z}PhYcq6 z#xM8zUe7l;UcEP5UoSg!9v?dDu817Aw#zy$uZSE~UlAVA-c2qb9$z&Y01kamc#iF$ z4($XE?Z^%u=nfv34gjboDvt`0s$h4-ou!0*6Pa?JM?e2cXLs4v!q$S0r%8A>;mJ*%61}NRy!9 zEdhg+{RMaq*dX{}py8*W;X0tp>k>kO1u<;a~cV5F8%4`ga)ncR>4hjQV#_wy((gcM$t`xB>==o*q1>{QIeH z=6x7HzVxN!?P%N(-JTMcWf?pEq>mUNfp?&W!dC}HLJSx*e+HfrfzP`nBCt@D1sWa> z5`GDSBnXM`2MLfHjB2t3p#nkb2oR;%R)YV`um%VcVgHVG{|;3Djza&AaR1JFz@YL! 
z>0#Wyg6$1c{qv>^_YS5314sc!N&`WX4-m!LHiX9S4j81~b|Bk!;M%?--o7H}-+>Mo zj0X+R14rtEz<=-GnxT7q@Bsht7JzopgafF710@I5`+tA}9znwS8vsXCg%0V^0D4IL zB@iUM@ecqp;0zq!9t!^g6rT_hKW=b}w;Z0;5KlouV_RgeADCtBek0y(W-x#*G++Y? z;Dh1d0p-4%V(bN}rrT!%7c`ix+kdz@4-h5V-iE~g3x>o6ievzE z2nxR!8GsLtuMdvD88!Ul@rfGW2{hcg-w2oh?rot>vOYwKUMK)Q7`{FT{-y*T3o;-g zK$Ls?C(vEk&xo18kdQ% z6iub45mATO7YIm~Qc;petjxHJ5C{y+kjiJJquPdN+BFuV5~$==gV;w2`FLofLx=~62eN@f3D_z$4Kn-;F){I~pw>Svvg^On z)XN-`JwF|_3w`e=?cC%NK!b~gfvX=g&E-sn$E;8zgGT-v;`3)7wM;TDNe3G)0;g(! z7Op#@&`~K=1e%JHiV|t0zXY{j=4Ao?Wv7r7he?`8SBx$-^H^p_sUjmND1Wy>#xcne zluoVpXYsg)N{x^;r0SB`YQmyBLV1c*2|m#`8#M7$s0#K!1=l~#f#1*;vPCjq8wnj_ zZOVXA$_QfFgEbfxV4Oe}oi{6n+8Oi4hTik9TBEZ&`UFxdZlPXXPClOwT`1Xqmb}f*>l+ z1tZ#pha^NaNA3wQc1d+M~#omF;_l2mNZB^ z7c+NMKO%&>3q?aQAxciMAX8205GSN-`BjrHAVPE&^yLsKk*4(__T?x=qyUC6paNXD z1+Hqm)P)v(t>`Op4Qn=*xn_m>C5ZsSkDx^C+ezl<6`OkP4*-vX4K!J0#E_1%y zD1buikl2&5n;~;x+Ixgc2>S}8DjeieCDH&zXmE;H zGZjT5fNWYAGm#zcL33M3se&y!8V_$&5f+R*33wX4bfC*ba=o9n^*y$tHHQ>@B)W7- zPl8`+-@bfJxUL9#ObFx2?ApgT{~06cPwA`8hx5B(rdx%G>xq-96udv%k1 zICoCA9af?}4^LXZiJn?#B72c&qE9laf+Sj8Sey$Fl@K*|4RM&NLM8=p>uR|R?FIp; zTJc;u&I}Jn{2kn}I-vZ7WIcx1R@*)qtYn@vM;&=~0tN#lL)2R|zlz&*30AXgyoqIa zx|I(Xk^HUrHTv#YRI2O;kDkB@Y{r*>V0+_-P%n|E2nBJl9a}Y}DI`?fq!B4=RGP#E zhzh7`+3YFck2;rHV8x?85xw*@u`yARb7&Gxzv%g4 zgU>>b7OEQb2XA6*Yi0?exJBo+kZMn$Sl5*p2y&#M2CGrCwJ@ZO*p8UQoIG4+=?{vE zs?oz7nXfEbep%5goP5<}65GE{Cp7VGju#0+f`{G8$VMG4j38_?BVn=lre_*f%62}v z6kv1*!MN`%#S@pf3)AvNrep%Rb)xjEa;+RlKgcW(grR|ft}XjW_RM~BY4wpK+Ko0= z5B^Pn!VC;KnFyQL)bXHXW%>dpvt+JwzE(#~S#l)bQu4<4WISzBf+7a)4lB3QMe;5y z2Nq3JQx{y-bzz$CGe6(Qunh}|VF`~#!<_LUh{mOTtxtlUpBtKrQE$djq#bRQLRGU{ zq@zwn+mX?g~FvZ^H>THRMtIGI5h-^Z*nq~q!X%I^!RY+BFy59s11%JACUwZ6EV(8dNu*zppdi8Fym8j_P za@_yXg962C-s|Ps6cDLx+cH#Sc&2XZ#pRhDY$?r~d8i{P^Z99K4c9%dW1=LCuyA*G zH+3hKmOFKs6J#^H*WxhR(j<=0ZMwk2g#XjEoj z_9`aFoQJTU!O8Ys3!>6>#64x5K=5x|V!QPXB|~TfodVclui_yR_?teEMhWN$D>7!z z#?Jz8%OwDF%NogA^ULmHf)70zLX={ufRfoMc446$R)=g7+FDRaig<2{BQ~5M+FE5) zhc7?UQd}r`sq*+J&7v1hBJDdgSEJ95Pq 
z-b=t2GBvb~PZ@ce+QsM9azd&pi=NeP$N+^cx?-dFX&%Et{3vLAZD~1@0YViTb z@1EU#yKp97ZwKo0p=mvS|IGMaL6*!_E8MTI{p=Hh>_42-|ESOk4G(7G%-l?shHvJ> z5pm)StQk`ramX$1anY&KY#hYhpsS)D!_cfT>KOL#A0vD5?@OqH;i#3Bujj-G*NrOY zT1=3Tt)(<`3yP@g8%|K&`pB4@_Re!0=`MX~MtmM><%yhOB=^{&^kw&uS3UyoMApLySG zsc)>J^LQO|A;z1wy_;XIxB1m}(7kQmh4)=Yag0BDojlm~vHR@FtbSEmdz_0J8PTYF zJdHiL?3wSm-IGweQ0di4F)+P7^S>IIT0~oZ-y4Ob@*;e#Ap1B}yvxpgv|K&R75uC4 zSCWa+?<$;ui9tt&W25cBa(Jl3fS0-Q<#lNWvP?ygp8ce6yW2$TZ{DHCTlI^L;A!u( z+3N+~6Bj{N+8st*IQ#Pn^%4lFj4Gp0W>&tGM)7-TpVMt>ywAq_;5S>J#ipB=KZ{$< z0&=3%?!Bb*DTFdPeZ~j=EnJVyeO&&Hr^%yf0#2s;ezOV;Z|`Qxq6 zM`tA*Nie$|?g8qu*V`TIp=@_Y)I|PP!|mfq(Bl0L_~oHxjDy$~Pxm<257HOZNpFo_ zEsD)bpYzE+J>JKoI=hdybhh{~ueU#sQ`o1Pc;rU5X10HLP4F;8TF<&f{xXmhDow=j zi#5v1QILBV+ODz|jMaulm`0yTzd}7s^t5%ozPBIda$6kmZ5{fwS#0JlzuZ3g&E8H? zAv5pQU!-YO+*fseH<=LMb1FsRr;ohX^D)dP#;N{o!=ctYq+WlGh;#I!y%v9I#@TQ_ z-W+U#Gjp^k(Wm_tC}< z*QGsUpoo)$x;@$G_S1TrP0rMHN5NP#NMrTd!IFKrMbEm27%yu*10gYvY2K8iBC4ZagHGvJj~me| zV?{Xd-L;;?{GJy6>ErC}(o1WtE6r;$LXIgJf)6_-#};Ad{H%N&d#F=PW&;+;aQSzc zB9OTV-L;YFK>3X`>?TSK4=R;$x1Gw8QOfdZ8FS;ohMDeQ%|sn_WJRz$uC9!Ha?Iaf zAya<2fY`2U=WFwHK3l?qG_8_{!iu2c%4$<(`ON&A)N3BuRauCWptpT=5P3&}i<}CJ zbbr=!%f>4zWU)Bt875ebJIPNNVhXx0l=9a$s>jqIe@8f`0JqzL9m6Xqbg%$FtQk`e z7;Ld0Q645LWwLwo9W{sZygfH>;$I);nG7EA6!B1UzK(a{X_tmACr$4%WYmw9?M+-g z(DWLmB6WPDr6iH?F>CY@fUA9?GZ3QslT_!_tTNhG=710G#Hy8*Tb{`PCo5JgnI*Gq zPj_$$B;X%UJvk?8h zlit}5)p=$F*Jj?>>-YD#9!c}*i;gP%#iNq` z!tM!PMaEVN?5%q?yVWA=`5$aeZpEO8I+)2#lPPX(bE!}emK7)tIXpGF{J9Rfn&OM~ejY3z+3eyCj_bX(QN!&AN^uVGvT05-WX~y$5+d}mP#HsSj><@=ka-c+KXBnyxDUxB z69Ow#VI&=sG(KIe%lRVUn)Ie zKj?vZ-~JZW@FacE|6`drO%bcAbbtou`*DGE(@sqUrsMn1<_<_nq{NnJ6QgW2lQnmk z79Xf-<*Et0h4V7FHF%7Iw!TeX=fMW7C-vcc=3-y;iXc=EltPnYMP3ogn{yXAgl$Y7 ze=sSXo{eP9n@Fnb8yRSy+{&r`uO9_Ya-FzG?Umrk0r{eCyHZq;7mw$4!dh~bQwxuG z`Br3kt`2Zzj9-}}z9?ld;%%!|SfX&qF>T){3jyE8BjCgL2sT8d*)OKk)=^-9>f2jn z-guPJHyA~HNxkeS{xS}R?G_=5l#uA>cKuqmx=-25jgzmG^ezG-0e51+7YNzi0SU34 z)VdlI4eFo_I(3J9r7jr@i7JWQzZ?=~iLk2}vq7=4+48=;A8BznZ9;)RJ2o}XXVEDH 
zE4d2Zxn7!&l;A@xN+U`l_MUe*%=+&ekuZ8Ux5;t)v?j^(o@c_xTKI4aACwkZ77@OM#CGu6s6e#6 zGE?v#*w8&cdNxTI+k7sI-G)*nq<2)^ttsi=jH0l*5coU= zz^5?;{vMcb_pndh5!7WQW5)BjF*AI@z`d4ws_|=GiWAs=)xm|q5}5p2A-6H zjliCMDB4w_;j>R}86}1Ro#A8oQL`;zr~6)%3+z>Y+IMl4|6;4ocm}-=`Q!`PQ!cMs ziDTGBL-1G>W_AQ4(GCs+gKewtk%`o1Nx+;hibnczvQARs_VgwZ9^AtL#-OQchgrHf zVflDEto6Zh#@?J~?v2*l8|%s@KmLKD3#Fe+)|iY>B<4o`H+Htm`8CBO9gHrDcvfRq zPo%zEf_~L!P!FdpgPH(=Qj) zW&Mq!i+Q>gW&2)K=Qp;BM7EeZX>!U^DaTN3MHsw;$3EvGOru)Yc{J@Me$E@mYNLjg z!Yr{;zQX=-4!uOK!rulW&FS239K&z(n4h5^B4YPdj$2dl>6(sggc3aud8XUiXNXv- z>KV)VJmXuux6P@JotXx|3pd{$#ik;Ir?~t+=*>6@GI3eNxq@GGY-HOKfXOozv&13$ z_Tt&Q)Dn4Tn= zftE31SpPx_9z|<`vv-EfGhcsX`XcH_au$N;%VUr>W^O{L7=zAC>*=!CNT|BN{xy%` zc$m)uZNLzrR1!DiqFV(uXI?N9-Kfz7uA>)BMXosxr%!)jvZSa&U%`S{Zl3Mm)VvH$ zuC4GUOaw1lEE#nn(Ntt&k4Y{?Et$%{?l@L4<0#l8PYZOb6Y+W~gcUUpcixxWBZ8Vm zNgDvwQKK75W@{N)g0YDnPJr2W{rbPPKaJWszqmVgT4Sg9 zXe!l>%LC8Sk=jeMTJ|z+mbz26%2_MeHrYX3$c)v?kTKT3p0uYE*eE4+fUkN)=O(z$ zUk)|$hPJn2B1c!RG|iSi@z_Gc?D&&2PkxF25&8DUJe#*f@#5xY7$#)ZVJQTq!#0>A z1lM=JD42e85jCoMhXO%RbeEo?>upI|1^X7oqzeYh`dwX8h7W4BCFm8yrFk z?hqL8T2~evpf&taJJsmi#xL2)W}#uDXt3UNsHC7@+y5&ETnGe~5rK0`^y12 z-(-!n$9giSuXK%bpBz=oR)$*i?!l`T$pHW>DdTSQXBxrc>f!!a7M!dhZey&xy4=;} zRY`0`OPg5aSj~cWuM?tf-S-c%d7pL(j>DPmTfB8#{qZR=_?LX_4;T!+B7wCPS%NcM z{98$Mvb%NozAm4HA~D!fzm(s3t6wKPNfgXIHZGo6w1@3U=(y# z#bCHQk6Edf<*4T??F*pnzrXxODph{P<^0h@^5bcK$K0-ZKT2cL(=b8 zVkvBBCbWmr=B#oZ=|IKxP|aHRT(pP&&@Upz-OZ;v3HrBL%*K)W87`bRy3c9p=~t1+ zU-uJ+J3EmRZo3jxQ7D7ix~T?K6^Jl>Q_5EV*7~oHh7z7^YH_<@!*o%k-yob@T3nCM z)-7HN7K7pB@5W}q{~D`C%0k?R2q+;7mz)gw7*n={;hH#s6cUiHXr^S+)cFL;-e`@U zuOIvQbyrzmM{Inn^Z|`-Q@7TgU`Gw7>hWDEqbyGLU1jI=l6lPT7R0Qs6_*bo2#dFV)6dY`OVpZNANe6C0{d2zbKJVzaa?LhXLO5ED`j(2Uk*{%}U5D z6KN@>==_D>wMb_a5d(H6;mYqxu&cL~FHw{QHL=rfR&a6Y#T{!{+2g5zO+OhU%=+~f zc!o7><|!07p?8-Wf|Kmx+7^W zM@fv^PzE>Yjhe{Y797%8XHa|A=GdCRf{(|94QQ45vJM6#J39GxWaMnL;7(F2wD52L zLG|yBh(#TmsB3k~R?V5&{2tL3iP83=d(M4xrq3i5VM`C9%15+cv;A!x0})=BB?ScJ zwqkbzHf0|W?|R9;3;4ZDm{1+a!ytE_CW=)+wWMbL)lz3T3EkjQnT&~xA+5vjBG>3+ 
zYeZnZ&sCU>WjpEd)z+u~yJTh%>-@s1Ew;g?FRQy}fn+_cOQ|3kc-t$+-)sKMAnoki zv@jW=92D3=P9klZipP+Su?@Q5s=^t8U&Xi9IL*tGAD1s8{h|rQcOf66{$hb6qMA}! z>8-_{=Yn)Y{_ll$X_O2gt-q5(-aL$8D06(u!ttl;@3%Ga6GkFTipj|p-h~rgG+6DdDSA#M3hd9Lt_GSgurz?hSGITidXWhTXB(js8vhw4mk*bGXzHy!zn-RYJ*z zeV4P)(W0shGxiWlcE(`c#%~*CX9%Vts+2=dsJ zEW6t8a%y{yeNm?h64o*7+*i>co{6-f!gCPe%{n|M8WTiv_5Fl>=z-Uk(Fp z1G}G003bg*_owr$+(3|x`4h{=_KD^C41$dv2>y$gjh!192HU4@Hcp^IR<=(+*f{@j z=^ruRdsY8t-~uL#i<|r3qJV!K_$P_nK+69pkkiWcyUg z&hp8`@hQsA`k5Ox_D_)QGdcf+z|Q_p)>(i}IOaHfFZZ24ntg1_Bw=ghVkTy0^3BwYNzTmP!o~7)VhIQ!!u@+5dSvS*{IDKmMhLs~ zgCkfMb3j+WRZs2voQ%JLc-o zkjjc;ZOhye?a;mVV_uG!zmpDbf_zb3wI`kJ4-ZrOm%l|(bDwwUqJ5uY(mPLiD94%3mG z^g-|5fsf|F0T=?j^^OMt(K&mHvr@$UyL`88sd9HJ`ix5$-MSh3}9em;*`O z?6JrG)T9n5Yq!+;@5}a)*^-lTewze?6Ah6#Un>0;uoRgH4Fte*YJ=oOA1v`LeE`ko zC43mE=k|bS{_A_d?v~5diz`S228a(zc^&D)bm8e`)!KKZmjaS?p{H$co7@7THTy*uZ%D;UN+U)^um#xq4tOQ;$t(546JT6 zoz8U5K+Ob`zJszcG`=WU8#JuGgNJl#z{!hJ(Mk z-P3JKX=3&kRyUXM(zcelc`X)G9q4{+*EER{u0w;&B~x0WuRHm^5M=+AI$^n*rmX!; zD98k}Kqob!*_tt))|HLZim?ol+nZ^-S)AK$K6{9KQN+$OV}V}wgigK@oidBdg~Z1@ z**0x~4MBg@upofsIwF0#UBp_!olBbb_wro;WdhCSPL2 zrR0;}{*c>I7=ffM>9WlPk`UIYuj!+Lef{?TomY-o|)CvdAIYe2u}tgtc= z%n=AKyeb~4)iIwnzb=|SCX6gBDr%ED3C8O*pUt%T)>ed|JGozXLZ}y7`Cx>u!$ZC> za^B!xVTxRtQckr{Rqh|PAxHCjoq|KcKH+!KdKU{ymM%E5$J#Z2M+^N{1{BM-M41g>+wSg)(wYP)y2zrz^Uc!nR2D z;?s$wW}|A)Q4}doT9wgSh^*LF1h+Vz2CI*>=c-bARoWXREHna9vym;q zUtGGdE!$GqUIyD>La_xsyw1XQ2!5^bALa}#Rk~a`+}s?=jljd)GBY#uHyKPQ1cy1D zqpYdZ-({IgX1=*Cmy_Q+m!=;1$cAq;8abye@U7Jhx0W~~VdfYJ@RtdnypAvmM9&j5 zMq4y#rs$Mq(9B_!!96{enTVgmog!+c#)uSut*r2tZ*IezK45C6#^;vRXfQI*FfY|s z<-sywfzC70-=L0%f``uQ(LhQwbt+Y2810j{o|9KCSv0S-5= zmC;*=+SB$BNva~Jd9^5M^kB>&NbhvE`ikm59%7fqO<-iq8#t8xmSHrwpcd)Tb8<0Z z8r(7Nl#5i(p(}9hU{{H1d+Q_5_tkYKJ*j`Q@9R0JsHU`#1f{|o2Umvyd#lD2S}*ilxe&7l`1qpvfYYn9n}*R8w!A%J~4pK8FU#?k4+@3Ss(O{j_r zO=;;!D4q1~x4V(O^u2U56C_Zh&wg|~uh<0{%H&ax{H&;!{=&1Rwf3I~E z`MaK2>Ez9_o1B$HAZ=q8u&NgwAckKlq`%ko?Bx;lCHWr6~C9;QSrNX z?VCeh+TXX5xOa9|NhXchNw$D@jX za`euZvJ=#{Epko9QNB1@QtJDu 
z&w4|VqjgO?W5gU;-cV<`9TfMheRb6%B$~sRv9oFFZI{r#p1*jiWwDZQE87Rm{g{KVCGH{aS)CXA z*1YGjN8WboprY`#q6dAbDqlqxp4)?I`!a++U@U9R*K3ZDo9qI_)5&}9egK00S6MT! z7x&fat~K>YkT2oPfO9d#+P3N5R|gR+E`)ul4@&&O%T_@lG(Rq<1nF8x8YiN&V6gTt zgdx@>Y)cCYkGyygrHl!ET4%25f`;R^_)?ZkFq;v6C^$ax$bT%k$KF@U8MX zNuA4pE{A=0<>ly=nE?nb8TEwJJ*PQ^UTJ&EEvk>fRHr(}!zWK4FoTixC;NBpcZPSg zU_gW4P5)k1V(SiOjgu2U*I|9BY$`7 zIe`bVK>wxn7hm=yR9lP{briggAY1ZvDI(Y;@I5P3%Gt!cB*8@I4P$#6jp>@hwGM%q z+0b?th5eBZv<<6^zF!l#y&1?iTieLjv$ukW{^K7=Rqt>=NIXJ>cNBy%{K&E~W?+*^ zBUm}3k5GuRJuDl|oMelM8%gKTI`~<`&iP}mMea&L;C|(CPF13LH8{Pl^oH3%n6yFj zcfwzcuKYUUM>O()s=D)>x|F@5eMNeE;qQhd8l=L;KQVw~Np@0+yzYsq9%;q7Wf!ZZ z=o<5LLLTwV#vgV|>WQGIOgsB^pE55CI!hDzU zgZJkk+P&PpJopIg%kvLFzNLCfgHR~{vKA8HP{d5hN6ts{D)*Rr)u~BRiz3-jv8#No zbxoj`#3KgWCuq#X&045mh2=LmWLR*!UGsX9l+T4fq!+Ky{9v*fo8?)cR zDn?;b0gm@k~cY~02KzwL> zL696%c&D6qsRe}vc8~(`{yOtgB1xQ?61pUWcSzP7*9L1-W3uj`YcSwbJ?XNzi+n)9 z4$Abz)g9O%z((*-^~IXIVWW=a_5bJ#eL0lh&^g~A^Pub&)nBfYJf(c3N<8GzlqSrT z3kfdFIi-Ye4EpTq8QdTL>7FiPPes1$vSCd*R)?3Yr>}2E zN$%ZsiL46la;;vLveP@HtZ&b-ag*>J7yAx^9~A@pn9%Sesa# zSp7dO5(L;WVnikj$0lOT4nEl^(L_H)T9ttt%}-!PZ&xLPh(Uc@Nhl?#jQeAZ81GW# zw__ct+Us`Kr;x9k(Hgjt6yt{G`pwd3a%T9PDGA8FdzRL=wzjTC8KHe^NZEAy9o?om zcPx}tX;tL^yr%tHRUJ?@_SN*Alr&lVR-A2TOW-KeV~PLT52}(~R znOgdBczCCMhdrIqxqjLbo$4j#>s4E;<8dpi;|l5~W(JHO3u*}Y9MhSMC8^He*#|6& z%Ss)V27<7C=9}K3c;;y5=U6FZJ&oe-oWORk0g-o=`PK&ce;$`l}HAHc9;JGYK=DQRn++yt|A$YWNGlU`=}Cqb{Ar$ggWbo&dD z$f4V1%erZ*NqK#ma95%Qe_wCwU=f~NTr#^aUu%9v`6X|~QAx4T2Mezc)TJft&Ov%u ze$O8n_ECQ=yh)-b1T>IxDJsCAH8BXZObF3d?yXoOsOY92#WSqY`D!L8Q|zABFtl!H zCp2_Fi#kVUL0bRLaJG;`vruT8V;271$>wS?lWuK;rWG+)silXGjw$92@-`0D23{Bn z=~T-|!Og*tI!D|o{=M{=+D;imI}D#(*R;cAUT@3W&0EyW&)RrohOcV6*Y0Khvb9aI z_haMFrNP@&-Nnm{Zx8RcrUUbvb-4YybKNV=0 zxWemmT~Fg9c$>b_*sys;LxDf|)1O>sdux1`fKxXa5&aRJk&9JQQzlP;t9jjo#Cy@A z3Vk8qdU63?*8Y6t0cZBr$_{4pO^4m-@}tC2Ja69bHMP0zy%*PeM{^2M{^O(*qtvgg z^fEmP%K33)wde<5F1`+jX8JJME}K&x)E{S+925Wm?t@bz&WI05bH{CQ9PF`^|fqVn=*5(l3vS|EPlI*nyT>^$CJ 
zbo&X0YFN@Fi7#3(beIILY^KnnzU2*J#<;VPwaMaoZz|{(`$GXLjqOn#4$pkmHl0Gy z9+O|8sn2<+?v5ur%0=nKg>H^#D}X74yy6$u)eBf$!YCKTqsG|E8KkG z!WHxLbr2=$W}4Oy_q>?Qk$gHW5>MM?Z)7kT*jY>$_Xt%PaR-OIOZ z&XIrlcenn9ZWABYq`rr*#a8c@`nr;Id?xclSP-{dtVM^s>l;cBULw@5KR;bcf4W!* zIfgJlGb;YPbd5=2!FmIY|1Nn#%yXsz_Li#zGkhPz{H-VbFDNZMRS;^J61Z1I7%PuT zEspamXr9ea9NJ{d#N9R4PIDPhD4UgYU;$Q(Ge(9VT;Vcb)^S?%67MQbN<7zC`uI{< z$9zS3;81IFMUQ<@6Sxo(zBnBN*6E|Y1cw<&ifj!_^X5_p$ydI(k4Jd*j9=UF%UMh` z)GYGiQPcMsO7#6l!k$WbR?29N$D1>mb>a!M1iM=%3VOt~qqi5IFZrlsNG10Ra%kcS65x|L=l<503AWuYZ0< zp1!p=VjboHWE@Z~6<*a!uXs|#No%^y%5IL(;IvQYd4p%r_gRgphck_SR`!b3}6%P z_Ewf*ng0nL+@2|6Jr7xxwtWJZ!lE@G)}8EKOBXI0YadTS9=u&5 z=-yJ1sY%gDU1QLx+X|nb9<@hK9fqU~iW|6-I{8HS6dkH<7cfP7Xwo9oqZ!+YZp*u3 zZJiTd^U#R#izgA~c1Pw!aKAvA$;6AFUI0}79o-%IPr=_gT5PdOG&7``ajS@7(I3C2 z_hhCFqNJ-;vgvd6vz=$O#o5SM){9P`IWiQ?8-|+yTIDKgiMAjx6DSi3Hm%IE%Kupn zvndzgnrsX{jyBI+s)fMg=+#$GLmd>6vtj4WvztzV6fX8i2Y%I1BYKKCxy=MKPqc~m z#8+@#_}7_uldK7)RcHA^MrT%Kkg$AGxHuiay==A;{Uj^Pb7$1oXqa;<6Ov-4Mm8FF zD#!Ih5hLASl08S}NovdxJLl>Kxm&*$9!p3s4t=xl-=iNG?|l>AJWk9K^UisvD|3cv zGrPXvL$~0EGd7eJD!Vt*y4SDew_Fy+4rp36Q#G(b&`(#71}11xok!@oAsSrTjIN~@3|D`XFND8F;-mmz)=f%9R|==Clxw&q}-Jw8gRC<&FRg1 z1H6JlH4n^ZvSdA(OWusT)Tb&zPox*pQ0BSr<%({KdOKGldHNO9M=iW?YxqaAE2DD1 z3)JLs3tFj(p1HuYJWaT8mhH8#zF+&>$7-+Kx(Q8A#oSd8?EWg0I}Kl3A0#E1K2O^^ zS9M;3r}a0FpY7fOV9%nzQ+oeVKIjIeaoz+jDNxa)(}rRiwK(AXT^u{_Kl<6OI(d>v z);}|&bSEf%^NR?K)cpBFg30rX9erURLVGv3FD^YqkpoUB*B@`bo)0Wzp5e#;=Tbix z^S_t+|EaKXGJn=(SXe)cTAV=eze^}wEdOC;<@&6r{P&LySnK(0|Bd6|0G8>1?GyL! 
z>wnY#19Nh65p%Hui$}oriRJo7|9@Ng7v+=hlm2<-WMTQ={QuJQFaG~X{o6hV^S|tW z_CKlr# zF)}rBF=A46HFo(#Xv+VucC9D2Z7PgwfHcWc5U>qc)d|XHnBr*p_8`V?TJLwsDi7g+1Ex?VBKho2|60ZWIkmuV4+QdAJ4f;NN11pJBeVT2M0b*Z)`Tbc#8 zXqT4%jb@kRPP+?SEfj7Zf=Xp83WY*Kcg=dxk>h&0Ubl;hEY9G(B%x?NtLLIv*YeS< zoXbRYMb4-?yh|CBZPD!24Tu6V?$mt4F|D0g>(Zr=XK%s9+=4dPK_h)dc@VBaxcZ7f zU&4dYH3(N<5$H>JFuM8+S9{rq?K12NxA)uWzxV4)7et`9#DJH?MX zC6^UD_xfB?LU}2JvslmNvsxbO!A7ZjpOpS*325LZg>gvhPF$LA$=|F^tRLNb`S-c= z%rPeV;mNmee0R^)GpFBpX`jD&=!YlHzTkfT)!A1LOU7|DHaqQ}nt%G`>t7bGXn*{2 zzd8E#fpfQ;%+=$g552ee40CGNUpJT=hd)?#=PwPv`|!S#8|%ALI#V=mJFnleu=e6^Z=;p|aciwSV z=DwBcZNhNwfr*jRd#`K^$2Q*ozP5TU^7zk7u}kyDp=bT#_OcbE5E`&j4X&95_Aj$2O zlN61_RH)eu%f;i6kcM&iSkPe9^~zxpLsB2YBt+Q5P?Ac>sgR6^b>kA@Gmewo zWE70!-gafA5kAY3NZ6CH21$8HS0!VLAWb6sO~e!oDuBcN3)RLX1_uy?T%3YI9dcMs zf&&Z!6DSyEtidq_sY~GjBb1b*@PIxd-9y^1Su>B z2ZvI44g*Z>noE4ou*;SgVc-m}z3j^W`C8jvO%2a)FET2W!K)GG;KBUlScKVHcu<7- hKYZhlde&2LwhAus!CqwAuLLKpsv340OBi>Y{tIcQ!Rr73 literal 0 HcmV?d00001 From ef96643257abb6c2a265dce7765db314a0cd84bd Mon Sep 17 00:00:00 2001 From: himangshu81 <145715398+himangshu81@users.noreply.github.com> Date: Fri, 24 Nov 2023 01:05:24 -0800 Subject: [PATCH 06/52] Indexing data to Elasticsearch --- credentials-e1bb73-2023-Nov-24--00_20_11.csv | 2 ++ index_documents/index_to_es.py | 35 ++++++++++++++++++++ 2 files changed, 37 insertions(+) create mode 100644 credentials-e1bb73-2023-Nov-24--00_20_11.csv create mode 100644 index_documents/index_to_es.py diff --git a/credentials-e1bb73-2023-Nov-24--00_20_11.csv b/credentials-e1bb73-2023-Nov-24--00_20_11.csv new file mode 100644 index 0000000000..4825751a4f --- /dev/null +++ b/credentials-e1bb73-2023-Nov-24--00_20_11.csv @@ -0,0 +1,2 @@ +username,password + elastic,pciWclpLNdXuicUhXV8bhgk2 \ No newline at end of file diff --git a/index_documents/index_to_es.py b/index_documents/index_to_es.py new file mode 100644 index 0000000000..6260c0bc83 --- /dev/null +++ b/index_documents/index_to_es.py @@ -0,0 +1,35 @@ +import json + +import 
requests +import sys, os +from datetime import datetime +from elasticsearch import Elasticsearch + +ES_URL = os.environ.get("ES_URL", "https://cs410-project.es.us-central1.gcp.cloud.es.io") +ES_USER = os.environ.get("ES_USER", "elastic") +#ES_PASSWORD = os.environ.get("ES_PASSWORD", "CS410-project") ##"pciWclpLNdXuicUhXV8bhgk2") +ES_PASSWORD = os.environ.get("ES_PASSWORD", "pciWclpLNdXuicUhXV8bhgk2") + + + +def write_to_es(doc): + es = Elasticsearch(ES_URL, + http_auth=(ES_USER, ES_PASSWORD) + ) + resp = es.index(index="subtitles", document=doc) + print(resp['result']) + +def index_subtitles(): + with open('./subtitles.json') as f: + subtitles_doc = f.read() + subtitles_json = json.loads(subtitles_doc) + for weeks in subtitles_json['Text Mining and Analytics']: + for week in weeks.values(): + for lecture_titles in week: + for lecture_title in lecture_titles: + for subtitles in lecture_titles[lecture_title]: + write_to_es(subtitles) + + +if __name__ == '__main__': + index_subtitles() \ No newline at end of file From 0b185193744bc8d0ce77d5317df0a0dd1b362f52 Mon Sep 17 00:00:00 2001 From: himangshu81 <145715398+himangshu81@users.noreply.github.com> Date: Wed, 29 Nov 2023 11:01:13 -0800 Subject: [PATCH 07/52] Adding basic search logic --- .../CS410_Fall2023_CourseProject_TeamCAHJ.png | Bin extension/index.html | 4 +- extension/js/es_client.js | 18 --------- extension/js/search.js | 37 ++++++++++++++++-- extension/manifest.json | 2 +- index_documents/index_to_es.py | 7 ++-- 6 files changed, 40 insertions(+), 28 deletions(-) rename extension/{ => img}/CS410_Fall2023_CourseProject_TeamCAHJ.png (100%) delete mode 100644 extension/js/es_client.js diff --git a/extension/CS410_Fall2023_CourseProject_TeamCAHJ.png b/extension/img/CS410_Fall2023_CourseProject_TeamCAHJ.png similarity index 100% rename from extension/CS410_Fall2023_CourseProject_TeamCAHJ.png rename to extension/img/CS410_Fall2023_CourseProject_TeamCAHJ.png diff --git a/extension/index.html b/extension/index.html 
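The quadruply nested loop in `index_subtitles` above walks course → week → lecture → subtitle before handing each record to `write_to_es`. That traversal can be factored into a small generator so it can be exercised without an Elasticsearch connection; the sketch below is illustrative only (the `flatten_subtitles` helper and the sample document are hypothetical, not part of the repository):

```python
import json

def flatten_subtitles(course_json, course_name):
    """Yield each subtitle record from the nested course -> week -> lecture structure."""
    for week_entry in course_json[course_name]:      # e.g. {"Week 1": [...]}
        for lectures in week_entry.values():         # list of {lecture_title: [subtitles]}
            for lecture in lectures:
                for subtitle_list in lecture.values():
                    for subtitle in subtitle_list:   # {"time": ..., "text": ..., "url": ...}
                        yield subtitle

# Hypothetical sample mirroring the subtitles.json layout
sample = {"Text Mining and Analytics": [
    {"Week 1": [{"intro-lecture": [
        {"time": "0:07", "text": "Welcome to the course.", "url": "https://example.com"}
    ]}]}
]}
docs = list(flatten_subtitles(sample, "Text Mining and Analytics"))
print(json.dumps(docs))
```

Each yielded document could then be passed to `write_to_es` unchanged; isolating the traversal from the indexing call keeps the nesting logic testable on its own.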
index af25248055..a4b126b8ba 100644 --- a/extension/index.html +++ b/extension/index.html @@ -35,9 +35,9 @@

CS410 Fall2023 TeamCAHJ

Search Coursera Lectures

- +
- +
diff --git a/extension/js/es_client.js b/extension/js/es_client.js deleted file mode 100644 index 29ce152e3f..0000000000 --- a/extension/js/es_client.js +++ /dev/null @@ -1,18 +0,0 @@ -const { Client } = require('@elastic/elasticsearch') -const client = new Client({ - cloud: { - id: '' - }, - auth: { - username: 'elastic', - password: 'changeme' - } -}) - - -const result = await client.search({ - index: 'my-index', - query: { - match: { hello: 'world' } - } -}) \ No newline at end of file diff --git a/extension/js/search.js b/extension/js/search.js index 1bc67afc88..8fa109c619 100644 --- a/extension/js/search.js +++ b/extension/js/search.js @@ -1,6 +1,35 @@ +//const {Client} = require('@elastic/elasticsearch') +document.addEventListener('DOMContentLoaded', function () { + document.getElementById("searchbutton").addEventListener("click", search_wild); +}); -async function search(){ - return { - "lecture" : "Week: 2, Lecture: Title: introduction-to-text-mining-and-analytics, Time: 8:22" - } + +async function search_wild() { + const ES_URL = "https://search-cs410-project-hw5dhpc4jsg3m74vnbalajt754.aos.us-east-1.on.aws" + const ES_USER = "elastic" + const ES_PASSWORD = "CS410-project" + + const client = new Client({ + node: ES_URL, + auth: { + username: ES_USER, + password: ES_PASSWORD + } + }) + + console.log("Inside search_wild..") + const query_str = document.getElementById("searchbox").textContent + const result = await client.search({ + index: 'subtitles', + size: 1, + from: 0, + query: { + "query_string": { + "query": query_str, + "default_field": "search_for" + } + } + }) + const timestam_obj = result.hits.hits[0]._source + return timestam_obj; } \ No newline at end of file diff --git a/extension/manifest.json b/extension/manifest.json index d7c94b8af5..cac34df18e 100644 --- a/extension/manifest.json +++ b/extension/manifest.json @@ -5,6 +5,6 @@ "manifest_version": 3, "action": { "default_popup": "index.html", - "default_icon": 
"CS410_Fall2023_CourseProject_TeamCAHJ.png" + "default_icon": "img/CS410_Fall2023_CourseProject_TeamCAHJ.png" } } \ No newline at end of file diff --git a/index_documents/index_to_es.py b/index_documents/index_to_es.py index 6260c0bc83..b2ed93ffe0 100644 --- a/index_documents/index_to_es.py +++ b/index_documents/index_to_es.py @@ -5,10 +5,11 @@ from datetime import datetime from elasticsearch import Elasticsearch -ES_URL = os.environ.get("ES_URL", "https://cs410-project.es.us-central1.gcp.cloud.es.io") +#ES_URL = os.environ.get("ES_URL", "https://cs410-project.es.us-central1.gcp.cloud.es.io") +ES_URL = os.environ.get("ES_URL", "https://search-cs410-project-hw5dhpc4jsg3m74vnbalajt754.aos.us-east-1.on.aws") ES_USER = os.environ.get("ES_USER", "elastic") -#ES_PASSWORD = os.environ.get("ES_PASSWORD", "CS410-project") ##"pciWclpLNdXuicUhXV8bhgk2") -ES_PASSWORD = os.environ.get("ES_PASSWORD", "pciWclpLNdXuicUhXV8bhgk2") +ES_PASSWORD = os.environ.get("ES_PASSWORD", "CS410-project") +#ES_PASSWORD = os.environ.get("ES_PASSWORD", "pciWclpLNdXuicUhXV8bhgk2") From 05764b0a4cc879492c16af312ec1c86b82b716c0 Mon Sep 17 00:00:00 2001 From: himangshu81 <145715398+himangshu81@users.noreply.github.com> Date: Thu, 30 Nov 2023 17:53:14 -0800 Subject: [PATCH 08/52] Working plugin --- .gitignore | 4 +- .../CS410_Fall2023_CourseProject_TeamCAHJ.iml | 2 +- .idea/misc.xml | 2 +- extension/index.html | 83 +++++++++++-------- extension/js/search.js | 70 ++++++++++++++-- extension/manifest.json | 24 ++++-- package.json | 8 ++ 7 files changed, 139 insertions(+), 54 deletions(-) create mode 100644 package.json diff --git a/.gitignore b/.gitignore index 29b636a486..12ae9d0daa 100644 --- a/.gitignore +++ b/.gitignore @@ -1,2 +1,4 @@ .idea -*.iml \ No newline at end of file +.idea/* +*.iml +node_modules \ No newline at end of file diff --git a/.idea/CS410_Fall2023_CourseProject_TeamCAHJ.iml b/.idea/CS410_Fall2023_CourseProject_TeamCAHJ.iml index 0f66176ec7..d0876a78d0 100644 --- 
a/.idea/CS410_Fall2023_CourseProject_TeamCAHJ.iml +++ b/.idea/CS410_Fall2023_CourseProject_TeamCAHJ.iml @@ -2,7 +2,7 @@ - + \ No newline at end of file diff --git a/.idea/misc.xml b/.idea/misc.xml index cad43f9617..009ceba309 100644 --- a/.idea/misc.xml +++ b/.idea/misc.xml @@ -1,6 +1,6 @@ - + diff --git a/extension/index.html b/extension/index.html index a4b126b8ba..56ae5e1541 100644 --- a/extension/index.html +++ b/extension/index.html @@ -1,43 +1,54 @@ - + - - - Simple Search Platform - + + + + Search Coursera Lectures + + + -

CS410 Fall2023 TeamCAHJ

-

Search Coursera Lectures

-
- -
- -
+
+

Search Coursera Lectures

+ +
+ +
+ + +
+ + + + diff --git a/extension/js/search.js b/extension/js/search.js index 8fa109c619..ffa097b053 100644 --- a/extension/js/search.js +++ b/extension/js/search.js @@ -1,13 +1,16 @@ -//const {Client} = require('@elastic/elasticsearch') -document.addEventListener('DOMContentLoaded', function () { - document.getElementById("searchbutton").addEventListener("click", search_wild); -}); +const search_btn = document.getElementById("button"); +search_btn.addEventListener('click', function () { + search_api() +}); async function search_wild() { + console.log("Inside search_wild..") + //import {Client} from '@elastic' + const ES_URL = "https://search-cs410-project-hw5dhpc4jsg3m74vnbalajt754.aos.us-east-1.on.aws" const ES_USER = "elastic" - const ES_PASSWORD = "CS410-project" + const ES_PASSWORD = "replace me" const client = new Client({ node: ES_URL, @@ -17,8 +20,9 @@ async function search_wild() { } }) - console.log("Inside search_wild..") + const query_str = document.getElementById("searchbox").textContent + console.log("query_str ", query_str) const result = await client.search({ index: 'subtitles', size: 1, @@ -32,4 +36,58 @@ async function search_wild() { }) const timestam_obj = result.hits.hits[0]._source return timestam_obj; +} + + +async function search_api() { + console.log("Inside search_api..") + + var headers = new Headers(); + headers.append("Content-Type", "application/json"); + headers.append("Authorization", "Basic ZWxhc3RpYzpwY2lXY2xwTE5kWHVpY1VoWFY4YmhnazI="); + + const query_txt = document.getElementById("searchbox").value + console.log("query_txt ", query_txt) + const query_payload = { + size: 1, + from: 0, + query: { + "query_string": { + "query": query_txt + } + } + } + console.log("query_payload ", query_payload) + var requestOptions = { + method: 'POST', + headers: headers, + body: JSON.stringify(query_payload) + }; + + const response = await fetch("https://ac55987c83844faa90726d4e5efe92b9.us-central1.gcp.cloud.es.io/subtitles/_search", requestOptions) 
+ const record = await response.json() + console.log("record ", record) + if(record.hits.total.value > 0) { + const result = record.hits.hits[0]._source + console.log(result) + const response_str = 'Search result from Lectures is here at timestamp :: ' + result.time + console.log("Response :: ", response_str) + await display_result(response_str) + } else { + await display_result("We could not find a related topic") + } + +} + +async function display_result(response_str) { + const modal_body = document.querySelector('#modal_buttons_body') + modal_body.style.fontSize = 14; + modal_body.style.fontWeight = 400; + modal_body.style.fontFamily = 'Courier New'; + modal_body.style.color = 'white'; + modal_body.style.textAlign = 'left' + modal_body.style.backgroundColor = 'red' + modal_body.innerHTML = response_str + } \ No newline at end of file diff --git a/extension/manifest.json index cac34df18e..34e65f23cc 100644 --- a/extension/manifest.json +++ b/extension/manifest.json @@ -1,10 +1,16 @@ { - "name": "CS410_Fall2023_CourseProject_TeamCAHJ", - "description": "Base Level Extension", - "version": "1.0", - "manifest_version": 3, - "action": { - "default_popup": "index.html", - "default_icon": "img/CS410_Fall2023_CourseProject_TeamCAHJ.png" - } - } \ No newline at end of file + "name": "CS410_Fall2023_CourseProject_TeamCAHJ", + "description": "Base Level Extension", + "version": "1.0", + "permissions": [ + "storage", + "tabs" + ], + "host_permissions": ["http://*/*", "https://*/*"], + "manifest_version": 3, + "action": { + "default_popup": "index.html", + "default_icon": "img/CS410_Fall2023_CourseProject_TeamCAHJ.png", + "default_title": "CS410_Fall2023_CourseProject_TeamCAHJ" + } +} \ No newline at end of file diff --git a/package.json new file mode 100644 index 0000000000..43e9854c8f --- /dev/null +++ b/package.json @@ -0,0 +1,8 @@ +{ + "name": "CS410_Fall2023_CourseProject_TeamCAHJ", + "version": "1.0.0", + "dependencies": { + 
"@elastic/elasticsearch": "^8.10.0", + "elasticsearch": "^16.7.3" + } +} From 6b0c737e246a2014bf02c9626a1cb7fbcd729464 Mon Sep 17 00:00:00 2001 From: Christian Opperman Date: Sun, 3 Dec 2023 17:23:07 -0500 Subject: [PATCH 09/52] Update README documentation --- README.md | 39 +++++++++++++++++++++++++++++++++------ 1 file changed, 33 insertions(+), 6 deletions(-) diff --git a/README.md b/README.md index 3b227a7cc5..fddfc756c7 100644 --- a/README.md +++ b/README.md @@ -1,9 +1,36 @@ -# CourseProject +# CS410 CourseProject (Team CAHJ) - Coursera Search with ChatGPT Extension -Please fork this repository and paste the github link of your fork on Microsoft CMT. Detailed instructions are on Coursera under Week 1: Course Project Overview/Week 9 Activities. +## Project Overview -## Steps to load the extension in Chrome -1. Go to the Extensions page by entering chrome://extensions -2. Enable Developer Mode by clicking the toggle switch next to Developer mode. -3. Click the Load unpacked button and select the /extension directory. \ No newline at end of file + +## Requirements +This project is fairly straightforward with regards to requirements on the user's machine, but there are a few baselines that are required to be hit: +- The project requires Google Chrome to work. +- The project requires ChromeDriver, maintained by Chronium, to be installed in the root directory of the project in order to enable scraping (see Step 2 under Installation Instructions, below). +- The project requires a working installation of Python to scrape new course content. The file `requirements.txt` includes the packages necessary for the script to run. If you plan to scrape new course content into the project ElasticSearch index, please ensure your Python environment satisfies these requirements. 
(TODO - Create requirements.txt file for Python packages) +- As the extension is not deployed to the Google Chrome Web Store, it requires a local copy of the codebase on the user's computer (see Step 1 under Installation Instructions, below). + + +## Installation Instructions +Installing the extension is quite simple; all you need to do is download the code from GitHub and then activate the extension in Chrome. +A step-by-step guide for the above is below: + +1. Pull the code from GitHub to `desiredDirectory` using your shell: + ``` + cd desiredDirectory + git clone https://github.com/christianopperman/CS410_Fall2023_CourseProject_TeamCAHJ.git + ``` + +2. Install the appropriate ChromeDriver for your computer's environment from [this link](https://googlechromelabs.github.io/chrome-for-testing/#stable), unzip it, and move the `Google Chrome for Testing` application to the `CS410_Fall2023_CourseProject_TeamCAHJ` directory created in Step 1, above. +3. Open Google Chrome. +4. Go to the Extensions page on Google Chrome by following [this link](chrome://extensions). +5. Activate Developer Mode by toggling the switch in the upper right corner labeled `Developer mode`. +![Screenshot of Developer Mode toggle](/Documentation/README_images/Chrome%20Developer%20Mode.png) +6. Load the extension from the codebase pulled to your computer in Step 1 by clicking the `Load unpacked` button in the top left corner: +![Screenshot of load unpacked button](/Documentation/README_images/Chrome%20Load%20Unpacked.png) +7. Select the `desiredDirectory/CS410_Fall2023_CourseProject_TeamCAHJ/extension` directory in the popup and click `Select`. +![Screenshot of load unpacked button](/Documentation/README_images/Chrome%20Extension%20Directory.png) +8. The extension should now be available to you in your Google Chrome Extensions list. 
+ +## Usage Instructions \ No newline at end of file From 7619d23c50843b8b822432a849e9f529b2bbcf51 Mon Sep 17 00:00:00 2001 From: Christian Opperman Date: Sun, 3 Dec 2023 17:23:23 -0500 Subject: [PATCH 10/52] Generalize scraping script --- ScrapeSubtitle.py | 277 ++++++++++++++++++++++++++-------------------- 1 file changed, 159 insertions(+), 118 deletions(-) diff --git a/ScrapeSubtitle.py b/ScrapeSubtitle.py index 0d79346559..0f7e878fee 100644 --- a/ScrapeSubtitle.py +++ b/ScrapeSubtitle.py @@ -1,132 +1,173 @@ -import time import json -from selenium import webdriver -from selenium.webdriver.chrome.options import Options -from bs4 import BeautifulSoup import re +import time -driver = webdriver.Firefox() -coursera_url = "https://www.coursera.org" -# --------------------------------------------------------- - -def get_soup(url): - driver.get(url) - time.sleep(20) - - # Get the page source and parse the HTML content - page_source = driver.page_source - soup = BeautifulSoup(page_source, 'html.parser') - return soup - -def login(): - # Open the login page - login_url = coursera_url + "/?authMode=login" - driver.get(login_url) - - # Ensure the page has loaded - input("Login and navigate to the course page, then press 'Enter' here") - -def get_week_urls(): - - # current_url = driver.current_url - # print("Current" + current_url) # To Be Done - current_url = "https://www.coursera.org/learn/text-mining/home/week/1" - - if "https://www.coursera.org/learn/" in current_url: - week_urls = [] - - soup = get_soup(current_url) - # Count number of weeks this course has - # num_weeks = len(soup.find_all()) # To Be Done - - - for i in range(6): - week_url = current_url[:-1] + str(i + 1) - week_urls.append(week_url) - print(week_urls) - return week_urls - else: - input("Navigate to the right page, then press 'Enter'.") - get_week_urls() - -def get_lecture_urls(week_url): - lecture_urls = [] - soup = get_soup(week_url) - - elements = soup.find_all("div", 
attrs={"data-test":"WeekSingleItemDisplay-lecture"}) - for element in elements: - a_tag = element.find('a') - if a_tag and 'href' in a_tag.attrs: - href_value = a_tag['href'] - lecture_urls.append(coursera_url + href_value) +from bs4 import BeautifulSoup +from selenium import webdriver +# from selenium.webdriver.chrome.options import Options +from selenium.webdriver.chrome.service import Service as ChromeService +from selenium.webdriver.common.by import By +from webdriver_manager.chrome import ChromeDriverManager +from selenium.webdriver.support.ui import WebDriverWait +from selenium.webdriver.support import expected_conditions as EC +from selenium.common.exceptions import TimeoutException + + +class CourseraScraper: + def __init__(self) -> None: + + self.driver = webdriver.Chrome(service=ChromeService(ChromeDriverManager().install())) + self.coursera_url = "https://www.coursera.org" + self.course_transcript_for_json = {} + # Login to Coursera to allow scraper to parse pages + CourseraScraperLogin(self.driver).login() + + def run_scraper(self): + # Parse course to get list of urls for each week to scrape + course_transcripts = [] + + course_parser = CourseraCourseParser(self.driver) + course_name = course_parser.course_name + + # Parse each week url to get list of lecture URLs to scrape + for week_url in course_parser.week_urls: + week_str = "Week" + week_url.rsplit("/", 2)[-1] + week_parser = CourseraWeekParser(self.driver, week_url) + lecture_urls = week_parser.lecture_urls + + week_transcripts = [] + + for lecture_url in lecture_urls: + lecture_title = lecture_url.rsplit("/", 2)[-1] + lecture_subtitles = week_parser.get_lecture_subtitles(lecture_url) + week_transcripts.append({lecture_title: lecture_subtitles}) + + print('*'*50) + print(week_str) + print(week_transcripts) + print('*'*50) + + course_transcripts.append({week_str: week_transcripts}) + + self.course_transcript_for_json[course_name] = course_transcripts + + +class CourseraScraperLogin: + def 
__init__(self, driver: webdriver.Chrome) -> None: + self.driver = driver + self.url = "https://www.coursera.org" + + def login(self) -> None: + login_url = self.url + "/?authMode=login" + self.driver.get(login_url) + + +class CourseraCourseParser: + def __init__(self, driver: webdriver.Chrome) -> None: + self.driver = driver + self.prompt = "Navigate to the home page for the course you wish to scrape and press enter" + self.parse_course() + + def parse_course(self): + # TODO: Automatically parse course name + self.course_name = "TODO" + self.get_week_urls() + + def get_week_urls(self) -> None: + """Initialize the URLs for each week of the course""" + input(self.prompt) + self.landing_page = self.driver.current_url + # Coursera defaults to saving the user's last accessed week, so need to get the true landing + # page once it's been navigated to + self.landing_page = self.landing_page.split('week')[0] + + week_url_list = [] + if "https://www.coursera.org/learn/" in self.landing_page: + self.driver.get(self.landing_page) + week_list_xpath_pattern = "//*[@class='cds-108 css-1mxkpit cds-110']" + # Need to make sure the element loads on the page before it can be scraped + try: + myElem = WebDriverWait(self.driver, 2).until(EC.presence_of_element_located((By.XPATH, week_list_xpath_pattern))) + except TimeoutException: + print("Loading took too much time!") + # Get all elements from the sidebare containing links to the course's week lectures + week_elements = self.driver.find_elements( + By.XPATH, + week_list_xpath_pattern) + + for week_number in range(1, len(week_elements)+1): + week_url_list.append(self.landing_page + f"week/{week_number}") else: - print("href attribute not found") - print(lecture_urls) - return lecture_urls - -def get_lecture_subtitles(lecture_url): - - soup = get_soup(lecture_url) - subtitles = [] - - # Find all div elements contain subtitles - pattern = re.compile(r'\bcss-1shylkf\b') - elements = soup.find_all('div', class_=pattern) - if len(elements) 
== 0: - print("No value retrieved") - else: - print("Retrieved") - - for element in elements: - # Extract the timestamp - button = element.find('button', class_='timestamp') - timestamp = button.contents[-1].strip() - - # Extract all phrase elements and concatenate the text of all subtitles - phrases = element.find_all('div', class_='phrases') - text_content = ' '.join(phrase.get_text().strip() for phrase in phrases) - - # Append the subtitles to the list as a dictionary - subtitles.append({'time': timestamp, 'text': text_content, 'url': lecture_url}) - - # Print or process the subtitles - # print(subtitles) - return subtitles + self.get_week_urls() + + self.week_urls = week_url_list + + +class CourseraWeekParser: + def __init__(self, driver: webdriver.Chrome, week_url: str) -> None: + self.driver = driver + self.week_url = week_url + self.get_lecture_urls() + + def get_lecture_urls(self): + lecture_urls = [] + soup = self.get_page_soup(self.week_url) + elements = soup.find_all("div", attrs={"data-test": "WeekSingleItemDisplay-lecture"}) + + for element in elements: + a_tag = element.find('a') + if a_tag and 'href' in a_tag.attrs: + href_value = a_tag['href'] + lecture_urls.append("https://www.coursera.org" + href_value) + else: + print("href attribute not found") + self.lecture_urls = lecture_urls + + def get_lecture_subtitles(self, lecture_url): + soup = self.get_page_soup(lecture_url) + subtitles = [] + + # Find all div elements contain subtitles + pattern = re.compile(r'\bcss-1shylkf\b') + elements = soup.find_all('div', class_=pattern) + if len(elements) == 0: + print("No value retrieved") + else: + print("Retrieved") -# --------------------------------------------------------- + for element in elements: + # Extract the timestamp + button = element.find('button', class_='timestamp') + timestamp = button.contents[-1].strip() -# Set up options -options = Options() -# chrome_options.add_argument('--headless') -# chrome_options.add_argument('--no-sandbox') # This 
may be necessary in a headless environment -options.add_argument('--disk-cache-dir=/Users/jnfng_w/Documents/Cofig') -prefs = {"profile.managed_default_content_settings.images": 2} -options.add_experimental_option("prefs", prefs) + # Extract all phrase elements and concatenate the text of all subtitles + phrases = element.find_all('div', class_='phrases') + text_content = ' '.join(phrase.get_text().strip() for phrase in phrases) -login() + # Append the subtitles to the list as a dictionary + subtitles.append({'time': timestamp, 'text': text_content, 'url': lecture_url}) -# Get course name # To Be Done -course_name = "Text Mining and Analytics" -week_urls = get_week_urls() -subtitles_of_course = [] -for week_url in week_urls: + # Process the subtitles + return subtitles + def get_page_soup(self, url: str) -> BeautifulSoup: + # Take driver to specified URL + self.driver.get(url) + # Insert a sleep timer to avoid being flagged as a bot + time.sleep(2) - # Which week - week = "Week " + week_url.rsplit("/", 2)[-1] - lecture_urls = get_lecture_urls(week_url) - subtitles_of_week = [] - for lecture_url in lecture_urls: + # Get the page source and parse the HTML content into a BeautifulSoup object + page_source = self.driver.page_source + soup = BeautifulSoup(page_source, 'html.parser') - # Get course title - course_title = lecture_url.rsplit("/",2)[-1] - lecture_subtitles = get_lecture_subtitles(lecture_url) - subtitles_of_week.append({course_title: lecture_subtitles}) - subtitles_of_course.append({week: subtitles_of_week}) -subtitle_package = {course_name: subtitles_of_course} + return soup -# Writing a JSON file -with open('subtitles.json', 'w') as json_file: - json.dump(subtitle_package, json_file, indent=4) +if __name__ == "__main__": + scraper = CourseraScraper() + scraper.run_scraper() + print(scraper.course_transcript_for_json) + # Writing a JSON file + with open('subtitles.json', 'w') as json_file: + json.dump(scraper.course_transcript_for_json, json_file, 
indent=4) From 1463476079409685aa751092268b03fc98aadd9e Mon Sep 17 00:00:00 2001 From: Christian Opperman Date: Sun, 3 Dec 2023 17:24:46 -0500 Subject: [PATCH 11/52] Update gitignore, include documentation images --- .gitignore | 3 ++- .../README_images/Chrome Developer Mode.png | Bin 0 -> 31841 bytes .../Chrome Extension Directory.png | Bin 0 -> 213105 bytes .../README_images/Chrome Load Unpacked.png | Bin 0 -> 30505 bytes 4 files changed, 2 insertions(+), 1 deletion(-) create mode 100644 Documentation/README_images/Chrome Developer Mode.png create mode 100644 Documentation/README_images/Chrome Extension Directory.png create mode 100644 Documentation/README_images/Chrome Load Unpacked.png diff --git a/.gitignore b/.gitignore index 12ae9d0daa..d854ba3d15 100644 --- a/.gitignore +++ b/.gitignore @@ -1,4 +1,5 @@ .idea .idea/* *.iml -node_modules \ No newline at end of file +node_modules +*.DS_Store \ No newline at end of file diff --git a/Documentation/README_images/Chrome Developer Mode.png b/Documentation/README_images/Chrome Developer Mode.png new file mode 100644 index 0000000000000000000000000000000000000000..e027bac3c9aec5cdc184a86ebb43c1fb8aefe903 GIT binary patch literal 31841 zcma%j1zgiz_c*PHSf~g{7=W~NmxMG5NS6YG(cLL3A}KMtyI~A))DVLjHIN!Gl+H=V z82itBp5OcWJntVqY?u4Jaq`}C&pG!S{y^hC%1ZKD1O#Lde0l6N8UC|W zgZKdf0ig*{PVRxSoE*~wS0@0_-imdxa6ZHN-r6FR zM*hO>33NfG=8(+#mj6SC3R%?)g9uvnic6P=FAI^%%4!b`Fg|#bFxjfdQP=2v5P0C) z_j!_MW<>4AmMOvWTnpBZ_jeDUD@^SZQ8FA1hCLo)54Vw!>bmN0hbfzWSFq#wV10tz zxU=A0!&JIRMPwA50lV(xm$9tYl^a;y4yvYayK=Kb_+SPuq){@9z#(e2icL4J)RZ{=p;3Uqb{Iyo|Z8~5=OCl7bYn>W8r z^!w)roL1hzzh-iD`-v7FL7s0>-M&5f(d|24KcQ);KuB^y<{v-WIA->wz7YP~VE9>X+?jEK8Y7y0 z+}O18l}?|Fx$*A+%y|;VfUZwM?PBaFA%YD?&L{r@?2qwB&T`TI4=8fjXa2uh$-K}l zZsm)l_;J*dkO8D4RJbI+UNI6ZYVtp3swm6RQnhwxR z?`RF3|Ao3`j7(%oahhl?xFa*~=C4%)`cOvNuW`yT{N>BesG@$+TR$EIGlpcJf(XP4 z`X5LJzPLHv2675sZCrKQY2u>z0k8~{2$Aoms(@Z6*{QU=-!wn%O}fKhOjehSn^J8j z{y{0%y5K|OnR91?$-pC3Mj7^Zdr(XQ)dd(4Lhc}B_1T{q3f5&36Epk*|C>w`l00S0 
z&XV1&y+ir4k!yXhDd#%+>XHE|vm8f3c{$|lLD_4wZgF%os}pS7(NM@bP3`KNVk_0< z!MrekOs_E50a>rMk+JceWcLhYr0wS*TA|{;fvkxvXUVN}Kfd_Ah{h@sVmVhfy*m*q zdDLk?bp3(~(NT+5ID1sVib6nvdxfr6u_d#3Gu0`t&J)UQtJQYY{6PIY@Wg96{u67_ z_@l1GYiH-iG#KBWww?0AVO@c!){=8hy(^!#g2#o@jb7hWn5!E#Yj7$)M=#zT9Efc{ zbQcx=My8P4L@5{H?2if~*Xwjj_*`e&q(rNo%;NICUjAzd3z;MR5^CrZhYA+_DRLLI zh-f=6#RI48XY#c2rV^aVx!FRO1a5Gk5vk?RnCF( zGU)a~*Ccd1UEzj{S*|>JSC({8sYS_ynkS{m1Efq9=h0*MwI?R! zUnPdVNH6@I6HH#@@87@A$2=2^eN+{A%Z12LNU-GATrY=JH>j4nWNoU}D4J3HYTI;5E%3QY zV!nJA0{mq83w-%5ogbTS^BV!Dam7M)+N_Oz1)+0UazAxR=G@hR{`{kH0sZZ^MRWES z(Rc_*Oqi+;kxjZ&zSFZSsA+L8`=n$hc*t134@Jp?67cciYX9_X*lcKoVZ-arOUzbZ zvSy$9`ub2@PJE@_sCsg8k@`v9uNKiJ!Ix zc1QJP6cBPznv2SE^?r2iCNm8@B%l$qd`%>A&n?uF{ol*T^EVin%kT^HMp z&%6(kWZaxj;sbtp&$%ki>VCYK74&M_=DWB-Rb^lTN=`#?PKO}PwDjWl+1ic|LJeuZ zB6bZ1*&mg#V^5i~-G@J?vF^`&?K;oV*?#C|wHFCAaCu{i;e6CsK3r_Tp-OC+sy@}S zU%=KFT65U-sW{JC51HX2Y zn*$$wTk;?o(p#OdInZNaRO4JNpk9P5LHCQfli*x`tgR+9$3 z__KXffLxFhEGxkiGbXm;UJ47(6+5>bGOCPnYa-?!J1mR}SZ(AV^Xl&{|40G;1 zpQ}eYCFvGgS{!c8uj4M%JE6F10euNE(np=+@M+u@(ka&rz-7=pMuvJZ+fn8Et&_{`(?!ccdeF%Yb@d z0_GKC>7i?)^p@}1&n@;eTgXnJ5M@cc`Q2|8!!-(j+^5r`?q7qk4A8*5kBE9X)@5sM z0qkxEWBWn*yZvDZ+SW_Bj+?~j=$O(S!z@=H-Ajaao z7lb=5@-i9BRal<|^rop|&e30L8KWg>T+Q@b!zs1i4G?q4XJgcds$5wtY2neze<>gw z5J@Ah>)5)fINsvdU}BnUsqA-r<_i1miIdOw8D2{znWoyLfV75rAr7M&8+1|<4!fN$Sjb*pCFeYMr-z*u@!eVHv@~? 
zk+(-%HI8S4VA*e!OYF{4H|#K^?IyERhUgL zCwFbCMHQfy!HH3WzF*)nQcbk|G`Bd2LQ2sI0a*lXKY`d5MC6>V1CmVKhPc4CANUTD zYw`VX(us;uX01A3N>R_9V$O8o4zg<=a+0UG9%U~YAdTho^?G~Lb{n|Jb5_fA%DAb$ zR&OXoKGqq&;l>@5T(ed$+c}LT>m1Dr%1u*Fah+EOndq@Oef0g>7MLP!U2ks;8>wu{ z$5|DkGPKGxB-u>r%k2$eeI(b4#%q?yy6E3+z|a4(58Gm7e5KQlmR~7+Up)=(WC!hY zrwWgT$hT)@I$f38BfUuX@!97KHKyBL+7q1mQb;F?o}AA6&^Kg6S{?zowS&|_si&c2d64^eQ2)9m zcTKMP`njm$C?dPvKa(o*}HTCzR&_Xl;4U=4f#egy#4} z#oB_QqrL^9avG9*c*fwWAnQAoT1fgDCl`ORp<8{5xYIOgTD>EEkQQ}$=Tp+&RKfV@ z46(x&(QR{J{Xjb@#oC~t>T(ThW_1js-?0lF9Ki}ZKCZRD>*m;E0fx379@#aS23;34 z;{zJJ>zBd~H?6IEO>dIbo=ZM#61X7Po$zGXp2H!v&ZCMvf!Trf+7PvGVb_;b^QX2& z8W~%TuInD7K8$t)-Q)mzUbC-~z>k<6ZOw3gR{g+j<}nAa$46+k40nHB5;y6zncU1NeS& z3*o$^WG%x{AOqspVqKb-=wHJm_9UsGLiChYJ?ZUd6fmRb%Z_pk(gF|4jO!XBPxd$t zKc~Ic`XDjlPzPLh!ZMYGq*cKy^f#4&gN+p?N&WtJM$UJn2|LDWN_%%$3Nyyn(~Bc# z5Sr4FaaFK`_b13InkO1VGrbH{=|^kt_ANhcSt0hpvVqbGK%pHgJxWkrnO-+58s-pc zpJXPgzm+yK4*o#VoBRr?R{nGWWsvY%tKv zS2|V@5PKo3cRrDFb)>TcP=0j(C0P>$l2`wfltQ^S!f^cjSFzT6&m+cUQY%%LuQg`= zHrWrZC@orR!vbn9e6x|6dPoJ#3o5ie+3r0VEUSc0pR^ofG^M-ao^SV}#vRsBqe!*8 z_c>ygo~LP)A+vMVCR@2aVACBkFoj&)-~eu6H+>x6!YNUg<7ZtLn%c=?+|!6iDUu_^ zR*@8H9@xRW`}umP6VH~wq>RY}cQb60G0%uZbx_vQ!TAGYnX*;E90ex@xEHzajZ)K{ z@jOj4Iyh$BU$5Gu2QcVUEkIDv(RegLXU_+Jlhu(LVOKPPzU*r{q1gm>3-MYXH`+r_ z*M~Q&ontTDG$sE5liBAu!rg3~nw|YkZf@PJ ztK19&t*vy_>;_$QW7{Sh5_?OsbL}!owNg|2hOPs^Rs+LEcnW`y0UA9${;_LiYwBZz z6KdXIQ=dO%XQ3+K>$KmrR$xpTUx1;OCb6MOqx)-`sm87k3#MtH&wjHoTwM&SW+o>; z03K_1i7&$4@jS{LWV$(3>|%iqlv7uh?%hKOO;|n+uplxz>KSbEa`G7^@rtXz&k(4; z(3`PpA_45ajQQ|-LmPg44@gj!G(HN<@;|Oo%aWRs%8+tM^|#ElE7WD!j)>K2@}H4G zuUKdKC>Ey9+F*9#HRtHHJ!@RzS@ycoWtFXl)4tp4==#1;NM!8?1!~bWq&Wy=Wo+Ng zZu|9XqMljZD!Wek($tGn-xu5z4s)p&mc?=z_gB#Y@!D&*IKZ_#!M@B1WUpxjY@hU8 zUs7O5Er4{w^9s~!3qHnR)T0?h?X&t%{8D=(Q-%Esdo2;J;S)Qd3wsr51?63et`pG? 
zMpnDs+g2W{!uFi;C$y{&!_G~ppB$}IhjaRzgk}W}nAq{PLAVU>YEF1G>c2|kXv{tL z(aA|KnUVJ36`!7Ng>n5bw;|L%T!dpn;L@zJb}-{V5E7q+8z<~YD8 zOtisq+LFCkKp;^XkTRpLfWhuR+1_`v#T;h7KEc9zjkY* z+}l|m+5)qQAIuGDBFidwP19NlTIUJZ+b;(~_zRBGg?cO{6L${psU@MuYTkk9hqL2B zXI@1|dkQts7`MLVG^rW4RQptFf^^jW1fQz2ALkR-1l0~n?oZnho21GXjz4Kgg_jMd zGK{h@zOn9SaJt(wjwX*!+1svyO%!Q3_ZbxJA5o-(Tx`4KGNjdzr?qcB47H*aL4vI! z&I*O_efN#Z*qm89#nG~S-C&nifGo&YxF2?3kC@$eD5zm4 z)Im62uz)(we)`QcXtL=^XI&wQx&K}*;DTt$+L0_c4N6-Y+4q=+%C5FffM8Aldtp_8 z(i;dRM9n0a2InvzrF(9@cUjD#yp|#9rzmle+EhfVTMc0J!5Hd^l-&1e7zGC%IkgJ) z<|DLD*TKWq6BfYi28|GGu*6#5!B&4`)!r=0r=3IcEZz*UE4cfLnG!zp5b|TJgl__b z-@w`=GK37h08h^>PI$Uu2DW5#BGAmT6Tj{;Kr-K#bXz3Fyli#HdjckeU5f)*j71%W z{C3uS_fy5*B0ZCKhVjU&{v@EBd9}`jcgGM7U25cC5chDPZKf4Cpw*trtL~LS0Eu_d z`?f5n^SPkdweDw(3a_ptt}i(07XcsfQkDWsWQq=T69#vWgsV@uDyc(lRhLEj^fOwI zr^L0-zrXEojb6(mWJjvS7C}+~_hGdI+is~REmbMHsR_A3LyY3Sa04s!`4oyL5%)pW zF&P*-rzZYEA?9A=`4pM~F}Gv$U^kl##=aD)hXB+z^--o-{81iNSYHtCi_%RXmCG>Q76fw4>osFM0oNaA7y~%{MiX#DaZgVs-rm;JwcGn-m1M@PuAXwQ2z%7S zQ$L)&Em7yk#kvG(4ZysbjID)gJgxvxhCf^J?uA;-77KVj}f)IrU=0{eEfg#0Kgz@!|=xyzIu90C& zzvVX`QWYKQK=~Hn$279~*5fY}qHEg`Svw*U-<>_3$-zDO&PDqw`qt5$l*Xd!(jvAa zAhJeqqoC~sd}MZHb_u531L6eJsQB7vEG{Rrab+`;?WG=;#MLt-8QZdgvFk;=o@TwK z2#)v1f>?rb#(NQ|N26j%AR+97N-ONnlO&ncle9uNJ7(%~M7GW@t>C=U@MG;(^i6G3 zrq)RpgoRCj17&Hz@$~bQN#%y!edF4Sox-e(?i-aHFN|3kn-2|h)z7C{*i5;g3TMUb zPogwbRX48nFqL4;WL+b_Ti`P?$R_k0h@zoD` zQEKCFR{$CnQ2tq1jC4Y*OTuJeYe2e4;3P@F%>e50!>Rxt8IPy3eTs%9@*D*uuGg(m zJJYNFsH2t;(AH?lMac4VuoxPoYda)b(|thl*}bI?Ixm1dcdsnR=1Nkz{r!D=mHRS1 z$9C(kJvA*x2>OAECXL$VIGivl;}NT*)AX^ti5&<8Vi?}5U`tEF8_YC;=)VGogA-Q15qUr`=O1dhXs0xiM-T7HbdtdXsd@r-|onPP0e%Nd=E!%HP9U$=D39c z7gGBY3&_Ou^b!4ZjcL)37JEtoMD`OZ$&nQY&<`WXMK-L`DzDZ`sKgYm0#i)VLD+PxaJi8+E^`FU`06s3EoKCoU55xC>Q1BJrxUh& zT$a3q8aonhHr1!?bV+YT_`y1Kf*(?3pj3$s5=(UwBMj?m<}cOthhmK}_I1Ghibz@0eMu~!6D#abJ}LI;9J-mZsp z?B+wMYCn%WhfM^=AzAM*pUMced$5W>LM3Je9x1GRDr0Rp+ca*FxO6hqFV%otx)(%a 
zh;GK1=B~@lE7#e6$`6W@8Sof2-JKQ(t*H;t6Xs!8RAHFL28zWd9!rkcDO%!F2`qj3j|((}`PVBw{d*V`6m-wp%`q>~G%(Bw$< zJgXi`+b&sFy!X*W!|kcbkp$4-o^{&JbPWA&#SYEg2Vx$}+}3L0Fn^!z6sC#GM)x!^ z=|aF}axuzSBs!pt;fwR|jloD$v9eFepMH6Gq$ZIJa$aG#8Fo zMVz>0#8lzPd=`mTDCy7*_mEXW`cx_g5@f%D8M;KbV-WgxM_NUlXv|!|It$1jmoijCR?NCP&B?Y8?0UWqB+I%OVh#NeZ|9Wz3qy0vW0j6!Wa; zA-_M=zYyC0jX400BC@(%m%alM!pfuE?ug3t`xlJ}0iF8SW5Jc$Z= zxLUtQWW;4r>=raHm}m~^v(|U&oWw9{-0IM}ykt!A}%c%v=vvnF_2{ zM=M&*ckxjp>rlhSFIzpQItOmZHP4@_gJ%N(z&6!1fV=3n(>N))wC!jK8Y3{oD2;{l zBMR*FkZ8AlDUNs+4YeU&MchGN3DlswmHd9n_O&PAY>0!P)uE$62v-r13DBN@V5XaK z=cJpX(pgs-sC`SDKL!FEhZ#AxjyGvAp1m&eV4}{+38wG6uw1nMX-K7F>^^J8b(1Ei*R_POzgG(g0db8=g4QBM-iU+vFe26SsO>1qoqvf$c;$hgJMelX3#bX_5)n%rTHN9*Z*}=oy0AzL3#_L_X zkuj#!K)sm+%=VNb<0;qld<@xUHu`t?S+ndCB+*JDX{aCm4yT9Bc6{gdZF@QiZ)#x;&tA#1Pp( zurT+1dk9N_^ZUs&?g7;4ozY8V>l=GtGz3Qr3s7j%&CoF}oipU^by zmzI3XDXGEK;r#J~^0&sin=y) zhp!$=7MMPxiOAeb0V~rEly7R>^IX2Ig>y2H=NsqCv& zFrt=P?5d#QmxZdC0x7yjq{#qzU=w!4$ZBrXECRINgbz&n)pGa;HlIw`N%a%?9l*{@ zCyY2uj44@fFZ5c$$7=RenNh;sG2}E4x^|DzteLU>tq1LyemAHFTk*jNp0S;&1{BDO zd{=`s$8=b6ADGM`@H|f&+osxc=Vm{xf2g z@N-WJ7LCVG{SGKIx#OetY=jj90Rh*(r5VIhk+7Wb_Q8b%1X#}sp8Z%X%U1E^7T6n3 z0k%5f(Q2^@Vlyg=x{P-~^aqq3@)DUS~NTnsDM#rzcQn6v=aNJjM9O3P^1T$~UJ z+s~XpH__}c~Rb3iug& z(Rs2^UdQOaPNA{8L~oDOQg^M}FMdBmfK_D{{iDU)?>3#wDvK^deO^`PQ5?MVG_0X2 zK|#9p&a5dd6AG92yk&Xz10bbxn~Q(xJzE^HTZ)XTG3+GCs_VL8pKN>@MUO$%ermxy zNx490fh-qlL3|&C-bz1t^CqRO!vFah*KU$Hf#CTnysyFN)n#tZ)sc!?DhLM2($ce| z{;0vx#K_jUAVtZPAC4FjK&q+I*eZgYE^tW(93t=6ep(T>VHzwfT*BLNRHUQF4*h)J z{XOuD0}-eD(nT{wGX^Pss^qu4j^zuTNj$Lqdtcnw`nbOvzRQ=D#DX4*H9c-H=FCQM zEUOzGx5TTtw?eY12(~*Cb|ja=(Osy*lm<1`^L-h@{QwK*(K7sTWr0VW!#LEyw>9iM zW4&-dx-qgkFu=8pui2QtLO>>Au+YH3x2Lo)q`p8EWcg@+C*u1a6d4>?{PN}3Wa!zv z`0=M7ewyCj&i|Gf!GTxR*KOC!#l<9lMcWlg(#;deYLBsg8%_o)9d33qa^U)vY{}O6 zka>90y!aiH1$nBY1&hL~b`0FLS7-3FhVMeWUh|htAaau|rIu^cyVcj6bG?8nn$0^PClW>g3!bDkRH!|Eb@P z_V-j}SieL^73`SXw6I5#{nBB4;t-c$ahB{~IY_r=`YMA#`~(}nBIwqq=Xmux72h0c z-LP&pjJ))#YbTWGC`DV^5Ah2gLbH%;(Dfb3yD`7IE|Sl6rRmMB8H9&Nb@_x^zyFK+ 
zH+alL4USUcztiOcAJMAt#+j6wyRnm8_|&_NK&88Veg zOvyk{|D^P@C<2t?^Ip}i@E^HE7bFO|2xmQ;n2#n!!})Q8zhNY~pi~vCYW!tzYXqK> zXnJo?4o^(5e70xr83P1Q#?4q$!kC_U6?E%b*f;UP%L!Cp2IgEXTd`z=Pk8J5a4B8v zFYfx13UKtyZk|SI7U;!de@xOTNtF3Uq_VV6l1JG3d%g%hdU|Gw`h03df!RHq2OKOc ziU;k`ryQ9rlL>~T<3#Qk>AS!`}94={TJqhI_XM?z+BtAW zCvyE4gA@o`^OauzPQm-`75nwEe8E}!>K%I9wWmJ`@fY*{Ub-Ma=yp*MI4wK0AoKV7 zU&H)WM@SKybLB3L_+Qyxe=3n^2k(@d+q!Z7!soAapud$uf=MU2lQ?EF#t47O<8LqY z-*}Brj4~&GpL9KvV&t>Px!-*IGeRNc-tOC-mtJcXs-?8;S=#+80E8q@naqg|{q{E! z*O3jRpPjE0{SQFeNG3xZ`bPeu#~({1dTr~jQ250Zd5$-0*MB9h{U4Nm&%;!KPtg^@ z#}FSx{1&A@kreM+SR%h|Y4d!#?b^lvK?L^L*c?)_|IsLiZRT9>V_x?EL7^|CmsHNJ zNErT=X8IfSzp;{#$^ncG=}#=*Zu2{vQqb%yJ!Z4pZtn z8*}TXQy^0BvsNNMes+%m{~1{Ry-*E=?N{x+k=;zqxl$aw-+2Zx@2Z+}=}NxMIIV$i zV?~(6E$j*QK8Qh1T-tA?_?rTbS%gTyP-eJG@MTb*2-o_HprTGKXay@#!C#i!Zvp(t z*;pirgo}&IW0$_?D!K{L@1_OJ_C`Qs$u^?ZR^jl2IjL|dH?8*xkL-EBJ}w#GJ2M#h z(yb__=S$Yft+a(>oBkG^T>xUQG0DDq23{iv_IlTBslH(^urK9eK_cO)?Y94bJ#n4~Ydv;$%$MFtp-d-mdg#uMj1X1^9uHewpRv5&lRwF=_ z?V5px2K_iIMQf0TXGN+brgQ|HB_M3kt5^J{H*UPqCLK<<3sGuc9nME#>vvKrrQIY{ zu0{on??`Su-4t5(%NlMu@HG(Hpf?crc&1}J*}GSkB3~54824xv$GTb=WU=34$2Z8` zx-BO@J+|Zq_B_(VA9J}WzC5v*8#}4kb3Yel2vpU#rGdVbILx<6SwWl>j=l#J(xBy# zdfkxoXaO44jW(BTJMoidi-Au&Nw~F{)v{ z1-o!tB|Dp;_edrn1FMQ9xtJlAD5cAS|W z`-n<9@e!32h@Ayl+&i#nxe|s?r&f=!AE=tG7M52PXxyGlwD1$|DZhw|d)G9=?5nxz zy-m%<{7_uC=lEg&7=}Va3QjGu;8IhgRjTNQ!7*a7Mw{0D0|Ip@L9WMRZ<|lmxfg@L z!#L?BT*vUvt0e0Vm}bu>2s(t^$<)p)^kcwA;zY)a)JR180!|rewIQsTY(5uvH)&z# z&pOA{@LlD)T103gm$s{Tg_7%h5!U!qan)YWe&+c=2d_bwsg(rvNGM7V^-9V|1$F@Q} z+i1Pxgq~KmJ`gB*0Bq8LWsd|K*69+@ElsO?wm71eM;g?f%bM5ai0x>f842uN5pDL% z7`72tO5a?%u1@JY$tqMGy#g#% zxwx^r>f-`l<$^$;U2V{fiaq@Dm=J+JkkMe6=HQ3Dsg?k!R6Qq?%I(kOtLl9x^kDep zBQCk-L+2}))ZMCCCDDMLVEP^e3Kb_=X04(Uu>69pg_`0r$1ScZ-+R%FlaGmO!n~X= zP(D@EEw?7VbtLsEt-jJLi1P}qsXb8*l^WB|EjAr++>ZZOy1|(Uw4d+dD>kshV3yF` ziQ_F?EoPyfF&cU@T_r?#4{PG+-Zk|lfn9CgtK!oPs{@l8Hqyyz-i&bq$7)k;kk-m0 z$mF;1{_)^`TUYUvRznXZqI#d6Wu{IAbbQ3}M+U z@cOi-RN8K~flg{hu6n5P{HhluW@uZEzpb!2C~ZoxokY#H4%SV^DBy;7L)WWeP$qku 
zIqPL=4_2(wfVGYvg}2_}>L6x@cGWX4$@y&V=0Li32F~(OIxbHtQcYj8?YG))&MGh+ zrDN7?ZLjCS=7cmM1PxyX-e`4(M|k?+ccw9T=J+i}UIP;rVu$tX%iEp$@yj01*G#bu zluv9FJ5X7Y<_)7~3nME1C#B9cFwzE>SQB~Dc9sr9!pKEHHRGJLcMj%LGDl2Ns;Pu} zwNr^&xhit1u~#vdSe*6od&>M+YsFSk*UV6`1pmi`Y^Pf9%Gp9_?udgl)ONib_Fjfr)l1xT)MUUglusYII;C_`^4||9JJX+xixEK+ zVgp8l%Pp#^8pu`!^(&bFy593&X4>gz3SDn6J^BI*maeuc+4m5II6SAcSr+TH9N-LV zXQ_SP0KCft){<;eQY{lz4;7~&x5*F#qn1^oH^yxk7rV?lK{tn4Pc?AuSE8K2A0fc{ zC_t)*5c5L!u%kmLMV}(K-m?JxYHtH0ZcSph3CQ&nkjAiXvCc76G=zi;x+Jh}WE6wB zFA}lV0`R=TKi>nPpKOFruf_z>ed(Apxd1p1foIlVi>EBuR_LX@n591are89bnc2gX zt4K7(n{8YiNv&Qz99#f12koWcIkj2`Nhhmf7g~FQVr6T&TC!V!W!^U(4ixUAu^{G> zWIUK4+w$bhx?S6PoV2mv@S%I8I;QS8cjz3;ab{vUVvKG;Ec3y~hYg~E&PBanT^W-HtH>7Fv5gq&pGla3) zPfhV`O#u*p>3K`9d|E_7w|sS~A$geV+}{6!@_u;v zHAZ`c%dl-w9hY!DpgT#(h|}q9u8?-evJvbKkEIHgm^yzryWld&ru91I$`u~7o<8!c zbrw-v4&K9XpL}N0paVUQw9oTvRr|E^L%QY0Tz*y3ujnG29gdEoe2d3Pi;CQ;feWCc zUWUVJu2t_(9NY3e+P11)rm7;gLu=4?ej{n3#lAh6b+yfIknu!vX)RwacPcJPgPcHC zI&(zu>uG=D?2w)LrV3ccHq6iPYHE2YLXbO-1-lX7t?sA19vuM}?KoxOo@GJ4{azHN z*$DTkvQdUlB6e`MCIpNS4T%*~-BOaxZc$z}AbR%{xE`g%Dmv*Ta{~1tAZ=YJI!j9V zsA*(xg4+J$r-RWjrQyC$_18-Q*(|m!7toOSu>G-_{eDdkud6>oME~au^uzvoRuGJw zcbzY$8sa%#^=RtyD;qkb9d8J1?R`1O^k`Qo3F%{1TEAw;pRAE~$xlO9!BktZ@?K~k zqIf@J+HJZaFq4u&plK~jNW4fH^;wAN^AuATF{^gLVO!YuTUe^?$3n_Ns++))mEO5r z4u*I`dNS1}I@5aO2A_fG#><0)Tr#i#rf``cwwWAE1h`KQ18(ITx9n-bFn4K=KWZ7Js_`qkYjB%P+qcOT zT7DLs7KNotxxH)IW*FoPdakMiX5LK~&EGj5dIFnjmeQVX>IEIz2_CRH34yB2@LQqd z=m$8gV11t&gXGNWYk(~Vn5BzAkYqfx`_NHPj%o!D@*$rzyYT829K*Gv^xo^k@VgF2 zm@mj3Y>vT;)S9t@<7pkcPiBVa14hV9JS&p%x!d~EQhqDR*M;9o_GS*Fpej*KDWV-O zR0H^nG61vd27XfXDxvyQ8`#McSn;A9$}nkNwF5cIk2j2ZX+Ncb6f1@f#f&xvsrS~p zZm}TCUU5m6q90|9s`UVDQ{IB>kV+Ujv-lfM+zqqaQ!yR?a8{Dhpmy#^>VDP3yZ9RB zOitx}7`9&+!ki%tKp=&cA#68BKl;m7ZNrzSJ5uMf9R|nr)bMF{d#b2C;TFJRwX24{ z>eJri;~U{oQmkBRX1h(On|>p3S6wsbDjHLa5WIo;z550SWt^yi(R5UAy3nFgbA^*K zm^h6ZspaH(?zT2qZ4ovI&!FUo32*Z_)U3L1lxv6WXQDq!Ahw<&ui}`+yk3(vz0xbb z#c|?qW*^JN{aH|8|HywZIZ2+_!erxMATwse2%_de6^zLeSjS~wp9HI2JvoS54?>6w 
zkI!TQ?$lk|DHhr0VHWcik>C{)L&d@hw_i9I?SyqXmT{S&58snrg6O0VX+gY%_^_)| z@zafOxtFDm-RK^;>#nx6HsH-=fq`NxiX^-}L7r)!g!lj3FFtKrdUbFJ*VQ~4ZjGVa zYkk*;PSi4+wqOxMZKD+^2i6S*O-k$5tQWP1W=dNd>Uj=TZ+GF&v9x-$P}%~O6)6{N zhD2vH=miS|siwUYNBoqm#}7$AZz|z}at%^whlfxwys?*5%NpHjhJ%Vd3|H<=W!G;` z#|FIy(F}_okEr85-pt=?s$VM7Tk|XYc2td54eQy&A}@Wez#qhTFY;h;eAoa}iM>Zu z!?aGJ)<>-sljTJj(oP*)UEzDP-!GH=$42<=-`F8iIAz1hg##| z8G^TZj`r6@oI-A1vfyKZ92?=|o^1l}ZV0i6Ewf{cN0Da}pYbeX9TTb_^t@D<>*BDE zzFz<_$n{87b{Xf7x154h*~eDM3?epVL*$LZGoO4j+GWuMsPDB}xXg~am@nhMc1Hw)&5Ylu(GsA^3YJymxY?)C`0^jzsQHGRn{%{=#aXYkHkGYfbt4wCq{NJlN^qa<213Emj&6*4zUeZD?;RUB08_i=e62oOq;C zrGRh<$inNiF9zY0siC;Ow=Ol_Q8ID6$=bW(K^SpbQogU2a*y!QM(h`KYHNo8h6@T9 zG5d-~0&yh9GRoES)7AW+R&dCJt4gPBo^M@vXKkO_E4ZJ=IyX?C_}NV>9WNr@q=2B^ zV)-no<$%7Qw}AiqTonSp`1OriNht+OZ&!NjGIO|0IotC$KGXfmpe_8R*LDT-#Vxka zuI%YQ@2vmVT7X=cceA{2;+=~S6(0}(hbyf$9OUZG(!w&qf7abVf#Jf-vnFEH{ z{_GL^ZI94gb+&#JCg6Snf9)SH`@pd{^>$?$Gq-?s{t3JPBo&KrPaAN_6GAK^&Tex*(M>#l_UAoT|Fkg`){; z$mU5`v3cJ_*1qOmi)o5OR)BT;>pwG@X_CbF+Z!kJ&{AOk!!1t9hI`K`R6nRZ%E#Y0 zacK(GTE45=AJ!L7J#Z-`^-)1cF;zzlE9^0Woi~moIPsnEJT^Xn$6R;J8a3{0`DcI7 zX;#7{=uPR$edZ*9j|@lpfC%-U8~%!!&5+-Q=UizfcD?ja3+^G$f_04t&cg}+OoQK~R*DD~7ZMt=UsUO-w-5cp68N_{@Tq{4g{0W*>)$V? 
z{2t=(2gn~RgPeIY`{j;svMy)+w>Z{+E6K0A)2#%A6*FmW^8eBXA1omtxg&aw;A-0M z3;utUITEwrv;SX%O%mGo6#j9D-%5bL%ZzuaL-Q@WE@{=7J-w}P5A52!eGE$LdW3TY)@*(|=KhiG=j#)*Qv# zhrIuy7<`&-vzrI&a4KqQYu)!}MWE)x{mm1zn0So!prQgVDBpVI))w$YW zR7Br+K+`3CZVQlg-T7zP>?9(@J+9;wxnOcLSmq;%@y6*c?d+HGx}{D|iK%-Kdc~CN zk<2l6Nf7Z3rvUr8OYIREPYu#g|)A`CwU{|?8qg|m;OUxH(%k8IT z_!ll*0Zt@E{R=$gM3y%5&-B81Z^n{^94p)`@LaUZ*XfP0+)n2OKi<2BH#^f_NRE^I zQm+5j%K^N1$2c{kVA}>Cdu6&#a+6~uu@isPXfG)1UQy2KD!GMe{|pg1^-tOuV$3-c z9Wi8nmHCw0reCo3dF>FfjH<*!EMr2hmEDVFntus=Ko#2PvvLxRG?KRE@hRy#cH^3~ zJI#;IT;_;x(nzoUvp))t%4v>}XJ;#_#aK1)*#Z6}0>zK;=NK2its~C)8~?nMiQaF^ zaC_0MgqNTeBE!IH-6jr1llVVcD0oY+;>UjV?ae9AYGtD8^;GcA-}b}HKVD?pOhj>E4*+-*2wD@#^kO`Q?{ z;u9u2l2Re`ExgkWp4zv3 z?k6=~H_qcnGa))Wb6@58QHCYz1?locR>r?b)j51(!IeqZ(A1E5y0$GZ1WM8enEPS< zlr%d%nKa0TMiW0?0Fl*dePVNbM9(-?*w84U-#=^d1xrHB*cj_jgH(@4(8nDD^uCyG z*t%3|okG2CYA0XwAl@TxIh4*=a5#@+^`W@h;~eCE!I0>6BCS#vdz97w%KMbK96=Su zf8L7Pe7@~%&fVw43djVL6GtLBdY4@H#HH8ZK|52#Q4U3T=d^tH2rzz$z{RCz3$q4Je2GIeq<|& zju5hxkiBDHqf|&J*~VHi_A#>WTSX{K82i3vn~Y^_Lz3+KI)~llo_l%U+jU*<>sBwp>r);Yzj6w?`jxDL<+r{czW=fM{m;aG`dO4Y*#w|= zu>8~{BzGlD_C5f^HycXLEK>&~p@%-KUA52dM==(sir!_6^p#Lmp|=?z_|>eS+sVp< zc-RllG88@L-KCs;`|C+M=jmraC^A@MM|5|d>w9Pi(<4S+-c6H>*NT3sLQoIP%0<(k?6|72T8>V zi9B+7(L>f!s)be6IuRUW!h?#;4)K*WbDnimC$qrZUZZNVU7A>6@o|DL6J~bCv|+-# z>!}L99YDOJCW|b`5CL&X+fl(N2c~$ryIF3nsL0_47-FHjv&sQ%*nngh9k*R_!i} zcZ>2WaV+iiwd_+;b~7iANhK#LE$Dqu|G{bbwR-)M`HKgLDOOtLPer zwD%t-~-t_uvyB@j{YX zbV|Tx=v!Of^ar6IeBNA#;6IcKOih2;LQ8vB+)J0aX2Nmw1W$2FchwCq;iy)-bbovpr#=yWShu`T~JKDAz` zW7FR%Jjh{}=fkH#oLuuQogeU8D97)eAo25AB(>E5wd#8iWla%>t|V2P&S#AHFtj9X;OC#2PBRuK(Ny~i z1H(XA@C}0$^Zb%aJSuK*WTF{1)ss$#L<<{qM}eL4*V|-HE>g6%Qy=qsRa_u>Mam&! 
z)AX!<@>~Q%clE%tcNEA%WJ4EONf@uJecW99m{KnQvfdZVuM$bM4l+Lc8iQ2I!fm%l zfMZq=xsBf_UvIZ>ypsDdKP$@Y#PZozV%H`S-&l9Tq2>)uZ-qW&A5Zo5yNWIQcH2m5 z#t2;|->?P=KSp!{(EiU!1fUJ5k5t#SW8~-q0RLFFVyjMa+l+_m*KC9Ja(O0mErxlp zI6x)gUQ$UbAi4BIaMQAgZMjLZzQ*5h%N%osW@50wB#p@vCX^gH5DKvxzy&rY>omX(+fn+1N5^eX$Nm`JhE##T{VMw#m`?0p`_O8^6i74rjg#a z_`bko@0I|icv=A%2Q-M$KN^rJuh50O{D>}n@9Q(?5z7X>LvJkSx*cEwo1hi)_xRRj zu8SH5YuC*@mdhVbhbMGZ$S9i9_3@i#M&K-u^X_9egF`fGlpx%N#?od9Cec4b(uycn z3{7{Sp>>-tL1K%}a``Zr<0E0JgdFNLOT_1wTSPMDB%B?2rk8;RHC;7} zo7>pZW}85Ck@*DQ@U|98&xhcKF1cw6A$j3bkRqNgXNkjX??*OY&LP~!Rrhvk)GZd* zplIA^A|g1Xsw9e0U!GqG?(?MwS_#a8^}Y;ngHg+XMZAg|G)lD62($7D&2_|`Jbx%? z#PmMq9MqVLeyX;qE0SSmQly6RL?p2$f=6;+B$Or#?2q)8x4tW$m<}=#r_(KUkOLhP z+z|(n1bdlpiRp?K$fBX@eR9mcQO3;C4jP;1GlR^^)BN})i%0oAOo6nO*1k9Gth3Iq z!(reW-S&r9i&xD1-21zL}2>w%&KDD!Ok_O%urif+Y zy`oe0Qr|}5C$r2;>v%!-{X*+=cTqNjGIcIJA8m#9!(o!e*f}SoLg>y{EpBLF)(LO0 z=rpP#dK_YPA}JX26^|*zXu#F)VGnph1sZN-sQ*~QQS^v9!$DRhrG5_?j}MX_c^>Q4 z)%u_=4!oFd#=jikU~@QvEA(_U-mWNeIR;1rUKZ=YIj6c7t)J~yw(Q?H8qop% zZ8b3};O_}cuDS;)&y~|)KW6PP^9NK7pQBBdgZv*yy#2c+=kzlrS@J(&oq=^2bO3#S z+}-)%MLmZhHGZj;ggiDR{J_Yi@pym3AgPLW?!mk!NBmU!njNV)c`iWEp*I6RvK<5T z!Ee#LjWvk@S^*mKKb-0`;@VzM$F7!c*|dDSm#m~8ea1U{>Yyw1W_t`gDiPZ0@0JM7 z_qwWJ>J^#m43O0td*|;H-8yIN=@Nj6RfD;k832hchW@hDz;4zFtln8>K5woB!4T82 zd#y`@#uBg8E(qv-Im!Qy2pV+)P6C;F}GvJ z7Rm$uzAH|e4{!tcjnpuftt%SN{^|iI0|{WgleAuzB8w1-69NJDaet}C|Ng49aNgst zyY)N3iBj}>2AD!ZEc}_?gUe{9UmY$vFO$g&$6H$c9-?=%{>nR&z%IJC<28Dc-OOC9 z-6^10=B^7*i2sEt{Q0yt2NEsg6pHejG6|C+WUVTk*`xsH(Oz|b#-f~0%)=TJ_Rg*K zAG~s3zf-y!tk?D~BoJuU08(HV-SHpb${B*C^RFktyJjb6>TSvns!MISzd}CAFh+Tc zAP|rc46sL$Ki$}SIQ}g0|E&t=J97D+8ef8v`TRA)FBVh8D>Mi{ZEWQrKw~B;SEQ6mc6OGC@(_8Gkbj+ zc0qj;-%1l~6~ucH15b|?&_4>oxinQP9wQE|&D5ZN?(602FpNVCyMi|n6eAHH#vowX zyEuY@_9mU}6Zre`1HdcqSazXtu5Inb8ehza3yz6jPGR@+p>~m;P|ZB{JNkBx_Zmut zC9&&0kOM3?T~@!}gF(C;fFL4o#U9#r+6qqGf7}1}eqaLN)x|Z853Yi2F@P0PJ4^<=~<0&@21HTaL zzitR)=ipO=!SKRFl4X9KY^>Gs{4uzakp&Udp4a5U#TiaKR1SH}%}n1snvm 
zYQMo;@`*dk?QA^Z<||r??G-j40#?n-IE^bRj&#)6J$2z2nVsFMHtDr<1B$J8!mprR zI5u}f={Nh%(&l&{$GVA2K)59(C37`yR6-6)j=zU+9N-tBm>kx;Dh=D^7<}t5rW~q> z6Qp1GrpiEZ8_%z?2lI<63yHVtG@sZjru~bOJ-$6{qE@R~n%e z5o~g81jTnf6#;t7dL`h#j6f&+1c=nIX6HRa@cbkas>s^qg&RI@R3Bte#&{FF)McvR!lRt5X+&e^?&Bn!>Rbh@~{DO)N%8ZZp`jqjW$yJcq=g#%O z*Ar)2+ImdnX8ZjW1e2wph~+`2`c7evo6T!=THc}y24vN^R~)|> zPbgA{DNfB>!C6g$-4#xvl_}mdhI?2S!CFlujrbW@|CYzjaD1*2vJ`MWmR!Uj8t3kpzRyyu8WytCbl~Z1G2Ag53R`5#NusF&Z#}@o zL-TYReD*B1X{S2JC5l#vpz`4i!^X{#tW618z4L~<%w<0*d1uYUNCNdHr}+t+z@4(dvGvlu-KQ0 znw$fCpTpqU4UC#*vM~RnEe^N3&Dl=0-|f9!F6fl6aLrfYeZ}(bnMg$*z~MH>#C=Cs zA3dAonYpi#DvNFiUiuxnYHu_4DHBP{&Pmrn%ECGDsP_#uQ#+LBb9P{SAP-$kbe>u* z(R|E>r5J#cjH3v~nIJ_H52I3Os%z9UQUdPYXCEw~?lO=pDn zdXNF#6D2TZ^nJ$rLgDxE`PbhX(A_Ht$cQbJFY&{2Egt)?PPA*NexbW78gmfQ(}i+I zIyc5l(D(Qs&0~2>Fcs0Ma&Fd=3uH1L>(n(UWtGK=y^F}mA1p5XpsbwahPfnL(K{hZ zLMF8p#l~fKM@me%7=+=RaO=s3%Mybf=m@LP@2wQHu;bzB;n(MS7bF%;KAL6+zhf2~ zf%MY<{p9=;rR3uPw$4@5Wxd-cT6&>Pl)XU=o@ZeL>hyf>E&%>ivOu40bd z9#I<#4RdM&lbra|=kb=8e7ON%TgSlE7~W&P{0goLU|_ufY&|3=QeEXaZu@-g#>9{Z zQEadAhD2VChBh9K;Ox+d+)^$y%;;00DQejye`$5HuJS3~DNz)R7%DaCWVCBt^I`Il zaao8eYR-)}VtV2##Y-_>A0Aj z8%C?(d)29V-d{I{boiRj9VSsFc}# ze3g1J2)hI(%#|;HnN+Cltj=%k$T%jxveZX?50*jQg=_egP3@Jk*ryeMOiX0mo%U_~ zlbQMo8EWUycsPn!xIHp+kwa3a$}T|OchP&wca=Dz4-rhb?K5_ZS>{FcWj^hDa1N8^ zA}iN=B8zc3!fEzAsKPPxg71NrXvS2tsCwfCrN7sH1TN6El?TAUX1Y`~%GPjLDwr^~ zX?6&%X}3@|+8Nb+B(PS$#k99KpWYnOIG^U7W8I%6YCb1emiduIYD2WTP#<2Ztiv!E z@fd!#S|6U~kc>rn*Nu8^&4m!2(*u@>LuUrNwTs+7TvZ^(8Rvn>cb11^fOX`fVP#>R zqOSI&*FYY`n_0o@-Ex7i#k3MKk{oWUvfTIGX|TIH*JK}Ku>}(njvC(m#@gKa>3h}E zFmz!A)Zp``!ghFo+$m|{ccTh>ZZ=uB^YG;Ud#SROlbAyyZ7d}R5mh<``u5)c8+p{h z4lYfahIMipPxwMDQj3H-xW#zmMk>RXtsBc;J_Y8=n2GV)lI4O1IF!TRnNVxL=k(Fr z*LQKevMy~t24UW+NjQ2Noh|CHE@CHjuCc8CQ3x zl&~AAo*8=wt{z(l`>xfQt*)XwT^qFeo!7;*d#cP}Tv73)LWUJCAtE11@N>wCEKp9! 
zgwL9Hy^_A$k|kj(N#5o~QV5uDuSN7G(oU)RdGmu+!p1c=zgtDxynO9Ket2{!!g}L3 z!l%r$CE_E?L1utPU_q;PN{xCmo*zd(2V3aN)-&~mL<4Lt~ zxc(?Cf`8!fKX`)2?F$~@lZJ~XUGw!IYhISr2wAM5$&?9`xY$aGyI$w66^}I@W3#vZ ziqO1o@r$%i;H#LY%tvxOxamne~6cJ=w}s(uXA1*-g59Xd8J%M+n6HnsVykb-BF0d zy1{cF!ajW9R<9N0FKQk&^T%PQkkIr4h zXg{=nwZGKFTWYElMOd=z*;q6?CdPLqONn%wr($RL4G!#r!P|K^yhq!nH>WEUa~9)6 z8ZRH0*^Ra)_1EZ&sVLIIZJW6%)Wb|g4mb7LBrEH%tX-r}-?&PDt-_5j9`3-ZIcpmu zjty@%L>(Qafq0AT?tUlpmynV-_jte?)LL$5D2H0fORONem{)&hMOM>RLzWuCZ_jXdAayEM2Dm@PT6N0JTvrjI&8Ys`SkPFJvV;{OE zZzK){d|L6bm=V_y=~0S zeCx9U1+CYk5Jg{8d(e$sr??J>%IC@)@oo@I^>=CQp}iC_?<^VEvszj22)w!nI| z-t#HRGztm*T%R76YK%ucg@sxuPovj$V0+>-aq*KIiUa$1K#7Xbz=}qk8(|@q^n9YfPT&(!$?fdFsmf$COGq$Y2=h%hDFFtg;@ z9XA}lKr_Q;EdukVbl@MA&sD*I6!AKUs5hcmklIwY*r1;#+J@~x0WhR%EC z-sY1ND~veKM#B-ff3NEW_q7qBa*Z!*{yW3JGs%~CNhorHK}K$7zt{P{r-8UtY2WSb z4{H=Ds-xMlZb4M(37YYYwEF_uB7*6QET#*UiO(LS8eQ??FHN1)D;8_G^NNUY{C-_Z z(ENQ;rHQ0b=x^LPz-}V5((?|dEcwzXd~MZeC;AARF1zEq`#WSwNB_I`8ZNmcj7dQ6 z(a}~Q<$yeK!S`eFWdS{%Z}Ji&hex*ymJPn?xE`7S21Mq&29i3{aK9Zp;NSzN=I>6H zY85gn7;P?{tW|H-Otsh;;NDzG=j~|mO0QGxSii&tZotafO>;#qt+@&<7AU71qPLI= zR`$0QUFt#3^Qw{T3OSJ%S*5?DttKnmRuh@j7IR!v@)DCI4+>|7m)7-6=G)EWX{-**1`E;KZe)!bE0rjRU?%6vEqgE0G6Ol{HzO5q z-{a@j`;gOzvK`>yl5RAg*HkNfsv=pBG+mMIPL^=KgPfLaT2!OwefZ{ld}pC9wB4Ik z@^y$@#lGu(Z2%Ei=^9y%dJY)2#ONWboJh)$ca`Zat0Gekd#8@@6wDH~uiYZdz9W8W z6Dl=!Aj+4a>%=2LPkrZ(a2m@w{Ey|I3u#H`lUH|?9(Mt?~zCP zZuw$m?vcuZ2o+2xo)|qAd7rRD7j7aJ2KqdXqu+PeuZk&kw@S1caA(^hZ0I+%tF$)Uwn0xx;n)bd349#;LBRWzNUY^ zW&0Y3p8*l;-zeAfjv3wtZ1|SE4|5tVtwCj7Lr=I=7JSb})_N9cJz&lw<{91g=;KT* zZpxLbb9qT?G)n5(`;KEChIzVh0gS1Fh`M!MSxLu#3>y6wqyu{&m--@5gf(yd zZrtHMzShJ@?}J$_u*ueX?>Ru8|6yrlP)SFbS<%5*8q*%!y+6Z2*^Zd)Jk+R&aj(=V zG1LY0jQot3x581x$8KFMoKUf^+IMfFKWgd-{=Wu+id=vcYO<2 zbMWhm$!=M)xTDHC0!_Dc&@FO%FFyfqtlyjCms#Jt9p3~->-v?gqSbo4Xxv;^+9K(Fe1q!^5fDPB_$;O)^A)AB3KT^&o!^VoH|4V%J!na7 zPBp2mogN;@4bRocP$L|;fq{;hvevS zzBSrt%R6PQAs1uFsZCv%*I%C@ofm#aXf1z!Ea)s~@GuUszsoSP+_N!_xuj*J2Yz#2 zzb#XlQjDg9++7c0aSkQPll~-fS{9^%@zg0!xTz9xHR5Krc#&_}pEp1KAJp|a{Y|sd 
zclbrmYy_H-hDE&1vDui$8bpIghJXn}VKD7l7e2nt@as-DVj*zU#L3vNg1aOAI-W&g?}eixwkcFE|A>yx=>3vlY1f~uV%_y>zHtn z27~f8;h3|O4R5Uvw*^40pJ*sON->znZ(!$stEY{}6=~-_JUQN5c=Ryph(&g-c7h43 zqR43{YIa+nM-jj07!V^PJb#V2J84W`J!@DZp=_sLsw=F>UZ9ok4PG~YHQ}2C;A{dy zhE*Mr`Ry|Qh-1s}(Ag*n+-{99`0T#j8^a5fKnsjtk#VwHmLxZRHTX+<=LEHO(&TJf zZOw6i*ioav*ry=m>zg?C>Yp89qRL4(_}sXzm1QuJ|Myz_pKL(j0{vTpXhjzM#Sqm$ z|0fAO71%luV2u7A*7}2KQgS)}RmG~ZfZ-qi>kMg{l6Bjiy2$^J(oxaKTw(H+$bRro zuLTNKpmbB2n)=t>cp9TwPdQ^V8B*i2_5S5)QJF2h|58&O>aQxUxPpI1;FSWIRM*M0 z&&!jY0p6%&PxV#J*w}t3*6>t#{PUes*#1)mgGw_n78)}^B%eQ9%D-%lm)ILT0XjW%$|Gw-yHFN`?5 zXU(*@aXw2^I6j%7Gkw^v_+IMDlNdt?L{<4_qS{NjFsJcqArytdX^d1pw<)H+@wHWK z=~=P=+aZ89+KgASZG91zgDdLV#cVDf-)b{sNufFE)PQq2e&PN5FJ77KAL6gq1q%6l zPs-tF6fI9%R5Jh{%t3b23xN}K)pR%whoNM105H1ic1)5&(9NefX$3#}^|N|@tyzGP ziq#Tw6Xo91HL^MS^Hd7TsYsP$N*MI_Y&xdQ;27JS=aRE)1d1Hm6<&Gr)B|B9@0cmY z=RLGQw7$IaNBTla8>Pfh_X@A~_`B1Qhvy_L06Igw`X@K1ap?pH( zkDKo{-zY6fD00hii(hzdEbZJ$?Sc0Blq$XT)dt!l{+;c#@EK=t8joMuoc>JB9VLeA zepAS{|K5}&`06`sZ*bDJ040)3RBpn=X2lvy6K~!b7lX0gyF^EYxR39BNix5SpUR^L zsB%;h6vZF3cHm z2$Uv}hwQ%gpt0xfJU*VXnS4)CC0+=g^8U0UNNwx+q;xycutEyBxJLkZNe1WnKJ4YM zTB*(_nJ7+X(>~+L6L2AwcNa;#9%j!tXE0K6K2{>Hzo?4yx?uGzSGC>}%ph!>@Jx^7 zFmbv9s9m5c*$OpnX-S`CGi$_1kRg`tb*Y*fl72kdg|q84y6@cp0{IfXz-gqVVqe7239tKBm}ES8fhb>a5|vSA6*C^HF`MGC1 z;==8zNA8{p^x<{{u7L7!yhiKX-FBA5PV9%a zct%{YSrpae(@v;S(D24oHtJmAtyAim?jsNsIoidiu4In4Nuz=KIA5rG4+$jO|2C(n znRZ#W(tQ6_4jvesMx=|4LQqe`B+;U&3acAqW5gRDg-!}eARTc_8S%E|HX7<+VD;)l z_@P8p$6OYQ=#!`)dCxF!oHFo^r9eQ;^M50rHUs)nTLEh0fyCxfYD}sxLgVPtTf$Lb zvcKkAv6D7@asTsiRsHa2Fw3?qYkN;vTyLb;NeA?EJ%fZyDtGQ{-@6%+y9V0rpmy3@ z>&M>f&3~YAGM2{ZGf<(Z-ghh~2kWw%j+Wl^N2{iSX%}64o{&iWd={}9o%%=T#RS;p z&D28&_SqulE6(bjW-s0B1FGVI3!#A5AE&Ai{kQwEmlrtbHBW5wq@h6PZ1Gt9agE0% zDvJ^9y-$2am(K)x2$4^V%iKSS^Kr6zXqQomJwJELK3=V2_yh5joBbLVK+~8RDOjlP z9YcOG{B%yceiC<$a#Fxb& z14;Ei@I+2I((jTtztD-q}A?g-YdQPEa?9M=$HyO literal 0 HcmV?d00001 diff --git a/Documentation/README_images/Chrome Extension Directory.png b/Documentation/README_images/Chrome Extension 
Directory.png new file mode 100644 index 0000000000000000000000000000000000000000..fd888da4ebd2caf47195113e18315378a3ef6554 GIT binary patch literal 213105 zcmZ^~1z42L_dg6Q?1IEn(y(+&Bf0Dn(vnKIfPi!_(hbtB(j_7w9nvY?DkZR#G%Vfl z$LIO};(g!$yZ5!%J@?F9zKk1Wo0##WM%0!T%BIq+FPNau_rntw5!NylJ>RVVKYD`2)Um^ogY2vv9<`I zd`8GTcTkkAJ|(qxL|Ap)5 z@^4<`ujdR$CTL;#7Hr=YPA|jc7S1v7=`JQB%%?b`Y@o0KGGDvv(#7tg6Gx4`IUJ+@ zqV9%;Owsa~*kQWcJtv>6t>**z{*_&6m9!m?8T&+N@(>xnEsoib+E(mC)Jcn}d`%8M zsP*(WtT}R7e;QCS@9qood1%bK@s8)K?f^$b9#h>y`!5rLXz>H9O4h*ztOym^#5dWx07VlQ`MlC zb#k?$7v|yP;bVk7qNk^ax>~*#*Lo)ZAM%GUNk$uYcV}^4UN0{%9xp*2Cs%7;elamI zUOoX{0RirZ9Ncc+j_&3NZbvt!e+&7qa-LbaS-9FdyW2WB(*G^j{FRf3yCfsy--`bG z_isC`5VrqQlcU>zy7kaO-oGil{5*WT|2Hys+t>d;WPelsCHtpe|5gY6TbQ`Eo0Y4q zlY@hmqdV+>m<#=i?bcf2Gv>A4*X^ zf&WeUzf%4e<=-xdtGn7hwA1`A4`C0r{eR;AlOM|am%;yw;s1{2Kd}$P33~+P{cq8L zJ+hVgF@uICg{JgO`XvH=HwzT_S}yfKePe&X`#ltok~x|lI}%e@S5AZUv4)wh@nd&m zqv@Yd3o0pepBbO-eIzno63qV`D#!eVo-*9r%+>7Z`NsbFeO6QP^_j@6--4thfW&dc z*Uw2iOMK%;$(I)%oC{yb>F)FQ!^43-Cy5GW_4VJs$I&^^?(Nl^cLPgHO89FWcQ&_C z=YTVbj(U6B`lgQtc5m!!M@Q8RzquAU{BE2uFf?=+e$Ov#HAqS==2}=+S2z1-B*WqI zX#Lm3gt>=@hp_jNCAEZSnYOmJ*qzloyFHtRkQ}c_4=*o650ARK=(Auf+#Z#3)ePYg z`521j#%1e;CXZjgf7_!Z%)2nD;f|Td80{+QJRXgdg0E+6*3tcu0n1%sYH(+1pP@AF zdYcKxQsP!k3;Fq!k{4OJHp{;%Ob;%D*)FnSeoa58rzOo%Fi}xC ze6s0s($!5rj-a9Q+2v2~>;yq}Pa;9^YJ+Oas12{AHO=W6Gach~@8E`?rMeP(3FNl@ zfX4GVJ6>wIe@lBqZLOYzN36qa9a^|R%SJ{RR@gii-egt8u4||C-Da2~FzJH~vsyaF~S7+ytdIX3^ z%9*T1K{awdJ7owvxOyi8vlF=gGNAZQ!=%P~>AS#IozC+*)A{-%Jgd-rv7l7{6S&t z=k}85Cz#R*6*$UXBw zTSh3(6}A7I#Q&kRExaymTGyc2hc{MZO)q zA}?dSv_L+V41T=(AUG$^?vLM!OeT>Z;oNtSDBOqM1=Cr$Rlk-_?NJO!x zy~Ywod={sv<&fKKbrBE9d9UIA?reqxH?(-@Y+3@5C}#ECxIbOY7x96sVl7Dg1zX!6 z_)&tQhmy1oSH+eWG6*!=bV_YMtatd8U$y3n@yh2an`&8pDOu#Zgz;eaU@Di61$1bg{^}zl zEo9!W`vpiVb%#BFDDrGZO+v%4Jjq0sEji!t?SR!M;UBZJ=Gq`ScSVs{*SpnteFej2 zT6ZKmPfcHuuc4u#(IrQ0$~)i!6?>i1H|-tM20~9I^k;C^#|eb8&uH&N%XnO)2&%S;zMd{>`-KcF5bl3BB%5T#cYYs zq91nI+ndBwGuDt+ng!KlS8;JK1;@K7e6KPF6`f;G>yjD<4QEN-qGb|XMnjXDbWRFx 
zLaedQMF+9zN;W;dNOWjL%%wzp#3R2^FhCxzVuz5auZ`5G93pYxsE!0+i??2S>(OMn zuvF9cEAa!fQ_k!!+8a$T?>M+m#2?3zM!J0%G~H=z)&{n^2ZfP&Gv1>*c)JFe{DDk+ zRL&YnB|@~84R^_fe6>&ZIjSi?zI>aPXe&mE&+N2vIkD&ht9)T-t82xksNEVr^rb4| z+oSvu4;Id!GLwl#rR7$HY^^Cp=bpu)n`jDDbDG6wHcWV|E)(HQ@Y<7)WyU1&*`7up z2qHz1oqYLui}n4Lod;AZytT(VTQfjlx$-o+L)cXI&f{OM zdmm1maONbNMS($+BqBn7TU#k>Ejn+Kqv@n!)6`vS6Wt8ExPpN@4sj)N)rBGf;jfG@ zjnl)nE($Fo`@-5f4}3YeZ!v`!%iBu>wn!aVSv%F6-fX$5^WvtnT_^#d;@^r|sua|;T*}=4)iUbVjk;tS3hY zQ9fYKyWKTnMV9IGfuKR&oKK`_jgysaGhTkClcIjdAI1dtTGZY?)*t^v^^vNzRrhix zO{31gF2g`~gwj&-+MDvgm!|USYJ6eB=O0berqVXlA#xM_CL3h z>a~yxvdJ%?w;jj%Cf{}#0qxhdrNbHOizq|Lf5Qj?@R$MAFPPLXLO|I!9gnwGaWW&U zUHWXfZ!562mC8nus3gQ*3RF*H`CB+w8&Iqsq1)S#Mt@AtKb(IWy$@X!!37{g0 z?eP{)2s|dK<4EhPEI~3w)23FU%+!UUpym~qr%(a=IfOMYF3Bq0M zq!f>A3Cp>_OumAfIhTdR-j}=@A|<)zdvij(-0QbU%$r^UQx#cr|3?Q8`C7q;}owwkuA{CVY$_& zb6S%a6u5A~L=qa)IMe#A%{EkEi-LHr+3KRJC**nvQqip5dlmU__nH}fa}QC4Q&34j zVl0_xzmlz&$*Cvl%qIJP4>YYvT~RgIGX6@-Y*&|WP+|P+$(tW`Y&glwLd6F4sky^8 zF>m(k)l?E43~yLtpwnyeLYR^7At+lL;68Ub^;%)wd{^rR1D_=FLg|9Wp zI>dZ3H&lb>=I8+V&b076SeI6&Dl&H}rZJb+cg+PvM@p-mU|M_>|LWN{`3NE?K>cmk zG6yMhtr4ZQpvhPJqLfVDUi5gE4ifTZB<}Cf-d~Z>v5t+)P)8Kuau5ktbPEzoP7PWK z&FdCKFyY_Op9kg*y99}>bW-y55D`E`yQ0p$dO7${HPzNJ9b3^!;slQLB~$8yOY)I+ z6ZRC#D>H}EY&p{jB#21d%tL0H3%-Oa-1yo=_)TLV3-soYt~M|zqTh~g)cXY*K+LG{ zVP;|8bp#C$xb-KV{17Ybd_*7=35fIZuX@a47S)Ze;9cCbDc!u6qxFqIxFC_o)H6i2 zd81<@tWf%A{U5Ot@~whVg=?HlO|s>&?xgV1NYCefpITZRtQ>eJ2^A2Hf##+7S0N|k zc@D^6Q1--GE*1?R(oW$%k}kxhD>A}=)4`@tIPD6u5=`JA;tU{J?zGNkZ9G)aTO;y| zxzUvZvbd zJ%(%PcZJ5cV*amO)v`H7t_FCH|@x}Na*108&{nQ zeSF*TcPlmyv02`0L@=ceo$0!r=`C!9JsO>q7 z_F8edV2ZieyZ{KV1bR?fS<%{huLSm?0~~KqeS(la2|nn9^le_0WuQQ+Naw_Ko9WrIY$8JL?_p5F%LFfP@(V(SKoYTQE^gu%S!$d-XhaI5#4%s8sAmBxN4vr z#glLb5I2p>XogLX=jwY!^Vgh5vJ|z73Lh0AEedyCV>;|QD}Gk8-t;ux*Z_`ruKN4S z!lH|)fr8EU*^0QKAxzkjAFfL$_+Y#XyFwNJvqD=Tu2D;J{E?bFu54ioB0KE;=NCnV zgJJ1in&h0{f>yYS7PeMx&hyUhLVt?XjbfU<@oEzGheNXq6t=6S8&#&+z?0vEZQ2cKMDkHe+euJh`hr>_CsTICFD4|t?BDgx<UNI^3sp_Rdt>t>z{YmyYsDbfFV@@sQjCyUz!Ru 
z=3MR>j{&cAux*Tl%ycQsUFnm1I@!jMAW@pTn`SQ3-pl2kY;^nW;Xo`1s-u&ng@o=RFQ^j( z&zc(n^e1(pe`Pzx>^hI@<+)T zAkIyNKNsaY)5jkH>H0<(BOv~b8FW#qisKSR&BN*=sX-+6t3fP<&$wxQkh& zw>hS;Ruc&$QEX$5@8>;4g{@|x5M%J;gN{z}t$z58Ec|Fw2#W+Nhd^Re^+c3mcSBq{ zk_e2vp*Wy&`kx{<%Ypa`25WI458gn$IDU_fgl0n^AYk0}JzjRLbPfWcQW_O}UXR_g zztO*=VZ`B^GOgjfD^^+Ckhd-jR>QB5FJ9BBE!fx6Eh+e6ZuDc;7aHQY4X-4EoI(PI z%Shs**<3`o+xb-P3mZG6X)1%#t!u63sGMPXI<6<`!*dgNfO>^K&L@jX3jP?p;hN2? z#2s=*C^hIo=y4pbSg1eCB4~6)a`YQ!JU>G)`~oL)O*9^`kP{K4lbzl{vI8K%4_QJbR9dfXeYnp&Q!5@cg>bjt zG@E@9`Ao0it2OKlKHGPDV$jf!@5jRvd-nW^*& z2nt=adHku|-lUY|F>!>d8L`J7}j(I zzKt3$FWozFTYxl`=&~OX@oa}hwTxh7Bsy=YjK5v9_^YGTV)eHutoESzN@U1m%=W9K z?3X-UfM0+bRIbt$66;DaQnJfiJTbllU@{phgO9Z{UD8>+25AAv&eFWy%=7PMfQBAn zK^hxhA1(SE>~RR{Skz*!0&%`8x~@Umw7vz3Aea}&l1@5abD*W+ilwAcx+ID$x-fv9 zkLkup=Nr_B<$WQrHSi@p`C1I*JrXb*C}DH>+%}Y}fN~vj!sJ)T-HXNBb2%cJWQ=5m zZibLRQKZ1HwGDjfRTBA3urVimN|HXOAn*q#+Ii*k z&N5=8!r&ylSeUFSLg)rVw`)17b0$O(GUfxL-NSFo_q&v~IH^D3y^8WoL!E7xjZ1e1 zAVbtw_~hTejJhGFdn>$oT*$o|3`AvpOsx1S!yCS+T&*OaB%tCvyJirDrv=HH#N@W= zkr+e7qfj9tVW)#%Nm(ru!ECh4{YkE+U(#ZVP7YT?RgI+wd28Q4VH)elBwrtpK(r8i zlO8YK3B>p92pc+HHHQKE>P@peCPMKEJ|+rVk~fJ>*1Q3jc4&Jf=M)|&QM(?B9!5a! 
zI*GQj67J0SUJV@%Hz#PqDlRWp{5De1)=jsShgyW%P;4KdT5!-7ELPoloz|>$4F$t@`3##7Rn*jy(~LkY4i&`{mSnbX4hNPN zJ1a}VMq{2|_88)q*8+Sd;vW$o6;x+>&e`sBJ48seJ^hB~KTCA+SCTVidC;EG`GOxO{w(qNujWzo<>(L~Uch-r zj?Skfa~X;o1(Ub@sXjZYRU5iyZIelqx=YIyKwr1tjvkY zMVk8hY8En4yeF@JyJS*g)|b6V8T*182YvPY=gx3gP4}CS(D`=DtCiiI9Tjd%WJ`(q z#%ykeKwdB9g|9htlX)IA-Hn4)6GZkqG>m_Bi<5sNRRN8ERzFT-*7-Bb zNA#qxiD1)p>m*)09m{Hgq?rr>w3kH2ie~4jQlFTr6>NPt9gH`b-8r1gug(?_w1Cen zguUJ54kFujg#;-|G3R$crJ~5|5X!Rg32Dh2u&^eo*g6bmsWDRzx-DPlXP+df6`U-d zDK)O-{P4u>CCxU##T~gV{#d~hDE0ZK)r!(d%1LsK)&AVMdtd-%#g}6auF#-s z0;`+l^G>oZ?QZ|3^j`53@ZEcx z9>8xI~?>3!OTE)M$oE18J5Nk`)uTD?DZF!633_PfJy zU(~A8$}l{#7TcKk*6+Tm?tMXH1Fzs@+OFPc%GqT6oY3L;1l`HD2>| zW^~9`7InVaSKK%`U6mXkm~viAt^AX|vLSJEumXf46O}7hs-x z2+W*-nJh{V`*m)*j954Uz3U{$+2`lUo?KUjn{1Uig+@DDaEq(_{VEdSkHUyyIV_t=E)l+3^YAxxMFIhp{@w#D`C#rL{)K zcPDf;`tIT{dHlUUo8;(9#;fq5KR$lboLxrTx5l*YuqlR3i6_azk!v8{-Qe*4-AkV` z%$HaNXKx+w)L98n(BXLKUr~Ps8qy4`NS_(;hTj#w2Tm1QB!5v*x}B*S*(oHd|1_viIgax19YB*3R5ci^Dhr$I#Vy79 zeW&{(YtxMhFT`a-+l7J)Z@2)=Jr?qC(81)VqSErFl)RLr+&*i2ns9Rj>{NTdm{49) z8Qjyu==72zstS94uRq}uqIT(0%y&ctT>7QQ6q=e`+ge&DRvt*$EfG$A zbA6hP-uCka)LF0P6+yl74-%Ul~}jL znv)`%5J3aKtB5Di+*Uw8kkn5#>TdfP3~#sf^D%#r&$JIrP2YPmZdK;>q7_@kFVf8I^kE;)2HXnTn~hO z2hwslU*u7KAD22|puJQM@w`ynu-)FFPGXp<;9q%uorC^;0k3Xh9ChGhlgd8Y)M41v zmL`zE&lrNGcO*?SKSV)Js51~0a^Uh~dl&CJ#aH=*Eg!wY(PG}j8n3pO9nj}2Z~Rt! 
zUjK1L(CUsZfr2q5qpHs*E1z`cLJX&<+$S+A6FPW!(4AEEn<4fpn+`#f9r|+yua!0W zeQ~?flUOKntcY~70gMnu5*x0{qpRBiQ!P!^Elv*I6n_DRq#seme>6&3 zUQGmf+y3byP4_L_GMtp$l;l4QntvBPC+>Ihu+|$`>C?x1@OnBSN30NY3 zGb!?EPKe&=2iaoDi{iPDPwng37d!Q%X~CU<-)r;4$gM)h`;Xv%kHo412Dhz>p9lFsYtZolYBYbLe?{I(=>r z+EGdf&_u`oakjO=A&ZFk^19zf7hof5oRQxr|@!OPz6CYF5fUQXrdygaRsgD+MzR@jd^Er=b;;b$wIAoAm4h z!rwdnZMkoM!eSWD5-$NqIUxN=&>c_ioeaiU7-q}U67$28*E75%CF`m3MFwMxik>() z@8YctQe@l-U3lNJ|Gd*o&0q9{Uo@;Y?2%r7`b{=B|AMud)bDM*RyyvsW~rY*kp_gn zPcVW$U6|aUg^ZY;8TCY|Bjb(IRLpB7h(_OT3-H&0?=;KVYR*m1*UP!--D~(wUhB2w zJv)KzVQae(Z6}iFcGQ2wZtt?MMGEi&zwz;R$6TsVKlXZLK zy?H<@zPEjHzGw|;K9M{)bho}4S(>39^C8hmSaJV@S3!=TH9Q6f7a#%Tt>GgNSpY+RF zPh@ScN~0pAk*Q^+M#NjLvYe6R`YUV0J8I<)zOCyodQo^sD%+9*M8%T#8b|xjGNDnbM~A)FW-Q}5?x2kNKrZHS!S_SYmiMyHO5d%6@slQ|uXmf{Ter2X=Tdcz zt?qqx$~{kxbDSWz{SkX<)K2y7i(VDQ@*8R&p|d&_CVQpqn86|F4yE;1)nQS4ZGx{y zYm0OI4VtpRbJe|E9cLHoJfDgt z^U0B#&3?OG(x&V=CV4&+ermu?fTenrE@25#qB3t`d_5G^GFPX_+%=1SW@Id`jpTWec+LJKP;w{m)QR zSeBuRdDoxlQ=Wwl^3@XOG^_aYPk}PhRitX7{scNJ=YWtE9MI@ZJOLpB(HZ8-90OgB z7X{5M25{nd`b=$uO|$*Cd`e8W61DE8A%mDiQOQOx^yoe!Rxo%Vx-H~?)y%Fk{hWEN zuBI+w6Yo~vn+#F&r0eu2`foBCgyt8hZrV@FooYPB$_{65rs}x2vkrW%sq|nqb3E-g zuGc%J_wTek9z;R>x+ht!-m7)TSoqBk_W8n~;|TZ-k zD~#vc$pQ=&c7A`Eb6@4;yGkv<^Ko~OtD-i9K)k||!f&_K_a2U-f?DrQVP+&kHUSsQ5qpiRapzoi z=koypKF1kL_Pjo`L%WkIJT69S2L+JZ&kJ5lerMm*1LjztzegI*ZSL&1o|{}x2i)o| z?R$n@%qf(To;>wh^e*w4$Ecsbyl+ubX=BaRnkQcHFZa z3zux>;MC=}NZXd`S-42H-N<%>Wcl{glCgW$1`8g3Tdv8HhBbHjh`-6ev0@Hv?YqDU zsVu7|Gdpu7NRb|X51rXPt8ZJp`!orhlU^PH|DtS((3Dmn1o#k6=KhJfdD!v`*%eZBNP_iA*vq?qVq?WZR&dbAgmgS zX6mU@S*-7zDIS9*PZIq0YjWx;d@redl7Vr0SY@X(;88&ra4!esp zKi>ChpFa#B`)GQptR4^)cei-nX5D^&d$4B>zyD^8IFxftsO;UWx{RN<8}%;t+wfVc zYxlk!^0iv$CO5WOWsPY#<`G@OkF|*N3(9icRw^Z>Xq@8lJNFSXb99%rZ`I*4@fqFOeUa1F6(@Scp3!^`^;te-yEyjP$lej1 zaTS&cPj}jRRis$mhx{mHTitr<;MCB5f88)RPR%u&vjH?oMcf^m-mUoFo@~Ht1Wj+> z99;L~`cWK-?uYf+Ue64i@j?*o;4=}2#V6Fvs$^eq-MhhWQ>H$R819~Kv~{*!2HXub zEJ&W9n6lGLwHE`}=91mB*ETmG7Q8-}OLLEP@0we$%4&88!qjKKIr)a%=G@=$HlNnR 
zXHMHM>x@xpBJ&R_e780IHhJ*6&-0IdMg5Nd4UDdx=7H!k;{KswN6Sj$uS@m1u&j|0KBOdroSRoi<<6lK7z@xn zB+diuM3&=zoaw7`80I4!QsS9bmht#(H(433?~}2cbLVa6t)9CsytPd6&`CA;o{)sf z{HHtZn&UDJH_u?{ZFCTw<z?i8YO6cL50IZZ+WX=$C$mk9$joE;Y65)(`L= z*s^;+L&OZ^Daxp9_U%=$+o)LImpUu*WsU&U;RXD(Ta zEcg~=<#w-9Fqz(u-Pf|Hr1k450xS?V({+s?8s}U@MMF0j-t}j6Vg7LJgWz;`PtgKH zw`E97Vun+jTqy&@eRVI2jVuBE`eviu?A#`7O96gW<&efdJ~_D!e4c@#d*b~t{X~?7 zRV;#aDd(q19}UO}_gv z6`k!lYy9;w7liz?eLgq>eC5BPAk^8tt^Tl`{V>3FEPVE|nBC}Mkqm?fk(ex`S(yHd zm&-;$is)fPA^8|E4k&0VXownA$cDe=9YHtLyz6I`!{LII3OZEl3x$i<=HhbGEQg{~ z-$;WM5dReVpdhbR=mll2^sN=Z0^Ost@g-AdljSc1PBKdFj;T!>cKk1xB>ud&*lqJU zaJwCQB3z2Mxo&#@N93#QMAg@ZQ%e!n27@{qKJRH`uUW*%enG*O#CT_3!w35#n6x}z zgP1FfQq)X_7ceA|3F!6`MF`u`k8!GH0FT?43xb5Vbe|YHHoQUMWnaci?pyPT3yl}6 zto&B?)u#c3AAqVG3PxmoFMT4y)=;dve8c zMRG-5&zeY=U7&7ZI2mSuzCuBX1w)qMN*kS+ow^5MQ zuGOczRPL$%xpwt@Zc~2}GA{AAPANfi9AYHo8^dCzAvx4;7cO7_G_<#N>T0C%7{7|Q znV+w=8oHq&*dV_&!Tnk)aXO6(Kg2feA^(~n#Smm{h|x{w5TRwYbg&!&*&iy6p!Iy; z3zkKb>nD*@28CeYv~!X4M#z54vka13Uc<*()NJVifc6gODw@w1eI(RUn68b)s4-M; zkRWn+7wVXa+gzG!F)t|k0$Gaq>v=q^P!Bt$0RZt>>&YOn1lph86$cHdk0X6*w7l%-7jn_}x~X=Bcg#-ReKtEm z_JeCmo+lFuN;wTS^V5E3t_}s_8l}V_TK8PMOuN0*3g0kl^pO$6+IOy%v4mN5`V86r$uITdzv-)})+ACmqv4$bB-OptIYnT!i85^|A z1eV4NKkzZnI@kAX*7{PpMY=(kYj*xCQ^uyPFNTE$&Yn2_3@@B!me~zmxZX0;40_|H}4k)T>3-aCX0vH{kM3 z`c$D3$sZs z0|68z<@3R@|JCDdBL~2P5n+OGGHXuh=o|pg%2zIkdJ(zt0^34v$I^i-L;>+m#dfo1t9ZcrUj=c>!nd0?oJ*%*-CKqy z2siK=P{`5$0vUMMz3qJ(mZ4& zTXG)Eu>y+ICir>=fM4i+|C429e}-0r(e3#p27j`8E#S9HpO~>8%YGDFHfr4^hU9gl zs9w}^S45+puyqcfVKdTW>%m$l2R>|*S|XZ~o+O_yu!s&mV6n2ro=_3M`P^sjrX+ll zlM9%1xf5Fu#OlQ>W26OvYbq#$gs{&fU4~Q(xOy#rt@%n>79_U@>6KGx9>8yxlY;f( z3eK;b0ajoz1@PqYCbroJ=qrIoqVBehoWO5}otrn2z;XLlbLRo($8W=((sBbVf(Koq z$S#7|Xb|nW&qDNBi-mVGF_%RnhHOBlp@#rq0U*;}UAeK5SlGegi#Ghb?nqD&dHpji zSUpuIx$Tx0f;jz*u?Sbc0|Xyw(tl#3kQ1C^nqMOLSm5)_t7l6}dj0qxTnm`Z$C>ae z?IF>$SZji)nvoI{QKLoDmpW=G;Q4;+4TD$Cz40e&;C8w@0;qsh2BkMs_7Ntqsj(k$ z*k7V=wI`rql!D`|dDijb>qRG|5{U@{k{lSZqRbaE<=by2T4|X!=TpL6m0y)(Dn|5>%^BrE;@n%7M 
z(?n-S&fZH3X7*1HSaEN0JS-h%4V2rTVHc~>$5DT;zN}(EeDg9je;;p*ookbK<;Jhh zDcn$q)A;sO6$Pq=X_Mel#6ZlDL|B)9VJ+UW=p_4tRT_>w%`58vw4@nx{B-?PGNh-_ zbSe}e?o&dbRx_6K!OK)Dy$urp*d)V$#sB<$C&6%>ab+gaCVa$8%n;(u=V0|+Uy&yL;c0b z2;Rvfy4p@^*GE+2%k~7mN8MxKY$UaJGhPjf9k5M)dpX^H=RsXjg*MIo;(H3naE9Gt zTk)pf`sDnOnJbLz@2WqYg0?3~8%Y7YXD6lsc&7lpK3W)|Mj!PgtHZWxtjna#0?{HK zypHeK=4JAgT|tLvFDUrzAN(;)nyF%SuWrmW#vU1Gr({D$E! z0H|{)mI>Zt$1A-?=R7V^QrhnX56j_;Mj4EMy%EKbofaBjHIT1{ERVSEw3T^J8NyOxt)|_3wC)&?l<0F9RwK4-zavRi`5cw&o&N!^yY8H&>>=qdWNnJ zy1qc#g1}Pz%Vmo3nj=1^Md-aa7r;Nw&!Tc{+K?YpZ_g||IzymCS{xND7z2(;ZwW$! zhJ*Mz!ZC2+<`j~{mF5~|KN5fdAUqr!lGbGt4D)r>g_W`*gUL|;-Ob$r;!Ozzehx5Z zG8NE-$E~|>+Q679fg@4$TSl>pxgtB!7T$|IATe`cc?FFwI*3UZTg5U$%-EQ zJFe&H9iAWdtx=F`M~lZn(oi7Cu*$jYZ4aj$1;OqP%ihDD<;`#4@{K5ofbAr+={K`5 ztPl)YjNd&W>1;wrJTZhi(A=RxoN&zLOk4)qhBwO_(N$BHflwv`b(;+Gp~)u=@frqc z6-8LrPb>=F4BZ|)%M+-$K>hSu8%L4BB$n@6Dt{>}OzbdPIN9005M)S6ZaG6kS5VRWX1?zF& z%!?3r#CtUA*VFkem$LM6*;^^{?$;w9pSDTHIOxSop1ofR8C6Kh1v?V7S`ZS!?fyK! z7bU-b#jhxAlZweyBwfmj{1efhIdl^6ZIiL(uR;Ex;VLjGB)(_oJo@(A9H3Ne8iYdA zlwB_k@LPhDk*ho?dF=Oqz?nOK0KerlxGGfA28St4QO7a#to@sPe@a9qn$A{oK?Obl z6TY0Oy1e@z_pOyh56blDeg*nmdb6pWA9!zk(CvMM7Co>#7g z9qGuQN)i|Z-P6wK(l|}*5eWvizBq_w?!OwRI8|zn^a1`Afq!2q9-@Z_03MVfLivLd zr6^oF%-AoGrZVQpF$ot@yf8i^l!XvlJby>7C>|jSQ*BZTmxcmt{vMOo%%umtFPYC7 z9hXLz?x+XLZ>0mrfAe2&>92laMvuKy2KnP*TWCbaD6ir$Rqpz#wc|jw+fSNSUKk4fy9>%a!z`GBU-(Y z+wH6}_sy@aq#;6COP&lxTH0CuZTBzyyPkpyGHxw%ZA?FNF)bsFfbTvx)sN13l$*z9 zvma1(iu8*H5}+%WxHlE4n1-;#nijn05Fj<60OyVO%TvHw3N9kMb2|@1qX~?~^me92 zx8F+9W|me=Wt}-kWOJRn;q~VHm!kd59<<4)ob#AOQYEnndb$DO;gZ0x=1?yZQeG8= zFX%Vk_=HN%k4-A6TsgC1TL>J^{wdD=$9}8f3NpY0O`B5ntWp$4Ea^U}M!DL1T%!xd z5a|$Gvc9xSmHi$8Z4xmmrPRJsE^DcSp*Ccnm1Re&JwBhpC1uA0?4_(Q$#oc{p2c#O z4^#VL>A<^sU)JIM9{>PB|Gpe;+#-?F!cb7OJCqi9@-DB)JTF;f7$Od@B;?ocaEIrR zLxwm>C6BfD-N{t~=mjm_3JDYvC?v3JOF+7=M8$f3c9;q*LoZKshtV!uQiJh72^112B(Oas zP)^xx58VCXc(&>F$HVRzwzJ4}AU+0)-yO@Q$WtMKJwyW79Q@o6Ax;_h($l>wquzPh5VQTX^u ztqhh9#RHuT8m*sn)#1y2N~@I>(r>d9sxr3Lz$ld#PD)*2fRsw4@D~y&Bv448kU$}U 
zXG;S8^~}!}?Sf7rfkFa9k$@W_44lI<@pBt)kR&v0mm1iGL4Yxl)-^CPNLsCx3k(vc z;VU8dF)Td9NAj2&7$2mp$&3+>sdWk7e|l&A1Kd^J7)FF+$i%)Z1#N?ith3><$^ZaB z07*naRKeEvbf6=5ftL5e6%r^UP)MMVKp}xb0?&d3ywlC~!14MlkS<6R5-21vGzrx5 z;lCyw{AN!kj)P-(Z&!vZOHjuSLmzd?IO*2)xg=6j41_Mak4UkNgqzqVPsdgg-gCxOsTZ7vy z>4b>BobIX8DhRd=4jO3{1U&qb!IEOP@vjI~%wh$!k^6EW#EZ+q?aUGwZUGHc2?omh zaOn?(f1vo`@;?y%f#Qdo|L2#HAbguuQ=<`^dYsw34$q_gsshpuc-appPCv zANL6;spvUxAF$m+6ob!2P7Z@lR6y=d64+Vle=S?GYOoV9Cd8FruPaIW_ch$W7)b0& ztaBjU`X-3#wHp_taASk?96EP{g;=&z$wa}}&{+@GO7#78rHMGq8Ns`8!ml3B=j-+GKn%fsk4%21%Mt7+i+=jtB?Gb<8|VP^E8j=vDaGbk(j8 zgYbbOeMj9|+uE&mWRq=a)w8BSH*>XH10OuZ{HX4*&MvK(=qw2?ZQs(ZTf*I;FRa$o z>qCcn)`DF2NljPwMOMBmA{Y#({1@1{^$WU$@g!etL5nZ2rw%P zhg>k${OpXv@dF2b;%hQEe-|9_;O^k8jU05HhQe^fg{;Aezi{Ox5b^^eWao8M{=v#G z6LQ-mU`wA&*j$ z9pT~MRiF9#MjoRM@?Ozx5BXOlP(&6I*l`j_ zwY%5CNd7F8V7=1DYxI#annKcBDGaWXe%o>U?JSr>Kfq5Po!~OrS-33tcupjcn#-U_ z=j+a(`yItJ5I7y-kh0P6yUy2-J?UO>H^>Qa94N z8>(zWhZ5BZPRgOPbBj!ndL3|YwvkP>*52H#C>azoUbwxB%re-ltE;tEefgZPt#@>G zSba^M)=ZQh14Xzxw2ceH00W`5wbh>(9AS*aGdgLWC@9K+FSAEl3>AzHj1cgksA4b% zFNEVFE@%)&To^hSA<$r38~jH)=s=fWV`F2_fR*%xt1JQJ8Rh4li)2fABi~9c(&QTs zVK~CP5>Su9Pn}cUV^xrPP+vPZ{O|)V$C2?FJPpL3ukc}QXD+SzrTQOrFXDE@6P|AB$g*`(Q@et>?D z-pfR_%{;mK94?Y=rd}i*mIMaZ_fc1(zC?YBXM}Gn8!>(uo*^RYZED~c2sN~883tWl zEm~nZKIgZ2hu@w*0BbZR48dHnOqJHnpy_ z1{PI1Wk^(M{Y3c4WHtN7G%&2IVTFPLn&PBUT7*OU%6zMqQn`Y`5Ma>HfCE91$ry-$ zi@^tuFn?DVXJl}s3Bv+B401_>@c^!f1Ah!+h@&W-j`J$~XgfJz+A`dv_q=3=4Sr~l zu5gtlkRK!?!*G-r@*hZsqz^~>@RN>T3{HU=4t~ls&ny0@Z@lx%sBxW1&B_JJ1T*~2Yg2vdD~T5 z!gggxzMfP0@5;g}c;5{q5W0hB(p^58!>fWi)X%z3@>q0m((^pMFVI+JcX6R>^ZGph zUK$@de4F!+$X(|Cvdw6UgwLe}RCay4{65UYBzgo!oypg$h})KD;2V}90^u5Fb7d%S zjw5yS8i3U`+QxQmrK%re2OWBh&D!@+d+gDtY~f!YwN0Ih*P-?Orj4^>PdL?{TKJ?r zbnk68UMnVyfoj?{YpDTdtqc#&lXPnP7S+632Ec1y_d2WA3JA=jK9($6WcS^BuWeYr z!5SJGt)->K6%AXtFdT3QvRNxFjg57_l0jo+H3CBbR|6g4g9cm2Vx^*4xG+Yl6;DUc zDh=~59hzNXJ5M`23OBBk{KGs0Tj6308O9kU97ao*ww(bFUBaq;xi|)=P%m765E{VH1=bt2#@<4<_t5BPAT3EqOX*D=uzxOlPx 
zLw&*#1{&{31*>XBD1Ua4b4nTA-kJ4~PdtNHm>>*)YU^ork8u1^P61b0c0fhm;z`(0 z9P%e^`SufhhEW9X>h~^jdX?7Muw5p(04AbCS^AFbE-C(VC z6YQYFUZ7Qw$#(no|JORS)u~lR$T26KVRzhdhs|I9sI7Q>q#b?Ysn*feZi^ne)vUeM zMo3XK$hhG%B<;e>)~Rj{a@sVwzvNx-vh|xb+PV#EZN#WXn=o#ity#I&-tiCbw53az z`T$?8%W7pn_!$#cC8T`n8XAO4#)+P|R#qwm$AFfB2zy9**2|dLyrtRrwhD#>h6jSG z)hZ4vClmtP#C%Xg8sVBVXO8{Hm%d~l{NM-eulL>OI2)RpWYnbdD5PhY*~#*-@>yTc z*%)pFV<23d5ph9O_!EjW8Upg9EW+=~v_g8yIMM)?AFyErs7R>_J@^84aQIbENh}vL z?0F-JaQFvjG~nRZ{h|$h>H)5lX5`%?cgP=g*y4>XX}I7s&uj1-PvU^XpXU!h;odg% zjvSD6;MyVV4+3e>k0(3@@9?_2xSdu0yNi5_eC&)8!05oFkaLRyKe|lMjNJ}h35vmi zqNR=0XwoNc`Q#Fvr1>)7UHlO`E}rCah8i?m8Q@*BqNN zdtdwG?|y0BTh?2Rwp@*B7->_+G+4d1Z`F5ivG&cacI(Z*x6{rzU0bylSjPq_iS~6G z(6nlR#GqYM2{hcCD{^aOMM{?lpayO_?&qmMmFf zD^{#<-Bv3eCd-&wvu2G=oH)@YO`2qjv=Xy=^=fOZZ?f4kM8-`RZ?k64vSrJb*~*nG zWkhw^j2SaL-SXwjJ*-K_39BJ7P=lALOaBwlK3b@QBPfHN9ryyK6scn3fAfK{=St}jn?q`>ZY zB`-Q@Cu*s3!f1ld5a|YBMi7mi|k2L+4b!xE5nw?4o*f<_F;G=*{1E7 z1M^o(bLaDKv-}4qzMU__1xJMhB!Q&OHD*8>uEdf7BCUoBb;mhP^cN-vyEmA4Y1cpk zaNl*|lR(*kEG5|`|DpqM@eCaonZX}%@k9to|7F4o;NrQ5NI-SQi@tpQ@2M-2NWQNm z+}oA6yWxv#ScV9yfDaSGOP6l3>d-a5H5!~(SBVV zS}AA|tj>Ydv}&nT+oM{|mOs6~X3X5z9{7s}|1vt-yPK`PS_2@-r@N+6D;OH+RI^R0 zQG`UNrNbJvy7Bl!kNQf;zWeQ`m5eQR>ZzyN$3FHkH!f&+_uO-j{qsNnvl}REx%%WM zKWWDwf4tAABUFr+?|%2YJ{WuJTiUs&b}NqE2ffh4!vs?c5No+COJYnSkNq z8y16yQ~P&DnWCTLm!AxZu+V?-E^Ro>EEMg3m;&7PJ0TIeo=?E zo$0%(3E^c$mAJw?2bF{vFb*x1hnW} zgz$A~yHj0by$1WrSlcFRQeb!YW?Qjjfj#fAW38%glyz3OYUZL&-#$!AgNDN4bZ*CL zY12xH@YL!yBdhAL!wx^pX3d&yci;7YeFfqJANYX%`Okm0FKMgPzWeTL|MqYHX6K)O zzJ2qX-?WP_zSxdE_E`J)$3JclJ@k-WaKQ!kwzs{_{_uxC*tBWW+>gQV)1Us-&OZBW zPX{hmM`FN&p~ArI3t#wxoqO)N_OVZV+zpS(;Ha#fujU|1;T2 zhuVV5H$Ujbk4p`?+KtkYBS+eyhaK()D@F-%(J$9?2lM63Rz3ced*5^ecv5?Oit?W%uTbVI)wvE*@6<3m! 
z=SmYtNRS!g^nEAfFI~hn=ZN_8Jj0Zkb!lfBYR_em{}3n%w1jW7H=RsvBz@fd<4_#6KTEtK<$k))j ztx;QGwg{$P2FQk%PMa`tk{x=)(QbTns$sNAZFK5yy{_4+))uWAG0@q*(P|sDB}P~G zwrV>^Edxbe^INAwWlExNk&sG_tG3JDa`9X2ytlkb?`BO+jrQdH1@_|~|Je4^st8-I zZomC@A6Rlgoh?*H9C3szqoa;G%5J{-X8Y}Lf9r#)&wS=H_RC-X(g%@$`?r7VgRcAU zzh8#UEH`Fe^P1P#kw+fsh6o+}b=O_zTfLN1yZi3DeG3<}E1NcL@L)g_~#cNRSc;K+t8c6;qL%?DU5 zY7BHL%w$P$Y0#BjB*2OSadb9Gu3UD|yH%=5XMXk;uF+%0*!=nPb^b?nP31Iq?%pQouN)!xveaWMeLfeRM{+0?1^8X4e$&%i%^!UTK#u}8gmfHTs8m$Z&c58h^Ipgv{F zR5wN$Ew;Dn7By}0V*6gbR{J_K9=IuvOB3RfgTU!>Q=LLyblpC%lwaOVHk$Pl9sH!N z(>W*Tx(pQv-=075l{)IgAG|_?IAp!w{`>pZ!l#~G=yb>f4me53>vgr-Ee0a#_S^pe zZ-;D$N~=A|XiW4}CQ9bt1=3`K%S2l)!Q@T@25``6#+EwBbgHbK@R zxw+CCIdeHgc6JNCp%rBlvL_sRBV_#Tf4~90jS?d(+9Giwn_y?>f6CaEHsuQScAAi6 zcksS*barm6?j>*VI^C(Wd$qi`s!bpxO%l0GHPKUB-5^bu{WF<)nyI2733h(35iD+#GcoSY#b# z0t^#8Of-4D6Rf90`NNCKpRKK=_j;##LVRqwqYncg?Y4ooBY5I~ffh!7lrLfUd8Rt7 zS}b}|_JorMp5P!oF46~o_zO2Y3AhAh6iAy$z|s>LI>M856{6MVv}&R4RQl!^i_h8t zuBXa|dP%zaQMP8Y^g-=NTdz7)(?}g)#E{xNH*Z#{l2mIFs!@ow$3n*XESugm%0`YI zXHP!mdE=UIP*exp8e)|Iod)Y)-k$9ny3 z)c}$>Zt$e;<{f=Y5^l;9bsiYf(1(vZHKs^7kGhnKGnYB zJ=$R4%BM?wq%sUT#1l@sz#q7vL)(sc!g=SMaXt(yQQ+KFLceZHoDKB`t z)mhfoRoVaEe4E{N+aK(hy#YrcfJkLh?hA^5k63#EFxmT)Qi;P0H?}g(gtVxq5+xiU~y(}kAnrKt^nyP_l z`i_jFqt5Hc00b9<;rJd*@aGLjjnnG^<)?D5k$=?aQ8qz5Z``on@c@qlo_7Xq@Xw0D zSapCgSOSWh*Sd5bD#GyNpxvx?KsfCU%bf4jkZb(@o+;WI88Xa}*KxK=h7I+WSwIGP z+~h`G2gX0PDO3zhTZOx>KHWH%>NK5&=n6M7BMe?AOql3KJ6ZDCMS$cYkI)*Yd?5p5 zL3!c0EW+g}JZT$TtsQ>EQR1Uc;dB~m2l~`I(09fL;&}$?0opfph>eqvx-Fyy*>XOw zOZ8T$*fuDQ<#k93Ty0hDPw3J)BLI>CLP6x~h1WaD-s`a9=rTwXt7~+0%(}rh^~TFo z`XA#QJamA}$R#?d&NE1)4uruEjyi4MrQNteuX+PL&TnDB@b0>ncBWha*MM;@o}-;J z*@4b?zu>8d-X^lTNIjt~lOG(srB>kH;?IyJ5YpqxkCvH*WwNEsdwo*4>mA~mcv4+% zuh!)Hw-MvD zf;f4#ZPI51yq-D_WQ2MsBScj)`Fj7Rke%ai^esGNwSV;JF{-!fH=0|VN%GG`RML^D zZF=9JNeNuED~uBAJH4Nmgk;APk^7S}z)z9zl%owGC)!{5d50GA1Rt~_jxczoJ;pn6 zz@x0g#Jt}@tQq7pIKVAhP%?o=i2)}V}mwlF@zey($Y` zAJ0}*!fwstEqMG1yWy9=w%cyG$zG&YhfCi1F5`U1(@#I`9TtWN2hpQq!ao&y!GZ;L 
z$RUS#BcP#-)r`RlUho3H!HaD>9Bx17oO3*$3O`1(=NKy!G|L^0gTV}yo+n}84*Y~G zvB%$di-Yg)_=kV6v(7rpey5d~haPy)CC%WWO0uG3WS?r`dL=-eLbhBCzyKO0 zDj|Cq^^CzhFqA1SeXFG^&6sNGkQ<;#k5Zy+d{B`os#=vh<+*&ta+|0vUnpRd5!xtb zYJrdPrJmpiPbj~@=WT;JFCJL!0)|-wwvw%1x7K-tKf!VT_Pp z9K4b*;xV9^NrFdsr97T_<{9xJJUw~J@~Y3JdTP*ms ztvawP8C6LhBP5IQ^;I^peuUoZb%}^3hSan=WMOn_1-iR>q$Y>V#8X8xmQK>eL5h zxlDwM9Yo6IOIcu;Apa=ykSk?RnbH2duBje+eU#9FkFpQ=sOzjCJ6@H)m!qD*@J`)@ z4yz^L2p9OGj^pRu+m>KS%R6BhC>S!(z)^RjEl_7j3q5|A9>Df-(A`DrPM>@{S02PC z+SFqH_UC*6=`(#1o58AXv}$c*Y5N%4JYu$OteR}i)gxp?r$Mk3q+|5*R$Xq_Ia4oZ zjV&5GH`};Tszc3B+uRXd(pyt({pw|2mgpCX*?B^dB@G|McZsL>EvoC$SNR#ptlco~ zq;4}QguZ4+0Qwdkx?8Y7; z#nIlqKqOPgBHd5hN%Gb>NNI51Zbbq-v~S0u_Uf~}N}F_qFz7u^yrmU%+BEs%M>-tu z=nx!b@BAq&bb)^mJ8Sz6w8#BP`mIv7LuM{Z_0^2YFhp>nA1kFFY+OY{#F0>I`f*Vm z>6oa<%(zjvyk~0PV#}U<#Ev-jMYei@X4G0V0MXUFY^T`Vw%HG)cWE}GMc4dRkC<$G z&6sER{_#eu>ufPiXlV6>nNXBJ9gixMIt&JuU1}UHx?Y#c*{Xqyop<`m1ZP3s{`R-~xsXR5dBk4%%2)cCk=N?-w0GG$=T`3aOEGEW_qD!$_Jdzj!* zD9DzOrcxLRu!!|w?b(AYJq!=YWKvJ3!T3z9s z1A}jjR1UHe#}*(QiX2{X!4ry+3k+zqp;U^6|@QVQ>dFOUG z&xd!uNpZsiIYdUI#*9`P(e!KSzgEEq%7FK#jdp0rWEJQ*zD z&l)CJ&G}-k$m_ zK3UPnA9(?e29l(aP!xZMA^{0+5Dxh!87FvOS=NZURkw4l?;2~Hs;Ao8F6q>UaRQXC z>)N6lIy-FkjLBXf9$&Cn^?iiK8=AP#7VtK01#fFrd3JBGO)YC>l#FIv>qa|8$~)ys z-C*luugf9`U__6B^c=gqj&gppI_hmh^&F*%KKAyKwK3Yr@h6<%;~bY4nUA)(aJ`%tzG$;ty(tKUU2*gcE|tT zZ1tn7ZB6rfs~R3qwEgQ`{B&;Q^5vp2l)O?L4`7ut=#|DC=6pWo-VX8p$(zu*<|uDkBC zuYTnk?^v(<;k7n%#&mo01?PK%V723lPk+i*tz4m5(Uo@9zh7yWTq2|9C8v19`1SR_ zvS0lCXFgbA>qwhs+Z!}kYuDB@W!UP~Id^E_w^ZjqZo274JNe|3ZLg_Q{P6m(e)UVc z_~MIvVB9Q&>GPlatgY2S<|HR6Koo*10E?ihJMLe)drBzvChb>6!K5-zt09>pQW)W} z|J463l zX1}7Diwuaz?*m<>0S1RZ8foA`$$Op68gH~Op3Z~#V2~X!q+@XAxP{XdtMJxqpYxl} zKTq2U|D>&CtDVjX>O5HOTc~YH;F_(A8s^TKtNM47aMt^Q{_}MS!=&W9cti*2WpH@mZY)+A|0S8V-JejeXuR{_Q5# zv+GrMT~*o^CH*#1X{uK(vQ>}VZlkm$q7v4`P_qnZsza&+zMo!oNG;4|>$*+V47pQI z4D8Z5mNYnEK>xuHe#mbB;~%BBZ&WT)8S+6{1~JqZ-Z3)7r0YuRl@B&1m$NN^RPLJnAmBuI|tgdFF&ZKTseR8}GTh2)$FFx?~crtcN{aNs{{K(fy+6jJE*25rh-K#MNj(UEEF3(|i 
z2)jrYFJ6+iHLK0co;}-Mr19Dq8D{M2V4^1pq30nC%Cqn%QKY_%i6bVG*hNAcVp}x2 zJb3YTCBAsN{?$7Uos2)i34{02XQT%<^6Gg~8)q9PbjTNd2a|kkkB285a5(e?Kj8TB z4&I>WKzraHWLL={b7Y8|Fy6uuzCB$UQDqoG4*EWYlZ62i;i;;mx(JghQ?IWaGFR=A zAyR8~U7PjA^gHZe-G+7Q8E4p{1&`YD<%?{yDoCxqC$i6seRads!M14OQ})nb{%8|y zqm8Ol#bTyG1N2tSymiUQp?hOc)u~E%#%ZT8K_?@C0f)AG$yhn_%rmtdn=0c&4?U=r zjdOe^6#>&AlEARK+wH2W{#`d}{YTHtD=%?n_qhzgBb>%#I0Vv%3l29 zlfAM>MgG*ME_Xj$6IxoD?efb%DFf)EUa?90+0R^MpZol0ZKMWT+|nb#x$NpR$YXHK z83ty-!Jj0DboWe08Bj((sG*}*nMat{N5w^bBOK*{L(vew-+uesMHgM{mk|8n54U=l zHyXiGl`!=11B)TZ6Fhku;3FJI8S_rruo6OA6m;CWg%V@16ACTxkkooGUr4i_3@wdeolTW(Qa)5kFnIPgH7x0-JcKm4%E0bWOr9Oc^%H*enL zX}O^dKFGW)S`lTIISud<2MnItefc~M(SyJLyXudCR+yZPrWm z9A@!ckg+2_>qVEfbNDHZtyc)7$n?aM#py5Rae<$o#i=GtBvr+et+5z zD?OAY)T^`=lw14QqJ<0@U@~B3=8Ga_dlFCN%77C-{3EGZ*_7rXpPyafJ@KFPwyzXZ z^xj6&;2arIpL7wSwWw`%)-~GV_9}hrN$p58#I-sf#45IL^OB6DOeI(0s~urY4VvAQ z4xKS>FO{X>C3npTc{%dhPRFFW#3c}v8EulYbQrZGwK_9OD_Z#B0RwmZM16o(tJYI= zK8KITQoko@@W448wh%Tn)%)2n^27?VuM}#qKS~259BpE{CX=YooC}OChZrmjf~YgB zsPUt`pVS2l-~av(e6Ku%OUfQTW4rAVZONr;V<< z-Rb*x(|m-K@r(W|VIAma)48q}!O*=t(fd5n+r%@e)t>cP2g?9C<&;zGrHD1H09yw~HR^U&xSG?kHy-#`kvB%0}r8m>)S=!2jg>4IjNOzsiE zA&YXE4f!K01oMu76C^ zXxq@yVvpW`r#-!Jg6*^40e1Af(_}p8V3}I)>Xl3Fo?EWB4Xc;i)Fx$e%UbnuThw9d zT!7*`m0@N->hx32MT`omL4;=T*n%PwKA_;xJC%&ehmiPjlhyxc?@iz|yQ)IpU2|0r z>F!LBk#3Rz1~8D}c^Om$RJfo9fdmMsC@9EN1gERN_mJxl@Nu2qy;l&B-$m{NK|m$} zWey1eLJ~p}$UF~C=%mwQ%~kLJU;C`DzN((9(i1tm`>S)#9@gG_?X}ikYmXwxdL#n# z+YUzN5y7U*6M2m5hG+9iLQcOJotMy!&r^RlLK`nO)yshVTnglbO|FGaUMd6{4jLlQzt(7ANWli`IKH{0^><*RVnR{lhR)CieD;U_`(;;MHgKJ zUQzA=WxQsQvzIF=*v>boP>8h5Ty zj{0I?tXXwLITE8Im&_T@rEy2Flul=vR7!k0QkVF$?C!U+W>LNPQ#-qgGwVmYV!*gx zLOEiY#FGOF>GV6?oP05jG2zUIn$j%m`(l=Z`uZ+HJmJdC9n;Kkj3S>`AA3|e^^_-- zo)7j_zAF(XzeeZi;C?rpP+S7}t6Tb7^?<%il-<<9KG!7VX>7h$# z*I<-*ocdA6)B`HEa;SvHs>||z_PNiE5nSn8qsQ57mymeZlf?Vt^UjYG4wb#ab%xO8 ze?M~iv^YS?sh+;zsoFmfxMV>tXXbyP>!Tz=6i$ zfthmL>e=#ZzjR{xE;{tP-u1Ddhu6RUMdkcod_>UA*RNfVOk;l7+ba8zU16&FX~r-{$j>Y@pFsy%=7aN0*=70V5x@N@L+udlE2Md6#earDVR^^? 
z(s27f`NlZ*HS*3Ap7=zJm1mY4Z@eku)Y{he=+viof)9r1#jK8PnTuzbqee zgqu+-h@^*g=sld;qT0|y?-;p6UgU$|iA|UgrfZ=sti&nNN@x;mDvF)*5hn>w!ZyAN zokVFGPfm#quK2m9Lqawzkk1IK>y$l{8}^Q;X&p&OVi(3hjL2$?RVB}~j?`HO_qCX> z>2!{?IP$W9wIl4|_9xpYX-r(&mVO%+ino~OJ=zKU>h~CqmGv%|&#+|Yt6#migEr%d zd*P(eF@E|P(RY90-_U?@O>5qU<+pI09aGRg@{x~>IF8oZo(er-=uAriqc8P6e_1cT zg_Uumj`0p$eXy*To}626!`F`!5hXChKd9f`2s=%&2&2b>Wl{lOoI!*JzdH& zpLNz*xH=W-R( zSSIV520EpSeOJ$Z_OsO?%V+-Mv$6T@+;h)mB<}ffZ;?sX}-W>@^z-kxS~T)Qqd@ehH`HloF5}k zu#tDhydFNVmCjdGza0+%DCUBpMHAz8I))u+tQpR(RrPX`Nxr7or6aN237MW1P02lqTiUzx6qwlB{?1YtPv z=d7lKe-3U~Mt?UjL*w(*Q%?<;WJo=xBL3#hn*#;*W7g4pg};%dnJz)oxUp}wyyl_H zejq;DKfA<$I1^Xm(`UA|ypp)wW#YXIfctjg8oD#{ZkHrk=~9K-k!FbyP%!g$Ixs%y zAn5dq7of=K6DazOVltjK#jWYyH!?DVa?a*-RDY5&!{Ka-%PjDjc4RLyrJ^y~X9sIh z?Ir<_e3-KLm8hb4wTvPr6*-ZYt{Y*ANbA*&MC0oxBN<0$c-G(d5K{OXp7~^)fY+4T zvUmK^wnFJNKm09e_hW;!;1C;8Z*|OKO_E4&u{$UT#FGx#W^dI_+p& zG3-+u3nSZgXHVNvVH{5coG#k2X={a^f>R;srG<{_xU@r|r=U%krZZ1}D$D5-R@Ntk zdIQ@y(t=?cSdPS+*6(om*#)Sr2aV(J(mvnycg=KNf>h}?hzr=3Da+!MkARtU?bCd` zBgc`ee|YD+BK$YlW3zVcS~{W2BrUvyEK${Hj;I zs=Vht?}^-b)c(*ktbCRrP@FsoYdf;E;veFt zzQ`N$tV!g-TOcAf%3*TzOnq~>t0vxM>+Yo&wQG) zGBAVU=;m~lBhxsDa|##1UJNe}p_q7@;B9x@Ue0*pk4Ax1rc_?O$Y`ynyt!QQY>b3e zECGq7ujF;=X{VJ3P+koC%x6E_X>ZGHyIUp=o=NJEjT%=K*2U9pAtZerH zlczl8DKV;h|Arp~ttIU(q?JhA{@E`W5Z4aK#1bv&Kt8sQK+?r;201Xej9YL}2WEPP z$`#kGFIQfF7c|VQ7fX&zKMLKy?Hym_oteCXVvC+Z1Zds?DfYiDtH*y224Pp3r5Y3c zy|jJ^JL@M-YF~%`#J4V=F>}YavQj5)=L|s%YJtVnWlW=t&1dw%kq-Pbh$D=^32?|V z;*GIvc=ML6<+$Tdh&_MKXlRHycxSofE$^0ca7Y8gJsfM-u8sH(?j6mA!Y;AA{)QW) z9EQ1c^Zo}m($4G$Wbn**@}|awGG0FZ-gVd0zCAH({^&=a5;ESZ``fVL{wTvS9M}Iy zP6b=P{`=I~zSTChj<+~!KqRh94m$MD`qVhop(1lOiR40w(`%;l6UG3Bmv0Z9* zJI`W%xI)2y_DwJZ1+~jZy+dhY=7Fz1?f{wb9*~|fd>L7z5<|1B*2Is+B&4L9lgNpU z>PR}$sDN1xGOK4DBRm4O9jqm&jK#~M-R|nRXrLCYb$g8giMw0C`L1H_Q$km9bp3cL zq5AoYTkE;$-Zibxh{za~Ihwr=jEERdziy{8k@@8_VValmRca0M+k6$=j?B1}$fa2d zRY!7!r8AB3d5QSl`5D#;ZvwJhrZFtO=`3~5cF0GWPW)INjR$|Gsrfc{?NA2OyDUwG z&NwCxO0VgVfpSVq&G(QYZ3F+*x 
zZic0?z0EnA+K%2I6SKIk{{~NPG;}Te6tmS1^mw<(*tRkD+qs;z&H^gA`!Q&oIU1wm zo`wecNIT0cbT941_-SI~K4(dpJI{K-?U|Tq|2P7 zqS&U=iETFkUHk1aJ@4>vrtb77JU;w_1_^Vio^)|Pr>hMzqa#oHixE%yf%WUxmw)}& ze=Q#5;4F$xL)RIapbx&~A7?v+<7}pR`tGQ!My~@5(wuy4*__puXJWZ+6IZ@8o$++Z zhr(&yQZb;6-aQ^`X#Hzjr~>#k!0HM2h)u!Y-Vfnn4cMw`oWy~aX)_OTa5=aAh)XnG zp&9phVT?%g(v$6|Mv;np$`-s1TW6ibk+iCFZ#hws(>{t%$aUYk4h;$leTuz{xl==A?~=~NlAvH=ev-YNBM%!GDBZ%%oyvtfQ4M`g%zxe<2l+O@I7$C(k05aXmmC8{lRE`!<`>J@q_ z(#GpJ-czyB-&qJxwQ{)$QNvjd^DbCT z9SZnW$#NNs<<}TF`J@wB3;%)gO_W9R)(FY+beGq6f7V4jE_AhyD7qwoF>>lJOqi*RGzKCq+XqD&ei`nw!!E$yWda>W%_ z&?#=^6fjQ4LEgCB$T~=CeZ)PyoxD=l?S^dWMUH|?m$O+jWhnovECxTl&Nve;ElgmH zXsk>GT@a5H9;FKhAeLF8LOb-L+zoi_I&0}?+-m~v9F2D7dW<3CyEbfO7G^ePpzQQ* zyJhS%lFoeB5Y)JkXP)_tpDLFz3uZg36iBnup>3^k;Pc}j|9JW8B^O5+^XSk!Zu2nY=mshH&$QRC*ZHG3z z_s9Ep#Er_UbSDF9xJDk}Cizr^irf4}ChecaW5AKdreEI~5X}Q=IPq=UBO>{(?J&iT zb!P*I=qF~mALN`fPA+GiaccSJ4}OMO>uo_FeGI<*>dStv+;Pvw^4Tw5P20L+gZ7?4 zmH|&%7k;{wdCC&mr{kimO<_8o2mi(LTGpL8i!AMi+$N688-JM-39q^4n(}RyU}+?z zlRrHD)1{4XfBW03j1@O1WcE!o6mq`~)|n>wmp(>$;_0F5*WVjFra>a_pLpU4AqVW! 
zAAx~lIpjz4F|Bg{*kg~0V}$*-jEQ^mnqa`meF^4i+B%qEzMhCx<8J5PIpLX3c%v9D zwLgJQ;NduFk+}K^!?>c#GD}zCc8lR^sZzWjO65ElW$6&LctK#L(6lTmxFmWg5TqgH#1UYKAv!*u+|6}WcDOD8`y!LrY^#%b(yPs z;!%HH;DGY$3^QM!mNWeL^P(5Mm`%=)iyPxV{)vyp=$5Bt8OLRB|M-vp82dw>bjBG` zCXadj1}D7Oc~m@6pQa7#$ehL`%R6!3_{P^6k!R+Wd^J{Wb?Yr`jhLVT=uAnI%g!R< zVfv@yzjn%0E;?WRq+*{SE$Zp4M=Sxfo{VPt?QoDvm?C@@H*eV68*!04@eJyfuq>2o>Ra%6=DOb;`dj)?nx_g#0zGOm|U*3WUdtVYXI zS$cKd_r8lVO07|L>~Fk7WfHgpjL@p^3J}^ z{J=K^O=#5%QkNKH-_nfxMx|wze>06&{?l2d9J}(l<(=>N=V+_n{_U5Q6W5k<)j!@x zTf6_yOF11#qQx;%T&g2nvyi@{ZDoq^%`#ZVow#m2WGBG~!prgouQ!?b-t8Af#83Zg zkclokeD*V+Rn|RfZMo+z`n3Q8LH)i5H}vEtn{w@ zl|LOw@OyaKNIBtz6Uxmu-yD5~`8pG7pJ`vU0si;W#9MB;CHhZ!NdB=b!ZNMPuH642 zymjl=McOotVT31^hf4$$RQy%2UoBgW9&Kna8mCMuOStimS^1O1cm_0N+u z^!M9Qa^LT~^UfH(agR8#el!38KmbWZK~#eZ#pgcvAEBg7qO3TzE4+jyJg0UQMwypS zg`;7{HO!xB6v)RMb4)D9a@}hvBKYevPOSySZ_1V5K82}&zWZ%{I-h>)k{;rl_`((+ zD$rSW<7s?leOz1b^tE+%G}q1{*S?yiu#B6+HambA5qCt8G{U~>+gA!Y-eXi4m>$Dz z5n_oA!a6T-QwbDqmb<%b0fax+tJ^>8n`xXCakw8%4Xi*zVeE|sdYC${j$BH zL*iSx5+{je!4JzPlrsw-P@7l^%hMCYKo0e_9i$IBaW>-6;+!Dk>;@-5ZQfEQ2awgH zEaU6N@Fh*iaq!0>h!S@k5;BdFHbj)k35?w-W+tJVUY5S}aJu0jIGox#R^Icje~1y* zU*^smrLzjhf3Qd6$5|?MH0@{`ZbM1<+`s>O%)b4|V;)n!i}HWz#TUm#w7>b9-zdNT z`@dg4@}UpK-7zn};AKHy53tVr1ONMf$H2k><(qWL22Txzs4s-C@o?RDZz`8F`|-?Y zJfl1xI?=%SAWIB4GYd1zsH~?QO<<_Y3;*jq?>HBafjakBL)_9~BpADd{=s`>-URL!!{ou=w3U5z({CC6G-x?@d%1ENDPI zVVZrWTzG>lWU!%cxLkG3E#;ry*I!=uOQ)4TeBF6*r^I7U<^-+x-%`GH<@MYhvYJ62 zjc#X|JtdQVLY^T5XSAJ741LA6RtQGplW`Q%5;cy(_^*~SJrC|PX;OJtab~%z6L$`7 zIcW81_DMbM=}#|@VoBthBUaOQxVLXGF6;GTXlGSDJVcr5iIgtq^w5+T$e}Id9r?f? zTYh7Z&<*F#G&;cL-b0N8XG#5*N8EUBU!Cvr>GjuNAADp#=h7?(8(seFvaWzlUdn6x zA^Sf4oL%*3VBz|6S*P`NNwIkvSK~x}x4ed#&Uf<`o_LanvoB2?wa*`ZQrd9d*hj;t~O2*4$v$aZ$;jZzPt}1N>@aSafG;__13>ZYKvm#P*Zj4m$u35FZ zJjkZGO&cGKkywe+GFcXd-h`v@d`Fonx2Le=CTjgGdopXj$5H^wZ=LkBJ~}rDTheMg z)ma}$c@(;iFo_=ps_;_KW*)-Qop|C22xl$?C7yN6aB(5d^w0NL24|UxXT7{1!OeRf z@i>EWs=qkMa*HFw<>V-;iUS0l;l>xf<(Ka>@AUU+T=8bR*@)@H#AzFcyBr%%b;^aR990UzFevFqg6v! 
z0u_wzf=liw3iSxN8SCA_g$jtWc7Q!3oCjD|#EFE^Jv0GffCu7u=!nP^9pK+(TZ5z< z0#^e(Cso!6cZR@|3BcE(x7Vmd|=Xb(S z+W+VG|9`ob)!!-qg-dRweZTL#{kHPf3;!Y%9?K{VT48DMBkY+{Nee}V z`fp~G_zkcBofr(5WC>*OEArS>)0@hxf9dS9Zrv&6-S4}+eDRVSxK#OsvTY1w z14CqlOAq@ew;_8Rcyc{Pl>^FWHIR5>cuRvcbmTo(Qp<2?$PH;wocPQ^7il=b+?_%cg*4u6^FZjiC%P+j(+<^0WW)QQF%s#|C zb<+Q>x852CfdfpI!#Elt&W6a3?iKMkUw_JfXFyD2+}xk&(b@8!GosPPz_%XquKk?` zhTrz}8a)~y)={HH7{W3y{Y<0F^vSEjO<7=A<`a?IKYPT0I2M;Wap$|whr%ssX_n%_ zpa1H<4mx=XTMdlLsyWc1x)nbQA1Rb25E~tfKVfKeXoz@`;f%-W=nytYe;&rrS#f{$ zOP|&q-WZU;bs;1qbOf0~$$Kqy8Wbu5?uk&*b<!@f=yxf1yi;wG6^j z0Sd*2h*3VOFF&g|I!fwi0xp12%9|vb#7G0 zAXO=s9=M~#Wr5pRHWB$iH(t!?yLZ#1lis8$W&SL@}R`Y~O|2mHPCc8c=Km(oh|s!F92Mj9~xF_bRDf(Q9_FNlV) z`-s9H{t4b__+4I;rt}kT!nSV7A6d`#xepkC;AooNJMb-G+UC_srO0&%dytbk`$jjE zW&NvY2kyy0c21!;3=bYvzHrHPW$S3&SpBW5Ze%aukt_vuCo0Q&VV1tJAC#W)4P}r! z%#=Agvyje$?tF$Xf6PDM=RGaU&O_!!I{+V+Irtqukglam>msej@?XlJpYfE_!wgv5 z$7xdj?AyHb&_OYm?Ot>3)fjf8&>?W4+zur!jdoK2?y%FÐ~ zgLy08ea5BD#2e*^`0srCs*XWnTFa%n=kdDRICV?7mLtn8Pu_IHjkWy1()bUa#Q%HO zezy*&(JKHa=sKJBcFyed3-`Eq93eK%hEn~&e-)w(@nz_!xarvAj;L+zGih*%tn zNoIuHQ2{kRG&Ja^n!?h>86k`eLp@c?IHrYF_*+LKAd_JDP_2w(C!-?+JC7a>lE}*# z)m8K~AN%2v`Rps`-Ii-1(A`Gf*PIW(_!42+YKl&FX(gt~9gmU+y}Gt+%;+lsJURhdfVRhZ5qD5UBXeHGd&F@F10 zuqxEeSJ)}gTt=wEEHzse!@FlBGH(gkyq&dB;7iGd$v4h6L>?%|;y|H$F_%t#mE%Z_ z<9D5r(0Iww{>oc+%IPdeETKYC6mOZQ#)-U|hJocsPV!wR4h*-0$4(p^MNLiE%?qP9HNcgdQDBC$k_(>akKghePpH)U#_gL?rIJX03)cbIE zbf-}T4On?*KpgSwas4}WRXO1i(s_0NY}tPQUFBGCG26eKr5MYrJwXJiH}J3A#Q<@` z`T%^`?{cnVb6&0B^mdbNEhBgFo~ZuA4Uo#(&+G8C;i&Arjrg;cpPCh zZqlCN!8hbDJ!D#EK=e1B?JX?+bd(n9!??nhN1Vm8OwzJ+VSHtQY5n$H=euwN9e-?v*Dt{!#1*!c8@6tpD8W%7A*5#;BOBv=A&ec5| zOYg+p$o!tNX<~?(PzGC|gJsaccdq<)LpQ@^h~-;79K0}qfxV)Cv<%ODzbu>iL0Q&E zUo*M|s2VrbnC(ZtD$i`+g++@Z@yC!|g=L%r@r-3;%78|ADqp2_e<7c#L-c7B#-}a^ zvEqY2*eW5k@Qc-EABhFUw^kStj9{kN%cDu^@fUrL^!5-kK!0Qa%p{CTyfRvKyUR;vjE#*j!t^XkBS{e1@C3vc1_VkIMB%BD 
ztkR1Ps1Ks(WnYNr)A!O@8ZTTtWGI!VMgTNnDyAx$j--WALJZS-EI`5i^h4+5K-M6~>qjO0_%<>@bu=m#@^n+9na{bBc1a7uGQePX5A5lBPsgj3jE9D8t_KS9G1ZN5{{cHo_s;v0*bdX5d#5J!Je!}vn z%co(!M_cmLa1el-XDzd9)?-A5c!j{lo6bzvl|g^l(pk;Esa(?5^06$ z-t^t@J2c$H_giP)=BqP)=DVx+49{n#>y975+CPiJK(u!%(9!aO(U-r$!VMqu;alLH zXPgGdy<;nB%9Dns%6;e(_m4eTdYK8C;Sxjpi@`~brl#4Qv7X`nknyRF#K%zPBISXt z8_JrV2g-3PxK*TQqI~a;n;CJhGz3`$U0VRi%3S?}tL3UCso&Kn0uRm~ie`D;8yEuu z%yK9TJyhk999yf*7H3w(vP#GP!jJDt$I4ISzI(~SFCIt+`pR2wD|hg#uBNTC-SV-@ zel<#j>9>PRW+z<3ge%@^p7OkXOMNy#$`kwjrqfF+zsqF7&7>w$V z!#C)Ucu$APD%gncOXgq04|?lmF%R-HkiW|^C@<~HHTK;5F;4ZR^;YU&j7oSz z@+8MX)_y?O*Oo^O$TO8#@t~p=BV!9P z=q6v^E&1+T;E+lRb5aG8GW)K-3>(Vq?g0EyhgG4d0MeLHc!dChCbu!2d(%xfm38aZ z#i+Yz2%PauWuk-I-QdV}R7jgf<3k}GCvdSA^_H7&E^F7Wod-`)3*qk?JToECv$X7} zk?Uf&0aqr?HcsV7+OizBxhGZKbkmKjnLQN(lJDl|*gE0S$x=A;-JU3Je(c?S3CZ$6 zi4iS2kjjvqZ}Ybu^5$CWU|lSyKQ~7!;B7b8_ufV4=Mkx%N|d8iJC@<0tHVOr3$bN* z7%M8F)MWsHI>s`b@3SPqjgOAd@0p7i=5lC!qbgAlI(qQ0klh^kzrDf#inyC2ED6v( z2#p;{Cw>IR$njV#hi|3SFUzN76b@E!E}l} zPXD2xG#l0jD0FP1L>Op6c@F$G6ajTFFCgMLl*X80NsK}RVdB$&UOpCzI+Y#5+SV}G z=JX%RL}e;1Y&dBlH|?sx3;ijx!X_;|M_P5(Jru4kr7V-i08TP5>GL(i7e3Q+Onh2y z-Hr<8!jun>?}`VPHdVaXp3!+=oVZC_0qV(BN5KeQ*jGomY8>e^+J85wGQB5Wc@bew z>)N(5k@pw4zEt=!T2A3+8TJOgafRdhRQEQdVVMSq?eBE@BIiChM+CGo5bbUyY9LxMnU3Eg~kApmOFKV1G28d^5hha7LRYt zORAPqzNMzkwiJ$KmKH3d2C2u;9?s7866el#D96-E?!KF2YB6+=KmLTkztiZ>ti`?~ z+RYI}m6zBv6SE#l+Un-ZGR+eQN(gggGvYRWgw&a7)xr6x8c^+{#lU{YfHQXe(-UQt z_lPiKH;(b)?dko4EZ>AOJ*89UT{n{`9J3H}pZi0IH~PT6<>q_uEL)LPDI-$eW?vNj z&|-Cq{a*XI43xi?RcAlrxBZd*ok!=of5KnN?UdK{HCfj=Xzg2zfjJCFANG~%0Y0T2 zX)OAkMrWIgz4*6&M~B^PmMg~zdB`Ejil_{gkc$ywv2DyPIc<@M}897a39HyR6Wyst*?>^%G%s2!ofP|XPBCk+;IkE$X0Lsn-kYs>OC zXrjVbH2Yci>M7M7Nv(qi&WM?&bu9)C1_q?dxOWLXg2`#wff*8)U@;hqT$bP6mpa32 zyGK*knGTi?!4VoV%CBO{N)L(acS*?@BeNZaIjm#|w#h+) z-IasnLkQSG_4AXlXE;6!`@9MV!r4T^(k}!-?;vN>9ci@A6x^nMKu;WS;P zd{sL>^HqWHI8a73RDvuU<#LnjzyPN_@a(r2CVa6@rc`;#5o;0eJ`PljAF3TN zG_Z}|-3ev~5{Vz;KbGFSlD;Fau6B^tExZ0%<}8l}mvwSI_V5U!ZInBBgY@Ffck6N} 
zxWuVtSI$`{jV(`M>c;hGnP3QIH2e{Fia7WRl!lIdS1u?o(r@2Nc7S5b zpAGV|tSO)Ek95jA>!Qrl8J=~u{&gQm1hs!2mKcypcFlpQ7v}*1*?(4g+ilXl!e;Up z_Cw`XiB(U({i$I1RC(UDtE!mYR{dF_`eqvWeF8YWgUic3JggnEa zQPW(mWw_<)ZZ_Wod$qtu*{QSfA|CLCYaY2g$Ym1`XD7`mOr@F*z0tmjrw9jc;xB1b z1y}<{O4-wlh;eOS@1BsEYzyDB&BeX>>K4McR6!3By0braYLuek8)FE|JS@M$Kf{D$ znBV%_{`#rFTRZ(Tf8ki}puvVV_1hmV_AOBk%ak)sfsep1%Nm$@v>c9#=t6mo`dUAD zu^D}vE354&jaR0yUI){ezqoYJ!aC+mTb&gfbOo7o?IfIeas>< zotH#j3&>fHP(ZqrB4fwWjd{uDJ`e#AKO+ssi$bz?+&vd~#GN3(>Sa&JCQj5k`6*nE zv+Cq>*KPNdC!g|&a>JM2Tc$kxYm7?f-Z8>S1Iqs0*IFZ_yZ=m@(l7O;-9y;B z(9VpAMB42-o%90#?l`_zK~fpNIa?)sW0@9dN16H`8yqQ{`i>~8RzIq&d-7=<-Fsiz zKGk1_S+2#kGgXMVAEe__Jn_WaBFIqQui(h$V;}RF@)J+{@$!jJe1bLhKOoiovUkAj z=>_oaVR31Liq(Y|zNOsA`PuJ!*SpG}{K=mbZ~J`3%U|AslBMc+knIkBoxu7cUh8pl z8aCtQ&2N5ldF*2!TVDCfSB9loNA&juWdYD1qR$kR9b%`Qb7`Dmx=^6NV}6TM=~G5X z1#DL+F|G=+43vto>4cS?&r;wES9q4kcU=g1;G1{;GH*MLth4!M9gLT0yPxSFf&1x? z9V{jJgt}Iv)%^+zx%W9mGzxBSrL|37>#tA!R*WVkBuUg+k2x4Y9}Enk=G4fUKrpRqF>pXJ zAQ3wgAU)Q>s~9-NxR~W)WNFx0TMxNGI@MD<8OfD!<5W%})TK#P_GX`;Y)g54sLKas zvdUucWz&b)1S#Wu&oKEtWr6<6>iXR>vG{1+ju?=3YCkdeG* z7XnP;-5n^GRZY!u>omXFSuad;lNf}LQr{P60;4?k4V4L&U0rkSbq*ENNlx;9RoPCQ zDLORo9C5V5{5{4rMluvCW^6cN{&g3cr*93gf8$lJdR2MVt6x=~{p@EFNdy_$5dbH6 zR9ICC9r3h-RykDARR8X8{pOA$^PJ?Px7^y4t)gFLLVW9N;B?t08K!>{ zhGD7rX8gnPX?+y@*4t*xX`jF(U0=`!t7;ae_YB)n`hzcVlM z%lE8@-4|A%{vxmNgbeC$r+^n` z)PA=Z*nb$HmxZ^XW9d?J1Ac%y@x?%K%5H|^#@RFi`NIG5g1UwMf$?kqlJcR$N67Sr z@_YZ)U}=z*Y29VrRX>vc-MQ_|uRFYbZ!wTOA+6+~WVhc#g`v9WYYEU?s^K7m;|%aM zwu2+uj)r2OaO)NK^KwVwva<51M~0!Y`2o)F=d>ViKpmSLC?g!D=lvl49AT<5FJtwy zi`_Mjd?;oDup7|4@pJ_@H=lj>*=6IVjpez|e_r{(KYy@;pf~}HeE=S#ddexMgz|Ci zwburtib#)Xb)>^3SErqJT8t!If8z~Ndp8!lWG0o94}bW>ajI2*o8F^V*Q{9+vnF}3 zLJBeAd3LwQoOu$G`w>>GSW!+m;e>$Wai$(e`@s)>FyiHumvwYh!xqXLOW$ER4JJyLdV*L3odIU>Rc30(F#|C z)#K(haD*-Zy2~u)z_xEK2KF-sFeHLzt5H!i3)2LD1CpS9OM_g>HJyev{z>~aar?K$ zK#PF`h=DYsmt=^b4l_%^$urDk6lOkV7>!WD9%f(2*eD|rBdcM5HmLJ^(i6@qGvia` zW{&mz@gMuKa>IAFmrZwXD}$U`F*UUKz&4I> 
z-~ayimv_GNondUe?QL(1*Ml--pE*;645z8PR(*jMpuw26LE@y@v&d;VVIh?ET@G3ie8U4Jqwl{Cx zLBqhDs#S}yz+S*J)Gal!cvl?@pQhd7YXOi5!mo+aD0>xt{tXz-$BubstoGSrVE&(gI&bLO{i+cP(dsov-_@g~-~HD{E#wvhyTgDwt-rc9Xm>I95GJz^ z#D>V3GS20Mv&?2pVU&#btt^|TmX-1THRYt!p2S6lE6ZKC-%&O{upurOI_<0 zE?aw#C{u&0S;95UvM0m`^Zqk&v~Mk%>zx%HD&=nUeg5;FS1$Y3W#v`gU;lME^UO2j zUHBRp8V3q1pE`|%U;p)A4*CM&MDqJ`DcIj zXXV8&esQ_@;)~0xU;XOHPXk9o=c=o&D$jfV^UJrt{q30b5b?ry2E=#s*0@NgA(94B z8Xg&Dd|etN>3_IBqc(F@Km4j;1!|M*TiF+zZr^g*f3a<|!@1q|A7&05;w%hkzgr9(1Ps)<4KuFBh=v8jm@zQw;HdBVReG;~ z=h7U60=HSV7-%sN7zllP$%Y8r-P6Odc#{|ktiSa3^Zua~<$=-OGQRxevU&LAGQRTU za^mUdlr3Wu<@W2phlH3ax2^v{**bhoIquA7myP|8C>zV_GB&WXjJpBSWnC!ZZZPv2 z`Dqt1bC$hh9FIQr6qZz-QvUrv{+)b#%a_0WmGW~x_w#YAZy!qi7|W~1F;@QDU;S0N z{f;}zM?U&djFxYeGoJLMG6_6I&{pnEG5p^3>&r*~^|Ins&h6% zWnW{#rC?wF@|VjeKl#a6diB<~zBOi9T!NKGiMMN?eDcZl7HVcjT*9@TTj0G2Mr1$u z;DeFB#*&6d&U)AyhWkrnBpWA<4&$WZkw%Qq?r`I@?f_#zWsNNTHN9M?0Mc6buM^)gZlTe7QYcX&z zF;M3|Qr5cXiKWe~(3s{F&#C6;yMBf(jDIl8-sat6;Ngye9CTmO(k#0@RCSC6_B6PT z3gh6xv6*t*X=j$HWv7(oeXGk+CqJr84-A)W_di%pI_0FYV)p2A`<=I!l_#H4j$8Kx zHnvZfk+BVB4d%%AufME}Z@Q-}A8-#u53?vE%#yH7iJ6M2>51~3=ly3!H)b#b-cVlk z+SirkD_3%yZ-05pQ=U>Tx#W`Ai*e6A_r%PI4Xna{-+lL$$3On@VNh7rkDh&()m`5(fGR|z|TbFwoFAWTrc^S`d4HMHD-VG}Xbh!Uf6KY+HfffUc z#6V+UD2ux+gB{tbjuxBA+}naIyV5TRiO&qqXL#`8BG(!Z%m}9w7@;xz&i{hNY7+zH#{lp(D zqr;Cb6a6d8Da`kQVjAN}aZ*c|UgJw4?&fAcrW z^I7(E>7|#3((c}nk&%&@jnEKSyLN3DBRR|9=5aTbYl!%BQ~TX_-yMp+Ga+dVXawjq z1a7_c)-YxgugggP^rt^PjGyz*Kfi2ciI_{ZHg4P)%d=kpTfZH!H8jlESrwh{{xpEh zOXI@XpWG;KeDk#oX@nRzmy8*n!X)<)K?ZKs3T!gWR? z#)F#%>a<6EJ5XdJRnx=wjuE0ZrpDIo_zZE3>-iV)TP^! ztcFIVZ~rzkg_Z>WJn^m4tCXF;1w3H<*0mUDF>pXIkOS{aGDIpX%knTjvjly;T>i&? zk-Z1e<6$u zG3QKz`!znwvMl#+xLN#f{^oDO08o)vu{ZoTz^UgajPT+DGmid_Gf|tE4W@0RhNY=dQ%u3mhXixd|`R+bDtYaz+B>`F_K0{KF#Am zJDX#n=nlV7Y-%@v9TjrE#>?2on(yi~`HuecV;#a@O+BN8Sxr86R-?dCR%I9IGWdt? 
zxlrL1LtXet4BiDAcLqrivVH4X z3S?=279dRrNqh z%GFmrSe9{1d*Af9QwNqNPR(MK7DmoY`PrX(X4$@FYx%~d-yp1RUU$jX7ryZMa{hTQ zDo=arQ{v>U@2|hN9IXV;9(f@E9DXg)y{@#1a>QzUSw_NxaQOD1G z`rjhlOOXtp=Jxh}jH&UlQF7r{bl(5zz5lBm_sB<-^IrTSE;ziPY}l}&y!FBh%YE!K zdEa~ATTaKA@mpiz$}6rY$FO1Ev>vuE{Ikw{a{2Q=`_uA!I?i?YC!WdmC?#G8&l3 zh8-QyLRpxLy({?Dw4D#ZxHWzat#nlXE*zoPx8lA2&G3Bg4F3Gw4&|?qG8GvRlsKui zY|6XVc9i5AeIbwv+`op0N<74*3i~{U zCb%OV>9EyhQ6opP(tJYSCkPMg$ek8lECS`_27?t=epOHDpo*yOP~-3m zuZ8JrP)*HW;>^XU-wj!1(mW#@bH%ye3Wm_4-2pOH+Rt8Y^ojZxs zvbdprh$De#CdQbZP=HOdw25U(Y=HMNqJcq9)FMxpJE<`H(-`o=L6<@Kol1N6{tJx> z!*o710)&x9fKTD5{QIucAaHg9)*i>gX*1hxHH2vU`Z}L(jkx!Y zw1TJ*_dZ!0a}EPbLqCF~F)*VYn)XopsH#4HgFdRd;NPl1NC(x;Yr9OJ#WNB#m2n|{ zJ^1lV&!9Pc18|h0Nu3c|`e+c+K3WVMa11ouxcvJ{UKN+LDy!z!Y{V4+p zBW)%TUIw?%76UB?_Adrv;4JyKW~F412Xj!Oy8t`zYju(xJ1XleQ7b!V&K4I zAbaP%$T|Ao8hC(Z)65x(Dj8)Mw{p%B9`aQT>$ zjKlI(R8-b)zUiizZJ22c1cjZ>J4tl6-FoYs5zwX78*X~br?Rinqr&fd7z7w7!bIDS zcp6;2O`M$NNMpoV6=7+R_-^_%XwrBv+;`Ku)NFuL$230t8K(1TTAgtdes@~Sn1)G2 zJivc6Dtee|o2t`1dz+{$ zp6%BI-|E7^jM))eL|5D2>C$YUIpZ`lCU=+c5h2TAnZl*V4xgrJJbiTDP0*bv->cHn zDMVM&_O-=8i-DzKAh|4|bl5Q%ldmSi&f?AanQHFKw~!w7B|-1-j9`U}FRg1a&|=_# zV<7r{a_#inSps-+P8R4{J?2<-erVNtGx0t?RUlB``Ek+}djw|tFb>#=-G>t0M_50n zTCsg3E*F|%3S@@alNpSP9xgT1Q0c)S86NU@S`_es!O}Ol427Tc>t_1*LuphxNmHqUqh7JZ4doz3= z1kRL44+F28afRGa#PArE!xk-!(YGfG|C? zH#q}ixQ0d=AsIi75YuE{zGryCKk!eGg)!N~Qad-DOO(CRgvuofY7dsFKU#sGBq%Y0 zx%gEDwE7dY2xsmAb`fdYBo%jo!^I+lc9hTk+3wY#2+pVwqq)r?bICZ;2XtGGTN(VWO11Q+ien(Bnj<_S@aeoV@LG$xE|lP~>|! 
zS!$|>nE(p`$ewAGPZZE;mSgpDV|v_Z%1j38JsZqbT-_hQtN@0}wkZ^uk(Ff&o89{` zAe=GjBaWwVO-_un2V|&BOnICyo5(R7dfmKE4Ogxhj+@lgaZ_oV-d3t z%rIyil$P(y_&=* z5q=mE7%5@A@YImd0MZyq10&08o|%tub;dE=a#@ap?Hsk5VY!^;>S1Qeo1EuHxM|;} zGm%j13`sRsYPqu`GHeMonWOe40C{*O4b%nCF0TI7_@Ldx5P%jMFdjxAle%253LXs( zjS#=;({Mdz`1x-KXWAyYp*4mcz5|2T?sQ3OhDR*H&r4-U>sk!77&x#PNZ$4-M^>ix znJnJT+qGVF&8@{ii-CC;skgHmXGmj;`k0r-iw2YVTSlLL z8zyY?(HW+n>2-$XQ-58SIX(``ALc>C2YJVg2t{v789r`QRwacB_*?0t6YeHvK7vLf zoG6QM72YBewn+j5mD(1E40+q10}Syl7-^VC6-qTiPzVuhD+O0PjU%mhKBNUB)IR}( zkNT|nq!JS`n#jheuQhH5L<1^(i4b573}=e;iPVjM`)D!HVqno2khv8Ex~ien_ux?d zYnU7tyv%2EpR=hM-Xv=Hy7nI#8esXT;4^imXrCe;ObcfT99Gp7pR1jRI)&fU}Dh`5G=djO`n9o-W2sEDI; zMwy!-Q%6kesY)*GV$(WlXSnn*jD-Oo%lV!~B?)dq> ztGpJLS-1zo*)D~{1g2O%cOv(!c)W7Y6ox5&g{4^@(JP_FYzTM6FjeLk@uU&MY&wlq z-xlYVkaTew{*a}L{(q5mina|7t~@CixkUePA)Sy5@Nl#fv!@{^2=*Qw|2fED>uaWD zTW0#o_bii`qzM;aY-gL7<>XxH6}^rZ{iFhLpJ4+Wu`Z z&|+Y57)ZXV3^o4?zNYWAZ=K}+tR9w8yAooYRRBXSd1io;&+6{!yx zbBHC<(+ui43tJtr^0hm0``%)p#lZf>0IUalM;|=J%Af&O0*$hYsSh5lObKJUVroGt zqMxR-m%pdz!;d`X*d@-01Sm5?xu!%e)zB*p1(Z%ddfXG@Kf;Y!bra$lm5)AaLVD&A z@Y7OJv2YuERbs>3Xke?T%#cP%Zi6?a08@eSbD`@t5*CKF-%G>*xSw$!NbgWt|AP%s z)mUvcnz6SGK%f{Tv}H9!7&mVWk=ppR2U>~|vc%H2$(M)$`FXk-C{PB-kP01tl{u?w zArHp#EsTR2Am_@(8lg2sjcG!iJZ=oaP-2EhmX+;XFxty(qBE)fRBqhkm8Z@*%bEs! 
zWwL@-7s zkps(D91%)O(~A>#=bUJGP0wlwU=A?=F*(kmHNdX@e&=Oh3*N?WG0hwtJ276W0 zNC`$R8PV|6T@8^Mu3}n^33ZSVD5POUSL1v^$)IS)s~e#;a)8_z9nO-tp4bs3?%RFN(RRY3HeZV=U=RBKpwc_fscqpjlFkeZDX|sk!77&tf>fKlaZ`P9DJ>nOaqJO&&aZkB&Ld`%pAyK4UynSL92 zup~o7q2L&(z*5DqK<1R7`w&3A9_#DUE1b4Z9pR`cT-t+JV#b3f6{rQSaAV{=09FOW z^u!FsH53v(kVeMamD5mZ-TuLV#wBKDB|<36rC77G!*nh@`zD#?qK)=-EubMFjAddd!(-D^V#gs1* zktcOFB(kkDBo$(OtnNhhKb;XJO^DLOWnE_11$y~3JfGK`CQ0nG4P;2%1 zlTTsYhU0@@D_>WBj&oI)YKSN_sv$xN_y_|e1csTQ79c2M6RCdM9(cq>cZw0>$B9YY zxz0zZqdx0tScWN-P*c6RnNf3-D6|YWpI$Ut^Y;WU6_8!Ij&ajOSopoGbgloMF)$4Q zO>h^?bnkGP?mxPW&#WjDJROE4eCSuC1DmWhRP44(97nBzI>k@YLubFA+G{F(4 z{nKKg#lX@q5NrptrEwtN$+tfBi+&oWbFd(0FL^gj=m||69@SN54IYkknVuApN+rj< zGJgAPG0R;Mt)<)?#;aSCSjUff?{Dij*g5To51 z8@1~X@qrq`Gb{#|RAK*UR9Fg^?eI`34FJbLIKN56y8(5|2IF&dI%lsDTrVerYXx)FO)GbboDoR8uKqnx$K`HtkqV>3X;UZR=7kDyoqh zPKA+|N-dS9{E9e6;AIYfHD-7;>4PZ(Q2sGb6Ez*Ak-K1IK1s(x&Lnl&p!qZtaSe<} z8h~BvY)MqfYl$kPxfqpEtIA$uM!AFjjaAE@b;;C0|Kr1FO`8W>~<`Z$a878jnmd=hMC|z(9+l1qj*u&s~z; z-h-$V`Vau>fN0v&Z(r@t?Q>N)lvy!oK%novY+85lAoytk9pqq(EbCK;~|h*)UBS5T+t)Y1=5!TFE+TXZ1-UyijyQsEy)m7YV6%*jQxun8 zsl2$KV2Vv^Bb=f!IpJl42q(`O=c%qSJ;jJRa0UiP$~f<#XsPs(pO-22Fftm)cslZ; zz;hW$^PWP?r+zxEFW)m>cO1io;V7&Z5PDZZhR0dxoO|xGJFVdvFXQ?Yj!)xzVWHpV z8)0UI>u!+hvu}ShpM6WU=h&9D+b#U=gkd!FmJyC&q-~Zxs%)5AU8ee%BXn7SGflg3 zG3As?@WAEn|%A_YH8e z6Pxw>skbG!1uFK&FVk{y=-srH6ZnoCkBQo z7hh(@?AOT9Oc_|Vt@ItSq4aINwT!R7oOb5OXJ$)W;1>p({M_LR5huUsCp{7ad^^MH zy3~c-Q_~UNFqIEG|47M)@A_2(lNaB+{wk3ooWGGK!i+?F5HJZ355&c!)`b~Bp~RrcApqU6vu@l3#6uti`@Rg=0BV z#tPrE`OG@(F5L8catSlr%e)4V0oKhtyja#tboDn**4=XWo&M%yeErQ=7{2R_XP;wU zLqp6Tatvmc-)FZoj_ITB<_G`ky6a+`bo%GFuq;o&Y@+N80~x;?X2R<{N#Bh=c`sWu z_=)h?jINxdjbsHCd{=X5gv=tFbFb%IX%qTXAF60A7acfFq${(Q{||= zgdemM1@)xOF=VGFcFp!r1W)(%3=w8uV&Uav4_QINA$RS=W|2?5F?9>CaNtQEImVBx2MqcPU!QR1}kpoh0mB;`qe&`_n-=qo6vaZPWW zJf7Stq$5})hpf`ic&%$OuqzBCzbjvom*xM3mwtEid&mX=cFV1u!iV_k(#q^B_|zbIEW_Ryb|*`j>_Ou*4n~NZ znS1%kMzg&tb}}%`%kr#g6rd;&by11zK~b7TNQ*E})*7FL@fBV&HF-^=#0A1Ag3sz# z7n5`h1H-0a 
z6rF$S&Mw0l!f(Q5Jj=N7(wzk=qv_@5&9_SQhDF-VA%@eIf6bo z2;FekNtCa5kP9=RCk$n0<$9N?%uLczVQkuggdsy4S$^R5_l%P-a7Hw|G$^2R4Hm-u z^>XRn_Q^5Ic|;kr1JhqPq^~E#ajMvm%*{og8t)>5n?7fN*-yDN;7ud{Qe9{ZejrL8 zvrE!tZ|_KSeB0^tq?uvz^Z18J%EB4ha1eNeKYE8L3$!;*S*GEyN$@cMEstYJ4Ga&! zpU^tEnuMR^+b|sLP{Dh&rJXUft}znju?-0r96F+G0)}+peI9nc=2J&J={U(p{oriOl_o$vzsx(RK~10<$xyUT=#qRRdDU- zJC`C15h6JV5>BwZKrYoa0DO>JIUbxHg`E1fMjMbT?E2U65XQ)mUg%{MVjM(C+`_r% zL*Rhwfivo>oP*Cb7#+odWla1CwNQ8tNVo~iVjpZP>w+C62E{s~KXDJy*%Kf}#OzY+K#%jLJO zJDxB)<)Vz~%tII%SGcytVx6*8+zUqqKFez!`t2!vPx%VNdWb)9DqO#=BS-t7Tk5_A91W~Ifr23=GFkhmc?W@O08z2kFvJ`C zSGyyp-(0K=?SwStnA4Cm#y&pB#~CT5Q1%E^*Ayax zQs9_(Hl!lb`1*xHtFnSH^sbE&ZlAP_Ndzi#dvd1jcBb9mZegD0*X`%KqewlB81*rG zA(LrIoPHx zj=o{}pzyCi8K0ozanv4V9~vM7e&oL$0|X^}W(tEu;{+v;-X~xg-@K#aidhwE0PRq4 z>QZOsCdW}C;GbS96^8%Rc8n%R=U57dzcVw_7#U%ljH@sKle}lc;G%9K7b9U-r5XC^ zr7TkzJ9yT-Bq_@2sG-h~c!fv$X-{pxGE1AmE6(Kf3^FSqz6p1<5rQT^Z~|`P9vS(Y zI>yol$~TK_m}S40<>?(bq6{s6L>ayRE;wa7fEkoH2+o?GkL)^y`n_iBgF&0!#SkT) z{3egS`n9j6KHwN$?gO_o8XnNV4cA?VJQ^>TUi!753Gtxfp%Z?L{E{aAi6{NRbMlgM z!uxl8`qOE+C_5ZYo+2MR z(Qq_xXSt%&f$j~LZ{mOnn6(ewXlxA+RK<88Ix*(ww!yE^R47Z7D`X!;;xh~{%Z_np zxh=nNbvbYoGu?#EDX9^>&wn)b#Dj{QGpjLRB=1lT^PGiNcNcE?cI&K%{RBJ18D(#tF_R1&G6Or%ZKX4$xR{9k7eWX=n1YUbhX%@K@_5 zQ9lEwQMl={oab`%j1fVoXCNPEkxjEh^d5f^_iy+?`TC`omTlX%5vLj(DwtMIMi%k<*YApO1wH-S3w7z3+Vp?wm4L0jM(@ITddj;~Pf-VqNT=$JwUe>>q( zj5Rwj<0*K|-$vG$SxjSFpLEu7Nv<|Vfar!B6>9*`^T#i~n{rBUOo+nGlB_7a>K;trCwk6Dq)@vUVUgNeQ4j+9F` z+>DLUGR|oFG)jJd-`3L4a|i>)n=@6yC&p-}p_PGGN8qt^P$20rz*8(G!Z?XGS0SdX zF7cYi@EK;d#16loGNIxDpUSDUSvU5#cp^j}9nQ?e_A<j&C>wSWh9;wqwC`X)vlSWx(Gd{^n;s)>1ye81h(Pk=-Y%-L zg3;nqw?1YR1yyWs<;l)d8eThMv*j2{iZNRCT0duY*%9t4F}$Zb0FQLXqRK4kG05( z^TjJ=w#mhXu0Hj#%xrB@siaM|U?95$P^T0dfuB!Yx0brlp3u$37hQy}qwvyMM2kNu z^le}HHHE+LAqP-EJ{^i4` zvn5v8HVauqm7@&Qqz@eFoFVqznHJ*@HnLXq#J~^rvOGG=5e6#nu>_Djl#^r@VKF1j zdz6MJnHkfhi*m|XWqlZ2OiT**_Rpzx?@a z+$=|T8(PMUA4({73EYbpob70SnU`^O!tTbie$oHPN$njB|m~X?H>Mb_MxpY=AAevE_SA4-ge8xdIga=&{O}uI> 
zY-4b9d*AYM)-Rl2h9323X0f^DT|Hyt{pE%$FE969b4_6hg`<-sa-fLLXo}fmR&uNv zT3%Kl6VX68aDy2*bpYTg-^S5Xh8WyI#0I`&87XqWWw;J*X-p2Wq|@cRwu8%M9Yjd} zwf$p2l&40i?PYr#*VZ}?!=HZsq&?$i`b@LGo=Y-B z9L<`P!(bW}c80{6#+)guk9tJ8`<{(u^{SPqQgpmT@u%ESb$8Taz@=Dl7%CbSo-M16 zISHZAUsj^j^|5Tr5y#01XBuEukfO@;`q%$<*}Qc-Og|3O50o{lR>dg7Ti^QD^7+qy zK8yneY&wmQIbn{nM#t`VH7-z&84Vpr$?6wk_dh^m)YJ+W%VmR{ci#Esf(tGv z=bn2m#zpPu4Ue0{O+dXImDJFf0Dc%9jS^uepx~!Pfv54+bOwR5J&eZHqU^uFlv`(& zn`+0mWu5n3edql(JV;ujjd$zizKK2>ITT*@1^npajxT5Z$Wdju52YB0Gc5lZn;I^c zUVd}A>F(`}y4xP3DC`u3rCJjx`~y7C`^D##a~@yHfBnDy-*Uzi9#ek(w|=_(pa11c zWy_Y$Wp&>@<=p4|L^*ZcU z=g6f4W>e7K;NS$dz-ICrMmg4SB|RJOr8BgxFk+_1q2U#i<%KW&i83_3h0WEBJkMa{ z5N8?%b{iY>KmFxv%bn{VM7h@(aakC%gU;BAml$;fo{BzG{`6TW1{x$&g_U5?H}IN~ z<`}UA4N}C|7|YUJo@i$@-V6RPXt)2MoO;Y)dG52HQXco1#+mJ_;J5ZWqK7C{i zuEH{bjPct}!}m2u9aT;^;e>J>vo*>uWnw73O~<056NZjysNk@23&Y0&mQZpVA?S>W z>0<<+_aWrz@y8!uRv~w ztfmtydzLnVx*h)5Hem<>%QE>}xpF0PVoSN_o_m5W_Y_|DAngFU<00OUI&w|Gw)~qm zZ7Lf#ZjAPiI#}1$tLQ(D4ZjB-cp&0fuPOQ#rm$Z(y8Whx}n zna(it*7=OMe3Enc?d(sJG=icf@rbtEb&?9E7Ae|(Pf>T4Z~DolbudM_V&5vB^rbWc z=8v;>)?$hQ7zcply)A@|AAM4}tE?=O%uY{khd&07DXX9MbLA0F{RPI2NKd=YAmiu~ zpnnWo2afK!tz7?+e_}9-K`e}n83udI#?=`oo^(>V@WKnrd*1c#@;P)7+fQA?8AtJH z-!C6(yhxMEdj~a)H_AYhxA9-j?K|5^^{FG;$QBn^_TNC?>aYLQZVcf$2{*a2 z$6$2(aWJDw+@>=4|Fd@{fO=Kcpoe&G67OHjV zbHr-3Sf}!y!0aG`di+8f0GPA9iT0TemLmU9%?hl6g^U?zsOO^dOX^%`Ihf;L}|C4mL+mz&@>-<{0T?Kt6tb1 zn;u~CGi#ttw(q`mU2*5KYml&G3yrp`-vt|}Xhf4z$2iTH5;1!g2)=GZ%$>F%4x0|K zG=fPMH2eoHxj2qHBI3@a5hor$DgM`U&Wb<1oQEe8eouOXiL2Db@vbvA0Pd&8v);rcj*XFIXY7osAi`P&a3kZt}gnX-u+`txiEX`|%=mSl* zX23kt3os^aFb9J@1JMCNp$117)y%0K^DGI_e)f&jBY$gQAv|(kXZ+3k&j~;)?phWx zXZF-MO>d5iUp5;CWj*6| z4{+@qb%BP#qrMVI&KgGF4UiabjfnPUzde zVH5NGcIxi$<-m$zv=*aNsER?O+2h`yHuyk4Z+heWr9C?B^V>=QW4zD8nVN*6sRGARN2RvD^WJopZ=H@f>Zq*8-FM#&LnONi z+(Squ^TYh=Coh>S%QjRCm{;py*;W0kI#+d&S;uy&5RLiuqx{1VsXc6vsXTf~_K`&0 z!3Q4#Lozv*E?r9bn6EJo5BL7ArFUk5Nwhvf(KBbvj6)$Z?K8(pj^J4}KqkX-8-2=(2XbNmu zt~K`O1NW~3I_O&nR-sX?lzF)?%=a=Pwk`2SZTzf*{cFC}y=lBRIPOZ{O=Iv{ja^1O 
zia(CHMF03XYEt5N=(UJ3y6G6e6BOi?YNmAiLG!L zYZZh^oB%^jyx+`y9YPfMfi&Q>M4AN=7UPJx6cglwXB`sT7?C;j^$<8+y2O4}VA_jt z^nnM~v(|Ufr)Yk&To_Q-MRB4G5k>Ks>whtk10LMtM|!^*fL?NZtZxyTjqlmgris53 zJ+QkyFm`TN_s;nWH;6kbd?C&jPnm9S@WSraZEU)|Vnig^JSRfi^akwo8?mVeXEH%Y zV%^FcF=y%Jy=0I`SS;k7izf@Mgo!9a_peAuJ5D$nVo@7@>{&6suV=<;53i{UmQFc| z`;kD(jDQ*Z5`yfGc=KD|9RKwBFT`2roE^(ou83(fXT--o_OX~ZZ(br?B~-6{?Q3HV zjKWdJEWrfpuTr|@K*&_Q;SFzyRjXE&Jt7Q*NbM$o=Z8ug-7utaI@nza~yQ?X+ZoWM~}Vn{T=) z{^U>oBmqF_rwgRS$-WE%kD&s%-O0Q9W_Qw#Pm`sANb4b#lgV}RNRGdpHQZ38i-V@oVLbXsiQ(jBYr-vJ|o$1M7!L+cEueLJE} z1{cJC#1Y2?>wNtAU;ZF|e(h~YBB#dZKYece&aCGd1q zu-WY~by{1j#9VRRHa9KGT=c}Osg1F1!@5|sctLDL%6{M4`^j%4jy&?H*g_jt-M1yW z*&AX$m%Q->^jE~6zWYaU{cS7ko_HP%$4g%_GtN5w#Q3+XZo}`@P|R%J8qb0WVO@-8 zJ^irw=db+`vo}q}Fb^E=jW(uFbKSO7Xf)6_nol)T|8|D3ZHW0g*aMNdHCm_5jQ;IA zqM1D{P2|-wFcOo{h77Ujy4gQ>qO*`|k1i7;Bk3LmH{&lpd~vK^xsrSAtJ66}Aq~_R zs2vgNCBd;4Gdp$Pbm-g=7`Vu=f0gJLI6%Rr)n8w=#-oau$C&Js?=PK zt@Cr+HZ*s@q(q8DrTOZBLuSTj8HSFIj+}2UEIu3jRy^l-N?y%F+#s=Tyo|YZaL=7g zgT$-frgjA3faNzL!8bHu!*1TI?^~1k!fmIf#&2|-U|FMO+fn=m*?XcXmsiutcvzRL z7x6M;g|yBh9`&CbGhpao1VWm;EJHe>HbiZZ1pSmrMLY6H?7g70_ z-n7Dshn+l#}O2mBmPxvEG-wXG@I{4MjbbRKgew?9t&Z}|WjC)s7?%Cgx zKYs-T@^yxZi6k`&&7n2}&0!zb3J6Y;>DL1m>&bLrqsr1Rrtt$G_+V_^v^k$=95gE)fOeQYb5=~B zK0R(-dPmYSbLSlz?aZs?D^?_9WCjdXYDA7&a&&Yd_>{?AxL|&)BTu!76#tO$aB&Yw zG!Ze;1N1YMrRo2MOsxB_L@D-U}hT`q-xFlZpy4S_aUVdI| z-nu1Te*Pck0`QJ^yd%zd>KU=-OKUQ%9e?Vnr^cn1UK-#2_P66hANo)(7!Kh1=bsx&WvS8N3IG_iEDb+-X zJb|%|Teioi{%K?U?AoOe!0oXG0u7}Tw)#j00mNojlD@ufcn|h&Q0 zAL4C)`n)({acS+xKlTUlmiOEozr68YBvr5*G$0j)z+IkQGYoSAC)@`k(hp z>gj{vq?|(!YK{+G`hu8-b_GkHxOa2Jo8S3$h|}(v(%Bjxdgq0)#ImRd=E(6+UtSxR zU-7ST_SvV#>t6GOxPMK=+7)eow0C~oiI<$_APaShm^DFP!o&zI;sTj;!6E1~&d$+-~u-^q?0FGmL54i)uT8}B2 zgp(_EG4~)t#$E7nyp;lecl7Z2J@PZ z)O*pwg~@;@@s>Fde>k^}Kkji!^h&I%k-h!)+v5+Uv_s2#HLN1 zZco7s4vfajgiM({)oDfFWKYtUMf%Cmd9}2{ER4S1 z(;LeMfmFGhjV|EJykqU)d5Ce*^PmDu88Y`Svim&gmGvwhlZbD`o8SO_<$Q3ClLbsB zV+B%HVA98aK+PbVn!sf+?+S19#N~LZ#p;JqoeAUd*?;_8yysni9?O<5kI%Bsufj@W 
z;h~3R{cgJT7RG67JoWUaB;HuRaRY+M*TT4LildG?I{xzg?*m>Z6Fsm;Js=Lqb;`AtkTkUGG|y^srTxU$G9%*t{T;+v{>sX@7o|&lsz%hD zLt-es`Y>5tb6v0pCE_@FyN^x1Za5y`I*2RMfjYeIq_kRFND&|eAOw*N%Fr;#0s^S0 z{7dJEbSr|{Js#RK`<3?98#iuD36YEGu`H~={N*nb$-8J?kKMa*4uA`w(yV7b^O?#0 zz4Dc>j4yrp%cUcDfBf;sClm6W?|dhjkV@LX>w`k=F%h=! z#{M3ZusjGDy9!nQE-2u6#tYKjJYgc6G`#_f42TxusZUrKPe0|*c*d#o;>jn@ zMuV^=e(S8oal;J}ue|v4@mK%nC$adDi03}zBy4eUM+w8Kf?BD&iwHt^%wHPf*5#|? zEC2Rmm;$s7z{s=Dd3sF7ROuK0bpv}hAaq&G8+Ct03clb!90OW_fIuH^MoWWvlDzk& z@BT`by=(zWj_UFEnT{tIRuOKJ^JO09|qQ4}Zz#>WF8sr=%X%rG+y<^ns6) zvUvWpPmfof_q4d^ywl<}=RP^kf8MD%P6OdJX+e7>)cr7+Hz$vEguswAY{NJPC?;+j%y)9_ORpmCkdnT`|E-80{+1z9+M zLG<8x>W1sCkLB3fndVT4=R}&!Cz+KyZo55}Exik;1RZhMyhCH{n)~9Gn{JMEXkg~T zd`yOLX~(o|KJR+EcEl|=-W1D8--aL{H4WtB_1L4A#Exy-n>>a6C9PTJ_19dB z1Bc$smqE_9Hjx$-fQGXzCBrM0EsvWZeAl5ZIB3R9`buKX{UyKxKw-bM+gmgL z*CDLD5SpY7#$FTe>>%=)2opW9r#+BFzKb})d*{3Kn@op73N^UW$!e;7Cv952ujsrz zEqiyqtI8N9C3fedhf5=3*JilBwIdf>P-rG|L+VJ~6gKrvg89%S0I39&iwjdhuMmwR zNNonT=D}+&2qa)7(rgK&j%|sD%ZaI6DBKL06P>SJK;6V{XWEd33l}CMpgLa8?yR%U zN-cyPuarSai0aYHF1suli3=~hFioqpdw%bG-y65yjOdwQJ`RL~VsKz8F43#1E8|rC z-+%x8`Ofsd^I6S^X?igISkFSCO{&a>W*tfyACC>W^{ob6bV+kY@n;sgum6}ofs!7R zkR~UN9Sf!Bbo59lun?qOtN>4`fA-E-#dg*T54TgBaPGM`#Gk$8y+{_P$Go{m0u>NU z(7y*yZ$+!Y(n}uVVNImuXt4@P?`!qs@Ayu^1>^CDFN#;cVoChs+K3_%V&sWfm# z!4N?(9JpI#1&7eE1E>XKpa+@8kv2>pwwMgKKzhvnYYq|dHGGR^gq8{1`I(FiAx-Fb z)}6G(s%La)!ZL08lsIBR#4rD2C7asO_8_gi^XB{Fl&2hmm$0Gu;rIV7Zur@jSa{g1 zSa8_9*oH=@lYJADTe|r{x)zB3*nnYLhf$eBb4vEDJBR+ZA4@lA&*Ictc&V2H$ z_{Y!ei0yrP&nbI9jywM7n9DSk!Gkx$Yn^R_&qDP9$pc}62_%HJqt}f|mtE3JT!Z}& zBR+x)<2w3M!kP-fykMFG$tw}t!1Jo*%i^Z#1M&O6cUpY-va2Y>bfOxOoE<88C?jI4 zXv_f&Rp~MyNqt!t+E4(18Opta1?PFu>Ojyy%p@}coXB`caJ0}LN|}a$&niz{g3F)_ zg3f!Tr{XgAkhP+vxE)4AuWk~%nrkWKFtLQ^>J|4uAmKTVZ|2Ovc>plo)s1&ROr+H6 zN{mipuYv@o=3dM1#?*uIdcd3Q9UF3wo7x)jAEXjQ>DK6kaS(31A<`$YhhhYV#(gfE zkj`%2v@y$Chjv-*ikqs<57IvCpV~y~t|gwLlkei2u{bGcI+al)LzIC52u+Kedl|fH{`}|zH==Qma@;q zIrLjj1KQ_RhQPjZ&SwL7pX&&HtoB9%-*;ZqO)~kp$H2LtUXCckF>^1Bc!&}(3q3Mg 
zAQJVi=zf_7>Q{a1v)@{8&y4ZjcNu5izBdr(oj>1aSzP_z{bAO@JgV>WeJ$QRI?JSp=dW?_pq3JfvBmZSQ4Y zU!Mnu2&b%14MgV%G-J{RocH&Fr_hEqB3v6xgvpoz8)ffTX^i9IzZ8}-CeC|d?uxH{ z6*ylDPXAWC>NOX~RaadVD;YQGjyJvK&1h1m<~T^VeE1_DNy1RtK-2R7^Vfe(epD!1 zvG00|3@5^GP!EVBDmqrYC;j9)Bb}-yL`~``r<@WB-^F+S4Apgibd_aE9qe%sX{y3> zQ}D2o2_*$+HMAk`h6$Oo_~@8_($jF4Hwodk_V?IfD%nwUg0!_|WP7Z-;s3>&6}Lwh z2MdWlc)_CEA02IU8lEnOK_D(-zEkbq$j_6vi2!jl#LLh7!_J=8Q{qdX6SqdnE)YIb~oANJMtASi>LoCK2clu`TsPc8Gu3XQq% zrxCLo$HWcl9!d$-0a6EX3l9tu!lk(I74MC%ky#*iyr|(Btu>~^rEk4BPCKP3y1>~x zkVxr8Y7#2)Vch-p!Iks^FFlxZ^{{UNdK>rx7GMCTvB_b{{MqsDx1EEun9b`S{X%T7 zo6O1V9psN!BbbjS4hm##$c^d}$SzPmTPGJF83v-08L0#Ah=oaF;X>-~SQUmBK$JUC zOP5PO3zf_-Ps!u7jKnNVruZaIJ!wH4cg%uhm^#^5zX|4M@)Ve*V;97G|Mba8=&vU( zzazcV`6dPmwJ^kD&b-h`={C)M}h;~rjvJTblK#?*%T9~nHKe(fLsZoK3Ve;0`@w$C*6j3+LM@BaL5`jGsn|9k%W zpW>HyZO=VBt?VD^Z0d=xfAJORc2Z&pD}L%n{>-mB4u^yl;b2FIJIZar8AUJtZb$Ta z%YRqzA($Z#G;9I3HK`O9rgTotan*}WSxiN4{K(&hp4RcKhRPm2FhSzM;NM{>nG8aU zbuQ0JCTUV4a~TAYdJ%#Y3H1<-s#Dv<7tU$tew`9m`lMZgmpF+*=eo?fI72t0-AL3u z_hJAdTAZ*2rbC3Bgf(UR&NC?#z6$(? 
z5Hob(0dX#a;%Qsb*EF_8M+nAypZJIO=2g|J`nChc(sHwomO=YurVRH#ur8S|_a$m7qZuAkEEz@3>=brvL%1OP zl<^ndZQ^6qhx>FuCNleP)&tU}!lR~|-n;(f{AVs0+-s((uA#y)!mRKr{xj|V2+XcM z4_46yrvWKTHw)VkHnR1NOhhyqdc|S8BStbE`5QnZBHCcN14*FKaP~rwZD%uVAMe_b zs5O8zL>x@(+=OS%(#D!=6D9yk507HE^4ZULM!K9l<>Zs%th3L;-gbHX=m$THE+j+O zVIpwexi3TdyfwX6op;`Op)K6ApZ)Ck>_2=a7ZCS;h#PxZOgCe0P$3I#vA^(zFR&^4 zh`8vYi;@}9v`fjLi{MLN`qJbaHT^pO{PW{o?|Nr^hrJ{xo%BRhTub7EAN*h<4HrKb zzCZu-cf~ip@r`)G6P|$AmZNGKnX~KSRjZV;0rV z`5Y2fo=w3pLX1Rg?QM*$gN#4^+v|s#klG&_r<~LjU;5k+;_@rM#kfa&`5*t9abThF zybG8K_|GKN8zd~qUXzd(%z+M@-X8CJ$2qa(0Zg9W_kL`v(Yh(JXJbT*i+J~jG(#GI z%5TMvdr<1$>bbrk{qj_B`vKv7bYj-khO}P$;cft;2S!IJvSuqR1!%r9^Q)*ZaZh?j$mS6WcWigQ4b3t*%#1II2SjT?8aBdNAd2aVQ7Y6-K2U zBG>oY^ULrEuxd#RUV9;)Z@B5kTo_Cz44ap(Ufqmq|4Wcp#JHv>Bw@b@Tt4+$zI<8Y z!>p?WGoc9vMgmO2-0w{1ufg)U%2<_qV4nLri8$iTcM2v#2jSXIE!)kv-kdm80$Azp zfPy46gfiRBXfIs=H*rqGhK=iUKZ)t)&N~E&+ygKR>+&aoCLZ=$F&q?@<;mpOU$)sa zwxPmhrgaR{g1&wBWs zd00Q|vl)iMy+UQYAsB$k$&7`4)Z04(bF~(M+`Xv@b1de=yk2+Rb<|boEbGyf6$Zt7 z;WbwX>bqy5%##8w8)@`;fZ_ZhG9yYfcZN1glN{9q=|88i|a-Fl;L5fsd;0( z9COD_?IA;zKbe}u3&aU4zBB0U%`v}s&3!PjJ@G8)lV2bR)yv~K=bV$wi*RGG-^4E7 z<=+zFtBEkt1H0@2@uoO3wZL4(#o}0hZdRW=cW(UR7ry{!Av^-6y;k^Uml^M)m*O>l zd&P)Y7b`Be)q?od)1ftS80OHQj<$xlfAvz%Sa<+If~tC^P|hMJFhmT$Mi`L>4pN(m zRG@du1`eu2s-`5v38(Z8LU0EgWIdT%mO^EIBiBEhY-uIX9jW@pWjdAAFpPmRv z2IgC+V6#)Hw=Nv@RAOCRfUf-4ug90!k0J*uv7wJ}2R?1e_tBt^&!$?e$H(u>XHD}Y z#yq*4Wf@w$c4!mCo8Cpl*F$XhZeT-oKVGIJ=DN{nDWzR}b?kns&IU+UvzWY{{3gDzc1k2l86s~_*3zmXFVg%I%`q<`MbX#-P<5o*kf|o+~zp* zNlW6V*W3+3O&u}i7=V!)MibM3DN$M!$*4F<*=a5tO>9n!Af_DD7|%WT<8GT^XL!UDPJ0p>!QuG*=RU_4GWPAUwHxn5EP#V*7FYyj zz-4X*>tN`$Q&Yo3-AC{=*1e;djp{4oWzSm>fA+@n;u}|f6Xu~cUi6aZLij}d%O`&n zCmpva4q}u0dq00${QJ+=qD_LqXvAh8_tL-f%mtVZAu)zPv^cy1(?+H3Nb+IAbTk2e zqR%n?Xc~+QU(^Bl-(auJU+oW9B5z68ccn-_gig-RikxQF=iY)$J@ z+TF~2?FMJ0i%<}{jB1+yNYuilXx3*RH^DgP`9gk4zbb4Hk|j_iPFxgr;M%wcGpogm z7NvyU&EC3m_1p;U_C-VKX^OrSpKM3KqW7jF5yI#y-1HK>;&V>2Kk}g}JHSHXpHgVGh-@Wy@GV=cFY0uBFSEI(TSg&&eYCa?$(+*@s6Q 
zengJnHoQ`~Ph~ychjiI~*xY&f-i`ZedNz=+=T$W@2ZYNyOK|MzHy{&C? z0f3(84%V+Y!FGthcWmE^J6|@-Z=IUVz#Jx_aeefU*$$!A*ZfV!)ZjPvl|fzX2z}|b zfqqTX90 z$J}mWT^K~rf+~e{!U)VFNJ1}p?!l31bRU>7t?OkT(-yq6c=-1HzH`Mws&|j#p?QW8z-t zAOHBrxyM@bTGN6m zBbJK*06+jqL_t(Qe4!N?%1!A`aO#^gh=qsmT`ZQXrUH!n8xcoa?6g`t6H8rt3#-@N+gam3M2jsN}T6JqTp{c-IbcR_eHp~DkQ z-PFoPR5S$cs?a6z0P6WZw0BrcVM>L~{s{XuSgqcE>F48v?>#Thz3BPCVZ<6VHShkw zcY*!R_~PY1iK7oYBVP6T3(_R&w%hyS=%ZR<6WWrIj``6;Uj4(U?J;-SiWe>pNN1BI z?Pkx)=3X>_%+8)+Z2Vhh#%KTeDm0JJi|0M-pg8%Ib7>QMMi6%Y%}1||8*aTj-txMO zVhhvfs-ND58Qk<}>BJ3a&jWGQf7}sIgE4sGlg^BF>(}KR>>GrWV*(DLiPR*ihxz69 z{=vGLarf#w;=5P(#_#;r6dVein;MSv+W zyV*xS{uAIGrWBw3qeKQb&eb|?%$VUU<6;3KMG6NdR3&@WFM**VtW@YpHtBMBD&{bQ zFdl8-0r$wL32S2m^vYE$7?UQ>3OWt}1KMARPdX)S@ZB5Yp=h4%qLxH6Ej79=aIzBH zs~{GSJo2bG;h1BS$n1h>egNrcGq5&-+RbY(gDmdw-O9U`C9`r68_!)V+?%n1eHME9 zS%yGGLVpRwws@rvCPR0=EiNc9PrhHcaA7P)LN0^i`9}AySiwP=i~)7Ao@x^{n`>jC z8UUB2c9HR2k3eMx2L&F!U_lbQrfr8waN&3W24E8HKK95X3Q>-Vzqtw5)YPdS9Elts zW|;DjzHdVPc<7wju^7f!9Jm(kLO=Ld##{}gc+fub-LR%$%*Dy%y@UR0GHL z6t99)``bKoF=iYV95OfiXc~K8CNDlb`_DeUhY$4pwR+idwB1L?u`r32>u=49l`!>; zHS+5c$&S?7M3T>DlpPg~b-V6LK#;rqP_Wxm+J5-MUj zaRgEMOcAEAB(M3~yJE@3Ku$?mh)2xdoW(7rEK{iH5XaR+XQ&}pXrjQhnR(h~<$zNe zPnk|+C7DN>NHO=+f=Hw2Qr$Dnnv+Sl&%}Gu9thM}U}U_dJ;){*5X~_?z_QRsFeJ`1 z;9wZokWT3r#rCqEDr3Jh7o~;n{bg-fe^Qm>9i_BR2wp>`K zi$UaEafGHxJ75SW+vva;XL_#=j2+~?@BCIo@8B3eHech+w|e)sx4%8+;|JWcM$FuE z0<52TsFKZotq~bnvB&C+iINYJvT+oz))Rt?5Lj_ zfBm^{2hI>VkAroUJnvZ(@wqSlYj7L~+5k>+#zaRcyt<8gG+#4XLCpPK(M+Ez6#@W| z(1YZgVMbiMM}P;YlmXNR^^Kft%_eiDT21xaSWwtkfjYZYM+j_Q7{rs*0Iq?RJhj$C zQUfD(m^f*l&Q?U8*^dOA4ns`$55WYW{%+wc4V@)ua`iYiZ$JCl$EHLd@3jzt34Xcf zV~{HU`_Jx)AN>3d;DGZs>NWQu-)0XT13rd0xDU6!9aCT~kz)3$>7da~tuQMPT2o=p zr%Yl}(+`^<0A;G1Km^HjFl*QNn>ZXG86hSMks5+Sxcx zIe&#`90`ZAhXs7%MDYiXQ7ik_cfBk67$-LmrupH6tJW$cKR? 
z_g6qn@>0gb`X#|5(GC2`ylIA^YhB48am-lURpAudoh`o2uQWukx$fR;AAj_*0F!&(_&;T|n(8gY7a;^Rj&g->WU~Tk*Mv z)mG*B-aM*j88)?4=3#u*7Sox=G4pD?`Fd~tjaN%y{?)ftd#d%sbFHd+`kmq7+HSnx z8mjV5Z=7S}y+SaBWqxlxz4v`prtyAj*_P?mVA{P2eys+f%&O9c@|XC0$Gl)3qW4L& zPp3A$h`^&RCgFf#Gq9K&_QzDh^~jK#iJX!7l2rq4%cLtnwzYQiN$tn2e!6X`Dnj`nXO-!_AT zr)*pmtG@Wrm_NJ)zl8{0C|@&TRoR+OI@W67gjbMYep?+!pap>aO>F+NPi#9&Nxbcn zOBk!6_y}G$g`Jn5_p*4|xi5{EpZl`d0E1ceFd1-PLbkeksKrfOCVF6e4+wJ=&JjKh z#)%uM=Y&_{#Sy~0Ft~e+h>%Hs%sJg=Pr|avIZeiC;^;zLKoq#8OoL16_~3r)f|16( z$+V>eDrP|s4pm-WwYJb*N(_Kb zA*ev=%t}vXP6Q)TAiXyGZZdNciWnN$9jr~X8k6*VVH$`9JJXJJbE%tbt3}ApsP%!h zC0>T3Far`&GCE2u$^nC=rN&ih%+LHADOZW6?Jz&TEr`4fbhU_hz2frAbCF1uFzpu* z9Qeu{OY?;)&io$BVD6<=r4*5D^}M$w3x<*v?k&*7h<(o>T7E?CKsw)F$EItL^e}|4 z%TWW&!4O1kOZ)U#j|Zr>wmC@g8C&)dd8S1jYHX$eK*5**Cmuv7(Od@;GeQ}491=Ld z`64||qBgWkh`W?$LK(BTq4H*odRQ>j(u^Rz7r4}Hs7Bw;LeU6A1C&xelGbhrbG$?i zjvygqaq0z*0R!p*^TWN_Lf=sb34{X69=>UthAC$d2FQhNpl&M67E;%(XuLen!#=@A z3G$Fl#Aqc@<#%_X8FF&B;qeBqVAgPuy%}m?8fY7}rT*m0WU6Ox#{l!dMXCwk$8I>b zVml-<(4GwRx5j1^8+oF2<4gsuY)qaWUEB;cDp>{A__m&Q>qCEtO`46&DHM~@4Wk4* zKwrA~xs$wT7tBLlBO74*N;;NnzO9dYwV>GuS5`Lzk9c~^q8^NkCx|l*xhGHNg;Kso z34SZW!raJSuAKTpf`kJ3F-Y7?%q6tseLd1e^h^wqo$b%D93N^i4}CE{;Vz#2*k1+p z4^H}VJS8*%Pm>vFuOdvDkm>-|0kmmtd_JwWFhY)bV$&K@NmR^tN)j?YdHke&cMX2WIZXpM?+{>Ozo38;#FA0ap8E1ffS2)D#&y!5cFv zGvC+xuIfYUI6__{n2}l^T z_wbEzhS71WrO!P1&TlPi=XzGg)3{-9h{CkVt}nQPuJRe^DF<2tnVz=too93OpoQosO->Ht zlF=W{u)4_{OTOkaE|j<9B6%dcM+mmZ`dJs|gs_@xlyj8*L}1{S2Uf*Q=EPLilo95n zl}|?ufS_|<6I1(7IMt*zO|mxJ3(Np};8kDfNFo^?aCAE+n%UpPx$j3InDg6tllj;~ z!hx_(zm)w)l*#>VH{To=zvg1h&d*9gf%~1ldF40K^n7RAAF;8I&<7LI6Fnfv6?|)7 zcwY%65`O@z25C{_6DQQ36J~+UJq^O`!$IuB#V4*i`{_>``3ZJnzyEvBO&|4@ePd-l zq@A-E!c47bW+d|DzfWbs208j75sk~5#3yDSnc79kJIO8ox@nz>$Xl5a84)ups4yRD z&snT5N8jc{Ek+KgAfhaI=c`9~SC#lEKdU9T6U@P%_1KNDcLhGAfC_l31#N7C9AF8M zU4*PT<2v?x?{`Nd0-9wZY>n+r^P=Ub-;K?jo<2x+Et z5eSvCPyk;d{!GDN>QaK^%LhqN8COz__P4M2S3q&h`y7i}7LHA3Cwv)9UwCifPiWMv zOoAAef-4J;griAaXoXVBXKAUyt@12`i*M~m8UEW@FH@ACe~ZK}Q$lr 
zlYfOTQy8-T`Nz7I`jtFOZrKV7Q>Al>9lM1SKpoR1!L!Y{=fKM#MTM@mm)}!<{qE>(3mB|o#T=@manVZvH4Z`PFyE?V88Z2 z0phY&kI{%QH(Wh-0)3gkqgAWHzR#}NsS#0HBJfuWXKt<(2{{RgFGNd$ekVr4e>EA% zNVJuYEKqJ&&L<`@OJ2TnsrI}8Cu~73PQb#56jUbKduTAvoyF|D8%>!D@aWS4yctus zF}LF@lmz9N$g01`MD5Y7V`4~x(Q&HMR-gTh-T#STmQJo zRz2sPA&Gx034xL040W52=0|!@oX+-wABG0pK#R?P9n4?f#%Lbg#zHR>_)xr3GUv;k z9gtCv{*R^m+u4q)g|s$MV zx8@@wVx8(d^ph~+CSUTD5pk}&C|L{5VT8d3%D1?|jMsg;xto_Z$)+p}3O5tsQTBkn zXQ-~_Y4`sMDUO!kivxXfbR5sxD8lGE=1%XDPS_EKWN3vU)GfJ}!u=~Fn0D6VI$e#M zj%c=VfZ(@QZ;Wo{Qa{LGl&KK{%yVp2A7vx>ycYb>pEWOa%;%hgrLN+r#4voM7rSs8Q32VxMc2*`~y{cJW$GjVj zuf^p>W`>(}YwycnwQlFwlmw%nJz{JHt4XR9 zom4U&a`yz|Oi+cfgme|})MsuIoXSpLf>{{j0_bB(7K|oe%1F;SENCf3FkSw3mg6_S zJ|P=lt1kC8BNzFdEwHafiM5^4c~`v~7yYOnbIfXAjL&cEm_4dYcQdi3L<5)u^>cE0 zYlCikm40K-%GCN^>>LI z3kUgfzx}D^;`p(Cr1*zajB(i0;Fl7m$_)$Hi#sHIjgzniL$Jzf#v{8oj5$#|rQkdJ?9WTP2*4;A%?bUXXNID}fC$l>|8V?hQ)P z23(h8Pe-V&;lWmBoblqrgb&j1+@$ZD8Vo@c6=`aL!6ejjK9EKRUV#qzDrq)*d)GA1 z7-Z1=kTjE(xk(&)r$$mH-*}ZI&CJ6_&E~8baqia|;^&wdisq#-DuwtYk-~u_x6Dxi zlJEIPzU5m9HwiP+!hl*dEC-2s&XmLd*YF;{Ho zeG~8m6fx$B9TI-kbXum2jalzqsB);T=2lvh=7qV|ly>??+$egFm(4 zgite*SL2zl)g(!2UoEMVFl~vQ|CuhAT~M0~O>HvsszPxh2hCr&i}wB`>JHTrJMjeS10-^aXqRJT<=j!~uEGA=b(BHLYrUBBJMhrDvZLY9i=tQT)n zy7uf~-v*{DAZ<6MyH_YZE152tPe`XEazO6ZFGpS?X*WA;*VWmBx4DFl`EGnYs^0Nh z{Y+lP1G&j?kN<8=mXG!ABNhEf@eirbZYO-$FOz^_Ol39Fln&4Q?iX-xkB%zpu+eXX zO7>it#s)mbaRU7C=A|*TZB4Weav&T~aG-*t=svDy2$5Fyb=2V);Tldp-iGI=ZrU)? 
z)P^J%A_@V`0B7XbQ;nErB_z13HS2CHa74P&lm3+%5erJV@Ky>8hM*AtC7n^GO92gU zB=jsnlST1r114Asam+`FtW1AN<6Cj7`S366A=BYA*tN!ph>s<%n!)30SInIxv;}4* ziNKTrXZdBn4%%$etFe#?(Ui`w%Au2LH=M1Es}vkxHE_V2^(ahAiSuvHsBEu{n`HtI zjD6yG>Rhx*w$uKzZ|G0rC&K>k0mrJ0tI*+CCQwoliSmM%%czrzcL@)*Zxkr0?b4gn z2z!Q_(5NYFsME|5_;?zw)H`sF(Tt!$k5$Qh*2-{plu%vwrtRiI&4jrz+2t+p31Z_L}ZO__K(N>jp zgSk|3dzRzGR}($(>(K)h?V5~83%ha!{3MEU zUXS>(Rd=W#Cc*f)vCrQ76CpPs@oKN@j!C_1IY)s_m}o)R2F}KQ-aEKy$^lxZpHiid z&4>T=B9&_gdAK|rh>-M>1&?jq!&M~5vPx84O_<9!gw%nl|HD;vf5thdC6IGIE7`X2 z^peHmFaDt4bkSeO2He3uHbW0~o%$$YkU-J3&%Q;tNKD>?O2i zDhe|q?i2QHcYz=4A(J4H8|%P}n@QZXVv3BN_}mKp_PU|xOW zdR_-(Axt#l0k0Wn9}PozVjJ1cMsB=I;ljF3PbE0ZaPB9LNJg|aKQa$tOqdV?%yVzU zBj=&hh%7^V^u>E?WbY_ zNrF8LuG5w9P)4MBHoM&sB<^9c6Q51=z<;YAX!N{`v7ETIt0t?HVr=j^e1i-9$rHLi~y)PM(f--BLN599l| zihs0Mky$l9ULBtt$6eLWLaI7e*D7xO^Y{w!-sZQt5fS&h+g9F!H!D3_0Rgyol~55r zhH&fZs-tO!L=U_Q-x1GqA%x2EHa`3P@oW>V!|sKLNtk_b;Of77^?vM9No0&b_1Xg? zZdT5J5cCz1P5(fHcNYY1TAMr407fStGX_4b(wjp0UzoF)vH`6L{mA%FnOYt~~^i_sq+u)SlQ5=C#ZVrPvBxV0-5?g_1w`hFcFZElkDEw%I)E zVfn+!h?r1wP5OwJw$pOU+$;AyPK5p31CCe~s&(5@E6!y!nFM}v9A!cZY%`F=%VgtD z98E}X7XpReEz!SaC7ua4#HYfJfyC?wub37la|P|$mVBzWLhk<%KG67-px3@7ik7> zs2GyUEPkwKqutDIBH2U_{95&Z^Uh!6{rBIW{_IDISEpKSvX%vN=WO%Y39P#>eVBVD zMh7Q6J2@pMlqB?Xu7L|8lu&SE=0xA=dK%IW z@o_RDMC-ZAy%XZadiK%Sm)MDAS;i#;STTEM2?H{O21gCEgjLy~4kKjwqxh#*)9h-n zuud9;+nNT8uP9U6igdKsynHIASbl3A{BK)}rmMWCcGSf&w{FJCaHu)5cLrfNtw+KK z@tLN5Vxu7_m>n_}q3fB4jU~A%W|q{|p6U zvsL`DttQ}C^#_c3na_pF8D^g4K38M2u>6yG?a809T94|!TC;tg2opW< zP`tA6KGXcq?&?QkHtf&64d!9~A}^Uk57Ckd@!&EM*&~&`x8?f11c?Moo*&cKop7M5 zQ~xOstC!edv7&m!Xu6{WN}@%1u?wGzl=<-IK!hyY^7kxQ>jPBtJ*)6O{>s^1_{xNY zQ9@S2xtiB$+QB{y;dBV?r+qCv^uv_sZ@+R{v5%{^TiIC`nIqlxHgZPH0Eew?C8BJBSjkbQQX5;`1>9K#$GOq9y_FrdPirqrgZ!Ioan z>d|uUfKZ==Dem-HnDTMfPG6sgtZJgoeu|Q#xeA#HH-%-lr0SQi_wY~7x?O%-5%;5s zEY>CozuaUiflXJeA}n?8R&B1;+ecFPMZz~}g>j}W`Q$7b&3;dAtc&=u`k9AuRs5dh zIq}Uz5BxgxKn3SGj$jfx!)hV9@!BLX%7h+LHHiAq`}WRG7?8Hqij-5&K~hXeZ7`C9 zCo|6^RDLqzVyk4qjjQA#f!xfaE&vomJg9#Ar$Ld#Z@ZI>I37e 
z$?SdgT)p3)SL>01_Kn^@Oa*rJ?m@|GqC?v>Hqc56V2m`)0$C2A5oy58L`KmdL-JUK zs%*`#8am zR!>tO=)r|4drW%G1KgL>HgTbZ&PcR!c&7)nYN9CPj)Vt<$j%H-r5|8VO(Skl(`y%u z%pm<#XZ{?3*V57+eVn9ShY-j)HH7=&2G3`q9C5%P#GwpeVHZI)wA{cMZcRu3bzj%vm^ zJR*nEx;oHB+ReA%euKC|2uLRu-jj)knDWQOnP%e8qwj&e?Ya~sNSSBgTCH0~G0q+a z=;`6ZaHN_8HhQ?^09rZ^rv+vQ;8+KMH_v05HvNzo*mfgv2pbu8*Y{$!$9N|LlMs|l zZedLJ1@5ovJ|0)|Y;&?*zAJaGx6&WN@N53$wRr$*KQKJhq&AaUq18_<5r;*3Q0UKp z`lGn>j$6v%nau4$@L3aWuY;lXpifOJhk$tnvWMD?wrP zt>2s8Jp6fYnz4r;`<*}I##%g>M<0h)DkQGTtjehJvOeA$pZ!4I2IH!rMTPYnOK%xB zzK+!)cw@h<@|d_z^uX@+fNmeRuA9H70d?SH?|Z8u}joYX<5{h^Fia$|J#Ys{X6sqZx# z%CYBDPbp8ct^?LE>(3m?jo~r~nmFnEM`ENfSVMi=WBzfAV(Q$3q6fUy$jSGExcOar z%WcuOZWHH>ur~wZq7UZZ)0bmV2@Cl)pqYfRsBfB5h>8Ksxj21UK>-vDo8Y~GyP#k4 zkVf*TXTJ7BOr+3e5ZA_@(Lg0PZdXkMp6Q!ysE=hK6WwV=J5cVPu9mQ$nVtNYkwx4R+m-8A}-OO9Jg9 z(z}UpAbMc5>ql?OI0FnYy=wcebE;^9>MEV0Af$m)-)UOT%l-lAk=B;zgW;Cx zZUQ$AFi%c;(kXHG3cUP{cyc`+_Sgf{&pc{CL#Zk9h?-9F@+VE=nk#YcPe#FKX%w## z_tH;=Rw_(^&swEO%NQ!cUgFOfXXM(WwrJfH-@cwpB>Zji-~uV_PsNFLf8fxnu(Rk@W!p_-)Q zzp`%DXKdZar`cc6x;M_ShoURj?Olbcjnx>LwzMAvi&XP54M}7o{nDdPYC4L9#nH^4 zII2d7mN+BSn3VDMRqhdzI64p^$E2$5_zILdJ~%cTRBMxOe3r5D`WqOs{{&FyTQRjM z8=Q@E{(BMr9pBo|$zKKLXi_q}nEzGM$Kq;DG!gX>!Lsjy#TIF9F_Qw936~8>{%3Z? zeV9n?0L~l1M?=i%&SMrw^9e`Sv=)3%I_KL68MfeJYhcZ$c;NQc+K;gG#}J!7=odJderUUTp#{5t4V>d>gnE)+L1augw;V<)b|x!+<+vwf3S}d z)!^_2+Mj2heoDOI!kO`=w|+Zrxwk8ZTV~M&@&?y8g5w=EwJn-)sS#^x28Rz?9(}{1 zkTN+V2%VHHrv{8S07$m0z79qUp@ihC5_W4L8#@v9um=QUyLL~5NI2(Y4F~5qnHib- z3V3P)Uh23SHv;^mCKDpGmLS*JVKs0ttupll{_a*ed4y;?DKIF!|(W{(Dn* z^4%F~_09FhnwDqZyG9za9(l)G7S;wB6`ZU_Lx6^er(^qJ=U*ZKgw{(2vpyiu3`XO-z$ben#kZ)dPv&Hcb4Ia z)hkx4$b2j2!hB3ur8k{rROM996%A#6e(ST}_})Bb&z>Dqrc8;YOP6N(6MrUpU{88L zcrxr3r793Qi46km*igr_lR5+GoSY-7<5?KRG^+aRo7-8)kPPsi#N>rEj|H#U#TUuB z=;@Jy!0)e`Tu$b(lghdoss?3UYjKa1=l+Pd9@fdam#`BtR^|J?s(%$X{#uzdjgQ~? 
zT%lKy>(2bVA3s(*zu8xJj+_e^wWPdat6o7$DMOoD-%F%8(z!u47YY00k;lD!{TTnX z3R#EUtjEJ7EA4-b>-pa``&{%TDjoB}VCwcVwI9NN&O_h>W<-rxrbt{tui5uG?;oa9 z_gj0dM-Be~2iByFr`o&H4oH&NKHM9d&@T)nQw7t|3Jz4NFVkW>d8}&zwtKh3P_#uW z>W=Bt(2RiR>eWzmOSH4M4kD!A@01}o>h`ohhA`pkY-x?IUI-E-&3yw~VC*GwkdQ*` zbU+9U^=-ps3zLw6ZSm>9eR2Hwhil^>zw$#~*P-R;idMCsLt7!DXo3GB1Yre`*dhdi z$!Lej8lnw#5Q~E_Nt#F1bKVbsW2Ae1bhaS{Z7k+lp8GL~7o>Yu;Weyh12}q0^bB-G zhfXN^dvm9kE=#-Fw5z1Fp&xgxXm6Tuhu4pJPJ2^lZ0$yx+c6yhlsJL<_(#nLb@QAQ zahimjOeh#Kul$(^kFp0w(dUCv?xV=X!7=%m4D!}Uf5J9Wc}(=07(-*4A#!`&4~S>P z9J);AJYv0vh}SK@+PM5>zRaYYW0vLmRLeRqe!Qc7A!VkO7IHSwS7wpP>(t8#JK6YYY+%%hOEG~XeVsbAT*DA;RZYq zb?dgg??)3U5ih+W4bVV8Hj}Rmi1kpo;QB04-v=W!c`||pHIM90GcVIv!#wntd%qu4 z!%ALFHEpCo%DcbogU@9r|;4&p) zLlWLd`gXAJ46xYLX`9R`!2OsjG&Q%y4i>YH)(#d`2^b`PyAfnODx$hlF*;EURh=q= z-uihz_IWpocubO52RnLfC=2h{m|fpjJl?~nvYpi%Z)(W47CCmTs#N>*>PkojMd+YN z?x_RBT9OvL@^$gLL#eeoDSFX%S`D}WCeMCNpP(J;F!N{_*%DJ*`{IP-=f$?IJ#oj1?lhyCJRLlViOA*+ z^jA+`wDoU@c?VC9?b|lQ^hqOe?7>Vm!v7 zqjNHNb|gAlhN5fBy>Y~Gl(VBh?pV#iqxGGbR-qN)T$hFOn&aR(9dX;;D`WGHAvWhv zitBH=Jyx!~Chp(1EqZfPeqYRP>WyO;&t`fL$4z&wiH%K|s`YG-1=BFM#oySrfsL_X z(E^y5p;)?nGY3FUfgxdC(2Rr8cAp@NL3XKHGa)1e7df{M-1t+GU)%5ywDhQ30`!8v zgv~N*&A>4dstaE8e3@pnAdBZ88h0X;&GXF<}t*wri79eFDNs5*q*mvf}wK%griS@V*Z;qjI)mdDbH_77ezAuFv(*DkIF%rk>tdaqDf--Q|8N z=FimTIB5R7=wixFoiZa%e$rFonqS@!?Pwc^NT06h)mVTCnqkVeGFNWD2SJQP^8m5n*W|uN2(lLz?{L(dDU0rEeqS>YvCZ=yBzKOXbvF|=l8JJ3&VqT`H ztWjEVP-7nEWg9JDM?&W3zO^Pywq!D_k4%T(PMJEDrJEgMd~3bR-qOZ$UJ7Zgw;CGh zmr3jkHdwB8Hy_iNUd&e$e!;%E@ zglM03P|Q8}Pym*Vpqy6RkBP#TZChj0+GWu)&=`}DDC)O+SHW_eU|yXF+Cf&51{+Xu zROVrttk8p3uY>) zIuIg@ue#wyy5mWYn-?E^=Tr1%oZ-e>Bi{L!pNZdo`q}Y<7abNKdjIut=gqgo^M2>d zIPazN;=LdEVf@*fpU#;q5zqPUL*wM%Iy8Rw4?Z1pr#Hp>E`4##oIy6c+qf;_O>g^B ztX{JwUia$P#>pp81>&bnF8%+*pAN-`Kl*>sPB+FIUjOQN#%TqtQI^D)uG|#=@Wt=O zlTLbKyzw=M#LLhBdn~TD#p$P>9PfGEG=+ZCA}nOcFrKVbnlKCl;WhTt%H8HFM};9E#hu;a!VfwbJR_Nh)9U3xnp~G+KV9v%*+Te&Ij*Lzi0BIh-*Tib; z)~#{uvBxCMaOfQ!4X6>Zqf$ zP74<<$~`LPcl6OqYT6DDBrX~qk9U|eXAbo$`QCZwotb{r3Z~Y}`*Q3i^7_r`0b$jj 
zvvQX~WF+8@C#VY{4U&PGX!M{mCpXeErJaLlK}Oo9v&r+6*dA@Mv9BT4b|Dq*oD;K` zoE*~^o)E*$(?LodnSNIlsF)ust0yS0)~T9UGAZNhwyXU1_dDzV;2%;Vkajfy4}Ld3 zE&WMbt6#NHl&h$=yenf-?v3-g#67mH713)m-w0zWL6+-IE;9g@n`>DBlr(E9GWyE} z$wk7&siZ4Qlgrco{Ry)TP>$v2ef3i+?vLmE)t;4>=dVgv-tX^v@3ZopNBLu2X8a!Q zkI@AJSjn*qug9=j+i;HM7=?*OOmmE{L#m!`bVw^fBD}P)D7L6YrThaHWI#NbyuJ_3 zoqI!|qSO3}y$%V`rDmEmxc6i8=C0VVtvh;X!}{%gFdqFpcOf5Am_feikPj8}Z5)pBX>6{L8WJ zu82$D`1m+#Zcm*5;(77it2V~Zue~K+e8FYWO?UmvH*SiTT<|$0(8KYrx4bZ>!5qEw zlCQ=8e#4hzD#cy;#+Sr&OuyP`%Ry5juDIg*xbUK{#8uxNh~GNBA�o982bo#5qrk z_{vxBjC0TbNPOk;HF5frrsK^ElPx@jwId0hKG+q9ckYOHzaCrjr4bjs;q&p9_x)>3 zJ%sY!^1Nstx+gjs*2lpxL0|duz47V`FOOgRbW5D`)P|V9U}p63eQ_VmCR!Muq8=4y zTsL4qvcDaJiQgXgK{C3lvh|{04I1ZW1ej(sP zb328QVsc&B)?xl zna0^hbu|OHAzpg#-LV;FqVdp0G3l7c$5hO)=0E=UIB3Bfw2PadZ_sekW=+gw3d~Hp zLvML%EsRTI3S!;0T5X4fyye_-%Pn#9%{ONw4?g(dq*WyBrDc{cU!L!KdwVkv83n(Y zH*a1{nlvfj-*wkr$-FFHyf~SJRjXDd12c2x%sA|@!!qAB2p!aPNK2SUYDTDs&lLkC z6Xf&UxpPy)V!f>E9cXc6POQIW%Q*Q?#>e;W1u@O_*I%D$X3d(F>CEfDGu`#rLDfXg z`?3dwIfMH0UFIt=LZT$lIdPK{2I*x0!K8~`Bnw3X;-qZBw|ISY_oMb}pB^iIaT8KP z97ZB?Yo9VbRxjTW3+En+^kGwMS$!*uB@#Oh6f2VSQkHd+s1+M3J8=VMGn+imKKmJI ztEfuRiQvYr>#n^vCts;2Uu1oEUaN+d3O^Qqv!aifajl=ny{FOH?Yr*#^3mA*P4Dy0 z@66ve+HQZ|KUQt7O7)t*TGh(454nWZYFAKwO1+H9Y7>{=)ZXz$zNqDskNfpcsc$JV zD_ASNR-y6rvZ2|KTC&?Gk`;^hUkT` z>V`OLVUufagM5kJ2ArkK{*5i_RFh?{R%7f(5TcCi@u*mH?|t#zZ(I?Ve;G+Crdh4*HDSw0w1St9J8E&X(~Q6S z`+td*Yqm!3ntS74zj;?Y?|&T;9rfEGKz9T})A-J}e~MlGU|jjHSH|g2{ll0weR3>c z(<4aGYQ~?x*}pKMWl(GVZ6EI{Bz)~E!EgAxU)6uVs(S?a1p5m1xjJaTu|Wy@0Pub1 z(%XQyx_HrxPL7XUb`{LFFpV0}`7W$GnF01R(W)`@Mc70#-Xp>(Kkz^%Pl(7O${*WN zRbfo@Bi)+RPb%D-c>*iK*w`-7gevzrWahSO=UW+zAp)Q~_DvgGWPPJrgUlP!H_8|y-1U=oX}kas5%Du)=MzRQfTZ>(!$BO@E8 z*b|~7nC>CGzF$ja;-;dl78_)hz%e$ zgb!(7ef}#hh^`&o+#`vh@M%+~vXO2eKJkf9#DD(hexb2L{(lj(fw>}-j=Hb30ZNLU60os&iA(~|{QD*-O*=NyiD2mw!m9~kaouHuA)4ZNO(WBY9DAjHZ5n?>rGqs`zkCa4YN z8*8oB2*v=QR|7QvuzBFgh(CD2S+T{VI>3({n1c%gP4dMyzo#h!s6+tA0H>4bUcnhBG?)d+hJBZrcTa!J+RSg zv=BEOUDPl4!-AXdT(vnq_Q_l0MZb4!yyf+0#+2qW~hI 
z&hz^=V5dHUdWF8ZABKee?JygYXiqbyA=DeQ+Y$B%^nmX*PitrIiu+02BQOj>r~6i! z&&)XC!@i~RB9YNi=XQJ`*+h6Od%zBMewJ%l)ysVq@A@XwJ!{s?*vuj7x_WNFJ+)`Z zIEOq`6Q~5oXltFs{-+&ys{`=)HjgF9xlM7DA#GuBjjrz1Mi^YHO;fGo67$vhUR7G@hq8_% z9cGP}VOX$WL4H4W-IrFm@4ov&jfg>pM*7WQ9+fu6F{AP6YS=c5t31=G$$dO9k?DRr zHNDVByvuc3+OGD?GTjRV-pW3!#aH=c9*O_5=!riQJ+N0jfMpGniizrk5Xgtwj5Z>{ z3m{h$r<%mh;RgjGwEHw(+ke2jtSd=?`h>eW++Gf|gVT*xt(ZX!;aR4)XDF_@`K~zS z#M5KtH9gU{WhIL%{>de*0eI1dZu0Wv;(Ug=08uy>zJBF5WJ%Uv+es(A?ek}Q4?5@|km{^-wYMExP~)8ZPHycml_1!5B@=32 zHf`RN>QLS6`EC(vQp@t!**Pf|;CgTE+O_$u5*R&D=pjQ1hGkV!49mBEvuDpqNrh65 zLk^yo?*7#7RCTaizf&!$D?pKgX|*fWJ)lw!gZ)vBO-Ag8Kc&2#1tlHfV~)8a2Lepw zPh2JzI&y8uYa+G0D}VCs_)N#VKy+VO002M$NklCXZe#*XV{UYFQfx-oS%#-H!Y+uF;LMeSizYVl3UqTa3`>@`Dk zo~udGNk`tZ(3^G$yYW6Yn%jzcHiZxOmR9NFTD-Dy^=htj;*vjmf2?40c^4c+8=E+W zu}}B>k=dAVoBK*9?N^?^y~1RF>Qk3dW>N$9x33h`(&s<_jr4_Y{49;) zwMgcn#-WFM`eCLZ3}YZ+Vp`z<{Xhc0bcSKBWJHXYX}I*VOVclZ_@C@EsipJ(^y%qI ze{^d4;`bK>@8l66dgWTNdL4K(VQMu$-Eu4UNRXEve*(gwls0bJNu5Yo>6(8a`&b9a z%>QA|8yAYH2^q0M_*LP2Vg@VYuW*)dUH#=*Mux#1s z>BOb8FyZ^DADcCk{Nj1Oqs)f_DH(F6Owa(OHVIt+h3Bl82d)vOX6;E#7Gey>=DTO=b=6fkxRe(39pLKP$~g z(>S?_HI@zRwN_vUCp$9N?Jy97y=)q9$5pZJr+ryp+WH8MG)6@5$FVJ%kf60#%hf>0 zjQDK2w2@^EUQHL*NV}B18sZJ_O)n-{mR-2u*q}WM1LV~Zv|PP@%bM5r3_csoQv{o) zm!IkO*-*-u*7$6>NHh9CA1!iMf2t8U^azwWS94+`4~*PSV&_x=ibGc+ptG=r2NaPR zxnqI@^w?cCHRIN?84oQ9x03Cnp~@Y>FaF8PPD#54Tk*!?Mz7{HbKVJQ!_G}e&(xv_ zm>H0)8}%d z7hDjGhA*j_@g47YN8L`dDLwB`pP$Y>_uP0#5#&`@UYRbs=%Tb>!Gd(*g%_s3`@6pj zn@Qc(z3eakGKh?Kzx&&Ah3|dudzgzoHkcHF_Ylj z$4`1<^7quIJ~ch@iBC)~eBlctugK%tYp+djfBW0Bw4*`-<6~UiR3mUSBOn1a#6F85 zw61Y5C)iBe_7T($UW0mpP#HqXKRCc*u1j)=7thL2V;iT9BOUgvq*$g|=!d~kEtpM> zapF0cG^CHN)GabA;4JBl8*W|!(Q{UM`QN-ced@EHN$p)z)1UvvpQIaa8b}wu`~B(l zuYFcpy_UTmZ~Yf=+h3)>`imE&H@@RDv@x34l9JAP*s1B1>sN&B_O&nqW&1)qbamcURi=% zLokPtX07}rvbn0w=vP~j{`#`ivwk@+igg39y5NLk$UBy%PMZz$?jAQ!l*hsYrex5Q z<#E>?C#tJG1DOQ1n#L3aL&cN3+AIfLCbxaaAo2ix=|;lzdXrvUY?6O9?Z~;>a!g` z^lNXsG2OA^#HL?zEM0T$b?He@K9{!JqRyxO$Jotn{|Gzazcm{FkQHYu2P^Kj%4^o$X04e9?>2E|AJszxK8C 
z+~+-yKGa}po|Uft)vwa?|MXAOyWjJkG#4ax;?kwTB<1n0{#GMUjlj{4fLKd_P~6wA4CsP9pJ5YOt9K8P4c-NO+Xj8sU1Qb zAD2{)a0g+Eg34yJ9b+wBY-(>#ul~Q^NR#KZrdPb?Md|!k{t2d12~(-hrAI&h+%)^x zl-~N@@1%`A9qA*Vye^&bpr-VYv(JIJ>q=kvHrnIqDZTzpPi8~>r1bXpe@^ZZ}6ur9oYQwaI@|tZ8)#jwDuU(OT@{8Vd?$ggo zZ@=j8(y0$QJ^jB=-NwF-ovC{_co$w_uytprhq>xy42C&fy)R9HIiV~jW_@ro+4RN{l^~K~gADc2;I-UGMD^!r zM!@+yK89s1kNQdh=D1Ar_)l)RUa^{hRzChk@B3NWyk$38yGZK5eu!Uu$odC_h7erT z+9eoR&l1bb4J$VJ!gM$2gy=OHa|>~EH~xgCCCbOlqStEQgxh-eT)$yL6b`nb-57-# z@qkZzotcEuG_7-LdiWU+PP6f5Rbvmxoawl?rrgY9&<@t{EWxSrAq0(g1kTZj*nsp= z_8@9q3LdhVw&au^t%lHyYWm$yexJQJ--ZqAv7YF`>fsnPO-o@obkExlj*wyFB#}F} zV)<}KTD*8sI`PCMSY}Lv@mPhxasXyyF|@~>Fe}^8glvd1OP8J)_1nG{8eVPF{I)ra zl}qWH?fdKJ5Y+8hdd!|ZD;;;-!pIXQc%)gzeptpn`77m0J=UG+!@b|r5>=>1U?L-c z#S`|%buEM(v?exk#D>iM#*92*-MZz|&$vK|tbxBuNMI;foJWx$pwta%0;m9O2a|hf zR~p#18o=&IJGQM!(*g9+re&V#%Rd)u7y*G8h))#@F^__HukXqz6}Cc@?hpP7em=lv7TP zF}dZITk%q{5N*s2>Adrv8tFIPbaSL#e)$#Ynrp7%9(R{$LSFjPmxj954Pi2ZN;K?) z(h4OKYHZFv`|Qx{yzFH!i?}f*7g1YOQ~jw%pc;Xr8UaZbaTa)wtJ2@*@chL|!w^$V z3PFe;!6Q~PB=#7dX{4t;J3#z13?2$%MBD@i6Sxp_tx;49!*T%DO1{`Et63E1DY1Kl z4ezb#>g5PBUikSm6{2Sl9J3YU`x*k7EkC?1edZgtpjGf3f_dq4-(H?R{mol3yPD3) z==0J?zIA5LV!`LHgVW2QktzKmFi}HR;Ds$E!}7 zg&?>s{rraQ=@~EnVwx_Y22SkWO*@^lL-X<0fBt&l#-a8ZpzM*fYU@CH!qdJ&f2XC^ z&S~IobtH5`sWSML!%MhI7%@IU!^oHrUxojM5pdl3#qi50=Er(OJhVW(kFW;yLCiNZ zCp~PlSzphw?8BtP?5YK+sr5h=R;zo^6OM!~P4R>)e@;ffwVyrGfw7=?@Al_cuj6|N zuQ9Xu%M}Jo=E!}RN*pAf+c8Vj%A$|CihLC2WK!x!crd+vW;*B0bC4Y%bO4^)ILuJu zdsH(<87by%QwzdTrtcscQ4elyNS||Pg+S@#COyz zU4|;poyb=GsYc+aN5DDnujXC^CzTU5RB$Y$;)S>;&Ix$P0PyG8Xg=?OeEzGh)OD+p zz!uCI#yF{T5~|!meRRW^G>L=;I!TA)R*kPzhH_-3nmTg>A{YUIc=(o4)+wVT5MomX zfT|*`h0WyEc+c`(1Wr|z#U-A5H+)gepvoSQqP7&+Zd@t6WB9i`SZ^|KLFnL3?dbS{ZSRl#iw98>~|?| zU4;oTho)uEfBy5+%hBSPM+WA7?|UByp*L&LaWCxIn^a<$m#KF|+-gAqyetxqP5disJ$!+xE2 zAtPp5NRX)qP6Q7_AZ7+5oYb?$!2pX|}q6*6bNwHLC#vqskQvB!K%->$8E-hQSoIctB%Rl1<0#Q=Ik2wHe~dJWcdZ zQ%O&Mw+*#F3NhL{fM3i|jcaSqr>p0>0B!J#`y~@Je?5jBOHn~Nq>;TdeCB#BfgE(hFzm_ 
z6C#gbU91EI;4=%X3zyOii3O0t2r5RkC3a8$=IxU@(o_EAykH{q)$K-Zk%!uY;$}|& z+XAc`zqHGA6T4i$HkC3R4@1kaVf}^xUdtNH?^viMQG-(I5-8m?KAw+0+iwA~l=E4? z-fk#+#VcMB(hS>C;vmCfS?l*&5Qga@7@y4}BCwqp6Y?s#c+;ES6zx3bF^>uD$vNko zlfKM;6c1ObLNx-_2ps(g1hWCmX*QwtnbubD1BbD+pothlfT4{&2q6eIi7bEEf6@xV87M+E!ZJ7`o??zhSUdP8@(11T zki-rB!TqS)GsbZ85xqpXGo&=PsSTo|olUs3jf9t>fC+IO;Or0k&N`rXq!Ad1A;yPJ z65ke^^@+gTi}#l~v4?7XRKY-CiIudm&t4Q6N6p%ZE8`RA$Jj02k6NoRff2Yz#?6mn zMRgh=7>tM@(li$7;b8=ad{H%lZeEXdp6?yP!_Xw61xN@5IjnQ68Lx6JIH+kxGY1nr z+U%xowPEBZMwqi&3I2M+TqhQCJ-MvxNSJ968Lz|NfNR%P%_ec+ptZvQFxtxI`#reP z{s{XqNAQ~GewQ&Ba>}%{@k}Sp3m|3K15P?P6;ya@#tBWV(=ZU4KaB#*uG7ZtWe&2G z*LB{tya*z_cYQDM(g1~+58A6$)@>;D!~=gy z+eM)2v;DKa^0Uw08@xAIPZg>WxF<%S%-5Pkv!+>2vY7a?_XH-gi2O=q3;;4}$l;(n){md<(fqtXgghCju;iGsKWz9BvH8P7<2U^GMk&wcK5Q#UT| zR<7VQcL4mU=be}S`jxLt-@z>FK@U1J{qZ0Dar(dq{yAN8$t3}>&tb2Io8C3!TD*91 zXjcS8$EGk0j+Fzbq{U5e#@j#dmGWG1+2!fUPka)GmAx|k=tn;arsR3-_xR{XKbpS# z-S2|*AT-GDl1)V+S?NL}WB*m<_M?h@&>3gMyR2EWCVlwBA5K@Yss1Zp`AXzf6|6R- z1EeS-tU@&c)d);v1Tt|f2Fo~%zcM12nP_`rs-_sI6jJ+?*&+QM9gf{;5YX6t1NWMl z_Op1`&@K$J0ecMMMMoD+Xbi&a3piJRVtrKSvFlI(b?KrDJ4}f5gam`QK-r79We8WH zGCm4qq~0|BszD5?{gB{?go%9OCQUh7JMoI8+E{5P__Y~^V>G0vFi+r7O+H32N0r%7 zl53-75g5cnPy3|FXk(Cu!j$N;v&FXPcWVn9eKoIA>T3OHg~m9ou_XjVG9GA$AW)m? 
zLM(?%-eeR_klvOuvE*)nJl1)nc_p7IYsi1&w)%6FBe4Ivdz8iF8^rkgEtq$*y75KS zY7iH0teL~%w%<7etx%IWLDR-(=Es4?sTWfw1bZW#HzAt~)5;9y=bV?(%-=rhvxC5a z{jRrU%)|nf2vSZU=a=tM#v|yD=sy$(a5~I>4d-<$lKy@eE6uZ9JDYWq!(46#X7!+@ zFhJM5t(b(%I{DtN)g#R3cC{-oBMM48s7JTHYDJ?9v{mSW!jKrRfUyWNg$3bUxGRKy zVG=BBT47WyYd(W@$RtQ-nBTmGIj~H*H-BS&#`|3ADfz6k^u>GY@VVqI_1LCyrHvAA z`-O?Id=;t@nCJ*NC;inlS?a!XTTH|NB*59pS0=lelzg~f!~GIY#_5-3ocsl2tN5r9F+k1ATuYdjP=|BJTKY|HxFy43JMd>xKc}@DO|NZg+AhjfKf7jcB z_`mtao6?6q_^;{3FMe@)FieQ5=C89U{EEvjV=?c?E6B%#ae2?X-xb=GO&d3eif9~Q zw=FiEfm~$(L|8m~jYYbCU4H8==_CL4;qY|zu(KZ;>2na=w!UW)tTQ9r`JOIrxTHogf~fWX8bI@NtJY((bVvs z|L{RDK)w@iO1>uJxsg%JR|_}>Q4N2qFp&|6X)uw3lyPm#)5otM!{Dd!uQS})%`@K& zKU3V1hwtkg8)MyUh3Qs^*uvV_*^Eou!Jag_V-xcL;+*;3N;OQen17bF+N>+fRgFMT z=vm9w?pC0GF^8`+AWdM9A^YDx9Oe!@^I4-W8hvE zVc&Q5tdyAPPFZ4G)RqFn$Q}X?ZF>JTxU9W}$S>($OK7||f1$f%3QB(S7Fx{v!W4Ml zn8)X`{+D~NjcqhOHsrfv)$|$+?`g+2ESGIKb6%g_ds5CZDQV^vxmb6(FIV%}Z_9eMPqyK+ zWlFzHa}yz#$XC8rAroL*slEM!W_cMMV|q@GFB(;m#4w1IbCOWfNh-FQl*j6#U1)zuws?zg?Oh0HjF=@C!jL}Tf7 z)MLLQzr+o{T$i_*F=!4>#rjQ$=wYvg<%c=HL@BcZ8~3%aQ!U!EO(8QR*Tlpg46b=x zCHmbgY>xa6BVlp|UagN+Aw=az@)z&|VTNgqq*%*vC*{GxXo)0J?U!#LBW1t*YaKQA zmUurb6T?`wMuqfP?@81x(Q3M8Xht|5d5ofAQmQbK5pYZ=8XWn6JLZ`)H0lygVlFzL z`Mxn9o%ei4^0>;YF?64&rdJ)9T-W-0(%81OskZHwG;_~w>C_z?)A_UJ$M^mCwoPfx zWXzJ$jt#(sda&UjbEs?fjMTFSFI-LzYAqq)*Os0C#OL?Ry>q?@(*t;y45nK;gbbF+ zv$8(#$19Ok09JtWG50knJOqD;D{3;$2p$wNsCCq&(4zyIYplP#?l1y|5jU>WQp{6_ zAX>@z07@s;-Y6G+EpJd=s}cAe8G(RZVRi31y&c$uIS@+_3S0v@z;0ux7mcVhu>Qf} zMJ!_hoS7i4S3!Jc09Y~vT8YhukO^0p69AK1n*i!oT=c4{WeX%k`p$Iw`W5NahoTXv zWu_psBy)kS7uBt$GSC13?I*asOE zO%o=;T-aaLuoCl`NYC#egJON=ce9%LyS0Vny<%gUh{UoIGcX4Ob>>AR*4ELU`s>U< zZ1y5wZ1^Ir6_W((3YJ2jFk+QeX#F7Tel&`-uU?C=hCieksdj#Kh+3SBXF=0ka(#f5$%9@ZfF16$jFW~ zwC?&4h;&V#ou+h5OSNHe4Pg+1OU5X?0s@iRzDty~iTi-LAl`vTO{Rii34#vTq^$T+ zf@z#+if8j7pQ$qj+3sGn28^r4fcOtrs63lCs`)CoRpv=7AVKF@9XT(=mG#4(ge05% z)}5IK`Wz)JMgQtJ1m!ekGJ|9<FT+G8_K@WFlyx#b=vrFwB>T)fSxdd)+VU%-uhMDVm z{-K_h89Z2}^40 
zbnmNRP0NgS%(GfT1DNNnUbv(gqUXS8S%>dN_p74S2vj4$2-K9yiz+0D4M8Y#l{gLp zJ7V)6;{?7;5CiUk1Og2JoB%V6kVvPU#eIN{Yn~Uf`i|R?fME6j3eW?E$N=+W7E~A! z^5Dk>CI)i=S5kv*u-MY6KnR*Xg-tUxds_ME7@7(zF{=xrc?0u+9$|*Ys7GX>ukWJG z7{ctyeyde5*iSwsA>eN5r`2G zZK%}@zxkMbcGcL|uzfAMhQ73W8(LBDRtIipCD8g|M1~PAG^wEw@93@+Jfyj*1byCg ztzZT%Or(nuC0YDI2nFFSTH~1tb!Tcy-XJeN0)I$2nHxN*Cd4ayX^EGJr#*ux5rSvW@r4*37Q}D@G%pt3MMPff&vBE)yv4Jd9$_ z+xoZQ+vTszJYueChL;&n)!PD76QE{c8FDc~64DBS=>$}3{dmi2>5csynpU;oF-v`k#Iv^8{`BV3 z8n(%qkPm`hNGDBGE>AVqwz-+I{n|m(L?8qgWe9RnLVFj!z||PgMs`t0G<6JN$0;GN zySZQ-L@+Wgq)EgNxuKFejPoafZ!oX#XjwO|d2gIdL`nDFIGK*(6+;b1l&P1OStw&= zUZ3@brS_vRE7lj<6TEihvbl`+!!Rf$=Kez{`5yLj*t{}mjtBV~zft*awj1vnJ+A&# zBT$XNT}J@?25uSQHw<|wUZ%MKRd6eTUqu?Ma;4i|4u)jBtCQULlkW=8#njDC^xu_A5VUXdGfQ#G^ zL}ZJ5F=9zC@*HXk0x}o~>ItTW+?vGaO~bk0)?hz0y~(7UHuAo!`M`lvYorX3#rMyB z$c%^m9pQWh`w~6_=1-PjGeXqizbgFqj6l|Q<@w?4bT&9| zWlfwXelKG(GwV!n4#;TcIhWXT)W&>iWgkWtYjQg-^*iy+Fvh%58#XNAA3x_0^9GYA z&**6xVNI7ApsaIXV!?7w=u8tZt9W!y*4=cz3$RNH2NC2=9 z-tr-tm=*vtksb#GaUV#@#R%<&%WK3#7;AzB37(saGc>ppxMm9&d0GMP zQO%WHc>O2h2qY2}Q-|0Lh6T<;reW6}i1*MMXc7@9CpLE5w_rlJ${a}SH!=wl<02ag zfA?1yyf>e2{M1HB%p1(>uRLp8s&@@F67C_fOq65HN;_Vy&o&&h5(-1$)%r~HY9G8B zU-A@&VV}0iU-rJpyx5NUjq~36yf?0d{qGz5Q}R^TY6Pkg*gFDa67~`(;u(Y)$73ei z%AS!uxbYkjPB}+JOs83iM2VL|>`RceAVe`o*tCa~Uq(a>rR2EIjF=8al;9q#pQDk9 zKH7~v9JWvx5vd~gT;x8gg^6~7^S$(1U|yRctAY|UOE}uMFg_t(2ElwXJK$P6Enz89 zW^6D3;%A$Y34wuWhU=F7S3ov_XMQ}EZ@X&IZDah|9Ll6f9GE;2p?u1-A_fG{;~nOW(v#N0_>tbaQ$OYEf0F)&WkWoD7GH zsfX?KrH)BckWhdQ)Q(W4=3Nrnu4gSU1~Lm0;%-DQ8@g@31mk@6-OOw7ph6LqnukK< zm&1rOx$vN$%s@!|Ls*cPH$~0s7|?s_Qgv&6MZ;kmwpExD+f!p=zZ^T$ZL2UH#+fEF zWu3NH`fZ-Vgye5RpBw5Z_r8nig%R=IvgJKZuR=8f)d=iA0un92iXNeWmucXV#~gD^ z8p7q%NE;eQ;#%sPtAVa4oUddI%>)+_qXfB8l0m_YkTqPGO3o;pkdTp}0k6f$;*xh6 zm#8_&bp}g7j)Jp%wuH}A9#`8EciC7bkiddG7IIO>1}7KxshK?`Ufokk)1zk)nq9A zH$q<5YduMYmW1ipf+_SAoH^`fu8lDd-G5=tt(0j4riLjqO1m=mckkQF4CRSsP0=^| zMgLsKM`Io`_i^dj&e}aHJ-|uo8_?Qpn>ji4x*ufJ!wg|Kn4?|H)fx1w6F0qj{&X#t z2C+Vwb+tr#tqVsF-0MhJcPS}^Ef9x< 
zfyEl2i#%Kf&LDn4HWJQiE(TrDQFn$!444ms8pH!m$R4rjgiD*cxZtpSz&L_S|cye>j@eO4{{1@|tZyQ3v_~v=uxyIyl%jWganH`5-lE1AD zK_IliE(D3)Q#w-bZe}+Vr)bfFHgFa=k8Qs`f9}$^BaU#gId2X2$$G7tHhtpJy&*ci z(G%7Hg3@04_<$#H5b~mvShv{+);1~a+0~s^Tzgeociq(qHWeIZ&}Gt{Q>+@(8$AuvtAzx>Qd}sK=s6f(R!9 zmqL`Q0gz}Gs68pX0~O~WDp=v)(@SeeKCILQEH;3{IE0QPHeG9DX+9(h#JWHSO4F6E ztty$JoSWm!2l)=c?LsG#AwBj`SaX{CCDSbO+riR3$&EHV8 zx~)dwf6NF7k%$Y@j!A9li<|C9?ZDUGJrs6Rm_~yNe81vv{wA&DaM6xQT^Wx7lkGUq z*v-MIC!e$orhhQbBf9GHD}rzt!*;$^tuZ(-JnewoLc@yiLt%=zN0>GpBoH`DoGZ=? z^GoqE&*D+@hDMg`amEt3!Z|976bo-Y5n@G8KRx(y;%# z9BH{KUo`^%lSTkpMr^_!hM?bt7F6I>>K~H-jI6L1M@tLgu>P1CY2BWkAfaqpD*2;U zatF>f+NMuN$bq!r*)-Y{^I9pQfnt}z|nNF4-JG?=Qkum+Jk79;SgpOU>IBs zg23lqK|(9%ESc`_Ktzj+h_>vhiTYSXgh|UyJ9bX$>FG(Uw|9i*wU}pGe_LB;wB;Z1 zgmozS)?8NHVLis#p6yQItcUHO;0=Adv$^Ty>~+wxId4jV^10LZ`+yZP$EyH8M{Z|tu-9S(8=(I$9@*&1}!Zd zHDJ$RM69QUBL>=dthJ4$Idg|oZw<%5`7dm)939bi~u*NrQO~wB|(OJ^ZKiN5kGD z4i@r-Tnt={2JkMkr+arGlMz&&A|F+*G9dc94_iv^-6%Q4Z`cLFgSfD-X1R-YMNYgn zQUz_Av-TmobyUeOBjVufg;s`+M}V0!qMk_4V;%J!%^z!xnkQJkp<8!pwz0}OO%uIs z%sBp5^~~eXjhT+*p6^n=&G_O>TwZ`1dvYX|<%xJ?IjdaN2>kYrfYRB4K)!nkS_Qz; zGI=vK7zXcIec-!Y;G3PAa)OSUAimW~_To&VdHR%e)onMYX;Y`9E;PiWlOb-PXs`lF zEmJzvE(m;`Z4~oLRTDBH>UBH2I-z&)&V^)E;#^l6;@4r-7m#Zm2zkOu28+ay>0E=b z;l4&6Iy2&boFR zJgQmNBsQBf9`0B3+K1+Q52mPnoJPHi`dTJaKSY%b%cRNJ*K@sf^ZMxb5b5n4+$!#I zZP>%cW#3s1%^3CWB(916s73evJg2GO_rylO1CzMS6xGH(3R+JSnw%l?(ta;=TpxWI zq@Cv0w&;KC3#1Q2NU9yX7S|UvFP)5e4d%~-YCU_YjeGasi3MBO_hZN!DZXvAGhwV7 zb^R{?$s?Elx{kY=`q!~`%`41^j6g8l!7RAiGxp4}T;^_!V8AIX4)sPaxMn90%x616 z3sMS9Bf6HE(GfJ2Ou97Xgqi91h0B;Pv*OGUXfZ4t_|>^287PcAm{SN}fxBJXdQ#^` z%%^<^_Fb7T96vydIDa>6*pS}yPyduQZbW+sj2Y(5o0n$IniX-|ySuX%60M@9+{Kfp zW0pG~MXEp52;8G1APp#fEc9WyiieM3KjgM;+X81B@4PL7E7zT}Z}}eW&_OCYAR`hm zOo4JOW$kQvk=U!jA}E64zsk=#;j4j^CIxjgV45qk}Yi=O3Vo=%`8DcXS3yPZ^ zMINU4BX8JL6Ct{hKrmGkEj2A79GjR>$X%FtdHQppV6~=r^@AJgO~rlrW1M4g;M97q z12<8sh@!U9$bgi@yr3Vhf|C2-SHF1`su8%ajezhMo3dkmXE+p}h+_&PA|W!WRFj{k z9hNbtL`yS>+w&1xI61m)%9J?#^mxpf($9aMcB2iM#92m`1<2!6XHA#nF+417{&B~p 
z1;;LkA(g4vwQE-p>Q`QQB}$GqOsq}_BBtQm_3PKCFi&D(nL2YuI{x_MgJ}@* zuDRwKq|{C6lv7U)M#KKv_RTlntXd|`JK^{=b?Vfp)BdbnxiW3vzMV0a;DZsNZQo@S z#>V}Fn!!z*HZ9)4MRVQy4e8E1?+iw5%CxD`rp%gVtOEnMlBLh9R;^0i5P}lGGAn+- zaJT}KqDk^0@o#*g>d(POKzOLH^xO$IIVT_lY~`O(vChTXo#7|e?I@e`Pv|f8mv2`* zZsz7mP*vd^g5JpPcgYEJo299Yh#|aFmEh}gDrkA|Ij09 zG!zna!UPMzZ5+_t&dW7(W{Uo`Y6vT4&YYS4{onsR{o_CWW4h#*m!?<0`qk;Nk9};k z=Q{kw|N27ue;@i#Xh^5URB_9$|tKt`k?-#)h%@WsY|fcfy~0C@ynP{0T#0%LfgK;wa| zFycf6rn(3lL_jUN<&7nV_-0-FAwj?kjOFqsGnKiS(ePj*ybr zx|v8Qcr30HseqPqDA#=}u~*uI9=vyT27+-tV6a+}4w4$Jmxd8mIb#dm{HnM(;FAC-hn&rS+i!RJ-d6;+Bt$BIeGWn~pnvQQEm{PiQd~Ek2R`w0qKRx2+7bs*_GSDb1g|0CN$sEmEz4;WT~f zjI?yg($sxtce-Qkx*+sVzu*1S?p?cs*q52G+@eKG((>DGOLONfKtnSn7AE%r7HO{i zY2#h``Z=s_%Cxk3^X6baY+o}q&xYzB7)m`mcBQG)W~41!x5a`sbJj7bd;5;mi`M+~ z(@&3lH{5u0jG6mBe23eXFOM;6Wn9URNCrslL*w@tEqJ?GJ1SU;f;0=)QRJ#M_u4jrUA zLL^E&a|pQO^sRt(iHFt!=#pNviFo%K8*WLH;0;=X5y?!G?==X~HVEC)x2um0>_cff za0MZQhJ-_yX+}zf2_5jemvC!J3l|=j9{+?tN*{vp_~tjinV$KKXQW3z`cdiApZ;`c z=~`iE+-Dn-YRXrk8i9Lj1jM5%vRe-Rec%Hh7)*!4Dj8#$m0*y`^P8j{ z56g&@1wrI((6(`J2f42UQn0Hyo#uSw##+jZX7svzdXL+!NMVuN|E4N z$tUAsC|B=~l%s7dvwt1sdG9yhw++*)P>sNE=Lpn^C*dqpQw@-8B!Un27>K!s;Sypq zwoC~Gt;DgouYD3W%~pg=V6&vn&1TnqQAE z5O>_ME**!K#5QNnoE2tGyI>k7P3jCnd;a_dXbz@Dyqn%vuU-=ih;GntxZ%cFJbHWk zqK`5;bLPwqO@W#c7a!}^-v;x}Mq^@m z$EA<@x6*Dg3owt1r9ty9>yXj#{kLuFj`9+5tJkbW>oYseV7$69F)-g6918euI$?24 zWwx4;gxhv}1O7Clfoh`ualBXm_R(PV=g3DuYTQu2`0eAzvv(!{VUZr|xKE6=U)G}j zqG-+>C64A#DI1>Utg~KkPzqVXz%c@dMBVqnH_v-n$;2Gcxy}fi)CXEICq#QP$b1Ov z4j2;6E z`tn!4l0N?NkA=`c&5+(Po!{+fK0`u%G%sD{tw!J|MnIgZ(8+lwJ*d{htEN{nA8N$U zJoC)-)1Uq{)*|Pzc+sn&;Nhca@~E;3GyV?Ch(xVGxCkW>w|GU29zhL;420Yh3VhO( zNR0RkMj!bc(uX*S6&0q;O0f;a2rCd#HpRk@EwuyZUadn(1}bx;9mNe_U;*kK)1N&J zh3Q1;h8Sp~q0}+a@=@;KmB_e_2gh#wA>T9-`1>X0%#>+-z5UBO!K~DJ_I{WA?1Veq zM6ygNd&s`jJJ*=sD4MN8H3GlIBOq}EUMcf908G(|5fCG!DIJyhkntGQln0noLa#}c zrWG~fckbGmx**oOK=3jl5+dSE&9K}+Ei8)rf`}mw>NuoNx(^9=*;b|Ycu!LoEDG)X&l?u`5{uGS%QWdB=g9F|L8 
zU3f<@8FmZ{W8+x(v(N4sh^2`f#$|5G(Wt`Fj)3cM&5ujSEC=(KMzCIo5FOodL9V<>|;*3QNBq6i>rdtA3?`3N{a%N!NIi)a10he<88 zrorM=)7%>(qhk7z(1b_ci-YI93nLP!p#eA8c_BRFLEHiCu08}2LH2OvBI}~8l!}aGXlQ7{cQuqKAFZcMGSkY$G0p|OJezj7V3|YZ zkMzS;Ql(cT@LM(l1)ky>Kv{nQ?qxnkFyR4>ha?@u6!#8}_Z&oC z=oN_T4xSYh1dLK;@zB(ZZTbL2$T4%~gv3-+t*PvLAR{J3XoI=-!z8#5BP6E4Wsz!m z!0AFXCo&wiIgI3g=Je^2Pr}AFcI@a0)2#&y7Nogz=Y%`|FUU%c;TpND0958p@P%snLK zJ-iOXGz_p&e_F?s7;qT}>-N2@qb!6z8#D(io9QJA+c1f<9v4p2?E4(Xbo=)1s6)Gb z>+xOvNq{QtHrQufor`=Wo~tmi5fEHrL8_0V&oQ3dVjzBge~R zG*M=h{YK16I36h%+KBlB1p^&oQOo;t&2}Nkq-}&x$fWNuvMK63|dcqT)kiPew z@1~n?x+&gq2nO2o%V;H@t3TBU+#4ex?iBaM8pQk*XBQgJeTHhLFTecqU_6|^;$YX9 zQs&<1Q)8V4Zm=9s0+5r8)Q5Z~{)Mi_A{p5BjS-9h(hTzeB8w9+tVOlH;{)YTr@=iV zquA#TtDZ+y-h^8UwIR(M5`*h(=-u`!S<}rYsqJQ z1qm5f;^Ru1d6iT;CdFGzsC21Q?=Iy8b@SVf*Al$5eLBNfWCq4OzE8GsE;;x8M)v(^ zgYA}IDO>t-pxR73P<$y{U8@oJpF9Eqk4)gYTqC+_Tmc>9#d2;)9T@6QGp0=e2GL?5 zjhB#+*mgs72lybo1oAwDE8i~6F2o0cr+^Xhs>sUxJv(=#U3+$?<4!m(%{pdwm~2Q; zOAwiF+qOHARLx9_7M&3LF_ca%Uc5N=Z@7_LJh^1a;@E)R3i5O>gnKu3pp{V@;BUjG zjlpOf$7b-IyLYACy?cV_U$XSXv-c{~sTQB()f>I|x8BwU0Xn%DO!e9Qeaea{WnewB!v2Nkzn$VI=az8A4!A6){ zOv31lbFB^Yz=xi7R(jK$-<&@A$xjB6*UMaWzUfRtI>Ytf5bJ6+_hKxnKljcE6xvOE zE$%IJtF)o}Lw@|@A7{-t^r?9Y<6=3VOZj`JcapTg6F)So0C61~xDbG;Ln`M2>c0a9 zqlIU#VR`_#T8E2bJJ39WluJ{FA@+w1$ym5Sur@jM1E^#8(RbsZ0NnxC(vDQ3udgR{ zagga4z^-kk1K@#%+ER7^*rEd^U@j&uirOcSV7IADOvDp(A_KMWBEU=ZH-_>N3Sl1a zOdek&e!Q#oTh;~I4_1PCGm#+D1vN1~ek{!z{HUeQ!i4y2K7&_(dfl*$Qn(Tv+ssUH zcp~+bNfHc4$;8$AbVcd4OcRTC?Ev#{ohSr*`1OMf)nt1ID@UAS@l!#Ujf4zWXO^%e8W)Z%af^pesqQ)ru zgENKhq=IXN1hel6V8cU7dZpt_m`D$1{k!jMF`~FV&8-1s={5 zNAgX*uo~m4h1t#d9-U>kCjeWpyc#u#)1WW9orC4bb zLen5zO-WFRp)XHZHpydI1cw&&4AER%Jdt`iGK{*t&Cgaa+A)eMQ?fD+iCN;l>A$@n zY9OYyICH^QtrO1~ZzkskX094~hBzFHFr3-*4JC^ZkFu{6i34h^&}?j<|GjB`bav(R zI?3A(Y`|GJcW&?Sb!J*-%PuMi&wJ0mi=i6wOfrn>R*husWSAx9CoRP&3JM+LqJUO3 z3=^3twIQE`gJY|C5hrrfl$7nlQOQ1Tc9z+UsR<=lKNADL4tFu5OrE>>6<3!=Y1eebiVEn57bVH_G11B8Ol0xfYBQ;jwo=~XRkCQ%+z;jfD 
zq!yQ*HU2jM*c@(N{EM&cZ}j-PslXJyr?vjLptHwOH|y860kb@CAf_kcf^?tG;5QgbgzZKpW!m~xa~OK+_Fkmvi%Jdi z4H<(?YYNc=#VmS#hO7N8{BOClBF@$9~wp1&P9FM9#y0eR4pzZUNgx2J*Q5 zHK3uVd3Qyh{{ya`mf)~9n0wr7erQA|a=F@v-_^aHi&ljIukC5E-gFv+FzzMJFUnME zIvBP3agd1`Gk;{Tfwh|b3WqBTf+i5M4mDcgN2)i-%)!Y9d7rcuKhyWv%4^?M8kFiU zrdKoMjZ9}J6F-JamNwIC6s_0vGRWInnHw43fQx#{I6gw6cm<%(F-!!Ne@P+|_gNTW zt&_Z3>Xi)KUYL#Wyy87fLk;05XQxG&jSRL%pl4AHGZ6n0%R%cN0Jm13ik_Cf)EN8u zbebN$J0{qf7F6`ULR1;K&x17cf{q^X0=KY+U1G&cBqWRRhRPhZk28<`H{VHCd_v0S zrq}Fif`8f03=luo-H&VZuyvzls^D64(iV4J==2^UA3_d`{ug4mf};K!Q9hF%Gt%RH zn03#jB@5IFJXaH_95Y@hgTdLb3iD;A2|BI&>}_OXe>OOX28xjzNc59VqodyMZ_3zS zB@9K*2@#UCiIAHO{lUHuXn3nq)a58}C#hi~+)J8ma;Vix;|a8bWV;K(j*n3MhVTpD;HbDx;K+>cg`k1{w`+&g|{%gqx6%6XdQz*(!^Hm{rf9zq0(fV%vOXh!CCZe zi#!cm;r@o%B@2mjsUgExISI@=BqgIRUi~s zAkB9j^GI*`zpYO94+v;Kn4a2wfMeyEa=wmjZx!km_nvX?RkdNRO>2wATmcW-#8979 z0OV-oU=+Uxp9OOqPaUZYOQe6sY#`D=(tWgB@yA-A%l!L{OyiN9e5&s!!HUMCQsf{X zA`;?ghb`S^yOWI9^}q+(i5}yo;{vlLOOgOBwhNhiJf=syN516^OF{3)S?_R*mtq>* zXxd~N7ysq<$75@0?%M&)oFf0gsMuh^0OGiAo6-DGQ*wZN{~!5u6fnRMGh$&zcK#x` zF{{Ews9RP^b05cxGEDhEVt}3pF0r3(M(9{Dk{I$GlAx9MND(Oy$8tC{R~%As;J#?) 
zqa=(RI}(kMmPHUNS?z-x8GpLpdHtN?cK@TSQo&y5ilXiOz_!557asZr-A6~ahVl@ zrf7}X{cog~I`mgo1~6w3=_X>Km5OyI5z2}A#)PfdT*Wq+kD7Bn#QP ziaC1?d3`6}130QQ_l6JsV+uvN{jp)keL}9|%8bc;38zQ6@GT2pL*h^rs7wRMYJ_Z10X4OH-w_B!U>&fc0FuH(IIqwpbrb%N+1;34uUj$-=bi^dNX z&Y_zmzd*gd@JAdgQL}5~t<2su0(x7|c-xC(PCIfL_Zz-ib$)i904_+lh=7QNOoqGK z`Qh|08frzaSg?$&B+#|ELI-ei_Cta@A*LN^PIvF&z5%>o@SW?j`lK)$Cp?t(bm=VA z)V}oHZo{a<;%b%Xau7lrTbtmOV}4mhVk6pbg<(r<{NWMvu?<5;$j$s@=)>UTFj zr_ZQeoiaeH;zI))Gm~gcyWlse5l!5|J=1M*v!N?0{`1-{_nLYT?^pIwVElF;;Q6RC z8Wlxzlr*}F6kZe_i7`k5s4J>7b08KEm(6A+W9)I(l{Q2 zA&eh^;CTWLm;-9o-G#`bTddOix^{P_%V)Vwsy4G^(d0Uu|0!h2wn@r%>pK z8qaupQ`sgwiJ6`up8dCu{QBWf;vBQ?&)-{fG(vrXZe54HduFdmYHtOZb=N3=SYfOJ z>!JzB>i+ZvRUUuL28d(6PR1>er7IE41Id>c7vqlz{cN=BK`o0>wx(xzh>)=U(GGt= zKunUjnvN5c@N>PDU!pFkc97v3Mf)jPNp>A(yy1|pbJs$rU_aoBD5MxOZrt*pfA=Au zK0|9H%!+lkR=07!Uu=JnC@{RsfeL9p9JQTa*GNy9zpP;^|4u}%c)L&4Y7s*BBMm-O zy!HvU)Bc01O2Y*Ao)&brK}+{F?%ni&|Ab)X0tfvk-B$qnG;#hjtr;0qz(3J_6Zw(I zNRmtM?N@!T37svBaQTB0u0e6#S`L!uKEYBGL#Ojp0-^2eQ0^k@+HRcT@`#UjTd&*1 zT*Aow_xAIK3G!~pqf^${_n`ZyFOCuo{&8Wz^uU0RPmUdu-=6Og^@EhMW^Z}M=ZL-j zL>^{%sN?^lt~cnk7+N>E4m5hdv+7Dt!iCeOOZYIpwBF!)(iS%!rHfnqq(F)6;&uZMf}D1P&?JMM??w(qZ(WyC*-5Bk8a@2lBq7;Sz~0VBh_Z5D?T&0 z0bc5SjOkbIIk-b(oq00)kj*joDKCkSs>ebTcpox7=KCYuc87BS4avVB6v}GSX#YS) zMYBJMH!jCfDC|gxRIC^cpuMspzLF)m>ohx>@J|1BahJ@e;|taJ!|}HO?ob;a&SUSQ z+GXL*>C9ZLP@fDnNDmV-r;7{@ato$#?5@lKP1?u%m|t)PpWc}qf#@Q4<(i05Tu-D1 z5dJR2fpalHi`vJqlM1Un#O}5jPRAGGNE|Y4s3>(`Jsr-IsI@|&1bIz?S>+Z#Z4!&E z-0CxS$w&6>;0Hw!v_x+|B-RYZ&mx&?rKw z*RA*GXs@fUn;paBsQI>gNSL$LX}Ms=)kauqpdpUix|4V}E*8T=s_`)eV; zg^eGsW3QJ(e4oc|qWtr2686rQjz4Wd@vlvm<8aVyic2Y{&5BFFy~|##-y7!1hI4L0 z&tr3K=?MQN>Oz5dWMLG3wC2upxV`Tkbp2@0y(zOSvqwLoNaM}4dk=dNg_Y(sP=A%5 z>Nv;S4zzJB_%++a$4b%UddXx6I7@gT%Z zPX<08?VH})?yqnSqK^K}Dceh7oOHea{{*@V6dSr)8frQDT&>_?s&N|mpSfvoEKFqD zjk*H|b`NTgS=zjIDB(knodzQ0Kr@-O68aGCra=089x4cUYT=dY{U(nVfm^JLYQ)gz z<)*+$L|+_pSR>VN?;J+!j<2m*%8ql^t1tnE?2`F=Pnjjg6U=_qh=Gxni)I0gS(-Z% zc$^9OhY8=gpB*zMhz;^EPc#Augi2DKV(&@Fj~mMe1ez;a{!Au{8N8k)DPHIaddD0w 
zooaZp${k=0->jt=4a~D73=Ph_K2c(Ah?rDG3DL|>?Mc? zPHe?p-OVoTXQyrqD=pP}-@PWI5~ASP+nJF-1RoLTOs2{@skgx8OuJL1(##*01yG7x?*;)Xu$U+<%bIlu-%okTB}UIsn_SLAe!9J%}8#X@B5kc{QK1>7);NP zY{28de6oh#)XgJmN$thS2dal*$KD_MLV@_f;pohH+@5Q3l#x^E&j$m8aJC$jbMgw8 zsk5Xp)p@$otONnjThaC0>|B_^ZP=QcevE_iTRexOT6Ki zyW_B4fLi;N_f?2Tj8>xW$3kv#t>1;Uuwh$X3UUo*Xdy_+vEVuS z&F%k9#r`XzIuCUk%uvW;VF}uf>E`HB&lI+J)MA*6D=Unk__kt*<{T}Aib=K7{T-!n zKpnm*>ksnS6UX83K|WV4Cb;vw{9se~3*B4@qC4Cb^?zvI>xNQT8wZWy7Ghm8+Aqp# z8^_&PR-#V)3FeNo47NClD(MO6x@G1_2sjVjRh8hm&kVlFb>!%8nf2Anv$t{F%!V0K zt6a?ixKyC=ZXm@~^_W3BcdWOjhht|Sb8qBeHW^Nzg&>XO>p`<(&fH#S%CtJg>-_y;yU)6ltq=I!TS>2DV+j}&8k2}j__=@f<7x~E6OXjsFW?;}qE*=` z9j`x4CCX$|86^tx&%+P%qV^FcV^$jl*_#5Y-K#wgAoYEVwCaiK#(8n+`0Ua#)&QOcZRs!f+#Tg=M zv^Z5a1BY2VOQCby(vk~5_P;5@@n3S!qbVWN>!GQ##Mri5mtMGd64ZC_`z;V*;VmET z@Zi&Cpu{jokhcQiz*hifC@1MZ+cDr&HTMsI!AkB=@SP@ygba(8_V-`^v1)@7Lg8wY=uqN9TU%ZY2Pqso z_zi*br$l75-ymx_z-uov(O@_y(6~aeF2~}y^luXDs*4?0#`}Ai0A2{D+yxK2S4f)uXTer6xl#;o+FHJDad@IvN zV~uZ)RLYBcqo1g1bO{#NOl2F}ZF*FueEUCM8@Gj(UUY};db^@y=W2(;Pe0-)lzI%u znr5E=lHAJe>s?y^o-OOaZ|mwSuHmCbNI*Y z&YRouP!l>!ULmo88d2;?O&A7yF(uJuWgyk)KJYKw%(|IkeeyGN;zn0Rb9P>I;uV-H z)HMq0cILVc-2lWzK@_DCM%||KaVUrV^yJD0&uE~}YOzKUY>rRwBK%XG%}18YODw-! z2=duf;=;j;?7X@HQps0INh$0YdFJ<38VtAe#ab+u#R@5E2}2AoR$QeDGZ3Qwb07=d zGxo>La+MY);6KN8SM1eJ_f=ubUdLGtqWr*w!N={&ub#b@b@EFpW>0A;j;ev39jv^j zs>ohP2m=NnSd~iu0}BdeaLd3A`^?@@_!-vy9RmmE zbuEC}>?DFlBA4)tDk<9KR5|%^faHUP|EYZ@I+w;!;E+yHnkL&t<$lU{RsVljiJ>t? 
z-aCjzMTL_jT+iRs0S_yJKwHNUKdETR<59N|eIA4sffA?+A{AY!!P%!+}(kQf~vC7 z8iX4a$qqkhD|l)_DzlV}SdYueJ*agiVA-Ay`5BEs)kCU6sX1Ew3s(#eLqE5$6P5`e zR-X{0^^-~3GFtgec>IX(D3a~obbD@m(~=POrm0P>OFNvgH}P353sB}zmP1&foBXK5 z1|f&V$TAyIgtKHEBPQ#ud6t<@FN*QrXF(`&iVn@!$t~9wbQ^9EQ0RSUi%7KhmsEbg zmXDqz1tpG{W%`HSzZQyoRG^Kr$C0_`ZqcIZ+KpAm1NNQ#Rcz6Rx3f(}hu^DY9!nD^ z)m67gfCI$Objh7(NMZID6kKC1e=;R#5-ZO%q!cZ_{&R&h4Ms(CzVj@Xk+n>o*@|2) ziFqo)cWG^xTT9EA+f3EyA*0~vljbq@1y)2wDQ;a5W>bzwqm{&WGWot?mgIr~rt1~V zP&Y#vNQQO{NP#+{==2iB9Gacbs@0O9=_to4GcZL8tYN6O)l^V4u z@-ttRCnS>Vem9W@NVX$~HpKt^m9=C@{C8o%vd`5JXOxg~L`3>G_#lfc=~TUwX!`{} zs+q98?%H;DR5!WTNJo0m&|*_KmV)5>ec6DUn6V0Z>z0pf(#vb_=j;8KPnm7XxbgL? z%?MX4bpNB5_+`|k9~zgE^Lwhk7oB$Ymc@dbu7xdqZjmlyYK+F$v&OTAu2fTg@(XIW zd&9{@d(cpQ4b}v>x@}q49K@cWc|Et%!Y!QDX%59!Qj!8CA>OdGiBrfzU?ny5EyD(B zcSw57x;8^E4gX?^cV&bN2`M%VZ8t_R8uJ+ge2?{}5h7llqcjJry2NyLz~szhtp_*s zcUN6m0Mr6PO1RG}r|LX@!H7K(^lQ!0$e*rOK`+EV=zn~5&Y(oP2hzdoeaerO_}L6z z)DHxOmQ*5t3^iX=kys@o2vaGzGd!Um5w=y76@Z+#2?9uq((R{MiXbP=yA4j8RcK%{ z483aI7T%RR&X%l%okvJ-`>`{4fBGTJ`PoO$ehl_pyd)~Hj^d3PhWRH6z#@;lG2Kt> zf~e%uT3J|9aP77HEJr#fI!`ysn!N3MUw1+J#oO)@{>~k7&-q#`!GojbG$!vvsQ$-i zboyYL54d0q^c0DyQ$gp47ADLiG4f~gM-Q0`kkh_|-povj6#VNqz5HoQ8xC}lJa#zo zSZ;Ee*$UqOC|DG=bKPvW<9@<)+|10pd?STfiq-TPX?sE|pvEyQ#_0VFmn@R}sFftQ zJeTG5>R8a|c{{xS@(2a1j@;4x!|i&JKe4s<)&cd}`-p<2WrQJ;{)06Ng=&*!ei8S& zPDhpLeSVn%+fxsJlbtts8=9C`^u66T1ZsoXgo%bCS&_p<_)bo8pxs(oP@+uNB#P|c(JX$7u6^7 z_Fk9-f49L3mv8}|U(PJz3j-0s-iK=Y65Q^8At%Mhy}@t1dqqo(ncT|yjqOpiZ@&LG z`xTOa8)2~`I8;AU44|hJ_GfvFuA+&g_>#NTUsc1-04VJ0LgB!`7F?#WE$oTqva>{4 zfD+5X4v9-ecEQz_9r>9C&4VoT7Z&m`V+&gmi`csA)+6Px_y#}1;%bS(#^#IjCaBO5 z*yjU0(=II{&8yuWcac?dcbVj*25(FC75g!$4B9VeK9h_Ozx!HJ%IQ)d4Ub{9=h^`S zhM(BT+EakbgRk7`x_$xkSHnf3o9l`ZG&B11_ruO)w}qIFeq~-2O|M&UwEx~P zAN4Ab}slkWaJZ{j0F+RTnmTw#t1 zwB^kOaHXkV2RQ=G0@M~{1+GwW5C-z={&p~!bOXa7V6TP*siqv+@C^QAKrZ2Uw%<6X zGWo5m27d-h^9>N~~xHA_^R9o(M9MgotKJ(DqgC7Bn1g)(@UvF>1LRj6ziEK3a zd%i@~4{_NDG#kYwEQu>>gGC~WjcG;3EbeO+<0$I;A=iMki?Z^#4tkT1O3J}yxP7nMW( 
zwYQr}=4C}}ve8Ufgjw(rHD4!ycmVADP*^JzJJuNN8MVAedCvnmo74G&FGnE{w5|gI%3D&?zi1Rtw z=TEZwkC1?1^%vIXF+s&i3`nXRtCd8Dl>eG;Npau_{=F0ce{1{zhP4C zh#Q6?Xk=og=#z5qgRZYY%yOXf*JtZV>kd;CYzB%;te#@JE_@Bct=lCN?f&{DjWuq2 zY+p&qJb{}L{3S~xe8a#%Zmho1>vbL(XtOLq^DIZ0=YR;@sNkW~L>B6=eMJ5B#}jy> zg0?T`#^X|5>!suZE6;ptNu5P&jB9CfDu6;hvyp}I=gkw3(6v!HK*oG+p_3l@d?Tl( zHBK3NbdB*(_}mSIKQHzeW6r~MDq0By4bk#aUbRL8YI45d!76{0Lr z$bzbmNW*-M=ksNFL<)ZKQJng|mj%X#cRXWA1RV!`&sLOzx>KUC&-ca(@?WeT0Wf7a zqF*Vx*4SQcw6mW^Kb{K7JAt)q)wifl`>+8Lj2?xBnBj)+NfSp+x&SO#2WY5&bN|yB z@Km6KIfL8Aaf4MFV|@+>m2%z4Meef5+JfOUXn!SZJ#?q-{$qXO5Ya?-W4G_@Bf9^BZty3G z;Po3%rlZZF-i84MG0t`mcF5}<91p75&Mz}Ch|%U>ja32t0&;Y) z3sLD8Cjo6BcZK&`+|)ts4H%-?mERIOWF@i)kl*Ucyq2! z_bZfAo=6Rkz)n6+F^$wWZ7pPr1wr;K{Ab4 z)#tmXf|{LbKAaFS{&yn=Q}9d>Oz0i83vZ+Fr@xWt=E;WT=Cz=4FQsgBJmpV<=I(*j^-27SQwCFNck=KTW!)$it*jb0HPA~c;TeT5EM-P#)R6}o zQ=TZGO~iD6ps%@RRCja`+%NIrx5H~wW02vX#>^hQ_#) z6Vt*`T*&^=mWx72V_zU`E@+l|agnG(vz8pv7^~Ks33Jo3FeH#@1ec|L%cZPwb-C5# zc5z`z01*nH>U~TcKNapeR}@xuiFp_Vx_xva`}(!QqI}Uh2ABn*TXj$uE*u-Tfnb*VCl0@9=zK1KGzN>^KK~ z4|M+EaIb86b*WoShQtPcDVxo1t}x3F+yaIm7H(&9+sAO6VC^GPIP!qgpqmR}M2E=zs9wlD z0_F#hkgb?EI^at&H32qdncZZJ<%rZ`SK8HUD$Z7x$g%8<@?abqxI*0v|Mr&E#JMAP zy?L8(*svBOR|nF}v|Zs#09XN7CFD=rd+9H<8m|Wc_$^!^dZZvrlqlDBlH#bynHS0( z-K_>_INNnDSwOUgqEqvipv0b64%~yenX1QN|5=-B!%w9^x3Kp?E~NbXm7&4^(M6a9 z;CV<=6BzH4Xuoec8%Y-<*pWRVIK1Zq7aOj9EuYhVl*5l`Xljy)f#b$+R+|uy1webI zxgzHc$)E3M_`;?)(Gx+bMmq`qd7*`h7RkB1SfB6Q`oeaif0T@;njDj4w%Du&HaV+H z@h01($mfIaC;hNom3A#ZuHeav{u3DCxKQi2iu#1X?1h&l zU@MW9y3y`Rzk&$^)HqPwA)|G;<7l;*-t9@9bNTAXQ+1c+fkgi+wHSGY6&_xh!k{xu z&qafptoD=sMg@+Ps^q_-Sc@Q6WqpTgs2TM&X0d2M0|l@9g-jx=z3tHCl7P3sLI?7h zs1Dz^-IOgXvYIt;}RRg&X$*hn1!y+()9A{Y5i!v+I zMbettaWy}lQL33vFLZhko!Ntsf=rQHEQ#t35Xj1bO1q<6=UvnYZtH8w-#eHN`hP`}{b1qSUzvvc|-vfM)nf@qy z0r6js0I#Mg^^7D&VXeq+Sg@#lAS#eZ6-v*93-OK2J(#q#i(0F&qbtf(g;Yr=5m#=Z zoF4~zA}T;kE;=0k@w0BIY>PUE67IlM>1oWb?DlnU81RSa-!5$D2^uj5#sJCkYVHE+sI7(el&7!18bo536YNUH?f^bY{lA`m1{_1Ja zt_YE=nG&Rz*v3&MOw-ukN>J)oljsr`Mg(rIPULZQJK4cm5l 
zKOSmh+r|hpyuJ#NKKvfHAbV=|JMKupP_wbmF%bKx<4;q%>pycxsjNrhyJK{g682m4 zR})^T2~yuM+lQ#8x5e!enjI4xTQ*E+v_2Di55G@TB+;sn4=L0~bO(w7P_I5;b3BiC z_u#?c-2&MUhO&7NCpYdcv)GO&<=TV-IZNZ$+eMus{_tXtbE@e;_=#ld>pH$1D(}Q$ z+n#FQh;oq^k0Hrkz(M$W#SCHbc9Fils=_c5-<9B{%G?*ZvrkQU z2RN*A&F6kpUzR%c%M=|uc)xPn$uF?V)^xj44-#EP;C>f1EVF4~w>VV~XE>UqcGW^l zH+VbuD| zV~_cxfMK#)x1Rj%;`5ywG_|r?l^ZS^v};nDbOraeih=eWo$`k{V*iV@?T3hv;qQ8` z-m!MVx2Iqf76>Idw1RY6F|E`que5Rsl=CXGYH1_-D`EiVVY4_04^J?dzP4+A7y<#J z0c;;a*r%W>btnEROON*(W_5Kn*NKqcJo>#dOPPE6Fc|R1^U*hPqpwAAiROOFP&^{N zMorT;)FC7y{Ba8F2ipBf*aZ0&buZbO=ITc-pRXV_FHiWlG{T0-m0r+krD^r%?>{^j z2@-k8NiJ(bM&KwOPCP!iv>$xV9(Q=AM#XGFL?7q9J;NIX`cEKkDYOhCyO%dyVD#FieKAx*M56Y&khpI2R-SXx(S-Kx zjiL`**TZloFrip>z(BgKHo4(;b!HReXvzRh|K(I8%)R?{bNJuq?m{kCo6U$d5?Dw? z>td%!Cz^m=*Np@`Z=GjL6aPV5D*E4Z!0j(##po}GBk=8X$Pt}u4~vKW?ck>-B~@n z$M9al+0S1FOi&HJr>4G_x1S5_r_UhNdR|x$p!(hKUmtz?iaw7ieo}S4CD$UQ8U&yX z3%k1Ji#I-#fUYHhU)}M;bvI-3yPV5<%=sO7ALlb4mAgCZ+wH14CB5@WL#?O$etuVg zUDLqNC%;d+yCF%|gQr+r%|HFVt9QXDck2yM=gwxhElz}U(Mu85d|1LNBm&I3^=_YH}gz#^WIqr~D29^00ZFJ;w~Cvv3iH3E<>Ung9Spn**A+6FMt5s@t!mx8 zyxLBgNcVS-7AJJOBB3_2~klRklFSk?pVO@@#9@T2aV4~UFPn|M8xvd|&{QxcJ$ z24wx-jlnrgbTSJEPEAgSm>9+mrWv@z+P4nRC%tcx*@^d(KfO;{YJG;FAT2P@1RT7n zE=6!0p)zHe+9@}@w*kAyg1r+^Nz8sP6Y4O#dCDmfA&ATf+wo|<+F$;$6O!J?zbtmZ z=h>gFjFSd!eMu?ncAqkuoxXY=E-ww&E$&_XZ~t=rY&!qAp2#Iz?TV_ZKi29a;8}0! 
zq6yQ2Vn3fzF=L45&aFGmWcU)C^iRux(;qroHLyH=%z>k z)q+Xo4u12VXpqZL8-8d^UwV36_DvslrnH!{XzM;AK9#EwanNhdH`jQJEzLn&MQe-Z^eZ?UvA4l>G}q9A)* z|AAk7mc+7~jsE@R_-}H_R#7Xrpp_Ydc) z%X~(Ex0DoAKeT1J(RKrayK;Y$=LSBcJ(zB-8P`|tr%KDAkob!(L^%q47BNSmaIfuzlx6;ji79NLsF=wa{(T3#!H@HBp zK=6y$Ti=#BDYK8e?q{7|qsN0mU6Aw_t`kM;Hx1O1gyUe3M*YVJsVfYZWj{+L914l~ zV=)f81TOUIh2$ziCVCG}iowi$Y9!Y8oj&ZsluNA&37)B$wS5K!)ro2)OaUCk=$lEPR&nN{v_Udsc{bh zN!SLGyMNosU)0j0`_=ZRP8LTehdoG6==#8X=mMJ0m3^V|Uh(T>ybC(3MZNk8tWici z1Y^ev;t0IlD5jDB#-3Ra^dZ`m9foR+>%lOv;p(F!5c^U|39H(xlv z_e^VHf`6U&c{D!SvQB$5$1nTGzlY51O?uz^DBV8Y5!lYzC(?GbRl`ZFDR?}UX}*Nv z5Gh1FqW#OQET|amjKTwE%SemJCY8IRIFhaJfV;0k=toubn?hTlBRUN#J(MQ_w;e|$ zyq#`Ubb;QyLox*QyZID6Ceaqt!#|3v5THW(g;Qez;Swt~xc^MXcN86%7<#)w_%2vL zDbp1^iia5mlHnanl(-+sd>_fl4u8m8YyhT;n9$dX8rf)5S!C}V!b&25s7`F}J&y%& zWK3A}a(v=Q0P(w=WA^@LEINSavu(r!3vV$Pg6=IEDmY2?o3V}+885XIUua+6EsLR& zsgY;ExVPWm@vQqSmN*s%43-^C!m1~hpCyiCOq%i2aY>2Rln1LZVeV?Ci-tQ=Kq=#zB+tJDrGU!OtRL0Xcsopg$@@(cu*jj{apLwV{NguKu5l&t ztgSQY@r2bQZK7SSbXUnB#1Ea|4QpRdby-g&Ol+| zvc#A|yQ5!&M*Q9iu4~+T2BuX&$0VOCO(_!$;+tgUOGHt5r zA0zEihESfWWtqDdZ@-0Ul9$Qyx2Ax#>UTOp4D8-u(0~OLLovwlp``0@Lly$!n4>$(dPEt5Els+%TrjuhWPdeLa%MPmuLz5 z2PB8}EC`;?U&X`_sptA^g*<996Q1Ny{@9K0YaMDUCVrPUaxwAW&OU>}MhEP$tR=7k z8OW_gnix|~hpx3@*=GGEFOC6&0iys7V{9HZl0f07klR6YtaHm6un8&C?gG4)g!QoMxwr)8GM4iLai3r z1H;kU3)cuH=S?W!G@sGhG`B~ek4M*aZ_TxTfoES?t&rW4)^5g~?+^WzxZuN+!!_lO(YuklcZ$2HH%9IOV?RHo6xO&~j?fH81U6;Gz-F{Kv1^eP( z?K$6iRB$pLDZ&mV9+2Q!?=FaLwg1h?x#TX@Joh`!y0wHJv$X%DqY&TyPGc13EBL$v zxogVom(J+(`JWi?=%1cF8p_9wY-_0B*O1!UNQzd5M!x$4j+@p%N|Tmf^5F4y!@34u zvC4zoA>lE5yIa*-%$LOXI*4QiiQ{Sud%JBZSRl6)24go&ZJE3lpC?Mw& zWRKu{y|%yXlA6EbJbF)rH>u!vm^r7t6qWIK^Y>-{9R++D{%a)>xUA2BhU=Xm)X<6M zd}x4XDXaXeranNIm+uq=bXt=X|7I{fh zBgW&w()e5EAqLJb0s-*}l?a6xPdZ4OJpQftt?_AiCs+yzCv=>k>)sGe!B6|docA5M z6bBPVfenUyNMs{f$OwwgcXO6Qr)|72I0)vDTWYs#M1}sa-|WF*IZsfn>OJl?;2+@~ zbF{YXJZ-|VSjjLz0VvZ9Wx-UYtr&ZXOF+3`!5^X9PKvy#*Pu^}ii)nwgf_TZ9g(y7 zl5HQ>vh-sYxz;(qo$%ugbOt=D@vCNiV+QO-w*39FDc)J~ 
zMqPl@;h=8$`FU)S=$r6bY|_%hL-3nR?4|@3i>jo!Ra9`Bp8l8nPUG}eFx2=C1%Z=I zoS>wyl$+56`In|yc9}--CE^;_33i~tUk4`guXjghRV>NNb=aiHU4Q1MxMgO-DmZeGos)`vh-JOF7X* zzjC2svsK~#GQ;ci+(AToPQj9I zN+kJA!zt@!@VTh`#M`Y7?MTkhtbOJFCBnadGIw$Eczp^croAn6E;W+Yi66*hkS22WUjmQR5bL zV!0m97hOf%`bw-{Zg&3Wv`9!G<(BLRdd)2^memK*CkQC^C_hC0^y$@&(qJ-F4)rB# zQ(an6{@r51N4CSfsLcq9hWOgm<5!C6>*!?4%aD5K5%N(X(f~Fp$AZdLOIM(6u>N#( zXa~s~bQ$z*wb)iN@M?-2=@9s!o;Bq|zi*URPk6Sbq(rEQlxUHBY2or(PzV$VVfzX{ zZ;0v-#11EtS;g5T_AwOW+pf!cDfYcTdXvKSqv*9B zOtd~Px=MneU#24ep9|prZJt{>{bFbyuxQdJ=v-Cc1LMP56akU01~D;+SDngJL{w?0 zre-{?>~JZGyiF4r4=DYGLy8q?1}_f?i&?1wXqp6A3if$NfsgnTWPdiJaPi|1HD0J6`|~6K`j583)kQL>HAb^2(i9 zqAonzBFOq(|DDB0390vNM}+6HPca{ky)oNN0;6*XO1k_9JTuX18A*md? z7e}NuFb&=S>4C?aAf(zVQ;QL?^!pW%{*q)e)M<|7EOBQ+!%RZj9efaf{H=Tt{u!SE zY~pQ-iqe^!OQfK;6M#!~4&?O@c(xEegxiS`@W}0FhiL7N9S4K-dTdf!?JvdVLB$Bh zk%YXA;%j$@#G_Xfu=;mcN9!+>+<(*I=@jXf&ucuR-Uu;tqe~K2qYXml0NTpAR}!zU zNzoTqWK@?Cs)oH!T*(W59B;o>Z`z6yHuqoI_WrnqYZBUq8J?K|9I@vsZ)@wzgl$r} z=!>{`-ecv!X{$${hnk|Cp7Sd zWpeF86^*`H`55)%zH;x#g~p>Ddo*)oy0IMQIHTFu;C6U3^VY4f+MZF<&~q)Q(fS}0 z);A7^+Ri1F*z$>5F{msN^qTc(HQbMHLw|QFPLpgHdQh3#SBOlhrJ2tosSSwaK(g~hJr!MLbJa|6^AA-LM7k?kF?89vvt0aD%4?j**5tYg-;BTHr3XTQrnM581HPXLn@_XtjeR96h6Z%E# zltvwMD-JGe7@Qqrd_9bvGmb_?n_v_yDBX;@MTZm!9Jsab-O$LDRX7*!@CEY$jDTq- zkktQ2)HO%P-F4mAwrx9UY};;(#aZ?7kA*Hd$2-ZBrj1s!TvAX86-%7UFl)t#_a zn1bbF#e3dMJo@lJG6kc55i`QZ z@T9~Jz8z6QZ}M$l4R{VAS|k97_#E~as@JoOrX@EEJE9u+M1po)jJSv&XitLrWM2Q< zUEF0_OJguRBs>J&BzS!@&zq)3)EFq0w{H58wysmBLiZ3ZhoQ;)@g`%MX>k+k8N%Tp zZb{uI+3CE$1>n9Bs?(J3)T70lYtXo(gfL0eb@LUW36aU}EsqsiMZLh+sg5OF16ryZjRg zd|j7tl@H>?M*S|nUHW1S~@g=2G4r7z$NnNYx49A!!LA zF+gaFFcwZ8-qA#g9$^7(x^?=c-K8WMIfBwOek(OrA9dkt#F57`e{U_#WnLI{`Rtq8 z(N8ARQ541{@w=SRS|n})dOH5*tgJ@LG&m_W@h@ed;l}%Pq0?Y5x)`mDH(+r6*H>2C zQ6cHeB?{WeTbXE$P5OK1j@sCUTU5O*h-g0c<$4Y4##pE)@$}?CSYz(1ViU~#^0B&j zD2&iWhAYO|b$=*a)H01k#@x$i%DzMWFtY!DrGVfD_*MutOqP?VA zQh+DV8Z9kD!&8J;^E8gaLS2b${LW%0W`7ZRdC$L|ZEY~k5v&5-h*E?sG 
ze0EV#*_h|jQpUHEl_Z#VL9{;-fzDTGE(!X3}MR0?kEmd9!xbaJrDX%~+1(yJ@Hb-l;?;B(B zpwqC!yS3VDwU=!qGTT>D2}-p~!G!WK)MF5I!2I7R(Xt3MFb3*asV66FzAm6wbT!5{ zen3hMzwL0h#5fPGlpQ_V*qK7STMH;zg=F89)!F-LR{(xEc;i=U8G{C{+pbK=y&XDe zh7vIZG`ij~8we9NFclWa;KJC;W*SdQSKataV*qZ;EcbQ`}@0M_cbG1}y(6v&cx z6O;tvS;{PSBAdXz2TSAi&Fhk*Fq^d?6vFXo9SSU_V-ZK-?Y%ZT{c`WyG$E~FMlj#Q z#Ct9mbx*dK6=r^P^@?Z9&!CyzkbdjnvlGbT&tP-T!;5y*>Xf&wK(&MUkZBJ#66Xu; z-8-ugfxBc{TFQptzQx83p__O|_)eWU93VHVuaTNLw4l3cLfhQp2}up3uRahlwbj|W zGpw#EcGrMPTU8u}`33JA;&j<3A!_9+{|EPQ*aJH9Z^k1~i1g%iR7{RCMH83yqd6G6 zH^B2n>F}7j8O#vxeli|S(&qQXd!h^x;y>HFi{##<-VdHaPK^TheTn0J#?-$C&njEp z5Fyuha|&L=gTGDCh1xW7i0!`>E{@ohXJJx>+5MX0AJSdov@4hP3wC_E)SV-Rn<&b^M{iI}8u}@yoNu4^a}nF7W7-W=>XNbBvw5~g_*98N3hy{H zU^@gRA#)b>aiB~cF<6I_Ax&FjzwfEjfs?=L>M2Tq@zl)8DTiYA|Gi_#+(`zO4&c^QZ zWoIs#;T&&JeF}|bV>m$nqUcy0#+D^{v#09Md#r40XY!S0FX>Wg(xuF$Wi2xHt^ZZS zD&c89Pp?DNg#&mT*DNU#&+|IR0OcI(*{A8jvx-(XB1jIOH4B>`+|BaQP1?wFA5D=I zLwxheEcE0L7ZzuQ(>4!1g4xb85QqvjN#;qmy z9>8TirxQcTQ+iTKYI>+p-c$umG{Jjhf&#;p z7>mKws+ss(=(hRGoow;Rvf1z6`p}htql+W=I>db}lkbW0waegd-v-tGA`S^L%Ah z{JjY3$}Ces(vCcX%Qv%u{xl6obeBjru=(|QVKPc#iPo0^ZtLB;(Wt z_zWZje%>8%o3mtoHFS_T`L#@dSPN@I#xVO((8;eQcRgeK^|N7~qE)+|XSlnb0@pFa zPnLHLABI7G7{Hc}y;myS!~$0tkk z5cdQzWq)%3cy-S>kKe8(fH#VttW@;8LJ{#=Dt2Hj;XofB+FSU$bitQQrN?ZYprA{@ zAq5K20X-EG5i5-Ru>ZuFOGSH`cTq4B+>&;Vo;s{W8ECvqc!>D`n0wwifRRGQ+|m8g zrc~rNg!{sHknNhat4cttykfgzp!D38`B^rYP638e-{PcUA9FwIs*_(pc<|b!`g~df zVBa_S7IgARV3J8)yANus0N~#o=qaR~%h;}Gd6wrx-6WqydM?{;QCF+yz8`W4=vPX8 zy*Ir-({lAWgG1f+>j28GCohDu8Bcgc)ab)s`L5J>=#g0Jb!u$}<%wIgTHzVecnY693j* zAmxR1g&3=2-*HC*torrY+S*#Lx~QN(v*YLNj#mTfPBXZmX2o;Q>#OxtrT5T)Ec?KS z=-V_U>DMKCyj&bJ5SGPGZLT*wp2m{H1BEXdB)$ug<5<~@;ss8^HS93oIg(7eU@rgz z32Eyaz5Lw=s#LOk@5_^Z6026d-`17Rr6K-OG)3#VI2?8zDN4)I_%9W!V|v~aZT0=| zkQQV}|7A7wJ$8iJLS^aYQu~gxzboOYd|K6EjZJQwo~eS zHHiBkyYDnhK;*CT^#--%rtEekqoTHJqBU)MeJVXx^V-I72WslE%@uPb5Ii{GxV`vE z@tiE6ff&C^)E0UZ7wIk?6N-d05#PBE*w!7_#C^6F!|(>+fLIqHgb??=$b0CW0)j8o zoF2{cXtU`aJ`XQA^Q1wzz;RD=g3~=JLfONTyhrlf!UK57@p$l7xX^&q_}3?gha{Lv 
z$g;;I*aK-&0G1sd;EvXm%+w?tFF0cu6Gr_R9pY2Q0b7NAqgY_;yg>;RmfM(?JTR1> z`NsawRKWEIW5~zzp$W0;7{4xw?Tls=P(=U5*@3jqk*6O6W%l9QTlMs`w&G2Ah#MB< z1o4pn8Y`g9%DUg^!-W@M-It*YAEvSi_v#OT(G-4>D7nF9R!ClwvJ zCc1FHtB*QRO%8uj>8ng-t%6Inb~yq3%!06{6qOEwT*C|*xG~nR(tAk#gK&I+xSzLY z$YwgA7VYvb#kEt~$bDYQbhF)Ie;@>sW<&$0gW2R0(L&WYr+}Y}4f&u=_QJXz9&ej$ zE=7Ww5&ZFKSci(Rv(Li-iXkE{3sNqRL$h%5m81ShbzxoR1)>WRf2)KZ6OG~B)aKo7 z!HIN{8+%_AXauA%L3FotI5%A2bs4(&7k^}BAcs7J0+?<=G zKEEX>EiP!sT=oc9iCkLc&0= zg%l458SnBZFnV0($q1?2&y$|ouT(Q=-CjAC76}yi5MlOk4J)@m9>5aCxLbdKcXJ$UA)`tleD~mNFA)1oVjnDc>E#hAOGRLssHLVjMZ# zM+sYW>F;x3%X9?Y>vC{$?tWcd_7L~ohl_e&o>KaXHpb0m`(4T@i^I`yt;eBfbexMP z^2M-phH4Oj!VdfgCOw?b#kXbJukwW}5l8n8MHg-cV+p7-8$HzFt5c#S8adRpUheJy zXxDHL2rniAmew?oIk30&>)yPs8>cvr6%P;uYP_8KnF&l&0ph>_;OmzyXU^H6F7_-Flk0LDVY!L*Rxy{Xz3r4 zON9+161p|V!|AGp{+2>yL3w<;<#iE)gQb%1nBVHt6KQ0Ty+uvDv&b`n^mqTfKumIDKw`bElELs3Kl+W3(wWi^{xfN(mR<815 zg{u?!f!pILW|TZc)?U^7R?DWB?w~8-FoSaIjS+@LMn?G9(c{d|=qvR`geW9}RXn)i zH;3by*dS1QF`L6JpqR_^9g7-(0MtPJIzJSFt4FHFZ>TnSY=5=WM-5HIpgJH&Zo1dW z#+D4GIYDG04sTY>q*TP5Yrfa>lRV<@g?%^{tW?s3Oc@s61IrO_tBs^Io83izf``|x#AJ7q|?`9jyBi(v&Ofkaq*<^S08n(qOe1%PF`-_C@r7? 
zja1S5^*)NAKv!{kYFhHiw&DLC~O=tA|&W6tw@N)FNo0q9G zuFINu^Z7jt#ZF{$=-Fs`80lQqC#0Ci$3C6QJ8RW%lDg!v3W-LwY*2lvgDVf6~iUjxt(ur27C+TnN&*6 zwFVu+hOU#;8wB2pYaLGgSP@~;<6n+ZM)i~U)S&Wn?Pm?z?X_B@S^%O3E-?q8%4U6g ziE&tQEAVrA2Xo3StcVGcav=rkBzABfO@=1mArEHEAu9(25veUff54Uazr!&zryboc zn_5=Q4n20VNFC&02^2Jq%voRdcwS^^S?8vc(ynqXc%#~gtN>y#4Tjd54ww3cV4@!R z-K(~NVW?QNf#v)Txr#xJGtOX#i{2QldE!F@a)YAOTG67gDnj(gG-U9QKapYfOby$m zp_mI{A>v_STx%hb6`3@3P8k3V^uY(Fj3?@Me(Gx8PZJrSmfUc1(ybT-!GXqDKEo4M zmwt`J57!iB6Y$pzW`k+<87g#4EszSQow&_g2Mc+p(5r&!#!4^rgxC3$)CI!jMO5If z@C6!0XX&*v8Px3YcR630f)xx~5|v7~=m?4)sfl+E3z~0Sv;Le8(Q5-W9dHTK@Iq!m zeVZ`gacauu&YJ#se;qCvjpI7xUdcjeG}UJ#05%^}BAqsFp367E#%{bZ-RwYC4wg zFR8{8@0c#mEjQ(1CHBjEy=l2+$akccyz2S>i%P9CeuvbRe(R5^=*RWUQf}3mU zJOCwQVVx%{;?idrGZxOp8}6d)|M%Xkh~UQrv|3W^_`uV=@ptP63XR}?obQ!c?zL3H zY0!D;j5;}fV77qA^?J~(JN#j(K@-o0wzg#~cq{}6HhRPssY!BTQX7zogmYKmyRRE5 z8ENNqMRDHKbRM`|(BD=HP<9y&K4OT4h+|o59o;J#0Exd=7(?luSx~(=KIF3fNf7Nt zDmd~}-FG>gLZ;OUdi_oI>T>GVw)QUyyE$|8jNoyDxuBqJ!(n*_pf}Ah1eJxocPQf^ zzJJfOV~m|PRGMKjY#g{H;7);zXX#R{n}5MDw8FL7hPS(`3cC;H$2?{cPy!;e$;sgJ z({}2{wVo`7-(pr)@ajugKr4-H1C&~cp6p8XtoTjqIP69EKjgfMFPbVi4tRK?_loW=T zt8)%(7T6t5LBLvQ5QJ!SRh(&DsV_Z@2H&oQKI{CHuxRE2ffE_<`alpclC!5B(`L7t zwd`GwQzCM|#PgNYdd8j+cRN?|)l#qMEfV>h+r@prurs*J@3DSvWyR`BLS)j(!2!Eh zilSk{GUxGTu|J&}Gt9)snSxjh zSR}Vw9A;n(g@1D&Q1&wd%8NW_7q*pUbi~zK(IcqTtKjhZBm{mF^u?aatA#_a>*>>o zTg}eV;eiCxQ_aC)SbJbkT(1d$>Rd9pPS4g=Kaq*KiI@Z1u*H(-+~{N|2+wyv#>P`COB;2 zZ?(}{b71QPRvhKyTyyO_h>wC4?t#B#w~9p4;_Z;57N%y>MV7cQ9Qd@fPcggdshENY zF&)EA<{s~Ef__3d9V_6&CJTJQNoVWD>Ea#8GED^>%PW5A{LFGp4GNif5n>^D;nT&c zqM@C+9OBvPp8zQgdKg*Rl}M8}U=lnp@#HlOE)dYX^bniihd-d-!@k`#=)e+jq=*M6 zigy_D(SC<^f^t9ObNKBJe1byq$aj3n_qU<4yUxNZVPczjyqz;xi5JzF44XN)l_#tR z1CO%4J&Egp-=-}JFPa0(H0qw9Rn|MZMq)bQ^|Ilj=WWun&?3cN%pp{Xf@m zG9KC>$GiArv#f&n>7b-3qbgG2$5N%H_KI0_r30O?W6g_xv{;TA$vcwpoL#wZ$AB&Td1(*lX*K6#Q?&Cv8K z`%NK8RnXroHqf^}>KymJmkA-dAJ?n$=kT%$2;2R;ezL;sC0LJ7g90-mv zqrTw@N%_xnYs5pV4i!wNH{wYQHEm`3fx&u;2O+25r46ONo{S)HQOc6JNR4COGPRVA 
zv_tzMDsq~;xX+YNT$2QVh8aBfB{&@cLEHZ>JOs#3yrb6mW3qVB2CSW>Tm$h{g&qkI z?VQ|?T4NsIy)V|=Kb8~C!~?0((tE(`Ox8($K0p+(0;<> zH=`tN+T*_r#ov{;i{~t?@g2g|IzABtZ>os{3y%?LIlGpTY|H{0ZFC45Hj4xZ0fZbp z?}9jR;MXXNUp+=)y5jSpBQEm`YO$R3Dv$I=_;>fi%tX6I$Li7V5wYFdD0Z@WPE}ew zDD-Pt2*rDugd35L9b9~E%|=d`Vr5PV=6)0r3*5&jhvnCc$JG^5+H9O_uPEGkLCGrq z!#RP(L>fpoxWK^j;lsU>)M}&%QaoHBtUOpqj|k>?=>9ay*pWlBPPK0GcsjCRY3Cg2 zfu0UCPQAe4o$Ha`Zl8*tH+{iB8RQ;b@CH&DUpdh%o^vKf$nkLdocbr(f}f0uE=RUH zlLz2RuH6>QMdi-<`+M%#1jv2vFH`TjCzY8uk8a*eUb@y#*c!k8d%~>AVe(WoN#|m^ zp*JI4i$qwFtiTS54yc=vrm&G4rgG3B?06@J2@a;=p@-7pfIZCzND00>ayl%Ao zD^Kxr5F8?{e2WSN(*j%f?zU{%9&h~^{*&SVljS?(O*9!`Yq-s*8lq3w_WPMjT3T`N zjZ@_sS={<`87+SyeJVKdgtl}1Vvo>g=H5m}Rx_il?}K9l{6embYXkX1`IV?hwzYW)37VL~#N1nB)czy0imWN`9^7BF)SXz_O{j22Vlwe1??}ic3(w}Y0N*NM zR1mYsYCQ4V#}kiEZNSSi){n`l=->1~BRE8MvJ4NdT?5=6XRa-D%)W5!VRPqqfvPIa z12T@tVJvxz(yjE?sQS^oX3@a@A(bvlSkMe*_9YzI^#Ac36o3RT3E z_)75Z#O{s%*Mk=^h8rd< z^V*2ATyna-QYVV}hdV;k!E!*eCmdTP#-kiu@Z|7O-p$~^8hCAGtMTY(w8j~}u9ks( zUpC<73`-kl?`sAkIUGidgN%5V^KSsQ_sYZq$Z*vBU{wIK1CVp762twirC9sqDatx za(I{bR6<69<%d?Ffr```JOcY-_zM${JXMG9*AEk*%Xa*f{9f)bTlCY*#c?&NzlpEF zmXJ3ITb0aivq!IC_kW{qJPFbOWhccdf5;=o6xS;B4}{m6?|~sW7zk$k(^NyYU(2HJ zJRiep;Lkkxe>I|3P)ToXBc|1rR$!s!3sIz3e5;>d34XvD9Vu_TgMxo}S>0a7ALHUZ zCrTMMv(YLRYed^<>N*@ir!(2$uAv9+3I z1m|(^?iuMaj3HAN@%AiRJ=qSy#P_P5Vxj=IgrOeKzESJWZ2nsXP)38K&>4oo%;8QH zKq-wjxrqh-{=fsc1N4jybpa_gKlUbos{@z1MwzdwOdGm>+AA&kXR7~mN6pd2z>G+6c zAL9@VXu;RYHZ5o~;?8QnI zR1obYsS!#Kl~N$)bY5;T&GjNJ^rGCa=FQ`W_&Yr z9*i(uD3d$09t5P&YgFWzK{y=dOa7?c0q(993;xCN6E zJ}v|o)`n`RSvYUwr+>QCnf?K+Aecfl0_L879*%_$NX`QwLe%R*;gpv~P9T|0t8GNS zRb=DGS>CFv*0g>rU5hH0m=;{6liscKWpmU+EeoDVCaodlI;S&fCI2oNZ){n%R2056 zksm9VMUA+r2R7TC;>-Ry3-4Sxk?50IixV3=aD6Ivvek((lnKhbbwsh z2#+ov^J&1MeSZH|5g*8J{$r-rCc%)mS7iXk0RYLvCcw{+MB<=vsrw?k{_kTVfhn>| zF7R)#i8E}h;c`=_#I5_Hl*%@vS2a^yi#wBpy&}@Z&c(|u&oCP`R?#UYqx>BI*-(zM zF6LaQF>*~=zXa8w!x34RLUWo486@>VNuyCgk&%S7=`mV~1CF<9THsf7Y7eo*xJH3;q;{1+Sby4?MGVM+af@;b9Nk_rLGRI*a`|0 
z;T`@tN2^U95cB6ye!M5D9K3#iVBb8o4W>MR5pC(%sOy9=Ad*Eni)|2C1eFps3cDdg zn25L$JXYer_abf3n*BeheKIxxPfx*S*VH%Bgsf6_GM{=RT-3w-3vj(%7nNtVmnaL8 z`&F$Dxx!yArOT*Gpk&fc-#5E19gAmr?qGbyZAsm0pul*ihMSPl7Z=q^2N39K-gejM z?UE-t^sEku=6A(ZlcTxE!4DysdehOdDIU~|@*TD+qu$VV<2KfZWedIKl$wQEhV@n{ z+9<{C4Ow)RDmqv8!ZR;aL@ySA;zFD?PRqlWF>GBnhb;ENeRZ0DDi|7J;L^C_e}Yw& zqYv9wPLzqF=1_7C=bo%D2ocw~tG)}h8bdrtfpk{=AIA*TmmHJxbBhFWNjnB<92(P?>HFBpggS$c2$1D*Zp83*i)_>n z)kIIY*N2OaX15>HDY;s4dOE>UU8jrI;oCirx}?SCpt4X)pem{z+BLv168|)_AOU4X zefoOH@8;V;;olkp*H+H}zC14MNlEnF*k8_Ko_ zf?!cm1wR*BmYU~Plerk$I*)0Tmf)-%xRxoK1 zlY@BhY;5Ajf9NvP&_XcG5Z66uJ2}801@(j37hqZvJ9XxT<>4=|mvl@9-_$^+gBPWm z$;z3s!dp7?honDP0X69L_AHUDCY*(8p05I9)Ig*dU5%tdI3n3AObS@rB zKm;cy2%W+@jSsTHsn@9=P%UgDgORXwo-7fpSDWKo2{*AUCITl2w+xLzKQLVC-wQ<) zPow!#g~-(ITp6&ElRQR+z@kTo@@;116x)usN=3+7Vy(|Fv#-VBYki;?#OJ!Bkm_}pjo<9*ol z1%SUFG5cCN=ezyux?%T^Cm%)2dsa709p*e!Nfr^O&#%_yD8x*t+iW+hNP~WNyCCqm6mte_%RfZq-jd0H46b`zCt}b0tU3V~{SeN# zIsV$(5qQPey_^U;y7PUiAEj+J3s4FUEGCpD{^l+wcoMv5bJYpKGn|N{T}5JZP*{0&4~DiJHt}b@CNrt(Ux6(-!MXp0`+vA=T9`ds!%rV*orH@+xHWh7 zDWz}pUyNYSWSbgRv)olYBu>)BU36JASHB%B@83=}Ao+k4L3>wMC<>R)N_2N$GV!xt z1PTSw0^1$jyzdGB_+@z=r<)$$!XC+c@GZq@>DWQ{NP3*Bw5_Dw$xoWzyo}Z54$TPl zjEvX~5oN#kyPs!L5piuo%>Fg0?)^Jm<8jMPA zcm2l<95j^GlSzFi`P#3F%T!7%3gs%u!wQm?&}jlIYnVyMG0?g2#AHY$Iz^Lk;;EQH z+OHJ8uf7Ip5I4fn_r5l>RR+QdR&F-zc9Q>E`b9C@3N9GhG3AEkXePqM`42I{CyXSy zFwWM>jxX~psv0jbM!BNZ5*RJK;SH$P)K;Y%+=KSAaR9d3!*}Svuqa8#_BrOBjO4)V&@A+v ze|{tiGh?kD%zQ2MQ_2#2AG%&w31YePAc5~PhCkNgShb7WtubcS|FsC=REOIo0~=!W zJM?GNI7ycX*WIL+m{Cp2u7q-|qOl_JIGsO$*I|c9`1vu2^x<-Y5@rea;{}ow&YbG# z&+_XIE(yRA!pdvg&KPs&{n9JyA93fd1^10tX!SZyfA?Y3R!Ky@OYw=L7%ckon1A+q zm{^N$;6mBt*h!XcIQdW!E{mZJTcQ80+*FvYE4d9B+#JgBUXs_&vr&sPT44ipN#Xl1 z>*3EJXaC&BkXP`QkP}{`jSTOPH=iG=*AcA#gLqtaQUt96S)>r~iZ7N|#h%_eI}U8p zLJPaB=_+`9h3vs}@qC2aEpLsxP4tOt3Mp%*Vmnw!9~VdiUMj{Oj5o&xH1_*)?X zHHzJt4fsU!c!fNJ+yN#G1p~aXWYgvek`pQQDT^g~`F**6(~|qtv_M^urg57eQ&m3HTGVy|#Qr`~xih^&LZ(n#~FtDrxx--BL2 
z-sH{n>|`j^Foj?CN~xU5@Z+baEIiZoejU1srJcK@`_Z|iP$RihxYYLlAW_&?N1ltOF$uoW4xsOO*95#%m&hv zWl`~ibj8nFXG+PuD960p#(eXcD0NxJl8A91)lz><}rxCx=C%I^OP! zb)AEL3F*PnMiR*lm&5ZM7e^--)d-df01#kd+u&N~pq=o}DqE|f1)tsn9K6YbifIhJ zaZmCY)_UBonOd_w*th)BRr36?uwN!N&^Am;tpDB?$tL6_pI2aC3IEKY1l)dDG^=y{ z4}X9%D5;ic$;Z;eUwu|Egh#Q<`Kv6t`V-#!zn=GxSNnM0ZgIUL>dtkd40ZhQD6KTg z+ep}04T-3l?b<)bw-_tx_Iw64019kfKX`?c?%X7()?Q$FVTBzyOR~$fpVg&Hs%rYw zVI{Try2hE!KOHhUqemn5-i(foIG@*btjCQ=j7M{%>Ct*sdrc_)ioH%r`pEBcZAv`M zzw*`bHayqn7g(~QZ;Zy0PE1w_>gwM|c7prGHJp<+P8uZGd*R@O81xhsWD~z&gk%!m zDp%NLgG)7>)is>gVdtl`myDVwMfdc=`;FuhBG1B@vXiN&`p=*CJYZ$?f<$k80a&=8?*q|dDM z5*~4ARkFjq+pZGC#bA!vZNpiI%T}$9AiASimP%t6dBE+^=$;!h@`D>yVH>sf(DSMQ zxzEJsqt?|iHXIDiv9o!d>X$k7t8SBo9%5az;NbJ&Z z=U_}0U{+wDKECJj;u9YT6}u6kC~WiRA42s`rX5mv+@7*N`x^DAj_GFddO7Br;=iQn z=r=HN)A1;63}xMu04g0CV3}Ap$O{FY>1ghdHzs|tJU_Z6F?Dsr^*TZ=$BdmN`)wKi z+D-h^fxuz{hQDcHjcK)fc$jT!2Kp)X9Y^*(0(#1eK9Hvp>j&#TtjPOnqwT%9siBr| z{1m;4qh5e(55{iFV%kcoESj~8pORR7Jtyh6*E)eA(r-^m!@oa{REC89yg^=crs9ST z0Vym+k*0CNfeZ^|Fese%1Yfu9jxzhM_Q<=G*(4&*p<1jl7)O3*8O7m1U3@8JW=Fql z=)JAE&fVQ14g^`m6a4Nu&X|qD?cGZn!qzXm-Frs`oJ+3OoG61KJ$(BENv!GlxV`iq zasPfu`loZ$7WQqhHbw@TsylV?$OooimSo5<;9Oof^+H|v+w*sU=p5DHgAnY|^csG> z?1KZ99o89$F)xpfTAK)Vuv3-40l>us%u-V9o1P~v0jV6fY4(ksortQWpw z9Uf(v5y;nAf~(^eQ^M(>RTT(WB3*CDVI z?M>g5V^Pct+KVLs&@5b*@7*3vc^~3_-IApf=gG&1Y7T4r_ZbtG%ctFos3z;L@H{Iq zu8hwr2dm0$nS#k6 zjt#ZR8hyaESM}&`-YI7ZD#g|F5}Ge*7PUe+XwZL^NyqW}0j*x*S7pZYSzf9T55Uuh zJbIbv{MXN@Jp$KGtZJ*LtW0y&7QF1krMcT-{cGS*k%?g^Wn)l)-KHTs1GD{zhd2T( zCtzCm2bV&OCir3dW@zHwXn}WXVX_g@si0PHz0?0*<9l0!U~gdx zD>r-+85h18J{F6GB;#8fI}7ZkTrG=NA8t-9SG>o7kcv&-ImNhs&*I(b>SKCy75tv?5Kbp>oYh z!iprL;G~vV*m(P(hNvS7_J+3?5XZyb0lvA?l83;VaH+AhPYl;7*_d@7N$AK{K4A?_ zKm0mp&7zF{qa*rf`p8AVY233eBF^n8Yd@F&(J*Pd)KRBOz@=LuX9T!Z__xEbUti~{?EmWpu@V2oX0+4`L9%)^{`Xf`%F(AmT8wRmk$QK+XEDwAi!1 za<3XHw!NI?u$_a=(gM!l}s;kxIva{f4YtrS)m9t zxRz925Gcox6JFg4eiL2vdRXiBdN{qM$;B{!kVb! 
z;EmYEArcHpT?@EiQ56_Xz-DBAAF|AkeB8{2zJ?d{)1z?+KAt_Y#-;)KetS%M=m{;@L%B$RZuR&(;<7Pt6ob8B zcwcWe(9?VUt1(LigcXH|$wg|!9Z+s~IY+5>B`4)ujW9i>k?C%T(0+*D70TBPL5v*D zW@C@o=IEnVbdmgDqr3X~4AFzbBQ>K({m%&v@_bb&4c^O)!=(KuA_n{h7l9{Xyi==B zYR{H-V`@kVh^w(q=Zyo^8yCm@eK!eXjZNzQ>UKga8ST|?K!=W;<%75ly*Os>4##!- zAt><);@vmttj!gIkGHl}LdemYCcAM@Ohr-5CpT%LhzUcj+RGf&J-fPh8B2MX^9U>Noo2iVPL7Y~r)ttGg zMeH}Pj-D5F%MYej#|51VLgR(B2@o>fV;T+~GsgCNWEkmJ3>s<34i9DOtA##Yk%Y5b z!RwJ4)!PJTX80m%rg4d$z_Ci3u+I-a2WXkZ0v}#ZH9=h0iW0M@ED+qI7h!al_+8t3 zg7ds?yONK{u9;>vpMr(YvC67%qm zU!a<$p|EXxA(+uCi@k8ralp*qI-X%Zya>3oSlx=67c8jzSPe`vaNl~O|oOI%l6 zo_vw40c_m2Yo4ik!k%X23Czqli;;5tza=}Z$C3rNuuR6-9IfN+*AXEtAMu?JJ*uzb z)BMjqDPF-o!x0u-wFDNRGC`b{Swe%Kv1!N_1|PO05B&D5yLI^}7TOtyya5d>k@~Rl z*-^oWX<5YW7wM<}_&p8wdm9cjcHAktdF1KB{Mt{EdR71aNWI!!Ac$@6GqE{;ApK45 z+{wa-MDLi*G|9q#Q9Euv6RKj4MOc#5WrYSDT5Gx9VdGWu4BHStIsd_D$~dvNG)Q$d z%lS%Xx{wsvpeH^%%chHyU#U@5HX&o=qhCOR1HCjFUX6e&A$t|abFQUtV=#A-_g~i8 z%^O>Sdi(74K?;;S1-!YZ>sh{Tp0OBd!@`;=33|X zUIBza5%)1}Eb_!t$K~PI&~0vJP3@Thx%t0kV2$t)j727^CoXe+PDj}g1ZCobr|%C# z&&+~^tYt)@wd}6Q{=2(E2BuG)k}7SwU!~3<%sJnXF{*wC=r7SdGHl`A>Pv`Mnhi@A zED2qnHB{+1hz*H%0Bhd)F}OD;E;*-JDA37@aLM@2obz#tf0j3{A<94fA5&iy700%< z8{8ZB#yt>R8h3X|aCdiicXxujy9NjvAh>&Qf&_Qx_CDF?{CCV39%x2WRja0bLKUA= z&QoGQ+DnnqNg9vV8%iD+w2c^Wo@Xy2TM>hVtRand_tTk-Bs73M=-gd#Jr!#d;}FR) z1?TK$V6L};o~^e;Q7@c#s(4Ov6uo@+w#k?DtOx&=_HXn0b3mwAC5VP+wVJ4 z6(z})eKtcX5AFN%e(e_*Xk3O-HqNgu_4GGU4@!L_VRJ_Xh$T1w-HL|p;hFVK8-rrilFs7*qOc6&(!B80u9D(o6JO0T;e-U>|xY;`)d=|J9P1fqa42(z+9V_BiZkDb6(3xmMZ2ghx9EGvcB*_iSf=au79ldMyWcb>)eJR zb3LE(Yo!v*htOdvzL*vU)Zt4Z+%;EVlzt#f7`Q;caqM#%gWQ*vJd|{hT$TjoXVatZ zbtWkC2hpPqDkC;R3hwLT7XWRyIK>@I`we;cj#b0`wpzEN;HK=jFG0-RQ@zjvuM{7K zH#yee#^nA3H;zMvN}#a7x@Uzpa`age;vN53#U8t>mpVYjK@%F%fcU9ut`wN4)I0`j2DY`dvL= zKz}E+jL4wZ>rQLfL-++12CvNgUeq-6>U4rUT8dwV;@Pc}1(m;?@N3fk&?eKyxR#*| zFBGWJ{qN3i@%06`6`9yd`Dka9QJQ<#Kd_KOlt#b1qykG zGNo{#19nAxIe%fpZsY{af??3SoeA+G!Hpumvw=rx$aTA?N_o-dTS~ZNF-xxdp{&I- z0=N!=(pgJjRZ2*P(RL$o{4F%8qH}QGn)?rQH#rPq1gk*3wum5mo9Nstom!_EKU}8? 
z#N0x=THIHgJj4|O^?V;HNupbcIoIiOSTM*{|vcD-s6P|ZeQJtLo|2+h7 zunEQ;GHdsR>|&UacBR4>a`_2tractjy>(!0zRHX?y4G|URxlz$n)@xw_3Q{WxRQP} zcvlPikqa{ZY`Lh+6Qv0PGTZJo@8YB-5T9xV_ij+QGblh!j_o5^N7$xU#F5s|0s8se zAuVh1+A&0UEK?{Rc$0o3!rxZ>L4@4rQ}ARucz_gyP9eSO(oKOLolc{dDo#|qcvBgA zhGw|x1|C7r^5t9&b`&k=WTR~wl2(s4pRiq?I6qqB!ISe-A!iSO>9N99^$}at-!ujosgZyLWho6#Jhf+@I-joR#<|q56bI6(PggG`oUTVR{ z;w?>HeH^~4Z6IpB*eB_0{V7I~s2DsCcxBU$pd2LHq$TAHl z7|Srd@JlX8q!a0e@0k~aCLrXbGwFOk5>tsF!u-0B9PK)?(@Ds)Zur-AfHW^3W(4Mr zxA>3#yzcA)GX6EVaA7ax29h`Sq|Sjt?UlP5HHgx8-JcP41v2whw3)=1uo6hH85h0$ zRWjBKSwnFPW&|w-U(!_8#W*}%r>*Lwlw8}j;BtQ#`@EdB^~^xaG=EvNGAdMTF47u> zPe#(<=~!D{%E#xj)5)+{i(Fcc^YOkrXz&;1;dT8Pd)!;kMcvK8PDNeTK!dT zD7w;r;!D1{c6p!0<7(A@(T_Ax`MJub)0IiqpNR;#M`BAQs@O;x})(qs#)6D8hr-%i;X*N;mA^{H>pNhr59^&ymN za?MA+HTpbWHTDNh%Q^IIq{0r7Ig-@Xw%5P5*ZE#IL>b6uBLxE66oQb}zuQfuqFw!T z<_R!zS}{$}{4ga!$TxTXNxKz#GOv)~JiTUj6z8bD+=(%n9uZ9EZ)`T`gRJVYoTF9^ zDVOzp63)Z@q-6rvKa1n)3D%E(d*n?zJGX29f`6=<#6XPA9t# z;{)w%phVNTh(aGTAc6ov3=s;^4!;L{kFB*R$6ag+HkEp7wyR`!TfY z8@=Ik3_H-P4yK5pyYEy##-0_3)US>|f0qQ2PZ;9o&yvi3d`_NF1k>^VMv-y84LTie zs!8-=9#wtf=S)9?A1|43Yf){*QYp>RSILLXKNsRH)y(XPM z49Kw zcyy#h9-``uk{U36zgflm4w|wUb;;4D+}8Cr4}n3#3G?>zb_Y4N^T(t*Y?g_irkKm> zSJO|khg{H}d}I{P89?(+?eTdurY<-3#254n(v&vp||*3=PJxyF56X|M&=M^M@Llz>#>4A>P)HyV)U-n_L==3;0ux-{H7WaYlWm`8D|GbOvnZXO)Al7pa2Z1j zN!uZn(Z^7`thn(s`bO$vwZ8n+QLnP6@zL4M9gApF?0}g&yCYvPXxt5%6m*~&O&v5x zDonh@u;^Q#2n)$m`(>JMF)lwsYL}TRKanqWAV|csQKNkl5Uc^rc$8fJK`_1HM|%R^ zme2OV;iPke`FXT;`aK(FS4`g@c9`8ax-JGW!P{WbObse=s8A?juPlSpUu{#+K3ITJ z#S9S3$wv3u)=av*3Ua+pJ-|4Lf%Um%G5G2F_f(rp3RE>L{E4dRWv@|_@xdgs=9jFh z+)HFtrNjFI(QuJ)ZHsx-NzI=~0y4<%Db2*11jqBJQtr;q&dcqN#>-`IWb1QFqQ`Kr zJ%@$S<5xk}IKJ8C0!`FP1acl4GQD8FH!(npMeFIXs1+9OVGqJRyP9-Skq-3+J2d1V z_?To79k5oTtU0>&<$Q-|-rM9z`XH&gi2*#ivOXlbzQJ4MyY$T$a$FAvHpM_+PJi zm~JW))r?5+_2Z@@{yVzT#RM8Magzb9AT>N5^KydVvg?W-Y+G3Z zU^3<|5B}CDWmE__thv0Dh(>cfAszk8T%=e3PNMOLF$3+b-mL)uS3UBK9k+Dv7?L# z0)SpEU(MM<#kQ*yk6+S)k?ExG@S%3!&)mp_HS9TBK|EX!u<#yof|4b@`)JVn3zvrzk;<-{)Xhh 
z2bYwp>Ih&>ztUqZmczXg^@>E_k40tT6iZ|Q6<2#T_t$68ww$8GL!P@>Ko$dsl)W+Ir2Np-K#Ou&IE~^&jCX$l(uM z3oJ;N*_`9j(PR*7)j1h&e1&l860+7trU0 zR1FP3!RJg?&1TuF1-NQQYj=#dn(@N^lRR4gJNQ=({QNM+4g{WtX*qZm=T-zMaQ;Kw z^GRHjfwY}n8Gr<^HT2Ctzf+JYtG-XY>teL*-OttS>CcL@U2E0ke%d)z0Yn3CJ%>-8 zA=901f_vGRk9Y(5j*SZ4+3`a@G&@dB!-l z#DBb1Y8zh~7AIUN!PFM82s0_Y;yVTfM-kn6NWP>7hem$fMO2Vd^#Zx2=BIqYmKL6e z<}%qozd-!bYQnoSt+>OsvhU-jk;zNwizO*rcWk;rFGI14qYL!*B1|$+BCMv7syL>Q zp#hxAY5`+UC|57!H4P2II57wqCv8~{@j`KH=)MMIJ_@{Uj)pVa_%I>nZV&xgqEBG; z0l5n};?o(_R>?dz^VVC`wog|8>(eo?YK4uww63m>AQZ#sQ`+%IYK>~#OE>|Y|gy@SRRtewHs~E`O#;b?jc)(=W}CdFZ|l zVxSlxXo{sjtUKBvz`w~67Zk(j6aDjgY&#iF=d4}JU^3qEc6ruFziXqSeQoU0?|9cy zm#T{!*rWb-mpWzDpf-Q5=N!?A4FyHq=%^?QULwxLLlCAv2|V{m<*7tAqo|XnAgINB z{4qRoY96B#sWBXAaB2<|#^uBFopCNEa&#jS{8Xm;3nO<})-j{UK^TwbzC2PZLqOC{ z%eeXLqG5*=k)@ywPI4=EO&XW&=zKjm8;FiU*W?+TRFG(>Sq#H?SfGzW{Q;Uy<=Tq4 zwF=U*7CW0S(TDhB5o6^9S!|Hk(Bad-RES%%*LLkhF@u#R$U@yw4O`s^sygV z1d|3V)+twCOA-*dq9rM#J7Bcr)tz#rP`F z?bFxN7FIaalQvGzDSj=*0UpxT)v-)Q(&;xePcJ3Rf!~7N5;A`|Zc=y-$U?~oG$!bQ zfP+2t_2hGOiuh4UCi9?qD?%xbX&N)-^f?>Dy4K)o<1q=M`OoxW`K+PD9jt0gVFM$l zuB&$qJWLup4ZocH(TeQEJp1n3zkk`+fU2}*wybI^>yq@3!{=r(#fl@C_|7sAW&|;3 zD4F7gfHZkx(8WxX+kq(r5fc*{HBVM=NyHh_)M9RJi1k)( z^EzME+t|Dxe{XxX=uR@luJVxZy2~{GJ)X&S@cy31YHmh+d^)WZKMTtlgS>D(E0MdB zElt?}GJbeeU=}fbtX86Yebnyd0Ds(0AlN6-cSX=mBXx&)(h-0Q0H#8TaYz}eem!WzwR7jrp5^JIhrlCCJIa4kGWHFAXn`y!~jvrT_N$MIkePM!sC-=Lh$S%Fo%Bw z4E;WZL1K&;2~C%_U9KnNl$|3XEC&PL`yC|T>ZP;Gp|7Gtbc_TM6~*Y>PFLxmd`dYB zrw}Q6I<*OF@FRJsd4xN`t=dm`LBXL0zt^IuFd!nu3mD`SWojs!&WHdP>R7AuvqICA zN7^cx+BJ)uRitM``LO#MJa(xT1Kf>fl$~d?nTrwm4*L=8HQbE?`IedBERr7B`45*H zO`FJhy@UOE-E9*nM4&`gwkFnmkFE!?%jNIwZ4TM ze7vm1ETpiRq*;aZ@6w0O3Z!Cbot&J+gKMtSZIUSIViSv-IMYq~m2=6xy%$3%(3OM; z_cs(i!cDvC&>Tism?_9&P|F-%hi&(n<#g8Ke@9r28KI*;O)e|V4embLh9j2JUaXsi zRp^e1vIsZ3CZJgyL%>&XuJiHI^1Gc=wbpk2_I|fE`;Fw4RvuWX_^Hxb`N7w~g7g;9 zh_6!MX*Rky!YsH#du~!MIMrTYv6F&B_r^oVf+a8saR&ns1p!nBFVl?suEuSMkeL}k zBdT$;W;v{g>0(4fAEIKt7-`JNr178+=~qSQ-B+iVI?}Duv>hJS33c&jRKuZwBz_q= 
zboc?60+cjFf;ofd-ReVpvQbYQeJrlWDQabr@vxs3c=TZ(5;)bZ-KcnBgKWupK6YRy z<0KK0Sy;2e`G{GwTyTp=#a@UZzW|W;U?kWQ5z}PJWTmD9fRMOM{Lo<~ikJQ3Kj*;w z5*R&$lVyX9Yh=HeYdFgAMD-NIgz%GD56xV0{gmVc?Gcf~ViL~9Yh+@Fq_%A17_bca z)NY}@MKQ5Ya|zRsxPQtT-3L_J31Xj%zFz5`nEPR#g($znJk{!DJX8h7E8vVQB+WBu zxgFKNR}vrO@%{Y!vjli8-5gGgQNLY&RCfduCRQB5dd-J8me(KnH&pur2Z+$I>TJhl z+;_2H@MOAj_dh!k5HxpH>Ms$C^K4v(ag})!w=>z!5*%74mz6DbZO0xJJfBH>-=8Ew zPMax+^SIe8A#XI}AHmdli+=N77;SbO&b{eZf3^AMK0EJn-e#vD8gz6%Gt@89?GXhZ zE%*AZ^ywzZS}&!x>m`a^dHV^QUcJ=*ZH-Gb)A8(0|K*2%vEXa6pmf{IRvIuiAqc=G zs(_S;g+(}uUKX0NL`xqcy>RNBi3KVU_f+|dPS49& zPc{$7+}+!KHy1f~cD3;j;+!03|DoqE+0MZXP+o`*y_}Uw2x`s z2xXjIZXORz<~iPz+Df_km?WE(e<+-Hr^hZE$4nQLZNNGhCMe<2bjKGEZqcWdlb!9$ zZ&*C6q9zV`m5XHZk?qm9g(v0!IB`pium>0h%R~il!U_xtxZcPY0(Y~kc(@_u7=l&2 zk)8ENY7&w;B*XaXw+g!d9I57nj2H9GB^QkhnVOHdmLhP=;>%VC#CDCY*IDjoa@xT$ zxUPE_lMqmuJ7dAn$jDm%U^3VyO0-fG(fvjqH_c7)rW$*e+ z#U!r>?z8!KDQNuo1R^Vuh)2oy-|xGRdU~#UN?fbHltJ;(yn($fzv{g<)k)hw|IkN< z@$BmdaMRj);IRNw(2U8FW@$wW3rg!XiG_`-yX0ba$+@#;qxk4q3ke@#^k_BX5(Zhw zAcmbnm5y0OG)N{jpXXY{f_<7srMBq)`A4_L_dkT2#K>c<;3u-D9O5hFy(~-KdVeu# z=Il_e+XRI%Dr2Tui@QJ$i$8L5!SZP0qL)L0@vyD5ejC))5?#pr98QfxUXj93b zO0fMFOv2NvT9U|#cg%vbFxO@XX+Lpvnek-pI|MHV$6T5$(5T8xAnvAlNN%yPKZ1Be z7VP*tKXz*H1-c9qkPnL;-+7yhBao~oZj(^q`dsG!n z{eGvYFPrzcS5|*DhpjH~>2;grRoUxktXB5jZ%$qqiEu%)X<|rxa!;~PW2JC@xmt>! z%qvYWKO`c>A)As3+c=m)mKy%O2-w}1%(hPlmyYc^TsWEUdb`R&@Gdl8uB;L>RQd?hZcR85B(bpL;es5c)z% z!rP!PKD#(uAU56n*rSx+Q<4~2w9|^8dAgs@sJTFpWC{tWKuQM`-7brXK7{jS6R;Tl zy5>9&<1=mKwS}ICQypBwUskSK3mnsFsQg5MQ9vM&HU446*PR=?--PPMi{ms|b4ZZbrY7`Q= zba%#iv*{emq?D<8EWopvS7zuS8G0?$szJqMG4oRk?8C-Q7*{t-n%NIa^}=0Sdv9$U zI}>L*g^{}Vr@CPRlMU9F%qbu0e}pv)=9S$13r%(O-)7Tiqw9R7pK3OrYCHlzOG-*U z{por8)9wd0Bob6GG`vH7xbHgPKMEIoF}#3&NY``EA^*g4Df?RQ)}d`(qX~)A6kI5X z;lodW#5RBQOuyQXU*R`Nd;M_&f=z~8v@q9OIuy#B4NgNVOZ66sWQRJ+Hl;BHP85#< zM7Y=DONike?8KaaI+ESR6e87d;Ni4aEY0*{EC|;~$qU7q;XJ$sr&VNrtD9ock? 
zxzN_&c_k0R0q@T=N)8`k*?BYBIZtjuo=K*?dH*E*5TEC&Lr3;26$qobX#53|26yTV z$*Xrvd?#}d^z*C0Ar1GAQPw!B1(hPJ-4i)}+4I4Qw;AnqJ1tjU@43_c5078^q>TRm ztAh~Wsy_{Usjb|x67XvM&WR8C!A!_H#n?!}x`caSbaam`@5v;qT%PigR$>BA=01O% z@T;?Y|S<^ z9qar?7x?BcUmoTK1qL%&W<|uedo=)RIY$q7OW(&6>wI5rmjZkg))t-`43<;tqC8Qq1Wf~4-^1@VoU(U z<8gbrJE2A4vTP8JVt$+V?18WF#3WbLmk26j4IGOl5g`HdsaWn2N1yJE+Y>m5Xg+pu@3^Vq46&KhZxM zyUI-VL*OLkT>-av|8d0ZEkD;qLNMD+yVo_@C3M>sn+~JOxD7HsD&K+TfYJ=(gZ)NL zv2)M#D1}I&S5KyI4a?Tf=J!5%h1@gtNzg`0+rk)>T-iQC;jBZPxk|zdIX{Qf)vKzY z@5b&i^r5ZD04Ur^Y#~~R0@68Sbx@TM+;V|TeTly06&s zJqvE|W!G*a(Otjyt6RFVzRI@Hf1W|R@mZq zoxKnXN_Tzzcg^^0>Y^&ms4D4u*k3b$#)$#eTFk3|P!WCO;~I1-o^S9~XNSUx73N$E z$CVV-87|4aW`Awg-1gRJU2`PdS;h~k_dKBGZHPQ5xSd=Gclbe6d*k+uM%5}D+7sP;;+N~ zY`fMH9rAu;_yd2Z9{+6{%9!D%<#mcK-C3YYSQqDuPgp{G`@0Nn+4n+2y=G-X=Lcfa znE^k^@Dm1Dii_N5?2l=92qi4hg_T{3^}HoKuDwKcowGPhCA27a{RW^=OL>TXj)dLv z} zi#V=6&|4n<;GWh>l*pbiE>k%+o`Qorl?vT{*w@kB|e7F=BmIJ&O=4OqE7VDrfqAkkxwb{DZi_Ag0fC{cIB$oU!mPG4J66ldF z*xRoERM7o=K+Nip?N(C$--54VBCuMqq-m&;Y(Ce8@&kkmmap_eJ~{RP&s(dqa;%11 zNn16lrRYe#*%ko7Hy)U3r_1lPLUw;v8adilO8o2DLogy`A^*vUx}i;5l0NFgC%gzv z2`s{}pqHFx+vnDs=BipblBenL+3Vcz=epb*vyQge#`dYGAs`8_O$e-Y5$;` zWY=@r%5Hl+f5XeQk)e0-Vm;VL0KJe<`S12#f zbWZb+Rx>I~8KqLv0GUVm=`qB65>SR0oE2efOK#S!W`mNn4B$Md1U)#)OYS^7&+Ry3 z(@Y%ixMUEvWRJVBQNPSCa#BN}#dRs}CXmpdxBJ|;Csp8Ods7~BfAGfQeMrz}zbY!m zhb2vprF@x^akN-^K>DhJ(f{4uA6Gf%A2VGRUXOzu_tFEt18$WKZZF|I%E=6M-ZZV3 z9x$Qph_9P)Z^GUynViKw@_PS0{Vt)}$3z?t(*Lc@A4g&I+Ha@ha&gD|>^pCo;8w`s zTB!0>6nv^0ND1U%<`l4B==gbr;+wF@cgKaM6D)Hvr|q1md?zdY9Ke2gI%W`Nz2|ON zv$`zLdDpJwWp08-X^9)dBk-%xr_AWB8xk20kda8~3cj!buaPW14Ytz-+&O#y!?0&B zemM{30=y)=Pd1G$0UV=6f%yy+5CJb639R$Wr&Y+57O?H&z)ddV39bzP8iia}W`j}rNPlF1uBSkjR81}~mzaPJ+ez*}PBl8Qv{X#zyVg7k1$${K z%!_CqTMR#xbz?HLMUf(Z>L@-TfSnU2T6(_id3|n{EK_YkQY^|&gRSWxPM}aJOSXL4 z<*u8d9%Vl0TCoh;DDMR8MJ$g1WU#0iQAg3JP~>UnN0#maKd@O|?J51Hb;*tVKFG=# zyCwrH)x<9{F`O?rtIgOhMj0c}nRG(gV8R&H*uR(<`q4n^^u+Tkl0~=*^U?a&4!5jf z_ru#iJPDtmOX(r?AxWH(531hTY8V|K*c){~Dp*RSbVqbQA^iVDgbcYOWRfae5D1f_ 
z=$u9goC(gClw=1b@CVIuZYGQ+WDv0&G5Ju^3wJGr{K?*Wj zz>+y@M>=)=@eFX(DM0!%(ElFB)sv}fhyIAChTrqZDBsF(b|}?GALR9H+a^QYc%oKA z<-EhlcOyOuO5MK4DS5Q7MMxNa=|dOBNgVO6gYzUOT0iRRQO=z}y}E#T+~5E5;+B|5 zR@Z!hqM;WPA;*UL>A7%j_>rxZY4_j^{^xE!?SeTy+)e16J1>*tUS}R0=BDdwufxgZ z&j!o~h>KWrI`_5lBg=U5m^YXU?P_Y^yYflU?7dpg#e@8PB~kd6l;Z)lF`+tost-8P z_f~8R*)H0(7nVKBTi-Fl0VRq>-(J5&;$>dUooA^!NRpG2fvgg67#G5J_gVP8)s$(( zxr(VYKV|XDb%)PJk@zt2KMb5(|5geMh&vMBMO|J}%vT`Z~RT!ysPSBdQgYzmG<%l)^YVgDb;D8j8E$ z?mk64^O<)=d=%F+4AM@CNVdKD=BMfgO?W77iuj(`6qHuf;vmR1N}-3G%gtwGI2ZlY zaT`Y4EhK+8MgjivfppU1Ba&k%6ip5kyy&>c9%k7SQLcxCnMHhNBYzT{9M8NgHHQU4 zMTqFj9pN&N#?#@pDUZT)A=~cp?VeJf@(Wgl!TU;mY{^67OmM;s?qNI`*;E1*=ZPl4 zXdfQ1y3Zu)4}WA(R{?Uz;4db3-6k{n6MNB8{IoMq!#^RBpB!C0@p_KhT&<4W1=wX+x&@q=ova;+hHcaO7t*+)l;388h$dV^ z)#YZQV*F6Tk4BUlbXxdqLZcp&iBT_d#0QY#v4wk?(YQ$s%|S>ML2n&OLpNhNoM)I_ zkL8mwW!lP^W(3NIz5KZt;oqAW`{4CQiOk<=cgyLG;2X03Mm#m_eA1j!`3fw!JkQIM9pYJh zH<%6GBbhfo-zwAxlBXHAv?(IKq;UNS>2pY-95|`oDtFM7*rr57;V#bnXmX6Lu9IkH z`P1^|V|KRA^6Dz}Fx8HHlJ<%H7$zShFM;p$7bbxW34_l4av~Mz=&Pa@{>yK85|66j zm+wE}kmmG3cLAg+n|gP;M#E1RTZj}4gyCs?5uVyDq5YgpeGH?ht!EM8xIwzlSqvYl zu=%-CoCrE;(@2i_%WM8)PR1oBLz(h`hhsqv+`E2pmXK5&UT59Qv2LW{6VBNNoqb zMj9UZ4WWhyOUzsN?zp6(p;}FiMOvlK;?udJ)y7iN{a2}rCw>{SrjR0W?_0hKV;+&( zXhw^1c2sCTY8yNfP`N0Y-RHqDJ{K+GS=dMTZjj_w-6FcYfX9Wwi0zUpET3d)aQtOE z;CslxL3~2c|F$8MJOK8_%+C1ax?(BjN@=|5wDZ}ol5z#jBv1)R64+jL#Pk7$QGQZh zcyn%h4QcqEleFuVS7L5_AN~lB?~GMWg9r{tP9B*;EQ$A-5ae=dn(O z=5J0bl!|S>BJom8=My+JV~h{DH)FZ9_nZS8z%|& zF%pv`_IubqM4}*pA{lt)B;t)@3&IRw3pk-k5-Emw+^nUDD6sysSQCh9ZhCjJf?m-Z z7$9-%pg@}^;OoT~Od=2fm3`n2W$WU4vjYvpLIs&hQWB&LR5!F^H5WhX=uRe zy8ruYpf{Mo>!`zv?sLB_MINcAfk4jos(HAH9uL=lcVT@7h(#mYr3E#* zQ{KfoU`S3_8Xl?Oo%9c1(OsQtkuUt>mj?Xw;)W%pDC`x+jboh68*F+CWeq8%RdE|j zo6u6|jIND`YO{H9Eg^|QV?AD9p4t8Mk5W-z(w>dO!hv)%LHGlwP zZlspO0r#F`tX&~9B@iP!HZCp>kKS`grXO9iZg(M43oSDMSL_S1uZ!+h&;|&$dy&M` z;@2m<4UnL0Vq#`gNI-Il9$36h_?2k(I~oZNr2n$L2)At?{w@|upZNl&ZULhOl4;Tr z68Zn_0h_|0MjUI0Mj~v^D6vq#g`0;65&}rR4k5l5b<$HzOTu~EaFrDQrx)EyaU@mj 
ziU}9!q)7@+3S{Uc9}PrbCk@Y6_dd?3aknl+>k@<(&6{hHYDQh5GbYI-Xu8My(?2X7 z@O*!+=)U18DWcs=(By+sW_`NK5gCDGtlNG8DQ@3NPcV=D z0*UkO>Sy>Z`s0WAVQ4N-SQ4Lb+!%S_;^+U(mR80CHL^?{bIp6K03*AxqY9yfBoKbZ zs~3_1^gsS~D&_=m%pQMCK46waL_{DPayEpMF;MVwT0~pK7%PmBUu{e0;6+1kjT8g) zLNH)G!Q*tiTkjh^3Lv5b)Nq*1oqTyn~N~i zDR*cT596q%V(y#cN3fd>7CHftmYhQjv|Ja|^g$P-my?rYd3(GtlNP&wg1*w>Y<>eq z1K#B4B;#_kW}w_NtQ4YR_)dQ1V!Ctnm_jtBnjuj}zhJ`|8qLK_3sH_^Gy;lQ0)Qaq ze%0d&-k9OQh8H>@?SBIF6%@d55Ou`+438z%Zw>Uc#EG{V1qzy$jM^;hmKuNX)B(c$ zyK9vG{Xv3b25pa=iEBUhxJy5H;vk!Igq7ir4jI7?Qntxto&qzm1OzK3osAdc#S4x1 zf_Dyv>LU!0z#0Df=8B$}ABXZg8s}5g9p1JWU2o_m3IO>Yde8;OuDfPAD;(eMO{kK}RWnUgjD^IpXQTbksgzm)_Q%vZXHuv=BA z%_2Q}m}8vF8a;kKdB>YaP#bNcfxi67TF9JFrRpdLYZ5}CLFE=K6Suw(InJEeD-Pz$Qpc3M4~ieO)YMV<%4sL{)y2fFi%GowZbW(83DQ4C z641NKVLi(;!Cv1EHA0{>?x7wgwsQ&s$5s`qB_`+C&(oLm(*tsnQ|9 zbouyio?C=tUlT`@{&$ymxedDWD@v>L0c9Gd;eXn+IO@X(f)AzCY1_2MDn+=v>aPA7 z5P&OGY9~^{RhtaE*^}q|I~Ip|`=+Y_C%!Cf6f(qFSA!nT(lK>(pnz;s8Y3yo;H=oy6lfHNO|2*zy4q-00)wS@=>D<)`yNLw^K7#3qsm<**Fww@P;P>ssf` zbH)R`{~f=MpAa`8v8P;n!F#b;#$94ubS-jIIX%J@xQZ0n=m}Wl5OyyBYRfm6E+1dxJlAG>vrJ&J0{A0 z298K}#egV=S+|sFpbs_tn5lhOA<1mD`wxcu=OF*7>cQ75z!x64GS<+Rol6yoBU_Jk zCRM}JU8x6vgMFw+cFaaqSh2F2_CqVAVVxf4_)ZT+RJXApSq1g`@;LTDZ9_9kd=IV+ zO7mO_eiET#QP{Gn4rF zj$Q#yRL8#&1)UDN-5L@1jbGw0m8DYcWd8kj(!T>j@D(D`bR3^B#}eAxj(_2>C>Kem zfw*PEoPB*&8D>VW;)+afHvn<>D8e5Ho-|;Nlll6S7jzj6=L{kfwtQHGUAr24D#I0U zd|?#-^QeJ~gv_6TSrn{8-e@(0BIFr~97pICb-6L0V?%%q!WAHI?)*9geK?y+=k;11 zZ^0swN6fumyp@CDhunjEIut7kuP$q24^j?Jj>A{7SFOSlT-;;zfR_Po%h^s4S_a+> z-jszDrhMH3MXcQ!13wqFYR)9|%#YK{Vv@sdT!gV?KMJK6D-`C+Nt9 z++`Hy3+F~-McqziQh40er$p%Zwhi0OCUy>(!Sz#$liJMR5UF*Kyg`+w#(;@j3f>Ai zOsVGSzSX|>`W3#=`aQQZOD4|9;9wJZ5-GY(O`#5cTkFQ4EA|)Xcwm)YX74&N-to`| z%C6Hz@;%$_9#LO!0W+@!;L-&oFl#a4nt?61MI??bKonR^0Ps&BF@4b)FG zu{)RMJl_ocG{RBluboB-fbtr82dXn8MMpydsshz>bLG2}1<-rO6U3mK$HZ~CtI}}R z9Id-9FpSAKiB)4SfHI0!7@k~ zEKE0FYH)~D40|B2ev}{|qQSMbiGXCo!I6Z(1Hve@G-BifMKhdM=V+Rf;Ntk&*9G9G zNme^iE!b*OK0jZy#|*Q7tGOha`eh7D^WgbA2xs(gfX1Vu{mYbjvM0xgd4b#SPo7D` 
z^50kCK=_0|J&wj+m8M_>+MsAmdpr5i4&$E(94Sy4ACS9>WSXG_rzp2PenXN>txNY# zGevqO9erSG{tCOQZ0_+y*fzZ>Hz5j>3ad)z-!&Z;_dhg$?2P%+$jccbxhe-v?B0?ygn9Rd9WG#b~Q$ zKO_6PXIgB?O*i561y(IB-Q3g|t1lXzK~?4S2*%iR-l7=-H5)~ku-xU)i7){#)z{aT zVu*IAln9bAqZtZ!KV@bJmM)B5f?0i`7uKyhP?q6YP|9J`aWK%SJ2!_X~!2DW8>(<_ok#U5TG>X^Lcs zEGS?%@419&7gmscCj1dVklO+NBc>?>ysNtb31y0-(nE@Q^BqJNy&Z~z>T4g6kZSlQ40SpuvJF_8Gb1k!oj%yy;_;$fi_k~35(h2_@#JdqO7Z4rZIq4IXO7k zW7T^lSu2~IL|DSLIVET7V|Dgo-_@IDFt&O2THh<@7~6K5Cs!ct++qs;9K&+@ib1o< za2^1@{gzuJ`pBj${)t!ogzecwp#@_Y<(UXWgYzn+|o161ECnm8gnU0i&+*nso4f_O8MYm6xDQQ6Bf% zw_iH&&#*oXJW#J#lqnwQN_Gz$6SQ-UYVPgg4|>k`P=D&&{lzTG9KC%^KDEG5)*RR^ z^zjdg(w^`;9+Gb&kgR~@>SAVpTADkOm#-ifiQqx!k)bM&X%lHF@ql;WKCo-M{D4;? z1SP3sL{-8sKaa|kfok`9a)15Nul^o^ScnkUIM6QLQ%Bh4-=YlsKn7GJaQh@(;<)@< zgQLSS(>N!23W{)fW=0%+IG0JEQ!bM~4t^CqGWb=@J|B|!kwPKN&{tHSr%2;y38N)rcrz%X!1cAY zuWo1S_+bUr6KJz&;=eo)Fq6>@rj53Y!9gPVX>ui{RzS;B6EE&R?Orr=` zJ^UkvIn+2jr13L3J6yp7J$=;E4gcODZIE?Kx|!>|d@4@F?ISJthv20Y$FiIwW@3YE zxc8f*ImvA53kRavPIE771=0(Oyv8fn{^)BNlC7Z%0u^E%pE`ClZbBzNisQ8BAd)x0 zG?y|RFV@249;>h5Je&fyZNB-=U}X4xnV;;x2|N`D+L^6ZU9s%b{aXWm<7T$HiNp{` zemD^GlNwqHGQ}-fr>7~0<@{GRE?|l3p8Rnju>-whh{hkGn za?D=R_`*V9o$*%a*4o#rbk~#ni*FN>`K$ysLQVNH7jVTIzPUNw+!^+7#Z&@TOWA$I zKXmQX-#Q!clJjj|?9I~mFw~VyVvX5_AT4*qKp_V}7%cuf~7g^VscHR2>2; z`t*n=w+R)5ZL?mH&M&(&w$S>19GlW0e*I@=>R*(`b#o+t6A#lZC?)f6DtNR>77NYd zsfhpGO`X>XMTZBr-l>w&My`}t*5})6qOv(05ykg%8K0`IUWfSH86$B${bR_GBT3H#hZm>nG_#(AOeGKd4KN<{x7vji*&l^^$*>oB;DO~XM0 zK-rcq11Lq=`!*MwJ&g1loh|CnxyP3=vEZ>aT}fMov@1Wam5w)JYSS;`rOE2}459j2 z@ojA_RzDILpTV(*TPf!L zdm@NIVXRm#Oo;*42?V_R{F&cwo@I{hxUz7C*#pC+nprk{B`f zaBV)5v5C{f9m5>I!=H|9=^-W+HRSIa)CFd*dMi$VezNft`)0F~>rhJWUwEWD<8m`r zLDFm{Sb!y2fv$_B>tj?1bB6^c6ofP6#vk&AO&uIw@=d5*jRt=V_i--=b_)Mty}F%R z?XVQj-t-pp~M>EdMRJ6}F73i?Y+gAz2qlIl9w0MZCk3t`fdFU(} zL3-r%e7vSD6pvCd1iuvDdkZ30okpYYVdu9D6B#zan~=-s=Hrqj;U zR&n3^*+mUXo_5#*v-4E8N6|+6ChL}%@co6 zsGaQMzu`4@a_997<(?5xmCRe?CD1R8(}LNqN0ZzcGXTD&*Ce4QT?|F0D?KrhqIQHU z38E=-Ceo%TIvCv8T+w%Htk}fWRM*tU)N@&IDB~kDF(d_mDWmwA=qIEA3+1TrB|LEx 
zR7hEvY;}l=u=|T=S*B11k_FES9O1`w@Nt(yDm{sVV>`u96vUVWmLk9rVp)<4SnWaK z)GGxFQ9)MGQ8nRo-mqxT0y^OIe`;-&@QfIZA*=S?h^+(gVuir<-kL}7gjZh;X;+#7 z4?iXy9g1B15P{h#oL2HYJ%fVB5Q@{XV;x*rW!^s~Xx0o$8p3i=$Xq3N$lecm(Axi|d<1}I94D2J27zu8zUutYyCAg; z-t{cxzLPvBQpTcq_EX@I4o5OZ&^WF4WcTfhIE3Y7_&HcEM=PjqrGPWyV-41D&~FQl zXCY58VB<-0Lx_aUD0kFTS_eD_Qr8iw5BwW}PyUIQ=T@X?2dhs{dbA?aeI@Xfbagda z*yWw=bz}Qtzle>mPM!;@4cdn7;EdI6C7G$H=AMTshm_cUeXp?nF~n}dFRrPBj%rtX z)w6Fao-y`xh%a!c{WlhuZi*b^8{M(sxf-bXJ$Nu1i2J6;ky7Lu1`&c{K}ZOavzQ6F zK!9(T;;LsTd5H>UJW1g(q9egi(PtdzeQDEm{CJhOr)jMq=C$RHlb~G)DY}dwlTlzO z1rTUassW`OG(Y5>MXo8MH*j_(lR3O9^?o@A;%mlPc_qBi>?1^l2yt0 z*JmA>>lzEnl5YMsz8*^M28M5H?=<2nbV&r4LQJwUEw2I2?+PbMoRoISWwkckA4YVT ziKMx@9D+jwJ8VoI&7*UztWfoW4PdI9>XNpS_w@Idy)w*Z%O}B7b6YE08tsp*XB%wf zMplrxGYIT2bN~0jdkC#Ekm6(_?Nn;bkJ3sz;#Bae3pq4(kBjFOPzG-UKG_iCLb7eH z31T={W}ky)>J9FTSZC=##t1J(K%Go@47P(3S;hx2PecLZT7E+Tpj1z zxW3g16^BNN0&xNvmm|P^YeHafHI-wOSj+1i@ zWNlz5Bvm{$W=JmVL)=nIYjTZnvh2(&a!`)cGRdb@KbxcFZDlM>oF_v^Q!$qlJ!1{Kl~}dI*bW<9JvIvHW47kCvu^`b0Y6;mB(M$*set3t#;{>k#EU zzAGd?j>9-aE8asum&G0;aX5kzGZL6snB%@bdz$hJw{{Tj}Wa?CP**)Wl&NxSy;9v9@|tEo_uWTNrH;Xy(!fGQ3G z@5+Yd$h>rKd}Dl5V^6$46l;9Z`9$6vAj#U-uF55eWrfjE1UFecgMbBw#|%ff1;sQf z)nG83Z8aA0oRZTUl(ED})96&eJN$uq1ZS?#E!ykYU_V@ZZBpYfz)4fX{K1Sh9yAu} zp~RspDQq>o3&B&AJS(pQh4l3Z9@$qjvBKoPJ(Zy%HvDaF{5$#Msd1@=w+Tpm2{x0&W<)$0{@l zA4^w-l9uCg1ke?V_2v&e;3SWkVYuYvBnjaN0kCFbPq1p|;E*~anwXfFT(#baOAweh zBpdf_FdpNe9g%?2Ya?Gbux_T8GnvDS+Pn#h23#_dhY?vODw92x&#IhxPw-<}_WE_2Us@zSm?Le8> z=>ROAOAbsa%CJW@8F-Dtfa@)y5(=xzb&x%;&H{VIn8XBXG`s2NVEMtBT&9fzcG&`5 z;PL4vEHkwN-8BNKT0rZ{L>~4=XxkgtXAm;!)qKi10~CL2@;LokO4q7Nbd;#^8L>Bd zY5o^`*ALX*2w1|_VI==)EHmAxT}mflw>@gRPf1WFC2yyc)3Hfpkeivm6f zi6k9C?uP=eNLvCo4xCfmi6VA949yKAPnB&q@NNkai<&_pE)rl@$gKE+isEi(ME_}NZN zW>UE%XKqwu*$Nd^pFjgp6Gn5*0aUxvs)Mg(;?w1|n$|ctN7pZI69;EmJ@OO2!#2{HDvt%=0fP;!K1BF3Z-das z^F>cTo1Nzh(T*am( z>hrLzf(fp7?{(SQq@Va;O8G2Nv6D~-n)oDN3t)mMkv@c*PmM1t>g}efRcqiSmldOL zSzO7RqCbdT1;+UZMW-TheXfaiqx?p4VF|#Xk5HgbK6$(3QWz_AN<`~JQaYeASMP7M 
z*=LZFnl?&Qh5FAvhk6M|`$PX4{Gy*_A56O%i#=BDicf0!+O8ou+#EZ&Fc3qQDj;?k(|H^JQj3K{XeB1>-{gsC>H1Cbv@bNKcf$15|!B`@$e&Gf$Sc5Z^4N1w=lOE{n_r-k9aHWas(KH+|bq+WZOE zZ+xdPQfKYNt~1&j`7zEqggVJ*aX*Kf3wy9vGBZ|fo4yys9)l}l9h3@V^qKVW|3B_R z2P&=T{FKR487xrgjqAf6BK6?R8ka>Kkwk3Gl1zFAHcY0+AY@dg;vLQ1uak?wl#_TC zw)TpH(=sIOXFgOMMx(=yF0K=ZOSr;r^rckF*jMRMp12kh@Xjg_AQl;& z#PcG}h)-yNqyP<_))DJDg`9_qz&v6(txVW!J{}fhbK9gO$Ho}txzT<8Wp*s*ZL^#^ zV`}{CJ{$8X--|gKe!TR4f#VO}-0Fj}iZ8D;m=R2>yiVBFw-}AX)0{wjQZ>b zEzR@RgzW(U1ods9;r4vCP(zHd&wx~QFpmgk-e6N3i$-Ub-L^^7w3X?-oyOUzXYvA< znxh#ou`8kS3CyPLoy0vaaQU;;P$;HqEA!#3M&~THaI;RxUOcbgjMy?7!N?$O_)gpP ze$*WTJIibMR!QngYx9ZqO1hX?#|MK_WfEL*I-=nRm#7arBjNRn?&WY>evP%a1se|8 z&Y^c^;&W!u3dgsjq_E1lG%0M8qvhUboNUm||w+p2hw>}__Db3=YkOJckaWQwkUim1*@RVnXwQ&;!i?7+o?W*hvvi z41d?4NkwMZ_Ek{Sn_({8Hmz;>adu%4X(>zM zDyaDRy8H*a-Qyp%cfUR5!xHVliO5?JOB`byM=gFKzKAXtjji~xonF`+^JtER7%}KP zA?ZuJSCADF>zdW`k(qq|FLtE*XfL?kAmbn zFEId6obGU0x~W)isOWj`OJb5odxe$FyNzc4`157D)>^^{SfJ>ZuizBFLsaO{!x7)}-T!-tWv}lREe($}6gDe9}?Rsj91*P=v{QU^vwIsfiw?vzDSY#BWqu=bK7S zK=n(V`B6*n&Cf;beds4CYvz&qDnw2hBq=jzBkuOp!nn*cUJr#qzDNBoxKfL0*ZWQfulkwB+#;QL;@T!q7r_xS}VUPWHpD zCQ zL@Ju?vGW#u?$|F2pUQhRI&+=xB1QtE@;lEK72{gs`jx^NkSX8lgbpH1e&C8`wuafZ zT|29#v2fOdr_R3p8OhCa@%KVqVlZ#^mHe-jsNcev`y#e=(7!*Ad~e4>m+jeaJRMMR z{_^K*&5ye(%K_O0s}i3ckC*>RnwVEn3Z`5BTyiw<`Ha=*f4qQ@6Dow3B&3nR3nvz3 zwUtER3l06uk2Xn&9*gNVM^c&mdZ_~>t{&smK!8M$Z3W;A?8R$>B*g>}FOSBCeKBke zPC?w`!mrRZ|2g{*jX%&00?j-?oh%h1OlAanG!egJ7?B|&O3IHfS5}FtE=5y4z1Cwx zk)OR~wx$^tZ)s0F@R}qDkN=O?{Q|Ax6mwTv1$&b*=d6n1NcDO3{O&FLDoPJ~Xn;U5 zpv2>MIW!$W!On+pbX5_IX)ojQfgiWaV5(uy7|bk|oIU9T(88YY*X7|a1B?IuR^ga& z8zH)zOX|^vN2WO0&fO}n8(Y=~&?FPz4Tn24HY`VD4VfUAzdT!?fFr#7nj#G2PprJD z;}i#cvbRntul{Euri;Vu{jOB>S)l**s&H&bHc^NKMxqs-U$jpS-}WB#<%iBk=qXX= zy@BElnov|#Em~F~)r1IvS2s1iTl9{5+$yc%98I3gb{>`h{Iu$)zhOG-v2^Y!1A}?qYt; zYS~wuy7!3~nGKKNe>&P2%s&1~W>F^Q$eOTjWWMkp?I)X4RGTbHl-q7TrjR?)FFy!B zn20wC+-rVAt_;WHkFu^yM9?IA>=|W z=ltL85qE?+XIS+H1dzO@C*dl&*0FP~hn9JfSQRcmRP^%J06u)b5UrZmXK>XvAR 
z`_yvQ>|%teQ0^zA3kW$vB0RF!$nCkBt#>$t8s;eX0_fi;ir{9A^k;v7t`rg+Cy3!f za<+Pt6es>rzftsOPegzuN;0iZA|-!PZWd>G#|5&sj0~=wm(+?H)(5L^_nvnMz?4zw zVa-zPT$iB`?K{?>h|-5xJki69e4B@f9x-uKk)>e;-j6j zHMeZ8$9>X+#5AD@=D~)f_@_e3a+a@FInU}a=^)2Kt_eDI-&&TbAghqW6qDiLS>N$6 zD^iacbR`iWZ5;|N8Ltw@1YXXG&x373DGqgJaZpY-)pj_Pw@1(YQ5yx(R;0WW+CAnJ zUe->uNpGSFJh^+oggvH)FBEA`5Vs9P@YhnRGmTugN$lzVC8I5QJn&JFMM>ygG)+?n zX0**Jl|fy8wEKSjc3C4K^&>#m|5qo8a&<-%d8o}Zmff38Y^_ewBOW~0-#LTSfo)T3 z?UUxTG!XlMwL`v~fQ=ImI7l|Q0yyNPBx8IKOUdWN zU|kThg)7_I$xPd9n+EdimVY;l4}Y!t^t@l$E*byD^vH6`>wA4J$ikfDT*Pt8V7p1w z7_9YqAU;9u9g*Kk9v!q#9badPv5QyS>-#R2!(Xxvs4E|BD=&Ytx7cpPbHDQdvVygL zbApUJAEBr#qlae$`oyh~d^sn*N^MEst5ZZ*ltYIczrBFv5SaR6cbaUC{kV6f$Z>(S zJKJxLym>FDN)oxa zf01=15y2#nqQw;qqSQ0S%fQlwSS*X^1iyY4J&nKl7JMZ@Pnz7bjHkA*GQ>O_PP?K4 zZ{J1?9GCv@&IrQuMlsx5+TvbWdE)tE?fhQlKh(X%=&I$xdY4Ns)cLD(qLcdfy9Gk@ zASx5vr9Hm@c3Ql7s^g>kCVDKfXF*ovXs+W2{kuCypB@qaosZb#Fx|xnd(&e4`#B#H z^?RY}W63m2-+R=vDeJS97^>Aw+3EpxG+O5#_8J1%_2z#R0yj$>Z)}-f7|=VuMuu_h z#|+7f@DkXJt*V$1Gf7d0XfMNmjLg+L8UZ`92P8MeH{w-_C;HSxvegjip_5aN7(Xr~ zCGnF|zZ#q4F>dh05|)J$xT)4DN+yyy$Up*)3djnlVjO?Kjngq`T~KZ+u-O6H9l1Aa5w+y?6~j6JSDtO!koG)iB0W6 zV}xCiMCw|?8nc^327`=)dNR~v`muAQ_wnicW3$YOBK}nl*bJNc^9^K9Hr!`lXR)Ul zs5q{xDrDYEOhQwi(a>H+n{C!}h7U}9npS@OwhulJ3L#R8!HQw5?{0u;Rk#b=$wy#O z)=-sxTIZBsVZTz6H~C#il$Qj!eb&>f~zsptpeTclz{v2pcY~&T_I^nzd z)Mz9hVYmJ7pCTz|#w{IqSrimL8vM*fqzhnW4{mn1aUeyCd`k z+E*v^o?0@mspehJ{Z~wQ`vWRaV=lcEF>8D8e^V}3c%4{~lr$t_oboh#5s|f{kzh@aQ zoXG=ynG8@vHT+5>ePnvNHRdWATL=bJn`Z0L zWnSk5jk+#ki}g$koxYb$O`N?lu>g(FbNJ>02TF}`5%pY zz$>ss%C7#;x{#$GdkkpjiBv0zPkylrPJbZV;me7UzBMxGe0^2EG%Dw|nK z^uB^X-YnNoK)%Ff(7=1RRF>8rCw{E_^OH64&|Cz3gm5F+!e~O^4j`Ril3f%E3d-`2 zbk}kIl2C$@CuciTc$1j}Am&6@AR%2ocT-N%>YQo*{Xy%|Eyg*}1`U?Lrt5j|R$eLM z9gs%PdB~+-2a}uy@Ub_GYFXb_*8JlQ|8qM*DM9fj2JE{LBVw`Wl(=gjG}%`rypC7@ zMdgIk`@xcfb~Yi5!&krDUKBi`(<8CxJ9Thn^(EHF2q$1D>8flWv)*aZTS zFnfqJW}PhMz?Xl!eCkE=4|f3n%aR zSVl!XVke^LPzY4C)=4q+m!*>Eer45F+2a*WFsqX{WXRzZsFBIB$hN?>y_{~YvKcSo 
zEpZa%)pX*>Lr;$!jpTbSLYX80V}WF&41qT7Q#T;hC8Q(zM!}uY(FtA#s-&isZcEfn z?B?fMePjVyLKXW%sZv}<>FrfU)a1jRQ5~vpW{kl}oKm0I<91n)5Aku!R_^7n0>Ob^ zI9ww!N5~iKV-gp0WVhVpr~A)z+~r!KKNqv|w@2nK(SD&JZ_KE%F?Oy*VF|Wknd9R(Vy#UeirKggHyAip3%%fqKUgS z*({9y$>wXF^TTM7X z&}QcN?6rZD!qv~k8#OBOk%TFm;N0|v!cr&V&mDZO?_KMP|7vqkEKUK7WnZbE(fvty zhf5JgO_E^w)8Imx?d8V*~{k0h?s=d^(E_i-4Y^&N!ss`J&QFInMI;yg~` zhtvpZKI-D=L0bcIj3Bd+Nt7iOMZ-M%Ve&YI!4mHW?g4^elVJfEB9xDSYc-)OI#`80 z3!P>IRph_@88>S}Pb0+S%f^bjtPxL+EZ&Og2l8Pg;Jqt!#P_@c+{!{daoU!iPt^rV zhQYl^GZTEs-pDY_iI^i;=@5~tad#7D=8v!bKs7(YfW+4{8fG=;6pK)kP)BbfVJB@d zxF+0970%4&hmR4Rc1T^SJ1-znMSC5UazMf$-3)p!q_rJ8e3J_4*#oA#TtcF?oF*|d ziT_8q9szF&Vgx$TVGf8cxY-mA45i#O6jSs*f*<(d)dv4?mlH7TKgJ^sbbG zv4Ug_eDUc8-9|q#@F{o-b_~uU?HLkvnnVR*>US_tDoD^gQ#pOThX+HwrI7v~9RqNJ zDy0#oFIk{^w~}Y<+^a9`Zq}5(j8|psgpZx7kc}68$K49@svWE8J^@stMg7kB+h*C6 zgy98@^0M)GcH5Iux65XBdO1t3%G_mGUNY$xSjR~d)6-NI6JBgtdLAiL#+S^w$|;O( zG`R=K!Chx_2@(b<0g2>sjbC8JI}>qC7l2TXL68M zyRi(+-L9up0>-!bLDTpLrV`yoTpPTy!Nb)Ri_=vmN{aay!B24*DoNaWNC3$qGKOHD z!7yO_$)h;o34*XJFJ!jwPQLIr*RSeLp+AnF5Q^l*maB>qr9GC~N;<$^DVe)jRWFG4 z%jPKkS=ICUKd5~R6<{&epnVnbq`z6Yu%P@E?>QTKcz{BcK57A|cjVJP*|m~V3i5a+ zT9(ejLjsWbqZ+Y0`mR;a!-PTw1&^T}p#0UiGl8 zu`pR7-tYDG*tE0X9kiw9`O3HSDy!|dkbg&h4IB`Qac3ZdrOYyA4o|u_Uw+3yrV5$R zD~+s?*!aITrJ%P`}W9EsaD17p`_0n z2~^YL^LJSpnWULI16nYNv_J(cb#f`9qm*XoVdg#%M9+$HOpL3!AVy9#fDG=B^v!>; zdQ!ps(IkM(2GPj$ScV)IggXbLtLw#Oc(DkiiYAa;MU{mVfcw0mMCX3Hnnl@8zZn@L zP1}yrdDSn`V&+6{qxgH|#EcQW&YZ@c>jKzPw+VT@W-p zIZWcu1ONdMDfE{J(AeLi(bMDHoBq>jq@U1jf9{pS7P|g zun1sy2R<8OW?P+K@8XP9HTy^{z{I2)5g#A{8YfhM1J5~7n&qCW{S+~3Ws$_Bl#AFvO}BMY|* z8{h8w$3Y(?oeX#frdbHNAghv+DaA6IL<15yS0xg+UEv^lAKumN;#5Ow@6a-Z*oKMH z@NWl!#qE9Xp=_5RH55j#Me17$pEGB!)mmDPt70B*Qo@LA$d$1T4RQpnU3Z~d_cxTV z^)L|`CQjLc8S6|>ipartWz3~g$7l%=vR)Oah^3lA$kEp+5iMIEdr1>55~1|yKzlxH zhR%v&sIrw#k|02)25sTh*nGdcL4nWlpVM0%6ZCG}t#`3jt{E(gCmcf8dHV3Hr-I$W zd!U(=9wYO(rzcF){HrGrb7+UVpPP)?vHLLSBc91%lhKps9LyR8MEQL|1_N~A=G~t^ z+GJWXiBY&ckY2L-g%o#ub%hG|6Kh@H@Dox?7kYX-d~heVmGJS&*v`V?)#?7?Wv)%4 
zA;)?CVk4jImAt}mQq7+ZB1e_4zI(RgrBWy9y05RVrrIS77*BD<9~=-k`j}l^vP+Q+ zT&GsH)7fVNK~O~DUB{J1*S}nE2^=+cge&5TO`%DbpEP_Q7jOVSW@k)n;fOGBs(gQi zn*D$^o*w}yKmI2np&&e5mc0cR9&Ga@m}Z5A%^athD9Gt%KlOU?rZ1bKKj)M8qa|p6 z;|ck6HanG2ez8j0@eAl`55m|`^5I+X>*vJ*_3m02P>&QRrUz-sdbO6{%$NV@KXwgV zU5EbkJBSZ#7Gs0EA1tsIHp^Wfq;y9MxjMcp|}fKvz?cUr$fN5-3+&skgK5>)U3wfvxv|a=;le^G(J_#V1y*s8q;dj$uaR zQ&gYIU5jUl|COlcVE{B^nvdf0_t+Q|hGO=`0Nsmgx`x_)`=WnE&rhe|_5BB2>6RyRpWy=quiG;m;>j@VbMH<|$0zM)gL&{tdMo~IRpWz!P?Dwk zzBv9&OCcnVMb$5r*y_|kpqH6H?Tuj9+*dJ+;F(;6ORnYDS@K`;#L+*umY~XhN6qZh zR3E|za-Z_sB_#D>a4^VLy%pFLtD6AJjq=BMukKJQJdF6xv=N;tTEI_a+s|>#hwrtd zR~*4-?}>Bat)P>($@s0m;oybq3VQZP$gcTh7L}LfSb~{v)4lGXzMP^B{%&J%4tmF2 zX$0CK6T|%N3cXS;iY0?UT(H8Rn(2@cd11RXE*JSu8gv>D0n$E{l687z>7blnEyiK0 z@t*5f)mP5%)dNg#3^;j$*uz!`2}Zgg2fse(MdQw~Om)%EoZx3qd?j9hl+KKYY11y@ zX+fDD!K4pK-*!MQ#g!2lRz7ncAKbY;u_a1AAh%_*z1ONcZJ*+29meooxdczuoxsvB z3pG6Z6(+E}^NYh{u4sgQtli;*esU(PpYT_9z(dOZg;cq566)`euc2ZFUna1sb$0z&|L+v6VW1qQ>(-+78+D&OH|~^HO+b_jZ{>QQezCaF+al|8 zoy>6s07D1+UC)KHUhAi?O|}{dtX}l}wjIF&4fiEX`36cUd8OxD_GI3rRV!NZld02D zQF^}Y#jm#-OXmXRG{L>!!B`eB!uV%`X7@Uj#0IWbrPpw2&o6D_qAU*^#}_4NJLpr7p3@vsfo>ZmU@`R22`WN;8`Z{> zxefw+5{i|j>0A=uL{N%1x3z#8%68n@=!$6o8%pV_3F0>XRO-J!nedtOKQ=&R1I;;m z2e@tSlA44Z*g5IlSIEqJW;a*vXRkS44zCZWB(atLO==Z=`=uCke`a+5u~PHp63Nfy zYI#H~+Jl|}+@(QIZL>m_Qqo%UA@JzyA#RnbwzV&gF_H%jD}=b)Eh1#-w!INxoT`cw zi*ZK#ez_e>5ydROr;htAX^bH_mQ$jlqNYNGkhGAl4r(JP)N-0)D3jjSed^u!Xv!B_ zj_jYc?YDwOH~)Mz`t#Acx1>k-oy`;_5FtC02AB}dIDW9-p7798066FbEZ&SFkxy?( z{*m5*$VbNlCffU>gZrGei|m7UBBd!Xl{N-57CQ5ofwV8*2E(!+*edcIM_pBa8dK|| zG=In+Jm`#<9(r^50}t89M7n0sSNdEjXTnJ;oiLUPS$mVroLUZ(y)4IvvaD9vAW)zj z`5gC&z*H;laZ@TPgxOsKq3)qeI2&9*-JFFNb)FVY`7*iL@g!8`@o{A-EpJKc`AGk~ zr$lwP!0V5GGf8=P;to6FZX=uC4VR-WUxz)zJ@V_zEfB%&xVFHL0!zQ2#!xO{1a|0Ma``P-E zRO_u@6OXiYa>JX<8PW<9hTX!v-9!%cfrTWLn@oU}KaX5~R@tkLf-=O4d7IRv+dKT|AuTG@yPa<4f z4Brd2-QON4`G@rL2}&L})2jn-|NE@UwySO3S7#vqLlCnfmOugh^`;WpXiQY95xHvZ z-PPB-$Ff`X^WL^iOIP)!!Pi7s*;7XTzA(O>I;`Cp+plVW2kaGE7P~s1!8Z$u4wd$! 
zy#JR4(7gBE(?o}(Qj-eU*zP=TlvKIn=3x?>z$o9J2?HJ;MZlt+iJ>781bJ#4AVC-mVCXWQl|jh0`qaKPl^cJW7yg4Ut~n}Nmz z@0q01m|2}A2@{3({l%5~69ld3PfX(uTOaXYGV1Iym-e~&6!*3vqf?qaTCe&f^Q=>( zuJSK&&mpPpERf?z$(bezkCR*>PPr0%>aDLC287(~14Hb`EHjOcT_68y_yPVh=_mT& zB$>PX=a69Xd>izBmy7&L*ZbCsZjnKi9ZVyF2E&e{uFAGH7khq|E=}rs><=yVJIdHg?}tX8I3> zepV?B+kWGF8+Ax7mW?dL``#;>nW<9{JmeYivcKT)cD7mRT&J$IRkM%MU2JaGvK6eqX%7&R#lvAnq z1(*{jjK`a-5NI^aY3zoVOZ?7TS1)K?`!vW&^oiY($|fBb8dkdWNZ8K&G+cCsI?QzgW83p8xaM?_%R9U=i2~yr;UY3M4;! zMi08J=N`S#;LoW(jiDuK$j&O|@`_9H-k6WuX>1OeQwHmJ24hAcdiyZ-gnTgUtG4pc+J^JP%+d5H0=M{%tE1?S3VI_S@AvxpatBS+Tf z=~^D9^o5F!p6TAKe5`UJ5A7TSLvbuif)tlf>3U2!q48@NDc%ER!8oK!ng!2c-P^&D z$wN~lEe8!tVG|286nnqZ`Z+1hvD>Hvp)$C)*Jp%*9F&AP{Vp|lx3uIA)=RDIbLn2k z(djllR1emQk2cLTd3yWMBBZc!ugY^Ny$VFs*BO(0ST7xYciwz^cSL#B?YMu`8nB^7 z^Yq)Rr-u!VZRfWG2%UgL|45+F#G_@DkM8%3J&$hZljbpxMxW`^U5iY`)xSIBOsGxTv6jAfptB=ma z$p~sn;`MY{Snc$cR7@VSoqZ-w?N=g1<(n}l+>09u(-zUZj*+HoXZGW}iB*{kJEMsv zTRxq&sSr828B2R?mnkB#mr!}i!$S`tQCmU6ZyavAh{@ZXWKJ^ob;Q18YL<#HU3gws zDORT#{C%QSR;2f^QkEif)`h=lzpYabaJ@XJpZQAUD5GHr*k0M&l!_vKTErtErD!X; zzKniC6)I;Zzu!YJm?*kvi{k`>PE4_pADCe*L(+-5irpt@L8`vO&*0r&fK92GEN5>y znFN9@eyOY2ftQ~bj_1>Yhc*~Q*1;&Oigc_~;d%9OHDa-aT?B_k_0Io% z0c)tTyy;}#&Gw}YU4`DO{z7pSx*dE-9ApS)^b!;&MFgVV%C@l?fe#Z)$TlfuNexBg z*Aq^a+HjgMA=FJLOEj)^q*$r-5;8rgFlnD)w-BadgQpXLfpt+)#am1ew4*X-x(WVse=MBb4(^ z9rw@Alamj@f^>97?e3GQ@`0Wt9^e zD6;vgkpCe8`xG)eHOyMLnDJp?X3lCs(i$1Qtl$h8(kAIH4m~QiLc?aEe=1&7*mhM3)H3$iWYWxUqH5JAA-js@vp$_~<2QitSmTunZB-&%XKE z;fGs8e%A>AIxe^AWrg3R=O&;}z}E_;+K#53=5n!>1)QxUveG-n_0< zhjs_Askt7Dw~1u0sWmnu!7t}yNrNOX#02On4Mw)=XKP9;LWq?_V=ojw9Qslfv44F+ z!N$o!w}gfP=Wfm&dMe1drYOK-pYhW;sVSR;C*(?Gv|R34yn{;0K8xkkRU{@x&`2NNEA)r9j=;bazFlJSy{ z7&CCxg5O==F4OuhlXWfA4~=dc9=_CWrVB{rkfNlG+}g~YerY^mvQ+!68tadoaoT^U z54cD(a%uOC^b{%*(J62CoJU*pJgP;+SC1;_QiQ$UKoXnq+l!&bjDJ9^ox~6EEn%A6 zX&ur>($(-xUZbSFTEd~ccuGiRbc5+iElIR~qWVnGPiM5=WXjN5PP}4_7Px@(b}bcx znX6W!g0eS0E}eLQw|#CRdzvO0s{JlRpuUlQisNywKjBQ6uPU7saDot=ng~KYUV^}s 
zeS^;^r57ACVy+<#rm?mqCmvFlGPNrG#i8MD^&^EDI=933SMC{Ko5S9{yAD1CcA)gq zhjVJL&M6aWwu^stbv^_R!vD@?Z8ipX_eMIc=}^OvkD7{tuN~zmf7c#D!cV=QcRN&WkEgzW-0=GNdHVVI>)-(Le^)&$ z{{U$59E?_8u>=1oJ8hi&(kyHGAi%T1?JN|iE%OL&&Y)!JHw$WAHsI?xuX;y4R^ZdH z9JDx}Q>mXq`=m?{ceYlxV+d7+^WEZ9b7{&UX4b56^_PAg+usGugt3df$D>Gdjt*1M5^{S=$lI)cf9<}6B&x$~H`j!GUD`QSko z=>zhtUe6cn(sEp1wsZ##xcbKeA*Q$?=ij+&d>MVNzdXI{;d{gh8n-~UF%mrE`_t=q z6E`tmVD=+C))q~!w&a~gRn&P#O3-5>d%M6`<{V-Fz;BN%bxs}a5cAEX-0=2qZ^+2{XPG;yc)ZAU)Oma z=W%`x@BO3+87Kx&W(pxaxrRj33~(|iKb#>H#0_Y)tGkux)E6@Dqh=-P7=0fk_t9Rf z7MH9=U&(b5f!_8XD_4lP;_o?2?eC`#0CM1%qBXD%<>`VN&P<&kt^6lUZ}WO<#mxoZ zS!I5gwdA~hOr9=4`qD>h6@_b7{T=j7~8Iw?b3k8Xa&SP^djiJmsgb_|s37 zj@Dt8cqqK@WDqLDzp))I2b++6lj+=v*8jhADw7pkJC<39YQ-y~9Q-a{KrQZcMhFu2 zu`Ng)tCl@eHv@b0=s~5>HnJeIkzU9wpU8&Geelp&{ew0IT>%c}g10{XV8CV-m#=rr zN|uuP%jgmM90A2p;T)7Tr7UP=t8UDs)8oLK(h_JkT5@kp@6Dxq;BxvqUvKa7Uqx#d z+;yOr&Ij2!_yo9R`D3-(bu$W3zvug4tz&0(%t~diamHuI;{QU(-8N5?x$A zMn++e-lsHVystHHd*q4)Zjc!eOhRK5y`@f2(<%&49+BHtXVo=`x~3eBZw_j&LDrfI zWymF0qvEUKw~7mE;&5GT9PmKKfET%J_MXD}!qJ3l=2nOc`U#XIF5i)NawUa}?-1>} z_1fpB{br@n^_QMbY3>D0dgZ4g_^_$@z)h^kG7<8I3R8EO;v1ME0cpa6+x~f@VdlDBL7z=K-$p0z5Zwci0d4_jqosUu!@_v;&~KN$ zd#yw3gQQN%`gh{&PWtb3o{Wsme^+R)i(xNlCXSQY%G9LX4?JG_N;*hSxTJ+JTS#y6 zAy{oD?=NJ(u+s}gERUY95Qo&;*%WTzV2-Km{c(Gw#CI;MY zMdUU#8|fX%ghE8Vq3SeO(Y~<|B^ovc*ZD`xvKyy49JHKRi!4P2(?Z#E#45^8RYk>+ z+LR@x+)3wo90O49OhS(lZ}1jP?11nKRj40pA=eG@38+v<7~N#vYbb%PyrvzFFRNeJ zi@`eiC!WS8IREOrP=Pk1r?quT-+DLvVfvS+ci%Gtq5NHnq!mH3FoLuD4b`F!?wOAV9`5d(P2O4w zfVE^2@B3J^|EmWwb+CJ*f!W3?)()w3=A5SMs8ljPwB_;3c#hu_C!j#dlB&+OQ~R3C zI!pfCR(lE`m!}q!l%;+w3Y+wMetL95uoiwT5~*Wo>{UP}=c| zy|;JNCMR6lt0)ovU|s&8Tkct2gjjGeTFs}odcIp~?%pNZb@;kY=lLT1q&z4->=|Am zpZYDFQzhc9|3U2RbmPDQ#`=AFAeEF>lcTw>v`Bn`FH2-}my@Z1Nu`*EG;l|dM!B+nGf|BuzaZbT(K(7T=%B{9gu0``vx$@QV+U-{%G?9?o$?s zBh1$qO7ZYYQHmOf#&(|7*^ebNpwAQdReqYF=m9Z3HePCR&6(KdnVl6Um4QK*P28f2 z!w|zHX8bZhCs4ejj8X{a;{aKPxr~oRi;L3_#D3c)e&(l5S-boF8J5l9`?*L8E5}Ro 
zDyp(b9C9DNfsF@=qJQcKoye+_qxpXdvJl$A|SypC&ZbYGW~&ZnntL#Ds{ubM9;B(%^}xmy7r*tTx#?SMI?_ z7Dgu8X~isS#$NA_%dPJGztx3j)u^RJ>{SWBXiSl?ad?y)F-`>wnB`K1^&1&c>+AX7 z2h8Pv3K03S${T7_@$Nfb1=(0e`D4I!=@zvo{q$_*n}>%FMs?XJds}$8j<0?M0sj8*Cg=7%=^+b$m;Ss$$QzpnN*al9e= zdvo&D^L!tGT#f@R6fQ~q{bV|BA{8UQk%~3~=4-i&Ir(i#8QR;gbX)?nVikvzUZv9g ztn5+P-dc*JcmT+2&A@0M#KsiOUgI}W_a0EJZ<0cB${cB@8xi9jUn&CIU59p3bW(h9 zr`s_n$mBv*(1dsCj5bLjcuz0bS*z(vE5zX-je%dDqiuc)tY*(5vfOvSWoT|+QAlty zcKuty!O?r_)AiDQ8!QZGRyD&S_iyL9?md!M#6$VuGzbPO zO5kZz#N_GQn(OEHMS+DLmi0wLVEor_wbsbZ^9*!v>~G{u-bEFYJWyjtVj)UNU3P4V zx6ZmmMQS=1FVfR=op3|bbz}Fy)MNr`rzGU&1K$qwIar*kI`*8xB1p(YZDb%AD1=I3 zQ-Mh_x|dJ)C$0Ds>UssnLtU@OBnXuP!MtMCdK#M=9_aeL%eI!BiW+V&KRHtbu;#kA z-K1QAuhbnqjm%m%AQ$_!vQ@zvbEDZm+iu6+zWiosmRMJ2>V9ALdr><&{=w@yiMrkR zw!+3zNGU*_sNq)~rY>RW@eKOQ#KKr+NZgC#M4bf=2FjJV;&s-IG%6&uE7x>13I7@y zP9zQv%!OVRBs=fiE=st`S?W}VGo9p_c3;3EMmgS9m~aj2`=LYz?sPIisUC~t$F|4@ z1{VaeS?p_KVSf8UeY(O#!;ak~Nr>{ooE`(Y70(7wf=Wh?DLMVD#FKF!=Mz#1&e8|x z5~F)}6E`#mS^ta7{rPWq*EGQnbwj3n-3-~TFPR{;IH{CkZGOWC%q4ewPv{^_$CbDp2)Z#`V^Hl|w2Y#&D}rk=mK z2d3%{&2}SAXf$6tU9`VNsY##l9!@O8)ffBmdrS#5Z8#j}_)}~jL$N;avj^_+%r%#V z&(GK8pX*t#Md(^GBExf+a==_3@oJ zQh%Isqi3$%lW0lSl)emZ_yUYw~rY0&Qlk{}u z{+!zFb4-?WRll{tpVc^9^g?tX_|)s!VC^VEiBIz;UJ&dzbFIEF~Uc*K_%v zp4L|%hFgks7)^?wQ=-7c0c5&=ahKrOoF!JH&nHUyf!0{s{f)V)P><1XnyU)G?n~Zx z0%lgK=G5;_P~k}rj^e#sU}TGCriK`%5BH#EYI+6pEk&Se+xz~UWIt5nU-wOiZI}!)VGP z94fAMif9!9jt#a$_yOv@n{>$0596Pc@H1h11fHEnxNQ!qEkR#1RR6CTRm7H9ce8zz zr@73131Wxgx-bipl%qoaX&#~S`+kfNw#d{AY0>E_#u;77u&8zdnkP1I=&@R zd@;4nih%Fs-hTz&pA2(MJ;Ic;Zm{Znsa=%bw-b4m0x0CklN*adp+Xkm0wKx< zRJJJm$}S;)vC1+{unmwbEX4h9X}Cg?q9+QyovzAT?iWG-nkpj&WC<(+Y-P8poAUzT zphvL(E@IP_DxRZBFo*4#yXS0oo?l(PBHhM`ysWxh5tEa~b(6`(%I){7;X<|54YOYl zI8l*jDS2y~O(}&DtS6D{0!#7JF@;|gaDx>;t5rscH-f0_LOoEeBJNT$NCT&Aq!Lm1 zoQ46oH*`v37>4@zO8lLlMKF6@I@#!ygvW&QFo@CD)dHF9AZ4zgA;P!fzpC&CjjRcL@1*Y+Y<9FL3U+5qBhVp37&$SVR?fe_hervFNfFZi- z*pQAv`)+nM9(sV$LM&hcK>{pwxBXT&*2H!pjN3SKvTM|G!?y^StD3m@^;Q!LaqIn4 
z;Ot{j;B)C2UHg&H*0qmX{*Qlx1Z9EG0P6m+xL8&R0r3+tnq-lyGK~`n>tO-SvqBZ? zVVox0yLDZUnPJL2ESorISp*7iFv4csTWT8vUwbLSThK2_1sU0`L#F+}NXU0{3l-dA#>KYlBHTX< zd?!4f8J?2C>WWUJ6@6W#s^v*pUrD`0c#O4t9fvm}_@?vyKAcGB{!4Ikq07y6)s+(x zhtBQi4|AeQFw%&s;a*=L{}~C{qG92EJ)}`fuog_=9fO zngCdShjq7ChYj~;;ax&!j_g|C__j?k2+e55wx;=83{NC}&x>25RegT7u2tbV<-)mz*TpL`oG=jBz22TvZe z#`~kZNAu=<&QNtmCA3)qkFLCZEWH!}X>d%o%c&nvrwGB%6SL+0x#V4U_R{Wk(Euz| zoMQ@c-ZOG|9CBI*a60W>24#!!hPEv{OEf+j6LB?~O@BuO-#ZIbD`E}41_*~M_Vo{n zdsD}~Z>Z86S8@x!Yqm+PI;U|D4Dw$)cdIv2Fd;#t4jPBL%t&5w&uUqn&HfAfHIq6c zNyz(sV)*gOO9-$2X&`4kw>Eg~yct`Gj*%`Ez6H9h4whT>fKJ6MHXXej;&hG|0G&4R zWp=(oqe0X_^ldVtk;(uUH)-^=t1L~NM0ND3@f$-%d_iOFkHf=@ zw@+igkD5tz`5NwG9_=xNLl0106wk)@~+40KgaLR(tb6K+T z`~CO6=F;kPWBx^BXIyH|MnI4XICXOTn;gtUWnhq~^F9v97pTqj6z-yFxkhS5YBfApu4q(?l zPn}zs)O4ZeY}5fujrY_2@_K{UbRs>;Ik^6*Kfmi;X;(`T{8oIeaxsPbxYA@a)YBZJ zDgi$#p-*i(_;<&;?cdgQ?^IrfeegS^Ck_ro^RJf^be8&YdsJG=x`C&d4LI~_7m-{1 z^TIFq+V9}c+vguY8xqe@rMam$`#X7Y8sqmu!Uj)Fw^Vl{V3PHLyCyJ z61?OwA3?G&c%wL@1h<6^KA#S^wFN?N{?iYsDnd9>z@x+8zCξLeomNE_tPzspxv zYkJ{PlK6aDP`SGGYu4ynlZ%tBiUs5pX{o&*$d60odTmRG;3#|m@CY?H>Kt0(> ziYhbhReib8U>nUV3XnfDRCf{Ij!gLZ(94SVm^9b#$xhe2C0RD zgYtikoo2Bf#v>;{QRHqLswmuG5)&FHa1WA%f(xrRhp4*SP;`FB&4Vf&;5!X1$zII= ze;=iyh*B1LN2f6L&F41gM7``g2~{*3t6u7*%?j&A5Uz9<6&8B_{`fPi=VL?q$KCHk z9$$u2A8GInl%YI7pWgf)m+31md6e@FNS%FmR50>iPb(Hb*fwEDSU#eeDj=sCh|sE$(?vFTtPG-x}=nqJL=NylwYSvpgJlMVt|0 z*?2nZo%U;Qa_(E3w@&fvAP*5ti3{#=XA^#no;I-Ld=iK*VvEp@5Q9Kj?99U= zu@xmiMx$a$x#fqlw8bAhFTUez)c=Ab|9F|PB&vxFz_FwTJ65><$@=}PDE^Px$KrZE zx)vtEpXu7jzpdYVb+pSiE6GCUwY6(DV;r}i_fNPXD!D!a!3{ob_F*oow@}e04?XmQ zBuf7Dw_Ey@1R~{Jz{i7(A}UkCjCi%UeW|ae#OxDy*a8&*oI|3YBcy&#hI~0oLYGGm zLT8AWI)$kbve5?pb$jUF+Vf;uM|iVeDl=>bQPj}U-|D{~#&Em8`a5Su5dZXtAh6*v zQED+HZFU3wSrISlRA0W}8?u;(Am%>h8)Hli_6Hd&^)sY!j5ggKw>&GfzzQQBfVs>~ z;z3ZNGKSA>>BBbE*FP15az0dG6AP--R&nm9Y(b_D#&_JYnI^oOV+iAZN;eK zoJ@EJ&;QwS9~qvM@5X|kPv$1x-fH^aa$DJ8kL~rq(K=pVAa^rEA_RZ*{>SQ5A{#$Q-N%cKi@uokCeZcoD!t9 
zx|_`?=MTOeJJ1SKZLbS{{PV2@^{o_geq!*$)`4*F<(qOOMK39=P3XU&Xwyq%tZAVK0Cr4IN!7LP1w;{XW3IS#B`;|oBhk_Wych!zpgMm+o_5WVG=HCHJ19Xy8Q zZNn!=h^xXwk_Y~_J~~z-=6LpZ^1AJGkJ*&Lt6@&O1I(>H9slTwxOnKmRkX-Feh{7> zfM5Ktn#9!|h!T-*yB{+8P}`i8fYJSu8aHU=f`L8*&5x4=_ikH-HGiDiK%TYb$1Bf4 z9T|{5{?{<+OONg&pf$9vp>Oq1I0Pv^L|xMDy0JhdD5u>}4i5J&^`zdK)U3LyceGA; z%yPYJK{wF8boF)yr5$9e0;^>n&3r`*S>Keh;H%=Ag^6Dxi4P?UYyt&ts-lOC3J|*e zzmB(slhKM}kTqy~94o?FKSD3cpl?AV78yN{<5q&+bB@k$foJ$u_K#7bt^nxDXKIGH z$~cF&{qC^(dn+P1E}kn_7jk=PhrwvlyN$|4nkOf?<}0|f_3l^e6a3f3^)vkX;7#!H zo4K^9AG2Tv2_cH8zR&BIwSJ3>cfl9x!Qi#KU#;1jXZJnN_rBZP{0|WR1-9;-vWQNl zQ7raW52bOhQvKdi#2O8S8J%0(272|-rJ=TjXwWSYIFd zPT+RN_Dz&>smTt6XWINwF=fDyLVSA8wKsF|2QJgc?-kTg-b)Kr;q+opv3@QXv6K3o zWHJ((`!io`WYt`C_chg!?{m{n={idRvZ@;~gYUwFye1q$CNz$0{um?u5F!TXDyX>Pv~HJDY3f90S1 z{~w$R(0A5Q@Ob3V5*N3{*3NT;&uP@g-@cu4As&1#?mJ6S@d=ShULdp+l|>7)mU}$V z%PIEMhRYt=GTdiQTs>4@cTJu7J1F?C{P$li79nY3RD8gDnMyf-G*svUBzzaJT&OmP zkSI-Tlv8eBVk9@C0?I#fXi#&UHLhM=BL&f`C9km)t68Ky>9SZ$Ptanlg--o4 z=Z(bi)#Z-v23)=Vk}^6LFl-ZXzCdiXNLvn3&zox~<)8X2L6_&d0K;1Ca>wus zex-SfzY-tMdsFm45JqKhO)znpP@Wh{T zIgv4+EM5$>pK|~#TrM9^g5_QJ4+Nvw^V!RXyk%MB;tU0yhxYO5s7jo?Q3Q~U!7v4)9kLVsTQ4JOgC@Ha2wrUee{BPV*cX|ItuF?GY2ia z8b2@Br&CWDicW$SO61tbN~cC2@mlvGoJjm^aR%~n&>$#Ox(^z4k)3`P(i+nND~VpT zciC*zj~AFy&`-Fga(b{rzpZw%*!RiD_}f;kBp5{T#nb5R?5W6yN&xUIBxf0hn{TDQ zGGAmyv6(GKGQD;nMknS2JH58^$tpyd&J0Sp-gp#p6TfhXuvN?Xe(H?~@%bn49F!wH zqUh1kDQBe1ZWM)6a-jzQYK8Q&9dhL+3-YcM$$THJp&UO>R@3S^eQlA0DkfZeR(WO(sIfm0qp+MAdWkt2OImla^M^@I7LX zu-sUc=*UUtltu2K7BSXi-Ai6hBy~rSDdG|G_}8yz`KtL}Dr`B&D&lNS&!-|I>t|y_hzrCFcQcTPcVSaPbAL>JQZb;r*a=nU&P5@{LTH}mvkp1i}a@Syr?Fn^T3X%yW9jo$@?;Lny z=c4ay;O7b*BQQOwDJ}mBjQX$Gm;$Q=I)bGH|0}a~MbVrIWZ)rcCu;dj1j=Dpef?}g zx#LP^Qh-ca-1rF~=7knH|I77<`(O>%Y~5}pcE_$O(YlwhWTmmk{=Fg>XBj;*U0ZY~^`;2~ zfiS=U&H9bn;lY8R9SYMXV!O2EFI{_|3c=`mN%AHJ4p#6d1b))YD&5)>AjsbQprQ@{ zoTn`fKYe}ErAd*~rC0I&NN2glMHIY%`6ZN`E;rj~dIS0ApU0kaX{X4)I=A8SH{mIA z9AdXBv}5G|S=Zu>kop!2EZXktZMej~)|LRXsGNx2$}k7buy-%i`L3}KY-Wm8eXxl` 
z2#s}NKqRei`kI6r+ai?EY)+fOH`Apxgo+ZUrV>#O@Lm$22lSi`8Z2De5X@QfH3FDr zB_~53q$jasAZ8lzh6xD!p!hmFu*c z|DXx}t0vvGgMh>)PZ-x%Y;_0x)zM>^jf?bmlF!yB4)Ix{lOiV)Z~f|(IFyvG!`uRl$TRj*Ho2TQJ& zE+*&&ZcfeOSiw>L)QhbKng7@TP3f%P|Ml~pefi;T<6CQ~U(X-$Sy%Zd15YKyU&4dW zZFX)|?s~qzhtMt!Hj+C}$=q4M?c7o%|sB^^SGv#hadNdfo zh8HcZPiA+s6g@AN$S-)M1{yeN1^*qJRV4~VWS{5SxifZ@d<|*j)AOHGGD^9>TQt26 zOeiyfE|u~*>4Th|_7dop0N1=*q2Ze^aWy>}Y99~$=h>>lO`c=ncP>#0*i z1R{`Qvy#zp$W0sug@hC8n!R%|^!6&8Q-{QlnxZm160+W>56^kMFXdNm3N#))c!rvP z>oFaou+d%*IgEEbEqPzw;YrT)qaC4L zYJonSsml}TG}7;JH^3-1-Hwi=z$PL>KHV4}A9pcitPh}9?*26O%o$vuJDmbwu9K#D zr4c*I7h)dMt4wJ>CpZWSJB=?!Bpx-&ky$xYmBY7Cg40@&(VsA;Sk}f$E%pMeNIfTM zR&{dc=#VKC(*sKZd_e&WVaM=JU3@>OBj?rJC@TgJCR{{?I(Q`|oj(^kR26~zRQpO% z8TT?X?(}*`z@*&eQ?rn?Wj&}caSrD>VsJ@QSM>eJ&vi0K8`BQNudPtWHOf~32h5eb z%V(Z~KOOC_#d8MA8t1`=LbN&}ShmGhs{;?KehxP&<_-F|}`-)tf74 zMcNB$Dta{bMCho5_f5;zE^8}!Og~YFS6hghu#5VRS-4C0X1T@Gh3*LVCV`UX1rMb|w7+XGD(pb)0owH;eXc{jP%%5DCXodmLFhucexHq6jC@DD zrBBoO1KE_6r=DKk*Gmzk|Cw(&&~l%dmfst2E@LX;vR!E2TATh7TqT+<{=}3E{8`-> zk&Ya}!fTU4>U&@}WoWK@5PaZx+Kmh*L>>I$opwtId8>K8JMsII+BpYml=TCy96;W> zCCv!^{g{c_a<1xf=Ap#>!k{MOA zKxp3rmZXltk&1b|+VKfdGYTsa(Ee?zR-QA7rtr{Pyv8^w&oHcd>tb&y0r3crXgu42 zBm!~mw7}keU%SDL0Ml9lpbRJ`+!Nw~cY^X^g@XayzlJxv$ou_ldDp;s+z{(YqzSnj z2K_byVpGmb+s5iRHU_ozQEatdhOyJ}?IdNKHGuRt_0t3ANhBgt=`FhG(FOkN8mTIf zR&;cX@lqKW3I>h`C5r8Bn!BxJHV$mN3^idymLO1?5QG1&hFNV^a(u?ys;j(NzwTQC zrx4bh&1&@JrLL*;ma^M(ic-5hUU945~?FVi_(1j3o3@8EmQE7b_JC zavXzVcSmfNvesZ0RKNMA$^GV=NWR7`=3lt_UmX+zVD_of>5hkA*A!`c93iTSd`%E+sEDKbmHZ)8`BM=m zig*)T*Bzqk0O;D^>#Pe&fE^Ju(@!PQQ%mOQPuCE&Cp1R*YwO0T4 zhzLm?qfjPuno(l62%6ydbH&Rn#K_CUI{IIPmCoK-Cq>veIAWgi=%o}hW-fSlqPVqO z7lC(fSJ1`9=@Zw-nCUhs9dha4Kq%>(ul$pWf8aGe zw*z83KRekZC8&7r7S;}DyTQD+5pWC_AXrzTw1bBMZ!nZtAR@xG`O!q~6wE}{Be6)a zH1v@LK$rkYqXmW|E$~pm-|=4Am~E#aj=U^<>sKgF!TBN~jIpgaju2m4FP!0lGjV9u zm?911DZ+r_3>4&YxV6#WPJ@zMAcV<(TMZdemms*U>d4r+Z7{vkl5lFn!5)aALUx3S zWLHCV3t0s~A*vu4ZWk?qV-y69=S(0##lMc=;B$BarMRFN=sa3Z#w{b5C2~v zs45IOhj&8jvWSIxl+Wl_ul% 
zac8cYHjP@HjM}S9f)AkCCvW()aJJ1)del`xq2RULX5XDJ^Tkv!UEY)vE5GiD(MgfQ ztZ1=_os=b?j8iR86H-jdj13Bx(*D=i)pH2^A1?*UW=W!)vsN+TivWNBQUxAy8zRNa z>CkZBBMID_6fHi{&=eWWcFeMIDUaVoPDEVoP%LC>R#ZiV**1@@*^m$3>bh(%jD=ak z!B=YIi#&j8VW4S2#55^QOku{k9BF24P3ZAbT|&aTq^7Mf{y3~Dy1k$o7F8f|xUCt1 zvx;ykm~swp*&9d5Y!8IcfD9B6#0mFNq$c@kN=golad7<+t{s})*!jn6vfE1O`YIeZ zn1Q^map2hqJFl$PikQoS^^SqON89yU057j5puR5>V=09qTUfcDm}j;mPT>RJ4x0F^ zq2K5Lo>d!X03uP(*M{})1Ks5!6W6$Ya>kF8!8Bt?FOJvc&o?x<2W|}8{o&Q8t+Txu-~!V*H?QXJx=*Z2Ar&E~S;aUI%pA;O%sx5iM5MT+QDT+| zBUE@K4#m0QqLsOm=#U7>8ddJF)&o;t_t< zAqb{?vBF8UOiR*XkEq z486DGq&*?M$@o(lutvi!Z0J6UF4q=b!TrK&Do@%k`{U<#k+WDlqbTxc>j;}&6n~TD zbhzI@@+B;adGia+M(_2tlH~urE5%5G#B&#MpZ%K9`L6p@p^_Q$?af9yuX&+Fd0aKa z6bO+%*N)Wjy$NwdRZ*CEY1}K*D6C+q#@4!Ppp-W}1RVN_4Su}PpyS6fp(Ab)Z299R zjK`%BH7&L4>#gvHwhkvGW+F@|i~%AVW#F??v*_SWU50EQPaF~w%NWb7B1N{ zC_*E`L2Yj&mhnq{f^|$i!ZDz4{JBJqmSx=jL6OsJ9FeRN!sj+#@&}Y|KT|wF{v3DJ z7jj}|x%R_jgE>c5VT0LaGS{u|XFe5HR|3Y<#s-vpDn(?p!U8wQJ`Nrr(LzH2pl@c% z!x@^NL3%Ej ztf45qqo;OsU5!P%W`INbQVr$-L(mOT*%U@9Y+JvbdLx=Z{+qjJ z-A?d!vZZnKz>l6k!^!G+{9u|Z+S9d|DPew|f@=PDB?B+jA*aa>kEQzntbdguCbl*^ z8cZBMT24l!&gEBqCVF+8U+7HnN%wF1SE9K+^cHCACR_OL3o?eJf9<7k7z&!ObMH?O zMn(cN{O7`+V85I=o$S(TRmT_O(#yTNmlT$5x+SB$PRVMs=T?Nr!us~8nh6xDbPb>2 ztbyQUm(Sr?_n*a!Fe5MrFys19duu|YktQ7_l|(nn^!e(tydedOy;fzJvmM5&1aMdXX5Nw)c;OFj6tMz+#4%MiY&-GNqG$*{kM*Jf!Fg zdi1A$1N2*N4X`ZNy&+{I;4j@J!+`{*MJPO%yn?*~Ep9LHQ(` z673)H&%hjpQW1tSSy!$@g#6NEBf_OxBWZEBs&XktK&$iX@Wcxg_F__|e=V+yQ7=e= z(`6?l|3} z!RE?eC_qu7iG^njD)0w@2RF(9-+OvrEAjd@ZI^OFrREnZmlRi3aE?Y zzOh$D&L-5ODh*|uCot1zYZ8m_^PBokMYJ*>33LSLg3)*jJV?#WNxUs*iR5Qm3D~g=SzR<4hHdNWsss%aXyDS z7@(bqvHo=#&H6BCfb7izJ>glgw<3wh36YLIHT`@I$}QDQG~ZE`4HmV{JcWr&?YKU^ z4@>?X;a*f1{WMsX#!7FtL9#%G!wLb#6=>VG<0r7YIg_+b{vGMRzN{>9@6$_EhM)PG zwm1G70(i<__w=Fp(d1xPaTQUhk&@`ALMid?e65J6EsR1!pB!if+=w)O3pTBj2M;9* zDdjz^5}mkJ=q>zD2a)1LD(KT78c8?}6)&$6J~22KWvhFU&FH{tC-FbYf1A_Vr2-~< z8A^mze8iRAET4z#4C9HoTu6 zY6Oz#izvb_ofF3qe*9ln5x1kzC%nHNN-*`$6A*3`SHAEr%G|r}W8ly6H2kk`LQuCW 
z%Cu$dckFVR2up`AAn5Yxg*Nx&_z0o#InWxxh9vNbkc*|@PeJkP27fMzBCu1Qij%JY z?zYf2x5g7z(BhIwb-0LQ$;H$E-gquX`an3He_H(HF>VYz7!%T;sgUY8IKdee(E zg&+$L;iEFr18*;?dASPkHkMu5mmY3GPzlct--h+=HhWdy9F#_vMMLRd6u^~Blpt98 zLAKLk)uIGU&`6ywffl4jvz(#1Vd8YL+-TP{g8=gzgiOi@=WRr%#BM@bQ7mu|0@Wg< z^x0k3Y@VXLevCXSZK2d3>WjbjlNGV}??EHU%{2y&%x;8JE8dnNpl`0fz|TpUG;#W} zb%M4NPoolEp2af_4!bEGK!SqKW5OIe1N@4H_b2nzP_5u5w2Y8RI*ewyWJezd`&IFv#;Q>8 z`kn(@zvsCCKVbQE#OBiXm5#7eYoqjz&nkixD2N{0QYy*LYYB!__=f6q(9bqDO{(I| z+FZ4g9{d4dq+5cJ_oqixbLceBRj#-0#6%c4VW5%K?R4AsZKpKUTFOwhJ_AB;2 zByEM*|2lQ3HF59Z-7I;%^~m#xtRC}0&&uiqJKRs3wtYHT(m zS{nTjjL_#)ogD4V6n>u=dHk1))>kE#o&aje6a61d-#Qpl7=x!Q<%QAuZ>s<`8Go4E z68pnV!&0)+aJeg}_}yw`wjvSXCc{!WXAMJI(rFTE=OaV7Si(5tm5jDR#EG-YJ%CR| z!US|A-diKWRJtXnN68mC*!kO`ExK?0Y{U6aVQVupe4Pv7DGs63V`mRrk%{T$)W$yN zx}AY5;MfAhWdeJ}i{qbK5bb9kUk#bqnEx|R?OyTI(1Pk?2eo)|02hcYjOss;_nfrQ z)&Jf*fVonxepU@#jklA;TCHD7-`4)HSJYoHZxASjbRexc<(*Dg|F2GKibR>5t`{E5 z3@S%nnI(mFfstL6?60=&*B`BZT&6RWZ&?^t#`lbm*T-beIFPi6HjmY4w2Gl#v$){A z1%8$A%A3Hd>|}|j5D}WEymM5d9m!?GGc7;*#-$A&5w^5zD-I0{rDI>t#F~Q$m?Nnglc29%wYvFlFDW*(S)h2td05w*n(+8fTW`;(O|-a%)FAhm?+kZ zi8k`HZEvwI;ICxM6(ueBhg=C52J&&A zO5RIXG2V3gQBOWsi^y_Xm%Q*@Pyv}KmS^#ODWoM%2W}o9xgyIMbuu51qUgLR^ zh_7G8dmd`*p}Gj!c#K)@RUUzPGA%y0hB>8PS$2wCXYj~%-D^v;c&UU`@`1djjtA{$@_pk03V3Ghy6YcxX zsg4D(0q-S6HCYQO*R|~t6OLSG*7qW8;;OG$%>O;-Y@=SP{PUW-r3$Id*HI<}vNJx# zz*l;Jb?3P_%&>x0IjcAoFD~Qi^xDR^@AVO0_M%CBoi=iQI|gK%Udp;Sf2XD`l%bRa zm>FhcYT3Gq$s3Tcz$wcDi;0o%>-y-)1Okt05IdK{k4^RJ)-7Z=nNsBEEP-zA_(sG` zv@7sI*PH-K|BdhGO_r}irMchsW*)=?lHlb~p!t4fCgSAYvBiXu zxH%Ill+}J$i7gIa(*Eo!M2Z=ICTgoGCamUuG@Oq+cOf$YBLn>_GK9lXeqJRkKiWgz zUteIew&L6-OnJiuO)2B;Hr1&- z2C#ZKb09P;eYdZuM_~b8QifuKcl*WGv#*?X-HK`c7`@Lj;GU~|`Wc=o|8=3Q?fpYq zhRg#QNS!8-l0shv9q|a~bdf(4q=~r%eW*^Q!Sw7dCt;U#uI?s{4I;76;NKY)?-z^z zl!uXrG%9V+B|uf~M#fthHwDq<;6nMYEsR6qt9-{Y`VlH3(|a3|msH_oo`B98Pm4fC z3rzrD`#u?+5ubjY)d6iLj^i6y#g~L%zCjTD=qxQHBy#oqP=W#+IwC{$%o|?WqtccU zVFLczrMS1m<9yTAgL}$Z;N@`Tcal)#hwjy5sVARHz)18YBusl=v{n>2-%jiT2;gV& 
z=WPZkomMw`WkpZZpE3Lm_@X{yv2=e0OYl`t^Yc3}FRwF*5vW_eq4|GQy<>P?QMWc6 zW5>2_tFdh-jcwbu?KEuI*tYGoaT+wX^X>LL=Y7t1%|H9fzsxn)SaXb9nX4An`#$5M zdezfvmgh!jX+(cZY+pqwh~4~0^h4Hr>WN?7hC}JOj@&%Y!pE+S&O`Slb70BMpDNWm{usu+eacH zcyCa@{D_*D^vOdkz_2i!ERcc{Z}70ijga??Rx}c)f2LNAy8SC2jU(ucA=Vq9^|?D& zyCILMpmOj1U48p6I(Uh=@6>h@!Q3@#-3B=`faGb6&onxYk*XS(oS|57X2vTp#FNKI zVD;0l(kX}D*KsH?V>PIbHeW$;bbHZ@Fk)URVtfzz~!zkD1&v; z*BO>D%DP^(N{O85&k7z@XUmQCQ6XUP?a4YaMUHOzyE!=Gpjc90cX!&p{+y{Y^q*;b zBjvJir%%fsTa&Ww55Rq1hP9Q3l_L?R!MWv0%2Ri$W33FVO4>#I>rdlI5{5&yZ6l5y zE!lW8$|EuH-Ck*@sTjmAv*pY#n^QrzJb`km0CyCn#YU7euN@EdTc(`5*!->fd4p>C z;(s(KWkR49YU=qAvm)c%wK-W)5sLl2KNIkNA4%pYsL^*-9%!ZwmG%@$*?XIR%@8Lw zYkRJWmQ0ec;=x>7R2Rx#oNQzb5#&s!dOIS9Y-`Wo&+R_%=yOI)CauIr=eD6%A}}np zTO6_aV}HErWep6-R`3ha7QGhA1SyW;Q)Pv8)a~V`07;?&)&>~B0mYXq3UtILR5E(z zhQBbXn}Kl{1h<^JXl2JzfcU9*ds^x)JRnyU z-Ov$qf0E0wGf@RY4dx#kBv%`~5MfbpHCi2TjNWvwLGZR`w+)3Ouc|zH8b|n9mx}yF zA?EhOIrc>}EVKbHpLiC#kd*u1Zp9XYybW{pqXZcjp(H!=tR(I!%V8@8)gxFELp}^j zYd%H1cuV%-O`333Ym&u9q~X^@j>pQ==^l>i#r>dVU3P4swWAnwHdi7Wkj4Qj?irQ) z4xvL)eqOY?`!k}Dp5mf&%hK%Ft#7ury8U3>J#>ZE0fY;o7KIuruB6g*hveCgNc|l` zLO_lbg80%P#V>`SCx0n$=JMyF8f;ZU9#Iw9vAL(JB2EpdddV2%>&lOCyj`AeMBWrt zVU>rAE-38dkp#`)_5GdVerETPOD3Vl4ZNzT^ii@eB`mghePy|BQB1S()G2l3CNsR7 z|Dqq(&bt_%&;D4p{ISSuCA&Dc{@eH3hf~?GNz>LKTmYi6!uB}`y@DrutY8oC#^J=a zNPel52M{Tbv909i4jR;5+*qu}2Q~a7*21a{zZ&>AQ$m8IwQI1gvRDdfPS^)v0N{1K_zUGvSyhgcl12n+(Mq&Mpi^>Kg!U&)9e+vtSrp*)C~ z6-0^=MF%Hyb>0$U5E_S{G452T@uKD@}F@B(A0<}28 zf&iWx>cn319yqPY=em3NYr9A$c&YiT%W@^x}iIP8|M^ zJY)9zXc`q=lZ?$XQUn=cWeNU}hSn9G#fdb!MW7v#Elya?GNS|;62X_$7Q0PKh*MQo zYSdb_z|h$dCCqU7GQ!`G^%2ASeuA<-KArv7>_!DT0Cr!R!`43C$x8_$t?TzI6yuvm z6NPsW5++L$oM2LSr~|73feh-voxPE-!->|IJh}B|J$+}AF`@W5kVa^^jA9gJs(8z< zhYmIli`Rjps=ECKzmHvO{0Xo!1;^>BuMzgRC?vhuLcbu!%eF`&5%D)=yQHEQL#Vri z^8T$%g)j0|%Izu&IV}Rh7}n~#O}k+0l)lvPcsPLQytwN zgA(};aHHq3XeVUJpgJNej>)6Lqa)uW&Hxj` z02d%*Qxz1u7PmBIWQKWsj4zxIQh{0jMeGb6Dml}^g_7RWV-=T3+IfpTq>riSh}6j zh?`!}W3;K2x_oEsL0t*$2XY24e(U!AKr&c8pCDP*e+(c^*BoK^G$AX}#d 
zj;Q!KF2WHi3XcD-R~anU3gGI(SVNig@}eAB|L9o1N8t!+*GprRzO1Efdj zbqD#$)tRJAO6#jWvR*xW0MWrMbeLEu{TEH2u-wvCsr`P3XGe$-c8q%^VD=fiQ zf`^X|`IKx)5WWJG)no<`&8J$_57w+2nxQeg=+JcHd>8~8e{9bIYAz((i7!xU+AoQ7 zHwzMWfYa`xo&F!zLNUk!QO3KDXsWycU#PRN?yA9qji>NAc9^N5)Va1($S^$hDWG95 zGa812#Vk~0gr&Cb35I(D)E)w2?;q+p2n6Hg3fs;aCT(G{WMmW-HP~D1e_4w_%D@6( zMIfANn+n&MiwEGNX6t}rVVN^DoTI}jDRd&04-L=l9*Md1`R^Hp``FXR_gVd@q4UXN zYuXB+CdQS;EUfF#Dc=K7#<2YS-mQcmf4X*kN@1EO>X>$Lc=m6hl=(_q@Jad~iIU?P zrsOV=f`IY|Sa_+WJKhem?Z2XN04umcI+(R4yFhlv`$0>E&}(bDcp@DDa*^t}&Ri0k zSSZQz(sI?-A^=m3{dla}!Xcob;ZHc*IBFmxH2jsUw2@3HWLx%o>;s)M5M!8PhCJ@g znjP!xHnkxJr5FUpvtO93I4jvTsoHfI*mbB+md_)6aRWi%jD|7)Dn6tlceO7QQFAM@ zkK&jLKH#NQ6xQvyJD`#!^o^K+jf9$9k^erm)TjV9ZO56_Tq2#xAm7{DoBrc?zPJSf z{19B!i<*AT$UFkfH?nJU;p)1=AQg^W-3yJrw%<`zo*4PiYs?P*#qg?#oE#@AKTnw0L|TXe}ksmwNtlu8Y4_3j3Qwt7<(&et(evgJZ?)@o7=-oKX70+5$tVXpImQ z2&SK(hfV7}*~;%j2a|$G+lSbCcy7hCT>PzndzMKEU~Y^O8Q2j4vhdtsY|A%T&X+g* zzQy(-B|>W#3+2qxStQ;=+ld&6w{P(2a+jio@fgG#CDxnFg#~suwKRI9LcUr3PL9n2t{#xrC+vA8F0Fa^aT$9~mio zy{&w^o`ILZuGr$Tht1iZgbk-Jj?C-au>40rV-i<`st%2NTudENpCdz4+PQfH@2?9zx1I=QPB ze&k7^BticI+cyx6x7ZPRorci_(~YKRCHf`n%{3Df`v{_l@|{~->PF2CFVj8t2iDgF zfTq$;bs>ly)L|~DFa-Dp%zGIuY$b{R1{|j%S#N$MjCREA`Rc;Ladbvn+qpJ zTAq=&SRR8=mV?*>J3QK&W8ld0bfmTLAgLUN^a*QCRw;J0lHdAf=d+1N236=&VUU6O zsd+48K=Z27iwk=n?i;MbgK?^YCLA@h>s;m_xTF9F80O+d3Xxet zz}#c2^(UmHzR3+Lbt+=In@x|%d5C|OKvqvGv#p1s6s(Quy;?ign7>T{p#i7i(R+Tm z+zu$2ZOnZ*V*+n|6dsBosfFji23{tph~Hbcq#EU1nXuDy=_wY9ZU}r1>!wet^BQ;b zhm|-4}%O(@F;=$VfR7EAm_p9&b<%%08dASk$N;NOYl( zJ0cMTL?}|FWbiE~;bN1YFt-g(AwoJ!CWeXV7KM3c8hAw|!et=zla)yN;h4QR=JKKo zh0<(yWwTr?FmW;93Cefj>buc(uw&}ifom!5((jKt4?S{WnvuTAl?Ohy|NIqn9VBY| zvi|2s@79C)La&ZsP5SSE5~@!igSS^^uVUxXoUf4N7P}vzg7UIv@5}SoB-X#5i*&H2 zfDzCUn?=UU#!o!{Z{_7wPG2q{UrBDZ$(-;;+_aL0j z3*?~UCEQ(pj~zpdP=x|l4Kg8TnvA~uQKH2?BLbL%a^0!AP%H2cMc*Dnn%0lk1UMupZt0_FxLe8rM)4u zV-%)+C(XRgjhGgV)E>730;0@-B`O+vveuSe{3lcx z*g5oJe;QzWR#*oeR#Qp}&V_sn1Cx0WVWEU9ZqS*_sf`8r#3&)3@HyfPs7apQzz>%* 
zbp;>NarM*6tBx|Ky~WlW#X(;cPKiXL!XYuX6D2dIyJ4=bSXGyWO$WI&C43}8F7sJA z@qu7S1lY?+yVlgJ;)~JC8i&p5nprSVuCt8E7;hKiC!QbmqG_XVEnpsFY|nc=Vxyg( zjx)<@f}LGrE!=i68d2HW7UIachWNdu+ z7!f1KGw}YlfuJ16$4*pX@0)rtII|%P!~v;4(XoT1vCvvZ3PgZZnMeT{iOh3y(r=^e z-O~mr;RXo_LY;cOOILRliiWH%=!F!w-^UAx6}0G>2BzgZ_L9_b2NXusjL=;?sZ#hC zpde@P0dto({jw;>hh|W3-uZzp~PVY&9&-^#N-Yo?8uXYtE&~CF`x(Djl<>8z#W9(Q&0%pcU+WluriYhTwBO3;d9BJmQ(^S zBO^R6qr<%3C5hm7%^fF|{h~90o#LU#Zb{u`<&!zzlwAVeu7~3CgToKpm7RRCb0S-hJL2Z1Np%jvqp|sm}S<0ZPQ3V!DWy68=J|o|e zdRr4yV|F&zT!(@%}_U3Ba}=enMU zIaYG3$E_uCK_-a}VD4wL&aqu6z;R{@_0D_N{!}1QIGJCo?Rs`PqaI~{U{>PKjAQ{M z%!Xi+AC6aD#p@;4HA{RIm$>zn6p_gKy9$Ug3^jk%x;1C z_JQ*lx>}|WVHcq%>epD*jBn2`XN9{pS59leAH{jd*atHY{GcKIJuTvxfsUjp^o1Sa zoQ)2Q$ukzBS&Q#V3+s;z%ta5zHN z`z=enUVHiUyJxHE*TGbUNSv=~DM;6lne&0oHXEyheJdq;{2v5q%OzfG85EZO#U;2P zHc-{cT5M1zE~v4%U=pJ=KZ3Jd2lbzI=B!UPcq;-(DN?0{#6tkX_IYg%U}*g5ES252 zz9t*#mB5|E`29W|4$*~y;)GdqA+;8!NoS~2_%v0v;v38V_ zCxfx_px?oxZUl(CNm5a^;*ykzyLCheR4df5;i(kyb-jfQs0RtV6;hkDTcO0&fy=w@8Jj?t2D8zU? z=CeeHYb2wWrnKeuzpnq8%77;!0-4Cp6SUnwFo?IEM^n@S%34Z>mFQ1g$=jKbUPq1r z9g5CW=G_NLKbl&~1y{F1H+f%F0>*GvDK1lQ`*nAys!g4@7WDs&-L#mS^18LDt42rC z5WcYF%%hw0Dp7{APK41=ixkQ`We3d3MVjN?88u1E^4BtRwl$jCC|PM|4=>cv*LAGP zWQJ-@d%L>LNRP6oEzU2SO}5&O1X~@DOlMNJ(-GA29n3$=@tTYNybNwkcyM%5?F=;= zFdC|D&k80jh2WjiLZ99UI9^dK4O4x!WPeEEnY6N0s;^Hj-15QIm9sZI;EOhu?sz z*8XHeOP|NQPZ1Wi7xF8|mCsdKTAkevi#&& z@C86D+4=RHZ-ff;3N66!^W!%G4S4U)7McDnO?rTl?jNcP70xguSY#N`GY zlU!9seg@Pl^x_P^Vo8BO1KIhd4<&*W3iiScU5I*yLL6&fa~6h#O$mM%Fbq6ZWgtq! 
z){sa6NwACU{j6M2-miP~R!8H|0%GeXDCZ%*&So0}AaR%#ce2=%qLJ5efAo2=UApPe z%kOtsWk3@p-j57phY;+gFG}sNPzW+%xMeYj%BQSxPiOWil^G|K$atnNvQTY?3{ zdotW92`G3r7xcj1Is;EBG?-dXcuQS~OPF;OfkFLqmJIX$T$vGesI9KWt`eCm2=vcB zrGV3=%?sQ!S-9rZ)90$yvej)z(d;UuZ?dp!f&RYF!FLT7c2Whv{K&<3kyFczvOptW zZ`TzjT(Mm!J%-)|c!1pX=iBtQTm&LPi%V3gamnqBZkcS>L)l}`WA{l#dA`i(nG`cx z1VVTdW)=7(q|x{84xbWP+0GYjQc9I-{YFX#{f41b#J%nw{YNE{5zp(i9g_23yIN-5 z#IJ4Dn|tw@GPt=j_?5AJ>3cM!bYuuP1Aj?Z|8SjDuv2TZCEq7%r3apf0I3jS@HxAxW9Xz_1m&{?Y&EuT`2!NWjl#; zc)>bJ0jdSKum=8Q{NcdcsSVZGjN<->m@Fj=y<5%-VpjMQQW_>>frP zLX-7`gvYW(iKq>F4M#BTCJlY0zi3JVHNHrD9sqkwBRZ}{CijiBE!PNh_73)aW`j|* zpN_kwR{ir^>;*LSBm}Dz2;b#j6y--BXa3kK$~O1z^Bfsu#yf1M#h-|vrn}u|Llu?k z=>&)2(8pgM9&ZWLu#7gr$MBP-wwv9i-;I$Wl>@sG>k4(<4rQ5HpzK=t*H)(DOWS@t z?*g>I6=>(YQ|$_p|I1>5LWY5V`*bbe!>PzOx)9g_pLsHP*fu31y^;FNnC zfli}br(h*Pde_jjyIJoji;lpDJ-m6SoFGhW%sG@PXIYtB>k9|*;qMKbS_PZZ(6jYM zDhB;7Za<&}mxADOeiBzNkhlR1(yLnB-2QOylz!)Gu7;xkMzcLjc`xQ{rDmdJ74HWQ zvP*()VJI#?dGy0a4QIOpGq8?`?f&;W_&=vywY1NMx3eUaUheVXu}qU_Eu%++nnX1W z>sf&xa_u9Kbm0z2Dx!=A^$}#>H7eIfu;DBbXTztjsBAv|UaOIcY(>7Imuv|*&|0eX96i`I3z4)8p++*c6$d(9*kmmM5$fPQ`U7~z&+RHy>y9jYM@4x( zN@fE-ZmodoC95QJ?o3GSX6v+H1UFZMNK~mPQy7|GlX~uJ97-?!E`7=le1Y~YvVXUqidow%Sq=&#IxwZ}sW>;vjm&;Bz zr;3EJ94ar0Z%WIE$?BG0pj4FVDPG8j=w(CB4@KDD%-zsYhz2RYXtck&o@#W?5D+H* z9!l!W@pqQ&I4tQB@qA}d@D$2fMkXo}{so60>FqaP-yLMSV? 
z6FosJ#BX&plULT|dn`&Mlf&iE^2J#Mvrb7SvR6qGX5V~$wZUxCmS?r8JIMtl z(F#;2kYaOdox2!$_)&uv@xme6mewPeGzW|z>wjqI+KQn2yQdMoZ;2;;@%d*IqHh9~^l1K3Zn$z1q2I2F0GQV$KzU#jL^V@A2 z2`>XaU+rech(30SfMM~S(+@FkBMxs89iJbscE5+>(a{c=r>6n3q`JDK|z01*0+&Z>Ctmymz`ZzZFATn_QOvQle)Tfvxx%1VO`$tzb!*bOz@SSgl~dr&N^o+Pjyk~k zP$UBBq%|?@Ef{}u_-1j!q2dm;P?)aaBs@_VZ}RH!dH?4m;4{rY;nEjaA};+Nc~3rH z=AhAh1O+}ui5jzzIBNq=QvyiLXiAxUKU}QdpZ_%V9@KDp_gXDd{t|9btjqF3NfLL> zKc!GmZ&hdXmrx>Z#S>84P->Yj_RxBY>?hlp;Mq&T-Ngar;`>i||GTHLib3_>${m~I znIms#hrH`|DgEEF*yVRk7PxLG08m<{mfC992GXnmbA!h4UyFqhjtecvN|XZ>C+W|D z#D$a`N~$%Tl|apAMIDJMWd1KD{4<1TJi|5F!J7 zAyetB+BB9`>JY+8F`|e36|>)d{5WqEuLgHJmMlQiDAiFr7Cl=8qg4ppi3FDd#X%ht z6M=G0$@#e}*t6|%-gx=Gox^*$1tHhq;rwxw%YWHC`}q;j+qklk!)^_CKdd9_*FttKOw1Tp)oXCt4J@%=>=MLtqsq-7CNNIuO6c}5@MeuXoGSzo382|ts3kqq(QC6Nkp<)B8b-D~5IRks^shCIFt8v-mk1ihRZ5mhFc}N+GEmr4Q#X__3<&sh?=w3UbyVe5ciqDTzev9aaTOuU}`_7O+ zbVNw0r4rSQP=k^p6HM_m0x(O0i)ly6@YhJ-@d>GtQKBUIilw85$gO$X9{nrxlihpPq~;CK6Z_gXScaHBW!2HQ0u zf!1e?sBL>Nfo%$m)k%4LyzcfA1`~oZ{F$wLXDC~eu@WsNu&XEdToH0iETq45w5$>I zfG)e9$*IJug{dqUl$@zF{6%y!|DfWVZ#r)FwOH@egcE;aLRa*Bzy{1YgenstThXJMy%Q z{xxKlsjx_s-M3FT3Xiu?{K8U4Hd+c%RAPc_eb%~YDRK4+V-s;0Sh3(kpLzefQw?RO zyB`!}6kmD!=Ov<2DGF?)mIzHD!O)|o=)WZI7* zL&iXfBgzvc`z3Uk33mE+R|~+!eQ^p*HB6tS#^ELzaESzOeMwacZ1u0GCe~J5z(FRG z3G@mJ@dCY!2XarT8K@@lhOdeURd((@Pn1MI%|69bxl7d|?AxOu!dgNA#OQ^Jchm>* z?|boIm(r3bb7zcX@^w>XGrt~hyhilGltkTqc}H)d^t56c#IPdduSgh2L5Z7vEo946 z&Uo($_|Dh8Jg4m)rH#3d$y8@ti#;Z0?Yv=P+iDDTfsUXl`EfDkvUxqsh7o4|fS9`xVUEe!_x z9a?Fc1B8ejF32z!bjAfX5orT-xCkfF2c%!tOJA?TR=*9?z(&7=uDFgdFwdJ?m8?w# z7e&y5!&ChQhF&B|-$=uU&udpHVc{E4rB!Pg{fIR!Gm7iFNz|2fR$EZ_{l}ZAvQT3z zkv#JK;^!KcL?0%;itu;g8z3(%PJ5WwD;pipz8yn~okmi)Hw!ciOO7bNV7@TXMy zZ%fi&*%yPW6Fm%&rj!8EVEwjv& zQN!7>BOzk{h(Vl<$6=a*#T2hcn(LwqlNDh@-$XJZCn#Zd;SJ%bJc;cD`a^%@W5d8n zFByzcim?lsv7JMl`zS$83~kb{kagv1W}3kaMY0?F!ECwPMo5+8B!jk(mFrh%wP}_B zjC!EzqpK|Lxp?|x#?esR!gl1CIwhQEsFp1>Z%sv}q{#;6Zfq%o0aBqulHDO774;<4 z-9@E7y3Mjat?r|t#5)dqVqg(&C}?BgB|&7Uv6l9bJVBO)=aJ+gd_%*m2Na1JFP3D< 
zY2o=LlmC6(e}C~yf(bF~=Jf}Sp$yDblbLSv7KLg45(D@Ga6n3fTd#5ECoO+Lh%z|V z(AyFdkb?89LVbrnQ84aGRT;~BWZNNVq%5b{k=H{Dyn;yYEya~<_Kj|Mnj@M`me=(p zxhN5`8@GWJC}chT@kn++@K*6NWC5%qQjD@ma`srfOBCu`LT1W5*!d2esFklS%)#%% zP9yT4a(uB;Vjw5){@7$o2})g9r^-sJi6||StC?T(T%V9ZpZrUC;}rAp%6JfQxZ_Z7 zXk`su27_UO;e;NQby1p$lL8YnKDeMp7BUmq_x8AHWCIf&31H(Sqq`_u`Ao?KjyIk+Fsu456kipduA@-o?1^R&E&)2I?!g`eF zD6+g4PwC0^fLCbRO*_q|L0O}XxJ;@&whKwP5>>|Rbr+&*J^F}eH3q6}6WDTnlcb_b zD*}p1u4Ziq1-c4A;=bfwnFiR&JGqG4^OxF0ld}J%N zb3!KH3lDl2z+JFM(Um-e_Po09B>f?q%kyX<;+GYr!e5BS!_eUGODnHM)FLAeQq==g z(U=bJQ4Nai=GU5D!}Fnit&7N^bn?b9ZdpiCa6H0JyFPD)!&C$y@}25gD0rPXZC2I* z?B1uFR+A>pFKr=troP%@BaSEuqyq2MgLyH*1PXsKVIIuSf){$Ges;`U37PuGagA3k zF-W|4+)T4?qg#4ROO08_J;sJOy(bUBm3jXp^dCHF<)mXYZnX%ZfVBvu>1b1G!Q_rH z7|+$s3=Cv`d%MMn>SPnWhT4gZ@pW@Q86qmyUWqS}yi$-ltvGw0p zmp!ggIW}!-7<87Tk7y3XV+qMeZp>!qomr`fk-LKR+axMTfenhl-^~%3{bF{A zq&s+2bE-*{Y#drv2pfT>S;C1%f?gZ-R*IlwYhsH#Pg>D3uYQSG;YfNt>V8+|CHU>K zL{GyaP5l3sVTKHdEpmbu%Oi3W<#W$WF3%(;y_RL8`SgviG8?jV%+Vo3R}&S)M35Ef zNYIYhbaZPmANave`}29ieoWcTPqbqyV*e%~1031Mhw~O7hf){V<=VJtWm1P{Hz9%M z8JCe47zbUSuNd*4-&WUMQRmd}Fz2wof>d#}>PqkAsQ z^5B^_hPRX(qOw#woTPvWW7nTROLD_TauJcs%n81Ee0XScXhxHo_^x=R0wb-DHHXxE zG0d^KfG8}s6wgjfYj@wnpt0G}-=QgJo>xWG+_NRmSuAZTWmT)Zi6bA}+OR)b5;?YW zx??5`qYQ_-sFwv8fQRiZN@e{FOqpY?|hbtxk*CNf-%ec zn~=Y(zIN9cF=GoGYcuX>_Y|9<7Wjvold3fS1>Q1q<82m_;s0LI$YIYkV6R>g)}Tfm zf9IbAU}7-O9bV5{3qtnwtt5@VkV6chG*g4TC2J5+z}$35guWQRJmeE9lGPFi%JT zOq1S;L0wb(Ha)$N{hfiGM?8ynx#Wdfh77dcNrgoDi}al1Eh%_4>mSy1LDx9-Hi##> zCmtKZxqLO+Yzr&<;CM(_aVsd&y$)0>A)SpJyY7jg)<5d^n)fT>O1b1Tm^Y;xYB~_& zw8(FU-xpzJx+OOfvsO4=;^l1&4AUOgSDx2qYf4+@%MQfVfK*UckrKsyxDX2`U$YmN zFr%thxzNUc4YD1X+4Okb16r3CA0|`f_`|}D^HEku=V-mI!(FxkHzCEWQv3 zzt=XX+SkQ=slKUS7Elcj2Rok|(<`K|4J)gsE+8?Bph+raKZ@aN?2K_?21QAf?K_HY zqhFHVk>AW<)0$n`!pTo{M!~(#Ohbx_tI0$Tn%M+hkNDw5UhvhLwJ33Huz6dRolfGd z7BPuGL4(PQ63uhY!vec1ptIADGZLTO%c-u@Y0DVxB$qt#|1IVQ2vQ*n5~8ib5LjZ8 zq2lL2=ABNEZ^*dIvtA%~6j{(oBRGMV_Au7r&iKfPr@^656D6$o#<$0qHTu!pkk|J& 
zLqS4f`o-Oe?gJ%GkSVMTo~h5}LfLbXbF(l?_!9^xh*eeT%?0?}Pz+KoE-}+tY^$91 zKbM%4rZOs7^Jviz-xwT!+1ahMd@8Rjv(9=07*?oMYjpI^3aRZtr=>fUQL{bHqPtU1 zB?1EP5vHLVN?z2;&4)H_DQrTtmSk3{`a4wlcJfDvDOocrpC{x`BQCyyA8ysG(B1^E zKx8Y~Ud(Z;b-n~j6qDbmX>P?f&s8Q+?VJ>%^p}V7l+IA5i6Wl0j&i%;M8P*Kezjmk zRj|m!wr#x=JW?R_)>kuE^@?$0pKXbMM{2agBUGV8*_6{gj8oh*0e(FMq>_Cf=*1|M zrLKjaRVjBZ^6t=w+w*emgP$l_f?1-&q1_JTl8h2^P3}_bPl>wk)+$WyFP|;9j}jAp z@q3xXP#yknnkn9_n-Zr~i%sSymHY}o7Onr97lklgbR+?{k%;;-jQP7;^BcW0~G=dEP za0s@KzINFz2ofcc;jr98Ce5LDbH|S9%t{{ALde#ndTuCfGDgwtq8K@>X6Y~p_tqDF zaEP;CDv+Al@<%N#^PdhvXBNQ?He$<(Wt~xre2z%8u`0p{D50(KsEHa9QB}Hedt-yA z7o_IQ<~l-Eiv8jt4t#_HnyK6M0LmW@E{=!4kJvbT%`P; z$x3*N5CR+BDj={?+fRB&iIV#X>#FK(<|z6W>lrM^QhaZ6QiaBL2cHFkyo`D;*lJ~O zIOcrX!M_o7qrmq@I)^FZv2o_hrh;#Ol=~n8m!~d05nK|^i>yNDJ|YV7m$J=zHH^UO zYoi6VL|E-A&Zc~t(bpEvENU}{{nyPHw>wFQ+_9%bg!!+~I0&Yd!O|3ktg;8uo@(M^ z*29=p(s(LKn4ky{5h?I5`x{_M$dPzMzA9ktBb-%%rrwu?TR%3NIzzx9o=iV zDH%`!zrqXOSo-pPqnL&J%AKzQa&p2^xijbQ=Q@z0&7gS?}m=ZK_!xlD&l zWhApX!0lmWy+p=Ti4S=j6K;?=&Q>BUq3qw_qTef}JBx}^^RTL#`;_L#es1GzOUCEPe66yMW(YF}Opk@g{;6 z0t*ab6`UrqN#OAM zt0C7C`9M@Pl{h^fe$jwoX$7eO-izi+b55605d}y>f~4NevBpWWs)E46gMp#@ZCr2N z>xDtZyU4~_Zlhwe3sa^|%w@ir;*V{^VV%}#OS%ov+anzZMIf-+t(+Z>S@QxX4XRny zd(UW-#wT;g$m_c6`p1%K8dqwLZ+sW+Eb5Dp#=i-3*@+6fFcL+KwrnypEPTSeSa0fWau~+Jq0v7xQ}8pK#l|x3+YOV9lBv5Or!kc z=t5cB+^lYpY3DC-HRL4bEUK%(M@~6m7_%y+C6mFBe3vO`gQxLmEeM*>(&Lv~20sld7z;Hl z+%KR)i_`A3$^WP8hE@6ca7LGo7K0^DzW!~~`=9&O&EL~G=L<26Eflip+*{d<&`zh_ zmp!j%BX|z%as!CGZ;n2{Yz*jZv6yv_(itAIX7w#{4#fP&zC~=|p1(mMlR()fmP+ncim*KHwc!ip%d75}v?i1WOS5T;r>OYa zhRH`)16#Na=Yu&h(JR5-tUX-0z^4v!MoKN%bEMz-*?-bv6{BDDNBlJ^Vy3mV>jV8M z?(x-Qa$4Z?IY*rNFr(NxtP4iBR+3Ul_iTdetF0tarLQ&kQ^30>VS|BSi*lG17>#y7JiHRo;{a8@buj_rJVn4{L=IpxxO6m>fCu4AN}5Y`5{vRb|8Ng`hkPH zmKWWBJ8&A5PaVxIhypFGtbs8m%@T-d(LiE+`{r6x8(0<;IkMB@;95t*opLv>BAKgm zG-SOg4uQ=NByWzBs1%biVJ@d%Hn&)pyQy)lFIA?$Rod6BOhDPxI;0U0R1y$UuOIiz zs6*?ZHR+Er(lPWE>+SqxEz_EWmMqM47=?0wEE?8Lr+~mvjfM zX%gxvL`WSnh-M1veqnN(T&b!^L{6TJ87ZESUbioY+fammpSnEDX8_=c{F3_cbRz}N 
zJH}omkoRAl;O{R=Mo{K(8J9pV7)08^!^nd7=YH)07Q-_0?n21hKr7(EIC9D~4RR|| zb53`qxUZO8LoK$I35RcxNEyPN}sjrBmh zv-@GO;N$BDWx$Tt#zxtijlrw8ZliaN9}vB6{EJ@4_v=42WVB-*{*9aGsM6SMCVDFL zyPq_uFNJ2T2O5aazKRj;!I@qTF+R=N*d)a{?%?&qxqxt$sX$smMsKKh?_l!4Xd~;b z)mLLw(TXdB7zWN2yF~^4Aef&AVaFodM zp&2j0?(;3lVeRygqjN90*v`Aw6mG-ip(7@t0bw@teR0&^vY_ijIEQof?C0%lgRXt| za?g9nef5paWzUPm>+UbXg{CB#)Tm!`RNYnc#)Yus9deLrapnc#)!edKm=%6Tfs?V}K!O}E zSbb!Izhi2u5t3fQwsNPoUHP^~VqAw($46v+8q z`>CIQd{QLm`>B#6f+)&Lf8l5{Ez_usSqbrDU6EEOUI4)h6McN{>BJv6Wh}DepR@ zh){VmJDleWbPfEPQQ*%Sp{ShjGf8$WNWvPFmVQ#nTfq3=kaYEj07F^NBa4d+Wu7x( z0a&abyy-!m9S>Sn=ua7gf*l`GGN#b_QR_pgKsDZ`PkRp)pNK!`qGI3J#;coDrwqBs zU>TK|bMjkneQrJy>byT~60gH6_~yw0)2!eR0dd@#vp?Gouyj5rSih*3Jg)=P0!DJ4 z+1<{gXqOMjB&i1o(EE=B@)q@W5ufhRevm z;1CS(WT2Ato6|hY8OhCVqi!guK6?QI}jPn%=8{U!p4Q-R^d<}CS z2TnQzWkDWhAI)tQlDAe%O;yWdkUK`BZOEx$JLDmAmWHDc-%Q-x z-7Jr0yE7- zU@(w)&>)|nuYmek@4EsHjR)sg{*Tsmu@>S+R!}<+r@g@ReZ3qXEgg$QDF$Ec~*j^8Mqy%IaA!Ce#B&=Wu3IHvA?U5-81XM=)%g9hz>fKWWJAHhoJ5RBa+ zzWzlHd%U+#w-FAQ;(ydt0MAsu%vh`lO#U9SS1P!G?^o|;5p>;-&@D;^fs2%)y4<~v(Ckw$rv%tfM+Y#M4@UjI^t4SF&Lc=ofqcDgv*pu%ljL&xW_ zovd=xd0x`j;cDsEcj?t~{X5GF9Oy?%peoq-A(iCoJoK$uPVJ%GiQcM7SSRbZl2URd z2EDDUFyMMol->74+`NMlh0aLL?C0X_><^%}qLBR(#|WS>;S6Qex(PG3rVi`bdEaI7 zFbt)cYRBn>qq3t$Bj$!7uX`>?$$^CYG|1ZkCQZMt-d0|d3kE45KlOH?Ec+PE zzeaYgU$&9!^=dhvIz^px0}-wJ6_d@SCNb{^x!nuZZ5JFmTYuydQke$nt!3-ie!QD= ztQ$q0^MpIiZM|Q*t-l4spI?4l*a1rfpTk+(u{*aRAC%$~yPq?=>;AzPWsIn2vO5)1 znFSJlF~2_i`b%-gKcES=@!!9sbm#W8uwv9mYmEltPeF=Iii5wSp^0o8<+7MXxE}Bg z5Y|ZbKSv-sc(Ubj9g1?q@a@8+ZrzwEgk@PK+zp!clV}89gLtJ4M47P{s{CuREtJq> zfrf*;W;m+pk1@>X=cV6Tjq>d0oaHct`>-8|`~ts~Z;!fiY-{J0$^UlMAMsh0{pDQ3 zve@2#(RnLBeY=C~Gg21^aHZ%x7@s|recvW^H9A^pbtu@A%bGROktwn$6JQT5BZs#g-4%bBi2(NHQu1Dmh>DxiK6IE>yixKo(z@)Nct;N9 zGv}RzF9}`*)uf)z5Hny8Dc&0le7_yC*!!RvxIx=wuBwn`Yv~|ix)e8WYlk_-;sPm_ z@^2$MCM0pj5*G9x)#oV=o%kcdhV#vhq$OW4)C z%GiqgtPYa3WUWqes;EInxU5VWw;Zjjl@wyhL-6aRT_~YP(8gO+%_{$w zsSUd*NYYViy+-Frapw8WIKU0M;%nQO+x&8UTBnUiO!@u#Wk7U(Li}I64@4ka}%p$#2g>yquj<~0--Nj^ 
z4I1v3rN{|kd{C~|>~f}F^NRGGW!U$R5z?5C!p}cIbxY0mWY@Y4hhf5;kV}}>GnpI8 zUV5U5D6{W9m5IaZW?2cdLdS+a%d}K7V67rKySOM#ze)WIYp+QvTo3e|{Gi1B1;TMu zUp<6*#0tE{WqXfV zy6##=0{Mxb)2rMaqir|5JM~3Yn`is9 zZ@kW^uXjpQ8JyO1kdz|s!}j{A2li|OD2BRo0g(CWTZrm zhYr>4dMcZ?!vt-bx23QW(}kpaSHLDAank(NI*U*7xdaU0FWb_7#(1$?d(o^fEaBEu zf0yZ3jv{D#wcYgvmcLWOTS+o6ZlusK@M+R>o)z!9`-{JM1!cbeK^bq)eAd37AalNE zUDNa>E4!k9myi9N3j6|(=2|O|P!Ti^`$utgKmPeFT^(6XCL`9+yHC|W{4%S(?m`-| z~-jz=M8H?ZjqNdYk99*UI+PLv0{=QbQ3+b9{AFmT`(Q<`dQdw#`1IKKGs5Dck@GpuuD|*& z&yhAo1XFfzFD>-TEQ`$=rV>SmR=|eyX0KnCf~|Iu*<_)QoVqS`gw6}b9Xr;st14$4 zTB*NZ&Aab>One6-0sQ@poyQPf#}4S<+J+muD*2y|6CVb?HwZpA6aXQqyA{vAkLt>= ztd52b=_Xp@#W3v>o(g~;%&a(DP{0U5-+-jcNaqDwgOTUR3KbKGgS;ey0r`Ub9950t zP^Sn?*n>SM+$>c+aOxY>^i0;(lSp+mbMd+(ek7d6YR#LE`Q9YvI%ASt3qJKCpI3KWHh-C(EUdn=I>rTL z875$c!_aZoB5_!&cYnF6T_uwrPNbVeVlmYJ#92pYi$$ZvVeEn4<#u&`k zKohz5=+mPGB6!^w!6KV;^lYuqixHPxF@0{?)?YBKnP$fcbMMLT zzF%TbgP9NsTV^J{MuAZ7wWHTpo&80&)edy+Pzf4vv&3!-6Cg~}1vR#NpSD|bGtCh9 zIEACqDwj$223m7^@uJrLTx(Q1y!{+sKQaN%TQ{yPK?ekOo=V`0b~9~gKQF+&mLTh| z+Xahk5CrKo!C`<7pw%v}-~@Sc)9k;i`WDca($zGt#>!^~N2(|`Y<_64AsMym%60XC z-9AON_6wYNYFz{<^K&0WrU*ZUL#R;E+BN_kC0Y+&yPI?_+qa6RQI?XZikwtR?DK}^ z^AX{aKR?(hUvr7uHzsRUr5KSiSgXinh=0B;W&dzXO18^5EFSH8%I&Vr{F(R|HfHo| zky>Cz!$xE4=3!8GPIKq@eLL?GMfIJJ-lE-Pba^GNkz4zs}bT>SdT- zvwUhzWH`15x?L!UIrw3r^0YgqvF6hIK|uC$Z9}-kuX+B6H2{y?jKB-7)MnwgXQNoP zrp-c7Yc1xm7-XD)l?S4jN~qQMY&BH6N43K5<>9zTHU}Aq#(uVFnyEunN@H(PJ!#{P zJ!MuZ$tCJmKCID{CHI3j1I+GbBhnpx%3b<3ROT{ASmp~BbVTT0ONj>*K>!p}e^w-g zQn+UpR~Ji6hU{;?3rdo!lAq7H*`>~IZeHFLIzKat1$I8ruN}Rt@MNAC@-xJ#jSIY%Ro2A@spM#|x}%WPAScR_D3n^fP?HWg`aX>K_JOAy~`7 zBObxjrU2YmpizSj1bn$YRH{It!lxobh_cu6dUX9|K%n)7_jjIcpAm%Kx9TNiw8({G z{v9APQxDRyW+EY0;ar7%5PY&!^)A}XH!kDp|6HEZ5&%LY=#qL`wbrED#j2034fmVH zpG-_|ZX-@({ejpqSqezmkKm0a{A6G}`( z7_@WXk|iV}lE0URCklMABz!(s#8M%t({p%dW8})p!_PBxs-cWAVyGG;V`2x!2){=G zegs(oXjmpWqRuOnpbMmBgqz+RBnfqYVJkAHIb`OtKc{!ug6~y?9Z#xte||m(OEEbb zN-ib>kqZxXU0$^$W5DHN&jOVWplp0gY2)w~ABKPS4V z*LyQwqzIn^s#`@m_$nC@HDc2j(WxFijXGpAA%WWL4~Q&pGl*>T{%pPY 
zRDFL!t9AIzaxCuXz7v%U@0ZS{uPdoI&0WE$n&*5Ig+)7uZOOQWHvuJ!j(UlYlc{G7zoEKQq z3;5p2@!(yi28q1j1qt#a)Q~*js3Whta@#9dm_hahU=LBDR_{XpF53a$%4tBQ@>XcA$omld1PPKV7 zPK#cN^sTp^M+;m(;E_|~+Sz|6xJI7OI+_zMC1+Y1JUw2n&Er${bS{WbQIUe%OD}$7 zfB}QOkBAzvK9bAs${tAFhNe1Ax;`89~U8eHRD#4*b~^^bEkHF+G3lr#bIUJo-I zEB!g4Qp=wUmzpzbB5&bQ{Anf8zSLxJ^GD7oQ zBxCbPs%=qX7_1gGhC~IFt?Mh9o9l%@$r&VjT8%DIN1*XyegT6RXGTZv_0kt(U_|sn z@oqS7Gtf2kd*HmKB8Cyca*^*7<)B9uQI(9S;4jGhZNz28)U(cK$ubDzQGB0k6fU8~ z;$|(-6R1XiNYok2WiGF$B*(6@C8v6Im3TeC{u0aC8B(?p!C?n0UmfljJ#hF?Y=;q1 zS@p5O2J1EDv{qK!xw2Pv$)#&t$J*GkeKaiafnu&t(i= z5^8K9Zx8ZJ(zz!JW(=7^GVQunW4I4-UVESgwylN?K29|*$VKo5z>NkDJpb4(3iIvd zUo^ca=a#FHYP!Z`3;pA$4P6!f;xfs8S%+-7+Fnuf0WC%?S)J@U=~b=n^Cw4AX;rWic>tj+;(qWW{`H^h=YB$5p40SketQ z0%#(H55mKW0$1Bvb+o=TW)eYyiOf>jv?kHD@G5+D?dcn&OxF=u?T8lo1{y7(p-2&* zf1{B7ix`Wlgqf1=!hyH`v^S%3NgGN?a)|gCOt!ePy=Q1;7|w{3a9`pN@qwY+dh_tH zCwLFxx6T{7hBZA^U1(sV$07Q-= zZ@W*EKVJv>(hpi#rz5cou~L zM4EV163ux} z{IvR**6elCYLw@2GI?yQ^ZGg9+ERX8npfSXtleqJ94OB)iUit)Tv_XXp!RjPl~>x4 zK#@W8ri%7t_@co9mYu+^w~aX4&|u4Gi0H_#81Y@nectj&;pHs`EKbuT&BQ1M60zum z1C~S_HU7l)bY>n{dSO{t)Vo7MWDPF2ovXS+57`iNbqXeGdn2K7S=NaSr zgAG2L@_3tF3-DGO4(zoqQPgqJD8hLpSXx%)f(t%&_dt9?E@@}H! 
z`O6k9Ru z|Hz!<`I4eiCBf8YUGZ$}Ot-1XQaUh<&QRqXo`XnDhK@!{+^3gED6Wh~p;JaRdQ@M2 z#F`%YW^1@#U?azVt7L(aQ~CorBf z|JT0r)i^vx1F7-j>|utu;MTrl!R`#Mfhj&w;LD3nY2ZUmRK0uHlKWjx z0u|85u%IwY|66cxKNxl^x1z4v1x zEu}6Dr8SB?B)?k>J6)0J8$Se%3`Y|UpT=n;5o+4aq1l5C?v{BG>~X`XGBD@y;F$ z%t(J`U=30RR zGfKwALasqN#Dv+kDA?4L!Fz_&Z$i9i)h`HV(LdDEEKtAdl!V>FAh-lfikY81VWgB6 z!pHCjBasMPDoez=dRxCed+m{S;{1 zupde;OrU-?VW^rJR=o{mRmd|)?{bQMWcgWJ&q_>N__M9`U0{C@G4QTnXNn4fH<={e#BBz8~AhENjBzMBix z=x{3qVbRdj)THnW8exg$GypVFAag5I>vgk;O>G`|ME#5|zmBo%WSV|Bh5gV+Or8Uk zJ>gJ!yNMo8jnNiHbCIs zCVVj?BI*C@W^6y|fxR$ezTvDOcT_Gx~QX_SG?$BWPi#BIXR5~6qr8*!OgLVR4y zaoN?E^;_eVQh}8bX4p=kp_S=8)#k;L>s>110zHT8A3~>Rh~b-~-eb6;#bmRXQWN?p zEU*kE!7g1$J>Dch&bhhKXq0Z2BCQ8UJ%l`jnrUS^c@xnKUe9NOZz2i4mtOehJ6*xf zMn1sA?LrJ)qXYao;<0-}kjrl#7L)|%p>Xq5s<~%v<$cDVA?D-_q*Qns!~4%b3Pt5r z0qjA@YZUyAB8AJux9$Rqh=72GW1_Qf_HI2rrZ_N#UM*#=Bx^Os@S^#%VF**-hurVX zAGDKIU<(J)bkTQx;uzeb+gjmOSWy%_QHqAykUitr{qmI z1l=j7wYrr>|68?7gL&0Jgal)o1SF6ko5628Wj#yA?bhnykY2{jZiW*UxJgb(5klS} z&l9Z@E*EfLk&o}E2amn;DHoB}lnjmSaaa6Um~XwMC67Lj%N5`pLQcyErg54QJ~)MX zLOTw|58n!nJ=Kte#EpUy$D@uR;gM%64+!;(LD-6f5v*As+|hhDv=R^V*j*)m42T6S zJBJLE7X2@v^9y;SLkVIUUQ?Z$$_xkoLDW-xZ1LV1W(crZi1*>BF_8mgd_#If6~+Sp zAtuZc{rJ%-;g5KLys2zry2l=@h??+?sBoC?TDI$dQpkVi-#O^8=YCr`iRvokUS$vj zv#+e|Lm;E2-m?fbxnYV?Qm2My6>0uY@qvzyEm`q!HY+cCG$-n&JSwbt*Rn(I?z==! 
zsKX7=uKX_C|LLv$!(T8YPVAXtVDE^9HRtQcbhZ=g4mA9-laf+nYXpVi5h9p{5kc6h#RZ?`&2!VtS5=A2d6qmOjQyMn4{%!k`hx zK%4B+L^fyR>BqeWf7$apw>X|gkm@IHtt*-^BQ0U>nfTQG-JiZQI%dAsK5^oMvJ#e$ zf_{$nhN#EBi8=DzZ-;u>W+9+5IXB@$zGK(>H(LJxUt1Mgn3PlDHMgrOSsH$kL{Gx% zCPUw?4P%#!b?T-3C2yogV;M0oB^vEoDTU1qSS!(t$RevDTehBpaY&j;MeYU$z_fd(l8RHD9*gnZS{eV|8zgCNO) z9VHVlW}Q@UiDiy^#G-IP51rTyqr_*Q)jKcV522qJ3N?MG6MVTYX6ALpvsEVd-{0&1 zU6adzOdu}(v_gBpz6J?JcU)sbo9aahul4tq(Wzi>Y1A|~FKIfKCGGhoRZ;jxVW0O9{{r>%{ZCfU7I zF5BlKLb$-ucN~GOXlYWYi_XkJd=10MjA%9^Yc5|k9YCLf#28@k;HNA=>A=o4Vz*v=a#{`J+_7At`G_GRB^q(1LCMqM=$P@ILT%cPkRg=<}MEPGfea{ zQQh>fbFPZ%Dw>A{L6PaIBxURW20@2B*b9&l;n-Dgaj$H2eKXPb^U>k?1d^qJa}D9^ zYn=m8PQZqu_FFWcOt;-o#&d2q1^Edx0jP1)aphL8{DO>b3 z-Etnheu)4^U}C%AqBA^;NT3DdcXgW~Nt23$CJ7CB4x_e+l=Cq+O#_2f$H8KyZuodE zAt|Po03j5vc}_5jKv4AjQ|LJnqN)ABIF!=q)wD5)Z!uI7fCPbj2#WHdBf~4XiAL5V znpgMD^!jgh|9J~vnRg)t)3KzFGP~UGEB&kW?&&cF@ZpRiB~}(OES$&3L{v)=MOq=* zFAs~u&OQH7qIk}>8%5;vLh+2a&PMF<6o!U$R`WGBvjL-xvq_N87zNxi0zc2r+3PmH zI1~~n?lL3IQ0quco!0@mm!vg@VChtX-Ij2IICH7#Z)+&|JkTPbqkd9COn8&kI5Z|C z{6i|@1wVqkPV>E;v%QzNA-98@mQ z<-%FQA+7kv1G_Ea?=7Uas@kFdu=D$Iu%9T(VD&{5a3+@&>F6H2VKV*TDW>i z2$db)V5wKW&|EfcJsnw!Phq0}B)7>)pC;vt9pzto|eJ<8`L9 zb^46Ip1yKl?TZ72UPyDh(vkO=XzRQGA~GiQE~>{((g(?p&OI48`Z$bCostxFkCYTk zeubm3$;@!LdVdtZ7V-Y~o&l+vxk*ry=$dg<@VoJ2&ipNBr}sZ6T~`+y&ySE)i+@hz z)iH9@TUTh=e{Zpwc7zC{$52ZsZor2Rz!3l+xQKJJ~KWh)h8rz&A34cB9H1BVVR#sGuH`}G6cYp}=dmF^jQ+@$Jt%vOVTJ`!&MK2zje=Re;wl4*t4m6Dxr()`Mr zvKskVe@xb`IvrKmM6x$Eu7W}{CIe(U)OcFP!Uf_mA;h=H!^6*`Ss|j>s!0V2bcbJ8 z*2A+*Hpo*6Pv9|RqLPVjqaXPkeF_oy9Kk12Gy3zv!SSl4w4oErXP|3DldorDn!KH8 zm~W2k?>P^V-os2Hj?w~i5D3*|@wd2}1XYUz4^sAtbD8&9RMnZIZwExap7WS&TR?*SVB-GP;E0cv^ne8GO8GJYZ{d4d&=xCt0)V zzn{w3G{3lR90{095;{d4@oxKpXKB{KX|}NOBxSc4`Uf$M#|S)V>J;9!uL&N8^h4~QWpnUPw@NmjAGAo{`{ME?_*p-^Qy@sZ_!CS z?O{Q<_f0e>ym3*JldfQ=E-{hAG~cIGP27k2gP*_&!6=GPCVE<(K%8kYQxqFl{dY++d|YGbaJ)Z z{Dl>(kdNdHxWT_PzkJsEf?5^RROQvNcgyiy_}Zu0bkzdjw42iX#u=a<#3~s%0@YJb z5&Lj_7hU{YV-v{kKLyrNSmC{>(FJGV$rBVpl3Id9+2@#4IJwqj7sgZS^#JH~Dby6R 
z&VLCO9ydkwxsG_!vO<7t#K7GiG$I7NVsYop1487KMpU&RU;^tVD#Y=>ac~=L=eXa0 z_ew?B9Eq#@&3(;$B3t^DX1Y}LT0WO~AUnNs?M#Z)W-k}!Qs9*?6?QLMXhSc$Zt{*Z z*eN<4ZGxM@k>gd!{~H`vA-vZEuh_@FPyX-8H4b;FzXj75(IX;OeNgJa*xTysPm$)7 z;QJM^9`twxUvc^VF74}!-;K)ami^0DB3|R=ef7Ot5Y1O;3j7wOdwQGqN+a<}L|7+i oJ9C5Lr++e<v@N&*ZF3?ez%hpHGDxaH_; zC>}2Qm6OoU6axcOA0#EEA}1w9r($aZ1X-bP-3LOXjw< z%IX-tUVCOs)nvl`g1`@5w@y}V+!B`*JJLLhKSdSOG?CG_2~wvz$4@-py1UspW99q! zMmD9p0V-Dcb9>dCHLF=x$KyGpkF7nGFK$DuJtk8Mhk06K87ra(gBjUO=)N{(Eyuj_ z!Q(yjVp2_hXZ+#1Zo*UPn?l{oZ{b(lhn*Bt+6$!c!dmc32qSk;gz-~6XO?(PCllK% zR9`=0*(dThG%-D|&w?;3jvvbnY3xL42EOn;F-@73=o}Vm62us|mEq2*pe>9m65M3E zb3OAJ5Gnh-R}3xhXQof(fJ#aj_t4jP7+9FsFfO65FwuV){vZtO@7EX@Oz3wEj7wpD z7?;uS*U{^z#}6dfre{O-fD<{jT=h*3{J6&fLa+`%YIf1_qWGNd2k( zQzb>fa~m*+v5CzyQx0eFi}NlRBF+HxCD_#7n9dn&Wo-v=7N!5*0)W0gzs*Tc_q~a| zr6~PVB^5d;8(UL40S+z>E_yKnIyyQLTN5Ba_2Hw7?&wdV^yc>VF94jJPEJl7PP`m8 zwq~5%LPA2ETs)jSJnZNe>~=2J_QuZa)^-d(2KhP8LsPrwwxAdGARBAC^Kp%z**MsX z($k+4`t|PzpQg^BzbRSUT}%r-LC*6hoZK8-od4F%9t8YNxAP}Ix_zJ54{{>sg8@`P z&Zbt{4?$qGRMD)7@o@2qd?)iiPyS}~M^6nqQ(Gw;FuJ3?*x$=?(fOYb|GVRNl2898 z$t}n&@K2imc=V61=W_r&wgsWd8J`PLj9Y~Bf9_qh7vVgY@INH}V>!RyMX#qAfe7a> z-4G*akk_lkz>vU@dnl>yjJYz2V?-r0)U;tXYFydU)zw9m4Ng&t7XZ1L`bD7L8^#tT zz$4=$;JEKWR6t-*3o&o{QoP{qy;25fr4_~8*+fT}qnVkR7pvfAqX+M8@L2gf!wQQ? 
z6;cTdOzhvkw7tje&(n=l$Ck@7Lhz5?ulC6fVvm+hJe{sWy>a z3u8Ev&i*~^@0=B574ADT{vVCnYp_hr=O^r>dj8xn5Q9R;o$$|XY|Fg^Sjug;4^r*^ zG*yWt@)!753q;Otqm9U)1lWPa@vf(1GTr7+qhVmV=+m(!z11{0BYbiEqoqllgq~i) zB7;HlEKHwwq1ow!;%w&apQPxt@Lw+$OWeLa!xKyzt)TZVSkHmf{^kWMwt*5)-yWz+ zO8(^SnCz{fWZuZgUt4eaH&|66Ck`gUzlIvYUiqMOsQ&~)|D|=Tr5~x1nwq+B75GeF zFPxy#u6yIhij&Iva1{#Ughf|g{k}4I2AD(!t)I>5Secwj!Wg7}k|J$a$3zHccm@xo zXg>YRr$7-Wi~9$uS5N7>0&#A9_HRqQ{X>Z)=&<>ySeXEE->@(FD8&C1k;JX5y;SKl zU3;Cy?gv&wnzw|kh_7Ev?f3>)%U|JBtJyU1KL!b5#z|A6V+9;iH|LAE>VKicNmzY|Ztl-1%y>mVrgc6y9%+fPitRckoS!Qz zE7xSoyr>9bhyHBj^AMA&OFp5C>c&rU=lyi)hTMl4b0SASmj56iv5vr$Zz2gg94nQ8ZY^M}hF(`7g$I|UC!>u@VIQd|JEG4~|Z|NhtsfmMYEi|RtzA%Ho_say< zA&t7P&h6XUF7urYZUz^oK!S;Yu_teH&(Mk2U-N4KZmGb-LM85)`GU)oBXDp6MsyW` z?Ls|#ng~9B{*1Q;#vO_o`YB6HY>pefsFXwfh@VoaQImN77@uTI&7WY~c@eVJeYj1n z|MWrl(kP>cgNLbUW@e_1j3+~jzt>$9$r&B{U~YH^R)J1cltF`Q2g$S-=aVN-VkZEw z?;0{^f(h$>meZWz?zc$w(=L&s=c3QY#ums{@Kp>(O>=%#qUX^p$x-XZNXc`aVy`nJF})z7b79lPN+>GY z`=Q>m>sRK3E9{p_ZND~&2^HHImNu;a+|B3(@QJ|TOCh@%#W_!qf3c~3s2PkdX~%hx zF~g7Fw`FSjH;Wxrg4WEn1$O!IUxF$IF6kQM3$Kr6O|yDO2JgMm~#r+CsUn z{rwf1U*V{&s`!;oe%Is${0S7Lu(O4RkX;B1%iu8&VkY)$cl?=s{?al-PM z$_f&%-BFMa7+N(soWu9@;wUZHTKVfj3bqyAp3H-&EDkLT@;=)gW4O!tVrvo133|B= z7ivvUI-QKWA%a$~#pBq~34r@l?`am&d!3!;xt%rdr)i8ks@@ZMz;q%ira;2!l{%xE z3wq4*G+&W&&+?V_?)1u2mf;kIxL4ZBE%QjHhy~xI5Yt(M>)dA(N0gpiUBmL(l3qy; zYiva3APgwJ1xO%@mwaeGnk2%~k2o@lU3X+5noawj`0;-F@g~$}*F)aUW-xcWDS2(b z0`}IXF|;F`>7W8!Y_+{N?G+&Q!>qU{NN`f_>+!U(w+2?cW`*@>K&4kaQw zHS#*VC4J{}YPe@7YRXg_A4qH0zi8}A6cQz^Qz(JKO*`daIu(~Fd*N2gNgkY$OKMD= zqr7ISD+-Ua<~UVUo%;?Px=kCuz+zWtm%(+0VXQ>Hk+ zkYo<)zNb<=7&0ZF(t&yi;S5#!OHgURD1nFw6wBuPOm_t>267n3 z7vueJc|NVRU%ht+Ml&48?>O@rf3*5#y2!~~xTCO{Zy(V4^Mf~66iiGqL?W!Zi*HN$ z$?heIW&&YxX?^=G{v^;be#^Nd6eL=MRSTdrV&-+2OU&809k#=+Q=W4|k^=kyOX63U za@~S4u5RIeH&u2x+X`>RrOV2;Q>Cr|Fxh$>?W-kE=%d{4KIj2Zcuba|?(mo`@M$-i zlz=}vzq@8T3wG-GvV$}^YiytZi^+g#pPe??Oyt*e#d0fk%YVhvIoewbl`tEezxx#6 zdvP8L$u~`-vV@lD775WNne;Q3xQFlqVj2;nA_hV8iHBoO%k(F(4_T1l6QQ@k0A~^< 
ze+or6U=$!`g-zH-Ge06fhLm+fCfg$K6m6+9-6$Dtz}SpZmLnecErhPP$q6;3e7vkK zT+p)Twbqlh4gzXW-a2TRva;(5di|hK;8QVn5|Ql?#TYKhwDfeax7o`!OP;kd?$Ej&zpR;qD}=N4r{1U_SrxVqIr1k`xHcefBfhf_xy4Q2;SQv9 z?icFDC8WOVT0d;)q#t&2v1D7=GsiU6y@aee^G!Y;ERK)-=~nbA{fR7GuDE7a8&hiC z;52wZyf%$0^jKMZaxfE4vI6Pp{-ms5C)eQj0S1YGwy>?kUY7m1BG10Nonz8XZt{?5 z1>&VEXu|_K@friiS6G4lNZHum5RirGI3PQQ&yHk+XvLYvG$(P1GQB&Vjn$flE5@!@ zB?!9>jYjGXlvVLr!PS+M0e5(_6vJRt4J5kxQjCskLsFeu`HXH(sIb`ntadS8y=MVK z;7p61deJe-ZZkDbo>?0Wu)Jm@J>$ zzU?p{->&I_tJlC`!kX-cOk2R7h~)|kxoL&V?d_amWChuoc?a?3@)gUwsGf+*Ek``c zhn557v1mitq$vDCF-s}#?syfrLSx4UacKcp>%^@trY6a=lkrto0$}u*h6*%%G!_g@ ziR9K+HC_OCvDkriczYaC6H9Ba3}|~H%{w-R4FaTQ3!Vvp4O-zF>8JXIf~IWx^|A}` zvx8|;hlV0wZ7r`_F2s#hng%HDZ#g*$#1r8;u01Ohy&=HRLtW|NfafrGqYW16u3$SAYuwBr9`_!+_pb zw=dg!HD<|VX3E8{7U)M-ou-xwQLC-&Ra48vK+xFPgpi#{2EG`0s(w^+I3)pCc!%WG zQfK+@)*+Ski_2x2MF!xq7C*>YBY(f+NA+BKeI-AN?~w$Z1ePamxjRm2LejFHjt^U& z7Du$FZ(&3w4Pdk>U`+v%zZ?x@Kb$X;ub|{P^tYKupk(AVwsepdeI<~cfP7r*M*d*5 z&mjk`TwSfjT2LtQ6q+gprMquzya=Ck*Bf(Mv+z}b9E->|9v#{p*+77Ag`|B2-4tIe zLm!Jp@Uv>kLZF4Rwp;Ji9+#zZ)>*vGH>}eUJ6rUiGO2T4CuK2zWyTuCA>eTj#9KlT z?}g$I$)M1#J+pxvcL{9tJ55*ns}Fp;*^?y1YUp)XLR9F09F@5j`?9rSk(*DTt=Od> z_n>oIj(2D~7>7M>qhMS)S>%bBCyKppnmF;=X@NtmL+krH6A#lps%MrhO`gvI4!X8A z&hYw-qKQ{?GGBb7hu3RR9pM-pX=RvmQ_1!_H14J%|Bl_XFvTJ9uHb+---8)U@?P)A zs7@xX%;EP%wY#oIdz;CSz(~^o@;FG@_$Z_(U6D&h=2jn-V4`Y~F~ew$XeYu^SsziO z4b`lKMFNSD(_fC+m>(zbQ9RQc`x-jLqFx?gIu?<5rQQwISn7o`7>~m3zNh}tYIrI53rt5g&vCUNb@$G| zL_T0YZfp2-l8kphO``AZc7@R^MauZM!2yxS((;)$y1`>_T3E*wIk%+FP!+FT^y8Y} z0OnHEM9RwTV|mS@%BIH4)Y>|hH}hvlD-oL#^+Squ9?q<^Ui+zTcOltbTpoaoB;4{^ z2P6}(+jdAmSJ_P(KU8AXbY)StVk|wcS$Blxb>0AbYmy$LsHu&~YPsusj>9pxPx!_a zpyuKDyAAkOOPn&(jUMxtu20Ar-11uC*5>kRblaHARadq&NdxooCML_nK z$R~H0E-6C8q_1R1MF(VL)TATVn_nL~zBx$sDQa2h{y4}QH7=c}#dh87m2Lx6fw`vP z*btaq-XL@QyenUMdqmApwL@m?hYCm`kre&KS((ec>d9&g zG5Ohe!B?Kuy0-<QW%anO5diWEmU2T^s`{Je(16*SB_S*XXZ z+e_O^uf=&b=b@ZtieFK~ed!s#9RMVdkk47rW;c*gie+*4>eb9iOc9Tw`Sx$VOn03; zWnZbitE^St%U-qYLvesc{R_HC{(+{Q!ybApDg%M*sa|Ht={A5#LlkRKO}7VI?OJ|K 
zR-v@7mKI^nP~h5=0U+H{Ip*P>o5OW%co^K#(R3#G*`4=L$1TBpHG6e#qK``|KFS-^ zPaX{;r<)F|s;3A)whntHGy=dbZjM>E-YV4tQC8`g_jJ2V{ds_KG!4MiHpGn+H$&b6 zJ{IY-EOF13>atETS{}jAQEX(^hQd99Q=VF3fUCg5%`QUoMA4UAiFW3RsP39@|4fOJ zEy2Si!|=mAG;Wlk!@~A_&(O`rkT*}YGef7;1oTl`M@J^0N}%2p?1Ga1676Z^ch-L! zry^wAK{~^xQTnYqR1Dol$VB!k{1~=(P{Swik*t$SF($`^w9AJxYVKHWW6ilJuUoqNqjPKNC(5cM@`ja8112^jO>S|&dk3H@@47o{g+!Hg?-dLa+28d_q?8Ppx|_FUy(9|; zFl%ZD(Kpt|%zM&kpY`~cDMy1ChszeFE6Q)cLTcCIEKNS*6Sx>y^&7rOJFZ9NK{tWEm1WU-cHf%74FB4b@`Z3*I^&@@c_8^8gjO;0dQPeI87^b z@?6q4HHE=wb+(}exGVO`Tqj}qf-#E;21&D_u+Uu)5#%KwR6i`)(IL1nq91V}wXqlu+@5;k;$-wvtx ztm!Wm)pruh6}229`R!@ok1q*a!qo?zMGZpiY^qn4h5aM@`zrD@W8?3Q$ISV0@ot|G z)ZJ$v7aF$-3@rD0jNI{m)*3Kg>A+!5v-FvTtm{LpQe0Jedd%YV$f02U)YG`6ll$O< zqx!Z$jP#oTNHlKgJ*@(!g0b?lmOB>lR1Cx8HI=nH%g=+;D`GjJyrfkf`NQ9?AY)VU zuaI+CWouU>PyHYA&hV5`PgLr4Q%@)FGAWVD7fHUZelY)aH-lj(qv2{OIKZn+lKY2O zc_$#ixgdh#3V+Vd={m_=w4rDCIRJ)+!&WLrIUA2Vhx#I?c*^~})>DU9OWlljmIqBG zN+o(Bg%E>NK0C`*{#m=MOg8P(hiHJxBG1AxraPYOt?lvtOPAA8GjRVHYRcoVJ(Gz_ z(O3Ggxjz-}OqN=J>);gO*43vmsURVTS{$nRT)r6cnu3L@3#ZLEi}R2+8Vy~cBV>-y zBvH4qfDf+C8=*3^kwxdqF7^FQcniZv3wZ^772~&q6UC8mA1ztSaQ&Nc`#2saW3OH6-FF`OJM?}db6CV`zjH`1jFOO8`_irZnwlNG zXGuvh5lpf)s}BR1n9PUz^FeA8TO3HD^jXitzT*OqH+pVrpCSZx%oYYZ*2LHiOWRLZ z^Q5MQrM5Foh0V^k%zDbzkmhu#;-n2NHk}9dO5teFVl^LN*yi+uoKT7 z5;qf9)@@8mRAA|LS6u_c2t>0=7UJIM=-JJ(@7f%mxb4Lt@$iQkw;$=9P(d`tcXx4h zX}+i+PCIQLz3SuXq0SDvmfnAF(ZPKuN!ZdaL6(~Tixvc4XN`E@fyTmn%FQfdPjgO| z(vM70e$uLi-3L@3`b$wnWnN)Y~JRwiO7BmECs%IUD2(m{GRc#}uu;#zed!JnvJ*I) zvSlWeS)dtT5N5#DH_cDA--?-tKnjCVNyA7y?DBI?SxO z-U*&*B5M>M_c_`>m?)=)c;-5O!6z9IgZFbdTx0<0?&Ey=v^mr&IWa#kK!8o~%P5w-YK0LdcsYqM3h59e8t#Jn}J#yQlYN}=F z9S`!J3eF$ivUDs;WakyHYr`$uK79WLN0k8dCe{Iyg|c4$RtS6#orS#$D}^^jUayy* z!i06nHx>+abRLx1do{T2Md<)1lx^;yy_B4-C!oGSCL$oHFZEdLVE_5jKuuYq)JL5P z3m)fnBMMR1a(xsbc@B91eV2f!>vjk_C>bLu>gLR}Q;Fn~I$iXfc0B?gOkmqNu1>H> zDXyd9EQ=~awPJuOG|HV^d%vJqJtXh=Vw50nZ(L=#_LjEE?$PuO7*MK)3if}YU;9F|QkIg6jJ`-(20a|6|a zx^=)0NBo@MT_hf@CQqOC2_{Z+82&RMJFpOnGiMQjb@D0>9<2^mJ!{yw*#&n1;V+nr 
zW}E9ap03(2XU>mKd(DdaYS$)jpZa{=RvMQ>ZX8|jf}doV(JDin&yUP=C$gS3glt-A ztB^w`=sZcCC?W6Vn?2;aZ51sN6Il;79L9&2CC{y9?Ri)nZ4N%3)UrVKk0(bgBiDTo zWfl5X%J=URS+#a#jf%?#uqHe*QNkXIyGJ@i(3PH=&SYU}eMM#^0L2XUG^;vAMFqC~Vg@D{8oxwb<=;n8^3IL|X zymu6GXL|!`n}dSWBNzkvOsp0&pWg{Q&0MACVTrtYm9md8)J!oeGRX+`7H-=HuIgw< zHI}vPR#;5xY=_p@?<~Z3kY^s8oQWMg9ILc6L&@9prM*Gp``rfW@W%V5R?l!Fw!{v$ z&O}ej-4Aw#cSizZ-tJx^5sj#S@4kL|R7(G{zW<75mT~jmv?r?*GcmJsc!zqwzrnHX zVJ?tCCP-0ALH&CYcwGjalUww%3%@LP41x{|#L|4;tl%8FD6T*YcuoxSL|I;fbuUrS zHSqj1qmP`|2f-~RTBw{pL1*I#p08Xnk5&Yu@+Xs}k!Vw7+H}Svq>pN$wJyuGC#|>k zI()}XAJ1Xct-TnKr|Pn;u_xv*^d({n!Lk4A84NC;YmR{&51oEXtdtY%2p%8%sNBSQ z;HXX31&?Ukj)*q&7^E?AK_xm(ND+bgTh^82sS8z@O}xIfl25|K(5b4@eiMD6R($xP z8_gJMO&E1!QtW_KGEQhGV$>1sXm@9=nI$T6xNJ;_0bW+8vg=m1D+x}}QHI!Yr~;0} z9hM{Njj*g3WCCTU)Whn!`D5(I*{j;qqyrQW7Y@F~l#9Z4;e`Cm4D2papjAZ9lXNRX zk#Q%Ee2Zy^94VgiFy!X>DcP=vN%Me{^*wF{o|WkvzN7EeE$1(`$r73#%rFn0?eA$_ zx&kZ!V(XjK<$j$+SBh)BP3oUucLB6Rp{sc&@9?cYcsZmTZ_C;p^$t_6z#Vkjrqsce za-s>Q7T6ZV?1&h(xGPZYqMo?SdF*_?_Do6ko*ALND8%N(X>CSqQ^AAQ_qJo$VSL{6 z1p|d@GVjUQ?}dhjI=*tWaX8k`qK$CfSw_c#TJi7Z_*=S?(?)@Sh*Y473vM5J<;MjM zVAOO(lmdLSHjkiV6_(-H!}Wt^Kb-Uhyu}X$W<5zA317^A6||tICyq^UZ{G7T}=IBmnSf$*olAn@?|cAqmTSoMDHU)PfAJ2bb!VP7-XQv z7o`DuSctYyWmgL_E}OM6my0XDU|p5==OuK0$y1zgP%7e-15KEZOPs$!WP(mnOW1R8 zd=K2tjYqIIaeVylkx6NG@Cw;?!sjn^``FvUIiBR1KZ{e=ZuIu5eQG&Y?D}k*v@`8u z=HVK|p@3d}Ct2PSze%SeeUW2d60n4VKa)LyB%91DvmYFn8qy7Q_5Ve)I$c~oI_>8A zciCGA70x-%(UlxY)kE#^!?e-8zycB%_-r7DWpJ%Y%Fj33IRiBY-l>=EPFUtJE`m=q zq`}an;#;$z-uL8!jyk%wXtEQoU&?zSs&+6Iz)3QPm9NSn{`7~K&WX?QV4ie$)N9wQ zWc2A@^p$NQ^YR21vW?xT`&kw;aW{k$Lb3ZMMblcuVf$BEMmrgnK#Xalo$60UFm13n zuF~sF89nv8V3wRC)p_!+G^E+oGshKUX5OwrahHa7kQ8(OM*e&t91%`}O>oHP-w z{oUN({T8~AmzWLfDWf{BS0DK5l$4ZQ>zPZS@Xy5gITgCMINQP*Q`DKwY*^Zo?rB1N zKb~OGHHG+MN@U7ij2q@{@D}e%f0L%zyMTbQU}uk^uTd^9wiFEx`CjB-)KPgixnR;L z_dYE6YUL3*93Nt(ds-p?YaU%GlO&GsB`=me|O zuS&gN%kz`%^F|VMZ3NX_MiT!lHv6}Rw)x%^)iEKOL88L;g#SV2mr=Xu*3<>fy>9=v zGW-X%?_$$XR84nm@~R{RUmid~UPlzpu{4O88JSqswI460ZUNT%P+CT}=a3^Bw*T 
zPZ#nRQug-N3=+O$kN1a3l1enG(J9V|PU9{(gr-Sns?L8@O*3J6S@Sj(=aGtd!rj!acHCNOwn__*Q53t;1DFe+% zs$(C5C)wf!@}oyaMw-?_hRQVP`1sW2qggC_4&pSRywn3TTj2%SyiliZ1T(Z_Qn<^? z&8emSKS%sqX?P@?$QB9NVO?;ep()Wiv{n0r4_#6+Z&K~4NU{^%qhNRUUP3;RC@@Ydu? zd#U(eBmdV&ZRW4pR#YaYdx)^^i#y_v<^04(jh4^Zl!aVL!P_!FFho zN=dnO*y;|!jVo0i98cNkTRiY#hP{;<<@S+;RgvM8=G=?oswKwcAl^CcqeV{?!gkP$ z9C-f)#kH9*`fp^J&NXqEq7#);$ge&Z&D1MA6wY$0|-(M=$kYF0D>%S6e(gpUt4qk0=4BR$N$1IkGRfBK(qW6t1gQ zd9t$4v&=ksx$AoAzw(X$MZ)C?+HlO;oxCJ}m&2H7<1(zbQo!)Cgxl5#$rF2UaB|IV zH^RKBa6Y9iL-UALnwLJUCm|#ga;zoLLuDCUXa8XVeri}WX4E`7>u-Ykg2UoE(6DF- z&!4`uIZUP1`R>w_%BguyrRbCRo6wb_TT!Qv&sfZYXd^;;x$ukNvdT1546LW_hL!B6 zYJ6@MO=l+eMA0V;(9}mscuKcZx^pxF!jY= z_cwvPK9idZM6Eg=C^3ymJ9#`iq9VV}jx=+3lXPn?r4!@O3_?u@D*f#1my}9kK?_bZy}N@vC*gI9U*Iow#S3=Ni$1#rAq+{Mk)>0k`uva_Q- zpu7|k2E!G(dq8lLAHvLR3d`U$fqm}D_}C#};RFldSq&F9JzKaVusD1W<@C_s+#c=t zw(VJm3Nf5A$T&(f9Vl9QUAf80?(dWz`6fN7s)wA~J%sZ`*X-;}hCS@&E=2tBsA?JY zIt63_u>`y}X5OM0NR@K9t8jwyJ#|u4Dsl%~e?pYIvma5!?5{cOQuEc!iRJdsQ-(j$ z88{XM)pebng02VZ-JF9{-}-k`GbfE1o*G$4qU(D#`q(Ux2w0GKoN1-MU)Qidb(D%) z@d1V8SPvsjrn964LddA>l=0)v)CWM^%H( zD_t@xD$1w*bQ57Oi=d&Cp{ivkWHZ|`2y=@Oq=1l%c{EX2YwIHgf25(H)a2lcJ!wO} z6%YT!9uP83+=?qteI6Iyfz4QMUPozS>C$>#HTR`cUH*}WVVt*|+GveM98AZecNSPD z!g-7bHEjP(Fl$9xK0cW|mU=V#?6lA-NxVC*yzx*Q8lrhrTA>S-cJgG)4qAyEH8i|0 zl2v4v>JuDemkq0N>B4;vnA$h&uX?Jz)6=#f5){GPMm*BUcD5xtTIN`}Z=t|?c*u71 z-V;q7!=3Vk`_(PD&w^K`?owj2YveW(1ui1gX?1F+w_??;>_hQCEA#pEKF*YVXu5RL zyO-8X-qW+)t)X_uqGMWsYiwrIYaF~aM$)F|g&K~-NIm5cY}~)~((j()s#5_h#LSA$ zuBD6CWJf%C~6}g#7ARE zNAG8NXYO$qF+XYqWY* z^B1)l$F|?NVuB9M<@mtptZS5-O+Zoen~}h5^FBH)^Z~GH>E&sTJ2DV#T@~+#UWC;k zt_3 zOgkzicLb#yo9@lydpmX0=&g360=z^*G!dCQs=EDZe}=ngWWEBR2jX>G2y&k=afl@> z-`ScAYio;DEMhg?8`Bnen@T}b*67|lG)$PK>6=_yR@NtKffW?wlcno;HrjI2 z-Ap%+1V<`o0;QZQzN`%{XO2FI^SO_zV_x07?(aB8Sor=!ZRb?`f#YZ)Kd)(D59jFc zRiN7m;lf}V;-qL8hWaRELzzLny{VWxY8|rjRE%YCx}t{RR(Q>3bu}6W?lD3i!gc92`JNIl83l28~1G5G?8Zt%z6Ns{)(%)B>T6<}}9pKbnvnE|t)UuI8OsiQ}&Sz(%I(YIcBq zvn)-uZ_E*5xK{cuPS3VzF(eF76s4A+C}Oy)d@}Pcin4ec4YLY(Bwo=kl7fQPKdPo% 
literal 0
HcmV?d00001

From ec86dc055ea520e8a7d25df9fa1581e33980de90 Mon Sep 17 00:00:00 2001
From: Christian Opperman
Date: Sun, 3 Dec 2023 17:42:19 -0500
Subject: [PATCH 12/52] Increase sleep waittime

---
 ScrapeSubtitle.py | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/ScrapeSubtitle.py b/ScrapeSubtitle.py
index 0f7e878fee..3f9f7536c4 100644
--- a/ScrapeSubtitle.py
+++ b/ScrapeSubtitle.py
@@ -128,6 +128,7 @@ def get_lecture_subtitles(self, lecture_url):
         subtitles = []
 
         # Find all div elements contain subtitles
+        # TODO: Take another look at this and see if XPATH is more accurate. Looks like this pattern isn't consistent across classes
         pattern = re.compile(r'\bcss-1shylkf\b')
         elements = soup.find_all('div', class_=pattern)
         if len(elements) == 0:
@@ -154,7 +155,8 @@ def get_page_soup(self, url: str) -> BeautifulSoup:
         # Take driver to specified URL
         self.driver.get(url)
         # Insert a sleep timer to avoid being flagged as a bot
-        time.sleep(2)
+        # TODO: Replace this with a wait call to make sure the required element loads correctly
+        time.sleep(4)
 
         # get the page source and parse the HTML content into a BeautifulSoup object
         parge_source = self.driver.page_source

From c852c1c2b2910cd112cd633f1ea5ee425f038a20 Mon Sep 17 00:00:00 2001
From: Christian Opperman
Date: Mon, 4 Dec 2023 19:48:00 -0500
Subject: [PATCH 13/52] Update .gitignore

---
 .gitignore | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/.gitignore b/.gitignore
index d854ba3d15..ccdacd8f90 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,5 +1,7 @@
+__pycache__/*
 .idea
 .idea/*
-*.iml
 node_modules
-*.DS_Store
\ No newline at end of file
+*.docx
+*.DS_Store
+*.iml
\ No newline at end of file

From 7e1881faa2189a9874df7deb66388a041ecf393f Mon Sep 17 00:00:00 2001
From: Christian Opperman
Date: Mon, 4 Dec 2023 19:49:28 -0500
Subject: [PATCH 14/52] Refine scraper script to have a single point of entry

---
 .gitignore                                  |    3 +-
 .../CourseraScraper.py                      |   44 +-
 .../CourseraScraper.cpython-312.pyc         |  Bin 0 -> 9595 bytes
 {index_documents => scraper}/index_to_es.py |    0
 scraper/scrape_coursera_course.py           |   19 +
 subtitles.json                              | 8313 -----------------
 6 files changed, 39 insertions(+), 8340 deletions(-)
 rename ScrapeSubtitle.py => scraper/CourseraScraper.py (87%)
 create mode 100644 scraper/__pycache__/CourseraScraper.cpython-312.pyc
 rename {index_documents => scraper}/index_to_es.py (100%)
 create mode 100644 scraper/scrape_coursera_course.py
 delete mode 100644 subtitles.json

diff --git a/.gitignore b/.gitignore
index ccdacd8f90..f5757cbb02 100644
--- a/.gitignore
+++ b/.gitignore
@@ -4,4 +4,5 @@ __pycache__/*
 node_modules
 *.docx
 *.DS_Store
-*.iml
\ No newline at end of file
+*.iml
+*.json
\ No newline at end of file

diff --git a/ScrapeSubtitle.py b/scraper/CourseraScraper.py
similarity index 87%
rename from ScrapeSubtitle.py
rename to scraper/CourseraScraper.py
index 3f9f7536c4..45afa4f3be 100644
--- a/ScrapeSubtitle.py
+++ b/scraper/CourseraScraper.py
@@ -1,4 +1,3 @@
-import json
 import re
 import time
 
@@ -14,13 +13,16 @@ class CourseraScraper:
-    def __init__(self) -> None:
+    def __init__(self, course_url: str, username: str, password: str) -> None:
         self.driver = webdriver.Chrome(service=ChromeService(ChromeDriverManager().install()))
-        self.coursera_url = "https://www.coursera.org"
+        self.url = course_url
+        self.username = username
+        self.password = password
         self.course_transcript_for_json = {}
 
         # Login to Coursera to allow scraper to parse pages
-        CourseraScraperLogin(self.driver).login()
+        CourseraScraperLogin(self.driver, self.username, self.password).login()
+        self.driver.get(self.url)
 
     def run_scraper(self):
         # Parse course to get list of urls for each week to scrape
@@ -42,40 +44,40 @@ def run_scraper(self):
             lecture_subtitles = week_parser.get_lecture_subtitles(lecture_url)
             week_transcripts.append({lecture_title: lecture_subtitles})
 
-            print('*'*50)
-            print(week_str)
-            print(week_transcripts)
-            print('*'*50)
-
             course_transcripts.append({week_str: week_transcripts})
 
         self.course_transcript_for_json[course_name] = course_transcripts
 
 
 class CourseraScraperLogin:
-    def __init__(self, driver: webdriver.Chrome) -> None:
+    def __init__(self, driver: webdriver.Chrome, email: str, password: str) -> None:
         self.driver = driver
         self.url = "https://www.coursera.org"
+        self.login_email = email
+        self.login_password = password
 
     def login(self) -> None:
         login_url = self.url + "/?authMode=login"
         self.driver.get(login_url)
+        self.driver.find_element("id", "email").send_keys(self.login_email)
+        self.driver.find_element("id", "password").send_keys(self.login_password)
+        self.driver.find_element("xpath", "//button[@type='submit']").click()
+
+        input("Finalize CAPTCHA and then press Enter in the shell")
+
 
 class CourseraCourseParser:
     def __init__(self, driver: webdriver.Chrome) -> None:
         self.driver = driver
-        self.prompt = "Navigate to the home page for the course you wish to scrape and press enter"
-        self.parse_course()
+        self.course_name = self.parse_course_name()
+        self.get_week_urls()
 
-    def parse_course(self):
+    def parse_course_name(self) -> str:
         # TODO: Automatically parse course name
-        self.course_name = "TODO"
-        self.get_week_urls()
+        return "TODO"
 
     def get_week_urls(self) -> None:
         """Initialize the URLs for each week of the course"""
-        input(self.prompt)
         self.landing_page = self.driver.current_url
         # Coursera defaults to saving the user's last accessed week, so need to get the true landing
         # page once it's been navigated to
@@ -163,13 +165,3 @@ def get_page_soup(self, url: str) -> BeautifulSoup:
 
         soup = BeautifulSoup(parge_source, 'html.parser')
         return soup
-
-
-if __name__ == "__main__":
-    scraper = CourseraScraper()
-    scraper.run_scraper()
-    print(scraper.course_transcript_for_json)
-
-    # Writing a JSON file
-    with open('subtitles.json', 'w') as json_file:
-        json.dump(scraper.course_transcript_for_json, json_file, indent=4)

diff --git a/scraper/__pycache__/CourseraScraper.cpython-312.pyc b/scraper/__pycache__/CourseraScraper.cpython-312.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..763ac10c49e1253f94922a29d6447db369cf9951
GIT binary patch
literal 9595
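The TODO in patch 12 asks for the blind `time.sleep(4)` to be replaced with a wait for the subtitle elements to actually load. A minimal stdlib-only sketch of the bounded-polling idea behind such a wait (the `wait_until` helper and the fake element finder below are hypothetical illustrations, not part of the repository; in a real fix Selenium's own `WebDriverWait` would play this role):

```python
import time

def wait_until(predicate, timeout=10.0, interval=0.25):
    """Poll `predicate` until it returns a truthy value or `timeout` elapses.

    Returns the truthy value, or None on timeout. Unlike a fixed
    time.sleep(4), this finishes as soon as the page is ready and
    gives slow pages more than four seconds before giving up.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = predicate()
        if result:
            return result
        time.sleep(interval)
    return None

# Toy usage: the "element" only becomes available on the third poll.
state = {"calls": 0}

def fake_find_element():
    state["calls"] += 1
    return "element" if state["calls"] >= 3 else None

assert wait_until(fake_find_element, timeout=2.0, interval=0.01) == "element"
print("found after", state["calls"], "polls")
```

The design trade-off is the same one the TODO hints at: a fixed sleep is either too long for fast pages or too short for slow ones, while a bounded poll adapts to both at the cost of a little extra complexity.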
This lecture is about topic mining and analysis.
We're going to talk about its motivation and task definition.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" - }, - { - "time": "0:17", - "text": "In this lecture we're going to talk about different kind of mining task.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" - }, - { - "time": "0:23", - "text": "As you see on this road map, we have just covered mining knowledge about language, namely discovery of word associations such as paradigmatic and relations and syntagmatic relations.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" - }, - { - "time": "0:39", - "text": "Now, starting from this lecture, we're going to talk about mining another kind of knowledge, which is content mining, and trying to discover knowledge about the main topics in the text.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" - }, - { - "time": "0:56", - "text": "And we call that topic mining and analysis.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" - }, - { - "time": "0:59", - "text": "In this lecture, we're going to talk about its motivation and the task definition. So first of all, let's look at the concept of topic. So topic is something that we all understand, I think, but it's actually not that easy to formally define. Roughly speaking, topic is the main idea discussed in text data. And you can think of this as a theme or subject of a discussion or conversation. It can also have different granularities. For example, we can talk about the topic of a sentence. 
A topic of article, aa topic of paragraph or the topic of all the research articles in the research library, right, so different grand narratives of topics obviously have different applications.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" - }, - { - "time": "1:46", - "text": "Indeed, there are many applications that require discovery of topics in text, and they're analyzed then. Here are some examples. For example, we might be interested in knowing about what are Twitter users are talking about today? Are they talking about NBA sports, or are they talking about some international events, etc.? Or we are interested in knowing about research topics. For example, one might be interested in knowing what are the current research topics in data mining, and how are they different from those five years ago? Now this involves discovery of topics in data mining literatures and also we want to discover topics in today's literature and those in the past. And then we can make a comparison. We might also be also interested in knowing what do people like about some products like the iPhone 6, and what do they dislike? And this involves discovering topics in positive opinions about iPhone 6 and also negative reviews about it. Or perhaps we're interested in knowing what are the major topics debated in 2012 presidential election?", - "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" - }, - { - "time": "2:59", - "text": "And all these have to do with discovering topics in text and analyzing them, and we're going to talk about a lot of techniques for doing this. In general we can view a topic as some knowledge about the world. So from text data we expect to discover a number of topics, and then these topics generally provide a description about the world. And it tells us something about the world. 
About a product, about a person etc.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" - }, - { - "time": "3:29", - "text": "Now when we have some non-text data, then we can have more context for analyzing the topics. For example, we might know the time associated with the text data, or locations where the text data were produced, or the authors of the text, or the sources of the text, etc. All such meta data, or context variables can be associated with the topics that we discover, and then we can use these context variables help us analyze patterns of topics. For example, looking at topics over time, we would be able to discover whether there's a trending topic, or some topics might be fading away.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" - }, - { - "time": "4:15", - "text": "Soon you are looking at topics in different locations. We might know some insights about people's opinions in different locations.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" - }, - { - "time": "4:26", - "text": "So that's why mining topics is very important. Now, let's look at the tasks of topic mining and analysis. In general, it would involve first discovering a lot of topics, in this case, k topics. And then we also would like to know, which topics are covered in which documents, to what extent. So for example, in document one, we might see that Topic 1 is covered a lot, Topic 2 and Topic k are covered with a small portion.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" - }, - { - "time": "4:58", - "text": "And other topics, perhaps, are not covered. 
Document two, on the other hand, covered Topic 2 very well, but it did not cover Topic 1 at all, and it also covers Topic k to some extent, etc., right? So now you can see there are generally two different tasks, or sub-tasks, the first is to discover k topics from a collection of text laid out. What are these k topics? Okay, major topics in the text they are. The second task is to figure out which documents cover which topics to what extent. So more formally, we can define the problem as follows. First, we have, as input, a collection of N text documents. Here we can denote the text collection as C, and denote text article as d i. And, we generally also need to have as input the number of topics, k. But there may be techniques that can automatically suggest a number of topics. But in the techniques that we will discuss, which are also the most useful techniques, we often need to specify a number of topics.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" - }, - { - "time": "6:14", - "text": "Now the output would then be the k topics that we would like to discover, in order as theta sub one through theta sub k.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" - }, - { - "time": "6:24", - "text": "Also we want to generate the coverage of topics in each document of d sub i And this is denoted by pi sub i j.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" - }, - { - "time": "6:33", - "text": "And pi sub ij is the probability of document d sub i covering topic theta sub j. 
So obviously for each document, we have a set of such values to indicate to what extent the document covers, each topic.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" - }, - { - "time": "6:48", - "text": "And we can assume that these probabilities sum to one. Because a document won't be able to cover other topics outside of the topics that we discussed, that we discovered. So now, the question is, how do we define theta sub i, how do we define the topic? Now this problem has not been completely defined until we define what is exactly theta.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" - }, - { - "time": "7:16", - "text": "So in the next few lectures, we're going to talk about different ways to define theta. [MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/dmpQ0/2-5-topic-mining-and-analysis-motivation-and-task-definition" - } - ] - }, - { - "2-6-topic-mining-and-analysis-term-as-topic": [ - { - "time": "0:00", - "text": "[MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" - }, - { - "time": "0:07", - "text": "This lecture is about topic mining and analysis.", - "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" - }, - { - "time": "0:12", - "text": "We're going to talk about using a term as topic. This is a slide that you have seen in a earlier lecture where we define the task of topic mining and analysis. We also raised the question, how do we exactly define the topic of theta?", - "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" - }, - { - "time": "0:31", - "text": "So in this lecture, we're going to offer one way to define it, and that's our initial idea. 
Our idea here is defining a topic simply as a term.", - "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" - }, - { - "time": "0:42", - "text": "A term can be a word or a phrase.", - "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" - }, - { - "time": "0:45", - "text": "And in general, we can use these terms to describe topics. So our first thought is just to define a topic as one term. For example, we might have terms like sports, travel, or science, as you see here. Now if we define a topic in this way, we can then analyze the coverage of such topics in each document. Here for example, we might want to discover to what extent document one covers sports. And we found that 30% of the content of document one is about sports. And 12% is about the travel, etc. We might also discover document two does not cover sports at all. So the coverage is zero, etc.", - "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" - }, - { - "time": "1:32", - "text": "So now, of course, as we discussed in the task definition for topic mining and analysis, we have two tasks. One is to discover the topics. And the second is to analyze coverage. So let's first think about how we can discover topics if we represent each topic by a term. So that means we need to mine k topical terms from a collection.", - "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" - }, - { - "time": "2:01", - "text": "Now there are, of course, many different ways of doing that.", - "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" - }, - { - "time": "2:05", - "text": "And we're going to talk about a natural way of doing that, which is also likely effective. 
So first of all, we're going to parse the text data in the collection to obtain candidate terms. Here candidate terms can be words or phrases. Let's say the simplest solution is to just take each word as a term. These words then become candidate topics. Then we're going to design a scoring function to match how good each term is as a topic.", - "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" - }, - { - "time": "2:35", - "text": "So how can we design such a function? Well there are many things that we can consider. For example, we can use pure statistics to design such a scoring function.", - "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" - }, - { - "time": "2:45", - "text": "Intuitively, we would like to favor representative terms, meaning terms that can represent a lot of content in the collection. So that would mean we want to favor a frequent term. However, if we simply use the frequency to design the scoring function, then the highest scored terms would be general terms or functional terms like the, etc. Those terms occur very frequently English.", - "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" - }, - { - "time": "3:14", - "text": "So we also want to avoid having such words on the top so we want to penalize such words. But in general, we would like to favor terms that are fairly frequent but not so frequent. So a particular approach could be based on TF-IDF weighting from retrieval.", - "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" - }, - { - "time": "3:35", - "text": "And TF stands for term frequency. IDF stands for inverse document frequency. We talked about some of these ideas in the lectures about the discovery of word associations. 
So these are statistical methods, meaning that the function is defined mostly based on statistics. So the scoring function would be very general. It can be applied to any language, any text. But when we apply such a approach to a particular problem, we might also be able to leverage some domain-specific heuristics. For example, in news we might favor title words actually general. We might want to favor title words because the authors tend to use the title to describe the topic of an article.", - "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" - }, - { - "time": "4:27", - "text": "If we're dealing with tweets, we could also favor hashtags, which are invented to denote topics. So naturally, hashtags can be good candidates for representing topics.", - "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" - }, - { - "time": "4:44", - "text": "Anyway, after we have this design scoring function, then we can discover the k topical terms by simply picking k terms with the highest scores. Now, of course, we might encounter situation where the highest scored terms are all very similar. They're semantically similar, or closely related, or even synonyms. So that's not desirable. So we also want to have coverage over all the content in the collection. So we would like to remove redundancy. And one way to do that is to do a greedy algorithm, which is sometimes called a maximal marginal relevance ranking. Basically, the idea is to go down the list based on our scoring function and gradually take terms to collect the k topical terms. The first term, of course, will be picked. When we pick the next term, we're going to look at what terms have already been picked and try to avoid picking a term that's too similar. 
So while we are considering the ranking of a term in the list, we are also considering the redundancy of the candidate term with respect to the terms that we already picked.", - "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" - }, - { - "time": "5:58", - "text": "And with some thresholding, then we can get a balance of the redundancy removal and also the high score of a term. Okay, so after this we will get k topical terms. And those can be regarded as the topics that we discovered from the collection. Next, let's think about how we're going to compute the topic coverage pi sub ij.", - "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" - }, - { - "time": "6:23", - "text": "So looking at this picture, we have sports, travel and science as the topics. And now suppose you are given a document. How should we compute the coverage of each topic in the document?", - "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" - }, - { - "time": "6:36", - "text": "Well, one approach can be to simply count occurrences of these terms. So for example, sports might have occurred four times in this document and travel occurred twice, etc. And then we can just normalize these counts as our estimate of the coverage probability for each topic. So in general, the formula would be to collect the counts of all the terms that represent the topics. And then simply normalize them so that the coverage of each topic in the document would sum to one.", - "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" - }, - { - "time": "7:15", - "text": "This forms a distribution of the topics for the document to characterize coverage of different topics in the document. 
Now, as always, when we think about an idea for solving a problem, we have to ask the question, how good is this one? Or is this the best way of solving the problem?", - "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" - }, - { - "time": "7:38", - "text": "So now let's examine this approach. In general, we have to do some empirical evaluation", - "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" - }, - { - "time": "7:46", - "text": "by using actual data sets to see how well it works.", - "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" - }, - { - "time": "7:52", - "text": "Well, in this case let's take a look at a simple example here. And we have a text document that's about an NBA basketball game.", - "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" - }, - { - "time": "8:04", - "text": "So in terms of the content, it's about sports.", - "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" - }, - { - "time": "8:08", - "text": "But if we simply count these words that represent our topics, we will find that the word sports actually did not occur in the article, even though the content is about sports.", - "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" - }, - { - "time": "8:22", - "text": "So the count of sports is zero. That means the coverage of sports would be estimated as zero. Now of course, the term science also did not occur in the document and its estimate is also zero. And that's okay. But sports certainly is not okay because we know the content is about sports. 
So this estimate has a problem.", - "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" - }, - { - "time": "8:50", - "text": "What's worse, the term travel actually occurred in the document. So when we estimate the coverage of the topic travel, we have got a non-zero count. So its estimated coverage will be non-zero. So this obviously is also not desirable.", - "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" - }, - { - "time": "9:08", - "text": "So this simple example illustrates some problems of this approach. First, when we count what words belong to the topic, we also need to consider related words. We can't simply just count the topic word sports. In this case, it did not occur at all. But there are many related words like basketball, game, etc. So we need to count the related words also. The second problem is that a word like star can be actually ambiguous. So here it probably means a basketball star, but we can imagine it might also mean a star in the sky. So in that case, the star might actually suggest, perhaps, a topic of science.", - "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" - }, - { - "time": "9:54", - "text": "So we need to deal with that as well. Finally, a main restriction of this approach is that we have only one term to describe the topic, so it cannot really describe complicated topics. For example, a very specialized topic in sports would be harder to describe by using just one word or one phrase. We need to use more words. So this example illustrates some general problems with this approach of treating a term as a topic. First, it lacks expressive power. 
Meaning that it can only represent simple, general topics, but it cannot represent the complicated topics that might require more words to describe.", - "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" - }, - { - "time": "10:37", - "text": "Second, it's incomplete in vocabulary coverage, meaning that the topic itself is only represented as one term. It does not suggest what other terms are related to the topic. Even if we're talking about sports, there are many terms that are related. So it does not allow us to easily count related terms in order to compute the coverage of this topic. Finally, there is this problem of word sense ambiguity. A topical term or related term can be ambiguous. For example, basketball star versus star in the sky.", - "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" - }, - { - "time": "11:10", - "text": "So in the next lecture, we're going to talk about how to solve these problems with a better representation of a topic. 
[MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/A1bUb/2-6-topic-mining-and-analysis-term-as-topic" - } - ] - }, - { - "2-7-topic-mining-and-analysis-probabilistic-topic-models": [ - { - "time": "0:06", - "text": "This lecture is about Probabilistic Topic Models for topic mining and analysis.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" - }, - { - "time": "0:13", - "text": "In this lecture, we're going to continue talking about topic mining and analysis.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" - }, - { - "time": "0:18", - "text": "We're going to introduce probabilistic topic models.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" - }, - { - "time": "0:22", - "text": "So this is a slide that you have seen earlier, where we discussed the problems with using a term as a topic. So, to solve these problems, intuitively we need to use more words to describe the topic. And this will address the problem of lack of expressive power. When we have more words that we can use to describe the topic, we can describe complicated topics. To address the second problem we need to introduce weights on words. This is what allows you to distinguish subtle differences in topics, and to introduce semantically related words in a fuzzy manner. Finally, to solve the problem of word ambiguity, we need to split an ambiguous word, so that we can disambiguate its topic.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" - }, - { - "time": "1:15", - "text": "It turns out that all these can be done by using a probabilistic topic model. And that's why we're going to spend a lot of lectures talking about this topic. 
So the basic idea here is to improve the representation of a topic by using a word distribution. So what you see now is the old representation, where we represented each topic as just one word, or one term, or one phrase. But now we're going to use a word distribution to describe the topic. So here you see that for sports, we're going to use a word distribution over, theoretically speaking, all the words in our vocabulary.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" - }, - { - "time": "1:54", - "text": "So for example, the high probability words here are sports, game, basketball, football, play, star, etc. These are sports related terms. And of course it would also give a non-zero probability to some other words like travel, which might be related to sports in general, but not so much related to the topic.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" - }, - { - "time": "2:18", - "text": "In general we can imagine a non-zero probability for all the words. And some words that are not related would have very, very small probabilities. 
And these probabilities will sum to one.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" - }, - { - "time": "2:31", - "text": "So that it forms a distribution over all the words.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" - }, - { - "time": "2:36", - "text": "Now intuitively, this distribution represents a topic in that if we sample words from the distribution, we tend to see words that are related to the topic.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" - }, - { - "time": "2:48", - "text": "You can also see, as a very special case, if the probability mass is concentrated entirely on just one word, sports. And this basically degenerates to the simple representation of a topic as just one word.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" - }, - { - "time": "3:04", - "text": "But as a distribution, this topic representation can, in general, involve many words to describe a topic and can model subtle differences in the semantics of a topic. Similarly we can model Travel and Science with their respective distributions. In the distribution for Travel we see top words like attraction, trip, flight etc.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" - }, - { - "time": "3:31", - "text": "Whereas in Science we see scientist, spaceship, telescope, or genomics, and, you know, science related terms. Now that doesn't mean sports related terms will necessarily have zero probabilities for science. In general we can imagine all of these words have non-zero probabilities. 
It's just that for a particular topic, some words would have very, very small probabilities.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" - }, - { - "time": "3:58", - "text": "Now you can also see there are some words that are shared by these topics. When I say shared it just means even with some probability threshold, you can still see one word occurring in multiple topics. In this case I mark them in black. So you can see travel, for example, occurred in all the three topics here, but with different probabilities. It has the highest probability for the Travel topic, 0.05. But with much smaller probabilities for Sports and Science, which makes sense. And similarly, you can see star also occurred in Sports and Science with reasonably high probabilities. Because it might be actually related to the two topics. So this representation addresses the three problems that I mentioned earlier. First, it now uses multiple words to describe a topic. So it allows us to describe fairly complicated topics. Second, it assigns weights to terms. So now we can model subtle differences of semantics. And you can bring in related words together to model a topic. Third, because we have probabilities for the same word in different topics, we can disambiguate the sense of a word in the text to decode its underlying topic. So this new way of representing a topic addresses all three problems. So now of course our problem definition has been refined just slightly. The slide is very similar to what you've seen before except we have added a refinement for what a topic is. Now each topic is a word distribution, and for each word distribution we know that the probabilities of all the words in the vocabulary should sum to one. So you see a constraint here. And we still have another constraint on the topic coverage, namely the pis. 
So all the pi sub ij's must sum to one for the same document.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" - }, - { - "time": "5:59", - "text": "So how do we solve this problem? Well, let's look at this problem as a computation problem. So we clearly specify its input and output and illustrate it here on this slide. Input of course is our text data. C is our collection, but we also generally assume we know the number of topics, k. Or we hypothesize a number and then try to find k topics, even though we don't know the exact topics that exist in the collection. And V is the vocabulary that has a set of words that determines what units would be treated as the basic units for analysis. In most cases we'll use words as the basis for analysis. And that means each word is a unit.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" - }, - { - "time": "6:47", - "text": "Now the output would consist of first a set of topics represented by theta i's. Each theta i is a word distribution.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" - }, - { - "time": "6:56", - "text": "And we also want to know the coverage of topics in each document. So those are the same pi ij's that we have seen before.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" - }, - { - "time": "7:07", - "text": "So given a set of text data we would like to compute all these distributions and all these coverages as you have seen on this slide.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" - }, - { - "time": "7:18", - "text": "Now of course there may be many different ways of solving this problem. 
In theory, you can write the [INAUDIBLE] program to solve this problem, but here we're going to introduce a general way of solving this problem called a generative model. And this is, in fact, a very general idea and it's a principled way of using statistical modeling to solve text mining problems. And here I dimmed the picture that you have seen before in order to show the generation process. So the idea of this approach is actually to first design a model for our data. So we design a probabilistic model to model how the data are generated. Of course, this is based on our assumption. The actual data aren't necessarily generated this way. So that gives us a probability distribution of the data that you are seeing on this slide, given a particular model and parameters that are denoted by lambda. So this lambda actually consists of all the parameters that we're interested in. And these parameters in general will control the behavior of the probabilistic model. Meaning that if you set these parameters to different values, it will give some data points higher probabilities than others. Now in this case of course, for our text mining problem, or more precisely topic mining problem, we have the following parameters. First of all we have theta i's, each of which is a word distribution, and then we have a set of pis for each document. And since we have n documents, we have n sets of pis, and the pi values in each set will sum to one. So this is to say that we first would pretend we already have these word distributions and the coverage numbers. And then we can see how we can generate data by using such distributions. So how do we model the data in this way? We assume that the data are actually samples drawn from such a model that depends on these parameters. 
Now one interesting question here is to", - "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" - }, - { - "time": "9:32", - "text": "think about how many parameters there are in total. Now obviously we can already see n multiplied by k parameters for the pi's. We also see k theta i's. But each theta i is actually a set of probability values, right? It's a distribution over words. So I leave this as an exercise for you to figure out exactly how many parameters there are here. Now once we set up the model then we can fit the model to our data. Meaning that we can estimate the parameters or infer the parameters based on the data. In other words we would like to adjust these parameter values until we give our data set the maximum probability. As I just said, depending on the parameter values, some data points will have higher probabilities than others. What we're interested in, here, is what parameter values will give our data set the highest probability? So I also illustrate the problem with a picture that you see here. On the X axis I just illustrate lambda, the parameters, as a one-dimensional variable. It's an oversimplification, obviously, but it suffices to show the idea. And the Y axis shows the probability of the observed data. This probability obviously depends on the setting of lambda. So that's why it varies as you change the value of lambda. What we're interested in here is to find the lambda star.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" - }, - { - "time": "11:05", - "text": "That would maximize the probability of the observed data.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" - }, - { - "time": "11:10", - "text": "So this would be, then, our estimate of the parameters. 
And these parameters, note, are precisely what we hoped to discover from text data. So we treat these parameters as actually the outcome or the output of the data mining algorithm. So this is the general idea of using a generative model for text mining. First, we design a model with some parameter values to fit the data as well as we can. After we have fit the data, we will recover some specific parameter values, and those would be the output of the algorithm. And we'll treat those as actually the discovered knowledge from text data. By varying the model of course we can discover different knowledge. So to summarize, we introduced a new way of representing a topic, namely representing it as a word distribution, and this has the advantage of using multiple words to describe a complicated topic. It also allows us to assign weights on words so we can model subtle variations of semantics. We talked about the task of topic mining and analysis when we define a topic as a distribution. So the input is a collection of text articles, a number of topics, and a vocabulary set, and the output is a set of topics, each a word distribution, and also the coverage of all the topics in each document. And these are formally represented by theta i's and pi ij's. And we have two constraints here for these parameters. The first is the constraint on the word distributions. In each word distribution the probabilities of all the words in the vocabulary must sum to 1. The second constraint is on the topic coverage in each document. A document is not allowed to cover a topic outside of the set of topics that we are discovering. So, the coverage of each of these k topics would sum to one for a document. We also introduced a general idea of using a generative model for text mining. And the idea here is, first we design a model to model the generation of the data. We simply assume that the data are generated in this way. 
And inside the model we embed some parameters that we're interested in, denoted by lambda.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" - }, - { - "time": "13:36", - "text": "And then we can infer the most likely parameter values lambda star, given a particular data set.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" - }, - { - "time": "13:43", - "text": "And we can then take the lambda star as knowledge discovered from the text for our problem.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" - }, - { - "time": "13:50", - "text": "And we can adjust the design of the model and the parameters to discover various kinds of knowledge from text. As you will see later in the other lectures. [MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/ai3kj/2-7-topic-mining-and-analysis-probabilistic-topic-models" - } - ] - }, - { - "2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1": [ - { - "time": "0:00", - "text": "[SOUND] >> This lecture is about the Overview of Statistical Language Models, which cover probabilistic topic models as special cases. In this lecture we're going to give an overview of Statistical Language Models. These models are general models that cover probabilistic topic models as special cases. So first off, what is a Statistical Language Model?", - "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" - }, - { - "time": "0:31", - "text": "A Statistical Language Model is basically a probability distribution over word sequences. So, for example, we might have a distribution that gives, today is Wednesday a probability of .001. 
It might give today Wednesday is, which is a non-grammatical sentence, a very, very small probability as shown here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" - }, - { - "time": "0:54", - "text": "And similarly another sentence, the eigenvalue is positive might get the probability of .00001. So as you can see such a distribution clearly is Context Dependent. It depends on the Context of Discussion. Some Word Sequences might have higher probabilities than others but the same Sequence of Words might have different probabilities in different contexts.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" - }, - { - "time": "1:20", - "text": "And so this suggests that such a distribution can actually characterize a topic", - "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" - }, - { - "time": "1:26", - "text": "such a model can also be regarded as a Probabilistic Mechanism for generating text.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" - }, - { - "time": "1:33", - "text": "And that just means we can view text data as data observed from such a model. For this reason, we call such a model a Generative Model. So, now given a model we can then sample sequences of words. So, for example, based on the distribution that I have shown here on this slide, we might, let's say, sample a sequence like today is Wednesday because it has a relatively high probability. We might often get such a sequence. 
We might also get the eigenvalue is positive sometimes with a smaller probability, and very, very occasionally we might get today Wednesday is because its probability is so small.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" - }, - { - "time": "2:24", - "text": "So in general, in order to characterize such a distribution we must specify probability values for all these different sequences of words. Obviously, it's impossible to specify that because it's impossible to enumerate all of the possible sequences of words. So in practice, we will have to simplify the model in some way. So, the simplest language model is called the Unigram Language Model. In such a case, we simply assume the text is generated by generating each word independently.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" - }, - { - "time": "3:02", - "text": "But in general, the words may not be generated independently. But after we make this assumption, we can significantly simplify the language model.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" - }, - { - "time": "3:12", - "text": "Basically, now the probability of a sequence of words, w1 through wn, will be just the product of the probability of each word.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" - }, - { - "time": "3:24", - "text": "So for such a model, we have as many parameters as the number of words in our vocabulary. So here we assume we have n words, so we have n probabilities, one for each word. And they sum to 1. So, now we assume that our text is a sample drawn according to this word distribution. 
That just means we're going to draw a word each time and then eventually we'll get a text.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" - }, - { - "time": "3:53", - "text": "So for example, now again, we can try to sample words according to a distribution. We might get Wednesday often or today often.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" - }, - { - "time": "4:06", - "text": "And some other words like eigenvalue might have a small probability, etcetera. But with this, we actually can also compute the probability of every sequence, even though our model only specifies the probabilities of words. And this is because of the independence. So specifically, we can compute the probability of today is Wednesday.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" - }, - { - "time": "4:34", - "text": "Because it's just a product of the probability of today, the probability of is, and the probability of Wednesday. For example, I show some fake numbers here and when you multiply these numbers together you get the probability of today is Wednesday. So as you can see, with N probabilities, one for each word, we actually can characterize the probability distribution over all kinds of sequences of words. And so, this is a very simple model. It ignores the word order. So it may not be sufficient, in fact, for some problems, such as speech recognition, where you may care about the order of words. But it turns out to be quite sufficient for many tasks that involve topic analysis. And that's also what we're interested in here. So when we have a model, we generally have two problems that we can think about. One is, given a model, how likely are we to observe a certain kind of data points? 
That is, we are interested in the Sampling Process. The other is the Estimation Process. And that is to infer the parameters of a model given some observed data, and we're going to talk about that in a moment. Let's first talk about the sampling. So, here I show two examples of Word Distributions or Unigram Language Models. The first one has higher probabilities for words like text, mining, association, etcetera.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" - }, - { - "time": "6:10", - "text": "Now this signals a topic about text mining because when we sample words from such a distribution, we tend to see words that often occur in a text mining context.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" - }, - { - "time": "6:23", - "text": "So in this case, if we ask the question about what is the probability of generating a particular document. Then, we likely will see text that looks like a text mining paper. Of course, the text that we generate by drawing words from this distribution is unlikely to be coherent. Although, the probability of generating a text mining [INAUDIBLE] published in the top conference is non-zero, assuming that no word has a zero probability in the distribution. And that just means we can essentially generate all kinds of text documents, including very meaningful text documents.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" - }, - { - "time": "7:07", - "text": "Now, the second distribution, shown on the bottom, has different high-probability words. So food, [INAUDIBLE], healthy, [INAUDIBLE], etcetera. So this clearly indicates a different topic. In this case it's probably about health. 
So if we sample a word from such a distribution, then the probability of observing a text mining paper would be very, very small.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" - }, - { - "time": "7:32", - "text": "On the other hand, the probability of observing a text that looks like a food nutrition paper would be relatively higher.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" - }, - { - "time": "7:41", - "text": "So that just means, given a particular distribution, we tend to generate a particular kind of text. Now let's look at the estimation problem. In this case, we're going to assume that we have observed the data. We know exactly what the text data looks like. In this case, let's assume we have a text mining paper. In fact, it's the abstract of the paper, so the total number of words is 100. And I've shown some counts of individual words here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" - }, - { - "time": "8:12", - "text": "Now, if we ask the question, what is the most likely", - "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" - }, - { - "time": "8:17", - "text": "Language Model that has been used to generate this text data? Assuming that the text is observed from some Language Model, what's our best guess of this Language Model?", - "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" - }, - { - "time": "8:30", - "text": "Okay, so the problem now is just to estimate the probabilities of these words. 
As I've shown here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" - }, - { - "time": "8:37", - "text": "So what do you think? What would be your guess?", - "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" - }, - { - "time": "8:40", - "text": "Would you guess text has a very small probability, or a relatively large probability?", - "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" - }, - { - "time": "8:48", - "text": "What about query? Well, your guess probably would depend on how many times we have observed this word in the text data, right? If you think about it for a moment, and if you are like many others, you would have guessed that, well, text has a probability of 10 out of 100, because I've observed text 10 times in a document that has a total of 100 words. And similarly, mining has 5 out of 100. And query has a relatively small probability, observed just once, so it's 1 out of 100. Right, so that, intuitively, is a reasonable guess. But the question is, is this our best guess or best estimate of the parameters?", - "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" - }, - { - "time": "9:37", - "text": "Of course, in order to answer this question, we have to define what we mean by best. In this case, it turns out that our guesses are indeed the best in some sense, and this is called the Maximum Likelihood Estimate.
And it's the best in the sense that it gives the observed data the maximum probability.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" - }, - { - "time": "10:01", - "text": "Meaning that, if you change the estimate somehow, even slightly, then the probability of the observed text data will be somewhat smaller. And this is called a Maximum Likelihood Estimate. [MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/KaYeS/2-8-probabilistic-topic-models-overview-of-statistical-language-models-part-1" - } - ] - }, - { - "2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2": [ - { - "time": "0:00", - "text": "[MUSIC] So now let's talk about the problem a little bit more, and specifically let's talk about the two different ways of estimating the parameters. One is called the Maximum Likelihood estimate that I already just mentioned. The other is Bayesian estimation. So in maximum likelihood estimation, we define best as meaning the data likelihood has reached the maximum. So formally it's given by this expression here, where we define the estimate as the arg max of the probability of x given theta.", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "0:46", - "text": "So, arg max here just means it's actually a function that returns the argument that gives the function its maximum value. So the value of arg max is not the value of this function, but rather the argument that makes the function reach its maximum. So in this case the value of arg max is theta. It's the theta that makes the probability of X, given theta, reach its maximum. So this estimate intuitively makes sense, and it's often very useful; it seeks the parameters that best explain the data.
But it has a problem when the data set is too small. When there are very few data points, the sample is small, and if we trust the data entirely and try to fit it exactly, then we'll be biased. So in the case of text data, let's say the observed 100 words did not contain another word related to text mining. Now, our maximum likelihood estimator will give that word a zero probability, because giving it a non-zero probability would take away probability mass from some observed words, which obviously is not optimal in terms of maximizing the likelihood of the observed data.", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "2:11", - "text": "But this zero probability for all the unseen words may not be reasonable sometimes. Especially, if we want the distribution to characterize the topic of text mining. So one way to address this problem is actually to use Bayesian estimation, where we would actually look at both the data and our prior knowledge about the parameters. We assume that we have some prior belief about the parameters.
Now in this case, of course, we are not", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "2:47", - "text": "going to look at just the data, but also look at the prior.", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "2:54", - "text": "So the prior here is defined by P of theta, and this means we will impose some preference on certain thetas over others.", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "3:06", - "text": "And by using Bayes Rule, that I have shown here,", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "3:12", - "text": "we can then combine the likelihood function with the prior to give us this", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "3:23", - "text": "posterior probability of the parameter. Now, a full explanation of Bayes Rule, and some of these things related to Bayesian reasoning, would be outside the scope of this course. But I just gave a brief introduction because this is general knowledge that might be useful to you. The Bayes Rule is basically defined here, and allows us to write down one conditional probability of X given Y in terms of the conditional probability of Y given X.
And you can see the two probabilities differ in the order of the two variables.", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "4:09", - "text": "But often the rule is used for making inferences about a variable, so let's take a look at it again. We can assume that p(X) encodes our prior belief about X. That means before we observe any other data, that's our belief about X; we believe some X values have higher probability than others.", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "4:40", - "text": "And this probability of X given Y is a conditional probability, and this is our posterior belief about X, because this is our belief about X values after we have observed Y. Given that we have observed Y, what do we now believe about X? Do we now believe some values have higher probabilities than others?", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "5:09", - "text": "Now the two probabilities are related through this one; this can be regarded as the probability of", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "5:19", - "text": "the observed evidence Y, given a particular X. So you can think about X as our hypothesis, and we have some prior belief about which hypothesis to choose.
And after we have observed Y, we will update our belief, and this updating formula is based on the combination of our prior.", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "5:48", - "text": "And the likelihood of observing this Y if X is indeed true.", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "5:57", - "text": "So much for the detour about Bayes Rule. In our case, what we are interested in is inferring the theta values. So, we have a prior here that encodes our prior knowledge about the parameters.", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "6:15", - "text": "And then we have the data likelihood here, that would tell us which parameter value can explain the data well. The posterior probability combines both of them,", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "6:30", - "text": "so it represents a compromise of the two preferences. And in such a case, we can maximize this posterior probability to find the theta that maximizes it, and this estimator is called a Maximum a Posteriori, or MAP, estimate.", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "6:55", - "text": "And this estimator is a more general estimator than the maximum likelihood estimator, because if we define our prior as a noninformative prior, meaning that it's uniform over all the theta values.
No preference; then we basically would go back to the maximum likelihood estimate, because in such a case it's mainly going to be determined by this likelihood value, the same as here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "7:28", - "text": "But if we have an informative prior, some bias towards certain values, then the MAP estimator can allow us to incorporate that. But the problem here, of course, is how to define the prior.", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "7:44", - "text": "There is no free lunch: if we want to solve the problem with more knowledge, we have to have that knowledge. And that knowledge, ideally, should be reliable. Otherwise, your estimate may not necessarily be more accurate than the maximum likelihood estimate.", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "8:01", - "text": "So, now let's look at the Bayesian estimation in more detail.", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "8:08", - "text": "So, I show the theta values as just a one-dimensional value, and that's a simplification of course. And so, we're interested in which value of theta is optimal. So now, first we have the Prior. The Prior tells us that some of the values are more likely than others, we believe.
For example, these values are more likely than the values over here, or here, or other places.", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "8:42", - "text": "So this is our Prior, and then we have our data likelihood. And in this case, the data also tells us which values of theta are more likely. And that just means those values can best explain our data.", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "9:01", - "text": "And then when we combine the two we get the posterior distribution, and that's just a compromise of the two. It would say that it's somewhere in-between. So, we can now look at some interesting points here. This point represents the mode of the prior; that means the most likely parameter value according to our prior, before we observe any data.", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "9:25", - "text": "This point is the maximum likelihood estimate; it represents the theta that gives the data the maximum probability.", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "9:32", - "text": "Now this point is interesting, it's the posterior mode.", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "9:38", - "text": "It's the most likely value of theta according to this posterior distribution.
And it represents a good compromise of the prior mode and the maximum likelihood estimate.", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "9:51", - "text": "Now in general, in Bayesian inference we are interested in the distribution over all these parameter values, as you see here. It's a distribution over theta values, which you can see here: P of theta given X.", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "10:09", - "text": "So the problem of Bayesian inference is", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "10:14", - "text": "to infer this posterior distribution, and also to infer other interesting quantities that might depend on theta. So, I show f of theta here as an interesting quantity that we want to compute. But in order to compute this value, we need to know the value of theta. In Bayesian inference, we treat theta as an uncertain variable. So we think about all the possible values of theta. Therefore, we can estimate the value of this function f as the expected value of f, according to the posterior distribution of theta, given the observed evidence X.", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "10:58", - "text": "As a special case, we can assume f of theta is just equal to theta. In this case, we get the expected value of theta, which is basically the posterior mean. That also gives us one point estimate of theta, and it's sometimes the same as the posterior mode, but it's not always the same.
So, it gives us another way to estimate the parameter.", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "11:24", - "text": "So, this is a general illustration of Bayesian estimation and inference. And later, you will see this can be useful for topic mining, where we want to inject some prior knowledge about the topics. So to summarize, we've introduced the language model, which is basically a probability distribution over text. It's also called a generative model for text data. The simplest language model is the Unigram Language Model; it's basically a word distribution.", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "11:54", - "text": "We introduced the concept of the likelihood function, which is the probability of the data given some model.", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "12:02", - "text": "And this function is very important,", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "12:05", - "text": "given a particular set of parameter values this function can tell us which X, which data point, has a higher likelihood, higher probability.", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "12:16", - "text": "Given a data sample X, we can use this function to determine which parameter values would maximize the probability of the observed data, and this is the maximum likelihood estimate.", - "url":
"https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "12:31", - "text": "We also talked about Bayesian estimation, or inference. In this case, we must define a prior on the parameters, p of theta. And then we're interested in computing the posterior distribution of the parameters, which is proportional to the prior and the likelihood.", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - }, - { - "time": "12:48", - "text": "And this distribution would then allow us to infer any quantities derived from theta. [MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/lCSNo/2-9-probabilistic-topic-models-overview-of-statistical-language-models-part-2" - } - ] - }, - { - "2-10-probabilistic-topic-models-mining-one-topic": [ - { - "time": "0:00", - "text": "[SOUND] This lecture is a continued discussion of probabilistic topic models. In this lecture, we're going to continue discussing probabilistic models. We're going to talk about a very simple case where we are interested in just mining one topic from one document.", - "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" - }, - { - "time": "0:30", - "text": "So in this simple setup, we are interested in analyzing one document and trying to discover just one topic. So this is the simplest case of a topic model. The input no longer has k, which is the number of topics, because we know there is only one topic, and the collection has only one document as well. In the output, we also no longer have coverage, because we assume that the document covers this topic 100%.
So the main goal is just to discover the word probabilities for this single topic, as shown here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" - }, - { - "time": "1:14", - "text": "As always, when we think about using a generative model to solve such a problem, we start with thinking about what kind of data we are going to model, or from what perspective we're going to model the data, or data representation. And then we're going to design a specific model for the generation of the data from our perspective. Where our perspective just means we want to take a particular angle of looking at the data, so that the model will have the right parameters for discovering the knowledge that we want. And then we'll be thinking about the likelihood function, or write down the likelihood function, to capture more formally how likely a data point will be obtained from this model.", - "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" - }, - { - "time": "2:05", - "text": "And the likelihood function will have some parameters in the function. And then we are interested in estimating those parameters, for example, by maximizing the likelihood, which leads to the maximum likelihood estimate. These estimated parameters will then become the output of the mining algorithm, which means we'll take the estimated parameters as the knowledge that we discover from the text. So let's look at these steps for this very simple case. Later we'll look at this procedure for some more complicated cases. So our data, in this case, is just a document, which is a sequence of words. Each word here is denoted by x sub i. Our model is a Unigram language model: a word distribution that we hope denotes a topic, and that's our goal.
So we will have as many parameters as words in our vocabulary, in this case M.", - "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" - }, - { - "time": "3:09", - "text": "And for convenience we're going to use theta sub i to denote the probability of word w sub i.", - "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" - }, - { - "time": "3:20", - "text": "And obviously these theta sub i's will sum to 1.", - "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" - }, - { - "time": "3:24", - "text": "Now what does the likelihood function look like? Well, this is just the probability of generating this whole document given such a model. Because we assume independence in generating each word, the probability of the document will be just a product of the probabilities of the individual words.", - "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" - }, - { - "time": "3:42", - "text": "And since some words might have repeated occurrences, we can also rewrite this product in a different form.", - "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" - }, - { - "time": "3:52", - "text": "So in this line, we have rewritten the formula into a product over all the unique words in the vocabulary, w sub 1 through w sub M. Now this is different from the previous line, where the product is over the different positions of words in the document.", - "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" - }, - { - "time": "4:15", - "text": "Now when we do this transformation, we then need to introduce a count function here.
This denotes the count of word one in the document, and similarly this is the count of word w sub M in the document, because these words might have repeated occurrences. You can also see if a word did not occur in the document.", - "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" - }, - { - "time": "4:41", - "text": "It will have a zero count, therefore that corresponding term will disappear. So this is a very useful form of writing down the likelihood function that we will often use later. So I want you to pay attention to this; just get familiar with this notation. It just changes the product to be over all the different words in the vocabulary. So in the end, of course, we'll use theta sub i to express this likelihood function, and it would look like this. Next, we're going to find the theta values, or probabilities of these words, that would maximize this likelihood function. So now let's take a look at the maximum likelihood estimation problem more closely.", - "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" - }, - { - "time": "5:32", - "text": "This line is copied from the previous slide. It's just our likelihood function.", - "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" - }, - { - "time": "5:38", - "text": "So our goal is to maximize this likelihood function. We will find it often easier to", - "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" - }, - { - "time": "5:47", - "text": "maximize the log likelihood instead of the original likelihood. And this is purely for mathematical convenience, because after the logarithm transformation our function becomes a sum instead of a product. And we also have constraints over these probabilities.
The sum makes it easier to take derivatives, which is often needed for finding the optimal solution of this function. So please take a look at this sum again, here. And this is a form of function that you will often see later also, in the more general topic models. So it's a sum over all the words in the vocabulary. And inside the sum there is a count of a word in the document. And this is multiplied by the logarithm of a probability.", - "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" - }, - { - "time": "6:55", - "text": "So let's see how we can solve this problem.", - "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" - }, - { - "time": "6:58", - "text": "Now at this point the problem is purely a mathematical problem, because we are going to just find the optimal solution of a constrained maximization problem. The objective function is the likelihood function, and the constraint is that all these probabilities must sum to one. So, one way to solve the problem is to use the Lagrange multiplier approach.", - "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" - }, - { - "time": "7:24", - "text": "Now this content is beyond the scope of this course, but since the Lagrange multiplier method is a very useful approach, I also would like to just give a brief introduction to it, for those of you who are interested.", - "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" - }, - { - "time": "7:39", - "text": "So in this approach we will construct a Lagrange function, here. And this function will combine our objective function with another term that encodes our constraint, and we introduce a Lagrange multiplier here, lambda, so it's an additional parameter.
Now, the idea of this approach is just to turn the constrained optimization into, in some sense, an unconstrained optimization problem. Now we are just interested in optimizing this Lagrange function.", - "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" - }, - { - "time": "8:19", - "text": "As you may recall from calculus, an optimal point would be achieved when the derivative is set to zero. This is a necessary condition. It's not sufficient, though. So if we do that, you will see the partial derivative with respect to theta i here is equal to this. And this part comes from the derivative of the logarithm function, and this lambda is simply taken from here. And when we set it to zero, we can easily see theta sub i is related to lambda in this way.", - "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" - }, - { - "time": "9:06", - "text": "Since we know all the theta i's must sum to one, we can plug this into this constraint, here. And this will allow us to solve for lambda.", - "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" - }, - { - "time": "9:16", - "text": "And this is just the negative sum of all the counts. And this further allows us to then solve the optimization problem, eventually, to find the optimal setting for theta sub i. And if you look at this formula, it turns out that it's actually very intuitive, because this is just the count of these words normalized by the document length, which is also the sum of all the counts of words in the document. So, after all this math, we have just obtained something that's very intuitive, and it matches our intuition: we maximize the likelihood of the data by assigning as much probability mass as possible to the observed words here.
And you might also notice that this is the general result of the maximum likelihood estimator. In general, the estimate is just normalized counts; it's just that sometimes the counting has to be done in a particular way, as you will also see later. So this is basically an analytical solution to our optimization problem. In general, though, when the likelihood function is very complicated, we're not going to be able to solve the optimization problem with a closed-form formula. Instead we have to use some numerical algorithms, and we're going to see such cases later as well. So imagine what we would get if we used such a maximum likelihood estimator to estimate one topic for a single document d here. Let's imagine this document is a text mining paper. Now, what you might see is something that looks like this. On the top, you will see the high probability words tend to be those very common words, often function words in English. And this will be followed by some content words that really characterize the topic well, like text, mining, etc. And then in the end, you also see some smaller-probability words that are not really related to the topic but might be extraneously mentioned in the document. As a topic representation, you will see this is not ideal, right? That's because the high probability words are function words; they are not really characterizing the topic. So my question is, how can we get rid of such common words?", - "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" - }, - { - "time": "11:59", - "text": "Now this is the topic of the next module. We're going to talk about how to use probabilistic models to somehow get rid of these common words.
[MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/1exQ5/2-10-probabilistic-topic-models-mining-one-topic" - } - ] - } - ] - }, - { - "Week 3": [ - { - "3-1-probabilistic-topic-models-mixture-of-unigram-language-models": [ - { - "time": "0:00", - "text": "[MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" - }, - { - "time": "0:06", - "text": "This lecture is about the mixture of unigram language models.", - "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" - }, - { - "time": "0:11", - "text": "In this lecture we will continue discussing probabilistic topic models. In particular, we will introduce a mixture of unigram language models. This is a slide that you have seen earlier, where we talked about how to get rid of the background words that appear at the top of the estimated distribution for one document.", - "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" - }, - { - "time": "0:36", - "text": "So if we want to solve the problem, it would be useful to think about why we end up having this problem. Well, this is obviously because these words are very frequent in our data and we are using maximum likelihood estimation. The estimate obviously would have to assign high probability to these words in order to maximize the likelihood. So, in order to get rid of them, that would mean we'd have to do something differently here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" - }, - { - "time": "1:05", - "text": "In particular, we'll have to say this distribution doesn't have to explain all the words in the text data. What we're going to say is that these common words should not be explained by this distribution.
So one natural way to solve the problem is to think about using another distribution to account for just these common words. This way, the two distributions can be mixed together to generate the text data. And we'll let the other model, which we'll call the background topic model, generate the common words. This way our target topic theta here will be generating only the content words that characterize the content of the document.", - "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" - }, - { - "time": "1:52", - "text": "So, how does this work? Well, it is just a small modification of the previous setup where we have just one distribution. Since we now have two distributions, we have to decide which distribution to use when we generate a word. Each word will still be a sample from one of the two distributions.", - "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" - }, - { - "time": "2:13", - "text": "Text data is still generated the same way. Namely, we generate one word at a time, and eventually we generate a lot of words. When we generate a word, however, we're going to first decide which of the two distributions to use. And this is controlled by another probability, the probability of theta sub d and the probability of theta sub B here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" - }, - { - "time": "2:41", - "text": "So this is the probability of selecting the topic word distribution.
This is the probability of selecting the background word", - "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" - }, - { - "time": "2:52", - "text": "distribution, denoted by theta sub B.", - "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" - }, - { - "time": "2:55", - "text": "In this case I just give an example where we can set both to 0.5. So you're going to basically flip a coin, a fair coin, to decide which one you want to use. But in general these probabilities don't have to be equal. So you might bias toward using one topic more than the other. So now the process of generating a word would be to first flip a coin based on these probabilities of choosing each model. If, let's say, the coin shows up as heads, which means we're going to use the topic word distribution, then we're going to use this word distribution to generate a word. Otherwise we would go along this other path.", - "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" - }, - { - "time": "3:41", - "text": "And we're going to use the background word distribution to generate a word.", - "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" - }, - { - "time": "3:46", - "text": "So in such a case, we have a model that has some uncertainty associated with the use of a word distribution. But we can still think of this as a model for generating text data. And such a model is called a mixture model.", - "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" - }, - { - "time": "4:02", - "text": "So now let's see. In this case, what's the probability of observing a word w? Now here I showed some words.
like \"the\" and \"text\". So as in all cases, once we set up a model we are interested in computing the likelihood function. The basic question is, what's the probability of observing a specific word here? Now we know that the word can be observed from each of the two distributions, so we have to consider two cases. Therefore it's a sum over these two cases.", - "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" - }, - { - "time": "4:34", - "text": "The first case is to use the topic word distribution to generate the word. And in such a case, the probability would be the probability of theta sub d, which is the probability of choosing the model, multiplied by the probability of actually observing the word from that model. Both events must happen in order to observe the word. We first must have chosen the topic theta sub d, and then we also have to actually have sampled the word \"the\" from the distribution. And similarly, the second part accounts for a different way of generating the word, from the background.", - "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" - }, - { - "time": "5:15", - "text": "Now obviously the probability of \"text\" is similar, right? So we can also see the two ways of generating the text. And in each case, it's a product of the probability of choosing a particular distribution multiplied by the probability of observing the word from that distribution.", - "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" - }, - { - "time": "5:35", - "text": "Now what you will see is that this is actually a general form. So you might want to make sure that you have really understood this expression here. And you should convince yourself that this is indeed the probability of observing the text. So to summarize what we observed here.
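The two-case sum just described can be written out directly. A minimal sketch, where the 0.5 mixing weight follows the lecture's example but the word probabilities in the two distributions are invented for illustration:

```python
def p_word(w, p_d, theta_d, theta_B):
    """Probability of word w under a two-component mixture model:
    p(w) = p(theta_d) * p(w | theta_d) + p(theta_B) * p(w | theta_B),
    where p(theta_B) = 1 - p(theta_d)."""
    p_B = 1.0 - p_d
    return p_d * theta_d.get(w, 0.0) + p_B * theta_B.get(w, 0.0)

# Toy distributions (illustrative values, not from the lecture slides):
theta_d = {"text": 0.6, "mining": 0.3, "the": 0.1}   # topic model
theta_B = {"the": 0.8, "of": 0.1, "text": 0.1}       # background model

p = p_word("text", 0.5, theta_d, theta_B)  # 0.5*0.6 + 0.5*0.1 = 0.35
```

Each term is exactly the product described in the transcript: the probability of selecting a component times the probability of the word under that component.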
The probability of a word from a mixture model is in general a sum over the different ways of generating the word.", - "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" - }, - { - "time": "6:00", - "text": "In each case, it's a product of the probability of selecting that component model, multiplied by the probability of actually observing the data point from that component of the model. And this is something quite general, and you will see this occurring often later. So the basic idea of a mixture model is just to treat these two distributions together as one model. So I used a box to bring all these components together. So if you view this whole box as one model, it's just like any other generative model. It would just give us the probability of a word.", - "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" - }, - { - "time": "6:42", - "text": "But the way that it determines this probability is quite different from when we have just one distribution.", - "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" - }, - { - "time": "6:50", - "text": "And this is basically a more complicated model, in that it is more than just one distribution. And it's called a mixture model.", - "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" - }, - { - "time": "7:00", - "text": "So as I just said, we can treat this as a generative model. And it's often useful to think of it just as a likelihood function. The illustration that you have seen before, which is dimmer now, is just the illustration of this generative model. So mathematically, this model is nothing but to define the following generative model.
Where the probability of a word is assumed to be a sum over two cases", - "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" - }, - { - "time": "7:26", - "text": "of generating the word. And the form you are seeing now is a more general form than what you have seen in the calculation earlier. Well, I just use the symbol w to denote any word, but you can still see this is basically first a sum. Right? And this sum is due to the fact that the word can be generated in multiple ways, two ways in this case. And inside the sum, each term is a product of two terms. And the two terms are, first, the probability of selecting a component, like theta sub d; second, the probability of actually observing the word from this component of the model. So this is a very general description of all the mixture models. I just want to make sure that you understand this, because this is really the basis for understanding all kinds of topic models.", - "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" - }, - { - "time": "8:28", - "text": "So now once we set up the model, we can write down the likelihood function as we see here. The next question is, how can we estimate the parameters, or what to do with the parameters given the data. Well, in general, we can use some of the text data to estimate the model parameters. And this estimation would allow us to discover the interesting knowledge about the text. So, in this case, what do we discover? Well, these are represented by our parameters, and we will have two kinds of parameters. One is the two word distributions, which represent the topics, and the other is the coverage of each topic in each document.", - "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" - }, - { - "time": "9:12", - "text": "The coverage of each topic.
And this is determined by the probability of theta sub d and the probability of theta sub B, and these two sum to one. Now, what's interesting is also to think about special cases, like when we set one of them to one, what would happen? Well, the other would be zero, right? And if you look at the likelihood function,", - "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" - }, - { - "time": "9:36", - "text": "it will then degenerate to the special case of just one distribution. Okay, so you can easily verify that by assuming one of these two is 1.0 and the other is zero.", - "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" - }, - { - "time": "9:49", - "text": "So in this sense, the mixture model is more general than the previous model where we have just one distribution. It can cover that as a special case.", - "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" - }, - { - "time": "9:59", - "text": "So to summarize, we talked about the mixture of two unigram language models, and the data we're considering here is just one document.
And the model is a mixture model with two components, two unigram language models, specifically theta sub d, which is intended to denote the topic of document d, and theta sub B, which represents a background topic that we can set up to attract the common words, because common words would be assigned a high probability in this model.", - "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" - }, - { - "time": "10:33", - "text": "So the parameters can be collectively called Lambda, which I show here. You can again", - "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" - }, - { - "time": "10:41", - "text": "think about the question of how many parameters we are talking about exactly. This is usually a good exercise to do, because it allows you to see the model in depth and to have a complete understanding of what's going on in this model. And we have mixing weights, of course, also.", - "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" - }, - { - "time": "10:59", - "text": "So what does the likelihood function look like? Well, it looks very similar to what we had before. So for the document, first it's a product over all the words in the document, exactly the same as before. The only difference is that inside here now it's a sum instead of just one term. So you might recall that before we just had this one term there.", - "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" - }, - { - "time": "11:25", - "text": "But now we have this sum because of the mixture model.
And because of the mixture model, we also have to introduce the probability of choosing that particular component distribution.", - "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" - }, - { - "time": "11:39", - "text": "And this is just another way of writing it, using a product over all the unique words in our vocabulary instead of having a product over all the positions in the document. And this form, where we look at the distinct unique words, is a convenient form for computing the maximum likelihood estimate later. And the maximum likelihood estimator is, as usual, just to find the parameters that would maximize the likelihood function. And the constraints here are of course of two kinds. One is that the word probabilities in each [INAUDIBLE] must sum to 1; the other is that the choice of each [INAUDIBLE] must sum to 1. [MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/EbbsO/3-1-probabilistic-topic-models-mixture-of-unigram-language-models" - } - ] - }, - { - "3-2-probabilistic-topic-models-mixture-model-estimation-part-1": [ - { - "time": "0:06", - "text": "This lecture is about mixture model estimation. In this lecture, we're going to continue discussing probabilistic topic models. In particular, we're going to talk about how to estimate the parameters of a mixture model. So let's first look at our motivation for using a mixture model, where we hope to factor out the background words from the topic word distribution. So the idea is to assume that the text data actually contain two kinds of words. One kind is from the background here, so \"the\", \"is\", \"we\", etc. The other kind is from the topic word distribution that we're interested in. So in order to solve this problem of factoring out background words, we can set up our mixture model as follows.
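The likelihood function just described, a product over word positions where each factor is the two-case mixture sum, can be sketched as follows. The distributions are invented toy values for illustration, not the lecture's:

```python
import math

def log_likelihood(doc, p_d, theta_d, theta_B):
    """Log-likelihood of a document under a two-component mixture:
    log p(d) = sum over positions of
               log[ p_d * p(w|theta_d) + (1 - p_d) * p(w|theta_B) ]."""
    ll = 0.0
    for w in doc:
        pw = p_d * theta_d.get(w, 0.0) + (1 - p_d) * theta_B.get(w, 0.0)
        ll += math.log(pw)  # each position contributes one mixture factor
    return ll

# Toy distributions (illustrative assumptions):
theta_d = {"text": 0.6, "mining": 0.3, "the": 0.1}
theta_B = {"the": 0.8, "of": 0.1, "text": 0.1}
ll = log_likelihood(["text", "the", "text"], 0.5, theta_d, theta_B)
```

Grouping the product by unique words rather than positions, as the transcript notes, just raises each word's mixture factor to the power of its count; the value is the same.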
We are going to assume that we already know the values for all the parameters in the mixture model except for the word distribution of Theta sub d, which is our target. So this is a case of customizing a probabilistic model so that we embed the unknown variables that we are interested in, but we're going to simplify other things. We're going to assume we have knowledge about the others, and this is a powerful way of customizing a model for a particular need. Now you can imagine, we could have assumed that we also don't know the background word distribution, but in this case, our goal is to factor out precisely those high-probability background words. So we assume the background model is already fixed. The problem here is, how can we adjust Theta sub d in order to maximize the probability of the observed document here, where we assume all the other parameters are known? Now, although we designed the model heuristically to try to factor out these background words, it's unclear whether, if we use the maximum likelihood estimator, we will actually end up having a word distribution where the common words like \"the\" will indeed have smaller probabilities than before. Now, in this case, it turns out that the answer is yes. When we set up the probabilistic model this way and use the maximum likelihood estimator, we will end up having a word distribution where the common words are factored out by the use of the background distribution. So to understand why this is so, it's useful to examine the behavior of a mixture model. So we're going to look at a very simple case in order to understand some interesting behaviors of a mixture model. The observed patterns here are actually generalizable to mixture models in general, but it's much easier to understand this behavior when we use a very simple case like what we're seeing here.
So specifically in this case, let's assume that the probability of choosing each of the two models is exactly the same. So we're going to flip a fair coin to decide which model to use. Furthermore, we are going to assume there are precisely two words, \"the\" and \"text.\" Obviously, this is a very naive oversimplification of actual text, but again, it is useful to examine the behavior in such a special case. So we further assume that the background model gives the word \"the\" a probability of 0.9 and \"text\" 0.1. Now, let's also assume that our data is extremely simple. The document has just two words, \"text\" and then \"the.\" So now, let's write down the likelihood function in such a case. First, what's the probability of \"text\" and what's the probability of \"the\"? I hope by this point, you will be able to write it down. So the probability of \"text\" is basically a sum of two cases, where each case corresponds to one of the two word distributions, and it accounts for the two ways of generating \"text\". Inside each case, we have the probability of choosing the model, which is 0.5, multiplied by the probability of observing \"text\" from that model. Similarly, \"the\" would have a probability of the same form, just with different probabilities. So naturally, our likelihood function is just the product of the two. It's very easy to see this once you understand the probability of each word, which is also why it's so important to understand what exactly the probability of observing each word from such a mixture model is. Now, the interesting question is, how can we then optimize this likelihood? Well, you will notice that there are only two variables. They are precisely the two probabilities of the two words \"text\" and \"the\" given by Theta sub d. This is because we have assumed that all the other parameters are known. So now, the question is a very simple algebra question.
So we have a simple expression with two variables, and we hope to choose the values of these two variables to maximize this function. It's like exercises that we have seen in simple algebra problems, and note that the two probabilities must sum to one, so there's some constraint. If there were no constraint, of course, we would set both probabilities to their maximum value, which would be one, to maximize this, but we can't do that because \"text\" and \"the\" must sum to one. We can't give both a probability of one. So now the question is, how should we allocate the probability mass between the two words? What do you think? Now, it will be useful to look at this formula for a moment and to see intuitively what to do in order to set these probabilities to maximize the value of this function. If we look into this further, then we'll see some interesting behavior of the two component models, in that they will be collaborating to maximize the probability of the observed data, which is dictated by the maximum likelihood estimator, but they're also competing in some way. In particular, they would be competing on the words, and they will tend to bet high probabilities on different words to avoid this competition in some sense, or to gain advantage in this competition. So again, looking at this objective function, where we have a constraint on the two probabilities, if you look at the formula intuitively, you might feel that you want to set the probability of \"text\" to be somewhat larger than \"the\". This intuition can be well supported by a mathematical fact, which is that when the sum of two variables is a constant, their product is maximized when they are equal, and this is a fact that we know from algebra. Now, if we plug that in, it would mean that we have to make the two factors equal.
When we make them equal, and if we consider the constraint, we can easily solve this problem, and the solution is that the probability of \"text\" would be 0.9 and the probability of \"the\" is 0.1. As you can see, indeed, the probability of \"text\" is now much larger than the probability of \"the\", and this was not the case when we had just one distribution. This is clearly because of the use of the background model, which assigns a very high probability to \"the\" and a low probability to \"text\". If you look at the equation, you will see obviously some interaction of the two distributions here. In particular, you will see that in order to make them equal, the probability assigned by Theta sub d must be higher for a word that has a smaller probability given by the background. This is obvious from examining this equation, because the background probability for \"text\" is small. So in order to compensate for that, we must make the probability of \"text\" that's given by Theta sub d somewhat larger, so that the two sides can be balanced. So this is in fact a very general behavior of this mixture model. That is, if one distribution assigns a higher probability to one word than to another, then the other distribution would tend to do the opposite. Basically, it would discourage other distributions from doing the same, and this is to balance them out so that we can account for all the words. This also means that by using a background model that is fixed to assign high probabilities to background words, we can indeed encourage the unknown topic word distribution to assign smaller probabilities to such common words.
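The solution just derived can be checked numerically with a brute-force grid search over p("text" | Theta sub d), using exactly the transcript's setup: equal mixing weights, background giving "the" 0.9 and "text" 0.1, and the two-word document "text the". A small sketch:

```python
def likelihood(p_text):
    """Likelihood of the two-word document "text the" under the mixture,
    with p(theta_d) = p(theta_B) = 0.5, p_B(the) = 0.9, p_B(text) = 0.1."""
    p_the = 1.0 - p_text  # theta_d must sum to one over the two words
    return (0.5 * p_text + 0.5 * 0.1) * (0.5 * p_the + 0.5 * 0.9)

# Grid search for the maximizing value of p(text | theta_d).
best = max((i / 1000 for i in range(1001)), key=likelihood)
# best comes out at 0.9, matching the analytical solution:
# p(text | theta_d) = 0.9 and p(the | theta_d) = 0.1.
```

Setting the derivative of (0.5p + 0.05)(0.95 - 0.5p) to zero gives p = 0.9 analytically, so the grid search and the algebra agree.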
Instead, it puts more probability mass on the content words that cannot be explained well by the background model, meaning that they have a very small probability from the background model, like \"text\" here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/QnGYn/3-2-probabilistic-topic-models-mixture-model-estimation-part-1" - } - ] - }, - { - "3-3-probabilistic-topic-models-mixture-model-estimation-part-2": [ - { - "time": "0:00", - "text": "[SOUND] Now let's look at another behavior of the mixture model, and in this case let's look at the response to data frequencies. So what you are seeing now is basically the likelihood function for the two-word document, and we know in this case the solution is \"text\" with a probability of 0.9 and \"the\" with a probability of 0.1. Now it's interesting to think about a scenario where we start adding more words to the document. So what would happen if we add many the's to the document?", - "url": "https://www.coursera.org/learn/text-mining/lecture/cMSgR/3-3-probabilistic-topic-models-mixture-model-estimation-part-2" - }, - { - "time": "0:41", - "text": "Now this would change the game, right? So, how? Well, picture what the likelihood function would look like now. Well, it starts with the likelihood function for the two words, right? As we add more words, we know that we just have to multiply the likelihood function by additional terms to account for the additional occurrences. Since in this case all the additional terms are \"the\", we're going to just multiply by this term, right? For the probability of \"the\".", - "url": "https://www.coursera.org/learn/text-mining/lecture/cMSgR/3-3-probabilistic-topic-models-mixture-model-estimation-part-2" - }, - { - "time": "1:12", - "text": "And if we have another occurrence of \"the\", we'd multiply again by the same term, and so on and so forth. We add as many terms as the number of the's that we add to the document, d'. Now this obviously changes the likelihood function.
So what's interesting now is to think about how that would change our solution. So what's the optimal solution now?", - "url": "https://www.coursera.org/learn/text-mining/lecture/cMSgR/3-3-probabilistic-topic-models-mixture-model-estimation-part-2" - }, - { - "time": "1:38", - "text": "Now, intuitively you'd know the original solution, 0.9 for \"text\" versus 0.1 for \"the\", will no longer be optimal for this new function. Right?", - "url": "https://www.coursera.org/learn/text-mining/lecture/cMSgR/3-3-probabilistic-topic-models-mixture-model-estimation-part-2" - }, - { - "time": "1:48", - "text": "But the question is how we should change it. What we know in general is that the probabilities must sum to one. So we know we must take away some probability mass from one word and add probability mass to the other word. The question is which word should have a reduced probability and which word should have a larger probability. And in particular, let's think about the probability of \"the\". Should it be increased to be more than 0.1? Or should we decrease it to less than 0.1? What do you think?", - "url": "https://www.coursera.org/learn/text-mining/lecture/cMSgR/3-3-probabilistic-topic-models-mixture-model-estimation-part-2" - }, - { - "time": "2:19", - "text": "Now you might want to pause the video a moment to think more about this question, because this has to do with understanding an important behavior of a mixture model, and indeed of the maximum likelihood estimator. Now if you look at the formula for a moment, then you will see it seems like the objective function is now more influenced by \"the\" than by \"text\". So now, as you can imagine, it would make sense to actually assign a smaller probability to \"text\" to make room for a larger probability for \"the\". Why? Because \"the\" is repeated many times. If we increase it a little bit, it will have more positive impact. Whereas a slight decrease of \"text\" will have relatively small impact, because it occurred just once, right?
So this means there is another behavior that we observe here. That is, high frequency words tend to be generated with high probabilities from all the distributions. And this is no surprise at all, because after all, we are maximizing the likelihood of the data. So the more a word occurs, the more sense it makes to give such a word a higher probability, because the impact on the likelihood function would be larger. This is in fact a very general phenomenon of the maximum likelihood estimator. But in this case, we can see that as we see more occurrences of a term, it also encourages the unknown distribution theta sub d to assign a somewhat higher probability to this word.", - "url": "https://www.coursera.org/learn/text-mining/lecture/cMSgR/3-3-probabilistic-topic-models-mixture-model-estimation-part-2" - }, - { - "time": "4:07", - "text": "Now it's also interesting to think about the impact of the probability of Theta sub B, the probability of choosing one of the two component models. So far we've been assuming that each model is equally likely, and that gives us 0.5. But you can again look at this likelihood function and try to picture what would happen if we increase the probability of choosing the background model. Now you will see that in these terms for \"the\", we have a different form, where the probability would be", - "url": "https://www.coursera.org/learn/text-mining/lecture/cMSgR/3-3-probabilistic-topic-models-mixture-model-estimation-part-2" - }, - { - "time": "4:40", - "text": "even larger, because the background has a high probability for this word, and the coefficient in front of the 0.9, which is now 0.5, would be even larger. When this is larger, the overall result would be larger. And that also makes it less important for theta sub d to increase the probability of \"the\", because it's already very large. So the impact here of increasing the probability of \"the\" is somewhat regulated by this coefficient, the 0.5.
If it's larger for the background, then it becomes less important to increase the value. So this means the behavior here, which is that high frequency words tend to get high probabilities, is affected or regularized somewhat by the probability of choosing each component. The more likely a component is to be chosen, the more important it is for it to have higher values for these frequent words. If it has a very small probability of being chosen, then the incentive is less. So to summarize, we have just discussed the mixture model. And we discussed the estimation problem of the mixture model, and in particular we discussed some general behaviors of the estimator, meaning we can expect our estimator to capture these intuitions. First, every component model attempts to assign high probabilities to highly frequent words in the data, and this is to collaboratively maximize the likelihood. Second, different component models tend to bet high probabilities on different words, and this is to avoid competition, or waste of probability, and it would allow them to collaborate more efficiently to maximize the likelihood.
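The frequency response summarized above can also be checked numerically by extending the earlier two-word example with extra occurrences of "the". The setup is the transcript's (equal mixing weights, background p(the) = 0.9, p(text) = 0.1); the grid search itself is an illustrative sketch:

```python
def best_p_text(n_the):
    """Grid-search the optimal p(text | theta_d) for a document with
    one "text" and n_the occurrences of "the", under the mixture with
    p(theta_d) = p(theta_B) = 0.5, p_B(the) = 0.9, p_B(text) = 0.1."""
    def likelihood(p_text):
        p_the = 1.0 - p_text
        # one factor for "text", and n_the identical factors for "the"
        return (0.5 * p_text + 0.05) * (0.5 * p_the + 0.45) ** n_the
    return max((i / 1000 for i in range(1001)), key=likelihood)

# With one "the" the optimum is 0.9; adding more the's pulls probability
# mass in theta_d away from "text" and toward "the".
p1, p3 = best_p_text(1), best_p_text(3)
```

Solving the first-order condition analytically gives p(text) = (1.9 - 0.1n)/(1 + n), i.e. 0.9 for one "the" and 0.4 for three, so p(the | theta_d) indeed grows with the word's frequency.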
And this distribution can be estimated by using a collection of documents, a large collection of English documents, by using just one distribution and then just normalizing the frequencies of terms to give us the probabilities of all these words. Now when we use such a specialized mixture model, we show that we can effectively get rid of the common words in the other component.
So this is the now familiar scenario of using a two-component mixture model to try to factor out the background words from the topic word distribution here. So we're interested in computing this estimate, and we're going to try to adjust these probability values to maximize the probability of the observed document. Note that we assume that all the other parameters are known. So the only thing unknown is the word probabilities given by theta sub d. In this lecture, we're going to look into how to compute this maximum likelihood estimate. Now, let's start with the idea of separating the words in the text data into two groups. One group would be explained by the background model. The other group would be explained by the unknown topic word distribution. After all, this is the basic idea of the mixture model. But suppose we actually know which word is from which distribution. That would mean, for example, these words \"the\", \"is\", and \"we\" are known to be from the background word distribution. On the other hand, the other words \"text\", \"mining\", \"clustering\", etc. are known to be from the topic word distribution. If you can see the colors, these are shown in blue. These blue words are then assumed to be from the topic word distribution. If we already know how to separate these words, then the problem of estimating the word distribution would be extremely simple. If you think about this for a moment, you'll realize that, well, we can simply take all these words that are known to be from this word distribution theta sub d and normalize them. So indeed this problem would be very easy to solve if we had known precisely which words are from which distribution, and this in fact makes the model no longer a mixture model, because we can already observe which distribution has been used to generate which part of the data. So we actually go back to the single word distribution problem.
In this case let's call these words that are known to be from theta sub d a pseudo-document d prime, and now all we need to do is just normalize these word counts for each word w_i. That's fairly straightforward. It's just dictated by the maximum likelihood estimator. Now, this idea however doesn't work, because in practice we don't really know which word is from which distribution, but it gives us the idea that perhaps we can guess which word is from which distribution. Specifically, given all the parameters, can we infer which distribution a word is from? So let's assume that we actually know tentative probabilities for these words in theta sub d. So now all the parameters are known for this mixture model, and now let's consider a word like \"text\". So the question is, do you think \"text\" is more likely to have been generated from theta sub d or from theta sub b? In other words, we want to infer which distribution has been used to generate this word. Now, this inference process is a typical Bayesian inference situation where we have some prior about these two distributions. So can you see what is our prior here? Well, the prior here is the probability of each distribution. So the prior is given by these two probabilities. In this case, the prior is saying that each model is equally likely, but we can imagine perhaps a different prior is possible. This is called a prior because it is our guess of which distribution has been used to generate a word before we even observe the word. So that's why we call it the prior. If we don't observe the word, we don't know what word has been observed, and our best guess is to say, well, they're equally likely. All right. So it's just flipping a coin. Now in Bayesian inference we typically update our belief after we have observed the evidence. So what is the evidence here? Well, the evidence here is the word text. Now that we know we're interested in the word text.
So text can be regarded as evidence, and if we use Bayes rule to combine the prior and the data likelihood, what we will end up with is combining the prior with the likelihood that you see here, which is basically the probability of the word text from each distribution. We see that in both cases the text is possible. Note that even in the background it is still possible, it just has a very small probability. So intuitively, what would be your guess in this case? Now if you're like many others, you would guess text is probably from theta sub d. It's more likely from theta sub d. Why? You will probably see that it's because text has a much higher probability under theta sub d than under the background model, which gives it a very small probability. So we're going to say, well, text is more likely from theta sub d. So you see, our guess of which distribution has been used to generate text would depend on how high the probability of text is in each word distribution. We tend to guess the distribution that gives the word a higher probability, and this is likely to maximize the likelihood. So we're going to choose the distribution that gives a higher likelihood. In other words, we're going to compare these two probabilities of the word given by the two distributions. But our guess must also be affected by the prior. So we also need to compare these two priors. Why? Because imagine we adjust these probabilities and say the probability of choosing the background model is almost 100 percent. Now, if you have that kind of strong prior, then that would affect your guess. You might think, well, wait a moment, maybe text could have been from the background as well. Although the probability is very small here, the prior is very high. So in the end, we have to combine the two, and the Bayes formula provides us with a solid and principled way of making this kind of guess, to quantify that.
So more specifically, let's think about the probability that this word has in fact been generated from theta sub d. Well, in order for text to be generated from theta sub d, two things must happen. First, theta sub d must have been selected, so we have the selection probability here. Second, we also have to actually have observed text from the distribution. So when we multiply the two together, we get the probability that text has in fact been generated from theta sub d. Similarly, for the background model, the probability of generating text is another product of a similar form. Now, we also introduce the latent variable z here to denote whether the word is from the background or the topic. When z is zero, it means it's from the topic theta sub d. When it's one, it means it's from the background theta sub b. So now we have the probability that text is generated from each. Then we can simply normalize them to have an estimate of the probability that the word text is from theta sub d or from theta sub b. That is equivalently the probability that z is equal to zero given that the observed evidence is text. So this is an application of Bayes rule. But this step is very crucial for understanding the EM algorithm, because if we can do this, then we would be able to first initialize the parameter values somewhat randomly, and then take a guess of these z values, that is, which distribution has been used to generate which word. The initialized parameter values would give us a complete specification of the mixture model, which further allows us to apply Bayes rule to infer which distribution is more likely to have generated each word. This prediction essentially helps us to separate the words from the two distributions.
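The normalization just described can be written in a few lines. A sketch with hypothetical probability values (not the lecture's numbers), where z = 0 means the word came from the topic distribution theta sub d:

```python
def p_z0_given_word(word, p_topic, p_bg, prior_topic=0.5):
    # Bayes rule: P(z=0 | w) = prior * P(w|theta_d) /
    #   (prior * P(w|theta_d) + (1 - prior) * P(w|theta_b))
    joint_topic = prior_topic * p_topic[word]      # select theta_d, then emit w
    joint_bg = (1.0 - prior_topic) * p_bg[word]    # select theta_b, then emit w
    return joint_topic / (joint_topic + joint_bg)  # normalize the two cases

# Hypothetical distributions: "text" is rare in the background, "the" is common.
p_topic = {"text": 0.04, "the": 0.000001}
p_bg = {"text": 0.000006, "the": 0.03}
print(p_z0_given_word("text", p_topic, p_bg))  # close to 1: likely from the topic
print(p_z0_given_word("the", p_topic, p_bg))   # close to 0: likely background
```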
Although we can't separate them for sure, we can separate them probabilistically, as shown here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/f82s5/3-4-probabilistic-topic-models-expectation-maximization-algorithm-part-1" - } - ] - }, - { - "3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2": [ - { - "time": "0:00", - "text": "[SOUND] So this is indeed the general idea of the Expectation-Maximization, or EM, Algorithm.", - "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" - }, - { - "time": "0:14", - "text": "So in all the EM algorithms we introduce a hidden variable to help us solve the problem more easily. In our case the hidden variable is a binary variable for each occurrence of a word. And this binary variable indicates whether the word has been generated from theta sub d or theta sub b. And here we show some possible values of these variables. For example, for the word the, it's from the background, so the z value is one. Text, on the other hand, is from the topic, so z is zero, etc.", - "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" - }, - { - "time": "0:53", - "text": "Now, of course, we don't observe these z values, we just imagine they're all such.
Values of z attached to all the words.", - "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" - }, - { - "time": "1:02", - "text": "And that's why we call these hidden variables.", - "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" - }, - { - "time": "1:06", - "text": "Now, the idea that we talked about before, of predicting which word distribution has been used to generate each word, is exactly to predict the value of this hidden variable. And so the EM algorithm would then work as follows. First, we'll initialize all the parameters with random values. In our case, the parameters are mainly the probabilities of words given by theta sub d. So this is an initialization stage. These initialized values would allow us to use Bayes rule to take a guess of these z values, so we'd guess these values. We can't say for sure whether text is from the background or not, but we can have our guess. This is given by this formula. It's called the E-step. And so the algorithm would then try to use the E-step to guess these z values. After that, it would then invoke another step that's called the M-step.
In this step we simply take advantage of the inferred z values and then just group words that are from the same distribution, like these from the background, including this one as well.", - "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" - }, - { - "time": "2:27", - "text": "We can then normalize the counts to estimate the probabilities, or to revise our estimate of the parameters.", - "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" - }, - { - "time": "2:36", - "text": "So let me also illustrate that we can group the words that are believed to have come from theta sub d, and that's text, mining, algorithm, and clustering, for example.", - "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" - }, - { - "time": "2:51", - "text": "And we group them together to help us re-estimate the parameters that we're interested in. So these will help us estimate these parameters.", - "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" - }, - { - "time": "3:06", - "text": "Note that before, we just set these parameter values randomly. But with this guess, we will have a somewhat improved estimate of them. Of course, we don't know exactly whether it's zero or one. So we're not going to really do the split in a hard way. But rather we're going to do a softer split.
And this is what happens here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" - }, - { - "time": "3:29", - "text": "So we're going to adjust the count by the probability that we believe this word has been generated by using theta sub d.", - "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" - }, - { - "time": "3:39", - "text": "And you can see where this comes from. Well, this has come from here, right? From the E-step. So the EM Algorithm would iteratively improve our initial estimate of parameters by using the E-step first and then the M-step. The E-step is to augment the data with additional information, like z. And the M-step is to take advantage of the additional information to separate the data, to split the data counts and then collect the right counts to re-estimate our parameters. And then once we have a new generation of parameters, we're going to repeat this. We do the E-step again to improve our estimate of the hidden variables.
And then that would lead to another generation of re-estimated parameters.", - "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" - }, - { - "time": "4:34", - "text": "For the word distribution that we are interested in.", - "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" - }, - { - "time": "4:39", - "text": "Okay, so, as I said, the bridge between the two is really the hidden variable z, which indicates how likely this word is from the topic word distribution, theta sub d.", - "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" - }, - { - "time": "4:56", - "text": "So, this slide has a lot of content and you may need to pause the video to digest it. But this basically captures the essence of the EM Algorithm. We start with initial values that are often random. Then we invoke the E-step followed by the M-step to get an improved setting of parameters. And then we repeat this, so this is a hill-climbing algorithm that would gradually improve the estimate of parameters. As I will explain later, there is some guarantee of reaching a local maximum of the log-likelihood function. So let's take a look at the computation for a specific case. These formulas are the EM formulas that you saw before, and you can also see there are superscripts, here, like here, n, to indicate the generation of parameters. Like here, for example, we have n plus one. That means we have improved; from here to here we have an improvement. So in this setting we have assumed the two models have equal probabilities and the background model is known. So what are the relevant statistics? Well, these are the word counts. So assume we have just four words, and their counts are like this.
And this is our background model that assigns high probabilities to common words like the.", - "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" - }, - { - "time": "6:25", - "text": "And in the first iteration, you can picture what will happen. Well, first we initialize all the values. So here, this probability that we're interested in is initialized as a uniform distribution over all the words.", - "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" - }, - { - "time": "6:40", - "text": "And then the E-step gives us a guess of the distribution that has been used to generate each word. We can see we have different probabilities for different words. Why? Well, that's because these words have different probabilities in the background. So even though the two distributions are equally likely, and our initialization is a uniform distribution, because of the differences in the background distribution we have different guessed probabilities. So these words are believed to be more likely from the topic.", - "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" - }, - { - "time": "7:15", - "text": "These, on the other hand, are less likely. They're probably from the background.", - "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" - }, - { - "time": "7:20", - "text": "So once we have these z values, we know in the M-step these probabilities will be used to adjust the counts.
So four must be multiplied by this 0.33 in order to get the allocated counts toward the topic.", - "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" - }, - { - "time": "7:39", - "text": "And this is done by this multiplication. Note that if our guess says this is 100 percent, if this is one point zero,", - "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" - }, - { - "time": "7:52", - "text": "then we just get the full count of this word for this topic. In general it's not going to be one point zero, so we're just going to get some percentage of these counts toward this topic. Then we simply normalize these counts to have a new generation of parameter estimates. So you can see, compare this with the older one, which is here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" - }, - { - "time": "8:18", - "text": "So compare this with this one and we'll see the probability is different. Not only that, we also see some words that are believed to have come from the topic will have a higher probability. Like this one, text.", - "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" - }, - { - "time": "8:32", - "text": "And of course, this new generation of parameters would allow us to further adjust the inferred latent variable, or hidden variable, values. So we have a new generation of z values, because of the E-step based on the new generation of parameters. And these newly inferred values of z will then give us another generation of the estimate of the word probabilities. And so on and so forth. So this is what would actually happen when we compute these probabilities using the EM Algorithm.
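The full loop just described, initialize uniformly, E-step, count-splitting M-step, repeat, can be sketched end to end. The four words, their counts, and the background probabilities below are made-up stand-ins for the slide's example, not the lecture's actual numbers:

```python
import math

def em_two_component(counts, p_bg, lam=0.5, iters=20):
    # EM for the two-component mixture: the background p_bg and the mixing
    # weight lam are fixed and known; only the topic distribution is estimated.
    words = list(counts)
    p_d = {w: 1.0 / len(words) for w in words}  # initialization: uniform
    history = []
    for _ in range(iters):
        # E-step: probability each word occurrence came from the topic (z = 0).
        z0 = {w: (1 - lam) * p_d[w] / ((1 - lam) * p_d[w] + lam * p_bg[w])
              for w in words}
        # M-step: allocate counts by z0, then normalize.
        alloc = {w: counts[w] * z0[w] for w in words}
        total = sum(alloc.values())
        p_d = {w: alloc[w] / total for w in words}
        # Log-likelihood after this iteration; EM guarantees it never decreases.
        history.append(sum(c * math.log((1 - lam) * p_d[w] + lam * p_bg[w])
                           for w, c in counts.items()))
    return p_d, history

# Hypothetical document counts and background model:
counts = {"the": 4, "paper": 2, "text": 4, "mining": 2}
p_bg = {"the": 0.5, "paper": 0.3, "text": 0.1, "mining": 0.1}
p_d, history = em_two_component(counts, p_bg)
print(p_d)  # most of the topic mass shifts to "text" and "mining"
```

As in the lecture's table, the log-likelihood values in `history` are negative and non-decreasing across iterations.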
As you can see in the last row, where we show the log-likelihood, the log-likelihood is increasing as we do the iterations. Note that the log-likelihood is negative because the probability is between 0 and 1; when you take the logarithm, it becomes a negative value. Now what's also interesting is, you'll note the last column. These are the inferred word splits, the probabilities that a word is believed to have come from one distribution, in this case the topical distribution, all right. And you might wonder whether this would also be useful. Because our main goal is to estimate these word distributions, so this is our primary goal: we hope to have a more discriminative word distribution. But the last column, a by-product, can actually also be very useful. You can think about that. We can use it, for example, to estimate to what extent this document has covered background words. When we add these up or take the average, we will kind of know to what extent it has covered background versus content words that are not explained well by the background. [MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/naLsv/3-5-probabilistic-topic-models-expectation-maximization-algorithm-part-2" - } - ] - }, - { - "3-6-probabilistic-topic-models-expectation-maximization-algorithm-part-3": [ - { - "time": "0:07", - "text": "So, I just showed you that empirically the likelihood will converge, but theoretically it can also be proved that the EM algorithm will converge to a local maximum. So here's just an illustration of what happens, without a detailed explanation. That would require more knowledge about some inequalities that we haven't really covered yet.", - "url": "https://www.coursera.org/learn/text-mining/lecture/N5cBh/3-6-probabilistic-topic-models-expectation-maximization-algorithm-part-3" - }, - { - "time": "0:39", - "text": "So here what you see is, on the x-axis, we have a theta value.
This is a parameter that we have. On the y-axis we see the likelihood function. So this curve is the original likelihood function, and this is the one that we hope to maximize. And we hope to find a theta value at this point to maximize it. But in the case of a mixture model we cannot easily find an analytic solution to the problem. So, we have to resort to numerical algorithms, and the EM algorithm is such an algorithm. It's a hill-climbing algorithm. That means you start with some random guess. Let's say you start from here, that's your starting point. And then you try to improve this by moving this to another point where you can have a higher likelihood. So that's the idea of hill climbing. And in the EM algorithm, the way we achieve this is to do two things. First, we'll fit a lower bound of the likelihood function. So this is the lower bound. See here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/N5cBh/3-6-probabilistic-topic-models-expectation-maximization-algorithm-part-3" - }, - { - "time": "1:51", - "text": "And once we fit the lower bound, we can then maximize the lower bound. And of course, the reason why this works is because the lower bound is much easier to optimize. So we know our current guess is here. And by maximizing the lower bound, we'll move this point to the top. To here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/N5cBh/3-6-probabilistic-topic-models-expectation-maximization-algorithm-part-3" - }, - { - "time": "2:13", - "text": "Right? And when we then map to the original likelihood function, we find this point. Because it's a lower bound, we are guaranteed to improve this guess, right?
Because we improve our lower bound, and the original likelihood curve, which is above this lower bound, will definitely be improved as well.", - "url": "https://www.coursera.org/learn/text-mining/lecture/N5cBh/3-6-probabilistic-topic-models-expectation-maximization-algorithm-part-3" - }, - { - "time": "2:36", - "text": "So we already know we're improving the lower bound, so we definitely improve the original likelihood function, which is above this lower bound. So, in our example, the current guess is the parameter value given by the current generation, and the next guess is the re-estimated parameter values. From this illustration you can see the next guess is always better than the current guess, unless it has reached the maximum, where it will be stuck; the two would then be equal. So, the E-step is basically to compute this lower bound. We don't directly compute this likelihood function, but we compute the latent variable values, and these are basically a part of this lower bound; they help determine the lower bound. The M-step, on the other hand, is to maximize the lower bound. It allows us to move the parameters to a new point. And that's why the EM algorithm is guaranteed to converge to a local maximum.", - "url": "https://www.coursera.org/learn/text-mining/lecture/N5cBh/3-6-probabilistic-topic-models-expectation-maximization-algorithm-part-3" - }, - { - "time": "3:42", - "text": "Now, as you can imagine, when we have many local maxima, we also have to repeat the EM algorithm multiple times in order to figure out which one is the actual global maximum. And this in general is actually a difficult problem in numerical optimization. So here, for example, had we started from here, then we would gradually just climb up to this top. So, that's not optimal, and we'd like to climb up all the way to here, and the only way to climb up to here is to start from somewhere here or here.
So, in the EM algorithm, we generally would have to start from different points or have some other way to determine a good initial starting point.", - "url": "https://www.coursera.org/learn/text-mining/lecture/N5cBh/3-6-probabilistic-topic-models-expectation-maximization-algorithm-part-3" - }, - { - "time": "4:29", - "text": "To summarize, in this lecture we introduced the EM algorithm. This is a general algorithm for computing the maximum likelihood estimate of all kinds of models, so not just for our simple model. And it's a hill-climbing algorithm, so it can only converge to a local maximum, and it will depend on the initial points.", - "url": "https://www.coursera.org/learn/text-mining/lecture/N5cBh/3-6-probabilistic-topic-models-expectation-maximization-algorithm-part-3" - }, - { - "time": "4:49", - "text": "The general idea is that we have two steps to improve the estimate of parameters. In the E-step we roughly [INAUDIBLE] by predicting the values of useful hidden variables that we would use to simplify the estimation. In our case, this is the distribution that has been used to generate each word. In the M-step we then exploit such augmented data, which makes it easier to estimate the distribution, to improve the estimate of parameters. Here the improvement is guaranteed in terms of the likelihood function. Note that we will not necessarily have stable convergence of the parameter values even though the likelihood function is guaranteed to increase. There are some properties that have to be satisfied in order for the parameters to also converge to some stable value.", - "url": "https://www.coursera.org/learn/text-mining/lecture/N5cBh/3-6-probabilistic-topic-models-expectation-maximization-algorithm-part-3" - }, - { - "time": "5:47", - "text": "Now here data augmentation is done probabilistically. That means we're not going to just say exactly what's the value of a hidden variable.
But we're going to have a probability distribution over the possible values of these hidden variables. So this causes a split of counts of events probabilistically.", - "url": "https://www.coursera.org/learn/text-mining/lecture/N5cBh/3-6-probabilistic-topic-models-expectation-maximization-algorithm-part-3" - }, - { - "time": "6:07", - "text": "And in our case we'll split the word counts between the two distributions. [MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/N5cBh/3-6-probabilistic-topic-models-expectation-maximization-algorithm-part-3" - } - ] - }, - { - "3-7-probabilistic-latent-semantic-analysis-plsa-part-1": [ - { - "time": "0:00", - "text": "[SOUND] This lecture is about probabilistic latent semantic analysis, or PLSA.", - "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" - }, - { - "time": "0:12", - "text": "In this lecture we're going to introduce probabilistic latent semantic analysis, often called PLSA. This is the most basic topic model, and also one of the most useful topic models. Now this kind of model can in general be used to mine multiple topics from text documents, and PLSA is one of the most basic topic models for doing this. So let's first examine this problem in more detail. Here I show a sample article, which is a blog article about Hurricane Katrina.", - "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" - }, - { - "time": "0:48", - "text": "And I show some simple topics. For example, government response, flooding of the city of New Orleans.
Donation, and the background.", - "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" - }, - { - "time": "0:59", - "text": "You can see in the article we use words from all these distributions.", - "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" - }, - { - "time": "1:05", - "text": "So we first, for example, see there's a criticism of the government response, and this is followed by discussion of the flooding of the city and donations, et cetera. We also see background words mixed with them.", - "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" - }, - { - "time": "1:18", - "text": "So the overall task of topic analysis here is to try to decode these topics behind the text: to segment the topics, to figure out which words are from which distribution, and to figure out, first, what are these topics? How do we know there's a topic about government response, and a topic about flooding of the city? So these are the tasks of the topic model.", - "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" - }, - { - "time": "1:42", - "text": "If we have discovered these topics, we can color these words, as you see here, to separate the different topics. Then you can do a lot of things, such as summarization or segmentation of the topics, clustering of the sentences, etc. So the formal definition of the problem of mining multiple topics from text is shown here. And this is a slide that you have seen in an earlier lecture. So the input is a collection, the number of topics, and a vocabulary set, and of course the text data.", - "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" - }, - { - "time": "2:16", - "text": "And then the output is of two kinds.
One is the topic characterization, the theta i's. Each theta i is a word distribution. And second, the topic coverage for each document. These are the pi sub i j's, and they tell us which document covers which topic to what extent. So we hope to generate these as output, because there are many useful applications if we can do that.", - "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" - }, - { - "time": "2:42", - "text": "So the idea of PLSA is actually very similar to the two-component mixture model that we have already introduced. The only difference is that we are going to have more than two topics. Otherwise, it is essentially the same. So here I illustrate how we can generate text that has multiple topics, and naturally, as in all cases of probabilistic modeling, we want to figure out the likelihood function. So we would also ask the question, what's the probability of observing a word from such a mixture model? Now if you look at this picture and compare this with the picture that we have seen earlier, you will see the only difference is that we have added more topics here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" - }, - { - "time": "3:26", - "text": "So, before, we had just one topic besides the background topic. But now we have more topics. Specifically, we have k topics now. All these are topics that we assume exist in the text data. So the consequence is that our switch for choosing a topic is now a multiway switch. Before, it was just a two-way switch. We can think of it as flipping a coin. But now we have multiple ways. First we can flip a coin to decide whether to talk about the background. So it's the background, lambda sub B, versus non-background; 1 minus lambda sub B gives us the probability of actually choosing a non-background topic.
After we have made this decision, we have to make another decision to choose one of these K distributions. So there is a K-way switch here. And this is characterized by the pi's, and these sum to one.", - "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" - }, - { - "time": "4:31", - "text": "This is the only difference in design, which is a little bit more complicated. But once we decide which distribution to use, the rest is the same: we are going to just generate a word by using one of these distributions, as shown here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" - }, - { - "time": "4:46", - "text": "So now let's look at the question about the likelihood. What's the probability of observing a word from such a distribution? What do you think? We've seen this problem many times now, and if you can recall, it's generally a sum over all the different possibilities of generating a word. So let's first look at how the word can be generated from the background model. Well, the probability that the word is generated from the background model is lambda sub b multiplied by the probability of the word from the background model, right. Two things must happen. First, we have to have chosen the background model, and that's the probability lambda sub b. Then second, we must have actually obtained the word w from the background, and that's the probability of w given theta sub b.", - "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" - }, - { - "time": "5:40", - "text": "Okay, so similarly, we can figure out the probability of observing the word from another topic, like the topic theta sub k. Now notice that here it's the product of three terms, and that's because the choice of topic theta sub k only happens if two things happen.
One is we decide not to talk about the background. So that's a probability of 1 minus lambda sub B. Second, we also have to actually choose theta sub k among these K topics. So that's the probability of theta sub k, or pi sub k.", - "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" - }, - { - "time": "6:17", - "text": "And similarly, the probabilities of generating the word from the second topic and the first topic are like what you're seeing here. And so in the end the probability of observing the word is just the sum of all these cases. And I have to stress again, this is a very important formula to know, because it is really key to understanding all the topic models and indeed a lot of mixture models. So make sure that you really understand that the probability", - "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" - }, - { - "time": "6:49", - "text": "of w is indeed the sum of these terms.", - "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" - }, - { - "time": "6:56", - "text": "So, next, once we have the likelihood function, we would be interested in estimating the parameters. But first, let's put all these together to have the complete likelihood function for PLSA. The first line shows the probability of a word, as illustrated on the previous slide. And this is an important formula, as I said.", - "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" - }, - { - "time": "7:22", - "text": "So let's take a closer look at this. This actually contains all the important parameters. So first of all we see lambda sub b here.
This represents the percentage of background words", - "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" - }, - { - "time": "7:32", - "text": "that we believe exist in the text data. And this can be a known value that we set empirically.", - "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" - }, - { - "time": "7:41", - "text": "Second, we see the background language model, and typically we also assume this is known. We can use a large collection of text, or use all the text that we have available, to estimate the word distribution.", - "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" - }, - { - "time": "7:52", - "text": "Next, in the rest of this formula. [COUGH] Excuse me. You see two interesting kinds of parameters; those are the most important parameters that we are interested in. One is the pi's, and these are the coverage of a topic in the document.", - "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" - }, - { - "time": "8:11", - "text": "And the other is the word distributions that characterize all the topics.", - "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" - }, - { - "time": "8:18", - "text": "So the next line, then, is simply to plug this in to calculate the probability of a document. This is, again, of the familiar form, where you have a sum, you have a count of a word in the document, and then the log of a probability. Now it's a little bit more complicated than the two-component case, because now we have more components, so the sum involves more terms. And then this line is just the likelihood for the whole collection.
And it's very similar, just accounting for more documents in the collection.", - "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" - }, - { - "time": "8:52", - "text": "So what are the unknown parameters? I already said that there are two kinds: one is coverage, one is word distributions. Again, it's a useful exercise for you to think about exactly how many parameters there are here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" - }, - { - "time": "9:05", - "text": "How many unknown parameters are there? Now, trying to think through that question will help you understand the model in more detail, and it will also allow you to understand what the output would be when we use PLSA to analyze text data. And these are precisely the unknown parameters.", - "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" - }, - { - "time": "9:24", - "text": "So after we have obtained the likelihood function shown here, the next step is to worry about parameter estimation.", - "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" - }, - { - "time": "9:32", - "text": "And we can do the usual thing, maximum likelihood estimation. So again, it's a constrained optimization problem, like what we have seen before, only that we have a collection of text and we have more parameters to estimate. And we still have two kinds of constraints. One is on the word distributions.", - "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" - }, - { - "time": "9:51", - "text": "All the words must have probabilities that sum to one for one distribution.
The other is the topic coverage distribution: a document will have to cover precisely these k topics, so the probabilities of covering each topic would have to sum to 1. So at this point it's basically a well-defined applied math problem; you just need to figure out the solution to this optimization problem. There's a function with many variables, and we need to figure out the values of these variables to make the function reach its maximum. >> [MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/HKe8K/3-7-probabilistic-latent-semantic-analysis-plsa-part-1" - } - ] - }, - { - "3-8-probabilistic-latent-semantic-analysis-plsa-part-2": [ - { - "time": "0:00", - "text": "[SOUND]", - "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" - }, - { - "time": "0:08", - "text": "We can compute this maximum likelihood estimate by using the EM algorithm. So in the e-step, we now have to introduce more hidden variables because we have more topics, so our hidden variable z, which is a topic indicator, can take more than two values. Specifically, it will take k plus one values, with b denoting the background, and 1 through k denoting the other k topics, right.", - "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" - }, - { - "time": "0:36", - "text": "So now the e-step, as you can recall, is to augment the data by predicting the values of the hidden variable. So we're going to predict, for a word, whether the word has come from one of these k plus one distributions.
This equation allows us to predict the probability that the word w in document d is generated from topic theta sub j.", - "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" - }, - { - "time": "1:03", - "text": "And the bottom one is the predicted probability that this word has been generated from the background. Note that we use document d here to index the word. Why? Because whether a word is from a particular topic actually depends on the document. Can you see why? Well, it's through the pi's. The pi's are tied to each document. Each document can have potentially different pi's, right. The pi's will then affect our prediction. So, the pi's are here, and this depends on the document.", - "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" - }, - { - "time": "1:38", - "text": "And that might give a different guess for a word in different documents, and that's desirable.", - "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" - }, - { - "time": "1:46", - "text": "In both cases we are using Bayes' rule, as I explained, basically assessing the likelihood of generating the word from each of these distributions and then normalizing.", - "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" - }, - { - "time": "1:57", - "text": "What about the m-step? Well, you may recall that in the m-step we take advantage of the inferred z values to split the counts, and then collect the right counts to re-estimate the parameters. So in this case, we can re-estimate our topic coverage probability.
And this is re-estimated based on collecting all the words in the document.", - "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" - }, - { - "time": "2:22", - "text": "And that's why we have the count of the word in the document, and sum over all the words. And then we're going to look at to what extent this word belongs to", - "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" - }, - { - "time": "2:34", - "text": "the topic theta sub j. And this part is our guess from the e-step.", - "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" - }, - { - "time": "2:40", - "text": "This tells us how likely this word is actually from theta sub j. And when we multiply them together, we get the discounted count that's allocated to topic theta sub j. And when we normalize this over all the topics, we get the distribution over all the topics to indicate the coverage. And similarly, the bottom one is the estimated probability of a word for a topic. And in this case we are using exactly the same count; you can see this is the same discounted count. It tells us to what extent we should allocate this word [INAUDIBLE], but the normalization is different. Because in this case we are interested in the word distribution, we simply normalize this over all the words. This is different; in contrast, here we normalize among all the topics. It would be useful to make a comparison between the two.", - "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" - }, - { - "time": "3:37", - "text": "This gives us different distributions.
And these tell us how to improve the parameters.", - "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" - }, - { - "time": "3:48", - "text": "And as I just explained, in both formulas we have a maximum likelihood estimate based on allocated word counts [INAUDIBLE]. Now this is actually a general phenomenon in all EM algorithms: in the m-step, you generally compute the expected count of the event based on the e-step result, and then you normalize it, typically. So, in terms of computation of this EM algorithm, we can actually just keep counting various events and then normalize them. And when we think this way, we also have a more concise way of presenting the EM algorithm. It actually helps us better understand the formulas, so I'm going to go over this in some detail. In this algorithm, we first initialize all the unknown parameters randomly, all right. So, in our case, we are interested in all of those coverage parameters, the pi's, and the word distributions [INAUDIBLE], and we just randomly initialize them. This is the initialization step, and then we will repeat until the likelihood converges. Now how do we know whether the likelihood converges? We can compute the likelihood at each step and compare the current likelihood with the previous likelihood. If it doesn't change much, we're going to say it has converged, right.", - "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" - }, - { - "time": "5:19", - "text": "So, in each step we're going to do an e-step and an m-step. In the e-step we're going to augment the data by predicting the hidden variables. In this case, the hidden variable, z sub d, w, indicates whether the word w in d is from a topic or the background. And if it's from a topic, which topic.
So if you look at the e-step formulas, essentially we're normalizing these counts, sorry, these probabilities of observing the word from each distribution. So you can see, basically the prediction of the word from topic theta sub j is based on the probability of selecting theta sub j as the word distribution to generate the word, multiplied by the probability of observing the word from that distribution.", - "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" - }, - { - "time": "6:17", - "text": "And I said it's proportional to this because in the implementation of the EM algorithm you can keep a counter for this quantity, and in the end just normalize it. So the normalization here is over all the topics, and then you would get a probability.", - "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" - }, - { - "time": "6:36", - "text": "Now, in the m-step, we do the same, and we are going to collect these", - "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" - }, - { - "time": "6:43", - "text": "allocated counts for each topic.", - "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" - }, - { - "time": "6:47", - "text": "And we split words among the topics.", - "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" - }, - { - "time": "6:50", - "text": "And then we're going to normalize them in different ways to obtain the re-estimates. So for example, we can normalize among all the topics to get the re-estimate of pi, the coverage. Or we can re-normalize based on all the words.
And that would give us a word distribution.", - "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" - }, - { - "time": "7:10", - "text": "So it's useful to think of the algorithm in this way, because when implementing it, you can just use variables to keep track of these quantities in each case.", - "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" - }, - { - "time": "7:23", - "text": "And then you just normalize these variables to make them distributions.", - "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" - }, - { - "time": "7:32", - "text": "Now I did not put the constraint for this one, and I intentionally leave this as an exercise for you. Can you see what the normalizer for this one is? It's of a slightly different form, but it's essentially the same as the one that you have seen here. So in general, in implementations of EM algorithms, you will see that you accumulate various counts and then normalize them.", - "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" - }, - { - "time": "8:01", - "text": "So to summarize, we introduced the PLSA model.
It is a mixture model with k unigram language models representing k topics.", - "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" - }, - { - "time": "8:11", - "text": "And we also added a pre-determined background language model to help discover discriminative topics, because this background language model can help attract the common terms.", - "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" - }, - { - "time": "8:23", - "text": "And with the maximum likelihood estimator, we can discover topical knowledge from text data. In this case, PLSA allows us to discover two things: one is the k word distributions, each one representing a topic, and the other is the proportion of each topic in each document.", - "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" - }, - { - "time": "8:41", - "text": "And such detailed characterization of the coverage of topics in documents can enable a lot of further analysis. For example, we can aggregate the documents in a particular time period to assess the coverage of a particular topic in that time period. That would allow us to generate the temporal trends of topics. We can also aggregate topics covered in documents associated with a particular author, and then we can characterize the topics written by this author, etc. And in addition to this, we can also cluster terms and cluster documents. In fact, each topic can be regarded as a cluster, so we already have the term clusters. The high-probability words can be regarded as", - "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" - }, - { - "time": "9:29", - "text": "belonging to one cluster represented by the topic. Similarly, documents can be clustered in the same way.
We can assign a document to the topic cluster that's covered most in the document. So remember, the pi's indicate to what extent each topic is covered in the document; we can assign the document to the topic cluster that has the highest pi.", - "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" - }, - { - "time": "9:57", - "text": "And in general there are many useful applications of this technique.", - "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" - }, - { - "time": "10:03", - "text": "[MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/GJyGG/3-8-probabilistic-latent-semantic-analysis-plsa-part-2" - } - ] - }, - { - "3-9-latent-dirichlet-allocation-lda-part-1": [ - { - "time": "0:07", - "text": "This lecture is about Latent Dirichlet Allocation, or LDA. In this lecture, we are going to continue talking about topic models. In particular, we are going to talk about some extensions of PLSA, and one of them is LDA, or Latent Dirichlet Allocation. So the plan for this lecture is to cover two things. One is to extend PLSA with prior knowledge, and that would allow us to have, in some sense, a user-controlled PLSA, so it doesn't just listen to the data, but also listens to our needs. The second is to extend PLSA as a generative model, a fully generative model. This has led to the development of Latent Dirichlet Allocation, or LDA. So first, let's talk about PLSA with prior knowledge. Now in practice, when we apply PLSA to analyze text data, we might have additional knowledge that we want to inject to guide the analysis. The standard PLSA is going to blindly listen to the data by using maximum [inaudible]. We are going to just fit the data as much as we can and get some insight about the data.
This is also very useful, but sometimes a user might have some expectations about which topics to analyze. For example, we might expect to see retrieval models as a topic in information retrieval, or we may also be interested in certain aspects, such as battery and memory, when looking at opinions about a laptop, because the user is particularly interested in these aspects. A user may also have knowledge about topic coverage: we may know which topic is definitely not covering a document, or is covering the document. For example, we might have seen topic tags assigned to documents, and those tags could be treated as topics. If we do that, then a document would be generated using only topics corresponding to the tags already assigned to the document. If the document is not assigned a tag, we're going to say there is no way of using that topic to generate the document. The document must be generated by using the topics corresponding to the assigned tags. So the question is how we can incorporate such knowledge into PLSA. It turns out that there is a very elegant way of doing that, and that is to incorporate such knowledge as priors on the models. You may recall that in Bayesian inference, we use a prior together with data to estimate parameters, and this is precisely what would happen. So in this case, we can use the maximum a posteriori estimate, also called the MAP estimate, and the formula is given here. Basically, this is to maximize the posterior probability, and this is a combination of the likelihood of the data and the prior. So what would happen is that we are going to have an estimate that listens to the data and also listens to our prior preferences. We can use this prior, which is denoted as p of lambda, to encode all kinds of preferences and constraints. So for example, we can use this to encode the need of having a precise background topic.
Now this could be encoded as a prior, because we can say the prior for the parameters is only non-zero if the parameters contain one topic that is equivalent to the background language model. In other words, if it is not like that, we are going to say the prior says it is impossible. So the probability of that kind of model would be zero according to our prior. Now we can also, for example, use the prior to force a particular choice of topic to have a probability of a certain number. For example, we can force document D to choose topic one with probability one half, or we can prevent a topic from being used in generating a document. So we can say the third topic should not be used in generating document D; we will set the pi to zero for that topic. We can also use the prior to favor a set of parameters with topics that assign high probability to some particular words. In this case, we are not going to say it is impossible, but we can just strongly favor certain kinds of distributions, and you will see an example later. The MAP estimate can be computed using a similar EM algorithm as we have used for the maximum likelihood estimate, with just some modifications, and most of the parameters would reflect the prior preferences. In such an estimate, if we use a special form of prior called a conjugate prior, then the functional form of the prior will be similar to that of the data. As a result, we can combine the two, and the consequence is that you can basically convert the influence of the prior into the influence of additional pseudo data, because the two functional forms are the same and they can be combined. So the effect is as if we had more data, and this is convenient for computation. It does not mean a conjugate prior is the best way to define a prior. So now let us look at a specific example. Suppose the user is particularly interested in the battery life of a laptop and we are analyzing reviews.
So the prior says that the model should contain one distribution that assigns high probability to battery and life. So we could say, well, there is a distribution that is kind of concentrated on battery life, and the prior says that one of the distributions should be very similar to this. Now if we use the MAP estimate with a conjugate prior, a Dirichlet distribution based on this preference, then the only difference in the EM algorithm is that when we re-estimate the word distributions, we are going to add additional counts to reflect our prior. So here you can see the pseudo counts are defined based on the probability of words in the prior. So battery obviously would have a high pseudo count, and similarly life would also have a high pseudo count. All the other words would have zero pseudo counts because their probability is zero in the prior. And we see this is also controlled by a parameter mu: we are going to add mu multiplied by the probability of w given the prior distribution to the collected counts when we re-estimate this word distribution. So this is the only step that is changed, and the change is happening here. Before, we just collected the counts of words that we believe have been generated from this topic, but now we force this distribution to give more probability to these words by adding these pseudo counts. So in fact we artificially inflated their probabilities. To make this a distribution, we also need to add the corresponding pseudo counts to the denominator; this is the total sum of all the pseudo counts we have added for all the words. This would make this a proper distribution. Now this is an intuitively very reasonable way of modifying EM, and theoretically speaking, this works and it computes the MAP estimate. It is useful to think about two specific extreme cases of mu. Now, [inaudible] the picture. Think about what would happen if we set mu to zero. Well, that is essentially to remove this prior.
So mu in some sense indicates the strength of the prior. Now what would happen if we set mu to positive infinity? Well, that is to say the prior is so strong that we are not going to listen to the data at all. So in the end, you see, in this case we are going to make one of the distributions fixed to the prior. You see why? When mu is infinite, we basically let this one dominate. In fact, we are going to set this one to precisely this distribution. So in this case, it is this distribution. And that is why we said the background language model is in fact a way to impose a prior, because it would force one distribution to be exactly the same as what we give, that is, the background distribution. So in this case, we could even force the distribution to entirely focus on battery life. But of course this would not work well, because it cannot attract other words, and it would affect the accuracy of capturing the topic of battery life. So in practice, mu is set somewhere in between, of course. So this is one way to impose a prior. We can also impose some other constraints. For example, we can set any parameter to a constant, including zero, as needed. For example, we may want to set one of the pi's to zero, and this would mean we do not allow that topic to participate in generating that document. And this is only reasonable, of course, when we have prior knowledge that strongly suggests this.", - "url": "https://www.coursera.org/learn/text-mining/lecture/deiXc/3-9-latent-dirichlet-allocation-lda-part-1" - } - ] - }, - { - "3-10-latent-dirichlet-allocation-lda-part-2": [ - { - "time": "0:00", - "text": "[SOUND] So now let's talk about the extension of PLSA to LDA, and to motivate that, we need to talk about some deficiencies of PLSA. First, it's not really a generative model, because we can't compute the probability of a new document. You can see why: that's because the pi's are needed to generate the document, but the pi's are tied to the documents that we have in the training data.
So we can't compute the pi's for a future document.", - "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" - }, - { - "time": "0:34", - "text": "And there's some heuristic workaround, though. Secondly, it has many parameters, and I've asked you to compute how many parameters exactly there are in PLSA, and you will see there are many parameters. That means the model is very complex. And this also means that there are many local maxima and it's prone to overfitting. And that means it's very hard to find a good local maximum,", - "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" - }, - { - "time": "1:02", - "text": "one that is representative of the global maximum. And in terms of explaining future data, we might find that it will overfit the training data because of the complexity of the model. The model is so flexible that it fits precisely what the training data looks like, and then it doesn't allow us to generalize the model for use on other data.", - "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" - }, - { - "time": "1:23", - "text": "This however is not necessarily a problem for text mining, because here we're often only interested in fitting the training documents that we have. We are not always interested in modeling future data, but in other cases, where we would care about generality, we would worry about this overfitting.", - "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" - }, - { - "time": "1:42", - "text": "So LDA was proposed to improve that, basically to make PLSA a generative model by imposing a Dirichlet prior on the model parameters. Dirichlet is just a special distribution that we can use to specify the prior. So in this sense, LDA is just a Bayesian version of PLSA, and the parameters are now much more regularized.
You will see there are many fewer parameters, and it can achieve the same goal as PLSA for text mining: it can compute the topic coverage and topic word distributions as in PLSA. However, the parameters here are much fewer, and in order to compute the topic coverage and word distributions, we again face a problem of inference of these variables, because they are not parameters of the model. So the inference part again faces the local maximum problem. So essentially they are doing something very similar, but theoretically, LDA is a more elegant way of looking at the topic modeling problem. So let's see how we can generalize the standard PLSA to LDA. Now a full treatment of LDA is beyond the scope of this course, and we just don't have time to go in depth talking about that. But here, I just want to give you a brief idea about what's extended and what it enables, all right. So this is the picture of LDA. Now, I removed the background model just for simplicity.", - "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" - }, - { - "time": "3:15", - "text": "Now, in this model, all these parameters are free to change and we do not impose any prior. So these word distributions are now represented as theta vectors. So these are word distributions, here. And the other set of parameters are the pi's, which we represent as a vector also; this is more convenient for introducing LDA. And we have one vector for each document. And in the case of theta, we have one vector for each topic.", - "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" - }, - { - "time": "3:50", - "text": "Now, the difference between LDA and PLSA is that in LDA, we're not going to allow them to freely change.
Instead, we're going to force them to be drawn from another distribution.", - "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" - }, - { - "time": "4:03", - "text": "So more specifically, they will be drawn from two Dirichlet distributions respectively, where a Dirichlet distribution is a distribution over vectors. So it gives us a probability for a particular choice of vector. Take, for example, the pi's, right. So this Dirichlet distribution tells us which vectors of pi are more likely. And this distribution is itself controlled by another vector of parameters, the alphas.", - "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" - }, - { - "time": "4:31", - "text": "Depending on the alphas, we can characterize the distribution in different ways, to favor certain choices of pi as more likely than others. For example, you might favor the choice of a relatively uniform distribution over all the topics, or you might favor generating a skewed coverage of topics; this is controlled by alpha. And similarly here, the topic word distributions are drawn from another Dirichlet distribution with beta parameters. And note that here, alpha has k parameters, corresponding to the k values of pi for a document, whereas beta has M values corresponding to controlling the M words in our vocabulary.", - "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" - }, - { - "time": "5:17", - "text": "Now once we impose these priors, the generation process will be different.
We start by drawing the pi's from the Dirichlet distribution, and these pi's will tell us these probabilities.", - "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" - }, - { - "time": "5:35", - "text": "And then, we're going to use the pi's to further choose which topic to use, and this is of course very similar to the PLSA model.", - "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" - }, - { - "time": "5:47", - "text": "And similarly here, we're not going to have these distributions free. Instead, we're going to draw one from the Dirichlet distribution, and then from this, we're going to further sample a word. And the rest is very similar to PLSA. The likelihood function now is more complicated for LDA, but there's a close connection between the likelihood function of LDA and that of PLSA. So I'm going to illustrate the difference here. At the top, you see the PLSA likelihood function that you have already seen before. It's copied from the previous slide, only that I dropped the background for simplicity.", - "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" - }, - { - "time": "6:27", - "text": "So in the LDA formulas you see very similar things. You see the first equation is essentially the same, and this is the probability of generating a word from multiple word distributions.", - "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" - }, - { - "time": "6:40", - "text": "And this formula is a sum over all the possibilities of generating a word.
Inside the sum is a product of the probability of choosing a topic multiplied by the probability of observing the word from that topic.", - "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" - }, - { - "time": "6:55", - "text": "So this is a very important formula, as I've stressed multiple times. And this is actually the core assumption in all the topic models. And you might see other topic models that are extensions of LDA or PLSA. And they all rely on this. So it's very important to understand this. And this gives us the probability of getting a word from a mixture model. Now, next, in the probability of a document, we see there is a PLSA component in the LDA formula, but the LDA formula will add an integral here. And that's to account for the fact that the pis are not fixed. So they are drawn from the Dirichlet distribution, and that's shown here. That's why we have to take an integral, to consider all the possible pis that we could possibly draw from this Dirichlet distribution. And similarly in the likelihood for the whole collection, we also see further components added, another integral here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" - }, - { - "time": "7:58", - "text": "Right? So basically in LDA we're just adding these integrals to account for the uncertainties, and we added of course the Dirichlet distributions to cover the choice of these parameters, pis and thetas.", - "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" - }, - { - "time": "8:12", - "text": "So this is the likelihood function for LDA. Now, next to this, let's talk about parameter estimation and inference. Now the parameters can be estimated using exactly the same approach, maximum likelihood estimation, for LDA. Now you might think about how many parameters there are in LDA versus PLSA.
You'll see there are fewer parameters in LDA because in this case the only parameters are the alphas and the betas. So we can use the maximum likelihood estimator to compute that. Of course, it's more complicated because the form of the likelihood function is more complicated. But what's also important is to notice that the parameters that we are interested in, namely the topics and the coverage, are no longer parameters in LDA. In this case we have to use Bayesian inference, or posterior inference, to compute them based on the parameters alpha and beta. Unfortunately, this computation is intractable. So we generally have to resort to approximate inference.", - "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" - }, - { - "time": "9:18", - "text": "And there are many methods available for that and I'm sure you will see them when you use different tool kits for LDA, or when you read papers about", - "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" - }, - { - "time": "9:30", - "text": "these different extensions of LDA. Now here we, of course, can't give an in-depth introduction to that, but just know that they are computed based on inference by using the parameters alphas and betas. But the math [INAUDIBLE], actually, in the end, is very similar to PLSA. And, especially when we use an algorithm called collapsed Gibbs sampling, then the algorithm looks very similar to the EM algorithm. So in the end, they are doing something very similar.", - "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" - }, - { - "time": "10:10", - "text": "So to summarize our discussion of properties of topic models, these models provide a general, principled way of mining and analyzing topics in text with many applications.
The basic task setup is to take text data as input and we're going to output the k topics. Each topic is characterized by a word distribution. And we're going to also output proportions of these topics covered in each document.", - "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" - }, - { - "time": "10:38", - "text": "And PLSA is the basic topic model, and in fact the most basic topic model. And this is often adequate for most applications. That's why we spent a lot of time explaining PLSA in detail.", - "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" - }, - { - "time": "10:53", - "text": "Now LDA improves over PLSA by imposing priors. This has led to theoretically more appealing models. However, in practice, LDA and PLSA tend to give similar performance, so in practice PLSA and LDA would work equally well for most of the tasks.", - "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" - }, - { - "time": "11:12", - "text": "Now here are some suggested readings if you want to know more about the topic. First is a nice review of probabilistic topic models.", - "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" - }, - { - "time": "11:20", - "text": "The second has a discussion about how to automatically label a topic model. Now I've shown you some distributions and they intuitively suggest a topic. But what exactly is a topic? Can we use phrases to label the topic? To make it easier to understand, and this paper is about the techniques for doing that. The third one is an empirical comparison of LDA and PLSA for various tasks. The conclusion is that they tend to perform similarly.
[MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/taJgZ/3-10-latent-dirichlet-allocation-lda-part-2" - } - ] - } - ] - }, - { - "Week 4": [ - { - "4-1-text-clustering-motivation": [ - { - "time": "0:00", - "text": "[SOUND] This lecture is the first one about the text clustering.", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "0:14", - "text": "In this lecture, we are going to talk about the text clustering.", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "0:18", - "text": "This is a very important technique for doing topic mining and analysis. In particular, in this lecture we're going to start with some basic questions about the clustering.", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "0:31", - "text": "And that is, what is text clustering and why we are interested in text clustering.", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "0:38", - "text": "In the following lectures, we are going to talk about how to do text clustering. 
How to evaluate the clustering results?", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "0:47", - "text": "So what is text clustering?", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "0:49", - "text": "Well, clustering actually is a very general technique for data mining as you might have learned in some other courses.", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "0:56", - "text": "The idea is to discover natural structures in the data.", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "1:01", - "text": "In other words, we want to group similar objects together. In our case, these objects are of course text objects. For example, they can be documents, terms, passages, sentences, or websites, and then the goal is to group similar text objects together.
So let's see an example. Well, here you don't really see text objects, but I just used some shapes to denote objects that can be grouped together.", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "1:33", - "text": "Now if I ask you, what are some natural structures or natural groups here? If you look at it, you might agree that we can group these objects based on shapes, or their locations on this two dimensional space.", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "1:53", - "text": "So we got the three clusters in this case.", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "1:56", - "text": "And there may not be so much disagreement about these three clusters, but it really depends on the perspective we take to look at the objects.", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "2:07", - "text": "Maybe some of you have also seen things in a different way, so we might get different clusters. And you'll see another example about this ambiguity more clearly. But the main point here is, the problem is actually not so well defined.", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "2:29", - "text": "And the problem lies in how to define similarity.
And what do you mean by similar objects?", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "2:38", - "text": "Now this problem has to be clearly defined in order to have a well defined clustering problem.", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "2:46", - "text": "And the problem is in general that any two objects can be similar depending on how you look at them. So for example, consider the two words car and horse.", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "3:00", - "text": "So are the two words similar? Well, it depends. If we look at the physical", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "3:11", - "text": "properties of car and horse, they are very different. But if you look at them functionally, a car and a horse can both be transportation tools. So in that sense, they may be similar. So as we can see, it really depends on our perspective to look at the objects.", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "3:32", - "text": "And so, we ought to make the clustering problem well defined.
A user must define the perspective for assessing similarity.", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "3:44", - "text": "And we call this perspective the clustering bias.", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "3:49", - "text": "And when you define a clustering problem, it's important to specify", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "3:55", - "text": "your perspective for similarity, or for defining the similarity that will be used to group similar objects. Because otherwise, the similarity is not well defined and one can have different ways to group objects. So let's look at the example here. You are seeing some objects, or some shapes, that are very similar to what you have seen on the first slide, but if I ask you to group these objects, again, you might", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "4:38", - "text": "feel there is more here than on the previous slide. For example, you might think, well, we can group by shapes, so that would give us clusters that look like this. However, you might also feel that, well, maybe the objects can be grouped based on their sizes. So that would give us a different way to cluster the data if we look at the size and look at the similarity in size.", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "5:12", - "text": "So as you can see clearly here, depending on the perspective, we'll get different clustering results. So that also clearly tells us that in order to evaluate the clustering result, we must use a perspective.
Without a perspective, it's very hard to define what is the best clustering result.", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "5:36", - "text": "So there are many examples of text clustering setups.", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "5:42", - "text": "And so for example, we can cluster documents in the whole text collection. So in this case, documents are the units to be clustered.", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "5:52", - "text": "We may be able to cluster terms. In this case, terms are the objects. And a cluster of terms can be used to define a concept, or theme, or topic. In fact, the topic models that you have seen in some previous lectures can give you a cluster of terms in some sense, if you take terms with high probabilities from a word distribution. Another example is just to cluster any text segments, for example, passages, sentences, or any segments that you can extract from larger text objects.", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "6:32", - "text": "For example, we might extract all the text segments about a topic, let's say, by using a topic model. Now once we've got those text objects then we can", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "6:45", - "text": "cluster the segments that we've got to discover interesting clusters that might also reveal the subtopics. So this is a case of combining text clustering with some other techniques.
And in general you will see that a lot of text mining techniques", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "7:05", - "text": "can be actually combined in a flexible way to achieve the goal of doing more sophisticated mining and analysis of text data.", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "7:16", - "text": "We can also cluster fairly large text objects, and by that, I just mean text objects that may contain a lot of documents. So for example, we might cluster websites. Each website is actually composed of multiple documents. Similarly, we can also cluster articles written by the same author, for example. So we can treat all the articles published by an author as one unit for clustering. In this way, we might group authors together based on whether their published papers are similar.", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "7:55", - "text": "Furthermore, text clustering can be done to generate a hierarchy. That's because we can in general cluster any text objects at different levels.", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "8:08", - "text": "So more generally, why is text clustering interesting? Well, it's because it's a very useful technique for text mining, particularly exploratory text analysis.", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "8:20", - "text": "And so a typical scenario is that you are given a lot of text data, let's say all the email messages from customers in some time period, all the literature articles, etc.
And then you hope to get a sense about the overall content of the collection. So for example, you might be interested in getting a sense about major topics, or what are some typical or representative documents in the collection. And clustering can help us achieve this goal. We sometimes also want to link similar text objects together. And these objects might be duplicated content, for example. And in that case, such a technique can help us remove redundancy and remove duplicate documents.", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "9:10", - "text": "Sometimes they are about the same topic and by linking them together we can have more complete coverage of a topic.", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "9:19", - "text": "We may also use text clustering to create a structure on the text data, and sometimes we can create a hierarchy of structures, and this is very useful for many problems.", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "9:31", - "text": "We may also use text clustering to induce additional features to represent text data. When we cluster documents together, we can treat each cluster as a feature. And then we can say, when a document is in this cluster, the feature value would be one. And if a document is not in this cluster, then the feature value is zero. And this helps provide additional discrimination that might be used for text classification as we will discuss later.
One is to cluster search results. For example, a [INAUDIBLE] search engine can cluster search results so that the user can see the overall structure of the results returned for a query. And when the query is ambiguous, this is particularly useful because the clusters likely represent different senses of the ambiguous word.", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - }, - { - "time": "10:28", - "text": "Another application is to understand the major complaints from customers based on their emails, right. So in this case, we can cluster email messages and then find the major clusters. From there, we can understand what the major complaints are. [MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/FcCdt/4-1-text-clustering-motivation" - } - ] - }, - { - "4-2-text-clustering-generative-probabilistic-models-part-1": [ - { - "time": "0:00", - "text": "[SOUND] This lecture is about generative probabilistic models for text clustering.", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "0:13", - "text": "In this lecture, we're going to continue discussing text clustering, and we're going to introduce generative probabilistic models as a way to do text clustering. So this is the overall plan for covering text clustering. In the previous lecture, we have talked about what is text clustering and why text clustering is interesting. In this lecture, we're going to talk about how to do text clustering. In general, as you see on this slide, there are two kinds of approaches. One is generative probabilistic models, which is the topic of this lecture.
And later, we'll also discuss similarity-based approaches.", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "0:53", - "text": "So to talk about generative models for text clustering, it would be useful to revisit the topic mining problem using topic models, because the two problems are very similar. This is a slide that you have seen earlier in the lecture on topic models. Here we show that we have as input a text collection C, a number of topics k, and a vocabulary V. And we hope to generate as output two things. One is a set of topics denoted by Theta i's, each is a word distribution, and the other is pi i j. These are the probabilities that each document covers each topic. So this is the topic coverage and it's also visualized here on this slide. You can see that this is what we can get by using a topic model. Now, the main difference between this and the text clustering problem is that here, a document is assumed to possibly cover multiple topics. And indeed, in general, a document will be covering more than one topic with nonzero probabilities. In text clustering, however, we only allow a document to cover one topic, if we assume one topic is a cluster.", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "2:24", - "text": "So that means if we change the problem definition just slightly by assuming that each document can only be generated by using precisely one topic.", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "2:37", - "text": "Then we'll have a definition of the clustering problem as you see here. So here the output is changed so that we no longer have the detailed coverage distributions pi i j. But instead, we're going to have cluster assignment decisions, Ci.
And Ci is the decision for document i. And C sub i is going to take a value from 1 through k to indicate one of the k clusters.", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "3:09", - "text": "And it basically tells us which cluster d i is in. As illustrated here, we no longer have multiple topics covered in each document. It is precisely one topic. Although which topic is still uncertain. There is also a connection with", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "3:29", - "text": "the problem of mining one topic that we discussed earlier. So here again, it's a slide that you have seen before, and here we hope to estimate a topic, or word distribution, based on precisely one document. And that's when we assume that this document covers precisely one topic.", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "3:52", - "text": "But we can also consider some variations of the problem. For example, we can consider that there are N documents, each covering a different topic, so that's N documents and N topics. Of course, in this case, these documents are independent, and these topics are also independent. But we can further allow these documents to share topics, and we can also assume there are fewer topics than the number of documents, so these documents must share some topics.
And if we have N documents that share k topics, then we'll again have precisely the document clustering problem.", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "4:34", - "text": "So because of these connections, naturally we can think about how to use a probabilistic generative model to solve the problem of text clustering.", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "4:43", - "text": "So the question now is what generative model can be used to do clustering?", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "4:49", - "text": "As in all cases of designing a generative model, we hope the generative model will capture the output that we hope to generate, or the structure that we hope to model. So in this case, this is a clustering structure: the topics, and each document covering one topic. And we hope to embed such preferences in the generative model. But if you think about the main difference between this problem and the topic model that we talked about earlier, you will see the main requirement is: how can we force every document to be generated from precisely one topic, instead of k topics, as in the topic model?", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "5:35", - "text": "So let's revisit the topic model again in more detail. So this is a detailed view of a two component mixture model. When we have k components, it looks similar.
So here we see that when we generate a document,", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "5:53", - "text": "we generate each word independently.", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "5:57", - "text": "And when we generate each word, we first make a choice between these distributions. We decide to use one of them with some probability. So p of theta 1 is the probability of choosing the distribution on the top. Now we first make this decision regarding which distribution should be used to generate the word. And then we're going to use this distribution to sample a word. Now note that in such a generative model, the decision on which distribution to use for each word is independent. So that means, for example, 'the' here could have been generated from the second distribution, theta 2, whereas 'text' is more likely generated from the first one on the top.", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "6:49", - "text": "That means the words in the document could have been generated, in general, from multiple distributions.", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "6:58", - "text": "Now this is not what we want, as we said, for text clustering, for document clustering, where we hope this document will be generated from precisely one topic.", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "7:09", - "text": "So now that means we need to modify the model. But how? Well, let's first think about why this model cannot be used for clustering.
And as I just said, the reason is because it has allowed multiple topics to contribute words to the document.", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "7:28", - "text": "And that causes confusion because we're not going to know which cluster this document is from. And, more importantly, it's violating our assumption about the partitioning of documents into the clusters. If we really have one topic corresponding to one cluster of documents, then we would have each document generated from precisely one topic. That means all the words in the document must have been generated from precisely one distribution. And this is not true for such a topic model that we're seeing here. And that's why this cannot be used for clustering: because it does not ensure that only one distribution has been used to generate all the words in one document.", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "8:15", - "text": "So if you realize this problem, then we can naturally design an alternative mixture model for doing clustering. So this is what you're seeing here. And we again have to make a decision regarding which distribution to use to generate this document, because the document could potentially be generated from any of the k word distributions that we have. But this time, once we have made a decision to choose one of the topics, we're going to stay with this distribution to generate all the words in the document.", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "8:49", - "text": "And that means, once we have made a choice of the distribution in generating the first word, we're going to stay with this distribution in generating all of the other words in the document.
So, in other words, we make the decision once for this document, and then stay with it to generate all the words. Similarly, if we had chosen the second distribution, theta sub 2 here, we would stay with this one and then generate the entire document d. Now, if you compare this picture with the previous one, you will see the decision of using a particular distribution is made just once for this document, in the case of document clustering. But in the case of the topic model, we have to make as many decisions as the number of words in the document. Because for each word, we can make a potentially different decision. And that's the key difference between the two models.", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "9:58", - "text": "But this is obviously also a mixture model, so we can just group them together as one box to show that this is the model that will give us the probability of the document. Now, inside of this model, there is also this switch of choosing a different distribution. And we don't observe that, so that's a mixture model.
And of course a main problem in document clustering is to infer which distribution has been used to generate a document, and that would allow us to recover the cluster identity of a document.", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "10:37", - "text": "So it will be useful to think about the difference from the topic model, as I have also mentioned multiple times.", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "10:46", - "text": "And there are mainly two differences, one is the choice of", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "10:56", - "text": "using that particular distribution is made just once for document clustering. Whereas in the topic model, it's made multiple times for different words. The second is that the word distribution, here, is going to be used to generate all the words for a document.", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "11:19", - "text": "But, in the case of the topic model, one distribution doesn't have to generate all the words in the document. Multiple distributions could have been used to generate the words in the document.", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "11:34", - "text": "Let's also think about a special case, when the probability of choosing one particular distribution is equal to 1. That just means we have no uncertainty. We just stick with one particular distribution.
Now in that case, clearly, we will see this is no longer a mixture model, because there's no uncertainty here and we can just use precisely one of the distributions for generating a document. And we're going back to the case of estimating one word distribution based on one document.", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "12:12", - "text": "So that's a connection that we discussed earlier. Now you can see it more clearly. So as in all cases of using a generative model to solve a problem, we first look at data and then think about how to design the model. But once we design the model, the next step is to write down the likelihood function. And after that we're going to look at how to estimate the parameters.", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "12:36", - "text": "So in this case, what's the likelihood function? It's going to be very similar to what you have seen before in topic models, but it will be also different.", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "12:45", - "text": "Now if you still recall what the likelihood function looks like, then you will realize that in general, the probability of observing a data point from a mixture model is going to be a sum over all the possibilities of generating the data.", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "13:00", - "text": "In this case, it's going to be a sum over these k topics, because every one of them could have been used to generate the document. And then inside the sum, you may still recall what the formula looks like: it's going to be the product of two probabilities. 
One is the probability of choosing the distribution, the other is the probability of observing a particular data point from that distribution.", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "13:27", - "text": "So if you map this kind of formula to our problem here, you will see the probability of observing a document d", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "13:37", - "text": "is basically a sum, in this case, of two different distributions, because we have a very simplified situation of just two clusters. And so in this case, you can see it's a sum of two cases. In each case, it's indeed the probability of choosing the distribution, either theta 1 or theta 2. And then, that probability is multiplied by the probability of observing this document from this particular distribution.", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "14:16", - "text": "And if you further expand this probability of observing the whole document, we see that it's a product of observing each word X sub i. And here we made the assumption that each word is generated independently, so the probability of the whole document is just a product of the probability of each word in the document.", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "14:40", - "text": "So this form should be very similar to the topic model. But it's also useful to think about the difference, and for that purpose, I am also copying the probabilities of these two components of the topic model here. 
So here you can see the formulas look very similar; in many ways, they are similar.", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "15:02", - "text": "But there is also some difference.", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "15:06", - "text": "And in particular, the difference is on the top. You see for the mixture model for document clustering, we first take a product, and then take a sum.", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "15:16", - "text": "And that's corresponding to our assumption of first making a choice of one distribution, and then staying with that distribution to generate all the words. And that's why we have the product inside the sum.", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - }, - { - "time": "15:30", - "text": "The sum corresponds to the choice. Now, in topic model, we see that the sum is actually inside the product. And that's because we generate each word independently. And that's why we have the product outside, but when we generate each word we have to make a decision regarding which distribution we use, so we have a sum there for each word. But in general, these are all mixture models, and we can estimate these models by using the EM algorithm, as we will discuss more later. 
[MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/gJTFA/4-2-text-clustering-generative-probabilistic-models-part-1" - } - ] - }, - { - "4-3-text-clustering-generative-probabilistic-models-part-2": [ - { - "time": "0:00", - "text": "[SOUND] This lecture is a continuing discussion of Generative Probabilistic Models for text clustering.", - "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" - }, - { - "time": "0:13", - "text": "In this lecture, we are going to continue talking about text clustering, particularly the Generative Probabilistic Models.", - "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" - }, - { - "time": "0:23", - "text": "So this is a slide that you have seen earlier, where we have written down the likelihood function for a document with two distributions, that is, a two-component mixture model for document clustering.", - "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" - }, - { - "time": "0:39", - "text": "Now in this lecture, we're going to generalize this to include k clusters. Now if you look at the formula and think about the question, how to generalize it, you'll realize that all we need is to add more terms, like what you have seen here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" - }, - { - "time": "0:57", - "text": "So you can just add more thetas and the probabilities of thetas and the probabilities of generating d from those thetas. 
So this is precisely what we are going to use, and this is the general representation of the mixture model for document clustering.", - "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" - }, - { - "time": "1:19", - "text": "So as in most cases, we would follow these steps in using a generative model. First, think about our data. And so in this case our data is a collection of documents, n documents denoted by d sub i, and then we think about the model, the design of the model. In this case, we design a mixture of k unigram language models. It's a little bit different from the topic model, but we have similar parameters. We have a set of theta i's that denote the word distributions corresponding to the k unigram language models. We have p of theta i as the probability of selecting each of the k distributions to generate a document. Now note that although our goal is to find the clusters, we actually have used a more general notion of a probability of each cluster, and this, as you will see later, will allow us to assign a document to the cluster that has the highest probability of being able to generate the document.", - "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" - }, - { - "time": "2:31", - "text": "So as a result, we can also recover some other interesting", - "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" - }, - { - "time": "2:36", - "text": "properties, as you will see later.", - "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" - }, - { - "time": "2:42", - "text": "So the model basically would make the following assumption about the generation of a document. 
We first choose a theta i according to the probability of theta i, and then generate all the words in the document using this distribution. Note that it's important that we use this distribution to generate all the words in the document. This is very different from the topic model. So the likelihood function would be like what you are seeing here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" - }, - { - "time": "3:10", - "text": "So you can take a look at the formula here; we have used a different notation in the second line of this equation. You are going to see now the notation has been changed to use unique words in the vocabulary in the product, instead of particular positions in the document. So from X sub i to w is a change of notation, and this change allows us to show the estimation formulas more easily. And you have seen this change also in the topic model presentation, but it's basically still just a product of the probabilities of all the words.", - "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" - }, - { - "time": "4:10", - "text": "And so with the likelihood function, now we can talk about how to do parameter estimation. Here we can simply use the maximum likelihood estimator. So that's just a standard way of doing things. So all should be familiar to you now. It's just a different model. So after we have estimated the parameters, how can we then allocate clusters to the documents? Well, let's take a look at this situation more closely. 
So we just repeat the parameters here for this mixture model.", - "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" - }, - { - "time": "4:43", - "text": "Now if you think about what we can get by estimating such a model, we can actually get more information than what we need for doing clustering, right? So theta i, for example, represents the content of cluster i; this is actually a by-product, it can help us summarize what the cluster is about. If you look at the top terms in this cluster, or in this word distribution, they will tell us what the cluster is about.", - "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" - }, - { - "time": "5:11", - "text": "p of theta i can be interpreted as indicating the size of a cluster, because it tells us how likely the cluster would be used to generate a document. The more likely a cluster is used to generate a document, we can assume the larger the cluster size is.", - "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" - }, - { - "time": "5:30", - "text": "Note that unlike in PLSA, this probability of theta i is not dependent on d.", - "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" - }, - { - "time": "5:37", - "text": "Now you may recall that the topic choice in each document actually depends on d. That means each document can have a potentially different choice of topics, but here we have a generic choice probability for all the documents. But of course, even for a particular document, we still have to infer which topic is more likely to generate the document. 
So in that sense, we can still have a document-dependent probability of clusters.", - "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" - }, - { - "time": "6:10", - "text": "So now let's look at the key problem of assigning documents to clusters, or assigning clusters to documents.", - "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" - }, - { - "time": "6:17", - "text": "So that's to compute c sub d here, and this will take one of the values in the range of one through k, to indicate which cluster should be assigned to d.", - "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" - }, - { - "time": "6:28", - "text": "Now first you might think about a way to use the likelihood, and that is to assign d to the cluster corresponding to the topic theta i that most likely has been used to generate d.", - "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" - }, - { - "time": "6:42", - "text": "So that means we're going to choose one of those distributions that gives d the highest probability. In other words, we see which distribution has the content that matches our d at the [INAUDIBLE]. Intuitively that makes sense; however, this approach does not consider the size of clusters, which is also available to us, and so a better way is to use the likelihood together with the prior, in this case the prior is p of theta i. 
And together, that is, we're going to use the Bayes formula to compute the posterior probability of theta, given d.", - "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" - }, - { - "time": "7:25", - "text": "And if we choose theta based on this posterior probability, we would have the following formula that you see here on the bottom of this slide. And in this case, we're going to choose the theta that has a large p of theta i, that means a large cluster, and also a high probability of generating d. So we're going to favor a cluster that's large and also consistent with the document. And that intuitively makes sense, because the chance of a document being in a large cluster is generally higher than in a small cluster.", - "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" - }, - { - "time": "8:07", - "text": "So this means once we can estimate the parameters of the model, then we can easily solve the problem of document clustering. So next, we'll have to discuss how to actually compute the estimate of the model. 
[MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/V93Td/4-3-text-clustering-generative-probabilistic-models-part-2" - } - ] - }, - { - "4-4-text-clustering-generative-probabilistic-models-part-3": [ - { - "time": "0:00", - "text": "[SOUND] This lecture is a continuing discussion of generative probabilistic models for text clustering.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" - }, - { - "time": "0:14", - "text": "In this lecture, we're going to finish the discussion of generative probabilistic models for text clustering.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" - }, - { - "time": "0:21", - "text": "So this is a slide that you have seen before, and here we show how we define the mixture model for text clustering and what the likelihood function looks like. And we can also compute the maximum likelihood estimate to estimate the parameters. In this lecture, we're going to talk more about how exactly we're going to compute the maximum likelihood estimate. As in most cases, the EM algorithm can be used to solve this problem for mixture models. So here's the detail of this EM algorithm for document clustering. Now, if you have understood how the EM algorithm works for topic models like PLSA, then here it would be very similar. And we just need to adapt a little bit to this new mixture model. So as you may recall, the EM algorithm starts with initialization of all the parameters. So this is the same as what happened before for topic models.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" - }, - { - "time": "1:28", - "text": "And then we're going to repeat until the likelihood converges, and in each step we'll do an E-step and an M-step. 
In the E-step, we're going to infer which distribution has been used to generate each document. So I have to introduce a hidden variable Z sub d for each document, and this variable could take a value from the range of 1 through k, representing the k different distributions.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" - }, - { - "time": "1:59", - "text": "More specifically, basically, we're going to apply Bayes rule to infer which distribution is more likely to have generated this document, or compute the posterior probability of the distribution given the document.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" - }, - { - "time": "2:17", - "text": "And we know it's proportional to the probability of selecting this distribution, p of Z sub d equals i, and the probability of generating this whole document from the distribution, which is the product of the probabilities of the words of this document, as you see here. Now, as in all cases, it's useful to remember", - "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" - }, - { - "time": "2:45", - "text": "the normalizer, or the constraint on this probability. So in this case, we know the constraint on this probability in the E-step is that all the probabilities of Z equals i must sum to 1. Because the document must have been generated from precisely one of these k topics. So the probability of being generated from each of them should sum to 1. And if you know this constraint, then you can easily compute this distribution as long as you know what it is proportional to. 
So once you compute this product that you see here, then you simply normalize", - "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" - }, - { - "time": "3:31", - "text": "these probabilities, to make them sum to 1 over all the topics. So that's the E-step. After the E-step, we will know which distribution is more likely to have generated this document d, and which is unlikely.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" - }, - { - "time": "3:45", - "text": "And then in the M-step we're going to re-estimate all the parameters based on the inferred z values, or the inferred knowledge about which distribution has been used to generate which document. So the re-estimation involves two kinds of parameters. One is p of theta, and this is the probability of selecting a particular distribution. Before we observe anything, we don't have any knowledge about which cluster is more likely. But after we have observed these documents, then we can collect the evidence to infer which cluster is more likely. And so this is proportional to the sum of the probabilities of Z sub d j equals i.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" - }, - { - "time": "4:34", - "text": "And so this gives us all the evidence about using topic i, theta i, to generate a document. 
Pool them together and again, we normalize them into probabilities.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" - }, - { - "time": "4:50", - "text": "So this is for p of theta sub i.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" - }, - { - "time": "4:54", - "text": "Now the other kind of parameters are the probabilities of words in each distribution, in each cluster. And this is very similar to the case of PLSA, and here we just collect the counts of words in documents that are inferred to have been generated from a particular topic theta i here. This would allow us to then estimate how many words have actually been generated from theta i. And then we'll normalize these counts into probabilities so that the probabilities of all the words would sum to one. Note that it's very important to understand these constraints, as they give precisely the normalizers in all these formulas. And it's also important to know what the distribution is over.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" - }, - { - "time": "5:54", - "text": "For example, the probability of theta is over all the k topics; that's why these k probabilities will sum to 1. Whereas the probability of a word given theta is a probability distribution over all the words. So there are many probabilities and they have to sum to 1. So now, let's take a look at a simple example of two clusters. With two clusters, I've assumed some initial values for the two distributions. And let's assume we randomly initialize the probability of selecting each cluster as 0.5, so equally likely. And then let's consider one document that you have seen here. There are two occurrences of text and two occurrences of mining. 
So there are four words altogether, and medical and health did not occur in this document. So let's think about the hidden variable.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" - }, - { - "time": "6:50", - "text": "Now for each document, we must use a hidden variable. And before, in PLSA, we used one hidden variable for each word, because that's the output from the mixture model. So in our case the output from the mixture model, or the observation from the mixture model, is a document, not a word. So now we have one hidden variable attached to the document. Now that hidden variable must tell us which distribution has been used to generate the document. So it's going to take two values, one and two, to indicate the two topics.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" - }, - { - "time": "7:25", - "text": "So now how do we infer which distribution has been used to generate d? Well, we just use Bayes rule, so it looks like this. In order for the first topic theta 1 to generate the document, two things must happen. First, theta sub 1 must have been selected. So that's given by p of theta 1. Second, it must have also generated the four words in the document. Namely, two occurrences of text and two occurrences of mining. And that's why you see the numerator has the product of the probability of selecting theta 1 and the probability of generating the document from theta 1. So the denominator is just the sum of the two possibilities of generating this document. And you can plug in the numerical values to verify that indeed in this case, the document is more likely to be generated from theta 1, much more likely than from theta 2. So once we have this probability, we can easily compute the probability of Z equals 2, given this document. How? Well, we can use the constraint. That's going to be 1 minus 100 over 101. 
So now it's important that you note that in such a computation there is a potential problem of underflow. And that is because if you look at the original numerator and denominator, it involves the computation of a product of many small probabilities. Imagine if a document has many words; it's going to be a very small value here that can cause the problem of underflow. So to solve the problem, we can use a normalizer. So here you see that we take an average of these two word distributions to compute an average distribution, called theta bar.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" - }, - { - "time": "9:24", - "text": "And this average distribution would be comparable to each of these distributions in terms of the quantities, or the magnitude. So we can then divide the numerator and the denominator both by this normalizer. So basically this normalizes the probability of generating this document by using this average word distribution. So you can see the normalizer is here. And since we have used exactly the same normalizer for the numerator and the denominator, the whole value of this expression is not changed, but by doing this normalization, you can see we can make the numerators and the denominators more manageable, in that the overall value is not going to be very small for each. And thus we can avoid the underflow problem.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" - }, - { - "time": "10:24", - "text": "Sometimes we also use the logarithm of the product to convert it into a sum of log probabilities. This can help preserve precision as well, but in this case we cannot use the logarithm to solve the problem, because there is a sum in the denominator. But this kind of normalizer can be effective for solving this problem. 
So it's a technique that's sometimes useful in other situations as well. Now let's look at the M-step. So from the E-step we can see our estimate of which distribution is more likely to have generated each document d. And you can see d1 is more likely generated from the first topic, whereas d2 is more likely from the second topic, etc. Now, let's think about what we need to compute in the M-step. Well, basically we need to re-estimate all the parameters. First, look at p of theta 1 and p of theta 2. How do we estimate them? Intuitively you can just pool together these z probabilities from the E-step. So if all of these documents say, well, they're more likely from theta 1, then we intuitively would give a higher probability to theta 1. In this case, we can just take an average of these probabilities that you see here, and we obtain 0.6 for theta 1. So theta 1 is more likely than theta 2, and you can see the probability of theta 2 would be naturally 0.4. What about the word probabilities? Well, we do the same, and the intuition is the same. So we're going to see, in order to estimate the probabilities of words in theta 1, we're going to look at which documents have been generated from theta 1. And we're going to pool together the words in those documents and normalize them. So this is basically what I just said.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" - }, - { - "time": "12:20", - "text": "More specifically, we're going to, for example, use all the counts of text in these documents for estimating the probability of text given theta 1. But we're not going to use their raw counts or total counts. Instead, we discount them by the probability that each document has been generated from theta 1. So this gives us some fractional counts. And then these counts would be normalized in order to get the probability. Now, how do we normalize them? 
Well, the probabilities of these words must sum to 1. So to summarize our discussion of generative models for clustering: we showed that a slight variation of the topic model can be used for clustering documents. And this also shows the power of generative models in general. By changing the generation assumption and changing the model slightly, we can achieve different goals, and we can capture different patterns in types of data. So in this case, each cluster is represented by a unigram language model, a word distribution, and that is similar to the topic model. So here you can see the word distribution actually gives us a term cluster as a by-product. A document is generated by first choosing a unigram language model, and then generating all the words in the document using just that single language model. And this is, again, very different from the topic model, where we can generate the words in the document by using multiple unigram language models.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" - }, - { - "time": "13:56", - "text": "And then the estimated model parameters give both a topic characterization of each cluster and a probabilistic assignment of each document into a cluster.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" - }, - { - "time": "14:07", - "text": "And this probabilistic assignment sometimes is useful for some applications. But if we want to achieve hard clusters, mainly to", - "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" - }, - { - "time": "14:16", - "text": "partition documents into disjoint clusters. Then we can just force a document into the cluster corresponding to the word distribution that's most likely to have generated the document. 
We've also shown that the EM algorithm can be used to compute the maximum likelihood estimate. And in this case, we need to use a special normalization technique to avoid underflow. [MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/8Ki0H/4-4-text-clustering-generative-probabilistic-models-part-3" - } - ] - }, - { - "4-5-text-clustering-similarity-based-approaches": [ - { - "time": "0:00", - "text": "[MUSIC] This lecture is about the similarity-based approaches to text clustering.", - "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" - }, - { - "time": "0:13", - "text": "In this lecture we're going to continue the discussion of how to do text clustering.", - "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" - }, - { - "time": "0:18", - "text": "In particular, we're going to cover a different kind of approach than generative models, and that is similarity-based approaches. So the general idea of similarity-based clustering is to explicitly specify a similarity function to measure the similarity between two text objects. Now this is in contrast with a generative model, where we implicitly define the clustering bias by using a particular objective function like a [INAUDIBLE] function.", - "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" - }, - { - "time": "0:52", - "text": "The whole process is driven by optimizing the [INAUDIBLE], but here we explicitly provide a view of what we think is similar. And this is often very useful, because it allows us to inject any particular view of similarity into the clustering process. So once we have a similarity function, we can then aim at optimally partitioning the data into clusters or into different groups. 
And try to maximize the intra-group similarity and minimize the inter-group similarity. That is, to ensure that the objects that are put into the same group are similar, but the objects that are put into different groups are not similar. And these are the general goals of clustering, and there is often a trade-off between achieving both goals. Now there are many different methods for doing similarity-based clustering, and in general I think we can distinguish two strategies at a high level. One is to progressively construct a hierarchy of clusters, and this often leads to hierarchical clustering. And we can further distinguish two ways to construct a hierarchy, depending on whether we start with the whole collection and divide it, or start with individual objects and gradually group them together. One is bottom-up, which can be called agglomerative, where we gradually group similar objects into larger and larger clusters until we group everything together. The other is top-down, or divisive; in this case we gradually partition the whole data set into smaller and smaller clusters. The other general strategy is to start with an initial tentative clustering and then iteratively improve it. And this often leads to a flat clustering; one example is k-Means. So as I just said, there are many different clustering methods available, and a full coverage of all the clustering methods would be beyond the scope of this course. But here we are going to talk about two representative methods in some detail", - "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" - }, - { - "time": "3:14", - "text": "one is Hierarchical Agglomerative Clustering or HAC, the other is k-Means. So first let's look at agglomerative hierarchical clustering; in this case, we're given a similarity function to measure the similarity between two objects.
And then we can gradually group similar objects together in a bottom-up fashion to form larger and larger groups. And they will form a hierarchy, and then we can stop when some stopping criterion is met. It could be either that some number of clusters has been reached or that a threshold for similarity has been reached.", - "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" - }, - { - "time": "3:52", - "text": "There are different variations here, and they mainly differ in the ways to compute a group similarity based on the individual object similarities. So let's illustrate how we can induce a structure based on just similarity. So we start with all the text objects, and we can then measure the similarity between them, of course based on the provided similarity function. And then we can see which pair has the highest similarity, and then just group them together, and then we're going to see which pair is the next one to group.", - "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" - }, - { - "time": "4:30", - "text": "Maybe these two now have the highest similarity, and then we're going to gradually group them together. And then every time we're going to pick the pair with the highest similarity to group.", - "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" - }, - { - "time": "4:45", - "text": "This will give us a binary tree eventually to group everything together.", - "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" - }, - { - "time": "4:50", - "text": "Now, depending on our applications, we can use the whole hierarchy as a structure for browsing, for example. Or we can choose a cutoff, let's say cut here to get four clusters, or we can use a threshold to cut.
Or we can cut at this high level to get just two clusters. So this is the general idea; now if you think about how to implement this algorithm, you'll realize that we have everything specified except for how to compute group similarity.", - "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" - }, - { - "time": "5:24", - "text": "We are only given the similarity function of two objects, but as we group groups together, we also need to assess the similarity between two groups. There are different ways to do that, and there are three popular methods: single-link, complete-link, and average-link. Given two groups, the single-link algorithm is going to define the group similarity as the similarity of the closest pair across the two groups. Complete-link defines the similarity of the two groups as the similarity of the farthest pair. Average-link defines the similarity as the average similarity of all the pairs across the two groups. So it's much easier to understand the methods by illustrating them, so here are two groups, g1 and g2, with some objects in each group.
And we know how to compute the similarity between two objects, but the question now is, how can we compute the similarity between the two groups?", - "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" - }, - { - "time": "6:29", - "text": "And then we can in general base this on the similarities of the objects in the two groups.", - "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" - }, - { - "time": "6:35", - "text": "So, in terms of single-link, we're just looking at the closest pair, so in this case these two closest objects will define the similarity of the two groups.", - "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" - }, - { - "time": "6:47", - "text": "As long as they are very close, we're going to say the two groups are very", - "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" - }, - { - "time": "6:51", - "text": "close, so it is an optimistic view of similarity.", - "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" - }, - { - "time": "6:57", - "text": "The complete-link, on the other hand, is in some sense pessimistic, taking the similarity of the farthest pair as the similarity of the two groups. So we are going to make sure that if the two groups have a high similarity, then every pair of objects across the two groups will be ensured to have high similarity.", - "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" - }, - { - "time": "7:29", - "text": "Now average-link is in between, so it takes the average of all these pairs. Now these different ways of computing group similarities will lead to different clustering algorithms.
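The three group-similarity definitions can be sketched directly in code. The one-dimensional points and the particular similarity function below are made-up for illustration; any object similarity function would do:

```python
def sim(x, y):
    # Toy similarity between two 1-D points: larger when they are closer
    # (an illustrative choice, not a prescribed text similarity measure).
    return 1.0 / (1.0 + abs(x - y))

def single_link(g1, g2):
    # Similarity of the closest pair across the two groups (optimistic).
    return max(sim(a, b) for a in g1 for b in g2)

def complete_link(g1, g2):
    # Similarity of the farthest pair across the two groups (pessimistic).
    return min(sim(a, b) for a in g1 for b in g2)

def average_link(g1, g2):
    # Average similarity over all cross-group pairs (in between).
    sims = [sim(a, b) for a in g1 for b in g2]
    return sum(sims) / len(sims)

g1, g2 = [0.0, 1.0], [2.0, 5.0]
# By construction: single_link >= average_link >= complete_link.
print(single_link(g1, g2), average_link(g1, g2), complete_link(g1, g2))
```

Swapping one of these three functions in as the group-similarity step is the only change needed to turn the same greedy merging loop into single-link, complete-link, or average-link HAC.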
And they would in general give different results, so it's useful to take a look at their differences and to make a comparison.", - "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" - }, - { - "time": "7:53", - "text": "First, single-link can be expected to generate loose clusters; the reason is that as long as two objects across the two groups are very similar, they will bring the two groups together.", - "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" - }, - { - "time": "8:09", - "text": "If you think about this as similar to having parties with people, then it just means two groups of people would be partying together. As long as in each group there is a person that", - "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" - }, - { - "time": "8:27", - "text": "is well connected with the other group. So the two leaders of the two groups can have a good relationship with each other, and then they will bring together the two groups. In this case, the cluster is loose, because there's no guarantee that other members of the two groups are actually very close to each other. Sometimes they may be very far away. Now in this case it's also based on individual decisions, so it could be sensitive to outliers. The complete-link is in the opposite situation, where we can expect the clusters to be tight. And it's also based on individual decisions, so it can be sensitive to outliers. Again, to continue the analogy of having a party of people, complete-link would mean that when two groups come together, they want to ensure that even", - "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" - }, - { - "time": "9:21", - "text": "the people that are unlikely to talk to each other would be comfortable.
talking to each other, so it ensures the whole cluster is coherent. The average-link clusters are in between, and it's a group decision, so it's", - "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" - }, - { - "time": "9:37", - "text": "going to be insensitive to outliers. Now in practice, which one is the best? Well, this would depend on the application; sometimes you need loose clusters and want to aggressively cluster objects together, and then maybe single-link is good. But other times you might need tight clusters, and complete-link might be better. But in general, you have to empirically evaluate these methods for your application to know which one is better.", - "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" - }, - { - "time": "10:07", - "text": "Now, next let's look at another example of a method for similarity-based clustering, in this case k-Means clustering. We will represent each text object as a term vector, and then assume a similarity function defined on two objects. Now we're going to start with some tentative clustering result by just selecting k randomly selected vectors as centroids of k clusters, and treat them as centers, as if they each represent a cluster. So this gives us the initial tentative clustering; then we're going to iteratively improve it. And the process goes like this: once we have these centroids decided, we're going to assign a vector to the cluster whose centroid is closest to the current vector. So basically we're going to measure the distance between this vector and each of the centroids, and see which one is the closest to this one, and then just put this object into that cluster. This is to have a tentative assignment of objects into clusters.
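The iterative procedure (assign each vector to its nearest centroid, then re-compute the centroids) can be sketched as follows. This toy version uses 2-D points, squared Euclidean distance, and a fixed first-k initialization instead of random selection; all of these are simplifying assumptions, and a real text application would use term vectors:

```python
def dist2(a, b):
    # Squared Euclidean distance between two equal-length tuples.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(vectors, k, iters=20):
    # Simplified initialization: take the first k vectors as centroids
    # (the lecture's version picks k random vectors instead).
    centroids = list(vectors[:k])
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        # Assignment step: hard-assign each vector to its nearest centroid.
        for v in vectors:
            i = min(range(k), key=lambda c: dist2(v, centroids[c]))
            clusters[i].append(v)
        # Update step: re-compute each centroid as the mean of its members.
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = tuple(sum(xs) / len(members)
                                     for xs in zip(*members))
    return centroids, clusters

data = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
centroids, clusters = kmeans(data, 2)
print(clusters)  # the two nearby pairs end up in separate clusters
```

A production version would stop when the within-cluster sum of squares stops decreasing rather than after a fixed number of iterations.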
And we're going to partition all the objects into k clusters based on our tentative clustering and centroids.", - "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" - }, - { - "time": "11:28", - "text": "Then we re-compute the centroids based on the objects allocated to each cluster. This is to adjust the centroids, and then we can repeat this process until the similarity-based objective function, in this case the within-cluster sum of squares, converges. Theoretically, we can show that this process is actually going to minimize the within-cluster sum of squares, a well-defined objective function, given k clusters. It can also be shown that this process will converge to a local minimum. If you think about this process for a moment, it might remind you of the EM algorithm for the mixture model.", - "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" - }, - { - "time": "12:13", - "text": "Indeed this algorithm is very similar to the EM algorithm for the mixture model for clustering. More specifically, we also initialize the parameters in the EM algorithm, so the random initialization is similar.", - "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" - }, - { - "time": "12:34", - "text": "And then in the EM algorithm, you may recall, we're going to repeat the E-step and the M-step to improve our parameter estimation. In this case, we're going to improve the clustering result iteratively by also doing two steps. And in fact the two steps are very similar to the EM algorithm, in that when we allocate the vector into one of the clusters based on our tentative clustering, it's very similar to inferring the distribution that has been used to generate the document in the mixture model. So it is essentially similar to the E-step. So what's the difference? Well, the difference is here.
We don't make a probabilistic allocation as in the case of the E-step, but rather we make a hard choice. We're going to make a call: if this vector is closest to cluster two, then we're going to say it is in cluster two. So there's no uncertainty; we're not going to say this vector probably belongs to cluster two. And so we're not going to have a probability; we're just going to put one object into precisely one cluster. In the E-step however, we do a probabilistic allocation, so we split the counts. And we're not going to say exactly which distribution has been used to generate a data point. Now next, we're going to adjust the centroids, and this is very similar to the M-step where we re-estimate the parameters. That's when we'll have a better estimate of the parameters; so here we'll have a better clustering result by adjusting the centroids. And note that a centroid is based on the average of the vectors in the cluster. So this is also similar to the M-step, where we pull together counts and then normalize them. The difference of course is also because of the difference in the E-step: we're not going to consider probabilities when we count the points. In the case of k-Means, we only count the objects allocated to this cluster, and this is only a subset of the data points; but in the EM algorithm, we in principle consider all the data points based on probabilistic allocations.", - "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" - }, - { - "time": "14:56", - "text": "But in nature they are very similar, and that's why k-Means is also optimizing a well-defined objective function, and it's guaranteed to converge to a local minimum. So, to summarize our discussion of clustering methods: we first discussed model-based approaches, mainly the mixture model. Here we use an implicit similarity function to define the clustering bias.
There is no explicitly defined similarity function; the model defines the clustering bias, and the clustering structure is built into the generative model. That's why we can potentially use a different model to recover a different structure.", - "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" - }, - { - "time": "15:40", - "text": "Complex generative models can be used to discover complex clustering structures. We did not talk about this in full, but we can easily design a generative model to generate hierarchical clusters. We can also use priors to further customize the clustering algorithm, to, for example, control the topic of one cluster or multiple clusters. However, one disadvantage of this approach is that there is no easy way to directly control the similarity measure.", - "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" - }, - { - "time": "16:11", - "text": "Sometimes we want to do that, but it's very hard to inject such a special definition of similarity into such a model.", - "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" - }, - { - "time": "16:20", - "text": "We also talked about similarity-based approaches; these approaches are more flexible in that we can actually specify similarity functions.", - "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" - }, - { - "time": "16:29", - "text": "But one major disadvantage is that their objective function is not always very clear.", - "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" - }, - { - "time": "16:35", - "text": "The k-Means algorithm has a clearly defined objective function, but it's also very similar to a model-based approach.
For the hierarchical clustering algorithm, on the other hand, it is harder to specify the objective function, so it's not clear what exactly is being optimized,", - "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" - }, - { - "time": "17:00", - "text": "both approaches can generate term clusters and document clusters. And term clusters can in general be generated by representing each term with some text content. For example, take the context of each term as a representation of that term, as we have done in semantic relation learning. And then we can certainly cluster terms based on actual text [INAUDIBLE]. Of course, term clusters can be generated by using generative models as well, as we've seen. [MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/PsyKR/4-5-text-clustering-similarity-based-approaches" - } - ] - }, - { - "4-6-text-clustering-evaluation": [ - { - "time": "0:00", - "text": "[MUSIC] This lecture is about evaluation of text clustering.", - "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" - }, - { - "time": "0:12", - "text": "So far we have talked about multiple ways of doing text clustering, but how do we know which method works the best?", - "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" - }, - { - "time": "0:22", - "text": "So this has to do with evaluation. Now to talk about evaluation, one must go back to the clustering bias that we introduced at the beginning.", - "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" - }, - { - "time": "0:32", - "text": "Because two objects can be similar depending on how you look at them,", - "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" - }, - { - "time": "0:37", - "text": "we must clearly specify the perspective of similarity.
Without that, the problem of clustering is not well defined. So this perspective is also very important for evaluation. If you look at this slide, you can see we have two different ways to cluster these shapes, and if you ask the question, which one is the best, or which one is better? You will see there's no way to answer this question without knowing whether we'd like to cluster based on shapes or cluster based on sizes. And that's precisely why the perspective, or clustering bias, is crucial for evaluation.", - "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" - }, - { - "time": "1:19", - "text": "In general, we can evaluate text clusters in two ways: one is direct evaluation, and the other is indirect evaluation. So in direct evaluation, we want to answer the following question: how close are the system-generated clusters to the ideal clusters that are generated by humans?", - "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" - }, - { - "time": "1:38", - "text": "So the closeness here can be assessed", - "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" - }, - { - "time": "1:44", - "text": "from multiple perspectives, and that will help us characterize the quality of the clustering result from multiple angles, and this is sometimes desirable.", - "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" - }, - { - "time": "1:56", - "text": "Now we also want to quantify the closeness, because this would allow us to easily compare different methods based on their performance figures.", - "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" - }, - { - "time": "2:09", - "text": "And finally, you can see, in this case, we essentially inject the clustering bias", - "url":
"https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" - }, - { - "time": "2:15", - "text": "by using humans, basically humans would bring in the the need or desire to clustering bias. Now, how do we do that exactly? Well, the general procedure would look like this.", - "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" - }, - { - "time": "2:28", - "text": "Given a test set which consists of a lot of text objects, we can have humans to create the ideal clustering result, that is, we're going to ask humans to partition the objects to create the gold standard. And they will use their judgments based on the need of a particular application to generate what they think are the best clustering results, and this would be then used to compare with the system generated clusters from the same test set.", - "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" - }, - { - "time": "3:01", - "text": "And ideally, we want the system results to be the same as the human generated results, but in general, they are not going to be the same. So we would like to then quantify the similarity between the system-generated clusters and the gold standard clusters. And this similarity can also be measure from multiple perspectives and this will give us various meshes to quantitatively evaluate a cluster, a clustering result. And some of the commonly used measures include the purity, which measures whether a cluster has a similar object from the same cluster, in the gold standard. And normalized mutual information is a commonly used measure which basically measures based on the identity of cluster of object in the system generally. How well can you predict the cluster of the object in the gold standard or vice versa? 
And mutual information captures the correlation between these cluster labels, and normalized mutual information is often used to quantify the similarity for this evaluation purpose. The F measure is another possible measure.", - "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" - }, - { - "time": "4:21", - "text": "Now again, a thorough discussion of these evaluation issues would be beyond the scope of this course.", - "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" - }, - { - "time": "4:29", - "text": "I've suggested some readings at the end that you can take a look at to learn more about that.", - "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" - }, - { - "time": "4:36", - "text": "So here I just want to discuss some high-level ideas that would allow you to think about how to do evaluation in your applications. The second way to evaluate text clusters is to do indirect evaluation. So in this case the question to answer is, how useful are the clustering results for the intended applications?
Now this of course is an application-specific question, so usefulness is going to depend on specific applications.", - "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" - }, - { - "time": "5:07", - "text": "In this case, the clustering bias is imposed by the intended application as well, so what counts as the best clustering result would be dependent on the application.", - "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" - }, - { - "time": "5:19", - "text": "Procedure-wise, we would also create a test set with text objects for the intended application to quantify the performance of the system.", - "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" - }, - { - "time": "5:32", - "text": "In this case, what we care about is the contribution of clustering to some application, so we often have a baseline system to compare with.", - "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" - }, - { - "time": "5:45", - "text": "This could be the current system for doing something, and then you hope that adding clustering will improve it; or the baseline system could be using a different clustering method than the one you are trying to experiment with, and you hope to have a better way of clustering.
So in any case you have a baseline system to work with, and then you add a clustering algorithm to the baseline system to produce a clustering system.", - "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" - }, - { - "time": "6:11", - "text": "And then we have to compare the performance of your clustering system and the baseline system in terms of the performance measure for that particular application.", - "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" - }, - { - "time": "6:21", - "text": "So in this case we call it indirect evaluation of clusters, because there's no explicit assessment of the quality of clusters; rather, it assesses the contribution of clusters to a particular application.", - "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" - }, - { - "time": "6:37", - "text": "So, to summarize text clustering,", - "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" - }, - { - "time": "6:41", - "text": "it's a very useful unsupervised general text mining technique, and it's particularly useful for obtaining an overall picture of the text content. And this is often needed to explore text data, and this is often the first step when you deal with a lot of text data.", - "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" - }, - { - "time": "7:01", - "text": "The second kind of application is to discover interesting clustering structures in text data, and these structures can be very meaningful.", - "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" - }, - { - "time": "7:13", - "text": "There are many approaches that can be used to perform text clustering, and we discussed model-based approaches and similarity-based approaches.
In general, strong clusters tend to show up no matter what method is used. Also, the effectiveness of a method highly depends on whether the desired clustering bias is captured appropriately, and this can be done either through using the right generative model, designed appropriately for the clustering task, or the right similarity function to explicitly define the bias. Deciding the optimal number of clusters is a very difficult problem for all clustering methods, because clustering is unsupervised and there is no training data to guide us in selecting the best number of clusters.", - "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" - }, - { - "time": "8:05", - "text": "Now sometimes you may see methods that can automatically determine the number of clusters, but in general they have some implied clustering bias that's just not specified. Without clearly defining a clustering bias, it's impossible to say what the optimal number of clusters is, so this is important to keep in mind.", - "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" - }, - { - "time": "8:31", - "text": "And I should also say that sometimes we can use the application to determine the number of clusters. For example, if you're clustering search results, then obviously you don't want to generate 100 clusters, so the number can be dictated by the interface design.", - "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" - }, - { - "time": "8:46", - "text": "In other situations, we might be able to use the fit to the data to assess whether we've got a good number of clusters to explain our data well.
And to do that, you can vary the number of clusters and watch how well you can fit the data.", - "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" - }, - { - "time": "9:07", - "text": "In general, when you add more components to a mixture model, you should fit the data better, because you can always set the probability of using the new component to zero. So you can't in general fit the data worse than before; but the question is, as you add more components, would you be able to significantly improve the fit to the data? And that can be used to determine the right number of clusters. And finally, evaluation of clustering results can be done both directly and indirectly, and we often would like to do both in order to get a good sense of how well our method works. So here is some suggested reading, and this is particularly useful to better understand how these measures are calculated, and clustering in general. [MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/tr0ir/4-6-text-clustering-evaluation" - } - ] - }, - { - "4-7-text-categorization-motivation": [ - { - "time": "0:00", - "text": "[SOUND]", - "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" - }, - { - "time": "0:06", - "text": "This lecture is about text categorization.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" - }, - { - "time": "0:11", - "text": "In this lecture, we're going to talk about text categorization.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" - }, - { - "time": "0:16", - "text": "This is a very important technique for text data mining and analytics.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" - }, - { - "time": "0:22", - "text": "It is relevant to the discovery of various different
kinds of knowledge, as shown here. First, it's related to topic mining and analysis, and that's because it has to do with analyzing text data based on some predefined topics. Secondly, it's also related to opinion mining and sentiment analysis, which has to do with discovering knowledge about the observer, the human sensor. Because we can categorize the authors, for example, based on the content of the articles that they have written, right? We can, in general, categorize the observer based on the content that they produce.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" - }, - { - "time": "1:12", - "text": "Finally, it's also related to text-based prediction, because we can often use text categorization techniques to predict some variables in the real world that are only remotely related to text data.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" - }, - { - "time": "1:27", - "text": "And so, this is a very important technique for text data mining.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" - }, - { - "time": "1:34", - "text": "This is the overall plan for covering the topic. First, we're going to talk about what text categorization is and why we're interested in doing it, in this lecture. Then, we're going to talk about how to do text categorization and how to evaluate the categorization results. So, the problem of text categorization is defined as follows. We're given a set of predefined categories, possibly forming a hierarchy, and often also a set of training examples, or a training set of labeled text objects, which means the text objects have already been labeled with known categories. And then, the task is to classify any text object into one or more of these predefined categories.
So, the picture on this slide shows what happens.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" - }, - { - "time": "2:30", - "text": "When we do text categorization, we have a lot of text objects to be processed by a categorization system, and the system will, in general, assign categories to these documents, as shown on the right in the categorization results. And we often assume the availability of training examples, and these are the documents that are tagged with known categories. And these examples are very important for helping the system to learn patterns in different categories. And, this would further help the system then know how to recognize", - "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" - }, - { - "time": "3:11", - "text": "the categories of new text objects that it has not seen. So, here are some specific examples of text categorization. And in fact, there are many examples, here are just a few.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" - }, - { - "time": "3:27", - "text": "So first, text objects can vary, so we can categorize a document, or a passage, or a sentence, or collections of text. As in the case of clustering, the units to be analyzed can vary a lot, so this creates a lot of possibilities. Secondly, categories can also vary. In general, there are two major kinds of categories. One is internal categories. These are categories that characterize the content of the text object. 
For example, topic categories or sentiment categories, and they generally have to do with the content of the text objects, so this is categorization of the content.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" - }, - { - "time": "4:08", - "text": "The other kind is external categories that can characterize an entity associated with the text object. For example, authors are entities associated with the content that they produce. And so, we can use their content in determining which author has written which part, for example, and that's called author attribution.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" - }, - { - "time": "4:33", - "text": "Or, we can have any other meaningful categories associated with text data, as long as there is a meaningful connection between the entity and the text data. For example, we might collect a lot of reviews about a restaurant or a lot of reviews about a product, and then this text data can help us infer properties of a product or a restaurant. In that case, we can treat this as a categorization problem. We can categorize restaurants or categorize products based on their corresponding reviews. So, this is an example of external categories. Here are some specific examples of the applications. News categorization is very common and has been studied a lot. News agencies would like to assign predefined categories to categorize the news generated every day. And, research article categorization is another important application. For example, in the biomedical domain, there's MeSH annotations. 
MeSH stands for Medical Subject Heading, and this is an ontology of terms,", - "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" - }, - { - "time": "5:49", - "text": "characterizing the content of literature articles in detail.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" - }, - { - "time": "5:54", - "text": "Another example of application is spam email detection or filtering, right? So, we often have a spam filter to help us distinguish spams from legitimate emails, and this is clearly a binary classification problem.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" - }, - { - "time": "6:14", - "text": "Sentiment categorization of product reviews or tweets is yet another kind of application, where we can categorize opinions into positive or negative, or positive, negative, and neutral.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" - }, - { - "time": "6:27", - "text": "So, you can have sentiment categories assigned to text content.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" - }, - { - "time": "6:35", - "text": "Another application is automatic email routing or sorting. So, you might want to automatically sort your emails into different folders, and that's one application of text categorization where each folder is a category.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" - }, - { - "time": "6:48", - "text": "There is also another important kind of application: routing emails to the right person to handle. So, in a helpdesk, an email message is generally routed to a particular person to handle. Different people tend to handle different kinds of requests. 
And in many cases, a person would manually assign the messages to the right people. But, as you can imagine, you can use an automatic text categorization system to help route the requests. And, this would classify the incoming request into one of the categories, where each category actually corresponds to a person to handle the request. And finally, author attribution, as I just mentioned, is yet another application, and it's another example of using text to actually infer properties of", - "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" - }, - { - "time": "7:41", - "text": "some other entities. And, there are also many variants of the problem formulation. And so, first, we have the simplest case, which is binary categorization, where there are only two categories. And, there are many examples like that, in information retrieval or search engine", - "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" - }, - { - "time": "7:59", - "text": "applications, where one is distinguishing relevant documents from non-relevant documents for a particular query.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" - }, - { - "time": "8:06", - "text": "Spam filtering is just distinguishing spams from non-spams, so also two categories. Sometimes, classification of opinions can be in two categories, positive and negative.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" - }, - { - "time": "8:19", - "text": "A more general case would be K-category categorization, and there are also many applications like that, where there could be more than two categories. So, topic categorization is often such an example, where you can have multiple topics. 
Email routing would be another example, where you may have multiple folders, or if you route the email to the right person to handle it, then there are multiple people, each corresponding to a category. So, in all these cases, there are more than two categories.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" - }, - { - "time": "8:49", - "text": "Another variation is to have hierarchical categorization, where categories form a hierarchy. Again, a topical hierarchy is very common.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" - }, - { - "time": "8:58", - "text": "Yet another variation is joint categorization. That's when you have multiple categorization tasks that are related, and then you hope to jointly do the categorization and further leverage the dependency of these tasks to improve accuracy for each individual task. Among all these, binary categorization is the most fundamental, partly because it's simple, and partly because it can actually be used to perform all the other categorization tasks. For example, a K-category categorization task can actually be performed by using binary categorization.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" - }, - { - "time": "9:40", - "text": "Basically, we can look at each category separately, and then the binary categorization problem is whether an object is in this category or not, meaning in the other categories.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" - }, - { - "time": "9:53", - "text": "And, hierarchical categorization can also be done by progressively doing flat categorization at each level. 
So, first, we categorize all the objects into, let's say, a small number of high-level categories, and inside each category, we further categorize into sub-categories, etc.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" - }, - { - "time": "10:15", - "text": "So, why is text categorization important? Well, I already showed you several applications, but, in general, there are several reasons. One is that text categorization helps enrich text representation, and that's to achieve more understanding of text data, which is always useful for text analysis. So, now with categorization, text can be represented at multiple levels. There is the keyword representation that's often used for a lot of text processing tasks, but we can now also add categories, and together they provide two levels of representation.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" - }, - { - "time": "10:55", - "text": "The semantic categories assigned can also be directly or indirectly useful for an application. So, for example, the semantic categories could already be very useful, or author attribution might be directly useful. 
Another example is when semantic categories can facilitate aggregation of text content, and this is another kind of application of text categorization.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" - }, - { - "time": "11:25", - "text": "For example, if we want to know the overall opinions about a product, we", - "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" - }, - { - "time": "11:32", - "text": "could first categorize the opinions in each individual review as positive or negative, and then that would allow us to easily aggregate all the sentiments, and it would tell us that about 70% of the reviews are positive and 30% are negative, etc.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" - }, - { - "time": "11:53", - "text": "So, without doing categorization, it would be much harder to aggregate such opinions. Categorization provides a concise way of coding text, in some sense, based on a controlled vocabulary. And, sometimes in some applications you may see text with categorization called coded text, encoded with some controlled vocabulary. The second kind of reason is to use text categorization to infer properties of entities, and text categorization allows us to infer the properties of entities that are associated with text data. So, this means we can use text categorization to discover knowledge about the world. In general, as long as we can associate the entity with the text data, we can always use the text data to help categorize the corresponding entities. So, it's useful to think of an information network that connects the other entities with text data. The obvious entities that can be directly connected are authors. But, you can also imagine the author's affiliations or the author's age and other things can be actually connected to text data indirectly. 
Once we have made the connection, then we can make a prediction about those values. So, this is a general way to allow us to use text mining, through text categorization, to discover knowledge about the world. This is very useful, especially in big text data analytics, where we are often just using text data as extra data expressed by humans to infer certain decision factors, often together with non-textual data. Specifically with text, for example, we can also think of examples of inferring properties of entities. For example, discovery of non-native speakers of a language. And, this can be done by categorizing the content produced by the speakers.", - "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" - }, - { - "time": "14:00", - "text": "Another example is to predict the party affiliation of a politician based on the political speech. And, this is again an example of using text data to infer some knowledge about the real world. In nature, the problems are all the same as we defined: a text categorization problem. 
[MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/8tCQe/4-7-text-categorization-motivation" - } - ] - }, - { - "4-8-text-categorization-methods": [ - { - "time": "0:06", - "text": "This lecture is about the methods for text categorization.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" - }, - { - "time": "0:12", - "text": "So in this lecture we're going to discuss how to do text categorization.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" - }, - { - "time": "0:19", - "text": "First, there are manual methods for text categorization.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" - }, - { - "time": "0:25", - "text": "In such a method, the idea is to determine the category based on some rules that we design carefully to reflect the domain knowledge about the category prediction problem. So for example, if you want to do topic categorization for news articles, you can say, well, if the news article mentions words like 'game' and 'sports' three times, then we're going to say it's about sports, things like that. And this would allow us to deterministically decide which category a document should be put into.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" - }, - { - "time": "1:02", - "text": "Now such a strategy would work well if the following conditions hold. 
First, the categories must be very well defined, and this allows a person to clearly decide the category based on some clear rules.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" - }, - { - "time": "1:21", - "text": "And secondly, the categories", - "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" - }, - { - "time": "1:25", - "text": "have to be easy to distinguish based on surface features in text. So that means some superficial features like keywords or punctuation, or whatever you can easily identify in the text data.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" - }, - { - "time": "1:41", - "text": "For example, if there is some special vocabulary that is known to only occur in a particular category, that would be most effective, because we can easily use such a vocabulary or a pattern of such a vocabulary to recognize this category.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" - }, - { - "time": "1:57", - "text": "Now we also should have sufficient knowledge for designing these rules, and if that's the case, then such an approach can be effective, and it is indeed used in some domains sometimes. However, in general, there are several problems with this approach. First, because it's labor-intensive, it requires a lot of manual work. Obviously, we can't do this for all kinds of categorization problems. We have to do it from scratch for each different problem, because each problem needs its own rules. So it doesn't scale up well.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" - }, - { - "time": "2:41", - "text": "Secondly, it cannot handle uncertainty in rules; often the rules aren't 100% reliable. 
Take for example looking at occurrences of words in texts and trying to decide the topic.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" - }, - { - "time": "2:57", - "text": "It's actually very hard to have a 100% correct rule. So for example, you can say, well, if it has 'game', 'sports', 'basketball', then for sure it's about sports. But one can also imagine some types of articles that mention these keywords but may not be exactly about sports, or only marginally touch on sports. The main topic could be another topic, a different topic than sports.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" - }, - { - "time": "3:27", - "text": "So that's one disadvantage of this approach. And then finally, the rules may be inconsistent, and this would hurt robustness. More specifically, sometimes the results of categorization may be different depending on which rule is applied. So in that case, you are facing uncertainty. And you will also have to decide an order of applying the rules, or how to combine results that are contradictory. So all these are problems with this approach. And it turns out that both problems can be solved or alleviated by using machine learning.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" - }, - { - "time": "4:07", - "text": "So these machine learning methods are more automatic. But I still put 'automatic' in quotation marks because they are not really completely automatic, as they still require manual work. More specifically, we have to use human experts to help in two ways. First, the human experts must annotate data sets with category labels, which tell the computer which documents should receive which categories. 
And this is called training data.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" - }, - { - "time": "4:38", - "text": "And then secondly, the human experts also need to provide a set of features to represent each text object that can potentially provide a clue about the category. So, we need to provide some basic features for the computers to look into.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" - }, - { - "time": "4:55", - "text": "In the case of text, a natural choice would be the words. So, using each word as a feature is a very common choice to start with, but of course there are other sophisticated features like phrases, or even part-of-speech tags, or even syntactic structures. So once human experts can provide these, then we can use machine learning to learn soft rules for categorization from the training data. So, soft rules just means we're still going to decide which category should be assigned to a document, but not by using a rule that is deterministic. So we might use something similar to saying that if it mentions 'game' and 'sports' many times, it's likely to be about sports. But we're not going to say so exactly for sure; instead, we're going to use probabilities or weights, so that we can combine much more evidence. So, the learning process basically is going to figure out which features are most useful for separating different categories. And it's going to also figure out how to optimally combine features to minimize errors of the categorization on the training data. So the training data, as you can see here, is very important. It's the basis for learning. And then, the trained classifier can be applied to a new text object to predict the most likely category. And that's to simulate the prediction of what a human would assign to this text object, if the human were to make a judgment. 
So when we use machine learning for text categorization, we can also talk about the problem in the general setting of supervised learning. So the setup is to learn a classifier to map a value in X into a value in Y. So here X is the set of all the text objects and Y is the set of all the categories. So the classifier will take any value in X as input and will generate a value in Y as output. We hope that the output y is the right category for x. And here correct, of course, is judged based on the training data. So that's the general goal in machine learning problems, or supervised learning problems, where you are given some examples of input and output for a function. And then the computer is going to figure out how the function behaves based on these examples, and then try to be able to compute the values for future x's that we have not seen.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" - }, - { - "time": "7:38", - "text": "So in general, all methods would rely on discriminative features of text objects to distinguish different categories. So that's why these features are very important, and they have to be provided by humans. And they will also combine multiple features in a weighted manner, with weights to be optimized to minimize errors on the training data. So the learning process is an optimization problem; the objective function is often tied to the errors on the training data.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" - }, - { - "time": "8:12", - "text": "Different methods tend to vary in their ways of measuring the errors on the training data. 
They might optimize a different objective function, which is often also called a loss function or cost function.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" - }, - { - "time": "8:26", - "text": "They also tend to vary in their ways of combining the features. So a linear combination, for example, is simple and is often used, but not as powerful as nonlinear combinations. But nonlinear models might be more complex for training, so there are tradeoffs as well. And that would lead to", - "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" - }, - { - "time": "8:50", - "text": "many variations of these learning methods. So in general we can distinguish two kinds of classifiers at a high level. One is called generative classifiers. The other is called discriminative classifiers. The generative classifiers try to learn what the data looks like in each category. So they attempt to model the joint distribution of the data and the label, X and Y, and this can then be factored into a product of p(Y), the distribution of labels, and the conditional probability of X given Y. 
So we first model the distribution of labels, and then we model how the data is generated given a particular label here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" - }, - { - "time": "9:48", - "text": "And once we can estimate these models, then we can compute this conditional probability of the label given the data, based on the probability of the data given the label.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" - }, - { - "time": "10:02", - "text": "And the label distribution here, by using the Bayes Rule.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" - }, - { - "time": "10:07", - "text": "Now this is the most important thing, because this conditional probability of the label can then be used directly to decide which label is most likely.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" - }, - { - "time": "10:18", - "text": "So in such approaches, the objective function is actually the likelihood. And so, we model how the data are generated, so it only indirectly captures the training errors. But if we can model the data in each category accurately, then we can also classify accurately.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" - }, - { - "time": "10:38", - "text": "One example is the Na\u00efve Bayes classifier in this case. The other kind of approaches are called discriminative classifiers, and these classifiers try to learn what features separate categories. So they directly attack the problem of categorization, or separation of classes. 
", - "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" - }, - { - "time": "11:04", - "text": "So, these discriminative classifiers attempt to model the conditional probability of the label given the data point directly.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" - }, - { - "time": "11:17", - "text": "So, the objective function tends to directly measure the errors of categorization on the training data.", - "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" - }, - { - "time": "11:24", - "text": "Some examples include logistic regression, support vector machines, and k-nearest neighbors. We will cover some of these classifiers in detail in the next few lectures. [MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/KKDXC/4-8-text-categorization-methods" - } - ] - }, - { - "4-9-text-categorization-generative-probabilistic-models": [ - { - "time": "0:00", - "text": "[SOUND] This lecture is about how to use generative probabilistic models for text categorization.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "0:14", - "text": "There are in general two kinds of approaches to text categorization using machine learning. One is generative probabilistic models. The other is discriminative approaches. In this lecture, we're going to talk about the generative models. In the next lecture, we're going to talk about discriminative approaches. So the problem of text categorization is actually very similar to document clustering, in that we assume that each document belongs to one category or one cluster. The main difference is that in clustering we don't really know what the predefined categories are, what the clusters are. 
In fact, that's the goal of text clustering.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "0:55", - "text": "We want to find such clusters in the data.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "0:59", - "text": "But in the case of categorization, we are given the categories. So we kind of have pre-defined categories, and then based on these categories and training data, we would like to allocate a document to one of these categories, or sometimes multiple categories. But because of the similarity of the two problems, we can actually adapt the document clustering models for text categorization. And we can understand how to use generative models to do text categorization from the perspective of clustering. And so, this is a slide that we've talked about before, about text clustering, where we assume there are multiple topics represented by word distributions. Each topic is one cluster. So once we estimated such a model, we face the problem of deciding which cluster document d should belong to. And this question boils down to deciding which theta i has been used to generate d.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "2:06", - "text": "Now, suppose d has L words represented as xi here. 
Now, how can you compute the probability that a particular topic word distribution theta i has been used to generate this document?", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "2:27", - "text": "Well, in general, we use the Bayes rule to make this inference, and you can see this prior information here that we need to consider: if a topic or cluster has a higher prior, then it's more likely that the document has come from this cluster. And so, we should favor such a cluster. The other is the likelihood part, it's this part.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "2:56", - "text": "And this has to do with whether the topic word distribution can explain the content of this document well. And we want to pick a topic that has high values for both. So more specifically, we just multiply them together and then choose which topic has the highest product. So more rigorously, this is what we'd be doing. So we're going to choose the topic that would maximize this posterior probability of the topic given the document. It's called posterior because this one, p of theta i, is the prior. That's our belief about which topic is more likely, before we observe any document. 
But this conditional probability here is the posterior probability of the topic after we have observed the document d.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "3:49", - "text": "And the Bayes rule allows us to update this probability based on the prior, and I have shown the details below; here you can see how the prior is related to the posterior, on the left-hand side.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "4:05", - "text": "And this is related to how well this word distribution explains the document here, and the two are related in this way. So finding the topic that has the highest posterior probability here is equivalent to maximizing this product, as we have seen also multiple times in this course.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "4:32", - "text": "And we can then decompose the probability of the document into a product of the probabilities of each word, and that's just because we've made an assumption about independence in generating each word. So this is just something that you have seen in document clustering.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "4:50", - "text": "And we now can see clearly how we can assign a document to a category based on the information about the word distributions for these categories and the prior on these categories. So this idea can be directly adapted to do categorization. And this is precisely what a Naive Bayes classifier is doing. So here it's mostly the same information, except that we're looking at the categorization problem now. 
So we assume that if theta i represents category i accurately, that means the word distribution characterizes the content of documents in category i accurately. Then, what we can do is precisely like what we did for text clustering. Namely, we're going to assign document d to the category that has the highest probability of generating this document. In other words, we're going to maximize this posterior probability as well.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "5:56", - "text": "And this is related to the prior and the likelihood, as you have seen on the previous slide. And so, naturally we can decompose this likelihood into a product, as you see here. Now, here I change the notation so that we write down the product as a product over all the words in the vocabulary, even though the document doesn't contain all the words. And the product still accurately represents the product over all the words in the document, because of this count here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "6:37", - "text": "When a word doesn't occur in the document, the count would be 0, so this term will just disappear. So effectively we'll just have the product over the words in the document. So basically, with the Naive Bayes classifier, we're going to score each category for the document by this function.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "6:56", - "text": "Now, you may notice that here it involves a product of a lot of small probabilities, and this can cause an underflow problem. So one way to solve the problem is to take the logarithm of this function, which doesn't change the ordering of these categories but helps us preserve precision. 
And so, this is often the function that we actually use to score each category, and then we're going to choose the category that has the highest score by this function. So this is called a Naive Bayes classifier. Now, the keyword 'Bayes' is understandable because we are applying Bayes' rule here when we go from the posterior probability of the topic to a product of the likelihood and the prior.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "7:47", - "text": "Now, it's also called 'naive' because we've made an assumption that every word in the document is generated independently, and this is indeed a naive assumption because in reality they're not generated independently. Once you see some word, then other words are more likely to occur. For example, if you have seen a word like 'text', then words like 'clustering' are more likely to appear than if you have not seen 'text'.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "8:15", - "text": "But this assumption allows us to simplify the problem. And it's actually quite effective for many text categorization tasks. But you should know that this kind of model doesn't have to make this assumption. We could, for example, assume that words may be dependent on each other. So that would make it a bigram language model or a trigram language model. And of course you can even use a mixture model to model what the document looks like in each category. In all these cases, we would still be using Bayes' rule to do classification, but the actual generative model for documents in each category can vary. 
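The log-based scoring just described — the log prior plus a count-weighted sum of log word probabilities — can be sketched in Python. This is a minimal illustration with hypothetical function and variable names, not code from the lecture; it assumes the word probabilities have already been estimated (and smoothed, so none are zero):

```python
import math

def nb_log_score(doc_counts, log_prior, log_word_probs):
    # Score for one category: log p(theta) + sum over words of
    # c(w, d) * log p(w | theta). Taking logs avoids underflow from
    # multiplying many small probabilities without changing the ordering.
    score = log_prior
    for word, count in doc_counts.items():
        # A word with count 0 contributes nothing, so summing over the
        # document equals summing over the whole vocabulary.
        score += count * log_word_probs[word]
    return score

def classify(doc_counts, categories):
    # categories maps a name to (log_prior, log_word_probs);
    # pick the category with the highest score.
    return max(categories, key=lambda c: nb_log_score(doc_counts, *categories[c]))
```

Since only the ordering of categories matters, any monotonic transform of the product (here, the log) yields the same decision.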
And here, we just talk about a very simple case, perhaps the simplest case.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "9:00", - "text": "So now the question is, how can we make sure theta i actually represents category i accurately? Now, in clustering, we learned the word distribution for category i from the data. But in our case, what can we do to make sure this theta i indeed represents category i?", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "9:25", - "text": "Well, if you think about the question, you'll likely come up with the idea of using the training data.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "9:34", - "text": "Indeed, in text categorization we typically assume that there is training data available, and those are the documents that are known to have been generated from each category. In other words, these are documents with known categories assigned, and of course human experts must do that. Here, you see that T1 represents the set of documents that are known to have been generated from category 1, and T2 represents the documents that are known to have been generated from category 2, etc. Now, if you look at this picture, you'll see that the model here is really a simplified unigram language model. It's no longer a mixture model. Why? Because we already know which distribution has been used to generate which documents. There's no uncertainty here; there's no mixing of different categories here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "10:30", - "text": "So the estimation problem of course would be simplified. 
But in general, you can imagine what we want to do is estimate these probabilities that I marked here. So what are the probabilities that we have to estimate in order to do categorization? Well, there are two kinds. One is the prior, the probability of theta i, and this indicates how popular each category is, or how likely we are to observe a document in that category. The other kind is the word distributions, and we want to know what words have high probabilities for each category.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "11:11", - "text": "So the idea then is to just use the observed training data to estimate these two probabilities.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "11:18", - "text": "And in general, we can do this separately for the different categories. That's just because these documents are known to be generated from a specific category. So once we know that, it's in some sense irrelevant what other categories we are also dealing with.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "11:37", - "text": "So now this is a statistical estimation problem. We have observed some data from some model and we want to guess the parameters of this model. We want to take our best guess of the parameters.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "11:51", - "text": "And this is a problem that we have seen several times in this course. Now, if you haven't thought about this problem, or haven't seen a Naive Bayes classifier, it would be very useful for you to pause the video for a moment and think about how to solve this problem. 
So let me state the problem again. Let's just think about category 1. We know there is one word distribution that has been used to generate documents, and we generate each word in the document independently. And we know that we have observed a set of N sub 1 documents in the set T sub 1. These documents have all been generated from category 1; namely, they have all been generated using this same word distribution. Now the question is, what would be your guess or estimate of the probability of each word in this distribution? And what would be your guess of the prior probability of this category? Of course, this prior probability depends on how likely you are to see documents in other categories.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "12:55", - "text": "So think for a moment: how do you use all this training data, including all these documents that are known to be in these k categories, to estimate all these parameters? If you spend some time thinking about this, it will help you understand the following few slides. So do spend some time on this, and do your best to solve the problem yourself. Now, if you have thought about it, you will realize the following two points.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "13:29", - "text": "First, what's the basis for estimating the prior, or the probability of each category? Well, this has to do with whether you have observed a lot of documents from that category. Intuitively, suppose you have seen a lot of documents in sports and very few in medical science. 
Then your guess is that the probability of the sports category is larger, or your prior on that category would be larger.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "13:57", - "text": "And what about the basis for estimating the probability of words in each category? Well, it's the same: you'd just assume that words that are observed frequently in the documents that are known to be generated from a category will likely have a higher probability. And that's just the maximum likelihood estimate. Indeed, that's what we can do. So to estimate the probability of each category, and to answer the question of which category is most popular, we can simply normalize the count of documents in each category. So here you see N sub i denotes the number of documents in category i.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "14:37", - "text": "And we simply normalize these counts to make this a probability. In other words, we make this probability proportional to the size of the training set in each category, that is, the size of the set T sub i.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "14:55", - "text": "Now what about the word distribution? Well, we do the same. This time we can do this for each category. So let's say we're considering category i, or theta i. Which word has a higher probability? Well, we simply count the word occurrences in the documents that are known to be generated from theta i.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "15:20", - "text": "And then we put together all the counts of the same word in the set. 
And then we just normalize these counts to make this a distribution over all the words, so the probabilities of all these words sum to 1. So in this case, you're going to see this is proportional to the count of the word in the collection of training documents T sub i, and that's denoted by c of w and T sub i.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "15:49", - "text": "Now, you may notice that we often write down a probability estimate as being proportional to certain counts. And this is often sufficient, because we have some constraints on these distributions; the normalizer is dictated by the constraint. So in this case, it will be useful for you to think about what the constraints are on these two kinds of probabilities. Once you figure out the answer to this question, you will know how to normalize these counts. So this is a good exercise to work on if it's not obvious to you. There is another issue in Naive Bayes, which is smoothing. In fact, smoothing is a general problem in all estimation of language models. And this has to do with what would happen if you have observed a small amount of data. So smoothing is an important technique to address data sparseness. In our case, the training data can be small, and when the data set is small, if we use the maximum likelihood estimator we often face the problem of zero probability. That means if an event is not observed, then the estimated probability would be zero. In this case, if we have not seen a word in the training documents for, let's say, category i, then our estimate would be zero for the probability of this word in this category, and this is generally not accurate. So we have to do smoothing to make sure no word has zero probability. The other reason for smoothing is that it is a way to bring in prior knowledge, and this is also generally true for a lot of situations of smoothing. 
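The maximum likelihood estimates described above — priors proportional to document counts, word probabilities proportional to pooled word counts — can be computed as follows, before any smoothing is applied. This is a hypothetical sketch with made-up names, not code from the lecture:

```python
from collections import Counter

def estimate_nb(training_sets):
    # training_sets maps a category name to its training documents T_i,
    # where each document is a list of words.
    total_docs = sum(len(docs) for docs in training_sets.values())
    priors, word_dists = {}, {}
    for cat, docs in training_sets.items():
        # Prior: normalize the document count N_i so the priors sum to 1.
        priors[cat] = len(docs) / total_docs
        # Word distribution: pool the counts c(w, T_i) over the category's
        # documents, then normalize so the word probabilities sum to 1.
        counts = Counter(w for doc in docs for w in doc)
        total = sum(counts.values())
        word_dists[cat] = {w: c / total for w, c in counts.items()}
    return priors, word_dists
```

The normalizers here are exactly the constraints discussed in the transcript: the priors sum to 1 over categories, and each word distribution sums to 1 over the vocabulary.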
When the data set is small, we tend to rely on some prior knowledge to solve the problem. So in this case our [INAUDIBLE] says that no word should have zero probability. So smoothing allows us to inject this prior knowledge that no word has a true zero probability.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "17:54", - "text": "There is also a third reason, which is sometimes not very obvious, but we'll explain it in a moment. And that is to help achieve discriminative weighting of terms. This is also called IDF weighting, inverse document frequency weighting, which you have seen in mining word relations.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "18:14", - "text": "So how do we do smoothing? Well, in general we add pseudocounts to these events to make sure that no event has a 0 count.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "18:22", - "text": "So one possible way of smoothing the probability of the category is to simply add a small non-negative constant delta to the count. Let's pretend that every category actually has some extra number of documents, represented by delta.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "18:40", - "text": "And in the denominator we also add k multiplied by delta, because we want the probabilities to sum to 1. So in total we've added delta k times, because we have k categories. 
Therefore, in this sum we also add k multiplied by delta as the total pseudocount added to the estimate.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "19:06", - "text": "Now, it's interesting to think about the influence of delta; obviously delta is a smoothing parameter here. The larger delta is, the more smoothing we do, which means we rely more on the pseudocounts. And we might indeed ignore the actual counts if delta is set to infinity. Imagine what would happen as delta approaches positive infinity. Well, we are going to say every category has an infinite number of documents, and then there's no distinction among them, so it becomes just a uniform distribution.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "19:44", - "text": "What if delta is 0? Well, we just go back to the original estimate based on the observed training data to estimate the probability of each category. Now, we can do the same for the word distribution. But in this case, sometimes we find it useful to use a nonuniform pseudocount for the word. So here you'll see we'll add a pseudocount to each word, and that's mu multiplied by the probability of the word given by a background language model, theta sub B. Now, that background model in general can be estimated by using a large collection of text. Or, in this case, we will use the whole set of all the training data to estimate this background language model. 
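The two smoothed estimates introduced here — add-delta for the category priors and a mu-weighted background model for the word probabilities — can be sketched as follows. Function and parameter names are hypothetical; delta and mu are the non-negative smoothing parameters from the lecture:

```python
def smoothed_prior(n_i, total_docs, k, delta):
    # Pretend each of the k categories has delta extra documents; the
    # denominator gains k * delta, so the priors still sum to 1.
    return (n_i + delta) / (total_docs + k * delta)

def smoothed_word_prob(count_w_in_Ti, total_count_Ti, p_w_background, mu):
    # Add mu * p(w | theta_B) pseudocounts for this word; the denominator
    # gains mu, the total pseudocount added over the whole vocabulary.
    return (count_w_in_Ti + mu * p_w_background) / (total_count_Ti + mu)
```

Setting delta or mu to 0 recovers the unsmoothed maximum likelihood estimates, while letting them grow pushes the estimates toward the uniform or background distribution, matching the special cases discussed in the transcript.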
But we don't have to use this one; we can use larger text data that are available from somewhere else.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "20:36", - "text": "Now, if we use such a background language model for pseudocounts, we'll find that some words will receive more pseudocounts. So what are those words? Well, those are the common words, because they get a high probability from the background language model, so the pseudocounts added for such words will be higher. Rare words, on the other hand, will have smaller pseudocounts. Now, this addition of the background model causes a nonuniform smoothing of these word distributions. We're going to bring the probabilities of those common words to a higher level because of the background model. This helps make the differences in the probabilities of such words smaller across categories, because every category gets some help from the background model for words like 'the' and 'a', which have high probabilities. Therefore, it's no longer so important whether each category has documents that contain a lot of occurrences of such words, since the estimate is more influenced by the background model. And the consequence is that when we do categorization, such words tend not to influence the decision as much as words that have small probabilities from the background language model. Those words don't get as much help from the background language model. 
So the difference would be primarily because of the differences in their occurrences in the training documents in different categories.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "22:05", - "text": "We also see another smoothing parameter, mu, here, which controls the amount of smoothing, just like delta does for the other probability.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "22:14", - "text": "And you can easily understand why we add mu to the denominator, because that represents the sum of all the pseudocounts that we add for all the words.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "22:25", - "text": "So mu is also a non-negative constant, and it's [INAUDIBLE] set to control smoothing. Now, there are some interesting special cases to think about as well. First, let's think about what would happen when mu approaches infinity. Well, in this case the estimate would approach", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "22:43", - "text": "the background language model. So we will bring every word distribution to the same background language model, and that essentially removes the difference between these categories. Obviously, we don't want to do that. The other special case is to think about the background model: suppose we actually set it to a uniform distribution, let's say 1 over the size of the vocabulary. So each word has the same probability; then this smoothing formula is going to be very similar to the one on the top, where we add delta. 
That's because we're going to add a constant pseudocount to every word.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "23:29", - "text": "So in general, in Naive Bayes categorization we have to do such smoothing. And then, once we have these probabilities, we can compute the score of each category for a document and choose the category with the highest score, as we discussed earlier.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "23:49", - "text": "Now, it's useful to further understand whether the Naive Bayes scoring function actually makes sense, and also to understand why adding a background model will actually achieve the effect of IDF weighting and penalize common words. So suppose we have just two categories, and we're going to score based on the ratio of their probabilities.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "24:24", - "text": "Let's say this is our scoring function for two categories. So this is the score of a document for these two categories, and we're going to score based on this probability ratio. If the ratio is larger, then it means the document is more likely to be in category one. So the larger the score is, the more likely the document is in category one. So by using Bayes' rule, we can write down this ratio as follows, and you have seen this before.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "25:09", - "text": "Now, we generally take the logarithm of this ratio to avoid small probabilities. And this would then give us this formula in the second line. 
And here we see something really interesting, because this is our scoring function for deciding between the two categories.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "25:30", - "text": "And if you look at this function, you'll see it has several parts. The first part here is actually the log of the ratio of the two priors, and so this is a category bias.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "25:41", - "text": "It doesn't really depend on the document. It just says which category is more likely, and we would then favor this category slightly. The second part is a sum over all the words. These are the words that are observed in the document, but in general we can consider all the words in the vocabulary. So here we're going to collect the evidence about which category is more likely. Inside the sum, you can see there is a product of two things. The first is the count of the word, and this count serves as a feature to represent the document.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "26:27", - "text": "And this is what we can collect from the document. The second part is the weight of this feature; here it's the weight on this word. This weight tells us to what extent observing this word contributes to our decision to put this document in category one. Now remember, the higher the scoring function is, the more likely the document is in category one. Now, this weight is basically based on the ratio of the probabilities of the word under the two distributions. Essentially, we're comparing the probability of the word under the two distributions. 
And if it's higher according to theta 1 than according to theta 2, then this weight will be positive. Therefore, it means that when we observe such a word, we will say that the document is more likely to be from category one. And the more we observe such a word, the more likely the document will be classified as theta 1.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "27:35", - "text": "If, on the other hand, the probability of the word from theta 1 is smaller than the probability of the word from theta 2, then you can see that this weight is negative. Therefore, this is negative evidence for supporting category one. That means the more we observe such a word, the more likely the document is actually from theta 2.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "27:58", - "text": "So this formula now makes a lot of sense, right? We're going to aggregate all the evidence from the document; we take a sum over all the words. We can call these the features that we collect from the document to help us make the decision. And then each feature has a weight that tells us how much", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "28:19", - "text": "this feature supports category one or supports category two. And this is estimated as the log of the probability ratio here in naive Bayes.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "28:32", - "text": "And then finally, we have this constant bias here. So that formula actually is a formula that can be generalized to accommodate more features, and that's why I have introduced some other symbols here. 
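The two-category scoring function analyzed here — a category bias plus word counts weighted by log probability ratios — can be sketched as a small Python function (hypothetical names; not code from the lecture):

```python
import math

def log_odds_score(doc_counts, prior1, prior2, dist1, dist2):
    # Bias term: log of the ratio of the two priors, independent of the document.
    score = math.log(prior1 / prior2)
    for word, count in doc_counts.items():
        # Feature value: the word count. Feature weight: log of the ratio
        # of the word's probabilities under theta 1 and theta 2. A positive
        # weight is evidence for category 1, a negative one for category 2.
        score += count * math.log(dist1[word] / dist2[word])
    return score
```

A positive total score favors category one, a negative one category two, exactly the decision rule described in the transcript.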
I use beta 0 to denote the bias, f sub i to denote each feature, and beta sub i to denote the weight on each feature. Now, when we do this generalization, what we see is that in general we can represent the document by a feature vector; here, of course, each feature f sub i is the count of a word. But in general, we can include any features that we think are relevant for categorization, for example, document length, font size, or counts of other patterns in the document.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "29:27", - "text": "And then our scoring function can be defined as the sum of a constant beta 0 and a weighted sum over all the features.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "29:42", - "text": "So each f sub i is a feature value; we multiply the value by the corresponding weight, beta sub i, and just take the sum. And this is the aggregate of all the evidence that we can collect from all these features. And of course there are parameters here. So what are the parameters? Well, these are the betas. These betas are weights. And with a proper setting of the weights, we can expect such a scoring function to work well to classify documents, just like in the case of naive Bayes. We can clearly see the naive Bayes classifier as a special case of this general classifier. 
Actually, this general form is very close to a classifier called logistic regression, and this is actually one of those conditional approaches, or discriminative approaches, to classification.", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - }, - { - "time": "30:32", - "text": "And we're going to talk more about such approaches later, but here I want you to note that there is a strong connection, a close connection, between the two kinds of approaches. And this slide shows how the naive Bayes classifier can be connected to logistic regression. And you can also see that discriminative classifiers, which tend to use the more general form at the bottom, can accommodate more features to solve the problem. [MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/ZxnI1/4-9-text-categorization-generative-probabilistic-models" - } - ] - } - ] - }, - { - "Week 5": [ - { - "5-1-text-categorization-discriminative-classifier-part-1": [ - { - "time": "0:00", - "text": "[SOUND] This lecture is about the discriminative classifiers for text categorization.", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - }, - { - "time": "0:13", - "text": "In this lecture we're going to continue talking about how to do text categorization and cover discriminative approaches. This is a slide that you have seen from the discussion of the Naive Bayes classifier, where we have shown that although the Naive Bayes classifier tries to model the generation of text data from each category, we can actually use Bayes' rule to eventually rewrite the scoring function as you see on this slide. 
And this scoring function is basically a weighted combination of a lot of word features, where the feature values are word counts, and the feature weights are the logs of the probability ratios of the word given by the two distributions here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - }, - { - "time": "0:57", - "text": "Now, this kind of scoring function can actually be a general scoring function, where we can in general represent text data as a feature vector. Of course, the features don't have to be all the words; the features can be other signals that we want to use. And we mentioned that this is very similar to logistic regression. So in this lecture we're going to introduce some discriminative classifiers. They try to model the conditional distribution of labels given the data directly, rather than using Bayes' rule to compute that indirectly as we have seen in naive Bayes. So the general idea of logistic regression is to model the dependency of a binary response variable Y on some predictors that are denoted as X. So here we have also changed the notation to X for feature values.", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - }, - { - "time": "2:07", - "text": "You may recall that in the previous slides we have used f sub i to represent the feature values.", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - }, - { - "time": "2:13", - "text": "And here we use the notation of a vector X, which is more common when we introduce such discriminative algorithms. So X is our input: a vector with n features, and each feature has a value x sub i here. And we will model the dependency of this binary response variable on these features. 
So in our categorization problem, we have two categories, theta 1 and theta 2, and we can use the Y value to denote the two categories: when Y is 1, it means the category of the document is the first class, theta 1. Now, the goal here is to model the conditional probability of Y given X directly, as opposed to modeling the generation of X and Y as in the case of Naive Bayes. Another advantage of this kind of approach is that it allows many features other than words to be used in this vector, since we're not modeling the generation of this vector, and we can plug in any signals that we want. So this is potentially advantageous for doing text categorization. More specifically, in logistic regression we assume the functional form of the dependency of Y on X is the following. And this is very closely related to the log odds that I introduced in Naive Bayes, or the log of the probability ratio of the two categories that you have seen on the previous slide.", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - }, - { - "time": "3:57", - "text": "So this is what I meant. 
So in the case of Naive Bayes, we compute this by using those words, and eventually we reach a formula that looks like this.", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - }, - { - "time": "4:12", - "text": "But here we explicitly assume that we model the probability of Y given X", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - }, - { - "time": "4:29", - "text": "directly as a function of these features.", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - }, - { - "time": "4:37", - "text": "So, more specifically, we assume that the ratio of the probability of Y equals 1 to the probability of Y equals 0 is a function of X.", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - }, - { - "time": "4:54", - "text": "All right, so it's a function of X, and it's a linear combination of these feature values controlled by the beta values.", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - }, - { - "time": "5:02", - "text": "And since we know that the probability of Y equals 0 is one minus the probability of Y equals 1, this can also be written in this way. So this is a log odds ratio here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - }, - { - "time": "5:22", - "text": "And so in logistic regression, we're basically assuming that the probability of Y equals 1 given X depends on this linear combination of all these features. It's just one of many possible ways of modeling this dependency. 
But this particular form has been quite useful and it also has some nice properties.", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - }, - { - "time": "5:47", - "text": "So if we rewrite this equation to actually express the probability of Y given X in terms of X, by getting rid of the logarithm, we get this functional form, and this is called a logistic function. It's a transformation of X into Y, as you see", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - }, - { - "time": "6:08", - "text": "on the right side here, so that the X's will be mapped into a range of values from 0 to 1.0, as you can see. And that's precisely what we want since we have a probability here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - }, - { - "time": "6:24", - "text": "And the functional form looks like this.", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - }, - { - "time": "6:28", - "text": "So this is the basic idea of logistic regression. And it's a very useful classifier that can be used to do a lot of classification tasks including text categorization.", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - }, - { - "time": "6:41", - "text": "So as in all cases of modeling, we would be interested in estimating the parameters. And in fact in all machine learning programs, once you set up the model and the objective function, the next step is to compute the parameter values. In general, we're going to adjust these parameter values to optimize the performance of the classifier on the training data. 
So in our case just assume we have the training data here, xi and yi, and each pair is basically a feature vector x and a known label for that x. Y is either 1 or 0. So in our case we are interested in maximizing this conditional likelihood.", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - }, - { - "time": "7:31", - "text": "The conditional likelihood here is basically to model Y given the observed X; we're not modeling X, but rather we're going to model Y given X. Note that this is a conditional probability of Y given X, and this is also precisely what we want for classification. So the likelihood function would be just a product over all the training cases. And in each case, this is the model of the probability of observing this particular training case. So given a particular Xi, how likely are we to observe the corresponding Yi? Of course, Yi could be 1 or 0, and in fact, the functional form here would vary depending on whether Yi is 1 or 0. If it's 1, we'll be taking this form. And that's basically the logistic regression function. But what about this, if it's 0? Well, if it's 0, then we have to use a different form, and that's this one.", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - }, - { - "time": "8:48", - "text": "Now, how do we get this one? Well, that's just 1 minus the probability of Y=1, right?", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - }, - { - "time": "8:55", - "text": "And you can easily see this. Now the key point here is that the functional form depends on the observed Yi; if it's 1, it has a different form than when it's 0. And if you think about when we want to maximize this probability, we're basically going to want this probability to be as high as possible. 
When the label is 1, that means the document is in category 1. But if the document is not, we're going to maximize this value, and what's going to happen is actually to make this value as small as possible, because the two sum to 1. When we maximize this one, it's equivalent to minimizing this one.", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - }, - { - "time": "9:48", - "text": "So you can see, if we maximize the conditional likelihood, we're basically going to try to make the prediction on the training data as accurate as possible.", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - }, - { - "time": "10:00", - "text": "So, as in other estimation problems, when you compute the maximum likelihood estimate, basically you'll find a set of beta values that maximize this conditional likelihood.", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - }, - { - "time": "10:12", - "text": "And this, again, gives us a standard optimization problem. In this case, it can also be solved in many ways. Newton's method is a popular way to solve this problem; there are other methods as well. But in the end, we will obtain a set of beta values. Once we have the beta values, then we have a scoring function to help us classify a document.", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - }, - { - "time": "10:39", - "text": "So what's the function? Well, it's this one. See, if we have all the beta values, then they are known; all we need is to compute the Xi's for that document and then plug in those values. 
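To make the parameter-estimation step concrete, here is a small sketch that maximizes the conditional log-likelihood by plain gradient ascent rather than Newton's method; the toy training data, learning rate, and iteration count are all invented for illustration:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Maximize sum_i log P(y_i | x_i) by gradient ascent on the betas."""
    n_feat = len(X[0])
    b = [0.0] * (n_feat + 1)          # b[0] is the intercept
    for _ in range(iters):
        grad = [0.0] * (n_feat + 1)
        for xi, yi in zip(X, y):
            p = sigmoid(b[0] + sum(w * v for w, v in zip(b[1:], xi)))
            err = yi - p              # gradient of the log-likelihood w.r.t. z
            grad[0] += err
            for j, v in enumerate(xi):
                grad[j + 1] += err * v
        b = [w + lr * g for w, g in zip(b, grad)]
    return b

# Toy, linearly separable training pairs (hypothetical feature vectors).
X = [[0.0], [1.0], [2.0], [3.0]]
y = [0, 0, 1, 1]
betas = fit_logistic(X, y)
p3 = sigmoid(betas[0] + betas[1] * 3.0)   # estimated P(Y=1 | x=3)
```

Maximizing the likelihood pushes the predicted probability toward 1 for the y=1 training points and toward 0 for the y=0 points, which is the behavior described above.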
That will give us an estimated probability that the document is in category one.", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - }, - { - "time": "10:59", - "text": "Okay, so much for logistic regression. Let's also introduce another discriminative classifier called K-Nearest Neighbors. Now in general, I should say there are many such approaches, and a thorough introduction to all of them is clearly beyond the scope of this course. And you should take a machine learning course or read more about machine learning to know about them. Here, I just want to include a basic introduction to some of the most commonly used classifiers, since you might use them often for text categorization. So the second classifier is called K-Nearest Neighbors. In this approach, we're going to also estimate the conditional probability of the label given the data, but in a very different way. So the idea is to keep all the training examples, and then once we see a text object that we want to classify, we're going to find the K examples in the training set that are most similar to this text object. Basically, this is to find the neighbors of this text object in the training data set. So once we have found the neighborhood, and the objects that are close to the object we are interested in classifying, let's say we have found the K nearest neighbors. That's why this method is called K-Nearest Neighbors. Then we're going to assign the category that's most common in these neighbors. 
Basically we're going to allow these neighbors to vote for the category of the object that we're interested in classifying.", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - }, - { - "time": "12:33", - "text": "Now that means if most of them have a particular category, say category one, they're going to say this current object will have category one.", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - }, - { - "time": "12:43", - "text": "This approach can also be improved by considering the distance between a neighbor and the current object. Basically, we can assume a close neighbor would have more say about the category of the object. So, we can give such a neighbor more influence on the vote. And we can discount votes based on the distances.", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - }, - { - "time": "13:06", - "text": "But the general idea is to look at the neighborhood, and then try to assess the category based on the categories of the neighbors. Intuitively, this makes a lot of sense. But mathematically, this can also be regarded as a way to directly estimate the conditional probability of label given data, that is, p of Y given X.", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - }, - { - "time": "13:28", - "text": "Now I'm going to explain this intuition in a moment, but before we proceed, let me emphasize that we do need a similarity function here in order for this to work. Note that in the Naive Bayes classifier, we did not need a similarity function. And in logistic regression, we did not talk about a similarity function either, but here we explicitly require a similarity function. 
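Putting the voting scheme together with an explicit similarity function, here is a minimal K-NN sketch using cosine similarity; the word-count vectors, vocabulary size, and labels are hypothetical:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two feature vectors (one possible choice)."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def knn_classify(query, train, k):
    """train is a list of (feature_vector, label) pairs; return the majority
    label among the k most similar training examples."""
    neighbors = sorted(train, key=lambda t: cosine(query, t[0]), reverse=True)[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Tiny illustrative word-count vectors over a hypothetical 3-term vocabulary.
train = [([5, 1, 0], "sports"), ([4, 0, 1], "sports"),
         ([0, 3, 4], "politics"), ([1, 4, 5], "politics")]
label = knn_classify([4, 1, 1], train, k=3)
```

Distance-weighted voting, as mentioned above, would simply replace the plain `Counter` tally with a sum of similarity scores per label.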
Now this similarity function actually is a good opportunity for us to inject any of our insights about the features. Basically, effective features are those that would make the objects that are in the same category look more similar, while distinguishing objects in different categories. So the design of this similarity function is closely tied to the design of the features in logistic regression and other classifiers. So let's illustrate how K-NN works. Now suppose we have a lot of training instances here. And I've colored them differently to show the different categories. Now suppose we have a new object in the center that we want to classify. So according to this approach, we start by finding the neighbors. Now, let's first think of a special case of finding just one neighbor, the closest neighbor.", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - }, - { - "time": "14:53", - "text": "Now in this case, let's assume the closest neighbor is the box filled with diamonds. And since this object is in the diamond category, let's say, we're going to assign the same category to our text object. But let's also look at another possibility of finding a larger neighborhood, so let's think about the four nearest neighbors.", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - }, - { - "time": "15:26", - "text": "In this case, we're going to include a lot of other solid filled boxes in red or pink, right? So in this case now, we're going to notice that among the four neighbors, there are three neighbors in a different category. So if we take a vote, then we'll conclude the object is actually of a different category. So this both illustrates how K-Nearest Neighbors works and also illustrates some potential problems of this classifier. 
Basically, the results might depend on k, and indeed, k is an important parameter to optimize. Now, you can intuitively imagine that if we have a lot of neighbors around this object, then we'd be okay, because we have a lot of neighbors who will help us decide the category. But if we have only a few, then the decision may not be reliable. So on the one hand, we want to find more neighbors, right? Then we have more votes. But on the other hand, as we try to find more neighbors, we actually risk getting neighbors that are not really similar to this instance. They might actually be far away as you try to get more neighbors. So although you get more neighbors, those neighbors aren't necessarily so helpful, because they are not very similar to the object. So the parameter still has to be set empirically. And typically, you can optimize such a parameter by using cross validation. Basically, you're going to separate your training data into two parts, and then you're going to use one part to help you choose the parameter k here, or some other parameters in other classifiers. And then you're going to assume the value that works well on your training data will actually be the best for your future data.", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - }, - { - "time": "17:23", - "text": "So as I mentioned, K-NN can actually be regarded as an estimate of the conditional probability of Y given X, and that's why we put this in the category of discriminative approaches. So the key assumption that we make in this approach is that the distribution of the label given the document, for example the probability of theta i given document d, is locally smooth. And that just means we're going to assume that this probability is the same for all the documents in this region R here. 
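The held-out tuning recipe for k described above can be sketched as follows; the Euclidean distance, toy data, and single train/validation split are stand-ins (a full cross-validation would average over several splits):

```python
from collections import Counter

def knn_predict(query, train, k):
    # Euclidean distance here; any (dis)similarity function plugs in the same way.
    near = sorted(train, key=lambda t: sum((a - b) ** 2 for a, b in zip(query, t[0])))[:k]
    return Counter(label for _, label in near).most_common(1)[0][0]

def choose_k(train, held_out, candidate_ks):
    """Return the k with the highest accuracy on the held-out split."""
    def accuracy(k):
        return sum(knn_predict(x, train, k) == y for x, y in held_out) / len(held_out)
    return max(candidate_ks, key=accuracy)

# Toy 1-D data with one mislabeled ("noisy") training point near the "a" region.
train = [([0.0], "a"), ([1.0], "a"), ([0.6], "b"), ([5.0], "b"), ([6.0], "b")]
held_out = [([0.5], "a"), ([5.5], "b")]
best_k = choose_k(train, held_out, [1, 3])   # k=1 is fooled by the noisy point
```

This illustrates the trade-off from the lecture: a tiny k is sensitive to individual noisy neighbors, while a larger k smooths the vote over more of the neighborhood.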
And suppose we draw a neighborhood; since the data instances in this neighborhood are very similar, we're going to assume that the conditional distribution of the label given the data will be roughly the same. If the documents are very similar, then we're going to assume that the probability of the category given each document would also be similar. So that's a key assumption, and it's actually an important assumption that enables all of this machinery. But in reality, whether this is true would of course depend on how we define similarity, because the neighborhood is largely determined by our similarity function. If our similarity function captures objects that do follow similar distributions, then this assumption is okay; but if our similarity function cannot capture that, obviously this assumption would be a problem, and then the classifier would not be accurate.", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - }, - { - "time": "18:59", - "text": "Okay, let's proceed with this assumption. Then what we are saying is that, in order to estimate the probability of a category given a document, we can try to estimate the probability of the category given that entire region. Now, this has the benefit, of course, of bringing additional data points to help us estimate this probability.", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - }, - { - "time": "19:22", - "text": "And so this is precisely the idea of K-NN. Basically now we can use the known categories of all the documents in this region to estimate this probability. And I have even given a formula here where you can see we just count the topics in this region and then normalize that by the total number of documents in the region. So the numerator that you see here, c of theta i and R, is a count of the documents in region R with category theta i. 
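The count-and-normalize estimate just described, c(theta i, R) divided by the total number of documents in R, can be sketched directly; the category labels below are placeholders for illustration:

```python
from collections import Counter

def region_estimate(neighbor_labels):
    """Estimate p(theta_i | R) as count(theta_i in R) / |R| over the K neighbors."""
    counts = Counter(neighbor_labels)
    total = len(neighbor_labels)
    return {label: count / total for label, count in counts.items()}

# Labels of the K = 4 nearest training documents (hypothetical).
probs = region_estimate(["diamond", "diamond", "star", "diamond"])
```

The estimates are proper probabilities (they sum to 1), and assigning the most popular category is just taking the argmax of this dictionary.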
Since these are training documents, we know their categories, so we can simply count how many documents of each category we see here. And then the denominator is just the total number of training documents in this region. So this gives us a rough estimate of which category is most popular in this neighborhood. And we are going to assign the most popular category to our data object, since it falls into this region. [MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/kgKI9/5-1-text-categorization-discriminative-classifier-part-1" - } - ] - }, - { - "5-2-text-categorization-discriminative-classifier-part-2": [ - { - "time": "0:07", - "text": "[SOUND] This lecture is a continued discussion of Discriminative Classifiers for Text Categorization. So, in this lecture, we're going to introduce yet another discriminative classifier called the Support Vector Machine, or SVM, which is a very popular classification method and has also been shown to be effective for text categorization.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "0:31", - "text": "So to introduce this classifier, let's also think about the simple case of two categories. We have two topic categories, theta 1 and theta 2 here. And we want to classify documents into these two categories, and we're going to again represent a document by a feature vector x here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "0:53", - "text": "Now, the idea of this classifier is to design also a linear separator", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "0:59", - "text": "here, as you'll see, and it's very similar to what you have seen in logistic regression, right? 
And we're also going to say that if the sign of this function value is positive, then we're going to say the object is in category one. Otherwise, we're going to say it's in category two. So zero marks the decision boundary between the two categories.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "1:28", - "text": "So, in a general high-dimensional space, setting this function to 0 corresponds to a hyperplane.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "1:38", - "text": "Now I've shown you the simple case of a two-dimensional space with just X1 and X2, and in this case this corresponds to the line that you can see here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "1:51", - "text": "So, this is a line defined by just three parameters here, beta zero, beta one, and beta two.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "2:02", - "text": "Now, this line is heading in this direction, so it shows that as we increase X1, X2 will also increase. So we know that beta one and beta two have different signs; one is negative and the other is positive.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "2:20", - "text": "So let's just assume that beta one is negative and beta two is positive.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "2:28", - "text": "Now, it's interesting to examine, then, the data instances on the two sides of the line. 
So, here, the data instances are visualized as circles for one class and diamonds for the other class.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "2:43", - "text": "Now, one question is to take a point like this one and ask: what's the value of this expression, or this classifier, for this data point?", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "2:55", - "text": "So what do you think? Basically, we're going to evaluate its value by using this function.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "3:01", - "text": "And as we said, if this value is positive we're going to say this is in category one, and if it's negative, it's going to be in category two. Intuitively, this line separates the two categories, so we expect the points on one side to be positive and the points on the other side to be negative. So, under the assumption that I just mentioned, let's examine a particular point like this one.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "3:27", - "text": "So what do you think is the sign of this expression?", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "3:31", - "text": "Well, to examine the sign we can simply look at this expression here. 
And we can compare this with, let's say,", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "3:42", - "text": "a value on the line; let's say, compare this with this point.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "3:48", - "text": "Well, they have identical X1, but one has a higher value for X2.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "3:54", - "text": "Now, let's look at the sign of the coefficient for X2. Well, we know this is positive.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "4:02", - "text": "So, what that means is that the f value for this point should be higher than the f value for this point on the line, which means this will be positive, right?", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "4:16", - "text": "So we know in general, for all points on this side,", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "4:20", - "text": "the function's value will be positive, and you can also verify that all the points on the other side will be negative. And so this is how this kind of linear classifier, or linear separator, can separate the points in the two categories.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "4:37", - "text": "So, now the natural question is, which linear separator is the best? Now, I've given you one line here that can separate the two classes. 
And this line, of course, is determined by the vector beta, the coefficients. Different coefficients will give us different lines. So, we could imagine there are other lines that can do the same job; a different coefficient vector, for example, could give us another line that can also separate these instances.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "5:06", - "text": "Of course, there are also lines that won't separate them, and those are bad lines. But the question is, when we have multiple lines that can separate both classes, which line is the best? In fact, you can imagine there are many different ways of choosing the line. The logistic regression classifier that you have seen earlier is a linear separator as well, and it actually uses some criteria to determine where this line should be: it uses the conditional likelihood on the training data to determine which line is the best. But in SVM we're going to look at another criterion for determining which line is the best. And this time, the criterion is more tied to the classification error, as you will see.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "5:49", - "text": "So, the basic idea is to choose the separator to maximize the margin. So what is a margin? So, I've drawn some dotted lines here to indicate the boundaries of the data points in each class. 
And the margin is simply the distance between the line, the separator, and the closest point from each class.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "6:18", - "text": "So you can see the margin of this side is as I've shown here, and you can also define the margin on the other side.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "6:27", - "text": "In order for the separator to maximize the margin, it has to be kind of in the middle of the two boundaries, and you don't want this separator to be very close to one side; that intuitively makes a lot of sense.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "6:44", - "text": "So this is the basic idea of SVM. We're going to choose a linear separator to maximize the margin.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "6:52", - "text": "Now on this slide, I've also changed the notation, so I'm not going to use beta to denote the parameters. Instead, I'm going to use w. Although w was used to denote words before, don't be confused here: w here actually denotes a weight.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "7:12", - "text": "So I'm also using lowercase b to denote the beta 0, a bias constant.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "7:20", - "text": "And the data instances are represented as x, and I also use the vector form of multiplication here. 
So we see the transpose of the w vector multiplied by the feature vector.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "7:35", - "text": "So b is a bias constant and w is a set of weights, with one weight for each feature. We have m features, so we have m weights, and we represent that as a vector.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "7:47", - "text": "And similarly, the data instance here, the text object, is also represented by a feature vector with the same number of elements. Xi is a feature value, for example a word count, and you can verify that when we multiply these two vectors together, taking the dot product, we get the same form of the linear separator as you have seen before. It's just a different way of representing it. I use this notation so that it's more consistent with what people usually use when they talk about SVM. This way you can better connect the slides with other readings you might do.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "8:31", - "text": "Okay, so when we maximize the margins of a separator, it just means the boundary of the separator is only determined by a few data points, and these are the data points that we call support vectors. So here illustrated are two support vectors for one class and two for the other class. And these points basically define the margin, and you can imagine that once we know which points are the support vectors, then this", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "9:06", - "text": "center separator line will be determined by them. So the other data points actually don't really matter that much. 
And you can see that if you change the other data points, it won't really affect the margin, so the separator will stay the same. It's mainly affected by the support vectors, and that's why it's called a support vector machine. Okay, so now the next question is, of course, how can we set this up to optimize the line? How can we actually find the line, or the separator? Now this is equivalent to finding values for w and b, because they will determine where exactly the separator is.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "9:58", - "text": "So in the simplest case, the linear SVM is just a simple optimization problem. So again, let's recall that our classifier is such a linear separator, where we have weights for all the features, and the main goal is to learn these weights w and b. And the classifier will say X is in category theta 1 if the value is positive; otherwise, it's going to say it's in the other category. So this is our assumption, our setup. So in the linear SVM, we are going to seek the parameter values that optimize the margin and the training error.
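The decision rule just described, taking the sign of w transpose x plus b, can be written directly; the weights and bias below are arbitrary placeholders, not learned values:

```python
def svm_decision(w, b, x):
    """Assign theta_1 if w . x + b > 0, otherwise theta_2."""
    score = sum(wj * xj for wj, xj in zip(w, x)) + b
    return "theta_1" if score > 0 else "theta_2"

# Arbitrary (not learned) weights for two features, just to show the rule.
w, b = [1.5, -2.0], 0.25
label_pos = svm_decision(w, b, [2.0, 0.5])   # score = 3.0 - 1.0 + 0.25 = 2.25
label_neg = svm_decision(w, b, [0.0, 1.0])   # score = -2.0 + 0.25 = -1.75
```

Learning (finding good w and b) is the quadratic program described next; classification itself is just this dot product and a sign check.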
And this is purely for mathematical convenience, as you will see in a moment.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "11:16", - "text": "So the first goal of the optimization is to make sure the labeling of the training data is all correct. That just means if y i, the known label for instance x i, is 1, we would like this classifier value to be large. And here we just choose a threshold of 1. But if you use another threshold, you can easily fold that constant into the parameter values b and w to make the right-hand side just 1.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "11:48", - "text": "Now if, on the other hand, y i is -1, which means it's in the other class, then we want this classifier to give us a very small value, in fact a negative value, and we want this value to be less than or equal to -1. Now these are two different kinds of cases. How can we combine them? This is where it's convenient to have chosen y i as -1 for the other category, because it turns out that we can combine the two into one constraint.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "12:26", - "text": "y i multiplied by the classifier value must be larger than or equal to 1.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "12:33", - "text": "And obviously when y i is just 1, you see this is the same as the constraint on the left-hand side. But when y i is -1, you also see that this is equivalent to the other inequality. 
So this one actually captures both constraints in a unified way, and that's a convenient way of capturing these constraints. What's our second goal? Well, that's to maximize the margin. We want to ensure the separator can do well on the training data, but then, among all the cases where we can separate the data, we also would like to choose the separator that has the largest margin. Now the margin can be assumed to be related to the magnitude of the weights. And so w transpose multiplied by w would give us basically the sum of squares of all those weights. So having a small value for this expression means all the w i's must be small.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "13:42", - "text": "So we've just assumed that we have a constraint for", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "13:46", - "text": "getting the data on the training set to be classified correctly. Now we also have the objective that's tied to the maximization of the margin, and this is simply to minimize w transpose multiplied by w, which we often denote by phi of w. So now you can see this is basically an optimization problem. We have some variables to optimize, namely the weights and b, and we have some constraints. These are linear constraints, and the objective function is a quadratic function of the weights. So this is a quadratic program with linear constraints, and there are standard algorithms available for solving this problem. And once we solve the problem, we obtain the weights w and b. This then gives us a well-defined classifier, and we can use this classifier to classify any new text objects. Now, the previous formulation did not allow any error in the classification, but sometimes the data may not be linearly separable. 
That means that they may not look as nice as you have seen on the previous slide where a line can separate all of them. And what would happen if we allowed some errors? Well, the principle can stay. We want to minimize the training error but try to also maximize the margin. But in this case we have a soft margin, because the data points may not be completely separable.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "15:17", - "text": "So it turns out that we can easily modify SVM to accommodate this. So what you see here is very similar to what you have seen before, but we have introduced the extra variable xi i. And we in fact will have one for each data instance, and this is going to model the error that we allow for each instance. But the optimization problem would be very similar. So specifically, you will see we have added something to the optimization problem. First we have added some error to the constraint so that we now allow the classifier to make some mistakes here. So this xi i is the allowed error. If we set xi i to 0, then we go back to the original constraint. We want every instance to be classified accurately. But, if we allow this to be non-zero, then we allow some errors here. In fact, if the value of xi i is very large, the error can be very, very large. So naturally, we don't want this to happen. So we then want to also minimize this xi i, because xi i needs to be minimized in order to control the error.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "16:42", - "text": "And so, as a result, in the objective function, we also add more to the original one, which involved only w, by basically ensuring that we not only minimize the weights, but also minimize the errors, as you see here. Here we simply take a sum over all the instances. 
Each one has a xi i to model the error allowed for that instance. And when we combine them together, we basically want to minimize the errors on all of them.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "17:16", - "text": "Now you see there's a parameter C here, and that's a constant to control the trade-off between minimizing the errors and maximizing the margin. If C is set to zero, you can see, we go back to the original objective function where we only maximize the margin.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "17:34", - "text": "We don't really optimize the training errors, and then xi i can be set to a very large value to make the constraints easy to satisfy. That's not very good of course, so C should be set to a non-zero value, a positive value. But when C is set to a very, very large value, we'll see the objective function will be dominated mostly by the training errors, and so the optimization of margin will then play a secondary role. So if that happens, what would happen is", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "18:07", - "text": "then we will try to do our best to minimize the training errors, but then we're not going to take care of the margin, and that affects the generalization power of the classifier for future data. So it's also not good. So in particular, this parameter C has to be actually set carefully. And this is just like in the case of k-nearest neighbor where you need to optimize the number of neighbors. Here you need to optimize C. And this is, in general, also achievable by doing cross-validation. 
Basically, you look at the empirical data and see what value C should be set to in order to optimize the performance.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "18:49", - "text": "Now with this modification, the problem is still quadratic programming with linear constraints, so the same optimization algorithms can actually be applied to solve this new version of the problem.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "19:02", - "text": "Again, once we have obtained the weights and the bias, then we have a classifier that's ready for classifying new objects. So that's the basic idea of SVM.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "19:16", - "text": "So to summarize the text categorization methods: we introduced many methods. Some are generative models. Some are discriminative methods. And these tend to perform similarly when optimized. So there's still no clear winner, although each one has its pros and cons. And the performance might also vary on different data sets for different problems. And one reason is also because the feature representation is very critical", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "19:52", - "text": "and these methods all require effective feature representation. 
And to design an effective feature set, we need domain knowledge, and humans definitely play an important role here, although there are new machine learning methods and algorithms, such as representation learning, that can help with learning features.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "20:12", - "text": "And another common thing is that they might be performing similarly on the data set, but with different mistakes. And so, their performance might be similar, but then the mistakes they make might be different. So that means it's useful to compare different methods for a particular problem and then maybe combine multiple methods, because this can improve the robustness and they won't make the same mistakes. So ensemble approaches that would combine different methods tend to be more robust and can be useful in practice. Most techniques that we introduced use supervised machine learning, which is a very general method. So that means that these methods can be actually applied to any text categorization problem. As long as we have humans to help annotate some training data sets and design features, then supervised machine learning and all these classifiers can be easily applied to those problems to solve the categorization problem, to allow us to characterize content of text concisely with categories. Or to predict some properties of real-world variables that are associated with text data. The computers, of course, here are trying to optimize the combinations of the features provided by humans. 
And as I said, there are many different ways of combining them, and they also optimize different objective functions.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "21:58", - "text": "But in order to achieve good performance, they all require effective features and also plenty of training data.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "22:04", - "text": "So as a general rule, if you can improve the feature representation and then provide more training data, then you can generally do better. Performance is often much more affected by the effectiveness of features than by the choice of specific classifiers. So feature design tends to be more important than the choice of specific classifier.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "22:30", - "text": "So, how do we design effective features? Well, unfortunately, this is very application-specific. So there's not really much general advice to give here. But we can do some analysis of the categorization problem and try to understand what kind of features might help us distinguish categories. And in general, we can use a lot of domain knowledge to help us design features.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "23:01", - "text": "And another way to figure out effective features is to do error analysis on the categorization results. You could, for example, look at which category tends to be confused with which other categories. And you can use a confusion matrix to examine the errors systematically across categories. 
And then, you can look into specific instances to see why the mistake has been made and what features can prevent the mistake. And this can allow you to obtain insights for designing new features. So error analysis is very important in general, and that's where you can get the insights about your specific problem.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "23:42", - "text": "And finally, we can leverage machine learning techniques. So, for example, feature selection is a technique that we haven't really talked about, but is very important. And it has to do with trying to select the most useful features before you actually train a full classifier. Sometimes training a classifier will also help you identify which features have high weights. There are also other ways to ensure the sparsity of the model, meaning to regularize the weights. For example, the SVM actually tries to minimize the weights on features. But you can further constrain the model, forcing it to use only a small number of features.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "24:21", - "text": "There are also techniques for dimension reduction. And that's to reduce a high-dimensional feature space into a low-dimensional space, typically by clustering of features in various ways. So matrix factorization has been used to do such a job, and some of these techniques are actually very similar to the topic models that we'll discuss. So topic models like PLSA or LDA can actually help us reduce the dimension of features. Imagine the words are our original features, but they can be mapped to the topic space. Let's say we have k topics. So a document can now be represented as a vector of just k values corresponding to the topics. 
So we can let each topic define one dimension, so we have a k-dimensional space instead of the original high-dimensional space corresponding to words. And this is often another way to learn effective features. Especially, we could also use the categories to supervise the learning of such low-dimensional structures.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "25:29", - "text": "And so, the original word features can also be combined with such reduced-dimension features, or lower-dimensional space features, to provide a multi-resolution representation, which is often very useful. Deep learning is a new technique that has been developed in machine learning.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "25:51", - "text": "It's particularly useful for learning representations. So deep learning refers to deep neural networks; it's another kind of classifier, where you can have intermediate features embedded in the models. It's highly non-linear, and some recent advances have allowed us to train such complex networks effectively. And the technique has been shown to be quite effective for speech recognition, computer vision, and recently has been applied to text as well. It has shown some promise. And one important advantage of this approach in", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "26:34", - "text": "relationship with feature design, is that it can learn intermediate representations or compound features automatically. And this is very valuable for learning effective representations for text categorization. Although in the text domain, words are already a good representation of text content, because they are a human invention for communication. 
And they are generally sufficient for representing content for many tasks. If there's a need for some new representation, people would have invented a new word. So because of this, we think the value of deep learning for text processing tends to be lower than for [INAUDIBLE] and speech recognition, where there is no counterpart to words that can be designed to work as features.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "27:31", - "text": "But deep learning is still very promising for learning effective features, especially for complicated tasks. For sentiment analysis, for example, it has been shown to be effective", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "27:41", - "text": "because it can provide a representation that goes beyond that of words.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "27:47", - "text": "Now regarding the training examples. It's generally hard to get a lot of training examples because it involves human labor.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "27:56", - "text": "But there are also some ways to help with this. So one is to assume that some low-quality training examples can also be used. So, those can be called pseudo training examples. For example, if you take reviews from the internet, they might have overall ratings. So, to train a sentiment categorizer, meaning we want to label reviews as positive or negative, we can categorize these reviews into these two categories. Then we could assume five-star reviews are all positive training examples, and one-star reviews are negative. 
But of course, sometimes even five-star reviews will also mention negative opinions, so the training samples are not of that high quality, but they can still be useful.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "28:45", - "text": "Another idea is to exploit the unlabeled data, and there are techniques called semi-supervised machine learning techniques that can allow you to combine labeled data with unlabeled data. So, in this case, it's easy to see that a mixture model can be used for both text clustering and categorization. So you can imagine, if you have a lot of unlabeled text data for categorization, then you can actually do clustering on these text data, learn categories. And then try to somehow align these categories with the categories defined by the training data, where we already know which documents are in which category. So you can in fact use the EM algorithm to actually combine both. That would essentially allow you to also pick up useful words from the unlabeled data. You can think of this in another way. Basically, we can use, let's say, a classifier to classify all of the unlabeled text documents, and then we're going to assume the high-confidence classification results are actually reliable. Then you suddenly have more training data, because from the unlabeled data we now know some are labeled as category one, some are labeled as category two. Although the labels are not completely reliable, they can still be useful. So we assume they are actual labeled training examples, and then we combine them with the true training examples to improve the categorization method. 
And so this idea is very powerful.", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "30:23", - "text": "When the unlabeled data and the training data are very different, we might need to use other advanced machine learning techniques called domain adaptation or transfer learning. This is when we can borrow some training examples from a related problem that may be different, or from a categorization task", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - }, - { - "time": "30:46", - "text": "that follows a very different distribution from what we are working on. But basically, when the two domains are very different, then we need to be careful and not overfit the training domain. But we still want to use some signals from the related training data. So for example, training a categorizer on news might not give you an effective classifier for classifying topics in tweets. But you can still learn something from news to help with classifying tweets. So there are machine learning techniques that can help you do that effectively. Here's a suggested reading where you can find more details about some of the methods that we have covered. [MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/L7dem/5-2-text-categorization-discriminative-classifier-part-2" - } - ] - }, - { - "5-3-text-categorization-evaluation-part-1": [ - { - "time": "0:00", - "text": "[SOUND] This lecture is about the Evaluation of Text Categorization. So we've talked about many different methods for text categorization. But how do you know which method works better?", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "0:19", - "text": "And for a particular application, how do you know this is the best way of solving your problem? 
To understand these, we have to", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "0:29", - "text": "know how to evaluate categorization results. So first, some general thoughts about the evaluation.", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "0:38", - "text": "In general, for evaluation of this kind of empirical task, such as categorization, we use a methodology that was developed in the 1960s by information retrieval researchers, called the Cranfield evaluation methodology. The basic idea is to have humans create a test collection,", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "0:59", - "text": "where, we already know, every document is tagged with the desired categories. Or, in the case of search, for each query, which documents should have been retrieved. And this is called the ground truth. Now, with this ground-truth test collection, we can then reuse the collection to test many different systems and then compare different systems. We can also turn off some components in the system to see what's going to happen. 
Basically it provides a way to do controlled experiments to compare different methods.", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "1:36", - "text": "So this methodology has been used for virtually all the tasks that involve empirically defined problems.", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "1:45", - "text": "So in our case, then, we are going to compare our system's categorization results with the categorization ground truth created by humans.", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "1:56", - "text": "And we're going to compare our system's decisions,", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "2:00", - "text": "which documents should get which category, with what", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "2:06", - "text": "categories have been assigned to those documents by humans. And we want to quantify the similarity of these decisions or, equivalently, to measure the difference between the system output and the desired ideal output generated by the humans.", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "2:25", - "text": "So obviously, the higher the similarity, the better the results are.", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "2:30", - "text": "The similarity could be measured in different ways. And that would lead to different measures. 
And sometimes it's desirable also to measure the similarity from different perspectives, just to have a better understanding of the results in detail. For example, we might be also interested in knowing which category performs better and which category is easy to categorize, etc. In general, different categorization mistakes, however, have different costs for specific applications. So some errors might be more serious than others. So ideally, we would like to model such differences, but if you read many papers in categorization you will see that they don't generally do that. Instead, they will use a simplified measure, and that's because it's often okay not to consider such a cost variation when we compare methods and when we are interested in knowing the relative difference of these methods. So it's okay to introduce some bias, as long as the bias does not favor a particular method, and then we should expect the more effective method to perform better than a less effective one, even though the measure is not perfect.", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "3:53", - "text": "So the first measure that we'll introduce is called classification accuracy, and this basically measures the percentage of correct decisions. So here you see that there are categories denoted by c1 through ck and there are n documents, denoted by d1 through dN. And for each pair of category and document, we can then look at the situation.", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "4:16", - "text": "And see if the system has said yes to this pair, basically has assigned this category to this document. Or no. So this is denoted by y or n; that's the system's decision. 
And similarly, we can look at the human's decisions also: if the human has assigned a category to the document, then there will be a plus sign here. That just means that the human thinks this assignment is correct; if incorrect, then it's a minus. So we'll see all combinations of these yeses and nos, pluses and minuses. There are four combinations in total. And two of them are correct, and that's when we have y(+) or n(-), and then there are also two kinds of errors. So the measure of classification accuracy is simply to count how many of these decisions are correct. And normalize that by the total number of decisions we have made. So, we know that the total number of decisions is n, multiplied by k.", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "5:20", - "text": "And, the number of correct decisions are basically of two kinds. One is the y pluses, and the other is the n minuses; we just add up the counts. Now, this is a very convenient measure that will give us one number to characterize the performance of a method. And the higher, the better, of course.", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "5:41", - "text": "But the method also has some problems. First, it has treated all the decisions equally. But in reality, some decision errors are more serious than others. 
For example, it may be more important to get the decisions right on some documents, than others.", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "5:58", - "text": "Or maybe it's more important to get the decisions right on some categories, than others, and this would call for some detailed evaluation of the results to understand the strengths and weaknesses", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "6:12", - "text": "of different methods, and to understand the performance of these methods in detail, on a per-category or per-document basis. One example that clearly shows that decision errors have different costs is spam filtering, which can be treated as a two-category categorization problem.", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "6:36", - "text": "Missing a legitimate email is one type of error, but letting spam come into your folder is another type of error. The two types of errors are clearly very different, because it's very important not to miss a legitimate email. It's okay to occasionally let a spam email come into your inbox. So the first error, missing a legitimate email, is of high cost; it's a very serious mistake, and classification accuracy does not address this issue.", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "7:14", - "text": "There's also another problem with imbalanced test sets. Imagine there's a skewed test set where most instances are in category one; say 98% of instances are in category one. Only 2% are in category two. In such a case, we can have a very simple baseline that apparently performs very well. And that baseline 
simply puts all instances in the majority category.", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "7:36", - "text": "That will get us 98% accuracy in this case. It's going to appear to be very effective, but in reality, this is obviously not a good result.", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "7:47", - "text": "And so, in general, when we use classification accuracy as a measure, we want to ensure that the classes are balanced.", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "7:54", - "text": "With about an equal number of instances in each class, for example; otherwise the minority categories or classes tend to be overlooked in the evaluation of classification accuracy. So, to address these problems, we of course would like to also evaluate the results in other, different ways. As I said, it's beneficial to look at the results from multiple perspectives. So for example, we can look at the results from the perspective of each document. So the question here is, how good are the decisions on this document?", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "8:29", - "text": "Now, as in the general cases of all decisions, we can think about four combinations of possibilities, depending on whether the system has said yes, and depending on whether the human has said it's correct or incorrect, or said yes or no. And so the four combinations are, first, when both the human and the system said yes, and that's the true positives. When the system says yes, it's a positive. 
But when the human confirms that it is indeed correct, that becomes a true positive.", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "9:07", - "text": "When the system says yes, but the human says no, that's incorrect, that's a false positive, or FP.", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "9:15", - "text": "And when the system says no, but the human says yes, then it's a false negative. We missed one assignment. When both the system and the human say no, then it's also correct; those are the true negatives. All right, so then we can have some measures to just better characterize the performance by using these four numbers, and so two popular measures are precision and recall. And these were also proposed by information retrieval researchers in the 1960s for evaluating search results, but now they have become standard measures, used everywhere. So when the system says yes, we can ask the question, how many are correct? What's the percent of correct decisions when the system says yes? That's called precision. It's the true positives divided by all the cases when the system says yes, all the positives. The other measure is called recall, and this measures", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "10:14", - "text": "whether the document has all the categories it should have. So in this case we divide the true positives by the sum of true positives and false negatives. So these are all the cases where the human says the document should have this category. 
So this represents all the categories that the document should have gotten, and so recall tells us whether the system has actually indeed assigned all the categories that it should have to this document.", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "10:46", - "text": "This gives us a detailed view of the document, and then we can aggregate them later.", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "10:52", - "text": "And if we're interested in some documents, this will tell us how well we did on those documents; some subsets of them might be more interesting than others, for example. And this allows us to analyze errors in more detail as well. We can separate the documents with certain characteristics from others, and then look at the errors. You might see a pattern: the method works well for one kind of document, say long documents, but doesn't do as well for short documents.", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "11:18", - "text": "And this gives you some insight for improving the method. Similarly, we can look at the per-category evaluation. In this case, we're going to look at how good the decisions are on a particular category. As in the previous case, we can define precision and recall. And it would just basically answer the questions from a different perspective.", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "11:39", - "text": "So when the system says yes, how many are correct? That means looking at this category to see if all the documents that are assigned with this category are indeed in this category, right? 
And recall would tell us whether the category has actually been assigned to all the documents that should have this category.", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "12:00", - "text": "It's sometimes also useful to combine precision and recall into one measure, and this is often done by using the F measure. This is just the harmonic mean of precision and recall, as defined on this slide. And it's also controlled by a parameter beta to", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "12:20", - "text": "indicate whether precision is more important or recall is more important. When beta is set to 1, we have a measure called F1, and in this case we just put equal weight on both precision and recall.", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "12:34", - "text": "F1 is very often used as a measure for categorization.", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "12:39", - "text": "Now, as in all cases when we combine results, you should always think about the best way of combining them. In this case, I don't know if you have thought about it, but we could have combined them just with the arithmetic mean, right? That would still give us the same range of values, but obviously there's a reason why we didn't do that and why F1 is more popular, and it's actually useful to think about the difference. When we think about that, we'll see that there is indeed some difference, and some undesirable property of the arithmetic mean. Basically, it will be obvious to you if you think about a case when the system says yes for all the category and document pairs. And then try to compute the precision and recall in that case.
And see what would happen.", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "13:28", - "text": "And basically, this kind of measure, the arithmetic mean, is not going to be as reasonable as F1, which [INAUDIBLE] trade-off so that the two values are equal. There is an extreme case where you have 0 for one value and 1 for the other. Then F1 will be low, but the arithmetic mean would still be reasonably high.", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - }, - { - "time": "14:01", - "text": "[MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/qudy6/5-3-text-categorization-evaluation-part-1" - } - ] - }, - { - "5-4-text-categorization-evaluation-part-2": [ - { - "time": "0:00", - "text": "[SOUND] This lecture is a continued discussion of evaluation of text categorization. Earlier we introduced measures that can be used to compute precision and recall for each category and each document. Now in this lecture we're going to", - "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" - }, - { - "time": "0:27", - "text": "further examine how to combine the performance over the different categories or different documents: how to aggregate them, how do we take an average? You see in the title here I indicated it's called a macro average, and this is in contrast to the micro average that we'll talk more about later.", - "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" - }, - { - "time": "0:47", - "text": "So, again, for each category we're going to compute the precision, recall, and F1. So for example for category c1 we have precision p1, recall r1, and F value f1. And similarly we can do that for category 2 and all the other categories.
Now once we compute that, we can aggregate them; for example we can aggregate all the precision values for all the categories to compute an overall precision. And this is often very useful to summarize what we have seen in the whole data set. And aggregation can be done in many different ways. Again, as I said, in a case when you need to aggregate different values, it's always good to think about what's the best way of doing the aggregation. For example, we can consider the arithmetic mean, which is very commonly used, or we can use the geometric mean, which would have different behavior. Depending on the way you aggregate, you might reach different conclusions in terms of which method works better, so it's important to consider these differences and choose the right one, or a more suitable one, for your task. The difference, for example, between the arithmetic mean and the geometric mean is that the arithmetic mean would be dominated by high values, whereas the geometric mean would be more affected by low values. So whether you want to emphasize low values or high values is a question related to your application. And similarly we can do that for recall and F score. So that's how we can generate the overall precision, recall, and F score.", - "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" - }, - { - "time": "2:31", - "text": "Now we can do the same for aggregation over all the documents. It's exactly the same situation: for each document we compute precision, recall, and F. And then after we have completed the computation for all these documents, we're going to aggregate them to generate the overall precision, overall recall, and overall F score.", - "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" - }, - { - "time": "2:53", - "text": "These are, again, examining the results from different angles.
Which one's more useful will depend on your application. In general, it's beneficial to look at the results from all these perspectives. And especially if you compare different methods in different dimensions, it might reveal which method is better in which measure or in what situations, and this provides an insightful understanding of the strengths or weaknesses of a method, and further insight for improving it.", - "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" - }, - { - "time": "3:28", - "text": "So as I mentioned, there is also the micro average, in contrast to the macro average that we talked about earlier. In this case, what we do is pool together all the decisions, and then compute the precision and recall.", - "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" - }, - { - "time": "3:45", - "text": "So we can compute the overall precision and recall by just counting how many cases are true positives, how many cases are false positives, etc.; that is, computing the values in the contingency table, and then we compute the precision and recall just once.", - "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" - }, - { - "time": "4:06", - "text": "In contrast, in macro-averaging, we're going to do that for each category first.
And then aggregate over these categories, or we do that for each document and then aggregate over all the documents; but here we pool them together.", - "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" - }, - { - "time": "4:21", - "text": "Now this would be very similar to the classification accuracy that we used earlier, and one problem here of course is that it treats all the instances, all the decisions, equally.", - "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" - }, - { - "time": "4:32", - "text": "And this may not be desirable.", - "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" - }, - { - "time": "4:36", - "text": "But it may be appropriate for some applications, especially if we associate, for example, a cost with each combination. Then we can actually compute, for example, a weighted classification accuracy, where you associate a different cost or utility with each specific decision,", - "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" - }, - { - "time": "4:56", - "text": "so there could be variations of these methods that would be more useful. But in general the macro average tends to be more informative than the micro average, just because it might reflect the need for understanding performance", - "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" - }, - { - "time": "5:14", - "text": "on each category or performance on each document, which are needed in applications. But macro averaging and micro averaging are both very common, and you might see both reported in research papers on categorization.
Also, sometimes categorization results might actually be evaluated from a ranking perspective.", - "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" - }, - { - "time": "5:40", - "text": "And this is because categorization results are sometimes, or often, indeed passed to a human for various purposes. For example, they might be passed to humans for further editing. For example, news articles can be tentatively categorized by a system, and then human editors would correct them.", - "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" - }, - { - "time": "6:02", - "text": "Or the email messages might be routed to the right person for handling at the help desk. In such a case the categorization will help prioritize the tasks for a particular customer service person.", - "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" - }, - { - "time": "6:19", - "text": "So, in this case the results have to be prioritized", - "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" - }, - { - "time": "6:26", - "text": "and if the system can give a confidence score for each categorization decision, then we can use the scores to rank these decisions and then evaluate the results as a ranked list, just as in search engine evaluation, where you rank the documents in response to a query.", - "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" - }, - { - "time": "6:49", - "text": "So for example the discovery of spam emails can be evaluated", - "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" - }, - { - "time": "6:55", - "text": "based on ranking emails for the spam category.
And this is useful if you want people to verify whether this is really spam, right? The person would then go down the ranked list to check the emails one by one and verify whether each is indeed spam. So to reflect the utility for humans in such a task, it's better to evaluate the ranking, and this is basically similar to a search engine.", - "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" - }, - { - "time": "7:25", - "text": "And in such a case often the problem can be better formulated as a ranking problem instead of a categorization problem. So for example, ranking documents in a search engine can also be framed as a binary categorization problem, distinguishing the relevant documents that are useful to users from those that are not useful, but typically we frame this as a ranking problem, and we evaluate it as a ranked list. That's because people tend to examine the results, so", - "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" - }, - { - "time": "7:52", - "text": "ranking evaluation better reflects utility from the user's perspective.", - "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" - }, - { - "time": "7:58", - "text": "So to summarize categorization evaluation: first, evaluation is always very important for all these tasks. So get it right.", - "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" - }, - { - "time": "8:07", - "text": "If you don't get it right, you might get misleading results. And you might be misled to believe one method is better than another, which is in fact not true.
So it's very important to get it right.", - "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" - }, - { - "time": "8:18", - "text": "Measures must also reflect the intended use of the results for a particular application. For example, in spam filtering and news categorization the results may be used in different ways.", - "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" - }, - { - "time": "8:30", - "text": "So then we would need to consider the difference and design measures appropriately.", - "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" - }, - { - "time": "8:36", - "text": "We generally need to consider how the results will be further processed by the user, and think from the user's perspective: what quality is important? What aspect of quality is important?", - "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" - }, - { - "time": "8:49", - "text": "Sometimes there are trade-offs between multiple aspects, like precision and recall, and so we need to know whether for this application high recall is more important, or high precision is more important.", - "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" - }, - { - "time": "8:59", - "text": "Ideally we associate a different cost with each different decision error. And this of course has to be designed in an application-specific way.", - "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" - }, - { - "time": "9:08", - "text": "Some commonly used measures for comparing methods are the following. Classification accuracy is very commonly used, especially for balanced
[INAUDIBLE]. Precision, recall, and [INAUDIBLE] scores are commonly reported, characterizing performance from different angles, and give us some [INAUDIBLE], like on a [INAUDIBLE] per-document basis [INAUDIBLE], and then take an average of all of them in different ways, micro versus macro [INAUDIBLE]. In general, you want to look at the results from multiple perspectives; for particular applications some perspectives would be more important than others, but for diagnosis and analysis of categorization methods it's generally useful to look at as many perspectives as possible, to see subtle differences between methods or to see where a method might be weak, from which you can obtain insight for improving the method.", - "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" - }, - { - "time": "10:04", - "text": "Finally, sometimes ranking may be more appropriate, so be careful: sometimes a categorization task may be better framed as a ranking task, and there are machine learning methods for optimizing ranking measures as well.", - "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" - }, - { - "time": "10:17", - "text": "So here are two suggested readings. One is some chapters of this book where you can find more discussion about evaluation measures. The second is a paper about comparison of different approaches to text categorization, and it also has an excellent discussion of how to evaluate text categorization. [MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/Yxm53/5-4-text-categorization-evaluation-part-2" - } - ] - }, - { - "5-5-opinion-mining-and-sentiment-analysis-motivation": [ - { - "time": "0:00", - "text": "[SOUND] This lecture is about Opinion Mining and Sentiment Analysis, covering Motivation. In this lecture, we're going to start talking about mining a different kind of knowledge, namely, knowledge about the observers, or humans, that have generated the text data.
In particular, we're going to talk about opinion mining and sentiment analysis.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "0:32", - "text": "As we discussed earlier, text data can be regarded as data generated from humans as subjective sensors.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "0:43", - "text": "In contrast, we have other devices, such as a video recorder, that can report what's happening in the real world objectively, to generate video data for example.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "0:58", - "text": "Now the main difference between text data and other data, like video data, is that it has rich opinions, and the content tends to be subjective because it's generated from humans.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "1:16", - "text": "Now, this is actually a unique advantage of text data, as compared with other data, because it offers a great opportunity to understand the observers. We can mine text data to understand their opinions, understand people's preferences, how people think about something.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "1:37", - "text": "So this lecture and the following lectures will be mainly about how we can mine and analyze opinions buried in a lot of text data.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "1:49", - "text": "So let's start with the concept of opinion.
It's not that easy to formally define opinion, but mostly we would define an opinion as a subjective statement describing what a person believes or thinks about something.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "2:08", - "text": "Now, I highlighted quite a few words here, and that's because it's worth thinking a little bit more about these words. That will help us better understand what's in an opinion, and this further helps us to define opinion more formally, which is always needed for a computational solution to the problem of opinion mining. So let's first look at the key word subjective here. This is in contrast with an objective statement or factual statement.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "2:40", - "text": "Those statements can be proved right or wrong.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "2:45", - "text": "And this is a key differentiating factor from opinions, which tend to be not easy to prove wrong or right, because they reflect what the person thinks about something.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "2:59", - "text": "So in contrast, an objective statement can usually be proved wrong or correct.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "3:07", - "text": "For example, you might say this computer has a screen and a battery.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "3:16", - "text": "Now that's something you can check.
It either has a battery or not.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "3:23", - "text": "But in contrast with this, think about sentences such as, this laptop has the best battery, or this laptop has a nice screen. Now these statements are more subjective, and it's very hard to prove whether they're wrong or correct.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "3:45", - "text": "So an opinion is a subjective statement.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "3:50", - "text": "And next let's look at the keyword person here, which indicates that there is an opinion holder, because when we talk about an opinion, it's an opinion held by someone. And then we notice that there is something here; that is the target of the opinion. The opinion is expressed on this something.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "4:11", - "text": "And now, of course, believes or thinks implies that an opinion will depend on the culture or background and the context in general, because a person might think differently in a different context. People from different backgrounds may also think in different ways. So this analysis shows that there are multiple elements that we need to include in order to characterize an opinion.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "4:38", - "text": "So, what's a basic opinion representation like? Well, it should include at least three elements, right? Firstly, it has to specify who the opinion holder is. So whose opinion is this?
Second, it must also specify the target: what is this opinion about?", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "4:57", - "text": "And third, of course, we want the opinion content. So what exactly is the opinion? If we can identify these, we get a basic understanding of the opinion, and that can already be useful sometimes. If we want to understand further, we want an enriched opinion representation.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "5:15", - "text": "And that means we also want to understand, for example, the context of the opinion, and in what situation the opinion was expressed. For example, what time was it expressed? We would also like to understand the opinion sentiment, and this is to understand what the opinion tells us about the opinion holder's feeling. For example, is this opinion positive, or negative? Or perhaps the opinion holder was happy or was sad. Such understanding obviously goes beyond just extracting the opinion content; it needs some analysis.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "6:00", - "text": "So let's take a simple example of a product review. In this case, there is an explicitly expressed opinion holder and an explicitly expressed target. So it's obvious who the opinion holder is, and that's just the reviewer, and it's also often very clear what the opinion target is, and that's the product reviewed, for example iPhone 6. When the review is posted, usually you can capture such information easily.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "6:27", - "text": "Now the content, of course, is the review text, which is, in general, also easy to obtain.
So you can see product reviews are fairly easy to analyze in terms of obtaining a basic opinion representation. But of course, if you want to get more information, you might want to know the context, for example that the review was written in 2015. Or we want to know that the sentiment of this review is positive. So, this additional understanding of course adds value to mining the opinions.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "7:04", - "text": "Now, you can see in this case the task is relatively easy, and that's because the opinion holder and the opinion target have already been identified.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "7:14", - "text": "Now let's take a look at a sentence in the news. In this case, we have an implicit holder and an implicit target, and the task is in general harder. So, we can identify the opinion holder here, and that's the governor of Connecticut.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "7:32", - "text": "We can also identify the target. So one target is Hurricane Sandy, but there is also another target mentioned, which is the hurricane of 1938. So what's the opinion?
Well, there's a negative sentiment here that's indicated by words like bad and worst.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "7:53", - "text": "And we can also, then, identify the context, New England in this case.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "8:00", - "text": "Now, unlike in the product review, all these elements must be extracted by using natural language processing techniques. So the task is much harder, and we need deeper natural language processing.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "8:14", - "text": "And these examples also", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "8:17", - "text": "suggest that a lot of work can easily be done for product reviews. That's indeed what has happened. Analyzing sentiment in news is still quite difficult; it's more difficult than the analysis of opinions in product reviews.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "8:36", - "text": "Now there are also some other interesting variations. In fact, here we're going to examine the variations of opinions more systematically. First, let's think about the opinion holder.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "8:47", - "text": "The holder could be an individual or it could be a group of people. Sometimes the opinion is from a committee.
Or from a whole country of people.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "8:56", - "text": "The opinion target can also vary a lot. It can be about one entity: a particular person, a particular product, a particular policy, etc. But it could be about a group of products, or about the products from a company in general.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "9:11", - "text": "It could also be very specific, about one attribute of the entity; for example, just about the battery of the iPhone. It could be about someone else's opinion, and one person might comment on another person's opinion, etc. So you can see there is a lot of variation here that will cause the problem to vary a lot. Now, the opinion content, of course, can also vary a lot: on the surface, you can identify a one-sentence opinion or a one-phrase opinion, but you can also have longer text expressing an opinion, like a whole article.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "9:48", - "text": "And furthermore we can identify variation in the sentiment or emotion dimension; that's about the feeling of the opinion holder. So we can distinguish positive versus negative, or neutral, or happy versus sad, etc.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "10:03", - "text": "Finally, the opinion context can also vary. We can have a simple context, like a different time or a different location. But there could also be complex contexts, such as some background on the topic being discussed.
So when an opinion is expressed in a particular discourse context, it has to be interpreted in different ways than when it's expressed in another context. So the context can be very [INAUDIBLE] to the entire discourse context of the opinion. From a computational perspective, we're mostly interested in what opinions can be extracted from text data. So it turns out that we can also differentiate, distinguish, different kinds of opinions in text data from a computational perspective. First, the observer might make a comment about an opinion target in the observed world, so in that case we have the author's opinion. For example, I don't like this phone at all. And that's an opinion of this author.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "10:59", - "text": "In contrast, the text might also report opinions about others. So the person could also make an observation about another person's opinion and report this opinion. So for example, I believe he loves the painting. And that opinion is really an opinion expressed by another person here. So it doesn't mean this author loves that painting.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "11:33", - "text": "So clearly, the two kinds of opinions need to be analyzed in different ways, and you can see this in product reviews: although mostly the opinions are from the reviewer, sometimes a reviewer might mention the opinions of his or her friends.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "11:51", - "text": "Another complication is that there may be indirect opinions or inferred opinions that can be obtained by making inferences on what's expressed in the text, which might not necessarily look like an opinion.
For example, one statement might be: this phone ran out of battery in just one hour. Now, this is in a way a factual statement, because it's either true or false, right? You can even verify that. But from this statement, one can also infer some negative opinions about the quality of the battery of this phone, or the feeling of the opinion holder about the battery. The opinion holder clearly wished that the battery would last longer.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "12:42", - "text": "So these are interesting variations that we need to pay attention to when we extract opinions. Also, for this reason of indirect opinions,", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "12:53", - "text": "it's often also very useful to extract whatever the person has said about the product, and sometimes factual sentences like these are also very useful. So, from a practical viewpoint, sometimes we don't necessarily extract only the subjective sentences. Instead, again, all the sentences that are about the opinion target are useful for understanding the person or understanding the product being commented on.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "13:19", - "text": "So the task of opinion mining can be defined as taking text data as input to generate a set of opinion representations. In each representation, we should identify the opinion holder, the target, the content, and the context.
Ideally we can also infer opinion sentiment from the content and the context to better understand.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "13:43", - "text": "The opinion.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "13:44", - "text": "Now often, some elements of the representation are already known. I just gave a good example in the case of product reviews, where the opinion holder and the opinion target are often explicitly identified. And that's why this turns out to be one of the simplest opinion mining tasks. Now, it's interesting to think about the other tasks that might also be simple. Because those are the cases where you can easily build applications by using opinion mining techniques.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "14:17", - "text": "So now that we have talked about what is opinion mining, we have defined the task. Let's also just talk a little bit about why opinion mining is very important and why it's very useful. So here, I identify three major reasons, three broad reasons. The first is that it can help with decision support. It can help us optimize our decisions. 
We often look at other people's opinions and read the reviews in order to make decisions like buying a product or using a service.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "14:52", - "text": "We also would be interested in others' opinions when we decide whom to vote for, for example.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "15:00", - "text": "And policy makers may also want to know people's opinions when designing a new policy. So that's one general kind of application. And it's very broad, of course. The second application is to understand people, and this is also very important. For example, it could help understand people's preferences. And this could help us better serve people. For example, we can optimize a product search engine or optimize a recommender system if we know what people are interested in, what people think about a product.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "15:35", - "text": "It can also help with advertising, of course, and we can have targeted advertising if we know what kind of people tend to like what kind of product.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "15:48", - "text": "Now the third kind of application can be called voluntary survey. Now this is mostly research that used to be done by doing surveys, doing manual surveys with questionnaires. People need to fill in forms to answer the questions. Now this is directly related to humans as sensors, and we can usually aggregate opinions from a lot of humans to kind of assess the general opinion. 
Now this would be very useful for business intelligence where manufacturers want to know where their products have advantages over others.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "16:31", - "text": "What are the winning features of their products, and the winning features of competing products?", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "16:37", - "text": "Market research has to do with understanding consumers' opinions. And this creates very useful data for that. Data-driven social science research can benefit from this because they can do text mining to understand people's opinions. And if you can aggregate a lot of opinions from social media, from a lot of popular", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "16:58", - "text": "information then you can actually do some study of some questions. For example, we can study the behavior of people on social media on social networks. And these can be regarded as a voluntary survey done by those people.", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - }, - { - "time": "17:19", - "text": "In general, we can gain a lot of advantage in any prediction task because we can leverage the text data as extra data about any problem. And so we can use text-based prediction techniques to help us make predictions or improve the accuracy of prediction. 
[MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/o93Yl/5-5-opinion-mining-and-sentiment-analysis-motivation" - } - ] - }, - { - "5-6-opinion-mining-and-sentiment-analysis-sentiment-classification": [ - { - "time": "0:00", - "text": "[NOISE] This lecture is about sentiment classification.", - "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" - }, - { - "time": "0:11", - "text": "If we assume that", - "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" - }, - { - "time": "0:13", - "text": "most of the elements in the opinion representation are already known, then our only task may be just sentiment classification, as shown in this case. So suppose we know who's the opinion holder and what's the opinion target, and also know the content and the context of the opinion, then we mainly need to decide the opinion sentiment of the review. So this is a case of just using sentiment classification for understanding opinion.", - "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" - }, - { - "time": "0:46", - "text": "Sentiment classification can be defined more specifically as follows. The input is an opinionated text object, the output is typically a sentiment label, or a sentiment tag, and that can be designed in two ways. 
One is polarity analysis, where we have categories such as positive, negative, or neutral.", - "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" - }, - { - "time": "1:08", - "text": "The other is emotion analysis that can go beyond polarity to characterize the feeling of the opinion holder.", - "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" - }, - { - "time": "1:21", - "text": "In the case of polarity analysis, we sometimes also have numerical ratings as you often see in some reviews on the web.", - "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" - }, - { - "time": "1:30", - "text": "Five might denote the most positive, and one may be the most negative, for example. In general, you just have these discrete ordered categories to characterize the sentiment.", - "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" - }, - { - "time": "1:43", - "text": "In emotion analysis, of course, there are also different ways for designing the categories.", - "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" - }, - { - "time": "1:49", - "text": "The six most frequently used categories are happy, sad, fearful, angry, surprised, and disgusted.", - "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" - }, - { - "time": "1:59", - "text": "So as you can see, the task is essentially a classification task, or categorization task, as we've seen before, so it's a special case of text categorization. 
This also means any text categorization method can be used to do sentiment classification.", - "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" - }, - { - "time": "2:15", - "text": "Now of course if you just do that, the accuracy may not be good, because sentiment classification does require some improvements over a regular text categorization technique, or a simple text categorization technique. In particular, it needs two kinds of improvements. One is to use more sophisticated features that may be more appropriate for sentiment tagging, as I will discuss in a moment.", - "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" - }, - { - "time": "2:41", - "text": "The other is to consider the order of these categories, and especially in polarity analysis, it's very clear there's an order here, and so these categories are not all that independent. There's order among them, and so it's useful to consider the order. For example, we could use ordinal regression to do that, and that's something that we'll talk more about later. So now, let's talk about some features that are often very useful for text categorization and text mining in general, but some of them are especially also needed for sentiment analysis.", - "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" - }, - { - "time": "3:18", - "text": "So let's start from the simplest one, which is character n-grams. You can just have a sequence of characters as a unit, and they can be mixed with different n's, different lengths. All right, and this is a very general and very robust way to represent the text data. 
And you could do that for any language, pretty much.", - "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" - }, - { - "time": "3:42", - "text": "And this is also robust to spelling errors or recognition errors, right? So if you misspell a word by one character, this representation would actually allow you to match this word when it occurs in the text correctly. Right, so the misspelled word and the correct form can be matched because they contain some common character n-grams. But of course such a representation would not be as discriminating as words.", - "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" - }, - { - "time": "4:10", - "text": "So next, we have word n-grams, a sequence of words, and again, we can mix them with different n's. Unigrams are actually often very effective for a lot of text processing tasks, and it's mostly because words are well-designed features by humans for communication, and so they are often good enough for many tasks. But they're not good, or not sufficient, for sentiment analysis, clearly. For example, we might see a sentence like, it's not good, or it's not as good as something else, right? So in such a case, if you just take good, that would suggest positive, but the text says not good, so it's not accurate. But if you take a bigram, not good together, then it's more accurate. So longer n-grams are generally more discriminative, and they're more specific. If you match one, it says a lot, and it's accurate; it's unlikely to be very ambiguous. But they may cause overfitting, because with such very unique features a machine learning program can easily pick up such features from the training set and rely on such unique features to distinguish the categories. 
And obviously, that kind of classifier won't generalize well to future data where such discriminative features will not necessarily occur. So that's a problem of overfitting that's not desirable. We can also consider part of speech tag n-grams if we can do part of speech tagging and, for example, adjective noun could form a pair. We can also mix n-grams of words and n-grams of part of speech tags. For example, the word great might be followed by a noun, and this could become a feature, a hybrid feature, that could be useful for sentiment analysis.", - "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" - }, - { - "time": "6:06", - "text": "So next we can also have word classes. So these classes can be syntactic like part of speech tags, or could be semantic, and they might represent concepts in a thesaurus or ontology, like WordNet. Or they can be recognized named entities, like people or places, and these categories can be used to enrich the representation as additional features. We can also learn word clusters automatically; for example, we've talked about mining the associations of words. And so we can have clusters of paradigmatically related words or syntagmatically related words, and these clusters can be features to supplement the word-based representation. Furthermore, we can also have frequent patterns in text, and these could be frequent word sets, where the words that form the pattern do not necessarily occur together or next to each other. But we can also have collocations, where the words may occur more closely together, and such patterns provide more discriminative features than words, obviously.", - "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" - }, - { - "time": "7:14", - "text": "And they may also generalize better than just regular n-grams because they are frequent. 
So you expect them to occur also in test data. So they have a lot of advantages, but they might still face the problem of overfitting as the features become more complex. This is a problem in general, and the same is true for parse tree-based features, where you can use a parse tree to derive features such as frequent subtrees, or paths, and those are even more discriminating, but they are also more likely to cause overfitting. And in general, pattern discovery algorithms are very useful for feature construction because they allow us to search in a large space of possible features that are more complex than words and that are sometimes useful. So in general, natural language processing is very important for deriving complex features, and they can enrich text representation. So for example, this is a simple sentence that I showed you a long time ago in another lecture. So from these words we can only derive simple word n-gram representations or character n-grams. But with NLP, we can enrich the representation with a lot of other information such as part of speech tags, parse trees or entities, or even speech acts. Now with such enriching information, of course, we can generate a lot of other features, more complex features, like mixed grams of words and part of speech tags, or even part of a parse tree.", - "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" - }, - { - "time": "8:55", - "text": "So in general, feature design actually affects categorization accuracy significantly, and it's a very important part of any machine learning application. In general, I think it would be most effective if you can combine machine learning, error analysis, and domain knowledge in designing features. 
So first you want to use domain knowledge, your understanding of the problem, to design seed features, and you can also define a basic feature space with a lot of possible features for the machine learning program to work on, and machine learning can be applied to select the most effective features or construct new features. That's feature learning, and these features can then be further analyzed by humans through error analysis. And you can look at the categorization errors, and then further analyze what features can help you recover from those errors, or what features cause overfitting and cause those errors. And so this can lead to feature validation that will revise the feature set, and then you can iterate. And we might consider using a different feature space.", - "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" - }, - { - "time": "10:07", - "text": "So NLP enriches text representation, as I just said, and because it enriches the feature space, it allows a much larger space of features, and there are also many, many more features that can be very useful for a lot of tasks. But be careful not to use a lot of complex features, because they can cause overfitting, or otherwise you would have to train carefully not to let overfitting happen. So a main challenge in designing features, a common challenge, is to optimize the trade-off between exhaustivity and specificity, and this trade-off turns out to be very difficult. Now exhaustivity means we want the features to actually have high coverage of a lot of documents. And so in that sense, you want the features to be frequent. Specificity requires the features to be discriminative, so naturally infrequent features tend to be more discriminative. So this really causes a trade-off between frequent versus infrequent features. And that's why feature design is usually hard. 
And that's probably the most important part of applying machine learning to any problem, and in particular, in our case, to text categorization or, more specifically, sentiment classification. [MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/9zE5i/5-6-opinion-mining-and-sentiment-analysis-sentiment-classification" - } - ] - }, - { - "5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression": [ - { - "time": "0:00", - "text": "[NOISE] This lecture is about ordinal logistic regression for sentiment analysis. So, this is our problem setup for a typical sentiment classification problem, or more specifically, a rating prediction. We have an opinionated text document d as input, and we want to generate as output a rating in the range of 1 through k, so it's a discrete rating, and this is a categorization problem. We have k categories here. Now we could use a regular text categorization technique to solve this problem. But such a solution would not consider the order and dependency of the categories. Intuitively, the features that can distinguish category 2 from 1, or rather rating 2 from 1, may be similar to those that can distinguish k from k-1. For example, positive words generally suggest a higher rating. If we train a categorization classifier by treating these categories as independent, we would not capture this.", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - }, - { - "time": "1:17", - "text": "So what's the solution? Well, in general we can do ordinal classification, and there are many different approaches. And here we're going to talk about one of them, called ordinal logistic regression. Now, let's first think about how we use logistic regression for a binary sentiment categorization problem. So suppose we just wanted to distinguish positive from negative, and that is just a two-category categorization problem. 
So the predictors are represented as X, and these are the features. And there are M features altogether. The feature values are real numbers. And this can be a representation of a text document.", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - }, - { - "time": "1:56", - "text": "And Y has two values; it's a binary response variable, 0 or 1. 1 means X is positive, 0 means X is negative. And then of course this is a standard two-category categorization problem. We can apply logistic regression. You may recall that in logistic regression, the log odds that Y is equal to one is assumed to be a linear function of these features, as shown here. So this would allow us to also write the probability of Y equals one, given X", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - }, - { - "time": "2:36", - "text": "in this equation that you are seeing on the bottom.", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - }, - { - "time": "2:43", - "text": "So that's a logistic function, and you can see it relates this probability, the probability that Y=1, to the feature values. And of course the beta i's are parameters here, so this is just a direct application of logistic regression for binary categorization.", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - }, - { - "time": "3:08", - "text": "What if we have multiple categories, multiple levels? 
Well, we can use such a binary logistic regression approach to solve this multi-level rating prediction problem.", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - }, - { - "time": "3:21", - "text": "And the idea is we can introduce multiple binary classifiers. In each case we ask the classifier to predict whether the rating is j or above, or whether the rating is lower than j. So when Yj is equal to 1, it means the rating is j or above. When it's 0, that means the rating is lower than j.", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - }, - { - "time": "3:45", - "text": "So basically if we want to predict a rating in the range of 1-k, we first have one classifier to distinguish k versus the others. And that's our classifier one. And then we're going to have another classifier to distinguish k-1 from the rest. That's classifier 2. And in the end, we need a classifier to distinguish between 2 and 1. So altogether we'll have k-1 classifiers.", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - }, - { - "time": "4:17", - "text": "Now if we do that, of course, then we can also solve this problem, and the logistic regression formulation will also be very straightforward, as you have just seen on the previous slide. Only that here we have more parameters. Because for each classifier, we need a different set of parameters. So now the logistic regression classifiers are indexed by j, which corresponds to a rating level.", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - }, - { - "time": "4:46", - "text": "And I have also used alpha j to replace beta 0. And this is to 
make the notation more consistent with what we will show in the ordinal logistic regression. So here we now have basically k minus one regular logistic regression classifiers. Each has its own set of parameters. So now with this approach, we can do ratings as follows.", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - }, - { - "time": "5:19", - "text": "After we have trained these k-1 logistic regression classifiers, separately of course, then we can take a new instance and invoke the classifiers sequentially to make the decision. So first let's look at the classifier that corresponds to the rating level K. So this classifier will tell us whether this object should have a rating of K or above. If the probability according to this logistic regression classifier is larger than 0.5, we're going to say yes, the rating is K.", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - }, - { - "time": "6:02", - "text": "Now, what if it's not as large as 0.5? Well, that means the rating's below K, right?", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - }, - { - "time": "6:11", - "text": "So now, we need to invoke the next classifier, which tells us whether it's above K minus one.", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - }, - { - "time": "6:18", - "text": "It's at least K minus one. And if the probability is larger than 0.5, then we'll say, well, then it's k-1. What if it says no? Well, that means the rating would be even below k-1. And so we're going to just keep invoking these classifiers. And here we hit the end when we need to decide whether it's two or one. 
So this would help us solve the problem. Right? So we can have a classifier that would actually give us a prediction of a rating in the range of 1 through k. Now unfortunately such a strategy is not an optimal way of solving this problem. And specifically there are two problems with this approach. So these equations are the same as you have seen before.", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - }, - { - "time": "7:06", - "text": "Now the first problem is that there are just too many parameters. There are many parameters. Now, can you count how many parameters we have exactly here? Now this may be an interesting exercise to do. So you might want to just pause the video and try to figure out the solution. How many parameters do I have for each classifier?", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - }, - { - "time": "7:28", - "text": "And how many classifiers do we have?", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - }, - { - "time": "7:31", - "text": "Well you can see that for each classifier we have M plus one parameters, and we have k minus one classifiers altogether, so the total number of parameters is k minus one multiplied by M plus one. That's a lot. 
A lot of parameters, so when the classifier has a lot of parameters, we would in general need a lot of data, training data, to help us decide the optimal parameters of such a complex model.", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - }, - { - "time": "8:04", - "text": "So that's not ideal.", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - }, - { - "time": "8:07", - "text": "Now the second problem is that these problems, these k minus 1 classifiers, are not really independent. These problems are actually dependent.", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - }, - { - "time": "8:18", - "text": "In general, words that are positive would make the rating higher", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - }, - { - "time": "8:25", - "text": "for any of these classifiers. For all these classifiers. So we should be able to take advantage of this fact.", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - }, - { - "time": "8:33", - "text": "Now the idea of ordinal logistic regression is precisely that. The key idea is just an improvement over the k-1 independent logistic regression classifiers. And that idea is to tie these beta parameters. And that means we are going to assume the beta parameters, these are the parameters that indicate the influence of those words, are shared. And we're going to assume these beta values are the same for all the K-1 classifiers. 
And this just encodes our intuition that positive words in general would make a higher rating more likely.", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - }, - { - "time": "9:19", - "text": "So this is an intuitive assumption, and reasonable for our problem setup, since we have this order in these categories.", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - }, - { - "time": "9:28", - "text": "Now in fact, this would allow us to have two positive benefits. One is it's going to reduce the number of parameters significantly.", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - }, - { - "time": "9:38", - "text": "And the other is to allow us to share the training data. Because all these parameters are assumed to be equal, the training data for different classifiers can then be shared to help us set the optimal value for beta.", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - }, - { - "time": "9:56", - "text": "So we have more data to help us choose a good beta value.", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - }, - { - "time": "10:01", - "text": "So what's the consequence? Well, the formula would look very similar to what you have seen before, only that now the beta parameter has just one index that corresponds to the feature. It no longer has the other index that corresponds to the level of rating.", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - }, - { - "time": "10:19", - "text": "So that means we tie them together. 
And there's only one set of beta values for all the classifiers. However, each classifier still has a distinct alpha value, the alpha parameter, which is different for each classifier. And this is of course needed to predict the different levels of ratings. So alpha sub j is different; it depends on j, and a different j has a different alpha value. But the rest of the parameters, the beta i's, are the same. So now you can also ask the question, how many parameters do we have now? Again, that's an interesting question to think about. So if you think about it for a moment, you will see that we now have far fewer parameters. Specifically we have M plus K minus one, because we have M beta values plus K minus one alpha values.", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - }, - { - "time": "11:15", - "text": "So that's basically the main idea of ordinal logistic regression.", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - }, - { - "time": "11:24", - "text": "So, now, let's see how we can use such a method to actually assign ratings. It turns out that with this idea of tying all the parameters, the beta values, we also end up having a similar way to make decisions. And more specifically, now the criterion of whether the predicted probability is at least 0.5 is equivalent to whether the score of the object is larger than or equal to negative alpha sub j, as shown here. 
Now, the scoring function is just taking the linear combination of all the features with the trained beta values.", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - }, - { - "time": "12:15", - "text": "So, this means now we can simply make a decision on the rating by looking at the value of this scoring function, and seeing which bracket it falls into. Now you can see the general decision rule is thus: when the score is in a particular range of alpha values, then we will assign the corresponding rating to that text object.", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - }, - { - "time": "12:49", - "text": "So in this approach, we're going to score the object", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - }, - { - "time": "12:55", - "text": "by using the features and trained parameter values.", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - }, - { - "time": "13:00", - "text": "This score will then be compared with a set of trained alpha values to see which range the score is in. And then, using the range, we can decide which rating the object should be getting. Because these ranges of alpha values correspond to the different levels of ratings, and that's from the way we train these alpha values. Each is tied to some level of rating. 
[MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/rlksh/5-7-opinion-mining-and-sentiment-analysis-ordinal-logistic-regression" - } - ] - } - ] - }, - { - "Week 6": [ - { - "6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1": [ - { - "time": "0:01", - "text": "[MUSIC] This lecture is about the Latent Aspect Rating Analysis for Opinion Mining and Sentiment Analysis.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - }, - { - "time": "0:14", - "text": "In this lecture, we're going to continue discussing Opinion Mining and Sentiment Analysis.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - }, - { - "time": "0:19", - "text": "In particular, we're going to introduce Latent Aspect Rating Analysis which allows us to perform detailed analysis of reviews with overall ratings.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - }, - { - "time": "0:34", - "text": "So, first is motivation.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - }, - { - "time": "0:37", - "text": "Here are two reviews that you often see in the net about the hotel. And you see some overall ratings. In this case, both reviewers have given five stars. And, of course, there are also reviews that are in text.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - }, - { - "time": "0:53", - "text": "Now, if you just look at these reviews, it's not very clear whether the hotel is good for its location or for its service. 
It's also unclear why a reviewer liked this hotel.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - }, - { - "time": "1:06", - "text": "What we want to do is to decompose this overall rating into ratings on different aspects such as value, rooms, location, and service.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - }, - { - "time": "1:18", - "text": "So, if we can decompose the overall rating into ratings on these different aspects, then we can obtain a more detailed understanding of the reviewer's opinions about the hotel.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - }, - { - "time": "1:30", - "text": "And this would also allow us to rank hotels along different dimensions such as value or rooms. But, in general, such detailed understanding will reveal more information about the user's preferences, the reviewer's preferences. And also, we can understand better how the reviewers view this hotel from different perspectives. Now, not only do we want to infer these aspect ratings, we also want to infer the aspect weights. So, some reviewers may care more about value as opposed to service. And that would be a case like what's shown on the left for the weight distribution, where you can see a lot of weight is placed on value.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - }, - { - "time": "2:18", - "text": "But others care more for service. 
And therefore, they might place more weight on service than value.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - }, - { - "time": "2:25", - "text": "The reason why this is also important is because, if you think about a five star on value, it might still be very expensive if the reviewer cares a lot about service, right? For this kind of service, this price is good, so the reviewer might give it a five star. But if a reviewer really cares about the value of the hotel, then the five star, most likely, would mean a really cheap price. So, in order to interpret the ratings on different aspects accurately, we also need to know these aspect weights. When they're combined together, we can have a more detailed understanding of the opinion. So the task here is to take these reviews and their overall ratings as input, and then generate both the decomposed aspect ratings and the aspect weights as output. And this is a problem called Latent Aspect Rating Analysis.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - }, - { - "time": "3:31", - "text": "So the task, in general, is: given a set of review articles about a topic with overall ratings, we hope to generate three things. One is the major aspects commented on in the reviews. Second is ratings on each aspect, such as value and room service.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - }, - { - "time": "3:53", - "text": "And third is the relative weights placed on different aspects by the reviewers. And this task has a lot of applications; if you can do this, it will enable a lot of applications. I just listed some here. And later, I will show you some results. 
And, for example, we can do opinion based entity ranking. We can generate an aspect-level opinion summary. We can also analyze reviewers' preferences, compare them, or compare their preferences on different hotels. And we can do personalized recommendations of products.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - }, - { - "time": "4:29", - "text": "So, of course, the question is how can we solve this problem? Now, as in other cases of these advanced topics, we won\u2019t have time to really cover the technique in detail. But I\u2019m going to give you a brief, basic introduction to the techniques developed for this problem. So, first, we\u2019re going to talk about how to solve the problem in two stages. Later, we\u2019re going to also mention that we can do this in a unified model. Now, take this review with the overall rating as input. What we want to do is, first, we're going to segment the aspects. So we're going to pick out what words are talking about location, and what words are talking about room condition, etc.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - }, - { - "time": "5:13", - "text": "So with this, we would be able to obtain aspect segments. In particular, we're going to obtain the counts of all the words in each segment, and this is denoted by c sub i of w and d. Now this can be done by using seed words like location and room or price to retrieve the [INAUDIBLE] in the segments. And then, from those segments, we can further mine correlated words with these seed words, and that would allow us to segment the text into segments discussing different aspects. But, of course, later, as we will see, we can also use [INAUDIBLE] models to do the segmentation. 
But anyway, that's the first stage, where we obtain the counts of words in each segment. In the second stage, which is called Latent Rating Regression, we're going to use these words and their frequencies in different aspects to predict the overall rating. And this prediction happens in two steps.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - }, - { - "time": "6:17", - "text": "In the first step, we're going to use the [INAUDIBLE] and the weights of these words in each aspect to predict the aspect rating. So, for example, if in your discussion of location, you see a word like amazing mentioned many times, and it has a high weight, for example, here, 3.9, then it will increase the aspect rating for location. But another word like far, which has a negative weight, will decrease the rating if it's mentioned many times. So the aspect rating is assumed to be a weighted combination of these word frequencies, where the weights are the sentiment weights of the words. Of course, these sentiment weights might be different for different aspects. So we have, for each aspect, a set of term sentiment weights as shown here. And these are denoted by beta sub i and w.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - }, - { - "time": "7:18", - "text": "In the second step, we're going to assume that the overall rating is simply a weighted combination of these aspect ratings. 
So we're going to assume we have aspect weights denoted by [INAUDIBLE] sub i of d, and this will be used to take a weighted average of the aspect ratings, which are denoted by r sub i of d.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - }, - { - "time": "7:42", - "text": "And we're going to assume the overall rating is simply a weighted average of these aspect ratings. So this setup allows us to predict the overall rating based on the observed word frequencies. So on the left side, you will see all the observed information, the r sub d and the counts.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - }, - { - "time": "8:03", - "text": "But on the right side, you see all the information in that range is actually latent.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - }, - { - "time": "8:09", - "text": "So, we hope to discover that. Now, this is a typical case of a generative model, where we embed the interesting variables in the model. And then, we're going to set up a generation probability for the overall rating given the observed words. And then, of course, we can adjust these parameter values, including the betas, the r's, and the alpha i's, in order to maximize the probability of the data. In this case, the conditional probability of the observed rating given the document. So we have seen such cases before in, for example, PLSA, where we predict the text data. But here, we're predicting the rating, and the parameters, of course, are very different. But we can see, if we can uncover these parameters, it would be nice, because r sub i of d is precisely the ratings that we want to get. And these are the decomposed ratings on different aspects. 
[INAUDIBLE] sub i of d is precisely the aspect weights that we hope to get. As a byproduct, we also get the beta vector, and these are the [INAUDIBLE] vector, the sentiment weights of words.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - }, - { - "time": "9:31", - "text": "So more formally,", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - }, - { - "time": "9:33", - "text": "the data we are modeling here is a set of review documents with overall ratings. Each review document is denoted by d, and its overall rating is denoted by r sub d. And d is pre-segmented into k aspect segments. And we're going to use ci(w,d) to denote the count of word w in aspect segment i. Of course, it's zero if the word doesn't occur in the segment.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - }, - { - "time": "10:01", - "text": "Now, the model is going to predict the rating based on d. So, we're interested in the conditional probability of r sub-d given d. And this model is set up as follows. So r sub-d is assumed to follow a normal distribution whose mean is a weighted average of the aspect ratings r sub i of d, as shown here. This normal distribution has a variance of delta squared. Now, of course, this is just our assumption. The actual rating is not necessarily generated this way. But as always, when we make this assumption, we have a formal way to model the problem, and that allows us to compute the interesting quantities. 
In this case, the aspect ratings and the aspect weights.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - }, - { - "time": "10:52", - "text": "Now, the aspect rating, as you see on the [INAUDIBLE], is assumed to be a weighted sum of these weights, where the weight is just the [INAUDIBLE] of the word.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - }, - { - "time": "11:04", - "text": "So as I said, the overall rating is assumed to be a weighted average of aspect ratings.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - }, - { - "time": "11:15", - "text": "Now, these alpha values, alpha sub i of d, are denoted together by an alpha vector that depends on d, and these are the document-specific weights. And we\u2019re going to assume that this vector itself is drawn from another Multivariate Gaussian distribution, with mean denoted by a mu vector, and covariance matrix sigma here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - }, - { - "time": "11:43", - "text": "Now, so this means, when we generate an overall rating, we're going to first draw", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - }, - { - "time": "11:49", - "text": "a set of alpha values from this Multivariate Gaussian prior distribution. 
And once we get these alpha values, we're going to use the weighted average of aspect ratings as the mean of the normal distribution to generate the overall rating.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - }, - { - "time": "12:13", - "text": "Now, the aspect rating, as I just said, is the sum of the sentiment weights of words in the aspect. Note that here the sentiment weights are specific to the aspect. So, beta is indexed by i, and that's for the aspect. And that gives us a way to model the different sentiment of a word in different aspects.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - }, - { - "time": "12:36", - "text": "This is needed because the same word might have a positive sentiment for one aspect but a negative sentiment for another. It's also useful to see what parameters we have here: beta sub i and w gives us the aspect-specific sentiment of w. So, obviously, that's one of the important parameters. But, in general, we can see we have these parameters: the beta values, the delta, the mu, and sigma.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - }, - { - "time": "13:12", - "text": "So, next, the question is, how can we estimate these parameters? So we collectively denote all the parameters by lambda here. Now, we can, as usual, use the maximum likelihood estimate, and this will give us the settings of these parameters that maximize the probability of the observed ratings conditioned on their respective reviews. 
And, of course, this would then give us all the useful variables that we are interested in computing.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - }, - { - "time": "13:45", - "text": "So, more specifically, once we estimate the parameters, we can easily compute the aspect rating for aspect i, r sub i of d. And that's simply to take all of the words that occurred in segment i, take their counts, multiply them by the sentiment weight of each word, and take a sum. So, of course, this term would be zero for words that are not occurring in the segment, and that's why we're going to take the sum over all the words in the vocabulary.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - }, - { - "time": "14:17", - "text": "Now what about the aspect weights, alpha sub i of d? Well, they're not part of our parameters, right? So we have to compute them separately. And in this case, we can use Maximum a Posteriori estimation to compute this alpha value. Basically, we're going to maximize the product of the prior of alpha, according to our assumed Multivariate Gaussian Distribution, and the likelihood. In this case, the likelihood is the probability of generating this observed overall rating given this particular alpha value and some other parameters, as you see here. 
So for more details about this model, you can read the paper cited here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - }, - { - "time": "15:05", - "text": "[MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/dkntE/6-1-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-1" - } - ] - }, - { - "6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2": [ - { - "time": "0:00", - "text": "[SOUND] This lecture is a continued discussion of Latent Aspect Rating Analysis. Earlier, we talked about how to solve the problem of LARA in two stages, where we first do segmentation of different aspects, and then use a latent regression model to learn the aspect ratings and then the aspect weights. Now it's also possible to develop a unified generative model for solving this problem; that is, we not only model the generation of the overall rating based on text, we also model the generation of the text itself. And so a natural solution would be to use a topic model. So given the entity, we can assume there are aspects that are described by word distributions, that is, topics. 
And then we can use a topic model to model the generation of the review text.", - "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" - }, - { - "time": "1:01", - "text": "We will assume words in the review text are drawn from these distributions.", - "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" - }, - { - "time": "1:08", - "text": "In the same way as we assumed for a generative model like PLSA.", - "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" - }, - { - "time": "1:13", - "text": "And then we can plug in the latent regression model to use the text to further predict the overall rating. And that means we first predict the aspect ratings and then combine them with aspect weights to predict the overall rating. So this would give us a unified generative model, where we model both the generation of text and the overall rating conditioned on text.", - "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" - }, - { - "time": "1:40", - "text": "So we don't have time to discuss this model in detail, as in many other cases in this part of the course where we discuss cutting-edge topics, but there's a reference cited here where you can find more details.", - "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" - }, - { - "time": "1:57", - "text": "So now I'm going to show you some simple results that you can get by using these kinds of generative models. First, it's about rating decomposition. So here, what you see are the decomposed ratings for three hotels that have the same overall rating. 
So if you just look at the overall rating, you can't really tell much difference between these hotels. But by decomposing these ratings into aspect ratings, we can see some hotels have higher ratings for some dimensions, like value, but others might score better in other dimensions, like location. And so this can give you detailed opinions at the aspect level.", - "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" - }, - { - "time": "2:38", - "text": "Now here, the ground truth is shown in parentheses, so it also allows you to see whether the prediction is accurate. It's not always accurate, but it mostly still reflects some of the trends.", - "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" - }, - { - "time": "2:53", - "text": "The second result compares different reviewers on the same hotel. So the table shows the decomposed ratings for two reviewers about the same hotel. Again, their high-level overall ratings are the same. So if you just look at the overall ratings, you don't really get that much information about the difference between the two reviewers. But after you decompose the ratings, you can see clearly that they have high scores on different dimensions. So this shows that the model can reveal differences in the opinions of different reviewers, and such a detailed understanding can help us understand better the reviewers and also their feedback on the hotel. This is something very interesting, because this is in some sense a byproduct. In our problem formulation, we did not really have to do this. But the design of the generative model has this component. And these are sentiment weights for words in different aspects. And you can see the highly weighted words versus the negatively weighted words here for each of the four dimensions. 
Value, rooms, location, and cleanliness. The top words clearly make sense, and the bottom words also make sense.", - "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" - }, - { - "time": "4:10", - "text": "So this shows that with this approach, we can also learn sentiment information directly from the data. Now, this kind of lexicon is very useful because, in general, a word like long, let's say, may have different sentiment polarities in different contexts. So if I say the battery life of this laptop is long, then that's positive. But if I say the rebooting time for the laptop is long, that's bad, right? So even for reviews about the same product, laptop, the word long is ambiguous; it could mean positive or it could mean negative. But this kind of lexicon, which we can learn by using these kinds of generative models, can show whether a word is positive for a particular aspect. So this is clearly very useful, and in fact such a lexicon can be directly used to tag other reviews about hotels or tag comments about hotels in social media like tweets.", - "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" - }, - { - "time": "5:08", - "text": "And what's also interesting is that this is almost completely unsupervised, only assuming the reviews' overall ratings are available. And this can allow us to learn from a potentially larger amount of data on the internet to enrich the sentiment lexicon.", - "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" - }, - { - "time": "5:28", - "text": "And here are some results to validate the preference weights. Remember the model can infer whether a reviewer cares more about service or the price. Now how do we know whether the inferred weights are correct? 
And this poses a very difficult challenge for evaluation. Now here we show an interesting way of evaluating.", - "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" - }, - { - "time": "5:50", - "text": "What you see here are the prices of hotels in different cities, and these are the prices of hotels that are favored by different groups of reviewers. The top ten are the reviewers with the highest inferred ratio of the value weight to the weights on other aspects.", - "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" - }, - { - "time": "6:09", - "text": "So for example, value versus location, value versus room, etcetera. Now the top ten are the reviewers that have the highest ratios by this measure. And that means these reviewers tend to put a lot of weight on value as compared with other dimensions. So that means they really emphasize value.", - "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" - }, - { - "time": "6:30", - "text": "The bottom ten, on the other hand, are the reviewers with the lowest ratios. What does that mean? Well, it means these reviewers put higher weights on other aspects than on value. 
So those are people that cared about another dimension and didn't care so much about the value, in some sense, at least as compared with the top ten group.", - "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" - }, - { - "time": "6:52", - "text": "Now these ratios are computed based on the inferred weights from the model.", - "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" - }, - { - "time": "6:57", - "text": "So now you can see the average prices of hotels favored by the top ten reviewers are indeed much cheaper than those that are favored by the bottom ten. And this provides some indirect way of validating the inferred weights. It just means the weights are not random. They are actually meaningful here. In comparison with the average price in these three cities, you can see the hotels favored by the top ten tend to be below average in price, whereas those favored by the bottom ten, who care a lot about other things like service or room condition, tend to have higher prices than average. So with these results we can build a lot of interesting applications. For example, a direct application would be to generate a rated aspect summary, and because of the decomposition, we can now generate summaries for each aspect: the positive sentences and the negative sentences about each aspect. This is more informative than the original review, which just has an overall rating and review text. Here are some other results about the aspects discovered from reviews. These are mp3 player reviews, and these results show that the model can discover some interesting aspects commented on in reviews with low overall ratings versus those with higher overall ratings. 
And they care more about different aspects.", - "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" - }, - { - "time": "8:22", - "text": "Or they comment more on different aspects. So that can help us discover, for example, consumers' trends in appreciating different features of products. For example, one might have discovered the trend that people tend to like larger screens on cell phones or lighter laptops, etcetera. Such knowledge can be useful for manufacturers to design their next generation of products. Here are some interesting results on analyzing users' rating behavior. So what you see is average weights along different dimensions by different groups of reviewers. And on the left side you see the weights of reviewers that like the expensive hotels. They gave the expensive hotels 5 Stars, and you can see their average weights tend to be higher on service. And that suggests that people like expensive hotels because of good service, and that's not surprising. That's also another way to validate the inferred weights.", - "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" - }, - { - "time": "9:34", - "text": "If you look at the right side, look at the column of 5 Stars. These are the reviewers that like the cheaper hotels, and they gave cheaper hotels five stars. 
As we expected, they put more weight on value, and that's why they like the cheaper hotels.", - "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" - }, - { - "time": "9:52", - "text": "But if you look at when they didn't like the expensive hotels, or the cheaper hotels, then you'll see that they tended to put more weight on the condition of the room and cleanliness.", - "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" - }, - { - "time": "10:04", - "text": "So this shows that by using this model, we can infer some information that's very hard to obtain even if you read all the reviews. Even if you read all the reviews, it's very hard to infer such preferences or such emphasis. So this is a case where text mining algorithms can go beyond what humans can do, to reveal interesting patterns in the data. And this of course can be very useful. You can compare different hotels, compare the opinions from different consumer groups, in different locations. And of course, the model is general. It can be applied to any reviews with overall ratings. So this is a very useful technique that can support a lot of text mining applications.", - "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" - }, - { - "time": "10:50", - "text": "Finally, here are the results of applying this model for personalized ranking or recommendation of entities.", - "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" - }, - { - "time": "10:57", - "text": "So because we can infer the reviewers' weights on different dimensions, we can allow a user to actually say what they care about. 
So for example, I have a query here that says 90% of the weight should be on value and 10% on others. So that just means I don't care about other aspects. I just care about getting a cheaper hotel. My emphasis is on the value dimension. Now what we can do with such a query is we can use reviewers that we believe have a similar preference to recommend hotels for you. How can we know that? Well, we can infer the weights of those reviewers on different aspects. We can find the reviewers whose weights, or more precisely whose inferred weights, are similar to yours, and then use those reviewers to recommend hotels for you. This is what we call personalized, or rather query-specific, recommendation. Now the non-personalized recommendations are shown on the top, and you can see the top results generally have much higher prices than the lower group, and that's because when the reviewers cared more about the value, as dictated by this query, they tended to really favor low-price hotels. So this is yet another application of this technique.", - "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" - }, - { - "time": "12:18", - "text": "It shows that by doing text mining we can understand the users better. And once we understand users better, we can serve these users better. So to summarize our discussion of opinion mining in general, this is a very important topic with a lot of applications.", - "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" - }, - { - "time": "12:33", - "text": "Sentiment analysis can be readily done by using just text categorization, but standard techniques tend not to be enough. 
And so we need to have an enriched feature representation.", - "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" - }, - { - "time": "12:45", - "text": "And we also need to consider the order of those categories, and we'll talk about ordinal regression for some of these problems. We have also seen that generative models are powerful for mining latent user preferences, in particular the generative model for latent rating regression. We embed the interesting preference information, such as the weights on different aspects, in the model, and as a result we can learn such useful information when fitting the model to the data. Now, most approaches have been proposed and evaluated for product reviews, and that was because in such a context, the opinion holder and the opinion target are clear, and they are easy to analyze. And there are, of course, also a lot of practical applications. But opinion mining from news and social media is also important; that's more difficult than analyzing review data, mainly because the opinion holders and opinion targets are harder to identify. So that calls for natural language processing techniques to uncover them accurately.", - "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" - }, - { - "time": "13:50", - "text": "Here are some suggested readings. The first two are books devoted to this topic, where you can find a lot of discussion about other variations of the problem and techniques proposed for solving the problem.", - "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" - }, - { - "time": "14:08", - "text": "The next two papers are about generative models for latent aspect rating analysis. 
The first one is about solving the problem in two stages, and the second one is about a unified model where the topic model is integrated with the regression model to solve the problem.", - "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" - }, - { - "time": "14:30", - "text": "[MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/OxeOx/6-2-opinion-mining-and-sentiment-analysis-latent-aspect-rating-analysis-part-2" - } - ] - }, - { - "6-3-text-based-prediction": [ - { - "time": "0:00", - "text": "[SOUND] This lecture is about text-based prediction. In this lecture, we're going to start talking about mining a different kind of knowledge, as you can see here on this slide. Namely, we're going to use text data to infer values of some other variables in the real world that may not be directly related to the text, or only remotely related to the text data. So this is very different from content analysis or topic mining, where we directly characterize the content of text. It's also different from opinion mining or sentiment analysis, which still has to do with characterizing mostly the content, only that we focus more on the subjective content, which reflects what we know about the opinion holder.", - "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" - }, - { - "time": "1:05", - "text": "But this only provides a limited view of what we can predict.", - "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" - }, - { - "time": "1:10", - "text": "In this lecture and the following lectures, we're going to talk more about how we can predict more information about the world. 
How can we do that with sophisticated patterns of text together with other kinds of data?", - "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" - }, - { - "time": "1:28", - "text": "It would be useful first to take a look at the big picture of prediction, and data mining in general, and I call this the data mining loop. So the picture that you are seeing right now is that there are multiple sensors, including human sensors, to report what we have seen in the real world in the form of data, both non-text data and text data.", - "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" - }, - { - "time": "1:51", - "text": "And our goal is to see if we can predict some values of important real-world variables that matter to us. For example, someone's house condition, or the weather, etc. And so these variables would be important because we might want to act on them. We might want to make decisions based on them. So how can we get from the data to these predicted values? Well, in general we'll first have to do data mining and analysis of the data.", - "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" - }, - { - "time": "2:23", - "text": "Because we, in general, should treat all the data that we collected", - "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" - }, - { - "time": "2:30", - "text": "in such a prediction problem setup. We are very much interested in joint mining of non-text and text data, which should combine all the data together.", - "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" - }, - { - "time": "2:41", - "text": "And then, through analysis, generally there are multiple predictors of this variable of interest to us. And we call these features. 
And these features can then be put into a predictive model, to actually predict the value of the interesting variable.", - "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" - }, - { - "time": "3:02", - "text": "So this then allows us to change the world. And so this basically is the general process for making a prediction based on data, including the text data.", - "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" - }, - { - "time": "3:17", - "text": "Now it's important to emphasize that humans actually play a very important role in this process.", - "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" - }, - { - "time": "3:24", - "text": "Especially because of the involvement of text data. So humans first would be involved in the mining of the data. They would control the generation of these features. And they would also help us understand the text data, because text data are created to be consumed by humans. Humans are the best at consuming or interpreting text data.", - "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" - }, - { - "time": "3:48", - "text": "But when there are, of course, a lot of text data, then machines have to help, and that's why we need to do text data mining.", - "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" - }, - { - "time": "3:55", - "text": "Sometimes machines can see patterns in a lot of data that humans may not see. But in general humans would play an important role in analyzing text data in applications. Next, humans also must be involved in predictive model building and adjusting or testing. So in particular, we will have a lot of domain knowledge about the problem of prediction that we can build into this predictive model. 
And then next, of course, when we have predicted values for the variables, then humans would be involved in taking actions to change the world or make decisions based on these predicted values.", - "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" - }, - { - "time": "4:36", - "text": "And finally it's interesting that a human could be involved in controlling the sensors.", - "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" - }, - { - "time": "4:43", - "text": "And this is so that we can adjust the sensors to collect the most useful data for prediction.", - "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" - }, - { - "time": "4:52", - "text": "So that's why I call this the data mining loop. Because as we perturb the sensors, they'll collect new and more useful data, and then we will obtain more data for prediction. And this data generally will help us improve the prediction accuracy. And in this loop, humans will recognize what additional data will need to be collected. And machines, of course, help humans identify what data should be collected next. In general, we want to collect data that is most useful for learning. And there is actually a subarea in machine learning called active learning that has to do with this. How do you identify data", - "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" - }, - { - "time": "5:32", - "text": "points that would be most helpful for machine learning programs, if you can label them, right?", - "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" - }, - { - "time": "5:38", - "text": "So, in general, you can see there is a loop here from data acquisition to data analysis, or data mining, to prediction of values, and to taking actions to change the world, and then observing what happens. 
And then you can decide what additional data have to be collected by adjusting the sensors. Or from the prediction errors, you can also note what additional data we need to acquire in order to improve the accuracy of prediction. And this big picture is actually very general, and it reflects a lot of important applications of big data.", - "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" - }, - { - "time": "6:16", - "text": "So, it's useful to keep that in mind while we are looking at some text mining techniques.", - "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" - }, - { - "time": "6:22", - "text": "So from a text mining perspective, we're interested in text-based prediction. Of course, sometimes text alone can make predictions. And this is most useful for prediction about human behavior or human preferences or opinions. But in general text data will be put together with non-text data. So the interesting questions here would be, first, how can we design effective predictors? And how do we generate such effective predictors from text?", - "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" - }, - { - "time": "6:53", - "text": "And this question has been addressed to some extent in some previous lectures where we talked about what kind of features we can design for text data. And it has also been addressed to some extent by talking about the other knowledge that we can mine from text. So, for example, topic mining can be very useful to generate patterns or topic-based indicators or predictors that can be further fed into a predictive model. So topics can be an intermediate representation of text. That would allow us to design high-level features or predictors that are useful for prediction of some other variable. 
Though it is also generated from the original text data, it provides a much better representation of the problem, and it serves as a more effective predictor.", - "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" - }, - { - "time": "7:46", - "text": "And similarly, sentiment analysis can lead to such predictors as well. So, those other data mining or text mining algorithms can be used to generate predictors.", - "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" - }, - { - "time": "7:58", - "text": "The other question is, how can we jointly mine text and non-text data together? Now, this is a question that we have not addressed yet. So, in this lecture and the following lectures, we're going to address this problem. Because this is where we can generate much more enriched features for prediction, and it allows us to reveal a lot of interesting knowledge about the world. The patterns that are generated from text and non-text data can themselves sometimes already be useful for prediction. But when they are put together with many other predictors, they can really help improve the prediction.", - "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" - }, - { - "time": "8:39", - "text": "Basically, you can see text-based prediction can actually serve as a unified framework to combine many text mining and analysis techniques, including topic mining and other content mining techniques, or sentiment analysis.", - "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" - }, - { - "time": "8:55", - "text": "The goal here is mainly to infer values of real-world variables. But in order to achieve the goal we can do some other preparations, and these are subtasks. So one subtask could be to mine the content of text data, like topic mining. And the other could be to mine knowledge about the observer. 
So, sentiment analysis and opinion mining.", - "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" - }, - { - "time": "9:21", - "text": "And both can help provide predictors for the prediction problem.", - "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" - }, - { - "time": "9:27", - "text": "And of course we can also add non-text data directly to the predictive model, but non-text data also helps provide a context for text analysis. And that further improves the topic mining and the opinion analysis. And such improvement often leads to more effective predictors for our problems. It would enlarge the space of patterns, opinions, and topics that we can mine from text, and we'll discuss that more later. So the joint analysis of text and non-text data can actually be understood from two perspectives.", - "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" - }, - { - "time": "10:05", - "text": "From one perspective, non-text data can help with text mining.", - "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" - }, - { - "time": "10:11", - "text": "Because non-text data can provide a context for mining text data and provide a way to partition the data in different ways. And this leads to a number of techniques for contextual text mining, that is, to mine text in the context defined by non-text data. And you see this reference here for a large body of work in this direction. And I will highlight some of it in the next lectures.", - "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" - }, - { - "time": "10:39", - "text": "Now, the other perspective is that text data can help with non-text data mining as well. And this is because text data can help interpret patterns discovered from non-text data. Let's say you discover some frequent patterns from non-text data. 
Now we can use the text data associated with instances where the pattern occurs, as well as text data that is associated with instances where the pattern doesn't occur. And this gives us two sets of text data, and then we can see what's the difference. And this difference in text data is interpretable because text content is easy to digest. And that difference might suggest some meaning for this pattern that we found from non-text data. So, it helps interpret such patterns. And this technique is called pattern annotation.", - "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" - }, - { - "time": "11:32", - "text": "And you can see this reference listed here for more detail.", - "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" - }, - { - "time": "11:38", - "text": "So here are the references that I just mentioned. The first is the reference for pattern annotation. The second is Qiaozhu Mei's dissertation on contextual text mining. It contains a large body of work on contextual text mining techniques.", - "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" - }, - { - "time": "11:56", - "text": "[MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/u8NLS/6-3-text-based-prediction" - } - ] - }, - { - "6-4-contextual-text-mining-motivation": [ - { - "time": "0:00", - "text": "[SOUND] This lecture is about contextual text mining.", - "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" - }, - { - "time": "0:11", - "text": "Contextual text mining is related to multiple kinds of knowledge that we mine from text data, as I'm showing here. It's related to topic mining because you can make topics associated with context, like time or location. 
And similarly, we can make opinion mining more contextualized, connecting opinions to context.", - "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" - }, - { - "time": "0:34", - "text": "It's related to text-based prediction because it allows us to combine non-text data with text data to derive sophisticated predictors for the prediction problem. So more specifically, why are we interested in contextual text mining? Well, that's first because text often has rich context information. And this can include direct context such as meta-data, and also indirect context. So, the direct context includes meta-data such as time, location, authors, and the source of the text data. And they're almost always available to us.", - "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" - }, - { - "time": "1:14", - "text": "Indirect context refers to additional data related to the meta-data. So for example, from the authors, we can further obtain additional context such as the social network of the author, or the author's age.", - "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" - }, - { - "time": "1:30", - "text": "Such information is not in general directly related to the text, yet through the authors, we can connect them. There could be other text data from the same source as this one; through the source, the other text can be connected with this text as well. So in general, any related data can be regarded as context. So there could even be remotely related context.", - "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" - }, - { - "time": "1:55", - "text": "And so what's the use? What is text context used for? Well, context can be used to partition text data in many interesting ways. It can almost allow us to partition text data in any way we need. 
And this is very important because this allows us to do interesting comparative analyses. It also, in general, provides meaning to the discovered topics, if we associate the text with context.", - "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" - }, - { - "time": "2:25", - "text": "So here's an illustration of how context can offer interesting ways of partitioning text data. So here I just show some research papers published in different years.", - "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" - }, - { - "time": "2:41", - "text": "At different venues; the conference names are listed here on the bottom, like SIGIR or ACL, etc.", - "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" - }, - { - "time": "2:49", - "text": "Now such text data can be partitioned in many interesting ways because we have context.", - "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" - }, - { - "time": "2:56", - "text": "So the context here just includes time and the conference venues. But perhaps we can include some other variables as well.", - "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" - }, - { - "time": "3:06", - "text": "But let's see how we can partition this data in interesting ways. First, we can treat each paper as a separate unit. So in this case, each paper has its own ID and its own context. It's independent. But we can also treat all the papers within 1998 as one group, and this is only possible because of the availability of time. And we can partition data in this way. 
This would allow us to compare topics, for example, in different years.", - "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" - }, - { - "time": "3:39", - "text": "Similarly, we can partition the data based on the venues. We can get all the SIGIR papers and compare those papers with the rest. Or compare SIGIR papers with KDD papers, with ACL papers.", - "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" - }, - { - "time": "3:52", - "text": "We can also partition the data to obtain the papers written by authors in the U.S., and that of course uses additional context of the authors. And this would allow us to then compare such a subset with another set of papers written by authors in other countries.", - "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" - }, - { - "time": "4:13", - "text": "Or we can obtain a set of papers about text mining, and this can be compared with papers about another topic. And note that these partitionings can also be intersected with each other to generate even more complicated partitions.", - "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" - }, - { - "time": "4:29", - "text": "And so in general, this enables discovery of knowledge associated with different contexts as needed.", - "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" - }, - { - "time": "4:37", - "text": "And in particular, we can compare different contexts. And this often gives us a lot of useful knowledge. For example, by comparing topics over time, we can see trends of topics. Comparing topics in different contexts can also reveal differences between the two contexts. So there are many interesting questions that require contextual text mining. Here I list some very specific ones. 
For example, what topics have been getting increasing attention recently in data mining research? Now to answer this question, obviously we need to analyze text in the context of time.", - "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" - }, - { - "time": "5:13", - "text": "So time is the context in this case. Is there any difference in the responses of people in different regions to an event, any event? So this is a very broad question. In this case, of course, location is the context. What are the common research interests of two researchers? In this case, authors can be the context. Is there any difference in the research topics published by authors in the USA and those outside? Now in this case, the context would include the authors and their affiliation and location.", - "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" - }, - { - "time": "5:47", - "text": "So this goes beyond just the author himself or herself. We need to look at the additional information connected to the author. Is there any difference in the opinions about the topics expressed on one social network and another? In this case, the social network of authors and the topic can be a context.", - "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" - }, - { - "time": "6:06", - "text": "Are there topics in news data that are correlated with sudden changes in stock prices? In this case, we can use a time series such as stock prices as context.", - "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" - }, - { - "time": "6:17", - "text": "What issues mattered in the 2012 presidential campaign, or presidential election? Now in this case, time serves again as context. So, as you can see, the list can go on and on. Basically, contextual text mining can have many applications. 
[MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/eJt9Y/6-4-contextual-text-mining-motivation" - } - ] - }, - { - "6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis": [ - { - "time": "0:00", - "text": "[MUSIC] This lecture is about a specific technique for Contextual Text Mining called Contextual Probabilistic Latent Semantic Analysis.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "0:19", - "text": "In this lecture, we're going to continue discussing contextual text mining. And we're going to introduce Contextual Probabilistic Latent Semantic Analysis as an extension of PLSA for doing contextual text mining.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "0:34", - "text": "Recall that in contextual text mining we hope to analyze topics in text in consideration of the context, so that we can associate the topics with the properties of the context we're interested in.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "0:48", - "text": "So in this approach, contextual probabilistic latent semantic analysis, or CPLSA, the main idea is to add interesting context variables into the generative model.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "1:03", - "text": "Recall that before, when we generate the text, we generally assume we'll start with some topics, and then sample words from these topics. But here, we're going to add context variables, so that the coverage of topics, and also the content of topics, would be tied to context. 
Or in other words, we're going to let the context influence both the coverage and the content of a topic.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "1:31", - "text": "The consequence is that this will enable us to discover contextualized topics, and make the topics more interesting and more meaningful. Because we can then have topics that can be interpreted specifically with respect to a particular context that we are interested in. For example, a particular time period.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "1:52", - "text": "As an extension of the PLSA model, CPLSA makes the following changes. Firstly, it models the conditional likelihood of text given context.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "2:07", - "text": "That clearly suggests that the generation of text would then depend on context, and that allows us to bring context into the generative model.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "2:18", - "text": "Secondly, it makes two specific assumptions about the dependency of topics on context. 
One is to assume that depending on the context, that is, depending on different time periods or different locations, there are different views of a topic, or different versions of the word distributions that characterize a topic.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "2:38", - "text": "And this assumption allows us to discover different variations of the same topic in different contexts.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "2:46", - "text": "The other is that we assume the topic coverage also depends on the context.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "2:55", - "text": "That means depending on the time or location, we might cover topics differently.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "3:00", - "text": "Again, this dependency would then allow us to capture the association of topics with specific contexts. We can still use the EM algorithm to solve the problem of parameter estimation.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "3:16", - "text": "And in this case, the estimated parameters would naturally contain context variables, and in particular, a lot of conditional probabilities of topics given certain contexts. And this is what allows you to do contextual text mining. 
So this is the basic idea.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "3:35", - "text": "Now, we don't have time to introduce this model in detail, but there are references here that you can look into to know more detail. Here I just want to explain the high level ideas in more detail. Particularly, I want to explain the generation process of text data that has context associated with it in such a model.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "4:01", - "text": "So as you see here, we can assume there are still multiple topics. For example, some topics might represent themes like government response, donation, or the city of New Orleans. Now this example is in the context of Hurricane Katrina, which hit New Orleans.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "4:22", - "text": "Now as you can see we assume there are different views associated with each of the topics. And these are shown as View 1, View 2, View 3. Each view is a different version of word distributions. And these views are tied to some context variables. For example, tied to the location Texas, or the time July 2005, or the occupation of the author being a sociologist.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "4:56", - "text": "Now, on the right side, we assume the document has context information. So the time is known to be July 2005. The location is Texas, etc. And such context information is what we hope to model as well. 
So we're not going to just model the text.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "5:15", - "text": "And so one idea here is to model the variations of topic content across contexts. And this gives us different views of the word distributions.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "5:27", - "text": "Now on the bottom you will see that the theme coverage, or topic coverage, might also vary according to these contexts, because in the case of a location like Texas, people might want to cover the red topic more. That's New Orleans. That's visualized here. But in a certain time period, maybe a particular topic will be covered more. So this variation is also considered in CPLSA. So to generate such a document with context, we first choose a view.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "6:08", - "text": "And this view of course now could be from any of these contexts. Let's say we have taken this view that depends on the time, in the middle. So now, we will have a specific version of word distributions. 
Now, you can see some probabilities of words for each topic.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "6:26", - "text": "Now, once we have chosen a view, the situation will be very similar to what happened in standard PLSA. We assume we have got a word distribution associated with each topic, right?", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "6:39", - "text": "And then next, we will also choose a coverage from the bottom, so we're going to choose a particular coverage, and that coverage was fixed in PLSA before, and assigned to a particular document. Each document has just one coverage distribution.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "6:58", - "text": "Now here, because we consider context, the distribution of topics or the coverage of topics can vary depending on the context that has influenced the coverage.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "7:10", - "text": "So, for example, we might pick a particular coverage. Let's say in this case we picked a document specific coverage.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "7:20", - "text": "Now with the coverage and these word distributions we can generate a document in exactly the same way as in PLSA. So what it means is, we're going to use the coverage to choose a topic, to choose one of these three topics. Let's say we have picked the yellow topic. 
Then we'll draw a word from this particular topic on the top.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "7:44", - "text": "Okay, so we might get a word like government. And then next time we might choose a different topic, and we'll get donate, etc. Until we generate all the words. And this is basically the same process as in PLSA.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "8:00", - "text": "So the main difference is that when we obtain the coverage and the word distributions, we let the context influence our choice. So in other words, we have extra switches that are tied to these contexts that will control the choices of different views of topics and the choices of coverage.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "8:22", - "text": "And naturally the model will have more parameters to estimate. But once we can estimate those parameters that involve the context, then we will be able to understand the context specific views of topics, or context specific coverages of topics. And this is precisely what we want in contextual text mining.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "8:40", - "text": "So here are some simple results from using such a model. Not necessarily exactly the same model, but similar models. 
So on this slide you see some sample results of comparing news articles about the Iraq War and the Afghanistan War.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "8:56", - "text": "Now we have about 30 articles on the Iraq War, and 26 articles on the Afghanistan War. And in this case, the goal is to reveal the common topics covered in both sets of articles, and the variations of the topics in each of the two collections.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "9:18", - "text": "So in this case the context is explicitly specified by the topic or collection.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "9:25", - "text": "And we see the results here show that there is a common theme that's corresponding to Cluster 1 here in this column. And there is a common theme indicating that the United Nations is involved in both wars. It's a common topic covered in both sets of articles. And that's indicated by the high probability words shown here, united and nations.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "9:51", - "text": "Now if you know the background, of course this is not surprising and this topic is indeed very relevant to both wars. If you look at the column further, then what's interesting is that the next two cells of word distributions actually tell us collection specific variations of the topic of United Nations. 
So it indicates that in the Iraq War, the United Nations was more involved in weapons inspections, whereas in the Afghanistan War it was more involved in maybe aid to the Northern Alliance. It's a different variation of the topic of United Nations.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "10:30", - "text": "So this shows that by bringing in the context, in this case the different wars or the different collections of text, we can have topical variations tied to these contexts, to reveal the differences of coverage of the United Nations in the two wars.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "10:46", - "text": "Now similarly if you look at the second cluster, cluster two, it has to do with the killing of people, and, again, it's not surprising if you know the background about wars. All the wars involve killing of people, but imagine if you are not familiar with the text collections. We have a lot of text articles, and such a technique can reveal the common topics covered in both sets of articles. It can be used to reveal common topics in multiple sets of articles as well. 
If you look further in that column of cluster two, you see variations of killing of people, and that corresponds to different contexts", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "11:28", - "text": "And here is another example of results obtained from blog articles about Hurricane Katrina.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "11:37", - "text": "In this case, what you see here is visualization of the trends of topics over time.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "11:47", - "text": "And the top one shows just the temporal trends of two topics. One is oil price, and one is about the flooding of the city of New Orleans.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "12:00", - "text": "Now these topics are obtained from blog articles about Hurricane Katrina.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "12:07", - "text": "And people talk about these topics, and then switch to some other topics. But the visualization shows that with this technique, we can have a conditional distribution of time given a topic. So this allows us to plot this conditional probability as a curve, like what you're seeing here. We see that, initially, the two curves tracked each other very well. But later we see the topic of New Orleans was mentioned again but oil price was not. And this turns out to be the time period when another hurricane, Hurricane Rita, hit the region. 
And that apparently triggered more discussion about the flooding of the city.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "12:54", - "text": "The bottom curve shows the coverage of this topic about flooding of the city by blog articles in different locations. And it also shows some shift of coverage that might be related to people's migrating from the state of Louisiana to Texas for example.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "13:20", - "text": "So in this case we can see the time can be used as context to reveal trends of topics.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "13:27", - "text": "These are some additional results on spatial patterns. In this case it was about the topic of government response. And there was some criticism about the slow response of the government in the case of Hurricane Katrina.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "13:44", - "text": "And the discussion now is covered in different locations. And these visualizations show the coverage in different weeks of the event. And initially it's covered mostly in the victim states, in the South, but then gradually spread into other locations. But in week four, which is shown on the bottom left, we see a pattern that's very similar to the first week on the top left. And that's when again Hurricane Rita hit the region. So such a technique would allow us to use location as context to examine the variations of topics. 
And of course the model is completely general, so you can apply this to any other collections of text to reveal spatiotemporal patterns.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "14:34", - "text": "Here is another application of this kind of model, where we look at the use of the model for event impact analysis.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "14:43", - "text": "So here we're looking at research articles in information retrieval, IR, particularly SIGIR papers. And the topic we are focusing on is about the retrieval models. And you can see the top words with high probability about this topic on the left.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "14:59", - "text": "And then we hope to examine the impact of two events. One is the start of TREC, the Text REtrieval Conference. This is a major evaluation sponsored by the U.S. government, and was launched in 1992 or around that time. And that is known to have made an impact on the topics of research in information retrieval.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "15:23", - "text": "The other is the publication of a seminal paper, by Ponte and Croft. This is about a language model approach to information retrieval. It's also known to have made a high impact on information retrieval research. So we hope to use this kind of model to understand impact. The idea here is simply to use the time as context, and use these events to divide the time into a period before the event and another after the event. 
And then we can compare the differences of the topics and the variations, etc. So in this case, the results show that before TREC, the study of retrieval models was mostly about the vector space model, the Boolean model, etc. But after TREC, apparently the study of retrieval models has involved a lot of other words. That seems to suggest some different retrieval tasks, so for example, email was used in the enterprise search task, and subtopic retrieval was another task later introduced by TREC.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "16:28", - "text": "On the bottom, we see the variations that are correlated with the publication of the language model paper. Before, we have the classic probabilistic model, logic model, Boolean, etc., but after 1998, we see a clear dominance of language models as probabilistic models. And we see words like language model, estimation of parameters, etc. So this technique here can use events as context to understand the impact of events. Again the technique is general, so you can use this to analyze the impact of any event. Here are some suggested readings.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "17:11", - "text": "The first is a paper about a simple extension of PLSA to enable cross-collection comparison.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "17:21", - "text": "It's to perform comparative text mining to allow us to extract common topics shared by multiple collections. 
And there are variations in each collection.", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - }, - { - "time": "17:31", - "text": "The second one is the main paper about the CPLSA model, with a discussion of a lot of applications. The third one has a lot of details about the spatiotemporal patterns for the Hurricane Katrina example. [MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/3bVa3/6-5-contextual-text-mining-contextual-probabilistic-latent-semantic-analysis" - } - ] - }, - { - "6-6-contextual-text-mining-mining-topics-with-social-network-context": [ - { - "time": "0:00", - "text": "[SOUND] This lecture is about how to mine text data with social network as context. In this lecture we're going to continue discussing contextual text mining. In particular, we're going to look at the social network of authors as context.", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "0:26", - "text": "So first, what's our motivation for using network context for analysis of text?", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "0:32", - "text": "The context of a text article can form a network.", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "0:37", - "text": "For example the authors of research articles might form collaboration networks.", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "0:44", - "text": "And authors of social media content might form social networks. For example, in Twitter people might follow each other. 
Or in Facebook, people might claim friendship with others, etc. So such context connects the content of the authors. Similarly, locations associated with text can also be connected to form a geographical network. But in general you can imagine the metadata of the text data can form some kind of network if they have some relations.", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "1:24", - "text": "Now there is some benefit in jointly analyzing text and its social network context, or network context in general. And that's because we can use the network to impose some constraints on topics of text.", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "1:41", - "text": "So for example it's reasonable to assume that authors connected in collaboration networks tend to write about similar topics.", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "1:53", - "text": "So such heuristics can be used to guide us in analyzing topics. Text also can help characterize the content associated with each subnetwork. 
And this is to say that both", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "2:11", - "text": "kinds of data, the network and text, can help each other.", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "2:16", - "text": "So for example the difference in opinions expressed in two subnetworks can be revealed by doing this type of joint analysis.", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "2:30", - "text": "So here, briefly, we introduce a model called the network supervised topic model.", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "2:40", - "text": "In this slide we're going to give some general ideas. And then in the next slide we're going to give some more details.", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "2:48", - "text": "But in general in this part of the course we don't have enough time to cover these frontier topics in detail. But we provide references that would allow you to read more about the topic to know the details.", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "3:05", - "text": "But it should still be useful to know the general ideas, and to know what they can do, to know when you might be able to use them. So the general idea of the network supervised topic model is the following. Let's start with viewing the regular topic models, like PLSA and LDA, as solving an optimization problem. 
Of course, in this case, the optimization objective function is a likelihood function. So we often use the maximum likelihood estimator to obtain the parameters. And these parameters will give us useful information that we want to obtain from text data. For example, topics. So we want to maximize the probability of text data given the parameters, generally denoted by lambda. The main idea of incorporating the network is to think about the constraints that can be imposed based on the network. In general, the idea is to use the network to impose some constraints on the model parameters, lambda here. For example, the text at adjacent nodes of the network can be expected to cover similar topics. Indeed, in many cases, they tend to cover similar topics.", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "4:34", - "text": "So we may be able to smooth the topic distributions", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "4:39", - "text": "on the graph, on the network, so that adjacent nodes will have very similar topic distributions. So they will share a common distribution on the topics, or have just slight variations of the topic distributions, of the coverage.", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "5:02", - "text": "So, technically, what we can do is simply to add a network-based regularizer to the likelihood objective function as shown here. 
So instead of just optimizing the probability of text data given parameters lambda, we're going to optimize another function F.", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "5:19", - "text": "This function combines the likelihood with a regularizer function called R here. And the regularizer is defined on the parameters lambda and the network. It tells us basically what kind of parameters are preferred from a network constraint perspective. So you can easily see this is in effect implementing the idea of imposing some prior on the model parameters. Only that we're not necessarily having a probabilistic model, but the idea is the same. We're going to combine the two in one single objective function.", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "5:57", - "text": "So, the advantage of this idea is that it's quite general. Here the topic model can be any generative model for text.", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "6:07", - "text": "It doesn't have to be PLSA or LDA, or other current topic models.", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "6:12", - "text": "And similarly, the network can also be any network, any graph that connects these text objects.", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "6:22", - "text": "This regularizer can also be any regularizer. 
We can be flexible in capturing different heuristics that we want to capture.", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "6:32", - "text": "And finally, the function F can also vary, so there can be many different ways to combine them. So, this general idea is actually quite, quite powerful. It offers a general approach to combining these different types of data in a single optimization framework. And this general idea can really be applied for any problem.", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "6:56", - "text": "But here in this paper reference here, a particular instantiation called NetPLSA was studied. In this case, it's just an instantiation of PLSA to incorporate this simple constraint imposed by the network. And the prior here is that the neighbors on the network must have similar topic distributions. They must cover similar topics in similar ways. And that's basically what it says in English.", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "7:24", - "text": "So technically we just have a modified objective function here. It's defined on both the text and the network graph G here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "7:34", - "text": "And if you look at this formula, you can actually recognize some parts fairly easily.", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "7:40", - "text": "Because they should be fairly familiar to you by now. 
So can you recognize which part is the likelihood for the text given the topic model?", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "7:52", - "text": "Well if you look at it, you will see this part is precisely the PLSA log-likelihood that we want to maximize when we estimate parameters for PLSA alone. But the second equation shows some additional constraints on the parameters. And in particular, we'll see here it's to measure the difference between the topic coverage at node u and node v, the two adjacent nodes on the network. We want their distributions to be similar. So here we are computing the square of their differences and we want to minimize this difference. And note that there's a negative sign in front of this sum, this whole sum here. So this makes it possible to find the parameters that both maximize the PLSA log-likelihood, which means the parameters will fit the data well, and also respect this constraint from the network.", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "9:06", - "text": "And this is the negative sign that I just mentioned. Because this is a negative sign, when we maximize this objective function we'll actually minimize this second term here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "9:19", - "text": "So if we look further in this picture we'll see the result is weighted by the weight of the edge between u and v here. And that comes from our network. If you have a large weight that says, well, these two nodes are strong collaborators as researchers, or these two are strong connections between two people in a social network, then they would have a large weight. 
Then that means it would be more important that their topic coverages are similar. And that's basically what it says here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "9:55", - "text": "And finally you see a parameter lambda here. This is a new parameter to control the influence of the network constraint. We can see easily, if lambda is set to 0, we just go back to the standard PLSA. But when lambda is set to a larger value, then we will let the network influence the estimated models more. So as you can see, the effect here is that we're going to do basically PLSA. But we're going to also try to make the topic coverages on the two nodes that are strongly connected to be similar. And we ensure their coverages are similar.", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "10:33", - "text": "So here are some of the sample results from that paper. This slide shows the results of using PLSA. And the data here is DBLP data, bibliographic data, about research articles. And the experiments have to do with using four communities of publications: IR stands for information retrieval, DM stands for data mining, ML for machine learning, and Web. There are four communities of articles, and we were hoping", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "11:06", - "text": "to see that the topic mining can help us uncover these four communities. But from these sample topics that you have seen here that are generated by PLSA, PLSA is unable to generate the four communities that correspond to our intuition. The reason is that they are all mixed together and there are many words that are shared by these communities. 
So it's not that easy to use four topics to separate them. If we use more topics, perhaps we would have more coherent topics.", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "11:42", - "text": "But what's interesting is that if we use NetPLSA, the network, in this case the collaboration network of authors, is used to impose constraints. In this case we also used four topics, but NetPLSA gave much more meaningful topics. So here we see that these topics correspond well to the four communities: the first is information retrieval, the second is data mining, the third is machine learning, and the fourth is Web. The separation was mostly because of the influence of the network, which leverages the collaboration network information. Essentially, the people that form a collaboration network are assumed to write about similar topics, and that's why we get more coherent topics. If you just look at the text data alone, based on word occurrences, you won't get such coherent topics, even though a topic model like PLSA or LDA should also be able to pick up co-occurring words. So in general, the topics they generate represent words that co-occur with each other. 
But still, they cannot generate as coherent results as NetPLSA, showing that the network context is very useful here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "13:08", - "text": "Now a similar model could also be used to characterize the content associated with each subnetwork of collaborations.", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - }, - { - "time": "13:19", - "text": "So a more general view of text mining in the context of a network is to treat text as living in a rich information network environment. That means we can connect all the related data together as a big network, and text data can be associated with many structures in the network. For example, text data can be associated with the nodes of the network, and that's basically what we just discussed in NetPLSA. But text data can be associated with edges as well, or paths, or even subnetworks. Such a way to represent text in the big environment of all the context information is very powerful, because it allows us to analyze all the data, all the information, together. So in general, analysis of text should use the entire network information that's related to the text data. So here's one suggested reading. This is the paper about NetPLSA, where you can find more details about the model and how to make such a model. 
[MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/CMCyH/6-6-contextual-text-mining-mining-topics-with-social-network-context" - } - ] - }, - { - "6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision": [ - { - "time": "0:00", - "text": "[SOUND]", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "0:07", - "text": "This lecture is about using a time series as context to potentially discover causal topics in text. In this lecture, we're going to continue discussing contextual text mining. In particular, we're going to look at a time series as a context for analyzing text, to potentially discover causal topics. As usual, let's start with the motivation. In this case, we hope to use text mining to understand a time series. Here, what you are seeing is the Dow Jones Industrial Average stock price curve. And you'll see a sudden drop here. So one would be interested in knowing what might have caused the stock market to crash.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "0:48", - "text": "Well, if you know the background, you might be able to figure it out if you look at the timestamp, or there is other data that can help us think about it. But the question here is: can we get some clues about this from the companion news stream? We have a lot of news data that was generated during that period.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "1:08", - "text": "So if we do that, we might actually discover that the crash happened at the time of the September 11 attack. 
And that's the time when there was a sudden rise of the topic about September 11 in news articles.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "1:26", - "text": "Here's another scenario where we want to analyze a presidential election. This is a time series from a presidential prediction market. For example, the Iowa Electronic Markets would have stocks for each candidate. If you believe one candidate will win, then you tend to buy the stock for that candidate, causing the price of that candidate to increase. So that's a nice way to actually survey people's opinions about these candidates.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "2:00", - "text": "Now, suppose you see a sudden drop of price for one candidate. You might want to know what might have caused the sudden drop.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "2:10", - "text": "Or in a social science study, you might be interested in knowing what mattered in this election, what issues really mattered to people. Now again, in this case, we can look at the companion news stream and ask the question: are there any clues in the news stream that might provide insight about this? So for example, we might discover that the mention of tax cut", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "2:35", - "text": "has been increasing since that point. So maybe that's related to the drop of the price. 
So all these cases are special cases of the general problem of joint analysis of text and time series data to discover causal topics. The input in this case is a time series plus text data produced in the same time period, the companion text stream.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "3:02", - "text": "And this is different from standard topic models, where we have just a text collection. That's why the time series here serves as context.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "3:13", - "text": "Now, the output that we want to generate is the topics whose coverage in the text stream has strong correlations with the time series.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "3:22", - "text": "For example, whenever the topic is mentioned, the price tends to go down, etc.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "3:28", - "text": "Now we call these topics Causal Topics. Of course, they're not, strictly speaking, causal topics. We are never going to be able to verify whether there is a true causal relationship here. That's why we put causal in quotation marks. 
But at least they are correlated topics that might potentially explain the cause, and humans can certainly further analyze such topics to understand the issue better.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "3:59", - "text": "And the output would contain topics just like in topic modeling. But we hope that these topics are not just the regular topics. These topics don't have to explain the text data the best; rather, they have to explain the data in the text, meaning that they have to represent meaningful topics in the text. But also, more importantly, they should be correlated with the external time series that's given as a context. So to understand how we solve this problem, let's first try to solve the problem with a regular topic model, for example PLSA. We can apply this to the text stream, with some extension like CPLSA, or Contextual PLSA. Then we can discover these topics and also their coverage over time.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "4:53", - "text": "So, one simple solution is to choose the topics from this set that have the strongest correlation with the external time series.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "5:05", - "text": "But this approach is not going to be very good. Why? Because we are restricted to the topics discovered by PLSA or LDA, and that means the choice of topics will be very limited. And we know these models try to maximize the likelihood of the text data. So those topics tend to be the major topics that explain the text data well. 
And they are not necessarily correlated with the time series. Even if we get the best one, the most correlated topics might still not be so", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "5:34", - "text": "interesting from a causal perspective.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "5:37", - "text": "So in the work cited here, a better approach was proposed. This approach is called Iterative Causal Topic Modeling. The idea is to do an iterative adjustment of the topics discovered by topic models, using the time series to induce a prior.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "5:57", - "text": "So here's an illustration of how this works. We take the text stream as input and then apply regular topic modeling to generate a number of topics, let's say four topics, shown here.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "6:09", - "text": "And then we're going to use the external time series to assess which topic is more causally related, or correlated, with the external time series. So we have some measure to rank them. And we might find that Topic 1 and Topic 4 are more correlated, and Topic 2 and Topic 3 are not. Now we could have stopped here, and that would be just like the simple approach that I talked about earlier: we can take these topics and call them causal topics. But as I also explained, these topics are unlikely to be very good, because they are general topics that explain the whole text collection. They are not necessarily
the best topics correlated with our time series.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "6:51", - "text": "So what we can do in this approach is to first zoom into the word level, and we can look into each word in the top-ranked word list for each topic. Let's say we take Topic 1 as the target to examine. We know Topic 1 is correlated with the time series, or is at least the best that we could get from this set of topics so far.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "7:18", - "text": "And we're going to look at the words in this topic, the top words.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "7:23", - "text": "And if the topic is correlated with the time series, there must be some words that are highly correlated with the time series. So here, for example, we might discover that W1 and W3 are positively correlated with the time series, but W2 and W4 are negatively correlated.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "7:41", - "text": "So, as a topic, it's not good to mix words with different correlations. We can then separate these words. We are going to get all the red words that indicate positive correlations, W1 and W3. And we're going to also get another subtopic.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "8:00", - "text": "If you want. 
That represents the negatively correlated words, W2 and W4.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "8:07", - "text": "Now, these subtopics, or these variations of topics based on the correlation analysis, are topics that are still quite related to the original topic, Topic 1. But they are already deviating, because of the use of time series information for biased selection of words. So they are, in some sense, and we should expect so, more correlated with the time series than the original Topic 1, because Topic 1 has mixed words and here we separate them.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "8:42", - "text": "So each of these two subtopics", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "8:46", - "text": "can be expected to be better correlated with the time series. However, they may not be as coherent as we would like. So the idea here is to go back to the topic model, using each of these as a prior to further guide the topic modeling. That is to say, we ask our topic model to now discover topics that are very similar to each of these two subtopics. And this will cause a bias toward topics more correlated with the time series. Of course, we can then apply topic models to get another generation of topics, and those can be further ranked based on the time series to select the highly correlated topics. And then we can further analyze the component words in the topics and try to analyze word-level correlation. 
And then get even more correlated subtopics that can be further fed into the process as priors to drive the topic model discovery.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "9:46", - "text": "So this whole process is just a heuristic way of optimizing causality and coherence, and that's our ultimate goal, right? So here you see that pure topic models will be very good at maximizing topic coherence; the topics will all be meaningful.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "10:02", - "text": "If we only use a causality test, or correlation measure, then we might get a set of words that are strongly correlated with the time series, but they may not necessarily mean anything together; they might not be semantically connected. So that would be at the other extreme, on the top.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "10:21", - "text": "Now, the ideal is to get causal topics that score high both in topic coherence and in causal relation. This approach can be regarded as an alternating way to maximize both dimensions. So when we apply the topic models, we're maximizing the coherence. But when we decompose the topics into sets of words that are strongly correlated with the time series, we select the most strongly correlated words with the time series; we are pushing the model back toward the causal dimension to make it better in causal scoring. And then, when we apply the selected words as a prior to guide the topic modeling, we again go back to optimize the coherence. 
Because topic models ensure the next generation of topics will be coherent, we can iterate, alternating the optimization in this way, as shown in this picture.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "11:20", - "text": "So the only component that you haven't yet seen in such a framework is how to measure the causality, because the rest is just topic modeling. So let's have a little bit of discussion of that. So here, let's say we have a topic about government response. With topic modeling, we can get the coverage of the topic over time. So we have a time series, X sub t.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "11:43", - "text": "Now, we also have, or are given, a time series that represents external information. It's a non-text time series, Y sub t; here, the stock prices. Now the question is: does Xt cause Yt?", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "11:58", - "text": "Well, in other words, we want to measure the causality relation between the two, or maybe just measure the correlation of the two.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "12:08", - "text": "There are many measures that we can use in this framework. For example, Pearson correlation is a commonly used measure. And we have to consider a time lag here, so that we can try to capture the causal relation. 
Using data in the past", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "12:26", - "text": "to try to correlate with the data points of Y that represent the future, for example. And by introducing such a lag, we can hopefully capture some causal relation even using correlation measures like Pearson correlation.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "12:45", - "text": "But a commonly used measure for causality here is the Granger Causality Test.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "12:52", - "text": "And the idea of this test is actually quite simple. Basically, you're going to have an autoregressive model that uses the history of Y to predict itself, and this is the best we can do without any other information. So we're going to build such a model. And then we're going to add some history information of X into the model, to see if we can improve the prediction of Y. If we can do that with a statistically significant difference, then we say X has some causal influence on Y; otherwise it doesn't provide an improvement in the prediction of Y.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "13:32", - "text": "If, on the other hand, the difference is insignificant, that would mean X does not really have a causal relation with Y. So that's the basic idea. Now, we don't have time to explain this in detail, but you can read the cited reference here to learn more about this measure. 
It's a very convenient and commonly used measure, with many applications.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "13:55", - "text": "So next, let's look at some sample results generated by this approach. Here the data is the New York Times, in the time period of June 2000 through December of 2011. And the time series we used is the stock prices of two companies, American Airlines and Apple. The goal is to see, if we inject some time series context, whether we can actually get topics that are biased toward the time series. Imagine if we don't use any input, we don't use any context. Then the topics from the New York Times discovered by PLSA would be just general topics that people talk about in news, the major topics in the news events.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "14:41", - "text": "But here you see these topics are indeed biased toward each time series. In particular, if you look at the underlined words in the American Airlines result, you see airlines, airport, air, united, trade, terrorism, etc. So it clearly has topics that are more correlated with the external time series. On the right side, you see that some of the topics are clearly related to Apple: you can see computer, technology, software, internet, com, web, etc. So that just means the time series has effectively served as a context to bias the discovery of topics. From another perspective, these results tell us not just what people have talked about in each case, but what topics might be correlated with the stock prices. And so these topics can serve as a starting point for people to further look into the issues and find the true causal relations. 
Here are some other results from analyzing presidential election time series. The time series data here is from the Iowa Electronic Markets, a prediction market. And the text data is the same, the New York Times, from May 2000 to October 2000; that's for the 2000 presidential election campaign. Now, what you see here are the top three words in significant topics from the New York Times.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "16:21", - "text": "And if you look at these topics, they are indeed quite related to the campaign. Actually, the issues are very much related to the important issues of this presidential election. Now here I should mention that the text data has been filtered by using only the articles that mention these candidate names.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "16:45", - "text": "It's a subset of these news articles, very different from the previous experiment.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "16:53", - "text": "But the results here clearly show that the approach can uncover some important issues in that presidential election. So tax cut, oil energy, abortion, and gun control are all known to be important issues in that presidential election. And that was supported by some literature in political science.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "17:17", - "text": "And also by some discussion in Wikipedia, right. 
So basically, the results show that the approach can effectively discover possibly causal topics based on the time series data.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "17:35", - "text": "So there are two suggested readings here. One is the paper about this iterative topic modeling with time series feedback, where you can find more details about how this approach works. And the second one is a reading about the Granger Causality Test.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "17:55", - "text": "So in the end, let's summarize the discussion of text-based prediction. Now, text-based prediction is generally very useful for big data applications that involve text, because it can help us infer new knowledge about the world. And the knowledge can go beyond what's discussed in the text.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "18:17", - "text": "The results can also support optimizing our decision making, and this has widespread applications.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "18:28", - "text": "Text data is often combined with non-text data for prediction, because for this purpose, the prediction purpose, we generally would like to combine non-text data and text data together, using as many clues as possible for prediction. And so, as a result, joint analysis of text and non-text data is very necessary, and it's also very useful. Now, when we analyze text data together with non-text data, we can see they can help each other. 
So non-text data provides context for mining text data, and we discussed a number of techniques for contextual text mining. And on the other hand, text data can also help interpret patterns discovered from non-text data, and this is called pattern annotation.", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - }, - { - "time": "19:14", - "text": "In general, this is a very active research topic, and there are new papers being published. And there are also many open challenges that remain to be solved. [MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/WDo7w/6-7-contextual-text-mining-mining-casual-topics-with-time-series-supervision" - } - ] - }, - { - "6-8-course-summary": [ - { - "time": "0:06", - "text": "This lecture is a summary of this whole course.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "0:10", - "text": "First, let's revisit the topics that we covered in this course. In the beginning, we talked about natural language processing and how it can enrich text representation. We then talked about how to mine knowledge about the language, the natural language used to express what's observed about the world in text data.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "0:34", - "text": "In particular, we talked about how to mine word associations. We then talked about how to analyze topics in text, how to discover topics and analyze them.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "0:47", - "text": "This can be regarded as knowledge about the observed world. We then talked about how to mine knowledge about the observer, and in particular how to mine opinions and do sentiment analysis. 
And finally, we talked about text-based prediction, which has to do with predicting values of other real-world variables based on text data. In discussing this, we also discussed the role of non-text data, which can contribute additional predictors for the prediction problem, and can also provide context for analyzing text data; in particular, we talked about how to use context to analyze topics.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "1:33", - "text": "So here are the key high-level take-away messages from this course. I'm going to go over these major topics and point out the key take-away messages that you should remember.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "1:47", - "text": "First, NLP and text representation.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "1:53", - "text": "You should realize that NLP is always very important for any text application, because it enriches text representation. The more NLP, the better text representation we can have. And this further enables more accurate knowledge discovery, to discover deeper knowledge buried in text.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "2:12", - "text": "However, the current state of the art of natural language processing is still not robust enough. So, as a result, the robust text mining technologies today tend to be based on word [INAUDIBLE], and tend to rely a lot on statistical analysis, as we've discussed in this course. And you may recall we've mostly used word-based representations. 
And we've relied a lot on statistical techniques, statistical learning techniques in particular.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "2:47", - "text": "In word association mining and analysis, the important points are: first, we introduced two concepts for two basic and complementary relations of words, paradigmatic and syntagmatic relations. These are actually very general relations between elements in sequences, if you take them as elements that occur in similar contexts in a sequence and elements that tend to co-occur with each other. And these relations may also be meaningful for other sequence data.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "3:25", - "text": "We also talked a lot about text similarity. We discussed how to discover paradigmatic relations by comparing the contexts of words, to discover words that share similar contexts. In that part, we talked about representing text data with a vector space model. And we talked about some retrieval techniques, such as BM25 for measuring the similarity of text and for assigning weights to terms, tf-idf weighting, et cetera. This part is well connected to text retrieval, and there are other techniques that can be relevant here also.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "4:03", - "text": "The next point is about co-occurrence analysis of text, where we introduced some information theory concepts such as entropy, conditional entropy, and mutual information. 
These are not only very useful for measuring the co-occurrences of words, they are also very useful for analyzing other kinds of data, and they are useful, for example, for feature selection in text categorization as well.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "4:30", - "text": "So this is another important concept, good to know.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "4:35", - "text": "And then we talked about topic mining and analysis, and that's where we introduced the probabilistic topic models. We spent a lot of time explaining the basic topic model, PLSA, in detail, and those are the basics for understanding LDA, which is theoretically a more appealing model, but we did not have enough time to really go in depth in introducing LDA.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "5:02", - "text": "But in practice, PLSA seems as effective as LDA and it's simpler to implement and it's also more efficient.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "5:11", - "text": "In this part we also introduced some general concepts that would be useful to know. One is the generative model, and this is a general method for modeling text data and modeling other kinds of data as well.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "5:24", - "text": "And we talked about maximum likelihood estimation, and the EM algorithm for solving the problem of computing the maximum likelihood estimator.
So, these are all general techniques that tend to be very useful in other scenarios as well.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "5:40", - "text": "Then we talked about text clustering and text categorization. Those are two important building blocks in any text mining application system. In text clustering we talked about how we can solve the problem by using a slightly different mixture model than the probabilistic topic model, and we then also briefly reviewed the similarity-based approaches to text clustering.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "6:11", - "text": "In categorization we also talked about two kinds of approaches. One is generative classifiers that rely on Bayes' rule to", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "6:20", - "text": "infer the conditional probability of a category given text data; in particular we introduced [INAUDIBLE] Bayes in detail.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "6:29", - "text": "This is a practically useful technique for a lot of text categorization tasks.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "6:37", - "text": "We also introduced some discriminative classifiers, particularly logistic regression, k-nearest neighbors, and SVM. They are also very important; they are very popular, and they are very useful for text categorization as well.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "6:52", - "text": "In both parts, we also discussed how to evaluate the results.
Evaluation is quite important because if the measures that you use don't really reflect the utility of the method then they would give you misleading results, so it's very important to get the evaluation right. And we talked about evaluation of categorization in detail with a lot of specific measures.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "7:18", - "text": "Then we talked about sentiment analysis and opinion mining, and that's where we introduced the sentiment classification problem. And although it's a special case of text categorization, we talked about how to extend or improve the text categorization method by using more sophisticated features that would be needed for sentiment analysis. We did a review of some commonly used complex features for text analysis, and then we also talked about how to capture the order of these categories in sentiment classification, and in particular we introduced ordinal logistic regression. Then we also talked about Latent Aspect Rating Analysis. This is an unsupervised way of using a generative model to understand review data in more detail. In particular, it allows us to understand the decomposed ratings of", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "8:14", - "text": "a reviewer on different aspects of a topic. So given text reviews with overall ratings, the method allows us to infer ratings on different aspects. And it also allows us to infer the reviewers' latent weights on these aspects, or which aspects are more important to a reviewer can be revealed as well.
And this enables a lot of interesting applications.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "8:41", - "text": "Finally, in the discussion of prediction, we mainly talked about the joint mining of text and non-text data, as they are both very important for prediction.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "8:51", - "text": "We particularly talked about how text data can help non-text data and vice versa.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "8:58", - "text": "In the case of using non-text data to help text data analysis, we talked about contextual text mining. We introduced contextual PLSA as a generalized model of PLSA that allows us to incorporate context variables, such as time and location. And this is a general way to allow us to reveal a lot of interesting topical patterns in text data.
We also introduced NetPLSA; in this case we use the social network, or network in general, of text data to help analyze topics.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "9:31", - "text": "And finally we talked about how time series data can be used as context to mine potentially causal topics in text data.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "9:43", - "text": "Now, in the other way of using text to", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "9:47", - "text": "help interpret patterns discovered from non-text data, we did not really discuss anything in detail but just provided a reference, but I should stress that that's actually a very important direction to know about if you want to build practical text mining systems, because understanding and interpreting patterns is quite important.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "10:13", - "text": "So this is a summary of the key take-away messages, and I hope these will be very useful to you for building any text mining applications or for studying these algorithms. And this should provide a good basis for you to read frontier research papers, to learn more about advanced algorithms, or to invent new algorithms yourself.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "10:40", - "text": "So to know more about this topic, I would suggest you look into other areas in more depth.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "10:48", - "text": "And during this short period of time of this course, we could only touch the basic concepts, basic principles, of text mining, and we emphasized the coverage of practical algorithms.
And this is at the cost of covering more advanced algorithms, and in many cases we omitted the discussion of a lot of algorithms. So to learn more about the subject you should definitely learn more about natural language processing, because this is the foundation for all text-based applications. The more NLP you can do, the better the text representation that you can get, and then the deeper knowledge you can discover. So this is very important.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "11:37", - "text": "The second area you should look into is Statistical Machine Learning.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "11:41", - "text": "And these techniques are now the backbone techniques for", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "11:46", - "text": "not just text analysis applications but also for NLP. A lot of NLP techniques are nowadays actually based on supervised machine learning.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "11:56", - "text": "So, they are very important because they are a key to also understanding some advanced NLP techniques, and naturally they will provide more tools for doing text analysis in general.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "12:09", - "text": "Now, a particularly interesting area, called deep learning, has attracted a lot of attention recently. It has also shown promise in many application areas, especially in speech and vision, and has been applied to text data as well. So, for example, recently there has been work on using deep learning to do sentiment analysis to achieve better accuracy.
So that's one example of [INAUDIBLE] techniques that we weren't able to cover, but that's also very important.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "12:41", - "text": "And the other area that has emerged in statistical learning is the word embedding technique, where we can learn better representations of words. And then these better representations will allow you to compute similarity of words. As you can see, this provides directly a way to discover the paradigmatic relations of words. And results that people have got, so far, are very impressive. That's another promising technique that we did not have time to touch,", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "13:12", - "text": "but, of course, whether these new techniques would lead to practically useful techniques that work much better than the current technologies is still an open question that has to be examined. And no serious evaluation has been done yet in, for example, examining the practical value of word embedding other than word similarity as the basic evaluation.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "13:36", - "text": "But nevertheless, these are advanced techniques that surely will make an impact in text mining in the future. So it's very important to know more about these.
Statistical learning is also the key to predictive modeling, which is very crucial for many big data applications, and we did not talk about that predictive modeling component, but this is mostly about the regression or categorization techniques, and this is another reason why statistical learning is important.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "14:07", - "text": "We also suggest that you learn more about data mining, and that's simply because general data mining algorithms can always be applied to text data, which can be regarded as a special case of general data.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "14:23", - "text": "So there are many applications of data mining techniques. In particular, for example, pattern discovery would be very useful for generating interesting features for text analysis, and similarly, information network mining techniques can also be used to analyze text information networks.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "14:42", - "text": "So these are all good to know in order to develop effective text analysis techniques. And finally, we also recommend you to learn more about text retrieval, information retrieval, and search engines. This is especially important if you are interested in building practical text application systems. And a search engine would be an essential system component in any text-based application. And that's because text data are created for humans to consume. So humans are in the best position to understand text data, and it's important to have humans in the loop in big text data applications. Search engines can in particular help text mining systems in two ways.
One is to effectively reduce the data size from a large collection to a small collection with the most relevant text data that matters for the particular application. The other is to provide a way to annotate data and to explain patterns, and this has to do with knowledge provenance. Once we discover some knowledge, we have to figure out whether or not the discovery is really reliable. So we need to go back to the original text to verify that. And that is why the search engine is very important.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "16:04", - "text": "Moreover, some techniques of information retrieval, for example BM25 and the vector space model, are also very useful for text data mining. We only mentioned some of them, but if you know more about text retrieval you'll see that there are many techniques that are useful for it. Another useful technique is the indexing technique that enables quick response of a search engine to a user's query, and such techniques can be very useful for building efficient text mining systems as well.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "16:35", - "text": "So, finally, I want to remind you of this big picture for harnessing big text data that I showed you at the beginning of the semester.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "16:45", - "text": "So in general, in a big text application system, we need two kinds of techniques: text retrieval and text mining.", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - }, - { - "time": "16:53", - "text": "And text retrieval, as I explained, is to help convert big text data into a small amount of most relevant data for a particular problem, and can also help provide knowledge provenance and help interpret patterns later.
Text mining has to do with further analyzing the relevant data to discover the actionable knowledge that can be directly useful for decision making or many other tasks. So this course covers text mining. And there's a companion course called Text Retrieval and Search Engines that covers text retrieval. If you haven't taken that course, it would be useful for you to take it, especially if you are interested in building a text information system. And taking both courses will give you a complete set of practical skills for building such a system. So in [INAUDIBLE] I just would like to thank you for taking this course. I hope you have learned useful knowledge and skills in text mining and [INAUDIBLE]. As you see from our discussions there are a lot of opportunities for this kind of technique, and there are also a lot of open challenges. So I hope you can use what you have learned to build a lot of useful applications that will benefit society, and also to join the research community to discover new techniques for text mining and analytics. Thank you.
[MUSIC]", - "url": "https://www.coursera.org/learn/text-mining/lecture/7hOfP/6-8-course-summary" - } - ] - } - ] - } - ] -} \ No newline at end of file From 1c00933ead2151a611b27829c06e748c0c67e6f1 Mon Sep 17 00:00:00 2001 From: Christian Opperman Date: Mon, 4 Dec 2023 19:51:37 -0500 Subject: [PATCH 15/52] Add requirements.txt file for Python scraper --- scraper/requirements.txt | 5 +++++ 1 file changed, 5 insertions(+) create mode 100644 scraper/requirements.txt diff --git a/scraper/requirements.txt b/scraper/requirements.txt new file mode 100644 index 0000000000..35cc33593b --- /dev/null +++ b/scraper/requirements.txt @@ -0,0 +1,5 @@ +beautifulsoup4==4.12.2 +elasticsearch==8.11.0 +Requests==2.31.0 +selenium==4.9.0 +webdriver_manager==4.0.1 From fd17060cdd15c05e2659faba10bbdce7861d4134 Mon Sep 17 00:00:00 2001 From: Christian Opperman Date: Mon, 4 Dec 2023 20:15:46 -0500 Subject: [PATCH 16/52] Update ElasticSearch writer --- scraper/CourseraScraper.py | 35 ++++++++++++++-------------- scraper/ElasticSearchJSONWriter.py | 37 ++++++++++++++++++++++++++++++ scraper/index_to_es.py | 36 ----------------------------- scraper/scrape_coursera_course.py | 28 ++++++++++++++++++---- 4 files changed, 77 insertions(+), 59 deletions(-) create mode 100644 scraper/ElasticSearchJSONWriter.py delete mode 100644 scraper/index_to_es.py diff --git a/scraper/CourseraScraper.py b/scraper/CourseraScraper.py index 45afa4f3be..e26d4118b0 100644 --- a/scraper/CourseraScraper.py +++ b/scraper/CourseraScraper.py @@ -3,6 +3,7 @@ from bs4 import BeautifulSoup from selenium import webdriver + # from selenium.webdriver.chrome.options import Options from selenium.webdriver.chrome.service import Service as ChromeService from selenium.webdriver.common.by import By @@ -14,7 +15,6 @@ class CourseraScraper: def __init__(self, course_url: str, username: str, password: str) -> None: - self.driver = webdriver.Chrome(service=ChromeService(ChromeDriverManager().install())) self.url = course_url 
self.username = username @@ -65,7 +65,6 @@ def login(self) -> None: input("Finalize CAPTCHA and then press Enter in the shell") - class CourseraCourseParser: def __init__(self, driver: webdriver.Chrome) -> None: self.driver = driver @@ -81,7 +80,7 @@ def get_week_urls(self) -> None: self.landing_page = self.driver.current_url # Coursera defaults to saving the user's last accessed week, so need to get the true landing # page once it's been navigated to - self.landing_page = self.landing_page.split('week')[0] + self.landing_page = self.landing_page.split("week")[0] week_url_list = [] if "https://www.coursera.org/learn/" in self.landing_page: @@ -89,15 +88,15 @@ def get_week_urls(self) -> None: week_list_xpath_pattern = "//*[@class='cds-108 css-1mxkpit cds-110']" # Need to make sure the element loads on the page before it can be scraped try: - myElem = WebDriverWait(self.driver, 2).until(EC.presence_of_element_located((By.XPATH, week_list_xpath_pattern))) + myElem = WebDriverWait(self.driver, 2).until( + EC.presence_of_element_located((By.XPATH, week_list_xpath_pattern)) + ) except TimeoutException: print("Loading took too much time!") # Get all elements from the sidebare containing links to the course's week lectures - week_elements = self.driver.find_elements( - By.XPATH, - week_list_xpath_pattern) + week_elements = self.driver.find_elements(By.XPATH, week_list_xpath_pattern) - for week_number in range(1, len(week_elements)+1): + for week_number in range(1, len(week_elements) + 1): week_url_list.append(self.landing_page + f"week/{week_number}") else: self.get_week_urls() @@ -117,9 +116,9 @@ def get_lecture_urls(self): elements = soup.find_all("div", attrs={"data-test": "WeekSingleItemDisplay-lecture"}) for element in elements: - a_tag = element.find('a') - if a_tag and 'href' in a_tag.attrs: - href_value = a_tag['href'] + a_tag = element.find("a") + if a_tag and "href" in a_tag.attrs: + href_value = a_tag["href"] lecture_urls.append("https://www.coursera.org" + 
href_value) else: print("href attribute not found") @@ -131,8 +130,8 @@ def get_lecture_subtitles(self, lecture_url): # Find all div elements contain subtitles # TODO: Take another look at this and see if XPATH is more accurate. Looks like this pattern isn't consistent across classes - pattern = re.compile(r'\bcss-1shylkf\b') - elements = soup.find_all('div', class_=pattern) + pattern = re.compile(r"\bcss-1shylkf\b") + elements = soup.find_all("div", class_=pattern) if len(elements) == 0: print("No value retrieved") else: @@ -140,15 +139,15 @@ def get_lecture_subtitles(self, lecture_url): for element in elements: # Extract the timestamp - button = element.find('button', class_='timestamp') + button = element.find("button", class_="timestamp") timestamp = button.contents[-1].strip() # Extract all phrase elements and concatenate the text of all subtitles - phrases = element.find_all('div', class_='phrases') - text_content = ' '.join(phrase.get_text().strip() for phrase in phrases) + phrases = element.find_all("div", class_="phrases") + text_content = " ".join(phrase.get_text().strip() for phrase in phrases) # Append the subtitles to the list as a dictionary - subtitles.append({'time': timestamp, 'text': text_content, 'url': lecture_url}) + subtitles.append({"time": timestamp, "text": text_content, "url": lecture_url}) # Process the subtitles return subtitles @@ -162,6 +161,6 @@ def get_page_soup(self, url: str) -> BeautifulSoup: # get the page source and parse the HTML content into a BeautifulSoup object parge_source = self.driver.page_source - soup = BeautifulSoup(parge_source, 'html.parser') + soup = BeautifulSoup(parge_source, "html.parser") return soup diff --git a/scraper/ElasticSearchJSONWriter.py b/scraper/ElasticSearchJSONWriter.py new file mode 100644 index 0000000000..273aaeccd2 --- /dev/null +++ b/scraper/ElasticSearchJSONWriter.py @@ -0,0 +1,37 @@ +import json +import os +from elasticsearch import Elasticsearch + + +class ElasticSearchJSONWriter: + def 
__init__(self, json_path: str = "./subtitles.json"):
+        self.url = os.environ.get(
+            "ES_URL", "https://search-cs410-project-hw5dhpc4jsg3m74vnbalajt754.aos.us-east-1.on.aws"
+        )
+        self.user = os.environ.get("ES_USER", "elastic")
+        self.password = os.environ.get("ES_PASSWORD", "CS410-project")
+        self.json_path = json_path
+        self.subtitles_json = self.load_json()
+
+    def load_json(self) -> dict:
+        subtitles_json = {}
+        try:
+            with open(self.json_path) as f:
+                subtitles_json = json.load(f)
+        except FileNotFoundError:
+            print(f"{self.json_path} was not found")
+
+        return subtitles_json
+
+    def index_subtitles(self) -> None:
+        for weeks in self.subtitles_json["Text Mining and Analytics"]:
+            for week in weeks.values():
+                for lecture_titles in week:
+                    for lecture_title in lecture_titles:
+                        for subtitles in lecture_titles[lecture_title]:
+                            self.write_to_elasticsearch(subtitles)
+
+    def write_to_elasticsearch(self, doc) -> None:
+        es = Elasticsearch(self.url, http_auth=(self.user, self.password))
+        resp = es.index(index="subtitles", document=doc)
+        print(resp["result"])
diff --git a/scraper/index_to_es.py b/scraper/index_to_es.py
deleted file mode 100644
index b2ed93ffe0..0000000000
--- a/scraper/index_to_es.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import json
-
-import requests
-import sys, os
-from datetime import datetime
-from elasticsearch import Elasticsearch
-
-#ES_URL = os.environ.get("ES_URL", "https://cs410-project.es.us-central1.gcp.cloud.es.io")
-ES_URL = os.environ.get("ES_URL", "https://search-cs410-project-hw5dhpc4jsg3m74vnbalajt754.aos.us-east-1.on.aws")
-ES_USER = os.environ.get("ES_USER", "elastic")
-ES_PASSWORD = os.environ.get("ES_PASSWORD", "CS410-project")
-#ES_PASSWORD = os.environ.get("ES_PASSWORD", "pciWclpLNdXuicUhXV8bhgk2")
-
-
-
-def write_to_es(doc):
-    es = Elasticsearch(ES_URL,
-        http_auth=(ES_USER, ES_PASSWORD)
-    )
-    resp = es.index(index="subtitles", document=doc)
-    print(resp['result'])
-
-def index_subtitles():
- with open('./subtitles.json') as f: - subtitles_doc = f.read() - subtitles_json = json.loads(subtitles_doc) - for weeks in subtitles_json['Text Mining and Analytics']: - for week in weeks.values(): - for lecture_titles in week: - for lecture_title in lecture_titles: - for subtitles in lecture_titles[lecture_title]: - write_to_es(subtitles) - - -if __name__ == '__main__': - index_subtitles() \ No newline at end of file diff --git a/scraper/scrape_coursera_course.py b/scraper/scrape_coursera_course.py index 6d8d86e0a2..744c958ecf 100644 --- a/scraper/scrape_coursera_course.py +++ b/scraper/scrape_coursera_course.py @@ -2,18 +2,36 @@ import json from CourseraScraper import CourseraScraper +from ElasticSearchJSONWriter import ElasticSearchJSONWriter if __name__ == "__main__": parser = argparse.ArgumentParser() - parser.add_argument("-c", "--course_url", required=True, type=str, help="URL to the landing page of the course you want to scrape. Ex: https://www.coursera.org/learn/cs-410/home/") - parser.add_argument('-u', "--username", required=True, type=str, help="Coursera Username") - parser.add_argument('-p', "--password", required=True, type=str, help="Coursera Password") + parser.add_argument( + "-c", + "--course_url", + required=True, + type=str, + help="URL to the landing page of the course you want to scrape. Ex: https://www.coursera.org/learn/cs-410/home/", + ) + parser.add_argument("-u", "--username", required=True, type=str, help="Coursera Username") + parser.add_argument("-p", "--password", required=True, type=str, help="Coursera Password") + parser.add_argument("-e", "--elastic_search_push", action="store_true") + parser.add_argument( + "-o", + "--output_path", + type=str, + default="./subtitles.json", + help="Path to write JSON file containing scraped transcripts to. 
Defaults to subtitles.json in this directory", + ) args = parser.parse_args() + # Scrape a Coursera course's transcripts into a JSON file scraper = CourseraScraper(args.course_url, args.username, args.password) scraper.run_scraper() - print(scraper.course_transcript_for_json) # Writing a JSON file - with open('subtitles.json', 'w') as json_file: + with open(args.output_path, "w") as json_file: json.dump(scraper.course_transcript_for_json, json_file, indent=4) + if args.elastic_search_push: + writer = ElasticSearchJSONWriter(args.output_path) + writer.index_subtitles() From 2555ae6d2a048de9cd380b1c1b1d6ac030df5913 Mon Sep 17 00:00:00 2001 From: Christian Opperman Date: Mon, 4 Dec 2023 20:18:24 -0500 Subject: [PATCH 17/52] Rename Python script parent folder --- .../CourseraScraper.py | 0 .../ElasticSearchJSONWriter.py | 0 .../__pycache__/CourseraScraper.cpython-312.pyc | Bin .../requirements.txt | 0 .../scrape_coursera_course.py | 0 5 files changed, 0 insertions(+), 0 deletions(-) rename {scraper => CourseraTranscriptScraper}/CourseraScraper.py (100%) rename {scraper => CourseraTranscriptScraper}/ElasticSearchJSONWriter.py (100%) rename {scraper => CourseraTranscriptScraper}/__pycache__/CourseraScraper.cpython-312.pyc (100%) rename {scraper => CourseraTranscriptScraper}/requirements.txt (100%) rename {scraper => CourseraTranscriptScraper}/scrape_coursera_course.py (100%) diff --git a/scraper/CourseraScraper.py b/CourseraTranscriptScraper/CourseraScraper.py similarity index 100% rename from scraper/CourseraScraper.py rename to CourseraTranscriptScraper/CourseraScraper.py diff --git a/scraper/ElasticSearchJSONWriter.py b/CourseraTranscriptScraper/ElasticSearchJSONWriter.py similarity index 100% rename from scraper/ElasticSearchJSONWriter.py rename to CourseraTranscriptScraper/ElasticSearchJSONWriter.py diff --git a/scraper/__pycache__/CourseraScraper.cpython-312.pyc b/CourseraTranscriptScraper/__pycache__/CourseraScraper.cpython-312.pyc similarity index 100% rename 
from scraper/__pycache__/CourseraScraper.cpython-312.pyc rename to CourseraTranscriptScraper/__pycache__/CourseraScraper.cpython-312.pyc diff --git a/scraper/requirements.txt b/CourseraTranscriptScraper/requirements.txt similarity index 100% rename from scraper/requirements.txt rename to CourseraTranscriptScraper/requirements.txt diff --git a/scraper/scrape_coursera_course.py b/CourseraTranscriptScraper/scrape_coursera_course.py similarity index 100% rename from scraper/scrape_coursera_course.py rename to CourseraTranscriptScraper/scrape_coursera_course.py From 67e773f50bb8135e62517458ada49664fd480232 Mon Sep 17 00:00:00 2001 From: Christian Opperman Date: Mon, 4 Dec 2023 20:21:15 -0500 Subject: [PATCH 18/52] Move project proposal and progress report to Documentation folder --- .../TeamCAHJ_ProjectProgressReport.pdf | Bin .../CS410_Deliverables/TeamCAHJ_ProjectProposal.pdf | Bin 2 files changed, 0 insertions(+), 0 deletions(-) rename TeamCAHJ_ProjectProgressReport.pdf => Documentation/CS410_Deliverables/TeamCAHJ_ProjectProgressReport.pdf (100%) rename TeamCAHJ_ProjectProposal.pdf => Documentation/CS410_Deliverables/TeamCAHJ_ProjectProposal.pdf (100%) diff --git a/TeamCAHJ_ProjectProgressReport.pdf b/Documentation/CS410_Deliverables/TeamCAHJ_ProjectProgressReport.pdf similarity index 100% rename from TeamCAHJ_ProjectProgressReport.pdf rename to Documentation/CS410_Deliverables/TeamCAHJ_ProjectProgressReport.pdf diff --git a/TeamCAHJ_ProjectProposal.pdf b/Documentation/CS410_Deliverables/TeamCAHJ_ProjectProposal.pdf similarity index 100% rename from TeamCAHJ_ProjectProposal.pdf rename to Documentation/CS410_Deliverables/TeamCAHJ_ProjectProposal.pdf From 7fc19bd3d2943d1c4cae5c0328039d46069a1a1d Mon Sep 17 00:00:00 2001 From: Christian Opperman Date: Mon, 4 Dec 2023 20:22:31 -0500 Subject: [PATCH 19/52] Remove log and credentials from GitHub --- .gitignore | 4 +- credentials-e1bb73-2023-Nov-24--00_20_11.csv | 2 - geckodriver.log | 1105 ------------------ 3 files 
changed, 3 insertions(+), 1108 deletions(-) delete mode 100644 credentials-e1bb73-2023-Nov-24--00_20_11.csv delete mode 100644 geckodriver.log diff --git a/.gitignore b/.gitignore index f5757cbb02..d35fb95f97 100644 --- a/.gitignore +++ b/.gitignore @@ -5,4 +5,6 @@ node_modules *.docx *.DS_Store *.iml -*.json \ No newline at end of file +*.json +*.log +*.csv \ No newline at end of file diff --git a/credentials-e1bb73-2023-Nov-24--00_20_11.csv b/credentials-e1bb73-2023-Nov-24--00_20_11.csv deleted file mode 100644 index 4825751a4f..0000000000 --- a/credentials-e1bb73-2023-Nov-24--00_20_11.csv +++ /dev/null @@ -1,2 +0,0 @@ -username,password - elastic,pciWclpLNdXuicUhXV8bhgk2 \ No newline at end of file diff --git a/geckodriver.log b/geckodriver.log deleted file mode 100644 index 76bf244068..0000000000 --- a/geckodriver.log +++ /dev/null @@ -1,1105 +0,0 @@ -1700127513505 geckodriver INFO Listening on 127.0.0.1:60211 -1700127514535 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofile09LM5N" -console.warn: services.settings: Ignoring preference override of remote settings server -console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment -1700127530927 Marionette INFO Marionette enabled -1700127531414 Marionette INFO Listening on port 60294 -Read port: 60294 -1700127531653 RemoteAgent WARN TLS certificate errors will be ignored for this session -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -1700127561075 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -1700127679934 geckodriver INFO Listening on 127.0.0.1:61026 -1700127680957 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileTUbkKw" -console.warn: services.settings: Ignoring preference override of remote settings server -console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment -1700127681465 Marionette INFO Marionette enabled -1700127681835 Marionette INFO Listening on port 61045 -Read port: 61045 -1700127682056 RemoteAgent WARN TLS certificate errors will be ignored for this session -1700127685268 Marionette INFO Stopped listening on port 60294 -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -1700127711505 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. -1700127761093 geckodriver INFO Listening on 127.0.0.1:61458 -1700127762117 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofilelCJAXc" -console.warn: services.settings: Ignoring preference override of remote settings server -console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment -1700127762611 Marionette INFO Marionette enabled -1700127762906 Marionette INFO Listening on port 61477 -Read port: 61477 -1700127763113 RemoteAgent WARN TLS certificate errors will be ignored for this session -1700127766234 Marionette INFO Stopped listening on port 61045 -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -1700127792894 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript error: resource://devtools/shared/network-observer/NetworkResponseListener.sys.mjs, line 583: NS_ERROR_INVALID_CONTENT_ENCODING: Component returned failure code: 0x804b001b (NS_ERROR_INVALID_CONTENT_ENCODING) [nsIStreamListener.onDataAvailable] -JavaScript error: resource://devtools/shared/network-observer/NetworkResponseListener.sys.mjs, line 583: NS_ERROR_INVALID_CONTENT_ENCODING: Component returned failure code: 0x804b001b (NS_ERROR_INVALID_CONTENT_ENCODING) [nsIStreamListener.onDataAvailable] -console.error: "Unable to find target with innerWindowId:6442450967" -console.error: "Unable to find target 
with innerWindowId:6442450967" -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -console.warn: "Listener for event 'frame' did not return a promise." -JavaScript error: resource://devtools/shared/network-observer/NetworkResponseListener.sys.mjs, line 583: NS_ERROR_INVALID_CONTENT_ENCODING: Component returned failure code: 0x804b001b (NS_ERROR_INVALID_CONTENT_ENCODING) [nsIStreamListener.onDataAvailable] -console.warn: "Listener for event 'frame' did not return a promise." -JavaScript error: , line 0: NotFoundError: No such JSWindowActor 'DevToolsFrame' -JavaScript error: , line 0: NotFoundError: No such JSWindowActor 'DevToolsFrame' -JavaScript error: , line 0: NotFoundError: No such JSWindowActor 'DevToolsFrame' -1700128567696 Marionette INFO Stopped listening on port 61477 -1700128577366 geckodriver INFO Listening on 127.0.0.1:65228 -1700128578390 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileft1KDm" -console.warn: services.settings: Ignoring preference override of remote settings server -console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment -1700128579108 Marionette INFO Marionette enabled -1700128579382 Marionette INFO Listening on port 65243 -Read port: 65243 -1700128579566 RemoteAgent WARN TLS certificate errors will be ignored for this session -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -1700128609169 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -1700130672665 Marionette INFO Stopped listening on port 65243 -JavaScript error: , line 0: NotFoundError: No such JSWindowActor 'DevToolsFrame' -JavaScript error: , line 0: NotFoundError: No such JSWindowActor 'DevToolsFrame' -JavaScript error: , line 0: NotFoundError: No such JSWindowActor 'DevToolsFrame' -1700130675184 geckodriver INFO Listening on 127.0.0.1:55089 -1700130676207 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileWd3gJk" -console.warn: services.settings: Ignoring preference override of remote settings server -console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment -1700130677702 Marionette INFO Marionette enabled -1700130678074 Marionette INFO Listening on port 55112 -Read port: 55112 -1700130678209 RemoteAgent WARN TLS certificate errors will be ignored for this session -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -1700130710738 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript error: resource://devtools/shared/network-observer/NetworkResponseListener.sys.mjs, line 583: NS_ERROR_INVALID_CONTENT_ENCODING: Component returned failure code: 0x804b001b (NS_ERROR_INVALID_CONTENT_ENCODING) [nsIStreamListener.onDataAvailable] -console.error: (new TypeError("container.node.targetFront is null", "resource://devtools/client/inspector/markup/markup.js", 2401)) -JavaScript error: resource://devtools/shared/network-observer/NetworkResponseListener.sys.mjs, line 583: NS_ERROR_INVALID_CONTENT_ENCODING: Component returned failure code: 0x804b001b (NS_ERROR_INVALID_CONTENT_ENCODING) 
[nsIStreamListener.onDataAvailable] -console.error: (new TypeError("container.node.targetFront is null", "resource://devtools/client/inspector/markup/markup.js", 2401)) -JavaScript error: resource://devtools/shared/network-observer/NetworkResponseListener.sys.mjs, line 583: NS_ERROR_INVALID_CONTENT_ENCODING: Component returned failure code: 0x804b001b (NS_ERROR_INVALID_CONTENT_ENCODING) [nsIStreamListener.onDataAvailable] -JavaScript error: resource://devtools/shared/network-observer/NetworkResponseListener.sys.mjs, line 583: NS_ERROR_INVALID_CONTENT_ENCODING: Component returned failure code: 0x804b001b (NS_ERROR_INVALID_CONTENT_ENCODING) [nsIStreamListener.onDataAvailable] -1700132043223 geckodriver INFO Listening on 127.0.0.1:58607 -1700132044251 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... "--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileskhVAw" -console.warn: services.settings: Ignoring preference override of remote settings server -console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment -1700132046005 Marionette INFO Marionette enabled -1700132046515 Marionette INFO Listening on port 58638 -Read port: 58638 -1700132046674 RemoteAgent WARN TLS certificate errors will be ignored for this session -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -1700132050940 Marionette INFO Stopped listening on port 55112 
-JavaScript error: , line 0: NotFoundError: No such JSWindowActor 'DevToolsFrame' -JavaScript error: , line 0: NotFoundError: No such JSWindowActor 'DevToolsFrame' -JavaScript error: , line 0: NotFoundError: No such JSWindowActor 'DevToolsFrame' -1700132077951 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. 
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -1700132236346 geckodriver INFO Listening on 127.0.0.1:59547 -1700132237372 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... "--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileUu8vtV" -console.warn: services.settings: Ignoring preference override of remote settings server -console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment -1700132237985 Marionette INFO Marionette enabled -1700132238551 Marionette INFO Listening on port 59566 -Read port: 59566 -1700132238765 RemoteAgent WARN TLS certificate errors will be ignored for this session -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -1700132244245 Marionette INFO Stopped listening on port 58638 -1700132268071 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is 
not a valid URL. -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -1700132521360 Marionette INFO Stopped listening on port 59566 -1700132524457 geckodriver INFO Listening on 127.0.0.1:61052 -1700132525481 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofilegxA5jU" -console.warn: services.settings: Ignoring preference override of remote settings server -console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment -1700132526794 Marionette INFO Marionette enabled -1700132527158 Marionette INFO Listening on port 61075 -Read port: 61075 -1700132527392 RemoteAgent WARN TLS certificate errors will be ignored for this session -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. 
-JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -1700132556826 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -1700132897583 geckodriver INFO Listening on 127.0.0.1:62794 -1700132898611 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileFrNLXR" -console.warn: services.settings: Ignoring preference override of remote settings server -console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment -1700132899502 Marionette INFO Marionette enabled -1700132900026 Marionette INFO Listening on port 62813 -Read port: 62813 -1700132900172 RemoteAgent WARN TLS certificate errors will be ignored for this session -1700132903514 Marionette INFO Stopped listening on port 61075 -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -1700132929815 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -1700132982291 geckodriver INFO Listening on 127.0.0.1:63343 -1700132983326 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofile8P1p8x" -console.warn: services.settings: Ignoring preference override of remote settings server -console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment -1700132983850 Marionette INFO Marionette enabled -1700132984132 Marionette INFO Listening on port 63358 -Read port: 63358 -1700132984305 RemoteAgent WARN TLS certificate errors will be ignored for this session -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://browser.sentry-cdn.com/7.43.0/bundle.tracing.es5.min.js, line 2: Script terminated by timeout at: -sentryWrapped@https://browser.sentry-cdn.com/7.43.0/bundle.tracing.es5.min.js:2:56209 - -1700132988326 Marionette INFO Stopped listening on port 62813 -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -1700133014703 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://browser.sentry-cdn.com/7.43.0/bundle.tracing.es5.min.js, line 2: Script terminated by timeout at: -sentryWrapped@https://browser.sentry-cdn.com/7.43.0/bundle.tracing.es5.min.js:2:56209 - -1700133092197 Marionette INFO Stopped listening on port 63358 -1700133097250 geckodriver INFO Listening on 127.0.0.1:64018 -1700133098273 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileP5WYJN" -console.warn: services.settings: Ignoring preference override of remote settings server -console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment -1700133098763 Marionette INFO Marionette enabled -1700133098938 Marionette INFO Listening on port 64037 -Read port: 64037 -1700133099167 RemoteAgent WARN TLS certificate errors will be ignored for this session -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -1700133128888 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript error: resource://devtools/shared/network-observer/NetworkResponseListener.sys.mjs, line 583: NS_ERROR_INVALID_CONTENT_ENCODING: Component returned failure code: 0x804b001b (NS_ERROR_INVALID_CONTENT_ENCODING) [nsIStreamListener.onDataAvailable] -JavaScript error: , line 0: NotFoundError: No such JSWindowActor 'DevToolsFrame' -JavaScript error: , line 0: NotFoundError: No such JSWindowActor 'DevToolsFrame' -JavaScript error: , line 0: NotFoundError: No such JSWindowActor 'DevToolsFrame' -1700176420420 geckodriver INFO Listening on 127.0.0.1:59096 -1700176421449 mozrunner::runner INFO Running command: 
MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... "--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofile8nTq3D" -console.warn: services.settings: Ignoring preference override of remote settings server -console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment -1700176423235 Marionette INFO Marionette enabled -1700176423710 Marionette INFO Listening on port 59119 -Read port: 59119 -1700176423809 RemoteAgent WARN TLS certificate errors will be ignored for this session -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -1700176453318 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://browser.sentry-cdn.com/7.43.0/bundle.tracing.es5.min.js, line 2: Script terminated by timeout at: -sentryWrapped@https://browser.sentry-cdn.com/7.43.0/bundle.tracing.es5.min.js:2:56209 - -1700176498297 Marionette INFO Stopped listening on port 59119 -1700177955176 geckodriver INFO Listening on 127.0.0.1:49464 -1700177956182 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofilej1Beo0" -console.warn: services.settings: Ignoring preference override of remote settings server -console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment -1700177957409 Marionette INFO Marionette enabled -1700177957791 Marionette INFO Listening on port 49494 -Read port: 49494 -1700177958032 RemoteAgent WARN TLS certificate errors will be ignored for this session -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -1700177988696 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -1700178080698 geckodriver INFO Listening on 127.0.0.1:50151 -1700178081712 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileuKhj7f" -console.warn: services.settings: Ignoring preference override of remote settings server -console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment -1700178082496 Marionette INFO Marionette enabled -1700178082813 Marionette INFO Listening on port 50170 -Read port: 50170 -1700178082998 RemoteAgent WARN TLS certificate errors will be ignored for this session -1700178087077 Marionette INFO Stopped listening on port 49494 -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -1700178112729 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: 
https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://browser.sentry-cdn.com/7.43.0/bundle.tracing.es5.min.js, line 2: Script terminated by timeout at: -sentryWrapped@https://browser.sentry-cdn.com/7.43.0/bundle.tracing.es5.min.js:2:56209 - -JavaScript error: resource:///modules/sessionstore/SessionStore.sys.mjs, line 3741: TypeError: this._windows[aWindow.__SSi] is undefined -1700178301520 Marionette INFO Stopped listening on port 50170 -console.warn: TopSitesFeed: Failed to fetch data from Contile server: NetworkError when attempting to fetch resource. -console.error: "Failed to fetch https://spocs.getpocket.com/user:" "NetworkError when attempting to fetch resource." 
-JavaScript error: resource:///modules/sessionstore/SessionStore.sys.mjs, line 3741: TypeError: this._windows[aWindow.__SSi] is undefined -JavaScript warning: https://browser.sentry-cdn.com/7.43.0/bundle.tracing.es5.min.js, line 2: Script terminated by timeout at: -sentryWrapped@https://browser.sentry-cdn.com/7.43.0/bundle.tracing.es5.min.js:2:56209 - -1700178350270 Marionette INFO Stopped listening on port 64037 -console.warn: TopSitesFeed: Failed to fetch data from Contile server: NetworkError when attempting to fetch resource. -console.error: "Failed to fetch https://spocs.getpocket.com/user:" "NetworkError when attempting to fetch resource." -1700178355107 geckodriver INFO Listening on 127.0.0.1:51502 -1700178356129 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... "--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileIRu7sp" -console.warn: services.settings: Ignoring preference override of remote settings server -console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment -1700178356860 Marionette INFO Marionette enabled -1700178357228 Marionette INFO Listening on port 51513 -Read port: 51513 -1700178357398 RemoteAgent WARN TLS certificate errors will be ignored for this session -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -console.warn: LoginRecipes: "Falling back to a synchronous message for: 
https://www.coursera.org." -1700178387183 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. 
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: 
https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -1700178742776 Marionette INFO Stopped listening on port 51513 -JavaScript error: , line 0: NotFoundError: No such JSWindowActor 'DevToolsFrame' -JavaScript error: , line 0: NotFoundError: No such JSWindowActor 'DevToolsFrame' -1700180031920 geckodriver INFO Listening on 127.0.0.1:56930 -1700180032936 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... "--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofile75aoM8" -console.warn: services.settings: Ignoring preference override of remote settings server -console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment -1700180033746 Marionette INFO Marionette enabled -1700180034157 Marionette INFO Listening on port 56949 -Read port: 56949 -1700180034301 RemoteAgent WARN TLS certificate errors will be ignored for this session -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -1700180063922 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript error: resource:///modules/sessionstore/SessionStore.sys.mjs, line 3741: TypeError: this._windows[aWindow.__SSi] is undefined -JavaScript warning: https://browser.sentry-cdn.com/7.43.0/bundle.tracing.es5.min.js, line 2: Script terminated by timeout at: -sentryWrapped@https://browser.sentry-cdn.com/7.43.0/bundle.tracing.es5.min.js:2:56317 - -Exiting due to channel error. -Exiting due to channel error. -Exiting due to channel error. -Exiting due to channel error. 
-1700180180136 geckodriver INFO Listening on 127.0.0.1:57652 -1700180181162 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... "--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileLqGiwD" -console.warn: services.settings: Ignoring preference override of remote settings server -console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment -1700180181767 Marionette INFO Marionette enabled -1700180181980 Marionette INFO Listening on port 57671 -Read port: 57671 -1700180182051 RemoteAgent WARN TLS certificate errors will be ignored for this session -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -1700180211987 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript error: resource:///modules/sessionstore/SessionStore.sys.mjs, line 3741: 
TypeError: this._windows[aWindow.__SSi] is undefined -console.error: (new TypeError("watcher.browserElement.browsingContext is null", "resource://devtools/server/actors/watcher/target-helpers/frame-helper.js", 193)) -1700180481274 geckodriver INFO Listening on 127.0.0.1:59023 -1700180482297 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... "--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofilexmWM72" -console.warn: services.settings: Ignoring preference override of remote settings server -console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment -1700180482935 Marionette INFO Marionette enabled -1700180483175 Marionette INFO Listening on port 59038 -Read port: 59038 -1700180483330 RemoteAgent WARN TLS certificate errors will be ignored for this session -1700180486554 Marionette INFO Stopped listening on port 57671 -console.warn: TopSitesFeed: Failed to fetch data from Contile server: NetworkError when attempting to fetch resource. -console.error: "Failed to fetch https://spocs.getpocket.com/user:" "NetworkError when attempting to fetch resource." -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
-1700180513101 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. 
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -1700180735537 geckodriver INFO Listening on 127.0.0.1:60179 -1700180736564 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileaC3zVC" -console.warn: services.settings: Ignoring preference override of remote settings server -console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment -1700180737953 Marionette INFO Marionette enabled -1700180738370 Marionette INFO Listening on port 60202 -Read port: 60202 -1700180738622 RemoteAgent WARN TLS certificate errors will be ignored for this session -1700180741302 Marionette INFO Stopped listening on port 59038 -JavaScript error: , line 0: NotFoundError: No such JSWindowActor 'DevToolsFrame' -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -1700180768095 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -1700180870626 Marionette INFO Stopped listening on port 60202 -1700180873669 
geckodriver INFO Listening on 127.0.0.1:60886 -1700180874693 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... "--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileR9cihB" -console.warn: services.settings: Ignoring preference override of remote settings server -console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment -1700180875190 Marionette INFO Marionette enabled -1700180875471 Marionette INFO Listening on port 60905 -Read port: 60905 -1700180875676 RemoteAgent WARN TLS certificate errors will be ignored for this session -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -1700180905340 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -1700180992552 geckodriver INFO Listening on 127.0.0.1:61491 -1700180993576 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileM1Cv6l" -console.warn: services.settings: Ignoring preference override of remote settings server -console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment -1700180994104 Marionette INFO Marionette enabled -1700180994298 Marionette INFO Listening on port 61510 -Read port: 61510 -1700180994544 RemoteAgent WARN TLS certificate errors will be ignored for this session -1700180997818 Marionette INFO Stopped listening on port 60905 -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. 
-JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -1700181024188 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -1700181099096 geckodriver INFO Listening on 127.0.0.1:62056 -1700181100121 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofile6iWxmJ" -console.warn: services.settings: Ignoring preference override of remote settings server -console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment -1700181100705 Marionette INFO Marionette enabled -1700181101051 Marionette INFO Listening on port 62078 -Read port: -1700181101052 geckodriver::browser WARN Failed fo convert to u16 -Read port: 62078 -1700181101290 RemoteAgent WARN TLS certificate errors will be ignored for this session -1700181104360 Marionette INFO Stopped listening on port 61510 -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -1700181131450 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -1700183897304 geckodriver INFO Listening on 127.0.0.1:50678 -1700183898326 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileJcqKfb" -console.warn: services.settings: Ignoring preference override of remote settings server -console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment -1700183899996 Marionette INFO Marionette enabled -1700183900435 Marionette INFO Listening on port 50701 -Read port: 50701 -1700183900730 RemoteAgent WARN TLS certificate errors will be ignored for this session -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -1700183930044 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. 
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -1700184032541 geckodriver INFO Listening on 127.0.0.1:51371 -1700184033569 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofile0dZr3G" -console.warn: services.settings: Ignoring preference override of remote settings server -console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment -1700184034484 Marionette INFO Marionette enabled -1700184034969 Marionette INFO Listening on port 51394 -Read port: 51394 -1700184035167 RemoteAgent WARN TLS certificate errors will be ignored for this session -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -1700184039174 Marionette INFO Stopped listening on port 62078 -1700184042444 Marionette INFO Stopped listening on port 50701 -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -1700184065056 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -1700184610126 geckodriver INFO Listening on 127.0.0.1:53854 -1700184611150 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileOzEfYH" -console.warn: services.settings: Ignoring preference override of remote settings server -console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment -1700184611905 Marionette INFO Marionette enabled -1700184612297 Marionette INFO Listening on port 53873 -Read port: 53873 -1700184612558 RemoteAgent WARN TLS certificate errors will be ignored for this session -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -1700184643561 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -1700184799256 geckodriver INFO Listening on 127.0.0.1:54787 -1700184800279 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofilesKufTW" -console.warn: services.settings: Ignoring preference override of remote settings server -console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment -1700184801779 Marionette INFO Marionette enabled -1700184802144 Marionette INFO Listening on port 54813 -Read port: 54813 -1700184802398 RemoteAgent WARN TLS certificate errors will be ignored for this session -1700184804214 Marionette INFO Stopped listening on port 51394 -1700184806995 Marionette INFO Stopped listening on port 53873 -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -1700184831986 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -1700184941801 geckodriver INFO Listening on 127.0.0.1:55546 -1700184942826 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileuryE4b" -console.warn: services.settings: Ignoring preference override of remote settings server -console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment -1700184943387 Marionette INFO Marionette enabled -1700184943672 Marionette INFO Listening on port 55566 -Read port: 55566 -1700184943874 RemoteAgent WARN TLS certificate errors will be ignored for this session -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. 
-JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -1700184973441 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -1700185071848 Marionette INFO Stopped listening on port 54813 -1700185074740 Marionette INFO Stopped listening on port 55566 -1700185269583 geckodriver INFO Listening on 127.0.0.1:57020 -1700185270606 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileSmwtry" -console.warn: services.settings: Ignoring preference override of remote settings server -console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment -1700185272191 Marionette INFO Marionette enabled -1700185272537 Marionette INFO Listening on port 57043 -Read port: 57043 -1700185272777 RemoteAgent WARN TLS certificate errors will be ignored for this session -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -1700185302252 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. 
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement -1700186871774 geckodriver INFO Listening on 127.0.0.1:63647 -1700186872800 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileDRdGGG" -console.warn: services.settings: Ignoring preference override of remote settings server -console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment -1700186873902 Marionette INFO Marionette enabled -1700186874342 Marionette INFO Listening on port 63666 -Read port: 63666 -1700186874438 RemoteAgent WARN TLS certificate errors will be ignored for this session -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -1700186903996 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." 
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement
-JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER.
-JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER.
-1700186921289 Marionette INFO Stopped listening on port 57043
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-1700187101503 geckodriver INFO Listening on 127.0.0.1:64706
-1700187102531 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... "--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofiledehudU"
-console.warn: services.settings: Ignoring preference override of remote settings server
-console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment
-1700187103335 Marionette INFO Marionette enabled
-1700187103743 Marionette INFO Listening on port 64725
-Read port: 64725
-1700187103832 RemoteAgent WARN TLS certificate errors will be ignored for this session
-1700187105923 Marionette INFO Stopped listening on port 63666
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement
-console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org."
-1700187133443 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL.
-console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org."
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement
-JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER.
-JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER.
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-Exiting due to channel error.
-Exiting due to channel error.
-Exiting due to channel error.
-Exiting due to channel error.
-Exiting due to channel error.
-1700187540440 geckodriver INFO Listening on 127.0.0.1:50229
-1700187541460 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... "--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileSI75X3"
-console.warn: services.settings: Ignoring preference override of remote settings server
-console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment
-1700187542282 Marionette INFO Marionette enabled
-1700187542781 Marionette INFO Listening on port 50248
-Read port: 50248
-1700187542977 RemoteAgent WARN TLS certificate errors will be ignored for this session
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement
-console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org."
-console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org."
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement
-1700187572378 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL.
-JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER.
-JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER.
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-1700189346255 geckodriver INFO Listening on 127.0.0.1:55607
-1700189347279 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... "--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofile8T0yyG"
-console.warn: services.settings: Ignoring preference override of remote settings server
-console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment
-1700189348465 Marionette INFO Marionette enabled
-1700189348799 Marionette INFO Listening on port 55618
-Read port: 55618
-1700189348965 RemoteAgent WARN TLS certificate errors will be ignored for this session
-1700189352236 Marionette INFO Stopped listening on port 50248
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement
-console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org."
-1700189379187 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL.
-console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org."
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement
-JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER.
-JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER.
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-1700189470259 geckodriver INFO Listening on 127.0.0.1:55786
-1700189471284 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... "--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofilekoIluN"
-console.warn: services.settings: Ignoring preference override of remote settings server
-console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment
-1700189471877 Marionette INFO Marionette enabled
-1700189472185 Marionette INFO Listening on port 55797
-Read port: 55797
-1700189472360 RemoteAgent WARN TLS certificate errors will be ignored for this session
-1700189474896 Marionette INFO Stopped listening on port 55618
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement
-console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org."
-console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org."
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement
-JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER.
-JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER.
-1700189501902 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL.
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-1700189635856 geckodriver INFO Listening on 127.0.0.1:55928
-1700189636879 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... "--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofilefcM2Pu"
-console.warn: services.settings: Ignoring preference override of remote settings server
-console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment
-1700189637501 Marionette INFO Marionette enabled
-1700189637724 Marionette INFO Listening on port 55941
-Read port: 55941
-1700189637966 RemoteAgent WARN TLS certificate errors will be ignored for this session
-1700189640551 Marionette INFO Stopped listening on port 55797
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement
-console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org."
-1700189667685 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL.
-console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org."
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement
-JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER.
-JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER.
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-1700192432865 geckodriver INFO Listening on 127.0.0.1:56305
-1700192433879 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... "--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileqWnPm6"
-console.warn: services.settings: Ignoring preference override of remote settings server
-console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment
-1700192434440 Marionette INFO Marionette enabled
-1700192434642 Marionette INFO Listening on port 56316
-Read port: 56316
-1700192434855 RemoteAgent WARN TLS certificate errors will be ignored for this session
-1700192438296 Marionette INFO Stopped listening on port 55941
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement
-console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org."
-console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org."
-1700192464531 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL.
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement
-JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER.
-JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER.
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-[Child 26684, MediaDecoderStateMachine #1] WARNING: Decoder=11a8f0500 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26684, MediaDecoderStateMachine #1] WARNING: Decoder=11a8f0500 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26684, MediaDecoderStateMachine #1] WARNING: Decoder=11a8f0500 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26684, MediaDecoderStateMachine #1] WARNING: Decoder=11a8f0500 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26684, MediaDecoderStateMachine #1] WARNING: Decoder=11a8f0500 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26684, MediaDecoderStateMachine #1] WARNING: Decoder=11a8f0500 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26684, MediaDecoderStateMachine #1] WARNING: Decoder=11a8f0500 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26684, MediaDecoderStateMachine #1] WARNING: Decoder=1177ce300 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26684, MediaDecoderStateMachine #1] WARNING: Decoder=1177cf200 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26684, MediaDecoderStateMachine #1] WARNING: Decoder=1177cf200 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26684, MediaDecoderStateMachine #1] WARNING: Decoder=1177cf200 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26684, MediaDecoderStateMachine #1] WARNING: Decoder=1177cf200 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26684, MediaDecoderStateMachine #1] WARNING: Decoder=1177cf200 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26684, MediaDecoderStateMachine #1] WARNING: Decoder=1177cf200 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26684, MediaDecoderStateMachine #1] WARNING: Decoder=1177cf200 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26684, MediaDecoderStateMachine #1] WARNING: Decoder=1177cf200 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-Exiting due to channel error.
-Exiting due to channel error.
-Exiting due to channel error.
-Exiting due to channel error.
-Exiting due to channel error.
-1700194457241 geckodriver INFO Listening on 127.0.0.1:56698
-1700194458264 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... "--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofileXHQJda"
-console.warn: services.settings: Ignoring preference override of remote settings server
-console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment
-1700194459573 Marionette INFO Marionette enabled
-1700194459924 Marionette INFO Listening on port 56709
-Read port: 56709
-1700194460236 RemoteAgent WARN TLS certificate errors will be ignored for this session
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement
-console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org."
-1700194489848 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL.
-console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org."
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement
-JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER.
-JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER.
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.091289f0b6b4c8cdbbf1.js, line 2: unreachable code after return statement
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11cabfd00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11cabfd00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11cabfd00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11cabfd00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11cabfd00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11cabfd00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11cabfd00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11cabfd00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=118e94200 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=118e94200 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=118e94200 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=118e94200 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=118475000 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=118475000 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=118475000 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=118475000 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=118475000 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=118475000 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=118475000 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=118475000 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=118475000 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11a1db700 state=DECODING_METADATA Decode metadata failed, shutting down decoder: file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachine.cpp:372
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11a1db700 Decode error: NS_ERROR_DOM_MEDIA_METADATA_ERR (0x806e0006): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11b7d6100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11b7d6100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11b7d6100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11b7d6100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11b7d6100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11b7d6100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11b7d6100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11a128300 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11a128300 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11a128300 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11a128300 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11a128300 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11a128300 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11a128300 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=117d26100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=117d26100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=117d26100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=117d26100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=117d26100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=117d26100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=119327100 state=DECODING_METADATA Decode metadata failed, shutting down decoder: file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachine.cpp:372
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=119327100 Decode error: NS_ERROR_DOM_MEDIA_METADATA_ERR (0x806e0006): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11b7d6a00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file
/builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 -[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11b7d6a00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 -[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=1193dff00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 -[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=1193dff00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 -[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=1193dff00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 -[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=1193dff00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 -[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=1193dff00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 -[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=1193dff00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 -[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=1193dff00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 -[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=1193dff00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 
-[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11a128900 state=DECODING_METADATA Decode metadata failed, shutting down decoder: file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachine.cpp:372 -[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11a128900 Decode error: NS_ERROR_DOM_MEDIA_METADATA_ERR (0x806e0006): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 -[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11a127100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 -[Child 26957, MediaDecoderStateMachine #1] WARNING: Decoder=11b7d4000 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 -JavaScript warning: https://browser.sentry-cdn.com/7.43.0/bundle.tracing.es5.min.js, line 2: Script terminated by timeout at: -sentryWrapped@https://browser.sentry-cdn.com/7.43.0/bundle.tracing.es5.min.js:2:56209 - -1700197719100 Marionette INFO Stopped listening on port 56709 -1700282301441 geckodriver INFO Listening on 127.0.0.1:51098 -1700282302430 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... 
"--marionette" "-foreground" "-no-remote" "-profile" "/var/folders/tw/dsz1bcq13jv3kfqhpry4gzb80000gn/T/rust_mozprofilevwP3Vf" -console.warn: services.settings: Ignoring preference override of remote settings server -console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment -1700282304372 Marionette INFO Marionette enabled -1700282304859 Marionette INFO Listening on port 51114 -Read port: 51114 -1700282305123 RemoteAgent WARN TLS certificate errors will be ignored for this session -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/front-page/8.fbc20739412558aeeb50.js, line 2: unreachable code after return statement -1700282335095 addons.xpi ERROR System addon update list error SyntaxError: XMLHttpRequest.open: 'http://%(server)s/dummy-system-addons.xml' is not a valid URL. -console.warn: LoginRecipes: "Falling back to a synchronous message for: https://www.coursera.org." -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/logged-in-home/0.04e1f07f3d9e768e7c6b.js, line 2: unreachable code after return statement -JavaScript warning: https://mapixl.com/scripts/pixlv2.min.js, line 1: WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. -JavaScript warning: https://d3njjcbhbojbot.cloudfront.net/webapps/r2-builds/br/ondemand/en.asyncCommonJS.6620ac81c5edf1fef492.js, line 2: unreachable code after return statement -[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=116320900 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 -[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=115ba4700 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 -[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=117110e00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 -[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=115bd6f00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 -[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=10c151c00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 -[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=10c160d00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 -[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=10c158400 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 -[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=117111400 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=10c158a00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 -[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=12e583800 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 -[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=11ae4ba00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 -[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=115bd5a00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 -[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=10c159300 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 -[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=12e52cd00 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 -[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=11ae48100 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166
-[Child 37216, MediaDecoderStateMachine #1] WARNING: Decoder=11ae4b400 Decode error: NS_ERROR_DOM_MEDIA_DEMUXER_ERR (0x806e000c): file /builds/worker/checkouts/gecko/dom/media/MediaDecoderStateMachineBase.cpp:166 From ad413d5e351d161b38de84cb3b1cb9187b92118b Mon Sep 17 00:00:00 2001 From: Christian Opperman Date: Mon, 4 Dec 2023 20:26:45 -0500 Subject: [PATCH 20/52] Rename Extension folder --- .../CS410_Fall2023_CourseProject_TeamCAHJ.png | Bin {extension => ChromeExtension}/index.html | 0 {extension =>
ChromeExtension}/js/search.js | 0 README.md | 12 ++++++------ extension/manifest.json | 16 ---------------- 5 files changed, 6 insertions(+), 22 deletions(-) rename {extension => ChromeExtension}/img/CS410_Fall2023_CourseProject_TeamCAHJ.png (100%) rename {extension => ChromeExtension}/index.html (100%) rename {extension => ChromeExtension}/js/search.js (100%) delete mode 100644 extension/manifest.json diff --git a/extension/img/CS410_Fall2023_CourseProject_TeamCAHJ.png b/ChromeExtension/img/CS410_Fall2023_CourseProject_TeamCAHJ.png similarity index 100% rename from extension/img/CS410_Fall2023_CourseProject_TeamCAHJ.png rename to ChromeExtension/img/CS410_Fall2023_CourseProject_TeamCAHJ.png diff --git a/extension/index.html b/ChromeExtension/index.html similarity index 100% rename from extension/index.html rename to ChromeExtension/index.html diff --git a/extension/js/search.js b/ChromeExtension/js/search.js similarity index 100% rename from extension/js/search.js rename to ChromeExtension/js/search.js diff --git a/README.md b/README.md index fddfc756c7..f10a0cdd47 100644 --- a/README.md +++ b/README.md @@ -25,12 +25,12 @@ A step-by-step guide for the above is below: 2. Install the appropriate ChromeDriver for your computer's environment from [this link](https://googlechromelabs.github.io/chrome-for-testing/#stable), unzip it, and move the `Google Chrome for Testing` application to the `CS410_Fall2023_CourseProject_TeamCAHJ` directory created in Step 1, above. 3. Open Google Chrome. 4. Go to the Extensions page on Google Chrome by following [this link](chrome://extensions). -5. Activate Developer Mode by toggling the switch in the upper right corner labeled `Developer mode`. -![Screenshot of Developer Mode toggle](/Documentation/README_images/Chrome%20Developer%20Mode.png) -6. 
Load the extension from the codebase pulled to your computer in Step 1 by clicking the `Load unpacked` button in the top left corner: -![Screenshot of load unpacked button](/Documentation/README_images/Chrome%20Load%20Unpacked.png) -7. Select the `desiredDirectory/CS410_Fall2023_CourseProject_TeamCAHJ/extension` directory in the popup and click `Select` -![Screenshot of load unpacked button](/Documentation/README_images/Chrome%20Extension%20Directory.png) +5. Activate Developer Mode by toggling the switch in the upper right corner labeled `Developer mode`.
+![Screenshot of Developer Mode toggle](/project/CS410_Fall2023_CourseProject_TeamCAHJ/Documentation/README_images/Chrome%20Developer%20Mode.png) +6. Load the extension from the codebase pulled to your computer in Step 1 by clicking the `Load unpacked` button in the top left corner:
+![Screenshot of load unpacked button](/project/CS410_Fall2023_CourseProject_TeamCAHJ/Documentation/README_images/Chrome%20Load%20Unpacked.png) +7. Select the `desiredDirectory/CS410_Fall2023_CourseProject_TeamCAHJ/ChromeExtension` directory in the popup and click `Select`
+![Screenshot of load unpacked button](/project/CS410_Fall2023_CourseProject_TeamCAHJ/Documentation/README_images/Chrome%20Extension%20Directory.png) 8. The extension should now be available to you in your Google Chrome Extensions list. ## Usage Instructions \ No newline at end of file diff --git a/extension/manifest.json b/extension/manifest.json deleted file mode 100644 index 34e65f23cc..0000000000 --- a/extension/manifest.json +++ /dev/null @@ -1,16 +0,0 @@ -{ - "name": "CS410_Fall2023_CourseProject_TeamCAHJ", - "description": "Base Level Extension", - "version": "1.0", - "permissions": [ - "storage", - "tabs" - ], - "host_permissions": ["http://*/*", "https://*/*"], - "manifest_version": 3, - "action": { - "default_popup": "index.html", - "default_icon": "img/CS410_Fall2023_CourseProject_TeamCAHJ.png", - "default_title": "CS410_Fall2023_CourseProject_TeamCAHJ" - } -} \ No newline at end of file From af2406a7f3d7cb55be33a05f355318cd43f6a3e2 Mon Sep 17 00:00:00 2001 From: himangshu81 <145715398+himangshu81@users.noreply.github.com> Date: Thu, 7 Dec 2023 01:02:53 -0800 Subject: [PATCH 21/52] Update ES to index week, lecture title, and subtitles, and include these in the response --- ChromeExtension/js/search.js | 27 +++++++++++-------- .../ElasticSearchJSONWriter.py | 3 +++ 2 files changed, 19 insertions(+), 11 deletions(-) diff --git a/ChromeExtension/js/search.js b/ChromeExtension/js/search.js index ffa097b053..837b453792 100644 --- a/ChromeExtension/js/search.js +++ b/ChromeExtension/js/search.js @@ -49,7 +49,7 @@ async function search_api() { const query_txt = document.getElementById("searchbox").value console.log("query_txt ", query_txt) const query_payload = { - size: 1, + size: 5, from: 0, query: { "query_string": { @@ -64,16 +64,21 @@ body: JSON.stringify(query_payload) }; - const response = await fetch("https://ac55987c83844faa90726d4e5efe92b9.us-central1.gcp.cloud.es.io/subtitles/_search", requestOptions) + const response =
await fetch("https://ac55987c83844faa90726d4e5efe92b9.us-central1.gcp.cloud.es.io/subtitles_4/_search", requestOptions) const record = await response.json() console.log("record ", record) if(record.hits.total.value > 0) { - const result = record.hits.hits[0]._source - console.log(result) - const response_str = 'Search result from Lectures is <br> here at timestamp :: ' + result.time - console.log("Response :: ", response_str) - await display_result(response_str) + for (let i = 0; i < 5; i++) { + const result = record.hits.hits[i]._source + console.log(result) + const response_str = '' + result.week + '<br>' + + ' Title :: ' + result.lecture_title + '<br>' + + ' timestamp :: ' + result.time + '<br>' + + ' Subtitles : ' + result.text + + '<br>' + console.log("Response :: ", response_str) + await display_result(response_str) + } } else { await display_result("We could not find a related topic") } @@ -85,9 +90,9 @@ async function display_result(response_str) { modal_body.style.fontSize = 14; modal_body.style.fontWeight = 400; modal_body.style.fontFamily = 'Courier New'; - modal_body.style.color = 'white'; + modal_body.style.color = 'black'; modal_body.style.textAlign = 'left' - modal_body.style.backgroundColor = 'red' - modal_body.innerHTML = response_str + modal_body.style.backgroundColor = 'gray' + modal_body.innerHTML += response_str } \ No newline at end of file diff --git a/CourseraTranscriptScraper/ElasticSearchJSONWriter.py b/CourseraTranscriptScraper/ElasticSearchJSONWriter.py index 273aaeccd2..262d6a357b 100644 --- a/CourseraTranscriptScraper/ElasticSearchJSONWriter.py +++ b/CourseraTranscriptScraper/ElasticSearchJSONWriter.py @@ -25,10 +25,13 @@ def load_json(self) -> json: def index_subtitles(self) -> None: for weeks in self.subtitles_json["Text Mining and Analytics"]: + week_val = list(weeks.keys())[0] for week in weeks.values(): for lecture_titles in week: for lecture_title in lecture_titles: for subtitles in lecture_titles[lecture_title]: + subtitles['lecture_title'] = lecture_title + subtitles['week'] = week_val self.write_to_elasticsearch(subtitles) def write_to_elasticsearch(self, doc) -> None: From 470f9fc7de3886c4a6ec9abf77c0bec28b1fe34e Mon Sep 17 00:00:00 2001 From: himangshu81 <145715398+himangshu81@users.noreply.github.com> Date: Thu, 7 Dec 2023 01:16:07 -0800 Subject: [PATCH 22/52] Adding the manifest file back for chrome extension --- .gitignore | 1 - ChromeExtension/manifest.json | 16 ++++++++++++++++ 2 files changed, 16 insertions(+), 1 deletion(-) create mode 100644 ChromeExtension/manifest.json diff --git a/.gitignore b/.gitignore index d35fb95f97..bf4ec26b0f 100644 --- a/.gitignore +++ b/.gitignore @@ -5,6 +5,5 @@ node_modules *.docx *.DS_Store *.iml -*.json *.log *.csv \ No
newline at end of file diff --git a/ChromeExtension/manifest.json b/ChromeExtension/manifest.json new file mode 100644 index 0000000000..34e65f23cc --- /dev/null +++ b/ChromeExtension/manifest.json @@ -0,0 +1,16 @@ +{ + "name": "CS410_Fall2023_CourseProject_TeamCAHJ", + "description": "Base Level Extension", + "version": "1.0", + "permissions": [ + "storage", + "tabs" + ], + "host_permissions": ["http://*/*", "https://*/*"], + "manifest_version": 3, + "action": { + "default_popup": "index.html", + "default_icon": "img/CS410_Fall2023_CourseProject_TeamCAHJ.png", + "default_title": "CS410_Fall2023_CourseProject_TeamCAHJ" + } +} \ No newline at end of file From 3a295f3f042272c2b0721c21242d053df9af195a Mon Sep 17 00:00:00 2001 From: himangshu81 <145715398+himangshu81@users.noreply.github.com> Date: Thu, 7 Dec 2023 01:17:12 -0800 Subject: [PATCH 23/52] Removing .idea --- .idea/.gitignore | 3 --- .idea/CS410_Fall2023_CourseProject_TeamCAHJ.iml | 8 -------- .idea/inspectionProfiles/profiles_settings.xml | 6 ------ .idea/misc.xml | 7 ------- .idea/modules.xml | 8 -------- .idea/vcs.xml | 6 ------ 6 files changed, 38 deletions(-) delete mode 100644 .idea/.gitignore delete mode 100644 .idea/CS410_Fall2023_CourseProject_TeamCAHJ.iml delete mode 100644 .idea/inspectionProfiles/profiles_settings.xml delete mode 100644 .idea/misc.xml delete mode 100644 .idea/modules.xml delete mode 100644 .idea/vcs.xml diff --git a/.idea/.gitignore b/.idea/.gitignore deleted file mode 100644 index 26d33521af..0000000000 --- a/.idea/.gitignore +++ /dev/null @@ -1,3 +0,0 @@ -# Default ignored files -/shelf/ -/workspace.xml diff --git a/.idea/CS410_Fall2023_CourseProject_TeamCAHJ.iml b/.idea/CS410_Fall2023_CourseProject_TeamCAHJ.iml deleted file mode 100644 index d0876a78d0..0000000000 --- a/.idea/CS410_Fall2023_CourseProject_TeamCAHJ.iml +++ /dev/null @@ -1,8 +0,0 @@ - - - - - - - - \ No newline at end of file diff --git a/.idea/inspectionProfiles/profiles_settings.xml 
b/.idea/inspectionProfiles/profiles_settings.xml deleted file mode 100644 index 105ce2da2d..0000000000 --- a/.idea/inspectionProfiles/profiles_settings.xml +++ /dev/null @@ -1,6 +0,0 @@ - - - - \ No newline at end of file diff --git a/.idea/misc.xml b/.idea/misc.xml deleted file mode 100644 index 009ceba309..0000000000 --- a/.idea/misc.xml +++ /dev/null @@ -1,7 +0,0 @@ - - - - - - \ No newline at end of file diff --git a/.idea/modules.xml b/.idea/modules.xml deleted file mode 100644 index 0afb6bbefa..0000000000 --- a/.idea/modules.xml +++ /dev/null @@ -1,8 +0,0 @@ - - - - - - - - \ No newline at end of file diff --git a/.idea/vcs.xml b/.idea/vcs.xml deleted file mode 100644 index 35eb1ddfbb..0000000000 --- a/.idea/vcs.xml +++ /dev/null @@ -1,6 +0,0 @@ - - - - - - \ No newline at end of file From 3b739244d7aa7392e7eeb1a8eeb6dcb375e1184b Mon Sep 17 00:00:00 2001 From: Jinfeng Date: Thu, 7 Dec 2023 17:01:26 -0800 Subject: [PATCH 24/52] Build UI --- ChromeExtension/index.html | 96 +++++++++++++------------ ChromeExtension/js/search.js | 126 +++++++++++++++++++++++++++------ ChromeExtension/style.css | 132 +++++++++++++++++++++++++++++++++++ 3 files changed, 283 insertions(+), 71 deletions(-) create mode 100644 ChromeExtension/style.css diff --git a/ChromeExtension/index.html b/ChromeExtension/index.html index 56ae5e1541..3d5af11169 100644 --- a/ChromeExtension/index.html +++ b/ChromeExtension/index.html @@ -1,54 +1,52 @@ + + + + + + Search Coursera Lectures + + +
[index.html markup not captured in this extract: the hunk's added and removed tags were stripped, leaving bare +/- markers; the diff replaces the old "Search Coursera Lectures" modal markup with the extension's new layout (course header, result list, and search footer) defined in style.css]
- - - - - diff --git a/ChromeExtension/js/search.js b/ChromeExtension/js/search.js index 837b453792..11306c6de0 100644 --- a/ChromeExtension/js/search.js +++ b/ChromeExtension/js/search.js @@ -1,11 +1,17 @@ -const search_btn = document.getElementById("button"); +const search_btn = document.getElementById("submit-button"); +const result_container = document.querySelector('#result-container') search_btn.addEventListener('click', function () { + if (result_container.childElementCount > 0) { + // console.log("Has child(ren)") + remove_all_children(result_container) + } + search_api() }); async function search_wild() { - console.log("Inside search_wild..") + // console.log("Inside search_wild..") //import {Client} from '@elastic' const ES_URL = "https://search-cs410-project-hw5dhpc4jsg3m74vnbalajt754.aos.us-east-1.on.aws" @@ -22,7 +28,7 @@ async function search_wild() { const query_str = document.getElementById("searchbox").textContent - console.log("query_str ", query_str) + // console.log("query_str ", query_str) const result = await client.search({ index: 'subtitles', size: 1, @@ -40,14 +46,15 @@ async function search_wild() { async function search_api() { - console.log("Inside search_api..") + + // console.log("Inside search_api..") var headers = new Headers(); headers.append("Content-Type", "application/json"); headers.append("Authorization", "Basic ZWxhc3RpYzpwY2lXY2xwTE5kWHVpY1VoWFY4YmhnazI="); const query_txt = document.getElementById("searchbox").value - console.log("query_txt ", query_txt) + // console.log("query_txt ", query_txt) const query_payload = { size: 5, from: 0, @@ -57,7 +64,7 @@ async function search_api() { } } } - console.log("query_payload ", query_payload) + // console.log("query_payload ", query_payload) var requestOptions = { method: 'POST', headers: headers, @@ -66,33 +73,108 @@ async function search_api() { const response = await fetch("https://ac55987c83844faa90726d4e5efe92b9.us-central1.gcp.cloud.es.io/subtitles_4/_search", requestOptions) 
const record = await response.json() - console.log("record ", record) + // console.log("record ", record) if(record.hits.total.value > 0) { - for (let i = 0; i < 5; i++) { + const result_num = Math.min(record.hits.total.value, 5) + // console.log("Maximum number of result: ", result_num) + for (let i = 0; i < result_num; i++) { const result = record.hits.hits[i]._source - console.log(result) + // console.log(result) + const result_dict = {} const response_str = ''+ result.week + '
' + ' Title :: ' + result.lecture_title + '
' + ' timestamp :: ' + result.time + '
' + ' Subtitles : '+result.text + '
' console.log("Resoponse :: ", response_str) - await display_result(response_str) + result_dict["week"] = result.week + result_dict["lecture_title"] = result.lecture_title + result_dict["url"] = result.url + result_dict["time"] = result.time + result_dict["subtitles"] = result.text + set_result_format(result_dict) } } else { - await display_result("We could not find a related topic") + const result_div = document.createElement('div') + result_div.innerHTML = "We could not find a related topic" + result_container.appendChild(result_div) } } -async function display_result(response_str) { - const modal_body = document.querySelector('#modal_buttons_body') - modal_body.style.fontSize = 14; - modal_body.style.fontWeight = 400; - modal_body.style.fontFamily = 'Courier New'; - modal_body.style.color = 'black'; - modal_body.style.textAlign = 'left' - modal_body.style.backgroundColor = 'gray' - modal_body.innerHTML += response_str - -} \ No newline at end of file +function set_result_format(result_dict) { + + // Initiate html components + const result_item = document.createElement('div') + const result_second_row = document.createElement('div') + const result_url = document.createElement('a') + const result_week = document.createElement('h4') + const result_time = document.createElement('h4') + const result_lecture_title = document.createElement('h4') + const result_subtitles = document.createElement('p') + + // Set up class/ id for some components + result_item.classList.add("result__item") + result_second_row.classList.add("result__second--row") + result_time.classList.add("timestamp") + result_url.classList.add("lecture__url") + + // Set the content of components + result_url.href = result_dict["url"] + result_week.innerHTML = result_dict["week"] + time_reformat = format_time(result_dict["time"]) + result_time.innerHTML = time_reformat + result_lecture_title.innerHTML = result_dict["lecture_title"] + result_subtitles.innerHTML = result_dict["subtitles"] + + // Organize 
html component structure + result_item.appendChild(result_url) + result_item.appendChild(result_week) + result_item.appendChild(result_second_row) + result_second_row.appendChild(result_time) + result_second_row.appendChild(result_lecture_title) + result_item.appendChild(result_subtitles) + + result_container.appendChild(result_item) +} + +function format_time(time) { + let parts = time.split(':').map(part => parseInt(part, 10)); + let seconds = parts[0]; + let minutes = parts[1]; + let hours = parts.length > 2 ? parts[2] : 0; + + // Make sure each part has two digits + hours = hours.toString().padStart(2, '0'); + minutes = minutes.toString().padStart(2, '0'); + seconds = seconds.toString().padStart(2, '0'); + + return `${hours}:${minutes}:${seconds}`; +} + +function remove_all_children(element) { + while (element.firstChild) { + element.removeChild(element.firstChild); + } +} + +document.addEventListener('DOMContentLoaded', function () { + const parent = document.querySelector('.result__container'); + + parent.addEventListener('click', function (event) { + // Check if the clicked element or its parent has the class 'container' + let container = event.target.classList.contains('result__item') + ? 
event.target + : event.target.closest('.result__item'); + + if (container) { + // Extract the URL from the child anchor tag + let url = container.querySelector('.lecture__url').getAttribute('href'); + + // Open the URL + if (url) { + chrome.tabs.create({ url: url }); + } + } + }); +}); diff --git a/ChromeExtension/style.css b/ChromeExtension/style.css new file mode 100644 index 0000000000..bdb3ba9f26 --- /dev/null +++ b/ChromeExtension/style.css @@ -0,0 +1,132 @@ +@import url('https://fonts.googleapis.com/css2?family=Roboto:wght@400;700&display=swap'); + +* { + box-sizing: border-box; + background-color: transparent; +} + +body { + font-family: 'Roboto', sans-serif; + align-items: center; + justify-content:center; + height: 100%; + overflow: hidden; + margin: 0px; +} + +.extension__container{ + display: flex; + flex-direction: column; + outline: 1px solid black; + height: 600px; + width: 450px; + margin: 0px; +} + +.header__course { + display: flex; + align-items: center; + background: white; + border-bottom: 1px solid rgb(225, 225, 225); + box-shadow: 4px 4px 8px 0 rgba(0, 0, 0, 0.2), 6px 6px 10px 0 rgba(0, 0, 0, 0.19); + height: 60px; + margin: 0; + padding: 10px; +} + + +#course-options { + border: none; + background-color: transparent; + font-size: 1.5rem; + font-weight: bold; + color: rgb(55, 55, 55); + flex-grow: 1; + word-wrap: break-word; + overflow: hidden; + max-height: 1.5em; +} + +.result__container { + flex-grow: 1; + background: rgb(245,245,245); + overflow-y: auto; + margin: 0; + padding: 15px; +} + +.result__container .result__item:hover { + cursor: pointer; +} + +.result__item { + display: flex; + flex-direction: column; + background: white; + box-shadow: 0 2px 4px 0 rgba(0, 0, 0, 0.1), 0 3px 10px 0 rgba(0, 0, 0, 0.1); + border-radius: 8px; + margin-bottom: 15px; + padding: 10px; +} + +.result__item h4 { + line-height: 1rem; + margin: 4px; + word-wrap: break-word; + overflow: hidden; + max-height: 1.5em; +} + +.result__second--row { + display: 
flex; + flex-direction: row; +} + +.timestamp { + color: rgb(47, 151, 242); +} + +.result__item p { + margin: 4px; + word-wrap: break-word; + line-height: 1em; + max-height: 3em; + overflow: hidden; + position: relative; +} + +/* .result__item p::after { + content: '...'; + position: absolute; + bottom: 0; + right: 0; +} */ + +.footer__input { + display: flex; + align-items: center; + height: 60px; + background: white; + box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.1), 0 6px 20px 0 rgba(0, 0, 0, 0.1); + border-top: 1px solid rgb(225, 225, 225); + margin: 0; + padding: 10px; +} + +#searchbox{ + flex-grow: 1; + margin-right: 10px; + background-color: white; + border: 2px solid grey; + border-radius: 5px; + height: 30px; +} + +#submit-button { + color: white; + background-color: rgb(96, 176, 246); + border: none; + height: 30px; + border-radius: 3px; +} + From 0a1f5de5244bd5375c3f053e54f96692d2acb51c Mon Sep 17 00:00:00 2001 From: Aaditya Murthy Date: Thu, 7 Dec 2023 21:08:41 -0600 Subject: [PATCH 25/52] chat coursera first commit. Does not run yet. --- CourseraTranscriptScraper/chat_coursera.py | 46 +++++++ CourseraTranscriptScraper/chat_subtitles.json | 124 ++++++++++++++++++ CourseraTranscriptScraper/requirements.txt | 3 + 3 files changed, 173 insertions(+) create mode 100644 CourseraTranscriptScraper/chat_coursera.py create mode 100644 CourseraTranscriptScraper/chat_subtitles.json diff --git a/CourseraTranscriptScraper/chat_coursera.py b/CourseraTranscriptScraper/chat_coursera.py new file mode 100644 index 0000000000..8b74cf1215 --- /dev/null +++ b/CourseraTranscriptScraper/chat_coursera.py @@ -0,0 +1,46 @@ +#! 
/usr/bin/env python3 + +import openai +import os +from langchain.document_loaders import JSONLoader +from langchain.text_splitter import ( + MarkdownHeaderTextSplitter, + RecursiveCharacterTextSplitter, +) +from langchain import Query + + +from dotenv import load_dotenv, find_dotenv +_ = load_dotenv(find_dotenv()) # read local .env file +openai.api_key = os.environ[""] + +loader = JSONLoader( + file_path='./chat_subtitles.json', + jq_schema='.introduction-to-text-mining-and-analytics[].content', + text_content=False) + +docs = loader.load() +trans_docs = r_splitter.split_documents(docs) + +# print(trans_docs) + +from langchain.embeddings.openai import OpenAIEmbeddings +import pinecone +from langchain.retrievers.self_query.base import SelfQueryRetriever + + +from langchain.chat_models import ChatOpenAI +from langchain.chains import RetrievalQA + +llm = ChatOpenAI(model="gpt-4-1106-preview", temperature=0) + +# qa_chain = RetrievalQA.from_chain_type(llm, retriever=retriever) +while True: + question = input() + docs = retriever.get_relevant_documents(question) + for d in docs: + print(d.metadata) + # print(len(docs)) + # print(docs) + # result = qa_chain({"query": question}) + # print(result["result"]) \ No newline at end of file diff --git a/CourseraTranscriptScraper/chat_subtitles.json b/CourseraTranscriptScraper/chat_subtitles.json new file mode 100644 index 0000000000..3e596ee1a1 --- /dev/null +++ b/CourseraTranscriptScraper/chat_subtitles.json @@ -0,0 +1,124 @@ +{ + "introduction-to-text-mining-and-analytics": [ + { + "time": "0:00", + "text": "[SOUND] Hello. Welcome to the course Text Mining and Analytics. My name is ChengXiang Zhai. I have a nickname, Cheng. I am a professor of the Department of Computer Science at the University of Illinois at Urbana-Champaign. This course is a part of a data mining specialization offered by the University of Illinois at Urbana-Champaign. 
In addition to this course, there are four other courses offered by", + "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics" + }, + { + "time": "0:39", + "text": "Professor Jiawei Han, Professor John Hart and me, followed by a capstone project course that all of us will teach together.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics" + }, + { + "time": "0:51", + "text": "This course is particularly related to another course in the specialization, mainly text retrieval and search engines in that both courses are about text data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics" + }, + { + "time": "1:07", + "text": "In contrast, pattern discovery and cluster analysis are about algorithms more applicable to all kinds of data in general. The visualization course is also relatively general in that the techniques can be applied to all kinds of data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics" + }, + { + "time": "1:28", + "text": "This course addresses a pressing need for harnessing big text data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics" + }, + { + "time": "1:35", + "text": "Text data has been growing dramatically recently, mostly because of the advance of technologies deployed on the web that would enable people to quickly generate text data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics" + }, + { + "time": "1:50", + "text": "So, I listed some of the examples on this slide", + "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics" + }, + { + "time": "1:57", + "text": "that can show a variety of text data that are available today. 
For example, if you think about the data on the internet, on the web,", + "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics" + }, + { + "time": "2:07", + "text": "everyday we are seeing many web pages being created.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics" + }, + { + "time": "2:13", + "text": "Blogs are another kind of new text data that are being generated quickly by people. Anyone can write a blog article on the web. New articles of course have always been a main kind of text data that being generated everyday.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics" + }, + { + "time": "2:31", + "text": "Emails are yet another kind of text data. And literature is also representing a large portion of text data. It's also especially very important because of the high quality in the data. That is, we encode our knowledge about the word using text data represented by all the literature articles. It's a vast amount of knowledge of", + "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics" + }, + { + "time": "3:08", + "text": "all the text and data in these literature articles.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics" + }, + { + "time": "3:14", + "text": "Twitter is another representative text data representing social media. Of course there are forums as well.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics" + }, + { + "time": "3:24", + "text": "People are generating tweets very quickly indeed as we are speaking perhaps many people have already written many tweets. 
So, as you can see there are all kinds of text data that are being generated very quickly.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics" + }, + { + "time": "3:38", + "text": "Now these text data present some challenges for people.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics" + }, + { + "time": "3:43", + "text": "It's very hard for anyone to digest all the text data quickly. In particular, it's impossible for scientists to read all of the for example or for anyone to read all the tweets.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics" + }, + { + "time": "4:01", + "text": "So there's a need for tools to help people digest text data more efficiently.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics" + }, + { + "time": "4:09", + "text": "There is also another interesting opportunity provided by such big text data, and that is it's possible to leverage the amount of text data to discover interesting patterns to turn text data into actionable knowledge that can be useful for decision making.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics" + }, + { + "time": "4:27", + "text": "So for example, product managers may be interested in knowing the feedback of customers about their products, knowing how well their products are being received as compared with the products of competitors. This can be a good opportunity for leveraging text data as we have seen a lot of reviews of product on the web. 
So if we can develop a master text mining techniques to tap into such a [INAUDIBLE] to extract the knowledge and opinions of people about these products, then we can help these product managers to gain business intelligence or to essentially feedback from their customers.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics" + }, + { + "time": "5:18", + "text": "In scientific research, for example, scientists are interested in knowing the trends of research topics, knowing", + "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics" + }, + { + "time": "5:29", + "text": "about what related fields have discovered. This problem is especially important in biology research as well. Different communities tend to use different terminologies, yet they're starting very similar problems. So how can we integrate the knowledge that is covered in different communities to help study a particular problem? It's very important, and it can speed up scientific discovery.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics" + }, + { + "time": "5:57", + "text": "So there are many such examples where we can leverage the text data to discover useable knowledge to optimize our decision.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics" + }, + { + "time": "6:06", + "text": "The main techniques for harnessing big text data are text retrieval and text mining. So these are two very much related technologies.Yet, they have somewhat different purposes. These two kinds of techniques are covered in the tool in this specialization. 
So, text retrieval on search engines covers text retrieval, and this is necessary to turn big text data into a much smaller but more relevant text data, which are often the data that we need to handle a particular problem or to optimize a particular decision. This course covers text mining which is a second step in this pipeline that can be used to further process the small amount of relevant data to extract the knowledge or to help people digest the text data easily. So the two courses are clearly related, in fact, some of the techniques are shared by both text retrieval and text mining. If you have already taken the text retrieval course, then you might see some of the content being repeated in this text mining course, although we'll be talking about the techniques from a very different perspective. If you have not taken the text retrieval course, it's also fine because this course is self-contained and you can certainly understand all of the materials without a problem. Of course, you might find it beneficial to take both courses and that will give you a very complete set of skills to handle big text data.", + "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics" + }, + { + "time": "8:02", + "text": "[MUSIC]", + "url": "https://www.coursera.org/learn/text-mining/lecture/Osat9/introduction-to-text-mining-and-analytics" + } + ] +} \ No newline at end of file diff --git a/CourseraTranscriptScraper/requirements.txt b/CourseraTranscriptScraper/requirements.txt index 35cc33593b..eb2c29c85c 100644 --- a/CourseraTranscriptScraper/requirements.txt +++ b/CourseraTranscriptScraper/requirements.txt @@ -3,3 +3,6 @@ elasticsearch==8.11.0 Requests==2.31.0 selenium==4.9.0 webdriver_manager==4.0.1 +jq==1.6.0 +langchain==0.0.348 +openai==1.3.7 From c6934f377e07fc74c10f59f87d971c67faa85285 Mon Sep 17 00:00:00 2001 From: Christian Opperman Date: Sun, 10 Dec 2023 16:40:02 -0500 Subject: [PATCH 26/52] Fix course title scraping --- 
CourseraTranscriptScraper/CourseraScraper.py | 10 ++++------ CourseraTranscriptScraper/scrape_coursera_course.py | 2 +- 2 files changed, 5 insertions(+), 7 deletions(-) diff --git a/CourseraTranscriptScraper/CourseraScraper.py b/CourseraTranscriptScraper/CourseraScraper.py index e26d4118b0..731a4f6d01 100644 --- a/CourseraTranscriptScraper/CourseraScraper.py +++ b/CourseraTranscriptScraper/CourseraScraper.py @@ -3,8 +3,6 @@ from bs4 import BeautifulSoup from selenium import webdriver - -# from selenium.webdriver.chrome.options import Options from selenium.webdriver.chrome.service import Service as ChromeService from selenium.webdriver.common.by import By from webdriver_manager.chrome import ChromeDriverManager @@ -72,8 +70,10 @@ def __init__(self, driver: webdriver.Chrome) -> None: self.get_week_urls() def parse_course_name(self) -> str: - # TODO: Automatically parse course name - return "TODO" + title_xpath = "//*[@class='cds-108 cds-Typography-base css-e7lgfl cds-110']" + title_elements = self.driver.find_elements(By.XPATH, title_xpath) + title = title_elements[0].text + return title def get_week_urls(self) -> None: """Initialize the URLs for each week of the course""" @@ -129,7 +129,6 @@ def get_lecture_subtitles(self, lecture_url): subtitles = [] # Find all div elements contain subtitles - # TODO: Take another look at this and see if XPATH is more accurate. 
Looks like this pattern isn't consistent across classes pattern = re.compile(r"\bcss-1shylkf\b") elements = soup.find_all("div", class_=pattern) if len(elements) == 0: @@ -156,7 +155,6 @@ def get_page_soup(self, url: str) -> BeautifulSoup: # Take driver to specified URL self.driver.get(url) # Insert a sleep timer to avoid being flagged as a bot - # TODO: Replace this with a wait call to make sure the required element loads correctly time.sleep(4) # get the page source and parse the HTML content into a BeautifulSoup object diff --git a/CourseraTranscriptScraper/scrape_coursera_course.py b/CourseraTranscriptScraper/scrape_coursera_course.py index 744c958ecf..fb022f14f2 100644 --- a/CourseraTranscriptScraper/scrape_coursera_course.py +++ b/CourseraTranscriptScraper/scrape_coursera_course.py @@ -21,7 +21,7 @@ "--output_path", type=str, default="./subtitles.json", - help="Path to write JSON file containing scraped transcripts to. Defaults to subtitles.json in this directory", + help="Path to write JSON file containing scraped transcripts to", ) args = parser.parse_args() From 90b0244216e985dfa7fdae86691fada2eb4bde4d Mon Sep 17 00:00:00 2001 From: Christian Opperman Date: Sun, 10 Dec 2023 18:24:39 -0500 Subject: [PATCH 27/52] Update README documentation. 
Add course title scraping to CourseraScraper --- .gitignore | 4 ++- CourseraTranscriptScraper/CourseraScraper.py | 18 ++++++---- .../ElasticSearchJSONWriter.py | 21 +++++++---- .../scrape_coursera_course.py | 34 +++++++++++------- .../Chrome Extension Directory.png | Bin 213105 -> 77956 bytes .../CourseraScraper_LoginCaptcha.png | Bin 0 -> 388532 bytes .../CourseraScraper_LoginPostCaptcha.png | Bin 0 -> 24370 bytes .../CourseraScraper_SuccessfulScrapes.png | Bin 0 -> 30523 bytes README.md | 30 ++++++++++++++-- 9 files changed, 79 insertions(+), 28 deletions(-) create mode 100644 Documentation/README_images/CourseraScraper_LoginCaptcha.png create mode 100644 Documentation/README_images/CourseraScraper_LoginPostCaptcha.png create mode 100644 Documentation/README_images/CourseraScraper_SuccessfulScrapes.png diff --git a/.gitignore b/.gitignore index bf4ec26b0f..5d368cf139 100644 --- a/.gitignore +++ b/.gitignore @@ -6,4 +6,6 @@ node_modules *.DS_Store *.iml *.log -*.csv \ No newline at end of file +*.csv +*.pyc +*/subtitles.json \ No newline at end of file diff --git a/CourseraTranscriptScraper/CourseraScraper.py b/CourseraTranscriptScraper/CourseraScraper.py index 731a4f6d01..78c592c88a 100644 --- a/CourseraTranscriptScraper/CourseraScraper.py +++ b/CourseraTranscriptScraper/CourseraScraper.py @@ -1,6 +1,4 @@ import re -import time - from bs4 import BeautifulSoup from selenium import webdriver from selenium.webdriver.chrome.service import Service as ChromeService @@ -27,7 +25,7 @@ def run_scraper(self): course_transcripts = [] course_parser = CourseraCourseParser(self.driver) - course_name = course_parser.course_name + self.course_name = course_parser.course_name # Parse each week url to get list of lecture URLs to scrape for week_url in course_parser.week_urls: @@ -44,7 +42,7 @@ def run_scraper(self): course_transcripts.append({week_str: week_transcripts}) - self.course_transcript_for_json[course_name] = course_transcripts + 
self.course_transcript_for_json[self.course_name] = course_transcripts class CourseraScraperLogin: @@ -88,7 +86,7 @@ def get_week_urls(self) -> None: week_list_xpath_pattern = "//*[@class='cds-108 css-1mxkpit cds-110']" # Need to make sure the element loads on the page before it can be scraped try: - myElem = WebDriverWait(self.driver, 2).until( + _ = WebDriverWait(self.driver, 2).until( EC.presence_of_element_located((By.XPATH, week_list_xpath_pattern)) ) except TimeoutException: @@ -154,8 +152,14 @@ def get_lecture_subtitles(self, lecture_url): def get_page_soup(self, url: str) -> BeautifulSoup: # Take driver to specified URL self.driver.get(url) - # Insert a sleep timer to avoid being flagged as a bot - time.sleep(4) + # Need to make sure the element loads on the page before it can be scraped + try: + transcript_xpath = "//*[@class='phrases']" + _ = WebDriverWait(self.driver, 2).until( + EC.presence_of_element_located((By.XPATH, transcript_xpath)) + ) + except TimeoutException: + print("Loading took too much time!") # get the page source and parse the HTML content into a BeautifulSoup object parge_source = self.driver.page_source diff --git a/CourseraTranscriptScraper/ElasticSearchJSONWriter.py b/CourseraTranscriptScraper/ElasticSearchJSONWriter.py index 262d6a357b..219bdf67a5 100644 --- a/CourseraTranscriptScraper/ElasticSearchJSONWriter.py +++ b/CourseraTranscriptScraper/ElasticSearchJSONWriter.py @@ -4,6 +4,13 @@ class ElasticSearchJSONWriter: + """ + Class to take a JSON script and write it to ElasticSearch, so it can be used in the Coursera + search extension. + The current implementation uses the project team's ElasticSearch instance, but this can be + changed by modifying the 'ES_URL' default value in the class __init__() method below. 
+    """
+
     def __init__(self, json_path: str = "./subtitles.json"):
         self.url = os.environ.get(
             "ES_URL", "https://search-cs410-project-hw5dhpc4jsg3m74vnbalajt754.aos.us-east-1.on.aws"
@@ -11,27 +18,29 @@ def __init__(self, json_path: str = "./subtitles.json"):
         self.user = os.environ.get("ES_USER", "elastic")
         self.password = os.environ.get("ES_PASSWORD", "CS410-project")
         self.json_path = json_path
-        self.subtitles_json = self.load_json(self.json_path)
+        self.subtitles_json = self.load_json()

     def load_json(self) -> json:
+        """Load JSON file from saved scraped results in preparation to be pushed to ElasticSearch"""
         try:
             with open(self.json_path) as f:
                 subtitles_doc = f.read()
                 subtitles_json = json.loads(subtitles_doc)
-        except FileNotFoundError as e:
+        # Should always work unless the file doesn't exist, in which case the user should be warned
+        except FileNotFoundError:
             print(f"{self.json_path} was not found")
         return subtitles_json

-    def index_subtitles(self) -> None:
-        for weeks in self.subtitles_json["Text Mining and Analytics"]:
+    def index_subtitles(self, course_name: str) -> None:
+        for weeks in self.subtitles_json[course_name]:
             week_val = list(weeks.keys())[0]
             for week in weeks.values():
                 for lecture_titles in week:
                     for lecture_title in lecture_titles:
                         for subtitles in lecture_titles[lecture_title]:
-                            subtitles['lecture_title'] = lecture_title
-                            subtitles['week'] = week_val
+                            subtitles["lecture_title"] = lecture_title
+                            subtitles["week"] = week_val
                             self.write_to_elasticsearch(subtitles)

     def write_to_elasticsearch(self, doc) -> None:
diff --git a/CourseraTranscriptScraper/scrape_coursera_course.py b/CourseraTranscriptScraper/scrape_coursera_course.py
index fb022f14f2..6945460e74 100644
--- a/CourseraTranscriptScraper/scrape_coursera_course.py
+++ b/CourseraTranscriptScraper/scrape_coursera_course.py
@@ -1,9 +1,25 @@
 import argparse
 import json
-
 from CourseraScraper import CourseraScraper
 from ElasticSearchJSONWriter import ElasticSearchJSONWriter

+
+def scrape_course_pipeline(
+    course_url: str, username: str, password: str, output_path: str, elastic_search_push: bool
+) -> None:
+    # Scrape a Coursera course's transcripts into a JSON file
+    scraper = CourseraScraper(course_url, username, password)
+    scraper.run_scraper()
+    course_name = scraper.course_name
+
+    # Writing a JSON file
+    with open(output_path, "w") as json_file:
+        json.dump(scraper.course_transcript_for_json, json_file, indent=4)
+    if elastic_search_push:
+        writer = ElasticSearchJSONWriter(output_path)
+        writer.index_subtitles(course_name)
+
+
 if __name__ == "__main__":
     parser = argparse.ArgumentParser()
     parser.add_argument(
@@ -11,7 +27,8 @@
         "--course_url",
         required=True,
         type=str,
-        help="URL to the landing page of the course you want to scrape. Ex: https://www.coursera.org/learn/cs-410/home/",
+        help="URL to the landing page of the course you want to scrape. \
+            Ex: https://www.coursera.org/learn/cs-410/home/",
     )
     parser.add_argument("-u", "--username", required=True, type=str, help="Coursera Username")
     parser.add_argument("-p", "--password", required=True, type=str, help="Coursera Password")
@@ -25,13 +42,6 @@
     )
     args = parser.parse_args()

-    # Scrape a Coursera course's transcripts into a JSON file
-    scraper = CourseraScraper(args.course_url, args.username, args.password)
-    scraper.run_scraper()
-
-    # Writing a JSON file
-    with open(args.output_path, "w") as json_file:
-        json.dump(scraper.course_transcript_for_json, json_file, indent=4)
-    if args.elastic_search_push:
-        writer = ElasticSearchJSONWriter(args.output_path)
-        writer.index_subtitles()
+    scrape_course_pipeline(
+        args.course_url, args.username, args.password, args.output_path, args.elastic_search_push
+    )
diff --git a/Documentation/README_images/Chrome Extension Directory.png b/Documentation/README_images/Chrome Extension Directory.png
index fd888da4ebd2caf47195113e18315378a3ef6554..131143d614412680147b83b248f993f4f65658f5 100644
GIT binary patch
literal 77956
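The `index_subtitles` change above generalizes a five-level traversal (course → weeks → lectures → subtitle dicts), tagging each subtitle with its lecture title and week before it is pushed. A minimal standalone sketch of that traversal, using a fabricated sample document and a plain list in place of the real `write_to_elasticsearch` call (both are assumptions for illustration, not the project's actual data or API):

```python
# Sketch of the nested traversal used by index_subtitles, with a fabricated
# sample document standing in for the scraped subtitles JSON.
def flatten_subtitles(subtitles_json, course_name):
    docs = []
    for weeks in subtitles_json[course_name]:
        # Each entry maps one week label to its list of lecture dicts
        week_val = list(weeks.keys())[0]
        for week in weeks.values():
            for lecture_titles in week:
                for lecture_title in lecture_titles:
                    for subtitles in lecture_titles[lecture_title]:
                        subtitles["lecture_title"] = lecture_title
                        subtitles["week"] = week_val
                        docs.append(subtitles)  # real code calls write_to_elasticsearch here
    return docs

# Hypothetical sample shaped like the scraper's output
sample = {
    "CS 410": [
        {"Week 1": [{"Lecture 1.1": [{"text": "hello", "timestamp": "0:01"}]}]}
    ]
}
flat = flatten_subtitles(sample, "CS 410")
```

Passing the course name as a parameter is what lets this routine work for any scraped course instead of only "Text Mining and Analytics".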
znr@t_PM=DvKgYuESH!_KlU74O71=8@vOZgR507(i`{U|sq6)x@eHd(XVrMjF*&1>$vc*X_)qS$T+^#fm|1!|@?hIc_9|*4v_>M@#DiQxsoTI&H6J zjK-xWOkJH^OmM&VL(kivyy`Nyso=AwzF9DN-^BCSf;0uLAAy z)EQ_Ib^vD2r(F#NKY~tIQ564A?`EpNL3bfZenxFrug>!B+yzf%u%nXgFADd2jbrUS z3eyHxCHLSzqt6X>F8c8BFQS>up+Kp?QtQ-!diHcFI5bnrY2~?l-b^*)s~G}Q;VvnW z(o`fhV~z(gOI{hlCb@5+u1zj+fm8klGA@hWh0Rip3+4;0tY9y^N);L{y-wERWu`VB zanO zjH4!G-lL@7p7_IDmCJ&E&w#=j(d+vH7J0elsSSjEt^Ir(LdictXE9d643rB0QMvy& zd3G1!796OEaHL2d8n;R3|L0d3h&sz;7XW4ZGNV%R|NVXH7@1x;e^%GP!T%p{@~?>g z--JhW0Dm54T=XpUEXx0+LjJ1lbC`&8bZWH5Vz7S73u)adc?@l#bxr{$hV@;C!?Zfz2-^r9gOH*-?l~^J3WHI3COHaMaW;0@u(x9=q_QqJU1m!dS9I(a6?$_d8dweF56l9 zX_*GxwCkY3E-ud=%>0Ub|M_?v^&svd($SGL!HeY0AXD;+*RTBl#82655&lIP1e4mX zOFOQI@5952T`UbMI4Njoy)^4TpO5(Nl#h%W~}*kz)i;n7Hl>sA|8 zui!H~rphnpDtAMgoR=h36)K0;+DE{_Df15U^>H~(>A`}#&DL0-qI+a2*FYg)fr(5$ zNK%0^SBPM_a32(h+(!OKbNBAH3O;>c>$=sZ{- zmVfsM&OazQ=pzKwMKK?sa&fO{;f25!9D|#&X{gFvC202Ck^6 zsZjyz?-c(w?#%vP)(>}%~A*0L1AwbwJ~V)DVwO%od9%~fCCv2Ik;^My%lk8P8F z!l?lmW(33f-++!fVCo-sdz(5gSF5-F>5dawW5hv;`ytP;%@o;V9goy;S3I^P?2)r*!%1C4FCRu1*wb#YtVj-FNjhY(9O~O6haxCq zvL8<>Iep-aiAs!;gBlwfAHVZrs73|J$4k+yKtmlsVx%V903hP7(ES9J540LA*KLQ( ztNWN~^eN$_1?>wVE;vr5G25RZ1*So}iEX+jR|DxmNxESFf}mtTmek}BdAk%aaE^jB z0uE8)+pZPR*pcf!CcG~eQEKnh@*_SYC@5rk1xDN1#8e7wBAs{&b*JaF%UzXDw_(*o zEX6Px3N#AjmDx~8e5ZL%Rc=FUt&kqEU#HS!lL&UX!MOIXwB>9mnJ7-7oxV$+`phy> z=5!&qa#fsR^b!lJ_(6$pXA2A(0hbj2`bS-0tbCO&ItID~oPpZi?OsWenEiNxk+W&l`DNUhKde4VGahd-qmN^bQEwI73egj?aJfQ)cnOwA_ zy_N_G36W56z+S+LDEg+vyI<#iyXz~$4eHi4H#ZljQ=q%a#q`n&nr1(^4;D6rpvBL$ z-tGz7AfHJ5yZT{5oWWe_4^JzhAbzmN5BZ^1Fh!jze@*>U{HI7U!)|P$&XJsdf8=*u zyJ2e|>5~7L1iQK{gQ!%1pnm~OUeRCOZ;tKj^9u!J`rr&o4*{A@=?;HCDJ7?9Tr7?2 zfufo9_R-109rLT5hAmfE?-ugvq~VQ&!}-Y6L7fcYH&zjefUbs%r3M)rR!zk)Kr`WR zNiR9$=&C^Zwpan2Z9EID8^=Lrwk2s@;NH1~8Sx}3+z7#=TB2H4Jt=0s zWx^DS;P6t$SNLIhOJtLELOn75NAs@+H@y3-_vjC#nu1Hv%`v}I9s=-GQ&UjB3U+%i zz^lfLMIzM41Jy*pVBM9KP9X z;_Uz3`B4)C_thUOmN*FODAIV}7=-|Xt~dBgLVrX`Q&}h?_`11{iM{v~;+K$h9p_pK 
zQgLuYkG2z{3=g1$xpdHHpo@(Q^#g4+=tz+4~5Hg|*l`3j-=`7aD)EY6m2JJ>n$ zO4r+FCIMYFk%xKFk2yCLK}LO)n$1=5-;yPAg9PChj|&TS#o^$w*HSEUpOn?i;bYE2 zu|uqNzNU~c)O&G^;6Pd(b*Ppm(wxIjPH$e}vp%B4Qv?xQS~QRCkefLO$j*SJ*F}EA_X*gNH!iABnWEZ|V zA?W<7yLuMSzCeq>SVbdT7`&;+K@sD-A0h)^(Ws0rC(I@HgGc{!UGO_DYI@lo0ZQR0 zJeLLv;iyniXukmLSbDa{!Jfr1dBRX9UVU%hAQ3|nZSmIOb}lFWTthcxTAzD%`YYV3 zq@pqDmxUVb@>*)VzzOcYz`iovOSmio8iu&`-z0o=YG}xip1>3J8c~-=93SOPOH7=r zEq~&kDk3X=43|DwerkciHRIr%!$bYFHlx`H|xe zC~GFL|Fl||y~B^6IE(mKdp_Gcsk5b?)K@utg<2+zQxB1y#>s3X5CwaVBMPTgI$V@$ z(Vs@-#H)a&P8TjLm9VV*b?>Ui?!qEH{tT455p^A+hJlEQ7n|AY2~(kaZYYFp$4I&= z33l4W2jzEoDa$u1x}JzJyHTTsZMb&a_cKM&$|!eyi%*z8qD%_5h$YaJYOr30tJ$jb z@4dTM_jcluECQ;)Zs9=ib5){tCejDo+fLr%U3R7al(&2 zZnbwF;cr?~WbL8}6y!b<6znf6OtWKFkpzW9K!`WKQgcCtT4K5gsHTh#M*?@`NW+P# z!6X(Ei4mlBv&K~Fbq|h)QCVGK+*1!(W`#ilUH{{DDeqcopW`RVJKMC!8;_hRX6$$) zCWp}7kreZqh%NwEZvVm|&{$M5Ldne~siH~y0zV8gIw92Q{M|8c=Y66>0%QD#t-v7& zl;P-7!<=8hoE{V`Qh87$sUe?f``K&#ee!TOCP^Wyh}yX3+@MdLYl~%L?L(>;lwSkz z6(QSD{%+ZAzG(YGNw2%(iR_TZ9gnOjBrb~SD(k;3$-c;@TfucAwrqdZW0DaXmrRsX z8OnAUkX+w~Ym9W13L`I(i$+rEJ%G*I_%vqF2D2{zOg&3nbTQ#ODbgY$A6Hm2wrlkhWl)4Om?;dfCzEP*?j`CW z2ORF4XWe;vR-3eU#9UDE$ZaToAV+oO@YgZm7bq(9k<(37BM3*-;2n0kK^72p{*<5 zE)W(ayqLey$;ikP7RFA*R*sXqSqEEU+6G|1`%vb+WEgYH0$ewyB7`SJQ-V9XJ~A^5 z9FC3ffvH{W^Q@-lGvJVCrMP8!1TKp|ovNMZPK7sy_;?C*!?Cm~SxO3Yy`iXV7B9?trf1n6h|ag# zA1k&OQzK;HC&FQj*7All%h-1vGWVwqSj(t4RRrbLMF0MCT7U5;zTekfgckS2iCQhY zkyrQhxk{Q+AK{Ah!76WJurBcd?KMSf)r zdPINac!evVSP$5@4Y%SE2uM=Da*ekeWIO^BU52$RRKdM;dq75k-MC$_m+0aUO1S{{ zl(iiY=gYhPcNPG(=_I5k=`!1L=daC|G26wTmaStg0y#8DQ;(I4VKD0@PgiomT%cUc zi^q~^kbGtX-bG{d` zzXD-N9Gra7uQ<7`9LosBxn?-Ht9?(W^Ia-u z)tKZC%Ui5(nN0xK(#~m8Yjkd~>D091rLood)c&soA{pk1i}8Lzbs*h+JdK4BDbJJB zIUv)4M-)<=d+FHP;PlFJu_8%1^;|3xGj}XEogR$;rpI~O9|asB_*>v6SB(0J)Ud_$ z{(M!<$+OVTv4$#?2wrC!2tu5Z!%X47z7ZpO-1R%^nz4W27g6H|8RfVG-HA85h3Uoi zK_}#7?HCOK3yDBRM_R?oqtmu8^@0KjvrHhVU(F)k9LDf5yAp7Y26~au$cu0v&C|l! 
z)}&t(Wtd0*)Bs+lg9SRy2h0emAE7BH^m)4u!IOL8q0?%RhIFn0yy#ivwkFfiwX*wv z(SKLhzO>LYz9nHMq2Pp>8~NZ6FlSaOK^UYhP`oDn(nUJ9I*P3LJ+H$AnStgnLlIgeqhN*zt z_$^|k&s0^rY}eeZHw^oUUi9kuB8n1l?eRjMBH5b@ODWNKc@o(Jwgg~ta$RY0QdHF@ zk@Csz{9^)UtxGR${huQM6hk{lz>M~iajht{96af_3a&+~%_;-;ku#XN9SDzyv*ICW zG21&jRtgCK01&8q{p0PFE;!Qh0jE17`(Fd&`UTV)xw=2pQRe3s3w=7}P;6`=5+l&2 z9vuNv6!P@Cn|^%NU-R*lneXchg5|a)!1xlH_b_!9z`FKix2R7nCLtVy?}0EBJ0*|R z{P{7%L}?LI->q!*FJR6C8Z3$ia6WTTG3bl{YMz2Ks%(H{<^XXAKC7t!gsp1gWAhO< zKS8z!)l$xh*k~H)$l2B3-X1a#PjgBe_gXij3F)IU3L5@h{ zEvHSxXBJoxw&Qu%RL%Of+*;L!#>Zro!Rw9O@}9}RkH-O%*1i}Fl%m-yl({}M+&^ht zyw29Ko4sCUVqimb(1cG0foKQF$+%3&|IP3a@>WyR4(Q|2$(ITZ%h5XDf67SIhU9wy zU?D3AJ5u+8{?WK5-OPqRO?nqAbga6`UQz&HOsZ;h0-f0lw&}W4DInYIBr+;t@DrPq zmu~7DvwK&kguT>LG~=S3?CN?Ep>MSm-VDmHSCNkUJ@b@FT6NonyV5aesdHZ)EplKS z&ww&tA6ndI#!9-k&bCBSyK_%>aG;NZ(S%!AF$pj=)3m9yMn{cJouEV3fV=*C!op6iS^AIY8g^yODE zT{%A=uXm07gnTw1y(Zy1Fw23`)5e~gKukpO>O5gR>|XHD9bH`rPEcl2>UlA|r012C zhe76=(O$G|r+0LKEw6hw8V-64x6wIETWxZ_4^zzkj{qZ%SHj_ zvn3`uLToxWE6_fy!j@~5GXMF$t4zKyAhJp71_3o;C6CfD#_Jq6ruFu!mxZx~oRQ~e z_zln@^66gcvk;!Gaitt|vEvI&oeGbJn}crR2?#z5&&9N1rCo=ctNSE zh_Fxmr$%)UE1vC(pu5-icT^K5B}pX#86fTL8Nz{m?)#-FF$qo5KvDr#3{Io1o$5yyoR1>H$frI|*Bh$)O0I$gUau+?cqH zW<9OV2BFAgOq0g#1`X*Gvy$J}WH1iDhray#T18b&+D}Kv3 zsW*S~14bO1vylB!=VqSk&t$?(?Bqc+vzQcPZS-U~BPV=#E*LnV)rS1XK6PY<`1o4K zMk~i)RtlhQ{d#8CGzQ&)esSPXQ4|FzBRXF}LWtV32)eKT{bm37=e=JT1MTG7pCo9M z86eNY0V@n*(MbGSXFalvnV4Eh1B2CU;-O&1>p~c<9+% zK*Rkj+yR5@q+RzD#)tHZypHW}ddFqlYTJjO8S|J(pXtzFZ^|NZ-XBwjAmW@*<;f~h zdg%sf9kgn3#5li*yTj>+7SO#Psz+Y?_`)V8t2;9O;Kz`oTg}y(G_30u>D=;CbJ1eF zFQaWEs%@tnLg4Wy*m0O+dn=J3vY1DQ%stlQc5)!GKzQDY>D&w7#JC7OUhLPiW1RdU@_wX zhqZyWs=7L$i@J`c(c_4xWbB|4y*$w0&K=8pTqPk*ZT!$V^>}zeGOtV-lu%M1fSnyw;~d| zng@k1+VIDB#(M9?tF1kX6Wr-*Tl7$}Zr-NVXtB?cYTHY7AD6a2-m9ad@U_+_M~Op; ze%6TVLTh=z3!GJ|Qkas=OQB@!hzMvx6+M36Y=-ie zt_P|NZGL#c$Nd^Ka;-TE*QoIAsld7H6YKY?)S6xvOdo=YQ-c>?jjV))@Jh&3W4D6| zUf`7(aM-`%=zvAlB-jkfVW5b&^_S|`Wb{9O9yuNA`zw>H&KT;KkBz%VCYCjPU*}A8 
zXr-$5f`8GyGctukF1h90?-jnP4=#09cmSc84>R)PK14~RPtcYOj{$+ioE&V;Q7EV< zV3(I*&dz{)(fyixqfZTLBne$pt2E+0$J01)KfkqD!D#cIpqQ*t{IM3ZY)S5{nUCf>1;*Tq~IL2%TB zP#Mn$z9Y49lv7n41gG8dWl$MiBA)KB*FA<7c1JPho~yE|y3Y z<5a4Y(v!tAc<0z)7B+Z-oyG;O4F6p^1I>@LhMUtyB94!|1z_NT%k4Sr<-NFH zh7h^QzKZhoc3QsKD*lx1k$1bwxsH5A!h%Fag0?&>Bk!m&r`>X#!s0%o0qydnU^^A?m*HmegNvnC4D!%S5Xy2)r%5o$S8u6qfAj9@Ss zJU6muluotmudyTKFVi(1qCM?7YvST4RDweKYC8;adS>e6Bji1}I=_JTq@!D}rGA|&I zF;_LgNi{8KyyL7JdqQd7w!>wOBGf0o_BvUjxyf;}@el=AC|Yp2kMz2%22T3hcy=r* z-bswuah1{W#`QLO;AzP1I&!jpuLF^C4Qg)Xk+>Ehu5)9FcC`FVdcSqLY_{STX4DjsXLU0pq(U+pZ8M zft-ep$>#>gYNfu1TQd^(+I)2^PzR~WX;=B!xIE$h_xIv7gXCD5Dbz0zmoU-@TN1OyPDnS% zvBt?}to+fCYL?kV6u(2EI8=|&Zu|}T9icd(J3WJeH#!pk(IwGkvxpQ_Y@lTmjL&Ze z)?xPMiy+{g)$abNQnZO!s@F%v;?Bmfve4A{wBr)yrADjcwpre{6lA=@N_Us%L0JpU zmOI?riH4hwi;mJ)q}(nohY4-@dm$8QyRE0)!Nj!;e!r%M5d-6e#IySl13C%dSfVL< zFPUpjcovB&%uLX{Q$bADO>1ZZ;mukz=!hLuv1a|wz?)xVaouLs-kG2W{kJRx))F8H zKTdkdvo_aqT(Kuqku-dAiPGH2`ksSOA{so7riI#nzyHbxc>@cSDSKga)X4(|ClB!* zcvP*{tH0WBJ16?-)Ec9<;eDS{7yo!Iv;U>qP{zcK#afm$^w2lhUHMaG?8L?y;hdN=Y#Z^s*~M5o)ReG?>ysU@!w? zcOs4B2@bskk&=Br)@-wloHT#&t&Y|3B{>R&|kX$#@nGa9kh_DbPcjS*de47N( z?vWYUQ^vkqYpfvvVu%RVBe$g+m(^bY=n#3r!W-{grBiG9Jg~oW^1e=bJdCiIA_M|= z^wXBLXEN@%yv{`kYRdVZWQyup)2yoVPcsnsR9)9y$;Y8?nEIbd`bTZSz?EVmqK1c* zAYxd6R<>dGHdrcOUNey3{t~`_u68C^;gktN_SGmF46?h6!gTpxI`4niXTf#_)6;fg zN&^wb>!;0&S>3=_ixebF7?HJm>nbJNbB~rt7CliFxPP-$;DF*SCiu9`D*YKE%DtW* z`ut(2`$5wavPZroBvea!5vtIN?O>7J)%PqwKc(;p{?hj0=Yoe`?HGc;=?Vu@ zaD?H8!I_T!<*$^v)m1HB*1xvc=Oe)jw41^kVcBr!^Hz#uKu8pU_X6I8o4vaSXaG{k3amn?}uiALu2%Ra))KLxMp9zt+zTrTSqX8J!rju7QkM0qr^M ze9kg4Bl1aGJ(u+ro%DLVQcsSDh*35sd{doOR#4CrDd_|8qmYmkIgYZz;&M#z^Q1@d z>d|{G$moR3V{uh6%nhg@s>U*d8nd;=^Zk`|qdiknNg@ypSiw?WWcV3Jbj)=&H3%%! 
zIl-62{VgJ_{P30IVys8RYEkYBk~i4DR@Y6U!1BvM`^Sar_qsu$m<7`#Pj#M0Y`D^M zy67TH`3a0*Vfd2x2^XUtM@N7BD0O{FKnB9HS2wp5^0T)@fB&5>b9q@17ac)`*Is=) z-)=SErp+rZ?NiZ;2mv6>g>~kx$vh=xI{7!aUn{?XADB-U#gUyceyEH>0uXupt`c2@&gG<-Wl$xPXL zTZ|#gH{f)9d|Qq}3OsFpU=0b_P>o#EgKNtYfi|8$*TC(a5C-&l;G23;0eUd=Q36Tq zcH^03^+kB+Q72an)qafWGx=jRk5h`?)Xg7d$SQr8#GLdst);0(qxDUwN84YwTP+Zm=s=Ilz0g0&pYoUY)8Jrm=Ja+x}3oj1Cr znTE4nzVmVM*mQOzDe`Juapuk<*hmmUruvjaRTcrM7M!8bJ zhZ0xS1c1>wZ=M1|A;{;hGVWc49|T9xFERlrYejor>UHx31-}zarWB8)udk%1W?>v3 zjpUX#jy(~1^L@L3gkNtV>|IDiSl+GsdA?J>loSm`O_}Co&eZ%zWKC^IbPUNIm99b4{0EJ%_L2HB#OD z?Ti#8U)zKWl~D=Z0p zV&tS(VHlt=h==bs*Gu`ev_fl#)iV6jCa>^q#HBXu-PuY(yF{Q$+2R-XO;^Z}+ z;>E?J!tqLO>eS~U*Evw_8ZBN2Ug+?OEn`MOg<&v51Ku}M$Bs^QXw#lm~;1k@A)FL_BCxc}k)8dk4Kh<-~F6fS^Vx zwvwwzJp{{*=-Pu|pg-$T1E+a#G4@%FEq1m|B^ES*_H4Ku#USIIeo^FZCPjZ|s7B%i zZO>#ECjG4;eVOM3diA6KnH*Q*H0mOa9SbMI#KQvgbQLeml3=F*a$!XTkWns22lO`% zjTTD^6R8gD)4F1v8N^Ov7@UJBTPYr2A^<9ev>BUj!Xr0W^GSS+xk~&+0E8swbwzqy zm5L~WMrr)l{-SI~E^PR&rkD~47|v)d~=K_%7MOKni^ zLB72GU2hMQ;lyM|!^|~((f+aKjaYUwM;$Rwbhi*=Xv#)OLV%+N67xSM20Q`mt#=V7 z*lnA@HtR3}O$+XI7$%`vw7!3!Y61`Pq;K zIlCh2G4Z<9NAxQKXq*JEY@;#3_K-~V$3c+oKNlXJJ=3#5oftGIh95^2MgS|RPDs}pjJLs#SmLIg9yLb6MaS86YL zzeHZ2eW+ucSP6<_eYcdl@H)H>mk6lQX@AAILD;f}f(8GtSuZVcztn!e6i_|L`%~uE z`4!SE)ECS#nQ|N{LKkY>cY1;NUg~m#bL_Jv-1a{hskh^*W{gwsg*Q9^qtKdNPe2?$ zNBHD^d-4ZfS$YzSY2=t&0;8UFJkdAHTjU}Yu(d$?3wWW?DvNO!>9 z9Mwo{n%On45IrEmoe^EGl2W-GX&JO?2QGA~R%|q6A}m_7pz|%*A4d}~_dNpeTkqLx za0!{Cs$9=i<)rg2KHTWk5LNyxD8%vXl4l`F} zfp@zT#yCzciiYLBzE`&un3tN#N~bo9`}67+XG&XG!yjSn_Iz=!fuYK0GkHxzMh(Z{ z>DqI(I5^)S`0DKFkV={8bq&Aun)Q~UGUo45{qO;Kdt3W2@x;XbVY&}ly0-)Kyg2!5 z)|Bl2cz7>!D%mn_x; zMCyILhUK;&LPZ`*DVroD-+BQ~Y43+Xy-(vqWBFG$6NltJT{|7r+V{ojF(Z_#Z*^GY ziw-DPBz_>7#0FsoHaod>=D#tQ9`q9qR-PUC-f{VQ8NTm3>-Wm;R$~lg2~6&i{kf(o zX8d3Bq9wpbBJ|ZA7Uc@uRc1nS;j<^dtyUj?j1f8%6g)5~=tJB|9(!_R#gF^cTxhcA zxT>1)XT6O}w>HV`0{xD{Ed-YiUL!SX#DsxuY${0#RI69^# zCr5n-WHj%`G4V zd0!pr2JC9_VqMF=|D4G@I$4dP^G4wm*fCsmzW@N>xL4(aHtvkZ>&q4I<+)`!v(h~8 
zE*;1ihwZWLXOeV^sVa-5{SfhwSUxz3M?B98kR?xx1v`dHJm<1Zyu@GmnROJdBrtFYO(4@`=42rC{!6G6~3K)-|RPiVzqO#F&16;ljRL; ziVEyMsyXFg?gZ?rQZ#8uzB4z^jG9X@7ZnpNsK*UkS1j>Z=*v5iQoowidhNEW?&iZl z3zlLl)sts29bBn`osBbWY9A3Gzjsg8t3|;jwdd3lE-WgHV>RKoS*43=EB6j1FmTyD z>W+nEGrV9|RIl%?Ctv^9xBG7?C1Wkei;`NsALdfBdcVU%m*+$(l>Wdi#aJW5w-s)%n9kSUu)iI0lI5F#>v{C#DTt8WMQ!DimTz(es>a6q?Lw zK%<;(iQT>PT5bDjHqZMDOI%a^%j0#`UeF+_>UljZY!+g`p{wvjGx_KON!sw?F;L-+JT>f$qpDDT>nkq;48pH0q7lDD#D5pY1-yuVz$}-(ySF+9^`dDw z?td)(b<^&R1bK3FJW790$m90p!E?Y1M&83WlZ!M~x;exzz84r_{);RhpL3*Vt)68C z{gqm{ok8b%d_B0Cir+`9mZhTIgVh^m&P0$^Fc85M#Z%LJdU9ae{w`x ziVS)el6!X}G{SEprx5E+Kz3S;^*Ze} zi(tIJ$0$%n@^-dW&B6cstRSsJZEFAF6ZbCU(J_D9v(QtAB_RCsXQl*}v22&bJ4cv= z1D=Iq#s4)+0zMeOzRU8}c#7s(lEzzz5$GWRqY5Gx5W{*fl5BX7cOcMrQuL%np#be>egx z!ncy`!9&~7!Ief9SW^RmJja)T&;AS(m_kHyI5#ug0En7y%|0T+PNfxF-k7^V#0aaC zj;g&=N{QB{#MqpK%`i8V2#1 zsK7M@zPifI#f8Jz#3Y92TK<&@*hYrotUzMnc#~ep!ONz$bgrG{!BYaS0GPB21q23x zK(DQ4UvJOVm8oe21qCUJi9PB1`nAw$%~e4ax2vnmwMJ)(4!szu8JD~?3Z3&WRh8}z zk5U7H6o!O`W|o(~3U0BFqckSWnbG7TUy3Fbrsm_*tx?j@kldLp%?S;CTI57HL6syj z(9^*t@w{Zna!k9;+0xLq1#RgEN;A%&{bD4m)e!GvJpeRT;ob4}2&(&Hf2?;=TLWHEGq!hshz_|n|1xWM<066nQL~)fE z!Lp#Ca%6w~I={BBukYP-xjr@)mSMaNa1KlvKp;Cmke}>K$!olyEYZq?k5YFMd3`Ay zUGe<(FxV~w{jViC;u;a-E7NICie&~uHFylN5H>Q(t`w5^*cl=LvgjG+zn8hZyd-~0 z!_Kb8kcQb0Q5Ve)54gH=JEfmQb5(zGg!dsw+h9G~Ek_opB900~UlVGjrkTO66gtJP z?Rni!-mSZXt-^4g08yIX2GqZw?e6f+v?7*GC4;YOkL(%EjxoYIvn6BWR73)T)!&3a zEh#A}q`jqQuWYv1nJh6kQ^V(;lb)LmHBfHK{|Ddt?``cLeKLeZiZNIGNu$=*j942I zemp_NH_~;ZfTDn&86-t7lp5tRRA|bi@#!!mVBVwIQQ{Q^u2pDL<^+b*FN^}U=Ls~d zDKbT&&XO&oxoZhr$&Y95*m-$V(1>^>8Z1RW)H2LDsN$Oa#W)~$1B95cE-Cg{Ly7_& ziGhKk+GG&FZ)ivrP8i;g-hZ~?eV$YTg)&&X^N1&a>OOuq{H2{@DR36@(zyqd2E?HG z3^Ed(qrI~H5}Z|Dt}sF2y{5Yjz3MC^H9p^)%bOivA53C+*@>hZ(1YR>kigX`1dcpF z{692&#lpD#c86&8$^A!Cb;$x~{vua~HMA@kf-;xRZP1`Cbe__XI|B1Xw++U%?8;ZDma$SmEk5 zZ<0?@|1ZZpT8aOT!2p`grwmB|@=S{7Fv0kF=%c??jn5-E3@}lW;41tlrw_>V^PSvJ26jZ{PWV07l`UwA}9oJ9Ar>r 
z`Hv;=UmHORYi!)O2zhLMCQGy>fw;!&QwZ4r9C^f8P~%dvHjFpaHv%T#JWhI@xYHerC4IMWc*;KZO{npdTeP#d3KmYgJM%#n07VWHl-7i3I z_%(_>3`1uRrZzG>jVEc_+`{azT%SvmH_Xv!WxSyJxapvH#mc6%;ro-qq9P?fil8nz zMSJFnsi`rbtE$YsR_i=l6Y06<)ky}*e%g-UjfmSPoX?9cyIl=(NPnpozXYFn(as*T z^&rQcxW`66zKJUd8Gr%7Kf+b(LeMv?R=s`+F^qg8BoxpqZ#lsc9lJV8cXX#^^9?~d zp|DxlRbxun`wHt;{~Ne(u_@uq)HGIIU8GnaiK|) zo)!~8>#4zDzrtyJ7K1ejb4yF*EzO()nH%wAlnsZFrIz=w%r4-};{o&aJeB9xQfnuB@yeaDR34p6kcO zsu_+J7rr{orotEL!aD{ts$>BR?+ZaBgp?mcRgkDA?|XN*l)&CHuUiYZtr^p;f&^W~ z{x)U=R7I7P<1oFE;;hKbYa$PY5Uh}a4P%2k9dF)vY`n1=_C<*?*MdaEuoJIa<)jFD z-TkYDgZzR5$%QcWOXRu_y{d%2qW0-?(_Bi!epF;Ev8f*4A5-{{%dYIVtfI}f7;^I- zefg=Zt18B>?T8cdc%#O7F~f6ygMZ#W%t-X1-3;TKPM*#H)}3i(&#}>4&wq(^%9)oY zXSNsXLCOI2jrBUEi&@uK_~wg@U+`XgZXy+EEI;oBM#OUS%AEb_DSJ6a zsA7MQL(9K7{=Z}KeM|n<7{r9xPVTf%#BB#Q%OUHp5^>p`ZPmGiNHofYg+=87je`Tv zj_eCqEeVgvHRMd&kC-;qtMT9RQ7G7AYOc^Ihd1i*rUNh>H`u)61@%xa>C}Mwq&| zCL$6dG4lOH;r)5sPr(1tOS376?|Rd)al3J;Fh?_r{{jXF2lwMn*W(Am2nG?J1-63$ z?C!jm>F1A7lGvwMA`}^$QUM$SEg3Z`m>eq6Dp}EPLDJU?ClwA>G?NTz0Gg6FTykYC z%o=PZyy@`_c}pClTstelF|ZZTA?E=t0wfqCVn`i<5Shld+bfG2WK|Voijq}hd9X+)m57I! 
zkz$)$*VDq{e1*vE{`*V$N^JD~_9h_rgPM{JI$5G&SeK|SNDeV=O=Z34|hVoo02U@T$wmib##QMvPr2aY(hFBIR1WZy}b)hn8I zJT!R$2Jymn)E!KSyg%E^GDzJWPG#vf2lE@8yu(k77o~H#j7xsJA{U)fWRm6r?mAQ~ z?j6e;qiJ5VK-YSEH)!xxcY3Im(BNDBZL0g`9?)sM$tYy_EBdZLnR9DQj@HV+UDPmN z7EQCI<;ir#$rmO0q#~>?%2tvzk3(N6tc^2vJ$G58!qaBqMEajGZ1^tSB}Ju8atv*= zb*AVkw#U5=^O98GR+{!RnO47k3)~exUOitL8`_&2a9o9Zpv++4{UUs(JVfXPLmiFg za%EpfP#sh{mcVN~p%_}C!0j)f@8@X5{ca1sj_t{+xfKuwA=uomGyWhfC6B^=rWi&4 zxas_C&WtIIwGz@55Rf+Q1~!e_w2iLAirm=qOyD_7v}eBGCcaM}=DSm;kk)bAn!Mem z0t?>A1BrT8j<*=xE&T0#VCVAp&G4M{N7mNzH}cj$2wOxBdkhI}-gxkYVS+@EU$%&> zcM<vVxT)pkzgPf0FNUCkaG_KcOLhk>uM_2c z+U!2Vg+_!a4fb)8oC(o0!eXUi_aAemjWcMGs8t<~K9$dN^kO?6hYr=e0I>v@s1JRy z|MZBQ?Bk0RV{v>$7a0=rlC~@D1}$HYjerE_i^v8oqD#PwX%>b_Gdr*qEg>0Eg6FxF zk^df%#`o6WZwhLn0r%m(NGD@ddNkLbm^n^gJ7;R^w30h z8(^45-|v2Vh+hcEW4)RLuSg8QCkh-bguyYba8JGik|zEb$Z3cHSJ^C)@S*bf=jUQD z-Aj6;C+H7<-~!hQ0)IVkN|-=o8tvCrKzV3lrz69)Vu2GaoR1V?6!3hafy}bs+Wyz) z2r}ZPP8DFu8=05nO>HOLt1oC}pTp%oK@l8dN~cwRU?2fM_mXM15bCs-Dk+;_xvyw6 z<*;9x=!klbKhG-Ly==aaU)s$_?7OT{4r ziW{RLaZ0~eM@0HV5J1ygVE`P@eC#jFJz2MqZ0cN9@2GckR=Oe9;C!HKZf?FUBDO+~ z`u86+O1CSJu1t>4lnkT<({!C?y*IZB*`g#zXBr1AiP;*9g8Y1l5e<}QP4oZn+2}{8 z?k0`%3mFJfm#GPn$9C6iciKt-tmPkaH(4kVax&&r4=xWW+7D@e?}ETjH~u*^9$AU`z;K?DEs356J2~3no@HFvw*8B)IU`RAQ9FHuhJqLdKElL#RRk55h{9o*8@%5Q}LWB(J#tfha>=gXZ{S|C?C$Xd0tCX^t zrxue6Khg!2T{ba7NX)FvGE|nl<^6+DayzrhuVt*@B~g$+@zd+d=bi3-p+wyO_jW8i}56470@HqKke!ly2D9afL<0vXBZ87NbIqvSx6z*v>2gvMi z+#Dxt>q6CfZ__{fONtE4Y2RJ8-2|q(uH#%C9(d>VCco4_3!y5ko?3E`(sj9YyqYNU zE?GGxJ_>Wa2%%E`h=}_)fQ)DYN2Ddvh)D{^8qWa!#BQ06x-r;X~p-EbNb27ol*N%mSCMHd3A!qDK zow)Ipql&-$0bKr*XiS#hwqpH>V&6<#8kFbkSyyLgVpBN~3Ik$%J0yNPpSy{vDn@SZ zzP>OUF4f^e`ZuNG-?fYp2hURpsRs)M$tBPtozYXs4=P7D8Lp?pMwmG-H zy5sMUAC^#)E|GaG>z(GkElJC#E#l0eS?u(^Qsz^ifjH&0V@=I&)(6QKE@tYl!vf!A zi7Poe;?&rt;k#<6J37|T);dklew=?PVlwbC=U6_CQz-;3M6Vl66PchN@r*Te@v`kc ztK<_qo&NC(%W&eV<25E#rYJoh7roboevRR@)$USdLU-7I!u|KnXSw!1umP8`4!4nm z7cT5dAtI<8Odq*IA;`Xp(S zzt#xp`V&gF#Z2GZ=beHc)LL(?n|7>EIX3)|_`}70YQUKCIDT#lnf2Y 
zbQ~8P<63MO_vV^{fg%S&2|AAP+voDTZ_5aFW@5l_rh|R)h4>og6r@z#pPrx-7MAYa zUal@UkIA;(^DYec*IU;gOBEUj{>tiqaDA3(p9LGdLb6lFGqs&*RHKJ8oc?@+eTF1y z)Za<7JHWe9JvNS}3M3wmv$qiMEmm~w5jA&3uDUxNfM+D7gH`x%1b{9I1Uh}Ocalm# zy72j;blP!cdGTlAxRJuLvMh~m0%4^~4=f}wGh2zCJeG?ZBLnkf@y!QHnN*pkdU>39 z9I4G>Q@sS~X~uQs{qv(;AMc-gb+HhJBV;>Vk@0!&VcxjZRJrp?9PuFq!M`27x0G+! z$qcetMbi7e2d?j8F$dlVUPbQs0em;CnBwGy}H)v(I68G-rRf^x<*)J#F>~Qu#98YMH0CBU3ENN-4!{|+G8Lo7#2th_JnyZ z>*Irt^ecCz?{2#PBHeDCY$f6@6ehaF_5DngP-hSPBAQyTuZBi)ZIDzL`VRG@G~qZL zF?yymKkmpQ=z5pAVh10Q$t5WWa{!d9bpyQl;UsN-^qC5PSZhMfdOO$!iyn!WNV=WJ z3FqNE#4Z#~z6o}Jt&3hF&h7j4Uvs1<`)C>lhVDjEL{Ry+Rq2KU+hWkWBh<;nnq|7* z`iX4^8cyVEsM;#|ZS`2z;@w}E+A}1`r+@&W`L8sGc`eLvKdFnYvFCg)vD$UepM3=y zy!{C~`Gf#xcd1DYQe_Zc6RPcDeT|wd6f1gh|Cn}R9Q}Te;;*3I$Q%$ol0bhsxJ{E? z`V@ES%CwG+L2Q?@QMHWy{nox~73w(ZU)$>J_wSKGNtiD0(Vfo7;5_+e_)AzZTqhxyLa&;TW zMVn?~{XsYAEqPf&oDat43q!Ijjtja^LAtIqh0eC;9i$OiOy^~Z7(N_}mym#lhzI94 zCm!o0q-$|`2V&2KPefC2XI#q`i1>Ve3vavQ9+n5 zGvP^v>M<3w$6*H~#=n(JZ)ix;)Y4+e(r$#>dsSiuOW%r8Vj$uIm^MN`U4-2eWLSCr zF#y7U3Mv~oCi8a8E7LrvesMDNhSQVu!$ZDdbixm8Kzhm|d<|g_@2Yui)!~nt@PvVM z{{QwDqB`Kcs=h;PDs=aDvmvKCIGb#$asC1GeJlyd;336s{IK4$%>6d+uBE~yxLeKp zZI(01NJeDgVyWb73qPMN(t_9l{(8x`pRriFADllDKmhh*G2*mWkktVLOK_+ed|RIu zm`TT7iR2wD+vxItd3Nd}v*oN?3xwDW5s<0PGyT>e=MV@Y<{L$2Wf4|Z)^Wh--wfUUfHi+=w-4><`0{1E$KTQ8VR5`Zuq*ty~Iy%wX|T8KK1 z^L=h%Vdxs@#fzc<30GIn__(-yF|qJxf}YoBDpO!Nr~UZ?cqthfln5f;k82H$jhaaD zKmaU0H91*EQ7#|?F?n~_lJ@1x+O0&H!PD0Zlm&S6Up}|sHA9}kJ&-5@RKK`UdvH2g zwg+bJEUb7x^>Vh^wF;Ba4QUDpQA!pAhsv$8I)c%aP{N*-L{?T-=GWFjCrfpt5zvXU zJ3Bk&9&PS{YMG19rAim?qzpVBZoV@_YSNVw$x;Xs2=mC!t}j?5Kn#fFVG`I8P{D~n z9dWR+6$m#b1p7iRFek-y#$NE>{Im6 zDR}=AUf4^&<7sdouEsAz{~qi!TuYP&pTU=%Gdi8pui)LmB2jb1U66~5zk^9Wbnqw1 zOGQE=P~DKMDCz9m`P{2Dn#H3sfSWL?A|(4K!X`xl7?^cId zGVGjhD%7sFP&*}ekVej3<-#FdCEowDesOr_J=;I1gs-FkCG6GP@9hMzZh)%B$cJBR z%8SK7D*g;Vyf&S>T#mQVGBsRR;%Y;X|NMbuOV2}^X7MV=YO-rGadr0jOeWOa|1Ex& zR7g7`;CI3W#yj}6dk;H;e?ky?SkZ>OmX6~2B_W+tDfWx0xWrd8uKSB|v}Cn$kDya6 zX35Wxe{WQu1H>Nkx5KHtRLR=QHAtR=CD9}5q#u^u>jK`=}Fxo 
z8&|uuFL~V0RhHBM+0_dcm)=Ru;`%q*ZddZ=0b8d>9XT3z*~)vp27oNUi3(45-ST{AY5AJ9oYPXp7}ZlzJGYcucx3E!h-PCbMO?qwNM+d);n?16 zEC^LuM`ePu3YiH=lSR3$HbM2gSX9KbpELPp|HaOmFh`=<*v!*v4sQP;gr`kao>}3` z#?JmGClK+&(!#L5FIe9StF}ayJxLg{>x z@Jz9c4l1&By|1~2d46GK3>ZvC!@v;n6d8Fd@{@J8a5F<zy@o(RfXkmh zMpJdTY5reVXBij8`u1^AKoIGYmPSNCq!E_xu9c9IPU!|gP&%bMSGqx#Qo5u&7Nr}O zh6VN+|K}W!=h@G^m>0XVJ3BM?bzj%@{rzsh;yL`*b&*ERnMrEe)j%dvPHtF1sCY8O z1G$@kPcK3BJIc32i}DZKlXf$XZN-kBSV>SlD=*4NRxV*kh9Fy}yb_Dp=&rg)?4gyr z#c!+Peu_3qsfjd36pPt$^`f07xC6MHlOND$0mxb zWeJ&8a)_6FBn$xpAL@S;E3^59P}#-!%NR%5tYCGT*SvX;L_U4A5EzPvC~MRpJo5DdEw{4Q)NX)|sZ~2wSV$;33zlezPk*8RGTtB>O7`YE)E+=A?Fg z-Dvt*!ByfO<*LB)xh<{OrsrL+)*T+2SUxQI%>aDmAdEIL5q|w{1{W~&L$x!4_i7r{ zmQB3mCm}N^7*^hTiw;f6m~!^?OblxBI~SU3aF2#YMMd!nmC_g)#u;~fEb+J;|Cq6m zv6npAD3kGKFoDd&cs8tcDs1$tx#}Mn9&pcd8K&k0$zo*m;B23@hZUk@;CmZ+J!a4M z?NalgD3Nvu8dhMnlon^pnB=H)K3!N}^u|>c%dK0C>NZ&u zCnP0})6DR#MuH7_fQ-Er8Ht(0<@VV(!nZ3cat=>sYaAOW&;POEfAeVwuo;40Z=;>G0X%-N z`+{;fjaM?Y&$UW_cXfbNFO~|ZV9o>3KUf})7&8p*a1|Kz+$ZagE7yi8iQS*aY>PT? 
zuvk=)z31<3HuAeS7?8R68o|aw2AT$L!r|TT&?qyOAvsE~h7Lslur)oLfWldMg_%BcO<3T_bA@kT6 zM=M=?bjZL9UEvJ3#&=4FE}m(Y@9oV}Qqr6EStc~WLh@WM4KQY-rVRAm$)`@eudH1l zcAx$&vIK}SRI(4l^`GPQjF007O{kKV<_W+L5XKd%(LHUKv=)H8xQC8M<@09*r^uG3edR8SCHg?_4 z`EE$Y4!rQZ5T(hy5tiS!da~Z5zP&FWapHowJrc~O`TeuI3Gh(j#`q%9&COydS9kj) z7iOqW746!^hN6XH(i8wm>+AKAu{rrj9htG)P*OGDBM)or=&cl#leZ-~-(87bwt%ZuJvq9i;O&)M16Vfgs8p5x`&ak1cJUF&toIticN9@VVglQ`=Y$XHZfp}O@F zpXJD~!y{Jd6Q3EOqiz+$d%~86F2e?TXUol#q|tNR{hwCKhH?#l2Q7O-S$VnV;W75$ zsc=?2Vma6|BUMI4Ds-6WOcnjZJXgBH}M$^dKobv@dq-4YmIyx-|`6GcXV*5`n`y{ zXr!osq={8=55VV#Ufb?(9Q4%}G>o6~em9Wux1tTen+*Vm&dlWG8M`}h zOib>*UY%dR zHo+lS6?NZOZW;u?x2qd|Swepqz$gMtE5 zi;>T@vU4t#nYr}hTkS=dE=QjqLE+=&T~bRks^rO?2E`rGc{ zl%U8LeyCwdq9R|?J=`9Vs9{x{N4}>%*!`rEaMj|Q-3FP^;vn}Q< z;=)}})s+v3&ckp?;pwy7p3!C~^n|y-2-pt*gyTD# zZ*xvpg#?*C00iM@sxo;TKs8?rm|n0dDEj&e6H~vOG4qf7{~bi`0RiDcC+UL!4f{!%D?5FH9i}GcVSczNk1S!Ht;5WyW~Ib_vQsXHAgT zp*edbXtQzIxv8q9Vg;=3%weZ#BDP!|*YV_;NW=$*k7{yu;np%d3uU1C7k&0xg%3Us z+Zafzk>lw{1q3M=4%YU4400emN5QlOa$0fjq%;qMu~WyQ@ILz zU3;W@U;41y6z{w>MrlFJm1t_Ki|6$?&Rs{t$N zZ{LbCwj7QI@K}HotSch88&1Exnc~bfqid4tSvF@VgC)`nkv8b=BAWm`p&%m>uh5T? 
zlfmV&qg@zTkZ$nt3b}27NW)(S?0C-QX11_7mERmb&JYraEf-+bnLMr7sorcnkl(r_ zW-N|qyTRN8yyt7%Y;U`He_Z7WKoOL0?@4|99jQw(PLx!@=Z(z_1H9Ke2mX0HpiP1WrBsm)eFoYmp+M?m9Uzp|`Vx7KG7_H^^tP1BQU^YH{NUd_^` zK4LWY3#=k1mgoQt_>OI7dg8t0#xW`8K zv4q?TrYGP6KM`^1Pev8A_n^G_*~xu{n39JgjwQIC(1$J4No>svYjKApsc20%u*A{@ zO6aJvC7M(5Fnp*dp3rcW2-mf)v-ZcP5@Jj2K9AOP4biy1F(e+@@4&8Y3bEWziv-%K zy%t#$PGp#Cb+^8W5l8s=cB_$`7W(&u-?~wDNlK=aAlFcH2cql?(_l7*=>*dm#ppQX zlF=XpWdWSR|9+cVmoqY7L#4oZ<9nWvePfD~2l)PfdiLj-zI!o6@_tr$!hBaEkga1(3=#eCCrvWrV5W2RtOY~Z zjcOPw?7{kSM0s-CrJQI|IuboB+8o1*9ZI|{dn|XW(cX7V)zk`0`9)e(39HF*X=Mw&hx)YQ8>a2oToW44$;x@!+-&jIdK)-G|M zF_doBtDTVFb*~15{z{TPC03jey>n#Ie;9ncBteXQa_uf9AJS|8$~#=iG@=htk^B*bnGX(y0|$Xq?2^A2Sm)e^(U{P$y8bBX4r*QyQqH&szHh-VmeprX(#we zo|~b}me)a{#ZEe5XJa+PV-OlY(Ha$gwY8-Ni)aFhII*C9rD--*Z;SA+9?OjzlXFsX z^YyM+krbc+0BCm8y;+H#$U>Y>gmb`A#0y;$x|$IkKs_&m02D{_2QPLv;N3L_Vx<9@ zx1ubPFx0fFu6w!Iz#ybg#kWCJcW;U`%ddU@JT>t$WVpxZ>?dB&>5NCbh8OtF;BX&p zRbsmEf%w%GO;$hO{>mv zYcDZ)U-OmryyAwUyb0~Jbe27vhV##zD5VwFdCQdm-F|kfeD$H@3!QtLTS#KHhk6nu znWtn4e2q;e7CtAWqm~j&Jc$2+4#7vh@|t+P#iGdsD#DC zZ04D(I091XaK99>EL%26!tA8x)1f2%X{{Rd&5E@Cge7_RZZj91GOTt9@#Y$vsfcf* z$%^zaEp*2acJ!cA+`tWJC*7&`^Yg{A3lH+g=W^!)+fpn(! 
z?;(fo)axY_tPFXRrVjLW`QpC(+Fqxm^TxC?oW?JCU$^#ED1J!I zwEBuRyuyuMH-Lc$)sJ|w2r*Drz#Rm>)pu5e65yyOz2dY-+OUV;A9rh=Zp~5}&HyDF zWLl&)mCqR};rhA0`?#3PruM9wip#LAIagEwy2L4zVZR*u7R_O?=_1_c?EZ5d(Q;3Y zG=pTJWP>k8cy*lTG-q1l8c4Hnf}|UBFq)kOC}AmMB>F}=Boi^^MX|OYF&a{>i&a@8 zh&zQvOQ+8_+URsc66q4xQ?2RpQ0VY2unt#E1aDVD+OlNb&LO}BefBL|yKQTB?l@oa z$z!8Vv1N=k)2be>TieQWI818``5uV8J<2+nPjk>F!O=ly8mPRz?(L}c-O<*nn?Q6P z50~ulm%o`vt~%kkz0+5YOVBCqyqqr3Po^-K_D8yn$AW5FjXW4eDlc6l3_o?^Y;h%3 zWq613U-0R1clPNIVOBlN@~s>^O6*rquo^9ENM6CA;?H)Qj}u6{NPfj04Emb89?Iyi zshymdNMdcvd6^_@JX@zg4a4>Nu@WZA;4)V$` z*NB-8z~;K68ZX`$(&YEF^44*nry_1+WpI&CvvNHywWVIyBn%Q1g0&`Eq7VX3%1uQekZkM71_G+*Lw5zx$u zZ;ln(@{$1D?O=-e{v!@g_+8>}{+$gHfva)~FP_B~m$HBzk1Pl{wxGyZ8Bg3@$-1V* z77ZDNXOs}hs&I{j)uhJ=lO0u^*(xD$_dCUe1AYS-&vCB~YxqU{#PNOR6i_pr$KOVh zzrw^gOYW7=$??{{KSY!D9NJ+Ldy znF@)bsb4Hw;;rCL(p#Qsr=!bju9x7SNo;7Yc$1^P@#EaC2+}Sm9GTgonE1V9XT%(C z;b?2RshJ%fQtxI}7!=|-=~jCM+h&4@=VvC&LqJVuCI@p>@2z!QvmDO^PO|)wdQV)e z_(;iTXmTzXJ+e92d;zS}aO zyj8#cEvg#G770s^`fwhS4!7wu3;2?(UBl)bN_deDTi&))%fji;u2u7eb;1~Ze}VSR(<9iHQ1AQpuM;T9(>j@9hk;!~FW z+(NGyj~K->+n`uAv2{2>b2YaDYht;`l^M5PZ%JWzRKzC#2qz@d0KpsNM#OJe&*=K< zLxDq!T#j*$3$-49l;HX!MoCk$gwD3_%--^e+6lsfGs<$^ew2FPiB^~YTi=VW|;Q*bp3E!T)U{$((`DIhY>ktka zr6GYqmXBuToscc(;z7xnKYuF`zLKJv>Ui3j5t~^YC5e3$lqAt`BqVN)${w z>zE&Q@XjzpSr^5gc#A+L;-2lEz;uH6S7wOwo&zODZ$lrm>JS$&CBKd0Hpz1nO$s`U zql;2py?~Wc+&JyEfTeB~o9U5*{V(zB=}yH?;-nTQZN5jL2^g`^pM-Qo|336ysqYlO z?05{-IM4GQ*F%LDA(_^Oi6(tcbPD)mT85_W83xc`^)NnoKbd2Zn|t@H7FIwkb^D8R zRdZ5ma(QDnNm+Lp+EazYu;>uGjR*qT`cp5&?Z7Yyf(^RKPjr*hL!9sB-OSFy>`6}l z)$Y9OL2dUq>P228{rHhCnf9#;4{nNMz1(x>+c3SK`K>aC;(<~Y20rBGkW&o?tbOP~ zCFl{e8t@0@7$7dH^wpCt>IWIh;c-OTP66-EwsLlu8p3i@EtN+~&RqiGNn9^>JLJJr6y+r&ZBU-|LyG^J9DBewG zXaaH6k6H_)^6#heLFJyw<{J#yjV;ZODN8*eg=qIiEN`@rz- zMUV2_kDEKCR>o4&1C@uDNBvP&=N9N2Y_*NK^PO2*;&dyW*=*rjUB#S?(0@+Hh!RDB zQ$0x55`(6hjEYc6tFDd+pUg@hi+Y+N-p+y$kX%snZD(h<)bDl)u)y{#z3pbhcibdC z|Bo~H6jFOT})O9Nac&P-7Nq9INKt4ezC2X}XMsi`?WXJqKq@Q@6Gm*xCrd-eNj24pki z4K340@yL7w%;hBMSh$&(68nK1E2kgs2K&6t&nyo%BCEy2A7IUZ 
z7~%k}2unkD$nq>J9ohf4a1qUJLY345Ec!q>=lH4}19U0q9M+vV8kgLj{* z_YTn)`6ATity%j7%lAL{S(UkTn0-Ilz4>_p@$T3Rd_g{QcLjzLToWbmVW{n}U0H;U zkV&Z!i@h}&P?%D>Il~x}#^_Z<3dSIcY0?h97qp4l0{OxJNx$(pKfJRHC7L2GvaPs&e8u`GO)YITz}S=r?$`GH*)yMB z&$I#N$}Fy?Q0m4P8SS}(ZeD_y7YxwlYw4@KmV{cQ=;wjF=Cf{!N!@C*gXX;c&g%FZ zUtAH{0!GG@zG^d^9XaJKiJKQ(gJxVW?z<6i<6zPMktzMUZ?YBcI+;KzW7rkSP4AUj zo=K24M59mPJmq!BBZpOkAJXIsxNu_OhsJbr!p0k-(E5%J+90O&+$Sn3c@v+Aam~ktQwSm#IY`5Y?xL}Hr)u(p5k=!XnM)@fw1Mwg?#Z=l?{usuk0jJ zo_i7AQk3jfZu&71Sl?vymnGYMCubcWqH#}K+O6p}nilx>Q*P_sS9NGJ*`?oD{rwEe z36x*j0~V=}#MH8&(z4X2;FGHn&iFjrl46f|kQ|>7Cv)Aq_;~nBB|VC@)hVCywWZRjwdUw{^^ vxqsWp4M$5;n>8!uQ6^{rWt*ykZxne?&>AY7_V`@p4)BtbQkE9zKk1Wo0##WM%0!T%BIq+FPNau_rntw5!NylJ>RVVKYD`2)Um^ogY2vv9<`I zd`8GTcTkkAJ|(qxL|Ap)5 z@^4<`ujdR$CTL;#7Hr=YPA|jc7S1v7=`JQB%%?b`Y@o0KGGDvv(#7tg6Gx4`IUJ+@ zqV9%;Owsa~*kQWcJtv>6t>**z{*_&6m9!m?8T&+N@(>xnEsoib+E(mC)Jcn}d`%8M zsP*(WtT}R7e;QCS@9qood1%bK@s8)K?f^$b9#h>y`!5rLXz>H9O4h*ztOym^#5dWx07VlQ`MlC zb#k?$7v|yP;bVk7qNk^ax>~*#*Lo)ZAM%GUNk$uYcV}^4UN0{%9xp*2Cs%7;elamI zUOoX{0RirZ9Ncc+j_&3NZbvt!e+&7qa-LbaS-9FdyW2WB(*G^j{FRf3yCfsy--`bG z_isC`5VrqQlcU>zy7kaO-oGil{5*WT|2Hys+t>d;WPelsCHtpe|5gY6TbQ`Eo0Y4q zlY@hmqdV+>m<#=i?bcf2Gv>A4*X^ zf&WeUzf%4e<=-xdtGn7hwA1`A4`C0r{eR;AlOM|am%;yw;s1{2Kd}$P33~+P{cq8L zJ+hVgF@uICg{JgO`XvH=HwzT_S}yfKePe&X`#ltok~x|lI}%e@S5AZUv4)wh@nd&m zqv@Yd3o0pepBbO-eIzno63qV`D#!eVo-*9r%+>7Z`NsbFeO6QP^_j@6--4thfW&dc z*Uw2iOMK%;$(I)%oC{yb>F)FQ!^43-Cy5GW_4VJs$I&^^?(Nl^cLPgHO89FWcQ&_C z=YTVbj(U6B`lgQtc5m!!M@Q8RzquAU{BE2uFf?=+e$Ov#HAqS==2}=+S2z1-B*WqI zX#Lm3gt>=@hp_jNCAEZSnYOmJ*qzloyFHtRkQ}c_4=*o650ARK=(Auf+#Z#3)ePYg z`521j#%1e;CXZjgf7_!Z%)2nD;f|Td80{+QJRXgdg0E+6*3tcu0n1%sYH(+1pP@AF zdYcKxQsP!k3;Fq!k{4OJHp{;%Ob;%D*)FnSeoa58rzOo%Fi}xC ze6s0s($!5rj-a9Q+2v2~>;yq}Pa;9^YJ+Oas12{AHO=W6Gach~@8E`?rMeP(3FNl@ zfX4GVJ6>wIe@lBqZLOYzN36qa9a^|R%SJ{RR@gii-egt8u4||C-Da2~FzJH~vsyaF~S7+ytdIX3^ z%9*T1K{awdJ7owvxOyi8vlF=gGNAZQ!=%P~>AS#IozC+*)A{-%Jgd-rv7l7{6S&t z=k}85Cz#R*6*$UXBw zTSh3(6}A7I#Q&kRExaymTGyc2hc{MZO)q 
zA}?dSv_L+V41T=(AUG$^?vLM!OeT>Z;oNtSDBOqM1=Cr$Rlk-_?NJO!x zy~Ywod={sv<&fKKbrBE9d9UIA?reqxH?(-@Y+3@5C}#ECxIbOY7x96sVl7Dg1zX!6 z_)&tQhmy1oSH+eWG6*!=bV_YMtatd8U$y3n@yh2an`&8pDOu#Zgz;eaU@Di61$1bg{^}zl zEo9!W`vpiVb%#BFDDrGZO+v%4Jjq0sEji!t?SR!M;UBZJ=Gq`ScSVs{*SpnteFej2 zT6ZKmPfcHuuc4u#(IrQ0$~)i!6?>i1H|-tM20~9I^k;C^#|eb8&uH&N%XnO)2&%S;zMd{>`-KcF5bl3BB%5T#cYYs zq91nI+ndBwGuDt+ng!KlS8;JK1;@K7e6KPF6`f;G>yjD<4QEN-qGb|XMnjXDbWRFx zLaedQMF+9zN;W;dNOWjL%%wzp#3R2^FhCxzVuz5auZ`5G93pYxsE!0+i??2S>(OMn zuvF9cEAa!fQ_k!!+8a$T?>M+m#2?3zM!J0%G~H=z)&{n^2ZfP&Gv1>*c)JFe{DDk+ zRL&YnB|@~84R^_fe6>&ZIjSi?zI>aPXe&mE&+N2vIkD&ht9)T-t82xksNEVr^rb4| z+oSvu4;Id!GLwl#rR7$HY^^Cp=bpu)n`jDDbDG6wHcWV|E)(HQ@Y<7)WyU1&*`7up z2qHz1oqYLui}n4Lod;AZytT(VTQfjlx$-o+L)cXI&f{OM zdmm1maONbNMS($+BqBn7TU#k>Ejn+Kqv@n!)6`vS6Wt8ExPpN@4sj)N)rBGf;jfG@ zjnl)nE($Fo`@-5f4}3YeZ!v`!%iBu>wn!aVSv%F6-fX$5^WvtnT_^#d;@^r|sua|;T*}=4)iUbVjk;tS3hY zQ9fYKyWKTnMV9IGfuKR&oKK`_jgysaGhTkClcIjdAI1dtTGZY?)*t^v^^vNzRrhix zO{31gF2g`~gwj&-+MDvgm!|USYJ6eB=O0berqVXlA#xM_CL3h z>a~yxvdJ%?w;jj%Cf{}#0qxhdrNbHOizq|Lf5Qj?@R$MAFPPLXLO|I!9gnwGaWW&U zUHWXfZ!562mC8nus3gQ*3RF*H`CB+w8&Iqsq1)S#Mt@AtKb(IWy$@X!!37{g0 z?eP{)2s|dK<4EhPEI~3w)23FU%+!UUpym~qr%(a=IfOMYF3Bq0M zq!f>A3Cp>_OumAfIhTdR-j}=@A|<)zdvij(-0QbU%$r^UQx#cr|3?Q8`C7q;}owwkuA{CVY$_& zb6S%a6u5A~L=qa)IMe#A%{EkEi-LHr+3KRJC**nvQqip5dlmU__nH}fa}QC4Q&34j zVl0_xzmlz&$*Cvl%qIJP4>YYvT~RgIGX6@-Y*&|WP+|P+$(tW`Y&glwLd6F4sky^8 zF>m(k)l?E43~yLtpwnyeLYR^7At+lL;68Ub^;%)wd{^rR1D_=FLg|9Wp zI>dZ3H&lb>=I8+V&b076SeI6&Dl&H}rZJb+cg+PvM@p-mU|M_>|LWN{`3NE?K>cmk zG6yMhtr4ZQpvhPJqLfVDUi5gE4ifTZB<}Cf-d~Z>v5t+)P)8Kuau5ktbPEzoP7PWK z&FdCKFyY_Op9kg*y99}>bW-y55D`E`yQ0p$dO7${HPzNJ9b3^!;slQLB~$8yOY)I+ z6ZRC#D>H}EY&p{jB#21d%tL0H3%-Oa-1yo=_)TLV3-soYt~M|zqTh~g)cXY*K+LG{ zVP;|8bp#C$xb-KV{17Ybd_*7=35fIZuX@a47S)Ze;9cCbDc!u6qxFqIxFC_o)H6i2 zd81<@tWf%A{U5Ot@~whVg=?HlO|s>&?xgV1NYCefpITZRtQ>eJ2^A2Hf##+7S0N|k zc@D^6Q1--GE*1?R(oW$%k}kxhD>A}=)4`@tIPD6u5=`JA;tU{J?zGNkZ9G)aTO;y| zxzUvZvbd zJ%(%PcZJ5cV*amO)v`H7t_FCH|@x}Na*108&{nQ 
zeSF*TcPlmyv02`0L@=ceo$0!r=`C!9JsO>q7 z_F8edV2ZieyZ{KV1bR?fS<%{huLSm?0~~KqeS(la2|nn9^le_0WuQQ+Naw_Ko9WrIY$8JL?_p5F%LFfP@(V(SKoYTQE^gu%S!$d-XhaI5#4%s8sAmBxN4vr z#glLb5I2p>XogLX=jwY!^Vgh5vJ|z73Lh0AEedyCV>;|QD}Gk8-t;ux*Z_`ruKN4S z!lH|)fr8EU*^0QKAxzkjAFfL$_+Y#XyFwNJvqD=Tu2D;J{E?bFu54ioB0KE;=NCnV zgJJ1in&h0{f>yYS7PeMx&hyUhLVt?XjbfU<@oEzGheNXq6t=6S8&#&+z?0vEZQ2cKMDkHe+euJh`hr>_CsTICFD4|t?BDgx<UNI^3sp_Rdt>t>z{YmyYsDbfFV@@sQjCyUz!Ru z=3MR>j{&cAux*Tl%ycQsUFnm1I@!jMAW@pTn`SQ3-pl2kY;^nW;Xo`1s-u&ng@o=RFQ^j( z&zc(n^e1(pe`Pzx>^hI@<+)T zAkIyNKNsaY)5jkH>H0<(BOv~b8FW#qisKSR&BN*=sX-+6t3fP<&$wxQkh& zw>hS;Ruc&$QEX$5@8>;4g{@|x5M%J;gN{z}t$z58Ec|Fw2#W+Nhd^Re^+c3mcSBq{ zk_e2vp*Wy&`kx{<%Ypa`25WI458gn$IDU_fgl0n^AYk0}JzjRLbPfWcQW_O}UXR_g zztO*=VZ`B^GOgjfD^^+Ckhd-jR>QB5FJ9BBE!fx6Eh+e6ZuDc;7aHQY4X-4EoI(PI z%Shs**<3`o+xb-P3mZG6X)1%#t!u63sGMPXI<6<`!*dgNfO>^K&L@jX3jP?p;hN2? z#2s=*C^hIo=y4pbSg1eCB4~6)a`YQ!JU>G)`~oL)O*9^`kP{K4lbzl{vI8K%4_QJbR9dfXeYnp&Q!5@cg>bjt zG@E@9`Ao0it2OKlKHGPDV$jf!@5jRvd-nW^*& z2nt=adHku|-lUY|F>!>d8L`J7}j(I zzKt3$FWozFTYxl`=&~OX@oa}hwTxh7Bsy=YjK5v9_^YGTV)eHutoESzN@U1m%=W9K z?3X-UfM0+bRIbt$66;DaQnJfiJTbllU@{phgO9Z{UD8>+25AAv&eFWy%=7PMfQBAn zK^hxhA1(SE>~RR{Skz*!0&%`8x~@Umw7vz3Aea}&l1@5abD*W+ilwAcx+ID$x-fv9 zkLkup=Nr_B<$WQrHSi@p`C1I*JrXb*C}DH>+%}Y}fN~vj!sJ)T-HXNBb2%cJWQ=5m zZibLRQKZ1HwGDjfRTBA3urVimN|HXOAn*q#+Ii*k z&N5=8!r&ylSeUFSLg)rVw`)17b0$O(GUfxL-NSFo_q&v~IH^D3y^8WoL!E7xjZ1e1 zAVbtw_~hTejJhGFdn>$oT*$o|3`AvpOsx1S!yCS+T&*OaB%tCvyJirDrv=HH#N@W= zkr+e7qfj9tVW)#%Nm(ru!ECh4{YkE+U(#ZVP7YT?RgI+wd28Q4VH)elBwrtpK(r8i zlO8YK3B>p92pc+HHHQKE>P@peCPMKEJ|+rVk~fJ>*1Q3jc4&Jf=M)|&QM(?B9!5a! 
zI*GQj67J0SUJV@%Hz#PqDlRWp{5De1)=jsShgyW%P;4KdT5!-7ELPoloz|>$4F$t@`3##7Rn*jy(~LkY4i&`{mSnbX4hNPN zJ1a}VMq{2|_88)q*8+Sd;vW$o6;x+>&e`sBJ48seJ^hB~KTCA+SCTVidC;EG`GOxO{w(qNujWzo<>(L~Uch-r zj?Skfa~X;o1(Ub@sXjZYRU5iyZIelqx=YIyKwr1tjvkY zMVk8hY8En4yeF@JyJS*g)|b6V8T*182YvPY=gx3gP4}CS(D`=DtCiiI9Tjd%WJ`(q z#%ykeKwdB9g|9htlX)IA-Hn4)6GZkqG>m_Bi<5sNRRN8ERzFT-*7-Bb zNA#qxiD1)p>m*)09m{Hgq?rr>w3kH2ie~4jQlFTr6>NPt9gH`b-8r1gug(?_w1Cen zguUJ54kFujg#;-|G3R$crJ~5|5X!Rg32Dh2u&^eo*g6bmsWDRzx-DPlXP+df6`U-d zDK)O-{P4u>CCxU##T~gV{#d~hDE0ZK)r!(d%1LsK)&AVMdtd-%#g}6auF#-s z0;`+l^G>oZ?QZ|3^j`53@ZEcx z9>8xI~?>3!OTE)M$oE18J5Nk`)uTD?DZF!633_PfJy zU(~A8$}l{#7TcKk*6+Tm?tMXH1Fzs@+OFPc%GqT6oY3L;1l`HD2>| zW^~9`7InVaSKK%`U6mXkm~viAt^AX|vLSJEumXf46O}7hs-x z2+W*-nJh{V`*m)*j954Uz3U{$+2`lUo?KUjn{1Uig+@DDaEq(_{VEdSkHUyyIV_t=E)l+3^YAxxMFIhp{@w#D`C#rL{)K zcPDf;`tIT{dHlUUo8;(9#;fq5KR$lboLxrTx5l*YuqlR3i6_azk!v8{-Qe*4-AkV` z%$HaNXKx+w)L98n(BXLKUr~Ps8qy4`NS_(;hTj#w2Tm1QB!5v*x}B*S*(oHd|1_viIgax19YB*3R5ci^Dhr$I#Vy79 zeW&{(YtxMhFT`a-+l7J)Z@2)=Jr?qC(81)VqSErFl)RLr+&*i2ns9Rj>{NTdm{49) z8Qjyu==72zstS94uRq}uqIT(0%y&ctT>7QQ6q=e`+ge&DRvt*$EfG$A zbA6hP-uCka)LF0P6+yl74-%Ul~}jL znv)`%5J3aKtB5Di+*Uw8kkn5#>TdfP3~#sf^D%#r&$JIrP2YPmZdK;>q7_@kFVf8I^kE;)2HXnTn~hO z2hwslU*u7KAD22|puJQM@w`ynu-)FFPGXp<;9q%uorC^;0k3Xh9ChGhlgd8Y)M41v zmL`zE&lrNGcO*?SKSV)Js51~0a^Uh~dl&CJ#aH=*Eg!wY(PG}j8n3pO9nj}2Z~Rt! 
zUjK1L(CUsZfr2q5qpHs*E1z`cLJX&<+$S+A6FPW!(4AEEn<4fpn+`#f9r|+yua!0W zeQ~?flUOKntcY~70gMnu5*x0{qpRBiQ!P!^Elv*I6n_DRq#seme>6&3 zUQGmf+y3byP4_L_GMtp$l;l4QntvBPC+>Ihu+|$`>C?x1@OnBSN30NY3 zGb!?EPKe&=2iaoDi{iPDPwng37d!Q%X~CU<-)r;4$gM)h`;Xv%kHo412Dhz>p9lFsYtZolYBYbLe?{I(=>r z+EGdf&_u`oakjO=A&ZFk^19zf7hof5oRQxr|@!OPz6CYF5fUQXrdygaRsgD+MzR@jd^Er=b;;b$wIAoAm4h z!rwdnZMkoM!eSWD5-$NqIUxN=&>c_ioeaiU7-q}U67$28*E75%CF`m3MFwMxik>() z@8YctQe@l-U3lNJ|Gd*o&0q9{Uo@;Y?2%r7`b{=B|AMud)bDM*RyyvsW~rY*kp_gn zPcVW$U6|aUg^ZY;8TCY|Bjb(IRLpB7h(_OT3-H&0?=;KVYR*m1*UP!--D~(wUhB2w zJv)KzVQae(Z6}iFcGQ2wZtt?MMGEi&zwz;R$6TsVKlXZLK zy?H<@zPEjHzGw|;K9M{)bho}4S(>39^C8hmSaJV@S3!=TH9Q6f7a#%Tt>GgNSpY+RF zPh@ScN~0pAk*Q^+M#NjLvYe6R`YUV0J8I<)zOCyodQo^sD%+9*M8%T#8b|xjGNDnbM~A)FW-Q}5?x2kNKrZHS!S_SYmiMyHO5d%6@slQ|uXmf{Ter2X=Tdcz zt?qqx$~{kxbDSWz{SkX<)K2y7i(VDQ@*8R&p|d&_CVQpqn86|F4yE;1)nQS4ZGx{y zYm0OI4VtpRbJe|E9cLHoJfDgt z^U0B#&3?OG(x&V=CV4&+ermu?fTenrE@25#qB3t`d_5G^GFPX_+%=1SW@Id`jpTWec+LJKP;w{m)QR zSeBuRdDoxlQ=Wwl^3@XOG^_aYPk}PhRitX7{scNJ=YWtE9MI@ZJOLpB(HZ8-90OgB z7X{5M25{nd`b=$uO|$*Cd`e8W61DE8A%mDiQOQOx^yoe!Rxo%Vx-H~?)y%Fk{hWEN zuBI+w6Yo~vn+#F&r0eu2`foBCgyt8hZrV@FooYPB$_{65rs}x2vkrW%sq|nqb3E-g zuGc%J_wTek9z;R>x+ht!-m7)TSoqBk_W8n~;|TZ-k zD~#vc$pQ=&c7A`Eb6@4;yGkv<^Ko~OtD-i9K)k||!f&_K_a2U-f?DrQVP+&kHUSsQ5qpiRapzoi z=koypKF1kL_Pjo`L%WkIJT69S2L+JZ&kJ5lerMm*1LjztzegI*ZSL&1o|{}x2i)o| z?R$n@%qf(To;>wh^e*w4$Ecsbyl+ubX=BaRnkQcHFZa z3zux>;MC=}NZXd`S-42H-N<%>Wcl{glCgW$1`8g3Tdv8HhBbHjh`-6ev0@Hv?YqDU zsVu7|Gdpu7NRb|X51rXPt8ZJp`!orhlU^PH|DtS((3Dmn1o#k6=KhJfdD!v`*%eZBNP_iA*vq?qVq?WZR&dbAgmgS zX6mU@S*-7zDIS9*PZIq0YjWx;d@redl7Vr0SY@X(;88&ra4!esp zKi>ChpFa#B`)GQptR4^)cei-nX5D^&d$4B>zyD^8IFxftsO;UWx{RN<8}%;t+wfVc zYxlk!^0iv$CO5WOWsPY#<`G@OkF|*N3(9icRw^Z>Xq@8lJNFSXb99%rZ`I*4@fqFOeUa1F6(@Scp3!^`^;te-yEyjP$lej1 zaTS&cPj}jRRis$mhx{mHTitr<;MCB5f88)RPR%u&vjH?oMcf^m-mUoFo@~Ht1Wj+> z99;L~`cWK-?uYf+Ue64i@j?*o;4=}2#V6Fvs$^eq-MhhWQ>H$R819~Kv~{*!2HXub zEJ&W9n6lGLwHE`}=91mB*ETmG7Q8-}OLLEP@0we$%4&88!qjKKIr)a%=G@=$HlNnR 
zXHMHM>x@xpBJ&R_e780IHhJ*6&-0IdMg5Nd4UDdx=7H!k;{KswN6Sj$uS@m1u&j|0KBOdroSRoi<<6lK7z@xn zB+diuM3&=zoaw7`80I4!QsS9bmht#(H(433?~}2cbLVa6t)9CsytPd6&`CA;o{)sf z{HHtZn&UDJH_u?{ZFCTw<z?i8YO6cL50IZZ+WX=$C$mk9$joE;Y65)(`L= z*s^;+L&OZ^Daxp9_U%=$+o)LImpUu*WsU&U;RXD(Ta zEcg~=<#w-9Fqz(u-Pf|Hr1k450xS?V({+s?8s}U@MMF0j-t}j6Vg7LJgWz;`PtgKH zw`E97Vun+jTqy&@eRVI2jVuBE`eviu?A#`7O96gW<&efdJ~_D!e4c@#d*b~t{X~?7 zRV;#aDd(q19}UO}_gv z6`k!lYy9;w7liz?eLgq>eC5BPAk^8tt^Tl`{V>3FEPVE|nBC}Mkqm?fk(ex`S(yHd zm&-;$is)fPA^8|E4k&0VXownA$cDe=9YHtLyz6I`!{LII3OZEl3x$i<=HhbGEQg{~ z-$;WM5dReVpdhbR=mll2^sN=Z0^Ost@g-AdljSc1PBKdFj;T!>cKk1xB>ud&*lqJU zaJwCQB3z2Mxo&#@N93#QMAg@ZQ%e!n27@{qKJRH`uUW*%enG*O#CT_3!w35#n6x}z zgP1FfQq)X_7ceA|3F!6`MF`u`k8!GH0FT?43xb5Vbe|YHHoQUMWnaci?pyPT3yl}6 zto&B?)u#c3AAqVG3PxmoFMT4y)=;dve8c zMRG-5&zeY=U7&7ZI2mSuzCuBX1w)qMN*kS+ow^5MQ zuGOczRPL$%xpwt@Zc~2}GA{AAPANfi9AYHo8^dCzAvx4;7cO7_G_<#N>T0C%7{7|Q znV+w=8oHq&*dV_&!Tnk)aXO6(Kg2feA^(~n#Smm{h|x{w5TRwYbg&!&*&iy6p!Iy; z3zkKb>nD*@28CeYv~!X4M#z54vka13Uc<*()NJVifc6gODw@w1eI(RUn68b)s4-M; zkRWn+7wVXa+gzG!F)t|k0$Gaq>v=q^P!Bt$0RZt>>&YOn1lph86$cHdk0X6*w7l%-7jn_}x~X=Bcg#-ReKtEm z_JeCmo+lFuN;wTS^V5E3t_}s_8l}V_TK8PMOuN0*3g0kl^pO$6+IOy%v4mN5`V86r$uITdzv-)})+ACmqv4$bB-OptIYnT!i85^|A z1eV4NKkzZnI@kAX*7{PpMY=(kYj*xCQ^uyPFNTE$&Yn2_3@@B!me~zmxZX0;40_|H}4k)T>3-aCX0vH{kM3 z`c$D3$sZs z0|68z<@3R@|JCDdBL~2P5n+OGGHXuh=o|pg%2zIkdJ(zt0^34v$I^i-L;>+m#dfo1t9ZcrUj=c>!nd0?oJ*%*-CKqy z2siK=P{`5$0vUMMz3qJ(mZ4& zTXG)Eu>y+ICir>=fM4i+|C429e}-0r(e3#p27j`8E#S9HpO~>8%YGDFHfr4^hU9gl zs9w}^S45+puyqcfVKdTW>%m$l2R>|*S|XZ~o+O_yu!s&mV6n2ro=_3M`P^sjrX+ll zlM9%1xf5Fu#OlQ>W26OvYbq#$gs{&fU4~Q(xOy#rt@%n>79_U@>6KGx9>8yxlY;f( z3eK;b0ajoz1@PqYCbroJ=qrIoqVBehoWO5}otrn2z;XLlbLRo($8W=((sBbVf(Koq z$S#7|Xb|nW&qDNBi-mVGF_%RnhHOBlp@#rq0U*;}UAeK5SlGegi#Ghb?nqD&dHpji zSUpuIx$Tx0f;jz*u?Sbc0|Xyw(tl#3kQ1C^nqMOLSm5)_t7l6}dj0qxTnm`Z$C>ae z?IF>$SZji)nvoI{QKLoDmpW=G;Q4;+4TD$Cz40e&;C8w@0;qsh2BkMs_7Ntqsj(k$ z*k7V=wI`rql!D`|dDijb>qRG|5{U@{k{lSZqRbaE<=by2T4|X!=TpL6m0y)(Dn|5>%^BrE;@n%7M 
z(?n-S&fZH3X7*1HSaEN0JS-h%4V2rTVHc~>$5DT;zN}(EeDg9je;;p*ookbK<;Jhh zDcn$q)A;sO6$Pq=X_Mel#6ZlDL|B)9VJ+UW=p_4tRT_>w%`58vw4@nx{B-?PGNh-_ zbSe}e?o&dbRx_6K!OK)Dy$urp*d)V$#sB<$C&6%>ab+gaCVa$8%n;(u=V0|+Uy&yL;c0b z2;Rvfy4p@^*GE+2%k~7mN8MxKY$UaJGhPjf9k5M)dpX^H=RsXjg*MIo;(H3naE9Gt zTk)pf`sDnOnJbLz@2WqYg0?3~8%Y7YXD6lsc&7lpK3W)|Mj!PgtHZWxtjna#0?{HK zypHeK=4JAgT|tLvFDUrzAN(;)nyF%SuWrmW#vU1Gr({D$E! z0H|{)mI>Zt$1A-?=R7V^QrhnX56j_;Mj4EMy%EKbofaBjHIT1{ERVSEw3T^J8NyOxt)|_3wC)&?l<0F9RwK4-zavRi`5cw&o&N!^yY8H&>>=qdWNnJ zy1qc#g1}Pz%Vmo3nj=1^Md-aa7r;Nw&!Tc{+K?YpZ_g||IzymCS{xND7z2(;ZwW$! zhJ*Mz!ZC2+<`j~{mF5~|KN5fdAUqr!lGbGt4D)r>g_W`*gUL|;-Ob$r;!Ozzehx5Z zG8NE-$E~|>+Q679fg@4$TSl>pxgtB!7T$|IATe`cc?FFwI*3UZTg5U$%-EQ zJFe&H9iAWdtx=F`M~lZn(oi7Cu*$jYZ4aj$1;OqP%ihDD<;`#4@{K5ofbAr+={K`5 ztPl)YjNd&W>1;wrJTZhi(A=RxoN&zLOk4)qhBwO_(N$BHflwv`b(;+Gp~)u=@frqc z6-8LrPb>=F4BZ|)%M+-$K>hSu8%L4BB$n@6Dt{>}OzbdPIN9005M)S6ZaG6kS5VRWX1?zF& z%!?3r#CtUA*VFkem$LM6*;^^{?$;w9pSDTHIOxSop1ofR8C6Kh1v?V7S`ZS!?fyK! z7bU-b#jhxAlZweyBwfmj{1efhIdl^6ZIiL(uR;Ex;VLjGB)(_oJo@(A9H3Ne8iYdA zlwB_k@LPhDk*ho?dF=Oqz?nOK0KerlxGGfA28St4QO7a#to@sPe@a9qn$A{oK?Obl z6TY0Oy1e@z_pOyh56blDeg*nmdb6pWA9!zk(CvMM7Co>#7g z9qGuQN)i|Z-P6wK(l|}*5eWvizBq_w?!OwRI8|zn^a1`Afq!2q9-@Z_03MVfLivLd zr6^oF%-AoGrZVQpF$ot@yf8i^l!XvlJby>7C>|jSQ*BZTmxcmt{vMOo%%umtFPYC7 z9hXLz?x+XLZ>0mrfAe2&>92laMvuKy2KnP*TWCbaD6ir$Rqpz#wc|jw+fSNSUKk4fy9>%a!z`GBU-(Y z+wH6}_sy@aq#;6COP&lxTH0CuZTBzyyPkpyGHxw%ZA?FNF)bsFfbTvx)sN13l$*z9 zvma1(iu8*H5}+%WxHlE4n1-;#nijn05Fj<60OyVO%TvHw3N9kMb2|@1qX~?~^me92 zx8F+9W|me=Wt}-kWOJRn;q~VHm!kd59<<4)ob#AOQYEnndb$DO;gZ0x=1?yZQeG8= zFX%Vk_=HN%k4-A6TsgC1TL>J^{wdD=$9}8f3NpY0O`B5ntWp$4Ea^U}M!DL1T%!xd z5a|$Gvc9xSmHi$8Z4xmmrPRJsE^DcSp*Ccnm1Re&JwBhpC1uA0?4_(Q$#oc{p2c#O z4^#VL>A<^sU)JIM9{>PB|Gpe;+#-?F!cb7OJCqi9@-DB)JTF;f7$Od@B;?ocaEIrR zLxwm>C6BfD-N{t~=mjm_3JDYvC?v3JOF+7=M8$f3c9;q*LoZKshtV!uQiJh72^112B(Oas zP)^xx58VCXc(&>F$HVRzwzJ4}AU+0)-yO@Q$WtMKJwyW79Q@o6Ax;_h($l>wquzPh5VQTX^u ztqhh9#RHuT8m*sn)#1y2N~@I>(r>d9sxr3Lz$ld#PD)*2fRsw4@D~y&Bv448kU$}U 
zXG;S8^~}!}?Sf7rfkFa9k$@W_44lI<@pBt)kR&v0mm1iGL4Yxl)-^CPNLsCx3k(vc z;VU8dF)Td9NAj2&7$2mp$&3+>sdWk7e|l&A1Kd^J7)FF+$i%)Z1#N?ith3><$^ZaB z07*naRKeEvbf6=5ftL5e6%r^UP)MMVKp}xb0?&d3ywlC~!14MlkS<6R5-21vGzrx5 z;lCyw{AN!kj)P-(Z&!vZOHjuSLmzd?IO*2)xg=6j41_Mak4UkNgqzqVPsdgg-gCxOsTZ7vy z>4b>BobIX8DhRd=4jO3{1U&qb!IEOP@vjI~%wh$!k^6EW#EZ+q?aUGwZUGHc2?omh zaOn?(f1vo`@;?y%f#Qdo|L2#HAbguuQ=<`^dYsw34$q_gsshpuc-appPCv zANL6;spvUxAF$m+6ob!2P7Z@lR6y=d64+Vle=S?GYOoV9Cd8FruPaIW_ch$W7)b0& ztaBjU`X-3#wHp_taASk?96EP{g;=&z$wa}}&{+@GO7#78rHMGq8Ns`8!ml3B=j-+GKn%fsk4%21%Mt7+i+=jtB?Gb<8|VP^E8j=vDaGbk(j8 zgYbbOeMj9|+uE&mWRq=a)w8BSH*>XH10OuZ{HX4*&MvK(=qw2?ZQs(ZTf*I;FRa$o z>qCcn)`DF2NljPwMOMBmA{Y#({1@1{^$WU$@g!etL5nZ2rw%P zhg>k${OpXv@dF2b;%hQEe-|9_;O^k8jU05HhQe^fg{;Aezi{Ox5b^^eWao8M{=v#G z6LQ-mU`wA&*j$ z9pT~MRiF9#MjoRM@?Ozx5BXOlP(&6I*l`j_ zwY%5CNd7F8V7=1DYxI#annKcBDGaWXe%o>U?JSr>Kfq5Po!~OrS-33tcupjcn#-U_ z=j+a(`yItJ5I7y-kh0P6yUy2-J?UO>H^>Qa94N z8>(zWhZ5BZPRgOPbBj!ndL3|YwvkP>*52H#C>azoUbwxB%re-ltE;tEefgZPt#@>G zSba^M)=ZQh14Xzxw2ceH00W`5wbh>(9AS*aGdgLWC@9K+FSAEl3>AzHj1cgksA4b% zFNEVFE@%)&To^hSA<$r38~jH)=s=fWV`F2_fR*%xt1JQJ8Rh4li)2fABi~9c(&QTs zVK~CP5>Su9Pn}cUV^xrPP+vPZ{O|)V$C2?FJPpL3ukc}QXD+SzrTQOrFXDE@6P|AB$g*`(Q@et>?D z-pfR_%{;mK94?Y=rd}i*mIMaZ_fc1(zC?YBXM}Gn8!>(uo*^RYZED~c2sN~883tWl zEm~nZKIgZ2hu@w*0BbZR48dHnOqJHnpy_ z1{PI1Wk^(M{Y3c4WHtN7G%&2IVTFPLn&PBUT7*OU%6zMqQn`Y`5Ma>HfCE91$ry-$ zi@^tuFn?DVXJl}s3Bv+B401_>@c^!f1Ah!+h@&W-j`J$~XgfJz+A`dv_q=3=4Sr~l zu5gtlkRK!?!*G-r@*hZsqz^~>@RN>T3{HU=4t~ls&ny0@Z@lx%sBxW1&B_JJ1T*~2Yg2vdD~T5 z!gggxzMfP0@5;g}c;5{q5W0hB(p^58!>fWi)X%z3@>q0m((^pMFVI+JcX6R>^ZGph zUK$@de4F!+$X(|Cvdw6UgwLe}RCay4{65UYBzgo!oypg$h})KD;2V}90^u5Fb7d%S zjw5yS8i3U`+QxQmrK%re2OWBh&D!@+d+gDtY~f!YwN0Ih*P-?Orj4^>PdL?{TKJ?r zbnk68UMnVyfoj?{YpDTdtqc#&lXPnP7S+632Ec1y_d2WA3JA=jK9($6WcS^BuWeYr z!5SJGt)->K6%AXtFdT3QvRNxFjg57_l0jo+H3CBbR|6g4g9cm2Vx^*4xG+Yl6;DUc zDh=~59hzNXJ5M`23OBBk{KGs0Tj6308O9kU97ao*ww(bFUBaq;xi|)=P%m765E{VH1=bt2#@<4<_t5BPAT3EqOX*D=uzxOlPx 
zLw&*#1{&{31*>XBD1Ua4b4nTA-kJ4~PdtNHm>>*)YU^ork8u1^P61b0c0fhm;z`(0 z9P%e^`SufhhEW9X>h~^jdX?7Muw5p(04AbCS^AFbE-C(VC z6YQYFUZ7Qw$#(no|JORS)u~lR$T26KVRzhdhs|I9sI7Q>q#b?Ysn*feZi^ne)vUeM zMo3XK$hhG%B<;e>)~Rj{a@sVwzvNx-vh|xb+PV#EZN#WXn=o#ity#I&-tiCbw53az z`T$?8%W7pn_!$#cC8T`n8XAO4#)+P|R#qwm$AFfB2zy9**2|dLyrtRrwhD#>h6jSG z)hZ4vClmtP#C%Xg8sVBVXO8{Hm%d~l{NM-eulL>OI2)RpWYnbdD5PhY*~#*-@>yTc z*%)pFV<23d5ph9O_!EjW8Upg9EW+=~v_g8yIMM)?AFyErs7R>_J@^84aQIbENh}vL z?0F-JaQFvjG~nRZ{h|$h>H)5lX5`%?cgP=g*y4>XX}I7s&uj1-PvU^XpXU!h;odg% zjvSD6;MyVV4+3e>k0(3@@9?_2xSdu0yNi5_eC&)8!05oFkaLRyKe|lMjNJ}h35vmi zqNR=0XwoNc`Q#Fvr1>)7UHlO`E}rCah8i?m8Q@*BqNN zdtdwG?|y0BTh?2Rwp@*B7->_+G+4d1Z`F5ivG&cacI(Z*x6{rzU0bylSjPq_iS~6G z(6nlR#GqYM2{hcCD{^aOMM{?lpayO_?&qmMmFf zD^{#<-Bv3eCd-&wvu2G=oH)@YO`2qjv=Xy=^=fOZZ?f4kM8-`RZ?k64vSrJb*~*nG zWkhw^j2SaL-SXwjJ*-K_39BJ7P=lALOaBwlK3b@QBPfHN9ryyK6scn3fAfK{=St}jn?q`>ZY zB`-Q@Cu*s3!f1ld5a|YBMi7mi|k2L+4b!xE5nw?4o*f<_F;G=*{1E7 z1M^o(bLaDKv-}4qzMU__1xJMhB!Q&OHD*8>uEdf7BCUoBb;mhP^cN-vyEmA4Y1cpk zaNl*|lR(*kEG5|`|DpqM@eCaonZX}%@k9to|7F4o;NrQ5NI-SQi@tpQ@2M-2NWQNm z+}oA6yWxv#ScV9yfDaSGOP6l3>d-a5H5!~(SBVV zS}AA|tj>Ydv}&nT+oM{|mOs6~X3X5z9{7s}|1vt-yPK`PS_2@-r@N+6D;OH+RI^R0 zQG`UNrNbJvy7Bl!kNQf;zWeQ`m5eQR>ZzyN$3FHkH!f&+_uO-j{qsNnvl}REx%%WM zKWWDwf4tAABUFr+?|%2YJ{WuJTiUs&b}NqE2ffh4!vs?c5No+COJYnSkNq z8y16yQ~P&DnWCTLm!AxZu+V?-E^Ro>EEMg3m;&7PJ0TIeo=?E zo$0%(3E^c$mAJw?2bF{vFb*x1hnW} zgz$A~yHj0by$1WrSlcFRQeb!YW?Qjjfj#fAW38%glyz3OYUZL&-#$!AgNDN4bZ*CL zY12xH@YL!yBdhAL!wx^pX3d&yci;7YeFfqJANYX%`Okm0FKMgPzWeTL|MqYHX6K)O zzJ2qX-?WP_zSxdE_E`J)$3JclJ@k-WaKQ!kwzs{_{_uxC*tBWW+>gQV)1Us-&OZBW zPX{hmM`FN&p~ArI3t#wxoqO)N_OVZV+zpS(;Ha#fujU|1;T2 zhuVV5H$Ujbk4p`?+KtkYBS+eyhaK()D@F-%(J$9?2lM63Rz3ced*5^ecv5?Oit?W%uTbVI)wvE*@6<3m! 
z=SmYtNRS!g^nEAfFI~hn=ZN_8Jj0Zkb!lfBYR_em{}3n%w1jW7H=RsvBz@fd<4_#6KTEtK<$k))j ztx;QGwg{$P2FQk%PMa`tk{x=)(QbTns$sNAZFK5yy{_4+))uWAG0@q*(P|sDB}P~G zwrV>^Edxbe^INAwWlExNk&sG_tG3JDa`9X2ytlkb?`BO+jrQdH1@_|~|Je4^st8-I zZomC@A6Rlgoh?*H9C3szqoa;G%5J{-X8Y}Lf9r#)&wS=H_RC-X(g%@$`?r7VgRcAU zzh8#UEH`Fe^P1P#kw+fsh6o+}b=O_zTfLN1yZi3DeG3<}E1NcL@L)g_~#cNRSc;K+t8c6;qL%?DU5 zY7BHL%w$P$Y0#BjB*2OSadb9Gu3UD|yH%=5XMXk;uF+%0*!=nPb^b?nP31Iq?%pQouN)!xveaWMeLfeRM{+0?1^8X4e$&%i%^!UTK#u}8gmfHTs8m$Z&c58h^Ipgv{F zR5wN$Ew;Dn7By}0V*6gbR{J_K9=IuvOB3RfgTU!>Q=LLyblpC%lwaOVHk$Pl9sH!N z(>W*Tx(pQv-=075l{)IgAG|_?IAp!w{`>pZ!l#~G=yb>f4me53>vgr-Ee0a#_S^pe zZ-;D$N~=A|XiW4}CQ9bt1=3`K%S2l)!Q@T@25``6#+EwBbgHbK@R zxw+CCIdeHgc6JNCp%rBlvL_sRBV_#Tf4~90jS?d(+9Giwn_y?>f6CaEHsuQScAAi6 zcksS*barm6?j>*VI^C(Wd$qi`s!bpxO%l0GHPKUB-5^bu{WF<)nyI2733h(35iD+#GcoSY#b# z0t^#8Of-4D6Rf90`NNCKpRKK=_j;##LVRqwqYncg?Y4ooBY5I~ffh!7lrLfUd8Rt7 zS}b}|_JorMp5P!oF46~o_zO2Y3AhAh6iAy$z|s>LI>M856{6MVv}&R4RQl!^i_h8t zuBXa|dP%zaQMP8Y^g-=NTdz7)(?}g)#E{xNH*Z#{l2mIFs!@ow$3n*XESugm%0`YI zXHP!mdE=UIP*exp8e)|Iod)Y)-k$9ny3 z)c}$>Zt$e;<{f=Y5^l;9bsiYf(1(vZHKs^7kGhnKGnYB zJ=$R4%BM?wq%sUT#1l@sz#q7vL)(sc!g=SMaXt(yQQ+KFLceZHoDKB`t z)mhfoRoVaEe4E{N+aK(hy#YrcfJkLh?hA^5k63#EFxmT)Qi;P0H?}g(gtVxq5+xiU~y(}kAnrKt^nyP_l z`i_jFqt5Hc00b9<;rJd*@aGLjjnnG^<)?D5k$=?aQ8qz5Z``on@c@qlo_7Xq@Xw0D zSapCgSOSWh*Sd5bD#GyNpxvx?KsfCU%bf4jkZb(@o+;WI88Xa}*KxK=h7I+WSwIGP z+~h`G2gX0PDO3zhTZOx>KHWH%>NK5&=n6M7BMe?AOql3KJ6ZDCMS$cYkI)*Yd?5p5 zL3!c0EW+g}JZT$TtsQ>EQR1Uc;dB~m2l~`I(09fL;&}$?0opfph>eqvx-Fyy*>XOw zOZ8T$*fuDQ<#k93Ty0hDPw3J)BLI>CLP6x~h1WaD-s`a9=rTwXt7~+0%(}rh^~TFo z`XA#QJamA}$R#?d&NE1)4uruEjyi4MrQNteuX+PL&TnDB@b0>ncBWha*MM;@o}-;J z*@4b?zu>8d-X^lTNIjt~lOG(srB>kH;?IyJ5YpqxkCvH*WwNEsdwo*4>mA~mcv4+% zuh!)Hw-MvD zf;f4#ZPI51yq-D_WQ2MsBScj)`Fj7Rke%ai^esGNwSV;JF{-!fH=0|VN%GG`RML^D zZF=9JNeNuED~uBAJH4Nmgk;APk^7S}z)z9zl%owGC)!{5d50GA1Rt~_jxczoJ;pn6 zz@x0g#Jt}@tQq7pIKVAhP%?o=i2)}V}mwlF@zey($Y` zAJ0}*!fwstEqMG1yWy9=w%cyG$zG&YhfCi1F5`U1(@#I`9TtWN2hpQq!ao&y!GZ;L 
z$RUS#BcP#-)r`RlUho3H!HaD>9Bx17oO3*$3O`1(=NKy!G|L^0gTV}yo+n}84*Y~G zvB%$di-Yg)_=kV6v(7rpey5d~haPy)CC%WWO0uG3WS?r`dL=-eLbhBCzyKO0 zDj|Cq^^CzhFqA1SeXFG^&6sNGkQ<;#k5Zy+d{B`os#=vh<+*&ta+|0vUnpRd5!xtb zYJrdPrJmpiPbj~@=WT;JFCJL!0)|-wwvw%1x7K-tKf!VT_Pp z9K4b*;xV9^NrFdsr97T_<{9xJJUw~J@~Y3JdTP*ms ztvawP8C6LhBP5IQ^;I^peuUoZb%}^3hSan=WMOn_1-iR>q$Y>V#8X8xmQK>eL5h zxlDwM9Yo6IOIcu;Apa=ykSk?RnbH2duBje+eU#9FkFpQ=sOzjCJ6@H)m!qD*@J`)@ z4yz^L2p9OGj^pRu+m>KS%R6BhC>S!(z)^RjEl_7j3q5|A9>Df-(A`DrPM>@{S02PC z+SFqH_UC*6=`(#1o58AXv}$c*Y5N%4JYu$OteR}i)gxp?r$Mk3q+|5*R$Xq_Ia4oZ zjV&5GH`};Tszc3B+uRXd(pyt({pw|2mgpCX*?B^dB@G|McZsL>EvoC$SNR#ptlco~ zq;4}QguZ4+0Qwdkx?8Y7; z#nIlqKqOPgBHd5hN%Gb>NNI51Zbbq-v~S0u_Uf~}N}F_qFz7u^yrmU%+BEs%M>-tu z=nx!b@BAq&bb)^mJ8Sz6w8#BP`mIv7LuM{Z_0^2YFhp>nA1kFFY+OY{#F0>I`f*Vm z>6oa<%(zjvyk~0PV#}U<#Ev-jMYei@X4G0V0MXUFY^T`Vw%HG)cWE}GMc4dRkC<$G z&6sER{_#eu>ufPiXlV6>nNXBJ9gixMIt&JuU1}UHx?Y#c*{Xqyop<`m1ZP3s{`R-~xsXR5dBk4%%2)cCk=N?-w0GG$=T`3aOEGEW_qD!$_Jdzj!* zD9DzOrcxLRu!!|w?b(AYJq!=YWKvJ3!T3z9s z1A}jjR1UHe#}*(QiX2{X!4ry+3k+zqp;U^6|@QVQ>dFOUG z&xd!uNpZsiIYdUI#*9`P(e!KSzgEEq%7FK#jdp0rWEJQ*zD z&l)CJ&G}-k$m_ zK3UPnA9(?e29l(aP!xZMA^{0+5Dxh!87FvOS=NZURkw4l?;2~Hs;Ao8F6q>UaRQXC z>)N6lIy-FkjLBXf9$&Cn^?iiK8=AP#7VtK01#fFrd3JBGO)YC>l#FIv>qa|8$~)ys z-C*luugf9`U__6B^c=gqj&gppI_hmh^&F*%KKAyKwK3Yr@h6<%;~bY4nUA)(aJ`%tzG$;ty(tKUU2*gcE|tT zZ1tn7ZB6rfs~R3qwEgQ`{B&;Q^5vp2l)O?L4`7ut=#|DC=6pWo-VX8p$(zu*<|uDkBC zuYTnk?^v(<;k7n%#&mo01?PK%V723lPk+i*tz4m5(Uo@9zh7yWTq2|9C8v19`1SR_ zvS0lCXFgbA>qwhs+Z!}kYuDB@W!UP~Id^E_w^ZjqZo274JNe|3ZLg_Q{P6m(e)UVc z_~MIvVB9Q&>GPlatgY2S<|HR6Koo*10E?ihJMLe)drBzvChb>6!K5-zt09>pQW)W} z|J463l zX1}7Diwuaz?*m<>0S1RZ8foA`$$Op68gH~Op3Z~#V2~X!q+@XAxP{XdtMJxqpYxl} zKTq2U|D>&CtDVjX>O5HOTc~YH;F_(A8s^TKtNM47aMt^Q{_}MS!=&W9cti*2WpH@mZY)+A|0S8V-JejeXuR{_Q5# zv+GrMT~*o^CH*#1X{uK(vQ>}VZlkm$q7v4`P_qnZsza&+zMo!oNG;4|>$*+V47pQI z4D8Z5mNYnEK>xuHe#mbB;~%BBZ&WT)8S+6{1~JqZ-Z3)7r0YuRl@B&1m$NN^RPLJnAmBuI|tgdFF&ZKTseR8}GTh2)$FFx?~crtcN{aNs{{K(fy+6jJE*25rh-K#MNj(UEEF3(|i 
z2)jrYFJ6+iHLK0co;}-Mr19Dq8D{M2V4^1pq30nC%Cqn%QKY_%i6bVG*hNAcVp}x2 zJb3YTCBAsN{?$7Uos2)i34{02XQT%<^6Gg~8)q9PbjTNd2a|kkkB285a5(e?Kj8TB z4&I>WKzraHWLL={b7Y8|Fy6uuzCB$UQDqoG4*EWYlZ62i;i;;mx(JghQ?IWaGFR=A zAyR8~U7PjA^gHZe-G+7Q8E4p{1&`YD<%?{yDoCxqC$i6seRads!M14OQ})nb{%8|y zqm8Ol#bTyG1N2tSymiUQp?hOc)u~E%#%ZT8K_?@C0f)AG$yhn_%rmtdn=0c&4?U=r zjdOe^6#>&AlEARK+wH2W{#`d}{YTHtD=%?n_qhzgBb>%#I0Vv%3l29 zlfAM>MgG*ME_Xj$6IxoD?efb%DFf)EUa?90+0R^MpZol0ZKMWT+|nb#x$NpR$YXHK z83ty-!Jj0DboWe08Bj((sG*}*nMat{N5w^bBOK*{L(vew-+uesMHgM{mk|8n54U=l zHyXiGl`!=11B)TZ6Fhku;3FJI8S_rruo6OA6m;CWg%V@16ACTxkkooGUr4i_3@wdeolTW(Qa)5kFnIPgH7x0-JcKm4%E0bWOr9Oc^%H*enL zX}O^dKFGW)S`lTIISud<2MnItefc~M(SyJLyXudCR+yZPrWm z9A@!ckg+2_>qVEfbNDHZtyc)7$n?aM#py5Rae<$o#i=GtBvr+et+5z zD?OAY)T^`=lw14QqJ<0@U@~B3=8Ga_dlFCN%77C-{3EGZ*_7rXpPyafJ@KFPwyzXZ z^xj6&;2arIpL7wSwWw`%)-~GV_9}hrN$p58#I-sf#45IL^OB6DOeI(0s~urY4VvAQ z4xKS>FO{X>C3npTc{%dhPRFFW#3c}v8EulYbQrZGwK_9OD_Z#B0RwmZM16o(tJYI= zK8KITQoko@@W448wh%Tn)%)2n^27?VuM}#qKS~259BpE{CX=YooC}OChZrmjf~YgB zsPUt`pVS2l-~av(e6Ku%OUfQTW4rAVZONr;V<< z-Rb*x(|m-K@r(W|VIAma)48q}!O*=t(fd5n+r%@e)t>cP2g?9C<&;zGrHD1H09yw~HR^U&xSG?kHy-#`kvB%0}r8m>)S=!2jg>4IjNOzsiE zA&YXE4f!K01oMu76C^ zXxq@yVvpW`r#-!Jg6*^40e1Af(_}p8V3}I)>Xl3Fo?EWB4Xc;i)Fx$e%UbnuThw9d zT!7*`m0@N->hx32MT`omL4;=T*n%PwKA_;xJC%&ehmiPjlhyxc?@iz|yQ)IpU2|0r z>F!LBk#3Rz1~8D}c^Om$RJfo9fdmMsC@9EN1gERN_mJxl@Nu2qy;l&B-$m{NK|m$} zWey1eLJ~p}$UF~C=%mwQ%~kLJU;C`DzN((9(i1tm`>S)#9@gG_?X}ikYmXwxdL#n# z+YUzN5y7U*6M2m5hG+9iLQcOJotMy!&r^RlLK`nO)yshVTnglbO|FGaUMd6{4jLlQzt(7ANWli`IKH{0^><*RVnR{lhR)CieD;U_`(;;MHgKJ zUQzA=WxQsQvzIF=*v>boP>8h5Ty zj{0I?tXXwLITE8Im&_T@rEy2Flul=vR7!k0QkVF$?C!U+W>LNPQ#-qgGwVmYV!*gx zLOEiY#FGOF>GV6?oP05jG2zUIn$j%m`(l=Z`uZ+HJmJdC9n;Kkj3S>`AA3|e^^_-- zo)7j_zAF(XzeeZi;C?rpP+S7}t6Tb7^?<%il-<<9KG!7VX>7h$# z*I<-*ocdA6)B`HEa;SvHs>||z_PNiE5nSn8qsQ57mymeZlf?Vt^UjYG4wb#ab%xO8 ze?M~iv^YS?sh+;zsoFmfxMV>tXXbyP>!Tz=6i$ zfthmL>e=#ZzjR{xE;{tP-u1Ddhu6RUMdkcod_>UA*RNfVOk;l7+ba8zU16&FX~r-{$j>Y@pFsy%=7aN0*=70V5x@N@L+udlE2Md6#earDVR^^? 
z(s27f`NlZ*HS*3Ap7=zJm1mY4Z@eku)Y{he=+viof)9r1#jK8PnTuzbqee zgqu+-h@^*g=sld;qT0|y?-;p6UgU$|iA|UgrfZ=sti&nNN@x;mDvF)*5hn>w!ZyAN zokVFGPfm#quK2m9Lqawzkk1IK>y$l{8}^Q;X&p&OVi(3hjL2$?RVB}~j?`HO_qCX> z>2!{?IP$W9wIl4|_9xpYX-r(&mVO%+ino~OJ=zKU>h~CqmGv%|&#+|Yt6#migEr%d zd*P(eF@E|P(RY90-_U?@O>5qU<+pI09aGRg@{x~>IF8oZo(er-=uAriqc8P6e_1cT zg_Uumj`0p$eXy*To}626!`F`!5hXChKd9f`2s=%&2&2b>Wl{lOoI!*JzdH& zpLNz*xH=W-R( zSSIV520EpSeOJ$Z_OsO?%V+-Mv$6T@+;h)mB<}ffZ;?sX}-W>@^z-kxS~T)Qqd@ehH`HloF5}k zu#tDhydFNVmCjdGza0+%DCUBpMHAz8I))u+tQpR(RrPX`Nxr7or6aN237MW1P02lqTiUzx6qwlB{?1YtPv z=d7lKe-3U~Mt?UjL*w(*Q%?<;WJo=xBL3#hn*#;*W7g4pg};%dnJz)oxUp}wyyl_H zejq;DKfA<$I1^Xm(`UA|ypp)wW#YXIfctjg8oD#{ZkHrk=~9K-k!FbyP%!g$Ixs%y zAn5dq7of=K6DazOVltjK#jWYyH!?DVa?a*-RDY5&!{Ka-%PjDjc4RLyrJ^y~X9sIh z?Ir<_e3-KLm8hb4wTvPr6*-ZYt{Y*ANbA*&MC0oxBN<0$c-G(d5K{OXp7~^)fY+4T zvUmK^wnFJNKm09e_hW;!;1C;8Z*|OKO_E4&u{$UT#FGx#W^dI_+p& zG3-+u3nSZgXHVNvVH{5coG#k2X={a^f>R;srG<{_xU@r|r=U%krZZ1}D$D5-R@Ntk zdIQ@y(t=?cSdPS+*6(om*#)Sr2aV(J(mvnycg=KNf>h}?hzr=3Da+!MkARtU?bCd` zBgc`ee|YD+BK$YlW3zVcS~{W2BrUvyEK${Hj;I zs=Vht?}^-b)c(*ktbCRrP@FsoYdf;E;veFt zzQ`N$tV!g-TOcAf%3*TzOnq~>t0vxM>+Yo&wQG) zGBAVU=;m~lBhxsDa|##1UJNe}p_q7@;B9x@Ue0*pk4Ax1rc_?O$Y`ynyt!QQY>b3e zECGq7ujF;=X{VJ3P+koC%x6E_X>ZGHyIUp=o=NJEjT%=K*2U9pAtZerH zlczl8DKV;h|Arp~ttIU(q?JhA{@E`W5Z4aK#1bv&Kt8sQK+?r;201Xej9YL}2WEPP z$`#kGFIQfF7c|VQ7fX&zKMLKy?Hym_oteCXVvC+Z1Zds?DfYiDtH*y224Pp3r5Y3c zy|jJ^JL@M-YF~%`#J4V=F>}YavQj5)=L|s%YJtVnWlW=t&1dw%kq-Pbh$D=^32?|V z;*GIvc=ML6<+$Tdh&_MKXlRHycxSofE$^0ca7Y8gJsfM-u8sH(?j6mA!Y;AA{)QW) z9EQ1c^Zo}m($4G$Wbn**@}|awGG0FZ-gVd0zCAH({^&=a5;ESZ``fVL{wTvS9M}Iy zP6b=P{`=I~zSTChj<+~!KqRh94m$MD`qVhop(1lOiR40w(`%;l6UG3Bmv0Z9* zJI`W%xI)2y_DwJZ1+~jZy+dhY=7Fz1?f{wb9*~|fd>L7z5<|1B*2Is+B&4L9lgNpU z>PR}$sDN1xGOK4DBRm4O9jqm&jK#~M-R|nRXrLCYb$g8giMw0C`L1H_Q$km9bp3cL zq5AoYTkE;$-Zibxh{za~Ihwr=jEERdziy{8k@@8_VValmRca0M+k6$=j?B1}$fa2d zRY!7!r8AB3d5QSl`5D#;ZvwJhrZFtO=`3~5cF0GWPW)INjR$|Gsrfc{?NA2OyDUwG z&NwCxO0VgVfpSVq&G(QYZ3F+*x 
zZic0?z0EnA+K%2I6SKIk{{~NPG;}Te6tmS1^mw<(*tRkD+qs;z&H^gA`!Q&oIU1wm zo`wecNIT0cbT941_-SI~K4(dpJI{K-?U|Tq|2P7 zqS&U=iETFkUHk1aJ@4>vrtb77JU;w_1_^Vio^)|Pr>hMzqa#oHixE%yf%WUxmw)}& ze=Q#5;4F$xL)RIapbx&~A7?v+<7}pR`tGQ!My~@5(wuy4*__puXJWZ+6IZ@8o$++Z zhr(&yQZb;6-aQ^`X#Hzjr~>#k!0HM2h)u!Y-Vfnn4cMw`oWy~aX)_OTa5=aAh)XnG zp&9phVT?%g(v$6|Mv;np$`-s1TW6ibk+iCFZ#hws(>{t%$aUYk4h;$leTuz{xl==A?~=~NlAvH=ev-YNBM%!GDBZ%%oyvtfQ4M`g%zxe<2l+O@I7$C(k05aXmmC8{lRE`!<`>J@q_ z(#GpJ-czyB-&qJxwQ{)$QNvjd^DbCT z9SZnW$#NNs<<}TF`J@wB3;%)gO_W9R)(FY+beGq6f7V4jE_AhyD7qwoF>>lJOqi*RGzKCq+XqD&ei`nw!!E$yWda>W%_ z&?#=^6fjQ4LEgCB$T~=CeZ)PyoxD=l?S^dWMUH|?m$O+jWhnovECxTl&Nve;ElgmH zXsk>GT@a5H9;FKhAeLF8LOb-L+zoi_I&0}?+-m~v9F2D7dW<3CyEbfO7G^ePpzQQ* zyJhS%lFoeB5Y)JkXP)_tpDLFz3uZg36iBnup>3^k;Pc}j|9JW8B^O5+^XSk!Zu2nY=mshH&$QRC*ZHG3z z_s9Ep#Er_UbSDF9xJDk}Cizr^irf4}ChecaW5AKdreEI~5X}Q=IPq=UBO>{(?J&iT zb!P*I=qF~mALN`fPA+GiaccSJ4}OMO>uo_FeGI<*>dStv+;Pvw^4Tw5P20L+gZ7?4 zmH|&%7k;{wdCC&mr{kimO<_8o2mi(LTGpL8i!AMi+$N688-JM-39q^4n(}RyU}+?z zlRrHD)1{4XfBW03j1@O1WcE!o6mq`~)|n>wmp(>$;_0F5*WVjFra>a_pLpU4AqVW! 
zAAx~lIpjz4F|Bg{*kg~0V}$*-jEQ^mnqa`meF^4i+B%qEzMhCx<8J5PIpLX3c%v9D zwLgJQ;NduFk+}K^!?>c#GD}zCc8lR^sZzWjO65ElW$6&LctK#L(6lTmxFmWg5TqgH#1UYKAv!*u+|6}WcDOD8`y!LrY^#%b(yPs z;!%HH;DGY$3^QM!mNWeL^P(5Mm`%=)iyPxV{)vyp=$5Bt8OLRB|M-vp82dw>bjBG` zCXadj1}D7Oc~m@6pQa7#$ehL`%R6!3_{P^6k!R+Wd^J{Wb?Yr`jhLVT=uAnI%g!R< zVfv@yzjn%0E;?WRq+*{SE$Zp4M=Sxfo{VPt?QoDvm?C@@H*eV68*!04@eJyfuq>2o>Ra%6=DOb;`dj)?nx_g#0zGOm|U*3WUdtVYXI zS$cKd_r8lVO07|L>~Fk7WfHgpjL@p^3J}^ z{J=K^O=#5%QkNKH-_nfxMx|wze>06&{?l2d9J}(l<(=>N=V+_n{_U5Q6W5k<)j!@x zTf6_yOF11#qQx;%T&g2nvyi@{ZDoq^%`#ZVow#m2WGBG~!prgouQ!?b-t8Af#83Zg zkclokeD*V+Rn|RfZMo+z`n3Q8LH)i5H}vEtn{w@ zl|LOw@OyaKNIBtz6Uxmu-yD5~`8pG7pJ`vU0si;W#9MB;CHhZ!NdB=b!ZNMPuH642 zymjl=McOotVT31^hf4$$RQy%2UoBgW9&Kna8mCMuOStimS^1O1cm_0N+u z^!M9Qa^LT~^UfH(agR8#el!38KmbWZK~#eZ#pgcvAEBg7qO3TzE4+jyJg0UQMwypS zg`;7{HO!xB6v)RMb4)D9a@}hvBKYevPOSySZ_1V5K82}&zWZ%{I-h>)k{;rl_`((+ zD$rSW<7s?leOz1b^tE+%G}q1{*S?yiu#B6+HambA5qCt8G{U~>+gA!Y-eXi4m>$Dz z5n_oA!a6T-QwbDqmb<%b0fax+tJ^>8n`xXCakw8%4Xi*zVeE|sdYC${j$BH zL*iSx5+{je!4JzPlrsw-P@7l^%hMCYKo0e_9i$IBaW>-6;+!Dk>;@-5ZQfEQ2awgH zEaU6N@Fh*iaq!0>h!S@k5;BdFHbj)k35?w-W+tJVUY5S}aJu0jIGox#R^Icje~1y* zU*^smrLzjhf3Qd6$5|?MH0@{`ZbM1<+`s>O%)b4|V;)n!i}HWz#TUm#w7>b9-zdNT z`@dg4@}UpK-7zn};AKHy53tVr1ONMf$H2k><(qWL22Txzs4s-C@o?RDZz`8F`|-?Y zJfl1xI?=%SAWIB4GYd1zsH~?QO<<_Y3;*jq?>HBafjakBL)_9~BpADd{=s`>-URL!!{ou=w3U5z({CC6G-x?@d%1ENDPI zVVZrWTzG>lWU!%cxLkG3E#;ry*I!=uOQ)4TeBF6*r^I7U<^-+x-%`GH<@MYhvYJ62 zjc#X|JtdQVLY^T5XSAJ741LA6RtQGplW`Q%5;cy(_^*~SJrC|PX;OJtab~%z6L$`7 zIcW81_DMbM=}#|@VoBthBUaOQxVLXGF6;GTXlGSDJVcr5iIgtq^w5+T$e}Id9r?f? zTYh7Z&<*F#G&;cL-b0N8XG#5*N8EUBU!Cvr>GjuNAADp#=h7?(8(seFvaWzlUdn6x zA^Sf4oL%*3VBz|6S*P`NNwIkvSK~x}x4ed#&Uf<`o_LanvoB2?wa*`ZQrd9d*hj;t~O2*4$v$aZ$;jZzPt}1N>@aSafG;__13>ZYKvm#P*Zj4m$u35FZ zJjkZGO&cGKkywe+GFcXd-h`v@d`Fonx2Le=CTjgGdopXj$5H^wZ=LkBJ~}rDTheMg z)ma}$c@(;iFo_=ps_;_KW*)-Qop|C22xl$?C7yN6aB(5d^w0NL24|UxXT7{1!OeRf z@i>EWs=qkMa*HFw<>V-;iUS0l;l>xf<(Ka>@AUU+T=8bR*@)@H#AzFcyBr%%b;^aR990UzFevFqg6v! 
z0u_wzf=liw3iSxN8SCA_g$jtWc7Q!3oCjD|#EFE^Jv0GffCu7u=!nP^9pK+(TZ5z< z0#^e(Cso!6cZR@|3BcE(x7Vmd|=Xb(S z+W+VG|9`ob)!!-qg-dRweZTL#{kHPf3;!Y%9?K{VT48DMBkY+{Nee}V z`fp~G_zkcBofr(5WC>*OEArS>)0@hxf9dS9Zrv&6-S4}+eDRVSxK#OsvTY1w z14CqlOAq@ew;_8Rcyc{Pl>^FWHIR5>cuRvcbmTo(Qp<2?$PH;wocPQ^7il=b+?_%cg*4u6^FZjiC%P+j(+<^0WW)QQF%s#|C zb<+Q>x852CfdfpI!#Elt&W6a3?iKMkUw_JfXFyD2+}xk&(b@8!GosPPz_%XquKk?` zhTrz}8a)~y)={HH7{W3y{Y<0F^vSEjO<7=A<`a?IKYPT0I2M;Wap$|whr%ssX_n%_ zpa1H<4mx=XTMdlLsyWc1x)nbQA1Rb25E~tfKVfKeXoz@`;f%-W=nytYe;&rrS#f{$ zOP|&q-WZU;bs;1qbOf0~$$Kqy8Wbu5?uk&*b<!@f=yxf1yi;wG6^j z0Sd*2h*3VOFF&g|I!fwi0xp12%9|vb#7G0 zAXO=s9=M~#Wr5pRHWB$iH(t!?yLZ#1lis8$W&SL@}R`Y~O|2mHPCc8c=Km(oh|s!F92Mj9~xF_bRDf(Q9_FNlV) z`-s9H{t4b__+4I;rt}kT!nSV7A6d`#xepkC;AooNJMb-G+UC_srO0&%dytbk`$jjE zW&NvY2kyy0c21!;3=bYvzHrHPW$S3&SpBW5Ze%aukt_vuCo0Q&VV1tJAC#W)4P}r! z%#=Agvyje$?tF$Xf6PDM=RGaU&O_!!I{+V+Irtqukglam>msej@?XlJpYfE_!wgv5 z$7xdj?AyHb&_OYm?Ot>3)fjf8&>?W4+zur!jdoK2?y%FÐ~ zgLy08ea5BD#2e*^`0srCs*XWnTFa%n=kdDRICV?7mLtn8Pu_IHjkWy1()bUa#Q%HO zezy*&(JKHa=sKJBcFyed3-`Eq93eK%hEn~&e-)w(@nz_!xarvAj;L+zGih*%tn zNoIuHQ2{kRG&Ja^n!?h>86k`eLp@c?IHrYF_*+LKAd_JDP_2w(C!-?+JC7a>lE}*# z)m8K~AN%2v`Rps`-Ii-1(A`Gf*PIW(_!42+YKl&FX(gt~9gmU+y}Gt+%;+lsJURhdfVRhZ5qD5UBXeHGd&F@F10 zuqxEeSJ)}gTt=wEEHzse!@FlBGH(gkyq&dB;7iGd$v4h6L>?%|;y|H$F_%t#mE%Z_ z<9D5r(0Iww{>oc+%IPdeETKYC6mOZQ#)-U|hJocsPV!wR4h*-0$4(p^MNLiE%?qP9HNcgdQDBC$k_(>akKghePpH)U#_gL?rIJX03)cbIE zbf-}T4On?*KpgSwas4}WRXO1i(s_0NY}tPQUFBGCG26eKr5MYrJwXJiH}J3A#Q<@` z`T%^`?{cnVb6&0B^mdbNEhBgFo~ZuA4Uo#(&+G8C;i&Arjrg;cpPCh zZqlCN!8hbDJ!D#EK=e1B?JX?+bd(n9!??nhN1Vm8OwzJ+VSHtQY5n$H=euwN9e-?v*Dt{!#1*!c8@6tpD8W%7A*5#;BOBv=A&ec5| zOYg+p$o!tNX<~?(PzGC|gJsaccdq<)LpQ@^h~-;79K0}qfxV)Cv<%ODzbu>iL0Q&E zUo*M|s2VrbnC(ZtD$i`+g++@Z@yC!|g=L%r@r-3;%78|ADqp2_e<7c#L-c7B#-}a^ zvEqY2*eW5k@Qc-EABhFUw^kStj9{kN%cDu^@fUrL^!5-kK!0Qa%p{CTyfRvKyUR;vjE#*j!t^XkBS{e1@C3vc1_VkIMB%BD 
ztkR1Ps1Ks(WnYNr)A!O@8ZTTtWGI!VMgTNnDyAx$j--WALJZS-EI`5i^h4+5K-M6~>qjO0_%<>@bu=m#@^n+9na{bBc1a7uGQePX5A5lBPsgj3jE9D8t_KS9G1ZN5{{cHo_s;v0*bdX5d#5J!Je!}vn z%co(!M_cmLa1el-XDzd9)?-A5c!j{lo6bzvl|g^l(pk;Esa(?5^06$ z-t^t@J2c$H_giP)=BqP)=DVx+49{n#>y975+CPiJK(u!%(9!aO(U-r$!VMqu;alLH zXPgGdy<;nB%9Dns%6;e(_m4eTdYK8C;Sxjpi@`~brl#4Qv7X`nknyRF#K%zPBISXt z8_JrV2g-3PxK*TQqI~a;n;CJhGz3`$U0VRi%3S?}tL3UCso&Kn0uRm~ie`D;8yEuu z%yK9TJyhk999yf*7H3w(vP#GP!jJDt$I4ISzI(~SFCIt+`pR2wD|hg#uBNTC-SV-@ zel<#j>9>PRW+z<3ge%@^p7OkXOMNy#$`kwjrqfF+zsqF7&7>w$V z!#C)Ucu$APD%gncOXgq04|?lmF%R-HkiW|^C@<~HHTK;5F;4ZR^;YU&j7oSz z@+8MX)_y?O*Oo^O$TO8#@t~p=BV!9P z=q6v^E&1+T;E+lRb5aG8GW)K-3>(Vq?g0EyhgG4d0MeLHc!dChCbu!2d(%xfm38aZ z#i+Yz2%PauWuk-I-QdV}R7jgf<3k}GCvdSA^_H7&E^F7Wod-`)3*qk?JToECv$X7} zk?Uf&0aqr?HcsV7+OizBxhGZKbkmKjnLQN(lJDl|*gE0S$x=A;-JU3Je(c?S3CZ$6 zi4iS2kjjvqZ}Ybu^5$CWU|lSyKQ~7!;B7b8_ufV4=Mkx%N|d8iJC@<0tHVOr3$bN* z7%M8F)MWsHI>s`b@3SPqjgOAd@0p7i=5lC!qbgAlI(qQ0klh^kzrDf#inyC2ED6v( z2#p;{Cw>IR$njV#hi|3SFUzN76b@E!E}l} zPXD2xG#l0jD0FP1L>Op6c@F$G6ajTFFCgMLl*X80NsK}RVdB$&UOpCzI+Y#5+SV}G z=JX%RL}e;1Y&dBlH|?sx3;ijx!X_;|M_P5(Jru4kr7V-i08TP5>GL(i7e3Q+Onh2y z-Hr<8!jun>?}`VPHdVaXp3!+=oVZC_0qV(BN5KeQ*jGomY8>e^+J85wGQB5Wc@bew z>)N(5k@pw4zEt=!T2A3+8TJOgafRdhRQEQdVVMSq?eBE@BIiChM+CGo5bbUyY9LxMnU3Eg~kApmOFKV1G28d^5hha7LRYt zORAPqzNMzkwiJ$KmKH3d2C2u;9?s7866el#D96-E?!KF2YB6+=KmLTkztiZ>ti`?~ z+RYI}m6zBv6SE#l+Un-ZGR+eQN(gggGvYRWgw&a7)xr6x8c^+{#lU{YfHQXe(-UQt z_lPiKH;(b)?dko4EZ>AOJ*89UT{n{`9J3H}pZi0IH~PT6<>q_uEL)LPDI-$eW?vNj z&|-Cq{a*XI43xi?RcAlrxBZd*ok!=of5KnN?UdK{HCfj=Xzg2zfjJCFANG~%0Y0T2 zX)OAkMrWIgz4*6&M~B^PmMg~zdB`Ejil_{gkc$ywv2DyPIc<@M}897a39HyR6Wyst*?>^%G%s2!ofP|XPBCk+;IkE$X0Lsn-kYs>OC zXrjVbH2Yci>M7M7Nv(qi&WM?&bu9)C1_q?dxOWLXg2`#wff*8)U@;hqT$bP6mpa32 zyGK*knGTi?!4VoV%CBO{N)L(acS*?@BeNZaIjm#|w#h+) z-IasnLkQSG_4AXlXE;6!`@9MV!r4T^(k}!-?;vN>9ci@A6x^nMKu;WS;P zd{sL>^HqWHI8a73RDvuU<#LnjzyPN_@a(r2CVa6@rc`;#5o;0eJ`PljAF3TN zG_Z}|-3ev~5{Vz;KbGFSlD;Fau6B^tExZ0%<}8l}mvwSI_V5U!ZInBBgY@Ffck6N} 
zxWuVtSI$`{jV(`M>c;hGnP3QIH2e{Fia7WRl!lIdS1u?o(r@2Nc7S5b zpAGV|tSO)Ek95jA>!Qrl8J=~u{&gQm1hs!2mKcypcFlpQ7v}*1*?(4g+ilXl!e;Up z_Cw`XiB(U({i$I1RC(UDtE!mYR{dF_`eqvWeF8YWgUic3JggnEa zQPW(mWw_<)ZZ_Wod$qtu*{QSfA|CLCYaY2g$Ym1`XD7`mOr@F*z0tmjrw9jc;xB1b z1y}<{O4-wlh;eOS@1BsEYzyDB&BeX>>K4McR6!3By0braYLuek8)FE|JS@M$Kf{D$ znBV%_{`#rFTRZ(Tf8ki}puvVV_1hmV_AOBk%ak)sfsep1%Nm$@v>c9#=t6mo`dUAD zu^D}vE354&jaR0yUI){ezqoYJ!aC+mTb&gfbOo7o?IfIeas>< zotH#j3&>fHP(ZqrB4fwWjd{uDJ`e#AKO+ssi$bz?+&vd~#GN3(>Sa&JCQj5k`6*nE zv+Cq>*KPNdC!g|&a>JM2Tc$kxYm7?f-Z8>S1Iqs0*IFZ_yZ=m@(l7O;-9y;B z(9VpAMB42-o%90#?l`_zK~fpNIa?)sW0@9dN16H`8yqQ{`i>~8RzIq&d-7=<-Fsiz zKGk1_S+2#kGgXMVAEe__Jn_WaBFIqQui(h$V;}RF@)J+{@$!jJe1bLhKOoiovUkAj z=>_oaVR31Liq(Y|zNOsA`PuJ!*SpG}{K=mbZ~J`3%U|AslBMc+knIkBoxu7cUh8pl z8aCtQ&2N5ldF*2!TVDCfSB9loNA&juWdYD1qR$kR9b%`Qb7`Dmx=^6NV}6TM=~G5X z1#DL+F|G=+43vto>4cS?&r;wES9q4kcU=g1;G1{;GH*MLth4!M9gLT0yPxSFf&1x? z9V{jJgt}Iv)%^+zx%W9mGzxBSrL|37>#tA!R*WVkBuUg+k2x4Y9}Enk=G4fUKrpRqF>pXJ zAQ3wgAU)Q>s~9-NxR~W)WNFx0TMxNGI@MD<8OfD!<5W%})TK#P_GX`;Y)g54sLKas zvdUucWz&b)1S#Wu&oKEtWr6<6>iXR>vG{1+ju?=3YCkdeG* z7XnP;-5n^GRZY!u>omXFSuad;lNf}LQr{P60;4?k4V4L&U0rkSbq*ENNlx;9RoPCQ zDLORo9C5V5{5{4rMluvCW^6cN{&g3cr*93gf8$lJdR2MVt6x=~{p@EFNdy_$5dbH6 zR9ICC9r3h-RykDARR8X8{pOA$^PJ?Px7^y4t)gFLLVW9N;B?t08K!>{ zhGD7rX8gnPX?+y@*4t*xX`jF(U0=`!t7;ae_YB)n`hzcVlM z%lE8@-4|A%{vxmNgbeC$r+^n` z)PA=Z*nb$HmxZ^XW9d?J1Ac%y@x?%K%5H|^#@RFi`NIG5g1UwMf$?kqlJcR$N67Sr z@_YZ)U}=z*Y29VrRX>vc-MQ_|uRFYbZ!wTOA+6+~WVhc#g`v9WYYEU?s^K7m;|%aM zwu2+uj)r2OaO)NK^KwVwva<51M~0!Y`2o)F=d>ViKpmSLC?g!D=lvl49AT<5FJtwy zi`_Mjd?;oDup7|4@pJ_@H=lj>*=6IVjpez|e_r{(KYy@;pf~}HeE=S#ddexMgz|Ci zwburtib#)Xb)>^3SErqJT8t!If8z~Ndp8!lWG0o94}bW>ajI2*o8F^V*Q{9+vnF}3 zLJBeAd3LwQoOu$G`w>>GSW!+m;e>$Wai$(e`@s)>FyiHumvwYh!xqXLOW$ER4JJyLdV*L3odIU>Rc30(F#|C z)#K(haD*-Zy2~u)z_xEK2KF-sFeHLzt5H!i3)2LD1CpS9OM_g>HJyev{z>~aar?K$ zK#PF`h=DYsmt=^b4l_%^$urDk6lOkV7>!WD9%f(2*eD|rBdcM5HmLJ^(i6@qGvia` zW{&mz@gMuKa>IAFmrZwXD}$U`F*UUKz&4I> 
z-~ayimv_GNondUe?QL(1*Ml--pE*;645z8PR(*jMpuw26LE@y@v&d;VVIh?ET@G3ie8U4Jqwl{Cx zLBqhDs#S}yz+S*J)Gal!cvl?@pQhd7YXOi5!mo+aD0>xt{tXz-$BubstoGSrVE&(gI&bLO{i+cP(dsov-_@g~-~HD{E#wvhyTgDwt-rc9Xm>I95GJz^ z#D>V3GS20Mv&?2pVU&#btt^|TmX-1THRYt!p2S6lE6ZKC-%&O{upurOI_<0 zE?aw#C{u&0S;95UvM0m`^Zqk&v~Mk%>zx%HD&=nUeg5;FS1$Y3W#v`gU;lME^UO2j zUHBRp8V3q1pE`|%U;p)A4*CM&MDqJ`DcIj zXXV8&esQ_@;)~0xU;XOHPXk9o=c=o&D$jfV^UJrt{q30b5b?ry2E=#s*0@NgA(94B z8Xg&Dd|etN>3_IBqc(F@Km4j;1!|M*TiF+zZr^g*f3a<|!@1q|A7&05;w%hkzgr9(1Ps)<4KuFBh=v8jm@zQw;HdBVReG;~ z=h7U60=HSV7-%sN7zllP$%Y8r-P6Odc#{|ktiSa3^Zua~<$=-OGQRxevU&LAGQRTU za^mUdlr3Wu<@W2phlH3ax2^v{**bhoIquA7myP|8C>zV_GB&WXjJpBSWnC!ZZZPv2 z`Dqt1bC$hh9FIQr6qZz-QvUrv{+)b#%a_0WmGW~x_w#YAZy!qi7|W~1F;@QDU;S0N z{f;}zM?U&djFxYeGoJLMG6_6I&{pnEG5p^3>&r*~^|Ins&h6% zWnW{#rC?wF@|VjeKl#a6diB<~zBOi9T!NKGiMMN?eDcZl7HVcjT*9@TTj0G2Mr1$u z;DeFB#*&6d&U)AyhWkrnBpWA<4&$WZkw%Qq?r`I@?f_#zWsNNTHN9M?0Mc6buM^)gZlTe7QYcX&z zF;M3|Qr5cXiKWe~(3s{F&#C6;yMBf(jDIl8-sat6;Ngye9CTmO(k#0@RCSC6_B6PT z3gh6xv6*t*X=j$HWv7(oeXGk+CqJr84-A)W_di%pI_0FYV)p2A`<=I!l_#H4j$8Kx zHnvZfk+BVB4d%%AufME}Z@Q-}A8-#u53?vE%#yH7iJ6M2>51~3=ly3!H)b#b-cVlk z+SirkD_3%yZ-05pQ=U>Tx#W`Ai*e6A_r%PI4Xna{-+lL$$3On@VNh7rkDh&()m`5(fGR|z|TbFwoFAWTrc^S`d4HMHD-VG}Xbh!Uf6KY+HfffUc z#6V+UD2ux+gB{tbjuxBA+}naIyV5TRiO&qqXL#`8BG(!Z%m}9w7@;xz&i{hNY7+zH#{lp(D zqr;Cb6a6d8Da`kQVjAN}aZ*c|UgJw4?&fAcrW z^I7(E>7|#3((c}nk&%&@jnEKSyLN3DBRR|9=5aTbYl!%BQ~TX_-yMp+Ga+dVXawjq z1a7_c)-YxgugggP^rt^PjGyz*Kfi2ciI_{ZHg4P)%d=kpTfZH!H8jlESrwh{{xpEh zOXI@XpWG;KeDk#oX@nRzmy8*n!X)<)K?ZKs3T!gWR? z#)F#%>a<6EJ5XdJRnx=wjuE0ZrpDIo_zZE3>-iV)TP^! ztcFIVZ~rzkg_Z>WJn^m4tCXF;1w3H<*0mUDF>pXIkOS{aGDIpX%knTjvjly;T>i&? zk-Z1e<6$u zG3QKz`!znwvMl#+xLN#f{^oDO08o)vu{ZoTz^UgajPT+DGmid_Gf|tE4W@0RhNY=dQ%u3mhXixd|`R+bDtYaz+B>`F_K0{KF#Am zJDX#n=nlV7Y-%@v9TjrE#>?2on(yi~`HuecV;#a@O+BN8Sxr86R-?dCR%I9IGWdt? 
zxlrL1LtXet4BiDAcLqrivVH4X z3S?=279dRrNqh z%GFmrSe9{1d*Af9QwNqNPR(MK7DmoY`PrX(X4$@FYx%~d-yp1RUU$jX7ryZMa{hTQ zDo=arQ{v>U@2|hN9IXV;9(f@E9DXg)y{@#1a>QzUSw_NxaQOD1G z`rjhlOOXtp=Jxh}jH&UlQF7r{bl(5zz5lBm_sB<-^IrTSE;ziPY}l}&y!FBh%YE!K zdEa~ATTaKA@mpiz$}6rY$FO1Ev>vuE{Ikw{a{2Q=`_uA!I?i?YC!WdmC?#G8&l3 zh8-QyLRpxLy({?Dw4D#ZxHWzat#nlXE*zoPx8lA2&G3Bg4F3Gw4&|?qG8GvRlsKui zY|6XVc9i5AeIbwv+`op0N<74*3i~{U zCb%OV>9EyhQ6opP(tJYSCkPMg$ek8lECS`_27?t=epOHDpo*yOP~-3m zuZ8JrP)*HW;>^XU-wj!1(mW#@bH%ye3Wm_4-2pOH+Rt8Y^ojZxs zvbdprh$De#CdQbZP=HOdw25U(Y=HMNqJcq9)FMxpJE<`H(-`o=L6<@Kol1N6{tJx> z!*o710)&x9fKTD5{QIucAaHg9)*i>gX*1hxHH2vU`Z}L(jkx!Y zw1TJ*_dZ!0a}EPbLqCF~F)*VYn)XopsH#4HgFdRd;NPl1NC(x;Yr9OJ#WNB#m2n|{ zJ^1lV&!9Pc18|h0Nu3c|`e+c+K3WVMa11ouxcvJ{UKN+LDy!z!Y{V4+p zBW)%TUIw?%76UB?_Adrv;4JyKW~F412Xj!Oy8t`zYju(xJ1XleQ7b!V&K4I zAbaP%$T|Ao8hC(Z)65x(Dj8)Mw{p%B9`aQT>$ zjKlI(R8-b)zUiizZJ22c1cjZ>J4tl6-FoYs5zwX78*X~br?Rinqr&fd7z7w7!bIDS zcp6;2O`M$NNMpoV6=7+R_-^_%XwrBv+;`Ku)NFuL$230t8K(1TTAgtdes@~Sn1)G2 zJivc6Dtee|o2t`1dz+{$ zp6%BI-|E7^jM))eL|5D2>C$YUIpZ`lCU=+c5h2TAnZl*V4xgrJJbiTDP0*bv->cHn zDMVM&_O-=8i-DzKAh|4|bl5Q%ldmSi&f?AanQHFKw~!w7B|-1-j9`U}FRg1a&|=_# zV<7r{a_#inSps-+P8R4{J?2<-erVNtGx0t?RUlB``Ek+}djw|tFb>#=-G>t0M_50n zTCsg3E*F|%3S@@alNpSP9xgT1Q0c)S86NU@S`_es!O}Ol427Tc>t_1*LuphxNmHqUqh7JZ4doz3= z1kRL44+F28afRGa#PArE!xk-!(YGfG|C? zH#q}ixQ0d=AsIi75YuE{zGryCKk!eGg)!N~Qad-DOO(CRgvuofY7dsFKU#sGBq%Y0 zx%gEDwE7dY2xsmAb`fdYBo%jo!^I+lc9hTk+3wY#2+pVwqq)r?bICZ;2XtGGTN(VWO11Q+ien(Bnj<_S@aeoV@LG$xE|lP~>|! 
zS!$|>nE(p`$ewAGPZZE;mSgpDV|v_Z%1j38JsZqbT-_hQtN@0}wkZ^uk(Ff&o89{` zAe=GjBaWwVO-_un2V|&BOnICyo5(R7dfmKE4Ogxhj+@lgaZ_oV-d3t z%rIyil$P(y_&=* z5q=mE7%5@A@YImd0MZyq10&08o|%tub;dE=a#@ap?Hsk5VY!^;>S1Qeo1EuHxM|;} zGm%j13`sRsYPqu`GHeMonWOe40C{*O4b%nCF0TI7_@Ldx5P%jMFdjxAle%253LXs( zjS#=;({Mdz`1x-KXWAyYp*4mcz5|2T?sQ3OhDR*H&r4-U>sk!77&x#PNZ$4-M^>ix znJnJT+qGVF&8@{ii-CC;skgHmXGmj;`k0r-iw2YVTSlLL z8zyY?(HW+n>2-$XQ-58SIX(``ALc>C2YJVg2t{v789r`QRwacB_*?0t6YeHvK7vLf zoG6QM72YBewn+j5mD(1E40+q10}Syl7-^VC6-qTiPzVuhD+O0PjU%mhKBNUB)IR}( zkNT|nq!JS`n#jheuQhH5L<1^(i4b573}=e;iPVjM`)D!HVqno2khv8Ex~ien_ux?d zYnU7tyv%2EpR=hM-Xv=Hy7nI#8esXT;4^imXrCe;ObcfT99Gp7pR1jRI)&fU}Dh`5G=djO`n9o-W2sEDI; zMwy!-Q%6kesY)*GV$(WlXSnn*jD-Oo%lV!~B?)dq> ztGpJLS-1zo*)D~{1g2O%cOv(!c)W7Y6ox5&g{4^@(JP_FYzTM6FjeLk@uU&MY&wlq z-xlYVkaTew{*a}L{(q5mina|7t~@CixkUePA)Sy5@Nl#fv!@{^2=*Qw|2fED>uaWD zTW0#o_bii`qzM;aY-gL7<>XxH6}^rZ{iFhLpJ4+Wu`Z z&|+Y57)ZXV3^o4?zNYWAZ=K}+tR9w8yAooYRRBXSd1io;&+6{!yx zbBHC<(+ui43tJtr^0hm0``%)p#lZf>0IUalM;|=J%Af&O0*$hYsSh5lObKJUVroGt zqMxR-m%pdz!;d`X*d@-01Sm5?xu!%e)zB*p1(Z%ddfXG@Kf;Y!bra$lm5)AaLVD&A z@Y7OJv2YuERbs>3Xke?T%#cP%Zi6?a08@eSbD`@t5*CKF-%G>*xSw$!NbgWt|AP%s z)mUvcnz6SGK%f{Tv}H9!7&mVWk=ppR2U>~|vc%H2$(M)$`FXk-C{PB-kP01tl{u?w zArHp#EsTR2Am_@(8lg2sjcG!iJZ=oaP-2EhmX+;XFxty(qBE)fRBqhkm8Z@*%bEs! 
zWwL@-7s zkps(D91%)O(~A>#=bUJGP0wlwU=A?=F*(kmHNdX@e&=Oh3*N?WG0hwtJ276W0 zNC`$R8PV|6T@8^Mu3}n^33ZSVD5POUSL1v^$)IS)s~e#;a)8_z9nO-tp4bs3?%RFN(RRY3HeZV=U=RBKpwc_fscqpjlFkeZDX|sk!77&tf>fKlaZ`P9DJ>nOaqJO&&aZkB&Ld`%pAyK4UynSL92 zup~o7q2L&(z*5DqK<1R7`w&3A9_#DUE1b4Z9pR`cT-t+JV#b3f6{rQSaAV{=09FOW z^u!FsH53v(kVeMamD5mZ-TuLV#wBKDB|<36rC77G!*nh@`zD#?qK)=-EubMFjAddd!(-D^V#gs1* zktcOFB(kkDBo$(OtnNhhKb;XJO^DLOWnE_11$y~3JfGK`CQ0nG4P;2%1 zlTTsYhU0@@D_>WBj&oI)YKSN_sv$xN_y_|e1csTQ79c2M6RCdM9(cq>cZw0>$B9YY zxz0zZqdx0tScWN-P*c6RnNf3-D6|YWpI$Ut^Y;WU6_8!Ij&ajOSopoGbgloMF)$4Q zO>h^?bnkGP?mxPW&#WjDJROE4eCSuC1DmWhRP44(97nBzI>k@YLubFA+G{F(4 z{nKKg#lX@q5NrptrEwtN$+tfBi+&oWbFd(0FL^gj=m||69@SN54IYkknVuApN+rj< zGJgAPG0R;Mt)<)?#;aSCSjUff?{Dij*g5To51 z8@1~X@qrq`Gb{#|RAK*UR9Fg^?eI`34FJbLIKN56y8(5|2IF&dI%lsDTrVerYXx)FO)GbboDoR8uKqnx$K`HtkqV>3X;UZR=7kDyoqh zPKA+|N-dS9{E9e6;AIYfHD-7;>4PZ(Q2sGb6Ez*Ak-K1IK1s(x&Lnl&p!qZtaSe<} z8h~BvY)MqfYl$kPxfqpEtIA$uM!AFjjaAE@b;;C0|Kr1FO`8W>~<`Z$a878jnmd=hMC|z(9+l1qj*u&s~z; z-h-$V`Vau>fN0v&Z(r@t?Q>N)lvy!oK%novY+85lAoytk9pqq(EbCK;~|h*)UBS5T+t)Y1=5!TFE+TXZ1-UyijyQsEy)m7YV6%*jQxun8 zsl2$KV2Vv^Bb=f!IpJl42q(`O=c%qSJ;jJRa0UiP$~f<#XsPs(pO-22Fftm)cslZ; zz;hW$^PWP?r+zxEFW)m>cO1io;V7&Z5PDZZhR0dxoO|xGJFVdvFXQ?Yj!)xzVWHpV z8)0UI>u!+hvu}ShpM6WU=h&9D+b#U=gkd!FmJyC&q-~Zxs%)5AU8ee%BXn7SGflg3 zG3As?@WAEn|%A_YH8e z6Pxw>skbG!1uFK&FVk{y=-srH6ZnoCkBQo z7hh(@?AOT9Oc_|Vt@ItSq4aINwT!R7oOb5OXJ$)W;1>p({M_LR5huUsCp{7ad^^MH zy3~c-Q_~UNFqIEG|47M)@A_2(lNaB+{wk3ooWGGK!i+?F5HJZ355&c!)`b~Bp~RrcApqU6vu@l3#6uti`@Rg=0BV z#tPrE`OG@(F5L8catSlr%e)4V0oKhtyja#tboDn**4=XWo&M%yeErQ=7{2R_XP;wU zLqp6Tatvmc-)FZoj_ITB<_G`ky6a+`bo%GFuq;o&Y@+N80~x;?X2R<{N#Bh=c`sWu z_=)h?jINxdjbsHCd{=X5gv=tFbFb%IX%qTXAF60A7acfFq${(Q{||= zgdemM1@)xOF=VGFcFp!r1W)(%3=w8uV&Uav4_QINA$RS=W|2?5F?9>CaNtQEImVBx2MqcPU!QR1}kpoh0mB;`qe&`_n-=qo6vaZPWW zJf7Stq$5})hpf`ic&%$OuqzBCzbjvom*xM3mwtEid&mX=cFV1u!iV_k(#q^B_|zbIEW_Ryb|*`j>_Ou*4n~NZ znS1%kMzg&tb}}%`%kr#g6rd;&by11zK~b7TNQ*E})*7FL@fBV&HF-^=#0A1Ag3sz# z7n5`h1H-0a 
z6rF$S&Mw0l!f(Q5Jj=N7(wzk=qv_@5&9_SQhDF-VA%@eIf6bo z2;FekNtCa5kP9=RCk$n0<$9N?%uLczVQkuggdsy4S$^R5_l%P-a7Hw|G$^2R4Hm-u z^>XRn_Q^5Ic|;kr1JhqPq^~E#ajMvm%*{og8t)>5n?7fN*-yDN;7ud{Qe9{ZejrL8 zvrE!tZ|_KSeB0^tq?uvz^Z18J%EB4ha1eNeKYE8L3$!;*S*GEyN$@cMEstYJ4Ga&! zpU^tEnuMR^+b|sLP{Dh&rJXUft}znju?-0r96F+G0)}+peI9nc=2J&J={U(p{oriOl_o$vzsx(RK~10<$xyUT=#qRRdDU- zJC`C15h6JV5>BwZKrYoa0DO>JIUbxHg`E1fMjMbT?E2U65XQ)mUg%{MVjM(C+`_r% zL*Rhwfivo>oP*Cb7#+odWla1CwNQ8tNVo~iVjpZP>w+C62E{s~KXDJy*%Kf}#OzY+K#%jLJO zJDxB)<)Vz~%tII%SGcytVx6*8+zUqqKFez!`t2!vPx%VNdWb)9DqO#=BS-t7Tk5_A91W~Ifr23=GFkhmc?W@O08z2kFvJ`C zSGyyp-(0K=?SwStnA4Cm#y&pB#~CT5Q1%E^*Ayax zQs9_(Hl!lb`1*xHtFnSH^sbE&ZlAP_Ndzi#dvd1jcBb9mZegD0*X`%KqewlB81*rG zA(LrIoPHx zj=o{}pzyCi8K0ozanv4V9~vM7e&oL$0|X^}W(tEu;{+v;-X~xg-@K#aidhwE0PRq4 z>QZOsCdW}C;GbS96^8%Rc8n%R=U57dzcVw_7#U%ljH@sKle}lc;G%9K7b9U-r5XC^ zr7TkzJ9yT-Bq_@2sG-h~c!fv$X-{pxGE1AmE6(Kf3^FSqz6p1<5rQT^Z~|`P9vS(Y zI>yol$~TK_m}S40<>?(bq6{s6L>ayRE;wa7fEkoH2+o?GkL)^y`n_iBgF&0!#SkT) z{3egS`n9j6KHwN$?gO_o8XnNV4cA?VJQ^>TUi!753Gtxfp%Z?L{E{aAi6{NRbMlgM z!uxl8`qOE+C_5ZYo+2MR z(Qq_xXSt%&f$j~LZ{mOnn6(ewXlxA+RK<88Ix*(ww!yE^R47Z7D`X!;;xh~{%Z_np zxh=nNbvbYoGu?#EDX9^>&wn)b#Dj{QGpjLRB=1lT^PGiNcNcE?cI&K%{RBJ18D(#tF_R1&G6Or%ZKX4$xR{9k7eWX=n1YUbhX%@K@_5 zQ9lEwQMl={oab`%j1fVoXCNPEkxjEh^d5f^_iy+?`TC`omTlX%5vLj(DwtMIMi%k<*YApO1wH-S3w7z3+Vp?wm4L0jM(@ITddj;~Pf-VqNT=$JwUe>>q( zj5Rwj<0*K|-$vG$SxjSFpLEu7Nv<|Vfar!B6>9*`^T#i~n{rBUOo+nGlB_7a>K;trCwk6Dq)@vUVUgNeQ4j+9F` z+>DLUGR|oFG)jJd-`3L4a|i>)n=@6yC&p-}p_PGGN8qt^P$20rz*8(G!Z?XGS0SdX zF7cYi@EK;d#16loGNIxDpUSDUSvU5#cp^j}9nQ?e_A<j&C>wSWh9;wqwC`X)vlSWx(Gd{^n;s)>1ye81h(Pk=-Y%-L zg3;nqw?1YR1yyWs<;l)d8eThMv*j2{iZNRCT0duY*%9t4F}$Zb0FQLXqRK4kG05( z^TjJ=w#mhXu0Hj#%xrB@siaM|U?95$P^T0dfuB!Yx0brlp3u$37hQy}qwvyMM2kNu z^le}HHHE+LAqP-EJ{^i4` zvn5v8HVauqm7@&Qqz@eFoFVqznHJ*@HnLXq#J~^rvOGG=5e6#nu>_Djl#^r@VKF1j zdz6MJnHkfhi*m|XWqlZ2OiT**_Rpzx?@a z+$=|T8(PMUA4({73EYbpob70SnU`^O!tTbie$oHPN$njB|m~X?H>Mb_MxpY=AAevE_SA4-ge8xdIga=&{O}uI> 
zY-4b9d*AYM)-Rl2h9323X0f^DT|Hyt{pE%$FE969b4_6hg`<-sa-fLLXo}fmR&uNv zT3%Kl6VX68aDy2*bpYTg-^S5Xh8WyI#0I`&87XqWWw;J*X-p2Wq|@cRwu8%M9Yjd} zwf$p2l&40i?PYr#*VZ}?!=HZsq&?$i`b@LGo=Y-B z9L<`P!(bW}c80{6#+)guk9tJ8`<{(u^{SPqQgpmT@u%ESb$8Taz@=Dl7%CbSo-M16 zISHZAUsj^j^|5Tr5y#01XBuEukfO@;`q%$<*}Qc-Og|3O50o{lR>dg7Ti^QD^7+qy zK8yneY&wmQIbn{nM#t`VH7-z&84Vpr$?6wk_dh^m)YJ+W%VmR{ci#Esf(tGv z=bn2m#zpPu4Ue0{O+dXImDJFf0Dc%9jS^uepx~!Pfv54+bOwR5J&eZHqU^uFlv`(& zn`+0mWu5n3edql(JV;ujjd$zizKK2>ITT*@1^npajxT5Z$Wdju52YB0Gc5lZn;I^c zUVd}A>F(`}y4xP3DC`u3rCJjx`~y7C`^D##a~@yHfBnDy-*Uzi9#ek(w|=_(pa11c zWy_Y$Wp&>@<=p4|L^*ZcU z=g6f4W>e7K;NS$dz-ICrMmg4SB|RJOr8BgxFk+_1q2U#i<%KW&i83_3h0WEBJkMa{ z5N8?%b{iY>KmFxv%bn{VM7h@(aakC%gU;BAml$;fo{BzG{`6TW1{x$&g_U5?H}IN~ z<`}UA4N}C|7|YUJo@i$@-V6RPXt)2MoO;Y)dG52HQXco1#+mJ_;J5ZWqK7C{i zuEH{bjPct}!}m2u9aT;^;e>J>vo*>uWnw73O~<056NZjysNk@23&Y0&mQZpVA?S>W z>0<<+_aWrz@y8!uRv~w ztfmtydzLnVx*h)5Hem<>%QE>}xpF0PVoSN_o_m5W_Y_|DAngFU<00OUI&w|Gw)~qm zZ7Lf#ZjAPiI#}1$tLQ(D4ZjB-cp&0fuPOQ#rm$Z(y8Whx}n zna(it*7=OMe3Enc?d(sJG=icf@rbtEb&?9E7Ae|(Pf>T4Z~DolbudM_V&5vB^rbWc z=8v;>)?$hQ7zcply)A@|AAM4}tE?=O%uY{khd&07DXX9MbLA0F{RPI2NKd=YAmiu~ zpnnWo2afK!tz7?+e_}9-K`e}n83udI#?=`oo^(>V@WKnrd*1c#@;P)7+fQA?8AtJH z-!C6(yhxMEdj~a)H_AYhxA9-j?K|5^^{FG;$QBn^_TNC?>aYLQZVcf$2{*a2 z$6$2(aWJDw+@>=4|Fd@{fO=Kcpoe&G67OHjV zbHr-3Sf}!y!0aG`di+8f0GPA9iT0TemLmU9%?hl6g^U?zsOO^dOX^%`Ihf;L}|C4mL+mz&@>-<{0T?Kt6tb1 zn;u~CGi#ttw(q`mU2*5KYml&G3yrp`-vt|}Xhf4z$2iTH5;1!g2)=GZ%$>F%4x0|K zG=fPMH2eoHxj2qHBI3@a5hor$DgM`U&Wb<1oQEe8eouOXiL2Db@vbvA0Pd&8v);rcj*XFIXY7osAi`P&a3kZt}gnX-u+`txiEX`|%=mSl* zX23kt3os^aFb9J@1JMCNp$117)y%0K^DGI_e)f&jBY$gQAv|(kXZ+3k&j~;)?phWx zXZF-MO>d5iUp5;CWj*6| z4{+@qb%BP#qrMVI&KgGF4UiabjfnPUzde zVH5NGcIxi$<-m$zv=*aNsER?O+2h`yHuyk4Z+heWr9C?B^V>=QW4zD8nVN*6sRGARN2RvD^WJopZ=H@f>Zq*8-FM#&LnONi z+(Squ^TYh=Coh>S%QjRCm{;py*;W0kI#+d&S;uy&5RLiuqx{1VsXc6vsXTf~_K`&0 z!3Q4#Lozv*E?r9bn6EJo5BL7ArFUk5Nwhvf(KBbvj6)$Z?K8(pj^J4}KqkX-8-2=(2XbNmu zt~K`O1NW~3I_O&nR-sX?lzF)?%=a=Pwk`2SZTzf*{cFC}y=lBRIPOZ{O=Iv{ja^1O 
zia(CHMF03XYEt5N=(UJ3y6G6e6BOi?YNmAiLG!L zYZZh^oB%^jyx+`y9YPfMfi&Q>M4AN=7UPJx6cglwXB`sT7?C;j^$<8+y2O4}VA_jt z^nnM~v(|Ufr)Yk&To_Q-MRB4G5k>Ks>whtk10LMtM|!^*fL?NZtZxyTjqlmgris53 zJ+QkyFm`TN_s;nWH;6kbd?C&jPnm9S@WSraZEU)|Vnig^JSRfi^akwo8?mVeXEH%Y zV%^FcF=y%Jy=0I`SS;k7izf@Mgo!9a_peAuJ5D$nVo@7@>{&6suV=<;53i{UmQFc| z`;kD(jDQ*Z5`yfGc=KD|9RKwBFT`2roE^(ou83(fXT--o_OX~ZZ(br?B~-6{?Q3HV zjKWdJEWrfpuTr|@K*&_Q;SFzyRjXE&Jt7Q*NbM$o=Z8ug-7utaI@nza~yQ?X+ZoWM~}Vn{T=) z{^U>oBmqF_rwgRS$-WE%kD&s%-O0Q9W_Qw#Pm`sANb4b#lgV}RNRGdpHQZ38i-V@oVLbXsiQ(jBYr-vJ|o$1M7!L+cEueLJE} z1{cJC#1Y2?>wNtAU;ZF|e(h~YBB#dZKYece&aCGd1q zu-WY~by{1j#9VRRHa9KGT=c}Osg1F1!@5|sctLDL%6{M4`^j%4jy&?H*g_jt-M1yW z*&AX$m%Q->^jE~6zWYaU{cS7ko_HP%$4g%_GtN5w#Q3+XZo}`@P|R%J8qb0WVO@-8 zJ^irw=db+`vo}q}Fb^E=jW(uFbKSO7Xf)6_nol)T|8|D3ZHW0g*aMNdHCm_5jQ;IA zqM1D{P2|-wFcOo{h77Ujy4gQ>qO*`|k1i7;Bk3LmH{&lpd~vK^xsrSAtJ66}Aq~_R zs2vgNCBd;4Gdp$Pbm-g=7`Vu=f0gJLI6%Rr)n8w=#-oau$C&Js?=PK zt@Cr+HZ*s@q(q8DrTOZBLuSTj8HSFIj+}2UEIu3jRy^l-N?y%F+#s=Tyo|YZaL=7g zgT$-frgjA3faNzL!8bHu!*1TI?^~1k!fmIf#&2|-U|FMO+fn=m*?XcXmsiutcvzRL z7x6M;g|yBh9`&CbGhpao1VWm;EJHe>HbiZZ1pSmrMLY6H?7g70_ z-n7Dshn+l#}O2mBmPxvEG-wXG@I{4MjbbRKgew?9t&Z}|WjC)s7?%Cgx zKYs-T@^yxZi6k`&&7n2}&0!zb3J6Y;>DL1m>&bLrqsr1Rrtt$G_+V_^v^k$=95gE)fOeQYb5=~B zK0R(-dPmYSbLSlz?aZs?D^?_9WCjdXYDA7&a&&Yd_>{?AxL|&)BTu!76#tO$aB&Yw zG!Ze;1N1YMrRo2MOsxB_L@D-U}hT`q-xFlZpy4S_aUVdI| z-nu1Te*Pck0`QJ^yd%zd>KU=-OKUQ%9e?Vnr^cn1UK-#2_P66hANo)(7!Kh1=bsx&WvS8N3IG_iEDb+-X zJb|%|Teioi{%K?U?AoOe!0oXG0u7}Tw)#j00mNojlD@ufcn|h&Q0 zAL4C)`n)({acS+xKlTUlmiOEozr68YBvr5*G$0j)z+IkQGYoSAC)@`k(hp z>gj{vq?|(!YK{+G`hu8-b_GkHxOa2Jo8S3$h|}(v(%Bjxdgq0)#ImRd=E(6+UtSxR zU-7ST_SvV#>t6GOxPMK=+7)eow0C~oiI<$_APaShm^DFP!o&zI;sTj;!6E1~&d$+-~u-^q?0FGmL54i)uT8}B2 zgp(_EG4~)t#$E7nyp;lecl7Z2J@PZ z)O*pwg~@;@@s>Fde>k^}Kkji!^h&I%k-h!)+v5+Uv_s2#HLN1 zZco7s4vfajgiM({)oDfFWKYtUMf%Cmd9}2{ER4S1 z(;LeMfmFGhjV|EJykqU)d5Ce*^PmDu88Y`Svim&gmGvwhlZbD`o8SO_<$Q3ClLbsB zV+B%HVA98aK+PbVn!sf+?+S19#N~LZ#p;JqoeAUd*?;_8yysni9?O<5kI%Bsufj@W 
z;h~3R{cgJT7RG67JoWUaB;HuRaRY+M*TT4LildG?I{xzg?*m>Z6Fsm;Js=Lqb;`AtkTkUGG|y^srTxU$G9%*t{T;+v{>sX@7o|&lsz%hD zLt-es`Y>5tb6v0pCE_@FyN^x1Za5y`I*2RMfjYeIq_kRFND&|eAOw*N%Fr;#0s^S0 z{7dJEbSr|{Js#RK`<3?98#iuD36YEGu`H~={N*nb$-8J?kKMa*4uA`w(yV7b^O?#0 zz4Dc>j4yrp%cUcDfBf;sClm6W?|dhjkV@LX>w`k=F%h=! z#{M3ZusjGDy9!nQE-2u6#tYKjJYgc6G`#_f42TxusZUrKPe0|*c*d#o;>jn@ zMuV^=e(S8oal;J}ue|v4@mK%nC$adDi03}zBy4eUM+w8Kf?BD&iwHt^%wHPf*5#|? zEC2Rmm;$s7z{s=Dd3sF7ROuK0bpv}hAaq&G8+Ct03clb!90OW_fIuH^MoWWvlDzk& z@BT`by=(zWj_UFEnT{tIRuOKJ^JO09|qQ4}Zz#>WF8sr=%X%rG+y<^ns6) zvUvWpPmfof_q4d^ywl<}=RP^kf8MD%P6OdJX+e7>)cr7+Hz$vEguswAY{NJPC?;+j%y)9_ORpmCkdnT`|E-80{+1z9+M zLG<8x>W1sCkLB3fndVT4=R}&!Cz+KyZo55}Exik;1RZhMyhCH{n)~9Gn{JMEXkg~T zd`yOLX~(o|KJR+EcEl|=-W1D8--aL{H4WtB_1L4A#Exy-n>>a6C9PTJ_19dB z1Bc$smqE_9Hjx$-fQGXzCBrM0EsvWZeAl5ZIB3R9`buKX{UyKxKw-bM+gmgL z*CDLD5SpY7#$FTe>>%=)2opW9r#+BFzKb})d*{3Kn@op73N^UW$!e;7Cv952ujsrz zEqiyqtI8N9C3fedhf5=3*JilBwIdf>P-rG|L+VJ~6gKrvg89%S0I39&iwjdhuMmwR zNNonT=D}+&2qa)7(rgK&j%|sD%ZaI6DBKL06P>SJK;6V{XWEd33l}CMpgLa8?yR%U zN-cyPuarSai0aYHF1suli3=~hFioqpdw%bG-y65yjOdwQJ`RL~VsKz8F43#1E8|rC z-+%x8`Ofsd^I6S^X?igISkFSCO{&a>W*tfyACC>W^{ob6bV+kY@n;sgum6}ofs!7R zkR~UN9Sf!Bbo59lun?qOtN>4`fA-E-#dg*T54TgBaPGM`#Gk$8y+{_P$Go{m0u>NU z(7y*yZ$+!Y(n}uVVNImuXt4@P?`!qs@Ayu^1>^CDFN#;cVoChs+K3_%V&sWfm# z!4N?(9JpI#1&7eE1E>XKpa+@8kv2>pwwMgKKzhvnYYq|dHGGR^gq8{1`I(FiAx-Fb z)}6G(s%La)!ZL08lsIBR#4rD2C7asO_8_gi^XB{Fl&2hmm$0Gu;rIV7Zur@jSa{g1 zSa8_9*oH=@lYJADTe|r{x)zB3*nnYLhf$eBb4vEDJBR+ZA4@lA&*Ictc&V2H$ z_{Y!ei0yrP&nbI9jywM7n9DSk!Gkx$Yn^R_&qDP9$pc}62_%HJqt}f|mtE3JT!Z}& zBR+x)<2w3M!kP-fykMFG$tw}t!1Jo*%i^Z#1M&O6cUpY-va2Y>bfOxOoE<88C?jI4 zXv_f&Rp~MyNqt!t+E4(18Opta1?PFu>Ojyy%p@}coXB`caJ0}LN|}a$&niz{g3F)_ zg3f!Tr{XgAkhP+vxE)4AuWk~%nrkWKFtLQ^>J|4uAmKTVZ|2Ovc>plo)s1&ROr+H6 zN{mipuYv@o=3dM1#?*uIdcd3Q9UF3wo7x)jAEXjQ>DK6kaS(31A<`$YhhhYV#(gfE zkj`%2v@y$Chjv-*ikqs<57IvCpV~y~t|gwLlkei2u{bGcI+al)LzIC52u+Kedl|fH{`}|zH==Qma@;q zIrLjj1KQ_RhQPjZ&SwL7pX&&HtoB9%-*;ZqO)~kp$H2LtUXCckF>^1Bc!&}(3q3Mg 
zAQJVi=zf_7>Q{a1v)@{8&y4ZjcNu5izBdr(oj>1aSzP_z{bAO@JgV>WeJ$QRI?JSp=dW?_pq3JfvBmZSQ4Y zU!Mnu2&b%14MgV%G-J{RocH&Fr_hEqB3v6xgvpoz8)ffTX^i9IzZ8}-CeC|d?uxH{ z6*ylDPXAWC>NOX~RaadVD;YQGjyJvK&1h1m<~T^VeE1_DNy1RtK-2R7^Vfe(epD!1 zvG00|3@5^GP!EVBDmqrYC;j9)Bb}-yL`~``r<@WB-^F+S4Apgibd_aE9qe%sX{y3> zQ}D2o2_*$+HMAk`h6$Oo_~@8_($jF4Hwodk_V?IfD%nwUg0!_|WP7Z-;s3>&6}Lwh z2MdWlc)_CEA02IU8lEnOK_D(-zEkbq$j_6vi2!jl#LLh7!_J=8Q{qdX6SqdnE)YIb~oANJMtASi>LoCK2clu`TsPc8Gu3XQq% zrxCLo$HWcl9!d$-0a6EX3l9tu!lk(I74MC%ky#*iyr|(Btu>~^rEk4BPCKP3y1>~x zkVxr8Y7#2)Vch-p!Iks^FFlxZ^{{UNdK>rx7GMCTvB_b{{MqsDx1EEun9b`S{X%T7 zo6O1V9psN!BbbjS4hm##$c^d}$SzPmTPGJF83v-08L0#Ah=oaF;X>-~SQUmBK$JUC zOP5PO3zf_-Ps!u7jKnNVruZaIJ!wH4cg%uhm^#^5zX|4M@)Ve*V;97G|Mba8=&vU( zzazcV`6dPmwJ^kD&b-h`={C)M}h;~rjvJTblK#?*%T9~nHKe(fLsZoK3Ve;0`@w$C*6j3+LM@BaL5`jGsn|9k%W zpW>HyZO=VBt?VD^Z0d=xfAJORc2Z&pD}L%n{>-mB4u^yl;b2FIJIZar8AUJtZb$Ta z%YRqzA($Z#G;9I3HK`O9rgTotan*}WSxiN4{K(&hp4RcKhRPm2FhSzM;NM{>nG8aU zbuQ0JCTUV4a~TAYdJ%#Y3H1<-s#Dv<7tU$tew`9m`lMZgmpF+*=eo?fI72t0-AL3u z_hJAdTAZ*2rbC3Bgf(UR&NC?#z6$(? 
z5Hob(0dX#a;%Qsb*EF_8M+nAypZJIO=2g|J`nChc(sHwomO=YurVRH#ur8S|_a$m7qZuAkEEz@3>=brvL%1OP zl<^ndZQ^6qhx>FuCNleP)&tU}!lR~|-n;(f{AVs0+-s((uA#y)!mRKr{xj|V2+XcM z4_46yrvWKTHw)VkHnR1NOhhyqdc|S8BStbE`5QnZBHCcN14*FKaP~rwZD%uVAMe_b zs5O8zL>x@(+=OS%(#D!=6D9yk507HE^4ZULM!K9l<>Zs%th3L;-gbHX=m$THE+j+O zVIpwexi3TdyfwX6op;`Op)K6ApZ)Ck>_2=a7ZCS;h#PxZOgCe0P$3I#vA^(zFR&^4 zh`8vYi;@}9v`fjLi{MLN`qJbaHT^pO{PW{o?|Nr^hrJ{xo%BRhTub7EAN*h<4HrKb zzCZu-cf~ip@r`)G6P|$AmZNGKnX~KSRjZV;0rV z`5Y2fo=w3pLX1Rg?QM*$gN#4^+v|s#klG&_r<~LjU;5k+;_@rM#kfa&`5*t9abThF zybG8K_|GKN8zd~qUXzd(%z+M@-X8CJ$2qa(0Zg9W_kL`v(Yh(JXJbT*i+J~jG(#GI z%5TMvdr<1$>bbrk{qj_B`vKv7bYj-khO}P$;cft;2S!IJvSuqR1!%r9^Q)*ZaZh?j$mS6WcWigQ4b3t*%#1II2SjT?8aBdNAd2aVQ7Y6-K2U zBG>oY^ULrEuxd#RUV9;)Z@B5kTo_Cz44ap(Ufqmq|4Wcp#JHv>Bw@b@Tt4+$zI<8Y z!>p?WGoc9vMgmO2-0w{1ufg)U%2<_qV4nLri8$iTcM2v#2jSXIE!)kv-kdm80$Azp zfPy46gfiRBXfIs=H*rqGhK=iUKZ)t)&N~E&+ygKR>+&aoCLZ=$F&q?@<;mpOU$)sa zwxPmhrgaR{g1&wBWs zd00Q|vl)iMy+UQYAsB$k$&7`4)Z04(bF~(M+`Xv@b1de=yk2+Rb<|boEbGyf6$Zt7 z;WbwX>bqy5%##8w8)@`;fZ_ZhG9yYfcZN1glN{9q=|88i|a-Fl;L5fsd;0( z9COD_?IA;zKbe}u3&aU4zBB0U%`v}s&3!PjJ@G8)lV2bR)yv~K=bV$wi*RGG-^4E7 z<=+zFtBEkt1H0@2@uoO3wZL4(#o}0hZdRW=cW(UR7ry{!Av^-6y;k^Uml^M)m*O>l zd&P)Y7b`Be)q?od)1ftS80OHQj<$xlfAvz%Sa<+If~tC^P|hMJFhmT$Mi`L>4pN(m zRG@du1`eu2s-`5v38(Z8LU0EgWIdT%mO^EIBiBEhY-uIX9jW@pWjdAAFpPmRv z2IgC+V6#)Hw=Nv@RAOCRfUf-4ug90!k0J*uv7wJ}2R?1e_tBt^&!$?e$H(u>XHD}Y z#yq*4Wf@w$c4!mCo8Cpl*F$XhZeT-oKVGIJ=DN{nDWzR}b?kns&IU+UvzWY{{3gDzc1k2l86s~_*3zmXFVg%I%`q<`MbX#-P<5o*kf|o+~zp* zNlW6V*W3+3O&u}i7=V!)MibM3DN$M!$*4F<*=a5tO>9n!Af_DD7|%WT<8GT^XL!UDPJ0p>!QuG*=RU_4GWPAUwHxn5EP#V*7FYyj zz-4X*>tN`$Q&Yo3-AC{=*1e;djp{4oWzSm>fA+@n;u}|f6Xu~cUi6aZLij}d%O`&n zCmpva4q}u0dq00${QJ+=qD_LqXvAh8_tL-f%mtVZAu)zPv^cy1(?+H3Nb+IAbTk2e zqR%n?Xc~+QU(^Bl-(auJU+oW9B5z68ccn-_gig-RikxQF=iY)$J@ z+TF~2?FMJ0i%<}{jB1+yNYuilXx3*RH^DgP`9gk4zbb4Hk|j_iPFxgr;M%wcGpogm z7NvyU&EC3m_1p;U_C-VKX^OrSpKM3KqW7jF5yI#y-1HK>;&V>2Kk}g}JHSHXpHgVGh-@Wy@GV=cFY0uBFSEI(TSg&&eYCa?$(+*@s6Q 
zengJnHoQ`~Ph~ychjiI~*xY&f-i`ZedNz=+=T$W@2ZYNyOK|MzHy{&C? z0f3(84%V+Y!FGthcWmE^J6|@-Z=IUVz#Jx_aeefU*$$!A*ZfV!)ZjPvl|fzX2z}|b zfqqTX90 z$J}mWT^K~rf+~e{!U)VFNJ1}p?!l31bRU>7t?OkT(-yq6c=-1HzH`Mws&|j#p?QW8z-t zAOHBrxyM@bTGN6m zBbJK*06+jqL_t(Qe4!N?%1!A`aO#^gh=qsmT`ZQXrUH!n8xcoa?6g`t6H8rt3#-@N+gam3M2jsN}T6JqTp{c-IbcR_eHp~DkQ z-PFoPR5S$cs?a6z0P6WZw0BrcVM>L~{s{XuSgqcE>F48v?>#Thz3BPCVZ<6VHShkw zcY*!R_~PY1iK7oYBVP6T3(_R&w%hyS=%ZR<6WWrIj``6;Uj4(U?J;-SiWe>pNN1BI z?Pkx)=3X>_%+8)+Z2Vhh#%KTeDm0JJi|0M-pg8%Ib7>QMMi6%Y%}1||8*aTj-txMO zVhhvfs-ND58Qk<}>BJ3a&jWGQf7}sIgE4sGlg^BF>(}KR>>GrWV*(DLiPR*ihxz69 z{=vGLarf#w;=5P(#_#;r6dVein;MSv+W zyV*xS{uAIGrWBw3qeKQb&eb|?%$VUU<6;3KMG6NdR3&@WFM**VtW@YpHtBMBD&{bQ zFdl8-0r$wL32S2m^vYE$7?UQ>3OWt}1KMARPdX)S@ZB5Yp=h4%qLxH6Ej79=aIzBH zs~{GSJo2bG;h1BS$n1h>egNrcGq5&-+RbY(gDmdw-O9U`C9`r68_!)V+?%n1eHME9 zS%yGGLVpRwws@rvCPR0=EiNc9PrhHcaA7P)LN0^i`9}AySiwP=i~)7Ao@x^{n`>jC z8UUB2c9HR2k3eMx2L&F!U_lbQrfr8waN&3W24E8HKK95X3Q>-Vzqtw5)YPdS9Elts zW|;DjzHdVPc<7wju^7f!9Jm(kLO=Ld##{}gc+fub-LR%$%*Dy%y@UR0GHL z6t99)``bKoF=iYV95OfiXc~K8CNDlb`_DeUhY$4pwR+idwB1L?u`r32>u=49l`!>; zHS+5c$&S?7M3T>DlpPg~b-V6LK#;rqP_Wxm+J5-MUj zaRgEMOcAEAB(M3~yJE@3Ku$?mh)2xdoW(7rEK{iH5XaR+XQ&}pXrjQhnR(h~<$zNe zPnk|+C7DN>NHO=+f=Hw2Qr$Dnnv+Sl&%}Gu9thM}U}U_dJ;){*5X~_?z_QRsFeJ`1 z;9wZokWT3r#rCqEDr3Jh7o~;n{bg-fe^Qm>9i_BR2wp>`K zi$UaEafGHxJ75SW+vva;XL_#=j2+~?@BCIo@8B3eHech+w|e)sx4%8+;|JWcM$FuE z0<52TsFKZotq~bnvB&C+iINYJvT+oz))Rt?5Lj_ zfBm^{2hI>VkAroUJnvZ(@wqSlYj7L~+5k>+#zaRcyt<8gG+#4XLCpPK(M+Ez6#@W| z(1YZgVMbiMM}P;YlmXNR^^Kft%_eiDT21xaSWwtkfjYZYM+j_Q7{rs*0Iq?RJhj$C zQUfD(m^f*l&Q?U8*^dOA4ns`$55WYW{%+wc4V@)ua`iYiZ$JCl$EHLd@3jzt34Xcf zV~{HU`_Jx)AN>3d;DGZs>NWQu-)0XT13rd0xDU6!9aCT~kz)3$>7da~tuQMPT2o=p zr%Yl}(+`^<0A;G1Km^HjFl*QNn>ZXG86hSMks5+Sxcx zIe&#`90`ZAhXs7%MDYiXQ7ik_cfBk67$-LmrupH6tJW$cKR? 
z_g6qn@>0gb`X#|5(GC2`ylIA^YhB48am-lURpAudoh`o2uQWukx$fR;AAj_*0F!&(_&;T|n(8gY7a;^Rj&g->WU~Tk*Mv z)mG*B-aM*j88)?4=3#u*7Sox=G4pD?`Fd~tjaN%y{?)ftd#d%sbFHd+`kmq7+HSnx z8mjV5Z=7S}y+SaBWqxlxz4v`prtyAj*_P?mVA{P2eys+f%&O9c@|XC0$Gl)3qW4L& zPp3A$h`^&RCgFf#Gq9K&_QzDh^~jK#iJX!7l2rq4%cLtnwzYQiN$tn2e!6X`Dnj`nXO-!_AT zr)*pmtG@Wrm_NJ)zl8{0C|@&TRoR+OI@W67gjbMYep?+!pap>aO>F+NPi#9&Nxbcn zOBk!6_y}G$g`Jn5_p*4|xi5{EpZl`d0E1ceFd1-PLbkeksKrfOCVF6e4+wJ=&JjKh z#)%uM=Y&_{#Sy~0Ft~e+h>%Hs%sJg=Pr|avIZeiC;^;zLKoq#8OoL16_~3r)f|16( z$+V>eDrP|s4pm-WwYJb*N(_Kb zA*ev=%t}vXP6Q)TAiXyGZZdNciWnN$9jr~X8k6*VVH$`9JJXJJbE%tbt3}ApsP%!h zC0>T3Far`&GCE2u$^nC=rN&ih%+LHADOZW6?Jz&TEr`4fbhU_hz2frAbCF1uFzpu* z9Qeu{OY?;)&io$BVD6<=r4*5D^}M$w3x<*v?k&*7h<(o>T7E?CKsw)F$EItL^e}|4 z%TWW&!4O1kOZ)U#j|Zr>wmC@g8C&)dd8S1jYHX$eK*5**Cmuv7(Od@;GeQ}491=Ld z`64||qBgWkh`W?$LK(BTq4H*odRQ>j(u^Rz7r4}Hs7Bw;LeU6A1C&xelGbhrbG$?i zjvygqaq0z*0R!p*^TWN_Lf=sb34{X69=>UthAC$d2FQhNpl&M67E;%(XuLen!#=@A z3G$Fl#Aqc@<#%_X8FF&B;qeBqVAgPuy%}m?8fY7}rT*m0WU6Ox#{l!dMXCwk$8I>b zVml-<(4GwRx5j1^8+oF2<4gsuY)qaWUEB;cDp>{A__m&Q>qCEtO`46&DHM~@4Wk4* zKwrA~xs$wT7tBLlBO74*N;;NnzO9dYwV>GuS5`Lzk9c~^q8^NkCx|l*xhGHNg;Kso z34SZW!raJSuAKTpf`kJ3F-Y7?%q6tseLd1e^h^wqo$b%D93N^i4}CE{;Vz#2*k1+p z4^H}VJS8*%Pm>vFuOdvDkm>-|0kmmtd_JwWFhY)bV$&K@NmR^tN)j?YdHke&cMX2WIZXpM?+{>Ozo38;#FA0ap8E1ffS2)D#&y!5cFv zGvC+xuIfYUI6__{n2}l^T z_wbEzhS71WrO!P1&TlPi=XzGg)3{-9h{CkVt}nQPuJRe^DF<2tnVz=too93OpoQosO->Ht zlF=W{u)4_{OTOkaE|j<9B6%dcM+mmZ`dJs|gs_@xlyj8*L}1{S2Uf*Q=EPLilo95n zl}|?ufS_|<6I1(7IMt*zO|mxJ3(Np};8kDfNFo^?aCAE+n%UpPx$j3InDg6tllj;~ z!hx_(zm)w)l*#>VH{To=zvg1h&d*9gf%~1ldF40K^n7RAAF;8I&<7LI6Fnfv6?|)7 zcwY%65`O@z25C{_6DQQ36J~+UJq^O`!$IuB#V4*i`{_>``3ZJnzyEvBO&|4@ePd-l zq@A-E!c47bW+d|DzfWbs208j75sk~5#3yDSnc79kJIO8ox@nz>$Xl5a84)ups4yRD z&snT5N8jc{Ek+KgAfhaI=c`9~SC#lEKdU9T6U@P%_1KNDcLhGAfC_l31#N7C9AF8M zU4*PT<2v?x?{`Nd0-9wZY>n+r^P=Ub-;K?jo<2x+Et z5eSvCPyk;d{!GDN>QaK^%LhqN8COz__P4M2S3q&h`y7i}7LHA3Cwv)9UwCifPiWMv zOoAAef-4J;griAaXoXVBXKAUyt@12`i*M~m8UEW@FH@ACe~ZK}Q$lr 
zlYfOTQy8-T`Nz7I`jtFOZrKV7Q>Al>9lM1SKpoR1!L!Y{=fKM#MTM@mm)}!<{qE>(3mB|o#T=@manVZvH4Z`PFyE?V88Z2 z0phY&kI{%QH(Wh-0)3gkqgAWHzR#}NsS#0HBJfuWXKt<(2{{RgFGNd$ekVr4e>EA% zNVJuYEKqJ&&L<`@OJ2TnsrI}8Cu~73PQb#56jUbKduTAvoyF|D8%>!D@aWS4yctus zF}LF@lmz9N$g01`MD5Y7V`4~x(Q&HMR-gTh-T#STmQJo zRz2sPA&Gx034xL040W52=0|!@oX+-wABG0pK#R?P9n4?f#%Lbg#zHR>_)xr3GUv;k z9gtCv{*R^m+u4q)g|s$MV zx8@@wVx8(d^ph~+CSUTD5pk}&C|L{5VT8d3%D1?|jMsg;xto_Z$)+p}3O5tsQTBkn zXQ-~_Y4`sMDUO!kivxXfbR5sxD8lGE=1%XDPS_EKWN3vU)GfJ}!u=~Fn0D6VI$e#M zj%c=VfZ(@QZ;Wo{Qa{LGl&KK{%yVp2A7vx>ycYb>pEWOa%;%hgrLN+r#4voM7rSs8Q32VxMc2*`~y{cJW$GjVj zuf^p>W`>(}YwycnwQlFwlmw%nJz{JHt4XR9 zom4U&a`yz|Oi+cfgme|})MsuIoXSpLf>{{j0_bB(7K|oe%1F;SENCf3FkSw3mg6_S zJ|P=lt1kC8BNzFdEwHafiM5^4c~`v~7yYOnbIfXAjL&cEm_4dYcQdi3L<5)u^>cE0 zYlCikm40K-%GCN^>>LI z3kUgfzx}D^;`p(Cr1*zajB(i0;Fl7m$_)$Hi#sHIjgzniL$Jzf#v{8oj5$#|rQkdJ?9WTP2*4;A%?bUXXNID}fC$l>|8V?hQ)P z23(h8Pe-V&;lWmBoblqrgb&j1+@$ZD8Vo@c6=`aL!6ejjK9EKRUV#qzDrq)*d)GA1 z7-Z1=kTjE(xk(&)r$$mH-*}ZI&CJ6_&E~8baqia|;^&wdisq#-DuwtYk-~u_x6Dxi zlJEIPzU5m9HwiP+!hl*dEC-2s&XmLd*YF;{Ho zeG~8m6fx$B9TI-kbXum2jalzqsB);T=2lvh=7qV|ly>??+$egFm(4 zgite*SL2zl)g(!2UoEMVFl~vQ|CuhAT~M0~O>HvsszPxh2hCr&i}wB`>JHTrJMjeS10-^aXqRJT<=j!~uEGA=b(BHLYrUBBJMhrDvZLY9i=tQT)n zy7uf~-v*{DAZ<6MyH_YZE152tPe`XEazO6ZFGpS?X*WA;*VWmBx4DFl`EGnYs^0Nh z{Y+lP1G&j?kN<8=mXG!ABNhEf@eirbZYO-$FOz^_Ol39Fln&4Q?iX-xkB%zpu+eXX zO7>it#s)mbaRU7C=A|*TZB4Weav&T~aG-*t=svDy2$5Fyb=2V);Tldp-iGI=ZrU)? 
z)P^J%A_@V`0B7XbQ;nErB_z13HS2CHa74P&lm3+%5erJV@Ky>8hM*AtC7n^GO92gU zB=jsnlST1r114Asam+`FtW1AN<6Cj7`S366A=BYA*tN!ph>s<%n!)30SInIxv;}4* ziNKTrXZdBn4%%$etFe#?(Ui`w%Au2LH=M1Es}vkxHE_V2^(ahAiSuvHsBEu{n`HtI zjD6yG>Rhx*w$uKzZ|G0rC&K>k0mrJ0tI*+CCQwoliSmM%%czrzcL@)*Zxkr0?b4gn z2z!Q_(5NYFsME|5_;?zw)H`sF(Tt!$k5$Qh*2-{plu%vwrtRiI&4jrz+2t+p31Z_L}ZO__K(N>jp zgSk|3dzRzGR}($(>(K)h?V5~83%ha!{3MEU zUXS>(Rd=W#Cc*f)vCrQ76CpPs@oKN@j!C_1IY)s_m}o)R2F}KQ-aEKy$^lxZpHiid z&4>T=B9&_gdAK|rh>-M>1&?jq!&M~5vPx84O_<9!gw%nl|HD;vf5thdC6IGIE7`X2 z^peHmFaDt4bkSeO2He3uHbW0~o%$$YkU-J3&%Q;tNKD>?O2i zDhe|q?i2QHcYz=4A(J4H8|%P}n@QZXVv3BN_}mKp_PU|xOW zdR_-(Axt#l0k0Wn9}PozVjJ1cMsB=I;ljF3PbE0ZaPB9LNJg|aKQa$tOqdV?%yVzU zBj=&hh%7^V^u>E?WbY_ zNrF8LuG5w9P)4MBHoM&sB<^9c6Q51=z<;YAX!N{`v7ETIt0t?HVr=j^e1i-9$rHLi~y)PM(f--BLN599l| zihs0Mky$l9ULBtt$6eLWLaI7e*D7xO^Y{w!-sZQt5fS&h+g9F!H!D3_0Rgyol~55r zhH&fZs-tO!L=U_Q-x1GqA%x2EHa`3P@oW>V!|sKLNtk_b;Of77^?vM9No0&b_1Xg? zZdT5J5cCz1P5(fHcNYY1TAMr407fStGX_4b(wjp0UzoF)vH`6L{mA%FnOYt~~^i_sq+u)SlQ5=C#ZVrPvBxV0-5?g_1w`hFcFZElkDEw%I)E zVfn+!h?r1wP5OwJw$pOU+$;AyPK5p31CCe~s&(5@E6!y!nFM}v9A!cZY%`F=%VgtD z98E}X7XpReEz!SaC7ua4#HYfJfyC?wub37la|P|$mVBzWLhk<%KG67-px3@7ik7> zs2GyUEPkwKqutDIBH2U_{95&Z^Uh!6{rBIW{_IDISEpKSvX%vN=WO%Y39P#>eVBVD zMh7Q6J2@pMlqB?Xu7L|8lu&SE=0xA=dK%IW z@o_RDMC-ZAy%XZadiK%Sm)MDAS;i#;STTEM2?H{O21gCEgjLy~4kKjwqxh#*)9h-n zuud9;+nNT8uP9U6igdKsynHIASbl3A{BK)}rmMWCcGSf&w{FJCaHu)5cLrfNtw+KK z@tLN5Vxu7_m>n_}q3fB4jU~A%W|q{|p6U zvsL`DttQ}C^#_c3na_pF8D^g4K38M2u>6yG?a809T94|!TC;tg2opW< zP`tA6KGXcq?&?QkHtf&64d!9~A}^Uk57Ckd@!&EM*&~&`x8?f11c?Moo*&cKop7M5 zQ~xOstC!edv7&m!Xu6{WN}@%1u?wGzl=<-IK!hyY^7kxQ>jPBtJ*)6O{>s^1_{xNY zQ9@S2xtiB$+QB{y;dBV?r+qCv^uv_sZ@+R{v5%{^TiIC`nIqlxHgZPH0Eew?C8BJBSjkbQQX5;`1>9K#$GOq9y_FrdPirqrgZ!Ioan z>d|uUfKZ==Dem-HnDTMfPG6sgtZJgoeu|Q#xeA#HH-%-lr0SQi_wY~7x?O%-5%;5s zEY>CozuaUiflXJeA}n?8R&B1;+ecFPMZz~}g>j}W`Q$7b&3;dAtc&=u`k9AuRs5dh zIq}Uz5BxgxKn3SGj$jfx!)hV9@!BLX%7h+LHHiAq`}WRG7?8Hqij-5&K~hXeZ7`C9 zCo|6^RDLqzVyk4qjjQA#f!xfaE&vomJg9#Ar$Ld#Z@ZI>I37e 
z$?SdgT)p3)SL>01_Kn^@Oa*rJ?m@|GqC?v>Hqc56V2m`)0$C2A5oy58L`KmdL-JUK zs%*`#8am zR!>tO=)r|4drW%G1KgL>HgTbZ&PcR!c&7)nYN9CPj)Vt<$j%H-r5|8VO(Skl(`y%u z%pm<#XZ{?3*V57+eVn9ShY-j)HH7=&2G3`q9C5%P#GwpeVHZI)wA{cMZcRu3bzj%vm^ zJR*nEx;oHB+ReA%euKC|2uLRu-jj)knDWQOnP%e8qwj&e?Ya~sNSSBgTCH0~G0q+a z=;`6ZaHN_8HhQ?^09rZ^rv+vQ;8+KMH_v05HvNzo*mfgv2pbu8*Y{$!$9N|LlMs|l zZedLJ1@5ovJ|0)|Y;&?*zAJaGx6&WN@N53$wRr$*KQKJhq&AaUq18_<5r;*3Q0UKp z`lGn>j$6v%nau4$@L3aWuY;lXpifOJhk$tnvWMD?wrP zt>2s8Jp6fYnz4r;`<*}I##%g>M<0h)DkQGTtjehJvOeA$pZ!4I2IH!rMTPYnOK%xB zzK+!)cw@h<@|d_z^uX@+fNmeRuA9H70d?SH?|Z8u}joYX<5{h^Fia$|J#Ys{X6sqZx# z%CYBDPbp8ct^?LE>(3m?jo~r~nmFnEM`ENfSVMi=WBzfAV(Q$3q6fUy$jSGExcOar z%WcuOZWHH>ur~wZq7UZZ)0bmV2@Cl)pqYfRsBfB5h>8Ksxj21UK>-vDo8Y~GyP#k4 zkVf*TXTJ7BOr+3e5ZA_@(Lg0PZdXkMp6Q!ysE=hK6WwV=J5cVPu9mQ$nVtNYkwx4R+m-8A}-OO9Jg9 z(z}UpAbMc5>ql?OI0FnYy=wcebE;^9>MEV0Af$m)-)UOT%l-lAk=B;zgW;Cx zZUQ$AFi%c;(kXHG3cUP{cyc`+_Sgf{&pc{CL#Zk9h?-9F@+VE=nk#YcPe#FKX%w## z_tH;=Rw_(^&swEO%NQ!cUgFOfXXM(WwrJfH-@cwpB>Zji-~uV_PsNFLf8fxnu(Rk@W!p_-)Q zzp`%DXKdZar`cc6x;M_ShoURj?Olbcjnx>LwzMAvi&XP54M}7o{nDdPYC4L9#nH^4 zII2d7mN+BSn3VDMRqhdzI64p^$E2$5_zILdJ~%cTRBMxOe3r5D`WqOs{{&FyTQRjM z8=Q@E{(BMr9pBo|$zKKLXi_q}nEzGM$Kq;DG!gX>!Lsjy#TIF9F_Qw936~8>{%3Z? zeV9n?0L~l1M?=i%&SMrw^9e`Sv=)3%I_KL68MfeJYhcZ$c;NQc+K;gG#}J!7=odJderUUTp#{5t4V>d>gnE)+L1augw;V<)b|x!+<+vwf3S}d z)!^_2+Mj2heoDOI!kO`=w|+Zrxwk8ZTV~M&@&?y8g5w=EwJn-)sS#^x28Rz?9(}{1 zkTN+V2%VHHrv{8S07$m0z79qUp@ihC5_W4L8#@v9um=QUyLL~5NI2(Y4F~5qnHib- z3V3P)Uh23SHv;^mCKDpGmLS*JVKs0ttupll{_a*ed4y;?DKIF!|(W{(Dn* z^4%F~_09FhnwDqZyG9za9(l)G7S;wB6`ZU_Lx6^er(^qJ=U*ZKgw{(2vpyiu3`XO-z$ben#kZ)dPv&Hcb4Ia z)hkx4$b2j2!hB3ur8k{rROM996%A#6e(ST}_})Bb&z>Dqrc8;YOP6N(6MrUpU{88L zcrxr3r793Qi46km*igr_lR5+GoSY-7<5?KRG^+aRo7-8)kPPsi#N>rEj|H#U#TUuB z=;@Jy!0)e`Tu$b(lghdoss?3UYjKa1=l+Pd9@fdam#`BtR^|J?s(%$X{#uzdjgQ~? 
zT%lKy>(2bVA3s(*zu8xJj+_e^wWPdat6o7$DMOoD-%F%8(z!u47YY00k;lD!{TTnX z3R#EUtjEJ7EA4-b>-pa``&{%TDjoB}VCwcVwI9NN&O_h>W<-rxrbt{tui5uG?;oa9 z_gj0dM-Be~2iByFr`o&H4oH&NKHM9d&@T)nQw7t|3Jz4NFVkW>d8}&zwtKh3P_#uW z>W=Bt(2RiR>eWzmOSH4M4kD!A@01}o>h`ohhA`pkY-x?IUI-E-&3yw~VC*GwkdQ*` zbU+9U^=-ps3zLw6ZSm>9eR2Hwhil^>zw$#~*P-R;idMCsLt7!DXo3GB1Yre`*dhdi z$!Lej8lnw#5Q~E_Nt#F1bKVbsW2Ae1bhaS{Z7k+lp8GL~7o>Yu;Weyh12}q0^bB-G zhfXN^dvm9kE=#-Fw5z1Fp&xgxXm6Tuhu4pJPJ2^lZ0$yx+c6yhlsJL<_(#nLb@QAQ zahimjOeh#Kul$(^kFp0w(dUCv?xV=X!7=%m4D!}Uf5J9Wc}(=07(-*4A#!`&4~S>P z9J);AJYv0vh}SK@+PM5>zRaYYW0vLmRLeRqe!Qc7A!VkO7IHSwS7wpP>(t8#JK6YYY+%%hOEG~XeVsbAT*DA;RZYq zb?dgg??)3U5ih+W4bVV8Hj}Rmi1kpo;QB04-v=W!c`||pHIM90GcVIv!#wntd%qu4 z!%ALFHEpCo%DcbogU@9r|;4&p) zLlWLd`gXAJ46xYLX`9R`!2OsjG&Q%y4i>YH)(#d`2^b`PyAfnODx$hlF*;EURh=q= z-uihz_IWpocubO52RnLfC=2h{m|fpjJl?~nvYpi%Z)(W47CCmTs#N>*>PkojMd+YN z?x_RBT9OvL@^$gLL#eeoDSFX%S`D}WCeMCNpP(J;F!N{_*%DJ*`{IP-=f$?IJ#oj1?lhyCJRLlViOA*+ z^jA+`wDoU@c?VC9?b|lQ^hqOe?7>Vm!v7 zqjNHNb|gAlhN5fBy>Y~Gl(VBh?pV#iqxGGbR-qN)T$hFOn&aR(9dX;;D`WGHAvWhv zitBH=Jyx!~Chp(1EqZfPeqYRP>WyO;&t`fL$4z&wiH%K|s`YG-1=BFM#oySrfsL_X z(E^y5p;)?nGY3FUfgxdC(2Rr8cAp@NL3XKHGa)1e7df{M-1t+GU)%5ywDhQ30`!8v zgv~N*&A>4dstaE8e3@pnAdBZ88h0X;&GXF<}t*wri79eFDNs5*q*mvf}wK%griS@V*Z;qjI)mdDbH_77ezAuFv(*DkIF%rk>tdaqDf--Q|8N z=FimTIB5R7=wixFoiZa%e$rFonqS@!?Pwc^NT06h)mVTCnqkVeGFNWD2SJQP^8m5n*W|uN2(lLz?{L(dDU0rEeqS>YvCZ=yBzKOXbvF|=l8JJ3&VqT`H ztWjEVP-7nEWg9JDM?&W3zO^Pywq!D_k4%T(PMJEDrJEgMd~3bR-qOZ$UJ7Zgw;CGh zmr3jkHdwB8Hy_iNUd&e$e!;%E@ zglM03P|Q8}Pym*Vpqy6RkBP#TZChj0+GWu)&=`}DDC)O+SHW_eU|yXF+Cf&51{+Xu zROVrttk8p3uY>) zIuIg@ue#wyy5mWYn-?E^=Tr1%oZ-e>Bi{L!pNZdo`q}Y<7abNKdjIut=gqgo^M2>d zIPazN;=LdEVf@*fpU#;q5zqPUL*wM%Iy8Rw4?Z1pr#Hp>E`4##oIy6c+qf;_O>g^B ztX{JwUia$P#>pp81>&bnF8%+*pAN-`Kl*>sPB+FIUjOQN#%TqtQI^D)uG|#=@Wt=O zlTLbKyzw=M#LLhBdn~TD#p$P>9PfGEG=+ZCA}nOcFrKVbnlKCl;WhTt%H8HFM};9E#hu;a!VfwbJR_Nh)9U3xnp~G+KV9v%*+Te&Ij*Lzi0BIh-*Tib; z)~#{uvBxCMaOfQ!4X6>Zqf$ zP74<<$~`LPcl6OqYT6DDBrX~qk9U|eXAbo$`QCZwotb{r3Z~Y}`*Q3i^7_r`0b$jj 
zvvQX~WF+8@C#VY{4U&PGX!M{mCpXeErJaLlK}Oo9v&r+6*dA@Mv9BT4b|Dq*oD;K` zoE*~^o)E*$(?LodnSNIlsF)ust0yS0)~T9UGAZNhwyXU1_dDzV;2%;Vkajfy4}Ld3 zE&WMbt6#NHl&h$=yenf-?v3-g#67mH713)m-w0zWL6+-IE;9g@n`>DBlr(E9GWyE} z$wk7&siZ4Qlgrco{Ry)TP>$v2ef3i+?vLmE)t;4>=dVgv-tX^v@3ZopNBLu2X8a!Q zkI@AJSjn*qug9=j+i;HM7=?*OOmmE{L#m!`bVw^fBD}P)D7L6YrThaHWI#NbyuJ_3 zoqI!|qSO3}y$%V`rDmEmxc6i8=C0VVtvh;X!}{%gFdqFpcOf5Am_feikPj8}Z5)pBX>6{L8WJ zu82$D`1m+#Zcm*5;(77it2V~Zue~K+e8FYWO?UmvH*SiTT<|$0(8KYrx4bZ>!5qEw zlCQ=8e#4hzD#cy;#+Sr&OuyP`%Ry5juDIg*xbUK{#8uxNh~GNBA�o982bo#5qrk z_{vxBjC0TbNPOk;HF5frrsK^ElPx@jwId0hKG+q9ckYOHzaCrjr4bjs;q&p9_x)>3 zJ%sY!^1Nstx+gjs*2lpxL0|duz47V`FOOgRbW5D`)P|V9U}p63eQ_VmCR!Muq8=4y zTsL4qvcDaJiQgXgK{C3lvh|{04I1ZW1ej(sP zb328QVsc&B)?xl zna0^hbu|OHAzpg#-LV;FqVdp0G3l7c$5hO)=0E=UIB3Bfw2PadZ_sekW=+gw3d~Hp zLvML%EsRTI3S!;0T5X4fyye_-%Pn#9%{ONw4?g(dq*WyBrDc{cU!L!KdwVkv83n(Y zH*a1{nlvfj-*wkr$-FFHyf~SJRjXDd12c2x%sA|@!!qAB2p!aPNK2SUYDTDs&lLkC z6Xf&UxpPy)V!f>E9cXc6POQIW%Q*Q?#>e;W1u@O_*I%D$X3d(F>CEfDGu`#rLDfXg z`?3dwIfMH0UFIt=LZT$lIdPK{2I*x0!K8~`Bnw3X;-qZBw|ISY_oMb}pB^iIaT8KP z97ZB?Yo9VbRxjTW3+En+^kGwMS$!*uB@#Oh6f2VSQkHd+s1+M3J8=VMGn+imKKmJI ztEfuRiQvYr>#n^vCts;2Uu1oEUaN+d3O^Qqv!aifajl=ny{FOH?Yr*#^3mA*P4Dy0 z@66ve+HQZ|KUQt7O7)t*TGh(454nWZYFAKwO1+H9Y7>{=)ZXz$zNqDskNfpcsc$JV zD_ASNR-y6rvZ2|KTC&?Gk`;^hUkT` z>V`OLVUufagM5kJ2ArkK{*5i_RFh?{R%7f(5TcCi@u*mH?|t#zZ(I?Ve;G+Crdh4*HDSw0w1St9J8E&X(~Q6S z`+td*Yqm!3ntS74zj;?Y?|&T;9rfEGKz9T})A-J}e~MlGU|jjHSH|g2{ll0weR3>c z(<4aGYQ~?x*}pKMWl(GVZ6EI{Bz)~E!EgAxU)6uVs(S?a1p5m1xjJaTu|Wy@0Pub1 z(%XQyx_HrxPL7XUb`{LFFpV0}`7W$GnF01R(W)`@Mc70#-Xp>(Kkz^%Pl(7O${*WN zRbfo@Bi)+RPb%D-c>*iK*w`-7gevzrWahSO=UW+zAp)Q~_DvgGWPPJrgUlP!H_8|y-1U=oX}kas5%Du)=MzRQfTZ>(!$BO@E8 z*b|~7nC>CGzF$ja;-;dl78_)hz%e$ zgb!(7ef}#hh^`&o+#`vh@M%+~vXO2eKJkf9#DD(hexb2L{(lj(fw>}-j=Hb30ZNLU60os&iA(~|{QD*-O*=NyiD2mw!m9~kaouHuA)4ZNO(WBY9DAjHZ5n?>rGqs`zkCa4YN z8*8oB2*v=QR|7QvuzBFgh(CD2S+T{VI>3({n1c%gP4dMyzo#h!s6+tA0H>4bUcnhBG?)d+hJBZrcTa!J+RSg zv=BEOUDPl4!-AXdT(vnq_Q_l0MZb4!yyf+0#+2qW~hI 
z&hz^=V5dHUdWF8ZABKee?JygYXiqbyA=DeQ+Y$B%^nmX*PitrIiu+02BQOj>r~6i! z&&)XC!@i~RB9YNi=XQJ`*+h6Od%zBMewJ%l)ysVq@A@XwJ!{s?*vuj7x_WNFJ+)`Z zIEOq`6Q~5oXltFs{-+&ys{`=)HjgF9xlM7DA#GuBjjrz1Mi^YHO;fGo67$vhUR7G@hq8_% z9cGP}VOX$WL4H4W-IrFm@4ov&jfg>pM*7WQ9+fu6F{AP6YS=c5t31=G$$dO9k?DRr zHNDVByvuc3+OGD?GTjRV-pW3!#aH=c9*O_5=!riQJ+N0jfMpGniizrk5Xgtwj5Z>{ z3m{h$r<%mh;RgjGwEHw(+ke2jtSd=?`h>eW++Gf|gVT*xt(ZX!;aR4)XDF_@`K~zS z#M5KtH9gU{WhIL%{>de*0eI1dZu0Wv;(Ug=08uy>zJBF5WJ%Uv+es(A?ek}Q4?5@|km{^-wYMExP~)8ZPHycml_1!5B@=32 zHf`RN>QLS6`EC(vQp@t!**Pf|;CgTE+O_$u5*R&D=pjQ1hGkV!49mBEvuDpqNrh65 zLk^yo?*7#7RCTaizf&!$D?pKgX|*fWJ)lw!gZ)vBO-Ag8Kc&2#1tlHfV~)8a2Lepw zPh2JzI&y8uYa+G0D}VCs_)N#VKy+VO002M$NklCXZe#*XV{UYFQfx-oS%#-H!Y+uF;LMeSizYVl3UqTa3`>@`Dk zo~udGNk`tZ(3^G$yYW6Yn%jzcHiZxOmR9NFTD-Dy^=htj;*vjmf2?40c^4c+8=E+W zu}}B>k=dAVoBK*9?N^?^y~1RF>Qk3dW>N$9x33h`(&s<_jr4_Y{49;) zwMgcn#-WFM`eCLZ3}YZ+Vp`z<{Xhc0bcSKBWJHXYX}I*VOVclZ_@C@EsipJ(^y%qI ze{^d4;`bK>@8l66dgWTNdL4K(VQMu$-Eu4UNRXEve*(gwls0bJNu5Yo>6(8a`&b9a z%>QA|8yAYH2^q0M_*LP2Vg@VYuW*)dUH#=*Mux#1s z>BOb8FyZ^DADcCk{Nj1Oqs)f_DH(F6Owa(OHVIt+h3Bl82d)vOX6;E#7Gey>=DTO=b=6fkxRe(39pLKP$~g z(>S?_HI@zRwN_vUCp$9N?Jy97y=)q9$5pZJr+ryp+WH8MG)6@5$FVJ%kf60#%hf>0 zjQDK2w2@^EUQHL*NV}B18sZJ_O)n-{mR-2u*q}WM1LV~Zv|PP@%bM5r3_csoQv{o) zm!IkO*-*-u*7$6>NHh9CA1!iMf2t8U^azwWS94+`4~*PSV&_x=ibGc+ptG=r2NaPR zxnqI@^w?cCHRIN?84oQ9x03Cnp~@Y>FaF8PPD#54Tk*!?Mz7{HbKVJQ!_G}e&(xv_ zm>H0)8}%d z7hDjGhA*j_@g47YN8L`dDLwB`pP$Y>_uP0#5#&`@UYRbs=%Tb>!Gd(*g%_s3`@6pj zn@Qc(z3eakGKh?Kzx&&Ah3|dudzgzoHkcHF_Ylj z$4`1<^7quIJ~ch@iBC)~eBlctugK%tYp+djfBW0Bw4*`-<6~UiR3mUSBOn1a#6F85 zw61Y5C)iBe_7T($UW0mpP#HqXKRCc*u1j)=7thL2V;iT9BOUgvq*$g|=!d~kEtpM> zapF0cG^CHN)GabA;4JBl8*W|!(Q{UM`QN-ced@EHN$p)z)1UvvpQIaa8b}wu`~B(l zuYFcpy_UTmZ~Yf=+h3)>`imE&H@@RDv@x34l9JAP*s1B1>sN&B_O&nqW&1)qbamcURi=% zLokPtX07}rvbn0w=vP~j{`#`ivwk@+igg39y5NLk$UBy%PMZz$?jAQ!l*hsYrex5Q z<#E>?C#tJG1DOQ1n#L3aL&cN3+AIfLCbxaaAo2ix=|;lzdXrvUY?6O9?Z~;>a!g` z^lNXsG2OA^#HL?zEM0T$b?He@K9{!JqRyxO$Jotn{|Gzazcm{FkQHYu2P^Kj%4^o$X04e9?>2E|AJszxK8C 
z+~+-yKGa}po|Uft)vwa?|MXAOyWjJkG#4ax;?kwTB<1n0{#GMUjlj{4fLKd_P~6wA4CsP9pJ5YOt9K8P4c-NO+Xj8sU1Qb zAD2{)a0g+Eg34yJ9b+wBY-(>#ul~Q^NR#KZrdPb?Md|!k{t2d12~(-hrAI&h+%)^x zl-~N@@1%`A9qA*Vye^&bpr-VYv(JIJ>q=kvHrnIqDZTzpPi8~>r1bXpe@^ZZ}6ur9oYQwaI@|tZ8)#jwDuU(OT@{8Vd?$ggo zZ@=j8(y0$QJ^jB=-NwF-ovC{_co$w_uytprhq>xy42C&fy)R9HIiV~jW_@ro+4RN{l^~K~gADc2;I-UGMD^!r zM!@+yK89s1kNQdh=D1Ar_)l)RUa^{hRzChk@B3NWyk$38yGZK5eu!Uu$odC_h7erT z+9eoR&l1bb4J$VJ!gM$2gy=OHa|>~EH~xgCCCbOlqStEQgxh-eT)$yL6b`nb-57-# z@qkZzotcEuG_7-LdiWU+PP6f5Rbvmxoawl?rrgY9&<@t{EWxSrAq0(g1kTZj*nsp= z_8@9q3LdhVw&au^t%lHyYWm$yexJQJ--ZqAv7YF`>fsnPO-o@obkExlj*wyFB#}F} zV)<}KTD*8sI`PCMSY}Lv@mPhxasXyyF|@~>Fe}^8glvd1OP8J)_1nG{8eVPF{I)ra zl}qWH?fdKJ5Y+8hdd!|ZD;;;-!pIXQc%)gzeptpn`77m0J=UG+!@b|r5>=>1U?L-c z#S`|%buEM(v?exk#D>iM#*92*-MZz|&$vK|tbxBuNMI;foJWx$pwta%0;m9O2a|hf zR~p#18o=&IJGQM!(*g9+re&V#%Rd)u7y*G8h))#@F^__HukXqz6}Cc@?hpP7em=lv7TP zF}dZITk%q{5N*s2>Adrv8tFIPbaSL#e)$#Ynrp7%9(R{$LSFjPmxj954Pi2ZN;K?) z(h4OKYHZFv`|Qx{yzFH!i?}f*7g1YOQ~jw%pc;Xr8UaZbaTa)wtJ2@*@chL|!w^$V z3PFe;!6Q~PB=#7dX{4t;J3#z13?2$%MBD@i6Sxp_tx;49!*T%DO1{`Et63E1DY1Kl z4ezb#>g5PBUikSm6{2Sl9J3YU`x*k7EkC?1edZgtpjGf3f_dq4-(H?R{mol3yPD3) z==0J?zIA5LV!`LHgVW2QktzKmFi}HR;Ds$E!}7 zg&?>s{rraQ=@~EnVwx_Y22SkWO*@^lL-X<0fBt&l#-a8ZpzM*fYU@CH!qdJ&f2XC^ z&S~IobtH5`sWSML!%MhI7%@IU!^oHrUxojM5pdl3#qi50=Er(OJhVW(kFW;yLCiNZ zCp~PlSzphw?8BtP?5YK+sr5h=R;zo^6OM!~P4R>)e@;ffwVyrGfw7=?@Al_cuj6|N zuQ9Xu%M}Jo=E!}RN*pAf+c8Vj%A$|CihLC2WK!x!crd+vW;*B0bC4Y%bO4^)ILuJu zdsH(<87by%QwzdTrtcscQ4elyNS||Pg+S@#COyz zU4|;poyb=GsYc+aN5DDnujXC^CzTU5RB$Y$;)S>;&Ix$P0PyG8Xg=?OeEzGh)OD+p zz!uCI#yF{T5~|!meRRW^G>L=;I!TA)R*kPzhH_-3nmTg>A{YUIc=(o4)+wVT5MomX zfT|*`h0WyEc+c`(1Wr|z#U-A5H+)gepvoSQqP7&+Zd@t6WB9i`SZ^|KLFnL3?dbS{ZSRl#iw98>~|?| zU4;oTho)uEfBy5+%hBSPM+WA7?|UByp*L&LaWCxIn^a<$m#KF|+-gAqyetxqP5disJ$!+xE2 zAtPp5NRX)qP6Q7_AZ7+5oYb?$!2pX|}q6*6bNwHLC#vqskQvB!K%->$8E-hQSoIctB%Rl1<0#Q=Ik2wHe~dJWcdZ zQ%O&Mw+*#F3NhL{fM3i|jcaSqr>p0>0B!J#`y~@Je?5jBOHn~Nq>;TdeCB#BfgE(hFzm_ 
z6C#gbU91EI;4=%X3zyOii3O0t2r5RkC3a8$=IxU@(o_EAykH{q)$K-Zk%!uY;$}|& z+XAc`zqHGA6T4i$HkC3R4@1kaVf}^xUdtNH?^viMQG-(I5-8m?KAw+0+iwA~l=E4? z-fk#+#VcMB(hS>C;vmCfS?l*&5Qga@7@y4}BCwqp6Y?s#c+;ES6zx3bF^>uD$vNko zlfKM;6c1ObLNx-_2ps(g1hWCmX*QwtnbubD1BbD+pothlfT4{&2q6eIi7bEEf6@xV87M+E!ZJ7`o??zhSUdP8@(11T zki-rB!TqS)GsbZ85xqpXGo&=PsSTo|olUs3jf9t>fC+IO;Or0k&N`rXq!Ad1A;yPJ z65ke^^@+gTi}#l~v4?7XRKY-CiIudm&t4Q6N6p%ZE8`RA$Jj02k6NoRff2Yz#?6mn zMRgh=7>tM@(li$7;b8=ad{H%lZeEXdp6?yP!_Xw61xN@5IjnQ68Lx6JIH+kxGY1nr z+U%xowPEBZMwqi&3I2M+TqhQCJ-MvxNSJ968Lz|NfNR%P%_ec+ptZvQFxtxI`#reP z{s{XqNAQ~GewQ&Ba>}%{@k}Sp3m|3K15P?P6;ya@#tBWV(=ZU4KaB#*uG7ZtWe&2G z*LB{tya*z_cYQDM(g1~+58A6$)@>;D!~=gy z+eM)2v;DKa^0Uw08@xAIPZg>WxF<%S%-5Pkv!+>2vY7a?_XH-gi2O=q3;;4}$l;(n){md<(fqtXgghCju;iGsKWz9BvH8P7<2U^GMk&wcK5Q#UT| zR<7VQcL4mU=be}S`jxLt-@z>FK@U1J{qZ0Dar(dq{yAN8$t3}>&tb2Io8C3!TD*91 zXjcS8$EGk0j+Fzbq{U5e#@j#dmGWG1+2!fUPka)GmAx|k=tn;arsR3-_xR{XKbpS# z-S2|*AT-GDl1)V+S?NL}WB*m<_M?h@&>3gMyR2EWCVlwBA5K@Yss1Zp`AXzf6|6R- z1EeS-tU@&c)d);v1Tt|f2Fo~%zcM12nP_`rs-_sI6jJ+?*&+QM9gf{;5YX6t1NWMl z_Op1`&@K$J0ecMMMMoD+Xbi&a3piJRVtrKSvFlI(b?KrDJ4}f5gam`QK-r79We8WH zGCm4qq~0|BszD5?{gB{?go%9OCQUh7JMoI8+E{5P__Y~^V>G0vFi+r7O+H32N0r%7 zl53-75g5cnPy3|FXk(Cu!j$N;v&FXPcWVn9eKoIA>T3OHg~m9ou_XjVG9GA$AW)m? 
zLM(?%-eeR_klvOuvE*)nJl1)nc_p7IYsi1&w)%6FBe4Ivdz8iF8^rkgEtq$*y75KS zY7iH0teL~%w%<7etx%IWLDR-(=Es4?sTWfw1bZW#HzAt~)5;9y=bV?(%-=rhvxC5a z{jRrU%)|nf2vSZU=a=tM#v|yD=sy$(a5~I>4d-<$lKy@eE6uZ9JDYWq!(46#X7!+@ zFhJM5t(b(%I{DtN)g#R3cC{-oBMM48s7JTHYDJ?9v{mSW!jKrRfUyWNg$3bUxGRKy zVG=BBT47WyYd(W@$RtQ-nBTmGIj~H*H-BS&#`|3ADfz6k^u>GY@VVqI_1LCyrHvAA z`-O?Id=;t@nCJ*NC;inlS?a!XTTH|NB*59pS0=lelzg~f!~GIY#_5-3ocsl2tN5r9F+k1ATuYdjP=|BJTKY|HxFy43JMd>xKc}@DO|NZg+AhjfKf7jcB z_`mtao6?6q_^;{3FMe@)FieQ5=C89U{EEvjV=?c?E6B%#ae2?X-xb=GO&d3eif9~Q zw=FiEfm~$(L|8m~jYYbCU4H8==_CL4;qY|zu(KZ;>2na=w!UW)tTQ9r`JOIrxTHogf~fWX8bI@NtJY((bVvs z|L{RDK)w@iO1>uJxsg%JR|_}>Q4N2qFp&|6X)uw3lyPm#)5otM!{Dd!uQS})%`@K& zKU3V1hwtkg8)MyUh3Qs^*uvV_*^Eou!Jag_V-xcL;+*;3N;OQen17bF+N>+fRgFMT z=vm9w?pC0GF^8`+AWdM9A^YDx9Oe!@^I4-W8hvE zVc&Q5tdyAPPFZ4G)RqFn$Q}X?ZF>JTxU9W}$S>($OK7||f1$f%3QB(S7Fx{v!W4Ml zn8)X`{+D~NjcqhOHsrfv)$|$+?`g+2ESGIKb6%g_ds5CZDQV^vxmb6(FIV%}Z_9eMPqyK+ zWlFzHa}yz#$XC8rAroL*slEM!W_cMMV|q@GFB(;m#4w1IbCOWfNh-FQl*j6#U1)zuws?zg?Oh0HjF=@C!jL}Tf7 z)MLLQzr+o{T$i_*F=!4>#rjQ$=wYvg<%c=HL@BcZ8~3%aQ!U!EO(8QR*Tlpg46b=x zCHmbgY>xa6BVlp|UagN+Aw=az@)z&|VTNgqq*%*vC*{GxXo)0J?U!#LBW1t*YaKQA zmUurb6T?`wMuqfP?@81x(Q3M8Xht|5d5ofAQmQbK5pYZ=8XWn6JLZ`)H0lygVlFzL z`Mxn9o%ei4^0>;YF?64&rdJ)9T-W-0(%81OskZHwG;_~w>C_z?)A_UJ$M^mCwoPfx zWXzJ$jt#(sda&UjbEs?fjMTFSFI-LzYAqq)*Os0C#OL?Ry>q?@(*t;y45nK;gbbF+ zv$8(#$19Ok09JtWG50knJOqD;D{3;$2p$wNsCCq&(4zyIYplP#?l1y|5jU>WQp{6_ zAX>@z07@s;-Y6G+EpJd=s}cAe8G(RZVRi31y&c$uIS@+_3S0v@z;0ux7mcVhu>Qf} zMJ!_hoS7i4S3!Jc09Y~vT8YhukO^0p69AK1n*i!oT=c4{WeX%k`p$Iw`W5NahoTXv zWu_psBy)kS7uBt$GSC13?I*asOE zO%o=;T-aaLuoCl`NYC#egJON=ce9%LyS0Vny<%gUh{UoIGcX4Ob>>AR*4ELU`s>U< zZ1y5wZ1^Ir6_W((3YJ2jFk+QeX#F7Tel&`-uU?C=hCieksdj#Kh+3SBXF=0ka(#f5$%9@ZfF16$jFW~ zwC?&4h;&V#ou+h5OSNHe4Pg+1OU5X?0s@iRzDty~iTi-LAl`vTO{Rii34#vTq^$T+ zf@z#+if8j7pQ$qj+3sGn28^r4fcOtrs63lCs`)CoRpv=7AVKF@9XT(=mG#4(ge05% z)}5IK`Wz)JMgQtJ1m!ekGJ|9<FT+G8_K@WFlyx#b=vrFwB>T)fSxdd)+VU%-uhMDVm z{-K_h89Z2}^40 
zbnmNRP0NgS%(GfT1DNNnUbv(gqUXS8S%>dN_p74S2vj4$2-K9yiz+0D4M8Y#l{gLp zJ7V)6;{?7;5CiUk1Og2JoB%V6kVvPU#eIN{Yn~Uf`i|R?fME6j3eW?E$N=+W7E~A! z^5Dk>CI)i=S5kv*u-MY6KnR*Xg-tUxds_ME7@7(zF{=xrc?0u+9$|*Ys7GX>ukWJG z7{ctyeyde5*iSwsA>eN5r`2G zZK%}@zxkMbcGcL|uzfAMhQ73W8(LBDRtIipCD8g|M1~PAG^wEw@93@+Jfyj*1byCg ztzZT%Or(nuC0YDI2nFFSTH~1tb!Tcy-XJeN0)I$2nHxN*Cd4ayX^EGJr#*ux5rSvW@r4*37Q}D@G%pt3MMPff&vBE)yv4Jd9$_ z+xoZQ+vTszJYueChL;&n)!PD76QE{c8FDc~64DBS=>$}3{dmi2>5csynpU;oF-v`k#Iv^8{`BV3 z8n(%qkPm`hNGDBGE>AVqwz-+I{n|m(L?8qgWe9RnLVFj!z||PgMs`t0G<6JN$0;GN zySZQ-L@+Wgq)EgNxuKFejPoafZ!oX#XjwO|d2gIdL`nDFIGK*(6+;b1l&P1OStw&= zUZ3@brS_vRE7lj<6TEihvbl`+!!Rf$=Kez{`5yLj*t{}mjtBV~zft*awj1vnJ+A&# zBT$XNT}J@?25uSQHw<|wUZ%MKRd6eTUqu?Ma;4i|4u)jBtCQULlkW=8#njDC^xu_A5VUXdGfQ#G^ zL}ZJ5F=9zC@*HXk0x}o~>ItTW+?vGaO~bk0)?hz0y~(7UHuAo!`M`lvYorX3#rMyB z$c%^m9pQWh`w~6_=1-PjGeXqizbgFqj6l|Q<@w?4bT&9| zWlfwXelKG(GwV!n4#;TcIhWXT)W&>iWgkWtYjQg-^*iy+Fvh%58#XNAA3x_0^9GYA z&**6xVNI7ApsaIXV!?7w=u8tZt9W!y*4=cz3$RNH2NC2=9 z-tr-tm=*vtksb#GaUV#@#R%<&%WK3#7;AzB37(saGc>ppxMm9&d0GMP zQO%WHc>O2h2qY2}Q-|0Lh6T<;reW6}i1*MMXc7@9CpLE5w_rlJ${a}SH!=wl<02ag zfA?1yyf>e2{M1HB%p1(>uRLp8s&@@F67C_fOq65HN;_Vy&o&&h5(-1$)%r~HY9G8B zU-A@&VV}0iU-rJpyx5NUjq~36yf?0d{qGz5Q}R^TY6Pkg*gFDa67~`(;u(Y)$73ei z%AS!uxbYkjPB}+JOs83iM2VL|>`RceAVe`o*tCa~Uq(a>rR2EIjF=8al;9q#pQDk9 zKH7~v9JWvx5vd~gT;x8gg^6~7^S$(1U|yRctAY|UOE}uMFg_t(2ElwXJK$P6Enz89 zW^6D3;%A$Y34wuWhU=F7S3ov_XMQ}EZ@X&IZDah|9Ll6f9GE;2p?u1-A_fG{;~nOW(v#N0_>tbaQ$OYEf0F)&WkWoD7GH zsfX?KrH)BckWhdQ)Q(W4=3Nrnu4gSU1~Lm0;%-DQ8@g@31mk@6-OOw7ph6LqnukK< zm&1rOx$vN$%s@!|Ls*cPH$~0s7|?s_Qgv&6MZ;kmwpExD+f!p=zZ^T$ZL2UH#+fEF zWu3NH`fZ-Vgye5RpBw5Z_r8nig%R=IvgJKZuR=8f)d=iA0un92iXNeWmucXV#~gD^ z8p7q%NE;eQ;#%sPtAVa4oUddI%>)+_qXfB8l0m_YkTqPGO3o;pkdTp}0k6f$;*xh6 zm#8_&bp}g7j)Jp%wuH}A9#`8EciC7bkiddG7IIO>1}7KxshK?`Ufokk)1zk)nq9A zH$q<5YduMYmW1ipf+_SAoH^`fu8lDd-G5=tt(0j4riLjqO1m=mckkQF4CRSsP0=^| zMgLsKM`Io`_i^dj&e}aHJ-|uo8_?Qpn>ji4x*ufJ!wg|Kn4?|H)fx1w6F0qj{&X#t z2C+Vwb+tr#tqVsF-0MhJcPS}^Ef9x< 
zfyEl2i#%Kf&LDn4HWJQiE(TrDQFn$!444ms8pH!m$R4rjgiD*cxZtpSz&L_S|cye>j@eO4{{1@|tZyQ3v_~v=uxyIyl%jWganH`5-lE1AD zK_IliE(D3)Q#w-bZe}+Vr)bfFHgFa=k8Qs`f9}$^BaU#gId2X2$$G7tHhtpJy&*ci z(G%7Hg3@04_<$#H5b~mvShv{+);1~a+0~s^Tzgeociq(qHWeIZ&}Gt{Q>+@(8$AuvtAzx>Qd}sK=s6f(R!9 zmqL`Q0gz}Gs68pX0~O~WDp=v)(@SeeKCILQEH;3{IE0QPHeG9DX+9(h#JWHSO4F6E ztty$JoSWm!2l)=c?LsG#AwBj`SaX{CCDSbO+riR3$&EHV8 zx~)dwf6NF7k%$Y@j!A9li<|C9?ZDUGJrs6Rm_~yNe81vv{wA&DaM6xQT^Wx7lkGUq z*v-MIC!e$orhhQbBf9GHD}rzt!*;$^tuZ(-JnewoLc@yiLt%=zN0>GpBoH`DoGZ=? z^GoqE&*D+@hDMg`amEt3!Z|976bo-Y5n@G8KRx(y;%# z9BH{KUo`^%lSTkpMr^_!hM?bt7F6I>>K~H-jI6L1M@tLgu>P1CY2BWkAfaqpD*2;U zatF>f+NMuN$bq!r*)-Y{^I9pQfnt}z|nNF4-JG?=Qkum+Jk79;SgpOU>IBs zg23lqK|(9%ESc`_Ktzj+h_>vhiTYSXgh|UyJ9bX$>FG(Uw|9i*wU}pGe_LB;wB;Z1 zgmozS)?8NHVLis#p6yQItcUHO;0=Adv$^Ty>~+wxId4jV^10LZ`+yZP$EyH8M{Z|tu-9S(8=(I$9@*&1}!Zd zHDJ$RM69QUBL>=dthJ4$Idg|oZw<%5`7dm)939bi~u*NrQO~wB|(OJ^ZKiN5kGD z4i@r-Tnt={2JkMkr+arGlMz&&A|F+*G9dc94_iv^-6%Q4Z`cLFgSfD-X1R-YMNYgn zQUz_Av-TmobyUeOBjVufg;s`+M}V0!qMk_4V;%J!%^z!xnkQJkp<8!pwz0}OO%uIs z%sBp5^~~eXjhT+*p6^n=&G_O>TwZ`1dvYX|<%xJ?IjdaN2>kYrfYRB4K)!nkS_Qz; zGI=vK7zXcIec-!Y;G3PAa)OSUAimW~_To&VdHR%e)onMYX;Y`9E;PiWlOb-PXs`lF zEmJzvE(m;`Z4~oLRTDBH>UBH2I-z&)&V^)E;#^l6;@4r-7m#Zm2zkOu28+ay>0E=b z;l4&6Iy2&boFR zJgQmNBsQBf9`0B3+K1+Q52mPnoJPHi`dTJaKSY%b%cRNJ*K@sf^ZMxb5b5n4+$!#I zZP>%cW#3s1%^3CWB(916s73evJg2GO_rylO1CzMS6xGH(3R+JSnw%l?(ta;=TpxWI zq@Cv0w&;KC3#1Q2NU9yX7S|UvFP)5e4d%~-YCU_YjeGasi3MBO_hZN!DZXvAGhwV7 zb^R{?$s?Elx{kY=`q!~`%`41^j6g8l!7RAiGxp4}T;^_!V8AIX4)sPaxMn90%x616 z3sMS9Bf6HE(GfJ2Ou97Xgqi91h0B;Pv*OGUXfZ4t_|>^287PcAm{SN}fxBJXdQ#^` z%%^<^_Fb7T96vydIDa>6*pS}yPyduQZbW+sj2Y(5o0n$IniX-|ySuX%60M@9+{Kfp zW0pG~MXEp52;8G1APp#fEc9WyiieM3KjgM;+X81B@4PL7E7zT}Z}}eW&_OCYAR`hm zOo4JOW$kQvk=U!jA}E64zsk=#;j4j^CIxjgV45qk}Yi=O3Vo=%`8DcXS3yPZ^ zMINU4BX8JL6Ct{hKrmGkEj2A79GjR>$X%FtdHQppV6~=r^@AJgO~rlrW1M4g;M97q z12<8sh@!U9$bgi@yr3Vhf|C2-SHF1`su8%ajezhMo3dkmXE+p}h+_&PA|W!WRFj{k z9hNbtL`yS>+w&1xI61m)%9J?#^mxpf($9aMcB2iM#92m`1<2!6XHA#nF+417{&B~p 
z1;;LkA(g4vwQE-p>Q`QQB}$GqOsq}_BBtQm_3PKCFi&D(nL2YuI{x_MgJ}@* zuDRwKq|{C6lv7U)M#KKv_RTlntXd|`JK^{=b?Vfp)BdbnxiW3vzMV0a;DZsNZQo@S z#>V}Fn!!z*HZ9)4MRVQy4e8E1?+iw5%CxD`rp%gVtOEnMlBLh9R;^0i5P}lGGAn+- zaJT}KqDk^0@o#*g>d(POKzOLH^xO$IIVT_lY~`O(vChTXo#7|e?I@e`Pv|f8mv2`* zZsz7mP*vd^g5JpPcgYEJo299Yh#|aFmEh}gDrkA|Ij09 zG!zna!UPMzZ5+_t&dW7(W{Uo`Y6vT4&YYS4{onsR{o_CWW4h#*m!?<0`qk;Nk9};k z=Q{kw|N27ue;@i#Xh^5URB_9$|tKt`k?-#)h%@WsY|fcfy~0C@ynP{0T#0%LfgK;wa| zFycf6rn(3lL_jUN<&7nV_-0-FAwj?kjOFqsGnKiS(ePj*ybr zx|v8Qcr30HseqPqDA#=}u~*uI9=vyT27+-tV6a+}4w4$Jmxd8mIb#dm{HnM(;FAC-hn&rS+i!RJ-d6;+Bt$BIeGWn~pnvQQEm{PiQd~Ek2R`w0qKRx2+7bs*_GSDb1g|0CN$sEmEz4;WT~f zjI?yg($sxtce-Qkx*+sVzu*1S?p?cs*q52G+@eKG((>DGOLONfKtnSn7AE%r7HO{i zY2#h``Z=s_%Cxk3^X6baY+o}q&xYzB7)m`mcBQG)W~41!x5a`sbJj7bd;5;mi`M+~ z(@&3lH{5u0jG6mBe23eXFOM;6Wn9URNCrslL*w@tEqJ?GJ1SU;f;0=)QRJ#M_u4jrUA zLL^E&a|pQO^sRt(iHFt!=#pNviFo%K8*WLH;0;=X5y?!G?==X~HVEC)x2um0>_cff za0MZQhJ-_yX+}zf2_5jemvC!J3l|=j9{+?tN*{vp_~tjinV$KKXQW3z`cdiApZ;`c z=~`iE+-Dn-YRXrk8i9Lj1jM5%vRe-Rec%Hh7)*!4Dj8#$m0*y`^P8j{ z56g&@1wrI((6(`J2f42UQn0Hyo#uSw##+jZX7svzdXL+!NMVuN|E4N z$tUAsC|B=~l%s7dvwt1sdG9yhw++*)P>sNE=Lpn^C*dqpQw@-8B!Un27>K!s;Sypq zwoC~Gt;DgouYD3W%~pg=V6&vn&1TnqQAE z5O>_ME**!K#5QNnoE2tGyI>k7P3jCnd;a_dXbz@Dyqn%vuU-=ih;GntxZ%cFJbHWk zqK`5;bLPwqO@W#c7a!}^-v;x}Mq^@m z$EA<@x6*Dg3owt1r9ty9>yXj#{kLuFj`9+5tJkbW>oYseV7$69F)-g6918euI$?24 zWwx4;gxhv}1O7Clfoh`ualBXm_R(PV=g3DuYTQu2`0eAzvv(!{VUZr|xKE6=U)G}j zqG-+>C64A#DI1>Utg~KkPzqVXz%c@dMBVqnH_v-n$;2Gcxy}fi)CXEICq#QP$b1Ov z4j2;6E z`tn!4l0N?NkA=`c&5+(Po!{+fK0`u%G%sD{tw!J|MnIgZ(8+lwJ*d{htEN{nA8N$U zJoC)-)1Uq{)*|Pzc+sn&;Nhca@~E;3GyV?Ch(xVGxCkW>w|GU29zhL;420Yh3VhO( zNR0RkMj!bc(uX*S6&0q;O0f;a2rCd#HpRk@EwuyZUadn(1}bx;9mNe_U;*kK)1N&J zh3Q1;h8Sp~q0}+a@=@;KmB_e_2gh#wA>T9-`1>X0%#>+-z5UBO!K~DJ_I{WA?1Veq zM6ygNd&s`jJJ*=sD4MN8H3GlIBOq}EUMcf908G(|5fCG!DIJyhkntGQln0noLa#}c zrWG~fckbGmx**oOK=3jl5+dSE&9K}+Ei8)rf`}mw>NuoNx(^9=*;b|Ycu!LoEDG)X&l?u`5{uGS%QWdB=g9F|L8 
zU3f<@8FmZ{W8+x(v(N4sh^2`f#$|5G(Wt`Fj)3cM&5ujSEC=(KMzCIo5FOodL9V<>|;*3QNBq6i>rdtA3?`3N{a%N!NIi)a10he<88 zrorM=)7%>(qhk7z(1b_ci-YI93nLP!p#eA8c_BRFLEHiCu08}2LH2OvBI}~8l!}aGXlQ7{cQuqKAFZcMGSkY$G0p|OJezj7V3|YZ zkMzS;Ql(cT@LM(l1)ky>Kv{nQ?qxnkFyR4>ha?@u6!#8}_Z&oC z=oN_T4xSYh1dLK;@zB(ZZTbL2$T4%~gv3-+t*PvLAR{J3XoI=-!z8#5BP6E4Wsz!m z!0AFXCo&wiIgI3g=Je^2Pr}AFcI@a0)2#&y7Nogz=Y%`|FUU%c;TpND0958p@P%snLK zJ-iOXGz_p&e_F?s7;qT}>-N2@qb!6z8#D(io9QJA+c1f<9v4p2?E4(Xbo=)1s6)Gb z>+xOvNq{QtHrQufor`=Wo~tmi5fEHrL8_0V&oQ3dVjzBge~R zG*M=h{YK16I36h%+KBlB1p^&oQOo;t&2}Nkq-}&x$fWNuvMK63|dcqT)kiPew z@1~n?x+&gq2nO2o%V;H@t3TBU+#4ex?iBaM8pQk*XBQgJeTHhLFTecqU_6|^;$YX9 zQs&<1Q)8V4Zm=9s0+5r8)Q5Z~{)Mi_A{p5BjS-9h(hTzeB8w9+tVOlH;{)YTr@=iV zquA#TtDZ+y-h^8UwIR(M5`*h(=-u`!S<}rYsqJQ z1qm5f;^Ru1d6iT;CdFGzsC21Q?=Iy8b@SVf*Al$5eLBNfWCq4OzE8GsE;;x8M)v(^ zgYA}IDO>t-pxR73P<$y{U8@oJpF9Eqk4)gYTqC+_Tmc>9#d2;)9T@6QGp0=e2GL?5 zjhB#+*mgs72lybo1oAwDE8i~6F2o0cr+^Xhs>sUxJv(=#U3+$?<4!m(%{pdwm~2Q; zOAwiF+qOHARLx9_7M&3LF_ca%Uc5N=Z@7_LJh^1a;@E)R3i5O>gnKu3pp{V@;BUjG zjlpOf$7b-IyLYACy?cV_U$XSXv-c{~sTQB()f>I|x8BwU0Xn%DO!e9Qeaea{WnewB!v2Nkzn$VI=az8A4!A6){ zOv31lbFB^Yz=xi7R(jK$-<&@A$xjB6*UMaWzUfRtI>Ytf5bJ6+_hKxnKljcE6xvOE zE$%IJtF)o}Lw@|@A7{-t^r?9Y<6=3VOZj`JcapTg6F)So0C61~xDbG;Ln`M2>c0a9 zqlIU#VR`_#T8E2bJJ39WluJ{FA@+w1$ym5Sur@jM1E^#8(RbsZ0NnxC(vDQ3udgR{ zagga4z^-kk1K@#%+ER7^*rEd^U@j&uirOcSV7IADOvDp(A_KMWBEU=ZH-_>N3Sl1a zOdek&e!Q#oTh;~I4_1PCGm#+D1vN1~ek{!z{HUeQ!i4y2K7&_(dfl*$Qn(Tv+ssUH zcp~+bNfHc4$;8$AbVcd4OcRTC?Ev#{ohSr*`1OMf)nt1ID@UAS@l!#Ujf4zWXO^%e8W)Z%af^pesqQ)ru zgENKhq=IXN1hel6V8cU7dZpt_m`D$1{k!jMF`~FV&8-1s={5 zNAgX*uo~m4h1t#d9-U>kCjeWpyc#u#)1WW9orC4bb zLen5zO-WFRp)XHZHpydI1cw&&4AER%Jdt`iGK{*t&Cgaa+A)eMQ?fD+iCN;l>A$@n zY9OYyICH^QtrO1~ZzkskX094~hBzFHFr3-*4JC^ZkFu{6i34h^&}?j<|GjB`bav(R zI?3A(Y`|GJcW&?Sb!J*-%PuMi&wJ0mi=i6wOfrn>R*husWSAx9CoRP&3JM+LqJUO3 z3=^3twIQE`gJY|C5hrrfl$7nlQOQ1Tc9z+UsR<=lKNADL4tFu5OrE>>6<3!=Y1eebiVEn57bVH_G11B8Ol0xfYBQ;jwo=~XRkCQ%+z;jfD 
zq!yQ*HU2jM*c@(N{EM&cZ}j-PslXJyr?vjLptHwOH|y860kb@CAf_kcf^?tG;5QgbgzZKpW!m~xa~OK+_Fkmvi%Jdi z4H<(?YYNc=#VmS#hO7N8{BOClBF@$9~wp1&P9FM9#y0eR4pzZUNgx2J*Q5 zHK3uVd3Qyh{{ya`mf)~9n0wr7erQA|a=F@v-_^aHi&ljIukC5E-gFv+FzzMJFUnME zIvBP3agd1`Gk;{Tfwh|b3WqBTf+i5M4mDcgN2)i-%)!Y9d7rcuKhyWv%4^?M8kFiU zrdKoMjZ9}J6F-JamNwIC6s_0vGRWInnHw43fQx#{I6gw6cm<%(F-!!Ne@P+|_gNTW zt&_Z3>Xi)KUYL#Wyy87fLk;05XQxG&jSRL%pl4AHGZ6n0%R%cN0Jm13ik_Cf)EN8u zbebN$J0{qf7F6`ULR1;K&x17cf{q^X0=KY+U1G&cBqWRRhRPhZk28<`H{VHCd_v0S zrq}Fif`8f03=luo-H&VZuyvzls^D64(iV4J==2^UA3_d`{ug4mf};K!Q9hF%Gt%RH zn03#jB@5IFJXaH_95Y@hgTdLb3iD;A2|BI&>}_OXe>OOX28xjzNc59VqodyMZ_3zS zB@9K*2@#UCiIAHO{lUHuXn3nq)a58}C#hi~+)J8ma;Vix;|a8bWV;K(j*n3MhVTpD;HbDx;K+>cg`k1{w`+&g|{%gqx6%6XdQz*(!^Hm{rf9zq0(fV%vOXh!CCZe zi#!cm;r@o%B@2mjsUgExISI@=BqgIRUi~s zAkB9j^GI*`zpYO94+v;Kn4a2wfMeyEa=wmjZx!km_nvX?RkdNRO>2wATmcW-#8979 z0OV-oU=+Uxp9OOqPaUZYOQe6sY#`D=(tWgB@yA-A%l!L{OyiN9e5&s!!HUMCQsf{X zA`;?ghb`S^yOWI9^}q+(i5}yo;{vlLOOgOBwhNhiJf=syN516^OF{3)S?_R*mtq>* zXxd~N7ysq<$75@0?%M&)oFf0gsMuh^0OGiAo6-DGQ*wZN{~!5u6fnRMGh$&zcK#x` zF{{Ews9RP^b05cxGEDhEVt}3pF0r3(M(9{Dk{I$GlAx9MND(Oy$8tC{R~%As;J#?) 
zqa=(RI}(kMmPHUNS?z-x8GpLpdHtN?cK@TSQo&y5ilXiOz_!557asZr-A6~ahVl@ zrf7}X{cog~I`mgo1~6w3=_X>Km5OyI5z2}A#)PfdT*Wq+kD7Bn#QP ziaC1?d3`6}130QQ_l6JsV+uvN{jp)keL}9|%8bc;38zQ6@GT2pL*h^rs7wRMYJ_Z10X4OH-w_B!U>&fc0FuH(IIqwpbrb%N+1;34uUj$-=bi^dNX z&Y_zmzd*gd@JAdgQL}5~t<2su0(x7|c-xC(PCIfL_Zz-ib$)i904_+lh=7QNOoqGK z`Qh|08frzaSg?$&B+#|ELI-ei_Cta@A*LN^PIvF&z5%>o@SW?j`lK)$Cp?t(bm=VA z)V}oHZo{a<;%b%Xau7lrTbtmOV}4mhVk6pbg<(r<{NWMvu?<5;$j$s@=)>UTFj zr_ZQeoiaeH;zI))Gm~gcyWlse5l!5|J=1M*v!N?0{`1-{_nLYT?^pIwVElF;;Q6RC z8Wlxzlr*}F6kZe_i7`k5s4J>7b08KEm(6A+W9)I(l{Q2 zA&eh^;CTWLm;-9o-G#`bTddOix^{P_%V)Vwsy4G^(d0Uu|0!h2wn@r%>pK z8qaupQ`sgwiJ6`up8dCu{QBWf;vBQ?&)-{fG(vrXZe54HduFdmYHtOZb=N3=SYfOJ z>!JzB>i+ZvRUUuL28d(6PR1>er7IE41Id>c7vqlz{cN=BK`o0>wx(xzh>)=U(GGt= zKunUjnvN5c@N>PDU!pFkc97v3Mf)jPNp>A(yy1|pbJs$rU_aoBD5MxOZrt*pfA=Au zK0|9H%!+lkR=07!Uu=JnC@{RsfeL9p9JQTa*GNy9zpP;^|4u}%c)L&4Y7s*BBMm-O zy!HvU)Bc01O2Y*Ao)&brK}+{F?%ni&|Ab)X0tfvk-B$qnG;#hjtr;0qz(3J_6Zw(I zNRmtM?N@!T37svBaQTB0u0e6#S`L!uKEYBGL#Ojp0-^2eQ0^k@+HRcT@`#UjTd&*1 zT*Aow_xAIK3G!~pqf^${_n`ZyFOCuo{&8Wz^uU0RPmUdu-=6Og^@EhMW^Z}M=ZL-j zL>^{%sN?^lt~cnk7+N>E4m5hdv+7Dt!iCeOOZYIpwBF!)(iS%!rHfnqq(F)6;&uZMf}D1P&?JMM??w(qZ(WyC*-5Bk8a@2lBq7;Sz~0VBh_Z5D?T&0 z0bc5SjOkbIIk-b(oq00)kj*joDKCkSs>ebTcpox7=KCYuc87BS4avVB6v}GSX#YS) zMYBJMH!jCfDC|gxRIC^cpuMspzLF)m>ohx>@J|1BahJ@e;|taJ!|}HO?ob;a&SUSQ z+GXL*>C9ZLP@fDnNDmV-r;7{@ato$#?5@lKP1?u%m|t)PpWc}qf#@Q4<(i05Tu-D1 z5dJR2fpalHi`vJqlM1Un#O}5jPRAGGNE|Y4s3>(`Jsr-IsI@|&1bIz?S>+Z#Z4!&E z-0CxS$w&6>;0Hw!v_x+|B-RYZ&mx&?rKw z*RA*GXs@fUn;paBsQI>gNSL$LX}Ms=)kauqpdpUix|4V}E*8T=s_`)eV; zg^eGsW3QJ(e4oc|qWtr2686rQjz4Wd@vlvm<8aVyic2Y{&5BFFy~|##-y7!1hI4L0 z&tr3K=?MQN>Oz5dWMLG3wC2upxV`Tkbp2@0y(zOSvqwLoNaM}4dk=dNg_Y(sP=A%5 z>Nv;S4zzJB_%++a$4b%UddXx6I7@gT%Z zPX<08?VH})?yqnSqK^K}Dceh7oOHea{{*@V6dSr)8frQDT&>_?s&N|mpSfvoEKFqD zjk*H|b`NTgS=zjIDB(knodzQ0Kr@-O68aGCra=089x4cUYT=dY{U(nVfm^JLYQ)gz z<)*+$L|+_pSR>VN?;J+!j<2m*%8ql^t1tnE?2`F=Pnjjg6U=_qh=Gxni)I0gS(-Z% zc$^9OhY8=gpB*zMhz;^EPc#Augi2DKV(&@Fj~mMe1ez;a{!Au{8N8k)DPHIaddD0w 
zooaZp${k=0->jt=4a~D73=Ph_K2c(Ah?rDG3DL|>?Mc? zPHe?p-OVoTXQyrqD=pP}-@PWI5~ASP+nJF-1RoLTOs2{@skgx8OuJL1(##*01yG7x?*;)Xu$U+<%bIlu-%okTB}UIsn_SLAe!9J%}8#X@B5kc{QK1>7);NP zY{28de6oh#)XgJmN$thS2dal*$KD_MLV@_f;pohH+@5Q3l#x^E&j$m8aJC$jbMgw8 zsk5Xp)p@$otONnjThaC0>|B_^ZP=QcevE_iTRexOT6Ki zyW_B4fLi;N_f?2Tj8>xW$3kv#t>1;Uuwh$X3UUo*Xdy_+vEVuS z&F%k9#r`XzIuCUk%uvW;VF}uf>E`HB&lI+J)MA*6D=Unk__kt*<{T}Aib=K7{T-!n zKpnm*>ksnS6UX83K|WV4Cb;vw{9se~3*B4@qC4Cb^?zvI>xNQT8wZWy7Ghm8+Aqp# z8^_&PR-#V)3FeNo47NClD(MO6x@G1_2sjVjRh8hm&kVlFb>!%8nf2Anv$t{F%!V0K zt6a?ixKyC=ZXm@~^_W3BcdWOjhht|Sb8qBeHW^Nzg&>XO>p`<(&fH#S%CtJg>-_y;yU)6ltq=I!TS>2DV+j}&8k2}j__=@f<7x~E6OXjsFW?;}qE*=` z9j`x4CCX$|86^tx&%+P%qV^FcV^$jl*_#5Y-K#wgAoYEVwCaiK#(8n+`0Ua#)&QOcZRs!f+#Tg=M zv^Z5a1BY2VOQCby(vk~5_P;5@@n3S!qbVWN>!GQ##Mri5mtMGd64ZC_`z;V*;VmET z@Zi&Cpu{jokhcQiz*hifC@1MZ+cDr&HTMsI!AkB=@SP@ygba(8_V-`^v1)@7Lg8wY=uqN9TU%ZY2Pqso z_zi*br$l75-ymx_z-uov(O@_y(6~aeF2~}y^luXDs*4?0#`}Ai0A2{D+yxK2S4f)uXTer6xl#;o+FHJDad@IvN zV~uZ)RLYBcqo1g1bO{#NOl2F}ZF*FueEUCM8@Gj(UUY};db^@y=W2(;Pe0-)lzI%u znr5E=lHAJe>s?y^o-OOaZ|mwSuHmCbNI*Y z&YRouP!l>!ULmo88d2;?O&A7yF(uJuWgyk)KJYKw%(|IkeeyGN;zn0Rb9P>I;uV-H z)HMq0cILVc-2lWzK@_DCM%||KaVUrV^yJD0&uE~}YOzKUY>rRwBK%XG%}18YODw-! z2=duf;=;j;?7X@HQps0INh$0YdFJ<38VtAe#ab+u#R@5E2}2AoR$QeDGZ3Qwb07=d zGxo>La+MY);6KN8SM1eJ_f=ubUdLGtqWr*w!N={&ub#b@b@EFpW>0A;j;ev39jv^j zs>ohP2m=NnSd~iu0}BdeaLd3A`^?@@_!-vy9RmmE zbuEC}>?DFlBA4)tDk<9KR5|%^faHUP|EYZ@I+w;!;E+yHnkL&t<$lU{RsVljiJ>t? 
z-aCjzMTL_jT+iRs0S_yJKwHNUKdETR<59N|eIA4sffA?+A{AY!!P%!+}(kQf~vC7 z8iX4a$qqkhD|l)_DzlV}SdYueJ*agiVA-Ay`5BEs)kCU6sX1Ew3s(#eLqE5$6P5`e zR-X{0^^-~3GFtgec>IX(D3a~obbD@m(~=POrm0P>OFNvgH}P353sB}zmP1&foBXK5 z1|f&V$TAyIgtKHEBPQ#ud6t<@FN*QrXF(`&iVn@!$t~9wbQ^9EQ0RSUi%7KhmsEbg zmXDqz1tpG{W%`HSzZQyoRG^Kr$C0_`ZqcIZ+KpAm1NNQ#Rcz6Rx3f(}hu^DY9!nD^ z)m67gfCI$Objh7(NMZID6kKC1e=;R#5-ZO%q!cZ_{&R&h4Ms(CzVj@Xk+n>o*@|2) ziFqo)cWG^xTT9EA+f3EyA*0~vljbq@1y)2wDQ;a5W>bzwqm{&WGWot?mgIr~rt1~V zP&Y#vNQQO{NP#+{==2iB9Gacbs@0O9=_to4GcZL8tYN6O)l^V4u z@-ttRCnS>Vem9W@NVX$~HpKt^m9=C@{C8o%vd`5JXOxg~L`3>G_#lfc=~TUwX!`{} zs+q98?%H;DR5!WTNJo0m&|*_KmV)5>ec6DUn6V0Z>z0pf(#vb_=j;8KPnm7XxbgL? z%?MX4bpNB5_+`|k9~zgE^Lwhk7oB$Ymc@dbu7xdqZjmlyYK+F$v&OTAu2fTg@(XIW zd&9{@d(cpQ4b}v>x@}q49K@cWc|Et%!Y!QDX%59!Qj!8CA>OdGiBrfzU?ny5EyD(B zcSw57x;8^E4gX?^cV&bN2`M%VZ8t_R8uJ+ge2?{}5h7llqcjJry2NyLz~szhtp_*s zcUN6m0Mr6PO1RG}r|LX@!H7K(^lQ!0$e*rOK`+EV=zn~5&Y(oP2hzdoeaerO_}L6z z)DHxOmQ*5t3^iX=kys@o2vaGzGd!Um5w=y76@Z+#2?9uq((R{MiXbP=yA4j8RcK%{ z483aI7T%RR&X%l%okvJ-`>`{4fBGTJ`PoO$ehl_pyd)~Hj^d3PhWRH6z#@;lG2Kt> zf~e%uT3J|9aP77HEJr#fI!`ysn!N3MUw1+J#oO)@{>~k7&-q#`!GojbG$!vvsQ$-i zboyYL54d0q^c0DyQ$gp47ADLiG4f~gM-Q0`kkh_|-povj6#VNqz5HoQ8xC}lJa#zo zSZ;Ee*$UqOC|DG=bKPvW<9@<)+|10pd?STfiq-TPX?sE|pvEyQ#_0VFmn@R}sFftQ zJeTG5>R8a|c{{xS@(2a1j@;4x!|i&JKe4s<)&cd}`-p<2WrQJ;{)06Ng=&*!ei8S& zPDhpLeSVn%+fxsJlbtts8=9C`^u66T1ZsoXgo%bCS&_p<_)bo8pxs(oP@+uNB#P|c(JX$7u6^7 z_Fk9-f49L3mv8}|U(PJz3j-0s-iK=Y65Q^8At%Mhy}@t1dqqo(ncT|yjqOpiZ@&LG z`xTOa8)2~`I8;AU44|hJ_GfvFuA+&g_>#NTUsc1-04VJ0LgB!`7F?#WE$oTqva>{4 zfD+5X4v9-ecEQz_9r>9C&4VoT7Z&m`V+&gmi`csA)+6Px_y#}1;%bS(#^#IjCaBO5 z*yjU0(=II{&8yuWcac?dcbVj*25(FC75g!$4B9VeK9h_Ozx!HJ%IQ)d4Ub{9=h^`S zhM(BT+EakbgRk7`x_$xkSHnf3o9l`ZG&B11_ruO)w}qIFeq~-2O|M&UwEx~P zAN4Ab}slkWaJZ{j0F+RTnmTw#t1 zwB^kOaHXkV2RQ=G0@M~{1+GwW5C-z={&p~!bOXa7V6TP*siqv+@C^QAKrZ2Uw%<6X zGWo5m27d-h^9>N~~xHA_^R9o(M9MgotKJ(DqgC7Bn1g)(@UvF>1LRj6ziEK3a zd%i@~4{_NDG#kYwEQu>>gGC~WjcG;3EbeO+<0$I;A=iMki?Z^#4tkT1O3J}yxP7nMW( 
zwYQr}=4C}}ve8Ufgjw(rHD4!ycmVADP*^JzJJuNN8MVAedCvnmo74G&FGnE{w5|gI%3D&?zi1Rtw z=TEZwkC1?1^%vIXF+s&i3`nXRtCd8Dl>eG;Npau_{=F0ce{1{zhP4C zh#Q6?Xk=og=#z5qgRZYY%yOXf*JtZV>kd;CYzB%;te#@JE_@Bct=lCN?f&{DjWuq2 zY+p&qJb{}L{3S~xe8a#%Zmho1>vbL(XtOLq^DIZ0=YR;@sNkW~L>B6=eMJ5B#}jy> zg0?T`#^X|5>!suZE6;ptNu5P&jB9CfDu6;hvyp}I=gkw3(6v!HK*oG+p_3l@d?Tl( zHBK3NbdB*(_}mSIKQHzeW6r~MDq0By4bk#aUbRL8YI45d!76{0Lr z$bzbmNW*-M=ksNFL<)ZKQJng|mj%X#cRXWA1RV!`&sLOzx>KUC&-ca(@?WeT0Wf7a zqF*Vx*4SQcw6mW^Kb{K7JAt)q)wifl`>+8Lj2?xBnBj)+NfSp+x&SO#2WY5&bN|yB z@Km6KIfL8Aaf4MFV|@+>m2%z4Meef5+JfOUXn!SZJ#?q-{$qXO5Ya?-W4G_@Bf9^BZty3G z;Po3%rlZZF-i84MG0t`mcF5}<91p75&Mz}Ch|%U>ja32t0&;Y) z3sLD8Cjo6BcZK&`+|)ts4H%-?mERIOWF@i)kl*Ucyq2! z_bZfAo=6Rkz)n6+F^$wWZ7pPr1wr;K{Ab4 z)#tmXf|{LbKAaFS{&yn=Q}9d>Oz0i83vZ+Fr@xWt=E;WT=Cz=4FQsgBJmpV<=I(*j^-27SQwCFNck=KTW!)$it*jb0HPA~c;TeT5EM-P#)R6}o zQ=TZGO~iD6ps%@RRCja`+%NIrx5H~wW02vX#>^hQ_#) z6Vt*`T*&^=mWx72V_zU`E@+l|agnG(vz8pv7^~Ks33Jo3FeH#@1ec|L%cZPwb-C5# zc5z`z01*nH>U~TcKNapeR}@xuiFp_Vx_xva`}(!QqI}Uh2ABn*TXj$uE*u-Tfnb*VCl0@9=zK1KGzN>^KK~ z4|M+EaIb86b*WoShQtPcDVxo1t}x3F+yaIm7H(&9+sAO6VC^GPIP!qgpqmR}M2E=zs9wlD z0_F#hkgb?EI^at&H32qdncZZJ<%rZ`SK8HUD$Z7x$g%8<@?abqxI*0v|Mr&E#JMAP zy?L8(*svBOR|nF}v|Zs#09XN7CFD=rd+9H<8m|Wc_$^!^dZZvrlqlDBlH#bynHS0( z-K_>_INNnDSwOUgqEqvipv0b64%~yenX1QN|5=-B!%w9^x3Kp?E~NbXm7&4^(M6a9 z;CV<=6BzH4Xuoec8%Y-<*pWRVIK1Zq7aOj9EuYhVl*5l`Xljy)f#b$+R+|uy1webI zxgzHc$)E3M_`;?)(Gx+bMmq`qd7*`h7RkB1SfB6Q`oeaif0T@;njDj4w%Du&HaV+H z@h01($mfIaC;hNom3A#ZuHeav{u3DCxKQi2iu#1X?1h&l zU@MW9y3y`Rzk&$^)HqPwA)|G;<7l;*-t9@9bNTAXQ+1c+fkgi+wHSGY6&_xh!k{xu z&qafptoD=sMg@+Ps^q_-Sc@Q6WqpTgs2TM&X0d2M0|l@9g-jx=z3tHCl7P3sLI?7h zs1Dz^-IOgXvYIt;}RRg&X$*hn1!y+()9A{Y5i!v+I zMbettaWy}lQL33vFLZhko!Ntsf=rQHEQ#t35Xj1bO1q<6=UvnYZtH8w-#eHN`hP`}{b1qSUzvvc|-vfM)nf@qy z0r6js0I#Mg^^7D&VXeq+Sg@#lAS#eZ6-v*93-OK2J(#q#i(0F&qbtf(g;Yr=5m#=Z zoF4~zA}T;kE;=0k@w0BIY>PUE67IlM>1oWb?DlnU81RSa-!5$D2^uj5#sJCkYVHE+sI7(el&7!18bo536YNUH?f^bY{lA`m1{_1Ja zt_YE=nG&Rz*v3&MOw-ukN>J)oljsr`Mg(rIPULZQJK4cm5l 
zKOSmh+r|hpyuJ#NKKvfHAbV=|JMKupP_wbmF%bKx<4;q%>pycxsjNrhyJK{g682m4 zR})^T2~yuM+lQ#8x5e!enjI4xTQ*E+v_2Di55G@TB+;sn4=L0~bO(w7P_I5;b3BiC z_u#?c-2&MUhO&7NCpYdcv)GO&<=TV-IZNZ$+eMus{_tXtbE@e;_=#ld>pH$1D(}Q$ z+n#FQh;oq^k0Hrkz(M$W#SCHbc9Fils=_c5-<9B{%G?*ZvrkQU z2RN*A&F6kpUzR%c%M=|uc)xPn$uF?V)^xj44-#EP;C>f1EVF4~w>VV~XE>UqcGW^l zH+VbuD| zV~_cxfMK#)x1Rj%;`5ywG_|r?l^ZS^v};nDbOraeih=eWo$`k{V*iV@?T3hv;qQ8` z-m!MVx2Iqf76>Idw1RY6F|E`que5Rsl=CXGYH1_-D`EiVVY4_04^J?dzP4+A7y<#J z0c;;a*r%W>btnEROON*(W_5Kn*NKqcJo>#dOPPE6Fc|R1^U*hPqpwAAiROOFP&^{N zMorT;)FC7y{Ba8F2ipBf*aZ0&buZbO=ITc-pRXV_FHiWlG{T0-m0r+krD^r%?>{^j z2@-k8NiJ(bM&KwOPCP!iv>$xV9(Q=AM#XGFL?7q9J;NIX`cEKkDYOhCyO%dyVD#FieKAx*M56Y&khpI2R-SXx(S-Kx zjiL`**TZloFrip>z(BgKHo4(;b!HReXvzRh|K(I8%)R?{bNJuq?m{kCo6U$d5?Dw? z>td%!Cz^m=*Np@`Z=GjL6aPV5D*E4Z!0j(##po}GBk=8X$Pt}u4~vKW?ck>-B~@n z$M9al+0S1FOi&HJr>4G_x1S5_r_UhNdR|x$p!(hKUmtz?iaw7ieo}S4CD$UQ8U&yX z3%k1Ji#I-#fUYHhU)}M;bvI-3yPV5<%=sO7ALlb4mAgCZ+wH14CB5@WL#?O$etuVg zUDLqNC%;d+yCF%|gQr+r%|HFVt9QXDck2yM=gwxhElz}U(Mu85d|1LNBm&I3^=_YH}gz#^WIqr~D29^00ZFJ;w~Cvv3iH3E<>Ung9Spn**A+6FMt5s@t!mx8 zyxLBgNcVS-7AJJOBB3_2~klRklFSk?pVO@@#9@T2aV4~UFPn|M8xvd|&{QxcJ$ z24wx-jlnrgbTSJEPEAgSm>9+mrWv@z+P4nRC%tcx*@^d(KfO;{YJG;FAT2P@1RT7n zE=6!0p)zHe+9@}@w*kAyg1r+^Nz8sP6Y4O#dCDmfA&ATf+wo|<+F$;$6O!J?zbtmZ z=h>gFjFSd!eMu?ncAqkuoxXY=E-ww&E$&_XZ~t=rY&!qAp2#Iz?TV_ZKi29a;8}0! 
zq6yQ2Vn3fzF=L45&aFGmWcU)C^iRux(;qroHLyH=%z>k z)q+Xo4u12VXpqZL8-8d^UwV36_DvslrnH!{XzM;AK9#EwanNhdH`jQJEzLn&MQe-Z^eZ?UvA4l>G}q9A)* z|AAk7mc+7~jsE@R_-}H_R#7Xrpp_Ydc) z%X~(Ex0DoAKeT1J(RKrayK;Y$=LSBcJ(zB-8P`|tr%KDAkob!(L^%q47BNSmaIfuzlx6;ji79NLsF=wa{(T3#!H@HBp zK=6y$Ti=#BDYK8e?q{7|qsN0mU6Aw_t`kM;Hx1O1gyUe3M*YVJsVfYZWj{+L914l~ zV=)f81TOUIh2$ziCVCG}iowi$Y9!Y8oj&ZsluNA&37)B$wS5K!)ro2)OaUCk=$lEPR&nN{v_Udsc{bh zN!SLGyMNosU)0j0`_=ZRP8LTehdoG6==#8X=mMJ0m3^V|Uh(T>ybC(3MZNk8tWici z1Y^ev;t0IlD5jDB#-3Ra^dZ`m9foR+>%lOv;p(F!5c^U|39H(xlv z_e^VHf`6U&c{D!SvQB$5$1nTGzlY51O?uz^DBV8Y5!lYzC(?GbRl`ZFDR?}UX}*Nv z5Gh1FqW#OQET|amjKTwE%SemJCY8IRIFhaJfV;0k=toubn?hTlBRUN#J(MQ_w;e|$ zyq#`Ubb;QyLox*QyZID6Ceaqt!#|3v5THW(g;Qez;Swt~xc^MXcN86%7<#)w_%2vL zDbp1^iia5mlHnanl(-+sd>_fl4u8m8YyhT;n9$dX8rf)5S!C}V!b&25s7`F}J&y%& zWK3A}a(v=Q0P(w=WA^@LEINSavu(r!3vV$Pg6=IEDmY2?o3V}+885XIUua+6EsLR& zsgY;ExVPWm@vQqSmN*s%43-^C!m1~hpCyiCOq%i2aY>2Rln1LZVeV?Ci-tQ=Kq=#zB+tJDrGU!OtRL0Xcsopg$@@(cu*jj{apLwV{NguKu5l&t ztgSQY@r2bQZK7SSbXUnB#1Ea|4QpRdby-g&Ol+| zvc#A|yQ5!&M*Q9iu4~+T2BuX&$0VOCO(_!$;+tgUOGHt5r zA0zEihESfWWtqDdZ@-0Ul9$Qyx2Ax#>UTOp4D8-u(0~OLLovwlp``0@Lly$!n4>$(dPEt5Els+%TrjuhWPdeLa%MPmuLz5 z2PB8}EC`;?U&X`_sptA^g*<996Q1Ny{@9K0YaMDUCVrPUaxwAW&OU>}MhEP$tR=7k z8OW_gnix|~hpx3@*=GGEFOC6&0iys7V{9HZl0f07klR6YtaHm6un8&C?gG4)g!QoMxwr)8GM4iLai3r z1H;kU3)cuH=S?W!G@sGhG`B~ek4M*aZ_TxTfoES?t&rW4)^5g~?+^WzxZuN+!!_lO(YuklcZ$2HH%9IOV?RHo6xO&~j?fH81U6;Gz-F{Kv1^eP( z?K$6iRB$pLDZ&mV9+2Q!?=FaLwg1h?x#TX@Joh`!y0wHJv$X%DqY&TyPGc13EBL$v zxogVom(J+(`JWi?=%1cF8p_9wY-_0B*O1!UNQzd5M!x$4j+@p%N|Tmf^5F4y!@34u zvC4zoA>lE5yIa*-%$LOXI*4QiiQ{Sud%JBZSRl6)24go&ZJE3lpC?Mw& zWRKu{y|%yXlA6EbJbF)rH>u!vm^r7t6qWIK^Y>-{9R++D{%a)>xUA2BhU=Xm)X<6M zd}x4XDXaXeranNIm+uq=bXt=X|7I{fh zBgW&w()e5EAqLJb0s-*}l?a6xPdZ4OJpQftt?_AiCs+yzCv=>k>)sGe!B6|docA5M z6bBPVfenUyNMs{f$OwwgcXO6Qr)|72I0)vDTWYs#M1}sa-|WF*IZsfn>OJl?;2+@~ zbF{YXJZ-|VSjjLz0VvZ9Wx-UYtr&ZXOF+3`!5^X9PKvy#*Pu^}ii)nwgf_TZ9g(y7 zl5HQ>vh-sYxz;(qo$%ugbOt=D@vCNiV+QO-w*39FDc)J~ 
zMqPl@;h=8$`FU)S=$r6bY|_%hL-3nR?4|@3i>jo!Ra9`Bp8l8nPUG}eFx2=C1%Z=I zoS>wyl$+56`In|yc9}--CE^;_33i~tUk4`guXjghRV>NNb=aiHU4Q1MxMgO-DmZeGos)`vh-JOF7X* zzjC2svsK~#GQ;ci+(AToPQj9I zN+kJA!zt@!@VTh`#M`Y7?MTkhtbOJFCBnadGIw$Eczp^croAn6E;W+Yi66*hkS22WUjmQR5bL zV!0m97hOf%`bw-{Zg&3Wv`9!G<(BLRdd)2^memK*CkQC^C_hC0^y$@&(qJ-F4)rB# zQ(an6{@r51N4CSfsLcq9hWOgm<5!C6>*!?4%aD5K5%N(X(f~Fp$AZdLOIM(6u>N#( zXa~s~bQ$z*wb)iN@M?-2=@9s!o;Bq|zi*URPk6Sbq(rEQlxUHBY2or(PzV$VVfzX{ zZ;0v-#11EtS;g5T_AwOW+pf!cDfYcTdXvKSqv*9B zOtd~Px=MneU#24ep9|prZJt{>{bFbyuxQdJ=v-Cc1LMP56akU01~D;+SDngJL{w?0 zre-{?>~JZGyiF4r4=DYGLy8q?1}_f?i&?1wXqp6A3if$NfsgnTWPdiJaPi|1HD0J6`|~6K`j583)kQL>HAb^2(i9 zqAonzBFOq(|DDB0390vNM}+6HPca{ky)oNN0;6*XO1k_9JTuX18A*md? z7e}NuFb&=S>4C?aAf(zVQ;QL?^!pW%{*q)e)M<|7EOBQ+!%RZj9efaf{H=Tt{u!SE zY~pQ-iqe^!OQfK;6M#!~4&?O@c(xEegxiS`@W}0FhiL7N9S4K-dTdf!?JvdVLB$Bh zk%YXA;%j$@#G_Xfu=;mcN9!+>+<(*I=@jXf&ucuR-Uu;tqe~K2qYXml0NTpAR}!zU zNzoTqWK@?Cs)oH!T*(W59B;o>Z`z6yHuqoI_WrnqYZBUq8J?K|9I@vsZ)@wzgl$r} z=!>{`-ecv!X{$${hnk|Cp7Sd zWpeF86^*`H`55)%zH;x#g~p>Ddo*)oy0IMQIHTFu;C6U3^VY4f+MZF<&~q)Q(fS}0 z);A7^+Ri1F*z$>5F{msN^qTc(HQbMHLw|QFPLpgHdQh3#SBOlhrJ2tosSSwaK(g~hJr!MLbJa|6^AA-LM7k?kF?89vvt0aD%4?j**5tYg-;BTHr3XTQrnM581HPXLn@_XtjeR96h6Z%E# zltvwMD-JGe7@Qqrd_9bvGmb_?n_v_yDBX;@MTZm!9Jsab-O$LDRX7*!@CEY$jDTq- zkktQ2)HO%P-F4mAwrx9UY};;(#aZ?7kA*Hd$2-ZBrj1s!TvAX86-%7UFl)t#_a zn1bbF#e3dMJo@lJG6kc55i`QZ z@T9~Jz8z6QZ}M$l4R{VAS|k97_#E~as@JoOrX@EEJE9u+M1po)jJSv&XitLrWM2Q< zUEF0_OJguRBs>J&BzS!@&zq)3)EFq0w{H58wysmBLiZ3ZhoQ;)@g`%MX>k+k8N%Tp zZb{uI+3CE$1>n9Bs?(J3)T70lYtXo(gfL0eb@LUW36aU}EsqsiMZLh+sg5OF16ryZjRg zd|j7tl@H>?M*S|nUHW1S~@g=2G4r7z$NnNYx49A!!LA zF+gaFFcwZ8-qA#g9$^7(x^?=c-K8WMIfBwOek(OrA9dkt#F57`e{U_#WnLI{`Rtq8 z(N8ARQ541{@w=SRS|n})dOH5*tgJ@LG&m_W@h@ed;l}%Pq0?Y5x)`mDH(+r6*H>2C zQ6cHeB?{WeTbXE$P5OK1j@sCUTU5O*h-g0c<$4Y4##pE)@$}?CSYz(1ViU~#^0B&j zD2&iWhAYO|b$=*a)H01k#@x$i%DzMWFtY!DrGVfD_*MutOqP?VA zQh+DV8Z9kD!&8J;^E8gaLS2b${LW%0W`7ZRdC$L|ZEY~k5v&5-h*E?sG 
ze0EV#*_h|jQpUHEl_Z#VL9{;-fzDTGE(!X3}MR0?kEmd9!xbaJrDX%~+1(yJ@Hb-l;?;B(B zpwqC!yS3VDwU=!qGTT>D2}-p~!G!WK)MF5I!2I7R(Xt3MFb3*asV66FzAm6wbT!5{ zen3hMzwL0h#5fPGlpQ_V*qK7STMH;zg=F89)!F-LR{(xEc;i=U8G{C{+pbK=y&XDe zh7vIZG`ij~8we9NFclWa;KJC;W*SdQSKataV*qZ;EcbQ`}@0M_cbG1}y(6v&cx z6O;tvS;{PSBAdXz2TSAi&Fhk*Fq^d?6vFXo9SSU_V-ZK-?Y%ZT{c`WyG$E~FMlj#Q z#Ct9mbx*dK6=r^P^@?Z9&!CyzkbdjnvlGbT&tP-T!;5y*>Xf&wK(&MUkZBJ#66Xu; z-8-ugfxBc{TFQptzQx83p__O|_)eWU93VHVuaTNLw4l3cLfhQp2}up3uRahlwbj|W zGpw#EcGrMPTU8u}`33JA;&j<3A!_9+{|EPQ*aJH9Z^k1~i1g%iR7{RCMH83yqd6G6 zH^B2n>F}7j8O#vxeli|S(&qQXd!h^x;y>HFi{##<-VdHaPK^TheTn0J#?-$C&njEp z5Fyuha|&L=gTGDCh1xW7i0!`>E{@ohXJJx>+5MX0AJSdov@4hP3wC_E)SV-Rn<&b^M{iI}8u}@yoNu4^a}nF7W7-W=>XNbBvw5~g_*98N3hy{H zU^@gRA#)b>aiB~cF<6I_Ax&FjzwfEjfs?=L>M2Tq@zl)8DTiYA|Gi_#+(`zO4&c^QZ zWoIs#;T&&JeF}|bV>m$nqUcy0#+D^{v#09Md#r40XY!S0FX>Wg(xuF$Wi2xHt^ZZS zD&c89Pp?DNg#&mT*DNU#&+|IR0OcI(*{A8jvx-(XB1jIOH4B>`+|BaQP1?wFA5D=I zLwxheEcE0L7ZzuQ(>4!1g4xb85QqvjN#;qmy z9>8TirxQcTQ+iTKYI>+p-c$umG{Jjhf&#;p z7>mKws+ss(=(hRGoow;Rvf1z6`p}htql+W=I>db}lkbW0waegd-v-tGA`S^L%Ah z{JjY3$}Ces(vCcX%Qv%u{xl6obeBjru=(|QVKPc#iPo0^ZtLB;(Wt z_zWZje%>8%o3mtoHFS_T`L#@dSPN@I#xVO((8;eQcRgeK^|N7~qE)+|XSlnb0@pFa zPnLHLABI7G7{Hc}y;myS!~$0tkk z5cdQzWq)%3cy-S>kKe8(fH#VttW@;8LJ{#=Dt2Hj;XofB+FSU$bitQQrN?ZYprA{@ zAq5K20X-EG5i5-Ru>ZuFOGSH`cTq4B+>&;Vo;s{W8ECvqc!>D`n0wwifRRGQ+|m8g zrc~rNg!{sHknNhat4cttykfgzp!D38`B^rYP638e-{PcUA9FwIs*_(pc<|b!`g~df zVBa_S7IgARV3J8)yANus0N~#o=qaR~%h;}Gd6wrx-6WqydM?{;QCF+yz8`W4=vPX8 zy*Ir-({lAWgG1f+>j28GCohDu8Bcgc)ab)s`L5J>=#g0Jb!u$}<%wIgTHzVecnY693j* zAmxR1g&3=2-*HC*torrY+S*#Lx~QN(v*YLNj#mTfPBXZmX2o;Q>#OxtrT5T)Ec?KS z=-V_U>DMKCyj&bJ5SGPGZLT*wp2m{H1BEXdB)$ug<5<~@;ss8^HS93oIg(7eU@rgz z32Eyaz5Lw=s#LOk@5_^Z6026d-`17Rr6K-OG)3#VI2?8zDN4)I_%9W!V|v~aZT0=| zkQQV}|7A7wJ$8iJLS^aYQu~gxzboOYd|K6EjZJQwo~eS zHHiBkyYDnhK;*CT^#--%rtEekqoTHJqBU)MeJVXx^V-I72WslE%@uPb5Ii{GxV`vE z@tiE6ff&C^)E0UZ7wIk?6N-d05#PBE*w!7_#C^6F!|(>+fLIqHgb??=$b0CW0)j8o zoF2{cXtU`aJ`XQA^Q1wzz;RD=g3~=JLfONTyhrlf!UK57@p$l7xX^&q_}3?gha{Lv 
z$g;;I*aK-&0G1sd;EvXm%+w?tFF0cu6Gr_R9pY2Q0b7NAqgY_;yg>;RmfM(?JTR1> z`NsawRKWEIW5~zzp$W0;7{4xw?Tls=P(=U5*@3jqk*6O6W%l9QTlMs`w&G2Ah#MB< z1o4pn8Y`g9%DUg^!-W@M-It*YAEvSi_v#OT(G-4>D7nF9R!ClwvJ zCc1FHtB*QRO%8uj>8ng-t%6Inb~yq3%!06{6qOEwT*C|*xG~nR(tAk#gK&I+xSzLY z$YwgA7VYvb#kEt~$bDYQbhF)Ie;@>sW<&$0gW2R0(L&WYr+}Y}4f&u=_QJXz9&ej$ zE=7Ww5&ZFKSci(Rv(Li-iXkE{3sNqRL$h%5m81ShbzxoR1)>WRf2)KZ6OG~B)aKo7 z!HIN{8+%_AXauA%L3FotI5%A2bs4(&7k^}BAcs7J0+?<=G zKEEX>EiP!sT=oc9iCkLc&0= zg%l458SnBZFnV0($q1?2&y$|ouT(Q=-CjAC76}yi5MlOk4J)@m9>5aCxLbdKcXJ$UA)`tleD~mNFA)1oVjnDc>E#hAOGRLssHLVjMZ# zM+sYW>F;x3%X9?Y>vC{$?tWcd_7L~ohl_e&o>KaXHpb0m`(4T@i^I`yt;eBfbexMP z^2M-phH4Oj!VdfgCOw?b#kXbJukwW}5l8n8MHg-cV+p7-8$HzFt5c#S8adRpUheJy zXxDHL2rniAmew?oIk30&>)yPs8>cvr6%P;uYP_8KnF&l&0ph>_;OmzyXU^H6F7_-Flk0LDVY!L*Rxy{Xz3r4 zON9+161p|V!|AGp{+2>yL3w<;<#iE)gQb%1nBVHt6KQ0Ty+uvDv&b`n^mqTfKumIDKw`bElELs3Kl+W3(wWi^{xfN(mR<815 zg{u?!f!pILW|TZc)?U^7R?DWB?w~8-FoSaIjS+@LMn?G9(c{d|=qvR`geW9}RXn)i zH;3by*dS1QF`L6JpqR_^9g7-(0MtPJIzJSFt4FHFZ>TnSY=5=WM-5HIpgJH&Zo1dW z#+D4GIYDG04sTY>q*TP5Yrfa>lRV<@g?%^{tW?s3Oc@s61IrO_tBs^Io83izf``|x#AJ7q|?`9jyBi(v&Ofkaq*<^S08n(qOe1%PF`-_C@r7? 
zja1S5^*)NAKv!{kYFhHiw&DLC~O=tA|&W6tw@N)FNo0q9G zuFINu^Z7jt#ZF{$=-Fs`80lQqC#0Ci$3C6QJ8RW%lDg!v3W-LwY*2lvgDVf6~iUjxt(ur27C+TnN&*6 zwFVu+hOU#;8wB2pYaLGgSP@~;<6n+ZM)i~U)S&Wn?Pm?z?X_B@S^%O3E-?q8%4U6g ziE&tQEAVrA2Xo3StcVGcav=rkBzABfO@=1mArEHEAu9(25veUff54Uazr!&zryboc zn_5=Q4n20VNFC&02^2Jq%voRdcwS^^S?8vc(ynqXc%#~gtN>y#4Tjd54ww3cV4@!R z-K(~NVW?QNf#v)Txr#xJGtOX#i{2QldE!F@a)YAOTG67gDnj(gG-U9QKapYfOby$m zp_mI{A>v_STx%hb6`3@3P8k3V^uY(Fj3?@Me(Gx8PZJrSmfUc1(ybT-!GXqDKEo4M zmwt`J57!iB6Y$pzW`k+<87g#4EszSQow&_g2Mc+p(5r&!#!4^rgxC3$)CI!jMO5If z@C6!0XX&*v8Px3YcR630f)xx~5|v7~=m?4)sfl+E3z~0Sv;Le8(Q5-W9dHTK@Iq!m zeVZ`gacauu&YJ#se;qCvjpI7xUdcjeG}UJ#05%^}BAqsFp367E#%{bZ-RwYC4wg zFR8{8@0c#mEjQ(1CHBjEy=l2+$akccyz2S>i%P9CeuvbRe(R5^=*RWUQf}3mU zJOCwQVVx%{;?idrGZxOp8}6d)|M%Xkh~UQrv|3W^_`uV=@ptP63XR}?obQ!c?zL3H zY0!D;j5;}fV77qA^?J~(JN#j(K@-o0wzg#~cq{}6HhRPssY!BTQX7zogmYKmyRRE5 z8ENNqMRDHKbRM`|(BD=HP<9y&K4OT4h+|o59o;J#0Exd=7(?luSx~(=KIF3fNf7Nt zDmd~}-FG>gLZ;OUdi_oI>T>GVw)QUyyE$|8jNoyDxuBqJ!(n*_pf}Ah1eJxocPQf^ zzJJfOV~m|PRGMKjY#g{H;7);zXX#R{n}5MDw8FL7hPS(`3cC;H$2?{cPy!;e$;sgJ z({}2{wVo`7-(pr)@ajugKr4-H1C&~cp6p8XtoTjqIP69EKjgfMFPbVi4tRK?_loW=T zt8)%(7T6t5LBLvQ5QJ!SRh(&DsV_Z@2H&oQKI{CHuxRE2ffE_<`alpclC!5B(`L7t zwd`GwQzCM|#PgNYdd8j+cRN?|)l#qMEfV>h+r@prurs*J@3DSvWyR`BLS)j(!2!Eh zilSk{GUxGTu|J&}Gt9)snSxjh zSR}Vw9A;n(g@1D&Q1&wd%8NW_7q*pUbi~zK(IcqTtKjhZBm{mF^u?aatA#_a>*>>o zTg}eV;eiCxQ_aC)SbJbkT(1d$>Rd9pPS4g=Kaq*KiI@Z1u*H(-+~{N|2+wyv#>P`COB;2 zZ?(}{b71QPRvhKyTyyO_h>wC4?t#B#w~9p4;_Z;57N%y>MV7cQ9Qd@fPcggdshENY zF&)EA<{s~Ef__3d9V_6&CJTJQNoVWD>Ea#8GED^>%PW5A{LFGp4GNif5n>^D;nT&c zqM@C+9OBvPp8zQgdKg*Rl}M8}U=lnp@#HlOE)dYX^bniihd-d-!@k`#=)e+jq=*M6 zigy_D(SC<^f^t9ObNKBJe1byq$aj3n_qU<4yUxNZVPczjyqz;xi5JzF44XN)l_#tR z1CO%4J&Egp-=-}JFPa0(H0qw9Rn|MZMq)bQ^|Ilj=WWun&?3cN%pp{Xf@m zG9KC>$GiArv#f&n>7b-3qbgG2$5N%H_KI0_r30O?W6g_xv{;TA$vcwpoL#wZ$AB&Td1(*lX*K6#Q?&Cv8K z`%NK8RnXroHqf^}>KymJmkA-dAJ?n$=kT%$2;2R;ezL;sC0LJ7g90-mv zqrTw@N%_xnYs5pV4i!wNH{wYQHEm`3fx&u;2O+25r46ONo{S)HQOc6JNR4COGPRVA 
zv_tzMDsq~;xX+YNT$2QVh8aBfB{&@cLEHZ>JOs#3yrb6mW3qVB2CSW>Tm$h{g&qkI z?VQ|?T4NsIy)V|=Kb8~C!~?0((tE(`Ox8($K0p+(0;<> zH=`tN+T*_r#ov{;i{~t?@g2g|IzABtZ>os{3y%?LIlGpTY|H{0ZFC45Hj4xZ0fZbp z?}9jR;MXXNUp+=)y5jSpBQEm`YO$R3Dv$I=_;>fi%tX6I$Li7V5wYFdD0Z@WPE}ew zDD-Pt2*rDugd35L9b9~E%|=d`Vr5PV=6)0r3*5&jhvnCc$JG^5+H9O_uPEGkLCGrq z!#RP(L>fpoxWK^j;lsU>)M}&%QaoHBtUOpqj|k>?=>9ay*pWlBPPK0GcsjCRY3Cg2 zfu0UCPQAe4o$Ha`Zl8*tH+{iB8RQ;b@CH&DUpdh%o^vKf$nkLdocbr(f}f0uE=RUH zlLz2RuH6>QMdi-<`+M%#1jv2vFH`TjCzY8uk8a*eUb@y#*c!k8d%~>AVe(WoN#|m^ zp*JI4i$qwFtiTS54yc=vrm&G4rgG3B?06@J2@a;=p@-7pfIZCzND00>ayl%Ao zD^Kxr5F8?{e2WSN(*j%f?zU{%9&h~^{*&SVljS?(O*9!`Yq-s*8lq3w_WPMjT3T`N zjZ@_sS={<`87+SyeJVKdgtl}1Vvo>g=H5m}Rx_il?}K9l{6embYXkX1`IV?hwzYW)37VL~#N1nB)czy0imWN`9^7BF)SXz_O{j22Vlwe1??}ic3(w}Y0N*NM zR1mYsYCQ4V#}kiEZNSSi){n`l=->1~BRE8MvJ4NdT?5=6XRa-D%)W5!VRPqqfvPIa z12T@tVJvxz(yjE?sQS^oX3@a@A(bvlSkMe*_9YzI^#Ac36o3RT3E z_)75Z#O{s%*Mk=^h8rd< z^V*2ATyna-QYVV}hdV;k!E!*eCmdTP#-kiu@Z|7O-p$~^8hCAGtMTY(w8j~}u9ks( zUpC<73`-kl?`sAkIUGidgN%5V^KSsQ_sYZq$Z*vBU{wIK1CVp762twirC9sqDatx za(I{bR6<69<%d?Ffr```JOcY-_zM${JXMG9*AEk*%Xa*f{9f)bTlCY*#c?&NzlpEF zmXJ3ITb0aivq!IC_kW{qJPFbOWhccdf5;=o6xS;B4}{m6?|~sW7zk$k(^NyYU(2HJ zJRiep;Lkkxe>I|3P)ToXBc|1rR$!s!3sIz3e5;>d34XvD9Vu_TgMxo}S>0a7ALHUZ zCrTMMv(YLRYed^<>N*@ir!(2$uAv9+3I z1m|(^?iuMaj3HAN@%AiRJ=qSy#P_P5Vxj=IgrOeKzESJWZ2nsXP)38K&>4oo%;8QH zKq-wjxrqh-{=fsc1N4jybpa_gKlUbos{@z1MwzdwOdGm>+AA&kXR7~mN6pd2z>G+6c zAL9@VXu;RYHZ5o~;?8QnI zR1obYsS!#Kl~N$)bY5;T&GjNJ^rGCa=FQ`W_&Yr z9*i(uD3d$09t5P&YgFWzK{y=dOa7?c0q(993;xCN6E zJ}v|o)`n`RSvYUwr+>QCnf?K+Aecfl0_L879*%_$NX`QwLe%R*;gpv~P9T|0t8GNS zRb=DGS>CFv*0g>rU5hH0m=;{6liscKWpmU+EeoDVCaodlI;S&fCI2oNZ){n%R2056 zksm9VMUA+r2R7TC;>-Ry3-4Sxk?50IixV3=aD6Ivvek((lnKhbbwsh z2#+ov^J&1MeSZH|5g*8J{$r-rCc%)mS7iXk0RYLvCcw{+MB<=vsrw?k{_kTVfhn>| zF7R)#i8E}h;c`=_#I5_Hl*%@vS2a^yi#wBpy&}@Z&c(|u&oCP`R?#UYqx>BI*-(zM zF6LaQF>*~=zXa8w!x34RLUWo486@>VNuyCgk&%S7=`mV~1CF<9THsf7Y7eo*xJH3;q;{1+Sby4?MGVM+af@;b9Nk_rLGRI*a`|0 
z;T`@tN2^U95cB6ye!M5D9K3#iVBb8o4W>MR5pC(%sOy9=Ad*Eni)|2C1eFps3cDdg zn25L$JXYer_abf3n*BeheKIxxPfx*S*VH%Bgsf6_GM{=RT-3w-3vj(%7nNtVmnaL8 z`&F$Dxx!yArOT*Gpk&fc-#5E19gAmr?qGbyZAsm0pul*ihMSPl7Z=q^2N39K-gejM z?UE-t^sEku=6A(ZlcTxE!4DysdehOdDIU~|@*TD+qu$VV<2KfZWedIKl$wQEhV@n{ z+9<{C4Ow)RDmqv8!ZR;aL@ySA;zFD?PRqlWF>GBnhb;ENeRZ0DDi|7J;L^C_e}Yw& zqYv9wPLzqF=1_7C=bo%D2ocw~tG)}h8bdrtfpk{=AIA*TmmHJxbBhFWNjnB<92(P?>HFBpggS$c2$1D*Zp83*i)_>n z)kIIY*N2OaX15>HDY;s4dOE>UU8jrI;oCirx}?SCpt4X)pem{z+BLv168|)_AOU4X zefoOH@8;V;;olkp*H+H}zC14MNlEnF*k8_Ko_ zf?!cm1wR*BmYU~Plerk$I*)0Tmf)-%xRxoK1 zlY@BhY;5Ajf9NvP&_XcG5Z66uJ2}801@(j37hqZvJ9XxT<>4=|mvl@9-_$^+gBPWm z$;z3s!dp7?honDP0X69L_AHUDCY*(8p05I9)Ig*dU5%tdI3n3AObS@rB zKm;cy2%W+@jSsTHsn@9=P%UgDgORXwo-7fpSDWKo2{*AUCITl2w+xLzKQLVC-wQ<) zPow!#g~-(ITp6&ElRQR+z@kTo@@;116x)usN=3+7Vy(|Fv#-VBYki;?#OJ!Bkm_}pjo<9*ol z1%SUFG5cCN=ezyux?%T^Cm%)2dsa709p*e!Nfr^O&#%_yD8x*t+iW+hNP~WNyCCqm6mte_%RfZq-jd0H46b`zCt}b0tU3V~{SeN# zIsV$(5qQPey_^U;y7PUiAEj+J3s4FUEGCpD{^l+wcoMv5bJYpKGn|N{T}5JZP*{0&4~DiJHt}b@CNrt(Ux6(-!MXp0`+vA=T9`ds!%rV*orH@+xHWh7 zDWz}pUyNYSWSbgRv)olYBu>)BU36JASHB%B@83=}Ao+k4L3>wMC<>R)N_2N$GV!xt z1PTSw0^1$jyzdGB_+@z=r<)$$!XC+c@GZq@>DWQ{NP3*Bw5_Dw$xoWzyo}Z54$TPl zjEvX~5oN#kyPs!L5piuo%>Fg0?)^Jm<8jMPA zcm2l<95j^GlSzFi`P#3F%T!7%3gs%u!wQm?&}jlIYnVyMG0?g2#AHY$Iz^Lk;;EQH z+OHJ8uf7Ip5I4fn_r5l>RR+QdR&F-zc9Q>E`b9C@3N9GhG3AEkXePqM`42I{CyXSy zFwWM>jxX~psv0jbM!BNZ5*RJK;SH$P)K;Y%+=KSAaR9d3!*}Svuqa8#_BrOBjO4)V&@A+v ze|{tiGh?kD%zQ2MQ_2#2AG%&w31YePAc5~PhCkNgShb7WtubcS|FsC=REOIo0~=!W zJM?GNI7ycX*WIL+m{Cp2u7q-|qOl_JIGsO$*I|c9`1vu2^x<-Y5@rea;{}ow&YbG# z&+_XIE(yRA!pdvg&KPs&{n9JyA93fd1^10tX!SZyfA?Y3R!Ky@OYw=L7%ckon1A+q zm{^N$;6mBt*h!XcIQdW!E{mZJTcQ80+*FvYE4d9B+#JgBUXs_&vr&sPT44ipN#Xl1 z>*3EJXaC&BkXP`QkP}{`jSTOPH=iG=*AcA#gLqtaQUt96S)>r~iZ7N|#h%_eI}U8p zLJPaB=_+`9h3vs}@qC2aEpLsxP4tOt3Mp%*Vmnw!9~VdiUMj{Oj5o&xH1_*)?X zHHzJt4fsU!c!fNJ+yN#G1p~aXWYgvek`pQQDT^g~`F**6(~|qtv_M^urg57eQ&m3HTGVy|#Qr`~xih^&LZ(n#~FtDrxx--BL2 
z-sH{n>|`j^Foj?CN~xU5@Z+baEIiZoejU1srJcK@`_Z|iP$RihxYYLlAW_&?N1ltOF$uoW4xsOO*95#%m&hv zWl`~ibj8nFXG+PuD960p#(eXcD0NxJl8A91)lz><}rxCx=C%I^OP! zb)AEL3F*PnMiR*lm&5ZM7e^--)d-df01#kd+u&N~pq=o}DqE|f1)tsn9K6YbifIhJ zaZmCY)_UBonOd_w*th)BRr36?uwN!N&^Am;tpDB?$tL6_pI2aC3IEKY1l)dDG^=y{ z4}X9%D5;ic$;Z;eUwu|Egh#Q<`Kv6t`V-#!zn=GxSNnM0ZgIUL>dtkd40ZhQD6KTg z+ep}04T-3l?b<)bw-_tx_Iw64019kfKX`?c?%X7()?Q$FVTBzyOR~$fpVg&Hs%rYw zVI{Try2hE!KOHhUqemn5-i(foIG@*btjCQ=j7M{%>Ct*sdrc_)ioH%r`pEBcZAv`M zzw*`bHayqn7g(~QZ;Zy0PE1w_>gwM|c7prGHJp<+P8uZGd*R@O81xhsWD~z&gk%!m zDp%NLgG)7>)is>gVdtl`myDVwMfdc=`;FuhBG1B@vXiN&`p=*CJYZ$?f<$k80a&=8?*q|dDM z5*~4ARkFjq+pZGC#bA!vZNpiI%T}$9AiASimP%t6dBE+^=$;!h@`D>yVH>sf(DSMQ zxzEJsqt?|iHXIDiv9o!d>X$k7t8SBo9%5az;NbJ&Z z=U_}0U{+wDKECJj;u9YT6}u6kC~WiRA42s`rX5mv+@7*N`x^DAj_GFddO7Br;=iQn z=r=HN)A1;63}xMu04g0CV3}Ap$O{FY>1ghdHzs|tJU_Z6F?Dsr^*TZ=$BdmN`)wKi z+D-h^fxuz{hQDcHjcK)fc$jT!2Kp)X9Y^*(0(#1eK9Hvp>j&#TtjPOnqwT%9siBr| z{1m;4qh5e(55{iFV%kcoESj~8pORR7Jtyh6*E)eA(r-^m!@oa{REC89yg^=crs9ST z0Vym+k*0CNfeZ^|Fese%1Yfu9jxzhM_Q<=G*(4&*p<1jl7)O3*8O7m1U3@8JW=Fql z=)JAE&fVQ14g^`m6a4Nu&X|qD?cGZn!qzXm-Frs`oJ+3OoG61KJ$(BENv!GlxV`iq zasPfu`loZ$7WQqhHbw@TsylV?$OooimSo5<;9Oof^+H|v+w*sU=p5DHgAnY|^csG> z?1KZ99o89$F)xpfTAK)Vuv3-40l>us%u-V9o1P~v0jV6fY4(ksortQWpw z9Uf(v5y;nAf~(^eQ^M(>RTT(WB3*CDVI z?M>g5V^Pct+KVLs&@5b*@7*3vc^~3_-IApf=gG&1Y7T4r_ZbtG%ctFos3z;L@H{Iq zu8hwr2dm0$nS#k6 zjt#ZR8hyaESM}&`-YI7ZD#g|F5}Ge*7PUe+XwZL^NyqW}0j*x*S7pZYSzf9T55Uuh zJbIbv{MXN@Jp$KGtZJ*LtW0y&7QF1krMcT-{cGS*k%?g^Wn)l)-KHTs1GD{zhd2T( zCtzCm2bV&OCir3dW@zHwXn}WXVX_g@si0PHz0?0*<9l0!U~gdx zD>r-+85h18J{F6GB;#8fI}7ZkTrG=NA8t-9SG>o7kcv&-ImNhs&*I(b>SKCy75tv?5Kbp>oYh z!iprL;G~vV*m(P(hNvS7_J+3?5XZyb0lvA?l83;VaH+AhPYl;7*_d@7N$AK{K4A?_ zKm0mp&7zF{qa*rf`p8AVY233eBF^n8Yd@F&(J*Pd)KRBOz@=LuX9T!Z__xEbUti~{?EmWpu@V2oX0+4`L9%)^{`Xf`%F(AmT8wRmk$QK+XEDwAi!1 za<3XHw!NI?u$_a=(gM!l}s;kxIva{f4YtrS)m9t zxRz925Gcox6JFg4eiL2vdRXiBdN{qM$;B{!kVb! 
z;EmYEArcHpT?@EiQ56_Xz-DBAAF|AkeB8{2zJ?d{)1z?+KAt_Y#-;)KetS%M=m{;@L%B$RZuR&(;<7Pt6ob8B zcwcWe(9?VUt1(LigcXH|$wg|!9Z+s~IY+5>B`4)ujW9i>k?C%T(0+*D70TBPL5v*D zW@C@o=IEnVbdmgDqr3X~4AFzbBQ>K({m%&v@_bb&4c^O)!=(KuA_n{h7l9{Xyi==B zYR{H-V`@kVh^w(q=Zyo^8yCm@eK!eXjZNzQ>UKga8ST|?K!=W;<%75ly*Os>4##!- zAt><);@vmttj!gIkGHl}LdemYCcAM@Ohr-5CpT%LhzUcj+RGf&J-fPh8B2MX^9U>Noo2iVPL7Y~r)ttGg zMeH}Pj-D5F%MYej#|51VLgR(B2@o>fV;T+~GsgCNWEkmJ3>s<34i9DOtA##Yk%Y5b z!RwJ4)!PJTX80m%rg4d$z_Ci3u+I-a2WXkZ0v}#ZH9=h0iW0M@ED+qI7h!al_+8t3 zg7ds?yONK{u9;>vpMr(YvC67%qm zU!a<$p|EXxA(+uCi@k8ralp*qI-X%Zya>3oSlx=67c8jzSPe`vaNl~O|oOI%l6 zo_vw40c_m2Yo4ik!k%X23Czqli;;5tza=}Z$C3rNuuR6-9IfN+*AXEtAMu?JJ*uzb z)BMjqDPF-o!x0u-wFDNRGC`b{Swe%Kv1!N_1|PO05B&D5yLI^}7TOtyya5d>k@~Rl z*-^oWX<5YW7wM<}_&p8wdm9cjcHAktdF1KB{Mt{EdR71aNWI!!Ac$@6GqE{;ApK45 z+{wa-MDLi*G|9q#Q9Euv6RKj4MOc#5WrYSDT5Gx9VdGWu4BHStIsd_D$~dvNG)Q$d z%lS%Xx{wsvpeH^%%chHyU#U@5HX&o=qhCOR1HCjFUX6e&A$t|abFQUtV=#A-_g~i8 z%^O>Sdi(74K?;;S1-!YZ>sh{Tp0OBd!@`;=33|X zUIBza5%)1}Eb_!t$K~PI&~0vJP3@Thx%t0kV2$t)j727^CoXe+PDj}g1ZCobr|%C# z&&+~^tYt)@wd}6Q{=2(E2BuG)k}7SwU!~3<%sJnXF{*wC=r7SdGHl`A>Pv`Mnhi@A zED2qnHB{+1hz*H%0Bhd)F}OD;E;*-JDA37@aLM@2obz#tf0j3{A<94fA5&iy700%< z8{8ZB#yt>R8h3X|aCdiicXxujy9NjvAh>&Qf&_Qx_CDF?{CCV39%x2WRja0bLKUA= z&QoGQ+DnnqNg9vV8%iD+w2c^Wo@Xy2TM>hVtRand_tTk-Bs73M=-gd#Jr!#d;}FR) z1?TK$V6L};o~^e;Q7@c#s(4Ov6uo@+w#k?DtOx&=_HXn0b3mwAC5VP+wVJ4 z6(z})eKtcX5AFN%e(e_*Xk3O-HqNgu_4GGU4@!L_VRJ_Xh$T1w-HL|p;hFVK8-rrilFs7*qOc6&(!B80u9D(o6JO0T;e-U>|xY;`)d=|J9P1fqa42(z+9V_BiZkDb6(3xmMZ2ghx9EGvcB*_iSf=au79ldMyWcb>)eJR zb3LE(Yo!v*htOdvzL*vU)Zt4Z+%;EVlzt#f7`Q;caqM#%gWQ*vJd|{hT$TjoXVatZ zbtWkC2hpPqDkC;R3hwLT7XWRyIK>@I`we;cj#b0`wpzEN;HK=jFG0-RQ@zjvuM{7K zH#yee#^nA3H;zMvN}#a7x@Uzpa`age;vN53#U8t>mpVYjK@%F%fcU9ut`wN4)I0`j2DY`dvL= zKz}E+jL4wZ>rQLfL-++12CvNgUeq-6>U4rUT8dwV;@Pc}1(m;?@N3fk&?eKyxR#*| zFBGWJ{qN3i@%06`6`9yd`Dka9QJQ<#Kd_KOlt#b1qykG zGNo{#19nAxIe%fpZsY{af??3SoeA+G!Hpumvw=rx$aTA?N_o-dTS~ZNF-xxdp{&I- z0=N!=(pgJjRZ2*P(RL$o{4F%8qH}QGn)?rQH#rPq1gk*3wum5mo9Nstom!_EKU}8? 
z#N0x=THIHgJj4|O^?V;HNupbcIoIiOSTM*{|vcD-s6P|ZeQJtLo|2+h7 zunEQ;GHdsR>|&UacBR4>a`_2tractjy>(!0zRHX?y4G|URxlz$n)@xw_3Q{WxRQP} zcvlPikqa{ZY`Lh+6Qv0PGTZJo@8YB-5T9xV_ij+QGblh!j_o5^N7$xU#F5s|0s8se zAuVh1+A&0UEK?{Rc$0o3!rxZ>L4@4rQ}ARucz_gyP9eSO(oKOLolc{dDo#|qcvBgA zhGw|x1|C7r^5t9&b`&k=WTR~wl2(s4pRiq?I6qqB!ISe-A!iSO>9N99^$}at-!ujosgZyLWho6#Jhf+@I-joR#<|q56bI6(PggG`oUTVR{ z;w?>HeH^~4Z6IpB*eB_0{V7I~s2DsCcxBU$pd2LHq$TAHl z7|Srd@JlX8q!a0e@0k~aCLrXbGwFOk5>tsF!u-0B9PK)?(@Ds)Zur-AfHW^3W(4Mr zxA>3#yzcA)GX6EVaA7ax29h`Sq|Sjt?UlP5HHgx8-JcP41v2whw3)=1uo6hH85h0$ zRWjBKSwnFPW&|w-U(!_8#W*}%r>*Lwlw8}j;BtQ#`@EdB^~^xaG=EvNGAdMTF47u> zPe#(<=~!D{%E#xj)5)+{i(Fcc^YOkrXz&;1;dT8Pd)!;kMcvK8PDNeTK!dT zD7w;r;!D1{c6p!0<7(A@(T_Ax`MJub)0IiqpNR;#M`BAQs@O;x})(qs#)6D8hr-%i;X*N;mA^{H>pNhr59^&ymN za?MA+HTpbWHTDNh%Q^IIq{0r7Ig-@Xw%5P5*ZE#IL>b6uBLxE66oQb}zuQfuqFw!T z<_R!zS}{$}{4ga!$TxTXNxKz#GOv)~JiTUj6z8bD+=(%n9uZ9EZ)`T`gRJVYoTF9^ zDVOzp63)Z@q-6rvKa1n)3D%E(d*n?zJGX29f`6=<#6XPA9t# z;{)w%phVNTh(aGTAc6ov3=s;^4!;L{kFB*R$6ag+HkEp7wyR`!TfY z8@=Ik3_H-P4yK5pyYEy##-0_3)US>|f0qQ2PZ;9o&yvi3d`_NF1k>^VMv-y84LTie zs!8-=9#wtf=S)9?A1|43Yf){*QYp>RSILLXKNsRH)y(XPM z49Kw zcyy#h9-``uk{U36zgflm4w|wUb;;4D+}8Cr4}n3#3G?>zb_Y4N^T(t*Y?g_irkKm> zSJO|khg{H}d}I{P89?(+?eTdurY<-3#254n(v&vp||*3=PJxyF56X|M&=M^M@Llz>#>4A>P)HyV)U-n_L==3;0ux-{H7WaYlWm`8D|GbOvnZXO)Al7pa2Z1j zN!uZn(Z^7`thn(s`bO$vwZ8n+QLnP6@zL4M9gApF?0}g&yCYvPXxt5%6m*~&O&v5x zDonh@u;^Q#2n)$m`(>JMF)lwsYL}TRKanqWAV|csQKNkl5Uc^rc$8fJK`_1HM|%R^ zme2OV;iPke`FXT;`aK(FS4`g@c9`8ax-JGW!P{WbObse=s8A?juPlSpUu{#+K3ITJ z#S9S3$wv3u)=av*3Ua+pJ-|4Lf%Um%G5G2F_f(rp3RE>L{E4dRWv@|_@xdgs=9jFh z+)HFtrNjFI(QuJ)ZHsx-NzI=~0y4<%Db2*11jqBJQtr;q&dcqN#>-`IWb1QFqQ`Kr zJ%@$S<5xk}IKJ8C0!`FP1acl4GQD8FH!(npMeFIXs1+9OVGqJRyP9-Skq-3+J2d1V z_?To79k5oTtU0>&<$Q-|-rM9z`XH&gi2*#ivOXlbzQJ4MyY$T$a$FAvHpM_+PJi zm~JW))r?5+_2Z@@{yVzT#RM8Magzb9AT>N5^KydVvg?W-Y+G3Z zU^3<|5B}CDWmE__thv0Dh(>cfAszk8T%=e3PNMOLF$3+b-mL)uS3UBK9k+Dv7?L# z0)SpEU(MM<#kQ*yk6+S)k?ExG@S%3!&)mp_HS9TBK|EX!u<#yof|4b@`)JVn3zvrzk;<-{)Xhh 
z2bYwp>Ih&>ztUqZmczXg^@>E_k40tT6iZ|Q6<2#T_t$68ww$8GL!P@>Ko$dsl)W+Ir2Np-K#Ou&IE~^&jCX$l(uM z3oJ;N*_`9j(PR*7)j1h&e1&l860+7trU0 zR1FP3!RJg?&1TuF1-NQQYj=#dn(@N^lRR4gJNQ=({QNM+4g{WtX*qZm=T-zMaQ;Kw z^GRHjfwY}n8Gr<^HT2Ctzf+JYtG-XY>teL*-OttS>CcL@U2E0ke%d)z0Yn3CJ%>-8 zA=901f_vGRk9Y(5j*SZ4+3`a@G&@dB!-l z#DBb1Y8zh~7AIUN!PFM82s0_Y;yVTfM-kn6NWP>7hem$fMO2Vd^#Zx2=BIqYmKL6e z<}%qozd-!bYQnoSt+>OsvhU-jk;zNwizO*rcWk;rFGI14qYL!*B1|$+BCMv7syL>Q zp#hxAY5`+UC|57!H4P2II57wqCv8~{@j`KH=)MMIJ_@{Uj)pVa_%I>nZV&xgqEBG; z0l5n};?o(_R>?dz^VVC`wog|8>(eo?YK4uww63m>AQZ#sQ`+%IYK>~#OE>|Y|gy@SRRtewHs~E`O#;b?jc)(=W}CdFZ|l zVxSlxXo{sjtUKBvz`w~67Zk(j6aDjgY&#iF=d4}JU^3qEc6ruFziXqSeQoU0?|9cy zm#T{!*rWb-mpWzDpf-Q5=N!?A4FyHq=%^?QULwxLLlCAv2|V{m<*7tAqo|XnAgINB z{4qRoY96B#sWBXAaB2<|#^uBFopCNEa&#jS{8Xm;3nO<})-j{UK^TwbzC2PZLqOC{ z%eeXLqG5*=k)@ywPI4=EO&XW&=zKjm8;FiU*W?+TRFG(>Sq#H?SfGzW{Q;Uy<=Tq4 zwF=U*7CW0S(TDhB5o6^9S!|Hk(Bad-RES%%*LLkhF@u#R$U@yw4O`s^sygV z1d|3V)+twCOA-*dq9rM#J7Bcr)tz#rP`F z?bFxN7FIaalQvGzDSj=*0UpxT)v-)Q(&;xePcJ3Rf!~7N5;A`|Zc=y-$U?~oG$!bQ zfP+2t_2hGOiuh4UCi9?qD?%xbX&N)-^f?>Dy4K)o<1q=M`OoxW`K+PD9jt0gVFM$l zuB&$qJWLup4ZocH(TeQEJp1n3zkk`+fU2}*wybI^>yq@3!{=r(#fl@C_|7sAW&|;3 zD4F7gfHZkx(8WxX+kq(r5fc*{HBVM=NyHh_)M9RJi1k)( z^EzME+t|Dxe{XxX=uR@luJVxZy2~{GJ)X&S@cy31YHmh+d^)WZKMTtlgS>D(E0MdB zElt?}GJbeeU=}fbtX86Yebnyd0Ds(0AlN6-cSX=mBXx&)(h-0Q0H#8TaYz}eem!WzwR7jrp5^JIhrlCCJIa4kGWHFAXn`y!~jvrT_N$MIkePM!sC-=Lh$S%Fo%Bw z4E;WZL1K&;2~C%_U9KnNl$|3XEC&PL`yC|T>ZP;Gp|7Gtbc_TM6~*Y>PFLxmd`dYB zrw}Q6I<*OF@FRJsd4xN`t=dm`LBXL0zt^IuFd!nu3mD`SWojs!&WHdP>R7AuvqICA zN7^cx+BJ)uRitM``LO#MJa(xT1Kf>fl$~d?nTrwm4*L=8HQbE?`IedBERr7B`45*H zO`FJhy@UOE-E9*nM4&`gwkFnmkFE!?%jNIwZ4TM ze7vm1ETpiRq*;aZ@6w0O3Z!Cbot&J+gKMtSZIUSIViSv-IMYq~m2=6xy%$3%(3OM; z_cs(i!cDvC&>Tism?_9&P|F-%hi&(n<#g8Ke@9r28KI*;O)e|V4embLh9j2JUaXsi zRp^e1vIsZ3CZJgyL%>&XuJiHI^1Gc=wbpk2_I|fE`;Fw4RvuWX_^Hxb`N7w~g7g;9 zh_6!MX*Rky!YsH#du~!MIMrTYv6F&B_r^oVf+a8saR&ns1p!nBFVl?suEuSMkeL}k zBdT$;W;v{g>0(4fAEIKt7-`JNr178+=~qSQ-B+iVI?}Duv>hJS33c&jRKuZwBz_q= 
zboc?60+cjFf;ofd-ReVpvQbYQeJrlWDQabr@vxs3c=TZ(5;)bZ-KcnBgKWupK6YRy z<0KK0Sy;2e`G{GwTyTp=#a@UZzW|W;U?kWQ5z}PJWTmD9fRMOM{Lo<~ikJQ3Kj*;w z5*R&$lVyX9Yh=HeYdFgAMD-NIgz%GD56xV0{gmVc?Gcf~ViL~9Yh+@Fq_%A17_bca z)NY}@MKQ5Ya|zRsxPQtT-3L_J31Xj%zFz5`nEPR#g($znJk{!DJX8h7E8vVQB+WBu zxgFKNR}vrO@%{Y!vjli8-5gGgQNLY&RCfduCRQB5dd-J8me(KnH&pur2Z+$I>TJhl z+;_2H@MOAj_dh!k5HxpH>Ms$C^K4v(ag})!w=>z!5*%74mz6DbZO0xJJfBH>-=8Ew zPMax+^SIe8A#XI}AHmdli+=N77;SbO&b{eZf3^AMK0EJn-e#vD8gz6%Gt@89?GXhZ zE%*AZ^ywzZS}&!x>m`a^dHV^QUcJ=*ZH-Gb)A8(0|K*2%vEXa6pmf{IRvIuiAqc=G zs(_S;g+(}uUKX0NL`xqcy>RNBi3KVU_f+|dPS49& zPc{$7+}+!KHy1f~cD3;j;+!03|DoqE+0MZXP+o`*y_}Uw2x`s z2xXjIZXORz<~iPz+Df_km?WE(e<+-Hr^hZE$4nQLZNNGhCMe<2bjKGEZqcWdlb!9$ zZ&*C6q9zV`m5XHZk?qm9g(v0!IB`pium>0h%R~il!U_xtxZcPY0(Y~kc(@_u7=l&2 zk)8ENY7&w;B*XaXw+g!d9I57nj2H9GB^QkhnVOHdmLhP=;>%VC#CDCY*IDjoa@xT$ zxUPE_lMqmuJ7dAn$jDm%U^3VyO0-fG(fvjqH_c7)rW$*e+ z#U!r>?z8!KDQNuo1R^Vuh)2oy-|xGRdU~#UN?fbHltJ;(yn($fzv{g<)k)hw|IkN< z@$BmdaMRj);IRNw(2U8FW@$wW3rg!XiG_`-yX0ba$+@#;qxk4q3ke@#^k_BX5(Zhw zAcmbnm5y0OG)N{jpXXY{f_<7srMBq)`A4_L_dkT2#K>c<;3u-D9O5hFy(~-KdVeu# z=Il_e+XRI%Dr2Tui@QJ$i$8L5!SZP0qL)L0@vyD5ejC))5?#pr98QfxUXj93b zO0fMFOv2NvT9U|#cg%vbFxO@XX+Lpvnek-pI|MHV$6T5$(5T8xAnvAlNN%yPKZ1Be z7VP*tKXz*H1-c9qkPnL;-+7yhBao~oZj(^q`dsG!n z{eGvYFPrzcS5|*DhpjH~>2;grRoUxktXB5jZ%$qqiEu%)X<|rxa!;~PW2JC@xmt>! z%qvYWKO`c>A)As3+c=m)mKy%O2-w}1%(hPlmyYc^TsWEUdb`R&@Gdl8uB;L>RQd?hZcR85B(bpL;es5c)z% z!rP!PKD#(uAU56n*rSx+Q<4~2w9|^8dAgs@sJTFpWC{tWKuQM`-7brXK7{jS6R;Tl zy5>9&<1=mKwS}ICQypBwUskSK3mnsFsQg5MQ9vM&HU446*PR=?--PPMi{ms|b4ZZbrY7`Q= zba%#iv*{emq?D<8EWopvS7zuS8G0?$szJqMG4oRk?8C-Q7*{t-n%NIa^}=0Sdv9$U zI}>L*g^{}Vr@CPRlMU9F%qbu0e}pv)=9S$13r%(O-)7Tiqw9R7pK3OrYCHlzOG-*U z{por8)9wd0Bob6GG`vH7xbHgPKMEIoF}#3&NY``EA^*g4Df?RQ)}d`(qX~)A6kI5X z;lodW#5RBQOuyQXU*R`Nd;M_&f=z~8v@q9OIuy#B4NgNVOZ66sWQRJ+Hl;BHP85#< zM7Y=DONike?8KaaI+ESR6e87d;Ni4aEY0*{EC|;~$qU7q;XJ$sr&VNrtD9ock? 
zxzN_&c_k0R0q@T=N)8`k*?BYBIZtjuo=K*?dH*E*5TEC&Lr3;26$qobX#53|26yTV z$*Xrvd?#}d^z*C0Ar1GAQPw!B1(hPJ-4i)}+4I4Qw;AnqJ1tjU@43_c5078^q>TRm ztAh~Wsy_{Usjb|x67XvM&WR8C!A!_H#n?!}x`caSbaam`@5v;qT%PigR$>BA=01O% z@T;?Y|S<^ z9qar?7x?BcUmoTK1qL%&W<|uedo=)RIY$q7OW(&6>wI5rmjZkg))t-`43<;tqC8Qq1Wf~4-^1@VoU(U z<8gbrJE2A4vTP8JVt$+V?18WF#3WbLmk26j4IGOl5g`HdsaWn2N1yJE+Y>m5Xg+pu@3^Vq46&KhZxM zyUI-VL*OLkT>-av|8d0ZEkD;qLNMD+yVo_@C3M>sn+~JOxD7HsD&K+TfYJ=(gZ)NL zv2)M#D1}I&S5KyI4a?Tf=J!5%h1@gtNzg`0+rk)>T-iQC;jBZPxk|zdIX{Qf)vKzY z@5b&i^r5ZD04Ur^Y#~~R0@68Sbx@TM+;V|TeTly06&s zJqvE|W!G*a(Otjyt6RFVzRI@Hf1W|R@mZq zoxKnXN_Tzzcg^^0>Y^&ms4D4u*k3b$#)$#eTFk3|P!WCO;~I1-o^S9~XNSUx73N$E z$CVV-87|4aW`Awg-1gRJU2`PdS;h~k_dKBGZHPQ5xSd=Gclbe6d*k+uM%5}D+7sP;;+N~ zY`fMH9rAu;_yd2Z9{+6{%9!D%<#mcK-C3YYSQqDuPgp{G`@0Nn+4n+2y=G-X=Lcfa znE^k^@Dm1Dii_N5?2l=92qi4hg_T{3^}HoKuDwKcowGPhCA27a{RW^=OL>TXj)dLv z} zi#V=6&|4n<;GWh>l*pbiE>k%+o`Qorl?vT{*w@kB|e7F=BmIJ&O=4OqE7VDrfqAkkxwb{DZi_Ag0fC{cIB$oU!mPG4J66ldF z*xRoERM7o=K+Nip?N(C$--54VBCuMqq-m&;Y(Ce8@&kkmmap_eJ~{RP&s(dqa;%11 zNn16lrRYe#*%ko7Hy)U3r_1lPLUw;v8adilO8o2DLogy`A^*vUx}i;5l0NFgC%gzv z2`s{}pqHFx+vnDs=BipblBenL+3Vcz=epb*vyQge#`dYGAs`8_O$e-Y5$;` zWY=@r%5Hl+f5XeQk)e0-Vm;VL0KJe<`S12#f zbWZb+Rx>I~8KqLv0GUVm=`qB65>SR0oE2efOK#S!W`mNn4B$Md1U)#)OYS^7&+Ry3 z(@Y%ixMUEvWRJVBQNPSCa#BN}#dRs}CXmpdxBJ|;Csp8Ods7~BfAGfQeMrz}zbY!m zhb2vprF@x^akN-^K>DhJ(f{4uA6Gf%A2VGRUXOzu_tFEt18$WKZZF|I%E=6M-ZZV3 z9x$Qph_9P)Z^GUynViKw@_PS0{Vt)}$3z?t(*Lc@A4g&I+Ha@ha&gD|>^pCo;8w`s zTB!0>6nv^0ND1U%<`l4B==gbr;+wF@cgKaM6D)Hvr|q1md?zdY9Ke2gI%W`Nz2|ON zv$`zLdDpJwWp08-X^9)dBk-%xr_AWB8xk20kda8~3cj!buaPW14Ytz-+&O#y!?0&B zemM{30=y)=Pd1G$0UV=6f%yy+5CJb639R$Wr&Y+57O?H&z)ddV39bzP8iia}W`j}rNPlF1uBSkjR81}~mzaPJ+ez*}PBl8Qv{X#zyVg7k1$${K z%!_CqTMR#xbz?HLMUf(Z>L@-TfSnU2T6(_id3|n{EK_YkQY^|&gRSWxPM}aJOSXL4 z<*u8d9%Vl0TCoh;DDMR8MJ$g1WU#0iQAg3JP~>UnN0#maKd@O|?J51Hb;*tVKFG=# zyCwrH)x<9{F`O?rtIgOhMj0c}nRG(gV8R&H*uR(<`q4n^^u+Tkl0~=*^U?a&4!5jf z_ru#iJPDtmOX(r?AxWH(531hTY8V|K*c){~Dp*RSbVqbQA^iVDgbcYOWRfae5D1f_ 
z=$u9goC(gClw=1b@CVIuZYGQ+WDv0&G5Ju^3wJGr{K?*Wj zz>+y@M>=)=@eFX(DM0!%(ElFB)sv}fhyIAChTrqZDBsF(b|}?GALR9H+a^QYc%oKA z<-EhlcOyOuO5MK4DS5Q7MMxNa=|dOBNgVO6gYzUOT0iRRQO=z}y}E#T+~5E5;+B|5 zR@Z!hqM;WPA;*UL>A7%j_>rxZY4_j^{^xE!?SeTy+)e16J1>*tUS}R0=BDdwufxgZ z&j!o~h>KWrI`_5lBg=U5m^YXU?P_Y^yYflU?7dpg#e@8PB~kd6l;Z)lF`+tost-8P z_f~8R*)H0(7nVKBTi-Fl0VRq>-(J5&;$>dUooA^!NRpG2fvgg67#G5J_gVP8)s$(( zxr(VYKV|XDb%)PJk@zt2KMb5(|5geMh&vMBMO|J}%vT`Z~RT!ysPSBdQgYzmG<%l)^YVgDb;D8j8E$ z?mk64^O<)=d=%F+4AM@CNVdKD=BMfgO?W77iuj(`6qHuf;vmR1N}-3G%gtwGI2ZlY zaT`Y4EhK+8MgjivfppU1Ba&k%6ip5kyy&>c9%k7SQLcxCnMHhNBYzT{9M8NgHHQU4 zMTqFj9pN&N#?#@pDUZT)A=~cp?VeJf@(Wgl!TU;mY{^67OmM;s?qNI`*;E1*=ZPl4 zXdfQ1y3Zu)4}WA(R{?Uz;4db3-6k{n6MNB8{IoMq!#^RBpB!C0@p_KhT&<4W1=wX+x&@q=ova;+hHcaO7t*+)l;388h$dV^ z)#YZQV*F6Tk4BUlbXxdqLZcp&iBT_d#0QY#v4wk?(YQ$s%|S>ML2n&OLpNhNoM)I_ zkL8mwW!lP^W(3NIz5KZt;oqAW`{4CQiOk<=cgyLG;2X03Mm#m_eA1j!`3fw!JkQIM9pYJh zH<%6GBbhfo-zwAxlBXHAv?(IKq;UNS>2pY-95|`oDtFM7*rr57;V#bnXmX6Lu9IkH z`P1^|V|KRA^6Dz}Fx8HHlJ<%H7$zShFM;p$7bbxW34_l4av~Mz=&Pa@{>yK85|66j zm+wE}kmmG3cLAg+n|gP;M#E1RTZj}4gyCs?5uVyDq5YgpeGH?ht!EM8xIwzlSqvYl zu=%-CoCrE;(@2i_%WM8)PR1oBLz(h`hhsqv+`E2pmXK5&UT59Qv2LW{6VBNNoqb zMj9UZ4WWhyOUzsN?zp6(p;}FiMOvlK;?udJ)y7iN{a2}rCw>{SrjR0W?_0hKV;+&( zXhw^1c2sCTY8yNfP`N0Y-RHqDJ{K+GS=dMTZjj_w-6FcYfX9Wwi0zUpET3d)aQtOE z;CslxL3~2c|F$8MJOK8_%+C1ax?(BjN@=|5wDZ}ol5z#jBv1)R64+jL#Pk7$QGQZh zcyn%h4QcqEleFuVS7L5_AN~lB?~GMWg9r{tP9B*;EQ$A-5ae=dn(O z=5J0bl!|S>BJom8=My+JV~h{DH)FZ9_nZS8z%|& zF%pv`_IubqM4}*pA{lt)B;t)@3&IRw3pk-k5-Emw+^nUDD6sysSQCh9ZhCjJf?m-Z z7$9-%pg@}^;OoT~Od=2fm3`n2W$WU4vjYvpLIs&hQWB&LR5!F^H5WhX=uRe zy8ruYpf{Mo>!`zv?sLB_MINcAfk4jos(HAH9uL=lcVT@7h(#mYr3E#* zQ{KfoU`S3_8Xl?Oo%9c1(OsQtkuUt>mj?Xw;)W%pDC`x+jboh68*F+CWeq8%RdE|j zo6u6|jIND`YO{H9Eg^|QV?AD9p4t8Mk5W-z(w>dO!hv)%LHGlwP zZlspO0r#F`tX&~9B@iP!HZCp>kKS`grXO9iZg(M43oSDMSL_S1uZ!+h&;|&$dy&M` z;@2m<4UnL0Vq#`gNI-Il9$36h_?2k(I~oZNr2n$L2)At?{w@|upZNl&ZULhOl4;Tr z68Zn_0h_|0MjUI0Mj~v^D6vq#g`0;65&}rR4k5l5b<$HzOTu~EaFrDQrx)EyaU@mj 
ziU}9!q)7@+3S{Uc9}PrbCk@Y6_dd?3aknl+>k@<(&6{hHYDQh5GbYI-Xu8My(?2X7 z@O*!+=)U18DWcs=(By+sW_`NK5gCDGtlNG8DQ@3NPcV=D z0*UkO>Sy>Z`s0WAVQ4N-SQ4Lb+!%S_;^+U(mR80CHL^?{bIp6K03*AxqY9yfBoKbZ zs~3_1^gsS~D&_=m%pQMCK46waL_{DPayEpMF;MVwT0~pK7%PmBUu{e0;6+1kjT8g) zLNH)G!Q*tiTkjh^3Lv5b)Nq*1oqTyn~N~i zDR*cT596q%V(y#cN3fd>7CHftmYhQjv|Ja|^g$P-my?rYd3(GtlNP&wg1*w>Y<>eq z1K#B4B;#_kW}w_NtQ4YR_)dQ1V!Ctnm_jtBnjuj}zhJ`|8qLK_3sH_^Gy;lQ0)Qaq ze%0d&-k9OQh8H>@?SBIF6%@d55Ou`+438z%Zw>Uc#EG{V1qzy$jM^;hmKuNX)B(c$ zyK9vG{Xv3b25pa=iEBUhxJy5H;vk!Igq7ir4jI7?Qntxto&qzm1OzK3osAdc#S4x1 zf_Dyv>LU!0z#0Df=8B$}ABXZg8s}5g9p1JWU2o_m3IO>Yde8;OuDfPAD;(eMO{kK}RWnUgjD^IpXQTbksgzm)_Q%vZXHuv=BA z%_2Q}m}8vF8a;kKdB>YaP#bNcfxi67TF9JFrRpdLYZ5}CLFE=K6Suw(InJEeD-Pz$Qpc3M4~ieO)YMV<%4sL{)y2fFi%GowZbW(83DQ4C z641NKVLi(;!Cv1EHA0{>?x7wgwsQ&s$5s`qB_`+C&(oLm(*tsnQ|9 zbouyio?C=tUlT`@{&$ymxedDWD@v>L0c9Gd;eXn+IO@X(f)AzCY1_2MDn+=v>aPA7 z5P&OGY9~^{RhtaE*^}q|I~Ip|`=+Y_C%!Cf6f(qFSA!nT(lK>(pnz;s8Y3yo;H=oy6lfHNO|2*zy4q-00)wS@=>D<)`yNLw^K7#3qsm<**Fww@P;P>ssf` zbH)R`{~f=MpAa`8v8P;n!F#b;#$94ubS-jIIX%J@xQZ0n=m}Wl5OyyBYRfm6E+1dxJlAG>vrJ&J0{A0 z298K}#egV=S+|sFpbs_tn5lhOA<1mD`wxcu=OF*7>cQ75z!x64GS<+Rol6yoBU_Jk zCRM}JU8x6vgMFw+cFaaqSh2F2_CqVAVVxf4_)ZT+RJXApSq1g`@;LTDZ9_9kd=IV+ zO7mO_eiET#QP{Gn4rF zj$Q#yRL8#&1)UDN-5L@1jbGw0m8DYcWd8kj(!T>j@D(D`bR3^B#}eAxj(_2>C>Kem zfw*PEoPB*&8D>VW;)+afHvn<>D8e5Ho-|;Nlll6S7jzj6=L{kfwtQHGUAr24D#I0U zd|?#-^QeJ~gv_6TSrn{8-e@(0BIFr~97pICb-6L0V?%%q!WAHI?)*9geK?y+=k;11 zZ^0swN6fumyp@CDhunjEIut7kuP$q24^j?Jj>A{7SFOSlT-;;zfR_Po%h^s4S_a+> z-jszDrhMH3MXcQ!13wqFYR)9|%#YK{Vv@sdT!gV?KMJK6D-`C+Nt9 z++`Hy3+F~-McqziQh40er$p%Zwhi0OCUy>(!Sz#$liJMR5UF*Kyg`+w#(;@j3f>Ai zOsVGSzSX|>`W3#=`aQQZOD4|9;9wJZ5-GY(O`#5cTkFQ4EA|)Xcwm)YX74&N-to`| z%C6Hz@;%$_9#LO!0W+@!;L-&oFl#a4nt?61MI??bKonR^0Ps&BF@4b)FG zu{)RMJl_ocG{RBluboB-fbtr82dXn8MMpydsshz>bLG2}1<-rO6U3mK$HZ~CtI}}R z9Id-9FpSAKiB)4SfHI0!7@k~ zEKE0FYH)~D40|B2ev}{|qQSMbiGXCo!I6Z(1Hve@G-BifMKhdM=V+Rf;Ntk&*9G9G zNme^iE!b*OK0jZy#|*Q7tGOha`eh7D^WgbA2xs(gfX1Vu{mYbjvM0xgd4b#SPo7D` 
z^50kCK=_0|J&wj+m8M_>+MsAmdpr5i4&$E(94Sy4ACS9>WSXG_rzp2PenXN>txNY# zGevqO9erSG{tCOQZ0_+y*fzZ>Hz5j>3ad)z-!&Z;_dhg$?2P%+$jccbxhe-v?B0?ygn9Rd9WG#b~Q$ zKO_6PXIgB?O*i561y(IB-Q3g|t1lXzK~?4S2*%iR-l7=-H5)~ku-xU)i7){#)z{aT zVu*IAln9bAqZtZ!KV@bJmM)B5f?0i`7uKyhP?q6YP|9J`aWK%SJ2!_X~!2DW8>(<_ok#U5TG>X^Lcs zEGS?%@419&7gmscCj1dVklO+NBc>?>ysNtb31y0-(nE@Q^BqJNy&Z~z>T4g6kZSlQ40SpuvJF_8Gb1k!oj%yy;_;$fi_k~35(h2_@#JdqO7Z4rZIq4IXO7k zW7T^lSu2~IL|DSLIVET7V|Dgo-_@IDFt&O2THh<@7~6K5Cs!ct++qs;9K&+@ib1o< za2^1@{gzuJ`pBj${)t!ogzecwp#@_Y<(UXWgYzn+|o161ECnm8gnU0i&+*nso4f_O8MYm6xDQQ6Bf% zw_iH&&#*oXJW#J#lqnwQN_Gz$6SQ-UYVPgg4|>k`P=D&&{lzTG9KC%^KDEG5)*RR^ z^zjdg(w^`;9+Gb&kgR~@>SAVpTADkOm#-ifiQqx!k)bM&X%lHF@ql;WKCo-M{D4;? z1SP3sL{-8sKaa|kfok`9a)15Nul^o^ScnkUIM6QLQ%Bh4-=YlsKn7GJaQh@(;<)@< zgQLSS(>N!23W{)fW=0%+IG0JEQ!bM~4t^CqGWb=@J|B|!kwPKN&{tHSr%2;y38N)rcrz%X!1cAY zuWo1S_+bUr6KJz&;=eo)Fq6>@rj53Y!9gPVX>ui{RzS;B6EE&R?Orr=` zJ^UkvIn+2jr13L3J6yp7J$=;E4gcODZIE?Kx|!>|d@4@F?ISJthv20Y$FiIwW@3YE zxc8f*ImvA53kRavPIE771=0(Oyv8fn{^)BNlC7Z%0u^E%pE`ClZbBzNisQ8BAd)x0 zG?y|RFV@249;>h5Je&fyZNB-=U}X4xnV;;x2|N`D+L^6ZU9s%b{aXWm<7T$HiNp{` zemD^GlNwqHGQ}-fr>7~0<@{GRE?|l3p8Rnju>-whh{hkGn za?D=R_`*V9o$*%a*4o#rbk~#ni*FN>`K$ysLQVNH7jVTIzPUNw+!^+7#Z&@TOWA$I zKXmQX-#Q!clJjj|?9I~mFw~VyVvX5_AT4*qKp_V}7%cuf~7g^VscHR2>2; z`t*n=w+R)5ZL?mH&M&(&w$S>19GlW0e*I@=>R*(`b#o+t6A#lZC?)f6DtNR>77NYd zsfhpGO`X>XMTZBr-l>w&My`}t*5})6qOv(05ykg%8K0`IUWfSH86$B${bR_GBT3H#hZm>nG_#(AOeGKd4KN<{x7vji*&l^^$*>oB;DO~XM0 zK-rcq11Lq=`!*MwJ&g1loh|CnxyP3=vEZ>aT}fMov@1Wam5w)JYSS;`rOE2}459j2 z@ojA_RzDILpTV(*TPf!L zdm@NIVXRm#Oo;*42?V_R{F&cwo@I{hxUz7C*#pC+nprk{B`f zaBV)5v5C{f9m5>I!=H|9=^-W+HRSIa)CFd*dMi$VezNft`)0F~>rhJWUwEWD<8m`r zLDFm{Sb!y2fv$_B>tj?1bB6^c6ofP6#vk&AO&uIw@=d5*jRt=V_i--=b_)Mty}F%R z?XVQj-t-pp~M>EdMRJ6}F73i?Y+gAz2qlIl9w0MZCk3t`fdFU(} zL3-r%e7vSD6pvCd1iuvDdkZ30okpYYVdu9D6B#zan~=-s=Hrqj;U zR&n3^*+mUXo_5#*v-4E8N6|+6ChL}%@co6 zsGaQMzu`4@a_997<(?5xmCRe?CD1R8(}LNqN0ZzcGXTD&*Ce4QT?|F0D?KrhqIQHU z38E=-Ceo%TIvCv8T+w%Htk}fWRM*tU)N@&IDB~kDF(d_mDWmwA=qIEA3+1TrB|LEx 
zR7hEvY;}l=u=|T=S*B11k_FES9O1`w@Nt(yDm{sVV>`u96vUVWmLk9rVp)<4SnWaK z)GGxFQ9)MGQ8nRo-mqxT0y^OIe`;-&@QfIZA*=S?h^+(gVuir<-kL}7gjZh;X;+#7 z4?iXy9g1B15P{h#oL2HYJ%fVB5Q@{XV;x*rW!^s~Xx0o$8p3i=$Xq3N$lecm(Axi|d<1}I94D2J27zu8zUutYyCAg; z-t{cxzLPvBQpTcq_EX@I4o5OZ&^WF4WcTfhIE3Y7_&HcEM=PjqrGPWyV-41D&~FQl zXCY58VB<-0Lx_aUD0kFTS_eD_Qr8iw5BwW}PyUIQ=T@X?2dhs{dbA?aeI@Xfbagda z*yWw=bz}Qtzle>mPM!;@4cdn7;EdI6C7G$H=AMTshm_cUeXp?nF~n}dFRrPBj%rtX z)w6Fao-y`xh%a!c{WlhuZi*b^8{M(sxf-bXJ$Nu1i2J6;ky7Lu1`&c{K}ZOavzQ6F zK!9(T;;LsTd5H>UJW1g(q9egi(PtdzeQDEm{CJhOr)jMq=C$RHlb~G)DY}dwlTlzO z1rTUassW`OG(Y5>MXo8MH*j_(lR3O9^?o@A;%mlPc_qBi>?1^l2yt0 z*JmA>>lzEnl5YMsz8*^M28M5H?=<2nbV&r4LQJwUEw2I2?+PbMoRoISWwkckA4YVT ziKMx@9D+jwJ8VoI&7*UztWfoW4PdI9>XNpS_w@Idy)w*Z%O}B7b6YE08tsp*XB%wf zMplrxGYIT2bN~0jdkC#Ekm6(_?Nn;bkJ3sz;#Bae3pq4(kBjFOPzG-UKG_iCLb7eH z31T={W}ky)>J9FTSZC=##t1J(K%Go@47P(3S;hx2PecLZT7E+Tpj1z zxW3g16^BNN0&xNvmm|P^YeHafHI-wOSj+1i@ zWNlz5Bvm{$W=JmVL)=nIYjTZnvh2(&a!`)cGRdb@KbxcFZDlM>oF_v^Q!$qlJ!1{Kl~}dI*bW<9JvIvHW47kCvu^`b0Y6;mB(M$*set3t#;{>k#EU zzAGd?j>9-aE8asum&G0;aX5kzGZL6snB%@bdz$hJw{{Tj}Wa?CP**)Wl&NxSy;9v9@|tEo_uWTNrH;Xy(!fGQ3G z@5+Yd$h>rKd}Dl5V^6$46l;9Z`9$6vAj#U-uF55eWrfjE1UFecgMbBw#|%ff1;sQf z)nG83Z8aA0oRZTUl(ED})96&eJN$uq1ZS?#E!ykYU_V@ZZBpYfz)4fX{K1Sh9yAu} zp~RspDQq>o3&B&AJS(pQh4l3Z9@$qjvBKoPJ(Zy%HvDaF{5$#Msd1@=w+Tpm2{x0&W<)$0{@l zA4^w-l9uCg1ke?V_2v&e;3SWkVYuYvBnjaN0kCFbPq1p|;E*~anwXfFT(#baOAweh zBpdf_FdpNe9g%?2Ya?Gbux_T8GnvDS+Pn#h23#_dhY?vODw92x&#IhxPw-<}_WE_2Us@zSm?Le8> z=>ROAOAbsa%CJW@8F-Dtfa@)y5(=xzb&x%;&H{VIn8XBXG`s2NVEMtBT&9fzcG&`5 z;PL4vEHkwN-8BNKT0rZ{L>~4=XxkgtXAm;!)qKi10~CL2@;LokO4q7Nbd;#^8L>Bd zY5o^`*ALX*2w1|_VI==)EHmAxT}mflw>@gRPf1WFC2yyc)3Hfpkeivm6f zi6k9C?uP=eNLvCo4xCfmi6VA949yKAPnB&q@NNkai<&_pE)rl@$gKE+isEi(ME_}NZN zW>UE%XKqwu*$Nd^pFjgp6Gn5*0aUxvs)Mg(;?w1|n$|ctN7pZI69;EmJ@OO2!#2{HDvt%=0fP;!K1BF3Z-das z^F>cTo1Nzh(T*am( z>hrLzf(fp7?{(SQq@Va;O8G2Nv6D~-n)oDN3t)mMkv@c*PmM1t>g}efRcqiSmldOL zSzO7RqCbdT1;+UZMW-TheXfaiqx?p4VF|#Xk5HgbK6$(3QWz_AN<`~JQaYeASMP7M 
z*=LZFnl?&Qh5FAvhk6M|`$PX4{Gy*_A56O%i#=BDicf0!+O8ou+#EZ&Fc3qQDj;?k(|H^JQj3K{XeB1>-{gsC>H1Cbv@bNKcf$15|!B`@$e&Gf$Sc5Z^4N1w=lOE{n_r-k9aHWas(KH+|bq+WZOE zZ+xdPQfKYNt~1&j`7zEqggVJ*aX*Kf3wy9vGBZ|fo4yys9)l}l9h3@V^qKVW|3B_R z2P&=T{FKR487xrgjqAf6BK6?R8ka>Kkwk3Gl1zFAHcY0+AY@dg;vLQ1uak?wl#_TC zw)TpH(=sIOXFgOMMx(=yF0K=ZOSr;r^rckF*jMRMp12kh@Xjg_AQl;& z#PcG}h)-yNqyP<_))DJDg`9_qz&v6(txVW!J{}fhbK9gO$Ho}txzT<8Wp*s*ZL^#^ zV`}{CJ{$8X--|gKe!TR4f#VO}-0Fj}iZ8D;m=R2>yiVBFw-}AX)0{wjQZ>b zEzR@RgzW(U1ods9;r4vCP(zHd&wx~QFpmgk-e6N3i$-Ub-L^^7w3X?-oyOUzXYvA< znxh#ou`8kS3CyPLoy0vaaQU;;P$;HqEA!#3M&~THaI;RxUOcbgjMy?7!N?$O_)gpP ze$*WTJIibMR!QngYx9ZqO1hX?#|MK_WfEL*I-=nRm#7arBjNRn?&WY>evP%a1se|8 z&Y^c^;&W!u3dgsjq_E1lG%0M8qvhUboNUm||w+p2hw>}__Db3=YkOJckaWQwkUim1*@RVnXwQ&;!i?7+o?W*hvvi z41d?4NkwMZ_Ek{Sn_({8Hmz;>adu%4X(>zM zDyaDRy8H*a-Qyp%cfUR5!xHVliO5?JOB`byM=gFKzKAXtjji~xonF`+^JtER7%}KP zA?ZuJSCADF>zdW`k(qq|FLtE*XfL?kAmbn zFEId6obGU0x~W)isOWj`OJb5odxe$FyNzc4`157D)>^^{SfJ>ZuizBFLsaO{!x7)}-T!-tWv}lREe($}6gDe9}?Rsj91*P=v{QU^vwIsfiw?vzDSY#BWqu=bK7S zK=n(V`B6*n&Cf;beds4CYvz&qDnw2hBq=jzBkuOp!nn*cUJr#qzDNBoxKfL0*ZWQfulkwB+#;QL;@T!q7r_xS}VUPWHpD zCQ zL@Ju?vGW#u?$|F2pUQhRI&+=xB1QtE@;lEK72{gs`jx^NkSX8lgbpH1e&C8`wuafZ zT|29#v2fOdr_R3p8OhCa@%KVqVlZ#^mHe-jsNcev`y#e=(7!*Ad~e4>m+jeaJRMMR z{_^K*&5ye(%K_O0s}i3ckC*>RnwVEn3Z`5BTyiw<`Ha=*f4qQ@6Dow3B&3nR3nvz3 zwUtER3l06uk2Xn&9*gNVM^c&mdZ_~>t{&smK!8M$Z3W;A?8R$>B*g>}FOSBCeKBke zPC?w`!mrRZ|2g{*jX%&00?j-?oh%h1OlAanG!egJ7?B|&O3IHfS5}FtE=5y4z1Cwx zk)OR~wx$^tZ)s0F@R}qDkN=O?{Q|Ax6mwTv1$&b*=d6n1NcDO3{O&FLDoPJ~Xn;U5 zpv2>MIW!$W!On+pbX5_IX)ojQfgiWaV5(uy7|bk|oIU9T(88YY*X7|a1B?IuR^ga& z8zH)zOX|^vN2WO0&fO}n8(Y=~&?FPz4Tn24HY`VD4VfUAzdT!?fFr#7nj#G2PprJD z;}i#cvbRntul{Euri;Vu{jOB>S)l**s&H&bHc^NKMxqs-U$jpS-}WB#<%iBk=qXX= zy@BElnov|#Em~F~)r1IvS2s1iTl9{5+$yc%98I3gb{>`h{Iu$)zhOG-v2^Y!1A}?qYt; zYS~wuy7!3~nGKKNe>&P2%s&1~W>F^Q$eOTjWWMkp?I)X4RGTbHl-q7TrjR?)FFy!B zn20wC+-rVAt_;WHkFu^yM9?IA>=|W z=ltL85qE?+XIS+H1dzO@C*dl&*0FP~hn9JfSQRcmRP^%J06u)b5UrZmXK>XvAR 
z`_yvQ>|%teQ0^zA3kW$vB0RF!$nCkBt#>$t8s;eX0_fi;ir{9A^k;v7t`rg+Cy3!f za<+Pt6es>rzftsOPegzuN;0iZA|-!PZWd>G#|5&sj0~=wm(+?H)(5L^_nvnMz?4zw zVa-zPT$iB`?K{?>h|-5xJki69e4B@f9x-uKk)>e;-j6j zHMeZ8$9>X+#5AD@=D~)f_@_e3a+a@FInU}a=^)2Kt_eDI-&&TbAghqW6qDiLS>N$6 zD^iacbR`iWZ5;|N8Ltw@1YXXG&x373DGqgJaZpY-)pj_Pw@1(YQ5yx(R;0WW+CAnJ zUe->uNpGSFJh^+oggvH)FBEA`5Vs9P@YhnRGmTugN$lzVC8I5QJn&JFMM>ygG)+?n zX0**Jl|fy8wEKSjc3C4K^&>#m|5qo8a&<-%d8o}Zmff38Y^_ewBOW~0-#LTSfo)T3 z?UUxTG!XlMwL`v~fQ=ImI7l|Q0yyNPBx8IKOUdWN zU|kThg)7_I$xPd9n+EdimVY;l4}Y!t^t@l$E*byD^vH6`>wA4J$ikfDT*Pt8V7p1w z7_9YqAU;9u9g*Kk9v!q#9badPv5QyS>-#R2!(Xxvs4E|BD=&Ytx7cpPbHDQdvVygL zbApUJAEBr#qlae$`oyh~d^sn*N^MEst5ZZ*ltYIczrBFv5SaR6cbaUC{kV6f$Z>(S zJKJxLym>FDN)oxa zf01=15y2#nqQw;qqSQ0S%fQlwSS*X^1iyY4J&nKl7JMZ@Pnz7bjHkA*GQ>O_PP?K4 zZ{J1?9GCv@&IrQuMlsx5+TvbWdE)tE?fhQlKh(X%=&I$xdY4Ns)cLD(qLcdfy9Gk@ zASx5vr9Hm@c3Ql7s^g>kCVDKfXF*ovXs+W2{kuCypB@qaosZb#Fx|xnd(&e4`#B#H z^?RY}W63m2-+R=vDeJS97^>Aw+3EpxG+O5#_8J1%_2z#R0yj$>Z)}-f7|=VuMuu_h z#|+7f@DkXJt*V$1Gf7d0XfMNmjLg+L8UZ`92P8MeH{w-_C;HSxvegjip_5aN7(Xr~ zCGnF|zZ#q4F>dh05|)J$xT)4DN+yyy$Up*)3djnlVjO?Kjngq`T~KZ+u-O6H9l1Aa5w+y?6~j6JSDtO!koG)iB0W6 zV}xCiMCw|?8nc^327`=)dNR~v`muAQ_wnicW3$YOBK}nl*bJNc^9^K9Hr!`lXR)Ul zs5q{xDrDYEOhQwi(a>H+n{C!}h7U}9npS@OwhulJ3L#R8!HQw5?{0u;Rk#b=$wy#O z)=-sxTIZBsVZTz6H~C#il$Qj!eb&>f~zsptpeTclz{v2pcY~&T_I^nzd z)Mz9hVYmJ7pCTz|#w{IqSrimL8vM*fqzhnW4{mn1aUeyCd`k z+E*v^o?0@mspehJ{Z~wQ`vWRaV=lcEF>8D8e^V}3c%4{~lr$t_oboh#5s|f{kzh@aQ zoXG=ynG8@vHT+5>ePnvNHRdWATL=bJn`Z0L zWnSk5jk+#ki}g$koxYb$O`N?lu>g(FbNJ>02TF}`5%pY zz$>ss%C7#;x{#$GdkkpjiBv0zPkylrPJbZV;me7UzBMxGe0^2EG%Dw|nK z^uB^X-YnNoK)%Ff(7=1RRF>8rCw{E_^OH64&|Cz3gm5F+!e~O^4j`Ril3f%E3d-`2 zbk}kIl2C$@CuciTc$1j}Am&6@AR%2ocT-N%>YQo*{Xy%|Eyg*}1`U?Lrt5j|R$eLM z9gs%PdB~+-2a}uy@Ub_GYFXb_*8JlQ|8qM*DM9fj2JE{LBVw`Wl(=gjG}%`rypC7@ zMdgIk`@xcfb~Yi5!&krDUKBi`(<8CxJ9Thn^(EHF2q$1D>8flWv)*aZTS zFnfqJW}PhMz?Xl!eCkE=4|f3n%aR zSVl!XVke^LPzY4C)=4q+m!*>Eer45F+2a*WFsqX{WXRzZsFBIB$hN?>y_{~YvKcSo 
zEpZa%)pX*>Lr;$!jpTbSLYX80V}WF&41qT7Q#T;hC8Q(zM!}uY(FtA#s-&isZcEfn z?B?fMePjVyLKXW%sZv}<>FrfU)a1jRQ5~vpW{kl}oKm0I<91n)5Aku!R_^7n0>Ob^ zI9ww!N5~iKV-gp0WVhVpr~A)z+~r!KKNqv|w@2nK(SD&JZ_KE%F?Oy*VF|Wknd9R(Vy#UeirKggHyAip3%%fqKUgS z*({9y$>wXF^TTM7X z&}QcN?6rZD!qv~k8#OBOk%TFm;N0|v!cr&V&mDZO?_KMP|7vqkEKUK7WnZbE(fvty zhf5JgO_E^w)8Imx?d8V*~{k0h?s=d^(E_i-4Y^&N!ss`J&QFInMI;yg~` zhtvpZKI-D=L0bcIj3Bd+Nt7iOMZ-M%Ve&YI!4mHW?g4^elVJfEB9xDSYc-)OI#`80 z3!P>IRph_@88>S}Pb0+S%f^bjtPxL+EZ&Og2l8Pg;Jqt!#P_@c+{!{daoU!iPt^rV zhQYl^GZTEs-pDY_iI^i;=@5~tad#7D=8v!bKs7(YfW+4{8fG=;6pK)kP)BbfVJB@d zxF+0970%4&hmR4Rc1T^SJ1-znMSC5UazMf$-3)p!q_rJ8e3J_4*#oA#TtcF?oF*|d ziT_8q9szF&Vgx$TVGf8cxY-mA45i#O6jSs*f*<(d)dv4?mlH7TKgJ^sbbG zv4Ug_eDUc8-9|q#@F{o-b_~uU?HLkvnnVR*>US_tDoD^gQ#pOThX+HwrI7v~9RqNJ zDy0#oFIk{^w~}Y<+^a9`Zq}5(j8|psgpZx7kc}68$K49@svWE8J^@stMg7kB+h*C6 zgy98@^0M)GcH5Iux65XBdO1t3%G_mGUNY$xSjR~d)6-NI6JBgtdLAiL#+S^w$|;O( zG`R=K!Chx_2@(b<0g2>sjbC8JI}>qC7l2TXL68M zyRi(+-L9up0>-!bLDTpLrV`yoTpPTy!Nb)Ri_=vmN{aay!B24*DoNaWNC3$qGKOHD z!7yO_$)h;o34*XJFJ!jwPQLIr*RSeLp+AnF5Q^l*maB>qr9GC~N;<$^DVe)jRWFG4 z%jPKkS=ICUKd5~R6<{&epnVnbq`z6Yu%P@E?>QTKcz{BcK57A|cjVJP*|m~V3i5a+ zT9(ejLjsWbqZ+Y0`mR;a!-PTw1&^T}p#0UiGl8 zu`pR7-tYDG*tE0X9kiw9`O3HSDy!|dkbg&h4IB`Qac3ZdrOYyA4o|u_Uw+3yrV5$R zD~+s?*!aITrJ%P`}W9EsaD17p`_0n z2~^YL^LJSpnWULI16nYNv_J(cb#f`9qm*XoVdg#%M9+$HOpL3!AVy9#fDG=B^v!>; zdQ!ps(IkM(2GPj$ScV)IggXbLtLw#Oc(DkiiYAa;MU{mVfcw0mMCX3Hnnl@8zZn@L zP1}yrdDSn`V&+6{qxgH|#EcQW&YZ@c>jKzPw+VT@W-p zIZWcu1ONdMDfE{J(AeLi(bMDHoBq>jq@U1jf9{pS7P|g zun1sy2R<8OW?P+K@8XP9HTy^{z{I2)5g#A{8YfhM1J5~7n&qCW{S+~3Ws$_Bl#AFvO}BMY|* z8{h8w$3Y(?oeX#frdbHNAghv+DaA6IL<15yS0xg+UEv^lAKumN;#5Ow@6a-Z*oKMH z@NWl!#qE9Xp=_5RH55j#Me17$pEGB!)mmDPt70B*Qo@LA$d$1T4RQpnU3Z~d_cxTV z^)L|`CQjLc8S6|>ipartWz3~g$7l%=vR)Oah^3lA$kEp+5iMIEdr1>55~1|yKzlxH zhR%v&sIrw#k|02)25sTh*nGdcL4nWlpVM0%6ZCG}t#`3jt{E(gCmcf8dHV3Hr-I$W zd!U(=9wYO(rzcF){HrGrb7+UVpPP)?vHLLSBc91%lhKps9LyR8MEQL|1_N~A=G~t^ z+GJWXiBY&ckY2L-g%o#ub%hG|6Kh@H@Dox?7kYX-d~heVmGJS&*v`V?)#?7?Wv)%4 
zA;)?CVk4jImAt}mQq7+ZB1e_4zI(RgrBWy9y05RVrrIS77*BD<9~=-k`j}l^vP+Q+ zT&GsH)7fVNK~O~DUB{J1*S}nE2^=+cge&5TO`%DbpEP_Q7jOVSW@k)n;fOGBs(gQi zn*D$^o*w}yKmI2np&&e5mc0cR9&Ga@m}Z5A%^athD9Gt%KlOU?rZ1bKKj)M8qa|p6 z;|ck6HanG2ez8j0@eAl`55m|`^5I+X>*vJ*_3m02P>&QRrUz-sdbO6{%$NV@KXwgV zU5EbkJBSZ#7Gs0EA1tsIHp^Wfq;y9MxjMcp|}fKvz?cUr$fN5-3+&skgK5>)U3wfvxv|a=;le^G(J_#V1y*s8q;dj$uaR zQ&gYIU5jUl|COlcVE{B^nvdf0_t+Q|hGO=`0Nsmgx`x_)`=WnE&rhe|_5BB2>6RyRpWy=quiG;m;>j@VbMH<|$0zM)gL&{tdMo~IRpWz!P?Dwk zzBv9&OCcnVMb$5r*y_|kpqH6H?Tuj9+*dJ+;F(;6ORnYDS@K`;#L+*umY~XhN6qZh zR3E|za-Z_sB_#D>a4^VLy%pFLtD6AJjq=BMukKJQJdF6xv=N;tTEI_a+s|>#hwrtd zR~*4-?}>Bat)P>($@s0m;oybq3VQZP$gcTh7L}LfSb~{v)4lGXzMP^B{%&J%4tmF2 zX$0CK6T|%N3cXS;iY0?UT(H8Rn(2@cd11RXE*JSu8gv>D0n$E{l687z>7blnEyiK0 z@t*5f)mP5%)dNg#3^;j$*uz!`2}Zgg2fse(MdQw~Om)%EoZx3qd?j9hl+KKYY11y@ zX+fDD!K4pK-*!MQ#g!2lRz7ncAKbY;u_a1AAh%_*z1ONcZJ*+29meooxdczuoxsvB z3pG6Z6(+E}^NYh{u4sgQtli;*esU(PpYT_9z(dOZg;cq566)`euc2ZFUna1sb$0z&|L+v6VW1qQ>(-+78+D&OH|~^HO+b_jZ{>QQezCaF+al|8 zoy>6s07D1+UC)KHUhAi?O|}{dtX}l}wjIF&4fiEX`36cUd8OxD_GI3rRV!NZld02D zQF^}Y#jm#-OXmXRG{L>!!B`eB!uV%`X7@Uj#0IWbrPpw2&o6D_qAU*^#}_4NJLpr7p3@vsfo>ZmU@`R22`WN;8`Z{> zxefw+5{i|j>0A=uL{N%1x3z#8%68n@=!$6o8%pV_3F0>XRO-J!nedtOKQ=&R1I;;m z2e@tSlA44Z*g5IlSIEqJW;a*vXRkS44zCZWB(atLO==Z=`=uCke`a+5u~PHp63Nfy zYI#H~+Jl|}+@(QIZL>m_Qqo%UA@JzyA#RnbwzV&gF_H%jD}=b)Eh1#-w!INxoT`cw zi*ZK#ez_e>5ydROr;htAX^bH_mQ$jlqNYNGkhGAl4r(JP)N-0)D3jjSed^u!Xv!B_ zj_jYc?YDwOH~)Mz`t#Acx1>k-oy`;_5FtC02AB}dIDW9-p7798066FbEZ&SFkxy?( z{*m5*$VbNlCffU>gZrGei|m7UBBd!Xl{N-57CQ5ofwV8*2E(!+*edcIM_pBa8dK|| zG=In+Jm`#<9(r^50}t89M7n0sSNdEjXTnJ;oiLUPS$mVroLUZ(y)4IvvaD9vAW)zj z`5gC&z*H;laZ@TPgxOsKq3)qeI2&9*-JFFNb)FVY`7*iL@g!8`@o{A-EpJKc`AGk~ zr$lwP!0V5GGf8=P;to6FZX=uC4VR-WUxz)zJ@V_zEfB%&xVFHL0!zQ2#!xO{1a|0Ma``P-E zRO_u@6OXiYa>JX<8PW<9hTX!v-9!%cfrTWLn@oU}KaX5~R@tkLf-=O4d7IRv+dKT|AuTG@yPa<4f z4Brd2-QON4`G@rL2}&L})2jn-|NE@UwySO3S7#vqLlCnfmOugh^`;WpXiQY95xHvZ z-PPB-$Ff`X^WL^iOIP)!!Pi7s*;7XTzA(O>I;`Cp+plVW2kaGE7P~s1!8Z$u4wd$! 
zy#JR4(7gBE(?o}(Qj-eU*zP=TlvKIn=3x?>z$o9J2?HJ;MZlt+iJ>781bJ#4AVC-mVCXWQl|jh0`qaKPl^cJW7yg4Ut~n}Nmz z@0q01m|2}A2@{3({l%5~69ld3PfX(uTOaXYGV1Iym-e~&6!*3vqf?qaTCe&f^Q=>( zuJSK&&mpPpERf?z$(bezkCR*>PPr0%>aDLC287(~14Hb`EHjOcT_68y_yPVh=_mT& zB$>PX=a69Xd>izBmy7&L*ZbCsZjnKi9ZVyF2E&e{uFAGH7khq|E=}rs><=yVJIdHg?}tX8I3> zepV?B+kWGF8+Ax7mW?dL``#;>nW<9{JmeYivcKT)cD7mRT&J$IRkM%MU2JaGvK6eqX%7&R#lvAnq z1(*{jjK`a-5NI^aY3zoVOZ?7TS1)K?`!vW&^oiY($|fBb8dkdWNZ8K&G+cCsI?QzgW83p8xaM?_%R9U=i2~yr;UY3M4;! zMi08J=N`S#;LoW(jiDuK$j&O|@`_9H-k6WuX>1OeQwHmJ24hAcdiyZ-gnTgUtG4pc+J^JP%+d5H0=M{%tE1?S3VI_S@AvxpatBS+Tf z=~^D9^o5F!p6TAKe5`UJ5A7TSLvbuif)tlf>3U2!q48@NDc%ER!8oK!ng!2c-P^&D z$wN~lEe8!tVG|286nnqZ`Z+1hvD>Hvp)$C)*Jp%*9F&AP{Vp|lx3uIA)=RDIbLn2k z(djllR1emQk2cLTd3yWMBBZc!ugY^Ny$VFs*BO(0ST7xYciwz^cSL#B?YMu`8nB^7 z^Yq)Rr-u!VZRfWG2%UgL|45+F#G_@DkM8%3J&$hZljbpxMxW`^U5iY`)xSIBOsGxTv6jAfptB=ma z$p~sn;`MY{Snc$cR7@VSoqZ-w?N=g1<(n}l+>09u(-zUZj*+HoXZGW}iB*{kJEMsv zTRxq&sSr828B2R?mnkB#mr!}i!$S`tQCmU6ZyavAh{@ZXWKJ^ob;Q18YL<#HU3gws zDORT#{C%QSR;2f^QkEif)`h=lzpYabaJ@XJpZQAUD5GHr*k0M&l!_vKTErtErD!X; zzKniC6)I;Zzu!YJm?*kvi{k`>PE4_pADCe*L(+-5irpt@L8`vO&*0r&fK92GEN5>y znFN9@eyOY2ftQ~bj_1>Yhc*~Q*1;&Oigc_~;d%9OHDa-aT?B_k_0Io% z0c)tTyy;}#&Gw}YU4`DO{z7pSx*dE-9ApS)^b!;&MFgVV%C@l?fe#Z)$TlfuNexBg z*Aq^a+HjgMA=FJLOEj)^q*$r-5;8rgFlnD)w-BadgQpXLfpt+)#am1ew4*X-x(WVse=MBb4(^ z9rw@Alamj@f^>97?e3GQ@`0Wt9^e zD6;vgkpCe8`xG)eHOyMLnDJp?X3lCs(i$1Qtl$h8(kAIH4m~QiLc?aEe=1&7*mhM3)H3$iWYWxUqH5JAA-js@vp$_~<2QitSmTunZB-&%XKE z;fGs8e%A>AIxe^AWrg3R=O&;}z}E_;+K#53=5n!>1)QxUveG-n_0< zhjs_Askt7Dw~1u0sWmnu!7t}yNrNOX#02On4Mw)=XKP9;LWq?_V=ojw9Qslfv44F+ z!N$o!w}gfP=Wfm&dMe1drYOK-pYhW;sVSR;C*(?Gv|R34yn{;0K8xkkRU{@x&`2NNEA)r9j=;bazFlJSy{ z7&CCxg5O==F4OuhlXWfA4~=dc9=_CWrVB{rkfNlG+}g~YerY^mvQ+!68tadoaoT^U z54cD(a%uOC^b{%*(J62CoJU*pJgP;+SC1;_QiQ$UKoXnq+l!&bjDJ9^ox~6EEn%A6 zX&ur>($(-xUZbSFTEd~ccuGiRbc5+iElIR~qWVnGPiM5=WXjN5PP}4_7Px@(b}bcx znX6W!g0eS0E}eLQw|#CRdzvO0s{JlRpuUlQisNywKjBQ6uPU7saDot=ng~KYUV^}s 
zeS^;^r57ACVy+<#rm?mqCmvFlGPNrG#i8MD^&^EDI=933SMC{Ko5S9{yAD1CcA)gq zhjVJL&M6aWwu^stbv^_R!vD@?Z8ipX_eMIc=}^OvkD7{tuN~zmf7c#D!cV=QcRN&WkEgzW-0=GNdHVVI>)-(Le^)&$ z{{U$59E?_8u>=1oJ8hi&(kyHGAi%T1?JN|iE%OL&&Y)!JHw$WAHsI?xuX;y4R^ZdH z9JDx}Q>mXq`=m?{ceYlxV+d7+^WEZ9b7{&UX4b56^_PAg+usGugt3df$D>Gdjt*1M5^{S=$lI)cf9<}6B&x$~H`j!GUD`QSko z=>zhtUe6cn(sEp1wsZ##xcbKeA*Q$?=ij+&d>MVNzdXI{;d{gh8n-~UF%mrE`_t=q z6E`tmVD=+C))q~!w&a~gRn&P#O3-5>d%M6`<{V-Fz;BN%bxs}a5cAEX-0=2qZ^+2{XPG;yc)ZAU)Oma z=W%`x@BO3+87Kx&W(pxaxrRj33~(|iKb#>H#0_Y)tGkux)E6@Dqh=-P7=0fk_t9Rf z7MH9=U&(b5f!_8XD_4lP;_o?2?eC`#0CM1%qBXD%<>`VN&P<&kt^6lUZ}WO<#mxoZ zS!I5gwdA~hOr9=4`qD>h6@_b7{T=j7~8Iw?b3k8Xa&SP^djiJmsgb_|s37 zj@Dt8cqqK@WDqLDzp))I2b++6lj+=v*8jhADw7pkJC<39YQ-y~9Q-a{KrQZcMhFu2 zu`Ng)tCl@eHv@b0=s~5>HnJeIkzU9wpU8&Geelp&{ew0IT>%c}g10{XV8CV-m#=rr zN|uuP%jgmM90A2p;T)7Tr7UP=t8UDs)8oLK(h_JkT5@kp@6Dxq;BxvqUvKa7Uqx#d z+;yOr&Ij2!_yo9R`D3-(bu$W3zvug4tz&0(%t~diamHuI;{QU(-8N5?x$A zMn++e-lsHVystHHd*q4)Zjc!eOhRK5y`@f2(<%&49+BHtXVo=`x~3eBZw_j&LDrfI zWymF0qvEUKw~7mE;&5GT9PmKKfET%J_MXD}!qJ3l=2nOc`U#XIF5i)NawUa}?-1>} z_1fpB{br@n^_QMbY3>D0dgZ4g_^_$@z)h^kG7<8I3R8EO;v1ME0cpa6+x~f@VdlDBL7z=K-$p0z5Zwci0d4_jqosUu!@_v;&~KN$ zd#yw3gQQN%`gh{&PWtb3o{Wsme^+R)i(xNlCXSQY%G9LX4?JG_N;*hSxTJ+JTS#y6 zAy{oD?=NJ(u+s}gERUY95Qo&;*%WTzV2-Km{c(Gw#CI;MY zMdUU#8|fX%ghE8Vq3SeO(Y~<|B^ovc*ZD`xvKyy49JHKRi!4P2(?Z#E#45^8RYk>+ z+LR@x+)3wo90O49OhS(lZ}1jP?11nKRj40pA=eG@38+v<7~N#vYbb%PyrvzFFRNeJ zi@`eiC!WS8IREOrP=Pk1r?quT-+DLvVfvS+ci%Gtq5NHnq!mH3FoLuD4b`F!?wOAV9`5d(P2O4w zfVE^2@B3J^|EmWwb+CJ*f!W3?)()w3=A5SMs8ljPwB_;3c#hu_C!j#dlB&+OQ~R3C zI!pfCR(lE`m!}q!l%;+w3Y+wMetL95uoiwT5~*Wo>{UP}=c| zy|;JNCMR6lt0)ovU|s&8Tkct2gjjGeTFs}odcIp~?%pNZb@;kY=lLT1q&z4->=|Am zpZYDFQzhc9|3U2RbmPDQ#`=AFAeEF>lcTw>v`Bn`FH2-}my@Z1Nu`*EG;l|dM!B+nGf|BuzaZbT(K(7T=%B{9gu0``vx$@QV+U-{%G?9?o$?s zBh1$qO7ZYYQHmOf#&(|7*^ebNpwAQdReqYF=m9Z3HePCR&6(KdnVl6Um4QK*P28f2 z!w|zHX8bZhCs4ejj8X{a;{aKPxr~oRi;L3_#D3c)e&(l5S-boF8J5l9`?*L8E5}Ro 
zDyp(b9C9DNfsF@=qJQcKoye+_qxpXdvJl$A|SypC&ZbYGW~&ZnntL#Ds{ubM9;B(%^}xmy7r*tTx#?SMI?_ z7Dgu8X~isS#$NA_%dPJGztx3j)u^RJ>{SWBXiSl?ad?y)F-`>wnB`K1^&1&c>+AX7 z2h8Pv3K03S${T7_@$Nfb1=(0e`D4I!=@zvo{q$_*n}>%FMs?XJds}$8j<0?M0sj8*Cg=7%=^+b$m;Ss$$QzpnN*al9e= zdvo&D^L!tGT#f@R6fQ~q{bV|BA{8UQk%~3~=4-i&Ir(i#8QR;gbX)?nVikvzUZv9g ztn5+P-dc*JcmT+2&A@0M#KsiOUgI}W_a0EJZ<0cB${cB@8xi9jUn&CIU59p3bW(h9 zr`s_n$mBv*(1dsCj5bLjcuz0bS*z(vE5zX-je%dDqiuc)tY*(5vfOvSWoT|+QAlty zcKuty!O?r_)AiDQ8!QZGRyD&S_iyL9?md!M#6$VuGzbPO zO5kZz#N_GQn(OEHMS+DLmi0wLVEor_wbsbZ^9*!v>~G{u-bEFYJWyjtVj)UNU3P4V zx6ZmmMQS=1FVfR=op3|bbz}Fy)MNr`rzGU&1K$qwIar*kI`*8xB1p(YZDb%AD1=I3 zQ-Mh_x|dJ)C$0Ds>UssnLtU@OBnXuP!MtMCdK#M=9_aeL%eI!BiW+V&KRHtbu;#kA z-K1QAuhbnqjm%m%AQ$_!vQ@zvbEDZm+iu6+zWiosmRMJ2>V9ALdr><&{=w@yiMrkR zw!+3zNGU*_sNq)~rY>RW@eKOQ#KKr+NZgC#M4bf=2FjJV;&s-IG%6&uE7x>13I7@y zP9zQv%!OVRBs=fiE=st`S?W}VGo9p_c3;3EMmgS9m~aj2`=LYz?sPIisUC~t$F|4@ z1{VaeS?p_KVSf8UeY(O#!;ak~Nr>{ooE`(Y70(7wf=Wh?DLMVD#FKF!=Mz#1&e8|x z5~F)}6E`#mS^ta7{rPWq*EGQnbwj3n-3-~TFPR{;IH{CkZGOWC%q4ewPv{^_$CbDp2)Z#`V^Hl|w2Y#&D}rk=mK z2d3%{&2}SAXf$6tU9`VNsY##l9!@O8)ffBmdrS#5Z8#j}_)}~jL$N;avj^_+%r%#V z&(GK8pX*t#Md(^GBExf+a==_3@oJ zQh%Isqi3$%lW0lSl)emZ_yUYw~rY0&Qlk{}u z{+!zFb4-?WRll{tpVc^9^g?tX_|)s!VC^VEiBIz;UJ&dzbFIEF~Uc*K_%v zp4L|%hFgks7)^?wQ=-7c0c5&=ahKrOoF!JH&nHUyf!0{s{f)V)P><1XnyU)G?n~Zx z0%lgK=G5;_P~k}rj^e#sU}TGCriK`%5BH#EYI+6pEk&Se+xz~UWIt5nU-wOiZI}!)VGP z94fAMif9!9jt#a$_yOv@n{>$0596Pc@H1h11fHEnxNQ!qEkR#1RR6CTRm7H9ce8zz zr@73131Wxgx-bipl%qoaX&#~S`+kfNw#d{AY0>E_#u;77u&8zdnkP1I=&@R zd@;4nih%Fs-hTz&pA2(MJ;Ic;Zm{Znsa=%bw-b4m0x0CklN*adp+Xkm0wKx< zRJJJm$}S;)vC1+{unmwbEX4h9X}Cg?q9+QyovzAT?iWG-nkpj&WC<(+Y-P8poAUzT zphvL(E@IP_DxRZBFo*4#yXS0oo?l(PBHhM`ysWxh5tEa~b(6`(%I){7;X<|54YOYl zI8l*jDS2y~O(}&DtS6D{0!#7JF@;|gaDx>;t5rscH-f0_LOoEeBJNT$NCT&Aq!Lm1 zoQ46oH*`v37>4@zO8lLlMKF6@I@#!ygvW&QFo@CD)dHF9AZ4zgA;P!fzpC&CjjRcL@1*Y+Y<9FL3U+5qBhVp37&$SVR?fe_hervFNfFZi- z*pQAv`)+nM9(sV$LM&hcK>{pwxBXT&*2H!pjN3SKvTM|G!?y^StD3m@^;Q!LaqIn4 
z;Ot{j;B)C2UHg&H*0qmX{*Qlx1Z9EG0P6m+xL8&R0r3+tnq-lyGK~`n>tO-SvqBZ? zVVox0yLDZUnPJL2ESorISp*7iFv4csTWT8vUwbLSThK2_1sU0`L#F+}NXU0{3l-dA#>KYlBHTX< zd?!4f8J?2C>WWUJ6@6W#s^v*pUrD`0c#O4t9fvm}_@?vyKAcGB{!4Ikq07y6)s+(x zhtBQi4|AeQFw%&s;a*=L{}~C{qG92EJ)}`fuog_=9fO zngCdShjq7ChYj~;;ax&!j_g|C__j?k2+e55wx;=83{NC}&x>25RegT7u2tbV<-)mz*TpL`oG=jBz22TvZe z#`~kZNAu=<&QNtmCA3)qkFLCZEWH!}X>d%o%c&nvrwGB%6SL+0x#V4U_R{Wk(Euz| zoMQ@c-ZOG|9CBI*a60W>24#!!hPEv{OEf+j6LB?~O@BuO-#ZIbD`E}41_*~M_Vo{n zdsD}~Z>Z86S8@x!Yqm+PI;U|D4Dw$)cdIv2Fd;#t4jPBL%t&5w&uUqn&HfAfHIq6c zNyz(sV)*gOO9-$2X&`4kw>Eg~yct`Gj*%`Ez6H9h4whT>fKJ6MHXXej;&hG|0G&4R zWp=(oqe0X_^ldVtk;(uUH)-^=t1L~NM0ND3@f$-%d_iOFkHf=@ zw@+igkD5tz`5NwG9_=xNLl0106wk)@~+40KgaLR(tb6K+T z`~CO6=F;kPWBx^BXIyH|MnI4XICXOTn;gtUWnhq~^F9v97pTqj6z-yFxkhS5YBfApu4q(?l zPn}zs)O4ZeY}5fujrY_2@_K{UbRs>;Ik^6*Kfmi;X;(`T{8oIeaxsPbxYA@a)YBZJ zDgi$#p-*i(_;<&;?cdgQ?^IrfeegS^Ck_ro^RJf^be8&YdsJG=x`C&d4LI~_7m-{1 z^TIFq+V9}c+vguY8xqe@rMam$`#X7Y8sqmu!Uj)Fw^Vl{V3PHLyCyJ z61?OwA3?G&c%wL@1h<6^KA#S^wFN?N{?iYsDnd9>z@x+8zCξLeomNE_tPzspxv zYkJ{PlK6aDP`SGGYu4ynlZ%tBiUs5pX{o&*$d60odTmRG;3#|m@CY?H>Kt0(> ziYhbhReib8U>nUV3XnfDRCf{Ij!gLZ(94SVm^9b#$xhe2C0RD zgYtikoo2Bf#v>;{QRHqLswmuG5)&FHa1WA%f(xrRhp4*SP;`FB&4Vf&;5!X1$zII= ze;=iyh*B1LN2f6L&F41gM7``g2~{*3t6u7*%?j&A5Uz9<6&8B_{`fPi=VL?q$KCHk z9$$u2A8GInl%YI7pWgf)m+31md6e@FNS%FmR50>iPb(Hb*fwEDSU#eeDj=sCh|sE$(?vFTtPG-x}=nqJL=NylwYSvpgJlMVt|0 z*?2nZo%U;Qa_(E3w@&fvAP*5ti3{#=XA^#no;I-Ld=iK*VvEp@5Q9Kj?99U= zu@xmiMx$a$x#fqlw8bAhFTUez)c=Ab|9F|PB&vxFz_FwTJ65><$@=}PDE^Px$KrZE zx)vtEpXu7jzpdYVb+pSiE6GCUwY6(DV;r}i_fNPXD!D!a!3{ob_F*oow@}e04?XmQ zBuf7Dw_Ey@1R~{Jz{i7(A}UkCjCi%UeW|ae#OxDy*a8&*oI|3YBcy&#hI~0oLYGGm zLT8AWI)$kbve5?pb$jUF+Vf;uM|iVeDl=>bQPj}U-|D{~#&Em8`a5Su5dZXtAh6*v zQED+HZFU3wSrISlRA0W}8?u;(Am%>h8)Hli_6Hd&^)sY!j5ggKw>&GfzzQQBfVs>~ z;z3ZNGKSA>>BBbE*FP15az0dG6AP--R&nm9Y(b_D#&_JYnI^oOV+iAZN;eK zoJ@EJ&;QwS9~qvM@5X|kPv$1x-fH^aa$DJ8kL~rq(K=pVAa^rEA_RZ*{>SQ5A{#$Q-N%cKi@uokCeZcoD!t9 
zx|_`?=MTOeJJ1SKZLbS{{PV2@^{o_geq!*$)`4*F<(qOOMK39=P3XU&Xwyq%tZAVK0Cr4IN!7LP1w;{XW3IS#B`;|oBhk_Wych!zpgMm+o_5WVG=HCHJ19Xy8Q zZNn!=h^xXwk_Y~_J~~z-=6LpZ^1AJGkJ*&Lt6@&O1I(>H9slTwxOnKmRkX-Feh{7> zfM5Ktn#9!|h!T-*yB{+8P}`i8fYJSu8aHU=f`L8*&5x4=_ikH-HGiDiK%TYb$1Bf4 z9T|{5{?{<+OONg&pf$9vp>Oq1I0Pv^L|xMDy0JhdD5u>}4i5J&^`zdK)U3LyceGA; z%yPYJK{wF8boF)yr5$9e0;^>n&3r`*S>Keh;H%=Ag^6Dxi4P?UYyt&ts-lOC3J|*e zzmB(slhKM}kTqy~94o?FKSD3cpl?AV78yN{<5q&+bB@k$foJ$u_K#7bt^nxDXKIGH z$~cF&{qC^(dn+P1E}kn_7jk=PhrwvlyN$|4nkOf?<}0|f_3l^e6a3f3^)vkX;7#!H zo4K^9AG2Tv2_cH8zR&BIwSJ3>cfl9x!Qi#KU#;1jXZJnN_rBZP{0|WR1-9;-vWQNl zQ7raW52bOhQvKdi#2O8S8J%0(272|-rJ=TjXwWSYIFd zPT+RN_Dz&>smTt6XWINwF=fDyLVSA8wKsF|2QJgc?-kTg-b)Kr;q+opv3@QXv6K3o zWHJ((`!io`WYt`C_chg!?{m{n={idRvZ@;~gYUwFye1q$CNz$0{um?u5F!TXDyX>Pv~HJDY3f90S1 z{~w$R(0A5Q@Ob3V5*N3{*3NT;&uP@g-@cu4As&1#?mJ6S@d=ShULdp+l|>7)mU}$V z%PIEMhRYt=GTdiQTs>4@cTJu7J1F?C{P$li79nY3RD8gDnMyf-G*svUBzzaJT&OmP zkSI-Tlv8eBVk9@C0?I#fXi#&UHLhM=BL&f`C9km)t68Ky>9SZ$Ptanlg--o4 z=Z(bi)#Z-v23)=Vk}^6LFl-ZXzCdiXNLvn3&zox~<)8X2L6_&d0K;1Ca>wus zex-SfzY-tMdsFm45JqKhO)znpP@Wh{T zIgv4+EM5$>pK|~#TrM9^g5_QJ4+Nvw^V!RXyk%MB;tU0yhxYO5s7jo?Q3Q~U!7v4)9kLVsTQ4JOgC@Ha2wrUee{BPV*cX|ItuF?GY2ia z8b2@Br&CWDicW$SO61tbN~cC2@mlvGoJjm^aR%~n&>$#Ox(^z4k)3`P(i+nND~VpT zciC*zj~AFy&`-Fga(b{rzpZw%*!RiD_}f;kBp5{T#nb5R?5W6yN&xUIBxf0hn{TDQ zGGAmyv6(GKGQD;nMknS2JH58^$tpyd&J0Sp-gp#p6TfhXuvN?Xe(H?~@%bn49F!wH zqUh1kDQBe1ZWM)6a-jzQYK8Q&9dhL+3-YcM$$THJp&UO>R@3S^eQlA0DkfZeR(WO(sIfm0qp+MAdWkt2OImla^M^@I7LX zu-sUc=*UUtltu2K7BSXi-Ai6hBy~rSDdG|G_}8yz`KtL}Dr`B&D&lNS&!-|I>t|y_hzrCFcQcTPcVSaPbAL>JQZb;r*a=nU&P5@{LTH}mvkp1i}a@Syr?Fn^T3X%yW9jo$@?;Lny z=c4ay;O7b*BQQOwDJ}mBjQX$Gm;$Q=I)bGH|0}a~MbVrIWZ)rcCu;dj1j=Dpef?}g zx#LP^Qh-ca-1rF~=7knH|I77<`(O>%Y~5}pcE_$O(YlwhWTmmk{=Fg>XBj;*U0ZY~^`;2~ zfiS=U&H9bn;lY8R9SYMXV!O2EFI{_|3c=`mN%AHJ4p#6d1b))YD&5)>AjsbQprQ@{ zoTn`fKYe}ErAd*~rC0I&NN2glMHIY%`6ZN`E;rj~dIS0ApU0kaX{X4)I=A8SH{mIA z9AdXBv}5G|S=Zu>kop!2EZXktZMej~)|LRXsGNx2$}k7buy-%i`L3}KY-Wm8eXxl` 
z2#s}NKqRei`kI6r+ai?EY)+fOH`Apxgo+ZUrV>#O@Lm$22lSi`8Z2De5X@QfH3FDr zB_~53q$jasAZ8lzh6xD!p!hmFu*c z|DXx}t0vvGgMh>)PZ-x%Y;_0x)zM>^jf?bmlF!yB4)Ix{lOiV)Z~f|(IFyvG!`uRl$TRj*Ho2TQJ& zE+*&&ZcfeOSiw>L)QhbKng7@TP3f%P|Ml~pefi;T<6CQ~U(X-$Sy%Zd15YKyU&4dW zZFX)|?s~qzhtMt!Hj+C}$=q4M?c7o%|sB^^SGv#hadNdfo zh8HcZPiA+s6g@AN$S-)M1{yeN1^*qJRV4~VWS{5SxifZ@d<|*j)AOHGGD^9>TQt26 zOeiyfE|u~*>4Th|_7dop0N1=*q2Ze^aWy>}Y99~$=h>>lO`c=ncP>#0*i z1R{`Qvy#zp$W0sug@hC8n!R%|^!6&8Q-{QlnxZm160+W>56^kMFXdNm3N#))c!rvP z>oFaou+d%*IgEEbEqPzw;YrT)qaC4L zYJonSsml}TG}7;JH^3-1-Hwi=z$PL>KHV4}A9pcitPh}9?*26O%o$vuJDmbwu9K#D zr4c*I7h)dMt4wJ>CpZWSJB=?!Bpx-&ky$xYmBY7Cg40@&(VsA;Sk}f$E%pMeNIfTM zR&{dc=#VKC(*sKZd_e&WVaM=JU3@>OBj?rJC@TgJCR{{?I(Q`|oj(^kR26~zRQpO% z8TT?X?(}*`z@*&eQ?rn?Wj&}caSrD>VsJ@QSM>eJ&vi0K8`BQNudPtWHOf~32h5eb z%V(Z~KOOC_#d8MA8t1`=LbN&}ShmGhs{;?KehxP&<_-F|}`-)tf74 zMcNB$Dta{bMCho5_f5;zE^8}!Og~YFS6hghu#5VRS-4C0X1T@Gh3*LVCV`UX1rMb|w7+XGD(pb)0owH;eXc{jP%%5DCXodmLFhucexHq6jC@DD zrBBoO1KE_6r=DKk*Gmzk|Cw(&&~l%dmfst2E@LX;vR!E2TATh7TqT+<{=}3E{8`-> zk&Ya}!fTU4>U&@}WoWK@5PaZx+Kmh*L>>I$opwtId8>K8JMsII+BpYml=TCy96;W> zCCv!^{g{c_a<1xf=Ap#>!k{MOA zKxp3rmZXltk&1b|+VKfdGYTsa(Ee?zR-QA7rtr{Pyv8^w&oHcd>tb&y0r3crXgu42 zBm!~mw7}keU%SDL0Ml9lpbRJ`+!Nw~cY^X^g@XayzlJxv$ou_ldDp;s+z{(YqzSnj z2K_byVpGmb+s5iRHU_ozQEatdhOyJ}?IdNKHGuRt_0t3ANhBgt=`FhG(FOkN8mTIf zR&;cX@lqKW3I>h`C5r8Bn!BxJHV$mN3^idymLO1?5QG1&hFNV^a(u?ys;j(NzwTQC zrx4bh&1&@JrLL*;ma^M(ic-5hUU945~?FVi_(1j3o3@8EmQE7b_JC zavXzVcSmfNvesZ0RKNMA$^GV=NWR7`=3lt_UmX+zVD_of>5hkA*A!`c93iTSd`%E+sEDKbmHZ)8`BM=m zig*)T*Bzqk0O;D^>#Pe&fE^Ju(@!PQQ%mOQPuCE&Cp1R*YwO0T4 zhzLm?qfjPuno(l62%6ydbH&Rn#K_CUI{IIPmCoK-Cq>veIAWgi=%o}hW-fSlqPVqO z7lC(fSJ1`9=@Zw-nCUhs9dha4Kq%>(ul$pWf8aGe zw*z83KRekZC8&7r7S;}DyTQD+5pWC_AXrzTw1bBMZ!nZtAR@xG`O!q~6wE}{Be6)a zH1v@LK$rkYqXmW|E$~pm-|=4Am~E#aj=U^<>sKgF!TBN~jIpgaju2m4FP!0lGjV9u zm?911DZ+r_3>4&YxV6#WPJ@zMAcV<(TMZdemms*U>d4r+Z7{vkl5lFn!5)aALUx3S zWLHCV3t0s~A*vu4ZWk?qV-y69=S(0##lMc=;B$BarMRFN=sa3Z#w{b5C2~v zs45IOhj&8jvWSIxl+Wl_ul% 
zac8cYHjP@HjM}S9f)AkCCvW()aJJ1)del`xq2RULX5XDJ^Tkv!UEY)vE5GiD(MgfQ ztZ1=_os=b?j8iR86H-jdj13Bx(*D=i)pH2^A1?*UW=W!)vsN+TivWNBQUxAy8zRNa z>CkZBBMID_6fHi{&=eWWcFeMIDUaVoPDEVoP%LC>R#ZiV**1@@*^m$3>bh(%jD=ak z!B=YIi#&j8VW4S2#55^QOku{k9BF24P3ZAbT|&aTq^7Mf{y3~Dy1k$o7F8f|xUCt1 zvx;ykm~swp*&9d5Y!8IcfD9B6#0mFNq$c@kN=golad7<+t{s})*!jn6vfE1O`YIeZ zn1Q^map2hqJFl$PikQoS^^SqON89yU057j5puR5>V=09qTUfcDm}j;mPT>RJ4x0F^ zq2K5Lo>d!X03uP(*M{})1Ks5!6W6$Ya>kF8!8Bt?FOJvc&o?x<2W|}8{o&Q8t+Txu-~!V*H?QXJx=*Z2Ar&E~S;aUI%pA;O%sx5iM5MT+QDT+| zBUE@K4#m0QqLsOm=#U7>8ddJF)&o;t_t< zAqb{?vBF8UOiR*XkEq z486DGq&*?M$@o(lutvi!Z0J6UF4q=b!TrK&Do@%k`{U<#k+WDlqbTxc>j;}&6n~TD zbhzI@@+B;adGia+M(_2tlH~urE5%5G#B&#MpZ%K9`L6p@p^_Q$?af9yuX&+Fd0aKa z6bO+%*N)Wjy$NwdRZ*CEY1}K*D6C+q#@4!Ppp-W}1RVN_4Su}PpyS6fp(Ab)Z299R zjK`%BH7&L4>#gvHwhkvGW+F@|i~%AVW#F??v*_SWU50EQPaF~w%NWb7B1N{ zC_*E`L2Yj&mhnq{f^|$i!ZDz4{JBJqmSx=jL6OsJ9FeRN!sj+#@&}Y|KT|wF{v3DJ z7jj}|x%R_jgE>c5VT0LaGS{u|XFe5HR|3Y<#s-vpDn(?p!U8wQJ`Nrr(LzH2pl@c% z!x@^NL3%Ej ztf45qqo;OsU5!P%W`INbQVr$-L(mOT*%U@9Y+JvbdLx=Z{+qjJ z-A?d!vZZnKz>l6k!^!G+{9u|Z+S9d|DPew|f@=PDB?B+jA*aa>kEQzntbdguCbl*^ z8cZBMT24l!&gEBqCVF+8U+7HnN%wF1SE9K+^cHCACR_OL3o?eJf9<7k7z&!ObMH?O zMn(cN{O7`+V85I=o$S(TRmT_O(#yTNmlT$5x+SB$PRVMs=T?Nr!us~8nh6xDbPb>2 ztbyQUm(Sr?_n*a!Fe5MrFys19duu|YktQ7_l|(nn^!e(tydedOy;fzJvmM5&1aMdXX5Nw)c;OFj6tMz+#4%MiY&-GNqG$*{kM*Jf!Fg zdi1A$1N2*N4X`ZNy&+{I;4j@J!+`{*MJPO%yn?*~Ep9LHQ(` z673)H&%hjpQW1tSSy!$@g#6NEBf_OxBWZEBs&XktK&$iX@Wcxg_F__|e=V+yQ7=e= z(`6?l|3} z!RE?eC_qu7iG^njD)0w@2RF(9-+OvrEAjd@ZI^OFrREnZmlRi3aE?Y zzOh$D&L-5ODh*|uCot1zYZ8m_^PBokMYJ*>33LSLg3)*jJV?#WNxUs*iR5Qm3D~g=SzR<4hHdNWsss%aXyDS z7@(bqvHo=#&H6BCfb7izJ>glgw<3wh36YLIHT`@I$}QDQG~ZE`4HmV{JcWr&?YKU^ z4@>?X;a*f1{WMsX#!7FtL9#%G!wLb#6=>VG<0r7YIg_+b{vGMRzN{>9@6$_EhM)PG zwm1G70(i<__w=Fp(d1xPaTQUhk&@`ALMid?e65J6EsR1!pB!if+=w)O3pTBj2M;9* zDdjz^5}mkJ=q>zD2a)1LD(KT78c8?}6)&$6J~22KWvhFU&FH{tC-FbYf1A_Vr2-~< z8A^mze8iRAET4z#4C9HoTu6 zY6Oz#izvb_ofF3qe*9ln5x1kzC%nHNN-*`$6A*3`SHAEr%G|r}W8ly6H2kk`LQuCW 
z%Cu$dckFVR2up`AAn5Yxg*Nx&_z0o#InWxxh9vNbkc*|@PeJkP27fMzBCu1Qij%JY z?zYf2x5g7z(BhIwb-0LQ$;H$E-gquX`an3He_H(HF>VYz7!%T;sgUY8IKdee(E zg&+$L;iEFr18*;?dASPkHkMu5mmY3GPzlct--h+=HhWdy9F#_vMMLRd6u^~Blpt98 zLAKLk)uIGU&`6ywffl4jvz(#1Vd8YL+-TP{g8=gzgiOi@=WRr%#BM@bQ7mu|0@Wg< z^x0k3Y@VXLevCXSZK2d3>WjbjlNGV}??EHU%{2y&%x;8JE8dnNpl`0fz|TpUG;#W} zb%M4NPoolEp2af_4!bEGK!SqKW5OIe1N@4H_b2nzP_5u5w2Y8RI*ewyWJezd`&IFv#;Q>8 z`kn(@zvsCCKVbQE#OBiXm5#7eYoqjz&nkixD2N{0QYy*LYYB!__=f6q(9bqDO{(I| z+FZ4g9{d4dq+5cJ_oqixbLceBRj#-0#6%c4VW5%K?R4AsZKpKUTFOwhJ_AB;2 zByEM*|2lQ3HF59Z-7I;%^~m#xtRC}0&&uiqJKRs3wtYHT(m zS{nTjjL_#)ogD4V6n>u=dHk1))>kE#o&aje6a61d-#Qpl7=x!Q<%QAuZ>s<`8Go4E z68pnV!&0)+aJeg}_}yw`wjvSXCc{!WXAMJI(rFTE=OaV7Si(5tm5jDR#EG-YJ%CR| z!US|A-diKWRJtXnN68mC*!kO`ExK?0Y{U6aVQVupe4Pv7DGs63V`mRrk%{T$)W$yN zx}AY5;MfAhWdeJ}i{qbK5bb9kUk#bqnEx|R?OyTI(1Pk?2eo)|02hcYjOss;_nfrQ z)&Jf*fVonxepU@#jklA;TCHD7-`4)HSJYoHZxASjbRexc<(*Dg|F2GKibR>5t`{E5 z3@S%nnI(mFfstL6?60=&*B`BZT&6RWZ&?^t#`lbm*T-beIFPi6HjmY4w2Gl#v$){A z1%8$A%A3Hd>|}|j5D}WEymM5d9m!?GGc7;*#-$A&5w^5zD-I0{rDI>t#F~Q$m?Nnglc29%wYvFlFDW*(S)h2td05w*n(+8fTW`;(O|-a%)FAhm?+kZ zi8k`HZEvwI;ICxM6(ueBhg=C52J&&A zO5RIXG2V3gQBOWsi^y_Xm%Q*@Pyv}KmS^#ODWoM%2W}o9xgyIMbuu51qUgLR^ zh_7G8dmd`*p}Gj!c#K)@RUUzPGA%y0hB>8PS$2wCXYj~%-D^v;c&UU`@`1djjtA{$@_pk03V3Ghy6YcxX zsg4D(0q-S6HCYQO*R|~t6OLSG*7qW8;;OG$%>O;-Y@=SP{PUW-r3$Id*HI<}vNJx# zz*l;Jb?3P_%&>x0IjcAoFD~Qi^xDR^@AVO0_M%CBoi=iQI|gK%Udp;Sf2XD`l%bRa zm>FhcYT3Gq$s3Tcz$wcDi;0o%>-y-)1Okt05IdK{k4^RJ)-7Z=nNsBEEP-zA_(sG` zv@7sI*PH-K|BdhGO_r}irMchsW*)=?lHlb~p!t4fCgSAYvBiXu zxH%Ill+}J$i7gIa(*Eo!M2Z=ICTgoGCamUuG@Oq+cOf$YBLn>_GK9lXeqJRkKiWgz zUteIew&L6-OnJiuO)2B;Hr1&- z2C#ZKb09P;eYdZuM_~b8QifuKcl*WGv#*?X-HK`c7`@Lj;GU~|`Wc=o|8=3Q?fpYq zhRg#QNS!8-l0shv9q|a~bdf(4q=~r%eW*^Q!Sw7dCt;U#uI?s{4I;76;NKY)?-z^z zl!uXrG%9V+B|uf~M#fthHwDq<;6nMYEsR6qt9-{Y`VlH3(|a3|msH_oo`B98Pm4fC z3rzrD`#u?+5ubjY)d6iLj^i6y#g~L%zCjTD=qxQHBy#oqP=W#+IwC{$%o|?WqtccU zVFLczrMS1m<9yTAgL}$Z;N@`Tcal)#hwjy5sVARHz)18YBusl=v{n>2-%jiT2;gV& 
z=WPZkomMw`WkpZZpE3Lm_@X{yv2=e0OYl`t^Yc3}FRwF*5vW_eq4|GQy<>P?QMWc6 zW5>2_tFdh-jcwbu?KEuI*tYGoaT+wX^X>LL=Y7t1%|H9fzsxn)SaXb9nX4An`#$5M zdezfvmgh!jX+(cZY+pqwh~4~0^h4Hr>WN?7hC}JOj@&%Y!pE+S&O`Slb70BMpDNWm{usu+eacH zcyCa@{D_*D^vOdkz_2i!ERcc{Z}70ijga??Rx}c)f2LNAy8SC2jU(ucA=Vq9^|?D& zyCILMpmOj1U48p6I(Uh=@6>h@!Q3@#-3B=`faGb6&onxYk*XS(oS|57X2vTp#FNKI zVD;0l(kX}D*KsH?V>PIbHeW$;bbHZ@Fk)URVtfzz~!zkD1&v; z*BO>D%DP^(N{O85&k7z@XUmQCQ6XUP?a4YaMUHOzyE!=Gpjc90cX!&p{+y{Y^q*;b zBjvJir%%fsTa&Ww55Rq1hP9Q3l_L?R!MWv0%2Ri$W33FVO4>#I>rdlI5{5&yZ6l5y zE!lW8$|EuH-Ck*@sTjmAv*pY#n^QrzJb`km0CyCn#YU7euN@EdTc(`5*!->fd4p>C z;(s(KWkR49YU=qAvm)c%wK-W)5sLl2KNIkNA4%pYsL^*-9%!ZwmG%@$*?XIR%@8Lw zYkRJWmQ0ec;=x>7R2Rx#oNQzb5#&s!dOIS9Y-`Wo&+R_%=yOI)CauIr=eD6%A}}np zTO6_aV}HErWep6-R`3ha7QGhA1SyW;Q)Pv8)a~V`07;?&)&>~B0mYXq3UtILR5E(z zhQBbXn}Kl{1h<^JXl2JzfcU9*ds^x)JRnyU z-Ov$qf0E0wGf@RY4dx#kBv%`~5MfbpHCi2TjNWvwLGZR`w+)3Ouc|zH8b|n9mx}yF zA?EhOIrc>}EVKbHpLiC#kd*u1Zp9XYybW{pqXZcjp(H!=tR(I!%V8@8)gxFELp}^j zYd%H1cuV%-O`333Ym&u9q~X^@j>pQ==^l>i#r>dVU3P4swWAnwHdi7Wkj4Qj?irQ) z4xvL)eqOY?`!k}Dp5mf&%hK%Ft#7ury8U3>J#>ZE0fY;o7KIuruB6g*hveCgNc|l` zLO_lbg80%P#V>`SCx0n$=JMyF8f;ZU9#Iw9vAL(JB2EpdddV2%>&lOCyj`AeMBWrt zVU>rAE-38dkp#`)_5GdVerETPOD3Vl4ZNzT^ii@eB`mghePy|BQB1S()G2l3CNsR7 z|Dqq(&bt_%&;D4p{ISSuCA&Dc{@eH3hf~?GNz>LKTmYi6!uB}`y@DrutY8oC#^J=a zNPel52M{Tbv909i4jR;5+*qu}2Q~a7*21a{zZ&>AQ$m8IwQI1gvRDdfPS^)v0N{1K_zUGvSyhgcl12n+(Mq&Mpi^>Kg!U&)9e+vtSrp*)C~ z6-0^=MF%Hyb>0$U5E_S{G452T@uKD@}F@B(A0<}28 zf&iWx>cn319yqPY=em3NYr9A$c&YiT%W@^x}iIP8|M^ zJY)9zXc`q=lZ?$XQUn=cWeNU}hSn9G#fdb!MW7v#Elya?GNS|;62X_$7Q0PKh*MQo zYSdb_z|h$dCCqU7GQ!`G^%2ASeuA<-KArv7>_!DT0Cr!R!`43C$x8_$t?TzI6yuvm z6NPsW5++L$oM2LSr~|73feh-voxPE-!->|IJh}B|J$+}AF`@W5kVa^^jA9gJs(8z< zhYmIli`Rjps=ECKzmHvO{0Xo!1;^>BuMzgRC?vhuLcbu!%eF`&5%D)=yQHEQL#Vri z^8T$%g)j0|%Izu&IV}Rh7}n~#O}k+0l)lvPcsPLQytwN zgA(};aHHq3XeVUJpgJNej>)6Lqa)uW&Hxj` z02d%*Qxz1u7PmBIWQKWsj4zxIQh{0jMeGb6Dml}^g_7RWV-=T3+IfpTq>riSh}6j zh?`!}W3;K2x_oEsL0t*$2XY24e(U!AKr&c8pCDP*e+(c^*BoK^G$AX}#d 
zj;Q!KF2WHi3XcD-R~anU3gGI(SVNig@}eAB|L9o1N8t!+*GprRzO1Efdj zbqD#$)tRJAO6#jWvR*xW0MWrMbeLEu{TEH2u-wvCsr`P3XGe$-c8q%^VD=fiQ zf`^X|`IKx)5WWJG)no<`&8J$_57w+2nxQeg=+JcHd>8~8e{9bIYAz((i7!xU+AoQ7 zHwzMWfYa`xo&F!zLNUk!QO3KDXsWycU#PRN?yA9qji>NAc9^N5)Va1($S^$hDWG95 zGa812#Vk~0gr&Cb35I(D)E)w2?;q+p2n6Hg3fs;aCT(G{WMmW-HP~D1e_4w_%D@6( zMIfANn+n&MiwEGNX6t}rVVN^DoTI}jDRd&04-L=l9*Md1`R^Hp``FXR_gVd@q4UXN zYuXB+CdQS;EUfF#Dc=K7#<2YS-mQcmf4X*kN@1EO>X>$Lc=m6hl=(_q@Jad~iIU?P zrsOV=f`IY|Sa_+WJKhem?Z2XN04umcI+(R4yFhlv`$0>E&}(bDcp@DDa*^t}&Ri0k zSSZQz(sI?-A^=m3{dla}!Xcob;ZHc*IBFmxH2jsUw2@3HWLx%o>;s)M5M!8PhCJ@g znjP!xHnkxJr5FUpvtO93I4jvTsoHfI*mbB+md_)6aRWi%jD|7)Dn6tlceO7QQFAM@ zkK&jLKH#NQ6xQvyJD`#!^o^K+jf9$9k^erm)TjV9ZO56_Tq2#xAm7{DoBrc?zPJSf z{19B!i<*AT$UFkfH?nJU;p)1=AQg^W-3yJrw%<`zo*4PiYs?P*#qg?#oE#@AKTnw0L|TXe}ksmwNtlu8Y4_3j3Qwt7<(&et(evgJZ?)@o7=-oKX70+5$tVXpImQ z2&SK(hfV7}*~;%j2a|$G+lSbCcy7hCT>PzndzMKEU~Y^O8Q2j4vhdtsY|A%T&X+g* zzQy(-B|>W#3+2qxStQ;=+ld&6w{P(2a+jio@fgG#CDxnFg#~suwKRI9LcUr3PL9n2t{#xrC+vA8F0Fa^aT$9~mio zy{&w^o`ILZuGr$Tht1iZgbk-Jj?C-au>40rV-i<`st%2NTudENpCdz4+PQfH@2?9zx1I=QPB ze&k7^BticI+cyx6x7ZPRorci_(~YKRCHf`n%{3Df`v{_l@|{~->PF2CFVj8t2iDgF zfTq$;bs>ly)L|~DFa-Dp%zGIuY$b{R1{|j%S#N$MjCREA`Rc;Ladbvn+qpJ zTAq=&SRR8=mV?*>J3QK&W8ld0bfmTLAgLUN^a*QCRw;J0lHdAf=d+1N236=&VUU6O zsd+48K=Z27iwk=n?i;MbgK?^YCLA@h>s;m_xTF9F80O+d3Xxet zz}#c2^(UmHzR3+Lbt+=In@x|%d5C|OKvqvGv#p1s6s(Quy;?ign7>T{p#i7i(R+Tm z+zu$2ZOnZ*V*+n|6dsBosfFji23{tph~Hbcq#EU1nXuDy=_wY9ZU}r1>!wet^BQ;b zhm|-4}%O(@F;=$VfR7EAm_p9&b<%%08dASk$N;NOYl( zJ0cMTL?}|FWbiE~;bN1YFt-g(AwoJ!CWeXV7KM3c8hAw|!et=zla)yN;h4QR=JKKo zh0<(yWwTr?FmW;93Cefj>buc(uw&}ifom!5((jKt4?S{WnvuTAl?Ohy|NIqn9VBY| zvi|2s@79C)La&ZsP5SSE5~@!igSS^^uVUxXoUf4N7P}vzg7UIv@5}SoB-X#5i*&H2 zfDzCUn?=UU#!o!{Z{_7wPG2q{UrBDZ$(-;;+_aL0j z3*?~UCEQ(pj~zpdP=x|l4Kg8TnvA~uQKH2?BLbL%a^0!AP%H2cMc*Dnn%0lk1UMupZt0_FxLe8rM)4u zV-%)+C(XRgjhGgV)E>730;0@-B`O+vveuSe{3lcx z*g5oJe;QzWR#*oeR#Qp}&V_sn1Cx0WVWEU9ZqS*_sf`8r#3&)3@HyfPs7apQzz>%* 
zbp;>NarM*6tBx|Ky~WlW#X(;cPKiXL!XYuX6D2dIyJ4=bSXGyWO$WI&C43}8F7sJA z@qu7S1lY?+yVlgJ;)~JC8i&p5nprSVuCt8E7;hKiC!QbmqG_XVEnpsFY|nc=Vxyg( zjx)<@f}LGrE!=i68d2HW7UIachWNdu+ z7!f1KGw}YlfuJ16$4*pX@0)rtII|%P!~v;4(XoT1vCvvZ3PgZZnMeT{iOh3y(r=^e z-O~mr;RXo_LY;cOOILRliiWH%=!F!w-^UAx6}0G>2BzgZ_L9_b2NXusjL=;?sZ#hC zpde@P0dto({jw;>hh|W3-uZzp~PVY&9&-^#N-Yo?8uXYtE&~CF`x(Djl<>8z#W9(Q&0%pcU+WluriYhTwBO3;d9BJmQ(^S zBO^R6qr<%3C5hm7%^fF|{h~90o#LU#Zb{u`<&!zzlwAVeu7~3CgToKpm7RRCb0S-hJL2Z1Np%jvqp|sm}S<0ZPQ3V!DWy68=J|o|e zdRr4yV|F&zT!(@%}_U3Ba}=enMU zIaYG3$E_uCK_-a}VD4wL&aqu6z;R{@_0D_N{!}1QIGJCo?Rs`PqaI~{U{>PKjAQ{M z%!Xi+AC6aD#p@;4HA{RIm$>zn6p_gKy9$Ug3^jk%x;1C z_JQ*lx>}|WVHcq%>epD*jBn2`XN9{pS59leAH{jd*atHY{GcKIJuTvxfsUjp^o1Sa zoQ)2Q$ukzBS&Q#V3+s;z%ta5zHN z`z=enUVHiUyJxHE*TGbUNSv=~DM;6lne&0oHXEyheJdq;{2v5q%OzfG85EZO#U;2P zHc-{cT5M1zE~v4%U=pJ=KZ3Jd2lbzI=B!UPcq;-(DN?0{#6tkX_IYg%U}*g5ES252 zz9t*#mB5|E`29W|4$*~y;)GdqA+;8!NoS~2_%v0v;v38V_ zCxfx_px?oxZUl(CNm5a^;*ykzyLCheR4df5;i(kyb-jfQs0RtV6;hkDTcO0&fy=w@8Jj?t2D8zU? z=CeeHYb2wWrnKeuzpnq8%77;!0-4Cp6SUnwFo?IEM^n@S%34Z>mFQ1g$=jKbUPq1r z9g5CW=G_NLKbl&~1y{F1H+f%F0>*GvDK1lQ`*nAys!g4@7WDs&-L#mS^18LDt42rC z5WcYF%%hw0Dp7{APK41=ixkQ`We3d3MVjN?88u1E^4BtRwl$jCC|PM|4=>cv*LAGP zWQJ-@d%L>LNRP6oEzU2SO}5&O1X~@DOlMNJ(-GA29n3$=@tTYNybNwkcyM%5?F=;= zFdC|D&k80jh2WjiLZ99UI9^dK4O4x!WPeEEnY6N0s;^Hj-15QIm9sZI;EOhu?sz z*8XHeOP|NQPZ1Wi7xF8|mCsdKTAkevi#&& z@C86D+4=RHZ-ff;3N66!^W!%G4S4U)7McDnO?rTl?jNcP70xguSY#N`GY zlU!9seg@Pl^x_P^Vo8BO1KIhd4<&*W3iiScU5I*yLL6&fa~6h#O$mM%Fbq6ZWgtq! 
z){sa6NwACU{j6M2-miP~R!8H|0%GeXDCZ%*&So0}AaR%#ce2=%qLJ5efAo2=UApPe z%kOtsWk3@p-j57phY;+gFG}sNPzW+%xMeYj%BQSxPiOWil^G|K$atnNvQTY?3{ zdotW92`G3r7xcj1Is;EBG?-dXcuQS~OPF;OfkFLqmJIX$T$vGesI9KWt`eCm2=vcB zrGV3=%?sQ!S-9rZ)90$yvej)z(d;UuZ?dp!f&RYF!FLT7c2Whv{K&<3kyFczvOptW zZ`TzjT(Mm!J%-)|c!1pX=iBtQTm&LPi%V3gamnqBZkcS>L)l}`WA{l#dA`i(nG`cx z1VVTdW)=7(q|x{84xbWP+0GYjQc9I-{YFX#{f41b#J%nw{YNE{5zp(i9g_23yIN-5 z#IJ4Dn|tw@GPt=j_?5AJ>3cM!bYuuP1Aj?Z|8SjDuv2TZCEq7%r3apf0I3jS@HxAxW9Xz_1m&{?Y&EuT`2!NWjl#; zc)>bJ0jdSKum=8Q{NcdcsSVZGjN<->m@Fj=y<5%-VpjMQQW_>>frP zLX-7`gvYW(iKq>F4M#BTCJlY0zi3JVHNHrD9sqkwBRZ}{CijiBE!PNh_73)aW`j|* zpN_kwR{ir^>;*LSBm}Dz2;b#j6y--BXa3kK$~O1z^Bfsu#yf1M#h-|vrn}u|Llu?k z=>&)2(8pgM9&ZWLu#7gr$MBP-wwv9i-;I$Wl>@sG>k4(<4rQ5HpzK=t*H)(DOWS@t z?*g>I6=>(YQ|$_p|I1>5LWY5V`*bbe!>PzOx)9g_pLsHP*fu31y^;FNnC zfli}br(h*Pde_jjyIJoji;lpDJ-m6SoFGhW%sG@PXIYtB>k9|*;qMKbS_PZZ(6jYM zDhB;7Za<&}mxADOeiBzNkhlR1(yLnB-2QOylz!)Gu7;xkMzcLjc`xQ{rDmdJ74HWQ zvP*()VJI#?dGy0a4QIOpGq8?`?f&;W_&=vywY1NMx3eUaUheVXu}qU_Eu%++nnX1W z>sf&xa_u9Kbm0z2Dx!=A^$}#>H7eIfu;DBbXTztjsBAv|UaOIcY(>7Imuv|*&|0eX96i`I3z4)8p++*c6$d(9*kmmM5$fPQ`U7~z&+RHy>y9jYM@4x( zN@fE-ZmodoC95QJ?o3GSX6v+H1UFZMNK~mPQy7|GlX~uJ97-?!E`7=le1Y~YvVXUqidow%Sq=&#IxwZ}sW>;vjm&;Bz zr;3EJ94ar0Z%WIE$?BG0pj4FVDPG8j=w(CB4@KDD%-zsYhz2RYXtck&o@#W?5D+H* z9!l!W@pqQ&I4tQB@qA}d@D$2fMkXo}{so60>FqaP-yLMSV? 
z6FosJ#BX&plULT|dn`&Mlf&iE^2J#Mvrb7SvR6qGX5V~$wZUxCmS?r8JIMtl z(F#;2kYaOdox2!$_)&uv@xme6mewPeGzW|z>wjqI+KQn2yQdMoZ;2;;@%d*IqHh9~^l1K3Zn$z1q2I2F0GQV$KzU#jL^V@A2 z2`>XaU+rech(30SfMM~S(+@FkBMxs89iJbscE5+>(a{c=r>6n3q`JDK|z01*0+&Z>Ctmymz`ZzZFATn_QOvQle)Tfvxx%1VO`$tzb!*bOz@SSgl~dr&N^o+Pjyk~k zP$UBBq%|?@Ef{}u_-1j!q2dm;P?)aaBs@_VZ}RH!dH?4m;4{rY;nEjaA};+Nc~3rH z=AhAh1O+}ui5jzzIBNq=QvyiLXiAxUKU}QdpZ_%V9@KDp_gXDd{t|9btjqF3NfLL> zKc!GmZ&hdXmrx>Z#S>84P->Yj_RxBY>?hlp;Mq&T-Ngar;`>i||GTHLib3_>${m~I znIms#hrH`|DgEEF*yVRk7PxLG08m<{mfC992GXnmbA!h4UyFqhjtecvN|XZ>C+W|D z#D$a`N~$%Tl|apAMIDJMWd1KD{4<1TJi|5F!J7 zAyetB+BB9`>JY+8F`|e36|>)d{5WqEuLgHJmMlQiDAiFr7Cl=8qg4ppi3FDd#X%ht z6M=G0$@#e}*t6|%-gx=Gox^*$1tHhq;rwxw%YWHC`}q;j+qklk!)^_CKdd9_*FttKOw1Tp)oXCt4J@%=>=MLtqsq-7CNNIuO6c}5@MeuXoGSzo382|ts3kqq(QC6Nkp<)B8b-D~5IRks^shCIFt8v-mk1ihRZ5mhFc}N+GEmr4Q#X__3<&sh?=w3UbyVe5ciqDTzev9aaTOuU}`_7O+ zbVNw0r4rSQP=k^p6HM_m0x(O0i)ly6@YhJ-@d>GtQKBUIilw85$gO$X9{nrxlihpPq~;CK6Z_gXScaHBW!2HQ0u zf!1e?sBL>Nfo%$m)k%4LyzcfA1`~oZ{F$wLXDC~eu@WsNu&XEdToH0iETq45w5$>I zfG)e9$*IJug{dqUl$@zF{6%y!|DfWVZ#r)FwOH@egcE;aLRa*Bzy{1YgenstThXJMy%Q z{xxKlsjx_s-M3FT3Xiu?{K8U4Hd+c%RAPc_eb%~YDRK4+V-s;0Sh3(kpLzefQw?RO zyB`!}6kmD!=Ov<2DGF?)mIzHD!O)|o=)WZI7* zL&iXfBgzvc`z3Uk33mE+R|~+!eQ^p*HB6tS#^ELzaESzOeMwacZ1u0GCe~J5z(FRG z3G@mJ@dCY!2XarT8K@@lhOdeURd((@Pn1MI%|69bxl7d|?AxOu!dgNA#OQ^Jchm>* z?|boIm(r3bb7zcX@^w>XGrt~hyhilGltkTqc}H)d^t56c#IPdduSgh2L5Z7vEo946 z&Uo($_|Dh8Jg4m)rH#3d$y8@ti#;Z0?Yv=P+iDDTfsUXl`EfDkvUxqsh7o4|fS9`xVUEe!_x z9a?Fc1B8ejF32z!bjAfX5orT-xCkfF2c%!tOJA?TR=*9?z(&7=uDFgdFwdJ?m8?w# z7e&y5!&ChQhF&B|-$=uU&udpHVc{E4rB!Pg{fIR!Gm7iFNz|2fR$EZ_{l}ZAvQT3z zkv#JK;^!KcL?0%;itu;g8z3(%PJ5WwD;pipz8yn~okmi)Hw!ciOO7bNV7@TXMy zZ%fi&*%yPW6Fm%&rj!8EVEwjv& zQN!7>BOzk{h(Vl<$6=a*#T2hcn(LwqlNDh@-$XJZCn#Zd;SJ%bJc;cD`a^%@W5d8n zFByzcim?lsv7JMl`zS$83~kb{kagv1W}3kaMY0?F!ECwPMo5+8B!jk(mFrh%wP}_B zjC!EzqpK|Lxp?|x#?esR!gl1CIwhQEsFp1>Z%sv}q{#;6Zfq%o0aBqulHDO774;<4 z-9@E7y3Mjat?r|t#5)dqVqg(&C}?BgB|&7Uv6l9bJVBO)=aJ+gd_%*m2Na1JFP3D< 
zY2o=LlmC6(e}C~yf(bF~=Jf}Sp$yDblbLSv7KLg45(D@Ga6n3fTd#5ECoO+Lh%z|V z(AyFdkb?89LVbrnQ84aGRT;~BWZNNVq%5b{k=H{Dyn;yYEya~<_Kj|Mnj@M`me=(p zxhN5`8@GWJC}chT@kn++@K*6NWC5%qQjD@ma`srfOBCu`LT1W5*!d2esFklS%)#%% zP9yT4a(uB;Vjw5){@7$o2})g9r^-sJi6||StC?T(T%V9ZpZrUC;}rAp%6JfQxZ_Z7 zXk`su27_UO;e;NQby1p$lL8YnKDeMp7BUmq_x8AHWCIf&31H(Sqq`_u`Ao?KjyIk+Fsu456kipduA@-o?1^R&E&)2I?!g`eF zD6+g4PwC0^fLCbRO*_q|L0O}XxJ;@&whKwP5>>|Rbr+&*J^F}eH3q6}6WDTnlcb_b zD*}p1u4Ziq1-c4A;=bfwnFiR&JGqG4^OxF0ld}J%N zb3!KH3lDl2z+JFM(Um-e_Po09B>f?q%kyX<;+GYr!e5BS!_eUGODnHM)FLAeQq==g z(U=bJQ4Nai=GU5D!}Fnit&7N^bn?b9ZdpiCa6H0JyFPD)!&C$y@}25gD0rPXZC2I* z?B1uFR+A>pFKr=troP%@BaSEuqyq2MgLyH*1PXsKVIIuSf){$Ges;`U37PuGagA3k zF-W|4+)T4?qg#4ROO08_J;sJOy(bUBm3jXp^dCHF<)mXYZnX%ZfVBvu>1b1G!Q_rH z7|+$s3=Cv`d%MMn>SPnWhT4gZ@pW@Q86qmyUWqS}yi$-ltvGw0p zmp!ggIW}!-7<87Tk7y3XV+qMeZp>!qomr`fk-LKR+axMTfenhl-^~%3{bF{A zq&s+2bE-*{Y#drv2pfT>S;C1%f?gZ-R*IlwYhsH#Pg>D3uYQSG;YfNt>V8+|CHU>K zL{GyaP5l3sVTKHdEpmbu%Oi3W<#W$WF3%(;y_RL8`SgviG8?jV%+Vo3R}&S)M35Ef zNYIYhbaZPmANave`}29ieoWcTPqbqyV*e%~1031Mhw~O7hf){V<=VJtWm1P{Hz9%M z8JCe47zbUSuNd*4-&WUMQRmd}Fz2wof>d#}>PqkAsQ z^5B^_hPRX(qOw#woTPvWW7nTROLD_TauJcs%n81Ee0XScXhxHo_^x=R0wb-DHHXxE zG0d^KfG8}s6wgjfYj@wnpt0G}-=QgJo>xWG+_NRmSuAZTWmT)Zi6bA}+OR)b5;?YW zx??5`qYQ_-sFwv8fQRiZN@e{FOqpY?|hbtxk*CNf-%ec zn~=Y(zIN9cF=GoGYcuX>_Y|9<7Wjvold3fS1>Q1q<82m_;s0LI$YIYkV6R>g)}Tfm zf9IbAU}7-O9bV5{3qtnwtt5@VkV6chG*g4TC2J5+z}$35guWQRJmeE9lGPFi%JT zOq1S;L0wb(Ha)$N{hfiGM?8ynx#Wdfh77dcNrgoDi}al1Eh%_4>mSy1LDx9-Hi##> zCmtKZxqLO+Yzr&<;CM(_aVsd&y$)0>A)SpJyY7jg)<5d^n)fT>O1b1Tm^Y;xYB~_& zw8(FU-xpzJx+OOfvsO4=;^l1&4AUOgSDx2qYf4+@%MQfVfK*UckrKsyxDX2`U$YmN zFr%thxzNUc4YD1X+4Okb16r3CA0|`f_`|}D^HEku=V-mI!(FxkHzCEWQv3 zzt=XX+SkQ=slKUS7Elcj2Rok|(<`K|4J)gsE+8?Bph+raKZ@aN?2K_?21QAf?K_HY zqhFHVk>AW<)0$n`!pTo{M!~(#Ohbx_tI0$Tn%M+hkNDw5UhvhLwJ33Huz6dRolfGd z7BPuGL4(PQ63uhY!vec1ptIADGZLTO%c-u@Y0DVxB$qt#|1IVQ2vQ*n5~8ib5LjZ8 zq2lL2=ABNEZ^*dIvtA%~6j{(oBRGMV_Au7r&iKfPr@^656D6$o#<$0qHTu!pkk|J& 
zLqS4f`o-Oe?gJ%GkSVMTo~h5}LfLbXbF(l?_!9^xh*eeT%?0?}Pz+KoE-}+tY^$91 zKbM%4rZOs7^Jviz-xwT!+1ahMd@8Rjv(9=07*?oMYjpI^3aRZtr=>fUQL{bHqPtU1 zB?1EP5vHLVN?z2;&4)H_DQrTtmSk3{`a4wlcJfDvDOocrpC{x`BQCyyA8ysG(B1^E zKx8Y~Ud(Z;b-n~j6qDbmX>P?f&s8Q+?VJ>%^p}V7l+IA5i6Wl0j&i%;M8P*Kezjmk zRj|m!wr#x=JW?R_)>kuE^@?$0pKXbMM{2agBUGV8*_6{gj8oh*0e(FMq>_Cf=*1|M zrLKjaRVjBZ^6t=w+w*emgP$l_f?1-&q1_JTl8h2^P3}_bPl>wk)+$WyFP|;9j}jAp z@q3xXP#yknnkn9_n-Zr~i%sSymHY}o7Onr97lklgbR+?{k%;;-jQP7;^BcW0~G=dEP za0s@KzINFz2ofcc;jr98Ce5LDbH|S9%t{{ALde#ndTuCfGDgwtq8K@>X6Y~p_tqDF zaEP;CDv+Al@<%N#^PdhvXBNQ?He$<(Wt~xre2z%8u`0p{D50(KsEHa9QB}Hedt-yA z7o_IQ<~l-Eiv8jt4t#_HnyK6M0LmW@E{=!4kJvbT%`P; z$x3*N5CR+BDj={?+fRB&iIV#X>#FK(<|z6W>lrM^QhaZ6QiaBL2cHFkyo`D;*lJ~O zIOcrX!M_o7qrmq@I)^FZv2o_hrh;#Ol=~n8m!~d05nK|^i>yNDJ|YV7m$J=zHH^UO zYoi6VL|E-A&Zc~t(bpEvENU}{{nyPHw>wFQ+_9%bg!!+~I0&Yd!O|3ktg;8uo@(M^ z*29=p(s(LKn4ky{5h?I5`x{_M$dPzMzA9ktBb-%%rrwu?TR%3NIzzx9o=iV zDH%`!zrqXOSo-pPqnL&J%AKzQa&p2^xijbQ=Q@z0&7gS?}m=ZK_!xlD&l zWhApX!0lmWy+p=Ti4S=j6K;?=&Q>BUq3qw_qTef}JBx}^^RTL#`;_L#es1GzOUCEPe66yMW(YF}Opk@g{;6 z0t*ab6`UrqN#OAM zt0C7C`9M@Pl{h^fe$jwoX$7eO-izi+b55605d}y>f~4NevBpWWs)E46gMp#@ZCr2N z>xDtZyU4~_Zlhwe3sa^|%w@ir;*V{^VV%}#OS%ov+anzZMIf-+t(+Z>S@QxX4XRny zd(UW-#wT;g$m_c6`p1%K8dqwLZ+sW+Eb5Dp#=i-3*@+6fFcL+KwrnypEPTSeSa0fWau~+Jq0v7xQ}8pK#l|x3+YOV9lBv5Or!kc z=t5cB+^lYpY3DC-HRL4bEUK%(M@~6m7_%y+C6mFBe3vO`gQxLmEeM*>(&Lv~20sld7z;Hl z+%KR)i_`A3$^WP8hE@6ca7LGo7K0^DzW!~~`=9&O&EL~G=L<26Eflip+*{d<&`zh_ zmp!j%BX|z%as!CGZ;n2{Yz*jZv6yv_(itAIX7w#{4#fP&zC~=|p1(mMlR()fmP+ncim*KHwc!ip%d75}v?i1WOS5T;r>OYa zhRH`)16#Na=Yu&h(JR5-tUX-0z^4v!MoKN%bEMz-*?-bv6{BDDNBlJ^Vy3mV>jV8M z?(x-Qa$4Z?IY*rNFr(NxtP4iBR+3Ul_iTdetF0tarLQ&kQ^30>VS|BSi*lG17>#y7JiHRo;{a8@buj_rJVn4{L=IpxxO6m>fCu4AN}5Y`5{vRb|8Ng`hkPH zmKWWBJ8&A5PaVxIhypFGtbs8m%@T-d(LiE+`{r6x8(0<;IkMB@;95t*opLv>BAKgm zG-SOg4uQ=NByWzBs1%biVJ@d%Hn&)pyQy)lFIA?$Rod6BOhDPxI;0U0R1y$UuOIiz zs6*?ZHR+Er(lPWE>+SqxEz_EWmMqM47=?0wEE?8Lr+~mvjfM zX%gxvL`WSnh-M1veqnN(T&b!^L{6TJ87ZESUbioY+fammpSnEDX8_=c{F3_cbRz}N 
zJH}omkoRAl;O{R=Mo{K(8J9pV7)08^!^nd7=YH)07Q-_0?n21hKr7(EIC9D~4RR|| zb53`qxUZO8LoK$I35RcxNEyPN}sjrBmh zv-@GO;N$BDWx$Tt#zxtijlrw8ZliaN9}vB6{EJ@4_v=42WVB-*{*9aGsM6SMCVDFL zyPq_uFNJ2T2O5aazKRj;!I@qTF+R=N*d)a{?%?&qxqxt$sX$smMsKKh?_l!4Xd~;b z)mLLw(TXdB7zWN2yF~^4Aef&AVaFodM zp&2j0?(;3lVeRygqjN90*v`Aw6mG-ip(7@t0bw@teR0&^vY_ijIEQof?C0%lgRXt| za?g9nef5paWzUPm>+UbXg{CB#)Tm!`RNYnc#)Yus9deLrapnc#)!edKm=%6Tfs?V}K!O}E zSbb!Izhi2u5t3fQwsNPoUHP^~VqAw($46v+8q z`>CIQd{QLm`>B#6f+)&Lf8l5{Ez_usSqbrDU6EEOUI4)h6McN{>BJv6Wh}DepR@ zh){VmJDleWbPfEPQQ*%Sp{ShjGf8$WNWvPFmVQ#nTfq3=kaYEj07F^NBa4d+Wu7x( z0a&abyy-!m9S>Sn=ua7gf*l`GGN#b_QR_pgKsDZ`PkRp)pNK!`qGI3J#;coDrwqBs zU>TK|bMjkneQrJy>byT~60gH6_~yw0)2!eR0dd@#vp?Gouyj5rSih*3Jg)=P0!DJ4 z+1<{gXqOMjB&i1o(EE=B@)q@W5ufhRevm z;1CS(WT2Ato6|hY8OhCVqi!guK6?QI}jPn%=8{U!p4Q-R^d<}CS z2TnQzWkDWhAI)tQlDAe%O;yWdkUK`BZOEx$JLDmAmWHDc-%Q-x z-7Jr0yE7- zU@(w)&>)|nuYmek@4EsHjR)sg{*Tsmu@>S+R!}<+r@g@ReZ3qXEgg$QDF$Ec~*j^8Mqy%IaA!Ce#B&=Wu3IHvA?U5-81XM=)%g9hz>fKWWJAHhoJ5RBa+ zzWzlHd%U+#w-FAQ;(ydt0MAsu%vh`lO#U9SS1P!G?^o|;5p>;-&@D;^fs2%)y4<~v(Ckw$rv%tfM+Y#M4@UjI^t4SF&Lc=ofqcDgv*pu%ljL&xW_ zovd=xd0x`j;cDsEcj?t~{X5GF9Oy?%peoq-A(iCoJoK$uPVJ%GiQcM7SSRbZl2URd z2EDDUFyMMol->74+`NMlh0aLL?C0X_><^%}qLBR(#|WS>;S6Qex(PG3rVi`bdEaI7 zFbt)cYRBn>qq3t$Bj$!7uX`>?$$^CYG|1ZkCQZMt-d0|d3kE45KlOH?Ec+PE zzeaYgU$&9!^=dhvIz^px0}-wJ6_d@SCNb{^x!nuZZ5JFmTYuydQke$nt!3-ie!QD= ztQ$q0^MpIiZM|Q*t-l4spI?4l*a1rfpTk+(u{*aRAC%$~yPq?=>;AzPWsIn2vO5)1 znFSJlF~2_i`b%-gKcES=@!!9sbm#W8uwv9mYmEltPeF=Iii5wSp^0o8<+7MXxE}Bg z5Y|ZbKSv-sc(Ubj9g1?q@a@8+ZrzwEgk@PK+zp!clV}89gLtJ4M47P{s{CuREtJq> zfrf*;W;m+pk1@>X=cV6Tjq>d0oaHct`>-8|`~ts~Z;!fiY-{J0$^UlMAMsh0{pDQ3 zve@2#(RnLBeY=C~Gg21^aHZ%x7@s|recvW^H9A^pbtu@A%bGROktwn$6JQT5BZs#g-4%bBi2(NHQu1Dmh>DxiK6IE>yixKo(z@)Nct;N9 zGv}RzF9}`*)uf)z5Hny8Dc&0le7_yC*!!RvxIx=wuBwn`Yv~|ix)e8WYlk_-;sPm_ z@^2$MCM0pj5*G9x)#oV=o%kcdhV#vhq$OW4)C z%GiqgtPYa3WUWqes;EInxU5VWw;Zjjl@wyhL-6aRT_~YP(8gO+%_{$w zsSUd*NYYViy+-Frapw8WIKU0M;%nQO+x&8UTBnUiO!@u#Wk7U(Li}I64@4ka}%p$#2g>yquj<~0--Nj^ 
z4I1v3rN{|kd{C~|>~f}F^NRGGW!U$R5z?5C!p}cIbxY0mWY@Y4hhf5;kV}}>GnpI8 zUV5U5D6{W9m5IaZW?2cdLdS+a%d}K7V67rKySOM#ze)WIYp+QvTo3e|{Gi1B1;TMu zUp<6*#0tE{WqXfV zy6##=0{Mxb)2rMaqir|5JM~3Yn`is9 zZ@kW^uXjpQ8JyO1kdz|s!}j{A2li|OD2BRo0g(CWTZrm zhYr>4dMcZ?!vt-bx23QW(}kpaSHLDAank(NI*U*7xdaU0FWb_7#(1$?d(o^fEaBEu zf0yZ3jv{D#wcYgvmcLWOTS+o6ZlusK@M+R>o)z!9`-{JM1!cbeK^bq)eAd37AalNE zUDNa>E4!k9myi9N3j6|(=2|O|P!Ti^`$utgKmPeFT^(6XCL`9+yHC|W{4%S(?m`-| z~-jz=M8H?ZjqNdYk99*UI+PLv0{=QbQ3+b9{AFmT`(Q<`dQdw#`1IKKGs5Dck@GpuuD|*& z&yhAo1XFfzFD>-TEQ`$=rV>SmR=|eyX0KnCf~|Iu*<_)QoVqS`gw6}b9Xr;st14$4 zTB*NZ&Aab>One6-0sQ@poyQPf#}4S<+J+muD*2y|6CVb?HwZpA6aXQqyA{vAkLt>= ztd52b=_Xp@#W3v>o(g~;%&a(DP{0U5-+-jcNaqDwgOTUR3KbKGgS;ey0r`Ub9950t zP^Sn?*n>SM+$>c+aOxY>^i0;(lSp+mbMd+(ek7d6YR#LE`Q9YvI%ASt3qJKCpI3KWHh-C(EUdn=I>rTL z875$c!_aZoB5_!&cYnF6T_uwrPNbVeVlmYJ#92pYi$$ZvVeEn4<#u&`k zKohz5=+mPGB6!^w!6KV;^lYuqixHPxF@0{?)?YBKnP$fcbMMLT zzF%TbgP9NsTV^J{MuAZ7wWHTpo&80&)edy+Pzf4vv&3!-6Cg~}1vR#NpSD|bGtCh9 zIEACqDwj$223m7^@uJrLTx(Q1y!{+sKQaN%TQ{yPK?ekOo=V`0b~9~gKQF+&mLTh| z+Xahk5CrKo!C`<7pw%v}-~@Sc)9k;i`WDca($zGt#>!^~N2(|`Y<_64AsMym%60XC z-9AON_6wYNYFz{<^K&0WrU*ZUL#R;E+BN_kC0Y+&yPI?_+qa6RQI?XZikwtR?DK}^ z^AX{aKR?(hUvr7uHzsRUr5KSiSgXinh=0B;W&dzXO18^5EFSH8%I&Vr{F(R|HfHo| zky>Cz!$xE4=3!8GPIKq@eLL?GMfIJJ-lE-Pba^GNkz4zs}bT>SdT- zvwUhzWH`15x?L!UIrw3r^0YgqvF6hIK|uC$Z9}-kuX+B6H2{y?jKB-7)MnwgXQNoP zrp-c7Yc1xm7-XD)l?S4jN~qQMY&BH6N43K5<>9zTHU}Aq#(uVFnyEunN@H(PJ!#{P zJ!MuZ$tCJmKCID{CHI3j1I+GbBhnpx%3b<3ROT{ASmp~BbVTT0ONj>*K>!p}e^w-g zQn+UpR~Ji6hU{;?3rdo!lAq7H*`>~IZeHFLIzKat1$I8ruN}Rt@MNAC@-xJ#jSIY%Ro2A@spM#|x}%WPAScR_D3n^fP?HWg`aX>K_JOAy~`7 zBObxjrU2YmpizSj1bn$YRH{It!lxobh_cu6dUX9|K%n)7_jjIcpAm%Kx9TNiw8({G z{v9APQxDRyW+EY0;ar7%5PY&!^)A}XH!kDp|6HEZ5&%LY=#qL`wbrED#j2034fmVH zpG-_|ZX-@({ejpqSqezmkKm0a{A6G}`( z7_@WXk|iV}lE0URCklMABz!(s#8M%t({p%dW8})p!_PBxs-cWAVyGG;V`2x!2){=G zegs(oXjmpWqRuOnpbMmBgqz+RBnfqYVJkAHIb`OtKc{!ug6~y?9Z#xte||m(OEEbb zN-ib>kqZxXU0$^$W5DHN&jOVWplp0gY2)w~ABKPS4V z*LyQwqzIn^s#`@m_$nC@HDc2j(WxFijXGpAA%WWL4~Q&pGl*>T{%pPY 
zRDFL!t9AIzaxCuXz7v%U@0ZS{uPdoI&0WE$n&*5Ig+)7uZOOQWHvuJ!j(UlYlc{G7zoEKQq z3;5p2@!(yi28q1j1qt#a)Q~*js3Whta@#9dm_hahU=LBDR_{XpF53a$%4tBQ@>XcA$omld1PPKV7 zPK#cN^sTp^M+;m(;E_|~+Sz|6xJI7OI+_zMC1+Y1JUw2n&Er${bS{WbQIUe%OD}$7 zfB}QOkBAzvK9bAs${tAFhNe1Ax;`89~U8eHRD#4*b~^^bEkHF+G3lr#bIUJo-I zEB!g4Qp=wUmzpzbB5&bQ{Anf8zSLxJ^GD7oQ zBxCbPs%=qX7_1gGhC~IFt?Mh9o9l%@$r&VjT8%DIN1*XyegT6RXGTZv_0kt(U_|sn z@oqS7Gtf2kd*HmKB8Cyca*^*7<)B9uQI(9S;4jGhZNz28)U(cK$ubDzQGB0k6fU8~ z;$|(-6R1XiNYok2WiGF$B*(6@C8v6Im3TeC{u0aC8B(?p!C?n0UmfljJ#hF?Y=;q1 zS@p5O2J1EDv{qK!xw2Pv$)#&t$J*GkeKaiafnu&t(i= z5^8K9Zx8ZJ(zz!JW(=7^GVQunW4I4-UVESgwylN?K29|*$VKo5z>NkDJpb4(3iIvd zUo^ca=a#FHYP!Z`3;pA$4P6!f;xfs8S%+-7+Fnuf0WC%?S)J@U=~b=n^Cw4AX;rWic>tj+;(qWW{`H^h=YB$5p40SketQ z0%#(H55mKW0$1Bvb+o=TW)eYyiOf>jv?kHD@G5+D?dcn&OxF=u?T8lo1{y7(p-2&* zf1{B7ix`Wlgqf1=!hyH`v^S%3NgGN?a)|gCOt!ePy=Q1;7|w{3a9`pN@qwY+dh_tH zCwLFxx6T{7hBZA^U1(sV$07Q-= zZ@W*EKVJv>(hpi#rz5cou~L zM4EV163ux} z{IvR**6elCYLw@2GI?yQ^ZGg9+ERX8npfSXtleqJ94OB)iUit)Tv_XXp!RjPl~>x4 zK#@W8ri%7t_@co9mYu+^w~aX4&|u4Gi0H_#81Y@nectj&;pHs`EKbuT&BQ1M60zum z1C~S_HU7l)bY>n{dSO{t)Vo7MWDPF2ovXS+57`iNbqXeGdn2K7S=NaSr zgAG2L@_3tF3-DGO4(zoqQPgqJD8hLpSXx%)f(t%&_dt9?E@@}H! 
z`O6k9Ru z|Hz!<`I4eiCBf8YUGZ$}Ot-1XQaUh<&QRqXo`XnDhK@!{+^3gED6Wh~p;JaRdQ@M2 z#F`%YW^1@#U?azVt7L(aQ~CorBf z|JT0r)i^vx1F7-j>|utu;MTrl!R`#Mfhj&w;LD3nY2ZUmRK0uHlKWjx z0u|85u%IwY|66cxKNxl^x1z4v1x zEu}6Dr8SB?B)?k>J6)0J8$Se%3`Y|UpT=n;5o+4aq1l5C?v{BG>~X`XGBD@y;F$ z%t(J`U=30RR zGfKwALasqN#Dv+kDA?4L!Fz_&Z$i9i)h`HV(LdDEEKtAdl!V>FAh-lfikY81VWgB6 z!pHCjBasMPDoez=dRxCed+m{S;{1 zupde;OrU-?VW^rJR=o{mRmd|)?{bQMWcgWJ&q_>N__M9`U0{C@G4QTnXNn4fH<={e#BBz8~AhENjBzMBix z=x{3qVbRdj)THnW8exg$GypVFAag5I>vgk;O>G`|ME#5|zmBo%WSV|Bh5gV+Or8Uk zJ>gJ!yNMo8jnNiHbCIs zCVVj?BI*C@W^6y|fxR$ezTvDOcT_Gx~QX_SG?$BWPi#BIXR5~6qr8*!OgLVR4y zaoN?E^;_eVQh}8bX4p=kp_S=8)#k;L>s>110zHT8A3~>Rh~b-~-eb6;#bmRXQWN?p zEU*kE!7g1$J>Dch&bhhKXq0Z2BCQ8UJ%l`jnrUS^c@xnKUe9NOZz2i4mtOehJ6*xf zMn1sA?LrJ)qXYao;<0-}kjrl#7L)|%p>Xq5s<~%v<$cDVA?D-_q*Qns!~4%b3Pt5r z0qjA@YZUyAB8AJux9$Rqh=72GW1_Qf_HI2rrZ_N#UM*#=Bx^Os@S^#%VF**-hurVX zAGDKIU<(J)bkTQx;uzeb+gjmOSWy%_QHqAykUitr{qmI z1l=j7wYrr>|68?7gL&0Jgal)o1SF6ko5628Wj#yA?bhnykY2{jZiW*UxJgb(5klS} z&l9Z@E*EfLk&o}E2amn;DHoB}lnjmSaaa6Um~XwMC67Lj%N5`pLQcyErg54QJ~)MX zLOTw|58n!nJ=Kte#EpUy$D@uR;gM%64+!;(LD-6f5v*As+|hhDv=R^V*j*)m42T6S zJBJLE7X2@v^9y;SLkVIUUQ?Z$$_xkoLDW-xZ1LV1W(crZi1*>BF_8mgd_#If6~+Sp zAtuZc{rJ%-;g5KLys2zry2l=@h??+?sBoC?TDI$dQpkVi-#O^8=YCr`iRvokUS$vj zv#+e|Lm;E2-m?fbxnYV?Qm2My6>0uY@qvzyEm`q!HY+cCG$-n&JSwbt*Rn(I?z==! 
zsKX7=uKX_C|LLv$!(T8YPVAXtVDE^9HRtQcbhZ=g4mA9-laf+nYXpVi5h9p{5kc6h#RZ?`&2!VtS5=A2d6qmOjQyMn4{%!k`hx zK%4B+L^fyR>BqeWf7$apw>X|gkm@IHtt*-^BQ0U>nfTQG-JiZQI%dAsK5^oMvJ#e$ zf_{$nhN#EBi8=DzZ-;u>W+9+5IXB@$zGK(>H(LJxUt1Mgn3PlDHMgrOSsH$kL{Gx% zCPUw?4P%#!b?T-3C2yogV;M0oB^vEoDTU1qSS!(t$RevDTehBpaY&j;MeYU$z_fd(l8RHD9*gnZS{eV|8zgCNO) z9VHVlW}Q@UiDiy^#G-IP51rTyqr_*Q)jKcV522qJ3N?MG6MVTYX6ALpvsEVd-{0&1 zU6adzOdu}(v_gBpz6J?JcU)sbo9aahul4tq(Wzi>Y1A|~FKIfKCGGhoRZ;jxVW0O9{{r>%{ZCfU7I zF5BlKLb$-ucN~GOXlYWYi_XkJd=10MjA%9^Yc5|k9YCLf#28@k;HNA=>A=o4Vz*v=a#{`J+_7At`G_GRB^q(1LCMqM=$P@ILT%cPkRg=<}MEPGfea{ zQQh>fbFPZ%Dw>A{L6PaIBxURW20@2B*b9&l;n-Dgaj$H2eKXPb^U>k?1d^qJa}D9^ zYn=m8PQZqu_FFWcOt;-o#&d2q1^Edx0jP1)aphL8{DO>b3 z-Etnheu)4^U}C%AqBA^;NT3DdcXgW~Nt23$CJ7CB4x_e+l=Cq+O#_2f$H8KyZuodE zAt|Po03j5vc}_5jKv4AjQ|LJnqN)ABIF!=q)wD5)Z!uI7fCPbj2#WHdBf~4XiAL5V znpgMD^!jgh|9J~vnRg)t)3KzFGP~UGEB&kW?&&cF@ZpRiB~}(OES$&3L{v)=MOq=* zFAs~u&OQH7qIk}>8%5;vLh+2a&PMF<6o!U$R`WGBvjL-xvq_N87zNxi0zc2r+3PmH zI1~~n?lL3IQ0quco!0@mm!vg@VChtX-Ij2IICH7#Z)+&|JkTPbqkd9COn8&kI5Z|C z{6i|@1wVqkPV>E;v%QzNA-98@mQ z<-%FQA+7kv1G_Ea?=7Uas@kFdu=D$Iu%9T(VD&{5a3+@&>F6H2VKV*TDW>i z2$db)V5wKW&|EfcJsnw!Phq0}B)7>)pC;vt9pzto|eJ<8`L9 zb^46Ip1yKl?TZ72UPyDh(vkO=XzRQGA~GiQE~>{((g(?p&OI48`Z$bCostxFkCYTk zeubm3$;@!LdVdtZ7V-Y~o&l+vxk*ry=$dg<@VoJ2&ipNBr}sZ6T~`+y&ySE)i+@hz z)iH9@TUTh=e{Zpwc7zC{$52ZsZor2Rz!3l+xQKJJ~KWh)h8rz&A34cB9H1BVVR#sGuH`}G6cYp}=dmF^jQ+@$Jt%vOVTJ`!&MK2zje=Re;wl4*t4m6Dxr()`Mr zvKskVe@xb`IvrKmM6x$Eu7W}{CIe(U)OcFP!Uf_mA;h=H!^6*`Ss|j>s!0V2bcbJ8 z*2A+*Hpo*6Pv9|RqLPVjqaXPkeF_oy9Kk12Gy3zv!SSl4w4oErXP|3DldorDn!KH8 zm~W2k?>P^V-os2Hj?w~i5D3*|@wd2}1XYUz4^sAtbD8&9RMnZIZwExap7WS&TR?*SVB-GP;E0cv^ne8GO8GJYZ{d4d&=xCt0)V zzn{w3G{3lR90{095;{d4@oxKpXKB{KX|}NOBxSc4`Uf$M#|S)V>J;9!uL&N8^h4~QWpnUPw@NmjAGAo{`{ME?_*p-^Qy@sZ_!CS z?O{Q<_f0e>ym3*JldfQ=E-{hAG~cIGP27k2gP*_&!6=GPCVE<(K%8kYQxqFl{dY++d|YGbaJ)Z z{Dl>(kdNdHxWT_PzkJsEf?5^RROQvNcgyiy_}Zu0bkzdjw42iX#u=a<#3~s%0@YJb z5&Lj_7hU{YV-v{kKLyrNSmC{>(FJGV$rBVpl3Id9+2@#4IJwqj7sgZS^#JH~Dby6R 
z&VLCO9ydkwxsG_!vO<7t#K7GiG$I7NVsYop1487KMpU&RU;^tVD#Y=>ac~=L=eXa0 z_ew?B9Eq#@&3(;$B3t^DX1Y}LT0WO~AUnNs?M#Z)W-k}!Qs9*?6?QLMXhSc$Zt{*Z z*eN<4ZGxM@k>gd!{~H`vA-vZEuh_@FPyX-8H4b;FzXj75(IX;OeNgJa*xTysPm$)7 z;QJM^9`twxUvc^VF74}!-;K)ami^0DB3|R=ef7Ot5Y1O;3j7wOdwQGqN+a<}L|7+i oJ9C5Lr++e<A49;Y)-p8zO;_`L{L9REX-Z^!E@^$Z{4)?iy+U#aaTW46YCz}*vv>|o!F$b zqncVb(WsAPOw{4%KY&hL*N>hb!{JAxU;)_b$}0^E3B_PP@u8QDbyo}o*wvt7EWJeUsm4i{Z5zDHOOG;9Mz_gaF7TiHle+*n!~ zf(CpK4*?ly1_1-Ug9LwYz#s6eLOE8AaLAb=n)@U4}RgB}sc z%F^1N3&cb6R|+oh{jZ-HNr?VR;$Xo;qAD#%BxGY}M8wX(#K1(ti%3L71hg|W=K3W3 z@!#3OPdp^14i2_ljEv6C&J5113^sNqjLe*zoQzB?j4Ukl;1u-sF4hiuAbM+i(tj25 zpK^qa><#S9Y#q#OtciY=tEX?{=)gll@~fi%`}|0}2J|If+H!t`6#|9bUj zRwa8QJ0Tk@aGegk|9@)!UGIOt{C7qm9?-xlAN5G_ zm%8AW2Zhd@%Qo2Cwzm5~C1&{*@)xtLm!`m%r;eBBRWGm8HojAXfh*ZP8`2psxEJTF z*GPFUy>hGpFHZw6&zoLe@sDsy*jD}LD=VB5euHCU@hxsQgQcn5hCh45Dow_$#l^&a zr03^TJwBRiQD=SL=)Ly}BU0bm+A4qvp2${Vyvmp(W2IHl8k+ecu-N%xoBVP%O@4}k z5DD?m*R43j3%_aA9N8z2L#HO;zHYHkpR$DKtXNKV8U{OPB-hv1$#{96GrVEX5Ywlv z<|V2yOUNXtb>ZIN{UA{DoF3aGEm><_7qhywvsa0;QUonx{(V>whik^d~QYMrMLK8x{4(v)wBKRZZU zV99*Bu*T5P@KZAlqtP}gBpT5_UGNDIcttEMtht}7FwK$ld<_c?G-PDL9ro>FFiUq` zFqjN5|5qmfL_TP!v_g%9KeL6~3zdrEx_f(y5>&7uF-H#nXM+KFP(KR4D#~9S%)HZR zxko0WWetH`@au}p{a@1n;{&0uFHlt&n?!M4J?}`S0^66R^u{MEi-eltX*J;^cn}(3 zPH|@7n+KIzdSk)Qe>52U+U~+Gw|1h`GFUt~Iz3&dp`;+Q=3OBNKPJ@~VC8L@`7W7m zT~|0nnh5Zl3k@$(tJ$Ar@I%FlW@Kc@Wf>TBW8!0U9r?vFy-~%o|E!6ysz#1*E8sm= zKvwFOuzxgDE`n$;jCg+a<_FLXdZ1`Y@S5T#rn-esyz5f7hXR(^5;=t8pB5UC z&sSn^2t}HpCa=%=?q=%EUtP9}?xLsT-+$-TnYfScO+S~K|KTnwCNErFYynEmnc)%r zI^{-6%5oiKxbwAJ_-i*U@`N%K>cfrLXV0h{4y^nM?V^j@CZ4xx)+0%5=$13Z;>vk4 z`1CpVKX=~XzlkRe-_5VWb5wk{x7bj{H|HUPi;hkjhRboy>G6T0+ueMzK1%0e(6~z6 z!C~&GZ*8_tx7zG*Bpa3x=WqJZ+YO5Nto^*_VetJxe06J1!SK1%Qsx~)T!|SpS++7` zaQ^~v4`)yGG=Oc`X* zQF5p(yX3Xb}I+V=9ERe zfq{WokB4jV@2-17j%-1_@Bc23H!y$}t&&B*6Q8ZtL7|hka1GD4j3;tXX=R2Ybumjd z_Y%TB=jDS8aRe><{Tv*1v77a1r`{IX=9rr4^Yszv6ZWY!q zrd><5r3kC?+s7ENd1-c) zDGvHNcSg%)=W`}M$*HO383=i$N*m7+T@KGE4WI6jd7Tf^+>S3eOBfN>933C5>daAh 
z2J+G{gMtCW_o<&74;L#WTVK}TbiN{lyf?ksrq`GRx+*g?7?xk|EiIII!xGmh z4@Nm2%!oTVg%i9BQg?B2*+Ns0<+*w*E+G{oo9Ce%(C4!ACI%YrO@Ccg8?AjR<=>au zn)qr4B3k$L*4EA*2lD-=97Y0O@B(9eqO(c5j9LDK7fXby_8^Qy*J4Swyjk&@ z8F@T7X^YJpeofAPY4_f3G9S0bFg~{rtf_7clB{d(idjz)WC2u42RRQaU?FBxL*kd3z*z>x?q(Typh;2!iTv zpW8`G*$_Wottg?JV4CYb0%w&DpUd8sN;TYD;n@4D14&KI6wCQ~ac$5U`4MUkYwR;N z+xbT=j^oUdwDbCd?3!i(knL6X)TTL8xOCoxWS^)J_*<#p0s`DZR zE{D~pWy6!b<{54K8Zt79{8s57M^%-0jz6&)Ec) z3}Y2&l$CDMoK&l^nsCCTYqz*I;)`mT?GeT(XoPuy_5Ng$hKkeSqFIa9HkJnGQ6Vy| zO6epCruFfP7k{v-xG(*eE#@EkJ4kP^Z7Ver22-@CG$J^Us2 z#`~o?#y1$Eu7re?NYZv{^_yRhDwp7)&(Ov>hd{zhS^ zY^I*t5y%G+!DQ6dq#%cBpiiz7BgH5~2yoS?H6^z9Odp`F)c`Z-m%diq_VwCTh>fr)66NJ?uzpbR4wWe+Eb9=078WtP9*O|(|-#y zw3F^7;08p-acGs{b~>bw&i5}9|F#@$G!!?`U|y6ncf4(l=lF3b-Fd}BCpL=eX4$@u z|1pK#D#`O9-zV6sq25-|i3e?7JVPL>+pbc-3J?eTDA$$1r{FsVG|vkoYmd`#nV zRCoDmNbsk=%x8v~uePQ)NpVV=M^z*-RO5bghIm6ec-XOQ7Z_lY4`NgO;TEC=dqSIY zWeipdb=Poqe0(g@EuZCDJ5Z^$7BdWX4mZav3NBCWwN?uhgzl$NyxN7`F=Ko$ zf^J*_msqyfY7a95bY(dl-wbndjlCG|e?=2^PeD-(<$7d0mf<0JnN%rn}%cz&X zxOFxysz42ls+?lylEv&#P*rm}{2`zM9fi=jE}d;Ny>psSnB9A4g@JB)cfNIRpEJKa zQ`N^jXm90rJ#EJpKFyvTdQjTZ`*|-tSCU*Zcp+ZfE_fA`Hs5MiQ(v!oG-c9)8ER1F zA?K!F07X11^HgouO$%eeL6Tv=lIFI(YVYf|`sjb2lpXblox^X03OfE04nOqXz0 z&9XkFvf{o`k2%OrLk+`R?rQ^3b1Eni10U6#X!Z3ph$e_iIFN46n``)HZ%AWr*O~C3 zh(FLQ6?W6zjF5Az988z|bOyY44@9%j#BO#ePhbxK%3pu;n4gXq)>vCryZYgW zvfI8)=gBBrpsMR-Kj8~a0MDmcLic~bPq?ZK4z5XDH^FE&uh3`B;c}te7pzLtI zLDGhB=|GTa|#cEZ`c|M|3eT0M#^n!pXy4rY5)eu0&mSH^P}jr6`d?H z8whvn$~-h1>|*h}9zWm1LAnKpX*td17N+-bS-%5TFNTRn;h=*Qu|q<5>U(}_R(ZDF zP7K}Ob!*@f!q6{##re4Q0A} zwCBQxYY36~`QcIsNUi3|CSZyc-NUEO*rg6N$;xkKG_9mZdm~s0TU)ghSQxr96_))G z_wD1K=WL&Hl2tl=r5q3};B=v4O*a4adM_jfp4-DUzJ%#)7TWF`pxYcV%XJ617_L^ZwGl9i%taqQOCL$e-a`qSK~LWX-Q9GMu9joA;(RvxD7g zzWCx&PGg8jjvw^dq@bW?bIycv-xW~r(c;>ac8#Ziefqt{;NdcAxZ4@1U$f%S-_cmqyy+dhGxjaKZU@dAa18jN`jlXE7twrZ=JG#dM5L0nFWS zd%9jYiB60v=-~$mt=ZpgbzQ~D$M`vb+_LjNS5HHiwKSS@lbv@e=dT|2{-8ARyLsnd zIBliQ4OK@PfHRTQu$$7i(4f>T=+1_Yj%5GzcrkVrmwF@qyA2FBY9$c(yM4cjh=iyZ 
zLkMpB7}2ac>%y3yBvZP5n_7S{$P#8Zk$A~d8Hlf|j-7;Q0)EIpbgCs!w%?p_#Fl9O z`iZ!l1=hub8MM0Sx$psF^Inrv$3s`6)-v|p%JUP;=k=-& zG3h&_O1_igES9@Q_jm*xAip$Pp%i);Xi@xI{+NXG0mam}R`n01+?#R9eY^HSl=a(# zyve*;rOoShF6Y60)s(8=Q9e(m3?=ftaL~_R9xbT!63&UQjLQ3T>bN~2w}g#yw(v4W z%v%g>SP-sR-Og^v_Sb9l4c5$PZkZ0Aw7n3oxL=g;-n^$!P|SE*=nb2?BbGC!k}p(s zv2I|o^;Vg*+OVFg>sXCd(V6N)pTycI`HI48+s3<+)4T*ek&EKz4K969Kj3fo3NlsE zar}ZfoL1ylwb93*-9v;68ijCq@k1E`DP@*OKHrJ96`*Ob+1uGWv_6@)-ksNevRK;2 z(&zN8PyV12G0*-Y@vFJ(RrsCy9S_RJ)QgjpRs34xXPbu?a<~9m8 zJCZNmN}sLEL<%7ez^Bq5+Dfy>+m_?|N51UIN0D{i#HoA>&2q2w=^7C6P;78vUPk3Q z{W$#k-VJ)6t__|#wRF7GA>#;d9+OS$$ z)rdc|@_{*sF{WSStEq`;xs*xHS;?(VWG9%O6e`38yy;8s=hjjciS5Wb2Vml3(j+v%0u1@VTxwQ-O6?~e>Il**;i z-W11GL$?*SA06}Lo8w4=D34~>9%^fndIO)x{|<-v6biO$E8z?}#R0teMWVh|`TT94 z1?!NEZ9tG5_sBnBSpPOA0JSh*dz24FBTveYL@~^;e6@dql}#Z7ByX3J^Wm)B@5fM| zLg{wQ%oRnI8g6+dbsZ_lqQci03*t}=2&dRi!C&6}t?QdGyFM6p-Eg#2bgRjbC=;uw z2#P#VZ_5jp6DoRsvuF6Bl+yVt_Mv#nX-x5`wUq#-%0f`YWZv}t?r}o!2_oPAIC^;_ zLCx~y`_>EfWqpZ;z--pM42WhLqVV}j+P>4Uu3AO$d&?owAbi&KF}hQ!1SP#gmVTTY z?UBi0!`y1BymFit|HHI0KoAa^1tZ*kyw9`7{;Hg}rzJ)LeM3;FjCOEp`b^H^_-vle46NZF@glqznn%yl&crLq}0HY_KR%v4`&szx@j)RX;2aE)h@iCU+ zT!(A@Jv$NP_!}J9FZ;O?`+n~xJF>E}$tk#$Jw#FE*+?^%U$jSD75$nNR8^_w=G5Mn znxEq7&ipAf85HQ=q;R}|N-#S>&caejv5~LrQO6qKc7-LIKa!Y;J(bOR2>a)P24mka z{O#~+2nfMw_~_{T;=?!$FqW6?i1C?JUfh*cl~mfXgoO6de~1wTf}#SH?PYsXqVg5O z?5>Ye4IC^P?C)~%m-$Uh;?2e?QVi%QY1hYwE`rj7Yof8l%>iLp7Scuh zZ7MSwQQPOS&&mrIoI2ODNCt!!%O2gS6*~??LPD;7>~XyRI8QFolE|#y107O6SNIso z@=6&4Ul7L9WVnci3uMXuw{G%(L2t+5ny@Zo_gOkat6NOL3p6!?MINQ6<*W|qcRl)SJ0A>{}1pRSfW-C6a#DCk4uF>Qy?ThQ@|w%3w=;rA@y5IZeN`%LIH(b1GBY#6C*3KD-<@+_j{hfX|8uam1*G zh)8jyB{A^Xf5T{IG_}%luFVq~`D4H@RF%u|U{KcH6Y=9`-gx;sYkbaTA-L35>r}2v z`xI_}J@sv`GFGEKEdIa&|3Xkt&Q3Zf#S zcS>dU>lb5uV|UTAI|_N8`wCpn!R5%6Vm&Q1WyFkwVlpz+HmI@DWvVScOuHKJ#M%PQ zKHiFI0gdCYs&h$RcVg^wJ>N#RW?N&mqVPT1c60~WenA<9W|m^7>fNeI>my*ywrP@N zzyb=}N+B^qQBzY-G&z^CxK*M$l#vu`HPzb9>K)N$(L^{p>`jOyv0JS(hw*V7%#<|%L3)sH#T^_Z%F^Pw)3rFh_q9u+AbAn43!)RAJQGaTMH3g=}Y4W 
z!M_z-^4aXU7L`p?$rfE9`$p+Sg*#n_f!D+ugIZV;^Mfb!)q3gkxn@Kpd__EEb`$Vp zv+s_)h#Q~Uf`6m5#wq?_t*Qw1<->~6km1g#RwE~%cAthBIZG*{Gj>nM$F+*z1=&~W zr}4y?S*d z>*xoqTI(g6pLt`pY5|Pznj8;kEM`h$!T9=J^rA%)n1~yY7sV%UdwC9=sct1R9nWg0 zwpo#>wVWdvif=OBlH~kFmi!kerWEXrS{7HPnwgADY|(QbTd8j};39GfyQ~7Xm}m=h z5KMhQdU{s)p&V$g=lL{c@o}_qpamkR+9p460Wa>% zao$t>-D6e@w=-)IjR7RglK3HKOPK6I%rdD1V)ab=%{A7z@6<DnhTvipQrL5^`h5 zPXwO=VRI|4_NQV?wIa|!=yw(Xt1}5@lY^gmb*=s?^~xq1jSh*;7o$yY%L*@us%-wd z=Yark)yvDD=tf(EvG?{zj-+qI(f5%~TYbHag{%4ZkqoWy#agyTLL4!DE1FnIRn<;z z4(GaD)wi*XzUUvlvVt*g9a6+;M^MhV7K+b&j>ocIkKGf<=cy=<4Aj$Iqz(}?sC}E{ z1beQf2$;o%Cep;NB1pD)cJJ%%?hhnvrW7S`$+A65BE+rKs;>8$qwm}$Ex8fqIvH-@F&bM*PhUTDO?X$EdKZSuA zR~p7zu8I231}?jqqMKnHD>2S1E*dVB_}K@GEpBz19P6&Xj0&v)GliYe)S7-@!rBf$ z##kP=>)nSUWrp~<6%WcjT3M3n%prL#SeIqD8+^05%Gg;;R0+^x!91t%3)O*?j2C!= z*@T`tkr9mniCWE;%YO{V$rYg>bDTO>(Q%VrPPz=Z-1{k-m7Q&rn8f;Ua}xjR3b!A@ zL)KwZ4t(TNSIQ}!ndO2)iR zmFo#@1rmD3OckjdJRsvt##fE;Y6R;kDDr9K(>wO<9{9aNj@FXUeY*T+F_O1hyUcI=j#7n)_lKOT{T1H0sf>w-@%(Jl=v8xa>sO5B5)yNg_vu{5Kkb$9LU?_2_j%oP=*@z7j>?CUaw2>H6? z+H+Ivo!)esJ+way&8#lxaC%pu3$UKJA5e z;KDC#5skKgQc#Gc{G{Q-biEcxIKX(ZJ>Z8D8gCGBL`o=~MwdK^S%peks?hl(*=}|X z-!X0MyR=e8FPAAZocVbJn8#t{O=s|s;29xGNSP~3#|&7Vg7L8&?E980oy^W+G)O7Y zUO51k3!`&ULa`a^(hZ7CU^SB9>+W;`ztI_^%YBbxUgs%mco)i4hiJ^a=jR6-a_Rh$;64#OmJS7u61BKlBR}9)KHZXVK@dIv?^Sob1B7;j3!Gdt# z-d0#AYz=Q~71vioUENAxxTq&2zW@4YL5hV(=7UlZ$cHWrQg;J~}&#)nfIz-Mb3k7d!kQU-dJ<$KKA)k?)6$K|Ndr z0u+8N1X!P|6FCUxFEqPUrMMXL^_2LCYj5NF?%A}(!Y5DTuKEM>S!roCW?YUNb_Rb@ zIbW!T881D$0+1`M7TA}JgX|L%6C1I*CX8v#DE0L+fsbzUltv!j%GBgn*uYHVd>davjzj=ZL*XT>6HTRCkSahPbzCv^n%FBfBiIyl>xn zQpRssi!L~c7?616H%qfSD3}S1b8ZG&VZ7Hs(z@zcVgn@+t=>?kOT{vCtYm%jDD5r? 
zByi{FqKzmf7&Le#IJqjx;0QWhU5+GM9gXk34~sx%@KXIK`K$SGxv} zF6RSzM#*TOSMOj#WP{>8O;R!=QeQGwZZrM8oeoOs_7)WQo)o`(vWNs|x&|i@!tXc< z`K#5K$EL8JYNqa()u7y!eL)(0yVC4}L=PoU&id-iMP zJ4^ajyV^9$8hslQrWm|jEY^miI_%0VW->eNitlG&gQSt(J?FwytT^t2^&b_gT_h$m zGY65j)k3Y7@yPwBl|T-R4eE0q1 zdxTyo8>bF9ockZi-(zrf(~{kTi7fSY!DX{_Y3I!ghjMWQIRHbbw6aksp;sH7TC=JP z#1pMT?1%F>v|?RKS`^Azm%)R5n5&Oss$jRLXWqLU^sSOQ$y%D{id9*Uk;l8wqj(Kg zizV|-wA9pUfSiTY)SP8rIKR3$<{6X>tYI(U^0^ z1Zwr;GEb^qtVctPZHgoluSC^BuR4bERLYj;s}LBG-~&3U5Y{nXFDn9!Lq$`ly{`7z z29aoV!g(8;+@G#E)K7hWyn1um$vR!G*ChnQ8N>}JCvr%1RW8#usx%cCyeFM+cCn|A zR2%w2W!zkfS>&=B!DgwtUQ%N$Juoe?xl}`|HO9Qy<>?vZ1Go+Kfvm-L{rb)rpCgq1 zUU^De(&B!|=C^|~{5vRsZYcj3c1|0G_cxjdnrjn*UoGK4BQK7vrD!C|MhTxUJDi#7 z&|W=a3LacH55g}W7X^MhdyAxm3np5!p=;fe@l5*|YDTCc1^ib{wN~AVSQT%GN!k~! zqP~vb482NVT^L%0H5jJJ5QO>6zSAn`ub#WkwPf3MWv3X=V0XgfJmff|O}Z3uOpFjr z4mnU~z4RUEH{mFMS}d4SP-Cu<#@yV}VSeFxr{;#PH?-_pV^!?=nPT`^HK5+ff zlBWs>CS-RPtB>5G>E?tlql}awW@=q>h-HSzp<|)Xd{zdoR~bz6F*fdcaRt27cU$wT z97#xcG%x%xJUI7xyvL_zss6~_x{_a7RL;dRugggg9tdbFHm(yTh+YYbs#j4|oZyE~ zRkl~ktFgRJW%TAOnE|DHn8gQfbuK6j-GCarw2iLkIjeff|l;pN2&wy&-!2WHs~>9}z2$nJ)-?pbIfV#wHK_uu)JjpjZqGz(o=SuvxIP6F^#afblRLN( zKW8vBnwD4d6_ZV4)Ls#vL=GAO8LI>D6?%YnMgS2&7edFlbcQ_)JBPM#KQsyrJSfP^h|SV0 z`TZXK0lC(z!473!%aVudg*l^h?5KR1R6lLCf&w$6K_<$mD01I$&YP=;vbN`midsMf zV#m~3t+0t-B)ok8CH754EPAcckTSWT)z7|d92uf@wJ%aRey72_Nn1Y}GU#=ABe5I4mntfOkUp4}-CwFO?O!wq5r%f!%$bJSs-( zu0*c_@%>d0xH? 
zD^UUuDYp+x4nL*mHB9f$qY2Y2$@K*c>#viF$c140TlWiTTi6etZkDHP`kRhwxeW`@ z0GWq^UjkGxfw8Ly@2}%Ds%?D!;43g)-=8iKiEB3}NB*{C#%I2)&C|aWMPRY7;3yEl%Ufxg=kQj}mFc*4 zVJr8Yj+_SJ%d|cw4C(=`_f}%-Y?0`C0usuoWv*S*F#b?0R#Y z>${##3d$x~o(P241AQZc;lYB+zS*Bruv(*)171u%84SfhL!v+)JQR7U37oohF9{|d z#QX?63h{cX`EjJoV-okBcT($b>y%#iC3=Z)kberc1%!}a(a{b4cI($;D68)3Y0kFs<~Q#Dr;Ux1XDO@-w?FL zf>JkA-mN1nauGjc38jOj3I$&hu_BX5RY0!V6=NigM`KjZXBW)D7F7|$bFYghp3A?h zU1PJDR&?X(fJQbxK0s&mTAgCoZHMSscrG>`^!@<5c2xaSW{NOJpe0>xJ$(%eYUV7L zdFY2scOT0u)Yk-`wVDWWo9iTen`~BEwOSdkAcTPoURLwfX7}`wAM^bK-|2_yjdlXC z2z++H*c21KB|*o7-|ImTa(Q}ItMdK+NgHQ9aY*y&*SCJWz%g%zpdEVnC3khW-LERF_5W)qzaP{={9dPQ`ipAyOPXl+k z^*?u944Lz|;$`=;2@E^QKNfY{ZiT@{RDE@dt*s5|`)vs2#$n2pY=#q(H|GyNf{Jl$ zh~4k^g6~KC3DDyWT+Rp18k&>0JBP&7Tb}ZVj@* zsb~W2gVak=ej2klyAf;_%M(#RzH64z6-Lgq*U`BeOgw{JE)Qct7)Abfk_! z+99bWjG9=~LJ}|_uKUJ@u7;lWN5;|UYg9ri3F@*t^AOc~EEqKR#Y7GtxU8tQSF!}| zi`d;P&c2C7K!C_}jH?{yiZf*+=%B79B@CB+@P(mSF7BmHst%XAj0lhCZW2axkwWO{s= ze@HZz;1;b<1C1X9RNBUo57-G_MR3|K@p4+j7o-cXxh!DCmdbH56X_-^P_Cp6wVyQ%+g#>>YK zrd;_u$T3n>iMNszfLXGS?oCU!Ocj}$+Ea(F{Iq6>@^A9%zeq7CBJVn6l8=f?$m#_A zGeuU)ZZ$`0$cI=2Qq6DP|BKS1GMsAXY-jkBX5^QqfLNmeQ&?ao%bX>^e9jqBb;?%t zEMr|kQIQPx?k-vBMb z+xZWT2hR+N%Tq2WmC*BNNB+Zm!3cX3s6wt882`f@{u*K*IrQ>6({Vwt4fc0`hyP&h zYacUQs*Lq@>a0_XRU23q=XCf@X|M4<@O$2$xfQ6C zrRj_y<#=tC{49GFpto)i&Z*w|a7|mWaMb9y;>jZ^^I`4QuEt@pq^^I6k@tkfhR9%} zvK_~XnVnrZLgB_*tH@!D@qXpY`Bglt(RtvB%ARNW*>BtZuQfXd6M#ZUaa&(yl9sr7 z*4pHLTxxKmD+esMWKY&!<$YOXwHq9u9N>f=#SD5S%X3$see7hlAlBq$g}1Zj3dnrR zlRV<3#2EP;TQ@pre0vvaW_QG<65oV z^|sUA)yM7RX=7`bpe6T#xy5jMSWSJH3rs~onH!3jq+QQBa9-b{8})_bV<l@EFLs{jyci#aJa3P2E57Q@T3`2Cl4j6m?Q?S);o4%@0h_s3nh+bG=bigQ)l zwAs$clXw7f%WxDK7nl07+YY^2{VKTreBAPNsG7?z57Lk!563MFpI2Oe{32n{%{rVWaLdG{%v!{D!Yk^Wynn@wrFbZN zjo=wcr<}$Rz*WN;c%Q+Vgi`Lqsf1z!ubUGowfo%~?37>(CjG2}lzB@BojBNKfzSDA z8q>h4)&ye4woBiu=nYWkPg(QueCOpw&T#q&dX4HIGu%$Qt71sgHnOflu2=iigsm^7 ztzhSGbbO6jU_}S8>$dImu__6__c*%a-9ZCHT!QE{bDw6{)U>3^u4Xk~ar5f*EckB7xrCDP>}P*O7J%k1M(^w^4m~GYf@a+K3t@?+gW)Q+xh)Xd{BR_=K3RmfBH)+3A-Dbi>nV 
zlMxt#hpEUk9=-o$jm<6dE50;>=R8sfBq1d=xu@g4XXLWK7d%ldj_z{m^RRWrK{2p& zvzXVS1qeZMTB=bQ`W542Hh8gq&3!!4iG-P5N=Hnrz0%hL4qVx5=6gKN)?_ivf4o_- z>tH8yhI)NpcbT4RcR^n_7=Vlmj)@`d=|=ACygBVc`SvSn!prut_n>T-LiON19T;CA zM*-fJH(y{^ryFMk(AH!s}g{@KYelr$KDDY zH{5u-C={fN;!lYX&9?)YucyONNv$KnQ9gHI;GbnZAgZxhotj$xy=IejDkf7r{eudx zT&FZR%4X5!rsaI!qU?a#P05P!$~X-~?1VYrDl-K={qp>fZlGOHJF_%9oN(5Vlbbs* zF6um^mds)Ey=AcV@w`arD~`jni=F-o`ev&0O1y?mlTc+<6}~-|KEbxFgH$k8@NkWJ zu9%%fC7pV;DuKr#g4VJmV_Vh^Wp6J!E{K#^+Jmh&yKj*MT;%w=f`r}IP#ljcI0>rXEhQ+Ku)jpnX;5{{KmG$l&IWNbKiV_sSey_2Z zHuuD01k&Qa;nS=;JE~ib+y?s|{6tTS0)tkXqrj^4*5myMFk-#u(gcqE@oflO`BGGd zpLux!kIylNLArAf0>PP-`{6+Yp(QRCwXEYgBUsmh*>bmr$n=%@DTZ(iN$njESS=oa#pW4Wl$Yf@5y+l?XHHz zE_S-xg|f#&yx#74?^1frgbfqKWMuc+9tFa{_gNBAC z+G?S0NRaJKZ)IiW=Dx+pT?H7!z`B`L$$)-v81+@n_d(j@VNa8UP3ty`VQQrR{4W5W zGn|}WRz5x*h8D{N>#oC-R=)0#!3JGt187+RczK~IcONY0yK=q8!?Cu6JxzaimMeuP zA2}tJ;E#M`Pa+{=)12;N6poNm!$6J3OC7ZeyWH!=^DiS{2jrq1g5EQE^3`yB=VUsa zR>8mGUc4@S91dp%Jf9p}@urTjp6GJFPe;Ft$$yw0FOw`w<1yUN`#HI|l9SB#Qnzw- zlaFij!x!`ZjbA?ZLCBsu)3eQ;oeu|g8pcKQ=Jxg=^v20gv(y)`=1fcDs76t4^Ypme zUYN6+pw8p`>R`NCFYgMLQ2%^IIq-j^2zWDNzs}7@$pv<~K6Ue12?A608u4I1ES3=z z%2CZz_fC6c0u6u`bKm&|QjD%+skfG)VU9a+3;jvMw_(+cK-FuAz|*!_V|nQLsjk}j z@$V<=Q{i@PI-dqzdew6Nc&{9FI#-R*qs4}w<8f0K(;vb|I^yUxhBLp;eWUPVRNwqC zHZ=4tH1suRu?Kfw%~F#ycX&t)hn%ug+y)TrmuQtt!^zf>S0Pr117)5pQgzkH8L-Q8 zw?VD$pLBQUK=07}u>9cLQ$V|*O7nJ=pc_9(yF{Z76I9T#%FKG20(p;h2GTuMw=)8B zJI?pkFEeLLKE-9Kwl03 zE9FLLPqrq@pc>1mPuGWY+1SY}tC^GK@59xXS+QF*~)4)8{(Z<(^weUFSA zd!sLE)5CRqveKs0elw{cJ3dC-1f%QIr1YFkac`P}H&oB_|A6i1=|G(GZt zVTu=Cjk|;N*3_w*%O@YMC6v*z7z^#&uBS5-3v8 z*DC2XeQOI4VF5BVI`GV&?YS#DyPQ0sG-G}#k%5;$u}Qx=;K&!bi8aq$88&KtHCWVb ze(KPvENJQI*YPl14SBWUR}%F+3-%pT(ySGp>2lg1N!GesyF#rS-Eq3^J z%%+O@p1&w9lLLf`eU>Y385Q+OyTzQh!`{^f=c?-aW~RJC9;|d`SXi1RS^{qZ>g&|s zm#5(bPUH#>Iza6AeMG>*x_I@;wV2 z1v|R7Rna-4@I2nMy;Q3tB!=96^eUCtxDVA0=HS5bW7-}mzG(k>@u~xZz(=d;=Enk4 z_Mv<@eaZJXUHvHtB2!IjdSbuTEnYX8C55cw4Hx6^0~lvi3)cDLvz-#R@ zr^yRMF4R~RZJ-kO#w;)AG^-67%%*)2;l$hl=xTdi8MS=rvn;yE8*boxAp!M}>aXj{ 
zhrm+#iZ!8UNzKjf2efba#g_Y6X@tbbfO0ptwiC&WT8Xnlo31dIv|;=|zRqS;e9Z{z zZWn^D@VUP{E;;DAW`s$(o=Nj~hCNCR$hg)GC0Yxj+#Up|5-t>Kdu)l|Q$56dlZ}>D z)HSyx_JbY7(l}{!KKTA{Q0f_^RMv7{8D8d)bZ*%ZCt$SFV5P1Oqc`s41KHjPtOw=3uSN6zI z1g`}dEh7;}!xJR1lF!S>zkGONJ{nXYQ77H{5scerS5gU}U`hMxNL@sZq7_XOY9~Y6f7a&@m^d?M#Gz8<#&c5pR~RQFSP#!ui8m8r2doWR1BfbiPOG6%=BDF&=g^L<^#DojkbH+=>h}K0UdfZUHg50!!P zA(;=4+m}Z0OHF|JUd`>eI(1szDRViV2(^?3nVM`>zMj76$Vra^ z2s)}P&1F>CVK%S6Hgqt}CW7w~n?_>fvp>H@c{|Z+)7}-T9%HSRTeod$9xrFY0KC|FTa9#ra0#qhSz0=sRtulp`~FhO4282{1?Y!6!=tAWaIc^_TOP zvBG3#HXCYc3|*F6&-6+1pMbqZ3aF=-An}e*65pU*LGA*~Hqz>k#@{h%+Z@mCfnQrY z1{my7>8q_IdSEJ%z6ZKprGel!{q=3U(^j`qo-o32RIp7VCtkGrel;h0fKNJ zOC)(ul^wR(Y3mvCNqX;Gx>-?^)HQU6dP>{D8?Vgyf?G^m5q$2lIkPpcfwIIoL41l? z40P-j+PAI}BJjq9TaT5K-q+aG=F-KQQ{d5&!G0rbR zc$v`vj$4Gsz>4r+VfJ&ROaN_xf%5|t~?AVjZKgk;ZM1hur9w9N*$M|ep+oeyn zMYOc}LVo7uT;g;f{?KXmWYBI$&?T39-hhQ&k^Em#Of2_5h%oL3ZFv8r z5dX$yW1f4U#)XoqrU3p~hkt>tGA-GZQvS9WGo1fi-v2`WJYULYte>lJynM{}->3_J z6pS^91V>6!#QiP)xgh~fYtvWXzPWKkE*0p7~saQ%=3Q@B1gE;JN<%b zjY<MArqjRgcU!=vvx@E^Iz{po&Cutd;=#1N6N|3P>W3Z=p#!DYOd zo>p!3|1F>QSQ7A1?3kDrkTMp=ztCF~_)At4zo7gpVKQEIbX7$~#pgJ}00u}4>#wIg zB;LLAqN1h6`AxyOlJ(dFG)J=jiDi$QASjL~QX}Ar{EPE+CHyEDU{0C&m*jgiBh}m= z#}kbG(MX{Inv7~JDIWd&4RADtoStL0BZ3XjJDfmdo0ALk7YXW|?W8i;>ui=sc&nk7l(%uER%f0o~#rPkh zgEWtmfg0z++3?@IdVfw#ls_~Cigydm|MH~N(0Ej1ot`Te%^bh}?`V635CPUJ^wpvd z3`s|S^dU7bZ(V=>+JSw&4%kM7a8a8dh>J^og*2u>IxF|z=RX``ctomduOYI*r#BN@ zqObBTtQ_~n3p9aN7BnubFdG{ist7LTZz)$-jz4>PBIf5c14Bb&eSDDILd+JQTI9lW zn7zQm>s@SgS)h@Zle{K%Fo!TU_3Z7d&da*j;L^%J2@k}-OkjG! 
zMgD%R+5jHHNDASjFAx2?&DBTsTZjD`_uNIKWltl3NIyDW+N+ILA5ndGclSgJZdx=2pdwv>A3AJ=whs95nfu5!+RJv^lu`z^9=b^UX_jZszUx5N^G zzpo9RpM`{Al|H_xam=q)kQK|a7~VNCZSIz25lZ+A` zF;pNfVkjRW#RPvj?(k4GIyV<`R(f{2j0i?eu)D#U^|v>A&5GiMQgyUEH!B7OEDEwP zFv%`K;n@vqH5sI=G2O&!S;oEDhKTke?0l1%Uk6VxUq}7eRE1OBPyXLFKpuuJl$u?k@V@`rKe89DXf84u4!rOVcNf*uW z72mjDl{addC`5&q&Vs07B5zs8-6eqk` zj}P$mYqm{qnGN}_r;CJ?Datc>!eTnWL7ApIN8W|C2JV{5@Q zEPeD1xyZmh1N;gIV;MZY7KRLK8>G#83g{NC5w>ZH5m{O4+rA}!n~WWgX;Bep)(hwc zSKykrAjnuf6X4YF5%y^3fIxur=eR~rM1moCgpdjh^Fjo?({`4sSAb?#$NQE=wAC7a zYp<1;*8x1HBBCh*h={$DXA!mY)$V?MbP6$M{zXUA6J|^SDMVjim@{ z|EJ-V;t?g$)`i%|P$HWX>W@}rz#8fstLCc58XyWRGFU+k{t#SrM~FI>VAJ4cp!lik zk^Waxyc|`>cUUdoo&}4PKk96IKD9tNi})D9LYS}zJR68iJy_^xtCWU~W7Q1Lg)?a= zKw|LN9B;5EA}%g_kFzbnmfQsY>E(#B<0qeFy%P(Lenf(VuZ_mH)*-(33?hZ^!{21m zE3Y9BG06LM)=wzJC7Mm{9fp-IcQ>#&h4d^l@fQh)u=!aF&4s?@_HZ;pdfLx#ZuaIK zau>nnu?2~S{DDUWnAVJPbV z*89R}(0YggD{=`9jV6_OX%e(a5Nw3T`wD2h+rqkgVQ8!vh)j+LIZpFX>nqt5*J4>e zQ?BAjuQ#Af`>M%cBMa2b>eeSjc+??8Y%mh!?jQYD!kQ)wnd5qMUYP~SA-{IbQWjD1 zsS_qMngrHhh^n#d-Y&s>1SvSagH_e}rSz6df`f5df^~bX-F^h><*;0)e}-?}F)M;! 
zjy)N3uh3EfN*qi7#{hv`oQ!mdWu9vdFV)wVT(%)=gtFt7(yUU{S`oq)&_PUnhb%8rvD|N z3%?D~hhfckJ&)woRINX5g-Pc>a2-+}^2+)XD%>4+vx?Vs1_OPi< zb|B8ff=0G{*gqG`;`L_P4}gRmB3N^6mVK8Ea)6D1UCC6xF~Rc#rrYuB8kDzM0yj z0p$mSkmb+~IcQtTv7rg#=9}L3?$p|3+r!Z_-hZ|mDfrPOhu)g!Mv;=UViumNYA(?V z8Pl`545`JkEhZh7qU27#T~6R0Cy|3)#(?Sz3Puq`rLcUcDz|}WaMRl>4-X&kCw6_3 z1_Om6+j*Mu9=`}*QMOT@V9i_%zZ;eq&o5Yyj^7f=yi6qvdC+YQhZ9%A#fV>pV|lo* zA+8|awoHYPV=WU*$-ZPMe>p0=?U7j4+c$!ykbr8OqL7VkYK3XR>gOVsaXkJkfan$C z1_hu$CeWZC-2QxN(#Bd)6LSb}HrTy&*b<676zoLfYRcE!nvl4KvXQ`QMf()?zDUyh zl+{w#@{XV7tdi>3_2&@lW)vZ1^wM>xQ&1_VX@5oGiTy{3|c0TYAIF;iF1 zADI%*Yx7q3^^~}3CIt{LR+*VxC2u9qU=^VE*c7kE*fR*Ex~=~>q>~Eodyn^N=-0yB z?s;*-&H2o?gz)9e3J|Oe?L(tbJx-s~W(CawPV7J0gl7=U2?h+xx2SkjJyWWr{RiI0hlMA?Y)0ezrLBKb81~X=BVz zDHyNyS6#n8bJWrc6K-hHe4A#)Re`_um{cqUk+yv}{lc3PiNVUlQ_*QF#qyX_z=E{F`t z%GE)h&V9ucjs~RngsZUH+Lj4$AVR8g_PxQB|9oMhF5ubvv;GyB z_nFjl^^l^;fN15``L_&wu!##+laCi&obHdyj&!99;9-HC}?-yy^WG zd!JHI>BeX&9$YGiSKn;x=ECi- zd9|Ce&&M6)FK-X%|Fk%N<1K%vXb(Z8t8YuZIx-3#ivGj?9lIFU5y*f8% zlEr`e!Tz!~-Xg_(5d{>ye6zU!KkTy)JP4o9$dC(ObA2{v`KOZjwPuWyAsn~7KWfN- z%Wtgm50l|v!~5g;<2$?yG(D`#A8$;|>HkMbZxHMuItz!)8MBpUH_5&`?I}Xe&dv$d z5d(jkeh+9V)11m>r$HOl=e74nPpx;m=psk6BrE!L;^}u1MOR%(S9i_KPxzC%C0xJw z+?KHAbg}7n|GQ3P^Jpw6$~B>AbH8lbWqeP8LjCm{qL)2&F3CM@rJ?IHXJeUi?=p3a zT-~(X674M9nomE7td`Q|^x&&Z#h>xfZttcH(SO67C=5fr$?09S4yEckTtsx+F;SPt>qw7ZxuU`YlV(Ca}lph_KSIF^Sn9p6?U(Tsvm^<-g4) zGCWTuQN2pOeCchLNk4FJ>|kJCrO>>055j!?LMxJGfZgjR>lLBEsPBQHJb1b zQfS*`htQVQFMo+r5KIJiy4mQDnVs}d^5zm%F`1ddHMPng1`c27c_wL?wNy1Y z=a)n{!@;kQ=&dpI<7w*;VqT~@yBN&CtdwA;hKQ!`_jH|l9L)1o8yf+aE3IceGkJA2 zW6y%49}dkK?4DD07Trgpbx&V!4Ib{PNZ&Vp&rFkNDyE%&P8e`^l#A4u_59#^zt`F3 zUGg+rXY1H@H7h3dU8HfDW~IeC`p>#huJnY0u~hBZ`{4RmbB&J9amQr2GZRywb)i<{ zE3Kgm)Z|{aiF-ad*~pFrZ?A!KZYoN8>#n@Z(&WN|~J{eUzm=%yr z8ks~*>Sf=`lC*ddr3?c7JF?943FV^$sk@x>)`yn-k|Z_kzorS2gs!Z!=ZhD(X4Xb2 z&yz#mc1RJ_Ye#_>GMHPH%pzxdQCTS%1>U1ea}pMGBxY6DFD+gUX4zGcO%$bReTyBo z)7(>4bCWla45Y_4Axr32T$OnUv)+YBz!9z|RnJ;6tfQBw&2hV}i0u@nSO>`xKtem#IS%V{ekmt>Ye9UNbWx^Le~j! 
zGeOlEs-N@jg%V7QJ90@Gr9{ZkG`R)eTa>fMVOfk#g)!r{hrRdQAisxx+^P-Y>gezv zl!}#S5`6WDBnL^rs{U}AieC&Z=&(=qHfuG3-&*cRMXrSkLdLeWhP#SEyvD7)@rxaM zt(>mO=^mV@b*l+~dS{wlX#zk8mYG7G(XrzMO}Q3}dS`2!8v#@%q>?p+sH412D!|7I z`RkD_Zus8oIQ|-aF!Ywg4V1$CB8>RxKjC=K)k$F=We{7mG|PGa&7H4OS11+E3B@{jXx!`8v=M zqQXB|w`yO(JQ_}$d1vXLT8Pc*73*krlV5CO2;35W70IJGFV;24If@JvJe=aW0L{wC zSpajE7ExITr&D$IBmJ8fa^%U;oL&Z}D{q-tn}Y2lGZ(AgZY92{346L1nR~;cFEvxW z{0)djByXv*YGg0cZX)^m9HCw&l>9n`)l=fUa*SJeyJK2L#Q^;)V2#s3cBcB-H;BBu z&gexyZ4yS6?daLku)&~Q=JhR1^@!h+=J?^gOo+C}DE>DwTXND8UQU8=>V3S$BfM9>kV zG~2xkYA))VmJxzY7Zr%JO3lv?73^$o{|!cmYA5uZ|fC8@g9E9JotE!eC#=9*jdsgQT!bR=_t1#*3^*-ytNXy0SE!#!Y(MF;)5S{aAD zi^*G$za?+|!0>pV^uyp8>cR$nmGk89mBtgt19dEq*>cKjg5_MfO&m3Im(eD`_RBo` z9Q1;7{M7FhPE;~ivaKTh>_YOi%^zD(CTaFH-it-cTzkx3L8;-AmmLh0CThhlyj>S{ zpN8bev=t?C>_t!v?{sFE1ml}xPdchFsl+WPZ%y>FFJ20_PO}AA#6v6@=IkN#ofmaR z4sUEavfD61TRW2V^2z(hR9kSKJiaPcoI^N~9Ab#NX_}`WdeB_m+0g8UQmgSAPqxUN zNVSAS7v5lO)TTe{vGpEBnu2nQS8APJ!n18=+PdouoqZr=Dc=~La+MMPKhy;HF%tGD z)_`XcaqoV43`&s(kBFJ$=~{Oad2TjtaCs2Iv$-}u^@Kpm&M){WMKDfR-Uqy%`)NInwHP1qnYi#7+**lj9 z*e(pgN_j$CH9XaO_Ng!kV{$z$)T{6d7il^bT1VJ{!*s9Xpc&s6&4LfO5g6L~H@3ZH zRz*Y6r00tfyz1~gMns_RaTa~?#UE~&kvuL9n?gDtW?XQ%`=Wjf3fe4vF5XqI`;nhy zjB9FPh*E*<7;;M%iD)0+CwMi#eqw0V`U>_BfIom=c7d;>(!1xoq30&P@Obx{C7bF#@o(K z*DZK#L8UitpLd@ zN>{|@VGzvE37=2pw^Qv>=>GHZ# zNFy8Vg+{vAJfK4lcdSv9`Vf9nV)5>{ELrvT&|kF1chHCNWo>k=s&R#8oi)2ouX?;C z6}WQ%N-!%EzaDk?ApLl+j{JaZt?5UQ_xbD2bqb-oLUpk#j-a2uJdo4Kjd;mt6>SP& znlQxQo$+U%6tN(b`W!z22)zOR>L)8xwE1sY*2QjpNqfx8RM!bN0qo18lihp9G`=mX3Lq6pJI?&IBO=E8&6&UIFZf;>d87=Jq zJt`n4{icp&XE>}0D%D6>OjSH@8*JpNbNb_kA?`W&^{vxSVcomCPa>`#!6>&Qf_z5W z{NEAzcTjP20*x#k5%>?|!#H?FGQNN+({89W#-y8gQSreqKBIoac!z#O9w+P`+(Q$$ zI>Jm_GEG}+alWQGd+Xj-Jpaj=d`>%$i`{+X>s#c4bUejN5A>6+P94{+86U10>GQ5O ze7bEO*6iJ#EU;j7S7pqp;B7La*u@A})Lg5Lj6|vM+zq1~uA@HRPN2BWF|_joA-EQ1 zR$I!=UK`P*ZQVfg36J5PU>ZDOBM!>Fwea_?>pBFwvP8#13aLZW+?Bv&RG#bxyMfiT zae*6t6&T9JqZ%}l2j*a19p$jQzvlH{7fj)dtvIg5GRM>PU7d+WJv=etzl1&cxo0vd 
zM5zw8nanpkdFI8(7gs3Wih2479*0tRI=d6|%<@PYiLChyIy$X#u!}_>L?CtUageny zi;7zL!g>%boh{u>nN>i$qepOw%M3}qaXuVKU7WS{M4(Ai^L4zAB_9}1hh_Ud0Uw2KDMAO54C*K0*A^JU-fHoh!(^@*S%b9b0JvGV)+S4Zs)8~qv zXX;Fh;xa2U6)7?~oN+`Lp!QiOf4fnKT3GySc%KE=F6zHD=N~;>@YGXV)LJLePy+dH zzyDjm{u&3BN1pe-omR2U2sPr-zqL(Ew$QS-gKW;$Ywc>s3jSThelE>z^z`vgTeOZpe3LsF8N7DH4$1C)KE&KcCtL+*e zwzjs4GwDX`|EBwY{|^e;@X{vZ6w$Ex8|**&v?YstdHo}#d6Aoo+tF0&oAQ6W1zN{W zNY;_T8N)DDW^Nhbv!p0|{HCL15sath||; zY7U{m)Zg9nt0B~A%TBoRA=0|FnNO{dh7P_DY@JHKAI})E2vu_J5s`(4ZsSORxXVA}1Q})@QyEN}Aw(lrTJ! zqB0b#Fbmywf~OPj&r~d>h3^W%!Xh5_OD8+QJE)dCiD7BU)w~;12A6=$=BxY09c&kr z-hJqqPCQ`AX|EF+GQ}T}E%G@QUH`ytsg%~O+DM$D5k~SUV$h8FI1v(xxS;GogSEMlZ9g1(QGzo z&xmos0%vuG7P{nM+m}sx@_F?P2I=c=pPFMfYP3$B7g5+2!PSLI{Kv1OtHpI%uotGx zHJsHWj#SH+1~++0i|WmpFaF#Vse+NDPJ8FP)?ua0OL3&-U^4!L>ci^&*acHfVOx#o zVh>l}5E%{ClwKc^-(aQ-Mp?S-eErhicO-imOm_CgL>zK`9IlfrhRUaLTG1Vn8C}es zVtJeOeJ4z5hTVM21bn^U_biKqGWEzd!W4RG_VJ;#n0r^Yi}U|(QQ`ei^BG&U_d|iL z3Kn55WKX&38_pJ zyvovu{{TydG$sf<#EOp{-LLsl8v&G3=$mO9)=)2~L3z7@1e!VLc{$0?o^i&pxx*;^ z_B#LE(KKy01Eqn2yycnudqBQ2DrUBS>4@JF`{Xf<`J9P?$xT5!Ty8utDz@S=>-#9$ z2%+^t8gltL6}A!oiCHb$MP(_E*t-VJ_of^*v1=b@v2HwDCj44%8$i^b=5T;}Gx2$O zdFLMR&-?uD2E#mG05Ju)AbOt z>}Lnt(xk_Y+Ls?6tOO3O^XkVe-LxN8PMocs7Y8&Liql&(Dxu=Qq>|->r5nHK(2ItJ zvk99!^;qt|ETDhs=Azd~P~ryYW7k3#1zH15K}eu+U~0F~k7&?+{O9=M_~;|HL!Nc( zl!j*-J-7l)8gW{K0@t(HTnsDp28vD(y;Z$N?$oUQax8QfjD6B1Owwu9iuyhl<64$< zI(gl52b{)X90zd+*2HBPW+VGNOpQpj|PvF^z~mIW^@(_=RubL~PnVS5tlQ;qv6CSZDe}^QSR+ z_koKzp)JiVvPdQ_Hw7?y!(sMjt#8wm>)avqU1yqe`$MvX&i!8ZfrZbx{c*!O?TmCxw_E`;Ph3xlsgyJLqln>KbzkjC?3-_UIW z!u?7Gfi?Us<{f0-c0_(tRo<2pgq5hzrTJ66gZWzWg7`V-G7+!rqT11MXr)~>x(r^U zlkS^|=Z+f{0E3Js!j4;$bG58aD-T^Xl4sw%)8T*10xLCGF*4t~ic~q0Rx6JA)J!#& ztC4MgQPmE(v(v}q-$;mcF0OK8;?@XqG_riC{Ru=jGx!8GvBvx)4a(R6}scbMM zEE?yJM5?-rq}=!bYAM)2MD9>aCCjfS>?ZPl1YTZi7k-;40EV+X9VKs?VQCmUbCxg7 z^7@GP6FA)eo$b=|jhU>3j7Q*ZA=Q@98n34|g<8XC3H z|FM#m4IP18svBibK0OT0K%opF=O?H2)%aj4KruoLVP)_KapKJfJA~mEtr64Gl+VpJf#U9kRa&q@2`QzaqlSk zt$8o9CG4yjd#(0I=geSq4MRV 
zTd{{$Yfw2At+dgON^omr(W~4(e{#H_M~&e+MvX>8nZ|EzHIToS$zEEzC{M<;R5nvb~Cs5F*g$s=@Mn z4u>+r!or**%tr$%ZP|SmV?WU)f~C>Lm)9r6aEJw-g&+g3Tg#ur_6}0N)Fu?yG+BOK z#t5uX3_SCw-DpTa7Z((u zKPp)9)IQr^e#cP|_&MG3{g^F=sb*O!la{U=e~rN`QF;>BlxmnBtk#OrL=40`o`azq zc4KU}PA!%+X(O_lY&UvT#m;olm8sM;<-!8KD-JkJ3+xjBah^Mq*c#E}gcr}jTH%UFiKMk-FNh~lKg_}NPVWc-F1f#~sCnaa_RaK&u_Y?cOw|+>o9tM-5{>u2h$2*w?T~jh(j5V*@>;@ViW+F^`4uO_wLYNNb+g=(T%W6IlXC&JC99 zp#ZldL6<`W%ro!o(eKf4)9}+sa0vEk10eY$h(uzI(=J6`%)^ui^r=Ftt%PX zSkA`-Bj`4EaqMXBA^=3GCEhq+iN09T{)g@Fm-mW z9|n44{e3IDr~Mg#%GJ^2czwgYU)T_q;Z-2GUnf&55`k*|WfbL{`@;WaJaaY3hx`Z$ z2j5^8+QAB-0GT+op^z%?QsH%vpQBQ&JDJ$<^bT)bTq7#wC7(Kfj`x?_e?9kOYRR{YSCgqB%+ICRQ zCMq7F+SXZ^1zq-@g0Is7nSfVC%84j;va@-pi8r{HpT(vg?c2&10Twu-n+`7JGs~SL zTrSqc$C@9$l|pfYzJ3Ip-dk{mqIiDvdU*r4QQ?-@vOpidUx!1@?}j_#PJR7CEX>d7n%@P^xb<>k_g0u-eF-EEE15o0+<6mHJ`k6I6QUy>!BF! zH3lhFRnw4dG_9ed`(yF0$NMck=+yhD^Z}HHchh_2=bvySXdo#1{f$Fd!aW(@RXkmI zhEDEu2P=-2O)ymQ*>%m9s;FtYxvhp4<5}oMBe7crs;MvW7SL@OM4t{Z@jr`RA9CK# z$4R+oK%qH^GT7guv+yn!EFKsfNW#<}Rt{Ht(Yq`r>?0e2Q!``nXUrvK>*CX zfj*xv+&Q1JwKU3XA9VmA{YzOe9@jy_0r@Jk!A{o#4X+TwboKFDhG^vq)>r&}JT+g& zYNOavGMArUWwD5R5C81Dhx(hHcW(Jc2>OJbtQ|Oql{SCERlk%}Av7nUtC1p*?$l?frDEjdiWH#&L-iYpylPJvFXd&EL4N zvGRV^;d}OD$Gi8bB6Trwk9zoHnX;Ou)C&D$rZj>XJ09OH8(Nk9NY$1=?m?DQZI?S$ z;VoR(7iuPb8W^bapl~nCf;UzU^LhhH&_Ni^0kYyJToaKJDg{u`#NQsnTIDMSh3!^` zLyIWV<>a@!i-HmHt(ac39Lc>RcSu`?)Nz*#b$W85qoMWkb&0sXV!3sU@_H6{LV#^1SiZ zc3GH5yLW`(i(1mKK2)M1aR7jYI?tyt6qStsIK~hc-`&v3kzucE3+iLU}1fb3}Dm?H+Es>g~L7_wy%`Ax!9Li@;(b;h` zkG3o9Buike>kKV3a?c~96|c)<3xD>QbFXfZmFr*VUM-34W#_QLt{<)juQMev$0&o)SLP6sjJhwC&*FtCN zDYQ!(b@llKd{`qpOQmyzr{&6q;brz^kSaRLtpx^bQJj+b#YvA(ed&2_`%!jAX3FNA zDTnwJz0b<9CgT&^rpw%$Bcc%y3Z!La-ANFI~GZh8i(}@+jVP zDrveI{5+fFR?ShIlT`9CiSctD+U^1*3fZ2$AnWa&kt5aVwpKR(&NRt$WQ{HSA5ART zN#&LwC{UCi+Ezy~`kY^usDm@)s#s=<1#hQ}Qn7NkiZ>k@0}1yx+JEk7v!?biT;k=s zaH_eb4(}zJeD4^vP#fap3byQ;>+S-6D+$FI@?cV^Lm_(nbG=$-;wAr&Zd0g4FrjP7`MlGAyU13Q=^u58d$%Dr)J_g~ zw}Hq6z`fVq03h~MOZUWWA+n8&GQ;-ItV*KR0dxz7BWjTC6}=)$@4Xgx(%Wf@HNe-6 
zroJ*encjxjLvI7g&BBUBvEI7`gp?biTl-yR_w$Z9M9J#K=9m_WhpKqeH{m(_8fC%U zPUp}-gAlhSm|u@tfl)4U+CeGY}VGo{Ez5sBAD*V#a|H4 z1VxKm|B;)R@iyrm)#JL`#jd-_2efv#g!S{?o9z|tp7%A*yUKn761Zqj1tO5?sTF(e zODc&1D+zn(_dCbTYd?{%yFQ&prGzk*mX5Lclo2P|qZ5;B-|{>O$@LFwTAZv#x;~TT z<3Bi69z~s9Yp)fpHv)1O%cT!O9tk&JVms|`tWn_@YyoK-rIJgK-xp}Kp%~6vLgVv5 z#jU8%qzGp7wos?YSW2K1A-=lZm26C!ICsv*D z8a$WbCW!z=h2y-1=4qVjoww`@ST5`qR4~@W+BoU*rS=QSu^n0{_ZDvn45}ne+N0juVCt@F%Jt7|twrzDx%ZM`_eNKz?%QMrMPp^% z$2vZbJ>4rka=X0eN&HA#7GV|-Y9^sMSZAgNR~m>B+1)T_Eet65@8#))NV%w1??-=} zBp6$=r(732bZVA*A$7a@&CJ6xVdmpF^X_?YpvoBr>F%K1AP19x_M$~!?J8AVIx~Rd z^oeT?>`bTyQ3^)(2)i9Ky<(P#4?r~U>Zq%?2N=#aPh{R#{TXUT$idh-?Ky>arwX<~ zpEn6UFqS(Aps&6}UHZ~%4{Kbaezy|0hN@e?C36k%?0hcDvQk5pv+LF> zF#F}Gt*k*qjjnp+rURvGuTcg|RxXA`5Zia!k&|LPXehLom#8sZ3$#`1AAe%O))vSG ze{#f>)Pcvo*hm9QRRGYLE$5mX8rt==xWGn0MyncC%N4}K>E3`nw^&(+^@Xj9#I%E| zj-+)R$Fj%MxmLcVk;HJ1OMEk!SeOva`Jq&8>bE612iOwf~cQgw8mSF ztLM)4`ldkXjCOJ<>3adPbaSQ;@4E`3i}D%Mo7RkXfMJ1>zV!JIy0s<~-567fn*7yI zwv_2vS~@na7m+fkBjjEiodBFObylB%)N3A zIGEPCfFADz`lFxj!n7An_W&v&sOA&wPGP^j?Qs-m6N%MO8X)hWh@gesEbn zzb7j<&tM4dx?E1j6>C@zci=iVU!eChD2(T^Tu5L#D$`V6s3mNA?<=9^Gd@)m^vLx* zV%x!=qZ&V`N@hdJw>FN4OCr;Y{2ABefLFSxLmx9O9Tg8RuAL;)3`Po{*3!o&h_j-e zOxR(OJKmPWM2N^RtZKne(gda3+YX0TrNRimx=vp=3@S(b#EKh?RF#tGNg#fZf=x6G zJ(=1zqnxveZUh=QM-n_E>UTh;#~W&d1)j@1%)Boeb8J_F$kRTt43+V8{c39m&&n!? zLK{83g>)&x0}My*#M94~VOlfvO4rOHjdvgngISLoe16M7u5@qJqY?apwy?LVrQ3aa z7sAhD{ZU6O0Ho@eK5rbrc*HjyF+Y}HR6oahP^Xne%v&F-M(0v_cxGz_+G*c!7}U9w zki=3AiF(j=n#-Wod0Il<8ytq5{L-O7eC}+_+UR1S=yNU*bAhgA9En=pPO`!!za zP1^@(=8ogWm_xRkMm!VG`bx-#7qjAAgsKrm#O>GgTH3q*=DEoGJ{1d%=g`z8CM0}& zcyvZKKCW}qax+Q!mG+SPt?F!+f7A8(TIQin^VOiOF=WPg|Cgw$VR?8gxAXhD5Tr4? 
z1H5IHo3^!;=f59=Rs;eVP6ASD^n+blE zTsRCAO-03U+PM4VO7_XIOOJR5oP(4`W$4<^;}`RVB;@hX+xv!91FPL}IwIlON4&wt9dDzcs1FQq{sBV;5;Xfo(WymjKK?1jK105bI{;X5M#(K{!Y~Ym{z_=M{D~ zI`?{S@-otYnBVpyWD|?W5UDC+sj#5vQ)8IZBF|U+K}UUoO0Bn(=oN#Segdm2?Z8`p z3BI!=jHKp46Is#?0RUy#cC1tb3F&F?mNz%yRYO>oHxyaFZ_SwysOU&l7`s3;6E`O# z$nms`mD}H{^a+=*c--Z}7W}~J9y3+0H|O)GT^e}uX${b3q2lpb-oWxv=P7eDrh}UC zKyHz6fLFCk-i5-FWMy5L{Q;RyJSDic1CV47vO9f!b}FY)9eZleE9JV!yX&LZ=n5xp z)y3!6kDm5bT4x=6i8|v#dPk~VpbE1WT$NGgs5~~<8r4j9<)V%|B$scT7+3qTacaY< zFB4xr{GeZ(qNz5BneHD`H^(6oDo9ofG6Xnnc)xZ6eYt8U01$&pB6=?U@tGa)ZBKvN zs(}$*$RZuDzBVn4Ph<;blB?{tkQz!SYpT?4v_2u}iTep0Hu0|-Sp$Uk#9lZqwPsi3 zjQfKKKrlLaIrte`wrb;LOYpD^jWDij4=x1K#hLU)mz(h*P^x48a`cP(&+j0Tl#?+J zZ*@8B1??Y%uD=Nj@Axb-T+3e$PTOal*UGt;nhQ`QJGEz^+UgVu09+jJ`uYa zRsARS;PDi5%DK}pwt17LWj|BtV;42x^o z*7d?6xCaPs!QCwc4^D7*f_88k4Q|1qaXPrWy9Rf6x5k~|m$mnjz0Nsz|KjP{lUXvW zt44j}9iz+;o$fI|H|x3%vsK-(g=Ht?+Tn*KSi0?h65XF3U>N@*g_zJ2NvS^OKfeK$ zIn|Aeo~xg~B8GQ!Ha>ey4nyqOG|=GHCILuq$r=`F9G^K6ojBg`UcYgnC?5}Qd4Y@F z8cA6#H$3)KPdQPup%YGw)6CFhR@RMRF2g=;_Tq14DP1Y&ySdaf@6Cb(-3c6mx)t=S;na={bBs+HM7}+A$8e1WKw*QG5~?tb4736 z3J21~ic?8n(zNZ-thb&^ZeO~S%iw8Vy-vEF*@{z{D?yHw(8h{*0Z1}AuQ|lF=k1(r zA?e}OQ3`ihOIJvQ&5*!KXiJmIO@p4z1J4DlJxjrk^jqmfe&CK|w*Q!e>05rXNPbM- zZ5F8@<;aKiomk#gutO;LHoxuI{MPF1b#8D^O^>XEgLmdDnR}D3;9|4uNy)Qg$LB#^ z#+XWw6#UJ0NQkUtd$B1E%}_TK#o_tGn0J0nT3N#UvW)BYUpUQS#BaBB&mDgF>URi4 z23QTN$HC7-QooHcWJqZ;zlCH*bj(Mue&o&7w7BqeWueqDPI}@dSzH%C6`&X|$8aD< zp3DA0^<(_hzIZ;^iCi>|F>xHGal5lz51p;H-=GkUwz~kGVBur?UHP`XvMUq zUIx?9uE|=~J5%kZWiclxNGtUX)XxNcq0CA=p>H^qHBJkcVKZWH_>$w9#iAA_GreVr zxP}P53U*utZ1xYw_3N9=w8YSp8K!4c@>T2nof=r!zl+oOu$p-N!NGTD zKMtT(asI%O1Pd?_FAhYx(k7CFS)pGMJ9HzItV0H=A3m93jO`cniT-MFgF{kG|Ux_kfd`u_1}hHSiff~`W`M7JOQzuwtO z^sZY71O5Lf?eDVH13v&@pY(9`$y)|6Lf|Thc*uOOdyR;hO5gcP9TQ6jyWa57e@DB? 
zF?gBPgR~hmFh5C4aVri@kEVzf1uf%v?4b|mhZ(Tz4z^VFK zM*}=4`5K;Eqmshn1E|dpQ%btBgh{%g=|neIOT0&Gvlb)mw}B5+%|13SRe4A6j%(ec zw4fGd^b!&jXzNl*hAUl6*!$WVlRiy@l<@#(b$oEcW63e=2VHw|zZU4j zk?4l6Bg4Zvot>Q{h6?i)p%}jyme&kop_$uW%hl`Natj?bPmvTXzjhlG z2bu2qc(2Z+^Kf3inDMOJ)9Wbly>+vEELY@y-0gsBO-b|UaW3C3F@vr84qb#d!+VdE zC1>GI;yPQT72;7Y<8wC_2hoap4k8_=|ntkS``KpIp9+X^sWDdc+)yK-vD+V7|;dHR} zmza%p0-ne`4t)H{SEj?vjG;7`XNB}BSO{VwWic7`ya1tgZ`X1Mzr>eX)}IiSzkKao zdeD6%qLb9V&QO;-_d4qz`!>R=17>)AZnkZpiO$@r?z>|9mJ~;G_Z|lZoYwBX%NNrA zn*;jSEMRy^fia3<&BIN!an@neE&{?RTlo=~$Kd~4&*05O%hC3rCPt|p6*LSCjHQdq zYzhjNHD%1&qwV$W&3gr9#m%-^mf%id;zT5ePd`EMwQD-OxX+!gCo--X861;)iqBhI z6lQpnYmbK#;oVm;_@ou?uHa{Iy% z=7idwZ!d2MGhO-$`NGSuh}P;Dyf!+aD2r}vk8SreFbb~xp*q?bpZ65sU(K>}X?Ln}Wwr<+qxv?GG(6S5DWs4tvkYmq=r^G zn>KcC2?uTmz=BD?qp|g}xA?^`r}##{8VvZYU=2cnjpz!e&um78av`;e!p_0{IkuK_ ziYt0y3&J zD7Nnr9^MkYSW?22D@Gp`0mWJ?q2fGWfc9=1K$_=+dfQW@uK}b0T_NJe|2{J?X{IYtdw4+QX~>z6j=_}sJ1xx~MW+;R%FMgK%bRAf1#SSd5SSUV43 zk~Gu`YI}a>_#EmTqn=F!{HW`l2~yeQ+T9+h#_fYB&g>TB?vp_8F1~KAfligt`MoX+QC~~&+p5Zm`?V;1TJ)c~0l@HXV z_$}g-e9f_bK3v)D3`(3L0z1mzkJ36tG0l{&wl9MRZ2Zn%s->xG%0BDeRZO>R(gbQnBJ z$lCs+|EDNl`#;q?gb@DbpnUArDqj!z9XoB%#`xCbKyheQ$#=B8jQHUoT9YtjZlW7k zH7!7DrbIJ}$7hYp<@+AqD+~@Fv2N6^Yd0@1+=>7(cfTlKp)Jo>rnI-?=S@O;|Lx9@ zDd|xF&^ljGLAautlBD&;O}uq$N^5LuW;FP85=-}b%eG#g5Eyt=T$2`S7L380@hs7U zzl{s=a+OQXpFnL!nA8}=4iB)vBE{+%r=%Zd%pRG;Y-Ph>;bEm&xtIubx=rRA^615P z@(wqwcPF`eqBdwFT>^^7!Uvh>&NUOf#V8xBuEUy}x_$sPl~)`l!#= z6@?9i>zt!op&R2+d^rY;5zyVYq}HeOCGGY76X!EL#?aK%ly3Eu_zP=>_RU2+Q|CvF zpcHsGh8gIIZ5g9GQEsjw1cIh86<@`rt+0ij*hV``uBK*)y+hd*o?n4q0a0A=jKhEo zWnaLt9z~uoa80E=Uj;vBq0+Im0W;gCabB6Ba8Fju=fJu?QNHfzW&1@qj;w)XJhp!? 
z#LZ-34eYhHibsGpejUCx?)iBK%U8R%)uI}|U`v}JviqM_4c7;=%zqkYZz zYl6eh^C#y-=fz{6A8)oJZ|8<^=F_hN4&h6|FHQhgtT$RM1!ajmGrubnsOU6(0T&8L z59Eh}l}D#h?H-Evt?1u9)R%1Q-QB*%=s&C?L$W|0pMSlgsO{xuS+I-c`E!ES^+)M4 z;M=h5(A;QJ;CU@x2-qw;<1FkU-QtuxioT_0wMemfJAlA!%-i#Zgi>Gq-_+1wBl%kN z+nL5le%-YtWzAm(Tq?Y;fOuQ9{@KnFqsx>-Q0oRIwx#sYJ(&O^<7nzFjn7U+D|NZv zO2QyGBdvbNg9kDa1S=iPX=F)azen!Bn4555r%^vFK&x1;NH2+FiUr``J`O*9%zVP* zC??4-F4=o1pI*X~@nPwTFmvp|5xA5Mo;qx!>}$BcQY4jQQEe%7cb}4?ox*FKxB_^a zwr|{`i!`l4|9pt;YHA@Z&skM&H9P+a5VC!`*I($?oSEXGmlm>bs))#q6_9kdg>30= z{DSAZ#lZ|?q->R#Kcne$3GKMphicXc?WbCuPpzS1SNYqXd9_J9?#ZM5JxjpP8rkM9@9@+Vb~Iz=OwtP3m*_CZcj;xjo#HR9Glw8j9x!`WIM3DZD6#wr0MPs; zgU>+ur=Nq?fomMgwaKmG%MO7~5&XG-q=$HyZvMA@11gp9dW6`)=TcSIg~-!plC_K7 z*57@ES0$;vM=(a0F!gDlUU!U(#rqOJ<&|1i{ZO75e(tzKmd0{^kA0DBQTC+{5V@=4 zKq@a9`^@Sne0?Pb;7_odUjet>D|g>yM!9`^h4|Ht3R-oxshx2IL@xD)V_?dov*EVkA4Z-q?~nRuvP zc1{ITcT?qoah0h5Fx)j6V6S*qGjvwnhBgP{Wc7~ciXz7U;QPh3I;}JPVh8JYTU4KTTH{M4ZylP> zLb|Xa`Y{%Xu@z4Z#-BvL#hfX5EKIq)g}S;?@Wu+~eE4N>b9k#2{W6v|zNF;e;^6kmE7p)z{5m+)i5&8uYul~eG5WVK z_q(aDhdJNs;d)@8`8~g^?7f81zXwK#Kbr4MEG-6(+4$woZ(0$L$3KaVK~u7KR!%JW z08e-F^Tuh_`lkx$ChX`oY@mcdbWrin7l(tHhoHHYyBikzl5iKHps%_kd=qL~IDh@L z{sr?hhJWw@Cl~j=A6Ca8hs<1jqW1en*7mrx+hAMx^{_ z{0U=JM_b}*b&HbpKkqJ~gYL07-cbwqUzGVjr@N7m--;1Dx?!w2(B$WT554(6Lk?aq z{sxVVj9kbQ2&10%yP@%)mH)@5R2(RE^c$7~E`C<6|EuhIL|EhJO3e3CME`n>8=le zp081H|Lq1t4$ASRsj2GZW8|fOw)~%;NCpu@nhx}Ibv@sUVR-#F6=A*yk&&4(Z&lvL zAnCs~@(e@j+l;@cp-_*a-@f<%zmBN{z7lX>Pot;(}zScxF(cmmqGoS@u$*ApKvx_s9PfkdH4d5y7gX#{Dl4{XbhjFOJB_$e5$wM)BW= zKq}^IXSx$fHE_Cv7%=locZI#{l4(D=l`L3~% zlb1C~)rLP>#PwOX={Cb-D(j|Pz>~^$txae|CNibfOHJp3Gb1@&E!E3G;6%o;+A*(Q z>dtIwHsz{9PboWo0={qI@g2*DR{G-Y%2fCH)%?f6JI<1=yMb`X)2X{_6PZBV zy9{)pBO{11k1+E!VjN%m_4~KEYVoo)ixh=w*TH!BQyqSh4qLNWyk|c8_ouD5t4^xm zjSZu2lh-rhA!jj^mYFQMr(U}p@^gpJn@mnLuD|IGN*MS63{p~)Ipq4&(!sJ%PcV}l zV4Bf%MQr@usZea%$Q+IxPm(CR&0{k1#mw-cG;S~G0=5*_#sWOmbO~_$Xbjx2SYh zP%m9sqk6X0%(jK&t(DxJYAX*>wD{KQuNt4iQ1qzPkW+O~ zukX`s`tRK*ecWd2ov=b5|D56toe}qE_c%gk)KDOQT_b?KfAE+J=|@G(9gvAp^!KNb 
zdCeeRFSu*gn!5^nf@-#Y-rZt*N*z(ZGp*N>7C1Ojr{S>?t1j9HvL*49k&)4YmBJdYzMC2;s;T+?FHM>dcH@4U z1o@NzprtyPc0U5^N~8Dc_A1{W0X-~UQ+jVST{g;dI(srix-v90l(Witcz6Su!9${e z0Qp;Pw5Dm%Q)ees+&2_|rL-nDwsby6{&WE^8dOxYT(5`Q{O&nYnXN5s$ce`Z_ebl# zNaCWn$=CB;CVig}Ccilie|*axIeB`MtQ44je7Q3}DIDq`ze^P7z(T9@fZ?7u21|rn zk{%gnQS5K?ur1%rM3c+!6`iaPC!ibc#ZqulF(4fo6s@7#Y0u-%E!ZJ z!Bcu=QfCJ|G2}AlA1LNnqN)B2++V zo5rACo(-4aR2lOn50nBldm%KlTDs7yC9;^QGD2O*E85LCNPgM@W~>wbOgMJK><}qhQY&-C^RikcNYQ5D?%84G>`086gd4-bI?m!CK1DO~!(3wf1C8xU-ONQq#YJew zE}xqeTTGRYFx_OtAWcQA|IwuHNJdqgR*bTjWjG>vKieW9X?wInh8tN#L_{8wPThM0 z(}bi7Elg<)KCxG^OM!d5_uE08f+Pd9q>!;+Z{J9Ny6JtQH7gTCKB$-N1FY&L7a}WS zm4p|(_@d|!QC-^=cKRxD_wA>E;3R(PSh);LD@G6FZO~7SWad;@l_2uRaujHI#9}>) zp1i!g5wStRVO3$;Wg*e86KGVK&eA%MjJrE*GsX6zbhh$?LkC@fmlt1&q@%pAvQe!O zKRTVvvzcgSh&bst?M>AkFHM%|HLHM4^YikC^V*<*E`~UTixt)(YJD}5gK--zU>W-3 zsd~orLyaff_hP{jt-Xj^qk%31M3$dt-&-XW5-$OSkPEE%5_E%>M?Y1^T?8?sE#?$i zrN$)Xbh32G?Y;ImU|Z#0+ejUbsydvEQJoGFy0ahuUN}{p++O7ixfe zQKCQ(tKrx*1=v_0A)Fd{-F27t)88Xw7R%Q=6{94#cO3W6#ShWf2zbtlnU<`@3Mdqo&&bcqpZbo74{A-G zL&kQZNxZKRM>RGkDNnO%k|dx)83I!o>THu0U*yDF#)FO_4?hI`cK6C*= ztq7LeydG*i?KLm_B{M|nBT%+?zCS|IDjtK-@U?B5v=wrS>6Wq%uC1~Ls|dl1$(}R# zlm!*8`r?y*f@9)_5fLc))7E@W$WJk3-q6aKJ=mCk?nzJnOZz8_*uT1%BPf|6r zX~aJs4cjb|D;AI&*(-}r)(93z1{@w4>r+_jDd}9nD5CZHovcGmW;=&xU+AVLy@s+Y zWU$uMe~%zghNJtElW1kmZX+5nqrgP4Y=U^kCB#vBlP@u7hAsg5vT3W{VKy=}jF$?D zy!B<;N_&W~rI+~j2}CbL?9mEex&DcMptrY}n>|IKW(VoJa?BXHLTTPd>s8wdG_&P` z;?^P<@1UmM#o5ETDwgZ<FTl$3^l#lHYc_H0#{sZJ#t$UF)loysq#ckcy{l% zC8%KF(eiFDTTk35kvCDJHdpUrgPd#|nr^V#8B2Mz@2Km z@Uv`YuclwyG4zS1vH*t>$DMn{@u5P*fO3(@kNHZwkZWH%XI{GX=DKMVK~5r9ivLdI z3lpQ=$*yCamyUel{J&x&f4A|}gFn1z3*wZP4Md~;+cI-|Pxf_py09fpF+-}Ff;VIV zngDL1>*Do?b9Uw%UF8uo$X+;og{1^1-^8is*JJ!C&ntu!gs(CNB6~AT9=?_{uT2qf zV~w=I%;fCf({wx9Jeed|S=P8qc&X{fmqj9bi@!h zwd7N|JXU2ORxMO#9BpMy!sKsHi5d&vHkBh#&@JgUfCTK|Aj#0SggB-^RRd+Kp7{H3 zZR9Z;92Bi2yxw4ttG=8E&Y0E14Kbe1kl+@q02Q9N(Ti~;buX422zm23C?8K2oxIXC z%W04Ll?m*fXu07VHn2(KKZD+}WQ=(GW369(ptoZTd1t~vmK#2{HnhQ_&aO3KR=0Ll 
z2^vISS|M5XzN!jOhiaLO<&7vUj`osPc=j@XL+nv~j4R92(!H$68(fgayEzvI2CK#) z(z-r_>yp-4d&xR3+C$p0QED_b^pYXbD(-kVZf-PGqk*~6+>)3Bt>8B2j+kSvrTWb@ z3+KVtK5O9K68e_hZ)&P);KK{6}u_*C7S##t3L zwt5TT+sU@r>Dtpo%i{a(kvl#}lDl_fZsA{97&^2mIxahHKm95uuZO!XWJl(@rAgM5luWkBRMv;E-eRQ_0eV*5qrEi>@T2 zU(=W2#nJYYQTBvjQbHrLzd@%6VK=rrUT$Kt#guW>6(0&j)>e0WdBUp?eXAf>@;$)M z$ugP>sBVGHu~;M=L14(0BB#FtHsyEcD&)?R)!ChM^TSwAN217?4%rl&=!0UK(Md{` z`CMh)8To*dYW;zKS_5*d6EB>iL7KGtafU!I-@9s{{TY$*rL^T!H!hOruOymA{kn)0D!w75lW4#lqa4zq7VNwZU6~}>6#IH z6^PrCSI205#G}Sjb%D00XzQmxu=t8{qY|wMqvR@6-_O2JTsaO&{4`=_`(Z#34Vj6l zf%_{Xta_EswIy{HjbowL(RAdM8CVLb;rlT=_lRKM+joQm;X{`>%JWTAD&_PJPm$TR znh@i55(OAOBETqTw*YCjUFOt*S}0*eacRBDh(_HW_vPu2qoA;ai~3_cgJZ-S8~iQ=97mghVafgGU&BMah>`#ZoWWb=R)>i4q3C;za!A29< z+*R~o0O(RH*y!TwN6C7k-kkrSNX7?oUWFkgH}V#yWpKVXwo+g#FYx&_E{aKh37M~a zn1s0{mc<`U$IWy}_Zy1J;}dMG(g~3NZ~)EHKL{4p_+TS3ja7bA?hB|AH7w+6$)i%2 z&#Q$&v#HPnD+3*3IvmEzRxa{V#@R>chPJ|yxVW`>$m;{v(xl5xZdbE$BOM{*o&I6= zfqQB*Wmk$wQpN_DEGQnkGiau&07WodHfg*D9^fn0n&5&sC(|IDJR1`0^wCdA*+t7E z4;hZ`5)LSSEov^yBrLHA8?=eRR0NMe5T*q+jMB_yM3l}1tGU|c({h8>!k5n`h0n8} zNsw2+FJKJcu)^Pe)#sMgwoaV>SS&Yj2ci#*4^?etu98o4(K*0HC^9yi;kNGI!N^Je zHC_pob{SJSJzp4^8zRhIQBvhK6$akhW#XRiyT|ik#(E0+As6#J#l7CAzP;YKeQMO}t2`e7cHjWd&9{ zXX`Ji;guc9@VdrnYB7}lOC%_kSdWQv6K~qf@R-zdT{RNzB0cYy(*B-n2|TOdjHpL& zplo^%_@M<<*%*L%&8upmoD(Pi?W(RdXfI;$>GPsHlGEVUcqC}?` zjgJF6+u~ICel3e@V0aaCTNSvtm_r2AWUZy<9}rx@+f8M8n~^X_R9TjkI!b3gJMf_R zes@MjM*+Bf=KQ9=P3*kHd>Rnaz1+7!{wnS*8mO+7_*eyVj`MuEF-jkv%eP_}3rY>h zwi(I`pkLKn`+e)Mth4Fx;XQ@Bwdx3J&Vr<2uGy%SV06Wo@y15Q6qpkW%Up zE-&kgwjV{$XWwBjDFTKmL##~OXYt^pjAL#T*9C{m1_jaE6zSSrx;F-mQd7zd9+mJ6 z8Ea3iOEzO!Y8F0y6o(@;Ab_TE8;j9J$VaC|+*ewwPnv^co0@oLm}A%Zl=_8+!M3pX zS1QvgHNoiJ;=ZiJ=UUZCC@;}S@fk@U1|c<`(ljs&VJB4QQEq{4wmi1A_kuH=@|2AO z^PpQw(Wk}Bg}Tcm_R1hOX1(jO7z4Uu+oO8D9V6K(lKau?u#a=eYy-g~EqDnME+>Qp zYI*61@QNH!6l7DU6DrCg7H6exs$1(OcEJ4+cy0+3mOsUv1{B!W)X|47ZUppMR)3DY zaS~r4_s_&VBVMn#r1zuS{dS)Sh=g3ly4+Z`u}@ZHXi;@krH%lIh8iLWnNn#W0Y0!2 
z7!Ryu%@f;c*5#C>m}-R&R(leokbRcVN|NUaEdTAvKyuoC1yds7TPFN>llc_i3jjN>lopPc3RU`DL|C+OP5XO6B*IgqF>dI} z+adVHw#(5aiI3O$Qod4g1Fb7~$QGO~#^#XndX{1L*f877}J+p<19p0?`aK!n#^hJScL6o__ zDHq{UfZEa;60Fmsxmg)Z%=okR*VlunYk4n-Z#-t%cl03{2%Wgm%ua7zO@18e%EgMy z)l?m3wi;Y-&Q%+Qz%7l*M`En&dDrD=tJES=FB0oSs8e;zLTei0ul;w?NI%V$8-vlR zyIE}(jom(t*H7CEXw5FW>{rmbvYjdw$}z!nBe$SlJqYK{k8=6#UL@9;`sySHe~l^W zkK~YCuyc_4;fMAxi%%)0HA-9WWwn`}8$&yGU006V_@^e^VTHMlwrXH-+pbMb{zslX z48$6-^%l@!rJ$|N)rMs~Xp~sUozv@vqn`Q#(6F%~NhNBC^1e(hN;;PS2tAPSNgNsY zh6F7+OJ>|J1h*e7cxPAF^B6)Gt1LPL%Q}|)zJNw=ab#6DrNp|cugB~5M9=ae($-RU z=y)`|zv*;d`R4{9yD_TA`GpZh|77>w0s*qQaEY7xyzw^PcAQh?bJRdXYpX0(Adq2z zkg{5JwY;q|9)qZmhZYM3tBjOglE?5q<5ms9u@qhUp(aK~&@1!3;v2OuNIF$}VNOW2 zHmvL+19?{-i$x8FcM-lEkSXwUy!30t^LoE9uvPvw14kw2`>wD0fKN3^&<8vL@K4+} z*FQz1DVIwtIs{k0*A#xi77U{bMllw)n9pyOjbBn(}7;#5H=S`I_&0Upp0TD z{e(9rD9tb2b6!Qlo$!gKIC!2S**S~}M0-V_^KNAp8QbzM%d0+}X)~zuhJD7r!D@+j zyXc4~(v0TUFV}_PGcs87UiP`+HN$l0ssT{4Fr8Jbju|R;eYuVGiBp|u57>qpjnHLS ztiiAKxe_O?UTKk!!tKb_RQrno>GBKX4v8v4BeH~#kWcQ?cZ$gwpp8dOrs6ZpVf?RH zg4L!URgC3|wk%>Tawlq%B)o4&rx5_3GjfNX5?l)Dqz>N?sc(Ds3Li4J?=>*j$Z{+prKyjp~@Y+r2)rkxe> zxrl>w-8n)kqr6J`b5NNvDsC)?48TeQ^yMn42FIoo+*r9!eMez`L&&@JiIDzlSJ52s~%BI<=wK@s`^IJ z+G1fEfvyPKgvVkfM&EvRPRd~t**N68kXMW^)7AwZljVdg=A?1&Xa98WQiTM(KTum^ zdh5xRoWr92U3j1ZOjL76vr_C3G7^v|PeZexzYmsE{ZOH1X*2YNA2=84baK?lnfH8g z)1hzlCbN!g(~8Z~l5#uo>SMw1fC4U|jA;+Ni>htPr+x-hXrD*+jt?Jb1TB=Jv7?X^zNBL9 z@-tZYM)k`x317LRPwN%5XWF=$q}N@LQ#NIVRchsl~LqkISZKZivcX00n-~oKq;i8pGEv)&v_0 z3*2mc$z-eW@bFXfAhv32Q&TDrKG9f+2vw9kiI)OxmG3VNx!GMLPP7ejXkXhXX*fiZ z?dtjvX*vi57dXLiTOy3e)(?w_{KPXf1CKzHpXRB_l_&g*D#k42=wS&uIFM+FWKUe% zjo;-c$*4sQU?%ABtHQ6Il%u@3)uTMdV%COG#F-tn$fvc+L^6ZlAFr4p8*$e2OyPps z!R=gAxOb49vV6bzsDtB;U(d1F=xt^2mb;%)YNEn{ij5aM%TwxcI-5!a|LeA5I?IQV zdRuI8ZL!FEwBKz)CA+q$e^om<7vw3BEG}wkF`LSjOjx^RTLq)iPn0QzaO87X#fQif zJGZq=bvJuc{1IR|!H{fv89qBgqp>TY8c&-Y(RyT<{C?=|-s@U^;i9s49oB5b!#pEOMj3kvl_mZ2ii(LKbpQ(q+wNj+?7sE1(Yubq zT)&H}cIq*H3j?NDZ^Np*AxG`{AO+A7k>$NM#j+)qTk#-!MAWd&$ywNs=R;AEQR 
zfEMd{9zro3Hp2!|+iA&Tj<^y*CuNtVy1r&l&Vg@(3Wd6IB%hi*No}XqEux=QfqCIfTCBcCRS_T{ z_CEMo6f>X)%VXUaJ1@1I!(ueYKNu~~%j1?nCJ13M5P1vY2U0M;{odao(TY{6;qt4B zi?Y|W856aO0G;hH`>Hb6IXmerOhx2ZMMp)Vy@|n1bOB&%8+(yrY`2a(NDF^RLMDV> zC)A~;1rwdZ><2bF)Q?uf}TX zhzlBvXxcS`!PeX+MF*fbUHx>q<)HI)K&VxgeIQN=cFaF|&GjMKnuywIh0y#7c-2?} z^iz?Ogo`pNe7XzM%x%zfU?PZPG@?u=(p903s>Fd&A{VP3De*k%^y9j37ytZ{w5arF zE&z^h5T?U>FWa%IEaLLog4`A2xbGW}dpt}O%b8-Fg9w&_&P_yMSI@ft01ijCwI_Z$| zLULU=I2^T~^FmW-vbYOP+Ik+P*%?E-DEknph*_+J(&XPqzzxRnN8b6P#Z>Flr zyVlv>Sa^SwrA|kR*vWcE4hPny(+}NmrCZRAjns1Lr`-&Q*U@1s6@)g0A-l*23 zM|l#S!*XLeq7cG(TKtZzDVhexJ!}v+ja#@3dJn<1_Lj;3Lk=M+2fA3X@B)dKq32OH z3*_|@-zXm7WAE4e}-HCA^7oX0w-*{B7kefLFY zvnu@8Lsx?|0vb*o_SO{sO`h6ubJw~~uc92MV3B)C3RMDoV1w4do)w6(Ew^BSKZnsQ zjf1~D5YBYwdkB5VfG}qHBcA)ra{(x}3dQA%xQTUM=(PWeO8dyd(8Vu@=I|#;|3*gA zgCz-u%QQi{2eVl-N(Q==e%$J`ud_gLgqAjB_q8A8%dkuLzOYPR+;iE7nw68kIO|bH zJFPNbzb{x)L;5`XxW#e(9&a9Qh8AQ*pq9%}!JXuA_+xY_)2jFtur!|qgvXc#F z22*wE+Wx{-ULna>CLAgxlj7fYoFN^woKkS@_pkY&y$xIkqes7=S$^|`1O%T0Gin_B z6fRr3h>*uJVGz*sRy=MC5LG0_LlLPF%L#y%@&_DLEZHr?@|7M`zajOgw89C(ayI)p zW$L}mABD@{bFi3ea6_i2QTZ;!J+xE^demv^qiO5=7D?2O-sgFIrbwm1eN5Z2`HY`G z8OX7B?b5|p9qN5!SvY&P|1~9+3H_Z2C~^&`;ojps7plY`H<~U2c1KEK?lROh)dYjs zHV3$vjRDebSBO#GF-A3wWE$ksL^tHF!#IpT47_u^Kp`8>96Yp}GCe(??AzT|M>I*toL#D0(m*iC*%;X+wSl3eP76R1 zqa8!+p1ur{{SnoMR46&KtRQS>TI6R_5s&&=TPmX^;ils;!fgLV$vEJNgQ)mWI~%_! 
zS`nVGc(l}0AsL#eaoI%6@TS^XZbkhRM#lhDypk(}%XjPeWL`~_{0Kxps?IHo zQ{N|uU5qBU`NU5_hYDCOne*ShI~1`yGdNSyT#w0M2jmtxx-|n za1rmr2wZ+uX%ET)=;kCm{+t9ZSey?k&?zKIe?)8)#S9jtcuQAcZwxeyvf(c0ni<1( zszpb8S4`X|Sf5~~YWlhjPcpePrO6wo-Bf;p(>Xii*Hma0wgbjSxTuhje!ZBcY+)+H zMrAuu6D`|QA%M9D)N$KDM*)$~qEN%b`GPd;mbkc_4q9`}JW^t0sU1`Dv3P1-E6ZJB8!^oLA zpLP4_&N3X~aFY!c(Mp#(nnUs}`KtqSr!EQBstE0Fc4*-~sp@pcjO4EP6#-&Ze^^j> zc+z;-AFA7N^ml2MU!(X;b6x~$S*~;?aKb6z>}9&8TM`GBkA6O-b&jZ+1ibfnv}ZoZ zww8g@3TLzxyRRCV<1k@iz}`nb-mXhK+I|$(SfJU9S+IV}b3f5NA-XZUJ3`7+VHmwr zGl_6X=>v|9Y0SkqR(o6LlMx*{Z}I-Q-EF{mO=80IOQlHi_mF{qxu<_{K)cC_3X3YC z?ss2w*aPp=gj^K>zliF1h1^2q!YhpEmNE~~@|#ab0MJy-v)C(0~dfp})8^&{@qbUbET3Rx#q{nk*zBmzcGf7^y`U@m|TZf&T#F2H*^PXUT4=UJi ztwKsJ#zP?>6Sl)G)PSB@f=P#uAlxEyO#KU&nEJkokZk=0z8};fFZYE$GJOhZ?6ozW zOQ=#i5fT`8`YmOgzGg_|X(&0<5-}bM7jkrYxIJM%Ik7CINGC)|007wh<`V()42>x7 zSM;UR8a)pQ7z)h7Cy^SYiMOKka2;TffSoN%hmmB&%C`|Wsgf0^l!YT7Cxh%`2N8AV zFkx@N?F^tSNx3RUL2nF^fQslgVoB?vQ|nbkDZ4ng>X%3&=`4yw?rl-)n|eHl!jwfb5{U*iH`Y zRMr%9YYjQ~w%{LoBFqVL&te@XlMJ(uSS7S9zv%08$e=l-)!&U@=!zVF&gbHa(y8ok z(yP-`v$_Y&GMCM8bzU3l`~~|PmST<*73yS+&uaVZ|KSaH3!(T;3r-6L0t@V{s%2Yo zq@m59Y((yf(`GHlkY)y*Tn}vRt#^u9j%CpO4CPOC^CuZXcRgKHlaI$6VF>xQh9Shb zN2Ef$skE?NmXx*-Altx#)u0c`;lA zWI0!t5Omo$ARVd=G7UFIVfvDcM>QTzQ@ATcqsX}0A96eU1B7<5D+JxGc)%1MjTo&t z&JvAUV+k8}kci#M7lX!l%h4se9T~%7bt~M;hG|23+dwo`w$i-4z->hG*?TaV**OTQ z%H%gbSaP^KuA&|PTT&6b3_gwSKvB203DD$L#W?5*>OSSM!P7x8? 
z&d$)r9@s{i$M`ry5TDs8%U4G1IKX|FyxmflAInDej0(EcCwC`%PhQ3#MVAz_1TqKd zZSI2ZzTUdU{Iq(}B3wx3irm+J-Qq-A$aHqPu6kXt>g<}8#VEjgujw@vYSo^cSKg_G zAzgEg-W6Bu0_XuMGZkr!B%B37rayLu4>LGBJNwO*NJ4K7+>h@@)k3u8`wRmq<4{W9 z6k+<_Y%yve=@hmd$b2kAXX4{JD; zNg|Zu@_zn-yCscLG`8PFDvq*08VQrByW78!YOLR8jW%T%A^Av4*?#O2DbY50U8Vzc zl*APFk~2WHO3!^>9I$YfjAt2e%w4P6OLjbTNk0-6pezJOgvFPe8x%XA_3an~p_`&1 zgrht4H)Uuelbj?P@H&i)ne3J+f}J3jV1SDJ>mO(3jNEl_^{qJc-`D=Bd0>yA|^%&Dyu6t>8vZDwpO zw&*gr6X~r`_h~w6d2`nD0A!LXTijbS;PFv%Ynk%#*NUk7MJ(*72uk;*y~BlE3hAIY z9jujbyQ9xUj0_`=d4wI@o)jezbLnQ6TYM;8&`wQ3t$bFFuFVO|&rtNiCnbpMw#BxW z4Xkl389lOR!FeA8z219G%HquP?KePZ7A zZnH2Z9R_8!cAeyKzCPUTNn&3=hzUmWO9c28=!3`yF@`Q`$B4SIin?R?hgnjV% za-t?jE*NtmsFZ;lOTpW`kT(nm?E4^WyJh>m%U4q zrwm==%Yisr+_PsWVDW1#d$|aOpww_`1YvcTzga{^+E-(5SvmF=KuDChux2%9EYfRo zt=e>fx+U189Z#{z(7A-n@Riu$Zxo`@Us2Mb-mJsPa18Md1_?98Rmn6S8A97(8(>DZ zz5&f-m%4mUGpRz&G^+`@AvQDJhuRCDalTR3x!Oaj8MFzB$uxhqOV`xNi-AI+>OpyX zUP<*pCh9?s{*I#;I@NM&ySl(<%4H^LAqHVlmW@tFWf6*-Ps71Ez1HMFZ8mvQQYi9E zbD>NG`BMBhCEf8Q{yWt55heE7m8 zK<}|%e`lmNl1x34SB;y1HhP^^Fr#4lrThTN11RX2fjiySl;EEjA*0r3Ur(d`J^HD} zcBHtpNS}?JsZ@KW$omBE8uN9;32R1Hr7dTpb}KdPdKGQ+=aw=xDP_UNq>i=bYPz*M zkWz(sKf%jvi@G>^f94nh@zd)Nx^T0JffC*#-+O(?t42?*Em*5SL&;|}k=N33TB*4- zW5uKr0i!LDf%8Tq0*%%N2*G7WvM1Rm-nl;zsQKaQOQaTp%YIOO*_qr%0KZ zhX=Z@ET;=(12vc2y6~!T%xiR&v%&S$8FzJ5J>SF5~OjxZyLZVrrk&P0VExlf|>efRP?`64BpU zA(tfT{_4cLZ$IkyLM966zB{zdM0?AsJ&X_n<%HgI-`#n>r@rw1TptH-r6}R~skkH0 zwkznMCF`A6-siiv+?HV$+hEU2Y9Doj>*eCr1D$LkLJhxqMM65c*@9I!zFf!aSJb=J zOn$I z`Zw&gT^j_AY-4hP453LqGULM!WT4UWlg}=6N#myAIircks$!$(=s_^i%W3(emBi;P zRXV_;ki}JdF0WEp4q=$YPb|~BH|(**gBNKR*(k)6CO>*FT4RVmeD5%GA|5JhKyc={ zntL&_^O-(&6MR_eDJl|2$d^Wxx-RmfZtqB3?-qKu|8VXX`MgHByCm`i@jM@_8H;Do zY6lZi5g}9IA;oq%JGKvn0AvtVNsw$g#kh))Y9!g%Z-ANjU_5EETcdAd1=jR!e^I%W z$w|W9wfa`N9YJiLk+l1r?3X-E?h?v>R9R_Ra!mWx{ zj}4T~4REmXsgC;md(@HhXa(R6JPVYgScSExqN0+gBK-J%vB`^2NKc^@_r&1CVebqH zC|aC5P_&R|kwIZj#=s@B?eGx*p>r-89lWt#HfhmsEhtKnd?-iRolN0z;F*`-2fzqJ zC9a~~zeHZfYZ;)GDU?ZOeuSBsZW*LHkx86J%r(P(E`~jbrWPtihU0%jb*iZw 
zhP>w);5C`wOwlcf$OTV7%U|2g1XDA1KczkCG|SNUkD&yeD@8g?kxe?qFW)Qm!>CKU=%i2W!~a@M$6uyI`ON9=6m0O=V90z+=SA1ZzQxYc%8=TvEIYAy zyVlp_c&J-cC?{GRC45KM`bQy=xl0;7AmT_g8MV(2lV?*s7#Pf{pU&^ja`$y36F6*i zf}Y~&QT51cEbK;%T5dgHH!iIb_1laC#ne{5+aOU`VX02WW%B*+9jm1QMy+wFxC-|= zW}i9n=)B5A?H7)uJ~GwsHycu}J;^e^ zyx&4>>5dpAVp@72$=S!em;KXJpjmZK1+@1Nu#G zLYW7_rPJHjIQ1dZ8eFRXj*~S`)(5?3GtZSrW23nq&A=SM=M%8D%~aJa+WIXv_L|V5yxY-|T+7pCX-Dx1mncER z>5L8*wiI`Q^gOpUmB5sHH|+h>goTCBb#mM9$#|EA+X}CEv2_{yoLDY1a)?WB_K5a8 z(87_7y&P!KWtVQ5_w%uU`;x zPck~RR4*j6v;uW7zo#JpA`E_Z@{%bZH=f(-dN)Sf>b0bsWLmNwtuqsdMr|Gh4o&1S zLqC{;VUB}7{T2hkZNxh=prGKG>1JZ2fz;2=4d23NG5xpD@?YO@BSW_Yi5mvqC~}`f zuo@47Q`OS0IOacGuRpU5*wIEf3Y@23Qa>rHQ(=DH(s}$?+3SRFdE2vkBbk%H7c1YC zs+97PgMIkTM#Y0+{sdabvT;PJgKo+0f@6Xj9AW-%>b0Ca3+m@jMT=X$yWL({D3PYN ztx?%&)U8snriVG+@{~>7;#j_$%^ao7f20>J0w?7VT!DwVDA$R`K2iNukhC3|yI$H1 z+=u6P=d5H@l@RRw6Tb2c_FP`Egdd*l7YZ9+G_Xw$^1LY^S=NAW3(ks7#;G$Ga>>V< zB6X=YsyshG+9cj>)-UInC)jY|QK{0eK6D}_z_w;$X-g&rXd96x#H*&`@LFDh!+?F; z9(o+xhun_MfT(?D5ke)ryw{_nyG4$s_iq_xhS%qcWK($`a>ut{3vzbmEv}H0E?D_Lh3>OsuTq%$@5Nm&7Wmpn>yE4(~s!xjkJ6bWu zp)Fs|Sh@+POlp*Lq%>Ik3 z%+?uySu?r3olU=c5>+pPlGv{%_iJpW1YStxJq{0iD~bgw*Msghq#YTM@*l4CB9g~Y;PZ!~xE)ACV=){8?oPVx9vJ%{V((UKlY-pt9 zu|H2o9w4ZSlpmD!UOErP?z~-Z;vG^h+!Nyz99v8px1|Q}O?J#5F;!J%sNowg=(14mnoj?av#3IVT!0GbZnFxa zwSNOAwL#yr7*QEk46#bZOnxnLhYGcLw`eRQ5^;bCrJgU3bSM+_BuAL-i0%hOLc-Xh z^Y$a)RzyLa3;b^*VLl#yWYrj6N56$cv!oX4;9JZ{;OgN0jW^}3oNG~$CWXXN@`UL* zfNRZ!X@pDeanc&Q5P@9KI#p|Y?ytLkO^a&7j-6kLS0!#TJ#V84M3E}Y_6abi%Ta*{ zG`mw4U=JNFxFjxWs;wS7m#z$;1c?lN2V7LBru{V#0QeUWd{VtML6fhBI%EB+@$T zwz7lp-N^KQc~>d2#F!Rf4#j_3VEH_u1OQ3B-g&=8N`PUdaTUMMa`9e!@+<%T&{R(B zh>9N9nF(9ZX>|phwtq!%m!S5^ou0@AufBvmN~WxsfPmun?d>i2sO}ZVxylIMNHxlL zdRZKrYc;q{i;e-jQ>w)fMSstP56PF2dPS|zllB;9E?_Vl*?k_nzMrQ~*ks+GYO~h( z*WULFtS!M)iNT3FC*qUbiW=*b`3d%k?#cWYQ*X62cC6+DQ(`?~yJmg#>cg+REc8x{ zgnhRIiE5^|Kl!Uy$vF_pgNHsw?T^*-^Wf|AKB@DcR71{NAM9+4P}~0_5N)hD$#9Anhpl2>eF#fEVDk%6*wOo<1pn7_J`i|s}%|F?xft5PhO8J!d{P}L% 
z?`ivDG=wrY>{Jd~v=5-vfBGas0V*Q;790&45iZYGGJdct&SOf?g~$cl%-y8CW$F6H>7eWfi8bz$grZ! z%)$+qK${hd&dqwnZ3DL|ru;WrZB=6&g;R47WBo)g2`Mj5ZI(4sIwc(3b&?mU45r-( zib0v-U}H;gb>j}>sv@*5Q`@V}S$pjI&50>cG*AA!s5NdYtKs8O?j_e+fXxxMfOWdN zVWJnj95M`|3+#W9)3ScCEO|i?!MxXV9A7qNu81bmz`Ec#lJ^GWLHhk_9Zt^f?~lzw zo2i3|?o?j`^#Xi+P4=97qFi@HU%nfC$zh1#v%J{>lZ*NCQ*0Ja5$U?|i$HaVBe(dN zC|d;+WyRk)CJ)XxK538NHGh_RhaS_>%n*sE7I*$#*{BQKlf>@78*JZjnsX$p*f&wr6a+{{C&IJboPKDq5k*Jg0HWXd~;qm82N$#P1dv~ zzq5~7rURVp(iwI1i?+A6jA26%Ky*cMQFr%MU~+HM117JXj&j5Qo4~Ehp_4%;QWh1} zYX4x?Z=e!f6Q8@+&#XQtt0B8XX#%zPj{%}9*W)d>JkKsrISGA)ciQM>-V36{ceV=|2}DVi5;c3rKrGpjPm*G54JJ(oaMnA2ThYPWSzB6^Fc8+3>P> zgBBlwL93C7t(DXIb2N=_`(&Ae%$XA>gohANJhOjq0xa;EK?f=U4JUks-TU#Qg()dN3QvPnyjM@YqrN(o$+)(&g4XSUuf z#9)Jl#@57j*-G${xwKtK$%tP}f2Kqeb=cc$Um*(<9lsM@WmT+8;@SE|RNQ6Sb81!w z1wCxPGFM7>M3esL*{l>&H*Nm=lV1XH=LdE5XLL|}kHJyz&FNN_-M=ZNUA!Hc+XloLR0$=; zQKP$kXYxN&e`3;T0H`KjFLEEEHKV(m0(K@=c4?wGYL@k+2uP4q%bcb!TmUAsE(x(N zHQm98Lk+f^cT)2)MA|mOx!!j$1`;BGS>0?F>){6~s&my3Jr@-= zZseEgreys@&XrxSWLXP9+E&H`~pVf!$Ezb!?T9fTRdHdJr z**TR;pnf@QPdv?E>(O0XNvQ%5c<)pk@<*y2UAjl9hHXv{<*5SUNxv4Dl}uf-A3F-z zea_c+>)dI*Nig`_C8ybXRPD?3C56H_tLXstLc8v*J8pxYil23o*bVe!ht|<3xBJh^ zw+fbB_wgsbbM-&|{HLe-12TvMJv4m|tZkvQnR5N-S=kbdN)Q5nc5AukcKBoaE93EC zdsiJ=^n613#NYIIHD8Nh&}NU!`CU|*FY-IKj3Z$Yn0Pl}OF1Y9Ko4ag3JHMY2KkjHAlB^kcCTW+_>-SB>^0Q+VvS`ukS4}K5y&OD;Jo&Qq% z)!5rB4lCDdgBRhn#y&%mtw(aj>!Lg=JT6oXrOYW!Hh*c=!HcPV45 z6|o3GEnzG2*N1}q+gP)TzmXwc;FV}cupMbbTFKgzxH|+*s|BedX-)z4xzOJ~pXkk6 zOwdm1e1nm60DNh9i$95AToMUDK{RMGFCIOS8$QDdUO|5~=6HPsRT9q69E*E zzxX{fWSF!rW7pZ4h;Gry`Hek;RO$YJ+-o(Y59P1iK6BFNaZuKT3_wq3;LO6 zq#ZgZCo8Mi>rs*ec~@0arrFA2Da+wYTz0;wQSsX~i+O{J1k>i^ckpDqCDh}gUq_e) zCFqc-EeXm~K`PA}#Fe6<0v3gVCwBL^Zt#7dO|$Kvdf*)j6wjH+Nwj&4?qwL~sWMV-+H5-G_yW-P00C6!hWI%+W3ss(jp}wHbC%}*@&6|Xvk?4BH39qoUm14} z`%&=r-WXW6E7NkF11J^_>b+2T-pv^M>^koCIK>1k8(#lWoq=TrtMuuEd<1^1mB<>v z6nzCNd}ZrQQbuY;ico*BUSH|kNaFB&i`6rte}m@B^WOY&067=wK&i_DHcNXe2?7;v 
z&zR~sxioxgvp*4c<}?42h&%&S_MXTMNRFpoX$dOZ3pM-!A2;O{~a5CYYJlhM63-81qAH~|G>mR03jeJ)6i_9bElr6(x~#b zrURz)laLSo;>6$V8QJv^z)N}P7(_pRB7zSFz1!d?*=z?t{<>Xk2q8lxny0Q6j|5v( zI(0wE8_70@ zgrqoXUZs>VAfe1;AB4H-;dy6Wr#u;vDVOJ{tlvqLKDJT0K}6iE_>mA2{KG=C+*0AX z3lK{rQ-$LRvNAhF+7Cgq8df-<;7{)QUMI@32X_(?IMACztZ`qm?pBGzd2jQBxwqO7 zI8OSy=+{gdE}!vY#ga?Ln+y7qZ*?sF+uI{&_3RjT%*?OW_8KYn;Z|a#{ubL~oYqM8l$vREn6gNw<9Sj%vua)rWPdm9)>^Yvk{2VZekj|}C5_aq&6stq_oFyx?z7FW z<7gArNvrs18m7a8I8^I;Ht^*ml;te&t!3={{u}Rt40h&A2$ZEXL-q}K!(fkRw&Gwu z$v<%eBssahFEix&{O?9j1FO*jYv*fD%=?{Rr*yI>dxfRB7)Y#sCnsWeJ2_EfDoPHxtSkv$L?p1jnwy;9%)>(w$EXE&3SmJPc@a4`^ z6DKHVx39|s{wiEN+CtK-1G~J60AUU(jDN?3bJ6|M^KPF zVWb#hxqDzy1>rK*4#rUgutYskuuw*9QSeJ*5gU^p9NZ91%j#l3NCt1qr;50Zu~+`TIn!V;etr^PQ6KL zY{ezDN*!k0Z4#d^?dwFaeqB)@=Sxx9HfkD5H{40&&i;DdL%9WRz5yL0CpNkb$Lj0<{#MCq18M!(S5S>{sVCBhd)k|cDU|r`f7Z6Vj z#_PeK>2_asBySVX zou&%9{=p@GKuo={pPPsk38b!Qmv;32t(L4kfX7eN zN&d05X-sT4Hs>tmyV#M(s8x<;Wx4?VR096own3un`m-TSD_8-h%Of$oRyf2!^wa_l_+z>28-9$T6e zs0D$ys%E${PAuk54pABvsO?13~_RyXXvOb@`$QH3IUxfe|HwW*I{m1H?$L; zG&1xZo_mZF*g~#y&9$yGjQVG^_8HMo67fm+P=@&N9n%E=f1g!Gp`dJdOhzadQf53B zGvEy-=PZWE!|H3J_e^=OMQ*4bX&2DVjg$uwc?W@vb(M*Lgn_pZJgi>y_Mbm+SkYKH zA_UU?r@Af8zkU@Y((muguSf0pdbNj-*|1p467o`B8U4Sh2tU1Vy6!g|Gm~Gw8#=t$ z8zkNJSvaV`zzR+Qqrx-T@>ng$eUZ|JT4#w`<2TLBIfGpk_Hh+32!%FA-LYCDg*n-| z3L4@>-W^T8ameA9K$1MJ?f=3(=7J|^K#k2=L5`@bSMSXg@S!1R_QX46zn@bex*B=N zS_xxfqJrYEGK1)YMf4`8h5~Lad-Ws{CJscrV3-q%6D}NOl#!HoS|ZrT=q2YX^DE)h z^7`kmr$L3Q;EKbyg_~*S+n-%SZ|kfRbUYra{Hgon-YC|vCNN2asBycI*+|U_GVhx} z_(-RR^6uhXjEks-O>Zqzx08hG#ZA{2g7rgON4aa$IPTS+qQmbIc)Qdx+i!74`U{x6 z8qyqF$mTcawr9(Zqn6KBBfCZ&M+s|GY8EN|ET0?Kclq-as!Vx(w!`(i7&9oOOS8tN zbv%p@1UpLRy`xUq&H7)Tz)~EWspLRmCZ$rh?&yw$hs{1Ll3%y}qSTh@x)3tbyK0$W zArAs@Eh@QR(YeWtX5=Gz6jzC4KeCd)bG~{*i4e z#IZ7$n*wdut%64rptI{b7eUiQWYUo?tr~XW$cYifVAIHYd}ICy=;gTl4;M@g1lX;9lbu`YnV9B#!_!zS~%@`-9DhGP(c4Zxl`Ewmggn;XJhrtRt9IsPBWCZ9t0oWf1d$II0LnB 
zuhph9&Qipe2feq+NWl@Ygr(W3w-d)A_mv`Edriw%Sli0>Cy&?5D$g$s*TpKi#}*0o5`cX`QpC`Pk*rEXj89@TUE1dQlO~BDv_>Jl`|dRBXx;ec<`hB%CLE($ z89;9Ul#D*pwJnwEH>67&sbP&yBed{|ct0aTNld_kh_$&`Mx9?5_M|&SQ9&(%DJ!13 zZMB?94|I?N+;wvc0HJtMeFq}qgq!{6cePEnK|~i(6wDx04oSTtnb(wv2de~OxZi=4Nc_J8af=TBD`JlYm7R^Q+J^Qbs9;V(z5RBd>N`j{V=8|#`(=RKq~prrnE za5#1}`(UmwQW?4KA#Z3}B;;XCG$0@{6kh=A&bdV%aV+jjS2r5<8^GE3r|eJG+Bnmm z(&O~pQfRGyzX(*QP!S1G85KE9a-65&p!$uu-q^8FE6OTU9qVwtKx7=aJtCsq&Pj3Fkh1cH!;`0```tN) zQpqAgep&r=qxC2HkP`n71?DMaCQTRvv(k3n7m2f4g54VohK>Dwsg4@+9e~s31|PVO zxwZ&=Y^SezK!8Px&o<5Px{<`NMbKsa7$Wkdw#)Lg#sa`G{WJ9s&*9|DIG{y4_;FwH zKg+uoeXK_7mH0Wqwd?R;m^5tLh`@5*=wid_tal289*C|P2+y_+3J1tyL{a2=5m`GD zH=iHYj9rpW1EE>YM9MgR-!-GyNrVxd$CMTviyv)ZD1Xn~tWUpNRsS|lvB^a|t30niV{iLiCi=n-dI|8A&3+H45VhK))LQJqc0rHNfdpDcrw!en&vWuU~t#3BMO zJN_0P%ibW6#hea=uxpt35m8$|TV#kn;Ne7VpPu1jesq=nMGA3AE2COK8?B3IJ`@he zcD%t~Hzn&U%sM7ZOD+5I{!Wuw-R#Oe*g-i(BEl;t;;o~d;9ik;A5pO>d&>^*_QrAc zw(HXD`PFmrQ1zES^uDCI>x?Pa2E_uutC{iXSyIE3JIe}hO(&I}Kh0^9!S|9WTU?%H zvIAe1!?@f&@9~f)o~cw_Vzvz>e3lk5Keu_ez3G(@<6w$Q<%mV9HN5y*%U;XCJQj7> zR9;GF>eq1cfv7)gn@StsGA&~sP2JX6fs6O`=8R$EYkG6`=7gipqx3()Yf*qA8w4|b zfP^V`yZI|=3#cB@X!OOvGZC})0|Wh1jo@}JCY(Ub*=J}t@EWPy`|6zXO=PH=I~IH| z-ub!h@dE0`mJLth-FQ>6?bb*vm6*`aF~p0(YCgroi)MT)a5wz?Fx=9Xlb`)BkkLp+ z(uzm*9N@(ltSj+Y5g+9C(6W!vyWn&jDjIRIGXqu5& zkVF}7r337lTbz||ao=vb*v-qxowi%G?N$y3ijl~Z*y`Gl_*OJ>ro!zJ>fKL`p^BuW zq-`T(wV%E*Z+5g!Y{K|~3|HnbMjgEX@}n6y@$@3}c|vIU3yew9*VlQq{z;ZOAapzr z`eR*#*{=}&++EP$sH(Tb52U+9Ot2FmmRtC`#mnV!w$>neZ)W#Lhm=0m3E4BnDkdWxA(7V*t6x>>l0J8z5bnE z-I-iEVL10W@6{EaX_{MxRDw^DRjGdI<|5%S^B&>I$ClSlI&BQeG4(7MKV1NpmEjPx z3X=K3YK4@NLtwycq|eqyWvtBFTlg5Ms=Bz&qnNNMA*ull^J);r`7X_I7_)76xI*~b zykSk{&APs`t%SqYgtPOnqhoIByQDrRZdEOXg#xQQo+O1Q|I;I}_=9fNEpJUPaACQx zvi^4wyh56fu!v^>-&mGe5K_8MIct!nrpKg@)C~*}W`BcJ%9umX<^Bt&$MC9d%*QS= z4GjfF#Xh4BCHhkNWdP)x_B83%j7J33e6>%~A4YeqWxU{NK8y=~gcvKTGz)d0|gH;<|^!m}ZvQCKtFjy`m=ZyJB)l}`|8r=^OrK4WGoiEXD-vfX2J4xc)$7gQEf-1tBPSe*sJ2z(FkZb97pq|Z4dam zCnbSi`Bw09iz?g 
zt*Opek)3h=EB-I8?n~G6zbTflYSvL|sTzp8e9Qh;SmgjdgYvS=m+o9__988oa~cy5 zx*dNP9nBK!!!#Nha08xZ^ro!zgp*G_Cp2%H(y9x5Aj6QRf0KeINjWs?Ix(V|Be z=xw2a`}w4GH#&P96I+fzIun6;*(OCNaXYrJ=-Vtujhvcuo<+xgv2UVrJGL7?NK`V_I+~ zB~v4~a25C2HkJ_wfjVM>nOUx*jJGktBM1nHK&O)SdI+~p`NSIQ;Q%cb`?5E{cSy%_ z+CT#~Acw>iE=NQtxmv!*7J_j^95N++Y$zKNzB%c*a2r^iWY8Kver`KPDch;VbX0Mw zqHTCj+T*&L8n;omGBb2#+ew_#;!eNi8}8~xLOt)2S)L&#JZ}XW8&k6^r7&y+p;zu$ z_p?Pk76^LXW9ZjR!)HOpOT%WqF;-K0q@8{HO&yhTsq_GCo}`n-3SjI>j*4*Pb-l(hd9l(KXrJgh{vD0miw=-*OtI|ORA05} zk+7T|-*xr)eLQh%-}F)caZFK_T=s>p*Q!v8;76$mno%IF zfLHToq-_<_Qi*hCRso4cY%{s zH;rCveEoB3Pu^+qn*wL1@+`;4dx@3_d)oigXBwDc-P}hlC$OAj@6Y@*gZ51!!JZkd zLL&`G621<_m3T9l2l*n8<9;&G zRg~RDwf|l6a!IJ_bO{bH3GCMS~8`O5#^OU=ri%l%|-$nLXBvF#-Xh z##^*@w2Sn$=x&;ITSw?m;fxW^)+8@ut-pVoy~xj`B$)3jK$Tfo6IcJkY*#NpluYC) zq%#V{_8hH7U?59Yk)H^JksVa-uMXMPY8P0wm9?=x>5J+#kA!rDsxi1I8ODR=0PtDi zezNpIKh3mCQ^OgTqNsD-^J3h7!JA$&Fge4Bm7q>Hm9d|D?$u(2zwY2WvnK^_LeTGOER7_BbS}b{lCEsdY37YgHan1bjfs>8VyuOf*AS2~PrcezbZHM@vm9OSBfZ_s7rt`t3~Tp!vb= zBtdRd0uV@rG=7Lxzs`5i3%PPHyCf0vW;36-aJW%;wz-yJ?%^NcoHb)Xl@+W|? 
zfhaJO1%aLK{W&MaBl4l0`2>#idsnNnw~fa_-PvJQ=n(Y;{JRO{p8F5GxgL_>2gC*T zw(k}rH()f#yvL}`DHX7=L1uPoRc8YVi+YBn&WuCbls%Q+L2j&uZ_2lQAV zjM!o7TdvGyc-jL5ypY^cOpH@xK@wA}y1|=oPfTU*i%{l!J9@#QwSIr18D){W-4uae z=4)O4%qy_&!uMLe=~jIX}Q>KGC==hc)T|tM@{%?E|GrDp4ISOeDx5wnrHuQ-o1VC zT`C{^;3H~u7e}#%?G&r9aEyW1A%SV|_@bAa5!F$-FM^*pcR?X@a44ZxYrgkJU3bKc zSMs1`M%9i$P~d<9_ZX%$tqQBnd1~#XQK*?vxcMR{SLF^{VJd$jS)_>d2SQ7 zw8jr6$&%PYKL8l!%krIjiQC_i?P2Ej*H#mUOKVQtxbwVFdcG4W?epD1FDe2Ewf*dmq5znsg} z8;_M$Q$<$ptv=d%YKh4(a$tD9eDY!{XUV->(-zy+c{T_0dOG6u0>INk>jThSZU#-we!-JNQhy(967DB^)p)anm5M~-|B?}IqEVPjhemg1k$5M zg3frbRNrqL8m5Md-ja(F8Za14WeD6NY^3@SOiA9 zQ#|k8Jm3rzYDV%&Kb!H+yFIwRT+=V<*}h#_AMeH+jjJ(^eN(v;roMl}V(`h~M?aro zuH%4dXGYk3mpjs|6T%tN?`*30o@x)`1a9kA&qXvPJg6&92Q%PCFt}E&<-fJmAQSYk zs7b8NqMG%$;2-)D-}Ss++g%;>~IS_L$@OF3}abfsC#WB_^0xQ3SgNm|b*ZX!wy{R{{d zJ&_}-Z)C?#FgsMMQAOasbD;d6@0a@Df^Im)!j~I;81$)E>Ic4I093~TYT*j~*kPeI z$nNal^o%^`L=ahnG4r6j_vPoSIHE&ka60KbX6W?V+B&DeYi*=Lt44>Yff7d;ipK*c zC)@}Y1&=gR?5TGols;}y$aJu5rmuL{DvPAoVL4pwB|+77 zU4lxAL-0m2&=0EUfb<>1vW=BYylo(xX>lqQ<(f3QC|z(zzF>On0d1c{Dd5#6|95qk z{f*Q9?#v> z^Q*8lSWSX{NRisgQ^`ms3Zg^owmL$lBHlJ5WNU5JV)$y^8`_V*{ zTcm+aMM(sCqFhLFz;PR|g@d~17EHz_4X3n;xpujywnj*IrX`c2wQD?;ZX`Tfs_Pd3 z^T27p^w!Z=-0L-POJrd3rgIbKJ`4Wrj;Yj(;+=;A+_<`CL=nvt~Z$UqU zOQlPKwTW5)q(1t-@DPmBl{Wo~@EOH~UIZrb^un)sI2m|-h*cLks$Xr za`rTMiaBIqB+DCF0M~SeevQ(DH9w)0?Iq3V$t>{jHnTMZkH$n`#NEuw)cg(>$RJnE zP0ZFm>PrxV8!hm0H9l(73SV>=j}$YK1q%1uj~5R+4rTZ+xVc3k@JZL26|uU=f6@XK zQe@j8$`+>ZuszVCdb0!$x$9SZ1oTgk2sG{f;Bw1y=oLp$!&Pt^2cyrP;$jX`TlE z{liOfwo-=lmsYZTYMLte4_4(hLE}ELZD4HZ51?LsE!uZj`MDuc zM9;4MFF)Az!pQs;k*Kls8O^cC6fejqsZJ7ZOIbbnC55@Gv&Cpn_|rF zP9TH`vrE{}9U{OALI(8IXtUy2y`7AX*sdVJRt#gbC8U1D0^|*(VCQ9+=-GtOk6Gk> zGe8bsjR}KCBsTAv=Snc79otQLF-`F*_Q(%{w{DcWzn3Rgq8JcM4lf%{iX53g0rRx- z&r+xbujrv1@b4;{!~5pI6b~)Eg*+q5OD0ndPE$)pi0K{2*TQ#yQ!uW85qq zsm2{p$6ccqn)lx9$Fc7%O)R3zoK(!X!aQdeC4uMly7GXBlR(BiKj4o@35q|jNWCmS zb2k<0(M`^xf!C2n#`uiX1?Kx*A137^b5xAU`LRKBBo}}bcB33=v 
z&KoR8Wi1iG`ULZfKli63=_sDDGD@(ze(NWRPnQ^=O}81^6(QoLZd`E%rQ};AMdsh3n;(SOV}LK?B`3drFyH%q1O`?wx?*P9jbqc?jh7 zro)9y&SmtBT7@Q37rMJWlD5W0j>aq&T|X_y+J+=|O8t-1EL(d@B-Xj5C3I{BeKZ3s zb`ca-3+Z}1H64AK1_K=|VnsIE3l`Bj6xIA8Bxa;3ljvUCSmY1u@KbffMWtY`EUGwg z_I|6jULOhnuSoxu4zAd`I+>Gw@)=Jc`BgFVb&L%mwFe4B2P&!6vr{>mGiKVfky8haWZutR`R;@&PcS-%2ZAsho`*K6Ibvog2Pj$J|xVl~Hm-@wA<$ z=*N}^g(13U4Ef&Azg;H)pWNL^cQnWP*N}?l?Hl9LWDA_4xh5frz?{sNfkV|IR7u~D z7A3`0`!zN`{vWEo!Y|6c>35foMM}CsLaC*@B$ZG=N?2IByL$-}hF5pkDDF0?MP7w_tz}nnjQK2y zGiS@x6D^*Wk(M*RZZHr2fXf#nCQaszQ)LeRp&I_<(p{A|&uYN&d3!b0|L?N%X+oSh zc`**cE%iWOjvv8WlwwCS&|O9C*By^yzp)*t%mOC}daWLPgE_!dL_2W7p|#*pA%?{D zi?zkb#7x}$&wUIe+K(b2z%y!jSM^i|yn}=?MeJA5WAEv4Gn40P?#G= zx-kh^srnH9n;2HiNqsK@=p8G=?uc*KpS0JLgxs2-c8HK9Q6?556GP7185r_@m1L92 zFb2cq+jbFp_kfxJw8$Z<=Woj2zc<-ZXhD)AIyIvun*UtTQZdrdnV5ZXCI3xBtg(Ne zWIqu8p62x%IYB}ESu%_P{?rWFwjT$zn%iGq!jrn=U~vj85*qhh#XzkP*3k z7b5ZgBz4I#Y!I0^I@U78rw#~wR9 z-Eo#fmb01M3|vNDIJ4+Yb>{)MGno2e%Tl)DqXBG7^ZN!z~ zv$M_nNH?~W)N)?_k&E@A?Iv87Y3!YnWZmxuUOXQli4G=PJaIG0GtE52JA^nPF6w`+ zjYK?0>hYQm^WH8FFrXhpGUR9eSl7F}*BzdU9mxgA?Io(y##>6)W;mL?yW5Vxs5|4i zbr4;@ljWV-a3rzfeNA_9IA3iEJQnq+iauHUNHGGwUshq{o~JH2hlk^*e^*HHw{@i9GBst#sW*Vtu~tpk(Y?>x4Vs5n4v8h~8K%L~`n>qO>wu#?0A|%*ZoIj~IAkKDL z+&FdDnTOPd7riKR$gk~9*^7nebbT5wIc!$X3JRQ()6t!)Cvfmr@xpQ6ghH@5#*yJp z_(rf~0%V+=HqyLDoGbt*72vCNyHkVR+B=!QT~#$TxFrFJ?8Es&uE$$-kZeiZFX=EH zZY7cKpOvi>al zr$6Wg3zBcXj~s0Cmf4@Fgz829psF|Bk|f&69lx3LJgS-BX|9p4-Qo6Z81nIR5t2sL zQqTx0(Br*{ohlm09BDlI`UHo_e$W+S&CO`U=_WWBP$Y9$*cYhq)T0+O<#%Lf^ER>k zv{zJInUOJR|8~TEcaRQ|i3;+zbfqqa7og7va4cqCt!!#i(5bDze0Ni#+wy}_#G`jd z8RiXyjRnE#vr1uBPJvJTO`G8l9mph?BZbVn(dlldl->WFHW15&VS3 z&89D(Fly)mnj{!q`dhZVo}I=S7bi_5&_Lan)ozSasCZKkW=lPvPbP(t+GiWO5B3X) zil(sXH4yt1CnnAq$Vy2F0EIU6=eE(Ux7^h1NgSko9Q$Yhd=c6}mMBSCfEl%tKa4*4 zqy*N{!KuT|ixpj3xcbV_3s46epkCEQg*Xdk!=$3dgI#?{j=f$$tH`phJd<(D+U0Qj z*=~mJTERp$9)wvU!TBzzH2Y8d?fX-?n-Aq8a|YoDx7045$&Vs)aekK@Yp3xtu8?`Hz3E}|d5^GxX^ z)<^qoHVL}S)WI`sVoPYRgLYU5#%?6z`QpqfEgz{ku64Zb@941or1l+x+C^j(BW^;j*hM 
zuMZOcw)D8socsH$^*1*DZ$njq-x?b29j$he_5aiIXZhj6(Iq0h;bAecY-e`Ui4CuqnXrF;)?~C$od>%Pqoq)TEz4NXxm^1)FEYk=k5CoUL zv3uO`ebSnL6H*^E&3oqPbLrWd=<`tOn^}W}fGrQyJS2u01Zt`p)^R+hEgc%vvVb}$ z77_c|?bDL5-)jYl8u0?*#XPIr|>2{>}`DjO(? zrt6LJPhycMD#7KerzE4&XRP^I)!Mg6QO>o|Y1K(Hu{o{FmZXAmLRzcPZ3VrVE7*&M zXcXBy<%j(Wda5fD?LMl8smYF(0ISp(xbJ6Z)Ks}HH@hrfBv{V4 zms~t$-%6cTdheSs+8tN^hqRrIakjVT&{V6C^KR{G45(Ju)F^Rwrq_H^N5HXykTz5_ zjo?7`iLFtwaeTuZ!+p5l?z(~1$y3QI(o2`+1ZU%JO|yTt)K`-_peNuaW9u%ksOiFs z$hy7i5*zVvp3kf}x((VUTW;eT@m0H2Y!!(oAkZ?H0~0r{CvmnY-zK;(WuMF2%q z-p`(UvSPOs!{`Y<;X>5mnT=GwLet~P;DTg64O9*S(;5bI8;xlS`JH9OFph4XOh7-{v8}3U;`_@@bI3V> ztC`rj+IWpI(7%vu`T$0y2X_EBEoNjB93aU^keI)a=U^z;lkqA_9wN0GupH^^BPpuc zj~K$EG2UO$T?YgvKHeZ7rEUQpP2NHNHVHsa%rHk|o}y|(o!T)-?(ol1Iu@)IG2Dvy z%|0M#T|1Rb5@xSg$mCJ;punCykP1{BUas|AW~7Ik2JPk5SkW zYKN4D{G0DQzqEfC5>5)eC#g_S*HDkXc;oNGj*sLUf47Vp2wIJPVx?lixjY$mK0e+( zj#%^WZ@vx9hS4HKPs$2aV|mU<-3#_hpK;y&DPQVU9o|dvm&0Y8?yEj8a$paev@O!f zoT0qogu5&c#k!1G^r-VnywSb1bNiK1gYHrYZ?nt}K`c>sQ)xuqc&LMVPT z2bTUNXwBl5)GyNbc@US|J4v|ry{0aVysQR^P4` zm!!p6hJiV7t2u95zCe+Mw21c6)RHG$L7dbt7ElPsX@5IqM6XBtCKLudVbXR#fHYpWJqFj)cN3+%IOH|9ak-kdq;D8}jeDf@D- zCo_u&GGcT2|1l+^5hXg1@jNORlt*Lc^df~c;3HocH4)!n= z{n1ad^H(Na7BERcOwl0|00gj~if&J24m z(Uufor66<-l?Z*sd&IlHM=_rzFEE1Q&4yReC71e+dn)SZl;!qftQot}W)1D73T6I< zO+{2%T&;kJr`cbT)JLtE+^CUvH%&LFQrsCoGH5*9zYwn^tLU_tku0o>jOpEPCkkWN zr>>o06zcXK5k;P;2-kG%dJLM1;Pzd74y^d}O6krIU(^RVnncqPT>_OyLo=cPuo;RQe&C`tJjXFsA)6V*kBjRJ{rtM&~Z zoH@=-W`8OXwWAYjXCpl<_W;NPC7hiqqOYMAB+)wR2}c-&=$B??BurW|apj5J7|ut@ z{FD*Um&HP$_$07b_~3b{v+;g z;fO0g{P{dcEW=L`clPiPZ*XrN__*3CB>%GX0p=q|yynzC3J=;oZJg1}sxTuNuE-JD zO&vOs&q$CW_WtPN77c2|cYfnYvb)S6l)X!yj2nxI06NurQj>V}y$wr?Dr-SSS|j}mC_bhGKZnW?vAPv>(|SCM(YpK=Wf z_3X^ds^3vWyQ;Ga@%=rHhFBa5_?@u2Y?7xA9u;U@X^AthFpAWFakbP(7p^W1E26aS z_|K$X-#7GRJK z)9}}VDk#@VihVUAHLz{67t%z?2G@WPL^%XATzCO7b`nC-&IDF}y46sUU2dREM~E&o z!9j*ZpLY-dX=KRr(AmH$vMhX|9Y1@sJ7XeBSQ6+7kVd`0*1br(Isn*0l1zxwuq$m$ zowol*M+XB$sGqpgWNqMBop-|YbZ^%R(=4DE3j~HL6lQf80Aw^H`|%i#-yxrjA+EQW 
z8l3Zdzng@9QMAERHu*6E`s;Dgn(o`^45AeC$IeJ}fISoaf7wvDSZc3C7RmXb`VihK zwfXj>L2V`*&Va&8wG5>>B4fcmLlCB|ZHt00QH8X39mT@e!oY_{URV`MMr~OA zymZmN;Mp`jCw5I9Qt@3~j|Vd#{G%)LB1>|t;;d(b^FD2X=;Sv`MGB72TVhwtu2@s@ zlBYIU)I%gQPq(t)Ex$uPTC&j9UARO7%)cCN7LUkJ7v{dNx&J)wb@t zZ-RwV|1Q?m*e*APdLE9XxW=SXflxL+C$8F|=QKGZyLFt z198sDnZhmWM3mQ3W>qUTMpO+F6tuVy0>9rZOTa7;R5-i6N{}ZV8e24qt*P$jE;866 zBjo||0_%5SDtZs{)ceo3V$Y?QE6=Xa$Mq6L>?%89@|ypp+PvaPSEH7X^lJ5tXR1s8 zC5sca5M_n#%Lf(-71D7HehSz5@-6vn5K0C)2M{=2^`GQ1mlad8i6R-!BDx9YEsA%_ z?&J8HLk(EMd9-9m9T~LvsC+N=Qz*$kh&`>{&{E$I9cWD8w85Pj4_-w813%*kDM77L z-4vd0_j=ma$Kl#tu(Y4>>BoC>L(3tUYNyq_-&`em&>RhQ6mU?lnUl(UVHinb7MYTF znit^A&@PRYAprw()-7Jyf;SB{L<|;s1y3~!VK2M#P-%@x*l$&Rf_Jh9nE`yE>WZY~E_|L?l2Zj&rVvhfFt9jhzK4RN^*JZlclgbaTQ22E%0J)p@_OO1y)7p#A_K@sEIhIoS3@k zCqz)D+NZs{jSEw$DOzwTe{oPPo2F;ZC>5yweK(W2v+I>Mo{i1*!P}iKGUG;Nc+(m8 zFV64tWc3z6ULV!#Mm%UW=gpGEd#Q)>T=_k1TW@CX^D$X-q^$m(u5^jUy3=zsgi+Jc z{vjM0JN2dePNwDRi(JU3U=dBEV_)4-c# z4S$UhOaryk{QEi#}%x_aHVJns$dfP+~GhS|LP5+k7$}f}}UOs!0G}soMwh5if zyywLb-}uptE%1wT2M!tX?EHd`;sJ95#=1vEDRPdbg5j`%x3m{i#LMNca$Tn|{4b+c z2LLb#!GnB!n4>$cyt1f3H60phWAA$GMf<8NSdf~v0x1&k0f{P{f{R;LU|CJtM@>7Bj|=lXowuMc*Fq}^j3#BTCc$8 zEX*C|EQ()sHd3%Y8(4tJ8iSrF1h$@L!%?7tIAhP(68LsNK`e4-#g`%~T z(5$rGQuRRW9E?~f4BfDQl|tZMbVkiSNAZ_-?sUcY58hs0DVj#A;q+V|YDjCQ7^tU3 zkL9ufeiTlfU_1F#q)PwL0sWR z%nRwyvZEI1e~(}9y+G?xaHU3{4Ve@qXp?;`ou%d@3N^g^P1~=eupeu=pD%B`1Yn$Zr##d)-mmM|RLB zn6f0vec@tDWf9BNNX{>Tte|h#kF}pX?#h}|WyxtYtMMlDuP&{5MiJp3L9 zei_)EDw}m=<>|#8*&#Y~g!)_MGVq6B6@DWH0$V&c(5$LHKp7*Oj+hFj-*}H?LDXq@ zsF?@Q;d~ozEFE>3YonIw@ItCnPyx%1I)21{t7#RBmI0*Z*@+kh>LY!MazTWWrjjda zu#Gs{BH(<`P!^{RUqSTW#V&VT19PqmBnj?X46UBQJQCR=^NNchvnso-jbz7a!9|#U zh4@L7+T?yK2B3YlpwyBtOuVzSHb~%C&^+LYaQ{bvka__n{OdUidDTiHaQ7JS(Kiah zZrge$W2Usd*ddE;7*PV$WGiOLj}pCm^LVatl4m(n8Y|Yv0Ep$t9}bqU$!Ve-Vl8l9 zewQ>uua)MNC1P3oadV>Ayg#jZuh}I+ zr*$dD^?qhKN>DKuKhxM)XsUls_wUTn*^T2~8^hnFDqHV0#~J$%M;AX{4!k1j9BQzu zd1Mc^jejDV-FdW6tfo8Yb@N^G85AU1B>3-?&=X zOFU_bCA3S%Kb`Mfa?zs>bM%{{%g|bpK#&Rx3o{S?Mjybbse7rFpi>7xSf2NdX?u#3 
zQ2zssvnuw=B#A)&txd6|(vl)yjmuK)jX}gGQvc>% z($UL_)7UL+z`+pEjfl+wlb9s{a!W7LhreHEivavW>m<)4AB`$+MFWd*#j*`TRz3WC z^_+67B0M7p%kjg%Xuak~v=2x{+pzJ0^6IEi3LNS^3Zl6xr!k#{u6x=I6(KdZ!vA}(TG zC-{I7vEJnhqmfDX(~9@~5r6mPXKJ`4kqR@%p)ST~c0GdFE`74O3l22)yT!XMC7W%T z4pG#M9gE?z#{|Q^6Q21rDSHfez1s3TnMq8Y0yUa>3rOOGrqUK|ppf?YA2>bZvqMOzX|6@2f-&C93+<3rH{Ac|S3XziYH= zC}Dtd_rJev*2fb+U*w^^t@`-TXn9y8*x4xAy$M`BqF+XRz85;l5z6%Gzrf%>=xB2^ z_o~AcFa40@$B3}Y?S8YoOGyNox9C+t`mWJs9t*!xle1dZwEkvK(oX750(MEbol zb6!ZT!3BFM$Q4FK$2}KD0K__j}~c?+^X+zbDEdJ~8}Z3uGU1n)drAnAWSA8wZ9@{jF^h8KVYBSC)VPA0HiN z(@Im85l0N^KLmdlM(=p{TRa@oQn*nWU13hf%W;^;nWgY79&4oA42c^rDW)w;`2|~% znG11@Gv{zRNVrCXf3fkB=JXdg&eNczW&OAVGNN-U ztJA6A=zCxgV%wOd9{vrED|_3p#yk0qf=84Cy%OD3vIo9HoF&I6$MI9yKwZBpsEw&R z23Y%x&rf*n)1oZV8dNT=usC3e+pk4Fg_KHZg1EbbPSV`alUBsT>&J4Uxw8Y>niZ-7 zQzeJG{n`XtvzI;Lsaf!>dVo0w)I5#f$Um~XZEi_Gj>|D|;$cyTw4MHQ_Z$Gl4GEf0 zQ6_YB#Eofaj^rwqY#}tM-Ead|q?r)5hpXEYc{SBx*nN;VFX1vX%@4C&>l>3fG%1PcsB=Uo6F6_-GIzMTmQQp#JVHum;B ztQiOjaK%6EetLT2{d_MbHxTFi#J%gx=q%f7?caroH`rk|CfDM1=WzJSl$~qR;oAw+(yAc=2P_)>EE1*nwOn$ei+rgC{h=)>7rKOeA0ZG0 zWOF1<#S)NZ48QOkY8bwGU0+-~!Sff9G}!s}k?H}4g*}@DGo7g3 zA{{F6uGd+H|H5Z00lJMRh5unYi76}+APL3Rhf>@gBXpj7Pa%&s!y1p7X1cpm0^b@R zzH>FWB3WKqqkwx{yH|b6M3<$aetlA&MAXk!=);j@f)vL^-BeQ3@(CN*BV#sk&kUOv zReecD^S^9aahN(xvrcV^@~Rb*fmFVYlVEAUf!$y|rplR0u-v;X-)gp^yIZ9Gi{I#_ zqdct0zKB<+_8PDM2`V^c|A?3Z#yup;w#Xcz^j%AQI|*Dw!&K0L^>znj@l;g9gW8jX zrUg5?@kTMkL&C@dQ|N-eL~HY7WPr*YAuzdP=uF(iH`&j!*@bkN!PkPsTEUy42q46S z-re!h*sUC(J77TElQ~@R>yJI>PSo9TH$OF^{ zT^-7NAV{YwXQwdN!^Gw1nGOvHvGi1jSq}3a!3LyG9hXqCjX0^Z0i&}_Y<|7OY(8b_ zhmx`2aK%k3l(fBeFc|{>L8MuR4Zh~f7FKbVq$2Rx)%v^=jl1z0z7N0UtHZe?nLEyD zbaiSIQV7G0q7$8QUL?gz5*_XV3@wzU%ESoy%Vi5CYZ`V&?)!fsaa%S&bh6KXvnp6e z<{01u4q_i3X?y1<;4dK*RLEmqtpWu!z*!n2I4Hr2FPDKv5G5# zm0kyJrfei2bB%VoeI)(yB!dAxas$6ap|{I;tHyDo9TbFkL|rkf8!cdIZc<8 zy0Mlo)OtADzl+7pwVJUQAg;ar9<%9Be{jh`C=)w{h7-)Vxp!lt!)&-Zmw7PY>VJYa zVr+7!RPS+R(W!p{AueLl3C_>U8=rp(>`m?HA{9ZCiU3upGC@!IMQb@Rkw+kL*fMU$gwr8)d=U*&=>Sk?pR1!?C(oVAjy|llr 
zJrS~7ethd%-|FUY5!@1pktVjBydt5>lwVN&&{Fz$q|JQGr@oLk#8^JlG`O98#<3Xk zubO9J680=x2$;5YM6p=;r&`#{QH=NqYU817XEPpIXdrrV=HfRIwBkS#1ynC1d33=T z>GsnxhI3|6{$j$7lNQ+<6gA#xD7&(0h@~IPaZouG+xOdrh;qs0hlgBllu>;Jq9tu4 zkwaHQZ{+J?Jm5gB-CmpKOUBFYgxlnI{Lh?wGK_AyX^{P=r`9L-aWXQYy)il+zmQgL z#E!me(RxS5S>XxdRE<&{2EbqLp)@2*GmOOMqX>IV&BODlvW!J(Tl{DvCkMfGR!CQF z>4X5vp|)vH1teVU?~<;x5`X~4pqxg>(&r5tN)M3_NL?O8vbWm~splf%v~6>n*IEJ9 zKc*xjBO&_5bd^UkF9&=(a*C-_BJHD_7-%%V7rHPvu^>+l75nxI zba>dL0PozV#KJAzM`+6ro|ykDGapFd3?OEZ9{?OBLX+!NQ&bBJEJ-=wqE^MPR=>j* zz!E3M7Uf9bUJ;_|H@}4xup^eWoLv12x6U_^K{ig}a~zs)^-;KP+6C60J}@>YqNB|w z0+Il+fMwD~-Q{Lab{G-~`miP+IzuWy&@`UG4FF6cCuqo zetjrItmJ`<(>*sD^Fhqh2riAIZwCFG`%Tf*akD_wwId^kWBy;uM2*DyB~FpMk0SQd z%=;f<sHjQ@BRloEjJ#WxSu0kt#v!$JMK!VJdNY+FFWYcYP~>)spj;2Jyj~y;&@0 zqclb2St$IDD-W8u(@{f2NR?}xA@L0tj%QchtVy<*=uL3Z;e8PfiPy>&Wf%BVmR$Qs zqdYGnwzef<3CsJqm~XRvV}%BrrBrO*>GA9pQT96XG&dglB=#RBTys%>L%Fu}6%b6i z)@I#egkp|-VyW_{4bU~B^L3~fBU9N>;l=zTpTe~g@wO+Hg0I}yW^cdM^)dkm+zg1F98ge6>k{5 zgtO3o(gtsU6^`UGn^S63dc$(hLJI(^Y~|U@yoPw>Dd^ou$CAV@Q9wfmG@Z)k204H$ zLZ%_VAQRSzH7s?e^)dZ2fa(D!4_W*Z5!Bk6CIV;#Nou3FIioX>i~va0ORf+DNNhfI zPX3)jU6CqS|2GSOD!)2{b@7403|nT3xDf+ECv+_${K~iV(+JiJ&smz!@t44fG!+oo z;Ds)k?50WFi4rw4R4OE#qOQ)zk+Cyg7tXV-WYF_!kb6!nn|Rz2zp`$2D4PBL1!h)1 zOh8e!bWN;JO)+LGMGpH2J|!UjR3VLoDKt@un?>0GX$W7QUc8HT7lv`f17BdB5Mm*@ z@tIffS^lmS-_SKJ7<0z9UzT~ZUb~75PnU!Np~x3=<9ak zSL60~Zo~%{?ibFvMP_)@_Q8O#FFIz4v!4{FmlT!XPGt9dPecqMsp5-F{fScWHc67u z$^nWS4UC4_#$p?#ncjes6}51{@qcN=8R#~*}Siv=x)@kc7cY5T(xPL*wNjlMD@B)l{|7TdgR`Zh+sDCp2)Q!<05V?&#$N7HL> zsWfIlb`eomKXpY(VVjPzHPg?L4qO%m)ew3Z^6cg~3CeH5_D4;kc4(f%($AKSGjA@q zrvYFN6=-OV;tr^Np!2_g%bxBdq2z(IW`d6HGw?^B6mwh#nHNgg0$b5pXL$dUp79cu z;q&4?=JO2~-bs9(`854bc?$kKvg|3M?V{tF;vUt!oLKsvBpQGQqedtFz#94^C~lJO zN}zz zy7l2d-h<2L>Lr7Vrll*7#qqf)AF<*{Z~e_zJI_PG+d&n;GQ(>?sjkmi?qxTbEsA(q zdC9?7F7KTMy&X^d>p`WI#91@=!IDpz=4+3;h9D+B-*@|AG zAl(tEv%lN#y>e?rJ%)#nZ}PWWo0^E8h;G}KTsM?SD9-S&D^skZTuuwwcJHV%+S7a_ zZj=(SoouLMR#J7vjD9t0hE&FbwrF*_z07N0#NS$rUAT{A<5_pHH_b1Q8|h%Ny3Ba{ 
z#W`5^)e@hxyH#!D7<+`WL^7RthS24~Wr4AHDL z$mb#d1#L#-c$)Bihyt4R*2KgciXJ$UJM+cvnS#J0sC2SWtkJ=mM_sVn!?7v0ANs)Z zBU@dYX4uZ_ogwz$uVHh$kZDbT;(O3fL0B-Zrb(s7cH<s;g#6Ij8Ls}s`&4yXEcKho-B%-Ifrbup4 zKizTxR~y4sL+8>d(K)@>e-SmDkmLtFQ&MSaX_b%!`!|#5m8HHfElI*r9vSB z6EjTmYKypA$l-US;+;doASxKNZAW z6wzAJ(>q$n6tP;`1pjzaCZ5k0S(9)gHAK=G#KkS7hq4%;(-?+~rY4#Zj@iBwt&>N-W4jtllUDlWzDk z7Af=VmLpMzq5?6tERi8TawmvN&%0h+-b{j&xMGw4q9eQc$v0EP!P#=e57KOYtB8M_W%%)ro9<#u zx#dX6BfQ?9gGUC_Tr2slmz;f){a1wjhki1xdx>2H%lx%YiQyzAMFH1LG5cSm z{3v=qe&^#><)G*}kNRk_9e-DO)r-h= zEW=_aB))f4Pwz^eHddY?O~Hfx`r3E&%jx&_-F$4WNro9J(*ALrd?yHzMV?!gnB97g z#od~KT?K@S7guur2XP2rgqh>BK;Ryep+#mj{pW*Mfs_@cPMC)dD}?})8tlfske1>0 z)kC<+ep>=5I1$3wJ(RjG%7Y3m4OmCI`T4Kja=j5M8bG>{ z7I2N(56r^+r7FhnQTLnIjz|4@U#;$w*>E=E!v60K!QT^Ne|fqevGWQU0FveoeBl`g^|4a6wohB^dA#=e~lRHh+u%Ij%AN;#~fN9V~r@8+UeWkp(@%Jdn zC1>s}j1Iz%&vS-7!iGy`NUAcDGuAs?g>q%$uFV!H>6x!O>>V8u6FPNL;OVEY8n2ui z%UW-9319QKtJzZI;pAZT54a??AE62(!}sSKQ;Fk|zG|o&y&b=H5&Mt873AI4PJvNc zau~>ljlDpeL)>}kig&7TDkq+W33EdY>K%$#gcs5|@h2!rIGJ&Z$_BsSfKINhRD{SaK+?~AB| z-&&4Ct+nhL>$KaYekxw6ksvnFf{3ZiQ#GMwhY0L=($EZaCaFUT3ygsIX^Jp_2r&s+ zZ!A5mR@CvGcr-tDRxZIOhX)5Mssoa0iDzeDo>p(pyK*vX1PBJL{qP^JQ1VT7m3+rI zXG5ANvezs1(%s>KkvTTy(xZm+CrWJtw?#l#f#}c_r~`$K@qsTFCkBhs*fu z4ev{l{<)xr)0N$Zts+J^Woksp?NZcP`-cP8`F{6iqaS{o7dGa-SS=aD?5{;ae7UV!=*H(#s_toDNLPI>SK*o#XJgTwyW!5`rBn3HGA6PqNT`N&M3WlDbFGB`-qX;pTF&AV4+_+!1y{p zaNo!qSjR*A;%wJM%_#88YEi%~q+E4vQ@|YXwv*Z%)_1=lTA$p51u)%*Fu$~%v-qWR&tt2Ls6*r&)V4&l4ZHx;220L;+yK6x zbHzGEoV8FTZbo3I2~gi;T54#q3O~o^`_t zZDAk@ieRR?yLW!)2n&$~4}&*jK2RSZ8CrR9VBZ_)lq$(1Qi+!%U`uV|dlY;za)!Df zGN|b{gqM5ff{2`PfJMLtq=~dQ!vdnCdO#ZQ%!iUr>4{B*%&(wZ#$&1bqYeM;kc z%2qtgMq`gpEJ62u*q}Ua+TzD@zv&&)(U(63~j9V z2-9-x*7US`8tL5+##@!D0v#V-X|BmN2E=w@kVzF0fM6$`=osD?mWu#?N31E%SClAd zxhEy)daN_JP+iSq09iZl5q1&1V}om$df#fTnn&Fb4i+-pboo&eI#ywfws40DvBRg@ z+HA_|4BXx_W-S=EFbAxDT`GsZdX?JU?M=MU<$9l06PWN(-JLMvzU3_x9`|;BvxMJT;rT{AKmcDG#9RL^N<#bF8!#aF z0d}UCGrhslJ``?h+{*r%-x&ID{%1D*B3<2SU^r@4_-6_G&$Dn<-;#Q<0^IomQAa{G 
zuLP^@MjB!Ed}56_GD+p7RW5>?Bv6GbWSXi%gv~MH{q|vp16RnH9T$Ee5VVK=Zjbqs zPx4K<(@WGr1yi-|v5ClYXp|fG98Y zs(}o>j5w`6l%@L$Pp04(nHG656mM$$wcnL9mEfW!su_1DeCg<@5_1D<-MCF1xcI}5 z*3bw=<4+h7J09WzuQOr}b)^vJwC7dTH=&%tMv+BIzED(dG+6kpHiGE`mrOM94>Hm9 z^#aVP&Ix0Et^oi$s$`sg9r*lylYp30+FjS* zdvo-R9gxnSAC#V2uLE3u4GlYzLhfZgRnGCr)D-X4U*ezukkhjB5J-O)9NcU@ zc#!p0H~$j_D>tJZP7OddL@=+)=(W%)3GQriB|ZQhYkDeenpB~dVasj_P!u!&e)gtH zXKo2j^XfZGchEK&u!)V!3y6$(WY#oN?)j|*9Akm1t5p#nG4X84-Yv(Di@FVQR5GYI zfm^o@byg+4A{8P@8VIsTa*u`vkuY3F(JuQvSTFCF-J~sxm?Z>BQv9XbDh!l_GC`*! zauc5n-j^piNhEYJWr=Wb5V6%ajDPh#;trP*_terjHhPyt<>+I}PW;u!M)?^RZIoQR z0{(>pFk55HK3rk=+dVv;2MY{gjACnFRwdijnnQA0VjabzK9mlNplo49H-kRL;@or0Nnwl zspqnO6oO&;l=Y`=5x)kl{Qtn``wNy`I9{7K_sFH+Q9k~ELrWD=*zlf9C-0wn`@ciW zjweeQE{n2Cl1<4Ep2mNJ*@o7S_!W#qt#d~*M zlPIUel7v>=L;Gm+!#++>!a1{}jwXkmGCvTCA1Lx2$_9>1%$XJUn_z$)fs0C!{a|Dz zBIRR7J6cAnb~`fAAbtbFquL*@y$`|P~kpy76Q z3()y>*YG>wJZvDWKw7(YmSi-Yy4H?)ab8C{woH_t-M`?cw>SjkNqm90?mcD7F-sEg z7GBpZvTpFRC+IjBfeb*%d5Xg?{$5L=`@UebmXunuGf3rTjPWWUhSiN=&`-c_U3ciH$r1daq`@OGVt(I`s*|M1Cf0J6?3W)=w9$#Q_N(>$9j0+Jvmsk5+V$3@ zeUrE=_QwU1*o6d5KOjcP^~Eu4zu^uHSE^Ok%^5c$j9WV)C_{;JCNX}(31RXvb14rt z9cN;?!l{K9PxiG2qC2J#Pt(T))=L}b>$l^-Lts&+UQ;|TpMfir;Fp~c;RAQ<75ePE zu&c>rTar(82i2+<#g{%TET_W%L)BY`Mb)-_-)rdZ7#c)r>COQJR8o;rVrc0O$zc$Xh7+Vy zQt56PYCyWXy9DXRXU^+-@8`XruWOr6Guu9EKmKw2rph##f6WySti;<98_?3jLqv`^ zMJ(3n+%jFv7WccCOLSb!tukruG??{gdRzbrHoSrUCV&}BoDG}`9(S3|6B1Pakm+98 z6_eXm<^FDgkVFPvuh)E-;*BF7O}E7pn3$7tH6eqjjZMdc^k3bnGZhRzOdD-;|K}uz zS{X-l(M6cv_P_h@0{QO1 zc3*Z6FPz&CpnRZ_KU9Yt#*sn~&Y%q=T%4{|=~BX1R5$+w^6Vsy3i3yLc7?Mo+fBFw zR6@iAmM9JY0mXf$bz~vFd9W*%q*DhEPUVX{B6Mdo^m5Y{Kmgg8>rR`U^-OlcV9||V zA0+EO5&wIl2QsDZOh2j`I-YX#o!XA^!cx(QUI`cwl{w(G&@o*8;c<%UK62=Ld%#(` z7G4&aDxn5h!Iu0!B^nJKKZ|;&&N-KF@7#b8*=+cD2=m>*?S}g`O;~?frLG zRu5ZT(#7@3!)ZabDTI6nKQ&^qxP@+=caYdqzALBI#XJ^rc;S9G{2J9>W@Pt%B+VHk zJ^Ud7kr5RnpdRKuDYi(O1c8Vxf7-0lUSTnpdo*HKss!$O9S(;%VxT9P#zbisc6qC* ztAW&hqfrd6zMWO8DHZu`!R{e5nSn1-h!Z=$cV$|g;v 
z?I=e+?5d8k8ZvUeD)_lAeIhSWrhWa{=Ns1w$np@Kf!H|=KY++N@{9$O{hk6wweziU ziy=?^R63f=0e@Ff^!?g$QV{kDr6tn#(OLcgS*WbG=i-;qysrN)( zW_dKr`${-sBtjJQTuj1kPsO>vJZzX5dSzKxrW_Bdr^c!ST&cS7cg;*KdHWQ-(rVAp zo3hs)1u)$v7_gnt!7VM{>0TZxWG@EO(4uzC%C0Jy1>v?&`Vi@l!3#XV>j3zJt;^1D zx9Ii*Xc$zMbWm&ap+!r?t;N-!%Vu2FC6)m%XktB6aBbgWlV^IuNWU`mEuFcWB)*~3 zTL&nzkomELwPm3>_jfjxGN;TS8%EmKynfHPdOJK%wXaNMpuZ@Fxzl9S<@t3_GFyXZ zNKl2I4$%ZQ{Q7t@9t_h{Qe`SodsFYn_kRbo2+NOmokl8PST36^>FxioN9v+CfBOh# zB1xpH!Tn=KkuL+8RCO zN*MyTXQTPx0pkQ!oD{a4hyWTmRs z5Ik8RtHjfd`XoSRsU|E;GWXgTiu|Jd6{sZ9=#qV(L%C0e_@Nb+qrfC4V}OT-bD4)9 zH`(Q&ahbpzFV|Hqx%|^JfdSns&&i=5?EFa0*Y>4_Ns5Au(^$8c0(5j``k5x<6C5mj zWLSJRYUrPq*{Ff0v82YX!7~^AvN{t*JYq<(t<1#uT-@OQ`7g`kXXNre4l_fp9;^p1 z=M~*{@?kAbM$9C#=Lc(ze+g?zPF%T`OfUwE7Z!1+E(*&B>9 zcw&3vO0m2h5H3$VF0Ux4ld>aCAB0Z9+0bye#gK8Y-K6(v_geM9CXH+-PI&ybRC=BJ zZ?X0ka~+czmTzJI#QepG;xS^mQ%rF~2Y~G`N3Ujod1=#f0K{*#hSe`ZqaRbZK`(^_ zE$(GKLpe}y=!O)re}%Daaiz!)htG1xUq)o!CyGIeQW`%IXmq6Ih3}f5!zIf!ubvBP zTbIu?WBTcc>c^ESedr)ksHjipuCDeg4%~@7hTGMfOj2KHd7a04SS}#wk9BfiPInnr z{zYq_-uOYzv7wOo=UK2!jGL^}0qIDDvb*ossTp$~Q7FjsQ|vz$7c$#8D9p=C1`&Ob zM~J|$=d~1@Oygzrk#0F1 z2;BK{Q{Q1&R&{54krO^}N~)&nxt~z=iXo@mCe-x^D&C<19}aQw3L|6 zaZtR36OM%_O>{#jDtEzwm?ZLD5=0;>o@a!D&rwi{+rC|T^xcnLSSg<`gk(g>v2>_m z#j)vX42rKSds2AG@Yz5R-Yl7fv{t)5+NVG9?^J@{Y`kNwN9!~Ha04r$Tm>ECz{=(+ zzB0Q-`w&i+gA7Q;5t0;85?FY)&ZfJbTpw=73rM1{=IWK3^)TrIE)cG^J)tzBuel>> z9qv|_c|^khljz!to@>Kg$CQGm{iT=+sF-$Tp}RCS_@$5#_gxoo1%tI&Ug_|bBggqOsJ)V(*v=9JvCBNexbO8YpQw-(^aj`TGM`cN8Fy$QQ>?=JGNXTk ziM@*4k1+n=7Nc0IM(aOdpe`2weBvou0%@d8#+kby@+%1cA=36*6QV%UnYl^8NDN}T zwUL+;>!*mKzt~5xv}`aCzXhU)De)}I++@&YZc19L-gZ8TSE=mFFJ((BX86WiAJ)0Qh%@?NiwcosSjj2B2qhVI|D2 zsflAh8#Ra}0puq2ZkC1ENDBMr8$Mh8RT~K6Mn1a>((*L$@p^J}q`R zaX(I?OGEdX2fdGIvdUiinA+%kucf6`aEGJl#Q=Nqc)`Hc@nw;P<$pZ@)aLmXz5O-% zqvxHg&<32 zZrqlz+1K5Jo!*n+uFphJC29p4vJe*<2plPX(^Y^sLz{WRuNwUOvkX<}6p4ZkCZw}$ zl~7j}qvId>Kbb3!nZm@XuNqPpQ0Wjlr*`Ea`j%JRMjJx_5>40i#MBfUfi=;=q;aS{ zY>xOYu!*x_%mx?b%+w+pbU^W(_xLfm_e6Pq1k0dT4WNzx!6s1T1;rWZ>s9LtV@b6( 
z5G)eGCg^zbrBI$3)Y0@0dJ{VZi?*;td~AN60qsJY?_@~2=$^KhcB7f! z#zIv^2swhQZ`K?&FFMWVL*gzU+sLV|DKgQc8e8P#_mE!cM>TOl*D$`luRPBdpoj5^ zI^TIPg<#*1NBISZr}>5&9ZYRHHHW$j^d}xf;+Ed2WgTU&t@T-Gv-csc>mBTc>8g}` z*$PE)_Ri}9#R0>}z)4Dk<}hVNSe`h9s!9g`j(1}VJ;7m=U3>VrNSx-lBgSmG`^XOv zpjoW?;!_wv4a~5zfqM=ce#xqoZJ$U9#ywAwsAty_A!OB%!RZ5|b3%mi_WrSJHzW^( zB?tk0@HaBs^B|Y>(yOY-SS1o?^D-qVVWEBHR^MTx%X_jMdP%XH8~;_K4F8JbkEv>T zei5?cF8tB<%d!Vsxxp;clWAJ|)uVSG=* z;>dKZ%G#91N!6utk0XgiLTEB0Dcx%%`$_J{_8~tC0!G(%Ni(8of%YlC5p2ilmYgGc zyHVzAL7L1W_37%;w`-yu?L>~D@GjMXfyT>v=!ZH}OvYRYXOWc-~Za&|4EX}qDU}V&UCVb ziQoS<4*ox>(~z*Q`5*tO<&Oo?zlYq(Np`k}KJ3jEwWkedNhc!(mMzf0*Pq@d^#i)t zyS;<-ypmw=M@b!l{H}|bku~Fxu`S9&0Ro{FkpWERK0?eVFam=2b2?d4XdL5O39nc? zEeJ|D4FlP%cis}`X*I|bYJv#lxl`p%%`W++XIw7EDldBl(e-HO`QiK;9t`;OuzCW$ zrypKle)gboQg+i~)koJMQ9-{zUVPxE@$8ItwRNQu?Ls5qTBP_@GS;pdMen6tYQ}8K zQGtf`ot$to0dOQt;Vl$pp1It2^g%^f6vd9Lytn#qYTH@}Yx3v>JfE=g9UHbVB1=2w zeh-jhU4dHuW(I|b0>9@LLW7h{qDV1So}wCGgzKxL-&sBR=*b#N~VKZQ*@gO z)q=h+8N}0vd?P9rJl?d>wNfz;@78Ky1HYE~4$+e+HUa5sa zXX>inS?GIYXQH|C@F1tADSP7{wH02F@X|oN#VbVViWaa;;Wck$#<~&z(?`>cFe11EI)8BaVNM1^5Wz+UCO;W`97_pP>l<(Xts~hr z>8{Itq{YI`t?x}MRrC6i1UcRxp)gCCd-?=c+3S!domwTgEKl`r$80~U`!8Ul(oBUx zLlg!pFloq+Z6@OHx)2SfsG%|nuvxR{|CU)ZNv9}%X$T%f>0v3F6eTPFlf|KbHK7AV zpC&2@bHGA`YFu}|nkG4}%||oWKzhxPqBwa3;jWjZXMXGDvK$VFC9pUFCpA`7@H(8Ln^5Cg+Wz>C8Z!jOES!#26(w2 zfpaiMUurZmgeKh^6TWC|7`D{K%QIXs+NBP-isb-vg_^9}R61nNQM{FoxP&VOvInK; zA9e1z6;l6ex&3FMR2+x3Z`WP31iqw9n)+AM{P!tp(iG6OF>sRKP;5&h=Q_9+I=brL zKbgJys*0+8bi4mpe}dJ<%%GuRD&230Sna`?j*EsZfZx#uz!_|kk$Jvmd&5aosIS(|fh%&$sR>Z5TCcZT1nKH_yCtqUgvPh35L zD7Z(^m`y2tJsOgK5)*qPK@kC=&d0(kZ5|h~hKuWwH)0bBWO>y$pjXbZ&YoQk6S;~_ zwCJs@EIrtqTz7^={u?pWL%5eeVC)RT5hk34 zl8+UO^ZX)u62kj^Ma;+J4V10 z{1Lp-wj0-nouBc)*lcnD=a& zXiyK_)yG>tV?4bcMc^}ySq3hd`&jH5)>KGL*G2RkJ3zTQyZb(h{1n$#Qg@pg;w&ri zCE#M|fOG**ZnST}5Eo!BKUGlP(Htykp+<|r0lMC+>43eI7+A5KSknDv0?K9XQa7Qux|!ALV{khfo0lunpF}Sfd84R@?k5m_0EW& zFMD9-`42~#Cyc4=F9VqTE0aD-!Coz_!3!j4sxem?QZc{Oe8?M=Gxy)}i|D>D&}@#y 
zQ^2*DvUQx#Nt&{@64xP}t~WYBYr6dPb1t0mWXT;j1M%DdFpgXBO?F$6OozW3b~>jH zS};&WGn}*hDq=1zeWd-nBVf_mUXi*3%kFN@I$PbO*_CG0yH6uCopiv-!Sb1r|JhKj z=b2SA`N{T|pdOFU3h$R5akxfyBprwy!s@65j`UgKiSTG%iVa81j|VczExA!GUliH% zzSbRD;UN1qw11$J&48LdM3oJ4AGpYJB>zQ4y?I17zH@d#U?u2$Nv;2TlO#s5>Lrs2 zO2Z02cS$5sR06^B(5c zcnEE@NgP4EkNf2joY{vI=r;4>8OHyN=?t5sHAUklb;5!wnY)FhyrbO+VR;UW=4+aR zZH5YwPfDK}`U!78c_a@ix7YLR7Ii5@)fRCh2>8R|OM?RUH#s*$3G+)Bk2yk%8 z(PS|Bb%X1a8Kd57WB#$Qy5yJ_%F(t zK}>K^>l?8Zwdwcdl`RXN^QL2ArZYYP8Lga4LMth?5B-&rrb_nsZbG+&y+>2Po{SIeiEt@!IlyWm!e50OQG;qVDHPl%Vel? zLi1eKbYFYN2$Ge=bR%5#?DO<{TG8?<1lDlH+Y4E$1z<)y9O75WZpa5AQ$`d5J-ii zO~+oDK1GZ9q3N-JNg;|k`eX3?jVBKiXeO=ELr|mmDPUrt-S8Ib; z>qBR%QY;5;JQeyv{!81vGOr4;n^73KB0j4E`Ih}W*7 z_0cEID)P>wzRm=pFI<>dTEcHn`i`0-dKKzBW=wj;(UNX+X69}Vt)J650icDyc&)U% z;{~tpmzS~pgVixtJ?<{?s?r<_w+gnPX{a_HL8$QAs#Xqok{Go-lz&KO?5TRTSGk8uZZIi7zw7QcnQvxN*pZdv^fD zRq8$)m4+yanJptmApxT?p9$$DUH+si29MAw;xs5CUy#^S8BsXCgR@r45`FRCp4HSG zDOA;C)C_z)e%x{2J5BEF(GLEAkca*tWVN8CnkU^zVgYSgU%djVWGvo}RDUFLHh1H( zOu-Q^Zoq1esUl*Pt%+!5>&y9-Nh8qcWDpdqoJf7pNAOH!|LKb}`k(28%2OH#FM}9* zBq}z$Yr}6N%|QGHO7R_DyvVbZRG!y?Lp!YRYpogOSbYJRcoXHEPpwca%d@4@VG{0s zb_~0RU4w)Q;EU!7}g7^z>Mp~^j}=@dY@55JeCVQBi3s}h_a zF@A!77Yq0Wvj1*{Py@GO!7h8VCXL<#nR33OG7sJ4dwV-Sv9#TvJ)}s>S-X8kSv*CQ zqOVCx`w!;s&}24S4*Hr(>GIS05LcT^as+^+%Q83^c^fGaO8|d^4nc*i%73UYJ(;U> z0aen@8FDMS7cfrlfv<>Nq&+rEZW+vnu(hLw{-1eMML#6@9jpGo7tpRL1QMwW2sh`d zscRw~Jq99G_lEn~;@>)(1Y3>NGeWB; z;!+9fecK>}n^?O5AY)Wh+av;w1^y|7a;PV0>V{=m(oWzv%c5>Q^trB54)C1F(;|+0 zjZkLQD|{W(TVSImKTiW_QNCLJL71>V9Zh*V^>SwgAUJS zd)Cx>uN%$ZU~~Rm8m7CH?~#4&L!?P7SfKsVnfq|P`HR_j6F&9$)z800dj~7my5LpX z%eB+b%Peh}&fa&^gz%|~7jl>{;FzB+_7w~Dx@F9HM808C(fdwx$ii^_`K=N{o3YyL z*)G`lSBX9oC9Gg-ZYF)4wHsAgUlKTLIi4Q)qMXIQE|b6~{^r{kJ}(ekGuS~XWHOrG zIFi+R@Y9XqJ9B?f#tYvb%$-;f11dz2M-MBks5X7PxJz)Lc0_U6<{&C`JuQvxOAMVi{|0^1+2 zs9gRSf4ZR*PM*n!4@i%P44p(s95sdvqCw`Sgbu!qvZtv8+)>#~X7>f(vv~o}-&SJY zklCdwAUoZ_hEwK#L|Sh`8GjSeQzPE>s)+Ru3RW8tQ5D%kE&yY4!BCuiQb_p{{HC|l zS-pQ*P&s`Z-)g#`+PD%vrsjex9IQh??vNR&g}Omv4Q^>f7+rfuQJ8BrQ*N* 
z=>@Eo1YlWydU`2Fh@MttUX19=dI23=+myH-bF z{L_gT-`$4u+9x#bFrdNq;L~)6Sg>!LvW`5oh{P{z!7kWrqHYO~v4Pp5OoJ@L_DjconFkBWPqv%@gO}cd;Rj8-$_h`Y z_XU*C!4&%Q>{4gHwCJqym8gJOwAZsgqrK|*Da-^izBVni0UoD=V@D+0y*F;uKqyh2 zsOf7^J##4XcX&ZTEzGw7)bDZdUF-5soVYX)(#+nxVHPhs-GEMUkW+*Ugvxk|FD|ii zc&R`EQ802(_OJ+jpLWbvrLVLdf$s16 zwkt)|JhTBbTF*`yK)Xx%ty1#){n!WL0XSNCRBaEJPaWU3yfH?6%t-mzO!PQ>?4KNf zp80CT^j^3XQwrIsM0%wGcA~wa-JKu3^P}Q$vA5TY`xa!+fcS!APo$+Yvozkc3jSu^ z@%k)W0b}6{@V5@o$63?N0>XBynPX=yquMMbPUS{STf;2oPcs;dw6Bu5-+CIH=l}hb zdzsnXO&6u6xV zNM?5UT2;9$yRruXjM2p4JfPi<0CFwS7$exzzTMjaA8jaxl69`HN2z!Qp7OLnrtgCz zO@zeucDTMSL9-?N#Ijwbxm3ASsNbdCWmtl!8SQPNnsE2u6tO|m+Rd5bWU;r??FWhv zQ7*+5l2^1F;X|ub2VOJ37;~Dhz#Xnjr3Zr@qE}2}a9|uoz;ti#7*4%|4Fd~{2g?Bu z(UY4GB=sNsy&~$S&6XQbGD}x_X?t&lVGM8_c>Zu)WSw=#1*{G1c^Kl)fa=(wjG`)6 z&@F;O&T4`jUES2H;U!uBq>$7Z&dQ`oHAnGg+Is!S3(1cn8}+IgZL(Jcy@yBgM&&xe zS9DB+B6@&d%wk?rN55HH~eY?V_Q0W4?N@ME|CsddyhpT6eK^LN>Yp{?)heK`hpVq z0~GJ^ePT#MS9yg2>w5|384g1EDxc5)W?+xvkin}fE!#w`hCd07sxxID`r=n@eEc&l z-Cl9uZEj5zs?>vcL%;qr@`lkd2ClCtYd33|N03Kke%*cQT8Z?teTQ}SE8snpx{DL7 z3eX6D|JTYZAGcJnIp@tUue@Um~jXub9SS;`yKU_4FR5v1PC$L@V;5ckNqrUtB znO`G~%{X!Af_#>uhh?mef8R>b@9P4JL`COFNxOVu10!>8*5gfQJI>VapIuHl?vCgy ztx@54Pr*rb^3nC*!EIzGY1LXM7(dflF=gVi(NvU(KK&x=Kh|OMI*v5Yw>3Wch*8;| zSnNq zNZt&K_1o8PtiC!A@>$MS_TR|A!F+cFf0GqSwajfgwU6+`)K>7syWu1I`c7h*JNY2C zInq$njJ7TIXReg5OG9BCtG;k^Yi@Q%>4?M&V+ex>m3*trFVcggD;pSX$7;i}5B@e| z4tBF68J#LyW>8PI-=SIa{cM5z*19F+^@g>;rJ-A^SGegVnLy^fEF-ig=q4JMvW=+8 zzPHOYCgJVbNn5~Oh}+FmH)!G@cbo%R;}QNwdn9pQ)Ny(@ws3t|x8T#R=fr>PUZVSN zmmYcf1FO$H6=X(Ya@+6cM)WI=-ZkTo_WRUiSAM^psVHJa+QhbMI?EZtPrS+M7uLrW zn!WcFZf`~cXkqJ&bg~>?MT7_=5ysSUXE{^E>vJ_HDMg$l0Gb!wAjB}?%D9_ zSfc#HO5}o(A0g{28vtM(eOor1kj-$%zki=Oj+hFuLNjmV zZG_8TFC9OE5Y$l2i-F73PJi`0{Y$`;63f$bN+R%x9&5{6(qsSKp@M!#V^n6ha0|a^ zoxbG;208XKtcA!+E6|Tc&QWQ&5z=n_CAs_U=|*z6jGI*3>A0QEKVWTSZqug?&&CP{ zG~*t5p6^RWn;^}9mbAjk-UZ{D-m??B-C5G-%AS;1K!kyEcaeQGah)$x6PtHUXV$+~ zG@RPa>C@%smzR`F<~*FSZip}kzdYYktSJBIvJLyAhgdFCGu2^Dl{}^MK;|gM^&v4c 
zQ^bDdL0Zttm9x-3iMD+}fUIE}2EASPbsDHL%}V9hV-De}7tUlJcl;?p_vWUvlb|XX zD7A>IM9*Da!{E&$1dGw0)|}BEcMnT4Z`0_{Qu)k%&SX1K%X!PTG$*t(-q3#bVQ+j3 zb@2;3OocW-!x0w6y81O85HCjsZItMrr^-1HSeci8SM42o`sZ27*ABvt(!->J_%at> z*8W!7m6W|)%>mpf4Cn^BoJa5C7rZyT7b<}BGWx!Jg{;g8Rp<)>82frDECH^RBHTd@lZQV8LHH)=WR=7~o)$5;lqpmYt5s_4b4oWt6 zFSzzs{Q+{W7K3prpI=9opB!*-B#G)xth+=ITh2#EyP(5AsFEv;H;#UW!k9ms$ zGdB*#d@myfjK4aT;9tH zUxE>C&mFs>Vxc&hqXW>0T2TUV=3r3H2G6f ziSDjVp0RJ{ViPcnXVpL?+}u zOEC4zfUX!3=+BRfQwmS$F9%~*SNX$ozet0JXfr`r9~;M%(YqWew-b(Jj9_^Sb|t3n zw02frcI;p!8G<3a-!{9rx*j;4V%<|cX7fm1Wvv%e75^aoamRGv&6Y>JoV#{ zs8EV12ZE_s{pt=mD3L$>m97CUz-B&V#2CxJFJ|NGI-mKV5UtDf>Pp6oW#aj$r^sUo zznHDLAS-_>hz4WZ{cl>2)^fhg9JAnNEyNe5d@(QV7+g~+Ks3P;blLMY%B|d`!foaL zUTuEGQOFPGs^8NbIc4$lP0{vGM2=$^)(Imbzh<)52hq5KwMxE=x&gv^LTE&qoIono z^R8PZ?{-u|ggF@UMLDpu?cDu#-fbj*;hOTNU?bgz>Jf?wr@p;HJ#6(`$AG(u!s zQD{cl{dvkLISWFzSiSEP*17jaU5u+=PblaSsL*Z zHI3%b9x>x(sR5DpSdCpluH9#GW-E)AjS`S(bDn#bltPA;r@qcFO77IrNbJw3{ALx z^S63&J@m$ZY}~tB)1Dl29c)F5t@OjNKTWxiRvL*3&wT!^-(aVk$z9~efj)?)9EI5u zquhHKnYk)+8_M&8bye|nN!@R_V{lAJk#@eI0!<6q=BhEOKV9Xsn(Z(k*)J)i{2nXM zzQe_fTHP`(NEo+LmT*+6>nI(*3`d|XDYf#5@mE%q3w7{zcwj_)nznb=Gn20aepo`A zbW4U<2B&_xV`ELvWZk|7>cwV`n5FFM@l?sCi>_LR%GA%zoW>|0;xp5@VnIG@fVeYuRk9vs3yLj>e!o%frDnV^Ly~DLZon!)gHB@?$A$Lk^P}sLq?I z(!XPtY#my=U`Pqa>y^Tic-F4X!N_R@yd!i^XQS&@q>!O zDXEgZ{tdnH4HF;fAv1kFkh|v`;aJMs=TT~N&m0W>Zp-|ay*fuHVN;M&^7F=qn&IJa z6&rI0CjK4fvMwqCts0o-$h8r;OFc@a{Xl;Li{m}4=LIS=DS$IO9ek2s5+76DF8{z?E7P5mWniop2JZR*==&}P z$3mYfLC(i4Y{@IEV5ajuM~5WbDy&|fj5FmLc+PLS651_5@9HL=sD0;U?&gWARzmbx zdfH2OUz~}8Rv8lBlc$=FIDqg#Lu>o9>SvAg^cP!YqPq*uFty*& z!m)NaQY;#3Loevd(;TC)8#&$_m5cm0@g?7;Uh1`=gql>pOJ6 zT*i{oYvR!$*Ei2rFk>OnK;=7#%k`LOq6n0y+5&i8$r7Mh`_g$Uib&~IP*=u`=si!R z6~i_kLh^0}v}#X^UKVm!bf!k~`p-*=QxazKElWwCax0kIDNYQY8Qvw0+KPm*%p3t# zjMPCiC;~CY67T_=ElbKMrXV>y_F`VjDX*6RtEpZgpn1RXnkVRGCFn}X`^7^w45MPO zNI5DS3GKNOS82-$~N+_yW;_eTCcv|}?rE}Rnw-UWj<&=I!=(-pwY(+yVdxdkWCdQBI{ z$TM<&Ru5TEJ&nF>F0orhB$#~aK-zt<9*4F<#wA`IJeke>GeJev;D085v!&11GA2;& 
z@urQQ={TWb;0Z&!1J%yI1;lJI!pTz;M_DH`8NW|{eJ34*?MIZvjfSm*I0vH zGoK91|2^im&(V&8oy@^>n%{|Ya#<*X;GlPlWrw3n1H!03>$O{wuF+XO2G|oqOTyM! ze`54M!?#ofvB9<8XrXa|H#4(cTAfy%TAh94&}}2u*T^qh=GX!REuuNdI=Y$hy7Za3 za7VeQ4&$P)jwl_z+@vkZku3QD<<2M(6y5aPxL=;XO40p#K9T*XdOH?t!pw2U215&_ zJ;h68*fY22stw|dHN_@z>plvV#-QaeE>vRZ@`PlGwo7ApQBP_yjTNFj`Oyf7bUr)d&5=?$!C8_m~z6 zD@yyhZX1S#o!-i_2RYAst+Jux^%CrK5sy|7tfZvVjzulK7PPq9_8P+CFw>>|{$&8V zb~e69ArNqgjm0m}fs8jNWRC8KDuE;g9X}+dcadamrb!9byzA|0SNMZlCTX)sCeI z*_VtPMgq2Sno7gUkqe$YTbL{=N8pxW$yxbzE|KO1zL*l9mgvXwg zR@dZCtg{Ari2F|c4sEGW>dOc|=VrxB!eRX3x>Tp*>X$hd-Fxh zStFUCfFh_V6TQlJ^-{DY-O+7CGJCj7)IOm|MGh0wu6u@zX}{)Emu%IaM1RY#Imd(k zVolBgO~26aq9nF&j{X0}nr1{7PzT(zmaA(W#a(n>#|{i*Bo@XlEJ$_lNy4TgBKfYc z#O$3_RPGK-iIzsQQ_LqF3_B&abR9wUc~@ulh}?4IVzHTUQKQy>lHs!QgpH3(NY}HU zwx+Dce;gTI=ks~w-wCTAX2fGs@bg9)f&aHJ{O3eu6ey-eW2^g8LZM7i z85uOkXVe-;Qbu2DW8qzQ@rGMk&-Uqe{M!T08$?Ao!sZ+4Y5bQE zWyT@34I@(g@F$dWszD6URWmr_dcEhxk{2#x;D-Y$>MZ5%{hJ%jz^vt`&~q#o)6T{z zL}9l(^^5|6#u+0*_@D6HAT~sPdUD5jkKT`apJ@=!Hgz)dOPasE-bUaG+S)#r(83mC znT{_5K;jIB^!?KFs2Li9ZqzF|!4=H?QgssiVpfDoO(rNk5;{*D=-aN&^^R zRZ0X5BVB;@Q3tp-@(owOo$xnIvF6FPrIlYWbh*FC1iU{uSmd1{t#DH0l%Ph+YD8&4 zbhN{5@M4t8guXF=1%FeeNb}$rO`5^D{gtW&X_qYi;nLLBHH`)Bx)W#m`FCt_8y=>P zn%isUOcWK_s99|>t4FE%`4^VTpm@zpBLb+T9GaVO3Z@TNhjCLloh7N5T3oFJXd~Dz ziJBrKD-BI+0HM}s`-Cq!j+l+qqd$LtBGe^JjdlzEmI3^n&TEmBFPN!+4JMh6b|}xJ z`*eVBKd;Qk5>y(lY(LH>-1zszlJ2NN*aC&&S*ZPd4@7v**~K4W|mMmA{4G#VSW zVMDX;su5zpowo#hQXsl};a_i8q;gx6Vo9m^vNMo19_%!q<1o-fq_EtV9#jwPPJ%EK zd#)|X$rBnUtMNJ1UKptps>!b2Lc}EY@#DhlBL%rMR@wdvyt#>EmrSsfA=-Vzv5zxj zc1z`-i}1BPwRQ~&Xl0(=6uWSiZhBPf__erM!fzy$4R#9Y6enp1Zy0A% z&&&ncK(ij!#vhE-XPlM^WcbKO`o#moYFj}3hwt4F&u-B^Dqk)wF^Il*Ie~@{4t`wY7TPXr-jXQ` zM&(HI8r>~=wbU{4z-;V|QnBY8UZtp*+oT_qfL)~QshtLSqSd^T29n~{@Cl3g^?r?5 zqmkH2+jQ3h?w(%t0w@}2cjGjmP1R7lvi;L;FbfN7hk_U4KrqMaK!>rEvpOvvb1+AL zB=tO63*g!MDlZarQX=)$C1@Xt1*y{qy=hF##(l5e2iO#)Q zOhm32O3m~A$s9R)!9suu8Udp{ z$xC9`EbvP0`K`zHAPtyq2i`<=6xjE~dwtl=oHP&H|2EdGI$JoxX{A-&`CtLO^?-^( 
zJEMH8=jXWxNp9@hn$q?Y{FYO}I~gW|ZWeenc^nAl{p65>lcR_3Sp_S~zCRf-S~QFt zFv2G41ZOkXF`LPP^mftoROeIV=JnkuQ@2b9Wr%=NH@53fedKA==RE6i_B>MVQ^SF6(!l%+18bBz!X?JYWOt zg#ExSTkAOEc0%pPe!8^R-$vT`n(wox8Uty6XBkM6=JX_J%94UIv;9x^&d>GQ{|R^; z$S#tq#8~$G9gPBNDlSrh#m@#{3<)+6X+}aNLhTLRvK0gNpW0YA9>Hg$s}qfrja1;V zT|s4SN06@xXz+!p5Mtc$+xgybos;G}3~)!v-lccjR=DhD5Qlbmn5w`h;WL5~9wkwV`Xy7w`ejsjMl2Y6+NX zD$Xk}lJY~?V5KZrMMplw7y4tjAb+8=gE(Q;w$JK zmCFFQON)QzKfQj$gAKxW{N$aVSAon&u725%O(v=B#S}rZ5o+z#N2g}rgYG6FLrk+O z&;7yPmL?B&DS;3Dfk7J`>0u@76)7Q%ep-KP&;MaEj7S>z_e@%j#TasP#c{qi!Em8j zQD^=0{#{xy=AW6shZ!3}=Di`Wl~>kDxUGZ-R$sk2o9Wx`hIPQYbo^pT6K*52Or17I zO?IhA&#|PYb<{k=ONpZqKSG(5z4)p6N~(DHH?n#7b36a^a&10lZCpD`0z^TaL1d}a zLKrAt-3QW|hYqU_2wUBw^$8xNop7f7%pW?*(t=~VdSfSn+hT^o5@+{4(gGQM^|aW# zGzmuEPBa#bk3&d_YCz^QKWRDU>oaRSAdK1Qm}oJ@D=9*>4M>Mh3vBx?=cm~Y8VHQY z6)xEc7azO@_^r!&mp_TaNxIeH^9?ter?TJv4^3ws*JRkf{S8KUgQOrK-JK%RDcva| zonzz>5tWt}MhHlc?gkl1OE*Y&j2!v$JkR_4cYkf4-S>H&=W(6K_o&WDeX&014V=wk zi%}1YcB4g4zH|w+rJJS^ywrh9VfzVq-4jgKscsDol#ASGc(}F;q0P>kT-ETe{ra2d z67S@vnaRwijmOx_pbWb<9E2K&oW*7+AMF$#@Pung9SME|y+&KjX*Iyr z*=;XT8aC4U?hC5EmO1dKI92jcd9R;i`x@U?qnv%eH^7_zR847(a0r8)o$32=9#{Qu z$3X{7y~(&}HoI<=%=hqDp0%{7`#mswP^~hG*p!CYFB<#&J^M(+KJzwA42@>1CPnu& z&D{=>x~}$Ovo`j+k49)-dK(YsFKB+0W)>Sh)5`+N0=Wi@WlcSIdsm$fDhK`WHKAa- zj0;12*iXx@hHLu1rmofvZ+}~kyGs0;^b1{HTM)0+)s&-fs(&u6#oFB#$r&U+kNQ~L z{pH10V8SHc{@0()DfKpzxbe-DRxH|(_c1YXVGo-7Xz0_O zTE)X&sCI)JG5Se)1GSWbG4R7&9z6KB0`FXiK%jnBa3$FueiN*4k6&Ix3v+eIVV_Ac zbKCj9%Fi|SGmm15-5y!08M)!BMvN1^{*Ch7r>Rl)iOjR|0fqJW))`6W&cX3dT6!tN z2g9SV!Jofr`=?xc^Y`mj2U$CtVADVK|GB=Y8Sl*Fe`JV|4KiGRRBgDu^B#>*i0sXv z^yppaBRJ`JTo2WDZ#~1rM_*j@-%F6Gtg6a-_=hFeS?4Y*6N36isashgYHigdW!iXR zO$P}T?XNswicz;%1y#Ht-}N$@0$ebD7HroAJR|FBj5iRN`?#cJnJ$co7i^z;E~t+R zkKhfet`Tkr{pfy=0TQ4NR5wJ2OE#b3{5n8Fi0UIcV9m1xe+s!Q3KZ^WH}>~Suq2((S~UFEra-5)^!{Zyt;$i|jMLi{a= z-wU4^Sq14BzjS3ZuLs**H=~FP-@VD=u`q$5*qw#TOJ0HSEgbP+!mF+(eh=qHm+7zT zQ*@&Ozh8fHTek(-a1HbFlHGpz+8hsLnXBW*`=p-R0{C?Og@z6?drb_$`;4Pdei+q_ 
zy*wM~@s)0hY~7uH!}ATcxOn8pub-;9d;W}>ft+S?3cLL=;{-2Bu+_)H?jC5pzo(;H zd4P!BY9}Oh0loc+)n^By4@(c(%nnqtLr|WX_hmEOm46ddzYD zsLAcVu=Rn+;&rmFRdm9|r%&z5zW|nHgZ5GgxYzbr!%9G6fRN56b;ifPs%_0fN-`T% zUt$A+*qQ5O>k;@Tlvy>>N3L)_ni+^CowE=vAbDO^>jb-$Gt^L{0H9yy9_d<<2r-Gc-u%et6|hvvNF|m7|8cujM=0F1Zaa zb4IYw3$*;TQu(w_%N3E*5Y)k}l(ut>bOra@|0z2^FwJF_{~oa~xwqdI?<<^D-mySR zxt9TcgT*Quy%WTvO<+Q7^bfoW_lto z_HC~I`DZX*qI&F6m3JM*+o9*9k91Neu0U}*G>=8|rsT`m%Q=FR&s!Wb=eCS^scmKr zZUt8D=gg=1Y|MreJk+Lpkaq!;;bTixhzXiSOG1dYrR7 z7BzFr7wP30kyUojj`T=^cV#_Oby6T-E7d4~PEGAL-4FC2NocO`m(m>hXw?SUr6o|V z?b1i;9fX*&wHKHxNuU-h9Pi-e;lwhKS53qMU4S&>r&=qcZ8YxDZ?{hXEkQEj8=sy* zFA@;#94HpQyOltm#lD6rnh$&~856`G;iRr;@XmPl^v=y>xpUXIQz(oZNI+wUf!8gI zZS0h3w!*!drcht^lBlRpl!cP`o@@-S60b*J-aR7i&0ay&5P9TMFZ|O%Er(DsvwDR0 ztrV8i=M|$lm-}E7mZjS2j+7CrAn7L#dWX`EEP9#=eNUkoHG1zQ+hg@0s!AkLN_C#CvKCv>!J9t*G}w-4``bcma7P^ej;685 z$8@RVqDg5&PXhXBptBC7WZWo;MiW>1&j*)heIw1{bm$gZQ+0Ow&}=vSe}q{_(R*$M zif$a}gka%Cn_xx{?e8$wiNBCBdyRYEeRagt!;$Rvy&WL<55q0TpU7X= zc*`oX3pEy-`fuXswpku2ka@MI3Jz;wjy_$YQ;A;W>u+m$8h5sDH7~atN6JUB*e+aT zVr@d8Hy0w3?_$G|6w1=}dfC1xVg{*?_ctX+TRK_mNR#Yp2**PB`Gs3S$5#0Lm`Dlr z!cQWlmTJS;2D5WB6CQ8(fOu5l!_idk#avdTbLVBKfgXde;fpgWhbJlHaIwm2gzMt{ z3ir0ef?U?YL|d><3Ksp~y6^nRsiy(_)?~kL|D@(kuO~$AYaQ4N8DV4w7!Ed*n3a*tW0#|rCV0X`_Dndk-8a-L=S00}(LfuH- z+@nSFE{&+f#+Y@*odFCdU~v#uYXKoKvd$E5H?;6kf90{9ab`~#Xjl@8t|EE(@xA!A}`dNU29daFd^f1;rV6aD1-qqoN%h&rQAQo5y;ZXvvor%bp~tl z$V!WCae68g_BH3mc$a-kT{loYGzrY8KFO`xH&Sle)XhpCXzZ7atvg@0ytd^kdA^{xpizCww`kG1I-#BPLe$LDR zH$q5Yh>-AdJ?n+ahE~KOCU4GURIvIZ=D$5LbpFQ9MV&1o^*v>~#kyaEiD$MeX%>_F zM_||Umv;Gp_%beIo>~g%=uVcx<@F)S>c3wK5W*RLdIJOiG`tT%Qp%f7#~rY-4tagZ z-&dnqw^kUDg6mUV<(UI3Bw8>OLsnQW$s7)}8)MS{JSLblS(ksJ%UmyQ(ZhYT2Fo$DlGn5FU>>x! 
zN}5eMvnfCcxsJ^}=l%Mz_w7&*8!wzwxBh3D?2HeC*#6M{I?o2uV$KVF`hDAZ#AQ!J z(&t^*jkb))JBW1oAaevE>@C8F_Rk@NFAgE%T_CQ23qdL);|Kl3$(10Q%QKp}=oLaM znANbKviA9lfO><2$}iE?vgK;>rhY~-79cK{dSd4l-{_FZmaEvG!N(?-^U`ScXi3eF z&w4Zw>th0Itw&$CNM5I+b{4Oe*Pqi0%X4oC)@RxkmHH;1IsSV0DqEI}j&#xju}4YE z?@v9U1+_>MUW6xD1;eDkxJKQN@s=UU7A9ggZC(|Anj+RNzKxEzQgEsI8@;R82;U;d zM3;8|t7Q^9c}8W(+N=Ls-8vQjLieX``?h5urTm^!|B<^Q;lONC1{$hYjMA~RT^|M` zz@I*)MNEbRT0pngljqMNWm!0r{G{Bi z&QwjtrnY5|_9|8eCk(g+_9fn&>1aP(SXS=Lu%6*+EG{s1EMO%b7MEHXWCvgVgL8Ph zw@h!_1a4KeuK}-?Tl+dPR<50|F6+i>s*AT@iS-SJOpexHDcp1Ttcl!m?yO7Kx^AKx z&qf2vdj#%s^rI_KLD$KDPi^iEr`NrggV2lzUYcz+%HALNo`vdkByTR?NMGr8stl&f zx-7qpnvphl+`FEMsR))_Hri~?Zc1iFt+h=9wCbR~+f4dTl9rdOR0!-L`qcTYIYliDOr7t_K~4a8()p3b?d zo4vC|vVxjgddDMb+g8CT--Uw zEt5NX*qEyF2_TLET!e?%I01V?@ZZe=wTmcj#AyX>)<{<{<6~~$XVkrlusOgpB@hTP z$nmzUWT95tc^lrWf-umW*x^o?`d86JWsCWNR< z&e3&j(F`L6ihl4*+k$(o6Rbh&YqPS%(#OA=-o6FUG@5%2ccPp@yNs(?Y>K@(N1FI| zyXj3k6GMwZq@B;$-x+e`68oK7-kEY#bbikH!5;&OA+Rs2cz)pvJjs+}U|X3CkA#TS zqT8)m=)9Ge&yWgd4I++S%T$K)nx8-X9QpLoi>?-5|9y$nur%mnrqMF`7`3B@<@4Sd z{QM2FCKG14`@*5$@999anZhCi^7sG4}C?0SJH?gJB z@a9~G2a;er0p23=amDNqV0?fHreUOP;d6mQ4l$fF{jsJlX$kAI&%S8|WQ$^39)opq zFzfs$bU%Gx@dSIC_70wjIaRa3;HsBQ@ODE7>|uxKk?B75@x}^GL+nj9cOA}=%$!3; z>!edQz521l^m>~{gT_p}H#c;SnmtT6>?UEndXM3&QK8m<1a7@&9EKW;KV@?gh@>8Y zCUR$Q7X@22pQtRIic`>zf;bK0Dj!Gni}XCnQ&Pi6@W#7`vLLC zAF-Z2oI*VrF_I@9(uBJi4M|-j-O56B+Uj|*^UQQkb8Zz-NynB@WH@rp@s`&bsSw<> z67a1VkJy1B@51O$>f<~raFGo z(1AFpG24FGmhcv}OpfS9uMKraVWA$F&U#TV(;VcIPeZ=nb5AedZ#@3oOi$0YRW7qw z)CHUi9K7?y%EYW4!|?b*G%t>6%wN!%E%MP(Yp4+K34nS0WZwBD5$!hkp|kCD%j{n) zo4o%ivcLy-VybNt#``q-A*Ir`ebLO$Sp z|DL1aDeu1LK?g#2#972G0+Q}QVA3|g3wWmW+_2g_hHt+y9RT-z=qS^qI0Cv>Y)wio z7giu(R~R6O(&ut{L#z1R;_}=^!#hSup?JJPXWSW_CICxCju3C3Z=&PwfPEL{Y!r^# zq~@^weNI}kjaL$Nx+Ze7-U#_-^A%k865CA{KRRf9C{D+VA^E>NrrQZXL%53PabpLw z1N>>ZOAq~FSw;|$@_D_j*h8`T>aGuENuAk8;LNBNwEB{vIVeZmZPmGR@Rh)5VS;cL 
zy*Z<3Ec+J)_jiue6h5g7%W*LIypnERyYMdz6(_i?<)N_*>|If7Y#J8_qBdw|P^?pDIW~!)rulq!_c`hG}ot?yXabC)qT{j)P=E09v#E;Kr1tAYs&Qgqx6#Zr`XuBQ2A+sa<^B2ImLjaT^MY1ve714nkNdxG zpB))o-|2X6I8&fb z1t^}@MS7Shp?vOW>{OKQI032C)Nr6w8HQ#OxID&hs88HOtbH;SXuAEwEf~@M7x^CS zj(!CbvDJ<#umc2`iWI$&)yADhtkXbQLYk_W5y zvjj>$)VUZ)f$#hQdNqBqc2l3tv7=>UA|}X{e(Tpbs(eSxHtJCwk|xp-0xCZ z?czPPF)STwVi)yqd?n0(-+M=;8PzMT>hf!v)Lv^)?$KzT^}X|?r~Ac&IPb&w+iD^Lfz4q#mY87y4c>o z38fV402J>C0lC46lgXMVe?lf5G%J;?d+wy5#b}(l4&Kb_9J_(WbLUO;bfaqwLE}t| zh@tYj*dUx{`6yX(j^P{%UmXpG-*()`_un~4eN}sp(YuN{R)fDwzhQu%T~*ZmHRm?I zSv8s;_X)-^64@{-`|OfhT$XB+``a=6z-)xkCkY03iUr>88m`2gr#oh;KK0l_yA_X@ zfgAO0;AJYhp3Aywe2Mw$q1PuAP~(;-2!_YpW)o8iYSC9{9*2PE(?MLM;{|4^SF|Kl7fUyK(yCscvtwwOUD8XSzG74Priu;cp-*i5RE!a1sP!&!8? zCr>dpqH~%SM2h$DsaSb7Twl}YK z#3xOEw)U$|rNS&JsW^YZaz4)}6;l+wwASR~Vr}pX9eZJ%TM#X%v%81s?V{L#5of1R zP;zpaBW=8VQ}=`Toa6rQV991qz-G9> z`u?iGyTFye$g1u=P(-Gc!(D&J*da)Hr4$MbV@qMzD2E^yRavm!3uv=$UQK!9qjhFYVD)naWl$Tqs@*B2{aW9RM zB~`1*v_Uy_EwQJWe~c=KrPf?2y+NS7{lsc3j&AOQ)_Q_s*|} zDFv!Q;LEDX_qL~#wlkfsiwGasl`RcSHi!|?IcBSV^`dsxHGy(eX9`5}YfHF-1aSZtX;dPt@z;YDWfr_8H z53{S;0?MU+d<*8d8q+}l_dF=mNYcEc#ouxFImjjom>={j#|Fhc98WF?SEqjT+Zxhv*)8NnP+U2Wr}zi;NKI!WGB@K2Dw?-BUj z(OuZ$!->E7Va0xcII>8-4dJ}l)oV0w(egx959SVA4oZ5X}T zwy5k{$A(4bv-JuYsWrZItT#RB@ZXyhbeB(=Rc0A@fNB&E;c`s$K<=<1$?1fTMgz;M zd$&WnVO(7{W{goJ_X*0Wc7J5+L;rO>blwft^YDp`Tix}Bs!++^#V;Av(Aj;GpCO%g z)ZpwO%P7MLT$uYXd3@#Dxg}}s$+&Kr^Wpn(f#6Bs<#mM9)jc&RP$Xq=iR z^^5YXC|lf00D|Bm3+QFV}(An(yfFn3;zgTJa`xQxw+Q~s zuGkW3O-xJDzHB<8JP=KP63wm{=*^S8v&?*d);SL!YZoE38g#Dcj zxF!2xfKOF5ADC6bH|R$t6X)hukW68Il3&ho^>8SAs1pZ=WN(5wYTxNa%)!s{|hx^)tH#cIT8=* zLHzbYTlOBmV^Jd=X0R<*H_X+)Cs^gg2}FEalO4|mcP(aI1~g9u>qG7EUWRApKK`OSOnH<avVY1vvmlG)nq*Dv$guV0S<3^sa2dYg8IfO3#80qDy*IHP4 z#L>k0Ptdi$Jkhk?dH(~FwxBGa(fL0wP#609z-<8eBiZo zsfU-TmE3v{j=w`HksyLk7r^SY!FGJGl0D_LUadE@XBaQ2un0K2d9fH|t0zb_Mj$Sw<8FQbX^Pc3 z5prX4iv<|_0hmG5zXA!!9u6-UhHd0YoTvqY>p6(TwdE}CPDndhPSu_IMR_Z#p6+9y z+758MQ--QPd(exPVa-LH+wn3c$@?Af2*7#$+3E}?)T^6bm+P+#pdNF-pEHA37}%q1 
z!u6Bn#O)V+M=C%OC+UDn-+ZlzALT?*FZyjkbaT?EzLhp@sGgj`l*r5~OdezX9 zL7X`Hq}Q{m@;sMhDG%F^o1yogJk6`jR9Dy8!+z4=NQKK08Wex;v|-K3GL1>^QHu&% zGUR(0@M)MN$n-INzgWK56Ji*GGT4o^;*;W(R%0MSt0;q0@&6r;%8n8-usOA)bRlB) z-mi|UD^fmcGEN*&W`uvfxy%t5R>)u>br1MK5$zbN!D4B$Ca-8y;R?yOZ6cF%tuYHCf$9F=!WYpC4 z514%v`7@{%dD_%Bims6u*{$Ai*RsGB7nz_dCO@OOZ4+{oc8a^>20^vuvch9E9Pv||RMDDg*9(ben!7^K!RzJ` zEh#OZMZTc&dL}t3Lzl@(};-MDf9@Uh%yG7tf z>UvpVbsrZh6TGQzrApg~`?PFG$<(9$e=LCglZ)rPj$-3G*TTFSx4Z6E5Rta)arNEo zC($v#H;QUP?sjQKlWgcpVM8*yY%i}-ZoG9+9j{m+Q1-E|;+F(kix?y%qS2g+&q|d~ z`Z4H7`sR!v+B|La?cRmbTF7ug;fM?@%YV=OhlFpY1&*?(_iGJoGb(9xM_QsG*`4j0 zKH{VB^19)3smJpU?PCjnMk9eWn}3uB?AG_pyor}x=R-b{0-@aZ-C~w%qoGIMyHsgs&b~9_JW)?~m_peX>#d!Scib`DIKgU-Zd&5-@_9eT5{V`B zACB^dHJ^3i+No37`RO=pk9w~8&K-;Pv`-~3ruk#~Ga|~DE!2FE_1#3LLrFfMQ7M2w zE-JmDTE1mkYV|?~FGtjV89cmUqorImbR7~}SC!F>zj7C3zxB*sw6Iu>ETurJttc#* zx)2^RAB#AdMt&%Xi~hwIw5udKCO!f$CTpA`ET}nXzZo~`NE*qKr3qUPIWy@`d||DJ^~2XD<4j>}#1J$yy0c$%gEQ|1Vh0P4j1ws&&;+80lyP6b*exH>qp z9v`H;t8w~sIwWkj^|+3YoAU~$T5#cp>saB~R^}t=GoGz@L-7uoZ|(9G%kJ7pO9t;ANhXKx^^7SDX!_A$*j{?tQ?qIU)^`JrWc z%3!(s*7?WEz1=AWik3juGoO^hqtw9UP`Yed|DeV=`hSx_*Kl3clDv3Av^|%fKCXfx ze0Li?wemVA!O(I_g`6v3^4`WH@4&>BRY(N2AjbiRF``3|VJPspJ(ZW_#5@3c(7+@In`669eT@_P_LEC|ODh1=RU8b7X%vV1sI_2pIMzm2@s*7?eKfE9=!JmS-;W(&ZnRr?eHP$%~my zSpRMB)u4$zwqp2WP<%O$uT`PNE1HjSYn#LhZ%!gvqYG-ZryPT5A;@_{09Xq zhdtO*G4n(B7T@mPO>9rJ9JggW!!!29ibDnWrP&6bf(|z=hpz zF}qfwW43HBg3dA5wBUbj%c2DuT#lRl)M6_@|LG%eA|OC%J8FKZI;1tb*xH9P&TD_r zNLO02Y38D0;eOO=Kr-A9ymd8C6@#U@#1~AYcI>w4+9Hs-hEdCv$EVb84k@x<;p3Q8 zI?4?WG8U98syJlqQ`a>PK5aFV#C` zQ7Qc(|E0Fl=dHvFZPS~g`+1oTIZl}NkJcl^F|i>Aizz+vY7=lQp&=T})Hb*zcZeX* zCx+BxLB7Yg%>E64%`_&gHmo}UwO6OpUHfA8`H9$RbJZBy0dVKkY%mAG8A2RENXZ3d z2?v-b=p11rm~R*UDzrT)m5vl4vYV9O(+rR{n7ohd{PrR>CE5sUK9W)@_V7 z$UX}yv)k0(KR}xU9b{0m$4fbPbF?VJtrZFcCB@{j-FO$70JmOGfrVW^oCvuZ63!n2 zj|*u0121bv3ux?xG_x&&9@~U{gG70Lx>z~R{65RR*D%(%X!XZYyC0MM>VVJzRl|WF zYz+HEhB#<%`j#fAMX_Hj(`ol`PGWY(!VZs4;D$V%^11IT1pR{39kCF*O{yQwkAErI 
zlPSytZ9$dhD@K)SRb{8x`~AABLl68I=43seDTO|RIIUmaFryn&)`gz6=x#8D_pVE@ zf=Gk@US=&yg!GeK)wU=0nbQ)f!kiUB?1jgzj&dL7KxI^&ziQV4=a$IJS#Sr5jHag; zz?fNApFZA<_j#>|E=S(XhSl+I85E;K z?atW~HGsMOEeKL3g&rc2O^O|ooM2n|QuM~J*&BAe-5g|&RFbQ!nF|3^*2owz)OI{uk_xs@yc%FquTHOnbtK2a#m~!(#Bv5;nvw3qW=F+^vz< z9Zu2>aISLL5D<}$7fp39NbP9oqNCG#oH)KGWi8SF-+-d10`(noNOP!G`|pvSAt0e4 zi~5t=>BjV?K)&~$<2zse(x22&D}BO@fWP7{bGLtH@LG{dYQ1WkWr;*qmvZppL~Ted z!PP?c68wEzCzq8@aulw0Yo}i7*_tK5u%bPb{OphC&Od)D{yhhjT}yJIYtYyvUh8+b z;6W0&$}4c}CS-ZDr4M!?RT!{zyirao-)i(QQXRKnH&B+LPtZ7+nM0zSXOk(>`Lp=1 z@GzmT9g8BsC~ud|%~$-OQfS_z8RYNl?MYLL#_hn`xv#-!TCR2o$kEKkl^Ccxf@Ib0 zc^;MMlr##4_2?;JSYiw}{c|m6@zFfXN_lHTlKa$UiVM=Ysf|0;&fW`K<`%_WbTz?| z2X(6x0C^LR3XUjKt)%uDPQ->Ju{^LAG_tWWj5#eKs)Gss(aeo!dzf!0B*c*cK8wIJnyA|sE9&^VIsJ;`_YpH2b`_S#i?qLyXk02jiLgHSlCgC| zx0Jvgr5i7jJAz9OcIiw}7cP>-tH;v6>}?k)w3y_R>vMnd0D?aa^gRCtY9j>iVk&|} z=&^#NA_B)i*1gPXmlBKTS-}YB>Im7snrtV}L5bm!QAylQc#-C6@qJ5B>fE9$=350 zU(N{*a-TuQxq{D!l>wtms$I%N;YDbz- zV~>Vj%xUm#V6?-Ez?JFlc;qo!4v%6h!QB!G#Mj_|Z+I5zd%j*&y88FQz+_GwA62M( zXef7p^c&^EHS_sm!3x_@a@QfgY?Kmw>?)#sfuc$}Fv*d}>)R_<#(FDojiq2fQqXeg zksF|sN^@Z>_|ej5z}sQm)nPm8S2c>GcqVNp=~g)x#pFO2#n66@Ym8c(PX}!Q2=V9L2H z6$w{sm^MUokiHrJ-kopI#ScwB;^TC^gsEA^pvQ2LzjJ!GiXQc>69{B4SL4pCa^u*WiLpBYQTg4gH{TXAzNu*xQx@k{B*xAw){3LI<=oHoO1 z-a@|&XgM?+hmmK=aIrY-O42gnWn;X#lbtXqm1Qy}f3HX>1i9b#?)yY35D2gb9-&u{ z^=wQee~)~J9dIg^nj4qo2%k$p$lb=2q5Y~_ZWzS<0Kz7L&TJ0!Ml!82e@<}UjC^Qn61w^ z{4xJl@y|=;#|Y`BdLqqf7ilRumY7N%uS4Ryn0INZYGr*A2{cFE7GP!!1UQ%Sdu&v?|E zu&VuX4JKWN93em^-LUfW zVP0HeA;S8Zn1ehw54>#V!iN^*>|IJD@E7x+f;A+_o{5Jz4{yt))VQeh|DVjaNl<4! 
zZVhf7xUNCT3!N9M;C(y0u!8a3!Qeu#L-lBA@4e|EInXz|yeD<=yZ)A3EWof3f1_?e z@Yl(sr`#19sf$^~ZKRXB<4W?~`tNs;_zGX6vcl?(Msn=cpWoq6AfwXZs{p)Y=S z-mETNd5Wd0xf8Ud!!6oVvVzuJ0=C2PcK3$OjGrQJfQX{b;@pzbj5JH&{5jc021$%w zB<@}iOPr&H4KpN;B`Gw!uCK%JIigK3Y~oodV-$@7ljAAjZx&Jf8AC@{2{@N{nj0(0aOq@ zX#EGZJZ%n@&;Erhv_SnS{ln6(%~)Go2J#HztQlU%1+bzYl(0*IbtS7X#DvwUd7!|{ zq1j$y;|s=tu-k*}@pMPI?3;9SBlxn)KCQ9-znaF#0vJ3nn4^w`V<$L#@F_N)@x4hL z+@|^do*PQ>Dq57|ZP=(z?XKP?V^=S*BmSA)Ko8t+Y4ZRp3p-0SkQ%S-hK+KbV_vNS zYAwJ!q-I%>3^rjo#5-#;RFkvj`J{nCo-d3@GH|b>yk+%-s&sJ zgGffK9NL0a6CEIaKl%uq6*K7Y^M*R~u*Atzo>!3L6=|ekOv3ZP!w-WF(H7U|R+DN? zL8Gnj*b38S+}IcTVvw_!lGE4hp$s+=J(*q)=`#>}X^!x$1?75m^YV~Q!BI@yfXgLv#B=oauB2WKBwGOaLgDu?%EsyFg?-r{&27In(ns-!^Y$ zNkf>Qyu=|QFgY-<1qpcsyI4N4hmTA0ap|HZwa(>0YS>0|k@FULiqYj%3Z)VLt945! zRFiUMYi&@! zdyAQFJvRQj(~Gv90}{_=w^J{dYhu+E)*KJ9#Q3Wo>#kOMowBwi$_+~J5~7I?;(_Bh zk8e|a>3?_p2Js$qXr%{T&F-=G9ZAEYFTUGspN;JI3{=((DL++C$<2Alz3$G-2}|## zkb1afahVWo6m52=_4H-y^uYTltA`AZx~IB?`PxY!w5wLf&qQ5!jcqpU2>J8^)ETXG zn%Y2@u9w%19(|Ltq|$Hbx$61s4d%5PZj#}?Dsoh(-7arFdi>tEfiLJZqRKV=- zK>CTl^5V~&llh(7&vOzXWibvj44$RZ{K_sjyHs8)HTEm*Uo_mA(W!Q4`|sn*-(kAp z+w-l*Rv-)w(+uRdJ&#-bcRvJtr5^}EnT5`LV7Xu83xd%PyRE%aDU$}UBp(NGWQy#W zb<0#GEC)XF>=jqFtmE%fp>~91ZB+$Gr8;5~LR}#O1jJPPfh&)0H;HohLVil-D#4%? 
zxyOTyUFXp4j9vWG>W90PJB8*T6m!_5(QYJtnOWaBx59Scj$OHksc{kYwwej92BNEO zB@a$j^)Bh*tmb3nig971=p+9(8HQM64j}%|ctw1PGuT(UWCs%}gApUo+;OC~ShtLA z_#sKpOGO3~(SeR2BJqYi?OfIhL(RWA>QvV+9}ouS1Mf$uKYPKF z?#h^5PMv4WJW!FN?Ih>2U8VRyy?=Q6&Zdu>4kP9Pq46XUvJ))3>>yaW1O?nn{tPbAH%0sn~W{5Cidq{@IIW zm{-Wl?dCPV+h426hUqp6%3aoB_*?qz|Eh;9is$4K`t4zGx#g%+Wd;T2T}Wft$JW`D z$}#(+JzQ4|H|y1Z;(udwO#UBNZ`sgRpfznL!QI`9yR!F0m zcBqSa@RC;P`^+?Dvy04%Anu=NvJKJyvC?9_u0KWrznDvr7t?3Va~BjzlDKR$lbF2>v-J9ngx3reSirM>CUw1$*_tYL%LDC3us2*^iI!*3FDKa z)FB???2TuP`k)i;(hl?4x0Ut_p1ld=9oA2)^^3F@7?Sl4JaJ^xfszfyz9gu>Q<>5Ed4BYPL{#7wV#lgc|7-U zu7+Q2Hk{A?)VV#+XWrc~i_pC{Z@k~d)Dpi4`1=TxA$-C9YQ=i0g=(mQaCjVwv4s-m zV`>_fp>5jeD2VG=RkLrMz(tSWva8?EkY>QTQy%CT|b7GG*c5i>mf zi4_WjL8)iJ(x2|7@2EbUbi<^<~TEcAo8Ek zsRW$73GAnkf``~v-DnGlFpneK)!ddyGk30p%%%w`v+f`4^(GORuLT?M$MJQ086i{K zp|)cK{?lr~_|`r(LH1hq9cDa1hV)wNqkG(~q}L`B7cg<;c|94jN)pRdg(S)v5kiV;x(y6h(hAy5|D}! zuB_l2ud%&vyH&<1&v_pybv-kr{inb0`^(lY-rLyOfxcDF)DF*buX7 zX7DxUf7&wymmouCQm{M}@Z6SYoiXwky!;2VvxqEBVYv_~1>ue1dBv5%kY8o?_(i@U zIf{7!C)T?b2alXG|MOi?=!4k$QaoaR%$Dhbx}#9P*nJ(6L2v*1WBDayA@grcS5>ak3Eb3wCt_1!I)*HWeK^rOLbb^j{nPE z?T@8ab@Xc1k}C5~w^51bQSf$?-0%ln@u3*x9B1`#aWS{-Ga@Tpb5@ow4?0CWv$~F5 z$qp`?^rdr9N~T()5ZSQms^Rd1@YVDhE@z%l%>TjrjWJLy0pQ%n8N7B#>k4&i%6y-j#G4JNN-h3 zq)2#YBh)3^u|AiTjTMI`fNDcL9EI1B#(L$|-b01tXM72XVE|{&=NB_re**+4u@tcR zYe`7A6cb-Ph@132YTFEzp38b81%3KCON!0)UGo+Ij1L zGct3b@FA{8(%i7dE4U4ql6?`r4vp|UrvSpO*=#LGkUr=FSnMu z92n^BXP=IKHk<#ga}sxt{Y`#aad*-jA@PkQsUxRng7i+Hd`H#|89GRH3^dPlI zrXZ<}q5!O-B4FwaD$!$O zW|VS~;%4Y^D*3e?_2N1?RR3?#Cyj#DGabVYpn+xnkjIDgXYdNFW3&Pj&}K_p06Ina z;(0SL@zRy+{#!HnJXOQMm+`3G#=XPVJ&nNXsAT?`tK#OvMAH7uYtD#|V~?A=@R76q z=EcL0ZBOPW{8Wno*$c5FQ<1vVAK{nN?gZIm8ZGECh zaO@;aAcp+!8>TJVRGyS6gq(PjSN`Vqp*DaLZzMH_d8hE%HLY@pL(bcYJNok48&nAo#4o^~ zt~_77WebFE#ZuL?(6uW0G@Qre$IshFV{2bA&$vVa2LVFe&j;)cbRgW`K64KrTTm`h zxgK*4;`pTB1R?Aj{&SXlf|u%S*O_X}NX3GdrCKjq34Z&LM*(67 z-TjV1QAa|WezRZ)rm=N_P`+-Tq6$H_Dj>dg`brR$_~ya~{~~4jw8orP(|ie8qq@nB zIFE4XCroK(a&SU#ik1a6Lwa8yLmfkwutPq}W77kY*;Favu#sn8hEabcu?cT)ukfF> 
zwU~#|q16+M_JL}PrAY54sq9LB(Pm@Oy3Z`17x5hmg7%Vz9pM~bk5cc(ww^qm7h3`t z-+Lvtz66(|xdol`1=!)A#>_Zb5|fZ%IF*KRb&ANXZ7)2)sUW&ic?v2?eO^J*L*!ui z2&{ajEJB9U8(1BR!;+4W)=M(P6o(pRUH-QAkZ?0Y#J9S!+#+f;Wd3-|A6MA^u;QJ2 z8di+g-@3~*87GRpRBi8){STjf*Z3|ILNWJk=2==?WSQ^h4?7KN_5ip!U?M}Wu>U?+@*x?v2y9~#%m2Z z(c5Cpp9j_H17CSl4j}GUGG9C71^9=7{f7CY_s;bBM1W>^bBP9ld8;k?VF~~1;o+Y{ z{%S_EA0E1<53a1!$|HNyKv$%9vw z6>9^H>roLu!6>_4f+{j#`b2#@u>aC`lNZv8cX`1*YxE&fP!Qe0p2zEOL?qgQtBkmk zY_C-)uOGQ$#BcK#o=@!F>b>n5+bX`A`@7EeE}BEJWP{Av)842*XeU0w%5B#n?QSyq z8luZ#o*LvUT5ew08q#Tq`nT~hJlR&ub&862Mv|q}`EL>AcF^LP1A{td)SlW%lymNy z*i=@I;h5Zhs~5CXK@v8qE+)C+cOg%Tw=VlN$r0yMuZ5$?ONpIjVFoR>VCl40x7YVy zF-U^xcN{M;*=yvIZuxqpJ1u~|b@uv?O8RUqD~K`Tyrm=d_=PC2d4b@_H<<6Y`r^Y% z*WgY8v46V4=r5ji#nMR%bMwtUy4%gE)&@iSTjRrHC5mgvy04t*d84{p&#;XM!ch{j)xoPC~ zr(}f980Yft0X0JEUoXXB9xSJAvUuj7S2&fg0N*E_Sx*#3c~E%fkQ3+_;Ec@*=gH{X zxa9e@;PUn-Y+gPYe@h%aL|2)+G!3pGiwk&W2?fU`3X6W(jhQJu+x#9lTtD@I5@|_o zhY?s+Ce`=rLp8 zDV#Svk!siRsHk|0$4&j+fdHG-r|(yr!HkS7P%kGzU7;d;ef%~@e~@OI+ESDda^;X* zifqEJ^0?fEV6F+Nde~v@Ix?|eTCVrV;d5c?Hf4||jzPM%_6PXQqd#V?$TtT>LRMX6 zr`!0HOIMiNthfF-guyKin+j`S$uJbw`_9);ieh<+*RK6v+{Dg7*l%iicLMd z9X3WRmm&`4449#mU1w$9@{3|4 z1eSOQp_PLy^cgN9(RGwrloIUu1wohtA0#)6|2Zp_h@@K9i88Sl`>#>A;*9FFt+;n@ zU27j9DY-w2CHA~#7QfX3Sx!~)7yU6Ktpb^-4+2F-T;5mCx|oiOn~lBFm#4C)26jUJ zA7IOp(d(X()~fRzWZu$)86%&aF!s{Rf7_bx*qB5%13BjLF0SVJy*+ci8sf2ndi%a(M1$!bRR1{z)@Ri?j~% zs21Pycj%D3c3|j82R4QgEe-enRY89PL?Y|A`LYvJlsatUWB^$Tx4Je}HvvVEhXtFz zM>N|ZcAhT$9iIWzV@)q?X2&oeli&Ntr0Lk;q+jVdPCCMJGiJb|mBezshaH;axcR4}fINY~P zI65BZL!1#syF9vKf=?+pPw0OW)f3Bv4s2ceC2m*R;~JJ|vTWr)zT>I`oy&Rh5%q_X z+;(*)Dg9p?f}8T{!I7SAo|>ly~YwMM_W^LyxJ1AFrS-Oix&3LrLZZ^cRZ2l|+|=nd)w zsOh2jGAoYZ&)1}f_sDV)K<8lK3mm0^R6EMnUPB{Xv_ei}*JYw%<)v8h+j1v}`9{m? 
zGh$#Go)zXWd=K7BkkNM9fRqKk~47z#t_0?SPf~TJ&;~p*e3=Lt!{1l${l@gJw$-ajW7Ay!OXYrIDSyxO%OG~z|_ zo~ss3^?uA6HqZWE>5YZXQ>-*XPJG5-u0;^X=pIO3yXn(Me&J!9MclNgENqp;0Uuae zI|^Im)Xi?E`vS6nzj{i(N!osbyh{MaL&;m@ArpG3PHv^pAp z#L(htc>Y1Ipk1vC{TBb!w~%BBsbs{g(uHmM6|0&qaiYGXr>ieD(tnOo@e_l+GwbhN zeA=AIU&kI%Q#GH14iV8G`pUP%o&nE%wu3Uidz1%~D73fxgk4%{cYrKHwe`ygZX_YC z_vgKrT!4H|UoHIfxoxLrB`t@nir8iAccno^_8%Ha#i4bC$Rs|#dQ0maI8&1(Q6=`YU=EbV-}~^sDHBe zp1^XVQWm>Qx_jO9RM7y>8HW^NzoITF{f$9oyJ%_Q=Vm11t6()f`97hro^oCkMb&y`Cb!PZ@s=~;oQaqZhb%9+F97>Y?%LtLx5 z-KQMk-~@g1=2yOd_S#h}H=*nCCGp2+R&uTDBM6-W4t5RxdZ&jgiz%LR&2>0_RCqYx!BLYhA>!fAP6xBm%r81f)C{W^=!mB>u( z6!J(yX>4B+WZN0;5k3bdyhux~rITP7+QfYzUuu~rCte!U{{lr!#HX+c#)(s=~A0VrBn%>0eELD62C&wGtxEJwp zw#{(V!b-7Gbh+m0kKE0ebM1xIBvf+F-&`Oc&s>MYZz-)@3`?2|xPGcl^@!;OG%6w1 zNzik zQ5Ukqv)!82gP@;toA9P4R;VZ`3*5LimBFo!B7&uuPSYn5nVhV1UKTO5sV5|@x;b>v zPCrD8pGH}b>BtP@`Q$z0gXjGzAJgm^-Y2-s4pd|x#?j}YdnkozjN<_zB1e=jBn2;M zz)vFPHZqyUIC4N>7i<<|XEb2X|NdV^kP`0=vq{+_sgNsgGZHq7Sr&$| zUh6)4wn#%Q6?pJ$)Y}VG%B;7?&3CSp5_g~j8gHz@5+Ha914ZkRdi-NP1DSIiZDZ_* zOou6Q_X`Fx?Ch$_j%CF;MU#lIjo7}JC@s@h&xCwgw29%`V@1*$GxbB@=XtX1K{=wt z>`%Ld1H)N~+qTh1W^Tl6#26v0j!F=ZF1yMSZ98lg*%MujD(4`0lLha*=5aE3qB9@0 z^#MDPX1x$UAVmwnA4p)1wPijhqUeyRk4}0FV#xbdqpcdC50(NNE6?jK&JzNc+swQp z+cF1IE3^yGEr_8v&Uw!Wgc56~00_Lx;TCrA+R_80zt<%+EXz8?x!nB7(eGvLxQ7ls z=XC1Su_yKKW8mK@b(|nK*DSuuCe@#Wh_*G{dftbn@xV{A%5B#K{Eo(n{QEERu}!`% zd=oi*gBOwVtzU^>Y;yQR!|?ER;ByVl&xq&ZHPKF)c>+WqKpP;3KETdeXe*^7Kj?oh z)d+gvlJU|hlfMs5(7p4*L0;}fH3afse_LK2ES=Pl>ZnH@CJ50cpTFqOQ))vf?JVwv z9fXW^=fMTfHdwt(sm$^1ttYSh@9xs9K$@_K*Xg~A-YJ7Gyx=3DMx7+lzA^y+dN5?R z_%|;@5HuCJCOAP;-+jRrkUy@VT>OMHGzXt16QaY5CWDSxC>a)AHTqK{hMpQLl|R*I ziuKP@Bvhh;$+}tPyZeEvSX1Zg*LxN@D;$n+!Mbnh7ow2o;XLoyoqbP-e(l-l$alyH z8PhAo$jZAZ#@BqBWS~905OP$xgI~Uo){&r76WvuU8X6QNAz=!j(>D zz(#2#%M9zHb&T7umIxOT#B#k#M-LjSWzCuw6cLF~YtJlQE^sn+r?CgHRPNj?6A0$@ ziqthIQ3yt*?tMyg?H?D{GA@x4?0543(8p%oFRq=@jG&t{gRRE%go4+0^nwJ@SgK$j ze}$j{V`08F0n(p0sxZ3coc@{Eue3$# z!L*={fp2Cfv#qa8Xf{RYbyN^LI3>8X1`v+R@ 
z$`Ki39YaUHBEFU4YC{rJSwR9try)gWq!c{D#klpP=zBJj7?aw)L*b+vor)w}9FAlq z$Z@-Gm36{3vH4T1$FYD^qAkP+5J zXZFghLb3HkdUO$FDeFwHvZ`kK$IJ;0HXtM&HSODTaV6}Z3cV04;fzAP7)+$Ao-VC~ zl~;3p7cxX6`=!p3D(}BpL)S?_p0&|VAmZBlH%|zKBs}Ftnzq12t|KrUKz z!sV$f@J$FDZ8Q@Pk}%+f740__la{;IfMlAneC7 zc%68-y0Gr~FBXmfrlz5oPtFd^Dp`%4E+zJLshpZaSs;o|c=5*e#op&~;i)65JT?y4$) zl9kQRP*+b+c1eYj^T!8vEmE*_N~6M)ldFm2P{6LJeftZA%rAj7br3Dc)t95%_Hga8 zCr_&pU|Vkt-|^iZ>(b^ms=>ODLR%!1ckTl1Yb`4~!ygbF#AW1b8m}(_ds%nI5(x8o zjpFZnZJ0N<%1!$0H}-DAdKRPjZS@8^057572ZzGSbZ~0O*UhCSq#29nY#7SOpRfSe zAgifhF^0>eVN;4#Pf*w*(Y-NiI#AjxokP}$wlV+&6?aOcG`d_p{EOlw#Zd?lY^!`V zqYR|UV#=XaRGYM``SU2OS5z;N}I*{ zvmWD;He>w{X+>8DmNk7zEQlWJ6Q3&AdYOlf2Z+Y&&moR=K%h}+kP4WLO0c3i^5APV ztjuQ;W_gY;2}|9a=>Q|Jw{;jHTmcH27@?G4wUsW`g{XY4o__#2C8h>UXGoluaAH2z zqPd2{8@UUqeT@7`Oic7oJ#R&d+3*k44PR7`y$)bK^%}n+VhiTIJ89Dn$%xzZTS_DY znZnl7{dWqpg6-F4xtdAc?`>s~Cjisi%8X43j;$mOzxj$@QYrRCt%lq3-SbQby4^O{ zST<0ngzq(oE$y=g1un*4kys!RAUnO@2}hJ~bCQ*fziy^1RJEox$n{!yqT0;YSv{>6 z!khuT)@!7Nl<}O+rHFuk`#8!V7w);)llzRlIDIi*-ha-Ms}s$>NZ;6AMiogGS7@EX z+K1>YGf|pr-!l{cNvnJP!8SsQzQ5jC(pb1Er-iO9q%6fbhf_n{_ZW*#AXEn~j5sw&=q%Ll-M#m;d=ibm5N??Jc)Yys_gx?S zA#LtO%op(#j#Z53VKM43q<)}amw41BNJ__8vyC@=i6gpMOP=XAM3Jco5_sf};VEv1+a_8emG_$ueh0MoIYIYCZ0k!~pj+AsbDTfR#S5JYfXg zkjf}NDgAGI`g~Ec(|V89KDl>{6yI1y`-@d-IJP{@BLJiCK+Xem7xj@CY_-mX=vI+f z8ic~jS6$;+Z%Tx45JQ27FWAVmUWTdd(WEBTmgh!=97fTeUZYnsP7d6mpFI?L)C21< z+J41xu*UkDD~vO31>~^4E)vqH=T_uO=t>jSlk8zaIK?8J8KsAks-(bIaDaIcU%WJ*lCw=`Q&1fM@y z{`;i*EI-RNalO~NzBMoqEsUh{t<%A(_8VRnB^(F%CucPXk?~dfK-GW#LOOnXtvQi> zeP^G2^?A;P;SrfV$1+c{!XZ7`spFT2RBsPAGywqIQCwL?Kko}6_`T0#s+3WPVWj!k z7I{raD4zSueyay--6f2EmEdjC!`JPOfLC5uLu&cgmE{ar`fkrIb6ndlep{8!RT#3| zujBV%8re)=N-{a?M>wI2UqLRn6Z0fQRZ+?OlXpl?_tfMdR_kQmsFu`eg?GS*n&}kx zojT9|%jZd=)~%&yq&75c0wySr6$x~sNrJLmOvT-N9b>vWb%7FK*;p4;7vsq=NQb;L zIZi4N9JUwrpoul#InL!p=4 zi#dWu?eqV$J4Y}Ampsz1nLck~6ArT%#d}g$ENWu^7o-54G6Wsw@eQ%xp8+D>=gGCC zAqGgYv3uma!RGVpug8tGor%?3V5;y@k9Ma-GQ_AZ4FnaW0nJ-xRzHk<`F);-bu%+mmCF_%!c)`C27iK@RA|vXg;Ph2O<9mG 
z!Unt2b2C2UxxXA<{_Qf95qa8nX}{lV`NbtK&$NROOpEuIjAfNFJs0H1cR#<sMX@sGv0hOQq4UT-h>gINbiL*wgPd4$^?$2dB9DpZ z9GqFNYjb;=rTvN|3iLAfqlA?R&u-*Os?=NfB?mvo@Zf{$CqzJW9=XGbEr zi^ybl)o^L|JJs{R5KxlukXt^~VD**uW~B3}p~O{c2O0`o1WPSs?SbF`TJ zJ_7ntLe{l-OePT;tl-Glf#?)8e@k`KG z58S=XJeM5*55Ld%J6Xl1tu5|kpR$9Oj;;X$kDMgu)R^7LN}}G{(-{0`ODj1k+|r`5 z+b)>o$cSg8Xetw{ZH{BG)v-THbqn&-N%#=K_OAB!I58E%i%3gr-1-`5sWe)j`D%V}Qts0KlqBQtLw7%Fl?|aPc0C@x_yZsz8 zTPX>NZXMN3>gm#4)d5~5^rH2nbis-8f9E;!dRXx4+UbKtb@_*X!!%dRm*RhhX>>L9 z{<}dbx54s64m5^^Eg&Kjo$deBVLj2t z{TlFHGP5U)Bzm+cO{u^P#ptREj{{4Khz`&Do{=>dG%oHOpTFG70Qvo1b4!Qp0_N50 z{`#vkIj*4A`l&Fa70s)_HdaZOX&c5DmRpKSD|L}eoKj3w%3Pbb3<*yJ$Vj-h{C;!S zj9Sn~Q~A~xH2yOvfLv-P3sjCGk9Jf3Crn=1s=xhPlPO#Q?$03GiMpr{Ck|o$XJ;j{ zUxP0S^NF`*g!AA^Y`ib-D>FE^jBil_q5B( zmEqrfPhUZk0xaj@df^$NvnyR)ea5O)DF&OKYAYB{q8y?1 z&DyplWVkoNxNa~vOE#@+7mE@r0Z$P%#T5V{_*~vVu(DW99Nq~0;%8m3Id@_c@?%ql z_FUt6N&N50?C5q((!+I{jl4x?|JA|o%DH}&itzy6qmJ4%RUAk97jfGZ;VM-C6Py|d zbDL!n+!wDV3Jr+Bq`mDy)oeU=ie^?7Sv^JA5L4c6zn|59M`Huu((nm>ktvDRM4`$- z@MMh6Rs&c=i5opr@}Ge+gG%*xK#WG*&>FfKrqYOeLZpadYdQxLtp-~`_bTePvi;5V;Lf@sxgF1vuh7)Gz3=K z$qh9rgTsCXgI&wjDz-BN^BOK#2D4VNBR|Ytq!}#*tO68f?th{$&wUMtCM{~ar?_w6 zBdGQ_T5+Vs1Ja;L$a@@V3`4cyNwH|S1FS1zqgWYP9;0~<{Bx%2U@9YHk5|IzolhNJ zRfUan0k{fVWkMQ$q4X!YlHPvLul%53jFJUzm}ZzSu`!7lVK_lHh}#MVsYPN*Xjfg) zO`(8M^608G#iQ$9dHs*vGhaPc0k0i!6XD{&n1(kHQ{U*guvo^}lv-b{Xr)bIk9pkR zlXdI#0kJ__JQp$TJPYorp@wxy2|ZAX8(dtqTpg{IXvrXPon?{1;}s~kGx`eJ!bcfS zsamhz^5yG9A&4R4oa&8FxkiQ43+TIhAdo2~;JN=D?v<@(OaMi+X42(C#7rZDYsiPg zfm0xKou`in3MMEUiJ5P1yurQ$GUGl;VVl8(TP;!S+HDnq=YPDUlVEoKYQ=J!K9Eb0 zma-kLqY(>o!dmZZ!fZoXWo2bWsDUIg%Kt_9mxgxw(@NW+gOx=Vg+g#G01?Veyz+Y? 
z&HT6AgW%!A+piAV5T8#1Oj9kbS*JLas{Mn#zqn$uLbN8ed)If7PmNHph1E-6ZBKr_ z2h#G95^%p?Y-Y0N-Yj$0P}&1bgxIyDsXUo+sG8oLXr+Aku!b;|$QFJk_*7!vP3&;e zp;-wf{`yI-1N3oNB&GrXba?16raBKv?;YU#Zobq9tV9vBD(S-R>~91h=C+mD75C=l z>z_O%yK;q|t62IDlI|%6I8(vvjdq>{jOaN@EBzrO8oOx1!eI8A%s0;+iyjP6K0zd6L6|2Ym3X_*E2HmJR^1m7pjwELBEaDA zsZ_|7Vq}T)Cc+TSDmU}Q&i$1WGgEkW9BOPg3 zoEZMzi%`S2!8vP}-(JtF^6xbcfxoRQ3q0o715R~cleT06UmgGPK@n2bfj?>j?oVNT zMF^r7G(q7#|2BOYMi`(Bw%dpdA??m+U*RPDQJGm9>-dZ&8cv!45k9}a=X2lg?GnZ1qZgN_kemCPQ(7Hsvp7cY?%fGafY|xBu69T83EUfBa zGOitA4K(~TP2gZ*A>!Ai`Vb952@5~JPZOAT2B)7OxSR#BQX1`=&K!{z+-t^7De$${UAXt(7fGkUbedRz)6j z(5IQlnqJGO-3CdDDkULoy%%tP{naP#Lk*?vG*&s*@{GO4X{|mq=0K9=*n(~j&ni#k z5!Smn^gf&tuzyE@Zoqv`g_EdYpvB&q@$36FbscKo> z{u;zY{SNBbOQpH8dea@&CSX~k6Qv3UxC=1V+4aj5It^!DQyH!_u}WFb8|%Wku{hRe z1ESB>h57kSZW{7sbF*vSTJ?Fp+fpQnhbN#e_2ehN$W{DIV4dRff-@ieg37QK64N9! zr^eLb17W|YZB0SrV1KasqUb$GzE(d)qibkLtBzJ{Dx|2WEUnOj@7m|Fk~g|c_Myn2 z1Sl4qd$xFL`J(SRHZJJ-LB|n%*mUVTci%@-MahN{P>Kf1fCxV6K&!NhHrJ-ki+E%P zgsk>#-ZwudMpNICAC=T}q)VlQMS8O*<-F^*Wa>f~->=NHUcS^a2#XHX6|m&o8my7u zo<_6K_Ljm^R@;tS3g6lhTaqo@*o54hqB8dpHZ>l$9SVHYm#*iDqa^W#_tqXB4+RqX~1gSC@!9hxxRuFk)qk_@a1nPk=O?_Q8!#cj}{uqLDZl zp;#A08L$$7nFU&IA5Xct*1CcUpj&W?;>z&vA5tCuB=GJtF@%GlkZp`Z0RAh#B&*U| z!Qd)ZR<>p!1)g^j6pg;hkxoTi^LL4P7Ob?8?Y+jVxgJAAot4is(QaZ@C5(xE`046q zsupDJi)=GF^oM;=^O8XzD+?ojZQ5#x6iSyL$9uwi9t)P=#o%%UJ@pRYV-;~wQlj@*XEes9LB#N}hATCD z+cHLJG`tDTU0wq6AU1vXU2jDo`kO~Rdi`K4_k${^<71>}|F-x@gvoTHlV81wh}VgX zTF={5JKqQ8{hs8R7kIkEAF&hW9(d+~FByt!b`xf^7zSZBEjD78;<1~AOW_I;)hyX!D?G^j0 z)l_+F2MeAi0CJ6CM_R5?Uo`YQ-!pY;tdiN~Okf|8H(Vb`@A~300P0&An0pYwclEDR z1>SeS{BT*eKZlVsS%_ZSd$~01M?UisaKAySRF`&Qr(<}-y!cjO+h`9som=^2Qt*yz z{D1EpELrG%CnTIa@rpvL$*-!`JIc`D|0nLjnSd2fE-VWWRscrJ|g%od%2=`E1qlIFwa^}Kn;n2``XLMeHMqv~(-;dhxkWt+Q zatzJlKdNM7Xeb+;J?5#c-`)lXMd@H(%U$s?7x5%m|h3*C1flJbn3cad8vQLUVV1;R@BJvxz6wzfWW-z*i5 zHULuYm*bn;iaB>Vk8|7Bz)>y_Kg^rvsa(yP?twf`ix3%ez2%ARCrH#t!&{bdjuAA8 z%GQ+hJ_3on3Q*}}hvbdN0!L7weN#{qPqanD9o} zQ7t8VBkEBO_>dBzzoH_A@r-zTx=#Jgh*$F-&-E2-=VN;%8}Mttj8PjED_KLSkr_Qn 
za4Bitc7(!k@3<6cvj%K+KjxbO*KqQiGKp|5)WB+qnto}4e-EDe$%YXPBt2x4{t({1 zmivjLb2_w5wI>|XCM3MBs;B{sZ?Gu3F7=V`UE!=3lK`_;EK!79Cw)~@Rt=HqjUp6^-&>2o~Z^ieCj z7j7^B;i73PkSNP~tCU@dxfKX81|1X`TPCZQ==K>E4o#bn)n@uh=SS+s=AkD0;CY712z7!M7HqI*aL~g0paM;e!1PYyLhEw=>3Vw^OT>*!*x$!`FZa zl&AcRJYiai2xXPQ!jyK^vTwI6E_O1J@> zCIP+a=c}hAN@tdXmdJpcUIlO!8l|uK2Nn1ThQG~u&xwhgNWavOOTbSyC3Hie@VUWt zetrUcjZsRNP?FTrL^1*=w%k*@MvtHS~+KJZGmO^!21%- z6*uLyR3t-Q#((rE;N06djl=ydDZ448-v5T7({%LwaU5YaA-KuQY{5B$9?E>eBFi!O zdM=VZGHxkdlTM}XBS=i7+(zV|oJWC;Mn}&Qn!~FkJ#)gf2HcxGuU-0!biLlh&*Jy|vEAVgdRuQM_~U;ln4$0IUiT(XG(Zj zqFl)$QJPp=B@8B7oox~F5yzrM-@axWy8H(fstrGtk_9#iEVB6!1*dN>5N||E*`DI3 zy<31A+UO`51*E55q$fWVwv(d%+dKfLAwjU(u5%+!$KBv(6U@m zBc{i?XLKy}1BCX&$uR^tKu8&srnQMXyvF%Al@g~ufqO3(Ng-?j-!I|-3%R}b2o6Vc zLXin->|Y{2C4%Sw@`bc_YJS?IMQ@}-CTWWxQ>Akvc?4P^Cvc;J@_5xx5!!=M5JYlG zzfTZ9JfJhia?597mx9mlAA~$R&!kZM0R~@4^E$noE8VTwj}~PU&8S$2D(~t&#tTDO zXxD&dBaquTSZ}{$L2!sNP;Vcy!iiu z5m}>e?kaYoDmKU)dfl`8w`MT3X8$O&YLumTb0BnD-kh+<;(Ti{R+>NX`QW#YF_HNP z#!|?z=rvzQW5W1STU^E^E(hB>S6bFs3qCM)aF%WF?swBE7VKts26gFrva$$TBnINZ z^Oy!Y^Ku5oT@)oa=78t=2*Wm}*+O3OaFjAHLI=kpC> zv3|P`k^Q9y&bbPW7UmQE3dyvyk#)9qnA zvk?hVLqb-!U*PoLYv@1bx9wDIFQQ{p%Z@z;2UBZrGJ;O3P#MbuL*_87AbM_z)Wy#! zpY>N_HVSu3rT7R%;dQ(!h^EEq$HSeNVZE?qX3w2Lywa=^uazeB8CZf8$fc^}N!L0k zAF7(0o86daS0&)|MG;Wgai@E+On8AT1L!8h{mVQMajV#mA3tLCro}q$DH*#56(t(? 
z)*AylP|v=KE9$p*BbhKNR}Izs$dFivKWfx=1=24707D5NXIm8^Q-q##w95JA5q6eo z6|)gGa6_o(EDOGc;N93S?CUX)TJb(^j$Mv9kN=OSuZ)Z80h`@j7U>d@ZlqJXJEc3M zYXRx*7LZiB7m!Y2=~__fP66re?z;T%d*A#0oNs4-&pb18=JCNcU1!=?pPEJe=q0z* zd3|3WoA9$@ouKQ~U(oMfi=p!gq2ZN!ng9tusZL$(0lZ*=8fEWDUfSn_4I|Ho zqtF9NbP}Kq1yhQ)XlO!PsS->nzJL-@iEv)R5cHlWmT*e!UbggWl6FjNh#-I^SDy_b z)RQ1WJ{Lx;eusI0TfGzu=cF-JgR_^Tb{3+aJEPy#(02EK|GWXv@c=Q!rvZ?g=}k0H zIOr84E!QOI7K+~Q_1zBtJG01|1O48VD-vhhsRv*$zzu=_7PRXB4bTu49exaGLA-Oa zF}9tj@8mboj8i!DKO?Q%_*c`$QunqJ78F~xz1BWurn&5q;f{Xr-Jbl;wH33g5 zeICHGl@&`!xgv^2X6mJf0>ktr0yQ>_Y00d0q-lfQv~!`}55p}+bDVu5WeO}JRZYpt zjgxN8oI0d8k0gJ3W=3d+Ya(_B2NeyR%mNhiM(RB=V?U@4)S!8tqA8CNafGphiG{z* z1zIS%szL|JxV{mJA%A}LBmASX){*q=*&_T-2@lSSnA84Fo5#cvs1$LsXf1ho5G8)3 zoLs^|l6sR^`O3{yo56v;+QVF1P&$tto)uN)moU`jz9(ake-*dt7tS^JQ9==7_Q)~T?Iiv{6c}qBc#j;H29`WlfT0kY1Os9nN`OFs2 zdify`yX9e-5Z)tBOy4+BKPH-G4Xid8di_G-Cl^E+;Vg=xe8>WLWj8C0}Pe_d+u1Xycbn)gXbkh_GAOL7qwXv>4KxeC(bu)2WxS<*cW_DOO&-uK}82H)qA~8TuJKmSu|Xy%g*us(=uZFBn(} zfwjOcqUW^7b(A;gW5GZ%LdV4=9Vv~rr8>*9Rz;reVI`IBbQ;HDoafFJQF?D8{9z^Q zIxY0`&*#f-qvWqmLd`Z$=2ZhLSZ@hXY4Qd5HPi{jVx< z1H>~(e7Xs=hmY$xGFdAND%sCwy2dG5qau6lT;65GdtY%T)HuYCzSg#VE1fhr@S&|% z>s3zaGMObmUj2AzvqeDR|4E0pX%9Dk8<7wyWy(>k4lBG~*kvd|iHDBx^5_(w)?W#R z9-aigG!2P87K*VsEw^rceB*FL8Sa#p*L8!oTgZMbXOKT{HgV^VpSgub%n0*BulEB` zyHr*Mli`X!v8>snak!Asl@Dkmx_lKH^PVMmR~S+2NMeQrh%Ui*72KeuB(##7PX0CE zu7teSWxzqi3_%Sx?*P0B%~gtp#4FVMWGkKvA`tP~gqHY}hV!MQ#v(R-$rVpPxej~w zqt1gN>ZKt3rcXp_d<*UAd@WD0-|2>3>N-!uz$e4{2RfRLl$=KkjRsQPW!V(UC>sIg zS04-U7orL^G`UHV0)cq8H3qiH`MCfFWukx<6wnmWt$w%YPB|cj_)DKw;O~x1U%P`+ z#9ev=FCgndw4b<=4iN1Si)Xx%irW>xPT-&o%!f5Puu1C()n&{&Pw@dRZhk%=#6{}4 zllMievrD$61S7SpUazQJhK9R$0>Xaysux7XyhQX3>@8%f4aZ2pTHw~|Fjl{#P7W# zZSRh$ERUrq&czm%&|6@EOm-}>EL5zv_th*Ok}MIrlxTg!1z%Scz5mhv&@Hz$+RDeT=bNeA%($%n z*#OjcEJ=!{HCv|yId<~?s^3hD=S5HTdw0KZzI4Tjw7Es;q};-A>=J!^{J|OSRf`F4 zMW>mW`0E1Q5bq25*bh`O-mxq%&*z%_6+AL?GH8*3j?IqiicWTD z#IPn!KfIRW=tWuo&)6Wb!KXQu8l+AAUf5~gYiWwJ@3M-|j?hW6Q_v2{*672D8}TY+ 
zex8J#V#Nn9vbXamDX=+qgWy2^R74lGU?O8ANR$9s{kl~48A4)T$3Ns@SAnTqO! zJ0rw|>@&c)>a@Fs3$$+w|Bzxm&Jq^S=NMG=I6?6ubt`5Zc$1Tt{PL9d>~psxcB}3K zYcT939HR37CZPn$1pGG4%#$#TpL!1@x2YLT`6l3>Vpedoxbk4^vvGA0ECV1;CpK$| zvYtclan2i)&Tv-&Sia#1E-B=Xh%a=!|6$5r3+%?3w2BOE5RKzjCM~S&>~G|a(%db% zkQ&a@-2T=f8lf8*8eFN;uX z5SPOC(4&To2N6r}7tgoespKIs1_jZL<^Xazdcp`dwPzA1NH@WtHL1dsix+o9RK37O zR*kLTZ`#pyWs$P_J=OiKA^wNm$11L=$H4tA;aiq@f5d5It5HjF)7d|SnIR9=H0y}8 zSKXe-tO66&IM zFNoT7&Ynayy-29VfAUM}e7tKjocw{|Wisvb$6AdQv1mG9;NrgcsWkaZ7Mt><*=N{> z-QB5}au3DA@fy$4W67a@yqi6>0@5iNed4e`cg}jK4idmUGE-X+b6C^RnOp|pd*{gj z(!OAUqaRJsj{yN+^s#9ryhMsRVS6joAlF<|fx!#HAVOzS@E8wkHhlVSFr{d-BMxN^ zwjH2VoWNvH5kh`L_Vg7+(4c7*|F%QR>hVyHA%vIg_sgwQkb&rk6i*8JMvD?{*8woIntTkyhx^rT8o=sCx|gN{fgt8%jvMs&oxY-~ zLtFJ}30+W8c;k~z)uw;T#h60?!NfDHJ+*})$Z(F?BLye#PpR%#qO|(@&Z*1CeaXI% zcmy&eK}>g2?`2XoZ%xWL9gq|Hj4mD=Uou5J6#C743RrCK=hwDnF|7IA_OcelB2@7T zKBSX0GH@yDTtX~5h;?N48c>J{D4uRtFD$f^rW#}n2St?` z;Zvx>+(7bg>**JR$wzMRRPc>e0pY@BA041&3QE`>*MV9Q2CB0E;J;;a`zc1;S!%|J zL(0o#v_~TRH?%YvU!Z+WY!{8X5w<)GyDUUN6l{%ziNo`$FC81K;@L*S{@@z&l;^c~ zpo)I#qp2|$rF~%uXOg(+89A)M*P6!-*8p?g?XYd3S0L`V_aTTwesmK z94P}T_v$)24aiwmnDeUr4LpH^)~AaeN=6CC%Y6fM&cG)lgs8{)u|VTSenyA4T>`lA z7?OUf5Atn@N1>Uhd*{Y#s%Bl++Gcr5?uK%QxVY&a+m=In0O3&6waESm?K2iaJN*Vf zu&ffw-}mC?+C28>yIjikAKVbdZO{J}dlad3+8Ke6wPQw;gY!QXD$5o%npsSI{n*E% zOc}S3dB0C6*aUV4msQe3NZ`)WX;J=ZTqLZY zHx8(txaz*eflChkT?rN$=9IdZ-4@(|7zT35vg&2tOZakMU36#|H$Gm>`*WhbH%2;} zHR+2N+ya`$ZP1BKymK|Uu*?t?^vvVU=iG+1csZsgm_|-A)`+5wPcLU=U;hYeoCcs_ zt){FdOC1{sTh@GSfoxmIb{7n29_#?!x!}xJpjLHbOhn4$n`6FV6G53DP`(|E{RuhO=kw~T!YUPKnArG1d5AZ;}! 
z zJV%^mXI+nrbsZCO zlY%Tw&fxZhowbA$F%~QV7kdQOu;{b0NwL4oCtWuZq|IF}7Y%h$vXbI}zXjtJvr(DU zibsX=rGwua86m|8cLLmz;M(cjak(K*11dkH0Dsp#dE%0dYycsJD6%Q4^D;ASxdgST zyYzE<%|t4wxTkpaD>MuMM_!%T1Y3VIo{<0Gn5Fql&LS2Yy}szw5sfW`M_G{&t>UjE z;fP3EZaUx4bcR0(mNv9Fk1S`{=y7MqCW1!!Ah=|L^K5W`^XqADB>a09aKC;oC>c zA12BbM0-CnioDP88|VGvtA`fbuXT`I)M#Sj%G|umn4W=j%DWwqTqW8L5Cc<|>U0{& z*GdQHW6+$+)~ezDyqy7Q)sj&m+Lo@CxFa7U|I=nAf1*J-wv)6jy$BO6`NfMrs7xsD zbTM;@sv=JDf-&pTnM=z-foO_2Ye)3e)NiO(9QQ~b7v@{F?fW6;K0D@zJ*roKW;r0u zDm%$CF)g0?DS?QZF1(bjyAW05JM-mSYej0If1((X;Yrp{SO@9}yyPF>)All-{!MVO zu(ToFcEE3v%PMPPen#8#VEIlk#rvr%Z+S7Ww}JS8s{>_A6^s<2eK1W^$!3PX!l8$P z>4R}1v%h(@;0?IyDZRL+5m2;=A+|a~^q1Z!g^UPkr^P@!av12^w~}r_6J;R5tyIg2 zn{5-=l1tX6UfpEk7k4#GY!g>$rsxf--P}vSjIwyEmhCb*dzOpk@*Y+n@klV|fluPVx;JI(5oKv%StN zPJa01%YJvqYN+$O08clG;u2XAHay}k%6tA}=0cEC)ek_>C&(AC28fVR~zqe-%Cr^=)CjzXCIU_syV)%LmybaN#|&QX#93 z4ySMiLan#c2Av+ayEH^^PZ0og*FfUav0aOx~Kc$RM`jx{3xOV*j zl@u^V`CbLkCava!r`+K3n^6!EhrQwczWcD>?R&SD5Gb9KZe@18!DtL(6KvN}OMq9q z57-wkD4-6XD+BfNIbQr1znijCk*@JsizD|3`(|?O{|G<|(sb~ybs<4wDyspa0kM?jp3{2sG6=`w-NyCw-5B8~f0!?VqX77|`pT_y zE6rrM^+-d^*3!@GQSu3K@q``ItkKZ>12mSHD5S{F6&U-_J}4SI7{Q}C+xbV&j%+{`16Bcb)PR_BT@3C+K^Oy$|RdT*T+p|67Cz<=P3^W zSFl!Y#dh{8(j&HXBJMQqI0Hn1>?N4YLB_p9kidy%6CuPEKe^mpM5W3@ujWdOXtS-z zKGTim0^Rqm-1p}`4KXX8jY@j3B=gf$q#;iOzLJ@UnMxZHwh<3JQv(y5Vu}pZ_XS;? z^WTsOHR3h69^}mVMVtAEvk3Gl;~e8IgISBcn)xmR>65Z&<_`*B3SnB`%#Y{bi0{+C zHDi5_G-n1U<@cOn1! 
z*K;X~UB{kSrenM}tmievVz5JaB_7^B>#tE82Uhan($?J1n@2IE$3bD9V--lWX|a-EN#Sq(zL5lNC_$>Y-7b^?k3zatz<5cy9C0Glf z4PxpGwGWj?jt&*R*V7=W0O=sRq=p=~Q}a4~?YzjQhWcu(qKhx^n@-cxg@)?<`lWp_ zsCg&bET-KS>n*Jyo_kgomfG_`VHSvhJ-1`|lW}kbt{T_rh$rgMN1%f^3}&+)Y^8?Xb+;Uc8%9ciMtrnh|z(B7&tB(1G{bP%EoOO@BSs zzyLy$V} z+$;hmMCC{R60N_9fgVGX%l7kGP6Cje{w$gccU;ocJed<$T0Jo#p@J%;`6*Z-Qy0+` zD6R04<&S#0rgCAua*u&K&LJ%?cS}I{&7>Gjn(?=x$ORVc@Pp{E4Lz6Li2ghVvF^WaUaE* z7ho-y$i6>RpIhH% zmHZn}5j!IEbz16i04aK?oLc77A#bZOUCl7$v92-MD0$;OqU`?Cb{Op28lf^v&3`9O zTvvNCUOe$wVUT_q$Aj}KgMS zJA76UP{pl+nFnIXPyg~-q4FmM4SSxvO`r2m>q+^%pY;85rc;64eo=~<&SE{5h?6qi zg6EBz0D7}`kKiYLW44oy8#dzXMOe%`<{(nFUmp`E!l>vt7ImhyOHS{17W52RrAW7( zkM_+o1bfkm;wt^T#ERrz$C7h2|EeS{ciR1_{k><9ZJ3`FZxnAqOXhpqo%Ouy+seB< zov|V86H(FrjkJ8#8fl93->*nhd_~%8!s)*??v<0mYtS{IlIhQ-c%Ao8tl>){@rHSf zN&Na0l3WmTbhng&pLNmnW@P9Gr%!|UDlEy}p^|A=Fp0l| zX}Qt8R#H&Q0OHe;042OC%nWFBjJwLQ5)b@xb5ePt8NDq8ut{z^yoE?S&{YKDGbide zPq-7q5Nm;m=i_taZAJGm7L^+PC?`+BSmp)9dVqjMlS8(43%@lK+2gPvAPmanjGrXI z{kO64%%S0@Y|bfu9Op)Y+;kdba8{L9eGdDoLg_7=ild0Y!vLhmd;=zxhXEGg+SjZ?&$lo$Ghw`%T2hARUT zo!0o~$Wt1>H+AAny%uy!@%XUS_3ZSLb>5wm@Rr><|BftH&hSU4$^$DC^jZT}1`x!f z+K&+^?Ea^IUjyoRf(EQJK`hO_3bjr%CYra{`e%Fh7|1M#{Aoyp}l6A9=w2ba>9U%KGUIE zM>F0R@-KI*%^BA6bd#4i}q>$-*h@ycF1RoQW%qta%G` z9hu~;jOR8T(@aTyhH@rq`~m0DcbqKg8`++@^%RzlZZ+)X)2hGXGsInC>7|CEHW-Qs zH~A*y@8cM31?;o3g&`ky7bXDlk(D{_x$ILTs-NT5HBRhaNMm#!6gfVgmfuq)eO0tl zq$OV%rb$fUwk8lp)02y(kCw%ayW94~#`7Q(;IraNjnp9*%pSO&uRCAVS5xM;&pO-i zLIs>ygW$xGM~5WE*40vcVFb8h2Y5;gh`)0;6TuQcrC^KT2$9-Q*p^x&5jx(t+3e-L7v}kos?jy9TgX{>qAsKVLUE^RMFhiq! 
z!u?+OI$SgyKC8q*DTi>&5Pprl4d$DMXNWHj9vtG)9|JYYeimwOLDZ;_ncC;3E)y9M zKPD1IUPW@TM1>bm%@lp$3R9qx+$J|o0DPD1fJbweiWF_@{jl-P<_4QQ(aCI}0{L|V zn`Q`cVFpHqzud9AzdtjN(}}>v$wn;T2t|grN!GsLu1IV}`k_T9|2SMA{X{My6BdOi zI6aH)WxMOz3EIH(>G{y)5nAaOC>?-YFb-Fq?&bep$d8(5W@hTTZ%fa(M{VA|SQmC^bBlToYCq-~ znSH`N44Xnem8PVoMDj=DKy@hLPp22O#V>`D0ZLf?Umzf2w{B1Wo@hZ)4t`%o)Oo=L z0_o^GYnol5KzxmH>bm=3)F@hYSXvA`5+1H0{t=<6huz_?P31wNT2?SZ12L%88=oP` zwo!I6gS~wCqz7*l#nve7M11@O?n0aCyim zmk+w#FcE2(ucE0DL9aSB_nY?hPM!y}$h$gLrj+-t9FNobe#A|n&`T0BC45fl;YL9|^#V}_z8OFYux@Z5)zn1@TRiHc|*i)?^K>oDz#*v{H&x%sqzi=QlV83cPp>s0k` zuz@@o7Fn6@2k&0OZdTY&o*oGxa?A1htLUREJ3Bi=6fa?7uQJaKr7EwbvZklwn^j-8 zFc&M;Kf{YpdwB4J;OELhpkPnu{dLxgH1jnXoSAfxi~icybu;|w%h~a(GD#4DaLijh z)Ox_Jb$0suB1ZC1bCjptx7xhAq>DzSlBN^?8BydE4;MENCp>_M$Gx{R0lwSz5gf2=e#vT~uZ+Nh zn++%OJpDu`s~UzUMfL$gGl*P%uv2Ubdn};j6#p zkRM^_X1O52OXu;387~pXv?_~qe;O!dlyww`lBv8gWY01@0M;aa#o^vV3@rj3PrykQ zpL1W!Eh+gcsyrpv*LyTtuC_gHxf0&|F;YXwao#AgcW8U8dD+baP^O+kta2L5Ps?iA zSdlWv2iR|?!>;*ji&?wTO#pte`8`dUcK1RT+`qf-*02T>g044QZ=OX+RmR@HGexr% zL=Li5n&ZDj0un_Fz_{^eVvSMPOC(O6`pftCi$EkqC}DMMk$ms-fQ@<=m}YL|877ra=>OQ;w*^#G z0Y*_oD$4CQV&GPesoEtP=UIJ^o_y~dagDzk<)(DXWSuc;#H8jnqDkj9u~63Q9LjP2 zn|Q#2#X;SnxA1Ahc7cw^KKL@WYC~q8S5;_v&+^{9W0Mo^60>mm1E);Txq7;INx^x2y)}#|>)0JR6 zX!HaI<@`CAQ&IlKd3`{hw{(SZoffxAvNPXM{idq(03woURDE%1Qn?Z^TT~oZ0-4(1 zIZozP(I3i;^vd6W$Ml4F4n&gA&BRFlP5V(;)b-SH<~8|xJ!K|p>@INjF;8;|E$82L z!&c63V!qo)NiQf38|m(y)JyrBmq|WCe}U7!&i_C4PGQr%#~^4<%EKjdC@g(uXOK=%eLP-88|xrKxbXK^n59ztP#u(@_7Gs75c0>0;qYcSnAKQyY#als{eYxE*Sav zK!cG)EU`|*=gqMf&{6zA-JiXv2}~;E(_*A5cM#P(z>p{cKUramjO%^Hlz+%qsqjzA zj26e?6s#k>h|9te<4LGjB6R%uI*V0?O@;5NZ`Wd)ya%2KfZ}E%+9R$sr(z-Q5)q;S z1A7kJPC|P#J73a7dr2#SxC-(HS!`-J@bExLul3K6f`Ke|-)%je+H!}QSt~MNFhhm) zci5)D6$+DZV4Ab&45^0oC~>G9_s!&NAh@(=Zac0OM1b(#q43VeWBDU{tu(}0`prRO zbo*j@lWm_M;u5JYBR-4yd*Nethq>*dxqC!wA~zp9vRM2HeCMJXP}W1`#Q0h^0r|kA zjvt0H^`NE7#nxk^$X^P|zIPzT6?ilxK8gsXP{MJRdDF-qX+JJ z_M&Q6>C{o*szDb6x-8bRN%Bp~*Ce9XGRd-dlhM)php#VxbiID_^6`xsE0OlNgvVrL 
zWDE&plFj@-SFW3%gO@Y>OR~#U)Bkn>V3+8f>6`cw*6vUAy6PW@F=eH99345O1q7!n z__O7#p5F8x%?*)eBG{cAUd;K0WKz7W6F`uVkCV>_{A=1T;taRmP|n^j%LX4OiO3=Z%BEwlEYF^bD6V zPsfoGC!3N&@6OKXFf;jv3azrMr;t=#w8lGboDo@3Ny`vI<92Zs0m`BahRSR}a=gl; z+JICYM6C$dwFEPm|Mc)%?9)x205ziy*P>F}85@6*KaTNd2IB;TpjznCD|g~84t!_} zrZ~0D#eH!ES`ycovUr;sOT0U|nQ2aX8F!gjjA@U9t3GV=rO}e)Q6=$DZT|H5uGh`< z*ER=3Pkdq;%g?TJS|+4e-7YFo{95|^6H8UqFNQE5d*^30Xjx(0P|7_*GHunQa`+1_ zg#&F~(S#tfF`^duwW}9EvuC$Y@wwOEZf$<9vE3?G_eJhhGxrAzBI^O^4K9Z#pdSTl zlXrX4Q4Di16mY>2uj8TN*7O(G&uyT3m>Mfae4tVeZLqtLCt)pCuF){-vh%D-`Ovl4 zI_Wv$D;t?$W1sbKI`OP26Tl%?B;(7ZEPI-Vy46HkDjeNN#B%^!;XSH%SJWZd|J$Mg zE{fZ#Fjv1Wo>cA_jk_OX%g73f4OzVwrfVXxC&|#AFbDHDIyKUp;!zrl^3M3t1niNR zOeE8a7<>i9i43Dg%Bli;0JR+S$6TQ`@R8JU>x?NnhAj5|5Df6`|9nYm!(m%?>7L zF7$slGv@(*=TO<+ZeS)~`ePN)bg9FyK5iLl}hZKyP?Ky)q ztcET1+*5-Uj4M9LQS4W^f$YowxZIOA`TLkdG)-rSpDq%&RO|9J;_T}lrir{mqgNi9 zUppt&tkJB~j0aZF)+iNOCLmNll>%uS?xH)d|5OYcue=;M$lVS_&BUQYM8Oj*j1&(* z{)hUlot0^hQBQ{>+`ByfRzW8Op&@pcPylZdID^fE1$4LuSb~eEr|bMP`Cfn0R5xag z1z6*~JalIad?^{S(>Sl$@H(jcd`S259|Vj3<#fg|=z{a<$jS zdiSjjsQ)<}4mm%PpP^I$u>p-n*A0G15l4Ogpvy6!G;2nguV8Oqc{Ol=ikSZS!M`Zn z#Gz{pm5*`2EA`l0@TN4RE|mJ58?H;-U@eNBzE5Pq(`qg_04du_8o#W$TC()`ePYm$>HUw&9@pR{;WMfo6g)!?tY}2IY z@%#-n;k8R9b*@5Is~~!x&dg9bu*~Ss-PK<0Zb*fA`#1T@-uEDolH!upEBwm_c2Jav zsR!DlZP?DZa2EeS=+Oz~%aAe4}humt%dK*+4 zTNn*x;Q2xn`)pPqg8dE4>{t)*@l1^>k&iH({bWYx!){D5B1ox1)aE;0YXEEYdg1sR zDkpL6j}@Jw8U#HXqSo<9QhvY$e;p2dCERvF@dp&0>BHX@@A0cRC$E_Cch)a86F&kC zI7T^&Q} zUlu}+@&rfmU zsUS&OvMFYF7U1nTb3#>~wFZ*OUQJKno53`{DFzbpar;~iBAgc{Q%DK@A@z@l;&Y4- ze!0i-XuJBa{$qoh;)c+AkfI%%f<=PQb9(iH?vC6(ZTC}mP}Q5FGi4;n5YrTj9SJGD zYk|25+5?$5e&7}axm7ih5lVDrd93mA_TXP5Yc*@loQ}@fL!-8=ahp2kyLSkB8ID~Y zsL$kes@}Cdoc4AxJl13fBnMVL5gu~A2$h@kI?0cX_>=U)vRN(m3Z7{ zyu|&chthELJ;tW*m1DgT)`y)Li|(S`pC=g2kB74R8nHTZmj4ejL`bO={sz_nQ5}kZ ztXV?ok7nlE14h4sKZt~rMhK7Ofs?sqa4b>zr9O#7qXSVWR0}QA@z=pmmdN4=RXMnW zd8TNziCbsqNw~;n{G`iE|72H5)-T%-NxM{KoON(_0Ezu+FqD1#PXd3h@Fbr@!zcyZ zQTclUz>0|br+OjArWl;c!N{2bIPtaFjWes%7T~~-4AxBb+WBI470CpYGJUg|zv2pI 
ztaTQ7TS9eb!(e$>5ink?EOYV;}NDxYqTOFp7CO_NYAzj#oGO$ozOo|Oc^*Onw zn5U9r^80F5?niGXuYbu6ZFM9pt1hlQ&hiaSro30ebOCq$jM)AXOdw7a%0+}N%ULGJ zk-}_eYWtS^3r7We6;4A1-u@I$gq8{^Sbj>umq-vb=vN8yUM#Lh|0BO&F@oz~-1+rS zWVwfKU-hM3;l1RmN6 z^<$+n6gK;3!7>)H`rPdE%{5RI@Oq;0Jd(GX(*%aW;5=N|u$jHtc@*8b0=9|QliQ2T ziLh4PqNTQ~q!KQ`n(V>j{(^N^Da<-uVI2ij1E&4pBTn5Di+~u;P;_THE4<^)pS1wv zyOs&p2HK|JRohlrt0TBZBTzZ6biQ8l?k>qoeRUA$kSJItLNihh@L&cNPMNE0E^G0K zl;Xsn<(^9gQp7EtwA{=gs=iw?n4fDY;#I4dxP@xKHp+7>Eo8dNd`3cc?ya4e=DATEDc1v>R&mlkvUBHCCXeZ4x<2dKu*7IAPUr8-7l`_~$&5cbkynJJ*N?b)lO54`NQ^Md9)(V43(1XLU)Xw0`RmltC_*>{XUdiR{^?YOk z{#&&77%LEL@+zPwiofk3ojxZc_(!)Xz8X3Pt|A?9p&+==jpOH={zU8U&~g|0agB%f zrG)t%8MxQn-J)5Mv7-b$>I>udRj)@A(zMZrGLrpzhp|`XQqLJUCcjM6YCet4e8?Z- zzt9i%1Jd%Gad#_!Z>Q_y6fP~)?kb{R@IO_U8ryiv87?Z4!IZ@&%x{1SBgmoJ3-%L~4=A?H z$(rMSlvr$Cys=M=%v!k=04M66R);a>&}3FCB)PFYLgP*|cn&|qMkd}UB_|vO+}g?Q znxLZCYWRo3<9b54=lyxp-lqXn$Z^#c}^Bwpx%a@I}gf zHoVwFP~K(-nLQgpke?o8M@sH%M>+;_&fFI8z2BS!I1xHO6sn4DYH0;adP6f#fbe{K zqpC^u!M_E8MVsH+-P*}^J(R-J1X3<1P)+eHiWyI_Ek3-d`Lgzl`WdKt&EE_>y$MI4 zz`Q0(!Ue$9pP{u_c?rTlhv2CWR)V_xdHx*{s% zTQ+58=0o+}_CqV1Pat&rq09!>2gI)(X__?z z(GI#|r(bqg07y_vc;P_Y-~*sPH+I4uJ7>bdIAJnCcA+C!lgnqJ+& zoT`J0{xseP%RGeRTfzOHw@UUBxHZv*IE%cSipA*!8X=V8sF6Y`stvIo-CVuWrE0nM z<-8%iePGEN(eqU zwAhL3@=o{Uz|GJvSHpoTsSK{Ls3wusyEsTBGkP~ZIc+n`tL6|;3FF4{%o5lk_53uf zRQ>59<$jT=UB@}z_`o}_-4McYd~%X}2HjRH&G{6Wrk=z?B|DM#+XRMqify)CwLoy0v>KAIF{isx{yeN`s= zJ`aZ#*^HRL&gUvGfV=ROTSdFafzI9Y&5Q9o9NjA}V6dh7ovpu@x;@sZtaCd(mYuz= zf?_O6^ryvde5&NThAluG5q$dwOPGHh`x%XxTr!6K z`@)yv;kAZ`)JN(u6u&V}3l~W-u$b;?7LU)9L54-UC*Z?XK>=Bk%C4(hmth6VCIKKg+;X;6%`XukMb)ujCu$d>Rj{h zkK9q5Wp%D)} zHHRQh@-!~Y^wgmM){jts#2z3JVv3uT$-_>u#YzO6Zx`yy0q4O=b5T!LWd`*rED1WA4peur|M{|N($sy0fYz1*JZ7`R zMBNs3gET+cc~~O>(IM9JR#j}5IY0|~4U>tuPzMM~2!~)kLCI+3gDxlfww61PvNia} zt^$WC5V3wJtlG(o;(*7%WX4Flb^my+i`s;SY=wIG;&id|Jq3dkb`y$i%0g=138Rkx z<{IFh3SXcQ`c`LSs?|DeeP48-e@6G7SP=FaB}4e|tCKe8Bm`u4O!zqZcXNPxL~l#~ zh*?>H1ZmNMyn593f>{1ss= z%j7Y0zn>kvSBu{HVGmbzi;BYFatQy3dHWpsAl?YT+H8b``eL6OTXy_bQ6x5H3$gZ^ 
zP*-*9!cuu;o@K@UaZaJdO{aBwy&#iLis^4)g$af*}|5_d%00?fg0dU0orHAd=>3ru*PWy= z^ZbLTH+SRc-LQ`$`Wn`{%}-vDRm%{m07dR@i@{NMHcz-}xmFEpQm< zF4>Fls8)e`g!aOe02d0CvP9Io;_h|ct_nLgdbUe|gpc0p~*BWPgV#0a4s~+FGhsCq31&GXpCX0$E$;jJ?C# zu<tHAWQI$pfI9-%0?NKSZ*~f zcs}=Oyd2Rv1MgA--m*qzvHN(Q#4lmo>)CI!=q~Eg)$%hWuRiAxSTU?T&*5qWwHD44 zCJB~qbi+8J2O2Hb1w6QdNg3@4(aJ}A$=#1l zJ&EkDf&S5h1M=4@ai>P-JP*RXJ2H(JiO~P!cicOhzPb%EW{UY-9jZV*2(_{7FI|CIqwoOBe zuYcUYYw{4p)Q>$bCQq93AqXp|E7S#O%KoS3lR}-~n}VqtJUWnW z-fp;f8-NP%t&jQJ9)K0iqPegK0HEV}Av!eAG~Yq6lb*3j=)&*K7@s?LKjL>v0i1fE zOW@9X0VOWb2gpaVnqe~tir24PD_0h-azBS7<~2<1?gLJ(aRT9FdF|SbvaqypHuxie zeD?r+R?)0cKvn2H0wA*=?HihR6=Co?qk`ui+6?0gMCaaO#xXtp|5Ee#HcIPe&L$W{PB zAEF_60HB3AA0V+nVSp*tle7w;xGwCo8f5ywsx{U2Y`=cyq& ziqNk{-q68g@MMT}KqmS_9BBfn2fTlIx}ZMZrU81#P&ab!IR+EVvj_JdGKKMLr&<^L zi3d;!5`OLNua}?x?B~&j3P4J`$M~TOFhp6`j%9^6m`sO>|C`;_$g7|KJ(q>PbZ(Zm(@KrcaXXN7=&d;cBiaqAdv);)|_km{FpA~PT z&hy#~gX7`-MS0FDy2LjxbKfrUu$+NS}WZYyl@)#y157toG{t(CBXh-a5rUKwKKQ>)ndu^$_@%nX6)xysd z;FeCNr>18BuujSprxglr2+#*;Ld}Tni@l1rh|{N#*Z9ssm~+3bC!|&3JEleCKi1C= zdD}hCJy1YEFeZYj0=mK;K==DCkFJ54A)N@f8&RKfL~F zW;l+6aoZGd+rvCbLD*p1_RIY{Co$saJU|};xWfIlB~JrGScNyAu6|mUAEM#a-!F4G zX1~U%Y1hc(NsVoIg!Y9dQ1V30vV8?#1aWyokH|(}c<^P7Xd>wkPxjkBN{8uZ0ba

zM3CqpFUJrkL@~)sz+Nh{g&<^p~}u8R30F z%nGoz_1nI5C!T(&g*C<>l&YW)KdH~5v^Rn+`uiEeeopf@{SXbsRx7aHcmP_)Bx|Wo1lC$TKq2%|hwm60_{_U;cpXzwP7(#^ za-Z1cUCF}{oH?!?&HU6la;{2`!8sapMBC<>Er-45@gLkq0O>UMa{@U31+O&c^$H?W zM)`Q4!3=wlr*irpxxD<4J!P^C;n2O$9|6bdhxEL5`&RO9@P0*Q0ecPsZUU%5lk>0o z$%BZduNkM7Sm<+g$fp2-f>+KbAs_h(eQHUn;W$KH`sEUkv$%759C{AmiM*EY?iaW4 zUABkG@(CZ!9X0#h!vymY@0i!oV4J^kC0bnifb#@I>yid(&ivtCx(yjW#?n&>$4QsA z`NjG2wYT0ZyJ$iRs(s_lTg;P*a_7!R<<`~t@|)j!3o~f6S`W}_#or*hj_fyf-F?_%`Os60iSi8YpUhs%iBfai`I2f+0ToUR>OJlsx5b( zCSOX;tDy3E!52*Xryr81&7J4XtIwmUOYb<)ap1+|fM#&|z!`2dH6kcQXcM>sywREY z9%gsD2&!t_Aeb)VuW4a^oiZy3XH{UHLWAJ7*KU>Xe&-wIYq#eyTUkdqLK}kpb=*_H zFzM55NC)i-vTAK8+A8R3Qsj=P(5i4A?JUZ=PsRKQji;C<0jvoAk0SXF1FA@?d!75E zLbqU-Z*hWH3Y7u&)s|9?i_ZwoBcmpYrv&aHP!6T;Dk$o( zT2n{cv~GRy?Cl-Id}#s`F8A6|{M8I|Lf9<+ZLZujS-rLVg|}_dI=ehDdv~Qd(Y+yQ$Y?x#a)m zD|9BqS$xeONdQ2s)@%O9q3w4-C25uLHneTnXcK*w`b0i5@_hdH*XMe&pJ`Sqv1c_j z_tI6<>KLPygNfX`@4cTt%w4&94S-@V>u?!I|ALJZlhtg{V$M~^({*)a973eCe2 zfE)p4HOAE1RI5nsEM`{fbq<~AUk?EE1hczQw3XT*0F-G-`^SJyn&;Uk0ZtvZ4>7-{ z`NQLDw{GWp4v-4C+@;KvsShh-LodIbX1Q?l|yk;y}-rjh7|(WBD}jDlZu9cI`{jv-a6~&x2Q0*wJaK zYV}q7${&4$S3xp?p7Y_Nx->iV;|q*5T6PdnvJKm7Lw(U{pPsy0&hpZ9Q5dYpbiuU2 zwv5x{rvJy8ROlQ3!>noZ_axD^C+N>#jf*dB!JoT0Mc5mOgfd%jD@~73FvfOxvjxunTP# zpTX+bT%oz?Q*U@2zcTY>Ve(43dE+*x2+qW$>pl*dmpR^E2j$afpDkTkC|`f;Eq-g} zNM9}wIVEjR>9GldR;^HNAJb3T%TGVgI&_wAWJ^Bt>UT@itP1SG4QOhAbLeUAeU`5o zOYQ|C@&Zh|Je}oN(<$H~dB_aUtf6Y=*zWDuZ%1?PZ+`HDXtxMH3K*t6iU>djFk#pP zPXwj>jhw@*O7P0coJ;o0cJpjgE!aMhY+V97P#D0b@-9RT6iJ&B>PJEh&9%~zh_o;L=9m8& zGx7=MoT^eNzHXl0_ACdWrlBzzPiu&e&U?jqwSLc4RvkAGNx#A%&}^nDJ0R7do|F<)O_ z1&~`Q^HVtAqit)*e4#GK&9$IrR<5a^HA;KVFZY^pHT7oIa2?{rNDo>-GmlMpX@06a zdi0S`b_6?}s z8}Cb>eQW8let9rhm-(JIudK5R9S1rNTuu&*j-WxLAcGm4@VjDD%;Js^;uPljcq(9T zd^CWSKAgtJF;%&FtNi8hPoh<&oTl&?EgXbv|H9ux=<{?dtyA=+RqN)}3gNn%tTeA$ z>}TG^oF}hd2)WH~UIh}WuQer;1 zgT|FUzw~=`g!AYjh|}bVGt{zgD%c8Ov>|3Z_CtF3$q+1?1e8(h#Xc*P-n#KdS)AgO zK~D2peY92X-(BHUGEZ=WtAGu+&=z^)_1o#$dmny`Cau3eYCk+~rv{s9jJB7^U6D)ZO3%mI6_dbzR4<}sixWQ{?6}m 
z!r+Gy`t@67;|kP*XybqjI+L~@HTg`R;rDz7?U%h>wDid9KQSR&w4Jo6AkJLzG1HBveK6|!Pv;jVdlnJi_~FYB=`_d8zu(qOv&0m^H`4`ztt-8Jwl-R z2w-b;2tS*|ZDYds!yo-^c@6MplsV#*c@KidF-{vC<)prVuk=fQYnlQJTFEyRl{CXU!YuCaaGDbu zS7PovgQ@cOzW>`XLHyf)`GfeBb6jiB1zg^7%CLNPut{(mLW8RU(($Gjh0cv zOpPsi%{Sb+qXLrUb0)?CB|ME%^oIAp**>9_bg;9R8uzsA)7>UYb8oar&VC3kAn^ zcDB)sTq~pS?HKU9YQ54k%GsYTbR2k|9B|2Z;a9_f>|XhLdi|f1{Pcn^Xv#%C*~=ptw3f-|PA;Ty!4iq8xBsu~P}LFd6M)BUTa@d7<< z5Fi=^umpI{w|U;5m1PwSY^MdHeCe@|&*C87I_&?T#+7kxuukb44Enb^4_=vW9|q~M z4=so_SpIqPYOvn!zT-g0fy>8%u@MfmP#VhVc?g)?^PPX*Mf(BGt0_KX2yO0#6F6fp zUmGv4;mc{azZ`*2C&dC-`h7Y*)+~rR5vct6QUgl}D<|0bX4|}ZWS?bwSLjv?s4Bj+ ziBv&a1s@*ARUpSoE!Tqmrc;Li77p1DaMX@>XzIiPG=P9k+xvi5M*v#!r_}hqQVYhP z7e!fdASA_r2;9{xV4J?Pl(F^yCAbBZ?6qUj+`WGNMwy>oEGPH_yL)#zLgK;RF#;@_ zS=Z*u&6_vMHT>QvJl=os00DBV?)6Ba_O}h`6VH{FwB9DsUgeLPHU1tX6LFRHT1Ug3 zlnHzJ4e@uR{RqMc9LV>SsM2rpx4(4}+R0P>0#(f+wW4kzpkKd!t^DLCKL#|Z{;vc- z6w1}~@IJoMpiM0ts}{`gl)S#Ck!VlVQyKxpGA9Hsm~O<|w%R$9Xg2#fL__YBNpQqm z5;kbZni8SazB`lrpJ9<KYcUywC*$maQ&Y3}tD426k%K8>LO9L%RIO8I z`T>AibesUN`e=Br;{xI*IF&y<*e&a31AYFlFwq^CyK#9O&<` zf1$-WGoA52AuZzuJ@QBJ*FHF5Ce(JL4|{vO3j<(HPtD*146uqlFo$7~$36)j_B7A- z8IH((TJ1u|fiKMg=a2KUokvWqF8qo)(DHn+nKsU`I3F&GJD=yWJ-2=8rrV1u*h`2to02B!KJ6sG6+0b z=6qd)^;~o>o#*Qq%qJk`OGg{bD@gUUAb8dSYUVlLm*>r^r_s^HcO2+Aa5*?|dQu%f ztM#NdmYOg<9Q=-Js)TmOC|XA|2t#T@DCD^ZO#tAGUz;eu_xFAaA@m3R;m*`3+;w)fCK^+sD6XRZwjAI0Y)xd`=u?6`~*Szr_2GbahPj5fXj& zgfz$}Ac)iM@Muln!pv!7YlC*tcmVML;)*b?Ia*s!ySpk(=8p%dND!f+MT*EYVgHe0 zC%N9dajV?8eiKcziL$k}T^`+A#iWS?Hgt|mxPJX=xq1600yO?;?%l(Dhx8rN@bLuh z!*7rDJr?5aw*qAq15!o4TE5WX$ADD`wgkLsXjD5T&&1h>)`Twiq?oT3%lDIY^AvtE zRyhHZyz;^_Y8lS~P<-&=N9E2ZcjL4@Et6O9Ff=!H3xNox2tMhf$=@!u?wnU@weTxi z!7=ZN6AbaBMJftVVGsPLV6R5nC_;Vs!o72#6777b^G8ratrL0aPh=g>CY$ksGf}jL zGt|}oqhqzI$E>J@{I;S5$D_6rV9z-F={VF3&9}9hcJ|f8*Nf40TrHkVL@=r0Ume>z zF4pPC-)F%lCK$TBE`LwL^M@D^72~;iNgnVOj zJAZh*${wLsSp253r?f%Hq-C=l5wL1Qz$+33Sv1#k+@npY*`E~9#x^vptv&+S!c>pB z(&vwn9@HR^ql5fM0BwSMOMvT>%)wi?UuP}8Ue?#v0aG!Tbg-JCXf8BwTfwe7ww?F;!Kg+zl 
z`w%b<)6Z%C5TL&+xI{nq0OC09aTtGCn=56%-ve~tO}~1h0+k#jp&29fx8{TbSOJ`& zZJ7BniiXxc=9_N1(=(HCB(Ehy-*WIvtudX=gFXg~&U}_u0lY4B9QX~UF`@Qs$5Uuj3^`j-=i(X}gjS*U|3`Et0SL8c9 ze?|IwnY#JX_`o_3?)YNR`f&Ox19HAz>pXi|d`iW>ptS&&S2v`#OnYsM))&q8>G@ry z^CC^E!4}Txy`V=V4CAi+>&76h?UQo8aWa-~ayi%a4EhVfo2>cX&Ajwc_KunI?jMqNM^` zgli(y`ImqG`Oa5gxGK0ma8$gC5V*C2?;C_dK@SB?9l}RAcE1lD3d)s|m^DF%0=I4Y`=r*`X!EiueO5G& zGxpoB-3C~gDEIC@DC?^@M@9o=2cT>mf%f+8TjljPZsXtQuzdWo=u4qm`mVxLdK$ZPVABpEBJ78C?^>@lNGE~s+jBl;LZC~J}fF5Zr&r%j6OkAMDxb+am=9nLkW@h?%gk+-n~~=)P}Yk{1RxfPJJjHj$)pP04{qx5l!G@ zvpIcj$y4eO`YjkuBDC22S3v?XPyABipd4yra=1T&#vJv?kF*8Pq>pJ+f3IzXb~Bue zi*w-^VDiY{aOxNg3MZynlO^Ou06o!&3xf2e6Dc7V9fz7%TF#fMygeUVD>l~j^dV(a zTj~fOpt~Jo^R)TQ;JiCzK6yBTX1R1cVBhTY0`tc?U=;nLejS@Trl7W|rSgDE@?R>> zYUaKM%gd3lGU={W&EwD>vL5bpsDu3CBiQJBf?7vvt_$kS;)m+Zx85oT(D49&Z#qi9 zy0AzC!|_4&(Z?T`g{7-G-Bb{)1*RGb=!0v#d6J+eI6h;TAKL*zSlKThTVA?m?63aq zq@&MV6&$4lS^uk;5B7hEMm>Ca{Y?!#yA$ z#z0!s08_KQheoCQgg&^`HWN@(11(}UH0p;+pzR0^LVZh7A(~+T_B0aZD#|)`4=}OR z-`VOGAQLoAvS0M*?-cY7pY+P~p`e|bq||fdN!s-U?0w7v^)c7SRMNWTiyy&o*Q>nP z_b>?nv3*QVJsEKf6GF#%7Z6XA#ei}2rT!5Cp2{}*?pT|z3mpf(3qP z;&;ydjZIAKF;B;Lq~QP`asFm=_Hd*&%x>6IkbF8zx6@XQqJwfeeC;V+wvNisXxlZC{kzRA1DdM>r44*v_Dd;6-hmr^PyaNiz(Z6Z&jkHyt;v^Wl8k zZ5`+HUG#juos06EufO$kby41n;?9>p|NK>Xby1sLT*rZq1DA&b^Eh5dRE>$?)Hn`+ z0jfq3*v0^|d^_q5p`it;aEwr-7QnC$+mF$F!c6ZP$F~2$-~XNR=>tw9WP4ngS5r#? 
zP7NJ25Rj5u^PFnxqTZBm8tsc#lg_h$?TjFWfLehT%2A3S^*;a36JvD5dGV4=dj{)$Ep_#J0RbzH3mmw#l@Pm%cq zF9fF`jQq_#?W%pX&x`&e9NK>j*wNq8&lA3|;IZQ%sG#KDzn&iG;9$mgeCDt|3!{}kSnrC#z~`N*wPj!$lq;r?>)Q7v&0(!h;BZXm?Ub$jY1$3l%{>=s zt?SKpG7?-ByDV47((=|Bz($=p*3>ZTBp@U40)f7F)HR1`9AmDAo!UW@XkO{tX@wIA zXVFN~#8Ic_njLQ7AN0m;e7`wQ9Yc8`6KG6Y3=zJx#RG`EsyZss9no5uhy zLk*B_9m=wv>gaqugjc%=ini-svbwsKxhKG7`}RJ7I^~pA?Xv)7`pLKk;Bk(UE~pby znTrLjCV3;!WEa#xQ zc*Yy|2c~4*A*j)3s>s^?jH-MOsJrw9na-wpt;@yT(yE`pBLJ%&2QXxT+ZO4=w9Bqp zD|1}|UZG8ry@dq?J^UV}ENgQZV2w3d>8#7jg2%WH7-COK0>77D?T6)j=im&OuN-VM z`>1`qpthu65dShowf%;*?GV09-MiF%>D3?pOWlY3Mah?`!SK8QEMKe!gD+$knJW(F zIe$M$TVCXNSsVm=bYObkEYWn$BnHFt>Qh_iAPvTy?`xayY1hGiT~z+7^6F{rcJUnt zIu2Y84qRKf!HW;T4g#^CIBw^81VK^3$bW~22xR~g`kPT;(}zqP0pqCoTinn)@W@Ia_X%`<>`nWm- zu+q;}v=xZc^rp{o^nUai=t*NE0BHe&s9+ouAAQZJfyJLz8%Qe+#pvrLw%T5^bgJT|hp; zKHAU$w>8?Jr>pt;tl-=R6@`G7Uo_8e;I1HUd0xBjB)y z2GItN><{|~WuFF508MynhXcO&Lmo^SLo4l&-_HkVcj=^lc4m<_(DX|ikjOP475UQ- zXg(EC^dH1KY^hKBd5+eQf(8Eq4#{}?Nn$jNpx^q!cgrM|%A1gRn7XXb^qNCPdRLzx zb54*15}RC=pS3pVBu$zP+)6n&epmjO(zc*L<7f)@+NV>15SYw=`Knpk5#Z=FJcN(L z3zP^TPEMkI1px|KnGfl&E1BZl*Erj*G-Sr4Uy>4{NF>%dOgqjP+Zf`sQE78sWF9bn z0*-=`Y8hG2^8Nc6yUFP(PMVyJY2g+ch|)O4#AC|@4r80wH7wtyM<@8Sb zW}jVW+=oDDG*~uVBi%gS_Zp3+`dsHk%Xdvk9Zxmzh6J;@w;yUc3K|UqGzlo#mSyCZ zr#7l(S@}T&@&tAWhWq|z9f_bGO%sQYm^Z8m_Yn6k=lmFEf72(NSh}MSi^d}51gxky z8Xbp>rSEQs^e-nKQl~uFzz^3p?~;AYAQ$GUFH<#5w&1OYk`D970#1}*4~33~M}k>u zP-&hk5biTkO^uBJ06+jqL_t(!{XA-?#?beIWBWE znxF!b3ACZ<2$aU!QFmTi2AF)_ z(Yt=7Y{$QZ`V#D4Bn7nw+x9Qd&!T}f(Ae4B)0Lze|i5ejM>69pX!as!c=8vIiUo#wkV%-}L zk7>u#7|naa6<=Ue_ySWfJ?;a_pmB8@pa zr5+j*h>7%3fx4cED4P+Ol{g24#+-JT{a*gcgDT`#O&!NZ{y~)g4KWl+Kur%3FW{^wQFve~}g#7zLF$L{L8! 
zPy{sS!6(50HKzhBs!=Bi8B6-^baC$Z-^z)nv7{T(f>+T16CrAXdF5Lh;;V}`!e=7e zs=Nm9$+hM-k9|C%68XY++HP8{?Gqnbs6e>S49>|TgeB_?aB4KX9ASLzmi>$d4?vN9 z@DF68n2zaV=@2bHVTVK3ff_$LH$UR9`)X}@YN*YHR{E5d)4u(yR2e5omO#5|S_)9g zJ7!Nx4GH!HxDv#y{FdhyQ)F!I(w)V;__U_4GdHLuXuYjwx;xmjj z=pNwQl6V0>O`wM{p&L9r!G1e`t(jEjUgee=U9L0N?t$~1XX%{<1lmPd#iL$K^hQy3 z(l6Ge>jIvhDhLC1>1=+VHWgeMJZKOO$H6)$knW-t#a~#<&c;z$#{_yD#NTn9~LYhLSr&NnC=*^X#lSiv??r7ZSZCCyv9|VW}7qkMtGjV7by9rK+^{4`XZ)>ZT<*5?E}|e1+NCG z?`}H|bR4*395_^43Bgj|MGB`%!V%-$vw8Mk;^~(TrVYqec}Y^+Nhi+wnz?%IN{*5b z&=3L8VhESi<36OIs7X}r6}qTavpVAj_qa4*8|F#divK!k2EaPp#Yyx=*+1$d(BX#$ zp z>C{|c%}>Arcu19JJhi{(ZTX-e9!)2&W-~+Xa+C2$Uj%gcSI0^X4}gsP&Qc<8tE5t| zT3mud3i0OF3{*Z?uVZ2V16T>()W*_K`}-{Dk}?@eua2YhDP&3&b4hTB z6z3iP>`40O6Zt|w zIqEbJAZ_CXU@Evq{oD(%2spZ4baFqhPZwG}QVWuC6tL2a&Fiq*gqbG|4PL?{sj6$t zrj7Qww(T5X0>YzeR$4(T$At+~`7ZDS9X;aoYsL#znFpM5ZXPmDmLF$*tPaIfCc)513Qwo0_RR zLpzLxAXcNbn!3|i>Tk8@EN^>`r+b0sfct=s*6F^rf@AprY-ml*;BJ1qb_@ z4-Nr1P!3i4u6?vUyuibFAUELU=5aU@4pC8WS{a^GMsz1FGy|q z=W$h|YtS6hHoi*VepS4B9)0cV=s3`E;P0LTKl}MF8Z8fm9rl;{0-%tqSz+}@(*jIm zfDTpA!9S?jaB!dXt?Lg)?ng-Q z2`r7kgW1`+GKbS@PhUF)_}bYy1jynPu$V02I2jEPP3i*1*yjeMs4NgDsC{flTWz-| zK`iKL`4yhEj{5TD0X5WLr(Fwk>%ZzyzKL#-fbn4E0K- z%`2dJZL`u=fwh{ij$^dsQV6=>jdR1k*8EcH93YOiKn9vx8FOFaNfiLmch(*N)}C4= zr`u(`IYBscPFi`{cYesv&}&I6&Q%5oP^f_p9XU!HD5^!f>z@yFlpt`z{kQ~<)biyVCMEa>591b|kMHee9t?3cikRE89KS+7p%hUZN_DrjbdB60`&CQj&pWe-> zg*t(kPV2W$`zT*CH@F%cXUiG#U2pUJ%U^Z8-G{8zdepMwCUah}&$+08>N{{iq)_Y%g@g3#?r`AX-~jN8 z^WKAlSe$8LCU|woPJ|1*o9x5BD-lWfFVI*kDB6TE;mL9Z4-3F7CVcmHJD4hzI9moIfUZ(L!QQ~d2MEO$>>lDvb}*DT}U0Fdg4_oDALEwh%8sqckNv`^v<`LHGW9ce;Q4r+?`F_ka95yT(4oBC@ReCi3odiDxgENN4!? 
zHA|okInQs>D;Ofb?BEXpY^HF+I0RkpPUmXlg!7g1`{7k}jR?*Qi^S8+5A?64g`^JS zTar`X!50C_{Ba=RhY@1hjE{!@xAM-{<(mGS*01h4?*%W?~1SwR^o$2W}Y7c9L z6D$T?RclPFJSd`8i&-Vq41(QdrBS^SAl>jmfg%wv1=Gg-VAGl_E~zf zhsBuB1K>bI<#PXkd-7kpcnO93T6gv8``tCHAPx_9yU#zn+kNuM-R}8wb^$|4)d~Ey zYuCEJ`S>^8r8q}Rj&N0&=#8*8ypuot+d_p8GayGv|ONeWB89;6cS#dcl<9 z(()^fJUJ^iy>P%O)>qZO-t$NKaG$;raH21}=*n2t#kA6LC%8?PQeC-rA$;(t{DT}sw6W{ZaHtQ3G75U^pG>8vP<9fQ4-ttCn(H%ge>fSLj3d zN21RvW5HI(pVn7`M@7r_FL@+FdX|^Ax4hg`In>+!DGdRvBgR$jj7fSjDZ@TT55CZU zlM`JFz-som0By^2SG;|A+uhs6dT6=3dFv)iYahiLaS0I1AKI>6zYYNNw0nj_^_zDV zQtlD;Q`V_BrP*IvXxAgTpQ~3(*Uni2=}neC77#U?v4)i;eJsr>u<7`q#A>5zH-*GM zhxsb6*99^u@3J^m9_HJ`!fcbB!xjJ)wY-{{WupSx$@iCIhS@Cd5yk=Ut1J^$?3M~Y zmR~$Db`Y}u_5}a!6Z#y}_tnns7C_b}8yh_BKD&Dl&~Kre2i*0zarH8Q`AYn7UBLI* zAwd2i%Q&?nb-Cs-JELx)AaCy<^B%j*QqI*l)&KNQf9bXXhTSz%i?|Oy{Ghvg@5^px zVJ1G*>RrXVT9eu#3=;UZ5^xaz!b8R=?<&lxy9wt$fW@Uyxw} zU~FcZ9S;G<81&rB;YsgJK@CL zx;tT1VQIlhf!0M@Fug2f()_Xq=i)LEWdb3+7Pi^J#b5uvYj9~yqvr(W`&d1FS02pw zGQi1YjV^EeF5$}Fl>&rdYRfANHk3QsqaumSAVLkk|FpKrvG5w^h;p}~c&>2x0(M(%# z7rBzGq*#e)y+pPc>>raajXWsM;(woLZz;T)R zm1DeFhG~iP{s-@O55IocJ$>@HyTh1S;9YS2#`SK4rG?8lIrb1dHn@|X^^|rCtzI@@ zV%xbTICoBc15Qy#8-rk9nVUmayi-m9l&vGZHRF-WC{qUK|IzUx$$l|6rX*D-D;0kLf2 zY2Inpdw1uEcODL+N$0!l;n$A=)mOUr@7(NeVV!k><(rm$2spKuwYm0#8Gb}yGY#VW zG2sS);uCgDbvHx(W<6Yc)ZMswt-AmScgS*V$Fj(2`cO(ODGZ;#ZV9BXzKICvsP@$# zKKwXTU&|+hcZzXw6sPRI^Q>FrqbizLhVRGzsq}nxnD*tN&+m6)zGoPtCI-Yh%8rIS zhaQ9c<=MYatVQ>Xja_~pibLK(0^eE!&a>Vb0L#hD_;NUm;X;q&@Dh%KU;@=XVYqr(+t+9Zj0X9**Y`fk6Vl90}ZG7r6G0^Ihnn0%n}op4+|e9Ti+BN2RRcc$@dk6z@^N zi04?Xp0JI*3bD$cb0{Z*kO>{FJfvsZlvxQPEYo?L=T7VhRX!GAC}}Ea^C)cx09xkv zkdXpju%3Z#Xr@FgR1Rv#KlyM+FD+ZN)IcZ9`*ZJfpOiWuht~0a_xT}mI0T5gHkqYR zkDsh{3jlNTIBR#ux7)XFbT@C_h}GHG4<2=o*io!4+u7eo`$P#<*%fq?5B(Xz9pIdb zA~MUpN-}jvsm>Ck)Y5sZ@;U>coOHrm>aPsOuNA*>0f(^25fD&r0$`!*9p$&fC05ca z;H%{e5n`*2Ll42kCm82yFP}>2u}H1C;M++DvPz zQ?jBTd-e%CG?3}+B&y1o-0ey6_})!rO1qc5bjCD>e)OqhdiXc;U{g) z*(&%Uy|g7~pCMGb>1nsHI8Pt$zxT=S(n5h==L3c|PSXD&GR~Jbww>UYOwhY%JK8tE 
zZ}k`DT6}2@DB$e4Bo7&~10MBpjL}o*^MHHDeS1S&%gaT4U&*h)^#P#s%=BD$3(!Ps zt0w?h?#QV>tn1gWb)Wq4lWr3b>jF;S1-k{o1YXmZQ=9roTebVXabAdP&-%0c*=JjS zN?Q4Yuwb87oW~$T>8Iu)ZR#A*$lJazSXM7H1;!=825M>O_{hd3&6_SIgm<625x7v3Ezo^w4?7z|9DW#3I)UH+Po`uT(1S4R#+~3oL0WPhiE3`4B+19b$p}_EF1h{ z=K=4fhd7ciUs*cP*Jsx1%5tn)_o210xQrpmQMZO~Eqz}Jd@e3yF^0_l_~|FzDj=Bs zKtDV>($6~s8EM%!gh2vtDuL?dZ<_i$q^qI#h8f=^@f)OJEKl;zb*Z2Adoc$wz7O3r z16{Kv(7AiP3CQp=p)k%9d^woF6XV|>81e=1Vj)R7e0+ZiDE5f_`;!_dd=CluJvf~& z!|y@lZOJNFbRLYcZZ6eR3G%sdfw*Zz7FIBW!(DA0Ue5T^%q{inSrnL;8 zeUmkl=>*5L6w8F?+{iHQ#|(mSI!9GDEeP&e2Y;oFQV9>%OJ1$Fa@7K@O?F#8&p)QD z#>*M52MG)kc>5&~zdO8J@_zB`UjPk|}O<5ieUuX(%*e1RWJs zmjG#rh3#tE@}hK#e;fj2?d|Yg!HvF{1OlVDvvN02yBf_rB zhy~Ir`^v-bF~4&>M`7mK$Z{w7i)!TB-v=xpPfwk*4xs@9`3|wD>Dhu}Vx5!hvf|+9gn^Qp>7_zP~6M6NOkx%{;a?U*;6e)N&dSX4ekq4vCf0yT; z=i*T5i(Zc?7Zvl5LN8m(mugiwgp*5QLJ~Tdc0`T=aoH0_o zDKq`d)qKe7W_?xt#Z3$PAK)yOKQj86vpWt({CO8SM))i3n&S!xC^0Y z`L+y?3GQP(2=L)q04n8BKg*w_p>5MBYu zS|%5}4C0SZdE399vEKr;MEm04?7|GXP(I6?_M!}^&SBlf%RBF%696*jP`-{v*0X<= z`TIpgdVDC4@%nu~szdHU0^d#oei!00#KHG&1i>P~nDJmG1!FwU2VV}E5S)MTl6!Xn z6bu_rFQ_Bf^Qs`&BM5X}#sWX%R#;j+dAuyN$&?cgL9R@GYR2=j|fqqNL;1mxUivRxhJJ#0?S{B=F0SfYuicbn@MqqfL99{bUl<0}q*XwK#7GZL&vR zLw%M)S+BuT?0hD;-_|B*%j%Zq82nw9lH`2}iaNTm2D z73mL{I4;avRw;&Aj)nqgniGM36wwvlkGHWJxpws`fDTF?OBJ6yc?fWYLuKCOS5_}~ zSFc6g|bm*$ss>hxR>+3-`xT2N;c-OzOA1) z|4?SR4@K)D9|AB~7cnQ${}hS(@~SLu5@#$I;Y(m6Kn#%;&ly^RVY?GM0Wa)3m+0$D zu&?HQVI?UcGiVLjB#%sFVRdnrDWzvo2dhDf7p(<71ZMGNztCTkoDeeG0Gw zV0B>$Us%^z%C;5jENSSNKVW$H53}b!dQN+pjXzY5?Z|RwWnuz9LXQgss&$k*s}>0* zOguNrw32#nU(~wKcB^qH?c7thO6Zds7cL-jiJrUp&9OOz_3)WAO-`jqL0V{}|0Ac` z4e_ks$k%nOIp+h`9t33H!;jSt%Zm0;l8?0VkU_dXZwy+mtjJDzCr!;joiRRt0pEUD zV9mh8v(26E!INhw%E#R*VEYYx^WDNybQue|S%E9&jr;h_vXA;Mbi4>KLpYA$rx^za z&C-uzhcHOsbrP^^W?Z~3(f4_l0$xtw1mJc&WSjv6PvZ~MFvU1L;oT#NdJA)AEKV|B z9HTAn5*Q@#OOgO)0lq6#m)e|-m`r|DGJS_6kmN)# z2t~-5m=m&>1>dAj?kbGEdraQmvy@k5_Q^N}PVUv$pel_(w2ma-6~^@gFpD$yE9?e#i(g4+ z*}3a!_w~caDD#30sFWMYOJ=YXYB8vqyre(%43hws|$9BSP1rQaWwf34&E{tR#e9e-)IUM}&IKlu{q 
zk}r8K|73zXfRb5qXj(&>)ecS}V)cSh0Fmg& z9LKvmfC&5aFMvEik)`EjvQF@W+-86-WqJw!QG!NV2o}Iq;HYh*{!{zQXun?v8FYTz~($a<268(&;I-ud|^EafVIMoc&j)p7Sy_oL-H&?1VnWn zGQwPYR;`EZC!|vC=IkubRl&Z#t+Eia2SimQQGr#{t5^b{1DpIfOu3O#|W_}Ak)^0hzkaIB`1R}Ze5YrOvtukhg@RL%-~WuySXM(Y@3x+BzT2G;ko1`MQN0abEY%V0 zct558=Uqd(yyG|@J5GsDImreg#^WjAvX)cQ%*Y`B3>zV^UOIl~e21`tKK|^c7{~P@ zkhkKUFFU@+_|awcQg=$8Bi@Z|mt2>wPYYi0o%W*#cxwGCbO^xb1m6OIuLrRSx8xglI`6 zjc*&AP&)BjMKSszgK}G8CmJoQ{ADzX(76J9QJcCUMpBpEwTWaqJBxBc;RLdPseNXYo>fay+^dKR6!n0OX(1;S&pF6r{S2lPP_aZm z!jpNp$#OB%de2$F;lKX#KgD;^r=R>OR#w*0;P-?;)*cqpI{+;YA3ntTWVZYL?|&EB z?x1L!#yYw)orl(MarS-cGN=?}VLcSIpudQ)_Bg9b^u+lQlEz>=ROxN zKE@L3X?LBmbOAt45K92-G0wz4T>2hY82qZW;zRLfzj>R)7B33g|uORPs&&# zayF;(wI2l-O?&ixB6(~d+J$;3pCj7QcSS4FbDxYW0ucg*S_E!02V2CdL)}-OQf=IN z7O3^?5zvc!gMGn1JOO~YvAMxK4r>z1TX6@+E0?=V3-58g)ZO5E{mDAAd(QjkIXeqt z@rkjW;M0PDGrr>HRF2`rH`}Bd{UN{|$G(6W;ygIo##a}+sZFmT1K@rby>xM@`v`#Y z7E3&Rm)chLxr1Y@L)g4v2!jOPNCGbo&^OBRtr9UFXI57jFy`We|2e^+EtKJ-2EZG+&H zhZ9LBNtrnC0&0`DOsZc))L-KpcqLx~YEEv(K_h8z|Cj`d#bLy4ZYT0Ht&e-T*UY z=&o^EItdzGXBpn_e)rpM_2NQz_wHRRqCUquhvh7!6J%1(@m;1PZg?=rpHz|2ZdOg{Q860(IJ)w zESnMJTxQweO@NX=fBG5Lix0aGQTi^T6t1F(K4$X+!LrpWtG&hg7=Q)6Fz?Ch^ML}b z+gYeFpGp}y_RKoi7wun@lPu@qVcHBDX)its041Mvk)@!f^(iv;F|K-=_Hx_S%mR+? 
z?l;9da0Wn0i$j5XcMLo|RPrXP9}}HlDHHpvAe2tkZI?rL*aQrF%BB>PNqYdH3}BQ{F)v0BkNTqz<(A20r-KpW+vlGIp`NayLR+mpTG!;Rj2$ zh15Z`y2%}YPRiZe#edebo$mhC$Ly+lEA24_zW3(IpfN~bkigF;fmmRq`0=SuKlRBx>&3}d z6iA{}2VZA7xX!gKN zmj9qwCmm0Grhh8W@+eT^60AG7Z+4flV%^wW?>5*iaT}0e5C1R%57$t*uU)wsz8*ee z$sjuouCqkWWkWiuKcpMMU9-Wy0(A0}-wNMy;*bd^_SRuH3?FWIxxAla3#3jP(AO6mtpgtZzKDA5u(bzj9|gGSOxNC>)Vq@vhAD9pW-ClBDjY z6HuY5k%O{|Y^a0ah~MRrK&!unn>PZZpC}6t?>Q}(kS+>-p zm+Kj=>YFKoyjuFX6i``e*=D@5vrh_AH89Y+crCCn@;Ao)KH!ERs{9Fz%mN@Cq+hbp z3}D6j#(KAg)zZZace;Q6KmIFzs1~~a`1k)HEmqi+6&X=>3@F2-Fbxl^D)qAt3DCDH zwxfCU(NxRkXeR*L(4kFESsPmBRvJjXsY2k^27 z5EF;)?5gK7$crptdxq83mCIMVWyaqX#+SigSh@7q-3uq#^k9k&B^)#dyr0~)&|tl-o$o=%^w9xr zwhw5!yVk8eWb6VuE@Fi;vk0BJb0A_U3?G98N&+=XxE@`+C6b|E)45;63=FgMHfeZY zyfpY8@V%Y7k%sXyO3!x|FOR%ilRz50_Y!zFj=!Kzi4Pv;mzR0ol3?{-RrR-o#et1? zSOU{+a{CU~@vW+wU{niEn4SE5Q_zh6e0Y<5=kpBrg9HW%3=;S$C7>lh3#hxfR8jPJ zlU;H~KVmi_?cC27RfIEx(s~Ict|omv%zvMVG%ASbg;5T@h$V~h&cI9@)=)IHG=NT1 zP)k`CD1bIIwMJ3EM<8Jl>kqAqTsEkM#5OyIZLx%B17N_7LtJ7bfFrQw(zk~XpL7q` z)&ivH>uLuJhbb(7z_C<+nWUCNs4}bu?UJ4(ZhWdZQ&)GulN%M|_;`|MnWG+&t&`(E zr0()=T1&ygsDV}07J^qjfE96)PNm&{%4-U^-(G*-#IaeW19igw{ECAz=JX+@{L;2SrPb)XUoctgKpYoya5A(#= z7M3Srni5@VHOF!@ePJmS>#oECpb2X6`~+n}t2!->s;yWCl2q1GIr8C|KW~gRtrd*| z4~+slOY-I29Nt77@BK=6o zk8LIBVI6h&A21L)Q?yUooc1xlvLH!WWbs`23vS*2`hK^$zSiBoa|eg$*Sr7n_kZaA z^{UIh*uMq%+PIZxeP!u9|L_Db7CAacoF@Go%bQm5fi>-` z16tNeOFiX!aUNe*s}}&80p8bdvIO%nyB=aS1?YINcLcckw0nMlg%jQbH%ov`3|>C)zB%iV;@e6T(3HhFDAc*Wmx#O!}1==wF9`w3dI%RS& znjxq}zL?g$d5kjWc@`kZ0EL`QUqJ;+1z**TXFQMx16R4OaxFW!QR21A5#X3axfX{r zhX9HS_B{S%;!ql&GFNn_iemWm=~^5xtEB6YJXTrny_a|O+H?HQtZ%c^;7-SCn=B2q z{L`tKZlB4oil}qo_&ZW_mQ51?r~wBAKu`g<9!-|=dZcxW;00B)O8y`fFeiE2hYI!C zb*|1LqJUa{DaRBI;bSZjm0RY(Z<>Tj2nI+!cs@awo?=M^qIg4WE$%eI6uO#QhF+1qCA zxjD}IBT5bY>;rWAkA(?l(K7H!bxD6BEzVPR1nMC#E2meyh9p7;w>DOZvq~F@!4OxfB2_= zPCdz5LB6TGb+G+hvgkSOMjhf`%C@02s!AhRS1UpskFr})r?DA9ar>s-vf~~iDt1R6oj(BnCH~KiLbI_ ztTWvZ!Mk#WjRq zw*;z(zD5R)x3?^0lMpSe2vdwlzt>$>JHb1{Z-lVySmt@&8N4F|tj0rrpdjLeA7oD0 
zoFY#Jxk;;4)gak>`YS0?RyvxO(ox7t0wh^C!QSm2>?5dQbBC42+DFrlt5HUz7tS@Rj!%Pj_Me_4{nH;Re20} zK>pmzcbM-v=EnP2t(>1oQ(voWa8ex$4y{_Ki-#bJAu2Ft?3Q9URuPL!aQ4Aei+%VTqDJi`GE1SWR7I2Q=w! zZf{|QwMv>(mLEP(kxMwOb$3GBhM$9~ttiv-;hw|qR+k3y%QwK)G`l>OPb+~KWDa0v zFA#*Z?@R$8n)k@v5XW{a_gVU=Vrg=tOu-tj z9h01aSJmeOu-Fff@e!cL31~oe^Aq@hRLvbitO!Fy+O#}vXBzu0N*BEB3B*xPL)anS zK5QqE7Xe?Fzn*Bh20!}DdW`=ukQ0`I?R85m;dJ~qYap$5WWuguNd>wkE8rF9#P)p1 z7v&yNmqY07;eY9nc0QS$l{|_Va+5XudAU@r9h0<&pymOVJdPFnsr8!1Vfy3@8K~0% zfZNXAZnwOM-zyZq=j>ki7%QxKtgsw=D_BK5Vj1BRfR{@vm#u!nX~15V#9{4V4IJ0i z?*S357sm*UGl$@ef5h8I{G>V=ul8HzC}3%sbAXv{dCx600CK$GJ|J+~Q)x(|=y*x? z{=H+5tTt$4OWp(x1<XI z{;Z@3s(B= zYKmV~8~EKjSYZ9cP40wn+y|Dv4r=)VzZUn3gcCL=D}qFFi8BFIpEe6(QgD7=FamX6QQ?3Cf+sCVgM#WHQYFu4 zeIe49ra;Lk8`l+|lq2Q&M8%f}f;$Tf7qYwR&D*zf|Lob5?kP(DJ_@y!u{QyVF>%GFqgcXI|ce_A^Rx=Zxse6DNuHCKdg}PEDqLf<2bFWfcKt`*tvMZiT zkrf`ZCM}q*$pXbvHFj<1HyDO zz15(6D+~LE-}%y;!T-?|R%m-HW7)%rxqQds1$tF`IFX(=mLGZIY8&|<0stKX^e7US z+gX-r1S%|NhI+@zJv<$1SwI^YzrD2)h3XK;>z9zdeZ*by(oZc*iv;USSx6cBmRmen zmdB*a2%%@0>0^|&yLST61V7|Io?vMw<6shSZW1}*TAY5vPKc*i*RiA4@hN_v948zT zT-#P!Y+3j0aJheokEp%uly_liv3y}M4wqR%c%3DXH?SaE$ErbJSQnSib{Gq=2YTjN z@KkF`13f*!m0(!`Sk#Aj%e9{N4Q12XTtHQM3H*R8r46)*7jztN7jLp0S+FYZbV5`B-IRi6Pwud%^`fU+`P z2h7?5l-Ak1U=cz1jm`DYsdBGB z85N!iXJ1M_H?%zMzGBbfo&9N^z>LMtj&Zz``Spcubdra#6SMi9z=sY7jf+hGaVF#w43h(&!`N$a9iqaC69D9+FRRApX@?EK_3_yJze8Bg2#lgDrMTWSJv;3yQd0 zZVD93{-v*{J^VCjnXYw`8{z2p>gm%bSX^Do4$=Bgd+_zwEGK*}rwIY5u=cW#2;{^< zh>l?Y_814R@+z|ZO>Mi-ezw0$?qYq#lYqq}Oy5ekDF^iS59RH6s(x5)>^n}m`K*q5 z=h?^Ptw1pA=s1EP`40iX1+M0iy;hB69raiHclAa1gRnrH90yo~SE$m*?4$HD#|vp- z!8c(*&q?0NzMr+yk{=~KH&0(cCib;strFRtaE7ic7uc3=Iu9X=men@6|b<#&EniL+X?C&qBJNM5)l#p}LlUGF5RVKcf>GDtQ4X z8Zbm!%i`Uvlj&uat*v4ew7R+yiAFg$e zQR<%Kcy%w!Y)}L#Lq9kHZ{RHemS_14gJ(SD)`>1W`7n~;EdQ|vfv;G0aBqT~td_DR zu}3U!La4?2?CcQ8^82L8xyfrJqvTD=A=uJs-DDk2XTe(58A6X|?v%EOl6&pyH7usq zx_|wj|Eqfdcy$0GAxI_EVvSEwo>l0roM4m|VPc#I2q^x7cyNFMy#-Kl|NevSAOG+N z{Au0jzWVYj+CaXU)We7MOueEQgr)SW0ufN@0U_MoP9W&%r?=d)HKW1 
z+znFp%4b`FGEIg)(_BU{)Ap{mJNw?YeCyw22C|tT002M$Nkl&q>$W!5$pfItC~{sn$8(~34#4;&u2rb=eu^22M*rZ?vayi86=lGB%G4L^mC78a!rjVgvU91tQz|<+r zGX+Ng^9tCZja^1YT?7m5zsh)L=dk^`=MV5&mSIH0atMy1`6so;#( zaq&Cnc-9ijJmmM_BPH1nij1_RS!~;9D{b```-A<{@*PL%$KgV3&>a`|P+a9tV9+*a z%nBqzTpxS1l%&5y>Au@mmm#Y@gCr`e! zs5iiK{Qp2>2JqOvEKqoX_u>_n32(8hWp;mL$>tLP`wf6aR~DQAa2~NU;qf7a-?5;~ zZ2KQ7Ga-Dny2%MbK(^tlm$MTHz$GncTi zGPH{K?}&=*KNI(2Z_?TOc*uJxM8;#^&g<5mFV^Vo>6>z%bP#NSQ57UFlvagSBy)4k|2-a z7oZnEjtQ<>m*u790E)XhK8KDz$u0xL3Ni-dgGZM~9-Zo=r9hoigfxq@^oxK?&Qava zWo>)}@R2v0X-b0u#>pXo$`boWxIcyDa0}gT|ip<`=u$ zzx_k^+mGJswsudtKY#WJ;0i0Tg9&_h0eU$vqQWzzoyU*a9C!>ZkUGeZmUKG+JnL9k z2`=lv{pEA zkiZ~;cTEB+e=1kbyj0*y6{D6YRAo-zrpk5pvy{9_8-?|*`F4=(cnMeo0N3$ST{I`3m`jMMd<%~(?jb_i=|WhF@BGNphh@T+o+gQS2( zTKY_&fX}dmE?@!II*``N&9W>vU|r7Dmf?$gIaWgZEalJ=#QXuEc(#Y1o?YT?LjgDG zrw&tq0X&CNzKdOU7r=%nYpwN4EKt&!1THGxPy0@C+L1T{Rv^z2pw$a(7Av8Y(YLvz z1P`j+OZKfl;tB* zLwDV?FM6$YQY;DO zADM5n>)5^)bksySrEgGz?IuW^cLPP)W9={UWdAnC=ZA;vI*DAS0TZ*V4_VsZYzwXT zX8`u(#Xhc;q2pAmGW+&CGCG|F_2b!${n<`SSpkx*tG&;-@B@G02O9D~ee6qyESGfG zL-?}4P9RrhuLYcr+~q^L>-b%ue-+EDTQ{%cAM1I5{4c(^*KKTUh1T%#%@VLhUKOfW zzUpbkkEfB=_@&Vo`=E|FCOuo_TR z=^KEhoUo)IeJ_@}hJF-x&eBv438p2^qs8Y5X`~Z>Ovu|ZRlZ~8#eDnP^;MSP%>X$4 zh2?+GQR*kU_wL+47VL7zlE7V*?`?skJ*-0jO0ts~)+Ab1sc37lr!4f_GzCy7ys))d zPP?_Ja!H)lNU%jcu+Sl&d9}s~@A5_+7B0+p_wa!vu-KLW>bGYP8UliX3MVHx6^G8` zWLZHALI^SlY#B^F)$z!Dd}3+(SyDZyoZ2b8bO1#)b6t61%C zow*j)fu8`mjs^1j{t^iEz#|SIG{BI3#XdLIB}!ZpBn{;h=xG}uL-JZq9JK@N%BaBA z0d+X$tfPLxqdnwEcMiZ6OD*!80L(d-%zqKfKigP~EYEo}!HfN@d~#X-**l>6aamzm zKbT7WY+_ibgJ^e=45&+=5D?y@AIh`sU_WyS=OumsxOx31e#iy@E3$me2d$9M&aX+D zNA|w+X}>AX=Pit<9gmmh%buMttI>Ir^Y_X7CiLHiG^08?NJgQL&A?$CF*iA7Y)dvvpBoz~*|#jk2}@X9Q&G+hbJ9S+ck2YP{dPMQjbMyu%Wh_RDt&g9HW%3=$Y5 zFi7BsmOxn)jOJ@cQNAZmRTHeMbI+%HK7*Bea zTaD(AT&ReuP?|^*0x&A5Du2cuVHF}cq~fj>tmS44dHhxg1_-7+W+}}2`g+m?C_(|( zGUY0V0K|R#JjKTo)+tuUB|9>gUH1BVG!RQ3Q#WK_fCUa=n2N7BuS*8XVqD`jU8O0| z%BqDY%8M@@6>ut=Iwxg)cwa}+l?RWexO**7q7v=7o>!ivfs7@~FNEN!d@{w?5%&QH 
z$vel+X)Yg}XPMTQUp_^_-tDfkt6+Qvp}s30L9BgbwF8K?!M5a^fPOppxw4!%5r==R zO>`KZC50&7wwb)i*9vmCP4*A)0c8F8X!YcOe++9uUm^|ZQ99RcX zD9{elk&(Wf1mnC1Zix8ZDQ^N$GjcWFb$k&8bN~g?29(j{oVv(M1eT=9qY~nu?MP}K zP$#YuyrnKV^>IlD9f4d8le8prH!IW0qd>vo0ncG}nmGMcZ2)T7F7p6@tN8p9fO3-o zEo|mlUM6@eP%{lEmmT7Gjs}H{3N+J7%kA6+Z%qr?Y8!0zXZcfp`_ub?DaaJiP@hR! zN+Aixn|1aor|{zrhXEFm(=njOF+A>{9&{Hee*wI|(plDIYS>{}qxA zC#|2vkC`nF9qVcS)|V3}Xa-%!r0*mBpiV#wI7}&F-G#Gs?jevvIhfD>;28Dr9cX=u z56YKU+QBq6Ud%!w%CRi{ZFy_pmZSQsbi$+Ywl?uvlltzoe+m}r4&zWzm3uAV)Gb`z zxpHYW^xmb9UPM1il@NPcRN~{;sFTMaXRDIKKsFgO#DK^NEUy^ zvm=71es}V2c_BDX7z^0|gCA^;@NMP0%Z*`9uwa}dkK=60-3kAS?hBNLo3~K{c8a0h zezukwmJt4Iq8|z#Brr%|kifewff*)i1&B0-PnEA%49CCtQG}d2M`7g3N5?2dz#369 zm6FSNe6C_7%~IZ&Ad=X4{lYXrLr}$ZDT1^rfu>VCi-M|_#u3X29%!+kkEN?uRzrIo z<$8l&BrHdUlj7%pS1C3xW!!ih5D<-gMquM9ES4 zXk`-RJUp6?wv!Lin%?^?AA;c-d{iM8;>t?SR#jPb8Idw`$)(GuQm%PZ2|y5L9Qydt z`o-mjF3;RW5l1+bRm!*^jpevBZIa2qmQl(`-rR+E9&4aoz%lD1Pbvfo$Fd+JyJ_zO z6zT=&uZAc6LSao~9eX)uRcVB;AOKouA3=b~AZ9JfruQ`jI1`!me86p$N!bD)v8JN@ z*8h&>DU~J}L0;swg=+gsABK4#-$*(@p6LSC5(%srzVe(j@_5SfJ;_IL3Q$2Gr;N?b z_3*EwZNWRinyu}vSSek*el1G_1*VD>g+I_dJxz8{^Q z>hb^ysI1W@hsp|0XsmAX#GUelv;Yt(b5_4 z1{ikGJ;jP~XKxD!;3#+cUea=lF0pg8M``dFIa9KI*5!lo}C7%kIQx)5Bv0$Bg!(J zeZr%S6M3))lytaIhN(6FsaPKAcyHE(+{iTjOL;46$E;^-WnSp`$ZJ1ICoc$-dv*q- zEidry2^5Fr+dfPmVLhnx^nIMU2e^_O%CJ2=yvzsXWsLjS0+8V}va`>gC{N2XzUQla zUsY6O4k7!;F=^6!;mtfJ@Lql0yL#91Yk11E?tAn_>%TX-Pk(Mpp(y7a42lo&{$Qce zdKul9#eJKC8*TH9Uyt8^-!fjskL5{wJeqJDx_ZMyhlAs6ajlroh`sU)2W zrY1hUQl^5d->ZNQNz+>#knR8_P-Rr!7E$tRx!{FZsOXDnjoq|da^+51RR;4*EFf;( zxE26K-%4M7{fM0gpLd)1xpCRxw$>=rWPznlvnXt;w-gbjH6Rd7=8tm}S?sc)9G2^LVMOSxv}7Gw1kncAOV&UOr+={N-`n0624egVb* z`t=)FdTnNz;q2m4mOQEWAJGq5`|5Ejpd)3LG++VR%GuBW2Ki{fB(X{Bv+NX?{-(T< zW2}*qH6QY${97AH)z-^@%CO}i#J_U3T+^gHu8!#I@x?{?Z~!ocb&pQ!DI6>zGTYwT zWx3_1O%w1*A5Sisb2qz7>^$i1gE|E-z(YXB9Tl~rv3`IyWVAdjYqA@o@m7wqgQo;g z;g)`5nXRjs$0c&<&-DEXXeltQqb6xbMlKvE06Z6$b_5L=H6Fu9Fm%&0L_#duutJMUVP@*nnpG*rxlO{4eYka 
zC?9)RcRk0?luJ78G%gPm$juK10-71ymRS9j)-C{xPHlS}(_WS#e;H$5>$l9YG^$kb z+;$5MxRrWT{Mmb`uY3AH)kk349TDA4QR~Mce0vGB9y6ZvWpU%r$FILBZM9T2_xQCm zy#J>B<7K_`*NzXZ3Y(zDDm=kBjrA#U}Gn>S*aBDi(`>xbRrbu6=> zCwR5NPFp%`pJ5^@h{c9t(DOiP6adIgfgM#}>SEpC$ukM+Q&vga(lkY~Nx}N4Bf9ix z9N;Kx5J3d3p`=)UP&*eY-&v&Tl)V;O=7k^gb3|$F6UvVvN;L|uO18fyqwV(mcKBM- z`s!+|PuADBI>D4v0G(;9Xih}|s7#!FYncQ9VgB;%WqFi~(v5KzeFQo2OnAU95&8AK zKq^Z(=GF?!ogn2)UJD2)-Xrr~IgoArW*q>g?d`II8UVo5ES5>|Jxg0@!KZ_BEqLTd zN7!21O-@obIVc1xibjAU_%T$ImQ3VfQpG~vW!igAIjn0RwBm`hOB5yd$k~2dN=fQN z`V+vfIO8UjV3{C={Wjx5VG-x@G%cf6$7`8dTB@CV@18uM?J7p76kBvl9w} zcIPI`)C8?79e&0pX1Xg^SG#}whrjDib`H9o^-Y$X?RC%A9>Ehp)!bxv=lwg~^!!D} z6~GL2+l5yh*dHEEVCe-7M2bC-l0oJLk||=_&pzQYQI`^GvGg@Q&{lMY+y{MStuU59 z`Q(%CAy!y7S#H?ZJbgOXyd<=K^8In(Mgz2011yz~;FQmZwZrv?Cv(di^5c2HP-&6J z(9Y&NrG1U|Nr6Dzlix8YXr%?*+`JZwb6F-Sy|!DmcCuK zYNmge(62{<%PCwA{FRVfg0ZB2p-r)4=P2zs9n))V?E5OhAInI(8LYlX4bo`8jc3{J zJKWgi+sNs_(>ox6>Hh2P9jMAsg+T&?1O^EV5_p>=FtSEa$se0ZtK1ljL%&Ah0YK>) zKaH}CDinpPltz^3$uvh5YMOv1@S$^R6?`qPTn@O0 zFN#Mf(Hk3BuaI5^-*D~fmF@~w8#ixUkAkSu`{3cjSU;$sPU7r)4rkg+lPsx%|4G0a zmmD26?UbU!H{b;qP=rt8WJxqnWzuu~_YW#*(wNR{xhu7OA9KcQLvMSc!Vjy zBxuos%4IuRTq)G`?H^h8BpYpkAo*&FcpeK_5t0TTq-pzJV0pxEKKe~}=k}fMF#zA; zfdGKcy!mC>ico2-SOQLRtyNYMkf@e73DA`9saORa`-?Vt01~RcCND7H3SgsP%$Hv& z3wVGro%b6W27kF=3zBlvG1MJf64i?5~@lwv}#F5R4AS*F%g`geNv6zeN2%@%2&OY~#w9mm_GcQgeE;VK{s zv@Oeey6axR4kB#hx<CVtGU3}PS4U~GYiy!TD`SK;= z54yXb{Aaf^f2lhFSX^CU*T9+O?m7VM&G$Y6Al~Z!$N%`B09Qx!A$-G3YApw#%&wN; zBJe>0vFO5LDA)3iRT4{9-TiJ6E4$Uz%Q$ntm!0rh8`uvn;TLRq`65<=U(@avvk8C| zhg<$4?2)|begIguk>%C<%W`ZB`&U|&4w!be{R|`xK$Jdjz6u#LDWl!{%BWo*^=;NR z8gO~%9N_94?P%W)$m(u?)GhrAUK}&?^k0G8Lmc$SuNi{T;w{T*=}%gt+HT<$RJ~PL zT;bBK+qk>CyK8U@1b252?h+ul1QIN`y9c+%U4sXw8;1@-8h1Uc|KIC8d*9BRc~#F> zHEPs5%I!a;qiYdePE<)*f329%+}6!E{xnKJf*MInM^c00I*oMS3(Gh5)N*x+ZYz-Pu()s&DGvm0yTQxp`&{9=dmD&#&YH()Se zm@@5eAeJ^%qhBa?z?bCU$dwu{K#zp?Q*cyCM)2F2@HbBtBH*pF^Wg6F^k)F~IPJK{ z_{{9dCw5_^Qz=T79m9_phNzwOd=a(i7o6BD8B_0UWm1QW>1WZSMe|t2)c4@#5TdiN 
zdO%qu`dflse#-3k`dh!BW?uCd&Y(X2;Bd~uflsEnXI9r^lGD-3t^$Vwc2r9AD+(NJ zH1?O<9<$f}n39@acqNt&SvBT;#Bs*T;bqThEr}s6tcsWdTooFxT)Pw_IbAAks^DwO z{#2LeOJ`X4?5&QdCM-WhW^Z1YRhSt1{)(goJ*K)AGzGv)6|){Q49xF?1l88&(&`Ft z+-c0fP%A#|W(Ox2TCk~^i5g>^uBpyf#aDiM;XnqZ0N(lbigXp7l z^?e-d{aoTax-!`wSFAUzH)jyKlp{Db^1i0GZ0f{=;iVpuaPw1g% z4!#a8ru%<-9ltITm#I1o`MPwJS1r@@giH)cf+OLf8k{60L5_I5pd^lhmKFH3I)}XZ z1tOb({_g{u0O`TP{Yl`uU+>i{X>G>4 zi2@vX;n}O&O9TCnv-tf6o*~Sk&-uT8kpFxMwnwStm$#Gwvq%cF1FSy9O_)sZp(cke z6)6da$P0xLHfZf=|JHUA>Hpq5S(Xd0UW~7*I*o(TV^*Yfn9x*ZiKWExT`S$OO@bLy zQ^l^))4(F5n@vckCvqtbbr-ZJzacoe;j3ARg>{#_;=SU&AH4ThAa&4z{R^;Dz?akb zEAWpCn*H_^s0ks#gf*u3`>s=zj-TyHKo$O@iUmLcv!6QMewnl!u95~T15NWcbBq#w z$ltR3hZeKtHLQ;|mAk7{Vg!}*yR6DHCQAF_sDJNq`$G+fk+{R-3RCpNc5pTmw|OAv z4!PSYKkWKZ=gH)g#6wJs|q(-1kTPYM0{AUE3 z2t)C`BNZ9sp7eXVoAcla;U!x9vWl=ea{h*i8qs|GLe)Lao9YtS?&4yd$2ayDeiBV# z)Yk5r^|DhqJZK@=*~ERswfvVfscT^B?O&}dn&;9n`DAh($tYSMB1bjzFd%0WBXQqY z+{4GwjMF$QA7ZzVvRvT=$eLW8dZ(%FxKO}rplBf`2}wyPPiO&^dFDyoM!OVP?6^G9 zL$*)Oe0A&amW5iGk;Fbhz!qm4B#50_Btb?N>O0rtcY)i#p>?-^L0ZLHV_mzi7W{4? zVI?B$5=e(N?+>4sAa988M7tafKz_V#IIpdA&S}9aW!%=?(jhrsR0RkGMZ7Bf>Gn!Z zz6k%@Qsy9pHu|WRV=?tLABYizbiU40F%(oQNik`wTjpl>^q|-lpYkZ2-j}a+%Hy+? 
z?VkRmn;ts@``Iwz?pEn3_`}Dn>!6Ikk9DGrTVARh&X>Iy)sm&b`^|3lL3SGoKPBkg zUM;pip?XIK=*ykcho*0|-8fzCpDB_~4_TbWN>k!~4o{(z-SIeKOjgnL^Ic)^Lul5OBB~g4hAi zf6=kb{OYkGlV0IL*C&EUWTK&%k;sf6n7I{bsE03?FRDO%AouQaoz#cojAsp3B)ty3 z-nvO7*rw?@8+D{ov4ml1p=MLRudLy&9j@a)FzZhB!xfX@nL%PzZlm__C=QHz9TJ+R zv|l&!G^+6}(Vn8VLnUJOwO`lsPBSIikeGi`Y*f~Vh69Y!tXt<{9V$GVg2PL`JZ5{t z(ONLT2~n?dW3B!SxH>_$?M)xOb#VoQwl&hrq1R0sjSR{iG0sns7h$sD23@O!I;Dp2 zYw+PX^KLE|;KHjT%#>pond*%U+5T^tf|Vo>J?f?k4~TRDx;+0uN*5#b^MQ6cqBU1r%E^lfs1eeI7Gz3#BlnA0R-Re!o-3DpJgjh0)> zdMg4@bTJ(~*As$^9 z*aHV0KPXrXs%F|vWTXb_JzA%x;;7!H5=m%FhUwn#A?8_()@`1CP%kK=JhKuKUN5F_ zQBPM?e{^jav@7~Plewa}Jz|n}8yK(UN9^Qx!{1kqOsnHk8k$Qs6aA{3X&q(lE+xCp@6 zGhi3tIlY5$1BPfXx{SdaeCe>1?GOQrVH$`1!}*c2(}=sR$$6QeL$RRC8e^pGdFiKD z8dQpO;YxP^j}g0OzU`o$C_I0H!a?&NEg3)3f_u?FiCVwpA|{-%bod#xC{?YQDTo*r zktOCxN(z68DU}!&R=P(swiFvAZ;BoLfy<|X{e!wjwv0=1dm+_bbsEG1$2nMuh#a5g``Uqrilx)yakK{8nM?Vg{6 z*!BN%+ttl+q_^CPl)nLRY0zLu^NP2@3Mh^Osh4ZGA67 z62E)zpC?bx-p;W$(#YC~tI6)OtsflNf#lH>UHh;X?uW$uYEFy;pFUV(*Lc3m&Sj)e z#ajuiQo%#D0qF0uAO*_LooP<8(Ii`GLPDfW}x#O~i zz1>G#v6Qn;b?Kvi5qD6~)5Z1PN|N~Zv8y(xtN4W^$*Y8eyGPvt5ooh4I9_Aap_laq zTk?P3EGi?|Rq3}mdZd8=j&9rvfa+*yg=S^(Nuf!{wH$&GfLf_R`E$FkVVv@4%eRTR z0+h?&!7#GhMrD9PY&7rb{Hg6Tq)EilBc;n_+zbg+4J>^k46;V1D>{>W1P~3J* z{N1U%!OZK!E!dCF_r@H?dc|78dS_CB=^d(dqO_BOCH*07>i%2?Ee2Py^M3lUmHg^G z6M~+;54Gf8`V^{*V70%_meSb1>(7r};MKydrSsLj=rZjldv;L|0ft?+ENAvyw;Rv0 z|Xbo-l^$)qd7FO%K z6P;W{{q1Qx(mc?dVWj+>XDxV;^=4HR*5@wQ+f1|W4#pCo1+SQD31A4QrlXE?-7A1# ziMMC3{{uOu}vi7jc#*fM5 z$GV|=#Z=vn#Q43RJZ}o-Rj~cppU4n)C}{ZDh)VqLKghrT=_%{AVaJ$MPP4$(D5$XH zJZpCu;P)$S$sh5k+w>DAsgkKr%qvsjIyE|gJhj>FOh3{QOxi{f%~nEWA=|YXSfwL| z(9}sb{jNzOu9N`SIcG9CZ?0vW#}*M8n04^x5pcCgMeYK?pnHAv{;eyAIZWBv z?FGK<0jKNVp`<<5af3Mfn0I?%9Bp}&8i`83%ihf|iaZvKd-Q+w2XKQ|UgMXF}n$CtxMvxQ()i+}36CzbB_?Cc+POfMo@|`{_3wha03WVr4jRj zqqG>k!6UM-010c=78C84Fn)E($8;jAk>b`V9-m*VbapAI^6J~yJW)Acj%aePSk9py<_{;&-9Y}PN_Nf>jg*EVx0{q(O`8ELxKxhl5bk^JE!D$fOt zA~@2B*+694&MkzTNwSNmGIo=E$Jl>U=5M-5Dl@Pewow%b 
z4jg->@s_i%j{dGvp)NXp1n$~1wIR07YzK3wdy4W9MK}e;)GpS5hH%3VdiOiK*{G`r z2+^1g1{(@4ly=2#M+Pp^i@$q+M9V{Xi<{ciVaU^bB5ZB$#M5N! zlv)15z#ADB5#%Q`@DgQ0 zQeoSQ0Qkp9BW~Su}R~al&L9F1eeIjgdHu} z?+YG~LgcvA4ry}R5tADuj*8bod(ZC|fTk)Y7dOzUSrO>qf@O&-?xA zt_}Xs1%5rJuFx6L;gLR)o2;;V`X=;B*mg)|M%Oh;J;%D|JSBpJ64*AqlQh$Sk17h}f54KeCpyvqYm z(O*t~y7Er?zFkSaW!ffjGI2~w+C-!C{w+mlhR$Mekh3ewuffE5m;SJX1u4c#B=~C{$o4NOUK280W`>us%;hsMojT9h2+%b+BbkR+N% zH`@@q%^IAcx2Ac{iu#XXp2Nb{i~JZhF9wa!xdi*z*v@H)qEiFcz|kjW+QJS@P{tuL94GOkLdo;vG3WuJb@_FHby9FyN$^a%3S1SMYsX0 zQW*Zsvb6wRw$!OXQl<u>AUD{m@&$Mz~D9&FN1{{PWKLR9<59}onCf34_ zCDho(0;v})*ra9w1-?7zx=I>k0yE7f*RoZKwnlNGu}VKC>+8&VCOlg!Ha5Bn#vD~g zrYPW2xD64m1AfwZv$W-rdA$gCzr)8OVO5}w@M_APh8zukl_oBUaOs(y#$Z~}-y;>- zDa0UeV>zl#gA@kKS!dtUz$Fd83CrG}z&6er<$Uf)Xg|dPUq<(wRjMu}W%~@2{yba2K z3MHi%u5L4#@25=NS%SAn(USneXkH@?F!g5+Q9QDCeTPM6v*T!7{y>>RCG5`Cz>F3C zIb1D48$P*CwoKUiQU*mql2>J9`?rMAvfK;tju}HpQrXZ`sBR-CU4J-9=|>zC*9euyao;C%5U&~D$pZv)w}^UFB8xra~cjVcaZL7O^s8v z?xWEJ{Zvy{O(ojP;Nwug@jgljELF`*G?WP%_Z&xygMoQ6UUBoz0Bj;CQZ_{FR)oBI zwRKeAC}Y`68C$fmQxy>f2dFud`|)zD2Mo0aWghrSE;^W!zU<$WXS&pWYz8H|k$- z@x;NZEt&q@aPvh>c2f~)A zo7>r&uE2=K+W)?)-CuE;Z3_;2m2iKkaM)^hEIq>aiIYo!A^QKAYgOhlbBfR^+ndqI z_4LOw=L-=>vaXgUSGvgb-5R{lW!hqv2&w>Vn0ti}e?WVuEXt9uA-9_+WA*f1SFuy{ zUe>%RHlAEf3WQFFzmDW;NtDGmNZV{s()W>^DA~ErQ>f74V4t^n(C@JtUy%GLoyFHO z!|V!^0EtKqMs&*Aho?tJM@HjT9$U#BXR(q6cz4g2ylg2pP!m_1*WkNN1GQYC#`T!9 zvtg7rbVt!oE>Qx(NrY1dhK8dWPqM;M%X1ygq58S&5ZSF1#54Es6?QQ7{`3g9lLDbF zyOsZEiJ9>FZ~{&VtEO-}1jxR2gR=35O9f#TYqekw{otB4?#E+VCKFC_HVzc!jgkCV z>;;yRb31~L!`b!uJM|EX4AeLh^*K1Rh&IHU9|~z9#Au*ooFVt5UtS*CZ6N}I8lRbd zhbt_bD&$v-k&x6`<)o+BNe?>d*Qe#QxsJf9K7F=CHXb}r>9=J+$yqbA7gJQ8Xri7j z7yI-bYc^arA6;lRouw@b3OCHdF2&Yg%Zz2Lyhq<%K80T`Vqd4ZMM^>xImel7#6n|k z&(QpT*L_>aJ+buclH9(7}IAfwj!xzaW#gXSj4~M|LjRZXHOQe zU5^YIX>cEu;}5ob4nG9^rrEqSu&pk@t(z6-xS2)rzpp7zqo9#;`(@HlnAjQb65eta zaqdOR@#ie)K-4)!LQA6CyW@F(>)Gewd?>M@E$S}?h!gbGK#ZyRlmYolX_|Jp z$_UWXh#E2-15s3YP)IhV1WX`0v%@Rm} z_RTOuI2ao9iSp@V%H%}dS*TgMXo(=lFQ_RvZQJ$?Da`j=JJ9Xx4ZQZ`386K_+8p;< 
zIewYRcI%dF&O6e>k4spRrPf#MA?elo;aB%vn)${G!G~u_fr?yOh+b_+CPpDTd zaCCc+fBPOS*k-KJe|kxgq0un>ET30p`0xZRC>xNM6uPpFi{zxuT$PQi!oK{QlkISS zCC}DB$c(e1i#G46-)c8?-RRRNG$8h)7g70!L59&bSsg%iY=)rFeoJ^h)yXf&_4ZKR z9(#YEM)=vZ;U98>(CC%6ehB9+aOAJ@Dpc`P0t$=ei*vL0Z|nx-HaLfK5?oHLy&+ou zeffY4bQnT`mZMz{(ZtnH1XKLm1l_+b{>tbHYqpGY>21~}u-C7gM@y^($dG|(bB#|9 zUDW=$t+z)Jj@zm^zAJT#tso1ZJp%m+e30n9e#erS4CY({LhyucEgl%|rdutr)eP}Z zI!V;#E-n*Haa0=>Mye&cK{Z2#;DT88d+>ewM;8v;`t`DE@{RQN!F0{ZQ(rNcGL#7# zlTZzpyVXws5qQOI`Cb!DajnGiY_$^HaeZ@7#;SLC8s5TAy2I@i&PFEAG=KBHYim|=%i!E5q$bJQROuf z-U0b7Lj8oSk?Yy@UNeQEhgM3{5itB%tQLT|A$koA(!W1=V3;!7;cWe|yU12`JiyyhRCun8 zU*FC7L!Xx>ae!FcVQCyj`d5(A8!~(Kb)-@QTNvg!0h!#F=ZWtA<-)bhngXHVn(ly$ zG|$ItWp>Bt2^Yd&UlhC!PIa>E!i7UI@?jW@)zm($3gUdrM3Ate1+yO+Mj|sVCW>`& zL+DmR(IH*IN4wIE-zZoZ^VGzuuZ||V{rSr-aSAxLUT2Z*E@0+nOzQqsR z*lW8Ep-B=_fcleNCb_n^gI4Loi7a{nM6l$@YLEhxZLp_d`kY`xOeqmHH^{eV<*7#1 zDylibb5d#8W#38_VY**BLSs=;8v`HJ0*g!lxMB(Ff~u83_#Ck`FxzrC7O2*LRFhB3 z7iu-AKMbkzV|P1Am~5EY7lKKwQ3#An?CDB1zmGydb!+`Qh2X<- z6x7Pt%Q43H)OUrO@>KAnL{U8cu!_S|^K13(w+bmOyj5*09u!R)szW((_txi2BA=Od z*ot$&2Eo5oOn^zMw1~iB4Oc7DhEjXz=P1N=@wD^d?|6`sM2R09jgFQq@(`Dv$f@1f zfP@3odw8&>-6F9MuIB|wFRj^PNw&rWqD8V`r(0;PDNo&^ToZJ$$zeE{_}r4*JqWkq zWj%`zf(){{;%kN!JlDUK1_na#LLU$3RKx=J-AUv>+Q6J2El-Qu$4_N=0Jxqana!@H zN6Rca9F&)h66I~E+cXOVF>`r!UME_@Js(LYC2UL%*ARcY-RrF>(3#H1>Lgr!!`Tw~ zy17p5OCDWZ!q1?+>PYyPZiSWf_~Y6sROZq*+eK3XUs!G(Q46FAz9W(EQ-^?Lr-Yy? zApU*pHX#y8dnAvqrO=Ptj*@^W_wIv~t+=-uQ-nBN>L$sS)&s(qoybP%J`x~pHA)#? 
zj8^l)g^B;L&?0L)AZSFp}HZ1Jj>a?-Qy{b z0^4m7M!20OnArf{=W3L#jD2A>X1Y`pI+kflYICGb`$_L1(TweWaVQh5>$NZ2J;zL= z#<>}PkUZesw6D7EHJK`W@14X7SXh45EP3y1zsuORvGYl&#K&*46ZAJKki; zJdY(%iCm$>2%$`9HtJ6gQU<9KbnxeOO6JvK(-H@wRoK#S@;OioE>6C$L24OwNw&WX z$qSsb?uhyM1cK~yXZ+`g78f*KCj_amR2uSLTO)ta=NpXQ1AeRWO@Q?2kUE_549Wse z4B~mTY8PdHArihGpOV-ncQTOZk5A{(m)YJp@gL&ZC!4cBI_^8Xed4xr4iszg5?fR6 zv_y)n_NW=ELAv7Z4#%^Jtl_Xr9eJKsfJf>xa?Ib`Ol`N%O(iN621W^9sLs1(-muI$ zZp=|4&ati|0XiWvc|bMvrY1-h5!J$ZlBBJ)RxYWJYZW|t8K$_M6s2>L|=#WM~df7$_c7zR_ zn!oOD6n{DtkDGK-FwXHB+wqSTYzT*WU|Xrna9VL~B1X)WAzO`N&>a@_bi^WpUKo4iWDLaSb(yVL~3FWaCSiRAtLZj7R~Psf32W!ExZO5 zrSI%ceo5*d3g*XQoY=>SV{N8zv^kejNjBZ*ymZ><%|>#q%90ixaIuOwk@Pkmp4e5g zFK_P`@3*2tujc27Po5vR{i8v1D0+SA0DN8^QHw&O;Lcsqj+dngGpCM1}C>`g7 zdjh+V?2ZnQ;hNPbfK1OBC4D!-3(H;gErB{4pd(or_o@A?F$QEqmLp{a5gI0swGSiX z2y*r0&Ccd=q%o9t0*FnvP`rDIikWsi2;d>pT|J607{YfDv-Pnl@r~&TX^oBjCAX~Tr}!KcB|t?k0>7=^KJdo~(7 zA0itkJT|3^&K z-r1G1TkXCl`1(0QHF;IZyP02kQXl7hH*~JNv_Fp-jhPD+z{kTk&pWJ4y#AC*dmWu^ z9KN7$ZQ@mh8t3*-tZI+pKyh82j-{|FqXX zEvgSO zhrZ=2;QKnJ{hG|xH^7@WjM_j@xD+EUK%2(%hSCYq{94bytqOuZ#7rf%RfHL?TvAk+Un;lt zeXBMV9RF?{PRPl*?#OCE6g-DbUkS!7I;EwQsL5beufyj%p58T}49oE*6~JmoB>5z? 
z)w7Cb zL<_E4CMS0OQpzX`_8Haif&G^=S!&VttbOa7qOx1@1p)?NI+0ilK^iB(;CUUm*JF$% z06O^{Z|AwB^#XyKz$X_RxLZCRtv{Jod0&QqQx&m~5Trq+UD8}qi=I1R&{DpHF1 z$==%0l*BGC;2oB3mTAXEXPl34utZ5NPv?xg$tY}Z6^?KnZ`?M?&_Z$MezYG1#0gk3 z=*#{&|0PanDt)kRD;SL;AihYtzqy zlNArO>E3t@wI6a|dVtpv4bgmpoN77lAn>Ez*AMyqX<*c){7d0H4}_xYNte(p zEYlNh&ZnBt42M8Th~Goic%D2{7?BR_Z4uP$HXOBx&G)~R6)1pf0AaodSQ?0R0DNGC zsI3N2mEx0V_;K{~Ge>Hz+wc2noIA)zw*~uQeCrfa>M_?T)W&_-)UWa36h3nOFs!UJOYnHok3hN}SSWi@zUC|gTJul}A= z2WKZ6HZ_(&0@vSX*<6Zd6Q<@JS~JxQhiSO8==Eg8gCxo0)y+Y^2|0HEr2IolU-DMY zPz#ZNP&=8B^YbF!(ZOH`lqY5CfTvZ^@DD;-Ri%Ll17@_KMPB4wA^(*^AY^f=)}4h8 zMifR4gtGGrf(~{T$Q228enR9)MMG{J)~oun^Jjv~b6#+cJlg;dDH^8h)5H;PEDDOW zJlQ?mxoa1hY;^?{0hhzIhZU89#tTDkjx;8+hASmGHkjJfj~}2HA_=|*P{4m*O!fGB zgbme#8%ejN>ITJ?oc`uIN_P@s3+?v-hL;%|8(^{zLzDD=8oP=~Ys9k(BnfU0+*Z)I zLbh*p1pMbeCEspD{z(5b^M|{GmtrLU#|jBBdZ*Z0}us1k{(ZID+jo7*#7NtkuYI_S#+7NzmEbDHH+Nrfh&HlM0UVp#vTK^S$b8Xv5V5^%&rB-D6^DhsSl%LL* zRc)G_(&k|EII?pFb;m2HyeL0W+42=Xbte!|hN&iVKP-cr8kjGlBGDGK3+tk*NOYu8Rr}f94+)PlwE#vd@4w7v;Tn6pwzVe1)*1OE*)Q`HZ*fDS|F>I z*=cvJ?-nbbOFs=(&%>>0CTYE^iUN zyH+Tyj(4l$e0;qQd0XH}#6C6o|Lch)bbNdGo&l zEmtXgzVz@x$Ya4<22s3BY_P0oC>XfBW8hs<)@D@9!4p`J>YuA z1F_GhU_E13Ix&q;(G9|UA_M6Sis%hk%?wdc=G9OQ-H;>~7Ct_&#-Pub5tFWD^Pi#+ z+XW_Srcu89>YlnCh$WiINXZqe%*#M3eYiiua2{XhxKgfOt+x=x1G~0(5diBNpiP6f z2YitfR!Xygx=kfr4m;{@~qYo|hEPxm9SX5cn4jwYLOpLL=|lqM5Z@@{A5h3*VY0 zpZbb1gYWKMsex#p{MQ`6!LQbh2y)D`c2$0HCUuaN*ry9Q$xEpV8m$e_2UP)|YXox| zUVQ#eWvzDlP(*A_99%1NSq75lc*Kjd@&Q%bXTMzU%b#Vgr-Gjc(aKs?V$c;?tleG@0dpt+@d|F%k z1sMpqHlE;SYxRWsc#Coi+le#tN;m7SO!|nz>z!l9LA7~*N3L7Fa}n~|_u}C9BLDUf zLTWw1YitoOIZC+b53Dik3&7|SlwS>a=6(0gPmE=7YQh8Q`aFB%sD#O1Lc`@L`VE}V zT3#*a|4n$MS#vyvHWl`+a~=tbLDM-Jr^h)WVx3`b90MPGtCNi-#~LjWv>OrJU|vtm zZ~0t)U^KfdX^%P6O)PRK-)4z$TIS|MBew$fmM&{o%;Y`+IV9%vdqD&(1z6~Q*;!Bu z^V`+iMkWCeVc;h!`^dl2STy=z3d@PMTbzke-xd45iO|GXoDWAc8^cS8h2WZ!1njt> zlHBJHZ_u~7{W;VBRG^P?2}? 
zvtcfD+#5*8R3QpQZ|*U^SM*Ru-$DTLw`u zT}FvhU0&det(4f1nqQKI$~0GEQh#&LW}_Ma3{;D3W`8341hkdJ&xo>Z4chl=rQ4g& z@f)Y!gDjEz2)N|EERCE2H*(|1&?W$MM}tW^tp%O}nC5B3Dmq}kn~T9HG4}KiDA08 zmRdSd&T-R!s0xmL;-&D}Sx-l^Q1W z%cK4)8#K=Hp>&@m>TrJu*qoq(kfaCmWfU{c^7&PY*sBjq$S?C^kK1Co*0^ngc6%9K z>X*+6ap4PF{BNXao7w^wg;(|rYdtf4xc;lq&@{*6AHFUW_D{|+KzG2oiqz`+ZYAVu zRbW9V1GugTtiMyhIQ`bU7Fug$qpw52OlYV zsz1JYffZc6)Q2rffk^7#G0HIwSupijdm%=r@Y8Af45OO7$0O3ao+v!2)B%J7G)zN_!q=}t_#NAAH)jT zN_a;TetP`XqZ3A*GO2CPe$ks$Y`cUIldS)*^JmRj89Y-sKTn)n<{H|`1G?n8euL7l z){_UblreW%flddlh_NW^OKfJ-coL>`f3gef0nzBuu^cLxlb_PQfDul00XQR{lvk{^0`8IIENg(M$O{3=EdljE31mqOo2nVM=jd4mN;7lRH^9Z?eO;mzKU^btnpt5mIp zm(p?knr6gd)Ktp?n5B>zCkn~ol^_3HqjD#IUXl3w`w^X7c#y%$)e81N#DnI92U z5nGdbB%sP)w#J6KF7S_}6G_}(uQHC8XSyMy3R#}I=XoV-lk;BSo6@RT2Sya^BeOOP z{eGKt9{5#pU;t+iGJ0~uD=vf9vzPe&d1mlER;g}3_ykk1_~_*!7M0VyM{&2JdVvN=a%1 zBILPtrADQxxTFf``sF8&G_iyO)uRSr6mx5@X7AjD*U7f_*w69=l{ajN!hDbquHO;E z_Bv0V_1Zxeobr%B1>)+lUnYK+(|Fn#h+SMqr+q6&_PZe96J=-&Q@0h-}_m_ZE&poLTE(Xy?Y-J%2RyCI3|9A z1=@aqEi`{v^rJdlj!Ygc7cQJ<8`<$Pz=Hk`;MK?1u9vH98=T?Px`Dxwa$v~QyEqZf z)mVTVYAyg=@a)2@Zm+6ZE(tVMK*&(ncon2lD<&6)F3UXqE4F~ahomdmQy2CRE!rxX zh8FK0v~L6~)V@&*$6(*UHO~eZFgE!?ydGzntC$YR>Xu?y1xcaWPjbe6;P>2BP- z6}QHZJe#ujo(c87iNvL*brCDDv2Wj@D7%Yvw?NIX&Ck=uEJtP1?CL@|d61UuNHxpp zLu4bEq(2X}ED^hgMLHJ$v(5G?z%zlX4S*^&zYZKYUS?)pgm1;kl=GZSIR*eRdGr|D z2yY_~K%+Qit>zu|96()`UD?QQtxvVXVzJKHXI%>rZfvrZk9&bK`)}no7>>At7kSc} zW9Bk^ZSk?j+@@t3KNZ%rY$bT}B*Z~Zq#FW=-3QnpZ))K54*@X1kAPOyaAQlJU<*Rp z7^t9uK$qHSR_iF5J8mnS21pd>Ju-QO9|F)&Yt?zpb^E(lzl~S#*)m%hIAM&?YJ#uo zY8wo{s665TDk<5}IwW57X?O}S*ATZOa!ppur-E1lA^=&HaD0=jR#ML|QidE2Ykz1d zK{WV|_G2$LFaK!!4fxfQ0f|kI6noU=QLstPHn)NbOu9``4bBJm0lNW{=H}+hB=S7K zyY&9udu5t2Xq32v)aNFrN}4~tHvk>$#JXRlF9=-4Jk&l7VB~^#6wN5-oPMHBF!ot* zayAbDyb8EpK`&4B6Xb)o=^G*0k3IcT^|)GY%R{>*93|5zi|zf~xeMh-Kl+Ddi1G8C zpZ`1B+AaFXbOmi8<`}n0qG;!KO1E%Y6A@MgRpnN}@+~V587aUROO&&G1>k4WD)$s@ z)G>7B+mC%ZfNdTGy@L0Q++8~fv?M@tiGJGhdMa)Oo_x2uUKEJ7f8m=~3OTd?E8;Few 
zq%+4a=$t%%RWI9}Jbyt}pPA$@YPqw5+}b5rFv#|(VF0E5Xz>VM9R#>KJaGiSQMN@b z2QbptSC5sSo}LMtR|_09FCef5P$FRB_9?fTjj{!2^5}_jN*hS0PR8PJdUm1QpJlt@ z>Q)&(Fi}Rv4+ogBPMZ$9E&@>W8y9D>(1JmGjDSl3G4X^S&)OOiNYi#rPOt@NVzJyT z)K&mgLaT>owPKv)3h?wan50ie@z-SN#WkS7g_;!)^;;H;hWlk7^qrM`F|;iRB1$-; zHSBUzwTSA%mk`tPsnA12=RL?#+(EoPM|x96JcHoJ@D|}HFALE@z!L3I`EGSPpWs!1 zh?G@VvRA$~Xh&+E z5>8y}l+b8@6bUBKWI@wuA6s6Bkj02)j1M$vlc{lIyIEdkTi_x;2B3+RZW>!&=IORV zHKzQ8eJy~Fu%*Q|NEud(Y)#EJ?gd7oE~zJhlBx|i6!RE=T$r1i4H|BN6JRwi$-+h3 z;lcJ7#PH4b6wggq^{c>+?I5%fNdu`$OZhiMrKSHq>`Iy6I(<#im@i(42=K3o9BoTK zK9!Q7M%~WV(;_rQjbNmnizIx31Cdm!JiuF&1^Tk1d~6$PeFfWh zsI#W;Nhma&0GI^H@7%f_pr8PuZC(KD!SsW;A4O|wWqFA@z-}J(qvq6_b*23;_z(~? z&Hx}~A0NR=K`iqVBvTt&9%K6}EV=c0owSZOYJzQVb!^HCZvsvmfPVY2D;MK|B_clF z$ea2G3_Wygvi#2P{BAja?iIE}qTK~JI)#n6ci(-d+_`%@+KA(sAexoY56ES|)aCf2 zENj2y2D*GVPN07~o*=6(G*!8pKAe}R3%+ncmxNJMTL5qwFG9PoP69Lugb=Nzck}c~ z#dn?``|$m#`_ZlisqO!+D{be#69uX4p90|abNiZprgOZ|IbQp$GoCvSay@Dse}Hy; zaEF4lj4O!dRa(+CZ~csS;J|^H_XNT2>zk869rC`p{9oXl?Hd@nc|IlJ;E;n&Zc`G_ z%3s4QwS$Tu2O<}mrtzxNFYOUHV2GwCK(;f?iAoMe4sM2L+@xjRiN^0tZ`xE*wNi3` zb1-yp%WYS|So5X7@r*Ce^0?DEsZM^3qZ~3`<|7^b(qH+=i~L#!R>sDk#<^n8DtyS@ZcB=PZ#ZsDXeSV3bx3$z?qq8wg;lk!eZ&f@e`yQDRZ;4fD_sZ z+Kg?4FP%Px!R51|B_U8TGdo`%0ASs?JyjN#(Plu)sb9Kbu)Uc;u2&l{YPqbTN#UX{ zKolm>f>_q8z=dE|t+PfW2l_Q^8?cCHH9~L#BnezOB^RWS&vdc)ZXPNfK>*{yBcE(v zRr{h@d^42yK_kP6j;mp?8T?d|+he}8V^zRa5aA))mXW5Wh_A5hbxs1Y2BQL;{Mr%ai zrGl2Vp9iDCEPxDvLf!t!`+{~Nkk;1+Kmn4X5qN27pgZfXYTyxfi$xCpQ7*2I5BBYK zK5e&(mX2CG^5b?wK@P_h3+1*$w;X2s)K9>``VugZo|;>3J<};4HNnOhr?ulm#2Qz5 z;}m^d`KmGyY%KGAoImUeBOaLCb zEimH4kY!)zs)nAxv3Z$i7~-exdGAL6EI_N~Sy^$*N3BOB>T~!BkOJhQv1i{9&@$o> z{7fD{R{r?E_|M9ZfBVNFBis1e+Gct6)z{0(6DP_CAHHAS`_+5p9@`3&mi$-A)$d*% z#aQGvYo%MqC;Nhx7d#^yw>JiO)c|1fVVJItW!_PEyf)ELjWI+3vWa78=Uz++*jIJ6 zwbc8k{X6HhjUwsUFkont#!bgY==$A@D> z$AX?)@&&}CJu@>CTbR8%$25MvJB>Giwy1qB`c^XWlw7l$Kdz1+7ygtKr(@$Omw^3o zUrZ#Paz;NddrnSrFp)+M1bOddC4m_Sptf_c@uh6Iu1&`3ErUEr zFD7LTNM(BcEQfJYM&@HY%O?NQ_gNa|@wjm5rc+*~mrlw`IY=*gG>)>6mp1V7DEUiY 
zd0Srjwhk?aE@inJ-So?Rq#-}bH=Vr1%dEACkYCGg8L}=dlljWGG*W(s@8(QzehFY1 zPg#1c3!GY}yq8zQb^6Jpd6`~XevG3_evP9{{MaUTI?HWb<7GZO<202Io z%Dmb?yD+H}U-?fNL*Q~uVOdCRN3B!AM=&-{&RnT)HSbz~b* z_Pfb=XSsb&*<@bI?0I!85jkOyMK-psM$o*{P`jrMjt-5L#d%KU0$6h5=jm$NprPd= z%OSQ!O%iVujR-)gMF0uf$2g0{3+G-gM~@vX6WFO)#<=k(H?EftFMV9D-@ad_5#uIy zv=DLBjf-m+WUOng!2p)j+Hs+)3k@OHTM@sev3~rx5Zk`z(KsfGuL|%qw-Gn&dvO=L z>fk{KrApgG@n2I&*TtZ1viJUl58GD#MDm7@nwS3RQtQd*9$~h(~*t9P{>R9R{Fsoq5Au=4h)I-UK@XX zBZR!vQgM+Vbw(Y@my3U$c?x6+*o?5HP@6u=PUF_AgV>e=`0>=lx8D9X_X177DmU3S zsW#RSW3g>YEf=+)LL;z|fq;(rM_=ZuYz9;%l4tt08hb9{*`QXo*8Gu+K2Kgi3Gi2C zXITUsf{%f6) zzdY7HrREbnQeNqW_(&(;VUG;?snOQgCs3okDEc*8XBJVArd5*YPb@9Pgyf{Er+4=7j);7X(F z_J|n(^v#2mmpFa>LuDMhdmd^aFMdq7jQq`a1G)K0u4K)25w@);ymc4z0Lv7yYRpZ* zSY@r9F4HLITA%bY3Z<6bK0wZa{*7{M@>qHE%{R+`{6~LOF24B&(fvVWfNj?^hveXN{LVQbIIyo!C=R@CV}3adtoK8vlaN9eZcF}=Axg%l;gHzh5bCn zBHKI8`)u0wU7h{gwLp$j;n{rI@0{!EoL@>e*AK>*hJJp0mX`C%TpuYc)4ATtKHQM* zF5iFzYz0qayZr`4@r`8Zdw-u3mFLyvfUsNnav*XbaIkRkna&BO;Vx8jAj$!w9cFy< zaDd50tP6RagJ0e|5GDOw1m~cnzk{Q3GQR2cPe8-xwiARfZZ7oY!T34g8aIK8pz?c>6w(RT8k1G@g^B{(G?hUvT)(8_W= zPX4A#*=HK%=)j-PyV2FJ&5LC-FYBhAF2M`qD=Whja5G+}k&j2UG1I4$j`fxCqvSUdffS^E$c?TF`K8%e4 za6#Ai4{z%-arj7i^X<1Wl6|0z{_2Br_1ZPU9PiW;a$%sIFo7Y#rltty0bR#DxmFC7 zS0WHE_`={&0BamXunB;$A&A9T@0jCwyT#SG9u81}CuwuvEcgj518yDUI>-r|f=;0! 
z1n+K(jV-Ha1+JjA) zIV?-OAA(Q(;n}qMDV1Pnp$PrJSVUKR_w-i$Bv*-jD}@0xr@JG(>vU zzj?3sz)!Q`&+l+vg0N+%{;4?>CrmcMG8nwTgJp{PBDyxi)LiPPel`XC&_-HV+SY;- zV3K~cD`afD5%^R7zKd)F0!MRflXZMP2zWcIy*I|63F>={=crfm5a`lXJ}M8vhy3Fk zVR3Kk3pgjzM%_@StF$@!3Hec9f_W=o+0B`c+J7^F<^!Tw8Y{ecdfB5@< zSbp#Kevj>d07^9204>R5YHF(d;+=QP&;H#{%g=xIvpAV`;_yU_m)hf+#lF|{>{R3< zYs%R6<-jXnwGa4cYpC;=C=;}8Yid1`z<+(Bs+ywu7_=-;FN{E2P0GVmo$DmY@t?{{-KK4zG-{5Tfr?CZwyAPL8MHL)Da)$BTk zok!onYGOI(Y|kN+UU%&z@S;e-xx;gAT8sxdG|T#JT5#keZ*JtC6B%+bTQ#;`1RR5Qr%{z$FXOc7=x=!2Nmsv|5F2N=Hr7sK9O-7+y!wI?2vN3{OV{S3o&HhxZGPn~ zX+0{EGt{No3rPFRjP%zB9jP>G|Y|moSoilm!`p@G=WDFwU5QN zs|LI>h4dt$eJ1S!Ie~ZEn?U2@5@1Q=BH!deKQ*!ZbBmLY!9Q2;t51Ae4xg!|`bXY1 zooUr2A&hI>2jJvZPBebeHWG~TV^0@ASOxUi*c?IA3fpO`EH_1+l7C{T__`TBR;b>o{vXx7^Op&&GB{w;&2K2oQyhEuIqyvKH#NBLGnWBFpQ2{gbB} z7V*qPZsS@@A=n|Hq`<5*wW@p!&_W;tr|d$W5#|qT{K$otrSCF%sbhS^j5cEA3Zj_U z>Y+UZh{&V&(H3Zra%mgmIZi@UW=(hinX-kSX_T2Kl`f)zx5zc!0^ zC(E*;)F+8(TY|It)jF)U*sLFaSUGv}6yx8k$r z?!0^VE}EAQnyi;c7RJrwg#4?9=|B)iYS81tN7DIjY36C_Mc!l+a#7Iwnf zKK!qMm-S`IYMw?4iBga_t<^Twq*I1X(#EH|bQ0J_0yea4G`j?UQ(iIFFgC`Kfx}4FwOXn;@|%5d^&#^mA3ufd8qGR$9aKic`?8IL0bAx6M$t` zf6?4i+H7B_YItspxbIeL4k``?dpQS#m?$#2auMZ#A@JZ}QD4%ZPPtc@@eR}EBHnuk zqV~c!9k$aX#2JAGu%Ad`8ZgbkACu4f6a9wT-9TrWWY6_HcjPa`P%Mr z;qAEX`{cXrxAWdOZTcCmU#7Qg{!-x}Z+V???$of!bb0l~k-&@TEt75Hi@Lqq*iO32 z-nM8tvc1T=bt7*(WxO-}*ZJI*|IYf)I%voFti0`(woSJko^AR`^Ju4S^X??VzA8QI zU%A_UI`_pSFnRa{i!x9A;wX3k2Q?bzuveua@L3j7Y9YAjRNLnO0E8!9Eh2XfxQ_s4 zUA*veIfKosu>HW|_#U9wM^~+H1NG3wL^;V(V?Ygl--$fmb(9^xYdOwf)W1h&6|Hl!oH<6WR4 zH9WMag0A6w#I{{AQ|3pIZ1}A-Gbiagw#cK4S6ygiLryKBgX0GQvicbZ+3JaXve>4E zjKg*hxtYI33fV}DPnNgAfNB|tRu{pI44Z@jPy_)9Ro=49Mv*sDYM*GxKFRnPS}X9d zx@Lc^+rU=nx8}X3tsqWi8^k`)OJ~lM(=WYLzV+6(%Jpkk%k>+dl$&ffot>Ts^qCFb z1zrTk+;$cPsU=aALC^fQu;mk)RJN%EX8^G_IqA^xSs-f*{v4lu3krHE1No`2h_smp zKuc#;k}UFYEK-(HW@*ERTjkU?@_nEdj$@KeKuQ@T2o&#e^P&GpJKqyF1@n-$z>wE% z5GzPEVRdd0;l|;kto`U$Z+#9QzVzFkOIgN#88*C?HrkXDumO<7ExWNbjiheVGoRQ_ 
zM_S`b*YrVyN4>Uzt16SoL%{}9KIv+DrK~I}h!*66R<`TV1gi?XtF`VLqM~xa4?*Ek899)$)oyKwP;B*ObRH2~=4wKl0_r@(E6SIj^!`OV-p& zJnP@IXv-6&=#_puT_iSLB#M3zg{qI;_X{e4+NNr&%KN8T6pkqYTmxR)cb&&OFSLL7 zthoc9b9`~`p6dmFKrp^zw{dluF3T)^oizNhTCFObx- z*RNgz+aupspI4W|-V)f?o*eHT?b!=)0^+)tLl@;VK=@W<+ocK+Zo?D@?f1MuliZf z)|K~D|($-=la}ZuWio z&U~^hX8P{>`6Mum{VV0W4$!35l}5Jjp#dN`P{&#qkb;Fw6j>l?6G+=YYX4j~cNT-h z7s>=D^DL3}9!K5(;#Zf-mFu_5Ex@Z4hKsF!##R<`+Ky4frad`!$z|R;FU?#smR?S! zBiyiv;MexF#JKuzxuOkc?0R3*x!TliFV?c}nP+|)>f8Mi!gcYt%}<*~n_sD$XODwe z#G5<*k%!?K!&uw=MI;H}96yqeMjY|%yH2FC3QULDa#f#;KXs)cTEz$07mtX$;z48}-@fWe!P@8LRehyn&%W<-x zcBPcIhTf|^E_f!d;G25Oy0M;PJb`KL1ce3_;Kb%Y{T1GL@)H1s1;edCTW1X=9vNU3 zw3NGLvMOpG#KJGfpa8AneWtcp)eQH2VEtHE&f!e&`(u;-Bg5L&O1WjbOnML14{@W_ zM3u!IdH9#y#tn=cR>K<~`dsy?Nv&Y}6c;|q2M#EPg%Sj8-m%b8wvhu7%jvif7G z^95-udu41pSbnflX4pPB1^?O(JT`f>oH%hZwrV>T`|*9f!s(d_Vnr!>g{b^-vP5v| zmu0L4tX@jlvM3(gaD_}Pr!FK=NL0dAKmF3k9Yn^ z?zog|B>^ozI%%}G7&`Ct({}z~x6|#G@6I^i#It?>MRS(9j@vI<$)B$BK37`^eR2A6 zqLwP0E|PY0@uDYghQ~{{8A0`PGANDQ1MRc^4oKcR!EwMcos$~}u=e8HxK57br45Q? 
zdTE>9!OS?O_5Mkmw9Q{Wq-Xk=h_%vZKGIF_!+5zsxBTXx&rg!C@%?2WWg)-PN&1fq z-ouJ zkuKX@`k5~MBEpWZY?8Iiyl^hT_#>@e;d(bdC14>1ToWgvFieN9>Mq2q|K?vK`5pQC-O!A@ zTy(plZH9T5c{IClb!%3QpJ@z?cG=Jfn%?ss{Rda1($-gnYS|oUg zwSL2E_;K$zFMu{QI1meV`=w)rK)`7$*O32Me>pyU0-I^4S#V60 zkpqk?l;JnO{~KkDHYWI?(d%2A3ef-nKmbWZK~xVfeORtuzXp&sAAMe9>S}hy*0QDu zLJR>jWM$!ReFo!l5W;h7Y}Thh4uwM7CY^YNCAYex%--XdwyrF%NkYrZd*xgc@leMm z-zS0>j)RU1YACiUo%Z9rKsGFuX3vx}WWU6P(6*=ic2iPk)>^G7;Ks%y8&8$_;72 zn{6>?#}_uepR>YOkpxFV7slV#HTvKMH`v=NyvSe;PR5bxXRea^g} z6il*QPCVt$Px96VU<5o26L8A> z9Y-MP!0`TZ_VlrG8bj?bvHi=_?-mwU0KTTm`yX8ayt-Xx7FNo_3cxp8X13@bZtu!1 zJDX~NIGJjF8a%`$X7z4B0oP?Zqq_8sbm3FKP_zvnvo}Qf*<3PR2TkDL^VJvZ{}er`J0CDq*d9g%jihY$~#%EiE2!9 z3XQ4;#BlNNvzk3)8X5=iQ43{h3BVA|9&@UtkTLhJH^^@Td9S%Zr{1fU4TiWI4K3?I+5$OC*y?gVupiK?e;-<0 zTdU<*-;vm&IzDl@eEYlKFK@o}W*He15V%u5xq79%|Eph>4?p;z+`N9H+`dIScp8AqFeRv-44nMwNl-DV+kguv{#OerjfgO)>ah$ClT^JZS+=-STYS%0tO`FJKh)D~)gx z5`6SRc|t+oEwTG!OP}ECYp=gn-gx7Ua`Bbd%BhnlLu*jJ7B^@+wEM8RMmt`^Z;3YK z_P`AQxJ6D_^c@AV7G)JSlUIS=Xe(qFhYOIuz`B}(>)5;^0zfb(GX%lV@~B%2C$Pmf z9J0R8c0;#M3IsWU`dN1_Z1-9ux69VvH|!+MPriJ2F8B@exKhJ)Z8xflAc#5 zO@5?d+RQuSJT5%r_(@p3jDY>Cf&A^x1Dacde|-BY!C9Eb-lCw$CC_cQSDS9f@&2>$Zv5o0&5M5N+We;9Zf#kofA%zO^blKF2GQDDEZ1({EgxOIS+3uj zDi2sZF0G;U$!UX|AVDrtwXY*+5-WcI4zCW`0h4gA&XXg!&(>`kEDqlo`AN`Rp_gCeZbag7NxMPD2-mwnyUAC7)Aw8ypmJHm zw#&o{7Sf@E5$PGcA~jO&B@Zo(crq-VObVz86x6tmtG2~MoLnaUp&@;@DTSs=$U@ng zuDOzPum(v;Q4iEf;}aC6>rJo}J`djqE0ml1rxDaJ6Ob+d0m4zD39xblJ7?>FSo3Jb zjEZgH;kFQB1= zQSWuyT%WeUzz+ll+!8lv-El9Bvcx@b3@7Alj~^WOjBF?)lRdIO| z9jaDHr6bxKnU!`m%{qx0*kz;5f6-QstCodljY3GgFd z^OAhtO7?{1vGYG_0QkINPm=wkXn&dM6VOPSYP6Vl`X?XxobeuYeO%nfg|}&E{%yY} zx!)_Dnm+;#hG!gs1^plAGXCS@eU|VxAD<=NXN~(+S8J+je$r8N`c-h#`UjKck= z-hbg2QR)Pme#!91O=U+8dk@WhW>qt+sR()k zXSwX-{?MXa1p;)QE;lj3R#V3Jg}J5rV+z_S@>Bttnx_DT+A97ib6kqjHNataqK!pWX}814TDKl&ly@qGE~zxkUo%68s!=gxBy-f4c|87qH_pEi?T`_}8&abfG( z?VEsom&yS^uvacz2-{35cEaUB<_VfBt;= 
zr~k+QS#I69QBH8u-nnz<%G+sj~fWpzW}EQ3x~XDP$6sEqHnx>hgG8j^oCq8Y!JCs&~32aNiHkoauw$J*U_uDZ)|DA2zAIp4y_=D(9XSm~)CywqGzB9hh8eBK- z&L4c+aW%N^II@>5E!QIQT{G{~0XuPPqtU!phZoeq$r#}6c z*+!rG!al_e?u%bwpJE0(*1miR2gB;&_cG z;zb%0xjc&q5t8Bh_cF;eLA%D4wisHtpwXm+0k=w$xCz|Ov**#e>Cw=SWSP>1^|otx z#6zGYK22cc+LU2C)^^Icvn(xox!uSzYWwP-AR_?QD%u&bB^d+fy+!l(G8o<}a2I?= z#+Bb%20R+SkewOlb1<>@SJV^1YALxjZ4DI|-%T;mhlY#ot*t{biPz7Q<23Frc$M-9 zTR_Ohao>-Ve#;^V$2r3+Snif9S3WKWv5hqhFfz!Nu0itBz7c3Z=^yMXV`%w|A&XNl zohjc%**{Y4T?tlf0h-t@0syfc zAPY6|)+j^Tj_Oxy$EL$q1xf@z@K83$QhCH8>&PKa^IR^+PaQ9ZCJ&V%G~9mv&gHVe zR?Kmcs%sZd||0DUUN~ zU#*r?CyocuGYtshB5N6XFC9Hx)~Jh5&~zK6oLkCC-jU6XYgeg@0RXM*$YK)>s|V$8 z|N3ruz_zy2XI?5Z8%0qk$JRFo3;Jlk9ZRQ@#WU!bgBJl8>C}9y-5dVlL(MqHxCy|#Q=FXo?YF;O ze&aWPP|jbtz}#aLW9(?Bk+*iJ)NUG}-FXm!{VRBfk8l>oj^tIxty++_72AU@`?s1| zaR>lDZu>KGw0qi)4be7HKanhG#hedXI&)BS%##YOx04vU*O2@qG9F{h))wB~JJbPT ztF+Sq$hgXjZ7%z3fM(FL98wPOET~(ird-^0u9JWguthYj`zsZQ{l#~N?+On-_%fV7 z#QCmyl*`>5d&NH0Ev|PntGqd{x)4qb|QH-ai-JM_2#nBH1PS z>=Hn;eftIf*^_rEo<#{bm^*mZ$uW}=o&~N1e!ec}z@6aPmz}3Tl@m=D3Qp#-yidzu z|7DlAqt{8`X_de*ldLg7D*NDF7OnGWwFpHWK$GG0QBT}DA6g21Y*TW1eS=>^Km70t zhP7{_WwA(17D<2#8f{-wgMvk4ePh)bgiFxYIu8or6ALRBi|O(%z$QF%;T7S;Gd$w7 z(q!6)t{FG7PpS{a=uIjk)P-U**{!j6Pm>GM9wq4$)!32T`$zHC@YNGWkC1ev4#7xY zrGuOl zb>vw2A*W;h@DG00qZa^uF$a-Xe;_vdEI#^ScdZ|pj2 z%0)qF$JR*&Ce+Sydzj9)?ORy8QQip{SY|aBCwOPC(+30`1T+%lQ+5Jc7zQ9sEqg!P zEzL7*UIB(g{U8_X+U;@L5>vBgl>X@!w#%00EAfwK%yehwwo&xhYtZH&6Vpn zZ=g9eO&I_;X`jv`l%?%R2~fmvN-#kyJ(TM@b>N%;f$&5`+h%joPi-zip+1DC#@s4x zOFjl^Z$oI01*l59IEuECNu5J!tIy~27hWl^z4~hTjlci*%EgNp%Mh|v+fI9GO8~Lj zG&9Wrv_@mAC3dN3mvNG%ZNj<>X5>w_nrjq@Cyrdwj#Zq#88Iq7(`mEJ0|>l#tkq^# zI_Lc2u1KF8{@zq${K-Sj)4;QI_wcidS?=xr5#>s`+#=3=VxH={Pj)kl| z#K$>S?XR}orUYDV-&LL!x(Xd+Nz`J+1kT-4S)LxHyNi1{@fRTsQ10*JHmHB zt~vA<*mB*IN*s<00z{5oUVXo)WhJodJ4u^y_7&4Qo@PAP8`?WIy>ZOLAI#0qb(UAP zrKHhbcd7MgT>Z?$tMQ~^ddFe&@mXhlX&A@2@*zLwufOFnoj*XO`40KAjK+1WvOK;! 
zzZBP0Ue@84LbD^%N#HX`;JG@5@H3SAg^%jM_)ThCnSZ`azKY85<@0p{a$iLmI+~pX zo^=Tv!j=wdzSw5rq(QcAfew@t$0y6#mrj@SY#SWI_R1RDucqb}%JtiK%lnrums_{* zGkN0x0Q_C-yD(A%3v)lN>f(#}TJx|yeR5Nf{&j0k1qh6#XT*nG#;(b8iCE)Aye7Vv zm_##9?dmFlh~!vitjr^m=x;#S-D-$r{=3rL;c48Rxqm9XWg}kr2LmRm%f}>Fy_G?5 z8QyW%5OeZ7JT%PYc09)9<(S+9WC#qgST}wBxbUwF?fPthJuD!1+ zD;~GGi5RsUL=U;qW&~Q4tdsIQUCl|j3l1mYPTswDf#3oqCh{yIQZposYR48Pc$O~N zXyfR>7t@3uF$@VUKr7){9s zXslhGzY+_lh4~q@DG#vS@Dd>49RMUCMzr)60H)8pbc($C0K?d7$+O#m)mCsJb{U!Y zqtg7`OgVD+NIAsTK(}(L-J^Ed88k4DV+YUeIrrGIxk!DkE*~NcZED62c{+Lgco{-# z>n2+`t#j+MZXx6}=qi!8)|#f};Mw2$ffi}AqYa2lKnS@S*>fW)xklGqXXct z17tZaX`sIzJm5Y2^ueb{AsjTUa7%``*!~&*P2Iwz?O(9U7}obD^{D}V`-zKIHNXU* zsP#r$m3(L~Q9jCL$u?|X5QzRo{$?x)B~9Ch?Nre12pTIV0J5yZvjAC>09sR156WHI zu#0GcROKgC)A}(VpL+7-xpSQao$41AZHx!bNrGA--o7=SHrwdP|org*RUhZ$-bz| z{+-t^O7B@n@A#IkeV=~a_3M|w3vUDK>lb;KV<&-50-Xe23<)@an!$kkDqCho&>}f; zQUp;CBxL6n1QVU08-twnKl(@I!uboaxOnHMKP{I&`~WJRY{&uugV%4p^;UV~_1Dq1 z87~Jg694+~Q|0yV|Go12(4V@`*22d9cmMbwm#bH=mS12S>Zd>ZS=@WN-V7Q@0s#Ti z(N{n-Aa8l;003A608%bo(3m~qMS`|IoWdYz4-?GP#L>R!Q1J!US z#1=&I8nANFV*2FrB>=0dv8XyS$>N4EL8Hz6t7UFldoLWQz!tg(fOi}6{NR2pkOZy- zuof54(BdRV7g_VzRGMdTDDd`R>Rwr)-wT8;qIoq2Xf?vt$f>(`VqxZ%w>3`I>t`Vs zfEgN2GZm(nSk5Uxv|s%a1NqRKpI;yypeFnaETtVc1z|1QM3qkfR+h1G zKKt=6LXC%0UU5qr{v($D-rFxGS@3DpUfCQ2(E09nzl$xdSIV&yC(4ndlYo2}VJGhZ ze(5{=RC%Hv-A1m=|QBv9MX zXGpM1RQp`?722O*QU1u`Rn1g2mEt{NW{tBS2sSz9`u`$BY{p~P9_FDSWZsKie{tNg_D^fTFU*h!$1KqrAt0?(cVmRaO3^9!d7 z*O#%ebK#|voCbG!K4>%bM&}5TS;#?BB_;MpmXbv%Js0u_Z z@|b+NXsrv^?H5=q+>O6OO!-{X8}6U3O(zo(l9M}UDoh}&tJh4Kv1@Kbgx8qZutMxj z4&`|}V%)i~OZRD7t&oQjZpX~j2InVICC~ah>CK2WFuvo9fJ)kJKhoF=n@di@olvg= zC> z8*_IdAWv%T$jb^*U)@f9uWit?#^DF23=4`4}5q0G>c$zs9+C?MBtg z;#c}Pw3Y-8tpmX!x6us(>L@RHr3e8!2w5q?YB!5ecvjp|R{?NP4|X7{gqWe7wXPh( zgZKsk{*h*31|!{b(|}_bB?m~Ge=t)PXXa4#Ve2CO^OH-Lqg||F+e?tmeCDRqK!CJ) z@u&Zd>o<_o9lW&HRc$p*19(jXNXxCfEul@|HsYlP?2!>K>I#1~6>hPevu=&EAeHx7 zz`e+ZOs%`yD>ozGD7|HdMwFQ(Q8Z}9SV4X2Z=36*{?fU)i#832LlS^XwW6w^j2~J^ 
z5E>aCVQb}RIR-%aeZZ=3fBW0mv^rf*09dJMaqQUfNG~%k9#^QZ@PKaB2voa{{soX# z>DI9-_$JLj0Fa80_7`;*A^5w=5xh&;(BA?~LLg%-jm7 z#-Q4Kf`r=L8=xGHho&_T2UN@IR>4j9vVU!Dsm*8&RzB4}(|MI|`-F91n-r8>!d6;< zd573miUy+N#Z9&qE>NE__L?8G1dAPi>i8iG)?(*gWC_>;Y#ontwFMEYck{Rudxr}E z*+2bJLg#zYHtp51XIJ+mOzxU+Z%=_Kc~L{lF(|hLwo|nGh~ue;D;(sY3xOw}ALr@_ zQe#QWzpu)*Mf9Zg`c?7L(S5NczzMuxT|oWhXk48; zeWIK^Hp!w1wR%oDyMyuTOIL4|t85v(&xvyLoHV$K?H=Dru7x&P1-bST77?t9cNJ+3 za}s43Yc9*fvY!x+Yi@0y_A_r8;2cvPM;UQ8VI zCo^2-vzc0kS1=l3HA20WpL`8_-*!I50+ef|T0`@`7k}F$xbOM5iS4*o(*olB;yfA) zUuC?2rpyA`H@S6C#wx8kCGK(8D{D$?2Ve16V+U*8dQ}?I%TEZ&xJD&Ht@{KPT%c=+ zye_7VBG};qTdkk~3)q@9K6x;-UpyU8EucY+uvef0Isgf=ZH`~u{S{sCCu~3g257`w zEg%=%{WK=+Mh)#_ix^rqZpCtuGlJpt_kQ`y@>hTPSDdsrR(|WpKQ7<>&bLWZ%B2rJ zDF5uA{B5f;WMd(IJ3K zKWbX3&G7Q+ljX<%@JDe1qxqU1inI%Dh*dyQO%83C3EYJy1o0|(WF7HGm@*q2V*8JI zqbaebmH8;KU>4guq2Z~jn*d5#r+t(c)l)vwCRYYc`-0$<0G9bFV}Z>jfX~a9FO@rh zi0jnnsM|sTBpqkvp=w!j4{vH}&}AT4L161%{%q^=B4@TKlE%k0t7zHH%wi9Yt)>es z!qrC8cA7HMAie!-36M)*P5{v~60%(eKu8%n*GV8HU>kdsGu-xPGt$|QKdZA1YWK-) zbg8lARzATXwVwQ8O)$$o8<}MWiR z5BuU}^ZXR4D@iATP6C|-Ite@_5;%S0Xt{X)TzTo_$#M{�BN_^h~*W>u&iN8$s7@ z-7EJo*1n9c}3xk6XmP2@IEw?@TMr?oB-E!jYlnD zJUmo(e4Ow`xgv2=sI&xvG{UWBi3?Tjfe1vXjUvhXG`VmgUr9uMb@40r;gWR2i;Lsn zJpvngQBq6W5U(jA8$|Pobkb%~EkFV{I#d4gCx2Cb@r!rM4}SEcGKk$LfAL?!rduEgO^N!@K*VBD zSpZ;IV?n=R=h@y(^;bJA%20Os5@mH&(%jT%((riXob81 zD6EUB(H4oKUWo$O1b_o^^Q$1UKnQV>x8RL=)%=ZI^AvUpkfdoRAtz{LpUA~m0JktE z&^EyKxe8EK?IiQ?6jM)Z6cnoVy3m|r0q0hp1K2Vf;e^S9f9 zN7K+YI)VHJA4f^2#+&U)?KV%~Q-jN`fi4c^OR&j4qwOt)){||>*M30TqM!l1Dg%6C zyg{qfw&`hvrm?Kb);6d%pQjjVKWsnk9c(K6&?jW!I~@GXvbECoEP%8h;LQ_5-M%@8 zUBUZkzCGYn#w(wE%=o#^_UvP23Js2h)#b7zAW1$|`wk{GeiZH=ZTBP zN!R8t{XXk8>XEOnyFNcFpI>GCZZ!KUrTgTxogZxX)+bNb#q1=|NuZNJCxK1^kCni0 z{Q7sxC=;?#7H+(6$}J3CUt!zSjobGy2F!$OakZ?ll482a4A6xmlcQ!a956>SQ}PnG zy1+;}|=}F%z@In;&Q9ADa1jmLFz`7qXx1XW5cv%aUculDLBevG2g$_hmkxtgdtV z9^MOrBq#!47jRB@S7l{oWmWg-`rgbeuVTSjGOIijaB_TXBS>IqNpE~_61&cJLD~WGdp;5v?4SX@jt$l*DhP; z`6h9lHZ&U{W0gg;ix6L4bh%PqucInVW&lTR>i_}2<*~Gw-J5*jmtVx|p+L}M1zw}@ 
zi5P3t(fVZz0AP+dCNo$(^#QgHvjgEZcFP+8v@nph)=7dN{N*o=(*Dtdha&#gJXUM# z%-chU52vYp`_rYl#nxZ`LpT7D1!BQXvoKiz3^55z^|1TqDnLWosjnB16x*?OqH2ogjp_m2Jb7Ia}`5+rfaCb;B6EAlwc z)&%T2j6Ssn+gi)@Y*wJCn?{-^)>e_<|dvPr}>o;^3y>u_p|T48?K&Xqd6F|)?zUUoh$I!h+Pn|K;jq{Sj0thUJ7!V)T=tN zAA`u#k38zqYZtH+lM`uz`%i18-T^GJ&`*Eg0DTvn)hbHsEMw9wKu?@4ZUM*EIAAWu z=dp;=)o67^cVFl8GP>(Y#!&uo}9 zsnNhaXds>q_dvHBiO-M%;uIR^X!*So@YyNV>(pKS1eN@l{9{i^f?*|jU;8n2(%Wvi zrz{tY@>5bhu)5FC`n7VKtY4Q;K)fWMD?9b6F01tQ`lar@9vu^Zq~pkGe(K+A?S`(= z!2Q*L_d^3<-CvzH+!_ru8fY}oXrR)-9$cja8eDkaB3Sz+EU(UCd3Ej942vt`{jjy* zdKWG+tb|!J=EamRd)4a7wXqAR){B$*CJ#~Qf;s~rg*H>TT+b;2jKkZED6@d}q_=j? ztcX2i29Z%-c>S^5{t^mNX&@-KT|`dsiMA9qTFUvY@7z%7k`1?-ZfCL()cS-<bAgtKrNlTJGfbC0`3dF;*a~z6$VHwy(yB_n6a_e=EP1?md9)AaQ|??6pz>&P zPKDgeJ^Sjy&V`u(N40uf#=KghU)R~W zadvh(f~;==L=6z5s;8g2=x%6;J^^B_0xYe2uc!>i)Dc1!?`|{|^2FG(wNrE+-Nqfk zJeFdOxfAZ6fP%WQPKV3u3eS+?G|6tV2eACwPf+_?H*SVSmX;?zd&6bXHk;IF;A3dO zS?FC|jvu4yuB7Gna#w=SsNnNz_kMN9F=!GRa6&fefzW{O7mW+y12Ghh5RC>J4Kx~f z^fb`NdR9Qhcvl}>x`qYT9d;dLLC0ch(ZxC7l}jiW$gHJYSGC^}w!Jl3m|tcEpG+>c z+SLA>uDe@iFN9E(MA7rf?{R9=zubu27P5y#v{JEK+pQ7)L*m-IgGqmFzm_D5=$N*MW-syl7@oiVj>fYrP`y;$LrFZ0aiIQ5QmB2dd{* z;+wCQ>t$wFigUB=#;U$lJ*wns=?y-mPnkQ+kSE4e;GwE*o}fU)Y2sXPr3)}(V}nIF z)<28P_(t_>&vdw?WlU`AKP z1>CX(eI^OwZ5ze~)AdjwP)i;21^`of(9<-QPr4dLj1%&VA0{BOAM2o@2*92dqb1Zd zu7!)pBZwf(BY33Mi{Q>Wpq#+XB(7|uf??3a?4$45q6e@J9po`kH@avE;9)Tb4;M)0 zZH09WK+?cS8sRgBqHHV584Jj=?$EP3q%)!MGPVVHsast+b`GOphYSS!f*%~VDhr$3 zelUbqD;{KtdKm<>p6v;U2r$7dh8)k~aAgxr2?BVB#UX7Oq`hSOaG;k#^mfPQw7;?rB01v)CZ^p60c;(esY$A3W6xtFW(ZjS(| z>c@5j%Dm^UU%!=BF0&(})>qwHatx)TM~;#)MnzG6C0NhI|D~%NBTXe>z?$H0F?i z7Rl!X(>`>BnH+l0^QN1*ZdyVYy5{P_x*y%`L8k`}OmRQ<1&FnUeo$J_HeZbfKCTAh z7BBbt$F1DZ-(L+d4h}_0_4AVVs2^)TCy#npANc5A zApWwqWNFwA6;`v0whE)>OWiIkfomZr|Y1}60OX1dCQhv<_k2^zB=;=eYQBN z@}UR0$-F4H7w(KFAf+22Sz+lDu6SH1v+aBB_^e~jvxb}J1|5$;?|?y6Cm%qD9ZaR} zMINoS1YT+eSOMBV5cO8nFRc&Yo+tw?qXe&Pqoe{#Vgv>VG!*>%kWrvWAWPt|8#%fI zSOoxP?g`{9FxJL|x`E~1jhi=#%|@KRt5?IF({~5sSB+xbx1Zfu4>3>ou*t>Z+I)KZ 
z-FJv{b_0D9Yl;mm)UTF!`}gc)hsPV#Urx_H^Hh58z4y}_ufHDi`v_KK&Q)EG7H}ce zm2e&XI3CWG2>K3~G|)eohKW_Rgf-R-v8tAta~Z5gm-Ic}k*E{>M+ckShdwj#U}T{4 zIK@4}rSr!b6ZNR;_z3c@(*JqxIbE5CaJ}vC1IU^m46DVW)8^P{;Nxl_9!GrnY-rLi zUIRt{wp@lF@_zB6G%Yq7XlcNAlLo+Qsbl*|d8zj=w@rGuHBiQ~4vu`19+g0hF8-Qy ze>LFCvX&GA#<7rsy?@Ep6{P{N?ys(Yk=%az?j;sXbLr;o`E=tBv4!RZuUuqj3yoOv z@;2lmluVXIyd=dJ3*E?%i+FL;J*b_^GAf9I*0?e93Gk{qy3q2!cWks#T}Nic%tf{- z>T#ZrnVzbVq$u+;j+H1;&YQK1a8rC&=lPA>96nLtf->{X*S-)&0W=lEib-5S>^~Rp zf_yS5eNab59!tEAzEGY=uQ;flwol6AQxc=i;$5<#1L|SkX4XC%My-` zqCHsK1famRtBaUv#)1+s(v`{u`x@4M8(3xa^V}FFkoV~5S~`FJV!C>LI^6S)9z8?= z>OEL!&89oIZUY#Nq&_~YEc|`e=stG`SF9Cy2rLM2jSzEb`ZhadV$t>Zu_I{{>%qav zq4fHdOX}v(D%~8X3B+#Z zFz*=$iJvu0{t|tP9r=1P2xR@NdyW;p5-KELfJdts6eAVjPzJNSRm=+Q+NecgBJboeoig^HwndQYaG&T~z z-7vZ{PFO!c-sQ7r1CZ10Qp+=Uts4NaSpwX+julZaR!9?or;7k7^8nfhr-=6@0EkRK z`Pt9Xv6CkQWHMO&sS_vBU2l5(jh|!jL=b-TG=o*o4XoK-d+jUfv12FF`|qEPT_3gV zdh6XY>BzC;>6Nd31(0^rt0hONkcuj3}cn$u7Qgc;3kkIVCuae zR(=9tF{jX%mWf!&#$F2kK}lx$f1W$&C3K-40OyP+W}x>m#z}YBDT2JWCa-F1Qlo)S zpaJf-Nafc21T2~=pFjhl_e%L5zR>#rLU;ZXOrFoUioZ9#el@AlK%;@XYrw^re{uA$ z?G1o+cU?85jRuMauHRvyJ3F6lPcIPs*O*thfeFaO%YYzNc{R<;pa5)2fEOO+^)NCIi| z(QmsMJ&`6pwv;Q#CzIAHL?ypW=1ZH8Ie2}(vb18;;$lYLNiItH%Jl>)Za)l@MXQ+);cmSt!N#D6%5Z~U7S3+d!hlqrP4J^E2{49K0qm4v5p=GkkZvI zrL!M=KoIZC(W_yA6c8_5wGQEWIFMEV2LwxuW98r(+};KFF#v+|ADmDB_`UC9F|!W< zY=8R6kAIxL^`HJa0GSuR{N)J7Jx`mrvA7ZZ8lGTR$C~Sue?-w;Q?N@*Fx?zy0d|e& zHOfL?3z`8ypk)9!`=q;K1l30700^u3ALr^U#uOk*{i4r&&3Py<-!pOQEf&26L?iIG z7G{n{Kft9)fX>Ymda=;y0<17D*E)YV(7Nr?rAz52Kl+z+^4PI-0ZX&L{r2Cc-}#N- zO2>~LOW*#^-zDtH(*7d{)5Y@_((xll({KOAZ=_kktGC~J3%9pd(%*mg@3APl3OMtJ zX&tcc@4oY$G`@Etoqql)-0+st&;I2{X^vQU&p!E7I{oD7^zNB6=||uHN8BjK*gbGx zdi&?Er!@lKAI3`QKmElw(;Yyo|Mq|Q@6!Ol&pbh;Mi{H>fQsM!)?cTuzV_8L$8+Y# z-~T}x{ml^q#RGVv0OMtq#I%l7=CxW>IWa{meYW$wht#Utl87_st*89!CiuQdmSG*{ z8Rkm1F+Z;*v701lQO1sAqkGazUp|;#LB^|>FQ)(V|M=h2n{T`x0M>!2{pp!!o=f{C z_NV=?PNl;KPo`_kp{qBvxLQo(0GeCOi%SIb+wQ9 zU0sZuW7Z33SV$jEE2OVT^$ALj=PgUUfG`fh(=ga?G;@=7;Wueo9`prW1tq 
z?#wCf{ZJf`a)5i;63+ERY#J~}Ts5rJ_7c-eOSUF88u-*2@ZPKD^QY$3Ft|4j_<*aW zj>}rfqkrmiv`G(+2K)^ore^y`>A?}Lk)+YUgQbB2^fvd)gEcOVG!M20Zed|D!?x?X zt*o*r+5~uDy&Q|nyl~_^-iq>4F`t(_i+pS4e<6++*qvV`1YV*BwAESEbp=;(r+k-W zg<6tB)Tx@PpBJ(sTe+?uJKJ+1SyD$+CHtO_Wl6X%A=}}9DQh=Znho6-`g;GZ=ymA0)Nj{{ zGwa!(sBeyKA+Pa^76oB2MOtO?y~TWrodK~P8^RsTfcFBV#wZ%V0&5UU7TpFH*^O`p z3!t>+A0P|^ve2{!17>b)^+#Tfur|3k#Sb+z)lRDoI_BV3jgH!On*0-ubLW z@S#Ho0kha;4@(@CVj%c#tbvTnRaZ?5xXOwPefZU>cI%mM-2iVyu|oM|*<{=+x_Lula$0mOfJ?i~CU0N4(pL%>Zwb(g>$cf+fm6=j_A+;(o7@;vn{U-Mll z+^0iH2F{nl?-)7U7X=D2hKr2@bZuP6%2E7h|Ni~y&;R_-0o!IG;JWpkb2r&h^5*qh z>HMWDSaGpy>&SS3QB(US(}lBGv;j;RH2?g$^H>Nn_plx`{?x|W!T5ukdJ~LvUPW9q z2Qyynm8yD1Fb}lPbD+IXx_be9xwMY++`nZ{Y)l4lrTe~t{f*`2AMVs6w_5a5`clsv z-;IJs1~lh=tqjHlTgMu!`GcQYYSMktfH$I3x=Ek822^!P_4AVFFzl3H${(iVZheWV z*37pDawvT#%72D?AjP|Pp>#F!N75fw&7n>H)yEga2&l=Av93uG(|{K?Q=#?6%^zT2 zFiSw&An5p#-XCOnl(<;&BhO#1Ewg>==LNv_%0)igv5rX|x-67bXo_6^wcL*{29^~W z&8xe@NguIGD!v6^Q6Bm4#@Bk5*?(m&`l`#Ltdq>wQ?ctcTlU7GGCL<@b52o0iW-ckLVE_dr<% zAKZ=4=b_i`b|+wGa_m(`QH6P`)>O}PZn&!-N(qG*8apSCa5vP#ivpkgU06fX5<41N zrX`Jm#fcROSn>1Mf1dvNAHSELedButbpV*@^u)=N>A=4I z>34tox6>T=`Hz11Ljr}*r>CDhovvQFn$8ldNx3Lsq~dsULoGt6=IrQN^iaQZhCbV zi>s3}>Fu}QOuzH%zZF)6?*K6B7JGo$QcH_-0VKApE9tKQ?S~S&X{DH#m14=-=?qTqS3&|)Igqj`JT-4_G2<=$nRML>N;-qjv78nqO5l8AbBV+ zOP>_px#c6T)mP8GXwn0xft^>d51g2H7pQ<8F7yR8TqL^)_gL~GXe%!amYaN0v!07_ zU+$#$Q-Fca77K7u7X9+}Q$Q>EOXEvLXR;k%E?nr#-xmb0ZMT3w(#X#|dHQmojK*Cm z;F8zMXeywF*YYdvDuZREd?&uvm#=*>#+N{Zd~Dl1`(j?)~HvfOLS)q(PRt$rK}+fe_u=e5Z` zSg*{1PM+#sIj!dy*6Ug3T<~Z;Q@C*W>fhv(%WOJcv z?I=O;FTqp#0$x>hGIM!G zW9H^mU6v-YOi;np#kI_}Qqt+sm^v!&&@>0vZ_qX`l}P z0XNnWz%QR!S|91IHVuGaxjPqb-d>M5P(GW_o1bSl1v-n#1a5nMM25;X|o^bT|O5(ya|v@;>Jb^sZ}`W75Yig<693 z&=2RF&*}gRP>b3V<~3R7ztBk;hKg|+izrve02n1NSFU(bZyni&_#N`T+d@Yg4s}Dk zi404)^yyZp+gs`yOjj?QOE&;dzW&NfSV6rHz;Yu!_w+MZ{H$OF^Z_z=r&EtT7VcyF zCJ920mF2s{pE^W5t1&>WANUI zzJ4>EK6#4Y(&ezy90a(rpJ&dz7h`ndu@n4$Zl*8ernztLuctT8yqo^7|NH-;-L7;C znQjxCY;s~e-MV>$cFw2MPd!b{GXB7_3lM*_fd 
ze2PVf=PsIE=(~XSMcw4fp^NB}1S$NKWE~ggJEa0P74SnE7vnDAZP#R(t{6K329_&J zsqc$yUO=q27WmQ>;Gis^Y(p6f2oNv8^uv7lnFI^uTR;Hw1sO^mX*&6;2iqv}lsf9n z_N13z;aTTp|D@lk%(hCpn|_MkmA#yo@v{C$$yeGaK%dv0<6CITQF-g&qbL;{jkff>~ zOlo{eTX}S$v*BP*PG%jt(E3Br%Hsxrk#Tu)fNGw_wCOH!eQX!E!t+rOXcK^CkeF60 ztM(BqRCo;m81?mx26(ehzXf~7MtL@aFc?r>*Fm0Wu_)IfP1o1ixrG2pMu+;sb#QcS zJlY!ucvxH*N+a~Ke_)8;F|DZv(=TJGCE#+Hfb8wj=ZrS&kNWZBMK?VBta`JI{~7T0lhwsQwPH=5bkWV;Q}|q zZ@?1o75m)%ksbKH`~C0pcG;7T0ziF00P{`S;^7|_YBN|z-Qiv7Z~x&R_>0U&`t9HS zy|jeu;~W-DJ++LRVAteROP+ex3kKeUW=&D;*(D+a>_k>(q0D2Al(1n@WUu2&LG$uben1 zkkjXZ1wLy6_hH}6I$7w%yC<$Wx@_n!ua#xnHX#*bYOy##+W#?km# z&c7x#8u+vt=$u8)XQo(_?xP0c@fKhAk?BLg$}T&V;>d|}%M|mg6y~hv`$5qFgtE{e{98ey=<8;>EGX(;s4AEPR2t%xjMYoRQRWUowluk$HtyFu}Sd*P@><|QFi(I- zz`;+EQQ$#9PuZj^094`QvGBT^eypcnJu0*O3NTZ2x|@8Zts;}x>cMvJHK`L*886$E zU%gGQ>$D{@y};r{LjaBP0z^dOtpKj%r7|7nyj7R&{kB%!!}KE`C+M|4_dwr07w`QN+3h` zIs-WS3>E;gJq=dY0blHMPpod`nPtc=@Ljs2j8Rs}1acDnRR;h!b2ti ztS&$jtzC5ExHPbE zIyafWJp^&zmkt2lT)|Cl3@~95&}Rf-YZw6Kg%@53%Nysp?wKX6@x0eXP{jyVnbzn- zU$|@mCQ3`&IS9Aoz;7e2c8-O0LZw`UPD1zQZ%w-A{TZ+74PsHP<6ij6E3bwn&_(*GOWPs9q5k2KG)55m zQ>Sq)M4nN0v(&0=3>}<)=IL||i>M#{^rwJKQ)%zP1L^n^kEbnM4-dclPMT(H&S2Gc z8cVhn#&AEDYA22yja?uo7{9On$~V$57IE)kvDMA>AN?2qF5D0WBliMSUAZ+Kv9bQ- z&;F~hay!Hhga-}-SgmR`H;|rx>C3ozPNh{Q@+#;`HvneY9dx^<+E>o4mb+|ZJIBh* z%XE1|gq+u!$2A&xn*ldnc0K|;1alo!$y%z6Ex@caycq#r^K zz%IeIVv`K#ixbF+P>HD6${TaVbLUAlQJj}rBIf=(YxvDZ_tHvZzRzy^CP1f4raMOp z1RLAXcL8bT)5UMQ)>#2y*L0ON+I%+}xIY?jYUTNRfB1coxuD8;PIT%Z@;fs-uB%5( zNAAOMS(iP|%bc>~<^193$`?*b{)p+T&7+^<2fNlyUmyfx1YwFLl9nX(wnR&KrUeQb8Rp(!qzpkGGl3B-g>L6Rae&Jb{ujGB2V`-z4M_pdWy$(Lr zX{}$Ut(V>HytG@d>v@@jopqJLIjHW`QymzRw$7)s?#I3UxU$_V|H8MEUfFjm>!fY2 zA5IMjT)Cuk;qUuEEPPy1#hZdlByzo%qI=LqnYn^0X2w^PmamfU+=+9KQhHH8uj&cT%Ya38AVR91^Ky)~4dL!^9enG4-m3{x1^Qek{n?-XSvvXb(^%gy4(N0p%e*asjDBRiO|bAE{L8oIR6Lcl5BdaN^#QQ6*_{zXc%vZ-j9Woc4mjXI85YLZ% z{~;chReATu{D(6ERPPh>wYFMXm`m5NQaf|zOgek^Z0v~nZ-4*y(^p=54U44ibpFDH zh&$KMoOHL$y;!p;L%aj24|FlWeAI$#kX;6cad`!`4ePQlVupSDZ~qR9xaqJM6aY2J 
z$9JkNBFF0*+>Hxko)!6-8}h+h_dyHt^EKb^(O-1wT=AnTA!72V%zBVd?mIGC#(lHp zj*C9q(NDPdVhN_@VLXq}8+Rq{i1K&NR`W3=ZN3@}d@>C<4|keL{-!W3oAfC(pl(|a zj2fTFuU+qUdarc5)&IQnoj&zGl=qZoT7BLlbNBXp{bBkdCP4vz@C61;{up;AbyM8G z1h)ijtW!V@f9$>fBuVR;yg=H$w&ite9x1&pFB)Dex8RP+{JLveWbW{#{G~5D6t}Vj zOZ;e!C4B)|Y)8G>_fD5NE8mNL;$;m!E~Na_^?0xHwFLbHC+g8uWL4LWcU_LVv6Wuk zTStAAB;PU~C9i%g?bvQ z+bF5?y5u*|wM`k`Z<)knW*R)LW?A(b zl5fd2lNL>#TNEQfdGuH0VKoqR+}pDCJU5yx3gXgSN{f$NT$Ma7s#LLztKj2V#j7I8 zhbpf{Q_%&GV`hA^Jh|Pfc2vl<@^{H^U+Ko{C@PBQg~oek@cbyi)i8_u@c|an>+JZ3yC3L2 zAjTHE364j9ZnJCP${K%k+Jj3L`~`gsn10jU+jySpru7;D*p0o^L*Q(88GQAX*V0$M z`V}m>+-PHyxLR0S?cW!{%MSrc&9HOa_y6gK>FsykXZ?>SFV2oB*0)KfN@(gxLh7}z7swIAF=)cDp zgDPLR-eTh>N?h1Cm(-2O=1PrH#U)`*x9ig`4_Qh+vF3&1~Ae?3@4zMxE8i> z?r!=!haLy`IFB|NH+M(u<`}X;B!@MCCiN(QWgnajU?Nzp0aMY3HRdoLbbunXsjkS= zc6SZyB@na#u+)^6N1Ob_xGSV`3AmKhI#ogR(>W1Tkc0~zy{~RGn+HH#k09*UbG!?v zyTfK@3`lGIefF9h z2X!C%v5WSXx{)BUKH7*&=b~U!nUrM`bM*Ol7>wSjtCT7oe`> ztJ^ZRoo$(~ejFcVR|m>#vi@$V)GJn1@=zW>>T0K?KICm1b((vbAGYUsI(OeFJLTC2`)q&hgJa>R4#-(gJ(T%Wj)kA&BOm2#(!-$vhO_mY$P$;=174KEw1(Ws zk}%Vjo#*mnrGOc^Ub$D~<@vT_>-9FWw|y1bZJAuOpIo{9b~z=PJv0FUw7@HAY&wFt zvpU=s%zYcrOxNZaH4EDo|59ExS1yAtfEx>JuT|n@`>Ikylm-t)=YB8AMd>i$mA_%u z@>$}&v?72N3v=%|srj;QsmL+wjT~3OLE|BV6E(8xRJt8=BeN~1&pWg7F&5aKo24LG zxmn*X!s&~TMjYx=pNn=ZO!@&iMvZTTbxH&y_xZ+W1dz&SjL+`dH`!nUuwl>C-gM&F zvGk=Eo(~}5!bNs=1GGB*vmZdanGPO2lAe0{Ie;f(4iRr?nP=733c&+6GC zULo{701QWt9!ocF-T|DL)^kc<@BmO|04uItyB-0{4bDD+Wz*rKhtnVa;h)&62-rP2 zIf2XJIKLOexI7MqyVwxpB0wX^D2NcxcXZ(MS}>+Q-+dnXi9E{N$^3j}Zl%f4r(`pOh@3I6KN__5gw1>^!K1p+iobAKtU-{)LxbW`6Bu46w*X zJ}vcD8Pjn8V;mz^6@4n!uDPc$W1e-`y* zAS7s^vD>urbhsUc9_j`O?KuxTZ&IUy&sqa9jhSoBbh~#=$Ne2&xlX6_o5$VmzVvNUrF~OU!lF%qy2Jx9RvGdo61;{*Y@9Xb!oES z)~m}_=sgx4mN``9-i^N0E%l5sR%n%FxA9U>1@J07Jl@Ovkj8$OF|!@}zgzmavhBvF zGr!D-B8TPm`DWk!I429JrRO?bUV#Ecr##v2A{WNOUV zy6TNhj1iE23@f3FX>Ji93pX_cM}4>havyGN>eAnp-g=v0>g<$v^w`n#!iz5e9Mi>H7*;Uh4k+o3@U|7F>_)9myeTuJY+>*39G`hk>!0qBj^7|LtpvgZv=-S#@D6o6 
zor8B>VYl_qiEW#WPLjq(W81cEvq9rDjqNnHZQE{an_u30@B58s{{wrBHJ-iKTxy!1JgP^g#~LsZ1`ete*Um47 zvk{QvJ2T4EQ24u8I+Ulr)@&LYpLNH9bCTZ?Fg(rZY(YK?HA(eWicQ=J;Z6KEgUX=l z5ub9(!swxnRS(x;qA^Nqh@gTTQLf5C+~d#u^#U>QY%y>qN^1n8h=oe+Vhm0y*#%O4 z)-B>$!J~TR8in(?``p%8r_lGy2}t zzqBMlf)cgA|YGFGbKoV2@hG<|aS5NvL{sxH*o znJ+4E8LfKrwjL;bITZ;#B9U__OFV+?;fg#ZP)hOr&))fwZR9AXh!1C%_s-ZIcg&k z6>cK1DogSEMDZ_gl@6bBGuGu}fTdHr5J6Ve%N>^q>RWc$ z*LTmh_7grQ!PsQ!e#tQUAdzEC;-G8Q`O@Cnu-_$D*QZLhve+aP^UZa>+;sz?SOezJ6Ikh zh|y7n!ouX1Mq4V}*Zt_n6C7J6>MigAH29W8B-Y_63h4boT0Nu3ct3U+Gux(T<4A@> zZn66orUjrZb4|<3`@&)iTEg{zO7jDf3{vbugEe?)4oT|y!O7)PjMtRp2DEa8)DQk5%D7XN5)7%DO9BP0re)Q%ZNIr<{ zozVPs0b(%(kY62v`wwhXoLFsbfO)%c3pXNszC};x^(MNH0k~()(+n-+YUO*@Q~U9} zzJ)|bskrFBY*r}RY;ckRgvi^8Wy6==5cuPpeY$ILOd!o?b6(-X$Jde_zW(RF%THZA z`;BH(p)BBexePom|ua0JAFKtu6iaVkp33tnAM+>(z{uSJJOkWr% z*g4s{+q~P=Om^1|ow}~!g{o7UfEBpzg;_F&?BS^H9!3DAE733hXL-I^fxvN2XWr2; zH06)U*6}Saak-C1b0X%^udydD$5t1_dhh6@Q88(v3wFc(mtfz#^(EM=nl>i8fiB1C zO!fDSk1CWs^-V%t?)VN6^e5Jj6~@8cj_uX${ZRe8DPAEX>dqnG(HmbC0Y7cVBIl@V z?TgucdT6>uJyC1uEl&jlJlhJ}Mx2{!Gxt_1qGN^EzCs(AU1m4O?mMntn{PJZr!r0I zHaq6&)g;^#PVGz^eHqW^P21I1PFyeU-|TALk`}52?1(Q^vqCTG<^-FuuD@-+Dm6}T z7qm{cJo9vc`kBkLC>BieYNw_=LhZb-Z$AFcKw2}GwwDNx5uG|yJ*h0*TxMf7GEHW1 zd#I~666#EAHtDsEj^1dc{aR(s6#B}s-@98;ScvpSO*4}H{oL^SY`CI{(zaandr^Im z|0b3&667-0yS5eHJ4RG}H{st=qlb`Vx`JRZK9;)N?Rq7 zYu(w5a4+JP1PuQL+0KJp8(vHt34p#xY67AZM<(_RGNu2}*Ya<6YIMS`DR&s@&9wJ3 zM|ugmb%YC^KE=tftThxDQfK z3W3M(GZ|;Ld?HR#Mdgr$8YOCb_C(e=;)(8tue@TH5pQ2)w$Ee&yMXJwQMhy*1n?MK2*=X2$S4;Km*k z54rZa0w#x*&QHwN!la$|u?7fmQEoPgx%AQBXg$1^vj*iKB$mkqgUwySX;*<9LX{=`gksM;P#PxuP zyMTTi?~-UgHj}?4}pCr$eWNfD_RUPZ7-Wxjg@GlVQ z{U6UmuOEwf_XQG0h?Om0+&E`7mv%kqkC~vzAIB9|aBiyEG$l>xGer?aWJ&AWTVIPf z>5yGrx2=6GBzlH{os$h-Y8^NAdOqTPJ|PH4zu6&W%DZ4^caKtp7CF~GFnP6n1B+oq zuI>m{PudqooOB7#y#el+w!>mY%>!frHO3^#^!|nu+;ov5M7L7({%$;&m2eRl|Mkkf zo*mq^Vw5J3}uG1?gn=@%kTUJ$_ZrYgSB!o1VD{(`15V;Yn@hj3ngU@&dus8udML zo`QDYhn@KY=sg+^8+{rM3>k#cH89#2tGVIgqWIl_1zafU!>wY(kaali)UhqL{^VtJ 
znY>O>jY_RCW5s~T9P{U0eNX)e9}idthMhdx?+eIQ95EmI?>Gnx)(!5|=Ugrb9BLrQ)Ua1aK3!pkeYbMgdkl;DLUyBQ$Zat$aI^qI zUvbKKtvaCYK5sT-%e3)_#1>Q`!K&M6{qv@6vIt)|sG>+{IM3&VnLjG`QL)Iv*=F)f zIbOtbkZbhwN-6Nf@?WgrZ+_dIwB71lgV><-#-G4G--+bcCknlOIR6K4A9x>$L;WfA zs7bj|`4*YWh|meJ8tq75+cu{Z?t8482{1l}&mH>uFnZvyRS#;Lv?DZY7MEw-1<08198)(-r z3)va~0Y12%E$E6GBPX8G`jPCxGMYhewo|$Vb>mh1g~)#nKb5Fa_d_aNJD@^;xs@kW z2|K{HAS@LN!Djlr09AA}lJ{6lqD3!vZj92{^yCA0gcR!yv*YOl$hzT`k4QbFfdM4U zftq8X04&>|W4WpoRVm*jo2RLVUgGJ&{^6R&$88ZX=69)fG1rP_Ej^IN;2O-O4cghe zq3iJKJU`TI4|$zT8~=58@=xm#{oWdIWE?vQ(c-k92yHwit0hpT%&;d<`fkbZ*!o}SC2=TJxOu(kR)bOcQ0A#`0Lm`W zX0se?77{df z0_z~TIOl0H4igMy(Zziw9Ly8Q`6pq!Fy~=bn6y#4#QU*1$f_lm2i0B--=| z++7T*>3APvtT9^!xxD6l$E_kZS8%XAA#dCR;L22;_f8(tz3J)s-bIZIhE^qFX#1$@mdx7m`k?*nr%0+w z*m4?5gWlDawm{1lN&Fw>*E=C_xV<@Yb<{lB(Ad`qN!(c$SFJb6+5*t zwj-qLQ1qd~MU_7ZjO){a)bo0|ms@>BXD<8(&!nR%6mfVVb_PNuCSDA(dDP#VerA5w{H9-< z{Brz3zEezj${PI2!;@qz_ZtM2XyBDla{M!@Qt{GH@zFpR9~6+iA9^`&qu~ zH#ZD_Dzs(vg@#@uE7z@kF9(hD9k@ApQwo$M=AKAQ+Ba=kt$e&xCC2~2XQ^H9DX_}; zDzuT}i89Jm-cp;u0yz*eQPW;`O&4P;68Y752~0k)V-{nbJA-6pz)V3HH^YC24HX%_^2e@lsXYv01QV!Q zmvHs}5(^G>uMDEia*Og`!&*%$N^4aACeA@slTn5)st)3cz~7f*E}EeSMXp0$M00FP zgdq{hP|0UVygpurW#ybqr$cZmxb%O2S&{!_d%rUmK&-ZaQJ@JLC*Orwahan%fQWLRy#uk6MK(l7+os-Dn;7y)?`J3CEaB}^iq3ch0;2Fqf( zOoZ>E9JMPexItlCF-$6`Hz$2k!4?oqwJ15mZ^`0yrZM)qI0|L0f$b+Y-iX@eFkCRL0Ef?2I#EF>p?2ZeUn?2+4gk1;=eIuD7 z@NpOHZXl?1F)2_wc@bvPkG#CO^X_Ksc*Q&$u>+6%ygOGoSG3|`ePAjRqTyifi?`|UYdFRRHNWuPDGQ^aIp6UC3I|IzMfePnUG-LTl6v&>0zWeOX6mLs62 ztK%5WOgOL|TmO5IACKMotLprnM$^&Sj`2kz(=ATmcd~xm@wG&hZk51;tkr)IEB|p6 ze(5W}ph`Jtke_$Ks2P82MUsrUJ<~X4Qs{ce&wpD#H=TE5R(-WO?4%JdTDPsS=pZxa zO5#>r?BRT>7hPA=Im zn!?mAcG*RD8D13iJX_5XYv>MbsL5HEoKjmn?)p(_9FIIQ{{emsA-I6LgG0{%D!To; z8B`Y^J0FoXM%#%57yl|32Z<>dBL)29zHoI1w3-1@5k@-@JZhk&@#u5Z0bEJT6O8M} z?7*6XMKA=<*K?CU%6qfRu~0t|%V6TIa3m*(R*U1U+Lg1-OU=jgtTdx|TNrSq zsCsh-ZTLe>=bMp#1#`?8#Oq@8IaU(SptKgLKNc?J_r+%uwtVeDy5JufJZe#{S?QYg z11Ho|xIGmJ79rg}n#Jhdw72Ag)6Wn%wY@6Nc)<698CkbS%DLwjyNv1uQzk7?Nb4+w 
zsrLKEu~%;?{AN(>N-Z2%h}p$46fOW?yrM79cT*AaupY*lHx%$4+7WTVOY)SZVy+hs z5iqhr)*yu*elieYOTGT+6#JjF2$m-gQ-Qs6s?%?$61X=$Mt$HrxwudI@Ssbo!%fg` z<@jlUn4U6SG*EKkl=Gx_vS8(5VX6^i}t2`Q7+cqPo10LEhxxxldDR^-CbO&!xVeH z$2+(GoE!5*-1@t`fBTmAAfAfi9(!A@8kFIrnIu{HRhka|Yz~Zdv{Q%Y=iKk)13nvi z57at|7w3z~PGJ`!P0E{gK13sLc^Trh&aryMc-Xz-_A=>2Z2!Xz+U}VND9e#(C))qi zX)!3Ty;arYYpA>7rVFP45yQfDy*#`56%wYq1*&3gE-urYt8JFzJFSVKw&r(4ShltQ zMgvOeaY0#1Q?XfUwU3mwT4d^6_MI=>>MFG2xX0V;OP%qyOzobSY(F6G%pB>_3>((| z0wCIQ>@iU}ftQbRyExLWYeWmWWYg_O=)jdy$Ra3Xg!M`(R_fnZS$v1> z#!c|U8ni4E_uq`}?JU0$H1nQ{FEQ<2Q0p?*)Idnq3)bRR=oUfxa3TQ$GPpgNW%=3! z_`>$bFxSDV^g7ZueGfD5f{8!Azh~&~#-MtIhc~!*jE<-TII54$|ty2dE zI|9P|0AR)#uF4m3?$C}yEJrV7Kf0yyfD))4nt<FxTOJD0ayXJL&V-7 zKF1fy?mA&qupD)3l5QM&e>31XM0z_`9PUnLGQ2lOzWW*!L{i-$Xqa>>TkuCP`9b^nc#3m(c}Ist%fG)nHe@d?5Sb*jmDJ zPWo_9GVLHRG8YzrTD! ziA%_7S9WB`Yk6Gc`rBKv3)EVE$QR}NUT(th@?8GJ;d5C9^;P1pWmBQr|GBRJ1N}KG z0B)ukG4*Np%0;Fj6lpMZ>Fp21)<4&gk6Ag+Yx(=MuNzI!XR&c}n>u9F746IBJBL+oT=jJ5iqA)<)Na7m=Vslp-;D!E zwTf+8$n_J!`W+l~Msj&;BDG=i*WpTS7zQ3#iuv3V3gcnKG zX;e=yEw_ipJzYOd#|^tf4`H?Z(2_f_^*u<92s+*$)@p9o*X+B{4Mx8^)_j|;xGshO zM=Yv@oOE>~jWRg7((RE|e#-4ao(|yDNoLdq1?}cNd1=_gD%}AGVRQ)johZ&%+8Coz z%`HFY16i54ew|cLg7q&K#7g(8J7qvPofQl2pkc`dVQDQ`U{X|@X4(afzaFDz?5*;@ zGO0pOsmM~?A2yfjjWuYDF4-E0_aa9J`iwlzK~s~vHAE!Fl4J|ckDdv&`VuycyH6IJ z&NcNJ{e>p6c$kcKIT@jv9qYPjaQPVJ63k2loa~wTBUtE_=L%7;O-6?yKn-L2D5{-u zCbjr)SehjnycDcY{mwqLM+P2Aqe-Cc)$pYb`oIvH0HnFc-e5!+z*J~{F1iiMuvl~% z%%B;V3EVI^R7^r_`4H()eJs3Qc7%OlrBsY9BbeAG6ODZ!sz;xfAvw*%JJ|%qfzq!w z?W`NHgj?0EK}b0CqwSmY1MOmu7bzB9M*KP0KU${~T1Vl6MHeF<^B!db0&}G!IbkCn zJWp2>Bs34RoRaFawkGpOwLpU)>|ypSQctXglJo>I@Gc9)KbFUiYiE??qr>Y-pOIKs z%Ju3mYIb#t_ei{#YK`E}*mS3cnE?HA#nnaa9c%XdMp^O+(f^k8IyPcIJUV=>1c7n< zc*xL;Ju>A&x=A^1VZfl4s;<1}z}fhifIgQ1mRelfPxR^UlG$r5YnSU`$tH>|yV8{( zx1Wtox_C@wSvC~|s-$(>73)~p<4mPkusp9CSPv)TgQpdmR9L_M;6htEO+iGS0L>6xl(WDl}- zU71Ym$GiIJj$77TDcI`6;4cA*kyDqg)l+VCz8A!I#kLHC0C7Z8!R&Nq6wff6zmETp z9Ws32!$VV@1^`Q6QVHZHfY#ysah5CT_7gulMFPrE!1u)^Z9waE{L7}_QAb=b26Z^- 
z>wrS|2$MA(!8CBOJ=g?{BY)ava2fUd2uAMh9S(S=+~|FG0JBafPw<}op7SNkl3Vo0 zEmMIvGXjcmMb>d-wX|^IrB7?vejq%(I7&dIdjSx7^B7!qo5|39>7BnoP&|;;b`(P! z@^0AUm=Lyr!RoueVidLSH{9y<_xt*sExQI#e_Yh(4s1g|I>S0Qy>=&DurhT?k>-c@ zJn+_8elx7MrxbN{#hJAwD>H6Krz7Af_#?nKR@?RVj&R|=*|u4AZDiX>{sEtW7hJ^(MFJGq9nn}_LNAsBS`o@j@qL2XWtT;f=TIPcrxp)xXstQj z2CfqyHZc$I;k2~*!;W)(LRNbL&vyJ^B@Ky|9ivQ!@p=}7r#$^ch;an&1;d_}`?OMh z>%%YemYU0b_gkO&U3|zCN4yHl#Wpk8@t)RM70jB!_B?fe_10II*R$ z@65a0;*!j+Gj)w@hg8$4b5Z|yv{RZlQOVwq0z_b~>Hg_q9EbZLmEb2p*v2J(~dWuQT9usW1|IR@Uu%MvE`i#y7VG zHaEd`=T~+tE_k59u)V?X1`xWY6i+_3=G0IU>SHp=5f`MXy+nyN4K+r&uO}3mmBlrP zy2K^O{DW2(iBMj552C$y7QR||n^t=omZTvNa)|E$!V^qC2s6ptcJPnn#y>5!eeE1S zBQy;=G4J{tN`f+wB)Q`bf~MY@gl_;TNigc}@lC_n@jA5m9FKihs-13I=<5bv_jb?S zO-=Cek%Yb1;o0k#vu5X0xcKB5g?mjxp6^}7?0EgCk-o45kM_L5+3D4XF2R>glDP7Z zBaxT-8|8M*n+#(Bt_FhRgD#5GXHG3+*`su37h(L!}+uGUe4EIZD?Mm^r_uBYx$Jhlx+`Er~Cr)j(J!y)=mbO*1DWjXIJi#6oHAHzP8^0dU{b+I&W{t$p;-3I2x)E+Lnvi*_~EA z3O~2FEM2~VCGJnYQBLSMC2OOJyq?p&j$haV2)(OGW=n-RsIYe^TND_A8ci7spDplL zsHGy>yX}@noFtFI8Yy~ubtEk%Qn{ezf+n3l51s0Cd=AUrGEetTTC$(lFLO#vSLN|% zdPS~s%cPrHtVofZu`gTX?1WzywFH)x;4-&^;5|I0k>Ni7%K~VQ)N`btTToK3@on}w z2k%dVaB*;N_EmnJ%>sX|9`zznZ$5@O@O$WZ?3+DO`V4WP)CNDBPAvD2+t|+tmqHt| z;M6&Qz=SA=D2d{0sF%^9-w}glwbUwvRB9Bu;Xeke9~Dd*`uOMp4D!Zq?*}*pC4mu| zF7fto_wb(DK?yv=sF~sl`Rnv|;gG=>D9)qN(XcL$PO)g}(v3Q!TFfH!xCg9Y>Z!lQr2fw}pds7ptC{dmR(I&n`n zK)koLVnZ*>e#HqC+l6I}`<-^_r27SE#C70PfV!+}t|QeNoTv?~7**W1+CVli^1(?? 
z!Jdci$zuYR@2E*6y%~NfkO>jiO_UuGd%~a5if(UDjz;O3;kO7N9(R@}(=G!{gOtaB z9JMTt7XJe-IWAT4TLJ-S(8po318Ntj&p?v{24&Gwaf0#GWP0{C4w%lROO2WQdr=2* za~Sh68cb=g$3$QIJe~BkbR6AKRy4AIhMna{WC^VeQZ$rb4IVYi^s#1<#fb9n-|ESw zs&vQ%F z8!+z2JaZqlHroof)n3p<2|^m;srFseDnw@VLRb}N=nn2u0s4+)@dTvkOkTq5~)oX z>dXO1lU2dmA^z6)-sG2+U}+5~jV+s_mS?qg0E1vD-ibI?@hcZv4RF%o)3_If7BQ@xeyh*g|!vlWS9oAyt6caB-@Yd}iAb zf*)FPj*R8AYLdweIFF!U#USnd^;fB~QeO9?Slb)SnE2@d&-(0&gZ6B&pC*+R#-axu zm{@Td8nV;L=2`xsf}i4KP)Ta;vpCkWGBT&s&*DnYg%#WSrp|iuGk$ELj>%l1iCGXE zWYau8VB-zM4^)7f^0DzrpOynL@>po=fa&-{k3)F{NMOaX9gH$z3-yb|oaWSx4DO!d za5D1ce?~fjtO{NC8L9iN1d%Ci6RV3IZQ%dxT;Yy~xI%d|g-}S_nZn)d z$St>-81uC!iJ6^5SNvIZPa=Z83@xeyv8292EvdMdi0Z*wB$8l%RH z$Rbm|C6kaPwR&|8(n&m=qJH1^>2{!2pokBX0yh`_AuxpI@8( zS!>|t`_K~cz-RoWp8Q|T2-DrKe(5r4(&Fp9i}lW5*83GM;7y~JFHO+uG75C7 z1XB6nXFE@u&o17kq^^t=1#2u2dBrtMfNbrilCXtzW(+{-I@L)@Mi@TvQcOpZ8&C_D zfoYqmiw6A@IPkiEldl$ts?sPT7kB_NIlcLCnkWX?G|jR`R&rL5k+h5S%oM^d{e3YEVY@ zTJ&?kn5F9gsX%sV6eq-1gviF97tsT5Tjx!S?H@A%?*Dux*~A*=UVFW$zGXKe+AqkD z8B?_o4YqsOBo3|hEeDDn%=dR&ef#m>*3Rb}UP%|olL&riRXP}+P?_t%Ia!J=FUP`P zRh3^P!Bc?|%Y1f>{FZUOzHE44FHCUO6mLpqzjRooQus;c_nKgw8t(IqSlPMI6RP+T zxXIsPBOvt_t-zDFZSk%Ko9>MDKFg2oYeO*qw&p3|RE=QIO94esRxM!4b5xfi*?!@$ zG}Gdg{6*0P-6qfPpL#|n-ybd)8P^gCo(*W+ByKP2euQ8zbr^l4a0;f z*14kehU#(XfWSV1Rj+JI!zEHq0qc1_I`fTMTUlL;$8cgV$S13N4_^A#((FR6btEP@ zAwc5`04lv}jNV^@7vrHdn2u)>)=QP;^^K6vm6q?dkg>3AFqr!_OIFwC$#lkR%mPbFLoBoAN&KX*sX`TVvwk=uTLt<$Z*2 zFe>p?NE5}`S=d*!p!y(js8JF)q8RlfniJq^zYa;2pX?^{bs-ec!Aot)fiy!D>L9z- zr2Hi@@-5J=9PBW;n)g&Xq2JANoxob`-!#EXi(K{oif z#dCL$@G-zH2+CZ*VOv2%5XlH=xz5G`tq`U%q-FQrWBCw}I&M=~I>VX6Xd!gzCRD3b z?y%5ubI@$ZWxbH13zcrZSy^{$irj{Fz|W0z1;Zaxyb%X`i1AeQHE}j#;1su_8J6zT zkx*aCeiITUpe@*Lu6WMG)9Qdm7+ zLVF0l72RZcNo}x)i5{Vp{H?Gt zm0kglzFsAKk#q0~t(uMo0P043U7aNkNi~JGf5N1(Nl-cgrU()Xq7A76x)}rm`9T@= zB2MltrsZJ=j)84%`&xK}xPAAw0yVV{bqSj|J!{ayAN1XYH02Htgc(xu0?-^C;;)tD+Vb+;}5v+$d%PW4jL@Q3;&S7ok$P)QNc zrFpb6*@&-0LpwSOp$IWXyCe7*$qOT0`M{KE;X0mma315k%Ol^$o1JlpsZ!@O@Q^8V 
za%Qo>z@-(U!YP&Nni07=mDDeMNL@uZ%({W(Vr(5BE%Bq?6FUp5EPs6~+Q8_utQ#<#$-8sA?rb%-1(;THn$xF@cQ&9RNZ|8GFTOm~Mh309zI^sc)ru634y(q;+o zU{Ix3MM-6TN>wcktvI&SJ};KOPDlvACNJhJFYuwJ)h03d)Cu%9>iSo|QrBfIaQIMT zm1cjxS=*0h>sSM8ZZTTalsbZ)EI1@5k#>+s>Jzf!V!bu<;-`hiH=s!cs7mleh`l%t zX9J3^>-N(MG;d5Vdbm@zpjW{vx397~91v)>!EKTmAcqBPUcZEa&K&&HAQnp9HE{R|r2AQPy ze_sXT?O~^@aJL66qL|wl;sUIQ)O*S-eVoV$4b=H|iE-(qze~6`0?QMm(4IJ*?kv0> zR?KG+0=MmTI+7TGCYi6>!7ir<(J&BQG#(+;Oxe+gKM77Wx4ZdmN#vJ8chhd!IqFQ+ zJj3X^|45a3(OoE9SIg=2v*|x`JMXsf>HldNnB#7vh>ZK)`3-FOvG5(DEn721lFwuu zNKjErzU!s#ueGcp3!h$QiG^}{GwOv$Oq?Hh!5Xw+K#V1QkylJt;0ro;(vps;2mpP> zVlO$;E*W}iB+@u)94hK_5of)doUN59sCLkgZ6Y0bIBb+rYNs5CQ_$aE)_`CoB( zbI=K#KCC?VeP)#89|e0VEfU0>Klf6GIw1w)O;dB!>0BsXgN(v|Cal!o=TkGfxLNn#9DXxt0I zW%|a-Vf|=kW{rN|(trka(@N;>|0HUJ!O-;L^x98`cav0rl_0Pc-F3Vs07A(`0Vo@v$s!XETy>8ScIbj0#=;*-H^^&a=#c zIQP;J0b;oPs|}Hy*%|do(`^>MdMImlL@Mq$YjxkA{^7>FE{(5Js>H$bEe#4v%uxVe z5*nFMnz8ea)|f*Y<*DqrgXJ>BG>K;)*WDVHu9ZkICoa_3LQY4-fx-n|r-nh|pJ`+L zte1^!9C`R|8F!V1RX{M_ltFxVaAoKU&eq$LJ^pNy(gF7FvBx#){9KR~cz0!0n*bqL ze|mbl^kufEKt)_rdu?jb<3;h*`|;rUip%Ybjg}e1!}&4PS+A3=OB_FF*KKJi)F1ck z*T?9Zn{DkdKtZ{?#A3~fM}GFJL-4UAtZfjxNui%DAY&>y2#YxrFo^xkLp;Hy_Q!}U z1&2oJKiu1mE#`~brN(mWX#N9-wX(=vTgpOZPeklFr!kyMsR~uyiYd=TeajHn|IkN| z;g=p&)~5JyXNC(g(E3a2^sTXqZ{5E&6|GW0x9w|7uib@DU%P+$TCff9r z!)e$5h*|FWcO$#1(BobRTjo^TS3~w6nRXjE1BhkGHj?!kW<&9B_8fkl4&VB(+=W~M zWfT}@!?39};=D;RN;da)BlORRR>F8@sZ=o&gO-8LkNyQ&*`~~9f=p;Zs4SCAM{inM zgPJ8$t1LcV`nGrcL3<=fO_)K0=UI@Du0vx|>=lf%Na$}Tv&V1eH@E8)7I;;?a4p7{ zY{eW9EQMxKqfSJ;HCF1p-~iolNp7ca-L8ciLzRV-u%Es3Ojik*>v7qV0%R;L<0HGd zS!;>PHSPI=rucb=;?uzit}^*l=F-iDHA8#D_`h(Xh-kTCeV~IT)|Ypc>If}~!Q?kO zyce*E9|-yMT`(7%2NefqsTP*$sn%W_fcyiS!L~W3;@@aPd${N#VfjD1f-hJv6w1x= z_0TN3yo`u|nfc)eeh((1y6_&t7(-0{uJO`NoDjVj%jH|q@<)-_Zny*EP za8I@fNj4^CG9khMghMNuPG5sTm$+s%WrD}*(><7rK$f%LDLqUtC5fssu?T38)0zk1 zt+5Z86r|90*>AVAJ;O=T^K;yA&wthgPcfhz=0&r%7rMIv+y?yr(5UZLB30AhKaOKR zDKonqm|mQAYId)T+W~XDgGl+4(5p9E-3)7JPPvmf9EVNrsrxH0G8Zd|&HdF8-q1k~ 
z$T9H&luq1_byAtgZZKpw`}uzF%klzX=3nn$C;Jiu(-IT*-^z8K4~Wh8pzw`hW|iF{ zMts6g_$2W3e`3@5|4B+Sg3{m&HcnQVdmJ#>e3h+@lzQ}U#HMR-{5NFUL>98u%C&{1 z6)&V}y7h;&M5|J7KFu#>)6Y0-Sv!UiQzgy>Yb4aZo<|jR)yF$BeCu?}5!PDv3FzpLH2&kV zk0=xNB3p{o0(R@u<4W2Kispc2G-ZnxD`6LUzPS&5yKE1mk)#T?E(K^5@^CV9?%*(8k6nBtFiCheHh}iB^Y!8`hCl4{KwlpD!s_oob$wZV zt6O(=p02N`GZ*bKGVG&REt+`U55*5-&R3hEDL}S3Um{|hp{STNOmTW%aY+dQZ z5~kFI1WyY40+ys)w$EKWd$Yg&i^SpUBe6`E$7|+sxet|0fNp>s2p4L+m;YUf_%BO# zqVNzKo^dJss5q>Z({GB)kK}V%G}AFBrMy(z7Khhakqyomn5eJwwv2j} zY1`THzI+hYVNL+_a~?$NH7A(QwG%P;pOZrtM~)Yfy5KOb#HLKZdK9U0NXrZp zX{e-xL&|lUe^R?#sy)bUTHbURNFmHnkZcRtk<^o~wDOtg{p2IjU9I z!DNzw%m9M6Rl2g-!=);D4*N|Ei`v3ejYGauXp)dO&GUwO$Izx5NSK3;Fo-0b=9^WQ zi`j+abAcE1sW_uLJ2Q5m8NjGUXDWRRet2~u?0wo}Zqv{1XsVpHo8Bka7ijQPJmxQW zL9G#ZXVCkT35tp-|I1pZsWmnEHqLdC0N_U=rudnz=Q-O1fz})Xne73?TD_-m&SH@n zZ>A#A(GZJ&0l`z~ioK4a1uJz3f1r3-9(V+kv!yU`H)m2iPz{xlUEZ9F$Xf|`lra;g zscNp`u9kxp+ifBAgB^eaZZ z=z2(JTKS$;__zL8@EvY>WDLQr;pca=UxngILu}!?lXuXt0(BfrVM7YCXl1uo847%^2 zE(XxWV~qZ2PPmY3|9|C4LWYu`kDlLyj6xGC-h)Q+(~6e_0Z*J0TZLXA>O1W#)<~n0 z;<5IhsNbg$2drBNL?MAC#tsMn9O_m9zwcaH5b~h|5D-j@&eF5wP!i10(^BxR;!MSI zMu-rSMsQR`v11`KamY38O|WB3^XtBQMiT6G3# zo%YeEpLJ8MfN89w$2mN1#(Yn(fN8}Y)~=GM>+E#>$|&3s^r@k+H#KsbF6ssZpa_;> zSsRTm3mD7#tIofA3@VjYPNf;LFJ@6S=O?dd2FUQZyBFbebYj7vvV=4?q;h#P)`Ves zvLowj3y_nT2ur^~T)_O)L=|i-YJ=pMi-=DUqigERA8g>J zLsIpo|B;UZ+`V`BP@*-po8{7Q z4|T^_SgfQSH>X$XDp*g$n=4?yYc6eS>p{^dALVWe!gVJEfSL%ob5K4h)&QV^Sh4L|3U+zo-F6wCwlX3b+q-2taP13 zew_wr^vir-g^&=5a)Cof6a?E>VHv;VUde5*q)Y*5omwqTRCP{E1NqbWu{`cKy)R*N z+RYsjQjGxaX9JMcVqKT;?MC*r0n$6Rf^nw}lo>e!3sV6S6mRfOPdjEMdPuu0A2>=b zJO;96JTf-(P^-}CXs=iC)#n14bo+(d9-*71xQD2L|2f10V4iI+*NDiG0um5U8@PZ) zWVF>>m;b9a10o1j5Du8AS9_(0OP06_OMJXSbW~bWORMSx#i7kmex+z68_F&Lf1-JA z4&pYA5+yjLF%qIjPb`=m&M!1Ik;lOV7d`pf^bLDb zcm<_~Nwt^|fcgo5f(8&R9cAB1G;j3tWwvWr8o0`;cDrT4ULlr3Ih0t!0N9!0C<6>U~MrRqGL_>h8*1luJGHD zHfZT!)mCm;ejc!C92m$7Ob)hHD^ef0IDoY z3Q_=5Fab!Ff_YX_;JftLJE>0y1C?2?)1N1XVRqld5thzwaT5*z>nhPwPwCljbG6J; 
zdTDQUB!!5j_~kPA0I7>+-TH)%k~kKLLSEaz!G`+UqEi}k161fYCmsu)m;J3#R467% z6N>y+a$B9w6Q};odrqv(DN%`Agz9t4Z>08y?bj>!$JX?izB!01Tc3U2bC;~ICp(lK z8%p@2bwf*yW+%FxJ}bVl)#e|PR4k~LbamU|@HwrYdkl{55Cii1%lb!F32Dd76A_Iw zM61}H_bxYZ@#rrnt!U;a+umVWEl3B{+BO7Y5dS%TcQ8*MRAuY-;@rBZ+f81XeG=qg z|2@P};FwJ_jh?@SV2R=;f=`vuQT>|Ek~WPnf_iNjfU zadR~(3qF*8xM|pRsc0lWtn|?thvKJrD`53;2}ZTv>cWp{p}UpAbYCYHXy}E>*g9Bq z&^#9!np+rfyc!{?;s$==TtTlqDa5N@B$zNeFaEOAjzOjFm5J5S6jfVbV6LugzsnDf zJpsxBz#ADeSiYmORQD3E!rN2y1-r$ha*zA0P6znQ8}#^2?0|^`xJd>CPSf{b%>F-` zzA7rNt?9NKCqQru!6A5Xho*7Y;2vB9!JWq43Bldn-QC>@65QSOlJos{>^=HvKd!E- zHEYT^eSAD8R~CA!K1veOdABuA1`tKs|I9ef4ICh4HzVE7N8S(lsE>aGOb28F#Qw}w zgKbKGfXO_J1M%F&tSyT^r$p9tU`lVRCpQee%1-wAad;5%zB^&P65!>M&jRk!@FrzjF2W+NfrEf0fD*u)gUwwA=LGXY zGmc&#-KOUc+9Hg=BT!~PtMZ7b#&ut0rfL*K&4k}M^3Lf&BEDanG?^$igiFTo`cxU0 z_VOY=Q#T^O{z5nAX8-(Gapi)2x%W65rh!7*e(Nba$--)}0_$gX15w-jXDJn`arSNOqA!RDdB&KOoxyupL9!<|j#R zKmqViIUB^C*r}tsoa$$qRa@_{o+Js)anM=yHy)lJ_tx~&JnI%0-A}?haZJ7x5iin( zy`5tF2q6T;=ldhEszHRp!B(1fYOTVQq(hq8AGD_9+`n1;PIavC}*) zXdnQALmac!+@;MEFl*8h+515Xqr*HsUJ5)l#8@eLtT|yz3X7~RS@v3AsB-|i0Omy= zC4vJa2&qBAhXv>Wi2352UHq+7g6uLX#SyphWr$0r2t(YLM_IebsB#z|Q8@KK9N;f( zd;MqHTl@0@cX2Ks;4bhTRr@7jeK-b|k&x3wLSFM29Chq(;p;Ym8$}Lx<8sEHUv>55 zL*?WB`s%T8J2@|{%H^_U>F?~{g!Ru@=eF`+lGmwEGLJWd{p4eFvj-8Q}d{rk>&HZck{RT-8($W;M}A^%_lH$$yoB)ht{} z$|D%tLpzG1;~eZ184T;Y8Os0E9hlVr{lA90As!6=Q@2TUOP)yZC0!WiJ}rsU+~wm( ze>JeKQEv|uimS5BX{=v&SrDe32c@t zS)NzeEq~R;jkup+SWA5Uofd`gz=f&OlWxowll;(J zxTHNvn-v#ZR}%eOLM7@dxbxqGO6>EHaNV==2`60|1`cwg-3a@ z>eIYvj-kJy8;V_AfTGeRJ;=wU5<8~{6Iu=0NNqQBk%jC&wf0M@AG1ey$0d8IP|0j7 zj_Cv20?tC*M3?0iO`y4{<2m@#_mW77x=!IsGjX8`p6Fa= zOYGEwa@E>^_4$xKMF9Y&Xj|?}*OHYJP66-=k@WO~VK&GlD3I)cxx$jR>XTe>L+wyU zh{cr^N(H&XY=cf)ngw}KCbLBLgSUtM38vz(X1;hYH0Oeu*hF!tOeG$D>xp%KXUs^n zZs6#%5&Q^^xfb88zRu@2~0>HgMk@SS}`(m}vV+x7M({0XHvJ;;w_ z(SDH8@x{&(2EtkZ4&@TeDS!mGG?(S;AkT zH&DZ>4-y+pweAC`BI$D!lV>$N4CWt-gmw9l*?e)MHdBz{#qzQx@Vr%9vQu*Qym@k< z_qvoJTk*DCcM!(dF?r*>@Uz8jXZiYQ{q?{>*2ccobMN$eH$`Bf&Sj0`JmQ|%{EWfm 
zH@`>WlG&2UbFI7uFf*T#^Q89Fjh5k^@3fE=mOz5L0Blm&$}yr%sI4kjF|+O8kw11T ztse^B!P93~9~1cazw#xZ40DNNcH;{FlqG}GHo*&ZxB=AXKa<0|75{H8l(U4v>rGwc zfb+HwfY;DTO3urM{}xXClq7N<|C&73?J~Yq$5)aosQ0n?6RVrZZ_Itbhfesgcx@UX z!tq1m&*NG&6uCM2X#Gpsu&Z;cU3L34uZ4*>{t`hmINa z54+6<`^UVRl2Q30q;R9#IaV`czME8b=M?bmi8u~-(4JF$1*|KNAuc@_HsOX9R3{Hc z+>Jf;JG3d*pjl1o@}L^6m1V>IY*Ss}ApkDZFQRsd>*;vuX=b&>8O2TqE!q)X*K^-4 z;jAirVx(-6QInw!isU^?m+xaaX4KJkiVwjrErPtB3;)&E!%*eS!h)#Bmk?jLZ8Q|& zaY(@RiDlH6POJnw?gOhOa-g6XzA{cB5O2HTtqY#ch>6`JrJaQ^wQr4Syit~|oT828-SSBYk>X>yTw5=LYKCg~bq#~|fo-#gNUuvcZT)j&#gbBUIgy3yhM)UgK?AJr%cGh%%e%y!cShi4>n#86yyLB6 zra%*P-`_Vflkw80CwKNdMJ$h6=Wu?Lp)+gOZ8`Hn+Gq!t4hQk`+GW1@n$W&TA)&zB zez<_3ny85<>MY#Vc@oTUiMFzMOY{Z5BFFW_#l~`gsWC&j=tv@+A0bj$QWnhaki9 zPikC$JS6i*slj7lo-IxytNHwBr=PElTr6_cQHedEL1ve=_hceoDkJ*IW?YfG*fJCZGYW%Tb5~knM zXTAjr;E*V67w)IAC6$@YSF_jQcdJe_C&bj$&F=q?t{_M!o;K; z1SixA>^r4~im@7?!6L#-^=RA0Bf-qgLb+A~f8CRMceqk8zYE-kcR%1J90zz8|~}7xbZYM@jKIc@F{NYGHFRw zXdYR!?5HfOG;e7EnKJx zNZ#xH7>>8htk|^iTiYDK!W{lSa)bU`m(dP-xmkZtsoC>>Pz&wpcW`*B2hWTo>lEI$ z5dPGue8xP#i0LeWN4@zX_bBas-6lwz1ZQQhM~zd#1_r+F5&1a2 zO|05G8*=^cV!}jbyG+)ppVp;_lCSw>R~gq+D0aI+mCwSK%(^M=FHi8gXeK1>_4aq;=OtILl0os#2G$o06A}h7NM=OWdV}o(zug+&;wMvNx z4vs02dmKoB^7^K)=G3`Qhee9HV%5IhmFE<33B&4U(f!ZJTjTS|1Nq^tjGCqF*{NF7 z$-h+xU@&D)$LtvXhC~V)KD|NaSh_Mqf+qdMLNiS;5MRqZxy|lxY6854`G%HF8~f@M zNIk<6ZnPbcd~ok$(s677{lmJsEh}O;fT)UdUU)3?J0cmY>tfXY=_>CGtaY`gqD|=5 z7=+>$CYJ6%yiN}r?M7viOAqw4K?2}Ywqs-A@ZSa#4v8J4yXI&Z)?J6OX2L+QEeI_K z^gL~AE0zR`;gO5M`bw^y#P^hc8--YHb69KA8(A(E0SN{UnViT(v~mMb_3%m+3bo|kMM(&Dsq*tr`-T`RYUd2ju zlF;hIy{!$8Rh44eln79k#w{)k%kvsV)4nn}(zZvv?goGvOs;jB9E=~OI_H9jJ?RyW z>JB=dZC;NKI5l|+$5V(7cv>AEir0AxL*%Cmj&wSInL1E%^^d?$hVVa!9zmp0q_q9u z>f-O{Tg_@8&YR@hf5c2V4=yCQhWy-*WQ`OlwL-!s1I~XMRjRt)v>7A861TC~{$G<# zcZIpUeHy*_7{K@gW_~5=CG2SH-0J`Ez75Lo&Thcq>%c}aN@MJs|qNwGl;LmzRZVx^}!2mVm@ABbh%MEC&^s8D+T#~Y1}$fgLh-#UfrP(>tXqH=449W3GHf`z|I7Y8C6$35^RDd8w)rz z$ehlJXl;dT;{_Z~XO|jh`#f@@&Q<#tfNuk!-!b}CP0-0)d_%20`45@FXXv*~?8ZY5 
z37kwrZ+X^{ZMAnW33i8GzIL|Fmcq9}|Igu{~DA`2F_yXjP+#aJN$ptef=$1Ek zILy=M3U|^me21>(ncmYsivchULpfrhCTIq(R)RCoA7J$=F+YC8>_6}r3Z2SG7Bu1R zq^rXRSfX@FI3>aM%g|4NXLg^&;Wfpm0VDlp0pU{LJaT7T9fYrF%Ato_4;U@^eg^xR zQb|hw)bLXPZB~~Y`7i0^0>u2n&X@q{Px2@Q5xsv*pGZEPM0?%JlFxH_y!ikWhF%^Q zI?}u@-V3BVu?Lgqc~_9ddeXbfY1NzeYNC}5pb!%HXdOF^+F2O3 zCas&qjEAdwuW!m7ESidL^4x<2VN`1rvF=64=t^|RPlJJ1eUfdV{uZQ5S>%Kv3G4X= zQyx0@+U)C_RLS&TF)YFIX&u>rWnQOy^ytejh@tXae}a9FIu&zsB|% z_mXkGi@H`Q3R-^b<7bED=;?w#%g0+#f1t33~;_}<<8Tv?m zyYG1o0a2sR#kaQQ?P00`=o`LDtPLvM=wo}VaC!Xs$Qp6Kt(o=fKe#g=+yET z^t=g0XUeo190at`_0M-4uIAGYOa^8i)vGeN#eVu;6<-PdRXx`9 zT#%%@M&rp5um9<#(*`OfWhMl`On1uqoX~pDhbi*{^jx<#h;yoQ)zObMEiYPE$--F* z%=O^EPiF0~tM)N3&oDfBP(GOQTeaehk7k18VKBp!2yt{4V>6JZ=7%L`4S1Q2o?abO zT>$K2Js78u!Q#m-yt@`*o(A<;9SteT4>#b2J)iQXoP*)Hh5U|3#@An61yP_^iImd{ zS?`;LHwzx`lh9L9f6%TR+fY$7?oxF0~ zNKVqf_(pr&#r@x(;UE&w5VYfffkJeOEMKfOf04_=<1yln(UX}mjKI`#uRPYdyfi14 zfyb3%mQ4t-ZvNhtHi)50a`AHHf+9_yY zyB+Fx&JpQft1>7d7ve+JCr4){A{OC!;pC~f*utOwO=6%F=6>7oKMwjb!#{3U<|(6M zQur7z_!WkI-|##F_u=|I*p+2#bj80in~K4O|DHW;Tf|+40wt^d(25@Ehqr0<2nQ!E zrsL^#$5N-#qh7Ad5CPe!v(0CG{rcUOrv4F9n`aPj|ttOnTUyqB(u5;QsM_6+LZ}!FzkObQGRdb;Ns1xOBbSf0a+bP*G65~StTpP0ly3mX zD_~57!$TH8^4J1xK&IpI#|=B`KpsqwNL+4ClMC)Pga`p6>F#rv^Z?9D)M=Ljl14#C zA4uS$JW#Sy=xX6B*}_IIes?)cw2nF$sxHJlCceV4k@b)Eu0&f43R|!y_oP=eHTldB ztwPMDZ++AceGQjLSsPF3yhaP%C=|}$ValYhhni2TUR$L&Z*nXknnE-`*#6$HTK!P%7=_=Ip|mHcK1T!WkLQgbs<_|!VJRxCMg zXzz)Toim*CXCHZJLA|RO~bS35``9O&@*a@p+!_LOEqBPo;~B{h``vPU?fPLaDUw?i@4~baIb)eB7FWOn6zkomA8jSYdOSxsUw;o9_ z1m(1Zc|=Tmo->2tAdz`N(wsxSXvt2O!-FteE*=KV1U(`R-&F9_gCXSO2cWZ~ zH{f;;+tVgRSJSGGswaQhg@;IbA9IYFsL+UHlU_%`yr4&w1qa~SpD?r1d@BFMj8CBq zH}|9N^14l>8!T>gSv^kft zeqUJJNR$Vb0@#g9d+n|9cLbdKB%nemr=b`2Db4A%zT93fTUI={ZOR5k)zaImEii&< ztTku8oC+1&;`H6djh(5f-doRIij8qSAC^>LFntg(N(whBj-YK$yN)2=WTmh1m%4P8 z1?!y`q9N}J;m1yeC6U__QsNbDE%~~Lw;deb!8&&FMg9YyD!?$C2oslCb|zNvTo|Dv%-&n za<4=$cbYmD$feEbWscNcOPpUxyC)He)bIxKk=lNmbQc&hiKRHhdi8Ksh@-Q3IgeE9ERDG`r&L<#Q4 
zP^sJj9&rvU{~kWGK<+_N)Srd<(x^2*qVtYxB7V1TIH&E05G2(4-~)o8*W%$XzZEU( zN0$Zf!=pv>znq+cW{>kz5)A7xNoMsdB^T;4w9{Ts-`;o`)?1jZ2OG-9P<*~=j_2WX zH{3WiF08xg)aXSPoLF0PlvA3tBxm({o`>HKXu_f;kJWuX3B_R%*svdOr_Ql{;t9Aj~(+N`7~(UvuBo2&wL!BhndAnge_tgoQr65cH+sEPT*_CV_J2T5uFKR8>oCA{U=glLN~IrWs)( z4BcFZKyt{fqbFD<*k8&0@fAW_Obrn6V2y5GP3E!7v?QYu1q|V^K#SbwKh#Q|V5&=c zJI}EGnliH87DL5fo9tyzOPm9Gq^iia=XI{b&!?)@dqf7RYp#gax({bZ(9+BNM#a2+X zn0eXvbIt-6r3>gVEGGIZ5)>W9yn^h}`}pT#Mvz0FUtHuXuIc`0WpLiln$#qf>&|6g zP%yU({Q~jE4RLSMz%3;YDy3-AR2|n)_csjVK24d_I-#8QI(#ANeSXO>JUoT&-+RLg zmsJW=T9F=S1uM#4PpO_Hs`r(nOD|vGb^43nep0XdR#+AHqi9xQcgyFI9@lk3F3WmnP2}dj zPufk5sv+Mmfkc`D*5&d8xN3sS85K7G@V5c3h<8{u^$=n9#Mj>2nL^KzYR7%K0a0Of zuN~!VgFEe=%5j{R;nNp*LjLbdk2{3()eDS}v#-k(i%5aJ)q+I1;-Ap>$va6Zh)g`j zef!Pay8m`2`JhEH#TZO#=eg8|bZtH-_CzfV(O4>`4O}&*gsQM3P*AV zY;RPU?l7=c`hS>!ALy;Rar)IXt_xQYAxg_D#Z>u0ty3!IfeKps0#y<}Oj)%#b8&CCBxxR2}fm>v37Wpyo2;)u88%kb%9ez!JU^ zdm}XIhi-8pZfRu!?id*#{A28Q0wxEQ_S2CJ6l!;>|C7sXRO)h@xW;SrOpxcF zU3akh=YfBmw;2Lp`{N%B53T*n+Q!2{HD4zKQDz6zb@%4W!S-czmjlk9d)9vc{}A(q zoyjpQeb@hoTx_8u0J1uy7a;wpUmvzNfl>a3c%riN{X-Tp_RQwiwyN?{=6`rZY6!>q zq$C6FmICVzbCd?~qJk1?|90WSduP)+bVkbv{uo#j({$FcmHw^tKy(t2;3u-6Wb^f# zVJ7kPt|p5#I?9eo@xI(9(>KmcZ2P(#E72sp9%3)enqR+zvajK)e7_eILlScNV5QLr zc&(Jgke~M~2$eYGWs_cNY|k-;O=y50ZQp?J)R)3UZkT8?Fxa!X+wLm>vGly2BNJ=L zRV)PVt(I5|pm$dr^3)EgB=kWV9R{X8?zEhl;V(`UWO&~0UJp@a$h;j0yyQHg>O)Nu z0SvSNNSQFVL+&#+`Jd`2)U%OLjgplXq{MV)g9lb2aYd~ zd$%!8#E%gbV4_%Mt&cFa@hW~OlXKvKP`p7sKRn5Ti}vFK_~(n2V}ts+Dsah`ql2`r z9i2GTZ{^}8Nr+f&h;+bhIg6phhDxuV2vnFg59wC8Jw!j%Fo7~N%)}9~QS*CKCM36R ztjt`e3bs)bbFj8t!+HR!CX!tkQI~9sV@Pdf0r&T-DL#qx$A!GDa1+ZiOX4Dw+t}gj zmG#&8l`A0>wMAoh%c-exY?PnkHK&GS=U%V$=Zz2vDTNjQoHflQ)iuo8F{q! 
zKMLI%NcOxLD#*|uyHwioFJCH1PjF2$lin+P>BI%`hzT4WJB1lUES;m_x7HKVqi{BaSLRq6kKWbdpvS)>L>72iu3y)%UXY zjOA`JJQ?i?0?){yCf?u8mL5PXn~5HO6p_b;^*FS{!=bb&ybf!-)$nPhSI}~M;2WX=FLyuo6MbC zb-E~=HbSt5$q$2Ei512C|QJ7hjh{ButFZ z8BLC+BKEd+AkP2LyUsrrISiVKpr+gVB5<>3kdtP7LSR2lUesvofS8 zi~XpEzewPigBz#g>bBVa5$r-mNKQ$n7KiOQU6s7J=_8`tN9}f~J1*bL>Tk66OMWh> z_mB9<6*`zEW`e#y1w?di?x6c+f11P--zf4Aiu+9QK+#Prf3sAKcI2<{pt;CaF)DT)(Ec8RAG;OdSFe*O%g^FhAsN zb+ONt9EJ?Nmed;>a0J{n%W!8jKTglF51iODI@FuhxMn(DQ(sRmi}O>4=_-y`a}= z1fn8QVY$qj5=@k<&SsZVV;ohaKX~7i2yhVV5`q6$9nfu=J-Y$eearsV&o&VpVV=)!>CLq%Gma_MvU?Bugeb~d0?_9eIhYkDfu`G zi`%VRczDl;d(c)FxXEr?ZD~G#iQL>&a!lX4QfUxw32q+JZ{Dk`^ zK|{PgG1xt?$vNru-v_7&i-|gSYu+`V1j+0JDNs|g?m&$LDxLxtMc)qwO<8yY^eiVc zl!R4^{6D|kiqCMLOgM_2v^!oM9c)E`&<5)Z8fm_sLO<_+vZ&Zpcy*@04?h~|XP34W zbq{Kyyd*DwoBTD4hAn(ejDm(pxa5VM{3sT;erTYBIy-(7@zdxZ$ym-B=Dz=lBWABX z>!))PO&^l+gPdT~s)y6p(PenW9VoL7EG&~$Kpo&+_= zROt^@;HL0qL z*IIij33tr*0|l*<0DJ&CUk^230@3>!fIGc|1@X;>7rf^);<0fzc>ZX1c$mHpYw#)` zTdOc@S}wjFeq0>W7eliV(Vkk_)+`QW7c!!?axB;1jNKps(vDVY-dLqa3A_#xxO7~N zudEduHF!S~_>rO3Qel7*A)g4^pjFXv4FH+StkZx~7cF`=5P*BI+Da}sWs{8iegO;q zR{_e!-=OWqvtf`OK)K6vswdK&P)3-T6sKPAB(BtWGd$H;R#?tca1@rW%JnG}T*B_7 z4p6g9j=*xaF%OJA93;$0NYCcsHJm;imi8#t69KMq@DYN%k_L8~1K+v|WI^CEm-<|hvTMMFh zj_#a%)W9=rTJ#|er=S8$Y7-pZC+!gue%JafofEVMyZ9~jhsRA`LcL$6N1DB;26JypLV>3INpoGA+K4~SR^|zh|Q)B*gyYk*6$$Z zwsp=A;;1@}hM?w^#__vsjKfCRvte>xvPp6Mwl~sKIx(;RTe|9tY-C^piIiz~DhHvd z`gy&$roQ`7(QS6+RruI=b(=$f^~0F#LvpXC$dk!%(XI6$8zEDa;-}VjD=zlgG<=-? 
zy*ry-KOptPx}RKxS>o%af)l+NoT$i67ZN)(TIi^%^5F*w1nXkSt}ka!MY9N5*~=;k z+G#=iK_(W0PUU`0rGXO(XiF(=&YOHuSR|7%@`G?(-zFH%s5&|r^@XJC_*Qn1rI%qS z)IWLMns@iLCfH8qTv(B^2KOEQXa>Y4B-5X{v+n#~Ig> z%RfvuGlCITyjYJuL#XPD`s!Vbzdo{=oJ@W(xrufn=ARV1Bz~8ktxjs4j1z#NB@E}$ zQ(`k_rt8A8E2>?aIwJvcWT)4JHj?yUr1Qge4ovt$*Yl0VpI3HlS@;97{3c&8L5kx` zYH()Z4^!gkHqYJS4Z$i`IK|^Cz2F^s?l%WrS~;w^UNcEVHCdKNf_HuFmn61 zPT;a8-2z>N7*Bqt8X=uWj!kamyiQe1L1YerbV!|UZ!dSVjK`dLNeZ!?2x4*`6e*Yh zHMfvoj-Akc_gun9E2geffVf}|4&a9=Q(F}`fH+}Cym-zcO`kZGc#JyD*%6v`C@Skn zev4*%QFB_0b2L_Pv1b*`Z2t|TB=QQ6DV}IP=9;JOaxKSi3E3@Weqi04ICHzowfQ^{`kCm$tUWXO( z7kM`fgGKC(kV9#`=1Qi0twJO%q??`k@p|=Bwx)ywu8dem+ug_7#XiXHsP&%FgxqKaYmd`k#ygMh?O6P{rLZyVg-9?Brap|Q$@a!7?bAAq zQ}wwf;qYt)!9`fKMVQ8 z>|A@tNj$|=UEaA#ybwfGKq!XM;%vHXN-}=U)Bd0|!j#$&L=(H&UZ;j|NJFz%Zz+Ej zpada-usYX~-oM{vXnBv)(cuQWUZn^uQw=i;6U(q(fw_$**KMHtTL}`JC(o*=1kSIx zjXkgOi`n9Q&=Mzc5QF^s&vfa-Gb*R>mqS!%ByjieRRH?IO(f6lD{|fYGOcz(e_BUh zz?TFcsrBSF{f6r9SkMq%zAmT)qn&_4nHYfemtjYEs;GxE16JY_VmEOff17dTp@F3J zEo%MuKJPkA$jHs2ipovQmO=VHu7heZPrDk%$ER4;nNo7d?1LL5Bbf&iZ-iFW!s(vu z$(Ip2e5)B-z;hzJ6hZCY$NO&%*DMrk-WKga z&uIAn$B=vH$O*wbtH+{JUEFWxh*x6_72psV;lcde{GJb|PR$-M+Px*uf6>_O2rGh@ z_~9FJ0fh;r(F*_$)!hYK+BnX7HFiWYd-X882wm!NGi&yU`aY0i?U$f(6>|7@ zW07z%8ZxN+@1QwiOL_X0_*GA!z_#TEDndU!i3uCFnSZ%3I75aCs2*7ft*w29j|wFc zgDNDLcwtX(l;fk&i$Me=s58PyIilA(URDE*e80*bs0pf+c7Hf`F3oeGPr?L_#~_Ii>WAWlhZ@EeOQ;IfcU^S1 zc@&j(H&Jq)D6@$k3#5oB*G~7<#XQ$>8s`_R|KD28qc66LJ15>eHTXPxJ+@49e6OU;9%kOlbbzvNnE20-P~UVH^j3>IS^3vYld8TvJ}A5p?LHO0H|z z{YSn0pOl7=5-RyEOB3Y;55QgEm#)tC13MzCqCcOniwSKr4m$JG`4}qiISD2z-(bu# zAk4$-Fl4Zr2;xd@@seO3r9hhV_s6gl)C<$j6kHxO?=9nt2t9P#4J?0rGckfu~Lw<}VwewdGyx+=toRX2CoWQ$I5g&_OXYEPolGZ;GSOhBu@3GMMhg z1M;B!vpwOXh~%7Y)>BVcoA`7<9esW5{1~4qhhC-VW#ig&pc9+}wm;cuNeduHE-Gqp z+M=^Yk}T)tB}W))k|XJ>_*U{OzZt zt`vw@=_c7rYV{HKH9n`?ug*;(3K@-4Zn5qGq z;Z~mGe4QjAWZoH%Fo;=?Aob2@0f5{8%R-OovbC=A05fAXSEy*2IAH0=&n9_7YE8N* z!ivcadf#eu5M34?glr*iMHspE=?%H13-2eHsRHB6{2S;=2ZcpDQy(!O$!Fn~FmvxEQ0dtWJEm8f}fOD;Y0|%y`aq0^_=%|j*RDZ8z 
z?tKK@;V(9>`8@IK|IPGXi3qd8NXtWaew???7n8K`B1lBD607oHl^y^soCu}TA9@%3$>MIWM!XNy3*t*y5zS7=_-4SJ&2S8? z*p*(vmt#wkL6F#yAL}OCula-zQc;%52pcem7Y=*#Wd~ss0vTU*_&9aiUAnxE36GCB z<)0AE&qxvwALUt%5-DLl5bwK=Y20&ep-K8naUKT4318G*Nq%6S&#+iK$q`=AW{TXH z{Hz%g<(XeOu^_RD&MXf(o-L;H`g-kTy@d)VsT5*H%nCrJoy_8v+8+I&HXGdX9P~}Z zJ06~kl&A!1I=aA$2}VK0yJ~bgf*GvlC1!2@o6`92C44moRcqJP#=s9%lQzOEb`0c| zpD%M>(&tY(w=>DK(tp4vE;x_Y3JwFmYe4&@wFSm`d=5;|R(cE2ps6!}I)#CS=Z-hb zvf~x0M_D`K4QVIyBhC-0B7`>8QSPhec6v(!LGjQdL@JRg@4`yzUr8R;m*=_LF7s_l zI~FY+dS8)9;!xbl{r>zszwm+|5FU(C9_G`ASk3>BG z8J;}^}|^TKY!|H`T@ z8WDfG%`6fetEsiubi4pyvND)`=ewN+hqC$=QHUep-HVo^Oh1blX(BhXq-d|digbLB zo4xxUYt=Ay!sUTX^i#1Kwi~R~3^THK2`3+BQZc3IZ(x@seBYZn!yV_V`byTVU+3Wn zmmmf)l&kp+7M#>tp=QBLV2>=Xh^ai{a-h*ZKo}gY3J*;h+P&PsS4%RLSS`3@cF2n} ziT;{;xGYu%%-y9Ld2lM2DA&Wqt7Lh39^-^D(n)+b@VM#Vr1PR9oHEjwYv%Z9ijU*! zQgRh(KQ!~)lFMUK7Chj632cFTUGyt@o^tKS!0K_;OfvE0e-{$<2kw8)Nw9aX6tYnM zU8%?(iGK4+^#nH?s;7NEY*WT{bt!y&K~q^+E(l;*p4$JbesATtDN7+k8mf&`FXr0z00ElhF03> zwzZetMCVtzRdIlnmPTfp7df3CK6*(yU;U(@)kwMHrZ<9ttdL2)*Zm*5GV54MR08-> zBSL1u2+Kk>A5`<*7%Q~gB@2Dz<^NnmT3OHsc?B)c42{tf%p+Xi;e~clnCs==HeKxM zUhxQ4X&Px9H_>gWPIouMN6g>Mx#@HG$R^1;iP)%4Sid1_X=$C1wwSHojp}l?ETc8b zAJf?%OjDv*;SERB;`#KsYoa(U&meUOa^n>PjRqH@z74u7Bov3&?CXC-86;tqvvUjy z2`u|$*goj|UAyu}T7I*4s`eDn3`Zqypu;XNdWumbs+U5`1h5hhmFLNU`=V~*MwUOC5f!0 zuJh8I2@QEYQ_8_}ogQgHaw2WkV$|jF@t1=;Zx`=o`*(5BOosN1^VO!q4|n%>OQ+N> z+GHjS*csf8Ko|+OWPDcb zRHQM4nTGFT8EnSzy~&)BX80ERsChX*L3GuplD)Yj@XkXoxsvs+qeKZI(*I5N70kS? 
zE*G4(J9_8meBv>*u6m7bIiijA@iIwB38 zDnE8MOS1|cq6yY+;JKQ(2oO@dE_Gg)ym(HDT}a03DTuaq%NvmvP^_3OUl*ApL>jU~ zQRCg0zkjB3v!)SBYQ7}JepH5sC}!2tZYnHmRkxjLL{KvSZvgjz@u+aw$zg&zM<%P( z6)>6rFDP7uliBV8EBtqVewj|C8y@inMOeE%xbN&(clCwgrd&Cnt+6LMK3wkHX19S}$H8jTtOS-sW^8I={m@x>seL3t9+7s-vS56aWn?Kz*)6e|&z)Yj zvqpu2^5W;zu~};3`N=>sZ$diyU!^;eznJFCs$d-Cz9H=rSQ5Mks=ZW2I*_dHCOhaH zNWVm=;@>BG*9OEliwtO&{76#H$F$I#VNbMv00T|?=3i*1e=X=RFwiG%tf`X#*8QWW#SKB>J zdFTL7e{Po!rZw7VY;|h9VKuFOSnO`+k0*!pSkaeruWdKvKR0ur=^0}FG>{kd{ghq> zTj)axI}Pno2N%`YFY%KQvjykieZNZZ>jPGBJr?`8Xim}vA|WfkW`t*~BVy5CWQFaB zHb=2yFtMK!?F%@w&_m>@Eh`A@L;;iJ=TeAH0=(sELi-Yr2ZE@k7UM?W~YsVwFM0g;dlyp z!^DrHlZa%U+(ssqh&4xd6+QPT(zC>YfW;lrUrH;S-{Rv|Kqm0QsG5h>s(XxeB3LV-9fqX938fk# z2rnBN)huR1CHDGg{tGj1OL7e1KL?pUvJI_uy7}FyKg85fkJefh#0A8Iy$VnKWE(-F z?>6o{>bfpJEFIlyy^kp&g~r^O=`HyyP_2vK4qFe2kBQ8CG*50wy@)<(&<2mjVN7uD z?_{N?jHDAipPgMw9Y!zCD+!nc0E=~^+fbI89QEGp%N}b?Rj)(de!`{+_OlAO;+5gAXI|f-A zE=_=K+t##gW7@{FZCj^p+nly-+n#n$+qSK_^>@+kxh{FDEk}0)G4V$1yxlzWF}qD*t@bOxF<<`5dSVv zPu*C2te_OHrYyNQq=g3qi>Qo$Qx{go{oQ8AqJb`BCQs(H$|R06B-VL-d`Z;G`^ajQ5<9>jR+oglqe8xFtvHapAoSMV77nr1LRf3%ooD2T5}tg2FU(q3 zP>vk0?ycSU0ekVTq_`3ON+buB5YjEX=w`nYG8b&*77-EZCAAfm4g>A;QTV7IDvc)@DFurx#U}jk zs=j+2b@%zsTGFuj8&@)5$FIl5vlyEJj9+zZQPiWALZXcX6IZCcF|##qNY!YCx{o4 zKZhL-LGC=t*X8;Vf5tt^yD{Y=qJ;Pkv70y4LdVndA~F4fIQ4$6>2$xz9VL*2D#a;p zyhKrg3#jU+h3cNGxIt$(5Y5W6>z`ZE@UyTOdrzpt)n2W|HCiqE^Y%#PJKG_AbN~;Y z@8%y+Yi$VyM}QEfFig?GZ+Tuwj$0L+(w2Z{>*@HnA*{x{rfP!zULA|SfGZX2MH#iN z9hud`UDFG@(4t(R04pk zv|f$oN-xe6H(y&HyMEXrT3*WkAleGFn##a7CU1cB$4JJ&Tt@9g0c2Me>81OHRy0%p zHbHC9MBQ~Qr|=J7FVAXUW-XQUUT;$94A@gwf8;A!5(hW&rYmzOG4_iak^ZyPh2&emb`Eph+ zyAh{{`I<{N=Y+mqIQF=Qn?UH}O0Gd)4>L-neMlJXLy?>D#f76^&)4UAc3u)v4t_u7 zYPE{~QQLFd=bx2JLgtt!rm_Z$$f{YSr|f*;tf;W~Y=y@V<0Cbp|39FW)aH#rOKy|;+F20GjYq`oW)U;5Cx@}|zThJRwQq4&sJMlw$|0?(VOjg1+S#Rr8# z`qEdsjD-wD2Bm<+F_&UQ<58Key^ZF-mU6lR`-Z$q{zRHR; zm};&FNvjR6QCS&4o9AOy4mB5=(U8L8hgcsB7zx^%Y#?Oi5u=pzlu_T4DJ? zFB*O7-A)E68PKzb&1ond#~epj!9*@Y#OWXw&#+Oo%MWNJBt6cMArei|3jJ4ofVZ$5Bxm`Dg*2`82)1b>jek? 
zd{=&aWKVt(fXGsEwl^fX$PlP$``G6uO|0m$K+n*d{(C?p@?$2-lH&pi|NEdCx#FGq zp0ZrQ!eC|cG%x*#v?w}(mw#9T+|J(kbVw3gZG!S?oD3ZpS=hL~@s5U`gKR@*(>mmk z^L5xO%o<`}BqD!#kMy?LTqbyGvieTHR&FbWvi$RnrX-y;}rMbgnZkA2J0ZFxl0Ra_K5*cQ<+4z&`N#tx;gKq7vGff#Ab*2#ByYm1b=y4Qv` zI{8?U)mN;k+>sD5CPTMZ9{B|*>T29Fl;PzwM|ddHCR7>VNfCj-Z`Env8-`0<$eJ@3 z1D}6hoU_{02lyb|j*IVmW4~m+qJ4Nr1aW%6_r_sB5Oa_W=L$Z&a9RCZB4=&4tk>Hy z;Z0vD)AXk=#E_y-uUxClr6m(afxhn3)BB7FUAvGj)tBf?W@1dA+i8STtuMrnO9M(4 z0S`SbD)FMYT?D2Y-REn@%YyI^d3Kb~m)zt%tw;R(5}=GncXv z2atRe?0JA;oW|l(jzrgZGAjsh5HuX5nvS0$TMDmngS)nas73Nq1!Mh7xS?hyfBXe4 z9)1WApJjrYjc^v1CPl9ZzzabA%tY@#4OL%JSfZNv+Z)H*VhOtA;dOpxC}M^|)7xO7 z(!Ark655&+gdva7ms22k_~W-a1%P&-YTZ_zx-c6cC>lz$_(!oT7{&lko{{z#Szton z%KdsfOAxqEC-6|MfIZ{jodq7O=gCuC|L2SsWSAF>0Fqf2z}rE5*@o1k#{m`nNM$kY znM}a%Q4R4cym$JU8ToG*h%O6?fI_wgrV^tCdzRwweCY5~Oc)Q^cn(;jS$nKLJalI_ z>I-w8*R8hps*smfaS7NjBb~Q=2umn!{{P->Rm2DMz3>jd0GGJj*>8FNagLsb$Pbt6 zkmNr%R#=#Zf2AwDk+eXndLv(k-^rK3k;RErrKY$FzFCTW0Wux;F(xRNOW%*=8sj4%uhi)2eW#71{ zHcw<>i!DjI1XXuBw~PObCJy`-?(H_iQfK%8Hw`L-&X1}~!w^J_s0{zyR>))0tNYVI zWC`+P$?_WV$v57`$E1p}S2M+USHmglL-Kxk7n!k1hHJ+H98(wpQFzR7wX&emL8CSB z`}e#kk?)bgVfsyoqxe2_^L5VnYPxzlC&_uR&~h=jg}M&VeO>f24zo@Wi`nF|$3n`m2t!bK%6jq=)Y{xoP*gnSY!=^40%NhZ+^_q!6I zI~as4vm#c4fT#-kp_vzoeayyER!uHbB>}T`ijsmo96t?TTG!1V@eZE^F9i0%c=$ij zc?bL4#LIC~76$OI#1dWps*mB%%fJ-VD%>(i2>Jv}yZUgVr%2^msX)%O;&y7!r8l>t zyg0*qYSE!PSL?=ohyEmhsG@<#zkA^U^v7ojmvJd)DuAvhk;Vrg3no7FsAR$rt}uN~ z^%RKXk?L zqSDCV`e`}LLf#wsHxLQq4B)C+KV&}iQRCBM#HODusbu-BJpL3%SuhwkPd-FfMg$xE zTD>hD#>WY5x3=Y{$IR2C`QTloBk!+~0@T5jln3<4wHta8iiy>M8iCMdJh5n&T?{(O z*8Ep@<=9EK>m76hT2`f3#KGhSk>SG(QjOgekJW9mQ_f%~F+=~u+(>WkTaL>`YX{5( zYv$0kDETLarY4wFkb?$L^0M0;Ke2Kqv_$mmpx`K%(;iqlT+IwZb(P2m?qW1rxFbQ_ zRK9D1%z*ZxRUbdHsN4ctUzvvT{K?V(Ug~D(;tpg^Q>VTDGp?;qxpqPLmQDv2ptgyS z84Ae_K=TlfHa@U;UYXBz-mem9Dz9=zcK%CW3)XFYW;6}ry)MV-nj3X^%E&5!A! 
zA!*v8jqGe%?eb5Yit=w8o1Z#Y_JACOmth=_$wG-~sAO{ok&?=4Odi(2pJIgKBNb~( zg5TNzST2#_E^U|&pIoZ}bSD_#E{&kG`1HkaA;BZ6&XUADi}o!pa)qp)ndn0yqGq;O zt5|v5aaYbSJG}`!UeOJ?S)ADrOB~kaoZ6JQ+}zyIp$U@AXF_V+cgm+Op{VT~n}e6Q z<`lK1!6!&w^1-nc`pp7?j9`PYX873I>`pj~CInGGi!I2epHuVx!`T;b6}v?-Oa!X< zRA;|$FsWha_V9@Y;_@y_Jb?q(kAGvc7E3{@Z8VrJ^pQb|ynAap#3QIli*KrOg^tPj z3P)aO^Xqoy<2ottedtGCcY5*b&%BXAG_D3!>2)xgi@m|GS`|F1!XU)h<9^@m)L-)8!CWkW_wM|eFC zbhh_gQ+V7yWwra``*u?#X6%|^|DLHDYx*nrc~}uslEiTlmEn`D*CVedOMdq@*RR6> zc+`AwatXdilLV=cY(r-apx!+zKECA~=yJdD8pZMfK>|_rS$%-Jh0l)?XpuTx`-p<} z+~$>{-G%Q*c+++}om6o%+#TtU`T%z)zUKR;o^^<~IftKi&w9>Elp~K?YOl4VCYRR2 zZ-=6Y#|wypz99OZfb#tIyrr&P7|$kHyo4T7KQl#OPNMVhrI*R+qIJvUXDcM zSYEOMl6Fc`;63vAJuYsx-m3J;Qgwu2@%{8BvMA?P?@Df|XuOJn~vFM-eF-?au9cfoI_@6O&GLDo$WFmVx&c0EL0KAVA zQjJgd|G<41U32IEU`dGk4Padutk`qm769`BS|mnf6x|31^?tzAj&x% zN2ONf1N04?hb0Qa>*iMkK}%NGh%epvXZu@&p9R*y*^-hFiy=aMex{gSExLJyCaL zqHm*hDH_mf{Jvv-lOQ0R!nu2|&h|dRO`|!9rf2HJd!0Jhf*z!fjYL4^5_=?;A+P5* zj3G|13HTf$F&qe8sl&pHk{v`fICGWQzoQ-_P*?GrSzCqK1&oBMS*By-M^$oVS7_xINaRQIlM*z}q#5=nF$Ay$O=*W1J0Z?{)m zZV1n zC7?BBu+(=io2hv3d3_*~r5yK>JZ0_?NDmi%C&&1WAIj(|&WEKZnTOfhqsrSo7ysns z^)QDBpJU8G16QJgQ1tX(lm7q}SIQu=MKUqg>>zUk5*C6`)FGCV(g&X1%+J_0f+ZP! 
zVUh(%lj-cG(VjjG(v1spzlLsk*RDcv;Bf#KAkMDHg0ATD!2j+wj7)5|I*GpmI-;=4`{HYe9n^O;&mXrlS71 z=yk)tKhlr|_I2~kC#fZubBudkL2vb{BM2+8V76RV_L=QI&CyJEK#5MmK5x6m(OQj!!<4tbJF?*Wd+`=p+4K^3jems4?_}t^d<#k+B0W< zA4f@&*h zoztMGIk4~EXyTjtKVE&^{HWzOoCf!4{+^q}xw#84w>oUwBClWo#{T)mh5PDqoj0hT zrf8z9?O>Ke=L5+}i)K~q3gug=ktrIQ1>elnaMO%)#9!Y3dR26#=ec>dONq@9wXVbMv` zYvq{yH@XGMNJA%?Z?RcPo2a=Aw@c0C{HV;(qKt_33V`-GMK}9dP4#yOJ|FT_4f(U@ z(`u3}!E8>HPvGZ5ItsJ5JoqSJs~Ln14rT9K6n8E}2yN^YVuYn6h^blm0AsP6r@89S z0ZWReozMhFy9BntSa`ZQ0TI#Z5Ij4A`&r6BG&yOR;=Nb64fVXFED18}4mTJeayNa9F2Rw(Lpfyj_-Is7H}eb%<4 z!FKHGli#8u?@`!<`6n7D<+L3OmGWkYYY^V1~>xuCl9U8brzpC0V#`^L~4 zV`oBzwLN|gqtjm|%Iyp?aOazv*q?B`fb!9?(XxAKn(Yk9kSnj;%+aT+hV z4v)vL20M@2Y2r5%3f^`!VRX5Zwl3=IhM`OhGv^6K1+iX^zb;ENG_U_uP{TK0GynY9 zEPQ=UcppX;%Ua~kmqsrg+K5e@mtH=*{U+m2**{UKVhi*HN7fKg2`|i=eDZx6NYh~j z`{v3L6Y>ra)>=Vs)-+_zwsoyL0qnxom*_vMI{UBt2YnxuR`^0p>KrhBMRnV*?pr5r z7&)K1^&pokEuLvSNppnIdb8zCb0hsnea7Siv91@aGYWrm<#?+W+YeH&U^Od*XiIZj z>A;Ij{r=LkuxR@P*H;s3=1b3Y*b%X&)X*UST*Tp8HtVlxTRCa?l1DE~yGrAC@>JHl zi+PyB1?`7RyJbO-f_5}F_T4yyezvb~pns8nn(AL^{H&m+(pCw^4O3tDm=6|#%J_pr zf^_ja_OH2VMbH`qx#c)fWggByBF&axz{J^<+V1UnIgG9M0_6}uPx}H5wVMn0X;9HS zulk$^Wp}#10;vD6)X3{OQ(9E}WtKnb93erc4iH*kr)i#|&AV-Za!7e4Nuvzjhx3tw z%As{46w#;=ii>rA-3_BbuMq>Q*hIF>d6s#sa9CeO~n_~|Mntk5pmfC(_g-CK3f z-pOu3S$<;G|4kVLwNe!M^VL@5M8yFag_@nicV4Pnx++==U9mdqW`MpqL$vS4EZVfI zP0YGQ*4q_ca_A(n1TmvrE?44YXfWQy3RMS_X8{G*m)s0$l`#QU zIhuuvu_k@9aG_mV+HuJ!?-PC(Ot0Jft$Ocg1hYrm*Jygc-4_LN-6ML!VR+a19`S!= zY$MS>eT#S=o}S-ETwFZ<-L6Rf#6AP5KX#1SgvSF-yKErklYN`dWSo29eii2h< zUm2=NCk49{9dZ6i)SZd>7o40a_+OxOJaEtd+d$tHTd~LA|AiRXrm{1N-4od6u;n$c z$N%)*1Kfiy>y0f6{68Da9SSf=ZAf;Y9y#<)M88W+z_*auRu-S2Byrf81v=<+)4Mhs z%u-ny<6U1w+heQv^R@9CDzFIJgX6i77ohi5#1%mie5nWN2pRD|Yq|e3nESu)6j9)J zMcSPasJg(HnZv;?@NY~2Wh27oQ?skD(DY|>bA75uaC z$o{^s4gV_WQV&(*4w5^@{JAOUr;njV0?UAiV^k@^4Z+CclF$ZMM|tmfPtO?LC$tix z8c8uBvw)3ON?-oU>8YlAKHzdDvcWDW3O3oyLOM7Dq0g)1ibLGy&dL;hHvy&ZvNN8? 
z9io1kNJn;oC^pb*>sggG;;la^`_O>fOy|8*%Uy}4sH{cr@HUUDvU89&@+eUdcmu>_ ztNXq;0BW=nQtmcEln$zS=2bL9R9zq?{R|SgN0{w%a zdfG`aJJuc6rpYOCS51-ELSnA(MeaLxEUmxrBP7<+*~f1{KOXctngx1lDMEyNuQ;w^JtWdM5^of}9ujZp z?k?%+e5hU>iIa1NioPN;AK!!|d`t$x-%5qjL>K$mS6=w9pmDHOU(s?kHUaX{Ozrc( zS`t?ZMY+!Mysspwuj0{TADVkISiUoi?aCRwNq z;rN+(-?j7#ZTHD`euo3^{Am>xMl$_~{^{>RV?nUXd)= zgoKvf)@$Ly(I^Fk3>@2gqgyw24JlEiW8=-N&sFpK(FCPP;j-uQ2Hpupx{Jm2FsNia zQsTXpM@!y?Qg2D;go1iu4rD=_e(Wn4Gv2~+a@NzSrYn|ecpX_~tTsAA^z|9@TbnX42 zD0j4ij+?7m;Bl+inOFD)vDevq%C&FLAuo9<6HB6A=```i6VqdSzQyOO)5(e5@nf1r zfs6!$cNN>|BcX zPH?Z4vhUP0sde*pM;_A~6I8jtY3N@>Cc^JjGNWEqSm#HZeQZNV*nDk`%1+l%J1!k> zC!Pv40m1KoHU3HbSBY#wB{g=7VY9l?EBimE2Dy3=_ck$uX?B?w7Uq_Cs2;z_`W<4Q z7YQ}qq8CVq;{P>Rj9{CB=b29>E`HpfqLRBcTszsT7MF>FA0HonYQCV_;dlr{Hph#B z5+U0^wfe+u*Lph#yCd7{kvJ0g=k8Bs5STAuV|{GDYcsXLIyfIorOh#!i=v(FEMpkW zHSajZ(7+jy?@hrBx+@w^ZwDGGr?c8xxlF=Rw+G)Xk(uF9qU8&NZ=^o%aSwkMpa~WE z;(m?Jh5^yxjqOsg zhNQ$3Y+Q! r5>1HF$Ndc7ST3o5NeVI*m^woK{Us6p{fBB)-{3W|w-n@fp++n3g1Yiv zr`WmUQT0u1o~oi+LQt6cvshrjaOct+hOxP6v4LQ*Lk17a$7%T3+SzRCWPgl0RThbf zHn_k^RaZ`y#UFc`g00iQQMN)WVBV9ouM9WC9m3eoc5}(XrPJpv=|&zJ`lELYrt@v~ zvPUnSe8MSLOV2=n4I~Y%_=lB-q<1qHZx;~AR8vc?Tu2SNsP|Xxga76N3zb7)h+DQW=FMj8b zTSaoPUiaP>z)5r)wxf4wX1_fjSUSvSTJfyugz06babjroEJowMX&X=JEKat~xal;Z zVzG^UK;u?{be?o6Uu&F?oHm>;7qK7}P50YYy!42sx(Tw2cj@p9s065|-8=bddh-)=tD&eI^IubW2YyeC1 zJZ<`yV@RhtmcjISIgM`N0rQ$u$GJWN8l{;G2$INMm<0s1HxC2;P?KdTw@xPZy6j8r zCA=;jULCw+=jp;l-CgWrf@jZyD`EElYxzNJPFf+)K<9!y(B|y~FM!f#sQWUjfK2^g zfehE)2eFGvoZ}I6XiG-TZshhY_4wKWKKnbv@A>z=y}l+AzrO}p^krwy& z_zO!Aa`H;TuZ&GS#I4CXU{dwZ8Yt((QPN&;-b3*mKII!6Y=6~J)oq1aCP=@huUgjR zJ^g6$*<8}_A9iOc!2GbC6-V=;JA@ogz>%N7@83QrzdS1uw@_D6U#%c~ZhUN-sCS!K z==m?fmfIvAnBswnm?_KTi{AWx5>PebX6p*1_e?-#DwJZc$V zrhdn zlbe`i(^s`7JCFegO>2|1?aJ;wp)LvG7}*OOM#miNnEJE8QOA*Ld=lu z^<*oe|4nh*9&&7~aTvay?1uZHk6h{+B58e!l>J`ifm_cUsqwK45=T1RvX|iK#u$NF zy86Q%kX`KHjXerarbPG|>h|X*p&RwV+))ZX$N>`o2@iDuMxzVKrgPxBP+ zYGX@K61Y?xAgaB*1WKF8W+}Wl3pvpgri*neeRKD*=nllT~V1Q9Qq3R=pH=mXSMLQk}o@&R(pirwEn37e{ma+ 
z0^0W?W%Ob898UsNB)jhl-^e0h`e1eg4$<1W+LNX@(xf~AC z(6kqDeSt`F&bTBU5mP^V7Lce~kv={x#cZPvLy>DY5=7UeXpx@{x$O^DZf(CMlf}1z z^hI;}7;M+Y3y#+RH(;9-NEG!!6CW)+i&dUO(AqtP>(FTCTn!Zej?lI>d(s__@rI0))OS>KNSBvZVl~7nLj=I} z3eM-4DqJ8sK_MtY#RMs*;BoHFl`O~Vx|`7jzg-hkQF?_eL3Kb~aa(Wkq9S>%Q=F=7 zOQ$IN)1*&Eyb;&Hq6O9@amuA$RI*C1g^dxnI*U4UkL+N=kCx7~fLv34DtFTSV|Ovl z+Fe&`>>+2AzIzIMoPpcu_Qvn!BMbPx;x8}!aech=o#pTQ@amYipL)2j*R^iGvjL8% zuq;vgTU7uPnbOGSjux`gb|k3?lI86}Qpm!2<9Z==mRHb!^m#aJw*5$kcfM6Mw`=O> ze0>TVw8VH71Cc$(gNyv%f2{#Zlv_T0_)IABFVhyPBV`*d`L(Vqn9Khc-5L;|ah*a$ zmLK}cz+q=1511SGcw%bfD~Aaor_;7hp#zxL+`FEKnrZ*rZIwhRr$CV*$z0K;!*3l5 zhwr`+nAg6qYke8aNJ4pE2?S}RH5A3LLhj&nQ|`A%N{ud)ufsmI-uC3mi$$>1*}?nY z7FH#O)}&R);VnSs5K414Cr0Xlzeni4&SfO&xGs3;&upJ&lE^*(*K; z5zG2ah_184ReCTjGrRq;321p%!OtZPq$`75+7{dcnHMyup9d~Yqh`s%ja6T6I0|H$ zg852uC}tZ5cA-vTk-`oA{db~SF*@+-vYCQ+)^~98_tqi9E1$dHsK1xxrO^yaFm$5g~73xlTmqlwzFEc zDCx~yq1eT@?lxP`)>HU8e_bbQIet01(rIa(Ya6a%2*THb8q_$iOSR>-+s zrf)XBY2oKX{N|~Agq8fc2XiGib;OG5YOw`Dv2%OA6RP?>iWV3BWi-GEZ0h&l`M{+% zic-7vF|^o6>$p5Df@zohM*mqb756K>ifVcy^udR5q6%q`%+9sY!M7Hx{a+Zy|FA;@ z+IRLJf`c2j<8xvsN6GvE__!KaL;F{Mlz-fgcWuA?@opKD^vj9#r`Dli3@0(%E0JoqFF=qoWZuoWqzH4Aert>|DhW^a~MmwDcFP* zBb!`V5YUqfP@11V{v7dJ^IJE5SOveQ*53op#ow$!wj9Qm!&hxeFCm`xe}zM+^}Ras zexuVl`W;3lcC}_?d0Q75lRX^4W)W!&K2H8p%poOM=S4=Q$xsA0hkfvs@+- zYzt$&(J4*I)WjDDoW&i23?bVFO|^f)z)#!~b0QyfPU5+NImb>U(A=#4GV68?Q%@BF zcJNJ=r$lS+j(k@K=b!JO!65#kJ~PI?`K4IfOT=GLoxM(Bc+A%m1w*)1oj5St;!49E z-gEED)s3rF$Ef z*Drc}Vry%R^HytR*YIch(Faek_})?{4j-cu6H;0tMB53@T$7t&F=DlSi@MK!bl-t! zgYkdV_kXzgN1Z>y+ZBI)pNDs4^?lq;ZAWr|uB{LT=KZ75b=yrI9JGS>a~HE1NXKp> z4m6u~lKsQA(y>+s=v4j`a}c)9Q^gzP{FLUDlF1MPLPNqLX7{OjzKJ7@49;Z!Gzl)? 
zwycsjTz$uG>V7=7_DmR7JA~SbT3NaeAf7~cibhjp855h>X!R#2+l-ciQh4bGlH`b8 zEJA+L4e&JQcPIc}CvwgQQh6 zF!CorH%1+`hE{Zrqrn!13cx~GOkE$)U>9kVSk}0Gt_sJ%ni%qZr!&ylB`VR8TpTXw z%p=}0Uxm5CjYR&KxRh)0w?wJo=fMH1yENa~1~?x?!OwA3FmjlPnWW6&4*}4@T{Y$b=w|h@1n!DDHf(+>8*dV0?3AD@d(I;3 zC8-%64bP6nyhL$0|2Q4_NOa7mEr(KYFGf(Pi&k7yBc(evXUp72zmuqiXZmGf|JROX zQW-;#hQGmo2+#jvn+fd2VCMtqHwauxlKa!=?a}jgH{oyjD)iuQ4d2>J80hbX>X;mf zZ2OX_apSbH%j9_qlzGY26m&m*@4qnEU%l;ngvq#QA36#(oeN5YodutjXJc?a37mG7 zl*04yEanQ8ooRv7r<7U)h^SlN9|J%AS%#{TVn`9?{d^IYg3Ti1TM>f#0`C*G-@yC4a!7HW4>|92elDj}oI?=4BAsOwHgeGFf zB2a7Vtj47q#zEzz(SuEuAfxT~VkL_)FV}FOczuSb2{Megoij6{;6)+i2`QR2+Fg|_ z`&vDpZfOph%Z7-LWM!aL40-`0?8bF17fv%W;B~2E= zd780!&O$6;VUBP3QW?F-hfaSSV{_iIKu=)=4NjW$*pt z@omgUVoOU^Qp%lCEuZe;5i`ROP&;8fPNn<(0FpW16Q$myf5NN#)iUPSC|(lPR2pV% z;Qs%AI{v6_DEdQ2LEpPqwEv|`pA$cJh1`gVzo4In@^oL+Ulp0a{J*&TT^_!WcPz>V z1=`XS$j+~!&}*f!dVYI2OYbMAY!Sd@Oemv^<@0X}L}r+l8cQbylo;24rsw{UR!5_8 zx9gz9*6SsHC%hZvon~O3`!UmZ>!o81r-jkr_cU(X$cUWG;mIK+ z?^#92JbGAsTv^aHNHM*d{7Ru{_<2+*<(Y~~_HKk`pFLw_CN05MyU$-5>PJz+%v7&Y zt{T94V%=>$e@X_Pt8!o*?^0BtAwI@9!l>H99lu@!AL=TY_}f%JgMSE;@IHS3FFu|V zN;-wl5dPYL+L)z^Q#V^VK}T$Yy)i`$mi{UMqN!KXa9aT&zvcK=nNEm#tQmz4ot~W@$p}MUp~dZ06~r>cL^Y;x!0F$eAl*==^eXt! 
zqJ@`@Tk3;#;!Ill%?ntD&x^ZpZn(``{63swd&H?jaIk6)PgtT(ishTq^AM44qIC86NV(N6ZUJj%NcpWOuhJu4PTR0>_ z!ACuvM}U6P!^B`mn>EgU7Yy*+(vmUd`50fSt7Vo>E*%5 z-_unl#7#=1^-evh zjEzU0^e6LlH^)JeoP0j(ko$V86k1hLztmBwV76DfDPv}jsJU&3oQz$xN9aN^ySxJY zHICY3A-q9_^+7#zC8w|U*KSPUgdgnKa%R>j``@y4j3C;@AWg`s$A(Rt(T)(RP*^`e z6JCJEgtPc*lqe~WEPtz%{}C>03ljsh-Id@~P3(`VLVvHBb@c_Di2Fy>yc5oxh+wtp z*2_t)*B|EMn8=K+-iSlW3!&~M#eI){00u1+CC^G#W4|>A>ARj?JiLwfsRAX?`UtbV zzc!W-wQ0PwtFUtLD*g8^{da>EO#b~}oAtNf{dLISw_9Ub24C;tSvP&a>tlmY2mjQ< zcK!-AVH+P`Deq;?mU$@(@_ne-BEm2(Z88RJ}b&Zm#Jvp72;7h`-4yGYyMAcM#nE?lr#&>f6TOg)!Q z>ya@K%*+2@qppD3KAXP82fJbXzrQIr2uiTN_dN|7`pl&al8%kDP>cH4&aR#*h5*WJ z!4nEtjEPJQbc!>2xa=Cp2ucN!y@ON&npuEWWEw6GL$#@@O7(pv+tFS(TJvTcc`TMz z=DuR)cG_Zb>|h46Ib|TfJLWs{TWUY3-QJ{F)3dodZkw#&$@4H35hE}%SWkC5svT_( z`!*S5JIQp`?q`E%hb@pk$}ms37`F@KynpA8zW!Cwz>^}=lIh(ZK4^+8nT7p~VH#C7 ztn{uDYneKq1|-2%xmp{CqE8`4Zkggxil78g-t2a6Fg{GP#FJo7ST&arJ`TN+%y$xV z1MP#OGc~JwAJ9mnoY^kE3jjTX*R%zZaQRU z_glBD4jZk`Om8a(|3;2>qP*&e8DKNZPDu##Fi1j>hqmRZ5t4}*QN8< zdf|gpk0&NxbsG7*aqy~Kuay1|2u^v@xF^;f_5TR zoZ^GB%2z)c)`(htA2J>`Lu@=op;NjvFk>!Ky;n!zntSzgjz#g-%lBzz?A^}#z80ip zeCNc*x8rR%_<+IKj0(Gr@;y3oYi)!L|4L;^<_@0c+4ngO5(nL!n0=OnmE$ljhm-|# zv3V6@p_=1c)Xd5BH|0j7Y)}T)L#SqR0w9NYv$A_ncsXmME#<0P%<6#9!X%5gFmjQ` zG7=5c{fJ80^rAtYBxfk@NOR0t+|nnjcMfI>QeQtY=S_{^hx~5FD(v_&lb~|pMK(t;`hp@>pVQ=^EDX7 zmN$uWs;#x*4C0;m?rQeZ49YjixWt}C7U@DNsWJHFOiK8jRZVStn(I|X)4m#Q*7bhW z@>OoC0Tc_ObiM_ZtKX%ZCJknM^KY1n@ZbSjF`Z>UZ3}vO6{I@p7T2=ga5s)0#6Qr* z`FM|f7XF&^G1$U(vrFR=&57Sw1Wj|8u>JD~;02TH#evt9q}S9r$J3%&nkxSG0dYkf zquy(9syTS-$aodWvxt=j_oW^^QOq3jR&2?fCqrbauE}+hnH_08QW?o5nS)VPtVo?{ z;@n2A^=ezrMBo?Lnt-TL?zk8uEhsDpm?47F~I(V-W0Ih4n==hrK z=4itQv_-~MF1B0B`BpyMQyq>^)*_ui|8C$LL4sVAS&3_EcDAu3{ENnGS9y;qd=>2> zEHx30o@2{B7#Im{F4wXs?6$}PZKP_Cv|C@vvwjylcRvTX#bOQ0Af!M@9Wj8*gn@}~ z+-rlJra1yzDT5eX`F(ZWKIs@|6Wp+ns}jfm?HK5|(&+>6cx$>O@xmso#t&j8vZ}J# zUVf6Is;m;~1+gS;ccmqui`kO*>tM6JC}y`BGbulsAVsR)FVfKAV=O%p%%V$4Trw24 zM73lHTaAXZ+8TKDxT)!|o6HrJa((c_IG&L-d);1gcZUY}=$Cw4Q_HoW0B!@lH3jKD 
zfNy<3mB;88kAH(_Z_%}Nj{DRKw%DU6=!ct&`oA>Q1F$lt^j&32@HWS}t>{*m>Ak z5Ci60i}1c$B3*yV6=j4a#TYv#x9$EvS|mW`2vjq^BT{aQ`Cn=Be@2yR7!nEkfK*4b zRFd;ck}Tm~5VLU8inJ20m`Hk*4sw}}c!Z%vli+@^Xfp9sJT`FotES zbzPu?uZSKYSHO=qRcJX6`6HG>+@U9)b{>t|Khk9~gthisZTGIeMMA>b(N&Z`tY15~ zfSMTnJ+)yVYFdURPVdif_ZB&br2(%E#j%nsf8cow_(CG5Sz- ze~DVRr{gVtj^#~zTIsNwv;T=vyHc90-gu?oQ}oo$u}j47m8}lCWbCPZQ6971KeY3B z`4c+*st>QsK(Cd3=O&mZ1jrjARKo(vMO{easxF#l$dnSJx4eJ%(hr9 z|Jig4a${92(ZoFzwacoBv1Mk>(bh^D@8cX>5UPsn9uvj?Z=H~U2m>;=Q6h1i^&fxc z;g^vEZ?Qmj6ABr^pWurj+Q%NKP)&;eU!l1{5A16r-5XHSo`2d_1uK0|*5V_LF@gxHafIk&Y9pGZ|NkJq|Nx3f->dYBio zb@&yhr?a!g&$yvOFdoNXTtI`BE!Tg7HH@JQQTt5%9q$i)LpM^p-CNF>H*mxdwFa8D zLDzs0(v{dqCp$n8S-|VOgx^D_oBwdq%xgh8uSVM=-`NTXEpI%A^O?S?BsH(5bZAQQ z5R)^m$^FxSH&R7H#j=_vlDl=Ud@bed@GB(GDmtgpv@PpsON9^BP}5A?FW$IM?B^m!7tv=jXvua^A)t=k%)6*9=GEz!AokM%GeC z+nCZCs&fY7#6Gg*S&00&SftX0JnVq+xm-1jq=lB&{i=8|vWDv0+h7wpu6kzD$8D`-2`0O5gToeMimGP@M;w;fxDSvjQeJvxt%-nn zxnpoSfu#ezzmFHEW%v3-_QuP9DT>Ywr37h~TYs{Cuv(U8)fzMOHZwEvl=WU^$4-* zI(lY#Ic1L=nD(mMMjbkZ4N(SkiB55%&D@`M?>ZwoUi%uVw{pI+1CZaqnqztf=n^IP z0Je+Bq@i9d-UETRG&dmt2AFT zp-hd-$4fBX_u}A5bj#~n3$I<3fz$XqVmYWzAqk9bB7w=aK!Lgiz2yK!Ks}5{vpebl=l{%6Zj{JR)JDZRYr>O<<}jo)P8mruf3j8Gl{3pn$}03p+WnDME9@i*sR|d(n7j|Xr|GpIU8m=OVR40LJZemwA32knMH;@3FTeUYrTbsSg zlF5Nb!cGodK_^n)?K8O1UA}OJ#uNz=F=X-Xdn+4+0XBO3f?#|_01dSPPP^u{mI-f+ z(c94y2%ynNL&l^;SZAFmE!GDvPfOeS%XSGBy~(YL3CuF)e%K3qhogaX59}Aiu`BtLGFsLT_1g27VTf4OSKU>0Ukd=B2;acX$C|}zjvp`brqNt z7pfJO0(?5VqEjKi4@~Re3JLix zMzwD>+G^i35C1~${UhyL_P&?4goEY1LBnJ7Rm-cCU)^{6WAl>p;aB%Vr(~(VMj*i> zVI}44-m2<|+}0dH^)gmSF3l#>eT0UmFP<@+Pkb&a&WtR=$xosQ*E}M%Kj%bDOs5Nnl95FIuB92{RJnb)WCf-_I zZjuc|A>AN#+|V|R2}KvD5zL!Z7tl!&L zw7MXQNC&pjsXV;B*d-N#OPu$9AKnX9v3h?pS5oLW)3Uz)_Ourp|9WQdm$$Q632k^{ zqU-P0Uxs^3*A(5Q#FvA{uwuW1mov|p4GGnk)ylLgrpT}NjovqZ9(B}H_A+DXE*WUY z?cBooSnc$@qeYG&Nd(8PmOTO{YV9`JGz535yu`e=1916{CO90Y+E)*sE%T^<1MBkP znQFAnOjkYsRK!d*?YfKM`1(J!Q#Fvg46Au(bD6C)NVEmL+Jw@;VJCB7&YbyH9`=9l zFc?CAxL?s4>~X8yS+!$R0KlfR?V0u1b`0hENM=Ku?ZelBT^`ApMBKL=`~rjV%V|q3 
za4?nf`PN5X`NEK5y@Y$FedqZ%ZMlYk*MN4%VX-A-_$U{edP&Kw%1yyG`ARPP1`Dx( zILE+xhlRe=7@>1lCBn~*y^gHl=SM;*($@cDCO_3fePrE#M&EtDei&tCekP&)Q{b** zg%im!2z8Xpk_qBhEBdW~Ehl*+s3QQG>Cqp7euJ&^ z*ef&&dcm=vZu38r;r4Yu6+x&y`-&~FGWn9atLIVI);F}4g!rs>GeCVvyhp<%2g081 z<7TSNcg*U22-z-CItHlO8Tz1p#;>FJpg|hUoTRy*UW3$vGnI0uU_H4evm3$wlcW8l zGD7>6^oXFD3$<2}w8d&c!pP5~GKEG@n)I6_SY0~e8HYdZjdi@Lm9ODQz8)1%VYl6cLnt5!UMGL zt%_u-rv|n7IOgFKwSVslcx>9#vy$bOcMWsf?Q#uP{oYQ{+3Y0V=eBcP&=R)!>M+se zj{6h_`#pcDX@+;{sYrU)psjHRzBDIM3{T3L(S8ezG_i#Fz)S9=mQRsLNNW3*O^G5M zc6-%C$cZ>zR>M3xoN;No(W(zoTx%`7Fz&7xG&s-clERX=9L|z2d2Phi@Sy$5SB+Mq zklgfS;YhLk{ts;XT*R_M_&MahF8!4z&HPWLvLF|}oqap_7Qz~yp%IfOAGAI-?S#>G z3zxA7Ro>7mo2rj46#P73uFL^V;V{CAD|o!Nw=Q4Js!@k!Qp8bfZ%nxWT_X5;962|$#^sEi`|Ng{WJH!(({Ma18sO=3=%(F>No|6KD4r44 z6hFUf5X6HWfPJJTkVqXF>FPR;j69OaU2W4C?l4}YaYwpX)o?0nk=`?qEuJ1uD~MWL zkCT8aEyCi|A0vzo$1Z_I^;dejo(?n+wefv+4VFQ7yS`bZCyzE#rwZ*mQe9hYcWXHe zZZr~K_*3hMhNu40MPM^#S$p=Gi1|{Qj6f)-;JVwN=x+t&Aq&}`x~QzStSq@m$&k`_ z07SIu1=|OJs3?3qjP{bqm*x#zrHMfTVGln)yNe|I=$Kha*@BaW1)L2oTr3bFY(`0# z{K|5g!K|UN8(8gVxP00u(UU0EKJZJoP;Qk-+xTzXn4^2n$TlLqo}IRjzD8nfX#w7E z3O3zwoA&l)I{mEcgl|nWtr=ug%p`d#Sq+H+A5PaHQd ztRj?&1O+;W@F%G71ah z9@Lcv(K`r4BE{2D*+A^9#1<$L(J=QUKGi~7v05Pl(NKQyRi?bJraT8vxuqB<1yfz0 zx@gQ*j1{&Np;Srv)lLVdP9=XkaP&wH2&QOjeFFz~Ft|(CK;buCxA-A&lT}93EzLF2 z>9Vi;>*xokvZkBB_h(UXGUy0Yvnthsi`#zn>)R27*5wY&ajfvNFdoJ)xYyqDi35Km zH>IAAmxpDxl(E1vYTm3ZFWpQC9G!OK5e8bxOPtH4o-OLH@)^$yGYtrWjLrvfwj0^z zPG#QE6ZAZz=sKo%)Eu;Vzq}+3Svgx$bRN$%0b z=^HGJP2xWj(3=t3wcJk`;_#?+ZSC%l*Y8*g}RIiQi1kzYm z>_$RhY?)cmYi!M$p34_7pbjWwoTgq*(nK~HMc?{J5NRAg+y%qCHZNSfVDU$)3D&&e z@>FS@6nk%LX~hVA|B#O#2@L8wU($KJA2FCoT=dEZgv?Z_xcl`KejhYJ;A2z#ATvJH z1z&|$+$Dr4#APvt)UdGUOH31s0KCa!zbLtghE&t+FM4-Fk@J!^k)-JGo)EhS31uOS zWPFQE1lt%z$t~#Wa@?3R- zYj|&aeN_P~uJ`$s&4~ChYuuKRR7{NMr@P2cu;2CW!Z<)HV7vFfKrq{fRdW~|Tg9cwsp=~Dhv}Cigk1{U zppMJ!TWws-7M!-i)8{%l>6f&;_i#3wka8DH@8uh;F5NS9{bM#Nnh^LJ?)tWJAA~st^FGM-Sttc? 
zNl%g)mt81gHV0y1)0YqNR2!p=<`ZA(w;F$L3Zi%oyM>__$BoyJ!jIAm8b3PmHE!cq zbUfVh>fqIc=V(}A)WUm9;ne%jL#kVVcc~BDZ-!jLlWeM`q^2ypMUGpxr871W2veKr z3=9|3mlq(Le|?uLMn>)11u87AUjJ1U4p+qZ1LQ>G5I^C05@hT<+^^qhjxW z9UdK)x2C8mu#$zZECpYNT!x~j?+I6gur{XIcc-v5w^S>I!sG2fix1Zp2?%8p2vACM zyp_FzOy1;A+vX? zU0XwzE#Aw2dloK8eCG1~X7YK|rK#oN8nPKi;cihBQT>J^8125!{qoAdo++p%z@*>tyb)7**vJ7NVB!`e<5U5N~0sXackqnkY2HXN;x#(wG2_`ZMFC>P|ua8^cpq|Nm-uEqJnhb9z#pgjQ zJzOQri@a?F2WPqyrhlG2s_|KI7(khTRh0GX1;|cZ#$RH6A-4{`OAA zAmzfVDi={?I>Si>0w~eHOHfF(0qyB%Fuw!K5CK}<9b~*C#Mpc}vk#cBpj`qby^Q_I zsL3GKf&680w4ic6%zIG!$8DC~TbS=H^lBhHRzwu4dnsrKf>+07>~h#LFalH#2fU;f z7ak)&Pm~|^I1h_6M^`@YAm2$EGF5GBpoC{3dR$TcjKOOQLkG88B&uFbVhqg0;&P8K zc^hR>r@@mxD~KcT-Nef<@Kq&de*v;~l5UjeO?o@sHn$kp7Cyep|4PG*ybky@$5D*^ zvp!f-DfievygQA32-Em{&@CN{ltPrEe?IY>%$(*4Oe4l`NWu86TYzaV^WU0=dg&prAl9ZLls=cH;IyH1k zVi)`l6Ofd`ExNi)N=PPfBS6_^tt(e$^byK7$0IR;Ns$*PxwhxcnAN-ffRRa{Sa?;L zS`}Y@1y%2Ye}oz!$ffAZ!g#N;?L2wAdBW>RWLpKjrSfMz2iM?wZVFqNdJg+pkz7BD zRIT=<<}H-nd~XJ`YTt!$!+Br8U)O}?jwe^3Hk$j04GI6C-m}nWO&5F-`ZVn*wNb!>SbvWB? 
zbU7bHz5DUhk=U-SZK2r6OCiJ&M)V}4h8wAw*T2X+lYILRoshZ9TjX&j zv_1I01CtcgxT}1UXSx7I4}6&TQZ^mC5#=}~KWA53g0RK%K9dZA2_kxtdb~v)pmkqb zCP|bDv)kJ1F~RY!bXy|p321UkI9-6nK7w;t;qHnfx7>dr*3?1&O**0Fw6A6-x6YnH zm%%fk({%NkLJJ5J>4(0bsHexdx8Im_>gC62zpzo{@em4~b-mnhES_zgLT|L%gzOd7 zI6M~{l*pA#zlJtDUs1cZCx?#FwTz`TO7_}>u|h> zR8v8MVXhN)Y@tcCBt0E4V?mPcUgT-9qadtCQ`E%l0N92q(X2(#r#Yp*zEz~Y$QDhN zsI|H^TUwbNDsCcW_YFbY!@(6XkwYN-LZ7po`&16QheuA&_Toh8t2En-U5-Z3c1`Xh zBJH62>xZFq7Pt88GOF0V(8W1^S#7*U|AP4>8qy}0Zcuo>jHQkSllDq)$=wWuw8wRg zy^lfOa%@O{Y-jDj{&YyHmrT5DOIzd9g7n_ykMTbIU;(YHujY7l=!h^JXe(aci`-7( z*=MR5Sl+Lvup0pE)F9}jrIr{U_!6PJ<`yQwrJSdzXpL`(qkf0kDWYv}!+5&j(H&{C zjd|HPrv?xNbk_#l?@xw4HsK94`UCFMRiO*k^}!>(4WRc{h&l5UESt6$9HEpL<3Im- zBWq;a9%CO^y$+E(hnxSeU_AxHxh}V0qSW@0 zDyBfw=1nL~pJEVn1NI!aR{+&yWKaeNI8F?nkqz0ize!qDFOAE2TX(guE!GhzxDz43|NLBw<%Y(a4MZ=Lc%pRAhYb)IwAJ42 za=hvDTR(`{|8l^R&S?}aq|Fb)Ll7AWJE---N&)w@lFq=*M2uA&q<|LiU)X!AHXhwA zLn&#(WWYi0_?E=wQt*AIitHo7r>hP!lx$E_BWI2?xKXJV0eG%mXU6fHfm*OUv9sZ2 zRITIE{@K53ia&n?V)V_}s5q3vH72%{)WZ0JV*smaRGy%o+VN9fSrd@+Vk946q~b{u zekGmqK5f*3qa%D@^OQ)Yd2HR~Gx!nt$aHxA0#VgvQIn^^T(Vq!ZZ*1XhM0M)aYL(n zOOJPhoj<-r>r<4b*`+Jh1i1Re<&U@!bzSYm?v9t$`En{KZ>--Id9IyqPwYsZ5 z!bj>Q4#-I_yTTuMy5q%SSG_XKIc~i_bk4C?`H`V?wMAGq5LgbSxoM z&3QyOyi4SwrkC0VcbR7Ora!AAZ_1#FFF>YbuDnEMAY#dPzD`&Xyk8E9USI7I#**3|pQ3*eDuu7OgP zaw8G!+3>80CySo?9`_QcSdNn)b!bHgzn5?{&CX?Aw-oh7*27Rd4$o8|Vk0hTH=z70Erv8|K=$k(X&3K!Jmfz_tgMOWo za*f^WC$-crV?k}6KS?~KHUi7%H@L&uM=WYZe+h+C`kJ7vqtB$ta747;E@jJ;OZ}xC zf_m-yTEw!G+&ocrW9=lJHPkPKca&25C{kfxLJ!S8M9orlPCh?};DS>=3Tm|%CE?%al^2qh|3QV!k8bFTt)?UY0J5pLq zGPq!{*gka^p9D3*ritNa>*p!Fw)0Y?U)3V9#+F#u-F}^ zhlmP7$qiDhg;Y)JwcC-@!YUcQ=gGbA-jAjCog+3(U%Q4NXOvVj+1*R<+7mlxD@k^O zt6r#O>nx4?`SKaiYGL_%u~NQPUB8T;l9Cj&+CH*n6><`GM$ATN%6#t>uQm+M3=;?# zwCNaO2r?tw(Lf42&AJsVJbFdG#4{}^&}H$0J_rYVuX=>vi(~z+dvBBRxUcDtNEr6O<+0dX5Txesc&`Y;%?b6_Vh%Akvo;nKi|b#geb3b;=SA+ z9f!))Q$ew)qQfrHWpa^rmg!b$l*;LfJ$-rkzk@*^9jGUS-%~==U7+#*^`p<-0xbk( zqKvV8l6qYdF+3aiE>k#ACP4xmqjEa!n9D$};O8`@wGdkM7u4H{N+w*$oFw_IZ~eO4 
z!hbO*J`7F+b5yZe=WLNuB@K>egDgXDkqNu|nb1di$s1eC?q$McT z?!0;%3PE0vNyL$~-&ZidH%ghmc~zo4Ew`jGbBUySRN|iap8UO*OlO13mS968IE_KJ z+~_EZ-z&7)hm>i5I!=ZDws+d~qYj3~$kTVfXFI%@^EZyKU<%7F#ikBapZ%NX$lvSh z?)z(UNX)9+_F^__xv5sMv1!d~W!Mr=m#h?D0IiBJShi?phN(RG*q<;!gPOv;xIU0p zI=Wh&MGy7qg20cuwn4>gb7Gu)^k$M8Srf$&DWzFVE#&D(Sto!7QZZb(VWe`qgrEZM0Zwbx_D)H_Kzc)+vNCQ6SG>HdJ>Ik9Yq2 zQr?`&PZh*-OCey0%(k@k!U@P9;ZAE^dw zbN3%CXC2d)DK9CXW$fh=bq*HS8AKlR5VvO{CJ|i*Q-05^J;g@rrjFJ^l4kEA=pCw> z!_;*r8q~?{022!7WVprmnpqk2TXp0R-;#L|$+TReqRXwL>YL$s7o{SV75S;;TH3vi zJ4Z=}?d;Kw#eJLt)A{V`%5(^sgEc;=;oWz7a7jKs^C4n4PNxSW{5p1|kW|9b|41Fm z#zsM&t#62L>ZbwtxZ(RI^FhIo=oXZ1{p*L+49p&-EOPOw>FCG~=d%j7GmAPNdf{>y z&P`G5p4TTDT`j}C+pFZnH)H8JEaa4z9FcSJJ?o@m>+8y;U3e!1#YTlJ=`E%p0vJmn zSL&IgC$*|N=a?1>8(Sx93qi?H&QcE?6m4+2L5W9#Z$({`^>cvg>s{HSqa2fmhZb2Y z-Kr-#^_OS)O0&VlR3`&F*M;x?V~%@8xzFv9phvAWh-gd*y*>aJq*L??q(KC2%b<*2 zsNe~ioYjBsj^Fkqw*5LCKWeJa3d&Q}~ zo9pItMas&W1(Y^xx?#&<{N(r1M%TWb%ay5;x5{kGFy`psa8fDJN=5bH5Lcs~6q)5l zm#|AGBr=>cCH}xrGBk~TL;Et*Z4%xDVFj)T&w`+LxvJ&cGaO!3>o|h1S&g1k32%-F zCX*wgbzf_lcO|P>k0Oj`P1Oo$p6p5Yj7&5h2o!IAm*%-(Q~%b8{;hTfaeTM0Acjg7 zCwF3iaHwe+v-t*CerN_=C6&D|Ip zEL?Ir$5p!ubI_%yeuki&u~T?G%b4AdX;Z%xdploA$;Cp6Om#}Kp?BFAtNgRzYCx~li8}W=>&WquOSNBFtnDEn$9hhGO?NPM>IPM+NXBl(qG1~)L z(?)sv88(8q3p?QS10uYG^dF46wuu~%?#Ow83Om6r`jsr$^ah?x&XVd?jSZy9IxiFy zzE9fQ7a6#i+snGR7fY}fb)9F`pp1NczoBpx|5kmg-vH}y3zVgAmIS4_OXDv}~>qQhH5Ki0fzL8Z58v|083rFVL=E@BKP~ zU3%*&|9BtNkUK{Eqnx;NMU0fdn?`C<{vC;tc5W9G%2ekN7_9;(lswzBhSybUi#{VD z44q@LzAC56Bu)+A@JKK)XQ3o;@#yR-t(krmn5klD0DebIU9(H6GWyaX$?)P%$L)L4 z^Q_;Gn@8f` zS{hr*!UY8$XV{1cs*EONOew15@)BJVsW|?Q+`L>30JhL{^W^ZVe? 
z@6GY&f)w}p5X`1e?kr4wUfy_GVpBs~bQ&Cq>^!&V6nu?9z_*7OLBTV``*5_$Xn{eM zRUHJA*?qu>bj0*5h6Q3s=TuZc=;?eV(rBF-V-nIEji629-K??pa2A^lLH7RMFSJ>& za-*fVD>GvXm4Q@so&5SIS4_KU<4T4iIeVWF#t_NwZ$eev0hZun$PCg)Ul+5esm21WU%w5cbtIe(zk;kVj z1(Pe~2O4n?`lG4M((;l3fFd}JwxEv`@%NuPR8QVn9ogN!7Mq9`hj+d#=Mx*-{tGeDnhrnHh% zb^1mBREArsw76ZIAMLOk<9`?FfkYgFCJO+_{3fSM6;U#!7p!g4ovblw7coB`X(;9 zLysAa$h>wDEdUqSzTO|a2kDx!X>M6KZW~`1o#q56KYrODh=p@J3Yu8#lk~NmMoP5O z5Fy(S)&@XKUNE3Y{E4>Em6AL}$_}Hqr36c33L&;H@AEjnPTBaw-BhI;7mx@ZZxUHPuB5r-5Hs_X`CQf#Lw`?>;c~Z!57Xu8hzoq*WP2U z5%r8J^vfV+Lcw}EYFw%Y^1(x?Z{+V&j+8|n#XsB=t2DwCjrC5|vAfm|Wd#;J=N37I zqI~r1d{o8nCt9twOAsZY!3K*x4KHHnyO$$pe9PI(vxYJLL16Lpaq9$T| z@V|C@imm^>*}6mfIQI$b(u7ugY|@^ zxa8TiW!S}fKtLLzekiBelVu1SW`)=%H#m{pWC{$e^ zU6DyfNjN;%MF>vb=GAlWJl^(=c;*^%mag5}oYp9d~wW;n~~I+DSH7V4dfuP_rtKnCOk= zVvxy3Cz5)QuvQrpUa5}^wKOX zrDu>ps%KuU;i<%Ood=??BLXrZ|6gOB|F^N^f$*zd|5JXPX=e8y2G@a%`LD^}`}_I+ zWm7M#%DTvKCQ^*nHn+B9Xr_ABNLi^xv)|&oNkuI1H*3JLqU0{}+F5@kpvoS;J%ujn z{Z)ceGV(Vi${%wD8&F?xL+KB`Fljv$nZbTBpD~UpLeY!>neV{;Y8WWa3kgAbX17$m z_a?AK+<7#aa(yqgYVhQ~zXdFR4ic*}?u1=-w>D_UGKMou1q)&lZHXu+Pu>4Sh4Pge zBTDa~5@0bNtTva85h^QADdH}r%;LkF(T=&5$85K48&^MBxRD;kY-j^e^{H+5TS4(* zcE=yR4B?|A-eiwW+=ppK8Fd9gGu`5@l;=FK$t6S{NE9*0ldm)`9TC{8NF{3m^Ji0hMkYp7O4ypr%cB6IN$7)xsWa(;gzTl4zQe9H&% zT^H(!b9YPkx!~^}lx@|&J<1@$A3i3$T_)ypRrL&fiqgqqhQGJePP!sTp!VYd1yVDP z@bas;GK;X5Ck*y45aPz7pd4CEOv|sU|5D?zSp8IQ19)b-Y;iVbSb}!88}20^X>Mx_ zAWuJi$7zB~;*pfzWlr}f*tg{$A*R&Q+8?@?SIWbB^YniTx7-htUh`rm737)rM1Lh2 z))7OZex%p;(9(q33sgAz84PxGmQEJ+!GrRtdou7m@K&F^StmUTt7YF`(rr910U5qG zltkU${}O3zW*?2;+~<*LbDqH*}H?!H;d4Kw%RW^}o6_F9gonL6Uko4N~~5SN`jj+|pK}Wek&D&>*CE zd)!j)0wPUB$77eXHTLVK!RpU@fk%$*nQG#u1pXi}ZN% zAAaA!aJ+U8>kk$m3+q3&&brY3C zx4C9Rs`)l%NTQ)rb>G(Y@1Q#**?ra20eKc1y?MCy59j|kS~x*7@h31EQ$&3?t$!>C z0B<1Oq7ud|17S{iWsilC*9O!^P1jrD@?vw6>upG2F7h!>)o@G8@cy&x>zTOErof67 z6%Gielz>lc9Tv4f{~fhp*4gaJ(lDU>l+ZnZnb>_R6$1HDhXG@6R^kLrAZQ1$-A(J& z2?JWPkldp~D*tRy2P$tH+B)?cXLow>-<^5hprpCWaPi5L(Y-~h1(hK@YgR44pok{V 
zol^n2RXUCA^=TCP0dcq95vnhP>k#XR(4$Vj{Y1g}KeM^Ot;^yfjF8pyrR+XYGJk?q zp)>XtW*>np8ts87{T5xDi}P5V_n^k z0KF0w&68K~*F^&S8o@C*n)`^&=2eM-GNd%a zar_GCX3q2S)!}Lg05D~7kyO@T8OqqQ=GpSOAdn{tALR<=Of?e#yLX>P%AQI)L=QtD zF28$vSPp{9afEZFt;q!V5kW0SYF_IUV|1=CnhwRxKRj-ai$ESzX}5AB*T z+haT@D?V)HHDfx>Nc{RBGD3XP7@`9vwN`V69g7ER#UoL#(W&){XP2eLWvXHb>qf}? zB~6?KrVtxjQuQe9w<7GHqZtP773(CJ6c+jbxLiRYiOGvl!FqX)m%Lnfqrq%&6Y}-O zX@Fe$Y}J=^Hn)kl*@a1FPknbOzvo89#pA@L{+(lQ@T>4b7xwzySKbdXEWbT|rWek= zq(;y>V))~9sPMMLQZZua-}Jn5>v@O2kUc+%mc)c~8#W5Bo)c7ccFLK4ZwxP&{M*6) zgmwG={d!GQ(Dc8fPr?8;h}LA9)mrV3R?AZOtVq zr$ES{)pIHuSd%T&Q(F)^*0UTaX)Bznx-C!vH+d@?_K~z7$}aULwI~E?wpskvgtVQi z5R;~~GA&vhV`b3n89KN20Z_XPmCu*-`A6yLm`roS$0A2cx>|okyk8rk#F3-vt4q4? zBkRG&Ym_yDyn!sa>M=hk8{KhBP`|D&Lyq?^L&R5df(SC0ZNzr*oo3qvMg6*CTLR=} zkVD=A^C0H*S11l#s?_P{j~J=V3WS+dZr$8Fj0LT6Wo|5z_Eo|XtfCO-opcD z&%i5E zA^^CKX|lRk&JnZzzKnc`y?HbF5g$ov6AC5o|N zv^V;TmblN$;hqgR>PlafBRTlw+Yg;fZ`i7^pFL%;M>WC0NiN@Io8yvY4YBo~H{lT7 z-5GqXL*DZIKVcRp1ZY;;MB3OKR%{l3aD_&~nLRvnB(B9q8TdUyLpKpSoq86b`KSeV z_U$Fr_0SwtBt4OYUmPs=ePN-W6NUn5`~iJul+V+k8@zWRX_4_E9G1NL)vSOk|>75ikb3dSG2quk(f48 zemAIE>;p}SXmEC01wr;pbhK051TyDA(@Nsv%y=dQ4!Ekhm(4-`l583Wnf|Oe0{p#8 zgeHK#oT)RcXhL^FO^dU7kSUmEzqN@TAw5p0*4r8yIKXG6uTHR@Vr;A8iMD9AY@86d zg7l|gYg;)EeBL-+I)d`}8qjqfqt(o7o%!g=VP9r4$is6zr^k%#lOLT-B$dN!^?<^& zzPVsb6}auzStq#I9;8dC0MT*`v=Pw1~s`jhv@wKoP$=g>S-sG0K*Uj zd~zD9 z_tPdRCq88^^N6O=srT_)OKdCd-nbKkAg(zZA*AKM_RDy&yQN-pXHbgHrcu?VN9??b zf7A|hVzAlk-oH`jzla6dKL$ufX0KgRW0nB%ab8}sj(oJ*Yf-`Fw_yt|=i} zSoq-gFhsG+y<K_*G5wn4q(3Jz!0~A*@(4Bc7tPV za_~CkOhwT2_FtLk-!~6T3|2wd_#RcyNLWY=bt-`?nXf_-Xzr1x26NCKm3D}pS1+?5 z`7({{6Ohu$V|h)N(L;;#MF*K$)iHX7+xSoZf~ztE$0HqUavm$OYvwmltSITXUD0hr z0nL!7`&j7%V&7)IcOuEw|JPL#C*akG$G0$UqVwP3*ac1c8%KB?lsRWZGAlC?NHg+USlM#P5C}IA2k^bfxLo<-VG~MjAB%EJuwdWy|>;x(FS8v z;vR7GkF@@In^JYO1M8yO9>pl%l(`E%M3P%}KnJT5$-Lc++|m(UF)Dl5F|U>62c{#v ze2m@-PZ0D>V+6joI3ZA(6(vHJ;E*;)tjEgrN$zV%?c@E%2D;1oG%3P$S0pQ{JxSQAIT^EeyR&o zrGMxWPUoyMNsGz3wNIMewRe9hNf*%^Ojx89IgW0-Fu>(FKk1?QuvIdux{q9H7iCBh 
z@vFgd2Rnmcc_t?-<%dSb35vZ>dvB-^^3dvN!}dP^s0m2H=TNOniGjcQB~ZrJ2V-6( z{H!|YO#<+F2Jsmi-hxTL^OYpycKz4wC>4FGTQJ0jV5uI9G5(Okz_^-MFmB&*h0>d_ zP}n&UQpFW++ftf&g^${F0eoluXQYha)xWD~8>q|I;Kxd1&i^UJW4eXtAt<>h&*)It zgm|?R-6P;I!Hc|iQU`sm>%zAo`E=VG$NH_%f6Gu>9Q_!8_aamT-$u=ulvvRbH>Ld( zOvqqSUrk{lnky=056xx;aF{WCnbm(j*w1VtBM{H%1N!S`?#IiAz-_8y0ztM3;X`zPd3GU8|S3)559VzrX;>PdjAcTAm>UM2NBu#$=1!^ZB~oP&{cfM z?EQkT9@jphv0ZpgSFoFj))GHFBJ6bNV^Zns- zIZO3{w}bx*)M&M*kMR@8jhwiwB~CKM zn*ZVqI*sHSd3`%kiG($krPPzTls*OtCmUt@C5S?GcD}6R5DCi{*s_UWQngOK-r#a? zRASzLz2?PWC;3vqKPE98>iagU`!Wc4#s26 ztOUJjwzxnvKUCq6B*g7v@>L3T3XlCT2^Q6d=ahV^7e9kI_v~r{?^nawA*mf!993dGSa) zwfl!jM_HlEf)_aSq!*Lom%aVhy3STX=7cn@tn{DRRj^0+;_pf@J~W=)32t#ZV)&7m zP!2R7EJqSTcjk`Mp3la$K?hcK+yZgy`XXcst0sG6o z>Rwf=YSk)MGlaf+8M~$H7aJ=kW~}<{LocmglRHBeEq%Kq71C7NntiwoM(-&b(4ejR zUq0yxsAFsS9MoY&*!_v3tOsB~cOExCOZf^#qJQfzD znS@W`%V6SaFrVS$Gr*cFjXF?yFtl5HKG}%d@IN*JPg3i`dMRhLcC!D~p_{{h0n4?d z@sZ{@%N37=NT!iI=p^(oX(`7u#v4XpnkuX`PUX!2Hs$f;DVf<+wc~$&^RtxD( z0XE4#^|4L!;xuRmmWNv@6H^oq2}|4UONEONSm)GvrX_9sj%6e@8{VzNm20Ik$T8!@k^p{Ezzs$RkJ)5&;tm9UA5fUBdBaS-% z+%#$&0Z~-bgRyu^*_wSU+_+OD~neO=Qj6B-vDv67Vy040rh!O=eoRGXS#7Lz8{19 zPQwoqzYYohCFOL^X56xjfeF+u(GScu^TD&U(?gP2fY1B4b-UL?QmXNAxM# z8~Et1?+U)};fb2eq7$j44`%XAu+hMo{u-~LBV~D#D-*IQY0^W3bZv=(5Qg6(J~?KE zj8;9!h4n`13PW98mJ{Uq=g$e*1$`Yyo)+K2`U#kobqt+fox_faL6b;|m}fRH{*}R* zcaQ7Av6=WD)S%K!u~*XThnI+zetk52X;(>A`*}{F{9W3*M5CKY|C$4xb_M?b@o&fQeW{MAD8@9HnRY2-*` zF7Q)O_59-dZD!|KLUpejYD^iblH;k8Znt^@K-Af%3N5fzNW?p7_sBxWk4jO-89J7I zwAN$kkwe%8%eVdWoj|VoV5^G79{+m=8YzC1KU(o0oX)=+9iJ9|)XSPvU=)!_+Ci z5}g$Y4qwOola1ZC3oTG1BC9>s^c~XWAnKPIU8a^^C<`4>N`Pboh!T~yrNJ&w%S;cJ z?oEg*3rA|Fso6$RkzU4hrvIcgQop|zfIbLwp3+Xk|G^Yq%9sK=91KdXY`U@eyos60dXuPXy%zy2QgNoj{7Sb(6&+0g|W`3bj(Ul zl4{W}4AdH@@HN~Q(O+XmaKVPwk#spR`A<%Rx{Tu*1k!O5=yU~vUBl)DQv?i9cXrH= zmQ$nj$gP*m{`dN?Jq90WjN`bg6KHO=1}vWG_9VS*O?t>)vc#ia1lh8BTBLgVBp$S5 zy4K?sAK18w4n3$2qIaQbhOoUdTCA?SlpJ06+#` zWZd{fnyFVEW5X|M^`3~b<})U1YM*_G-FfrU`K;kb@11(f`hxfGzGRypCalH2w9L&? 
zhWO&f?|7JR&sKfx%Ge5%gAdbX129DApADP5Q)YtJ9k|}04mP3qngp5-+GAX5b@&th zJ+EsBTBSZ8X0NGU%bVQ9k8cdiYsPEDX;4tuFNodqAkJmC90f_c$o{t0d7MGjQsYF@ zr1S;_kv-vhDsi7AyH$8lAetF+Q62d`vJYJeGQUUj(}~jLC+bI4)=qnb-6_a^%myjo zuZh7eH)e|$9DVW{b|08-+2xiumTGU4h`(ty?+;H6xrbTnUtUGF{9SkirJ~r47W&%$3EU^@h;2_^5qmk5l*x z${G0mZj!YY;C(=*jFpiGxF`7!?)it7Z-SmDTx;`Sv+mT46d`bfON{PKH038N5ncIG*F8ZSTU3~^A@$&`_NpG-bcr=_QCC&9yM@yX@ZHE; z|LW4oP%M*{$Fi^53MX77nc<@!A{IcD013;vROWDgKSD*yR(Y)<31l4OGImhOkd6lQ4h8v#Mk9IueUu^v8`B!Hh_I3 z0pdeMte#VRjCVuhGlH+x%y56WAl$iYve- zv|NPl)EQ*CH5XsGf!~w)F*IHW0$WkG_0B^x4i$ZcuT_F(W5b#Hbw5N$>L(wkz9=WL z3i^x?EI2(_)vje?U;ie>-_ol>Q0@icfAo`D$E57IBG<8LV1kY4p!|iz z1iOashY(IX!0+en&oHyC{>h)u0*xD2HS^EV%DYoKd+$w7v796m9vGV;wucxBH6n}H zsuY@-$FiU*)@OeZ_(s*|X`BW1y)xQ_A>3ucM)<;t;O*Gsmpb`a5$;A?%l>MC6Xhi` zGHb>NYrLiEMp%qY=EuC{Y1+b9EM6KV(acd%u0w}H&B@(kmeK`hf(5-x>b69v)*^Sm zf_rq-r29Dr@d@zDj^raWB~x$^oPmMiCklI$-L{^x9V`qkR*f@UNMuhSeH>pFPYvwZ zsL~;l=Brb``gUc-v@br6?Tl;qu}OZ0Y872_}ih8&2v5Lu3>SJRV7CN94sfXMfyc7 zozwfYR9%VuwK336V{KV;VO?T__rlIi%!-aCv5@(>meTmd44iQ8jht9(Qn_VwR1%b^ zF++;`uZ!)4vMZoK9x@>4X!OADvwz*}^qS7+yuC}LhDBZDkIX%(*FE9-j21dE30Fa) zhyTzUvKr|`2KB``NVdBcme{+C;4LgsDU~U5MDUExqtRXaIAA!?Fh3SaUt7pe5h>R< zsO-)bEsX1?tAgilqcEi#IrTGb=zQieA;h~bY;l8bG~+JitSzQ9@htaiRmkJ~)#K-4 zo3b`&QN=3sF{SimW&EFLQT)^qcjckC{hIL0MEC#4&;Ji2TO>fv=JJo>jstAv>ig|Q zNqPMnLW!yS@t8^&o431mu~e9Nu`4qUeuPRO8Nq-G&jQ!GA|A07_>w7_1*%_>=*c#8 zc?+gkn$4;)ecib662)qduHe zcnBuk>Gc|}LOypL!aN-cB)TUT$#9g!x=%`X_CN~NeVScMVFUB0FvHY!1T4i5;P}q$@dZn z_nN!I-?g(4&{Pk^?=)|S@^J3765=I;I`MjOBsDx+yoy55JjvR6esq^)8e)66*{JVD zHZORb^xHZkMo&bT%EoVdgHr|z@N5d{Wc>IJYh7HRA@V@ONx_}XXy%2_tYh)Z3GB;M$Ue8#doL{R@ZmvF~eOnBxu%WiQQP zEl4K8K;iS@F9q2IJGnSkf#00xBmAC_np&Q>eG2yB=qUyqCeH-TyP7_sG4UZeRB~vwu3_6r$eo7yl!qyT z_xlEYT>N~%O+O2Yuw#90^H0WI_s49aMT2}r?|F-nAwaun->nckjhW7qW@@op} z*)vC`TLd-iFBHPQOGrBmzd-!;+0Ame8LWk^tCzI=Gg@nk5!^MKBx0Aa*JGsjP~QBC zHbyXVRhfQ3J0t`bh>>=DiOJS~vzP2Uv=D^9{Nf^dF%wMFJmw}KvJP|%_vk~{A;o8R zZ5u=-m65r5-(nn3!H7Z&+qxp*STVK3BVH4aP@-Zb}^dti@MxKL~|&cx#qOXr7kgLzAgNumrt5Ey^%fg 
zazz<)p7SvQnUZ-j*i_~Z7=ZP*l%Rv0ljshwTHTe9L{+JY$qDe&L;~MchZVRDy5}WN z`PKUKH;<@|eRo|8#=`*bf>Qh8|4VIuqv|{p+g@lHe$ExarI-0%VTK0-cTV3Yfxs<9 zG(t%7d7GU;_J?(-RL35(Y%aI{UZSu$u4*A|9uBP(>}1NOlPtC~)~b1K&ggdkJ{dNG z162Q@dr9`_0e&>zYw(1BlJbQrCHYoqTcdz0`Z2>Qpb&`A-JV6vr7$V|VvXgSCu;Kb zsQs5}eI1l1401^SHgHwX{uv$Io0FE9Dh%hy@1?jzBBKB?!N4R(Z;3`rDU*oDEb8qF zNp$6j1v40C5hvn4pywsxKk$Ysif=ZyxMRdPN_g&5Si$bd+P=tMuJiD!YnWW~ssFrp zj&r%dr}DOlzelzMFJRsAT!<2%(B#xfw5%)^MgzkQ^s?*{mZq$eUavy*gx`L4(v$n1 z)c%2nAS9Cb%E!>jzT0d+adgNkKn71t_x*~a)%%5p0~bor$>7WHEsLcQ1E(Gz55Ez&!I}?qB(=akWki?oU9N8NGQ6LNh%N^$H8F zz49N2y4+6_KgsgG`1o$8)oTg3QzM(8KY1^L55g(N_ED40Qw;^)rfA;Q7-Lq6G>?GD4A1>LjeYbJiTGhPzbbxjru6;w z-zl#>tOH(vg%`oR2Z+2L3GYo9`}wxIikWI8&~Fe1nI<-2TXFN)zL51hyzJCpCESVS zd2+`w(r-I+sIzqm_Cc*qQ-5_66yV#`mW%8naLLzc@-j8P=!Z>in11`!%S)tQv4+Wy z%7zAOllYYyh!{d#5I38&6c-~(wQfjNM7*&5WDl`ueqQx zP0+K8h_}bxEze%;YxUNkWb!RMXm-%fPzi=XnZ;?$eUF97j`}H(c?Xy8@y`dPhj|^hVnLl6R*zFX7o01CGG=#-OUToF7RUbIwu3boG$kP*OGg+xd*r^ z_E;h(`f`T2*HoO%Ve4u|K-|c4y~j_&j+u4`mLhfJy;A^y=AUlXG>UMqpy{|J}b6Ag%wDJ+Bn7mA{{`?nIDQYyJ%-&))DjXxP^y3NS`O0V=|k6iiAO ze$nijuo_Lo1ZiCR6?%gfqA}Z1fwa8J&2ebCaqE0y$tSq$3KSn(4<|ztsLZ$N53UVr zs-gL!DwS7+LMIA8Ttm_0%8Fsu;XA3!&%f2RjW9c0P;&f+v7o~kQVmH=R9D30kh;Qlvmu0AuP`Z*W zA{2+ld3qJ$KhGN}^hGJVeYd1S81=Y2Bs*_Oa_)F!3L!!_H2&lx$a~Ibm(}GdYggt2 zndz6WrD#@n?x&3nb!}}9p46;nPR~xu+kBS+A;onOc5H2fbwbEM;}*`44A+A>b2X>B zLm)1<`oV<4GSGi_Kf`MK!W$;KE(yi20k++yVTna{M!Lw1R;$3rgw%oNm}K-EI1Fm- z4R!K>@CG?U2H$wXg&ehl8EjJ15qXGOZdu+`V*nooV}&xbUvirFf5#v3><#=+MZ*gJ z{fx)&rgh>w-@oaW?|T@yjA^+-v9U%qGx1V*sls8~eBm!8gJHOl^hU4+c8Lq8OF6TK zEAJJ^B3!alU7Ds%^}ykOwmN=9q?BJ)5PRJSo78pO-VX@49pFLc+R=1;N<7KR$amf#3~~zGPQgMm zZcM&+$p3Bk9Lu^ug3xU{s%-bXeXoGiQ((Nmd}C0>HAD5W86j|abGMS*rf6-7l>d9sh(rljG;RB}tHRf7qHK;|@L zn^br?qH_4kCTwl!m!eWjFVuIc+4Drhwx27N8(7|uz51sfXiq)9z)r@OOd8Jrr_c5m z9^!jN4ol}qk=tJJYmVO>;Ayi>WsjY;J!9yjpY^0g>^M7yh&-VuMBFG9n*Wo zd^1_ts&{l3W$C{RE1GKk@$Rrc=^x7FNyP>;_2Jtj+h7uMU!+|*@f*OJ^W0!5m{k^* z>OBD_QZf6neiK+~c;j3inG 
zC!2G?ETVWvmlTx40N-$EhXSfR3T+(!ZnYd9fsja5n9sX;>hW-FtZ;^%m2q`qLV;?=beY|m zONEn)4@^@KMQG1~I~5#xi%`@qCb{>7S0!djT?qF{tC9NF6gFtPyBSmDVT5%Rk}OE$ zyBXV*P>AlXu1amLQT>B>9RC_!b?uJ!=Fs7>%5a^% zXK)^{Ik@2@ouOO#A-(K4E|yQBtlpgJZPV!eHw}Gvk9TmoQ+>jotWVY|VUBf^RwZ?u zXHMw;8qQ1V=2!pm0_f1t`BaP{@%3hhNFAxzD7`HbGsF$^-vAQPkuBZf5=<`YxUu?Q z>SP8_GiE`T_-^#a{QmGbJTJTu8Lsgp?_lURds}_082avsLD?Q{LU@gb-=ELa_}M57Tz{&TBC=?I#UxGtPPtNQqLa3P$wz|V74POIEcT2I4TvqwvpSib zC8bDA)rFr6Hk<&!48r=GXdLKM%LfHSQ^!oy*0Dok#C&I1&*Yu>x4Pb0^{uk9250?qZO}dwTv%; zM61YkxS{jOG{!A2zXp>YHGDZ}*OiOB=4}iP-{)*b7j{a`lx;Ca`VSgKo_ed`!|lkY zI-FG3r^sRZzTBjOk`Q$qJe!UZ&B@&q2Jx=>+1d~Q#hBKt2m`N!4YBCWDLPWC~OX_}072YK8c zCk@O&0d+S@X{TPpmHaz&b%x>5$cB3H8GPQcCOYm}ylpLYRoB$lWrI#{z5d)zufGPt z{Sw3pR0g|o8PJ}GmDjZqIP*$zweN+Ezt!(PDHpvn4leo5R#Z}kGL1v{P@+erDpJgA z=lsj}5g+cUAk@MulDS@|MMkpEtmjh$S7EdHhtQY>TkrpJ98&BBy-&}7by2d0N-|lB?Jl z^A0usAcEzrlo+F~D)==cMLa1VTr}5aIZ8TE`yncfIjr^421%!KuLfv|XD^PfKWCPE zcwwQb0X++pVQA&8As0t_9vp2N&qYP$KNiAY*#)#~;J+>0f-fBy!61S%$av<6Jd>tN zq?5rt$Q)?eXTj?QEYgTu8vWw)GBGCUU_=J??&G#8IJnCDyZ3Q6fWs~V2uq0;&4&ab>UhSv;XWM3dg_8r|QgCMfg!Ip>tA<(=^l|6q`31((ZK$HjCt9 zrfL_v=@y%nBNPsBMKg!pcxQ|y)Pj!skPtZ!_%*X}baQi3_Cxuh*ON;Kas%;UW9yjN zgZc^MnP)q$NZp`Er_bSNqiGPN-hoD{~! 
zo@#nb_h_ByrTJy|U1t4FV`Oj3Oo`IN_nY#UfgewxE{%}*!X8I)yIaM|S zkrPWQ)->K$fQtxl|t(qNWxKJ8J`V4Xtxovyf8Wy#cCHPdg@x9h9A zawD2E5`>5=Ta+-bN6F_rYlR%hM3BS7V)w20;At-!@Ls_vzZpLERXeRS0Y(Ro8;*4$ z*G7O8_>B-CvsFEFyWtM_>oXEj!kylcvmktY5hSXZ3H1>Kw!;*kbd#GBcdb#dlR*1r zeXVE{7EP3vo_^SPo5xhO3{9n;l6JFxir$4WEdQ^!`t*_^{G`|9zAl~mT)VZ?tKaBlBIN}H(G%g)F?YbZ- zB~#{5&q$NI#7E6u%;ec(5$g-;DRP=Y46t&3hP$G5X$~ahz<&~td9oqdEe}4^ z` z*UwI`47ETyhAdTNa3;g84zPbZ3Ht|m9m}JBDdo^TugfGCdy~mOeOpc>XZa)Te3JS0 z`M>7YW<4xt|3uJVT(NvMTR8PM>bM%|o<;9y31Sq`LnW1A25b-ndUt6_Iq1Mqtsk1+aIZ!ER7&hCP zxxHP+Wy1Z^EYSUKUe6;PM`SlJ-(xc$+-q$2R4Xoe${f`;Q~U|JBWrF92oqu;hQdkrq z{MS8$%1(*MgFx6P9>T}2N}h|UxvKEC?UlKw3fog<4lx=9JG*`!^WTg_aV-K@0brZC z*RiF5{F8uEFsY7(aIm@ZQlXlh3BxDm5ZcU#eNB$7_4r01;1`qfbyXNghWVqpXH+NF z8VtqAeZl74ehHxErIkpYc3o7O<(C`y@hQRMEJ<3Z5%jOEd!Squ=~Ap@S7?CU%{h^K z{qj*(v1l3VqLg#m4A&oYMaYEop+g7vo>Z$o2ENxcckJTpAt_(A$ZaXU!mg2Zh=yAM zvGy1|tOZ`Kti_wy1O_730Cdn})8VknX{nbyrh)07?6fAC(~w@*T#eO9 ziEY&lkmyA)uBA);{!6%j@v}jKNlmd|ZKp0ZpoJ5>d5waAPG5_j=Rv%VOUo2Uy>yjTTM_Sj?oZ0>0QQUvu%Z?tHQ!*ecm(w>7Cy!R6WrM;i5s7!&*43zLufs zsSa9=OHfD~;bZeKTW9^1S?2)P6){Pe$e#M?`mxX1tP}5RI1%zne5;P*7@iQb6zm zRRtk_KN@}ZgC307<1*~Qsqi6ZvFf=b&IAXryYtX(;Ub1bl~X~Rleu^cJn+ZimCs3F zzYagi{?cH6r@xxrhc4-r(jDL9P{zhNJ8vsXVz)o!d(3~frZ`zj@77c^yL(or~Yu6wn+e*p>_ z#CX4^w-F`Xrnvq)LL>fQ;7f$fSHa$|YDG#vzisMU`nZ=`{kM!2z)t!7`z=*tR|{9K z!up@STN@AT=>WUAV~Bhl9x~eai41}aywjOr&e9LYX5Snh2xXCFLeFKw54uUb&NXHj zqc{T`3S?q*G=rnD4?8I(DfeN*b4{#zO(64+qS4TKmtSF!ORje+6|&m_3edxpJ^SB^ zi@{|Gu_TUi)Ge)~zz1#fZ<>#X=)cKbKo3a$R$z;Ir!qajOZTeGiR}@ym+A~f zHCLs9Hx-ZbMMZ#7R@hZd^A}w|WSWWqVlAS)E4F*0h(q-*)tpY@Mueu9reCbtG9E_o z)(SVUJ3ElNE%N0j$Asc0uyfzlCnphXizufzUUt9@3Ca%(UAM`0OWu8y_>v46b+XFl z_70YjEC(6VUMU+oc)4OwR{c?{7!KQ@!j*Ej8173#RSfak{rCurFD9VFEUhR72&qg&tG|^|Yl@LV^~= z){D8|UwIFoc)pbca7%uqJw6Z0YN1Z<=ctcCVoz0W0uS0%k2*%}SsXDpoirO22q^i8 zoO>I4vX|+AK~o=BqF*hdi5E(q35N~EF#o!58ou?iZ}Bz^iFXV;f9sqme;h4-tfQ^Z zx}A~rJ@~OFdu^o@HXRZ=5cQ+~u2Vnb%s&?WEz#Z>G;3fmMqkSf?`d;MwB2+*@s{AW 
zuj&mQb0Hmxtq|Ii&4vA+AtEV~F{$6GOYbAe%lOIs!87S)lJh^A$`zY!!`S(DTct`- ze;ZP;ig5eI8n7>>jYAaE)~rd$tF$+f!v?iB-`|_;_-U2WjI{9Ha9y$^G-pPLM~Y4C z4t@u@eg8_ZlUD|Aq(qt3ZpUq#{A8cwA)p-o0cxJ}wUo(FJ55;Aa4mKx(+Pzu zOm%Ul5Xcz@^N4eu)f3E};-&QTp2EyM#@pCZ!m}zzBl{pdZbf7OOLplBHM}4EA!1`4 z=gPu!Yo0`n@TdpGKNhblZn7;ILSPxtyp*$B(N0rPP`z3}1wgh>%x7(TC}0-Y=IZe; zp>)gG%2>Ii{1S#E+ygQWG5{dIj&omvD;#1{={xF?9AUWcANKKfwm5EzJ~*d5iq| zf$&7DWw@EE3-sWP-LgrNn;1}O5X$}n6>bcOM4r;{?E2t|N&QX$Gj08tV2^*N(ZJ^H z^X7u36OPS!x?K4vG!FaWe0m8KS3Nd}FFKq7&dPM(j~x3vSjlEWg^`}>1kE>aN1W6h zuO4dkRj24o3qg97}dabx%pi$}t!%DBF75uLQJAKrY4?)u3=tE-aa zg*c$du2q$Ns@(kiW0E_ZQ;wL!b?k`NAF+tqZ!;2o9W5_pzMUjMy5v z`|9w$1tE=Xj{Hc`?mF#RLq!VUej~L~axsMuYtrfq#?au&PpQ82P}A6jX^kCj>~|08 z#$2U=*8d}zX-9;ak1xMr+c~SO?W5aDB%fR))oe!`E!6qfq~IQtO5c3$eJDj=Nod@9 z*t;Qk8QZPe2#S8!EGPKrgt+lMYH}+L|G_}r5HghJ$~Z}wz@oH*TEKnL&k^JwHj3*e z_QmV9Y&E6ye(B!ZhI!MrH|B}TWwqSF6e~CVhe{w1f6JI;rQ<4E4R1K8vRGT0qa1^u)TXR13*Qu5x-nqmb6JGcY0e9d_DP5T6S zwio9(&>8v5-{&v3M=TKO#XTl4h;x##6BJGHkuNfm|Jl`faehfa9uG_sm*YnxoK81M zFUH@JrPm;PXeG*LeDQmWV)vtsY-*2;@y2iN%b{mjMc0#*97kKr?870H+`0@dYy+Ge zA6YJ7#+b_J45RJviVXdix z`+VS(z>>^fDV^MaVjoN9_m3pJ!?tw;h4QV=J!U4Q91B0YCBLy3us=VRf7YEJLPm&j z*GZ#Rjat~oYbsF_i^|IdyPquKd*%l`h~=MT1V3-MzC_!osI6WgLlXF+7VFl<%t@X$ z9aAt{&89c&VyWzG@KC=V^Nlv@p37@}Oh1VxfpVM)6|-m?MZ0((+!@$Ri8=U~3ch?8 zL)86h1C%$7SC%Ob96GsoGA5C$*=h1uZK7EPu3D7k4PiaW*;>`s+$g0x4RdZi4uZb% zjc3GR+In#EWT0|pAIFN+kRCDrQv;oe>d%!643|5Ro$OiU-JMpj{)>G56=xyB;HM!- zAT*=rSX({8e`)({qJkdFB7%RF%4~{CWsGDrB7|Clh8j^q@!JZB8P$9j4W}R(o^{m7Is{v7il& zazgIc>Up3|nC|mA!JBy<>8`!P3SvIUHRP)`zi&chWp|?RQ{o2^#Tr1@{RG0+15k32 z#XHW3P~|@EUPJ`ugi8o=SkZzSNM4tIxF>QqI#W?_`p|=0&Dmu{Zh3zm zuh8qUC*Y#ArAY;`b^gx;PrjOGe$MSAA_FNaBo*IG*U05k8mcas7N{c4=30#QDtW7v zLmGowuyGXTRDN=LsuIm%uUH|3?|LELUfUJ894~O%^AahA>rGSC{KOR2P`v~Q`RcFY zGD^UR$?=*y!6K$0)&?$(4X>9M+7;*F5FeBAx`uz0MJrCdQo~_rhqEsw*p#qU3Zg~6 z%^$ZO1He#4L@)F)x3%@qfaxZMtNPhruKR?aye4_Rj}4zs6oFD*3ARb%&DbW!kalv= z-hQhSU!qfRw|u5?oP}LWY{|uDC8a<}S>mpOu# z1(C@34pqkB12?v|%0wqS;YafJti^N^D!yn` 
z6qotMgqt;~kKg=tQ-Z_5_bCh7aO>ZJ(aer7R|NP$|DC zK#g8*`tjsD#{;g0aWpvC9m6qpvBDTnzHaC@5NJ)>O6Pme1phw4(q30;59Xi0_OsYE zPrPE=X_}NC^ITEY*cB4Ye1k5qkDOa6s`hjXBFa5X?eg=cNHz>T@CbBpjpiX9s%(eiGoOw`w9b&G-sckRdxNHeNk7ILLw(*bQdV>hF%-?M>>6V}wQc6ft<4nkR(5H4 zAI|U`P?uF1V%txOfYE&qFq!KQs@`T)JQL<$&%ZZO=Ve@dx61%q|6H?@b%$buV%Cfg z=D~PBsT3nmALSyYgMVgx*ctVOH@A!fj&bbqpdX4W;%H^`-|6ERz5GcWnvy=ap1&J0 z>mP(qP2wxu3R&)4p8GEbt;y)mdC$i@5gij54K_(+yIQrFD!n~j*1P^wPOTt$T_oS& zun;Z~305OHfaJ7wa@-ofH4;$sa`4BUGO06=KubNX;|8_y>)>s(D2C5R3KG3og4=TW zVe(zZz389Rx2Y+)%;zH!&c0C`)feYPSW0rU1+?_)PbEL%=?i&pRT)csPttZcLgu*soQ_=qb-Q!VHCo;JO#aGbmSGVgP-oF~@b!HJ>(UP##^YIX;j3qItT-^a+EO`m6oiO*ASA)FqaeGltSwNQiz={}xl_V*xCIOEXt+sVS^ zNY3m`)Cmf~LAubLEFCi%fxTO{hojgo^0v|=1CwzM%ox1PvGI4W_f(v|USx!m=&#^E5rVlr(-2EqUjEkKCe<_098)zxTUkD zzwF&B;fo{QgR%kZ%3OqBmp~hvjC~*W>Vnx4eRZ#N&#h;5jXaqECMBz!xl>zU)mCg~ z)I!Wdf%Q>~T%=Gf)hJ)3*Ur~`u00brLb%&M{}J>(e}Mlk^8KwwVe*XKS0u4fC>_DucY zJ+ZKz3S`)5HgqG5$#NL1qz@w2Z|*+47X(LYKd5a^3fsu2To>sG9HZ$@G2o^v;PkbD zqa`h@;X!B$Ya6lbZi8oTKpGg6KSD*PZY59fk2q#`DJOM&BCWlw?ObkdGasX5OqWB^ zh!m!F(#*~S(cJ6hCx!QZo=tR>jP#GS<)bmCb9ZE=Xct~6F7nwpGSo=)A@)$j7a94@ zOyvKy1|?-ASGJq#e6S(SdX=3Ld)~5gyT?k#Gf^50Y2!=dRxLBuj)9*b>|Xzx{w7HY zREB8e1%mrIoYvRBxT^N3D?)^vWPR1cxCb7qd_;d;c=nTazjQye!1_DhaMm)NGzOC* zR#mSNu>~JH{0C)_V6S)}GV{Oqpzakb<&3UP-l0zlS^u7E`69vABAL)41B(C{-vcL- zeuVW7{TMCqn}m#aa3Hfs%)rYdv?8d{76@QF;Pd0k)4;aK(_hC7*}ape7E|3xXb$h! 
zp(+on8LN){ZtGiPf5C?{X4;dh$ohb5lkpTD73j)$$Zy!yNUwP?z-U!p=p_5f`^sP8PpFSlDXQ?%O6X z^u;5T9N44e>+Mq^Kl6UbSk0$3dTDiaRd<@E>N4?wQG_W3MjnI9;-}u&Y~w7W5xL4^ zfAV9I*Tte%)&%!$TE?fcfDweZCIi?}c#9hlA@bc8*H5`4?QPFlpEOS0d+Subd^x_X ztglL;B_N%9leo_^VsQUDqIFQkC~X^nxgygT2-RZukwlZ>9Ogtxojh zAhmK5&u&Fe9KMNMPM|s4c!u!27T+v6{$nnE2BXE-mO=yD(pFWi4WWjGAep8}#m4T4 zx&tSftmanJu`xG!*fQs#o@c5x_DNbq-m9TAc)~^+4#bjhAeu}C@87#{m(O)m@gvwD zwng6i%kP2i<1!=@_^&Djg$Blidr6w}&iCgsh9s-JGWw(;0U-kXD!+`ji!pC}G|EG_1_Gvwh%PPE1Wn8r1Wd@=#V}giEPxfaKK_G=mKI5Md+N9+-=(^kl->np^Q#vNFC83FVRD|?1ng*}7~u$;aL@$I z4@_slkI9@a9f8gt?~nQi0+{~@@W`>qO>&C{+`_dd>_1bi7we@nO@=kHj*C#=v+Qc4 zkiS0WJOgTeoqh0$;Aq_$E1z3uM@Ak;_4ZAnK->8`CEi*{_>ze2IkB^VAA)A;#o|3K zlVywS_y`~7(GMvIJx^YbnZ|rc4CDY7h4f>-s6d$NA8wcPMrfiWSxO4Vw*#*=rb4bZ zCSMnVkoGFB(#G!#eXDev>ja7{2|!)66#*`ZGwF?#$7 zuPjG#zPtIihy>pC+~8>u;HOMgnJ?qT-)G-Pi;D+9y=XTyw(G`8({#xSC8`j zW!~$v^qpY)u`qc8d(EEbSIs)RET<$8x}LyzA+q#j%YTQecGxNHj~RRsP4?+Xel0lWLnpmD4a>E(HI{fC(aEM=@sM$5T>en zL?olPkRo9|@WQZr(0+q3(aA3db_*XqzVa4lchn)1TBqg1lKNHKJ(Wn^ z-IM<|+=u&_BVQx45-WsE?lO(wFgk-Wcoae(P~RW4kW$@m+U-*UWyLiEvQun2Sts!@ z=f+cB^;@8~M>R%$c>k#F%XuDGG~-l_NFIl z4o>c8aWS*~kvyn?OF8efkD<8kQrSzK2sx%9PEfxVOcQ0-vOY>Me0Yez77md=+@#t{ zeBLlVj}-t4M{>EB8~200&nS6GziyR?2rtQ^t6uDVQ`RVIIP`gWuIxt5oC218@0!^o z!4X&kc6`~BBKj`SkX7-Q^m!bWO-MOiTUKe}R}FOgdjGcdSIwS^es)iqRl?S}#NInwAn9=w=-px5-l?f>dVt7ESPs~CNikwxZbMMLKt#KHnqLa zanMA@A%VEp3_PC<0vQ_esKi!e)a#6rx#eN)%!L%MExL0X(p*KjLWI**MOR!$BlR@8 z>rA{bNH6p8X-{S+NAjZj+*pHl|Wx&(@jt-G=^4hl`G9F~1LMO)wA zeyD+7F5ioVrF&UJQC{YHvq4~mn8I2_a&sq_@056Oo? 
zH1z!K5tNEFBOFhZw&^waom4JUL}p|FGF#_f*#m@3#MGNRQP63U`9XhP(0jnt-=a!Z|TFBhv_GHN-2l&g;}tZP}tm^h`ubuxQl zQTn7o3_{^I3m3sTe+;%7v+_@nF8K#!1%f}Lbi#P9<)U`oS_I4oRCQ*0xcZUOm>Z&; z+~H*21^>4Zgy^Sr*Ob%dODYXO5BFU5!V$WG%I&maMB1MV97k1{ z0N5YUX`zSAN<<$jLP8Y?4rNo3t|0O{&aIFhS zg*uDH0GV~}DK2TN@Ntj33bFN=u?5~zyC+m)yfEhb(F$-=Wpe35Zs{$tX5gQ*p+n=E zI0Wf{i|juObEJ%1VXUmhT!fVG`PFlpg{vc zf-?jU?(XjH7M#Id2LIXJB)f0@U)8OeDvH~;PxtB5&+$)Ky; zSANMo|HyKsN|?lveT*ByXlh?B2^{}pJDJAC7u#nwEc})GD@3iS@v*(iG+sB^565Fy zb9LFzV~2frT~N-MorTuffgaI_665XKvPzu;3oDbpg^d^ZNh_0ZaW_KkLG>GBD{Vrk zGUwu^ec`3J_4AuWwnvjRQFsNFa^{X)s`mUmq^{SDn@~~@%az8hADKx+&jSLecuF{D zhS%0yDN*X?0$-@@;5r%F4br)~@dXF=zk$2~pl22qbX;LnTLG=qymg4)Tbrg0AC!+z z!&^b9_}Dvx0JJh}PK_*)+a9A%I_Ar>>7KgwhG>!{hugjkJ^f#c>%RlNwG5uiS&nX( zQ_|=U=e>Gc)U*A9)y7ICr{+y)`auPy0t^?zN9?e8&#t!v!kG?|sAWNUJS^vQXPKfu zG{EkPH?uAT1hk5@yEs-=FG(GBsTw{TM*gxC@w35J7}x8_Xn!aKEo^ZoxA4ieu$9oi z=V$ZTyZhy=h(Ll@Cr6oPpXn-2?{;*ehX55&Mv+$STMS; zDGqXy{+`EXR{Y{Mmz#HZg9!BMfI>T2AFge)?0%m+1hh?*ES3EVE>(Y-67zi#QPq&z zzO~2jI!>EgfNHe8l+=sYSQ`_JHdqvaK$dTYA5xqL<)xj|i1@l*Q!uZ2&8bqA2Q4$F zG$&-dQ1;WFRoFfF2q63E_N?I`LcnF>$1B{mu9T?dn}XW2x7`t$#Mja*)>!t@x@Kym zY`R2Bp^coxJh3!`1eh1{=%~B|72xUvfD@R4lpvNDonaT({^!nikj!`2jt@&L&e9nK zN#YZ2jv;0ULU;NI39je7A(Opd6F`$P+X5K-jePGGbwE)XE%tmnnbJo0$L@S6Ix&)J z$*(xTJBAWib@XLQcQ|`?&RXBDL~&$+f{>7PW$<90fjq+;(@2m$!ssx8i;y1L8E?5H2{3G!2 zeOW{~f7&#>2_k)xEmO3vm6rY$QI)GbH{oAPk-$o`sPRzWGDhJ7#OB>?kx7w*jQtRg zn275GH;9ZO|BqGj>ZuJHGXpoaBf&)N@vB+I6E=YegoXWz!FD5CsxK_TZ{_%{kow)A zWe2%hjuVc^S>;fHT(+mu(kaQMqY?XXtBuHj9?FT1-YP}`0(+UH?^@HdfJYiIQmo+m zzUgRVe`Sk1g_uSUsTK2&CdNfvxDsy?i_Uf|-4!`|(a6|M9Ig4d+%zZV25#M zL)ryUpYis`vTD8%!XH&5@glD)w+a?&aGX779(fy(i%LpLW)752uLsiz1GM{BiT8%d zPh&8i+EYC@qQNQ0WlX&U%yz?Cm2*+pR}L9Ulj__j62IBKg4sVD)Yr=j#Xk*yfcld` z3b>^OJH4Y-7K^^Ir#V~`aJsi&H8)=8`6cIbdtsti>s?6(ZH-y4mwSleg~q znW3VKOQ^3)cCj(GkpBv&_JW*u}s z(^DECi+uN*rH{aL(n8^=@^x1IC(820uCCUr^^^5NO3>cyQ{}v*?OeU&e`ow?DJqWoq=B~dr-dKRxfu=8s^=nLPwv)Su&98eIVA7W 
zNZ*Za0e>{J?MAa7s577KU{?zf>hhYW&3IXp3+L1;VHCGuH9RjrOP8EDT+=}8N~A@U}mbOw)Cz*x|VLEXhfht;{u`2fOL6>PsAld02>v7fsc3z z>8^jbu!f)baB@oB3!~4mUxz4OD*5+B70`DLqWv(7Osk}PT)$;pJ@PBCf)6=tG(Db& zlX|`cIKZ0R<;lV^2Uww5i~8s4N!-Mmgyq)=Na}HLr_E%Q*SzWYi8`;jNE9VOesIez)X8#eV?S3*HI*jB}8Z2b=o|d3%XJnbW zcOb>K2HErIZa&LFPzh_+f|aHWhO114Q`wd7H2*O*D`xiNZqJFxQwfdY0x#%2Lp4k= z1Nc*%UFM8IxBxhm*O3F4W;8b~!os^_f@>$GD{@7xiNHU(YTgV6lS&yqD9c{n5M&}i z>k%QdYW`C2S}`P~d00JsxjK@Mq)JysM%XTE6=>4@6)v37uClXq)nRtHsz7>UG}Wzb zOleukez<3QxG}qeeTA`Mn+W^;Mz8BfY2n0`dc!<_wT)35zlvD$5wU}d8HtgOv2U68 z%o*674%k9hfZD-C{7YJ8?a3l|moon)_82yz@?^=-QBIB<^xNG9=!_lLng4LlRs2Uy zQMHI+>;t`AO+(?5S2Ff*Spn<3eTvs>JZtq0$ghtTwClkdodtHnrYd0xL&2P~hKH-49FIMwtH&U3#6 z0}abV>zDRcjF=$L6u>+x;p5p(Fw*2igM_+^l(OwmCaEVa&mH0eUm-d`Z@Fl1&}wdJ zbPNsjVJ_buwvuA&x9K~48!=8`y*acR=rbBr+VDY?v}ZTS?_pZ11ohidL&2mbQ9Wf4aF?FDcXg zIa5Mx+^ZZY7Q4n-4WUt;?+E??Dr2O8{0o@1v>eUX=buI|j06XhuB1n}7eNeseaWj5) zHK>{_3&?aq7^~zv(iwa@%P-Wmc|mLv`ocn`T7(^k;D0Ojut}1^+g9gv|$SXiIQ{J@4ahBx2lwL z|8T~8CMHeL5*}g@*z9e>DJSo@Uw5dSXz{L|PD)Nv?!_{FohUJD!o*sT1TR{u3AKII zQAZF*Go5m#c47y+zGi7OY7SZ4b&>aH9Rc^b;puDTG>7@I=^APA`E!+dxym`qzLe8j z0PhL`<*0Dik#@;F9(Tr3%HgMf>{Ba2PtH8^={LT|ydb%h7u0+E0;coXk8w6>z86Ar znNHq+{~L%1_H576oXLlscfUASRIpeLyYjVMoh=m}%Xh0R-1VHJ=4}<@dK>#ll+Cvi zpL*YqniF*~yGy!67Mq8`sP8oR`+i?CRbXaOOp4vJ;+1p?3J#&Gla}A^FyK%?WwP}= zjo(9&R`||9DdMGUjc@5M`JQ*!YNVKgJL~o9tPpx^4*1<`|CP1^sEw54s9n!%=BY?a ze~Zl(=6r}5~FZ=h2Pa0+;gMEd}qq~K8P^^nWl zh`~G={Z7rCA3SldaQI%Lxc56RM#~p6#Ymi40uIs_(tL*Bg>V-I&`u|%cwG(Bm!*;5dJQ zjp7N8IqhuU*jsF#OLSCEM^z2aba#-klcoRwFtS`x^{5OA(XgYbl``ZUS;d}T=c)bt zR1CM{W})l|mW?rr>#bPkFi$mLHE}0?a=|249G@rgAp!Tt@RWSbZTG8aZMMD4;NCF?dxHeUm^%N1i<90+5+3>8spXFi?bJ-<&H~jcPQ4Nj@2>Gb0yH8(~QAyJx zX?C%#7C}`V%yZTuhI+$WIa{dR4|$csiK}MRW8)Oi{M5==V9zBQX_x7&?w9midFo|Q zb=}M$cPgn|j^_Pb?m@PXd)(WMI;smK1QKe>{TI1?AQOmf8^sjuiWiXaBlF9KPd>Ps z6M#4EHc=a=6e#I?>5NeNI_n%&3%jw8@S5^^yVem)nqZbvS;x)XHx0G@Z3POv^`Wb{ ze45W#>ps9ZT9}!(oi(zt>h0ankL&)4WBSMWHjU+@yNpMlhph&zQI*wEBEBsFG@j27 z;?$b5t(kFGfQJZkbEV%XxSxd6ACH;{DbC|smeUt7@Swd}wZ 
z^cqMc@%yq=hby0k5qbcwa6*bNl*+5iROdJ@cCMh@gB_(kznnHAD;{K{o$zg6e9^*2 z=@sF%;^0=`JgRT4zf0<&UDGRnIzT{iRL>psn>OgYS{J~o_at(I_unOH3f*SRPeI)~LZZgvR^lrofyVy9qn z(a&Y~BcE-yD_9?Syc>~tHY>*<803g>4-lRBTr^ui?|_x>r7ztVqceZ0WC3$Bb} zxb%C*pV9p%?j*70Q=2_1hZl~yVni&D>F-Zg7iT~kGR+#%0Wc~~v=aef2{N(wcl!J9 zH;{ks6Z)}-$ZwdrY%$cA?ccHg1!uSAuUzq2O=;a-w&U{IX?v8^US5HYrMYKVmSg>|>WoKp3f1c>oSyW#giWA)lsN;@_dEA-#7<`;>v;O))^-#(v!uhUj9;`qKN`~3~CIpOgT zEKz}V*a`g#VO9xz^}}8S7{R&?Ce1uYF#;{W-vUQn^Tpp=;S{tT76||B2Oq!aoM3l7)KY<%_riZ{(E$T2PaLpxU zkD-7v&`8xLZJ#Rijm-3UBn!3?b&My&4FBWIqx+o* zB#M+MK%#C|zum&QJ2?J67+v=chtcf?r%1yloYn;sTo9q|q80FdA?UXsFdNdXbX1+9wB(AeM+YW8N|7}$Ngqd)m_)Lq2 z*M%?fg3AK;06gN(S~yTe&4a9;%aFw{1h|$$>Ay3CYO9xZ%t*jH zLV~zabQ-?rw}kD_u+&cB3oazi0h5E=Vj& zD)Z(uM_((`CAwdS<2@z@$<1B`r#YcJxn-I&VdI48H~<8ABUjb^aD#Qt-sfU!-%W}~ zW2VPK!DGHXf`$Su=I6%2_C37>O)Q?^cw&^GAeu*t>F#ECl0-=0fQfnls0Qflx)MR# z^49*=5P3tg0$`Yr-HYp7^fd7|bGZ!?jfS$EjFphttfIzB)BV!~!|7}P0f5{V z_NDZRz?DB_F`srAIJ(5_NRl;*muALFjO8)8Qw1x)pPeO_VkPvr8X+T*DQvv?R>9ll z)T~2~8Tg|kJi#CvZNMo2Sid2ZmU~kGinO4{Q>%9VOruZnrmtu{E%thf-|)>yO$hIa zknLS^9Tv}5m9!n9tHAUFu@O%jQZT_*zJEh~mwp@MImnmA3g z%9i(snOIdWpA}D?x!BfOF2GF_xBa%_dvd^G z(PN_%unfUwA9ufp=Y1-E{0fI-4Vm~HUZmn{ow`h`mbaVRVJwfktON#%?gK%I1{Y?6 z?;MVi`Anqu4hQ#Br>+Pw}U*CHEIar<-a(1RxWSNNQ2TtA)u3 zsAQVhy`22VIL{>ITEG5E92%?qa85$ru(b0-4D$wh?ASdy=Zb)uYC0heNhw?R*-0*g z+>r~-xqH&mW1l*=D2?()foZ_?_*W{18S6>bVeHKcV4)~%&#?tdkIiY)RF#lKk#B-RDC7ezl+bITZ zlO$+1(R%L8{}!8Is>ke{$mo(pOHpO`TK09JV2PZ&dzlka2EKcw2rR8*hzPhute$1Y zp2?}0$t3Z17T;_arB#DzGP6P{QXpPtQ=qN< z{O_Di7ZI|yPt#cp(qNjp4#5q-*k`OEVE7~LLX%tl`M5(1ON$?p-UmdV!Zg8|# z6*0#PylxEUL_G-OG)-5F7D#K$_Kk;-RCKH^?&2Iqg%BgfWvif)_A@gTZ?%=Wbbm7&#v9@i1eu?==SPK*r&N zVxnPVslv@7yO?fgR^Dt=qkRJ*kASCXv!l3AZl5>=&qJx5^|ejqeMq?4xxnLquD3#1 z_*~bTb=->O!igFsC3dcU&_?b0tkIwmT9MU_#vsCDkGdCjQ%{;epB)qJr~xG4Ayj8F zl^e9O8qaNfXe-!6if)z!_eDeefzNV>)T4i>BDeiBh{!?qmUlwp{OyR`1#N*%(dnl% zGMSX9hspef zTV`a=&4ZdrAHNwbMPBPn>JZQ1mXI%0d4+NdJ`XG}cx^s=TC{eh3ZH8qWb?-lL;_n( 
zdY#-8c#JcXS^ULnFR7iS{U+v7{@ey&x^=G((uh&_xFSFF-}nv5ni*kDicXnA{6AZ; zm5M)=LEEQbhZ(y#J6Kky0Ku|rtT@F7**w$F)Cb@oJ?vmHk5-xct zHA5=bqJOPN{Y=D|Yaf-Tz9r7&7S3h-=e>L%pxzP$AU_Utm5(tf`Is?-p~JXEIqi-& zA2k-3BKo{*0zZCO-^P}wi@Ppp)|HCTFkAcH0eM+ZUlR;5u(cNG=`3c1JEx8!Lh!mQ zl0lkWe;&Hh0uB>SBe!tsYwP@or-eD80mpAdP`CgIlnVxG zoNHe*LXXAdULK5tHWIag`@jTp>9|R*=^)_5fHTa=D->QAplL$ib6FXZuXxbo5Uwlc zn^Hmjf!UFlrpZOB{!vP-?9q5Wr zV^#vl>W8%sQ$B%x6qr!BSI7^slVh}Ec zm-MmKzfsQ{%l+ZdpUuc{d8@ZEL^P3-p<5i4S3*72_@x#&Ujmadkt^e(O^Dry)7oH< zh8~n~V;U}fuj1T|+E7K=r?t}r?9al;KNvZrER7#IdB;~B+;aBtb`1jp4gWryjL<8j zHYictC+31xG})wmA}LBv?tPk=G~0el@EI~PYwhQJ-j@qPT&Zwch_LP}?qs%ForL)N zR#+WdVxhu6?Ivz!VQ+!o`68K0*Y83$q$n^m71tijd!rl=w;YY2k%sQY9p3n#HEdj` zSMim}tvB$Ue7`ehfQXJ)h~%;vZv-juY48C$8VS-U7-V&|bz~$I;p&dflX+dVg`<1(Bh*}M-UREnL%`<}l%y5GSF;WmswTYXc|hZ< zj!%Ob=z?uP(6inJ1V%aw=*2*WF8|4jP0X|TbpA;GHxbLDC~;ykoKu)c$2Z5eMnCv# z+{bfM+GxCSx;!&B$#6z-XM?c3GyAe$VgAzG#1bGqv9?$GyJ1b&MQ+-W0whq z9%g)IQOa#(i7{>1)OAa3w) zV~+a-zigAnC-z3e!>>s4%QbQZVQ!4iMEz{wkr7DSZ}B?}V-!ewy)4e^HMA%*Xj*_T;i{_J9#Ejz1RR zpWrZwd!dd^Y11L-csJe*l`==pXtcV$m^u7;vy)r1$ekY!NP>3kVcX|EMOC#H;GY!) zUdX&HdsAgE1Lr!3_GX@vgI)eC1<72I;cTli2%Ksb;S^Rc%QvHnJG%wtK_esbIBhdK zj?;KeHiA|*t3Rb#oy(~X{? 
zmohIU)7eX9Qq^mMUVC|)Jy&dsybGTpa4D^ED(_7j(bb!PvN@ za&b1ok6^R;R&vY>Z`1<1gnK?_q8H`L)T^4qv)ni}qzjn*2_Spw6+qn&45~is)NJ4e z506Du7#j~$(9~2=^sMft9FiKO0Btc?nWs%6s3pIh*g6kymZd8C0p8imdTe3R=&)`n zkZ@9_ol$Fhqy~$MV)5O)tCq+gxAyDt?y>LgegN?Dh5EXFE~z%PjvlI8X|yK5C$b0M zRtFa$y}IvbV6w`PbDOD^Oeo8k5Z&O)At8 z%wmjd*Oh|P=E5*SPpmm@_v1bPjVD~fI(vfY473>JP%J%H^^vrbABp}TKuq^tKjt27 zo3N)_$YaF^U6Cg6>@pUyKK9aXo~K)r?C-+tcL|7wUR)OKlNG8&oKMCtzNDuP=yf^j zy!EUDEi{*Eh%F*!Bfsor&B(CudKF|cqnUd6>3|Q2e(nRPWxCG9P%=Y2*66ocaIzR$ z?v$lSU%&0Rv-iEa0C2Kyuq3U?pvE=V$7K1=&nktk=UA-Me^m}SYR?PbAI)p+mG!ll2w6$r5`h3elVH?mkA#sZ!TjhWc^jG*-WV#7>NG5^G!_yZlgg)c~o4M|HsI1 zlJQ(BrRehs(wia#8mLK7P%J)mR9GsKC7ryxixgGeGpn9roq~?DaOAdpqiFb;LS!9g zu_Xhw$XyXzEX>H}{&3w(5%Z>4Zzjv=AM0l)XFY>yH8VAIZl}9eIyg301B@qa7Rh&U z)L_JwF3bjMv+qqd08D9XW)Fwt^m)ecZ5MOpxtd|=F=_#n07l2Ju1B8@Q$HbE+T6IDxI{PN%_uvWvWR19OYJwC8FEJ>I_pKb2z z%8Xta@@p94gXSiq9+45l4qm0Ld)69ul^((>PSQw#L$ki8osoWXklR!b1l&TJEHpiFyQg~v zf_ZdPg64`keK>R1kh!z2uOT(){OMPlnIpP;q~djfiu~zi#d6%>RH@_ME9?WHk`zy} zC3)g1kz?^{8h0+IT(q)cbN%p-OTZ|!x=90H!>HO+b8pQUQ?o~G^|Fc-^;jyYhEq>yr`_4hG_Z2M zmj)r>dn9HG598#KY}0W85Kx(MH4w8d(au+co{9Yvu$O>;nqwNTY@+49SrdjZoWpYI zz9w6ZhAp=LlC5`w8arUPfe~y`aHfjWx_0(#6LVq848NfQUhD)4M}ucGxM=7u1h^(jhIX?-Kz<>x`R!ln7*BjodSi zm&Gmzd+sZQ7LVtsPqhoG_K#;dEN*%o@8{sAzgPeEGaZGtQU;3_wrz$O-8mJ0=4BgA zNa$&OSspEabq6O}t!XuDh+(X)`Xeg2Gpg-C*zufprj->v_Vaw;__H$POzsh>Pd%l* zg@#lCf<_~72IOO$!$J^D->M4Lfs;hTFm6oAV;i2M*=%sWVz$lB7cUvtDZ&N%AbEl> zwr1lVKRJl1Xze2!`@gJ&u1>IHGaRxrOX%z6yGlL2BHI!CY8i97py9C<`QktsZMw6i zdSDqK;m$_=C=F1R?K$d|R65Bg%vOlC5P6YHq|nB^{=!>V`X%6#vRhPJ2}P2gnPNIp z4z-T3Qr&UltUgJt5H;aJ@Y@YrHUT3c(rs(lf}DiASlvONl$FMG*Db=UhdfAP^WsCG z7qy8r1zU$0)-%oAT7$hlj&@>X?EE>OjK z`_I?(92aHcsQM^{qTlaMSnzX6x)f8EECYGl8_#Zom+_y$7+r(fPxV+t-a}q_l{yl6r+U z_AEO{R{!;NHY313?mb>VHU}kqJE6gVraksN-qX~^UwDumw>;`T9$jHnusp!QKAt_^ z>4N+`H2%b9i>&a6`h-x&$T~Eh&9NUG9uI+5MvWz~!XC*^`IRlx3}*oKf4Sjffe-P= z44!*`Jn_FPDE-x`<af4<>j%s@e;-3FK!^9hnf3RDE0qXkK-PU1i<9tO)gH1ibIOF zDp(G7;6cJM@Yr+KneixXnZME7`1SN}ktDJbNp@WFcvAujuyXq~GjChChfUDk0D44g 
zeaqh~2GW6}zIK#TUy4z_dMXfBn4X-fL19pA&aNd9W1=V10n*u?uK2YIARO4oQ^wam zcbTQyb${>iU9RZL$=>#3-jWvN?@vIZFcO3(_lsG~j28GLjP4f-bN8AX|;OkS7f85*!()t?Bt z|19l)wuu!{8n(Z4B6cqE;tES^9BrgGVZHK5Xc1xTUtjDW{`_|qcjR!=oY&xh2|hcc zQ*$iEiix-94~=9-uA*fS7x#s<+)u&(=EeM(>wgEjBY~6VUUJdsvq(lNxVpLuW z?NM6Gwfk*i{Vxsw8LcJAM}k%gOP+{t#G_nc`8+WtX-QpZ&u!j@pUsqF(A$Q;^6~mC z-uggs?hnWPig*2=pU1Q&4w&cy?lV3)BcpHmS>t|i`eNl8gZf{G^bPS~*k`++C!F~0 zJGa=?@MJ8ZwIo=}IgTU;d$TTKkFhdx>$DsvhuAfk?9Yr0xDvuR*St1BG1h%C+pUlJ zis|+(`@!2OZioxZqegBqOGSCs{NAI$)?Bm0VORpf?^iSxx+F@K!^tuVh`!_L%vugIXzX zw=gSR$<@t>E=)$8kv>9WCpFMyW&T6ue<_y!t^$jdw82;u;@(Lk?X;mH*4)&@hon+T z!%QdoF={4(){hR}^xKdBq2muhe}CK+k&Hn?jpCblHurp93|V%iy6$V5mn!>H48CFR zmYmjyIwT{V`xgJ9;lK3g!D8jvE4+M*ReoA29>#tVPYMiHv&ycPUtCEwP|hk`6K6>N zy?+1m-KRf(Achqdm#B??Mx_wyuA^=q=7%Fne1E!4H$@W!{oLZorj!4gl z*W=qqd3?EG6Eu{}5u~Q;^?rZ8ijLTHEJI{9m@)#9Ce71cy-dGrgLskhf4?;QL%<#ki*AtA z)kG6;UZ6O>*uxCye#Z9hq$kK}Z#wX*`Kl?sqM~A(F6xPaU}Bwt%NHscWyT!o@;sjbNk< zQ+~~)6%NxClIqeP3A$ScXrZE@un~XL?U&=bP-79Wr4`XbeRc__pN-&6iR%5@}pO=&rY4D#7WcmVE`}y0eF1FKhIGSA?6{>W} z=6n;B`PgxUiP@}1jK=Az5C*M+=kG>1xbksDBK)~$6wNB|7XwtSvh~WTAgpM**JER zsyI&&%R?bDT0KzCJWSK%2oSDL&br^L$i@=2P5Ikmp8}SUW1mA_hj8V;jt}k{sXrRD`o`C{e zS+o;Nkc0{2D=BU`tZ)5%aA&t&IU$RzrtLJ9;PH{%lHAd2lTH`u>C8A`1-$a73I?5v zf!K^6L+;cuN{X8<=W$TGMV`YFdN{Pz{3LAzo~NS+%6aWRg*Kk{>B_-3M^XA~o({C$ud=Q}rAK`A3-b*^`ie~34R zgPE?S{D3(obgshM;j-TSWrr3N{0>p01=__pzO_@JT4X-$B)DMRa8O-o#^YbE{I#jD zyMmmjIqiPmq~ma@VPd>}`To#l^+^h#lRiRwXdJ;`;3?Z>)^PE0pN@)7?QuKuYtb4r zBKBpi24^exB#N_5Rs$}80W%t={U15<#3;UXBG1Z@K1oo-p2pU}cLD?zXxb#L+!}83 zWepU&-<+krD13?D@{br9#DGOq3>K(aw3jlf=zf*%2we>dijn1}4Iv+hdnd)ZRPSW* zQMEU^>Mxmuq|Ncu)nHyMH1|O!yDV)gM<#ikeC{l?mu~!IzC94R0|LCNJN{zhNqK&G z`8kNxn=?dcp2MPXy=qsk6JP2TCKw|Ncp4QM@N|edH#LdR$1j4GQ(jq1J5G?{H>Mg}|=*Av2i1(ILkvH^esBA-i)B zx~t!amV%Yyye6t5xcjE2li0vz1A7+Q+7>q7A=2X#s0p6O{<*|wzvR{4A5mJG`HuJ0 zTsQbBoisK!u9Q`jm{48QOI?6*wIcgD=u$rK8co)smpBkY19oi<`h*g28TG8e=mL2KW3zyUpWiudVty z=w|4_wtcw~>+rUsL+N^b*`0FnZUg)=jcERZa?8WOJx@oP)|s=;LE|rsBK;PolO{xk 
zNZg1xwp<6R2uf+fpo;6I5@##I^^T;Fq@dI1_Zdh)f;+5(fJ+T}A^_iy&zs9S>4p*! zDcCL37(7n8@F&tWL}#``g4YvYW2%MLuk{7QmPrjU33Xm+bKzD;JmxY{wla7 z0YOj2stP^(N_w$zk_l2SRdmy{C5tut4R2ZysL@#r!1x3UqAcs&rOUVxN_1%T5=z4FjAn}iALdJEa)f) z0)e=$jpGrk|i#KcwtFD@>Kw{1~xj~44#kK5IAW=(hWI$)C6nBwAqTRmpuT{2HB ziMUu>SKjG-;-?4&Jr0qteg$71-|^VZb*Ax=az6%*DSQLDI?F-V9q@X>E-{MlhU1PW z?W zhPLD6zd5w^{HbIxuQB;*=3KZsOD|BWhsZ%%sAi3N9hDfpRG=js-8P|qiN?Va?AVKJ z(XD`OuuOMZ)vl+xsIKcR840}EF11_lc0?f-vyzq3W59Mb=+T2NTmQs+I5m*!z#q}V z@jeO{S)F0L3T%P)@vL6v(mifDK772Xp=Cg5Ts^t*80b@@T`%*!ou5K(KT4a6$!~ER zL3{s4j!4jueUhL>m~<-G6{j+cf&`*7Ztlb*YY+-KRSxvqA#VN#iLI5GdPwP(*QcBP zPlYr1AE2!R)sJxMetpOmMn?>t1vXxLnthK2y7%UK5}#N#s|(9%x5U9 zmyfNXkKR?RcaYksTQRw9G!vpF0_)qFpg{h|c$aoiz1POAEac9YlBdIDZM=ijYQ8A{ z^A4!??z@7OTz4+c!;p*)kn?3I@;{6bBm03Ort^)6=4J+N%=ac8;tP86N&7a>_DP9B zeafr0^fq}CLhI&kAZe@H?`y-i8_rd5IqmT}?P#58x>@1kW}h{f?n$>WM)k#3bq<&s z7%+>uh`hgloleJabjzsA$tj|LKLm0ZeFJ0B(a|ujgrp$~?RY%=YP;}wr|9b9a<+H& z=(<;Z?0oJ#E=vc^4I!)BDmR7om)5>%>{1+ov4DvV{4 z7jAWvsxQsZ>BaPr%9Q?|qVEpitJpJ{t?tbUw+PmQ*=w-28{DGski*`zq_DVmN(kY? zdP1u-^)bJ8BirKnlu*YPkiBV=vCX3`w1QNPyZf@S)DykD zo3DH3^sn0!?b@!J!>>t3&lw9k;$j(zzd&9XG60+QSdiI%0cvF7y^a^liMh5~%1oQm${)~7}hw|kNq zae1lOP#^dzWRtJdoGpd}arYJdYTOc(A|_^{h8Il1=eJtts>q+dmgq={S7-*B_xH2g~)Yjwo< zs-A5i70@q2r)|*BQ8hJ|F*&ThfmsHyR=A`-%=o3x>wQEj4LBZT9YI0oJ^nxz`Vw<* zzhZ=BCT06<-l8NMXp&+ls$OY2R(mtT0SQFL|NMmDX^iQ~J-5??+UKh*Xy7_Zo-jmn!EMp`KD4cRbrxLrw?nM{#@b9Sh~j!{;k|oQWl!M(AsXBs zyNWIH5>)cE-&(q($y61^P&a;CLUO`d(}PF|8ERMj0PcT5Ow`xx0SKg?r91Y5ai~~a zjzjNfviM4%FM{55s4E%8TsJtf^n4z9LMvFqPwJQBSeem4oNW)WOMtKMSQN#YIL_3Q zewVECg(833W3Q+t=Lf^q@GMcs|7Za&h?5PXBCgR;;n1ixI#4!N?(rBqg*jz5v4GV2 zidsZR%=~W#o<}+!2sKUpOTm7^`R~Z|@3+PHUtaz6u=!h;%*zE6surp%U~7Wick@ck zCUO9A7;aBR?f!zpNa{B)IB{vJJg-mxvIEHVr*kg%vBR}Ifi#NL4!)~JY;GQ&4yt%9 z#fF5@lfSU?@l=VoN|%=+&_CIxnafLut-WhZ?N-k|@IO`^ci3Pltn}FU*(~>OXR*99 zJ5RIEmNht@6bNUD)18Q4XE@9o|E&x&VIztMe0+lAwP(XT6F|wowBnUO&hg$#%O}-? 
zaZ$|GY|6|b*AOPNyrXy`ERw+Q z@#_`&>G5A-=clNBPK79E77lSX@>aCV#m~*jl)BLWvTbivi8Yc0(^lcsGfDh0=Cy5h zJ(~&i%Z8_LQD}DnLgb(e_P)ImQmSXw#*!OdoZcTTO*<&3z#e%~KbV?p(y<5efjq^> zJ2gr(!&N!@)tBCCXT-OW2<+sQKwqr%eCSB3xDccK)0_H%t61c3c+MuMMO)f@26hPD zy>H`>5@~RxL9;Fr0@RBO3#0@W3sS6KY+~c_%b5N6sWpH977itye12A56?ujY+bm7F zaLHL7jeV%l>lww^FN^tQ8{}kav(t!{J7Ne z6^LgHT*$7DDDfS#Dqu6SWjJ4=8FUmB{6Rl|OqDSQ%OucN8BrUHZuP(qkL;9P#fO6{ zTPzKVW{~T-vP$OE3}R3I__d!NhL2EPa3C=wJ4t0(7{Exz*Uzx$cKT zokY?6oUMTBYx0>oh``LE%OrVBA{ehaj4#@lM6Z1QU?nReC*&66AB&Mw_%SH~m>(DV zz1$jxz#T!{qy{8fWtl8F4n;%oC7gyF-V16`5EWps9yT3$_Ad_iYhsW5P2g>S7YNg& zW2%24=Cp>Tp8I$AM3RBtuwCU8s*Up8dPPJnhM}!J#Y@LI?Jw*64`#fm!9uE`2j->-k3-Tm9335`B4b>l} z_S@Djxg1djO&Fjq?dTv*0d8rYlvA9v7T2x1qT43Td=!^$h9F|O|5%*;na z6bpZ#i!Xwio;#<~U&yN^5_qcoaP-@FJ9WrYw_>zbGK?EJ zo3HYjo{^FLaZ91eBJ%OBwjkmtO#jEU0B;F11B;ncp!Qjl4CH#W&J`%Qt6GYDbk8(C zL7ggVk%T|S&Gv8H@$V@JVNRkz4ox0U7^kLtHl#97+5X5m$Gx6R9JN z!u?kQb4ZG8%V}d&kuZF77d=VE^USGe)_8~^$GEd+RWQaHfupLpb(g7s|QH>|hL z6k=6}mCYhT9PLSGk&i1IiG%+%IDh^A6L(@r;gBM6JdTH)jO^L%VIDo^oT2#x>K-MT z??!bHp4>m|+uv@$3KIPm)}>)d=}C%Ebe_J)AC;Y%y4RKAr8$+@rrot^V5(qfF?0XV z)%JVw2GEcIt3RRE6s=bwIrXdnp@(TgUUw;|SNozc(qt&8D$8u=!7a#Y&*w?I#S8r1 z=pVm42r~zOs|CP~30R)w`0?X|VrQ7&{lI!GT-t(3%>{aegaDwqD}ir~^A~lVlL#=eFkg23lvn)@|NeCCteaQli5r z4fiij^rH+u{F)?r5IYi)Sez|JTQCE^{z=PpDj|?R5z-anYiqq@fp70Wa>NIf{BJ!3 zvMzWxrcdoZ!6@CsV}D)>7pglnOB|0H^}jJjxm3&jKC`Gm>UwLUuF7jMUlUO@ngd(Z zB7bzaCkql6Ic$|RE@UoG6H-zQBcScb3l-#@BR3W;eg-D(q%)R!kjJ8{ypG3@GnDFZ z4Z2(O{($rq30b*L!>JIW!U{m#n@P!)f5uCHg?}~A~5$7PpgMP6YeNI zZYgFrRGQRKrb@ixOndG>SMxMy^ch_@W&fr;Yf4}#52;hHMr;EA>1M+PZ}Uvhu)Q>t z`9i{vaQ)wm5Z76Ry61pN6;jF_k~>qPT3DN^;%f6o%v73c)kORH_umbzu67R9C;^lt~~XgA)0`A3XkP42{cDAfh>k|MJ)<$%*- z2IAlBRL`XSa6`}M(o0Zr8JRlQ|2F>qdF;;W#FU3a7DKE^9w9$uAP6%l$}?U$8oW@XovN}G^AXFD^^81LWw zJcPUQA9jg|l5nDu#;wFBCG@oloNK-eKq@n~YZ$NHHFfX}2WQf!?`Uj=kJgc}5J~N( zE#2;L32*K$t}tO8&8F~SG9(xE;k2^zSSR5|0w z^~dcc3jwCaxgL^Bk%<@G;2lg1)5&M8=KuqSm{e~l$K`On$$r$_#-oGPka+$lZZN+~ zna5|^dh208&O__em;w3#L+jN7<*d8GVX;6{@fwFXy=oxC-~?cg;3QKon=&#Ho9TLd 
z7vE&0)Hjk3m5z>HXM2}`sL#^_$f-Tvy<~TDdJssai@823e|h88y_}s1udmPJ?wuvK zKRWz{4?yKDQIA-pogGv$toktqk0f_A_G24{O>J`}&b#M3@qQs4-z+&REA=R0Al+x$uBHNQI&Zw zc!cSSjo%rfLOd;NZwa^QLh9(2ZHq>QmeS`@>x^@92Mh4QtO#&%q{Lh@ft{R!C52Lz zh=00vw1P&P8J2; zfFwjGkaijxFVeO*@%?Zp@iost=ya;j=BN#0WaEHLW}yc-bRGVE9ll_>wB@_J)%1{D zmM$IOFf)DxIf?Ro-ZTqI(|jjahUTJ{?9PnSm`!fJr)~`af0?@9zaJvxyBb#E$~dbl z(D2yGakPv>=?<$xIsi7ZK&sF@beUo@j2}j!Gep;H|EX3&dhN}{g=8lcv1T-#Rc&VD zEI=900BFR`c&sCt04I0H!|MOi<-9==*KBqNaffQB$h^P#(|4bjjIslT*KhOb0iOFk zkiNp~Pm>wod6fmnmE*ojyAW#q&9&zH{^Pp6p;>BX*Ljvewr($)g11K6%W60|kR)31o4O>DaAR~x*5h17K2u`WTdMVz)JognW>}7j)dr|+2O1i@!oB4#lvIJX6u99jm~+gi5;ytwAiPkUH}kx3LlEAvWKft zARwxO?F1wWfP%Tv~tbT%MY=tu_~bvmrjC_Bfu67_5adx8~6Uk{Tm*D zDrOU0N+!3zLgn59kpc0{^_&~Iw+_c*g1IlflW@U?G#qzKbb!LvjZteu_y^dXzt-FO}qSplF~()LHkQ zwyr&~F1Ga2b@IVh9T~G5RIKY?rcTTCsyLv<=^vswL+x8vDds3jDO}K6Yz3r~WP|vt z$+PoDj9Eq!%U5Wap*3iff|%6CkWMp~+)k&PFNyLS__H&eC}|&Aw7T%;af>Xk-Oqg& zEK$LWt*xR8nYrjH#r?qxwCTuMgxoj8z#gpnT1T%9?iSQ51q`3E1M|r#dNz6`yxT)}#hw$qYoH9Czrt-@AdE z4%G|=JG6U++_>jC1C*vzKIsch*z@l{x_;rRZum*XV}sS$vnwN3QP1xuBisuMHI9P| z(N2~|tO}567JFv|X9j|)FbLa=fEw4;w@OATQc=g;g~h7b^Sey0O}?1Cc-6@&a%Vsr z`g|l8lgA6~90@K)3 zAGwk*9v;h#4dg-s=qOaHAgN>wNcqoN#k5!TK4H>`!4m}(K1asb9vsEjt}zXb)sieBpGt7;e^ z<>Dp2T)T$60!rUoel5q8&XSKG&f|8$BgS$ z;Rok4A_6&}rGl>(%h<(PHCfI3ePe zoW_Q8T`B97X<})5bci&Tw)VYE(MQqkVad)gw>b4Ir`>2EMXOh*r3WVC2#t&WF!t)FX!WKrKiZ*$UHpJ zl$0p?ZY|*WS+SF}F@&>_QgL%sq1@^lteEwzsqfCL8h6nZ8u$HL;VZB{Ww|R6yEVRE zq?)t$H`dc_rnCIk9#7bhf1x4lrDSW+db^}=rB$|xP_#>$lB5`8E7xJ6X;?>6J>3Q* zZ%DZ->~b;Ntm|9+!`eV$#M;kTxuKS2C1+YtJ;q?CllQbn{@BZ{aR#z;T*2heW%cy5 z&VVRbQ$`J^RA1D4et=fnR0bhbq_2}8HB1!AVub-tHf9@{U|E`Ju*Fs1g; z#2oTy_|1yVOEdi!^>=>*=41)bcAB70x|?qO?$A{b52@*Tqsjy7TckHGiSP;1GG3qu zH^ut#Cw)+7JC&1l1O(@Ji87xb;*Ni15M+MSxA&k~Yt%YxtM;QOWpWf~WGgI}@h;#{ z?h^upb3_X7f!wO-Gk?9WpTmy;c|#E=%Tr!C(oiJkN#->Guhk;sX=J6JYEMzmQdLk_ z6dq%o2(!Lv=KD1~QqKu+b<4foQD!sqEve7mU$p z-pkI8FRA+fb}Z8bNN1{hk{XT`T)h=)nB6rPQX5z_MHkQE#o>Z@#T!+Sd|le-@-Jlh 
zHd@~6^rUmbK?%KNbt6F61c_RBW!$mF5BYv;ap%N{LUmKK-8hsCST*xy`*c(u(biYX zO=IJuPn|0ucVEos6)7wV7OVMR;0-B}jT+>sQ-W~LOXIDoeYpF!RTi|=ke=?l^#aR= zAwH_OT3a8pbVSwufHyx7%$-pG7T(lO+HObz(tF%rh7>s*1J0*-W;VRERtwViE=Ta=jf&d>C491__ctEpZ+iVbH z)CqThS+}Cp(wxfwUfb{o+Lpga3`sAafzNvR+)?z7J#0Z%O z9~wHd98KsS-~EC27Rdulo@Bl*=fsVonY!63kV>em8*tHQ|3ARA(1(A^i~bYjQ4`y@ zM9wP2weaCMbk(AvdfC+Z_-&{;7OoGe8~Ij%E5vd*CAe!#{xA7}^F%O?P#s9}N-kLz zSqwm|)j#o_9(!(Vd+?;)yr+h^6i==J9+FqY)^ONlaM70T={G0WDB< z&$<8s?qp!OT;L$gM)*oSy@>XWSgQtR&g?t7^Sjy!{K5; zD0^w8pZI#4fr>ZD$201utCBkuJ@|qA9}kOvL;M*;JUHuSxy2CI1n#mvF85wUfRHDM zZ+2KHQcBdxb^};d4Mi*st=Xohd-%0d{nqjQw2J+w`juoO+ITh}E!^r=E1B|ijkIM; zj!3n{MSqZ+Z#dRnm#B{1#wm6vc!RSz9OWJFAeJa z%m$2sI;mnZONeCG%Tr$K?M^jd1~b?@49*`f%wAi_JKsw94lCM26(yI+vCaHQ?* zaC(%Y*~i>A!(uvDJ#ZIEJ+Bn89bV3mxe>(qP~O{nT=s31vr~D1Q#Vb1#mK0!M)GSCW~2q$gE^RyMccq|SQxi6WGA#IG3T=XNtc z1g|hn?eD$uY2z%F1L5~H0=YnZ$g@~XZP!qz*;Ep!cyj=qeC-BKBk%UUXjeP`{6mo^ zz=>ymkD`O^*|tgqUBY`F;xiWNp0MYr^A>{Uvp%lG+l!+wxFQ;9$;>#zM?Zd)!FhOQ z-1gl33j8$}{Ax{24NlPFX0JE!r?Yo1!cF(Q_iwxEFsu!TNci={P2-1}l#7Nkmi2ZU zB6i|($;ZDeFW-py4XR&69V2(1_(;oqn(_(@S6`j6#HTy%?k|H*%*Ln&?7z9U^n6>V z(#^tqto0VW=qm#q*>UG^7gVvn%Hk_wZ+d35&^Pdz=heHW+o_6)hzO?Z&=!9rZ2Iz)4jU4G-loG z8PT2`Mk_?d#Kcr#v7(~d72!G=BnPD_2^H%Rk+W{IPOp9GIB|hSz?ce9dNEv}xr{d0 zp-Z`vp#g{&mtNS4qZxk3@V6NQWDqk~we!b|^`5oYF*L!yr*nUTJ-zKrWrcx~LUsJ~ zG!?d|NKr$b6xfqfBZ|)F&mG){IiC_yZ4;q*_3|72*;4|x%$H=}Gtf(!Vqru!Gln?0 zH_*LL7-LVOUY+BLQJ`w?h=3B)xt>QK&`|HTc4v+`OK`Pa1xobS6!MG*>PpZ-4eZNP z*z?DqF-n;U2bxJHk(BtALch$mKjaA)6Lgr7h!Sfb@id~M_@Khp5#j_;ejph%9%YF- zHS78DI3g)bZ(PIqwYv8wm5}C)q}^^6o&ycZXh{jeAj^PU`bh&|oU0g7S(LN6r_5|^ zRErQ!1V)$cmM26J$P40w=B&3WNSU!er-Q>_Zm}XlZSm$xMxH1-DcX`5IIkjmmyfO= z3(tI(>pPK0x%GTDR(E^O-rH`Buir_vkk$Ww+v?Skr>JS}MZ&@Y9%XX|fdhR~@eZjg zS5s|oU%zkA!uRa}9o}`0=*vg$2Gv^Lmuz$EZ4rG8MoQ!PFedOuOW1VBJ%g+w-Qw(V zO?{VCO2o}n13^5C*6f21Uk+jZtf_h@S#i8_ob|Axh<5S5r5ND+q!Tih2kF(a7Nyt%NtJ@Ns0oy965!hNM|y21eMkDrZkQ-gQ&(lIc95im7WSw`81-5%^*Njgo#0f zhp+@OLm^}7U?-()uyVRV-7QzV 
z3K{{^!O)LgD@Wxdh(73v{idF+2XWmz`wob2@48%6yP`>)IE<+>GHe?1+N2ig<7+}{R8Acs@j^^U z?yj%qerw4{rQzIuGFzL61m7?Z?%ag(8a*##^ji0P5*Y!c1srfb*=sZjocg5MM7c5k z{Y!o9al|uIS;?v{`ncU%=B1I2D@MG#ghS;rI2zrY>UvcP$iedDZ08GX^|kXvDlycz zChg9c@vjTen_b?)Qn<8<_JCh2dtaN~*x9yXWlEbpvk0 z{kQh1;SEc}?aydgrS2__kb>b!0bClP+8($V$Tf{gYA+Lv;jnPe^weC>fE&~#n(D}EEFQQ9;&^+LP ztwAQHg$qflTkt-QL}zg|X+lx$Gtv*`O2qelTFznHla!U+o>N$ZbcNi6m@=1zHN&|YWzUR&=Gjw(9nY)0n^ zLi=0F#I>?^GtEqU=TkGRzD*z1EG0L*;iNcDf-@E@aN-Fj(}9oVzpv)rr1ELA722ik zb_kMS+nrgvRSb(@HPI7d3sV#;nrIg(H2)OQ);+F#%ay3GjM4RGLL+y(E$iT`fif+D zq#4`fwS@?VhuB(Yi*N8F>iKE|C7)sAKf%dI!r(5=nxR2WV?HOsx1}Ix1 zv9cfUM}R=r4qH-~p1P15u64|-2SPI88?quFFF@BR4q!u6-nXyi*j#vi;i4QZNHKk& zAu@dC{7fvs5Tx|1h|=t0rXH^t3aktc4$6U=6DnO!A7j zYT8IUG&xUFXVDqr>3)S?_3%Z|qKe4rDi9jba9C_@YEsx+3a#%ee18N#Co8 z?VAsgYZtgsB2MPRUT%}FltGxtOvl1oy+%{iix>v1o!0Y6gZ0PZ60}j1WxqMV3V8dQbVxid{T>&q_079%UB-f@Sc~1J6V6d71=F91M0NPgSF`1rC|MoU#xv|{mCJ{l zs+^@5J}rkn?zB743N~Vzom|guU5&H>bZPv zv&afW$)57JW=joz+G({xVS@fKSpbCR<~y{|@aXy~kyNV*^eIgCnu43&X924)CJCkO zhZQBP;-{~SElQX49Z1X_8q40U>zUiSZ?Z9Ht!QM{X8V`ALs*LB&coFy`@KSjC>^s4 zuji1AcNMyKQl}4YJJ_)8-OyWRo5rVi1usBKINgY+>G%(sUfj0vtr*z8xTx`xU-{;& z#;LVFn|-#L5J21(Bkx5b6(Izel@CJS zhQ@MT$t>Hv>Apir>iRmrgKtc=-Lig12fWrZ#~{-P_i>%naF0ua1mt|V)KYQ7nQc=) zV&|Py?mlT)mRz;I%F7JGqxpYJQuTB$$&%iZ$``yl9=Y1Sw@bi`&chnS>pZ7)VoVx7A|i)r~S4yp2Vk z^_Ego3CAS*P5IvN92I6LVg;1XW4rq?sDB0AIIYs6X(Z4h-4U4R$PFJsw)Kx@(vXq0x?(gf!VL4cI0Ubcm{PtrVm|LNyt%uwF!9Sw6pSUXd257{0MRw$Arop1hl)C8C7B z1-3{z*`)2e7t+y6xjQn$$d|68Vjo7n`q)MBrZLrt<{NsA(Gu$ht@$X?SJQ|3VTIl| zmwJXSUrvuQaI3cKaxJ!>A)rigC*pA(>WDq=aNVV@!Xj^xP@V0yK!pbnUn5Rx z;Ty|uX2zSz9>bbB}P{uv!As)&}+!30i|<~2(O;( z-p6~!&708oRQG!n|5jfClyyYAmYdk-9toQ?kx-N^@`G;9xR&5`!!c_zm>|N;(Plc9 zyNXqJHn^kJQ?Lh`ir>udplp3VaZEy31JdcuVV4 zEwDpI)mhBaV#DC6b4#r@uG0soPVWA45l>auaBn#q}9O`Eroi1Z5GlkLR8*!X<wnCvdjBYw@*p@kePbj62rx9}=L`i)N-xy9ckmUIV6G|j*3$o9V(SJ>(q0(JF< z)h@H;Jyy?{=$;nv@yad=T%t(I#cE#jzI-X=t08u?QAwrblPe9nR(HUUcZo-^cJVXodzC9q=i8q(6GhPj@{gZI~WH@*s3Qp-}n;`bNs z1yWD6j3sH6&hPH7sBZTIu<{1zqISEC 
zQEYD3T#ira9rz*E%(stby5`Pr-Pbw=3;y(~ZBNQ2nv!SpY%2)DuLIy3TDfy~PAX;D z_K3Nn0c6}Y6Lu?*|2#t(*VO{!r#TNVVGALp3C0|TF=VaUUF>dpTlGK>8z}Bqe%F|! z)P7|q4xXN9wfE#FIJBbt{KrM*uc0v1kTmw29IHIahMhm*arkEZ6M#~!7~Isy>oD{i ziI^d|{_4E*EcV!5;P8iWKi>JzR=}-TBFN8@b|z> zL{dbv?~yo}wNnx~ z5FT7O0Ys5b#ldkEXq9lf@O`AXl9VNNK57WAKs${L;el7D+U^6e)Cc*lMLbmhDK!@Wo5KsFB@2@KTDkGH6jf9~9N4gj`DxNk z)It^W3i|&yT$$1`E|8B@50b4)ZKG5mZS_{g+gzjw_5UW%%ttZv?`lVR3 z*Lf{pcyxOD>sSQ5Khe=NT zZ<#MpJK^$_^y=xs0Xh7TT3(aE2g4kcy-vK|H$?zi@ejAZ<=<^}u8Z={B>3J(;?zH0 z1c;Eb@LNFA@n!hxsvM%xHq5;ZI|8xwGv&!Fd`^|2@P#^X>dVgs|E~woG(Ss>d4j~e z0&(HdTGi1?49aORK2oEh!W&?tR_9fo9o9he|Ey6cR{*MZFqGXED-$u%bF)Ec5h$nRI44%2zUmzx}YgCyW`FlK)AA-nqYl%By;UcH{cjJGv zF5tx+Pe3LG$J5h{BqnG??M*yTJrGb>Hzb%e25EkYT%`S}i8Op3a#Xz}97W3L*)cFS z`Wip?()iOqYD@yaKyMUXL9xaVppirwpyDV|&DSyO!s|_#6lO+{JTH^n{-tCGwzt-? zNzAPWDn%wp#nqBZxeIey=u5)F(Sq!Nn9i1;Q);b?uOyIALYV5iy8tK*WBSa?qP(9> zy9&5Z_-y4!WB|geYat&1hVHRc@xDooD~(G3x$*GOeNM4&K#Qkwpy5zJ#=|Jz^{lwd zIr);k*=hd2#Q4Gip33sXI*_CSg#Jv9NPl{m9$I&4Y3b5a^@E@YQK1c>C-cHG34i}D z77J+fO`Q9A*3yZHyW`-p=RbK&N4OC=19ii}(~(kwsh)F>D7$V0C*VTn96<#cvG*k_ zDwt4T%76VOKkD;w=Dgv)^*1jaKKzjI)x3J8C}=(`r2?xVWS0q5m3BrAHQ#E!Hfab& zd}@w3(gSTquWhUb?_HR0joDr;IQQ~CsoQbKZ8I8W`w6{Mm z$4tX-sQvhEmTSDCFE+OZiRhI&9qGp!vSAdk%uLKEYldvAhf3bWZ z3P!#@+$d>&_x$?tew}6Si9KZ{4Tq6ZR(_KzVHD7t=`0{{v>pNcSIlQiqtE!+Zk?UE z$Btwq(tR>pCfncczkb|Sy;#3@4mVF`|J~Zvwi*0tl%F)C@Af1Q6!-4B?-Vker_mICX)9BqcZ1 zMO_@|Gc%m(!Rn899U2^rYm)=yBFYgW@A|YHN^R$2R4%B=*}d5U%3R^yWEj~a1&2SW zCOHu)T>Vi{@p(NJ-c`(|$x3Y0?(8kxmt4t+>zR$eEtT+VQkD^O*bQv0z?B=)(udpA zM9!yTh+|H#&H^GJ^4I&bLU!Nya~17(fw5rp^7cNia1M|*sR~CvByW=?34eI!0hWB= z)p9+<=C_ZJyU-@_;Jgb!BLa4c{tPS8Q)ewsGe@W~we!C_n4*2Hx;B#x^JP{4Q4bDC z!*jd29}*pagGvdC>j-#SBA&Gzq0Bx-944K4;oTx*oFeKKP&RJ&fU?$i^F>BPO+%a+ z9}j;rU$;ysv0|PAV_YLyz_qCXAaxnnp)9plKbK-RkZMq3nzmUpwFX1IiRe2Syw4wP z-N`;}86Mz7`nb2Zw*@1?0>wRek&uzL?}1LJ{CMyBu93XzjO&Y^t_>l{#!rk%ncDTA z*dfuK(JL8!Rx|k&@m_LB?JYElhXa27Za5&SlK>wTA29{I7%7+W#<3h5Q2sVtlQE8{ zR?GNm&Q8Jvlm(xICwyTzdo_S0kJbz&5-LBG;oNqSW9}f#i^4c=|}N0>igaV 
z97+)*)z0jPg2+=mMH6==QCyZ3h7L3DH!^3}VSRT8GoEqtLi2$_-_gA8JTGYG$on(k zKoTApO(r(^O4I3nvZg7nD#Pf&jx#_teb-o-+Rl-1ABc2(iB0aSI!%qiJ#TYbV7?v?2bx>} z%r0Rym&pu?!aM7>F*6rWcde=g9tkXxD?t<`_}PcVh!QOaR|$vpp{toGRI}^AuU=s_ zhjZ?O6)6e`Jq8qCcx~X^SQVLMN!Unu`A;#Fux{ot_gkP1HlVrm5#6GSHz=@MG|#w& z(sVx2HU{MhCk|o9Df&-84R(DCX{r3=#TTj=!sw!kk?-28jJ00K+yy}GYr#JKy>ptVM?;Jk*xY8cA_xpvzJGJrf~_dHeIx1_*35@=YDSIIFidC z{vHFXl}D}tYCtOpN`wV0YxN?_gtDB}dK|dAwU2?A-u^H5@?zYv_XcGqPO-@x&gz!b z^?H`~GLvs)gSz-{KXwmdesOEZ5(1;h#61$^9cWby4{xj*kq7=E zU{SkJ$#cR`g2M6NeXm2LC$RO=3_xN{_o z3DLfaQ!?v+P2IhfYwM=f>Wse2`z@dq$WR)(++94UoK4q#X~WZLlIt?$Io{dbqGSHG z$LiD&yK;DEwH5ut+PUsN;*&y}8#4=Gta<$gv;_r1hl_X4FZ8BK*-fv&G&jd=*$bW> z|J}7q)7+S3WiM@FUj0#EF=>ZRh8bUnfkSqU>|46*TH#$BeXL#e2&60LKUu|kx4GGOQIo}Oy7e4xC!yc;VX-P060 zXR*u%eNe@M2#hC7HEYPz$`2$9_nn)>xN;%G zZu!#P!dWP0pO3l8ugFiIA=0MA-mN*=FdwD1&aRHR<+ndBEL*=Q5-cK1JY*;*h2dBH zybG90IzvG<-7%*7<1vYu&l-PEF{kh}2~~*=yL^6n*(D978CeUU)wY;txaIn?islq%GQ==IQs^;R1>QuBivzTA$kbB z6u5mtLqKV!GrAfpB8RYp^son~q41qHLfDD@xhpn2JR0Pq_s=5M04?n`#qf<6ac2xs zrp$XI?E{}|V6~0abe@TQ-G4PX$|p2*de7wfNKO`c0mD$qCcO|9gI7w9oju5lLnq{R z%AG_diTO(&b$|j4P6hyY(t2vaHb85)a@H&PoQ|=7sbFmT`KsAS&6I(1vW5inYXZ9D=RUxAjO z^F1dh5h|jY5qWIGWmhU@7hoAyviH*EzEw>V@2RuWJDZXi6Fix3KYFpOOE|Nl4TFv; zwNSTJ-vHE#p|ml#fFmD11}~&Tgj9v1*_|m?F%kPBhAj73eOFzZO(<4I{BO_}O&^S2 zMGtXqpi_5zO%|RI==d*=r%!BR(9!WVXW}3~97X-%Z5maO5;M+4@kdkMq72|4M;9kz>UVk^znQ6Q5rjwq5}N*Bi9>ZIdn}ojOx9xNJonjk zV~}k_auatg7s_&+Nhg6!ie%zXX)56ho?1_g+`xC}l>{SBeK1sL_h@TM>cC>d!#JTG z&$>8w85w$BQHw;na+MXuh_o3_4^h_RSKDw=%dq{$nRd6|g?Z*UQdN(sjxG)V0p=#% zL>8rjS`Z&9pD*(2BfF#&rKJL~g3NbQzlD?ks)JqBkcUzW3?#(QDI|CeBv!^H0sj9^ zxwSs0aef+lb0+F9(WWR-;;9RH*IMXKNuJvJ%_a#`9Pmi~NxG_18ms`}8XllKkbY-g zshZ}N-nnv&Kz?Ms&3@HQ3UC;$e&laGV)7uUHymhetnW~J_&n0?Z+#_!>m>sM)ns=5 z*(s{|XP1Xx5J_=5YjmGbi-Xe`v?klWZ(Fv9|9YUGpMRyc-&F&m&s>G z&OS)MSJQDgpAOJK7(!l&2aa_PW+aVucB?ZE{r1%VBvb&odQM@^c8{Mnh20b(^i=%; zRX+5&YC=0LQJ7Cz?3dPi26|=@0s)0Z4+lE``x-qYEWB!|u?>N8?sV>G-LCEoDvQ&N 
z7%}GK*NPmHD~F!H%Wz&1$o)9Z=Wx_`=wA;Z9PkWaZ4_BIe`6yi-HcpFg;;ugCO=H>_wWt%MIh$ zu+L^GSFZj5#Ii)tlI?#0V)^=(C`lM6slkjVPl~z&07dd&;dtI!-|@KVxM}icz$0G= z#Yavq0n!lu#q2a?0uuNPf;Yat4~by7^6YVTKtMo#Mqt#PA1JhxoCxS2b|fzRxk@`o zSztVt>P>M-9Jp9GXM0eK!E?AWX-p;bM!wL8m*i#cdmV#h-EJ>`MNbWGkk4&jFnquwGO0Ns1a}6oUtBn`?>LMm$z5{n=Z<;5krz) zu%WeLoMY5BK@P0{$pzH?BZ4rA0{eC_ebT!G6EE6C^LgdfJDyACm;*YU^s{ zp3R4AoXFA+nk>1DtcWqFhW+aI_pfdCW2eb$#D0OfQE~c&>bjF4L@PALn?h>4%GU;} zY;{Jx%|o;QYmXCfCa;Bh=R9%37|_C|H>&`1fETTYRCd^ngL%S@{}-cr^;ir~RIS6A6vs1%0h(Eg(gbQ0SpHyk{Yw? zH;xwx6bv;SLL?#*nPu&dG0%$pZjAXqY-=p(3^WduC%vnV&_$cHdWGr?@!{1E8LPSN z34#Nx9HsZaP3g~1bV*s{)jXiHL0?)lHh^X^V}&XL%YLpVY#M zhhzp%wJjN5P*alFN|4(~ZV`xh8oyLh8(7-Z@t?A40gC9ifqb5I6ot0#E!Xt23aEl| zUe4(H#J$bo2;rw?&xpDt4M6A-eZ1f|3GrVD;2%Q|yrrk?IKl5porTRa2u#>dP)}_N z;i9s%1y0s1if5|N)vdxCFina7ZO@FMPww!fc50w9!^IPlI4onjlHzIWJ!K9=e&-3_ zVjfTYcJ}{%2XlNxP;&}u-e+RD*zbJRbPn0~D_}}*o(-s4`O42*(*;OBP_KdT$X(eT zMLf#yR#}Q4)?)fq4+Fzrj}}}LlmMJ-Bs9WeJx*DT%3x=O@L%+R#YZ#sKwrz`m+ig@&ik@hP&S8JprsoQ@3Ms6N9mqGjETT_a7d8;#@ft_ zoC_L<{g-IFy0ZK%V>6CQHq6yYffjNc1XsKkDC)N`H#b+0T6*}?Bn@$CM_!306oM0^ z+FGNEEAlX~Y&cbFex8z@5(5 zvCk70;$Y7sBdhf7v$1T68qQJKAHDaQ%&7*&$y^(1q`I8HXVY!Osbd9RwtrV#{uEHk z1!K*AF68E(*!&=p`*!P{3m;m*(c3L(C1-Vqf_c|QmsMUR6FsG(F>>W~giQWrUu&KF zMX6HH=zsVq4QDytGJnSpyr=a$zqAB@9Z0j~mMf&;1MYom<85q3@n`dz9(gLNpySux)7I%WX26va@UfiWva4YV`LU9OIw75el1ecTNv-kXkWbfH)X4ZXO z0_AD4zB`K=&ghpHC;J%30s`KwF*|r*)|HYA28)rWC}WMARnkk++u;^a4fpioa?S_s zC*wVC>0&BbM=JK8id5+rkcH`=n&dLz8_A>?114rJ6jP}DJ8IZ?P5nP^;G`XVz%F~f zXm7x-sZn)^@%Dt9+{{WEF*@vHh|vIR+PDjy{|}QO+Y0_!QgUl5P|wtD1UD^E1|4?2 zkV6aU^XTb$5`$5zd zVFJ#w3@}YNIypX$UA~6>K)dPPC4)Fw50RR(*~KpF#lmf1YiqacAlj<9>^aQj4Sd-K zMI7B8dWvRHZ=nxPyTzUk<0slRANe`v*Y?tO_QVZqOrTWoBn=y_d0J8I{ewAH$-nT~ zs3I0t+=M4%^ZP?5tbV(Rrz=8#Y@f;%WwT7|m9=H^w>fag{Ih0k1v`2HUO&=(+rbkl z`TF0gamnNO0N2j_lC7-qjIQvMIMFD&QFAJ`fg?Z3`{GC$we z8a`#fHDUe`zTKNOp%}}h(l&$hjbRd_nFsqowPisfq4=bBlaqp=JRmNHcbex0+;-a$ z*s#8Vo8^ZZDe262gfsJ=v=sF!zOSh+#FZk}d?Av+dJ+GP!}uZ&G;WI@l)`C&mBoxT 
zH^ND58233ox+Glea&(i@bF?V(g2C!Mb`G#IEqb+bWfJ`adlRr2g2}qnN==mDU{003 zz0SAcXtB5fqKxD>wtLJx2qqx(==@yeE#~&LbesHRI@NIf#RY;#|EL>1HtcWiy<#VR z-uYuI_DUXN^B{GwEb&OG%2=-XY+;t@JFcS#>+y~ z96f0T6~(V40m_4qJz_02GH1($Ml}krn(OB5as%4<#RR`Izs@_=Y?LLxycd(Ad<_MM zl!sTPiTqy{K(E`2v+Xlj5ZifA{($ez{SmR)^A*!Ba8wDOUC|Wd7$p2UwAl+{cQ#s? zx%_HHx~~bC#~uoR;J=P#At(ObV<$ahT?ETYkd!rqF!b&CUv(UBf4#?#kK90+$5*s& zsR~)Po$k7$F)KF})e=YlKd0^g^~tvR9OjD(=G6?0*NVdrI(}0mVt=%m*ry$|po z8mJf5=!pBOfb(Uw#Oy=zb~OVJj#fnNv^$QfO|hGCk}oe#Rx71h`RUDxe2$p7*s}dI z$pf;3Z4&2xY(%)@3EAQAxjDj1l+~SJZATC(b(4jtc%TTWGs7moMPsGK#iUf)uNQZ5f)QE1v2r69jzOX^E3+JSv`x2X9-f$ z{_p)-TzN?4Gj+ntz4;xw5Ay=uvb^}DBK@7+50{%8^g{b%y0juU&xISNhKVIcrONfzt z@_+iHim-Pn!ZM^I-G2#Mv0cWP@tuy*FO9I97MM3>qudiXC_mY=5lQ{}bJ3d-z#~&# zE_zIq?Lk`ty4)ajla#{I-(kwlabUqd1FVpKEM4L5@bEeJ*fR*GBT&PTX_3N^F=LYb zi7d$+WvN?%AA-i{-I4{dIhS_3hW2fAOS|mNR9qs2wlt+uf6|P{V^`J4ul-Aq5Ippr z^PeL{oY55lk|`s69aC_VWFSc~=hq-Irq%)0OU4zu$84w56(a9zsYDOeay1P7o}TXF_4ygF zG$QU;d>wX_k^D1S_d%Vhnl=D)o1aXpvfwd+-ymU^8~tUL15KPGH~K)_on*Rvr_^#u ziMM{0K&`s=gsA*R<+N%yi`tJxG>j^abC*#1je!7;(!I7Xe^;W1VwN+9J2~Y*Nlj$l z%I2?O?$g`WW}et@_l(0reVy);1wk*lsUJxZwBLp)zFFB|j4C&$z&-6WiAI2l1Kr== zlfG1m3w2Q+9l6TOV|d@gw{eo~O?*&Zqma|5po~DUga3}4XiMhMGjll=cxMu@Tv)AI zP<2!5;axY$+hBSy-TUvCPo3j*r+d0Y+NF^pskLcz z_?_VVWp`m!K=KaJe}IUePDP%uyUlw^S1$%&+XK1Ju2{+XM5GF zk;rSmu=-m&^v$=n_n_sCoD76XpWDeWJj#anlfDR-_~VzqE^{{s+pp|QNIyN?Ol%o$ z9&fn>FD>3A6aFyBea4|9hbT*4D0+ zA!Pbe|EQQ8K~l49Wz@e^wil@l%f*|>g!F=&$@tjppX&q-KOH4l^JAwP z929lMZR3IQj_XCP5dWwS+yYD1*l-qKGMC3w2%ACode0YU8}BwT{_BRvA2gU2WD`ua z)eE8*J|M56JaGLPn|T7%Pl@uYqf-$CG|z=(V<}W2zEp4X+&n8eYV~8&iiG{5`+8IO zGUYe8$@G&>4w-l)<8yO@fEtEnu5{kR)t5Dg5c8$A6K)aHcU8xW2Q4Q8EFTL_p;_O2 z!?dMthuHg#fk)gAK=U}|DGfkI5zbt77^ANB_m+B9&+!8Uv+W*^w6m-x7l3idR%PM9 zqi4jFqSw)%ZHL2pQcRV zAwGJ9>{4+itNq!;%^>%-lTY}n07FmPq}d70?#j01Z%ONX{cYp$TkZ<`_4h7$d8hf* ze!?1j{6qc*iD4fQZRJk?MooC8@cYCtqSU1Q^TGTHOpmu1Z;rABEch@OPo6MtN_7<9 zS_p(GNHOjB<=`4fHCUdTk+jpsSWy14577OHxBtR#y>uGR3v`98Y$|Xcx}0})bXknl 
zo+84Z?s3-I8p!~Zvzl~Jt&iJ1UogY2JOwX)PXzpO7ykg_VqvYabBW3l>rr~2$s!!s zL|GrH*q=jMV3o;cDv`uPKhZMK$c88B7RBc#BW0=KHp!|67caYsMW(Ns6u-0}dsjFI ztZh90P2`dB%U)@&kgFP)_u+45^-y%O_FDUl*NEq3f>x#sR;)C@G0|N=x3As2FivM> zx?pV=KFu_A(fV6FNe!uKeG-)LO2K2JzRCI%QyFLrB-wZDyui3BJ!~-Vv$Mjfk9@Me zP=Qdl%Q@A!CHsQIJF)mb?d!lq)H@fD}c`R(UpKvP zexga6eIHl5G`|!KD)@1QH`12X?zEyu59~JkdXLD zhFKRGFtW;iiH$sjNp8P0S$!8faX+bK(55vx?phyeP!!l&tq?W!CzQM9*a100Y)XkD zXJOFAEtL2hYX1=zXTAvw9~c{X-YrRE$)7YDCBAU6SphUGq>}$uFu?1~3zBW|OVnTf zCL&nY(9U!6bs!6v1)Z!$ec}7md7Gmd6RBIdvAR`qi}B{}?!g7MW)rp$PJ_0^JQs3~!t69!gNghR>Ic08lr6n2$DfE?t_pxk}_ zUNTdh?*%xo8=CT~%45MvXi$BUPD83==BUrBrec!}BDzVrF=IB;0`fHs(2YCt(msXj zFT0#TLzY33wS!-4CF3v_quD+M^zkTVva{@ZX!~I_w4DUIb@5^!15}a1;yj=$Us-x} zlk~ddcq8rKZtGg@5iC|USWCl8(eM<+D}n}_kn(aebh4>98oxU`{!FR~QHoC1={^wB z;{FxFWK;+)SL;37{WAjMCv&11=~51lSvl2wPm%BIJ6!VCi_8xRZm4XgXp$OFuk?Ng zm&jc`$|3;YsB)?t{~-)sOccI@E%T5{_m7!Rjw9{8&xqYP(L!c+@JfB!niMoFVCag|6Q6W(88wDV?k@{S-2G0F^H<*eXa=yWM(lUM z4X6)7WQ>h)Ks!w`J4vu9)?r=i%{nqzl;BJ$JkiF_^(_5+6UUD$KFpwk0C)hgn z1EWNv(Bp@{#-jHI#EdrbU8MD*^M=Q*$65!uHf{_6(Ne{%Lgr-ZLdBxC!vRhy-E$q? z0_S_1I#~%oh}D$*;a16N=RCRv#f@crMkX;?Arl_Yi(EHHnSPz|C(ytPl(*y)+49t^ zTrMecs%LW`NGSGVLaWNV1vjXq5!X`HfDtVqE3$UnEHcO)l$qjZXl!~v+4<##NEJ@u zW>v{(MCc6mRr4f{*#j z`*KzPe9ecVZ`KI#04FgZ<-0S%v9eEV>l$~Dd5*2}yS#Cc{#OOsYKh;5yw`WrRDD+? 
zqhS(YN1fIN)$VUJ%FVzk80VPrCE;OT=r9*}^?Osa{eM#tV7=WRN16dt&T!1c9Kxwl ziwh!lbnK>aXLrlqGW&35qV9b_&=>TGoFoE%nMAN5imks5k*jG-CA1yDAK$WID!dV) zSZingFem(~E1Y#r+0<4VoJdX2OAAzlA1ra6%+j|j40wlIiOi+(QHvY|2=^Ko0Lr4* z=`AFw5)Vt=y<>4Dcl5LrZw)%FzGaz+v(6D(tmvS3%rcEv==tTUd47Vs=bJ|DHBUC& zZ7r$*O0d^IQzv(cq#;3kzp6^^PYNA!A&e(|OaKaL+?FQ^XuT#m>qF8EdQ$gsT}Z5= zCKenf=?4g9BvV}Zhh3Zj&r_b zm@)Rg0F&d;FMq2p`KgV`&kSjN&8u3eP->`!OfefDI~(Z?7A6KgFEW6-DVfcf`=~* zSj0P^G(6J!7}+guIXNs(=nsgb>~8n`zXC!pXZDrY_M?KJYs$Cgj!QFM7|C|QAAka; zT_4l5l(PJBJEmaG3gVE?5ujQ_rlOf=Icoq#jG$Ki8wn4dKduOtwf-?VdgELN(h)$Y zu}-|x<>?!j`Zaj(qQ6_`E&55O$HMW;QzlOoN|?gS2`L8bYLB-bDAubRGiP3<%pnEt zWbNzdZH(TWRiv7o2r>VdVs|qGrix3iG=ib-gO7}6Hw zWv;E^k+<}D{L-3oF(HV+?MAWSakU2qwguCcf*Txj(rn;;bXJmt350%)&sDq*p$>YKf(SD*qXhBtc?>T7F)9dLp z;Mh8dT>rva6PQ%Mgufh7(cED>f0q~_X>_rBDCTyxp#|{kL6S$<>%+i*n!2(Q9*PCJ zx%oYGPD@VJmCZ&$Lbs6k^lcuAH4cion#IF+__^}gxr;E^O3YN4A;|v9^mr z_rg_-UDwz#6;G0MsDzn?8~=(srBy)%4qbDCRcde)D0$$k@6W6nHraz*u7-odlvm2t zmK#FaYHPo|--m+%ENneDSlBC%JKCn}z@&#ZR80=G&Z5wf5q}eW+71siSNlWZu@9fN zF)rwL7!E57=h3 z@0O5=Z(SbwXFjzNTUU48*Ws+KG6#QQh5)UmOcq(9&Vn%+c>}$rMT|95J{}PbEKo~k zUt^*Yox31Vctvnm!U8KrfP|)w)hK|3pJ{z+Qen^q#!Aj`T6DC0SZIhYTdv}Ws=?+oZou=C+Y&bmiqz2( zXmv3Gqh2vzycZ;6N|j!$H(i5D?W}$yrZHP=O0oJwK3->RYPF;&LW zcs?+&Q2?9#rH;nhvDHIZqj*#ZA(jO^c=iozniB~|NFDE}MUrgZ!a-54a@;gr6I9^?n!1tE1u1dmUU!Cafr-kRt?OB#7-iP^#DbUPDB!JWTf*D zxsOWudWP21Q_0qTQxM7}^vkQ0#*Z{49`oPYm*?1n|6%|i!p@BI>iB!_W}M-DLY@jw zsz62c&Tso`(-%n{(EnZJ^5OjMuCCR=IJ)N^!<(?v zDQvp|`a*U>S@GGuOqwV42kt2QO)3l;q8kfKlEUFWN3P!FC4kkSeDLqCL4Uuzd z5_1j`s?5>kI?@ep#Xl6Zwe{LTvCVN+Uu#)EhU3BO?&7uF-;JjtU>Q{Nsb=X~tLzRb zD)?bEP75r%o;m5|OMepa|2$DsdXmdho$C?TGUo+L>_`&`E9C##%gbI!=dWE( zFJklxA_X^XL&W>Ag;@0Ib!_9K=ZNN`rE#{!Dqh844SV`8-8c?zA@AIFyek&D{$PG* z;Hl>`sVmabRs`y_KKWe|Mb?u(>eE9QJ^4i^Cg^V(=kf2Ema{wNcws0WrZ*_>9oURI zNjj7BBFj>}ABbtI?&xx#L<&2$FTOs~K)%Q_XRo%~&RlGk)!gdkj@)#HM#N(W=ro7H z<=@bsa6tD^qoQvj81X$#gErE?RIs@MY^F}5T@R;8uOfx#x@Ov*Tf3&496Ep=i@aq5 zbh{))frC0la8rI9IFm;_%C^=*6i(9Vt8ubAbM-7_SJ+o^7iJHOYOosxF`i)UQFBx* 
z%q<5zLSV=bjQfz+tvB{A_g^FUE}3kdWC$eHP2-O)0j!#g5HIdUw3IPfS^%Pq;UJrC5)p=kHK=+=u$C#!0s5Cc z&Unjjr#!LTISaeO7nmoOdg6ZynPtY@l)jIu5|2)`-g+*COO9Z~`khBQyC(H4$M0_C zZ#f?Nq~2@RN|gn-&s_wTD+&zOZirI!@qfRZiPg zYVPlus0oSiG^v;)wa_+~u?yKeW)yW$4%;pl`V+erO6?*&Do8HM`Nm$V)B1rVpL?dR zbvbbVlsM#ZuuDft=?8AS2gJ~V6MqV2sra?oME6%zJQDm|W8}&Y7S+A!$1l)1%k6WM zcDz@?)lt>tCWV(!&aF;QmOU``$IX~~clW8*@{Rr0Lf-QP;vNlC658AoL|kT} z;mY5bz$&*wHxfb9V??#+3eRTO3o75a-@A2dWmdz2@S#(wqRTF!#g3e0A&@H}0#5Wk$ zhXY(8uuO+83@3kAgTmg^=nq4tigOtE$2_(-zoPC|(JxOVde_>!!_*265242=8!*7V z9RB4$*dYHALxRCfadnaNzq1;M=3ZNW!YvM>gTc5RkP&=c$XLX;xnj(dYP4nkN9MmDpDuRn;@5@acL}g`g z>)W?rL>he=@f`D^HKbq3AC?vn*u$#v#kbhD6+zB79yb0JLcV__5_&OPkA(a6&C5Nc zw4hPj#hI(-u988!BzcpT{dw=66;l}7wPHTOj^e(IRvMq|JBmhUnvm_e)G|}t_9!o` z+dtxwk{_ICb$M8zD+vd$A5F|1;gkZ`Vz}fMHJ357vUI+{bPjY^q3zsbR(x!drL1KH z%31V0kyL^Ej|w$RjPL+q#^s)B!zf`(&FLVjDSM_46-xRMAv1Wp;yLO|l=y{J`gMZsLFBA$BI5znI z{fX#B-PqlAHA-Bn_VU=_vel;m_D=)iHdgNbho4VRRKK0}>H9I|T*@bKaPxk99)3wq z!%L8kgAA@>L`6}=rU}|+S5nLDztYwt3%QEYCrcE}|LGnTxti6OnZkjM{@fL9Vaem2 z|5?<2)X}tm+}`6@E)f+IVNDf!d*A}*B_7JfQ!3-J968c(3*R(Uy?kwS4}cWL1tEkx zVgvO^vm&?1UPhN+stp^h<=Aa=U5+;ucEUS;$X(rbKGy9pPhe@$YP!x04DE9?!xj?l z$0Mc&YJx1S+P_GEVZUsH+UAn*tS00@(>Pj`sZ8wr?;NmQ&QDtywdzJdsFIefIdWyj zz#L-Zx0nOmxrbiGqDswBi=qykhLuAy;c4cp%PGB!w9G)l%>BNeN%?n6mk6hVci3pN zs=cT7lS0G;KJGzD3A-)d>m?421K)nslN!vr72$dAeeD18ouTb9YXUhI5NW5=jT~!+ zqIs8WA9nv!?XL#V4}O3aw8zZawnNjTC*l;Dadb5S7DPXe(+Lu<}k|Ge0w5n&mH?-|b;__ZXb0JMl;v&a)MR8?NfMZc4i-x~ybva2J1R9Mf#h^MWLgSuYn2z#Jt8 z8*X6ID;(7A6WFE8N)zAh4=;FUYwA~Vt1tn>jC^OvKl+e)=Si3%6n{zSKRK=Nnd5H> z<&u|}pyNj<=kC((+aE;n7$DxxJbK9e&3ho^CqHr zgqg{TtGMLWT9`TeBoCwL%*U=QHbSZ!-4}Z`kVCS zb{Thz_kbOhF@loM@qb6Lm@kK}#e_%d9pSdwYZxP-oz>JmCln2zD%p9yoZvjRlsQcc zG@GCWzsdN`p0azhj_o099E_!bJ`;xs$1wSUF!Qa}Xe~$uT-CE5I8TQ&j>m=CPMnyo z@FaNK3IiFgwT*{;{Uwcpk-5|iq}{f^KR;4cKP)J&O;|!xrZk1bELJ3Y1(V%icT9nPY>5nq-ECEmH7FgI0I@v@U$_&f@RYWbUesqNnvksU9hU&f|)_Rh5Y? 
zO89Jx+kN(s&Os@Z=nspAoB!~7C8V`5_F|r3iBF>-SVlMD92T;bTXMot%iAh zN1=EBiO=^A`M0r)?q&91nXQL{6IdFxs_!b%MO)9r&l+0q(_1t6_4r58i{pQa)?w*H zSbT=Q@DC-9d00Ax`0c>um5wa?`J~~sdP#g8bN^-0^hKxazi4PRA`z6hGP?_mBRg)L zOK#l*lP@LR>=(hj3++47qAr2E#HA!BfA~M1GKm40g8royC45j0Ic*QF={l|Nt_G6@ zu*{o!Oz|!ji`j?*UIssDeDeNJ(+pi*?&B-fdjOQf%Cgqaf6v!5$+x0c!`Q}|4n?mw zAnL8U_DzGAwf+~w8*I#fqL+TjnTRX%2fy54re&1s|Ag;z=mb5`Iu~Od+mExC4IwY5 zoU*CJ*a)NT{V(TpnaY;1A*h)pva%8jz}G$mhiHP5f(^Hxj2j=0vzhg-yB$9}h~g^l zir0^xcd2mx{ZNap)JpXw@8h}&uB=z$=Wg#f8PUAm#E$`78Qr}ftfVJ+_Pq%kdVYhw zQwzQC@U~PzsJ?DP>d9QEi09VNmFI=kapuOvd^MwW?@ zM6e5&J2^^Qe|zKX;G$B$dF$(wE_9BG8r8J6!lb`Hcn+;QI5>YMkZd|s3f%~Ud@Rw< zu@5bNRg$=To~j(<@KrCxyYZG$HQCIc7KOXx+3l!QB5g1n6ZOU^4RZSI;$5_(XBz|g z_NO^K$h|FGL9c>+X4ktUt^H?^-ev&Q0hLj#{G04EPW%JE)8x%?8eUQ3XV7Bj44*xQ zwtNA3nJAEyRLnoZ-V{psjXfbo%X;j~&sblXa3|x=e+jj1Q&X+z)2m@JiPF=@&seUn z_F;BGe)`fhMiB3|3A$yMY0KV(t}`Lqy%>zz;*+#5ons2kH@_FOV(XguR=ac>M$QkL zSAyLz<;(+hh10J)>(-<_W)#6Vu&!_ z1E{3;{cyE3X-Snu-}y|ob%`U+gZ>qAmYF$C90bltUEaA6cI{ZLS|&?=#M}W!sv8vz z8Zkuc?vJU+6u0ycfee+YTcC?(n(VzdnlXfBzKb;`iy0gZs7AtXw=QpdEFgk)E&vIEgFT4SkIUF>PAO+H527Z2BJ_THxzjU!-of8(|92RCamZA`+^qIw@-VJv z)AkoLX6)_n5PCEP6!r?&`oBIL?)pUZQJ7JZ&CS3{fSkY!9eV$RBl?M@?oBEA3&SW+ zjNac}4_JUcvN7FWkvyJxRH7Cgw&^&VsLF(`u#zyP$dt+akG>0_k>3v>e*NGiG9X0D z6yKF>=NNe(ReYDb+*488+>1(IfjjtCT~q&wt>|IWwtDNScZ;}dQ?L7-3v8iVoAa(j zsdm4N6SLlD-7qKAVs5e%a`8aY^ zsuZx>yuMDpRkSelYkIN3t;jhw+Ul}(h3c)VAn84;i;uLW-FOiII_tB+n?Ik>qv84m z@;*6wg@;fITLfh29~=5O;v%KgAJ1sg`c$VoC@HWbt4|>Ccj=DRfE>j~xj#tco6VF} zL{q1eM-zzxw?s0}%j<%YpC^nIpxeJtGvs^yr*CMgYwFQhxa2x8dib{_TQaTGvPO9U z5G0#l8$evLIu><9ceLR${qk9Nj&TuF+nsey@&1!(F2qwh%>c#8Lio;uWbB3K5amg* z&q8?USJu_u)RW8L@6i|pd6HwgA77C+B0>(JBY#kFv;t`YCmsHXs3RUuVq~*2^gnW` zBG0DZX5CneX5|0-;~IMZwV8psXn&{FELUjOYkW40uILfldJG+fJ&1JYW3U-=IQ74H z9syb09e}OS*0(6WNLQ_)yw8g@bO_>oNt6tn9k%KPJ346$eG5p0*`R|gn%%+Vo#x5< zY#{>-JwuE|3nVpUKzboGW4y|vC5|zYogamz;xyw>*>%=!%}&0v)HnzZw)pY_vAj~D zZ{Zef#Q>fO&^(NGPX+%fj;KEM5NA%f*$Rgll?AcBH2U)9!^B6aB 
zsA!h5v6%Qx6kCs(*~DOz@A#`R`GOD#ocjqlQh+{EAP;DJu>39o?DAGV#_X zIlH+_b@1Xr$$1%VpO40!cgEDm7bjtBgO{a&3xn>vRYA+}fYi1RSxmONzZ5;NHw(D# zqB1h=Rg1(aY15;Z+%%8bf!d^ypn1t&hCj30(*7L0xai*mH~f=jFRNZz~Y}=9TSTb!-!T5SxC~T@6%&VhMtJHg^>SzEUofByVfsgu1|D0^-kYUa`-hm zqUEz04)$yK$a%Y+woPJcG-`W3=X`Fp%7P-kJC?Vff=()RcR^pwW_xbisPw?>KL3HhC1ayq7sTL14&JF5$+*$;7B_Fe-88lnEg(qoAonm5@J0y-_FL0`GB)*T=BG> z%>)oy`f7nOVHSDpAP`Xx6^bAx(Nz?vjT!YqGRrBf3IarP2EhvY@~Iv7AjvWhOO>z3 zADBDO@O-12pKO4_5jKKx6h-^~4V~?YCkB1JDvdD>05G45;>AIa_z&u#vv|n)Qz*V~ zSS2c=(vzg!NJJchRzr+tWczk;`O3CZp3TRgw)Ed2s8}4X_PcH`VFNv*WihVkaJ)u) zlvXd5vaOpVZ8~#Zl7F10RRP^-FlH5ebjNm6x=!#%oH82Y(yI#O_nPRag|qh@2MyLt znLo$GtO48X_+Ld&15{2Zk2&~z`u)|3zM7^fz5r71-S~uUhX1~^*T4If05#NtbdBmY zD;&I{N`L*?JHasT+Q47e^e#H{*s6&xbx?n>T+iiPk7`NN9)NL&!F1GU0+0Ya6_%Ke z7II{g{>D=pt#s!-t3;R#H-IFzr)DK_`Nj0BS#02OW-k)ZZn%+U+aBWh*5XKVYiyHi z2|9sIn4$hpJYSwh|!(vbN-0L)L@~A7mPgw>7t5O9A6J_j^`f39uQ@yfr=Q^-vS8hcy%(eI3(O36z$sNLCYWIU)FZ!JnN-A8 zy&>hGD&doj!}e|GZw!*B45GvC!j8Widv%Q7bhF9tbPe&E9#g7@vZbP=)Bkiwk8zH< zp{iu9867HR?R+*L8fUuxyeh{sVgD~Yn0YL3nMI~BP*K4btz|_nW}GWKj{nH#ojGDW zLj-j#B-ter--G0kr5EAl!u8MD=euzA%I+o` z35k1mj4d`w<(I_stYH@a65>CyEOrNR_TN;<@T!+(*-t9=?CY+1Clk3%3Km4)dqH~a zDlS;@ZbCC2*Ov6(GZBKG;q=ukTwnhB!=8O5tdtvOLimMg(vxH&k?JYJ2FmWyLj!E3 zyXbg>8&ZvoEIXCcO$!)ypZ`m~zGFl}CFh?d(12L4N>6z^(e!aXc+i2QT?HHyU?;#~#G-xPvcVL5gpUu@y zb;>q%pwWl3iF7DZI{hLlRyB=6@$1H9YQ(u3>}Zyh;CfOi-EDJDtM&C&t4xVhbt!n66eyo+j##HjeOV zg1T@EGr>V(vr&uSaL2$Z-wbAPU|Ah#Vv!!*%cjrmaf4?!;p)+tV$`1G0Zjp^T7P(% zp)l#McZy^^ip#a=NfsZZHSI>6C8+sUEvBPONxbA~aq|(;x>>05;XL05DY@ex$=XC}#Fm0P?X$EbUfafEhH4X)J>;)l8}d`?P_Q+Dgk?Q3)Dvb! zmxF^X$(Ge>!(7WLhJsCOe_wbp=9US}6K{2k*Rssd^C9pom06rvH1HC@18x)TLmPZY znEh)2t^Gcin@G~NQUVqRlv?i@5zXU)4i)+t`f#O>+2Q@cegQ+-U9qL-oi+qe?w2c0 zeY3ec&6v_Rz_%u6{3Y83*)@noT+fWyljBXSWvukd>EXW)O~m*o(6<%#x*^z5dX%O( z4Et1b>Qb0sNff*JUa{o8vc+RD7Mqg7fC=p?{dPoE z5Q|JQ$C$r)I&Z6Sd_uH!lJ}=*YvS_;tz-75pp!|K#-dd9AH&jSn_Y6Mr2QWdQteeT zAObj9;QC$@!i2`X@0=Z#KQ=`+LIi=nyZo5}&ve6FWDZ*fc`-4q(s%ucR{O#=TRZw! 
z1p5}NTeY9DJcGAaLE0LEcg(T4Mu1;c-HYJd1p)6!st%)WRl1)Nt-PUy|>{M5AQ--zIT!>MDv zcj~#0YQ`3s?*E?MYctmFjo0$uKJ5lf`K5swhMK(?JyYm*j{H27e-WFny6k`XM#!H> zPKw*u7`TscHx-W$ih ziRT z*QoFLo1}Pp&mBMDA5m#^vjH@Rif4+@{7DRe8GYZ#TxKnUXos>Wde zL}c2{e)?to;#Lz5pZVU@n&%tvRf7je8<##yI!*p-&XUtj}c$ zNltiy{V*wQ4SV1=N|S=9)aMt8Yzq~TN!F4M5}>1{Z}8*@EdJ^i@!zrNIXHx`IvGhB zNsWYj=~B@=K?oBavr0g>r;3x*^Tc@M8s@m~ z1ZAe3k?{5oKPDaR6J4VmykN{6PCYG49g*H=Cn^N<_>m>d8vl(l=TQ&_1x+LO#mo_W!Q)Y~jx z)_6B77`!4zq1E%g5`n%PNJc`n!}3Kzs^fcor3BPmgReY}-itM!D?O_UX3a3%v16~2 z&YsV9mKs+mqy3h8(_d~)heu1rYROy1rH9dE=K4X+l zYWZQ@WIL}gKFBep;SRsBbjo>3;#Mto7quJIXS}y#O6cAR| zd{!kJ)!NE($tf@+;Lq}boBLpB*S})+?<2aOcn&BWxZh3BYpD62@8BJt<=Y zozS0H2Q^tXc)x!~VRZaenuV^zo6eOF6G?+%dQ-%QzUDnw-9{DpfrhS}kwB6|FY|%f zvgvzcz>Q0MdT<5mL8L84@}pcVNIO?8*iljqsUoimRO3OiXtqyq+Z71DDSGTeXk66K z{RX#XIFcH2r&@vS;5Ih$`0(DQZ7&-J8cGW0*ec{RL;L*0H=a^$)06Z3@PKzkpd@VN z!dQCA_%v(I-1c)=-qznJCfO=CVp;#%_9g1jL__fJd^F5Y!1Djbn&#i7O)=cvdueuH z;>P3%Bv?{?RT&^cDOO%S-SH9v$=k$n>6Z?aoE(>&>3Ep%YzR6BS|&2fWCkix>;P1I zsn3Z$`_#cOJG1cyY+9N^idl1)7;oLFOijZoMTVy=fWUt8XH~Ngu5mJJjd%{+(pSP- z?kh`*q=f{y`GM2eSlA6iOnXP(gcR@w-hh(fVJ9eq@-6g4$~W&Uf`Ruh;%A)4|MVj0Bq z0?|)Yn|x%Ab)Rrx_oFUA;3hLmEeK8%LueKIq)?k!&dI53SROC5Xv5M(*Je(*5OM}U z@oJsLKTWH02DnXr6_g_5KW;XW4n>(@dHyP`f0@%b<=JNKCI|R)XSUv!isrlybWNU4 z?{PC$uG{sk`uvZ|JKs^J#Y=rFrc{Noc8<>Tj^>rMWBK1Wx=!lz%@ z;_bhN`0TF&Cd~TdL2-rmHz2XzLCEmb$${54PKm&GhY(WH#rF5i#Ue4I&FY)RP4Q);}2WiO<5eO5YAVf4y zu@ne|#dr*}-xr9ue=ri;d1iF;04H@+ZOocJWk(@-%dz1UyxvG9>?yE?RMj3)z25i{ z*xt%ma^)-i>-D|7b`iWnX(C|`gNu%7wwQvzPsAH-=U|hdQN~b?`K!`#z687Lo`cpI z;GY)baEjITlnYB3p-T?W2bP_{54pl${7<%Cl zQ)X(B4E?+By_r{9{Le{^QTTc`^U9J$zMlvroq8C+(W?D9?jGIx%gvT9Ov|rMj2l^d z5mv^pxLfU=vyk)M;f=NkT`bLcrh~XF5O&|qC#ChBslT|>jsAl)d21gW28RZqN0b`K zR9hQIu<@#)eJI9*ZB#EiUYFheNn@m8M8+tHB;Bvdr&eEnSefzn{i_m?9Xy6b=V}Ib zw|-C<-6iZKy_(u0H zWgAC;;7{}P`b@P)k#Czvf5Hq$1=c^RKAiTkA-^H9-E@`Sk9ZO`xuEyR;67wWIe4RW zGzW79Hb56VzHu!(9VTp(AZijY5gjl2M3pL-9vXj=KKZ?q;hH){4Ug*=LlOBmLD6!H z7e=-Lqe^LY$Vhd4GK$+vTZ)r^b2LLbzGHn5l40lTcx_)lKy;sHps^?EBSqI2hBY4r 
z{z1qhUm<~JB?d)fo70CV$JYko2~nK<(#`qo}N$)S1lxHSb#dn1i~ zZ9_CCD{@3PmZ44)cT&<}>ITs6Cp~>WYYSG}|9uU!u@lKP=U`H%bjRVV7|AzX`=yg`AwG; z0Dpl<*J?HSb$(K{)k4EF^PA+eIj0XoSr0musH*W?%aVb_P9ZY>`F4do8fWqo=LP+k z+E(YXd9_uHbzG^+83ZqaeQAN}Y49svsGhL3;nGbx9vCnm6HIkN0?gvA;HY+pgS zxmbdcMuFd=OZp;!3;FEfhyjjPo!IxK+NJs?eqtGwLY75^`_*c_sv2pUa?mZs=v zWd-8DC~-EaY)=o)f}AB{8d{=O)~*p5rC6w;C?Zt@p(=`WDHd8ZKnT4^RC-a0p^1V(P!JW68btz< z1PHyi(2JsA0I32Z(rf7DY-L1e=67z+-G2^O8y=Fq_gdfj%KLt00DahP_F6<(v#rm| z$;If>p3(e94FV|LYCPF&g)wEssxka9$l5sfxy@BQkj*wk;BK&V#@*W7D%P^Mq>SRm z-PCI#XBgda4axE^qSKPxc^neAwjWViScf-4<~p8_4|Ey5;5{_KpPVfkE9w6(`LPjW zTN`iixwZh68ml?V{1#@N>NF^}{ywU1qxTH*mYm`;D9^ppl!jsnQBITqT}g3W<99o* z_eRTxK66VH7UI3dcXyD7{YZdtlX9hOMsd;h`39BXZk$)D_Y0n7X+22&=KHJH(cStL>Js2*%OyjFimcbq#8Y=ySG&!ny>Bk85hfA*U6_P zd@I#CK?ui#?whB=ditwI0C0}ad0l(}E}1Y3PH%h$f)j4l8tWSSK$VE9T6h6(2)Gf~+ybF6^p?V}MfTTG=Uj3qM3m$GT?l_NO%FniG*Y>((pb4|RmQ%x^3v!v7A5(o* z5b5@8V(QE3x=Hb;2Q;IR*G+34t-IG}$5ZT63!io96=6tvhC(jMnnArmBV1vNNE5Vq zP2?&gQ?1-p7Kb0N@l)o%7cNZ*5dDYr^FL`th^~dGcIb`23SiVn8#D$IRGa2_wP^`5t2FZhZa1vIT=-YfAf5`V<6y+5DcC~|pbH&C7QIrfm~De+eq-y2;g33J zFJs;RwhBLgl{nQl(`;4Sdug@8ViGr^bX`9Uxb>N@LQd2Q!)CsJGMuy;+ z$)7doe_I9%jy-Bv)>N1UpMs?fi80Mpu6>vvn29?WjhvOO4`JI<%Xjx};!W@W+Ew#m zZp-eMDze>$_&yu(CrW)gb}nNRE0sbZQ?`nd#ueF7ngyHq4* zvA|SUr2iMJSXB-Vfb`Cw(ED_CeTYkb#*+roj6 zA?E-fRYe6f$cX_NmW@2QAaZ^1<4$j*pCzuly$LICQiFA{m~Z{&dc0AAxQEScr=V*A zHxd~uxvGT6A8#QU2m&{D7T{AZ{r(*fg({_E?ecyMsT1&bMe|q& zYk^I&-keBJlXA4R70$jPBlp?RY>e>63~gv=X5EHqj6}v+2p;(Jol+AkBMr<96=o~< zrj}cE?lhN^0PV?mZ-SrtQv_ZJyc1EwUQv%V^A9CzVc_uOdTs3v5~wFYz5jBI3Ah-q z#VRWIr^w2f()2H;oYgl6sA(LK1zD0`BWt?9it_D=m3gQUW-bn2xZdP-zx}iS#S>+E zO|Z&{FDDd%hT9nGuf;^2PJRRZ47N{CvKRf-Z&m%{KXuss8N>=-tN|bbWjtz0?XvakSBs{`)B zCnF;xH2iTUYl)QD0#T_k?8kQebHg919uGXZ0f?#-=Y!d2Vw6-?JaCeIhE|Cf#hxN3 zV6$z0C+!vjz#M9mHT!64{W@%6t)dki3}bqrnT6}eZ>JPS&aH5R2|B|SzZcYJRD8N< z=nAbKtr2Hp48J|H$RSN##u9uJOBlb%6l6dtj@7M*427D*>aC(<26o)-@QR4%>|Z|>M5)*G$!!5Nh+Vm4?^B ziywML>?7WW7KSSd=~(_oTfAdbzN;}vhdG?Yf;n*UHVNtdG_N@uT?k>QV9n2#p9iPd 
zYU_A%5&~}i?*10?)IfKFan=`7@tasIHux=Z@!MgGUTwDxb>1>8(||4g#};kC&R<}@ z^pXw(MQhuqGUgEidt9)BZ0pEwEvSrGZ90VCC#qtel@uGXy>lx6!@i$C9w_uAJAtPL z=g#4-`$%8k$P#HHV7rz4E>ibUj97!aVMkMTzpISo7%*Vgx&O>>^goBz3s$0(ex@{N z(8e-ytQ}au)1@yNqg^CTuc#o5Y{?PXl~0Js-&jLHHm)&UvI8^cBn+{v2kVabq7f@G zAk1N#X6N}~Dee(9!?p~eW|ax+f6>{Sbk9I~X(HJ0`FUxYFq-jJ&mY0G}2N77O) ze|Te!cTkkM`}Yb@h6bjof@pHsaIG*6gJ(3I24AtXR7V8sX&wCFmvV?XEOZglD`qK@ zFj?%{UjDOq{BLV|;KXBKL{QgsAM~a0P#%>RElfM&oEuTpoy}bz43LoGl`^3fRzDcqYc+9o?evQauTYaHGt;Y{F^TO#irK@TtREu^IsV3y5OmyRLz&Y z`qYZ8#6Gnl(k6Sdw|=HHoOpne|J%J-<7=g6NmcfFD)*77}u7wU3= z{2h%SwgkAZMGi9pfw-AjMl@Xa+*p~*DsjJUc}KsSf1#QDa+tElis$40#Y`*EAcDCK z=(WLSOo=M`^sp5h8{kgK>XlXr1jaJNH12X7GPRQ2q|s=-EBLeXg0n!P)9bZxaX;)D zw#N}KR<`&=jB$GjyIoQIHHiUlFyO3eu+rG|vB1wS&#NBAO}* zhSmj3L_CljDQl>D+aEVZ7+WkpE;xClf0I9q}A@CLY_I_@O^Q8zu6Q9{z<#=W3Thmv|Epv zyq@-3bN=8ljJ-up@0@-cM4T$^-;G>kZqXi>h*V)+Kev?>Su;o*p8G}$9wR=SbD%Bz zkws)r40Hs=z5HV6r<)GAxmJ|?g0gfK8q}8?N=GhzUNB!t8WO@C_W5dnE4cJk#rcjN zHS693YR^t!P`HF;UOH${_|dd)V@zm`O-u;DsGoZL!rO_&j0X+(8Qnu@62iT|syfVl zoi3>`RoG~#Cbtf_-lB+9xOyQ%HiwnAZ>gW#?9E}0&FDB?WqB?BD)Oy(OAswQs-mz^ zz~!rj+}C$6RbyROd8C}opS-3tlk+-FbQKm_PVshZlEh2#Yn1RUcI~ke#Q+&8-nGs4 ziVaP-Rb;dQkA5%SjDj1#g<0nz^M*Lc)D+G1`T03}ccvF;D0^!LKy3*7;tq#4mN@!E zBjnp^&oYm-E|+`wNqifnB=vf#Vm!%#yvZELh<&cpH4)Y#-YASec2c7rr|31nK#t4o zHr&jdDJG!s0cm#>W;dP|n3}xF7ZPdPIvfHG75?hsSoETJ>cK7CjJ(V1;)xU7XOOG) z>;|~K^~K?ARv*h>B}g=F^zIc`&mZs`R_9HA-%;Y*ThWs3?X){0IUYq_M$#?9^gLv6 z8LQ8)3$Iz=^?v91$a!crK)o{nlg2WA$E?91YAQT58W^J`{CH z$!qBbxo2Rh1A(`li8O#4n7ipy?(=L(JTXNzV#)1LCGMVJ`HNx44!Vq<2v<60sq|Fh zOsXs@bwLst(fZ+obsn-gia2TId~0BH0Pl#ZK7unS9bzM&%N{SLB+NW?^#x`!1rDsA zAjCb;Vl#opl(~IYypQuJALb?l#X2lgwMOXt8NE&80>^_4IejG_7w364J_T7yFRgVB za6B}G+IMCokgJ(j+JNJ!NnRgVE+2mqDt_o?m^0*~W8vW1NtVzIAeUByYUA>CI-ZHV z^>Pn$N^`}crlms%TUr7`KGa~lxz!xuGn}=TyiMVO!Uvv61W=J%71lQ1+tx>=jiVKe zPv0cqj)?ah(>-(;#P?xUcxJP6Ds6T-@szQtxc>n>j>YAaE(>{GPe}Kud8VvWv6YwG zGE~$efmU~Pr|YSdJn$0$5cHqO ztjIuJYTugPTJpf1-P#aE>QRO?#r50MZM1nejN~>xRqf4|!2NL`h1cqGfNHK4x)&L3 
zX5_PAJV0+b2@WxiHg&g=n;|~B&s+&Yvbkr`B((9)z6 z7-OcHOz8{tJ_9S%@%FX1A3JdFPE&-C2hRG%3TUtCcPgB@U#3Jpl((6JEddR6^yK|g zhaW+O^8-d@&vCXG*kcWU^a^V>YWVAgy6BO|(YX=Ody)1;+X&6WWTX5_S=F%exF4a& z4aPnDm=U1O>KjVAi(~lg|M{j2JV1?9aO9RT5lO^!AKDVY~8nBPV4U zhhKslO!XZh@wGFxTjse{t+EHW89g{SkNm9HcYAede4nA#=770R@!DKyeQ^hgZ$KR- zH#iabW&0VnL$KsJ?iE8bS-d=(|9Qmq*n2vaqBa)Y6W6Wp7|4!WVBt@!=&tYjRL)-* zFGZk1See?Dtlksy%ON7QX~FM!NYMsSs;~JVOU(uBpd_*Y1NIm9Us>G$yBAeH?MAM! z6CO+k&FU2Bh6eH|-Y)#+hS^;m5vCJaGx5Dsw#2;mFeEwg1N98ym0Rsdbbam_Lg=yW zF{nt#Ymx>YW2c0)<70Zo=#<#e4a0h^jiureVtqSXv&=pKQnf%nO6~-H z=Bsb3NHCmv5wdHk_PQKdNXMqgb2CT9_h~8hq#?;xlp*T!h27WD{E~eu-m`SE95s4zFu=Uu0{9x zk;ec~K9m2PbY$5~IAt;GP{%Tl=ZoSiPFry?F(IV}W-ds#iWJZ^yx*`)I}}MzZY5>D&<47?4cEoX~}RUaLZh zskt(WPQDOsWF_jk*JR)|%ggFq9p$Lm7=Bdsr_vO*C8a}38z0|~%crZmdAk_4(;0+n zSC5ZBl5=t_Mw#g{W9&N`fNqM06@LuyL`!%1K#yf0)Dp)+OCQbVMc+0qq(n{qbQ{}M z@46g7F)NBK=SWX3Tz)Uy9_5=NsGZ_8nil#E8!+iYa2g}cw0uHE6 z5XI4g9H`zluu%YgaA`@Tz>kP(jvFxX83FQC{5I5YR$&N~|5ZF#srC$7{059k_f2m^Af-lTKxLHKZ*`z~FMuRo>n!4CLAjg3fb6N+ zb0JQciqT_J88^8`U~U;8kBZflI6kiC%w>EZf!n>)bUTezgm*gY=6b%v<~k`B4q4jr zz^rOXrxnZ)zpnRCq8d&ldh_8Qj-i)8}*LA#k z$z17kK7>H1c0Z$M49I!AfM*(?|3(X5e>(8-`=RwrIbf#2(ALzX=&Rlv8bc)+eWpHN z`bUME65CkF9(I^{&E;av0z1=;YVxSxEG!TpT@xzuP(32EfN0vEm+dq4ywNGKB+u(o&?Wm}qoy$cENR3d*ahg)>(o|z>m+sc4(^1V;6Cjo5 z^;pozNscquGLT$SDK{4Us{AUFm!3Og0uEi6x9JPX$RQgCEo*;Lqx0Sv=23dj7ytdZ zo;kRuPO|X)HSQsfSBu3h7iI3WH^omero~Fs`PW1w9(MB1{{t?rJ+tdvJ=CW1Ph*X# zdn?0dsUrt@DoHfNKUUjHH{PE)a9Ss3A}z;!y}6u_r?t<;d+i)4D8#`03AWvU(oyBo zyS?Nuf@cqh=oOe0y%IU6OBJZpyS-I}xm}y^^^n_;t4`(iTwr|(x>Uugo2PUvDMMmr z7e&zaI+kS0exL@G zw$y2YSSb2O*)KE#kCD-jQti&Ivyy9e$ofmKISXpW!mT6+)WoMVbO!bK`}NVdBY9`Y zfVnzNt8mBLEWNf{y88VtAoB^r_n41mh=q@ZnoUd{&x)Nj6nC@2UUN}L1e}dRfNime zNGpS( zsX`Nf^frI5j601yR|G}15rJHF!jqW1uHT_D68=my8WAB50k2tKHK1n$R42y|o=^yL z9&(9MEXdh_cu1X6x82^E=e`>&{k-t!rFr)R=BZ%^p*g9@88#cxyaEw(d#x3R9K3P# zPR^f@=c3;$ZIJyhw&EwO@Ezho(Iq1y2f@lnne|c^#%Rm28;Of_;tTv%Sx}j6f|(%r zx7-FlUGlqt))TcUtnqyI*!)Ui(X^9=T5)4JB-R30iw0&SQ$A1v{4bakfO#_i2h0<4 
zR)w7e3LCp;AB{}^>Q)!p<6_n>dKw{+ zYm2(^d*MH#szLx=+Q|H8E(Xxfa>HTJ11HHt$3oyZ4A8`nex?SP-z zT5T8ptH94lIaky{!oC*@zatXg({v9#eUkF^nF>4@`bdyXFbi`JPHT#av7k2!xN+2` zT}q@4d9=f@JF!Je;4fO>r-XhxLY6cz&nB>6(SZ12mz&TuB}sjWf;rt>J}0y(FM_Ro ze->$;Z+i2q>@V%k|01Cl5NfQl`aYVIM+YIKFcHx_K4rgPOl&OLbpDwY+!5Mf6>1oe zU#VgHFW|hgCy7^M9U=SmPzQR)nPcTH`#ul4$zut+-rT1mP)L z5__|$v`x;X=*HLxx6 zj-mSsPKYLR1Z%|9toGY|(6V6lhuu_pOadrfen_Bj+ZQQ#R(fG>n{fMM_9}YzL8(SQ zOw$zW(X6d&KjN`}tmj_Z%J_~g!C?$aj!p}&@_Sry`-mI`a&1AJ;U9Bj>$IA;?4Wq+$NUjg)>T5)c&O0e*< zZZ8RXK6ENKn@@DGS%kTa%SCu6LejTGeiTBS@ABI@209G>(PpTz+z%5PJLqZ01q+~X z(ysZ_hn+Wyi=|T0qc^050y4%D7OtrzN(^RTzzg}>y5dA_^A@u0*O@g7Jo??cl9+$U z8Nk7MtKKedDA^%Ke^3qxo;?96tm!9tmvc@pEvU*Wf<1wvZ+Wp=LT|lz9cjJ3s3x8@GbJUZSS>x9oZ3hRDhDz~4#p^u;88tr!vL13dvLVD7>l`)V-{55X___D zZZ6H=zrcdJnl_m#5U@BY;(-tnpxxqajqR|90^ddQfh;FeuPL=hJow|2U^p`OSMTJ< zuI(rd4(%7LF|fPB>6vY4`Uw14`C2fkeO|a-_f{rRUDnv*(f@JoROSD<)BopA|NFVq zzbRFVqtuX^Fp>+D}6xxY9f zfDgwQ35(b84_OQtq*YCBn$ijnmk6{7G>`HZI+I1hG<3X$o=bM9wVig0|80)L^U5pK zw9w$IV@;c^@lC}1z(^@;&#~0kkt4{DgLtm_ZD`&(m1M~x_$U?db6H(SEkniP-v0pB Cc$NqN literal 0 HcmV?d00001 diff --git a/Documentation/README_images/CourseraScraper_LoginPostCaptcha.png b/Documentation/README_images/CourseraScraper_LoginPostCaptcha.png new file mode 100644 index 0000000000000000000000000000000000000000..9611e853ce2821425ee5a3fed7c174674e4ba9a0 GIT binary patch literal 24370 zcmZ_01z23omNiU(;7)M&;2H?7!7WH|cXy|eV8Pvk1_|!oxVuAehu{`ytl@9+-udRv z%>ChkbLcu{yLN3^s}AAHic+XZgh)_OP^dD}5-LzoFrkop7X&!S@4h}_J}9U+x>n-i z$}-~Oe#gnilr$g^wmDslI|Pk+09MooY|#w zem}o$q0=14nr^@|bPSWeZJe+?NhpFv%Yor&_*s2STs(;YER0<-*;g|hPNHZ_9!YGW z-uD|}_n6;Yd3S+5cnC9CD`V`zBUHg5`ehz=6JYPX@WIjV?Vs;l?Rj5ul*oU_rq^?%qJYp~9@7VE^7jLD534kX1?ug@T7%u^>OnxzPVr3KNKHiDFz3d-c-4$fxe+$?M?Y?MMs zb1o@TaM5>|E)sX~S(#LoUf z;BPnor{uo}`cF-D7c*yZ2RlebSE2u2mVZ_L_rm{O@o$%!|J@}o8^^zQ`EMovR`YcV zd`iw%klu`5Whlfh!1{ka``7aVtgiz8H-Z1NntwlqET<5X0PFu#4Iw1x<~xWoia^Oo zh^l$MIn9FquJ`F?kgh$=!-m~(!Tow}6Say`oex#mk@ig>;2Z)!jdmeVP44RHaLDIjy)fHA61^M 
z&Bz7_))yI`RJJe1&F!MAPRE|1$~HpPuImZaUnG5H6H!?WQJ4{wjsWSXU>km z;>Tm2o0CD_8TosPXra5f%VgGm?)Kls696efLAV))2cYAYSd(ElG>`wYum2utn=FwG zYEz;9hfbphWAi)!^SOyr|Clv_1^ySN$~3Lt#VMegW@Gc@ffmPUN*@2)@`Uwo$STqR zo)VP6R(J7C#`aL7U~_K+kA8<+KUYGnI{A>nUT%;JB1M02hZ^MS`Pm0!giT>Z%=VjeKy(6ZjXHni2d*5wD*b!a)K4s<a}Fbn>)(s`6@#v~^UQ4e;kQldeCPIKN!9l%x`(d$bzvL-f{Gh(WMfM@Nb)$U zp>yH+D6NZ!LR7HkdGH|z&Q2PzF_SBd%V{U)3oh|6*@09bn#;=7%ZsqgvN{G&+HSdA ze`?`5jbvRd7*<`S+Yxe8EmCySYjbzeYjbsrM5F}AZE~0n#Zmg0EWvh?uC=w@ENZVX zW7%zu5pb>wAi4%(419Wm)l@n=+7JR<_DHTJc6s}>;X{W8P#7Zo&kGkBv~BU%Q@4{T z(J$t1Nz5X8x>b8AE?I3JE=s~6U?#gw(eY}F{T^<~S0=phyo8?k-q1FR+5_u_axMFT zKZkzdS+&-cUI{ecB+;(pI9$ZLFxX&G-u(L|11}Iw49f?Z_hhy>_R+J)6Lj`yqHD1< zGyADN?3)4d$g^llj2II9P{K58k)qpdk-QUfzb=g&k~nH!aoSP!bB0?^O{3JmH~C*3 z1e%KlFbApb+D2dI_@6zvoo{_1{ZMLeW+`}Jvmt{9==wZKD!69I6~bxJntpYy(#cxT zyuc-0UpagOTeB_-Q;XkPEBXvj4D82H7 zdADzRf@JwoFa$$TX;(I`$M~T?Ys281Fh!!*ZuReV`~RBZ|Bz$~1GIsB=05Il_YZ*C zB!v#b%5_%HGvF~|!EqsB!@0~r7U?s}hae$%>tfSteV?DyP9^&@lsx}yJs?fzCR28m z%O3iGd%9svEOQBmm)kG@$Xo!{)skB7X)KG-u)zrnUXF6 z^(6A6I(>?$ENTHhSei6zDNQ5Kat~%HX#?h7Mmj|olh>^&(t^PDi-gbhr^CjDxQ-0G zazsHgksrOs7|V!<4cJt_asNY?owL{f*JX&V;(E z7~=Vgdo)$@!~9;RDk;F9Nu^aNLSGC3ZfDV?Ldr<}V zSv#L6wzFOg7y9ou(~Gk7zVqMMeiR>#c@#m2h1*mm*@ zCACqvxE+0*V1{7y^&lde&_C?KI?-_*6D0h1PBK1NQ!US?DyZ_<{hYfoiU1K2q~Wv( z^H5mvvAC&D|J$O-7fk)AK4$mNQXiduY=yZm`K-=y0B!GXZU0Ko1S1zHuwlQ{FL%?N0Hqwxxb1>S8(X8(*z zxbQ)gbp6`=7XE2wvb<%0&^~5U93CM&a0+5{P^9T6`>HYo9W~8WhUQj2&+v>dS;nF z_e^;28+l7AI{TA49#M;S^7p8kr3?EGM!I!@%b=oP`IfMw(jc;2aM!%5a7!W`yHx9F zZg=LQaIa%J#r~kT8XWU47iJ6hDq{+@+_GLjpAkS`iILq>vTLiWtNtmFd+A!j%gg1hxjKSd+aPxXJemx@* zp7rlN6-WmtJZ{elQ1T9Hy>wQ$ff{Dq|QX0g3fbtj}$k2$QN; zcryK+Wy|kp<^FOt!1#W+1uZ8A3Gglq>&5u6Clbs0s3iQa0PGOoJorCEs67r)(3$NXukpJn(HGLPg`;t8e>M-C}=Pm_U1QU9|nMm*P+NfCq=!}ZA$c=-Kz zsE|o68Exrtu;mGXf^o#JZokH&hmmHld76#Esm$-h`tMF}KoH4~Qzq(PXZ#k5^!SsU zxc-M_`V`^xml{0euVMQ|NrJnGpI=b!F3G6en`bY6yZ*NQ{gb>X?T1~EsmK>ib`FIT zu|NTGqwG2T;cy`rnv+F+pK!sOi#GlcR369@`>;E5Mj~pbM>)h 
zo*k8REsd^&1p=s+^@qs!q|*y}Oq2rx78$F6DopUFtJhZv%&JES`N1)d&c}@QmrO=^}wY<>NSW{|U!bX$KS)8M|G4h9!Udtkr74xD& zcAIusDqd_l(Q9}J1)p5#B#9`{fQK~3_q*5) z5&Afcmz$?`hds)pDWKked{Sd5l${Wm6QJ`e#zyGS;J2`^KVzAURdIt}0qZFxpdTuI zb*{H#&cLLyeu%^>_ZauJSh+>Q@9F7)X~LYGe{BvStR|9-$%F_ID%LVD+(f-r=^jLiQa(uRBKR0} zu>rTqia(MBFGXA*Mz20|aj)8F*0|buS~4bE1_xc_(=v;tUC7bB?Vc6DgeM{H`^J5L ze~=UVNoOXiI-c!R)_%UylspjT2ByRZwC2%{N}bRV`z&%<2t)fqA%pbw@-7S><`1?? zm6#vBNe|$5$s!U!Iq9{vh4=Im?@j|6d5IAxJHa8^plUa)H6brpCd>=YEuQ77ZIp^wv;jUK_e83=vj{5Dp>vtxEEP5EUZ+Xkp6l}B6 zKhyo={P|2uCMjgpIE4x{S=iN}OZ`kVR+EqwdAItv**O1()1{&Tp`?^U&+J@`^;u~( zFX+xawq%nScwDZ;AA7peySZ-TFyU4TPY}ZwH zeWA(U!i1x9Oi<)12)^z?X1!~6gux6+vnkez^TYFo_8$rbfMVt%=3=G1@i|2Cu`TvLvC<#4N}9Vy zOB3Dw#x_$EBt4*I=_u&0v)+bZ*GZwix?)Ml%|v=37(SsTp^I< zhnXkbS+>=ka$BMI64Ah-=Os?xU5NWkkizpZ&jY=}u43zF&+70LKje%Vi{ZPm=Cs=V zviW(qZIdR{;|ac56zsmzlp)4|N5G|~n4@Mvqj+IKarn78VoH<6l%eu~4|_uH9JlEl zZp5wRy@>{!M79UtJwg@Ihy2+X1Bzl$(EdhkM$MBRMFrx74vEui%+YCYBp%P5j~EYG zX_8imR8km`wHC~pb`!-T>a#H=ne<8ml}bFv*Z&5i3jt8z#LA+fqSWWWj-+9|(=x|i zB(i0%WrZwVWbjPvAcDznT0euuvW7$iWP=rR9grUwi->sE$vYlR%lI~I>O$0;PcemC z6z)>rl@8?k>x4OgICMX+?Sd4CKFf%d7ZT3Ph%ic7lj!q0Wf1+upNiWuxuU)8)%OA_HUv=)7cg0hzU zt7>x?fzxGG`_E5ZFqQfy2xeu8adR@qr!o}WD1`X7Y(HcNdqtoOxPFZxl^!K3G9_Km z1L*`bIkwnUOJL(KloJ_<(0XOp7Wsf`y5<)D^2|x%*|mK%L>gn++=<;EG6;rz|q@7b={OZ1g+CTzbYsNtVM7jq~xaV)tuRn0nbx|E@74#?ys8($wx4m|9^>PNXSB z{S$B#Z+O~blEwcm|KU9k!T!WQIesoTLN2PwAsw~{-|zzdjHD-q0^A>r$wsEp;p}KM zF=@>r>{-isj`Ijc>W0{XxwaC%otnr^Hezi1x6!Yr!o$-^UiQ|ykZ7wvIwgK%CP}k3#oU|8{%@6(Pq0U+ z3DyfR(7y@vF3J`EjBu#Nx4u{+9p93aHt6?a^!PM1{S$@$UD>}Q`-}H<&3x+0d2CNx zWlqO$*eM0AtG5p*t3}*seX-k|m;)~K4OuYOGHpSn^m@{gk&4o- zwBdAGSODxpVZoK|SvRLA&)+cne*-_31PGqSztK}ZqmmzfrFa+*(mjgFvJycGsz0U$ zGto08jCED45&kDks$Kw=kclRm2K{R>wNlj)3w4ln@uC1_iQnnp(C>ePSW_GrKMiCF zWl-ck9XEtnXVUmsB1i5jp9#|U<@^(q{x5Nel!LdxOkw;%4;Ktg#CQ`r*DtuM<0g`WVM~*_PhTR ztd#qKa%#Okq@v~*)A;`y)@z${aBn^uJ#KHYHz3Z6{b#X$Lg+%YqqkyYV3RBTKX4vC zv_fD21O^h#H>xjENNX!1T1;=m5+C9X2}b$3HB( ziie^g>hge3c{(mpL4~YQrSFjTw&< 
z9vwS;Zbxd-pr1KK2%LnUi8xLct8!KhgHTt8;;Bf^KYdBjvFB~t3MFdI6A$xdWCSsZ z$q5}a@GGbb*O}FuMrh|Cx?vL6b!w8>9k$p+ID-I3rDd%R3)-WXXia8$vA;ZbqDjCm zF0v*ot^4w2{bmSEatQVN9*Om{*=}d6Ug4B-OUzr%I7D}a1?BYMk!g+Qjvy(P_+%En z)@GAf!3FRZy1ye`wx{v9^b$+k$u_C)g=o`G3_tG9$wp^q?9Rr^a})p75LM*<>gMxV z+qrw^^Gy{%TUynHDX7p{B+h-!Z4vIUg;fW-<*2fO0DQc-j?yQmo%t#GQD$VUg*4*Cz*Xt3|msQ-aG4$*_71ULF>E9Q>{` zzm9RRZWXQm*;cHVC#9+`>&t1yQ839P?DV>)TD9(-<6@T6m?psRLYy#p5 z%6nM-E+fC9a3SXx((8I|g*=laAVz+*L`FWY>hv?3SJ!B8u5xI*I*N$1ebHZaU$K(| z;x&}#a9AblNP1b>DC|S_fVi$O`91Izu@ z_O@h;zqrz0TAlK@3_qG4i}0SCpJXjKs<GJbWNFZl{T+W_=fP=R=WwA~?q5rR5Y$Ceele99Xwv9LWrn+zx~%y)b`3S0LmJ z8H>3+oE?Nfehc6|R(~gmJUdstk1Gz`Hvk@VB>6hcj(Lv|Bvue+&>@yo5D2#$gf8@c z;-j&U%>t7LBzb6i&3V@#kRLwI+X~7p!wGdb>UA{c^#Ym1?}+yUkv}AW(C4{eGg7kP}Cj z$7@8NBO^6BEYCO!6aNZo9t1u?^1np=Q99Tz8+q2|P3Uvl8&=j1L~L452S~16`}hp% zJWqw{gusA`a7^~g^;Rv-udtOOLNf??L5d7euTyXNnH@3dl+@>BI@flZno~Tuh&)C3 zNQyx|W%;z_Alp7sf!oPaiM|OU>RA9J(9#czYP5xzSeLS@j#eskVep~S+E_ZLp8ma2 zwg#OhcZ!7+u%x!%pqvGjOlY+@Y!BWrBn5p8GB5vDh;A*YCkh{tohM?5yi$9e^2jKx zt-mP^Sohw#i`jpdFS@I}|1+xuRzFtPVF3a8G|LNkdFll>Ls5vh#}MP)&w`dduRd| zA|Cg9sr=g!H;jY|`Z}qgn9KE9i#^C+_9xoMe81Gph~(Nb3Xcp)%_}POW`O2Z6*T8d zuE#D7=a}H(-QmO!X%Ut5W3q4G`S664?>6R?^9|YP5SYxQ97Nx!p3@$h`XZu5Wg6QN zgRvMf$pC*ogK}P;ibWS{-OF`>heeqJ=hHZP5T2m?M-?3F0KhR{_JpP5&Ug}@P=Z*1 ztBClk;YjOEUz}u<-Xq$~U8BeGsSA+k@yO)ther0*ZpfIK34_aWVf=f;!^pKITwRi)1pCe_t1tV(MYhzkUV{H5KiOvT?`S=gvQBEtnm7v_;-{u zauN)2b-zcuL1#P!2H5zKJ}A4h)fS-&3@rpApGKDUKPpNHTA}c7g!YnDSEC?Y z9;EhJ?+jiz*YUIP?c8^dbuT}WVBssk+-IqK&+r^k5^8vkyUrKz#xU_fj36MOZ;*C6nR1{l#x;S20u=IN5~!*l6v%nneg-AbY{*ikXu-8^Ov# z!Ta5dO=ZhKh*{hwah+9yp!6+Q4MV{gy98+$7TwJwBWS| z?=@P6%_ZRF&I-H`tF^smBl=*SMD|D6hPchj+c zyBnfMY+Ex>lm$FW%hoNNP~&AmUYb97vl#5&p|X*{${|p2mGH%G0G$^dO*Hr- zG{OtdDJ`Sm+?Gt!Qg~U(v}YDBmjl^U&x?wf&$SiONoc?&DASRu?_GkR`l_}tiINTWlM&(3t5d?$FVjMoEbMvoWf8Kss`LT|{$Mwc9z?o3 z&}BGX_C*b;>y)UUSkyBgy3GiBc!OKqJn%VCNT|xdNFb4}!~;c`xy1JRDo*yGc8X;$ z`Lo=pkN&;-Bk7?gH|uJW7pXbDur;0!k5%mTx|LRVf{j#~mmNt#`CKy;Y4gNo-48LYE>aVv 
z*Rf~Vg^f`gk=x_DGSQy)bT9ks=UMMQgwE8C+OCOFd^=scxM_0RS@&ER=HMK4 znI_l>BlR8vU!`{b`N9_@zcaj51-!WTW$jVt!YGMF%6N8EY%xVu3Vrf6P5 z0xqe8?=CTP&b57&zme^TNZyg))*YifkcTp0u)%4l$(nD=Ad|2V67bP-eYNzS65Eb> zlZAGqd0sA7n16*uMt!{IvEgQLKcXSPE`p3iZ)%?N<_)x{x=}1XY&3wbh5*@3>fqHA zcq-5u?T4p`GYQQT$4h?0>mx#CLzR24rJEKQ`jHZ&w>;M^(9>)*1z32MiboryFHKR$ z`V@EaJY!2ble)?Rlkn)R{61~a#F!*+5a*?7QuJA}; zi?YVKs!r=lm;?@s@rT+PbD`ICnz7YzFU`C;#PIRDY0M^ij2G2?XRC8d1(6Jo8kCZt z%qA=vN7@jCRg~Uw)EkJ*tYndB;LRC&Ps~2uYwr9Jd21Nu3S|s-Z+(bM72Lj>kG>5 z ziIg}#|0|kIcs&f~F}1Xf9aKpkRaN{%?xVlEp4i#tX1J$N9GU5T*%$Zde%iv^>_Pg_ z?(L>4dm^P$DpAWO}QxdOw{SYh1MMCdd&J z3Z|^bE9#~2JFElKQWMO_8p*d{VH1(_`mO@tn$+$Kk0i!e)EhIh@y#tW)2Z`5i%iAT z>VEyBo+*5>3kc}?c@Y!98%=(W?v~6)l!$MB#F7Hv5UHo5(vm8RoJdekt=uhW)@vm6 zi#@(qo{W(Ke^X`O(nwCGFs+6OD?Ay~yr68RQEE0VH3jHVx9nZv#(HCKongGF$v29~ z`NczZPhXKAUrvr`T&qr#Ki(}7KwA+f`arxCcF=^Yyr4~|0v2b=+B3wJvuN1M9#^`P zCmP%>nkY@wFds>WHJM5#T*Xji%Br`D!qgn-hl&KfQ;{<;i0M0$;{}V488CDd{R~pj zYODbktS2|FvotgU2S=2~t-+*wH<|QfT+Aglh3A?KObwBHf;aX@zuryc+f|45Cx^R# zVFKRoXJ!kenEYiH8<9|{Wl4E?`fT_JV%2T%1~S{Lnu*DSXZMQ5t)vZ4grcOEopC0y z@W0g!UPMj`JH{HknDBVGlMwhqAgq*+qS&#m{;0;TlY$^i&3;BXa2oTcouP!dUUcEr zs;m$RHXB(AvDGwYliY#w23&pixXsX*FfITXOS5-Ds{{KolZ%v~NLhAfpwn9(^7*3l5ot z*4w7MK1ak;6HpTKDL(HR;TFZyyulE^Qnq_0XFO^8)YgxJw^hdjr}EQEr0jb|P{stCpk2x8ut6!B zGoP`%=JVhQ-OU&HhGr8pQQkgf_Sg^-&RI4Sv!v3tR$HBh9(kzTJaigXX0K*7ovP82 zHlx;BP>pK}wz>Y=_$b-AOo=;NN{X$w$z+YY*jlpn@f2ZgQiM=KG6Jw|zMzhH%=oap zarJ}0m&P)HYugg^+r}$LlTh+eEdqaFMn=GrD4p7T{;dft*ZmRhHsHjdL;Lyq6zne5K_2m8*>!jR1m-qRF6UJ zM=eB8#KGw8QmYE*h5++Uf53<)=0o!u(<5zFM^>o3=Ul)A8ikwHJrHV@7X~u9l;aDRjU$9a>%Q1kQXRvyqPRY+1 zPlGy$R*J+YwfxhWIz+C7o00WNZEgHobwmKi z{0N}5$2yN@-KN8!_v(BW6fHGt8e_PlYJZ%Y26Xkoi0NqA@u{XA*8bGw%gh&~B3-t8 z;CYby@+RH@ffXR}>VSmH&9?+i3o#k}Y7Yqest5G43)X7{gbT+q^U=6HmWqxHk?LMO zP9u=B($sexmq>6``Pi}nqAaYSjGpp@#t*m#Ct3Rd5o32kNv87>4^?pgMIV_!@Femf z5~*8GZBQinosn3GN0@sYcPB(qcv2>cnBb#%fDf30(GZN_`%#e+@c}NgZn;su6K&m? 
zcfK`^69W^TSsfjn2gmQPR-9AN_;5F_#ADt+1Osbebl>+4D-n6r+m)ZID_}}QlZ~o= z^bNX8Qvlv9=m6MIS`iYPF}F?Jp+P5Fxz!k`8^W+JQ}3M(QAsf6bLxYO6G~NSO8Kt4 zHxs%acn`ks@HFkjzG*v~H{!*Uvn0T9tR;MgIOz?XhRjazsRLN%)6ZX*qkiH50`kE- zvl5m^w$XUtUDq2#3z06d>-ez9fs$L>&EhLgD8t;5b*x)IeqANUP>>I)0 zjPbTPoFk~ZNepW@vQhEUr?hI3t0Re#SLZ`Y=6QiGU-LuYK3+eYy^7mk_BL(!hdvv^ z4mput=zud4t#Vp>m?o7gc`GY-D1rUeRRBr;Epx-_Xi1P8uhm`7K#IFG{KsD$h<$x{ zr#KQUDITjOyr5)heS*)5UUulGsE-^UjnYAZY@VQcf$N!sp0{n*M{cEO3}_job{e(& z59Nz$X@n{JIKpX|?hO9Tqc-paK+ERl$nfPbH~qE3x2JM42byFW#l_DAqv{#Yy@3_O)+At})Y9LBbogYNsPzfcmEv6egs z!cfMg2`gBfEWdNBrXNW`y!i!+Ovf+@k>mq*eN zqa>dk4Fi=v<(0WPeq5hoHzwk`*$R^yV4W*)?iW`m-Aewi(gD!~Et#bnoVU~NByaI_ z!(kB08fNcI4*M!IoTu0rJ0b?g{RgXE7SGB#b+_r{s*v)h8Kp&WOdH$DH+dYXM(dDz z%8|w15vE-;jdG>3#rA)$OphD<$}~)m;KVRMv&{3dhB=+~ZkL83BbA8uaChY!$56}z zjqV{Va=wawk3NIR83?tThV{kw0Ga<#6OdM*qe#y@L_Rgx&vP=O(Ag^2Uu^8 zlCfDek?u=AZq!`K&4(xHrA??Ug`SPHJIFtqk|#bY$Q;F3iXsOa*lU-%EzXPxo`Xxd z{T23DtD=k{@h5qfc%X-N&S-lI!w<+3Hn{O5?_2jWC3k8!1no6qL$$S7dpBJ$=+C_imd}Z)pO149F={A z&+s-{jlN~-w}Q-<*xQRxi$K1#p7O#bIXn7u4hr2(J%q{1Ux-N$7rbiv>6LoOY;DeCRJx-n`6Tf`@Z^AFU^?;>PMZ zY=${Bdlu5)xb=SEFC$pq+xkjmPAYC&fYDwQwynI2uCrgEuH_=O%+IKrLTsj_91 z)5EbR{KfA`NS#i_Jz6GqAgi65*1rT$HRq*RV~1vCwMW_^J|6#Ky)*P0@6XfpCJ$(p zMwOO?&EL@{pZz1)rl7**a7~@Tgfo@V)-o9jq&k&D`Fm#dFL~{i5{TmrEwP1(>Kv^h zU)A}&;H~?sS#Ug#NEr8cCOhbl3`>2D(?YC9AIv@}@g``%tEM&laW7%sBeBz2Z^J0V z-s_Osn~P4%1U_K7=u~P!WLX!vr?(^5p2;rITPMwh(q42jr=Y59a)`1+UrZd4Qx=C~ zTZ`gS<53jqIi^3~zv_S2hSsAvX+*DXO@M%kPUaho;rtW&?Lq=g5I&B}mbk}Sl zKx-QW$A9vAyLDB!&w7)BV`4+Uk`ki5qNt#srHQ-#0|;z7#i|nUSd%>8-S@!EUMOsT z>hrg_ce(-fpzvOQ9-p_@??%B|&acWCC zrzy@)2IOgsd|I{BJbzrO2GGLWrfJ|KSk&tIf?(`F(^cAZ(9vMsklzFf|C!A)gN?#x z87*}1`q!yg%g+e)se3|C89IC{As;jOV{vo-Ayq^O-%y2G`4ZZVb_)?X0BaIW!qyb| zPj~&DPwap6R=7cBD59-fSs(tYF}d_fCd4DNJn%^%exMqBS60(&cS{FPi*wpbG#*3@ zK6Oel*%3cBZ(pqE!pXMfBKu&irsWk;_!i|RGtca;amc2 zwKmr1pUe}46xAif-i=wlRMmue3jmoX4cV&)x1-;FA5En{e9j>I7n8^xEaDry7Ru3F zU+28)G}w};$TvI;B40!IyMXbZbdkeWuu#P$7(ag{L4++?H|+jM`X>E>fU-_J8@_dq 
z5G3+Usi+M?7CePlmsPfwdOU6hp>sE{LlS91-Jr7lH$ARLCB>4$S1VoO*g}@t8Ay{5 zID1X$V^U%W8}sIP&4RDQ!w^+0(_mXm2~?oHSTieOJ@9x2!T4jd99~molIn-rFO%74 z0Sq#qK3t~fEK^mdV?1UR5SL+cMW;%GkF|}!$MZFb{B18?*7;n*|Ki2$5Z?ITMOmR` z6iGC#I?FXd(vZh9qdb%Z2E(WTwaKj^!tfLVjH8UL*6cR5Bmpf~lELm%}qc=`~JwacC<)8ja z>D%TqB%-5D>YWh!Z9;D?I0RWY=hn@R3h{`_Zv6}3p&92B_S5~>kh8;d&Q<*TN|svL zSW^2({dQeCia>|4%wy6lf&No^MzxM*8(xoZalJ%?{xOWJ~5>w8A$CVBDS1JpePVvtM zyYFToJn;7TapR?ftUxJ#0`(rk8yt4_gD!f}`A_isVpAs^!${m6Vc^kRo7TO{9(TxI zelO@a2sS;QwLP-o%1ZT1l};OygCA~OHXheOPYGfOFdMgR$CYiswO0;h7}hA?j#8dW zA$S-B2U>MeY8s}^Y}TLoJON=2*1Qg~^hb9E51w@V!8dD0VI=#S14B)O_suLDSQ09i zZm9%%+bgcK3V@98^D2LE*&~EKP>BlXBJnw?Ih&RuyV(d}(%xMyxXKX^MD+)Z1sP>&gpo%{bhpQG{JY= zYaAoE01&=kh6z& zwSEr>dj{OhzA)F2`047N;w5RYZc^xYDnj4)vL9_bP2)h-O>Ad}PnN4-kjx*b#GfGn zM$_bGW||g!JgEaW@!p$jG6#bQ6z;5wv-azm$sK@gmNYtoS*v<`?<&qfSh@XKvg!)K+)5>jDx$d$y}h2Tzi_p{lO>!(dzLia9xeun#cD8bDEw7gPfz~Qj6E_Xd z-viNu--<^{b}7c7y6dJY$8q{jXEqs>aC-uw=Um{49Rps^75nU`TdckkpeuhCl2QZp z+L|Nb(VSzE!ZWGOWO=iip9~juq&JF@_uFnlxLD}S+%o@I91nrR!Enu>gGrYkxNgom zaxYhRCwx~;A#*u;KDP<2oRg@h%fkSfqcIsW!0yAy%p_;XlY&nDl!8IC^pF8MkI(1s zFhoT#-;jz^OTO27lZFE*8)F(%MT$FKCTzA)P-+6d8R!8+zbZ?My_}J>;@8;VtK;|A zrC6dsx5&v*1fl2>=!(_ukEQRx=U8_AYU?-T*QITOoVmtLJ=^lw0D#;!?k7B+0*JSl zq~Me+X$iWvZ!>HvFHJAAJk~D{!=$qzaQ+#i!F>cB1w`$#`Hz175WwsX3`LChIV?CU*iMM==YfeRHnzBU!G@5W1JG%PLkO zAn*z~Z;dO*l#~s8Jn(9SaSh_|wnnu_V{0c<41z?c(3x~#sb&)CR&48{`R_J^1eX@Q z^X!*Yn;?4IV(_$$XFrz4ZhcZ!BVZWl9{(_E`u4^eg7&zVIENXMK(O@6GL&74nO9Z- zkoA|I7L=P;Qc&h*4uVI&H4@Ezk3Ph0ZurfdlXXljYpS_fWidU80P+q5U6m|S&1g}` zaLPfB|N3AgfH}Aks=TMsu%cms;r@A^@~jBqM=DCTKLyNLZwRsdOhe}4Q_aTl&kpeF z`x+vq({XmdV;gfH3IVfrD(3_Gg}nyW__uB$4v`IYY2ukS z6L^p1+-`ehiZ^i~BoZzz+qt5(JfBbcn5k|%IGUwYE;%oRRc6FiiR8RNzgzy?&Oh{V zZ<7IF-4H3I0p5?NO(f@}iOLlU>z#%1t7j_jn=bm*jIYM3v)Dw7LJP@=pTjNS|E!IFte{>(J1{ah?rV(1w2} z-M3dBd9gGEMj3eZT`vI%(evqYP;gkq4tklpr*Oyrq-GEZ2mfLjzW}jYpxs2JhwfvD zB|C<{9-|ajR%2=2{0e8pfv{#QnpS%^8ROf;8RpP*DD+I4~(ZGMrrh?q7?X0y7 zeFL8QrS>u}#JKva4i^=IDdoFwtc0JqbSx?5WEjrxnCT#gpRK=Ry!pmi5KwD^t4@=m 
z>aht465#B-7OW7*_vfZg%&z0QPyWRRq$cx<(np7ex(e;H;1NQ|bHeV~*;PaM`eYHe zv;=69^mGqe)TH|9&QLr*1+r{w_MMqzULe>pIXF0EK_GJ6f==D7zW4XuBJXB{#j0i-#0KKy<2k-| zlsFgS*e*|Zy>svk4Gj&7jAR|Dj|opt=WJMq`Dk!wr&b%<1x>CCBtzPX)y{mIsb>ct zudc3UIn0~nJ1(c=pfWQv!I_#1cye#`u6sotd~hcFNuBw8h?8JVEQ zMkT!N_lNG(z2K*4f}&XV+4JX**_fFn3Ft*E#O1uaydG5wEZp#N54fhRtZcUpqwu+< zDNU$DALB$ogrtf55>PyDZf2GxMD1_ZnE&J%Iy1AOq^tWpKfgMp4a?GANIjG51C@bG z_#%H^+ztrBNnFRD0l$60)-}RJ&Rr$4JuZ%E) zy{i;-&1{u{^x<&gaXQQidk_3`xiyVT^XI_nH|RgZ-o6L7yVA`!G>dARmm*MrQ$Q!wY)qiye#XE*gMjedaNscE9L zy>K|}o=zY1x?PsrUu`qiis(jFrlCF5smR+I-BvE2Fj{%dr6I>i{6}1Y-MjWM_K6Gj z&S+DK#Q-BTD<09tE4eNw82$;7KZ4~3KNiEKI35nI*%5N~Eqs)FU6rzRvLc4g7A-T;KM$AgqG&T@6e3A+afu^vd}UkXTEd)z4l#- zd-RBO=XdF1VS8+?6`OMG2+F-gCs^*?x#PCuyH;c$#(vdB^QKAeST{7V1p)&39oJ#M y3oa4a+Qjm!>>YQ~Lvh$bX({f`@8WVr?|UF7?BixAT!K6uRa#PBBK@+S+y4T$A}M76 literal 0 HcmV?d00001 diff --git a/README.md b/README.md index f10a0cdd47..732c5dfbc4 100644 --- a/README.md +++ b/README.md @@ -21,7 +21,6 @@ A step-by-step guide for the above is below.: cd desiredDirectory git clone https://github.com/christianopperman/CS410_Fall2023_CourseProject_TeamCAHJ.git ``` - 2. Install the appropriate ChromeDriver for your computer's enviornment from [this linke](https://googlechromelabs.github.io/chrome-for-testing/#stable), unzip it, and move the `Google Chrome for Testing` application to the `CS410__Fall2023_CourseProject_TeamCAHJ` directory created in Step 1, above. 3. Open Google Chrome. 4. Go to the Extensions page on Google Chrome by following [this link](chrome://extensions). @@ -33,4 +32,31 @@ A step-by-step guide for the above is below.: ![Screenshot of load unpacked button](/project/CS410_Fall2023_CourseProject_TeamCAHJ/Documentation/README_images/Chrome%20Extension%20Directory.png) 8. The extension should now be available to you in your Google Chrome Extensions list. 
-## Usage Instructions
\ No newline at end of file
+## Usage Instructions
+
+### Coursera Transcript Scraper
+As mentioned in [Requirements](#requirements) above, in order to scrape your own Coursera course transcripts into the extension, you will need a working version of Python that satisfies the required packages outlined in the `CourseraTranscriptScraper/requirements.txt` file.
+Once you have that, scraping a new course into ElasticSearch is very easy:
+1. Navigate to `desiredDirectory/CS410_Fall2023_CourseProject_TeamCAHJ/CourseraTranscriptScraper` in your shell.
+2. Call the course scraper script with the following command line arguments:
+```
+python scrape_coursera_course.py -c "course_url" -u "coursera_username" -p "coursera_password" [-e] [-o output_path]
+```
+* Required Arguments:
+  * -c : The link to the landing page of the Coursera course you'd like to scrape
+  * -u : The username for your Coursera account which has access to the course you'd like to scrape
+  * -p : The password for your Coursera account which has access to the course you'd like to scrape
+
+* Optional Arguments:
+  * -e : A boolean flag. If included, the script will automatically push the scraped course transcriptions to ElasticSearch after saving them to disk. If not included, the transcriptions will be saved to disk but not pushed to ElasticSearch.
+  * -o : The output path to write the transcriptions to, if you would like to save the transcriptions to a specific filename.
+
+3. Once you run the above command, a window will pop up and automatically log you into Coursera. It is likely that you will be required to complete a CAPTCHA.
+4. Once you complete the CAPTCHA, return to your shell and press Enter, as prompted.
+![Screenshot of running the Coursera course scraper from the command line](/project/CS410_Fall2023_CourseProject_TeamCAHJ/Documentation/README_images/CourseraScraper_LoginPostCaptcha.png)
+5.
The script will begin scraping, as evidenced by the pop-up window navigating between video pages in the course and the `Retrieved` messages in the shell window. +![Screenshot of running the Coursera course scraper from the command line](/project/CS410_Fall2023_CourseProject_TeamCAHJ/Documentation/README_images/CourseraScraper_SuccessfulScrapes.png) +6. The script will write any scraped transcriptions to the filepath specified by the `-o` command line argument, if present, and to `subtitles.json` if not. +7. If the `-e` flag was passed to the script, the script will automatically push the scraped course's transcriptions to ElasticSearch. + +### Chrome Extension From f48293a23f2d2be6ef1b4182bf9f496ce1f5d505 Mon Sep 17 00:00:00 2001 From: himangshu81 <145715398+himangshu81@users.noreply.github.com> Date: Sun, 10 Dec 2023 20:27:19 -0800 Subject: [PATCH 28/52] Updating ES URL and password. --- CourseraTranscriptScraper/ElasticSearchJSONWriter.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/CourseraTranscriptScraper/ElasticSearchJSONWriter.py b/CourseraTranscriptScraper/ElasticSearchJSONWriter.py index 219bdf67a5..effd79d481 100644 --- a/CourseraTranscriptScraper/ElasticSearchJSONWriter.py +++ b/CourseraTranscriptScraper/ElasticSearchJSONWriter.py @@ -13,10 +13,10 @@ class ElasticSearchJSONWriter: def __init__(self, json_path: str = "./subtitles.json"): self.url = os.environ.get( - "ES_URL", "https://search-cs410-project-hw5dhpc4jsg3m74vnbalajt754.aos.us-east-1.on.aws" + "ES_URL", "https://ac55987c83844faa90726d4e5efe92b9.us-central1.gcp.cloud.es.io" ) self.user = os.environ.get("ES_USER", "elastic") - self.password = os.environ.get("ES_PASSWORD", "CS410-project") + self.password = os.environ.get("ES_PASSWORD", "pciWclpLNdXuicUhXV8bhgk2") self.json_path = json_path self.subtitles_json = self.load_json() From b77e7bee11c359c6c7111950506a640960c51f09 Mon Sep 17 00:00:00 2001 From: himangshu81 <145715398+himangshu81@users.noreply.github.com> Date: 
Sun, 10 Dec 2023 20:36:19 -0800 Subject: [PATCH 29/52] Cleaning up to "subtitles" index. --- ChromeExtension/js/search.js | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/ChromeExtension/js/search.js b/ChromeExtension/js/search.js index 11306c6de0..5a7aea6825 100644 --- a/ChromeExtension/js/search.js +++ b/ChromeExtension/js/search.js @@ -71,7 +71,7 @@ async function search_api() { body: JSON.stringify(query_payload) }; - const response = await fetch("https://ac55987c83844faa90726d4e5efe92b9.us-central1.gcp.cloud.es.io/subtitles_4/_search", requestOptions) + const response = await fetch("https://ac55987c83844faa90726d4e5efe92b9.us-central1.gcp.cloud.es.io/subtitles/_search", requestOptions) const record = await response.json() // console.log("record ", record) if(record.hits.total.value > 0) { From 21d8a5293f36cc19ad7f12075173c62f1d857b3a Mon Sep 17 00:00:00 2001 From: Aaditya Murthy Date: Mon, 11 Dec 2023 19:12:52 -0600 Subject: [PATCH 30/52] added chat coursera working --- .gitignore | 3 +- CourseraTranscriptScraper/chat_coursera.py | 35 +++++++++---------- CourseraTranscriptScraper/chat_subtitles.json | 2 +- CourseraTranscriptScraper/requirements.txt | 2 ++ 4 files changed, 22 insertions(+), 20 deletions(-) diff --git a/.gitignore b/.gitignore index 5d368cf139..14d6faed6b 100644 --- a/.gitignore +++ b/.gitignore @@ -8,4 +8,5 @@ node_modules *.log *.csv *.pyc -*/subtitles.json \ No newline at end of file +*/subtitles.json +docs/ \ No newline at end of file diff --git a/CourseraTranscriptScraper/chat_coursera.py b/CourseraTranscriptScraper/chat_coursera.py index 8b74cf1215..099ad424d0 100644 --- a/CourseraTranscriptScraper/chat_coursera.py +++ b/CourseraTranscriptScraper/chat_coursera.py @@ -4,43 +4,42 @@ import os from langchain.document_loaders import JSONLoader from langchain.text_splitter import ( - MarkdownHeaderTextSplitter, RecursiveCharacterTextSplitter, ) -from langchain import Query - from dotenv import load_dotenv, find_dotenv _ = 
load_dotenv(find_dotenv()) # read local .env file -openai.api_key = os.environ[""] - loader = JSONLoader( file_path='./chat_subtitles.json', - jq_schema='.introduction-to-text-mining-and-analytics[].content', + jq_schema='.filler[].text', text_content=False) docs = loader.load() +r_splitter = RecursiveCharacterTextSplitter( + chunk_size=150, + chunk_overlap=0, + separators=["\n\n", "\n", "\. ", " ", ""] +) trans_docs = r_splitter.split_documents(docs) # print(trans_docs) - +from langchain.vectorstores.chroma import Chroma from langchain.embeddings.openai import OpenAIEmbeddings -import pinecone -from langchain.retrievers.self_query.base import SelfQueryRetriever - +persist_directory = 'docs/chroma/' +embedding = OpenAIEmbeddings() +vectordb = Chroma( + persist_directory=persist_directory, + embedding_function=embedding +) +vectordb.add_documents(docs) from langchain.chat_models import ChatOpenAI from langchain.chains import RetrievalQA llm = ChatOpenAI(model="gpt-4-1106-preview", temperature=0) -# qa_chain = RetrievalQA.from_chain_type(llm, retriever=retriever) +qa_chain = RetrievalQA.from_chain_type(llm, retriever=vectordb.as_retriever()) while True: question = input() - docs = retriever.get_relevant_documents(question) - for d in docs: - print(d.metadata) - # print(len(docs)) - # print(docs) - # result = qa_chain({"query": question}) - # print(result["result"]) \ No newline at end of file + result = qa_chain({"query": question}) + print(result["result"]) \ No newline at end of file diff --git a/CourseraTranscriptScraper/chat_subtitles.json b/CourseraTranscriptScraper/chat_subtitles.json index 3e596ee1a1..6d9270eebe 100644 --- a/CourseraTranscriptScraper/chat_subtitles.json +++ b/CourseraTranscriptScraper/chat_subtitles.json @@ -1,5 +1,5 @@ { - "introduction-to-text-mining-and-analytics": [ + "filler": [ { "time": "0:00", "text": "[SOUND] Hello. Welcome to the course Text Mining and Analytics. My name is ChengXiang Zhai. I have a nickname, Cheng. 
I am a professor of the Department of Computer Science at the University of Illinois at Urbana-Champaign. This course is a part of a data mining specialization offered by the University of Illinois at Urbana-Champaign. In addition to this course, there are four other courses offered by", diff --git a/CourseraTranscriptScraper/requirements.txt b/CourseraTranscriptScraper/requirements.txt index eb2c29c85c..bd96a32468 100644 --- a/CourseraTranscriptScraper/requirements.txt +++ b/CourseraTranscriptScraper/requirements.txt @@ -6,3 +6,5 @@ webdriver_manager==4.0.1 jq==1.6.0 langchain==0.0.348 openai==1.3.7 +chromadb==0.4.18 +tiktoken==0.5.2 From 97c667c3a5f458c43d1847af73ce4ada7df80327 Mon Sep 17 00:00:00 2001 From: himangshu81 <145715398+himangshu81@users.noreply.github.com> Date: Mon, 11 Dec 2023 17:17:16 -0800 Subject: [PATCH 31/52] Adding course_name in search result and pushing to ES --- ChromeExtension/js/search.js | 1 + CourseraTranscriptScraper/ElasticSearchJSONWriter.py | 1 + 2 files changed, 2 insertions(+) diff --git a/ChromeExtension/js/search.js b/ChromeExtension/js/search.js index 5a7aea6825..094c07cd40 100644 --- a/ChromeExtension/js/search.js +++ b/ChromeExtension/js/search.js @@ -92,6 +92,7 @@ async function search_api() { result_dict["url"] = result.url result_dict["time"] = result.time result_dict["subtitles"] = result.text + result_dict["course_name"] = result.course_name set_result_format(result_dict) } } else { diff --git a/CourseraTranscriptScraper/ElasticSearchJSONWriter.py b/CourseraTranscriptScraper/ElasticSearchJSONWriter.py index effd79d481..b00948ec41 100644 --- a/CourseraTranscriptScraper/ElasticSearchJSONWriter.py +++ b/CourseraTranscriptScraper/ElasticSearchJSONWriter.py @@ -41,6 +41,7 @@ def index_subtitles(self, course_name: str) -> None: for subtitles in lecture_titles[lecture_title]: subtitles["lecture_title"] = lecture_title subtitles["week"] = week_val + subtitles['course_name'] = course_name self.write_to_elasticsearch(subtitles) 
def write_to_elasticsearch(self, doc) -> None: From 3901e9371b931bb9b9c00487dd7a0d76897fe4cf Mon Sep 17 00:00:00 2001 From: Christian Opperman Date: Mon, 11 Dec 2023 20:19:19 -0500 Subject: [PATCH 32/52] Update gitignore --- .gitignore | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.gitignore b/.gitignore index 5d368cf139..6cc591116b 100644 --- a/.gitignore +++ b/.gitignore @@ -7,5 +7,5 @@ node_modules *.iml *.log *.csv -*.pyc +*.pyc */subtitles.json \ No newline at end of file From 5910720b150437a10c4208542b0d9541faa948b7 Mon Sep 17 00:00:00 2001 From: Christian Opperman Date: Mon, 11 Dec 2023 20:22:31 -0500 Subject: [PATCH 33/52] Update course scraper and elastic search push to fix bugs --- ChromeExtension/index.html | 7 +------ .../ElasticSearchJSONWriter.py | 1 + .../CourseraScraper.cpython-312.pyc | Bin 9595 -> 0 bytes .../scrape_coursera_course.py | 18 ++++++------------ README.md | 8 +++++--- 5 files changed, 13 insertions(+), 21 deletions(-) delete mode 100644 CourseraTranscriptScraper/__pycache__/CourseraScraper.cpython-312.pyc diff --git a/ChromeExtension/index.html b/ChromeExtension/index.html index 3d5af11169..02c5ce3484 100644 --- a/ChromeExtension/index.html +++ b/ChromeExtension/index.html @@ -9,12 +9,7 @@

-
- -
+

Coursera Transcript Search

- -
+ +
+
diff --git a/ChromeExtension/js/search.js b/ChromeExtension/js/search.js index 094c07cd40..9f67cbbd19 100644 --- a/ChromeExtension/js/search.js +++ b/ChromeExtension/js/search.js @@ -1,5 +1,5 @@ const search_btn = document.getElementById("submit-button"); -const result_container = document.querySelector('#result-container') +const result_container = document.querySelector('#result-container-transcript') search_btn.addEventListener('click', function () { if (result_container.childElementCount > 0) { @@ -87,7 +87,7 @@ async function search_api() { + ' Subtitles : '+result.text + '
' console.log("Resoponse :: ", response_str) - result_dict["week"] = result.week + result_dict["week"] = "Week " + result.week.slice(-1) result_dict["lecture_title"] = result.lecture_title result_dict["url"] = result.url result_dict["time"] = result.time @@ -107,22 +107,27 @@ function set_result_format(result_dict) { // Initiate html components const result_item = document.createElement('div') + const result_first_row = document.createElement('div') const result_second_row = document.createElement('div') const result_url = document.createElement('a') const result_week = document.createElement('h4') + const result_course_name = document.createElement('h4') const result_time = document.createElement('h4') const result_lecture_title = document.createElement('h4') const result_subtitles = document.createElement('p') // Set up class/ id for some components result_item.classList.add("result__item") + result_first_row.classList.add("result__first--row") result_second_row.classList.add("result__second--row") + result_course_name.classList.add("result__course--name") result_time.classList.add("timestamp") result_url.classList.add("lecture__url") // Set the content of components result_url.href = result_dict["url"] result_week.innerHTML = result_dict["week"] + result_course_name.innerHTML = result_dict["course_name"] time_reformat = format_time(result_dict["time"]) result_time.innerHTML = time_reformat result_lecture_title.innerHTML = result_dict["lecture_title"] @@ -130,7 +135,9 @@ function set_result_format(result_dict) { // Organize html component structure result_item.appendChild(result_url) - result_item.appendChild(result_week) + result_item.appendChild(result_first_row) + result_first_row.append(result_week) + result_first_row.append(result_course_name) result_item.appendChild(result_second_row) result_second_row.appendChild(result_time) result_second_row.appendChild(result_lecture_title) diff --git a/ChromeExtension/style.css b/ChromeExtension/style.css index 
bdb3ba9f26..84c68223a5 100644 --- a/ChromeExtension/style.css +++ b/ChromeExtension/style.css @@ -26,28 +26,28 @@ body { .header__course { display: flex; align-items: center; + color:rgba(50, 50, 50, 100); background: white; - border-bottom: 1px solid rgb(225, 225, 225); - box-shadow: 4px 4px 8px 0 rgba(0, 0, 0, 0.2), 6px 6px 10px 0 rgba(0, 0, 0, 0.19); - height: 60px; + height: 50px; margin: 0; padding: 10px; } +.result__tabs { + display: flex; + /* box-shadow: 4px 4px 8px 0 rgba(0, 0, 0, 0.2); */ + border-bottom: 1px solid rgb(225, 225, 225); +} -#course-options { +.result__tab--button { + flex: 1; + text-align: center; + height: 30px; border: none; - background-color: transparent; - font-size: 1.5rem; - font-weight: bold; - color: rgb(55, 55, 55); - flex-grow: 1; - word-wrap: break-word; - overflow: hidden; - max-height: 1.5em; + background-color: white; } -.result__container { +.result__container--transcript { flex-grow: 1; background: rgb(245,245,245); overflow-y: auto; @@ -55,8 +55,10 @@ body { padding: 15px; } -.result__container .result__item:hover { + +.result__container--transcript .result__item:hover { cursor: pointer; + background-color: rgb(236, 239, 243); } .result__item { @@ -77,11 +79,20 @@ body { max-height: 1.5em; } +.result__first--row { + display: flex; + flex-direction: row; +} + .result__second--row { display: flex; flex-direction: row; } +.result__course--name { + padding-left: 8px; +} + .timestamp { color: rgb(47, 151, 242); } From e44e5f58ed30e935054be53fa14f94c95fa8b9e4 Mon Sep 17 00:00:00 2001 From: Jinfeng Date: Tue, 12 Dec 2023 15:24:19 -0800 Subject: [PATCH 38/52] UI updated to show course name--Fix link issue. 
--- ChromeExtension/index.html | 22 +++++++++++----------- ChromeExtension/js/search.js | 2 +- ChromeExtension/style.css | 1 + 3 files changed, 13 insertions(+), 12 deletions(-) diff --git a/ChromeExtension/index.html b/ChromeExtension/index.html index 8aecc270fc..8c03d7cde6 100644 --- a/ChromeExtension/index.html +++ b/ChromeExtension/index.html @@ -9,17 +9,17 @@
-

Coursera Transcript Search

- -
- -
- - -
+

Coursera Transcript Search

+ +
+ +
+ + +
-
- From 9f6ba73df4a2b0b4f6aebbdfeb1c902978814962 Mon Sep 17 00:00:00 2001 From: Jinfeng Date: Tue, 12 Dec 2023 15:40:55 -0800 Subject: [PATCH 40/52] Remove code related to GPT --- ChromeExtension/style.css | 21 --------------------- 1 file changed, 21 deletions(-) diff --git a/ChromeExtension/style.css b/ChromeExtension/style.css index dc0dc0e3eb..4ebb7279d0 100644 --- a/ChromeExtension/style.css +++ b/ChromeExtension/style.css @@ -34,20 +34,6 @@ body { box-shadow: 12px 12px 2px 1px rgba(0, 0, 255, .2); } -.result__tabs { - display: flex; - /* box-shadow: 4px 4px 8px 0 rgba(0, 0, 0, 0.2); */ - border-bottom: 1px solid rgb(225, 225, 225); -} - -.result__tab--button { - flex: 1; - text-align: center; - height: 30px; - border: none; - background-color: white; -} - .result__container--transcript { flex-grow: 1; background: rgb(245,245,245); @@ -107,13 +93,6 @@ body { position: relative; } -/* .result__item p::after { - content: '...'; - position: absolute; - bottom: 0; - right: 0; -} */ - .footer__input { display: flex; align-items: center; From 9a8a7884cdaf062f6f96e790ef0ea435ad60a5dc Mon Sep 17 00:00:00 2001 From: Jinfeng Date: Tue, 12 Dec 2023 15:51:12 -0800 Subject: [PATCH 41/52] Update CS410_Fall2023_CourseProject_TeamCAHJ.png --- .../CS410_Fall2023_CourseProject_TeamCAHJ.png | Bin 38673 -> 62516 bytes 1 file changed, 0 insertions(+), 0 deletions(-) diff --git a/ChromeExtension/img/CS410_Fall2023_CourseProject_TeamCAHJ.png b/ChromeExtension/img/CS410_Fall2023_CourseProject_TeamCAHJ.png index e8408f738b307df7a41a07506827963fb23d700b..335727bbb0d24cd1ed21381be9259f70d8ea289b 100644 GIT binary patch literal 62516 zcmeFZ`8(C^8#TO9l!}UkOi4(RB(#l5LNX_rl1#}wlQGFHb4W5Jgd`+0Nh(vwlz9rV zjR`5G-nD(-=MQ*(fA(>7*RA&6*XJ6}bDe9g^9s>WJHLyTm6kvt>{3=z&>|41?vQ_N zqr^|#1g&iF+ja+~E6xPM4hHfs3c{l#W&&aPtFpow9gn2BAxjU#scXD*q-s;zCKHQK zk`z&d?Yp1fR?uWr)@<2(tWmjYR-*6u_}qsPV)D1FJYSYPY{U{?1LWR}FX;&~1=sD_w#8c}wQfwKW?zVcY2m|lP6r9LAIWKd zakk1hv1`)P83LU<^UfPx4wO^-Zb0{KAq3_3+65}uO 
[remaining GIT binary patch data for ChromeExtension/img/CS410_Fall2023_CourseProject_TeamCAHJ.png omitted]
z^8Z9_JzQ;qD8w3|a%HHkOIT zY+l8~x-FXk!_x3{)6Z?SvKo8@O$Isz*R;@sM?m79eTkEuAEqNnp$U?{$1<4NAnb%q z=a@-HNeoZb_wO3QsjzJbb;zbBgX+U`vG=afBy8YE>fN95G$Np*J0lwOLtN9M^64)$ zHQj%=@yQkei16>0gnk&Wxd?oGkX96aBAQ#5md7R>uNCmF*0%z_}pmh-u!}}bCSJ3tP zJ0_SfYz=IddbYREo!jk&`VIgV-L-;*%X~E#*|0DcR zljOmY)1Y0raRV1EgsiHrPW6%K$E~dgROjh%UK20!3k!Q=_V+Bm;Qt z?)tM3VU&bvr|MFSh&95U$ZQ2de#vCCvsAe}J>QqPXZvUwiDulwsL%4DlPLh3&%Oj= zPl9Y3c_O2fv{(48`foSxq!KO+g-}z(1@Yg2-q?U%BDGDt559cr@?0r3w*_&c^cZ@( zaq|E3UPfwb3;G7E!LOW|;(&^-ig21%AM@a)-%{Xu-9$LK|81F?F?<_RXbIO*bbGn> zM;U%UMrhs=zrR|di{I4Wzt@-dI0y>t`vCX&YhEgGW9=?gi117 zgf3KBp+tbS743!08xmI zeIsm|Z5$XYZ@wq27r3fGmOfA60-=S0ie!wk6_Qip3>xqR+B~RRyxn8zC8)}mEnoh> zr5ZUab-6hm()@-w)sJ*Zvl5B!6cZ)=M<0_bI>FLonp(ZL@!|dMk2Ra>QjPUPVBoEW z;B+8(JUR;ES7&DwAUY-OKP~w{tkTv&Zoec6yBxBt!z;rcw&tb1p1SM$DR)`5;RCu2 zoSOeZL3PpYZ*PCT`f2~;)6Xz?HU&$ji60jIY*@beTyW{_+Ru;HbnQHUZ_hC@qYk*s zhVGE{sSV3^yd60DcuVzivB?mPUXES|qJC~QscD_A{dEv$gX)6>E(Ig+dKZ9Dp2KY| z)%vseNm8htBn^Hx`9QYA`n&lgdQ73Op5^`s;vqjrk8WvAGJfxWz&hEm$FtFG|E4fw zmd-!9A@hcFZ#iYU>Yc~P!!ua>VOWo?j~cqCK`=&Z1D6EQ-sZH;MrX_e0|GuK?TWd4 z8O2wQ*5Onvg?H{iAz67A$z)t+rw`wdwh@OsSsQHOyhHN!v0XJ?7DN-M8TjiH9hp_v z-|0`D%&V46o8ul!~jY zFipi-;VH-q*lo!!*^gfb>>^A+wn`}m--hRe#QuG{3wb?lTWV)XXZs$1zP_7$XeYG| zE6y^bJ?{F4SmNW@*k61)gmE5*3zIHP7(4g~Jzko1I`(lc!vj*8IX{8fmN#w?;Xa8y zLi;1xyWu;Ubt;i@oAN~vk^at_B8q*t;w&Reh1Wzk!EZ#=o?Tv-eQWd(<`zO5WEjl* ziX(#`SH!GQc)DJf8fD^Y%cQdtn2tr!16Df1l-?D;u8*45$ZqIAw}WpThSm&%Uel&<_E4JUIQ7W}9Z)G09FQ9MVOk z5^rW7NAVGt4JrXrY<7ViASyo5i62$n_U>G}@*e5`rAvQ#4j)P-7D5rRPw%n;OGV~CA7Vrm8Y4{~>^ILqpo<^TtSa<@W6po`XckhcF(XD8=WD zxl^Zu)(M5}Y)&RckC-|qONSIkW5R^^!s@y1qq_L@dR*a6L{t&*7pOox-yu36ej> z3?W)fL+F-M3TH1rV00=fTu#KzRXmhk>A}%O^kL@w_F>ZSfGwh+%$xKLDk=oA`KITg zz;&skat@JpzPVMF=l|r3Fj#36iPLocBj0BD76*j-*(=XrHiD*JGmJ&93{nT0z zol8)7Ox1)~f+R66vw7!w&rjmkt&(aaEEk2hZszOTE{Lg8Ki>=J(Rm-oL~)7_?ev^m zqj%MN`?9_O7_RC*LoHwnLw&f~*&ym+p<2`#7<0nH;Jc68v~tKHM^1*0QgPccgVp`4 z`Uv$`feta%=Z~9Xs1}$9f<#6^1z)(J`FBuD$Sfux-Fy$bF>$s1sK|jOHn-Cx?KQ4> zR%`gkK*|$Wf)Q~s8yBJGRZ^Z 
zez#%|L%I*u)W4mnL3-IwVAMG+6X+`U;^O3&4;^|>{%qxx0H>9wo-eht+pHV>&&Im= z%rftewr0LYnF@fo=>YTFYb1ORw4pK7OsHNZT9eCkWpUlsj#rWNYI~J6EMZGH^x;AdqxUaA8Hg5IT_UQ_TCWZ)84^YkUndj;%9!)PvMpDTw zN$+Ki4!d9->-R48YhirxB5}(S>(Eu*KAaV%JI6>G(uxwcI_#4zKncof0LOltI+5K1 z%WLnx)Boqlo|XTzTjD;Qmau2b8Yqlwop${d^0?B2{1aI!H}Og)1ix5oO#4M$o&}Wv zn@x6@;F_hHY-(!iGNhYmDC!PyjQ|C9m`D z9**_KbR<}Hn0putxzthbgqaZm<(357oA3YX9eKb$MuLX3wGg=??I?jT1+L@N*hSoC zWV6(-qJV^tX`OPg(7Thm=GTnC%rf0R>35hQv0>%%=-g_CXOjnE{N?H>{QNkK_=B0q zTntpw>79?ov!DNvPD=<}4yi{>F6nd6i4-Ogr}^F+Aivt3>~-z8lX%4jP! z&YZkpp7eAfJIsr=0dGTo*^vEG?=9CqCvN8lHwL+PcdT!p*k{Sgm5DU*fcRW0XgOsz z>S;!Xs*#r!P;DC(#4QY4eR6 zm#?C&v1Uh!&zJr59oSEUN?xj^6`L3seBcsVYV)C!C^-SuGU_-YQZ!b9rmjJ!mVn&g zd!}u9urPV=go8RQGXdk088H*YJn9ll_Qd8zD4|eGYJ}PT5Q&OtKqvEtS%(6GGVkOj z{g7&!rEKuH-usM%7r-R+XpP{aWps1Tu5p>q$AxD#(uAVCPuVZ~7Q=30=w|Ra_{hRGM|o5IG4U=dd}u(~UpM#bv_vQz%C2f4aHNGOgk@6`se5xD z=FrSCFx0kxM28?~UUFbe!-e>|?gNx-vse z~#JElS9`Dt8@SJ#?bBG~DFP$e<3qhRo`bxMlsCHQ00Bke%}k)ppS-U={9a zZ)q5j+kA?g1$?m_x7>K)kx#zB3g2hU!tIYg~rcuf$$=SXik$lgJAPgEta8E*nV1I6* zifHfJIySrji02D_lDO=2dI^YruD<`zat5o9>KSSK7g-7$4TB?%l3NFl5;FhSj!jKX z{i58|NAzT5s3kRtVrbc1a1<6jr+WPTl&9+gN+D?QG0bc$^0&@d|MCPSoP>U&OjixQ2Xk8<(L!3P61axR?iLfn^} zyaz!&uR@x~FQaHg5?kBSP{q&)1Ni-giw%@j6crP}G=t9i$Md>6%+ysk(a~ys8{8*3 zVMO2giFm`9d@^Oyq*4K;t5)sKR7qu2*;e2oQc|hd3vVZc>664b^pD$7az9BP{d|oM zU~B5;gRAHFYU=OReCOch#gzh~kD*{MZ4*mto6ys;-nkyIKDq>T5yb{`1Pder1l~e| z7;i;5hE-xG#g0qEV*9H0boA(bTXT~XmP7NHRzkJ=Rl1Md>Fhn|f+*mr#)v*BGmkDU zEe-4o0))9f!Z%JYhgHPzAG#w2@iS&}2DtNF5Gli3UaN43C#b2dErG9qj2eOQIU^ZS zA&Pk6atQCJj|Mz5s4iV8^A2Yw#shV2ZS8_C1Ap!N<$Jdb#wlt9koWAA>_6Qm1;0ey zv}n<><=46C`H36fS>z?g%fZ(NHocYc!c`4wk3Z`#D0 z5`blPz2$ZOU^g(XR=PL!kRy4$GlL|ky|@AGc@Rh~(?m2uSZS7;(~@|-IY4dt*$ zvP*=VhL9TA_PxhL5@1MbSxku(&M}5BEE;^`f%0lCqhzl1=;X zGFNu%*Mb(*4h8Wd(TxTAA;VNugj0ZR5B;p#-DRzUDNeC__kQNz&2Z5p;FnS1^UPD1 zXC71*QPs|`h#!od``DPqyh z<*J4rE<6V??4A>1hp7d8qZfRYoez`)tsFv|6E){3O7V@LPr<$w8QCRih)B%@Mg(Vy z37!gw8=O|YY<1`%JPxA34-;fUIy>s+1(=qSmq5Ixis(CCwc1@BAQ_7roq@W%C&O5p 
zm6wa2%_bBE?I8@SBDcolWGCySnTfYRkg?smB|BeUzjx{>cN$wnm`GhIzb~drMG=%3 z4^TxL0yH3;k; zPL2{G{r9Lmi8A2G6Lyxcdyx%aSfTlcA+lhHJ|*th0BP8B>h4Eu7EWI(NdO&qT)v;X zk1>RDbu2>|n{)1*`ZvWUx>J-8Xecx4TAJ30{;^}6IR~6qEgLcm!jZ~VbVb=NLUPVQ zMO^Qi*SS#UY-J_K8DAh-_b$-r)vJEb8gptlMhgMm*v`V7JN;ol(N*LbG#iS=l6Ee+ zF=NKW?*H3`;Z{4pQ>+_QJ!?(UBVFTR(otN{CaH&wQVJa5x82}^iyr%C9Zx5>l1Y<) zmqdzYWbzGjxu>=MWNzl|5vo43bo#8*525~x>k&6ZwY&7y7Jha9LrX2didxr!VqwNB z*-Bj8>5TLvqeai+=^&#$1F%?2;!O{h2Q@P?8=F9=u@9-ypu-i8xx)i;pgB9~=F@!O z3A}dwPt}hKkmD(@YT`C}5pY8q{``TaDBi;|7&S1$&N7vG6K$k1h40=)S*a6{6!w3t zzEBffNDh8-_e56KuYnh6FC#Vywt%$tpAN&Ot7tC8cg!cgb+5nx7fH{$=e_%+(;0;k zYEHyiZ_{DJAJ+~aLMi!Ua^yLWKJ~_`7t zJc2|DVNy<|UcNp7=RY0lBDZdgDf7NWtPZ)62mbFG9mIOqBO}x4xqlq!Hq0ogyj#@b zM6slU!=Fi03>P6~J3>65um2}6t>a8assEfyRO9OBby~B7^@#XM$h)~_%`R=FQOBWX z#G2Z?~qT|LGhi|`l*I-v7=6y5$HVObgt3qgN&C3 zwOapkXhPDjs2p)l^jQeEut{Jvw_O0X7Xm3tdHoYv8EQGi+u+^(B&3Kxudpg=iLN z8B}@Csm*5BmZAYvnZD<~5YJ7pTHW8Y?)38OR1FN6Esl6XQITgJH~z&mE@_#yE*EKfx1a$*kR0xKy{sT zA(;XJvl-!UXq*90LIi_8IPOU2y~(A_CoWEH2gTM**#6;f$pTuV^9O=lBK5emU*~+< zSNCQvmr%=c9EvDVw{K|ALIMfB9%Il6GaLh|><2GUbab*M#gbMrM~Iq#$H1t?yT3j= zLB$y`I?>cW17vl|CS%q<^&CzS@&dF4EC6e%O9;%0$}za@2pR_;aivDJ+YzxvLPAje z96-?gXNZLgSn6B->OwHi(H2coRyVB(_%Z37JLSC`x@H8c^TVX<-VCQ@OK>$4W!=?W zKp951M`ry&Kq2vQaTqJcK3XGq{Fz?iIk)-fOCN-tQlt|CgS8sdXG8#7%nCOVoSU#B zsYSU4EkPjiPAnDs#I)Z3t9;M{FL;H2gfh7fto{g^v7(5DPpneGGd@|hPnt3*B zj3F5#dj~PswR5(M2O*tjQmCi3D0U)Dy0vS+ka_c(RW&t@UPf`_HTqDNOG#;`y-SLn z;cEG83QzI|6n6$bO_;#PR93Sa>;9gNza5Y)pfDjy9p8Cx12tSqbwqD9w@`u`8bbeb zwOOhIrO8CnR)-}4ZXel5a^KAQZUG!i?>2Hij43I-bwZetKuMBaJkTExB^vM1K6~4c z!Z1jY!MaW9{Y;4GIv3Jc$|$7?494L*q<`)(!s8}IJ~WyKcidiUbm;515nr#@ou72+ zasOjNz3xBy9%5V6hIq{0j@{e#pQaJvl#PS$&hpp8qI3O&|5QaUzkZB4yFWip)$BU6 zXx4_f&m8Q4FD=G=+R}&svf5q7R6nG_%Se_lYa3-}!dHfwa13tcm=(Tmu1x$_zLE}K zyaGv=Ixn%c6-@A92HTln%;EaEJYv#=VPH@$P+P2OzCqXLKX8FS&hzJS7k;LQ)_Er|~y!;14_6E28T&;OnyI8ycI4L6~f zCESIs~e1tUpKf<^m&mBlV|CjrE=Qx&bXmc9z6JG 
zR7(-tl^xf$Gumw?Nv+o{7}P15GH0vWsb3>cJp2Crf0&q=JfqLQ@(|H+zaa)GQ*b{`Q>POH%Ws<=cw4ABLLXH8M)!#%1#`1T4G_68we5v)UiDp9j*{&qWS zHtLIqVV=Efq>0S#ZavQ33!#uNJrs$5wmkPSA(%TK9xkTnhK!Bm^5)GaU8X!_jsru1 zKJ$toKUk%0Iv@+;Ll;t8BrvButRG^${mTKYEWA;Z*&95P9EG$SU?m0FY14)hm)k5o z{!A-}_lhw6l`|;p)mqVZ@(=%DH6!+fv5@9R)k9BKhW-U=(Po{QiK^ zKAi`Br4=APa7HN;SdrWESj{Ru3MxIo8697;uunQ;TZ}xrxLraL!Teq76{M+qxW&k- z$7oAj^5R2J7_nPPeo&2>jbb;?Q22CZg`k+g8YC6ies;dnNJjw~2dE-+0vL*yUw=|C z_~8}4T^>%0V73FGK!w}F!s7FxtdDBtRZuf&L$11cuCeP|R+TN22G{7sYEo~p(;A{u zR&2`{8!7dLoe$_P*OhO1na|!vdU%K(C>`#gIKXl(O7$JuWrlmeS8S00At%=Kx;k6o z)9Yu~=JR#3D*vy&GylsuZ~OmgPm-lmLfSNhkTel(LWOAIrbQbuMMTr2WG9VnS|pMr zS*A&rlBAL}s7ab6sn9ecB+;Tt%KiL&n(MwF-(S9e!F@jFn(MkAa`su?@8f+OujBQ4 z9j{+s`KMK#ESouoAjCA%^K?9hvz^X44>uLurg(N;sOP_Z3^8Fu40NY^uzuzZ8>8D; z(Ac)>g@O&hX%WINN;Yw%#g;*;S{P4my#;{6UiX{1>gsW`cvBM4PHJ3WPQfQd7&b$b z(Kv3X zc*2+@hnjRf+gku;v-+vEHwxDEbt?f=zwT}!O7gBGGF@B`q02xrlzp27e1zS~&6`D_}9UE+Z z@MjyXrHM;%#~dsJ6UXgo=^EZ@HwC`F_2%;rZaT*oUtkt>Xh^+(yb=?-$Pcg}z-OWr zI%MtXp&O!$cW8!23@^KpqvG4&{n`zdE%PmzakoPn=%)bBGELD-ORAA-iI>C8vo?k# zzuk=`;Q$iSZjx()@{B32rcYiQQCxTTNtXAX*TenVo3NS$Y|^d9-`VwKQ6yZ)aG8vf z?Gz9U7#Mk&_NKDT&XqbQCMG&M!iadbANae_7YVm?+iH=k-WCi#gEMXE+dox4?}wLl<@&l8*C%6euj10K3yx5Fb_x=W;OYeAo!n;{>6sG%J z>B~M&m?BQm)&@O1a`h2)gi7c%xjjdxXn8X06-@~+kel0SN@%ik;*X0s-0|ANJ&RKP zZ@El9_Hz5!r*>TBVeQ0c|C$M#NMR+w^;m`RQ9 zwPzA_Fk6WLLgru9;F9^=g%a%3Xj#1O1E3kfFCwcJuy}|J{Q8YE4wYZ6dRiGcR(I}* zq?uQS4L0A!5&#XL3U+aU_?DmGghmRb5ObJ$c9l$J_%6uzdjG?A`S_Bfg@@)xzTUjziZ@X7l zKwl82>t(F?Kvr~h=&@1VqhKN100R)~G0$M`@^7@%#*c^RArK{Z*+{{?4OTHBR{-q6 z0cif!*vSPPDSeCo7$a)nfYIg_TG8{aj6OY0)(4ybyNuNix{dq&4ekcZ?HY+=3LE0% zN1eCUG)*#A{PJ?qm7fR{1VzYF+`uhVrtpY(0Z{L2UTJ-OT$weFu$o>4Zl@~mE`4z? zMHHRFP+;%@^*D5WRQ#!ma7_YbwO)PjDa9aH-WQowS&N)G<|A}9TdNV!kVLp zBSc{CRNq=Ox%trDmH*_1xTg4@ny(S=_CjIn4DHgqV(jGsk8y5{DZ<`sRpR6}Objyq z>Y`pDW#$!n`}kDTxGh^d?BHF>R-xB2M3&QotX%Xl{2QEyR_7Kv=}3mP6}j1KNqg>? 
zKB*_AnHg*R)4ND9rIG6Rnb*`&m6v$t58H^F2iB0eliN}%w4-~W$-0RIfYZm&keHuB zac!x|C|>!&x6N_eHV*oH5tN-EAY@K-L+GpM8yOw;T6D#>cI!61k;7zqlQyXKjI|9o z#!WZiIGPNn20k>w$I3O9~Yh}@f4sy zK^RK+iV82V@=Xhr9 zja?^haun*zu7&T&An&?n(%o}*`weAHNTir2^|-1khUSD;4EK;LnLV&)Sv2EGtIj1;B+om#fcUJcI&>18ub6>iwQI?y1&ZTl-@KFu`O2 z90?m5>IhUk{_|i<&LH|iStmwpX|asAVfGza1Fwiw?%q8N1vLlD=qUUaKnFj-kl6#e zut@auS_rl1E*>I|M;mc3-3_NwCJ(u_H)5~3|e|tlE+^o%v0??If z{wudvde!0id)wEbE#-E1be%lulbn=9H=WOKmj$GVqgt$nRR)=K^wEwfqI3`hu$p8= zL^m=>_-;7t;WAa<+)jZ$GxpxwfPChGlvP*9K^m@jpSa}0#vtX33a7%{RzYqB*~6;C zG5G^dhH-HhoQGr`z8FbP!Tdts466Tr{raI(LgPdQEwWwxGe>7X>Uor*lF07r>flAw zZlrYKGg|&WH_k1a_w41ouCv5jKj@#2?nt9DHMAr$30K^4N5glX2{f8R;WyFwcen9> zq0ws_zlTzB#bg?>*dQ|sollBmGGQLC+HZYBJ~cCR6Ag^Vw0#`+j*cgYR3}erjynQR z^6Cg_ldM58MMt0^Lh^J(ea=H_XZ0<-_;EW=ML@mqWFZyjh}r9JaAh(9oFdR{wk{G% zhDQQ30rc{rgB(zqKi%Wd@Mv8DWabfWoWTH3guT6c3Ps#5i3!0LcmFq3@YwL(`Dz<5 zoqfl^+281egL5(Xq^_CT=Tp}4Z(zXX_1k`YoWan|i4&pBz|Ip$Wl*V;#AdQt{O$LT zl;LP$L_AoA?iY5&{+aL?3eFfh9J{|uQ%g$9@*oJJ5;Tb!#l0JM>c}=?cLUl-vU+qu zE4dA=)|J_}{$l1OM5cFT6Vc@=9wZhQtr_ENhY8W&iQ){Q+P_Bjowpv$jqY$NF)qZ^ zw+=kcE0|zUtpJ)FdcVNft+KVJ_fBXg+dA|Od^R)qO3q}T>uw!c)xE`aP09~NAE!!pMvwUI7wEn7#xMm8dSo#&1|fharq2eH7KN&K)l1?k0# zz_{30R4{%SD^N1&EhuoROzY(>&*`{zIVIFrZNv4qU-O!Qq#)2;E~f+;HYQHiSrB~B z0CJ1AqUcJmG9n8$G_=x>ppF6A|9n&w^)H%v{_Nr3SamQMF7HgS8k_p$ZA*ZmM=!0n zdw3u@)j8LvKW=QhPua%PRZrOwS@>#=ZZwvjd~@Zn+1qC;1usGf_=}1WScZL6Hn?R= zi=N_$bOqnt9YwD04%j1Jn)GIrT;G&imnnhz?%_;4cz1UWxsS&Sz5lzFbo>XNX4gDF6^!43Pg-f~Tt=y0Zp#J}5BJ zBB>0_!iz!a#hF$1ba;$%5GApszL2I^UShhX92#=ba_o`ach3Lys#^N@J?M%z{VUO) zhN+D6+Q=P8%z{}m|J>Ca^^h=~aFqiUm4;sf41Wo<{y288?Nblt_hx6u7QQ+#_Y5fz zxKmFj%k*^kVogzG?e$A=jriRL7fhUDeGNuzx@it&@vR%|h81>ye2Ym_nZRt~#v?_E z9;50*jxD;rhGz+A{q_5^uSOia%gHI~@N4yZG|2}p?7M>ki+*j*s~+A9*L7HsnZ6*y zb2`H$Q%raxBUeBy07RU8Y(PKV17Q%O+qbUkACa}KdCpb)tS3{6ass1L$RenDk1nYG zGA!jy@ykzp&D8s!C(%STIH+LL#HLaC7icPanMYL2*IX9ShTcP4aZGyp{gqc~gwh@? 
zdE?|$Fst_6m`7byuzb~#g0OExkwQ0gd4jXAmmqQQ0C{2Z6!m zJ=zLzmbFi z!pZ52Q;tX${5>WxZtH>HEgt=&xyP{lUf`{EV~6z?ISOv6VeJVXFP1D{eoW}I4Y7eb z>h$x9x+QjYX7}uVIeZuwfXaEZU(CNyYGO1RlG2mD?{f^izKveew&U}XM$dwr_QbTd zk~<--FMTsc9=^9B{l|=FfNsx1)3$qL8^>$r@JcLnehNGN znD0j}(g?SwfSIIw#kQig;Oohsy3()Z%+<0a6w?Ll#I};=Iaxwp&F$p;`r-TMqUOiM zuDP9{Bar=Ner4xOqAkge5c9#a{YW}KwG5I3FVt2fT*X4_?~c^JUAm;Wf3Aq13%aVZ zVTU~R7!`%_FLgzI>v8K9*9M!bTX5o)t(`P+;wpzHjcr&A(Gk;{+l?s~GSr3{lJEob zXZ%d9kg-BVHkh>hOG3TH7Xj+fl0x9TYHyXPaa!Uo^|jClN6yAF4!5*CC=D@_oW6oA zfU`~T-bG##AMf7Kw;zrgkgx5$#JQ7iLNB>8FEwLDi6;x1*g}lS)Ibbpn?D{P#m>&o z#f71UFNF!5x99yVu^DV1MJE_iO1sc@!BE>tJ1YI*gZY|6Q&jtRmK@gTw2do<;11r% zJS`4I=^d?FSWoP*XX@hz%k0iQjmw#{RB}5L<{FtOUHDq^Ca4n_GMUWEo-JB3(3!fq zEuZ^rLisQa`vodOcBAvZ-_*# z8a~o?`WAwHHF+Ve1i8hv&e#+R*y_05E;AED;c;A2SiDa{3(qXa9dEMpoTPFpWiQ8QRRge2TRfhYge zS0=$U4tb!xHdwNUP8R{R@5^2i=QXJrw9XKSJRcXg)o#g>W|c7^=olD1lXaoZ^h*Mx ztu%*{A6_i?36e<&8%}}-=%OUY95c9B~0WyXV*}XTE{{c z8eA3@c)k8}+vEs&gKRMP!P%rm}E!2DbMs{YPZh^*V(U)*!{ZMGMy!@PW7Q=VW|~| zIuojf@_pk`oal|j7;*6hq)NQv8D+gCPVuxvNEn5wpggJi`ucQF1N^8(z;~EjUxslC z`>%;vBlrrgNgdIA>bff1#Z@CbACnm&F_=79hSEh_G4HQ&lBe1*%Ars)FU*jDip*e= z;()>wr?&F&Kx;hU>Z0`zP__q~@lR6O%Ixhi1emRiB*QBJ&N&7`nnP7W`;}Mpx*akm zFGd*7OQQ{uS>dZD<*(;>R!b)g3^told`(?5wC|qT4}vP;=Bm^Ch5Ja}mFuQ<>;xEc z>z1nrrIHt+E|=LEZOqN``~1Fw&;Sg&%QP%CC-y-Qw5O*gH*MFom9rFDEYcPB+G;65 z7DD&~XDi+YJ^;9()!5RvdJQ)vr`^bnv6(0gTj>F*11iyUwA)S9fy9O6&(^yX=6PNpzy+dY=e@uFdJ@oR;U% zOqV4AnB{MLc zooH%O+Z~_nkB4xsDQ* zKC$3&WoN5zhH_oB>_~1B@0KNW?;IS)A1UCP?!U~PY`0V6z78mcf!~`yv7;8Q4AwgM zMUy~0-*7+he9vGV**>8Uwo~0WCL=*22Mo8Q91*xGk0d*L zrdDJ@H6oS+Dtu-p7ZJ|M;jS5*UG|XhRaL$7bO@L%p7ZC;&)JFW zG!o*gYX{kAk|Pv&aM8RS48PhTa>8(DKDSqSnhQd+wP^?m=^goJmZ&=C@IN-rz_OQc zT;c8H7eCQ^G|3ubBgmOJvwG@LX#p%mTY*M04UB7bTZcl#)pf_;zwc)<(AUR_OOV@u zIu*WpbZOlxGNg>kG0O5|LO0|)S}<^W#Nm5}&y?ka8m1}?uMbMSDN~@I6N^??hyy{W zON6gI5_ z(EtMzCQN`vE)KI3M2pUnD_+QXf#-~w6o|{AU`n0@BR8bS8C3!-C5&z(hvh(r7!%Sb zCP%+E8YauFKg=KfCshV9o5cR;0+{amYD*^%lo-p5sCryhCQw)*z(Jy925oKAf%ZvA 
zk0Iv}0YdWRLwXV7#&mKdlG1NHm1v`O=+0-{(=oZ?gMGw;bGR1KI#LN;Ri<)!FfO)9 zWtCWs6E3%60tb_sQ&BqQ^D~Xwt}WCoU2n{ji-G`pZ{*xkQ#)j4i}_@2xHAYdi>!bx z$BIy??jcKXr*h?ut8Qy4;Q|30aI1jvGT_7MM>9F4Db1EnDCn{^bV*i?Zj?o=F~{Hxj2{aB3$fI{PE`5&s3=0be-{N-G4ohx~^F z8O{hy={Zc!EZ^|P+WkdokW=QANbe^To{bn#rNRcpn~cKMQt7|6q(C669rn5F^pI7v z4vQ(nENl=pc~flxyJoY$e^g|o)cv4PH_HZJwU=u?rN6hf#7*Z{8!cQBnmo6gDFvT_ zsaUY71Fk*=XW~-cF)?F%S>eYM_5@^93tt$2KoW)L@EpNFC5&3Kb} zuZ_Z+tvp(zX=|@aHd*jt&g>m}JIK7gQrr;pZCzb`#N!~YxY$FNe!u8;TujHbN{XPABrq%@Wnt=e#{*u|^##@{xF>)|{1 zEs;22r`{QpFDm(4j9Bx7_S)|JNxkD5V`9Ll$fvBvqRSy8 z@U)a%R1nn$9$P!vLGUD>KYD~s9Dr!UBUws0-IyYZ^0smx0d?PH>Xzm>wQ)bKj-9P5 z9hTAclH1qRvutYmQS+L9t!^B*xv9obVUoRaj{sFNQUQ=0N^w0sj97I=_Od7S602S@aXr^b@q*Vv2h1lGL}3YJHtG4nU>L^F#bJ*((O<%R~*5 z=el)H#A<9^jvrUzb&ecKrXw`0qCkInaJ73~9UP)dJhjJJ8Pmic-8M9GwYtEGWutH_cY+^OQCUk%|up0o|*qQ*9wAI60w#%*e`m z4|jC(@HLTl{LR*U0#Hk>Sb@LfpMPEfHA3gsGTRHyAKFdLC3TGB3|@Yx9^|kaQ#Vzc z5%BSJt{Y66ArRPaM=|4%3<2>FOev2O%)wo$G*)o^v1LPU)!OF{wd=(d3v#<*hIv_13mv}VZV3AabbiAcw{-ryTVcU_ad*ieGFC zaeAe@L5(Q@oDv~Q(p z8*T|j5{4`}1OZ^)!i(1)2*}r|uekF3bniylgFEmy1Bv6nim6-uf)> zN^gDgzvcZJHI5YyQZdX=(LPq#P+y;C_uvaQ9lLh*@EwD`X!>iT_0;t{SAaC&+1=nz zhoY?m99_^{(N!Hj#RXaf85xYeCi)5diY<`)~;Y)W9V1V>Z zW+Jr6ciF7KO7iU<#tDL69v(2pXrsdAg4PUTeEd<*82G*{J}NXph7}*5yEY9{CG|W+obolZ=uQ*V>plg* z(T2Cn0uODm7_pC&mKh?sM)FQQ+@)$c#6s9b0<)182niCF^ssyCct13J{`lhny;jl6 zv}U85$9Qikk$E$7pvK|}WM@Zz2>%YMWO~KW4{1RqA)rDM@2y{$qRk>8E+gGAj-&Px z%0)jGeBh#Z8-|V$4K4_i)v@WEK_`-RM`Fjn=}P#`JW}966MF0sAwA0u+iV}jSr2DY zG$KwaBT6k=Rzv8e#E1LF3CEy;m+3~ai`k}EMFu&Y$1scX&{|qFF$Eqvja~>#de+X; z-5zZfNIx86JTX;Vne<~G78eW2`sB%zwY708WE6d6tCCGQ(F%91JJ56B2B|TPVmG@rVq$K3O+Rc?@!_ALOMw@ z2{&M$`xTcQb&NaN<3|c$WFQNneA4C-3mi_)QL&~2Mqs8$dA+lvqeg`v>0`?x5TS+z z?vyv6^4PJ_N=f$v_a;z?W5PJ#w4{_R{?TFR^ZP`%?d^!xUz=>gp==#HGx*@f>3z7h zT+x}((Odu@f@+CKvs&-Eov4y55&0w4-cQKm@p|wpUg>K z7B^!PR0VJSs`B!Xt~Xx*M^NIbgc5TVt1g?6=-$@BQ?$_!Uz zVm;*1*YAR|U9eeA^!R54i*m)bv-H?)wzO75s=!9*p}Nrbz>^(}gA)LSm|Whon?DT* z^y?J3BZIyPwGI7jEFO0h77 
zU$@$!i4eoXXU|?PKeGPxvZOTJVpcq)h;xV{JqP_rzg8G>=X`m6LDmIYl=V+0T`a#` zT^V?u|0BZH)uVWUL(~^$drXXH(y2HZE+yqrV_V18TZnWxG2?BhFgrTgAs8AOs&HJi zZYc<%wD~;zYBHQf-F4Vm&?ZWAOvXWaOE$IYgrl$&PMT71;N&5eW1=g2>;US>T~j_`%~@VkQ1B|^c<4odb z-QV{iKExuS4^1)kyVq7wInmFaCCssP@GeRGNh2=q=Xld#yF^5Klwcl0RyU~mW4Vw@ zJP6ui5|H0(XB;?+xO7cABSlRFoUrwq;;ar~SqS;>LDjhDOOV4~+GaU3u~xS9*$&&7 zmC2>^Q^u%EW#^yQrETq7>B`~*E;wqB44eaz zf8p1hmd}fPI{%p2*7s}6>0+lGGvlun>Dq|aIbl+gk8J%Fmj2NG z9ngq6UbmlZ{kEi6?eS$vP-;PJSTmeA9Tw%VP9oY9_RkecO<=?A0am?tw&m)`y_LW9 zvhCvY1y}`8*az-dpZd1aNdqx_Yke&qGnzlN7kIhKhonaBL~jGf(W0y1LVlxWh*QCu zMNzBgr`6v0QaNNYcFXd<&_0^fIh2YP@76wXdC>>WILDa3=ZDE_(U~B1nyZm}$Xd&f zrTHLerGuhSEnh4@ax4~e0ka)`v4hQfxFv-}n!j)zmHaJL;1$u>a<+S<=e#a9!f-*D z$=)pMWkcsHaBUHio7a0_jui1vKGa0XcO}uiZJNeMESaAdv3IfV$)kQoaZ=fm40Dv)}W`MJEcla6ci{X;_Rtc3c2MiWB(c|%iuGE(d zolX)F80Tj`Kg@aEiHPd?x)UkKJ_TsbjX3u?E%BsB$<~G?K9}MzH}w0?cGzi8bY9d9 zO+)2@cWbGJ;7{Gny5LHdB57Z3G)f{->({cS6W;?o>|fXTL{>Gx-zF+;HO@&&8c=>V zd(^+ZD+%7kp}=|--pjP4FrmaYpg?~Y3D}f7B4OC)d9j?2nNlB#WTz`iO^!t1d~+Or zk^tkWW%m5b1SX-f8cQ*7s{3a*Q!*iA(*$6E92hKMnVl3BctcNJ|MCYSB6hBqQB8mn zBrc_g#m~?}0YyJBR|DpG*Bk49e+G0_?6JVGhtnv8Zvm9yoO(O8^paK%RXX2v6R83h zY?sI}R@Gp>!>`j)g>8;_$V0cr>@S+a!cT2MuR>!)Dhq`;uf~+00;Q~|Vd@jJ8Y^Sf zfbw`IqtviWUX!|XLaPuDM+ss4O9E}RHL32|r`TpZh{FysZIEyZ-YWn5xO_9tl>Exi zc*E09nxQG$ zNu_VG9}O>7ohSa1HroCO8A%W>{77!Y(G365TYnwsnfVx~mJ`o_qwpOs;wbN%Pg$l9 zYCiR3ib%X*5HbU6!LHCFMdO*_0rBa@ z>;d%HKNpl(ovEgR=;xi-1~AP$xnhoZEsGHQ1J;vfbSBhCT8-Ui9OsyfClssz6qQsF z+|yGlpa>6LGCfYRC9p;N?x7UDe*KBJK5CxIVXx@?@nhcD*#MW*vf+|BDeuzG26<&{YIu*vk^pt0TBl$98z4as>B!hf*CPc_s_v8Kehrc@j!~ovVTB_(S(dh<3ca4_RrRc3%8T_xC;_gCI_E6w2kbw6&3< zvcj#g)xnFpq@)DuUE)$l&7zVs|7ftnc253Jm4v5QL&M#!64w-DyWf9L?7$34d|wj% z!<h z?L$Rx_yrH-*KHl7rUf2Y9ZeSR*z^)P77zf|utd$kmPOPV!r(L){1+kI^5|^T#In;; z{?M4vTiAHOGc;8*%?wi(*R^COsSAY#SG{q@jW{pr1C}xQpTD{R{apV3V~#D|M*iaG zwzdQlymlyUId4W}D~T5%S>%{3i212I3STjJLZQ=EK7?Wf?5bgcFAZG2LJPO@af?sl z12Us{u8v%u&j0?=8%}8|d(FMW+Oy~q+DEN+i29#j9XR`PxvJ3?J**Z}D^?m8)0?8H zzJKnu0sncH!+kTda7rt#n0ke$$?L% 
zrI9fpa*ue6?WkC2{4kS4u`X+zaopvl6W9>gFLqyk(=U$6#vA{VM5k3$RiP~8b_MuzN2lYp8SDdgCLFU@JAuUNSbz=#WQ^n`WFO9H#fJS z;vir`sz8JqvBIw3xs{G8{p88Vf?nPmDAeD)i@hMLIW){^or&n)_$F!Kr3Y+QVWz*!) z_&)27j=ST_>uI6VjsgdRUGD!!!pztWK(F9CcT-KRyVt5Ztv8Ps3Iz#OYnbe(ugb_D zptxTv&nSyKLu&@F3Ar2LIj@G@>$9ipR~*BVlh=EBO-1pv?%&X+Ww_v66GbCkH}~Ua zac!Kb6gD6Nv}zh0wYm?O-5xqhNV18JBZR#@=~5t}_%3r!qB&U+KX>b?G&yj+G zu)4eN@}Kt&jUR?8rvQZ*kp`vrobzIr?nm2z-U76qn(~iVO}Jk{&p+Wm5AGA0G5y4U zn**51;AGU^Q+znr)eoEh$gdSiTy);_jg3`sbQ_kJa?w1&VB=r6vFF`Nq4i#2n?m=w zME5-ac#=bG<<*wYpD~@eU*8H(8#4`<7H>)tGW{FqR4iSa@6>&}>*s4m;2;BTR9^qe zx1E*>f{?GMx~qIJ$bsQ)9Jc`pRkKh|J-}eY^Z}%r(%=6CpR}2CN$bfB8Rtrhk86xLtVi< zfDfiK&rxSQd*+?yNXiiL2D4JKFw+uD^}*->=$P@!K?Gz+M#K$b12lC1#c#zx%(Rl} z&_+gL8ie|TB}50%)s^RveyK~~)o|MfY;h32P>OCAB@5=Bh*`K%2>HMR4gTZAp&CLU zf8bae>8|vlnj%(gsWLxdI7jm03SPGME0obi)>%<-AOZ{bGxD-V@8Lf2RwWxjb9vOjJA%jr zRSsjE<@$mE>5b&5f|?!7ZlI%z?0`60-~rUJ+CfOGKK(gxn~AtGB&Gv347pv2O?R}H zJnlZYB;Wt%kHxG0zd!!J$0Do}(}Dm0&okee3;K?=FFTqbCx-8#v6wZxoXYR zou{UzrnPVH?jvey8dx>8nPzimt4a{-(%-A8EdZU}wd?S{UAv4AM}!5PJ%2_`ZSRfy z;5nX0>sRKUx_jF>_|XAPvgR7AOZOI^958t6vRD1;(m>-EUt0FeVwkP*+XddUyztVO zCAIG!e~||4*yE)=Kf2#--arKb8CDfGEEC}XN;_Ck=qk0(hc51Z?0wAOlB0jTt=Za3 zK*97e7y}dc`)|PUI56`t|DF@sGy&{asO)_I@*6c%%q3u4FuZ%i_RhI0qOX z;`GA=#y~YSNb`yt^VN*^#|rXit=W{YBKU^?rN!oVzaO;2KlmPL*KI=FcRg@N=j=;& z=OvoZG@OM?`+i*OJow_#*G0vtw>K}hwP#0rCD~VHBWW$`kyu& zOu3ohx$OHQ_1q@IfO~h>X}aC}mY2|pT-mhhWBivJRLup6UlxD&^AA0C^|CvCu?BH< zf`dlR`i*`YoD1iStU;p>limfT#zbVl)Bc*iI>=&$%q!WG4d(Ky)^50y{}Gc@3pE|H zXnS0E_4M^ki%%_|clWXJiECdhKpTMG{(xs2Z>_L8ZSvs#K33@^+hp?vnzwm@*H(S+ zvAu>m3sHPT7Q1<=N8eNDn||0o_daJ%2rJ_BNKLt4+QiPF@3$9%Fz4TY^p25L8;d)O zp-BG#(oW8m7skeifGeM0j|f~oUbYHkv%4W{$6%(f4ngPEh85LXn;2BU?pYR^qqZZD z&mZ!{IvOJFuB&ZXJa@oZJ;XkJ!Rz>kL_+?e?s~-K8FQ}hSl=^q&MbiC8n1SftBjb# z8j7pe7mX%TXMKnhKU_TOo9HEOn{oO>-25{p5!K7C3!H5~ToWzr*x383bnQ=_?Uykz zoa%;aBkXsZhm<1OVrh>5&h6qj0!*Ok))JY7d?EYXLw2K9Q6Xr>zQ3W z1AYyiUS>2lINR&Nn$WW+20&+(7iNPB6W%2c?sloKIE{JTOgA|`bgV1k_>*#l1!&Z> 
zHBr#4J1p)1@e&6GetHVxx;JybrIZA2D29DmQmFZ{nB@s*k2kpgy|+u~`KB^~R9pYO zKHmN^WgAiZ|a>$=mw~^ z+7qU>n8$M~9IT8Vh(vPO${CmFBX8fntznkGVyO)ZFQq6S@w2sNqV4a^uti)qU83D! z=N-4Yy7a0p^3zzn#kM)t7lE{DgP56^#TxINE%&Nz@>_!i&L4(Zq|Xbw9Q#n?UVOiw z)(Z{2w<}-G9Goq6US_L)=rW*3y-Wii?{HBxc1io85osoFiN@;H$F3#}n5@6PE+EC$ zWWjrF&7Es*+pe9pZ`bxEtJmxZS|hl#{q(vcH_p49yF>o4x^Hd7!XG9ZQu ztUbKq*nF#l(zUS=qbwPnoJ?&0hX$#zXF^AyS|}>(1O1yDzI!9* z0rh+@gTFicbRlU`E`zG2BVihw-`@6f|GDOsW}s%q)%>eVciQ>u^&0nDA|J%fm?$$THvsbbfad*+C%=fET+}^*Jx3_XHbzj;( zrzeZ{H|$@(pSJ(){`~zrp9VaQc$%}Hde6+%>A2+Ji4CXjl0USZHjK58T@`Cuehqo2 zbC=>^%R|qj$c>u!&Nn#jpY=31A~Paw)WyrI<6y^8yAxgy9l_;0V^c=G`sXwl9k)C_ zc+7J%>-gHu2adlf$Ua>DbTncA(~}!JNfkM zi#RXgQ(dp-BT4zz9z}bLyf09`?kNFAy{@=#e;_9hc#6DVep{j?oS~Ij7n~HFMefc07mqa8mJi%#NlJDm_afqSwO+@{%}3 z9Fly5JewR6ha3AL*&(r$n2u$Q@mKZCj-FjS`>2++PPX-c-Xp7b2o3n|AJzt_FP%S+mXSLPTeuolh3+ z3FnjPz=bjN(Os>FkKXtZ)Ox6u9@HLmBd9h=D@fF|iM>FuK+wi6V#`;qUwLOGU@Lb^ z=&8F~S3U505VG~#R^}8i*(w6+^p2BqxjYn&mHotToaSb47;{VN{BfCMtOty<@~mFU(O^SAU2wU z5#TD`Y(`e7W$E71YoSGrB zPVzj`c&xmCs3(U-Zk2ZE@GN8&r>Ksj7J}Z1^W#6ppGu#JLq~_k0YcV9+l14Yjff)- z6JQ$tI1>bbTlmhloppWY!r3HfY;jQWA&r3e)$vc`b9Q9zxUu8@4$+n7%YZAq%hFw) zJAEqGGqnHyv{yoe^ak#!1;scj}NAnwS)s$Us-2IQoqCNCDI74HxIt(R2~3YmN8Z{*=9czuW$<2lm^{f;%(AomgFUX;=#I zZQQ#*G&$>a4>sk*MBe?nmi9z`_Ua#ff8?C?wQb4Pr8(O>knPF1#_#z4svfc6gdk1~ zl--v-Uj@eep|2So@-rx;I#SXB~k7pnXiNPQ*6CSTaD^*x0&7#XJPsr)>_~7UAt%=YJR~y z&_TD%3QT65W$z!!KcZW;-7zPZdyjovfSe6?;T>$v5Cj3!YqLvYt+zX%qIvoPyZP*~ zCasLhK*KW-8i{!;=O%b1B?sq7ITPP_@c=<)Ka0S{ly7057*B_%T zFFzblD)>=w08H@#oNX?TV2mb(YzkQyQfazob0)*9@Kn|CfioxiOI}`jnQ*R#TieEG z2-pyI<5XqqMb2ycKMoyXREEg#2P$X6Z5R%p!$Lov-S}*mmXdCmZ~qb%*G*ite8yvZl#e?J zEvTE&>#~Wp!F4r|h1}@gAv&Dhy{>aB`8k>(tLl|!g3c?02cr2viSf@J)S8JzxM8>v zIVM{)-MM0pYKTYG$Zyp%uMjY9iMb|`C-Q?MyRxVs=HxubCYj{~*fZ-lMPca$!J zCO!S-;qUc32x_xUGw;l(<_)PGKC1TctzgxeY?*9>>2+Dt=3PA?fAK1h6P+_&Br5sE zsGofHY&FE2cP#DCV7*1o3szjxaKBVhwt(@7m^tXZY3uHogY+&tsCFUZYJZL{inuA0USUA0-Ns~M^{aE9Jr*Dfit1d5`iS##)xo|!dW*_6U=GPPUs_V&O^EJl59*GQb 
zTI1<<*mzf1#2I62vu$SE)&S=j8yh=DoDQ--vfJf%b=5bgHNlaQ7wyf>;c&Pa9AFj} z0X4UGA|C%)5ybD^F(5iXVKfBxgnCep{Ws|QNX{tXPr#>gI4?7)6 zIiiw7XSIF1cN~kEG1#?(X6X~HJ+UZ1YspN&*_D1@F0~y2>jld;0CRZ=4&3r8wcxP4ka5)7AV4SFI9aZz>Wo(6vVV16bw|r2 zW&O`sxTK^?^Uc(Y=t$*l$`0}f&CARQC<5Y`Jx|Gn8P^0;u zJC_pfSpCh|KWV>O@^RbOi?*x(bLZo>*R%gi(pp{DMQ`DT|8hXrrTsswy;MET^l7F~ zOZq>S{_kZstzM?p%d}3PHl)*r^j|KHL8;XkIbHc9jUs8JOk^B+ zU^*3g{B%=Sj`}LaHc4ZZcG!d#evnV|&0jHs3^Mmv%>Ywbi4h_!z^QAeVh{E##ZSJx zMHRjYAgoyx-Rm|!8$^%J-Tl4%qe~sy< zT70Lrs-gLh{Xg3*kDp?w=i1wPHT!qe1beJraNw68YiK;hv>nq-Q_bEq)21cuFMTw< zJg0Tq+{M#|VA^_4ThD2iW7<#q3%G`7Z#qU%U#-Odim65yjnf`4h4HXVA3x zPlUpVMj&d#bTk^g@l94BqJl&2hFqTkDT z6a^myQT8lS5PTIqVal9Sb&`fJgr)kdZl{z=7V4VVG!2Z?B7zwt>7W^o7`~lKEDplk z-SXw9NDHjtXzlT0p3Ib`JXqZL-89*w@uqx*P#mF{b;-Wjk$jbZ0n2=XjY!q^e6s$x*egRL+ zty0`mWIJByD97mv*7HZ`P7zqy3uY_(L2|xq{H0rkXq~$*UHs~bfRRE-N`*X59NsjT5n-n6F;WP874;ywvTg{Xx!7~F7er zV3>+?YIzgcwM`-b6_A$^vYG9b1S83r(7<7W9!Dy)?FJ!odOR{!GoH+ytKormJe6Rm-gb8tp|M=)BCcADL-{KL#Zl*b8#hTVk? z^jWqu!fwAS$2vbob`e= z$h>Ta@MLR4-y4SWNXroK5u&(zHG`l_&_TfSiXe7xZgCy6y!VBuAuoAUpUuMPjkB&W zk>fea+o^UzLoPr7!8a$29rI=-6mM5Bk}i5d_qf0%W+>g>wZ!R%$=yITyuChKEy3 zDfKw?oaEfV96m-dyK5x-17BRpL5cg3me_IA2D^AGb+Ml4M_5~C6VC2sLd!1h-9_2^ z^H;*1ChBUWTbku4=%PlX8sr99nOK^U$nnC94;wMOJJx{q=RZ_#}uC*8|1}U12 zAi-l-RMshO2I6)lg}<>LXYu(t(b z^ZNu(1kzD?1_-uWD{s!{bw<|rX&dGv-rL5-q|ob*5fr;vWy+1?*Tut8@4xukWF0_{ zxMCw-W=AXo2`(CmQW%~3O-&Xy8*rb`VG%+rYED$8y^BcFG}08o&h+3DbzEcFx(IL@ z=}iDxpfART`}+6_3i2WdjzP)ZR|o+qj#Uk3)-wtieVyEHu)yJ_)48ZKG}myF$Vwf+ zxuGnqR%DXpa6Cb56c8vW>49#ian1tw{LgC+X#22~e(|L{IV~bwEYn74j z#M`>4Wv_SWWL?e-v}+4d7#|zVNJOx2mp6ZfQ=3KzGAFjL$UD4pyNl?nYHToeW&`WI z6RtnzsTmHrVLQLl2nY_qJZ;Pvdd}yF3Ie?4Xm%NxlOvKyc^VNzxhRfrQZDmbB?U~V zlcVHYD5>rOTFGJI4;e&O)h_BH5fxDvpyng>VG%$?L6w)loe?Oq2@ZQh4_As6j&4@i zaeJC(;gT{|=@~aRUH^DZaJbLtj-&a=+S!|o&^gU-pC6Ns@OpvCrsB_O6vYN*m|5SV zowX!xTDgxnF^9x&(61wbVw>Gs;7PGo^rbo zmP-s)S>UUN$+Y&GwlX~CfwM?T^4-8J0pgxcpgNGkFgNJ>_MNU_!`;Z0NKse>o~-YH z*#+$Q)+b*DKh#%B*q@id%*#$oXbD5aC05UiN9H1HtKw?2PbBC2|8YbK4&+?xf797 
z{)W~rX~~qWRpeNG$R38c=R^T{5s^N$d?2WR+Cqkt!7y_=ZfGK7w2CFoW`1*^fT2;! z3?!ZX^M_PSCP_=#t=V-b-;q;7sW-3J9zq-!cEgqHaVR=*7*|;DfWDI8K@Q3VxcqqO zfA6kh{i{O{rIoME1kmI{|36-tiErmy1{og2J;_=`;eWxV=)=(Enq>f-Y)9F-m>rkM=4uc%0(Ny zWB3K%Psk$0_>z+>wn%n<>*pxAVM%C8`^6*=J6Vi8)t92MwG*do*Hbb{+^Pr!yr~lw z-3;Ut?S}|@c1ir;91bPk4hTXRjBu?|mV~WuHwdv|G$;@bAt%Iqz5z^EP$-alQ{sHE z#Ud8O!cd?i1{caG^6fxSW}^)!!KIqz#7g8q7##SN-fwOvm=pJ7^klT{Z<*QjuHUlbW;Zf%IomL5Cfao zj`+rGT4!Ae*q72EJ#HxFQ+;^Cwsfy7FrzKZZxw^rF>=GJ8)nUvTUjYulpJav^9M<^ zg&&6rk~g9GDWhLfa$=jYWD16IVw}?J#aIGjHpWO1JR&hfP$iKLmv^6ajL6A3-~%cg zuQF*bV3<(B45qLH$S8M z1oYUXFFDLRG;*K}>GnLsDjn9kZT~*ZGS3aS>8TkUX~jWI@*OMe%Jxl05x(W=f&)ie zo#H&1&9@hQl*fp-4Tqi_hh@ce#il;fQ8XIvyDAw_Wk3ofC@5ZaFkq;tPe9kSegvS67dFQs*NUqGZ&I-Eu;M7mOIQpjML{PAqQ~!J_+7O2pB@Vl zxJ+NArkKm3OJF>`a=KFW^W|w z6jm5XU*Kb#9U4&(u@mMvLuJW1UIlsHB2oX>)@@&Twi{l#JQQTTUeAK2yi~1nARN>O zaa%{#Wh6C=6uw%9IE*B75j`&v4#ZYwJy3bENT}g(AmnVsJxuL9;hICdeMFk1OhP3 zX}g>&8t3(HROabN91@BP_QUVM$&wpu0o2KpNqOwRL6jwdzWQamy)qI#WW- z9A?qJvL|qg9HCui&GoDYuGXvyc}TZtvqDit6cn~$kJ8+01`f7KHLAeI7y%8fO|i+t zAuP4>t_d3xKN&14haas^{HpjW*`B1l;4PQ`xa0frX872v=2+j3<@)`WfbJHQ5CTsP z=Qz?LR9-O?i5cgc@J0&)pRoagW_^15Sd$eI_L4Wwlbr9YcteB?mr(-xis|wJ>0O4A zJSW!`2fomF`dOW;2r~+y71R}#h>pAB8^I+;tmljlA%KC6VOjC*B>1+dShpPdmnZ#R zRVzaDqfoU@FX45Ws~iM58LbD=8JvmI^;|L$0pNu5z06nwxk$EUysQcWZTE;%IqXRO z_T2idoFkaDw+118Mj}(XC@M?tMEaf@NP0=nV4@Yok?qu*;cw7fqW7*S2(^F*NURwD zaKT70kXa!zP0p4Ugu@I8=ZJI*N}7~j!6A$q6t|wJ34Rfya?X2NnW-gSAh;?hzE)D7 zI8dqe)8}mam7?U%7e=-C=6wcWr5>?=nGrTdV{t-;KT?%W- z8-7sLH+MFzB4i5{bW64H+f>%Ge3e}CvaqHj-2?(9AHxpIO;X~T-GXu`jbyx?{U^bI zz@24Sn~5^#M4w@S^f2COtj`DlQ<;JI6xnXh&k&=V*bd9!F(gDYzribcl?g-YWdNX5 zl;RQaW1aHw6O|}(5ciI#$=kQQ&j|o?x`ikWXCIHvwQGX{TmMz{ogEHs(l0+3@XG?xwiThw6hqD3RsCq8b zG4rf`|1LorGNQYy`J{?v-71z@P&!YO8sOd(3>ju9{o=~W2y5%-dcb6TRTi{^C{<9_ znh{c$&W8X__(<_8u|G0WDFS2dKfR{W3%Nu%b;6Jnb)Tacz!s+nhaEYh`-`};g^ zZupqqJ>EI%&~DgrV$*1*`}gQA#P#34|&I4jS(I#C?<|q1iKH zVwoOVFuZFxsJ^7mAUtKu_UCyPJcX$%JI1p6z9-yQTF>XbD)0_MMP%pM`{sb1S0YsB 
zLoAX{o(0f8Fo?L%XzJ@IR|FsO5@j;Ia={$Muq2!60g+0H)j2|># zH_GE-IF94x2073wfF2=FCr1= z1Ux5kQYlopz4F4l0)hN=X9W~Qb&mvcsFlT33BA~|ny1^xA%>Xp?WAb76)`+y*sI3hBQ`~u;s#b~eZ1)*q8qXh7)ydX$I>C1 zhN>IHM}=FqSC$9ittj|#s2Q{>gVpjCRLOVfR-UdBBNR78*tecaVcghxeOy{g3;anp z-4TjjB#;o(SHShltyA#KZ-zdAfG%Ms%vJ)-NlJxfDE%O` zYn4qUE^JciTQw$j9@T~c=icH5r`};SITi~1k<0n}_l?eZG(73UOc*F>Wr4B+LPi*4 zD-|!vk!PKW^--WsaGHu?ZpkG|2gWH>v^7mp72fTIu^ZpO?9i8{N~!wzsvw_XZwF}X zAjQbnf{n(SGUf27gM<#@eNlEZoSg}1Z4wV;HV~7EY>#AAq_PR&DrhJpo&W?Ug+z1% zMjUu+I1yv=CVEOlJ>69S_H8W#q!Ghg1;NQs5sE4Wb1Er{RDIu7$RqSZUv?@*4xk*5 zm3uW|api_Slv4YO^{gl*E$T#Zoyv#rl%xA{z%=;)!VVAwNge0c;iVCHU6)Ni!qj;h zqc;3DzN}&a2LH>iGu4JYwGjB7yb}s^6fI&Q51$Av(Ts(Eu0iRqAXI%|eV|Yg1c*aI zhP2Hnl~g(^P5#>dYyjU3sVj_Kpd9hl&KK3)Q4$a$o}YK>h`q|9Nf~6-;aO7Ys3VNT zO=tNUI8cj__~aZ|nFOAUvf>v6Ch;h7O_fzrS3_K1t|+@Lnw{ATA3quLDp^L*WtX@L z^69X_iXbFI&<)2JbrDk#?XTjrrGtV3X8C-QAkec)b+kM%OK9=N?Ti z5Onz}e|Uc)Z5a>ekt>2^dMaN-&=E7Ys$Wul*X{ciC4vubFshT9 z`v5ocb);Q&ejpcx@n@(?e=o^{h0&yy+y_x*TuXK~!7edQ>C;3~oTp_3ax$A>vK$rz z0UesNPCpBFMeL6#HOo(%?8K-qotC;T5jY zAcQ@vPoVv16H3K2S&+(0fF#9^b#%XI_h}Ohrq{0~h;PRBe|0>y0XLD`EbfxpG2yDs zA6XZC69*{J8wn*fp`M*6hgf2}e@L>cMRWW3U6st+{Q3cop(;!YmyhlM=Vv<&fS+Y7 zs*%^&f+|g@1p>O!Rb7l~H!hZ4*VFJNmf5|Q#T3Kw_f5-l`)`i~v2T|EE79xOO;*xu zm};B!yiAUA<>2rcsD;Fk3mB$Bi9;r%K(swExvRip95<9h3Fwp=Lc&p{vze$WTu2VR z?`}Au4`JD@AQy(C-lX7!WiY`#nB=A|hZMx3e-7Y-IkC(ZPd;0+qAsJg>X|DnMg)oL zE??N4RpSQXsdankgB%Z`3JCNSI)9~?eI>B=EMj}J1q`EvnS$7oB?cD2dK2mk zTWC-qHiL1~OF3r;0huN$XBI~XMHGdM2Qm;~DnW-u2|X@i7pH)hIq)J*q+%V2z)_o_ zHhtI-z3*i4=ClTXc8pD_J|+kf*aWsa-2|X7;iNL)F#DSs!U8(95L%a!5dwiFj%I8}YdSVgROXWM8Nh5p+&&rj!9Gx6*jezUTVjETB!3cFQis8_vhmzpP+!MQ@tu z=+su$xz23*gagbeOQoiVEeP%VYl4r_Oyf(`#plaLw?MLlFjvO5xV;xfDb z5F$5_(hzRlUD&0lxQTg_JZa{w>d>9?ulNv^uTcXo7R6>ao5IA&eA>ZDWh^LT@yUB? 
zv+K$DM&U7zD-TG>w?8o$=VU)#u3uXq#g`2Qw#%X600uFHt8m<`uz!Q6~$DccDa z>~y9FsbW3H@)TnUf*pnRWk7D=V#yLTxKlrm<^YqQV>R$|b~TfeGpQ|iczr&Fc~T0O zOzs*KMEO$Su$SR9T|=u3fEdkGut5fE@C0 zNVsKbHlkm^h0D)pBSAy3g?360C5fM049KNuE}rl<#yD_G3(*exI8}BT6)IW>A2uo@ zUiqB1+Ne`11ty&eA67i>ubCL=rO?SV!7a>~xKcFw2opKfHeqT#yL?k-ltX4nm5l>G zGt3~66=SVbIc?47A$Dnb6sDbz974JUwtMo{;}B7m?C2=4W5n>$MnN#HHzWT?$4OoB zvB5ko*#uH4G3zb^VhZXIu7nb^2pKWzZU}2I6D4!!qTWQi3>H}U7U8m63wa*`*;X*J zu9BOH&{j5ht6b06s_l3wf!6Os%5DTw;0vWZzCDvZ(q-SBf@qVqIYI-Q(IP?V=}E>l zh3#pFu_do7C6yW7B~|CL9X}1;zt9a3z-dDphb!sOx(^|rfFbu;eL|r~6*PrpFEjp$ zQsoZpBxFg|L9~s_r)_b;27O5QsC#DEjx!X0#H?WD z(7_p`hO@{AS@*q4!}*mAVk`qmRg$ww?DMdDcG38HBMH2mB0<*W*XinUh@SngnEE)M ztl3aT6G<7`e8Opzsb+V}F7RSBA58L%WmsgUKm+f^8k8j0F`b?28h`x(Z z644W9pP~;!bt#7_QY}Ya&iO;JL+8o9wGBGuX0$44IX@!<48g1fa4b`IL2wv`BX2;q zo?jHiPPZdTq+JysCbfJkkaLHHMZyfQBGOXhR}f>24Ht2el9xCCW&{9ZX+-vo|c#hYT0m>MD<8i^B~WpXK1dL5@v;4@wnX z>N@ZnDfBvm+fM^(@D|98O`btS}sKNFm?+TsEi9UGX*& z3#!v25b7MI=ik4iyJnBXMbkT^s|xV>%Kh&0mC4GufesXr&&Qo13>SFUjr>5?$?XVB zLP*IWLB3R?e~ew!VYYIeJH#GJhx)(R+=33|dWeYWX<}E)v^Fg%VhNcCx-rTBt zG`}XpA5KX;8byiKeZ?WTqoQ)Th_Ehr+zY-;q(qZOcU-F8s*aI}KqN`CIVR#jkvn?g zM}a$f?8nxU<*R>M$K#Y^bHb!~P`QH2lqw30l!FXB-FLJ6D+$qRx|k_i&B z1WfceSHMI?Hx9*at-R*3WD;l+^Vk^!3Id_le02a0IjHBs2PRC^%@>z-A13vDBKSm% z>fI%D6*6^}{1n>18~!+F`9LrCN4!T86;?A7edk=pB($XpDX&nd`}*C(?3jy<*tKE` zi4uQhmIU7E30!3O$EisKO?;yEPXiZG@?k#}Zsyz+N~QYB{4qX0ACuXJ8NzuwO+K&p$^tF?V)958&e#T6rN zqsxCEjQGXecv}VBMO}scIsR|#&uk49y~p!a_?~0`#`owfUaW!znN4DNtp1JRaaMKJ zyy71N`U{a0KVwF`3f}X7gc$wQOVmg$23bUNHIlb6(DblBAxwXQg_IjqLlv3sP|XSS zSW#6ONZAD?aH%#Z8h|E`I`2@QQuuGk_}bIv>v~S7B;~alY~zRdFB_JFKa1Jjvhd zlSB6P9Q7yH3^^A3*-DmO&C(pR!M4edGJ}ELIo$N>4s}M>`-;Zrz+@@Q*DrFM zkyt6csDbCG$G0Q`VDO4llR|);6oRT3Cl}q~hUu9u=VExC^VvAXL?S_b9rOOrH9^`- zJXc@b`Tz{*wuCH2ul?z%E-&7)Ev~TR$Ajt|bp|ZHdf?FUlDQ;rXTPzCN*kX|+u~9i z5Hhl&G>N%fL+DqZ)g-cA>$;?DUyqQAAU{O4vJ}tq?yq+435w^r1>9eu(}gY2kDs%& zEO)+!%OXq5BEzkdgCtE366Z4%cPD>7Ws}Bet@1oXd$H%bi%#7H{+hzYWx%Q*#T4#( 
z%UzeM)6^NPa`ftLqkDGr)U5c`kkK&Xk^^&HkCT8bPze`LTzF1isg)~XuCJ(S`~Y9) zc zUkYt*yUQ0|sbEWfQqrUMP28zQ$17qe-QM#qiRf;L8r|zJlJ^yYD2Am zLk4!giAL_)i%!pYsW^$Q*|xZ)Meb{U9-iX-qq=7uDrJEl^6l@tvQno%!5n{)TzxEF z)s{K1>6*q|tR{))_Fd@MGwZ^6$A#;Eu8HRrpV9^itzH)9wz>Ml$nq%5U3VI7ZYXCE>)%WzftGaX+ zp7Ap+nmeL6V#iqWMK!V7bFtCQdu)vG&dee*okm%x;Zojv%hcO^Th(pfg9AI&d9x&- z9kJg)r$)SAp|QGae_F-!!cSoi_tK97fs^a++?}FD@hwN+BNxWYe{m>SSr|FtKUbv` zbBHP(LWdp~xO?Q*_NQt(KB+F*TV<*&PKZD{qKnS^v+|j~c z`epM+U&<_(Q_r86tD7$yAk_1}mPun|5g+o5wjhN~T^B-#JW zZ#?@s``?F{&0~u_Bu^=2+Tpe*b#?otLWhX63$ygZg3J zo}FPT%0;UFJ*(dMd{WlKuT?ppC}t?d`<1v}vO<<>gwn+osEi;p*%|XFj=?-wg^r`p zvg#R8vdn5T74z}20}0h#>dZfI4Bt_2BYJER1wUf)s{-@+ESK`)>Wwy}r|)>?`@>AR zvwmFIj>c6RsunkTIB+~}8FyAF-7#uskDEFputYJK&s_2ar+4*T*byN@BU*EeI{3{V zG0c;F<4&N|HHMd{v<@rg_!1qTrSs8+=ED*S4}|=7>ZMJJ@bWQ`#Ms%d)LD6o8h3Oe zJLTjdf^yyN6fx(6*jt?s?pc>2Q0VeUK-1~Jj}aODovTW8>I z^M7#@i;XX4U#cD`yHm0H-=v+_8*gindAt4VV6nxvhg7(_&b?iW|4rI>*9Ttw}ZgHlkk6cI9b(@fz1({-Tx;2K?P9Cz!gmK{U7A1 zs)myF^7YpIoAhZF_nPQ!e>eRbGs;pmL{0jv_3y*hG`6cIdU_hB*f>26{}89?)$sRX zoK{NzR4CI*=`T$`Z5#exB-1APe|Rx%qW^@ds#eRir}VdFopxCN>3B~&tbYeO|F6ak b%9(vu{u?)s?cJ=VdhOeDXg7YR-{t=cvWqzj From 4451da05e63e5b5ac1d5ca1b426691c3f24e426e Mon Sep 17 00:00:00 2001 From: Christian Opperman Date: Tue, 12 Dec 2023 19:11:51 -0500 Subject: [PATCH 42/52] Update README documentation --- README.md | 38 +++++++++++++++++++++++++++++++++----- 1 file changed, 33 insertions(+), 5 deletions(-) diff --git a/README.md b/README.md index f6a85278bd..d28c852d19 100644 --- a/README.md +++ b/README.md @@ -1,14 +1,31 @@ -# CS410 CourseProject (Team CAHJ) - Coursera Search with ChatGPT Extension +# CS410 CourseProject (Team CAHJ) - Coursera Search with ChatGPT Querier ## Project Overview -Under construction +### Problem Statement +For our project, we wanted to solve two problems: 1) the difficulty of searching for information in Coursera videos, and 2) the difficulty of synthesizing class information into a digestable unit of content. 
We solve these problems with two products: a Chrome Extension to search Coursera videos, and a queryable ChatGPT Integration, leveraging LLMs to provide a study tool and information synthesizer for UIUC students.
+
+Essentially, our project provides a way for UIUC students using the Coursera platform for their degree to find concepts in their video lectures without having to tediously scroll through each video in a course and use their browser's search function to find a term. Often, a class can have many weeks of content, and each week can have many videos. If you know there's a video that you want to re-watch in order to study a concept, but can't remember in which video (or even which week!) that concept can be found, this project will hopefully make your life a lot easier! In addition, the ChatGPT module is a queryable script trained on the Coursera video transcripts that power the Chrome Extension, allowing students to query a specialized version of ChatGPT about their course content.
+
+### Project Workflow
+Overall, the project is quite simple. It consists of three parts:
+1. Coursera Course Transcript Scraper
+2. ChatGPT Integration
+3. Coursera Search Chrome Extension
+
+The Coursera Course Transcript Scraper is necessary because dynamically scraping the course video transcripts simply takes too long; it would make the search function untenably tedious. Similarly, without scraped data, the ChatGPT integration could not be trained correctly. The Transcript Scraper uses Python, particularly the `beautifulsoup` and `selenium` modules, to scrape video transcripts from a course provided by the user, and then indexes those transcripts into `ElasticSearch`. This indexed data is what powers the Chrome Extension and ChatGPT Integration.
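As a concrete illustration of the scrape-and-index step this patch describes, here is a minimal sketch in the spirit of the Transcript Scraper. It assumes the transcript HTML for one video is already in hand (the real scraper drives a Selenium-controlled browser to each video page first); the `rc-Phrase` class name, the `coursera-transcripts` index name, and both helper functions are illustrative assumptions, not the project's actual code.

```python
from bs4 import BeautifulSoup

def extract_transcript(page_html: str) -> str:
    """Join the transcript cue texts found on one Coursera-style video page."""
    soup = BeautifulSoup(page_html, "html.parser")
    # Assumed cue-container class; the real page structure may differ.
    cues = soup.find_all("div", class_="rc-Phrase")
    return " ".join(cue.get_text(strip=True) for cue in cues)

def build_es_doc(course: str, week: int, title: str, page_html: str) -> dict:
    """Shape one video's transcript as a document for the search index."""
    return {
        "course": course,
        "week": week,
        "video_title": title,
        "transcript": extract_transcript(page_html),
    }

sample = (
    '<div class="rc-Phrase">An inverted index maps</div>'
    '<div class="rc-Phrase">terms to documents.</div>'
)
doc = build_es_doc("cs410", 1, "Inverted Indexes", sample)
print(doc["transcript"])  # -> An inverted index maps terms to documents.
# Pushing the document to ElasticSearch (elasticsearch==7.17.0 style) would
# then look roughly like:
# from elasticsearch import Elasticsearch
# Elasticsearch("http://localhost:9200").index(index="coursera-transcripts", body=doc)
```

Keeping the documents this flat (one record per video, with course and week fields) is what lets the extension answer "which video, and which week?" directly from a single match query.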
+
+The ChatGPT Integration, also written in Python, uses the `langchain` module to split and store the course transcript data into chunks, which are then fed to the GPT API via the `openai` module as context with the user's query. This allows the LLM to provide an answer that is informed by the Coursera course content.

-## Requirements
+The Chrome Extension UI is written in HTML and CSS, while the functionality uses JavaScript.
+
+
+### Project Requirements
This project is fairly straightforward with regard to requirements on the user's machine, but a few baseline requirements must be met:
- The project requires Google Chrome to work.
- The project requires ChromeDriver, maintained by Chromium, to be installed in the root directory of the project in order to enable scraping (see Step 2 under Installation Instructions, below).
-- The project requires a working installation of Python to scrape new course content. The file `requirements.txt` includes the packages necessary for the script to run. If you plan to scrape new course content into the project ElasticSearch index, please ensure your Python environment satisfies these requirements. (TODO - Create requirements.txt file for Python packages)
+- The project requires a working installation of Python to scrape new course content. The file `requirements.txt` includes the packages necessary for the script to run. If you plan to scrape new course content into the project ElasticSearch index, please ensure your Python environment satisfies these requirements.
- As the extension is not deployed to the Google Chrome Web Store, it requires a local copy of the codebase on the user's computer (see Step 1 under Installation Instructions, below).
+- In order for the ChatGPT functionality to work, you will need an OpenAI API Key ([see here](https://platform.openai.com/api-keys)) and must add that key to your environment variables as a new variable called `OPENAI_API_KEY`.
Instructions for how to add environment variables can be found here: [Mac](https://phoenixnap.com/kb/set-environment-variable-mac) | [Windows](https://www.howtogeek.com/787217/how-to-edit-environment-variables-on-windows-10-or-11/) | [Linux](https://linuxize.com/post/how-to-set-and-list-environment-variables-in-linux/)

## Installation Instructions
@@ -20,15 +37,21 @@ A step-by-step guide for the above is below.:
cd desiredDirectory
git clone https://github.com/christianopperman/CS410_Fall2023_CourseProject_TeamCAHJ.git
```
-2. Install the appropriate ChromeDriver for your computer's enviornment from [this linke](https://googlechromelabs.github.io/chrome-for-testing/#stable), unzip it, and move the `Google Chrome for Testing` application to the `CS410__Fall2023_CourseProject_TeamCAHJ` directory created in Step 1, above.
+2. Install the appropriate ChromeDriver for your computer's environment from [this link](https://googlechromelabs.github.io/chrome-for-testing/#stable), unzip it, and move the `Google Chrome for Testing` application to the `CS410_Fall2023_CourseProject_TeamCAHJ` directory created in Step 1, above.
3. Open Google Chrome.
4. Go to the Extensions page on Google Chrome by following [this link](chrome://extensions).
5. Activate Developer Mode by toggling the switch in the upper right corner labeled `Developer mode`.
+ ![Screenshot of Developer Mode toggle](/project/CS410_Fall2023_CourseProject_TeamCAHJ/Documentation/README_images/Chrome%20Developer%20Mode.png)
+
6. Load the extension from the codebase pulled to your computer in Step 1 by clicking the `Load unpacked` button in the top left corner:
+ ![Screenshot of load unpacked button](/project/CS410_Fall2023_CourseProject_TeamCAHJ/Documentation/README_images/Chrome%20Load%20Unpacked.png) + 7. Select the `desiredDirectory/CS410_Fall2023_CourseProject_TeamCAHJ/ChromeExtension` directory in the popup and click `Select`
+ ![Screenshot of extension directory selection](/project/CS410_Fall2023_CourseProject_TeamCAHJ/Documentation/README_images/Chrome%20Extension%20Directory.png)
+
8. The extension should now be available to you in your Google Chrome Extensions list.

## Usage Instructions
@@ -55,12 +78,17 @@ python scrape_coursera_course.py -c "course_url" -u "coursera_username" -p "cour
3. Once you run the above command, a window will pop up and automatically log you into Coursera. It is likely that you will be required to complete a CAPTCHA.
4. Once you complete the CAPTCHA, return to your shell and press Enter, as prompted.
+ ![Screenshot of running the Coursera course scraper from the command line](/project/CS410_Fall2023_CourseProject_TeamCAHJ/Documentation/README_images/CourseraScraper_LoginPostCaptcha.png)
+
5. The script will begin scraping, as evidenced by the pop-up window navigating between video pages in the course and the `Retrieved` messages in the shell window.
+ ![Screenshot of running the Coursera course scraper from the command line](/project/CS410_Fall2023_CourseProject_TeamCAHJ/Documentation/README_images/CourseraScraper_SuccessfulScrapes.png)
+
6. The script will write any scraped transcriptions to the filepath `subtitles_cs###.json`, where `###` is the three-digit course code of the class you are scraping.
7. If the `-e` flag was passed to the script, the script will automatically push the scraped course's transcriptions to ElasticSearch.
8. Once the script is finished, you will see a success message, and the web driver window will automatically exit.
+ ![Screenshot of Coursera course scraper successfully pushing subtitles to ElasticSearch](/project/CS410_Fall2023_CourseProject_TeamCAHJ/Documentation/README_images/CourseraScraper_SuccessfulESPush.png) #### Note From 9e1c85c0304b0df17c3908185339de954a378186 Mon Sep 17 00:00:00 2001 From: Christian Opperman Date: Tue, 12 Dec 2023 19:12:11 -0500 Subject: [PATCH 43/52] Update requirements.txt with ChatGPT Integration requirements --- requirements.txt | 7 +++++++ 1 file changed, 7 insertions(+) create mode 100644 requirements.txt diff --git a/requirements.txt b/requirements.txt new file mode 100644 index 0000000000..3c642459d1 --- /dev/null +++ b/requirements.txt @@ -0,0 +1,7 @@ +beautifulsoup4==4.12.2 +elasticsearch==7.17.0 +langchain==0.0.350 +openai==0.28.1 +python-dotenv==1.0.0 +selenium==4.9.0 +webdriver_manager==4.0.1 From 7b04292be0f711c6212350376c8c9ea572b9bbe9 Mon Sep 17 00:00:00 2001 From: Christian Opperman Date: Tue, 12 Dec 2023 19:12:27 -0500 Subject: [PATCH 44/52] Remove old requirements.txt file --- CourseraTranscriptScraper/requirements.txt | 10 ---------- 1 file changed, 10 deletions(-) delete mode 100644 CourseraTranscriptScraper/requirements.txt diff --git a/CourseraTranscriptScraper/requirements.txt b/CourseraTranscriptScraper/requirements.txt deleted file mode 100644 index bd96a32468..0000000000 --- a/CourseraTranscriptScraper/requirements.txt +++ /dev/null @@ -1,10 +0,0 @@ -beautifulsoup4==4.12.2 -elasticsearch==8.11.0 -Requests==2.31.0 -selenium==4.9.0 -webdriver_manager==4.0.1 -jq==1.6.0 -langchain==0.0.348 -openai==1.3.7 -chromadb==0.4.18 -tiktoken==0.5.2 From a5714ea10073c5552bf3dc0c397e29b9da7f26a8 Mon Sep 17 00:00:00 2001 From: Christian Opperman Date: Tue, 12 Dec 2023 19:12:57 -0500 Subject: [PATCH 45/52] Restructure ChatGPT integration for style --- ChatGPTQuerier/chat_coursera.py | 17 ++++++++--------- 1 file changed, 8 insertions(+), 9 deletions(-) diff --git a/ChatGPTQuerier/chat_coursera.py b/ChatGPTQuerier/chat_coursera.py index 
099ad424d0..2e9084401f 100644 --- a/ChatGPTQuerier/chat_coursera.py +++ b/ChatGPTQuerier/chat_coursera.py @@ -1,13 +1,15 @@ -#! /usr/bin/env python3 - import openai import os +from langchain.chains import RetrievalQA +from langchain.chat_models import ChatOpenAI from langchain.document_loaders import JSONLoader -from langchain.text_splitter import ( - RecursiveCharacterTextSplitter, -) +from langchain.embeddings.openai import OpenAIEmbeddings +from langchain.text_splitter import RecursiveCharacterTextSplitter +from langchain.vectorstores.chroma import Chroma from dotenv import load_dotenv, find_dotenv + + _ = load_dotenv(find_dotenv()) # read local .env file loader = JSONLoader( file_path='./chat_subtitles.json', @@ -23,8 +25,7 @@ trans_docs = r_splitter.split_documents(docs) # print(trans_docs) -from langchain.vectorstores.chroma import Chroma -from langchain.embeddings.openai import OpenAIEmbeddings + persist_directory = 'docs/chroma/' embedding = OpenAIEmbeddings() vectordb = Chroma( @@ -33,8 +34,6 @@ ) vectordb.add_documents(docs) -from langchain.chat_models import ChatOpenAI -from langchain.chains import RetrievalQA llm = ChatOpenAI(model="gpt-4-1106-preview", temperature=0) From 4a8dc4fe1d039655f22ec7ce2222f04206ebc688 Mon Sep 17 00:00:00 2001 From: Christian Opperman Date: Tue, 12 Dec 2023 19:16:35 -0500 Subject: [PATCH 46/52] Fix README images --- README.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/README.md b/README.md index d28c852d19..9153c6022a 100644 --- a/README.md +++ b/README.md @@ -42,15 +42,15 @@ A step-by-step guide for the above is below.: 4. Go to the Extensions page on Google Chrome by following [this link](chrome://extensions). 5. Activate Developer Mode by toggling the switch in the upper right corner labeled `Developer mode`.
-![Screenshot of Developer Mode toggle](/project/CS410_Fall2023_CourseProject_TeamCAHJ/Documentation/README_images/Chrome%20Developer%20Mode.png)
+![Screenshot of Developer Mode toggle](./Documentation/README_images/Chrome%20Developer%20Mode.png)

6. Load the extension from the codebase pulled to your computer in Step 1 by clicking the `Load unpacked` button in the top left corner:
-![Screenshot of load unpacked button](/project/CS410_Fall2023_CourseProject_TeamCAHJ/Documentation/README_images/Chrome%20Load%20Unpacked.png) +![Screenshot of load unpacked button](./Documentation/README_images/Chrome%20Load%20Unpacked.png) 7. Select the `desiredDirectory/CS410_Fall2023_CourseProject_TeamCAHJ/ChromeExtension` directory in the popup and click `Select`
-![Screenshot of extension directory selection](/project/CS410_Fall2023_CourseProject_TeamCAHJ/Documentation/README_images/Chrome%20Extension%20Directory.png)
+![Screenshot of extension directory selection](./Documentation/README_images/Chrome%20Extension%20Directory.png)

8. The extension should now be available to you in your Google Chrome Extensions list.

@@ -79,17 +79,17 @@ python scrape_coursera_course.py -c "course_url" -u "coursera_username" -p "cour
3. Once you run the above command, a window will pop up and automatically log you into Coursera. It is likely that you will be required to complete a CAPTCHA.
4. Once you complete the CAPTCHA, return to your shell and press Enter, as prompted.
-![Screenshot of running the Coursera course scraper from the command line](/project/CS410_Fall2023_CourseProject_TeamCAHJ/Documentation/README_images/CourseraScraper_LoginPostCaptcha.png)
+![Screenshot of running the Coursera course scraper from the command line](./Documentation/README_images/CourseraScraper_LoginPostCaptcha.png)
5. The script will begin scraping, as evidenced by the pop-up window navigating between video pages in the course and the `Retrieved` messages in the shell window.
-![Screenshot of running the Coursera course scraper from the command line](/project/CS410_Fall2023_CourseProject_TeamCAHJ/Documentation/README_images/CourseraScraper_SuccessfulScrapes.png)
+![Screenshot of running the Coursera course scraper from the command line](./Documentation/README_images/CourseraScraper_SuccessfulScrapes.png)
6. The script will write any scraped transcriptions to the filepath `subtitles_cs###.json`, where `###` is the three-digit course code of the class you are scraping.
7. If the `-e` flag was passed to the script, the script will automatically push the scraped course's transcriptions to ElasticSearch.
8. Once the script is finished, you will see a success message, and the web driver window will automatically exit.
-![Screenshot of Coursera course scraper successfully pushing subtitles to ElasticSearch](/project/CS410_Fall2023_CourseProject_TeamCAHJ/Documentation/README_images/CourseraScraper_SuccessfulESPush.png)
+![Screenshot of Coursera course scraper successfully pushing subtitles to ElasticSearch](./Documentation/README_images/CourseraScraper_SuccessfulESPush.png)

#### Note
Please be careful not to scrape too many courses at once. Coursera may block you if you issue too many requests to it in too short a time frame. As such, we recommend that you only scrape one course at a time.

From bea929e6147dc0098fcaf73798b7020742838c2b Mon Sep 17 00:00:00 2001
From: Christian Opperman
Date: Wed, 13 Dec 2023 06:19:07 -0500
Subject: [PATCH 47/52] Add Future Improvements section to README

---
 README.md | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 9153c6022a..9b847659ad 100644
--- a/README.md
+++ b/README.md
@@ -7,7 +7,7 @@ For our project, we wanted to solve two problems: 1) the difficulty of searching
Essentially, our project provides a way for UIUC students using the Coursera platform for their degree to find concepts in their video lectures without having to tediously scroll through each video in a course and use their browser's search function to find a term. Often, a class can have many weeks of content, and each week can have many videos. If you know there's a video that you want to re-watch in order to study a concept, but can't remember in which video (or even which week!) that concept can be found, this project will hopefully make your life a lot easier! In addition, the ChatGPT module is a queryable script trained on the Coursera video transcripts that power the Chrome Extension, allowing students to query a specialized version of ChatGPT about their course content.

### Project Workflow
-Overall, the project is quite simple. It consists of three parts:
+Overall, the project consists of three parts:
1.
Coursera Course Transcript Scraper 2. ChatGPT Integration 3. Coursera Search Chrome Extension @@ -95,4 +95,8 @@ python scrape_coursera_course.py -c "course_url" -u "coursera_username" -p "cour Please be careful not to scrape too many courses at once. Coursera may block you if you issue too many requests to it in too short a time frame. As such, we recommend that you only scrape one course at a time. ### ChatGPT Integration -Under construction \ No newline at end of file +Under construction + +## Future Improvements + +While we didn't have enough time to figure this out, we would have really liked to integrate the two Python components (Coursera Course Transcript Scraper and ChatGPT Integration) into the Chrome Extension as well. As far as we could tell, triggering a local Python script from a Chrome extension is non-trivial (if possible at all), and we had neither the time nor the funds to host the scripts on the cloud for this project. However, we would have loved to have multiple tabs on our Chrome extension, one with an entry point for scraping course transcripts (that could run in the background) and one with a text-entry field that would allow you to query the ChatGPT integration directly from Chrome. 
\ No newline at end of file From 9574209f100ec3db165279f4a98182774b8bc7ff Mon Sep 17 00:00:00 2001 From: Christian Opperman Date: Wed, 13 Dec 2023 06:36:16 -0500 Subject: [PATCH 48/52] Add workflow diagram --- Documentation/README_images/WorkflowDiagram.png | Bin 0 -> 57220 bytes README.md | 2 ++ 2 files changed, 2 insertions(+) create mode 100644 Documentation/README_images/WorkflowDiagram.png diff --git a/Documentation/README_images/WorkflowDiagram.png b/Documentation/README_images/WorkflowDiagram.png new file mode 100644 index 0000000000000000000000000000000000000000..6b8472ed744f25a32ca4bfb3c6bbd5d08140ffb1 GIT binary patch literal 57220 zcmeFZby!?Wvo{I^0)zzjK(OEzB)Gc-ch`a7Ft|%_*WeP|-CcqNcXxM};CGSz?)N?W zeCM9~|9!sUnVDYQtGa8dtE;Q4ev2SkY0-CYaNj^cK)e$d6OxC3c-0310VM(V8mKwd zG6w-)(8l~y{16Zo5eN@@Fu*mTftb7$1cVD21ca9_1jHRs<+Td|VgCsNVqX^mf+GO} z0?Rt3QH~2}s0FEt8%aq)PyuB)2v|r|h*v-f68M9F#D#e24JbkAL%#oa*%XrEU-v)( zjSLXLC&UXgU`WnCWi+7d6kq}Y4Sc`Qh63-GmoD&rx&HGlsAuV*2O;dScbcNt0oxDR6PNjw<%g0~h^vxR^_{Q2^RjH5<; z4*>x~WumBRrz-V@!@$adPS4OvA4KP5Vf{h{!R5pO6fHn@dW23E=9ab`PTa(Q?%)8* zFV*zKgnw?aGvg*!m69bCv;u<&S?E5|eIn+0Lr6%-1vWI|kQWm97dddnP5jl)&YFXs z-qF#K&XI}E3T#Zzz|PK2|A~>Fk&za-gVxsB(oWBb*3y>buTK8%M+jtV05-9)Y!iaN(TstUeEA_`qTeN4an*rq<>`jLgb`p{r^g2XJYgpp}Y+EZ>pD; zzsUZ`@~@xBWoW=*00!yVS%LpcEztHy)CwS*f7brpU~a`aG2{^8gmmn(Heq`^z6;;hqmp(=Gfi6EYer9C(%+A2fz{>ua?bE-h|1|!0 z;{R=?{=1s}kF&xBEG&*cKi3MZXk}&2^T#A|(f{}A{|wCavUWIRz$U9Sn2jZv>`(U zdr|bA6aIbgKL~Z9G6@F^{?879o4nuyeP|-Q|2fdVh{*`wRC2xkFAV&r(}b77u>bF1 zf6Mg$it%qr{7YT`f2<&UE3^+O z-8^U^{@RK6^aGes0))=+t6Ccmw-+V@-)UpXq;O4_o7t}p=a`&fqn3U7s8HH3-ueBW zEKsmJ-xTMt-DdXl_01kP&-9l{=i!c{Qc9vyD$M2q1|Wrn4Ny#tc}w?~Wtx`ZLpMBV z3#f!5B7~;#YVRfR{6a@Z$86Y#>~ecfDwV<(^#+}SFH1C{)@$w;x-Z{4{9=P`a@?AN z1m*Q5m+;fmQ_#sr7!5vlI)&U~X+1inLUd4HBoT};p&8McJ)ZRvOqS{J7AlwEz9ZmCN#k*sfI+~_Hkrt!^n2JH z%L0brl@k(zRw~m@ZGF7fkVsZur{A~q#rv4l-W`Hvve6S(ZMze$ScgjPa$fd+6{M6U z61KPGxRNuTBS~KK-h})w2|0-I`Pj{^NyU>QKoUhZLnGp`@rAu-W!YucYP6GJ7tz}p 
zPL1mf;hjdp=Mb`3s8v#Crx>0o)0wGha=T@WqfwJj?Fz$iFIKOSplk+)$UI(ZD&qZp zbF`4RSg$vBu++qAvC^6f-Wis91&f$XD*0V@rc}$`z26&I3=Xo{X|B;eL_YLvyk{Qf zc;)%APTKtq237Xc60_N~<~6m(eOTNrz!)L4ycb~3DIwJS0R!J&3s0yUm6etCOZYW3 z^yoKOq)CF^eAyypTCjtI^0Z;hyURCgwPAg&$f}X#Qb|I)W#Tbpg%S9i4%XEx#nsLi zMoO@@Y&Pq@6n(+|4`OiM_ctdKyw5a>t~=?xo@tR}$||sLc)1v!?hfc(o*!=t-0u-% zsa5ETGy=Y;{n2xSuwb{;{0%PQ)q2%jbUGK zY#Qrvx4B$ErBpMK#bRFQ-Fud7V3i^73(gJ4QpT7nR~boBDHkVrV1a3%zo6E;-?PxF zeo8-$hH>3K;y<-IUh#a+iKA8lBe!7olf2USAoTa9yNebXW_riG`G==SXeg;<7E&1* z84icTgwoGX$j`Ss>2U{4*RU%MwxU|D=bP~SHvYuPq>`8d%c^k2j<84{6nk?$%$FLK zU>?p2|7E~Hc1@-*xag{3^j^6x_yUpReLQr%OprhNI4WF?U=;JEMG^xdcyG~E#|A`?!$@zy+6RKP{HAE3`VCE zXl-rn*AJtZn~e{3aK6}9I-bd;nX7Q1!aDSaxW?`p!oCj`&V*Y9jb9%ve4QyxrK01D z_`q$if$JOfmyMdGhYZrUXuHu?ez6QVaL}(`qn=lFUa9Gv_OccC%BG;WN$Mb*Be(`_ zpsBX{`1t5gx?dkAQjJ8Bh`MR@*>jS38Na6M_I!TC^7HqfpW8>P(^OGW*`LUxud!TG zC+;9eOq~P_86Vs%I;32r#lAzo{@>Figi`Nx1_dmt@bVSak89Aun6y16bFHhFvLMuD zG1E7Yf220_Vn>WDb7+MA(D7thYVU^yxLH0&VhoA*A>Zd{ZW9V5_m`nY{sySJiq9BNSZuYOlLkV4t@Z=l+h#s)N zgt~Qa7&Wu?Xn9(ti@OU>7k@>5A(XVtG$lZevk;3_MlDraWEyMp^z>Q(L61SLtmuk* zH6Xv?G?etY+I)^tpmggu^t4cRENX=m6tFQ~Ud`x?``cxDNCfY3=3W4%}78!*dZ~3(|0UTcrkbr8OG#_=TkX%v+(v-`}i`V&bbTWA#I% z!b8b$=ZvGYsQMx-k&o_TJkvERMw01@YH6Tumx)d1(>>{(Ri~Q|-C0!1v`Mp#USwDx zoytLF%%Dkou``ms|MgQ@WF&H@+j_KM0ODA)^TpTW#ReOt7&B#w)v+v5U|Tb;Tw(J& z(fJ|ZXL7#z3ou%x=pWSl(V)JpexU9OV3!Tgqi^D=^r0b!jt=p9Pu+*H6li*WQv2Xv zKow&KXSUbdif-r;uyI|y!w2XX^C0bx7Y77w}+Ay zfJJJDb}~~&kmp4y1zfB@RC+4-zs33b`*dyd;6Tz30V@~S+hSs3&}cR5Wz~`}>9xN^ z5b)*#wqSlZq*2nfah1ZjZR<+z~x>TzOU6x8dL56)tC?!ITk!r^7VJ{J z&A030CCP*75(UfQ`sE*jus$FxiL39l8oTo~W*p&YWM2qXVNlHBSIV^o`FWhrv*&N% z{I~!MX}A6pK2?&oXSUYj{o~DwXDndU_s4TWDscsgL_8A?(mg8&-Xfq=9cH=REIFF2 zw7ORYsEIB&A~rXd7P;=7jFiR^7YMZX_6FSDob=LXwZ^P!6@Xz6zm5jz2@T9$o%uYt z)@3!N-AQ(bBq%Kl#osoV>G)DaxIgM8v0R;a-5Pr_OFf5UZp}VqgM3*@t*<_fjay}o z2e+8$j4!)EX9Nn2342W1?TwG+rF%%|>FF`stP81E===2x7KJUhy1V8GbrHXnq#HkJ zzk8d5iKz#@;3JG(3Eq;pTMwa+WjtD25y;r9Q`)&4DSj=s)8P+A| 
z>um)0Wkig0U7^!jHlHaMKi(b3QuNk3I+hp$ybWP!)e)#aAXrRi2_wI2+l{e|LW|M*B#YBUxanfhA@0HT5=ChHA1lz2P#L9vY zX|HB&@$d%eaVZ~H!)`+DUOjMWv#nzxO(LRlC}RlYqS^)j|z z;Lc41&ztPF`!_9$@MkIJ4pq)GOPaQETBVj@Lx>i3L&>cBzUKAZl;Rm4KW(gb^p(RQTD zDo4}H@^Co+Fs>`Ud`Ug#+BGZib)u}Y_*vOue=o(t5ek@a%797q< zi9qg!E~PLciA;ykCdJ52=A~=nI2&W~buM2~js>pUC`q(@#V}VSvk{n8j zm#}bVC{5Ws)yz*A`*=uJ{;Zernyi;RkSz7H*)+Dqr~V3FHiMqfGy4vHpB{NYgW+9M z^=Mffx&f&_ZuLb8UimfXm13*>JclP_yleM<1t(D%xP`_?;|2JSsF|7}aNf{pUNuf{ zsC#+ViK2ClAV#?$&{TRNoZ> zHov`*NG8($+CERuKgmpdj!*AhjwLj~?&n9*dzKmf22*5}p>4FID#UoU=jS#}ADnsh zsx4wQS=v{Y>-_l}FeA8p?Kmcgai`J>vGOhfnohepKU1Trsj0V_f>c=&!h2emU8K|; z<$y%C(GCYAp0I+xvDhkyaf6lk)W<;d!NFT$PpCK8DeEb_0$J=GRL(5PQ59n|!Kcr! zSw|9BjQM%0zr-3`a8Zq$nW6@+V(|n#Ne$uj&3>1Fa7`0tubpg)xpW>%VONOV$lj2= z`i@1fodFz{a_+bcp)jR&bLuF1F+D;IUEm3=f2G^n+e$FLM@^QGJUP7%LvS_UJ?@kD z_4)wGwSZK1fYJ8@@GM!xb)TOeQ_x)fq)Lawdh^WxD;7;y_rym?DZU@@nN)GS0cYHr*>+flLjMXn(#7I% z)@tt;+QMzt*uGV84$Z0__Lz|3-k`cOL#fOET7Zf345RlH(Xb|FsaFEm=Pk|G2U#kIiv(=f@+{6X?jz~I|&gTY8 zdgxMXzS6D~QDq^xSzR_jbH{Y$%;`87|65JM8t^NcsANdi=Y5VN(&+IHg zZx5$RLPAa-1sg3n5`v5)T*F-H^aG#~ZQJnx)scagzEeLkozOTM)REDQg*I#Z#`v6# z+Fo@c$5K#|qa_bltl8kBT%s=QcmfVaWRe7IRDn!7k7ALs1yjVwr94bXXp87}%>rTG ztDbJ|=hbgW`MMoG1+4Hm>|5}vw`qg1uw*3a+};;5K}k&-Ta3laJ`bB|O5k{L1aDR{ z!jY%kT2Em4Jthw%7|C+Le}o z^M)L-e$W=_C^3MDflRrwAdC4dAH8;SMifb#xP;1hkHAYnCIE;^=>DjRF-c5@IDSm! 
ztozlK<<$@n{Y2~C{!SS^t*z2Gu@BFeuOFqIDZ==J2#GlQAmN($^~v z1@5w<5?ng zeNyRxKn;v&%nzEV_D;X&zBUudW`^yA7}ba&exGh@8CGX*Bk`bNfp-C2jr8@MLXVZ^xt+36;7_uIPWGNTbml^8A5 zEC(cYWgtGX6GcJG;VRt#suW83xbe!qRETup7V~poz~!OyC-9t z__V{(ne^R-tRx7ZHZBj;FV!y~I`6jj{=B)#_ikoEP6mE;D-zsIG<@iR25|c=N(I;n={f1tb9tx0I3MO<7Lyv$4$6*W^TiL>>@|& z4Y+b`9Gt&MTt91V?9iu=MI~45?PpnB9t!7!J4&_eK5Y#uYO3tyo+Fx^2VJta4sF6o;c2%t9)r2cs94?Q}!|ML&3&$B}G*S{Z45*>=Majw5iRHNI5b) zVkdFQ*?=?j!M*)}wekM)W4v*8C;LA5o+#G)AR)T_G=Qs@^0H&04TqvPKY1w!*X31l1*}H1QLdxPHTO{V3B^5GR@58AA%~lrZZu9xl ze!HDLc&(e(;=t5~CQZjud`;)-xC8td={=8j6J_i8@n({f3v8TUL@fx^4mi%9CnD-@ z#b_r-CE{1ACiVtfuD{OcxW31i9&ICJeeEugaXyaA7g=jLgG3pC=MvlLGXtYvUx1e*}@0RB`BZm|B zv7S?ccg(-ImPyG=G_dsL1SHS8^AsYoZGmZpLn0iN-&84!w)%NBz+qEwwYazo0P$L> zOY58CrHxoQ9kRw$W-)E3`gFjr5L=ecVJ#wm3CkdM#y8Th7Jtahp(mt?Tf(PJjTjtp zfLGnCBCsHj_dGA9;h+&>G8t2zMe3+RGKg|M;R)cpQPi6#Kbd^JQ#Kp`G$-4 zMV9w2?)WY}yVX30F0JTC>F>=WKgC$ulB*CT1<=-co!-DXTy3P0@{Vl3qt$PLPOM}=CZ`a_b-%y7&FT5dq|tWc zl{FY7SsNp$ydlXj{G;!r`2o$otRh;6*e$;ak<{g}K~K>e!zk}<5CV=>Fk0*i5IGqS zz{ox5$yeS*!zX{}sJy#`*k(&fiVH^!-HaX!Tcy)3>VweYWJiYDCX8gU7<5c{yt~@? zp*^7M%D3H|d*HX{IplTNOuv9+i6Yk;2Van?_E-G}~i+#tV zw08X!4h1ehWOQI}&L@IVAC8h3>b$|B{xGvS3n3(?1P$yeFmDADbMZEQGivzbt91+x z98bVeByemDvS%Ri)2H;S?FmhtF7~! zEeHd+VKRMQXnB%gdvj&ddIOJ3Iain!VDq0YRx5d`?;)7&NuRGdki9;fJK7OZJC$Iy zm^bgJiYdq;gLGz*k7*611>7X$Pag>}4Fx|0@eIT=U{NQ)ONzb6c1Y){R0?BweLW)p z)kP2~*@MSW+N%^fN2wD!uys+l(fVQhxCo8S)J~^ACI|1N=%}d5>Gc_#Uewd&QZ5(6 zypcOM))Cb8Vp8d^OB;H5@z==*6D*gEU2!N-r13`1H`>l zHMJ3bNH^{Z{a`exLU(yHl8#i@m6~5=OAVaz6Y?t+)C~yK?Q-AK)b6uNF6Qe^CbEq| zOHcy&W@yn?O(dsn&yO-W?&?hrv_}hd5f=sF*)|&uSEJY$tX? z13?_YbWfhxl#~<&Cqo?RzXG_SK-${;%g3l4n5r4sdn=vFB03*6oRGQ|J;Zh;0m@dy zJtoTT$$CW1WFnkf7fsZ`j_OcSt)%?Np`I=KU_@fK2La1`FRkZP{GRVR2I+gxREl+*20d!Bdht{|rQ7*6Z}0Oow=l6mRD3B)16^Y+hs4RsirRLo_?mVWVy^i@e0`^}v{`^VNHE z!*09#trH~_HW=R}rT!L=R1J@S-R>eWsz_QYs^7`rPWREpY`uoE)#bE(oAT1+R=JdO zL%M+jTf#*GcYaQxHXs7MO&m!X6S9wcuLLBMMAFjo^dFhO%BBP6B5R+3!!B~VOvkF! 
zM7+(4PxoS5%4$?SY5oc0qA!9Cu@Z-0HU5=z*hy>#?Ymg!_PZZY9b!DT%v6jJ$yz=C0deWhi%(7_D#L zFWGfunm5PY@KHJBjW#-aN`$o8vcQ}xAK8>@Try2HJC*j}eQqocwp%J4$Ll{*Pl}VH zSNEulr?OHpaY*Ufw5F6$`Cz%`%x%=ogC*K}t*C4_$R4U7mG~wDhT1ld3c=*KV;e8F zI8{Q_&tp}(W#vGn@y?0*U7@E~ScszZZKa}YRNaD7N*2xO4HTD&pD_VdYF->2##AzR ze!z07xA{2}F>a?Ie_f2+uUd;;OETYCmzVLH+p75#qeB?ad!%#Dr{!q8&Mx)F$#ZTK zRrFaJjhwoH0y^QTm6lZ?@6CrS;oa`LTA8GTXI<;3V zr@j}pR8b7xm}kZ1xlBYPXI3{6pi}N3!XL?pp9&{X# z4*cBv>25k+d5f&j0j(BeN42hOQgeBDMSF`&Ks^!!T~Shp0jWC?t>T(?_a z_cMltBN!G*A5lN&ynoe@{{wo2ciewMQ@BC2(YPe@oFxQY-1QOm*B3%jMAn*gr|UyC zJ7C)ry7~ab9Z_s#6vA*nqfSXvB}cUTA?!G}Ic49zj!d)g;hPMJy+9qBCp*ri5dh4Z z8Cj(N&6Jw`dCM;gS_Iz3={$s0?@{fyMuR+zT6k&hZYV)2>}Q7`#Qll*k*xlD7l(>pFs@g!cIQu&h_bC>uKUc zwAVIzQ6k9I6t8n{zP3^3VbP~E?JSsD)q2nwxkIR8Ew)C%;wsxpM(GvNfXHfjBhbjD zM@N4M?sg$9Sbs|0#Zv!XeeUsiM~}}gA|;HTpI$XGvWu}pRT&LU z=4?qc$bmkZeLMwnEokSdA+y-ttwAyyCI--k?AKp(9W078T2v!rZoli%n+Xuh_2vfd z$&kA@hHic@$oG)T3`e!M7`m2i zPbd!jh=7<_zwo?FJRN?ElODDXi7=MMc88{>=I-sq4mt`7#!8oFm$QpYQjyu73f$~R z?;Z0^u@TSSAR!bu_YcxhgC9e_nPf#0X;KOP}h$}A@}XO4_s`erO`*~8$D9HMP;p-kgx700K<>ANFV$A?&?4x2-S+x z@9F8$l?t~hLS``Z9sKd+7Yrb5UDnKGG?WC&MagWenRf4Ua{lCatQr2ZC7IE%uRV>0 zK)%W0Xs};`gH@`X`t*ZEtNYK(?wOp_kk3C3!xJ}(4c=i{ovII%9P+m*67anmsAf2b zbzI>UZaPlQ_A(}l?PcezInC$CSjyC&POx7Ua(Eg@B(2^WM&UT`ND-=y-M<4MCDVCC zy?-Sx38AB-_`vGWdO7$&avN}ugWTNR4F9)4HwJ8P0BGJ-N!JkzBuEiLUHelye#@?_mCn+-`^^Sxxdzn*A!9YWe%fh1>& z?FqoW;rNtR%grefkMBVs0#?hEV87Fi2;TYeql(}fF1V|#cXdd6+b)AU#2|?ip`0p= z>9`h^@>(g1zJ`=ax!%mM{K^H5#v0N~LN>^)5eRSS-{%_RhGaQaO~CMjr2Ps#N^20Hlt> zOQ=NlXiSvbiMQHnC2cr`JuB@D$UjEb)z!&2qrk3Ljbv8Z5Lz*Z?+l_)C;H^yWJ#x; zSLbXQYKecl`c_~#4aHn^jMd1f!QL0!k@!^jY#;r7g8=1A%m`gp8mW3TXLl(ffs0c( zruKpWtj0*Tx^w?t&;0Z1+f+{%K}R~+T44a|v(T7ciTh&yf!&r6iq%rJhd@>g|FQ#? 
z;3X9phd@)@8iGZy2V$N_eTQIku{}fy1X(UOg9t3L|bO!2J9?_xqDgTsnvO?8hiO;O%24%5ft(1G% zmPpG99CO}KKbFC0QDAgj+33=`_5@w0a~V%(!BLo0_0e6fwaOdjUfhZB(L;|E-^DI% z{M0j_DO=O^18c~>@xz|5BAqMBa9#BGCq2*H26RuY|v;JI&&!c03^ z<7x;Ko3;bHBCGCHhe`1~)b}V#)*-HA!4)=F|{hX|bLx_(rsHxuS?y(Us&_I7ZlR$6y`$XmRzZpZNVAwc1e`7hj={ zWw3ZX)r*|=emYxR^M;g1S)Bk$z83C(Q97tddpkC10(iormnhET6Kh?W~GG6xe+gC0JC3Rm16Rg-X&wGn$cz+DU(^sAE@xwv*8}!AH%S^E@ zmTFXg<&Pngnxrg|6>+^e9(r?PsvIpX7ey?*yOZv@%|Jc@`0%mGP8Zv>`I-&q=<#vf z03ykpG(~bYxe0(o4WAqdrE=KI_s=9S7)<^?X`YA_3B%2ar75O2mS)9eSf|ENl(e0i z-iGVuB5Y5`a3DzKEL8}DGF@#!+vhxU>>!AAk0L-k6-phEw6 zA~A*d&t}SnDrIT&l4Ugk=b^^1%dq7fAy=v-F_$MWDy$P5<$36x-l>IKG~N5gxKnfB zGObDXk%-BrP1pTbImZ+3DVDL{^lE6w`KH*998*~qIr`olD7%#LS?p_PvL9`7H(t#k zWeyx+DMSt4Dz$u1X2&S}7BWwA_)L?4slC)`@bshV;qLVJ)XcS#gj2*it)l73Emzsr z;rQX4Lz=v1-6Fe$D}fUgsLsBF!!A9pZ4gmz1aBIG)l7Wx2CN}*u^v1$m z9Gt#3+|USSjeE8(s3?8(@bmf31|@MbZUQ?Cv>)T&N(eBmS}py$4^;uR>;&$oEKUoLew{7AX>afO$fMuqjtUWpQQKfY8 z1<{zJxvydK65$VldI!S#-igCxguM7wCY7TAU6Dd`-Zu2GbnxP_-x=UPy!#m_QPxE4HVWvp;8f5+Nt3vR8`Hw&UaJ;K zQ39Q8`(@a#UvCk81N(d3>Bi2Gp0laUG5jxS#(c79L`9nk?DgpyD$n7ti~?pP_VbDp zBDi=ogbkF#e)q14V(l8`ZN<_WJalx2bG7ne@u~}bNCX#c!>uZvw0WDZeA>R3Zk15v zs^Ux^$&nU*ub@^d-AhI~%k54K3weLO`{nNOfvRSj>{w}c392R*wYcURqucc_`^IU# z#NqR$Rxv@UCo9prWZLO(1C;?1xm?UO+q(O=%916C`V2>l;H3r!-Tl^&u~T6g_thm^ zic617t@QKQ=lGle_|3qWMCA|63!WH4@)rQ=rTp~i6aUppwfym-^KvzUNjent@p8-i zE@4)7e|!6(L1p6LffHp$Gvf3(}~=7u>CGF4Ic#IeK~FA zvz1Gd(DCrhSJy*5F5^HHN97RlR5gl}nZWKYPd8d#55WGCZc;@k??0PPD&DJA83uuC zgm@E@+uC>m4Hcv8>pKfWf}ZXJW|WfjO=(+CW`m;HmTV;#*Bgp%N$RdT))UvO>$EZ* zA<=NWG0NV0uXqA$1b+l)FjZf<2+1mrW=ILYt~ddY8(UijTL;~P(qybku_}MPF#r=& z9*OdhZQypj-ktWpv!0kC))pKKKT<1|{o$h@4xuZXEgT&HQCxP{&*0*BrBVe!BG0Vp zU$??3J;Yu3{%x?AQWCgW%Zu}(Wi(&x(prPAtD~dGny;uL&XtlBhcfYUg0bXkxoold z^A`3aTSe2^WYY6Qqv8rSCs98G6{};}4~1}>@GE{wd?(#t7sxzkjWuj6e{q*W9q>=H^ zkRr;!SN=F51^3ai$_=TN2eAevI*<36alaf70be^jHB3v}`F0=XnYbdnZrZ{5o8ef2 z2x05<4@_EhnyGEFR~lrHwhby}8g<_@3_ptKmK@Dp806pyCi+(9(eEPNn>N|)N$9wp zky58=n<5p^aG({|whBI$IBH!d20F=lFq>D*aWq%lUfPVvUkDgtm 
zex9GdekIZlwZQM?>XP!fUcWuD`D`%;K*mOuUB^yW`5!K>&YVNZUPFPV@xCuF)LHGt zihYBmJ%ygup{NC1R~35Sj8#lJP1S}OgGML#i($@_e5B&e@}FkoS)xc=b6ybE^^Q(+ zqmA}vN8r#C0C!f1MKO^p6=EaxCR&6ry-Fk0pMONaWHN}o`tlCZ;gyH9J^QH&jDr#j zCK6T`mVG@U2>ssbvp1>BY}B4&gjQL_PA$kgsd=QKPQE>=42y2r+Bsb2%Mep73yii# z_*Gd`)21Tb(9?(yYVp^DI{MwD01?IALLTpweUslP;l}0nt~r+*xS0V2+uW1RPtH1R zi5$o3%>|{0Gq%*R@@|~Jqx9dvxNx?4eZt(hBJ;HgiA@*Y*p%8AOj2b$}I zCMVLkLUhv3mQsZDf|=FQC~k+E`+lq&_Fyl0yN)f==k`NbB%BV1trhz%w9ZC5EDnX$ zcd#Oh&inp^39i2LTb-pe)x9wew1jsu%)z2G;=qlVn z&6v39`U$s}@_&0x9lr~pllN3YZrGTaeOHyDRGm*(Apv|G-w;MHDowRDd^+11R?Rb8 z?Nd3~;Q#^SCKCoe&K2OYlxjMLEwrx}(I-&X7n!GL{rI@_ z+O5|J#8OJXbB1OuhIMh$!VjA_%HR;Te0I!Cl}@tnC-VFlvr9=Ge;1v0bJs)e$F4iQ zpUW$U-2^6Zu?zP&e+a;}!s{>XlblL3>a|0J_G99)5&OB4lS@+>sQ%7oEmp;fD$!Zy z>ZA0@gP2aAD_d{Y4G7q>IxxM;Ew_ZWW2ZZA23^({I#mhs3h;&v^k}y`g(3) z?%UA%sfcd6Cm*tDlK3vI3U(RU$4OSVn&RoWH@%1-*qvR8O1PO4hF5e-sGANv<;!aA z2{1?I0;%d9zeI!G);c`xes3Q-K`y$x1ov1`^TRgh0)c4=82^>Lu${I{o!S;*9)6V# z3wn${c6wT16~IkmGE4#acuHY3@30I!LF5`Wos0y-zDRzIj?Pf-0XTJ~)`x{gQ~Eio z?6SpOI&6FZ;Fy-tx0*$tAUcsN_Z=O&L%^EGfWO)zuvX=6zNX03v)YFHFkl+myOqUq zF<&8E&a#=Or1Xq+{GA?wc^IyLKSgm`i_4Y7s7u)WmvCE2X9v_KOx1go(-PaA;Y+}2 z#bxeg!d$p`)vcxw{tmZ=ho8qrs7XY7ORq~`1eaw>q(54Em5{o!#fi5f7cVQ18nP~C za?Ks_JB;^&UDkurSM`$TF{xz|*N9gZ{K#jxa@Ap=#R8a1ipwAy8=v7DYrZlp7iDv1 zA7Tv-4WcEU9lJ7}$z~@IcEPff7cEzvxbb&fENZh$Ny{Rbo5D?HA4ISO#fA+%_f>8n zmR!fab&Gb|n@f)$jU22O)KBm`rijW6m}D?754oLj!5vWzXD-*&<;xURw&l;K7SWBx zeWt^?PtJ5|tgRogYVs5UvR%^S%^8F`?$zAOwaIjD!Z4L8S@gI%8HjPuxx-3WV>3g| z`97keV23^wE7t--=s_u6AoJ4ib?&fglWpsk{-9>jn)QARCTn z*&iIjNTDGXqfn~JK1EIpO#(usamWuidIdoE5@MG1%w2^^sep+f0TwX@j*D(u!>VGt?+w;p*&UY>9qbchzf+NS;UYBJZuBDE_>=n z4dU*J1PweXjb%m=;7Fh^7Owe(se!pLuDj+|9kY!nZ^#11eN?QbUs zFF9MCNGElUrF?Uyw}cRF{NQ?Sm1J$i@ZLN+H@Ds0nNXvnVuaOnFwFdDi%$(Szn7af zz>r+oM~uok4S(}h0ZhG%0=t{UUtXBL)wkra2D!QC8rOJy=M6r`O==K58aUS2cgN6b z8hO%DR5Z1ocYj82sao@_PRjZ!*nagE_S{E<9r}^bOU2qSLA{PDbR&RWwVNSDODA|w zMQ?^Jy7l+P%}lgTAXePqQuMV}dSP+a-;e;Wa#wSP$FO4_Zt*>51_P+ZO_+?^cCzCcS> 
Project Requirements

This project is fairly straightforward with regards to requirements on the user's machine, but there are a few baselines that are required to be hit:

From 88e0153394e6135399ba08b1c7e4947a75e65dfc Mon Sep 17 00:00:00 2001
From: Christian Opperman
Date: Wed, 13 Dec 2023 06:55:29 -0500
Subject: [PATCH 49/52] Add Chrome Extension usage screenshots and resize
 images

---
 .../ChromeExtension_Activation.png            | Bin 0 -> 113673 bytes
 .../README_images/ChromeExtension_Query.png   | Bin 0 -> 211838 bytes
 .../README_images/ChromeExtension_Results.png | Bin 0 -> 398930 bytes
 README.md                                     | 25 +++++++++++++-----
 4 files changed, 18 insertions(+), 7 deletions(-)
 create mode 100644 Documentation/README_images/ChromeExtension_Activation.png
 create mode 100644 Documentation/README_images/ChromeExtension_Query.png
 create mode 100644 Documentation/README_images/ChromeExtension_Results.png

diff --git a/Documentation/README_images/ChromeExtension_Activation.png b/Documentation/README_images/ChromeExtension_Activation.png
new file mode 100644
index 0000000000000000000000000000000000000000..ccde9a85051e7b5fc35504eff1294a81b100a33c
GIT binary patch
literal 113673
zdT?F`;Uie%I~fGoj7V`pk78nQb>6r}!Q82}P!Y;;SKYtch`ms;aB^OS`BwM z`%i<{2_6hv8DpdaFD=J8dogzA4uL!9z^f6$K*A;bMVQT2i`&UT-0NzAvEI;r_L;R^ zH@Ha{Pvb=}F2l1HH)AsUWAl9C>z}6q0u?5O1*yXtC-rR;j`mMWP{c9#7HGCPPI29t ztUpn}nB!(&adU1`4mRLR;7*m?%W#O`sR34LgiCC-vtv0Q>&`~IET?gXS&e;AS<=;8 zu@FL%lIK+Yzt$_~9A>=qlO3jw#RxnvT7lmS8O9P*>1T-$lj(h8WvBm-E_%-NX+3SA{8ub|M$Cjn`{7 z`6^bfOZC;-*Iz_XLdcq_}&L!@H11<9r=^$jg^CON9=^M}hOUszxE1+hL2h20wGQ8%||w)oh2>gU;#qFh40GMsGbm|1K6OTeDDmSRG}z z+w0R-xEk{n?`2rsn{%%DRDtS`>pL~VxaN2nS}>%T%VvrK-OIG&?+>+{F!Y13lpYEkW8 z-ju9SJ!RM%3K~RdrzqzBv0xI0o+$nevh$@jEVT+2Ag{RA^)gj8*gtgN(inFp$ZGi!EI71$I~*40zN{D|v&_$?=$P&`~0KkvHzM z%N8S~o%D4UL$U;#N_2RVi4Rv$ST%UQ)uWb;@kiNHD@sTf|*I zkn?d#kQ7N}u{E9h*!z`Si{{*#mV;J+<=x!&vi@_m=4ahc`O>INzOCx=47&-^h&19d zY3}$3l`^PuJNJg7R~kO$5sx3A(vQ_}8~PPZ7wJsSbol5gZr9n2<$k#LxLz3V=-k{c z9`qovE7c$9p**Uc&1WtL(w`_h^CsUyEgTSW>WG)Ke27#ECSd{g-L1#j@@52;m?o}V zoEMgg5x6D7@`LEc$<&b6MW62YH* z)IRN=h+!X*X7yXy@h)XDw>QT;lAwx8L|k z^%DMaJhfH`Of|>ij4jY^N4}E&aHQJy5S9|BY^qoF&+@NfHcaB`2KCW-^s!HLO&O*# zWQc@5S;Dt++np}|Y({w>py5Z@46#PF2(e}FHIC+WLLpM{BS+uBAGYPR_#2LBSQr~s zAxZX|iZj&gUD}9eghSng`FuUyD4bnlC>&?C3EcS^rOZUOEx}yGe32gb!+gbMumA+* z_q_%m1ua`oXspA@{ zfvDySZY{V)8()q+zH;w)*22=?7x~2v2`2{z+ym9CY_sZtiG){?{%v*E=u{PEa^H${ zdwL!p_jT4TWUSMqf=veXc(+M6{KoaHEBIip$$=)DWU904$Km3dP_D_(iNyO~M2iG#RhL38-Ad zh{;EiW6k}uM2gpjD;!i7c`XWp`f>H!l}Y$g_w;0JXRWkqwc5=03G7*I-xW&Y&u~8X z?B3>Mbrh@oA^k=z>N(7xbUSHHThFOCOx9{EbjVWcGMB9%4cV?mby0E<8~FA?C)KKg zn&?-4hGEMoOsF1oq{U2x@V5{wJ6^5s?nbT-M~|%OG2MXXxo_Upz0P|qDu45875hSf$Ocj`;s^f->ul>5uTzR4(1o=6MjhLpA(Hm;UU9O-QV}T ztJUfw_P&>#H|e{Wtl{D29Y^HM!k+6rcb5~5{n>ekzZDb~5_8Mgq6qwYpnrabsY*G) z3jVT>z5SuL$ETbwKZ8(01brbmJz_&8Zo-fht6f6{g5|u|$PU$^X68Jxn25ErA`Xl$ zG#JPZsb6-%1sJllW1w)Zt|hc|gIc*v+uPW+S|W#NqM0n0*Pa)>maBu0uYOHtncKW` z1gRBY-+mbtzH}gtjo76mwGLU-Jqcc#sjJfQUD>^v8t_>6Kh7jNa{on>9YDBTSXg)j zI~C8=s~>M*MqYhLxciRYQJ zEzN^scwT^X?rf;0M>yEO)Ax2B4-7)HvR-l>_p+_Q#P--0|EEklTXyFUoC}vm_yHyh;UP>2%7gYd0NV$ zjP6B0$vQ@iQ)<4$kCTo|9Y7R5d|gSetzw>W3ax+z)NUB?%2Q!R6?#yQ*E1b41ItqR{iUV 
zjJBD7TgnXWO5JgPq_o>IHmgnx(tbQYB+2&cQxj(OBUq&5V~Gh&Tgxp}mIJZZB|S5q()wO$c!dLrl@gQWa~KwBSX-}p zsf&c{!dZSLFsmAt1`ynUm!iSfMzCbfH}%)r_%OF}bJW=m{34~HrdaRkaf-mT5*FU` z>dZZrc)f0A(`YXtc5T=d7~}Ue%&}b!oE~nR`vv=4?8cZin?Z|`O(QX(Q^roS{{XJ^ z2O6l?{)+KOFgnJk@`H`pT7`VY?wR>Bj%w(7;*G8uB*nGJN@$Z32o1PCPQ8;fz*EJ5 z@5P4rG(iOJoEwWdot^T@*Na!k8?TRg+__KOu0j^WHG##Aw>5Rg$@?c@UYmZHBp2hsVKJ|uL5otSZ8~u{x^`Q|sHTPsb?Y-0mL;>JK>LaE3RVHtrPN(auK~Fp& zJrLlnrg(6%xhf6b+_`-`x6L==x*L90=ZmxXCXM;a8s742E-2&|UK?6U%ygi1_YZwM zaj6L-tvgUBySQy@Oe_{yZ^Ynd;L2rCgOaZ=0ataCoT zt}{QwO69UOc!pg!LC@&F1G5Gh3YJ&=#xq!fRBtB5cMC1u^R?dZ-d^pW?~-`(0@*L$ z^FhA1=$sEh51fMLx6cCFDfwY>t53k3PFm=~)%^MO54%5kHkfIr(W#rA>>wVGogxM# z$qIuX(6#uRgFAeeW&n$Su3g~c7i%VR_~62WF+rJUnyjc-?)$D?v^cLtEB%65ATeK+f{>UTVTRNp-B*FDoznRW1HWT8 zqwxY2j|TfHR#(PT+uY7GeW&8|PN1jyAd5I^wqY59$OX(yOVITab!N+)QcJ2{QU^iL z1R^DSen)(MP&C!!x-192LY@`oNAd^ypwKgi!ikjooy1huQAy(R?eX1kh}LaXB?yAF znaQ+C51XxU3o-OXy_N=g$vceXxb+~BpWKd>q0@RwH9A|lYF}uivzR|tiuLoFJW3vcDCQ9 zUBHvdvEkC|WcA>=+rnXOxBt0E_ztU!%M(pAs4$Vax!QV4HZ$r^q>l5`39Wqy@A%jl zwE7Zt#&D09)QqbC#brH${_UuBxe&)RVy4U;$hni>pxPB;7x|Sz1BO7G$f;=Np#5v>YBG=Ipq0@~7U-?K=LZL=3%{R-CpGeJ$Lhr>FiU+KIAsZgzJrbb6smGBHt%{8R> z7WUYBFS7-1au+;;nNI&+8JLTJX(TrK-iN5Oe44={K2%&MU4#~!&v@F*k!)+LVGbC-g=(^b405{l&l&7Wooy!MwSofOZ!arfH~)1C3xQ_i0)mgAMCH6%4|}E(fJH zsdX(5G8MwwS%B!<*&jMSvo<{pw*fIK)Aw5Hz8WEb2mw_X82P?yFVx(MGDbK?_bdSRB;KJ)pUT)QR9k;ILXSe-aGvA zF{6M^B!0P8*eKmFFM?$jfwx(jI)IJXlNJ*-h6i;3TUgtO{6JSUTfNf3XW#ByrAR*k z+iWM@4X^&%p^1v{oT3;|wUJ=EkepctmhuHse^wOW0EFPPv~0Khu==f29(LbF;@xOz zf`^{dU#MCHcJh4Yw2ltspDp%jA*9YsGhv2ppQ$~bGalTuC<5Dlv%81C**@*fBhM}b zOOii1w;UHF7a!8LHD^LYz1fh1#m_%;?w#cjK?=hUdav&f;oPR@@mZ4ZpS{BY+J+WG zGd>`CP3FOjLO}v7=5Qy1=87!wU^vi9)bi0It<#_P1bEnE4lYuQp%K1x_T6x%g#LPO zdpHtj^G-<$Ss z4)szikfMn-qzx|9@r~`F3$rjmjUC}BGNBQM$t~lce^B8=0H4y#E#wYUThF)i2v?oD$K+CiSzrOm zAp2254%j4lqD{QZ3Z75=El`{W^?X50bq6hjk6K*rU7Lqc3J#VxDd()KzEWk`8rN!~ z;i1RXE+n~|cSueL)Aq{)^s_28#hyfc(2ZI0zUEO`0 z;m>sPxbP``Jo|N}N}mncMmlx~G^T8I3-3w8i^UA$O@XwRiTenS!a 
zauY1C^a7!Fr$oYLl(55zxNar7t#ul91b7fx5bXMV6rv{~LK1ffG+eMgjWCTKX?QSLa`&m7S~fYtvDEPUm2FKtMA4 zSpgID9Z$UnCplJlG$9>jEWa5XzXs4NZr8=LHItcXnf>F|+enLMIjoN-xVpamGH+G2 z{tkJ0VCc)nb|Jy?p@33WI=KuZ(iSX={KckRfA7q^BMw~cLx z#ISb?ooDol@1;{6A@K)3jrtOoL%O3bH#Kz!%p}F_&*U;VFwWJ)!I%1zYWQ8k26_q~ za^PS4%tk`=cyP;!w0^F!2=pNjh32>Tkj2U=b$>1g|MQ=!jR#r^Iv`n@IMxc-)^4xj zkuXff`#WtO_G5M)KQcI*=WAm^-fkerB#Qtn^S!SoQ_uTo|CPRV6{e7d2ReS5>07EWB zumY}CK^G?eG+>f{8FY26wx5p#AF{o^Za~0c7WbN+-8rLHT%MEt4-Sw6;0EAs773sG zS-XAW8xKiuFjqe$tLZMp3o4oJI7o%ER7?{@5N@d)C~WA zOBL4kDF2z`NZY;afxnl?aDxRw#q4>A0XiM(PM5 zfEm0IZ$`$&9#|v~5kBNFC52R_+(f|)$6yrSG)Mk3^YwiKX|=^6da!x={#fbWFuTFO z!?{AU0}j5SY$oXdA9hEZe+vwR|D?C2d^z00@X?ff!CM6Y4G&D?8KS#1Rqa1tnMv-ogZ||v!WhE*RzhqX0WK8bJ z%U@WQ{B4IXiSg}iuFTK7VHpy`>XkEBWZ)ZxaZ0wf{2kx;nz2AQCcOVJRT9>sO1aU{ zx03uEV1DcKp`IckFfx?6Ni=A!CpM8K1)j2p5d+~>s!fR7BBEi}p6kvI>K^ihQ7Bu{ zb|%NiS92a4!l~(VG7GCBQExJ4!_}?9+=qnYs_)<2a&s-U>8-V4-$|snTBKYr2Ia4H z5LwbnqZC9JVlIZ<=PU&uV`&$u$;rt*R%5=ezI)IRO--;pwX@|XwH;XU!?g_ol!yn> zNQTC`FsV5aN)7j4G+y>Ot_1;aQ`uS!tJgUP9$T}3)6Q$`zEw;C?d@ALiyY!!kD9ND z#D+xFxWpC#*N#+YhgQQxatn>WCJ1Hh{Or`USqo>1UAW#OjOQb&7FiQ%g4K2sS({qE zCvQ^nNW|m+{-N4_{GOA%^F+K1$1hK}OIQF>-E-t_C&2f5NDK1-1p(}ccOM0 zHFn0DXnr?@)QZmRX1zWL-fRQh!i+cL*-4H>D>DT1d>XPV;N*$Xs{GuF56}8r=S0iD zMj*#p;8>F$sX}P{!?&5p`HDPZ9V2Rk80R5e=d3QSs~en+Vqv_?m{1skqqT02923`6 zWYF&wF?v*CD0#48HQ!SZuF-bYZ>$LpVNy?&20L76YcTj0t+iySGPCeW_&4D-UXVjO zD*1OE**C7n+Xs(?76NPymCsMgsxxL{&+TfRm-}H zzll8yba8y+GfS`*_3Bx_mt)73{=#jI+um{N3G0sZKNc~V3}MInm(D+70C{W#-r7lN zp6vRvWIe`{%k^pLX>I+ZcU$YzA~Y9+)#c^voMzvi-~=ehOIEzjAb6KI%3+m`jynMK zX7Ygg*fgWe?-DPuCuWOG7hglsyfp`R@uI&@UAWosZh)yxe=T9I0D`h7zl za86#?XPd7_P7Jk4S}tPsqMV0bLC(vv= z%1V<=S(1`BcNB#>wLA7+3M>)N&&^p(R&y2*eAcAovv_0=E_gNTW)-qaSSed`9^#5f zOe2)E)E8l2x#bc{lsCWPob+n$24?%7SA|Ovt@UVKBKL|=3BjxuDx?V^3=O1&#Kb-# zemm*L(^(kL!b}11#s{9hi2D*X#3=dXv2eaK{CEM+2KiykPQAWmb0^QrZ~TecwiU;D zr=f{+K@4)pBK)>R!NT`pS+ zT!ZBbrtQskkmf7CUUEmxW8Xx&`R$M+LI_o-T-c%JN{Bu0E}$*3e}HodlE%|IPqems 
z)ckbd%cEnwuJVcb#Z|)SC*G6&Sa-dK{FK)h0Ds!eiMfMx*2$lXZVL`G=C_O2wnmpf zTG6q?ZmH2HP%(`DxUt^XSZ~(xD>k5B#O^07@7$=L0Vy^-GjnkUSVJat!|_*K0%dFJ zs0Rdx+i&BTF*OSI76rAp%1Bn_Qn^|gqN>EWYq{&QiS&}lH|F|#=ElaBb}S5iC&p^p zCzCo8J^Ry&>^rz@>yioP42^hJc$dKGSMIHcPI_ad%0!?kjZyt7pMajMuA8OQ(%CsJ zaNU(8v_n5NT4Oe7cTC7cK$oYjnRElAR+$j?Kje$XlUkz4deD1csaE(+T&OkQ!sQZ` ziIu4;noWx&a`msg)zhX4k;F)aZZ>W6dlRIb!fT0yFby<>-w9C4JMjmk=zAy^(8~K- zt#*A8jwlfpsyNRu(^q?eLrUkM^<<*W1RWV^-nzqj{>Pp*sE>yegLB_n(;OUKN=<}p z=V(}aVpk$s#aUbbCDgWd0KfCXf!_ih1>nzF;>we(zD&Qd9%5}>!rVa6y?1*;u#p&g z&}Uzx;W~OV9LtzTXapjH;~|&qEXV%I)|Qc`;da<=-CnfnvdR03kg%YmD#fxzJrJl= z!=yn=r^J=HtfwsZ8OkF979>QvuVv}c3+G!8Pj;N=Li5)rKmcKT=U^34qq6HI>1>KG zi9YV5GwdoYm;~9QjQDRgkH*?)Dvghi14dt?MKZ42O>n%~cg#MqE0+*8Ow_aEHwm#4 zU9BuHw1pPV+s^fUGv5Qg} zZK`#pN@W!&oEXl3nR7>&sb$B~0p)I&xYkbFfM!Xb&pDH)k5?$tkAO6)}i0 zrgMW@U|f3YPE9{L> zwK6{w3L*BHEiBqIw>A94!}(_3Zr%6%#6jDdX8`uDsJWj=gdXG6J|$Nd%P_<|nM^*= zkW&aMY!76$s1;o8P7?L@T zE}m|gv25gP&sq2Xyd;HB)z`YXE{aRI`aPrM+i6d-K~JnWmdG3As^K=VNcWQ{$CQ`h z{3l*KdlM8-A3UajM+o>SCPsQr3)Y=lkxwnL7L2nSY@F8n@^=snGLh*|z_B%j>T7t% z@rpV0xOyvHu_a?VmsQo1k2DZvuhBB_hrmdNvN7yPDX}AOp!8Eu$Bs=YwxsX zLZXPc;Wv#f6YPF~Be<;e$2Jj|N4hdmU@y$Y@Ox<}!R?+e+hF|D`AQ9fnCjLcqM4$) z35Hd0I>YYp?s6bu5*+<3c&I;bt84oJMJ+?;LT{Aq6UnrDxQflSCkc7woJ+WL=P5>o z-Y8-bhmg8`n4c)GiLYKnF6Zl_rf9k1=5+!LG-In?h}(keetnBKDmwKX-Vbs28xu1# zJ1uIuWA~>SYN1x_HQkACoP!1yWDPp18m)H1VY;vl0X}4(Xa71MR(SyOILQE-uVE=# zGOrz=D?X`TnwGwKpN=p8ZJ&39kT40QsGd*nxSHQ~>f;Ng`5!;cknOu-rHSJOE}Aip za=*rKF7Q7ezAHn6#Hs9^*~sT>SIC}cj~&zj|h|W(zso+nC*$XD6yJ>{HbD*l@ATVC%SA4`3@xV(7m-A~4uP zwU}TAvx~SEAKSb#Ny6G!5_f#me}M~mBt3dUu*0Z#ypI#kO*qlTBhB~G;-+Wrg|39w z&6ZOA{Ny88DTtHinQ$hz0zokOhPjc-ZlBLReufaU~Du`makYG`*=mP`^WkctL}v|gx%6UW!u8oEa$v+tk$(g zeR2;~=1a}IOKxFj&o21?Q1_K_QGIQ@HYhDh!+@xOboZdtNP{RKDbn3Bq(~|;bcb|@ z3PTMb4CPSL(mABm3WfyA(?yH4}RIuECRUYemt!lT$6y#2?D}kS0EcWl(E# zX|tcg4@p12h%g?t02pi?5inu93GXZpvs4p+M^&L;XImZ+LJNR8Gy2no`}WYMBPzT< z|N18Z+ab6qA4>q~nhfArPVq48+5lESHrw8%ReA6G5V+%`9pb}*q#tg)RUGdPbN9_o 
z;?!*-$nzzndcFY%5R`olE-N8m@WtV%QpG*;`Zr1B!Mes!1t!1!c7*X$HjQkq%<#LE zPkZgETb>R{A7@Uwm4}H~eW3LZN_8vat3V!VJKXm0F{v&cMwX?ZNZ@c^K1ZFDnhk@0IWDg7z z4{UMBJ|CSKU4IyUtma1g`C{LFA?VquHoqJYVehEAFWfXvIH`#xO3=^Nry0yWFx>cj zGHh2YHN8He{KQ1a9o*pa!)>&|&ydEnf+he+kT{?pLIHd(Iv;dVVhE57z5$DEXki0< zl(yy%iElTpl{cCE>oXwj_ZEEg6=oqk#<$iAS@;NcxwVjC%d zvry353hsRfy}(VPS3p+{h!w`YSO6D1pK|bZ6IcbRE^YI+@2$G!oMlFNnEcE?1Y)*l zjq!GqRnb)F%z425@dx2!DGJpxMDR=PZTJDFJcXZ!nc{BeQWfr}2U=)g{HWOp{i^Qc zY3}c9-?*AtxO|#((#iH}@X7jlS$S1_&iN|arCGOV>jBA>*Mfh7!(Yg$=idD}SKPR+ zZ>xEbLfM_`#Dng>Qi5F43H-({v&637r<>}B>b`IAPn0F5`BvTS{Xh_G_@X~U_ujUx zF@MKF%ch13%vXqR} zZeUK(?tQM4?)Q$pG86y&8m$$$_It=NcgLJ%9$l&=ll1)%Gs9z$OY;_@UsV>Cy&GE z9e<-J=`iFnSLawXfb)p7WE=I?;a}5`*es}>cRBKF*=^cBe^&of`9(;IT9jWaRp8mo z0;w`weh7SKFAh+z>&!V&S=Jy)=?mYS1o%Q#7*grT0U^8fO$i6{{2o-O~??HKhOdEXln|z0D z^SJHm;&0ee96+6n1Voj1U>6&x_A^b~287enYtB<2kqfgEb1CW;hrZad{4mFmMajM(X5m7VoG{>Y2apOKOyg*Cart}>18_G0Ml102HT@K zm+lyX%spyDp_Sk438BptUNx4gzW9C|qnSH1u7;93NMK+^(rJxjoCO%)at^Kuc9>9bYJB$zZ1P6{r0 z-izc6yt+8%ya6|1(9-TdZBqOEY3zmpE#TdA_+V(carJOWnf3jzxqd#xsa? 
z$aA|n%tGx31)+jgyQ{m5{#090w4;JPBM-H=4!xq;ooLox@zRq`@}M`$b`oMPFE>GW z@5YA&hp94w+0B=G`$SW5iruv~qo1B~*k3sXHVal#Fkv2Zu8|M>Ol3QWM;vh7j0<97 zSnS48zAweojih4A9Sa(-l0M%VsrU2v+zJ64X2y{mesy}OmBlx&!u5pdR;I(-$vr)f zdp*s%QOkwZ$}>gL0`pyDP~;2Dp`b>CDS8gm-)5X#)a_$aQk~QlsoNVw(4S1b*7CG| z?jZ3`;E%y?KI1;n)6yRE)d;F7i-`wTA9z#JBsG#m|A=d^@p_Nb;;NgED|^rjLjwuN znJ%AI(`&QNxW^ixYyNlOm&utKa~G|%7vm(sUNY38DYF-ABJ_Xe95t|2M|Iy;R$EIg zT)K?{K?pNU0M+q!GayCxkuv+odosb8?||M-na@`_;s|W)2lzSj$U5}tF&j!l-=b%n zX8ro-?2vOY%##3Bm#Y^FCRH8auyv9Cp$7zXU-P$q`e1PxZuLsH@*YZZ!e5G{lGWWza zxXZ3QL^aIdYEvT+^L8t#Wg{;#QK9u5&D#gWg)_!hjP`s|_!5maF@3zplhiY-FNlL$ z(a8p{;Ds;3j*{?;#hI?3b&k6q`@FIO9Csz7q!-&1wlcMA$0gkfeI-4gx~-)virS3wwt1!@mM zNS;W+5YDcSCxq?$zTU83&1Ck&s73gs-xPni$i$nSn*vJ;N6Y28Bk(n|e$KA&}G zM06ZbEeh=R9wP+3G|;JUAgs{)stR|>-|U7h%!`xhTWrQF2;^B5L;))g#Od@SJeJ#$ zZjup(TIE>D%Fk7rsPyvnZ&ndmqO$Wn|JsPzFdJ&VJvY&4*QV=;*aYwOIoekw4CgC0 zLDv}tY%}Q1$G7cf-M*<@y6hzk9(>L4+fe}>BFE>l+UMxKK^qvHo>Qhvsa_n}qTL~Oe9IpkCw&tXim_B1tPXz=z|hb za_MrTEKa@EE~s%DHeSb0GTF2HqH~`M$;j~b{=|I|cKULMzZ#6m$}-mM(#eDA?m?GW zD|sIe+YttMd!{Z@-E_$<@;7&*J5z&}lrjLZf6l5d(>Igu5_$UH(G z$;gkpv)o0s_XS{OeYc@>L^m$+5J(XjHkq17wuW7HnF3VNI9)O0y&qwVwS$ z#LS1!KU`{@gSd34?q;jziK@<2$ShJUb$b7M&P6}00uT5vS)MyL##s$YB+OShbZ8Y7 zn(%WpE4JPt4sp~z9d%rTF`VNr>><_M29LHUs^S=DV-JUCqCAMwnj07CMSCmKzGu1R zwflC=%t)+ut(znjGv#o$A=X6jqtSsWG7IOiv*+vKfGExQrG$nUua}BS{+Za$TCkT# z^ZKg5acgb2^twrWo!#tmNu|R#>jQztU9Wo7p5u9>(1fH)hObr%_Jd@#U$tOuC~_f3 zZ?O{TkOfz4n$xiv*@jD&SEeK{aeoCriPDdHmAND9uI4QFY*H(R_kOKb)#d8UAYirl zF*VTrrQ4KA1Y`tw6bKVykq(0ALS61|a*}Z#ng)0q9zH8#n(o<0gdOagsu_;bg&{hJ z)i@|&#ssKpUMBFY-$g<1Z>|K_q|#iBO{&Tn3mGo&8%XqbSM6B8Gz!z_G&wPiY}ooiaC_~! 
zoatEst@>XU;wsTQy%vP}EM3~w`MVr5(Ta=i5vfo3b>-%MK1#qzHH?55`ae)N51X!&`j zS)uzn;UAntZ+A*n#r+dr!v<#Y9HW_@FP3^P68EaPciINUCj*8uL?_i&-UXrX#vz!( z1nOj)v;9!KX^Zt)0gez)(^7_?lHD@rEaqAr1{akFYpH{~YbSzo$h=p#FT$msoR&Ih zR)za~=jfTKGRD~N6pwOfXT$f93|~=?Rj_#Htlcb$toAG(Skvi&fkn5lYWvWWd*g!7 z*tDB8@u3sjo{HFRl5LAKhk7?{P0wTwNp&}(8joIcQ7^ys&F|*FBOD5Ta?egIuKCtC zh_8}F4-7W%)Ov-v7?}m1=^688?HKKDxEm0AD(hO+yHSZDr;Jv1<2`+2o5{3eTKb?( zl9`mIe;S6ro&DJynVZ=SJQKcKWG|-J6FS-4H*kB;+GH=g$pth3fG#*-Sip#v0Wgay=FK6j)e9|C^lC^%h0`~87v>kEi*m;e3WLv zd$mCiFPpR*x%+&k#ND51Hj_-64Zm@dXSnI&=iO~G~`g`$3wf&?F@9T<&0i8F!A6~;9ptu{-=5JBi>(GR!Y8buJ z!Tpke5OQAFn9^i`5Lp0XXZk5oE{o*jB+vxO3RhJT4#{%CLWYS&A})ZKiqFBA_A6s= zzTa~9X4upX{Gwh$HmZ1MpK0S?#}v7d)ETq$Mp?yEqP4`+P6W|*Et}i^(H$EHH@*2 zX!k=WVO!Qc(lfS6WaR$br43+05v}yzjdl(I^@FYeQex(aeBe$oAHm&9YB+6o{&D0N z+d(Y{*2#OOXTuUwD*Kx8)4sxTj31p?BR6q`1nsZj^NR+tKr;Tjm`h(5M;dOoqM-XR=jbku z0b=nZlA)zrm=X0Q^QOjkxKd+))bpnWu0qkO(pI0f-Oq$7*Kf;T?rXgW4D{QN*kanc zU%HhrHhBqP<9@FzSj2wH{G~}IV;OS3asO4V|BhF$T&CYn;OjfUAX?=J-3fO=eIEyq z(WWhYRZ>%{A?K!T~wE*CM8RyiuGQBI5g;3PVP<*(lO& zOL!j03pI{3d)3BO0;dU-@SmAhQcF#X4B0MgaphK$_6aA4n}X+> zo~Qk|p>tQMtl>+=(cH|M3qDc}xKHA?Q;wLt|C41?eQZ9Ig@)vM3(yI11D zN_s(^S^n5$Bg+mhdeC=}j&IkiZtB6_4*IyOImOp_vf(JDf_)2D-t>@j%#MCY3?gi} zXMv00{B{8%|wF4Xd1HzH?;cwK(?#ULDMw|OAllX(G&}pJ_#zy%1Vvpcidw)QXAt{^QHwm8FX~dQubrD|EaC z%0DWS1i{J6M4#Q6%>h2d^;^+B>0R54xjX~~9w&T3^{E66s3OA^t$>jwXZ$a*BBlou zK%@eE*qxPc>9KOVsdeNYj5{(sKl70wu)#I(jKrAYL-0Qn$QKG?$|phM}|# zc+F&?jLHYR!FU3jv(6+9q&3rKO^dY=O74|7onqfhh(#$H+fAKvT#Ncl!3xD zPtx2${rG6d;W{cSZ^;z6dt*zd0?`_NpXOFC)R|^5Gpq!90rw!7=&*hZUb= z$+9FDNA|P9QOkyfqGlN`KP?dl=Xu+ny_N;i8vSFQt$80qs^go-f84L%!Of@#e9weV ze;sVzrF!Rf>AV4{@jW?tPE)J`oA17#Jpc=2Mw<|?r{Ir z^E;EIdJH}rxnKD@$9?Iylo%Ls2pgI;&cG9A*c!{tj}SZ|?ul$)qp5AC>oU-p$&_VW zNjvm+FRdFVSSq0*Q|3|tN3UbT48h>i+6p8d&YGU7Y;U2qrYlfSq6u_OnrOEMmFLr@ zt-9RN=<9d*i&KwEk+TPyaQGhWT#hJWi3EHPMw;_jHhu-fhn@U zW#@;?IPLVbV$%^`vt9f{t;tyM;VJg+F`M?;GitDdibd>%6iV>t7gZ%_N34JvrwO}g 
zIsxwkxP}xpEs8R%cSOj=x~9H*JtshJ^R&%*>4xd;eJFN~$fZkuTmArI`4RhZE5am7FWXMKcoFv#Lyunz@HgSoe^~`#Vl^07$=O1}$wzK4`8|TkgdTCj zfVk>b^Y>O2NE*OscyvF?L|H8L)Rc-$&)%K&-+X-HXW*rv4EP7heYMm1 z#Ajo)jtsd33G}@H%?^LhhVshNLtQK#^Q$CmX+!I^_=41-*ypeDxnOtiebTWK*p~jw z^puB#(crPT7nuZY;PbmlwW_10sY>d-NRmHtCeII_gh)9Ad*YtRdkhPldoyhS0r0$!>cXte8CO=qT#1}t zI*d!0LrBT08~O{-GpQID=(3{U&gwvB;G-GWtS7`!`pX&|r7=C-f5F3C(Ul zm&VkQL~r9XS~28XBIf&QseCLJRZ{3@;bI`x->p%NtFiEEhZ1{KEz{|W$K=rh5YT+m zx2S!?fXY=a=WptK3WD%Dq<=rw4g>>hm6gvns<11=;XWZ{8E}|!nZYYf^6OXb7n;d6 z#8a_wiewvl{UXG^9qj?TRPbvJPQt%r6NKm2Op>MYRp62IQOFy)@3CIduzO1J{*Jk+dnUXvn}yS4&( z3~t$Gq;+U2mav{*6sFbtFNIQ{jt@*pf7A+6o=w@1WVyqCED<5K%W4Sf{0W;61n|71 zJI6qt9GzXl&Ej81+@Hf zk|#H0!z#R*w(G~C{WG(&jt(@FgDNrWerOchQD}*j+a32{ zgJ03xfgJ@dyZ+X7zM?(@N!Zq{uw$fnCJgeBYAW$CZq3GjfNiWP#9qnczX<(1CLhK2 z>gRpRYfi<9ikE`2#odYmKP^kWf>@kHoFOk4L02a|{-CL~$do6>uj?33_T|GtUU?u( zn)@)T>}{BIM~?UM-L{SNEm#GRf>Yfuv709mon3tMs}&Gl$~p8kOZW-74JK!{nnyq# zwUXw{f!wQV)pDf=;=>}@j$6sW7l)wROny;zsQpu{=?j@%b{@my3fmZB`0;xg_a{-- zyM>5?Z#htR7PheO-=5Ha8x&t>2m}o${d}mx_hUsv!Z0gy^ZG`3uG-YNuF1|Zh12>{ zTblLNUzXZrP5?hw*5>tnxsVOD66kc`NGDFyuLP2fNg!3iBc+m(s$x*Dm_}lY-l(Te zNa!n;K=oy~Mw>#kMto5DIN{23Q~hmg+T1mpiyGDK`-)229I%PW|2BGhF%BtEsY3JQ zq>jfz-(;ITFqITvXNWyKE^uX^Y)YGEGlAA-7rhBcoq*RyX<};w^ol(0<`>}DCD&5( zFZQly1MrHBl%F5%n`9&qdx0{(1y|#NG%AxiO$fu8;;M;Bi880uFi3H`bz`40OHx8S zty#nIOBm z{<|7pReBs*1axQ8*paF*?bdEF-0wX-P-|~A5*2}{_$AX4N4a&=a3zF}AU{@y(Q^Np zjTD_N{l1I#ZECdA30@}3`n^jHct{K8>!Q?K}h&?B`& zsV>Iy>E@{b1O53}##WMB;9^f-5JknA`pS!*)b;ugxjnOPYo1uwE+Xp4PcES0qd1R) zF6ur&BQ0UJ{0yM(dmt+=S4=wf*rOdL+&w%`09lx&V(rLIX;`b<(lMt_=ch*}4TDhEnk3SBdSF`QrF> z-RZ7hmPU7uxt7qR6kz65g z1&9lIJKg8)UmXSWdH`x%*?T$gYS#|aOw+5nOuo}))$Fa0eqG6@xlw$iGB%av7N)Ih z&?2G0TD7D&9%74-gBr$YN?V4vEHazy?d1gr)-))>6;nw%wlOhRxc~fRIo0(I_cS=IrB(kg)VIJ!^B;5sDo{$=E~Sz{2N7@j{mKMq$@xwAr6RB;A_`L zWeAa(n@oP*!M! 
zoiD6G?4x#{PG@M2ajKawbPZ_reV=QNF}l<=K( zwQ;Me?Oe^Fx~M#2JbHd~A=G%m(9EahG^fDV->$+;In;LoUf-BB9^RbaMf+#Re@OB8 zJ5FwX&W(X7#^>1a_tcsq0bIhV$>2kDOZ&`B>X9yNyPJCgfJWBq8IW}cI#v1(EsD?c z)D>i*0vmDHy+FS)OVV}J2TBLKJ?#;Se?B%`-8r4_H2VO09{P?WmS3|n@7ezQsJPtM z1AzL=_52KIhceB$_tt*mn(V&YyO+*p9oV#LmK+ty(EWYBiAML+{fUfX)To~tq%R+c zq@r1Kq)3DZi3nW$#fSZbJ6im%?$>rb9~MEz`e?$B>ZDdlRub(_|H=-4wxR-_CAf|N zqGL6D5W^P6v0)!nAJdYzopAIEpYwwrBDo6MvPu9wCoMNr*BVwU2S$#^U6(PoC!R=9 zm-tEF@GN8L^B-YSn`~6OO3aJMTqP#4Z9rx4-S;o_pc{vU8rx9QjH?pWcvErE!woV_ zLyVr&hlQ@cdSk^o0Zmar^0k!At@mJ!S;gYryHO3&>A#uP1dC~Kw}e5{iw#mT&^2{1 zezkxiMoN(-WQHTIN9q7b;ZAP%>}klz^GO7e-M0ODjGtR)po-DyF%zTiwNmK|IhnThpe#WgXU=7?k?@hp znVA16F{qHNvyS(0J4Vv+7`uW4g9k{oWH7z?@eMn*qy6SaEn&6b+np@jfP)Uf3Nv!U z){sL^U-iN)C4!AT9?65Ge<}3!uCx}}TwH8lqwn1UW#rQ>(gMCdlJYuK=qtA}-Q|Y;D<7Tm`jcD8;eq;wT>EL>nEcn@I7QBi> z&bs-fu9Um@MoF?}-!c$9uF|O_1TgK*)o@{&C}(Le?;5DfdUyvJqXq|7iyywfyjk<&F6%CA%>remC%o=FU28KhH~7oyXjRc^dTgiR@Opvk z>zwWCwt1{+g@8OGI0}r4JNWZ`==An=k2-7#5U?!H_9^?Tk9gQEi7PE;K_Ub3eJ1zj zU7zgKhpqnAsyrY-*3#+>MbeZbxBJTI3dUZvF6Y@yYnvI&4DUCtXD5*PZ^#975@0q# zigi9Ne2%;=hp(Xf?F9Z0lhi)O5u8`?#nMt;0$>^bPy*vsKB=q)Qe$6C?bXID&xNbY zV{}o5qw>U?^Se{?(fIUCm7}M+=Wj$f$4mgKI};i2j|(kEXzt`u9+EkU8k!5<58pu& z6#2}{icgbDZ7y$O7Ng_gkbtJ6pW1Kfo>iliQ*wAT@JMIAda^r*FgtEa^mO0_BS#h8 zBT*wMBV!3hR|gqw$utb|2++t~^LPw9Vy@auG1lK$>*n1DwV#lOT@RRb>1iqz)xOxf z0X|vk@`erO7c<Yra=D@2NzdjRX z>oV!z!To1#+HI2pIXh8vSCn_|0Ndw>bJjQmD;_QeE%u2i{s_5cLE&`1rcc#d4u9bIa1dA{rsq( zqtwDoo(SgyKJ;1vnLM#`7=2H)-UDJeLl9<|FQL_^N-!w_goYlrpP9D>lg23}cpdI+ z4jAw+d!p2I`NvwGrf?ZQhDo14SJu{=;t0*4obs;C(bpKInFO$MDU(6eRK#jwdRu&b zu_nh&-S0+Qt;Jb({%SE^{*aWio8a0L?mjV#@DMY3I|B>z?xYnM*msiZwfzGIz*O@q zDw1PL5U#c06gnROxpMJ&vFVw&9>I#?sP`GmcrbiaD}uYtOKQx|{tIkT7KAGlF!Eo` zCn9?k_Im}VSOhO;@xU*B;3v6547Jdcm`UL8WSCu2gOssy_O|UfIALc7mb9#Bq<2gU zSm{TOI$}f58mbFflLmT1He$Wex=d>)5T%)6U@DR0hJ`(oOa~QxKd_br0{Ye^=wZGL zd-One{tkO}5fD~^Anm^;iPFR1`H1wGu-?VRY^GNtQ>c_r;R^k=jOcN9p)rhgs*-1K zN;G6fE8uUDIpW`)h~%B0^RzWL@17c<*dKUP^B6J%D9ys0WY@#tp2vZs`+K#cRC0zh 
zT6myTeI}CcxM&>9X9d3!OWSy#+Ba?XJ9aaNwBRlB9_J4}(94RUfL?U+7^|U4D9*TpjII@%z)cwLo3lf*jYE zcK^6&&(1o1DV-mn40QOkzPgrNLjfy)8POVmK$BxRk8DwSB!O5O)KxC8; z#LBtWB;8`5U)#Sk#-r|G`543n;wCVD%CnkKyZN zLM8|7Yu&j?H{yyOqI$+V8P8Lc9GxRjDVI4uoFrr-H^6r5COFDDLw&+!ARmJLUPwMMYjI1XJ}KSGNan!7#wn{I@EKCv2bR8$}E zy>8_bVwI9dF5i6drQ9RIy@7Q}N3Dl&eR>5nziI5%@;*#3C(|zFo9B9FnOju!u7ok- zo-QiN{uc&D_Ib!hnYKABuTVfuB>`yZ&Dc zntUZ&iWz#S-8yZsaRfO@XS+~8W+P4mBYQ=URoi?QJsD*WCF<@Ga){Os29%;04YB3T z%u9{~)?tPzKSmlair>026&9{K?yJkndF z)!t&tTz)%_6!_5_`L*Z$yT5{(J&S;p>KBt1hCNPW7oEF*H9`NF^q(wt^idFfM9Z3H zm`dr3PJXlZ>s<}L0fH?e`vS28qGu0BX_E)>mUeZ2m1(KLgQ9K?x%RHeZ+yF)M_9I1 zvXx=m#$=0?NYrCSO!xu2Qd*{j-jXJ_im8V+o`ogGLl{W=MFXi<0tp>@{ukm^@)LVUIN3&@~cX zm@_LndpNn?FW$kMnp#RynW5&WN7Y@&GbDhCRH8YeS=MSCcK^MGPBGbq9K;s;Y++^b zLNdf0+BY+|iC>v9q$wNf4*Mj=QHOz-F4@~+u8o|(5Bq}x6-ae7oWBKA#l?8o z6+&lr;$I#%;HMGdt{b@0N&e%$`DtSrFKI6(4g^3Sc#W7 z*$I3G=IYkAjjpq>o3n;Lom$ng0nLvw_nDaNB34NPY~OVNRAu3bW)GJA*cF%RxCLW3 zw^un0kTJ*3ui;(@I{Tk=qV}~7UcVc1o&b9gu;0AVAB1EkGND6S;=<_GY!JT$DyvO% zUGG&E@7U6yOiRF#k*4lv;o-iG*XjLP@?tR9uL|pIVt8oy5`n z`C*V40olt_f&t!dY{3T38g!>snK3#f*rIv zOg`;yrPN8#F)av`7b>QKAQ#sU=bOI6N;5J*Art6b@8@q18*9|YMtTTiWXdtv?fs?u zW>O~a5A@w%FH5jwIgEW##|Be>lu3zGd?r~@4DDSC?kIax9${H1`}KYF2X{G#?;<({ z9b&Y6=*w$yeIhOjAH^$EC9G6Mqh`aZrTwvWjoouSO7pxYnm=)U(&W6NXThS4l|4-& zZCbsdPt%V#n$gk+81ME_uxH3_2B-m0Zs?WOSnP7c~(}2tD5dRqJ;?E zFyZC~1Cu-RJnI|_t5VwwZGR>)>lwRh<;50M1c1U-WN^L`|1Dt0B>&3rX5NHT^6((~ z5^yh)&nRt;LJ?Z0icMnchjk;}U&Guj(jg@~Xlx$a;CP$|xlu%fPj%*ajR5b|H>VAL?W1M|^L(OivsQhYDB&;W-Y)lK1%nOLtd*M5(Z%J{>(69Tld*g{9;?(y|?Tn&^ z=6$wTXO??A1C>VwlH2YQS*=01T2aB~h27}23{kA&5RucEQcaZv>GmUG<)$hI|UYSlLZKIKXVdTJHrzi7&Y316g_Gn41nr`X$bMUp_wLS3k+_( zAY5_vTM*xhPFpZM(-VjwLg!9hkGsbIW7uXbmcR<9C*jvNx8atNnIt%6-PE-NnZaz= z`s87rnziP@YLd*(g1QIx`3<>9b54o_wl77KNO>jchRj#|kP}-G0g++N_@`DLxrjm* z`@yXP|31M5RGO;L-12^8oxkMTufPez&>-GZ6`JJ!BP6qRYwCTazTNl*n1&xVc*3s`g;j(^lGRdAo1$oFXQ zUkp|$)P*UhiQC2WryVC$Nci2RJ^s`DcnNxX>t!eQmQco~AK~+i59L-v*}F>)&)2UG z;5GQLcn~aX2gX0bJ>A&LtfbQ}Vz_#nk 
zFp*FL|GF{T54JfKP+Yx}ZSU#oSnDxeUuWq*&dGG=v+ox@UxbWGOdp50896va5aV=p zbeh`YF6%ACPHh5yWAa=5GlmZhdt4SAqN4+cn7beMfFEVCBD{AQ>c=Zpjy_KkM zZ#L(hD@?*YaD^V<3+s1jt2Mv1%5VE&zUr=CHkaHTn$%T%l+9Z(rCuz5O{XO~;_Rj% zed6^8GJpCCZgNqkHGB&YZ&L$)b&dzss+upOPSNKg7d5<;x&$+9muY^iE(K>+R>g7wZH(4S&-7GymK<7fPr@xPzyAUyD@5 zJW)w5$W*3aLkc~L<-GL{QTgJor^3|>N069(v6&>qNLDb$^V#?>eL4Ti*Sz}7a$!}> zMoA64hF>70O%770k~ds*^I{%FQf=UulF-Uc9#c(p&ISzt)Mos9$AR{CQ99<>*hl** zu&>r<<*MvZTqi5JJ^b^K*)s~}8$2R&-QvKj#dK1 zLsl@$BYEg^@7MEx>v7>q*z^CqB4Vf|C#pZ5t9?ri3WBaSkEeDyeVmeEzF2G?hUyrt zIzNQPQp8Ab2D|?maOgvy4#R2LZan{@erz|C16D|W%{#Wzbw~rgsMcvyNOdJjt|))w ze!)1DDUPJrx_%MffZk+YER^_pyL>ZsP03*OE&#vLxOl8#sy|cZ!hnExe29zPSD|hZ@+-{<}|p>(6v2{IJ-JhV8tc{c?wk zO_<%7=vyqe3?18LCukwWpTK3|lfU4aTsN5k?LxgDmYbKZ-g5w!@Lk~l(J`_fmnEk+ zzT&Y5w1SomW-R50QY(Jq$aX(F>V}(ST#(|YvzoakgoI<0Tw2k`_aSgjk2(veLSf=- zk0dB|eG7l&HtwCl^UkEKT+|)vleG(~Hp=sDUmuK5*P#B1c>5h$$sD@Rm36`NvQzSm z`p5bZ{SC3*TZ%S(Dj-tWq9mGDK1YS-iLwoaG+) z+aWgS#RsDR`ec|8J^KkhK)SsM!t3>_n5Rn{v4dS}pm`rwh} zt;5lg(2LVN)AEJsqwNnBXjX=S$Ra#GfW~z*bDRv??Dw`0qHmb>w=feEA$y#`OaNnS zonJ^kbXZ7cflQ7k>-#B-Qwpyg{$*tOsD=vl(RO55Wq{pG_04a|FlQm&^tJUf)80Zi z9hub~j0HEy#>L~*X0s!}f4FAc$>Gf7`2>a{W=&!A+)swM(S2|$9AVT7RD-)pU54A} zcQr2^{It@cjkH%E+%b%>PS4p3XGs}+dl6gTms00f&^5kUTu5|=!<(L~bu1Lgx;C&G zGOu}OZ(wHx75$}x3v;7F1^q>O^VMi#RgUW=Mw{3nw<4}lSl_*F+i4&hftP3QA$$i1 zRhhH}I64Q(oDW|QCXqOQA^-Z0#|b{7rxEzNJDOrEfgpCkg;tt%Di5 z;L(5j-(~ID@5m7uTci@Pu=U+r@!tgM9#n2}i<@-Mx0!TfOcm%t zKn}n8ZCe?T@*fDvgMd|#JfbqzqDB78?OS};KDy8?e4xKW)cQT>V@|Rf4Z2djyiWtW z{aK}6pw&>swC(?wpCj+29;7eCEc3aL$i@&sg5F<=5jRaO)F)>n0|OGev>cche<4OG zt^@rxee?SL0}(~WthT^97q^PmV3rRSLBJBurKj%qhmJeS!w{turX)KDDI>|v5rY}y zTE8aGkwQpcy-0RdmiuO<1Ft^#@2PyW0lmhgdeh+qoSHl5@2O!-WpuF}n~{&*Ol0yu zr>yZ4-J^90JS}xQ3Vr{as`ay5*_>8NnV&w&L;pP`t>0QGw)NG2lKZDfiit_aub1Vz zCe&k7nSMOjFOA)|!`m`E-_BY*#_S`UTVXX_MwV1pdIoD&T8Vl8^ypK<*a*>}$D(&P zR{4Vss;rJl7=(Uvm)_5HK+JyW(+6HH<{ST9cqIEy6KT(3lln{#Iqv(;gjU*H&v#Ba zd&a>z;hM}~=F(FkWfc{{3Di1Xaf$)Bi}BiDLwPGHc7*fPBku2m9e&*?fBa8jf9(jA 
z+{=5T(9tmb>l`ou&8NpWVj_ZL|7GNF3Lm5)@Q6BQr^kL{B}p$k41Ro``df|vx^p9f z@I#bYMYs!yN8+D%2@W*FDpHRKqZdP}=AH7njOuZk+TUmVtM=C){nDcNp+{y0j=I?R z=jaqLBW82IALunxeBj&B$k0L?o2u~d8-gCaxc2wrl83zU)Tkcj8<V@>1dN@+rei zMgDo@e-|Az7<8>W;PGFM({mn*FJIvdd7so9fHz9KBJRyUgXUiq`NBu>gW2_> z`?jkwa-#x&ZA{gnWyz^4BDPT#R}>!P{w3MJ;2^vln;BoEIP*b^`pL3Nx+QvV?e~44 z`L{n^`{ezim1_RM|Fp&PTlXk~rsMCIdKO0{7){f=zU=&=ReVD@o%E5qfTQ&=+YR|^ zfIYF2|82Z~^%hOg!HChN9z4^1>8lZorC_mo(l?$>U4;owimM8pH!JS{jFVe89xtCi z_z%lJza^l!I1{go*Lmi_;n}V5*?1qTFgoTVL6!Hf&K8hkz8EvIM05Oy{r{_<|EYj^ z5YPgucUp=6v-|&}UeE0SBRO$@CTQha$|B-aL>m!jz9yn%)~RKBD) zD>f+LO@{rShO)j16g6}0wT{w%njx0{mdM=u3St8ng|zdFV}af1$?s){DFh7coE3bA z^FAwe6@8ZD}*nMs_KoCjQseK~`cbOyq!z^EDZv~u~zDgw$ApJNOk5v*g z&s7@=XT3f|fU*xJd_VwO>2Z^f+yW{3vDW$(rme7MrPzP}@&zbFCx4F{_M4@}X`DM27Qf_+j;To!ixo*7CUMFeSZjJsLo?T!S z6x`sk-k|?dEek~AF|psRchL^KSsUK!|J~wFCEcQ}%52%nGviZ7wgmsL?a2cPWer~N zJh*zZ`s5B4E-#olMb}G&w_}8PZDH)P*_+OK@PM2pLERN7%VWcCs4f&PMpI2=hmk%(W0V4a-va(?f0G4AG++W8p((`YTV^lxIn#;?&Am$isR_}A|yg-Rp#mg#jWQBhq2RM5)-0}E*p z5fP`ZCU`XK=u<9mq8Q9spkN!?sh6K+=9|X1hvT@f@i{TlCE-%8bQ6XCJUK}6QW^N0 z`_9ML53cM_f`4lP@U45eGCMY{y#TQe6tAnH|R9GfGl`}hEO4>B-u*K<;&8q0;>1Ug?$Q(MU#|8lV5_0#u3u8I@^~S&4T|FO7 zodJ^$HpZARQYuVk4fA*IJ#>HR)F9%p4?_~W?2zrwR^%^uK0n$3=t)1>Txrx-U!hEk z$^syM0X0ysyuc%tl+I)uV6Fy&gU3W9Eu|k;g{Vn8Wzcll%`G1lBkbGG@MHwA5hmT6 zuN6xd!>T_2)UDRVH6zt`XQNn!D035xn`e9K$UUw~fvh2Nm=QwZ*&(etFud3q^~n|d zD{%i4$IJr-K?t1tJ09uA-3-4Dk7(Qn^A0V-BRLZinzt^v6Ol?C#-)Q_JRZ-!X+H?3 zyPZiE6#aHXN-vQ!T6;cAXz2x1g2#g^-vVw3jtPMJ+0bMeAQ;jQDE-fxxhDoeR0Bf6 z(oxb0>Eu|!3sxP-QLz&=SL>bt1_R1R6;KgMo<%9sckXtm9n`ldcfCG3T5LbwDbY&E z^YvqTzDL%RbW)QnNUhr%wdqb4M`%}sTjM`yM}CDfHQ(VK)UhhweA-RO+roF`YmV{H zj}pyfBVEBXe{VU#tTEx05_NjjnMe6O4Z(N_;N|C<7swW(ChHr8WlK*dP=@mebh**V-r#o#bH843pVj~Tm0S-X*T*#+)k_1y#3Wz7UJJMAk&VQzZtXsmfrv!w z8|BBmoE{mT5Mnx>ZJa6jb;Zy&$2%Co65b zF^TS-7zqDODi$b4Fc4~R5Q9T~(YBL~=56!N%LG8+#zau}!T>{!=8eSW=4Lsd3=nYY z_Ugs{uXl*|81y9&WD`WOTwyS1Q^RbvBAAfy>t2_ah{W50Y;A}+2OW z^W9e!T17YO$FE&P`nlXXp!nY6Gy~M~Z7p#6F`#UTFvx4ozs_Qh|I-M3$ 
ztTc_To49SSkL50v?Tk>Xb_P zvd4X2Kiw})G$!`@d|iZpW;V{p4p|^c=kpBEP;gX%cwQy-clAEb1js%&P(i=+IFmoqlt# z(!S9Yz^bk9cB%W?ZhkhB6ZA@K(hpIhmS=j^sJ1a9`m|y?hE~OWBo7Qy#vFu)qSsuz z34Bq>IO6{fOSXV~M)I#^NAuPf(>c3d<_&Yod$T|Qg&)X4BuZ?*F|RM>#}+rR@l?!c zWGuvbwkn8rT)Q65Vf-3qBGiTtP;G@e*R>WE%?@v!chwJ5>1X$qCSGy#Vf8%B zFhslA5imzXov&cOA#HbZWIk{NL}6~#R^4@G1=x3Hxr(B>u4eSKqJRkhsif&@u9T~n zqv$B50)fu0;l&7Z0f?bP?aiCp1HLEHNk`n>;~--3ohItzrq29T#UX-TJM zd160s-`qOj9jy1~i@1`C+>)<4RnJx!*344OmD`0j2;A*ZIGW+dFLul-k8+qc`iBS! zPit86i7NkMSFCBsf9kVPKf_M`4~_Qz(AsfW-lw;6Pr?@biMPv-IW5-7JNpZ7eH{vz!fML>&XsRNwpTNyrxJJWyY z;hu5gRAmgK+!;v)s5Xxog~aiz=CpSJ*-5_ce2BMLt%ivReASM~WQTKH_JPA9sRrAt z8<+Zhn^55+fWTH$#w6L!f!5#+IGob3#HP>J){py~dH^g86QJHzgt$)cC_}UIw8Bga zDa#HT%J1Rz&}tNg5FtAJFZ#t#O#(}-xR2|gfg zEJZLgJ$bhb2msg^j;((=muK1#>R*-Q}?AmVzWh176(=_Bn-Fhhe$1ok@N zf*$cbwx{Q+)isFUjD%`{)u+dj32NlYKRWln`>+zqHWWFCiNnUk_3Y*>LTy~+Yw~sw z>z~*RBvU$m@9&E0s0McDf~)7z*HkT#;KsHz41SHCzO76eneZvnd$!~U1+iZ#r})C! zPyA2}-zfpI?xANugzOEFKBrPtIN{E~p}_eXSfd=1z@W`pzK`IhnC1_ze1D$hudR4U zQdBDZF_#kTHgG2Y{#3LUYpK8IcE*Cxo+I2pexbb?sdA}Qlz3ZkNcRVd>!a>1Xx^!; zXOVLWM+K$umAZ|SgVFJ-l{V?PHA)Oz;Qo`nOr}mDbTeIJ#PyP^yv{LVz}4J96*u~E zz1Z@MA+L3nFW8gs`|pLZ0QEYP_CpK#n>lyoH~~wytcwEpqPfNL)8oy|>EXz}$zXhB zG%NAuOOjFxE;OtBWpPr{&Ci8BHAGmwN7@fFV&S0OUXFG^22Wn1@sy%niB!{ntrSTQ z09g&+Ih49lCZI@vCs65*ATO& z%c(OU1k}2ZUoB-h%Wjtr6NoO<@LR$(*Xds?Q%f*k#p?Nxr**Ilyyz7R`YxtcSmtvl z$(*$YmomAfm{<3SHU`U7Uqd35!Vz!q>bE(W*|#55nf`4J&>#OcFb=Vo)y(e zMi`~lh177{nyDsNrf)A_Bp^1&;=b|8@8Qp2mmj9Aji3-S5yPR$5DdMnpxuj;}%X96zu zvJydecC~yXV98x}Jne|*c%P7A2)p=D=@tm6Jv^04&QFK0_Z<-3yy3-?feE#{7afV0 zydP-F-Yk+c`Z*qDC4<%K{bR{#J#1C!*+{D{d_4%LsQ^QDoBJ+ZHxzYKG`soy#W{y% zm;GWiJsQkG7x&692hsqkfFiAoS0+`5siBFmh~{Qr`+XoZ zv5)QjyUlk6rauzQVJ+z`2ML(uWjxfTK4W;Si*5IL_2*EzJ}eAxavpz)m+BO%Cc0N< zs!R?i+Lnsd?M1FjZe9u7NQa0u3hOw9x@y-kYis{8-d=b2L)|vRT3@$#L|Y1z+PAWH 
zlWG^7L>$p7&D6YeHGMIm&$&C~7s}N4>Ccv)L2JchnFcp-hdxXu8F4NjaynYhZpP&TwS&InmoCva*jS#7cX{grAjzCPcQzV(u{STai#;FX4-dvDUOmk}wX)#$ht?n~Y4}9pe9AigzG3WLwfO3UDx~jw7jCVuhV)G*p7FiCVR-rS1yb=cXuyG@Riz#WY<=ZB0zWQBkj1?lr zrTVq!$))IftD|pl0HCSMTSU+aib2DqmWzFU*cQ;`VZWRikGS`i&X+|+tQ5)|40aN+ zPm^hIXhh=ElcC4&Vf!5SxgT54Bjr27v%WWS!A!e#C| z0)47&SAo10wZ;#MWDp?iBvkj?4G53n(5!@7@JO@UbOxQE2u!NI)370;qJrx9ToTyd zp{tBiq0OA>MM8GfpV)y!Hgv`|t(jKJT36!0+F{3GRWfy5#Yo$Bmb`_urt29t2=cgJ zbW|!9VCGsFxZ*t^{me7(KPzTum!$etRPA@X`A>n7s_8Epr9DJFQaNn3kj`c*f`)K` zY6_l7QPQCgzMImClwF}mpv9W=w599)Urk%#u0cToigx`Rh#~Gws&_Dfe4m_#u&c7u zesn|k?0Xa#y9_6B$&l+tU>n4-e1i`?5r>G8|JU=rf(Hwi+7_2p=kCrfo3F;6BqVqA zs`3P~;CC;L82I>Ay+aTDY5|DxuO4P2awL zN|&<)__1(@>I(n`P7wI$`wH|5#7h-O@yw-+qz9H^Otl-6tTUxi#Eft1XeH{Nz#Cpu z-34^Pr#F2*>bN0NOw|Sg1eO+yB1HO9nI%nI&@kBMu!bAG2YH&Q$UHuDdSbe_So`TC z$*#08D{(q)$pL7!i)7H@4aj_}eKoM6k1C~>1j68p4oeeT z?c}6ao4pJ1_qv|T-$Hi8ig07jDMy9OfqXcqJN!v4J9WOLp*y^2T>`tPT?>-b=`)>9 zgh;<1<%TaR^m(WdTjV5HJ9vB@3)8*vwi(Bo$PBD8*nY50-gNa*XV7onWuFuAys?|& zI%ZkiXg7Lf(YC#u9Q%lT{(-1p&)^K_SWU(NZn52qQqXG04`u3{VtrwBuHZ?Dfm4|( zp6QmgmfCI%ch@F`i@b6Ai)){C56N*zCPkMA;qRTc*ZLE>YGbo=eK{#ktt3#tY4ljT z9N|piVBuK9x3dD4#RGN!j*@jk5bM7c%3^)rma#AXVw)BffUB7?!>+vs6k^8uR{Y3$ zx87=zfo!XQ>W{}Kv@H9uSdMx*+0H1&I@eZ}AR>2pa&j^YzFT)>j)JK3IMsFt;hf%! 
zt->0mx+RrhZVuq23cc7M*RmN6k>Jri7k}JA%EKLF+9 zJ_@1xg%rcA2#L{LWFfF9VpHdUWc>zY@1vlX(We_ovh)?xlqQP*Gx7&fNv_vxDQ$JW zDq8lR&G_zVgkJP6ze?7|Edy%Ki)yQ7*p-n^713M1v6($lRw^wXrd&{mZEz%Z8sy(6 z+Xx!C?Z|8Pa>RlB8_59fcv6mbNVj}7E0EYkxZPb1#OcjO4YwOj{HKFO$_vwOZNk>Y zT>U#vM2nPri`94;jMx?kI|)DcvORDt`=((?!TM7QGo5PRSMG*u2Y#jQYjV3db<`PSGfjU8LvzW>A+KtNSl># zNjQ+Vp|e;!%c{3Ed#M?!%J;W-3y*hfPCjG6EiI_+=W6AJzpamrAE18 z(T-jAbgEcMJ;e&#+(FPYC%qXjxmVc3-4w%29}CBAL+tDUGmQ<{2$YN#{@SFj(zs!r zjGikZGVmSxqz5z0L$BiTNis3&J-sdei5GOcN$IaH_WT`HkK8x21~CpwbmzVoJB%Q$ zfIcKHJ1&03KxKSmtC6B*mw9s(jn3j2XZd6I+&C2n`xy_+4ecU zVK|(;$y3Ml$+oe4%9l^{sX2DtPMN-58&3LTXzRMT`|zS8q5=CUNW-G)b4VNVfrz&^ zhi$`;xSK8ZHP!HQ#KnC*l!}ZWghVXiimezuiy{1IrB_alQ_Nl0$=pIv|y zViUSa#~t%}b=`lH+Y^ure;~IB>^ApAuRNJy^22Y-s{d_5=MMDd%fc}|MLm`gik9*r zI{zNNcq5_TJX9dit|QVJc#1O+UjqFaYR_i3`=00ERcoI~L5a_l{7ysKg(=&fw$oxC$ z^0m%koCc|O^E{hJ*b%+}+hh7^6kkB!Z~#?;$H9+P-WHEf&8g2?Rw;?x7kQ{Z1(H&5 zF~Yk~p+hKE+rqH7S~@<*xI4kBgCm)L$0tGKcb6Mz$c2-PdP_lBi$WvG3l^H9u$cip z8H4Us!I7{ay{myMQ&sMBCdI<=D>&#w;u1EMW^4%~@3IBZHH2fcdPxs)YHjYkGIgj* z>W={zH#pYUvK7X0*UG-wDz3?=vJ+pwM_JMPKi8GOlya2H5MFzJV+;$-!#`qv2eFE2 z-BN6X*NHdhVA~2kFkb8-{hZB6$oiYv8RG0Y8DrJX0vNDZ_;fCFSf|LDy#Fz0agY+| zJL#Dbq!+CpTQiJMQI=IDr-gj#ca*{XX}Jwu9JJccG>~*rLi!90!T&0TMuO2xS+c{d z*D{S5ndF-5*o`m-ZR%+wl4A_Nc%ZlPdK);XY34b2#^)YY(?g$W!NV-1qtoHN?%GTX zT5rp$N}oc)e-=i#{y?%wzYR!Jtj{L^9kgh_Y$(c(ddh_;>wE+iJuCBK1n~;1sRFeg zf*hQy7FFfcKAWR97f~O1snbuShR>f7eO-T`Bh^xTMhxGiQrUenrl`m$6uT}HC$YaB zI74Nxec}!tS-J<}oUq=vCKgp6zga<}DjO+ zlBjCkDrKUTijH#Jqs`|!v`>njHM=7B_M0cskWom@y!T$)696o*V?6hl2o4GYeaRxW zg1D~gZ{C3PPXcc^v`lPEj);8r=kp`Z4@vM`E3N}A@zrmCbVzwQE;;ucW)*xM(!VId`yQ2Y_p?NWX~TqNQXZDdTJ$N z!KuI0vG}h{`)mif;{1snN+-3CjJ<)_cF#Mt#nnr5A@5FGrXTlqk;?p&wHfpY^a@wB zw8$+=T1*~=z~&^uxq7p>mFudo{E`V(;Jwm?c@UCB&b_ZcHe`grQ*Cg7@=ON|6U0OQ zF3BQSAOphI+le9XKuw-mw8e;xuG#UvBbyZW%J*tUe zfOq4wg=PO$i9hBlrJ+XVoH@_CjpgN-U=`2QVqbG`Wgs%=(Cah9l)d~+_{Dr`blyqc z$)rOFDNUo9r@Hp4Le>|DD+YQmWxRfE#uu0f4HzrRe*AV{s}xrCp{i_lDZ>&5C4S0M 
ziEe2ZbfuW`E!s#rFylz(!-ni>e9p`nLrzCiA%RY$2FMa3kaCgh2x35$S?Hwkh(vp8 zo%ldTX5v2XNX>GnZ3b?L&#%HYI3^=2;Qbgkh$3M~HyX;2ch2v1QQmqGGN|sVOpdkj zXOBVSch{2?w)m(Li@ZPc_ari}Q!r!j&A7ZknoXy&fuj&D@FVA-y&WjmoOCMk_*X_g z=WZK_z;rJp+maq5E+sQo9No6ZnagoZI}-)F-4BO1(d>ploNXo}uEQgw3hXdh2gb9H zKaLJg5>-W``06Kd;i*e(ms;|qoiiwl=?lKM{z4J1%jQdQ16}8nJ{X3${i(fmq3~DZ z8hv5PWRDS_^{K@Y3j#ybEyB`lxDm zswZeQxo-j?1HUrwJ<9dx9RFXJ#Ku?fyoi(2yiygh_a;<>-+H4y=dH~rwH&*33_e#z zaYH}xpB5}8$tU4by&7(^9 zU(Mq~ikHp@gAMO?#Jbtym~gm3hyfI3Zqwos6kH<6g<9DG^XltvIO)xA~a0rY?N z{nZk0+MaHzZGf3T-inX!yBi`q?kCkWaA`0uo_)vdk-#|R-M@+*2YB`x9)^Zw8UU0c z$U-(^(LEP}(D~I9$H?&4MflAfFfZv4U(!#|N={uQG5~gy$5@Vnjq_wVZgPi>Jt!`K zts{%J$9ZupVAtkSCQV?q^z3aRyM|d>=j=XO|2BSLX}ZTbpi};2Qt(K{CpHLbhU6_D z9NYQx;xbT-q)y!f+WP2!72S6H0Kz3mG*`Y(BhucUZ5F*>^#+K#x{?< z_%X`nO5@30ZCUjDY*hd#P<62enjNR~mv?EdAf7xmE9SeMK@f={Pg2 zm-Gg#1{jNQ25PdECp&&itjro_Ab(U=ByS$#+4r*+7%5a?a`dRnZ4-7aA!5IGlG7wK zuT(U<KsqU8>vW>0HsfNIYwC-)yDT24D6-jA_UL_-~J|BCfWhQ@DHP0&km1%{@tq!U(k7(j;dBtQwv z*bK}CdY@x!o3_)m!jPVSTXb5stfM$$&Dild?1v06ITASJWMp?f0i$X=u~9*#U*k>F z=b!GMlG1C!?$tjaG+Hhay5^ApcnGY)f(~?ZTq-JT<~E38=JeI)3%64b6Gb<+$v5`{ zqszpcjO$mYAWy|;WCnH+6&3O1Bm|q`kll7R>SXXM(B>AQj4LpZO1Q19+_E^Zyd%~L zMNwUcx_v(T#sstdL(fJRP|rUq=UHfwXmpj%{__CYBOso-$Ns|XtpN`)LL@hdeP4lh z?11X(S}X$IbPK(sh;BkK0Ux(;RIi+|;bM##)f?`pWtnLE9k>@QA)If*b@%kklZ^qI zMXCy)$;6M+gHqiNm}wgqGo1MPjy5Ik(B?D%H!_65JEnt#A~yEjB6w*4aL%BttDXvX zuma7O)E~QdblRBH^DYD6h(+wC2voaYH-M-LmaGNGQ*Q_AKrcE4_?qv^%H}2=7M1v* z#HfwwvdFp!!Uz$r!#Yud8|XX|p~Kyh6bH;%eIQb%PjwrU90hm(h4OkysJwB|TAfC_ z!x@5DQ-HtZ0w5Cdm^u)T;omI83mTp~J=y9$p8-BUBse{1`WZSJu;EqZHl)QZ_lw8$ zbQhkkdj)3`G4ikBE01+x1|3VuJsQ)^kq3(=cemG*NGpbyToK_tCT@o{w-3!N$ny!b zZIQpsjOide0r(@^Z3ppV=+jsA1S?x9mht0qyu}Dq9u1@06?eMEL@_vQjr#ZTa3qGA z%Y_4@5tHY?*O{;9jgDmOT83(w;~PnGMM0|FvL-9;O(zfM<;TtU{Q$6F?WYdGKS%rs z6x3*$DB{|0Dge|JL#puIRnXm4J#C{*<2rGd=Tqc&p!Aun@Bh|#(drGIt2qp%t*ONT zm{CdJSE}MfZ0`7ep8ry47~uKyKE`k`IZnB`nAOT|`lpF1vUK-L8JHoTiYKj}Q00zi z;JaG=RDVFEzW1x;iN_&YS8abd4;23tLCmL)vn$$qbGp8$b~qcGPyfH0^BxLm-OiAe 
zm{m6bTH4{(6jf^h@Gn?!BuETv5pt1bo^dW#Ir4O}AgsU73qc_IqLW322hCGN2N7ai z0MC#arj~s%vUkK#JQ*--|JMg0&@(wwmtUZKWdM4!r#+eJQ zX88p@i>L%1_CA%A)liW;tPzWH#`<8)aENE`B(1J2Qr9vmnZ**i77C{D`0+M{^i3d z4D%WV;2~7FQC3e=4}ASElyV0Qui!o2_l1iDSaxs0+oOi#bblw^pzj~qI&hmK8qRs& z3;fE;BBNgy@Mp5rEhN!*Gql!5xyA8Z594&R4)|yLzC0rUA*j{h5C@o?*Hd@hV3t&} zk2v10EBa+y6FPnmjjLi_;|Qp#qbrtYi_yZM57rg;y;Qz^EMK6-RTfaB3Gc#e`f>X$mN(y_kL8)w=hot1o`j7yu zVYs*8DcUIv8%J^VtrIfCwc^fJY;8{?+ozN_i)bS-#mu=wRIDu6EW!WA5odCtk9cPh zio5*g#+M0*){~-6W5LNozV4~9=rRO9q*(i70jr$3h)zRHp6)dkpT$d(_T53@Xv}Tk zo48h=y{1@_@^~8>xbY(S> zL#X-VK=<_S05gEY71SkDU>9A;AVVUPv-{L5QiDUc?ld3n13;PbzW z2?qN1t$syocVguUs4OcGu<69ZC~tFmXCRtX#PzBjoD>Tddx?vr@L0&`!!XB2hKcl; zb}sRj%>&b4P>8fvqWN?a!D@_TNtS&t5%|y34I%N~aAG+v;-L@Cqg?M}p%BJ%rIK`c zt1Q{)RYbj+94ht^kyQ*>At4Hd3DE`iRoOg>l*VYCW#s4>q8x-5~xC~6AzekX8MvTXo|j{g|44Q8^Ytz zpN^W2Jy?tWteT(n`x}-Nqma)J%oN9nu?7XQC-Ar+AE3)0Od>huRnj z@Z$Y~)mLQSQzze5bbk4XG-fa%@j>@#aU^ojJdJW}SZ595(x2+yuuXS%5}#c>=3)Og zNmmGk*8%-?RoL~O4H{N%2|JNqF#F1(Iz4x#OngW?ID|3>;P+;ihrVFAZ{V?r(B%(f zi}Dd4?EK#E{Jzldx^ib3@cYC_8}eYMkN6C|Li8`NsTX`D4kV%NcVdCiS^<-b?BG*l za$3B6v4QVyW8CGmsaUi>JVqj3zzM}rrNnuW}n0OBR5OghSu?Y)(Z zPK!>(Ku|F;In6ukoBQ~hMfhdckSmBvkB{D}8o~i)=Dx8STw$n#+~tDx+2p-JRIuam zl9GcFA#d35|FbeQCNOrSi6B>4`H-@#>}{b|*9WUef;@chh$-~fnT&%-!L^u`qxqAD z&DtTlK&W_)?M*%wB+i>*w2ft_W2`OeCL1CtJCDWoPds_qM#jZ0M#-y;g|vbGTJt+JSis0@08?NP#|cG9KmBzqI0exlcYu4%bZq3?i`rmD2dPt z_mvk|zOC{NJ(T~5$P9#`KcP#Z2%aWT4u#Cu`a2;>fJirUq{3HM>p)5NB`li4eIAUphfXDxIPVyUe$#3 zGY!p<=l=rk|2f#{6krUg=1k1^B z+g4_q+`WZT>%Hw<+;B5FlBg{l-*FOt`?)t{>|g5x?0E+mW{%MJdV7hi+zO!>TY_1b z3~A~~ab+yUUGB64=i_)U6@b;trBP}}h3e#_TsilPk`FU0pAq{-wUsONCTe5eaf`Bp zuq({|8-BYYAi90RE#Y!_3pT{WY%&w2Mp&Li>pEW#;3&NAu55E&<7hU|g0FaqJPI%& z;XxYtSVlFza>MYooA2Cl2U&JA*<|A(+oq4$59HA$?Y>Kr1^jFAf%FX-D){s(7ym_p zqYeG8em-eTtc0Wm210kG5CccEzrr1hMazDTQLYRQs46oQOgWT}NuM-qTv6Xk;Qeu@`eR zq3ok-^FWv|@RC@+M)zJLU(-bm3jfVh@#bKcm*w!JlAfn6wuEVvSDiXZ$xzjueC2gs ztQ^PhmtghJDJ4k#0Yb0KY9*0$QfMKdWD5?0QlAZqo<3E-F7$2w%SX-q1+CFf-{?6w 
zB~dNF*LJ1&%)0epUs(zjmoUlZQTEmpe!q1|t{(UUD|;zTzY3DFBFQQpZ)BDQPrbja z8>h1~LqZgQcP$?c_(-Q83Va0WLuRAkap7c*zg&O(uG6FTor?(hp}iTxBiCIG&c0RG zL-}sXSCs$UZ4_B%hhQj-$SWWV=)?)K+2b0Y^inb=+C%NGGyVWjQs%I|vqa$fdG?Fm z78Y}8iiLv62N*aN?yirSnq02p`}?JgjWp@sy{jtMX(La1XPPdP#wy)>ZxI3*-@m!N zS^ou|t0bAfd^(?W8pW{Qhk?woS2Y%TH4y&U04})gcXk}gn3NP$wt0z8@0cE4na_tq z$KA0H0HtLwz06=&sX(p#joX(T3X+ogdiA}%z4rI8|9W9YPSGK}}x7rLuQFG}CS1_`F4&scR-0d}J`kqk9+pTxrOa$=M&$3_(P-Fk8f zq%Rvj?-evl)Y%ocPkx3)!J{dR_|mn=(7Zi1rgVEZZ+x~zcs%fpDLm%c)lxV;S>tz z^P~52Lh-rS#kgP3f4Ew56<0IPRqW^cbPC(fAbolNiF^sZWyrmdTp|#q_7j53Po*Jr z`@uHHu)-{uazNI*&h7jX^=w9OxZfA`NgNQXk3AMQ8@~!srbwUT#?L}|P!)YQU>|rV z4`e}iA z=7*O63T)%QepTi=g$^_*^i>+H=A)QwkeoN7f| zCNCs03CtUvo_VbELaapzIzKDFBzal&TBc4AO~hDyKJRvd7Dt6 z1Y~9$kFt-2jzZ{%+okdzV=PMY4(+34#kf=rLo62-@HFE zW(nn=GJ3}oF8?xViYh1+5$xAec(*`0MLHRl3n}2Mynue=7vZFo^iy={krb+_Y;wH8 zQlM_dhfP0_s5*Y&SM3)nxrhGAW%OrSpiG;q*llP@p39=wIgt#b66nZbiGH~+}vDj5P9r=5ttx=tB*8D@?BSP8T!pE`%DUpsR%79 zqn>K1`)^SBdwW|K0B#CpVHhz(95yzq@z|Re${cI`up(XY8^WTNW{7hDP0K`~35SG7 zN@!Ug3=qmh_?@2JZvol^Tf49DKktwrA!#HY3GBOkQFdB_z)#Vs)B~d9Pz9s{GTuN1#yh0-UR^VlaG)#d0%MKgn1;8Z~rmZa=!^^o`@RX~NBSH3%(Xy3wR zD3$x{oBNfdv)5%DlJ<3=fS)n%aieXBJwOE#_r8ox{Q{f-P~^n+Y=*xhUjRG?!|6nW zHaK&?nvQtzi^28fKi^W4D9RDUE?kQ_pf2acpllnV01G{61I#)xlvzuZ(&k4Or z?C1B0?}0I$R=xdxeDr64llXi)?75K#xTs6k^r{(VFud?o9V800QH%PYH<>)6#ECs- zL`6KO7ad|g%JaRe6t={_Ins8%)vwzaldfxT6U_R%vrVcEzq71+J^da}sa`pz4NP-O zEpTb+DP3dSIiJ7vbko+|!Z6w4MjpukWtiq(33A&>SIS>kzaFHWl`BKeRku8O6KQRo z&@KH|i%qEBW1n<+=KS!%2AQhG(-9+%b_|nKWem;ecg(VrV7sr`ucEgIOm!Rd`0q?U z)b0Hg(7!i2WIPu5+^{UeXwmvb^(0Ftfa;XI%4Xo3CADygEWEp6h^=<=R9v(wR@Bz1 z@m)}^Iea_NAY+`?_H?I5=t&bCcYlUP{oy}GM#~9X{CMcNKnBchEHE)pgd#Tq#--BL zvVgrM0{d^z$9--6nb&meWUDlRS5MZ`D8Kxb#?x>UEbzO(spIX3I<3;;`K(~WNke4$sAZ{Dn-_=So?tB?1mlX~Xu zg@VMwVYl9)bF$-8V(jO`juzt=IdJ-n5b)VMH6y$f-Ta{`f{?c}_rtzUDm?gGYt{6V zTsXf839Sl-mfF$>rp21#xD{7B<&ikm6v~UcJ9RCD`WY?c%Sk2e8dZY5guPtlaYBG8 zAI@s)5jMeT*+Wo4peycD&yYe#Yv~ zF*dqX&;vfLz%&9Ir*K?<2XKWCc@Inz&H%e>zg_Hl7?9|+qo+a`UwmoGJ>X@1fz*z> 
z2>?wojee}2i16Pf>Rr?jx;>FLd$ybpVXg72%{v3&S^cw(KIORBeYG9KbbDnyZI9c{ z#n+Jc>w%kRM*9F#D>{kIFu(>5v&bly&V)$HQGiybOlPx&>}mAzdY{#`9O z92d=gC-Z>v)vBEz_s)aDD;}Y9E%zO~Z0!6_MZsswFEqKL%@jRvci#i>s;?>j23Pu8 zW112uF!%Yl=ZpULP_qx3!IgF57n*0y!M1&zvvwbALg&xAL-F;5!~?FEEnAqIUns^5 zQVy$v&5q;xOY#rbE#=LO-q#5&Ex<%1d_!Hxs8T3zwbZT2@puW!2ep`I&l|c=K+HP* zV$qQ?Ja_YY+0*k!*>i4sp~d~h3rT_6*HhuBrhZ~!VO;9T`XY7m8{h4Co~-Smvh%=! zr{}6at8P1t3KAx4!dqIBqH?QR@dNYxKVye8Kd*3T7Az#OY@y!nm?yN!?rf^y23!5; zXZEI5cD>)3OPO7H{)xgc@kHLzjNEe7OD^u5a^^J9!LTZ?C7cQv>84{vH&1*KMe!RE zMM334^)6EVi?n_HS%mvQa`I>Fg;=nyL*I>jyaj8bgZ3I9X-1*b@`m zM;+GX0pn{O63XJ*wFRl4Net}d&33y=+z+K4YZlYFO#rX&PGo-6KP-}hRXw;`fXuc! zze$Y0axKwG)p;`xJ?>U2Z&h$>Q0N{-w&J)H+yqcqm2$feR&+>_McriI4H=+ZgWrIR zOGP7WaNp_%XneU&O93Qrf4*8AOWC)q6|!i!R2ENHJVt1FvQM=12DqQ2K10usA8`9J zo-e4KHz!rJG;mWrJ*4<4{+yv6=qCs@rfPxyqm^Kgo*1VS5=|JPFzoA`9j%5UQSxqX zNHU?>Ot(+z*5o4uFml}bB(uibgRy?o1!Zfwa@?PO!4MEwgG7{-r#_soy<7K&`gHBf zD`K&t+~$8w3jkh7bZ{Vhe zj>rGi@8Iz~gZJBR>DCuDwSsNr_&UlpYkQ04+bvzzrA>e_OxVp}wcKG{iSV_6PZu~@{OseA zKl-0kT^7CdhUI2F0{#1ODEp0Z0qbhwB(9|2$DWi^Fkc>COpxFd{6YOZRrxiOZY?+I zl4>0SD+>A257-IjBRc{!E%k7Q^frDAUIU~qs4-1TJso=dDVq`*g!o*FqO))hQzFvQ zdd9?dl}x(4!Il)7*1TQjgw)Lw5DsiDW0?g(-z&{9H*Rm9LYxecdP=gG3c6Btd(8!Uj-=&v~l{x@L>i~Qa zms@12N4Jm*xl*wmZK3UbZCL^#7=+tvBchiUU~lXqBA5>CYaV<&@($fCBbWJm-m*S) zvn=0tdY!4>kFxboxI71x(_EdJ<>ygTls83bq-)K-k=vkKae`hhtRLQo+8%oe{egbMjMN7+*!{Y_aUDUuZM<+r^;{n7&^i zwa{<>?E~LxmDpm&6O&TT*WX;!B0eiWZ0{8JS=kyqvDan&WyHes@pAge>^$S~z%blR zz4>-V`~}5aZ{%+>%w|7AuD8p*08E?N7^tM<>Crnj?vGm%6222y7Z>E4wBi210Uf1S zMiW6E*>k>IsRq;w2AJ%gxzh2~JyV>#Jb#)D_Rv|^IKm;{w!sm-QKhwtyyL0?x%NIH zcokRANf|<-#<&{sE$R+={@pUzK!hYTgw^X3EwtE~IlEz|NfB23YB}Fs#qq^TxGz9& zfH?{t?9)mX_$(98SnNQ@VY>e*D{jOBZe!|3A)9n~=n6&Rmq*PKb+~0kW#t0;*P6%q z(>yN`ZePu2f^>fd!Yw! 
zW(}X!!@1T%Wr$m}z%Olov^yf~b}LzgT5Jh7LD>yJ7fr+~>d3~p2tKYemHBP;+QFlbYIr1@OsWAw~yhHAnX8=L!eJXLE?U)7=kD9R)Xm) z<6-%ELzV#=fBra(Z(D1Ehlf5h1$vYZ;JW7&9-)iZ0ZjG2G-`hReylcPSFR%Z)_z~J zMIeu-s9n*4K}kM4beth8f^}WBoNn4`hGuCI9_9_~h|YYRAYCl$A;NX|lo(2ie*GfC zg7#NIFGxtSf{IMk22r43qASuPdM>+`Y(delbMU%NBCFIaQ%6d47u`FRtn`V>v~O;| znEV`fI3Cj7W!L7y#;}LRi1bEy(RsN)E|YYB?A7wu6F#b7o|{iruSlB3tF=C~IU?h` zt*fRJM_OM?Tf-2lmAo^jX6}U@K|wBg>OKb9|x|~HnZ<#1WP9))ToDSl>u(vLI#q5E+?)@gcE5oOS~z{*G2E~w-5rG zf;vOVp`{ullL*EJ`>%_n`~c#Bew#0=tV%)f>4VyMZm=Zpi8vssINcQnoQl;LQEQ-+^~rIsK-jyO@QTHOV$-;lc{qXVKH2TJQ=dPTdXyyS$&fI ze51Wp78Xd&*caZDJa6J4fuipF@AVCU=p;VubouL3LJDOmpkS<{vEd=piaVO)ao-`? zw!?2pVOy@6(NC=aM)k-w_Z{Rj&)cLF=SVmAI=7XDt8p&9uCypVu-{3i-RQk#QL)t` z0&U6ZX>xW-(sBIFfJANa6ZK#jYu|w8R-p-xZ@EJ9g4$@0#I>qL;}t!9qWzE* za~_@}Bx8KQOEx`9l;&!zXPKnHJY^Nfq+2!ZhqD2&6^8l3Et2Dsy=3!m;_q1IbJL$IK`&q$K_XgQZYx{S=DgXk)e|ZI+S^~EF*sv? z+}riHRW$}fG*hc3k@PWzKFHtk^G6xgvMZH6{}8)GA@n!B`*Qvkqkg;&OXOWGr%qu& zAsHn@R-dgg5meQAm(H(@geA`Z`a63x%B%NM2WXrsxy2X%9;VEpt+)chIJRnrmvHLr zSVb2+p5*MRVl(TEpg-?`5YA;f<`$QQ0Yn0@6C$dL2Gayf`yL~hI}??j)>tA<&t&Db ztA7MUyj=&tLYw*QcsqD&b01{eGVs33q)b({9F+#{%#6vaL^bnf%A+WwhCbI)?Aj}O zo!7;bjT(~xW((bm*Lx678!AuYV<>Ud-HX%Yu<`r|aOg~Y=7!x?{Ga1w7ZbwIJDde3 z2^?>LgODGfgNqECXYflE?Mb1zz({#Nw4|GdGaYP#V{$Pdyb*h)BT49+y2-aU2O5DT ziBudGFR0kd;8+sHz+OlmDKR6V#nNp7I7lUxVY+k?R@E5wGF62w2rkuh{mxi_d~>2a zoi!yqlDfiJbnF`%)3sF?n`LP95u(BKNI0m*kmI}^laz|=#|N<&nM^8fiL%J80oEU1 z5RHX;`gPF92icsnKH1Bzh;Fa@I-@zge6{x@Z})UtqzFNc1$5HAR>drV5=H3u{A6uX zZ6fz#mof}!-NYt=F%&~xvg|mCl9CoEX-{ODbBMvxL>&G;6Z$^4sw3-0;ZJljbh(;b zkm=-8FmYyyq|dU`HAp@K#z&v8mk68mI+#b2>zUrR>hj6sx4?4 zK({lmV7=>SHU9Sbqp<+#9>hKZ$9zCC|EAU*(X-ao2YNL7SdaPWvX9QHqtr}jjvCsM z#5_bta|Oyt>f@e9`Id|8_1|}<1{UEt9!Ynj6OG=R^+HK%6h$rjBcj7v8xtO=A8TLd zZVZmad^(8d3`{U|3lR!#0vKg+3`mZDPRwgWQ7XUO1S$5mzuz9b$YN7DFx|mpy;H3alF%dZL*iAqkNxj*w z<`4I(jLzuPjx2yNA^>Q(L6>x{h%@Vk@|pbuSbbO5*kX&v@q6KtTi!^NgKh z3~QPGcJiN;Lr+UIL+=O^lWw<22bzva<@wa!!)-^Zq0$Oh*@4SfYPi42qP$By2tz!BpJV>#kM3Sj)2k-+wNx0wyB60P|F( 
z_Cc}}+R_6%ZpgNNBv!)CG%08bVWvXv`$B;?DDHAsWN}o+W^k0Ih4e0@irZ9fWq-x; z*hb}QMN6o#2IZ5*$%Y~f${|&qnS8=%72j9i)SGsOrxKlpbwm02*p$QCGdfz83Xl8J zGd?gwN=ut_YW}cTcU7T=^Bz_ngtu?Qzbg~Bihib}M@8!5JV3Wny1!in zjaKrLV9MJrHVLY>ZhW!Dzvm|zYY1Z>s9#t)=7AJ)8QHz*lL}rxo>8!g5xiIS;6(ew z&L15{0j2sp;UlG&2h*O)yll`*S`LtXL|Z|^e_Mgh`H~F;n>^CR=GlOfex2gd>A{6E z+T&nyJhebz*A}PBYCb={;;HtL_J5FpyTlMgaE9&&{E^7(rURs#CKAw`uxQ( zKyKgmA)D2i*rAeo$>Vw=D)&$(5SK7{CRmVx@1(7aiOUEC{FzW=_s~tEbSlPLx%H*_ z<`+`&3?SiK;wsFxZ){x~(0X3H;#)|nJwzGB@I=<+`s14-avTlMKtkC~xgTRVuXQ?w zTbqqBUW(Up#dglf(2sm2s9-$rXgz5oH>YuX|NIBbQQhEqez%>_4KvE|A~$qcX^@a! zlGrCQ({bf|ZB*XN?}>HOsVg3V+X%DGgF4$iB|LB1l|wVZP#N>;=D*s9f0ekuhytLi zrUoM;5!wR^aO47@j3y5r*d<800$s^%`bV(zw{^#3Pk}9^Y#6Gs-a7$?Uqnpe-d0&t ziLO3^prBC;tMi?Nc?kC0kK#4-E-9_9%!Sv@PV<{Cvx?c+JXRiPB&H55)hm*B9)d;RJYdkzomz)E!+ zQ3{jKqTptO+Zv<(BGtYvZA-0YIa`$@)VnNRlgzR~1MXewRcuDn;+Tp!PQP`9d4J8EUbyHGwMQ8u4Iuhs)|6_&#luc`J z1(8sISYqe%RxB%A6RXLEB6JnBl0yk?Qm`W}C6$~!jMj*vjOz&(UtLBkgd8J_vD^F~T}X zMAGrSZ7s&-{dBygh_XT@P5aF=^!fJ>=##_*#i=VL3rcs{AdZlh%aI6+826hJAY%*WDQ6?10&lm2JHkz=RJRy~2T_@b&@`;Ve;tY|6o{$&b3J+svR4}i%9u7;i zFtpd!DnA`5nTdzo#XTaLj&?zeN-vH;D646ekh~}ZhUWwh*TYj+ccW!@Bi(lJVU2lG z3#d)D@Oz@G>XYtgSqE@ayuCHWX{iT`nvUwce5ZphQQpGjZJ2!t-SMB63qBjS@b9Y5 z==HsS%4TknoaQQp!w&B4EDOsMZWHha?uLY82@hdl<_b$)N9mvRbgxbv zdO_Pn+aTEDKVFp-{ffemAvfpzIvYOEn2*jqlzS3;%G$#ErJfv8S@_arW$N)|1@7J3 zc(wN7*{jB8ER;qK$%nDY62V_vE?ld*SJaqsJ zp_ao7(;9|*MXM-&3|Lbr2=khrV0%C7P$up|P zaE4O`7&$=~b$@}cOOVAviEueykhgF7mBe~wbNTHXRPufis~q$;+QD>LBY!Q@m-+}M zB@$l4pGZUkQeUI-$#fGtUDflgUFsv|6|`|Y-j=zIZ_>*~E>u1FWsgjv#DTzoAn)7p zPgD)6wbfdjbMCdi)>%T);$VS1g%$V9jrWuP{c^NT_bBHtM{GeVAeZTh3nR@Bvjg9`t(V ztm_!T9;?J*!RnLcvXDH=-1_Bf6nZSU;` z(y0nb=EM22l+D)u%Pyz=Ni4$0R;#KV@wHI4j|sCBJ~drV}(aKI90PBsQuePIz7Y+^*K{fZ*alz49c&aOW$ z1b7ls3Mo_N|F3-$Sj&(Qvny$aFwe?>6EDWG`*FV%TvJ7mB9;%kjl_7r#K@JBqUWu* zpy$OH8iPzB8KWmp-1IJ_`46V;j_}Kx=FbJksYu{{68;|(P7|9D{NO`lUGwb%0R;~U z;UZwOq?9CdaIZuATkl;Lg`)wE@mym7kxb?ECJ6?m-Ehi@1#9FgJ_W 
zy|?9OElgM2?D|8ib_#L4PO7UF-j;7Kjvg1e>c-$PjQL2ngvrkniHbs5Om{+V#G%99lBa)cZW$l&6dvBroar9NKvXw1hbPu1k!K*8J{X@1pwzOB3thN%#aMeZC zp;0>@8g(772L&$nkfhjxW+a)uny!VDtvWZ?Fb} z+ej;}nVgmQZm|vwZX|pKcHM1J@rLXRT$Y7NyC+VQZXb*WF1e}SR@x(HofjN#@U2=> zE(ffthTHGg)_18k3)dZ2PZc&8{km%JN>uim&2p3&9}Kx={~m(14fLO z7`+({;?Dd|w6G|kASfKV_K&2tC5=~!REj<4fgL@pjwg1pVI;Asiroe=es-d+`F z++vxO)OW0{Dw{(X7C`bjzU_Me+)f$Fah-5F^s=E|T~}I7*XE~=^OmMeURhy>ZB0@EnirO3TbI-B@oGP9&kFh?#HFk;C7JsE{AI9d?UtH_5x$-TNO`3}ch zW>uV=&ySM+p98su;#=$AuStU48ntfcdMzCHj@;P&?l;xFF^Plk*ZPld&m_h@KjE*s zgx{$ECRz#7!lk*YYT>MV=k=MF!rMvRg!3-ZK%MKV4>REAk-oa+1`PrH%EF(;b4gIi zfPC;ffJ>Bw)L?CLVgHa7`cwS5eIu2%CI>578<@Sa(|Xu2=FDiohJC*|pqq~lc-l!4H2*kW%QQuh2OP2vhX z_qXr&s<+3bXNSvKnyt&r>YZ7qN2TiX>-vjr9{TO^!{E+$uR(8K-%^ju{`am?9jgtG zeIs@=*M#QH7pt+c!waFtv9E)l1dcQ=GuMh6^eZ%xj;(KxK_G^fAv9+kWDb=YqLmxG zI+po+0%sWmji<=vKe-*lbKmd9*Fou$?^oQ;$~MNydY14i5gy0&sp1=9d>QH0MNL^G z6RwKY4t;~shS3w+&BfYZdKk$GkjA~v9N94L&w%R$Sn`~)u z&ffN~!(Uc`5oAIhslJ8%6&s|IxBZ!5_R+yHBtC)5>bq&^*_@7B(HzXnPL|d9=DdPkFZy0umvm9{EkvOw?I|LQptf z%W&9(U>R&J>0mRSJjO{6>uUU0<>n%{_O8tW3lswyz4?wPY6dKiG?d!0%QDqD)Y3iC zJmrK2x9!U^Kzd$K4V>CPgehmRk~H#(PLfb&Z$hit#)){T(zI*iV)->Q0{rSz4Rz>p z%dxFifkDHz2h_=U+OJ1cYmvO04bN-jy5T<=nkR0{E^gP;p{awU-Hr}#JKtcK5Jd8# znm@{k*3NZ1q1foZ`Nz3RO#{Bz)V|R^gn6n1KM4`^Z_uA0{{0B(@c$6bVduZ2VgHFQG*^rS6fZ_2q-j7O%)b|>n2X99(u>h{n zY@(I71EZLX*7N4Yh5n~g&U{Co(~HkvE5O%bW`%Bz-gdp6!gtq|18T+qESoh52AL}e z2DO2(6ejx^wR-9{Gc8F&*PFNQ=7M>=&yLy<>sItni2+30!fcmcsc1SuO_XLm9-n)O z`LBL!c5^Ae)>94D+Ye}`A20qGLGYCO+lq9+8cuUE{auJ;mJ%1dgQ{{pZG?wUx(MNR zTBJiedAxNx{T?~u;WVbc)4{c$>CkkjA3DF9@tl?%qUFVCYcue1>A}YlMC;yidLMP& zos~cLePXcme5@*vnrQOZIo{&Bt%a`t^H$?u4lzx12V3R=oUV&2S>}E?6(uEuY4Hwt zu={mI#ow!uyTS_-tz;rvNhNDlTu&5HZP;1w;x1Qn&~M2IyY?=~|Fii2 z`38ItA>u7zFoH~yHCG0t|14^H+0?yx9&4H*Pd*xKDxHkJ;a1Y`4-)wWdfdcy{z$s* zXHX*f$>;&-wFH-{2r^`u+m}#Og?Gm4#{Ax&g|$1F6C4nY2RixBUW~q4z7?D$Px|cz zr}RFqL%|^Ry=^$LF1A7-rZQTrt~NO^nqC@+*RZU9vgT|^*0k%rJ(<(-oIHlPYh+{8 zxIsc*U02;~3*>CoVT^Aawjp8sUv^!`QHH@bLZk)6M|g 
zlmPX~IZ$gLn0dPGF$p|Hfx@|J+llO0EyMTlQqEg9-qoU|Q-$G`+8WJp)%54{iGWD5 zO(Vz7(5JaT-orMO_Gii+TQBx#_pg#4Lbg(5Xb8wV;J`#kB&_70w}rcWw=%3-N(9Zs zf4j9pI$3==j1#nLd_@6=cN*o$7d;KqU) z@vUw=8jpQ7{urxziF0*nEp1kgF*C{~=qZD_6EB8$qr&t-SvEc0cfP@g%Sg6v;OP;! zbmNEMA{5NRvg=sI?UGR0mJ+wKTn~}1W;o;Uw~po#yooae$!!N@Us!uqm2Sp-GHvAK zal7a}eTIn1mp|~P%Sy7j*N513cUlzV8^Rkf&xw<46;9o)bkCw4D%l7iFQ;`ce=Ge>j zoe~&U!6$ug-}g3;z%A+m|||G&7QAY_?e-JG^m+X0ylIn*{?ayT#O^s6PEs<45dHJ;-C9@%r4a{fC`f zoWUA<+lpKo@A}L6>Eae%j)j_ziAig`aM2D*H;XG1az$JEWWFGwWy$BG&m~VUHh>APku)?~QgxeqmC}h27*HS8SCg zRydshnlI^Ywd`NO99?x98P62z9Vk&r3WH^_`2N2ZcYsjfhf}%Z&hG1I0&W!klX9yZ z?^PSWRMPF}pF!qbDD8X686rqpHEA>Kln=r=4O)`eq=B)D_;<^Zym}70@ zWMS2|F-L+0^fIrmSDQVwvZB@caRnHY2Or1oYPi}}EvLLqKWFVhOoocOjqbPFmJ}Mf zd6pY*OnMedYlNa`4n%L5+wS_Vj=Pd^$6LiOu^I{WS-zjACJ$h<7u){Um5R+0wGK-7 zEcf?IQ)MNJ3R`kb%;UVezJU-gws}U7&){@Nh0!OM_@b>qwE-8MZo~-=KG7`;Ga%H- zi=?i|-&~ywNc467`FM6jZ&1^CXgbx^&(hh}|9z36D=u33N@xxQx$%`Pru5NbU}NWp za})7!^G-l{#+NL_CZE^wMghj~Z`fa6ysUgcH-j(G5YJ|kB;h=v7n?qk_>&zka98>{ z_XO6@)qhnrTTg3>njilV#8LEylv+cTi>b>n-ZfbjSA`N@=k@T|cJyPb?bqC!0y!i@ z{#Qmun=jD+|3`oTff1pG)LppzIL1Di)_(sM=kwNH{eBN%%me02idLP>fXQ_~R!W-7 zg63d+;&i@b8h|06vNda1cU{#qn?5Xh9}#$%>czDo7|)P61cFKoA<92;<=mZ?Yu+HS z-N<8{W$V7XK?jJsEFHpUi6-G5j-%u1&_togc{c5jXPAZbT#s`Jy2F9vn9YBN;zQay zCJfzG?N!@DVF-0M^f%*z-{4(gNEfw?qBzH3mcWmI8KCYtf!k=;UaBwHzcJ98y&6iE z2N*Pq7aMhFHsh`c_3AioEiak9$g&$fHU7zK9+@*@cBm}+rpo)_x{qks^zrf9;hjb@ zk1-$%{w10vha}$hnJ9P-$EXv%CfFCcv|ae5cC-@PSZFvkMkrFxoMRWlUweU20DL_u zv;)VdHZV7zP0!YN z!&`2?{I3t;U%UJT(;b2zJwDwydTPweDYGfy0cgzbRb?5we*tBhHLN)znnggS1ML>p zJN+1Nj#`I4V^pN4TV78E`LVScS+S$+?FDKqzTDB3+n+X!yxy(Sx=OLYhrvkv)_|rH zQGi7y{>3H57X8}I>w+`rNf^q|l3H>sJoUqdvlUK@y=_~^+vXrvRO7b|ix%CG6NSKp zvq%qxuC{Iv);K}U#j5x9lT}rk-6py|AZ)j3-Scs%17;Z? 
zZ=I~$(0{fQUoR0^|g|8>G^M7j{ljyEBUiPg|fl9+4b{uil zvOmWW2XaQ>GE9G6DKny+a-(n4j5iIs$CHLk`d{;!k7 z^}vWJKT_6WtlFr@2J$sdD!jh0#oN;XmMJHabQQsrCCnX(p;r1)brfDBcSg`L>VTVB zZ9BDZ=!xFe$)~Uf+|W*L?q&(}^^05VXP-K!S+d*w_6hcc&&}v`ZTrD^p086kL~bj* zY666n;usS@l9Y-Q(IoSHEUY<5d@cesk$b6m8h<0!k@B*oJp_|R``7Wv_ph=y?$7a^`D~t-w&A&5G#?kqI4T_Wkg>)b`)CE5t#)* z*6s7jHAZ2^>6DpV$Ab48X1@2W$J?#cF}_#B)TanXW|?oYt;y)@7Jh6O_%}aE9%h5a z_H$zDEv~AV<1YYJeFrZzVm+(eGx8mSrM3ftCTmDm1 zzcSXVXHt>G&Wh?COSQzvSm$tlAk}prMg%X<7R7<*jFYu@(azVy3S}Pa?f-gGwzCGZENiI1a#G#b2=ZY{7i+m!&9|n%4c$0_huLAZ?EYp zt3HBd$;Vv#yBhjpNY=Ji*<4umcBMQ9mQfC|sL{=S`!u~Ov=Z3f3)r5RC5LebG1Nho zW?R|i`c|IKhi#qOC+7#1;9pNzS?_-ef9BSY=AIOCD%7-nH2Ky@pfGWav}x}lXIWKw z6yr!EOFrQxkxk%w_BA+mhJfMdEBQv4mjP_D$-_mT@XbZ%m`mq-UHhc+!{V{vc4v=J z#3Lj@2nP+0n;>7}`|eK6IIAQvI+jaB&6aNZPK+SFN!yx5QKs>efq_4knhrd3%W~x7 zg?O7a7hiL!N{h8rAqLl)IR%69x3!!1e}v9XYc)KOJ(vekdxmf^!gs07KzS*dp@4&E;`#-EJ&gyz5`II8nX8!WB5TGXg^zBSqM^)nxTB2G_h#Mjb;lGi>wxw< znoS+FnRpGeZ2lQ!1!irhwxs!1N`{Lps!ar12v17mxdzOyS%!;kO$)&z?8dFsYk~+ zsrMkS#SxaA?j-w5p6;jqpufRrL}7Zbz`^_XfL`CBA8SeF^}HeTHGZ@Cv2c2-9vlO}!R(}J#>u2(Bz$6p9+rkDNe$@@R6i(abGD_{JD!5pMbG|Fxvd~=h> z)x9d>&v0(Fw`A`jc^! 
z_3&HAK2&vtC_=_UDu90!?vsg!Qc)e2Njn~IuEn`vS5xvo4Z3EdaM8;cckrZVa;7Jr z5$vz=x1~<4U-hramJBrnx=;3pYWo=!F}J#FQ3>q6F#hfBcgUY1E?loUjnk0j%L#UP zWXUl&3w$)ax!4=u4aeMUu5_j^4uVTSQpx-CS?f2lbqmDxY_kTTnkAI>06xNnj$#Ju z;#m6P516@b?P!~YQqhClU^#Z|VDS|HcGw_~%?ef9pK#}RIDyc{n>-bpxNdX0(}5gF z^3=`GyLDA#{C0i=tMqAI6F+N_9Vo)_CEj3f5p{NtgcYYQ`7l)}CsUI$;%TO!sB>$X zz>Y75<_Ese3s7+;lJLTj_YL;v?e{CeJRO15=fOC72R4mW_hrDqq8$w)TH!af2^Y3v z`~F#L>n{9)SZHzV1z>jpd3rYjB`Z@&YvXsv>3*D55IOOz0rA(N_LmviKk$0B`_o&c zlmb!T&UhQWufV1&#ACG+@oZwoj#diVDI2Ea@u&0_c3;b8%mkVDBZ|b`MvJNzwI|9% zN*Q^h@tIf2`029D(g=tAcGGCY*(OM@7pY&e|hDy++T zz(iAS-8Rd97t)}V`yTAggrXkqGc0;n z+Hv>NCKO$lc{sV_?`77rx=AK08rInM*?^ocb~R`HWnWs?6^Fr0Z~1S~2%?)1;~Wp03++p)B)1;K=@4T}bkgtwsEJxJ;lE7={+4hlHphbBZXCo7d=CDd(x9 zg3?ZgtVy#d%o>;RfU(*Km(9uCwEOA&t@zJ6R2;p7BD4MSyd@TtycGCpHEB%6Cib4{ zONJ!bd890#$r6+1q$G)tak93l1%FF2bSqwjj#CGh<@Uo^Z?)@& zBvgZ*(W}$`d`8i`lINGUHi7Muue~8;zeYGoxyT1*x^M5KN)ymU<2{CJj;6ES3&Z5V z+G!f!5E94ZdglPJv(ntPy#~SlCKe`-v)5xB#c4{(lp|sEB%3bQgM^;xvdL*_w63a| z1X88%k{c-XUgh|%Qr-1A`5Ptc|q%?>b%v&HF51* z@F>OE3&hc{E{O`y?ba0{5GG1MQiboY?!Z}#FtYQKsjfRs28G}`LC6StpM8E1?H3L* z@pF+H&vJH~%B6E$yX5;pD{KP3{J~%C*n5Z`T+0z@*&AX~JUO}oJpF5!F2e^{YUXte zt9IJw*`1Po)l*Yxu1W`-Gmb?4p|{S3Z?YhD#T+lMnjZZX{K)R+N$Eao%@;&zc`;Hx z^j0oW|CivpUx_9wSvIk!)x+8vF7W6AJOR;yXMoJ1<9;=0tQQIj|Wy{>eJjs*<2 zTF7mnn-KO1bc&mwbvlCTii5sLe!q8)38=^HHW3v3x^4JL6c>TUV8K z)?F7a4A7y?n}|oOZA9D?%R6gaQGKoGY{=vCra}TI*VUwJW4zbn&L@&WJ}^Rc`{}~M zHv&@at9eoP>U6ol>x3w^M#?KFFd(H>`~mevjdF8O2f~l(*BrurNm*O`KaRILkfJX} zf-;H<0k>}(@8wTSHk_k82z$U1x@53u_-WN-!h0KvWA@po*^+`(ejN%;_ zuf+BCf=N~YI-+q=Wf~CXsKd|MK>m3s9UX?H^))%x0LVzS% zZwiT>4d0uL+wL&IyT3%)z`i{xQC|EP??K@`x*oCe@zn0r<4Ip~MZN`vcmMS&%br-k zOA*h!Krx54*Iw#P|1Gn1#*!h-iQ#TANxbHAYU7z!1!j&UdDibgSYp;_{b~W$=_H`%pl~1dF9n}k3 zR?D@3hauI+cVn_KwayD-Wgc(^r>K-KaV;*NFQIaQ)?RM9pib+QVp#3N9`;i7ZMi-~ zw|1$FyS4%`Yx`?fHJ%JbfiwXiXo$tXuS9j6H;LZ+?z>Y~Ar@1Y zUhfBzToZn63u+pf@$~8{)EBL+#J9X42i%^9$Ir%hp$Z8TLn;bjx(U0L^ZWyU z{Y=l&X6V_(ZMT>weXmnaCzKV8P={=Q=?#7c+!TJyVb?xW^qRfb5f(U&>%OgcoQ}qk 
z-59bA=tf=rT2+zN9l&oyb{l^9aC-0VJvo!B;oTzpUU+Vmn-%&eSGOByf^-+u8(0@CL2-5yW;QK(YniwpphIWq_m^)3wFtG?qIK zq)}G2)G~~GX?oczrVTCn0t?qhdh%1wnn>xD{r2($m z!D*q*hjU4F`XbKoVqH79B?MMu}h zRL)6b4jF4oHI$n8=QPvb>c5Dm%P*?@UPm)S=i_^);#@J4QFpv!ymJirk*@LlP(Trg z%N`gX@kjjLM)kKRD5OzifI*WTV1us=PhjfU#aI;;hswUrE2yKy4kO3OV?X9~uKa|t zbm_>CWR-TF&eZGX$u;x<1Q})y9SynFFP%$o2jWpn61ybMQ{ZWGg$w@)CGwxC@y7tF9~7nc&>umXBivoUhrEXvUe` zNf^4sQ7ZCTl`*|ZD3I7b9E_c`Vdx_H_A^i@aCpeFo=$Wc#?UM zeeDC9VVh$_3LXkI;jDx+CY$oV;~*iiP@hvgX~RC}nWc)^YvxZ_YEMLmg}Gk(OkYjz zi4_vDIFDSL-!~BRK5|pmE3}JCH{rN>VozG1`tdp!_SrzT7GI~Tbiy0B>H%)B?l@a* z8zJBX!fs!9msZ9+7Me%+UFd~xo%S`f|3YAP(N8P!xiJZxHDTM($6uzAKCNL_G}n&^ z*iX)a`}aYc6-hcY-h}$Y>z;SW!BLb4Mt^%A6O*iT&@TDdiq~!*pyAafBG`oWm%o(g zxrt#Y1>7hQH;2nq16*@kFe+bgiLf>RO) zYj`C_qcAtc+PlI{sfaP3+_ESX7;RFzlU(7g zl~W+ykj{vchHwa0OA2{0T=Unlo|ehH#Dp^ZUQ+!Cxi&3c?837dd>H=J`{E3)AXf$C2Cj4Fn>A6 zt-ulK>2a&)Y!mi&z2u2wn8LMVF0UED2Mf5{;YyJnz=K`6ITrMK0)>$%vvdl1BAi)r zC^NIgJ;UA4ZvrC*!zpu!Rp5c*O_}R>h>Pho)(aeOUpMQ&L5;tUbmIT8ao>T1BaBVK zX8wV_4KQ#vw4VUsJ0V2SFJwP{td1k}`MXzF)Z_=9lAiGJLFCgQATuxz2ngPI|I^CPgJfr75A&p8=Z)So^d zd`dQY#+qdB3@tK(mcosO48Y9lI`2cqwYlDDzi-I$J}DSk48U55i@ttaf+ug;k+3f7 zkLEraDPoBaC4%e&MgQvNSA=bVIFj(>S>9Ok-F@!i>OrS5IFQRQ672ULY%0WYNqn&t zf>)%iQY0T!_}IYsIPvp-$ldH#qoqS^0?~)d^`NF*;ChU`K}!gB@^R}Nt~WsV)nZLl z+GnIeOp%vgg;zPqpfE{og#TCISMp<@t8nTuyaz_345=8~%>zi3;Wosq#WpTUhpFCP zkt|uCaS+~-JvcPqYNoTjzagS`^Uy5o<<)pIZvRCmv&KFH3*pTLJEwBauiGX`WDecP zjlv+4Hg_I3`>QdZj3cPgSu^>*d*n-tZb#0TNyhANIOk6RS}$sw)jMgu7f@QwF2=N*x6 zR0PN*V~wTWGBP7lLMv#W(xD_~1imPw<>}MN#OJ5bq~JykQB3V98u|YNB2O+5grSHG zVQSopCv$uuw-2L*YR4KR@oS5^CP6mwT=aXp&e85oH~xnGf)L6c?+<_Ji3|;-YIyi` zkl<5oV;SK?!OxJvk`SUc`r;BpqTsdNCTpL(62YAFBmchZsU(%9TtT8#Y6Q7z>-;3P zL3!Nk5#nf*g5iw}2_XKC;-CW%IZnUXd(!ZXd9if7Ano_^Gi1T{#gCX{NAkf(774m$ z`EUU?60*3hHoHJl#{;IE2!r<9MecvnIHH{BK7c5oiavw|4BJ?nFJ&Pb$_FY9>DWAH z=e0bn)wZLxuMsv(Zgph%Fa4Zxg%Pz#z9?8LL|xb_)CrA@m9{JE~}Kbfh00iQKUc+EkCMJI)WcNS3psA z2n}+l^8*#1*{n+P44a!sRhTU}49j)RaSjGbQj6%D9D1xiRU;HT#*)Fi+ZG-MtW)ADQd8cC6bUyWA4DU 
zb1~%{a+YTVN2@ztav{+VX4=Y*Ww4eLx%^1v1xKlClM5m{wOUynkFyjnsxjOQybgpS z5r@%lY85+8-0*7n+XS>@!+Ug=`rhds9hIK^tu8&HV>jOiAckZ_ zhOsI5O=>9Cqd7EB*1ybi86c9UF#Vfc?Xx5X64}<36vqLt_0It4?}4%Q*-i?^eD=N- z?q$6D@`15d*|TMGz1ooN+LSwlGt`b_DSGvt220GLA3_4043-VL&^GjG5~7`anW3bu zrFtYLCQhRjxM0$ddWUhz6)8eacV(gp!vua)VjSjn88nx>s|ygYVq!2YZt{Q1PjDil zg1ZFs@^cih2ks{pDM_8^hlRAW3+m$ngM#rqZcQn7no+XTyrD%&GZD21uQ^dPaIi&w zz7>o93e-;UUxJZ6#1qokuI4nc9;&EGXAFV4n~?RFP+_Us(;wtQWbi3t&#W<;ko#?B z!Th2^v!ZRea+CWuaw&25;|k=S%y{O6a8UHW2@on={%*+ccc6ssiXQ6ywr25+z= z-aix%A14q_!Z0z2!K1!I24*mVi?zwn{O&)0%JLq?0o|6k0-{hQrj4i+5n8-`i9bO@4UUL1LRLGbvM1_#!yaL1O+8(0q)__~GfE3KgH@ghK8OgZoByjs|Y zCuOqPtdYgly^?hGrYmzLeEpEyz9r^ZY_F^}`6*xQkY- zkC=VYy`m2-q$bU<9~+`|FRjJ&W){gT$nfAnc>6r@ebYPgGxs>|Fw)<6e#VRcL1d-$ zATneqpte$C8Mz?(#ydmM$|u058PIz_EBcj$r5y&Q7AUf#5=Z$eV?Gw)7Lyp8J84Bz zP}E}M@7DR9w}Ln`Y#Y*Yi!7^OA@F|GC@i99&(2~CW$XqMA+fD#UR@C#YHoi=di!wY zPWUz7J46Xt=O!GKJ4GH&^IVJzirE zY{k?0Iv33PxDlL`3koiY&cVc^ZtV>-5grWD63On32amI={pq|Wzt*( zFUR(Gr<=OmG*?7YCnbolgQJh~yl?kp&k?*^l;8JSCxhdo?@bxMJR?}BBcLndhbKBb z4a*FgwVB;#r8xiDg?UrcHmkI-n60g>@+sgBeG~20)VnLl=7;u-?QNY?1)GO<73@hQTA{KZlo=5I@`KUF;qyXO}oyD^U!6emi0QLh68 zHiC+;uP)awb?^1}DkJYBVPT;ogXng!Z4`Sp9nf5C2qMjU#hnFj&hsMKwGJX2JP`?A zUS9;3cqh??yb^hV+!GBsq$VCREcBTHW6%Dk;z{1rM#-Se0qD_d@pHEDEoKo%lZh)g z(f>!=TZP57EnT1qAqfx&7F-hC-QC?GIKefz1cwCI;4X~@3+@msxHj&tjl1hzz4yP* z$=Ub2kN1s-W_7PQYgWw~RW*h?>4-koa6oQ=1ofNJ(m}DLIfSg=%S+8cgeHZtBF)HL z=ZYRrvcKM+<&1+6qpkO5Foww;5gkioPr#IpC|kmhn2r?^-wCZT=&(R@4g+)tK^y5| zFaZR}To=SXdoaOPRook&m>)|U+@EY>y@R?Jxs&kJ9o)lduMIJ zkWNYB>8HbW7Y$$VHQ|R2vG=!-#!g zd{)beXN-QMG*R&5;DqvhAv5VJ>Hb8d>g3EjB!7=+F-*Voa{?ANT)6=%N1aZjcf9Q$fN@+2n)So-Sn6c!k`Jnr{hNHu;y$A8<72(0IS~tN{MvW>T$AX!c;cz49p4;am zQju-8SD%0R5J>tgLwS7a|u{ zM$V%zwzNf^4jC`!8As8tyXx(gm_m9%N$8D4O+22Ftc_abFa_Uf{GD>(ePc>skKFU?fG?yZzwR(ZMt;qhzvWUb(!CW znFhUpJk1~Mgcg>gSK_brBiF^w5x%Lh*I;VpnQ%LwSPn5bUfBGWm9snv_Es2+?74Wi z0lHqigGuLH*!5r5H0-6j;I90pb<53*EX6+Wl5q8r+qkkHB9kLe48A`X-R@)W8O53C 
zlX*R^IX0XG3L3r-$m$upnoK@Z$i|q}lxV4KUw*MG8F+%zD#V`i`su>>*Y@}8^Oc|F zmpSWnjEEg5(KzHUM-_?5dFe;BazpO-V<)~;2C%9Eq6$9G%?sWD%DDvj-zl64r8unL zIAfntHb&iyg~;N}wM~9lua2dZXx5zm`mNA5$@Yx8_%;6WYgt4{)5(_Fh`A>X!B;N--Z+_;fPEgX<;aal`OupJ78NIO(#@yH4O>gvMhQM4hEs_ovy8WCUxxzLIj>?j!gRfe zR!LXV>hNA^rt`?|g>b^>MaeqT+bzN@QPgp+M_(%>L8f(vqm5SYZ#$JsB9TSqMEquC!K zRN&<40)$5odTXUKv$1TACI-6|ok-5dq)ksXr`7RCFSS?tz_<0D&eS2?y^_bZcV=vTS!tCY zf1Jd@&YC=h%dyX-3{xcgqxr`zjU6wK&0m z{W}kO8I)mch~9u*@B7~}@-H|D-f|ubdZc?gPL7{eGNM7sk6sRYa!0k@W1h9S9gWR? zU`{lwv08dDkF6g1m20jj%(1Vo6XnP$sxS-Xnsb}t_JSi#8bq(+fLwu%U2_RX0sI_ur|klz zUYc2*V@u5q@n>u?En9iPZjZ|1P;lK!Qb)GZ6Xb-qZ z2Z4Pyg;=|ouwsi31~-Ol=UW4C7^SbkRA>e(E=e>WvG8UJe+x!FlZecH!Gpr38sJeY z)1#)9H*yBDaa4e{G)N2b`Z3#bO|TX82H#=R>1GizBP;(x!}Xvv*m0%-(a}sBzHUD5 zw(u+IJ;Uc04ol-brRG@7T7^*_N7mQxPGvul8%b49Z0(C$zV9YC!VekjU0J}C$&mT< zNah3AMM?c6JvPfCmL!yXaaY~gw5@^D8V^P)XV(9YWA zNO+eV^QLiB@vS-$ zIcm%?EGC-K2ca$9B~}DYih+AH8ZXCRG`hf$Jr-L1aiVLHWM0F?+mL5_8_o95IowAz zARNDmd6{98rd{8iIW?$aU7=w4Jt6MDk+D_AZs(fdIjWBR#7RT?`_qHfYR$O4d%=yT zCj(TK4XtCZ!G#Wj%fcIJojmtup?`?l!qY$!VMh#7^b(fIgl%jiDh zCtAm{r+m;7RG@Z8-&I)0XOjW?TH9H(!^m`cj zHJ9o6Ry&SKv1h@3uViXq;E3=lw-sp|af~Jo^6Q8(-8kNoig7>qh-IF89_0%vPvf-? 
z;yIeJ^(Rx7H}}(k$0LyK4DmU5;+RP}Xn=rNI-^Ov>uZvEJ#JCml_P(YEn+|_yGSHB z>@!@w%VyL#5mvaZE-2F_(+Eij5IEqQz1B?~{_BQ9>!szZz3SBz;#ro;f>@ z6p-S?9HH=qq@j)Ix)OK<==q5)ONtg4E3~cY-~0P~ajdU!a`$)Z>gE}zZr?zw+!%YY0K@bS8FH+LgbznmU zK32YFE&J_N2$Z;vK1;Mvtg?;UFU9Vgw5~_+;Cc&t3B^oL8HVrN&hfP36Nv8PZ>(wI zZ$^PShTmV>l>}Yx8u6>=QY5$-O20nYf{CpRFwl6UckTS)M=#4j-oiMy0E`G51uY)= z%fH&Coo~rmR?G2c(Z5H!}$UY&D1~&E1hr=+#R^E#g z$IPq_&tTT@Kp*_l*1Emjy8PFTMH|YOgp%U6eu!K|a7S+U$Yp~hhenfIQG7Coi_ZZp zA!j|9HVH|4zvgOiLDRAESjwZesPa4*U6`8kXZ48SSU(NA;m;=rDc3-)8mY0B1^CZS zjcirB%92(Q?muC}*pH~(>oK3A;7>fijE?Z`mv#%sIosZ@Uyhh5m9*({256>eRS!0J8h-}btto=OH#-Ev)i3Go%e+dbAk6f9m z%k#nN4QZsN&>@#+ZA5Kq@ojSV9Vg;J)D_h)3QrGkvW#g5%wQERto_1cplwG+h<~2qmoZ zxAH0pUuzHZ#Qk?3fb_hz!>t!w;8@;|V*GV>YWUyL1U8zM@9gFqNk*hdr$7IU$mGVu z5KgFZ;LiB&QSaC(i7gtm=4;@9k327%7Az?-r$b~-?R^-2v~o1sS5MM=sW3V#f_xKz zrK5k@=6!*mLQj&^?h)r|cAhjr)T7AZKaAcjf3t#SGmWQmgDv}wd?YSM60Ngb7l*=VsJg+74(m> z!I-D~=r5&D{A)9^t__ zDcJAS=cRL7tkr4wSwtjrn~=8%Z9fyly<&O)w5oA7$HH*=84Y};0-Xl1elS(cQ-4WK z#uVIvXRED=3ZpR7;V@C4Ce$5rnASh`!^4EfVvWP}N4)xXvO9PGFzoN$wHi9JymD=) z1|6%__=G(HzB7Jml9^1s&1nsjIA!rBZSRy7C`HO}nwK&R%MPL_{h8ChpS1+(!_n~2 zjd}3NEH0i{DZAYEt|sWDh0t`M4}f8D9WC|W@?J^FAH7^oc0Y?hko?uBj>V*hdXypX z&MN3;NlpVa4KcIsdw&I*PDX^t+l6jdR^i#zD!>Ek7Lj1&Kv^L z0`pTP&QgSB5=WdNLlUlLpO~cmIKCQpdIHd%-jmysC*P0SOLv zss|=p_l3XhOD-ZWO?X1RREx*9EH{z_{Ydu_M>Ee4eC|1VPApIkVP2NwubJhMC4U8% zmR}0|k0XfhE1c}$qtHOJU-{R@*TU-&tp_p%(+I^Yt#q~^RbM+Qan)fP5uR7Rr{9dNb+<;l<%vNYfmSA z(kAQ9A&s`kK~U&SUPn*Up|vKLS0tt%0q%@{O#^3M5w{V**7L)>2s(Ir7p5-_p;TXm zNRrcw_q2^Nwl?2>#+&{cDyS#65*sPNEb~?m0sKLzP3)d)e22-&`A5{@c>@i>Y^pMb zjBvSEWx6(ffpAW!U?NEy=Jc~{SOGTNLXx$b59@SvNFT;komXp%ig2q)^KQa*?O!_8 zE7Z1TD2mg4qbAdsB^(j{`hvNC%B*a&Ujp9V{`-H1 z&YMx6%M)CqS-R7PU2t-`JcdwYgy-W2I9^GC8MIHMS=)%@nNK=RU`r zT2ES?t+k6KSk@V0D8kN(f1`_d({S;E>T~y{VO|Xm%}=8wuB~qzMx`7O4`h7K8JmoL zR52a(AQGg8C~b^j9-hQ~M&~FyZ-x{`+ccA{;Z&n7*<^#QZ{@?OoB8E6UL4H~i5(9E z6dG&$?HX&3_R#q!9iy=i#F)v0_(zbJ9=a zO{Rr*9Odsg*B4pA?U|(#DgEGf@%YXMOdQ25(yZN!_uc0q=}XN_{b1iayaWk=SaTc; 
zOjQCODtJZmO>;RfG^~Ddm__w$PYx;1Y|`%N==NnAu}z-AZSo)O_}JM>CD0nI zRl)4uvA|`z<=Ba-l#jJPtQ|xiZsnOH+x|#ScWAh#sM}BK}LC=TbYtiVFBy> zOMUe@*Ufcjh@Mfz^=Io(qG$gZKGzL+c9s12IA@v4Hm%`T%KOWx?Q3~}9?>4fdC}|1 zR8?4RpJzdN$UP%YmU6E;XT)M7aP3k0HYr7ZjbX|3ho#|bF3mT@6iEPEwrvd;H$8Ix z>_M?Fq#dC$R_W(MjMU*EmGJZ#SFC$9PWQ92+lEK>m4}jYStwD zfT@fPkv6W+yvfd;bEsA_!`4EHN5io-n{^~2zYE0C z`zySupL#4ss(pz0W1M_NBW||i$K73|(*mBeJq2~0Cz?N)4uGi-BA8k6Oco|DP}O_4 zLu<(jRmTHGWc0K4$l=i>;QPu! zW~aHvfy5OGRBS_FCW3s5V+FaNJZmJKDRjL8A@IK^#LoYmCIyD*Xi+oa&8?;Q)5934 zEJU_hEZs~)4(*p^kF7h0};kDhQ*+DC8!0S z$b?p>^!Dt0%?9p;aquGh-X~Y%tYLOjt-!} znQ<_GcL2Y=sMPgk>JN78vo6_FwvLGWEDQK7cO;likD9y`_J4@mD#ad~P zE?z{F^8}o(OqUOe{Gz;u-D{HxBzhuu+@YV}cJi&KS7<3TQhkH{m{how4GaUdVHTes zk3A<<(;bWc!jNe6o&3#iGIZ>U6~;uavkM(GPLO;iz0ib5?(vdJI7@=J@Hob-w))*z z4)CG>1d33>_rh&F4r7>xl0RNX7&DdZ>Zzsi_vs+5bDnJb-kw{Tjbiq>a4tI^P zL&rJYT~4>_hFa9(w&^300BlIMnofT%*#&fG{#gUypKL!wEfbzQo1(#6RuQw*e)8Kl zwg%Jx(4kE&8_RtAof37 zr8#%C#Mt<$dnvGLKfVjjqMaye2AtfNktn8AC>9Wsp*nz)NtjYZKzg>)LVUoR{fGAj z*bEYw>K7L5TH~X$kxQZH;@NCO4~6Rolv{E#delgi^ic7`qhp-~DvL zf-!Wgd}IWljeJH_Ah~0ec5wdONIP(+Zz9<4hIlZS`>?mkmIr;2Fu@(BBzeR5kJ811 zG}V8%7KPND%SxxT(&fvJBIA(tjmnr8bHcvmxwmr!2&z)H^2nH2Hvv>Y9-Uv7fuB-zzEH3pm<}jTqC8 z@9fL~5od$~qT!bpuQbwrhMrrM{W@%{#dPoF0MrXJy92OF@Hj_e?3R9y$uL+`Q^)Et z84A({hx|R9@|VFWc(!_ zu(iz?KBO|+B?{4w%h_q7SzN*(5_0%x2XqUmT8UH&DqRcW+1KPj5Pk_G!Ic7%NY+GM z_d-Ci>SyD-NT3~jnw6C`2^cCKYXUN+Pg*V#G^wx4?tyt?;}l%Hp|fEDTP0o{T3-I zWlq-4NB-=6$)=aKF%BURt}#b2ZOmac9Lrcp>HL@su>jo~3)|EuINsy77l9edZ_Lua zs1+{8EJgldM-(uOu+0ZM^rG`4V~$wxrG$8e_9-aWP6QRt_JmDkJ4-S(GL$p$JCuQY z{yQIKvCMGZHH*zyHFJUfR zS&uauM!XeA@T5JNIN~MWKsG=IoBL5}_xC7Gv3>;KrAAdh_0LXYo8P`Kw*Ubuma7G? 
zI=72Sb#%$&jadLEjK5~82l_CFf`241&)c>(7ykTN<1bb&I&}_cfN_uF`n6D|WC-JlR4fIad$GXlS6%*k>E4aAhGBGr z&j!uRK43a-4Ke`~In6gcDhgypjnF229#sD8V-d%D`<~gR{0z75kLzv=k;P2&_s2n| zdnI+XZf0q2d2V#BSe~{32URT~srFHafywo1(Pc>0HE(A;zh?*czd_RnP4L6-pmbFW zsXFYWsijs>y{pH@?mGt{T&!5TCEc&CPSYAjNRB6DiN^FG z@lzRveUQh2$tB=rlKiV^D>Q+M+z`QslmP$dAC7mr`T-srEOVLHQ(x~0T#^7Di5}84 zW|2Tdrk+d@pDXl0AEk$yni_-313}m8;jiq_)->66sOcZ{g9v9zXJLolwrv<$0NubU+>`+%HJ&gc>|G&fgqnEP1rlDLscbBk0s0m*Zdq z#v?q|l5@TP@bhiLqTd#{*^xhW&IJ{+dIYi4frD+hy+C)g%B?ds4)6MMk!T9tKmL7A zXhZ5%WFv6G#@61HBQTxG2`2b2_SZYehr<$tvMB|cDJ`<6(`XRKypxXG-a#*2kgKGE zy#ROy*(gAEzFZqoB7=H@iaR)Yl`Vkk-vQ%Z@!h)OIiPr!62{QjyI;K?vGvY1V<+B3 z!>Xcdkp5V-4y&|j4WSCFmj#O#L`S3^`K*(z(;6M?jV#l_Zk0y@dy-SuoohXvLOf zgD%EDnp*J8(YuJ7Vb9Z4T9~PcX z47B}onJnQrI--S$)zQfHUk|z?WeSbUHJ41R}Apm<|S?vHg)myg_{qz`+9@yl^l` zVu>GWzZr9LZNQHwgs)JNYFjNh5xk3U%1jc6UfldqDP#r3WrQa3?GPf88=9 z`iG63AEEfdf9gZuSW~1Hy486u0a58yDy1mNKiQugTM3M)V;H#&AQ$HBSXN= z`XVa8ob#CWIU_(q&v&1I9-=AHU>h^JxEk`dBPWvovpm<0@tNh|Z+cSyjoWIyN9NTV za&gl2Vum=H+$GxNFWFvGvp%RXcF~*ncJO#uOx3#mKwbDhH^xTbrF^G0oSgF^lY_^B zhB28E_jv6)=fyl<{1><)b$Df@$Ds~}L&K-|>1w3;?I zfodY86BtXS$v+1MS<&+^j;0wRAYr_xb*;&qBN4-k*Uf2_Z8sq&1$=dEp^Nfhd7q2zf?G8|AfIxFap}Geq~3$zS+$)OUohn zGoDj@K$#5#*}k4gb)zTgm0ze%lq7|uE@x0Az+2Ga2AQo>ci{#!udu$yx_&Q?gman>hAq^K{Nq-K2;=>>E>1cn zbQEX5YbN@IP{sR2rSLvusVOWpyFvk8_d5>R$$B7w$1%ffZldO!R;9?k(?oNpF0@jP za8MeB9j^Y;kR?pdSCFXmlOiYh`~26wFnWI$u*6oH3B6|m18wv97<(A0dSEIf_Tc-@ z!ftp568lV>Rb;?i%T@D{ChyFv`V?*m=m;>;mwdPovvC^mLkCcfK|Xt^nXa9Wt)Mp? 
zblV1QU@llimU)2Knc*S0)te^{{g+Wu;c0k?2ll@#Y@Vf|^Wxl&B2X|K>i zzSw5*Aq~>Y>l;XhHZh^Q#$xl`(*h*LU^kA6zXH(+s>X^afQWzVwF!*IK>4}oz^-vO z8v(qx^$abH(>jjdF-&IPRAlqkIsOdn@dEA@Y{(rvOUZ3l-7Zu9Tx&WSe_o5u31O8@ z3*M&?qMMirv^SkH7NYCk8EiyYDp&;~WEUK%527#=#gy#Ad*KzvT^Vz$C~ zH&9N8C*TQYJl1^QVQ7t3pYQ6i`ms0;+*nVhg)O=BGQSkcG!5SuPUz zbnH*JGdjJM5;$r)8$=fD<|<3|q3JxTnbu{O_k=2Eu3JB7qnc=7K@#78nNUg~RRPU+ zqHG!R5Va9HO2~zFRqzLipF8ezGeq}06iiY%(NuiAP@7jjIPg_CcB}?J zX7l}^qj^uxNFNbF>juSAe8wa)J!TZ)`CNJ{&d^RJEw`vyspZRfKjA|PSW7exB(uj> zR-NnF_*NHM!VUDIY{7}%??jI@q8tFMUy%3)wRUCq@VH&z2}jTHJKI#b|A`Nvb2Q$_ z-5x)RBdLEAK5TGTWd?HVUI=ekSO`kX8rfTR?Lay~UQ}Srzc0)OF+aK9T7+$Gn}4|l z6r6`;;nbY#uzJrF#6Q?gy1CD5P6OJ|rau}Ou7}Ijc_6`6C4_cu&lB_op6<`4XatOV zQD}N1Fbjj=fyVG0IY=uYL`NNA`A5?4j^JZOGLoFcKRg@Bdv*C^rl`TWpw(q7;C)68 z6Z9;Wz{Sy_Dp0fJ@5fjas=q=l>I#mPNS3$1Wu;k#E%6^P6Z#uhTsq578y@5<1?0rg zc_12x-$*01Z9YY;PBAnc`;tQNwLc_cY@p6}GOOAF3^iNg#UJUHbWMZOiJS6kg=NZP z+=&mrHlTIc{yJcnX@d-t_D0zp^YsM&X)-yUk84g+#Kzbz^Ew7;W@JAUjyDPySjHKmQG!6#obdG_Cb@DC808XF z95daGcgim&p8t5l@X-wxyg@k$#tDOHM|e^k>`@jq6hN3S&@@XZ76(OPsAVUTNn(pc zp?KSW7D^GeAL}`GV1n@L!C48f!TC6~Mt$Bijx=&}jYJN?j&olcRUzQh8U=81$f%L} zaar4!l0V?Sn+fzLHd%EFl~yr3phfHN4Wdo4{`t^%J}p7dq?kZ+rK!IYgvO0Y>_3_{0k9D{GiA#+Jw?H-anFP z4-l$vnxAED7HsPrr!dOL(59}Ok!k*MoB3-;Bfr&Q<(LI%Cdr>cmQ0PqYx5~~q>7tz zN%nNf4l~>7ztt_Z=|ApT2`+sxU z$ffIyl790+3TLjkVv(A7l!~mkFG=-k229KS) z6$*R9p^25$$4SZTHvAU}ffhZM1wXX%Z66GhqD`5KnOgmNHQP6r_DQ-ygI({35CN_# zG&YE2HU+ze+Etk`sVTRCW~rjLzr*&^#`qbhK?W2t#1R<#Yap_feahtE3FvnJfjqqz z1Q2H*K66bcJJ9Rb+L%wLZe2f;W@_M(8(YhR|EjCHKZn^A>wM8BB1UHKjBVn+3j?T3d3N@cEBSQ7>R`^;t0Z+S#R;+EXvmn6}W$>A1Yd{Z8e$#|1dC-cbM{LR0R zWNAE~O0u=AN`&Js@^wa z?0grs6uwpInO7Q2O=$UuOKwlglzwu|OZO83wVYeHI07J6kc~PWCPhZ^m;@%q7v3=k z!VCR9vOH${%suE&L#BjD7ZbFSpF;0;Bh>o7YGIrqE(MmCOEhtNUyLZp8JYDdwfbvw zQ#Uk2aY8MiU)0{4Nhga(+dD>}%ZQ0#j!`hA&x&&h_c3RxIBDCRt31vUS7Q8puCUHnhO=$NDr7KYM#1Cuv5Zb;rn#vo5*O`4&t^zH_FNnd@C)xk(u#g{P+c zcS0E|sEKBupQKZkO-a}Ctv-hdV*L;H)H%eaP$Ui#`=U)$svjb$+>0^&^slXse74`l 
zelQ3NgClPA*j}6KNZ%msZ#SS+B#-VWH4UvO#gW69NOh(ghQ{G@I%+9xwjfZ*G(^>h z=|~RI!Pk93Ac)M35mx^AKZ^qd&xmK%9TP0%*3!0=dGJ+^+qp6nG_lNvIil& zUEWTFiu(nqc#HGpYU7};%)ut1tlmM$q7AiQ4YOf>>{&f%sLb2-`@WQfY zvdEY9!a?&a7}v-+B~{MWm4s$bf2_UNE1zE7@Pm#hcMommUilB}WN3X{6yUrsHNjpo zS@-NV@)#-5kCOrE3aWj6TmiUA+ES0pt!CS(xP5wc32T_iGo7N4R$(wb_YuI*|2VbV zG;nsXrl*>*`J*`Dk`xedm$#Edg>WpHrln+Kth&9~(&JG7Bs8W|ia|5b9^9x<*|_8& zzE(89b`eeL-~PK{)j5=XqUrV*N}ZUpahF(A|2kLi#lF=VT5*c!&ZUR#P)odyN~D7; zuLsZ>gd&{)s2lr*LdigUh*Q-i!@DZnw#kb%Sc{=vysT5AmUvqo>^V<8S4wf!`0tSX z7A|*CQK7kcG-;hpf#1bDC}O6Zx6$=OYKvW^G9NTgK@)?*_Qx$0JoA!9iE97T}{X9j;zqC8&E$2!kU!HR@^6Mx{Lk~h7Hxz z@hEl_s0xwbXv_goRQzJJ3vM>m2Iz(04||KFgE7R@(vEsM?z6;V7D3?D~Zfo}p z8DK&ig%(H7nPL0GgFU`e^oL07Se*NFUnEcpdSUl+!{?q2tA`o8tWBD^f>2~X{lD&* zXwOr=O(krMH^9Zlu;;R|QK1C%2KsZRiy8iswsC!D7Bf@naA-X7h*&XJPduWHLdimW zxWfcrMMAA2(iMOT0*6I$s0@mzjHm0U@ zNl9JoheZ_Q8_`MB=}QMSM{<+7#&28yy7XhgM0;%$Qrr9Ho^$<(@t+?cHHlRQTORI@ znOD^MNAH%F{jruPaQtBQ*|sJblA`g4MTGRgl;N@_;4 zJ3)-#kNgwxJC;Bn#n?-0&G6O z0t+M$RqiliTv8y)%7rVz%^r|=B8h~@RG^{Lm|`#`7RFFXV`5(Ml%g&DopTJ%%)-Jd zg}&z0@5VnsQ{dWG`+paHS>VF&K=<#)3jRMED}H6|Sg(TA0(BFlhp$A*H!?Tx8J)`D zKf&wy8LjaP^zM8aWn)BqSfh%S{UjrEcLlnjL65_o8MC5)i6a0MqK59UOH2R6Vvu3* ztQ(VOUB}-iF|^{FwcrXYHQkOqU4T-LR(XC%#lbPY8Mk;$W6z<{FX`FN;EZf{w^EC1 z^Mvpk&Yr*gr%Wo5Ssdb8Jcu^Py$4Mg)-V)8J$w(}fz*ROetitTxbzOFCvTc2U44P@ zVKr>Yqim;cQq6Qu!P{{6>7tBQ-x?SGfvn(c@QBu8>8C0)2#XSmh(ROA_B$9Je8*TE zCilhc-K682r$H4C0+o-+EyT2>X~Vw&7V>#yV4NvL*e3H~JJ@-SFt;X2UIx^|ZfR3f zn_I^*5267n>BK@EswgUDYSs?sGO1rJUm*LUi@=r}k}l3Iwdm$elaoPoxzz%wlgeUL z*Zz?&>R4u_*(S{YYT25D<$O#=ZZk$E<@v8eKraS~`Jr~U`{Q?fi`%1eKc>l9FP`Gq zFK4)am{I0As)lL9rh`q)l0DJ^(ckSNm2Y{B36j0H8;2F9w)U0#yDD#}FI3H;El$!X zK2}p7eL1v7=zhStI9x}ZD{q0P;e&phaTn*+sKV8!@*Fzhs}hZ8-BX&s0Ml&9_Dkv> zae2S*7-jIEZq(BT9T!noZI{%ggg!pV?fJfZT&1QNashcweWa`KPrljHOXZr@-8=Ku z_&VXm)O$UPo%g85fS0#~Jlu51y_W+uLC6bJHKNw-am9f8MmD0`UW!umiNPR z{2P5<3-SEI z)O8T~4A)aBz^wQlXN*?XQA(2IiXP0TgQFYb@SfWSzPik0Bk()h&#s_o z=97xCovVXpadx0NHcja2qW;w`(+;%D5O&-X1b#H9#D{m~k|=*@D3GAR*C}z4nC?b^ 
zV96(2eV!*?b?pqD(~@bwsmAbo-Glt;Ik7w%Sdq=sH4TJ8G};R1|Fqwg$Y^`HmqDf+ zWAWN&0$M$~Tmn?+tQ~i$@||}1_g*2E3uM<69xSrYJE=H!C!5VRUfqsHGM8066TMH) z&j6FwD>FzPw89TR=_3v`(;q8dGs&hyIY5+u5HM16$TBwqjq#*E6 zjaRCu6Te@K)={1(xXU`?&lqafT1wUIJ==*NSjo;;CsJ_d# z#>VrbN6X-rwz2&VCP*bT36)CTJMZ&CtAI};IT_*arOToM0<&U2G!J%Z4#~b1rQv2v zjEwRy2~;P;W%9NJYtu8c<4jgnu@qs53giRg(i0>UvZX$fJ+Hu2s z>+F~r+zr5I1I_T);?zUzuKBY1l{PSUWIpngf8ionWPwud}UuB z`I+ELxr7k)IBgcbBU3zSSQV!y8$?fDa>NG)N)LzB=&Hiwqm6#OM0iKJG51n!vcxb5 z-?O}=73VH{VBy%aqr1u`7gOYxD?F(O><<(Q$I}47f2+xH`1{Y)*R6fR7eDg;6CR1w_o^s zCRE{44J6TYB*|Pk>?GBx65-$$MeCbCdvshQ_~QH*iBv6UW|5L&6tFDoZ;k9a!XX5ot=H|ZUYz2>gSGN zyZ0l&?P`G<)ARD>d-rW3pe&CqwL4OgqX7XpF8BGbOJg>qDX?41-g{qXHDJDxmcQOg z9n1zcv+lPIdXK^9KH_NamAfrgKXpB~E*+qwh3m?B`Qa*cb*993S=G7!Akv!ThvOgL|#8)^9H^*c^+-T2b)&G)b$XgzPp2%g?pA6+Lkr+I_l> z0e7A@@IzuX(;<6oJ?OVv(N@o_o6xRte_6is6~C=VK%MAn$c%sG+ly2Fj((altLAAU_qj87@Z?(A)S} zdG07CJ_(#1t5zLADc)~Gl$9FPE`v@OOuby@kxR$=Q9O2nEQUM-JfwbYC9zSAoiw7Mz zdM$g_5j%L0G{*>QOW}5^0WiylD}*IwM`BP!PE=^xC}y_{v+-E2LPW!8Sx-w}Ew zg)ubvONF!rAm2n3YDaLc1tsA0Yd2*!K zsnfxcH&*PAukG45j;<{c0O{h{$Bo%svx*tP80TlTla1TiVOC1+!=UcnodEwAUvE{a z#=$(=O77e{@k@9=EAF_dUqc%u9Ukv)_te6} zdeZR_6B8?Ua;!MIoTgNE=TlCSxpLihCkm(hZlTnH^?j6Z;8}sSUJ~@k2A-xBQf0*P zJUJHiHXgaM{R^P{Zp$OV41W1UY6YiUi^wg3T~FQyc2zh}i23H|!SgD{CB`_ggR#f} zO3!$4mNvc0C4go0yzUYvWQTur%|_0Ay0z=lfmR0jN!zyBB&@2WVw@cDe6Y%z#V+2t zHVn#j+VDCZ?rC|+txZN*)uPE1PDEK$^tqBL3hHxIR5g@OOdrCy=zM>|h$HI5!lH{0 zW@i@ElcFLP=nJAMsq2e%`TN3!5_7?o#so6%-z;nAAC~MEc|WWDKJ0?Q=cei7HS`Pu2=j-*hx@ZjK3o=R0qP~W_z;2=U#3bMHC{v z&iLt)PV{h8v&?vf_$Z-igp9n*f_10lke#`r?A}Nh@r$~3RLAnH_LApV@dn4W5tx`&dba#hzBi(Fz z)BGMh=X=iQyx04m*M+S0Fl*M#+%xx@xfxR<^)uI|3bgwPt0Y8CB}K-$j*QuRnk|i1 zvsOJlczA{4?yE~~J&aR+2Fl&2Z$dSeu)o&_23tFZv2Lc7jw{?Q%!)#ma&2H=oXh+4 zUot>fnm&ItNIK@QZgbysTr9jF&bg=N_LmH(ko>(YTeY{dxxXgyN1i%UKy; zIg+_0y;&IJ9qjtinOVl*pUGEX*HJ#h48i@)`EsB&Uk@<4OLxg=wzKb&_jUnJPqB?E z3i(hehSImJ%(Vb9T6Q@DDN9e8lV?D$CIKf_uD)g*ZNlV06@_) z3`C+b&$fx8x@p2;uDW0FeR?Pkt`O8Uq+p#yeBWHRx>ot^le#?Ar>DlzQ()pc=2YM1 
z)|Z5{`e{|GLS1GV3?K6=d6m>Mvrt#&r|5e`6grqA-Apmh$^boeYo!lfOpOj!c9z|9 zNz|DiH&?cPvf59nNOn=|7bZ?1MprX2zSLw|ob}9)SM?K97z`^nYf+4~-&9w})oOR*+itCS&ml^2wXl{+jD{h0acE2`xJ=XH` zQdNu?9@F!^k1)h8)7w#^vxPBFAV;IwWu~Ij{&S$76YZK_^oAb-DZc0Qp4L;J@ME00 z+L|J%tELDGO=s+%fq461qp_*N#vd0w#R+| zHpOkTi8e8;t+o_3YpH3)BqpJwQGZ!3nGtFJ@a-RJ!CBu7R3zvsq;}C(+jL~QskOej zRe14kygTRSYW2F8r<3b6Vj3KciIc$vQ7{_Pf>RnN?{h>>QOOy_MEv5$+P zN_yrB++b}q(C9f*^>xCKt7N17h2@JAk6$Pfwr@Pm&V#zs`4zn<5{mM2G{-jaVuV}6 z(XkAenVizhWmhcna3^?w+&2@0SEtMop-$GDud0r?7qQK?6wOw?oH7z+KhE{_>g6>v zo}Xjd!|6+Mdf#cUzMg~)JunFpuFmmjiGjd#Fm+f)3WQjHErC zm|lz?wQS6(;4)gjAw~ZSyuj9BcL%Sg=(<{xs?A}x zes8gE#pkv1Z{jzMEZ7Rc?7@v1_140(?uBKxpl)Pbrq?_kDIodMY$+$~qO$QmzZvB{gJ#%%=MIV)$=& z{lEQ*1qD~umf9cftdtx^y*+Np%HEM8#M|vaj|fi^i`61ndHOllt+$1OU}nf0{0EJm zqlIa*bJ&nzlaRRvK83qsT)Gh1DS|>y2+QAjg+NFbrmW=4i07+|kJasq0@g(@H&fk^ zJ4Ztr)&PRJPY1?!nbjV@RGTxdm6u4vqlhqQ?40jV?k>`{;d(tn*GIOm?9QsI@c_2Q zt#)HxDOr*|l()?(Pkq#LqL4us!oM@nLu7XLOAtlN4(n9tssHTzIrlx8mv4clUGA;( zPq4vPxipB6Sp}x#!#-hOb56F885id=1aju6iy3pE3Dj&ej-JehQ%YRs|A=tsEdnz* z?^jPoX`WUOrOF488v9(ZVMGn;G4bUn^*BD5-8pVdKk`!MM{jw0b+F~gFyii8+g~** zE9aO~(5@|AmFhov8YVC^hI zC9#Id5U)TLt55kyASq#Os3q13*VEuy4Oiwf~2oh$l#H5^z8!%XQ$sVe5yLw4vJ)W;# zs{ROdoEp?=IpAz>&@rPTX^oDu#stUQk807rXXk}Upjp$a?ysl%?RC-c2SusSjo9ee zvSs015N@^NrASty@k4I#lby@u4|5Nne)ohxo`bJt=}!kq2k!ujHBQNzEKdB(DPS7+ z*n_g;v8K{8(k$E7;ebZ5wu&?Jhs_gc$kys;%Z?=o2+WOi-2;hc_U_le!2CdmP#9eIoI*|(Av#N zb3>?|l~_PIPiBZN!HlgrL3_|6t&FOrvo(aXzPj-WfR9Qg7UyGbKyQ8j zOQZq@fkk^~JoPtrqaeG*;K*KVN&P>Yzct5aypFn>TaE`vX#-p|mP_316SxAGyMh;l z`w7t+H%Yg29+6p^S$_;qHz~ZrMs>O9itca92QI$z3`KhhVnC|< z9+=H{A5Y+3tw!e&=~q(D&(j%wl121XuSm=0RDOqbBHg&y?>?a_jfm)HC`BG!PI6V8 z*?qZ{x2O&OuEfFxEq43(HK^!YPl16>zSNdm`&OILiuxD;aUZLTBgBG8-7*g}n#eKkBrlVQZXZa~ z>XCN1;Z7?m?s+~mbCdDA8vVoLDQE9 z1mTNsMq`I*@lp^dLe)cF6QP~=Xyq)=?>L-s%(51lPJ*HqfRqnabx^3)2Wii81+|o( z_lB80lAi*~UfwD<1WzH!RMDfOf>y!Q<{c>4x;qc(`$ZrXV}1X-fP!&@9)ZrjkWn5N zY&^hf;A6fc5$SiQqf5=~PTc-d3=TuxIb~C; 
zO(x!QD-19eV5sHsufL+Q1@`8w!eWa_&pq>Ly0lQBc9SOym;WkdH8e7CrYrd4w0Go>UIWRsEVi%Mr^#YSt65?6q?uw^lMxjH6*F2u{p zJN~LIJFdpZny8=;bnGCK^liwdB~K3>6n&U`?7|_O=JBhGIreyLad3|@9nK?{&A+}p z5!`5VnKbls^+q9dBEx66vM<;xkNEP?E((*S+NRB7w*le<(Z(_U_40DE{_M5Y*ONxN zt}AV3Yqpe#a^*4l1J3Jf82bhV?Q7`Pj43C!y}S;?B>dxY?HT`yD?&es0L$2nzMyen z4XV(6TudNn`)9znW9yn(_0b`8$4tfhPv6`LIW zoX3VD)-`UD5zKOOWrIOP){rK>dQv9iZE?vKe77ey(q-h*l)RZRYwKYMlSUh5PaxpN z-@@WD>Q$Xy-+!*Ow*AeXm?dqHCzxrfJwE6sYD#>*gs3=DAADX_P`>YUh^D-Ii-0G~V4;3TD%mCeUTqVo!Xs%FCG^sk_7C56iJ^hyr5Kx|&>f?Ez?K z7OPZZI{KEIS4GwXeHzh_qcRW$^#19t3ma_A-a==*;eig3n$O*4|S!K3Zzb*`es0jAh>d@@`Jg;~#HK^6g$lDn1zMn{Cb zV=TfzJqw7Q(^`9j=d301+-!lhGt$@w2;!kw)5qsYo8u^J!D)EfSR2Q!>p}-Y(=vbL zg;{Ie2YP}_T_gIXiT+uSU}ik7JHAaLB9w8-GHA-!4{QQoflU^D=G3#$p6c|PjGloG z5zHI65VS=cXj3*klH0_QAIhVzoe&(c|4vxn_CU}!H%Uh5yFh0 z-AW$#?YI!^*{h~J)8109WhOYGXB^gVMEkvW)CI`m3xO$a-)-2ot1>Vh@Pey+heLVu z81N8hzL7n-p9&m!*d{a)QrG`{<@bfPI$WT|oDWH=jy^|kyePb`^|G|bO%YU6h>(>n zf-4a)#U;ZVWSM%Mcj?Xz721J{v(Ql?I%1J3XlDCrZSdn?q4|lD?}w0p-4W3z@sLv~ zmdM02Y#P#G2t4KcC>;?ryKuA{xuobnlJT~<$AIXu6j~xhzohCLQze2H=Uh+2$jUDstC&U5H%aN4}A{}ChEnQGi#EY~Wuah^AY zaaU#@$o+*;>~g0WqnJ02yx~D+Upml<7{GZb8=Rwo#LD}T+rSrzwkvi;| zY^4MtIWa~tysRZ=Jm-%GWz5`=YOdd(xWzrVGNAl%OExVQvkMrrVDWozZI7F;E;CHF zX|zwgyWjN2wzsNyF07A`K);{K)@WdvFwVR_m5PY%KC80F!bw{XU_cR(VZ*kByEhx(}n6PFg){B zR@Z@X|J8Pe4Q|=kjS|)aRL| zUaMBuWRez_Kvd%9Y-zk=UH0U4fFx{@b%#?|@Wky=+X>t=AiG2kYli(2^Ymwz3=8jK zGS}7y8Gl|HCu9Rp{{n$(9om<~87AJSP@(b(mLcvaFJ&VA3-^G)4dsS>BH)E{gWBCu z(vMkM$Rg-majN)$=2;cPyHlg#}4odz;%~j z^q!^)ai4w#1tx2iYcpT?SMx5`EZVK61gh}tU?7eCR?+;|w*|J_Q;th^vNRn|tToSV zfz^%_3=Z9-J%0OAXM0OS)w~|YomDosauRc6pCuRmEV)Luh~Bd78F!Ncf2~OP$@A}P zA1pGX6M$m zH9p--X2^M^9QiHLOvx-~ic!J>?gj(q**XcsporNGZf)WEJtqd-pl`4Jz2^%jS*W{^ z#W(o{B4iEDz0vO~Rey^B0SKm;K@Vor3KBC90XL5Van_fvmo3wk>X>02q;Ho~?e(>vmRg-Se(HL4>7_aP7YLCm2TzkbwTK-*&%j90Gjz zt%z!JsGz35$&XvhW+(NO|0b3gV2mZZ8IUd9`_?w_>1a1fsWWNclo%18srY_W*S|jh zw)<`2s~ue$y(|}U=2n#633roa|8-MYTY-BXsz6Tk{lrzD&yLTrIknFJ$;sdtbKKQ* 
zvP&@bw7O8E&zp|q*2SNJ(5Dk)c>PbTb7;^oJyZNa;xPr3cR5GbK*&66yw7ty=Fd@dur{R`#uronMKvtai!;W5?)`WYq-G>5Faxn1|7Wa;Dmr%VfIc z5f*jbQukS(U8LtF$e$&n2}Z2=#80W*dY|1fm($Q2WQEyuX-3SMPQts(IcfBILCJ`v z9vU1%Mc|?i%##3qr@exTRY<&j?EJ?sL5*A4nwbf}sw(BPe^OinzgFR1MK{&&T+Ut^qNS@FoOg(^8y z!6nqP1{diXmAkTA=Brm_(RJ+m{c=;E2K&c-2$IfeTp(QI*dyBY_ng4_`ELdND}Z2v zm*?aRwCwybYMdtBzjmVwGVi0m4-sCTId{InJQ`fJ!Bu?@kiSV7SgXt1(ag~x(V|z+ z6uluug=Zt|A$8Q9QS$Xi#j;I;CKls;{P6D!`F9Hg9nAKq?s*jD+eFfl2}(mWJxeB@ z>ms$APwHG}g8S6@X3cuCY``f5llWCqtlKd8DKvx1lT<=zVJj7o^`|a?kbk>IP;rRU zi%8y9t~UPvuGZf#?={F(Cm}!?fz`3`0aSwS9`J^G49KM zrj2)@1@27zum7!He^-rUf%!>QxQD*2x&Ub_3mE%!Q(C4>dXPR~Zu5igIU5*C!3wlX z(P}@zFT(B8b6s}-FwmV(zhqI+EcuNS7HRtHv1xjGIPyQ&97cz2eCXITpNT^hDmGPb zwMy14XS&VbRcq8uGIM$NjMz_}S8W!lx6~?$Of*z<70dUgOk-oXRED^w3y{vcTD~YY zm&wPbq}W!|dafY=y8Un67)FMTk4d`;@RKk)ycKcr@T6#2)3h<-O=@9oKtW&#d0Z)3=R z+Y@AD@{L75er8;RUl*rtDUF-}_I^&cC5P_x(xbZxhYwq*%Qkpih)>E5FJTSwZ7#b` zBJTr#)_>KPH5%~`!a6tnEZc3&K*R?HBYO&!U+LSaIzy^ONC|F?aSFUKJT5rTIPe-1 zUw~4NN7i<4y)q(J%GeCOc>1aa<@HxJC(D= zBtKfFVLwcnX${gM@k@5{ZG&x=pX6^}KWjoWVhL|^3U>`dF%iwaG<@lhvk-ZxF!G8%Nhih`LF1IQDHoE zC59AQnDEgAVm#`6h@!w+?z7qthS}0MF7~Wv5>|Zndwy$@&yW?(ja_x?gW=V+XLkTU8JUNQIrf(2jPlCEu<(%^tPB=nIeMSA_Xj6pns)vTd z1d|)~EIvb*XPVe8*z+KptDnY#amaVNOClz}%H4Bt z`}Z!~BSz9;#&i?p;xbWoxslJ|x8hb_L}#`Ss*vLJYL*ryE-Bv_$e^=gJ%-h2FyIos z)c>5cYS_bFS$5io$AGJ-Ym;x(Mr6wXc0+jB!1{+M0J9_7PCW7_cYf9%?m9`K_dtP? 
zKd)YRk4?GxJj6cb&k%fzeYU2`gnUgQXVA0$sKfSJMDzCc!L3I*g!iK7PRn%C88zJ1d4@>fj zmbJm={TDALdiBpu$$q2cH2?EqT4L?QB1>IM5|OQj8400cJ2~zW&L6Y5hF1%?Fh+uKG3z8wWp6ERM}cKL^*A!E6QD}sc9<9eNF-?{mDkg z!$#-Adek5a2A>GqGCVr&RE<+T<}M!73tf-p_mH?6##F5B%c<)ae%eXfS zGbYSG?g>6S>}Tdr&f75MW%rHvh(f0KpO@YXc~%4Uj@jcr_4%aL3}tLA;Qpmz;t%ny z|1EYPKNmX$(=7n9TI9K{GC`!#;zsU=!S{vBCbdaN@^>j}C#z4*E!Q_kZa71P^Tk>{ z&$2HOdJC9C zdkP?dDMz}76NDARUU6}AADsVJNNu={v2(yq2n_$R501#X*?w%J=}CdGjblwT3sRbGn~6DH!wr@jt9E*Z#a#kp-L+ zQRSL(hli>z{-if`mMM*CMsloJ_WMy;A1GIu$mwJdBN1VYrMg>!Sb09!mSo04g)%Hy z4+0l!`iE%Z?ISIyH#D`-cRUl2&&f6U9!Q{{^4(8l8L(%ySgzxEmi02-he$xk_3~wA zmjCMjnPKwV5D8dl;FiZwm~+i%qh)fFm)=<3)|LHvEdN*0;IG*yMOC##sit`UKqkSS z6xi%+Jvb~U$stp0svLeZGBeUVC2lx+V^<5B@0-!%I>Z4FGS%UQEUeq`Fc+7Ze5e24 zn^v)e(eKs1At7G=o8?Dj+9aOWpz)G`>vyufM~fC}F84y9)3o5*r`ZFG&+ndrZpB|8 zO!#WbHn1@&l|b;$$ygJVXMnTu@!Q;8*0dak06{;PakXo-IQ$ktz51NEd4|5H$o0Pc zX<6O-W?R5hT@j)m;0V;B{uQ4*pLy^bzhU2B$dP$`dX^U~S($g!vzMTnb6&rvMJ)va zKq(nDV0$A%5!N$LE)4k<+M7ZOG)nk!Mn!F@jj_c{#^ETH!D*L~(v~l$IFrr9 zoa*G;o_8~GF~~goB{2iw6-aP-f<1@Fmd-I_HIpXhUglme>s6^juQokmk;go4R&-t| zUMa>f^I{^2Vb^5SX4j1Nquk=Y^=I?FBDKB$yHPkBw@5V7bvIbqC)O1+CL@^<-cRzYQ!e-TRz+4K?OBm2Lbf9^ zQKrB{zQA@1a#o>;nxtX;^+VW9D)!2#&S3%v2wV6sp^5&}s%`swdyj9@z# zC{vGu+p1=XPMg1$U~Uoae^0snklen|<71m%h2-f2oYm+k(sq&hz2Yd2?x6h)_6G_m z1MLA{qg}=ucc@dBrQ8!wrlvliB9X8#8)DsZ^~>*qjBOJcQHrBMn>J%_;}1whv#iav z-p#zSEF5e|M!>1V8Pjn!u51jw))6Z||8MBX5!tT1UwxWJrP%##E$IzGRv$efyU{0S zDHR1YbLu@M7Dd`a68RD5n5L%#hrhLLFl=vdXdU5?s`*OF)vV!pR3qdDG#5?+H?gs* z48rQmU5yGaO%ro%6WaPNp}Mh|FxB#Z9Gn@_OXBgQp+=2N4wlc!w!xnrXc*8)9tPTr<=cn@pZse90WdrtmPeQBS=I2Ea|rCm$x z4$ap)PS4hA;6wpwb!t)ySXBhj2@j}FhM z%ik@ims}G3oLj`lGvLc-?=S!Vfa1Wi^=*KkDUWS)SpeC7tA$vlNL|9%!j#H04F4_q zY+S3ptkGRBr9ER$^TbHQqRuF@K`UIRC|2B`)&ti2&wl5>?t(Q3^0aR`~J!4gax|?f2s3ys@qa9WIOy zQn42Gk@BON=2E^~$u_SXDC1YuB`tyfioyb8EZ6O1oD|YTCSk_OkiR78`F)SZ%r z8)j3bJB^W{^h9m6p*=-K>Yl1!J!4_N66%tdN7RXklni}qZLWpf-nClR5 z`Nt#aI%C-+B`l})V;9M&Ii$pk=C{HVY~8eR_Nel1*3Q_^>C#ldyDHd@Q&K>@rSlz} 
zj?(q|&l62JnfQ3KZjac*Q_lBkKgST9T{$m3gQn)cvC~Wb)wLSWnjQIaI>& zW7`MNS5ARLIDE;uNFsH!?F(a)%U|0bC6aQVWsU@s*#AeJdy)O7PA5-Zb4alx&)Aa2 zlX%tB9SB*==hOJxZs;(5dUogryqNe@l-Vz$OM`w+kB1zqmWIkq13+r-4e1B{kt>N; zDJTqZxkBIG<-bfMaefa7@jnOJelPlU8$r^XJ@$C8JksxvN^d#5P~;!ytxu*&dV}Th zpZ&tltOmn@Ke%G)uXK>)Xjh1xasx$5tab89qjWiw9WWzR3#>E+OqVo!`4{EQT;ji> z>exSe_>?ml#!X3|JF~hr>*f+bFKRc;GVJD)OE~`{JiyJM!EEEg>$B@PT)aNz%dn(t z+T)w?0q{~G^L{si6B!S*;mls!BmX(Gp@SGngBb(=`s;>{LnQ8cAO28V<$uUeG64CB zHP}OU1Jm{Uk(<-3t{Yr#Tjy(rz}!M7@!4}cW*z+XyaO_rchdh6uV1CqSL%TyA&O{= zW5(?@tTbC%u6MkrZ`U7_$c6~%{g40UmH<_b51mCxm~UNjuMtK=`EG~wdl5PM<^xt( zL!GSpo=ZNx#}tf0I|&EkRX9T#zR(iTHg`$+nRnkK&$Fyr@@qo2XThpmw*;7Rq1K9c z^8@6d`(Mw+Y6E-1LzFZn!xwEd6-pv0Be1X4j7~82z@ac%67VbiNzVT{Z}?5pfH90W zbO*keY3W?m|G~<55`R!|d3{DM+uGEUSBc2Ma$TGl>(TcK>f1BhdR4Q&nV`r#DgbF- z2li%4X4d^QE&v|$TB)A1$c+l86h8;kr8K z!P=ppA_N;~W@|Yn{|yQ>Rs?y2Pb2iP$?oR*!gqw9ZhC;8R*&m1(cRxW>}0vAgSH-Z zJn5DE|824T2t@yQG_|!qK3VhrTp&-S&2sO|8yvhN(vT|4EFQ*-ybGE zrzIMJt6ex)xaik11D_7`pXi^Phpq$v@N~I>*7t$JZEEL%!d3)R_#II0?6Ij^R@ZSH zpy>f-KxqSnWnboFpt`$$y%EErApCH#K4%=m1hlx^dEe`!*YGH#`jMXe-}cL37jy29 zANpZ~q-UQ*x0gX>U^9Z?wp96Vm!VX?^vc5K3%X)n&*PXGxnN=^&X&n;tKpb|r%V1b zoxdhn@)>h!7=Kxa!jLRl4o&N*7VdX0ofND%;8lHBV19|!tpJ}(FqDGSb)9*qm+;2K zvmSHXSM0m*?PN~yHKA3BQu7+B-8;Rr_!R62JQg&|o7vL3l~0+j3zpw@?tzo=NELgW zDiq>EG56i($k93=7#oUGz7A9LR6gyO5~`|ydVuKLx9uz-e#y!7Fmos!Ng)#_J&O&& zj#;p6G6F6M3Kbgo{6HVi{0Z`eoAP*B9)Ml$oSM=GG=5e;1zHpfe<1mR^$F$!+1FFp zn`H=CW4wQY=IIggq$g>;bD8jTlXG%!70wj*b6whac7qgJZ*+H5$AUZWa_)8e8{HQ_TVlxaGcb|7 z)${a+6sTx!hHyx8Xw6*sPV3alYYab)J)`PoI|EPP=$A^RUzEQgJ@TZ+Xw_;jFmZ%4 zqmX2;(c`HO+)dTl7F&RNy+QVbFG&z}fy&$$VZO_`4ywCwt&_=ym@Zswd!=tAZj1+6 zmgU<21=}Uk>ABH&SxLx5@dvjf3!dw0Yck0PKc$fX%RWp4gN(~E&pSTQAh@TKHH7qb z$BOpPd=f0x4EZ{RKn^& z$i|LA=nP=p__9X;+*6YmXK7;uV^1$p@2*TSB6*rmqk4*p%hFdJah{)lMQF6`tNkUq z=Wo3Xy~C8eJvKA}(khMnMeAx_5D-P$RTac*^)6frc-1Qv0T3b2d+=$UUe0KjUk_@ueG;rNz1CM#BKS%xNV_(2FE~p48PcbRX$L(*eV^Qd#PnQiyM$zoYsWS&Cy{;b@!B-w z>z#&c zeAxJKUj42Xr)j9EUm76{LsJit6_#ZED2jkXqB8|Z2Hkob>3iKnNe6(^hQc4|nd)NY 
ztdyar?-NnGQ%ZlcLow~)UA|!|O`Vx15&_eF6^0|9h{)x{K2^K2MLqN|7;A6Usk7G* zrRFLdyqfp>AcnCAqKP6k*bd?7z7!+7{U|ZGG+?VoYI-d=5NzKz9s2sbSIhK4qwQAB zp%aM~+v~7W3ZL@8bL6ZG*Y_lVOs9D>pQV)fKmcIeorZ~_mlEQTzlQ*kK=_e$Js;@D zMeXrfVj;F%>1oGS(oDo*6gCP3UMdtzG@uuuga=bp%-!xltq@u)$J^Hd@zm(wCdTMx z{#JPQX6C$QJKYK%r=NOuk%p6Z3EB!poyR!QXujW?wm0+q1hl$6@ZUqan42IHOmGC= z&0w?Kmp`Qs&=;!BI^~>au7gd{S-LvW=X{@vT4{xE270s)2~N@4`h%-^q;NmrNYy?q z{w{b$&a-4MCIV8(BRH0)LWG2(YmBW{?g&8}dKPN+UUjlwJU|!AJzbj-aw@a!rI*pT z!~NYX-Uk0v8N0K2DHV-)AlAI~<&|ich;&6c-U5g<39iJ|w*xtRb;15JC(9|?9AJEp zdp}VuYYQD3_O#v|6A|_v`#`+#Ldd}GKtX#v5x7G^?^Dss{Fq*~lRbRk@Jhl!V>W9D z=O(ir}*ZxGv|8d!U;*^3K;3SJl836FkIusglB+J4XudKiVtG-qW7d zn4V(#nIl|pTTcT>vCrDK0?4l4OLWTQ&E7;wT6bU@>e&fK-AhKhmZxjma(bxqGxs79iKCE z2}-A!y&B1PL;MIF6n5T|888in%+~EI<=&=IfeGMdd2X*@^O3C6mI`s zRBIiL7>+1ioi}Y;B5)Z}X+K|&JrY}0xchPGU8~0S5ZUssaV;I_LFu{Ky*N^?e9-APb`=PNhS zZXfO<4>Td{CdqEK@|vNCnB1Z-=I12k?-wM!$tOD;-gvbN`r6IC#xc_uu8~;3nm5mh z^|xildt5^|bm-<7<)u%6HAD0V4v`bdY4@2)ei*|8fv+k89Cb(W*i(fMWA$xKOb zr32_&+w|Gp5tO|HL^9V#KH3>ypvXi%%QdsOZB_6wF$_d zW3-XiY~mL*KrXhQg4Il1@Rnc{=ssKGx(MGSaS*vA z8A2-QdujMHyQeEV!%=n*43yv@N~uLbqflDvgte-ReE^!3GXYfgv-KB5>4U^(ZkZIP z!yZAQVz0G2ZobtjSD*)=SC`JI{}tb#gajc0>j;I7%q zf#&ZPKP+CS3LK9J#(1Igy0;m>sPfqVS%_=?xJmn}^cU)Y+s(Jq(dD-#f@qjh$oVx^cfk^QE^M3PU5)|@Xq=B&1}o@6r36=MQd85xD_XB|>?sP>P+62r&m0bdJ7xR6*)= z!x(Fd%f{Ij=9188<~)11gD|Zs6B+#Itcfq0GIFg z{Vhd-&1NFS-Y{3lyKIXVO=~yT?;|M)5h7dp+^T8W;+w|Y`WY*$%vftK69J$f)_S&9 z!H>@~Z9YY2gYZHTquw|%4d=UIDFpNy&T!l=u0Gbf-@nPmSP|Q<7mKHO+jhUjV!m~T zPj{+G)beGVx5uvCl-t2Oip!OP^ET5GGER;KPPsZ}UbaC(nk{D}5NTzIvW`D_`#41C+SLlN19DNLb>=ipeOUL{Raxr0hLqcf}G(>ai zSkWFiS*=yK{5y(eEU)AB7{|wpa0s?TSINX!=D{f6pff?F*1<*9x^E)XZeI5fBZXvc zylDIbE83hDTJ6g<%HXnL9TQ%nhNiXUNxNTq?4~#zXDZQd*+^c+M*;>b zw}HOWQ$LT3<^+_6mZ1u-x{+0A%eZ3gB{JAf>}w3;j%WbH8pFYp9KBki+Py8#O!2~c zO}f1DiL}kq>l^fr^YbUzLL;frAx8v#XGXgX6V!J0V$34 zP@tamUzQ^0_ce}$ncUr4rc(!wMBJa2pOkH7z32(nVcq8T=sMT(q5Nc`vV@~FAQ6;^ z$FHRq4{hA$mw+*F+O^4*3iauvKtxd-BQe9}Jck?A#J5xhob!6<-KFBz!d?sbH?lr{ 
zFa!c5TBc+5DhhU@NI9et#oEsKiondlc^lngfp;S+ZVx|+eR^!Ny@VS^2rXw=DMxoh z)38^NAvHF|P&w}S7xD0EhYL*^lCMKC#*X}IE2gdcrG#&P`-9dn33I${>vOEQ?M?W< zKnVk0GOR)0Tovv33}_PaAje)txLWf+1;yH?h49fm2ElcLSubJ}@)m2&wq1t}5OUn=VS5f4dz-j$20%rt5 zLRqBdVnWzvW5`F^MJ?R$%l8XE9jD;Dug5uL(7DBA=A-)^0yssh2;L5Q+u<^P0?c6t z=|&&1(}&$S!$%(KC9kO)^Y#$F=xq3~(+k7C!?#xM@9W(>3(nL^XuXbN9j?@O!~%MN z3?=dQu#iCjDvYH+@9#Htjd8%d*~J70w0=6VyEqo@a*xIPvsk{7at51dmOC7=xq5vF zoVEH8u4aEtZS2g4*QhUmWUsDyQgciL>wV_hnhpo3AhKR_x@;s&w@g6A5Eo)(A?OGU zm8fBU%`?f%zL^d829uCrt~DG!4cs{ubXUEb29gSPJx6m`9#%6mfd`fF%^zxLXy_@V zYk6aS=hHG|LP}Y6WeiVY;jLZHCFL>>O!!T!+qPr37#grM?`nVlmZDQ&p8lNibze<| zfT&Y9A}TENa1YcHIlDJbf&tUIn5QTx@C=z9f{NzR7@`%(T8ErHdiq*BM06s$J(u)N zp~WRHx$ad{Ib)TDUFAigif_ZCMsl2vM#>x?mqY0tAUpUT2q6WRz^zK z1`&z5>9mer(U_}MWJH61E3h#5J7jKU7z2*jb-R#q_~4<0Nts(4ChkmcwkHCwZd5;n z(Y~c$pSi(Ak_FCaSIquSZP=@aQv1+SD5X;s1s6@MtYJHSU9FZ?KVz6WdN{!hnLp5Q zATkr*1kAIjdy}kjkyxevB4Vx*hYXcz*vZv#)fGJ;PuFy8C`e*1-Rmdt+F`?NCUnQK z0}juyBaCbD19tfo@d+Ri;1#e_FLrnoUam7jMiua#x}IL1^t zTj(k;Ll!zay`RrIfz2N)*=s(}=&YwPPtF~+&U05^5iW1S&N)d!&~nHpmF>JYim4jB zsGilrpu1sz(i5FjT4qa;7aCRT`Zbue$j(44Z=;%b=^ji-IE{U95JW_9Guoh zDKYZ}TkDF)@FSPXo8FbJ=J~9aw?1VcjJB;9 zYeEy^#h`5)P2*+f^mfzcMOhQ{I%+Vt*_QxEr*+2loCq0sjh+i^)ODA^mb442ePfGM zLTOE1Ry{a%Y>nY}Q0K}C*Rjrht%Gmbr3aPo2zK8AHq4#*5_x2 zT9O_f33S`fzgEWct#W1?9^mtBI)DUMV2zb-IG`S!*yrFtmdG5tR=ZM%KJn*~cL1~s6 zWKzhYAumO58~0q6kBE+cJ~E&(8XSraQ+e8s*71+Byc&j_$uyim>RC0=cYNJ|zjDzy z2K0Kn^Gwknu5$OK@Ejsq*SKBl&2}qP1mum6He_#D%V&&kxGDT}F5Gn`9IJ?UUoli? 
zt#+c(w#~H$&NlN3AOkWsK3>#X_hB~;!=wT8S>H$AT820v~yPC*R%&`j%iz^qO-Q!U~1`#!zKjH9akZ#Gf z!fr_>%0XLM1nMs0D!g-Uq}JZ|zHd!1@eu}Mvs;nq)mW^9DLebl}L=qPLiR*`LYeZB5Rs;cYK+KkDvdtuWew_;`FFr_`LY zt5ymZw_~j}4e~sM>|#2be`2K$vR(J{L>+;7_0LYs%Af`)wEmob4{`h~AS}~jzqAAF zK3Yikzns%;>0a~=#=WzVP5ngqrtxLa-KKpSs$@*#SvBP-aIhPP`E%sRkTLAaI?PnZ zs|@iMzH$5;b+BNF^8iD)(U~$8SaW8I=-Z4`Hbb+5a!0E1KIn?u`bVIm$R(Siv)&?ElRu_5|U@L1*Ea zFeNZX^T@_uRVG?(?G=NY&RkuP(hv%`omfwWqnDs>JKaW$I8;}S(Db(Gyk0!zvNS&0 zWmtZ<&s`bBv)4}YLgRY15sOjoe%A-ju^}GIz#<8e+oK)NYU_D;Jh3%%MRt2^MImV4 zW`*>j0orILuenbFGTbBFbor1T23}dU7dMN9_0#%)ep%*#ytS1Z^CURh>&r<#TgZ|+ zaZ#Skw{!2pqdgz0sFTU6P3~M(VG~TNlZ|oR>YH40!yjZ<;YercnUDBKq{D?er9b=- zx)>(+o`2gdV}JXl{1K8`NPuv+ba1_seM1F4h{}SZQijx3^KCtkTZVn{dl$Bnj0&Jc zhTh>MG z2*xxIjzu>pZh%~796pCtX79z$nPDV%?x*>^0bSV6aQRLZ8yk+w248R;J6HRfb3Zvl zHT^Iemv!|DO=KJ|#jNzxdu!J_;JGEp72TdT(a%S^# zV>Hv+Kzd~_ek4cjW#=Iu9X{tQ4KrtGwGx{ENERC@926}Of7jWUC8y&%C`j5NfraP& zN}{LXJQvIy(w(^%f_8T?5E)G0-09@}Vt;z%mGPNdfl$k9A7_$RSXN8;oA87Xs`wg6@a=m>BE7_Rx)&~`__HAmM_E7jtUAL{ zh+I9S?OWN-HbbvR64+hetC79p!{lW~!(X(oCRD!W$G0)A!JV6hv*Ul2F$r-$dsK)o zK-XB-yM>CUGPn+`WYmu2cW>OgigV=zHq@KSmR zw=g+sW!$*hh}_Ev2YO(dU4q|w(QSIMe&~1JRKY{onf(_0HfP($v-b3!SO?Ay(s}ZX z@T^Q)=M&0vZV_Wul2x|QunEB-zc$vF#ka7StER5df!o5<0VpQT9iqa!fmi(vif(ZX zPu4HrDQh|qC|dbAIF3Hs;E|FQlM5@RQT#5O-aK|DYpEu15V31u4odXPP}J~9jawc? 
z8`Ji9Y;(&+QZw?s^+(96##g~%#?^3RVhO)<;AG2H?YUL-?S|H;#UxxO2hJp%%yNZe z=BnHL$+(cm(Tln~eVz+R4XjiQ1VkfOo1orkOo2TfrE=jYaV`chvd0$|%U;hLLT4j7 z7YMpRMfs`jrXbVWv?$e-yS*f8lu-jaX@KV@ceBSrCJa<%O;#Kui09Wj#8ZevuTHfHVJ=)d4__y(6AKrY+*UR0PI#YZ*yA5nQ5wBR2^YdJ<25?~y`o(-6_ z)XgV8wdNp~@B$9U{WdzQM7(ar+qhfRihsq@RAoZnDThAIxLD`(-X>!?=op5+-j4bJ zh9eWTC>5GoZKCys9%mpHS5}}aZDpr{9sHN)ItG78aIS)HR&{n98&Fexp!d;1WO~D( zOR63&5Xv$1p`Tm2f?!g8m*y&j|K9OI#L-@3nvp5Vf(49E@U73od+7^J}o@*K0zjoY_c1ws_SBX@o{}q+)Qa9z z!_xf0;TGFI>36D9w)&&A#&AN*i)DO%&|w@oI2iM4FcrdTM7S;thod*T@eOrLnielD*5i!A z4nNrF1kUU(n6DXCQGr_DoM+2E7s%05UfOoYESPy!=4|B<_*+o!!m2sq;pn6 zHCnIeu-|gO$phgvUVa{I)f843@37jz4eA{%a70*bW;iIOo3P zTTHo^Rp&Vs>qz(g1YAwW>H1fXNX#(r5@)&e3Xa$^suct$bH$u3Ipj-K{^LLHBK zNx0d^PvHEx{oH;cC>}9C{nm{Fpx8%y1HLHwEiXyj3|8D9&$;3a^94?O{NH79TJqY` zM+Qs$-|$?ej1bHg-RvGmWL%p}lx|oun_L~Ka~0X+)9OrnI^uEe1N^7Ms5wJ8;~11y zQZ|5>7E&Pwb*A%4=B7Esxl6AG?DcJ9#%-$fWJNr{^GIm3|3f(?`>gYL7C-XIp5+sT zDQZA|j})5C7gk{G5a}-e?cZ3P`DD|aR@+QJzl9WyM<^8t~RIn|HHp3X_ zJh4d#_*tP0WGZ^LOpfy#8aI9c(SB2^u}--aUu$OFfi{{VYM{%AFw^s;HfVjWyHCMJ zn962aD754!f4f9Y`9o;B@(yOXV8mAJ>TO=HY{ae330Fj2#FZ|Dd1q6<=uAlKSK0Mh zAjgZqc|nR)4=DCiU1KRcIR%+zdHl zz4{s^8LKfYkDTbb{&FOYCo(?h6<9gTEpGT53~)fkdvd<6w7>HrF1m-Au$%Yk!D)J# zo`O8}Q)MHFtr1^hziQ8kv#^R`Khsx0^MdrB>&)Y@i&WHA_c}JpRTw~e4mz^qXBwXG zx^(RK?l3=_T=7DH(d;C@l%=6X+|HI)X`#R|8lNJKl;;i09fVo%b}&$NY$VmI#QzT0 zuO~2zzW2rSJKnCa1)O#_L-?XdyP|CV)1!Fu!6ud6qgP|c@ZUOUU>_kq`~e>dBLM@K zokPhb5j|H>rQ)&aY08v|jYclj!%{_!QoX3vKN|lYdGCwK@?aeIv`_btnw|5(M=Rsu z6W)(GDsvF|*@7a3fEK~1Q?2rEyRGiaz;=D-x{(P%c;aso%*N^o;?&=I`o54Ao4NjPri?9gAZ(j@9Kyi;9ye~}ktc@P5^V|9 zrwxmAx|G?A$Jz_D?n`=h@6MyR3@YC|yZ=vrv@uh)#SSIzRliGuaTQLb6Nl4nQL#&; zpr`!JljVE=|2&z{e|WN*hv=(0_Sx%IuYVzBsq#&oB?m0owa+D3{)NT1S-)^*@^9YafBz4390l2WF@aZT$ zx%7qcS`ekiBZL$~pF4qIECr8SaV$Y1k&NvaL7Qb&*0eI~YF_8-Y98GY7VF{y>hQla zizu2RX5cst_`2~O+96a@rF=@>50qP7{fRkw)0?fH+*XU}*M}zsyHRmL^3CK&;GjIB zvJ0Wt#pZ+6ffi<1N|bRvJ=vKwti60Jvmo$&iXX^ z1og|<0;5Aw`1QLJvbX0JATpjcf7(2NoXCBTs5(o=n2tK+K@tGdhtm}K;*Co!fzV0# 
z{8q?`$u78pjs`tjM%?cu~0SpKz1%kYydTpXd5a{(5*j3Gp+2^?)=pi1(bU7 z&!9)dOPV}zTm;Og9;vI&6|FtN=SnKlleVM(R_5bYa!;w}dVn+p;LxOT0*@$HX}Hde z>_yu4%_AeIbb;Z&7%Xe!u^QiyT0f1+PWSdXtvW(>r|3eM>pON-P zR(#xx#F{*%N{ylCXE| zG_?QZ3a=#sGKrgZ!BKRdmFNW$DqZN;*Ir5~FMRhv!3}p?pGaD3GJ>{@{I>`T9pU0p z$`Y8L{opk`Y9#C;J9_$0tpB4@qXv@;tB#kEwwsj;EuM1Uw z$lErw>y!>mS8d%rdg>al&Sk`wpMpr0(Fk%2y68Z+n7J8Ra-J<(Vq*j5_ru^&I8><` zrqFMwvUZ$8%#{CuX9ygThe(I$o$Gb3`wOtPzt|k`uS(<)-%R1faB98kWKS%j`87I$ zQYo*b4|a2U4|`&SH0=0>06simx|Kt%piASaEi3yovjo90{nK*W1;8wJ!;OYn>ylLZ&dKqn`t1?f#n#a7vx)JdP>JVw+-YOm{F6J<*h+_v?Wt8zmIN4ESdvUM0 z4O`xyZ3>A~xUO-7x*+m`6t1K`Uk4mr6XCkkio&)MN@|GlgMl#I=)WJ@t6EENR<+w> zCQr+%wIa>6&ht2*QuY&yE8D~*!mI@`*nW>Z6F-X4eJom$>S)OydMwp&!|FXR`!P1D z^6~x8+{F@Nz#jo59cUG^Luv4}MIMHC6*1%bvwpB%kyV3fK%C=;S7V z@)21YK=erE=%8W+^~CBTmUNl`BlqvSeGyt|9XLW?rU{qR=_fvGsOVlH8bzeNw(p{6{lth#6p(a!(M!KE&04XPv`P_fHm>?7=p;-T+vnD7 zD7-t$m|V9++a=U{7YWo-rnBmFrx9$77V7Ta(8XDml$L zjBHsZ_BWV}PcKmzT+Hkdr$2_E#Np}TDcdpo72~0+;}TsWYDr9N8@C4)9U)^Fxf%%A zIvmy;&^lUdEjiLzu)w6!MD|xIbNLZlwFolw#YaabbWvRRqge*wk&-~*qA7Y2oAm$L6i9mrgW%tWF z41GSnddjehIz#{2J~q81?&MBQXFs|>ZQI}5_V=rY zb+dIMF^iKD2M$>u3?7 zTGS23hmA9n9?cX2ETNv$1$wjMH}zz;VTh+Ebzh1#a{%Il8n>*2A)m?C;ba!7*@p+d zV32@MRQEQ&(3?FPTSXqZEr@I`Ii5aN8=xB0a|=_^YSz29yFD7ewq3B$|H#1;H6v`e zK&x)WI&k&A=bPDEdPw)IsI0fXXte_f;6l+mEk!@Vriz8l-m`*f+}8sj4nah=>kcB3 zfZGV&##mN99*7-^mK)lEYks*C|OuLoOStvZX3K8 zIgA)Nu`&4z^N&~@b4v1ep{iUC9M8Yw|Qo7c9?^0BUTRWK)Gx z>cD7(Y%djdD;z7bUFuz=g81{OVli%Vfk^m%8PnJ9uj)TG#e0493RrkcnwhUG|N1b- z?n?+qL}gHA^q5$^)Nal*<;j9AHKh;FxS|_kv}H>2(y{>AE$@|;*;VXSKdVLMGbtCS z@qKp?&C%J!Q3x=XJj6p8PN=HAz%h_7XSfbwF=IKfS3M8wPtRyMCea#NfpJ1j2{ViNeXRh0cY~3ppnt zlbw@}6B-3a?_2X97^5GX$*`Bo3jSe;HQngz}v&$`Z})9om%C}1gR@@4pRgrkyG z(|)+I{9GejO;~LM^q$(XM7FfBDjT;M^T`gFt{D3~k(gEWemiA&;B)b3Qk*~>Ia<;b zHv@vnnz8V))UrnLt1pM&(^~W+)8uWVmtHl;-&%WG!gOKRP zP|ub-(W_-4Yr#svOAo()65|xtww#x_$>sO60(-Hc5OF>I;M}$XofF1y4 zx*ot=P)?uLmh33DEVXRmD&z=R*tIp#C-2_=QS_1K5dupHTLJqXHUd5zZXA9e4*6qB 
zz!q{U8ZAOFQW^T;#{Nm*z4oI>2(YgsvMQtc=g!Z;ZwJ3#(|mjR>V+1`&6^*9529*1Z#X^F&zE+t*jw0v#-v@^T@1!L z>o4v5?Q87G)*jXe)*7*C>6ql}Qu3qeWvmq`X}W13vVq+9TG>HBNv2$%Z*cIM1?MWq zJvTN-NbXABAblx)HB~w4L}iVx;9Jx1v*Crf8wy52pz9R);QAnl*g;WtSh*nRjhqL! zhOn!RP4J+$I@5j<1mM|u0PCLbr9t|QhpLP(J&^0I1}^@WBiuzqeZ)ybvhA)RzBd}) zrA~*j?&@di!~2ZSnC`6rJ}q`W`*59zas*DS8_WwWC|WTM`OXz5Ynk26av$Rkv}E;? zYR>mK8qLfGmY5?9mQ=Desik~yBm}5CWvwTt%c$pRk$Tl2WVUA(XeP9uS8B_&9TOkZ zI?o2|>_*G`0wO9P!Xs*A=wwLdNwn5(+?9JTH3|8!ycaaokgQ?zST0$;*`uCbaxs5i z8*6iSpS983I3G~UGnFu@c0hheKXc+_u`pblUMDuMC=+@`yVd#SUVxDN`c@FZ5-&)&Qx~fl$Bip&*KE+|iZD(#U51A3^ zYby!DZ7z&AW^bIi=(t^ppYzSlt&KWt5X>96kZ}sCd+VNEC*e%sNYb^WG70jz$-3-q z?w*eel~ieyrO5C)?ptrnUOHxMv!q}O`sRjDyPSU94m2THWF>9xYMyYK-E>&T+h%-Q z%5NYvb~v`=HgutV<_z9vCQ;>ATsUsAzs-FbIs;#WS(ejW8rKyMQX5nIQ}tXWUDFo_ zTCQ$=+8;8IHGcuhRkZR1?2&Tn6jc#Au>M7F3s8r{4dnEqLY*}Ca`V(q; zJ5nTu9Q_&8gZ*NrDPyjn0K)(+qrf1*62c%tOR&%j088|rvJ@;m4E*nUI2f2r) zTG~2cM%z$AD^TsEwH#q!@M(Tsureyt$I$*~tkg7}G!^9eO>99dMy9sLW-M+XyI=jl z2)gk@iy$*6BXT#8wT&acn-Jyi7W~liuWD9G^50FIfI^g-3h&6pZ5_^YQVqvT?9-a42 zI2pMy+c;AF)yY5oNSHaAI9S;^S=ri<|LWJs*w)!eh?4Txi~e)`HBK`(tN*^q#_{iI zK_|%i>j^763mfZy`i7DU{;K7FXXR#QttDXvg7OUd4gfnRFSp=tg8#>(|Gx4MQq4a} z`Plv>{o~R9o%FqMbdz<&kyH}Rhj|0Wb<{WbSLc<~pbf7e1e4L}oQ{g2cD zXmB#}pP_;zwvtd%gWjQP_UnMFf&S3{b%&N=$DNfmwM$@NL||kjMAh724^ol+h~Gc- z`Zjtt)6}1fITU1LOYaQ56OW}F%Rw2#Ly!;^d4Xtk8lOo?o*(k%qqOZu=#`O67bzu% z&C;(|YJKz6VtWV~QgAtPxvE|ACetbx%Qy#hGIsc z9w0+BMxp>@u={Z=DW>R=eL`Tx9~1E%^6ak0MF>Np+9$kScq_(6q2d;Et^nDmT4=?w z!B3D^Y7qbml?hO^Qt2`wdOk-hRK`bRyf=+wS%e9hsHVNX5Y*gZ|Dl2bK;3KUNpMO+ zF`M(EB^u+ee{NY7+63Bt!&)cVk&jN7dh3<@qzD3(^zK&SXEVwBzl6WHy2$6xd^k%a zN#P0v@}Iw2WaY4?S~2|5{W3#7kG^_(A|p6gzM23cM))9}vifnHH_(?#G!KBfQ(+Ec z@Z2Yrj9$-kW1_&`lJ;A|_vj&FN_3aOz7HQrd|Ra?egYuKAEanG<)uTOqDb||q|-0E zW&BkeEcgQ?EK@mkq87;8@5ePcL2cSk;#x?Ew*~GbEjxK>FeBx|Z=>brKZ?EK+^XWd zkVmB-d#&M0nVC+dO5c+Qi77zUeU}(lyh(1M157!6htz|(UPjl}M)6rCW##PyztE6H zTb@8tk5!dbAs-UG4&aMZ4?f7{_QO{6*KkzO8I09WGMvSpb{OuT)p)sjNiYwU_DF;u 
zezJ3eCe>G~U**@H+G`60@KY&@r&LmUFpK<@efTjJ;V7MTajj6_m*cCgA$^1Vj`iC< z`(wwo?yjI!s3cCxSp$v=za=JN0LPq^3g)P2aDc&f+999&yEqId9lVC%({}vVZPPNJ z9pnkD3$`4;@OyzE=)j?Tf@xJqdcnwR9jXhg@Cklp7Pnm%a~`Qz-e@1(+pGcXLH=8k zz6y-(GOp)mL{s-crL_X4m+jBaWaf+0MaxgtwTsNYOOAfGiX0ha7}pxitmU=>R0RWr z$0gniQmXDv)gTt9iI~Do(mtX?Fu8dd{>xZhBW8BIas?AIBY)nHyCZ&*=fauFJb%oKa^ zr{^z(BJhDt_4?%Lann-<6CsRo_3>lggJxYBi`Qb);+#k`qH;rdYFZ{zNN=8h0$~cFz*XCxKnxjwyXsqcD6fNjk2h_T?+Tj#?QX{55gL3;sBYGtu$tIbY9+ z^-c?fyc*$`w!zQMKgTUk1yQQPsO)m-!W7Iai40gNb4L>h;l|SjaWjEo)t2@9I5{U&f_Wg6&@;>&_hxZ^r^;C6qK3RDogB0 z^NE(gsws^g72!FnW9m4+y&h&@HB+bFwvKYBdf@1}OwmXB4sV8r633-y{G-Z}{#kainEf%Ii_#sfH|sb#x$G)p!1Y z<6^xkD_wvjUgU-O>n$10{kQd?i~T7t9$>4Owg|c3NM&5cG{xXFrEt%nl^Pa!{s+$0 zjgL2}-7VR-?_W(m*RtYTO;+a{0LMV|#rMuEndv(OiV2TEKFhB+f|eG~2^Em_S2o7) zibFU!FMyv$f|b&J_yZD36lZ;yvCa^MIMxlqM$o^q1v}r@q$i};IVIda(8I5d;{;a1o%~A)Uecl5!A?Gi)x<-)=LSUjreC9!`AqJe37nHprrCR(Ir2fwqxPL{ z*G{Bworv$R1^t85dbc^huhYMHV41E3@44aLoSz}G@z?s8;h@HU@clK0yx0VHQVir8 zpK6TtvI~>PET2K&@V8ILe9t~~(PFXe%<--Cpcjma2Bo z`3~|Ncii{7M&?fVbDoiz?QO%S$Um`PRV(5WjJb!67_rUlO_V`69pB1zrMTRZO++y} zY>_n-IVX>_##xmLYh!E`u9iKZAUF*(P;ow0mr|y(0v^I(L)5dTUx&4BwO_yujg1(; zyVLo2`zBFI`+Tef!g=Cea^%0BC@ zEd$4csPE9dxc8jYwx@2d!Cgi$ec#Q!l?9u@{g2nUUCuApKkf^=0gGzn&%3LY>Q_Ac z?B9&wcwm^-d}yHD+ze~I*&5?szm3?(-5XzMh`3zk4>Za1IXBpD;qLWnxiHWbIB9;D zi32ynj3F?H?nTP>)TZ?o_W5{pu8%^Q0g<^b)@qpV{DsoI+1@mN%1wk)H4fWI4X#3j zaBx|o?_4@|C;US%mzZb3$utHz&#k{Fvb?s}m=0I?&3&-kJ9LlI3PpEm|C_G@m?YSs zyc@Md*`aGjIg3XDSF)|Mh?_h|Y2nq;+5lzgdEoWF|BX3MI_74Xv!QNh34cZnrzPWI zsu+c?uOFc!`t(3%?MQ1avV9VgBf8hIdr#f^5@*OGFu1L^e9SSySM;GR(|Uox2*5i{ zk^1Ce4N#HJ(IkFAm{xs*3`Hh254TED7`SP+Lvt|WbaJIU^3J?6HQZb{x z_Wf+zHNGnm?jK2F0*(-e4y#AP@D6j>BF<$qAfTsvAnHm+i9LYq+S)&D*Z@KW(6}yp ze})GHj0=W$cX3>K=F(m3+w+o>Z7IIvySupE_A;ocQJ8-z;p_aE5tUF?=gX}AF2Q;D z;svCus(zS^1YURc9p;=~@g{>h4%^`la@zY*53f@85 z-=9{3^YGrhn3m#fbsAp1&1vR2F*Hnj{K<2UOojpxJ})zNeCT3QwV8{+-ZV`!QV4TV zzw_IV`o`@oo=BL*JmwDEo1kupxC!81XTTjJN})V^2(a);HQ=XUFXiVvzdwGplQv8m 
zh{2IBo{<>*QvBUWk|+?{pvVWc+GZ+_3+S=n_5pp=E)|6~L6Nu7Qp!`r^^IWw|2vji z+JwGZ@5Tyym!cDr)4H)lYWjiLskFoLZud{rd#=;FORi`RkKsM8?}fjCb3Sj`ck|`c zJOBqO=iPSSw1{hwpv|7P<<{t^7Eiq(K~JWU5?FM%=BRQ8S9E@UiOl|kOFnX982MnH z1A?#I&?+e)@cAvf4uVeFBBj6QvU$0^OQ>nX^5QM%l?$-Yanh(L!e()M;cgV6)TNv6 zNOo>X!&Uo$$|=6JQ9iGW`KiY00RQ#E9d|HJFe`Dz3>96(pm zG*}R{TmG5gNEUlBJlFY3Zp5Q7BKPQ9bZ%JOY#QUC{+p5_A6JsmFwl0^u5!3zs%z{~wFaK?snbS+(iDh*bhKQ*$`vfwVG>Ls%B2FRDY5g|L))#a8Ei$yp z^6qP(ZbZJp-G|%K2MLXn`cLGa_%PIY2{sEFQOqCrxy_%V^aGML^6o(oNxijMo!e#O zx2d(X_czm`5Vl8l;it66MM zDLi-&Wb9l_zkKX1FenmX&ShXm&Mb9IJ{te3EyXj4mSAhRnC>sG6%{KY~-^Bz0E5wN`VU4AL4m`*;&N{$xo+95r-vFJQ&a zJ^HxA@^Sp>vYKSib|6yw0d()YqWm`aWZGliu6DA_YJTg>YMHl`Y7{1{$#UA#F8Y{} zd=$=Z+cQ|L!Su&*p9kQ6t4s#SYTh`Yf_-s9H(`Wi|EjO)n?s(#3kLFZw!RZX|D^DlX8>cbiGPV?;xJ=p@B`n#*`(OXq(M}9c=emT z5ekM0Zd)wha+%vHjL1iLWSQ0$?;IuPST*%HId`qZnE1FoR#@2AjEwB5NZ&PK1NWahk}Q14jmRv;RLx z%C<*@$p(WjAgpiSPJ-LClc}rgMU#lQ`*NROobDD*R@!ayC(gA>Uy>@bs3OJm#mrHf zD2Lv_GMZ^}tV7+a;Yy(cncTt)17BGS!bXRyTr(N0L$lw$eyWK3D9D3U}U-Cx1 zDfS(x2P?Doitm#9no#Z8k6-<5|LV`~(f-l>|9gl1L&5_Sb;K&7 z_@(^E8fw`#!V`Vm*EiT&{7)u{jZXE%x*D_s&lqA;k>ZOi(ZRTTETYdptOm)0Vrz_h z%|bn#EpcpyEO&%Iyk`st()_Gc?)jQb-gw9+{zEo2r@nsnzQmwae{X+3r=mhxdy)*b zYySJKk)L6x_Z#fYQjN9?MJ&>27}UsrJ4G_E{V?_(tHhYQ=D&Tze$C1n5AeQ~Hn`CB zdstgDe*K(hV4JM@HQAsZIdT|Ntay)_tR9yjdyzAT*L5KyTS8yQEk0bS_QKwp>9O_C zYEeB(>u-;El9f*_o^mw}qBGT2(Z%}B+T=6FWN@YSX4Y!NH<5n(dj_ z78c6hUhLBaAU*%KxA);S2owp{_fn*d6|Cgcc}BHfzG=WWtG0Bd5Pm@YY9=6rv2DjV4GA;#(5@i>SWZv$r4&G5Y@O83sM8GJzsdzF zvc4Cvb&747W&y~UaZyxnW$pSsRhrB*yiCMu!XO&7%tp?!cBlPxZ;Nw4-rhn4WCNpy z725yaq5qP2+ii2;CS<39U+Z`|&s54RI;BdXCO<&uWGGEYpwj2T!vsY149OGo)n&}T z3Jxkz2Wc1^9Hi%jrd~^5i%a6-)7;g3c(b%!h*cO`gf3^ODy?TZtgNj1jc~NM{!gMg z2f)$L&~O-cp}b-<^usMlVua^rj(5zR46&W7w(42q#{PT`U^rm^KlQN@={K^?i5~l$ zgkKHmsvzb`@FL-U`S77zLqmhW3IrN58rS?k4?-W#F40f&M`2MB&HZd|rJQ^7^~5uc zaF7*f-ZKx;QxT3@_rbAuFaW|$V|`zRvUx2-L*CR*H?e9 z;qPGVnL7Kd-P>G+n)Zq)niX-B zAjRVr+MWkH^u&QGfT9y&=FfY<_l zEi*Io*v!mJX87J`e^2_S4r)wHOhPFE0Rj5s0XyaEmCGrucAr0OuW2&OVJDqp3cXpo 
z6dHIoXyn#G+_2*ls&R>RiJ@EuH;+lXw#;v?qXmnYndPit!j|H04JB*vhFi$}_28eW z!=U2g;q`CyN9Ry4h^^Mb^hPKseuU}L4yi@K`ancTILQ5(GU1;^MFO8=uGMpi`HTJOZRLR(+`m^c zJrmsf_wSYW=fO)JMdgGeQ-5+SQp6wY`0jzjc~9*`vIEx!-E?qf5En)B3PqD-zBDpb zjD1k-9|8T8`d(W*rMZOvaUMYvWu1%jdOvUy&c^R#`|L*EmNF{jQ>W1x0a{>oP0jm{ z?c~dYtJZXXC~@Cw*o891WNrn!);*}2yHV$r7;REV!os;waA?kz>VR3`xyq&g9`sHs zLa|$N1bSR-Y^;oxRl!Xa9H=tu6Vb-jmigKA#6*m>OYWbYll$A?%$O&GrI2b0ayQ?GHxpd4&8zV^_Wn~I1P(}JMOsH-MI6OSO zhl3~8tcvM>Y+%S+7<@3B#CJQQ2&{GVIKP_OKu-jgG?ntszM#L9WrH0S1&n3qxXS^{ zQ>pppc_xS(u1kl>0WDM)>3_QFqlVR9F#pCfF#H-82x>I${*qD~5c{{ri73&)xFSsL zaTB1&X8170#o3_Gm-e9J*IgUEEX95Kw<7(u#>rI?=oietFbxd->L?VMp-#r|KYMiF zvtPq`R|R3oXlR(LlkAf(!)C-g#JN^iGA>T|@`oakyTf7G)bnbcE+kNmplYt-W2zEy zTlbw!Rl5JZNZ!>Ukk4VfWFrS2V#E3SiBvN)F#JHbY&ZOQE`*D=Dd{C5%Oyy}Ee9PHT38Db#5-gpY{(2hslC5Vv02x7QXu zGnBJt#ue5@^Cz8f9L23w9VcIEj1pTZsgGEm{a{S5{xRP$B#9zVAKez?fEvb1@DyTv z^4yL|gVF$jrm6V7*cp%9j`&0JbWG@$bCDwBLaA0&PzMMs&B6Ki*5RZG(*bGohv&4h zM<2zHtJMr+B7@gA>P!aU+Md}v&Cvdr3-t1Eh*tIjRR&~yZhbI(c0b7rSkRngeN5P> zl}h!QBg$^Ymb49)t>&Ck7p(S;`Mlz>Z}t>(R8&WAI=h2s|4?5Lz}q4TzU#S z)hU_%n<~}8Eyi`pfO|E5V~3_^Q9NxNW$NzYiBCe>oDR!O*AsG%GAx0bxm4XE=tTr=D zzS^~7kCoPDDmUL0yiNgT4tvnxzl-Je$4c-*Afvd$NR`6rov+zcL(PU?i;CAfvJ*_! 
z6Yh-h_2W}~EALEw?X=s#x7Y20BR(Z z*gvZ+auQVf$IkrL0KFIV`Q81BWkeL)VWc!PkHaOJ>&^`{#8C#!`V6kj{b6`p?!hqw zt`Yf>-*BgYSSm-L@;qfJ+juf@#+A^#y^J0+xedk5=~4Rl!s>Bf+sfV2Td>O=M`@MY zHt6GXir;o=2N8ke^@4i708^a_efH0V1*Y)5+ED*L7Kk50gZOttCL1Fw&;D!n*P*re z*j@Ac+pMyUo6Jn0GQk!Ovr+Cj0G_R{gRl4;v#pVpj>n$ggrv^ANC^1KE9xo^tX}tU zKCT2BaI7pTE{+kpT2OVFF+zSN?9=p&44{h@{PX8e@_~5P_?xrsf&H>p7S`tTZ=VNb z#I;`kqj~{teNj{jhfX>RakvENpXSA$VcFEsce>uP2p;JdNI7i%?`HaJPX zRs3b%OIzA?B}GNis)~G)<|Ap$W@ct5?_5Hbo7}4eFD6tSj+(DzOA)s*p0C6HGeS{v z)ecZNixkpRZQ#qWRvo?Ku>a+v3e7vL4oGZt&yUg0pxn9Mgd0-L3KgHzla7S#Z{bas zF|vI2WnEqM5(6zqfwcAKOFH?4m71w#Xk%niWzyWWkH_^;xBhx9G!VVmxvc~K zw>j|_q>i*%o!UM_#i!m!HOkUnl%jNZPhP}1uEhMW*neiBZZvm8tQyE?U3;g@OtEbA z>m&{jx5Ui0Np)F-dHFe8rhOhovKAiQuJ9Qc`NRmQ1(dz#&B@;^=e5{8MWv(1YOT8T zf1`P|a}y`2nDj=*=c33!1_XlO(}^?2%blT*Nbd{S%G&KxqE7q1zD1yWOafrP%<)Oi z8;KnA#)xdQvyilLg#}KR##W60bja8KF)<=7*nXkppPaN7;^JbYKR=o=49)>r`A)iU z6i%)dUDNOH0tq$*EttoYBSz5X0L+5vS)H5L>aZ?7)55vyEawXr^RP9Y^IvQ*;ux#tnog!0*0%9 zDYppe3u&;OP7bOIso-Z&pu98i1TGv^Y`BK_lkUHpFRKF`(5 z(Nv3EkTM4=F_VhRPLZv9Ry67sHsm~y>0@l?D=F>{=z zd`Nx-bTcyra6ss6dr4el9v#jve>CfDvP*X78a`l+6Ec5!RyI;Qp8X`UNf$+k60KC)wjWd?rn-=0}=ubhB?vdcxSX#*FNBGHk0%`2}Y# zF9S=qfp1BzTsdn&3n>daysv;L=zn5M`j>D{u_79%ua&06>|!HjRxkI8%DBY}5kn;V zrqX>4a}S-$QJDfx?MB%}%IbyK&WA zd~hO0J(f-Kgr1m^oj}ZPh0)D_jiLw~1mRS_Zgb64H>KFSfwsj$kA_J%;Z^8LVIhM_ z2`U+w`_28Sz}WRp(dAF;P&$QJ!1TR}*W@J0S~GG~57Nndr{)k{6O*ii97ZFaY^x|7 zly)84<{GPEWHK)-@4Bm6+ju>fD1oDVHL-cMv*)CO zGny!62CXgQ9^?yv<2HEO=S()~zK=HvyA;G>#C$4KB|6ihU0)LuqJ9lLQW7rSz{fop z$knBDANi8eZJcswqCnMz2I4}@6(xPhPIpdQwO$TY+*Lhcp8X?=S0-;|DOmmhPy(I^lS__m`@F zI5})q*Km*!el5gayFBK0K})#owM;N}8ZjJLDKQ}H+8~b(hR(UXLaYYz)mooABc4`n z{Y*UPX2_p*&S2i)cG11{`tH1{f2^_@x_^boTDzhLT;J~%ef{|iT?eHj^=7$~u|Al@ zC7}Rcy*>hPYZ*){pTH*Ca@Bm)4@Iz2v z_j`NmC{XAQXV9i851DR#tobaS454(^Lj_x+7l$r}4td{f#^*K_l{UyumFfY1YzY5m zr5v2TD0iIOzCr+5iJ^URK_ts+ZC#xvoWDqJ(ko^jf({TpE_R)e@t=FnPD)!Kfj<59 zT{o+b(Idao;)%8EsSDF<;jgDlYc9!2AL>mS3sm%CmXfc-+aptMhLJfe#IKmNEy9?; 
z3G3@papCRieYX}jk!Z1BM{@Txx>6^{fFh35f#A@xvGl1|1Xd^mTj?zEKbyYVz>aCl z@Ss(7FnisaQHpdqmWhz)JCuAPcS-I!N;RV2tT?Z`_>(Htnge_k1Iw`>w6B7Fc)k= zaetZM-9eR#@{7B-I3e9wMlV_}Ys&iR(!Go?M}2_ychGg!7|;!Z25=M-Pt-YrEgO$o z?g;H?aC>}zfll&Z1oA;r*9H4ge8W{qp!>d1eCbiRr5U?RPdJfT40ur}CGc$&m9@E?f2gNhKx4CJYlmujlXEHPt)hU^-3%5_k{Kh;iM>^UNv z9mX{IJZ64Yi}@IK%ii?&?z)FR&Z#nNb~O_}rSE>&@Me9}pL4~Xol{rIi7a#S=sl(q z$DIA!DWs~OUI8sxTXQlmtiit8+T;GTl*Sx&!8#+3YWtFcCU_Xfdzq6kzBJW!*;_-A zclDDEf$IP>xDRl5VDf`+J&Y3pJt7C&upa#do(%>T`AWwvdR*aKGVf|)-}{p-3gWV0 z(t^-d%`3cuEpd_&p{tY^-W8ant#^w@8(9$m?4~_5|C!tM?i-0uzXI~Av|Ty~#oUkE z!cViBrHgXF7s{|+s0m~r?CFF zMYHb==Mn!Oyw6X!f@g^H!dA6^+@0}s-Fiu90d9QI$9tn!Cn9<@7V;?37+`DPX?}CE zfOtA}dwYnBGt@Xf)$OoY5DP_Z9>pICN}u&Zeg4Slt=w(rhF^%%V~QM#**@3S3kYb& z6yNX^(OfHEE$}+!J=fc;>44ndoHfcOjk$-UJlxhailDSq$F}4>xmPnSuVhiPT|Dko zxLTwGu?aTIc$?JB6>JFnTCdm8XB|g`nk8wypPkI=!_+Ri%*T+?DgkaxZyv8&ZTEbo zHFkp{S1^8g=UMn2BZ6mPx_0e|TTX>HE78*}YiJaB9mN9jNpJd@T1=R0$?+SRZ6FGPb<7qfy zeOQF}Stbu7GB28WUX_F<9%?Vd<#&Tz;d~+mHz!oHglS#!m|XSia2qt16>p%nW%H|| zM{8gy_BZpM{{|zPqeVExtU0XE!_AlO>S{zHNZ8d_UI|Od-Y-^lSfak&xV;J8U>#7O z%N(y1a!==|KfRWHZ8bfu!T^>OqPpLBM7nus9ozr-{TEL5D7*y zor>MV@S5nLqtKsEfo6X2yBB<)9_$G!MD-gBZ_0cw7!$dGaawlWXzfcK8nrE|+Ndc)5D>}Rhk}zZ(%4Ej^wCR=(D)yEi@K46 zwY`l(N0AOUhcED6hk~IF-Q(M=g-cCyCM~(I6M;Xzj%u{4;@MKnJvx_E3&y0Qf2;59 zL<1i?Cw)F%@;owAsC(!+>~{E(2BBpCG_Sd4T=~%T(P+wnK)rD3cq3X#R!=XLXO6h_ zxC7k*>R(A*QT!;?u3=z^^Wa?czFO=t#yi9=4n>WLk_B9^FS>81B8E0~>5x2RM=N48 zeGh`ULbVCisS_3P>n^ED>2R4TH=(S2ApG|zCZCpB(MK) zmrj}G-4Dhxl%-pUOvd+5I)1^M5LBul8VEP+;fu#=YsoogP;)eF960Y#Bd&TLq3&(n z_H1so&YZ_Q^1%FBz%BLfmfGq0Bc5_P^UKg}+%EQ=jt;OWnpv0T*=-U+?gO94Qe$*j zH$>|$rf=Hy1Gp}*=o2*m(y3!Q){8}^B>dnMebfez`3C)KJ^HLsFp23#U(_gl@hM2s zFAw^u;@)Z7mzi=gm%I02<`X2E+jGHsVG2;6DLV;M-L%fGUCEG1&S(O?;xY+X%B~$POYdp9L2^}5w()d}2*htn6X8b6@%^1}&mL;+3=Bp*o zaABtbo>=Cdsc5enMrG|J+7HN2y^(`#!^#lmil*_$6jze8?I`5oDhn-xkr^E{1E8@W*L2vXAq9}`*n zUWl}3xMD>1HEd@&d+Bq?AVDtUaPOkeQY+jPG!s`@)y}p=E%jQQlBFw0!-m0Ghtu1yc3VPjO7HQ7oHPbOKms;p>$E^QQ&J 
zk#Ow=sQG=0g@s3Y$*9kGog`7}yFv(W%wFu;2vpu;h->^U1`gPeT1}WSB+Xf4dk$CP zu6eSDowwXFJ+;TeUw zito)Ny*h<4{Ga%(yky`rW=5_&b#AT!%9@?zDx<8Xiqa)(P@1*PF9%AG7Js0&fkc>z zyz^Foa`*pD^tP+Lk3bZa6&U;TehB;a-P2ai^s){2ZIab*1aoNUL#FSL`iVf_sNquW z45i*?-KpL!_-vz`{f>Hav9)E6q1C%3M?6h9Rfj_Sw)zPsvNZS@3J%;MHPZ-~{aDl% zg9%3W6E67+&MWxht2~#`y`ICO!j1s{KOoxjAec^ce+AkGaXs#g)_jJP?Eq{>|>!aQdT{&?b)OB`8Pr zPTsaoQ}Iw_qp3Nz4SX8jFCIQj83?pPmqSI_|L<-Dr7U>S*1%T*cOH~RSUjwP1+ekl zBQH1Dr85lF*`I&2$koQNottd)1)pfPNh0ZyN9)mc+|)h4V7O~K?jq{WVww73;8tqf zxwfj{HD%sXh1L|8L}tGf#_@oV1utB7c+<*9bc9EX@fYRa!<~(eZ}-cRr{zqv;6?A7 zF6V$lo#)W>mOx1I^6eCOV=a`Gd&vJKy8ko@G*C!VbI?s`W|8h)%a{5AfHHL0y;|Nk zo9en`wZPh@!TPnNRa?yo3VvP7>uS(L+uxR5*oSiaGj@#lTyow(VXWaLMEK$Mg40`M zvGwWU2(gs%w+}?FfnX@QUt1Vxx$pdWMFAeTAg{fon8G`{vQye!VjwwRi~^1jUNXAI z;JaSI=uPBh>nQmG{=c!z&l5p&sR4e0wr#C8FX&@!9i$+j!^b&YBsWk;vuN%86I#uA z?dlWsyNAc>b0JU5TAaJ^sCjce3UAp$fi6!>?Tev)#>XjYTc*8hw9Ul5Jevu@+T1t)(;t2=Ti6dcxP7Apfw8uFK+Y@*~YYZlfW+Nr4{%+{f=MYqq-r^ zYG#&zzVa}ym6#PPM3;gZK0$*WShk!bqyzqTL{mouUcGF8rUSAS+nARwoksCvEbWtT zb&ZAvY5!6y9ECu$T+f=0*yq5FnY?Ui?uF{nHc-p`Q;(C1(_UN)sAxE_I#7vf1C; zL%{Uu?(SA@QDXL%DRsMuA2qpB)Ely!HS2S!d_ReGmZ8(^UeZ)`OXYqPTA#_yi)~3r zhnJpC*Ycf8LjR(MW7d1$g@OQJipjr2egn-3EOzIbk$q$=mS5fGnu0%@s{tZ*FfP3A zLZ7#7ZPxX<+2ruKzoy60Z1=gGwaA5fa1Tf9Rk=wc_(*b^Ayq2@l0EO1{gFxOTkmr? zF*LIWSyYgXR@0yE^ZGq{URm@(+uQ1d%^o(L4S(z&>G?`R1c6s<^LV-0t^oI!Lze{e zN%Q{LdGfsm%>-@;_-uQG#}=D9h9Xb`nnVqfTCD|2$&3CstU&!U3WnK6;B-Y9lLD$1 zjehF64bJ^E@hDz34V~VOWKjk!^P_JI@4PJTTHe*f`1*WJ;dg9EskD#@&ThRKvgN;? 
zgxGO}BV)mv4<*q|PEMY-Ba^+XsBLKh6c9pV0_4z4gEAQ9v$-NQ*@7yu_xWgUAJ4Ma zc=f2xBiFR9NpZ$Wq@eq$>E})GbaJm}*tCz)P$*xF^;5Ii$?qt>2S2PX@C7vb6$%z= zvh~VNHmGH`Il)I?q|Wxl)gt}JcYr}X_kzQ<@qB!F^@G%0t}ak%0-4lJq}Wy%{Vq|- zATGW6gvdF$d{&C4`uR1S!)-Rs#<`UNB$9NEqD%HV6#{|vv0nCl@-}$^{njPC2GZny z_6$E{=r{{qlmzo3lz=r3>fp43sZ5@JeF8wf4o%g%d`;*8ZF{-J{h4gwhGkL2PewOE zPw~UE5xzg}47)OgegXuyTm_HxZ0Fr{VGrv@?bTtlg|MeqIk}96(bo6Z4(+vc==g6O zk(nY9*|Aw*UJ}E(3C-MEOFUbkHWcLmx9SK|8$dBtPby;lUr}1qxHf2%MILdtyiQWI zV&O8o3i%k*+PiVRv(0#dtemGKv;t^Gf}__ZzlbE0B>?*_p|CS)I@n5xzxl3ZUl zLL;wG+hLCp8YkN1y?%COX7pfD*;21>5Rbd7s?D5L7~EeAPxk{rDGz1nQmLV7Bz@MruESs1ZxI)iY@NEH>c&sMfQh2e zg&I8a^707a&bmirbdKA^Qj6^m;km&1p(<6QD9-|RA3j7?vRp@5)cqjN|DZ8y1S&7*rmI>>mGjn zzjx-&FwD-d>^bL+=Y8VyefAZ5ULX*m>*-E7V^UnfiV?I|XUE$-MnSjRwer9NirlFz ze{6FFA-|m0KayC6%@jz4IYZqn+14y&(k?ROP;E9e)u%x-)- z6f-tR@xRnMtj#{sy=D|4(a6Bj&t1VjhQ9E=qI5IYZF8l4@ivc1j@fZR2BGzXGJ zEi98b{P%!sVKdT?d4-7R4KBW|a7)~b9k4pdg3y>i0%|HFk}sqJw# zfzFZ&Z|=qO%SW64OxFR|jSWJ&mIx^s0kI^XTI)P-bM z9zB~(Q(go z@cw>}q$tMuk}`K53%<-{ zvECQ_o1Q6AXt;_>vff2bnsd%fH*qe6h(NNHW4d+ME2kPwuO3{Q=cPWe*OpIo(@4 z;{PL<`Qy}ZlariYBR@%pV@M!i6J?LB{FIgTl!GAuGaxGpOjh>iBZb~0I)d0Q z19D*Xe;|YY=Z-skFiGup+y6d`{<^~`8bAB_w!{E&eE3HU(_b4Dc44JOMNW*C^#Xa( zf1j)0L;-HhdirNhPR^>DeUI|LCmQf6&*K}$icM9gJpmv#yA>KqZ{ZxykA7J5cQ1gy zKjcrw>c79gMd7NXiR%N5n>qrN`N|aX&jkM0y#4vwo!{?gBQ~RHB`k&eJO9kj7kiYFrgI^h9b2X#)>c0ip|Jpa;^<8WX zeE4l%6`O~DuXkWi{9iwkwk)KSFdRY}$uQIu|K0}Y|MQN2{y}nz{F_}eT)H{v|8va) zEzDQ56+YKj!0qLi`d)H_Z@65O>(J29horb)BC^aBg@VnJAwzu~_=jYF;k`lfLgW2kYrT;#b zWZ{EbyzrbH#d~yg=KD*%n9&LU0Z{f5CEp-`((7W1vUucO6fzg@b9}s#lT-N{Ev>l}FG4R{Liq2@D2Vz2;TG~@Y$L-i*|~x%3cE`67P4GG zK%lLw0u@+%>)*FB`GIg7&YYWrh}!U7!eYb!ol!qv7qx<>>ay*Un_n1>Je*;@RbTFP zek8z6R=y`!j%lf;}P-1oH$4j|& zquKwiuVTF6(a~-I$~R3jH_>{d3(=wfl>82BjKG4uOHXz1s%I$e^};*c94H$6I9Tyl$bn3^%g-S5x^E6DYpIl zWB`c}&_HeIX-xH`TXAtRuPi;i2{GA}gs1gw-6FcjKoa}O70r1`Ny+DSMt_UfuVZ(w z3AI^jLEo--{37a(+pdl)b}f7qnGEq{0wIS#6vI%S^sE)R@)W+-*cZJ2E*aoDj2H;j zM|Vf@iZTOUNNQ@vom^f$QUR+^NPzFoks#wpvuWIkF~TLN1lvwbB~mc<%>g*bf>~nY 
zIMbI=uz>)%a`M5vd z-u-t97BGOp4O|cKBqB|}(7IsLb|jTBaIp?#uclvy$^O44 z3V^?vh8vuu>TZW9k!~{aF-ztJnm;tKw_G{J=HTS)MR_=1l#ZBQ(A-i!2nq_a(M%k2 zpLvfN-*ZyK{Oy{wa8-)=M{`q`;y3LZhmA8RF{MWc6PRSj$SB(nO-KKZsU$Tv)0=z< zyddn&xM+ljQivjMQBfd5rBTr1!a<*6YU+b^xZWKEafhY#fn%wkFTp&WwO<&c;@)af z)Yesr7(h3mDMIk!XSzf{M;r+Tn-@GxzS?7eQec?Cy^H(bMIzb%pve8uYJT}?!t)PK ztW#6|jB41pU8KKh+;RK>70lBsEwLV_!rcsvya5OV;xPF+Z-k@~UGN(7dSFFDKgNtm zf>X&~(#!Vne6S~>cxZ>f{ zXX*x?yH1y6MIs+{zwpQ2Ah-a_%Z5RMf!2eTk7%WW8UDRRz!H=D0U=Be7_pSzi%1K| zVr_xonin|-*tPr?#WlLX`Lz-Q9mUlj2z*-!1jJvxxT~xT+4|;#DLko|Gc&GFxTIPv znEX7Mo}BwWU;J6f$Ps-1z3+*qFM7l@YZmX3`Mn{v6Ld%%z3b@XKLmrqRvv-8%h!aH{v!>2hrs6q81> zrnSj@MWV&phE{`6{uUe7XH3G}}Qi6Z&iw{N2;Ikg@pa-J># zJ6}Un(>#xc8F%2h)E*&$2+kC`*#sPTw+L(?2pZh?BSN#y;820(0TqrS0J$WP^qSDQ z3;O{D1y7N6G{vxG{#mZ(D=KlTSxn~i0#T;wo&e=JHAhp;lP|Rae?pbQekYV%pe~!Vg)$4JSt4_X~#10D!e%s@C_?%MD9%G2~ z6*Ki*K1Z>8igve-e48d2wYTkvrayj_I@GJ}m)5c77g^tA><4>uw?^?VEGs^Z--GXlD7Ya!%TD zN%&VUzc-Oz(wm^Ab#h@iM@PPVI}MRI?)6LdI9zp^-7(gmUs63ka(HKOo_Ov!wfXY( z<*tjBh*tjPOIDjVcc$M^E?f?m{LnBD7Nhauf}zJv%yt7HoIMkm#db7ZVVYEf6Z;lq z85JrYaSE5dj>xs6tljBE!cbcN&k>Nq`5<0UW5{Nw%32B*ceR`|`y7Zfn#F zzzx<4%)0MIZ(iiWm4y9X<(BxT6_jB6hlaJkj^l4Q6ux#MFf`b0Dbdp1j%VIwbEC9B z9!Nke)m-m55lok^5(BNFaaBg&V9&4TY4Jy$dm*?I_;a=QZlddwQ3=4)WH`0@A z`_1va1J{D7B;50D%8zw41UyUdE7u+A(H)*}kh}FL zX6~0a8?5uSkn(<&d2!NbYI_t%aP`PDgc7YIyUpNxwbzYjld$+$B|4@nV`4|Dnx=e8 zJYJF_l3h}QiSb(m>Rb##+NZlp66hxgkozQNK{Z$MboHhRLUw|u{$lWAWu`YFZ2qP) zxwA|Rq`{b{OqeSHk3FZbP>2xiwA3u#HcdYhnicf{E|IvUT;N9UHq3Q;>O}o(M>nso zpSdQz-Fvd|NmcHzV)s0W;h*LCZljWg^E)1N#JMw}q>quM9K%-65OR_t;EtXjkd5X^ zf08W<^&}DrWVy~{664l^LN}g#(jm_yYOmw>H+3b8&_#(v=BVSQ{91&?ntu{)-YUom z;L)Ns;nN0Fp8Kk9^ASRW5ZBcBAp7>h@3_wAgUURKrsu6zk*a3lKI%cLbn}i{={re; zu!{ebJDDc2kEVTm#sRWcwHP?(Rr*@H9glgU_h5)wx!4Vl>1yHV1zKe+d$!3ivPM2~ zWIQhE1k+R-z8vED9tS0iGdzn6yE-bgplw5mu@0*>i(@8mga!WQUOo6%(`J8!9m=vt zMd!THp=2#0I9-Z|3wr)?@@3G59l{2DR_A$iVcUZxwkAuPAJ<-XI(k(uXbQRi6x*2@ 
zv3du|lIVJ)pz|)#;Kv~m-en&3s*{0$)?+PBy?O&IERMP$`$>P_+|iw=P7FBWja|LEG+m!#v*d;$BBc;t?usl9(zi9)4uCa_XB|K@t$ma_#wv zK8Mj8N9MM4({i7)*w0(#33;y@OHL)OSAT17zsQL9sQR=Is2bOMle_F$kQI+c@!>E( z%Kd>VF~m(edBVCdE=xJVd7KbD!{E@vE7vk+m7D*NhEOiw>6Lh@Z4EQotRQ0^%$V>1 z;APyCZ9kES?MObDY7rZcP&mmB>aOuDt2)F_sq}c z`0kzp9K$CEA4rIM&a5TVSPbwjyz!~Dg638p-tWRy6dul#Dhh=UvRu2(Qi>dXaZu4e zy^0(OMQ`@FNDtId!fK!F(`S189;&qk!Jw7TU)3Cbf^S;0K(^6#331&792eM)?6g>T zEi867%pZ64hKX8=>)b~^mn}57ljiT_41_dy8SXY2=ytWvq$=BObL=_-JmlI09zhP<+n^x(F_JNT>y2F%8S#Lg% zD1ZaCmgT8&OY9O!1%1Y}M>Ma8`3#Sl59E59%uzpPOwpQY9&KJr@fBe<9t@iy*hFqSHY(N11AKY zr^k(dXMn1ZVUc44(Y=+uM(;AN8qebZyK8Vgtvqn4R~~lw4123`nPD7|mi9CX)t{=m z0d=d8pubX7RS##Vpv6oHl-!f&wLq!4KEFIaFn-W*CcsN|S7+-> zcUyE3+QJ(}%|LKb41fg<9|M4n3N-aqA2T;$_qIz}x=;Y}Ao`YY>)AJ<8UPL7dQ{4| z-rENvcOt_Wb<4`iYO`WM?w^v^Ksy&59G-p2%$1mWKUWUYt_fO;RFPo7z;|6{3q1FiA3%#%VY{zh0F#&5Wdh zJb!EmNp)$;*LKZ#ixPKoPj@{Ze*Myk)Ogp=PaQRq@USkt7L$9=wLa6q=|$_$Mpb_g zX5XXeo(l#U2yR7j(|TbIS7H5JUi!UtvH22dAb063TuOBlfvhNEqfJ-?CV=5%<1n~7 z8?6@vK%q3auv}8=y&7c2ie*6{q(+O_1#cDih;rR&9mxu9C9ocQ2Di(lg74_B6F)c> zeQ6~Jdq5|i{D6+C7J9)rXKXO-759tm_3bNLA)npEyz`aY`!ib$5el!|7@eAn;JVtB z-E-`F?V3&89Ng&d>CnXLSl%PAD!ZL0ht%Okt9_zbYmn*pN4C|fzxqV`#{m*Fr~W{r zlyRHUl}`M%?4WjDIvI~~=RKfHtE*SX?ZUY( z4q73nQ15NV%rJbd?!A}V-DP;vW-Y&&HQs0xh}(9v{*13s*X`~;`w7mK$~#LB906g2 z%M(Yeg*;I#KLtQXMs+#MQSF7uabYux6dRC{S9Rd<7B}H0aZ%iu&DmOR8x&_L^DxC474(c8kJLM!OD9HQ@bgf^pfJUE(f-e*gTVl0jJ>T_9cEX$ zTe7CcIJvlFC82-08&6RvQ8}Y2`o8R7U7Hi`!Mjn1; z^wAKv%5(}BZh%ohI$U9Sx=e=wpwmg`fYFYnq zk*}aZCh@+A^D_x=O0_rGAXEvU@^jsXf7Lro1F4LnS27goBHlIfGqgi%NZyTY>He3XJ8M4TUETOFgT6k=0@$ z8?H`y1qEs|$;5@i>ESY1dBk?it*U%5nOTnsZgF2a{?C!V+8OA=EY=6;bf_~K-ofsq z%XCDEQdvq$mcvtD)>~XpV=8mn<{s1FoH!W(uz=^mfV>6Q+z6h9>+QQ3Jy3n1aSESA zlaW0kS5pX%mP-ST=?kLaeZUMSxALR^(nmax^DSp9zjS~k>Pi&CW(5u0lk0!Oh0u1U zWdz73K`lN0S5z)`)Bde3RTM>XQuW@K;kctCUg{ptU5k(sL_qjD&drOf`EaqV==bMqZ%oX$3L#_qiY??TpRi%oZO5$?V= zhjF$^�cKGB#~`Qo^e7v99+SP3@~!KZ$0p^*mMGa|C$Vx1LFXkZ_=+cv-!saUjM8 z%csLa8>Tu>yjwz(c<*4%mmUFUv%v8HS=r@W-0eR 
zpUhN@4X@WjmqMoDS~KND3UUabq?KQJTCG*%%@0rF*GYmTnfIO%t^Gjp#J2rng;zZpw?*==j@E0)5$42e z97>3r6kVy$5Gc9HmWLufs8Y{NG2I<(?YY1>41}ahJbbfem{?gQKipTLm{xrsW3+M# z?$ejj+S#KW93N0f%Q=zs8G8Nm%G&#+@zgX@glBt_wS{ifGoruJZk=+f%G52fFzV?@ zffS`}t>LenEC1CL(PClCbAKz;EoOo_Q*J5kCL8VLr;`@wyHn`|k{bZ^w=}m^Kq8b% zt?w9pF-zh@8ZwRhW%(#*FolZb7a&(iFG#P<5_*cZ0g4)}Dfk|w*D&Nrz--tU9TUxs zT5+l7`0M$}SBJjnShZ)JC$}Bi=d%wyrq5G7FV_tdut>yye{Cx7t^H-o(gYIcbX5f* zd(Qz;8OzW5y1FTSCyjG37X~4drw0~HmkIMmqcZh+U|y%xB2kvXq?|A-bMrF5? z_~Q~#m7QN4W*5J!u5RsGI>JG}>25ma*n04K*VcOdzC#DAhKSi${=Bb03dkn{VIB-& zv4?ckAM(79?R8Ekrftf`iN6R!g0`K%a;qK1b9pTxiI+oj@8EW|&6$NU)N=Dsu`4Sf zMm&OkJ?(qq^_~DuXIrcJtNs$oE{?$nhdo-Qf)zi~;p-p2v;zbHNZ2rfzPg9&S&uXktMwjUHYge@ni-xC9k^Z~7gdVq*wjG|j&S}0EYrZSB z@=N05Q6sRrJq-+Y2eUZ6&puTv>7;oBTcgCJGS7M}Q=+9zNx~_zX#EqU+Zl!I{4#)* zsdXD|>}8#q_wZX2H2+{*NIid(E|Yca8w)HSzZ^s06MhcK<~^OdIps^{?-=!?tNSxr zRv!r%k)MRaES(Xr-kv`%($57~!vYmW{nCN`7RCXmD!_+Pw980V=i&KE$os~)DU221>!TsgIoT}YN3S+ zO>Wg}dDRdf0Vs+<_92Li7Dyr&b5nbr#MWsY6X@FuYYE>((kQ6VNpau&oqIxqD4Ws< z5idX;&^3L1b^LY^NPnTR=gav$-?b5dXE20TiUT%X-E73}uYJG(J5n+I^1eR*6jvci zUnlk?KBb(*aBcdu&n5hJp=colp}kk@xnIUHJXeS4WWN<3-0$Y z;FLMlw#70NVEZkxG5EWh-BY^_Dgz2=WVHm0OmyJlCgs?}NJG)f$VorN;+KZn!mh@D z&gHTRd8{8d>VoFh5H;4u6BPPstLtUN_(Z97k;=BexwWtHrcc}>7uNe?6j8cT0nlV`M5rG#rQ-mVn6g6p>=rtrOkJA zE!OQ*!zJ5R29xkCQIYvtvkMSP$NK=rM9xSAP=ohG9!Q*azQjIGJ3$+Ln#{j!e}3#% z^@wh7#+&4L67dpHrX7c$5M*G{urOV0xztmjuje9!xhyQOTF^3BOAZ`c66BAkFT)cR z&ddg9+(tM(*zIvEV7``(|S}C%ss%?kJvBHJ#Op zZrsU*l<18>Hb4>8nt&n2F~5}}n)0F-@KGvmQ-#~fq~+gAs}S9)3f_e#tE6-%c(D~; zZ)bhsri^*aK2VmhhF^Qr#erXh%joKI+b5Zh=47bJ8oWjK;^6^4{6?C6qg5N=Gtv(; zjNXkZ1Ej;9YNDHFo(jwuT$;q1J{DWS4B_2?ZQ=muQ)39_xq=vl;60CMlSc$uJ1o@BVOXc*7uBNo=I>hn}zD{m;iOBSbdktks4xiwTjEwaJ#SyV^$-C zuB7i_GXYNxQ&CAP&-Bmc_wLu1$GxhK5CJ^sPS!b~2#~q_F&jn;^#SO2^NW++APm?F z)YtVaq5h(k;YQ4NaaiaR(Pt07yiqp^5NI3K7>yX~2R3j}A@4aX_eCtVWBHXM%A;wp zz#)>~p3eqQq;s=vxP8W==KP7HrG3*h#{P%jKKOFP;I)lA?505T?Kd7GPz4+;e(88l zM~KuN25Gs=s-S$p`MS!(D!6eKsg5?2UZ1p6H$}IYYiymEe7L&NU~}`(+cNoX`%A6A 
z_-2K<%mU8i%&6GtYA&Ps^jeX~Hl7GOIxDCKkyd`%f#`H$*uctb_}x|M>(Eig&;8S$ zU^&KHv(DLh?&q&`4&Bd5y5E{zKR)l$@<`3Z+@^yzK|O}TIL^8d&E9WiW&$HG#CW)1 zvE;BxZ>W9~lq(Fi_aO4K_*sy;?IoY5&JKVv|8?0mv|U*clE z?)YLKApA9F4Jey9^WN^BHazFpf-*BSWns$bqmSoQh0)cs3>3#nMFoK)JZM8zAlnZf zH*g*|Um7QJKu^{Iht0xffvkHTujE&VO?U#^h-=|WqK(K*V86gwWWST3RGy`Ui$o1( zmKW?(D{q|TkN!Vdl`jGAKFmqeQ>g5dGW&EM z;LDKvg){%GJRNqI#mX=Dnwk5q%^)MmXfC2g*Li9c;rKwo+U;oHkI`)Be%kh(Oc62| zD6=f)_-=eZHFe?dwRi5i#QWoWND#WJUTko1SM0IbT+2s$5bo0fne|b@xDw{6CPr8` zmH~UC?7$r^XFDg<9H}62I&+@h?ym$QU|yuyJd@{^?#@cA6>b3Vb(vx>mHz{ommboq zsvp8ioD6UAk_wfQ`b;*a+^Og8)A&wSC^)_YOdVZXm2g|{@@z08>}4Wo;`C%bQodDL zhV22msR~2uC+?$b1ur{0$Vk&P4O$UQTomPNZj(OyE4Kjn4lj|Ghw_!AFP8g(OxO91 z>dfo+bbm|Z-14^%#G$Fd4OaR(l4Wh{rdvnf^-0_FPY5t3;{&Cx(rTY#6lMHG`lwoz zY@0DNV>EG#UigKS$dZ!D-hJ`HOkQ&Q@NrluQ%ML5iTbNqW`tNON$O8rOqMWGGE$v~ zn0Sg~m>5!2F`aygtk-8F3o2IbRaM)A$5VArmm3zQQ2P4%hfXd|BicXeuu{eC2_6%Q za0O{&J}lvS!D1_xhgZbhb}AB#<|I^LnjN`|AuCC?0>V9cn7x9KTc=`yWvsS1P!Cf0 z3-vFJDt74*cdpHUk@b#|Ux{_0qR3wylcR$!umumK{9x1wiI`qxD!t-PD{9&F5Jb4f zKA$>@`(jklPN8`J%cppDxs_8%WU1s@xoC4^mZ7Y8cEA*Uqluo zKCiOG3DEc9DYgh_Fe1yQ#rkK!3*j@0)2ptj_x(6tc^>j zrlaNxDK=!GolJc1x0xW?wT4#AHER6bGAP0+yqrP=WT9$Z?Ka2AGjYp8IR$Rh{Yy*N zWBnb<3^*8#7pb+9qceh%WeF0-cHboH_@Ld9->o1HtFk}Fd7YLKUq=<&lbk)idex5= zPuZpw9@|s$1K-;TO9spN4eL{3HCiFy8H?28M%% zJqZzPE2nZ1@1>bCc$C>G=Hm94jF~mqjb1K`!#b%A=v+JSi}5Q}ml>6-#5IV^T(b5f z;M}9)R>O4Lx5Lf7v4_-OV*@+j8=Og!j^yLVL)N+YGYi?Nw&;pFOTOh_0kc`)Pn`=; zM-!39{a4bXrGCwAW0C^-YHdbyTk>BT*P%vQ+)0*Go(DqCQBWMTmeluhx9E5Jbsg^L zRP!J;bXVOxN_EJW`O}QkDe6j99uoG>7FxWT%r$O(EMm)XG~p$PP{o<5i6{NC|ASMt zyQ$BmTC_s$`tH0vM1hPMhJwzZqpi!ICqRDqvmUt->L6P-*^+I@pJ{G5?;-*nWzz}T zu@#g%AZNM4k7r&fPbZH+qo`VFm>95mu~+u(?{ogNyR-8grZdon2x7f5yxL2c3@v>h z0^#trv@7p%q3PQDniVCMB{sSkFY@Ud%gZ+P!L$U27O1J=W2ZQ=+|yMD!msc*+>er8 z{v^&?V(j99o(J^8`9-(Nat{M0#|qY52Wo6OQJoa*$*O`=Jz^JQhN|aX7DHA7wxTxk zJ$m}Due~i+b{d^8EcB8a&}d)3OZrBu+axLw>truON8xtri+3oS>VawZExy(E6KHFw z=2D6BO96JztJwIl&7y76tj8$KqUi zQ>wi19q%niGPu^L|GGM52F7e*Oi=rMAGfl)E&FvI2#;L-T`Tt_ 
zK8)Noope628HI_3#a}Yt0rVUDD$j4rY{NVuz2rgsG&(@=9w1tM7KtK#+&$wv<3OW9 z$+aGqsKO5$8Z53KR~sWK@oHy7jW$h{$M%Dq%QZU$ zyIouR*<{J#XxoD{ytBZz&AHaFUNWtT7nyZY(ngwepUCo*GG4ZctRxRoKa}M1Qaf|8 zX&;y3hb3fwvU6BCJc+mg!hE_d`+>8SS-k?0+dcYP|-r%!Fa~yYm198V>Y42q{k&zE2gK4loVJtTT>!DD;+O?H&~HC}7;1C>Z9Sw_ zKi9td>Tq-#KE|O1@yDN=7jJ(k;Uh$TYLF%01 zD5Ox7p5~eQNQpZWY#h=GFQ3_Mq?~U@ND4g{IMs_gJMezz%7OlAQ756!Zyd zxo^wKv59PvQNZ?WRjiJ1d;3@5ng1R%SA7232LDfqrDaK z22&(%>nn}Y z5^8d|b)HKO{e97|6HulxxXlK>X!oXf`(>zBdOYJEKzJWA-MwtU96-6QCTcT8n&9Zm!!?6)WGW zg)lP$cs4z!89-tyB1wUj_j4(0W2oO1 zdnf+G82Nd-#MNvK<)RCN#6ZEpM-=eQyaNE#I%1{w0a%M(gK>#o9ph1__e#X=J%HXo z&bNnIzG0q0n47zOne1s8^0-j!Ow9liAPtXL40eSF5%w0MVAZ{wa_yQHGXR}zLv0gA zkM6!{EA)uKg}d(~r4st=th6!wf#?c0yBo=Q=W4jLtI1m%`u7U%7->Zt)*5)PjM8OnV$d(%SbQf<7jmzA?=5W= zmunAEvg51oA%!Q?A80;xyzK4SgB$Rb=C^}BI8C!{JB8C04;)zlUx$_mPut?A6M(g2 z^#*w`yrb$r{o!_+ntR!t=MD2w#yYM^3*9D!U% zUI~mX)0l>a+%hWw6}d@GIhQ$o*@RCHha1=;Draiudrpc!aN(hi&QOvGST^1f$QML8 z*j>I(79(%72I=yTx2*Q(2ue4ey2PLoMtxivW-?E5gU@TTTR_yj6TZV(Z`~?S*P_cJ z3>+*tF2Bhx8gY^^x*?MY+c?XTO6?9M(eJ&?#(l_V<>j9|jGKuqc2sKh5ZtQsdn0Q} zNf9&3ZkjMINkJOIykG9@6+`WRw-?eg&5+O!6l|cst6K3{VP27tu9IBDOAc;pI1xRA zw2Q8tv@pI>Znizk=qU>6`cHz`v^1C zS6cETYD-9p*|md=JUn8Wc@Q#aarqIB+!!zI?uwsDC2YeZ*8O>$;M4TT{(O*LE47QL zq%~}a+!pH&WN(DrWra%Dmdg^GzjT!e#_ly!x>TExCdf1Gkw4jL&nF50a9ECY)%U^b z+LrYr$|W!OUGC=g!yC5^&;f?slYjEcw*Vx=H3eT8k!soiR(_uY6_4&$GF-KIGt#Py z*_5z(42p*(C7~m)ge4?v-ZL=!bQaRg#h<%fU7RYmc6b_om(eR`=MI7!>EWKL>dcu) z-}Tti7?Q6{KzN4cXrp(_C#Y)svdD7>Jf#jF3tX#vzeCBZeKixXmah`5f;xriK+{^0 z6{u4eXj)! 
zW@D$TjD&XrHDk8;^4DFw2M~uJEgb8BrgYdY25I2z*zin$#)whYl50DXZ!b9MQBogK&A`|D zt**yJ@(eJY#-Dd~Q+>qZ<@Z<(ptkZpjVX9=dXxSVa*Z$=)pLkAUF7jut>d&`H~Q`K zhcl)i^G!XS+;(`pbeV={B=K3tKqk$qf_U}KJ_PG-7FhDhl~{s62m_rphfq&%|Ir2oZ1By82xNtL~XBA zY0c6uos%O|`ip&#-N@7*=H*2G7FciIi%lSo?jH50_LmbKC4<^u`Z!mgPEi^!Kkx|A zjmBN@M`ZQfcvEx{EQK!yI@ePG`?Aed1M@l!fG7D=p+ zsF+Tamv4Vgah~Y>X31g3j`EsAvJ(wh1|CE{pug!9(j4Q>wWitSCLc~Sd-T2W-f|AU z2mvn_L$xan8c-8&(OkV)G9G?J_oew>3Xj5D11%G-2M?ame&8;a`Uu%$^$8~y#@b_5 zj#m(z6o;c=qVj6eO4OwrJ;!o&yKQvU{`segL%Z#wrUxqnM~mpS^nMNbgTAPehA&Qm zZ`tSUhQY*l>^T=i7<%T6zqLT`f-duKY?I_m?2?C!9-Q_B-<5AY-I(&YM#~zU58{u+ zD@hvU=tW9Rl^@_RRdy6UDt)lw?JTlxp*2sH6R>nz+Or{AV5dWPU!?JXQE-C4em6bw zev{K>fm*+=qYX~v`0ug;uuXiwPY!A|B_W=E;dou5fy;B&c<92y4d^e! z$FtkMAFoIA`Td5wmXx)fl;jAaIblMws$0WfU)s=yLMNR*L7NwaxEjO(EeHdvak`^C zvp#S%A1#_EbJtBu%s{bpE6L?(8Bsu5((ynH=+eFfAaH_!demXn+iH;|gqnG(;&Z1& z9bnTsaTjPf%>WldFU&Do7l~T{>~5Hu*#K2 zJ@%4JrLWN0eya~XI++RX+0*3g;ZmL=|4HbBYQu2tn%LI5_q~x}+K3h{s~oR;>Pz1n z>0=l3JfZk_6VR=!Z!4fiauXmrflisW%jXSis0lH26XE&GA>S8EnX%jg$JdJ=%Q#lA zSV&h`V9*D><+(Wnz6z$wFI+o^-8(&0F+PfjWP*H5xE?PTPH}Zx)Athay)?)xZ@OJ# zy|>V*De47SHs4vE(q$orWOTiwQ4|xC>lae5jdssDD!$zTdB)A8Rdzc}dc|oVrkul- z?_^&~44cY-i;|LIa2kGPvDB03O|=A2$eZ{_viF{OlQx)q#*1WKEh~wXBz^1i{kEJX z?Y6-*aCnL=eG=WWdj|IWF+@bxt2`!zDeJixo937dj&ZwbA+>qu2T(fr{VwrxH_~Tg zR(}x(JRcZ_ZUjW_<4$6ROu4dY$3UM{J9<~e8IM7!Z;VC{c+MB+FFIs8zCBn=%x%eM z%1QepPzWGXpL3=c6sJnJU5_n|(NNI!NwbONkGl%kFp(<1A8S5R15sx6gP$y%enRM1 zcEm4dwf7k1DvO#7A}tO}0xgi+JWnBv-XCfC-D5`h9&!kazh2LKjWYn-w*1fwR=hoJ z_H9b|?N7HuKf0dkr_b1vFZoBVydyRp=n3CN8#U)Iuj2AlC`k<#=-Z;M3?P}2W91H; z38FRlK28=#d_3dSy6D(B9*~Xj_}P3vTs0-T<&%JJ-P&;V>*hn zot2g6U#{pi26|fJXKH;{;+NsoHhn0ucV(1bDhv9eO;7>fxqAhE3s4XKc zK^`HfpeuN2#xGu_{F5|{VtX3Hm5^G&}3 zeaZO>Q`OuDj&!&-og%wk`a}IT_042R2YB_8UGd?_!6aD+rM>dN#hrlZR(_Vu?Or5A z0a7Waqqokr7qw@0su)Z_UZShT(*klm$$WcX{QofbmT^^W-5aQcw4~DA3LB(Bx}?Ek z)7`Ow4N3|~gLF%SfTDD5y1Pq2*hp+zx*P7=o^#&!{omjDa=+bAD!AsFYtAv}9CM6k 
zJkOxqM2_sj10d4pBYU~@m?Q5oZ7Ju?S0=?-WJx!Ogb8Gk@&`2_+`m@=DcT+_=!||jVI$D2L!@oEzIR`ggz<<9sgLPC&ui$jG~hAiS&Pb7dkdRk1ADgpvQNX(?(@i_dtN?klw1arbxqR!nr++C zq3JwGOXY#@Sfyg%zZjb#?r`HMc>$mgJ?{E=(>}dTjW3DuhfM%mjZL5Oyb&IhLtP@sb2UkJ&Ka~}^xC&< zc=oeg?(qn&2*EOS5xG^J^RSj;in3a~X5HtD!2Z{G!#QFfe@U3$gu!-b%&Q^rITx-G zwyX$fT~8j36Ib<3QBM(s%4VQPc0CJ*%984&xUnO4k*%652X?s1)>x;A7sLu)mHgDi#?pK^V#FL zcKLR-4ZnNvBt0mfE1(2x05+`extSmJzLQBRB6$!1Ah^{egn!U~>r}=L9e^=K3AbZV zT~HbY1hcVceuw&zyN}Jkn>H|@&B;+M0V0L#9)zkBjMfV z`J8&uaHBy%*7!}M#U_<3*1qQuSQ^DqbJI}k(z5Duwh0ibj*O&rXaG-{xyCRVX$iI1 zB{knVQaNouX3KhN+$pRuj9^)P4o`2zZ=C*FU2azMYg+Z^g_-3H)kvBaUA@gtt6^{7 zum+G&*J`wy)r&(jWJweq@b0jmafIFM8gP0LRg6jA)lrz5Vdu?IiZa|pMP|jW0CiRH z9+6?gJxi&Q4?Ie=J4V6k%Oo{2D_9=k+bEDQx`hw#H(H2Rl9c>5uZHSr#rP{ zbAc{`df_tp#Pl%-!o>-XWahdf1rEm5n4AU}9Ad4_1XxelUg%GevQToUv4gMP2O$#w znB%aKP5SaMu8pYqpfHEl=w?FaQItsgl#lDJbDI7De#*0lpQn}C`SGdk=Os@9Tn~*) zJkBzNcJQJT70T|U8wCYVE9dJ4_l;Ww)R$eq{+hRHzA4UC&4nxk6Hpjm3Y}C2f*PN* z^}$xLmS(!c8K$AUKB}aI z6RIOap_Yb|q_`;1I=$t6p&4@$p`WHY3}`e55%TjNTB)_Ep#!eHa6nBaU!^n&Le9;_ z_&d{Ux6UB-dxls2oeCz-hymknzlz2y!q6o^*}>Jf0<`H8@IdgXoF&@AGBWlK$Zj6 zFca`f*aAKLiVN;O1$Z^mDuIY?q~{j+WAvW5sLFG$hDwcg zZH%@E9S#l|#gKVW)=LtPppDZa-%8s8@}Y~oquh86BP?PDH7=9uC^~_738z3CJsWI8 zF}t+wd+@_BNMUO6^BvX7mBg<@71LHf6%kwJS#iVSK7V3C-t(N9M|mk;XOl`wF4c|& zERP*3R@xm<4qVAx;NMQVrs8dYKn(aK}_6kjSKq zcCzNte7Au*oRiw1D!H1vS70nC6(fASw;ssM#3~sl8mG%D)w#OkFA~H-a)jZx7B$Y(3yB*Ev@4uA)VGJdTNq#0|&TkIQ$BFCok=fvniO0eAU+hOD z?;1JmbZPOcDeJo4reYoj8p0jwzh9b` zUn;&VW~rB4IsvyRi+slwcH&(*H8t-!rrRdN=hG9N(&t6((BzUO=t2!u(T%3pPMh-< zM$ngyXlUi{$#OnEa8a3WmXw>IPYl_BP9JrTSQL`5;NpQK)DL^lQO7Z|*+q6CF98b}J$ER(B zB%YuUGj|2lqVrOIel9D>eJ&10%!^V|W_PE=m8@eo}Nx682k{0FHPQMR02u6$j6n9XzGC4eP zoNMyWHO>S(Di93_KvhhFMpbOBa}$34A43?yZiJX^P z%!&u03Bycnr4BH|;+I!r(LO70Eoanwh)ZW1Q|QFDQ>-$f8PnDj-VgLGr@QpTQq_Gq zfS4hW`(Q_fVx6O#i_qq#In)oE354xOaLCfaF3??bR=*B6pFwhZ5?>pIQm}%9_&M0P zMUt@fSRG#{Yr`ehS+BBe`)1dRsM}q-+3a>UW^WvES!cfZM8TJGEpwZqdBd%MQ{3x! 
z&ocOl;7jqi?X7g3zuu=>WFp_obU_R3+j-8yB(t!g+wN%K(Us_67$yr1bRcQtM&@v?}2 zSZS5kIt=cEX79v%6>+F~M2FHB*veu}F;XVBrv|Q>({6vHgjVGsy3K~te>vkCr`4G{ z<=U{I%%&|4_fd;n^LM1NfGt>leBeLjkafp8VLrm+xniyGqr^c6GP{Frly>qn-&-5@ zK3~V8M*Q@QV(k#uH}%c-+;+IveEe*P*h8_iP6(~kIPV)W6b1(dD9;7x5Ziva?K@^i zK&*O=E*om|+k1MSJlL>MrRDaY%mA_!4N7!!0j8_>6;=oe!pRMb(S>tKRM^t)SrK1H z@DC!g+~z&v^d%o50We6v{|);4b^!sXANF@?V^Zw||M;+G$YYB$>^bGFIhlU?n}mD; zUGB%1aoVZVkmXhrup>Ar2Ulkxb_)yc$FxHXzeU?&C7tLdbdT{&!qRpu6fd6MQBXai z;)C3)xMPhcXEIVDeXAY$qYKQI;Qqt@1I!R8{Q(BE+ za$xaJXkB%i`~Czgv!D_i{{E0Oh1;BDW9zm9!!r#i3>qjTL?lT_u0Zoaz-+p(67l8J zvU&7)f2;vrSyMgHyyJIVs;#i7D^5cHVieshHz%kqlHRsF?Pz{wzaaH4;j zP1x`xOs3HxJ{0cUzg2Redq`SA#)$RpzJIbb`@k)KB;3{me);YZ0EwhpfTH8*%<(8~ z%3k!IVo&8#J^rZd33o2hC9d&cY`i3f4!m>pe^0~HFL>oT-gWz&(IoQ3-vTw**i0-- zti~OIUnaLlPgPI>*;(lu&5_-{#qW62PYIuL`*|7}mO3>hSh;hW_K7K4to1$z)w80O z#P*%19gs@~G|bvX3)!MK9fVZ1F;$SMaJQh7xu#RhJIL;JN5?~zVi_!Us2+o{fS$Y* z;U~bZwibIGZ}_?Pe*o3awpIth??6lU$oL`H_44%GZ^1_n?pVu8xD24m)P$650_(1M z=K;_fvVwQMFlY4-ltzlEmj^W_$~0dULzW$!T9t5kGO2;;*<0u8Yrw_hoE}4SdhA zDMl)PHYw{en4c9KZnq@tGF8OLI)n&HG4)>2Vd0ax=t#?4?o` z!fA7mKUvKHdCWCUS87+{+xtX*&RK%N>LO}A^VrQoG)sxHhBLn#p77SYFGkp8Jm``n zAUEzL#3{J3IB2yC4kLXx(jkJ0M`wefy)(n*=y8=!6jRh8xG?9XEeSsb{vM_z43ARH ztjgJRWFaz45uNZH!RLJd#sOv{y*fp>0;tFo;_>oz|87&|5lJobv-{nqqb{eCs4e?} z!4y!QH^mn}MQ4?o+-y}1NlsgsbFQh|3|?q(*_2E~rk(mt3Pm_j@lh=EziJ|2}qrKVCRuNzKa4gZlwSqu;0>;_d() z8I@gbwaY~!%h7@ds-gpCJ5mzXwYHx1 z^~KT+pNNfRR(5tll8Wq0c{W3H(R9Bt6f#2xM7+m|2qboC?rxPB9mqJfjdOp52Pza` zG}(!oe<|XUQ;*apn=E6l3*j8$HtbE-cUJn0>5{bmqp=+ekG3h3;;DV$QHlH$lMj=X z^7trwx{CT@4mRB$mJGIT7rqTMXOmu*!zv@96oFs-0#qf5wI8Ejm8Cskjph?=_7&Z- z5bgEc6x89?gAdoZX;--BQSc06@Kx3c=+Ml8gyu)*yw{o5+Hp&;$<)uxgFg`0q)30G zc#LX-J}PRlrm2S9zW?6E0W8-U{RhnL!qR5l1Dr?-zV6~Trg2n#=vk_T=Y1#W0t&|4 zKg-|M_(%8Qg$^{E$CxQHOU=t@bL-cZacdfF=`x{XM3CED64301&bFb0mRqVx)H$|A z1MXwG&lBg_uV=XOMJBm>-!5pRxiS+MMfj8SRUhJjjoa*N$Al}#NYj3L)%M|4%)&Mv z^Xrt(wFO+J#Db4pKO|=9SRAFF3?8i|rAp z@iu^IAB7Ws68D0|dwzf3w2ufNl+ndeVy$itx;LHmLF>f05M1m$zjX3qigQ31V5DuR 
zdS7<*u}9QUji`VW)p{PEj&GiyF;Z&(DM$E+^&OnTXnd&XPfxgORZS|WW_C*TQ*BPl z!S%!!kC|76X2Z|Ww+ii)eqm7AJ0D`04gFLiDVA`Frx5V`G}&rkwjfeilmtjkL{OND zL$luRzvMfbSufNS5RaCZ3e#(P@X+rGLm5!NPGz`e`(L|7dQB(M_f|FmwpoMfk537| zO7^k&jcNy%Ot}%#r;5&}D|yyyM!us$Zf#%}E@gIy zV!7WtW|{qf{<+tfX;zz{N~54QD3C!=c;@>QDW9u2M^M>BiRG={(U4u~hyb{Dc6aZ` zJx6Oh0V;^tSVS4|hr;6Bou5#q!S9-a1cgwzL%=7H;eJU9pd^vfwa4`xB7e0wlUTbG zKxR;=FN!`YwHj3N!6il&ah%vo#XeD3Lmxz zJh-LXzZ?0l@DaD-jaPJeUYtCVr9~Mh)zIVrkTSZda zIb*+noK|{P-wgdeo&3`B3pOMunz{rgbDK&hD^lQkg>$~4+oIRt+a6eM`Ew&Ss$l0w zl>lVEA4@rI^_CIY5q07%fz6TRpQY&2p3W&P7@Zpi9{+ z%CB+ID0f6kGvtZCPg$IGtNvW7_o~rj9$_*&t*DZxGMcp(f0?^kmeugxX@zt>>vkXzRvU?hv5Xeq}sV-T%L4@Y^7K3VhXGI9*$Mo{e_Udqas~u9j)bp)QeiPk-b%E7trHPjPg&LXeY89Rn-8uh z^*{_Jn6sKKYJyHW9ocP`8=)^PaaHD?fqvJ39^0KjrnMa}rZflCUia8e0Hs(uhr2Rc z@9dc3zpbL8usFCtDXIv)AD~P&?AU4PZx#M`;+s(XY`12=m;~1>l$mts^NQU7h1Lj+ zWt*PO+j+lxD@ZalJKYq3-%n&5_9dtQ(Z?{L5J_nfAoSp^6*Bv3xTneKpQ#LNPM73B zIODV&w`6jN097tHzcT;-8d!`k`ab)hS>8FI@%W5UDQP&j1ve#FC{cG$;U0o^M5FF@ z@;lFbzPqQa{^|XIvLNPUca|SnVR5Si$!9^o?PvfOHai%3kjw6${P_4GuEvu9iFEkK z5`g9pm1l9B(G}ZqV^-m&oD0Ek{Cg)^=KPiSu2(K)=cA}aDxN%g50nB?+rZ&y%hJKJ zKX^qLE12&^aAM~{97}yQ9azP~fcEkjfpp~)xl}2l0#4K?X)ko%3;Y$P{CofHm!@|- z#8P@^V{iX<1!VZ?#y-!9M^XPmHsm2A*^LZKCVI~NLElM85q;i$IWiTKk>e3VBTz?9E#_VpNjU$)R?t&FgR%A9`ekKj%a1LB^5_)|LpYF@ zE_%(Z%q@8Gb^eDD3z2fq%jh@29pI4g1p+2^eQYgH&xgLD&yYc1uasg@1e`Vh9nNg5 zhe7w`9(cl5x~WooS^BY1Vdxk?+eMIarf@Xg&c z+)WT=uYy-Z3w&XbLU7U(PKvj$xE@_!sQxyI-Odnb{lyc&NRmFL(|*NzgXMRy+>-{R z{WaO$hQR;lc%#SmyN^pszC!dDF7)q{uRs8JcV~ z#;}5gTtIJmJndd402IYq;H~`sE}wtiF7*uP*%!0C$A2s&6zK*Dzj><%EC6WQtA9VFLD#p|P%wV)>T{SdF1haq_=BvWaXiD)!5t;h@I{ z`nCLG3<5OX3Hjn=Yxz%S{<9>qv3@*4j@abtcspQ5Qc=LX3o4N1){x2r&^S+vYUGUk zPnZ7uD~}0i?%DHH8RXFU2?PC#l$Ecs2yRUkup6wpITbkj<>F}RVf*}Fo&^rVfP0-g zvS*UUl$4aezmDp@`g84cMvlKoEml(<7O+LE6&Akx+l&5tV*mOe8sLK_c*Y<8X}FUC z=mzzZdJrFSWbl-S6#tKb{B@bFmjed6zR_OS8hA}F3h=FKi;Tx>jOp$~A7y|SQTCf2dM2-v}y8zQ)x8(o0S;%L869fI)I;^nC0d^pIOynE{ zgpa0$1O0PsuGba%mq*5d%&;I0liD9FmJ~J6ukM`Ba9*TY3}*j`_^%uHr%%YY6{rCH 
zDjpd45(EZDothqK9G^?Z5rmot5`?sP4|$)>!SxEioxJ)VAMig9D@sX)>c#Q%^E2oD zF!*~~B!HI!>6J58WLI7b3-J9*H~;<1^!|KVQ-v<~zdVAY0DMDx`FO&g*PH``v6Y*% z05l#3S6%us`2XR{{E2ItPWo+|{^b#NYGA~1dz4@Q+4~j&gDj2?_8EBy{FE=~HQ&G3 zPBs_tnJ_NyG*Vq)#{Oe&)kt%j;wb#LVas*`MjS-Vq4w?1nv4c4L4xv9{58_t9;y!k z6I2usr<_$?Z|#P74hf)S!e(MniSAph4=L`og2j*kndZy$Ws8^ za;aYoF~e*X{@WJx9l2e4#8|1#HD4bNe9J3N(C*WdmXUnq zH%lKQiV$3}p$Y+mf)Pe{03|Pb;B7D+cFPYs#N)K9|o%V6A)v#-cn?;rvUYf_7dV~cEG!)E3jnv1N%L_&_x(PEKOYYB zAd}?8*qDi-_lf4jTzaXtTE5c@l%FnjDPh5Y6zmX^18fZ-I~}&5zy-Z=2!;Cz63(nR z?$}Z)kUgJOz7Hp>`83V#Q(=iRAaS#)kp`p;yB@6r(H&ku7rX8u==3Ps_c}L+=K{#_ zS@~Q}WVz;JL-v#jk5++ijhv}>nDD=kI|vA1tvjF})HfyGB`F}1ctM7ZKcBv!6m=~E zJiQ_@LQjmp>=X#1Zu~gpTDj?6S_X@FFBbCfB&{Xk`rbkCd zwU!DRJh^o$I=6xRW1I6Gf~@5!AU<#4)Ux|$gUSA9lsFJcul9g|;2J<%5KkdoCS1b$ z#;g~vCOE@f)ljtKv1SGE;|~I|K0A%cvLrAI&F0FrS@%dtBfGk0a&VgI6;+ zIszOTm2DVQwY&%WG;c?xpJNTs-mj*TAyBaP(xeZWH(kO3{H9~^(?q+Z9?G~TpeU2j`S@=Y80dfVSmXw=2TrKJL7 zzY9zfM^w0*%t=AIRSxU3j$5TI&3X^200pH~@w2(t;(}iE30V(-!+%}ZS_!#|D}ZxJ zkG8p**3bf5ePr9jK-J-oQS0GfyUZ?~3DFIFw<7~?sT{XAMoEp3y_cwBj_clE>npJFBHq`b z;k?_-Pfi7BjcOGx%w;#0upr6H2#KKDo4vc+J%?3t9@D@XKoWLn^J|vFdDJh!e_sPu zTI9eNQ+Qta_xsR+qJH2%-lZ;FuL;Rd0)!n?E)K_XGuzXpMlXA8P35y`2^a4_PkRY6 z^Z3|OpRKz-dGvz}6^M)Q0*D}mveB4m0KTi{EpZo%NQ-*x6 zdwf3uF7_j@f-j{{0iOU&Zaid&JR<{u4SUr90L)Ds3~)&D0kM^udJHy6@Qvx8>}|{4 zRm-um=2Q8{)NcUI9givHIZO0!xN@f_?rQMco7BN+n8?_Bhxq6;j4QgS@#fk04wt^4 zbX1I>ExC8XL+}BR$CY!%u>EoEE8i^7-{02av##?dSj?u-4|V~#+5CIJn(!?tJcVW8 z0TeSd3J(=FcROdjS4{H^=3kfLCpWJj{!w0sy;%V6vQ*5ZAIsYUFhxDoCGlU)9RM30^y0;EB8OIe%ZAU2 zz^Fp7-IGqI%zzIeMjT*u8N~yj$I}_#^$|&43H2{sp~{?j_=$?I`M#km%w{?&!Fj}2 ziRu;gwrn^8vZta1T`nTFhKft)GeTwmJ{-T}cz6NTrfI;vyIwJz&uW+-N#WCwz486& z=*P;@4x(Aw*?&@dH%8RqQt&)akrTSP_5oreit0Q!Pds*xd z2?}C+{6XQMazw%oz*-9PIj<-Yq^%64Am3Mt~~CS zG4;>?V*63bQ5rba=bc%+w8W8siK8%$_l(+)^X%Q%r^aLEFGfQo9&)1YWffJsGn@>I z9jP7x)NrEi^QLZu=}Q~A(Ojob)r=UBt}F84vN!}SqQ3x_5fq(D~-b3vYffOqq1ftw?4hTBn&yLux_s>C{(rs--KUI3$@2*Ggww<$$5g%?O&)WzF z0r=UPFzaKs@<8DzpRL`7yMWLy*$>*Dr_)rRNW&LLE5gGu*2_7J9{TN+b6o 
zjqgqjouHN9>k?-o$wO9>S2|3UTtv#3M$bne zGv_SpeU!HFcD|-@@Vt-|I^Y~olx)t&5j-bTZOP!>c+$Z50vkCC=-*SV0OB0efK{`c znRBEM06O756I-P=g7`|!3 zbo$7#UM^_wcmZ!hX$tr zFiR$4!1k?JR?FpNscUgTK>9Bv?XhS`hH!jEY%NvIoU_XGt*!UBXV5gc+0Qm!!!V<# zSw~_J1_*a$-cIrTI2fvaejnV?_X_Snijusq;r;vdZk2l;?^bJT{7jTEt~hnh4%DMp zuudnJARbT(KnmjmHh-1jbpuFTZo8^|dfH|)KW@C}sv$*=nn9Qsu9JmChU5pJv((po zBCNSy^OKV$9TKm?KKRjH0kfIzOVYjn-BPf}hTa4kfFrY)R|^NgyGsjCWx9?}K^FXc*uJ z-zG`@Y!|eui;FXnL6u8hF;JXW&H%b%z>4iNy^x&G>RfEo&c~SH>STL`?(_&@f!mCp+YL`j zG%c!#6B8MBiwj-O-v5;G?A>=!t#sMXPA`~0@mWQMw|r382be5&#e#nMozk6uMlKeI z3bf4tNAE2?byU9_#c-KVrT_%YB3WSv@D+BW{NR4VtgRI@xB3@1EC!%d{zgN8~|rp&hjb9TC^P zaeI+?`|Sx-g}y>*A!51;0{B+L@LhAN&}}n=RG7@{{ORu|%gCEjZI{%VWJ|g-y!Im` zCY|7(P=vldhW=LnjYErb?T&(#$Mwyn0Z{=NI<4a`9S;9JNM;Ni}hv+r>{ORt6jmeJ_;d2fatgARCY zh9#&_W;A`h>A6+%Blc1N7su~Wvw+Ug5wBAW!+LYo{2c;0*#?fz(PW|mD@4*(@RWo% zF`jd{BxlvHoxy!N7KaYwwaBcTJei`yUwb_=u8e%GUmc5d=j}?H=Dq+Fz|%@h=T{7( z3{E`f=}6Yt-7BA~o%I!-i#BDGTIvPzO6gQnUWq@-oR{ImHNw0|>Od|EGt)NtsDOd^b-a|9ca)6q=iW<=t*nufablI1 zMU+suFM_ag%ZkZHB)#w!;5ket?Nn2I0^sCFBxR#m1uutDiKt7wBUdn0_pyk8| zKiazls1gI(Zt7);HXvAU;*C`%ChqGcO%rjl47)ortuqs=Lv8kb&L^^NAJ(YUXcGC^ z&TDBFCH+YSJoKZ*`XLHc3EB^QcqIN@7R~}8&5~iaph+(ha-RccT&aFO#;-{C`Rhrx z$3b=`UA*yIF9W9Qu67$}S^}0+2vdcV8GO6M$v={{1xjQV#&k6Y`i)4pF-RYpY8iP~ zi)vNjU6bN}tcV#Fs^zK6Vh_L5k5M01phaA7(9xaU0RHv6`Wj#tUC5X&%!_qKQV0(7 zfnir!08tD!NIl7Vw);Rq81!)$)JRGW=x40!h3{!HIk08i-3^W!^qdqhWy01S-98>w zC1#*AHrRKMq<`6Q&4oRc^uCiHUN{YpHnP5LGS6@;$kVvezgw7R^-EIl(!gPA<8;GOzp*Z->v|c{=n?aZkjUoP>tL!Y zkH_P%kB#Or$HhgKJ3>88aA-(zMD2WChE2nnz59k5%+907L?BKa*_emG(vo|ay0@FcuEEcQY zts*IrtCjjpj=Or9M3!&s98;W@MUF6WR?{)(wBJeH#weu6Uq!8x?+q12D)Byx_z|x3 zw1mK2^Y`WX>R0i_i^dL9Vi30u()I4r zx5KiqdF?T5{rY108DmRK5L6N&GFUn25+sLlhI8f%KJ%$c&#(HmQf@!PE9*qG`=B3( zv~l^dUCv&G=^h7b=V}IzX(v{Q_>N4@dia#VUqHAQAoSlEwGTX|f?-yXg~T-`a!1`G zi}dTFQwPn@lj1%Y6g}^B&&qP{e_V366JPN%v$s@W_WQAk_mCEY#pXcz+i@kI!;(Fq zOguAQEi|s7UHTnMl@Wm4zm;glO%5zVD;)KfbtUOT?|$gqa(Tt#PVFHVrgzEf1hRg_ 
zm-N|4i7s(RJG3^EBP9KGsNThp)k8F2x)p#&hF8!}(wK$nn#>}2`dgxAWbLnOt*aCt z*;LOTM}ucH5cOF^<JtZ!&exOGBD8{wHH>FLM38yc5chZ_6|qo zYWT=Qj-|K>9Vrc0a1__t?>3Ave^iL_+kBr~&3R47-q0RuQ)=E3o#dfl9-AZVw~A|AGgs#HRA23HQss%mmm}V%RC_M^VG<{kO5MvAcll`& zk|>A1PrPzr>dN&!@$o7`zBPie(smrHzj?!oWq)x-sxp*i>3wj&l1^wGK=;{Gm#>46 zrURL=_yA!y-nRB5N~^gRYZB<*=mn7A4C8wn@hL&3Cv58>@9ty95NC~ikD)stg5{d} z3hGfVq4aNMNM9%1s2>Gw17nFE=sCVA0>k3x`Cr6QoMu3R> z?U$0hqB3KV9{`=H(An^E(<7^f9avi0&-nc4G5)Fs=R4`Z=L2+Vdm1n&@6ksbUDjlX zMi4W1Bf5EAx_$T+^F6GAI+<>PSQbbQe@yk<79MHbu1Z;FgTM2Icg~CETi3%bmjFQ? z2N|95{-$W5KpJ?kFG(S&P2XE&of)}K-hS*|3Sg)huyBlk7h`!k6`1u?CBI~4-9aYoVt~j43Xd@ z*?cXzj!ojt51yRPxlV?Z4;wsC^si#nI>a?f4`i!U z@xGYbFb6PUbqjYC28EECsvnC2L6yE}%oyp;VbMK%%VD2yrnEIN)Jm~r6rB%Asv@dl zD4`e^t$3=v`-Go~6e)YVXmBa~rj&`>;srxBo>`@?lb!_(n!jw;GW6(*)rVRH-ODXn zPj^wDb5H;q0CbDLPM3nv9S(Va!}yjk(F_>uM(b7i6dx&4TVjXvBvF!Rx3cM9FIRR4 z*mV)Fje4(T!S9*@5tn)?`S;)scB^qY`SMrT)`CHb+?IVF*vfh&#YV@s!7F~8$6Ca4 z(P39Z-uO8^&Ywrk@lSBX*Q?kFTaZvF{ZnxmR0FD^{AP+4d&>UO`Q&*uoPAGD-Jiu*`7(reFmdewmrGNC zfn&SYz#&Z|U4euLd+^q}bJ@Cn!;zJBOwr6Z*0Ks4eIW*G!e;a420f`YSJ-oiS#1g; zbiqq>&nA=qCG&c+#w<$8aV+H4iV`{yNV+#7vnQLWnVE0e=hC7Ic*a0P)=!!KMQJC5 z{hkhRuPI~F0CKFc#X>$9ahjJ1Rqf@K*4jWG5%p)nnn9QH!kP=0r%{-+Vx%u2X0OI3_(bcM^P;*^y>ju)9uz zRGTdg0bk!U3YYc9kin>=swP&MS&7PXd6#aI1)gv%?#Lk~n`K0Up#ZMWym$=w)n%Ty z``5CnxHMsYa{nBw6zTxlM(H}9&XuJCpbk=`vn%tSe4`UiHBgCon0Kqq5)eNo93=Kl zSSZqjm_eCxfnFz0%p&$xXn4EYvVXH`d0u#Wnjg(HW1iM0G6cAv>T)HhbwB}+K!&o* zR7qJRQ{$=0A|>OjV#mP!>v*whMKWGPSNG(~P61n`fII8ijXic+YhxpTJ9^Aq{w8>Z zxz{kiIJFb<%7EU2KCAOn=!X~hv8{1jn}y~N1y6V#G87u2&%iVI1u((!!)rHZz0 zI^a7-lPjEJV@lyvV5EN-5;`lpJHBr6j+Qry9AA?#g5jI`cxVDT2h_WVdg%+fQ5!_b zkh;VC-CV7J9fI^TWpno#)MwZ3C)C!aAj_aL6Q@sW=lSm z_cnTTO<>doI!Cs#2#E)up%G}M^TZfi-ZNWdq%FBl3dv3C1QF1860Go{KYzQ)#D~MV z)Pq&lEgP6mRI?bcC7zlQ)6pwnoXG6p;St>XDhYQyhRm#&IoJz^SVHw0e|wN3A_PgJJDeY& z5&*nX4_;)-Cv7JK1RFD-7PBKidI|#to-)BXB9W7+wfT6x{fm8oN*E&PjBiIlw*f=C z=SH*lJNmcRvutO$neA_51_Z>)mmAv{g4wJd1gnHK&9~fzAxlM1!RVLH2@!nIP(PB7 z$}eTU$<0t_>CQuDM@#{H{=1qp0n7ffcb>!}x*G5amE)?Q 
zs*JlED^)KSJJwH^e1scuW$0q&`qquEpSiZiLS&-|Ajvn-tiw?!v`3YA8LQ!g68j&X z@^w@|k}P;%n3MX?kYe0z=b8|W%Eonn1 zUxJo(Lv26XeP3)Esv4S#6gvtQqu)Wb`LYarF8qfwi0;YipOB54vN;J`e%N7q_%6q( zkVUe3EVbV7qS5}M4&r|7^Y&hm35l`wp#8$U)u7s-?4+%MSCpASoPaA>97Rl(Nbm5N zdoQp6O#Dn~L&VDBt)Tf#LKfQ|*|Pv0@_1$+%n&i4x-zAcnVd2S6OrWX6iq~kTlfe% zP!rScJ`9+d2zHnRk0aL2hv z_*aJI8(T_mNzP~UpnN=!RV(Ih{~y?fp_6XF&DT-r7InLe;r_ZvE$)&v`IQWKoJ|}c z({BKe%ZbY$U<7@Q)oknKB)mhTkOoUgqXr1*6U6_La3t!H<`MLdmKsNqwQP&FW1@bO z4zmaRq-!FS0G@6{wB&6q?n)Hd0g!*%-8VJ@cY-`#x`qL2xAOZPi(GODd^WoMB@q1o5N5cgKD=ipX$}g2_EMGeBa+?@hCVHn z?PyhtI_)XFTTl9EJvVMm#LjN6WayA%_VJe?dmqySSq2rETw0?9Y(|F1FCu+2KELaI zwJezGc3dN=_%&AHJLB*^2IBH*VJIylx@4u7^;RI~t|xJNs0FS~x5)k{%S8}(Ppuif zl>6|iynL!N#+|=HDkWw-b%~;sUh$h*iYbpfkFeR45pRUylLbCJyARY`g~v$a zx!heoryBHFrCHfKYB`D+<;ffP+-d(07eJ)4=W}5Uf~6f8yku)1D16KsEn$X+O9-vG zfg7F>UdUi3dtrCj`C)6@&s121jiaa*oCi|396V&7RMt@|10Jg%7LPAu@QBJgY`j=Y z<7(A@MR^KsF;3vwnK!*Zj7;#49}yHflh-o;vU)p3k~ zcxFq?c-~zEr@y0cX1TbzFdC0#1O~orbYk->VhVi8N0A~#i0qAtGH2?DrMDYoD!v-R z1x2BH{vB6QZ2K?L$pT&@qCG8?=Efa!ec8-~%E2)U%ugv!;*MTpJUy%PvldZ&{h;s} zR3$&ISMPMMYJSQxplx|*VgBN)&OB^fzjk@^Y1rCl-*nF} z8)tXIiglq5RadS%r{B?#~zk;S4yVtp}j6 z4F(z->QIA&%;he$%>=n5`Moq#%sZ$%Gm&0i2sL8nFcONlgs? 
z-0L~Ls_4OCw)^*Un}P@rKx*BA7RqBEw1R*tO+sE9<`DtijJFPP^SDqFUZpd~UOleJ zZHr)7JH_vZsek1#7mHVokXywZ$AYov(Db)_+OauvdSxg^u#sND+qL(Bl_s9` zmtP*-uyxMdoO%4`{0Ro29`(_yOVkAnPPXf?lSCtw{g@rDT=7=Pv@h&hhC>iFILZ+1g!CLCi zfAOjSKpX(46JYdTI~+EOsTSr~Vs2ArXa??xEXVON#2p7%xeKaUF+7|kvrvr9UiW~> zN#asJm-dvh@y8{5X+X-(Ec6Y-`a_*3OW7NI&zo9eg*wNAkNR(ZCnr9ti^%#M_u$xt z-Z6iUT6WRx@*<)k=5eU)E^DIfa9BJ({!`WOSF_FAU%A5{qTdMhnt&f6_S-+D`+)Q# zj&~GCB&aF6%+eu~RjGtEZ^{UV{-ngaL^Ew9crIk|<3-oRjk(^RhJ2}JD$?3Qrh}5L zn4(_{g}*Fxfy&B!lukc*ahPpkg6DZ~KI7Ei{fNO~>~j~5q&c4Tle*)8wkNTlWxK~U z6=m=IlB;Z1a96%cmJLxsRb+cwQ?YSMpjA;v>mzP?)jxzT`@)$Vo)Z!nh`p129`i); zQ3QtY_eUFaJu2K)56Skv#jG=S25wBp7%>Fg@r!A<(;oJVAwrosDpFB@^~ooo$xaTgdgYnvE*1+(2HjI5{RK=WT=pU*_sj%e^HA|~_}kkq&s7FA^(rS(1@EAjiVeI_Yeh&>MGIyYy( zuo!clkhc0}+H1{?X#73_#yTfvusM4aFUR>S+)MqylGWX+mpq2&<>$XnYT7?Qr9yf+ z?F3E^0j*0}@`3$An;2K^IqkC4)!wVRjY_rh6^1|t-h?i4lBUHp7*4?_vBFr=b&`Jk zWWtjtCEXXf>r(9~pHn1Qh*PSoQi*Eb)I$p<8^zg*JQ`L*AFm~e`x;{i;5(5Q+dA@O zrQK>CjgM>1?QKXZ&ps#mw&OU~U7XMjQo&URAT-7-DH%%P{WM~=A83>U2}43KvbLAo zte%omhAM=`iV@We%1CF+i_3jWCZB&f#;tkqdF%c^QG4Z$SLd`; zAC`}^n7hY?g7w%V9kh_3&TQR3_@HqBi5`+gTa5LC4^@E00fj5FQM}i7-p8zgJ5@k% zCx7JT$J|M`EcH}JLe&n6($N3I*jvU$`EC8ff;1us3WCz5q@sW{f`EX8C^?i!;|$&1 zA*GbG3W&te6SOGZAt21qjC7auv-zEK?)!MZcs~C(Fmrvbxvss}UVH78-z7>?uE{$q zqW(fiRg>i!Ij8YlZ~Buv_dl9b@g#H4(r%Md^YpxbdN&Jw1#97|i(_(VKWx+c?NF#d~U*eD0X z!0GWp9A-_&l;hgc*}Nk4PkO@qf&j!blm@Y_K>GVTRa zTOV7y!jSy)W0}dKXjz8O3e1QqQ)`dTBh_D@+vZ}JV$At$d_HR-Z)|svvFH8N zwlR`7E!$nO8gSz#%qRDBQ5}1aUv@}y84%mXc(2`HS^@sozKR!ieCzTYu48eJ=yhmj z8pztX9wyrp35iPON-sP6cs~~HYc>4+mw}iXhW8~CD{Te??D}l3Rx{L~pZ}r0UKWJ( zH9gReH<-E^1T#4VZ(e=212dzK^f3D!iMA*xe&Tlp-_)IhVm7CZRpXn}i)7czm(qp& z^lE(@56wU(QkqvtcbsMx*Z|k#7OQ{vX08OGKG`=rQAsY#acJ%8`5kc0(&DEbUAJo| zlC7Y*M}2#VV4E&YGx9gH(t46S_H$Eo%A1UqYp9EaOtVY)GR$1g9?~o?4sVOXG8!fJ zB{&%RUS%k7KAs(%P8C=~H!|1rZ!KGRQ#+~#Al?epw982TsZVR>4}EHzqjxRPt-u~f z^~O&c=NDcquS4`%NO_&rh3V<>BcnBrILB=jyBDlG@(HgNlOKnLN8{3-adc;h4I#p} zC3m9fY+I;Np;Z&gG*!=@ax7tAa6ep{jka)~RWn)lO?s&ad{zCU7rt0cUXYx@PkC4z 
z2{EtJOVWXECpX~VS6C!20!0t9ul_+B{Az>OB{vLw15RQijCHT91O%)gm{WKF3_ z>qNK4koZfBH+8-bCUYN&Cw$t;Pnzn>VTy#f*ld2~2RgRi%&YeDD^Z*QZ&wpElHt0Y z{WQ4`f`P8L{cM^uI&6J+X{d|G&x8xMMaA;Xq`kIO)6@>Q+3{RJ-?>^A!q8;>!B3FR zc`B%B`)euXuIif_dSX_Kl|cFAfNwoAt2GSb4>1wLq;3XoULXm*t%Ows+@Jay@l01`Qs^+RV>$=Lr4j=9UASRTPhS!BRHRmP^Hmy# zP>)Qd%N|^W&-qYwRin>4-aiyeHtS0jc1+*TI-g@cUkq}lh4fk#=Xhz|Ai)9&tP$PWob)3{~+LHv4{xuc7SvnJE zVe(np9_^=9#ZMuH!*r1l{53=W(>+e(;jD06AUf}56(L`jpjF4s$9Z~gzad(eTH=Xw z{e3=7tt6p2inK(hzvW1N%XM8q zW801J?Ns;q3V3+!hH*dpK!r6~jIBD*H|UoDg#-!)LlXHbTBh^>z;*7wMo8BgRhwKWQaYn5(~4 z&-FgJ4{VI84s8zm_@8%1JL^jUe;kl4Ze2(d{(q#24N%H|XvFCdEYk!1eC2Q%SGPLM z0*ic^`7r*qY1VamDYBUk?(8>u4fH}p->pl&P%Lg{slIn)Ij@>Fr+Rb&$;7K-&jKTS z@iY=7Ab#hU_XBv0V4Cl3BEo=?$}*dvB@);~B&!6d}RGp^cD zD+hiN2w{JcV}B7;C2zplCv5n}phFWwK9JHoUA{h)z%Ks)V3XH~BW%~woNybU->pA~UwF;|n!mz4s zMv#yB9!yxVB$qv$)gi&5>x?~S2vHouFlJ2ulU?F=9CgyCoo46)lPAa?H2f$~m-604 zI!85yQu@#IN6}>vm&!=JYFq6gd494Za7o+nbc}X5b$95lBPNpQpaW!hF3S(CdB4wc zm7+#X@L2iRf%8}3D~P;ZZ;e&c7XJeBReF81TellI<6bJ5MGXO0(nm*jH4?ROf_|Fp@7 zTr}gvBr0xyKD?EAb8ClZ*ZaM?Z~|qq=FNxJnsKYPkFi{ytU3I?Ake#aC;!c~Zy23# z#z6K>ke!dwUiZxd)neakE-JpZBY>dytm84@0@Cg)F@gE&aDSu!x$vH*DgWzJ7EOxT zWX=@6L$!)K_vuwszp)b~vi=AjItVh;jBB&bxQ|x&!uqfsYHu%^{im5b=n1a%mOZwI z1|T>xJ;={=Gr$&LkzlNtXl5B{H-G*JNLk36s7iqI)dect(QN|vnl202q@pz}O`JM7 zduz*-6zxQ1qm_cwTJmsAY!PYkbOR4@)F_+WW6;3SS~2?OH7WQ-X$Uu;O(01QL%Z|` zPfpnRc1q(1P*gA|XglgoqOgVB0O-Nw#-Sd};oS%OZC&Z4vGIqwMi14}Zb!=Kzl;OA z(sVNj(xSrMt$<1$@kIUl0(y#^PT(Q)VY||-TQ#)(k!(N44{|SN^(i|HR2WaPKjR3! 
zlDVGyF~jLLO?rG?``ANa;f~)W8uPc%{c!^ANl0mj z+zH)6$6IoGsEcx;`U@7WH@eSXdXbDv2n5596Q|cJvBJk#d`Mafm2uHit~bar%g2dB z=H!p4gs3&t%S}KSUFx%Wm$bBo)ZR;!3ucOIDpJvRK!L8Kk16~4`#*c5ChlEp-W-Ot zc+@+fRA;sPme4L_>PW||Kj%)lk-&3h*7tRJeYLNhJkOh!1EcQRds_EC{KxqpHbApEsWE zmEKxUdQ_D6@;ffY1jdpycg1WaH$goLgF|w9{yH=c`02m=v!Kc6!>-)bkDoq$QesH% z-+9)o?_zcr#3pbTiCC{A?IIMCXSoT3YeDbo2_etMGaG_iH2egAct$1;Ne640Lzn7RFP}8hC-N*Fh0sCO^p2qhVdDLEp3=U6WvDt7{f9+*WPKB<;nHbEg?n2>GNf;Z+OH6IG(4J0`ax&27kg*_&sN; zpGuoKriI$%vQFxOhNQ`SUOrzuRp$b)RL|Zz7`dCDL4A8S0eO?2N@hc@F=sFOw{!b! z8Cf3Cs;jBkdN^fDuC;|ne^XlYJL;qFyD}NfI+$E%tBe&Y2vBs-7%(1C2f*$`Qf@nj z&TFT~BN_zdq0&V+@!Db4INHOpIhsEIcIdh1{1}e@D*e%f$9@zp37zLt6_}{&;t!x# zydxei+}zBch!z;t*pUOxIHxwuN#_mcwF_@h_fy$0kCRHu_|8qd?Iz4x6m0nOxv>c2 zzV1KnKjoaVZ+*18G%={)e{Za1{_YB4B)B5`zoq|3yIs>RQD5K}_II#O(l7rn>6zak(sey=qdrx_PW z%RjIAb1-=&A{Sn)7v=Ti~G_RqwsZGF;Nk-K-O z7eeb=Ds4fJM~8Wm5R!o&*36r`t@KUpo5i~na%5(p6P&O!nt*Lg3^Mtf5U>XcWJa&s|$*nKsKSL#@2Wg}z# z7Of=r+dD7ecm2VnbJo1_8Y7hGfcX)}l2fNxPOfiOB;@V`4(^a;(Elar=*?l*FPz6= zNV959DZ00q%+$Z>to9(h;vsr%2cS=iQ;X-`;Pu?5 zPini1T1&38N6&=)a<5ojpgAO)k7B8mPe^_*&2UpQ67pE_0*~t>>Z0h+zVW+(%*8=< z{h~OBOuuIoG}dXsEP%SYx>N~w*zb1vn$Y%}lI*OBWb}q~2KT0D+GN7=l2*IMc^+e( zq@?!>G&remw~VmENTI5#Z<~t5I*N*a?Z|!zjMSqAKEDd3UhVFDX1>x7b7X4iqM(11 zb=_p8$kSw?*1B6j_-B)8xUOXx5%SEEGi8ICNTWD*vaQKgM@`eq<{XEI=I&Zjn~Q4i zL$@^KZ!vi#&t;uIq^G{vmGV!ymJ;Y@-|;L2b^oZWc`47GoWzzS#Nj<#SHrtivel<> z92FurLYryY^Lgz{bo}D83|hG`I<5S#x^=*e{rJV^LI)DPD0Q|+M;Q=b5P=X7VFZ6q z+Nsx;8YLSQewWEM)GLkFiGH)}^}UFE%l5(K@8Pj;F|kbiH35z~5et@BM;puqA=K*> z2M&9_Z)$!gr`O`Nz34*EzE;Wbl7^*c)U(@c^BkNbA4+}K>;^uk>`A~<{q^Y;ulC+@ z>~i*PzNTa@t}K_7Q}bb%1Ga^gT>hFP#2zBoR6Y(efIh-GG|enP(B<9;kt+s;InFPm=9^bsr+g1$;6w-Hk3#SxT9^{cw#t57eXw{=HrCh7 z#DrjdFU?xiZ1jxk-gA~@1q{FEr|@oH;}TZqpo|dH$9}*ldt1m<3Pkr_5Kv(DobswN zz#fkm1XX=bJ z)20=)mp_4{s8By+7JiPxVn%N0r}X*lR}#*A0CjDL12$o+ya!D0Y7Bu^tInA{2VDJ$=`#zmn`{~E@j0-l&|NmtZ z11gkk*SEBQ*BAYNRph(?x`zljdn{41CZXmJU$gDMDWDM7pxFud$k zfNprV285W6Q~A0=R>?ebb>5@%G_ASVgs+h)(m*y5x9B6}MFO9vJ$^{_k<53(jA9HA 
zU%y3xxX*SoE`>hWEok++8NnE9na1FIv*hQ@zy}>_uIrK}NsW4mPpRLIkW=P!`N}E` zkP!M9Lvr;Hk&xc5kfC8+6X@nL?@0d_sk!I}HH1oJI6R`K>XIj$KR$7kV^~>Ok3VN2 z-#;pC=6B)`?X;unkMZVYh<~=c=sz1759}n2X?`)peZPU z(~lnv6&3vpYQ0^x;o3xtyJ4$iR@rOUy7>~5*F2VllvAE?9pmv;+SZ;5Z8g+^QsuX; z!Nanq%%XQMs_V=AfMYapG8!;m{o%6h#Kq2hf8iumb~*N#M6kgK-I7^zio#Qj$KO9A zH4aVT-zLdpxNOu;*Nz`i%JKT}{4p`(e3fYq(<9}!a!Ts!_*~b^LLWb9@*t@Q&gW6> zmbZz_Qun$a)byuI&7#(45?C(+ByUdy<`cfDc*HoC=IQ;Yq%uxr?P?H6@ch9VaBui5ZF zvzo_0-Nq-)+d^?Koftc|+*mjGvy{*v?;^_u%7~#&=2R6P<<69ktEdX3dw*>ZQ>cA79A!1}FmJE` z97RNiJc?b``Hxdt?ycEJuv)&c#^kZ5;3YqKc#o+_Imx$tE-AaFGa3ZuZ9YllKK zE?l!u4N`4U%rNXxhz=ga1WfEou`W-R!6}%9-fisPp?XM2$szeRf=R>Nk;vo85oj6rh?`98a!)8Z z9fRcawwPNBL>P||xrfI`{(^mgG{DbgAJ2s)R$3*ECfHy2L3b}pYS&IA+wda=)!HN@9h;Q6nrH{uJK z{zJQ82z-^5x~MUgGDe8sbwhzCA<`!7TaHcb}%7Ah2>WkCnnj{gSx`1oH?hSwJN zz!t@<2s*Cu0EB|Ive5_s0IT(5E<9Z*lUMM6!(_?Y%K`m=(nS~{aN*8T18;|Ojr*<| z{u{4J`=6jl)Wv>RDBF7p?)e84MmF{03Ri#^8hqc?2ftdYOljD@0kT@QZ?4e)GpI#? zCFO|GEv*}YTx3rIvB7lla;DO>Wc>?U`~vvJ6HQiMbp^a@ki2O49{`j8ps0LQ_#cps z|0ziO=j~4}@L2F8&blu^ZYqChIsY5?Wbwj@d!Ai?9FR~{eDuiA(ha%sMO04p=~Mhw zxjS09dYVyR10LR>3c7yd-frx!fN6lPg25e~>re3Tk}UCT9_P}qgvOKEVB9;sdmng< zR;E*|_V#9GdM~)ktp9{G_FeJYo~m3}T5ftkKqY$#55QRlOe5o!!ky(W1D~#|P+(n(Y^l>n`wA%d?`_$bM8#l2c*#C!>U~T}<|$M}t3s zW?+!~$ZHE;*JN3A@zdW_C?BO5&b4af&Wh+3D%50mU(&7W94TtZtT;61fxX>*&}6~> z&wT##hZfs#cm$!pfn2hMBzzi7?jr`{DuV?c)rg-`=AFe&yG=Z;8QV)yaiYD2H{Nw*a9y!Ei9N?@H7pLq+i#{zD2n*@&Fjsu0w4@IvBqD+#3OhaFr6HS z2c{aI8DN+6F17E4hJ*wu}0M(l5@UTWgcz91NjL|{#gc;w=jDwX!$IJgi2 z3GBf=kN@n6e-msPTs>sD*d$}5{c|D`qR8dPU{b`_jphDp3*Zq%8G@_rGM5ac@kz}2 z!Ayi1Z@Gc>SQGf}{C{Q?1d+ZBo-XT#zqjZ5US)uXRYD56%mx-9G?@GUtP>BIecQ#0 z+SR`WK_H8cD>uaD17nC| z!m#}>slcb1>nv!B1+#eo3u`%R0IYyWxXKf8P}9XxodPVW}D zx=DbYH^0UMlcHQdX5Cb7TDyEOF%>6Oc_G05>jko!g!`whXWPG5?3FpF9c36EyI@z? 
zePjM5QmpXEVTfrRHXtFYylDYrv7mdvV*M>s`pKxUVQ$vd_rOc_1EL_=fp2XN{%5EE z*US35-}}yI_&L;`3XQF=8+}QPQsJN;A{+Wjgo0;s$Ezra zGb^!Oi&V_nUOy(sO`UrwgF0Y+&#h2hi786Vm)ODvHu2l@0#z2NgB zXpf|P*cFx7_c$5HwB$5xM{HlH(<@*A>>ZN-w;p; zWE@vg>&K14dc_f;DPXl9r+&?wdqq;yi~4;gMDqbLFzp*T2$)u1zJm3y4MN2R&IKw8 zu9P5IL4vDNt9IMLbB+ALM8jy=k{M ztNsfP#`&5k$V78!6By9+N)UE;PMcS^BS!R8ZoTFhwPZx_X9S@FIC7JOw!eS|OWSYi z!1MS13?!k#(zeSwFp&J=yJykqA$E+NGsi$zumKkG4kIl}w1epSYczF6s-Bd6 z8yP8gBVCFzSb<`$VcoD8XL1-#+Kkr?6zvs^*|FecO)c{s|lb~M;S z-r-jBbKZVuE;N6C;7rwD3lfM21Q4UF&(#>&EP|`z9)i`IpXd11hHdpFzUuVYQbGjl zT=^-g)Z|`rGgp$0S?3y#mg|t(6+&~KQ@BYO-2Fvhfqe3L(!>SRFt{?l!Z0wZh*=biK$?_ESMNJFh3ZzHpAt@~Qo=OJ$Uwjd70{G&@%lRR}_Fab_;rvcey)LljAzp>pCtO<-AN#PExu#cF z2q`ri!5`IA)>cX?*6y(-g#9dZ7=4NTVkiU2^m)epa7egLYN?)g!SxgLNTH}gt!;n+tnIdP=++GsVER@LcF=3Z z)T84Z=qSTA$8?vKFToJeL%kHcrLU{5_0W1v(fS~|NdpJFHdPA()PAQ4v$u-+qpfG& zHzPu$FtS%(+*5x)Epl_NS^w{Y>k#lwPrgW-9E%J-Ks3l$Enl$8tEkZG?BW3{#Z)y8 zMRFL~6b%$gyz8DGv4fh3zgX%Lz8E+bdc^KZJTa9e{GxFl=5yR%T3d#!t25-` zDdf1O5uRsZVZkt&tWv8pVbDJAt4v%flKLwxvWrhiCA@NJa|B=Cx2fK;6vA1ey<_my zu5ajgZP6ZcZShN7rDa9s%TAsE;e6MGY#ZzJpBXB=4)wTw%*NFxeGmgq5B65y#tTD2 zt^MlXJFg#@9AU7h)gRet|8z=Aw9{KLvL)f<5h+kZzORT)k$3&3FS)N2ef=7*$MJ2i%D#cnr!hF=gBfED4#<2Ec}aq||G{+H9@*wKI5nArW{{T(0qQZMSnq{u z@xK{+x6I4Hy5Jjp>VaEznuKEf3p&&_h zeC_Z`LQ$|K5?BJm7lYWut1l4e@I6HbDZ8O0#B8A*oUY^myZ;HVgxns^CMsp4Z&^fu zI`7p1A`P?qcu(gt#=Owz@&uaPeyzNvZR*##_x1>6!?i@mn(`}}zv>$4<{=f9$Yb-7 z>wk|^_8T`|5G31lj33{V_R&F*yv1ndije}8Pv_Gs>z~@cl9Q9Gr$@Yg8pPWj#uFCO z));uKa4`)TBVYs2;v7rOad3H9G8Do#9&>BruHOF+nNY!QsLg@6eabtjVfN_A4tZyI#BFOOB9$ z9GbEhY-oyK0gg3)F{CUL&}DZ5x>b9yuG|!PfOchlqr?dnMkQT20Xhb!ydF^ux~o%x%2MkU z1kpBKN&&i1SQ@UeQ~^R|QlA%%pcBqjv%!R{Z@|aG zIh$cEvCHFb^%~)*yP|i~>I-I^lpO31M9)uBoF-gR0o6;s=K=UN~dt9kG(!8 zM8Tg8cRIFh@MKR}Kni%2+f6e+6o08u8K2`X`R=wk{Yd}~hckRGwc*8ws`h7DJ4?Y- zRlAHd?Oqk_)$reqtS*YP^{@+EN>kvqbk$Gbx7aOl)kZG44q?El`#f}oi+59Ugu>p)p(r7F zNxF{A7}Gm0td|X-+|r-5>`9SBX&9JR490s|3;1u>2#y;17Gb@*>L*N?`WJT5M#SgR zCdV}^L*-t>Jf5>#jis;ojeDQp} 
z0<+N2kK=C~-Ebs>+`udP_w%|PY+fw>@!ax>;+xQX!w>>-y!&W-03HgB@3JI5T2z=J4hz3h87iVvY0Kg#g^JP1X=xuG7GD?J z!CvMpY=Ku!8FRSO?Cp&^>HDwVgCo^!^PpO`MRvJ`b0NQCR>wPN(1oH0hpY8W0?s$+30(z+Bf!NoXl z8CH%YK6Qjj@^c_3N$$GyVKi#w+|`nC#zW~p8hq10Qr3B0{$)QC6O;VWkNHf$bx5aV z&IrcVA7CU_wTpB*J7grcUd)ifWRk4mH@>2c-iFZq*KrHHK%eQ_xzN4~YodnVD&CAK zRs4v}=J%4nbR_H`QFFz5(SSENETw}M z5}V?fULwfJ?>P3R^U>YW!;Ult-S)y8%w0sUM^oOhhbg}vQlZz8;F2QMS5m^aOE67H z)cmWbfJ!4uNld&rRt(9_=1S@xi|g%ea|u?t9?)XP->V$dsLANltbWlN z8A7hiuC*EO?I#_k)l2Wh9?wU0*(1XV%YjX(`ZXAnJ>`_HdN`4^+@l~PzVV_67R-dr za&HP9e{KHtSSZ7Bxnq5@)~|8Bp3?g$ONZ1?tkUGKL!IR*{ulvjqwyfNwz;)c@sxbr z^}O0U{`RBMz-yaajFk;YWOs&Ves@}uA9jfTu-;Fjjqm4Zr*H_ourP6?HR9)0fp=mX zrHSuuC3igzU*qd}o3nlLR->!RCj2R+%QjMxTFB{_Ljk-Xv10o}G-G)KFr^}M$(4(W z?D8Hammm?FNU6z1W7Id~lVFTqaj~jHdjAMfvY@Aaf-b9FpdC1dc?_U$`v)*1ou8A&e|>y1P7MWGlPe684l`j@Q&St~MydSJp|Hm{C zYc%VHr7gHKM5Sk<$~ik|>3twK8SS`>KN+I4YcK$-Q47|_vr7++jrT_4 z0=*Y`V0)7Cx3I<@9eEWqVYghbAsY}=62EJWwDtse7ioep>ulL)8PoeiF?yH(qR1T) zU>}*?ax4+Og8y2SfqO;c0|&E!0ZWqb@b;9h@qi}cnGT;xfiJA9N&{MZg;Db9@p$X} zru{m5k*>h|=j_B?&B`mvxw+3pDkjmx!REt-!$Ow3dOc4toH|-uI-bX8$Igf~IrvYV zl6-&G(&3tproqADjTEk2KIEsmlg`Fnzpp}Eh_&xzW3H%7EWv9>3a74z83^UJ*@D>C zYViVnW#vfk9>S)YqzOAd3@3U|h*uEZ;{%&1^PU`tKFCcdi3f1Bg)CtMYU*E7o$dr< z3YTrGXTD_D1Xi@Rk;|NDiu`hrC-xuhmY6T#&emIC*}aZc|M2MfG776A^Q>q4J9%GM zMV{(tweNwb2$QHfGOY&DYmgG3Vv@zl+?_q;o7B4iGL*i zytY}=GKt!ILyi3OBdl7x_jn+5J=jh|pn4Ji01r9!Y!{bN|9WqK^0+^LqX4v1xV7@P zhx!$5yOM*1-m!xLgpl{=vJJhsON7pfu${KCore9vpET%;5^cY(WPZqkpZB-}z~`(m z@1Q=|*vy(d&m9HHl&riu#fw8*g@uKNp>x}~E*YEAx^UTS0@L4|M83 zcjzBl$9H`-EC1k!q^WRfvAtUqVI2Vx_(2pi7$IDcRC*sC0FUonCi6V2PT?H*LX)Fy zH*C+xlOUS@br^9pG4guc9n0edCmY?aM0(UF+Qh#(Mo=4K;^I81uYTVRU0Q!sc(O2N zHUBjy&asb<-{8(dmg^U9?2wwJ!!c&+#Gt`<@s+@(UPDw~o_OGLBXv8d9dmMiwvX0&%}%Vq3MQfc;IQzvM}O1$p8KR9q@?a-5HT{Qb$Na%KV#V# zb#EU)P$xa;6^C$XjJ<)o*3+f!o;RyYHrQcEHtG$r-X@oVFoxC@lz+?M+xXR2U>a#> z@`vm0`O5P0IX-9PwOxX+v(z`W@UuuHEo`h&Aw|H5tMO#GbUy4B%xUei{m6m)ud@g1 
zzC%^h7w~+cC+4jbb~6TFT~1wPk~7`LpD#qy59(jB;4RTB*Mgz96srNq&FFZq@ts{-+T7y{_9X|`W+`yWj=@k`yyTD(i zBqI6ZMk%k_F{zY!vn) z@KhkN&XW#9R@aFxNTq^KR(vK1EMXqRI{sHNVa(3&zIW!5dA@d-DoscBu^rV~6>S9s zS2X-c3>H855)oEON`JpXhu1#5-FrhI(S0(D{e?(6%a1VK4utNR3h*VGX8tNx1Gu%2 zy8{8pf^u!hh=|P$PD#inA!g#>Rc{?-^HCkw3x;OJF7qWNUW?t$`{R_K1~QdAJqw6x zW`nR~nJ0Wa_iY9?s(tzEmJK$X>am?!N^E&cJP-<7c7|!Ls7f1g=reJE zNpx;m$Fq*6GjxgcyBY58S$~|s`IfG$h3>4~_gCG4P%uPdfmfim*}G`RuM7UAb2mZ5 zv2T{MbPFXy=-^}b&dfX-Ez3z)Qc<>VY`${t7-@*Zpw%x+Pu4x%~(J=Ib zu?)f$N~s|6S}9iMjCfq!F_>v8`~F<3zzo#axBlcv*WI0{P70PQ2=@Bx?fMPoDcstG z`}t>}L)Tb+LXkDO3c49pc>fHMZ2;fz(#!AA)wo9%TvGZk35B@ zDljvOaJP;(I3Z9eMIy?P>8d=Vv$OWfingrU&el1ljyC4f_%VCIcsdgg1umGl`u>Ey zpxEAgc30N%8(v)_r??FNt@5LkCFeX(*Xq@i>OG6I>&@jvE7qoyG)GF*3;}4?etRKu*YChv?|#7+wzo{#!(GXBWKY zCDv9meGo|=X0~EWy|YE34lV~WRiJ|z#TQJbT3)7?05`_H*I1BZi^ zeJ7B!(^CIsVo9#r)taI=wG&q2Q|Wtys!l@`GD|*LPzO7It@}>%cTm|c)Fq5QGd32~ zuD9vhwYY2Ph!&qU(^7zt2k!C)IS3SRvZVjbNvzj5+0lC+#xWQis6|Pv-uc}TXizQg z@^!Dpu<}>-i;MhwfOwwA(0u#OTvf~`ejQH~_&?4Z$22WYk3mD?a4T|F%Bra&8`cdH#@)OLiOUl?aboIcX9sN)XK(j@FXd+~ZVl#VyHs?i?QI)u9M+#15c5sb z*|tRwpV*4RVfIc_DGI8zT|N}Lc6LR3&QdBP%u)(x2g_X*W_xVV)cE+gWa0L01zz(U zwXlwLlahocdBq4{nS*g}O3%5N(TDPt8%F7%Yh)2!lm?S=08UcV(U-H&EB2FeiHqWF>h-@Vx0*T{`hx_QO-2K0hU{Ue2M;Nv3KP&z!y@MM#MsXUsx9->|8 zswSwV>~WvJB)>b>ttdF_p-n}GXK(eoqvc18?{%SI3|*GLvERPGsGGh;r)7GFjaA05 z*z~#k9L4tFTS2s|$hb9o_4KT&#OQfTpa1Xihb|2tDW%Ut1v_VyZZ8>EQ1(_IF|W3~ zrLA5VBSOLEVs91k?VlOWe#nXKE<~ySKJ(gn#Q1AjkQ+GXMkX+GchfbJiPvizMQcqN z|2pz61;&&9xq%&T6m4|;_E|3EuWJ*YX33bYZ~Ou{{W3RtL~FIC??lm6K^;|_~Q0PrxvjzOecUL zlb{HP?&NN=nX!$!B_srtB%VbcA=iC3S`Cz+WoHTSdC!E>&3_DT#MNuN91MM7ygMXb z2(2|aDNZrb$xo=QTRk{3x=I_sE&N*1COvqdqUHDAcumgO9u(P!1qH-4ew7l?V+;KT zuT-Bh(^x)}YC{N@b#3fEmyKPdD&)n>ajhbY54}HLzJT1_+k>=g!AB55KV(kM`ots0 zyu@epYxlc{V?L7>lrRXaL6z$TQ*d26OTGnFBA5`M3ctkc)?y2f3!GXK_jPwiMynA8 zYLouQH$3OdW!ifs>^X7eA-_$Z!}o^^_2G?b3bpF$mh<~YJ3){MMFWQ;hkI{e)3cdnu6>8fQxak0y& z*P5_y$4Y5Ea>UiU39iQFhP^+vcK?s$aox_I%(wI!D3lo~2vHwAj~KPD&hvhgR701M 
zeKl%=oL)4=#1*mMIM8srrf&1Znc?cxJQEF8oCx-p;f_Zj$1<(!aNI9X*+t5&zQLAA zUuHnKqHpDXdhnJ0By@!#1HDh;xqpINT{)2 zE)3_lMp`rl5gMHS@jUoNo)1@XKT^&|uT9)y6rB<~x#v08BZ>nVtSf`b_nkA%JnBl8 z!BS-fK^jLX(>#litY3?Z2bl&Wy5>95t2a8MZgfppl_L=P4JR#`c~!**Sx0RT27lG5 zFr@1!Kf~O4GOJ9X2_3w*Cc$Xa?^oGnM3kbUV@((IL7k5_p<+8CQL*YiT&E3b{VQE1 z`RNPi;Gfo}>DI%>#P3Alg~OAT;)kiHY^_{kMLb1L{w-P0o=gy!*U(Tvbh{FjS2~Oy zSfQtpUy$EVzjJ+7vqzkkklRQEk4NK`jcPIc9Cv@pwsdJU2$QQbId0aLDZaNE%}=!c z>LSwb#{Y^8*0_t4numy|e&8SZa9ov8dlv39?jGlco7_LE;X2<2;cUtkN2P;|3QX?n zHa|hBu@vpDEawq=q-r$?PniE)99VbM7)GvB)E{CO8d;{qgie{y2$_9YWCqXK%R;9| z?mGnk<(^}q!pCb2f~>p-XTStTVX?-p^xf&fRA2qb(6Q*mBl&%xwyvL^HaeQ7*iLja z>c0&J&_L?Q!Oz9uzYe$3_;oSxE8k-mIzH^;(RC99AENU z%6zDmh1H&FiEYMD?0Qy*p%66*ArfJGa72IzBi2^7Xc(!}WcLbA{wAmAvp&4Rgw4V$ z6Pcxr<$A&ICBlG(4LgIOiNuM5*!W?{UC?Gu{rn`Kcs}mn^b`3~#5;CepGYoO{Lfssk-PMI zg>P%~D*{yJ6UqM#h5hr11RY?~wDxYOffv-jxzJ@Yub6yi`dANxma`1Y4Rqsj^C-+z^+3pHMwpiJ&&}L zwL31~$W#+g9{e~uIOFo6hm}DtDECr&v3`ym(op^2*j%+HFA)6 zd2Ds^rzYFweaTNs1<3*7M1*KKCG$Xa@2UTvddWWvGOp4n;(AipTCf$A?oBymkZ!8? z3Zr`sWl;Uu1iSZHOgi9uYd9U8fB%*Ee(lmSBnYDPo#OvElt~Cq$Y7Mx^ANkx1fVFm zPk4r})~pEF*i*35Hz6vV@d#Qy|Nbpj#24hA^ld}U9zfoF&YR?W zftoC-+fQ z?#xogA0Qwavu7Y)=#XkUw1`CyWpd7Fy zxK|bEjQW*N{?KT20*L>QC!la@a157HJ!;=CQ+qx>+OW)oAut!YmfGWSKAk<3ka)Tv z)7K%|=xV(nEyTS0lV?V^u?_+d{*_P$8h9?h_p|rWNTg1KXL}RKvR~zg!1Sl|^Ux&? 
zwb{tVvic1*aZ6t)6r_Ntrwa3)qVw(cyic=Y^&?8*gZD%V3B#9^>)0xs4$0ipxJ%rcgCnJbIbJ07`6d|Rb>ZM<%a4?#}Xsga+ro{Dr zH$2ewo{QSgZ_v~!HRLL{u14zjf7Ic1;J@d_PiuCK?n-`DIDCyaHOfxLc&)m~rfgk= zk!po@gr`YvQFPKn12n-?B{$OpiH7_Slluu!shQ0kwjNYDxbwYZFre5+JqWita zi=r?S(g&VmW(GA2E#E0+5_fvVx}g4tN#aYtKHOj5T0>2;ogXd8h##aMPOEya-kmyf zJCa!icvT7HeKLLZ^`ny}?}D!$e(^q;tlj9eU~Fva!n*y(Cm=Z1Y)4CFC7{!>BE5Jt zdtZrh$*(K^uz@*FYGv2A@U>e*;ed8}uFcw0Z-Y#`;qVfjQ4T=I-w@;k^Kla`{9O~|m zA4Y_vP$?ukg`p@ymaK&+*~glF8-@_FR)iviYOGlj#y*RE$-XaHXDr#5hDs>wbLPIk zOXIn&=X?M0`{%i?-#_=|{&ePi-sgSZ`#JB|r>DZBbQoFkaDVrtn8BAT6xB@jpL^Vm zJhQxnZ(+_}9PG)MbEe(9F}CnHe)9u6_*U=Ls}dT@JD-iuA)K`xpBob>pmdhtbINmq za6)NSu>Qh{f!bKi+>zvkOB1%_Yl7qEc`#fq;Y@!#nt8Wbz5S~Fpt4@#(;cD)6oG+KZUQ z+Trn3mlcrNS%-p;B}&U<*G_I#$QNVi1H9MP<~qfDR`ElM8EPCpBhT_T-&1KLw8lEC z!8gfq5me1+!+_!$t;&b`TNP(F_&q2}vy6*SdnZ!kLEDhtYLZVSH0$F>qh(A|g?{=_ z-htmJl^r@3Wl-d(#!R^_$Z)Cf^MzE_M~(<1I%kcJG97&Z`cksbx#bz#ztPJ^PL{o6 z{S$9%v&sa$W{bXTm!Cd6Kekne#w?q5jG#vz&<#)iCSWGxsp#ZhC1fAnJ5lDiK$#ai z*sg=DeWCh*sn6>ux)F6P#Kms%)Xs@9E5u^H<>)b^(cOfrn9kuk>>ciKroE`FOvM8V zvy&e+d`%J0g)Jevo-%pZ4Q5>8-lMp9qi84{!sS5RPL5Y{a@Wpvki7b&o||96eyHOq zLb+wf%GkNzV@?J8%J|N20|DaTFwZCQEPiYqK$B>TR4*#7LA+DTk<}J@#ro_{9j!4L zv&Z)6leRcE`~r8_T+Z(o`Sn!(-VWI5sM)rP^uMTux!qYd83s`bI}y)eF8@guz11x@ z^U$c`VYnYmQlfm!#=vCI=pOFA+s**Om9g;sulxb+l@kAjDEHmE?!kngvS%<-^ETe3 zr?{L%0YGtnrnm>q*Iqao_^1T&+4*x$jC*e|b^E35HBY*SRp1KeY66`|)rc&6p}(uk zk1MV|0I6dxJ#2qEWI*YJRO{Z(gr6u#kcU_I#tUr#p1y}Gv}x7IE>(z_3~9bA`*{8u zp|$(=kTaD`3j1(^+cRv`NTGF7&A6I}M)c`UF|11Sj3)2}slg?6OnmI)N-EgE~m zs`PgH94#EyE?^MvOj!W*u!pihM~ zg!4Sek((xNTgC$`rqGIGl6(uveh7dyC+v+6t^q3llb8GE8r(^|-N#9Q}3jS-wD!St2?5XI3@#^^&`g$Zs?$=W! 
z#(@S+H2n9oHDkg!w(r}eSd5vynrXvTNemSZBmfwF6}sXR-zH*R1J)VJzxGFt*Otqe ze|=r4pD>!Xs8Bwc`+d@X!Tn)u9`7N-q zoXnU0z9WDACh<$AHdXCPl-rzXTrT;(Ip~|cL8ZKxATCJQ%DCaSkmW; zb&T&VNBlBI9lMbo7d~oNeM6tnV7zl;MDz?3>$kB7TRtSRk=QgU;xHBB018cX` z-(BNGuc;g5>k3hLa@<2;HF4aFSAbJhbwMVR*Y3mmbSiB4qtA1}Ue3^#lAEH|a#L()+ozkB zrq#WtK+JpxxRFBRGkFjQ>%$8N)mDB@zU>Zn_8jk|F{{+l-_9ll7WjRo!n{3*AvlC> z23P)^rn3p&Txd1$HU4n1yok!Gw@(BpF;WWFXGyiYm2&$O-k5Pv|Na1y=H*<=Ft8BezIDOW z9U7vZ$cRdHA+3{giwXiG1DWQ-rhYHM_49yLP$BjY{5_dhSkdCQhWcQVv9sA=wDo?% z(EcHC3a=wkgVUg1!yO+VUl8FsoqfMb55EU6b1x;Dxfk1#&EVy}PEM}RcO~A}mN5Lpi_7{z@#A7Eecegb*DP7+*5$DgP5Z6fFmo2(eqyjc)G*iYPyB$%H1a!24!Y-zVtkA%k& z`%^xn3cmG(mTfzxExdo?U$+wESimhb&qNU;ZUYmqkNPtOmFAtpTjSc!CB^Q9;+S$n z?Cw~<&6oC!s@x8Cnr{Sbso7I2fnLZOS&|qF`Pg=MH;nBS*<3l5+Xh7 zn-U*xenXulJkbK(HQ|T5YvkyxPvN&8tIMuVG$3xjV{}H@RQny6OP}qc<`7aFF8`E7pf?sYO{d&w@N zPfQLZ0iK;`?bi@zx-P?h3T^of<>=D`+R{e0UA{iZtzEM~ZGVWJs)NM-@1+t$9T+1ouqXzl~h9Vx5fs&wBfpttT^51t! zJ>&az(~IDg_tEAmYg|N$jzNxtEL=WGUVDFY7}N#kH}8E-(- zaZBh5dGCVl6sTIyrxYkv!6L6Mc!~KC&f7c{JGD76DbFW4{)^f><(b>5cM-V97GthM z{C85Or>Oy z)5Ewgi{gK5dbYO-w1o>?SW5xl^9Y=UPRe!qMf#;uWjmx!Ij|3+$4_3gq+K3}L4|2h9MG?_Yc0>x*$aK@T~r+2LVGq# zKiSC(6!$Z|J=pgv(S^R!*N>0%j~1?axwy|0qa&N5<(S=uM6PYr+*zWu_L%TDUk&{B z`0e$8B&V4)Cwrl%59P<7^SA2j^x8K6m+7 zZd^y-*#C7(I($T>Kz*3sVa$87&}rvsWdGpf?fte7N&JDkGe=;iu-&tU1{?Y8<$v8y z+^c;M7}c9odB-R|H#B)Ol9tW^V&X}l`zb40hVm0Vjrum^^l{2YMIT(OM#^G1(J%zT zCJxTl!Df7}++yFv#U;)E;YQ(rOYsm{R+e=An==i|+8!Qp`n`_6ACbp-?{U-3cPHWk zd}1}OIqEIOdB zft)Su9!Nivi<0RM@pmt#H9fvCk$As*bsnPE{_EZYH*RUWj zikY17EP?ObU;FXnhsccomTc{7@L|UWk;3nd=D{rD1!q^BDhq5dJ^tw((7&#olKb|* z=gPmMVf*ESfs50gWA&?RAgt>s(QNGqt8z4ewm8LG>=gBSH;%ltMtp=iBD9ijmq3U;@yYa#LNyFs;b^ALwR4{`nXvD z5sh>RVuMfCq7!}wd>t`+Rx}R%!OSM%su|hzy4jE zn_TGFaM{w|8%wivF3Ff4662s#s;(Q7DcX9MNN=Rzc=O}gB;UBW(Ul>uFEgRcqca+n zmCV!}orsKJ$4bLO`#gP&e`Z+&8i`anGM)qa^rW6R1zR8)Cj8gge}DQk+*b5Osfo%&Eui8$NfAOEdCxv$~&melNKr z(`b#(A$=PCN|k47VcK76036vLeSH0|w&?#}v*i?^usjK)ZwpDU-Wv`Ia7L13o?}b;Vl1|Ln-WD_vO2Ek`J{ZEeHvModuENbZtLo|8pU_InBOBR>>z 
zogKw@Kl_cZBB#-sJFOK(x=(O}NLm0qDXO9DH_(k(PkFriT?`ZTET_xZ68b)w(fQIa z*-*%Ya^=SG;AO)en+g2ZY_WjrK#&AK+`mxQZ;t=kFLvGgdG&>8!(LBE$2@#7-se{M zN}JO{Uw{8{?lS|4t4@PRy>&;oyi=)rziU3s)lAAQMH$_a1kIkT3Qk;NvM@Ra!w?^l zx8}wxw;uEm!t`nD&ue$Hk_4!&&lIjuV#?hfaO~4)A`((EJGaNi#60Gc#hmoF;o53H z|0*Ia7COKdb79xQJuP#FSB5%*zwSa#6KY1B&{5-lpgAkkc+ja7Eqm^2j(#j_zbeWY z{r-I?Rjx#_p^9hx)dI70$VtI0OT8zdCETCFPQ*ngiVo(EiLunSg}RUGwPY(z%`+QV zg6oIW#M{ebW30EEPZ_u2R>!!@R+`7+F&PUm<#)#KPJkc+;iJuG;0jTUz2NtQXq#90Y{ZE&wz^YFcD(E7?28$2r(VVrj7$jE#Yh3u2( z-ernKjvi29x*zUbelW-h;Y?H$O0ZYm*tM8bGB#pG`U>(DG6sNSwpaOZ+$u*)7YQ2T zx^rza#xUqCsr0z$c=deUJQs~z>!toI_uboJt;3rmW=G8FA0XH=-{<2uP1u~D*Hwwz&n|QbIzy^VKi0eHMyK|gRn%55T=Cido>(;?l;da2clxs% z<&Jfg=*4=MqL!Fawg1X4_{;qgxILh8XY*NI9qmfCY;Rn zkg`1{`Da4(CeauP2iGxoWQj}MOUVhtCxf;Q+GwX=2sc^#uoJBLr{Yi=_pS0)%-O;n zHJ%uvR-X8cjjE~dWnMAPf~MEIK6~v*7aIHKf||fN4f~a6hRQo%9Cs+`;79|Q)&00K zN0ox)NWI~N*Ug#f4k($;mBQ}$m4a=(`Af{7t@s6ma=y@Pwt=1&zuoJ8Brd1!B1ZXI zCgzs<(ZS_NR^dt)gfSj2%!?kPt2p8~re$It95Ev5z_XwT#Xyy~U;e#GP+iojdQd*OuUHRgAMQNINI+paF4y;qUV%1E2t{;zdOpKM=~w-{bk_O%0_7GaQz06hu~9Cx=xy&JDF(p z=h^LLM?&-`vW>HZv-g~+hg-L_x3eFYSrvp}y6dJS^1?~% z#`jT}XP}uis^b@obUNo0oR4`#j;yw=cQM&%H+EZjhDN7g0&eGbQsr2wA_c%jgjmev zV^e0x_FVy94TmOY%_mS~`yFGyXK%C=v#(*okB*~tLh0G~{njwmjkq|GpU52HI7X!~ zdwIMBAr8m+B_sc45wc#5wccgWyp?*nA$|C=3dVrr}Y2(ThF?2c8^(R=rh*Y5SJ`1Vs&d z`S|*QmDI|MR7%@fVJzyeQ{Mp&@-@1*H6Qd81BFKqHT*|{i`a&$u`3D|F2NGWf|pL0 za?aw7*(m93p~$Y_2|v(*xjp3D?OTXyQGuS*?HIYT(gwai&%ZujB|tadGeFyBj7q+V zP!4*i{YiG#hd@hK+Ekg6%v|mr5-C9D$l}Jqb zQT3oHue4by5sKfZ?1$Zo$9-{*2NZNrWXTW}W{g5o_V(M*N>-4g6%OkW^)|;={oM#*R;DyS24S|TRm<(^mYc`u6fR0;mexl?sv`JmupOcT7uy=59@FFM3 ztaT=UYd3f2d!T7+Syy}6g&{9r{`gYK(Y`ynj^7XQn&B6?uM~nyduc+RB3kc4e&X1N zrzDpGju(-EUi_0}VQQS(6wDpXsAY-wUyypEzJn!0?R8?fr}~q=!7vxka{WdLdkd(b z%<4BIdN(KQ^iAyEuNj*keQF%HH3gV8nobpjAyH8KfqhhQ?DH2Fz}+u48Dmx`089A;01ZVjv z)RFF~@pv*jS8hDk1s0faJL{@2%(zCKxmw;lmUTMRk_$`!=^}IMwSRZxlQbcbgHGMQ zd-rFJIN7JNGfkEB3F%WIBVzQfacvUx<{DQm25)zjnKl5xKAX7kQ`Vy;e8vDSxq|; 
zh;1RsB}01sKoJ12uvKu)g)p?Lb8<;{B5P^R(|GuOTs2I?7%A9D+Or_ieJ5Ujj?tcgeIA0hOSoBDL){>huzL3bPA6czvIg{Jc(_Kj}@-FT{?`mh0|k0D4Uzv0EPzw)vbA zlg@4|a-1}R2R&p&yc}gRfXh<^&S4*AOap)znsV%8aO3^r;T!KJM5D;6T>uWTeh65^ z#W@IBaACVree9!^-Pp#`=Nd>Zeg)?Pk5!s&1;BDHx|0tJ1{1MosH7^u)aP%Ou#wtZ zWiT}ttELD*U-<#>WDo|cZX7SBB=9pGgQRO41uw1KmjY-cxg%!>R%v084rS%~Bm|^+0|Kf&9uT3G(kg>i z;A|kbtfcEH1^d%Q@$wESyIz4k4t*R{egmM%+yO8`HGRyJQD9ov&hiH&(@qV59%=bI z&$mEu7Mwz5AKzO_S7=K_CZb|R{HdK}6kJ5mm)~R_K7dpNAoZWl`EPur9_gwv&_Irp zgBOdD+!nce8Z1raN~Z}g1aS>4i)-sFVc)3@lb>kwd^U#I|bDbVDM*!y4@z%vhNdS?=%6}tf| zJyhGGNVlRF2wAA@#c~FKX;cc{oKpYk@%0eg_Xey$N{tGB6SX=P~ zanhhx5zxR$eGAPsqk{iG^@+c3|Bfrr6Z%N0twQ&PMzkuIT z0-WJ^xxzQQMgQZ_zbax>wD|rE#0ccuh z0B-=slpUuX_;`mTfb&Z-iYB4E=7sghZY@< z&|-3?qA7hDJ^*oYhM>l0W23uGAw> z=6q!$og51$ucLl7pav#C1!$&$QFqGgQuAZ!2i@GSkTOt)orc z26tesM@31N#$W=LHWkHg2NdXt5K-wb_0D)%v~ZnP6Ts3pU3OVFfD^>xCii? zil@``6uF|*6?(#_B+`zwIFN!T$N`A3?dKf@0PyWIg)TnS_tCl^oD%bUW8VJHR|hFD z5Lxgy*v{$q3jY4dnL0nQkj7$Kyc9@jVXfCK|5PD`I&V+{nGBZ##6Q)oL#g*VMZRE@ zzqbwrSY<>+mx(9{Y3g1?#47W&E*Amb1q!@N0NvTXec)Z}i^_FLW==~4N>=^!0TKN% z4M7$c=XvCd!zNJ?am9T9X4PLzW6%fUP!uIR42)C&-RHr<+sY>u^C)YxA}tPpbGY1U z!c8*1R}GAp3n`WcPkXr(qM>rfqySW57Ym;ZDNdatkaV_p>9dLe|E>qT+2`oVb)v4_ zs4!t8nei17qs$BDOawvuJUo0`P$iF2j#i53Udg#D|BRr(mhyv1OUUQk!KALhn2jk$ zLtEif7&f`S$)dEdX3dc+IJcKhaC}bg2kLDy(-3{z3x$j|pxi zdM()bQ959wQiJYjF^7W9c zLibZ!(;r3$Xp$6QU>7Zq3~-0s06GfNaC2)V7MRIcn_AMPxdEwsBIA_=up*D=6HzHY zC`99e*~9LhPo9!E{i#*u(7eDOcU=;Y0XVPROli~T=SOjC8lbZp`eu*RNYw)^q@Q>L zZw5R&2UH{br!KJ8FTMkSTu^@X{dL`kM1bnf0^4J5I0~fobT9}ifu{eiP!7R8zU={G zn?JOU^E%){J&REHX@EMb0kA<|-i1RAGnjfyK~A|?2PbO5f2?m%P&61VH(laH$v}f5 zLozAKR{gF94*V~yF_LMC2(vfO@pS_X$Yb|_EqiudkX%+E1Q>{0HFF|CsgHMF399 zBT}hK7JQfPE}U)oOV?TxBYu7ukwGfzH;E}z!$o)$V8Qc$g>o}!h8BLe1(4fdxn`b{ zm{W2h0It8lJQ)oq&{A-IPzVu&?J9S<8Zoq*ftbfn2Ce)b$suy20VFkSLoss|AVzZ~ zH|F~VF;U(TAiBhP9g>-Y&48iaWQrIBD-MQHXlf17Km9ODtO&ZcB>b^5kO>+Rjg)kW z$ys2eAa@N61Q=y$fbB^T3E2Y`je1UxaMJM{VEhhE^D=m<5Jn~o`+f2MV0?Dh++5(} 
zX(ZDx-dP2}0P5#oDM(%MTcAFzM_+dUDX${p&(g{MQ|QlsT&knoJ?nM`Q>OlnqvBi? zK!rV?(RuAqhzcZ~fH>d|8jhm<+C@kw z4eFLTRYS3pD_Mm`1`L zsqoYj^ZwNl_&g2gQuJW=R%DZVsf? z-_qs--~=UzDS@UhtT55S*7>^Nf0{|M;3<&C9<|HAejij|gvp0zBov!mEMJ09HS~Xm zssM*_FOh$mXkQGrg0h<^@#=c z3R(auCdrUuD+&~^njK0i4wgi5SZQ@P1+^870yy+j{>ce!543`2Q`-$n`J|JNbi6_*Z|yB^5<4}h+{g-~Lmf)W45P5V>sp3WEmGlQJq z|6HQMfd9eLO7v#`qfqc4Oy+-REI$uBx#g9{F#?+jDjfz||I^k3xtHY!#i@T(=K$`o z$|`kT2vnJgXrCm(2?daEwqIPc6buR$L6pMSK`mOvCIT6~!kZHsZfx0ov&N(b01phn zd~-}S;O=)?P@IaP0pa`0s*?tqZYxIwD-TT$@Y&hEkxQY6Tqr*ogo&-#)zQl|F;k<_ zMy$<}&%{|qNp$B}g|E4tukH2muOHttvxVHg%|LVO+tG6gw<1o{zCFD=aQsN9P3I#` z<1+jHqc$<8?N(A!Cbp_8H?N6r($ubbPwU%k{ABoA(AoaHb)mV@Yi7cu$4ASPYWkz= zM5*gV2O0{7ATml$GIAOPGVo_zGlK#i)~PGTK%+2q;Z$o5rRgcSUv$0e^c({QaXs$^ z?V<7Gj0*S(cKPm5GRi0s@2bi40yTf8K$Lg5kO^Wz}_~j9{#+uXrFCk1GBG zneMx%{~7t$8j9vA88A+A!4EmW_^T8w@j+PGtNU!8mW>3}I|5IF4~-TDqjy4LQ~_l? zx5(grlmdHcp)atq7fZ_M{>6=d{ldbZmJ>GYy}Bqx$?1lr6cfB@$~m*?wqVIq5QQCo z?qJP8!b393&3LkDFLQYRIWYbyFwlNa`FGO7BCxXd4P_?}&Amqh=63B~yh6!Y#7Zd) zJG*@B3?}aihtoJ+Bf(gkBRPzeZZ}RqH+4w7Di@$z0R;Fwm)o`+pFhX>sk~eHYY+cB z-jIXhId0>vknb^antbNqPOQsac%^Y^Np|&3e8Ey$^snV!&)ANE$r(FS$fPEII*}w{ltICnE)ep$dgvPfB=G$P1ia5(2ey zYT65Y{qTemQBeQ;tLtuHkw?@zslZ0k^S!!9%L<)Y(GJwuPoSqTPbQn*%qyMIB%z&* z0-+8h>!WxF7X0Q3zk+VV%~NuU1Rhv)ueG@Dp|OD)VE$UZizkSbev_gQ*4mWftlC=! 
zbebo#+bNP#GolMd-$^!VrXr^~e>GU#u3K5@ESKPywA&oPn~WGWi&z%!L);O)O$OiL z3x;1ODhVIC80>dOktaqaLKVdC?@`Itkw&-?WV_?etSChei&Y^s1ImZ#d?01i1;_4n{Oo>T^>w*fdQ#l3%#&#m@ArqD_uzjszk2b2+qgy z34WQn$?;5x6BgYdEUtQp7V{`TddK%^d*btJ!6r%h09?N7$wDP3AcpQy5)kJCW|`hx+-DiV#PGnj==tbx9+Jd!fSQ+Y)sIq;)8MUu zIZSinhOJ%sb*Jk5wAU%S_$!CTV`aej%9)l?V4nN9VjH0N`UmgyVDHbNwPzMfFE07JV@|REjs9|fYVh5jGF8G zNsC6HyFu!4e=0O!`%n_txGsTLt6+n|R5>%47%*YYff*t%u;V#P)|U^B-h4(hx0m~A z!1hxDQHY+(a}ek5Vvz;DQ2$9#6G725 zecJ2VycNC2P}Kbh-| zgzCAj!9!x3PqYZqolO2O@OjFd8ZWc0iIq-w4@8QxKGr5Pg|C=fQh}p z4_&u!P|NXeBrv{n4n1Tird)}Z7DL%&Nu*RxEvTtYj`}IE;{U`+xXpLIwO`~0cIE04 zPp{ncCXJ?;(jmS0#s`S0<-L4@=(CG>6`CP|r#FEa_L{ug&p^UQz&&7C$KQOk<^<{{ z%Ma{YOP%Czcc48frpxcTm)|o)e`&I2PKyJt7GVnZMF)r9<*45ejy~Dmyk9jNEzyesi+8gnJ{QCOwnu z&?4E0%75j4n*z}{YXWol@D9=S*M<{a|8%|>|Do~TYCw;?cdxSn^IQQKt>(SD))om*+k&FQ^#-046nRh+GhBt`U7M7^(5G#eYear zKol1lzfD`eJPO=U*Dm>sb8tVhyADi;4Cp(cw*|i``~AWAUK(p%is{YXU#_2@0dspT z`S{U8qiaC=u}Y`P3gVk6PVgeeq@Pb%#ilpYTKh!~ZP5nsQKpmS84EyxzA%AI#WVN? z?o)EuORV&?7f*J~%|GoSa8x4(z)`K$?xNjU+2$_HNr_JTZw&tMsD`J_NV`O;EC$}Y z5KuEC4uM>C=(I}FIKH>L@RgV|e2WC8c_*^9=s<&Lf*h=qNL`A8#SE<>-pp@l#$LTu zg6)nI?qTUi=j3s$E{3_XOFdO`SOYIIANfW*%VuC=fA?Y31Lk4YjC0p)CGvjQ#rMn?WG4z#rRbJ)wjR)y=%APadXOtb{T+pgB7&|ctMO~ z(aacqOaj)Y6gya>Uy4WQo#}#3qj$4^wn99s2Ww?Ic6#V0a4ieGCHSnut{8yGWt=?}9+;055a4&o(@>MCwrihO3w-tzHmPPLZ z0p-d%ETB%>b<>+`)g$U~9IQ-d0v&+oN2x{T1*G$rtM% zV2y*#m%R!_rswU|`C#jMIMLZp-J&;%akK>-xYIv(D40hP5Ldgs z6mG93Bxo?4>#M)*W#of<5tZw_vlsAijGd6DgU|Lg?R-%4>bU1q#f;>#dAj)A<%97h z=ASnM=C{T^B2)^QY#@*CZzuc;(dl!#iEG^(eKnb-Jb^3d>CxYqs+j7}EeKfg-5*EP z`g^8{u2}JxW?=_C_A0Rjv#6gCMZ@=KnOJP~1p}q&%`tA+iw|&=R219G;!RD@RGXv_ zRiyqXk?7YExLrS5cn3#VCEwk^oAC=jZ03iR`S1{P}hb#8c;)0fPE{(UIjJR~o=y;q>*IQ zgq#k)QX>`cfn7fMN`z@DLa&UuOnL%gQ+UBMaa)fAA2qk+oV`6p&;?&OJ{axkXcr7#ZDb zt54XNjVKPXshtTT9TX;0a zw=Y*%Bl0-J*TDUAtlp1-AA<-g7j=D(1lG?jhDqx9`QnZU)8rXJ{70dOe3}}bNbZxo z)*2`CwM%3AyMN$u!!f_~YzYtZC9!Qnqfi!VFiYd_=3^!|Gq#d{S#Ko3kGH93k zJ``b<@W!#m-q})+o`9dh9ms+=ka$c%uio*HS>!V$55uw`6k*75AFZ2v$;;GI=Ise9 
zvA2!N@rB~yIAhLA$Kk<}8dXM&v6tp>x047om#<*5LeGD(U{y4zJ+Yu%902?hLxE_t#Yspaj+k{gMlTDAN$Uh+JV4#)=5R z_*@|R_nXx#64rl~3mImqu1sd|TNf`oj_5E#xsN|i5_dM0fufnBr}pH$a|Lm2%2H)< zmM=c|rbz90-Fts>^;Lmfr?OUx^!ENMa0q z+>kH~9R^P?XIm46HTp$5eNRact7OfvG4}yMQWPusd#0$d7aUWMI?iU+p5GHqo}=q5 z1PKaFG6>R1_ALK;Fmzf!q=d)E0gfX)T%TBjUE;1sVY37Wm5O(Nwwv!e_-J$ro*afo z7>46UHR&FeN9Z>U_M<-bn07fXtxTr-^b7}>Te}{!G;evY%ROT76RvUs^V&2|rCBxO zT(lY!recMaxo9HSCd;K}bbYF3dt2{Qd{srwHsJ;?yyW5ib1Um6LCBA}(>83~6&s4S zav0$U`pr}OzmgDL@FXWfvtO6fci$PE$79&rbA$`?RZr{gao0^-t2v-gKFZ&3Sl3TU zG=Du7hLyGWcJ**2?HTRod$ca$UFc;P4${PH&m|_RI<#aT{r;4DxrqP7E%KFf!8m9P zMgzX$2wi9wIm?qLWbgk%2)37$rF;^GANgJv3U>bDPh&fW}2v9l+?qQb2inp?tN z*{{!nsfgmsm!#8=9IQq(IWg9n*>{~m8M|rjs5c>d49q=CdkI%7BElA}=a4-J#~4(F z3ueVieW$(JrPFHUO;ghl=Y+$@Wqpdoc3kzW4}EZk(iHgW^lwRRqMnH_QD4((B;1V2 zGX1YS{)>R+>;5AIjZmF77> z)HOEp=W-+sda}P&c{=;!#bD`jEelJl>-X`w6zGq~Ms(7lV@ZMoUN`P$=ckxDKm{MJ z3+j~AL|hmj#tYkbOj~br;!3WI_zO1}qA-_W{$|iK<;)m@tpN1hNABV2q3IC>u4{8| z$z52c%OR=o=y%}ts+bF^s&CG@^iPvhsD+WSVASjiPCm~3cU6v52HAs+@m{>@l}4V9 z^p0JZ>PPSD20#2Ti8`I$3)6rtPvo8tOxrckn=y6>*gqIGV!l|)X)1C&fx;lZGB#zx z;Hmx4#KiR6UIM0Olc3UmUYpZWhTU4Z;Ze>D=p4RcY=}ZnwnC47C1OnH3zVB)*t>;J zMKGu9BP-;X{$8I=a_frpgGiIDNgZ_T8+4VY%mnQA#N3tg0W}f6O_N~`2|b^@PMmkH z(|6d6U_6G3DVpKB7%_sj%{>{AHW&Fi=jjuHt}`S72B@u`B#*=e{+^+JFT{wuZ>_+0 zDCzBvt(G7>*DIygv2El#ADf8ZcTf0 zhUbxm-RNE_uQuuKRIk7sxcl?bt8rKt zHEl*Jf=OnF@n>#5IQgX;dT^oi6_w`m@pnCQUt{$6;JI54B3Jvl;j-2r+7ckB{J8>X z?3?5s$X(O$Jam$5o|iwpc}MI}1BSVRIb%ZdZF~eY5;ywUB~g9U)M8rVueEOL79U&2G`Un8Nle;D&6qeD zyv68F44FI%t;;?I60Zr?=eXIrPo^McnxMMz|5{Z#B`h2o-dZ4d58@^+R@Z;+AoX1q zH%f%8E=V^y5PRgrw5Z63`N3U z?-hL8a7q=w5IsHL{ql`gcKI8_kGb)hgLdUbqdF7wgM(%e@9Nf(yGEFTs`$-2-+imT z6g^e7ovN6Yu*pAr(Sr{OyO4*G4y_;Ff(XGpQ(D$mevM7sK)-G92+OVETbGFW27Hr1?q6D?A@xXCk{y>glOXn$_P0_XhIIzZ4)&fLoHimK2qrZ~;q&e5k#}8LvWlugZR`i& zlp3@$Z=)XAE%<8c`rwT14;o}kp&~k{1SCw}_z)R# z_H(GalV@lPUSId=ii@m1=3XQ;@{4<+3NjCV?F6API#1?BJ3WLW3tc@YFBAI zW^9^}Z_q++cyCA#d0!97-1jkQSW93V3jJMW;O3ieq>?A`*f}N#HrL(7{ZN7fSEarv 
zC}!#NIHp_<{amX9r=J&hg)ZZJSAJFUi`RERV1it)GWwYyfzQmfLc|uvH|D48`8~=# zKA!%7`D=vCG0lta+netlE;RT=deg_`Yb(1yQ zD);HfRl`-xsf6P**@DkNCKKERA}Pedjc3^L{BYS9eV`WkL>lzNkYr{>W0c@XEIBgP zD79kyvs7Vxsk7<#+|?daxRArru>p|3A~o^I8h02<;ca6c=Gm-$`00yu&qwX8=khbC z5*~Hji+ z5oMw0d{Z|--~3@tG>1ltugldIEri5kgM~)r2oBja*ewsAIu}B_;gyq z`{c$)5}M^pF~$${bKiKL56QE;UzB_8pr`h}24%32k;ZW)h|>Lcxn1rikdQpt(1Cc)m6#!0Dx`>nZEoVoTeJ3tC*&;q^n!P)a<M@x~?onz;H}s1a0U#*68JQ*ZkbCli%T09Fjkc z%2POq`@2Cr%RP`XdCSNNGq4q01f};sW6j`rE)YqJ20T(MDW*K48=E>Fs+^aZbv{SJ zu2FhjFG4&T{vyaaztN?#5PVSk{c#g`o)tUaE4^aBM&vVl@QK#gS!aXxglav1sG#mb z!K@x%-rNUC^G~whuRX>*_sk4Q-K(+rh+kHZ%?!KAawUV4INfHXgyP*RLOs z9{6H1iXC_(jBBH6)j413h;Fsfz^M~{ZP)0p8psQ0SK^=%ruI4X5O)dIMvJr+bs8wX zq7c)B%nLI`I#ZeDKAf~SiI6XtXluv4ZfwO_cO0RVAQrSw*}_$t!Qx}W)7VH978K8) z*C8XI;i{sSY%QJOPQeHS<;NzZhncHh`f@4yJyV50`>;q$*Xg*+w=MG=4b%j4wm%~> zXl>gwQ26m>7d3+)L-EBEw(XrLYhCA}HJ2nQ8srU?Vv1o#0M_4vCxl7k5 z&&e=+lW-@%K2-&+fCjjvEQClLT(wx=#IUYy>1 z-I&$*f%55qarGku4HnF$>l>AQ(?jtXl&*`A2bu?tTg~m)i_c+3McsOHvBE+mSV-Kc zd(cQ+FFQ(?;3!owJ{i>wfkOoy?*H)3-+$>y{vJ9IU%DF3m%q7lj!t{cyUFGLgVFEw zaia>OC}s?Ec0!w3t24`!qMskdA-5zhys(nKCxINhq%+z+q*ze~{Q=WxLKa%VSGx4D z1TD!Xq>w9H#lEH$%j)V=s#Scy;hvj(T>OrG6lAORISwfn|`3^3N!wjAJM zkx(ZM(Fq&!^4Epiv`gkO1y-I}_LVwWE||pFveEtqGv>5hYtKPvqoR(!Bg}7Asw$X< zM~kGTDhOutecPxGYyYT1L*|=ti<-zK*_wB{#f`aseeS%$jrlvaj5qF~Rz@OUmop8Y zDWlUXSLgTEW5>brjv1DhW##H!p|uJhs=quRSH7+H{w)9cJxxBl4?g}#OMVvT;;&{q z%M*{Cp?VF>m}H2`jq4qfXKzQouAAw*OIXdkuBF3TN-I&FgP<&wG{6UpV3`B;{NGG( zPCpRvimyiauiG34$L$P;P~th`gd{i}lnE0WfHxA_HL2K%Rn$M&Q6xqQi-a~&{bsph92X z_3M=ke5amRtAYmFQaOI`f#L;~40dcwjnoJF?nXYnjZcU$GHA|ynR}`lCNJ(V-eOy- zTcvd-=JE*p#TEU?@Q*sfgy@A&Z-jXR65~+?*o|ut3yAr6N(TGba-on5j)nG%MC4fa zl}l@0MWsCd4`Xj059R*H4WBw~a$3mJhK{HhLsZDxCTnDv8D=bH%`(Hp*q0+(7)!~J zy~s9WMwVHul?>SvGnN_D2r-6+h9caT-+e#N^E=(o>vi9M=C7`~zTeOC-mduaO81zw zUU1dOA}c$kerPBOj+BT&39FjZL-&am;`_GZM0YLPuei|aT;`OD@bAW{7g6{X< z(e2L%OAEzEQt`QI8L2cWns`16S^!IsEv<7`uL|4tDpBLi{j$`j;xsbV(Sxa@{C9wx zFjQ9UZ(g^?AhdjJ)&jX{AXMkW)D(e9W2axd?1xaD5)-G%Y2|`jdUwCXXkf7W+D;B> 
zAk#fw#uOaWX_Q9;nP^tgkj;s@Nx+Ui_CA3vcite`>Zlgh z_Gd(>wiPhYWQo6`3k@{@ZpX3zh{Fr?Ni_V9#@D_7dhLJ2NNzO+WN&^D0TrlmmDOx5 z*d|`!R8IVO&lw88(qBtbx(#+&*@*t-c=%N>9Zpa z4&4(svwIqHEtzUb>_6|WH3f+=p%%xMhVTuMVAYoVp7DfWu36MnFWCP-Em83kFx2JW z1xiBGQ@XOgF^Ak?@!`Plqrlazb>2D5E)`Y)gBFn+uv8+^4iwMPQNFyBUi*_)6o6{P zbA}=Mh)#AL7lt@jM4bGB3axjjG1p+BejG<0c=*@LcNU+$-m+O}$D0c!gk%|Wj?Q3^ z#z#TGN}p&Nz6+^#eX3)KP-tc^;-NKZ#zt(fzcR=TDqyrbs3XZd^#R&(GZ=drv0hFU z=A}#N@X$(OmlrjwQIuXTE3(JO{OD;>=muhBl_EB2kOF*X=*sX!w&_@thOYFuT;t)2 z-X5O}ow7n)L-*CtEwcVGz@r0VApa@YtImzZZ?HV!&Z3q7uQ}`tx5)VK`@dBRK0V)B zDS3mkRv7jZD_`@tpsZB9IIV}`xN76oI=S#(f=o`Ngs9+a>|+S}=|$wnr2HjH4@j~~ zar1&ADFv#lq$l*nBD+W+f2a|0uY(`?S#HlM^^aVSBTvruS&PQILtE>vu(;>W1>*r= zhh3VJ#0#i7oGG=d`j*heW8$re=u4%KmrzX6Qa+Q~)HEShCq+|2W^jYpnY9U~H+47G z_42L0JFI?H|GBzRl)D|#tK=4$b2uc<$VB?`+Uoa+U!OlQLfjYa6BX~0$YQkSmR&Iy z03O$Iq8&*gOW)?HrTs#*d$#Zi-Qw%NRX~5gt*3vRZDr!1sptzn>xl7DpMAB%yiML0 z_o${cP~{$_P+ANm5RuBZpPjX`8AJG|6Kqmi+olSmQVH!{I8A{i&Ad0qOxb>vC{RqS?&_KzmcpKTfu26QhC7Nd^MK zng8yGZ#oW|VAiP0=v)j)Fa*)Nr&!D=^$WsDr8~VUZ%{fyp}g;=@4yX8WEyc0dBsD_ zC_keqL_=L2%!SVE9t2u^nN%LxKsbuSUtD5kyNVyjoIP&W>b>nOe+heh`cuacRe{Jr z1^a-iQI77}T+RF6^mbMn30$1M@a=H%-3s^C*M>x&UgT-6d1aRKrd)i*DSWbvv;sOe zfOlq>GsQRk)rdlDfECy&!xwT(Z6&l^vFdH3Q_JaUp5SS_u^_8Je9YM)$p>fsIdk74 zaG1+a0`%A*h6StXeZ-c`JzV!2o$=}5*4sYdr?2caoS1iVPa2jN25D2U{n=#u4q-cD zg$PwuKst^h&J*tYl${qY^dxH8B_KO*`q89L_D@SyOVP$uJgURsYkb|&WTG3p+o=xX z*^%7(j(%|{=|#c*mkvyt$9GAe4{$PyR&xIxgiJ`8rCC*-r8Hpc$5hfAhlV2RZo1v$ ziXYE>?hRM7oC-Jd%JmGLqsLLDF7lSc?ccI(v*&Hf&MXf(8WGPyF4YS((A(O5em16{ z%%Z|O&pc8jidM@&*-kfGN%J@yp-!e0bpWwjHPSz?Wn~ zkbB94KV6cqetx)^1lI|PZ>i#-mV$I94 zHI@&_RyxL#r`QhEA*fHU)FeyL!S?1rmP(1oR>4ymyA^x}ql@>4BFTHk(87XoRW6mF z^3W#NEjx;Gv*pU(+x!amSFk;}GI?o-x4KV;x((bvmiPK!ivOU0GUJZ4x3n^Vhrx+!$W z^A7Kj3cija4rNE_sSQ{PDqhE~gZ*)_YW;5GUDV3BE2x=Tw&$T+(N&0S{ZaTsPR?K4 zr*Dh8=xAk$Hb8o?{u)^mprk4UT?uaN4wrbHGPThm53e1 zitb|6PCevqjh78z6szjOeg6)oy+x}ONjyI_Km9$rqHu0FPkNutT2BrBYR`|jL>d3x zd~3?{tO{O}bS=SiWi*R*k>1hS4eVe@b9nwhsz1rLo#*ZRdO6*CcO8|8s(#A}UwcvG 
zIClDUF49Fj9Uib5xa3(cPMZ!^V}PDoP4pgK zy==*4dLNWyc0KG#1GQz>##e8zI1CP6i}FH z-yML$h3qtG+Y@5_iV^XG(Wrt3SRtoh=91;fH*Ugw_gcd%%+RZLp=|%pgo(dK@J8tP z3;yBh1)4ccK!}P*q4@n*A3VK-bTUD*x23Lv<)=u}{I900pt^KwdS$mnTkPi(0N96` zuNDHiVZmAA4TLV#3NQim4&;9v)X5J1*>F4`6b%k_MDc<;Dy16Cfi5$<&g%YIJ~6ry zMr8Q>9Od`ToKj8ilPw|Q{VX3Qn9+efMs4EiC^^Fi><*Biy}&@bdhj_nD00esGCYCq z_*1FiciGJRMF8<1-MI1zLnD;HN4P2?lHwP>4GX}IreK1C1#@fv#y4(nS^`H@v@Ook;cJk8ON0nPmrbG=;^;lt%{qdAk;bB zV2~Y>Fi}(^JS_A2mO5|4-7Qv1eTk=|^CGbiQYY0g+$F8|!&|70n`_h%LnxJ?NM|0-h>_04qr~(Nd6E#;l?4i#gXo+iP4C9y5E5f?_aI_^FaWj zOp+UyX7ioChvSu@if0wo0PtM0IEvb3E0}S6k#{8{kK+7gj2t~ysH)Z@g?0IgRRG}~plC5R# z;OtXAeR*CDiLL06+u^YerI3Dv!+10bZZVS^8W~StNS=StY znA{gr7xbiG@!sV)U*B2qT3G$w!PG(Mds+Xh_?8S0tqF>tM9?u;9pV+Gs;Uw+@X{#o zi09~E|ML`V$%<%zdNbb?$;<=p0M_FxAT<21W%iGlcPD|ZrZ+lJelmYf2#B;|Rp*_jw(ptO>IZv5J< zU|mZSP7on#nK>x%o2dGbW$ZERiy=yk7Bx^Tgr|oq+?UbAp6xuVnxFRTsxH($JaO91 zKw2C6vhPn`NPKJgk&G!_LF=4DgR&Qn4n7D zEAA4o0-A9M8bL^WrPIrDMRl*pnLFD_R;LBl$H8&@q*I6efs-rQ4|}mjuNFhA>jov85ZAySN%V%HB-#X|L2B9$ zhcq?}xkfp-d;$=4$58$TIA@=8P~y0C3xBp(O#e6J+P?yj6*Q+Jqu$L* z&6&6zZ+7@%)}{Q5N8f&A4OS}1sF}INtW3*$)ebT#wl4vU_NenOLNtJ<&Cb{>9xAyE6tKS3(* z77r)Qc)K26ppqHKl5j>G5Y;02IxyBQ~O-oFG@E3x#CV;;=g_*z|_lPEDK1uWlO2FLv;3t#^&7m_}B2`Fh<} zd)=dKi?faI&};n&)A9pT*!&=yPnrmC$D#&k

ixcZXA;DBYXc1l!T@yk5%VMrJEy`x>oW`gA9KyZ-7k?;|fhS&SA0;LgmO zO9bntGNAES{0+@oVl)Dn$>mT-$7TtL%@sI=MQ_rlpHN>-Nl~x)V3deN%Q9 zCZ^pyJr@|DXL@z8Zh(`dg@kjHriwxWFiNUgyN363H7gf^V-DDI96P;`-If4e6X5g} z@N)dSvMB(StzOMg0(j8xk{O>O*bf+`1Ye+?QS2|8UAjn`=u>DFr@0y()!ucYP*g!( z#_Lv6sQEur_3QNI$K1SNeQk_4K7z^{w!Bfn{tgTuE-XLlGd=pNM$>l z4`vU-UNbO7_Zh!Nj!;FRi#pjf2+Xfyyq8x?&}4rY%Bj%pG})DeQgR*j6CEf9J@kub z-|a?lW@G1K*-F#3-ga@e_PV*rE*eM8e02-qqE$06IcJ@wN221i3W@>>idhSCNCqWX z`s+SoD*y`%EA`EAwP(lcZDROW=}R=o)+9d=_~FQJlRW71e})?h9f~DrprHOBKZ15# zts5aP*Bwa-E3@(>B;?+CQdP;_*YFkQD3;wEWPcl>d<&nHEidd*M&O31ao)kiY^VNV zD*01xuS~_Ka|y?9JRuC)x7SM)7eQF&_!*DO#EEme2N(MrK_$u+u#Z&G<#IE&rKkDD z#tw3~_Mi2(osKxx?Rlaii5S!A17{Xz3Ey5c{Q2zOh4B$3HLXbQvV!;NcTZj}pSV8p z!bP#G{CoTE^rd5%%G;Q5>m#&B?5q>L2kn9Zq3HjSE>=Nr3VA7J9Oc#%lBK)6F+BPX zdoy?n3~aQ|4q|3zsh})_I-8%a40UZcqNhWv?6(m+pdfSekBH{gMut~WIh$H2UULJ# zQ`ct4)yAIzn?YDkjL^#*UjutS7O{^8Z$ZUlUJZdOcX>NzQs)b+1KFd=Ehyu`Sx?=b zuap^-nT77oU>6Lr(vG!y6l|n@Ucpazd)W7WWbw#SdhewBvftl*Icv~i-k$jO`Lovv zbeS8whpmpiyy*1*WC5J>->gp3JBZ1C6Y|fS2V3go|C^`o)^$!vR&eXo2vG8Ebby0E z{<%MA8Ex_)N5{W;p-XCXRN)X*(sfXwfma&6SePd>gPf2xDJ9*W^9-=Y=VkbhPh zckiC@9$#_5M8$l)1lVY@&yyaWlWO6A_MayFR(zUzg8~ zh(C==-_IPyze+fA)3(_kF1DM{0LD`jF52w{pd|R$)ka!@x3E<6*T|Z=@8zbtNDHTT z@H6@Kj^yOXR3RrACDa^Ddu(0dp8ik|yL*Ew1}gO2?Y>mq*rsrBB^ zz|}YCU2huV*n4oC0F7RI>ghe@x_|-9BDxoNRGtC;;dnIgsJ+|t|J|KVyz<^*#^BrZ zJy8HUChes7NN4GGV`)1g!nNnr(%}OcxHNF9l74rmcWgkm6b(;^YW(o*zV9;N#s5e}j8x1y()dR05iVja288!afkkF@ z$E2f$LejL+@wI0`R|R_#Y)f@!a|r=5FFtj!i(7dp@}2t)^^R?)zUCay=2vQ_yj=iC zg3%>5&=V@bW*3+l?3ZQGAsdc)P+jNgr^-&P&^{%YxK z?at~Z@%5br|8G~?UmNTo`@eOiEx(am!#ceMg$!vNGPiXVnJ{;H=+xZO>W`vzZ?+TQbunJx}rKVp@mRiC|{ZO+jw>EPh$SjH=8)k!I3 z5q40C$!%khEuoI2VfL0EDxjDel)R+SZ+H>&#>1#xSZrktH96n=E%k}~Ar)$*g?{VY zv*fy=!!16{qLAtsI@lrvXw8R!C2W1#-~Bq!r!vc)pZxuJKez-&vs0@aMeNkRx$oq4 zz}pn;ICTXo(i^E9l#1==*^ocnNB2LdCururd|4Amc?M?I?uXwo+X3o$Qw48kUZnRk zt{I#D!JF!CV=b_ux?zo^uQi<)?J2YAqjK#dX$`5b$5evNH|8Lvbx(Cz0+D18J!%Ou zml*8wk~bsX=V&r8o8|5XBhAKM^M?;pM}xFR^hYVrGSB&7A8Yt4q7`GMJNMpRuyD@q 
zoo#IW!V4#^;U#H^mt77J2VPv*6OEKjmwkp~y?Fnn z)BkG703IQ+wRT4H)ET=?I64|{5YWY{&$BSYlJdZvy+b(5RpA#~nmEM(!<_jt)@M9swb^axtkH#(mXt}KO6RHm6uLNq4^RJQZ z$m2C%?ZIea%r#PV^y!W|8Kt#Bd|l4~r6K(jvwPwdJ+iRw9{omw-X%cZ4fvB>W#KLo z!VHUb#)Rj1yqR76%!@<6z2E)!-=HwOz4k`COf1sPkfKBKrr`5M=t+4+2=29osm>8A zbGK$Uty2UD#Q`kO5|Sp5$(riDMjT z&+c@1(IIqw#6cyChvMz7BJ6k#Pw6AV=d+G7H+T5YAZ3O>4eiM1y^o6;$I}|`z#G$A zCqpPr63tQ0%QF&7!vIW&&z7t^H_ct8AXt&)wJV`CQMKe^j1$E69)pkU2B}z1)w;#L z*>zC6FcG;~Gz83;e|juEehAnqzleNV{Ov(ukKc@y?(pQ)U9U+td8LBA@5Kz1EI%o( zpbPf1d7uNiB~>Um6J)HbPE}!od3R-MpQl4cRY8Hg7rl-z02RsR^&1#KgR4WJV<&xh z>2#&eXz~ zZ;bvZ(Q$H$F4p9v6ffQlcZ844uI5Im+Dn1{03oQcIpP`-B7TtO77jPXX8gj0@ZB1~}mCb7c=Nq^_D*>aF35IAwJG@*|(? zetY!&`CDs6=yaLGv+~?#<|32}(I_Z7J|yqVW3;KDN!E^$L=M8-CZ7HF5}qthgT0sx z=p0u*hvwbyY#5)WbD5QU?Hh@akP+dBhH}C^!d0)5Jr-X5vM17lKL@ME{jOfLc41!(kr}AhMygz zNHRY_m=*Fz>(=>qpgmtGcB)XSQewapO&jY9EPo~<&2z09pw9w~s@k4Etr^eeN+~=p z>A!vbOo%#7bjeWz{dC`fgG0x+DDlt0iTAWTfKqtGzIwIy{_nzUl^A*S>vS%#O~SG) z@1Z&_!Nk+upPXHwg^(D&jr1WE>4~G>*5pcp`DyW&unZ|u4_(N015XYjHpUVRhfbn> zy1lefU^A-L1dAIakMW7mS{&fQQ2Ve+o#Irvl!{i}LXcqQOPB+&edZ>)l)LKh!m-;B zCEV=pwTaKIl|!j_f_g}sF}j(pz73C-fRS5MwdEKlzH~>d%hm+|YA5&`}P(o2zwbST>qC2Nvd))W-svsq>QBckl*R2~yJqLyIrx|=t zaf)HnPp1aWALqy9oZ~zDWkg(|G>>|M6znzNQldh}q8jVenjG~Hrkv5dZT?n7rhPm; zt|t0q_!EuJ3E&HK_Ffj^WT~2D$Qg0OdN7mpjJ_#VZ;uD4XKJ==ep?Q{dBb5ExtfI- z8M~g7z0hLbA0Sx$wVlg~K09>c)wci8A|Zgjd*8F}tFnOe?eXb1Ln61_fX z|C)=c&th%YB9$*odm**U1_|#e{y~3T{n%BRpU-dUd>SAanY+?ZOP9!?*kuks19I%5 z`c4e(2AbCSZ2;%A8F%av8=qG~`3l(2!Nu{Gzj^ZMC0inyO_Fip$-ZGn*pQCN8_6m_ zy$5{>SXa2PkC5%m%*yn1si&16qQI>eGA&%r9zQb(72DpKtyuEs-ke}wcfo*UZLKpO zmTdi@nvJ3$``EKZml`R1GJOJtnKUcOq;F4i4{(b{Qznw$dSWc`w3C!+Gq&0YXN9^N zY~iXiI@@-R87uI~t!Nu0t|^w{O^L|siYje=>Du!s(BiF3Og5_6mPvF6z^~W=U)5(* zm9-<(YuXAYWfsiX5!A!}HeNk@G5+)RCp@3%@n`;|f4Ukb6pJZg&iRgltn(k@STn*J%vAKG{fr z)D)W4EfaKfi)uR{N{u~a(Dj+$owt>>K|V*jx31a@bCzDMQgW6BK03048?^H@PW3mEkZ!!g ztd;-tv+YQuwFs-mh_kx+eqoh;cWI$rA|tFvUpxaJS0PEd&kxaqO!Cnd&=_8~OLel@ zH@}+Xp9d(E zwOYOJ+W--4o8T->4du5-`^7JSY!r1G?VCEByo1Zk 
zP^KK#lig3tVC+0&_JUk4T=-blBcs-O#O78KFJc&7c&yP9;BP2rGtaq! z9|?zz5&v{GFP8v%qPoWOjhT@@st#Z$`D#iF%lXTBrht=pkPp{~jx1}iseQI{_uQ(q z%!h)ArNdeayBk0(BtVIlyH^1EuZyTaAV%iDx3o7gfJ$gRmL``$r`%#Xj2*j6iY$?dL4MMvL=a_w3 z%|eMlaBMVNlBTHq_OSIk=LkA@ch#TGt1^HNXDo4s42r*YI)5QBdZtK1w2V#q#Q_ghfPC)tCg z43hNXN62lUj^S&R?yg5~T1}RDE+`!#CyRb#JudA!8Tkk(KqXcJdG~j3&wALvr;D4q zd>wFuFNiS3bJZPM73}2XLUD}=81}XIrxD+Sa>=S~qJ*0DXmaDxg=EzJd1bbcwM=A;-A@$oLaB}VaO;a@7s zxHnj4`Oe}cA8ZbT}g@a}JRTWa`nX02TQP*meuQb_TN$?XBMYbMdYOEU-)c#{?h0IH+1KPjMj*=)V@V^KHJ&OAFH@ z0W4kl(ECK>`e_cZx8h5@^j9Cy9aBOJGX8Z9faD&<|7>wf@n4mAl!-)`=_7f+fEiU0 z6X1IWi>&R{EiwaKs%@UCqDKPTzN5zsWq;knh#;k6jv@Bu{>>Zge=&bZgi~(Ij2a!` zCo7|Ag~MJC{}gdL7Ik!Ueo?gc%rJA9ObG0B@f;S>r*F`T3`Q&Vj+E#gXqoXw7Su{+ zVCQ@i;Pf-rt47FN>>9r(R^W`*KQ=yND77M_x)Q6Q8mJ2}9GjC+)V=Yuk%;Dvv+S>y zzw2lc&v^VqL@5MJ$-rforAUoT1K8@~7d_z9rITXYk6qr%8F|*qJbZ0^5|UR&cTBQ> zA^qF4|A$6cXk0Z>{m3My^^SZjaIn7(x}dB+V}>-AcN-&3l9ugnBCaF+SG9_rr+0^` zV757Z=;Q{4<|KQW5n$fVC9Q7SiJ$!s#tOQ-9+3v3>OXFGZzLTCU0-pTYw6u3*P8F} z(_YnNYADv;X+jgU(#NExKu=F(=pHcT$l|<+V)+&oUVm2~;8wR&*t-`(@sdrm?EVDv zAHs>=*5Qppfk`;X5{67LzsVvI+}ynbEEJEvu$2ne8_8S)`$Rr&%YstF@_?DFAY^yv z?5|PM1Tw+@)f-`D##BAf_X}G1p#|d0UAxu(e$}%@pHB}XE1Lj--S^5@Adur<0K4P~ z0AOdU860j5(xOUWK0I;Y&ti+7?RdB=B!EKN>GtHYqUzt1lT~%!sdP%$cR++RNBw}E z@?qoYBNt>%M)n80c!ubEAl{d|V|YG3F&f5u8Z_5bqLeXf?~X0Bv`v5vjoSycRlU|q zU-F+KeCK9n19NPD#W480`$2+1-G!*nq2b5E3c__Za1*a3JQdLS^ErvTi4%Q(e$g>> z=8vb0yBu>bNBWuTSMu%6vfsMt*iDV555+;s=1Su|9P$p-XbQyCLh4=WhG*X7!3 z+mpJ=8<~%s>_+OGQ*2Lj;Ug)nhs0^@q~z5F!Y5dScH`0LrX^T|VQs@oO@8?L&9yie zaoLmwUwv6msvNFOx5O2>NHY`RnSIvCDknf4;-YPOJ$tHhi(g`6r#{yMRqoSd>krQm{=%4OKPl62 z99U124LkbcnAnm-&TX`)fl{qUY7!8R_4Ch zJkrU_^`o<`Px&?&>XMxvOHWKj4J}1-$eEiMru@Ktt{3!tf&3+b{PgU;+K45EDv|LG zpmsOserR!L#$quIvz``lt_NnvKR7vb1;Ym{O9~UOohcY0NnJ!rdDVlR!Ki)YzVFjf zK@5I!YoGz32fWIO>Wh1IV7reAkWw0VW#Q8Tp;*#}IZnICYlqJ%zop%;QP>1VpahcQSmiy0nb zSs!y~$~VnzuOxIe#@e5E(UEPt6@0qYwf)00hVXFP>7?8(edezsWe>x{&Je5*DatQD z;tI6VDsbNB{+yDLVot*DY$Rs9;Mn?o>6<<+_kVW($o#T^?0C8!a#xYu`+#OOwlo_; 
zluj|**r+NHj}o@fxX5*5Zkh(VUi_UD?HrgeRbMC7egB-V%>7${32-h7-xTVrc)gj~ zW^U~>4DvcgGSa@#yaK}AZn)5xgL`U?srQr*5doB^hx!2g zIs$;UZ^Fx`)*J>vLJy16{fFlzFuFF-IZxa%L4o`Zr*wKG>YJB?twr-#H?{&S6l?=u zS?->{inuxFHZ_um{?3gtGOQ`LTV85)k&x@IXv-F-8A3VDp8Y)9Y9*}^Y;=|5jA}b4 zPTTy^a!!EK&!0Ok$~@w;?Ls`;t5!{d|H0wRI~bQ$xj!JOADMtu7XW=E(co1L6Z}vG zLws6THVyh4qx<@3g_^1trb-m?4~k|xk?uBQp9`Dd7z_$OoxY(@!l@;c61Sp+>XWC;w1@_+W@~ znM&vU^oi4xboP-~v>g@6ImxHhM`T*B;b{T!Y?_BOcT=`8kGs6+cG`9S^8PVMC{i?4 zf51T2nWuN~04=%A)SaD|N*6{Vh0U<8fYDtz(*}da!_k@%pfQ-+71+G0qR*%Y{O-?} z3PKvxYc0hpr*Fyi2>wH^c|&rsSVF)v`;V)7&^^=Ck|bRQsYgsy)FEX!AIv1bEy@rU z`1qZ#XgQYDDe!P$p&n1aq%kjU@$#$xlb0W41HAnDDL-adBHfX4uU4w6@ z5qEav^bzSh9NVCke%-2!*K&zt15! z1OnC^7br{%V8aXqeCLHC2 zTLlDm_QWXte3O*nu}Jags%}+wTMKmd>*4I=VUl0LVs*% z{k+rh2#IL@5JQ73r*7BT-P0pFr#`EL6@r4++9hbn>g&E<>Q($=DacbgRJ?+4ABrKG z-S%^Zev2y4VM|W~^J2Kc*wH*(XMb*8yKY|o?3mo<%_Fp4id?dB$*6YaSCg;Qz=}n* zn6F@N&n{0v*ph*ttWC0YuhU@?L~d4n>W99^=PgzWVPJXcm8GV`MOrI3pg$buGH8X$1TaTZLVRK=kPwhEWXyZsVeKgt06O@G~ znyl-7e0>^dC}LIV6&?3(c+DI~G9=Ex+z1da6;x_1nM(RDQ0<0Gh` z9UGK*`_2BB+w|y82mM+g@jKmwlEdFwx%&Nd(tU7NQAp9ywx(&-ID% z!6G#SDzm_%CHBj!lp`Ua{IAxLAnvlbB+c}}f0!d2((e(w<;}a%EkpP|{hr@#?wJ6< zw)yiw3XwpuZ$XtDYojH*Gf>7)M7q;=6l+#g;sIvwnLVmLV%al2r`w%L!cRt=daD#V zJE3BjJ2^{IRp=ND^Ou?Ti471K!<`;U2OC&;>E>Ad5|H@Uh;mjzmJfIGPBP;|p4UpA zmc;uz7s{=ebCwGWUCl34fZS!Ew=iLi>Io?bUBSm6Pk(GGJQ$aAY+z1}NgtqCwFFc1 z0AGMbPV&R-HoSlhr=ZS(PyyvlR}zr$G1W<|QxUlorHptBtt(ojZ>&3dZd>~afdQvF z6;q#KSg|#1|5pm>fb(&obWwg}33nM?diQ^e4m$!Ji(66W`wKi96FuKIh0S$li)i<;`b3&)s|Jx^lp}_y z*fH=B@`SfEP#BRyzhMhdX@z^-7^F>90%paJyS;IqSiuqZr>+yG(0NY%tc5i zxPa{%Ph>JsU~@2voe+MQHie<6KazeZ$Jh-0xf8k^l?oZ+Vi#C&iFY&Jx=o{@XQD_e zff0puhRNW%J|_t=2RUA4y0=r?DGQqs1gDN-*31eX9M$1b#~?qZLiZJjd!Ny}dMkF% z7a+65N)}Up`DvKl)yo`NsZii!pZ}V+Z(&8!N_W744@s)Ee*jpLpt&DF;1?NeWQZs* zkM$RpdYjpN?SPqlty5-J?vbL}N7hT^P)xde&*{7{K|X!|^`CajjuB~zC*P#C#u5)0 zXZ-3Tp^z7=4^UD-@}GhK^QOPvfUU~C7Jg+YUR@-e8YvuX9>`_~QhbjNU!O_p}qol^6Y4R=;vxaOgE3Oe{}Qvj{rfUloFuRJTbz-R~wB<{Y_dH 
zI-PgXkSl4x&XZDRN-O3c8j%e1RVQRsnsBzTR+MW7qkZ|_^o1Ro|NO&1ZP-Xk(vF?Y zeyF#jh5xtJ9~r+VhTQ`KsMtV2MUyg$w#x3aW}zmZ#d(MoP#yu_O4sRyxnkVz5${B< zq(jS;ba{k+anRZPS`DmHRAez(*#6b~*Ec{Z`m1NR61>-n?0HNWOT6{Ei19ve*5F{u zo!eDw3SP9SeP9b1Lj>A_R?>LUSWfNsAy(jb_%i!8n(q1`lMb6dZz7+0VA%Alzw1Mz zcB5E*Dmk21KF18nn=usM6qda{^qLcTo?473y&?Mi0u)>SOj?Y$foxu-VO1ehnDBhG zQOm?4$PTSl;T6o#g(e2rnfiUduXKnbyJGnuM#W9xu_GKKjQvyU# zuLXrqR}FvGbE(oY?Y-0msAR4azVc7Lw|Q+PUw4(SJn*wYsJ(~_-Zed8Rna7_HrF60m#o$%9R8HNEQc>&Ut?Rf^*8RGcu9^fVB%b@ z8PpbjI>^N{wWCuRDGG&;1cfDNKc=I~7DRu3&RktdmNHz6#^&am9eiCu7$a7X>XL16 zY(I0(M)gzAiQ0_`Of$0~!R?Oaz~IdJ>;0EuLv;&64|PDtS62lrf<HujG&Zv!%%jRI(Z=;U{*Hi-w(yJ4?v3dX9%{#p7q3Ii})3}KY}8zLTP}|(APWx zNFMQ=eH!O&v$LToy<(K*iCPIj_KAqo#yn%ZD}TNH=JJb4(Si=I0m%=B`{1MVqlmin zwqM|Qc68XRMT93Wo^Hi#V&g1rZJg+&`SI-bw;8S_eZGVI3CS(4 z&Mr`qKh^UuuHGUmBPVG=lIE`>HKakrB?Q>0)30mM0mre2Wl25ZJ+aSO9r_D-^Y~>O zXqnS-U%g`tE59bW%}&N^Slrv)nYy}9{L5THdZVKat#Gg>E?3J077%L>p#1bn*rcP|j==@4x_b zCyq^W_~i>45pPL)cBAI-u$%bP89U#Odv56Q!CrwWXLYIrIqeuG09@yO=``6*V2B(m zCAT0+AV3KD4uEm<-yCu0OpR{?Wd56Ovdepbm}UJx9e~8gf8qACKv@5JE>?kr|aD)tS|fiq47%$FMBx?K-TsqVMEY))c6Cqiq3hBJKM!S?n2c~ecw1G7dX(AUFO-hM?Wn-8H=r1~iI?x6?bwV!;j z?1YfVfY?OoTgLOy%dRZ9?uTAt7gt=vUj}7IVA>+ zkVk+K5>IDFPwfNrHx^%SCl_pJDD~1dq2Gt6^WqnxSHd3-Um*V$vb>d;dV;|hO$jZl z@OSs=1kT-uSRW%KUfEQ)BSl!?oJzkD&rUq^!86(LRT@~nA2{VrQWP?o8Aan}Ysq<5 zhTgEJ%KfEk^0Mhz^xa8M!SXkUC?~`3itdePhX)lQ(e7(Lf@>=qL#!sS2)n6;1Oj3? z2vTp+uf2i6iej6o97`m$T%41nWt`Hgxinm&gx=(?@&qw=0md=l{5#;JS>%jHLDct^ zOhgIa>K%{%kfyD9NE9_amHuo`JG(3($}toPAM&o*J187wH>jJ}>gHN$MWSZ@0abp)g-#(JHXni3P72t>t7H224%iG4bWH<;xt5}g~ovO&hGmlHpv-MiL6k#sdnL@QvramWj4a2^l63eY!mCr;tvvE(bG)eEr z!_-+^OC^KgUIlWhuneA9e_CuHDvi?B?(B!UMS;Kftf4(a7koO zr35CNI6_Znq|=UQe36Iu8(jKd+ImtN{L? 
zv!dSt`14GtjAw{7%vzP;H_1xC$f7~D@v9sd{QRhq$#%wdQo z*}W(_xL6zis)mlefHt#eAGf9HOy|C>{ZS24YZKL7Xj@CA!#(RoT8!<)4I#OS&NKRA zRy3=&*rcc&v@atvY|J~b`Xlw4f{FqT`ZtGfTrwTOjyf;gU%QH3{S6Amx z3m#@frApGIOSD|HT&2InVttW=p#_o38&NA|A$2;r&C9mcawSLEeQ+S$Ma+f!>nHOdCCuh>f z`;TUldHcWE37aF8bQg`9mMOVHo*BFA7t;y15Kw(g(EKen#C`P$m)goW2gLLP&i@|^ zJETq{mogn<<}=i?7I&7k{c08%0Dp>Ue^8lD9;sLN~!C=#b1;il9umQ=@*u@D} z-|nCavIj80kkO$zLB+YKzw7Z;ROW_NqV1bJ0#v)Z{J||^9UD-u3?bvKJ2;(=+ zFK+6dXj?Mn*!J+3w8og&O&zDPbrT3P>}L=E?QBRy$%@^tMV70eeBNpRKoR3*{=EG+ zi7^;hEbR;$2~zU2xmoW=&|#6JXgK$&ilW7frnA$EXy;tu8z~CCx_kcD=LJSPgbf=X zRRB9UKKq2uj(=|>pAP;f-F8}Tzpw2eiYO>tXz03%?}CxgLhQuS(`Odbz^fw$i4Nak?t?s!8MwA zt6_ihokNbn#l6euOcFa>A&(uV7d1I#Bd}fB&@SEpe$DDE_`_ZZ$IMN41?FmTneP*t z7^du6fduQ#?&b6r4{}Q;thg6|rR)+=ANgAi<9Z)J2l1yDKkNoXkKdJ^|M}ZlfF)mY z#AI_}m4D!g#)^Mur<1~lt&M#Acp-scRHHT=tfFXGgjQ6+av*78{^TaCkZ45Eik>wV z9r*0~(u}>rjc)puAt=}}J4QNvB%DA|yuErUXE;iuX{=67Hp^_P2g(P%__bLLJQl`b zQ1W%PnjQU~!k&8)Mi5SscJc(c?3d!t$I5E>U(T4EyDCQUHuxdT;^j%BRHFpk8A!3o z<3B_mpLi5WotOQRvsPRQ1T|S3eZ8D3+x^|b%Ifu6Pqq35Lz&#p>vtMX9V>zYu}wR> zNH7)n`*)v@I@Di0e~|si=1u92nvjZE?FZS!6#nUe*Fp?iTWX@QoBfviPcHE|)_?8v z)so{JmbJ~!th7J;LmhSQT$IyPo@mwH@sQ@|9TWWRz@%uhL#xgo(Gxj4F4p%8?v2iN z4UNv17l=FP>zl-Ez%M1UH=N&fbzSf5pFsz-wiPUnmM560dzs_8Du8x5g@6zK>^2LVG0# z)NQ#XUv)^N0~Yv2V7^~8#Er~`uOj}Aw0@1^`w}u#lv}Soiq!RB8hP`9tZP%n%P4pi z{KKk;wN7*|L)cwgvjSdp$ShDSf1I5fxh3BR8ZoN4NOL>NG%U7_(Khl46Y^5dIW-K7 z1wu7yI2Xk$Ly>g}?i~>FiS6*A5BErZglA8_fKR?9Ef5LLbCRjlUXv0Sg{;ZA!^N0m zLA`E2Kw_}s(p*c?e;n?gKjN^Y$(h8U=<^Pr_Pq5#(1y#GIKz%0?i<=j5Krtn^%hm@ z(Ehubdui43UK1=Ih+pe&r7R;doWCtE`@Yq5c-EL{dwD4+Zy#-ax|L_l#Q8=sX~fGX zU=$Oq_H4~30n$R(2Z4Pn_j61&o6S8Prj@Tg#zC49Pwgf%s#R%Nmd)pBH1IVox zf=fKU%*1URDxIC^0D9-}a=ZutdCyj7ah1s1H`WD8RHjmt+B~|Qax=vwcI7B?tXUM* zoh}z6f80ZkDJ^H2-S*0VUxEisqcqaCy~buAvM!enGA2jKKcIOT1YG9LbU*|yjT{zI zJ0(`BBk8jy($K{OapLQu97@9XE0n}o;3JbI^2Y_a5;T}Q-lEpY|Gu?W0&0K4mXM>A zeu9Y59#wY}S%;v=hRZK<@+4fuuwDfqEwIkP}Fjq zr`eMEUIc$gteqUK0qZNrZoBzN3d3J0`rCaG$A?Gsl8J)ZEf7Ug5oWwv$ta2e$AOQ) 
z7jF{4LYpp8)z94u1r!W!H&e8YaD9e*e#ynbDNZ`_YeoqNeYBkyVCpU!fqK=iZ!Y40 zK8Sq6Fayj9V^B7XcNo~9m|W~g$)>^OHFDRxm5i@#UIQTckz$0oAB9CZ5Qm8FEZ|v9 zY&px%FoL{Ydmm&wPoL8Gufv8rlZHX|lA}OmiKOmY+;9turxf*}ucZ3x)SxGf?xKRA z8mhGK43WJ@<+MMlMkT+pQ6mKdh|#-+XQKn+#NcU2CvdDi!j!Lp0(q}H!C_~N zt5q^1FB0Y5w&6=7_JIVU)d9uy2N7WPCyz`8hgjfK=^njc+>ay{L76=*iqRnpE z>=V!nX}(v=25!a{Z#jQI8=So;IfraeVQa*Zf$cS``7$hE2d{lHgLc2dyQw6v^TpLX(sPs&X+qtLn@ zRY`X>6kp9{eQR9?GxR%67nGR0f+{vD}5snEvT(@6J6O z+!2>I>z4zJ)Q+pMYxuhM+1{KR8R@1F7C{F@c6I?Sx%`!Yvvf#TjES9~9f67cOzwG`#c6r;ZMVl^u)r0Az*g0KSBQdeq9BLC#?6{s6O*&DHF zv-1M2O59T%(TPx$4G&$AmC>%Zf+; zg4_Sd$PKn+)JUc&`*sVuu)Y~9rj^;xCf{j1-cnOnyI>swEx$>>U8WUtu0kar4`z&V z+sqgXd7;gh*2pm|IQU8Ab&Z$Eb0HKq==4!&P*GmS_7h6wwF({pl)hT!s25WN zJ98W!B0PZG{!(EVkA6QJeY(;o(Mbb)a7m7K^xf%$Uzhx^u-9lJDGtBdQ&6{!H6OKy zAr!MD@o#j=Erkb!geavUhs65FMcA8E8mAL%vm+E4ziYZg0I4)h-8W5?5X= z@?SjSzk;{2Pg;t(kDyYM_l^;2nqq|0n6C1=tP=}VNH%;HvQKJK zHU}&uF2J@tx+>8`dujtU5jmHz4n;I%#b-F8)3tyTMW}lMw<=p>w+LyDcJ03~|2+Ch zl17iw7HV?lOEufyi`=_{<3=*c*ta?92utfT_j!YdO6zSk!vK%gGW+dJd?aFS4L}|p z(~ZS2;PQgW5?@V3i1rMP1OR@^SGm1t30%zM7uqE1b`(x-A5I1d(M&$kHN?TAo_yD*!!uAm4enH0kYjCOMokXMSJV7;&e~OM zFxcs*Y@0{1m9d!-+S9sF1{}u|TG)qedTkVA5H~q`fbVXoxGWBCl8W`e)&K1@&QM_}CE9IjZyN+Lhc_wu`}iB; zWy@J40do|{gkiGlfl^f3*c-pdTA+~(ZgSO~#*cRN0?njup#Q!A|;_z2+IN~?akLdDcPPaS#{U98u zAJpeZHhLRY+yUvnA5WTN4#zwj4?XPCweEN73Ou%b#hm-KZx!4XkhvnG_`LP^FN(X~ zy(;$>>Ob6C3h-qdsRqV_w+XZJJUG~wAEI5uGcM%xvBI3tMH=WLiah-B&E+-i^i`$$L?1rBw#2JK ziFEG6!8>VT=+-+tmbA|j(3BLMsE0RKD*K*U(TwqvOkvh9zIc=U}M4BzoS~Vg#uLhRe_2tno^*c zHsvq&)HByJ)|1Tm+LVzwrnO|qUGY-C)8%)N&c%qmnb6Z8uQ!ja&QPUqE;kPj>v-Wg zUeK40iY~?;j(*bcr7yvZFKu-#r_^GhxYg!r)=LG$u&rj75sfR7?wuR=L6kp9R00eI zBBc{9_oM+oa|SMLjwiV)qg+w#rji62d^RAY&e;{GopxG;^palH}2KsGfDAU2Kzsoy-W?Ff$jjh#?t) z<>KxDv{N7{x~*Z-Yt50Q5K$@0FtU!|KD4)PM^Exyl$Elxk?G`O?&gR_AkawLVJ5^WNhWV6?vAjv_SBHN%^dT{0%A; z5dnannA7p_Ggo)!-Ob7aME-b6PxjGKINY?@< zQ{kRxo&gz-GJLp}Arw$w??`RFSh>i{@Ve#qWkcUGx#hqxlCXV!R{pI}BLxosRfQu8 z@ggh6kXt{}{M{0+`wuiy8Hi9IWv)fPA7j{$xJh2}uC@ZJZ>`7X!8U788xIDq`J5z0 
zSrxQQAJP*=dOgP589%b1wl0zA+1Z;|HRv~S3mn4LPF63J9&^W6&ic_`OBN8OtHkB` zHUq=_F3M&z^wT9is+Q9yFPr0Br^vWiwoMjJdzD<@Oq^vx=fRCKlj#}9SxQtXlBXsU z02Rkq%qIho^q;8__}j4iw3f7+u&&EM<+_<~kEq-!-lhe(Wq|7UpmuwU%&FRLX!{%2 z8mkH0A{k~%SJy#wO}WsyXYd`8h)O*RJM1yj2lMro2scDE-(Bjt_A9BhA!CJRL=!Md z(0-AW295{+%pylId@abY;z;q2mA!Y;Ea&h+S0C61YTNiln;!=Oku8-rv3EKYJlur8 zzjIK7$bJlq?*mbM5TpEQOBQ6Fq{XP>jUf^)-Km1bp(n0Bqvq8{esX^*Df` zIBf^d!He8i;9uGdui4BG0j>XhRbz;kNs?LO8C%5WH>_xCqusBIR%|NXSPC>B0Szf8 z{-C=2xD%o;48RCU3#OKF7F@j*noa|!ggPsc4598SKtJdlxJ>~)UYU(PI=qNF`=YRz z@^{{^mHLzSDZSoqcVFZzMfBeq*8l`&Xe$D{t^;nzgHf)u(BKLlC9Gl}2q)POYWCT* z^_m-N?Vc}e&MeL(sFon--t)sSc}y!v>gnaJI5{4lkj8^72H-P!&pgkU#R(3)y91MG zKgD$v#F5_YEz5wnwtW~Hh`GDDkIN^~PJXR0*s1&O6OB~PIn$G8hxP87fAFpofZTaA z8mR2rB&fZ)t~y@&O=Fd zXv@<-`pe~y{)(n;xkeU;#@9ZS{2$Kw21$SI-zqs2b7Rk>bh{=Em{CY#_}WXxnBnMQ z0(79RxSY#eR@pfT(~Jp8;);VsEA0U>aJ=8{OXq!c1P=_m8fd=K;_Em1Iy0YfDQvd^ zz4eophli_aFxQ(U?Dhpi}G81JR0okPAqu7k+d$4#G+c*ApI~dA_ky zBf}i$6nSJ$BR}hL2U91uHrQpZAd{w-msc#&w~!zlgm3W$Pkym$B&$AS%Y|>ujEyQW z9RU}MnO_)Rx)zkfn4yfm6VhCT1x=x&yfAN69NzR0|7aFw6k-2p-1-QJEOcaK7yeHh z@c)W{u|yyRaxF4^#FOO{)LXb-sIL|&rspGn*OW^9<6E=#b?r;e%)fcudB60VWmzTX zCl)z-bsJ0~G@w0=$7m*A%xb{&l2g}4{_<~FGYt%^2Fd^8si+t}#AtU6AxDy3It;;z zG*!@wVy600H=k5=!8Fwvb_PUN39QpVYvJ}t9!Ov8XGN8Cja8GDc^sWdH@}|J?D?m5VrjPfp12ZUZrY3~% z6*en3|JHoQih*A;>s%{na#hqK`LrTM$d{Gvn-ipK@wrv>CY7=%Mfyxun%P^&Ix2wQ zXv64Q7eyP!Y7~TR$b4UbKcm4Z7Bv|XRKPbcMVtSWO;k=KNi=FiKJ^IU-d0xnM=Bf& z1a8u{pGb4O&O3<5KyHFa-EW;oCb~pXX7hkbN^LQH;u>xJHF^55N_w5;JF1v5xdkVpvvpvHh!gVj#Adh>3n4 zDFN$@Ap7n__q_>d*8s#E1jY72_!~S7OgK@KW4hHB{$~A2YyzmW{>?)w>)n~WRmTqK zVVbc33n31zu3afk-Uyd9j+w^3N-Tb`^=lxL)eBB*PX-NoA-H@iU+W&M^AVNDlTK?T z3L1uC0Q5=lDdC-jnyadZ;bXnaxs5TnvU#Zl*h*3fc);F3__|%*I*PLOh7MU~60f5w zNkOecI?mOj-L-Wl;ps!u9Re1)woEv<-t!j?(5A!31IaKhU#k=4W2O2*XEo@4(ED07 zf2?~E&~$`~LYcb}#4TOcpytUMfGmQ0ZoN1Q|AvHrYy%`fN>`NPp6mF1$$u zv&W)0-1XT%;F*hZ2;?ZfYDVyD>p_J>Mr}1;g9ZKD@`~pkX0?g{+uy6+H+445!3*!! 
zx{o(@h%MXVW2jQZJoJ!x0TbdJ)Cx#=NG}D+QP1c}b#E(`1UnsRE1 znRAkHd?4d$Zb^IAONs$TUKmR%`|x~QvKLis5a zw)>t;o0HL-HZ)-~?+J~R(o@JpaEqB?Iy-DGs9pCT(R=gmyvl69b}jR$hf*&EZCM=P%D3*H6qfxdyJw-iCt!e zy3{xp?NB^bxAKS-VW`WhS15lPlPQXCo==7Vk!P)e%e6K*#p2!aH~)jh@%oR&!S8x+ z6eKA6#FI=U9)~C}oNKzw%MkwIB#>!!)|scA#cHR|?b0($D% z)a>$-WI5Xz9sR9%d+t+O!N6i7ltx~XQk!Hnp0ws6V|hmtw!Feu7-HE98y-v^@;Pm& zW4cl~e#IAea1O}7|!vz-+qbH(XXw5cPne$bWaQ-~HaQy2RS0CZ| zylYF#%R#trorj+9oNWSd-iiAw0_q)$_TXOb6$Lnc7^uYp9G{e%Qp-an`l>G1TQYrV zuDVyeCyh4zcAF&L1LzU0q>7k-MK{|gJ4({>L$~DZk$n&<&KshCFCXp@$D9l6e+OH% z{qg*aRpV#O-g9~!M`JTUnI~3+C;HoE-0RL@+MviO3>${!`_!-$@pPnB4Z8N9E(g5C zj}Gc8#n;+f+Xc-xtx%+csZorAUdWDqt@5oF_z{<&rjVBYCf<8K4w87)alzozFONF+ zuxuZA%{A{up3Fd1M~u9t)@akHDK)-${x-ZL#lW{cn}LK?E`x{i7^EnkHq&&g+2#$5 zEZ?`6Vb3uhJhi-e@z?#AmxpeoS0|WFScRX~c6umfA;;6uw8coufOXZo7^njqsiYEB z6>BJ3wp~K1s|1oZ0s3?J67fe^{12A(vw@d~#69%BSEEO+$X^K{3wXXp$`#S)?|EXM z0+3^H5HNEa;PaUe1Q^?XpI93k$ylp6iSjlhD_~KF{_^>~8_V7os_2NgWh>5y>FIs2 zR<$A%R$@X2y_L_n>}oTO+JuWJc_oY?LBOKB!n~r&(zwtRZ<@`Jz}3(9Uydsp15ARY zI35WWm1(#ufp5bbG&ThyLdbZLG{;vfm5u7;5 z8E>ZVc_iX%AK|rlG572GJMzet!nNQj5>6K95Tt*ZolptvVv*(UIK1wre1mo*Lrp7t^4r|Jz^0C+Z2}+XU>nIH`J+2UD&jXi- zy9nHAlGX@eQ&<8Uam!AmK%w2w5evEtks06-phgj7{0 z9(m8OO3W<&N&Dv~Dx|gF=euUv)dvyF)4B0?@k#z=pDK#75=5TCah72L)ShMGtu)sq z^U>kQ>U@E*^ySFrz?#u!SIt)#Wyn#Kzd+W3Vvfz3d77k@ojyMr-mUOPfL)Lh>1RD~ zvf&=YtoX@nKsQ0&B|l4%^7Xnx1M`do7J>PpN$OPVH>vsf*>W!ki{P0QSLAPSr2l&F z&{m(IFmWOx4?S;H-?OA01dI%F<HUT;GJT1k5g0M$!O zTr|d|?{4ZZK6HyDDTquKebIZaOQd=1-rg9f3!_FpFzR(IFEXk!?OD_@o0-;HRubnwVnO1S+pepjPy$Kx2H`c2wMI7Vt@4cNz zUh9K{hag^+`{kZ1`Z!9Od#Werx+hVi;nV2XO0LMTnD3jMOD(ICKp}yDelraezemqc z)>h7=PEj|8XX4|1qWh6$p9+&BnbwL!uZ~+`d~9)same!Z95rvveD9{y6BZ`NCi@{! 
zquQh)v1XW__;v--Ie*{F)Gta;ALDuvsn=f}rKKA9>X$LvxpsvH${y7B;6Y_6g%U5d z5>#DtG5dVScI)!s6NZM?`wNZq1c_7Q%`Ru?^hVUlbnh+cvP6Z9aCWt{&1a0nCBdMf zB>E=|INM>+&*qlXE7114TB7EW4=5BM^V-#%}DjvE{fjbWr7e0^_OGi85e z>-!5c?{UP6-};s^Q?i`TNx17z6AJw@ewq@Ms7t*a-%K4g5d6}1K@FB02xLR1t@uB8 zMF5D_iQOQsMZt$mTqq(;{diEttjMfPZms4aVX4tx$~Y=CtdLj zQry97q@ZsFgh5nYn&(?jCgwJGnNEa>K_-TTNjD@xhEeYgqnn0kZ5fRpkD6ccI+sOb zrj~#~dlg`|Fgq7au6s`t9J)d5{%O;BQR}But7dyF-D=A?f2(kRK`Ct*T8B3T#q2oW zN4Bvr7#(4*?)nZdd(|PL%;Uxym~&p;)kA8d2N{?p!Tui?LVn+4*t|(R?jP$)k0Q zVLGWGCLG+};0P$!o}@EE3mKobqnvNe+)ZTo241EwIH-m@!`G1SkTNw2Mk07kPaj`f078-A8Dp_%2Czwp6qvOVbPfLP~qXCaD{T3J!Fy zOK<@Q@q5*|?V*9v-~`?%w;chf7F2*xe0ArR#!;aHtzm^TGjbC?!PC|~Ovcm1G6YE} zN@Q-3w#N_EsfgXg7|dr!7-6#Gay~p@N3lH_i@FLuaYD|L89N=T4-+1-Aa6O7M@a$C z%yARk)}mjLBBIdKGb`x%Pf;lH{E{!_UpGI1;%=~-pLZ`X;?I7XV|1P5rM?VRnBwFu ziyX3tI7UOm*4}Rc(( zSqURdXAD!11lKp@6H!BixQ#M1jyH<_-uWm7IP@0}*qFYw z>v&6~e6VTi1S2@Q(PUl64pa5UBs}|h4@X?I814-V3`lB>6Wx>6-9+5uFF(;ZuafWe zQ+5b);;+;FXQXRWAQAwiR=g3O{E3j*XXxh6V?kbxF-QJPcC6O1;XAgwt@BWO40L1R zZu19P>mSB+x1%TB@30mA@SBq!og;4k95u%%3fkn51~HrpLeF!)SVl@=0!Ha76uB>e zUjm<`J&G^i@oZ3@x<5NF>iDGjC9^k50Wz95#Qe?}4Q<=OKL;;jgI`(XG-fL*4ylsK z{j&9YZi?CK7ie{Uc+i1jc<;x9!yF#&SjcNRaUzV)tUhvRJjjfMo=h+7DlqQx9zJiA zfS%}Ft}O_UU3FdzmJ04%KblDpn#rWF&Yj7$t211@8jP;sG#F?3XUsFlK}52`FqdN+ zkFc_`MD$PQbhdonpEg%?Ee6J`j8+tj6$11yq}G|~eU*N8ALO5#2GbXRq1e440*SFd zr_dX?EC6|&v_e`qzW5A&)L5E9T$YHLB_>wbzShN1Jz;&W|CC7w+#t~xtWNX6e4(il zi$Ki}{QP3HY>vKFoL`D^)<;4^5}?lYk(pXv`%f7=h6ogu1TLsB%4R0mG??dS^-UTR z^OgI8LvnIB_3$2bsAg8$2FK2JQj5Fu5128yYa|t1(juI=Y4&p0hV)RW>*dXFD*qc&Zfu3W1sO3s(6Nf-AMTbE5gm0IV0yI}*w)SZSfuj=i zGtgZMEAerKzIwms%{n%*rPI%Ps44>yGdv0T^()_J+TW9b$UDL$; z5s2)ip$>seLCuqE5IR~@%P$!ZMPw-@7qbQY(DAKxiwpP%XV15R4vR*kGfz(=>JXc0y&jd}bsI zs?Ak!-Z)Bb3#qKuK;y7GlA*C7x5)#M*YV zvw5Ij6#C7n2vNK0)beuv#%U{jGJoq=5J z7YDg)6)7UX{flhKTBEN*M&Qg`nyH?Rp2!93)^^lOWjH%y_@rlC`Z4aM*Ffks3DOQc zMNuk1gc@^bsyE@{&ZKp7{_pE1bx}DxaFMFuh($6>EW~=;bDpwN9WlZ*8|J+Q!{0Tp z%^MqU-;@wL5m403uv>OqfNc=>cR2};>d4nE(WK|DLv^k$oXEy9bIgG) 
z?#mfxKWX%bcUOl9kjRGUdFW84fY}M26Um&qAvuJ9??&n*odCa@D3C!Fs9e3mTnV9k z>&ZVIiJ!kQi@4pVB5%2C)wWyOrVO{6J;2Ph67=hm@4+sf?u4|@nDO$zLraK4d&334 zm(t(?Ko+DLPKfwU;@)}RyZvvODqpF&f-adaNZA?f{tMQauX)+K z*m`{Xk$^yg``BSZlZAR$5DDO($Yp(%XMVcK2kdHB$@Gighbku>L1w( z{zckM|19_{{DKW_oK5*mIkR4Sf$Bq9?={d?U|KX7j%k+cm!~lqGlsGu@GWM7*SS>~ zzCNoGC04h586r&YE|TWWtacN}H8ns`rpC>MhVli0{iaYLd%*g1YyGRdyWG=Fo_VQ@ zpSQb8o!cSLwK35cRKVQZgQVnI?T)^yh2{Fc0>uBH{&x-sM{xH!i9+Bhy2?g0Y;IN^U%{Bj$ zZIG?At&0(pPIYW-WOn4+fGdLGgZb?wB4xg0S3V1#NP7@za!A`4Z zv|`&7OwJl8I_zP6WSR}_^Iq{=_&1yI!SNT= zH_94lVanY9zt)Q6XDnY+Z0-c^f_LSoe}hN}qpHkuKnMzyow z<2*a{q!+Ht&lScdsh6`^k-d-ycf#bJ23!N%VR|fAeHv_tOFh198;FfJ^am@zN7mo3 z!#;6lOcznCz6y>%n#cP?-mTU+H@4J85@lUTt)y4JHr5&|Un0b#un=)rN%uNUQuaDV z5SlCyYFcYjhI&{Ax1p|o7S|bHG~85JZq^u71;fT(AA{J6TL_&RZFI0nKMPUA`8+5U zs*?&fO(Nlx&qY`&*hUmz`?$l|M>x3C`D3WlKS(L*GJkUZ!%K5mkuMnXYM7Cq+J|H1 zs~{RF@2|cchG3xW{uH*X_ik6+T(Mday8R`okdh6_bqmgvihlg9_3TCP_};K-vFb@Z zDyqA6G{cY;p>(O~39L_&RLWBzkMuk&N_4{9eO21JncbJJ`iA?vWO#~5Y2#@>?hto#*copIb)RkVG(5U1QDF8JbALJ@X z>Y!MZc>Hv~kE!^Z9)t=F!NrUZxDCpD2*ot{^AiF1)4j%nFg z1bQqqZyKydRtS!Z1#LvBQ#_Qq-Gi^$(pmN#PYgy*E1DOZW8}?D`}qX8F$CmR(+}nJ zKuvXqwTh|r9v*YJ;YMC;Yib{(pmy;1nx6W~+HJ63Ln*UTq`~#gyVEtVzPJ9@m zAgLD2<6+?t`vH`k6DnWnSPT&XgBMH+Iwp5n3a(5($|`*5lx4Loq~-s(vXA89v(!a3sh z_%%;7e#5rbI5yy4CKm}<{<9wrL35k`=4S#xwE}IVFXiN<2yucrtE;&U5;?ICvlRZD%P>Yg%Zy=ta!2WwXCsj-NPGD-hH4+H2E43834S zroc~QKq&px$aiA{-g-UIbGL-k3DP< z77mnt*}2-e>jlc#(Zoc(7locL^@s-V`hL!#!C&|T+MBAxA%~9xisyRL=XyyopNufX zq2c#Je+7K;AMwq^Z(IA(Q`_$KBY&WXlOS|C=y5#wlgT5;50sUKq974-!QggF%>ACA z8<`q)!(GZ`erRf=rD&&O#9_K$C#D4MG2gvzs!1^Tazr@Li6q}3hPw}m(UUXRd4DI| zwdbOKvIS(PzpFKVu;UQjs7A~rOqZa#my22+wj$P_yfeaQtn-qr3*`|_HRcR7?7FXT z7NSCPibtdENoEH{itQt?SYRjkS&=j6hh5#g>Gs8MZEUFUj+Df^?oo9Q6c!7{!Aufp z(Nn$MYpG~yYwY7bMgf7~PI76FhLZRpNP`n?o!-Bt*!^UZ6kCgCZhw^cgieyY{87%! 
zUxvT8-;m!8-u>u3GFmOtq~q31SN6$0g}c%>>AJs~LkoIR65O#?V`J=Rt?vgpev;H%pG-Zf@ zc(EAwZzM1Cp;Zn(4W`{_4iw(%dL|H?5?anN%VG(xEw#!%kPk0;5b8tW-Y;$6rTMTj z5V43i@Lc%qO}Hik<#&@^F0a#$hTic!#)s7ymZGp|3AA+03iC&Q>oStna&p(^W|s$u zS9<~9ir0;Nw^|qV=!MrD$7U1T!1vGT`-NYr@sqv{d`T#*P4$xRi$Fcf%CyJI#~#CP z$gDK#Wzo zRPSBYthY#lP43t4ZSRkTW~M2oO5#w%4c|y-{vZ6Ua|jaTME$o6zn>agV}g?X)Zs!c zAVYrP>3L%7!`8}|UZL^HE5Itl&j!oKMer=>uo7EEaJGGgC6Z?{nJ80-{?uNo z(@SWs^R6>1h1y}7H;EQLeIpF>l*~0``VyJ65PXK7ib<+da%-vHbHd4j28p6m>b>1Q z*h9a{$P;&`N38K9_6?{B1zOOGYB#RtF$NJSoeQ@aTe1ben{wO`rI+%qC^nN#0~tKZ ze+#BtX3nMitxA5Cs)s7@3gqj$RmW3IhT;3Dwf?X2&>KdsVx%q)u;y3w89nTAzyHIJ zpdFj!*j?|aO z>T=KKl_DQ67dZ!MHMTG#V|+1)$WIGc-@|V|$LMDuYW`~sOxlP&zH^mEDP8}+*C!Xu_nv3B3+A*9?MrLVn#$dtBikg?f7EH-2<{N=PZkmxSt zEKG^u=7%xQJ9hJ3EHJxk&nY6dov9LBEa>o#1t11>V8>Y@XR;;yDOL!13PqtQXvLms z{2lqA-au@t>9^WlWR&d=e$How9f?PI+YBhzuvobFUB7y}#ky=SVS~elifL9fvNvKb z6&7qLJ5@Ak^LN;?U$L9emQZ>8aqod@$9?yVX^I-b6eN5IOM? zHU>J&>6)y9aQqAju4os9uf@1@hMEf zbSGoa#Xp;?JQ&Ajjd?%2ZpK;}SZrI73{qaGcT`xWk8t}BiXXIW)G<91Bs_{!zo-F%0B$=;g|{!z5qXO}?hEptOg z$yP~S0vh6CFQwd0N0|Z^gKC+)BA5`3NHdn_#4mn|0rHxUtnrYhy`9C#Ax|&-pr%It z(v-2-S$wtn0K0njJ`))k@%MhD^eHlvlnr4zhV=XnVEv@J^gA`d9yM;BOI!lacO!U? 
zXm38fTYxts`#u_CZTknFO34)l@N)bAG&!>MQk4!xx9yK@>RCAX$IkZiA%x$rbn z42=L}#^Y+^9~YCdOGl&V2aX&@xszj~HC8+g5h{S#`vKppFSK|Cg zwW^U|k%ZTBsM_`GNvF-GAHGE_=%McMnWckirWy3dr~^LAHeU#M`$P4alB+@J;|6{& z5=*}ZS18U$iE@xE333|L8;VpkA51h7$3}9aa6-Pc|7JTgo+LLa=G*GDrt5r}ovv=^ zkc%EgB|XkOgIR!Plz9nA01ZPCSGvh@k96xDv$mKr!}|W@BOBR?DrF^!yvAo*>ZL|Al4r6N8&u+<;YUenmys&hRJ?;a6>~ytH&QnOSfj+h%@##eN#4LO?Te<&h zJx8KHPqt!H&0D=wLlJ00**iBrMf@`iruI&29FGerYq*t12Ov)O)`N;P3)yz8Q z3d1&t$hKWLQlnRJ2r!FC%XXn#4|0RNb*w7Y)nkfHB@uWglEW6e#X+)QwK(hS>?DKHHFgHOe_5~u;jgVILVcy9 zK15g6H^`cZj?w&;)NQ*0VW)AXv!r`>U8e}~@Tp`{c3Y0v5J(dERe6}8bzLULJD7C2 zZ;miJ(3Z244`iW^vWQ0|T}OdcEq9(*9+Rlthk|C6?`UYN&5TT^)D^UQ%m;JI$oCMn z3$%D)Kb{pDRwS1>hb2aL`)-Br%X?;vK~LMsrj^pJ*5*wdY8-c&&Kxkb2Zzoo#b^<6 z*@#=#dKT|=_kfSFxAO1_ibD`ZpSj_pdAjPnER?7!L}&&{@f1Zsn=VGu=c5SS0p;wZ zP56iF*av3~&qEyvvI`qc4!)N@9q%Xo#IAWdBgs?ok~MawF6|6ahiJ3Nu?Skkl*w0l zG>B*7_vNnRt<*bT4ndHT>uf^=O>5B$Jc+MKxPB9TA9Y;n)m;i&ktwz#IGy%Pj*KIc z$b5Mqp3(E7Nz0H(z%2X%Et=-E+N0s-fvrUjv#$AVx{wgPp#l;3hGKB$w|2nb+j5iT zM>_9Q9emtNVUv+Dg;n0C^{l<0zI*qAlo%}z1)|mbMpq^g*vFjs3Ve^V)PyNURrjX| z@QmcE<@HCB>MQ4@IvV%?UPn_m#lSh=t-Cbj9L7#|mQD|sd)A~Va;IaJ9F1LT*;8Uck|ia5OVBO&{DBe5Hh6B`>&uCrvmLj8 zpY?cF{1~i#2_7UHxc`*Nb7lpVF6^vE;SQd%ASdH?a=3LM4>6UPSUmLE9(|1-67|!_ zOa-T)<)=7pXeE;J?j7onH{mej{RA1@z(CBhWMPKewb}ahmS;x`@vc}1@-A$}Wi7BNUAc}{&P^ zU?tfyTEO~Ej@}2kC_-<*Vu(~8iqLLaPWe~JBUni|b~YWeDgT`9+0B@P9m6{Y65L?P*Pa4!rhyswR5o4I9$< z{Qz4zn|xdBkgO7X`Cw4gw{UfM@Wz<+8WZS3a0YWNM5g8bEAN_bMe{jS*+UGK0iNo4 z%XBOVdZU%8&2|-QrK`HQdMk`jX)%s#rgcwQ-DxyImiXY_?nW)C_#@cbx-VS#m?z`) zA3vv;rdskvZ@t~E2HOq9E167=zi1v>m*ma9YtTv*I+`IXUZk>ES`fUPFCbD}4Zw}m zqtEaCcp6o6cP8UT1u3sBTtHos{OH9a)RH9Vz4IlEkZc;$Y?@9DW$8LX5GlSthlIE$E^j`|Gx-ne*m3g$@^^=iN*4manFvrUZcZP+GOGno1=G6;>Kv*%SIoommKk##Da*l=H_P0{aPbTQ>s zFH(;Y7S!(zIuOG=_TP3B{lI(EAeb}eAmG7Ub0@mH`~!Ihiu&b9>w&E6RY6#KSbp!5 z{TVT!(9GbUD%>FFtF=6_V04h(&0Y3?SK%6-lT`1QeW=G*PJfCr6A6YVEjMBU(RnRm z6w)


g@I&fr_JlB~p~_bm54g_@dTR zqF&wquzCTGem_-~4!5`k=wD_>9=CX(<`d1-)}F;1IqT21H3{<4ZLww{#7!dcUf!96 z;b?~=3mv+A57kd)(PW>|z@Oo2c$`_ppoLuR^PZ>9h~Y7%R>>tgu89#TqJ!rMRDpj5 zyDgmmrY?^Dp<*XQ3B7*$-Q&=@A=5g@YwPYZHuC0rbB<0iO^UWFv&9BNM*4QlRjUnz zR(+1G#J>&uhHTREheiXgh{TGR$;#48SpDfM2MKo~q7-MVY%CpN%3MunM(90>Otg_j z9L)`XwsJX6WG!n7;VKJKMhLnVkJ4D+M#x>q$iLrhFAiFi{f@_usE9))QqbI^@nnKSYT!}tLuhZqDk^TQ>ngW_<&@%)}l%V z+n&IJi=@yxT^!Fw6L4W--JhERX2ShAk`6sw31xDmhx19GZ!PRC5wW7ToR6gDPU3=M-|3lYXhea7aZNqeTD~*K2Qc6fk zN+~MROLy!ptjJQ*DczumfP_fr?$RI)f{0SPEE1yBf(k4u;dk@A-*de0PoL)>|I)*~ z?(3SFb7sz&^Ea`ul^|4yYFdcQEBdR$Epw8DZ2XRy08AV#8P7smcqbpPecBg3i*_#_ zI*1U3QcFeZ)^l8;!Wy1Tl&;rjD@9jrFtt%f5Iw;*lj}X_hv_}PLYdVbLN*Gng!&k; zzeDEZZCo(R4&6>l+$~75Ct#g%!k8e`VPvSZ3sm|buu5feBgG_Y^H4+s(S=heuT0 ze4DQ+I}IJ0ouIKNj!7Jlpmj>6&mnd~H-=Z5Q&|Q|`4=49w<&UiD@N%JT`*P`Cc4i8 z^vjpK6~j>S!0mOq+BE)sdynxzhLV2v+Uv zd`>sgyxgLcqb_osiDV;K;YUy1n?TEN?`fr+=tYR{(V-vxMNs4_89~ zdeparAMn{W{n8 z;fozk`7UOhdQzv&ZGe7s?-d*rJ+m9*!_bihm!&G& z(dmUgJh-3Aq&4@@!1|KT_-RJ4J0$(4`mcv!AmG`Pl6{0@NjaTfNolPiUfCKJCH(y$ zy3_!3wv0H-tSg;Rh)TR8?%WhAs`r$yNS7sVjgUGX9vZrmVf;yjomzaN)V|2%3z=k* zXA1V$aM%gs!9MmEM%~Bb*STT~rXIcF*Zg1z&Mlr(`@|-yX6$+QJlffEz5Odu(#R{ zPc(a(fQ6>{lCzXMKpGR1TrQIIclshJ3V7&(-~OXMPkK30 zTGCZNAh@a5Qip&vsZw}j4**JeP?W7^AXIMh8?Ww^iU~u2%RK&UD(OtI9jY|_q4U1^oAp&!u9qC8>6|fZKhnVU z60c6Xj<#+90qlYykykiaH&Aj{Mq6`@ptrfz{kP#aUIQw5=WE@?VOwLoF}0@+w@k3u z*%&SxMTk!LrCegJ9`e_tWRjAs3&Ej1;qT7*A1#Fgf(jB#P_Zs%%j|Ej2sw6M?=b3z z(;7ZkpR!iluefd>C)oQRk-HgeI5jx4u4e3%Zrp2@GO5FPs(5w8YdPhwdz}QIW!b?5 zX_KENa`{Be4VOaL`a=DdDc``$gqP#IDQKX4#Wu4#9K!PjKx+)Q0l`HVv zEYY$OB=BuaMZin0P)MMSa(d;RvLCAh{}le2*3dK8CLI9C$MBPo z;0D%#gRN(0>uawN6>_OSJ-UY5@`fz+L>q5ZV;KF+vF3H}GN$$`RdQUnBno_DMcR~M z;Ig^K6Fm9J^};4v{H4zLuy~#_BgP_H)9F7v-S3!+U|Dcq5(!ME_$7ZX|{5sLuQJ-xOJ1y^^Hd zO^rH3%J2ku0diM5?*%)hVKTFOSVqqC&($mzA2>{3af`VX+j|BoFZyh#BhYG2{$tgw zsQJs+g0+vflxibxMjOSJWmN0mck=tl??1Kx2%I;wuW>xFs!!MXXoJt2-B##KIXkNj 
z7?3WDc49YNmH~f~rc)`qtr1;K#h~fZ0FBI%8v<3p$?1_5WVy`LD&g0MNO37vua#DZ<>#-uxHNcy~%rnG_A9V{0xLFgZQ)jHe1dz3xW7w2a^FX=YNB(&nsZI^@=$AMQsCc1IuL8FG>V`F5Mn zmYF1YfOgX5My*7^k;bYd2<7Erj5^&PwCP*T=bSaeaT>{-FFGA8G#h&fM6UQiiZrwY zg%MVQFvjLfPC8AIfd;u5@S?so*xVnQj<2fsfP(RZin*XN{lMY+krr-IXeZ^dAao)F zxr?MXb~{+Q*R19>F;0GdoWhZ?Ck@~rp~gEv6%Qx+P4#;t(69bpz5V>}=Jd)C6~tmG z?Zn{4%cRIB3-kAL8~KY+wO2~|=mg|TVS)zN#9$HE|A?81Ea9@yli4k-RYF`34d+D6 zXr9NM7NuIg2#vmd9`3tRP^2k@crzi%LU9i&OVn@E-$8_^)>Vs{}}{UFPFZ`n zT+mac3g*^Ao06bf*k(8kEk4RXTH&4r##`F8;oRmFBjed7!i*2Ctm2=M3PVX7JjFvO zwyUJkRHdhZTnGg9G3fChH=PfJ*ee8v_$E2HDv8$}@wXqp3k-tTYT~#I;xk>5TuQq( zVHD0>ZBzz7OhzVv~@_yQym1=dLWjs*X+l zwk5~BNlo(VJ$ynsJ32o5Xb{M__5SyayU`l1N)sHc_N?&%6Y5hn-*r*Dg!kXawpohx*pp8~*vWE1Kx&KG%_8KyxKG))(L8gYJ zzAr=2MTU46Y50k{@cr~0M7y9fv#a=r=3DS&n;3ANKH;6@yc`*1e6RJ({lI$>uYn#; z7hAP&J|o7E?l#upa$8f0x0|pVt}Z4GTq{W>)X6Eca8t%U8r=<5%1p6~91WWrwzsBP zWv##|&%+Bf-yoO_v1)NUH=@9Iu?Ws)0*8u!g96_IPy8!&nM+IR9LIO$c*#rYEvgNc z^62asEHsM-IBS^aHriZ%OF?nCnmxcMk=X#l6EXA2X(SJgbK*36b>;6GnM_olB|9o1 z!yq+7H&xbEGS)$=BNpNv2u>!18qD}Jq1Y;t$6EBZEUm*)rcO77f2huuA$WLEtOlcL z6zob1wb6r8p%35ZB(v2lvX9OdOP!8s=xrn9plbeBHkP$}`hXH*HK5l5aN!?O2-+Dx zL(&j;Ct^vazTR(kH%2?SP|Ey%Se#F{_o9r_Bx?T(IgfQa*`*YFEQtgwO;DF=;;w&` zvFc=Xtj9;)N5`*20a%R}0eEh`_V?Eip$!9R$^f-L4V|Jw{WtL;=rx(5BrIW?f5K#O zKGW~a{)kEirY{J4FP0oW$0Ye<&Qf949?3Yjw|IlW2$zxK(UPmwhei$I?Pb4@J?1y)E~$ME zK`gp;vglye)ZDY#m*!J*sd&@~U8y&02uJZ&#;>VT=}ew7_Ot#Tc5>yDU`b*_ZN9B= zs&qv(6P|Q>5Ome{`$7QQTghc!@smHtfIO8KC0%-47=_`TE;4l{xBmeQ@)xL)SlHV7 z?l&7{uTRMg3(SdFiO()$Ivwmk-p3S3t%Zx)x7@751p>6qJ7(0vH4ed0?ru1dbDmvG zf*cRtHMp3YPmuwx8m;AZg$c!zH%#nk%v8IJJp=gk=K)8T0Ya|2(xnMuU+K6B#Z^Wp z^wWJZ=(S{!7P&6>S@+~oD1*sF&tUaK<;y1z#j^ulB8XhmlLBf&pRn8Nz3D)oJ(gW$ z^{9!D6?5D)zM!1eOK7bj17+^rtOHD~xIS<0UsH?kb030|Yd^}r=_xiqeUdyQs2X1# z9mQnVY_i0WDLpxa=ZW3=^}DtRdb{*1L0Wh3hp~F*{a){pGV1(R^~6^EvfsjRcXet> za-J79amykRdr}4YIj{||Iy$S23y}KX56B;)qNCpAi7&D#kQW-xA@0@}HdZuloYbrg zDWryyB57aVAjo$@jge*$n~{;4r^tXO*rV!LwTH>LevHu)qG{WnEaVx$W4Y__;@sx> 
zAT+{Grq1H;>#-*Xt_RF;a#^eSNP%wew-Tw?G)UEnG(7e)(K`|Y6Gmj_&A_R$K};9xztKfnwT#8FT<_)r{h|x!!WP`){u^=F!ml_Y-@$u7Xi8P4?Y=f`^{K@0wXH0b4@BgJi&DV^u%u#IY*&tz{7>yD_+ zaajxjyNOgEhhZK2hBNz$u3FTYI(~rlxW&Dd7W{s~&!G{k+4E-!hTRKAhP*qs8Mj%^ zP&ij}Ql+t;7>|E1L}`kf>_` zt-}{+I#n0u|Dr^0OZ@{e-B%4(v8#IiSm3qWyaW^1{MX2>nE_9DZ26;nm6W&AeUZayw*Nz6dtI7Ot@d*4Cq(4g-aJc`_lY&YDc@xzQqe)Sy@H)g}& zTsG+bRVKSlhY6`;g_Vs3i9B3fXezXn{?_;SF16RjvwTcggZl$u0gI1?WJw3{$=Cpr z1y$H#5RyrC$OTY!LwBPO4;})*_}8`<{vTj`!~|RoK50_tyy~pv@yBHWZK# zszTy3G?E+v0l_c>U#g3A0B$kHt@C~OjkUa7Dlx(2bh34TW%rP`Iak}!JO+&?`$^}N zl-_PKDP4IDLA*xLK8RAk{o-a$t#O-VW#_=yLjFXmq4S)stkZ-ml)bq?XZoi)zy9)q z#HG(tAKK7y8BMs~WIe?-N#8o!oSjkK=AB>b43}6^6qy%@$!teJ;r?c82ufRkb_EPU z{V%ku3&*@S&iL~RA%A1QWN8nWU94w{>93CAAk&bbHuY%*BNwb%T8_j|5`MdNa`YCU z@ign|=Z)QVDnxKq_jLm#gW#%>*8~jg8veci(7S)nLy{2wcwTz=T9sD;s-kt-?sxn# zt{y|9yDOX3a(m{XFW?;yXcC#U=bnI#z`e2cpSn7ck=|4{cdQE7-;VU=SAKSTlczwv zB%_oBuJ^52K<+FzH%5+rm!ETdY`97eKo^eg8kt1WyK~6It&8)T(>iS(|Kf!I3gc{Dp7|Da5e4gCZs(1 zB$0(2$Uv3FJ-&Jg1N|P=C8N7t_ozR7w#SPTdyPQlg;kk5?5e|>cVhMFW0W9Qsx(R> z6%J?I!+Sh#-ucc|%|c>eC)1edg=G*1@!ypzpr`!yU7cKJ2%l|EeS6hHFh8FL)VT>- z@W?>1px$)~0g?1~LS!||@2@sTr$)f*J7@8V`s+YK-^QsS=1V4;em&+G58v2MycwLc z!#$zSMysWU71h+1jcl`lc1{;G6@QdoaHFF%7Bp*V2y(ywKF|)C5Pe^j6^}a0sHqj< zibnG$pn>^QQ*5QO_Ke}K_mAmiANw|d?qA~uO3ow+UmiZ(wKrIeF<{|8xOKmu2tKi9 z&4XfXJ}3S|*O97W136ueL4#IrBw?9mkQwfIl~35=X*sR87rxxE>nU6DW-UP5T~)+G z;8U;=V-}Qd*&Sdi4=}30KE2)C2N^6|pPK%A7XTWcD_I)wH&EByXIiL6^Bz;~2GCK4 zf(jbvlQ#?F(7g3#b)R(}X}_qn_NLbTvN7D+-T81zTG696N+pWR`Je_eU=QHlGQ?Gk zn|>K};nkN9lt2pt@mcho;~8is6jPmCIo{@#SN^@sZ+DL*4St()d04$aNLP)1v0S#G zl-+fb6l1V-%LMtGsWndaq48eHc@%dqqg-NY0L|_^=t{7Uq4OOxopQt791tstPX!u$ zHwvulM&jSeNDD;l1jM@DU|lkMi=pyOvHWZH5pO~FiUGH7FCKNx!{o|VLmJkSChQg8 zZJ?cCZ(WrCcRS&60122KRmR1F;2}8@9;#Dfc&*~>lB*?IcB}6a-8L9Neg**5ZNneY z^Sv+c!-CD?oII+Q-|@~HGCg=J=NjUTsPUC5ea&}LB#pB<>}1Ro5v3(WMa>{%h#a`( z08dGHkO-{xE4!n$mELYsZB8Omw#Z>0I{3kaz3Ndj(|@M*~`Baw#@saPBoOuE$^2 zkL|qNIQ0o*hX!82oCs?X|4>1@CA1*7=S{No1 zbWmlq@OY%|?)Bhn4JSZ^0a`@BNOrJP 
z@AuFw^x5zq83Cey?An72DFI-A>Q4Tw>?c@X^3YuauX)xr#ipqcp+k3{8j2e8;eq;*^waFXAVhOB$AIl z`--9OWR-zz`(<+>5tUt^%tcbP(G04=+>1N3DrfD?k#gG~a2|Mqeb<#MvxS}<3XI>gwJ>qrn`1|13{<%$ z3c|vKpC=^f*P2OxHML$tJeUW|uten>Y7UIqi%$Cf>=B0buKpq2Jp8-*;z)Y&=t&Jw zd$(xvo?!PWPVihTs9K||zleE6$wbSYX~(9qc)b&sofmfAlNhruPf0KTn6jHErE+HS z&vn&FBG6c9SYPg;jK)1^FFE~z(MvZ7%1$Bjn;W6u!SJiN9VbblVU$d=ao6cDIO zCv!p40>gK&U}stMImR4X8O?-96bY;rWRS@>RZyvSIq%0@jPjIhRqd>%lMdbDc#hO? z1>EYgQl;-#+2=r62w-$AdE^NVC2z-tbnmesPlI~~s>uxC$7($uGeF^~A^<+|kotlk z){xb=_2Jd`d3oRs3ECOKxwV%r3&Wxr zaQa0ZxY(XZBQ4?&+i0oP9A4FaeOJ2nAO)g?@_jSe#uXyZQL zNqxNS8eaV818sQN|3Cg@mH0RM)g2xSh*AP9SY1qmt zZL!8lyQK@j_%Rz_TGfp#P`W7KYJ~z+nHd%f(g#7hPw@bMLDVhoEQz>>69QU>wool#W0S0jj zSKl~!c1J7%OD)OFvdb9Fr5#vS*D4LWA_n89T}idC83B&l!v{2*92n<)KidOC&6)Xj zVnEaRKQYWZk#FWhAl7P+XdGgc8}YIwB}c-C){3?d%0RsU5MS6{a~3i}V99H9)U%%H zMOW`&-h1X7|FTd!|bI`LhEf;WTe!N~Wuf=->gy5B1ZZ}ZafE#-F=lQyp?=Z|T# ze`Uqo%Si@XuqeZgnI}TXo`YY&CP__)UV0Eq@FZE~8?S3GBL%Z=w5D)7fDdUDan^0= zGhUq}q7S%UB4@nXzgk80Nn^QYsp=@=pRX<(Vm*N5O1SZB@GZEitZd?VEDPhCKS9^S zX8?TmEeR;o0ex1P70YFy&-yboMv?jFI8cP{l~?J$|95miNJZs~!*1l%NWwf?6jTc* zRp6m*H(U)R?+tC0VAorKt4J0aRBXt_0sJ;h40=8z}#09#6;ezHcDTAj~arMJ0 z1zi<`ZW~%ztq(<- zeWT*Lgucqgv!%uG^P#+;ogv^V!D2v)bZ2eOY5EnXpaA?t=yzKt&+e=Q=wbUXM*iM`4|VNvNbjJ9P^`*7c^pc-|-Q= z?)9`DYdYbBD@y_(^>Ncg@JH(Ff*SjZfTR#Xy>mLmabAuID)<_@9i8e-g@J#Rdrl|1 z8{oS{;GlS4#%-f;)we;=05diE;$78o0^b&TtOsm-w!H6l9D5~3+ zH0nkC2_L>F0Nnmj=j3_4FYe+GODrR=?{HVB6Ho&lyOTdOYVKiPN=JnhynFEMhld?v zLLPpSfB~sm_V$XvRq1X#<}&>$JoOa-7lg8}d0!teOk=^~?SpkA=k~R6#5N2f#t($O z1%2)4qeFQlA0FGX+JCkMT~&T>JXW}9*6hqb_yEB^fZ@|zk{dlf!XgZTzJonEaXv8bbsSyIVo$zT6qd5bPXr8o zPWhhs{L?V^9%X_H z@*kL{XzB$zw%zduzf2;H;@2{?QeWl?wi=PYWFwi^`JMmTCdt`oOBlN5;Y}wxaNOQF zS-h@-zMLm6oPoSUWxs3#bd5%g334-i#Ucla#d8pmdT?Ew37AeKbS^L1i9tzchGXaT zOQ9%3u&(c_PbSH^HpXIxV9j%crh05gwgi+2&z=n8=i`1Bwm1M7XR2-P=|9vG_lx}I zcSf#T;@fquFGn#fL-_i4H7r)yjZ|KtSHQpq3JAcN$haPsIT=Pn8|x$kz@u-XQ9*a(a>H)jBD7O&?A0&XvU{NE=an-TC42&+Ci&O;rNOUTs zMBY(^?$7uKWH5+W4<4`OwsaMWqX&}CtNu@=U@vI{*&hZ9)-S4&{kUSV(TNYP^&NJX 
zE3b3t;x5m(~BN3w^QzxKq8q7T;wOO-fSs z&;h82?}t8K=H&q5?JKk!1=3ML_>Qj#En#HNt~1prvS9{#a`?T|4#i`Jym`3|*PoFZ zYR>&*+@e3SnfRk3-EX(9Iw~hj^G>Su3H}Fr=ZBEP!+OQj7dv0%7~)(AoVjI$Tahoo zu8AY0bmBD(CC?(C06cRwuR^!mqX=7X#HA6i(KBSCmS?(vCjhmDBP^lfb*jPG!Hi~( z-(lhbg;mB2G>k@A?+R}b1V$(TJSZUV;bon^0>S@2`?|(4@9UrMr^5g&th^vR1u*;L5&3@LuedJsjsAi~g)RQW5}o<$z04TOA&uM8r8gC)O^DUvMQcXla_q?quOsVMGh`gR;Q$K{$f&2)b#f^E@i^)6Kn&8%JMu2z%+_UEd22tNwP}iG67o_t`HKo-B8bvf~^2U-*EDkke4Yl`0{pqPRWX zm01>Le~LAZ-g9A?1L&AH_&UH_3APR}{paOrU+jE3O#XC3u>C17Ia7jpdQsy#!DK_3 zr9p!t8aE=_NWNRP7hP?9r%4v&1+L>-x zPdp<>U=R^|OP=~=_Csm6Sb<)&8^}`Wo2}&5C9Mz(u|e}^8q_!vxq-Lll!x9Y&NH1@;?)D zmbU?lHAd!U>+45e5B?nXO7gYewq*Y%!rQBgBcOun5e7(brxWY|^SSGB zcbg}`WAEUz0eJlXN&h?~0Y-FHd(Rj4M3d{0j@Jf|R|VPwBBmZM2h#i!-gGMV?HZtMyk9I?Pm5(D9Th5v+(>+@543ywU3Xgo2wx9c zmNYTp=g)gCmc6k5pU#`5w0=ZG*$HL!e+`Ydh5QYjgPsC+#Cg8)5|_kz4?K-T zPPn$&*+~k4asD7>r;|6#-1HQ6+gIF@$o9PFT0;RD#9js8V{9_A;G@0wSJ13{LMlA; zlFr~#8uI6p=)x!KEXqHbTCMM0m-!~8Sq%7CawD+ccG5RkIU;OP*{0_l#AhJ`@Iiz0 z02lATyy^SK3CLflEv>&^{X2OBU(tjYp6`H8q^ukD4f^M1@;}^|&~nXYYCAb~Qe8{O z#44@)hF;+(9lytz(QksxY!2g+!Lkec?!Q!Abwez7NZT9Shsk+!DP{{VUvB@}`%FG> ztzecZv|m881F`61J@IbZgVxE|>x3L$Xeq9?T;ntK__VGEcsZLlE(&lprsmr3?4F&2 zGXM^k-L*Qazx{I$3<$xwpYG}at6RGXT_Bg~g=ENR`ZG!=m2GppKs7Uirp=xt1IDT; z#mT*jIXJ+=*#f&=IP#ZrrFAyDlUClBh_niI|Ry;}$9=R8BsuRVfq z_pTKB^6jaYhb`9aIPt&6dhliy{2=VWW;Rr8$MXiTUDLyv${I{5@F>KmJZSqua~h)& zAIyrM74`G;o3`K|91anNzTc8<`MWKFKJy0iy*0u{)9LWOArNo=^hOf>^)?gxikVB$ zxTBHaPI7o?+0isT?MVwKbd}i1xDqOMswD_tH02 z82NNKX;J{M`r-BBZKOLRc4D);uC$|ctHW2x=q-Y^j=0=|Y~ETrUeSuDG0K@7HA+GA zlka5^z0l7Pd6$;4YK3No85Fnp3h4Ptepf3QaJ%LZ&Y9uP7{MjR#VUukS#7Me*at<+ zD(+eZT^REdTO*)DOE#ahqG7lcPVJ%&BIyPODCXSO(|OdtADvIlNP5%;JceGR#)_?& zrhlvu1Xy9regg(-CSnWGXiQ*wF4G(!$%44z)d}qBv159z!?D6~1FlL2S7nQ{SaXy% znBXevsHx*HZ32aU^ui3#{XllR0&L7Wp)>IQ;AYRA%;#=4iti@0gKe5dB}gj4g3zWF z8lZ*}Ko^ryIS>adF55^;kj7DzAL8?%w@GR;FK1Ja_5kYS5Dd|`zjD*Xi7#90Z$QJh^NZJp8%ycjJ3c;@(> z50*9AXnf5?#n3-Vf#k(x(C?D~U^9)E$zqJ_uedbTv`1TGo=WBTM0#hYOAnA&iux%0 
zT+H7H0F;RdAbI3ZCE_)18N_%&6oh^A{lHOu8!OE=bIlax&;b@74wbq1*3|DdFygEZ z`^*YD@&mye#(s@F@p-{L+==nGtdmuK@|}2?Yh$M?Xqg|Ak}gzcv6lz$NRORpi*=lT zf#plr2wFXnCAL=QF@wKw>61FY7uBibP3f0XHJezGZhp!(? zV8w^#!b>5K;o;Y+oE`IJvQdRwGhtbz0dUlaqV9O!EETD;7-Dg6gb4{Z6E+ioF8Uga z3i?`2TuzIi94VHl#oTUk_lXPY7L5wcM~7uKU92M&EUAtSZ#NBVoZ92>0ydHQz4iU? zYYzt}$mc=u*=d=r&_V2}C_se+n zV?2H>2`6c^q%mpsDOt0Z<(|2AuYEh8ZQwDZ9WXQ8rEHV+w|PTY0rUPti0RuCf}iZn zd2oGJ@P(SOK#-)!LJhaILr*mA>km$anj=zrpdDHul-6v9yZu0>G=zzn$raMRQKS)6 zBK^o$vKG086^4RYUt?4Tq11oc2AI9d+-m!MrIMC5OP+&MF@s z^|MbdXh5^Mq^JY4z(Mwn1sFEIOlz_vR3Hg`eJE0>3WoMHXFEDJI-d(@7PtxFo7tSdY8oka!Kz@ER0a2l$ly}g74DnpNe z%Fyrpi&ey-_z>6zunH#7uvOsi2e5zug3})yWhpKJeKt?9>&#PYRw47|4j?aFERg2y z*KDMw`;19%#gkFQrfSbKz^%AoY$!Ig;AanA;v_~kK6I3Qt;JRH&9b4!=9eQMq0Ucs zy5f}MbpyDC+MXJJ7E-{P{HTs40+s`|-2Pn2)FRGOr@>MWjBU~|p+IU=)DFxrVsDy) zo{h!peqf!1JsX$Fy!8%{5VSLP&wB1^qjA1(S&3VMG)hc0;8#I92+W4Sw6NnYc}<_7_-lAHyY;@rxOHgd%6Am z^b7Dr#<#dCgco?+denihGj60Ej^^69rUzkuiOwkB>lqFEAW-%K*kH!A;8x?v_4J(t z)$WK#F9_z=yE_}ZCl z;{P_1L#7KOIgiOQjJ+#$dZEWJ5X?eKn*ztrY)CS>T|Ox%1>9R* ziDQX@;=InR(e7c9ARJgKoaoERRd_DFZxcuupiN%c9Nys|7DCNtN7nY(?WEw*1 zKD7!+WJHHU7O(pIO`ys07?a)WK0$<|%>JiuJrHERp9^v1Cv3 zw2?5LuGsFS-Xl^+{FFvPn(nzdPn_rSwphTGIEDcb z9{GMLRMz!-tJZu!=#`FDjK0uyKl+pyDn?uOA0hg0_w*jWvBe<7q4 z-jCOYQLpc54vkf=DW;9+Q7r-g^R&djx;*RUn&hzg6{WLBtt2CO-p++yufT)0cXnTH zcRT6dE7)=Ea+DdSWXhIZCm&@Ix)W)dLyaXt#kNB9`pR`%7Gn5`CaFrU6Qfh_MZR?c z5zHHNBXct;V^PAd)uvnmAiO>D38>h1p5l|!5hoZgoSgdiLs&HSZAYQqjz`#_XrIEw zRZ?Z=8_OAs9c`ubD}B6zHA*RJ_&HSjBTKVIA4I=syAoDr?TWcW=}?Og#`#5xnc| zw=~n0+-0XL^f}RD^dQRP$ISdhyQETszQ_u1L+-)Bvjv_tzfW@krD6=A)VU0J=zuhR zE__FO%aG=F6v;TcD7sKtI$yv~RXR|6=UN*QIH_ z+(LvFMCHjMGWxe{4vC#f3o3jRoug#g&R^~*64`vga-3_#pX4{@zNZ$X&5xVS(4zGN z_pgw|P5!piv?5v}@ylN?A1$wt%-?ak4#=(sfpwU#voTybN9r`heTZq&*2$Qi{50G8cO%1H48Fvk{(Ya88^6MqnI<(UW&5GjA@HzR zlp5ilf6$H=eUBm?S06NGA=cd)warZ^b2hwB&G@8J$<(o#T{+^M=Ywr6;gnV;fhl*a zhp#T25zw5(>3mmc`$t;N@iw~v`i%P~WC7&b*c0={nt$UMU|D$kMCKRIOZc6bKAdJ? 
z;NI(?f7(FQ?1#;J(!iGyIUQA3Kt^IpvLI|yyJ)o9L2qIsV-W8W+r$^8NkgK1xle*f zwgjRPOPg0UX8KKqjsxlVp0cLCLD6pJ)+D%5#+{mDpkNrmkGd)B&|9l1tcGcPztbf-}wAML>oEr?*Dif`oz84PyAR~lEi z1R6WNx@!y4jT0L6#Y^-_`X+|M`HmucY+3t3xDK9}zF_QkRWjRn7xd;@-{`ZeCV7?g zpN~Zm11spsdW;@$I9L1K}bABk&Z^(Ay&Tf=8pa|>{knz`CjP*g?LuQ zG-sl_n&9cX9kw<5wR<_B76`RZu(Vy!5thu;jwFrzXg}|8aGEebkjo;<`gz8hn@Ij^mCHF1 z%=%D@?YavaN&clZ<{&jq&uB2gt?4?4g3Ct|2J5o1s#r3Yp+tQv6u-OW3c9hUuhff<;6HzQ+ucJFh3)Pkedh3wSg33`4b#%{*}yR|Y#+?YOikK1A@c#* z2MrlCnO8wv^|tPeQnOU{S|nK_y9z4x+{h7G;(L2wvs~x&s@pQ9*u?%e43`q!RLm8os4sg z7{Ri96FKdT?!8%gJ|P;&Q#Fwvbr=`-v$gSmj&9 z*CM?VMicL%lV19gpv>&=(A-WZAwhlcmpMK)0`#m;IXsE~9~I;O6J4X49z?DX(j)#b z)-s}h9soA@LeK2YYYd${?19hlXg+!0?R$??Osq&Ayvqfxvyk|SD1-VT)Hfyl!0cDn zRgfV}KZWdx%@mb`cu5W%GKKDPFscMlKf1wxg@xu2Pq55{(sI$d_(Hja7^z~e>1z4W zLu@(An5Z25J3Qs?CI()`y@~99gqzzcAByQiUESnjV6dm`D^N?4HNnuWAQ^UzgTVD- z(G0dLOkeGY(P0nL!1MNi8QNTcg*??86!QRBpit5>{vW`?(+ipAKi&Jp(G+6a`F8d| z0DfgT2Hi=0h?w<7G7B9ZN;_t`WrBK>W73k5kTjK}CvOx8g&t-yC-p)DLG7 zsi^6e(6>3}@@P&T5fupt^{ISuxvmu-?dpSLpSYJJby?89pUExq*P^eIA8oHMbz~Ds z+B)N2TuZ*sU`#4ezO;=>=-EjC@0GQFPcjwAB}OevX<6uI`i*sTd)uGO^0A+GCD-SM z-UD!#g?vhIeGYdFfWIu&jsyS)btbI&c>il>1%REMbbmVQE)ERhGHt2+{}_AkpeDPnUpNVbPUuB?k)jZqN^gpQiXvDL2puT` z385o36alF!5F%0-u0Qh%t*Rn_=@dsQc9+MzwbYqxn*T?r&RV>@(ei8eFiXk zl-wWpwQW8o%G$Q*@Zs@;`Qs+FES6l010Eo(_GF};cpgKmKi>ma%G7)czdM{z{&&=& zdR%4}B6grR1q1CpkylsTD|lWpMqYb17npI~7{qKE8L_hM8rTG>mSTzFVs1grBu6Zn zQ_u^+2K0*KPP|e)%}GMtGIiVHq*408DU&aoLH89euv66sAKMEf&V18Mswh%hD%U}O z`#CSv86Bv^@5qws${gf*kMje9QK6;mX`xc}f z`Hko%9`XsBTN^sG#bhm7gB_f(c&7-uA>|T#V_^tZbB&P@2ln?&vCu6SX&-u1f;FG| zUFEUI1SaZhOK~pMb&aPQU7|igzJrHMt0{-vGLl8vNpQ#X*si@~*J_~uOll!cTbzY! 
zhD7Q^+|LgEgx1aP*tYr@^jCl=)A&~(-{Neu-_FbBHklRooi5W7RrrhlFagPJSXWl}3WLDFCHr`#9#X2-rg260h_7YgA5 zUDYxH{iBZPR~r&Mx*#J68) z0px=Pc9A!?St;qCr-p6+1#}maGC>5nOsFQxqdYE;%s?&ev|_C2|3~I0o6~5!ti0*G znhZ6tCRzV?epV4bV{S+ve_p+FD%_AI`3m|&A=;eu${34?<}tU8C97R&&u~pX%@C80 zP;W$9PWHzrIxCfMxh#C_&JiA*)CMV?M~);#?%{0>b`!^DT_}FbRqCXt@|sH_#3UyfGDelCXmuqDy8cH?`Tw9gMNTJ(Wd z&DgZ2_skJ)J#|34p7U6g!SIF#vmCyl?$FP10pH5x;HBP-?pI6ZYrK+XAn7T`WvaSe zVjM!Q@aO9v2lDxEC8dfJLLtChDry5mE*Jf?XBtopB#Kd>pR~o7QtN@ zk}fg3N7Cm#R27YQlH_0bhY|gqsvd+{uBy}RZeR+cBeFl}*JLOyT`x=5QS1_SC+~#o zVaWOvJo*#l>;B1=OO!tEO%YoYA<}I`MG5~12nj}I^8R4zt(#Z4Ml1}HDD!%gu zK+dmZ5$wyw-ec`*V^$SIGY9u2|KNKc{nPX@2z}sj4nuAvtzr|xB%!i)0D59>i=ov4 z1G)Vgg>6%ggGOE6QWBEh^Rg59VsTY&7!T^xW4OTcQQ-tBhm4AyEfSoLSumYCW2B0L zqUg@T!AiR&0+Kj!Wy0X?BMOzoKI5iv_d@_$f6z@@dtkP16)8o=XHaZ;d1wyZr>j05 zT@e(Pdq_8V7WvCAzNWNbp{w_MFMhj}t`W(c#axyG9-Imu==S?OL{lx$x%*M=Fn)RV z-pD>Au!85OcB7bcWbU@Q`pS1|D>G#u`8V+i0Xql2bMxk{jxUtYdQF_b`1MQ&9<%=+ zFc|(DV27XUO%07 z4POdY39W+)xww3MqWVe;z;nn{=|KgPER?^}is){-13ia<)%VXlAAQ zw>tR6{SFaky0`LtGJI2CXCFiSE5*Q4^>~LSy@5>6X2?Ga4%Tq2-EABLSdPZ!@9FP= zexjgH?W@cAnZeD~p6tAGB>@^0B)F$16>n}Vnl$I_`A~B&|7nHQVd8H@I%0*1$_vUm zv(@As`XK6M@w_#qZ>u5kk)Sd&ij{b8YZjzqD)!hA-N%UNt#AXzfCogE7n(`*b^p-3 zgQ;rm+x5FTxC-r8$&Rj68mor}HmI+Kd$ug4^iN6k>7AY(Sm9~XYsdRob5mM{&?THu7dCy|_GtK#N|lyKpSFl0M%kf)IKveOB(c zJJ!(N>yqSLxEVNBR?`%M>HL_R>P_2M-*qv6gB19s{E74ic6bse+_r5c9tJEi#J5iB_!1pX0Y&PP^vAjc&B&91+;vhbMAFB0^EHo zQaXg{AKm+IdjhR^_lwm*Xu8L=zpiG$eUvp_VFSNzX6u|mIv-2U1F3tr8I;C?aMpMN zLpB)YJGniZj7tNcp!;7aSOrPGl0$v2fimMDsYg8e)$=&I2)MGS-OK-SRdvmvOep0) zv*h5{_j?B+9l;hwD^bw{O1tY3|CGj$A>>wW^UCQ==z6ALTCe3fo`aZ~@N16z%%`sf zhic!Q_cO5Vu#V;sC~LAgdcF+fOt#d`z*e&)sgpu%O{9irze{tMu7R3H^4h`AvnJ{T zdq+Pk#j{Y<)SbzD`|`y3JwB#js+piYZc!RO*_K0^XV-gxr?*QFObusCCxcK35CNv>XGO{d4U7sA30Vr`8r}Tfd8S7YJgP*_ zsPTpnDv}Fck^89Kl*ohKQNUatwmI4mEMdZ{ zIf+Iv4BLp(UiN3Z{WJuXw6|ZWy7KG81MX(-74A9iQWN>~g<)*KnYkj&=g|Ys;Tn!& zA53k5a*7tQEtXK=m_mbPsqS)8$9jTI*LE625)V=!hD5tUN8zFQ+gKmMzSy9*vu+1s zg0Cql6=5u6!36}^zwjv<79BA;F#oh_cb8y@L#w(a(KbjmW+q%&xiD 
zg!kjCjZny+Ws&CY`UaX${J1;)th2Gp2s2L7!HF>@*@&xkfGQuK z?^73Y;Xho&WESCJ!!{j*+-09Qy{TAvUH9eZSbqm6K zBNOtu2$P(7t{kP>c45lcbY}Yz4JLtZSf<5^V)ZSW16Xwmz zB1dB=59y0$u$-dvDdDy)?MJU%`BB!4l~M76S!pU+X9d@XLzt*ndVC7pu)C~fXKT_8Itb0h1|g9Nd0dpaVHj?#VEE3Iy|iucVNNGu)~Wj>y; z-aZ1-70(BGPE^YK3*UW?W#7kcpP4C?l`qFbW5GEo9sZE%u=F3^*rH##od~5QHh^$c zgE!e7gD@s7Ua$+}Id0bBZ1|CsHf}cUPX@xUTcQEIcPpP5KpJhZB zFMtX|H=5*&%R3R6tFaRJS(0{)LV6v3O zo5KR7N4@#KN%6p*M{j?0Mx>HQ&wl}{{4+JAzVZe{X!}FeJ0OO5kInx!Mu!i#Bd=Wd z0yD9CKHp|svz56z#X!Kk4jc`83HC>WZEONlAG7q6+@_JDs!S`_dlY89t?pvVPE0d$ z_CtT<(N~a}v>N+6aI?)`BC&|8)W^`kw#|bo9X~&@+91=U0C|7nJu^?UOMQDUY;PZ? z4rimt{rez-y$^GmaPRjtu=BUZ`7pgUfpxuDuy+rtoKBcTPFj|Hv~xJ!iv^)(K2(f% z9b(l`AxCMS%ulJ@|C@KxqrX^Rvt-T;+3H4i0cBsXX`R^dvTs@eDEl0Z_J*17o$*E5T7M_!KS(bEgr}khy~G zd8_(%#ZLCrEkEYgjP;Kct-f;mI_l`W5ZD)7b0XVfew6$3zAf{8y{572epO~m11XE! z@A|1oYGc)WBi6)wr)1^Xe`+O$_dgY(vJT)_gnA4j8_+9_|LT=53I_j0uYB`YuRNg9 z)H5O`?*pWxD?m%m*tfaNL~bkw|4^mwe4M-c=I_TK45tdgyi(4)x1?40HBDW*Igw<~ zjAllic*Qx#Eg1ditPu9><8!RkZ%0=`b>e}^v}iL(5O+#(bgJCq;Zh}_|PhNQUP6U3`pVkBs=a$d7P zh!@n17>iS!)`XI-flFv#P~!8 zaGr_spPcL^ymH0lS(y229QCAI=`%@7S? 
zqWDdzVcp76djgpIPEodijS!MBSVqNlj&DZnuvn;*5ZyAIYCIJG<(_Z#aGIc|ocreP z5_L=NUZ*Ya4(si|sR# zzRbe&OBD7JbWne`;|j-L^=P6p{tJORFcS520)Y3QHBYo7S*cj#)*L^b@s$@K>mgPQ z*E8;So9*&oMx0D@&^SvPF(bD=kfNp(^O3pO+8X_RQ!~J~HfpBky!Zx|%5TY5f^Lq* zkL)@YEL#W|=3F|*Ys|{Uj)=^?&uU@jWG2|B^_E|xKJb&8oW|XGZk0quyK<BABXfB6UR#5x{U@)`kJW$lg5<@YQg>#Q+FZqheb*Xa z0NU)NHcGiJWBf-T*pgKq-JOR3VwqE}O{E7N(Umby$%Q=urpt^{@I)Z@rk=vAQ4JXz zqtXnV&q-tB5C7yin}Jvvl~@zok~g*g^XY2Saz%Jn@x6aQya0 zvq&|o&FJoKt9q)q(BfBbkYgfVax`WuF=&HnaqjoVvkiV}#vb_2kHfRbNjB0d&hs+> zS5Kptz5dz^5`p`8SX&0Dvw@W1mS&bHboB$|FA17AAk#K0JVP9#tE&hA+xt7NPZ?or zbn!ed88lJ7y9rb^e-Ke)`eAHD+zamDSaKIPMSZpzm+>9T!LUYL-BnLJQQ!WLkpEo; zPhqO(uL|2p-+6rqxfI+3$BR{B>YLsLq;Xh2A`2%VlBg7V5-!a zYzh9Z&cinnL@20RE3RB%uq0rtO`FDCDIQGW9`?`R81!(Y>zVZWl|CS@?KeRLyjD9a z<#A;3p9VnD$Ze+llJr#pz&c-NI;$IEztdSzgMCmB`K6Aro#O!jI*s1iR9M4bE1{>V z*M__K)eO-QZu`~LqA9J@`p$2JQt%+O4xQ)oiy#698%R_eEoy6cDCO4Bd8=dS?d@TF zm}8{x_R$JMx9}W{O1gFmH~v2x}_nfDxLj^NT{AAd+Da%^gyBL7Nx=@O?<%T zRi5jhzBR%vmdcq(*%Bd4MSZR!naj>mNHm2(1FfkAvW-@{c7tyiH|T7j9q^ffgiNFc zchaNm_4NLhSb)*X3pGFOcxf9fh@YiPs+=cI9fX(g9j`WauEbD@ki zR;w-B3Oh|}#FqNyPk=*4o3z>h$bClLNJcJZxb7$7x8Wx{x3z5#dK+b|aZa94`?44X z3?WP9jWYQT@GfR=JUg44l=%*G5+UeCzU5Dz*sFT|4gJZCm0AAVPd8J?Y## z*+=35`iXUzTHVA7Cq_abjr%)9mj`%5O}z6_6l}<~6}f#AQ2Z82COz=VQw(=s)9?@H z?rL}~%Me?NQxSJA|Hn@N2$_fBn(B8fB0&jogS#C@e>d~-ziejc!r!CXIpnaNiijmE z%`u{2Rx49!3Og-~2K9y8!Hk64w>pI7&3AxsEb6Oh&EvDPlv{`%g*H)ai%= z2Wb34dxLE|0y3CS#qY0Okz_J4GAdTqbmP%jPCl4;LRuY7`^b7qWr8Kos2l!=T6&5WT;uyyv%GcR zkQEWd^dOP6s!0*Woj-n6HU6%@WN6S9GJFNy#EcGity%$s6qpkAsi|eTwaX?42eZ(-v+!hQBQ*rXD@sa zVo8$`%O0*XDP!jpb~VVW7XjXf*4QiZZ3)u!tqw|54sj9O`G?k?NBPiTCZ{YwL-Qkk zZk#KwT@Sf~lxP4A8rCI+J7JabiOotC;;nXlevz#vYv7B)i7JbLtFjtrl;Yi$OoHFK z4M3!qq`fT=;ZWLy8vG2kJZ+akTAg01usNQXXyv3kP(mfae7tPc6+}o;PmCa&=z&{L z%FB>WHoHZDKyofjH4ZaiQ~_O}l+kE0u5%E%=qLIod)&6cN^#g zXnt3P?yh_tXjb=P}=ruJ9 zoKT{xjUk&QgO-{nBf1j?wltD{h9lt)0z{Mj9oZN+7UK}wfwT7?Y>hgriquJe{j7F9 z`!RDop}*=?A2w~6@8rmN6dT&iw7fmAZ|<&0;=mo+#yJS)0fN86S=&>TQj>IIj)w^~ 
zL8(q@)l3SB*oeS549&o&nwo`DTi8*)r%$(Mefp0F^{-U@f91?peSzHBBRmXm0;9u` z8>Hz8qE~dC|3{_WwUx%v)2E%~i1t^t3;cxb-WvXcdAVi>Jv{m8|^HblZF#2{0ryX;A8N)6Q_G{m3?Evqy0aKvg!p0u%pA7gRm|3G0h9NI^sV0 zDWHYleXRZKtz|;(wcWp#PJZ8(8y*QgEOKHcZ?w0MeIQOP>y+#rKaSyV0FyC~VK2kx zy|Ls1`?VQDh zhoOeg++C33?~qj6NvNEi-m|SPFO9#Uj(ka~lLzRa!=`7?NUL>}Pc2R98^t{wQc?bZ z31KRKxc^^F2$oKA#A}a)tMWotY18mO zW(a{NN}%WN9)#fPFI~rWyvi3mWf6nGP9y{N%R=P%tKZ0jd+UuYX?bMfiOq^Ky~j$E zm4Oa{fJKjn5xu;K)L*k!FgY|#E6Fl$)9uDsGCr1p1<~cW!B#<3^-gS2rpmS@TIcND zF-Vwy^ruU3WIgrKHN6WlGk#O|SbGtCpa4U=7(}Y+(uX|qK>f(05LQ#vQl}~y^%>iV zNH=?QAWxJr4AwMD>L0t!0kQw>WahQ3<&z8dv+C9Fv`N=>QoRp$xjrXw+neem+yp8YdH889sn= zD{HEl<8@jkcsSxK>&++cHS70AkhSfqLtkQE1Qx?ms7do7*R{RA#p6c$T=#}+9K;{I ztHH_BK`hiSe3C4dm+%yQ%yh4IHk~YgzFMX6zLs|Sof+Tz<|BA%0SHk>Vh!hO8;u#K z|GMk%Qz7%PYLr2=x!n(uu^g3J8+HrgF#KbgG`h%QJp4sfLd4(mtoQL>&U1?u;P#6C z^$WVdVvRGX)7g7we;S0EWosF3nq3b;(C)pHKKxFxIj}u=neU@G=|aKNOPqN?RyXeU z`O29&qhr($^791cCGi>>@Utk?Ky2!?NN_y5{zdKXo6lL5?>(g5KSC(o7(9K`TE)Do zSfGk64z}Lc-Hb@@dErLi2}|P2dX(`fh*hxf>Xe`#E4wD@L?)+(O7KN=pEYCp$`XHM zCx?1#1;~#X&GmpJld8#0f}cOfnww=j9Qr8H=#9$cHXL0xcULJ5;QM+1^8LJ&cYY1@ zfE3=qW1D!K{Eh&~@1t;m>(AN!%i^-9#VW|xx1I-cWpF$W3RiZV399xGN?b9>k7frH zRngrXdc$EBii_|p7o{3KZMA`?uYhSWJD5gL4OVuqA~2tw!73~o72-)$BGe*n?!!HV z(gPAdhlv{fyDJHOkx9;^y$K8#Tkr6LKCBkOsPUnyKt6uOv{Xp7QSw3NZZ#DvL4;JL zKm5UK*I=xQ&PIP!M*Hz&&Wbc`(+5Z>WIGA*AoB;#u~4wP=EA2`jDdn@b13tf2;pYn zo!zAyfQ#=^=rx67Wxk**5X}Fqh0Ea>#=Pl?_q~bk(N=aww>sFHC~Y2nT?i-4O-?*uDyL0yj#7cL`LgrEz2V&$6 zgs}DnN;DSu>B@T^)4b9bz-+p=VK?zz5QKj0Ho6}4_#U}FYP25W$L_Nt+{w&V$1-XQ z!b>Sh-jks5Z{Q!sdI6$rzfHya#hoS14eQs3)rV%2jc;8*A3!LZjEmKN21D0fNek{|p)*Kj^X?;9YIQ7Hin3;}?7)vOT0d2>Cp zm{S&{GeDkh1CGFdT)4my_{H#V-dn-{P&92J9a4ffPXwX)Y|3ZWBb^K$(00T&Q5u%R z-X1Ana|F7gbzIk{ZSwwz1DKefVmtZ}&}5v|_~p$kf2CM5lP&4myS@+FTrkITPivQM z%R|>#en1dwWbT(wmN#>nd_*Td-7_owVB*6ffW(q9=7@_Y<3s5eCKv=)?CMu~s%tG1 zs=ctX*SysyNAtdch_d9W94gj8-?U*`#RmOTo49HN(@>T?GsamFv;-o8(a&p^PISM~ z$>H4h3a+FlXnZBY>%W+=Ty6X-!v56JG+74ppOWXruZ}~%?u%eX!n#;}Sj-ZLce3#Y 
zg)qZNjEt`Os>eG(&81Z1_5|m)Ld(@Rbl9~!!5%0jz+LFM>{_Lq^2jZejRA&a z<<$vrNZCq49@&|k2%UK>Am7LGju&(A`jiNPye>Iun7C?ue%5FS036q5-uA7d}kx8!!F0k z8rR~#4D;?FS4u)q3K(TDus*r|Qwc(uu3`rbw#(mrXO)8NKsIsKuRi#Si9DkRUR>qU zIU7J)$i&uTRK87a%hUPL4YPT0kDK)8$XpC6GSg%ozr1l}(TmwLzluASdnwHB0w02u zY727KL=oDROIWFOujsJmh_7USk;X_Zjf2}96|vA^e`08@WIOzYB$M?`kQIhZpNVU?<~J2ytw!)~R1@lqCQ@U}%BTWPY0uO6M+0=9j*258 zbCgL`2<-`x8zOhg|Ndh?Gt!5E{AT|vg$Cs*>mpz0^70-|Hv%Tz>`DG z?&}%1yvxi|=m(5Ov=hfMQ5v-YMgDUMfCU$S^<<4a_0!SQ2&$}z7Z>qyPRY=Z;+*?2 zDeSS@)V7`9VvGG3Ky@ib?-r2F*IPrlH=aJiS8>!n-|-C-SouU%7pa^Lrwz^rfs8X` zw}JtcCF@sFe!|uc+I5;=SoC?^vO1J)YGSEQCQ#;N&{Fble04nDnBiA0-5`G+D{T<**G$I^fj|6=2N{K33;vfoYcs8Y8S|Qt1l(K(LZA>F#eL>SwM2DF6oE=ef z`z}T8^!XD6r6>PM=Tu8(A`ECJAp(Ge89dGcc!MMV%^U1@zykn%Kwgz_PllCWNcD_2SsA=tz_S6r!#&5rHXQsk4dfG!|A;uV zFxohjt%5i4OkvEcim&ytvxFk|o~0e}@N#*FXt%yn$Qd~J$ex*S}Z?K~R`k5OTbS@c#62s+Y>72ssG z>bv8kIx-Nv<{wN!kT%kf?xS-@Bg!r zd1tlVX+VEvx@uRWlC0C>9el=dBIsH_A+qH@B_%VzG-TxxvKux!2}FL}Ln+5oFBq+V zrbyKKPjM4Jo$DjUJL}h!C+2{8xrO;trI{Yn|NIihpj0F`>!X(-jL((u0#(=$J-&HD zje-Px^x00-9<7nicIs(wYt|CnNZmfwAO0fL%eyQygN+29+~a94i$h zWROu^7KEpQ)DW8JH<1 zzxUxP-K0yONsf>RK~e#GK>3#ZnZ_{3Z85HpKAQ)3WlMnCR~bn+_30gb(lFZpnR;bHATPI??Q?Syko$I*7);y{x-2@^&sQjj!| zUSW#BD*0M0ISz(e^t$%^pL6Ax(X4NfkVGH)@hd$zqZ=f%i0!}-FY&HkCI^My?-EaQ zGDWLoz6Rd?o*#sA3|ONvE9rRMIWarJH#QC&&B!_Lucpw!Y4N zrUPurWZn6wt7ijPZyjAR@{> zWeeTUh$gY*!gm_0os*FWA>8D%ZNC`${*~41v$}eK<5xxXKNhDye`tT4fJ&%ePni5b z|FE4$U$EgiK#cq5^2;9kCZXT|62nK=9cwc72`hy4qXrL(JLm+ z%$ILty55ifzkPy9#+2ZD0q=i%&OZSt%8Bg@9$@zjIaV0TF^PXQ^z1U=!yzJLjB)IQ z)vMv7&j!?;?sc|%gC%qXe^Sv?^k``5#mCAfCpdj-R4-tlErP0~vH1FN&38ddEF@uR zWsQ4MA^ZlBCMPAN36L_K67IK2^y#A1Q;QEDJF*s0zDYz2cwbPwP{4i#P2N@g);nL# zizYw_#^6W&<8+2@>ij0OoZl@%8>3_Fr3ljHJZD`KjjBt}9P{a1`_mdu_rSs1+aHsU z#w%DTjW0M1T6QLF4(c5b%67K57FU<5OWd5B5)X9yLi){HgSN0O2h-s@jL{pBO@ZIy zJhWOCo=HF{er2Wc%#E(D&9`F6s;yeKe;arH&#d8;0k9&G^HYzH${8){{kl*vY>54P zDGFZs94!1+$NNb|3LeQT>ut@L!7oBstwZJy=A9D;5xAt7L7I&Bfo|5`zNwWk{&F2l zFzUQ#By6YHtbV`sLyO|-ZBWi9+%u_XlFr3vFbK%B5@6nP|6>7Rd-sz5AHc{0k7mN? 
zK_sszKKP8EDDb-jm&LOU&aCY}>rqDs@3{Ka)P3A6r8iOIg(jDB$-4R|ozwJS&Z4W6 zv6oF1E#?^eJah3bB##u|w9>RnoN66pp+s4Z{ z-&}8%_(gP(1tqjD9G*`tw;v*;Jwb!U*}qgcHN(&!lTL^ zT-#6F_f>1(=*>lg>OZT6N8+w6Ch)ZhV5NNqee|fpmYtg#C8X6aQ#b4%9XmKw1>kq# zdmJ;>BQBq<67-g(r=2{c!|qB}^CVsV^=fBp$Hjkm!SvyPjaO~zB@7LJqP~12uR549 zT<|gm2(YU`8k||6B4*;{#`9HIsxA-P<|d0l3GpSVhEDP4=~Nl%E4QC2jf=6}k?P>^ z>6lYn&b3kET+G6)L53e95F~T$+k36CgskUTH?`xs{P0X&148nG{F0~Jy_O!ZO}yY( zVtM1!w$vVjSeTnD7o>4xP0*9vBz0SOO7*MfSlS|V(+BLxd{YZ1N-D^DPP1jksLIX4 zUHO*2;7my*^tAu^!|DEj&JeFPk0_*ktW{@oiR9=5h)>|gf?&2pHtk25#?Wj%_u+JD zD%H0&I!}drCypV5{JtBa@W+F$^#t2{WmRjD4)8_xlY zQhUH59z%X}x{={{dq`@--zMIJshRMV)57TNLFpQ*7KZG5Qf@_ySC%}8zcvf}p^eiO zlb_Vv*2*gy3!EmwMM{rQy)Wmfx9dQT2fH8DUSTcHTx8>POo(+Txc(gS5>LgXfy20( zNTtxexGj8!W}*o69m`73iQX^9ijX`!*XWfu$VnE^sHE!t*ZaQSF#ZB5qT4&Lq!M;E zZ8_bQh?=_pyzyo>`o!JQG2t*UCAN+mbcbW+x?zf6N)#-KwdiV2-!7W@U{W=mgtT}x zG}M7K!#J?{V!@qhQYL z#i>%9!s+(gXAZLe?EZ0+GcS;z(aXP<5bM`6AB6b*^Yzrh(z9)`azwOS=Q?B5iH?`? z8}cUE!M3AzOXD%w>ExK~>-)=r`1<7zY=!cK!SBM~i}o4!7JuKQim0LX45F@ez2BJ` zi2ejX$b*-BBcaDH?)y7bKPooq$pk);W)qI{V>5man|!}tE?QoXuSG8R7D2Dd)TKU> z5q{Ml+?AVJ_rNHaiF(=deY0%`d!t39`2MX5GU%3(q)NzO*tFsquh}B##d@I^?9?1~ zkYF`)PT!V0c*ex0w-_gfCP__`-k<7*J!X?E&Ow~2Vd$tG%pBe# zJlf|8#Pvb(IqnaBeSQ}VCIdSusb-}MG)=KC6{quf6@h%>mqmsSE})$ZSY3bXR{kGG zOjL-e!geG#?!alls*-|dETmK&h%28skNOy`+1BvvP5KPtw@iD-R*%n*d766ke$s%5 zeL2jaZhn2k_pp2BzGva|iX(KekdfFmLkB2FGcz%A!SfKp^{ZFdPadnafnGf^MpT&( zIrB$sWUb!^2Ml?~sbyZ3xgU>IyN+idzyrL&vZCX$Sz%8}y?&*1WSkv>=uiteQH3c9 z9Id%M%%3zidrul^^C9*wde|gMx99*{amPfI7d^o}Uei!JdItH}wXd3FHT&bi^Cyv* zV6a7;Fm^a=iO8S)3R$5;gmc!CqWZh$_hWSgwm@q_p;C))wy?&77tEEcVj3c&3Yauk z4C22;NVH{-iB_;$2q_`>J^HSAD7R&t8MC5}`W7CQMQ`VC%$?G&giqD#tUKZv(Ub}h zbuN3A)x&jg^)OdD_?Pz(c`l_#oczc42sabDqAEJ>$P#)BqSOFb%YHZhxK8teq6!n< z3f(+jiBGWT!hoi{!diLVU~Sk0-lNL?C7J)hds_^iNe%6}?8Gmanf55Fl9-`{(DvMW zhBH*H21{`WidM0sY+fpotQY$5k019zocjf2ROTDvUnweCf%g5`)WyXo8db&U>tL8Z z&3F}q{5HwN(K}GIwqfA;5adA0b1mPlD_St|_zGAObD$*?4z{Z@i!6!hrPst9z4*`z 
zZR_^;NgScZC^~eTgXLWvw0&on2B<6BFWh=tTJ-HKZ^h2&h09Wd_cpIJ{T`J;v;>be z?%vt+-gtm(P;p%~+6vwzSJF)v-1mu&A@^LVU55e3r3tvJO4ih!Z}-X`ANLGoz9rE} zNXFqEPqjtr&iqnQe)7T!Cuvn=j_X-J9@-qVbeVv>|KqOf@5aDF2DC5*=+Qi?xg;ps z=)+exuNnVxb+9=Ok%-f`KsRdFxfNG_R43x3?_{e&^;x=QEna;dbyiFtnSIS}k!{mt zzr5QtN*<@T5VeI+%|fqkDaI7UA(|yY3z~Lf(md88X^)xp`5Ucz6bSI1#Smm;j8;yM ztP?vI=3L_Y$3KLV>i`E?_Uh^{Y@~J5`WpG1QH&xZNifvxd~3n)9Yqnsu70z7cH1BI z(61P@=MuK2xv1<3c&^U2g1o~nBxn2`inSA}la}WV&y)Ha|E0%=& z!}9&xDTTl03fD?m^ug1tX~xbw;`$?u?1cY*3`p-NA~#Z^TIslGr{eeb!hAx8SETun zySa3p^iR%Criz}L96eDjXoIORvao1xJSToinwAeD+b$!13@v?!-DAYU(GXWO+-RP5=JTl>evt z1w1lco*wq$$WBAjAM{T5c{$X#jIcgV7R8WZ_VD!3?Bj1RN` zIBm3&UF-+M#xoKMCtiINZL%a+Z3|Tx1HBRZ7oY!GJHHh!Zck8%J}f{|Y^p^byl=f~ z=KgFn$D5qn7BQV3MSIm#Al0+#`f4JC@jtEVGgG^7{M!#ZQ{X;79-l-X+zK2mI-aI3 z-=JEe`>1hDhCeQbk)r+y=yvcFtA-MNIc&+W`D@-BanGNUY7GOyZ3u236LhYcP>fUU ze(N{HlN&uR#y7(~_c~*p`b*tjBTMvXMNf~I-I*=4^o4t*i?;U?=w{TONZcLgZkwB3 ze6Ze}mtM()VZM=4UNya+;1)?ls;Xyj*H{&)uh2MK-%!%43dWzUjyi$}cC14lTjyN% zaY+p2Da#Y-w@4$P(*kk*`l6I-1EB~X0aL?e3q~(d|ln zA&Q-7qbkRa#iMb34$7xu-F{A0aJGLtqeFl=0v$mpB84)pCDjGWfbRb{<(361M zg%A)oRt^NjvOs)Z$ydgA+gKfKKtO|oVll)2B{Rc?^Q(aRw7=)=B2&SB^j5+TzSEya#zQo1VB#>W*CNH*Qyk5apqv zM!Jv^j`8MSIdl%eJN94@jXeelB8*6y@Hn+{M{x@VM2O+``IAg~8R(XIC=d|vz>XaU zYyN|C)zS2xc3ah_J$z$=MlA9agXC07_*66(X24ku=4UTV<>WS?cN6yF7=8;asuVZ} zRJj|hIufF%qG$9)rfb9A?JWs)hwaJtY=U>&KQG1rEgA~H8t;d6h3$@s^xnV{b8Wq= zU2#TR>U1GK0mZIqdnXv1kLs_5Z+yFkfR@|0?UC#W@ohndHnG!e&<{+pA>JYyt93p< zUeSU#MsBJH1b(z-bPrknen5{BR~Ts30Xzz=P4m~+V}bgh0RL|X^`E~7Rmbvkx*PzP zx2jt713sji28{L&0A>W%Bs0&^8%2W~TTADfgz=QiMOOq)hpQE0)6i!b>aIOQmqI9` z-@$g=girI0wY`D|VbbEz@snp|&~9aF9&{&P$MNSj=!?G)x+0;yd6m-2zTW~^Rfz0I zT6cQ!L1XZpPv`q1rvE?2zB{1lZ21?F4gvzwBcLL^iS(){xG12A2!@Vy5<&~TSEb1+ zs00BEDAHRJdIyyzAT^5Ir9~!%DOJ*v0iXq z@^us`vQs#t&C@ipGpwj6h8~xJVZ`2yo#W?3O#C$+Ot?G39{(`gEHM{ z6DZ-iiq9hRQmwp+n9Z z>4KUGPPwSQYPKFa>x$b`AhQdIHTYR%l|SA9spT!w9`cbbnO3!uZZq>;jP1%)qKn0~ z)VdXS6oU5~_f0pa^uwVCf82gbL|Tp?3iYue>@I(s~6ku4Lq&J)+H-&XK 
z6E>Yx&U8V~=e-sNMTreAm#i$#*B7qXPa#fSWiDL(v?-%(Ln~C72_K?izDyPP_T-<3 zmusDa&uWS_3PU63am)s{H_5mJq&JAX9b*M2=2y*>y(`b!PqSpP?5<|^IG8dK&@Q!I zdOmpLMYBY1O+;83P522g=fgG>nkd_%k>}ul6;D1)^nXoz!X@mx%vpBaUi4V@FR>i*AY# z0lF*vAJ84Ku;DF1KFp<@l5EGr*3rjeLf^52 z6xh?odD6aB{`Pzv|7ArO>FXeCm~f*o&s$p28S$R!q=>;%MXp@uST33gcRxuPC|O;+ zMJzAD?xZq&#b7BQR#%uDFIY3hbYGiLU2Cr`MG+%iB=vYMwH)-6ox6GF`p_^rHoHS! zQ%yM%LTK{4>C4mJMV6SiVSR6_Ob)!fYaWjwZVJ5>tah3k=itD~B{J0|Wgt&tz>L=X zKx9Yb0x9V^o@)>57@=GW<8ot`ona55ZFoUZ<-z)pMPV8jd&i`hOVNYNPxBMcYpF6g ztt&N3E#CZq@(MT6_aq~9AK6nO(Y6y=Y55IQ>2JIydE#Z^EA3Z6i!I5sKXWjJ4zDg@ zCs+87s+yI@8DBO$i|*ixFD+rS-m%ts=1(H}_FR?~7a1oWt$Nl9#Oi_Ed@Du ze9ASi^jFz;4l>pAt>>08E2DNjafh0^CxZ(6iW+~tg_5Y4fndc3HHGUhBmN$<06Kgy zDsMLa+2ONk5vJ29asOJ@_EnFs2cC-Xo@}9cM9{dsi$qU&Ya`C=3ID}Q<)2N~wCxhFFeet9?!^V)V>r3xhn74c9^B;r!jDG@v;e!`oRN@9B(CX_ zu2wo*GP%RQb_~U<`(i4RX7J?Lp!YrlZ&}p2$IA99;GA*cDM6&jp!ISdFvO*HbD9#W zRR`lprq%yaugQ9Kg!^$4h{dj+LQIt%si$f69!~TrL8X?)I2hJE`Pi|fNzD|^l{x0wJXA?q+Dx-brR>n(U(G{q8thC0$T0Z!X#z`+ zR|(5Ec_=LDhE<)!m8n#%L6tHEGD(ZIz(au#g5Q3)tFyvcSXl9^F$8unT`E+m^bIwC zhm8@LFJiLoPQRsRMxb=DdV>UpqMWyVQmlLHWwIOUH7B=azcr72L6-O%CNfu4Xs8#% znkt&`Y2!XGJdcv5V?)MYfS1NacY*HqLOoxQE5{lX=~nE7ig#`B?S{$B1-n)&!9z}i zH~eo?E6aOGLNoKj3YH$R)nz}&`umItn@%Dse#8uoknThAqvtZK@NsMbc6XTuw;a|! 
z1Ue<)KDz(YXcCp9QjH3t=&ar}a|)vG-~mxCKt%o8LplOY%Piu|ld{NXk(m^k2L?m5 zN-GA#rjhY4(JGBIT7<#mT%xHN@X5p0`>*czl$?~7c*~`29lY3aC!gU5`wS(ABm~3z z7BtQfz;LK6$1@GE3jMRTYn~AN=1%u{ezaM$uR)9(Tl-+z%M!NM&x-1MG7&yJCd%MA;?Ze;j2vgKHP-%&jv7co^OVt^5ODijQhNJq&RiWR7TXfh@s&6b`H z3jU*Z<(v9fWoF}({fVnwDa#5_OU%&swaZvwhd^+5-d)SRmqKL~r?K~?o|b_R>48rq zCG?a3vq)obC#*dTM-wV@BL1O!>0uQ~Z|{Le)0#D{5)l&m?8DRaKN$zu8sy=ZdGUj^ zuwEL~lUTuy3%pb6*Eq87FoA{-a$+juWOV2P!jCJs25ILs4 z*z2y(8!)qC3_SgzXTgrCEXbApGeVo485*(f{CgIFFE3W`Oa4jex@6w6!yJ;g&-g{M zr|uFWCqy(azRC%;6~C(Oh>|yr6sNuY=Lgo?&k962*%_`;S~)H9KO>|Tr@X9C87sc` zF1%T9JwHog=-TZE{O7_tpA!2>Jl835PQ+H>fUPO$dV}PEuv1;S9Z(WSX0%OMZlF|enAHfE{v1imYUGE3k*-nsvqCN#f?bp zk@mIrT2di+sw@e_T=Qo!x`@FWAzKr$2kjBOg4OcAxHFDOjE zLM!}Dj^gXJ2^v!{k(KJjO{*_UA9$noXyM5M1d^XV;@i34M##sXQ)Y66dsW^T2xbx= ztAAh93{BqP^#rDfT{uTXK1&k6@Ky-uJo|4H#!cMmcw4I43mB z%gZAnxQn9xpn;3ip_-GcC^GKH$!!kO(BnGuC5`3SNR0M<*yq9IdfgjcFDTZF^c$;5 zgGf(2J3^cFTcJ5LCocJ!J>Y=1rn)De6cm-XX7= zoN&t0b7Vf#pTaAcx>N{LYMs?;9rAnQNuIEzm;gOZjfksWh}qSZ>%g7Gv5ji9r?^*# zEN+>!!r&_=(>SCN4!I*G62g_NA!;Cl0i<_)ruayadq(|v3s zZxFSK&Om}$T=)+co>7#gl%+O!MADof@25ku_^f8Oas-i$olDnoycuh(LveD3=<~qm z*REfp&dLulkw(!;(Q|Y(@HM?qT;Ya%HK*Idgb%Z8)EG&#G~r9yzMVTVsFUW2pJF39 zpD0oO%hWa?pjrVDcR=4sTs_-#@673!RrlDSKaKItG0}vcm_-i-u~daw)n-k zlKDfjC~&_qwlV=Z&}Uv|!B?$q^1or7cEER4yU(3yg9bgUQR2(<+z+kj#Z)TpMQv#j zHt7XN6(FEvzN;X3mjk>7UY8qW_#f(PcM0GLW+T?tfv24pf8gQKrhknd(L|-lu0W>D zmCTG$txoy+><%R(d{(_RL_nSYM^E&PNHS}!Z)6$0^`A~$mBm_#A>vqHzC>NtN-krT z3U#Eqc7eI>l1*c~;GMB>Ov6XABCi4aSY4|&m|Ge33kOD<2x6+O5dru?oRMmHTH_3=*||Aw|9n&hUf%=FoOzS_>J~c$f8dZ%wrO+HDlg2;^hf;Q zdF?aHnTO|x*t^a%s9tS&R4~({oPS~K!svzKK$-{S@5qN4mwSe3`?Z_Dl*GNDu%9_6 zrO>9~>Ch-EZQ8L`7t!x=*=N-wP@Tpef+tU7j}`f}8yXVorFBj4TD)+Bd$80dmcnO$ z_l>j>8RpE5X>Q2)Ilc;+{#7}nhLp&Q>6tsq;IY`p60&S}_2gxp9rL|c2iNzCv9=N1 zW>vwn*;ImrDew9AO_TNoDPiq#9&Fs^c+8M1xC-%^W7$un`@yreMa?fy&0t?Y#=Yjw zRgWM{u_Bk}MPjgBQQdI!pv;DU6?Q?d78sdc7?DvI4*L2muc*Ef{Ra?vVMtj=gGVRx z1qf$U6e$?sihR0pBZKa7gz_m39j3w|GJQy>aUzt0nfXmyXDa3J#?sr!jBRWBx^ILU 
zyI?22WGX*)vy^%I&e^P#>pd)r{#>RFj?rIcb%(^-`K9gMFTv*!Us#HFN3K&bt1jQ_ zx9+Z>u=*uNP-gaNNTGP{iugpn)pQ+tosW$Byy)v*Dy*IRrTC8VNVLH8iSF-x*>ig| zc`I5%(yEQXyYrPDLTtnEC|k&hLabXUQOdvhM8_eK(sha>q<3v^p69{DU7IkRRP>br{Q zeyPyEaC7jAuBzZTStAd>W+=bAEtTiuT1^5~SR=#q&#i^Utso64)&Bmmr9Ixv_ovR( zuMxI#{-n=spPsjks8CyFgul1(OUUKicd%pso`{Qc-&)w&(vR}gF|e0|-Q`MLPx*rA zL|)z{>d#G)tVxR!*2D!|X;FoWd>7JrGmU`MPaCaGhU=#$Mib;cmkUmv09~HGl7gC{ zWGBceXqW*{n%60(>EYBi!d0)`?qNeV3MP9d)*lsXP+??3ZYkNz=EuLbui7zDE;Xcj z-i%`#mP`_4OWUv{R?7TzUHnvAqavsNjweR11%h$%ng4NZjml_N+d4P>ifEGb*Q#05 zd~X~B{4Q*yMEbPPx13Ew9BEeD##CeWqD_Gz6(1$$DN*J}LXjmc%yzHfP`&%tV%v(s zgxe;HviNyK%^COiORH0}Z4S9+SJN#SBFj9OO@r{yn&sVMQ=9N!=79p~ud3J*Z)MOH z#q%*4B-FDe>StE7;B$g`1%5#A7O=E0qVXAt-fk1B?E0#Q+Q$9zDP|_0l7D?K9CP=? zP7!^gLEY)a8cX9s`sIp%^EmL`P)kb$w7Q}vsHx!4+F8Br@Q=?2kJ`P4Lwf3#6N-Y= zwhk`e_^QOcKQ-aIX`S}WZzcEu5p)gHGacC(2#Jb$L*t`pQvZwxNt4s9W{Pa$6w04KEFtV zfieGag*|oY*pmdiE=Z9GR!@QNrX5|MB(6Nqr*(C}ok6T`z?3%PvtsmMBF^Zsz?@W^ zG3qspy}M0D&h5n@UMYRI8;Ni0st6J@To@ll;#W}QKQFfr-v~Th)%#pH->DY{_cY60 zx;(QbB=zw1f*4V#DJ++DGQ#(6skZ3AlhGTDBGr0ca_Wy6;QjDl>y^996+X?A?G|(# zwj`X<5K^H1v0Q{{EzUe?oCiJbvzPvy20rsHcZ_GG>vP{BlpgMa*{n2BBr)AnP6Oq3 ze3;yE`Gv92iHl;#)^88jJ41~nv$%rW#y-kCxix98TxuncbH~-3z8?}&Q)$9rbXRj! 
zR~%s9RyHV7V~hDuvDxkajLik<2D~+%N^n(k>4b+s9od=%-!HBfna%xQp+09HLi#_3Bd=ky& zk+ZdH^C1n1&v{i+pj1$tP2S94$Y;8*cKo`esu;S*b{(9b2~ zTSn3DGp_qj!@H&nhdH0Og;q!{{xGjorK)$QcD40mP$PDIe$eRPuMr2eMT|1INo}3Z zeDD5_8%wk;A#pMoWOHLmNNi2^ui*^lLU)&WlMXm{bGFiSWWSo9b<-YXfTOZG3@vk` zVak2!X$$SpC#~{^4^fm~N)pQ7ZyCDo4Pg5>QE#c6U7oJAnUu_aE+5(S_?bTaQyAR_ z>)86n)83TKSI6sid-DpXns?pC>M6eah0rrpZWbKK;N6U3JI9{%nNfEYA-#ohX8a)h za~!IefL4S`umUIjx3|v9Ctst)h<%5T47ktCpLBuKhuC$TpBx{kE`60ZB00$EnB!lb zJMLPBzrvtsd*kIoaY7i(a^LwKcgb^~oQrRBKYx~1KO2GL%uh^eL&fX0J(1r`zS_k( zYv1Vj_;x?z7+d~kD+CjCQYLGm{^P4*9ppQ*EnQy}azegC(2NhCqg-C14db1=w)@!I zqd$2x>5P9_Wd0AtXK{VC$jcSvP&;ZUJ#}K6$I(e%3EZ4&%x& zhlF+i!oh27_tIzR|K(i5aibD`?yT&1c85H>e@C*vg!Js5RUH8>MfFz9hJUwV6xZ|| z6D)wo`1`OA=48pITyUg9J1Mv@TqEiaJu}GZqcr`Chu1J!GQA3i_V)Is?B3+<7x0Iv z=X$hF#$yGX7!-tvdtK%jcOMs^TLbeKtxGl?L-M`%?l?toF0@I@MYZQPvwg_ndAG5Z zT`Wh{kedl?$(;8y+)zFRZgQqaZz0K`9ffKwdiJCB9Q{b*SUEDcp6zmxnUZg|(N(&= z(v_C43i7B|Pp?k7rk5Og-D+v?on1j(PCq3juEK7SAjra)l+NfNq*JyGn2n!Jb9+St z*HKqQMWNr1D{|}C28-W{Pi#e{noD-LcR7BqYBL`T5_=wV3IjjI<2J!AKVK=BKZ5L! zFH`py#gaa~S=2by%8i+IdlZd&IzW2+^dw9$I3w2lWgvGDR`uDmtkyNgO<9WG;I71& zJpx6JXJc#cu%(UmS}W_#uV?$7*lu1bg3wce6II;P-QT;3PB8EN5wz9ra$lV9Q#j7+ zZn+UuVQ=ZDk*I*&i>~}iPlYZ!akL!ENrLR%^OF&u=_ePKt$lNsd=-_w<;`v=XvbNv z$O*FaA;;}yO`+ZggO3Qv{j2-wLE!6>Xr1QICu5p_&SHgj@vU=1{G>P51Dc#B^~(5i zA04dUM7OkWf7O#O7dyQ{jQcZpVy?Ma2CG*;bQ(YCwz8H{LXda6vvB&jz3v~`t5dW{ zC~u-5OR&@0c^T{&gbMV=vH9gZkvl_&^)Qp9{TgfSs{CMz!Z0fhS)229voAYYn%f7j zum28(Z2c;ifzlOx2PqnX53-T9BjhNH#Zp_pBJJ{3(o!T%GAMIJ@?qn=;wgXJsD{}- ziu4NlB^!+xVVqzaZ?o3ZPmC!U9%`j`XwDw?5##J`OWjCq%lNpo2`F6WO>`}@LRX&-IByKC;k zyuhhW_v?f?%fkpxbcoTL?_YS74xTvkyf;VBBxTag*yKjiFJ&@v7-XEXogqq{`aU=N zGp-tYKcDMU_~-y?Cd=W3mZJ1>0$fGqmKkXnQ(51_L8F!16bI7P)4bzdzh*Qu4QFh+bS=8d? 
zRI&Sd0*E^0DvAT;=1S6>Zn5k>n-+x~s8PoZ6p1Y)Jy8YL-dy=P8R7Na%h!9~mjPxF zt?*u*=6#M%>C?Wz)a=`r)ahU-xNQAlw+5-cX%pf(^yHOauu4yRcdI2^%NUwNKRwa% zeW^KCq()EY&??^{uz&Q)RonnnJ~}MHrMV$`Kw*CIXSGgq(c#A7LG+AonL2qmZa3e1 zQuYjUHD7Vv0gcPkyx+zTV$p7~>p(b=pbm*nr*ji}1QzQhri?(W@(w z(>;e-?)t0TcUKr}g=`TjO5F8r#QSu&iE_Y`y@1k;>y zA{F0%(-v##z7be`Zdu`=ttcjwG?H1lTJwpouN(K-r0U#4X;+6H2Kqam1z2U}M!ieV)p@4Y>) zdcqYAS0iu|9Yayzip%6mv{=FZU6;CT$V$&c`HW$tzgl#a2q7V&s|okW7r4Gd`Kd2Kq>>+6bTH!m zU&{0^%h+`E^RmRktUS>~q=)GEw#&J} zQwh1QA8m0AW<<%hI&7TG;0L{t`Qe}0AU{;g*%B&)l!^R+U3@2k=z=S=tGJE55Knlk zF^+JZTD@nAz+uis=aEb=gPhX#Mta6ZtGc08+EpUk{^mxjGInJYzzGHYxE{mps+th~ z0F8kfJCoGS* z%1(NvuQ`fWNRPy-S*UB!C-)N|Yjj>7Lkl$#rUY6fPLlbf_ItZ*7T}#8tdPGS-Q~-g zwq3(Ko1GzT&C^e8j}Fn#oAw_U5C7LZauF=4TcMuoN zvBqG{t12@$(<+kZW{B&2>0~Ncn1_d-q_iTLYW*_ky!$Zk7t6d}UB!pr* zozw<|Nk^zBa_#L~Yt_`?8@4wrM+k<#PpP;Lr4kxt>KhJsQw|03hYh5)%;A#```aUY zG4)$1hXvHyHZ-*t_^}Jw1HwBd6c|BE&1RwhBH2*Q6GuWB$JBAPU35#A-^}(eleqxm ze2CqgL(}p{a~bi36HfT+xcvJ(Hw9V&G`JjF35RCezv zvZGCd*e`NX&)NkqfZBZ&Le;1D*|>;a%((DV)7N#2vO1Wu+v=+R=si=&s69Ulxh{W` z|D7~wb*eSg6}Wj9N_0(ia!kGMNcCxUpG_>OZLUW@A$m)wC#Jvu?PQ>>c%Y)U053|B zRsBUBbGIZ9y-axGq?@Z)caAB$K=%czmc(q@XVxEguEc$xpH5ajJa%wD@ne-e)PFn}}`K}d* zvd-1fj@MwzBQsVa6NX*5%-vk6=)LHe=0PYLJhO%7SNq&*Lk&SfmVA8=qBxo`^(2TK ztQy0nHcseRRbA}f9k-RA&wwk*+~34BjjTjbVlYguw3egNY(C@T+{pGc zOztL3)bmMVxo5>0*TlqxNZA>K=;WImHUw@h+^oGYiV`D}rMDMQMoufE_$oG}FZn8N zs>3)k%;Zn&smEc(fr#tIIJ6&Yd3nG5VroK&A)iRPUN%dgZJ+?XH}J>gqvGs|Vek-F zu~)bnb#s+7lC6I-IhDJ4_nsue#1tZ_cRAE!lQ_!e`WSJX+4d^%etMBm*nv}2oy;)UB&beNm6AH1$JnjpJhV1d6&uWk z4c5appEeB}RimXcP`kIXxeRwAjT{uB%4Z#;s-2`oIL{^(CAFDXa$J;beX?t#m=cb& zJ3r|fdj%$%%zP5ZWb-6o#UnNy{65F-e)#CSK04R3vwaHGC&=a4i;3vAxXi6@lo-D_ z=cMCV#T3Vl#)XxLo~W|(GHeMW56j}Oq}`W&#~WAS)RWX=p3B;$xt?>+rP&;Xj(R2L zMyFu;fR#$fP$Xiymrkj_%;+&;e>=)V#iwQiZp=Hm`c#Z5sHFeQER~H+VAo@zhp%1f zLWyOZ>1_p7GS6TOJur*Z>m8;IcyZ{H3c2O6TP=(YpAksZ6n6rrVt;vr!HQ@`Tw8hc86ioD8R=@hY9foh z3^)}xjpO(N% zkllOwo1RqyJ)7{IO2N)uw*Rv$ay6~}Wj4#Mty6fIZ*%cfu2JK|cs1LF(UHXUiL9>2 
zhqVQ1HqXli>4Lsr&C!x(}E+>szVa2rJXPYAn&1 zI9e6_s|%&6PHvhTIsA+u&)4$+a}Hauv5HbXUZ1i*<4A>}o+~3FjVYyA_%*Fo$aMw9 z$2-q>(tvz0bmh|VA3KZlV+&Tkqm)mO!)UQerqDpix(DEaX+?DAami@%)6&3$7y*C9 zm&0Daex1Ml*pT+-n&-ehfZp?@_m3A#PmsS1B@9v@CF(a>!Ty}!g=yEp{^%b+|80Ms zpvvE+L6&0>+VCeO1?Fas=j5#ujSII<_x!f9CSpz&2sbR7mVt>WYIc_SnXXA%W?mr+ zGu|R{+MF&_wVus6cqPO+H+Hyk!`83SJ7W`xB z;*fA$d?N-4{(RwQW$JNfFu=x)H}*8|dQ2Afow5{y(G_H4GLJb+f$=u6^E_SvZTJrU zI&RHF9*Fk=IVdZ0BSGV!Dp~-1XZd7L_HiZu@FA3denao_fP#ii5||)%QXa%%Mehw2 zuvSPy*OTK6#GPTrOB#3X00NG5!|roWpP)hs@(Q}Ae#_u5fphpGp8bfK|Eo3FTTEjf z9!wYhlIH?HzOYGJDB8vzp1ZO4?wt z_uvEA1?j*u)vS19_j_HB{_|V#4+}sqaBH4&C&)L>aGp(Ugw?3LB%tdu!|?W!*scUa zt>epohzI+=bkpe^1&x;pzgP!uB0v8GCvKY^Y1IyU&Yq<^MhDL;Pn*7aoMa#%xufVD z1xV7qLp_z_28m}u;0ak!o(ui>$8kk^W~PaNu2NnOB&Vpl_#{P_8ISC$H-yl`#VXP- zY8|I(2544FJ4cd*aRWA8us^TsnIfRi@{zx>X#v>u8|wZ<%lxxY(^yo1@k#=2u~19U zp961f!=)E~T%whLj_1@jP7tG5K#bCkGha8N#9XI$$vpn6DOP5Ds#{n53$n0rW)QBt zcppPBwIye>&qfi@-#Q$YkIP;78raTXzV1U7~G_zEJ4z1>= zB(v~;g_4^l-~bw^tq>K2fL!Vwy~FTt6mJ8H{DnEdnU@1+o?L}K2yAi4>MfaC8aFT-uxWjZ6E&?q`?F{`*=2)O>2pb49|Lr?r`9@dMp|(9R zpSJzMD^|0Bt8855pU35S4_KM8w_R5pS=bxOXS%`Hu1WSA@%=bIZf4jWl>(|?kE&4}9tEi5M7Jte) z?k~fx9}%UU_5wuB*l!aDbp-yOb{URijJRK^(g;hrKZHN-Rf{hJ7utt6yAC|&dYDIa;Bb?G%Ox6r&tmmTF@3G&3IkcILrxYXEmz z2xynLoESO9V~!gNLN{Xw^0vzb5O8QJ{)NBY(#ioxK45mg?3`wnZ4h&$t{4hWJRo0w>;~5-Jxv6PCp-GMxNma-A=st%T>}oNxZ!rbJC98YR1U7fVA;Az zfw^aP&+|9m8UW5=bKguaR5OS&I>r{kv!s6!Td?H7k9kQC_k8<}`S|i+zAm<)(=ZKm z3_;kBYK`iY7)W%HjZZjEwD6YtEng>BM*u6avC8!uexd^=z~1!n z4mpK0KZqu$*embe1#wFEfyQqhFbVjVf8~YishY+_v^Ahup!ZE{ z0Sv(&o99GMAu5q&bT#`qrtB#z9ytgFsn>tyBOC!JGUPpsr=VGX1Dt9t7WiIp9r=It zy%5a*m+x(E*0T10kbtu=L-i&6fq+&C%32*`5WoBsDaclN+}N}zL(dqJx(a5$7#l@frXm8HO^ zyHdrzzX^0ja;!V$_{Kkc1RMV-=3)Whkc9N*%WT{wtbk~0;gutz;-w|}-sdzu+)sPK$wagzPWEN@N-HaKYfD-% za7dcbaCB$JE$MwrXbq%i2gk}_7Bl@^%JQtdYmG=SjKO8F@Ki)ZUX#%-Jzo{(-5*QFBg7g;=oJw#P{s#0Qw3^Hhz zG8Ip^Nt*v^b>h04bmVd&&Ukxkt7tHB1ltJg9 zpZRD>P7A+2eh{!p3omlq`R%H11K{;Rm)bd1ocG*NdnrFpQyU8Y*iyk#%YBu^I$3Mm 
z-b5`lQ9tc%K!d;WUr1g_fnFNsLNDQ8lGt;-^>=Rykp_iH*-w!pIZxyvT{?}hj?d?h z+uE6wG2T8r@Gluv_gtuhZ1)lk|8dtY?NP<9|Y?lhF_9Tbj4h2?V!Wx zARSJ#3Q-~_1VS4#5ezI0 zpcP82R}hK%m4wL8%)bY&)f zBA|mOCuBkrdxSRP!c?b$>43 zAe;|lBP@=W&FKE|g}7A9G44xxf_f2j?Z7lLqne9FoN@TLqJ7l|_aG?en*8-HIHtP%$`kD7US+$k9ufk4n3 zC0Br_prWE~hWIR=LP<`vh>t;k4xB9dF<2MFhW z1n7kbZcH|`YLhy!zYEngw#8_decDiFejS(YInym!X$|H&&6@oN|dW=?BJJH<&D7k`dn&+aT$xX%jEc@_m(OZTvl3E zh?7?P65Zdv89~(m2>R|Rb6l-OMSxoWC`~g8BkxF+px^smpEzFX?K0RA_T18N3$3|i z0apfon_3vVFm>;XpDWI%rC@6>4Z4(c)LZ0__Tvr5sj^l4h5(%b0?ez(uw`hFp2Iy_5Ih^%`OECrlpz{9%;j7-E;aJ^GJqr4 z7?kBKt++Yi*DH*a>K3blI8@m#x3E)JkOqFU!anZo{KtX3W)f5IrS+|?7*&_iZr)e| zGADNCxVHR-`uLwDY35BYw6t8?UXeA3isO*{(O+!rZwySS?B>&nhO^r(T1O^2d31m& z&n-R`O>#-l!W}|=k0%Z%$gd#gv^4oO!m5`bj8HWE)J%8AsUCIG!{Adcxk8kshP3Vv zo$;GLR?9U$RBui|(6EWq-I*%SN%Hp2&L2tIGmgOasNf)r}}2`Tu!uU1Pm0N|;(B=3RI5?mu?25Ma3%>XKZ@4kXCw)R1D&u_bZ z0zi^FHO1-~lWL^7FX1VZT|$+qX&t(D@BtnCJuwR$VnvFRrz*IwV8{&I?l(2ZX%(HN z(MjTUDr?=5;=p&i{a`q5zuy30hGNbrOz6}6ZEZI;;FVBZ@6XZ#_f@mK?>QOqbm_U{dUO>tKsfAgj7mMv?*C{OIFlPptk9ZV zR&n!jT@S}4-kFO%uEN;>Jf_;`GL%P>S0g95;q{(w$h*4^8UAk~Wu<@c&AQW`dgGVeuq_;kC z_3d{a-zABfZ~2zN_e?aUJo$iQOB24R8-jhVOn!Up!88 zh6CV{$Jw_Bu&*pXQaF?&aTzKB@~V65Pa%9I1T^u7@^28ahaNm$Fnu#Mhm7jVB81x;j6ly ztJ6gIcyF?GCX=2oJ+MChuJkk)Ib2`<B}ko=>u8N&&C2L&#P%yp^(AmweSg^d zxT6Tm0H6|GuL=$Rbx7j^KYU3k=XH>9$f+N==O*boMi?yWXC{jq9K}6?y{1kb~buTa}&iX)iRN?5^FbI^a3exZs*clsj(1-$0=o zcd$Egkf&j4blz=$yq&{4_>4Y$pa`ahR(WqZzOvO^G{3)hKW%5>@GC?2;Yz)wNoep? 
z6XW66U5=3=h%)E6S|GN$c23d_9L)F|bH^pyEe%e2S3GBx_ubOLXOtKT+N9t7+E;)) z(HBw=gt4!1a1j;f8Ti7e&q3qZ;XSO|7*k=8jQULhco7x00QKyZy0@t67SW-Qr<&u7 zptPzgTh99wUE!gR=5nJ`;A%=oeQ#$zg)eYR#oEJT^`no<9f(suC6|}U2DP!%Ktf93Ap&zffEnv|4`bPq@LW#+{fmzI#Ob)kxPtBLA$+(4m8Z# zCh#f(H*l|4>9~bd0cZr9utWsZT33yjRLz?7wwrSt+}U2p-2IY2w!f-TBdF?*uo~Q| z+W8*DSFhn(Gdo>)*f9_1Hp*V|@+|eDfy+j(235Npj#=Y2A(IIJtu-&zj1ADfBOtd5 z0-7v$v+KEi4)h<^51hs$z5np~?e@6vFKr@kT-b;%d&%Z^x1@f<^L_Kge%bpXo=@YJ zIL|Scv0&?=MQHN^fOYMObvzbo%et7Dw}wvFqsnXYVPFSt5+Iv6=X>y z^%g}Z+Zs&7W-n}=9arG4+vQm@@7I?~y=8(44#FNb8bs4`bjQ5U-0qblZoD)5cNss` zWk50s4bmpn-AbvJbFKZ80TVk<3wtJedxF$eS6}wnc_aH1aqV?mW6N*bLzp)w$_x;= zjR6_5eX$(2(d(aHxh9{Ydqa@be_chcM8AkiY|vFr=lYm;>Xs+=e;T4;msS@grrEBkwckcfTHNPLKkuot;JL z;S>IA3CnJW8~4j@gFcjv?Ej4g5Rq}}wMQ(xY=j9RacFI6&mmY56=v3c#bu~~Uc+zm zC1QF#cW_xP^;Sb7?oWx~f#$I~TZiFlw-Dc@;a=2`O&x_#hk`<$SxCSVor=%G;v=4?<`8_{#`N6e*5N>X=T)3hTVo<_F14Hf zxRrlMI9ew=gUhIU$mV@S=T6(xtvy*aWwnV*6qo{@XAq@*eP7boJU^+hP35D04`#2^ zV}GLv-R{mH{-o3K&=3Bcd^bc30jy4T$X=;mtCKZtMuo}za4G}cTB zAc4N^HpI2N()TUqFX$T^jvKdlZ>qEPbEb}YcW z;ciEW#QbQjP_@!aZp% zgM*P%JrY_7Tz#M)l6B|NqesibTLieA1)y~(KFo`3xyi2mYZapU$28L{4!+e`g-jP z9;GiLf7fN9O&fdr2Bep+-zG?#m@4(%G0Y-u{km9)1pN~sA)#ib?{1#K+Yw843c|R5 zzxt`2v%UPA$6@|i1B!RSYeffb0h5zKZ}02xPlfp2^!-j~NST2ib{=vWFYWZD%qZ+m zHbFyJ5<})fYCOI~M35#k;n0Ao9oh4&lSOVC+Ku&K<|~kRN_TC4(4+;H3ke z3`6i-OE2rW$6KCoe^31|5H+;55Gq2qrRsZK4w<%Z)~`+#dkz%{fcH~RStlp<^NOOI zV84M%pyv`{h{g!5u4pe)e;XVB$A3(=G( z+9}AfgYTB46tTty-~8&mnEJz_=27pwc`V6sWdC?|Fzf+{88eoP|BY4Yin3TeH`l|; zkkeFXybrj@59U-`2~^x&st@df%0}QO)a1B>w~Tz;5W!t!D$5s^3~3>p9yoL1l&|vE z2A6@Od1PlY%I^=-S0}3e{{Fv0M&PgHo`)*%y9gB-_0x5v0u2_tB0>W{35G z06JsYWOPwrp{jQG1=qOb1Ohs|KT;GKR=`)iTP~)NbT&k?=8#krxzg`u;RmyS8M&@4 z`t%8MY;g9|-tHmg(HSuw#+jl!zr@5iQ%J+Lr5~#i`9-_rT zFX{HWgF4UIgZ1-Q5I9dhu9LrATC{^=Fa7yUQw6yJtX>7&;cV{ULYhjjJOeyLIKbt{ z&!?07b0NHjFx*R>k1$jdE-y(G|H4&70g79hspB?=pRG#!UX|IduE?1fNa838sEwCm zU|?)>ojxk7{*_)_(r#83iqCD?@>L|ouSmf@v=M|x4xaLqrs}a z^0gU>M+V(v2qp9UB=g|!SpfeL+6nTpqs#Kn5c|qVuYpc;482RP-(V*T?d-tEXMdQ0 
zdGPi20r{-^P|s+LNV(d-GZnn#7j!Y3hCaAKsbzU+ukTRFhk`HY`X-6hS~iY$zaI5s;3esPqj=3q?gb z36b6rMA#NUH@%|>N(mqkLWh9#DkYSJCeoX9>F>JTTS?A&&v~Bljqm&Mjq&)yftaj& zmAPiQ<~8TdU=AvTc++$6;w`2ZvG8Ic&Y8f9lQ_bz=!2RTml}&8k+qSxuA-Q|W9stj zZF%b%OYr3g)JVeH5hMhdrm!VsH|ASQ904I<={J|zGX7ZtTKb>RVt!A_5)kAQK4b&7 zCo>`9hlBYfv9+`Wu1Lw%vb!_l-KAc0LvwTz6}eN5HS~Rgv5n`prvtDvja=`Q(g6=S zet3;;lH6xZ)+Az-z_+rlza&snhPo8#3Tp9)s)!|MoF z(Du`#N)TTVs7QUpKY8I)k@I#!xROGee}_Q#>OrH52c|!M96Yk7S_Y~rS8cwr5L>n` zD5d*DLxJvoXHd=3)pXW=?$BrnBYta%#o~%$p3LJ0@-vrTabD$U-L0@|-82?1?JXAW z&`9v&j+69CXA5|ANQF_8?*Xl4nf)CfA0O8h2b227qn-=WJwvj&oueR`WMJq+d>)ft zP}a7`GRpu>Z?zcx=K0N@TBF+~Wysd%;ewCI&GGC*fwhe%O5m^qk9+1CD)1#}Q~C88 z;b%(8M^9}HCI$e5hWq&2WosVs{yL)ZtUIH}|Frz_-K!qs&CM9;O+zI94@5i(AaD!{ zOd_m4jljbX;v>GzJUFdksUZH6a2y6{A@GPhcSX z_$k0Lt>l%cNc{gQAoUyVsY|p+4he*q>^#_*v#Ii`i$Nn2hr}6chpr|%EqQ#nb)61W zNjGOY#E!?Ux8idnlr0CMW_!Q5RA*2dr&ZUz&2@luS6`8&{+&qICL9dHVV{A717pHo zLr94fJqVvSoJgLV08(w^6#}ul>mf#+lMQ=QMK%dNBX>?YJvr?@T8lPma(BqxlJoZI zb?t0{;JQNqh}R7aQ8b(Qc=vikBxt-lHu9#;TQ0WdNcmR0h@@m6`UG%&l(J2U1!M)d zKH_^RM+xF;7W~mU$LBWu@Lf84YuKD=6PgF#$uK!rC;Ha&ulg43!m)HP}U|UUk@N3&bMJ=h)U)}4FPcdFRw4@*30Cs~X!wnn^oVQ;H8O-0Fs+rVcCCxbEKl(s zfeZAp;&!zE1m3Q}(H=ztQV=4oEd06w8uFWcv#(C?4tqSXwVkE1U|R=Kz<;j? 
z1b(%yIx@Dxip9Y2)5DvP!HHBG3k>m+e`QpB_nB38rM@Y$6*^Pcpcm;-1n z63^lsPn^R@`Z-nMkYRD3PQLs?DC4juz3#kY@<(Z?d;i7nFp=19ey&+?e*>zGo#2l*(1fv;^dow!d)@f&0QCsN`Q;nKcM-WLo@VMIYiNIiR@+4mBheocc(b?!U2cMCGb@J;Kh+W|9{6@kj#caR z=C_bjN0Pa?YMfn*ocInJuZ^+9@SWA9HlxFousK{Oq{d%4Rx5TLg4{evRV?N?WEfnkv{EQ?Phjuc6QBl8k`a{}xPTD1)%w_HfGc)r@8jtsJkX}uXzsAWebIgVi zpf{!ygs00|1Dy0{JL`nlmf5}!3BVJH7j%s9!(Vq3BT;c$U>+((wp0Vv%WwC96TgmyKgee% zp~6}L)skW)b#UR@f6IIv32uMA1gYz^zGOJcpiF8DuA1)R@hu)xJ(JDJ_vk%Xx}F#b zII0DlVwEyasv$*k7ebN5EWyi0r$>AzV;ZDoR@x^IrO*J-tX7#800ie8JwCR?A_k8I z$geHh-moiQba$OVNjRL29~O7fVjxi}E8yTegx}JxJgJ>DRL&f+f)I()#zX4srxDe|go&2bsp&0H>Z8Kgs$x3YxksZ-e*hj5m$`mSu~cbG-Z*NwKp^qJD2YO!0M<7?52uZ$ zFc5=xX8IR&8{nU{a)Sh+hR&$-+lMTY3)8tE2YvbowTatBRW&uV7~qlt?k~yNT>xhn zZm1K=r+L3G*X#jkl^q}m!K-&Cte2$XOrsJZ0>}pEn9uc#T)K4B*lp^j#5w-TA_2A3 zVbHW6^ohvL)N36zM7ulnp7wc}=R8oHIi{Fh>wfW8gpTk{WbE=+%i#ThiHw{D${yZ7 zs`i$P9Pd)Q#Wc`eD*UA=&!XD*M|QHXabf$La;J`1KnhzMFLw02JB5+N7cA6o%5LmF z|GsGxST$K+|E~6Iq0zC(P~uChNe_-wFrx1~p7HqclZVn0=73_z)q&D%Te@H1SEf2% z&^!`DZ~)%oBT0O$Qo4u7gwYnLKfUPhwY=I>HR3X8j7$c&inin%;_0FR)P*xoQ*)&K zJ5y9L+S+b;CAWhnyUbk#q;Ptf4D9c)050OkujvJsK0vk>zt9VsSJKngl&Na^iWWoujWcd4l5XBa$e|T)&V4sc(3iU^(v0+({!ehLW0BA zAG&S=kGME<<=n*2dNet0|jE4^ve)W?a*kqgqD)LI_oV0U*5oQp|P36tp0t}iooN|AGJm6NS z#rolGuF9O&XEE19Lqivzc5Ty$o&JzF zv0wUTVb_~MyrwgnR^W_NB744#GfvLT&`+6&+70H8HLBHhq@|r71H=)E0*6Ak4FiZs z>Ujt0fqH4-K=Z!B5z$vL7gOP=b=3n9cfR%njxRPZ*x~BS_2%&hJUGihC=ak{WVy!8mwxQ zfg{%eEU&y{9)Fj4eY+^H&+adrot|mkT;>a7l8e6zu57o-s8ll&y0QA8Fb&*Ft*NKs z{L~Ni8+>!DSEpx9c(V0N#ILXi+3>0-O0lRX*zVexNpO|~)P_>h6-xGzzl{^eR4$D* zRR;ut2?q7GG*eOQK1)kWJ3g%LpVp98NWbodog;V-5+9yI3MDAv@!lCf?k-60<+XE` zzVabW#Puwl_HOsgEB+1ljL(E$1Cfa8ZD06iWW(s_Xn0~CEh|67_v*+#2-vZ>B39Ig zfr6>qPKF(JaUFIE9j%0W89IPfy+|uULeDl7K2umW_?drub;F&#uQ$a~z2j z3JUo0<;!ctSMQOkXULK5q921dHn`0{%L*pd!Zn07il}<%5V0Y^eYIJAMZ)s0p#cQ^ zpQ*o(ii_Co4^sjO=$MqtnxF~D&jOA?4dpcdQ?Lm8!?%#IRwq$w z;!hs~lnp^GT|D5_=|kmJnxp>~YxW;E@!z20);kmU!Zl?aIo-i!e-Pe6qF!DC4ItFo z)Cc6;lJ=1Vj}6GQFMaO~0IOq=i6k+9?|}JxA(Uwa$jXI)L_ZpU2PDOjuI>e}+8r@* 
z5lMTHU9&?hw1k*^6KyRX2Cm>Ywn5h z5N~-t4}j&{+PXnSo9=zrt5leui1wy8m3?f{|6Vfz-RxOyKn4E6=M1R8U(;iliC`s) zez2Pb6BFQkrvdVhTIipUbHKvq-?Nk4zA4DrF}V)yjS7^s31{--#26Vo;dA0IM0Hrl z&=U{2^fQJ15daUZg5S~w9)!uDqb%Q>6LenBCL3NjS;-$;wy#KJ$kUnQxmSc2I8c{)nU zPZaEE|2D8|I8Z-uvh;gi0|o0MO8X@1rj{*q@)Mzppaj zr@*Gh=4QVEe`*y+0$Dc#dXQi>-wD7zECcBiC{+x~WNOeC2cm!!z%Cyo(f%Vq)T^F5 zf#R^6KQkVPAX&`7iDekIZwheqA`uhH#QS;o9@x+Ba4})PxMu}nmgNXLS{PzC$MXH2 zCisCB9J0}wuukB8iYQjLG?+heEp~*Eq&dSo#0yFXo_$7f?)c3o3SY<-6&2qemHq<^ zI>nw32F0F>B}r|u0<~pyKB^N)AW~l;2zL6}zr2?;fbecS1`*zkjU2d$EP7=JxY8;P z4lH1@0fB*4EBRWqabfb7nE8w;1}`cLiFe`>dWONj4( zYPT5wQ@iz_+O5BFhX2%V{Zm59&q#@*2HN5*en%rC*)|00D0wq ztswJ;$R$nkMLu)mCyW>h}XNkA_E`)tVr(6NCrIdik z@2th)YTZu&I%d)Bjw8mv+Y`XS$;#`3ej6sB4*58twj`7mF|&1|9zOmr-~vzz@d@+* z3LMGk1sx1tWLT^3UvGeAfJhUckeC4q&$GcJ%54(2nkg|wq1Gfl6H=gxRBu|SxL64A zRmF9bX#jD=iY&bgpykRLRzk!>)&nv`Y)G;W4dcf|gTQJRlm&RRynS8UsI&weG z1ht5OtrFof_bTpy?>rqSoc+q!RX8^a__iT?{n!mw)ub;yEk1}8vKcG-p3-Gy*4A`FDiU%-g}Tj&fBwSOmc z23V7Dmhzpt^Z$F@EU3i&cj{&<0WKVs0LpRMdQu%tnFtYxhC@;NONL6wX%O^0I5Lng zV1cq(MPh3NXNzV#PKpRn18YNkrwJ~Wc+tcwW zlE}P9e8sx%CjFTM;GfPTp;pXi+6$n@0kAYpfjr>BsFpaK%Wm#2XZ@Hd2)b-?@PVG zwOKE$H{I-F#S_ugdOQFp7eZpr~ zIN+f0=|dMk!{_Vavb7@T%AvMK2LQ_d|B+G_*sZ3JM!?RUiUj4OMwjF1iC3bj4}`!h z689MhfgV7&Kv9CLHX)v0)SG7ExQ_Cb^3T!lApvxaeXP%V%OgyOce+{>fMC+`|J z^~S<82K0+pn10`3=m$rIr1<-~NW3p)2PGMKIMlGlUcW>%O{cOSfy7&Xkj`r-X<$v$ z%##y!Mi?tv_`I-R$ z+fCa~Gz)g;u8!Y_kI4epxe<*+`9k+$E}+|u#(^31wTyYS2jNpTiAtx78=VD-KWUN zouCYwS&Fnsn~tRv47s%`|D)RQR{J!6j}mAIKz!-nvJ{TrTx9benW*-SJYqg4jz<_y zaV?ZDvFAAhCEHH_8?7zNVZU%%kzch%v=ENgw`Z>==_4>j8Nr@Qhft|l0pK3E@<1Bd zK@CsjkBn|O|0B2r_>Q(9XC6sj#EF8H`70o1?zDAfj<%S0^E*9^?DOF8psmHn-f4phC#QRxm9js{dWrd0!EfdX)!VM=Y*I@ z^rlV5Q8r?T^+_d*@5c3BexBCF>Rhv?ERS1A56b(Nql1$q!!>TLSf}xl^@mW-X6g2>jrmp)T$RnSRXLo9EG zTps|x?2&=i?|YdBl$S^y?g|EdV{}47KC!$KKb_5h*!w=Tjd!~>RO~&-Y!R2!HDi8d zl%z?BHGfXm0jIM)JKH9^DxiegJ69E0RcFu>0)DFv-2+`8pbw|Eap;~aSDQkgB}g=- z7!%W65BT1s5jY>F){?S}aLgKZK02umK=aCb|1`qUee>gS8aYZ6y>WFt7;n>7^4JdL zVCRV=EEIU`5ClB}iN$SY8gRp% 
ze&Ow}COUi%&gT3a`Vc5n{0RX2>bUlMmdX{v%M#rSHz#yL^2%`T3)6Mc6D^+Jt3}?# zxby+v4a%&;2AzGPiEZ6gwi8A>2Aldw>Bxj0pcs&*z|pXq9UZv&Q~N-(mEZF5b~!32 z8BV-hiu7~ehd`}_c^&*7Yo`pjsvjXUOxXTd3%~UxqaQSc`n$WszBuf6l}H+dQV_S> z_xnv$2KnLbb{BmN-K$BuP8{&&)Gc?~^0*-y|>r|5R>yyBllzeCn8jBhH-P+75g~=KX8c6BIPwo!2Q6HvoQk-sSUQ|Z;^kE4r^4By>2s#*8>c;xW& za-p+IU9N-2FR@=*401PT5<3*^cbw%_^3l(Kisu#{%%xz_Cs(suSgz>CM9ilfr_Yx; zMJ}}HIjwKWB!19HU0$BrYtaBMzPx;v(gEK|y1Tlu!=U5tt6}kC`Gdi$al^Pv$_W1h ze+XfT12jN%yG$^M)+g&~;n@WI>5 zM-`Iplk22W_~HFF0;jPQn78ceQcp=fd-WXX{71tjaqWOF4ILvv3E4g{eHn{_+In6GSrGLS8W2b3zy(5}Ndyd}q6=XW$bwh_cmjE7I9ALbDZ7&OZuf z%L^Pc`Cw%Q{I|$q=ay92r4K3iZ{hc(-`f|&R+1b%yqQs~8g@sq>#hsfWOeeZA zhP5G+LK8uA7pvV}1ZnE&Pve?k9PE#F)|Arh1a33Cemu&1k!h+Y`F8L$Ou*uh;Nuqm zVVsx&;?)82vuxn)YW38;P4g^Lh252F1#TQ;4HD%`xp*1+U#F#Tz_*gl5ZO#-Kl4p4 zfwBWA0f)WFPY7vwF>QPPS{Mn`{3#dU!s(Z>^pmlaYI9AD3rmBJqj zzD3P67Rf7A<6#=pe{C`38s0(1QU(lwZB&qkMA6|eXlJH*5@C|0iJi;bi2XMHK_Y)h zq4oEeK<0ci$>p~&<(D0_7BcIWWs}B^?q;M44dIbxP2>|Hd`_jdv}PIjB6A8|u;I-+1cs`*$zX29DV{9hN2%=Rf~aD05q= zu7g>R3G-u`lCu$iVdU*N-^t0CCE1ADMZ-B>C5lU!4=@p^OlPewkkkCocoG@9VW&!t zKbQR!hO&Z5I+$fF(Ht?7(oBt?-A#0(bWXCoN+g3y!xGtu@Tm%wM2fd&%sQ>wHl_nJEtnDXOP?9lb5!3xS#1k<{nAQX6>jmG60Id44(UVX+vqyZT(_+G+f-p+8F zseRc9?#eRrO(aL$r*|npA0m9Gr}?a`-`%}lc&jgY(Cq{vLEHmXdN)t@OCEJakBNDl z)|*~)pyBqiRWAV=&4t0D14i@e|Z0CLnd(Fffgtx9*v=;#miuki0_Z6txJ z9;c}@BhpSk6rgsnPlrRf83~(@&RBwDyJj!0mkKG~hAkW5_nHiL{*GKLKEB3hR102agim8wW-LL7hg_3+7)Z=zE(U*eUlP zol+Xef5cZ)#FtNdR7W)&W=&wVrKwuJ?+5bZl_FbOo{Zw~~UoqfNgR8NJG{$a7XrW@S9Q6FUKIpY>)K}#2;jXSOZ0{Aj zm@?I&)4_ zG4bIEC*&Kaxjm2U*07IFQ{wu<;Fy%Fp^H_i{nLXvhedsu9r27B^1}@O9eH=NSWk7SKP z+M-2oH8wTnC4{;U6CB!D%CgIB@dYh(Z|C4!z_W@N=JA# zqqW(-wYkiN7cc0haHv?%?ZrB)t-4@Nn--?|o@|r;-3X3Y2|Lz!v)E#pm3GC;@c#TR zBb*c#&d=n$?*%?${BDDU!OeS(rL&spG_+#}?q6;=CpwjH7Pf^8po-43wVO_JLs4HW zjF&0B(R%6+>(%u^iM1I0dIrf=K!)cL7rG9dsgrE&!#aC4=S2~mAA$xMc1{u!kX5ux z^;{wO?8gL?lSZbTdW<4j-+3=J@LaZRKQpuK=?A*jM>b7I8uq2mjK;9{Pj~e8 
z^1fsfjyS#dqwd|r^gFmw?xW}Egt^h(t%XRN>@x8=q?uVtS6A2jxlRTEXr7Y8thKB*DCAZ(*T&o5h<+s^sx~*4wm$Q%CL}J}ZKiH` z?Q6BNS=2?2w@s*QoU*Ayntw)ZL_?I~3)Ief`5!jnAvxkw8-Ox*M@0P=Sf?RVdt~U8 zi5KwJC{Cu}Hzf;Y!Ic^-nn-ApI%#0%{PpwmrOlaa*hF)@v~H?u^h4AHGhxuNQ*G&` z$#zw7+h~Zq5SxkT<63dsf#Q4lt*8i>_M0=q);oCHrtc%wW-T#dml1@kC*(ml_U#FI zjQS3E_Yhwlr;A~~83o4bq%G0O9sO>K^k|p66zyf8n^9wMc)0(WM=tKu&fDm457%s0 z%Z=3p?u|E+J~7Y zzEr1Iw-S@rDy0&9^(&1H?CArBKegHX@OK$-zmyvpMK0}+ONHHi$?Qtgxn{bdgdbXK zA_XvgRc|LHnQz@EK1S}W&)?8RpR#qcQg!YTYp9KH>C|SvHOjx^aqLc7gV#*+qd_Np zDEa}63bPei_bAg!SKeVaMwsJUd{9ts6G5W$GP2^_MR8_RvD>@uE}>yjPIKC0*qU6N z;TDdKK`Zw1B|VFK_o7X2tzEpleoP;YK0BJJkJ`Wwthi`+g?fHUzSF2$f|i}ojcEKn z9$n{Spl?x@ueO!3Gv8eG3g2q^SP1OhZ~}-|JPO{FIJ7^-n>9r|$3=CIt#oQcQqr^P z&1_I$FoW(H1fW)NUgj+^7vwh@G%{CJPoMT$c~iR5p<-W2MqM9u$@J0pVQjQ|=m}Bt zKXG~8W>Xz0xFRW-%UEMLPA(qp@f}MIo7udEj7_?~?=~bckb(D=ZeGqcNjP=HRLs3g zKOe7369X@*4`bvNADPhQJ|ADf9sTE^#r&s=qDbS79k&JriB-8H8hnvj%$f6I*tz_P zzb4*OVKNHDxs{K;H_kvmS~a`x+(5q7n>=1)elt~e!Okkb-;40%tB#o4kY3N3Bf>-c zbt}RVX?q>T5ew$bdPAQI#RX5Qdr9_h4s;bf=&pwv0hiY-;ossjyXI?o8 z-ANgi9OVm9JX(BM?KPyFz$A8#SwrS5n~~!iv8x91Yvo(>dAMa-1lrLTmx+#y7PV*` zi!@0P3q!Xxt(DpI2+gFJyLmy<9p z7q1oz*Z)*>?XZo|^6RO%byX41M7O@EGO$J3YDj6roSo@`sl{&i|@+wv`dU|u-} z`82*yXh7(9<{*o|IhADLK;h%$dx*-cOR)wa;)x)sSZ*v-I844VdU`KP+#i_HHw~dY zu($J&Q7v;{RhCYGE2{AJI@vFjZy}0yH)l}V=$*im_J5)WvrXhTfH0*LUNs(iIV>K_ zy|sUTv?#aP3F^lU&#}cs=LnQHB3{b4dmkD4_NFPFl=ozoe=d(zv+Oez~xFvjTXb;`5a}kp5sLV?eA9QC}Vv7vM6e z^!@>qAkg$TXRxV49_#Zg3^F#>*57=oPu)J*5uLeEww@3cUbEN5n>egv*WccLMq4+_S^-qGLGXOSqlJl>Kg z4%(;k4(+aCP)@+J_>m3zv0zo=2RrEV{}Z zO!#!(%Y1f+xuntC=6~>qwi7MJUwvhdnug898O5;W05WT#(Gj!oqUd zXV+75`eQOvT@f7#em*iXQ3EEirw3__JH)fmh zS^n(Oa~kqH`Ln&BxE>b*0m=2;us~~^yB8(I>_Q~WD0XX_KXX3xYZ1DT zcI)Pq(w!;(hQ17S3XDO_2(>xRCiSMtA!k>IAOo*2QL-~VhY_&W&*+86ZB|BOm0n*f zvQ4b~1_m1V=y4b2rBYSX`M)*@VNs}B2}syE$w1_4k!iG$#(Ai=B36|%^@ltcUtrTa zN_tK&?_1t!UG8Y-x!%S`l#og=&Oe-m%PJd> zv0)<1qRNvP@^tumMfQW;Q2E22VYQQ%reg(oL|&CK;B>scx{uPmW}K{F=zSjQpx3ab zm2(o&J^h#B5#GN*hm6ouEoPJ}{(HBhIcIJlOCDQ zo)|vGhx6+i 
zT%BP?yU#67)uZF~aX3a<$YSZ9)aNc!z4Js$yI?J=CsfojFQ|3EoN5N!BC40To1~rq zV@|BO!@}XA)HP4>@>!Xo6Fs$w zHq@9BRkvpuYOe_(C>8VBjYT0Js|XZ8F)0llP|to(=XD~Xm~Afm1m+nH0!>lnHW6>} z*u-rtdgF6RPh+qDP*ppTe7W@TOW`Jx_#R~4U~dH;m-5EG$xMDEA%L7+a1Ee8zZclvX^ z5!=l_K+&%>YS~f6q6slr+9&neVCmYrU2o3Wqm#ASs(YT3Zv!9IzB++x?NpamV-+-b zgu;)8dW*w;1+L&>pDDMG6H;4m8v(oMy)iy09>_1IV6>$FRAz1c$41(EQGb{~ubDF2 zd!azbkcKw(DeJjLR>Cw(XN#(#?4bP}uLiwom3lf3!hHBT=8>gsQ>{g>x2?TfKv~fO zh}H&z`2o1q4&HLeYeVVc|g>*SLDlyoMfdS z<%IT9&_H(R&(=mS5Ivw|J3nYdN;-_|%~Wi5XJ>9=iXV>#Ni-U{44BQetu1!znlu30 z(>t9bD~CFqb8!E_$hd%`u!bMvd*hSMr}nBXMtC&Oaz<3OH2zBFjn)(Te2cnX&)?(x z%e*|;`?8oqdq3X6Jltt@7zW)}&7s-%yP86e-W6eNe@B!Xr+fmk+U=ic85 zeV?A5-g4>GRb-NCtayvm1aPw7JZI)fExS7#=g|7*l4-5p`j^-DEXJ+xXNcN6ed#N5 zAHbBa3IZS77!{e(RI*UcAIRw(g&xG!)v%e^mVFmkJ?Y`8wf-p>oaH~U!V|%t4pOS< z11F`-GE2)6D~n{$T-2R{=q1X7?w=niw4Lj#uvv-=sX&BDN1V6$YMql#mLVi%YPI-b zb*Xlf>O`*3o4AL@eC9nh-J@_yjQjS08pBd^x_jS)XRIb$;=vXS ziK3nBBzpRHQVZ_vns0u5fT3!Q*9TP|$u&h#RO>-r8rLj~uOH|;OjUi7;QUBaS*c}v#IS?n-w)5CYD-oqITQuRO8nCH&Li8iHNk((;$ z9%Bc)im#D*S2kSGUDG1%=$z4N58eZJ7;$U;VgyjKDsVr~4GmFx*FWr8k^NW!4hoF^ z{a(oIy9y+QQm>k*0O!{9hlpJ3VXB?uglPd?LG9N~OZcH(omBPs{t*>)*6d5+@+jxs zETfWdt8`yKb5g-ifN3 zY=1O1{*fcMs`;=iuxOsA&gISEK!@<4srue(hrt)NSO!A&ZAXvVy;*fbiwdB@v)RX9 za;n)^Rerr8NhN#7;=tbSK(iUGw{)7}l`DgY{~5AC+-$D~MvR-hrOP1~6&ZbUMT9xPS@>ncW$o+t18MI` z=Y3Xfm6{$t#~jDnFw{cdx%(-@p~q!hzbGNY>!iDajK6V3*%$li<}zX#rNGShDb2S! 
zi`+)eWWVJBSr*rJ_FocKnN5C;4zXi>Ji_@XA;9eX6g4>wg;7#*apN|nSPGM$TyVLi zJQgJ!@b(@TXeRxJNDZBN3Q$0*us`R|6$VXKcETwbzsD1Uyr9MI3ZW!o7$*yx2U(H7 zFR}j}koi(gjy5p?ZE=@Jts%tgBAJ(Dv=9Y`_2B6ZkUbhto)|wZq3O@)s15Gi{4tWfO(%e80X^X$QXJ&GDuR9SB~4vnF!_ zh}&(J9AkBJ;};FD@ON2o4oo18{SP?$&$qrm15vSX0G2HXAdxj9?KI%`3Reonm=Vv0 zBTfgD0tj`P`y16Sg#=c0Oci90@2`}5h+t6*0sd4VOLK!V>M_R`UzcP&1qSiaUgOt{ zfa_a&6pULdNuzPLkfDnAixNmH_<>zKrqq`#=Y9gbJ;bNz%9ZIyi$Am^)^A_Dc=@Kh z9+9)Ckkm=OI_@%~wJcpwb}V)XHPf=tt1hHHKIgpTOlZ1E<_BYYTqFjzx}6r_zv}lN zTjR-%sW5@t$Y-_$ok+{}=@O-eoIxXIeC9s<0zlR}N}O#!SG8UG9mDny7HOW3_Z~cl z^MI1$c+Hh@@Q$y+tINm0Dw1UPQQQKannhFR=S8+i`~`I=8LmZzlOWvqPJW z&3DQ;=ftq6!DG(+{c<6EA&Z>PE( zFKoFt$+F^G{c z?*C3=9g>RZ572YxXf2U8d>3d$f9(M^Mrx+Oeq7l~#I91!D9SB19VFAARjK1jB--30S zezK`yrGne^Db*1*jf%^OZN?P2bgrBPYsFLgrT4O21$Nqs9V>99WwYM~FTc$VJd`$h z?!q{mCM%ItzZ1~3F)~0Ig3ME{WcgBA;*l^@%K*8sF z7Ou&@{DuoHzS=M22fHO%bOQMAx53dUAUWP1b@`M5Ik?h-CU?glvi+3gU&s8G1v&cc zQJS8xr=IZMy--gwgXv4S9WLbVM=eQH%jd}q=L(&|LSw=8^9Lzlt}DF=7qli8wWeY* z*!p8<0@cV}IAAePbi@knl=(7MaVk`W;+^8%f%=#<_Fb5ZX#r~OuVI{zsT#-lQkxf6n3>ybRN;1Kdz1aI(k8AU>Y{i_z92Ta)9DZsq4X*yH1EL5n(XM%y{B|sAK#>8tm+u%`nrj`{ zW!no-{~l51p4yRpeEv)>xl~@kyIap+O~ZsnZ#hOT*RHJyNLZM304{(C4;mx)SG3xE!JO_9ltxnxeD zvTF9k>-fQj4l^(){*Lw`AdQY25Q0S9PXS{<#s=ulmDKGTEkJ8`TDB8Q6R-%Uf$H8p zPHh09J1jH!cA-=F}^Lu^krzxxW%l-F?3R(ghD#K+aJSsNiR1fLGMvj4nO%!(UgN2zWn5d`HV^KAgx8%V5Hzpqo5~(+E zTI#@*A%pqLz#7&mGf!Z^4xE;v(*XLninsF?(XxJ~X2A|Df3*d_tz-qZp4GG#HHURM z1qOB-eteO1U@b84>%#&AVBj9gzm2>WZI*W69vKB@ zEPRuOALxaL3I4;Li)0RGz)SHkZ%-&o0V8+{BtqQ}of@wLzLlHf6bH%kU&jK=k3bkc z0|PP=mPCi6z)_fn5kQi6|EL&gg21ON1O5sIu4Wziz0_J%6dwk7oX6!}g>V%zjDGCk zA^tdcwFX$#_I4pVNEP#I8o~Ww%g&bp&1v`0Rh)w#td6(nD9J1s*}>=A4_)8{*Chm^ z4UI=rU`|yBEDL4XOC|P^yw-fjvTV3~9rEtJK=DX5}_NM$S@$k^QZ z;~yZk*8=JB9WF{gm^b$lizO!^hHWU?fHq9IUN>Tf^eMx1=byr~&tE%81GK)2h?OJ= zk|3(oMVI>1=G_HiUbefqcHCiv;-Et@@B@gK*d9=s9&RwI^AyHk(EYW5eaK)4ZdiL3 z9M72!XaZb}rnT_*K8NF`O2G_D`rg82KBGJ6r01BA&Bg^w*>B!qwlza_^#Qr0`|7rYS>(CnL3hV>^fePqi 
zbA<=Gr^r!N_^uNe_n{xoOSJx&JM=gr*#?JthW?5MTFmHAU4E_~gRmf`Gb9(fg2Y>N#dk$S zN9FwF!?r>+E>zkv=x6fk%u4`!-zaud%QWrGu-Mg@~9l8d76&>f7q9QEDQPn zb<@BF6$K$)A_Gkesc*ACD;mVc36hBsouenWrkOS2MVh@z5E$do|1icr<#u-OV48C^ V1%qjf@B#4W&Q0|jS=aCT{U1y00?_~f literal 0 HcmV?d00001 diff --git a/Documentation/README_images/ChromeExtension_Results.png b/Documentation/README_images/ChromeExtension_Results.png new file mode 100644 index 0000000000000000000000000000000000000000..0bf0dfd4344cfaa99d377a0299e4d0829954059a GIT binary patch literal 398930 zcmeEt1zQ~3wk;3{7J>!{4hfRr9^8YwyE`<&A-F^E;O_2DA zMfmJwYzsH<8;lVDPWzqwH(Kw~cgQMO__uKC!ZUjYoq;81@Ne~}?}=*(9c2#V!%gUI;(NdW*X1yRI!bKFRU}5%i~IOihl2x17FN}$ z=d&=tKy*7U4UWmUZoijO24Nvg#c?+!pp!mIUK%amEj)jsA!}CI{OeQN=J$ZyqKB^CGvicxXVw1lxmFiZ7s$rs@Es0IUlyad5;@ZWj^358*{L-*dU$)69$ zjmAh$F<)|Umk#w1v?-Z!g>32Vts^jK)#RvdTF;2lk{x4iHEERwHl9^+sfNBXMg%uD( zAk-&Y42CxNq4wb|0?vgXOccXMBynMdCk1#Aq*>?XWn)wS+mN z+{D<4*WU%4FuuzaW)V( z2sYTV63)}bK30e93ac70Fh(ber^;%HEDKtZq?0+Jxdr=u#1Kz0lD_;H5Cfa*uE1!H zX-<6b@!)k)!gX}1AR7r+)T=>pqff5yfxayX?!NAR%c4Zth4NCLPGi6Q2xf|`4y=xu z5(Iud$a^C{TePbrYw?CPrYTlKysRKShc~xPSzew|!B#OuDY}qGzDS89(@r2ya|ct} z&qU-ZhkP2VcD8ne3 z1z38_?V7=xnVMHj1ExH4{pPEtLS~Y3YLxd<$A&{nLx?Z~FeNF7QeAX$X6vTHr_w5# zh3skujr1+o?Nm za1Zfb<9~kBfM1XQ_I(n5FCHzk1;ckEDb`)Qg7*pd4S1-DxQSA%>Us*Mc5AT)L#(3= zD`tx(pw5wHnpLX+^_T@_ds_Q8*DTkx2O;F9fQDYHP`pshcwxzMrG2E+q#J0L*hZX^tYc0}1}S1Y#b{aW}X%2O3dla}^Gx_rgG#+#nv@&$$t8kd zNZnP1T-Dm#!oB9qv%??Cp*y{4)Z8jok3=u6+mE+#5!CEuPa0&Ofl3yt8uB3HxNoz0 z#K&HRPa8{zwX-LdT60#~l3LW-Zl-B>O+3bj#sS7WM+N0JG<&fLvF%F?mfgKb1wX9u zi}2s!H-GN_oWh=DrO|vKsr+^J#A59*aI`5!74TdsQoGZql2LZEbX^|@cznv)?r&c5 zt7o4}oK-p|Ii>n_;b6KvR-e(xH+wp@pxks6xiYn)?UsF;TIN_*wo=mUEKR6UPs4j2eA|D6MZrl#@l0k%VNyp z$V$oPOn}0%xVSlKw~e!;>qN}Vt>U3|b)SqmgDFDUmPW(P;UeL5xN~ql%~MvRL7Xc7 z#s0`@d*RkTYmY7!mD?*neBSBOcQ3#YXN8`qwWoE)VPVH^>)jr;XgQ}Y&(!JEs>|q& z#+75k5iOx2r_Azsn=L5+W%R1yzJYEn-Klv?<~Xf6Z7@ySS;RSGWw`AQvRI_mbx$`9>?Nxf?JMB11MMTdp7pIe zjeK%edKz}>RFF9o>|b?cAlKiYD9HJ{ 
z&7ap-KLerQA%9^&PM1uWziY$UWxo2?y$0kNlz^g;xH#mhXlQ3_46rw|aeyXOz=ISZ zeiKu-hl0W;|9wJ>D||eM^#99TNzFk`T8h)q#+pvw$i~2!&c*uM?|z`TT{t1P*2WI{ zBreug0DDds91>#?F|8jgEnifs7Z4goK3K&d7vQ zLHO%m%^`pBkeN9+eB-32cXoEBb7rQqu`{J-zyuIZtC3^M8*7u>WgWkOk8JenQVk z$3XwDz9CJyf0uH~nY$QUsSBH1Lu3XSgO`z+oq_w02LIQie-HVOrfUCe%E9u_rvG^K zKbtDs8`}xlSVIPN;QjZ8{nhxN5C3Y&P5*oC|B%I>g8rivB57VEZu)=i8ZT0EIw3n` zBMHofWtAXTh?@QWz?4D0sQ$b{?qObJtWUxkprH7n#DxWvT%eEB;Jvk!A8yx@7Ib2y z&q%|SNc^#rq5aV~$q<5L`NUM^eq`$j34jW`e|Tq-eTK<~L4-vR$!(axR3Q22qg0); zVwI9~<}tRy%wf;m%+<_g?VJ%avg$U@bK^X=ob)*2wuTwC!JLf|9tNkp zjy7e6NsNVfNR7d|C*u4{Ub-|$9xPl%7_kVKW?Yj*EQ22zZVcA)QN_@#QQ3`RYJ}En{pQW3U`r`pR*0Fh(j3hz6XO?-rkVV{!Dbzi#Tvqlo=*q z@R*)Li?pjfrP_s(VC?qgb#;}imcb$vQl!C1@vv^$?c|=IN;_K*|K1Ph=U$}Zn7f(Z&J;84y@Z5SoX6J?+J zM@M2ZhV;X)dBX7Sd;gn>%84)bUte65zaKEj^0OE3;CZfRAxaGnlspnSVyo zeBe>Y`7-hej5AeZ@@%1VjcG0olglcH9{Pm^m!pL@QQ^XK_TEGq>8+T7TJH_|UKC%h z$NXJTzEE-H*Q^Qwtx##N>z4G3F-yW-ggWzJ3u}|MbzFKv`V7-eY82@o&GC1`-I80aM}1%Olq@qB(#5Up4<)5&49R-F+2{3rv24eVYg+KNL}%Xv zj#BcG_UmePz8G z3VDqdOFmTWF|soRLhoRFU7>YlIP*n9bNqUP=X(c}{NrJ>HFAQkOm|oaEO0`(TgLvv z5TA^?Lz?g`+TQAF{I<1<-;TC>xMJs!_shCOXzCZ!Hg+?Peab{%@}bM}7mBzA(Z><3 znJfMfTZANcDukPyS+Q4ag^~xp#Qf&KRcB14kgEEdOv2>xo+N{7)IO0VU#Abc!|sAa z6{GceiLVNA97*68!*SD`7oX8~*@k98qa(A5AauRk*6Jwix(#r_QyhH+_iZ8a$=5Kz zWRO{UpX=6wLdyKtE1uWp7vQ%jgIA4-+?oD%0?S7QCQOEU!CS5RyTq4!NfzhmuWnuQ zwYIJ2Qin8NvSJy^4R5a`;Cn_4m~R{Da>_I)4!ayhUf%7S^1BM8&#=u82g=S333Sg7 ztD4{p%51$JTKAFu4xahhjj{;PDqyk6T^Jc=;BAN#7jDl8@ZC?rF1uknxLe_dJJgnr ziy6}ovsR<;$!dw`C^bH;6xXRlTwzs9?iSk%*Owbcw#0KcVX>TqgIlbqoU3q58fPk_ zBoMHvOAo$$qYvw-X5eHz_sKT?TJguMoxtm+(k-7hv%A)3xAj)gq?fe~mH)G0 zx#64WxBZRz>}KFay4JTjGGRPZ@Qy9#56V#$XVxo+pDH}9DR-s$`^mBm^*|Mm(1gPj zX(XwqM=qIB8!QM!Fa3?#1SJnZ19`2b2+K+%2YOR(H8&Lv8thx*Ap3 zlU$~WtTQ$uBP=t^HTZ& zSTnZPbC*c9m?lrJT<8tlkMFm@W~+lbU{3htTElemC-=0!;bKc3tzxQ(J zs--Wz*>eNKOHEdFx!~)7cu}yDIq;Pu&_B(9>MWLZ%{g>l824i7?~lCU;tZ{xbCMQS z+;K!QMqKvqg|Ya>mV%(!&b+KiR#g*qIR)ytuXUG7bV*t-`l}*R(#N~O6ua$h&dE1m 
z+}Q^c5GqC^_C6K>@K!>A=y3YkVnGc34O_!`hokUbdmDWni$nKTg($kaKC59-OC0Jh zX1U_3+4jNvviDc=l2~c>iF=JQ8_b_@yB?8Qcih0D9s^ltq_OYbupB-LU45p~3Kim% zMPo=vVIkTo&V8d=`woo4e&i?-lk~_vx^+W*razkda#LJ6+uoc;1$P_%!fz#BNG_tc8S z$d|$*ojw=x$mXw?$$zApuJ10hndWzT*l3<{bq)3_%?vq8b+s{nFOX)cpC%>H8o_>4 zb@$wGg?h0FbFHd-pNhO8nU5U?T+Thinf8+UO3Ph1&*5$oIQj^iv&!_Gj5-gDO5$?K z*wQGb8FiCe%5k-Qn-Th*oM|ik$d^r%!>slD+qLHIU3;vw3AX(b^n=ko@PK?eQu_YsggBC{cJfD0THK}`G z3g$h*D^#AIJ51q}qwLH6J6Kv(%lTZh3Z2nt{>bPF_vfwFiupNj&%UHHHBPTlW!=Y# zu)B=Gpi&*TFuGjxD5R~B>D})|n&)$K=IVv7`WL-wg7P<;cC;y>u*}ba0j~Px?PfV+ z*6ul0Ggoklri)$<=(O{ti#lO(@i{Ra?tQpAjn6(ca|LS!wbL3zO?#tzXnipDm>w!J z4~g!yCzex}-&Q6KYPoAg8iw9JA&+v;)0p8bdK+pnM9Dn*UN@Gnf1WAKa4Y01UK4@= zX{$O!BxOzV<W7O?4Nw?B~ z2y|i|0unMNj@P*fj4*1djIGq}w^rEXoA9|>k?F$(;}uoGOg+3JuGKc$ajYMFv}HZ6 zZ-kFqin!y4ma1%>LXQDNl&&*8%vWVBFWQWDC-Unw=@ni~vON=Ors-`iA2ck-m&(#3ENf!8vCncH zW%|z+S)4*^PTQodw~ny~9r)Nxt((|3(>`iCYYv4{4;OuiXAscRB1(~)PRf10sM!N+ zYL>5Twlv#JFMVkiVOlw}c!3N)EYb=fK6Y&uxoJdeGLM4Sy__hGm&DD+fQ{XRqZN7q z8Yj^k(>g9q*{&WP&4J{j%e*XIH7sb#%DR`bX;Qf2dE@8O1vH%3hMQT(IB3_)sXY7Vj2pfjoi5F=jR7B+wvr8o>z3vY8|aoxA zs~*|PR>)RY8V;<$Rak0k7dBg zj(0@R#P8VZ9+=7>=tkgX$1* zB_m_{m2!R_c}ZqOd8S;FxWlrl*70Ewe{Cc~|aQ&3$6B;xVJ3oSn#nSt3nkeMqjW7^Afrx%>86x;JBUshc# z9`yVyM(**SLC@0y-HcK{qXI^Zq?I zZfjmOZs3D~8`RCPA*#g+|M`sgw!ZtE7hExv{1;jV8p)bj7Tc{e@<@Ey?b#R3*JU$j zDfDQ=Elz?h!mmhFvPN|b4Q^7HS{ZxyUQJ~a!8-F=m#sy#pMlQiDqJzgz35SgA}ZD_ zE$!xsrAs`#znJiP)zY|^Te*R24z!+SzySOXiD6-4A9!n_U-d7Vdlp(?idUyYE!+e6 z_%Drjy6}e$-13QmWlx%*JV!yRuZoNSD6};l8yZH`%~h{`JxStdrCoJ%4n+D! zs9MC+L9-+TZpP207M+hQk(fuWy)OyG_}mo9%{DeuA%_Xsz4(N=E!29QPr*04F{ilt z4J^bkErk~XauU}G-p?YW@8t+$89k7UCE7DjL&K=(m7+C)77n9Pr;yi3S zZU1chte@U!-@D?xCvACg>TWmP_z2(td>`W*;~T+rE^mB1Z84`^O_Tk*_lN$6AlJ>Y zcg|~R?2rMj2Bo)7n|B%yI9|TIFvu>eE ztF_o?W0dUS&1dmj`Fl@WPu<+Mdq0}b#!?{jf7s5qy<4$czE1T@O7Yq~h|B*Tr>L9{ zu#mHXzriS~hy#tzRv(1Y8|~~_8ZYeO8!0HK6EL)><=@gr4K|^Mx*{~~2I15>1TIve zmt6D&Ta?P@Ok0FuTc&d}l)uxPMvlX&=C#TU-z-t7N2yD9S(2j_3JQ-J#B4~gT?%I? 
zW*wQn=ra7A$o*?^(AN-_OaJL{*T|O-0D>(rlz(=3HcXiacfLDuePK&t8Q7BD`d*CIbDUwLOw3lD;M}5tYw=G^NQh3`J1y{sNb}KH< zYw343G2(vQzq`~d$3Ht=x_Y)|GLP!d81u3kcHCO=a9eGcra-y8pMHq61_B_VVf?XS zFjs)ci}R(5cK-iQ;{QwH|D_X)E>;)St{a}O=+$O6;hYIKSciAEde4R*hgF8Mazc#~ zd;+TXq734UHX>m14-S7pm%mp^37`%Q2C>S^5_~=_ZicH9QJF_--j{u4mPXlz7pGDN z$OHKK<5?OVH(oH=&!^?7W$0U)nK4v^q<51h7cNASy>rNmXw_Qjidj&NTT>N!?z zfAM802Z`O{CDYq~#g-(UV?^FAF$UB$sma|nOu}q2&ubpP@w7dfldLqaIc7AZOkx4V zmTCc~5E?D#%k#XKrcKcwMPWYj|Cb{Q9R|%|!!yh4<@xIbA)8eqEUzxjY^ZKRk{{BH z7oZ5MNH-wgMEjZs3Le=avV<3Brmkyjk{?XrSWqeMlu_j;MZl~OPpfGntfjWy^+NmV zzi#<&ZQwsd;2)u-r)M`fEtJps=yjs7@HClnxf2~weKsNZ4j|Oztua6oh%I-t?ENO0 zAa1da$^2Ms!Lk3-7%#$;|CLq4eEMON=IB7C(tLL;IrYbHDcLn=Ppx1 zW03r}!)FHb8+bQDWq7&w#Y4*J(;C^HOo^N>kj&MmkW&@m5XLNkeDVV-pHjV5IV0I= zGOJ~n?D7nI2>H+dW&5EJ?-BV&0+i{n1@P6CDaLtjE4AwFla6_F^k!{vR9IhX4x$Z5YjEG=TPAN&WM9vGVDSo66u!F^r2^ zR0nAg0Y+I-Q5GwpFVO$p|EBC6^{|(hmm}GxJ3BiVIv453>I$^|5uc*<=l9(x* zzTa4EY52@FE_E5c`QLQP$QP>A?(y!7zH*2Vtt@W#tc`4b^q4x}Jetex_IQ6PPcpB( zoDS1kpYVTE7=D;CG^$UZq8SbQ#N-9iTUyp_AG)8xMn>xCu(u1Q8T*$z!{ittwLjw~ z|MzZ^NTAq}azrEJAueR$zO%%xZ{K9W&8!O6X~l7#3Y*jLzbe%q$~M63lO+^{hlA{D z$@{bD5}3-+CJ4iITq)!L2Hh;!5NiH6JN~nr1X2q#82FC0B+mt`05tl}hbWcI@<~Hm zTbldhge9WhJN_?G_0auq{#wL@CgRM@Ok%msYRiJv!?ub|8sahgKcvmihNz^XGBgnV zAurpr)k^C!V!1H^3AoiC8EKbQWAzsKXzVpK^{PVc=xdKT@jqP5#0o;GPOAphzEgv| zp_}+WTx}CGG>r=zVEM}@^W~3{tKXGRdG(@T>z4q{W`>4_&Zjs_GBAW>P&E$oFgRCh zU`D^Qptl=F27j2L57F1m%*>8!Dn35l^XmTK+{~r!*C{Kc|FF;jb(ni1UfWIKPZs5I z4Z5BJr+jJwTH4UGi!(E^HQ^@RVkB{}6jF&Kb4c?@jaUFh@x;H}@wG5aUS6Ir4vZE( zjYe%DrBjRt**`ia@ku3(N=TrLp_GmH70YS4UY@=@&+;SBY{Smp`1ZvVZ!j>5pA&(< zB_KEyYNNise$HxI9%uODpI!K*i$P&SquG}Q(Di5?$u>~lCj5`ytuPP*1euJ7um(d} z21Cc`qC`=ZLzmG%d=>r>EylLg+aykM0&6mosDJ1k+-$qmS3BR2^j8SWEWn_mrVdr# zYr9>vllD=gAtdT|-Ra@7t_Kv% ztlO0Wfl0Eb*Vp1B&;K@d{s_#DyIJ$;{3I1#X0LQ^*BLp)H~4H}p8*l51S#{bFi!f+ zYMX#xm04GCjFf**n4bWedUtmd#!N&+L_tl>Oh=_M+|nHb9?bd1_6!NAu$jq&QDI zT|@P`xxyry27GbQb4_UWM~8=DJAlu_ChGrT+7oy_gigOYQ+-8FF-7GDtiv(4+@PQ> 
z!;zhlw8HY>-%vIGX>(Ebuq4}p6QlW%kV2)0&KBI5gn(vWW%gM+#@Ltb=->A4vnt(z z1e7g_;yU|=1Z;I?ywdfiC15)ufA*hkwslT%$&Jh7LG1M$uVOGvHHg@`I!oB`Gc==qTc}F1&uuh-8 z8yX%C#}Bx^{tQ&u0Qks)KlcA9`X?0Plp(4AwSXBp*wX#lzN#kTgHgmOKV-&@z~b`q z)3?$l_?v(Q-5aT9sQ3c^*r*A61hr7D zMirN9G$bI8snJ5r7Z^+mDF$}eqq(vz!$?~rm%qJQ5qXq2DbA)fL=x>Z6)FjHkhR`?`plB5GS5DspdDxAMKXCO05y5ramD0= zKLwV!nM9R;ACBZVmo`|CTRiQg<0!^Mt$0^xgV?>c2W|2A*Q+L_f_HZF<=T>@4U!*| zlKu+8ixg0#&K=&+<_!Y@!N`(YG=TU}=+KM}agrZE;1^CS%)hlA3eiCp2F>n*=jy#< z+4;n=&x+ze+GK;mTf$}C-8TeOJFZ!5qS^TzH1_kQ(W1PXloi&?GbW2lY0Dmu?i>~- zJ_J>6SxT|B9yRjXIB{O>GbE_36IApbzha9iYSm>{w+#lDM2W?hC2ayFcx-`nZzdOm zS@&U=7Uh%1`CmA}Dxw zyvaCDI>_F0H)&@#tZ`dgoP;yZ%TKS-@jk#Fz*lwH1kJqa4IrY9Va`6saAcY`gp z41kLqkMSbp&U8ZrnuZb4?E%;&kbJE*f%UJm7%}43Xdcy0)zXQmN3T$+7qo z+;Zf3XqKJ z^MZbfZYP_0pBhF2$WJ3>XijQ_@hvlc0Qs65*mmN9l|8iFWnOzCT1))FKwpSYI{7TZa!@)2_xpy2}HrC2o@?VD~pSxRvp^x4c*yE(2d_qcN^<5 z&dVx|{cBE5{Ce91(G{&0oZNMs@6m9yIqukjfGcS z$1+)Ux1FPc-Dh*#wd2cH$iUy|NKuwY*&-JD#&u+?dFvM{%-ecVNJMj2)(I}=NseHfG=ckvPaf&(T z_v(WSX$7yPW>g>M+FilrIW4`}TFCvr7BJO#WvIfa{XW1yb zOH?Jgef#qc!d3Ggw(>9*t^@W7(_pDTle0*GdCwYau!z1@`UkpFdp5(B>bC2QaqJQJ z6b}DARiFeET>zSbakQqI90?pqLN&2&&C_m5oHA+i47f++`n}WFL76-}{P3rQ?%yEz z&$bit5Pz(X1*Mvbz(K$LY!M`G*Wv9+cw(w>kfGJAL#KA{sbH=t>G0SvytDrP063X= zr;+1+HLqQROV{R5xP5X;TRy8n%!$ADq)=uBbW%30DPYy52+&d^?W`!%?LqOXm$2~i zs$t>2i$w=49am})y*p)5dJ=2El~^4f{wszt`0Mmw@i>(<6fQ8m2qHMr^pPpgfG zH6(xjyx-h3@F5?+zu4LuD#>^$w0fRp#V9*V6miUK6ds}nWnqnRGL&;e5`R0Y70+9z zC2RIAd$>&&>;(13Z&}w{DM<6l*m>wc)ofR3pLua`$%=QE5Y+21z7 zA)wcMQqO!M79c|BK2os6ehR)@Bkn&b(FI#9+ZLmqKddY-vpTC60WgO@{0UJ1_#W_p znXBts2HHy-+OLw`&8oXe$WzrlYo9R827Ra=p4P^+!(P!L?}a<4!d;lcjYC{p)=OC` zvR-I4B$rDh@|+9|w&G=EMecJfwzoDKij8uc6u}w%RFH}>|FfwbEN^kv$VylK0bZmA zHRe*%vsOAD;$$TOO$Yhbn~La+9w3LUwFa-39>5opO&#YIM-%jVrI~~>CJNbf8n4Ht zHK`jMt#3XwVp}=23x^+$$RU&Vu|gBei$X?ks6h^-Sh|x!2BO(cgAJ{!Jxn?O25i5p zoeFsimNTnh*%9KV{W^IocnZm4xh8A2tJfK)yq>QzY}r#Pd&+>jl!(X!_{=g$PD_e{ zLV-n!G2Z^FWwCv#&y=H**!@#Dgx_mpb?-gwd9fF&r)Oq9iV`|14QbO5c;0VZhUz|% 
z-6BxCS6D694&Gmvl$M6o{P1-=1@5IUS(n+9Za;(1yx4`?!6!C~12?ivM34~cfDF@5 z?A5Z4_)4oe29fV+19c}SlJ|%9%g@2-NfyJtFh+F+{b+2&j>?uko>z_Pu>3isaA zd~3~yJVV)o+%i&{f3g*sOKAH}f^^!{6Lo^7tJ-s9>}i6hajH7z{>n?dG#2$2VVEBm zD*J4J>zz=v0~QL(Z=2_oI4lN;}88|7dEh+Wp99|mPTf1YcZk_Eka zZ$Mzu9S+k~GtNt$pzV~@p__^Fc-G#oWk1TQprmA!4QM`L1Mb9Y9v=Ye@}ic?0z<sr@=jBU{D(eQOTE|vL*V3NjNI=Q>-2NLOb@imGUH?ce zd<(A|>!Qc;(Ax>CD@y6Op%PDWpUcZemluKBaupcz6M6|%RaMHz1Bs=&PZ}(6UFWw; zHa5vuV_auFfLriM3Wt6EvghLo1)k$so120B;%+w#6Po9Isk&j#&qXJ;wM+rOFu04p zaQvlGc)?STyA{)^4zjzVf^>;QBEc?5T5sGPh+>=V)YTLAJ6-j8gVFeOIlNUqi)Me( zZIHb#8_J?sAR`@HJ8WBt_I1Blgih;o?ot`dVhE+h^0(_r(dHoYg&;RL(4aK32DZe% zLmW>$e*>nSEHRH(8?%#x#_R2T^5LDaoeZa<nZacxIMOFlpb5gYO~Rcfk{4*vVgN7wEpvr$>d>cYuCrx15I3w=wL1uM7C%ZjRzCrO1Rp~`HyV7Jm2%PC2E;N;YTy9ekx z`r&F%Qe1qKSe}GPC|2alZld9~A1W`yqgi|10C=vU?VV}#QiU#`qKPGSo=^|?GNIk7 z$3PMHXnM8$S#Cg@-8lY_IP*I`PGBOqgAP=3h9iQ3Dm`^?0#S(p&XDM{ga{-7 z|AiYBkic!qB|D~P1-n&`C98ra;YrNNcCO1dE`6yGGVVF}ICqh6Sut}+Oth=>j@h#td(7ZMeV)7aPC72BR~Dx86h zx-X!4yAEjhM7uHWE2&OFlQhK21_T5&2YtfVl5<-TTrx7;7sB?65&h!r*`b4K z&NykKL7eF>n}{D^42ij)&x7~|+|F7lU0ASvt8Vr}uLco-aBy(>Qo%q3wb3WYzrmJN zU5mKtf~FFro**8Om~uTu?D;oq=0k&ij$Ge(D@fx2w<;CfxbIo)uca0Fs)@v`Nr=r~ z>!~n=QWW=}^uQv;I*w5cx;^^sMjTT&%x$IW?mNRl{z_oBIFuvfw5mLKsgb#NV! znkzakeCoWJb?QlHewo)H59MU1PVf|x!7NEG7G0t7Jl?JxGG#pg(Lq`-dsWpier=z# zithR%z+CWJaV=tAz48G}mX|65E-Our)n{#wpwQ!U#vZfB;V~JmlbM5*VMQ4Zk+!?m zvm_AljB*DlP2 zLNr@NAb-AQz1Lb(?m@8#z(-@-iF_S?NKuxH_IArab;$0j3vO}+bUhR! 
z%X{$E*`+c`LtD?}y2dJcUYSnVP9XrN3jj_ zLu&R4%coq9&nV7CHp{NM4#i$cL$Lr-9F@#Uyy~onPjv$kUyrk3MZI)=4K?pXexZ}z zU8}>{ZKiA5SZ(w>fvyj#nmC(Z5jUKh2lsPBgwb#P19-RUrm2ku?|M>v(Ec!LV>DN) zZUT)%{pJz5JKMky9u=FL$aX>iJ_#amX9&CAuO&#}vhGuDwulQqn@~b%?A@w^Lp?Rqj8B(=uZ0@W?IstQ z9X4w08}Mh%8F9!?ZQ?y>oE>a^n1$#%GyY--rv7*${ZbmENhA}QXW6L=X;E>Ivz7MF zo^F$FmlC}aR}sS?tnE|8TlQEYn}p1E^4u9amo`Y#I`7>YiIzzvid)0e7& zp>aYcb3`9=MlXEfkhhoZ$HPyAa>2(n;|7~WeA8*ptL8Ks8#b&TU(=vmx5oowDzLI}cexPmm9D8NO{AO=biY* zdTl4<;pOSKG>4TbaE*$MmYyC%w6@mG$Jp_}hae@VU!yR~pICUBKJ7tD`KD>NyXpiX zFFng->TSyy&T$c?A1#ktyb3;G15JtLNcvCPATGZA9Yk4(y`l+u194sZ-kT=6+wYDr zA}0a^j+>6kne4U))UZ_08?rkLBYCjxqxD3r)jl5jNkjPAn$ynq$D)LE|=Vm506oT9#P6n_gyK;Fl5-$iaEo4Ofo805KpkzpdUhO1e!yn9^ZjNAHo82$U^7adr0q}V@ zD!4$w?oGSj?WjLYw+#W!B#LbB&~uC4&o|3Qtb?ddBlCULb?6*hw=SW_#9n!Fw`q-a zsqCvRIm(?ZbKW1F);BvqzP$$e^Ez(UDZt8goKhF`^5#=cpE%ks`*9x1SFhhhL5S=3 zcd!&wcYAm3FRfM20q;&AU}|JxZ1=Gf)sv*MQRJ6(!wPI<6X?~VGlX|YF+%{J=Y@JV z{YL9GUaPKqy)17p@YmNM}gP`5YyD8c&U_Dx@|oZ5R9hGernQXf3 zRkJCYNgvNB5R=UGDxgaozBvaMf;DK*&NA!AMr1$-3ypAV{b8 zc}`cu)t;w`mQDHC;u7k=Hanj^13DTN)`e)gj_XdG?Jg*KXI{)`@Lj&;6<**lD=XJf zZ*w}mXaq;5&KM;}x;PW-u z#&N-ye?e-mfdrp)j?jRPN+{f+N*r9ulm(B?5K5!_^Ns57H;0Lj0b{$}*@kH8=#~3P zj#+wS*Rmoa0o+$3%nDk^v=ND6QK6S|01pojiDi{v_22%G`vKWHjx7YFwD+YapJzzU z?tRtt5Qn5c0iR#_lx4VD&o0E-FF-hKF#><5zZtOK9>Zr0y`zaM1m-$09j|n3>VyAs zI;YsN>O7(v={)G`$@h>8@(#PbKzu;3$kzs%1kj6g{7ZvtHMbhyVvp)O<)PLZnw|;`W&4?;p|F>{StIN7q}ym z2Q&~2_GeXftpVi?Rtp%EPQQ7jSO`4-#R9!a1FOqElFq|r@(?OA{+l+kZ>o79;cgBc z)t{_gvT0vwwDhjfCD8P0DBLg1iTF`7P=_4X*_J)64IPK|i=VY@i-U$oU9*pvh2TLILt7JJ7i42_+eMP}5JqG&w$$A>6LIG!v0 zh9Z|8zs~m5j#Cla?*0_d!?G0O(LX?2Qyw+s`$x(c!)xx^%XhdFmv<4Vm7e!(8)^Tv zH8(ehccglsC@td65RwA0Y;AHn(eSCe_?;T7DWQ1U{Z5NGAYj~Jwv!y1#>(1zAS7Ve z>isvMcdyCspU>!m4^>CI^we5jtBC17F3uZmbOlgYnqHAIyKf1`dhf)epk+=H!+!fr zOUwEQfuHBjZ=&CObcC~M2`odPY4z8Y>f^e39Xs&NykrJ%I>}kQ>F0Ig-)zSK33NMu zidawG=tY4@OI>Q6xk8eE>c9j^qBPZef1wT~4Rl^u@pq>vuM? 
zoWQ|lv%+ncck+B@Fpa>dghaJ)2z%RQH?56g>v*EU;koLZTh6iiu6OY`p04bQu(|^ar*k(S-e-4fNAkUiihVVa%`51p^2C zpc0Vp(P;>wGsqFN>jp&LpGG=-kr8$2q1+})h7Q$gQWonMtCspVVYI&hYZkp!)1T04 zNg9Pn>U_OF>5Jd_^z?*#=Ss{+soHX1@~f(uxyc}Lgn(Q7Yb**I%t12k(M~w)*`VeT zFIL@Tv>f=$r-vtZL>9NMZQ00AKU*8tJ|B0%pFn=lL6+fu8KP+=aZB3Kh4jwT0CR=E z0z$l!^mKs*E%hPvO7t*&yH;N$F|i>G^};)9mBi|^_LpaWb*_|g?AMa&!FB6?BExcP zsT}s(s_mD3(4pxIB4;ElPe-~hOzPv1+{YDkP+-ky(%etvhJdYz&*fxQiuwim$RHNJ z0qIZUvW7u3xe)p#>jj$5O@?7~zS!*D75Do;e7$v4RPFcwO?M-VfCvLBN_Pm-rIeub z(B0kL-O^GD7<3H`-Cct8(B0kj9Paqset&;FYq?x3ox-tX7mlgsW%Ll^`< z%TesM$|C&mrz+4+oE=Ls6d?8H6QEV?`a?Fi*7B0I=?_k~f;|Lps&BRiT&W87E}>`0 zi}tVXZXkSCozeDNji`2Sea`FVJ1$qF_)}YvLy;?t2jY*8?A_KgTq0e8&?p~oj8(4{ zW23N1g zuz7q(VSm2T`fUp+;?RxrWCz_}&<*vmXkTX3s!!?z)t+D^FrUH5#A>9ORC``X3dSLk zw}pp?FBQCc-g`!^pPPq=!U8T4K}yC*khK^BLgE+qHR?<15kaT?Jrh@U|KNdQosk~d z&q&IGtE~hD{yG94{`BE5QV={>XaIp`$a0f*LU$o6QxedHAL5_sI&SmdFa7qX?{2TV z#!l5|0qLyB!|*PA9Tod?CCC`(GYx3ZFBta*X3dAWUjomPKA1rdmr z@9gbHJ%N~#{2a6*`S;zlkv`WBx9?rw3 z3cjM&F;x1_1So~E*i&bZnk_rY@!4+mduHm*3X%Yup`r`bj;qufPX|iAX{{QfrtTP z#mTk^%H(U@QO5SO#DIM#$M)2+Z9(nqr4tfr`ml`1gXl8n_6_cNJa7^=^UA+p{O?sa zL)s*;AGJzM$CRw>96&J||5tMbEZ|KtLJ&NYAG(t&25h|ABL)h@ahOS#7xanqq@I$7 zM%qW>1-ym%`@3p5AObwFde0I=!v0BLsX-W%5|5S4;v&4ATnZo>%WJJWh7|wHZKboX ziGT;*173SKbPPky?kc&U(7Ji=>z=0`J7C}l{S-$l_0G)=#;5h+g8|A6zTkzhw(BCQ zda178@3FC|Lij&aop3ccmKj{luemxxUOgkL!~L(?3Eav)38B$sng8%XE}%dLik_oJ z$&aP8O$bXx_gaInJyg0TDgJS&-6BZiO4MVdunlvUM-H!+))t-_CH(QU6hx)-6|YvD z+gc0~5`edp$4y1{Z}dqRN$a_8lWx9Px5P|sdIW0i$BnH>eeeGqH8mnZwgmIKNC~xW z5WITMy+HLOIpbeW$dYK%u8UIcbd$(l(ba_zxU;dti*oD#{`bZyIl;4kX91|A6tiJvF)vG5lsxN(BWd2_&@){$Ol3%5_x7YC37YQV)gH0?L^O{n- z+(dT;`oc?5|~$=0U`QZNBI~s*r4Ewhu%T3b_LC(@;kHV%1IEehK|TN@m-M|Jt>xItJYRQ?Vjt-Ve7w^>YNy#dPadMj-|i z`_)c1h76Oue?E1r7|ITQ!4#>3)-#piD$VoHbvCG)bxuQeS^w=cuzc)tz{0Z2eLl_0 z@NLi&mndnxGN0ZyK>gp^SrJbfC2Pqd2|gd?rpx?uMX3Ft7yw5Vfj8vDfOOEo10bXo zrC8*5iy>5Ls4BI}bdzfTeEbbRlmoJc+q0SU%ocVKz?1)u=hzk7!oDtq5_RWp}{XcN6ussT(&CscZp691qvy%a*Zvt}H 
z$0^!{@T$^grd>7Mwf@^FKW9wwuZm&fG(>nI0|hXcIynV}L5VZAF4BKb z)eU~&EVh1plPv(u!NJsV@)40~2!aYdBQtYSd^`~ld9r`8;#YS~`{T6~#uSDZm6Rmc z*1nzsNE+c2qd#^T@&ed6O=aZ{8&;Trz9j|bOTX4u(N`=i$s;358q@ZFL{Y*ZOyTw- z03J$9BR4ZO-OxAdm;cA{0vomo2G$1^n=}Orc(PICw7hs>dW_~P=$9|8YK!?_1d{#; z7@iR*AdJHRR-yA9&mB(8e+EKb2b?s<$wC%%ez_RU`&LAx$LZjf7qUIGDh=i zD@eyv?`G5a2et7ZvD72r<;CVnXQP5ucFPf}lZqj|q8YBX(fLtWD644382rboWYeN> zTTMTs;IZD3Ce8oP6ZusVOm@2HEMfPFHOxb7Ie~+366ODa` zbdX->eS7tESqL=Gao9UFlw4mgP@!x6^3l>X_K(k;e2^qc4}sGVyJ{9SS8%flCn<7e zNoS{|nYsBE+i_Dvj@|V<9R6gR|JI~hDtaAwYuH6}zhyYHkF|TQD0L5@I0JE(6>+nu z@$L1w4L{+~pDhB0pPFl(w)_1}$%;575)(gtdd#Dx=Fo!W{E9xU!x0(ju}`FzqN3ur zE-EAwFRz9}D%=U$2Bz%$gPf?w^Uk|-=%KOCTWVEIOQ6_5C9kovwq;tT)!>f+d9~2- zcVz$f@39^z3&uHhH5$Ho&v8(IVBy(goKD@=){#bCyHHRe3-}}3$rIpijGrP8yS(X< z*ZS55(g#|7C->hX3OV|M3daupSIfPoyd`M}NACgSIi4#!{3|RIt_7+-=bPWMxRp^~ zuH;kj_0jo_V>Txu-7Z20Lrng#3lK)L4Ksvk#oU)5X#W$zp=GndVku3a1UOJLI8z2; zIVLEc!%4^Q9GMhtXHy78_;-*pM`8M6W^3@)mN{ZT=A6%rjy15nW3G8E3I z4u?L*94;d;j#*e(a3~w8Z2LuEsl8sl4feeg>(B2CD#XKPp!3rf0RmzmiH+Q}0fPLJ z$1cw7T^Jvgm49W3Mn_?$i~aZd>;iYJ(rXJoi#OU4X#5}w5%oo7Za2oNA#2Mz4L@+C zf|wa{G>-{0YZiPSYGZFalJozCfG|Xz|7A+R^7K?!I#9(U7%X ziePb?pPz5Z^W5N6Ot^acbMrmtl>+DmS=&(fYvJsg8tJp!kv(>UqOjf} ze+=xYWC%Lj5N(IlIa`g-fp)7{DAy0gu;wBN#((+LgTUq-vBJy*DP9MSM`H%~$v_QQ z4aqN5oJNiam*?Hb-zM(V&laVHgW#nor5kzb7wj~Cx{diAqN#4|HhT_7&B*wD2FIi; zqAu=E6hIKXwP>i=LPk!v>OV6O20SBz3tH+4E*7?IsOzM@ zUmgVF$|PZ`E{lXzdi8|U^!=dv(p5TE-|r!z`+8NFIs$SiObr9~HY6nqKzynNYBDdk#u9xa)c1XB7xJ-9@_10LWONE9qeS5&%#})sQ3pdo6*Y zAg7;#;N>%e@en{KbUxackHvOr1m!O4I?T< zyO|wSEH8MyQ}J}1-uTC|Qwe<`-J>z!p2=RS^~}N5880v|Vj-vbC2~ zwAd6PUE~B_aaV8_=k%#y8xA^PFi*wqn?=A`~j*KAENJ&APuihjjj0o;D7G zKY+9+Oh84ZD~gT(QvgaZ9wj}{I>I-}&K>sW$y0%1Zed|G_ER}yAk(q&=TG`q3=AA9 z(FCCz@?bMyG#)r}r)^1N_Lc5ogRDm3Iub>VdNV$AX(o_DRaj1zf)8BHLH4SUqwRWq9&*>@O$l*Zy|P5b&D?BStK}dKl2$5zy#U|m8duJ85ZoU1fWp}gghkJ>B@4`I#}0uNzx7pbM4D2^B~L5>If#= ze8=5PPw33m>JIEMC#>M^>{911%yhi-OI}TbN>+DyKJJdJXbl|G4jebeh+M-yq$8aZ zm7Z8K)dN9(Q3m^An7B{-lxt*@58g%0%+CjSswlH_$z;>hrc%R>R6!S!6CfIFeUNxw 
z!X8ss*5%aQzF&Yg@a*4Td|U5vTmM|xSpiPe1n>Sph`>$>{*F~g;Gmp3c3z_~?p|IT zUVD&UP)zc1At;=JRi(PNM3e~$PXG892pyoavZmZsVA_rqaJR0kmFxLyHaxNsSzSQa z&JRwJU=D(F5}O=0WFm+Ib&InrGhu}Pt&kZtt8u^Dhb@538+q?;FCqHfKP5dHK?GO9 zSptRWDXFa>hwRgSy{qe<_b0&*X`qsWN;gIcz11VVJvyNxy@vhvnNh#53KT7VKI#QchPbDF zb~`Imf_$X8bbG#*R@6uo8A}nh;DCTssIi)6;Wpea)h|(_j)BGKm~+^ADw>n;H;m1L zRoW?`a`P*BYRd#kTzdeU2gkOGK#>QVxAAgqzFWb^o+(^kn)bmXkffBKlzr;dE;l9W zCPX@j6Q8f)V}ERk_HQO9j6}fR6-B`iBXsAAzoT`n^%`)UhCT7m)%S%wS6f}Ieu@Dx zWNvx6($@C&0FqKzFFr|9Z;e6mBg?M`5#M=s5e>K^(rx!8pYj~%+|c~cs*p{(T{kBy zyPs0xwk*f0qj&~sQs1A@v{Y7Z#wAJIE|C2qhwHA*WWCEri?Q<7TV02^+H+2Bq;zpq zT&_2)CgW3EMsJQRnd5WIaeH}H#Ond(q|sDvVVw3SrVJNB5C(Gnr@B=7G^E`Xh@@ZC zB5`8w+E5YTK!Vv+oV}f7bbMWHg$a|s?f~heP=RkQ%k6IdBF*fP1L2N<15PRa{#_af zTn!)BIIXn`cFyD4&W>aRGzo3)$w1w_RtqXtk|#V>gC;#sGw{HqZ}7If!+z;0XN(!# zE=m^J4OC#bQ-;jLrNvRz0!18OXkDqRJsN{^~_hynP)wGbb7Bbznv(OSILe8=;KpGCx>N|Dy{+Np2xK052XW zn07w`;P+*95XXnvCc|FfvfFayGR!dE)D<%s`T^^sVF-j16S?tDv9RAjODT(I1-UWmvE#PEqUC6teQW47!yH)u)k|O{ML- zJg35B_1lU{w}&(3OF!mQDcpZYuwz~0#Kob1rfs=FQorjyRcc~;lhhq{H5FHrq3ls3 zh8jL%*sl7LdN;RVjML<0`b1qmrb{k%r#&}W0D)?^Q;_P;GAg!a{}B39S7Crk1V5n8 z5Ahb`NC=;bq9Pc;VB-LSSx26C5oPM3qQ?*3SgO}F{(@IYw?c#Oy&>#2g;x~jwMno_ zy(HQ%;j3)Vd@{efZxV-xX6>~(cYaPcduo(0v9TN6?S&8dkyK-JMJpHfh=g?0Nh2$J z_}o2HdtKQf(X(ma801>;rD&~-t$jf_TUdv{ z5xhdvoG02B1^{7+lcj1q3k%5vwBn$o~LX zzd-_92}pWVbr`Hv5;~tyG3!$EMKj=O%sm=Ku@U%dhL66KdzoSp}ljcp58%!}|*^-?J4*qIn~tm2X#BlxpU^ z%SBN8gH1BSwhQjF<@?@pflH?YCneuX#Cv1uI?%zckf)_E5;SKOIx;1Bhq1 zHHU7_bybPiKqxt_WMu5x+FA-1OK;b0ln?-ptobW5M+U47W^CIu@?~+Gdct(NGiLBr2sdYfiKADi#Q+N67LO!S#-b%?+-jV)G~2B`^uHXhHB^_dIp^ay9PZG#}8yejP- z5P{R*A68q?sUk`Iu89RU`JA>sR`@D6WsI+~l?O>siX-gqW$#Hz}M zgKN6JiKg;ol(Aaw$Mi~|r_YtLX{@DhGeS;61Pt6LtUpe?%2QJM2qq-aZlKbm9&YUr zZLco{LfFRN_B!X_8_5At6WhA?x!1fg0>aDctF3PW0A%*Nzs*;TW$vxA>BC@u`$6^j zgzNi5R-$#sX8hM@KWwUVrpP~3;}-4O(5`IJf4B1NuTqh9U-r3Zqv}*Q68(PN!Qve^ z_s*dnIC@;C5J1>(ItKiG>VHXDEJb_qpPdZ^pEy&k6UN>F6;@Io^(lxU4h{2D?NJo(;Ju=9*+Iohqf<2^O0;g{c?BB 
zQz{(Yl1_7+G0w=rhMg2vcD5nM9|(+`@V@J8JGx}RZ`q@Dh->Qb53yG~yNvn${P4Uks?2f1Am>UNBTs(6xSLbhwH+7=! znzik34vENl-*ao(1p)moB}-SkPlfy6dn5R}taG`_!8vrsT>zjQ5R6hH=V!Pk~h_qop@Z~G9 z*owTVr2vevoXnG2sU^4Eq$=)qy;o=RxpQG0v0)KNo}Iq-dw-!5s($-h9tE;XLP|JFHQ7drPz=ySbS9K-fwHn^bl+hCt%bpJrA4#=m3Ei#&>7c zORB*e$+S%JoAH_r6KD`bAQj%bJ+zv3p!*Q|~G+zEy3#`gyzj!^4ON%d{ zLoc~UJ;p%#?sQ}#>RucU->xWE;`aR6?6a5T#9-Lwf!!ZfpUgw=){(dc)$^%p>`gX` zS&Oy^N~Pz9=hIln&GevLBWi`Ze!rrSob-4{KZgk3xmd%$__PJTD91pQP=w)F_u>0_B zWBm?XE3F4ByfKWDO=ndL#wTNRL+#k&s8&lGBLekW*eT|PSJU_ZWYJVx6^&I+*JyBV z<6-vKn9;{K&Tz+b^EdaRP~O&3Tl(3hRW;HSDjzq+}q<8wYIZMZ&J^%dtXtc5UgYR{@x@ z0q#`XQDLm+YtzEtUUzYfUMs;?fpEo&CMF{>sOl{AttkCX8il)qR0+>==Xs8xU_t9k z6wUo78reJ-^azrtzcd2#Vt!f#olf~cJV93LTDze&h+3XCx>t^7p0`e90ML?ut3_z9 zh|mrn{nS+;sQCNQT!#?Q`O3YVCCjGLT}878NF?BSOTzEpWK(G#D9G10pQ!g*z#B9> z^~ZxpOA##OJX9$OA&f@@JqWb#G#5!M+8S3R+IG()yarGz9i_ z4*g{Ca_k!z_x^dc%<`@3&M-iAta~@tW}cqUb&?B@MU~c{+3hpFgPfu?FnUHK3#CqvzN%HRJ0jY*dW0%mUPg(gKp=J&TZ> z-Bt$XLdSsG1ND!n+AQC7AXU9lKeHp;e?P5xu8ki>`Xtup4X2L$n@j$(3&^ZrGzzV- z+LjCY>@Q%9K;|+!*R3KuPavd;dV*5d*h1+Hg$-o$SZ#TFIPp-D?79{)^y>U&gu{g$cYEM zrc9qHuEHv@PJ?Qut5seLbG+mQBb}nQlo-3{-p15?{0Nx%`3QX2UTlB-OZ5SLt$J3t zoym#ueEW9P?TA!KA0`zn=KFi@7Ar%4FGh0}&x^%QE78?N)wDwviJe+grEas4364`ia*E1`Ra~g5-922J6f@qTJsa(Qo@= zH;EU6;){@%)!)A#e&sXKGydzBqjw9t=J^1nm5asZhcPN2zM`>YGwlmKtCMJc89$BR zNve-}<|Tzap?pt6!9Cvnp2Sfmyf3%DFe@MX$El@SOtr>Zu#;W6!0C_l zZ_JNHw?2T7m~J-RCp;7)x#)|>^WyTi5lI{t=_WYl#v~r;l@+W*8=|Sbx>v}!T|rL! 
zOIkJ>%Uu`Igxzn!$u!noXoNrN>Qc7DOD^8GhZvan`|Mrxb)TS)kH?@Q9bm|D`;*w; zyUjOX-Y#a|&+exh&)P3@O_^#J%3o}?vfPwHI+v&qx{Pc*ZSf33q#hkg>uDnDJf#B$ zwj`RH`RQDOm?4waPpbt&_gsHl0*D*ZytC^wBBN&%x{1En42_)25wAQ|XF6~Wf^b)9 z7ZmBw*`U!FFw=CcPe@(>>%wDD7;j~wxd}llebmh2MR^$?)uG0UcYu8hoU}k?wZWwG zOV#TZoR~-&Us#dF@9&ChQe=!!KM6`Z(SZ z3kD2bhx$2k*JCP}43(4TN?Q8y_eiA)ti+jJ#_f3@$??WZQ$`l+xodg2Dw>>-xVfO0 zj*8&96w@yT5VywV>a9_G^4-wyvXeMCfjy`oc1HX0K|e<)jmxoB_1h_Rlhl4L%qaH=A5yJ>^va8gU_Sz)fz{k$JqGwli zX~2?hm~&Zl*Y*BB`b2YneNNxRy%)u^UyX3WlfCjr@lDP1nC?RzexdbQ2G@p7K_3VC z?o(_#t+M_d!M516BxhB&iF@$Oln(E$10CU*td6b~1+}F}WWS`T!gs5E&)S(tmawLA z)aPeF^frkeR79*J&QiNU^T7W;BWNRJ9ohxSrsHYCYcd^2-qyZbht3TjtR4eC+xo*s zz+=~hVig>n&N~FeLtr)o$NKF$d>Me|RpkXQ4VP8>qq>rR1Jw@k=r?C-sX)5O7i5uM zXKu?b$~?#5TFCS(`SJmAdVdd0)`S1#uZ-ycH6w`D2rbCqi2DKiG~q+oEj{9bwok1AeJ~cCecL@Zr+RD#X2KmZfIYreEhV zY+vaW*%u94jV_~$2F3_ZZ*fd!b>{DGkurRnp>pKcBo$OWd{fSGXJ?gRfLHhN9ePZ{ z^jHKiEly=|CbLW5#H(wM-eS&ET&3v*pTY`y`)wde&m$u{BBS92m5{ds9ClNF=U-kn zIW=+*Ng$5MaFXsG+s837$uzrK!Gdw|%rmY6I36eqXqyGS`xyjbv>%VV#_;s%JkO8C zssP~H?*dmFUtb6@Isx;In7b%lJ`1-u9JFH38q>i;7F+sLIaUD-nIoxw{-Tp-CDUV% z&8Tfy?`Wlm?bjsfFd-PUkI(3y6ur>YC2K^P(V$ahRqAd3%Lh(p$i9$CX3=xO6n^WU zci%`RaJa#@r&lXxS|ZeQ)}kYCad}g(A(Q;`g3@K?Lpc34IWXGbXJUDdbHyF&=rgae zjMmILE0$3gawY@lHj6U-2Hx%mummcY#J(U`XCBc!N{1n8po)G=<%cbWlEIJCtA*m& z#Dd0QDpGUh6p{w2Cpk&A7j|9OTKMNXwspft*9$6#D$4>b6UXHiCt6W$_}-;^xZgYW zN~~*TK!$Z}%33CT?;bQziC9vrH^-jT?-yG2k(r5v?k|US!PRdGe7mcFnOhx7OUX)V zC~Nl3AxJ6`63ytnJ5v?>aL7H}tO!9s=~EN}!>Blt!HemU%=a^MmyXqhU($7A9kv65 zUIc!$C`*;@ej~&5YS8kMXm8Ew{P1FuQ0htC@M};K9=_0xZC!-)jRaL&ouVqI{3H1{ zrynm>`apN$>xV()I)yHidgu-SE?bfWv?K~8RlWg(bixlGQwWM{Vym$t=}Z!1&K9kf z9oi1#qv`%q@$J59*mqNbTfg>9(Pf-5rxILw=t`%+*5N5(SYg(L%Ib#AXz)#Fxn!v8 z@x`fG_SLJ~G&YC-ReE$MN1W*Wy{k#{i(Bz?I{Gztl#n1OS_{lQbyjn0C$VdIJd}0s z+Di2#VIb=zWhVRP7BQ`JqKDeq^fwivvDMB~GM*1VU+laZz^&SvQL$OLj!`PBQfrNJ zQmQ^P;L1_TQRtY6iqMW_NvePLbxg{7btA)Qx3%kIcf(MfLx@#U{&rJa3%g^CL7w|% zdN~{e*17VoAmW|KsxH}A>>n*#E-NMXgX9_txOnQU=hm9>y>ONe@?8Dc$X>7U!_B}4INJMHBw`Sg=)u 
z$NWzD4l}u&=53jmtnoGwdI~U?jF059-^Cip$KHHVQorkApZZv>zHiXCdR&$DgOMWf zerV1OVM8%-``*%-CtVKz6$%kGs;Esey+m3l^#}<{F4jwlVUNTwAI<2m&_5QpN|Zvx zYqFJ4#M8uc-^QiI29jbBC`of0RXo2x=1)KGo0{U|&|#_D$DT^H0m@YzL;aum*7^3` z=HkTAQdJBnwjBe>b6a=I8HlNVY-tQjXIoZET?I zvzk3tIL>RUMe9^C`WAsiNI;WJMvOB(rX1dV%0N>5LxnyR>5!#ou}X4Hbm_>h`3ScE z%Y+W|x4mkq5~;uwLMTgePFUToeBEm+IArj0ZpUR{R6*b=LrbiT;8gyb2}uOOozgb; zp2Ju-prcPK0PPKPzSSkJ7?ETc9H^!bLjbxrRp6k}=u@ivC;-js0Z=abHxUUW0rf`R z)8QL=g7ekAx_xeOmYyC{7lwz zd9ut3L{8^9H5jgK_XFwZHue4IP_mN0^(D6c|(@LDw>r31- zVl9=eM{i4HB!mgyqnHDI5|iy7Cw}NEL$(AqN?G-h@oDzAu2tdR%wL7d@F}n*-EoB+ zC0KmJCCoT&Ikd~HvvW(t!j>-H9R1dn!M<&UBKf5uUM9POtNKlQWW|cb%t%ml_pYg? zHJ|!6rBT!6t)(r+3^osL5{m2nCVzj(&|Q!0%@grG?zMpRJ#H$22KVe#(P;PyhjE4i zl>z3wi3%aCAYzWywnqKFdH=PN>zbCTN|2Pbym)a*#5N(!?kQ{_}dAs za}kFD^m^A4nD^Yay&5EIsww)V-mBX+-Y3VK70Juww+?~DT?z>U?wnb}Z2qdIK4sPW zUc-yVH?N8NWlHLGjfdQZ3i-XYWPTT0X}fOfBu#FcI8T|;&e&+rpV$;u4^2WU+RiA_ zS7nmYub;ZkS(?0Y2KDP$YP)(Tk-avIoigopcRpC7t!N1*?)R}Q^GoI1*4lnj6Gd6* zI$nRYh2yaE3tiLZj?1diNyP+YUd>Rdo(valxV7rh8Z{PTP;u7FI~}gNFDv@$1*4RO z==Q9|^hfi1!X^nGaS6EzJi`j_dykD-R7$f)*a}?Eifp`O;%lFyJbU(?ll80F_gslR1pC-dCc{!2Z*oa0m+E%f4>|2m zfm->uo<1A$ujrJHu{-uMy>38*o^iJ<_nPR+(yebYz2sfUWOU%ZQ-n6z5wSn=(?_m@ z#ufnaRyO5xf2(DG6!gezIf|#2(i_Mgleb*>7$1L{p0OrvT2i@zx@Z4kMaJ(@#KD;i` z7@7eUeMpqE;9nZg=*7|SFyOFkU}AiG-I#iR^}E;;0xp!at{B^&wFOa})|WvA?5#{1 znxeu=fkd{ySq$GvbrhIung0v=qf~|A^ta(#`NEEO42;m-__8xWjS{+1m}^<8fys=z z%e!i!t8bIy%*bm?j0A5~;WpB$(_#d*?OqG1IZ*@ssz-dkwOPEF(HhO9`f%6`OH(8^ zwpp8u*;+9DU2zCtm11CXGxRLyX|v#Lv!Dl8=Zc@liXrWzJD+u_xpBEb+D@yQ1Y7Cx zT4-3iax3Z60}o1E%`3JN>9a?-=M|3i7hxpXP0(K3t4khZQfgY$xD;f#@K9J*hIz!y zw=kS11R3|DrKbd8ucf;PrC3exTK-17rRaaX9mO^|E3aP{@z_R~>6CT9<8#!wY#sCJA_z{LetK^G z@tR|g7#97$Bv8s4$|9A+1BJ7wLRVL5)$p!iR)HVCUY_d+J{waT6z%=g*X;;ik28>9 z7V%!MWt3Xr5U~Cw+wnL|@{D>g7E5<4IYnVbl=n+c#IKE&%R@|YzD_!m40l+)gKO7rQe~w`z~zOy$6Klv(nl<*d-cAq~%!+?vO13z^yIPq=zdIMI1q`&&UiMp+MrSIyr zAi^NdV91k!$hdhY%{RQ4*vUIr55N9yH|xryD~oOTGc1iJ8+Z>IP!YFWrR54)2w@PL 
zvTisp(Zo4q4U)%N{}`K%tUn5mlWuj2*A$l`)G`;>b9BuRe|-87W3;|=+IZ?_YVDTJ zx+=qZ2C+IG+luK}ZYKV6oc3In!?Xx@m#49f^Mq;xLnpb(YTglAofZvMNpEb4Z*tB$ zhU%BhngpymF6~-i`ltH9dMgLFub_1X3@3D;7`SCPM%;`Z@*$pP9&T;>Ycj^leS_|B zxfS`(yljp?CwG_PurVWYLD5*TDY39hRXdI5Li#XL1zsG_g{rar;&+A^=P+TxQ1 zj}_oci@b5;5$12!cxEFjMMELX2t$kZsr$ko`yJNAu2znFTjVsAaP2MMdA zye=UHE9!%JLu|Hlp0I3F!5oWjp8NOSW;0#ILLHW{G3fXQ=QsS@%TCAn&mFo-xaMRb zHszPs2{EbBMojqw=4;N=y{Tn&!JXd5!?DIQJ{-mJx`o9)AHLE;iiG{mY%HqhBjndH z0-CJqig?J2OjrVZPv(=DwXB(&c0M|^DxzzAsUGzfvvk*#-g>l!b7Bj&0o%{aXu!)W zMlDdbPntyzG!;rVA*wtiS*|-Dmg zT+E}v{N=u*=1zm!9xN9p(VUW`0DP;;1uKSPuXk4WL!sugBhn0513=$ARScp80>yLn z`~?s*%SF#b~d7iS@{OV7G4>w+^ypzj)GS@?%0$eUm#n~ zl(j&sN@I3)9Sbs|NXmO8G6^}^q-c`Yqs}-)+K0QffTkP zYm?oE&RXVD_y$nfuKTU8Q%tfvdV9(G*^9=v8)xyLV-x2t;tb+CLIoyTsuN~c>gRC0 z>%Apq3kF-MC1;JJq@VL~O*Q+|PFpApDY_AHl8Y6k}2ZQSfn7|BGP||f&a)8eMzH4i$Lk`v0ohlwnZJL8*K*xsxDcqOl z{Vv9d6OHO#Obqf&IoN^W?b6X~Cr3F!pOi*RM$FTy-3a1P%aQdaBedvL!z6Fsws+Jw zlg@*i^_mqKj*-4bZFE*9f}V$h9P2$ZwYemPb6_HzuiW>AxX#Wg>L+IrDzp=W2Ys>i z4059fRsFfQmZcS)FF2MJl2eTuV!O4>8WgPPrWSweA@_Of8t6Ppg=rk;%@4J0F+owS87CY0nq#j7T+AjDZ8O zur$pRaIcy7d_~IoK+whF+$T+qmZtHS=S%z8MH3oAZ{7{A%2Wb}_hN$0UCzx26rZZP zSiNm967HQ&S%lL{zBW*8TqBacTYdfMNA(eU#oTA3HO;QP7*KzeHq^>^cmbT>Hojg2 z&6+fRJ27~MCsVFbqOo0!E#t?-{CaCI0-?vEx=shWwRkbKUso`-&M}j&uwNf!1|`Cg zlM7&K(rd8cPvRp=w!!Rj9#c-bDY9Tay-l;c9o8qE@jgTQfG1y<8@?OkLG|<2?K}+f zd2FjNL}Oy29*K2H4n0)g7c|!Y$*VR}3c==eD07)frdy{QNAJElp|I|5k|h!0=HrNWjj26S}c1^*W3nlTYspI z+6;GX3dR{IeVCB2D~YZmxiR>9n-NqWK`GD&ka+Shv0a!4!;EBBuCA(DB5CybDbsQnIWX91aE_E^>uY3WsVx{QY1 zi-p%4=Y|%ah|$CB$tUDlm-RK7;i!{I{FCY@?qA_Dj!pOc$ri^k1uL(Qs2bHnB<@Yd zw|{wxqOYa%v&W3Gevi47q*#|D7Od`H)e}s~VN5A=f(Ko(>Kl2!_U1Yo zmqPG2D!oNzfOUihDZglSg;&sX$0F6@TUC>JgGewfQW~tbc?~mlW@^rid)_ggtgMs3 z;f}8>mF+89nZ) z{M8XKIC~%Q7t~=0xq6S|RStU8$$uq?jS}~wV@CY%)mk2gX6=-=7OZ%*bqDkyoO)k?J zkxelfhw-CNOf90DpNTCwHTPNX0~Kk`qv{GjCvH1NB!7Fpbj72|K2GGe@l?Zx$pl5Z zVw2jw%WF$!ErBPC$41b$Xtu*{M6WvITp2}pLL~8C^Bv7O?Y7^v@eKc46-yeIqN?%U 
zA5<;Ph_y8Y6|U=FT(DnN0D~uA4{6j}+R0N-s>h2Q9O3V4Jt2I|8keftpV>-=L^JLd zFy*KZlw1UcfC)!&VtwWPNhy6*tgr*p6KrtXD;bDo7-T5J=IZ4+QL+N_1c{4h9#mrA zp;6od$lOm?{x+YsadHzbn-JLRS}%zLb-nox=oz=n&+&AUq);m>Exk0xU;Ku{y4yK3 z^F&oD_i&=duaYHZsW$RFGMc2Rcns)blzw9~B4}t=kB9YYVs@>Z@1L({ z={MT12#)i+z>0K2i49gC?+Btje_d$868w~z9mjRKRzL+>Cz6Ry<=Ohg2QI-Jb@vJn zNm**eUi}wVo(eEMCcfvwMnFKCh!$&rVm5syLeG;@2beX>}`GNW; z+|BDtsw4*yj(4R_a`)XLnQ)s*{nPU2_W1letlx%Y1RHY$Rx~%_V_4!b^iA(ZUJi}l z96g_vI=X%s@Nz)kQVd%o2S&%-bQUqhZR-vT0S96>#xt5&CcCietw=HsT=G&dv3>%l zof22h@=v3>eWUaNF2e||SH&?RlMw$;9f^L;Ur>zmyhOZVX#_BuB35KGRIo)wx8-kR zk8~#&{&B|7mzAkbLgLm=nkwSOV>P>31Qg&steJVwQBTvJVNM3_IDYpN)Ah?VXYDB# zrKgoLX6s-ewIP0fdONA@dN`Eq%BCG&J@N0jzOVbOl;j$;+g3f1y;Y z8odcgR499fh%;@Hn)%RAkJu zSteRab0V|#u43K6DL|`Y^`e7ipHt{YWRk4s>6av0B;~g&_PBtlBr(K2*?&9OW03*4 zmwkQL)hm14j&7FqbRs_AILBWmQ9JZKLqLpA7M4V<>ARp%1$qyiHfIQU`ZG4m5;!Ka z#Kq?mMSOXQ?C)>Sd883^6-z!0@;h$4ug1rn;8jrR)(F)<+~`1FTuZlc?m9qM4_$1t z%Eq_g?4-`SbPZ$MjTAO+LJyl`k#yB!5ScOZ*|!Cz`z zX+F3nzf6or36VQR;>;*4h`&nFpI%r5_n+lG?qw?SeexmaX^5wV8NRT4oQ}M)2mE&D zWTHof7Gwr9KMebK7C=xE-{?SOo%ceyDLMBzC*D&%jX96m!m74p*?W<;&*O_*&LAA& zWIXHRZYSB95me)z&pfpmb8dE1@;*(ScgAxTK0HkOpnP;V)3Jb0y?IY?j%XZoWs!P z16v+*W0}j1qQf-b4gfMN9T|_e85q1YlXo4Im`ptv=(?=*;{E~%$_#9<)3A5M4A-!-rO6`{4H{vDz5;FFnm z%BC1jTQwE7ZOxupST3wqS(g=HD9`v*k<^}MYw0;z*5wj)SB$D`XIL@x5AyHoB{Axl z=WDVjH=A56kDqrXj2COZ3VQ9~nPKF(&H^!9MT2q*-5NbP`vLq)>Y+J6Gm~SAa;+qmYcxLfq(n|vGvwbRd!qZuplBzgLF4YcXxNFAdPg# zrn{uOyHg3J*)-DKAYGf1?uPH)KIgped4J#e!@+QjeaBjBu6fTj=QXeEaIaQ=@{yNx z4HJAZwQp@QB3S;mKrC)B1@?5}+r_)v?=O1k;V7A&yHf*3+qG&8SN6^IVm;6DD&8^0 z>H|~;%!_+g{{63ze6@tyPl41)k!YYXOlSYs8zbRuOz=l7Wq<*+7t>D!iG#sA17LZe zAf?JF-C^d}W~$YXRxV6|NDdA-vada~n8b>cp(Z5$RO2Y(@{zsl zEA4GSj0PG8I8nBoYCo5tqp*%h&vU~nLzx$CF+cDCRlAd6Tl*%UWSWJ&0Jz%auKPjM z-@UJK-M<1%A6$+C^s#gK$M*nK01V$sc_L^-Hho3Bz^od*}cDm?q zhUE)fV!v)wR7}wB=ACgx;k(J%RhmUg#cwIG`ba;q4cU-0?5x@fPvBo??}sj#oZGms zxJ`0s>n`SPa&ggPkFifZJAvnKPUJ`Y&^ODm$yNJKziHX%@`by23KO*S%CX!~6J6iI 
z1P(8vt{MpWfUQpEl|LL^T3Ki8zmSQ9BQO;F5aLm#i4kw=;%&a@c^JoKXu9(~so9$U zbXbOfF`asDb;&nVteSJ7Y6)kdC>D%XjkLgS6?ynTu{IAl*6cU{`8_^bI|cZe56oOIbbYKNBu9BX}6(t0#DKD?c3J@sy(9pW>|-LJt! z^)1tfAk%3=NM`qQJ8BlVm0=Qnx^k)4{01SP(A*(f#lCG*kmhW4ckQBq zG4hCFvQAyJgslO5&gV^;6_YE$;9}=4(ABbCEphc_a;~o3#+~UBALfWWvw)HMXR#F5 zRHL-cs!5D4T?yNgTel~%nw=xvmN!k;1+bkRER zffz=AU5b0%gzGNj9u6*qfv3}CD#Iej5%7xVKp~O(Zn2h!8geVa)J_I-T8M>#74!K$ z2YztjveT5}axL3rHF69rex4y(%WfT;AW}i+P$I|IuCzfZ7M6)R6X7x3l{(c3IT49# zN2O5H!LPuF40P!Gut~y>%o=KVw!t?sm(NOf6MJn@lC?cJ#urTe(>(q39zbVsD#V>f z(Ya2Z6qiqxD7$9YyG^Ri-4G$*UnnSI79hofC;dcAaL56C-BUvsiJH$^hd&eVN(&S( z;E#m8N&H=^GB@v^O40`U$-t19v!2y5#ath(#VY{^@m`+*&cjyMloF=)Av&7QTwB)A z_vA}cv+MVG-1oFccN48PdR>2Lfg`Y#ZGDP`IX2ptF)^;x0{;gM&_t_c8Ew5w)x)POujG9pwwPZIX~-yR}C-ERhgFW zyic-dX|gI*|8h_Es-b|Rhh;8AMGn1^I7lYx8*$9Dl!v~D;iehIQ?3)Jymhi!zrD^JvJu`_wc3~ZbH-}am!??%dlUn#oY9jr z?D}X#iG};KOUB6CbtGR>emHv3&bab!{WKZRwgJf8bI-o~?XiV|%hC>5JB6 zwD*auTYWV?kF0q7(Z7Fg^>zQ1>Ad}kY0UD8LIA@VM(+yT@-nHBDW~80rR5AHj@}QU z^h|p|XEQWuWeqUckI56BBM}f~-8d(Qp+=+Ie)R~}kTWW5#OwPhzR$WwqaVeX`% zEhul|+NqN5s_ z0Ge09=$2$B2YCSx08^At+-zR!nDgL?s|Y8G^bM(VKLZ0Sgx^&8w_GCtE95>u7p6m& zzZv^$(*;1Qgh6ccZhEq~Z3zNX*hU0xeVG<~(J0%1D4f;c!1=Wlr zi(cE-9LTB}@e092Kuz9c9u4f^t=Qr=g|G~yuql^Q zD+Vqx-70@$QxVH6vLxRFF7y-7e0=^Ut9_cMFAE35b*+CweI<8DHtd?n5w23#Nw4*h+1=QEoz2hVoUFlhm?ty`Ab z-aw4zx~%shVEfr_VZCeiM#n9RX|kbeVs!eaVT-eiuhl-N=9e6I<60V5&j1Y&P&_?? zE5$zUt_T7X=&&aC^K3OpGNsLT^T1C+C{t zj)mCG+<2iuWSu@fRwo-it}zo?^hW8^{3u_|c&xnLv}u4iTa&$Jwd4V0?dEO`@yUru zPBSG@{6$DU6UGg>^e=r-;-xBZ9pZ{a#Y6WT13I81sjcR+k9;ju*5~CV^luU5-oo-j zslJYKtmU;TTdx*O2NUV8x*CSja24^pIRf6)28-&g0{nJq)~QXw)GRK;tbL7aW-gUl zWN3J5XkwQ4_ow6FvqAc->96fs&XcP^TT~@XG{?&i`*7OAjQG}Yxr?S5Hcek%1RK}` z?NS*~n9&p2@+{ExVHMc@Qcqmf69~PR-wE~KmL3AlgG@JgizNV+89;hIY#wjRMu;}! 
zM4T`6g7fTGe_+crkPH$*-(^Igjazi4p;1gf2rb?9#v0R3V-@4O0IF+>5ZwI5h{p^~ zNwE1WO%_R(_kvjLTZ^-|M)ePwc-P-36zbPHibV5t7KjZu9 zp1$%Viwk$gYmv$7GJsZkO3+9)ag#Buxw}3^&SdwS*tVIG7$iQKE3?*$%mS}59u2YWE7lo{#rZP z8Dmrn{nvgh%~z)Ay}YUK!li2u@&NRx0LTkGMzf=+ja{Y#C ztX7?$OorPc(8_e*+0D(u-wWbehb?W*7F=P{C;Xhc%Bg;^%9=^=UFKU|FkHcHqakwqQY^JrsJ0QoeBg_rA^Zw=`eli(`KP0 z{CF{(SNAzQ?vw3L_;zEWQU_rwT`PO-g3z&~<^+j133ldXI*fn?c@& zgOcotMXHcYdowcpP1;NRNVEd6>7F7J0%FAe&ueur+(RWq>PZZuOY}5-A1{%O8Ibi5 zOx_IkJn>NUXi|SGVV7UyaLkaP1VI)qOVyp`Wpja zklikU?S+7NP-jYUxU(cZRlU0^CDgHf|X^G~(18O~GVqUXvQQB525_$Ld zK#I)n*hT}jdxk#G_HT1_6N}5jTn`gpj(=aM;bcnNEXL1xv^up_ zRnpbA9nXF*YtXA#KN*jl2MM$L>~vLJADx>PQ-Eam(c#PDn}aAYVDBHCOteVMWyAF~ z*<3@=(yUq)UoW=J81NT-&#}=6TMud;9{s3gY*gz+sqpMO7>`BsncV4{a;^-uq~2t3 zp`!}+h1MGEYb6sFil=eMhE*UOV?s9tKxeB{DN!C=q!c2H4(Q@g#*)a zzB0Cb=%XozraKY+ltHTl8Hm98MO`x0P({tO&vtB}YYn>?_99XIvJuFS+}gND`HcIZ}g2pzM$KY?NjNdKHa_d3eJyFp6d35<;C;Ms@-1X zDSWNp!*t#kt{gbAt(ae*=xSOGe05V}6)30|ivK9&$ z3j3z}m-ux#j|XN-CCP^z9H&hcT<3vtQ0!DfFo$S8U-dN zmZ7`AX|iyJxIz}rTnF#_mkecdGi{&B`RmbdyE7yt-fl(8&#?Q6aHxlw2OhYh3g%6Wg=p5>yGljK4Qnq$t zM+HImjz$*VUw$>n7t(2$Gn`}uz=5f7aQt_&pOn&BJ6L_69%_wW9mi0ZQ%)7O71*Q1 zb2Sa8)MAFcgQ&hm4!~thc!<_>Y%@uEm`vUh(${h~_N~>aZv8l5Ow`o(1c+0XTw>6+I&IlCWRPno=( zq`k`V`_If^tuvV&FNo8Bkz-hCz{a{dBh`yQd{^{L)6gIa*|ka!oWsd@tXKmx@|2=r z80p36K5oIOj`O*-=RJG>5&?s3mE+sA44bOwFS5}IgEAXxOh@g*2XgPt2;=DY3cfP6 zZ<^|g-biFA|K3gj_Xm*Gh-hEq5%AaoLN-i4tV4Ac9%2WI>UjdBM$;swA)c8KqK0>N zI#;UPiGS{Y}KTJJFq~ zru%7&vkR6B6~n)B{R0(zeY9h=V6&Yo%}cmkAqT%5C`R^5hzzAFS$`uD3v`^d6Pcd) zJ=Ob^)@Xs%nQ_kMifw-mamgbv*H6BM)8{{>$4PHkDQ~m$e@V8DdMpsFZm%foEIyov zHD4%UD`_y7aPzN*xp2IMUPdv5(n!xEvdP6x_iXr%A%3hT$rJ5U~Q#h&$6C zo`EWWqettvH=O=|^9;aBp^0`}4@Mmf-kJ$Xn2*xXoxz}0?f(!48}K67XR;a<&%Dai zgkV&TMvGbqm_^uZBrRI*~78{Y-DU9Ae+fJx&ufqS3Bc+T!xlbn8vc;9LB{ zf92*hGi7mXQ9BcB`MJui(*1Du+=IF_fbk^*x_c`Xfs10}tabQpBb9@4J>?O|b7ZUD zx&@RFx+FHZe{}ku4d)EU5WxaIAWxKK=!qk03FN+cH}^hT%ZJIC7bS(uve>|Q(r+Ypvmb0@fj2)JjQ?3PeO^J+_aq{=%J3!d82zZ+k|&J;N?e&WF(W$oJZgqq*QYBSg?r 
zHuA@hQlv0>2k^6-iQ}`l`g`(}^EaA9eA(WUD@lh82^^p_k0|c_?x(gVUbWo4Ga>?N-@gGo2UyWSF4>eoF{8laqzkY(MUF zbCxwqU2MHVbE@3+@I_gj<;xPyF|CX+3Onv)r~KOsvwH4Jt{-2z(rt87rKT)p7uN$r zQEMh63Bkd54cq2Z)w`HwQO(^;Kvi#_Z9*)MP;MQ5wHgLEbBc!S^kj4XG$)6W+!YW>Deh5f zJ(7n9)77BH=~Jjn@iNE~=cn%AFmC~h1RN7x;MASzmR|@QF`v6GNCc+cSIH~JSO2V& z=lESS76SFvkDDzuXbp=GeJ)p?mtkqQy=6*Vd3hFod(yD=iFMritdP$d1*@dL#1=`qiWM~J@Ok`EB@NKtw5_98Z`zkitHzVgJqh=#Em= z0!!Oql0X8qH17e@q1cap!$BR{+;)2UOrYTKCoVMM?GLqC>*LWxy=5P&&f2SQSh?JTc3GOsy;$2??TGJyYVqjQ0S zUp{Ah?&o8gIx~D|w`G~UC{WUV@*TWodfCl5#|Ts^oWl1Hayoddz~O~Es&F2{up=WQ z>YFx?E53OiLH}C!zke>spkmUlX)e#r*D-UsP!Q+!{*ZKR(r> zwPGFOF&)O`$7-E-AK4TH@Sll@FdU8{k%UHn;?ziS(~?v>*j>NPXoU3J)UPyV5!sjy z24|#KSydS6`CPhBRlCK_h?->wTyE7-+GQc@@%kL9noc*?mA5$t;t3i4oEg`(Xy+ZO zz-D6Sjl?q-bk6aHykS4=(HL^J@F0%xQ-+W`6i8GGVTX&Pp?wB~_mD|kIsd&bf0xI9 zLHY&TWf!V2!(z}lYl+}Me21liw#Gg`!dg&JgITQo>e&8wy-&sNzY(U4U&gEI*DkpJT*jz!R_Fhci|kV0QZW;$)g?z`U00EI z-br{q+FTQf)mDGs%+bBSusGbbUttogHPC^j&h0wAn`e3p$_N44Wq*oJU=Kh3R=A&Q z9|?%r(YO-3Uz<s%yrRR-Km@<%4HGA=~)DS)?MCX*RLUNmSJA&vU&s^ zK2;e>r*%>U^KR73V>Yia)MPi?wF`}Sos1r61>|WY&Cr1GhCjqC={b8|8h$5&N%~xR z#b3Df7iadb4MIxf50ZN?A^$Qh#Pod@FNfpAdij=L)x`eP>Hq>ln!Zgo*e&;p69e-b z=0%w%=+PXOJ}l$m;m^t$nTp-ZG)`nZQ+QPEu0K1oX|q^E6`yTe@dPc)ktbXI)9z10 zwgcT=SjHzC&39+hT-Ystqsc>1>e=2t4oe@m+ikbpMyJ^c9?ZOl8q0SnTa)9U?A91^ z7Q=1?cAtkEA&*Os`G3zd)bCAm!Pc0KaUo(elC?=@C$T}UdCg8_E;OxWKT~$|UHck;Do|GP zWk_;i-CZMn1-JL&8C7wfV~~YzQ#ZqdTlVE(UEJvOcUYL5{!cMW-y5m4Lfb<9{%l?K zG*`}*|A#Y<76CguOr889us>PrWWvk#L&RZn$~v#zbn-nGdBNriuwY}gruIbO@MARO z2hna0kc|N?YPNx*oZuR0x%S5m8@8x2)V+Hn0S}uoF!Mgx-rrKkftAQEIxYGO-2TV! z1GAy>h_L4HIVx`VH8MUVlq$_K#)lb#zoKs0N&*w=B_O`_l(HXwD%68XNM6H?3&&Il z?&e>=!{gDcHkQ?I^lFs?wqPkKtQT>Sot8{_s$UKQJg9tE^nm@@+7_ude&qvK5bd^MhshAtYs(r)rq))9S=9D z{`4j|D=X{hj?BSx@p6dsj~d|nzhlBsfy&O?%)j07z((fA5O7z<6y&D-_|FGBlbry? 
ztWN?zHQPehBtyBsI6BruiRh)+ae3%Ok#~6SJW;%fJ$?+?mCk?p;|LH-n4!hE#mU&; zX#)Essq~Xb3qqgtR=fk)X7=A}D3s_Chty)!fr_pnsfZEcw^tLo@1?TSL;fA8{vEZ* z5rW84Zpfl`KYhPI0QKH=0c5G@*ne1pNQHUkyIsb;9k{j<@!G#iGOS|9{LbGlI1e!eMJzUTQX`1O|Gf6 z6dC}0 zh0Cgy^7+~$D#WpoxZuCe>n4H-MCfk3tGKeVviFBX#9u2Hafh13Wj`bHmG_yo9OP{>N zdq^7arA%nhW!Cy9_WIZG{{Hbt0z8gs?l{)pyMe?+!j>}ECcA^kfv4pm{5kz!^ZFmx z0C;;j$n4eQ)FgmM71@HW9_aV=06Tr$B8^?MUhV(cRsSp?9&mga5y_+fdpF2eU|M-Q zi^32kfv1U~Pj3P*`=9Rw{a>jKa7o4CeMyl9u!#ntgWvjj>}VF;-Sh8hzX0$5*JTqy zXORlk`B_?4_Ce-fYg#P#LzLJ%8$tYmoaHUmsYFQQOg!OCJx}2>USs? zdFOzQgxJ>Weg7{5f&msB`9(p-Ut^FF1+IVR=j3_{H7~c%mP-F^!JxA``zLD(^moCa z`vU4Fv-HCNnbMj|wf|yP2&BNQy}(`m>p(@`0@oM6C~S^^p80yZ)Eoha{r{R`t~&I- zKl!h3$9z{z?15%6?BqCN=-hFJh>W0voiCk>ee8V^FXW~sCQa2{%FB7j?=Ag-buV+( z`|%e?^nY*ZKLbo)Gk%2EEU+hDa|%kK?-C(Qzl&S+UG6j;4kO6ej*{lNNNKI_@N(L; zGVS4KY~8fII4c9*B4yQ}Rp9S>|KHcF76E39K}1d_U&0oar%4`tcMD+}DQ=O&Ar#N` zh^zRE@rMTONg+G1`*HY=!hS|Xv&|gI40O;luFAC%|7B_up^gMmyydsV5dikIf|ga; zZI>Fm1H1u39PWn?fR6<>r3$8mEl$`yNpd$4T0~+zxa#M}Whvn6ew1wP|9{``@Be7| zp%w%04uHi0wO#h4?j#KQG*iXukV4x6ZPnSj@S$O9|AE)hoEg}D3K35`O z=z_3xM<_xnKUe+2+Vy*h!g9Up>ubF}t7XN=4rPf0fY3ZYbS{UZf@Hbh!%|7-eh*tJ zzAVrDbq)N(U*38rR$4n|#d)lI#XNQAvSo~YcYSQfwPY`@0Tu*AXH8ExAk|R$Zc`va z+)mcEWrd2RvZY`zWXqz}R1Z*Rq(*VC{01m!Lbd^6*nvWGJ3DT6vr!H;UNe_Ocge6N zwwIM9|N9;CPZ!Irs2F+pqtVV*yS#DGPZ7gGA?4H)9=iBr@$b}>VnGu;2TDod{Xt_g zY&XFGr`9kMiI4!D<}4K0BX|xJt_5X?&9YQUzi^(h17X$nVZ$mHBjXI#J{<|l`xl;{ zY)#j`0PQjCKiZ#~c=yX|zE?-iV}G{{TcRfTZT;|SG5ro5pwa|&5+Q1hG^Yq?SG_O~ zAq0g21UDaa9EiOyR2_S;G-M_WQ(RZQ8}dN6nQh`- zHzV-;030rPIsCN8dv@IHg$O@Jmz0{H)YwnF4s2<;h)C$y{zS5$_YgWD*bK zaeIUTDvco*Awktc@{=L`r%4^4)K-O)YdmZD`DE7(5SB!$@mq1*L}mxd2s8W|6&2v^ zblduIpq0SFW!?nma`(Vr<(BfeL%KKNQcfC37isdx6 z;<~N8o;3->=$Jys;Wzi($E?C`TfPiwixUO?Zrh_zKJF&iyR^4w2=D#?6=!x|z?*m` zDsz3+OCEROWwND{IC_3Mes-bn14N2OX0`2XfJ&uP7oQ7&;!FpqF&LRL;%yQJ%P=IE z3YefwiNgR`CVlF%qN*2v^e~$?`|<8_Xz#OPIl?q7tk-xc`=E(}*{K;x;(m)q_JWwf 
z?b((oAQ*>&jn8%lu(P~5I!5HrcHhQx1nT3>dtbnM$G~2*c{&$3(5#1$apn&c6m&0ff9n?Uq|+y^(mErmJfgedPuR(Ax3yweNF^k1CZy9O~;&zc$_p z?ytx}v``wVB`OiEUG!S0ZH?OD&kq%qIP|k`+&`E-cWLpfr!p%)Dzke}Xdk&=#7qsh z{)z|NQSVPn#SC3v13f~PC;}%UC04zqZw=P~NuTex+uZFyXVpdZ*ZSA&)%;=-m4p;p zEs-whN%5*VJb@D>AW^G^f&v;@V%JD(0s*(56cNHVYcIPllvaU;JE?}>9%a=Y5TS>Y zuISgtW$MyP#fOHmy7F7Q7Ba zLXvY!HBB_%k~d)A;Klj)O$!0IeGh{=z!Rqiq6+=q+Z69Nj7^8KYJzXKGooS>Ctfgb@+CY2I)INHvajss=zh5y{@T9D#RwK_)yh#Lm;Ics^m(lUKo>cYDgd_>4^n zV~HSZfU8`IJOOmmqJa??Vf6BWsF%00|p$m+SrjL~q2QjTXuXl2Md zl*P$JjzgTk+hX>eH#sk2_8#ajMe=8_Z2I#ZNtIX-)%|?!{tDPcvvD=&+ap6CQP|kX zB$!ZtU$~uBY(oNfBtnpgyZO&qP-5% zp;hwz{IT}~5ADs5MnC7w{dRyn7I|gD;c7yf1Z?GJk99sqv-m*O%@Q5ZCd&)(ylk5G z4z&S=&rncWMYP$4@8g`WiRqH)tZn0ZaC}ymQl|X8%u=+rEtsmNU56fv29zM9!{N|g zv!^kVBB-n*voypQ=Bb@bS@}ZKlI= z%@2uq6E|e1r@~=AaUi~krmUy4ipPj2?9<{{wi9FwDIvrMM;;M6bfc?H=QG7=<{C0w z^%a)}O%MM^US%HLZLcUrr+|5ZrwC=+^Q(>uRmIZR#p!P!M$*i(HRI(?W8+caYQm@A zM~HxP$_Di+vUPzk?j0L!og4%-CRS);?n!s#_zh@*tN^8-Aod(-Yp@G{(APf}FtM0p zCBQdp0C4y9+8tUl6M)vm&gf$NBU*63B^i#ND=n)Ri)oc;8zOqR6O2xf+j;1y7LpYD~fFofnS6MWA358ub3iesQw zucVYfGE5mfbQ&c@K>6NfOz_%R=7WzkFD+Tue2B&&ATmIs-kfHyQaf)UeaBai)-Wjy z!G{Wsn@0CoeTJQ&kcJNl62;%#TA7{Q9tA%Bv1zAS5qQ|s@BN+I=2QqgBia0qQ<#zdaSo}P)7;J3q;fbfXwXcYFT=YO3v}} z`^Q7Y$IxQpIWd-+W7{WA$3_Z=W|w=)qe8Mx#y#`l@?=*}!i@SnBhfPVore;|#Z0%6 zh1$){E8nY{oDqf7ZYphDUny}?HiuRzez=B%Qb~R6d6EUD`_1ID6SjC#X4pN_FAhUH zL1^43fKWM0!H$3ZTME(Laurz#U1E z8ncuo3LHkl7T<)(6oT8=dM%~vk3Hs?jg)wLLrX@SlDW*JBAaW#=>|pb{mM1}VeJR| z_9GQI4+7XN%B9H*x6b4}H#q$;_O``-;V{St9sCxGr?Zsy?!!qN6N%i+u}^r+@_%a$ z;UcJTmRbE<0VkI4&IRSf7pZJmlf_uDfaX-YlYT(lntIPi9EGh6Pf)%N* z71r|T=7$mVk*s&qHi1vH3>b7$bL^;Q0pw3rRh5^-QKRE?7t(@@Q;bzF=JnEul0$Z5 zbwY&++weKnHl4)YRso- zGYHJ1raq{1OoLcYXQjs|=`jx|t4%lq&VI3=-Qmp+49#QBw|hp5E2iRYdSoP@zG zC^a`_1!RZ$bb1+(V_x2U#mrQAOyqlKBpa9|I!W!mneR@{*Xg5KupbFJRSP+|+yB8G z8e?YFcv-;x6$R-7E@|91kvUB>qOPGif`csA6^Cp`iK(ye*6wg^ch~9nzgpv~&ji;pBWJ4_ zru#E?l3l-|>t}1juepsqM2nN{z;K&V#R?l^>_EVqg1yZ(zk74>kYy?(L|Kne-ID%1 
z-E5Tztvx~>GGq441hnU;HR6?d-!uRWlOu{W369b!^XBj$8seD-*l{Y_b&fdUYNo%n zm`#6s*c2k4Q$gBv@2*L8O}l<8i=2&>{cI?BKLOhQ;F{34(G!8(Dd)yTOaJ*Ks-%40 zX9(n7iA$Pd*9)B0h_0p)=d+nT4*%6T2O?A1m$2;H`LB62<)cl8{=NcYsWd&hGPV%};*anaj+)@N6j(OO`m$#RO%5vcJQ<=x>)3!+%t@OHuFAMX`nfJPRcYc1z%wolX?8v`U$HXD6_*kFTdJ#x`OKG#K zvtVoLWx^SkmdIs8(hW;NYS`AyxddGgd=tl!k?DFO%@DP$U-LL%S7;rQ~6w$K(Kxk#A>zu+Y+IPcq)=OQBHD`^}L zi45l&5Lb;BZw``sKY$q?PAHhl4CH4--?O2S~^yI#hUnM=ACpLo&mCgbc{!AV$7_k?J>3}Ez1GAxD4EZrm zfvakTA{`D1yc8jxbAp`em1Ig$N1*G=5sKp28l{rM0@n1->o=QVURs(`_pst>f@+|TF+Jw5`1yB zI6k{g*d6E5%BfeLY!4|0a)Wggj7YY&lpk4$m{ee}E3yLFqa!Ci<+QTwF?k?D%ki#j z5wd|UfLN%Uz|lZyL3RtaFlNhoAC42H1F>akL47z*$|1RAz8>Qq^aXNRHyLdE!yN{9 z(|}qR>h;KGGnUz+O+yyJZo;?t36AOJvj@>gsUZ@@W7IZ#-*Kv;zEYfx7FJ5;76fCD zEb9nu43)2Y`ed8g{qGu8@cXs1o%v8@$EZGUc#DD`>Q!jzp3zM{w?%&&KW`+lfyfUq}m&CgAw60W-h+(V-c$}hh}!6ao*hsP5)h1hp#0q3_X;fg!kd*X^neqN%5l? zUyU-?FMr7+snXn$0JH*-K1A6PL*U%ZHD%hO$_60uV+#ADoh(6b0;IHLqIm)%4Zczv zyP>JlQ92)NLC8=TvQ5`MxkI*T?8uxk|XAqfxuA10H?cqJKLooUJY&^#TO_2JCdyE&3 zWGY51kY7h*SFl9L9|0;8uvvXv6=fD~rk3A!CG*qrWx*`RF9ZR+U!+b9&e+K80x?ra zsB-umne;dXn;q5>Xbw`XAxs{bd4fHz%zUME78A{eDv0uvUroNNPY?O+qj~cUC?|7J z-bUZD>l^DY#dk^m*KKs%^N)iZ{LZ<^JR^E)iV>Whh1JW4}r+p)uW?5JbLpJAO z7v|*fLGvV0rG$z&k0T;f_*PCFlaYM>2E-Vhe2lH_IU-nYgwn@pg>){O(!F5`esrnl08#Cb~(~x z@GW-_E8Ml=3;cpCaisjxDe`-OZEK^^j{{yxKgfrK@ZBLr<{PH=2hdzGU1I;1n?Qai zE@5LEC8~C&$u7xwOcz*^@H=TISudRxFF>R-^gWLM?j~wD^l>htb+&bGVc?)``)WIf zq}|{s?Xh%zGQM$6LS_jEyTx|Y?e5;RGhDLDozh0f{&>;!=ttA*Z8^D|TGV5#V#70) z5I$*^8-=n0%&!f!sNtzC3`syUW=JCh^CT2ir9K%kPCi>lSbfmI?4vrFGX1FHR;om7 zd@r|53Z=4EAutYSCtI zsGW?(y9q3Ae*5{x)8Ooh_4c72I~I3KF@_<97DqirZ}lxdcpKRSn+mpCm;f{xEVZ;$A5f(6W$&qLHoxL1PkD!g5kV@yYB$^u0Hxf9|6Beko402Q0?7@C0}6H~Lsr zh3SM^-QcedrJfrZNNOiqT~!(CtMUGh36o}OdB@(H>i=3JWZhwy)bE+NnTXYQ&DVbS zcDh7p>nfbaSz$8X<~wd&MES1;Gdml2+vM_F+?}xdjR1>LOxG{Olt+^(I~mu$CoQ6S zc`5}p1M+_BM4dy}W=;&aydAs!p|2U*w@kiK#tk99L5It5?|S|>7XTh|9(vaMfzX#9 zo>*b)Pu0FyNHE>3eUI6uF=jnO`Aiq4BUDUwmbe%6@c@iljX?GT=RI0?bHP_~e9n)s 
zJwL}e=CsH_-$*5S2m}5At%}Z5D~LVtG2#Mrn~^&SJDeAU-9(rGDbTXO=ES`=0^ZtY zXP=Z1p~KFY3xP4w8Ix?GZoNBIQJEBZN}TCR9ncj5-j3aNrUMwTj@)rJBKfdmy2X&i zP>-j0kX2VCi*I)1H{|AVPjaKfJw&j@AY##Zno*z9`-_nJ$Jh4g)al4mj@d7zwOZl7 z^qnV-l)nNO*HVokW_NR>B614UZ31nu;hpv4X7OhZ?HiE5@K3IeD13)V*w~gW^g!83 zromcH6mk|lc{*T1+N_;-@ede>N+ef3Oy{)aQdVc}jSVx56O%ju`ykhjctG+B;7Mdj z_+-3gf8a;^)v1t(%lq!a7Uw{o?c`uRF=RroP%xCt1lw6|(Xy<9jXkvhW-x#0T{oS3 zs3T_0s{;1M1cZ+ZE?`T@q&lPpIQibJ%FP@}Nf zi>dC($+)aPr?w81{{m_C08D+=KK2Eg`-|GI29{th6}O*-yra@s-0FyXB4#nKF9ZY~ z$Q*Vac3%NyVD)#AkLYim6jRxV*=z;$1XXdacZbOPLh6T}8jD&a1XREZ+Gb4@1(+=? zkxMKi85WsNgm^^;9Pn&@-XA$OG&7_i`R$%S&4UlscxgIKEy70zND}d7xTypoDUa%u zd<{N(YlAa)S7^YNXS7)_F678uV60Gq2IgQfhD6Kh%PcoqjW`o{h$@W5;3xg?dQor> zdQ_Y*ilgkq7K2A1Zjtw5nUulfHCw>&)<@`2)2c%Qqs@YR(>%JGRgft3rD_T!qyftq z9O5SXUiWnr!(+UM4WEQV(!_>mZ7Jv;kE0HO~+YM_9I}|@^wG>I2Ly@W7LPff@CqmWr z1H1qy=}3j27n#FWs|u>kMN^?abPoAbZ{Fpgvw?TI23ZT zPf`Do4R4?oh#JP1O7_1Y>qlW@f2)RDzV3nhaOEDsgA(3`?gfW|hFgu~qeHl!yHRU# zP6k?!K@f?Yz>$8L+;QTDm*aD{hk|;A712+{JjlfQDs=|I2X`q*W@G*0rBDlvf#^I! zckqPSmwF@UC&+zfY)+)WHzVMBc*! 
za9O`hhlB@&5Ff^_K0YPKn?uyX(A{W_W%cayj6loTjC&MVl3X8T3W&cXzA+@DJBjkm z$)8hEh*~mVnOOBH$&w?nh~Vkc4!!RxrSJ$xdy0I@8^Nq@yqEn+QOsM2x0Cy;fP#qx z=+h>Za2wm|l1vxtf6+;yIFxoyF52lRb{l(wc39X5szwGSY|5F;NI}@%E9#~j16`!R zA%tmQh`Buz7ZN6cCab0_-!|ViRk-7Gs@Opb^sYjSD2yr z$)7PhuL>e9#<1C)pymIRY~kUBJc@*i%n7%WF1eDgFsLg@l)75Apn}zY!|CP36jvr2 zl~gGThV2xzbpS?^DTnHR#+ea3M5*e^PGKIe&HNv zGNU#pPW1#?nxj4hX@~SKazey-7d)vhN8YzylFv#nM)=MK-!wUpH528~(Bk;b>rV{i zDF)|l6mJFI?ewfA-t;+RCG;09OWiB=+Zoo8&dKJoSIUnn<8>_Uz~^s~D)N?ZNHF1~ zNo-g9mT_`%S#ig(KHu!>o66p^Ij$#{bTOnyJUu#2RaFO4hSgtS?T6iWZMbiM*z}oT z1nW8CgF^HLw!QG+`6?p&ZZfH^iAOI8zDS?5M=_CZ?mk~<;$xbYlh$BT+A)*@%_Whm>`y-a_C@`^vsZegh;4oIZXu6X^`t)lI z4!8ZctV((Uj&R(FGuO2_5`9q@Xq&a@Vp6YPp(3ri^`Qt}j4ROmTJ7WW|5V^F8m{Vk z9tqe98*HVR0AjCGycQJG6`Nc& zxT@>+T+6hl9{w6`bx-Y}a)L{4)6Xl}E5}2-5UF}>4?%NiS#LROxic~8WqK?r?S6Sk z*snv!-G?TV?IQHZWQ?4j-YUEG|1tIzKy@ukyMYii2^$Cy++C8O3GNysxVu|m;~w0C zClDNhyUWHQ$R@bEZ`|FF(9@eLXRbYc&C)&ja>b zx!&Gc_EVAw-@T~%p&9bNeS|Jw7XzQlmUpZgvsdDT-dr1QB^lIb`N11T)uRKp`_l%j zv!_Ix80%N|A&Z=MhK{Qdf}FN#hArA#VHjaeIq$*fN=r-4FySxM0aEKU8{JI7qi8); z7HMdE-vgoI{diXoF{2#}8!A{lq!~9JC}$`=;MnU zQ&Hrc)sAmhG1HsC;Tl+bs;Qf`fbn8iV=P9B#klF{O$e6#=4J2%mX~4|-lB6Hy~{1# zNj-erU``zgrO}8LQ~T~%VG{FXcpBus4lu|ZqzRWD0!WAxC75q|N9nOJO+VzfiK*lJ zvxIY67R(o{uYCFxUf^%{fcA|Wm|Gb`3*M|XmyMf&uHz^zhj9ZW zK3+t4DxA2`zP5mFXTG!l`M2fcgd^+>U4=s{>D?1Pm(_&4%{xdOCqJoZXf^oI|LI`s z=O9-{To@=6sL+?$9g&H>2I8&g^rw9&q2_bnq^#7&0-?Fl zK2SgX0^(Q<)Qf-;vqRU;WQcOG{M-8dQ)Bkb*0x_Z?vlkfrxSnJJ~MmtKSVu#!sZT_ zGWCt9vnvFMX5sb#EC%^a+-OQo!E#4H^-Qc7$iWT3h6pes@|b5uxqqWCq@_&!#E4p3 zfWk0M6MUuO^t*B2nBZbN-kVr*p}P zMAN5Tyk9pCYIRC|2Y%i8^rxUSDGh6zPTltusA=E_>9gdOFn}Eq)e_hdfI{R?(nX-> zG4M0Ax}mUFTFh>u`fV~D)<-lkeDja)=4t_-pNj<&YBUSF{qBk7o1^`C3LHI*v z_*=us7sYL!hXc^K7ruA2t*>Bu&YJD&=fsPgtsoBRtKmfTmAk}Ou&)Tg@2OBFCIXfe z@Jr9)(+#8g^A3A(s9~1Fp6&YtH)_|WhWtA%pDP??&d}vRb-&#mXELcktxJzCfn7M#vNwxGeA0Dr6f622TCth$RSHv z_&$MQ5$Q#aQk>!PA6*USBnwMY#q2_7hOg$@6?e%@m_%+998C1FV^c15XxxvwV|#SC)4x&#&>LCt zX^lO8+f}^XcF9u}{n+vOE%d6L5%I$>Oz0ckt`3h%T 
zzDv0p5Psg2j7cB)6BkLPd;t$)F!DzQQ^NPkd)af}M$hl%slvybD2!Kw69yVV)H@1MJW7ipK|_$Uv-$t%K#I0&r z5y`57*!5b|vxY{&p7tS)#+kV7QoEmBcenuZ*_gu!YZy8)2o;l1`He7oWkF6khTs4C zi2?Z&CcM}tuNyEvl!Xklv0-l)K%b>Q9o;^6xHhRS1!QAN?S{aPa+<~%?MkE7=fM6X zF7yMc!BVM_MkzK#S_yGQ#$J3Goo3;_S*B9{l?k=`>lJ7Z6F(t{gMr>SYa_QcN+va@ zuO8+x1ORI+mv4Y$BIlpo4b-NNo(Bo2%fKYCCYOQamLh-=hoe;i^32=&a|`jz1gvcd zn0&eh2~bQ~h{9| z0044bUEP4*y-I(g>a>D!a^464PzM1F>wd+Gf>tkg{M%t%{p#A+VywxfF;?A_4v>;o zDHwLc7O=lLd{%bJS3(%7BO|Wc&haRc_X8^@+|sI*=9$TJcd6T6^Cmq&LVi0jIHA>A z)Zh&sa)(iCc~AA99e1rj@lQf(eHp_~nD!={h&jtClunJ6(8nGbpMPgL-2?&!_Az1XB zvHeN$9Jmygq(G&^m5KX?IQB^{LQTD~7X)lw3qa)~`8HfIoAeuyLb)Q$wl6x0*+d4q z9T?#y#DW6>djl9U!RK9Q{ayn{9P5j_I88UOM5m7cxEuhAmWC{uo!|47J3&VUNVsf% zq?`^ot;cMd=u?5mrEohO0T{oF9(W}zfeGfmom7lRss1}klIDs}x=nMehb4P%!NM(g zmmN9mH8W%IBx9`X$lKlf_r_dL*qOYTzS*Ch$!LWvcmd$70O=oDUwW*PDb__|8e)Xi z1c7r2lRld$8I&uM&e~mtEdb=grq8YgcJX|%zT}|a&{!(n7|zzFQBO)sO}rGS9FdJU zB??-UNa@Iy3_ylC`a%ll2yMW5IGr(S|5tTDlvg|X8C*oAf`vT%U~2*cmOCq?-@-6# zG~PmENlB971)BpnvijrIa{>}$4()LsH@9h;K7klttqOM3GzR6ohM(uilvj|au=!M7J-+jUHWts~R76OG z@74v_j%_f|d*sqQvKC{drV*2a#?ZY7LdF-cZ=a{@2#^z|<6oOGs`RT{f~q80rbyrqz}@kE1TB^y{`1b4F>>v$7hQDBe{L-19^MKXW)y|X>R9{ z%eMuHhu!I7b1a_YyUFM|MSk$z zSm~QTwFv=Gxro~#=mwBgkDKhqF-(4G0=-}1j&THloZ9wU+m*ef)^J#5U=?_t;?waD zj9CB-X+fS`!2w_>HyfJuJP$azgE;NT=0{Y{c?W{96SK~`Y_|viHFnimdDEN{=OmJ? 
zoQC;+Z=b(@n#!X7=1PcBaB$g7GEd&z+$3#kN5X{USZ)YFya9k7&(SYZ zg3`GspGaTC;Uy-Rhi2M{n2C+Q0=Pf1$I*PJp#(fm7UwmG_1F;IikxR$U#;lx4Ahun z@9Ntg2Knyxt9?GP?}5IqMP|%4SUoEjZH^b+97b-(NJWT_xuMqMkW2Oo6LeFF5VO-F z@2MUZXbQP9CtOX4%#nY!)vyd8L)qaajMiNswon0~8E!5vH5Kjo`2*53$t*Z)OGH6iDvL`DNNYv8LATpAj(gXwz>TKES!ji|6;GXzI3K zCKdn}w($@N-0JC zB8q;=*&p$yb>s_(pNRZ zs%2!DbR4>Ik>frLkvGco?Bq1)+G)q3q#w~hFFpgw#1605NTLFn0H0ElFmK^ZzDv_7 zvyd=M=ePH25PlE#EGY8hq5?XrD;&^odGUCWk7f6~8T7V9aecneCN75nT{rxsmg@j3o;Ht$S_C z5>z8Xh$YW!0Ho$GBUYyfQ2owdx_jZ6d@gUf1Y3@$7jq26AFEgfk=9Fa!}+k!;N$`S zmAyiE6qJVr1wmYTV8KPP+T&MMyBPYcl-z@|D4EA!`2o7l08p%N(?4G*l|{R50K^v* zH3QT}iVvk5=ij0J8O@X+m!D{v^wIV*S~mO}tNYz}_UzQu6yEu3Q^6dXC1<2yEYaRX zd1Ad1X*G08{9zWJQwIsq`_PN`r>8;qdZ=;Bu#Nci?a)I&L51Bh#sV-*-mtxHT@8}e zbK{DO)=1qnBn%6!pSnI-r->l#=lE>J_l-i^Ykignpa=O=Ke2R-eQ|O&74kcC|Cj;M}UkL;l%e|pZ;OSFLLlq8XI7w3MB zomdqjbrW1CHZ+q(THJ|fM>ia%8`>NMF zr~B^uE>|bsTI@~~E{=)&zP;`?$o3`1CLVZP(Y@^JFZa19=7;eY936*7vH$LBp;*cO zEOi7Bg9J~wFEk{Neh4F1itO2)`h8pmbdI{8R5txPd<`%9rwnBXF7vw%)S(g_aW{7S zz=lwKb83P+6fVHK9N4<5luH5--`_z(!EhopSa`{GSeca)Kak>G(o}Nk=nXm+g@b#h zDTd~&+M1n*oGqPZw~y~0uG;>j5iUfm*nrYL$)1A0*L*bH@Y=2l=KDZX$}B4kqkX2i z)*J(I)3dGY#*1S1Bd74-p?;ON@}8=JAU&~FSW#Fdmq3h9c`_wAnRs>u=f$XTHnm5V z$&+bO@@nc|7%x8`(a0veK7W_tv0FH?H6D$a@y)gMt|$yh6U3c=oQ?k417Q4irLYd) zAbG?|QuP`@iLOZlKoeOzV_B}%Omyr1GVesmhun})z__wRUyMNh!sN&l$BeyA3AX`_ zq2|;ktGHdGWu}r4H=Ya;^0IXkWnZGzk3o(C7BE-5lRijrTZZ+$yL^n1I3~N%11NN4>-o1gn?u%`8%Jc#D)`k zX%o#6kzE7DWeW41CZ(%iGZxIn0;D`9`Sb;2EE5#q9|ABea7@(GF(}ywbyiuyYdzLp)08?O z71^AB68@%6xbZmw8+E)bBveQKhu9Z0^l!h=Nwa{{b9JMss!k5~9TW|o!l-X(`RO=bRJbdS@y)Az%t7v?dX3InsX?If0*DD#f{1<+m@#u@1CnQnlvQhg0b%$k7 zN>7HxAHrcGl`K3fT3;Je*hGrhW0GHdNFU@BZ`u3~9Yp4ogY(#15CHP0=P%s}Yj_cS z8S;u`Xe}Wmo^wW%=1@4}gE{b#E!6!K7qiZW|5G&t%WY-!LsvYziKX(EXm7Wzlhj&)HUh(35NV z@v=T!{LQv&DzlIRwX5f%2^DTgKpv&P;|H`JgrDgHi zq0FgR^uptcf|E?P-;4)NEWi@U=lwhadgA7NDhKUBrvZPEER<4cHo5|~%xC&n+z~<) zg{o!(u(?Qqqgat23@H8(Z2wy45}Ttl`Q8wL@D%xF7m)Cj)!Sw`OU1W>{v;Qq(1^(Z 
zl3G8Ddf_bNQh0i^bb)~-A6WD%M2L%_3(`0r)6$UZs^-uOH+D!+zokH*aq^i^U`DJa zMIe)>&AvZ<>j5yMwzHb^M`ZLIK8=}XF#i$s+(MuekF7g#HFKmiN0N^F1j5;L+GJ1N z888?MHIo0){q;6Y^X(D=*0Xg21#$6bxLBaR4}M?s7gILFZ>!yBl4kR-2i2T& z%N|`_U!0xKe^#wO4lh+td9NnI+A{(|q}5aHF?s{|ldm2cz@8YF0RNun(l334o|i;2 z&kKgsEhhqse;NR^$<6P(j71X=`lUb}VpK2rD4;sUQ17ck!)t$yc_nDK8+ni%bjw_0?7yKI({=2;*PgyO$4yJ|*@~9mk~UCPR%yQ6V#e z_k5!F#Ydb*(Hj8Do!ckjz$NGPo&VeLqq*D$xZL{?tPIl z(j~y2fv3qx4JiJCy7huqHQ6!RI36XUh}1F9k?3KxzcgX*VxFLJAp6NssN?fyTL<;L zehP=J6Usol zeJX^gylSVM9z!{}G>X%mr!+Yq?S6IISF{JaX^!6w9^x&;_SCkj2oz{69HUl$Pw9-7m)qV={VW`X~!lMys(-Yvv4_0F-|O$$N60g?WE7AkXz zL8C?D>Z9u8-y20@JrvaBqiTst21*2ow2=jqf+OK(O0?kP44+WgCKv_N zfgEJSS|HcH;BSp^7m8P;P5efyspypA79cZCOAG{BXzD%*33$l^Zcpj?0S zJ6f?=rg9;6tFQ-7woAsu+bL#US*5-W{=8)|&4+ZtRNkxv6pZF(M{fRr#qu$Ut(*Ba z_%t}Z=|JG@>@2d)W51ZumBt|Y4&^Y;{*NTab~&T9hgXK5aLtz^K*@`ab*ljqqNMe2 zFKluI?3p}ZS=uQtED)f@2w`JYv^5zjicoNe$=mRe${1rn>}an~BFIjOxNJ}#S307# zym(ypHFBj^5A(S!@872Ygz{OkK)Pd#rDh10n&-u?YHsnDh0V;SHd%MwL#e8JsyvbvPQZXPv@vPo|v>ecx$lPjATgTi+LEu|GHBe~if zOnITqCxyT}@bt%1`_v1xZP%T#+|7U^{Qj5AES2f{=;-Js8^6DIKwcU5_4T0*=w<_gp9^emRo#QkegMwS~GI4-&FAqR~zC<`N@1 z?!9_*NOl||=w0!|^J@#m;dtcBPn!TVlksc?5HxH9Ht{leL;%*@g!)0S^&bw0iSeozZ@!{)g5^T%upmL10pO`bjYrj zpU5GX|B{NUC`El?x;rU1ypgVS;JF$0O;X?;@GCP5RS{}SAGaR@+!tb@Vu2wun##Nn z^OtT6Fgxv~1tR!U95jfpz$s(1o0V~Yb(NY20UZ+NgZ#k@Ogr4k9QUn{NPj^h{4Nw4BRh>A>A@Q3v-Jub|M=?c3GjQV%u)DL z9Onz~{zsG($w1>;RK`!?O}_4d?@WKno082=RXp=q)fHG_TY5 zPC)M@Xh4*(&e~4k6h3^52`q;#G=+Bby3m8(qpOIAYijZguj@~pWOSH+OOV_L^&a5C zu^Dy-hCbaL$qHBiPV>O({~-;R0{p5tKof$>VMV*OEUomepM*Xbh)J>kJ>`Me+q2dA zeQtnV$l=yuQiQ&E)>S%(lNqD!X=Wq*uNVCN8X-ivAYfE^;5_$K;I%L0+F4+ielhc< zo^P~ZTpSy;a~awrxV4?NQxNaFt1NLloEBQQng8j-zg|K8gu?Z*9q?*gyD#(oVum(mpVJL#UJAO2@P{vIvxQw*-Agan&#sS8aaeT^wr&HMO? 
zBC!m&D7brU4htALuU-kD|NrjrUpI1o@R8dmxPJxyP)Tl}!(`DLPgiUmq{kmvhHE4Y z34^y7mhMK%_0JYngujT`;o576!QZJ0dd(r_4cpx{tAUmxVn^&s0DoL;sKou}Q`GFh z(EGex+-=|uQVUI^SLV!j-Q~s1cfEN+D)7&L7lW^P#8+mq@L|0`6@rSpWe9++g(6g~ z#DNw{?JL>-`3MIZTm!o-H-&*E)%E4eMXa?B)A>H@c`VUWs9dcqIz;Yb;H&|z-v4=# z!&lzuY(*ga*F}_1$RG$8v9b{hMf8;=@_>d+@ ziKTy6{{P$DcacNto$&wxuyJVPj0+ASmZR4SD!H#iF%KmK8+n zRL>BwfT=`{>=prqD<$Htqx^Tz{=V@up#>zLG|U=Z?2QIh~6-XysiZv_&?Cu z*!-nBK(GFNNy`C!Rm#vKCH~83t2C(1ww@RqT9xM4EU<*M1`iXbIoS1_-PFqZ#1758e_uL!XocXlWm{U6K`1 zm#HhtA^spq#4euyg3tah4_$@RyM%y=izXMpgWf;iu z|CkYKkU1d7o>{b2`@j4rA)-qqI~5FES#4!e*`NlOF*-ieKZnrH1&GBpWo>iluZNb9 zzKXNzFnv^u&_Rb>Z1%m>0iJ6#c>8ak@h>fKfE%dPG|)@HJp=2V0%Q~`O9q)7F>)w{ zuLxXLHQN7}`@iPg7EpjguI|CKzl>1}S|BO}Fm~JuE-Qeao_-b}{l7GZ9kCV{41}7d zgUOsq8XENC;^LbYRaI5gyu8|Tb+$|p2t<`MYjL?RKb^ALd=M+T*I3}E>=k*W!8#+a z5e1+$V_4fWm8JU)#-De_%{_OfS^}JQC)Bj+?N|m+lVv3&g0G*7D=8__YS+Dj&eal` z-c;3A*{`xC@w&#|U(WMsRvO8Ty$f>^0)d}u9Sh*`X2%erVV&*5Rav*M643qsG-vQ2 z<_oTxcCD4_#?ZY_8?$bsO2Qj`)=O7Cx2LETX*|wxth$Y2xW)&M^swOZbuKgxImD)Y zHmR397H?+Fm-%xf>s=TXZ@Z4(uvaEoBkH9oD2a1iqR9BiAR=saV6wfva%<7MvF<>> zxlGB}s2*;O&+thlC`qvi6An;;s?toOLYzutZ1U?JqI-QhcpcikR;d>2eaVu}FS~k3 zh_C)KVgIQ%a;ZRn0R=bo<^{?`XFjuYhJwN19QAU&9vSZDvPH6>1$(vyTE9F|a&B&I zdA=R{4~Gq+Ip)KpjGcJ|SyNQGMMQ-$L_#`t9Mo92VDTWG$cM*Mkd&IEe@sO5azG=) zoPN>3AL}rn@=NdO?cR}$kB_iali0F-liIdi#DbV$+6kwRr}8IgWcgX==s%1PXk$M$%9S$VGKT{0~jH z>FHWTc|ZGbYl**It7YQ0FAyH|;`UYyVx&ijM&w*KzKRK~&N4YdI}kdA1OPPF7QOl=yy zhfv)Zi3uu_`2r}*AyhC^Oj$mf%okN*%?z#$+rbejyJV4=aUM(IwoT|I_0`hz+Nli} zz~*KXDf3A9alb^p14Nx+o|Nkjuvg&qlM?%n|2ezx`xQ_{rxgch1t z=ybHipHm8-T@dpE5TetJ)PHt`JoaFiY>)VDz79YvnLRDyv}<%IHF&12v4EU27KDhT zGvVj?talBs)qyE+A^i0`2<%<7qZOf}aYC7?k3+9JtNOZ{WbV(bz$Or; zrP-Av1ve4~w-h-8TB_TaA=_r1Y-U*zC(FxErB;(DWw?%!7)Z{=E0f&=2Euz6CKe>9 zknd9sdgbUXxMlbiD+K<%*3$+e`trK4;?xMKJFpJZ`(zbyBKnNoG6Wx&6NvMG#=77% zhg5~21a$J9LK2`M-Z0vk*@&}MC1~UsPS@%`VipcdL^X{_T9|wam)!4urqXq*r%e3( zx~0d#!5o!$fO+r_BBG9QLMm%H)cNo*kgdw$Tw=Pq`6MFdS&+_mU0c#ql26={AKK@p 
z)GH6JhdJ>e68p}9S}1Dms8wp`;(G0v+~`BSHhTh0PXiam!nqW`KL7!I+s2Y7S$Rnu zIl~^6>#2!z=}2*iTch(4u2f8=;|~XUxGl~ge;NJXV$GDs`Frw#Esl#db^wJOa|vHJ zq80W5NO9rq4_Ep$Kn=MV0VJ@PP6IDJ(|C9PN_`!rMzN0dCkzBGPMUd*y`SNgBNv@A z^<$LW2KTE)pLKP~<*aAYIk+}kCAys141@N*8J}=yM|F>VjnSzrAhi0JeqZ^O)Vp|` zS@`iKVU8W63M5zD&Ofz6jF2{HL5xS6nCZXGrjRm~L69mCRvrwTxk;0AA@wYAD@E~v z^`#a1ODZ? zAYzfAjq;O7YjM4Ne_Tfjn2|3+VoXG;g21B^YYVVZjzc+8D@Sm^{$z{VL;`KlLNJy(rgu|CFH zmqKV?d=V5$lPSWwz*-QBv5J?DUE!9b&f**yW9_wBRIvxn|3c{E$G$&sZqDq!8s1Sk z)PytpOH9PEfHzD{G`zc(*5I=cUENT@)3N`t<62kK zKVIxta%{eC`261v)9;!##scw85KtK?{XGx1)GI_P$wyIn{sRlDB{LHfDKoRW>)#|N zuO6$EmBw;DmV&7UW1~RDBBj<892Re*H15jDt6t#tD0DpbqcmEh+E( zKckH=G$85b3e!jeAoTrsyTFY`jp9cYwV+!DIW+0}ZBf{2-~X(WrAsG6AxKI03o|>r zvXZiL?LhuiO;qu->%kX85fW~=Cp&@`Y#{a0$`Yz6wa_|T+h()XYIKD9PAzr#=<&>y;ObkyZ=VvkH;ErqKMv zfdzDVDllZ|DLhp9k1JFgQFxV7UP6k3TV5V1L>LiGCUfudz@!s{JE?mB?3hdtp@Rz9 z*#WF!j~3ofyKwHo>w%z@XBt}~BSAl`MS=i%IpEHHr79~@Leod@*(x=5XX?4dYjrx{ zx>XdO4uu>ytuXsj)kGg+3sRY;Exf4Z(w)xP)W7@~II=s#8oyy>UQug#E(?=J*Oao0ub@Rl{& zmnzIR?nm4H+r9pMvz8&`EjD{wQ~HJgh)HRnm((LpX#34~LXgc{oUT2F7%rCN!hC-| zsxdRuO(`v{WORs6by{+=U5P_1%DZV67~!Y7&ao*h@TxjKUe$MzvWp<=K%LMA#?7Ll zVlqxyPRh4PhN!Is5-{fo2*P@H$4S5*vx^!9VX=V?3)s6An+m2y*7$n78SlRF$9-Ct>Gj{hc|tW*h+V9BF}f~AD^pI0*KC7=6hp#tuP=g zNF<-9BlM#XMeaXdHBU0cB;eENQE(IPiOv9ByfxAl+>Kgqe)4FMSn@mZ&qrdGhsOFi zozI>O@k)IIGMpl%8I;)GT+fgs$^K~2b1+sJq7STpMl%Evdd z0iBg@BW=cmDQ&qzKv6-H*`E&0&#s|CQMN$rD{j}Emfl4geHZ*oHQ3gxu2#H^L{iRT zD$g$Gad%!oDPsW0`TytQvCu)hz}2)KZ{>2{lMNEF^G2)k{)u(H+doY8(jA7IZE|}~ zp#8nO@llQUW6W2U4F>Sk4IQ-A(>3;B&P-a*!;yiJLCryJk!jn=v{bveLc6H0vhl5^ zOpzqMpltllpFhL1&txP#60klS@)xtkIt)H>n5oQjF4FX-;d8lPYYfy;Q6J^Xw8QJ{ zXPg&#$A!yO^0}YGrAwzlwIZQO4Jv*=5nm1!#pK_6P8;?3lK8!1v~rNn1yFxj0KC3E zK;5Cp`1y4U6OlBu*0|uMx!mq#j=b}o;Xjf`LSXEIuqWE_JekVl9v3t~zNZywRv;Gj z>+5$q5-J+&!O@<#UkTX2u!5LN!O0^jt6om8YLs$!TvgaSDLP$JoBwW1Qf&f|Zq&21h-Wm49st)D}z{ z)c*Lw>Vua`6p70ksL!h}0xbo(4z8K++93eJ1${7W?uAO9^BWsHMIs|+yUpvxAX$|8 zM4ZG|iBLVy^w+Lx5c|?ER35&+n|8IbQrsG5lEw@yK#1SDwbxN0E`L->KB_twF&0~A 
zsMZ<-sr&!@;1Rk9^0_Fr#9FEcMU2%=h0-M2^=0pZ78>J>o5ZL=gN=B-*wWBQ=La#9 z(06xZY&87kk?0T<>192V9Pd4~(>;Z1lV6RwReN^4vc;Q&IY8ahxV6D=HW!K7dU~vu zJ9-R^tZE3{a;1)|{aoTfdKp?9R{D*v2^W~$lt@;m2E2twr}`WA zJP9-ag2I^Mh)heesZRWZl_dlp@G8zU8GUc8MY=+_NonpO+5PS*1>XVy|2W-e{~Tz} zz3Y-|9V9)LA^^_Lh@kr8Q*&}wX?kXMb6>5Ht!^S6^QEOD)omW4R&n>@CAA2pR+`ie z*KiM3qcPq;)tN!IR!76yh%fF~2YhoONmPKMtGaM*L~$A4 zQ~$8N{%F6c)QdQ6tix=f6{Jh2652s}{VV9l-gq>Sj;F(4m-&Q1P`SKyu0IMD^L?n* zh&mD|ahl|q*zjvj~eG4-my<8WD> z0Mj9CN57G)dI)^-Ia*yX#GX-~Xxw0BkFy z3n{)XFrBVTiyY-snoBcx#bOvv2)bGyDp%l%4O|f1dNf2zthJu*JM`Sw%=l$CPb*^p zjp<-Jj18e8^={x~W$bSB>GJDxH}g2->$6-U)+B9XD-{TFjk)nMRY_pmTe1 zKugfLq-oe8G286WVl!P>U1K{ggN_cM|C?V9{WQZReZHcG5RtAaSeI*hvq{SBBkO zYT@BkoC6k+lDp-=okmv>I7@X$HyCcWtzS?nRkizx0pVWck?{Uj!_m3HsUC@r{v$z{ z*~s&R*TE)8r>R`37prh!GpN^sEZw}`c6oK4E8`#8TA^_&1K<03s`~lWcfWs~?bq4v zm8}m|??_$PYU-|uVT17p{)`hF2@CtR8WKHRTsPVqq~wXWvl=0{HK5|qHj|I%^X^Dd z13q|6V4d@gsQB#uJ~IQuv6SA~E6oOj_?m+_3cB>x#8;)KziTpDTJj5>se)2nxpv<;46o4;= zXf!Lf9+Wcn@?^-TEAtY2Ca-n1p~Rubd#j)bzIgr7shwft+dzY_pnd)Eg1hPFZt%6? z$(dQJbLo?ag(W$|%Id|d*R)&uZm}OeTn@kSI1{*qin_2Hr2+>6Dh~^MwtG#y;BJn| z?ITP{YVp);E@C;1ND}a6_GAfzqWkjMQG>q*lM@ou3@3iV6&cFJp*j&+p9g`OWDN4Oxo&fYOK|m zSF_Owf?m#8Qk1)2^}RU9=7F~FU`9@}*)sbgIT6r5% zAyF(QX9><$x%1LhY;y$$DOpyQh2IiB$9-wk`*Kh^y`CrpQRr`Gd{W$|>)aA^IH)9v z%AmB%G|FjbZu$Pu8q4yz{tWg(*?# z{4hECfS~rQRDs3E5a1Fzw|83YCS-{?ExERho7|kL)=6jg*iEC$i9ksld(*~pWv4;g z$DW?6?pr_J%*G&2BM0f=eTQWxmJ_CF^J27#v`STMQpOD)s9+(39J7o_aFVOY$@CPF1NgDIer%_x!*h zAaY%BDTZX8z)6=sY{9i6orTl-&|auXp! 
zv5#h5Co-6WWEegI# zAFT5@tY?9MLEF3p93k63q2&y*lCQ{nA+={#n&X{9{9$kZ*W`mpCCqe7+FiP?26|GM zu1apJRUdCQm`o7K$jPZ%Zku5<*C0N|&d<1OCRJ>9;3T1lrH0RYhQvn0M5mnh@K)3- zw{b)(O6ys^N3KGfo|Sj;Rox-Z)q7?i!(5DqlA7j^W2zbwZEr`#1uKp(@;=}}4g=~4 zW7&@^TF|dB%qTe8&Vrwf0Yy;r`0kG*4)Y=txXN%RG~9lB#^)$hVL<2}&R%^uulV(1 zf7yKK+fiEzqj`pB`m7!V9JJuJSo=vKP#fxfAowRiS%H`TTHKguReI0+YKbKRkF`cq zyL(I5K8weUtWH&1hfm-8B&hki^SX47*a!9r+!;ofbT7Cy^xgBQ)v=g|wUrj)8Mf)Ucap7Y%7uFQ>g1AU66vNJq4<4p-62|;y9RZi~i{0Y1HCsc!zmJ z&$nzx^-{r@h|a%Zf!l_g z!K-fx09&jqkJyA92B2lfPoWKx^@z`6cUD74V+?;?j&Pc5Aw^~@0(v5Mx@{cYcYX54 zX7VM0B#Cp*g4?f)vDOygMAFuGc&P2J%j{|Zi^lcCY3MIGvb+7JKWYa6Xo!sMwumX$ zR@?hRhXn(RT*mj$FAm7{nk-fLZ~)3t4hGvIL(9a<^L2?Y2t-{A0P(}w{WuF6;guAA zFABgz-nvD5n}qm0m_Byiuzg-HP2%;xdVbQBnyx%~$@YF5}j?mp03Al>g zA0SR!kvZoXN~{rz3a=Hg9OqJCany$1x2Zuh9PwqAefV=#_iWscm((|E#udhb#$-&7 zT0&&P(d&Yq(O@6HDx%orxj<=Zfso;`>M`2CS-~uLyC;{(xyL1aH$^hhRoDKNii%3^ znclljyFAvs&F?A-I;>lkp}O(sK$FX&XXmR-@pE7P;m1JdJ*Aw zWMc|aw>{paJw@L;0H0Zh?;6J6R!X)*LN=AAv&iK{KFZ-Y#^E(8Zl_+77-ZTVw5MyJ;GEWD0zdJ%E3N^U8L<)|v>b z+j)ACJW&T+*{HR^YIM`!5FHc>2|#`k=i7Fxq<9=TBgGCJ6JZQF{wXUGj>)qv7XHaQUh0_WKVL@ff&!}a8kiV^;cJCCnB(m z<#iTg037Y-vFHv1U@D0_f^zYy#R)6?b{X+?Sg3WkN2~J}+|Eij8;)~U#v)kX?i81`O59?;wiwg*d2|tQQn<3y4S&2Cy1+RgMD0J*V0=zO@fI#J{6^W65ko5&4#M4|MR{mx3_|Fpok za|tBBP$5!w!0Tmk=#?U2Gu{w(z$K>h<4n2hMLyK_zvC4@4XpOhaf2z`MDOs|5Ckr{ z!xnp5Nenv(f_M)|)oZ+5(~8FWs9|&KU6hVMLGK`Dt!52*#r~-TmR0p?wv~%Lb-kN< zt=+OuhMPz`d3Kms21)bv8QCuk?y4086Al-fs2DIVO0_i(SaoWks2#tCd1R5FZ-4%M zd!LDygQH2RNbE_TGU$h_#_pZeT-qn7UL(_Hy2u-*H(1LfJXCCA|= zA%TZ;_Hke3hqKAV3`xH8SoN=Q!2Ym}D&F3$_QBgI_ER1wr3WkhTAw!@SN*>yi%r79 zOWoZ+c<9#04;|e72Iz#)9u}s_ZhwWT(!s}F-lVkGUgKpNN zmz|@2d8;#edtk;S>Oa;qYt{*)pHegzHI2MrZ0ua)%GH=+U7}{0-$?Erv|q->#g~du z8#H~qcaO`$NL%fAeYhvw;wf_^uxK*CGF?<_mj&eSUAgpXYvdFLzr|JkjB&>;aQHLl zKC!Oe@9rR`qp{jaIIaG2n#rd&Z#m~`EZ4!o<*RsERSKIX3pidA>WopR|L2H_;Lk$! 
zCer!gqGVaiszkgCM)Rt9FAMDPy=^kziEng}%FcXw#@E)(#+7HszR~=Qf`87l3&(4% z)gc#`ya5*(KBs8wQxre4!X7Z+D-ON4Zs_X_a>|xpDOM}vn0Lkn%dLkCR`Au8tEakN z9`oF6I$Wv8b3{Gu2s3^In|qk8oVE2rpwz&^w_Mo17ODRj0|zAj_cibT1c|Tt)Nc7@ zP>eyh4w1G)M--hG(M{~B%|mimA1!Q+dqprx@Y8fwp_i!3VX9`MTb$OqodXCkvy_or=9+lv_lw`8& zd%pT=XR)bXTJUV&5q!0HmCW&I;7jIx-P$O)ZFvf82Hzja->vdG%B{SQy1jeHW<5pL zQ76@WvuVwy@}dv4fY){Qm})z>iqIh+5RweOLn)(^h;n*9=yQJ}`Qao-+2&Jxgyh%P ztW=C84!Vc`yAK~5IZ0NOj4`)4i1H4}DGBeWF z7tCU9x>FTYWjK!PT$wp1tC{r;FHzPqOI_xvHj3whKscWV1kDpB;;7u1_Sp$*;LV!hC6IEWEc&>VCB#C=uCuoBR#MOG`;svUmeCaOF4fdwxkRJx^O=7hZIp+oZ9 zzUOzowaz(z!Gg7B&EC)6_w(G>eTDM8O?|auQ{`h2WbLRh+StCuogR_R6K811E@Z()~ z{MAe<<=UKa!^5KRrN~9*4`1Oocl?JK#1Yhd7TAsGlE(zLrE;C=cE83HAxv~eP~UjnMe#k*H{-hWD1{ZYKZ?lHe1m%K_ExoOb&2GaTPvkqe~`v-%P8PO}f+WUxcv}>b~B6TkXd;Q^8-4;_Og(ybAuZB*EUJ9tOkw%BU1K%-&Ad zQp6<_a2#1-J^#8kdT}MU3=*}{L|@)8S;L1$>^CN~;rz$w(mMI=*%gQ{8X=%aOZgW= zrTQ0!YNVTEivzGvb|?Wc)512-BI-EdXR(6 z!071k8v=Wwy*C<5SLoaG_LoVrD9nq`CBcu!5>&`CY;vpS*QyIFqih-c@dtZYKpMw$E`pMb&fhOwGH}&3=X1 zENE@e_4$3vxtHx-V*DRy10;{UKU>b%^_kd-kRI}BHebTzBtokgSGBV% z73KNCOhHZ(5(4)Q8mENLiV~h9hh~6 z0B+z`Bp8TAE3sd%QZQsukiLtcOUmH7mo-8=06S)SuRm^1ND^?lOk8ef=40$P6Orr- z@HKommT`DQitd~fBwC42cXgaBjCg%L5&YzSEZ1$+Y8xSvV(Gcq5Q^sO@FUpgN{}Y^ z)y##YYr+%qj5KTFPu#|#A}E_rd`j-FG^pX^ECg%}gUoMj&+fo0yytOm_o$~$W zrE}4@MDTEa)P9ux{+U;vLKHo1PggjYAl@9p+@1vwflUjx)Cd3D-Z zw%oQW#0!t@!+Hr&y#-voZ}~B6F!0zdTWir!!X-2Xvk@RsB?uzg`roij5i@W!_Yk^d zPB6Of?(V+yJO8dRfPP;U{Y+wE|CeAhh%9*Vy_R}X*@%NOaOC0T%K_8NWJFaSL)F#Q z-AVqpk+XUK1X;@0w~Y|Ds~1{dXoW$nSQC|C2bYfcI!fnu9pDPjXWI8w!VrRgN{Q`4Uv6n_}9EM|*2u`EB0HHsF20D&pq7yTvoatY6f!s z#cmXU(r>;5^jHadejA@dsjwWEb4lJca;WA?8|=^61h~9xyU17YJz84jeab`V=s?@0 zh=RFo!lO6uOsqlB(--iD)Q{ zuwmKk!s0{u?>4Fj<@0oj1`t@pq8!w3n>~|Ww-NKrY`8f5tKxw0?xsrJQ_< zcOWOCwZ1EK9x5Q7D{Ow74g0jM(sf3kk+VK zTSKvtGg)FQ?Ik~+AsjVUWjg&{v)(q{d9S6_8L=S!`{M;?3G%Dc;|j~(s^7}_FcR!( z4Yr^~RpKDpuJtI7MJ1-DDAKKpi7w?DpCat0J!hHyy!VYw>YPq#TKy@IBjG6mch&u$ z`%NQ9+!Bb8T9(bd5}DzJ?<~dxAC@+QhzOKv-Kul*a*ITla1Yg%I900@s-~K>bl}7&m?AQ*N)(bU* 
zkZ67DJvouBlws!EN#I*Q>|dW3QG|SpiLV z6il~$W>d7dd>3d2WRu^3-*k7GqM)B^{NObES$)+0DE{CvWYH{W9tJ;VT4474U)80CLk!by!UWPv+qmzTZD+r5Y;C%Ok zz+`1#lW(q1pV6`@v01C7f8hiA6x|ekDzEjMZtMA+WG;mfl?Z#t3~Z(1o#(|a+eE`@ zmD7}6*?5b`G2E{G-G>j+#@kz%Tde>XmpoTel9Qc2i-8023Cj+s^hnCq!rzj}yy@e? znbyE;gw)^$lq=~lT}jNJcr&@?TkzBX=_ZiUy#mMS(N2^a!;`ZV%J+{ z;~>p97|R7iOMD%sG1zZTF@~ly%Wn>*HzsWjG1YcvfX;2@{xY#zhF^qRomG#WZ^=I0!tbiJDNHjI z|Kb|Spo0!mc&FYb>uiCM-6u>z0T(M}$pZZb)pv&tDmZqw7h!GX)e?tNF%pSXj)&hEugggjA6PI@ZWI3}Sg_emz=a0S@48*NNzsvueDN>$?N>6%T9?){M-xX(-3D#K8OXS}l zr<0MN8hA7r4=TG6=fBI6LB&4T`Y=BDG)Ak+uN96Fr!ASm~cfVX~OCTHux^9 zHxFfL=v9QO+PmoYw#iNOFFF*o4NXixy9bWq9_x=(1nSk`MsvSGX%e}24h|I~FKtyY zw&6Cjz19ym!?OX>Y6wP<*V_XBbVX3HQBzw zA29+qop8HW1KmN{z}Xdd6y;%X%+?XEPHk9(fm+U}CbakPBpa&3ltqak zS`3skVjtFP{=`c>8ke)qz)X`E-k+Fr1rd9Ff#vE0A zzJA@CW5x@iC4u9BM-m0?$c-Wt4s0WEA){TM;A|VXt7D8vcJQ#6qaaOnGi=p%Bjk_O12yDfA8ahvQvl4z@0#(Q6 z;lH?7fDVw}e}$Dr1NJ7t5X67nOxUhjTDbafZ|J9DSc7oRJxrk>3|<_NU=R9)nTrXU z=V+T(=|2%>%;jz=*nRvK_vkgX44G^bclih)^;(M}A_}6#F;Xi~DmZ z1r=m^OSCx#ABp2zv#-_XN@)b@7%gjkhVroL?ZKe7eR z)@bs5=-u5frR&S@I&N@9O6emIYv4XwgT6gRJqX8Yl|b}Pj6>toidbOo{^e2tAMye) z{_ht#j@4(7M{V%A1|Xpkw3|O>nJF!hyGn@?qFJSZTA#-L-LYzm9t44{H72m>>`oPZ zZN|y}FMco}>%?N6T`>oNjysP&^<4YxJ?O_?UfH-WQ2946ZmAKg3`DVRULL|{OEHhO zETrg#Ie0rXIPQGt?f|bn%OsZ;ws^mXGvmhnG5DwluBrNqW3TB;9PuKz%=c_a<;i}3 zdJ9a&3y)n9hk%opvP_zfEhmn|p;3&;)5Yj-9Y^aC>}{&RWT-tZJXscbnrLEQl&2uC zN%3PRcP~e89z#AJ>V$uIe$k#G5&`3sxBBGkUy-~jIqK_0P5flAi^wSo`sH=14oURu z25J8+xDK`dK?|I!39?2|<#4anDvPX~8^DG5ey-|MB9P_dUI|KR`ZVDqMj+{*37^a{ z)T}iAqTqeyyIa6cTvd%6|8-s=c)^-DR66{U|p#VDX=*IMP#dXXIl7te+j>4eA56mBbI$kx0$I zE}zi-p0AQ~pu=wDEMBmv*k}UvpLg6F1tt{tXWGZ#-8mIbFG6bNgm-(0M%HAtqzx_ZK&)_JSY zwz}{yZVMK&jc3ttHIyCGn~tNS-iX8jwIAgP8oIteH-kAALqb5&RBE3tCnO*0zJqXE z*BZC`hkdSu{flYWt~VZO>kEB9qvb9lwZ{MwANqoH-;leDS#P&bUlj<*UPD3%M29JE zV><&)TG*X=b=i`B1Vd(H>Q>xGm4;P5t73%ilWF0=G)A;BD|)0W&SIxcOQ{0)Nstaf z6TFhrT2qpK{dXJ2KGi-MrMit!5+-WqgseG z^rbw>SmGej(RV7#?3B3g%$$gx9P?zdMwu+OP(3{+E~DP-p;|nI$pJYXj4Vbe7)o3u 
zUu!qg3M!zb;m~bTALAdbY~Uj%|1+K?UaYxS@pX*O?cko^c_PZH;04@nYQ=gq?dj1( zb3N3pTXCyqHOSC_vj=El%FESS?j|D~AA(89M9DAjvd2=UOMYiGBUG)Hb{n3D-Ke{Ptc$IvQbpGfM8N0d4-Ar`jCvGSSF}Z)8len(;*bT-))@#8m(I|`GFK=A{ zr;?KFmDR5ho+CvnK(7aBHdDyU*UjVbNgU$L81bH@*Y&}Q^gf037yyWQ#J3G>@!$i2&w?410>Hnb9lP63%f%O4Zv|JTt*>; zbthg)owUb)M;}sLr{SMo``kp94H@eAl~DG1$b^eDVCELuL8reSxCDD(Q1guNbP5w_ zb_nyioan@B!JK7!N76__c$s+?m(BR`ugEQ!;K4r|IT&H9Q>@}u{p=R$&E zJ2~bc$CE8#gwGPE1cKbomnlZ{GJ(uY=YXzhjVVOlkqK(D@hDW#@dk&>lC zz@b}fbD@okDpls-Vby!pNE({k{7UeDm^)#N$nxhlA2BiFN*!rm&k^)R*5DzW-IL2y z)U11DnjO8}mx@#0(b$90|4>B- zLS)O#QFh5>27-gxP$A}QNkASqS%6ntRa0bHpkr?KlGwngg1l_PuGy9F)z_(-XPQ;| zZKa+$U=?*yHoxvtL^IY@<7iR|osC#eVr}m!P}UOjb{zC8@*6 z|KKiM%-Jo$M_(39Xc;9}avKy7iZ)dmts1XqeyIj z=~^9P zJWw&DpQ^+DaEO?|K5pO^a3NzJ-(2;%WXr$vuPT{EF`YEdwUB9`YL}79nu%jaaE%w*5KH5Xz0KEf|Udx(^WQ#=csKaRjjaQ8%6- zH!#I?FQ2-C1@-;>PrGWMpJ4yPoGC;>OXSsvzIyXn@v30^ce-b?hmu_zeAucD_l*o-F#gwEg5N1%c2sW+}!+B(yJF#bjFf;=D4wY{^AmHFuKbQ{f;tTMGH^rsc_%i*S|wBIwbAgR$Ir=Ujz8jx{6xm)4MHiIQ84gh+2J;f zKtw^@pvp2ynpH?2EvNB-KH1ekH=!HV|Hm`2;x%(DZNIP+hOCB_epj!b>l1n=LL&a6 z_D`5WTq?}hJj%UT0+H}ElfrRerRPVR4GBjY2QY)-apB|u7opjPhbLl^F%L*wUwrQr zEf4P&i0qBn8t&H_$(WV6|2KVk0UThISI`vEpK74~agS<>uMI4tX0Kj~9qaX^I|_-k-P z)iLl4+0!lBVx!7L+Jl7y^`JpmM7puNztWv^H)34wh~2a5iN=EI=CkMNeA0T9dG^dzz5cLjFyMCecrP^L#} z1>Q5UvMXmcOIPJpK>I(@F_`2jtM7f~niZk%l|C_v!hxv|qzie(Q)y>>96%!uL8sNX z(`U5h2TsKt=m1j-pTyDOs}@jxYy;EOT_yf=P(RV}*F{K58^GQUIzlCEQBcg>fM@LD z<;P+P+fTu?lmaXo_q`%PUA)fEOQ!Y7*$K18Y34k}taqrnXVHdgKNM?CY`urq*#Ogu zy!dPwp*}-|Yen%z@d|vME|c9f;boCUQ~Nh-&1%yqsAKQo>6N6b3KArE9+*@rjC1-r z+%tnD4|)o6X;!W~*)A;fnAgG2p1sdU;D8%IPJBfl52o*Lr$~;hIQ&@kd@idgb)mh- zaX3cin2lxH^W_D%y!dkD6ko~F{W&H*{a8ZkYWm^+f12eAo2k!z@@^j}x{g5|S4 z?W|TUEq*x7&F~U74KF%@o4};2+6dsFAsuQ0TK7Vr0j1!z@(mY!ac&6bh`Cok`}7C1J6O?wZHb)1)K2<6DKAuDwnR?Sr7!&?7D zg1Zc5VrPp_?Ss;=Ii|H~)tK8ZzH%2rvPdXMW`2`e7u;(q3W{1Q78m3+3XE;BoMt}B z&);z^h6?X-X;zstgn$!~V8k7s*g(BIy?b$j9^J3DFgUv4WPyMG>Gf_<&da4L0N@ksrRW9M9(O=+CJ70>`M)Dcs}>gbio4zF>| zkPs}HritAOHHVAb5ZA=|v)3p*HPx?aamVZlUX+Z2fhND!~f1 
zQ^Yz`tPC1xkQk_6^-Vt;2dB?wu{68)(#`Z!po}fHq_NVYn-$fv-*ku=B;zb$vXh2K zM3{7k;Po-PbyZUjnLXYUc1ObjrymEvJkU@7vd?AxeReHCX+G>$?EX9EUZMsB$iHd^ zTvUt!2flxM6a4*!Mi&zR=?pPME`ULr0*Kl2DIh-#db$sA3sq&aJz?(Y+vq8%g?}c%Mz+sB4bn6r{5~VZ>MVm_+nvHL{qQ% zKGN+L?LF3cz@W~(q9#jwYJENuuC582#qKyVRtUT=hLiz0f*cC=_S##ND{N0)0pK$&Wk2NXNL4*s`a zO;04?*7E+OKU|C}a5Ne~W(*~s;ADPp4uT$_dd6xG1b?Ok-q!V;=+B92rFenTs4Q(oiS1lM?yKvD>DOt>Jna|tU=53S8P^MKI-yO3A$olVOP{&5z zC-Sms7m6+^UsR`ZjB0Pb`=DN@^#?4ct{#$GY~Hfunx3av0?yCmDN!E|rM^?j@VW89 zWkjHOZ~m2-FJ>0^kn0O!HO|yZ0Ei(G!HfU)`2md7z^sr}NMuomKHC&%!$=xLjuJ#H z()9}*51>fW;TCb^U$D=BW!!t81+lU8u@uG;9tdx6rIO54XRNYl5s&g0@YM-oRqI_q zy1$VR#iumx)blG=LohRaqHgddwij*!rKqcL@k#5@<-W7{13u)C*Q&Q-yLYoLv1tm| z!1FTrlM15_>-(M8+ZQE~%4;))C)ucpS*EJPbUpMw&t-UPPm)g~fkTsHR_82?OqT8r zFzk`ym&X#2@d^a`C+*FU%OeHMf$sh$6@UK-U>ihj4db`|kDb#T8P$?`6FHdJqT7h? zrS{fX#JBv|e_xkmmTH;fu*={JbMB;w(vqa74bmSOW|}_vPhe-(@|KrEr+$k^JMT?= zy1CeAi~!IA6M67HvKa&OKteblw>;c4w{)%mT$ikj;FvfR+MUY#>b%IVQ+29U9Jc=* zTHD5hT#3;0VUm-vOaW|P-rZilH!#SqQI}tww%ipg)YD7mGyAgWHS2WocF|@z0Y+T+ z_B6k``(rlHZ~S_DCZzlyS3>i70Wd=X{5EPEbs>qU>*->Z;e{Y46`^Q*9C4C_s<3S0 zKghns4nE9%H-Q0mA54o;@2D)M^UDt2TH9$>8Gm8bs*yX5IE`9O%|Q@PEc;F&!SYci z+a$?%fBs1QW|28MrDyx@UECYH;HWv5uaNvyyp_}KwFV$<;^#%MMSH}RLh|yxc%EUeu3ote6qd#79 zTNylkDTqt|L;)v(GIZv0)#xbbcPszt%QbD2iOArOCIoP z+O$Y;_QbOde{vIhu|Qd}g!tr_7z!IwQ{f6Bh?bJd#e1Ji>b-5(h{{~WVL3io^V`$g z;iyMI77w>ukq)2Ik+4kB5OB5T=sSyB) zS-S1+RDN)LoR39@%AtplwqK_oX0(i}bo6k$@`(NLqt#(UW_PA6Ra4PvXCnK?z$R2Ha;n5M@5NEvyB?{ zl`z7O+&)UozRa$pmfQRKIMxHj=i#`UplTo1JX3!gWJ}=Sb-wfSbUV{;sCHJDMMKxa zWRYq-oS0?A+dtIgnyLfJ>3*z#uVlL2K*^0E^UtXbr2WqV_g!Ke9le;~M_om#hn6`C zGjDZa&uaY=Kya!}sO>{pj<oFgnp2EQ0 z{-}xVEc)enq!k)*H)A{ByT2Yx8MaGa;&Uc6pFN4pacO6-#ec>Leht_lC)w>gjT3a- zl=C?MDI3U2tF?yFj0Zw{ysU4ZS4`%2$f58$sO+@C^qaA!N+!{D&lUm#MmkR+FAZro zAGSpwL0*3eThb*^01oL2y+jhZiFuS-X0P*+swk!7rl;J9RUyC zBo1P?JVTdUU%*Uvcq2v>F@~S-a8Ta<&_n@jT%9coF9G{s4zo-j*9tP!86ek{sev*B z(EbEA&b(AeqIm?K8Dqc$C~;i$QK%(lzOU5@SbxB~ez8jJZQsLQN%;_9X#EB~yW08Arm7)I*jdIU#^HT-U& zhZKj^PV{;;uqVl=On)S 
z`qJhs^cm=@{QLJKNypR{5N6JQi3+T>k=3voNxu*a#D;*LGVvx8`&m=E>E_nl`@L27 ziFua)p=`yfA2@>G?9L#LHdU13<^jU6G^KEHSu{&7Eg!HaJx%df>RmJhAmG@2l~m=1 zPaV8le3;c_7E}O0@<{`SY98jK0T^{;aXn~N8YXve4i&7kdhg}uneKF%P@O$$xcIRE z4yQc6cEiZg<2nuY14Bcve_spZNES?&s1K$vu$cTBs>K>53=EYCvim&vm1*(fj^4jS)l@C}MDZtW;(?I8+7#ssxDYdDNqdc<%f0Ksb)3pTZk>WYo!B9+AkXA8v z#Z+*fnNYB@OR;Yk7Gn^m)XzRQU=fClVuZo0As3rK(v90^e?1*B!J2X#NS>aY;6a6C z7(YO~WMgKI2V^;NG@NZ0GupJxMehe%uh%0=eB7VV-t-VD;qF%Flb^6}jwN%;-drA% z3#9=iXB_jaGxF6J2xjA&Q76oRXQe=i+Ud}FcamYqJf2?g?z($i`nopg@t|?V-rO2{ zjR27og+U;9{IXBv_JFADec*Ozb@j>$UK_nD;A06;qe0)F1OtX_pE&Trv10Kc|7r80 z2PPHb$#NOyr^n9;NxA?_rkZd5O;WF>`M%yyep;ekeLMv~;we74i>SHL)Z9W<5@D7z z3^vDMgSZk2+Y)uQX%MYab2n9sWs$b!P4}Y(3Nsan?Hnr$2`gcHc+()LVNUaPS}(sa z`mN-|m-X5*pK&}_y zf4}wJTbpXQfi`&Y&)Ie=)|~J%3#_3F^1eA(sl_u|BWh!igyT9?+ZO@UFBzf0if6W( zSAFDmq-!Mop0he`gFqDTrik_1;|>%&9LQ3cZmU2p?krAQp(W7=!NWI3#QU~H?*Qnl z-g#HqtyW(kC6?TGnTAuxaggE0a@f-Yuy_wV1ZpI${%4NW1$54UjsyIi;n94zqve|> z1ur+^Mj)gFwV!JLW4b%3&0{6VamDx6=Iz7vMqo2Q>Q4Qk^#y~4;ZshN6VeqmFiNcO zu-OBOHc>;>mjaN;%7$+MgGedpn6rmc|2M&7KTmDGV||(YU#Wj#CEnlm@eM0rhxyEr z{4(6o?ggkFl|r14#)P@4cpU9a@U@Pg{LOtHb6tvkr&z>`oYhVf>NDkEjE`3 zW=EZ*mFk56J)* zDd#eM2Tof#pu}GweHB6KVky2Pro;3Hm{GNzkp)s@WMta393T7P^YimBA7dr)v2MD0 zVtiYkQV(V_2h{=sG|V0WxNAwY4jHJ0_UWq2g6&e1^t09H$^&?Q7t{#a)tAUf77(9> zC&NoOEyZ@X7Gb2m?h7>*M3cl)vS}pbU2hwuRarsXx}1pHqjrKD{=T>VuWo>}sRbA= z&M-rgrZjT-Tz9*@yM0V&!kR^XE51!e6kwfp%!O?*f__c21f*_{ZlMqefee$MBfzK< zK(RoyUM=%9y*v~Fb${=U;#(#i34minRyKWv-)1#EeAES&t}LbrvyoJEksok{<5?HR z(jcXmiNr)y39$38RA7g$BY z)V#b2?_#1@Sg#NUA@`)nz9&2L)p{-`FQm%o0S;^xF2veCk$MpympsC!rWb0h@&K_- zCmjXjSky%sUc&K@$WRyFj?6lEZS-ilO{9+z`#vx=UL!B^u?b$hS+qK)#@pf_U(4pT zUNfTfSyR*JgCC1D{^KP3MZ#Z#9q`0zRDb*7skt1o2tU?GE2y1i)3wT7UkO@6^S`|S zo|CB8kEROZlwB=mCxX3HM|4=Mc&)!ZBlflVII1z(LNz!xIQ6)__5PDV+x7b)20gTu zu2F64+S~|>xK!tYH@^5`5-=zB)idx+BClA5KWRj8EFIg2^JxveWDU!1X?;kdfjEns z`(9B2YB@=_<0LwS!!Vl~Q)JB$Q_Aq#^M1DdftYUZ5D=K1t_Bhe0UMV^bDaI>$<{ft z%Y87#Nuc-5`L4_>OxM--`mCUAYk&XG!Eg}GGT=0~1RexVW8k0ERux^m3LS1NsQBbG 
z)&VN+{>KNp-|+QbgFY;^sT$beV(cTH#E}@58h5J?8x>)K zE0)cfbCHe!+hZ*2K) zKH4aOHNEe6!_EDR6&Lb7mQsXoby*s2@hcxKH1h1pO6h=`c1x%zZU+Slj`QskkjTLQt6lhA)dhsIs|)zDwH3a7j$VOMlLksFwRkT00dkSHl$(ITekyw1EUu~ zjc86Udz#D%=nfhifJmhP0vY?3HG(!8Q$t^eb~Jy7RbQk;dz|Hk3`Q@7I713*D$BVi zdTR}~OGpGR_PD%R6x-eR_k;-FKA#cuP(Gmw^Ow<-c7BoMeC$Ew?uO$0HGyFi4`O`? zcwfWIr86hRBc6kmLo32?q67}|Ae zh&-L3-~4MKP)}{@0lUn>$!mal2iO?>?GawCsEyLoeb#zBuubIxtlWNcQtI z&%l86-^(Zrot96n_E|0er;@kr3ul(nevR(lz9)%A@*$H|S_n?A5|shPk+THQkNnvE zw0xjotX#L%GUAD89phZC13ITGlKf-%@fHPW=0~siOcflTqE)G@k%Ia|~ z=UYjtO$KhgaztI$GbzEVlAjM;S9%{3|a)?(npRuMkI^|f$N9*u2q3_Jl&n)8uA)vOEGJt3Ln1|R1^#}l`MadpB?eV@f5tx-%hP7y%A=HQ;->svu zE3i$@PIJx^c=HHG7bEWGgUL%f@J^o35?=-fn=IxO9*+093!=^v2SFQl!oUwekFwfj zRoggA*E$Nap)SbuZ8}+^ETRYy=`lxb>GcmKgy?f=JwXXuvORgLrSmTn?H0hK7??pV z!`?*$9&fK^24BV9~5P>?QZDh}#bd^=UXP#;x@1(@U#KhpGQ;_$OZ zqdA~~Gi*svfH(dNA>9Na>&jv|1VfNO9@vBhO}SpAML3NMYh?I6{Hb`ZZPzt~Rjv|> zBLr%pu?}wEBX9GE&j^QCf^LE1bAYtF&x{T6jw>zUyB$2<#Xc-ZU1L9TzsumnM`TbWHB;G^gm3>V5VSi{}vqYdqHEs~Ti<2|jFs@v&C>3v+CfGEW_8QyS1- zMw0Yz;3GsXhNwPYETuxVfYH^lFXJihO4`3>OL0La2}|!HmtAs z?3O7u!!JP+2JbNA+L)0_X#v2B0C^K&CbNTT?CqM$KnADIi7%Uz&nUg<{iPO_nLIp^#zqXgbCXwXto3?Hr_F&KRby+Z}0~? zT*(T-7GVPog3uDMHC+Gb6QsnDR+0Q8#tPT9h`p#-M<7i@n?EGdYQ!gf&bJd~1*5ig zHV3OQE(z`+6Y-M-c6N;Myn*0k=>(X=p|^|vlob-jjdWEQVFVn(KFa|n!hoHovkvD! 
zK0sYqxX*JXV9dbzsC&@9c^UYx0OrZYT&rV~PSyY$uvHsFgy>d7*>&ZE-Pgd0i|YmO z6{fw7e%_!r4OSfl+P8reqRR&9Z@exKVx=c(HdjO624&6adW}N*-u`eYWm*%Vfv$G0 ztAj8WB_@-&j7C^sbv8BpcNcmP)pW-prhEa`7+QQdm*lHJcS3?rn|F~XVvEk&+3I{U zQ~ic!Aar^(2&XO}=aD%twP>$+9h9jR)`)43^hC6-qQY8^mSjB<&V!0-afNI)#|Jj8 zo=LPV`AEUnL)@}J&`CsLMFc=zGnXPAq+KL5L9ttH`IUI`GPeS5I`>sGG6TK$1ld61 z9L<|%9l8b*eq54Yt1r9mo7&*S=+VUBsU!`1yS2_Z?SUEs{jMXYn(n}t4dVW>F`@W; zi}sRXu{d$$>+GeLCj!lIiKl0TeKGa;G(%KCFSs*_YK0VqxAZHbP>MM$N= zVIAvY74^d?7cb=62*0jV6W+fz1eD|?q4*mgp0fj-Ib^FdwTgWu&C z3KW*(z3~+k4-$JX2FgVZ2zvVQz?^(|_MKV!Ju>a!MpYWbk`)_2_L$iiuBq(x;c!>!GuUsU#1 zLJSl0v#X4l6fA7f6_)uv6$C;)4D?S_?^rXKi2ot8k7 zo)Zynd*7o)P{r>}DwtRwEY>x5+CQZNKQ-Vh$=AM-E~nv6Vixc`-$_yrBLMSDaL=^l zC4Cr?EmIT>vFGubR8;{}iXXALbs%8`seE|L&^Iay%x+6f-WQVvp$5ezGqR3Wzm8e0 zJhuKq@VXEeDS6M{QIz_3Y;cdkpRE>P3YYPH>&yE|0~grOH`%qrU8XYX1y?GJ$F3I{ zZWg41AvMxXPXZ|n_PAfj3lcUa%$U}jge{sbd{!Jc^Y0Oo{PiYBy^!&R1h6>@wrJpV<0JQzBYnL z**-Eg*C`dn>FU78UOFa&ZEy~bJO8FINtM%M#}Qb+0Mp3)(eJ;Gy@+U>GZcXir^ZKK zrIIZN%cR-k=MXsV0KuZwG)NM`(%(ifjb8ll$jw8I39e;JDJ|Oip)(B6H-4u}I<^*t zb~2B@B@CA%$ZcR8e*SI)z?n!|(`fCJkTaj7@*1bck?p!fs{kJpJF-#v>z_r#%MiPs zF!dHhTKQJG0Tn)t<@8XOY9p)zix1EY|ILE^65k86CWL88**4izp`gUpnofzXY}weq zvf1|iB-vaKj!P9GslwyX;a&+jNOr|jCb~Bk@;nK+%=ifoDQlJA`z}ow7h(MwFWkd$ zE+fveUAg}Fy=`3@>#~-1i)CC>b~#_UD>c=~&+^`7AV{>h7umzie$=Wk)l6iq^II^H zO5nf>5O!pAzfQSY29AR(XiEwdfaCT}UQWJl1fuwuT{7OB`o0)1;%m*z^z79qRhGh) z8Vd6Nd;f`%2pj1fZ>^Tpnu7g7JKCspae}Xcur~UocV!71eIg$HeN%kABL>p{bf?68 z_!SVO->dy~8m>7=aW?zbwAW1z8`lFxZn>)Oje5QWcWXoGCsU@*ugf@pI-TdL&^F~5g7E}GrfY1GhcMn=YUbnwkPf%;1!9}$Hs3Qxb_2(B&q>?n10A$ z?qrZLlj8V*?X}m@+(k_}_ekTig0|5_?z9xUg}4({*99Q249ix7ntt6H9Oo)W)aC``I8Ye=y_#m7ZY&i)oAw# z_QYkOUKYl6jM}Lrr0X`1=yoE(!Y}>jK@aRX#G^~Bc}*Vlc_Mkb6*mJqh0aE} zzmmkw@<>s~_qr)7(RWo`lL^Gn0cf!@kIx&W@F%Sb>js`I2~Xp9Ypl9fxrV2#v{#uK z+-VviNfDylV8uW&OLbfSqG?u7l*))o?6&xqk9vF2UcJRC;;c?O5iq=d^ z^_cG{1hPIg20d5@~saCEjGfH#@m)QDyP$-Y?7NfKO((RWQ3(=r}kt8@|LZB z+-$$8*867-AX=oa^qX8W)&|4IYBb+dtxHEOn+fk*(Ir-C+IOO{48F#L3L+KN1HJ2l 
zf&Y0A%3GUlVFpJPC8Z-Y1#0AsT?tQxGqX=J^t?W;^qrUNMH*mfDRzNkbNN*9!pYgB zyry;BmbFiNaP0+N@{0j!_CT)@>nG#U^Z6)({S;0LP}VrB^Y7BiysU+CkjS?F@izU~ zSZ|tP`z^N2^7rbr}hgywm$MJE9Ddjq!MaQfub50rg@5f*o)js#d< zz!ae0Y@;XwpOLzQsk57FSMdz#o%d#(tk&tCZ4p5(UW_0nW;sDJA7zVeP%G&Z$8_S& zRz7mar|y|mVMHkzn@9mYBLlxMsDQLC=TvcaG33-`rmOY5eXQPZ2Nhm6@*e9^31-cR zCM)T)=^+xs80p~CQrx&y&-LUqbQH8$F<<6uAatBGkHQ_COC&Jiw&bFbhG+-q=35Ne z$51l5`efG*l;o7wF@b&647|8#z8|~4l|Ps}tT%qiYf7v6)Y7-_A$!FWC$H7o&(_bk zUssA{k{t$#<8hkMgz|1<)Eww2m$Qb4LHC$` zo63ly17hpQu%;Q5U%i@r3WLK{pGvx4`ye`GRxu0o5ZFsfM0_wv!P zqNVWAD+h;aO*1BxNP6h;&hJT&aBqCMMd3NgbqUK~FKtFajlLhB&`}w#4EVJ)wz{$N zb8<2h{VjawjQ=d?q0)t#1u?Mu&(LzZ9)u?|1_?Qu>0sfmGQ#`o!xmu@B5OfJrjy?k zTS-;cBL!PE64Xe*AI@&VT&a-9IY@iDIQ;H5s5gFp^Z)%Kmw4;lSDG>eTFz}^&%jy5(ClCd5W3KKssMvDi#-Wn+vFjjwJC$qn9N!rW`=aY{=*m zZ^>;iK$;-<7Sacb1IZ@IpwvG_^P9w7 z3wM~x)R>R%bIk)_$6|NjXg!})%VpS=@>`H+hCDxB4X!Bp=jD@PGG?pBXgN_(N^=~Q zZ?1_4!p`o2Ok>U8jpJvhH~M2n1LJe0*wM6GC%BiUx?>N}_m++^Z29i{W%cR-ub=db zQ6!8ltWw+yz2NWh{9#kZ2L#g=pTKy3P@OI7>rYsftIXhf4*qM?$NSf&ubr;u32gdF zJeYD?1~aS6gi6G5+BzfNVtxpx%n|{t%2g8m_pSU`X$8XqE_+iuguAV8OXxBjo^TDx z3inM}itL=+F~1JO+eDoxDwRH79j(7w_R`4dB*)d6^Nh!|NCS5QiLaD{Mqgtp&JL>- z-!LC7qoV|5GmCx2Ttb&9zNJ)c$NQa=(jii8rfONMY;HE*kL0WW!NGM1*Y#L2COD!#ThAIp=x*_`dJY9rrymYu3!HxUMz3_|j+&7<6)8MWsp;J`wfx0>M+S z5=TTir-N)v#ct=mMs9jX+zrK-r)!!x^6zFSju=9W{9be?#6}HnG3Yn*!dgDoZyxP+ z`1)eNY%dyg^y+=HA3t@iBBc>H!>@_{$`s%93niCG_nz{H4~AN;N;TASi#Yxdyb z&sppqgr$KFnJtho@MuDPg5MHUZDkMM^zYB#$yU+BIwqPaQ?Sn0IpmKyk#P21c zlS$o~Q8p{Th2MrB2kJ4o#2>vS-*~C=K>TvP9kVe}mUis@r0Ln1anxOsCz2cu?t>o% zu+Y8hFM@?(IO=#}LV|Q67a#4bVuqeofe5drkP@9G?TB^qQSd%=()CCz ze7$C??)9QlodCF3(}%Qb)NIj9 zMGHNOWc|jOTmF0MUJZS`r>VGo4jf+@UuNYw=s(_R;Jui4(US?T^FR8o^;ZjbwZ5Bw zvZWKh!vb@`cTIS$N>0?{crHi)b}L(0QwL~)f5WdEzgJ5a{lEi!af*It0KP%9IK-*W zLSL~JiX)NKYLir45AvFcZrdsLb>yPk$vx{&h!v)%OlaYJ8RQTs!EYs*CkpnGL1a9n zzbLxVQL#ja=v4d052Oa!x?hC^@wQTZRU>I_Ug#Dg4YLdAvb>*~UfJ=*K8q?YWVDG^ z^%jfFxcJ0qXZ^H5Zn+%6OFfGK9;G^}!A|#&hb>07(TP99r3qYw*|E$X@tt<1m+m8k 
zR%toeqy!uh@M=>;nQ15XSTsM(IMsc#YHU_ac~l!UoL}lqIrcVjysUji8BY@Zg~Wx7 zJ^A@(U8j<+lKV0k7~p6p#R!HI$4rtQDE1<)v?~T58M{MrVnod3RcG;rBQ1T!MyM1f zA@UHx@bwD5mB9#Y(bHZ4>E)l=VtF)X3NJ{mf4d!Yy|D71TWBKR`!ZN;A5In5olhy6 z-$`!LuKJxANEz&1=3Tpg=?0Wx`NGl!1=z>A9_DCL;)*Q?M-RUH7W~##r4tm5h-VL&3uyfJyW)uP^yY8tHp>`U=v<*F#9rfZw^n?}A`^ z<^s25e^7+Rt6sMPs93~pEtv*tX=eENF_YL< z)w2KHBW20!9x3PUbrb-3chmXEZtkyZSR(j?Qa5vafv_I@S^4y@nLx_vrgsKt*X`tR zU-!6!7C)01Er84Lypjv7K`slRT=$OQQr9C?|*idqA=Ly<`%Ts(-cC;dN^DDFnkgu_ze*ROp2pb7wvv1^WAMf*<~g7t@gv;GPv zSUZ7C_+ZhNC@Jgsdd(P3i)Mz?J& z;4?D^)t@hN#_xwy3qSYR|4x%fNxA_8hwB_jpXa{SGmU@U6eP0$t;4~4oNmj&r(5*u z+%PjeePsAHR>649Hu{@E~Mf}24b%UVLWO_4%*l5X0-V@ z6W0#VeFj>~03#cx%peT1Z~bLosWEX6+t})D$Ud~jXBt&J^)LY8>#zK_+SnZl^`LgO zMmawFcbSM?)q|7E5_@GVyjs2|LZ=<2Tty7pgu7}(!VTo#4bymGc1@)+kWE_F>wU_`W! zW(Cxnu6myj|1?AdHyVbxq6G|#JbxJeE5?gPw)TQW3*?KY&<$JJys^R`^-N~qWtl%! z4y%N1kCYa6!N-gDadBBqj)-SpZ>=5_sHa{6{9gbtrPyJ>?>0;}O~6dCtF|n_*Xb2G zv-wj%9dp{n+v^Xmg69G?r<)CXo-K!?5}Qz%yygYvf@%Ul=CHt#b;GQDJZDW}G2P`c z#yNln;$X2AiY*@N%${l-kN0|!MD+?EO}zt{QU9u?&#cIVh+#91z%T#Y3Y6Qfpn_-+ zGZaEk>9+FC7m~t_amsD{`<0zA@GG0>f+*FVaZJ!{01MWAaW@r|O)dmZf=1+7XO9FH z`+YsHlf|Sd03zc&--+@LM^1JfB}BmMY;qPCi7aHxRaX%%%R?jf&`xPhJ-;|#7A=rE z0NBCb0Cvk^?UiG@FWfHI5eaZr!c9ndv*PnLv>MvkKm9 zrHk4_?^uPHbKyNKhNdys>CFTC)>K}9eQ{2UA+lMMF~@#m*=QdX&J+s1G=SqaO|BOAl$Z` zy|_Tmav2t}jIoDpPfut0`_lr_Ni!^{J-laMd!R8KU$qPsJm36KkCkFut~GpOhN}mz zv46K1uZaf7!6>}0P}6@PrSA}e|i`Lubp@@I~^jUM|&+xmP za077bouOe%Fu}p6KGxA}#bmbQbMVIgJ5PPqkoy1lC6U43T_ z6qk7=?WVy|+87x;wRF#SR?{A~BeQ6RwiAd;rTxs#WH~1{|DOIy2Pg;|S5Mk|fOG;p zmUvTlvdcKZ?$3E+N^!nsmj>H#!!1d%2URG)#3oq;=^bkn7XDB$1*AapH!$6Ys+*fN zjH}zH@OUKNH)GA<7MSXlmaTVeEMr6Z5{gQ$PQ9LPKsg9AH6SX8nZ2!_b8AGi5vbOA z?k;+k7Ed7&hGidL={DCKg@(gwPgBVo551hJ!-A)>^;1d8>kwaaAOk3{6#FsLe69@2 zhpQV^ZTOb>R%O^Kx+(d(tZ&_H2QpsFfkGB|0Dfs$37{;w9HaE0X6lF8cOa4SDnJE~ z1MGx_8|9N(+1At*pWz3n6HF;Kg!y1WeeT`L3#&irwr|p`KOL6ehd|3deR)LhNLNIv z>(n1AV3Eg@_poO}tne)eFvjzO^1Wv2)PFW45LqWhM^cP_wWSMdb7I@@yZ~eE5$r=G 
z1h&@SrZ*kf_>^9j8SgLnTvSd5vB$(-#>F2~!?w_SEaT+UJqHfhvvG12T8OwSva1PRQfP}`l<+c|#7 z|K`_}SK~|+4LB=|=PSL_GbQ+O6l}r|&o^3s&J}6$CVT*imX_#u9Zc`X47)e_E0l3H zMfe#5>8KQXQhHtHsm{w@SHs+0YW_}d8yqU~!gNP<9pxa81Z8Jnx@U{enq_%bgWO!& zXS4km?6-P!9@GWHg}19;%#D~=b5oQw{CTA0u%J%!ieM%v@0pNIy^#P5^!F@$XHZ2{ zc(4=!;B#CZE?lPL;ZajT)NYTu02tUMXNZJ-*!yBlzsfL7uAWcH!(t0yF6IF45(`X0 z!D9#ZvLQ8Jyi1u<25E%GA8U(jkjnc6N|gzJ@j6{qJ-xF9C)RlL&1p)`{EO4T3y11G zfc_$N#i z3^1E4;ly$Zj8WMR9O1f%^^SV?)mQSAD;bz-wZIerW=XA1^Zgp|5#_G$N=%B91eD9` z?U##ZNXs)pgQp5ZxlvQx&sV6%b7 zz{cE`o!u93J3r2D$5%B8HKW}%dEipDJN*OC+CeYXHI0MMJn_q|(XDQn#d2rx>?;?; zEyYiNHrF2nLf55rMxjfQBpZLt>usYjqq{3iO?V%2CAd9EEo`JTY!0{KmqeR8==y%F z9@eA5So3bvZ!0MFN^PC|i!7q5;YKGGqq$f-X3*b>iSl?@cq^`g zem>}|A;N9tiPy$Vv6Rku@fyf^oHjLS$|>J}@#NTT3`7lqg#j7RH^L*l&%UD zvTLg;-2@^vZ%BRrrdB90eb}~T{!n}w&$BM=S2>cnOL6mLs)oWYvtDN#!~6zM}jJR7zL-WJ7jTsb*}5II%Hai{wUHW zru>3QIj6;;Ynf-jecX;`u`rygeuQA@X#Lp-vWeD2%}qMHj6wZ3hNCdE+i)$xS z0m8&wFhgWvbM>T~6vsp@5<6E3ETUWt3%-y2>rY%mUQG^*K2_QSI7%W?l(R)nZ!Kcx z0rY4r5Fe9SJal@@rS264V1tjf?#%;(b&sJ+>G=*1D^IEaniBir(Lh0w0r_fiYY8yk z&qnJG%j93ykBx)!78KOp=R1&yUFldthUqxGz@vSZ>@S4qIKiV_r>CR!)PNAtMTW+6 zkIJAIktY}%qh%X8uG$i_dl=OBKa7DX^YB|OH$;Mpive%;Z00B(`Xr-Vt3-+eRk(L&=BTCpe#J7ZNsCuVn85@GLC(6ga1?sqCp)7v%K~yep(kN?#L^eVW%}}IF-yvLwn0ZzD z#;bA*>Yis^0k%b%deO`}7}pMr2HUc<`!zjVk$QpG+!Ohl-2?aRuP&id)%tPVy6fuz zGVA=dbO#(oKjd2lz?nBjqec>uWS<*{>1+eiL!?Lel z&62;8sl9_|U1CX1X7Y=<%J`^Hw(lzW0xX4m`iw*nOA@PTX$bSh{=aUVz~4g2d^pKP z18jzSrV;OFaPiZ}K z=J*%~p?0&hI+kCGw{}_i+}sq&ii^WpC(Lf+VR+=pU`BI)^V~|n;e1uU{`HRbg}h9q z1pe$r!B0cT@1RI6gL1in9zA{XMrsP&_EK+qr-^z^6SrE0?!L7H>eeYP?pWP7PJ!!9 zS3FvnawBt+)i5-5@({2N@UD^wBhS) zUX<|Rn>XfPoj`7##1_hs zvU@N>Bk4u%3Z2J^>6Jfx!(fWOib<|vq5^11ii5DQrujMfuP1BzSK9XanLiWHQO%1C^G>&;yMW=Py7-F0v{UJ)xi@1vx!}qV z`;u<{)zuDeSM)(1v=BUpaPF14vlp-_EwOIwJm*>HI;FmN=d*M_0|4e!lqB}C>baIX7QI@I=@Qq2vO})RmUA;cT*k=v!6?ACCQo_q`X_& zynBcM4!xtP3KTUIGVO9^`>93e6t|P#x7zFP%3z1O)xb#*<5#9(5yHtAwWphCqAs+9dMIJ%pagfN-sA0+!^eu6`#t=NOZkL zN>Jr`pT%*d2J(RL|~LHY+Hq^UAG?{FQt^YlgRy;7_17Fh-!vCR05;BE*x 
zcfrCa7#-<;Hg1C8ZrRE_KOX0R9pZr+6WBnVr`@;fm}rt{*yhO;XF#4JIoK*|M<1kv zogMtU*nq`tv9ofNkjN|)sfGUz06=jno~b-ITSjb<4s(M0zgDv{x0!ztRI$8Jni z>N1E=)DQ7#L&87;-MxuwDrKVqi!&#>L^&R&+><4dt>$reIJh?5HuBb&Q?H%`xV4YpzWJ)2aIQ(O@ zfdPedN)|z%UAHP<;Sql7kdwmFx38IYnSAYfDt_PlsAd)i-<*wqIonOgEo9L6Pso9g zBgR1T^MH}H9>J}*atI`~%6E0aY)R`J@Ni_3akuh9%~@Ku03enP^2H5r2lK>_^nh9P zq`l*bev~>a0^n}eAz~_Jy5w*%>yZgc=exoxStXLy5+=!XuYC}A!L}uR!jOvZV-^QW z)e2E;ImAByQ$5!z7}fgS=J|8yT@Yr$qPaS5W2lnV;gj6-lisFX=-P|7_OcRkUIPt< zOcU-^fMwUVp#4-09XQ{dof1-zy@hFPD7*$T|M|4*y;^ujd6j&V^?HVb&V?JuW3lqE zPxa7}kM{NS#42#plP2HKpzZ35Fl%67l%p8-P?~&<1y`+D2^)9Y9fE15uuSrOMfjES z&^9&@`I!$DIT-|w69!Q_2AkH%>Ii)UOxZG5hvpr9~TFx;sM>SlmaXxu? z?$peoqK7<-YE-i-+?}$ttkp0tp*&|Biij)({z>Bkwgho{vyLo-95tLuke?bT45K3_ zY`e*aDO;jEfB>E%uiA;jq5;ZJ4nq`=EU{D1W^XoklU*Lb3J8 zKliAG`4-Kr!8&QJe65J^3GTB%8SV=Vtn=iem9by~2BR9ITfg0V-3s8^Pe}->3=pK2 z#OSSjQ!+#BsJY=+k|Uly?&xvRG}WgMtk z@gjTd+2ltWEb|Ei)a8DPql?mjVGZCajDWNRGTv?-Z*(6(M`-Dhr`MYt+fU3Ok1mbS zE`u-+u6fSa4>q-tZYHS6l{p8QCT{kSU@2?_b~PVnWoJuqPS)3PmHp*u*&a?#WUGcn zXbTcmAs88LLwC_#T^jF_wsb~|c>T?-53(GDzT6oV(i)SGnu@d7@7?cZp~uCI|o=P3NlWM?TJ-7~j9^rt5OBI-{|G&%*XVR}b(qb^tu zSmzejS?h8t?01T7s2Hi?DzFy1db+RYKKDks5x7ZKQ73M10^tWUqtN}Z7}F~+0Eua9 zZC_7X=6W)-@AnLky#1q^8U9a=#BJI`=dX=;BtJhYE=tb)t)1>1adI3l2rwo9a3pQ| z_B&eEhrnQdo!L$@rLZP8&aYXZq#86>ndddP5xE{helY4zp52n;iXeoFV_;aBsk}=i z_(hUvyCEQ-?3C53FlnVt9bL))W>juG>jEQe!L`6f!CjbMz_!@Md)>z8zKNh&v&g6F^1<)aOlQUk`uyQF z^Brn0%5P%hPdM4j&*xZ-bwtv`ga|#)^QPLSFNo~8ONL1)2z%fUF<>Wm;n9yO4b}+e zd`Ytdrwjg$z%LFTG&gauI;&*M=DOr6R6lV%Izbc`_A=KLoE>*=+8>;KXu7KHUtIhx z{9&D2GwHnrH_Msc?*(e-&Wv}{u{5u!9h%VIo9NX=bdtx>IcmYf?{;@x7EXULOg^c} zB#!oIFdkcnvCI0$Q)$^Qr;P9VhP8aupG$$zvYyBZoHgQkz1&e`m&6_O^wNofXU?3x z?pK$HsT8s_eekmYwktnJF9lBPE!CC5FPn*sj>*-y4(cOtclq-;#&}?d)PX4a>)dup zll1pi34~m=FA!=85UG@+TU0GZ?ACF|KV=d8U@zcgg#Dsf)*gS}F>ee}9Vvn0SIsPp zV<=q-k+2$nt|32IlcYI)c|4Fc|Wh$s%+L&=nyd_;2?GgP*rNS`?WW(!u)Y#p2CBzZgr zZ=4&jtszBK2$@kzNEW8NI+^QVaK6}^jXc`6uZY6KLH9Cyvv^#0rHG?nQ=PIR%oO`X 
zET}v|sgs(&jFp^wY<-&2LP+Fa@wyk|JA$SUN@?WpFycXk_SUC2@uvNG2(d)l;Zrhs z*fD}4gBQZPOLm;i`nDqUzsfC^o(nZA*PhkB>p_{8+KNc-+4}hGc|A#8GP?;`3p>q@ z6LiS7z&(VY;dv|>&2Q|78EBYmnkXs-qYRFiTOsK1hglt;W0+L-sUd%SNG`FWk#mHl7KklFoAUDC!K^<(Q+%1pXp+vmFdc@zXXvfgrsT^R2_7% zeoe3&>|yt0xjmR7#bLT4>_|7JCHE_#>@(6nkke`tPwrT!V4l-$uQh~0w9)@P??gW+_rD)xwz*`eqxLBAa@Xr*Eq7khl{BqRsT2uG+;y2x@ zA5_S#;RccBp6AG7gxnhL-}$6L_Of+cFS#fx0|b8wt7_iv*g=jLIV9}M!%1t`a^DAi z<2`PxH(&nHlb_;g8~@GlYB*4O&T6aO8d^=Vxn5sKqg<1H#<1~)GZG`$DlL z2P92nzk=aPFP1^nEr=M;Y*Q)Q?LDr&MOK>5rP_SmRzf`*+#WM}fLXy)tHwm<)c4*7~Nht81aD zs?-BcLES&1f{*r-L)dQ5{-a!^Ka;`g zR$j3wq6w0wq~Dx+>$3y*t`5DD>?6f*8$m)`t6^4KkW0V9*%muB6363Zif)2hje_1% zd%W7Bvx&##EaxfVlrmPg+wby157yYL2acO`2AlBgzEW=XK+Vg|W%f4({8mg=%k)kC z_tugHb~#w_a8g}nN=RnP0W0VQjUwLK<>7`;21+2w7V%^h23GHrr&B4JhP~Jdyj-ZA z8Whea$b_?=?ux+aUBlfh7J7&8Ya>pM} z&%~xS(MXG4pthHj^0mLT8@0VP1|x46%9dc(?^M^NWEU1frRbHUwuZHm5&9su^z>aH zf8*!%?`N$)Y*I=fRb4L<1ZO z1H}1BME8%z?C)dgTc?!2h6VKRWU9!Y8S9u7Bp0(QAhJvqYqQtS#bh#;M5%Ax61(+6 zO8iyc`MZd7Ui|qpb}n$|qAv9F(Y#W@>KmjfOM7j~T3dZG0vHs7GzNqESjIfAJ_R=t zL@(0z`MMSpmbkXuv%y5mau1_FJP}*HhS9Ql7psSaVxee_xaCf9+IUs62HWqd;U2GCM|Nn}(ixtVz79%t!E zzB(~?oQ9Jf?v=i(J#zfZ*e2W(aRLDWyfHbq%oY_bHB3Sb!eNS}xhJc&Ar}WuG-|;@ z?hLs4xGs4W!j30nD6;mhmfSog!+<$Z(OPlITz6GS)atIBBLW+o#-0B)}=;$R1~3El1ii z2UsiA3;6*Hcd*!AmT@K3b22uCgbU57Zcb9Pu%Pg`VttCYegB&FK4r$n_z{&H{qgFh zl=ygdZ;0v5IB`w<^+=~ zLCmBsJ`u0yD8rXJ>!bL5VT_)=^3S~CT4`aZWoxqL)CHO%BkVy2%eG+^?1%z(`NQ9c zF6$=@5#3AVB9ErZC*Qh|U7q|cEj+9Iz)E;89?Ond_vyyTl@gYYnO>C)ZcW;JaWQt1&m}FKw&xa|b?7oAOjP`P8Q@I;(C-+?2*!Y4zSPA>>H~Udr$H5n&;G(yuzrfAUyu;-A z$^ClU;pO6#Gw#3~em~hXL9+E?-!cZ<(18PEA+Q>o1t-maXmBAKp0p5Y8$#Eh^VH2@6qzW30yYNe9mL_2S}nGCkZs z_EyFeg+nDFF!!BZ_To;SjTvKJ9Q~3(ffNQ0mm)fsH)V`2(Zk0fs7-!cNz4Sc^1{n8cmGhA04Te07E5}xG4A9OYP z%OA^xim?=3g$kA!g__K*!soYLT9l~PUyQ5GE`b}OI~HQPcET!NbC?JYVV#x$hQ>gcqzubd=})i274?+p7?fTZCgLT1TGgTnu@^R0-! 
zS8izq10!EzB1`Cbz}3q%>Ho&Nt;O5Fx|9e3LmDB8z^4QoY^zPte5-d0G)^5_xy|FFM5#G#hcQYpM69YX^mL8hZL8z(>f@`rCJFVF`fq zi25^#gtFa0JjE!0byIo_S4FnzuECs}G5=!HpXC+8f zpcVOK5i)Uw;j3|DDY5>InFnZBgFg(nVg!Xdm!#NT__wiY3f@1WuP7iG-TU`|{{fjs z-9Wc#*761bMqg}}fy>2424SWMV9#}JUJatEQky=UMy4HYwKoEb*_N4Eb+%kz36<>@Qjmq2y~U*e%7`Zbsv zn21l(Gr>4)S-yn+iwf}1IDK#55QtDptyp!CCkmM_aY<_rS_s^hJ13L_8BrkyvAX{P zOTVPKp2W3L!hq}X`UCDOb}%LpA}05=&j@s+ob07EV8#9i(zn3}K&pYDJS<*w zt&Kqgna>9f!NQ)&RV_VD_#jf~TL7?4q)5Wv2Fcv(I+e(Yd~ zfX7@HvoMF{D`MzSM6{h^ZMW1uC zHRMRFTub&Pl5layz^Qr}DUD}=q5rKDzP?e{h%S`+1lAy=9^FCahO;uc3_oTNYx`!; zzXK0C8{Cj-X#b~XAYTojV0T|jW?w_RUt|2LoDVgO=5x&D>K6|~PBJ|$am4?cWaMPh zJEA+?)m%bOGC9cO6jqK)xgyH3)us7Rz^^&iT$BG$UEfpC4M0Y$Pk7{yd?PLD3)UYb z`wsxd+#N?-`d>7hf3&IL7TA@LD*NhdJTaI^H-g*jE+b)oT-=7}9R+LA9$j5eWHJE+ zKnfC2DJnyG$pxhTPdLKFRiMzfoHttA_D-w`k?4_w=TgGM760Fzm&dLyA<9;}9 zZmj*{y}g!qOIAhUN*qKQ@RT>=s`EXNQ_TxBq!o}wcqyGf;DSEZh`9pP<#lB|a4iiv zQG%|^IAv+IvbY_x!YM#6I*;N2vbl=km`mk0sNob;e|b>A%|c;Ys!ApT?yWI7acA59 z)9Bay5sQP21Vq^ikxcGYwMis|eXXEgB=BP0T$F<3je)P)(TP~bD)=8p>PeS*liOO} zGN+|JMk%E)jNbq+**d?f>4)m~CP2cY4#Xxm%4XG>P4)Pzo_Y@nJX_RG{tDtneCHeX z-dHNa2d~0YpK55C#v>tP301=90NR2J2BOcDFZ#%14hy_?TE=!ELLkx>+{0CY128X^ z-z$zDotq}=l3*qQoT+dZKp+W+=8DcvS$fWhwSyae)RABgzl*xZI_Q=QF#Qxu_x)$r zNIPJdk%LM;iCmL5CGnm%Ki_$ubm}(&ST%LC$bO&X_^ZaE%*~W|UtH1i?RV!pZ9!39 z``>-5hjo^UzRl`cX2!d%sfJc4-7Em5ob*qCJ|dQa#M|?8rd@s!I_q1RP>!OZUXRl9 zLKci+RWR+I-}c>jJL3Y%BXU2|xjbGe+5pE7J|-cJrvAF<2_hDWT@Q)Uw)&ZQhnicO znz>@iQ6WGM2I|t?0hL@PLHLA%R;mzLrq1iph(bXff^4VJPW;QfAN1 zUVH9ev~PJG)eLMPKyXI$DI!|3Y0LY{J*>`3f#TyYmcJ=%+X8hv5$`~}r?nZ0#rg%1 z2rfVO%gK3uIw79i8IWZMAvNw6kT4{wJF49p#t<&#bkC z_hS@9O9qt%ThScAJO+StbWLVNb8I}lWA3+uu{cIYBpln@o)>?tOH9Z4p3~6i^>UmX zbyn^Lw1zZqWG?Pf%kF}Hcvvbo_$_Oxz4zJV9LTZ%q|&ezjDWW)SVTTs!T_NquS&oP z&Mj=%TLzsC9nIx054MtkeU@by^$nSoo4gIWi|1I`;4Ho%jzj5A97;}6~T1~zF`xP%7UqGXCEn^WOiR`9f28USEYxswT_kbfrT+FQZYwX0~5&+RBa9Jmn861cbQPcO%6PY5lSE-BhPG_p$LZK%HRm0K}Gx<^dfc61tzD@_AyU)a?K|@kDbyC9iNHIz`&tbB2V;El2ynk<`OcqIFA 
z;?8X$$82+!7#TaA`)v*K{($AbyWbKP&xLiK#x7_x#wAEx0din4M=;VWe*hfh45xfn z{5Y~sXxsl4@4!9Rha~>i3t1yr#Nz{ET1bE2zLp9yC z-|jLuQbhsSCI&ekjD>+B^g{PyL>^6|SqXp79;Nm8 zS)jmCWs(HY#O9%!2c!CUfJ&ilQ{p;`#CaJWEHKcV$PH0+V=eUV;Ch0>$4vk?N;Rlj z-Yu5@p5~yC`1xAPU22P83H`E*x($k9u)6l;f_1O&l?0rgsH|Zh*o~WJ_7oWRxe#kpH8f@`5#iBi5ckC=>5=U^KbsviLlg%|x66}W z?^6ZU=24O%1Njd#d`dNq^K+FT8R8(GFp+NVy2^~5!9pV#%?^O|76x?B2GqT{RqzJ1UX5QkR8n`*lnx>C55Gh^}Qeq?ZE4Hjz5mti?@v|M_+9#QWFhG72?V(Z>~nk`O>~1tr6I&K zOE2Mzz;0A^KwH*0;HnVr96IB({qT-n`MJX;B%!6)L-7C(3E}5as+it2H+|tMragCC z`eZbga(g2y7=D+P&(U%#$+eDO67i861)IXAP#oY5DU3W;>w)@Y?6UoJlW$UOV=-U< zY@LtkFAC0(bjoJk57Y57ntNkJRYbr6A6_4+Nx*d#`|caDsDFr{PH>cA4X zC!{~SC}2N~B3Lh7<@8)t*dc_5;~UB;6spveOH*RrvfKZLuiJ%x+->ir9`pUrb6bSD z=$Y2=A6%@awn_#QJm7%`WUyD3OW~_c;KuN1f0K$>@i(;jJo0 zxLb7Y8|!`cJ3{fqQF{XY(Z049Or#d}+++B)W-CO_FCJ&KF}aail`WL*ID7mMuQGR-fRnSa_@BmU@=4}zHtl=12)(c=zRKw-{Tj_GDUiZqI3k1itlw zH_(b$v3nsfe}Y0TLKG5vz^*bB@PcH_@UT@EgP8R1wLci`MhNzaYLASBGFTLkh)V|~ zmHRGr`k=Se>2j7Do{Z56*Q&_LM}L9&MSmjYV448Srz!}3ZPDW-|6lWdN5Ce_51oj0 z8PZ~~pfg39#!WDZAA37?XmnR0Djw-vLe8R&oaXZgC70bK2}_}jfNF`04X)Mllk7|Pg=saj~Jc^CrTq8TknV_+p-CbbIK z;BB3*)TgY!3MJx9UtnU<(46gm#rI00Y1sC6g(dApiQhLv>swwV??s*;Z+Dmu5wG+h zg6OyJ8ZK;rJXmB)q3H4vo!_sh@;wWWvvTugTdZ+uQKN*>Wryxlt%kel%`7%>@E)hcL44l@KpKvHL6znE!kq3?E+G(O}){p9Yu$ zYP$qZn4KCXX39NWU(^_Xa>ID9$~QBi<*qcQEHPmfYdTXZ5W5@`ap#VY}uMHgxy1LKZx2 zK<1tDJF`5L96L>9&JcAeBg*_eC-qOHLjj}Ol7xNd=A!4oo)3TCI;0wGAB_GhJW;P^ zsW*{vCLMwK@e9AzOfGk9yXdT)8lC`_{6#Yst{NTVdJI9M2UXG8ZuyK~^cpBfVNUs| z(YQ4D*4~cTka+n*a(lyFYDn}`NVjBq25Mcsfc@-(j7}?7!tFb-&VDEVl?EI9+fm90 zjKWvVqelf0V>MDF09Bz%{L*tbHZ{D2sPBH(=4aJqa#{ww(ue{;-Rf_Ku0^vf5hs39 z$CekL;nIDOXQ;r8%5B?$7O^eV!i(<*%gP(%UtL(4UFqbW?LpSXy{C1zNSd-m;lU$L z9k~IbO;=Cqozv6pR!%Q&x0KyFBeBp!o0%&0EXAL&3W)ypOvB0`gjxKM`(=yp;W^J% zoSDPkDw?)xeop^aQK zf)I_t--V@@HUPU2Uv~%Z&EFYBd+dVE?V6yv$YegU`WHJ1<+G|SO7(s=IDWMIoo|wT z|D>PpH=y9P%sRIKtokJC=|eLNAz+a1fTU5PP;X1_S1+iq)83S{TBwKbII-qiRaaS} zYv_tupfgm)j`v`Cez@HJM(gdZK0}dO?=1@gm-5=*>Pd~=v-G9xG^-%#jlraYE{v!T 
zsc6LCfONOF$6jL@C|uLFeb}27;TXPr7l9##B}GG{KI7Nv?LjaT3&rP$lsT#-N09@p z-~>Zr6q$%W7tAXBd?_i4l7X7eBjWx(l*?4qEtqrbdlylaqJ3$UP?Iq<17R%uwb;&t zn5Nikhg`5%me15wy*7jh&jN#WJBW)OFNxWaMKo*Q-8#!FEyQAGaz-!)vS&h!dpYyz5 zAFqT^fhfzKBAg#gj*giD-dq_#gteWTEf2CoisRI7eszRVoQbjKvd4S1fw#j+L{;JV>*rclx2=2+qT_ZIO;71u_)DK1=0MAMKdAH z#WO|k8JGk+l?|O1Z5`L1cE;5c&hxzTiN}P`xEsQ~hp-SPaP)_)yXZ+rJc_~|7jf2d z^x#s{g5CvKj2MK8CzCo)hZn=f%*|jW@e@h-5dkW%sb5=G;;|t(FShovFe;TnChY)I2gj zGi*IWQ}4kyI&vXXW^3+@+!L!rH^+5sXjTw7uKTtKqGj#sIhziG~G-3=q%DM*8KBRO<;mq>RDu6umm^}XM=t-t&kX0AHpIQJuH#P3RCWn;g8Ai#%} z7)9PQnx^0C)#;SxFBJNy0yX!_d#?*w##RbL=YkmS$S``380oi#c!*;9o-G@ZxA+uB zC40JP+Gqdm!|dD|I!X>Id$zU!?qzF^R=mcwZG09LYJ%wAE$T7hyFW3%`?pr3+JQF{ z@`eLYaD^Oqm>z~fj5>VXeGHXery)5*K9uaDo-zyKva!U8t#Z?&RenaMua|4cf7_l5fAj5o49y9Sn^Uh*E>hxLT$xmi z>!1C!vV45*hRRZyP5KJ0Tr6Rvw-w<>#DZxs8g7=XWaNVKJ0^<*1W6ZnkXRHn^S=8K zLw1nb8FfYOLdeUpv$8W^(6Z?Dw3|m=2u5>b(j!J#zf@ZpH*_mGM?*YeeV|*yluWyx zosqioX37aJ`1|t=jx(RPYMeS6A}V{7`h|LrO=P%d$X|m>%`XlBbIPW=hMYXxJt-7O z+G?!0f39$L5@P8b-O{PD&Y;JfTEGR`#G{9`?(3grTY_+0X$iafg){XBBz-BtW{*P&{UPx2JPh?P6MhdqS`DgUs zSMVO!5v*u5lVRK6Vt6WfmNsDpEhRR(ZvCu!v#G)8;47WqHT(3CWow^Xj|tz?|K!Z4 z^Xx)EQ8pj|R;dm67-^G^^!RhNe%8BA<~?&2YwBvdM@j}nK?!?9u zZ#*S!84kjjiuyQJ4qi|syH777RWPOYc@$w5BmNvfDo=qwqsR*`))hP0u#qKRa=zfK zylT?9{8Gg@?Y&S_wd~4lC*R&XJyMs0GK~wG&xSV$xaEJs&TcVH#M}u*kG5aK#1V5R z$uqna6%@z1`w7>oR#y?0K|xc~a7y@&qy5VPsmn_a;?R6E#|{n9e33mNq)$zR%9PTa zRqyr^S*}|>#|D)j6cuzU@(amtD}wlM8&VDhGKbt7RC$W-z?4MP9aT%hR^MLX8wf^b5>qx za1$Sh&cxVX0n-2TZW*2>0{h+7E=H1rH!kDI&z$J{4nl%Y)>w}^77k}?hH$U>kk(R? 
z30gm-3zGYy!*#P#O12iYZ}92PT^uA(AoO{;tBi^A@-L^X*Ak)E zJ3%S3vhu6K6K7~4XuX1y0Z919$jXH=-|OQ@1oiIKjg}~;|4vzi?bjYH;C`+E025N` zDo8zEqkN2;WDZVG+f9Y+v} z@8FO`L`)XER%d&@9?sn?7PqNa4-?F9zd_uLD#oX%t1r*GG_zqIZ>>WN{|ut&0<)t+%7fcX|!Mfl$A-Vn@~e&MbN__ zn93pTOVc332DW*x8Gj7l>0$>n*A`@13sBM!zO)E!zLqFlZKTQ}j6;7ofTrlXi{>|u zs#rhx*!5yA?jEKI4J&=w%hn7wCx;wfrzkI?$V-$(tl%|z?*H=C^*w$wPx^B?MGRYR z-P|KSG<%_((ov2@@*T~z`pyN$Iu zIL3JihS)xbTK#GtYWLotZifpPnUUy}#aJs-;Mv)x)gfB~@)jYgSxexQC?tRh5ZQu^ zL*W>Wq$p(e+iwE4MTSiAA;U7@U}EWxK|dg&v==I&aDNF>h2KQCFjGFXIp93^Vsr(> z4w2=r=0Eog&k=ZE%u_B^0!kWQeS6^Vs?7KFZAl4(A6kD^kyE7^7dZZ2j=3!@NBCYd zO$T`dhTEi+xbh|Y^TJhXJVkknBr|9mEEF9(vOU@lGz|_nf0?K%xQ5=fsj*xqwDfc- zYk;ykVp@XhUqY<1zx_58uUqOO*_UvoiG!L^@rIj3mn_6Dtxcf4B1JajbWXbuz>e)3 zON9jNUVCH$K|+XgZd|wdIvT>aIZF^vU(DX@d!G5ri)F^OY;EAEdVtj$1Y5*;{`vuG z>10FGKdi_RZ!mSA8|ZW)gsrOs4dmG#lFX+xT4-M%Zh@Ik{-JX9j1t_cEo4mH|VaH5kl4lka4Ds)7T@`d}r-29sRSXmrOWM|6uNKQZw*Ob}Y}XsU zf;D*&U%b%O4B&!8Wa6{rByVA-nlJ*Q2v;%nIQR3GAaZA=9TH+KN%gzFPC+p-?}p#V z6tB6TR=pe-166c6=5hIi756)63pM3{6F96OA8Y1?u3aRVE^U*}%;|ED*E@t|0;FKs zZ6njrVN$ALF3Ma!+SFM5vZN+obE^y=d2SP;C1;&mTZxU+R=$Y?gh>RLle@&BItR(fR-AHnMfT#95L#5g*Rk;X^&?x)Molvr=d zgNFz%2#rd4`PV%9UwFGWrWO0?68Tk<+xo(Du*kol4?TMM@-wts^i1Ca74Bhg(HG#{ z>#`(>aUsqN2IxpGEd(0*PDJqcUO6}?Digno`$i!ywW@#xX9)M%ViTeW>{M!D#1) z`LsL69uv`i4gCSGxA&12>CNF=`;NtjvGyMw{|WLs%4|Gg1hyw)7?@s^5!5J>Vw&+y z2S_>s5~*~Ls7DC%XdTlK!%fM43C%DHDqBv*jI^G#OE%d~f3=x)^JAH(vQ}6B9_=As zx;BvBw|6>7T_X$Rw(;KHD}oQd3-CujR8b6H`5A*QMp(F&`?onx3(6aTJlQ3c`$UN5 z9xgOD;CrCxb`N@)%>UfO)mJ;sSwS08OD}tB<7T_s#lCKmMGP>z40|2yG6e!gZ%9(7 z_FHGhS#i@h%WQsy735SF`$~=1OQIxe6e;^k-tI%MKNCpwzn~8jyxg_=43;&7;gi_h znf6~4sVbkO-tNTGIzAKIUP+;aGyZ^rl0GWP72C4OQ^gUDT|%qA_%U7pkHZ$RmF`V4 zfuMcN@1Mi2QA`ct$7?&7XEPGhQ>42u)OKFBQLVFSsl-2j^>qg>kBSxPX5v9<0YiPv z;loQ_JDQIgpcF6bc%*SzsL*BP4#^H)?T6pvq&i3CjJATsYOvfu?_tsO>}E%k9MBJY z#P?^XK7~RPaL&#(ml;U#8Wu54I2~;^S$A@Yu&(-B;Yz^ebFJxz|8_&W8s|7W3y?h> z4RB=MmJRfE{Cg=wLKQwZ$t+;Cw%Dy?_8*Nv0NIUv2FpU^jF!kKMd) 
zgLO6NB-+jbVudReWp+L&?Od>p9SnJQNnEi#Wry%lZr-JuSu!J{48JhD`83RjEa%K{ zj~#amNUq_>a$HoSW#IjW2`9O;zbeS+xo?JS=YLkP z+bh7k%$@~?fwN~Yvv(st$6g|$PK9XS{WgXO+Xs&LxA_OJ1I*YL#0x}@(igNy9U^&Tq#9olqQbB|8F_aI zb;Xql*zs1VR;ZRJk0%QIt>MK4nQRPtJ%a?U0Oo>Pf5xaiwIKg|IWV55WP@z7BovAe z#zaJHLfNxE^=1$$J5nxAN955*95rxNPEa%xklN&I6xvF)|~sM{vp*(+>} zbs7rjda1rDYhRrrDOpa2v3^I&>op9yBAYA58OzR?-@~rDPKd#3#;Pom!Y~+Z$123q z0A!hZZ%1QrnhT2=ZR@8L1*Pt75KldOjW~A2a^(5)ipN1l$rMw%{ieWmBOcw>$@&0# zKv^3d>0JO2D%;%=+C8Xq$WF1!WnpGOc=0IqfFU@3uNZ8`M1JynHhuEsT&-9r)fvn9 z7{<@o#Y`3IwtJ^VG)JIZ_juVLb&V2%Xk}FAW*l{QjM(fARGE9t;>o=_FP|k1e_nYK z&xpkavZm;`YG}L9Wa}kE`viz_GWKvOS`O86(o0Tnylo!;BN#pw746dlI)*DZ1p66c zz&+r9t08xov4U}^3_>D(Sx(OqSw??rM%a2$L~cgoKWr56T*Ga)cp%s0?2#E|(`mD` z>SG%SQS~`61?_B0l#K}OpqEcHacvt-5PW`jnYWvxT!$Rjmv#GTj!?2vC?`&jdWla) ziyLS1tT6#@G$T&VAPE`vlYQHL;$947Oe85{2wb z=s7x`@xqp2D4{9?m_dr#YP`h4$0~^b5ig-of1P|oH6vjxXKqGrrv&%#_|(B9+0S?R z`+EMVWAoSew?{Obk11^fEg|Pmsu1R~kZY}j)pC#{6B+%l_!cZ4Q!L>tFY}f3Z%-1M z0yCrF_Z9hO*NIArh$~V0~U|p+NlK$P@6o{kbz)!(GxQE!>}5<69p`t)c)lC zn%KYg@LGvRq$(`y3xz=w6#x=L+)992L#?0)AQp*)J52=OJ^bWndF?=*hcI~*rWPQ~ z1i@v$W#rt6!HfoqyNw6bIfw3@XEmJb-k0|qNl$=lC&(Hov7NhL>wghnA`nOc}_?*b;_G*Y@?C6`_EW<=JS=FBM;C{t!{ZRLl~e z_#ttC+{^kZwic6IY3JAes>gU&qVpb~F9|J$l5{m`M93b=ZWLiME%&*(q?O~{*=Ewt zQW!Bs*6%ZXjoRy=yXZFpPgm1MWDK5&EhXOH*|}z;F;(0A&c-gLcbdmU5nIw2DYq|C z3`RIlK^|yP%XZ65ez2AmP1WbGBgK4n>x}`@h#$g-Z)rUo9-U+jKB}3zyMJOKra3ba zYd3sgKJG2roJKs4-X+PlZn#V9OR{|u#T&1A2BQzuY=+YElHa>e+y|gNa4ePCy%Z)t ztdiBrt}MhnylvmBrcmL~Imo{;;ed}MP^zQ<^;?JBW#-d4Bs(*|ZEzg$vDA{oAfJRX z6fR{W=oh_udQq4#@M;i6@Zb-pV^sVt(=p~gno_v~8K!l(H%oIoaKp?Ye|peblUWN- z@qHoxmQ$8vJh!T@xs?_YE=;5cxTq%j0z+_Y9LfW%a8G|HNX`c#k*cMs!*-9S<^atM78^v z&BJ&5xjG!-(S)8%Wnr{Y_L@3%>q)%>UP(9lj#M{j5A3Epsb-889B~fatQ(?B->ufV zg4f~QrRHljuOqzDG$dMUL#+8^GK$g9qU%JY3`2hsA*zlNjfigr2I!2`4Ig_>CjS&G zd|OVD+eXJzD}C6yT5PlapEz=ThEoWJT78BI0Uy0k8XO*uk=*88J@tpzQ+HF)w<~|A z3+uAn6NF)!?Oim)+YxC!PiWOKiRjA2kn9#5G(MV_T+|F8;W#2nHxxqR%7fpW2Y zIAEjD@W#5#Gt+Gn3MgbbKpGc4v;&}k98A@TDXtfTNUtf@XEZLOM^Lp+wFQ1v>839q 
z^SfXSzLteLPZ+Be;$H;(OkuZy>wXp2paM2kD{TFpmL1PYrdsDsOc&T@`YDhdkE`XLMJ*U20wQ1HR2c{c~9akj1pW_yIb0Lk2)Lox}b(j0mPwFWfs}*s)l8 zs8+Q(0l4B|(j(Yd)gRuui;|v(iToRM&;4?`AHZ%N`G?vu9n!2=ZsuA6kA^c0_zWmg zOh>;|!i2$iFhX-=YIh2Z5Nq)ruQ~`QL@bMiRFpTM!PC&i`xvyaD~Iv`bQNByma4XA zl2KV&?60yvb0;IIPP^({1y3N#d>j5R^JiYH`I^wY)(T`CBn1Sr56pS3uk0l!0MKDn-aKB(OWZ$X@e24on_6X{K034ew3T!r2 zd&b(|?8_p|Kg=IfOHFq`y)Im8@$A9c%!o~rK&}Ut>86GMcKzVRVX^Jw2cXXeLauDv zujjr;3M+b0cgGC4aI~M~gt|ltLxge*4vCA*D$eV@4 zKk@7VT3jB&Um5>ftd`Od9%% z+#LT_bWEGU`5#f8mE(W)rViMVcsTM0>sTc}~ z62D!&o|&!TbVIlgYM>5vTLhR`M7L3615ppO28t;7{S(t!Tns|Dz&b_0q+LYPjnL2(bTn8pJVh%8pineS6R z;j7ymT!8CLFiijV3BwdXhCY8@^?9g*Dg1p4D*$+QCS0-htO zY@2|XCa(AyP=6(5C+R!XO{+=TK281eI`U=ugbd^?!J^0@M^)5+v(>8;umNO;niLk_ z^QcXLcY4qff9wJRrY14f`1Js)7DyV47)UXk@paohK1!D3XPu7L(VYauiGFG~BoUEB znx~3}X&eJErK125KEUOODpOH)uDuXaO^W9hW}P|#$ovs4bHXJsg3}c@FkIS6?h6Py zALMQ7?Fy}dCL}v6H?IF8KGwXcHYosc-EEA`Qflm=*e0n>f~iS4)jRnb_^IMnwm`6z zYy4j|`TtL$-12EsR~~j#p;M!3ofiOk7$La+j1nRJr$iXWvD0|Mzr}Jxu~I_jj1{f$ ze&WSeFtosD{j{w!UJo6bQ#kK8x zx&g4!%QgH|z`wo&b1`D7Bx?bhmw+%_8irLu3n~Qu?eH5-K*PRb%V>q2;z8cbcytL! 
z6@!6+S3xvV?s9?rKq1bRvl~%~nj=F|7^T)7(1hjh=~1WKn7?Gd_5vmw(R#u#tKMTP z(>HU^Ydg(qe`_NkSH+&=70nKyXk!#r0YA;iIREOxb!5ys7%Yekug|~a`Z`}A&xk0l zsbS4jCAbcpS9h{eA%G7^tPZ2sjqyK<)wCTOm9F_{4o79{d3C6j8aBD?pOs>x6$<*;G% z9r3nlBxS_9Wx+R=2B~gnm;IHqw~sFgvg>YaaT!?#Ak@rPY;>?%}=K zxX0h*{nqd(FU$7Y!AB^BnsjQbsI%yF^GQo({@9xwP;JCLlPt6Y6IS0I4@`&F0rwYR zI@!vh)_v<;Svd|S?1}r1Z$1AfxlP??#WcgF8}L&kXMt%bg{JVCFKsXOt9?$kZ{a*U zH$IPFEG5mNU^n;uj)1AtwFR7kxCm$w#=uRzP6N#A=;BsvKi18YAlyxcx!4pBF#Rvj z1heKpo=J%P&xH#lDpM_A6>-k1Z|9#dQBuudqQF!DsaslzudcASy#DO!{yQ)1U1M$( z%=r7)v%N3iQ`MapMot3$kg1N+IrCXQq##b1{|L}3B{}(;0q^7{u#mp99LajV_x1n` zdmtpqTm6O!^LI~PkNa+tj>%r}o!A!Eo<(t%S~D3t;Gc98Zt=7A27o_c!Z_PHf-p(r z86dS2?tclN$b9AmaMeR$IYxDExWVvMA584Us>8w6c8&pHvwrs(ZlXBB#wMv_c|pEi zOF~k?Hf#65#m%V=+?()iIY50W4f$Q2;!gw#BbGSi-6cRBP9}VrHeO9C%W(Tq+SL0D zW^aT7sM&o-E&*0MA@J_1wZ5rm)$rfP&AF4@%{<;5|=MjtH!G%xbYHULJwSkkl#Jgr7g zkAUQ#NDgVrd|GvI9;WN-;ycJl+qD3mdsLVlF>~v1nQ;6?XPoyAwX3{|{Q5bdED}5u z)N`|}A5@#+`3wISkaSad@5}2&TKs6s_45QyBSKp}EQomTS;7F*f!gX&>cE62+I7RW z{*t+?3s-vG3mqYHjo_$@)l*-dxqr@88>*q75cPXhEFN?pz=g);(Z~|(z4vYKrP7b! 
ztpY|sFn#(vV3@685S$SVi2Uk7a*&iN=>vP9g#W64qO}cB@V1a=KOu#|u);UHLK@EH zNs_kIJ^_kee-FNNk{l|(HCh~liPhI}%lZET=+SAt*O(%I?7uy4aE0IIY7n8}b{DUW0Rb=E%zh>>U@7BKCXG}8)uNTbp|Qd~eu6^<$}|1S#w)ylp4E0-jiH2JwdXAqrfUOY%%1S7Q{@Vycs zcom26PBZLgGF@iD(P_;iyB;QYYoa2UQ#Xn!-7Bx|etm8<@7JdGD%GK@-9#-vG+Oxi zZu>Gnymt~H(W^=kY=^v}+C~BUaTwzwl=|JE&}L%EaZ|+l0HQtR)2TaNn0+t8#Mk_` z=ciwzFT&}fLGPegl)2)ZTR5Eg=A54#qO?}HA{hF8qS!1sZ70q1!C<0W9qc#6oc$q0 zTuqzI@YfU?uh=MuyN)>iKB#s&?RhEQ>O8DIp7`8m+5PG`sI0tpc5#_6sp<&y5BhCx z*_n6>lO44^vK(KxT23721%2Z@$)8-_ZM=(ewYhRC#56*lX?$=#Ape zlv!MZ3eLUq=KLP}swc+_p#&6BnDFO78}St8F}V6IHb=H2R%R3)Y)rYIu|yvHP^Nob z>cO^rr8xOvzpeKjtt!ZEf_p`Bhdscx8--TIaoJ}G0sYCa(bvxH_aOy(6qKxvWn)cr zWHNaoHdb?H<_m6^U{Aa}4)Pz^ZI;7MoR@?Xw4wO;2mtN1SbtumjeASWnGpsCMR4o{ zQT)+2`{5XW6|p=MhbF$m-CmsNI(;ek>~iY#rwk|O_JAvE{#vJfZBfBcT$cwW!D@#r zWbdBYty$h|3aq=G`sg^PVYch5!+la_kZKM>o4Qh9gYOj(`079FT>3PVd)t|=+j#j2 zyE%UAGvh|9^tdA9Fc8w^JYv>4C;^eAAogG(=r&XC#Fy)boYRDxvi78?uj1+cL1(|3dDBA$K^4=j5pFZsG3AkCymsh=M>Z{kW+00EtDN9fdJTyS)SmoC`ds~a~vp39J^32+zp)wqli<9hi; z8}R|He_~FQp@ezV)}0eQe)5H(eViz75Uw%)ohVuWW}xelK^Pgg(o3Rf%4{kWqN{d* zJAYzcl7D@%Fd)o5P(f|o{BdvGQOUv~UrI6kt87>9&t4&mukRqk?ZJ%Ox{lG(6$sIT zNEIxyIGm-sB zW+`h(*X7${gvc#HMeS$?j~l1rxlGViVZw(kR(WqVWK4nuq^;}%A-ZFB`)Ndjnh$8Q zlUqT~#mwehgU^(=bSLkhfz2Xdb@!ER!7*;~N=jbgE|00N7YT=JOlEQoNxJ>tpKnQB!j_$O0sPBy;0( zMW^lP2g-@xn=3O!Y%x~mQuZpudqa+fA7UK(@i zduauND}*${hnY?}lSR_%^*_DBeY53wkUbTJ)1jkSv(Ic0D^O#D!NN2$5j^!rF@H8z z!|B)f`ra+}2nstV{nexnB>@yKX{?K-&^y1&5u(UIX?qIT;i)Jh zr;!}<$3e;m{u6($XJp$1A!p~n@$v%4T*%F@f) zCPi@;jPliWcOi=J4=VXOj^TVJW}-vsB{T&HdUdt2)o&FNI_svp62&Sw=b=gn#OkiK zw1g4b7JLcCI{@nj8l*M`U`Tt5Pmp@wB7ntLziDU`ANTl3FC#?+9m&4YkHhTdeV^e< zxP3^p9BIzLE*gf`S1yBRvCC#2VxIh4BY;w@R{rK?b1WidFaK>M_Y04(UjDv_7;m*+|F?Q>D}2V@YhR7lJCb`poTWm{S@3iIRG`Nl@(TNboCdKf}KeyTcUU zam4)YR5z&r*kmI=U@u0i5*yt&SApI-VbPE`r`qd%Y`KJ$-DWKfPt%AE&|I+;G)@_bPi3#3A_WPv|Zbpy+Cjjd_NNz);4EcwicuC)uLOtzf8dP z(l%1kcC~$?7TOx;M%n$SoZ-|j_aP}_(Dg*X5cK7{_Bt1^-)qzubf1Cu~~imdN81xI3AID_3S$-r38*Su>skl 
z+=F>&o{c1#I%rv*D3x6_2Em-B2NW@}Grl38o&!`&Ob6yFrlgG!N7HO^B35K==`j%a zG~>cYlzx*`TC+=cV>788g{sNj6Dhr9lNFW~lk9~*pXewX$~~?m>0Ea@kZ&*B{n%RR zr6x-{s$oHQd8E%?@Fi>O!aLT(6cj*E@1^?uy#~4yDC?k8m|F9kmAdP74eKW> zgzN4XgShvxU{oSeIT{JcB2oFA5PvpXdi$Zt{6(qFdf{H+A4)9FJP3!99|#*eVujnL zDb^W80o8Zsnso(UtDkZyDliXheVJx+dRGqCjw&{{q3?uQ_~TR zmk){;CVWFVXY1CwybH?xR&KIwb2f*dF_JNgvZ#(4u@r7IHSaVv6St$_`KQE73HzJY zlH%!QQMsqksf!Pxg^QON&u)kxqaMvRm0?<2=?C?9OdW)&ykd7B;^!KMCRrfz{F*L> zNnArOQ*K!krUk#4`(9`_H2-BA)g36HafE@6R+Pk2B{JnxW%DW|Hu?wMD<%Ag3KPKO z_*E5Jx3-Z_#j=;FzJD5~8-;yZI;bMX-$$2zS4@RgfNCFCBr79URx5tW{M$XI2-cc; z#H4x}lrJ8d7*=;JsJ~#jQZu)r{|T>(kyo!o`&to?)xb#p2z?HrBs71hucv)-fK7$^= z{k2;*Rqe7&p!mcbByM{a@PQY(Ii{($^`4 zk`Zr9m1oLlo(w&jl4}ZPi%^i0P(cBYp(?Zl(u=^3d6j@E*y=#*46*wpuW{W)28PIc z9?b}&iZYzxy@Ou$Fz(n znzT5HR|RNbxp4h3YdP5As>(~rq5aE}%jspRt>DRZ8xZ>S&U2{Wj;TFd>79DcxBjre z0ULPONkvF97loxRL_fVpTXrsYyprGfOrAy+Fp{_*P;_-}=k&Mr+n09k_nSF)rpK2gw|fdhTj_6dTqJF0g6G`iIAwZyl(pFke!%tzK4{)e zaVH(x?pszJht2vzdgB3kp|1Oyt!6wPYhplWKOJg9Z$HC|GZKXHXTmyZ@#VM4(N}+Y ze*>JIXc=Cnh+gij}L!_D1k}3vc#)rNyYgM(( zU^jctO(bsdl@2vUtTgx}C{*ZE6TRqs2MPa}Gv9!I=n-ffxDgq+|61+kcacdhWW|RU zX}s}&DQ#&Sw|6vkNNVC#mT5DGgw6G^XbV#Xjiz32=}1<{T0^`viLpi%>iPse#LH3Z zHl)=+qV`D&Z64YmAeruQA~mc1vvloKDqw+2~s?_|?S9Kg(-=eWwHXi<)o@ToAZJYv!4M zCNm}wBd1S1Fi*Q!^n6jo-1p29_zhdqC!FRgpJOY5o3Gc3pO~VSf+JSmvA2%H7Ibx$ z#M>Al+8^6o8ZKrbLK^qeCut+x!fE8IoP~L~b}4nkAtDruZAvgU z*q$rSJ@(fS07A<~5rdLjs}E(fY)YarCIVV^;=q>yXa3ek6EmkjV5_|3{5_(rpd2e1 zmXOzGeO}Bvmd0zaaNAL)EqsL?CQZ@HpRUe$ns|?Pf}c@__I^j&tJ<6b;@Zgnsll8j zEyioYGMYPf@{MZq!9ie=0K%1FU8Al`3AN+jKjVJ|t%P_iDq}OCHp@J6)L(uXmEMAJ zutAQ?4F=d6E|QfOPm^)JiK7epW>)~XGR5ttOsir7Wnx#KNYZw>!{5+ipQoXj%9Uz| zRVQ#C7p!Iceh+5u;X9Y=9nreb*q6~(KLz)K#74V4V=FScVgDASHQ1Rl(_S1t&2Vgh z(+phpCckgSVl}JcPt2W%zz)3}IL)B_46q`!_Jc~SFA#^hBoO>cEJZNoic??n?F=ck zxZ)?I7qA8qqI(|OTnY22jp@9*DBew^OA&ff%V9Yo<`~@RAv>9?Y2ppA-HsR87K{d|J~wKpcYM6(cnu=_@ye?PE6;)0aYS=z z29kLog@>_pV(5EL^g||N+@Lu1hopu4j&!TfQQg?Zp{`5hjV)F3sY9IC$nsUOZ9<;9 
z8X1eU*P&_(Kj@2VkOeB1?T_XP$GCs1Z*WUEYH{%DK0s)P4ib(98NTMWQFqv?71!&L z4n-?ntfPC~5c(}b02v-{dc*y{<^MxvKlwE{Jh?fW7XsGKAJrCe>y^PPT6a?=E~DE* zApT`S`6ewD9=APVsR+2n%h;Wr4%d`}Pm^v~BUV~EqS%phTHG3BHMn_RZPP)BVk9fq zE?w?$b4H&Q{PBKf#(_IvTVSPWQS%StAWE@_w=z4nBHyAczA}Z)a>~@vZfqd*+n-c} zyi=*Wjywtp)|Vk*mJ6T^b?5oy(h>hG8${^UEJ(3hIQlq?Ht8QQ5B;?p#0G6(`2j8bR$s zYRonrqeiFsF1>HwAL&@E4&TJ+)~I|w{A{yiq2t-qxqMn!!NX(^pGJiP45ZcHj|N7-__u@91S^O5O{0;%P9} zl11GU%09**1~Z!Nc9E5*8s%UU`CzNfa)ZgJIww+rkos&y8_WwZ_J(*boKt`=G}UP1 z18+8Nuo?1lv#J5T07--lxrcEFO-4^Zw{NQUd^+jar zv=n6*f3@gVrK)Zjt9fVHZe2=UCbLS(1;9rlL+g790BYx-a~o znh)Q2H3}Uk?D|~A+gi-tbqsvFuMo5^@PNBkwK?%qsGUErj?g|*G1Pd^mpu7Hu*pHKj}R>? z1S|896Fy?V!{au$iY1OJ3b#z28POLY^4TEzIw090(}tsQ8cba2$M-v0B5p$5{jK%y z`2%%&V8n>Dx^PI-0BfnV>YgWd?6LEa_UG)d9>uKMQqz)c!AsfssaSnu45)PHg`&q zt>eUQP<(D?-N%%KD2QzEgfsmFUX(;JZb`oW4sg0U7Hj;*hU`-_OQsdQ4oiNuVk`CE zQpv#A;$+H*));*onI_BTVe(zNV7{wzdvtUW>sO9Sld;A)0Rh4|a_+Mzeo$1|2*4k7 z&Lhk+j)Ag2UfTt989fV8P<;jPEQhKjSijlU- zp+dms9u1Eg(|ooWSF>>{5!zs+CYH8NbBhWOlfCcqdU^#CO05gwgS$BU~4pZ~VeUj&0n}4UZ$|0r8 ze&55E5M4OhxI=7~30Z%%PNfqV4Yu(5%ZA=vI|zW$!=Zf$3D(ndA!dE!K%YE=g&rfp zU*YWHjSgVy=rKhWm}$p>py2U$7y*8r{f-JEJ5HV?GD!}64K=j2e|z%n_@3Y%Uo*>2 z>dni^u}evzd&E3H>C*MI#c4lwpxmgq^Y96XO1qG+f<)(_^%fz%xsIt~|E{u{@9^|` z;bTFV!cEto6%^jFevB7Ap?vrlx?yq_lm;*7qCb+W5aisgQZI2f-Bo7$5$b~IwqO{u zuQF}okLE$MA1X5qmv$rRL$yycqjBAb2}+C6j*lVbF}3*ts~+9AEmI(B&Lq?=neTjU z;5_4f2G|5S_3!yk-WC9J9U9fEQOOB3TYYT91{%x^`%-u~m5Qy(50fFH->02p40uf^ zP_ZMzX$EPX{7P+NOg6B7YOa;>_wiyQ{gDgeZnj-*`M|n_i4Y8?$761As(YE1C`wQO zTv3T4mhbjn6pFZzG;0@7w(U_YT-iyMc zlOkaLa3E6()Wxu$aPm(HcR#sgtSKv6F!*YLzMPlYlvkb}+>Duub1g-FX;E#o>?Ajg zlY7%{SGFq}lsKp+t01Rpl;MV677$-#78$_~&g2p+sN4B^3cOZ*Z}fbF*8H$5;>%N6 zeuG~xGoj6cbf0?jj`Kc|et9Q_Lpo}zAj65W&gd<2;d^R|nsaBfrf3tP{Q$T26YFHP zSJIY)AGY+|jTm}tu*E-;?jmV!tvQ>F$DR=V?woRKY6b+XdDlP%>_-qgMGIj2B1@Hk z=ra5nr96Ow=7Wm|i1bJlBX<&PppRq`bQ8|aHp!)(TwgoGjFX8LOZl17vEEwEfs^j2 zh^l)7RlMN+NZ$Q0j$EBvzH<-tNk{BlbaE4ea1ccuH?`Pc*QV;AscBLS!KuV44*cyXb9kd8 
zBxy0_q@Pqbo?LKy{OM5(-2Dh*c<5~<5@esgR|8e#t)af)7bF4;o6+4*{6KRi3iksq zmDfPbt>jl$bJIbdrI;FBb4{anq9h+^um+XtCJMN@=PfwQaSXMZ-=$hn3NIx?xafv)Gj`Jr-x9GZf$iU2!7c+#9fX1jhm+amfM^tg7lGeDOn^*5s_f8Ek zz0`%JoN6tNG@i)QgX?O*#0aOUq2XJ)z2lVoJnc>>Xh?eqjXkl@YRC zT{JA|9+Q7zcmsF@I#k-xgzUX4%{+U`kLWFu!;>PuTE(MXruyrFQBVz^r{<*6DX(79 z<@L4!SJwOJZ+_lzBkjOdaWYVvgT1NOS|L6_>c#qEUIwVp{oiCd3y7}$4Fzz$Oc$Jv zi)?BHr2S;DK42xF{V-u@_@V%nt1}`J6g9N4pJvc)JY2CF3h39bVs7zn_osElH%ACXDMTd>wDQA zVqpBYF&I9J-xSW6+I@mEjI2X(z0ecgQHlPRb*XwPAU78?v^p{Y1)CzdpdfF#B5UBR zN0?kyZ^Di*EW}OO7gbIVmkP^f;GbSB-@T5dE!>5BI4h<1NFFrk{ z;iD~FzG-YaC3z+j#U)nU<&W9k-1~(W)3x#M(*@ki9ME8&PW?iLA}f4FvM~*+6YJ*J z>%t8YF!ay*vw$sHmayAv9o={FNkOi9MpIsEK(~vY0m!j_+R+UsE@H*L9ja4Gmagqh z`L>NE6|aUYpIjfXywS`Ximkqc3>|wX-(6ELjA96KYjT5EG*m#yefyA*^QW*NO<+Yk zGaXx<=-&g$T>gr$0oZH9uL8_917e2F=Exk?OQf5%vFwFpLV=Hih#dyg!oepetEHG+ zz^_@A>?7d$3=;g5v~AT26H+=V8(K5w!aox5-NMaDEw0c)-*%#AJw~ps*@DQBFE%zO zecCnvx!oZfX~HGEUJ`Y>urx%h zTp$`0;Gj402N3b!Z3JC9{%OBiDr1&ChcP-5-X1Z=*bIvW>-_FKkFnb!E21KUNh%55 z+P29qnY*6~dn$Wi9`jBUt*ghjeTue@3BLJEgRHQqyVhq}2H=f(q}_ilm!#uth4=wq zhSB%**0-u@zWoHhk}<0X1|sJLl$KF21GrZWe$sCR3t+c*6zR$aV&68v$Qe8N{rCCw zukH#!3mjhj9ILf@pjPm%`>R|^*2yCB>nYI-0I}6{2L$UC%B|A5*Z~&5Q%D#ZkqHUP zl-?A+dyp>a*Qha+dY;=@HUVUv6}n@4FjLk@ROY7K1wb7Z1x(zm)f=P>-uD9x3yvs2 z^4Q1=aFKp6*>~8*3P1cd1S;I-k=)~YCL#bY?}tW=2Vu%Wg!Sd$X^|)8^AR)QTC4qS zxBaXW=i-j=bs}2um}QFA+C}oKTMt)PZruY9LB+e&1zU$PIVRITCab88rUxi1CxjoP}_ZUIKG4B0@`M|kSJC*V~i z;jfs%jbG)pdKlUHwA@*Nj!D(#`}B}~qYeCsN(3-Xj}u81KwymOd_L@~RIEy&&4|wJ z8|bp#9ZG5sjN5(4m1Mro(?!cxtI*_1W_KIDvQYBUbeNK(-K8{Fzn1N?6(G#rEh9}0pvj}ars2bg<&+^cjHeD4sVdE+ie%2544cS(SAWa_i;O zYp~%$<~d9dW|PR%wu#;+uY2)z$?BpF(6iM>5RUEt} zB~LZJ{t{I0jQ}KqanHHhNPy+F*aw({YY-Y|uh(sgR5_yZ!g``^@S_8tfD)6sH1Y!Q zqqi(Z>8Rp1zuh|DCd?>>6@~>G14qMr?|LX)$Ec)v^%`w;29$ydfsdjU+L!ACBxlbb zNX)gX|D*+ctqTICPX3k$c!mkA=4u6af6Qsc%NGNaS0p^C| z{@%rTP|7{c%<*O88Bji^_}!;?q`j5jpqfnis_inX+9!rYmdNw_z%6&N!8aA9X+Y=% zQgGQu4#X?>3z8?^l4(s7g|B*%?#eZ4O?@eIhy$4a{krnAp^g#g^X^}$-wY@=6Vw8` 
zo&7<2^YLvZNnpG%pn3jf^W&eQ?iBvpdMMN+X%e7-l+paKpav(p57g99kbf$fFF0I52!RZGOm|9P%C!OH;=cgRV$D__gGTHqF#(LCO$IYqA4N` z05o~;AXszItdmy3q9vfXyQ-6Rv~>qve)FkZWOt_H{WnfS{rlNwb2(7cpK3xdAx_{o zy(lIK)Wpm^`9x;%hlxD7wX7GyD1I{2VS#o~Ay9!tWcRxZ^GmJOPrC!OZ@;W>RXu(r zVoyb4+9B%vfG<6CEVzmLqmIb)`$A^x2j47A;HQ8KiE^F1%)V_yi&@IIPsh8>k76Dg zR?I;Bk??t}$T3#~!^5Eq9*Of6ak|J2WYRzCCRTB0EG#AB&2^r2@0FR~joIbPyAkB8 zfUeJ{DG%Qb4=Jso;T~|o+Xs?|hBpyUYU1m3SjCKoaow%w969$2+!{SRNKc1Kz-+>Y zoHYQ@*&8w<1$3TRcJc8%2MjO{fSc3d9B5Dp>CxKBu9bddmet}8NNl@Es$v90Jzw^n z2b&*Ej|S?*|1clB<#YCz_zqtv)V@|I>o;#qZU1hWggRr~t#4MKl#Gv0cQHJ_02!n2Ie=y5@)FC>j|n%ApJ9PK^7viyWc?qy zzB``EKm0qARQApY$H*o#QNpnuh3t{NM~;%q#J^%T`x$pZkuKm8=*9R1!$I_i$l@sk*YnSA#WA1r{pr9o|@9>L7Ub7IILm|~1dkFI#zWJCNgc^;!k?A5VnkB4Y*%{EtGrAFcx zA;j?zz24c+6C7T#5vIiYJ(Q`UO)rh$n5lKxm=w%4tfg4L-)Qz(ab|?^ox`(O+T0Mf zMj<&fJ$pU5MroNh+t2W6V}61uNbDHYG@+}n8v|2e1(8^&aRH^v^-=kNL&x2jPokUH zBSucoM$(tjL!0ArekvlTz!3P<*+3gkw)|Y9A zxyi#*vP#{*sj%=@5oPGp{kjyEho?m*3!Za7Wp-XT6C*ikp{1MO+y$L(ah_;K=WV6| zhq5tYrVRKMku3{Mpv5G^kro+o`54~a4R!P^XtcT2UVA}}(-d9x^3>ico|Pc(m=%th z`~0ybC@;z_7l(~<8<@G}&H3GE57#|)Pey4*b*8RCa?6J-3&~iIM><)FAO!mO8)uAibkGYI?!&RiU!;EnyiRUJx z$M%#%4K{yOlE+5PJ#%npYt{CY75yZ7QX!+Fn{$L8T8{CU=q1cf9J{O|O&`Z$6$DnF zEj>PCO@Eku%5QA(IsD;Gp~#$ZVcLdlf-#}`pn5A~e70l*xS|#DRT8DU^;D#-#uL{i zif?nNj9FxdXIhAuM7;36w<*?}i#5SG27O{$e1wVYqFc=`$KR+Xh_>0`mP(oh8DEY; z${@74?v~f5Z&{u{%&r~o0C!@A#P9LhUb1-FGCta(pn(>ltdCtB<9o6wYhU%b|AJO} zZlQrau^jE9STMB(#7`vhI=P6R9cjm+PvsX{n{NgX>pGJv%H{?yew4RN0KrV~=2O*Y z3Gr5eKO@(!9Q)n*awkwfL2CNcqv=~F_6Q%|z<~1Ox)oY}c;d8vU*R$;Y zG;8(Z@ti?X>0`);AmbU1!C9$vt9)&7sF%y2mC@3RYSX<0O%7YDYUSfc$F89=)3grF zs^vt{nP2nosW~5a%bP3_?tRkCC@OYJaD?)ri;mfwI9APKK_S&z`{1C=7mQ`!zq~J_ zB0UnLEnC|(M3+Z2aS!+T#nqWNGzBwaT?jl z-!-mpj_=Ju=T#oFe-Z!u7`V6M?fbXG44UX~U6%Is7vnunyCdK=`QuuI%@a)ZFf-VF z^A2!#`Jds~Gkfe)^Uctypp)vF9z#T@z0IB+_+6G@yqPVf-VEJt{B%&d0A$ZLy7xl* zhlT9c;0Kc}Q0naa;bHGATC(?D{DLKkk#n0?a*#upmp={0Lmd=2iBN;{I%`Z~p3#MwtyD*hRO z1GI(=SIi$NIqHvPG;*4wf`2wxBuhOiz0JsQgpv=J2UEa~QtqhLIbvV;oM`+c9Dmd8 
zOVP422CLw!AcD$aQ$fYMV}PNql`-G*#k1D3sSq>deEA=$!nZcwD2l9l7f*9NI+ifo zzpSaQG2uKm;UZz(7;}1K_cfqalyAGyiUwrfPt1N6r3JkzkvjIUivbYVm~Vow1Zvhi zLa!la9R%w+cLi48YaAUrU-l+cjQ5}pOb*Sry+=H}|0V50p;63=+bQY9j4AENxbykt z3Jj&)Pc4uDc3kvZZhViH+d-2Fq}WDKP36*{ljuQM0GjL@!~T2P#ZEKt*RITPrVqk} zbZrDm=(sWk-(xlfwGyLni*FC#GSN>ghHTm#R%JVWw!Z4x`9BP4gmX7qd-yunY8xUIx%!?lVR~93X6Cltl^~66 zoxVm38c$3y29{jNO#1^UO3B{8K#R2b;TSb5+wyG6;?)Q5wdrbXg6x^DGNDAgkWTJ2 z%TQcJP}YuB!GD`qbGbVJca`Df1A{B8mW!ne7_}xnvIxQ$#qZ$z7}Bo@B%%rP<5opq-{EyaYg0T}-s`W+fPiM*O zjj9rGR}^>9u?g-4`es%%AJJjh2!O85i%3!^D^?VbKa` zfm@e8#fnWf=NcBCk}(M2lVa&N#m?jg`O$CbALwm8@a)m3{9d+D zz2*V@tvjAtYoaOSukEe-isy&qlpVG+dnRJUS6zN!*l@@YvGUbpAovt}5NG@`Q1R|3 z9=sI6TVu-40A=(}-2suzJGRB+3-qC= zl2s5LxeD_(O@D6cJHO*4N>#ob)1(HG(kJLNXBePvH5a5w4Q&!$z3AC>$UGKCqVR<6 zxZ&XStvPGG^?Fuk zR)-39ebj%+G)pnu^!(_O6+iEs@vXlNO_5LiG+E8}AYhRV-h5EvMB#z#HH25BPDaFk zeM(67E2BH%^gQtCNvke41Fl0TF^UE$XhY{ReS9(}P_-S~Mp|Zf3;P<#13HG8@bTcH zAc_tDT$rz zJTAEWZUb^I)F6}(&k=`I&(q~&HZBi4OWbG3qd`}H2{V3X|N4<&*2$taC{C-NUhgS{ zqffYx@;;5!1)=~Hg(A-G2YJuxpl+8Bev}TX+gC1jKh4~O`ky4yKisB=Y9zeb9g9G* z-Ut*$-_{!=zD3G^QM`v++~1nzhWGxEn8KVKzf$o_sULvKctu68A~MV0h+JfF@t)jX zRakW>4RVy(Haw?-yaro$`6_}p?BePh-A~X-k)r_mdNaoh4%{{Jdx&wtu37P~P(Zak zaO=|0&h~0}NYX8qm_Ar6n{r7-B6p4gEmbYj31pIQlM=~{Wo6YNj_;lUN8Q}dTek^SPf{CWaT(-K=BXRRn47Yhn}%M@($@| zq+3FHQqK@7@wy2jhtbu?iBd0RwnM5qmS^LFsp#leADXK7xE*V*5YXxz;l2}U`@KS)0;1)%SPs9`8Nhk(eW_=*5rDN-3bb4bgety+t?K^oX0V_%8+R%LFd_tr4B#xZQuMRUH@h zEro4>c*lwbnCh~kgNH{8KlyzwkHXUUncAzO_BGCchP{d9f=%FSn(^bV7Qv0)9-Po7 z2MW~#THG@wmox(mOuwM;ZX+*9fJG8wrE5{s*Y@7CAVXS^g)%e~8$sObuT~Im*)3?Z zDheym0PGdLQ2roYV0jU5M?kh8F~UKqRe7*<)NvpGT{Q^}`m!D;zQoW~*_qtOYjSA! zVuk=ihgFJxXed6(5lQdv{IAn_`n_nwcqUxaJCnI@Nk!X}w`Oj?iC)u+Ne7hUhxYuJ zRbigD%7a{`pCK5AeC$SWHCkUYK$4}H4?-qQvkRBNczN0_s$n@H1=%({f@SuhQs%9a z^h}khcIJE2y3Q@iGYj%}p*;bQf!HHbA}od(D^y+RV5Yb~a_Pg?*jj75C6~4D24{Dj z;Jy$MAE1H23+KNlG9Qre@pdP{Hbo9$P)D-TgaTT`OzoV!dO#Q@jPA`D0u_aBEbX%5 zND~=4c=KLfGW&bphFdABo0Ia{$Kx#zYbQOD7IS(%56usyY_xpC3{T$ozc0?8+Il{? 
zLJX5+<0?OjBv>rsKMEcfk$xFEZY&CJ%=ZX~I*Dg@?c4@b%G@V;n)9uUcJ6L-9X4}X z+XzTqH378v)lBdi^vLpieV*oziIfA^;=M?ZnT7h-oLDvEDhLq8)lVlzoL(L1QhYL- zwYuxu-~(EF-8a3pyA?Le1;aXWBiK5~%Syavox3RB)h*h_8#(YaX&=zj{}fwLUKCDJ zKGmRDq?YU2ZFHT4I#w^K{EAq++HJa6_B=W6Jg!x@E#oZ)sjx*EP=@$X{_;5E086hy zq-1a-__f0o{|m1uWkL2O%pecz4F4&0WL3K)VG*_8r^OqG4+l4Icx_hbn&qlC5^Ry6 zq;c)Jkz1A*McIn{*_17(vQ=Ir#f6&URt*mixf$R}DbV5DwWy1(#?UxSzH8I6rIUCB zyNYffc?n99)<2bI)Y+~kgcxjIv|F=u<1gfgGhLW`gdgS}w#9>|beGG1yygi>#dQ}b=vUlOcb+Ci z4qtVEWbI2-Kwoxa<%X3z?A;`s&cHvujwASWf$AksD>p8JHhCK$0PyROPS;og80_aH z?=&$|t+*!ov+>k1S=Lt^NBME%=;xWiX+v!E+l$V7@eFP`V}vl8)LW{wkUGMd6w8BpDun)FAJkxdn#mLY zxUV;ft39J{@KLYcT_+hdVcLV4L&uu8ceH7TJTrT9ZxwS4K8&d(hG?%nWKbbCZjhe8 zW?(iPxgzOPVEpXT3o8F7k*8ACJM{AbVozcu_kB~;jBBoVhgHW=I1odqc(hWr2R(#N z6Z+J?W`%JHCWJIy?1Z7^;y|uhnDZp)gHU}n0~_NHHq)gHgOZ%KvGTnxuU;wn0X&8AS&2w2bB#eK|(3K}2oSzboD3Z5s)(X@>F!hzpq!C$-yMG3|5FfjYAMDw-ba ziLsJqSh5y*{biOVwo1V1l~oaq7UrUmYWKb$%bygc?QdCEhuhR)z{O6j!ohjoQr7L6 zJILsyJLZ?o-v`E=h|nM-q|nkAOe>jeM>H%tzA^y2hS*#(5A~+Ub6~iVT6gV`rm8J`po&fZ2N() zCA?v31W`3(pfBrMRAhv2WThHyJsGs}S?r8Lo<2O}Or_X;bAw0;UE5u-uVjRYHR_ztoL`r}sJ583hi@I_K3| zbT&n&3m-o2|CL@NoN4w@gJE-l0E#Q0JLKe5_N}kiZt`o-OlF@Fl8H@zk~>aaWi&|l zl7A;MJmA9aFaSy+Zl09|ybM!4+aV`uIFP#S)U`3f8HsPUd=#h zNU}6Jz{uFASwFs{hc?N~a~L#(0!#YMW-0(*jhbZ>l(W4I9LU9cmr|mn)Ud8!IurZC z@zV?w7$peXXmZObqp zf);>(3E;sXV)^+5<|!NX=FUu2#V0kdP4W37)sP?2CV}z*WjZ?;{gaPit6q$5?R8#6 zE$?(b_v4Z@$xw?`_sH?OO`)oN$!U%t1C6a%vq8-{VyScowFTaYrO=Xq;-d^yh7Ei0F4GX94M>&(eM}Z*n+76_Z2` zdGjZ>>ib>THN;`HnP!iNA9d;E{}wC9t@16wvZ$pZTIH)m$3DjfZ2l>DGC+9;ER)SF zG42>9Hm>usA$mJ?Z)Uox^J33)r#&xBx+4DT?Y;2$TPbD7pd~zY-fE1Xp8Ta1$(BoC z`DFhVTbXCS-POK*MmLfA5LvWo{uMz5g%KqN@5@5o9%z({?}En~zXQ6`vsmPyjgD-d zFHY@qS#?1-rVGa-ZMq6nh$CB8F{$Dqlg3RA5d=s*|M(yQI7A(n+XXzUTh^;tu4jK$ zi$bZ|R@1UKSK@yACGa(YXXcHRnB?e*piSzSjiPNlkAj3flU!WPqrrTOmR>)F94uv^ z4R;x@F~J7ap&9~&7pC^F^^2j}*;=JNd!rhWRp$J088kXok7!NR^SQ{ib8q2r(A=G4 zvE4D{qUz6JcdlCp`TYfX0>O{P0lX)wpy@O6a@Vi#PQS*daAI5nt>~gmLZ6ZBEhQfL 
zUQc=Ai@38FpI2_55QuX8l1qxj-?xy7hBq(qICU?LGfW|tGrn`rwCVYLa{8(MEIg$4 z4oihd!Qve#t^%9^ zMjwl^brp%=t~uqFCN>99TNrv!Qy1W!wY4NMm!^?q)6a^&rO@m zSe@~mE%4pRWjw9l%n7)%W7|iytNQHoMTQiIcMA+P7K^smON9@@9QM>2>Co?$udkcm z3Nz3O=HHDJN`3p;yYJ_XD%>=>_I$}5eH`(UP5Fm6-;Ja^G+C(F_R6CPiISPRPmV;# zDTiMqyq!%J=p?y)>o97k1e2aT(yObL^SeWfe14OCt6@wxWo!u7dJzKWSj3b{qmOt* z=SeLnScSi>N zA_QdNu~_WNDM-nL)StNbLQ#y->3C$NOk)<&#O>3MO7PX<2&(B(rh)wSbh4j_k7p;q zFgWj2y31V(0-OX$$gs^teCvvho;{uFh;2|oT)uO^Qf8KU)JEL_9bTWELF~^xV)dD5 zs>G|e#+u<_pndaJ#E;Mg=aZB!3ARHl{8gsp!8^>pEcX~HAffQ}@%;H$&51nDLi6+c`JxsCD)v96*}ILZwM46t9`?{q zG~EEXx2fp$eX-okA%=j9@wh3h7r0OYD%EcW{N9+#z)VIT1?T%H?&7YEr0-dO>)W>t zamWI5v&=MT`@x9hPsq`D4?)uAYdQV;?g1`8&>KOCIAXhS@u6OKo{FYkXdB>CA= z2gVz^Dx8J?UoLLH|k74HH%BaU4V%omgxECiyeR~ zquo=Dad00|a*a4I*4kN&Zdx<;&KwJ&QQBrp=MQUK8BTeByoBqa^%~+P)T1^R+Qszf zi76^&rY#4Fm;x!YLdk|p;(fwIk~p-tR$gU>=W~$?Pu(sXUt{I-R20!~sWEz?d$HPq zy4)ikRDP|M0)%ckKeW=g9(AkFzWt|KNkr(TqlKB6c$!rOfz}KQ3yDcA+i;G}RO8-u z{wa+5?gQ?SAQq_wIm4k**?7q`V@Nq1gvivc5fa46QJ}2i!V6qxInS&$WbSo#Crj@!pp?nA%F)q8oT`-B{R2Ek6fN;OxREf^i0ga2%&I#Oc#qNb6i z$KXor4@BhhJ3XIihIq=cG4jBY9#b-8PxAyOZ3T*1-QscZ4DAXtb&5DH)fYA$xL-(8 zEv?tdu8gp~l$vh@9WCn`RMP12JpF?aQP#AVD)ur-?qqDio}xp%7F}Yc2G-%U%~K6i z!EE>1#YL7Yx@MGDdsk(i)zm!Y)3h317WcD)S%#dv`B~zl3{`HVw@mc%1C$IYVlvoJqAkMFKE0B^1$@Gu+RJysM}T4^>3^vJea$3qm^%bI zqlCg+V9LrH!-@G$d~lKxve7D~=TdYCqG`)hyo*xRVHN2&Pr^cKNiSMXU1;aC&6lP1 zsdTy$J^AcyxKMAp$f680p)?&&r*_j`qS!G;nWACNw~>IVscM<23wCB;CJ^97eZ^71 zT8MXuT4rJS0mKT|=5a0)BzfgRtKUAeyS2CTd~y8Mc6xM^P1tj?(p)N0g51Z*4a1%A z!vphJcaR!f7g_=wS5^apj_kpmv<@6fv)lj>ho)}zcc)sq#P?L{6TaHOquI!Ehh~}M z1p26&+t)90$cP6%De7T7R24z3O)$}Ubz`bWrmUBshKUkhJsMeY!pa+L{^X(!u+w4b z@~AbJ*=3A(s^~Fjm313$pa)bOH|<4U?Q>jJVel`|8bC4BDb>o1zTTQ2y~(+qeMIaO zdsw6x5Y zk@m3~X2WI`tuJuL$X-lh^~G<%Wg-sI^|432I=z$vakDTh zFL*Xlf1N-SUgIfH;6Q8{cik<3A*>`FY;+DC1{?6J=7`cfn!o7MsOUq>qX4hmVLF3D z(9bT_!&B?>?+qUN7AZw0(yzKLtu>~mB|@zCMm%1S4LC}=S@-xpI%|7snbuPqYvnVN z54$eT_SxL3k%Qsb^V9W|msl&e5mV&yX94Kfw0P1A<9t?rCC6Y^Ti)Sjs5p_V6y#VQZ@Qv{3tpUU0Y3-SUH0G2ZHfi+>lan%%D7 
zu*1o}3_3tSX0brHkqwhsBn6CVCNrQAn840AQVm1%j?nJd647JvWLkDZ+zoZ_MQd@V z2TP$L&MV(uQMFu&CcuhUHw*AlH18v|Pck*5%Ir_#ZiI0uM9ig}W9X8u{{H;L+I2V$ zuuVodl{P`CIg?MOkKf&~vYcZINwiH^pO=lE|H9Fst-Jv9x!ErOwDT(CJ^CBq?I9s~8Uh%k z{2#wCeg@hkK*aJ)zs}AQL@fLyeyF)bQi-|rOad~kf=k$ojPUMw264}CGZkj?U2{#| zUd||{kRQu`JmY^J$adA=5xU){q6lArz#QMkwnCd57QrZJHm@B()8dvK0ci27#+x4x z03I-RqlZg}Z@MHJZm(>*xES_S0zGg!;lcTq5#Z5zyF%b(@qj)9i_xnNQx);rq&uh4 z_+wZ7mje7So^@|QO=CXhdgEY_t%~)jZXtBR!KEE|+|=NLUJD+)^6#gX;}f5+`MQvc z_7K=sCPpx!zQ%5X5^T|hqIh<&BO6J7ya@y71Yfoqb^gBh`e8EXPYj2UA-jh^29JzW zvBBpc!Qm}$!VIp`e*3H=IfJ5KFL5>fd#izr5c`AJYpN+x8L?0iuCr2LHfst*pemVk>v$O_A(WBn=}# zPAD|^_iy84?z6!)Zi!-Wkm&gmR;^YAa3+SPq5~bm|6Z9P%=V33jToQIK!8i8OoKIp zG90dRs=OwN4@susq#qN>+5lgei{SF;il!e&QTVx^x0L?zfhp{OAXE-!V6!6UU|=)d zlSjH+zx_^Mwgk`>(mxi&yi(-`08M+7H^c#G+dKWLePwdR@md;WcD8Gvg2cov$CI#@ z4kA)*=r+LZJ^Jcw{$F8G!N!i%xU$o(@2ropuHAd&Sm6 zT$a>jZ038Se54tT;vdJ`fn1LHrW59ODJnROaKwlMvxk7wpgQjuz)~QP#DIYZ(9(HF zV8-)uw5?YHOxa(Tjb9ee!24jV!^!7c`UV(d?*c=s4)wcho`1+SXp0D(3NQP$C^LT; zbQ=S&+-&}J)*tkp;_s(mmI#470mJ$z(nTBu5b1r2i~-i0YOJ4XYN4$=9VcSY4=3_y@Pak##&#%NNT6qRqizH z?B@`hVn^=AQrYHwKuZNfP0pXoQ!0k0!2DBkA^X_#I5#0E@BNPFwEh`jNAKLY76b&RxPc8`9QtGS{2f!k zdSq_?tjblUx-t%}vwT6(EF}!blCrrH`?S;yIo!Hrz*1^J(qR9ra{%Oo{<=+ICof~$ zKhdh|PDL5u>H5GdDcD^St4p#Di;}};jqr{dFiPCGlDI|AF=sNIfEF0=%I!Y#1i5zG zHC|rCwdb*Q$5NlQx@Q3?U-=|ZtoavgFc^xzu9SaXV9RoUdPD?o!|F5IjIj1ZEnYVP zfMnZ#+iQ#YB5EFN3>RSaD%l$UOC)G*RN=XvNC26x5KDxwwMSu!) 
zfd9J$9ef<>nEMM0(!&BbhGJZ7zJ#|VzU*Nu(MRrem3)hsDDD~u3?mCJzom}Z&*qYd zsQ4TWGT!@jlIhwqSxMU%>Jm{NL(J0Uu-;peHVfgCQGwe4XCHwk_aTkdP#ljDL3`{35bJ0f;1%j^!$tCBS6%n zvi)9nkCm)YuOZ?A8ao@MPtu5Jr})rlJ65y+W{Q`m2C>dB82q(+=W5wl^$P$B@YFiY zj5FEig+*Mu!X>SZ1CF&CUT7hqRkHElmx+p}&w5p%l|@t_RQw59eqk$PW(s;N4YBnZ z5iL=?U82ZtSU9jcoSuz;=*8dG-~ScklcS|%s$Yk`zb2S$^E|nzTD7{&*|Dia>&X1byGL}6WB2vOHc4%&^1cBGMQspDAu`BUmlb^bBQ^6@eEQLrq z(!VO^w_e!>#<(NO>G@(x)c;~OE;3_=QB!IgILnT=1|^I=$mLnN6C66`Ca^$FBE=DP2qut0jR_<9GhWW`=E# z1K)vGQT6&nog!p(gD)y{0!{1vjbl(pWoVN>~@LT48`879O+P_pEzyH5~Bw5M=Cq~ukq z(_=9a#0-%V;kgb@p~D$O1w~ax&2}SiYG{xmQu7+qAO1h*K=W>*r7(PdqQZcrJmcG< z#Eppds0mZ*ppL|Lv!m`UcbLg0@T32-HRVEpC^)Xuj!xSbV*R?e;^P&D7^y%Ps7-H_ zkJ$YmWNkP*Ss`q2FKP6J-$p~4FhwxO#Di> zp^XXt;r^POM9$+mXZlp-6GxWP()g_5w+$8eceJQ8Xa3_(tU(5&SjK8r;dI?PBsD{0 zZbuwVCvJqcur6sKqvXfUz)SH#z;FLnAB;eKjN98!0{>TF8%3Td9cZ~0orJn1vNYjp z!0K{kpsdJS|L)DZ9Z@!2Ctu~)PkOk{?EX<%E$Xt_(cFnI4M3l)&dHH%W$m`Xu9?A? zT)T`eGsB%_hBhI3zz-qad3L_$ZfHw-CT!U|HC#PxlzZ+A3DWO%HS0FjTm?KvTw2z= z+AD6Js0XFPqsae+8%g)D-B^ljDaxGRy3orf-$raM8BDlEMzCjst9<+LW^mZq>5X*M z78OUxZ(j=+8+cifKlu~$hzr3Pvcj=c*0euFoXtfe^EY;N+DrEyQfG2CukuO$M_X}& z&aE)LyU{C%`b5c6A8F-@+fH}1CT&wzB~cY&vr>a_o%6H%0sLz5GcW!cR^p*^dBM_N zdLuqAj0{{$MzOwS=}dn4pLOH|DJkzvll6^WM1~r)_BPYVo!yEYj+sDfQ}G&Y90DcO ztRR+gZY2eXF`-L1ya8z5NLm3{9x)QE-q?M>!qGC?I=IcXpP_Hco-5|0&z^<+lm(ZyA!m2@$;o;@+E!J}R0V&=*PD zfam6d#y!oiMQ3v#7lor2j~G24^?p7#cYhlDd?J6&FQ7nuDi-4dRCvt9`p5LWSYKMu zoklxK2g-ZE(A>kxu?YgxSjx`j%6e5VvgHK{@pZk1*UWH^H+w~t4Id?sI~Ma@a|(k} ze1sj}Et~x<&*u#xoV!KSavE3nii4w8cI!Ew^Z6PwjDjXNc2mN0x0IR>wBZ>xx;ub| zy?2LZMR2PBIgdw6p`W!*i&kK}4M$!3-uC~>&EEa&=+L}nw=>P=bx#cmCS59{iRpSa z_)PZl?Q>DtWr_`({T-sW0xZdP+oNS}c(tp%GO)FKcGy+h$I>Thdtqm8Yh4@v_38GA zt(8;8JM=>zL-TnvpT6%0w@2iSSoHbnK0gnQl(oE6(9t`TDo!6Ex0F5hp}Y#518k4d zONrLY_R|;E8vc8Oi2RLmZAT8IPJ;|IBn+K4(ohZ&8H=C=&lsmO02B0#(~9UYdq{W~ zhr3X8>o!Elps@?~ws0LZQrLd^jBF(pl%4ea4PF(``cp2sV0j&=NVr5q&<{BAy1ohJSnI53Ytjs8gIj^#%^;fqRqt!8&SoH(PQE89vB8*Krf>uWhk<$*t^( 
z!45^f_KhacnDFPFi)`74T-cqCc)sOkxVlfP`eg=O=NZV#)8D7X2qK`x58k?H&V?%1 zF0k(Eo*|-DagXiIn>uDGQhT;4?Non155zR=IFw3nOLZ!8c8Sk(E zzVpX`hmpYwEHLY!zKdOp5Pue#>_7;SQucUGgB+vfZ#Eb!o*SU3^J0JeA4?*}G#DJH zNrzl)>4k3Je&~^PzRd4<#>}F+zpCXNz3}L(u3#ZR-)TB=B_K%u3P56}rl!d>Tt&^`16JOD5SQISDhVQ$Wy_?dN zWEYG4`@3|Ez+E%`Db<>unaCb?+h^45hGB-wcVy~U-ipk%*!gQ64KIIw;Xhw#UO341 z{xF1iV?ptZ!1vj4=H@A2;6{|;gDGNih58m#f7wls7MqkBZc z{z=qWfm{>!?_AS<)CCcEgt@Xw8*aMRDy`7vO(HCkmwAmpXh6wB-=;BW%^hYl9#r^E zR$85+j z%q<5!KTe`4>8+zjy$>AgT)oeI&Oh83|D`a$P;Dd$2+c}Gj}FO1gKfO*A$2_P=<^ey zCc;k9GIpsNDM5&5(en2r!+(H>GQktA$Vy=ouYE_H4byJf&$<2lkC=Pofdt&*LH=T> z5;*+%dC5^n2$o3NPXdl?x z#8F+8c5>410OXI$rbCA_OZ zpAina-VW41o*v=(gQ+F4CsDE>k~}#yT~}gkJs15Vd=i7BFP&zss2HML&0?*k`OQuH z4^EQ|unLv`YHAzi79CTN+BvENEK4sKd?J~Jl0qFNO2-W*jt|O$Hx6h2vdiGMch?hX z=!PG$tgVI`ENNV-STFl_hf30ixEA0CcUymj4=CoVhxrFlSj7y0Xkio=yKN#> ztJol|!3V+{oy}QfRT7Fqj7vLJYryMhS+A)(jlu zn@bNDLF4mf){yf|nF=BHRCTM0n*D7D?+W*=X;Vx_U25~(4UZgRd5h2Ioo5d^fr za^rkn%6Nks8Ky6_xs*r0TXMW+FkJ_64r{~z0jPoOT&|MbXZTl7m9yg+TYdqJ!7w8x zg?Dq4q0SN4)YT*iUr+209^*jPLML-Fp$tP!K4h5L5iD7`nT0i z;l_E35Z49%m=ZH-jjM-U=MB$v=J11pjOn6Mr79Y+;|=Oz$}kq*iSr9KaQE;n+n4`% znc!`Za+{HSQ-Ct0AGsP^33f{^2dwCY2w0IDSkcDAikBFzVGPIk0>23~+CjmPUZ7mnvCe(79L!f}Bbiw(Fq{2B78c>v zM*C?nJwx`wfBjY+65}>OAu}eYgMhv$$*a^gj*AsnAA|HetW@~V6AsO%nEYjfk%L#S zF#Q;EjHJbFlbb4Q9WP`h>18G%^*>}G9+)g3{Shum}YHKqFFsSxSKM+EAqg z)F;duw?3X{_Xn@~E1+AQQ&A3=UNes|F`8lyxx01_JfEZN>5UlK&kF^M^y2R1CxjpY z{ZW85&+K0}5rhc5IMh`J&(`uffPOxErOPFt-NmUt+7Q)cP?p7|Dhgc2Ke96B5RZIz zk&N0VaMGTKR*K0AXplFIL}}`g#Gdcl;%hu?s^obdN9Tx?!2cy31gZ$h%_kP}%v92% zq)6gwh|QAv=cjM4jvlS)Wr9N|r1)($*{J+q10QLv2E^rXr=ejUT^4OBTRkZoS)rVl z(auLw(>)Za*Is!6a)3^j=l=#4KrM=6V@83L z_pjciyuKp_PD_P*d_>=FeQFR{xx1^X#jlj7;+VO~?oKgdIb1Y^);o7l0vL^HkpbCD z7Z!|xIIw2*X zhkzp^x|oG`1%Z;~I@GFb(A+q_o%3JmCdUI)m@WamF(l^*GW)|?yi1{&uj{#^3h~%s zFx2d0Bc*~+aZ;(*jrQdkafMG&EID=bB!;0W5&E1v?G3gPI;O9l6di4U-CJ9_s*yKC z-gK{7U_Y8(JizRj69*j*3b+v#Dmnu0biqM7u|XtKE% zc9a@YGf!r532D6TbCd|V5S{bYMu_G;ZHzLfr$+;9m%KFXCF!Rmohg`pDOarnUoWRG 
zl3G(i$jQ0?MtVfA<8zqvQs44I1D6K{ZJm8xe(C*6;BnpO!6rtc)@l*@5ixhE{J@D& zJg44xYgBXRsH6|-m~ZN_#qPeXgl2~x_7xeF zWH!ce+NpnL3UXz7yeJVPRYfsvj-bT89p92y`Jiz3nu&P)sf%=mg}PY&HRlWsnC~Dh zuLcb1v1g`|)|JP>I7MnX6D;yuXmm&+LR>ONoh(JUE=kHN*`kmSp~oMU!S_?*YkA zeD@|a#xh4}t&&Y3`D;~GvEa10yFJXD!`$H>Oo#)@5eOVhsjnY%Z`&Bu3+m?2A>U6T zPORdSc5W1ZS2h3IgO)$Z6ziv;ha}&gaiKBC-;J>37veDy%vAQ=dn%a7kqVs*g3Y>y zRhp=q*l|vL&+N9A?jd?*RC!rNt+1<>VkQDsTx&UURlIJ1Twk5FG<{Cjpvlx_x7XBc zw>Z@<;+K9!VZ9@6<(-?0l3+`?T|f_ovaeOtti>l{IknRxmW{8W26@VHO z`Sd(t&L3`QTbO#d43E!cjSO@jVW`G?$^G9V(i;#2e}>RhkhTNa9r$+h-O_JV6^src4vt@m>QaQ=k|qDssW@^EN) zE*jSf@=$qe#vH;#Qf+aCF;X^!ok7Ss@d4yTJbdtbNy=7Ry9~ry z{WwpqM|^D*b{YB%vm_%;z2+a$jbrnrwNJY9V@DdZu2mK8cI{Sr1jgx^iL=~{hN9!{ zt>uhGi12i=BGPMbr8kNcC}*@S@8tzDIcgZ6C9Mr~Bn}uXcTZ^=)T}Hoyf`iGgKqPc zPU)OH+sI#-hJLM7O9RCrOiT}G!aV1x^2VPzV^+HK^bueK*|=GC70%>Zf@ox~n@jyG zSi~ey8L>4XR3!$dN~ENa$`gc8SKL#B_xb#=?b|ReiG{Z{sVJC`ZH2A7MQuo9*ZSS? zyAzw4FwTSn*dE<`Aii|#v&kqbbK@%aXcj~k;IWH6c`xz;2Apf=lPB&Ru(`BaROmLkZK=K&U;xE^f-@u)tWF8nue8oYtI z?9iTSY;#9N6;%#II$rNsSYIH@5E#9)Eu>|cxjL4i;aMW~^&VAId>(PALuEn4{Wxj# z_9Q|CN2XVVq&&8qYuHW5k;z|}k|#!tZ_)fk%Jbz=6F3XEgxK1q%3^nkel59voueYu z@iBa|&g^aMqXw`G{iGl*}mzj+O3u%Jcuu9fns z8+u|v^7P~C>A+wZ&L|!9xgQ&mR)42f&Jj9v-j9a$M@X!46-=zn3u+{WB;S#8|IWNP zit4CtErPn1Dm_CMTgK}KRaab3PcJdbd`{{gb)XwMn5u&MSoT11`6FmbTr^E~cXSc7 znM|U-Kb%9+TThOyEg3pH*Up){)nlG6x}+1_eQP+_ygRV9wqNm@2Ul8sx13|oWpzsb zCqdRx*d*W=rcX7;>tVDhtj+4o$v-&e_hX)wWa-AQH{Y>aCUx?fu7=sfJC=H>N>24> zPU+fbO{tyCZ{{!fTE%NQLR?EyX%UANBs%x-jP?BQZMoW^d0(bC=?F)$*gbPtrlRoy zBzhZ!w;BFe^0`zbVPp3qP^a;L-3(il49WGa)3U0YZM!EiHswt_TieAj?#xWsoyM@$ z{&xGlfnSvowLFV`>W!3@;S1na?$?T&HNiR2NRe#HE;qktHYoPeSaxwK?mA z+v`yG(r&%N!JNUYu&)`4G$mmtXnT=V1DzU9MCO`kBdLAdsZJk^7{YNyHi!vtDlU*m zl!@!gT<%mlo?2@Xj?MLgx8BHSJe-~div>VGJI*J-IPK>$?9puE;F`R!<5ZK^ zrzH5Kez9K2|H4$Go3_A**x}hMQXY<}M_CCx)x6I-i5pWhU{+jYY6PjucN9YQGq1(p z)v&I1VXE=SKM^fcNJDm8ZvQHMSM;N!FLAt&M{4gebGbe7S4*qjWi37*S$RO0^`QCr zm1Mrv{o_xmly7qx00cpWiw+(dHrwHJ>;KsOI>)veL>;XUKJ?LHhKo^ph 
zI9=Y@t@nhp=4tOO2u($9j4$Mc#)%4U{CtLDu)uRg(j5g$s#^PI5fWs21Wy=hE=z^m znAHDvF_V$Tv;qrUQE~|M4bJHS7x8O8be%1LNQ{os_YgJnT1qYX7deA|qvB3IOJKYe zuM;Eu3Qu}tEb_DZbl>~h+kC0I<(7`YlfIEHl~RBhBot`Dbp58M6k`#Ut|J>a-%tJ^ z=&f!&*f)J#GJN-QGvhp1`@{GH7EB-w-akCP!=T$mrZMKj#+nd8<*O8?HKT9) zp^gy4scL+JrNm-wO979i9%ydCQvXLTLJh}I`w;AbHC2W&j5{D92wPG?ry34bV5Dpnulx(&K z2zK*L^@pjuGs29kZKi-x>{1(U`N z*;N1Ma#yl!j_V_XYc*g^+;9fCFKF&%dR`M}0F_0KYp>d+gx3*l*FatAFa2w6K}D~F zk4u1hkn(BI_85=Ef57XcYBJ`=I;x)0dB)cBzU$SmHJaaU zfu_)iUGF~^*xy3+Z6u-dzf$Mfc5H%0DX+ms{a+99&#zr1(JqUD3h{_bYxn7Cldap( z_rD+m;sdP#f{?`qrOa)y_2-I7>ROIVmR>qH3_t#;KeW$Orw`m1>T7N0(DPb3vpTou(!j&)~#{4l!VS#XQOIMb_~jwBwt)T1G?A^Xk<;XWPK z5&8Q5+>!m@F5~R#Ax08|h<8TzkAR6ATnApXU3=CE){K@CITSc-m&R@!_{_Dq{9kpQeiEN*BOJFc zWz&G?PWJH9$SXxIhJ&bJhkmB3qo}1{fVOF~%bMu?Bvzu^A2_voEq@$(eR?3!(zf9D zwNz%TcP%Tz9rSi4B?pLI$~SgYKGTFo_KUA~9u zsR9bFyCHL_uQ#p$X`rmLY{~QNi1`eHzv`=!m&20VGX4hQo5fWkYoq?x8cx^};9T7< zN6YzW8TDz?ns^w0|D){Noaxd{tvilo*Me{w=v@)xKV45HR8nj{6OK2Im30F81p!Of z_M)wAqKkTfQ92Ee=enHeTDC|AqknZg%Bb@j?Jtd4GS8pVSn;kuP{uP{ll?EUB^Bz2 z){>{#0bQQkr^y!*eM6giVn3zG*k6inSCisJ zHo81N4wHBePHY0;Tr~5;G>k(wVl-~x??e`;(?#$aZemvo6$TzrYiT)RnZFC93SrN= zt&>tD0qxi1tdmvmt%}Zmc;0V@8{evVDp&=Y)F#4vn`Qgzc8fAMM6NSp{$_$4t`~yGwupQZZRuvnO=hDL z-Oc2VovyWJD{Vg6Fwz<7O`VPX?QAFlF2(+%CEM~r?@Hs~tJ$kc8FCg1A{`a4+n*tzD{& z9j9v5bTzg;Db%Nlr$mJHt&4hHynF#5%TElF$Wiy=S*8W;^43dVKGSv4x|<_3n6m%bovpfg*;M!up7s=9}k_*J{Y7Mxl?*jsu&GvNL-v}i3eDU$Tc$gUW31Weu z)X*x@V(e%yyJsh4HKpotb(qeBq?J4_Q0(i;MMPg*3+k?|fMt&EG3}C$VS=~88cZ_|A>6F6f)yPD)KWUXh=TTx#nurUD-GG3I5!-Q92Igm)?`%&S=bPEO1}}>y)-T16-lt^zThu+gTYl%=%+r(_L9=Be z1obT%_Nr>CuwNinIkRSrqISvs{lu;I+|h9Nof-|Ndy9nST0Md}QWGCk@5lc7pun~@ z(~T5}UjEkp8GZI)=DwwHP+)Irs&1;)Pt0V)1B)M+=?|9EI|*{49nwgzo`=u)%mzxI zz1ixP8DWk;T|arV>uk^019EN>q0?mjwlSMrev{^+-gr`uAO4p5-rWkO0Q;$tVrgXj zKfL?owMNq)4t|d^Ai}CnvBi(9JJdRI#xdN2%nC-=LzCe^lwe5|AffpGMJpLxaFL214Z3!C?%1a%D}ZH|KIW`(3Z(s`)3l&J`nw4bR*U<_ACgmO zGJCD%EdN_&^4fR21K4*CT#s7-m4ny{)aDK#Wt{9BexlN0rstFNu~XutLW@ICuMkjw 
z`vuU1vuKO$0gq$#T?F3nJC-y}1Ib^2TkS^spOX3W5(uYyKbP$$`sXs}Ju)FXO1=F@ zo%cmuSn}+a$%AM6)p_D289Ui=dMk#nu7KE36kd&4x)$GL>A99c#*EzB#Y@Wa*E@wA zGXC}VDu~DvTV=z~wPb!?cqu+Wk#t53N+f>g7bkfVc`=$BYC8?g#jX64mj70_L1DOq zf9y_sl2fGevQk4+Ct{zw*Uk*W@6s&st(P&AjF3h*Zv3foW&nLqb-V(m`=4P$FH|O| z$Yqpwdqc89Lfo?afH&dP{gK16G=>znwDoq*thq${1zZTLOGaJ@($>(BihRQNMk?hGHOT`= zzZq2MX@yZqK^@#?y6aK-8b8cwTTG!*B<`;`1mE6xF^<+-g}z>m)hSWCgN%PUpczsF z`qnf78*gH$@BV?K!qdy?8fE@{K-C6FoPCi1M}>`ZalyK8dFh1PmFw&&(V$Yu1c}tN zB$d`0Mw;8y+v>iSbtd>)Y}lE)_Rdzn`Pj(R`c0R=@>@w!xEb6QamhTK9^)J1%g_%} zw6a%EcgH}m;!TLgstv-GgJ#KfMrDDTe?WVHTJM1c4q!$r_;wr~F_D4xRc685h%hf&cJl2E@wJcqI66cZPc!D?amh&aJ zly)#Z+2`v0C$yM}^0sY2@pw2CB2uZ(S?L{t}PCvLn5X#-6mrxT`xSi!5`p=MH)9YZdJ-9m&hx8 zZNVe&S-V4CH7dB)$dg-p>uWyZg_FdelX-7%jS>7sv{CKlNMZ5u=6EBp>j{{Ce|Zw_ zYVlFdBL=Hw>YMIlS@L>#P2|nAN8EPRfP_SA$yho^*KC|U$)C4&{a*cF(RN{-BF6iy zHYwAViR+%-sOwes#(>FhhT^JRbx?gz{TYM=^W3YVO&hGrE?!ug;};EPRr#Z-o|+O1 zTjAg^F5Ru}g(^GJGY@JNM7mCVlOQms{*>P#0d_1QC}sEmgU-h9fy9)$EZ+i0xP=t* zeX)ZDAsn3fcBvYCbT?>37_*ljTHaL6-QZ;uD}Qav{HpmKFs(xCTx5rs zEt@UbVRnxH5xBiCBzt{XyQWA-r?HERatj?C4bjb@bMs+$J@_|m-;-t6u=62^0r-8+ zy-(_?6qo1YttXKqA2>XSNLn(vv!#n3^@|pb9wvo81dPhg1eV3Dm`l|5ptZ>h<{m1vML_#pp?=^G1u z0IF>nCwh4_taDx=NK=n^x`)BH(F6&8dnE`Ou+4Mup}?7K<%CW>nl7M;qM5L;J^04b zcx|b8;`Eh?+@EwJmaQ%`Z|~M!S{vr5)Jw~S8RJb3YfDM2^qYX16EjDf**uBR$H`0RFXe$){SO11 z-cU8!&nl{?t4mLHDv-dNJ_}=(7jj9h$_*qg?p<==e?%;Ft0Jv*kG*(_ZSb`sb49x_kfm=5QH(NRDZK3)&d?t zv9KvhUPU)*J}SPcHIM|mW0gh%Zi3yK@}0T1JJ`9lNh(JC+lCcBX99esJcuKF+|*q;DYEI;!1sS z>PVWs+Z+o^DRofnKHnWtu_?x<9@z_kvco%q^xgq>*Y&Q5<%;11cvotVYt%7o&N44X z3B;&Ttofj?-xooAy`|`^zETu-oa%eGnQ$TDO$CEBTaM}Ko?1lFRxz1;qs=RO&GRhB5cU~qJ2+edY{ASs+WDZ6x zI<-{-Oq1Abe{7;^RV@#o?@Z&$YPmY8z&~eCvH3GQG7Xk)C5$i;FI{U$FHTXHj(9M2 zli(s0;aQSaS-uL&+gIQ}3Dj!N;DmJ}7GQez=n5{8xObWPv8K{#q>J^C6{GLpxdBUu z(JdD%?ltYAJh~~XW#Hhy@97RkfH8yYJ+n)wZ}Lrt@(J!PaC!2f@ZXdOnK z`&BE<_XI^9r9)5EuEt{KgXX+B0qZe<>KJ8LlT2UDgUDi_D?~RAQ(aSacKiGE*Gl90gnpZ_1n-NyvdF9W8@k;nm6emR ztBr!Gsra|FFahpmxI#4CK=`2RUh+jmm^0-w)_WvZ;3awR(=|;T6=hmQ2hMdZbxjHn 
zKhqwP#Mt7|#{!>)??k*1?r#*9RAJ%(Ir&V^feR)I+2i=2M8Z#dEIW5#WzU${` z=dNGy`@zYO9aWQk_Cs$ zUqy+IANp9CP#uiW8Jl^0#M3+h8~{c+SOMdbeiJ<{M)J@P521fUB;6Ww}MYg z{Os>ee@CV}SMLiCxW4RRxn1rvVNuqc)UkgQaLf>)2H)k{x+7ci;%O2!=ND(*PQ za^2*iqw8-v##fzAGXd#L<0~&BOwmM{@QJ_NgIkk0M9(cMjBgxip`2EelZ7+YdZEdt z2Cma&S2cFzWy;`CS&zTp>d2Iss@7?WPBuc7nxNf15C84^jCtHvr}Mmep8Wr+*SXQPg{bSSlMN z?0H_QQ<8W4=$MsIuf$bY)&?Mrl%{`Y^?}XA1~DsC&q2EVh0(^B$Smy4HxpM8Q7CaO z5#{@9&l$0!t!lJt(AfyYo{$r?S0QVN6_?`P2 zxP0iu%H$!POV_}TT~4SQ&%z?j=Ca>ZcVgEzdqilWdVZ0;ZT|`yyrXj$;FAm@ADJ#A z9!~o%}KM!!oD8%i+!Oy4D0V-#I@zg zo{=u?_mVovx7)gdJl|<2El+>O+&+-FIP&%Mp02bPAFsTKcC#4{7;wX+WY9)H!)^km ztx7+?(j)D%LzjJ)d9OHE-NOf{H{1X{1Nm3TPV+4nQ8O{1hCmu;B=eV(OYKzsDK7%< z_=?|MeBNUrQ(kp$C@o2b{i?HF1@b-yJ1;$q)}s#u%j7u7sv$%`!<7t{F!C@J8f&eR zt?mx5UDKXNeA;7%VVSQ4d=(>w6F-j~CF^HUA(J{kqcY+ez(GYSU(|md$xnAMWLm4R zYqqUcj#PDIseV4FA!%JVE>gPbLQii`F-kbfp(&us@>N%%a5`ze<&@x(vyaEFin@1V!6v}8xFE+RB_T}PeT#O%{OXgIqLv5V>zWyWjSM>lEL)pg(SrrI%j2G^aDi@t z?uF56LWJuVF!(N(u0b&b#2a`h6VFFl>0x#TOvajxi46XTx%rPb{ab8OK+gqI3*I!FLwFnJ+TGrv*@PFy5}p2^cCm1Gx8 zSe;w$4aW75isEAx-G)Biak@|KAqZOIBJSV!Vvd5>SDsW@j%nkU_b(q&=g#$}tyP$h z#mPRn?ftb~D7F$wNoFQ0A*}x>cl)=*(V%qm!ACpgI;bBZ@*cgd?RQ^YkWeM$kNeo`^!J9oJ_GToLXc+&WnJb|XR zX~K3y&Y63lScQJQPcOT7MKAbPc1!dAS48sw=XCo<8*WD)=`68!`+{@FfQ*9{^HO_H z#&`C;(d~KskyKoA63a8rt(5j6;o5skFs9Di^Eoqk70Us~Kim$xYGZ@gpx5j44UC$* zE4JC^)~<4{mng^%T*>eg!gBd2HH9$YB>r+2c0y4;iYmQ)vewY>SOLT@AQmfdX2SDJ~J8lvcg4tMEA$_8TQg@ zM;^q(Lno3rsh&K20?f<$zkOQC`Q6=~Q1p@r5B}p8tKB3sikeo=Ra1&mO+Thy)|yEw z5$t#&_>IR|Rnbz-F+49QpK^?3?{kL>J#xN z;v)wbsi-_sBhMSizRdtyac~lNI|z%fH~C0^=IGG7+NzTi@^wh(Qe8$qznR z&rh@28|+%b@chZ8iEqOoj&K;{iOakmzSM?5^9SF3!0(}b^H5fsdT*jqu>GUJ=C8Mh z+9XxwpGNuQrNi;hwt_-ptwsn~8NO6vM{LoNQqiwKq>(sNs1J!He?oNXzy+sLCdEIm z#JC?uiKZXoFliu4#1HgE@!Ur`m+#!ei4Wt?WLoo-IdOQ$IJ=7$Y2) zM+!~bG6=zZYN+pc?+aTC1y`=Q&J<*ph*EU*VQ40TUX0(B2)0ZXB+Gy2%@?YtI#|Is z6=(Q%{{G)oNiz*;WTW;soPIKP5&IghcHnXPF;;Y#T#ZXz$YU(IP+Tu9Wm#X&5^(U< zHud!|<>fh15;B(P`S?oYWiJf*2M`B1?xdLq+xFtwt=X3vrt(vCg&W#CrT+$|8deO4 
zBhkB0r&830buNWV>*<21ZVKIfb^Sd6--J1g`fiu_B3a`NyC22{gM*(!8A8puw7l4ISET7lY<4`g%Tbpq(tc`@H~9g_`qK@2Lf?2nqBI z;@FmD`439}*KYSv2494%pO|9}$rfdGR2+n_=zZ1}!#ULDIJL}-!Ld*s4~=iPdC-kN zp~CP&56S(6pot8m5v6^&-+k$>!uFWIs7)T!1?ugbn za}x5B5_>32R{p8HYs!;XTw=fyJGqd_6;D4mi=;u|FJ$9)N^iaDJ)$kFZ8WrZ_MLga zrmbY0L?1HRI6BxOXEIF#;1a*vrKp*!nk@+;KFOUv%uxMnrPltQ+Z%WRfQf7+Ld zk|)I3Y+-ra^-^zb_d96ZMTy0j`QXPQIY}I+|UUnLX_xBKYL9D;Xb*U|T)Zs_LW$;f5J*-oG+=OsSmXrylByMUL zh17he6=Xkt+6u=ijb`Ml!N#C*e2vXy zn|pp(j#-h@uy=?V^SbAoh_3s?Y$`Ann419fL01l0dJCSNyQ7Aae)INYUgBR_BxQkP zlXUK|4h~SJ+MWwrD%JN%`PzFAbR@heA6>YxMSiX2NIu5q~yjM2F)Vu zxJeXuPxe<}Z2ZM?G$IH`JGT75V5-N~x3v(zCGpJFN%{>K))LMdRwc!A@F1pR_@?BG zj0Ail(L<3Mv-}O8EovUuZ*onK2j)zJf1f4Ei2?3|PWjW?9hS&=N26HlTFw(4Vz<|_ z21VO_#@Z~`8;-9yCY*8eV6~^N@YdRQz@ALF58)!{Qm@7oPUtMu)ps6|M1VoJ(!Na! ze10O<+KA$&CY>R!#P!fP3hU)fAVJJrpPjq5HhwxGg<8Nypy_Ut1hteTA~$zGbn&fn zfpS!0y4rl_@f%@GO2U0oq369%XS>Up=D~M{_kVSXVZ^{^Y%C|bwZ}?3Mr~66<|MGK zGhSCtsTj%U@WE?%Z3iGP7!Fm3q#Gm(-fMmi6e~cvazM-}6XkxO=1w$4H1onSl#j+* zQ?(S|SGP(%3N;?w8b=Ab?`Li0^8seB5Y5rM&cwMf46=i!;8R?XcB;ZQ^U323DqtY| zGivcIypXX^X(cySmcn%C#}6n*qDcq*Ofh<)hA~xvA7ZK@{gi>X#xr<{^ibR;Vs+`OMXCAB|IJxE)WXsbx-9pV1Mw8;d>kwb+hI zPXY077r}q%z0%+8XK4MbtI?|a(Pt56KdJo}*wO3tA;6044~VZ>&eOT;L$J;Dz+Tty zQ9y3kMI-0JB@-n#9nY=X5F5&RBGAtGq2Iak#l5)9Lw=8c2xC41sx2{$Yl<8k5QeJE zI}FzMD^YwG69PRM39H{NHW%38Em&2fEhc$?W@8%-S;H_Ink0Xdm=-FF1)pKX9gZwd#c!@hf5j=hLalC1i%>97^8|9zn0<%R1n z8;gRmXRNR1Q8ss$EX+Hfo;<8I5z7+72xX*zIdg^$(BF)fDt;1BSy>z}Zd33TaA ze@RDT`L&b&aJ%u%-%fWaJG8DE9~C=rJFK|ukR58sK=IXRnK>1wN;llI+ci7dEfo?>@na&ihk)s4t49<4+4plDq;^fC{WZ z01Cgkl2&P_Pi7~#6ejEYUFGGTiUEZ_+V+{Jqz^q}JenaOD+ON$VyeZAW&|sWf3hNm zmdmz<`o*mdeuq3z%kAC=sY)wxuilJ#x@LkGI;E|fR=>g7r^DVgRBNr0q1CNLSslXB zWLnl_3&wilH#urOO-)`;B3*c~_BQsrtz$hi952?GzOq2zigDWTVuCFOjAdvMMD38L zHCC!A341(!DX;3u{W#f-Op8epRy!&fci3%&E1k2Ti)$=(Y7{j+B63!63@^H|mGv~O z=utq!zdb=3F^N~={&!@XFp5>?=;PIuJF)^2(_{;ch!vG_P1~LeePEPyb*Gb& zrt5h#SNVwxV5HbG>gwg}gU0ZEmrBQ9BNP(oViU4^w6=uMKGEfyZh1>!C;W2Kn%W~G 
zI;^ctYFCcl8Pe(XRDn!&E)7Y8qo41}>|-@nZax&2ZmeK=CrMVN&NxxJP>@W5d_*mP zXF!q88AkilRJ1*t)Z>K0!*p-b`lh7SgLHeY#+N5mRuVu)W&?!~Y$1L>?V~TU{{|I$ z@?R;Za$U-KelK)~zxjb@15RSK1Lq`gfSA4<-}_0mKZv7gb6`$HuZ1M(THgH*}-onCS)=g;X$@*Io zNX*^qZrzlBL8{c;dk6bGdb&!qLWcu$UB(58A6>J#I20#~Lm$BVHTl(zCtGrtt0x%0 z=R)_ENc|&3Hdvhjp4G*q7N|!igX*UA$J3V6=DAQNxE;wJNO&9kG`$43VV9^%iw+`yHIYl{ci-x9lkz4t3?gb^bjj@Bh+wd z@k+1auH_SO=%E zcX#6}uVw4Z20yqhNz&`EYtk$2LAL%@eB2q7nmxl%+wwlz6h59>r^z&A{J=!8_?F>t z%!Osys44F-;g;!E0nr@vhcc|wkbTWXo7%iG!ZFvS4WH&A%_{7U&{D|OD}?;AJ;M!B z&Gb!d?L8ECq4~%t^_@-+Yn#`+Cwiv(_QM?i17l11%^-5RI__y8`FKR}Gng}Cyvv_} zOw#l6n+O@|K)%cmw0Gx9ye&nd^PVk!xx$ZJ;CO)xKt;>-D32&A?3cQAceThB}1S*)0RI&p-dGs#3Zx@qlh2&OYS0#uNT8Nj#{lRV;dqp4lU zp``-2>5hxEOU$T7&=w@amz`Sc&npJD^a5g?fuK~s8EhJXZLptZj}O;s#>sL}fUJl8 zUHDf`6G6=waTBA~;a`x)lpleisY}3B$i0a2Ig3j>{SaoNeRKCl zRPu37cwIa_L9Ly%Z-Uu1;!)g55(l^pyG6^&--?sL> zTY5`wQ;#}@giPaj^f4cYuE%%R4>R92^ai({(6M>Ac3#0bNa4$BTcO5!b}Gd+ohN>y z4A1u^QA?H=i6#{Y5tEO0b`uSNb>Tg(@&$fyMGx(}++n=M-3Q|y+K)#*KcRek!2 zzU)1gEBhWse(c8M@PmcHM~*- zXC!5=DI&VO%2YE3Wqw_TBRM%hwQ8IlIQ-+thg3*5dkq?*WSA-mIJ4SMGq(3z45*8E zyJD{EscAmrc+fR!Vb`pR&1O%2V$lX;JmJWZyoPGTKl*+l{b(bBp?V=BZp5XLjQa~;0ptjm+t}n418X_VBU$~L# z!yl|gh(#3UK3;59DfVn+#o%pa3gW%!@U5iSqjNr^9@usGeVsC2nGt;g7Y|TY+~rfp z)M*p&9wd!7nM_>)&VUT=#nQ42?p2ju#2;7@_nkaa>ZYEEk-&p#&Gi8m>Xl1a{JTz3 zVY07Zzc%SUCT<%m)>>vv&nqro_0xL88)=W%$rvVb1NsV|D!T)Or?ONhiSL;-Q7NPYoJ$eYCQ~$8?jQ-p zI)3|yE#^#-UkH4H(YDZ6R zbwjE&IJymk)8RY=nvrH7DfumOpTx!Hyher)L1MUAy=+xzAVfL7j?+MZ6{9>HCx#S& z`?3AJO^Njqr_%Z8=7e^-aql$*D*)Dv4K#9Lo!^Cpf>&Vfj9kUrU|qUo)wS0Q&5sV)O8=c(ppjFmFO;53SF!^jo(~=Q~$z zZHoEaE+anTZzuR|XOU+#TxF%yKE6@lysl%sk0;?@*zOJNL(OBwo!Ys42ME@A-V1VL zIFTfB>skVs-pb|hN8c4By)*H(Vn>8}+S+5>!(zILH1M)3wPR#y5ZMFXc=SLGJ&4*X zAF8}J87?&}K9x@@*iWY9@nW5ltRtC?X1au^H_^gtDt7eS5r6}y##rakTlOM^}C+R}C^+XZy@*N|spqT}WI!UJ^?AY^{6<&*HU#$~= z_tg#ey{wIm`?n*l7yX;C3iP7GYWyro@K`F;yPk91)^qvZEub`@PWNK7_M_T#!LiTr+ph6)!BsiF98xQFxj#jBMypTiQYuO017E7;vV1E!<=$kP4bnf1~WOy?Q5}olt{? 
zxz9}UZKTk`0ds&!Gv&YA7+A~>n;1ih6|d@X%gZ8MIq7eCA)xSIaP$+u5zj6#F6tj_ zs5&W7zy6c!m1=Sg2d?Q?Y3!xME(xSl;92=x(;X7qoj{|TEA5{|;Lw!__{vaX0%d6y z?a-PzD=M@>t^A{Lhb{eQ^)0ecd>p1+3giTN&Xo2W3enhHZWA$7IaIYQ5n&PyY(Tuq zdn41z(cslKtwI+9Pw8`w5)%>J@a0C1EBZs=E7WqrV0|{{FsRC;dD;fWhWT7ec;?2U z1=BhFNH%=ax5SzTR|mGkH%cV+I+5lYaHED9eQ^{j8J1i%?iLkVs~!3Jrlfq%e`da3 z=V-UTyaaZqw(@V?lHts^K!rVLeRP^qKD*tCIafik~UUSpz0e+YOSRBU|kRkN&uY> z2{~i{jpNyIZ1fi24TFImh*6_B#kMkSywg1_@U@#;n=%M-wA23PsV_%9-G*%y&KTyM zCg~43(Zs`MG5(E2N@QoL{k6(VZan&LEA9r>SKn-ep}39@smV4d!)>dyZ&=}}Aoh+s zs?WZR6{mu)Foh(Ih|aP4qMWW~j|0IvlgjUtS`aMPK|FH~@zIm({lR)GFRj=Z80=uD zRdW8j<_1ov%gPdAz03X*cac`%%m4L=;ZFWOM=a=uA_V(MqwkepOfe+MFL5!`+rShQ zz0uRF_7p*%B7l0Uhom+j?4sHyCRg3HP#NM&Rr!f$Z-7E-tY)E)S;xy5fo80p*+4{( z-uYhkYoKw!5t-oI6%wxph!^AY*jx3Stk`Y-0Xt5r%YIRoygwnu$xhlJ9_-{8_ClIu zUVEW<3;Da&-z1{alf@WPB^5ml!kY8oSG7xm0 zrIOp`sdC#o-`bXgqYi7aVKJjd0W@&m45Xn!*_@%$F@O~mL3*n2Y$cweu@S(@GfbK^vU{+yv>jzv>nDXNxR~w zr*-qE7R_ZJ)j1m5%G&7vfuoTBfxbF4x018GihB>NU5Fs5Uff+>2qPZ zJX2TS7b9gp&ySs%U-}AR`obgC@$CGEsA)GcOMYa&TlH2NG-vnG;H5AwCA!tFW(ow` z{mDB9uXx$Io+k$EZpl*<>P3rk<8df$iONw_%kINV>C^9=4#uoo4j0++BjYO*!a6=v zXU?WHc{#`Gwi~h86Uc=%S$z&5_^istY6+Mf-F>x&!B%RloU03|Rz76ad;l_+PKFHP z(*#HKHZ_dbA-WKHG2ezo^)AC8nyHmoxAf^p!b=zj#LNrl3(77lpLu3v?#+Ma3Cz%I zTP4RGmdKg3NyBE%VH$XplDK*fo zS*}jo*^N3ZR{2poHvNXTara-p(_&n~-M z0LB1`_o73VUfpz!)!~j8AKdOKa(M2k)uJ7JTf?V5i*&?Xp@Va$;XJsq4s6NxeO%tF zN*=m;yPd>G4XHMbzhYDa65}o!t{Nu%!Rev&kHN{T$}Gf-xuMN z)RZuu7pVKafR1g<wqFz3FNZA>%5EG8L+ zxtC)W#W!eA_iT(vybjn#KcC3n4)T}`7-tG**y_K!A`>wWiM`2VIjl{>!tCdKOImMn z{{!!*ykRO1^>*2u{eL-E|L3c_*flIIaT;oj$r#~*LmXek(bQ9>I8;jVwuhKgMb10` zkDCjCG#2icQ>|JO+a>l(enxUHje-Rf&f|A9s05hQeyeyTY`me&go(284>Jfk0I^Q(rdyaej3{WRT zXUiwx7puFWw`l-+pc_h}EOx)J=*Eir8($wQpE?opg?0$IcoGjVIWpUw^h9a-SSwGn zlAX9m=@h)bK@9hK(IKKtHXB>PdA~xJOifK>+0b06-p!zBQ!5wx0lty{-d4Ub$Jb8ORQ{WjP{w|F`Tg`!_rM8i&~B=*6;>;$fHaD-{#S(WB%%Gml&5 zmq!{@tJECb)T8MksV<2|V^AGJG#y_F5=pfB#?v1QQo>m(`_zbOo}^G%1IU?J8`~J+ z-Q&h-f3^_%n`Z^;{y~mwV*Gloe5Y|kumL#y!ry{04XDGwNYt?geo6Mjn 
z^JUFcLb}Nyb&U<})4)w-F8D>mS*zu}rkv>TLA-8?`jUunxLst!es_W;YnN6&1w82Z zV&+@|wd3Zx!W?BmC;uhJS8o7Sss8g=e4^u65m^`J{IXE{zsmc6e;@yQNa9uLv<_0E zmVn`%xdzCtLIc#hq0hu)0m;Zg8mleb;s|)BtyEFD?2)IZO^vrfRY@0x0F_)7zV~ag z3)AFaH9z=J(o@;y_Y2)<6uHpzVTk&l7v&^48xXeb1U!haB*&9qeJ4l1zr+WN+)ndjNc?N5QXs&c#U{pNjtey$*BiFfhocZXe; z2;YydIbHN+;pc82{7sUj!GtU4^0@#qB&q?FMW6d?BpV9_fJh(Rc^3O`cllq8|NH)P z=x+{O8tu`2uEa++V+lex{ONWNsR6rz)!e)po+Rzd8IlnBKp2x3|8tLhfV;j9DfC3?5c?8P}Iuq zZ<|Yy43!kDc|ZRI)M}CmLdn{0Zd2fyE*7B24-{i5(Qzu8$33e8k)%^%?W3GE_cXck z`?WirTeui?64zj>C1cXzaw~7XEVS`3;95w{?+Frnc}d60F~RI>NRNvYRWHviz3m<& zX~J$yvcK2p6h{hL>+3|il?{*xiP;G;N21f4k64T|PWgTA#w&)dL{DE_RBmcVp)ljy zA5=5{&n$iYGk5Rb22rm$or5q5W)a9H7ijcl3RDE;<>0N=OSOruc#>!Ok%~-E+)`Ez ze)Q(KwF2nR!y2oog{*3Us7;zTT33XUqv^qq!3x*JCqz;C zdnsU#ST)#HR*6=+?e0#9yBriZko%xFw>W*#=^ubAHel)U zsZ$>;@Y)eJZ3K*MW{KB&T5r)~TCARxz{C#K#<;Y&rnwkspZx}3#8JN#ug zo>t=1hw-LKNHe_+Z4s7_f)mo{OOoMGcE+e83(Y2*^+8U}yBVLT&|je%iaaiGv3MjvyiKFnp9yk0!#iV`S*qGaF-U=&v&xkHU04KTqZxyr9Er`zlu14*8VM4je*Ep;A4-I;{V$K0d>;ht+WdK@d6{5^=jV-8D(uG5`zDvD z5-QN7SF-t^^O^j{(((sD_|#m7&srC1W6N@FB2srU6wj{FuI|%|od3(d=AiYLg?1d| z`9=sC71jYtP~)v4JLZrk<5BMN)6aQNBE#>p8bOHeRsr?k=}FlZ9UqZ}XpKlNYyf9# zqDARfDlMga>d_m{{?gK=s_gDc-G+t>W06%R&=nXQ|7XeAi9d{Xj3VRpa`sODVmL-M zt;VrW#<%E}bmI|e2wZ6_e+qqotOGVipe!6}0om4kvLojebR;B`A5KdVQRa^-qJz}A z5NNE%igNE7#5B1aKmb94mo8E6o`$a=*sWfm-4lHdYr3nEe*c@cAyvZjuS=0sW=-oIvfHd_<%+m0ocWnmwt#R0Oj2FyrPQ!99Yx$_nt-Lue{ z|A(#f4u&&q+y3gEBx>{&(TRxOBO)TvqIVGyZN)09TLei&iyqM;O7zZJ7SUFh=KYq@g+MU(nG=q0IF2KC)o#KBf? 
zsnf>*jqqDm59bpZ;1+d7dSU;0-2L}gZ0r9%*YQ3d%>IIySOjDFxl@y4oM9DXsW-Wq zi$BR4A?uX&gVDcr`E5ubDmgZa_v^DBgy~BB?J?=M=3uCpxbN**CQ9XMrt;cngqE7t zr_2;oSSMHzb9!sj)-E>wkjN}^#7ky5MCZmvM#C}U`FZ1qSl80oBrDQ1NuvF!DQ>W>g zk8V~$E6by9te@{xyt5-tvX&*1g zy6U^ZXz(QcPtN~L2mkvk=38(W1q6rVW>ypVK+ik*ZSAoNNZdoV(?hv9*mSeF7%PWk zeaUOZ_*1)xFfZ$Qiy$W)hHgA_4Q_pbcu&5>dbH%tR~TP4afdn9j1fgr=$k@`GBB~7 z>U=_gHWP%_k1nR6T}I_F93k*Khi*o{E2{}&5!!*wP-QT*&VKf`6yNS=Z4pT-nAylL z;&;j6Ck>QKw5S~5uVpH?QuWzQJ8BY*5CZP15vp9ELgX=dK<)kRMXHc$#=Q!Yl-S#K zrtXJ5KM)q&xGAJ3*0=7KiC}EO)disPKeA@t07AH2>%dm+y}Pl~E`E1MdH-wL@t?Uz z7SPxLYoWl6q~gJ8!K@mRsaLr=?QG2Ty_&-z&lvUjFV;&)AcTpYWGIfJ&kcLsVgv&e zWxq($ZYU3j=ocyLH@@O0disfVOo@NvcfK;vT&U_Toem5@e5|wpD}fNo;?6t2^`z$d z#GE>Knm`)5bE+IvIZ}o+|M}5Cl+OJilygNybZY}^G|{%rlNP3OQ#gS*^nh^0IZbO2 z)-8tWXB@dlUNczet4C6K3|F38`e!q2Io6Cf5yZMwG$2mjbaPZ$};y>Rj z#9b8uq3R-rc!bDOf^FfA{H+qR$3PiE=AXoc0S6Z;Wm+)YFm# zGLdK#G`G&QESl5lEE0X!5o%~v$u{J-bti-uQ(d$31O0Y$IiURsc4aYJ4fGq2!pa7t zr*r#&ouo?9nh^$iy{}2zh8cRf@BpeD2!*=-^niBL_>@yyA95Q_6D$#aAq)S3IpnU( zMvXoKIlFFVdEF;I`(8`3=KHK@onRqc{^yt;u(hvn*`56B(%huDRa#o=DY&_bF6^cO z=g2DEky-+DdS(*0U;OL$HmKXVp*Rc_8UVBdhk$vmhbMOWB|^|Z^8Yv|bDaN1r1>-D zb%GTBNo#2S9|% z=}aW@POmE)(WL~CIakw2fXn@NW&Zdl)N?8qSz5;RTD_RPFSb!yRC>UT``SG{dNTPT zk2iLcKJDtHyd{-KjW(sY56VmfuS5=@@+nGJ2G?Oz^~?URhbFkE=#D*<*-LDAqQqID@c!GL=)>!;(W8r%CzhED%|KiHa%Y!6TY{NW!s{xSG2Prf+y(l zIpRZ!0@n!g$+BclX2wS1CR66NU9i9U$Kg&-T+?+wrYR)tPig74cm$aU$u~4+%Ka$M zmwhzdA-u7P+s@`}F=l984m8mqU;3kEptJo;D?9tKVY-9dr|G#&wvo?0!BeNz*){->OkyT7{Xss*&GDU=G*#4)zYB-PO8ik z*7%-m!jqvx!b)jFo0DAVo(_j!BuVFiG%cAY_PsO9?lgwmDU8jL4fE`(_1_#$x_Za` zfp^?Ma6mzdAL18i?`1{=R1dWZUpC?&-1j3bMPOy2(=K%##UR1=sd1OUTHhiQN<=zoh9b*N*Vml z<;4!>3y@gv&c^N(sKilVhYFh$dhKkvx5skUDBQEq^E^Of?_&gd@lIJKP8tcSF zG5S@A=}CmbP8#hxQAT#-pT!dbE@)go5 zw)W|PLUqab@?`P-X1o8e0HzDr%Hb<=3NP6b#fusDO_>TT_^7>M{SOG13^AvtOu=Xx ztm)OeAUEEWoTPnoZGif23>u3Tl~XIBe*6AbjL(11Cv*OHK6&!z9?Sj$wwPw`g=xhr zHS-aSV&L!!1+-(T0pW#t&o%-*AWB-sgyMP-7}8(biH3{|g6A94o4zmV*(Ap>ijnTv 
zAwS^fM}6yL%J@UmM7!Y&ukrWz_zky?uM|dq%JP>HOXyHs9|DhM_4TP)u55D}bKoe! zCClBjFlo*=e0vhHlLo~XYE9oMoC;u`HFu}e#~-mc&({8)`mrH4e*j6Ff(&7Oplg-i z{9t2F&11FWv5V{4L8tKuAn6kW#EhVU9NDFe(~*4awQ|ql7g92oRly@yT45}ZGYU1b z^vPvoW!;BSPLLLV)oxr<6NOlv{z?8?1ewWVc`D{kD(O5icLdmMRvumc)xKO(+>eVn z#$EAxwy(s<^=gq!@GHC)79hhph)7M7gZ718eIm_j91{U-+sq0dRFNP{X(?9qYf<2C zD3`|-8t3Yi>fo;I&i#{D2Y53YAS|umEpJW&fhL&kx>;dcBm+KX@2@CReiAr+_a>CsCv&$N|&lU$%&t#m@ykI#sDX5aIWIBrbG!d z>n@QxkswuzOuZgI8*?IY4f5?u#5(#c&g5#fmz$<&zjJwnI5JqZr(>I#X;4i_2tpY<1vs9nh=yhPi7Zk8`IR3x9sgZ5Ff%qSqwJ5z5G(R&O6@h3QvYL2 zM36WZ8E9u$bpCaGCkDqkx|0FPLV@o|#=K}4KYu;aHryvf7o>!z3&jQ^ptQO>teGB{ z$BSdjx)+N_qV+)m{>{@0L`mZ`+6_S`%LRB$$-G#GvxSy?_1cWraBqEJ+k3wgnLE+w*NeA87P#cRT7-)MQ$pK|8RiifQeff8&=MWu# zay4^3Qo0D7NM46;D~n8bz-FS;ROu8(EP!a@)SBsL1#JByT4wdzusNvauchDisQ>Kx z(>fSBF7A4en_5%15`G^0A)RJvAfapa@&bpAi}8OgUgGLIoR+6Dxbj(qMS~j}C>K1d z2jC9X-R-~&i6fgQ{tFRGXW4%9R8|vbpFi~~!vj}R?Hkk;Hgxxv9%cYtV|nOB zj?kZ*6VC4c3h1aK#Pq&Dk}PFy>`XWyv<)$?M2+A=Y*20Jxnnk*}K;P4!1 zk9-x0x=piJp*A?l7NkzwJDX5+-I@Q$`K!Y|s2f4SpzLePm5qCqaw~#n=+56I!FG+~ zLGS*=8g+IKn?@1YYM)|viwlZ+U zd_8j~QQ0(y&Q)!SRS0f8nV6f44R0C#hJaW8wx~Yw4MwM-1$>sk#WlzymKzV7jNB^B zgbtlG4%tg%5N1Vb=ybiE+E0O#Tw8^9ba5TO=zDOUWVoGWCOYAL&ZcUzUCP>yMWzp$ zc=>QJuuxTFQ$XjwARPAOckhm@Gq!$FvKZi=_)=L)V0PLG3NI>j3b3~?x+hg16_67s zGe%KfXW%kieN~cx2SD7($GU)WX$3Qq=2=DcH;fi?>EHF3h}th^wV3QEdM_QB)EJD&pOCCgT9&?elMT8&o}m)r$a}; z#+SVnAlQ=~-Z&>o&XDsVYVo1Uvb|*1zP6zOW_mEQkgPX5$wKLYSPAEIC2f%Bq)sSE zJo|D zdbgab`fprl6F916iy=WGGjSA4lEYH@aR7DT5kR=Rjr*w`&|_wm746?b#5v?PX>9NR zH;3ymM(*m74{g~!Y4>j(^j9%M$PN&5PnT3F&f?`j1J;?>wyx`Z_azq`T5q+dhm0wl2^9$C`E>{|`N7Q3+d`A}rcx{o z2rSB8k9wR7GHL29J!{t!kfo+JjK-`4HtN>e=`GFZa>ad;^2m7O#})9-vc&_+?`)KC z^bWxoPMmQ7&PlT1mkU26%rE)X2-z_*UCEI5{@~Zrq6i+pxFu!GN}W;fD9Amoo>hws zY*bHl8XqITPE}i%%?UI4i@?xP1Ap|@LAY(bB}wFZTjKlDI*+M0M zA)T3IQ2`O$YcV9M!iquYEWZiGv*UMI`ert$RlRc$cwV8>*t>yS)NLd=Z=_y%JhS;- zoSA7Jv)-!KN$Uq(#39wj;2^OM&Kab?Vs)G9y#-02?Q|G@vx0>IK6G`XE27bjBBA2a zYo5WFWP#BABc1s}EYfXMpJVop{K!%js1iNoow=~Yv;W$nJ2$bKRheDJQc-# 
zNpAdMLMkv})U;u6p|EH^XSz#%X1-0mCf&n{sDn69&@L^+Q=+R{)JHjUFLoo8<_>KTzZ{H$rH+ywVkO=TmN!o zqt1bwbET+qP!V6BsthhVtxfA(x(+(wRg`($weh%mWP{P}ZH}@L0Tj)09q@vp< z7RSf78LB-ScD#sD%ih_Gf^Kt|0*C7^c#$ z7cGL=dgYMrhN8J?iDut0;^ixO<+mI+$Qf-LU1NxE9<6nma$fx=EHzv{yx6;6KO2ZY zv2A=A>*Trg1|UazmX!=drf!UN%NJ`&+n-Rc-IblJE9rqB49GPV&PF$Rt-2R^PQU04 zytlCK!igTY^ebs2?OXuhJ@svscdqOqEz$i~ORvFZFVWd`qac-wQXkKK9`f}P1`mbM z8_ZNBJIR846pO^hguDb9{ud9h_G_Y!`m*PLKGfab?R7}~#_goC>||hM(3Nj%3ruAz zJN)`y&Fcsiv{hZG82vsR^Of2vWpC;@_a9C~hKf7i*a&2MiO-Pz-igUCAHg(xhz6s9 z5Fr z<$B~jwo^GD17($y!`>Dd%{xlpDnl1;aW}@4CfS}e3UPrNgb+5b+AWMML}UH@Dh)pz zsk6SgxwZ^!?t9lW7~oBf%!q0%!Q6gR&Yj{DJH)O$MEja@Q-zHt47wSnhcUH=?*1Ux zLe8o1;F8*UqqUkjFW^5B@Gef{cb9H|6E=?6C$Bed8oiu5e5uX$E(Wd*D%d;H()DpQ z^lA60^kQ)^X3>dZsW0V9^-vG;q5Pt#H+R^VPj`AaBVa~8R&$&Ux}TnNeU@SF9LU6t z93~0M{H?PesGoJDNm$!XfLKQ3GQ?j;N~nmya1%tI!0EM@lSw=(Fr6+{RJUha34GI+yZd>xOC&VbP{Dbp$6BmkLJyb9~zj}figFLn_b zdPNKi7r+7Z(EV%2HwV3+yv&}aUHaFUTwSyQ4gkaJym4dfQ=Td>u;Kt*FP69gC~|ma zK=vEW(gFq#T;PM}D^ks=q*;69omJyoJe4!0d*M}{oSd9T>A?X2^)JKl-WXX+*cF=_ z?t7$b&3O_qj+l|?PYT(;`ak^YUnuCN)=)LRxA&`-woD!rmlevf9P?pc zcF7hi^Sva@=5xvObiE3fCxbW{3F%}KB@ZMi`*&wVoU+^+_a@L{`c@TBDx2lwMf^gu z;lZ7#jnSg~bj`_=Nu2Q)Yvz`I`e*{Gsa?^_ebJdg)&dtXt5~Wmd%23!j0fvFu>s>+ zx6Vsk1A4jW3;qfhOeUVI{d7r7|Gkc3$35V|IupcMDY|lF58k6V&*Hc3WF@yP4#WA6y=khG!_{9mzCmu>*wF%+L=$a{v>EzOQ z%w>60M@YRN)xwN=dU6Th)=E;32tll$yGuMCvzuvF%AD-Bvk%7VM$$p(7*H0vrW;9& zOo-qG9jaJOOt)GNhUUWylcgN(1K3D)jVYT8K$S`h%X&?FM9s$gWOQ<}VPw*zk^X9B z9b?}j2sxa82W-OqRG!QND&TPKMow4b!V85=K;+q`D1Am{hW>Dt@_}qS6>1e|2>M}D z>P~mE#ZVql?0iG?nh-v#t1!bn8Y=%qJ94mF zc40Wq^Kr3st=)?F=UeQQG6_L$?PKIs{922L&g^fQRm18V0RVvI3ILi;!e8K;T{}*0 zDLoL@p$`IjxbnIxT6@5DcOSX!;+j_RSyQxWM75Mxex71@PP0*unq-aov*g*Dy3!}+ zR7u)7lD9WZcEw#<``B9AdJ+XOaq66@sp5xB9TG0shcLeMC2>;tHuoz-1D5etS+z8W zpx2`dd#9g?JP7L1-l!1tR^^`GTypDyGm_xs?`!QoyIBH-V4Lk%qlE@$3M&|}2}3En zc+lA**R=3LQ}*76+P8IdZydQ|PXXee7vIb@$*u^MkGJ5rK<^l(UvRI2urK_`FnaTlolA7zr9o)4_Y9KrU3i4XT}$x*OmJT%MOqI 
zE7u)Tt_sY6|7BLy7q!XN3Z=+$r^w!i$ugFi2KH{8YFvf_V7?VP7}IZu`aE(x`VyOuc~uzj*X*sDGrqd(oT~om$0{Jy#eCT$b(G zLM>mGURaKimr2`Gj=WK@a-EW&&)AFNw|07F8CWTQ(Olf1 zrd--5_bW#IAQIJ2aBH^%MGYujo_QaAUfgcE_3X6}!-DsIeCF0wlx_*bdylj5ja|B+ z<2J)<)QQ3|sX63J;)H7*caT*ig_5QKR=5)w9UQq$tb~3J&)`1!iTZ}=u=LzIZ3Lf=LBflrCQSn|8IU+IC9MyNmJeLIqMev&1WU` z!u#XqTaEz&=Bh3ubglO{TdH!0;8TXftn>)Ublbj>4tEl`vLgf+pLy2)>7dQ<gFr{DHW>3m(=zW>hsDh?FM(D${N9D_6+_TgLB zKlQ&J$S|ueddrn`PIFiZ;YacnYjV!2ii6QYKk{5+fMI@ZgD!E38V2p)K?&g(MP7f9 z91tmia^}85p?h7$VKjYJEPA{u&uuS`xxZfBgf3pFn!JrneKY*ejA4!cAM6akOy&rq zSlSpZ8|Yl~CK0Js+5B9ZJHLMM7?g`qQ|4 zFq-)%tnxbc9=n~*XsiGrPtZx>%LLXplzrk_Hk^Xw?e6SGcO4;C<+CP=T@5QDS(yz) z2)q?=r-gk7c(hpMYnHbKmT7`cOXE_b=*^%92FQM*`V1b;unDtdA3m6s{}Jlpgwy8z z#}4@AMY6QrWQUv@4II@^&!2Qm7J`8~i3Rm(e!)NnE<0|8C#)%-67vo)U?5luAi&bU>jLH8;z8A zTC{P=r%HxhKL9T{v2gL*uZ4G)jPbCI^Qe5K@mP9bWf~w&9C5Hs7`hrM4(@%eagVko zQPLI}f|bIAsER4e8@Z>pL%p_h(m~2pM{JHyYomP8#k47|Ckt{P2XYL{EVqpoQyQ<9 z7pk25*XVH4%);gPEP8qCZ~RuN%u1|-6^Fre{w;=k+j78K;vAp4j}jZAUFxhAVtD_R z%TxTMeTIj0oPE%lH!VkN5dNmao~1jdY9AQ}Qcr<}`+4 zZlT?%m2h>Y*n!al3BS@m!R%d3nH|)&-i{liCtK6b3JzqR%i+XQ8TCG$Vo6P6ZNoRN zwiqRo(+?k0oX?Rd)tiuZiGA`J$o}Vik&`5%Vq9XXkh)eb^mDkPbw4ZP##(a6qvl@b zAAF3Ohep?QFr77b{jz_e7?+4WNL%=~CiM#-FQh53-;l%i@a8Y1G}YC^J%{z*jf`T_ zOBXK2fadRKsatd0_oI~_(vNP`B-$m&$Ay!A$zF*eUd$$fTMJ3cq?popJE{8`5zYaC zK7|RIYrRXepVL&+(2X(4^!@Hu$eJ6&=z2!7ztVh_^5h*;Kofeq3D<2(n(bPAw zF_9aNZM>kzlFV{Bm_DBHVEeN-VWQ3TPBphLhR4i0$wn#^EyVtvpKkzE2%|mFu=AxR zgQXQ1n2{9Dc%_GK^Tkqe8w2aOq77-7*oNA7;3ocy2AG`vhd$<9cK4;8>U=ogY`$=E z=j5I%Z=*ffv4tg(z^|e~I5NI5O-4_?l1L*M5;`Ubzd4dE(n1^R4>N@ci?N}$Uk7#I zNxdTg3+iunGO;xPufJ?8u46CbP^YwjPVpTdoBs{wE+^@+Jdn%%A z%WP(!iD|Dj=UD7rRN}EvONW$9v|Lk9p`z`vmckz2iu-%o3!ya+=XdXhGL6u%kE2x; z0+2j;AU0r$5BnpEO$>cjjylyyn^O9B(zOPDI!`8DZtZoPL|VZ~xDP*z50uxvP!my3 zW)mb{y@ziDuT&0-#72?WoY*(U`@T^DpS>n!o(8>nc{Uw?^I)>hB$n!IHC%HG_dMO6 z2=1@mQRq2Vp*-p`#xL0)z7!KJzWs)Hc#kbcDc@!CpT&FP;$ORynp~hw75r58n&5J7 zk1%ch#XgKpXVo_B)pdFP6t;-N3PbZ%_-nNu7Tm%TVXeNVj4Be* 
zLuU;PjQv!tivtBDszQ?fxrlKOV7n17r2%}#41mvIoo0T^axb=*79qy~oOKqdf`ght z3t5r-n`oZ8F|@RlH(T8JPP(Xq`;fw}Wvl!8S5Hsqr!#NcO_GrsX!ok5WsF0?$B3m~ zx$(6(v6^O;LwOc>nElnt;LJHo`)tjrkv8T z?r~kt%kfs>oFmj1`;}7GTb{}$RDym?RkVk_@h=9HSXza!pXR7(H`mPv4>Qd>?R#wH zV$%#vUv(1}*RUkJTATB3z7zGM_V69(_tJ8sw(qT7q^gmt)T!8PCgu|skwVlo70}MKHJPBV9{z7AG52;yztlV%jf(jUFASW0LB2GjW3@WK z);ti3jvBRbjl8zw$bFzwx`j0`-#QZAo8BG|>Yo8dBDt+5W z1su%VL3={;IP)*77%B%(#}d@BMVy9SWxf+5XK;45!Sj)1xEZfV;#e?xJU_Zwu~GVYkqy{fo|+kK(i@=WB9@KtzM-h&!?Y3t37 zHh?|28vm2VSd@w(l}T0b;HgC&*kW4igDeTW;(P3Nb)=SxNK7daSe9lggG4F)Q7(XL zGplt@9ZQqvrq6sCDFmC01JG_CZ%>8Lu7M)OJIq@LodLukKKj%yI5|N3e%Y{X!a$df z?f}y?A;h+$ilE=y&%?0O7R$tLrI{q;b`^c+@&ylUthefY#LR|FSX-QHCk$gFqe6U6 zDurck$o*G&$&B0jLX`D_Sk2 z*Y8;v0&4SfTCll&J0qUbXwK)OAk|vZ==@F2=1QX>FP;7wo!PlSN#wH(#3s5xmcPhX zli2b4_kspG)WsYuSOF8KN$ZZ=oShcV-u5Jz?0Mg-@lo@*85vN`3o;6=$yOk6wk@%? zgumRT_OUrRgzG{rRIW+Y{l@SGU^(>GdV*B!Dietz-H!mkfo1QN$?6~8U^((Vxhm4Q z%?Z$}4qfCRt2m&RLv^76{_CP>&(LOlQClOnG?+hWVWKdPrIA;bnUE2gD!UK z8vG_Np9jz2p9t|q-!2<8{*hcE|8naF9q5LyL)lh@mj%C+Gk*Nz)H7d_mWQYJ(wk=4y4= zPvvX)V<5dC|4v2fE5A_m$(G!P6g|rB!nup@nfeyl zs}2G9imuxL2PakC1XC;#5#3CI}KNIyXcFZ*%jn6K&~A7 zVVk&?ap?^~z0X|d;U|Jx{;5^Jt82Uh<2*?U(am1C?i{?~3t+^K#_U=PgU$Wp7SmW; zP>a!W14X|pdgcy({Za(Ns({PfBsNWH4rL3pnopPG<+#v; zSTi6V|KgCTPLCVva;PO`o_XOzz1v?j(>vNd8K+6-8If=VL5xR&%SA}`-aE6ue@_)M zT|G+VJbN)@Nz~vUVZ+>N)gswT#zv91#^sVnOfXFY7XuK}P0HACVu(wA9zgz$`4>I3 zC|90Vl7^z-%t~Ze+SHBo%co#5~;e%VYr@>BTE1u^j{I@XiBD+KcZ}Yd{iaTXh=q1Pp|F6@#v?k{DZb3o6ziSLB85jBvbj3MRbaY867e zsgrE#EQnfmCrm6*J9nat5U{-k+PAbJF!@HRO zqW)&9o9r-67aYthoC%CmhyMAE0#OaArLOz9|T8*i;9*8-2bQJqQ22 zhZ~>Lx4I@2DJoJ+i7&9NYzxOOs$i{~25Ac2tFphzzM)4Y#+Qw>13P|==lVZ(n+z1; zM$geAYS?XTcts7^)@8%ATXf{O%E||dWARzyG(`_@TL>a};(1)e%Zyr^L2brL)|0Sg z(bk6{k|&amPt~1BMq8%}ZU!p$zJkMULvxj}hk*4IeFY_xQzgBYcpWe#*-xZdjFBRq zQnqy_MucTt|Jnye9OP$@?xw3zn>`p9uHO|W0?tZIG(>@CSA@cV9C&u zW1vtP=wtWw>mkU5$zOYnqDZ(0iP=mzFk1q&o_>wi)hGLeiOoTOvZkGMwG^8@ zaggOdx2XzUPHSmpl3d}-Ol2Ftny)9Os1XQ6!;f*YjGVETQwlt3k4EZ 
z|NhYw_4H08WCAG`pjbKuI4UQP9j1zvEJmbwd#PG$K6o!Py&XMmK-ByCxR{#yD6 z_iOm70pf__EH69}dDr3@6JJ`4(uMczEe^~6$>?4b{)iZU)k`D>bRo+@eWqRBk}i2R zUWkfUaqh8;9fT71ya*p70%8?JsPDe%^#%5!SXiXTDaCqP?QS;6#cZt*0| zx>@)aQkW$ds=i*_Wfk-u$jicx<8np#U?#2m`B90+)bsd<4h*z|V zF+r8ph>HuQ(tD;^<*7OSWAbZ2s!&yvUFQxIi@#?N#`+z+ zc`)3VD#M!!-9c=^XW=+BG!Pebxf;#}z81Y~vE!TE2g>L!EdL5+Lx{@v$Y*ubYOgN&#Cg{ z)f04pGu)k2^p#bj3tbfqp5$#dcY)s-4|j?5Lp@oIxR5qsh-`fOEH5Vp6MtMzHT zYpi?5yQfw+mQB=lz^`W@C{}mJJEd#yfV;h{?!(Ph;*EXpeS^k5pLxFz${$+VA6u`4 zm~XF91O1k#6KCwS$jO+bhMi98lWmUt=cB%n4T2#R)k?6G7|e{nM#3XV;4r*AGY^}-AD9n zz?ujmcy;9i+T$mm(EG83ueP5%`}I=PI7@TW{gvm(N8TZ*SavYkH22Z!5+H+!vTntm z=*1Qnev!EsTEQ-Ot*&?m0kY5mCw!nPklgq{bKEN}#;xZ5_vHbg?9Ne%+q_M$_=q@9 zoj9cC*I&M_Jb6l|yK1V4y#e{y>7uN^YL430pfA4&kpju=zk?@E!yHcj_6`)hu4C8F z{p|H%CEuh(_>@)0BeZrsZ(&&u?~DAUwVASYbK^SRb;|ALdd%qNk=Nj6k^|p+7Zaq} z>A92p{-@qMgP3|dg5D8Rx=Sm6m_~E)7~SkkD)Jk>L1u|4cBq~sfI#KsgnKWa^7G%z z;ZHa}WaEf2qdMLs@X8lk={HIhG@pcnYWB&Ebcrh-qRVUDx;`?cV-o>rqE zMMtwaR|KJ@9<;g8kdf7FU%@4^M?Yw#N@0Mub%mE^QL%19@C?1`yR>I3Pg0s5cvM;3 zcyJQOnw)F7-_3bAq);`0*)TyAgxdqJJ+scaW7zcc=F)O*^6O=V@$_!>ZT1+Kfh&wc>{A!4-Gt6o?HnD7r!~HZ8&z4GSCrW7_=YDh%_+>Bnz~8hTT-RO zpI5^wyOdJh9UmxTjeDcd2rhJ?UGe;?i^K8A;os~VBfoD9(UB-P6ax_fK&q?2xK+@( z$tv}MkXuvM=h_HqG!6tuLZ*{yZf$6sIaE+*yNkfw*dI+-Uc|c7;jim8N$-EX{3_uT zZgi+3LO%;cTlD~$n{o3ed)Hyy%QnjoXPiWkm>)ZPE_iM!4WG1Xn!KB+C_mP%lh5@ArrS!|~dv5cAqh;s%*y{S zaNfRl`#!<-l#krF{GCmYPlkYf1s>v24V8}Fl(#x8AtQTMBfKz_`^%*G2B=zCmagV! zXyWPm*(*<9VkMJXgkKpnt4mE8q{+?WzR=YQmJGoiLN&m-lba>?FNkT=36{lxTn?8fuymU3k+A6 zuo>}Ie~op?eay33ifCRyIv)n(!pb41M2%^UpKS(Q3e@O%U(%46481AqnLxiiCNcOO zcq~T*d339Ur)4LGzuZ3}WO(c8k)Tu&rqzWm`5Wbtg)A3>NRtP1$UEXi5Mz;5kXg>> zJ$(S#SG9=yK9liL!Ee*lL1`a^``U;ucU;K_x*@!rY z8EC9x1cM0}6YM|i2|eVaa^?W}CBcph+I5HeQ^mVpM28;NH;7USMHLt&Ltt}@nG-t;QajV;)G6Tf_Ie$78771n+y~|x=zxjXFed6 zfQ4}xd7V4hqw)_f`1F`;?33-SqAcA!BAJi*xjzPGoGtxX$l&I6BOG2@*W_yZWxPe~ zdQSjped9dFHgyp?BPp+{E-hLx=)VIgy;Yj~L=QdfU+uZz+4;6N;^Y?LEkfIvmgl5! 
z<=>BxFJ$dKpOrEi6iRxrH$V1atuZ+!9{Hm4UACqqjgi3I+P?aWS$lwVdC?|F`3_2M zO_oc}9GlqR8~SAMsFbHDiPAk?3S0TtnV(MyY~+fTp){4#m!Nu=8!E0` z+WL|kY`A^v)4Iw!cs*YD+;6$mHckHoF{tuNxj2*35p!R3SR}bxw+4>_9~WX zgS$VxV6|r14=k&CKBN3D~@_==Fq* z#!k#rLho`~0dt1X#>(`BT{MZ#u52~k!!Jo3gAZkKDjJ_cV*Ectxkg+)U?fY}jSW_9 z!+RbaFo9JNXSUmBFa<_8Jt``93s!M_^niGx%Zk3NDsA3G{`K6qg){YF$8=K~(fh4; zyk8*dLu>7=|-Acl6Qan|rd+C}q4l;G(%DNxWr+LI#_QKfi^!G(#p zTX03(&g@RFz>Z6mpBFQunSlPIfX3%QNVp2mG)ilu6dYWZ;xLt^ZRiF9OSBLS6$CK=2* z9A>5M6f@oap!rdJ=dH%Bs1i zsbLe7WGjJjXmN7Q)VfLb4~}A*@5NEpVB%6w{As#(0%;`e%0gQ2gOaaC`vmoNfpAAU z0`eH{D&M8rzu^M|XDtzXz{$EsU|*7DFdr_r^X}-Cupq)P#$W#-4ebN=Und`xLGV>x ztWDX~r|Ccc=$V7>#>%m#WY#xD2#1lsC+|xj@6!j`i)(;IuHZ&p$&g6NMq--oCNWPO zZ7gew3xQVQhQVZ5m9es2FIV?ZLyoi8BK(!NSc>*<5G$fZItE?)>1GlpCH8<4@MXz1 zWg+1LUi<<&n7lC>E7{sfpIrbg|HU$_JP^`jrI1k*#h#e19iE<8~M_<#7|?HQ9C%n7Pu1jr?P4-^T#% z&k;exMug@O#WxJ)xUlU z-mm}tLIaxsi-5%RkvTB1V)nLbk`NS`GF^uT8SOKCLsNSBql&S|d~Slx4ymhGppJ=| zjW;+T0s@m2I@8q4V;T?1iT$DP|2XEXfU#TB6W$_z9Je0=iE5NTBaG>7H7y%<#M04)ie6i|pJdK*P1yIf< zB8cl)@1spLY^a-H{hVP9jS>FEKvqX{-pMn#76fivsPRct2Cz?cE+uR&wLWE)*2;sD zq5kM$*BTw=$bPPT42{Uf#`~`brxi!Y-M(suUd^=%1eP(@z#i^V6NuQ^&|^!eVNHM+ zVc1bw#VKeL0lvCwYwRSY35lF$0E{temTG(c$9mAyIDkbn#r?JaErsl9Z|%Taba*#v z&nXV|j}R<{KUjQ{FoWe^I>y+MUaMX*^GHsDGv< z#Q2}@uT4VY;T+NwDwokAl6XN_T|^V`#L5uks*p+V)NimbOzhfb`ZDRV8C5o$$DE6+ zT-Wi%BZ8S+7UkW_lJUVJA2B&du``cG$&;S-@kV4k&3Vz{HH*w#iibUxn9U&TXxpn zGhc0uk|E9Ex&oh?c^S}-OfjMJC~#XCXAMs;6Ng=toU9=4cne5wEvxIz_VSOtDnB=G zZ0KdkJ7PZt0rBr8v~5#2^1Y+#twQnqY)Uy1n32}M@PP}Dkuo5&Cv63NW?&Ydx-Lnc z9?2>}{U0k+`!&aNv3+JqfP8bM=?=f%Y&ad)fBB z>7v2+xq-YTzqQcIWYhxESo)!OsLTTt$}?VB&N`HQw>bz;WaEdV9UhI_@}X`JT(kJK zo0;8Dr2hc6!+cHE208WgtdPVD++F4bh&&cL9L`hY-?MD`y#}qvT?UUx2{@>{$a`-0 z3-^xIkLp_+|2VRc8*pvBgXLmEUhWyXPcR-LBMeR0@?o9E9!Eij* z|5&=S91oetS;S=vxz`pTP%DtnHyIiOZCl)>Rv4z)AYG{Z(BvuX{1xB@t}wPg|Qdxq*|+^X$=U@XImS_#EN#3qoWTFulkK{YllW zf(8&hH;SmOIlOs3D|$hFZn+if>R!7PfUr+PylI@I=VdfrbFkgefoJ&y&j}2~Tqa|# z1n9q)6MOd-#bLFPY*x;4QVk~+1QLmtl@L=GtkbjI!DDtfwVX^%5FIm;@BFn0h0{vs 
zOa~zLw)Qi76q)=vSS5_Gr9hGWtI3V;EiBZ^|0WCd(}-Dv;ED!%ue|HTUPh7YapweWq$f&_OE;6Ybz!#xxa`2;KED6dk#rrO)QD){EZ9 zArDJo?p>k36wSXdeN^fbhXn7seGq2z!UTppN(Bu$i^ZF56g_hbDA8yP%AK9Ha*9+N zo~uU6zb*|*CJrvHwHM&+uPJ^Uc+xf5#2|xFjhdj+;hNxOrYXAYp6^|A{c3j9&$8og zx4&}A<2Y5jn2@q8k~_GwvY=*sR`H|%yk}~j2oLwZvg9k+wVGBjwPvn-gHrGewxe3I z<}+&IR)p#&Fdlw-ye(sm#z^bPY?7q#6Y)oRu8&~#@5Nxz2oN;im7m9KAFao>&*X8^ z!)gA1mt`f^fOBLDO51j8QJ%VSNs+-gwj)uj`EluX3P*uKg%4ir*e~Lix=(5NxnUrt zt7@$Ebd*di9U8Gw14xJOd5)_{9wi+ARG2PAj(*|z?5!xL{M359jOLEFcKlXsaG`4w zjmO@uLcpG~hGW&$sug90dXKqNQOBSrE73c=90C6wFxw?b^mv>vKi9|}VVfJ}5^cmO zMY(<$37d+MH)K8-YBrAz_nyMmPEIKf)mxw{GJTyT_mQwCT&mIZ8+gB*baweI$Mj3} zayNacd`IX*yZ+7$Vatdw%J_ey2i1cAuYBC^!H_|; zLPH;~h+=FKrNv>Y+jOC^x?#l#KNeL>Ai7QRM8YDO36X65!!flAV&{*WAl^93>+a5$ zjDcNvD$?Cy*jo28c11qZ_2kO`XaycQPzHZJ93zr1SPDm27u0TdHvC_NE(pz7@A;}Z@`MDgUZIqU|Zww|C5zLfye+AsylxiJp^o~t6@3{ zzP7^nCwLC1Q0;zgmh$xWs!?U>PYHHA3-oT3K5dN9h*%}rvZ_zYA|?IiE~5QaPDpJy zPA)U&FbaadFmT}0r-VQz=xm7i?}oJMlP*Z4?rB%pjpwi7UCcx)>N;Bb;&G^V;_%1>%Rt*smNyetV#bvxPo}i} z4};>=DhvJUg!Dw7f?>hT4_n6l5qZyKs8M^Gn+yo9*2Iy*dX0kQ(f#)M<}=_8|KG*+ zj}v$s944QNQ3q+ZsJcLI1I*0|IseWq*W4=3iL#fceJ;-#n~JU`+RW9{K}rA!d#3m~ zO(t0Y0@LfP&mzLRI{=aX3x#n@L`F+sWOO!oJK}oa1{==DX*huZn-`PT(M-PKM~Bmf zC~+$xX{Pf+wb2JylPzK!EN(n0hFtoIG3B8;K2Y9tPt|j;GNFk^%~#Cz9v^iQw<7zG ziFBxeyCyM)Oy;9#lxH|dg?c?l?h|M~C)_Jdt@S0`oFsuI+)Q^tRfuJLzhI+C0 zoOiKncsJ#WYknJPm@^NsR&TZty^CQsvhC6qE)az=es-ba89p7-Sa`FiE>!MyjVn08t2?fl(s>?F_>cbdS^bqH?E2;7RTwtEPkCnANT@&T#kT&yx6rzgOjGt zZOJ~4KLoypFVvED2AP2xG5hF0Iyq(tk(Lm#fQ?CBbF9%UgJG1NUvR~u1lMq?X7P+@ zEpe&qX8Y)o^#9@PP)lU;Ke#)98YcvGvZ9F;GsshZhweQ&ssU+QpBJFT?8vQRM@z(J zAX;N;n^-Gaj(sMP_Fdpbxts}G259oG&!5TBtuG(XPq&sq`Bcr|&u*%73hD$p)Z-dk zJ1L#{dmpwNCIq*{KPh~2<_$&Jn$A%tq&{U>R$d@ClS}oq)-vRy#V)aP8AvxxQ~ml2 zt8n9*lXUCqnPBF4pTMxv$-BMEH+3@s2n%ODB-k!x{afNkqRmQu!dh~Cij1h#?RdTd zBU#bJ$_=jLEbHJooT?E>m7 zKy0A#xfDX0HqO$1fyClff#BY;;-VRvPCE}UD_%=xTDUOjj&q4(z!qu^uU3-bCSip? 
z0^S*04RQ9GdBZntFsN1wnVT~&d|#g_0-s?T=_U0i>L}V&mu0%|dA>XupgwpowYUK7 zkMdGaYLwm%&?_N=sYf;m<)@`Fwz2X-?1m^}1Ygwdgul!!-LGq~|NfYITGVx2lZg&! zUHlw~2tq+NRK0rm>=9Ucj-6|;mM5Wr+n28BLg;E5_QiAl&w94sFP)!-p^P8j{6EIA zEc8!0MAKd$)~6O+>#r9grqYr(@m!Or0b9N*sVOKXNPJDk2fJzFEJsz1wY0!iwu1k2 zkm0cTch!%^cv2A2BOwz}GN@*Z1TNM!Lf2Q&I|kvxyG`wrN8^0kTeLAs+q;sz^nr^(mldWbE6 zgirYgDR1EXZp7Ej9&X6H6Q=F)?=|m1aGFpd@5cm*#{^hY*lI0UDd(Wq{{6TijXJ>9 zqO1DK;$PRvzrSv9qH+=-@5WY%1=(jbAT8#9*>%yQ3Y;p~z+v(sCUcoWXm4xuU%HsQ z_@x(A4@un#eQK{4V(hZeo*Q#KQkZt!JT8b#S5xX)3^1h$g>$AQn9YsOZvTFk-=iOu z%vyyZ0mX*W#Fe@Jj?W$EJ}rfC+wqL+pyw4`m7<#RY*H7`@nv>|-C;-`Rh@KE6xY0O z{A~st=}w>lsEtY`PfcMA4V9ih>)u|$33u?yhfX2F0EB9t4dXt3D8lMdJGF$7fCMN~ zzJb{t`5|P*24;#yT?g-@LH>V_HItv_A;ckbg~bWAsRl6kX_pWv|6-~~pF(}BCXn{| znEN=#jPfZrDtrKKT_+2-Pc#q2bB;@%aQ#8#6@IF(sx97Gj~IHRZ*u8fJ@F?@(*(;9 zQ_UfrSN&sExyn{i2x<@2uf=I4xtvKw@3n-T#-n`D(lRQ1g8KaR{I@f&zc)Fc!11?5 zMBV-xU^>T?5S3I_^;cI_6{kw?qXN!avpb}4H@4HWWPcNJq!BTWHouF{t|#Dc>T&a# zlTx;Z{`}e2&ulIul|6Bm4 zh0N%Fh}ic%vA<=F?EWf^mbmUpN}KxY85N?zi(Dt8I1ghB8(=5jT@E!-*k3$StuY#@ z020@0Wb{-qigQ~tX94ZdCIn|z+lG~eX)zSXOy9UrZ?RU7%`%N9G@h<3l4mK}#lz&D z)v65CgwB>9P=bdHj4Zw!zeP(%DK^_fmSlOb?xa)hF6a^?+0^|OAMQ-Ez5ca#=)KIx zI62}c6QM;wZ_=#v-+Y?(Zma@yO3iA{IPcE! 
z7{lcuorKZ06fi`jxhd$i+x83NlXW3~#5!wAWYzz78LoeO5KCI~sCkQxxN<>+3${|P za#wWoKcgcBB-xz&6TrHmIG!pL@6czy#i4Mf;i7%#izM@y6dCnXzH0=62X)A5y2j=f zy;=v5itc0^98`&O_;8GEtxsBaR!76Mh(QD~$qQaQ8Ff>tL1@&p?qc*-f3>4X0LVfqStRe@ynpncD_Q7a zv$r^&>rA#3ZNQCrW)_W_hNzp)-KmHWh#w2=MrG@ zQsWGNy&C9#$;{dJJ=!Rig~kxZDjTf0yE^uTsqzYs;|C2C7jl)me#N7Zu(- zi({HX%hL=aEH*<=)16TLGak~)n&P6R+N1MU(!S;pHwP6uP544EQc#+x0^Qr(#jE-g zN?r*Dc`i*w?E0wK8o>!lzlEygId|p`!1J0q>CsQ!i)F@PP9?pFuFbW(cSo+D;+N@$ z_iFR-85NJjo36Lo`8mkwyPI}DUjxnIU@75HyU9D23B4mJAsUMPK!QV%I7fg*?Pir= zsJ5|a$26*J&+i%Y$A00e2HL%0Z_jX0i|c2c63>O~rDgPdU&yMg{rDOBJG$F9S&L$o6D}1adGGANu#Oaw1qwAnNO+W8(B-63<^Y4ufIeu z=dp9~x>7oMxdfGjG@w~|l;>J#>PI$m2}-BW`bOy$QN!hJiQ=n=T(8+JDJzqj1ZzVn zL#k99mNl9k7Al-#pDr*6ktfV5)EjHhw{^ba=W^GfjBh{I>L1PQ35)Z&AbrgNq4Aey zo4)&k3)nzWyz%(D7p>*wyaFIZ=xV$&{dWWS*ZU3j2O$C$I0H()@gii^dK6h5m`ftX;_{2XMx*S408=-+!)?f6n`*3#34A-Fmet zdku?nu(i~^a`U@=|AODT;3DyAJY*3mWFvP{lm`_)g9B1K`i5>{WkQA>RKdN;mL84y zLAa0-e%{;ebN%f5BzS+#mKhnU%{(y&78F$bn&PO;UpfylE3VRL({K_6&Hhw&HZIB0}~G$srMp(s#5|w!2lf|Mm0^w zV&%Sd<^)Y{^lwD;YxBxtlXcTV6pzeg^{gwe0yGLGy6`v%3QH|f#t?_@LddC3nQI;(s1|af%U~B$6 z=4i_m+gbm6YDDhgy#2J18{NOI{_Q(<0^aanC9#5?U4?vpj^Yge{>_LFgU&BYaGm+{ zP}D*;Z4~Y$nIK?09Zy0V%1u!wG9^*Q|50XSSVHJ#XQ2XF@a=_#tbriXAi2wwS?dk$(MB-MZnQ0{po_`Idw_Z||b_#nAYcaFL zeTSZy8C&OHovi0%7A^Da5^=G6W)A86RNtZk>PF}Xg(y~j>av{6+aKi+5+Rv(AYbQK z!vDB8kbq%vDi8Wa4Avv-;7_WNgYG0K$SF1=(wg%nzt-zl$72aC6T;lE=D!`omu;9L z)UD@Ut@IQW6wqU2uZkOJcc$6PZ3T)$%oYnIJ^KK@;KES{aixpJU^1QPUvW{w4E=M# zjS--RiDtgmHYQEbtq%aQBClHxu92b4ut;waLpS{txW|ck-Tuwx$=aNCSxv7g>7wUO zX{Gfv+hwld6PnB8!3%A{khQW#wfI`@6xClI6UT@eB zv-yn%8+-e*8=(KOxVo;czu55B>Q*H~WLlAD?}~FvOPcx7ee&{E%W)sGv~KRYH)S}& zZOP}ji?>BZaXO?fiGU87ihf)ij?VxPnX7kf*AsjlawVJ!)`8{^-7ZQ0Iom3IiDf- zb=cuQ-m3*iF{1z6ihl*%(G{7Y^RaiZZ1K1LpK=-?9)-2IAUG?w6-wfV(&BMY!@zgy zB0KpFw*;@X-bFO4(sJ>Zx!cIyP0?7b?S7q&UMKU?=>GYHLg~Ht+BwuUiegsQ{V%By zGEqC%%c$0u#X9T-YXMd7l<7*KW(fzOLrmt(l`ckQOo6RCB@}&DL;*@=8GzR+k0Wme zF^*zUwhm4Wv;yy!C^Wi)5*)iXc8L!AN?jegT2wHcHrJpkSKph*I3465jgrLJliWLR 
zkvW18>?v&Dws!ZgZCqc$o4Ypuy2fkd*LnR;c-QRj*6k6f479$7n11DFzhgKoN~nqb zWlft5bXvUO-kai7(Fp*}hsGaP>UGpfENEuYY;9FhaAny{3AMoor;wQ-rYPDDz%v5Brx4hAU%G+iq8Q+yOj z)@fhz)&rR1NPB+~alfgtzqOUbm3zM$A~zX_o{7U&Z{&XUF=C>~ zPHAQ)iK!X}1_q*?Ew-N4_q!uS%Q|;i6u={DS(n<5 zQ{haI97Icg&1p{Cv(RjC>sWKbaXXl<(UPRWvjc_&PE>#3jkLDX(~JeFOCdt=q@gNl zPqoR&Su>(-g}o+qj6=LvJF+W*@;d9ZxcFlBJ$&bq`2P9$ez`rHC)U7>R^R3VUm7i& zP$MJQ{@hx?lL8aX$#ObKd;E87r^y)Cf-L#Xo6yR26m&R#{8{f_H&5rhP4(bjd8MHVClojr z=te^PcH}3gQvE}@nLws>{|`*$RoKT`H1AuWf6_5u4?syahMb>Ux1L!aC-_|^ME5RJ zxVVPeW1hY_qLg2$*#Rb})gPngHLnnrk@RolvR(r`=pKOTWK+ko`p`KMGJO*9#~bvI zK`a66-b3-HCGRy#b-2mvzLEa9SqVVx-LC#NccxW);!M7yb?hXPa9Z{a1kH{nu~u+@ zdiDT?5u1{x^!PZoh+W}@3%>!-()J!l-wki7GXzezr#+Jj>(Z$XPE^>Z`_I)A0X^~I z@ptLM?%sp2;J1 zqQp6S4iFQgN!=*D7O!d->Fk=c;>l; z{^Z}!rf0Q2oIJv!z^L8TL@DxE8Ji_dB4ODggvR{=oN%-wdmBk5)_<4yZ)!AO) z%va~U;#G=n{I>0$o3$@7`KqYlAJ2Bf>?U5!+B@PSWWx z9p)bxrk{E$pWqsfJ)9E8x!smKZz$+BP`N~89va%y#>R|n{rp<1f-fFA-g)nVCz2J^ zv@gHAs_A#Cxkh#Ohw4i{>d#i7`Vi00-Z`hNZ1~a=b(x;#DEVzXLCytpw=d1^gJsy4 zQwV!bT%LzRJ}p@CA73%z)5~aVa5d~pu=CD#3yfZ-mF!(k1F1)q*zV`g4XW}!R5iG~ z>)Sak^K15gC`|)E(7kuJ*Yvm~)BRq}S1wIEmq#L0r_g3^rC^WMLQRXSX3M6vt6nAF zF>U;cyX*)NCV=K=YpHhHS1{Y3^%KozEZF=_PqR8vKDJ^_+E-~dId!ld(4|r-6hp4= z0vL)OH7o!j6Hq*HI`CUyN4Y|`HhpoeglJ#*^z$wSpR0KbuPaZ_p0X3BfWf#oqXI^J zrCFUcq~RT24~6=gtt{l=hDiQ)?qyPu*({r_!EL@4$a^A04ZpJWodh*K;N;{*@mm z1#;#2^#l3rm6U9{4v#Gd0q zXIt&X4w+kznU8@6gmX4sQ*?NN7cslR?v&))Gr(J%4!fqxRqfS&vpZ8E?qB0~?&h~K z?7G^-_Jf^$%Z%MC3E?_#Yv;{^`?Bsm8dd5vCO)O}+b)wVm3xDEXW@~^M^9#`(tiQ{ z%oUZkbhsXgzb@Y+_%Jtn>qcWIjKm?~!zTdAimaB5S|s-I_PP`cw)-Kf<%)MMO_5W8 zqRsa%g87%$7LKQ+RYxSx5`3=DdJ%A^Zba006pgf=_bzFey%)<8|J2aX(A$kypQy<3 zH{{2}pA0d>en~}4w|?MrBV|?l&isNtRq&2m1s_Q%=&Dd~t;KGqZ2}k~C0x%`FU1`i z(Wx{)wd$F-D$v(9`Hnqml};J?}Ej z)~SA_=F=Oe5%B)FHaq=V%Db$g5(qn$sIOlAn*vkP(xAeP!XQSkFWI?LJ9`xq#kIpH zF6@={Q2k?YRurFB`f9rqocMCyL@+FVQ1o_Olpo(=%A#4S?2o5etojSrTOOT|3ZYiA z%e1MU1CslT{F93x=x}UBl@*g$BSnk#ZSo^qPbb3z)#Ua32-It?dW_Z52HjFL^mY80 
zser87<-~{S*;$|Mn8q@91P`|fNu5Sw2z%)|3bzejnL%*67Lf!4F_)Z(j`}y2EQ&QjCYkYbBl^zN>xAM2So3v4 zQIO$$Xultlpu_<{ZSQHHO&>Cm42zLe>S@AJ(i7-qz!%Y%TMxFo#+h;X!yDj{xls&_ z6l$toZDrhjg5RFar(vH0tum2ImuX7`q~YlVIJ%K2|@pPFAk^bb(j7<;IH*TNZ5 zzRMQ9%Kjo)ZbjI2JjY_CCs6T|K*7pczpmVOgb(FQG+^N8y$`bu8jmiz?SVeOehN3l zAJ61_0B~A%VRHmw@W&F6L3_69_2%@@KE{9;8Y=XaP zCP<%%+?@b_b#k}G<=+654Xm@LT0)83n7Hx=0ScAgpz5er(aRXomj>xKlY$nB2lTDK zFutOUr5q=(xl{iN`byg>zl@w;JmYbLU7wN7e012A6*eC)Bro$DBZ$f1YXOkW!Zf?j z-Kk9|HXRo`PTKenFNggMS^)ceLM2LJdPHqnt=Z)FSK80{LhO46eiz)8-%egHLeIi& z!6iWvy1{YR+|)&idF`sG#?~e>L}PXtGoMZNbrEXmhuncqFwYx z{1?6pmt`OPoy8D2G5wI!kf4zIo>2DU)v4jV%b$j9%52Kx)xMU)+jT@JeXZgEKk@Ae zC|GD3Z(3gUH4K^7r1g6Lq4y|c_#Bw?WZaIik#T$2jz4c@M}-__QLyQf2+%X>I=b|z zxfqX1#vdvu^V_NOm2Uw+>`M!1?j*r`{&d?)Fvca5&h;?fH?=Le{O60e`)+~<3MQ71 zwIq`s_<0WJxZ%A+it&HEJs)tED~M{B+nICp7EDqQ}d>UnzOKs^nEV^ky~f%YXCfH%}3UOQcSTede2%ky1fDHYTN@o z=`K|s1pdWsTr9)AlFzo9Y*kLwyW@TnpyX=bbGBm-;eC(L_c$6#YW z;m&8cmN(`LClNA`$*D+WkRdfEsDjH%h4rFG4(x?D{)X=GS*9GlIcNZ@Xl|CEhVv4` zDEF~|`b+glFFl^~U7+w@t!4ToClFz$Yuqsc-uA4~(E1rVbBV3$#9&g>{UWj)cjRMBJRrm(<(^&%*YaHBZpvkw-r(0%%H0^BpzfNgs(hrik5bRjn2aPxcTX+* z0BfYR0DDx;Wp1VNxv(Y?d~Ykz6Nw5PWt67o4eYf^)+(q8!oVQnph%0mnp71NW2;!X z2p9}?%+1O?_fL&y-D)rD5q^cKDhzGN&B1vj3B%yX0ZOpQfSO~^Ld*yRqi&#L>JiX} z!bT2yEo$`X$zf0`Ub3Y_6s70pEA3X_TW#i1_pGR40Sw3rPV-?fd+dXG(YzU#xn5Yh zT_6}=F$D67{$cT?!4*@WLqQ&EXMIt8jOQ1QO7{H)0bR)kf3t`mQ}q7qYCfIPrKkoO zlq+T~SsL}9Ne?K#3aK2hd8=IO(B!+?IaOgSJdYWVqA+IkL5_mH+^zGxOqB^{J zeK<5%7v0|;OcV*KE|;K*`haeo+j=|SN+c9-O(*CVy^eodd0d=JUm5j@UgEP0{Pw;kmr$A#GDqu{tvfAdpoYfy4B2 z!`@fVuU7>cStTG9-(%r`+pVJ`z8<4dRuXN`wE}ZUAarV3UC~s3}+F@tt7)BU182w7XZ_KPFR6#v7C1$ z;atnK+niK*iLB7D*#A)0SBj8{bJo73P^{OMyrF=}{2eh!pp_j8 zPfgJ)^A(x)irRB4F$>CljISO&7RI}{NeO*3Xr1%prjkUSLaN&uasO3cbtM&9S8~71 z{0Y7#WSbYL=Mn%pEe_P9$U7Cu=|aISXA{npy38 z#D_iP6G_-BTLD2D6vuB*#q;7YF(>52#9(vG>T;y)X4Nd)CXY68si!+xA5ye+SbWy# zx3evOh6Gb(%FN8Bw+p0F?W9S=H$FuGMbk9m;f4ot=*k5tmQ7jCs2Il$ytwOit+#t0 zf_b5=j-sB5rIPJWj1F%$L=Bb2KyPqq7Xf{yU8H!oeVHEDB^F-L3Mha3gbp`t^hapk 
zSzxwpc`zloU(*W+8oS6uxJpn8L>PW__uk{_=~kJCrF>4l%=g{S|BUDJB(|F&zbm<&P?8|>*wd^vMmP zXVV)EE;|T54>2+OP@V9AlW*3KgKO|NeIoncqD{&QwTY?JJN$XSfz#_&EA1mAY*|zW zz$NcQ_@|YPP8>dCD(gA=0@A6yL$vIru$|UE-zkD9VS1Ph7Lb`)Qw$cwPw;#4WpK12M9 z{3rfh`((QjJJX9?B-YJ?EdauKKuPY-rM=mua$j=d1VLP4KulCf+Aur zUA@>hy})bKz5h#-VqOjEJUzE+2!ZBe-m-zj<=aQ`PBsX7cX!42!b8m#d3i4%L*2^M z*ymz*W^m}(JT*Ii3aC@hqh_*h@v8U(qYGVFs5S3@PSrSLAen5?kwtE{@K)OE$>(m? zj%7C6^Q>f$q{dct@u+zLC0Kfm6q~i-8|F=$k{WSyMupDocpq(Orba$)(Og|Y>uD&O z&sbSnD#U+~6~YZu+m&j48G)M~Pu~|p-qT#>mmmZYF;0Pshc{++R~41Zq~}w+Df)hM z&V!EqdEZm6Clkd&SH*JWS#LWr|MD5@O9Camp7Ro23(-cLBAXedH zqjHTy@s|$?Ywt&vf~tbHGeB{FEc<}DASj4(ltYUd#0zV^^VC#W0-PTTtu>q}6bAN7 zV|j5Am;}dWyEkb|OW>Ga8vcw}b`p|#c_qF4#uCE{G&{2{;g7JLuZ3&wD46|0D>(Ak z8+y5Hsn?151A`JJ$2&J!(DDFn9*Y!4P8EqDO>NQ`+{q7PAhdO=0mI5%a~*Ap^rytp z`@Ns}Cb>IL$%mn$8JP>cVaQfiy6h8J!Bl^9z@WE6(S#CWDHEf-T47g;(FUdYuTPI_3D)6#f#JBgEZjI%(8%-H_mARDoUFG|2 z@4nX}HY4BBK@D8B?f2lA<39+}Bb~=!w$T+S5&Pr=%m$oi;jK13-XYv34f2u2<}j#O zo1^vviyLgv8{-=nX;aEH0l*13Zq|ve-tmajX+#)sV}Dqy9X+Ck9WxK#k$>1P3FV!rP1Jetq4Qx z;NOQu6NM9{Ia1r)@x_?X?&pn1sRyUqn#Wqqle)v0a6s=q6Jx_?@J=DU8M);J72Lz| zwT@5r+hz8{%!OBq^K6Cp4+^+tC%*qPtg2pL9B@5rTZD7UmqP7KT|{nn%-oVTRtBI6Uvq&WglZa?D{6LmN0fDZ(r zW?m{NKIC7FH%O-owT5P~M&Jpkqpy@=pjD}(Q1(L%uH}>W_|*HglCxz%Pj<6w z=vFiLIxjnq5kuUS4dq(vnO%^?`Mi0tQhDh7StisucCwZmoU$tIQ_O=yx95yIjF3~w z=bKWru<$vyT(1}UNeTa1#rfU$As4fU^?7J0&-IeqWq+^RgR5lG=C2f|nlKaq_f7Mm zqaI=$etImKjFOq7d4wBD8<1#H!FOXt6-K^H-@8m%mX<9NmVe~x+U07wtcyBvcvW_f z&2}l_^6Ahg$&mQicc62EN{b^B0t-+l^KV}dzY{1 zGJ}Q!VrjubVIKWBj(>>WKcnt2`SXU>n+y91pJvIck10pDa12ZGZa6h}C8H z)o&++nOL9=y}~%}$enfy-SkRsGLj{o-|N>*o}hPT64|L;{EyxLqKO`&)m|J%VhC@b zeG^h{;R!nl+L`E^*Ekl7#fKg}&SyIo)U9^uQS2k_|9l^55MTe0`3$>p<-`--g1kBeKWy{Jx$eo4Rg>{)`Ja*^lWBCw z%khfYPOH+x-EcPE)C{`>ytMS`Rn*65mMCyWupQB}#=^NgbCO&JFfvztG<;fJMg@Sw zy>qj4kO{9Q=@F&`+i{y11HIvsG|ND=wS{RNTxbxjnCKTIhk^{f0Eb{`HKDPqy}k&e zVOG5sOML{@#-L-seP?iqW_LAiS(0rOnh&HGj%;V$G8yF5{cFmYM2I7;=cPVjAu)31 ziu(p=-ZZ6Kh`1<-Gd^es?LP!^naUb;QL%q2(;i(F-iOw*KBt;Og%{+{o|#4nuWS4i 
z3mQv$Cs8vuHT7Z%C~Dk5eWKi+Tr>gW7r?)o&A8RB6IE9|!XOA!J^`1Y3$j*a17;3> zv83VVR+m}OG3MX(t+TI|!v4{f!)G)1&0oa{3@^(7D0r2SUJU7QOR!XSdGB!yR|zKh z6Y?w;)UUL3lw%;zn-J`5{qeh<`VQN*zBq0%nEpLi;NIj!j`t59Tdv3*@n~9xewUsa zrT375%LrS{ZG-4v`sbb3hjDp%o9if#K?1UWx{K~SOsEqCP*X^sjWhsFFn~U);dU|N z1^Ca?&)XikQ~1?OQg)$26}1(FyZAqlDk&gBxkb9rpgxim-8<8z+^%YxQPw*@=v-I% zmv$!0;zM$bA9O?xfD!82EJWJFZB2G>?COKW02;}utV#~XbF6oasZpap^ zec9OIqpq8or&sz*8CiBFyaL`=%Uy+5j#*fTlT4w*85{%pu~h@I==PR>d>FdA_99Ar zvEX`fi(5~Z^qSaZ*KTifcQHB;F@`eB?V$ca-E-mH+33%+$MRK9;1pu zT_MO?xi`2%U8}Vv-|^vl4W^Kp(f}1ZeaG>a5zn$12ZQh6Xa4UNp`VmbbYpLUa(RfL zpx0{`y>z*>cs15!`;|?Nn$zNx$3YJjAB+SKZC%egxwtt$DKnA|*+{|)85v~3hKPjc ziN>Nvt7Xy=7(OM&*1+uSq5$@_$g3i z!S=<56K^z$C!Xsi^|aBe%kpb6AtjMVSyb3HM( z?%>lPLw2Zvsbhdi#7UAfjM`1;t*n4+in-aUZXCxjY>ifOUm^%m(Ph`P0i<6hSE9H| z>eBu2c(9@Ll?LfgBCV(2N8^TecJE_)pOBs924fZnvu6G|gn!kp0 zBbN3^YkfdPdtxz)@(-VjnRi5IF#1p#a4MQp4sHaPa+G1YPeb=tx zlwmG$F4Buz$#W*3Akn7Z{L1$HNSZ<*lI(IuikdeWqlaR1D^A+;tmjMmV5Nsq1i2cw z4V7>}=d;tU;3R&iq5%;+IJdGolM;T(KQ=nY=5bmSwp_R_1+>LY2Nvbo6j+v{|2U_Z5;&C>mwgYt&XZhK{Ij0eC5sq#urb~rio5zV=l=!iG ze+3A7i;A@hR4VbI0*&>5ep{cF%tzc<@>Pk!JOX)C&HbfzX1=XS2Z;b?Aiy)|7o4~i z--QG1Mlp{YSkUD66Lhj)dtqEj?{=z>PH@8op>er0XG0=)SJ%>+07ymO*-t~Y$3Gz((dw`<&!BiJlC|J z68XO}Vij!feXAvWz7+3r4XG~#R&kNDDrH)zPKqFS9aQ{$9C z&&ZFi^m!tZHJvU*N+J@Rkvm>8(D2Ol(m0?@Q`APq#Q$T{6t)QMA4MGXyLBZU=Dciu zHCe0LqmG=ZBhKcR;C*)!-t-e7udl-{bvku2{7Xt~Ut+31d-CV13i9{`=6M+{zRmaN zt%@QyfAAaehC@hznL6#e$2XoFB^p64?k5^=(RCwVEg1N6~5Md1vCG8 ziwc!x0B++%>-N1h8mQYn4+Obi7KoJjwWBc-0k{SCHSY1`-z-bR4t@%f z{(YPHF;uVCx-Ud0wACB~?*8|u(VNLY(E&7)vBrwBp`Qt!2FCP1xBs#dBRaObf$Eao zuEYM^>@3g17d^Q@K-^7PRutLg6vscm=S8I)Y?P0ByVeRA_5UYVmzw+^1U z($MwhW|%ro6Pf`CuPKyDv%8U~Il?ncdzTU=&Hc01I~6=#cP9r4s@!n98;M_cA0uyx?#j8st#froM&qd>Y68n-IG3&Ft9}*|`j|0j5GicTeb@$#g;>_=w<1u8 zFI;(|rM@p&&S#==3#c{%($!}?)?4-ew&159h29>WJfpJ5h!g~r77YX0hXa+HlK9^1 z;mR*pi4=@Fg{o3)Z!BbLmxg?kox*;AwVZ6MH>?^LUAA7$*DSp_TQ(Is<(UVjJYHR& zbW1ndz1cSCXhw-JT3r8jZ_o&gBo3i^c4>f+J*_gNzdUXAQwUr9-lYZ%Kru&w3+*D^ 
znCzzy*R`|3v*D(BPb}W_=x>w|=@7L6+I?M@7+b*`v^1@kf2A!?ZOF=y z=+>w5@^1{Z|Jy;2`QBy;QoT}TeqxB8+D@nJlxREZ8j%`^7(lJzVd(MR+qJ0;M4Wl& zw`glRoT7lM3(`_f3w;RFzD3bPt?@|$>cE<@^~gAHTV|?gTp#>G>3!d=m*jjnw8O&# z(KVNJmB4P^uBEGF;niGcMxZ0}0wx;5j2((S(4VEeC(d;!k|d~ieU=}STQX0Nuae%Q z`6yA_&+jtFW=cp%z@64#kNJk<2|?ltLH!CgI^1&)s2u9VseD;Q1-4UX0!g+pe!~b1 zNVD8eQZg9!n=Ce1dvmdvo-LL^8@Y23Llx}*R^GWSAc0wqk1BBPCg+b9q~?yNtAR}p zmsBQ4r|f$+lUP(gb3}{W%#F-Z70s8LXJN{uyySkRAM|Y&bf~8V#AQ+W{3)o>dzbTU z**|Uaf^RD=W{*H3{c;3JZ}ZObCT1XryYOhhn{=t5CT0lDs6421MBMd97yI&0EN2FJraUiYWq+0jqL`2L`{9G&h^R6=XaXNTxD zg>*0%0YTIk%X?qP>NJYbDQqtRRVWIp4-2p_WkjT!UDKY1l~=e!Wqowt0;_Fo*U1ept z@{0{~d5Z?>J*zEcnKdqqtZSz2TIG4E%tS|Gs^I3AH=kX!*W>zp>@oSy;L5a|AXHxoYbuW9wWX_)@{l9S!T`(E}- z@w6!hva__{O+^%yNb_z~CTynH*jg$!1ytCf%>hQ8UqZ3RUNIE-`)5AQp+$N2{{1sm zELoX_7W;F-Ru3-&bNl^bkH%Rj{jA$H0&cNBgt%O|7gl?lZS4AAOuc1TQ~&=5JQ_)r zZcvd95$RUHNOzC!hSA-kA|4*r)MtxQ=xO4D$#tYfsQ{lZ4nBQ3(MeH_tYK4L+dWs0dKCPEJg$ z%FpbcI6FHFc<<@YH#uiy`lCuA`;BlL$*uE!g!SdoUl%lxv21=dKOVff`B1w(oaM93 z@j&5k@P%9rs}96dzv}J32P*uGcw1I6n}g0!Gj#)u0WDV@=L&Q^-|(}4d0%u#;t-aqFZ$g;=wn*etWe8bq91Y z{Z_q^k0$5l*gZB!5_QUTp+OIoIm6LXr!38}6hQQPA|YQpzh#UPFZ34=*+pJ7BP#9+ zVBLAQvID9zNYT*4PBaqe5N|}B0(J5R5BzlW{^o3=&s_RZvGlD15F;-rd=LCO-G|!o zmMgBRmfj6Mcnct`7DeyXZydB8dLvAH|3)b;KE|HOwo-ooUi)^z73dwPMGEu1T$2y%Tk1f`YpbWhM}j(;Ag^#ud{ zCPdIKPgDl2pzJ!doQJE|>T~vC<%Fu+CYHBO^$}KD7Kd*4H7&$eN_!xAvJ035dJB z(3;r~a-s7sAZ#1?K+R;B0_~Q{8A79@D}g7Y5THD<&Clti<1k1Kg+hU^(8pn9@uZHN#A5ifaLhl5fYqW=pd4n}2qjEPEmpB?fIA)m<8|FIN@2mGe7 zZSh*{Yiw+ExuaXT{MaUKkn7}eI6FPvg3dr6gyAu|p(SsQ`i_C$vN)PVi7GG7_8;O| zfeiET=5TNUNr6G}0U34ft6_Mpw=j5;s@UKtQ^?J~Y0 zTR!8}>w2i0B|FYiFH>3UVUaG_GFrJ#b!5v9xw{%+Om@U{j-26Ya{t>K_5C_JP#jhA z(@AS~Ct&%6(bn~N9auc5T2qcDuEO@MGSh$;>@^9yElb9HNj^yPCV|tY<_?v;&Z}M1 zd)gKOcjw6%zE1Xs5)p8KrMNoW0P#M{eTbv@gSB-bKri0AdvOk^CI8S^j*K`fdbk^YTd%F;P zHEe2w@>qCTzBQl9g&cZ59OJaXS3$DwmuX9bGU}}kxpmYt)f_)|8uB0=|DMc;`tj~r zB9~u;Hv)K+TMLZGp998=M#$0)C9|ASbOY!A;8CMfLQnTPNe=oXySRF?EqO@9SK3si zRmKczn_qBRXi_MBXqT%tXp2v|<}2FeT6Jg&*zPsX8GT?>NNjjTd;!KnqH@0 
zk2{xtRS3{iY-$!B{E5g(a)jUOQ3@5J$-RR!bR9CNv@%p<0ShrN=$_4ip3bFM{3tBubZ7%MJDL=LARx6zWO23wUKZ9gLK1VKE>wxze3vqSt zAd~QJF!?jWca+p3lp<#Lx5gG*V;%2*f2Da&@dU;o@*TdS<)F?Xsqz-Q8T}M?#MinIE-x<+aKenlD#5>#9D^K5$TI~s1Uz|5(`a_l z!1J1TlX&1yVo{P3%)8R*#9V5)M^HUA7*MHg?NMJ)!f^62yKVzSeJjnqo?HLbkGi-D zCxy&kSTdpYH{VkX_jq93`PIH(QpCrCN+mam4erw)ulJ26jDFX7XaX}fFppZ<b&$ zH1Om1x)&})T1Mna12Ire1769R%k?wWvTfLD^$g8zyBo`x(DoPN|9+9Z=F_Z=HqGI` zzg&;MDA|pvZ2tV(v|~uB$LvDIt)mNE)wB)7(I=gZOG}eAx?<@buN?rn^keOjBf8RQ z^*o%ct{)srnT7Qj4NS<5qW|Tm+Vb9uzR;IP5Nb_Ik@u-%L$|6H2g1O)toQ;&o=+0! zs=wNrmFE2y)UKufXhi1*e|yG~^Nf`G5psIe#n{8Pyux*9Pv(G!jwz*o)bVYM)??%W z855rt=dSfT>sYysmKSRZrB$UK>d>E+#yD~WNfc{hc6%5T%&;Y1=ek$3oN7(gDIqRF zim8oNq>}dqzdnc-xU}vWyit9rLzI+A;V;!uMT-M%Olkb~Rqyk$VVHpjN$?c>N4AfQ zkq>CWf~uJ`qxHfv?iTLpk`j&36HRCGSPQGfw~yg4RRxlWBFKG>$cX?Fz^t{JepzCr1cX1!#a07l5~Yq0qD<)f<7GVf;&8iuZ(OkGi=gBu z(-Gzv6#QotSs{caaaY%&fiJlAYh=z3gP%L2bM5|1JV?_*5$(nHu2e^BJi^$xsb;Q` zm!y#nGHR^$mUqRjrda9 zyx$UdR<>`e)#iF##=&e++l4&ECi!tpL*^yni=4R}>bkE@zZXW-hfWjd42OGRKugpyQlx5B zB9xcZ?(_6#Vp}bvoc8U$Z5hpS!HjuBRvKhdn5iF3c+ zxa1&!nCgnZwPE``lwMUplomN%f>Xv-M!q?#m=eC|tD|i3NQc?caB|40;!j}^Q`FG) zc&Ly%VjiDPM7*#|b0gAGym|0mY!)zP1j23R7bwEOdrzZ)wK|V^8sb|@5w;kX8FMSR zkDD?%CHTuan4JTHR+2u9=j&exIo+csP}3%o#UlaV@S#%H#9`Cv*Ke z!turE{FpT}T>Bi%wX@z2>i(nM61*5SCM+!6CJ1V*4|--`2kxlNSz?=etZoPkvbfUw z;CC|hRnq-ckJ1lj<%vZjIp4hE3pg+YlSJpVpvDW z@H2xytt<_ns9{-&)I9QRwnM?66$BJy58QN+n4d+XpJ|QZ&U)=ExVwyzYU>P;{@6U% z&QPKw_)Bc8a;)u;#cgNA8vmv;p>f}6G5EaCJu|?;BAH%8(wKJbXJOSdmOgUivEI)U zow6~N?c{*xdHE?;jb)MUqH;f`NdS=x;hgF~w19Ayjd5ue`@&LiribyCx1ZdS1h=Ad z!bY8fkW$QVG7bojbiTb+SeP9K9^}p6M8~MG^nhE5(bKC)wpK1vG2h77(VKB@@oM(imQ8*u;!6XCvO$c}>kY%oO zEo+kmQOR~nmIOXrGb_%@_ngT~KjOj1EX3*Cn!g42k*6b}T5VbAYaH zW9KhkaU3it@#j~^D3pO}qIi`>3z61+i+fZFwi$<+Xwk5}Fv-q|hs&LMe}1n@A$caM ztL1vitBS}lZTKctr>^A;2QT$D$z!>tELH=PDKV4xH8~Q~H4OUv-As#E>~2^30!{)k ztWqOQ-976YWmO7aX=^L_Cg%jJwi(}2nsh~C6IE`rmdOX_q$`G9a-_>wnW}o-Q!)dy zRU_u(MDe4OniUgSYzW$U_T8fx?#BjZ!iLP4(6&F9eyXhmWRG3 
z4kjZ(O4SBMNfcF4Y&x61bX*XMvN3;*6quWbeR04kQ&7=Lac~REZ}FF*X~#m=jWN6Y zSMw_vMVDV!rkTpc-w}YY@RP$cFk0$VXe#<4`hu+GHz&??=wcchu-o~3C4zg9OnDsZ zRhEJAtSRiV;>l3Ud9A#aFqPfJvm08v)j8-7H_nZNo@F>6^d*@ZG$l`~*D3TxvohgM{*lCPj1m#oJOctf-<#I_Y8pWi`6? z1zPEHJe=gAn8INxK*Z0^heomVM++6xmTR>I?Gv3kcgtuM!-q zD9+>Jy|ChlwMq<)6B&=b`i3>(>P`*+O8Bp|#Ql*I{9ZG0`9&I%>Uh)ovz{u!#sm%aes_|HgIj>}3{HS0 zfWZAP|9OHq|1pQ6^~353a>T&Rh5~2`lE>9AhHkpTb(}Ga=l@$-??UNo!q^f39$`FL&6N)iWgs{C+fm= zvg)!WH7WLwlSv@nT2p}r6RrOl&TPoZ3-4KnU{<~O$mNz@_GK}m?B$?m)-`jp;dqP@ z4qh|{^brG5l`ci^zf-S6BFL0I)~W?o(-f(E3gNbK%EGe~FwM_T#c4%V&NKKH-geW#@3I2&t}GfX7h_UhJ8 z6e0%!51iGE)sueST_&AqHHtvAy)DK;qogzr~YsmG8`*RksKJ)w=zE)rLyDsIQ=+H@lSVgv5-I&6I?yR z_T3rK8%GKO0JNch15DD>yj=rlK*64CGN8?NJq8dvtvdiWI-S!U=>^#hH6xX#X~)Dn2@j zwHAyBD^c1Q~2LPgJVw<)E2TaM`~xPJCR*uNmO65I@z^e(DEXz0Oi@M3zbdN6Hz2LA1Sp z*&Cb`0{1B?U-vu^`RL8As+Js!;6%zq1uvnHw~Gh!#aL1Qu@f3MZEV%g@hrXe-`{Yh zXsoiAFl|ZOj_co*TEzqkIIXXun!#ifB30_mWo3pAA7>P;C5p~L;9_(e3Fj`fXRv^a zfe7V|WxLeW>QOI&4bzc^K7kQ${?|ZnU}k0rVuBhHD-^&O(y!pvTFx~rk~5p{m1skN zUyhlGG1SfxH*ltvw?sZBc>#t3u?b7_baZsl4V-mg^%7y(124Go7U(4nP(%}?(s{GN zcVR!sfrD&nY=jEtubsUuW%fZ$mxc>mKA^4#_%|ZMuU2vnKbKc`F33J@Q(avsP)I@o zty$lVW=v5ON~W%A!(QX(-eSW~LIMH|&EEycw*AS#&|?y+q2qI22ByBYw4j{JUu^y- z4G{tdPG>7J9dl017vmv!^_jufI|p?K9Tz5|o9ULl9y8Zd%29BSK~Dw zF~prX^lr9C6VJ_^syip&Xy54H)*=%~xXobmn{MFrSlX4?+}wX>8VK?u4La_3ExNwe zDj=l84^UYa1t)=Kp))Sh`@+_^OI3>o4~ddi?JTTU$UY2IFy^dCMQ5Z95io3Xa}&WG zvkLCq>Bx}>Nz`ls+Hnc~8VTi((`08sFb0AIjD|as_VoL#ZP87Q|yJT}7RIvznP`=ZW(f_YMd&OpB$&YUke z(7?NKwt4JLQc>h%;c>N6<_qARR{{{yeL{R7^7-*p&Yd8cDb(irMAF2|_jO?vm6&!S;RMcsH&wRn zf@uPRj>$*HD<*tgcPX2T8gQE?dROu*Qb4`R(ks6>29!jkDX?0+7)w-566>8gK}jrA zwswz1@*1o_jm>vxzgII~{ISu&@`)s^`)CyP5vh*@wY0KvN>(nTfBC%7P}U^{5M=uT zg<&W)J?HRe!UlSkQjjU3BMN9r+1wHaM1w7Z5MKvXwxljleXvN&0h#(ve)}IKwIJmu zFc;B}ioFo3rw^~XH}}8hC2xca9<%`8YaPhO@hKPZjkfk)0Yh`uU4fQyAg~2cV+Bxe z1Au68rd#hL$=S0_xmso_rk2gaxGUiK3MRE{0@m7(0n$^d7+q=3# zrm9qpw`5#|%?hzG4Bv30^tqbfNWUcXoT&Y+T`c$K)W7)^(zdM3(lqbOV$gJBFY=4I 
zz<(?UHC7!jC~|enOc#h?I?lWaGb&=C&~B^6%R=L;N>%4ndF?B)F0T}Bmmt}hZp7QM zm$di%^KJh5TF3(pQSfiQ=|T2V_b*^v{dF2p^HZ6*$X+XxmY3fiZ5j5&Kkny6L%dGg zrfTN@SgY7HR~cYj(qchNVU6LgU3OBy4Xa588-fQpI6G;OXZDfDwGK#Ms+IRA$p8*d zRlxYf``TuNouDbdB42B)7=BraNa?DU0VfUCI3Hg{KFwB~r=Ig={ z5t<17HkV0tQ&T48k49s7RBuTMDjLKRuYY(@7&0n3-w#q9D3zaJ4Z5GHrN=L>218RZ zw0`w`8j6aPLw$kF(PmVdbSyMdEyjC~Q+)d`o7k?+mzN)n6st-&PTIYcYuX1&6wVFu zCj7aL=&a93?ovVkIOL*5;+}Cs=|wi;qvpdza67K58=9dm4SnN6s|TXd@}+}&rHo02 zmZ?*uHZ{Fb3NVVfYZUWEg_b>4UclI-p(m~>717FeG363|BZ-o6iIJO)lVJAw626!Q ze*+8FNwTu(XQ#<2f!hk=45KD<1t{VAXA)(eqoJBQf5oC|1g5l&+C1&DCYJxa9B-^l z3ouG+IjPcZCNR<(&FlBc3}3q~V-p9IlMMRKPVZHOVU?OWEsr|RPtRV%4BE@+R?9f4 zYm8{7th1iBn5wUeHIp9D8hGo&=P?3Szgfgy?|Vr>o+r@#VEQuzKwf_^FJT)zV~f!L zK@%ajVxv1BUo64YhbCi3Cyo;xnO}1M+z!Ng{zxJcG#Y%_$JDA;;v6C+js_x}fN+Wh zv7&g1<4^6tEyAbVm-%kGSSp2vJSK&m<%;O`)AKkt^c-oh?nCC!#rzVPnuK}z61mKf zZ97pF5W;cGRll#|>YsG;z^2;emMZaeXiMGhsI{F6+}FkjjRaY^Ccc`(E#ni51zdCS zNa3!lSpK6FoWKM3su~h;G)BEk*R3~V2}-w`{&3{vk)l1P=RTU6NvW`bu50=p3O1|W zD5n4|L;y28X_>$gC%#)z4-S;V-z82^@&mQxk$0?y6E5a5GhKG$fv;~N>$;xznUEol zgCEqHMB*nqg|0?bCqU>GK28zfC#|gKk%#vq4zUZ~uTUOGH@fWF$NyV3R7nU zX$}cZj@@>j*<$eq(?btFLw>n#^)?b%4~>(779Pq;Yug)x!U&K-`m4**2QBLY-h>Ba z+)N%@V_w$yg{e5W;#QzH9ueldm7Nv|Ttk>=i8&;b0|Rk3sZXV(m5cS+0rcsEn6Bh>o~&S7}fGD|K=bHlE|YZyv#V{Q2J>c_wF#^Yo8F;b7(E^H0iDBl6a6 zlEaHJ-Gb8OTEVyHpWBOFY9Ow`Vz({M+5WBo5mDu|&O}ck?bSTS~J4(S%je4@Ftn(?cxoWlcA zrts@?o}$}kq40)yM#+T~$+oUi;eRR0ey4}7lNPy?OnYC&<~vC+J{}KtGPT5&SG5iW zeMUBSOtW0FlEzV}+Zk(=RC+YhwNLb}v|rJ(ErF#Kbxbm`tE0S=WZC>S*BF0~#uLj8 z)k?Vj1FyF|wFt`Fhz=TlqKEI8kc2t}0ft}w%*rEz5 zqnY)WFyBr#H4_V-o|@N}MRx*s`Q4Qdx8*uaKod+^ORN3|ZShZZ<(dg!X-?b0d|Nor zu+kKih9+_TxQDpJ?D~ik?p)~>w;FQ+mD&|w<*dSF&BJalR&vVN0r-v`QlDe|4jW>> zWv|>9K?)8P1&ug`jd3vO6W-aPns}WQ*%p{F5Eyw6*-`!=g4q-GQM+l+?0Ourc-bSe z#k7{bT|mHo%8|3C05!|9T*TA3EYqrY_ZEw*ND5W%YMGdF+Kjdt#6(m>gPX;D6}!L z`y_XP!$g{0mVS7p0ejJ|0iFdG_tX_{!w4?km?!{kL_L8)ZK{?qW$p`-RO|Y)iahUN zeSwNdr;B&cH8yh@{vnH|?qPgwg(BZKxwK8vcTXbt>~3yv{8?9WMG6S}lX2R%Ko+>B 
zP8lj;$K6+f7;XcEQBMEVizs=)+e1zLGkO=?Tf;gwJd(lCt@mj`J2EC1|87?M-mFLq z@w;=Gq(Aog8w65D+(qec|00o-XLYSs*j~Ak54k z$Wq;Wn7u2kxkhkQYMr^auqrSU>Tpan125WEMh#Lc?uIGxcKu$-pfh;X(=>#X2eXGe zgXw$DR~GO?a#kgrv84uy8zxgD!xq8&j9%pSW0c{}F0Z`aSD|N{y_&ou(A4+PyJ2bR zti_YLQN5z1fp&G)8p()S6BTmt)oXiCH2xn}x2{e+C&%pTh)3)Cwamnv_#>rZZC z8H>(u`FqxcEoQ5%t7fGCH>7Jb>{S?2KxLG1pkLTE5-~l((l~V5FqII76l-PDVAkX+ z>Vj8Qcbpr7W`lP`PjS0DRD+OGr9+FVOVUB_&L>~9ypkiS_|A!jD<_J+NwQWvqo&EG zuiDW}uLu%d_?0YY&|NB?#u9*2|E_VetkX~2Ger8a zUg6z-7^aA*icS9jjGfuVI(8Zh!d1?i=l$V@=R~su!#f^~Un-{d@xC9TOoa7jabP+N z&hYf$RLiz=SufTjKb?V1Ai|H1|3^GGKR$P;BhCRD@%;S(Gi)^2`_b$ZWwq#{XYjfg zh%IX;`KV9f2}gkVeUT$bo%PkXLqIB%{q@_UjnPa|kHz4gIQr*t9wcJ^o&*o@PTOs0 z5qo!qM<%i1ABDVnIT_-xwMe|(heS*-v(Q&1dD(uO-<82GTix*h{m(SRDy2omt-m_m z$?ne$GS#q}fFkQ{#y%^i$`sEe+wa1weX``VJwQc3pxttqnbZaD9!(@saK8M)gW{p> z(&VBQE?P(_b6PvV=n;X3f%ho5_*>mIaN4h2JukHUXsQV!%*_bQ|M@{i!HIJS%wX8; z@ksg?@%FjAXA!qq7xP5z_N}VeRa_lDoz&gsdX=&P-sfE*>86re^^m9P-}8jzyc#@r zH1$Z>^{|~136ZT?r)==8svZF)gLh|yFPK*bL*yoGP4uHP11(OSxyb0pgA3@6!uRp# zbDI%!w`AUG$*+U&Tm2p#0cERW#6*#6PptEn$b!GuozhJ3^fzK|WY)>8O34KMHnRsw zq?S^_Je+I@+E4pJuJ?#WK7D&C0-}DKXoHfJ;u4xjS`OGUr9*z?VV8d>`Yv{O^PX;Z zTng<}++vkI0jy63YZM!%kWARu%95=XSEA*sjZSSK-cM@mJ;QK^N;^BP#u;YRhX{_4 z`FJzGC!2pkt<&yqU*U>x_78Dc#{Sw7bZUmIw!=oo!igZfB6DE&+S!C&_%fWQa^9!U4_8uyU z<#!%WqHZox@7MB1kx?A#2~Bg_KT8JevG8TllnZxgXKbNo!PE5JUV+ca_`>*D_l7@N z$nsvy%HlB~k3S1#U&rp|j~u_pfy?GP_v<{)OKyFZQ`xsbyx_hN5HKe5eppEmC4drX zV=ASC>7ff&fI&1(VoixPxkDP55d};Sa$goYJyzt9KIk^S1W+T&>k4%T$<*C>c-J(Y zON;FCzc`;3op$D)xIm3Q55fzPjRgNB+xO*kx9fOQH-N(9&@`V{@aY>An;u3{tCoyo z8430x$TkZV*;EOHVY-yxaeUCh;Wi3@S&{26bdfLwRW#Q3b6B|AWmfZ2`Ue|-tE~8{ z7qy2$;C@lQ^!A)a=>2&KBiHk%ecG(x4H5+|Qv^fHd*Ya3JRp~^vQ)cVu+b@P6d^RY z9?OONKMc23P-b>ZN1ES`4z+IoSp)J8AdP?Nm+GZGJrlc}$B%+Gijl(l3+1>=9}DZ2 zMvK#yYqdG7VNaukVppJn6G7e}XXENN3$n@BZ@xMMMfk{TnkS?3l+@V&o8`lm_o~wp za9bK)35o!a9o^p7xG0RoBsHn|Fd}Jb4cVj5lBBhv6ror>hp<1Tpifs}V_jbCffR44 z8;+%EaJg|(dhW0SkCM#qGe^)sgE2BhEVf$L^wn^2S||_U-J2`W*kL3V>&}bo9;sMc 
zhAi<2y*+vVh?+0A+o2fbAT24_dltK#cPFAeE;u)446I`QAbvThEH49vj8s@Vvl@OU zCTR+9mX)+VThe9CY~KF5S|4z4{q33%p7z*tBCxAvIu=AR z{bP>0Yx?fo7y~Nv=sylNH%Z+heO#;Dr>r%k7oZ+7Sji6i>rKT_=XTFpG{#B-6gLO! z^eG-+#C9ZNm~xE;RHj~e&Bv*~0xZLb=)eDcd|Xkik2j53fqqznG2iH;p7H(l9kdbJ zlt?z7MqzhLS^o;(r^1CO4I0G=i;UtylsLNXI!FQwN%&q|GN(~FoXh==W940Vj_G!G z7;@j{4AJT$X`oUoY^v>iCSBrmW^8tOW?yw(Eic!BbAT1V@hMAP&M-G?jm!&^FJQFs ztLuuzLP$O7QC!0sdI>Wa@~>L0LjfU2_txyY|0huLn+>-n@Ax#;qPc%M>#E0>lWBIi91xRjqVU5S*c@L)ETr_O&DfY^pN{twFjcyQ-BKBZT>)}kuw{MMp@ z6H+`h!in>H&|EX8Bg>9LhXNJk{VUc$aRk=Dy#Gj=}m9P5?( zj?0e8^~lq6XBEu=0T8uhy+a`;TsdY6#1pnKo=T0@RyTiQ7G~RuG zHaxb2-g^7ws|A};^lTi5H-SuD;yT8tR*|y1#u0N&&bz$`*ALU$!8K#wy%KJn?~3C* z6*bx)b4%CFH#OslpG<8dW>F+f(OxhE*I=tKl#DzR_c0_qAS&%ek5SzF#Ox@TNP( z3+Xvu!(keLN@X=!)~W1o#F(mG+7H>3Lze2Refzn<3_mItf5JBEx1DylemBYq(bX*D z$#>QzIj2(T7T1o+arik7;1)w=TuL{DP0{4a5j<#Sc*u8K#BBghig4KnY0#F`zt3td zI>~G%?~)gOV}7Njr3HhhOG|`oH3knhwM?jZ3l8PRg$|$5E>XgQK+kl;&Jboi3Rq>zqCFSLDN>fmwsDV zm-WpF#1zl;$Gm8z%Bg849qeou3z1VgKU1v+oGHPXg(FmikQhH*SV}hdEIkEpCEMIx zgV%RQJ{Npc2S4D-CU*CGznb0D`lpgQQX05Kb_U#ulvb6cww4u~5Xa;!reiA1q=deFc818%GDP|o8yx_kgf!Avmt*~A zH>E&+lfr?98p|AfOdYMW+C4eGJg0z!oM;;4VP!={Iq1;GYoGCeE^1Jc1$q9N)8&NG zPd*#cRDl486F8LH-EmJL!2-=%bvZfqQHglo=-K7=lQ)|SR%RxMd%F>39@69%`Iw%#SjBON6 z`?#Bqa?cQBED*I#itNjMarukzBR_j2lLa%-CL?}1koTjn1G^{J2P#`7Tg2Z_lSQX( z;K}YT`WKy7KW3%B_4SmN99#2ym$Z>QR7Qb<2%up?00Pc-HE)KYfDJee{Er7O=!o?T zz<+DFVW*+ccNh3R!MBKYm=S4<}&%yN3hn{uP=W z;v)5v?XtUH3}(&@*DD7CAjZ8qyFctdcq5j!YCTo=muuGp`6tsZD&{Hl9Xa`x$fh6> zV3dv&1MV+#ygQIpFgy7Juy_Z(xdJmcPahk7KqHHLHc!I9D}~rw_??$XkK5N{B7N_p zd&#{5N}ahAAFLsgH@~%-CsIL$BdoE7S@u#b+lO?S1GDXJewTlvjj#BD6hKozb!=Io za*DVQbjEgD3{24c&vqnTLlD*&KuB#{^=Xe5q7qZe^@;6&mnW0Sx7f_&+J3hCgyL|R3AU0s^ zUNfqJXtTQm@7Jb5d!1&U@kO#9rj}!Qy-E;zPU^0;#?LWn&#`z_s=a$DrMte8X)o|l z>{ifk>*vpzMmaf$B%N>5X_h)AniRQfd{p{~D!$aC#281M9c{DHns1=)ekK=1jN;Ba zZB|z~!-BU$p3VSM$)qw4t2>W%SqYSuO=!cDf~g`-NIxJ{|1Heho%$H>NUYI?mY?n>HV`Rx@=h8MZDc)&HX?YH z(Bn8XUq8!Ki9Dvmg%CeWbhN29Be~A9kor>?u~to}g?x&agP)%))A$ZA)@kQM12=Yh1RH#MjfV|(He<>D 
zdjGYW5}u!Pr^`RKFXAMGHW9!Z$r3ox`P@pC;>v$&o1kRqPKWum&n0+u%A8#457 zLCxHLhd9CULU@a-DAqb8Idn1UQi_;lVIZhaiNHkA5QazP+_gIG<7I&WeGdF*zEPx~ zKsS50<@o%j=NvRNofAx#sg}6Q71jrqnrp-9boFnY+qzR&)}{1JKaV+WcZ0rRi^yubna*kbDsoGyyv;yZ|3RDW8d6~L z?O){dOi2I^9%M$oLQa9a$@tn!Cxf$&VPuAHDj6&WQ@Zo<STEqtMjRU{xW2 zz`b1DBVw1PZYsE4LrQb=Etq{)41vktNnG^dhH%%Wr)5c4tvf7&6|Hx`IVbLmj_5}qn-f9 z7BW}To*nxF7%>*K6BVzvwXY>0ZF~EqZ#3S5g~3`2ndvoaP}Qf9Ia_)r%S)h(Yw&#H zAAbo_m;6f0C1H4n*d{M?okf~kAMf`K{xwRfTmJE&eI248BvUpNmZWJ1D- zD)ZwKSx=B+EYs0g&<7a|o2o}Qm%>hStGFzd#*fF0?#KxtL8b{ZpTIJ7Z)&chc&T7G zuq=QSJb3?9hG0gTPq!kNu$=AsIQvbTjn-8f8FGO+3dx7HlqUdK4=n#&MQ(09$pfO2 z9F|*?B5#nG(8AOexrO3{f|5160W(|kDhfpnWa`52>z(yNDhEN6uWQd!B$z?SLE#h2 z3{u`}_XP>(>+FDooo)&*v5SvO@p}=piD^Fl;fpor(x!AhFgVqDYsXM@z$u>?`b?2TW(BAYLs_%xCUAmSW z)+6R#`KLe(;T_e)k=%UQtJ7cFi7ou8+{D5wi{x1mbAyKY){n7K66EqqyRt5d*)$3p z3yPjrPVI1D8$u%40d?}&Oqh5MRyhhTyR;z&Cm_)LUNpV`)=}#xw+ooZM~F9>%`hj8 z3el`!X}iGi;b^f%$L>R%PJ+0s&92`|o!DvFvC{<#WSlz8s8S0LLZL4X#~;hWs%0;m z@%UM{KhdJG(~1En_g;6vD52LYpakDT&P&miudrnHDhqNB#stcHFfI*)6h7 zxkcq(3VuvWy5O}DTK!Q+t`+>Rl-%U(9S-c8`LGZuk0S9E8ufbYFQHU%MsX8X}fJ2o@w!Ss0ltsW2` z&jTsXK-ef3V6?0dCsj0xNbz2)@z3WwG~l|BnRx)HzNLKs7zPd$Twp)HV~+GrFg*YT zwOr(mz4Id^A{>4T)=qZ@jM#;nm4&ZGM*p{eUM&=x(KZYHyYTL8?z>p(W&UoGITsEKYc+;;(AFMlY?l+TO) z&eBinueBYA7tX&8;QlJ+OVUfqKV)c<8+E1D+}eZJT?iOvhT>g|H8%qYkq3&Bsr&Q7 zzCrDzwlwF!np#?Wej`VCd8>^;gGSKGOFIXdygExs|U_n!%W>5P^-Xmiaph6CZE2YY@xNU8Zw-MguqOo^3FXznv} zW}idUX8|rz)t9XDL$ES)&2nB+w~f#hgX_25+GZSiTr(TVy`w6qp}ZYGXrp~6x1I4i zvr7Qb%Pk@4La{lm{LP>YmZdAH`%-&GDag8!=xt*8mO`8E2QElfh*5_LVF5-9qR7Ft zh&!n%lGA^kAZ(NsD*=dOvVWPaQ>t>Tj#l(*walu}MLX?kPE-tdNXo3SKPXN4kGMmh zrO`fnQ1TvEtrUx3LdEOY>AH~~Drmu^Fvg>;Ej%fmObNfbUF9oI!mgk6^zXl~Um>Sf ziNBaI7Ckbb_@Wk2`BPCoIxdIlW?U+aLI_pg6+aq{L5uh;n8Dzb103jXDM6Gxh7Dvb zZ?7yPe6A2GiIJ_8A1oYX)_Rx_J$Oue$&6CJuu(P2-PJY3Vw<|#q}D&w&TEUq>L9@7 zP@09H0$?Nms3I&;r1^4qQ&p0tY)iShK){$eeg*4kMf;059 zqhw{OIN-O~tG`#6%fxOFxEN0uY;y^6VRBHJszZ3jb%duONWfi{*`@}3HY4d+CO+8! 
zcy{gW>}rruj>mW)M&+CtHFLiFz!qC(IMb(pHz(shIooKq<0)B24Hq7Llgo=1XD(LP z{kc^^pvP}TbI7$bKr$WI=ztj5)#;Sb3gU`1AVv%%&*haO+A28JV_ygj=D1+A zDZsNA-V|tQTvs;++|^s?JL!_#*fNDkJNoPy^za?_&sstk*zYriTG)e;|JV-k?r8+! z^@$-PGT9HL8Pe~0rel=(Oj)RowaYU`Es;_z1aChj!)o*yPUy5c!$uU62#Xr01$Y#}`2? zX`iEYUHD&sM@B<~kfJ3*2&bhSxmgr-tc?&;d0vXZFuOe#d8NUJH=~jysgo$Kpwkr! zP-Z@kXNFv7BB~gn?{U6%8L56@0`G^maDbX6XV!oaAEQtoxs<+5rZf-H_fd}`_!!+Q&Kj|TqVEC7^`&9+@WC|lSyTRbjK zB4W!-8pL@8Gvk;G_ua;~#e%X&4H}8TE?LG3tv0QSGa+9l%voYf{J6)#a+uPa_m4lw z+ve11J8gYs7bO*9byF!ZxvWTY#7X!$@&Y?7)rjVJ9sQx8!klmD#m>p_UR5}Fg#kl- z?Rc2ahZC%Al^p+|6D=fxsjx}t;kU63(R3Z6Db6Hw;8@ri;%kb1x$@~2aIxumfPC`D zp(4{IRTl>Fw#0)<7vXdGA1pb&ovgejt2iMXNrAkophb`<)Yk9Gub5}rFv<@RrK!Mm zy)aN%u;4FGj`~Smxa^L)**D!zd}i$7jwMT{p9g=l`>UVxnLns2b=%CC({LiXMm$1t zfO{7Sqt>|Lay^d-`%ez6LGj7x@IS}J$?Y5T<}6GPOJK7N-Z$CPx(tH;GE$`5#rn@= zTO$oLQA$^$wTDBEo)1bbzkAML9(MjK^ckNAMqf90QRL`-*wyBlsp8c+>zEuOr0WLI zQSXG;WqSPi4T()w3pZs0Zh^N{eJ$T)Yl#Z5K5A9xUbD`1$NeAfhzk+ryU3Z5E(i-q z=M(q+2lK!@FXFM9PtN<_*+n}3bec;d>5{MOgAYqyvkdPRZzJRI%zsyKWY`ZAHJXJ`ivq{kruWYd) z0H;4`(xB9>{#h(fXj>Uyii$B$UBg3EhBlKUn8h&$2m0Av;Ag&6WirrkZlqHl3<(<| z&^a?mmAPjph8S5J?Ft!GJHW39AcsyS+~YPh;7ZN-n4Nz~Gbz1!jjyemI70?lcdY|w z@M+Sh|Kf`6YJPAIvXyZefhvCB3JZf08GdXP+K0&ENoxrc+UvMX%9_U23ZlN2BWehH z!WNq-j0XPPZJOJ`K?9G2Wr9w^ZD$`w}=mVbS#qU16oY5jRZuN>;&MKNZOLveNWYwDnEWz zVBub^{#>~dUMZhdAT^1P(GDRZ_yNoo8E9-Hk__Ci|A(pbj)tpy-~Q;mBzlWUBzg-$ z5DbFoqW2-9L^pbG5s7Gt-b-}R8Ac~W@14;{??i9!@qE{Bt@m%sTAZ`@KKH)v>-yZ< z)8g?I0`jT5akPMp3mEGok# zL$C4yl6cGic2bCkhp8!V`r49bAsuV}`j<|~GO#lTI|)cL-!?Bv4>i|Y%9BAZv%hoq zKIgHb6U6Kjb9G5<*;9Hx&wnTIaxlcD1Eqt{)f!lmatfD$3guLe#VnE4topf-XDPc(euB3 z=@dn=5fN$yhQF+l*7?MdpS*DYiHU9vS^I|HFAy=zeRS&5$AN7KltoW4ESJ(h2GsXg z1K!VNd)j)oMl-{Z?aFX-73iEooX`>%edaiiz7ze)$Sp1Q6e@n50XqMR(9JUww(A`U-2fM{-=P^|IFpF z?^}~Hc=!l&0|U*${174!6vo@}50KPMkv|8l7>WU)yk{@$$zK%658OZ7!ChzK#>JYJ zwWJc@Rd^ZP_U7H#76R}%GPqc@7`FNBI?->#O?W5R(yUFa0BeS;X)q}o{h{j)hC`%W zNt-8-@TCYJEObM2b>Y0Vk#g#?TytUmBeEJ`2R0>rn^n;9Y(b8%?)9MxZ<4UrnC!UI^K-ORRxC}7!XhCwfea6sC#0<&zh(o)kV 
z`JL7G*f7L-U{|a}3%y3y=wu3y)!-Hf)YrdtW>zfnsaiZR=B2T})$#YL=NG}$1X4fN zn$m9b9m73r{Ey92^~+!AybL9H;2t9YlU`z4F|Pb7;MN2F%NV!bkPUt_tgX)SK1z*; zlCEDsIWdJ>qEtPDLZPq=|I;r{yI6sAPQj~FKT9;8s(`JPk_Nohz2dM2Y1_roe z_-RcQa$sn+vUhh%l!|m~`QQOP>7?!0NNtJ@*6|l0nAug0p*f*MnFuRwB&B9VgDgc| zlwfHb)_~30R(uU{bv?%Mjgp!c07%v_XT(BOvCH;egqh+S+l)hbf}@Vi^=J#MgZOUs zme`vu@8{tCfymsFowZy|v&|%H%@I-kGAAH9l5SSH`(8kHJLZphjui5}QKpmMAlh#RN9D3x@vsyx%Ut@Wen|kxA*M3Wv7_POhoFl2%>HnnX$8v*?|0 z>RSMzLH`gBXqsUHv{ELC>j^Dtpi>Yk*j*FehVx`wF~A(Iy5|{?OGcu!gf_Vp*4Z|W zgUGW-Y1ok)0KKfO<%B_r(qYeW_5fY+7kdsQgLUk>l^?$;PE~&z^KzN2GGa`_i!3ZD zafOpqeVVR_H4+)kTj!pnr0nEzGRIsE7kQ%ow5zZ5 zxf7bBtMy!hjK#lpyW2jU>!XoK)3U_?tr*W!h#Jfi6zaU@$q@Q=L+rg}>J;bQi;Hqm z!iz1II>>5nv@AUE5Jtbq|BdbpYMf~$YW-p6SllWAbX}YYPen-aqK)6tbYlA;MvzEO zz)Ia+#C_1$xEFD3u+Nvk(=0VRZ9;D-WQV&ac&=7@4IjUXay|^AYN;G2nvp8pYY8;G zwhlgv5Xm@EXLdHyfLEgoRA)Nv$NSccz;iEi$0>4WlV3UBBB6#7LmwqsGq}=nfAdt| zGy><8=bmQoWuL1O+bQ)1cTU9dnAJJ!+86KAVM{@Y$UqSZa>@OaCn_Qd_)Invosi1O zHVx$Sf?&fY)iLl?TffTW7NcY!GfyjrnFwpL)TqjQS&3N#r-L33i7k}-BxSPK8DCo{ zKY8?e<1#Ilnk{r(IiU%#w8xf;(By+_#q$(QwR+D8i6hvpl+SH4-s@ob+Zkty!)kn0 z1r)zul@k%1;#=nzt0D`XZi_V&@M(p1w#{7m)bGE%*v^r?#&-%qbj;7}5!7nf1&rY$ znKiZ1y(rjsT*(aY^>z@ZFAx^bALwP`vPML&*CQMZ~I zg&U>Y8MB<*LG!|p0wTjR%riw8!4!`Vjl6Y@f-w*=_0}JMkN4n)l=R?3>}&B8HQ9Jy z5D{8Uj!v)>%xU*=MMUj@Ep2j*rgEz@=m_cgxy6;vlvRFi;lsML$9oIHXldSOv7;24 z&U*C=6${uAa1|(Xaf<Frt0oG$is&N^IWSL{^&>fXqq#^QU;#j)dk@IsHW9MdFim;0g?)(p-xFFY ze~1Rii4#v2v)0^3LH13!9*wvXF%u91GEjWTpR}@RQ+$3hkyagg-R+AUDZ`LNHAO?N zj`0DsZFOBXg?Y_{BB(2x+H9$PQgTWPY9^j)t$A{7o8=tqG^#-c?hdEad_#y@VRF}> z;-z8Q-iAopWE)@=Qww7_1@z&gWZ#VvIck?=QtO5jgcf1BD)e3xiKYAhh=`ysxBkrA){XBr?HiSTGiSKwmHC0(IvWHHrc4eG|)% z{z|@>FmE0mg}v@mRrvD-vq7MP+H(L0*WJT!M&qNcjK^7r(7ya|&h5K%^lifg0{C=#_FkNS{ z(e!EQHH#VEX@sZ(aq+CQ|^NI)<&F&J;c8E!>&OLr5=McQPoNmhpE%)PUUi zfBF9}kyzAL1c~3v)UhYR0MG5+$3XQtf9&t||6i-3rX*tMLlo z)#HDUgGR@-N^_%_l^1Eb4zbT?S9PKdct$mat`b+A$d_x6GdruFi^!Fqottw(XvDkO z#y03D^lI$wr1~Rm#chJu1|ZzxFsBk;>?!>1V^9rek$!Z+HBPtI6poF0%34-@E-{MR 
z-iU>6f=EiYVDcYdB294WUP>ZZ!|aq8HCfDF+gz!lhI-7CrD4kH=T9cWs{gDIQj2X~ zTJNN0l6Dqs(+s0Hwl-lvtJ{T?vU?mA?u-`gI2J-SHK&^tB=;M~tpT_cW1yjB+$Kw9 zYin)5fqtM9F1y96ewCIh+KVYlqc;18C{AAr8lgfvSf22&^Z0wYIH6-!xSt&wwze0* zo^w}RkN^U5_Fz`*MN{5_Kw4_@L%_oB8cf6&(CfW;y%6Zd_P*Y|4Lvyb%|Q@ew)FS% zR=C|93dn_Ih^_!*<#?=28lGhz#2L{;jH{`{8k8;rHYK=%E|kytWl+ zNvI-E8Iuactt}_R6KHx~@+spyNy!v9m-|bR!ycZ)9h}RTm~gGp$?Y|!StPAh$2B@# zJs)VuhY{^O8X>P3#Q$gR3>7A|F0#L;Hf??AON+xyA|XupNrWQNxmhj*B3V zPOZ10^Lqva)WVGk)yJ^ zeV^B8Q@YxJ1vm}*|Kl{Y;vl}NJ_GK{C)JCFXhe8G1U2t%!sT~tQ4X%}%|O_p7q~t~ zyFNc1{SXmJi%WR{z;;+3AYQ;Lz~EUQNMQji^pQ6!R4vrF>)StMw^M=juKGZ?I;d{$ z3VTLb4rIiUPXXLr(Js&y$4{3)Wb%1_ znwx$`K8J|u!~4$FGXVwBkK&EmcQ5iMk76wnmcGBZ~>eLGWT|5HiLoq$yZ52OsT z&<|u+3A71AMiQSU6r0L-c_Et2e_$X6()RG>#>&U+`j285Q0VG462%}YDqJtGgh-Z& zlJ*FZ?iutnID5c=Tl!1-GZ&zi{-IOa=}|2(aZ4iL$!*H3mq$B?6T2(fxLZPVtPhytb55KEjeDrRdjj0US(5pu|Z?f#a<)& zI*4;0Ngar!Ujxqq#w@U2YoKA%(8hw>_v^6oXFx@ev%9nS$(1OB`K4F{WWRJ>8x!U} z5rlDm(S`{Vb%EfMn*|hjyt(z%by)8Q=Dl+r0NaT#-($yMkv)c4`De!Mu4R&|DgTAR zul;8cITIur`6?8eqYtg0Yt5O($&8Kb*|Xab$~ANZX!L={sMrX^Xi$LWxf|D)#%mhX zc(L+bc-3!>1Im_QYmD1CN=a~8o!|{Ao4Yhe!T-=L~vT4JD2zZ@F7;V3!>%;v=hxQ_W{J$5H+_(&EQI zqPic(*gl!NI+23THA~E98-m>df)V;KPVz0lG>vy}xt?!=A+TX+1#wy| z_LJ!Ii;!X{^FJ2&VKddI2gOMuoHLmh<-l4?UlSKhF%2{@yockW{?*Iaa%=T3# zb+o9KB zRQ%97aI}D)0Z@90FihKiu2%=4h*|7i8%T9zN)hsqh!ky9o+gAeX8Uak5V=uk)Yodj zcW%(cT)vuA`Ba>*>}-ew(U`$^_ij#7Hlgc0jxO?NR9vB=P(f1v@}*5@HzV( z{%D(RJ)f1kWfFhZ*BL~pY?m7pi^R<8Yx2&E--0q0Q4zNuS!H&MjLPr1qx)(EgoH%z zN61)Ic54&`qOV7y{51WJV>14|Cb8Qssa@D6-)I$`{Q}5!vYcW-BNZTBSt4I88Bna1 z9EhMosEniC8&?^Qfm!c$F2A8I#(yIj=9rW@qf2M1Ec+_W`}pR#fvg|`*z5d#A%p)y zBAjz`Q^8cwhPr$(um5IN=(>ojp7ug6t> znC+bZXewfPSAuQ~Vr%7_$g-Z()~eVBTiP*j;ln0!14?mF3k%0CJ>Z14LAAliXltx8 zk)=d-m{}2aq;Mi#3{^e6&#Z8Rq1(5}v*x$XeVu8R_U%}t@_6^Z3)ZBV#GGcQ9SZy| zt#1Y|nBJqdO)TXhrR_%OOBA>%+jkiI%ykPzIhqq%ikwfJ=ZFT~$aHZ6$GU6?Rle;Y zeu75X`?RXMiy{Y(9aDmkgQu17%J;M!;}kNOu$M*vE0a){0NATIwv(s#pXI={jh^J# 
zu*)1PfITc_x#+VJcmBJHijJ3X@w?$4;O6dj<weYB3{Z&5ul46oQW-`0--Fy#3fSem?VLpqNwvTlUo&mE}K2n#X&r5Zzk?At$S zv*uURXHu0xtu6-;);2b`o9@XQ%^+JK7`zzGVHLhF1qT!C_nl5A6pSVe&{|5T zy#WNM-n<*RdOFbi&_U?*)Mna5d>B*pxfsnYWjnia4m4E6={^{vG~bR58g;FxX-KK|AMa2IwaVKo+5ms7{7q#9&{ucY;!% zX5|Hut9cffs*~?ju*=*zMUAj~K8}w%(2(QX$Sa_p1boM^F;7aJ?nKRQVKHItNNRD# zv$8Z!BCTKMn%OFIAA#ky9Sffn+|bS{EEys4`C{!R--V4(#(C;p9Q9HRtx`=1T^lzlgFt;$m0@3vED6YcGr9%&lnt90%pA$7mB z0#n9cZ@%rGK=G&}3$3C~(!BM4PVe;~9s0 z9JqpYebHAL6IugoUt+Zx6>@*F`4qD;2mL80qSM1qLB|xm!%6-i8(mPoUJAIK?@NC7 zfYz#9G(ux|B;_UY$`|Da%sEJS0ALTwJ!LQnmYz00TIozeGYPgi&x_`f`UV+n6gE&Hy6la*;rkruJ*m z&VYkom0hpW);K!fCN+>QYKh|?dc$88kwgMesvi)=vhCv^bc{W7Q-R+i_Q*!^1@zS1 z0`TSkChm%f6Zmb2F0$zuI?1$U276FBxFfQLgUs~@H(FwkmNC2{cQ16TS=NW8tK-Wu^Ss7sg3*>3VTAMo;Qlv$L zfYbtn0*6}P?gldUa1WOjv?Vq4090nx648Fs_Z7$Gyw zf}JF6b9jXGuE4m!zI^dhwg)MEUZi)m`v=p!fH5#Tn;7IoGA*ECEPhrI)dNtyKV|)r zMVRoi&WQsgaUn3>8WXkKS~}b0S?BcY#KEQGJmnNaze|h(r!>nn*Z2l^LJucZ^dvmf zJBz>&;GIH|XIKQ3xEYV%>+pVMaHVzy?SpQv!(oFv4rHN!zYJ#4+^DK^+A;4gW~gZff&b)_UMIqto9MLtni;&7o5^TCCf2eK1tz|IV1-ejCO1!uY(ip*zf9&H>w8VzWHn)3 z!guPmL+SOm>_*jRwoDvC>iq#$ zNC6(89hiXh!LSlu5WijKb76GEGhp%PSg28pc!|Hkut0aly_m`2w+>WUdkB3|9yALXJYnWLjo#WB zAT#ZTz0+nv>GRcoCrAVQ)Nztzn#}QsUF%cER7ZMJvVv%?sA(44L%`UxghTwUQp^B6 zD223+K8K)e0bfxF0hP!L7r67xmF*HiH%V;8&~H2c6c{m>Zw)8W3cLuv9+QG_2aKHG zZ!r$NS&NZO{!o!P_i;(Qjw2c$2?HfyA&(S8&}oFb06-3?>M)#T7`Q<*=DQS_@He)msM*g|C!q8 zos}j}X1)M_ZlXqd zKSS~8e<$JwVtrmV1Se@cQ@#g`beIUgnSbY1p8G7uYSRQp0DjtY8E5lFY%!rlRf_)t z03iYRe|Mk$Tlfj=TDg3nS^5wJOE^-;h)XH{eUV#E%VZqE^?- zLIq7lsTM*7O_t?uVpQHgFO6+k^Z;b`#$sL{Pj0e#aM0{&Ny=JoWJ#qFj-XB3VrjZu z*JtPLHOi0%0}fY{I*$ZRR`OCc9U=;!to+`2sVh0rQeFU?1^^9PbV1RIbz;Xq=Ae}y z4|AA+tpIc@_MP=%0=PCM&V7QJa{xfd-NMKOD$?pA`(AuR{#>+d_p=n`MVp%sOY%0+ zFxnw_7Z?(N4zY-)7s;aRri+C`umd=X@@(72-{yo2mF)4G1(0pQ1N0xgmVTG3PRu-( z^=~X#fuM#maF7d3_kw1LY=YtXR}NP98h+i|);&<*t4zl^o3f_VzGU z=ljfd@oB0T+*{b_w2sS9y}tM*D=&E=m~G#^f#BMH8zq&gF2qCw0~{2mF6wUcz*lH} zm;KZf0t%+sJsb{%uFpDZH(=E zp0U~OjAaHFtL#Y#@1e|HHhoQ7lv!@{rE8jyXVrM%n#xRwbhr7;Ws9kFCdKE 
zOlo!Lp11{n0(Ja&GYwRx@J$8rf((uEqbUcYRv1oM*XGa_EPA~FWrc$bn1U;y^QO$VmrCeU0bLHr zJ|;c?BSHS5~`6$?7IA%42yIeNoMB<9D#~A{fgS#EOZ0f42Fe4+|w& z%;zrIXN@RyYJM|u78EqJBDudXC}&?nsDo7^BCZylau(HK^mKo_bf8jg)wldoSZ}7e zCzsUp& zty(!^w-fzGOHl{!6v-_t)`HhKB;-X}XweE4U%`#0Tzj>(mYQ=EkP-EEP4!;WIS_Zc z2MvM2i?|cM|0cDRBM&4@38>>+1ZvYZ3aOR+y26oHs2XyBJ zCcUHm@K944aX8+d6^e-yZc#rzB(lj!YoNGVEq;8dMAZ`19Mp5rm>rw#Y}a;BcNs9K zb9=QPjM&3ooL{OR1=|n5G1{=;r*^83a*Vjt(kvOHvQq%>~ zEUyh(JJC$O25iTunsAYpU0VKY`wjr(@frLjE*NZ=_zu5-2>A{lYbv0#?N-@Au`Iz< z0U>)zdhdLEA|0$1CG6_3AwLvBm=_p?Xv%091r7LU2p@|GFVeUtnbCE(pv-6YXxb>{ zW>W0tn`5&gQHdv-iGUKBY9Nz-6&$h_^v#d{t}*aa07Dx6A`kZ({zZ>)1MP#xaj<_2CKOqo0ktQbOgKqjrbpPea0oKnOJts5M?PxDMpRNM1xP^>u#9ol{2w{i zLWEa6&@Xq2b8oMP{eP^uO)@q-iATGOWsrg&+wy=w1FY*@huQtu|Bj75UAUH(Q34?Z z9-7@|_{A>PlT*4=!{%rblseB3H)HHajvD}CdTAYSSJ8JUvKx0y6R=SIxH}w;u8(o+ z8gGFQWXzscNbvl?Z6@;EdgrV6uG_*);vjf0!9RSF(n2)jU@|y?x$gEbc=5{>c*^}e zaKN$n0~)D)^B2z<%YF45w=bUl<@~)~h~7*Gj22t{s|T`iH~Q_=L%)W*-ZK5!SmIT* zasv%$Ns#R;@^?U`j1e~JVVt+iExmAoTT#DKg#^5E9%%g~>YWa*HN^FKCtD0IKgFY&{%hnwB;X)%SexCCXSIx$qY!0%+9g@= z6uB#e*;xYgF6YlX%Y;zDaZ$l{5oH!KJr?4+B`3*5>omCN*62cb&AmkCC2Y%TUueOS zr+>k3X3$A!*2|_cQ(Gcm%cqc-Z~pcNL~(cCNs|zXcacWL{wjv-iP=<3pTNV5UF%0| z$YCOm&F2^aAUDc7W^@Hs=W`RVP(e7{>@{O~x6a!uFrbya5N)auOu>k}e3M|Z^DcqS zZIAxWa(|leoCGtN(QjeqrINdAIllyV-?i;p_hP~&@8R0D1aXjt=t3#pA%<;5Gy(Ih z_}zGD()3f@yqPJTP#mP%?|Fo%({h>%oS`Ry4!#i(8vo!BKdKG7;(n38)Th`|$XLAs4T@8aO9iTpcHE1W{I z3Z7PpY09h1EkHd=$XrROL-;ETM;1RBzx-?np4y9E%}?Y@9IzM8juMmsTv#Ia@^|cbrV^MCbr$Ram1X(Ews=G zQZerYw_DAkgi506gv6hz3YM_rqf|<1*+M4;x|_e@jSdSK%DNVcVqm3X!*pEje_YST z_Xfcy0DnN?W4Lg22Pn~!qa)2o?ETsAG}f3`#)i`4MG91EX`AZqL`8!oYW~ee7l&FE zvI^8<$%pq# zqY_0$3iWPLpOj2XPjKjw!a!RasH~v+t!%JUtO4Yly%V%StKTY-p(%7LK~LcDv3c6~ zKbK8*7){AMHlH^j$QL=E+o$Nwa3>pc@WV5i9Qjvn#w=5v1T_e@kwUiH*5Ji&sZ$n7Gi#uExROdAIAl2Zr#?o#onu@J0j4xj<<-dZ)Kh zeM`9mG?uB$lP|l;idJBY{4=&K#e(}zPwNBb9vdf;5@yzi>@)6Da74Af>w^kj^L)}> z_e050txb?zZcA(a{3RFV7J9+c(cW9t`JLh7?UR;zB!3>PX77&b@r+3l$z407wCosI 
z`YR+Uw7i6K7CQy1EvF?8=Hqwph5fN5F{?R(B=!7XyjX)8RQgNBSSLsJJ62teNz&zf zwsSuBeXYyV#KIkd=4lpHPR)%&layc5zLrs5h@jK`JZM|m$PYA}-DT6rt21KDI8z^y zpgdH~WT@hS03XFQ*5H}l^?V)xXalSVfBlChms6Cz4h}ZHvZxD)dyjxr9iL@3c4q|# zEY z{mk}xxQ8~u*0ryO+`&{P$jKgFGf|vYE{KQr047Ze^?K|drqvogrMP;55q4k( z>2~A5ZqYumRo67p{mjIr)YNTe7uzP|;}5xwp~SO%-}^JymPMdrsnUck zx-}^-Xdr~86xfvCDv626=puh2PulYlnEUR7VEKWGjq4DTT!{6;26zBBKtcb(rYttd< zO(*ns3pU*Aq*vbHRZA2&$?+$2d3^vbEB2c)=TFcg4!@OWj7nV=Sl4on*=M~&aNGFQ zNY7>c7&|>=XskQ9PWtgHvNvbyxTFP4W{)l?{BGHKjVk*CCVcuLAl&ojpWC-fbERL0 z-2eEvJoR?+__NbiwpO(8`OxLg4c+9< z2|uc=2x50H%lYc~a=a&lkMn+V@8=4%T_uX3b zYa}x5E^RC4)=faWq-N4bU*&A$kNW)nRO+FFm6cUf&(M@P6Exq2ZTobBLFmeRbEES3 z52px^-iq;%3of+IvCX$t^6>BCBZD*kr^+dx3vxlSuMVtXLS&>G5}Fu$mDODTW- z(&t=oQT0`6=z)=lDfHTTcFI#3&u6YBZloE?0!I9|@U-Mg8KYLCn21rwVqf%DIq3-& z&qXNwG$2sLd0_oDKdiq`;gj|cZ%!^S7#G%ZLm>4h4>R>MK@=yl=yNu>x}+OZL_3As z(hU<)LDt5*g1_X5)iD72%8aIDGu=!!f}gs(>%mVFt$2}4_4!G6@Kv^tS`*76hjGkM zQI%>E*}ph45Ifb+r=We)%hMklMtKgnj(zeod85*fsP*UXR&Ea8t9xbye3kKe8&508#3_ah6(a88$vFSxzuEq|533OK0;Z&#Y{ zBK(l3x5)e{J~6Sq4`{dnvu5Nz7i*SB_Lwk=NyluaLG+>Q;f-^@gL72I-ByMd?SeL> zRD@cz^QA2|>S2N-c{&C`%R*oCZ6z64M38Mcyahsa%&S`dQ@(=OwG|GAi8*?z%_!wd97! zSl!R<*|eEQcpu&yZ-t6B+WTLG`cL1H*AAZJSx8p#3Vuc)<`w>LIjz;cy+EVAkAHMm ztBRt;fzinS`6jA&Avwc%)5ho`8gXj(f2#^EPHMmloN8jl52qUR%9AO1oDh1Xnzy8n z)jfb7Zt;U7MRovh;Pnx8LXv~uuAA}H=HevS)S65naoXra>(Q zV>C>nUNcFoFvOlSh=r>^=%2$T9(2Nd?#|GcxTU_Y48P^DonV&S;N)-82;TTNnvU() z>x1H&%xYJCEjNl&p+QKAyHyFdL4y`E@qe_#ypdns^#A;H+I$3Hn($wl?V z=U?%y0U9ECsBiF6;>dC^RZoJL={fE_jYHeXJnPhCMo603FHv=ZrxaP_WxhPVN1jeg zeK_Ia=vgO2Ja;=uH2uvfdh^LmN^f&~x2Z3#vY5&l=yz;X=f|(!OakxfrfE!ms~XA> zMS|grbFKnB9n=9~*MhRQSbmAs?|Q3Sp8I1SsqU^(zuTF==h*8%A9He$nl<=!x0;KX zc$Nq!krP~_4gx7}y}MnvU2ZO%=26zJ`mhkP<4^rz>G}S9uZEJYuMj9hOYDmTfGz_? 
z?@fCsLf!b7+=b{7_z=&9WcNYymWwiZ2C~(_i-xXt63^G3x`xZH8?{{21H*j>{wxuv zv+h@Pc^f0?RE$E*DsQ@dTq*B^CFT3)1{@chpE?!QwqZ?$}uoEPh?6dmUgD*akd&*Ej_6yHr5MXPeXQxcmuCRr-OV0iDy zsZVM4QhgDP{+p)dv4hj9=~}|Dm?@sgymxtC^ z4m~(=GzcuA{STq?gRiq_Ob@X>uKGD`R;S`W8U)pehWj9$UVp86%^0fXWY1j>S~ah3 z=QFJ3|EgXW)H8GEe3a~r8l2M3{rYXti)|Z|JJVkzsA`xQLk#y1f9vAlOn78GujK6f z-=O{}o!l^zft}ZI=qPAYSjj^+nr#q4|^RhLP^otO}?{ZAyR%3a}Nsqb;LiKn)-bL!suGPOF62$GsP9+J4 zr&5sMg6_CCkESPXaE@V7ge}d}_*Y#`qVZs~c;H zGODLUfqV{wzO2<0(K~-0H47GSwM6!P%h*It+V-d{wG+WlgO|QR$t<4+JPX+9kV?&r z3<%lR;_?wInA4h@-Fbc3={ue^V6MXZw~n+q9ha5bu62rUC}c-ui~M{7sgq6s)xdPT zt63J`3mN?ISGt^Z@Mcxw?iYr$%g6?Q?+%#hR!SMVvKeJbK>)H*g-@P^ zN3!zH=*+oKw5_rU7lx-NJY$^SPC|%R@7u+i@y9{?&^@Wl*O=gr+xeO3-JV(2bw*3V z&FyNaF=ra6eC{iwAEmIwXPbN3ko~Uun7>|oqPJp5!tS*LE~tBQaQ~vKl*N;jGPe6a z?-)^6jFHCn2)A{UI0kLQkp@g2$V%8Ljq{Gy(GO%?f@b zaJ;{VyoZMdWcBykpJWtynGwxmf_tmgBEt)F$ z5A5kn<`@uS_gvP_xY^+DZK>uXS2gJO)ITp7BF7@=RM1dQ;;4?h=5uVB9Er(Ll!qq2 z(I1l=nzn?8Wh%z2AkVe!dqq9|t#srHexAat!D?N7y7}^Ra{VyHF0-a|uj6zO;_$xS zY*0{V&3ZMfACySN@ou&Axs9LvY7=Y6{dcF)DtcxyMaxZhS=&^;oC>T%u`+Dl9McA?4-abFv?tDE?WxVAwrNvZdpZTYdOA6Q3gn_t& zsp&GC5Ci`mu7vKaWQ|P+$wK#Wo(>ub(55GkbxeagtrOYTw$pbl-rp`lqdbKFX> z?og_C9lmMMTAL=VXhrHxQKwtHUn{RXwZZQBewGDYva=3aW)rhzc-M(pyZ83BQ2zR- zmjZr4FY3uIy-+(}%DDW>JEcH9B+qqk7C+s3ZwglXU6e@(9(lak;I|yCKKqJ;7*9;f zzcL-X8hlC6ICx_)iWk2p6HVt#+cKVWFm=-!dI%pos1YBzhX<>y_iR3CAGGhh&-}I+ zZz7SV@Nm$dcy;+ooAQGby3SUuL?S}=#Jkr-vfzH*e^1Y*gaUabVwrHKvKOBWyWVD+ zvOt5=Tvl~2L_4G|29FG0K3v0c9|{&m|NH#PAkJypR@*ca%7o7<>h+2wf7leGuRq*< z_A%WQ21w^tj}JOipZ}ggdBa9uj4vMmgLK$=xA3L4CpZcvY;>EM3Dh$-kKIEjc%CIl*LO=m=k~k6U@3@&=m>5c4 z(5kCwuUT)U-^?K!j?wsifOeZ2?@^|l$Q6rsxOq}oZ)fVHrj)~9k zDLaX|viMIqSS=jf*Qr%FUN9L09N^z5;UDn{!_Wn1W-<*6(dhFRsAW&v(s#oXqM!#_ zuO;gs)F`TZi=Q#kHA8PpO$x^bIE*CvOZWc8X|?#H!+HFUzTYi->#zUb^1rV2pL;2N z1UPMqoGV;|tl!{f5jk|sM3H!Ix~(ePm3apH7>&Fgwj#+HCdN^(lbQojn`qwf>M3u1m1=;PlPAx%iC!h$W_5Abz3X$sz{TG_UD-D@7T%~z`lA)4kfpldI zrle)=secSl*H71WvZtgHUbRq`sEDCoi@NXE<#&Br2}af-m88IFnB`*k$!S-N|Ef2t 
z_s0*EcxD}BR#lQmKMj#Y&q`=!7AG?%Z+5u)pS$~S@HNbkv{AEs{zm|R2TIisBOVSI|Np4;9zJ2`(y27cZClDJ~ZQb}REfupBoH<9~uhAU(`$Hx#nF1-%9 zoNII&Pv~@4KXIrRalYNC<|&?Q8pH9c2P&f4YvY33YXJ$~rA2*LI%hg)#*Jv+<(Uto z$&(}~HS|vl$KB%dbgGs$NuoJ3lMC^FR-9Y!9-EGfNAYQ152yQv?4IZDfGo%5hHYP7 z0Ndd%`RlLXbDYJy(%?F$B3NZPSzBFj9hBj^F}083%i=3Gc_hXwH2q)arJr9(a&GnJ zlAYgUb^DT-R1-!UU~KN?RtFn;Sgkpc9g?Xa52(ue2DS}tg)nnny3@Ory32^fcs3rD z@X`?WkVAG%-)zR4GjoT&=V8i#V^h7n;}TGWsZy@Zuh@C|E=ynj{>KaVw|oBc1=5YC z&GF)iG_vF5X_P>?t>MQ5DU$U)=A-**ngBzQcs=CBp|h;^+Qz^&jEs2IBf5A5e@44H zNEpV1wU^IU?oKg$&r)L*(&Qb1?e@{jjCukacJ2EV?;S3?P`&5EfYW2@!#397v+%Zo z@L-|RsWPZLl;Q>FhECnw_aH+ona^f7MS_+p63**N*ZUPaKEzqXEFL>&aZA%I4%t71 z{)H1|_6hA9dhLZl`8Sl&2(FL$6TJ6+CEQ+!s_1Z)VYnWldF5~Le~)Ffcf23)>rCic zu32@$-G={qpML}_jF^p1`PvKG8P+e^22jx&I~G4nuMs~Mf%To>7NOMs@Rl9XBawfS zrS*I?CG9F+f7e%);W2hwhNuBI!>Iy}p_2C3$<%Fey$motXi>zPMZ_ zzrV9T6X=*C`t|JMZtU)^_sCgZT&W@Cl;g5fP+Azm3&(7 z;p=|8P}__lXXk^7&Tr!ExZWHs%6-4lL4|tp+Bq~vug-PsU4Z#Er=m+#k2o&uo(6nA z&lO}Cp>5KV0oFKEIL&I*Nsh;1U>4R0-A&q!y;<*;5f1Gc#CKeZd}_UnYl!F{bd6lr zm$uFg8!rk#fck5#tBx7hRM&i!GTnaNyrYbcn=7%R?RW1YSN%cwXi2&AF47wB8T_PUVlY3=94FZnDuWA7j5skHhk)?J0P ze-dVRg5ELEOU*1oM}lvU&bMz6N>z&q@UcX=Uy^i{clzZE8I%{_6 zxfO|&mgQy%81r@!vq;1<|5$(p+xn~g7)9Hm<@;L@>nt0cvZS1%Y9V zH7etJ)-u0aI{vTa$0u6QlzK_ogvTxb6f_gnhU$hX;Dy5}s9 z^Jx4UIIwJiz1(6?YwGathu;fXCm$~6mQ^Y7zWpRV@TLfiJLTrz>4QUiz#+X15pq3&s4ovS8OP2(QDCn(w-69A3E0b_lqpNYI@Bu_rMu>ZL^QMuCn3(715e6L? 
zduPTb0i!~tE5pY#+_s7HdjUM6Mj+W&{ECyv06ucng6|!P2CW?}k@uZ7vgpl!TdZv+o9>TS67*$H6;`nD6%Mq};s zF01KOOl^944Pmqr1iQ{Vp38%8vnqVVUA=YFyCz*gZ&8YW)7F5u(DbIt+Ba_%njwIW1X%{{0Q?v1a=U(>N-F%A=J_R+XPkinBp4JWf-+<8 z8g6KULEhZgRZyu=Qjk>zI|C(=A^f6wF5tOvR+JGjPR6!)>uxgq+NH(ScoH;7*-7jg z$O}darzM79h!gjyzHyL2-$Jx}vA>6`=Lo!y6um}zqls_567G?3Fq6>`>40A+;0fga z_|pIU(9s60g{$t>cV1SQA(?{fL6tdE9S2q6aeVyw+m4nWTN>?Pu36e-aqRW{Vwc zl>U%2Yohw_%xMsk2kGaf?jM6I(?tz^F@tKVh*A%OmlU{^vxbcO zO%bfB0dtlJn{TLJB@Q-ze}!S>N2=c}Z4ZE*FU2w)XPQ7XK8^6(8U#9XIG`)@|9&{H zF`u8riaY19r2*Thd<5f1Dp6Gaz!3y81af8)W5c~&)Y+c&icav3pDlo3Ga5puK9UIy zueTs6Vh=%ae3s}X^3OhHibs8~2D1}&FRpEDPs{8CB$2}IJZbcBEGj9u8 z#C)VJyiqgvs7f>^h9vYp`!oiLH8zkCrtD^{Sb$>b7}uX#6#)6e@itCGeb(37oiDXl zXIrZ!FGS*hF8lvF!p`9Nv&u?~GPg4D+LB`nf8Q&lyVFHIitVTDxIZ5jyu}M#YL-*= zmy4ZK7U^jOfgAjWNdqbmL0OeGf(Hv`9$z{ZFpMwV4pdd{N7H7#E*jzKDD?{Ch%r&2 zN5feo-R+lM+qXr_5_h-a>3O)LY#c)1O0`aYTx4CEwD%)ekwyGep8= zLZ@O@T+Fm4bfh4K7+neRVB*DUfsF-y(CQ!htcu8oEh+n{A};DrII?725Xx$ZUeFuL zN9)&lF91t%b7(|VJ}`SJI7#<6HgX_cyTowo5${{Xl7}+YJy|m7f-%|4*BL+*Yr8TQ z0(kd@9@AzhW&sFyMN#uUQL@g{}y z8vo7Q_>WYR$@Ike)p=kKGq_}x%9Si!T-s0=hPBH*`Faw5obkXe?6VTJ^~ufWe=Zkdf)-e*@>Ha07%Oxb zYIkz$@EJ$?QxJ` zv;q-i8D2@pyL2fK7d{Os$%nBM>TN-*a#=aH#EJIz2>Ylv2$L_4ZHqn3^KPH}`v&RU zAWE@|YZ>GZgxRka>h)kFIZkmAoB92*gihgmlM7l;YX|!t)O;OouxG91Rm`F*Wmpv^ z#hJ##mo5Y?i2#!^A&<4B;0i%%VOxVfKv?@)1tG=$R*D=FQQzDQo^7ZA&29wA#V2OkC!qsYw06{q!K7WE7}*)Tf+{8gIH)LpmZ#ZSREB(6v|_ctUN z*(865C)ZFtptO09QoqC|2cCWO4ReE(tgpj(lh@Rp=nBWKAnZs5h}*2F6dTeR-mj*?%pjPg=y3Ic7?yW1&W zec%`Ra17zb+Ug#}2{ZgzSWr@O*KKS6(vix&<*f~L%1Y*`pe>{*>P%n$=6P#ZKHhmC z+`X#|x&Gf4|Cjy0UIvh4D*7!CQH#9ShoZtK$+%Sk#9_QvQZul%`|daU=ub@~G#IyQ-K*ws)#R4kP6(C@;Yzt!Mw9Gt%TsWw>*LpAkFdo@brJ1$4v zNj9YLfvN5$rGfZ^GPv1O68xo780c2r#Pxa_e$Dh=?%}lnzpJdb>f}DKg|*HVeh9xJ z{*}3O3{IJtIRK#(VpEPuiYJ|MsD^|<$uVi`7t;zZS?@Q@Fi#~@9et!q={7D@jrbDf zd$^RtiIdm#MVT1?c7xmUw#M%$a!4`V5etTufzI)A^^_~;k z<0%A2;)p`lfgty5C-FjE!)+BEnO*OU?@v06OYKAz9Bi|jt+w$Iph`YB$ZK1jks>KI zaZa8FC!C53$Mk)P@6YLbB%&ONa|M@a@J+suv5X> 
zQe{vX$3oq=m&pI>-ibhnAbQehh6!ReE41>hspco}V@&L*|2 zL$>w%v(%@i{wGsT#;NfRwY5%gHi^|7ox~gZh$RC z%ttJ(iC&VSMM8%kEMxmpY}ym@5_=tr8iIZqqEtk0uQT4r%?-gdeXhJto59FJ{F7^^BEP#mh!67di4KK!Blvd?JGsi*%3a?Z~nMEATP5L^Raqc2Nb(eu0>C4Ev@sQIcJN`>oqN7$IbH(4v{}9)g0t1(5~6KO)+{Ya3{_3PPy%aqzT|4pa0+ z_K(+`+MeUp2q!Q(LVj=nJ(SUUST})89ZjsZ2b3_`jeMGs(GFd*Gtg%JKF^8@^cmn#4&%05=wH|5PjuCodk zJ6rf$az09UKx(w}ez1+UNSNSeP*vGQ zg#xjDFNO5`@y{($IrgC+F(9l#?DdE@l3^cWr~(^TkHsF0&G?V)?Kj`G zuL!RLl1$K0N6yKERn%|Kv#>BQf2f4iHE=Do=OV+~-826;<{9>R?B7lsBX(bOSIHDK zjj8&VI{**G-;hnw@@^zEJG8$ICv~KvONT`K)NCGm7nnM2Cg|WyK58 zPQhYhF`KT;AlN#N$Lt}pImH@8BwL~_;o6I{&b))of@W|KiNodn7B?p5e8s|3y{luR zN$t^W`~AJv>VDGoJZe z?_|Y0z;aj2+|0hz5zu^W{`jqp2RD1Y)2&7~H)g;u*KuWgxlFB$K8lKY!?TPa0%M9F zGYEZ4Rdn!csvzfmXD!izM`ISj*aEwpyBgb~)*ebRmE=E}@&jmz2SFM4L~lE%i#va$ ziue}$tzz-pRaC*i3UO!g!-=t{I@`ajo86g=SL=aTx9dk>L#mSH=rJ@L8l@O&;_l_*?UOz=9*k7=J`loc_rfY}7KZ@u7XH_>0#T_32P`@$9Lue>*S3q< z+~^t)pL@e$D>2E_a;ExKE6ToNrEMEO9#DKEy{~S~OdS=WAupV$kmp@3C#8!re5mX2 zIH!J)&6+sZcsI;%>oK$nMo*N`(R#w0BkO(>LNvn*7uE^EaTkFin(e9c?_@!_?JK#& zp$)&|*J^P#iMF599Iq9LEbz7FgEp)e`A*&S%A~@ zYx$UcLrCE6gSIE5XgX+*#A_$kSZJERxNXMf%d1$jJl{0Jl!;4fISwZd_&Z5x-vuw+ z%?}fzSs86RpRR`OZ_a-`Hh9bThjND1y1`VSw$mPXd!3yr&~KJ@>|h%^r$_%D>Plp( zlpf-Ea_{O-T~fEVrFcsE&{<{@Qe}>-`$iplJptpUjUpTdEsd;MiIQ zYhSXK!k;PhlK(0PoT+~5gU#Tq%6<7jthLor>z05IZjClYM!wkNWJamy^JXRks z`o3xS_3}(Su}0LLJ*zNP1c7V;VKGsW8);-dqTk z)r;^@U719k`&IN1LP%c3F44HpvmvDI?*Q?yPg8fuu9syJ+9EV~X zRV8;7&dp1LFUtHdE#D>1U%`4|TCXUBOhDwKehFCN?tn$sanIv-PFT@Dbf^>K{eYW^ zx8+QYwwZA|V!sB#pAtY#y6kEUCN?InPiRn2sc{7)m6y7PTt794G_fR`Sb)E+oQBl* zQI06wcf4%-glO0JdlhBf82e2sMuu}qUWqnP4{oJzlP^_{ zTqIiHDo)acJ-FuGZGH|J&VEjAux0wR8)d);6_xTO z$F`4Ip&Ji+xhGfs{hfH3jkd+Ot1CVjBMJJu$DdtPi;A*;bmkGkT1-ju*BI279CyOk zpZ%U)GTP-8(JZK`5v;oMSkznlyV)lA?TAbLo8^v5Xn!F@?rG-Ywpdb&GLGUFxD zW0@WxOA?H?37n|qBLzh6fzXNCKCkgPkF6zLN))a3yU(oz&;TZ-p+9rQ#Yasda zX~yPu1JApf4?pa)Bm{Y1shE)sh^P2g$_aB|9?tpFhTER29p>&JIj~2U&;w{`JD1QL zZIDr*KS*Vy@{6sJ%*xo=sfV9X*NOrk)ariK0CY_#s}KK(Nt9;Q!r!W;o7g?GZ%PmOXLv;O>$K5SYE#S%m@~0jz 
ze@Lrh0S+e7nYg^L`O1>$7}%kTh{pVSM0hp4AB6dk_ioE#`(d&bJ2{dkSWY>c5lMI^ zM^X{Mziv6;k~|sHT1R%-(-f0TW}w1v_T0|s3jEF!&>*;{+rrnXC^>iM(4`*Op5*ni zj8*|*e>jZ);$d+0gC;W(!Yb7Hj0#ndKoTAlbfX?y$kprV0~(Y!$DXzHY5HIp-}n~r7h&M#@ILa?%!B{7P7Nk9_6j*g zMUqgrab2jMUGP%92=ncdDi9D0J?#RKXZ3U{wWiJ8<8ct6#*Na1prv@Q?>P%mfpDUN zoj?H{{5@p}CEg|>@TaGTW0Xa^5MK@-J}B5RccTn5wmo$vh9Z$V>9#YmBcls-14pTm z@(vnHkP_jfmakzis@u8sBE4MdnlY^ZT*oczU;71hm-{{XIsYL5MH3lnUT^oCdU=SU za~N>h@{|vRcW`r;5lBk!#in@sTsdVj6am-ewIT@NeTA$G`sn&g$*jFcFLYbzMu?%& zwjn<{@(MhCuri^sQE3ggavGTRkqT20GW*6%6cAB2z-Z_s4iTMA0Fb2mmIq0(;`fUf z;>N0%zYP9t;Vh~v)B5?$(mA>I`gFicAz{({c~T0Ych}fecUwp{^pbgpNKdFi zn~~WnKFmIDvni#0;xQ@qM+URNYF+@pvWXFl$hr0gz^vuC?Q=ZdTCpFap4mBG`Q@n) zdg*R-J1VL`#&=5}D$~%$^W_&DJ^OXIC@&B7=8gOxxu$#nXf~h!H{~ogHZ~|MR+YSm z!fT(A98TF2q)uNdXAQ2IAgsy;7G0{9p80gVRuV%91Qk2cr1h=;tW0tJBx}Zy3$iTJ zOg#*w=k(n*UzXR|*2h}3$fo-a6y4PFOpYZqz*o#X4BZ83jbw0s;0)?8(5)K*B7KIO z^FQz5z|hRZeGlwJAvzs|n3w#LK@RF`V=nskO$zy=k0>XiF`$kD`tzA@7AYwgANYZo z$2>O8YRARD-(62Lb=3g(TyT=yFh@05NlBW)>)kY1ATpc3o&LW5z-KJtZjdus{W35am-1# zk%#Pq?dO9j5`=U~$?Y%ti`5hF>fj}`jWU~8n)W)*HCz1~__DZt3%J2klT@{x{J7*C zdGZuS=P=U5Z!AEgbDZAfkoZwVx0a-O(Ok7yQ_Aau{xBKzngAbLR)M@WdK&Y{v)^Nk z)ual(AfpEVZSLWDyEj>oJB^~_Cr9DHq5*g2Q6hw917TfzQ^|qDKD;4`E2$m-|+$@^PJg5EK+#wC{sw4|BHu<}p zKI8%FAZR=5vE3y+4=1N#mtHRUVzdTlw9R>E^v3^ouIk-aLGE>NDhbrZ8!{H1k|~E}HI%kTM++kts`t zrRS8~g)PUBVENFV?oCaf9FR>lLdX||mRqi#*V_wKWhxY2WYT`ZFLjf;-*tAwAhY^X zK4%!Ww$w|7_6P3D=&UD>b4(BZU?wU^EFFKwa9`#9&Y<9*s1U{>0a}L;FOFUTdvfKy zOOYXn;B3p;ylWa50jduWBnz~CSW#>}C(s(w(GO8R?xlIpb zm(xT&+e5>6Wp$Wa!JpA9f5J4xBWs`XH-}2hH3f&KH3FpKe^E&1Uj4z@YsWA-!R!9faOgG57(+yY$jtm^bFNQVdElBY!fs^tD zHOTN}M^fO2Uu1^*XLHndb&sDydr5r*8^;~p(E&GJ9cV0Z$&lVg$SaBWUi^Sx9J(fe zzfLz#HRN2cSahvArf;B7l1&c1b`9C@XZ*C81ba7hQ0+N8hL8I20~HiwvO6}TiM2}< zVw;h=@)*8knDvB__(}H*?Se=^;bq4cDdG4)sEZh7b?fBI7-&V~DS)ic1uav4-SneB zm^}^mnM!_jngFA)be>_o!axY;djOn>@Kp_`a26 zB7}Ih+Ay3^;Cd$8qgOs+JiHOC0s{wz)nwK32qA)rztg6qE#V93(Pn|M#!F#UYSMge zyUh_GY~VPHtu#y7poT*W-2Skq=x1Lkqa@>B?3LG|VL20;1flk|VKBAoI&d7{63&G% 
zToQbvZepV;8%1Djw#CW5$Nub6?(v1Pk(Ce!uNr^HEv3WBD_ae1Aw^B6qmeUD{Ko&) z{k(8128Ob1K2}0UvR92x)CIRTt?xBDOp04qc;tMVuk~x201Q(RUQ8!6f#g3_{?LHX z()_`Umo}k2uDitca;Io1`1q~bY!&I7lYsh$KaoGLVl7oJ87aEu+0eqE{BCO}-KWBT z-Vmy!`fpnM52*Ww60bAnzZ?G|V$Ee3-5V=^46r@}U_tIoA&BK;_QVw89;gD;B94h$ zZ@Hu*Qd&E1f??}1@UpQ+RnZrhmPX0(jcS zlpD|+h>4dw``KMW@%RLWWpRFs7&E9%V2A$?0g@)w``bq~x&$*dyH@d?YUUCl^!vTs zjFW0+E!?RDaFAZq2=NBqx-o@k)v)FEUpp?+von*&+>y0f^L5zjZoastzgo}#u*Di3 zZmIDrHDL)_NgnEaTGgrclWQjc!WVe z5Piye;Xpxi(^`vUs_6UrYt=5r9d=oMyTxXO3i-up_VAk^VuC_8U#64@Q!)W#o&xR& zAYm$g!?4ss?#g0A5xTA+BVZEB@jA4&MWO-JpAOFVxAv58=Lka@&-b&3(w@lM#SDMJ zEIAh6ANusDK)(>u5I!cbB>c6@PZz2+q5~@opYWvmmIjfr0~bF#Z!{>Vs*0bBXta;d zjvF*<#*87yYt?_xMQS$KUVU)fuf<{V^em|0)nxnj_o|b7P1~Y>Ng(GspK+x>;a7gB z{}aH?b$~FTG)3@s?nkOrPrYq7i@M^;Z6UpdXE&_ZJScSn)vvzb9`>Lgum&MU9)fy7 zeL7E$CV|EP%`4C=No3VnlH}3r`%Y(p>uv_pKUYaTALt8k^v4eh96RykJVrD ze#rYK6~vEbbgVpCMdQ3f8Ub9xJFRs6{ z{O2|B1j4mJ3IwBbqfp>j>*}sx+^q9R3@BopJe^7#Jb<3(n^*vdFZj=|3|v4o^A~HD z6+ozzUi|7S7-@iDxs!Pt#uS=2wd^EI$Ni#ePQSLd46D^1opk#(U-=jCVb>sA;(zrt z{(b5|OYK0^(Av>!^kxe{h%|U>ZwvspTX&nj{gg+=AYv~=93Ls8Df6E7 zX>Q=%SZKIAQ$3aFM(RxA@6reMFhM5!YP9m7=N;#$Nl%9Cu!(D$NV8BX2o8RJ?D?y5UQ=nk%<%%W;f*`0i4df7DhHWy*yR%1n&U(%E|8Q_WA&? zf+_u?22#xOx&|UV&;r!Oqt_#?E8~Dw>SC6&SezTThvQ^m&qFixLCvznG=Y=Q>%;wd z+`xF7+;f(^3=@>Smpi0HdJk5mGRg9J56(3r^vR|6!lX0+@3{lhj+;{jaegdxE24*3 zF4}}T#nnZHD*}BtpepdaF!!bBX3jiua0lmf*3t1v&(GTcO-M*}Z`vxyrAaJsV~<$g zddKtRjH|tXP^rCW(FR^IX++o^b0hM;=YCOg0xx(!7%W{h6Wn?(9{Rg9M(zk`C5~%z zy)PO#Yl($!*KKW`c(=Y!KltuH)`U6z?bjxA$V5Cl+ULXDwS5}9#X|IX+u8TCk|g(V z(IB!}L0Y*gus+Efyo%d)H(!&Akr9OoVODIQw4;4nDe7X7*D{dWi>0t~xx-LFG5q4^ z*W4zl{fo|)A|Dw0D&(>3R6LD)NZ?Ns@b@6jpV8%$Iu3ynr`#E4nq=wr$7HR!5jnGa8S@XtumOra5n~ybh$w6AE2>`}XM! 
z^Qbba@=F+M;KRE74Be>-@L=oV6?P{RTfkUOA#CgalqUeJ`>!*hX@ayN?`$uBuNWxu zxdRLe+s#)xG#}alXVAX?D;?!w|Ltz&-LLs%P>rno>_3^BYp!&jU}b;H)k3C2-AHUj z)WM9Ic%d~&?gsih`*W_IAM785y=`wK^yr~lkrZTl1Tqb)j70l*5=^^k(i7nn!+Vft z(u>kOn3TY4EU)QEe1AVt@K$PRa`kSA}Ou*Yxn+s*4xv#P{ z(*!ucVROK=)BK?POt16ie?P@v&-D5avp0Edc*52e^l(;9dhhtG3A1n3J9<#_`IHo^ z9dWA7Q}WG&01lEW%@bD_ck0)sH@e5klrq;%bPfFKHnf5}_%z};n^>9s@xtIh*|IWD zh-6ZB(sl#c#G>NIQKd5}J?Rh>JKM6)$3dKD?c`pNMkJ7^z>BEa&RorWdSH_=(yY^+ z``#q}xaKJ5h;qR4?%4yB&>3sHXI7D)y~$>YDLDR_`6*D>(^vKSaOvG^eP8#U@RCnk zaq|27Q|#H5bE@#~rC`iyjgI>L@rL~u%x`NVoucQQw%`6S+-iwz2nKw)weF7%?|$B0 z6I1~?zNoX*&8wl8QHUSWs3J{YW#XXdLV1&2GPFYTVEgBTBrY&i)0YfuqnuXt8~|!Ruf~|5L5Lp_-u%t7yWhXZ zNqkM&I731us+V`hVg9~O-+#V?ZAJb0Us?j>`Y|dD1B&Us{(pT<dafy_rI`OP zZZcX>alhQfh)dtSfA!+CxaJfOXAu~otuRFGY5%}9yuX8xV>sQ-q){y+pz`1;#Gb4?U}V;;pza1# z`|O}b3UCL!u}V$aZn85(jEWnEB(R1 zSLS`@ZlWSKPqfjqjbUtoqXR^TSGux?&O$G%3n_Y53{(U}6_^wXhgBa@!0T$}(06ZS zHX-lLS-fbuvwmSIEajLVH{`6>z#s!PyO*0Fsr=~Ca9-l3t$%=)JVPk|@TUuRsko)c zxbfXy(&+ccuh{XLHj`)h#Wn36hWpz{=+kW8=2^;+nTp#Hb*CZftK^n50^Ht;~FW@U{AOL65MDiy-RdPn1S+e&#t< zO`lMPN}y(;n_Dq7-p=M7nA@!;)C>+h@C!Cl1fWl3zbYZ1bOc7?BpW2`^-P}=*Tg7% ziKSFnhgaqqES7;yO*72l>h^5zA_vJH0W$*2n`k)qQ}Ys$n$fZ$6P;3cjjurd^P;%j@QTCTLSMTY|nl0Bjwqn+PDCwgH#U8nh@*__|Juq;`~QKC#i|?!JK#NtFg_fmOoaTQL%*I>EB6+!vjVA)l#lY^;^tGCYQ*N zYELEb;s)Y{q+J?MxpN=6(`CI7umb61rG4g)SIf_g-*^JGFUsHoR71Yh3iFR`D_wWr za)PQ)dLwXox<6(Z#&CzBFtrhm?t ztuJx(SGe-2S^j$U(9kWy|4o6amF)7H$bXn5z!d$Vy2nnC><>M*Op1F2e0g*Y(?R6J&M0$CGheAeEQXsOx9@I8}sz4Y-Cmm1=5#N`~VxSy^Eq;EiG#KMN@Ux-7(VL2IAG6?0cp zoj2f+EcmyJOHI#N3XdmFfoYK7|Bg?c=Chbmg}i=J~DnCw_cw zaSEN8csUncEB&u#4^n(Va+I6Yos1(jhktsQ0VPGQ}CLypg%*)XvsXf7~CKdRWqq^-$OQXcb z_OBWkWxsYL8?5#tS{qMwEB9l%6=7+DA60qRoWc=OAqWP0W`X2nMa>~u?5GMn9pe&!A>ZOd96<- zwl{wfwj-vk{#A;oT%)>2*0~Y*c017#zz}TnP(by}R(2O_ZBLk3zu6fXE}#KPQWyD< zHwAR@-gxrs-NVOGm&l$Un-5u9CPld1hVbeKt@*#ysJ{{nZW_c7Fib0 z#aVO+5(gfX3q;cn+f)#(F-f>*hr24)PADgYsahbzs#9bF9L-U4i_!fDgeE$ca<1Ih z1b$MhTJgQf?Crk$(2_?ssStBHsLm)3GOQ@755fddB^DBBQNIjN-w=Fd`lRX+$$c#+ 
zIR5Y<@vgZ?jJAA$d%U)%Vj$MP^W?y`UOMhjP1Czcf+b&jGMoR;kr8t9m>Qf-WZJ~( zXbq6Ht>>%pL%(8wmj;=@get7Q^C`a|5n@4ra*YTS355Ts{OntASJThw3}ZaTc&G$RK78Gmko_r--E_mE`$}5s>8I3L zWU$hTj*Mc);&ykGc#0e5Q8mG{Q`P?mDS0sZiO2EV7;fjnd79^3d1N=dV6wLF+UdG_ zpTrf!P@~PM^i@PX7oXxvVOdF*c#W@VEr%T#L& z)X5EiS{JT@2k<=El1j+s&&o?}5Ja#>fCcb^wU?T;^uGT}Tj1MBs-W10qM26BHCu_9 zo2ZlV=8Fhw&6V32t9AFeZ>N*jBqT*+127i7M0cuMOZO#P(?z>^<_clw?azndkV#e&66o&uV2W@1EE5Qm$oNc$9EN=12@0VMRfN;Fk`p?0YdEfI2&0K+pLAfYAEBD1&B`FJ76pEzOKw z{&NuVUh>bReGCCm2b`>o$jMJtTQaze`X*Gj++M$?S)fX4y-|5X;6?62ZUDIe6hr3T z`zgQR17ZejHMmcP!oVg$2uaXuDOv-u&!7jJq}Y|~qzG^n z3vp>1D3@Xvbs#E{w<|NACP%AW-~$@Sm7g8H-Q&=?56+-f>{ca5@QtIvfz}?3sx7{z zq;d?rZX;i(Rj!=9ffv=OdoC_j%VlsXL}%sUQu*2F2z_v7I>M-~$9NoWQWXIXqsU}K zWkg70`IVVWfV$ZT*3FGIb2xFKqN#-pXC&Xul8N5ClV4bv-Id?4_CwMiP_MK@sBWE5 zB7$kSiiQkN0!|E1?gi1zo-fY&J6<@k2J1DaBEK8X-YIfC96N2cle7WowWO`3ROK&8 ze%~8^Ep4sarU$bMXCV{W>w2Bu5-s4WXy?q z3azv9*>;qpV5k9EnDoMUN}A|;eL3l-pfaMANP>biLd7u22x&vo5*T*ns)iM+qtybM z(Y^6x@Iz`V@|#_cj9fDM=JLMh1=-}{zJaF8f;} z`28u8OiBhChD!KSt^;b=d0(yEn@^}^L_=-+FCdEg0r?6$^qjAN*UdfU!^Q&Af=HuU z-uDH!{sW~_W<_gF5C0eFd%+||rIK@UBFgH&F;`>kcodQ_A0?Tc-&6n2q38m>C8%1C@zB{y;)%Ce|7;Z30ichTE&=iyhL4n3*o6B=GpwEFAM=m&axXW z=)h!1MExNAEW9>sFLjbk*ttA<>x({8=lIMu#t(NF1l^hhJE}NlA*)qY5KMUnH3ovo zMhX0rx{(Ho&XJ7?kH%ghKPP#ML9ApuSXyI$&W&H-#(^e>Q%!8B*$3UwABQ{p`YB#e zdS-!oAKA@~2YnT}md&_^GW7zz4b1E%dfqiODk%<<6jDx%yG`nROE!&0ZCY62KU^CNtD z*@j;Gb#gBl0O5zBU;5R0zmlgWBHGj>LnQ*wIqh{z_Lp#Zjg9(%UJxwNUY?3fXU1Hr zm1sjAxL)F+0|^jcrdDBOuXhJK#z}Ek(-94KawlW6&{vPxGN?hXf;D{d7*athOztoG zhAEtQj>w?hXJDic1eBAP7ypB5m?XvT4sLSXf>92$^z^ndy91gz$tl;F(gTYzT z93XQish7eU!tHB6^RnZEU(L|pcHUueHk3JAz^@Fj*2o=M2c{RroLd_Ys;c*!@YRZ~ zsECRcjMi!7a+CdkW@`W2Z@+1iU4gtlJ1`q)mpIdkYL)SS>u@-GZ_XdpO5wF3I;4uV zijj&Cw#NYzLryg5eUCmT1um_E;gl*saul)=!PCCxGikI9=12daDom3~ zjU)gwx+T&&GL4ri%UZ2;VN4_ZU1dV(VFJaJLON>z^|y4tZ|}&Wr`ylyLFm&r^FECt zRaI4V#EZg{;CsIKuO8vB28)0Ux)ouaK+axzhJK`(B!1K5Pl~ZOT%qau#~3It-6EZg zoWW4w;c3b3H@8}(lTm9zCXDTK%X 
zrNb#>F(SY%&?ENm7r7_R0v5H04Hy9QsAtPF&A;AZ+N+2DRYQju#ni;!j(j@7?MJAe zFZW`eET5e@`7ztvKmd-5MtdW5RgfDCjG>1a@=oTqJ~ab+)h;)F1`BG6tbJlix+8|6 z9D)d%p=fWYH_;&|{cJZFuUvX{mHG?}F9(^CAPLMOmjicL{ivKMp&S`Hc_#J!TXa94 zd-@iswo79M&Zbw-R51kx?{txGcBV+Ncf@O1%AnFhp5R(LLq44Rz!~S@s_ErYNFt2s zxb?0|L_yF(uGJ#3>?WI=QSn3U;EIQfvq4e-e76CqGgD;loAOm1?;P7KC3i&usahYA zRC~WF#JYBZFJEX<+*|ubOo#RP^l9?W6IBHHAJ^?h#al@nOfDMYkNM3!{YY*z?5 z;*AkiuSdW~+I!@Amr6|CcHdZ5jc{^fyBjkDiE60isQSLbK(alRU<2vTv83GCkG?M+ zQN(jTXZLA*Jn*yZJG$4Zxh~lX$2pSIyQcbPlxNVbVq%0k!gZ`|RY3vqePBGngQ)iG zTcY@Ddgj;O@sV)G!wqq{!uNJeUoYNmS^3tw@ei!LTyT=HCq zqGi-YQ73|fUJ&d%r?0b^O@(tfU+J$btGd!n*(;a9?to={?V$ymtEpI|TMc&ote0Ij z@Ms3kM~SL}W6~ad>c6UE3z}1NmObg?2Nd28X^G}ksie*@NY*PCL50o*5l8KZ?;ehF z>tEo&QTYi~&O(}pO9vEFAID);-V5^vaaJgZ=NnhECjB>YO>S zqc7QNsSU0OnQPB)ndc8B6FYQt8;>CzN**c)5qU>a`s!GR``4YJ2^->lN?;GLI<+*a zpp1=eGicGOfQ1}!hdy@f^7dgTL)Jqu(NhtGO;H6wFC|G^Tg$!S5V=HRP}cLXlC*v! zJU;pgq85{_(TQlSJyAmMcdV@;R1dws4@~^4{{awM zX^?JL^CM>g;h%gntCLXAby7d|Mx&99Q!k+J>i?dlpTr&0p8N$$T20R99Dns0`C?cB*LE%%D$~*R`Swv?|*C8a9EN`OmC9CDy?y zyLljV<$}⩔a;|PJ_KUDndY!#e>)xQvf~{{#6>9i_484WjV984lEkYN}EXW#CPU% z<>a0{5CLudii>n9%EyeW71qON>u)+Hmj?DMg$Xy$P=l(HwXX>_v(B!6bC8yX5OYh- z_2+5T?sHV&C0H^`kozDUI{pOfP4R8u4FmC$-=G#nNwG>SxMp3f zb~z5>zU?VLZ7agGban-G0x^Q^r$OaWo0l7b8SJQAz=D^94|fPJ)^N(Ax#wD%{oV!H zz38Vl>}7Hq8sD62eSul+zX$R2PZ~mG_xbeh_j5b2KDo~PIV2X_(iVBGXa)#NO<#ca z(&CU3_D(UHzW;C~DwuI0Q)Q2mJC&jo!%}@u&uemfBC6KDCI5 zm9i2fp)>41th+3t5K#r{Et|0;MF8qi2(kS2$Z0u%JIevwd601~L%B2{>-N@5qU5dHBwpTYntukF6uYPBqi5 z1(!iy+ex%|X#H9L0WWo~rDGlo;{to|3Qa@aaTlNGxq%29x>O2uiE`{GQtZR_!Uw5Y zrSea}1;T@Xf>y`rZkKp9-hO-*&}aiW>{`glT`HP)kRal&Y=H-n?spj>LbPP(@++ad z{*|I6tI_t;2AKa?aPYiD*Gk|K<>u+V%afCq^)g1Or{g;JAePT!F-GPf8 zX2>yC;X9(zE?#j=j~4lgutCF}%NtLZxtqHLQRqVyAzf*_e9VfcK3K&a%Ue!8+%~|& zj=dGnlzUQfJ5P?w7Bj9e^Xi^#>j6HD3mfo4{vyQ4IZw_u63@*g;0G$?=@5f%P(ivIBm|^k=pH~sKtf4HT0**E zXrx9!8U$wOp+mak`SIWPKHm4a_kKPO^M&J@b*<}M=Q`IqZ~HXqW`BS!K#n+Q214J6 zjWjI8wDSe%%G1Pp<{CQmd-Y&vcs9J`KX4L2Wib!}&R_)0^kvfc>rwy*Fm~Q|eJWaW 
zap7)wtFc%+5?W?@0u$1jka6N?&Hs9ak6Ed7+JExX2r|Sz|LiONN3*218eRE*Fd>*@ ziQ@A&f8tFIe;G`q2i3>Nioy5E>XY5C%wN-rzWGwa@O|Ok_#NC%UVgGE;slk?<*Q?TkkYVp|%t%IC$bvs)=7_?QK z`m4V0-gaURyRm&P))tXE0F{*OdA~PTGWd&Od5{>>z-g zOg-V-nw9xWD@F%k`WaOKoKL{T5%VvL*iVGMOxsYe&%=b~^V!(cLx2Fg6wNUw7^$Ag z@&~KvT@k#6=Mz@)TY>#*#DxMLjG|1ssaM(fFjs^4-H(AEN|S`E+{x=3C6x6e5k?T| z()RmA^S!blW&VE3_USgp2&hm961!iBPzy_M!3#?r{j=cpv*N7y)9cBN_1}&)G zg@cqqt4T6Y9r%Cfa|9%{pZk`l8_WM2=<(DEwy5CX=H@1)uI&IW)JCSL3`%wTyh-)8 zjS5I2Bsj4Io$@%4%^%)|#YaG%z@C0qVacGCek%P2l2(abz?WQ0Q$`NfpJk|(=Cuo} z5U*8rT1h&?LYX|oTD;?8{vf<8^rwS76EBl{2gUBb_ebpl%Q*`l0$_Pd75_CGy~cCM zf(y+s#XT&do~|`)rq-9qydeHE#Gz6yz+CU^2+OZ(7VEa$ytBPdadB8lo%+ zLA(;r;I9)%r*@wQF?i29!qFKiEPrgBZ-XQo?uJ?PLIaOeAI*&1YL>J>;aMJ?MC}rC zT=Hu45am73xtF6 ze>~JKAR2vR;{+IjSRYT1jfwP0wzWp)W~gD1!E&VisN>8A&Y;HbK$=|H%Q4sRQ(_HGa`fgChhUfpEiJ@cDA%Abpl{oKo|3qVZP(*V zUBL`sbgVF&Xq^q_U{DYY{WWfE40(z5_IJeZ&n_6)h~Ts5W?wA3PSIzUbNIUx%TA87 zDxRLX$G~DBw~c=h9{h1)r+-`q(`B$-_5?o&C9Fk14nY_J?{r4cPe9~fb<9lYYCuLt zhI(kz0!%Uf2Ui)O1E)%-DBwcsq$mj&k&zCC0~-`bC4rjMqY#eGF3Aio=iJYKOmDVOL>)pCphnz``{6 zDTkOG&o%G~)3Z?7eNZ*~F8f12$+u)d8g|&ve$hV1ooG#6G0$ct$Na8-hZ*h+cYXKc z8$)uGh5`taXOC=AQ4+lK(;jPQmi7&9Ntft>``s+^WssZkPR-A~Y?tPUi@C-l9DoB- zBF2as+y=-aYACQlEc*{YXa~4ToQ%CXHS-XX;S1&^vX5!UL&V3&^Wg`n5my~jzfcoa6A{ST3h7v6aZ>X2?b#j*5K19aJ+L#m!qXkLJ!GAvl zI}%M$et&Wd9~u|2Hxb*jc8|Rij80-nz7WoLIcDMxaxJ~zG&&GD)v0%TH0@`=EGG27X64#YRjpXN4Tys|Aw z<#Ls``9XcY_8Ws-LY0z#=;HHCquI0!{)kgEBD##(H;C9d@!~7nH4NWIbCj-}UgBqT=GY z^a{@d;`&E2PSnG@*xx4XJviyWFp8443l*dwd>dK{ul$&Wb(RTf{{BVyVhs5 z#wAjpFCugdUTF_;-hX!SIF8|mt&~b|yJlOgZVE5}bIbLg>2>^rR;Wu~3e2jbQ#U4_ zz-k@^sROrZ^KtbG!;h9M*xBjcQXt)bxlXrxPU^k>ria$Ot9^%%L`IvLV$b+{Z$8>2 z=mh3>>wV*7b|TD(W6&YQ_t}N8c|3N308Z}z@5Asr*=b-&CfAcGogh4|UV&d(NdyCL5Um~=u(v5j80RH-Q-aie zo62zGYFXO&uQ>4dfuJZqX2mEr-3BC7XfVQC3}%u>mq{m zyn(c$v&As%B&Ehsr!X;6yN{-B3tnhhJbX~!>V|pp72MaPwrs%_>W@->`NV}%tnN{F z2$s1YcTcj}-LR$Dm>)f!Vj?#DJyX7)8H%E`=6!ZkgkxVe-%qso6^zah%wIONn2;{j z8vj=@4XkEyb2&>biv*6PHQjH6*uS2>*>yh}wklJ~6wVIF+CTIO$^w(%ejCf?|0TgB 
z0TLh&em>llLVXG}%bFjA7FD-?{?emWhHntM4rhOAne(H!5cKtkw%>vIX0Jcx(r%&H zzr{s%U^xb<4Rp|>^dS$lLJaYQ_jnH1r8x*VQRpDh4^oWvy&;d?>YC`D`18*UwvsIiu*)r-aPJ@%tlEQaNIIzM)2G^Ak z*|{vZrQ~8ds+=5p!UeH+&Fymm9}q|xO_bf~|grh9c_<{>A z%8Lnrpl?NkMfnh)jXJ?f4_jt}m4oO(c0oBu zW$_1L5(OCYDTbh5SuN8h&uvh7b*IqxXtrqjuh)QBnbS;QSwlmfT-eQe;NKOufvXI^ zxsr>C$aaEixeqK6!nT}X?q74s8@^-#-TV6QtUd}fLxPx$YS=rK-{kysgQVj8$HuU6{oKE%CJ^+BS|8a?-@rBDO^UkRnl^aHlK|4!QVmiW)J0LMyyXsdNZO-~RKU~n_pt@t_)&8@n% zdSh0Tb3O9t(;efCDEy6$i>Du)q51d@4OGfudRLi@PNZJN$Mk25lCXLIM$HLAx|0n1 zBCaT-NaG(ca3&ZA5w`6AS&k#gZZ#Q_TVRa^A9vTT>Apa^{3SGx7WriKK@6HmI{0YZ zD&X@iNvb2?A-E4t6oQolWJjPA(W>KYW6fX;;WKB38JoF^ZKo3QoaH&DU>^$b2{MiD zx3HL32oWhKsSPPiOLD5Ye?V!87+m-8J;4gwi_%cE19kxV{|k|SM@$MY1v1nazY+3QjH!V{KvP~2; zu5p%+LEnB=I~hC9=u-$yRG0<)N4zEZ4jq;>&cw?x!t0L{c)6K@Izy`Xupi`5y7?&4 zB+{n+)sIy{g0!{nGTNJ zFaeej?UKGc2f7}^$A553e9R?ac=Ss0*ht6T^UTMBgkGm?}g_bicSC&0+#E!M0;NJnMFxt$3?>er)}<+}A*E#pqi zDEmggn5ki=jSs!4u%CK7L%?A%1&dID>vU6|stBBLiQcV*wSUZruhCUFd&3g9vK$UK z(uC9#S49G~))dEi{r^AB{40j|86JR;D%bTsDN0u>+(?t0fj*|HHCZDq(+A{SMq$HP zo=X*uwMNH;X_`!!(jvaPV2jrv1o%^=Iz)=4KW6vBTbjY#5tvwWs@KCg#r=rl`s$jL z@1GK*eup7Uwuq_e$X>17g6O^aXcvJ= zA<+9obi2lMZIdZX%M6p7i1FqwMYCNY3{kU zA%eIcdlM25jVgF%Uw3=#{!oDn%-Sz%MJy~7)lq83>gc9 zr~RnCp5(~Z&hEFokU>!;)L>%9WwqB-IV=yL$;$-9 zcp&0$sx_qQ$%XGENIb|cavckpt!y(tcRS=u@6>IewvSh?NMUl9mwhb!CoFJzzSVke z*lM8BK!b#WivX#I?SgF1;1>{rU`-U6<`EvI_g_*6@U5(f$P7>!f8Ge~-{l$O_l zYtk~1_rU9v;bbyzR;O0?82@x>he5*PWNfD{HM|y;ex4O0V9KgC!|nH{SI?^iw2RQh zyo^J!$1#BNY9qa${+v#E^)AyLcOpCME8}Wc&7zk+Q*HjiW9d@rT0`pby232#dQ5bf zto}>BkiuU!LLj}`^Xk(amFm8|-U@J7xuR1Ux+^8x|zD za<%iCh>Z}^$oVa|88`GyX@sfE?{AtV-ZtupkrZa6aS$wTCixu70XxzvKbuyYj-=r( zwpq%HTKpo#d^~WnQ2cUUITVc@c^B5+A87o^PnQhm(SLht9zcLlX;Wc_njXsuQkgmL zdk0QYzj*(`h6=-XJ`fm;9b!Tq zJfLi>QnrvclU~LeQV*`gRa7hnpA^?~u_bS}vyP zg|_Nm4^!R`K__?}Hb*BUKKUu{hsV#ByPR-0v29q34#&k#$Auvu(bG6kPeg2iEH+fz+bB{VBasRLiH<{{4`<$yTIGjDqE zzg`XCV_R`#n=Cd0*okggRHWo!<)iL6Y)Lj2JGl)(yq2i$+F|g?w7kH`>=;V#)WaQ3 
zW@~Z^w0m%1QI%pt;=l3U;hJ%81h?U(5FwYf;x@kAZ>#dyq?dM5`(&koAQI=L#et14 z7+92*G@|vTR6yn|f>ydnUY;lg0W4T>&AGgJR#VcNS2k&he>I_^aW5;Mh7S>v_x*P0 zNvQ*arj5hcCvi&QJmZmiQGX(6ooCt1#N^Th;eD>_Wr~ywiu#ShBKLtt`5m@5BJD#7Yt6|=X8iP5^a(!<%3 zL*($cl_L1~+6Ry|8f6m3FSgB@`@BS@h0*H05Vcq61MBf>3QRYP$fC9#{H7$Yny&@H zVaIUPAYiAmdH1hIV{#(%b=`eApf%**LlnHG2qkujxyUAQD28@9t9@_FqN;gNOF2#U z@K-Q-9l#`r?IJzm7#KPM^mOnLqR+kSx5uf68HlG%d(r6H0BnQ39DR{MWa~fOOQi>L zqPHlShkDU6LQQG@GyI{?%^IM|i8<%{r{;nQwrN{Us!KMbh6{Uj$Ef|@%wif`m6;F% z3{$O9xaYJU>5S8frv`>EnWUp@+-+yf@DSfyl=1poUcgve~Da);Vs;0_^&*R4B?RCq>J1!Ls7;9?U%i&c!<_w@YSvjji zeS@?gI4MdVW7)#H1^mw{|LAdcYkuy zx3G)HcdIu>y2Dg5w3a`Ki3F7zP+2{E4S>w5-L_q>|45eNkoX7X9n zliV&M)@09OrXA(Pwi-g~rgS}gCpcN-qU3j2Xwx3WHKKiqXnZh{E-{4Ng4nW03$y%f z%9(kR@adJ5Equd+;7qvj`-K)tebn)Y4pT@YXf|^KXWvew&|s_zj@C5bn*6*!A#TQH z1y=vX?Y)NE7*>sE$o6YbAL!pGqKAjgi-cZcjj=9$-cXx;n9v@*QV<;z`_-k4HrA*t z@#s+@466xe{(Wr1&q_Fc+?T9M4*WITBfiKtzb8dJC&k2+e$UIrwuwi;y*vfg=wX68 zFYBR&7Ouiz14hh1_k?JR>cz?5x>V;DMDzRtSx`N%QEjorHp^|cn2Ks65ki|8)jCv2 z_#ICIhS}TsLFf5bm*RfE7g+uN{mjhYmp22lyUlD(Z&Kp9qxuB?&%D~!!2UpJ+{GDI z-Q*05pIG;ycU%1fOO3koZ+lgx}}$?ab@9O*VLCmA{X zDK+stTTVu?FrOjIC4yWe65c_sMl-J6%PH?O!!M{ZPCIR_eC7g9?q*PxsNvlcn)S9U zxM^#HOKh2wRmn?iXw>?j?b{}fI_CZj`0H02aB=Ju)i}_)-^(qx)llk^MCUT=cAek( z7~&F85Mj9*_Q-$X`y9#s>J*$J)4jA3>THyOrS)nOc1Ufl_u24w(8{CC3@>b>KV`>EwVVH!a%M@{B)U-;XJ$4 zDM~+C=B3vPKuq=!7cW$rEDwUp_zhV$1$fUy`7^spu-l~t&vt&1MXTWdO0DpfB(^#b$H zy||T|jGKcXHkJG{_NU+ZIGm*pz6{>0ySuH@IJoMbUF&^ZmEriu)Xz~2yKTg6P@+yk zg5hQTt5vVX&DS@+JsZZZT{OqPfdB5J)OQfuAG;cLcGGR+$~Uii&d-$p9aI4nDTEkJ zuGT^LE6qA{{mFC1%{Qwa^lCxLUVg$sS4V%mrX8jau1AB;)#4{qmai@!NzSFaozNRS z2tu(!5Ct}|_?TIL_tb#>wv(-6KB&;#)D&(oI=9khjaOIB_-l>_gNUF5F}vo+RCU`^ zx&^KYqG3~T<2zW5-Lk3AI3fJ~ks-gAGU2pquyLD<)(LidI=ByqH-Dh>ZX(apWeH{8 zxNbti1j#O)3(8JvqwBnF4nHKgjW@2y|ChWG-Uk%%u0f{mW-{RhHE-ez0>fnAXE^>p z+$NcQ&XTww4K(fRnq8i#$9M?rKk@QrOfT91ET~~QP;0Vqiv{_f?YW7T&{#7dag1;C zl(W$9aLL2_SjM^7cP;_jyPRjXtv45~&NrJuS7pbC5w*@`-t;e31&}!9Aj3RG4i63A 
zfxKs;qIOazZc;)7Bkmh@lpp*r7S_0RIIg@7x5}Ur-Gz#z-GaL`Twkq-0Cw~9iPNxGN!VB#hZsIv* zwLRm5Le0{)`TM<0{xk}P9e8BD)jj}hm~DkU1sx`bas>}8Jp*qJ96w1VskhtAxd@{) zkW%%Vy39;*yKLRDH>a^)M#~9wjyqn*gujHTr}`bcZ7MyC-#~3>7Z}v?uV7%o5|~eM9Qk&+lG3g>nIwEQctP z-IOLndC(E^CprNy%79yT8sDu#4>SUS-F?6O*45~k#5Gm8ga&f>NUw0>kCd4ok(^V> zP$Ys(9CE-4d9}4VL3oh+!e%(|JCVVKECM96GvhVK2sN@x(rY~*Kfbjww;c&wWDooa z_*h@(Nc=7uZhDYL8QH-z*}Pqg`pJD}d$xWrT@e_3+|+FZ%&^&YPP_tFy+UZDY5n&% zM9Sy~*&dlLX6OY!k=XfHBmPyIjv7fD5OGo&iaTqI=^iFMg2V6bKXs7V20qX8xQmdIRBSVD@CrnfOdtm5#ZC=gi?dM! z2TU)S7BwEQxdjVGE8$H{8hUu*BDblKxug8zF!}KJ<5`mmYwE;BkTxFXVmyd6NgS;{ zA!XQ9Z>@BC9S5vJug&g$n|{Gtdyy7|M{~0#a|yu;1ZFh@*9p-?=6jU&JwxFl8Ma0Z zdyTi5U22>8(h<{nVQ%au2Uue?1uvpgn>@h-!%BKgEhSB$Cc`X}vJ(j{mrqlEi8gMyo4xsh4(aofs8I_F1y>S~w4lm~3qI1p+t;gs(=Zq3#1WZ#`T7H@} z*Z*53c_QgzC1ZEMLF$}?-@Q?wR%$&jYE?7{)Bj#yZKL}5C-?Cw(W+K0sqNj~PWO9x zKDLu2n8nA0ca#XoMu7LI!hE0cuXoQwE4x`*Z1Sl59jf%*ceaf2KZxJ!i(H=ET+gAZ z$^$ZE*cI9Rn7(mHC1v0c2Zr3K1OCNXFZWaKI(dvaP?2p+S-^#fPoDQUgmY>ku? zD0>PR6oEW5Aa&#Kp|3em_GI}mV*1BuH<{#VW{F`k_zl3{mF65`B z*XH{?y{jqMYGwbYlw@x^q32f!IviHIz^u}5yXr6cWpc4X)wGH5TePF3TE_7|y)!pf z2oVLQz=t7hdJ&C! zJKtUugRGoIV8(m#K15Xt=?fs;gwAIP7YUnUHkshtr>K0>vaogaI@)J>-%h>aKrSG} zbe6i3cGp-aXRk%zy+7C@cwu}8lP#L-HDJKQvFm=L`5_bVlT9nGso z&K~9oc^q0gmbu`{?>gbQe8--AsCtI&J>aQ`Ck*~(-K?#_uqt*0*@yUPop-&yd~wke zblLMvWSRVfOt@{DcI}KbU*^rNi8vO=YT{+JqflI`4mr#iJfaC1v(~9}_USX4-c1ar zS+8+{j{|MMJe&i~S#}~=z8}+27DIX)rZ>K2Ge_5PyAYKI;-pIgxraRX^;aqf-XCi+jXs1+n$NGcufXhGIW|& znQ?a+LLm|oaL`%{02fJ6pYHa63*D%P;a3Ujgl;id@)0CvQmnNL=Bmdhth^Vgt!E4! 
zFt##>o07Tn5*P73)K6!&@={djhcVjL>&@2LK>nZCE6o1jr)L4bT5zN#gkOJ9GHv&( z+)gi4FM+2`WRmSh#9v-glK+rhPkNZLbx=mWU#Ujmg#9PMmFQ5f=1r6B^GwX@d14Hsf7z&y%vn== zh(&8z*&`kHdqk%ymah^5TSsJ>@L^j6UQwT_TLJph8++w7HZ9#)=WR5_EsSZsVo($5jGQahNV4kqQ8K~PJs{L)oa2(A0EaQ1I7;$y( z2aJ3NI2MJX$?x7)ld<^F<9Q+T18Y3QK@RYun|>#|@$6h>`%8&Lm$JM`l$rO7G{3u*+#Jtc8yWhecj*Eb=w$IMY-LP>c*IW- zgK60JMk}9T@3O@WkzfLM+%l^2rLNX(000F6c2lOA#A zlW15+nhW=w(#rvVx~$Jm`t7NV^BjW9PBSYvg%?8+!%Nxn6F!3t5d%=Gy)HO8+6M@D zlW@y$y^AW&i_H11M}e-O44y%5pLJx+l54~I8|j^2-|&25HAP96z&N|Eqj@UvTlc)% z%7NFUROfmeFphpj#=g(3iUR}B8*F~KthJ@*z(M!^5#v>p%M>AlE8JWT-iYVEe%u2p z|J%!5M!dN?c)%!lv)v3gh^Ox)kQ_94_KW8X6ujlck z_(fAv6$MhK7lEqO9R8ED^>FB;EtAC+0AVl>Prpqlbu{%H{i@aK%^!`k{CI_An~uZ_ zNSm!x*aWLqk^QbryUgOj?PYHxXT8x|LKY$>C1YHK);Q^SV>amu3mpQ}?#!#l5XB@W^Fea?Dkh>DN6SjJ~z$wqGNT5In7UeIans*m}{3 zrteY9hAnz*h|hz)9ntlB^?CsZjP;BeIXGdHw{Y(GfoaC?LP!-YWtz0JU&mANkV;-7 z=Y-m(Jjt2EU^4%<-Hm=WX)Lo9)<-_CcBg*xrS1r)97m{lAA=0F z$EG^>lB8dM9o&;48{_CcSoJYvClS%}`S6S}F12=D5YtZOsML#Pvk_q-TI66Ok?>0# zE&JOyyLJs1=JO#8B5|P})+E-&b|qQm4=}VIRY}6iUW(tS&SG%Snk||CsmvJX-nKy& z(Qbo|Ha#e+A15xwhAFC?X*mTt>k3aI3aTC%OGa;X$dPd2!72UbM4+PQcZ}*r`MQa+Ktm|39_z5a%r*x&A8cq{D>J z2}V-{qw!vnp?Kx%rffnVcsb|u1I$b^5yn&?b6^1TNsL1&OfrBgcNX+GOeu7+hONvP zWFL}4@H6it)sV_bNxxd0wu3f5T^x7RyOty`r>r*;ycs6sT&U5ryHGlpk6d(jlr}i4 z%}TFlA%&24es8KkKDkXGM;y~B+;ISa6nF$TL58KLJZ-FdpEz`Q@(MT=+Z6G%+fzmw zZ#-|M;eh!8S7|3bhc~VfB>|I5_*W;DoEmY?>-ulfZUpBsDPpa2P*w{AbNjX}BH^K) z5Qg3K{^R1O&S<`}T*#wJ7fvFns=)l>!QX;Vl15x7hfxv{p}z(3wGLBHe#AeMCKXzN zXWl+*&g$K{SR7M$7%*9y5g4x$x(*zprFWNrBg!!)=!)*e_(8_aWw~gtW#9+RG_X}$ z)KpxhqQyoBv+g=s%lcUC(fSrK|0HLY+2!xK%ikc0{M1KWa{hL~h=)9&-HI3HK3p<< zyWI99;J5)#mzPuS{YhiV209!UB*qKCygs|D_?KrUaekGnU=O61G`>O~)2MRJ8|5;$ z!Lj6{H`08oQkfJCSki72l=o5X=1h6!RG$J&r0m1mTe34(yu#d~@Z=5RzA&|62Hk3x zWT*u3;r>jbP1i?d8Q4JC(`KSw(=?E-qpo1mO2&xY-U-d|+4 zc#EF`gcD3weFptZ)vMS)d`((TygE(Q5~9Ao#LZ7&I=)(qy?%Rf$0De_GK*J3$;}`5 zBmI-KpbMx;>u3YJU+FUHlke`(yE$K`Z<%)n*mzz}GF!Zp6+(d@kQEA(3X@i~<{JMw 
zb<8SF28e`*I56GlG~S*bKym`m*bXtb)&HaBzn|wn5g{3qOxn|jv5zbv!U)=Fwg_fd3bY#on=1vc?Au@RVOC9Y(<0@07+XEpKN!N0H)!1(qPX!XW<9=y%Cynl zA<4`m&@YBytkhuHBrxq6Ni|MwJe;NJocPTb6{yg-M_$J|9R1}R-ujV0Q`|$Vw=FTR zy3$vG`YrEhzhgZFoiBOx_ z{fiHj$pDWCB-$TTE#;hh7hj86Q<4K%;I>ZRg*LWBd5{ucJRih55umUZzieapTlreo zL49L)t?Mi~{Y>twRud6^Yucyly9a%|fx0C&+ijAbRQM{Gc>!-_yj!$jpC@$$4KW$J z)?#$XA%CQgF;RgSfwSE#^c81XytjbO_6KiiWJPyPLO66?*t{`5z&oLL|6jC}aOyv7 zx7CS>omH@&^iG`g&dzWs7!#-g_1Z*UF{ZNIk`T=fW@*wplv7XfjTdW*nj+85ctISD zRBH|&d!sz-+1j{J!L%7T->qf%6ervVzb4dDlWK2sQXdM@9uqd}Zg0}1@Y=Q{)EUmU zkc~XIthQ#6&+Q-Q=YSi?MT$&tu7wUdX&z@E<~{*W*cAvAP;iHzC8TaVofw*vuMBUC zqRZvG=xUH{ni#qc6MZ7DF2eDFQyyCP-CU+(a#$P#)zdh_P^75c_T@7S-IBJILQK~m zbjA$=_pOOGO>}bM{RPlRc&*Mj{041iSpmAU{64KS+M)Jo^4J%$KGEjSAJ2JN$HEA{ z@skbV-8+wJJ&mf41Z%H7=VYCD75A8PWFO96lKS4eaz%o9v{JQ0oM*k7^g!JbENLUx z%Y`Z+Qx_-dQ_YJJXm)y`3>I#JWT=50%#1?ro-He{1{0=1S-X~i+woL{Y@~wJ0_=w*DP2_d%B}=T1 zc4S@pq;HS$JBraDeaGJNn*l z6JVj(6LE%(3GFrH0#wWA%?H2Yrm(2wl{N#b_jdLLI^s~i3&Ma z;04XRl$+OLss=COo{D6BjAjh0C?+>^f#<@FRq*`6W;fo%^)>UV>+B|abYmOtVHiuI z_2XgMqWp=90!M9(ML5)z-Yc}Z1GxuPdGj8Xa`^muXAhx$aXxUveIn+zl56nWR}`nJ zlid7La0iKpYH>b(maRtI@7~@jZLgn$;`*HX^P8%w&PEHWtkgz+djYwuQC-IAy8CPD z00xJ^>_y3|zuSjzRD1bszB0`PQ$O4yfj5n+G3EB{1Vm^Mv&RBhe_jzN+_DAYE}}RgV|bz8Vf%Y+J6*XN$qKjwAhG)n!7Lg(&A&R2`qU zbjrw|*6Sm)@K1SjN2OkOduN?EO}s|-FP%*yMfz-DrF%md?NH;5kX7+EK?(0Srh=}- znbN8g98*@3JOAjjt^IDx<$xdJFjOSELIY(W_O=dbxB&F9)6_rteUaYj3Rs;q6cytp zd#5@o%K$6|+|V{F9jkIz$P<#H_brg%s@Y1(v{TNU#)h@$q)Vojokr`{(KZF(|?LdvJfPk?|cg)|z^QGi>b%u5$?HU{zTkWoo zRpRcEY>ha@)Cw}gMIJTkTKa}{UtvsognhX9-;JFf_HPmK;)VO?oFF2=LMI`^^vA&^mdYvkE<3@f1hV%=o9Yd z+SEs}X9XZNY}gd3!0fo*>$E|OUfZ@is6BfAi@v?!Ry)O0>%~>bx<0NwFT2H%VvXb)-Iu74RNVTF4Z@a2;GZUO? 
z_?Su+-R0E5D9^r-+2RM~T0ZPgokgKuY^yT9GxSmu(F{UNdaMIEMotK;&qBM#V1&=y zFeW&4<`>V9GA-NL!KHGrG0#2kI|-DgEd_mhqFuOP{x6yhN#6>18-meWSaI0pTC>Cl zk)z*3Bom=@Mc57`nzCOq^yX|ffBA|Fze*AofA~srgERIS1~5fVlVCRJ4rK>PoxKT2 z9`zJA>kJ9~R}#VPlwgD=H|z`IlgZuy-NND0=3_%^6~!qbeKy`*f`3W?{yvsl?Ss+| zf`9UUW}XCFT%+Y7e8%0OF<^OU+w(6Hi_wV3fcuTiPUXJYF49p>Sw^cnI18$p$;I!L zFfh1LY0#DpZ?&osZ#Ra&RqqLTKBdY-{o!k!E@+^V_n1(nUFZGd3Yl03@{O#6r^V#z z3!gs|3ZE`3Mn`qYX5G8**srwwwi#rp!&Ghd3qTBpw6U zw@8jn`wvnN)Ba)-Ulz-)+MFtU2Fm${S7ouDl@Hn;#ru0VCnn&1KKoN@fY-%SW|~i0 zx(~-gzUVRb*UTP$1{zV`#`UiGs_G1CeCT^$K5T;INj_|FjQ{UV1Q%)Rg2PV~1@w2( z0_Sk3L}80VvHFlzzCrT9I>?T5StrOgklwmo^Q+J}KiTW5O;tS1V$R0^tcWG@*Q4(K znAblBeRHh)zWh3Mm)=nx2-T)zqTX%r%zyWpB4a&af0vv_ATWbn(Z=wT5~>(rpG``D z=&=SJO_UfVvyrStKtzJlrL^Nx^?a>$+AiOd5BkgdHm#`TeV3m==1{C!HM&NN8@%B2yz3CJ=?hQJySu`?)pYjTnKX^H^5a~u zldhn90kYv?pkJl)>)p7xtF|PZ`auRVPV-g}F{=s=49G&uVq$%M80!)m4yl^QOEbkZ z<=@+9-@hxtChM*jgv=sH^E#brS`K{5(~>TR{!zd8PvG-t#~INk6!*>@E4X#;lx)|f zzOO|ch5xN~tO;OD-T1a|;io@`+t}nBZ8BLWs?W{C{iw1fTl9fkE?0F#LlU(ahnaT9 z+X{Z38V8l2D1|RYevh(Dd83AskHB?%U#Y>xE1d%!58Wfu(vEEDK}3f->~G-4%W^*8 zdqfIaq-}S~jCp82FAAC&ZrlGT9zdS^0wJLxgW^g)-(a5m9^t%`qt*N~yI$R6-YL~F zL{?93<9S*~$V8#oSG*a+EE*zafQwWDM3r7$ku(!aP_LF}+f#F~?v}LSz?Rh!dLQBT zCWo>G<<-|2R!r9^=z_e2RvBwavm^#fv&U&IyN0{R+o)~(*B6Fx#F<{WhRYtkR5KdO z{I>o6zEWdjvEvts3)3#YO2EMd_~G?!_e;B(id2NDfFV*0)HzW?bsd*xiYI0MKHnAR zx-AFqtoSR57{OS+`{KKN_6rbzKHEocxpdq$Rf2jDwrXI$LzJQr5%obii(~5hYIeJ7 z)pfm~BRy1l*^dCQpF7tWrwSfsCDHqrYA7C#bmmsk{H->m9G1zGNhLY-=XFqk_YA=9 z1O?`Exu*#+0^s)m4)GDVc%|m6+2|lGYS)cPRuWt2ih#u@oGfGRy$}d|;Boi`7cnMK zRBv@o^!ij*xMvTL5^DZUC_naIS9_6Y>WYRJBRe<6^^%DIesp(r7*q>j<<|`+OzQ zZxmqG7?STK32v__xDOWe;mXjYU0W6F%ptuzj<2`gMP+CnJ4Su^W`E|m-|F}jjF<|C zz`ff7Lv9KXRU5{OZJ`?fgQ56dnB5OWSNh|U3clao`PA%ru&a78@TsI$YXhxntK#!R z0wzs%hz%PoEByeitDHOsD9xPg^!5-ar~Vf`E?q3NgW#J0XqzDey~b+B@C4|Tp@bUS z4@VinS0__0{V2V5{92NmXoXfOarcA}BqXa9iS@0v$;0Xw=sD}(hAo=+;W4kk_n2iX 
zA4&hozV7~@UN5v_Kz(Omr(plVAigtr#8tE;o0C^AjlstW8~K0Y@XFoy3zI!|S{wm#P0_S-$b@wfE|#+RW}k((?+`BIez^}2y^nbYu=K4VB?<;__mk?HSoI%cWSk)4 z(xlSnf*zX5R>}E$gZ?W|yEi-1C|z)lRwF)4cM)aVH@oQxmn`bzeDEi1^dJm#M-@ z^z|RU4vcUJ0>}?2QF$8mJf>ZEh><*9^LX6hm}Wj%rTV_KWZA$#shgm*({(V(5!hQ| zw+$Jc~Q9AUS6<0@=wYi?HS7vYWXE-v@Ge@C}sVQGT5>r!_KRe*QL6} z*{w5swOz2_3xirBA3QOv!~!4C_yefkSp{(hI*TPd=QZDTqdDXMVe2i!qHLpg?_pp7 zK|-Yj2|+p)0VM|!QIM{oK|p%wZV*rq7?n_wlJ1^i=nxs{9Abt}X^;-tw?5Ci-~ZnG z`N(nbg^T-I*NXG}t*+~sl6^p4*%UG*je+oxz{qu8N|$6xx5-iHKIgeXO;KyzY}sg- zbnSB)1d9Yb7`R=Zy-zHmXa)iwLOD4~x0M&`y?c`G73>|E;l}WcKYF<5_F^rMkc7Dn z(H|0}PAbJydNW@46h7p(#(xIAb<ok)pn1Cyv0l>h~s z&uS!?9HCnZ(nPCCpn8yPhO)BnrM4+=I$wOX38NbQV%ml;Vp9WAdA*#IbtFg{@s#(> z1z3v-TX|g06Wn8Up=Luys`>oum*;QvSJQW1acBP!$qwh=xLwEDavO@c;9TnevERZ~ ze*yecea$8veNT3QmDG_WWIgGHK)Pm(Tp;KM<_+p6)=KOwcn`!^mO ziY2_OfnkLDg` zDKmraQe*_o-$;M(Oh$rvdqChm`1jXEq8(2#X#sxX9=_5#X0YI<7lDD8;BIbvhD4XY z+va2qb5V@e_424mmvwB(2dw5ix1@doQ_tH_3!sN;dVE zxs}zOaZan-<7qPfZR@`l6H&%b>23-v7|U10C;?K7Jc9A5>&(wAxOu|Vyvy{Vk6Usd{C;@OKzSk#munx>AOmX7 zn}||qfUB>BNxS@DAsLC0xR!R*u#&^x?vlHCpZ5cvs5$jdZqZIyckXNNn<-N$_%>oKcCYt-Oy2%rjTFr?Nael1@y+uu< zs~R`P$@bs++z0oMpWI}Q7hB&$>7rFnCfuB`T{o!W-*Ji3YA*AOYi*TG$*b-S#i zHgZEY&rd}9zyEj!j5!2V8b%5j$qjDlNAq~wFfIDo*xbJV$Iw7>%_Sc1JMg;7lOE71 zp_2$mXNvk9eXK7Ve)Xm3wMJ1glSv#m^nE$d`GWl}YJBBmq!{56r@3@{ao^g@V`@V8 zHwQnqNJ1jyMnZXxH(tsQ8%!10(e&|e9VZO&P6HugfwfxOk)-G0#PRcYQ&;GF7o{Iq z!@jU>{|49XVM&l0IlGw=v%&hdr7j+LTaYGNVWRHAR!HIlYx<5e%~-No;1 zmgU||y`rx_883`}&gR`%as^2i_Vvo5J2{BZKln z1p7d*)9hO9=~_{n`sho*hEW!oO5qQr|KinQW3B%>dzbx)LXvpiRACvngO2LCJu}3| zjt1Aa|2Aak{Ql|}mW;3l@j4ZXt$mG_N*ndP%$30I#3StCMC9x&s^DY=ORH_yE28R7dL7r9UnL1tfv|ktjhdS*`AOZn zzKdkb%D3)jVst;1NJiXi$KH<8Ff(`0GS%j`Un_Td{JRO|i zBxs(RNWtuMqknvr(nnp|qprUs0bo$`Owl|u4FAXypY2b9*mcN~t2RlLrI4GsVG`0zCN{uDa_&I5?p~_%RCyPmh7*!@@VgdN`ZQl6&3~R;#^32$T zq+Rn^YDY>d&w+amFOLD4UW79(gGVpwT&|W1WkAS{8fk}#(%$zEaR)UeUMb=D0pdTmoa)-s)S}=nTJ*2Xhc;uf4=|}Fn{ze0|H2c{ z^q~zl_x%S7M61y#8&k`FCCiv6fBY0?b=D=coW>oD7WECrWlZSyAU4G<*^?#8g79)j 
zzAM>>&nEojxB2|fXF0tppj-1V(kYS(v1M_6t2@PCJ$WqnPhY#`CQa#HUB|E< zh>nLT+0G7xY#FY2Am{jQISZl4+`V?ZHBsX$4KAjNY-chD+kI(4{!s(d8*>4IZV63(b1UA5~O`~sF zboq=R;c~s58`LqCH4sNaS4cG2=Yyo)I-#42*4g_bBGRoZv*Ix3jBkl@LMIu-xsF&B zR(tshV7LQF@gzyUa}sZ0x_v}zd=+Snx* zi%;pJ75R%e#P&Ln7AwcMNzk_?|3nYDYc38EC&PgINPVb-o6@)Lrd+a#YeC?ine5$? z(_HCF{yg8*Na@WxcNfFMcqx3N(LfFDsP}LC|Nrtt_Wke2=SnWvDaFesFh05Cn`@><>Px^05d zmeh>^)1k!+IOUv2y(TPt7XgnW3;Ooce{(2=6cwW@a;wd1fKm8+k2^xj$f#$1>ED>~ zcOv!=t>#PYU;TF*PTa6W_C-`mCy|Ty1bx(YC2QBP*p$*soJECB#X6Cz1lg=9JEFd{ z_M09JNjApBDu#s4D91R$ip=GSZ5Q%dTT+Pts|m=a$^hkOf~08Qt9F{WO@zRO)wV`v z-hsetXc9^sZ@9uSf{PYmwF?KX1o&%G;%?t|8Yv{rXC0}i^;vnXLu>BXC$I@E+`$SL zX;=PkZ54Se_MB`zlP;-*=x&B=2?UJJ5_c2uTO_u2hQ&=XP|fDk#NkL6l@pDMNtX}x z9QFIJ)n_6{;SL+F@puN+e>_LC@4EYcEce1oL{^yGkpPJV)Onk}{MZ$MB^Y+CFZ{=u z^!`1Q_fo>uSy>85hkf~aC#z%=Ik$Mp70L06n_+Tr&qNb|yi!oxgQcUJxQAX4YBjVl zer=i1!8Zvt)NI-W* zt82{+R4fR6u4LJJt-vy8twby`{4;F)eqpeYMh#B}F0yaso7IlavHUr#-3z`>a(?IdD1%4K2|m~o(Be7FDDtG+%g9kG*Ve@MydPn2R#(7JA( z1+4+?z#9g0Q2^Uho^M~F&*1RZDDW|oNNYY$Y_~YJsm2x_9#zVgzf2mUEQm8o5)0L- z(Hi^ev*bmW=x8b6H-e2+3b%0zD^Oe#_F!C7Jt-Kjs^++HtS6@ZGQ@C$>hzQtl5wV! zv}S^M@qouwlZMdFqP|P}z~&bJYe|qH6vay#8_4zxJsdX!HF3(^BR~t>O5PS`JMxd1 z@`{pKWs~^!?{E#q@3QqLD{*hWYlC9}vJxl3`GbG6lK;sXd1?7~f;bVFJb957wQ;7j zRl|AwsZL*s(PGd5G5atr$rX#^(T?f@7)wQjOc6R3(K^g;5X+Nxy>G} z;Su0yyrV5!B_?l7@vLPu9}{a@)k6-fxi&5AGL>hJNL1FsA|0u|f6jM_3eS0@dU78U zKDa*L&QweT*3;!8$bBL_C14)=AnBeCl`noi%cz75O1?Ps&>#~=x^pil#jRH z^)&d}x13?o$QIk{XY!4qy$PD2bFf?DXDzPnq?)$arA99si*!JaOZ$(x^AAhGx!s_i z1bB0uhJ8BzuXqXk7-~Qh3c2EjGO@3>C)sd_+A239a`W!in{#omwJa(Y7Od$!Om(JD z)StW;isvj*4}PX>ERP47T1SD{V)^vsHRM%WoEjsq>w6Jpx>w~}nd%ckhOsv(vCa-#BpDujG8kzIU}nI{-K=pphb5mM(!@z(I5U7Y z<1d~+LQZM>o(UxX&qp*-b;T~%hB^UHn30Na;D7wlRX6|U7Mo2f8%>kphiF37(|7e# zlvn2C-i~sM5f+YpWn1;lu|6C(g>FTI+~|+?C!y-2egtT(`J|i-L9(W=7>WTM5p7hO ze50ZE%FB>(A`?Oeh<{~RB(y(7jqqy3Wnqt2Ygb^GwgT1x3wfoK{W`IPAT{=#lJX+? 
zx7gG;%^>2fLfNJ3NmP|;8qD8Ud!DmGh4iSQcsFO~a>LgLZ$Pls2l?I{8HuMz-CJja z3{MQMHpK3rdHaov=^iOt7paY0L!4aIVu(BG82hcc^yaNPX5*>HtYa# zpuwIM62~;gkzp>95S|`Sa-s9vV6u91?|n?I@%{d+g;nX%u4D#R_bd zaxk=u{+<=1$jT883JExAy*L9{_A-rSds)4B2`Xt9zxsO~O-YiKV_ z;IX=i+oPMeblbo+VKw=7N|AApc+(_$!)^n;pX&~x#!H0<*Bew7dSBPv2%#Xe47J;! z)tH~3>ZTrIvj6J2^MkV@eUQUD8M-AqA#kfIOD2nNS{_b{2=*mfwk5(FTEvOtjq{P= zc=Uj>snY+OY0WzR^?9eC%}Lf<`ostNUB)tQZH*HJMnnq=Qvlz{h>&m@@>jG90DWV3 zndC>{^lu_&igik8Lv=!pwW>5x@7~?^os1(35+`!=FCJNI5E#Gqs;`?JL6Jn-mXV#G z8;d35ar^nI__h?G?BXy&@DT_tID7RRGLLL@3(Y>&e@}**jJb4j2Gx!8US)oG%aNr` zTniAd*KrZt;TLa(uiHOp}v;LVy*fN6>CowBe0H(;oR!v*`dfg$EC zP$j5Pzri1c9B-^4kwmt#hfazu3lERSWaGC9V4*6WjKswdkN4%bGQvd!t}u{H+f(Zx3EIexlNM1%TZzskzeme; z5S?%^(3F7e0qJF%P8aWwOT7?Z7meSIg&!Pg7!L;NFeMSvnj4kH?Bgj=colE?Th%Z& z(Km5ej3{28d^ysOd`~>*eks3PG0Uol;D!kZ!)Ij8sVLOq2V|OOaZcU{CU(h_^~!Fc z8i!KfXqk<*qRlVv{dS%S627UT4%F^jiz(Iw@Dg3M)Pi+V4VY}R(JB5=*NnxH%$3(a1veWgzRk09vp+J!w7#O2AT zAT;4sy5p;KSNlhqF@AJ2Q*My{sjp($Tg;MP_d~{OJ;(1YzH{RUHithmNAuLbdf(s| zE(gC7Z=%z(w)#AJdh#xTMKKGJ^J=j`+e?A#_eEPKtuCNJ z97<6-!k&v<gf1f5X_okangBHX#6|J0A9I9S8I zTx39;M^`B_{)eG_h<5o6H*#Bu2iWujhKI^6GzZ3I?yHd~2xdcS$aCTl?DS<*q|b9y zwAA?Cv^^>vOrg7+if(aymRg!f6ymR}1?o&B3{iY&q|a?l;i*MuQ4-(-@gw-|;X@~> zOAe(IgKrdHg)&J=#~e~cfHUY=jS1ExLu%F_7^nM0#kzEC$xgVVxk)qVmN)qxX47{- zN=nw5OM-T}gqcTyK`!_K#aA=;@bv(rRHg;}%1D)nzpM5e>;?)vteA zNdGfgTlV;qtToP&n^j1r#z9{5joNY1LR`DU-miCeZz%coSCK1&(P5~eX~SkHJl&DT zZz&!(BVl`Sp#p|>m?2GzerV0X7>LcIrB$#XhFmJwo^S|TkKLyZt)nPpkZ*M#s%{o82PQ6PDH{z|1YWW-bS4bCUpxUL=CTp zh+2|Zw9X*e%O|-b9G}hVi&MZvVH|w!4wb>S`3jY}%MRqedtFcDmGNKKqOYTd=tCVt zz8mHZGT0@{E9r$83tS7SD}3^Fq+aJep9GIPzx)q!N4f%vgIEWE{MZ0~HO$>ZGR`;i zvgCF?f6G<8wIScUhWF9}o5T~uwrJ*eNWO#;6BtFZi7CDC+W1W<_?rKqnr0&VMUCTV zMrYMz5?hK|3u!pf@xAYxy4*I6Yl!D8NO1lV$kadF z(9=PVEj&qU3(y}BMr1rU8m?t2Vc%SKT~L7ivY9+_4N}M8=qoH2!FJZ zc!w}Ot)u(>LNk>R30|MjBIqo8TCG+eaSneptQAl6R#t;>!N^gWH2Md?k6jNjGoOrF2ptO&=8(%B3D2i5OmJ8*7HOjK5H zB(@5N*z1e^N+ll}S*(-AI4Y{oypn*+vmnEYpgmt3Y-4B}ENVEDYPAO9u7rT^?$$@R 
zebu7)RjsDA^rVGRijKT%a#la4vxEj2R*2=`C*rl|pib%y>mm!P33FXIShY0bQMt`8 z^$Mz$=w0pd`Zx0P&6inmPWSCIf7J}W`wVq?)xzT!+H4^37Gf8 z_*hPi^Jnj2xM31+hgyvU$a92tCj>aVe;FR9>U(!;ep>>iqbrVux`B73yL)%)vB z1QYNakfS!$&(fqYGMl##!T<3r_)-JERB`Qt#8%yqmj7eVMyL0T-in#R<9xHRxENNa z_>tPcsVL912*Dx1hPB7pFDiQZ}PBN7{}aXRaF zA(HgE0$5Vj0vCeX?2HDAwA#H^PT>kYyO12D6?3L3!DxOGq%G{d|BDp|1tmfNo|+VT zurrzyq}f>HM&z#=@NQ3YdC1Oie?nJK$BQSO0us|Mmm$V_a#=a!+DufC)0R_(@>4nK zzIf173veKRO*CKNxaTo;=dzU!=`t?sjb6}byaKpRa$F4t7{68e%$$l!@wVMUuwhSN z!`2zvB>5yPf|dX(q=fw=qmCHev3cRl{6CNf{{zzu3|D~GX*hm7jJ9<^aSh|B!0We= zRyKEUq;_q!Se;t`I|(2I*WfRH2ClpJU0SD>$MGh&*yzKFe1YrbjKI6@;YT+o@VokA zZ#<{{?)fuD1jyH*WaCz=>gAc ze>GoM5SpG=fBba*g%K~J9_B^X=(9k!OiUWhh#K7}uN0Zq%6o!f8bv|J4U2~F(lh{H z&JPmXNx9lPA$X9P!V}FaOl(5TQL*YF_^<%q3NcbNk28p084t3lDQ2s2dez`_KxtGW z?#R&@Nd)UG764s!fwX{*0qFw~^kYCi3mv%!GP2_lMx`is=~=AGZ@fbm9aoikRfZ%D zr2I$j?fBX8pSai1E&2M2>anej3Dyy`(sO1wL7-(z!dcrhn@8{JxBHmAwml^ZKU9Td z8CX*FTw)sipmzssn9}!LjDF?E)Wa9pP!6_q_^{%dw!Nk9l0SgWV}sqrYd#nt@SGtO zR&V|var|HCEE~{+x8xA!rx=Hf>UyOCA<4}6g#%eQvWv)#@LPwLd`9&F72~Jbmygjr z?hy_Uv2c4`1M%>hu!NehoNE9xCZT>wYm~1s5Gzs6<8_Z~6kdK0VVG>OTKv3RoX?E_ zDcDis25yMs%n<@fK=D^Hq_{xRS~tX*Z{BNgO-$7#Dj;ZD>~6d_8{ypIR_86GOl!PJ zk4fwRpLNN8w|1QwlC9DOp}RPsR!b_(Pe)Lhl>PP!9LOx!fhIFN2co_oc-PAT*%A|_ z{H0{?HB1M?Ul$Dg zk<)h)?1q+md-VFoOuwSRsjgEARI^Bn2Oa{9KsX)>0Y{EXxBA{DzRweEelIu9@nCa z#zOkFsa14Q#HV}L&#eQya)+x)Q8%bxzV3LWkm~4E##-(iKV@U|owa;QrgOtK{;-1< z{c1lU*JXou8F=9Aa7}(O%>J_ov5~J05{d%EDha%cLj8D*P#p$aW;T zVmaV3YVdLO_bYu{zWpua+qx2!|9E@balk}#xv2Y&L+~izppFKeqc-IIfN_s~e;JTia9mIFc_=0S;^+@pg zV9A)Uk1$@4HsNz%$r#A)QJP^!!Et=E^KNa&4b-F@wZPT+z@k}hY6<8lsTu(?v!P;8 zikv@H z%^_krUqBj2d2-AsTVrlkn7i#3t1ujXEN{B`#KN9gUx$Rtt~v33ct#n;q6#zGc3u;G>AmedIjIP#f8jaE;-%Jlsyu~CfxBspk3lnWXNSm#r1PsUpjwzm`XKDeaj_ZWRUD zb_%@iN7MO|Fm5nQX%m1TAk$9^(j^7W*8#Jss4O?Q0DxYXP5CUudUXq;##>sTAe|O* z^U>!+f7Hci15vUg(=<>6ojj_v+_;vez%$3CuEfp6pd;_#vFVdO@5rB2K|A4x%M#5K z#3Uq}o6f=YGJjY&W!_(}ffoM{V%^&SoA;o7z)EW1KMYhDPKN+V2Pre<^s26TRZ-Yp 
z`1fBZnorw}$zgZv#V_uC6=&P}{jJJl#_q9!yD?^07k*h^w)Isv)nb27Z(pPZ&Jwbc zIvoHqDjPi+9Sqhpch}|;ARGNIP8|)Wl4Ten-MlI5rgd$WA{Pg*?=5ZqeG0<#1NfN* z3#yg6%irlbJb~8nEeZg=l>}|wiU|#qd{|Uku0;74a9ucDC2J&UT9K1wy?z7JVGrUl zOqLK3xzRk4(bb`R8>(i@w>D1d84-ZK8BrSPOK_eva?`NpRxx?kQsHuETrVbBy0!4m zR^@e{9PJ7bIjfuo9x7c2hmvx6VvO51j`}e(z&C3d`N#x96=)H8YNuZR=#)q?+hLeV z?+{T*gXUAJ)k}xvy>Sbvg`YM*_%TOImde+#F z(rawqZqjsnIv_Mz%b%4FCYa7vy9sX+^bLz8)EVji)x=-^{PeJ!&fp|7f@5*`=Xw_FMSdfo zZ>Zu}^At~^6Ew8C+*iM}lDim|OHGx@+9f2R90vmA6+$?uC+&QZ8|OJ@GIm!uW6x;j zRL{SiY1#$vwzmH+KHyv0L9q=pYU8meKx^YPZ86uXF{}w{Z1WP=YBZ(Vkx6W{`tg2; z#N~5Pemk5k+0A!~lUV}r%0^59AcJ9{Edb4iH1Xf8**wrxNe4)|n?HZX_0S=t?%&T= zlbd&Vi4Pf|v|;a+KuGD{YCb4&p0H#+8~1{lU99P=hox;os?5L1Z}l*SHkVPdNRLl{ zjPjvv43qk>Uv_y~b{IzAka5A3*h{!{dNy+z1VDROl2bPq{@AhY{(i-On#TMGxF;tI zvW`2_M#9!xA&E|du!*Gmd{`+Kd7(Myw}fsvE);yaw|m5OAX;S2Uo}SE1g<{qA@7>1 zgnj`TYf1=08h?c9f_Xh&(#`VmM;T1YwrPUkL}d6~P=rU+cHHZW)WGm);d+i(-ZZbXPz0obAAIdq}=W=q70k%F7BviLdIUB%fxPa%anT3vGo=EfNN; z1N}S^JnuVa>+=32liKfk#+B2=eFiTByXid*#ek_cPV7E8Jk)#x>4U;Ows&Uz(HRv&tw)6>yj{cJ~5;jWbK>9>BW@my=stmLt&;d1?8uRVTN zfQs&Nv7~-=!mHp-p7Ka);77I3Kbr`?H8)n-u{H;eH09Sxxa&%qu+bxkHB$I8M^piL zV~uIh#LiB6AOK^m2|kVr7XLEM^Z&5`%HxC27e3IbL)^?xy7VW{rjMGQt*>wcK(F|T zUqHM&-{8$t7sK>|t8HX|eLDQ99_-h(r#{vOkU*NNk{cE|H#aoXQk5S7u32!!Sc%b? 
zUbxz5?q-+u6Oh0kzctRCl3DUT2jtk10DW@tZ0yY;Wm@jL4c7;(G-&OPyE9nn4PQVV z?(v!ZY@ib8LsuOy;e6`I&I0VDGXj49N>5LhvHMICp6{7w*>TTKA^lG-t0oC1XI`YF z^LJA9ePc3E-zCHfb9RW1DJjI|$^QX6>aT7V5RKaDq^VIlYjh)D62LSIDp`Ib@0jyW zS1pdp`?t^qNhHLPQ` z_;p$|7+uiUdK$8)v1BhqW_cbMljB1*J~9(41DW98x04+(NUf+4O#KW2-C9%VSUg}l ztP((Ub7RTp;IAXFOz^kkq3;yDaH~RRkAA^%_9TQD$g&nP+&Du1%BWz*!T1Hq#!g?L zxW}cu6c<8@4~nCH#YxL=T_mtp^_w=PB=|r}eo8S6O(@PzyM+nm*xa;_ifXISH5g&V zmXQs`Vc?lwPkm$G5_brn9AuDf3mR=-Z?#xL?>CGh7th^^JpD%xb@hIJytU|cwL9#K z@4=|PYh^vVw> zHbhQRGIENWvqWJxOUz=^;)=t;M+Njr8@V#7xV>4T)`0mFPX8m(GcEC97Ru31NVK(& zaE+7r60l7A*|qb;Gp@B~7wMH1R&t2Z_Oz@U2W;{E=R+YSpCWtPo*xOCB|A3Wlw$_G z^VpKk#-wwp=3i2o95SW`YHw2T0N!{aNzf?irK-d0sBoD9T$l|2$m(3)1ZDPJO4L`AaxNDA z{RdanuysC^+%yt0C;ChjA}0y~mLWc%4cS2U7JO_*K!tX}zklVn05RWo;bTg{Xjj4O z3mxB{{bR#bESacGdX-aXzsTX0iC1GH&FT~Dq}{$Q80U}N!0c^y|A~2KI6kOArC)R9 zB|T8hMIAYpA9Qf)l^o@NsEQWky9(1~P9WeVNO?xUWR&fh{>dQn1s&|i1-Je9*9_Zd zB9sw#$}00{w$ve}1UcDX8{D%}nztBrPpNT6&fcaO`hjCvS*hUcKUFRm zr3Z6S4O;ekaIw>2u?{X`ZI2*!GrR1#cD8SFqf9$scH-iY!(k!fP-dohVkRiK4L;Xf z2|Kajptmb!*>Vrfps zV=NxFG(DHNCMBI%Q3k7esd}y;QspYK)9&>xc0pu)H$^;E$QD?Yn@Wl3-hI@m4^#7~ zvd6GQ0eSEq3;Nc|9~^8a*Y8a0DQY!t3yumWoVZbqeWIXAorh$~E0TNC33GEA&dw{S zTWY$uPB!NrEW_4EerxL;Bu{pWty~Z^-FCxxV%-TU(A)uaPQ++FHIIooCOVw|~Qob%TL1&wh6wj6a;Hj^6 zQ~q)IwfqElus|YRpnOJjN0_gyn)|@S3Xr^g@`xXr*DK-MHkN#}Adwsl?RQlE#Xjp) zpj87#lKAxMjruZ}-{*6!GKwApk_qK!u$yfJar4Xt#ihNVAwNEG#;DUi-1)$VycR{M-n%da92*&i`KMgO-*>s)s1^?#Ilc6Y?H1&=TAesJpY-N)Z0t@8QfHy4s<-yy`l?H&l2*E@HPByL`VsLbrDgy}(9!ol{#A zUSak-lU+s8^kwHo;t2mmXDPh4N|xp@#b*b+(+kg{J-5yfDGfY#{;hiVth4=T)%lhg zwEm?nhJL*jhpqob$xvC>cv@k4bYc`)?Ad&Yq&qqlWDj9n_WqK6#QRKGNTBvmHnjEG z^)P(rx4omOX4XnB*{1?w&91W=4|ml4w>A})j^8dGnEtqN0xVvg>fer+jYfVlUH`pp z;`IC2rIZ5U-weC6%}v)T;CPr4<%ge1X9E971fd1?i5xWu{6JmQAeg^@ecIk|@`nKI zGWgC0XrR0+@Th)f^Y)I^^lsdj2sk5VD){nZs#juWLmLNtp^c*@T)kmxQ2s;!b3K;o zyZq_2Y8AG0kLAeppB$<61SRL}(uhV;fYQ}!*KSkD8~<+wP?W(L6AWs;E0RSIZ^ z{I?suh5Q2eM6`qC0JH82I*0AT3?Klqz_rzidxnV2+B$OSqFj9=E6i^$4Rut39vhzl 
zFz8I<0hfxZhwjU@ovgw5^sJ!WXKys%Yb33Hr)}iI`{BgdMUs*~vp?WF=*>3TL0LOl zw}fOp0E)ctvDQvNhz~GU~Jd{HNw`# zP>C$wW564*C2h|&(#Wx1#;RP$LqvQYF=@;(_DQY_Z1*$fmuLxLAOj!NZ3ERm3-COh z$}xCq27sZPRLyhB9!;o8ZQef`j1Tf)4$i$A{l)eB%{afSg8S(7+8K`!o*0%N@rT*h z!ICmwqq_Phdow`c)Q_s`c)h&kQ%AgZ;1Lf`P z@1{V*?sq2leu+ui(*b`ko@Cp2`R^};yzf)?lcojp{VCvC5YsPHoNgU)hBYx@MD9|2 zB<3p$wI@MCy9`fIu_5?yL!`HqnB!V5_W66r0MY9=VnQn%KF?S`1VdFdio%Qz5A+x1 z=vBrdxhlRo+P$I6+e<2D8_7@S(`@8_g_+kADS@G$z{$Q=KTDUU{jzL;C3_qTEv(jNY{iG=;$3W7et#>j;1)BV$N+Yx$bFMJDUd#qAlKF6{Di@Zrk}jzT z@48+I-8OZ0EpBon`E|Q2`An}aN|L@V%8wo$MPlk!c2#7PwcGre;$FcMAS4vg3Azf) z96O_o&+(~VnMgNilNmB;wXyrruS5|>kL7he zCIc4rG_B(gbDn_o(;`?_pHQP1<+rBg3^rXXLpB;*Q_ni)N_=%I);knhyv$rb?3lQI zD0f*h^j?b_X53{Mj35Ao+l_T2sgcAnD9^H)@7D?M8 z+{;eA)gFL;RDcH4bUf9BCgZ*8Gh zSg6o{h_p+s$s$^$5ZKkX4z-7q4()8dXCTuMuInUnz66$XAgXxT(|rF0rWpn)(fu1S z4y6x^K$?}SqCkyHeHX6FkR(s9$gjid>aZ&!dj>KA2@z7cBABR{eiH}rO2@A=A^MdM zpYnQ=>=Q-i{J5bmUD0=cySf}!JM?m^@o2i5ZCc{K>{s2PwTyuSf&!`sv!JLB|IeW9 zc?1f^)XILW9(<}zi6P{@dHgioeenXwfcxv!e|%MHb4E>hIW~a8h4d#5(IO4PH-`>{{z!q0Cn1^L$p{#do{y=ow=v$zFD7 zK+D~>45i~Q-?~yEBRV@ZEYs?Kcw{cYa0|vhV^Usda*+wyXb~v|6>#&m$M^IWPqnC) z+U+)3EiJO06PYW|C>xblhIh>Obp>=?rlz9z2(rXo33C+!PfCQ8tV$Mioj0M~|#`*P7*VU%PU3$)z zA(*bBun`He-*r4d>p3 za>DIYZ4&6nn69pK=nSUc0d|)KZaGuc>&~;Hw2cLb-pc8ULMIt_B_3GEe1dlsQ#8^F z3DJJ;@D@s{sR{Q?{tW%ZeRVOZp!Ny_1uI&bxR>1hTT+M5kaeBb$;nqh<8xoYqt%-KR{R+I@*?a4ak$Anc%gp^uky6xf1IKDyWCJ zLXC6Fqv$@dCO{g>zh32hPG}1(Mv;ifBj3skRSq8Z>DVSxf6VPDt06fH+Qsj;#l{XJB^oSJL)lvdu)PMH{20zg7q;w# zx(>0YcM(#-zXOhH$^v_hhC2lZ(NqHlnFJq-$GXjF`(Gxyh^WR-VDFnVGWb$3U}8P$ zeA{)7>XJ(E4c`smUQ^E{vb^5!3uh0r%Uo%a+qq;M6lde=%=*pM`H+14t$?fbT3v{7 zOa866Hi}2uR0A5+~0tWop1n!=maX?kR!g}Ys#lx!S`;le( zxr2mm)_jc{3!=&U?6c8|B?4*Uq;3a(>Z&=3=-%W}|JLMEU)SXjLVx4BQ_+pZ?^C3ALPrCCdwarLD@w%kJ%7y%FK_uE`&0L#FgGP7 z&+$#YTP9_UV6bwfTH4|hgvwUT#T>NrL@<~e8=WK6#k$l zwYa_D7)Csw{-jmM){c>;Dtn5$4Y(no8!n0wkn(x*VNea70BkmVqkg+->k#KWHmiuT zn?*>gN$-9$4xrPLJYCINQb5l9*}mdFN3#h`I(^cMh$smR@h@&VOfY+m-%jS*AuT7g 
zvL?HWwdNyxvM{^SLP)i5K_H%RWhRp^)6*EwdIzC=`)i~yGH|SZI??7Q3D0i;3Azrl z5fGNdP#w{OdH3_Kgvdd8fj zElzYiuN5nVlZEf?FusFn(&H|pGXy$aMyXR@ca5y3(R%pfT0U&OYwCbM>t{se${BjZ zsw{LBX+9xa#J-6)I-i82`_2Lz2&4SahNHbq#i8XlX+30VBbepF7CsBpb)K9zPXwbuVTXr@+4B!V?qXw}GGZY60%lFp^yxoOVZ>D* zsp($W{4c(~Ix5PyYj^09kQkI4ML<%NQW_Kikx+?&8B&pwff>375m5nYP?1tPhGrPL zLArCOp+QoRhV$?{?|IMno%dVEKWDjE>%O0T@3{80ubu4cw#fl8pRV>icc<(lKn?~p z@3>&QF+fI>1Hg>Z`2@!Wpy-w~C99wLxHb|$^_l&MR zJzpT}MPk~pS{8fR^zeN^3XC|{e7tjLm0_W^f!_R{i(|vk8o0>v<2p8gd*l=-T^$$TQJGZ_=IpyE3!kB%?fu40LxoP()3xGlg&-1DcA^Ix&O=>`-z?A z(SEOA9I90rabG;J)Ven*vE*>{>-EJU2aW~ei|bBQXYwzWEFtmkdNyU+94vX2!>*sW z21Slv&OPM}oxOkF+O%ByzAPUwUa?&^u?Y56mnbL}ffrMS3w5DRq`O@wha7K@-8D|> z1Qu`=z!hKA^FfAmTq#;Rh5%}86F-7d<&P3ZE$h?0kDNyZVJR!RHN?g|n&ZlbbR>)h zw{N!{zuFPMf!mcW9%#AAik_I6S@&vT#O6K(H!w!zE?x|i&Fe>lJ$64pZ(2} z3Wj|UzBjETI?5a@WjjhGSG5qOd1{-YbAMklw5fEJA?w|;3iN&y7>De;wsd_dnw*Y` zo&k4$yk5h5JHnWCn|jca+ldnXNCge1!0ia^9tJre5Px?L16jVP7?pX21|~bPi6d2N zg3W8LwbRZ=YTq7WWi(bHkpgiDB0)g*PCFNok)X#d_oC@96SgBG3H?nFWVDvU&>j|e zw=C!Le>2$_7j!c8BS~F=_8gje`6`}MY?zB`(gF;;Otav=UVc7*U{Qr!-TklUeB2qW zxfkZ2Nve!=pO8i5I8f0L3iHJe8*b{?>&fg+Y)Xp4_MTpd zo>l{72LuPmy1e1+e9i;@)pPQZ-q$!t?!p zz7nvOt2?Y=E1-v9=tg3nV|SJ*L*|1F#*NP{A<}+|_HyO*zx%i`NiWTblKbXW2+4&T_kVGO~)w@ESit$A|>d$s^j;L)^R9^f3i6CCq8t&aDVd;|ir&pLbS2UDy|7j3@bUkB!p8Jl?HuZS6!z z3(77+-g}I)Y-@fwo}fIAisOi=E88qOZ`thFq>yWGY1#s{1x)D+Ye3u=8{?-!4&+zk z6i=dHXN^(!S#^dC0-^I#+6a~pRb`X9;QRJT6H99?(uj1xbEQkl`3yJjg1Y9=PWbJ zuGO%@FV_)Q{I<(|lFJH?b_7R7N)QZXQ8hZ?YbU)z!H-E5%iN=75zDEodl#nVBbkE?-E&Wp zhNI784gkf{r-E~Tv<2L>4eER1n^p_d+u0!uR%EzTRSW3Sp`SrS=a6f?mciV_{4Bqa z9-C;pNsX85wB0WH>FM{JW}>IO%A&3xRZFR9`iQ_Z8rid%*V;ugH*cG9c&s{qcQ%^5 z9TIM#YV`Cy3F={pHQ6@{KVgGb33rx>(HMaj_-AUI>A`rQH%ZYpt9sX>NCIti_qu3% zOW51vUEVujQku@rqME-GLKM62Wj&%6uV1T625JMbOEBou0&ndHV~#w*b}P`!aQVOL zoR0?g#_T3LpZ9q9AtOTUS%fq6uR-fsJ+fWd{XfUhpghlgP9|qY2n!C#gR7OabGNqH zXU3FFS>TAm`me;c@(9=<+BOCCj~6c2M!yUMsexr>f4CYR`rG}9Ccp#Lh_svydl+i+0AU?@dgYRulB(GzPX&ttmtn01e&#nhe3K($3`wCg!jrwp(rUUGBubzw(5}r}ABDLORby4%NbQ8prb^ 
zJ8bEo|8)7idxw2zWZjREXqJt_fxP!{e#Yp5oZpH(3XZ? z^#Hu(K1*^OX0WHs2k`b95t^Lu!8ni`m{I5!4=t)QTeicHasgEfWp5d{dO(ueB3Su4 z=l%iRV52@@-AzH|%rYbNCq|eDLQ| zsh6h5dlA_G(pA#elUT}mY2lWa7M=}0*B5$c+4f!wADJ)f-H9hBirtr_4XoFz=ZnkQ z)$0E7S_$9!8Y~%I8A&@-*HT8@7<#H_KvtgSsIg3A2I5u ze?Q?3yi5g-={F09`dEI=U2NM08`3Rk$;9+uMEg>&hy;=m2nMNYr+XOY=2paHNzMzf3~VV*e*dlaVo}g^jf$gM?Oaq2^-KkHp4j`O z(z`}+={WAOv~Bb%q5~{GwdWw^EW)lGsZ3TB0bY?Eu!Oj$Ny_*QHkA>(@07icl=Txr z=EcEfHTvV?t@jEhls1us_`ePDH&cRnf14@ZAD4U_SiWh7I)y$c*iD^cpAztvJW9An zT5Xq|J9oZ3&2>=Yf5mmcZ}+nSwk;#@qg{&m$=0G}!%4yon(0}+H3Tqvb=B##0mQ+y z+UW@P;3yf>(h|PkKw_N+n7uN>={nbtB}*0GtL2KI)zA zjMA0FzuXiL506P*tYRf1sxHotLT>ese*p?Cow#f)=q2zYdvK50@jyHZ+|?YWv%hq0 zr0=Qn=z2(CawIWsLH}tv6*#i#NiF$aYRm5r zRX5P!(V%t}Zrq%uee=soMT`bR5>vf<2VI4J12N-FEhWu}g-Ommctoc&vCy;FLvzwu zkC8khuZbOD^Q$$$qkyZBxe;uI+O2d$gF5w8YMg{k72)jQ{^ReED;Q%BaJH)LA%4L} zz@dPl-D;%Q0Ok(vgj=P3M@x#tX=+kkCcVjTQ_5y`l2Bp;c6hyj?e;f zm_PfaCQVP>)jR1Brh~%L`G{is>;`&=y(vjup>wO2d&NaXs%YyHtSQ-d)lKlLxjUXx zB*N>>hHpsiR)eWOkS#n(j?ueM*Upkmq9BL783MceDZ){No*=hH+PFJyp1}{f*2mG3 z-^6H;B06wq*~?2-4%i&SjUy@Eb`Grg0C^MR&dg0I!nN?S6SN+T0;+f?G+6t5XQ%3CR*A_%~R4m4M^~va{K*%svp=x2HLxha&w}GW^nOdkC z6ded(V_7!i$gGKvTaA|QbC|Trv7d4xq4&vds_hXSt;(1T<|#<6x8i z1;8EWgJ{%N%fOSC4z#+@`y~o%U)r4ljljH2IUC|1JgIckr%TdRm(*SEJEXxrprdF5 zZl(Jfe}Dqnr}?VdA=&EoFo#*im74Hd|0G#Lu4*P(px3Sza`FZ??7h;Z&;_ovnk@6M zkFrF!^63a0D$15}`48#khlp5>J_y&|zP9m=Y^*}39FOYc=zAGK=KoXaz&gpZb>`t( z=K%m&xxhQWS|_d69uF7TMYi+~OP>1?jGAWEZC1GeEitdBetYJfG`)!x@8uGD^bnEQsai+CwV3^u!2 zwe^?NoO~R5Di->4<~CbxpOjyLrAR2ZS(3)ia*-&IDNVg5!|ljqOucZYzE5ouk~DvO zosudUwh}58;OYri5t=@HaN+=zBcY+>R~DAmkp^daSi{aYSYQRb!@3k|(0)=WmY5en zFh?rMuQNNmVV+5|WRcZn3YegDdOVV|_XJc1?ea1gN{k^wJ^=mJKh$viWV?4ZMQoE^ zLUNmoKs;<`)yj6XQ@s~gt-zH%Ft@|IGb`~npD+N5n8ZF-LS(KR4+^Z(R%BGZPWFG* zEoAV$*yg&}*Hd2UWNrb%^HkFm=bvaZVI+I=jNstGFY2AedI!kPS)=tc?tZg)D687& zd|+k#2*&M#5lbDuSF?}=J^2J%laYT4zGpJvee;Qf<(U>S?wmUPhD-j6Q=+E9hh%&@ zB39U^d?}U%*U}5Z)Q@Yh>-;GMi)kKO73)rPESvsy2$mw+CR_08#9t@aSp-TsZcG+= 
zd`-jNi<@Rn9!b;6tTLdQlyk6Yn3ez*S!oSE-RH!%^yC!usBL`u*E7bfaymI4k4N|$ z3xW;>14})fT-v_6ZW^kUFRd*Hv{(x8fFzkuM+NJL=j~cL9d{=VGIO0*0qyt1lW~iO zwszm0*xg;7#_;6sJ$?^qw1X6UnmlYz4*c(XQCT#E@ZNdLyKC0W_;vkp2d@0~>+_j@ zQXF!W*g^v>2W^iV_((S&d}5>|`XXR*a`F$@gP5<#*qc%il!s1zPWmO96t%itaxI!V z;#2l0I!8>WM<~=n@mIW^O$`S&iiggY$D(pM$r}@FirkK75hMygXQify=kYzXCv5xp zaNlm03|3*IUT&O^EpJaR3FdmJAsEuw73r3*RGU7+i6q2TbCArEXcIZK0Bx;9U9=7* zI4|{GU$tIDNGvZ2Wd#o!iXI2r4a$k8LD-kr-P4ftUxPb8lbmIPc6LBRV~kUVwT%JkO6u>yFFe z|GW~p_3L_+LA$fjUwuj_LnOI?exT9YW3d^)kJM;}bX^rrxr9ZFFP};7o5yFjq%&=q z=uro6#yT7fr1c^8q@8Q&FP?>K1|5UvqFuRu5GLva;lg^0{D{xXQD$t;JJ`P3o%5TJ$nnXkQOEhm?Q zCb%WYb{%SJ5hl$TMYa${KYJ}Aj!#K`mRBi*S%U<*J z)7j~Q$YUz{qRC`_1)KqmX>HIk$RbUfaGP`(4Zr{as$5nLoZTsi)Eq1Ih$Y7@uwzL> zvBFha0Hl|OyZ{&`jNKjKn%<~8b!Z0Od9EQ$A;-f{`BEHEwuAifI)hf}?b z!+U2z{2l1ybqL*-yb{6&mv7-3vVSX(x8jNgQWTBkP_Z9#(wuBMqkOI&{`AibqG|*6 z>|sP_gASk_yb0{BZchx~0+D9Z=Vt!7ZtSaT`K)BbJ!}4zhp$04!Yql z`*K7;7ogja5K{@3&b9bPfjg1xM#7}2R_qw-N(q^z<#eOu&KD&hB7le+g^uoJXe65eT!Z2x$=VXjVE<8Fwzqhjpxu~%lHu9!V2#cd_8=pU}9*5 z(3t`b;Y7sIG(g+8e3FBJzLJZjdA)tz<_JJr?wr34pT+5KYh zWRn~fT?$QX>nc-m9XZK@mjRoYBU8O~`OIf~mwSQkvspa6hdF6zkXQvp3n;)EXAnWmd2v<`0Y7i_smUsUP%S61nK z5gv>}DXO{fli~oDP)YLn6?OfwyU)J{fvo`C>G?65PT1<9y*Q)`u)5zj%vR{d1^fnz z4r3f#^Oc}J1yN=;=^sQvsZZApi4b!zCW{-K_0KAP$hNU8*j>XjFz}TkNM^ZQ`jq|F zj!JC@Dc8`=hc0^0)}_xlMM*P5Bv&7RC{lP!vzm){fGva7-Fo*yCOv7sf(-Jx0~qie{W<{8Quv@|GCM+4m`U8+_Jo6`GW$j5 zaTAh(8;c1!;aHsD)B3gmC>9}|w9j7W%ZhNn_^Ttg^XP09dqw6qcE-=MZNP7q9KAa~ zeT&&|z7Z|;qtVq_Cw*ACXdPDm=4*jwecp()?L5zQA>_Vw?$Ijv%faFu{|5#6L?cMw zw=5p_wa$Q?agWA3C~mt{03TYY6hF;0BqXY(H?T)pVcn;)c)1K?y3vBe+Wg&LR=Si3 z{tfbaHB-SR+_FSisl(kfaZtmbTRux-DetFU3qO=LwX_v(woW`KecO7Ev;CujexLiH zaR0o<(fuA1Hn3cg+#%8D*3R>#i72+j$z-V%Tg{=of<_Xlr* zkY?Wm)(|HD*UU7I%UUEE0Rxie5NR{(e1gAQ)A29F@{$PtM)VTyfk;B7%qrbDgRdGL__tb`{EyE#rEjkSxg$ zEq6reoEB5!3YGK*KoF9U@eyjDbPfeK{{r+J*A|m&{$c;0Mdtc7O{E8~TIqQ|=GpMj z7&p`nn0?A~k$(gprPj(PMj8F?GUHy1_244zbnQ?egll-Jvuj!uU3bfEacKJK8#0w# z$>mBjB6@vqUx-spAaH4w$8a?{k{oB>!r5kgopKv-z4@kl7<3uqhx`gFTVdH^V#RWv 
zHaVz84XL-4VN$0Z3Kt(ECPosD{iCfNzAz7Qi$^5ne9Muw07@rJhuYR#+6BJ#O2edi znmCN>y_DbOjNGNa4&A0II0;g7Go2%qVUu=KUG>GTQxQF`dso`^etvU>I6fbH!ZmN~ z{Yt`E#WwkE2AxSAWNSwoaF-tbc?xDzoy4W@(`m>`^N*6lx6Y z%#ME%b2W@aH=TBtM`F{<{E@g|1Qg&vg|;}}fuVhUzIg<^L1kWhm7>*uUZgavnvWuS z?4dSE8zEuuX?x((6_DGK#C`=K)(&EB9>rj}1W~k2!#pqcY^6c|6IyX{A1<~$v=2v7 zI|(OStawd&;cW#ZvOhv~J6G0oC zotC?Q!8l7eeDuSXnFdVwEztn{=dGK|6s!-}h3!c_exf3!M^91y!NCA5Si8Vw0(5n5 zhI}8UjQRZf(+4i}Y@l$0T<$YLDpXNPVRAsnUjJI&~*0C)r zq>R}3L58ovZT^e9O_>_tm^TdEnbWR;X!tdoI;Cj^qpr~VX*BH)+Ysz@CZN}92Trc4 zEM!4C;bxDrK}MwOP{K*6Z%3i=PNooz9RLU28#X0YR~Mjd)!s39<}Zoy>Zxkf~8WsBM7AQr#Hs=OZU7N@%Qjs4Zt zGO+ZCR#kwsikckvjMR8J$^e|lfVKU~QWnGcy|zNnye z?JjOPBLd^PC|N$Ni=fDuY5Mk^3@xABqPlv3Y~D3mcw_5;UC#^4izXW1seX^j$H6^=JOwlH@3Ulzc(Fy?8{G=@8=UH}a<>|2nnUTQ>8CyWr#(xQp|XnmaGg94=c) zKuezyfZ7uZT5bTUv7C~BBoS>RMqEx_x2Yvs2K38MnFjSAuu>URauJHJU5_geMOlDs zHdwi_@&s0F)z23`^PjBpPbpSH^InrBTV5B6tEHtSe%qo_o}moM5_Zdg#Qfo^qgD?y z-!$jiO+d{)#A4M3Saqwx+_)$O-I0f;0flF1n@F#K`tKpPh^uLfD~w1{JchIjLGWK1$jMB=u>%~1S>uxUW8xpdTSw;&PIT>r(W0uzXhKoPl z8^}>=I-_yX|8_k8_s;w-lY_cVww7<@TEh13$wQl;r)?Y$a>_-jt#Yv_mimv1u+!nk z05Ix7UAPCXAa+_`Fr!s?O(9~a;hCS8I4~&$F;ud5j2AW@xzlH`3tFHi?58lNeImP) zQl>48S_h!65sFoh+k($Pl@GpzxFQxzEleUpxM4d;A*+q~JRJ6Xi00&c3tasCY7;{fyWps8pH3OL4+ ztq|+Z#a75{*CL5Y5Pni8`2n|9sGS2Gt_W0c<(S~!mH0^&`G zX@KpaS{5ig5O#LIa0a0Bqg_hUxL3Z{ASHJ)7sB^`mD{I`ilBt10?Y1Hw6&x+Hufwc zto-AH`buaJDRNx69>?C%f_rXW`htjz8*A8E*zZCG@SvA1)2ft?uB zn)G@h?fRDwF_1C%GWES5TRZ1~YT4FTRYVyUO>J5ifTtE5jgkU_*MGUG%UH(u4+?0Q zj(x<^UYP)0sCAr|t~8k}Cyw=UCjq_KdQOC2Rkkg7gxR5q@U(|3C)P7*Yqf(SF|(LwcKI$WQ==; zV57e%5&0;COwb&Qz3f!fIo&}hY>g`i_8XjUvMK|$(ehdQQAH_7Ll$>I5xrxp|0fwR zts2gOo;VEShpP6P-4&J5GWeSe!+aww-8Byz2e)qaQXG&Kpv0HnY#L|iAS;a2nMiQ9 z-?D4x$x*rgpn9|mpTFgZ@ccV^{rm3BIqd6|HY;N(LcjBivP_FRI}+hwm9L9J$pVN? 
zkirp6^}H8U-&KdsKxmd+CeBx~qzEGC&GJQR+nuYnK_y~Vn zcG`QLlps~P6el%0bVULY<+6ia*FxbKe|&fZiC;IKu4@(f4O=^(!Zyo~9~8qR3J3D@ zR#C-13GQWL5G$VQ8C|jH@8p;Y!U6G2Fx+@SJ9L(=HnOPEq{ibbj2FWqV`{oNQe8Z- zpBJZ0`rz}>$5{9IwAb{A$=8TkzTSzxi(FP3rygF7x(f0Kr`tcHw8x7Chd_|{%%-yP zF(Z}{zo0wgN{AqqH@_18U#ojc!lVMkx#(P^c4x+B-)hxIt4#w0Q08D%fqDZ=Qm?D~ zm{foK=Jb95H2lO{<(AsX&i;=yK6^7~~r5bqP^T2Auz z<0=RPsb+AQh@Zg~#2?QrfmME6#6%u~5W{zVVj~qM_6ENQcEl*&-ZFb$zDNOItyhvE z`(g^ngm17{0g^?hoO|nzTNIIwFucMtMaEoD29iJR=5-aMF#R(LKJnwZB$+m*l%F-+nItS=M5l%M?ntq*}( zbQqUt+7P$$#=<7wNN2i=OV?Nor1-<}Hwx8_O1GiR?43q7wy%_Fwl`QKCTDmrycW5Y z^a&mzrbLC)f~@Q_${=>5W9qEf8CtDwViZ69TeyY}D>QoOr|R%c8hCwyw12EmnH&cj zgHtE5{aKCgH1!veIsp@hL?uI`NBP=5X~bss_B@oK=>-(UAd{nT`<;uAlG_8d!@hv2|a5dUZHey3+1qIM31asSh^TE*qeN`a6U#Co)5E9gh_d~h{uwX7m zMF4F#5k0lDd3J0quK%Km8jFbxIv7~2T*tVxnz$gnK(a6N*(gnfo+c|y4R;}CM#7o& zW5K^Ig?YEnUz6@ zKRwcgex^|m^I0I2#AR=yEr?12<7$PSmgWi`h3v+hF4Xy%acLyXL)$;ClL_Pn<9v(T z+0LEkx?J}OH;F$8U9^8IMt%2CdIeY6F437qE-PVSb0ZbY$8k8dDpQw>O%p;AH*~If zWE92`twW0DEoPmfJ1^DJCUIWwK?1OyCvIlmcDZy=VV%o&uV_c_l@OqG#qbt?UHo+g z8wrkO$Ss!F0fOxi#EXvpXOz6HNis}{ay%ZS<%$459|Pb%tm?cAGGAYw9-ed}+lXvsM%$A5WYV|4D`9nF%5iw#k*lf)>Iso5D*PMdIx@ z8Aq?2Nk+|^d(g)URBJq=4b9L;^f$h`Vm?xFokd@@L9RVExJ}Zq*By9HSp>ED=lem- z4nLng{kyz*zpcCO^wU$-K8eP9FJaqGO*V^lv$1==s#ssbSS^qSJni@k;4lBTYWbJn zulE{Nl6;GYAu5zXLX;9F-^hBj^X>N`2Oqu`xH-32Z%!!tm-!OQ6m{(TIg;5CBLi@! 
zuu19J%F*^CuA*tz3iF8b)YFI6EV&T96wysq<^^J1cJ}6U70rd`0j^zgwy|N@Z?Qf* zRWP$rVTKBA!oDYEh$Box-B0ZZ1TAXKFL2y72Ulob2MTbTB8jrJeGqh@MO7s{t*u-K z)XhdT+}WJisDka1*Tas9teMRjxVz#^ku1vB50`29E#&FqM6ARFFadXx{*YlOH>+6a z?~|+G-M4%>MokUz0hy{5@FrU90XZVL^-7wHKwr3)XW3@dFu$~LT``yZm-p(wF;wh zGX~VwRRG@`mm$LaKb%?4J_Q&MG(%KB!&5xcErCvL43!7TdM&NvF%Gaw1NiL^`@#g} z+_r>BD?tocrpgev$yH>!akWn<<(fxGS^fcUr@?u2dKT%zCn3_K@R@x}1Byra}7RF;0)0Ou7*%1%AGnT}eLORIEgSqD% z^OMMDsTSC*jFRn5SeBZ8^)g%fY2j${UgsMEfDO9Q%N?dv9WBAS_&1MpKiy3(Icn$p zqc7n{V*_zN9Z8J6Y!X~1j{n88UX!QQ<7q!@>z1yJtr=I^9O-7qon>y`io?NhA}!XU zwa&;8wE1{WN=a_!U_SKhQ$3z8ppY0zaZ`O=8B?$Gg>jml<@B-9*+u(a(rV; zFnOr?^B{~}B1xx~z znxXU8BgncctA&DCECWsk>g}*9vk0tvKIDVZHP?c}hDGVaG1%-;urx(IoDsAjx(>ZLDXG9!Uh&qTMpcvoRNU`R z9YD6%;g81i5tZ)K(qat251h*wpTPMaojU{lq>{th{OH1kk3tl}t~-M?rK-$-pI=i# z7I17a=1zUCMFfo4RJG2)R=mey`$l#{e^ZJ-X|e(pZ4~Un+euOI{n_~A1!XA2X2nZ#Ab020_v|EKOn8_zz zIBx0qo-j19AeAIFtd(!?ERsl=PPwxi$?iz$ORQGpLKXpLeB+Yf6b( znqfKZs}?CEj8lZcScqA#HAVQ5^D_S&(IZkka7&WQBc@7t*Rvm?lDUb|S@WHuJDni}5%m zQf;zvh$({2h`^T_9pjeWHu7oM;f}dyiXnixc1DdtO1aWX&%c37vyLp_ix~|r7 zk^btZrxN0)5nrK72UlIY{Oc~X9wBf~7r67jkMzsdD~15f1@nFrgHa2KOaU@8swYJc z|1+Ed0ANNy4k?aao(&Cmy({)AmHkEk;rYkA0My0z&wn@n!X`FK-K3YIsxyiXz`IP1 zUH?yyTrZJwd21&tP5PuL7D6Mc+eJH}#^JJoat}byuvgNJV%}T=0$MuYj{m&#ECj6{ zf4gs<7vTD$6wE%AFHhqgYNdq{Ry>q zaBu~*&J0DzOXqQ*@)uF$PaFz(c@Y6xBZFOdW3i z^rSB>r>)YpbJ~h$F4CU6udt}aj*p&^ER{KbdKG69-6m|mx@OW0z+L)PH`AiEb9A#i zG~Ij^5)^>ZZCruxhCSj>x3qh8f!@I9b%OlPWGRk0tR)biFXyx=M_0-T=-t90XGgr`@|6&-b;VjT1@Vgj}DHr2j-TA4F?Z} z6a1iRlSGcGo#+3aS@`jP(xwWshAh!GC4ZBtcYnUeQaM>QZnefpKIm{SOdZ` z?y^6dpy*!E#A0Z|MNfV?*!ato0)W66p#DEeF_1o4aa>w;m?T$q2%Zzs48G%ew%SH zMD|lYtBj7bo*Er~$!u|x*Tw6P=1nK;a`R7j?)V>dZswKr81sj6UNfTAEf~53DN0Sf zac2i$5CwHa8fGDPU)~JB?;pK+LQ-zqrk3Nm_xJDdmT;2$Flc#%zg^_7Lz*jf-^Ey) zLfk`T$cGj+(tpY`F6%f0h(kScST-+U!e6siU(U$VE1o)uPGM{J^QY6PW?s_EZKD-?o~Gv( z)XKP)4}O!h$MYq;A$XYG@JS%{*J}L?bW{MaPd2~*2|!J!4Yi^L(Bqdlw^q;b(m>9C z$fbbP9qMWFLr~J?r{e50SOnh0AMjNl=*zF2F}Nq{4-$jL=mvL&=x>fVx!l%PIC|v{ 
zTooWZ?MghR9^Hp})`SP7P7xPp2cyYfvCxdZ>)C1PR!fa~NF`y$uY5V+?3WrWmmGWF z;PL@TynQQyD@8LwBDRDqjlQW^=?-QWkf2<##<`Qth4mI zLkGso<%GH=BhsX#911U zc%Q!hR(8kV8Bf1HPkGfZ;G%Sxk-;}}KG*>?h*iY}7|_b{Nz&=%YMaQtPuc?Bca4f) zee@S)NU~m3N5je8+GRQNg!Sn169By7)$Se6a-G+%&B^&eyH8}=oIl>%PHNxP?tU(9 zTQ9ZY@f_FjAY<_kIqD&2+^+U*y^Kh|aw}U_K>DEKHNmqBB&krQ8EHB5IIX4RNV32) z%q+nikn{N~UU;AD($|#tDx4fT^1XIG0~C}YUtQlh-W{O(^NGT8_H-%1(GB|N9?6xX z=Tlu=jM)^qwNdg`R`)#7iBp3MNy0zp0qvxoyn&()IA5`H8B43~eeGp!#6cr%1&&;93N2fs^twM$K3 zr#rc{SlXn{HnG~r0T}_G-1mwLz|oJMl)k(nxohz$M;mslF*TTLZAu-~qe8^c|0^@8 zu(6i#shwV85d5|FLyU3!Vq8#l-vjd{Kz=o@Nk%EX?+tWPIe)g@muC0oCh;`j39Pp(~x)24M zY)$yJTT}WaF*ma>*wd)lNFMS z{*|x3NhFnH5F$FcZ6*H#P=Dd8iYgWV^PByUafU3-j4CsoI6Cf{fcvGwi{gs^3vkz>eX~9|ZCvnTA z)K$s;#tp3?`W+|#!zS)iXFzb~V8*}h1dzh`J>_rX;nFN_2S~sSKE#PwwK&iWrZnwZ zoN0QKN!>K+h}4vrr*lYdzyN$xVaH>M%BIAphuV-|JfH73D!zn%4=sPkVmkJjW;I3* z{mCnwq{!nj()WtYYm?jB4L4jnPKag}bEK25uUt9fr#hhCfu6ny@=iS2dB>y;U6!Xx z2Q`gti?8Zw;Z-)>-yd;K#~Ii(t+t8a<74 zcJ)e_0C?5)e{x+VFzpxnAYtQTF)Q18B}7{J<#1T-%Z@uT)Gm4@)N#%rQ&%&FaJArs zaGN~IPa2nTM4PRhcZS|~<>4m3;?c)D#EGgKv-V#Bk0@0K0oMia)(dS2&%@$BdmDUo ze=o_Ubp1Y)+wDCLm#V0)0O8Cziu`&WyAwlBZN_V!f^HKN(NOS zGlZ3zL5K=$a}8{q`cx3eqvH?apPoGn7tEpAi!tEd32K0TiJ$`tuEA0&XFz;wwYORX zf{oy{q(lW&9QJE5DXrb{+X?wG(xe0~+a-OwUOHSzk{m|?VWx50EV)J0e0tRD-!rA- z(z6o{eadWScR!#0EFb2Oa4MQ!15$!+a1e>kZ^<(Ia)JJUSiUKm(?h@BDvhlJ#KOra zV3#^d55UVBj4AlZ>AzAl3IV2+2R@W%5?pQ9_xi0F+M+W!>^;isKTkT=O_8(;FH+8f zC)*g=#S+{cn%lP0*S^oJIm&IYQAL+hT2R(eM&C;NsMBNlW8e5TM{kTrB1!A@cTb}a zoyjApyX@ojLjD1IIm*#0i{ z=_Gv}Q>7g;hrL2RAQ~Yd{iyWU{pP<{7w7}Q@Q^102NkUv69$Cw#|r7?A%!CA2Yjny zSwwqk1kvP*En9Zmt#k?VO(vT2S7%bsh1`Fvo$!YOeA3#3nyvl7ytF1?r+OlaLRi#x z&*Dk+NrJLfZ6~OS(kVf?E3XmDj2zrk=K|`Cx>Ky_h=d3{4JfF{Q5&ZJ8%Hvd0NBFC zHoqEU(uvLrKVSM}O*MvG!7}wUXRY8*^gAkjRE0tMvBqwes^sv-n^D?RfNB$BhA3_R zmzqbBK*pjASL6L4CB#_jkDqyQ_7hGn9o(E`9iMF4ZcUF5-M5|eu!c7ETQud+6J_;h zQzQ{&={nWZA|x~EA^s{J)bN`Mg|zl;GAee9fkE_f3W7I=QUd<~N9*nWD&7XG2a$DT zccMBOMH^i@1TsBn)H~3d{Z+=B7Q-K1B^qOFK(Z=F8YF#`a%qA2H25ISS_vrQ8ZmcS 
z^4V|esY_AU8ALXJ4J+un-CPJSI2XYKeMs_NfZialx&M?V0M6~Xt#f0P`UV48oMr60 z7N)_6wnA&83_|0UaX5g^e{j3crC0vIgrKv4-Ng3&>Al6Ruy=hc5q;1ugKI>U- zO(DMd<@USv<_>O#Yy;~dc7?p*AXO;$PWi7IV>@puk>p-L)z05cm+RKM7f^RgtuS2! z(4Duga)o3eZ-hVX8@+Fl-md<_jIw2z7EcSDCaOkuRdXdhYbMer z@ht=mpK!3=!z&5~b);GpzZu?5DvU{%qM)>(E?pzP-6Nm#17N)mNn52Rng@lpl&I)z zvQKa}`nC;OzuPNpZ)!}qegmFK*BVQL%9fqBhtA!F1}6cudt~w`APF{CKII0h3)k^T zI-=YXpO8P`A|WoNA{S*ae@|C=1_T$go&nBw&6>H*Ap2Wpo5~$sH9sBXSI38@rZ#+w z#l@a@V3JRW+?h*dC+;!!2n%E)2Bx2Bw0Taiym=pQ=J&ucQc&2%Q|g%kBt=2U_XgAX zO4%Topn%e>X=94&<)l{LRF|LZuw|Wd5B75$Nb=Z(l+xB(3`tUNIVBy)vEzEbjyIIV zJ4z%a)&a%0Z}oP~3jM9qU%L*?DN=EBSNgU^8cxWl=7I5>xi?n^4gvM?I~Q_`=fhsh z9^ba*f1NdpqfdgDxk+w8eMnxG5WmqfV`W;D4z#H*sj93zm2WwrNezXrCd6d?^Ex!` z?}8`(H*{nk*?H;?z+3quDQ`#fxkrZSP|xnpUWv%$gnjORKq5?Ba7a1QL%z%aiZia{ zHPRV)P=D`=##+S@Q$zAU*@_C}V7K#*J5U{`_0kGmNlBzmv@b*am`~T0bT}?+)tuyW zeNUCkd2ovxCDZ-CxcbVds=BCMx>LHllm_W;q(Qp7Te`cuJ48gf8$r6eySux+`}ls} zy<=Se9gkyp_Fi+XIoF&|aG;1LUc(71Du*$ic1^N;4+^y~k9@89_NTM#ON)!9yyjuG zW!T@#9#pqkIofYpHIy*E!H5ANYN?J<;||YuaCKh~ECE;&3&eL3-C!cyVy-gP8$fGj z`vT(@UI-DE8kewZIaB@+C?YFFuM1}|pZoSa`rZcL`uHPd!Y;i_WbW$#jVEEQF-!-{ zRx#mDFqjmy#Zc^%t*gmQwVe0{S&k7~rY^8+$Xzf9RlC-s;_Qa>b-_c-%h1Z^ zafepUD|UwafRxprA(ZX%J`)y?m$`o$U~P4%%FEDL9sE94!@9F97DIsb>%x9;BU3IR z#1|Y%eK`PPzK!H7_@K29dYUj=F&)L$jf60xEhZdxYs^;ePDWVpgSACMgbWf)=2r_? 
zzMh(|G-Jy2PE$BD9imObt1CcLLf}c!9!Uy9(nA9LO0@|JX#<|G$=#Q%fQs`1qrv|m z`&rLi&^wosKg++@-o1wa?zk#RI9~C#rRxsz&7CGqh12q)q-e+AbvQB@QgX{bJg}*{ zT>53%&LWL$kl}$?VK~T`Y>?=exiH`u=sBNOKMo6?HqIvGkF(Z5v z64EnfB2ZAbo7ed6tsol;slsOVGy}|SO9y@=)=NT_EzO~P7Dxf1BX;<_kx}YxUg)U< zMB7|*-IA*MdW)>~74Sbi{urTqZ7JqfTn;3;O>f=*?(R(vh&6Tpx^tw{{%#eD6p*Fr z6Au#KKG&@@#Sms?hRQ{xBWb=8Z3 z*iESmMgirNsqd`~y2`7-?M_FSThRuJ<0{IBjML?yLZSV9|JD=xF^7r1yH-lsJl&lq zC+khyu7wJlTYfu8<=f~jGk`W@44B!nBjslNv5E!mTB*WspL=BGsh0~O5 z1>tFBLoM!Vk7iQN;7D07XuG};d4W_6eDQ$|o>naF56&VMJ zLVefkIP;E$L?7@+AYTwR4rgA}2Iq=xgCFQQ>f#4w=5j(jS?@W1uXC%wKvE zt(I9ymmGyKN=(A4m4`RB)LSoDZd%%^%`Dg!xQgT6?PNh+e2#G-NaZE$0+WD@M$6|2 z3V24^*84M!y#?e=%+P`qG$IC0^e^b>S3H_<5-(Ta{7WyGwhgxMeRmQ_)Q*p3_@lKE2B~ayq_*C} z*+ue4xYuJe<_3IroKO~{VyoBvPHP6Zc0(S*eXBx({5*eWGi+eDH%%`~a$BmY5P}Ma zvR@r54Il*Jo;BD4Lbs!-88EQCZm?j%0HNZP7<{idyP#9mWwVe3`;Tx-J9*E(VID)3 zg{ZE6WTZRzpA(T22}l0>55mV=C?W9v?Wk{u^~wT|8ERUCbHRElK)gAV?QU}tVQ-SC z=_U*R%y6!Z$AlEtSO#GaXoLr(a3rVXzkL{FXkr|pLsSWl6MT2q7ud9JXB&ZkwJrUu zrBS63W_iv*@l0B13uNEJ)W_8Q4MSe(|lEJ(yWAu@*HuNBw3IG+Np+k z&T?IYj*foHdyzFpb1evL$lufwP@z>l>0~eI@74zcFK?|jlIZ5hg>r)bo;tR5dnt0K zWakABj){mg?EyP&U@6;zSKp0*48LZ^tbwJa?VS4u>JBp3#8Nh05p&Q{c7x6H_acAL z9=;2{2wRNIN>>a-eC9Yiq(Z^h54%Qdp@`r@oE|q!-kgfDK#!%mwToy{(g)^*CAR=6 zz1Oq|I0|vx*!@*<)G9rcHy-0)S&ae|vf!zpS~L)@vnTNw+L@|}_gi{z=z=XWC8q17 z*PbZnI{IanQ?)?HTK!IeYE7&9q#FfxT9-gaSkmy%0d`r!-BhnQ>((n);@Le>(*LAX{{usGCJZ)C~*no0nNMzKpMOIX5}j&pRkPsR&M= zZC7y({*2#{dz;@7n=;G7j>41p)+xGaY=w4rF+CV!2l+D-tDX28>ttWHPrs=2;0~(F zB{zzvA$`3X8#V~CbJ+~2SU<5`VU|S30OV*E9%6hq`g@Y=NcGYsRy#ZQAI^=}cZhu+ zS^VUbG?8~hJ?nh_jUwWU!L?86<=9-k7x5=iT5d(m3d4IP+IJ9%0G;LZBe{nlq0_ zjr3F#>D7$?dEhABa0p8uE8E#oDW}*)WU|gzF|3mO)|eZ9H&r`rFIoJq$K^*u1R5Oc zkF0Ale=hp(HD98FuwJ95Eg_jp4_$t=s7XzWe;7o5``fpMWT6KHo2DY1d!zwX3Omu~ za^u^A8J7n7E!Xk@9o_J#=Wb9&v_<*ZLF+#qP&8S~5lmOW8>8qZ{nB?Um;J?m)xnUC z0cQdF+kQ*nSA^Zr-@Kpgb2Xcb44EYYt{sFJnU#Oa{XB9n60R&B_J1awVJxaU;t6ucF^~b6vOOZ zipJq>CuyGa7lHtf9qB^6GVKrf>W%~78No+xao*B@iBEkc5HwT*pAjMhO}m^c6s5#n zD|L_$!yM`>sKwyq5nNNe8d;*h!CKY 
z<5?~wIr8&;;nuCdw*4jnTaEX}mAn%!F(T$G6U14yry6+lF}pE8_6By)&w>eA`kZe( zKV1uxHckK4UW0!rNS?ww>6i7DB|iwJYsirmY{>Zuy@UB!glUBDm?!_bfpdUQFY;x; ziblfOopvlZBb_Bsb9Ka{izkq$k_+m8+?_l6)dz#yk3XmyrDKr6sHggGwS&PB(mo1t zZ0fEKbLm(_1Y{L){JKLn*<5D?>9@4NI=l(#UK2um{WZt6m(QzhQWkJDhIERk!fx?*f!WwD-2D8>pKi8NO50~8^vEz| zkPeCPJM-gEy<73PaYRWx@A`gXuH5l4*-2X{-^j15dbh93j@L?VP#TCc@3@a*(=b`D zLyII8aa@B$thQx|l33%X(j~D~#7URsnW!n> zkdVFtN*^IcHZgZeawNV`NNmGr9gI-$aD1rhDgo;yChD!tZ(17PenmB3trwp^!pt6c z*4!tgRuaYiYZq6pA^unJM8S-bb4I1o%W47STz^)zxx3-F=1W z_c1Q2nbB17QFp}bI@x@cWEHdkIHt@bC+Fp2=WCt(4R{12;a=DPU$lL{-n zAO&^UoO-@Jo1Tkb^cxI@#D=TYW1)uKKI<*Xc<|d$PmQdm%?E`P_%Pgy8WG+ zy9{wjniw^DAbUkgS`Bg6%vl!RdQn*df&1R40!(WXLyZ-E^?w7x%9W3&;s0Rx8lb;e zn2R97BLl#MF~v#8h2r0aAK@0DnM6K^2~gI&@@)D;UaMWg#2=O(l2Xvist$Gs zGyiZU-gZpn`J`dRsv0Mc*@Y+d{r6J(d`z6GSk=(q@HcpzB?uktx^%f?q%iCpY35+n z&ZAXw|8~l}NGi_&z9KGKKG81+l3yvdi~F~H+6EY1-hjO7)>iMEg8}%7W5YIO^*Y}E`_>J z3`VMWHOG=kk`B8+$$Y{r8|iNC2;)f3r?0U(`Q*G?9evJUa8{-*cuoR0R(V_F3m8X} zVNB?rx5rZG<4;_S zKkO3Mm5TCQvz@nOQGF@EYMKRS4rQOv+ljHcCeL$F2slm9AO`9XRr!{u-2hKlqC{7FA94=^!>aH9L9h)uVP zP1!Es;)u0=!)M+7@T?e-4ZEVlSy^-d2~PJS2<`=Y+~tGaJ`v(5+bUBnH<_oM{M?R z23yGL_wb+@1`wC49U5E~!~m8e78;6|+Pv8U4&8axH$smhYN25CJbQ+O*kBOT%JTjU zslO1Q>%F%M6|D?KwZ47pmiKLUKVD!PQ~rpQH;*?TOUeV)Xh1izZK?6Yh+m|oP9uvJ zj$FY2GUqQK=le(DsIN5hs9OpS?y&+Cq$5vG$F-}qV0EA-RPEiRzYkAc7g}-H%#cWl z3U;1I_c&e<1ei;?TQ$s2Lal}19v*DzKq^9}qA2$G5g(x}39#hD1Qfw7N2~(VaRjHi z z)#A8aa1#h-xL`vK3{H&Sc$ER!%K)AU5g0T_vOfm+6C4cYndh{kw9A!TFb0LO(l{dm z`AyGl$T(fIAtVu^x?9TO52SY`Q9F*Y2YhEA;DvdG((65{@+n}@Z;9|w%Bg_P!JLqX z0m+qPR^T23K%+O&rDTPg*qfDG=KXlq+$r_+{X+JQ=vXceYq_-V^8{2S?1^{9_pQ~P zE51)xHlKd#QU>S@JlH;)j- zJ46Fl`Ff~$QaS}D%ySw-IZ88T|Epi&1c_}8br9#6eHQDgs`a=uJEgi9XX7IrLfpU% zID`Q@kygEBSO$t}vwKdlio9;BHg>sD393*sf~gv5Uk%#eHNgXBS2}23{(AeN^W5E> z89o)QCZ2Ixh%v`fh;n`;g!0oMGQ<35e*Y9L_+xf!ew>Agqn~-_KMhZw>KV>#eFAhC zRN8a>TU+x93=!sF4}h5KRpCA^`A(?fjDCzdh>Ifx?tcWRMGxmho)Dse`!iA3t74NO zLgS9Idv%ykN~=Zb42P0P>t52xa{gA}RrSFJJT1yh^1<}iLW27eFD-Rs8QU*KE>uz- 
z_WX6Wf4xB6&rr&_U0s%jJzib9y$T>){AJeh&ig|MKvIPvx0@+jDZs2Sv@muRVhQ9${ znZ0^oPs(WGkxnjABL$w2viY{pT_-!t3KpJ$ca6AdcndR=&^}`L)&Z(!Fw)fL$4$73 zsb13N*p{2_FMAoooMapNTbmZIK2=vews13Zg`9jO=V8ytaRUzxuU9+wyt@CgZE+I{ zukk;SX(&5dt6Eonm6_{HqsC!oI!tH<&>VgG*ba!Sy_R-sV3!K2BNxoeO23b!qYT_? z2}QVagu|8?E|)(G&eTsg7W}QRXZFmF1;ehibUZvY_z?+>j%{}(zM76*#>_Sn#)qxM zvln+102alrKm+MEx(@2kss z`IQb9YT;aMj2+AQ3xW10ywHl)Xx=apvjzqgysaZVT2=R>= zt&37#90?>YF)4yTH-%?Z4NSRT_aga!Itp5$l#KUg(*O913MqCXx^Oj!m?7; zlOhBMia=YzHJ7X_S+8E5I0%N1H# zUOJms4O~uN#&_+`zFRJ~;AX!X&p9{BgGulQ_6-FdUKw)|2S(c*Xt^{7fwDV-2ycgm zL~10YOQHh1RCYpBgV2<56uL%@lK#50`2TP@R>+MYf_OU2G(gSJnWtin&#L^yt5OJj zTMRVf!(2#b+-hEjAfhK}91rusJVf$Sb5&e&h1I>0W|q1_g0*DLrX=7Lu)3z;3+qBt z4k{q3>-mZ%(D$(yGzf$%UAV+i##wp*hn4hNA|uQ3FI!raxEPdnHAN zEt!7oN1-aegU`=}nf|h51{2Fl=XNd*W75q1-glkP5KrA2jXGb{7n2tBLVz1g9MCK@ zO&{36^gwq3oA14P%q4^WBN+U8%GbRzMl9>#w<$An-prQ2v{r_Dr7}!+>b1bkJ$%V- z?)WvI%tuIrBrjMTzOB>9qU!HToU4F=0{L^h{70Dr<5}G3Gh<5+{Wf^d^!P{V$)lllwZ>s;)S+%&w~}i!z2SZIZz4<3wzN&YQJj1 z3Sp5lUmv=RrBqOy9{RTmRX{8PSJQ{5nYTB-Ucq7Q1CFRMA|6 z)W(gA7?;eg*%iX+oN0l;mx6u~4 zn)gD=uH<)mW0p7o0-5IMTnSBg6g7FAy(L>A=+67E{>KnhX= z;f94={(%@)k8Pe@=Rlu-;iq%j)6`_MN3AN~nQgHF4{ZlSrHM8p1pT?IbROaR!?Jai zA;YXI;3LM3?P3I80{A#oTKv*P&kk|ZsHYqGxG9cN4^|uiUjmLzZ}f)&8*YRrhd4_A zoN2v=_q_SLez_GyY4JzKh*p!SPGxLDf}w+FG4*TUDbuw5nNHqPiZI#^66=iyK7xS6 z0B?D!l3wmVbZ?i=C)su$A~tmsQn%5sea%d8)jNqYjE*2QED_?Lbdwn&E-Vu3@f^2x zZtdP+MgL6zh&z(w&<%ecCirS768|U%z7gfv z7fhkt=ZOA5T^zNjpZx2-D;-%C{oM!8kf?l{`(s&Y3`SU?Jbzzr)+~SK>{mPKwK$nB z;is28=qw@Bgk#P8iv%odoz!Yj%oVVM4T@KOCTYPa)f?#T18f&C=9ujerdC!?7)Fmz zPKZdUF?IREjfseedTQyBXL1f-48uCLgwCt{-mKk5Sq2_UJ3z)71nZZf5qe|47C-=I zF8XP}YediufB+YT5=_we0*=7kuhGTumD~bPLkg|m;{p{rovV(xt_leA!}0$~imQ`tm;)`jvl;J~x{t#ZNh)U#`Y(AmB9cMNYQM z33o6DDD?iwC?k#|GPO@KJ$^8AS@$?-VH@Y_QJC@DO^}=i?*Q5bB-28RTUFTcUI4bK zjq~?aj2?VBOy?NK3KCt>@I4C8A*ovrGN*f#1^suL!eCOjhc!1h1_lNcu+TmmS`Bk! 
z>G3DU;W?$2tM2wC5=7Fb$vRHG^)K&@UT#ut8W|Tg1H|uOFKuUy+UF*ZI-K!YDl*%# z0jpvSi+@voX?MvEXGfChlWGf!R5qmM!V_i9x;5fj1hJ;3gK_f562v5|TgpGa9Z8e6tY9MT~@ zq*(&ukwLq0^r{MA0@mmS$j-t(^|>S$Vcxd5Uu_q9BJOM(f6#?I$H&LlS=?fRa;iHt zqV(Q{z{<+Z>>u5hrO-8>g?@l|9`*EtB(x-`_bWa*@j;un_Rez^TiN8UecI1z+uz@J zk5BYXd(O@uRS)GY{Op&qRd}0 z%(2!L+Ht-_e4l8soGE@nT`2cR(GJ_=i=tEc^`Pob^dnH`x^OIi9? z`sLJHfwS}T`vVWXtXoiO$Og=+y>kX$D-UqyEZk;b{?IGm z>%295x&chTS^znyHIQF@i?OzSN5el{axDM-;%bcuFGyk<1o-c~2XQ}g09J;_-Iq7! zl$?7>Js<*qg!N}qdpHu7;NeWab>sYwpP@U@%y^&4d;eC-q<9-gV%SxIz$5VfxZ`}6 z1LY($CEh}sc8^%{3IOVF|DNW+H|wf;ysMK6eGn0m5ZM@I6#^EVQpT+%KDVyl8Kf#)@O#_&vD!;j9C2_x@wMl}3hG)BfYU*P*=N zn{nQWOeGin>%_ER8Kw^Q*sGJO&fmv*?W8D<=v4(vkpf<8Cwe%*6}6j7U?7%=V1R=J z7A^OpZk*@SGcf%9fU8IJdWll5R^xbe-1{<{;q4}`SVtwUR(vsgoZbd;7D z=z+^f;0-$<))bI6f z$P*9T2iX{Trg^b$*jiJg8PV#we_c-I``j>s-RA48J(wrn&Nkj{J6p19(|0mdl1SLg zV>Nv&p3DH}kG>c__1~|6x8clH$M)N4<_0PVV)^2e-v&4^?xfrFwXF}!waTP1#yF$A zx2wDB2c*Cor@FQS)DUD4NKlHCppL1j= zf^kC2XOBfq{!T9|erGkm7Lj}LK%oQ909!_DZYBL#xKy#YAS^Qeup%%FbK{dQ@zWl5jfewZRi66r@=Lx?MnWy_y^?W0Ub z*MygR=|y*%OCAuc$#IMl$r^alINi8TYzmXXj69ONd!icX==J{sl`^l{^uhHkIyh zc3e96s3|}K{nV92P4c=4OCfwb@+HdLH|F;VLtfEH``$sZN%I<8=gqo2M*r>G!^VaJ zM%!q+)H(>6xDz&B?=%{B4ZKVsIIeTKKZ#bNS-?P$JBjad7!%T=l958Sl7QRnhDNUM zCQ`9hOulvScQLRl*G`=2lH+Vo4N}&JO=br``J)NfjSoV%4lwWF`whYd_pJfUuy>0G zjP1kQ8HG4REp|_``}o~$lxvI_%=*Mhd~d+S4Qq!gVnSj>xC|i>{-pr`rkM!wqE>tg z4&hzN)D$bQHNAs@r7$t>9TwF7_S5!4j=~AFf7MXMgK$}mf?s8CqE6(pIK+__iK9}P zT-l}^evoLqCZ%!!Z$XgMbCbEj<7U6_q}(qTGq_sZ7*}o=1Od0A5oXTH&q1t#ZV_GY z?qY^yqX;TY*=G_c$&Q9vffPeQ zuc7GSXexz`HybH%BX%AI7$HCMsc(gteBc@v)d;TrDSc_ajSki6&>BukJ>!>}}z$P};0qLoXZ9rrWLLtC5= z`ccJ-xC@+*RNOsy zRrR7Rw|$0EdY846rSqC`Zy95X5B7C@s|Wc8dS|iwNf$CBGC2ol2|Pd=KX?O+&-qj~ zkn+B>sw9lY>720gU?j^i)iR{wTF2(?LT1PvSMF=-0=$}=kUYrOfC zwTPfFX5pS}?rfR^dzSu9=0dLpz3kt|xW9{uVrh;8kD`1D+AvY<+aJxcuO^-`V!k#O z8`muaR~*#kaylDn&otR@5D!2a3rG**au`6Hc_fO%G!#xm51JARwMP?j*J^z}O)?Tq zy$49&g5B;lGlygnWL@o(13z;;t!}R;0vCwu;xJNCD$FRvy5>mKG@r`0{%F8C5qi;p3gf#vu3+rdA7 
zv;k7h-GD8ltV8z9*-|udiXTU;M<|(YP0HB;q=4CF!9Z`4Bv6>RvMY$BKRh_=XB=d{ zmRI<;Z{^XXhQ@`&Pw4T%54}Pbgg!R1Du|qW40*b;z?*2Tr{{5Phf{dhmL3uD(GiU4 z_8zCtg|R3P3!eP++}~!MQ2jEc$`n^%kAXg%th%qOUm?5QjnQJcJm!5_jp{NgXR)8j zJh4*OJHQaGUv%0I7h&J&lFFcsE!e1``TmeX<1}KBa5t){=@cSVhlZB$iM&bESo#SL zoNRIp&Oi%Xv^Y(QqN$D7lUZvZ&a?Ghp^qa`+w(&r+I7Oma=@y6+`T<#jc7{Zv z#F;C~GF>ybxmUalsV0Qq3}({mJKrFOHtB#=il48nJ9!flwqI&U^o#Bc7@u(e6t5Umui-v&rfG;6ddv6l6QU7gJ*V`lrY*$ppDR!%+Arag7z&Q<4qQMh2EnFIp-5G&FZ?0j{RzLjn#)>FG|~UCSImxos7>M(3 zg2d<978c0ga3ID+g~TeuSNCJ6_b8!)g*YilXAss_+k#d(C=g#(*QuPig)?o70pV+t zq(q5bfR!p3ooU}Io;BF0uC3KLTbC{(OHn*8GHq?Gxlvz8cpS{Hz+OpO@T4?Mf=I?R z1f9>&`X;ivZI0ez@mf15Iq49Ddxa>{sJJ4m0!!lUz-|d<7ziI;1TvYt$Dl+ivdG;{ z2rQ~=EJ~=gK`my4YgFMz3$@KpHxnX#ArnmO4qR*u62%uG7C9BZWRaLg0lP5n<@O;|7<(${O%~<+%KZg0eKfYVNfq9;&WcbYT^#0f+k{0~f{msaW0-Enm{u-$ zV>-doKDS?L`dXGTv z{`!uYaJ=EVcg8QfsTmaIy6X&IcAwqN#)G`4nY%ZZlRo>-OdGO7A#1`J_Zy=B61Ow5 z3OZOj$PoXS;!)|NJ+r@F&N15!>s2}G=L;)U(P+lsc8}6m%tdLKng3(N{hs%u)f@{$ zUahlZ#=*x5?;GpjK{>C`k@k}cM(HN(gcG@+i}uKQu*Krc=(`79wKs)C76bNBD5^4F z@IeP>xjCa|?sY4yz$Xi)RSCx$nP#2v9|P0@=(PL z?)j(h3Tt2s9@W~WoRyhkCsxo7tXS5!ip(4KU^B?bB9o4hIxYRSTUnBGUV`JbzX#Wj zv=mF5V>$w~`kq%hQWA<2WC%}g3dJgEnEAG>_O_I1W-jXl2Kb8HcXOg!G#nFqLkvs-}v3WKl{U z49OOU#J#$UAfEUOtdhuXY7T|BhpVk|@C(3vxZm=3xA4f~PG*k|H$EG}@3cG`Wg1%r z^C*tEb4iu3QzCCVb(YI1uNb3r>fmUN$Wo|H^nh2S%EElM+U%zvy z=AH?2&fx&QQdJ~y~WiW2^dJ|=;kd|?Gj zHrGAxRE(WFiNh)C13R!p_0uIZu&xE4EdVc6h)L` zH`2doaNQtqkzOYtpb%lXEjKKrh0c2T1W7*tP6R6~y}nxV(68P%UzMbH?*WDV|3 za@2W57L(vNE~bl9AhF@uJSbpDslZLqXLY4p`cs;f@p~){B<$XA*9+Vj7asC$jqyz4 z++X!BtUb5EC1|_GUhXZqD3Rl=G^B@IYK`wEgrdLC=aye(rH(g7d$A_8Ul(DwXeJLb zyy&QKT3au#W~sW#B!!@oV7oVY#Jp(Pj(NJuov@x5)4KHTv%qR*j@iCi%_VS2)BQUh z6HQ7Zj8Bny)iuMFJO}#-qiARYK!0s7igk=RE)5hkv zaSrysOKbwG7k9(Yh?m?oiUhlXf5&sAtZ+PBoM!uHgiN#Qi(a^NInthDi=}f4gsF-1A4aHxy>={so@U}r|jN|4j zPc=yp9y@0{Hq`RhEXHcrwX#_>+7H55nNlm7Xg@;TV61yNtm(gnoxoWc9V4c-bUk0i zBb6ZgmH;Brct(0@%_zfnZVk{nrKWYTU>}c(pDUqk~Rmk6sJN~Pvy=Qu-O}30D 
z2*QZ;T$fB)qW-ND(9P;Ckp_680iQ02daY&VOO0jYAJttM8Ei1Dv*E((W`G^wYC8&2-9k`gcBHlM(s%da2@m(1|oxD9bgAl3JVouwHAL` zsUjnHdU@`}epvNbiC%qCNonC6T_lgDCJ%%;a{GOg<9_zHV2sn}`S#)L5yw7^fv>M*;Mfgxr9$wF7A3eph4H8i_xM*BHg2 zw!}p@+u@8Vk(F_Ke>#v{nJapD>*DXuhSU7QKVPL!0<>-Oia)fN=2{<{+R9~l!7E_M zBymhIX*bf3<_QW?8;}15%*+X@Mn1H>6Mb0+)cwxXsT${2czeRMiRzxK^{LPMpBr;q znn#`YB3I869Y^@W4V3#`(;W1R%ULfxhrcALp@BALU7svJSy#2F-L{|p)u`=yJ2yUy z)S-9}AU%zpqg$Oi-Q=Kx#p&pMD(##rQJXC-Ge^m1vovGanRrep3|AfQA_V2(@UPvz zz^HCOG_zSkwhpE+60(wczyR@zk36m3w_u{fu)87R0v7uaEzhE_fGpf>ThlxTYp91@ z%=D9#aHCJ%8$rNpFFmwIA(;dTo~xd5S}T)IGqW8XHBoa-N3J+c(KPS+6-xW}f1x2^ zhxsQAqT-^t-;CCl3zz{I=`Kl(a9B`2IYJ69!~L-~f_+}{$>qfrW4sea{LpO$xLuvv1a2o={@Qd?!dgeW4TeXU~%L$9&V z3%%U)(~D#Qu5nqwnatr^C)HK|4NX###OGVK!Xd%F=RPjGga=2>OY~zLPQ&1sC%e-K z-@+`7{W%q%91=l zH7PFJ6^]f4dia4$LTsdwE{H)CXc(J`F^=N*ragS!wEkT<#y<49pBTX*wOThkKW zRaVRJm`bW0%{Z0a%y|U*+0bylFFtz_I>V(ni2cN7gz>xH#Qb*(aP08l=q=<-PWkPkCmY_v6rYKdD~e6R(2M2S%;>68)SC!V7FNr;%~=--K=VVwF!+x|h2> z$c@%XiJ|C@Hm_z0{+dT!X*K&V)0IOwW5b}nt!ixtkZ(~FWZv%t7K(=c?ew03FscE8 zl|8aIIje@xCrecP=~oFoy@E`SSJ_vN4v&{x*gwoj)F_H|W`U&ZQ>h$T>AA#+bWO5}~t{Z`7feTJSCBAzv`~>Qz+Woy1qYqP+%jIkN@*CF- zMqs5*^Hs$2`CEJn{qZFE=PT{nBs0YD9$|yxf~8?6T0|%4#PJ0@f**9$3 zAtO=A!0_^CirHaZgrKJ8XvwhAmL@+k&z-(I8|TCtk|sIhM4@IK-r1q4mDp2 z7U;$t0rU}un2mbjZYmQ?qzi}0vmN{KltyMrOa{0}1rp7af8-uy@?G!ib&Ib`blhO1e22D$2vHvY8$w ze_Jw_!Nf$=DU@LvBL0e{aV0T5Fgo~++=>kmLrO}n4y3{zNlQ=~A%fCaP6kfzoK*%SGY;-6Ekh_*>8~|0^Y0?$#@sqibQ9o)cna31MGG%s;SU>P>Dx8!>V^$h z1fjqY*&6wZ6Euf->OSKYI69?hev)s@24$xpw17!+V>2iD0?M*s zW!*s*i+<6BR8RWPPP{Nz3zZ5O5sk%0qiWXaQngmIVEc{V{E2QBati6pWQ}-UYjC~w zOD7eoY8l*xYPn~av2S*{hWohoZO-|}8WYaC7pvAgW9+GPQH`t`!IAz{d+83c{T8@lIjdrF}Y!hjiyf<90;31ZMUtJga z*GE{#J+k@%9I-Lv`90(K;)+LX=!X;zXJA$9)PU7aEHOV%F5?CG4jNg#>{Rm*&+O>H zLIc_%G}J4ZZ9MIwjGd(Phn6#q0kb*yZQKe6b`rAK4uehtHeii?tVq-s)&hhQ2hU>F zeqjM81YE!$@qhp6AEJPdy1++yVN$Ez;d+su-_*_LRK)D`{hlz^YT!W=Q2{GTBN6|K z>qeq?Kq{y_S9wj7zhW67pdVex>k6oZBeZSL6lrCVE~D$qZa*ZvWG=K*XCiF&V7Ruh 
zT!`fP{C(MogJqePw1GSz{V4rFK}z2A@}j|j2_a&zi4JO>xot}3d;vb@W1z=N*wNf; zh0SULzd$kovXSiuj?Mya#__gV$jr!)1-2#TK*os zn39XOIy~mcL~G5B7`P$wR!w|02CnbRB>(c)ieGYRDDdF*kid5<1vstsC_u66cB-nb ziv=1_mNT`m%=fOg8WL<`z3Eu|_*wgX0NSRLw8|xe_P!qq? zrD3p)dhN{x?J%q?xDM6$_^Z?FjA8A}^kG@b@B#AwMV&nE&(iNe)CI+Lb%$V)H+2JJ zL88~+hpqVGgJ-RbszI62T_EW=_wL4tRE))+@;Uelz9cM0!KlY>Z@_3L%-;7VE8$Tr zDFtISN^0rbP6yp7)~o-SFeW=qoFJO93P_TZUkEAcRW zFseKVB6&5KHP~qPaQ@7cTLGG8ZN=Do8w**xR#TP{pBW_<%-slmanMx^7qJO)ubuMyf!{k#`;nY|s(u0e<|^X%2B3fxB0(*>jP^$~QQ%#{A(oOIPVL+BVlx{J9|9TPhQ)dBQkd2u zK#K3mXK^s(SG*MBbkJtL(duq}>CWMGlR!4w*WO?A1Rf1iT4bLeu_71t_|HcMJK}rD z2tz&o>0^e@b#c-Qj}zUm$M!9t3*KEE{tq-@Tj8Kzik&w*?gWOf>JdUSSk;BEVuP=I zUwU7^c++b>ER5;~+|HIS0a>l)Pm0XsIBnI0=OOuTu;|7ji-1iUp*;5<=1t@|NDJ={ z07|@c1D0~DOnfKBHmxSUXF|KFco<_=ejiZV1TWfex`+=@KCyDYTm*Svpuid~k5c<4cd=YtM8AAuaiXo9}^JH|U>+fHK7vNSX1=LRzo{ zF6c6W>ixA;vi8`}Lz(k<@yn1GwiZ7s`!Zt>9kLf90UT0AdA%q5vc!)IMyTuDrgJMx zYi$Zpg0hxOcftW4UmGo4?EJrC2v}f-j%DgI<_`rEK8#lP4*VCBEn9_%jPze+MtUSU zI@xhzz@X9uZLTt+QkTm$>%7%k&q<-U6cI$~2qIPmU+M%AL3ZG!9an%BWAFu_ZV2v| z1WW90&~)|kxDyxF@vzQoSJmh)zFK?ay$*UeL>OZhode_^Z-?LGLa%lP27XJ2Hax#? 
zyq|4!zW~D`jp(t23dXvun*$+ExHah{>d&*I0pv~|8x=Y&wWZRlQa@sPjb$Cvb@>xj zQAeCfr3SSoM9K~VJTJfWf6kt+ZmJIGnjHR8CRsb)dSMG?#c$scBY!m?1#~dAz3XX< zLDT){uE1eK^AzNCAo@Mf>Pycg$Zvt2z(2lt5OkWs?QCYq25!Aj8FT-23hh&?X{OU6 zu`IXrn9%bcAVwBRGbXBxxyFOYEzg5;BCO3$2cW8Ckp>9605Pj9r1i7himX}3-NA&G zmB|ybGcNWhah{bzGM*?YVl{6lA(>S1vt&9Iga^$ECpijE>0K=z6d2y;!|yw10BGad z;$^x!vUN7+y)f=ib4h$GH1wHV-CUN$?s7W0Z>n?vIfoNj05Ko}+1%jue0!8(T$jtD zRKEt?A^a9p!c26kO;HwWO=Z{{61;O6eA%#P=Hm!?)F)Q|4_j{?7iG7-4=bRcv`F_* zBHbO*NSVkmgmkykA<_&81JW`m5-Qyc-QC^IkOK_eyf@D|=Xbv6dEUSI=zY(<*Is+A zYhBm1-+$cOiPe&(+|rQ#sT_6uENT2JA!S(Y`U4VH)3$@VkP7?A{wLU$nZ0V#`TFBj z%VY)d-85|iZR2~K5>=6p5Iq)$QvH|*FXAI-bU(7xSf-80FRk{b)oI3OvoUViwCVoD zZa8zUv~MeUuY;b`NWqRZJNJUWlg5fJ|Ap_O<{+9${EH09ms`Qv##H~zx~v;mqL9&# z3^VDwf$1Ob^1vZ;4r6UKrHbh1$VP$g>WDSh?}X^mB1r!X1X&SVhJdg2`eH@_?Mz=v ze8>!upPNx%uvozSDssxx$-TbRrcE^Fh~SnfG8{vCTp~cq=^y8Ip3;ZY zy<vbOXsDrJL1pyKGLS0vqJSAF?9*(KiSmeTDZTe-Xuo0M`g_Lr#Q>>i?+2mJ zlNrG96-m0KMcGc{qCOf01?dzCT?5=9%Ts`_ILyNZ$-UV~yZ&7$7T_B;tHA|I_~m!> z&X4rgleYDZ0mlj8LW$4g(W#bxvpJfR!`4s_cyvOqu7Q6n5c8fhzai?{g2prxO8NHM z2WWTj0dD29ZLMb75<((n+XP{>0=LXwX9{X9{{E}ry!cMkx`zTc^yqe^k57Lw(f;S` z#T@-*BRv+JY44Alx{gn8uZ^F;c}kHpFrHW9hw<5C_1!X~_5kEk@-N)W7CPFlM;!(1 zpM>UHJmlcZ3T>h{QXp+3zzXGM9gz3X|Mj_RUlnX6QNI#! zpNsWfk>x>3wBT(^og+n{I1I|v*G?)z%gt|z^{`f z;msAH`1*IzkQx9G|9yT2O*f%&o{;vv1O{7pw>~!B!gpoPgDn~Zo%(CqjC`*Hs8K&s zRir{vZ%aTzzl#b#NGownvIXT3gmsFrbl0r6WLdMd!O0 zK(COpcynZULmDoL>v3==X&(+OarHR>Zs60OHefF`x^+IMpi8_m>KT@H0EN_8LgqqR z(0tp0!&&@s$%(CMXyJyL1{q_0QgEx^Y5CS#YCM5hAWD{rA;0c?bMG^GO+#_#2gYEa|J2F~e%*hu zTVJ1MP7Bov<N(MX$LX-L zr9?s;2nb~@fd?3jS37R?5@hY2hvj?0pA?p{`I>m=D*cpJ%rlvuo~=zbnWF8Gbds>G z@JSTE2N8r0hCA4ZA*|u9l#f6r|9YnVK|NzNcJ$i(sA94s(^jx9qHslQ$zw^rb-e!i zX!ei!^5ArOpvrr7lLJ9D-~HG$Ea~lAfkaY23H0#Ru?GxRxba*IuIWi6WC$(}jC$Pz zWgT;^)^cHN(pmR1nh}5(!(U2oGH<=?r|4%gJlktkj^R`-=i@2Jm6X1i>{nXi<)FD? 
zhNT>NlcGo*B!t1-^gFk%QtPRs7RULWf0VVw4aq(p9o(VEI=$0C^NH2$RZQ}kB0MP9 z<5P{tyZ3$)Y7D7$fGOBJnIWS7DMi|nHj_Ne1FF`!aS<>XeH;T-$s8hMRlW;wgZpzz z&0rNfyMJuba9Lw>Ktu8DPs(1Oay5MI6JK}rG*W4)1|8`TL9cYWaFt_t)fJ<-KOGfh zExXz@_ds#G`QWAZV7lTcAxN4&|LnARV(yky_Cm@XY;e4j=~F785l$zk1B6-eZJ5zl z%*|^`bB!y)MV5@Z)6lU&`1Fr|;Z{HO1mAK1QLE^Ja@@O&FKGOOaLL~YvSz{{G<>a` z5dbAn`j;fEyD6Zxcc~S+e5_3dv0PxZWcX}DL`d9>VR3PcdYRHj5VrV&MG&lR z%hxHz%$w{Fz}Jz`$7JLas1;?wWJS)(!B;_e&!+pklpv^H2Va*Se$|z9r!Z|+C{6yL zwh*J=zEM#J*_+ZpP8!r(74bkP^jJlmko7F=K1zZtD$!OpfW(drt2ea*@LWSD`*pT@ z3VnB8)F7(4J}QW^Ga9V4oh)vUni8?}k$sY)^oCB`vigT&B-xYC_i0Tjsgw`tb&GMP zvKC%whZ7+zCla;UR^@gWzzTs}tYOVdUj+{ZOrkDwKSc0Ng%;w3WXQ=scmz{i1?Vf8R+T4J8fJr>ESPf=aD2B4)s*8XyMa&*Sof1IV!5YRP(LwTi24PSzY^Vy%(oFImwpY>nMJS?&~vyO4C0w`Qt`E6j7xtjv`ZwE zmYiN`cEKTTm}$`q4`y`+Mrnb6@!dSzMNhM=2G7re&!6$iv(5tScc&SHkE3skBn{_t zYm8gI`dPXZqiFBqmOhzWY@lhAw4ujb_bY&<9DB!YkaUmY6`t>{vqV|J;K{iU9# zk_dCWpNn06k4hUoc4q)E8tYz)MJ2bI^{Nyu_~6rxZ0qScuwbMYWEe?^uWol^yk$Hj z3zx4K3H?NATqlg2EHGW5R%_=`5_~@+{NfF{wxRK65&=)X%rn*5k zb3&pl#6p%;p~O87^HVKd!haY@ zs*T)>#jeqk-iJlCJhsFW@MALD+ut=nN}dy-jsa@6kZ))qp>vOaJDry;C)<``5}zSb z?8F5;IIju**ylAjP^|Rwj@?IJ=r_OXGqf9TmNJk`3f{=*u*{>uW*4CaWweQWlL3}=ICjFyv)g{g!uB7D@i9b zcGr6~HrhP&dQIoMQzFhf%WqQG<6?lUcwL-MrrZ zm7qt93S4g!G(83pqJyOv2BCKh*QW-+3?B+gSc;w_^=DrRmT~a$11{e6_lH4 z+B;cyob!Xup8L=!38K7V|7^k-Q^o<=u$0&sc@~!Et$#+q#PsO$R5H=0H{N4bK@r>(SO^c2IA@VZRxg(keASauW#CBfIqOc-BDDG;T3ze2pCCH>ok$k1HOu7apef{`HASZnL*rI`F;pJvm8sa7b#Vl%d&q;C`;* zkcuR3_h!j$)+-sxtQFV>?y%KuDgA5|cfy&`iwT#+GVD0aaPNJkCAzw}xW=;`Cf_jj zsJp;myZh@sK06odeW|o}QKiC#0fA9ZEC(U8we%A9ry46VBgVzp(~H+X4JX~?bW?3< zczp+&H2O#nnUWiDmW#)JM1b8Vzr`N!935bwEJ76FUnvjF;4T;VulgObb3Lj10x#U8 zPZAH8Fy+R)+pygwOuO6gH#}ed6(Oxo1;V!xB7*@$hm862mqd3j7{}L+cZ@8`LiKJi z$pkR3Khc%SGmuQiv`0w{`<8J(kzYw)4X3->*Q~o^N3Mc7d zku{RQTkObS8uW(@W`p*=8V<$j>W8R5g2xkRI87rBzFO{`0!#!0lcA_c!OtP`KL>=B z2KEH@r9IzVKf~I4c^9qVGL{%hEO4D!miDVoY9;}-YyJpf*<}*k3de@K3Gz*9Muw3L z-_NLH^?+JIV^_xnL`+xx>nE#AMH9!r*Nsf~+|!}3&$A%j$2RUIk+Q5V(jZs=BDt@U 
zreA}6!P&vgbkh}q)?MK+*#}TB-I-?%%IpGnjpD2VJAy8=yw&3!Lrq_8GCHe59MYoO z&RHULiVAbj1W;lp9_E?L^3<#$x$`QogGWI6b0yKjguK^w*qE-mXO7X2rFD$OF;u4) zVPo9&xIui)0{Uur^PWZh98g;6M4Lqu%3a@U3KIh=7Uxb}4G*N>wzEkeRxDG`)*(Aq zp3lUO{Otz5eY!k-K@nK-ZeXG)nlP3&vBe#jY0=I5g)Q_O6VyovnTYsPL^QocijIsO3 zU592j!PU;ipLnxV^itSXp@yA%zi406Zy3NKp4?)1+fCO#e4RnN*afeJwNT4L7*nU>Soq^Vh)uf~=m zmj2Ay=2<|cVuV=#z+Q6?O*srE56%M=wBO|PE(5`B!X;_AIGET3l!w#92V}z;oiRX@ zkn6ptd=cj>*UhDBFSoPj6kg=;Y75+AUOYK%UZ?O{CbFgcGKSl#E5d0{c<2GBgCLw) zh9FO#X(A9op`k#wAe<1-;>;MGt99jGQS7+I;=F|-=_>U7ouJNa2L_E#DbkH3xtV}p zaqNOsQm||2g}wL$Vjths9s8K*WY&5ebc~u_yE>eWLVdKsJxty&%8FFQYT9|_TO`_~ z*M$Q*jzU_EjKjpj#pv=N($`dzI6uP1N+#QnvUIxYNbqEKr*sbK<$TzanT`*ZfEp^<^oO((F_qi+4u?R7f(dkd&6%sLqY`aFq0OJeK0}p1GsU#ma>{1&*OxtK_U*( z$oJmVk$dWOm=UGvzE%e|gVH}Zw-P)%6^6`5oA=edlwp0_au=@nk73=|>?Na{gv;p$ ztklA`I5Zhy?$jL~s4mbuu71Js{T%X(++uZ(xMKH^ACnHOA^Yl2R!5og=1z_Jr?*D} zm`sR)lKKh|Kw8Zh_bU_6jY_H=?@U&Gv^m1m`0F51U#_70T!?2-HdlMBUh#@)e9Ca; zHWoU&RH{67>1@3AIq#)~q(&{EOfL2e5M;NQ!n?MrDRux;Fv004l`}bI|40A~#V#6t zo9Q-Amj>*Ku6}*J^vujWmbB!KWsxU4a8h(y9UBccs7|@kJ^_)V%taw{Yye+iFzoKhEIe>D{Vi|G~kDoj>)LXP44$>+FT5@*H}P2dRZmU_>W`hJ&Te4H5QC^ zbe~j82}36A%(4|;U4{tV+~q3J2d`N3JNW&z-*v$es-bV&1kA-xakeuj6}Ua78$Wwf zv_uU5qE*hppxFqnm9J>|Gmm35s$u{G}w!N}|&EK@6UNcX@6F zup~vG`ZcMWTbt<*JGyk`pNX-bJ1}>>EH zpxMqY01*s@*&M33bNWLHxT7oNRVY;@jdh^oL*b0x+lfyZKH8*P016>d(Fw&TGc>RX zbA)`);EK6oU#B_cvl?RV=I6E-Wl0R8o2IM3x!P?nthq-=+%?f51sDQg${>kV3NnOg zm+wpJe4~D*rUzMI6C!%Btl+jCTKvL){z=^q5kY&#eeFTT3LiV->;6 z1rg*dB29f<-lml%Ed{T#H~LriU3x51QV$Hf07gd16nCfz!3FJ(IvKvVg615zvO;jq~e{ z=fXW$>yN5PO!Yr}Ht^Xjg*#@3;0wJAB?TY6=Yz&=6Vo9C`7#lgWy|9W5p$KBM}Ki^ zS;(#N*7dV<^+%g{??3AdL=9WMd8NHi8?E#`9%#6WeO0GD-Wyr%QVnH|Hm`-TlW%8x zzKTcl1J~%KzpYqwxdM>KpXYFusl8+RvBJg(WQFQq#Yc4CX0LZ(uinkKN4|hm@t%s( zFhM(EXERfz9!f3lYW_C6lonsQBpgSe_4o>yGL!{|m<#C0*m{p9NZ{Yb@s-uK(nv3X zGlf%T_k5nlH@{T$Sb7btTkq>5C}c$6x7nC8R$V~{s_04NIrujHO{d?1t(j}%Tw1Hn zD?*(O-$i`Yvr#>a_D%xX&a&$TP-_CMvUJ<1^lhsga7Wl#-VF2dFWDfCLY&&|{MF|L 
zP+JDcKp%!a2o?9ODUN%LPFA~aF~`yNWuX5!@)A8jBq%iSkl&wymkR{>kja%_wAP`z zi-V*-HNsDVlJT;UrEsa0OE?oz4Lt;rKf*xa4fD4eurib5dTQ#MbMSp^J;Z=MFCn!O zKG3oKQTBk*9@QGmg$-kjHsJisfW=TdxL`ufChDOXMV3qnuacE@Lt@NPEES>kq}XbZEYvTSiGB&4TlK;t{O%hPBDdVt1Ba?K%+Fdo>U0R?)s zIN<~~!;n|xvrvA)*+i&a4G9~&88Xmk>EZQwK}c0+IgG~pAw}@J@nCsv)U(~4?5s=jp*fD7@wMDEQB4u1 z^%CLJ?yl}7R(&l~zW(n%kzfj>${ckMM1>esp`I39VRcU+TaHSAUdJ#gMaYkeaTCp^ zrPeMatmeULNe|gQ3$+y1tIe~ow+TWwNJ5hs&XYX+5ySvH#iL(tOtOK1qI8Y8!@P%x zSL>khOVw0+Kt@}B83%rpptBHJ^g#jVG1zRfIv8};;NQmCPjV1f7RIZNb4CdFIAZXP zKW)Aj)_iIP7eFhp_(mg-(IZP=J=k<~clT2Q+@354osq;+^pMt+P>$cq`%ONSEOu-B zRV8f!M81eK{(g_7c}+TqUnc%O4ylN{^jr2w{!t)D4R3>6SDcs??OgS{&uelKCTL# zavCJ#g70=hC!*#{TUP+xSUMM&NpWbOU#5d--})X*jgs@1!EtfI+fI|qhK3H)oyfo? zs=M^1inX%ViTB5WYfW~ZBA$IS^|jnDP86*Ew*4(G_n0d7Idxj1@VJg*Z$gw5zc z#;7vqPsi7EaC2!qcyVD_%>Y;BiaKd7*P*~FU8yG&5h7u7i4p)nZ?bS7%1 zX!=AbmWc6elXiIp^X&6!lq{SR>Q8wiS(F&}PHAY}4oR}VG+ z&7P)yjXU2Y@;P|Br0X0hQV&k8K6rS1K#1Poge4FfmH+%?LqK@2%WJDYozHOd>}w?} zfF#BXZ|v87{$7q?6k|hAebbAheoc zySM{EdaJ1H`~unYgxw~*TEV$dZ6M)wF@3TM3YW951O}|D&)#%>xE<6i8Uw>W-N7C7 zR6)z9nx$v~+bGXxDUW-ZrhRRVCcZ9X*IAWh(pJXIWLa7<2CEQmST=!3K{&&hYu{Wn z56xjg_2Um!OZ3CFW30Y|{UCLf<)efsasnyz=PZ_Kve@bM!Ox#d86(L{tp|XM*-xlE zTNH}6kVuB+ep^ocErW1kR_9u+(B_rXb ztKO0~pk~ao=QuHGwp~Tj+*&xq>wg$DM|$|H?`254|D0@h-+yHal<0bEW%clDO{C2U zpdVZG{_u=?j-*7=LQ+{0_dWa)=T4HpHb+0AJ@CUJNg{JEn>!@vYqtRMf!1ZrtKFGs z_1)bK2XzKnK;!u+WQ_#WlcHB>0pn{cNx4k?q~X{REQkOuDKn->{>u8kStpY~^|f#d zyLek8+=Ch|?1QkdS8z#P!(N6P{>KYoE^sX#vyEZ=ca%tNbV5QVFd%tEoV;p2e;)td0#L+e><-x(N+ln0_4!ee?j+@sCbf0HPu+}K0at)O~sn) zL9@1CQ+JrPiAp@|7*`uBi54M@rb8(SE?rdz?lX8$)|=-ycNM#4F5dMOmSNS9*E!FC zkX_Ui4}h|Fd^ypKF@*Pl3nQR!7kn(t$@V&KsT`db^_SiC5iTA>*Lkf1>HLut6{1JL zF|D86Jz+-o!$BjFqj0&KM&On@`o+HXHe#t@-4cC0IENg(POr^-5fFI3O zIceTA_Fk|x1{z}O!`s+EC=qz7I>bs@-yxNO!MqM|kX)Ifp&1V5H0Q*RCeO3AFA8_N zwVg|SihIZhq}3nO089f7FvSby^E=MKZI)S?2`&Dp(TECNpJes%FDJG^riux^*GOPfoZHT%E=Q>24(RwdL_p zX2ZYbzjdu$j36&RIUrDJE`(o(ABL^AOMFcw1c<6oA0u;b&YG7AmuH`vl*>W87QF9m 
zWlv=}rzbjdlqW6xB^$%{*CEp7!r?v!Qn5yP4e%2Z7aWtcsBfyMD^uD^2Sur&uHUQg z`Xv~7czkUJvU^C&kO}9UdKrApf_Dbh7%z84jA{f=6K#D8sp=BZz`D+*7-Z98`xXNpe4 z-YzGXI;pwB-s5S$i$uxCKOd`a>84X<7UGo#6N0mTJriB?iM++>jr`uHp=+HZ`imm{ z*IC91wm$tN2-=z2KUKg{D2DTW(k#14{?lWY7cRxy=M0bxO4#%fzBVQRF#wb^S)2@j zBtVmDG=Qgy+zqw=*IRsNBVJboxdFIV>iv8{Coks(H}{_}rzUw|IfGAD#4ZC~;(k3@3H zjjpN+v(oYQPlJzI`CoG~zPOcX3F19xq5AW3Jy8v0A3E@G$GQBY0PAT8SE=p<6|I@n}w|7To46uo}!pK7Yy zu~MwNN!peVa%WT6v35!+1)fgd3vO33Zn-E@eiu?^Lx)S#wW_F)uPi@)%VYhYhxp$f zb>STv^pyK3Z7kka$A%2dX8+jiz(ppd7IXXgI_-FXD>*21K}yuTZ2igc&i%|p;4DKu-EPEj@_ z^~kb)-1?aD0I3Y&YFQ>Kx!8pempe>rbAapj8X26~W9x6tiqTQ>g^cV7y- zO4s$suHT3#+}>Hu*9UN-yO}9)<@XbNi;D}BE={Lvgo?|y>0Sx5!v~c(pNP_x>{o>V zAv_g6u|M&}4mGk@Cf7Q}thKg#X(P;Rffx{43uXR`3;o|1`ab?I7lrY*nkfjLf9LD$ zV6l)MC`>HJa#G4ID7(U*D_*Cl>d{@R6<4#0c$<+4r{(6$W-^7>4=KixHvO-)?6?!a zEODVTLj&wT{*{6L8=_Kyf9V~i1ZPG^#EB;y>RO%i^Xv%Lt>sHJx3!CmBtg^N zaDkPs&Sayu#XfdiqxID3F`Jz#GxAvFAY^?&MNEu{{GMI<_m~7?;{U;`-He66kU0<$ zH#)VvVr~SG9(`95n(QFr7 zn%t53r4BD2r6uf~G8oV$|F>21_m}*F35#&izIf0v3IT;0@isy$TRO>o}5!1M-6{ zw7PDWD8Q3xaFy!S*v(6@*2uYyjY@J_kjJ}!Ws2_tM2XTa+KC?r8jFfSE)qZclfh~Z zy8cm)w_7qYI6niMXb<_A?w93aey-)R@N3CVVE&NQ@t-95-y~Lbt3w0)hcuSKc2-gt z!%uY)*l$Z+h?BsUQgf&`pV^$-BdN(1R?1)RmKJi;*p}~wGMPN+9cwAG776HJ;pt{e zDj+@(6Dt3Sdio>pl zQ_6#3AaL?!+J5w|C2IPuMjn{iIV7-fWeEz9tFz>$>yjcT+~yre_}W|ZBzjB3Qf!Ne z3w?GRH0Vc6bp0i{=)33Vr-kxw1Te^DRP>mfvo%u4D%imW&i^?#{~NUdVRvle;yB*5 z9Z@Q9`$d<4oUu#Zy(uSJ$HVF|=gwkwKx_o>9ggLtzH}Nb7R11P8j||ffEeVg#x&vo zAWHi9v~d2{OCunH7va@AmS9}TPTjFmjJOi--1O_Y{NA7jfS#tbvZR?T$PQdjEFB;B z&)nDW9kd7NnF57u#uPBdL-k4`xAdP{O;sR0Wo!Zo1?oA0aRIoa!w}~K0mcC#go2gM zQP$o10O@!TIqs&}`gZsgjsmD6#<}y6)k-`@;qW6`eg|Js$NjH`K(<~Qr^4yi4uj)O`2x^hBmP8BJo@`d7APyhP>7JL19Z?N)x-#;rDRv`k3unH zdb<{OXd`68YIsMloYJ-2cNe1f`R~V%I*`np5O;S9 z86#sINKtj|_Kt)^3Gttg-2Wynq(EOQV}GHRVdNCO3{~+p@S7Df@-mz5lU4C6E)!DV zM(wRS+9~8Cq7gt~?#Fpem0Drm_3!%US1|<@IBS4*@ zD-zMrg2@_m7Q%%yUEzSMR)DBly!`s?f&r*7P{ZUJ1z}dp(r@{``vDn8MCwMD@|aCG zB?qQ98DC#73}o2+GXV`fCReE+)qvG;2>a`-(4M@}Bw#uIH}QWr!qS1}8q|t!PZXJm 
zU;w|0$)3?HvPbCinBLQ+f;?j@U$S^45LNP-E|=8;}q;K-8=`T>8-ADTue7Q{v2bVwwWb1Xl;du3N{PFYQZI#>kdCIB8=|o5|?t+O!fW=zvCA=d7bSV2FX~ z-Nj6sptIGd)ivtntL-Vh4ltnTxpW%F(T4`O?f?tie0{OI3kAxizb6&|1A%+*cNRJ= zxBUvT3FOL$m0wTUth^#J3I)=OXFbK@nV-u+Q2*lqS531g&tR+eLF3v{OP>K!H(+4^k#C391tSGJ@qmk$Ijn`0i$Sb2vMLRP|No-kzJ`$P16zua%*yL8e|3qb(SEZDBm3V!~@v zIRy_T;2x}#g0pTE7(fYUM5~=q^*T!fw`8q{_K=6$2P@xdNLX`a$hCWTdEs`+LcuLK zdY1^KGTAs#Ye1nkR?CX3^j4e>E?k_wT#wkeQZN6Wa400PBGh|x{9Dm09O%eTtt>ry z!%eOII6!IOh#N)H3?qmq7F8c;&PYA**b6I} z3EG#qY|-!EAnTvfNNbS$ z0BY#|9Pj3cG(jSlAL<~8LrAibdAr`J>pmCC?E*BVAF zT%U>dRUp*!^g_QczJmdUFXw5-*wob9Wj{NWD0p*2_~nWk^N#Sma$Wdk)i`9}&| z*TdRZd4+qxJr0i$N*}8xv-gdkyF}5A6zAi=q__7~)MwH@2-VxK0^sWO-g(fOb05xB zrIqn&7;vs+OUn==s+U_A&1j6rti?>X>VtS{PQQ&9&+R9v8g1+w{Wc2N@HmAj2l1J- z^C~srky%!&K4UwUpHZ%eEI!iO>5rRaW^2U0%S&|Z3 zAH>qBhg;%UeR2j)Wc*rl;>4v{n@^krOtrh+fXTnn)%npPAT=Vm#eL_RLkJ0(lp=6Q zSk-Zul)sez`kf!g1`;8KS4j@5bl2cjT@{uH#y*VFch;Ubo{01)6r?G)+&SKxCCuGO z(2_{jp%C)?Ai9~j+%%R4X^K=U*m`F;`+*F@a>e-cwXm~9&ipGIi4W@ojzgV1;R?%f z30F^r`eZ%fANJXe(72jkTs;RnYR|hknm&$SiuXu-uvZJ!n?i<-l-gYO%NZ8rw3Ux|-q&eS%U4hX z>LOjhIV{;hTap9F`ebsgR{)c&{bB6GL~UNHOxFB3Bl}kZM@)SHyQF3S%_H(Xm6?xZwiXE0qe@Ur_PnXFY2utQq%N{C)1j$u zSFG_%i@N@Mw>H3m=j3aJ}~o4n|B!b<+#-=5~QIeZ8VxNZ3NhP{s-A})fq<_%yz!c3Kj!u zbfG9Is`z=AlO5vw18euPYo2^NYyUHT-RWwss|DHbTMkSYGTS_gyy06n@c$_m zW`ksjjfJ4U%1TwoiTH1FFLbZT3`PKP@#Eje&;({~n^DK_VZFLXR<7=kqWtY0*oY9W z3zL81q5?18%}j4|zwpHjlpX1avkJN^SKZo5Q@*#y>)X`gc#`TCc(gTEURip%Eps{O z0_qHR-Fo{@k1QXt86HoB7y&_5G=RIAE_m~=e5A})ex|?pdwv4&b>>BDyXWUFyFfFn zy4yUoQ_{tI!w;Zc{J+YSlbM~sIog;Swqt*^N}6NcM~3GHar^OZOR)*B8OAtk{09=k z`8W{zOGWqnatE?}bKu=cSm}re^b2`yb?UiiKDMKRrn5p(L7xPjr*mYe{zOhnAXc-e zn~)kQR9!iPEtmkqW1JaneWZ18;(IO5LGNJQxZfMT)%=m`@B8fp6wPdqmpS+K^BK$X zxtCumX?=Ws(D)WGEg&yC4okcJlDTxoCE5lk3V6#;6|ayl-)IQjJL>~|SUFYfhQ72X zgp?)t@@~%@I$BqrY-54kfRVxCFOx4eDK2r6zE#{`0KwLhPMJ@SB2?=kpuQ(mE2tH> zoiDe0u{F)Dz|FwiIVD&Ra>5%Ztt4Z8_cu__e@>1eGW4Fm;7x|%zBETFV=~!%Qx~vF zy1$RnS;e%N<>`_#3frs>_ARPPhL^f^*#NqxC0z)k%R7Dds#)`@RM`~*YH3MMadbwZ 
[... base85-encoded binary patch data for TeamCAHJ_ProjectProposal.pdf continues ...]
z(a4mnBr0}BAi-OQx$G0wPfs405c6>%53?|0l$93!yJh>i(lt66*H^Vk!`IzNliYiS z#XE+5%DffQV<9ONeuGETEcSzn$1Nj|9HUk`ymLSOEdM1*`PFZ0lO}?D3|d61WtOi7 z9gcN2ggn<8>kJ(9D=umOsv-hI5i z44pZ0#bw75ZV;<^#c!4dTCDczc_7}j z%v9b;2i}JRLqKg&NB|Fg%vB=5)zfojtp9iEK=S2ibkp^+88tRqbp@O!y4#96{@V0- zx-N}zFYGN6MVF46^!xdkxted@@|Rur7fY{XHs-Go8sGi1&OCXR^ImZW?qo)%0~f~? z8y{D_G);ouYVoXFrHMeR`)S^~^7Z`A4-R3y{SVAE$lN$%<$ZN;*-hdpr|8{6i?IOSMT&7 zZXn>B{arfkI5a3Nm-MO7`4jNsmhohJdDxeNxmf zq)KuI+WyMx)c-&*PLK)z_(;a|x5L1v8_tH8th;;nOSeM>CjR>ERSl;nop%Zdhn9=_ z!OT6LMQli#PC#PU++jG|Z`yg=pkFI*AnT zwlv*CPH}E=S|0SRLujoIPT6NgNMfK(Lh5z}W1R8rmoBT|?oI*^52dg7XVrsvz?7ae zixd9b(=!pwz>!oo*ZHSC0OIfF3&iZO^Us|12h~$5QMGq=hIHE!&u^SULMu$qe*S1SdYEwVQ)tcsZEWT!u*C{VnX*1t) z^!M}}Q!ekEtZ65+U(;Z#?>?72{r=ms=aM^fltRSVQcV@x!K=Re@h*mzFTZ#(aCJ*$ z@AIk8_U;FFl5#0>HhskW7vJdB;q4#>8*YT~GkyK6VRDw__>`nssr_Ro%ic-;1{%D^ zHNtkyxHA%F{r**IgHS3!>Js{~I{&@u_|@zRq?8fr-Fmqmb`@AmUJo)(Jw3l8vUkgi zvQ&%|EX#ee%sa?@;Qmr++9_*^cz7E6XN-%IGFh3vMaPj{hqhkRFQa6(Ukyn#Qz2)aRS+LnT z=d%yfqRQ>cl7e<5D9>>F-)F|KGx=(Xzz+q={56o1cY);%59GAYBYB*-?sg~v-Es7zG=g^f}P+z$plS`3^`X^|F=uur1$bMEz6ZL)|iui1E zQV;B@<1`ltvAff0A**v>-KJCa8h1&kP# z(b0F#H0{tOw=Tdy_u+ASMCnlF+@{yI)9Fq?WzYwJKPHUuTYGp@k-dV9A)sSxTwMTX zEt%sIgQOi(;a2}og7~k2OKMW*X^@`?PlfYsZa>8cF|4={-Woop8~`nVY_8fv&-TpL z2`S<=uetxcVEZ8aV+c`DuCk0zPrE>!hjyisy?yTb5TkEGtr-7j%-86{&5V7vGmq@W zZihe+cxHOfX#|eMO*u1jBgN#Qs+haV9-3=cx0|^V!7+GZ-VmkEIf3-;T9BaqcbtgY z^Q8n^rmPBr1|9q2OwecD>uCj_6=-U}F;p(7bb!;8YvT`ua76YJlfS$&HVQU`Y{ zhW=OvQnl>wl~6WYmFm`e+*=FDWBnUMC#1gtp*#h}5JN5xkSH*e)rotXU8F>nAGqJ? 
literal 0
HcmV?d00001

diff --git a/README.md b/README.md
index 1328d201db..019d82ba1b 100644
--- a/README.md
+++ b/README.md
@@ -44,15 +44,15 @@ A step-by-step guide for the above is below:
 4. Go to the Extensions page on Google Chrome by following [this link](chrome://extensions).
 5. Activate Developer Mode by toggling the switch in the upper right corner labeled `Developer mode`.
-![Screenshot of Devloper Mode toggle](./Documentation/README_images/Chrome%20Developer%20Mode.png)
+[](./Documentation/README_images/Chrome%20Developer%20Mode.png)
 6. Load the extension from the codebase pulled to your computer in Step 1 by clicking the `Load unpacked` button in the top left corner:
-![Screenshot of load unpacked button](./Documentation/README_images/Chrome%20Load%20Unpacked.png)
+[](./Documentation/README_images/Chrome%20Load%20Unpacked.png)
 7. Select the `desiredDirectory/CS410_Fall2023_CourseProject_TeamCAHJ/ChromeExtension` directory in the popup and click `Select`
-![Screenshot of load unpacked button](./Documentation/README_images/Chrome%20Extension%20Directory.png)
+[](./Documentation/README_images/Chrome%20Extension%20Directory.png)
 8. The extension should now be available to you in your Google Chrome Extensions list.
@@ -60,7 +60,18 @@ A step-by-step guide for the above is below:
 ### Chrome Extension
-Under construction
+Once installed, the Chrome Extension can be used from any page on Chrome with the following steps:
+1. Open the extension from Google Chrome's Extension menu, located to the right of the URL bar.
+
+[](./Documentation/README_images/ChromeExtension_Activation.png)
+
+2. Enter your desired search term in the search field and hit `Submit`.
+
+[](./Documentation/README_images/ChromeExtension_Query.png)
+
+3. See the results. Each result is a link that will take you to the corresponding Coursera video page.
+
+[](./Documentation/README_images/ChromeExtension_Results.png)
 
 ### Coursera Transcript Scraper
 As mentioned in [Requirements](#requirements) above, in order to scrape your own Coursera course transcripts into the extension, you will need a working Python installation that satisfies the package requirements outlined in the `CourseraTranscriptScraper\requirements.txt` file.
@@ -81,17 +92,17 @@ python scrape_coursera_course.py -c "course_url" -u "coursera_username" -p "cour
 3. Once you run the above command, a window will pop up and automatically log you into Coursera. It is likely that you will be required to complete a CAPTCHA.
 4. Once you complete the CAPTCHA, return to your shell and press Enter, as prompted.
-![Screenshot of running the Coursera course scraper from the command line](./Documentation/README_images/CourseraScraper_LoginPostCaptcha.png)
+[](./Documentation/README_images/CourseraScraper_LoginPostCaptcha.png)
 5. The script will begin scraping, as evidenced by the pop-up window navigating between video pages in the course and the `Retrieved` messages in the shell window.
-![Screenshot of running the Coursera course scraper from the command line](./Documentation/README_images/CourseraScraper_SuccessfulScrapes.png)
+[](./Documentation/README_images/CourseraScraper_SuccessfulScrapes.png)
 6. The script will write any scraped transcriptions to the filepath `subtitles_cs###.json`, where `###` is the three digit course code of the class you are scraping.
 7. If the `-e` flag was passed to the script, the script will automatically push the scraped course's transcriptions to ElasticSearch.
 8. Once the script is finished, you will see a success message, and the web driver window will automatically exit.
-![Screenshot of Coursera course scraper successfully pushing subtitles to ElasticSearch](./Documentation/README_images/CourseraScraper_SuccessfulESPush.png)
+[](./Documentation/README_images/CourseraScraper_SuccessfulESPush.png)
 
 #### Note
 Please be careful not to scrape too many courses at once. Coursera may block you if you issue too many requests to it in too short a time frame. As such, we recommend that you only scrape one course at a time.
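The search request that the extension issues against the scraped transcripts (see the `search.js` changes later in this series) can be sketched outside the browser. The following is a minimal sketch under assumptions, not the project's code: the `subtitles` index name and the `search_for` field are taken from the `search.js` diffs in this series (the field appears in the removed `search_wild()` helper, and I assume `search_api()` queries the same field); the `ES_HOST` value and the `build_query_payload` function name are hypothetical.

```python
import json

# Placeholder host, not the project's real Elasticsearch deployment URL.
ES_HOST = "https://your-es-host.example"

def build_query_payload(query_txt: str, size: int = 5) -> dict:
    """Build the query_string payload POSTed to <ES_HOST>/subtitles/_search.

    Mirrors the shape of the payload constructed in the extension's
    search_api(): top `size` hits, offset 0, matching against the
    "search_for" field of each transcript document.
    """
    return {
        "size": size,
        "from": 0,
        "query": {
            "query_string": {
                "query": query_txt,
                "default_field": "search_for",
            }
        },
    }

if __name__ == "__main__":
    payload = build_query_payload("inverted index")
    # The extension sends this body with fetch(); here we only print it.
    print(json.dumps(payload, indent=2))
```

Sending it would be a plain HTTP POST of this JSON body to `<ES_HOST>/subtitles/_search` with Basic auth; the extension does the equivalent in the browser with `fetch` and an `Authorization` header.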
From e9f9b96b607a74695faca0100e2f15a8168889a7 Mon Sep 17 00:00:00 2001
From: himangshu81 <145715398+himangshu81@users.noreply.github.com>
Date: Wed, 13 Dec 2023 23:43:24 -0800
Subject: [PATCH 50/52] Cleanup and removing comments

---
 ChromeExtension/js/search.js | 45 ++----------------------------------
 1 file changed, 2 insertions(+), 43 deletions(-)

diff --git a/ChromeExtension/js/search.js b/ChromeExtension/js/search.js
index 5219e24e5e..abbe2cd43b 100644
--- a/ChromeExtension/js/search.js
+++ b/ChromeExtension/js/search.js
@@ -3,58 +3,20 @@ const result_container = document.querySelector('#result-container-transcript')
 search_btn.addEventListener('click', function () {
     if (result_container.childElementCount > 0) {
-        // console.log("Has child(ren)")
         remove_all_children(result_container)
     }
     search_api()
 });
 
-async function search_wild() {
-    // console.log("Inside search_wild..")
-    //import {Client} from '@elastic'
-
-    const ES_URL = "https://search-cs410-project-hw5dhpc4jsg3m74vnbalajt754.aos.us-east-1.on.aws"
-    const ES_USER = "elastic"
-    const ES_PASSWORD = "replace me"
-
-    const client = new Client({
-        node: ES_URL,
-        auth: {
-            username: ES_USER,
-            password: ES_PASSWORD
-        }
-    })
-
-    const query_str = document.getElementById("searchbox").textContent
-    // console.log("query_str ", query_str)
-    const result = await client.search({
-        index: 'subtitles',
-        size: 1,
-        from: 0,
-        query: {
-            "query_string": {
-                "query": query_str,
-                "default_field": "search_for"
-            }
-        }
-    })
-    const timestam_obj = result.hits.hits[0]._source
-    return timestam_obj;
-}
-
 async function search_api() {
-    // console.log("Inside search_api..")
-
     var headers = new Headers();
     headers.append("Content-Type", "application/json");
     headers.append("Authorization", "Basic ZWxhc3RpYzpwY2lXY2xwTE5kWHVpY1VoWFY4YmhnazI=");
 
     const query_txt = document.getElementById("searchbox").value
-    // console.log("query_txt ", query_txt)
+    // Query string to send to elasticSearch
     const query_payload = {
         size: 5,
         from: 0,
@@ -64,22 +26,19 @@ async function search_api() {
         }
     }
-    // console.log("query_payload ", query_payload)
 
     var requestOptions = {
         method: 'POST',
         headers: headers,
         body: JSON.stringify(query_payload)
     };
+    // Call the ES _search API to retrieve results from the "subtitles" index
     const response = await fetch("https://ac55987c83844faa90726d4e5efe92b9.us-central1.gcp.cloud.es.io/subtitles/_search", requestOptions)
     const record = await response.json()
-    // console.log("record ", record)
     if(record.hits.total.value > 0) {
         const result_num = Math.min(record.hits.total.value, 5)
-        // console.log("Maximum number of result: ", result_num)
         for (let i = 0; i < result_num; i++) {
             const result = record.hits.hits[i]._source
-            // console.log(result)
             const result_dict = {}
             const response_str = ''+ result.week + '
' + ' Title :: ' + result.lecture_title + '
' +
From a6f102088b9f6a18f4c2dd386132e8bdeb4d063e Mon Sep 17 00:00:00 2001
From: Christian Opperman
Date: Thu, 14 Dec 2023 19:24:13 -0500
Subject: [PATCH 51/52] Finalize documentation and README

---
 ChatGPTQuerier/chat_coursera.py               |   2 +-
 .../README_images/ChatGPT_Initialization.png  | Bin 0 -> 15255 bytes
 Documentation/README_images/ChatGPT_Query.png | Bin 0 -> 18822 bytes
 .../README_images/ChatGPT_Response.png        | Bin 0 -> 87022 bytes
 README.md                                     |  15 ++++++++++++++-
 5 files changed, 15 insertions(+), 2 deletions(-)
 create mode 100644 Documentation/README_images/ChatGPT_Initialization.png
 create mode 100644 Documentation/README_images/ChatGPT_Query.png
 create mode 100644 Documentation/README_images/ChatGPT_Response.png

diff --git a/ChatGPTQuerier/chat_coursera.py b/ChatGPTQuerier/chat_coursera.py
index 2e9084401f..db4a2c5bd7 100644
--- a/ChatGPTQuerier/chat_coursera.py
+++ b/ChatGPTQuerier/chat_coursera.py
@@ -10,7 +10,7 @@ from dotenv import load_dotenv, find_dotenv
 
-_ = load_dotenv(find_dotenv()) # read local .env file
+_ = load_dotenv(find_dotenv())  # read local .env file
 
 loader = JSONLoader(
     file_path='./chat_subtitles.json',
     jq_schema='.filler[].text',

diff --git a/Documentation/README_images/ChatGPT_Initialization.png b/Documentation/README_images/ChatGPT_Initialization.png
new file mode 100644
index 0000000000000000000000000000000000000000..ea13b615894f4f047a94956e7aee1d264bf2deab
GIT binary patch
literal 15255
zkv${i9Le90N-pUaRMLJ|w$k3D-yT}f6XC&X0KW1NdC;+HPl5M$Vb zol6Og^pfG#!JUw~r>X2B&?lApmg-ToniT&C`&-fm45KjJEWuw~)ua-B#!_XUe*7YI z!N5-8%MVC3J*L72nQ7{CB>9C_%;>Z0MopL==|6q5{#0LhGT6H#^!dGlr&y=7T6?2#)H1I?3EsPDL0I`jBy505{Ruad14~^TcbsPr zoc+;*r(wb@nQ^MA3PoAt+G?7=wdUELK278BWb0-lOjzg8V?${I5tmYDIbr!>C3lrA zo5JKQi~8IWmpSTm<~53Rlt0r8Ccc(3QYy*I%$+ZQY2+1&7FNE3es%?h*jegZ%d1ErYs-HfCK|C6bM$9STm; zs8T+pyi^fNbNT2cRKusr^GpcL(`+rz3rev}DG{>e-x3xUispga7+b^U{O!J$_f+cJ zI(02I=OtRkt;+PX_FL;V=-MF|5u%93NgZ30E%r&lNj-3g5K-FWv?HNpJ558qInFtb zxp23_jI@ka`(~plwl=nC%{EQJ9Q$d+H2)6a&ek^l^Iy+Dh&>P!5NUc028y~KhK*Ru z`#8I}4!C~bgjh$cr3~|Sf?jtwxmCI@iM|psa?KaT7a4N0cZ02(ueZ6+EWRI0TzmAh zVYp`S2$9y9HeE7bGWFxT#9;I66`pK3>7ahw^DXJ!-IJ7Mr|IRKf;4Ntb>I-PX}kG- zlbeSGjjdpkq}$fC!hHVxW7OdM51bu`NpK=9!!eQSNU>Rk(YH5<>n4Es& zI#Biyd7o=gf3|)AJQK9qvD&serFU97U7-cC9O+X)FzO1H=9h69UAYbB3a62TV@Mqipe6}F>}U!Tm*X5fh>PkeHXYnV4$ zb~l=(EpM!!Z-F@nsL9UIl`B-PpUqb{Pn`tr*v<`2oA|UYNgYYgejSz1b@BlDXzOab zJ1aTsDZf=VMCfgOmAxNaOD`ny##Qd3r!prYhcN0s-BVE^$2TrbD7m#z1#tZA?{pO) zv_>Mo`9Np7sHz}94KCg;?z~-mGSj{;K2(!#o$k=E==Q0iy=Hk4Rf?qFcGhIw;D1Xuht^>gc}_8?SYMno?8<;ZF6wcK(q1} zLl5;U<1_Do3qhah%aP%oN$(Zu)_|yBCt{TN63}Ds`ORK(Ca!v+dLR6eWQq6M;kKiL z1}blTIrF56r1|u_-hM$Be1f~ggzmH-RTg!qsd`qLMDdDK<`Ge6yfJ=OCchaCYVIJ0!-MxQ*)SE z`500G>->Gs&(73!NzPJV1-1IL?ek|Bf5FrszF7dBFD?w;!#K7J?B*^k6Qc8d%Ym^+ zhvDi*S1M~QI8(3nI%M~8Is0#04`b0&;TN#AONQTYL%=InOv$<%ETT7F&`0-vt)UMT zVV3%eR;sEP9OyIw1`g&U3|w>y6CHq7^dhl3~6myMMZS14SH{BY47~j!DT*@5QfgU zZ@u1WF6jH0))7DxOf{q^vm-)!h`8>7RM9I7McJ#_4#C?~B0!Q9TkPk8o%GzcG?Rj1=ngLD8B^C31H zD~`?pCN3#C9uKJP!(IP7iH!kCVbrW(9e`zxz*su8(mf=Q=AB2QXVXMQvsY($W3%)QXT&{CdaNcREkfJ zIu+)B%3BEr1hHtQrG%!Aw%Vxwng2g39Rf_T4Tz;$z<+~nCHNo3=ntL+0yLC`qpi{Z z2j!nk`hR@6Rm1-q^rhSaud<|WzF`)2v4dESmTVC{TZ#9aJ_IwxC@i|wH5%1)QW)x> zyiuMA{vmOGxF7ugNVqAEE-sd`6~OaiZo$}Si7dG=^J?2#&*WluN*8zfX-$k?%YGul zbN(Q6q6~I~*rY{4y2IGJmg#R!-*wZv7psZyb!?^%d9B6md7t&0v^1c5G`u>lLzc{+ zPu4|!!$&o48KGX&;4YlBoJ3x?v`}&i!_H5St`Y-%FQ)c4QT-)H#R6yiTgOe`H*IV7 zszztm-xb3RHXRJSxLu2sU-_K2(^p16d+pJWco8s4jULYB5hT9^V!)NXtPlNB=9G>4 
zuGa@}hIx6!iahn6zL2SRSQ(qoc@}7-| zd8Mo4&>f_7>j*b*sBU#8+!LD4I!F237}CHN4)cXy!O=&;9tV zTjjc4(XCtNbG7AZ@J4Oly1r&TLL50@$)zo^Og&U6G-p?}DMQ(y5h>>J^H!*vXo0$x z3^_UVKdURgFTUIAx)}%yzdp^OzbaHi+__>1eynA*^?K^{6Yx|tTHM_e@tKFYy*v04 zzoZ`;FVTRjHNAI+e6F33(BjYc*s(PcQ>-FWOVYmJ>?F)M=FqF}3p4r7Wph(SXL7!2 zng*^dI-aqHzZ?80IDyLz`6?=M6@Gk!W^kj&&Bek(a%c8DoCU~oR8tt}Y=s`8#jkN* z-?PwboSk^LtWU#nb2jAj`Bt3?Ustq-TP;Tlmqv+h7J6J5&UsovcRo%vSp{mU9QNt8 zREn;pfDWsGOVWX6L;;`Pe*F3?%Do9xgKD{$RHSYKutJ7O6&~{*NEgHQT7CL^rH+3R z`+N~yIQynYvklvihaV>Wdcp0X>53+)@q9lS`G(eO_iOV;_e&mST&(MEoiA0V(St)& zN2}u9vc8HtN2Tt~n_+rQ=0Bo+4$>BM9H9M%pUKspn>8o5tO=GCmAd3TotTRKbaS~b ziMS{Hz5Mzrfc`Se)Y4XwCbJkV`z__@weWpcCKzUOwl z!IHL?2`G8$j0Qb0MM?CBq$nam>*>yFU;7}Ti2O><67ei{gBx0q`D@{qm?I; z^_kn>(L>9Gf{kCgUDaS*Am*pyNtbX-Ma!V?2YlL~N*laNVy{JPL0#)~C;6k@V96#} zl2>SG;QOLij?K>X^j|TE3-k8d$bd7LWXn|Ye49Jp+*_(DQ#YKF?tZxIHi0m%IR;#; za{Cl1hrb_KDoc*)jNU(Sf-7eY?{jTD#YUk|5)DZ412N~q* z8ZgPL4fV2B)L7=R0BkZ7S#>zPk^!o*KH(nT$&ZE|=4^fM;Mbpjx%K_+B^9FOQQ5|2 z{hK*w8&41dXbk#=q3*q@jX<;O!quQ|_8sC}SUye&n_3KK_iCr zJcM%$LwI076m$sT)3S)mozfIch`c^3+uDLuR(!de;WDaNQ@ki{9uYivJI=Vu2wra< z8{8$Mi)q}hR4&0rPh1P7AuI#gxS8qn=flBDRR%&EO0zdwzZ}LevblMM7zy`!K9o9V zZ0S%%3jc5!JFh!8x&p2ERCN^Ow#ajpRtclGWQl4xtwl~LnqEb+qjjxe(d5B1oNj>EG1R~tza>;ez$x>Q| z+pG;(amyAWut+O!pmD4*KTBl6hQa2ivY*TBLfzwto#Zb2xG?h|eO;%xL#8;Lar?!a zCz23L+}jzjWLk>{EXHa~)%M|3mx^cReJh2Wc)ps*>H|wjk+`zH-mB76hjCsdLuR6U zh`kEa>`S4a>)gV&EpiWHw#=92#4RvpgHm!~Wc?eKJ6Em}aeX*!yXj@QipwxNw>5 zD&~@@i0exbzCq}3S~yCSUKvTNZ0wqtbLe3L1p)Jp#N@7tsG}>FQ7QYm)i9>ayme4t zuohjq{WWO{RZmKU>2xbW(9=d;f;1Ax+sO}>5tX&08km#Xaz=xV+eO*J2^ohXd zbiQM?+{`Tyvd6`d!M<%ExR|gJ9c!R zNJ1Ujr#)!{m)#wxzOnNe$k#KnxKLK~#5fRQ;@M~IIyd7Zt&@%nMfT04SKr0Apr?A5 zMiK7_0qC4XW2wZg!Mt45F8dWX$CpPE%^3z(J{?lW<^4yRJkhsSX_r9^avS zQx8Ffl->Bt+3R2ioEgj)XRYRMnZbKj^n{_&;fb>j4Ky*DiK`8 z-{8<55nd{OUTWQ~4So32FXG92=Cwz0KkXV{M1QNjnT9DH)kU=MGT-RSXYjAboUj;A zE}+%j)ppoP<*F8OfA+Asw7Nn=(-_UNgVFuC_2Kiu9(aT}HQ#j(Gt z5C@521u*vT1a+in5y8HA##2P!!Lx(ps!;8J410jGRL1aE!wc)=i~0FO4lwsNWtkSj 
z`!ldJb_c6COJ;NTD7xk*Wnjt7FFRD4FaY3$|Iyf)EGvP!odv3GFW$3z%^3BuR-$QL z0##I&vq-8#*-mqI@Hjp)f6=O(CF&7Q)Qa$+6l>=l<2|MJoO=O?rA||w2>CB6SA3Xp zBaXqeZiP_u-GZiUrfBi^%@c2bw70f7R|f=(uP`(;Wr2cx*rhxex~#E2`|}zO6cX0L z1=zKb;>vF*3DjlIe+gh4+$NPJ z=Hz|%k(gsCV-|=KEDku9sb_f8mq5uG9OHsNA8418p)w~5FXBG!tqmUK8_1ThK6lW~ z^w|xdFYYc9V$!0)8zZuahIBA-G|4kiAU@Jr+GjQ!5*3c^JuX7EVc9gnv#Ew5+fDV~DPM(xE#+Z>BO1sBJt- zT(nT@u@NWP#Kcc$*}YJ)s+bRBw+-q_BBnM@cWb$wmCB`-vk5C?`f)yo^LQ+>%Q^ON zm1Wdg=j7QxF6qOxSFTzqD$1+RRPkKe!E_QsN zTBan|AdqG7l2XyQ^LT){fqK0_&2b*t0=VM6S*s-wKcqtMm>$c`oKHA^WbS_29AU7YWYf?z=&Rc4c*C|H^ zO5$-N@9^?QwI<@ zwPca2i&q@4{o-gbPb6(Z)?-Z{5lubJlH}kv+ORBsx3vo%Vmrp+x=?97e<^(^Xv5q* zA6jk)>*QN2c-6-h%~8Tdg6-=Zp^;8hk<4s#tSlHAzK7mw;^uF*T|;*gyYd2c*E?~HKM-q$r#zL8mS0O4le z53V*>VPy>1%wZy!UJTUt5*CI*XbqAQFj}z0co7?J81z#aTg&81s9%GRVc?ju8ANc+ za>?csr4l#&2=k|g*ujuSE`h@6C86nvRM0%uPr+#6@|SZ(M~2;G*r}`m;BXHwn>QYDD;X=Y zbI)h305sY_MEc(J6wrE5eWhe8-R*0Vw~SYoNk!}Zy;tU4n`^q=rCzhQE8D&XeRvQL zBu8CVwcWtmX$RSItS(wZ|4sR@uDB(3kRAxwSY5vsJ&|m*p60F4_!tO#N=K<}%s#Zu z`)|$hv=Z-&KvpF^Nj-lxQ$TCYPM^Q`)AJy@wL_{)w+wx{>#7kZHC6-r>Eq%fF#9-My63{zlih7z9Zw-37^?l8iW217N@OLI2gzr5lYA`6taab?aVo#z9ea;UDQteV`_ID`#fSnwnfHFiA9!dTrU3QBIL^jzR+4t&S<=d&ejSb|BDN?VdJ`8*zZZ#K>`RuE<&2mMq zL|Ohdu29~pB?7jfK>5M-aanattDNsKAt~rC_cpNJSj%KZxLRU&Tj1mEjIz*U8!DbG zy}8>?4EGWUi6L%88wQvW`uHPKLW+*_cj(ydI@`l*B9(|NmZ%LFB{ZQgT9`r1^w}yp z=R(*sDfKh6zg&HPDePhOcS>81AOi#$qQt>_%e5B8qo0+vi2!S2%5JbyB)ohYZF04# zXWLG1q5&ebiWlf{rH;sTVIY}NlS-$3bHi|_5~4~Ay#L#EspckEyyo6FvaH#@BI}O| z^{I6wv+tC#!nW)icP726uS*g?m+rsAit-<~)b9*X`w9)7Uw|Br0`0;SjG^lWODI$Io(=mwG~Sd2z~M^+*Zd(A(z8IJuJ0dxA3Erts{_ z)K31nf2~AwCOY^mS4Mc1Y|4kgUM+uWCVT@nQ<{O4Qzp>aN7lRIpngqTY&+pq0=nMf zcYRVp1$U2(XHBYh7y=4LyFezMy0a6bNWN9eIRRrTmJj$sAl*w*nq}4_XE7H zDQRneqv-)AjSgpRC~&0EH0kTZ@9o#}mPgUe!p>*ci_bKnvr%Cuof1u*`qRtA^>mbV z!xH)lc4P3l{m7}x<)X}qrEm*B$Fy2pLvUkXnP#JfM-j6}m+B+izFu_>PdAsbG#TEWpuR^Ff;NvPysiZBAnT3^z<=!=%T zK6P!$ReVGpo&A*y*bjyK5uv_B3e6S@8Jj)c&IufM2$dQ0LT=`4@J_4fos0F~Xy6uB 
z!_vG?*uM`Rj-4y>@7yYY=2Zvm^GR*Vq~+c~?O*4gp<_xj%~84_s-CV66O-0>rzl)9 z3&zX}#?FoyKNaIzr#M74xZBQCgYK^t;<#o#CFuZ)G9hGlIGCwj=qO>tn-N%Q=f|gY z(XfphJ%2Py_uAwUSJaQpsw?4g-Ri{;nOzF$vt8oM;pE0W zEX25hUaUF4ruk_+0*fZ}Q8bTqB@D&tWt`Lu85}UR%tk@Z$rQ+B+2=ACnyOBFNXYGh z@|}>7z3P(Y>eZR1BwF>OrgKcflWsasJ6b;GTIIa(ZZdCHu@(!ojIU#F*6LfP@8eOW z1P@Q2mu#{hYwpeDqc-TeA6M$sbdsc2Vom)f7Bgwi~{uVcr5>*DA~*`<}$*{H*Wmnt0^hekYIr?W)D}pwqEw| zN6U`r%&~@b)3M2&OeLCV?;Vl+$n}F+q&f96R1pd~FLe&&1WZ!Vg~sYj&A#t{Hf4Yl zejdmIj77BHt(bDkhDN? z|D|dEe6LcI$ zqMA=UOe$AAm=l=$8i%PYR*IUYOMA2K^_EW_W`)oK@g7?VCQ{KrzSe-FIUiho}7sZ^m zvy&q9Gg37^IQQ9xNgu!G<0=X3E#ZA*jv+xm=Gv{6i{E0s=Ia^m#zWfHcEpub{krdQ zgH}TOM1r8Tzrr5OGClgvj&H&rob9Kw6Sx)r4BlI%=wYIFG47q2hAXLheGXHmjGHee z7OQ@?dA!jC-XeZX4*rO$L4+WmTyS7ycH(c zmmL;vwS#lzmC4!1lUx1ZM)md3?YL%U-&+&zJtwv#NL?c^-0ewJE%y5h$or1CKm`cHZ!vNwEBoqX0h#$_FAe;; zxW&BK?<3;1Cc`2y3>W^{N{p?mk4tzj0%M|&Og|HLzTqlSS*a>=9+8MXL_k<~`7UK3 zpMP5R1V0u+TbxDj8@?GSAuqW~E;5mjz5KXSx6pEZhihEWvMq;*xR$jdY=FZ(-Aj0k z=*pjw1C)GNqoN?di7p4m!76Tj1!pitoX!oZhiVPq>Q1PW_~v)erIqTHxpWY$6RnPIgeqvXr44O zbNs;Ab0^QRq&x0d(!#6v7tPG2<#-N#cH==O#H}kN1QAhT_DLWKCPoMq%-zmgWkIQ2 z#Q~rCKv@4WTkx*-4xiJS1F#E5&-iSr_Wi6?5R`sJc$F2Vn)v1yj0ifT4ykZb)v%aR z-WhZHQ}m@~iE{jqxH?z5`{+qjiZLEvf?dMSOD90&llSipKuS&lSEs>(*MukKz?~lg^gj@xyBu@-Cm^NfV`) zULHyZ*Ouv9ZPwNEWnV6n%67ym&>5JF*5*Or2XWV0{ys_YU(Z-GcgZ@YyVqAhH1uKC zHWzxH+_!TGhe<6mp%y)vb`_#u z-0%8j=K@Z{z=-fVJ5OWql`X^SPFwmGL5rz-<{aD5kMdZCreMH}b> z36;z61J84EkQ&y3FKLy5?WmYz5mrgkc7}Dx(jJTAEZ+GUZ}(&VI}CAy4!_XHh-#@0 zXz#Jq=uyqhr{(Y!G89S8#@x&$b=~c~l81HZ{zU9F2r^sQ+M@AD8<`Q<)R%;;ZtAP6 zhAB$QU0ZbJFx=YKEnZ$#(i+nDVe?xs2lcQHz6J^`pcWALek{B(m}Nwiy!KxOrqQ=L zu?$}%0;DJ_J_e2kyrN#cOPrI(TKl20FJQ`$rt^OELt5#`xH~rx;4`6}^7i{jAFAq& z^0p)@8L9Vny0Q!I{}O1BrjwanpeV4`cv$a;3_X9d^( z)@)()K}ABK^#t==do*dc_H73~v0S9UfqujS=;u&wGPgA&wlYyU*z%K+Dcje^-EnPo zuyQ45SxN16O?qD=Crt-bC2}OQdlLzMUa4wrC-UQXzSfrN_pTJKc1SPd*QL3Tax%!= zF7ANVYp#8ROD`;>~Mo7?@VCq=X0raLGc}GN_COd z1gp3g7ZO1*F=c2XE~ECm%GLc?N$UCF2qdUCzGBmTenmDPmf==Qz?tf27cFsI2pU^_ 
z)$wlZUA|SYiM{?W#51p&1n&1G5B&A8o-dO=&)+ZC+U0>;5?EY-guJOK4-)T|>Fc)4 zxDBr0suMT)>$d`?g}V5~WmIn6Qv4VQ7Bl*i>mRS)p1HC7Yo138~vBQ|!>J{*RsfRjX52HF;G6vK%@U<~5vT-pPC zg5-sPzO5Md3u>nF@xQRR(n@wvJ_r0(gTX z+LuDt14bRybjkw6GWU;^%p<`qXKStl#?LqAG8-Y^MGIE_F)}qeCi_U7a+KY^SnRvZ zI+2!pQBj(#U;E(L0WKsdl--hcc0v4)ka(SqyQinQudcPRGa5f$k{-12`J5MG z916`IuB6Q*jwOMkK9H;Fhj#J|{mVtC{3U@!_ZZdpw9|xOdKjdnjEhX{n^hp7HJfH{ zh0moG6W=0EY==*(b>ECyr;WqxD_*6g)<{_MI7NgHyS|P%F>C1<)+!gS1FV3QipK9& z)9BND)*b%sjmE+lL#wXn^G#$uf~?A&Wyv}-vQ?bP^o(byrqQcUsKGCS`?Dq6#YB$E zyqfJl-KhTU&-S+?(MPni*`!&iilfo@5teK;G9+H0`EX4wddF93RCx!n%i3CR1 zQ;H4!6R*}j$x__ ze=wK?%|?usls{x>YA6Mmn=jk=UwVyZs=u!kh414(xoC$L?%ahS?Vo%nA?d-20sV
8am#?_(+!p_yls6Xff5D-{k i_WU_=|L+&ZDgIz4eb5t3qubwZF%(~@$(72O`u~4xaNLOi literal 0 HcmV?d00001 diff --git a/Documentation/README_images/ChatGPT_Query.png b/Documentation/README_images/ChatGPT_Query.png new file mode 100644 index 0000000000000000000000000000000000000000..8478a567fddd7ee2e7e7072d52a914ad306c19b6 GIT binary patch literal 18822 zcmeIZWmFtpw)hQ!00|HvxLbk*3+^5u1a}DT5E^&a;6Z{r1Pd++5JE%a?hcK+yFbbg(tE1ewCYy$?&!K-N_0!OPVC8Y5x>kC+#Is*V+PjtyKd4>sIu`Iz-}L_0tYolN7Mb#1_dNA>+=M)& zUk|0QJNCh4yQEXUT>rKMxAqLRy{hH@Y%N}uZ<}5bi4P7=e5rYPBO-ZZWcla?T8%Hu zVL8IPS-q;HPp}^k5Il4;;Ve|Rm+1C-5B6$Y`on zw^LS)^0tBJ`%_40I^JMjJ{zOXrI-x53iPA>5kin}A}#iE?czH;ch*x!8ePlr*sfO#7j|+seF74(p$~q+#E_<@n@4M%~)~ow4yXO zG zLRgsB%Ys?JMzTRFwJmnUC|R$Ud+QHyYoV<@jt!Mb#o5*5dAFS{CZ z_*6Tc?Ys@58$^U<=kPsTJ~PT-)p0PKW&M2M3M5w-HM` zSAWkce4;rPaymeCa6Kcu8?tw5Kz#Azbu9`sI-^D2V6(DK6Fb|utJ16#fuE|Qtzowc z!6tHFT>1h_-vz`($eCdZsExar*&6{2YeDUx+ z(AI=d)O8Z(M$9E`F?1nIk7kr)cq<-q+9f2gA2|wQ4#o5KK%BQ&l37nL>j`9^vbP8* zKIv_Cs6PmgBQVT=X;A(Vg^%r|1#SH%*>kh8#pq~3)FiLc1BR3_mPupZk5hic znqU5rEY?pmNWn~5PRoTEjolC$9g6VkrF683)CrA5EKlAO!L%A-+ZG%cc>&(awl?~8(t7z_FTpk9RAL< zsMjT7^-Z1Xf?|=XOfflU!bcI^4`un8c?*R;iupx+->cQ!iyc)~`Plky;ttoA-WV)T zY6^9{Z$vw1D$iAce5!6DdL*kDBKi>Tv-rL&R*$-#T1M)C*c}STS;-m7nQU0uk=C(%+?@{k z&i&4FJ8tHY8_C109mXHKz%JF!D|~9ay3PfB$h<@LHZDHvMw_j!GfNI*2^&PqzlUoE z&P&o7(x%H6%BJS$1P2;tVMyW;SOc1^JiDStN0-T~_S37mg=yygn?gen@Lr<>*u_JT z*orez*kyNGYN23(0AlT&;xkss#31)Dfew$V*`F0?oEuj!5 zgd-jq7F87P{*s@@#cVHy=i^-7uMGRc71(N0$D(wle9Wt#uZv$tDU`~2W#Pp9WGmu# zcRDFuL#4l=4~XATRsLkOQ#`8fLH-NhcpXv+A+3ecU=m%h1s%W1!JZ%H&3G{#o|mDd-5St==x z*s>oH&JkL(%b5+PT^ryzPHxO}@14z9JN9odCXcamyB$Qi$Gh{M`mne#>S^7%_>A67 zJ`G0QV0Jr`Ibv}JdS~S;F=`{8jZP5DOU50YJ+RySiful4^wc!Ublp@x(Jm?d!MVQz z7xIF6Ky$Xf-+Cr^y?woPeM;l1e7Z{6*mR^@= zg2q$T>7pNnf$~`bZ32#arI#~pn*u{M>E`LSzn5Hcs@iH+mk!Gz8GqeAR44h2?(JdSxDj*~KwiyaZ!5Ud?Ho6n17!u$1YOUG*JSMBT65XKJiy1Xd)0M?8PP%KD(S{#*XHhbIf( 
zKKeJJhiy^OksgV*N+FhwDPMXApa>#RZS;NIaxkqTWe-R6I&xwW;g3U`M?Rd@d20CrcQgvku%Fb--V9vtI%gf91mW_ptjR|;z$z$=F&}TpyLaeN80)O8Bk0<|A;=iTT{P&dHJpZ2b-yZ$% zq-u_)4&t`fK$%WL|1)0yNc`^)|B+CD<+17i))jyC^PjswM+-d{VENae2|efJJ}L#q zoy78;f+}zZr0ma+2lz_|9FJ$9v}G!^j!1pL9@RSsYL4KG7T!;%7 z_4;VghVvCmtX~h2HHD4e8N@{X^#bM-X_S5nag~?yA6`8sz_`u{5OkAe8V z9{Jz$^#5CroR#HR_9wF%cHnBDP3btw{aR1d{uG`A6{8^cSMXZFHmvHvL*2~s^>N#) z>N;T`#{SG4m(}Un1^sTZp#@idS9YX>HEUq3eIyOn3^WB60PP!rzuCn z;uxPD2YCx2$-Aus8cin&m9w8pJ`>ygqTFk|T+mgn^|ZRb-o!|=`EBX87Aw`7VxBp3 z&{wcFcsr?W+i0QFuoQs$(N}*S_2x?!pO3Sz31#;Egg&fs@#E0@IN?T{A(*=U7_l&Q znYVoO`Iroh34hUjBZXGr{%W;88ePkyr-U+4nedrER+xyf_}F))+?62bDBcYeTemji z)(Dm+f=sWI9t)5g&=BOOokUipSWRd7EFE_=vdM41gM)Vrue99P6O3J?$lOh?<_~=| zQqO6HuX`$wJBdSP%+i>Qj@May>p4jAHT1pDD}g(o>%i+wtmuL8!KpIL^?4^LK48%l z*E##HqpXKJT2OTlO659;6A%<=eI4R7*ZLHlOcx)yR)cd`95To-stD_fu2 zD;Wh7nxQ<27B>R5;t%0#XEDvCg}h&aUUVkz(G8tD%D> zFL2y0pBig7s<6+F%i=Wa3SK3p_KD@mhh2Jj6_o^gHVx{h#8Yun!41%E~`{ zgJl7&UhWdUx7WoJ_kbi_<&96p;K`>vgQrxC5TOJ6>$8$!s!?tJW#R+~b+8OeJHJKnm@>?*WGdsAn@nY5x zXk4^3AtD9MhvA~a%-Pcxil$%75Fa)UUAN`WhMk1wk--%ZplfSh^W7b_wKXil4(#@8 zrh*YL1S+|_iJy7!Vz&9FgZ1FEnGHngZHg%BVbqh4>uZDEf_wS zz*zpa)Vpas^kxCW_HCo5Vsls;&y3SV^EMKigM{g1YU(Q-pxanPB zkUC^}Up(Vpa5LuF&WmdmLuFD;1f^O<3*A^udz}ufI-ja31o;e&ULuHHHeVo+?nvIPmDp0q$;*2zwo^9@m|9lGFpyvm9-B0rffLHcjPRG*W$taqgc zkca!q{aR?Jc~4TZD4k{P8>_>;OKO6pSauyf(O`uGk-*qz#@OezgM7Q1T!MlZ4ivQk zOifoSBe~lx$mBm|#2xxO7WHQFgcBiAurF{>TMWUo;iyY^HcYmsgzlDXiMwv{URQ18 z8yt5>A;NoZG$)=zf|rpeuF--QOXg?q_Vc?a4nZY?CBFVPoUpr1VRAcc1+~*CLc83f z+_IZD;DQr#NbTnMMJFBzoIDKg0%%6NDP7l&>ZU{Q!w3i~S3F_rk!Dw%<@I_J#%_6o zO*3)eQ|?3S2phv7tm7E*7h%E>Kq_1p+CGPn`|#*>{Ro%%uvL&|6J8zQ7|lvX{s0tO zFFTUokGl8!*BtrP({gjNC@=nS6${SBg3dH9qCWNV)f@WfH^PY!|@YG?n z>OYR|4P{<-(ge6|+T`-@e9vwDD#M(D%Z;o7Qg2ZbQzWyCmUO454qA9A!-%z!YE@2v z9df1My6EkyHR|>S*9DwFhx!5ggJMcfm_Lqbc+Xe#TD4%BIezP!dDJ_E7d#Y|Y$#P< zLxwq4_^w+R7y|m3>U5gbz)&uJ@Dnubl?MZq_Js5@Q@5izkKM9F@6Gp3>hcE~9bMm| zlzapX-P+@_iAQ{dfN3RJrx&lJuCepI+{e;!L%R0i{u9E#8;&n=EpUnWOrtD_MOQ+bN;h 
zj9B1|3$4ju`k+IWBDWKrPq&xXg=bxJXGc%g&yvP3;DWnNYX!yzZ5sA#j{tp6)2nNg zNGa}n9H9^H^t)ZeyN}083GaeGK{!;hO0p`eQSUm9>&f}T3;H5~_i&N~K?4++D@oE= zX0dN$u(YnPZHbTfm%MK5ETK)W+%H_=WK0Tz=c+m#=JugwROadSl9pWcC$u%b2~DLN zMl7?ApI{rkjw@qF$kvVLlR2Q{fHKvK7WBiBvuy=m4>&eG;k$!>Ss)RGNTs*6>=|^ z^NFmmk-8x=}2NrWDsHJO%9$kPl4;9vTUP$zTCW4FKNbQ{eKDqHy-mMF}vt%P%kAhhD#*N>c zyA@>C(UU8?EB-D*c}1M6xWkHRAloW2qh4BIc0NmZx%hB@+2w>7!u(>c-p=tTE(Rk}{NCv`uh1BVfeOZ$*+WV{f4AwD`TOy4GigmmkoPUPYR@5u~tb za&X{gm9TSEQH8K}HC(V?QCwv4hfUfhN7)W9tCSVX!X<4 zfY4Iu)b^%2n>Tuh)iLSTRN7yz;$^~jc(hB9b}XnQ7!HiS{e|l;LMX72Y|MA8%E;6; zep54{`T#m6d0Gp)Mc;0^oj-J0d4}ob^PMiix8i#?&lsM-wa3Yn<4)3McnQpBVtTXB zUEbF+-1mjB+F`RnHPog7-6BiwsNeFT9>&VuI3Z!^%1C1L|)*B7BXeJvU4ndF(ltG zKuzq>`XQiH?q>YdC)C75L9U&q-6SPbATc5Qyu7@eBDNQI2>f-=4 zhu&Q|sAr_*^q_^x(r5`VmqZi`?38at6>h8tNHJzsWhPrCqnY!+_HeSfz1*i&^X(hq zPF*|fv)JtC?I*^pSWL5Rnz7;Jt1VhopAb!H(ny%Z@9kso72Fi40k6&c^l@BMh!U`5 zyRW5JDBdY8^>N$@{b}77qT>_q^FcG49w~I{3F1Vye^;oS-|<1qcSVT^xtaTpCJ&lw zwGtGyq1kM|k-#u1+~oaw)pw)G7tB^oGm`U^@|m5)L2_Sve)3~_W43jXhDxt^_<)-eNH1_Un_uR6Iqc|3)0 zMxs}8-5{(qE6bDZee|vZHJ|6%U{RI0L9$2ASILTKWoyz7IF)CtsVSVTltm33tn+ysJtFSI}C4o+AC;RB=GQNc7?_I1#_1 z%W1x|Zw%68HK9_LAiy}%p+wP5z?Yu+p%-=qSn5Ul8fF=*k|er#0%UOP)79Z`5zIt& zfzv-YK%r5)b$Rw*E@$t3e8llep`N7Z2{}DRLRxU2RLb8zsUnT?C!+4TI$ot_X8bC?ckwfZFzW@c#3d&jXyts zOduZGhRj2-K3O4X9*3^~qCvy?ZqHnpShukc(R+g3dCKwXGCZh)c9DGgYuL&jeT@%{ zJ$gyK!vZcDCFHx^yx2kv`Je7@e}>vDI%Ym z)1#hIR(`Bwf(37@OoQx0EUSHG%tK|nM$l=1rr<`Zxun-?8l>~hr(bahPq=n$LdnLn z@=p2aoeeI1z98{N(Q0g?@v93KH`|YT46gH7hp(G>Hqi6T;0#ZqxOH&z$MJ`_tTEIM zYjztghInC9S1ZY*3#B%VXcz7{LnUZ(?Fl(A!p>DqDb9?Il(=R+OBZJDQEM~%v7wu` zCpn-ymb9%J`mD6>IK9)=V@7`N`L2U1l*qvd`JolU>WmIbaLBNg10W{49^3i+hoVEK zv16gS7>tv8KG*9br@iG+)_j=u?Q#kZY+yVdiFhoh7WGmwc7iT!4)*U`x=lCF9XswrOjhVN7&6e|xQOYTqkZ&F z5RJ&=l`?XAnhL}f)wLyR{W1Il4H!J~2_}oqt-Lr1EfnpZ;XI#NgPvG<^e-=Bv`ZwJ zvA_C;d0m!$WE#*|?h|49K1Fl=0^d^#`%dMm>W-Ifrg@#zXo7eKaiO+`5`Sp!A`43H zIL5urVh}q@1}cGjUg!;K+3BBcDlBv%3K-8iI~rVt#7OdljwX}w&Bt8MG`vqw~y 
zUm8$Ama}9#y`v=;Ag}RreVMKilfW#@+gnEU6#S@5v4^WJVKQ{oL_FPYyszM#XN^x= zBhsE##ahSJEV{_Da?IovgX|+&5YjSDP6pS+XLvrES$t9v(&mL#m3v>Lk`HN5@a_kc zO$XVCK#X#TRv5lTPINGV)UM-ohe#EA+HR7$=~{@riQW)puXNBq?92*pm%{8azd_b_ zi0iN0kWDE;SeF5M(v~D^r%-BGu5B`SNsf@hkWmU=#~qvJnd*lV0)dt zM=5{~Ql4ST@SXIv4|)NM_PwJfY6)dge5LX+LTn~zzyMv2Ypd#}MP0bp_(SZ|Y7=7A z4_GFog%**^WRmoAzI{RQ&!`@XY%}SPXj*rsCRNf$ zb8h8cK!=n#8P)FD;+c!4FB9k3zrKj`3_@gk_#=#4A7secqVfrLW6o2dq9YwQrvKh{ z9Pt6iNByH^zMl+#`J(9>dUw6sM(xAgeR7&m9fW|4e!m4xPERix+3#Gqt*wjG0@e5Q z;HPU^WX)pymoUoeVKli(+FrEP;LpMvwaA`aPZeZ#qv-)#w4-kj_ffCbVCtzWcPY%x z6v-Xu(=5w7G%|3&?wZF3?^yk@i{>Bm-|_c(3m-i6RMzSVU7`SAd9X|Rj&`zVXv&|P z$sQJh@9Q%BE)7-VAihMN5C zX#D4Drw%K(2d_P2^4(vt4&*rz-^}&3PQuZ88LAv2L9h2Pe6PegvX48cYbSAgkRsl+ zMbNXL;LWv%i3iAVn>=aJ{}Hu{+>`9qE36{ci^Pllu2y0wlb&qA`xgjzL}x$=uImf2 zh8M?uIXlP9J3NWpUXd`Vq^Iye8{Wo!BfvrR_he_*U_~-##?6 zIH_)5Ft3i!F6tIVI{?{QVEdXd>O;h_%s{->A1D^T&9#dss&N6{Ew5-};ljj1wE_Hs1L=;nI@VDO@i9OMA3P|C`9>+xxlhP zaPvZcNIzVlj>F9ys%~~dF-eSlEyp#JH#*!dr8(TfD3tE&I#WM26b|KMY_x`*M_{-N zwc@yW(`yJNgJmbo_vYoG3c9y@oWkmv)wvzwD`X2Yj4{U{I+qdew#XO{61Im-uWzjC zSBnI#A_!vX)S>%2d@_eIsJtD50w6<%P{5u3dVy(_-lyphF?S&Y}M$5ZyZlBx_bJB6u(n4_;bT@AoBKS$Iv)(I(Hy;5s&1Ki=$PI}@sSu8( zP1?Tn1=f#ppf22$HS5`@9{ulQ&&sGgD~>I$l6D2fFX`{@5JO9@v6l=@>Uc4vPm^j+^Z`l1(rJIwohu%|)@4K%=&OILnZ8qtHX`nX|+=A9^aZmLwjh~h52EC^n z%P9pd@E6Nn@445U{=S&80#?!0)R~3tGi^)VT#-TdQhy_9bD#OJQtxd@i1v1zE2s8cyM%U8>dO$D z_t1g8Y!OSA(7He!xc)qi!8dZVU{xujX{1(wP!`5qH|L;!XQ5BWu6n5~br_+na`iK*ZVbjycC6U3x!OS#5RJ8C zchKY(yrGALwB$c%RD{@jHM^FZI@p#;*f73VPry_~O-dBw!+4Z3K@jGux0RtukB-C6 zPaD*%p$1DBk&lpG&`{mp9F9tbY+`<2ePWxhiUd`SPlVSLeR;>(MMEiMT9BWs1rkLb zVX3Yz9Q-2*|6jVlN!l0wA^u{xeR!Pa9xwM2jS#&<}QxIoS<8I!?b8sJ$3Bhr<32tI*JG3K6jlmCc-!kK&x*%cQqBz9bMAvV} z6(bY%*|Ss;>Po`BpQstqJ5@2LxS*}Ncq@dpvV4Dg7`;tccU)X^gjtjF3-NK{_P%lk zHdZ3Y(L_SJRBF@UQ#XMP)q|_ACAMoDw(Cs%spWl|eaX;x*~1ShzUwE8wct<68|fz- zleXn|toO+LWNKd^)_rjW=T_v_)hDa<;e{r#9`xklhm(g6kNd19fmLDCV!!HtWOSh5 zThvODO3m;$YgE?cf}N&noU~pL2eiVNwLeCb0u` 
zI$j%ckzOWpe;n<*he%fEX)5)w&X%?Z$V4L`69*z7aNcA^eqHW_R9%^)+^r1r{gG-H zM&PrO4`vpX4#VmLSNsIxagAACDzm;ezWs8C*V6keN13B!sRXIB?TbPo18;~H=_Byw&!{4o-DguINuR zE}>48V`)r&UNb8ex_WbfZ|I#_Yus8svF$~_FYM}de4g+PkPJv4b3^-a<#%&vtx~TZ zr!=?DPLz6KB*gi&Lg(I-vSO!K1DvI@Fn6=A%X`x0)<+oa(5XM3fNQti2Ih)pEhBLz+T@mdGg9gdwS&Sh&xUB?J;(Pvf9~C$1mNc5ub^owGcz^BKLkQ z+?m}{miQA0X0N=7AlIxPf*YXyII|HlQqc?JmcpF*fULOyY%5ohX<6ozI7>qK#IXgi zmvDV31qzQms15EOccoWVuW>t*EqWp8H#O#=hgB=#44qj^V!A3c0_jKJ-@GhIE&Y8X z4PuoWTP(WUe9_*iAP-w16^1HKiBJDInGma4D|^)v z3gagZ0jwV!`HNiv_*v!s7&6XVXvu&V4k4K6K_<~AJn_ui}d0p`7O-1Fnla0X01 zCpZ|*i7E(Po5ZVhI~lpSA4;EB9Kad}ZoVY2uj{M08x3iY9yhP=mUg>A=0^*;D-dh@ zl8;%X?ltx3VSS?;WKhsh{rRrDLc%8F^VwF4Rc2#>_N%BD%Ltft))p&`TNYa^m^Uxm zlaUkLZJ=u*w|Fm}P#45qK znX~-h;~0-j9{X?eZv+c|KG3YnL_ID}3A?sJwhp|OQ!o1|)Vb2ggX`3$#=A$qK}rRH); zC_JihJUvcglr-wC2}}Lj%ZBl5u;pg!gTl_j@>p8VGgI9Bi8%6@J7<0tsBOQw(ZS2(#s6aabyYO`uA5-h(PPGSBbp@7xYwuRg2b zj5PC-oT)o5a4V?wwK^<%++9szse|OzBFDdg@Yy_%wGAj%dFmyaB|**@0!VEeU-1I}nY4u-11n#+stV3o*U* zMBy56qU|Tx*QT($9>VC<$j6O%n)5y`?OO``WJ+q*E!?81lbkCbSSFr-69-320U_L7 z(}Cqomd9r`JDev07SysOp%b|A$ffCY^lo4@?y8AsHDqvQZ?`K+>2DN)c+RUXZ1eAy zd(&0+E9!H>isxiMo)ng1#v*6Rwd=L?p$-Mc8ypm!Yj+sWIAiOe9YC$?bhW-xHcTQt zhkRwPb<#*>bfoXHIe0p%miEK8KN3n0d1sRM3OUe53N zzzDTWL0oe8OL~q9k9gjJ@;M~n4-GYp2oHWeQ;cc|qJw@|e5Efqrj3Zio;lk^gLFDk zvyiB6OZ)+Gnki2FLbM>=Jl@3(E5?gJ@2+oQp)R<9&8%?BDsY>}+8|hMPKRKo*2rGj zZoOaY87Po@ogQ&klA=y{qrf)p!Dlcl*%J6VWyL@{{LA7qY#OKeR;z6%oJXiCo>GJW z&MBTt(5P>r#7aCw$>o4dl7{!0?cpMpv#%IyKixmHSg3?H8CK3}-6HK+B z{sM^9>+$v^)M8-9cx?rKpsNIsEch57m}Qk^Z-5U5@(|vk-!p?czx1}2w1t0by6w@j~?Fs^@{!{EvzGA@C8rV3_HM~R960w#dw0Yxi)zZ_! 
z3F`j_qx}n_Y2HPLCrs6|1Odp8Q z{%?|vKpG|Tz4+MO=@2>BywliGH+6K-BRTPES+|#AUCXxde5DY~@@pwzl;bxK2SBQE zaPYjr|1TL~J~2f>YslFKa*Fn{2ArD_JJrrtQ;hTS{QEeGnnq#G=AZNd5TY4>cs36p zEd=|yprB=7FZq&8S z^^7(hlg#(-82++M$n5|Cd`Rsoc98YGG%fQJS{Z-zzl=Nj zDGPzZYTi9%(zPi7remX!+I3Uf0Ba# z#PQy)J+|)|!XO%ZKjsUbCBNPw4Q$c-$9Vf^TAovYWtd|Z9|Ncx0<(1c&dSv&{@RO< z@MSKJG61Z}ftm}jPVWKq!g43E?H-2+`!tE$lK(OQS?udn@U@(R4(c}#<~rZ=?3Vs; z@zHoXFLXYx(grjGe@yVW9UHz)h4A|BQBOxzh5f`3x-yss!r8#*7p_`lrc=dNZ)8fW z{!5)H5DQ!pgXf2x0IEeII){CbhdLeLL-Nn7E@rKe1v2#j;G=Rmn7HWxO%BK7xC1|$ zOpZXpLvCT1!>~1ItLNw4NnZ#sFx)`jvW;MBp4GAi$^kJIg|R1E3cli8;>aeUZwc2 z!=_s?R*2h=K4HSt-+sB;|Fsp*|AKtybr3VLOmsB+I0jp2J3PH^)$l6yK$+n(9&LL&hdI<s7Lw9kN zk;;I=nMZ!+bowziE=j^Zokx&b8nx^U&>CgyT`xgDM-aPoV8f77HvfgF90e}f7Nl!#;Vd_i`*MOu2&$~dGI;pCy)GYUfx@@#zUcYTN z%}T2{Bq}Hy2C=C70A7s1EX&))mp9Xxk-fj5KE*BvaKnZQ0IrUI!=W8ZDu|&1%$7-f z__#MQ>9U#O+Nr7fjIrZY7nvtW6I9hE`A%Rm_io`Kl_5qww`u18D1pX2zF0 zAZ$qLb<|=AaE<;~;s8OW9f|*$mOvZY97UikSCp0$#$^5oUi1^$z5;)x_%ql?st8uK zgQSlAtRp|@%#=@uR*aQgDqDU@k%8JusL@Q?GpO#TnCj+UKCXeTLXUMX!ShKuJu$;d z|Djp{wJaj|1o*TN?%{!N{B9ojVwnArjNYHu2-Q`?M?S(GnMS`r`WApC^G>2F`#S~Q z1+!7KCRRsqBZZMhB}Kd5H-kOgab(tT#1Uj~Y*eF;>5*wp{mJ|3Bu8^Ldowng&E!S@ zwM@@h-OIYo?IGaXFm2;6oMv5Ob1u_{Av}lRQ=`dJE&HvUkWTBmc|1S|$ro(_$|?d| zL4cJE=$YiVhGKJ0V{*r%GAtdyo;JZ|rl(E)24E8-fVuX2uLAIJ>;XtA0cKUv$W@?_ zrFU@(0dE3&e!p=jBSAxt_n_gh$zcyj8VYPC6d!sux!-IT8UvJHgo=V7CxCwCif(qT zG&48sy)@5oRtKgyi~Ugx^3ZwgkkBoN*9V~Ml2TLg*DvpZCut#)KOqBL(Dn{|eVv0D zpiGa;M|{S9kax2k;EM=zjlgCJPHB4Gx`dU zhdwviG_EpNaQ%-ldH#eUKp#Mi0pN|k@Dc0O1k_{SVmAPCEP}zdzJcgmp~SXeR=`5x zvzre__QcU%4o>~{M7}pH+&;!g;*;}(gMWgEf=MCr=K{~&-yW6U;v*o=s{ln5N<=3B z$inOCo=d``Jpe-k^twX(Blc97?(o6u?zqca^pE`jjGl`v#pykoB$>xn&Pj#qx2m`n zStN~mU6&cts@CTW?SO^GRw1`W=%8S1j&;E_P_;B*r3hI!56gtUfj4*-eIH=QfL6L7g`D&< z7z0+5!GBzuHbZrA9xyaj2fueafdSMGsBa=BvL$n0i`4{d5jCU_P{R{E*h;$x{5;Xz z0R`?$JQ9EJ1SAy3vuyg^BEvzzsnRfeOXozTBc;?_D2MvA4j|%k#V2w`dsuq22&&#b3WYc-7`ptTcrO3UV03*yjFcwfCgl_wB!px7RY|y zc`5-0REW<9m~GXNezv}|)}r*VL{VrnQ__rScCxQ9GO5#WGtfLysu%wh+`pRXUxjTU 
zF|3;O4q)0`V`aA4-gReF-!hug{T%(kAhACcukW2Fs>00q7!K}Z$KYLy5o%0lietC> zjma_5S~qPJF%8%&W1#m~1S3X2EnOB*z1XY)M#!~jS`$%D3!0y#M+7k1&WqC~`rr3P zI4br5CnB_u1>!aQx_+F^yc6A2d6&#-6%#plm$iNW1@=GY)jz5y@gW;mRD}y+s9>p{ zGpT!G=AFBMd4&mF0`z@g!R!xn8}H;Db2MCz38e8EJAsO8?%HJzge>T$4fwe8b!4C8 z>5rebhDdTVt_s^l+HXt$HmJWKvUfjOCN00H4GZ}vX;!$IXV7)0MYY+J+;4UULDXuT2KLkHdJ<$O8S(^q2EiSsDjZxm+H|9Kz04;fbfdw(U zQug(?u3t}N=tDTI*V3M6Tk}{?(CCGty+66b*Y{HT)Gb2esbIQT1Teg`%)q+B0~)Ho zNo&y?ymxJDqs{EykF@B#Z3HkD(ZY!-l3Kp^db++IBY1x+9p_l)oj33);u!r~&i(`GN2);R< zUgZPzp@6#i&DuSd6bTlu%C5{6#GV7F%_kCI)2v1BN3bw}z zFB5YnPS4K)=ZZOt*XQoa8H1GZ>av$7F z$tvTOvG^qzw&OOi>&R}u8fJ~R&teq=&YH{IZL% z+w$K3ZF+2d2kzz~6z+{YLrIm~++hCOQ;N4F@ShE6*8ayDJq-|Rh5(y#=)s0zN6+`!)j s&kyeZn(}|6@_z#Mw^{N3@mzU8pg7o2$}1ueh68@m67uiL#S8-eAD=wAE&u=k literal 0 HcmV?d00001 diff --git a/Documentation/README_images/ChatGPT_Response.png b/Documentation/README_images/ChatGPT_Response.png new file mode 100644 index 0000000000000000000000000000000000000000..802f2dcb976491ebbafa5e27a81254d2d47fdad4 GIT binary patch literal 87022 zcmbrG1z1$w+V?35X;45ZX^=7Uk?!v9j)8B@ zbIy6+bA9i5p6`0zk86e*_TFpnwbx$jUibaKe~VycMQPl-5APx&A>qo(NU9(19gDpjMZW>5PIHKkf63Vg?5|5M}Y~Pt%nIIv_1b^1V&{FFnNz#dmcxHx-mf=5( z7WwemOHB6TSu20~XUtsaz8@)d)$Mj@QJy%xl5BeUlg@>tZI%@Kad)*=5v#50voX&V z_ajfowT9coi@pRN#~!2`76c&GSa?9fy^5uB$e*33hV^3 zTzSvra8UiBk@rly?aS_E_Z(0qD0r}8FS;Pbs0&coGa~(nd!z(eekYDQ0{cK2AmT%S z#3$4$>gpQpN|62B3fEeaK$D}9yv2+)dY0)^4OF)|x;NaSH3(86E_iOuh#p$>V?q?7S9 z@3`(?Yk$&O)!f{wr)ToX>G%59X3BcZHIkNhPXB?0;4A4}B03t4fN>fzCJuQ+o^s}p z`+~-Fv8^wc_=5yuAvmXTTSfs77E`nWJk9lP`aOyjv{76#Q1)0mKjKHOVOXv?7$3A$ z8N{87ziW=b6B85kFyjz`JxlS}9p|a%q)|*gdagcC%}O1jCRsIb*L^xBnVfA!fySi^ z5~>#3MvOfl7_+tXAkB9j>Y4~udBVi|nhL+3!XK3c6Uph34 z2xqz3J=iM7Zd`{Jiu0gJtk%(Br5WmI`1$nK9rgkKo0EpPz<^$YYQsVzw)NRtBrbn4 z$7MZqOG*~NY(IQSU>L_}b zhbm0l)>5-oZ1o=3HRN+!G^#V&?LL2L$)NnI@TlOa5G?ZS{(iRX+<-TqHm@#Q=dZ^p zJPV{gN~YLVc&1$uvL?$Hu;rO}s}P*f{^)dy9&+XXNMS7gX<<+JQ8_g-3lG zyO+p^3KNP!7MN=O`~iD~f}@ULUPl~P)ZDERGalJefSPF@z1Eu=6D{@8-B!PQoP<)z 
zch0KFla_<5#aQJ^z5n%g12K^beLtaNv4#O}wR_36X( zjUad|E^$m?)QVCpkw}2P=!ZtTAj1~rn?aNCfgth`PvnKngA$?~$tQGWoq8E}1bq@E zsv~qdQ8P$g!#u>f)bEUlLG(L?OoM^GWS!31W9}V(s4Hm%7UOKhI z2%5(dAWin|Ymb}%e2!O3XxmXZkG~JQCii*C`DmO_1j|aCng@pYh_q7=J(FpebN_LJ&?|^CA2LDm{^GxUuvh3?coe~p-&X%Df z3KHesnauL=9_vSbhNLKk1nHcVVKqgSUDa7;R@SeN-I+U>v18UAYclVth~{2DyGZ|- z^)dghoJs9RNt2vzm(NR&E7WI|bJXR2Q1g9${Y>{|er8g}Y&J|eGe_uq$t%|%j_SGU zS@V4~TT!Dz-89E>*fdu&r{epZGIf87dFa50!OkJA zv|_cT#C9HqsI1I)DU3>vKC2|X(O<&N$a+KD)9lTU%yrDO&7QuNWM^eB*C^LW7`4e) z%!gSkSd;EZ?wC)$nv^cBapt6yqbye}_|5&zQ)tXtG>ns?iK2&_g%7^=bHI zXk{yQ^U1p25IjSA412g|Kqqx9OweuQL8~GeqEE%=;ZA>~b zK8@3RZkv7fl`P&rUPk`OSEnx?Ps+FzIN6_AbJm$kaT&#%#OFV;OPec{TQjO|slUZC$~UTM{ox7DSBkGE zPvWdpbTp^frXEj)xP1Ti^;@G&o$eTOGjq5~vkG6j&3NuO&kpv^);6`kx`4m%Jz-wK zS_*4XA?M?d1Ex}5j!w?MoToRTrlG6x{akHEuRCg8N}LykUJ2?tX9-~l_SxIGz*gR_ zHNT&jcNqG-`f#DTzpVEp_iN49@%-8Rv6<*jtbcqncUae+^puXYot+<4ft; zUrl}1MEe?Qx9c2gUEH3MTk^$_DCRwH-$%;hm)p=dmk2IuKZ3}SJ|t` zazAu_iN-&oDB?a1|9GGkM@{Su^(K`NYw1&Y1~!g7?rpq>!IK9n%bFs=Lc{DZWr zWE1!9q>ZcOrtQAad7%_6dQT!W_*2eDS0Z6Xmv`F<0U5a5jLmY%;v2K7u4?givwYac-e+kvbnC&MP`&Z>36=(N zmVyAkir3rSR=Vh6MtjB`CAtiT%pLiw=HSI)JM_Vcjgbxb8M^z*_+|}z1nWPuGx6FDAQw8_umFKsx}fys(P>WLX_3w z>8z5icwHx#IU>J(mmGfD=Ggs(1-{66Q0_tRB@T1+A-Rs+i_nsTMT#&Vv*bO#=H#>A zyfJ?`H1&?457dAcZYZuty_2`{I>{SlvvD`ky8MMw~6ghSsiOm~-2oc-no9$ZY%%W{( z?ckBgCzBNu{aCxW#9QZIh4&lqIC`}vtA1He1g^BMG_Q{jpU3xS4*1C1O`3o}hKIiQeU>2I`nbzA}R6{;%e;)gBo%`?y?hjJU2K_!AFIT_0 zz{{{e@}4KKN&0aoN5}ou_J=)R4%u7T19XzC6sFx)`P*&Frj9E?~l%a;k4!->Y%) z*~#)Cay7S_B=CaDsh=JB7J3vrir%I^De2y zEoDpddj$>D+fG|nwd=6ri>Qvw(Sl-KX~>nHs6E4c_eSn^E@{CFy{+mbJNRnY0e`ja zp^a$GXZNjspzbihk@#}&9ol(!_=Z$`!g6#44L>5Ke4g_uw z;*#zteOKyEekhjjxq7^9XRExIxwe#aR!dNKKCO9}-JU!G$=9bk@7XK-bhCVuq`9c+ zR~T12;QkYy-!oP2v+i@XcR1?_)4zPa*YYVmG~85_!PD*1b(3^3?qlH-eVv8?&l=%) zegWpi4}vB7$T%9jbg15AQ^+YvHgPh11huxZaTJ0+rTyawA@Cis8AAK$k3*a+pVDe6C_j?0 zbufAKgoBfVlUDTZqeqWK9E{%ysY*)yvpD$cDXp24lbsL*^8WpMj`utqwhpEcEruXBT;EqL%F)C@!qysG(@FHdpVvPN|NG8=78HRXZvEey 
z;ven&$5znLqIX3g|Mk#B?-IKfyaUgX!dy~G9ejdb_Qy{U_{|L7h)?jYwz}-II6^{t zh9oQbTpfzM{S9OKe%r|z#(e_TH0Cs^?K{PiycWD#MXiB%-)3kTg;BDSpb5JSlc{gl zrE`58*w>_cT>e3&*bu@C-h~ZZYxRz26Ls#tr)$ot@UK^oT$5&MgoPbfg-zd?zW<$M zK9ZxBolJ>}6@m1(U%Yfa8ob69QkTEJpvidu%~vCF3JvjnpTAiOTqR`!t3M)VBG((` ze^KTBh+v^-DD;2#)xV8q);YZW>KXFizE5vZQj+o!GST0CnYo~V#yx$Ug7mkm`wWy! z!~0(}53extLvB9z`^rdvySk4+NfnI$Me}_5hJ>5&rS2~C-|v7C&t`BEH1s{4D{d~g zYM7Fs%?k0HKFhKxt7zrZ4sv6ea-{FCb}DK%hO);*JQ-#F=7U$J@Tq>lJp)||v+=n@ zV#k#1QR2RuVEU1jP_5ugqb=-+e5~fZ?b$&GbYUBv&CiX5(i96%W}MW{rXBmL zq7{X5^d_EknHIN^IKnSa9M!TFoyYXOJfn}wM7^)wPBvG<|6eb z<>f^#FKoBi3(I0-&%e#75aoaE*=uuH$_r1V+{#W`;%A);AbH){8_(6^rro>vv*E{s zmyD8sd1nz4DaM_{sS+4Lq{BzQg$utr{uZ2hKj*n7dbQ1C`|EH-Q-6hC?c#X#tL-k{ z(_`giz1tf&t8~GTcIVFlDe{Zy-@7OZ0xw)$fQy25>n`z`oNVL^UR z-A|oR(Q)jsA+<%}eqHEfC^KHS@PskmMXZ>M6hAx%eb`0j)ll-ya2~BM%lMr8UR{*~ zTjjhjLrba;Dn_-E7@YU)3oY%l!cT&$WM?}Sb8db`w1=2O)5g&X(<#sWra_zSxXik) z?W>Pmk=#USM;^*hP5=Iah9|g8a8@%1|4m+3ZZXmgzdq`leYW?-vcTS``ac}QVNyk>Gj)i!4i4RqrzDONwdGXe9;njwcC_O-D1LHetW?Bwa(!ur=Eq>cM8hw44u-=BjLvp!Tr4TItDoDBUez z6wT1nv+1SPcq2Spt&`I!+qKyBM91IutcA5D?uT7WGdbBwU&tFiV4ftd`&_?J41Ik; z%C1Cs(#9J?hqUB3qM_5vUffDlPpOo^-KP6~#(5VMu!!22yE)~rSdNmrTw0fCxV>>7 z%Tge}y$a)>wF{zjiJvpQ`r?U`V7K?iY2bxtT?^;^Kn2&-&Kpk1-o6Lj=Ff65earl^ zT1^SA#Pfw9YLBb7IP(QZi(hvPTl z_lhB-%j2cyL(^35+Y8|#6)&!zxna^`hxUVR>ExBiaBX=HK3pOM26Rto-wk}vtA?jf zM9tk*qHSp1t8NeC>fOKAg>k16a&x{EZc|XT`dIh0Ujn}ho1-%2xMuyE+u^{l=D>={ zDT7+6peKEalSkTcFTE8H)#Ze+G8F(Brz}c!JWa( zkQVwB=;sI#a!+jZedggedSU+I&a|jf@|^F)>#QqgT%zQ4E5L(`^kuKvE}5kplBV{g z53(pJJB*VLiIN^%D=)?}aP3}b35u)DPQjOqz|uO78|+vV5NXJ(m~|U>=AZL$?E3i- zM(uTdP=8A)0_B9cDY=~-Ljy%Z22IN${k?eL9_M-#n}?F1XDq^z=PK%+3?*IH6W=fM zM)=uO?n30rP$}L|+wYc*=^dHujOKs6#WjlMm0*NU`!i*Mdm6KDfS=;c8XNRex?#s0 zFcSGkPChVATo@AF%KQ29V!m;?D;jPO4fG`xY4*A~qMp+<5HrRJ=i6(z6~(Lg&_8{d z6Mz=}R>!J1fEpDTgB}|&FwWXD_d}n!9h+Hm;}R>S-}Z9Ba?voYvCgE91NQ9#i@G1@ z3Xq+#*Pk!mRFn$erq6qwET`ub`iAr@%l@V3^Cj|G?l+got{Qx|wQuh!LhYKp3lA3R zFy&mD;?0}=v0&nKq0tpyHy3L)_cgSdz|eBM!M?3M7YNUMIp2mF{U$!K`H&`_LJk|6 
zQx7UzbFr2@r|?Z^1mbycxmB3`;-q7q25mN0nMA(ZFp9l50*jb0KCi{NR({p^0k`1= z>H!InB6FfcHw#`8F2Q^-5@dy8USQBKZ{Muwbkb6oEmPI}fe@6Bkdc`cr?+VMZmgak zMjENUhe{O0I!crDp=HDR#I4#PRLtBkX;2scUj@f|WYr@h%3h`Udhq6cLX>ER}+ z5K~f7ATIgp7N*>U2N;5x84EKstOkN&>!{`Y({>UXrRz(shfGXB_thgKpaR`PK5O*y zMa^AQ#&2_;XEXcC(VhzK!7H|(F%Gx}!n!KKob3)*7&5#^YXy0oRH>MIYk2wP)GQ2N zn}BWSMJ~PprjZ;Q*4?L9V9;!Vq02S5XH&hARgYh{SAPY&|L9!*o>vT^=VOtqlw0pJ z;enp~irMPrD!g_KhrXu4$n_9vkNxNTZ(Bb+>&~jN(r@7IvfO(yVyi{><%T}gc4BWR zJN&Hk2(-34gq~BhUG?Rdp1ZXL_vnrERJ|%skDwO5=jdo*nb_V~VR8#~Gnh)321AXC zhsiFj{QK#K%BO6$!n+#ma7xc+-Qz-|mp#mQVat)y)xpF}-8q9KJxE5bi@bhzY`J#| ztz{w4E2UeYFLA`^xrPO6{pybghG%5m<82PSP`3E`wjCt6wbLY$d>;w~r1^HNIZ9C5vj5Sp{|n;`u5$l&)bxVJ#zC z(E*#S$eZF&2Ri1X(hV%Tme42J-bgxWw?36jzuPGvF0@)qdQ`?7B4a%1IAKVl9O3!p&u?JE*zz=k=d*xj7`l2P6 zQ~sAYX*0p78gyE*u>N8HlyaKwB5E`3GECs0<_$v%cFQKS(`#A=sdTbq=JG$Tf zV@gcJTcq?8&dxy>vMa-DGW3m#W~0yTJR0!gP1Aux|CL{pq!Nr=%6`QtqKY!PA$h!q zpLF)J7c()a;PVp@9O!6-C`AwMTMFxuTV=1(CH@{L9V5Ibv?SaV@8BJJVF09d-#78& zB#B^s?+?Gy13iuDB4tCb`&1R%mOUZ7Mq6Wpw_2Cc@mIEya9NFSnInanp9m_}Z<-$C zFGc@(lrS9U&2Q}co|Y`%_zqk6XqBVsZGiizX214WcLr*x1!wWM`}3dgM9c4YAHdJ% zu7cl_X*E5%14%S2b=b_wj^*f}zY|;a><#o$Q|n!0{{c*#f+%9WN&>mm`G8WgsNwDi ztXbdK*A^^$DpEM#BQR=FnsN!twHWVEVG%61<=r0ih>=k?8euV_8A8!o@{v)|c@M-H zAk`}G$l__NW5}sM%WwE#P%*QgIZRnsTr9%)w)(@ z{=+etzq`;#jRWGCpdWlq8KI%#-UpgyY$iGS^6DY>V}ay0>q9N{xZSJ^KXMB!7H6LS zZX!S*dyao|y(ZSP@S~jM!X)h_w}0gPuLPdDV)EEg-{Oa{GqUJ%@Wn|QiNJIE`2=k1 z(kPnf%(~XYicUXSU9v;NG=ik^M`Db!smZc^nYIEV}0Eeh9m=whf_U=+biQ3}%JhPL}@-eV(V($f()LW>b& zZCytl9NdptDIu6k>-(v5RT4wWX5<@689{paWL-9sL4y6{j%(g!3cJM93i(lTK*#I<&YIZMDb8Fuf)F(IHkDr$(lp%Gym z12f+J5wVU_rjrQic=gHVbE;Y2etshy^p?`9wWKLS*Hs;xsy6+*d%BfO%@UfSag1AV zbx{f#@;O6oQ#KZg(YjZ+FTDIi-d(?YVB4AjDd{9R@h-oE+EA3|Fk8*;=AmQkWx~$7 zsM8&C&UNxt1ELMbKP;LK&pVq-yCRl@r{%>eJKY*Y2H5|FQ?RMjNY34~gv?m|6D3q_ z-gT{m+UwV`Z?ucGfNa^-ZcSV)t!xPEEGefU))KLNqKx;!z@QvtCr&QP2x|Y)o3v#i z%32`M!%*NJI@tz=T_duIUbKZtr@t(v2MzWo&hn@(VX_B?RRZ#8O_-)*y6;bD10e6B z_2ia&G}v&RN3~4@UNG0Kw>K?23!p1t6NwWoAlCfXnYQBNWxoE2v6 
zL#`JjUuh{Bx;I7z#K+_M$s5mWLBF*|UVfe{34j?k7lSTWRwpybb@DU;6UpM~H+n^zei_*d!7$bbyq9dh%FgJz z(izGjom4t{EbnJYX5&5yZGPuQ72zuKvB#`+HMyhUxiG2a+1ai6X-p!SeMfV-StH$6 zwZf2^A}eybkK6dQZQY3nWx4g#lOZJbS|-LTsV?bF${r(MYqHj8W3zcixWpt}tT!U1 z^F}aBET=+*rmxiV!BwZA$1iU#Nc%CTxAYv|k`Yv_xw8FX-r3MAP2I8>{Prr&v^b=~ z{ByE-5V@Qtr7u@n@m)eL{Vbd9wZ7*roU`XadZ8^}>O1k4u3PZMrpevGluymqD52rN z#xiDBq1y3ft&r@(KFp<-v^y`Skh9)pPIY$ta^zHu_U8(W-m~;wEn#=^w$aa|u}9G_ zlEyDTmuOCo{`C4zkI@(T=k}BrAKA9^80Z+Td(7Ijjp^v2%wUqKVbmWs)%9EQI&Yz# z9+0N~CK!BQk5%f0Ipkdnw8RTIxz$gsC(koqa;hYe7NOm#t?OJCU6f0L=HS{&X+N*O z?Bm{!EOe%JTc^2Q;Rp@jEG`G0m1{b!2MwMJnB1fLXg#>T^6m>QexRGq`f9T&da+Dg zF=bQpIhY<2JERv8M99_U+m)2TMYz~`hyr5mEaBsRiI$$1SFVBb%+yYAL6px>$iQ9f zOF9X)PdffVmsfVe+T1E{Y=Cm&Ppr5v`@udagS4islUnhZKJJ=8f=|9OK#oD2!y3-m zlaf$bQBK-L7uX<0B}=U%Pw-Zj3=O)|C`k|P85Gk<(tuZ#OIu#YpQ4aF^o+=K;!Let z`Xszq5>@QGUhfeWUVpw4r+;eS>{Yf5fn$e~4cxx5Z@UkBAir(Y>|C`HdldMCGml7u zk?d_M#lqp&RF5vtT+z;xhe<^I(-z0BE*AO*Ml6tBj4$2{h9)hT;ebVft;4UKhT%)3r?<6AZp%nH{_ZCh9IiIUS) zQmr5ND9RL zU@dAN$=KCDlImeg+?W6UXg=Toh)ru@d4sh$T^VMFnGLnS3PvMj8DH(6Jc(_Va5Wyp zZ5RJ)TVK_}trYJh5kdM>)`&XMZ-6+qL&hD815!N))W45%&^N9tpJBlj`<4zFVUMJ= z%>z{%+Rtx^8MOm-N`^yP`zBxNwhtF(inCi;O!ZCw8Y2C=zUGpd}PVX&fq+Ih%(p)Q?Et&6Z)!<689Wt@Zzg^UM! 
zv@mY1ljj9v`!SUZ8tHvM)vi#;tzC;HH$D*3i6i#;Xp{;|>5X15+C$6inf zDS7Cp0b7aHaf7;KT57V#P0xsWhE6wzI!tlt%-C#0GcCBvWw6uaQr9w!MFyyM&J}hG z4R0CDv%)I!H{@QI1gK2l32)ECfQ(S%y7p}hQphg#i>RX{zvH~T^QK!>qqQvdpZ(0X zirRdyL8gWZE^7sUIl*eZH6f-R<+t`1D)pBj;6ye@+3yD;8GSWwl+$P13gon3l-y%s zevmzAn{ZEfBO{uEQEafedaGc|j_NsM`rsl%c$51;JGt$faw|fSXaicpo`^^O#3u0+ z-vk}su{=0{9v=@WxufO-F{U5#R1?vD389_94t8cya%W0InaG+O`2trh9 z$I5?Qc9_(otQ3#)LacPsBKN|izghPAzjZwsO0&rtGVTD+;AJMy4WKZ2h++CEvfZlD zdmR@8896)IEFf|c2*^88aLn4OCOL01?xrK`P2DNI{7Zuc7W)?KBvHBPiA+!J?vNGQ zg%L^O`2m8nyjL39ZQndkCvJ$Q`BTL(kACrl^Y^!jhLQlm>%sODtjWv4(n7{yR=j)p zeWR!3AM-Z)81O_Cx>b{9j8;Bto(9r#nssv<3jy)G*6>by)cVkoxPWNSVJE$uQDA$1 zQ~}!(`KP!%xZiQyqExn=r0q-LBiwQ0r3y$5<((fpuFjP7wov z$^m1{lVDm%wFDi06~}b*P~Tz*WH$-Ai1mKbJUbvek?g`h@RbQ6>>|>%6U|l)1lP&@ z+`fdGdf&5Pe8fs!*O{VGRFzQ&r49Lq9g6DaTl~PIYBl4AWu8WblDCK{QRO7}%Oxy- zJw2)Uoc-!He`ny~+@zj=fE4vxoP>|d_~Foh*)8pbQ1(Vwx9Seqrz_sfdI)(xFYoC+Hx=MOi zq=VeVth+JLnZ8~_Nu+GdFQa1L9mb!)9*@ZhH)4<&wTB+aF$B4>95m-k-R7T>ZUM&) z#_Z*An-;=xsAtfG6zI*Bx_*#9tTrz7;&uhNfKN_mTdPga(cWtJ{ofDWy5{@fyhQ#c?6cN{>QBmOnv84 zdRk0%NXA_;E*>MVG@V<5@XxZBLoDTgnAXh)EfAx`rZtYOa(2re(MFhd-s7-61Zka` zqdu{@>=3U*KDKTqDY3)1Bx?&QkAA6%2xPi44b-p47Otygf!bkD#Gr_zHqv!l7S^pk zJ&sXC69!(GYPPG~^c~q)M*6%H*6aBd6rl_(?t=%48Wsh)_d3#&i-})NUMh4a@N<0$ zg)|IJh9pK?d1i=IT)oaAE>bx8N*=xcxC5tndFMr8KGYV>3;LV=T zr{eb9!v%W&a~m$>O@zFCn1pFoxUloQ!Rj;8*U>M&jQ9n)s4?>_FP~EuRq)~nG&T1t=o+gFfSpC$(-q9S?nSiERR@k*sh$jTIF z)d5gGYhdt86TQ2A&n@$c1NBSLh|LQdZroTnVo}atDB9xCVSpUsS&4~~-5VUcq6VOI z3X>VUx*PvX+?6Z%2E^>fQoTIJ%>hv)u{kZvcYWW)9m^2a$F~{pfRvtx2b76S&>`dO z@#k{3#M2iQ@-Z|6YD!N>40E{AJgTtq(J?zQxU=sp#J?eWxfr-%gq0#q6sin8oiMFK_NVBZ6b~{Hq~OZx{Aej?6FDa|K<8k#rMcLnv6((N=uo`V z)=ka5(nZ%Z_<<%Q2DSQed%y@8x9p9KX8L~zn-RL{_*{YVFAk;k0}q?Ak!pFI5jr<& z;9&|i&K|@Nt`A)ZutB)-aIwq2wPY`aO#T29P6JkX3fWKvlIh0{AbdPTXOlyW;77t#1S_!-0y|k@s z@IT3XV5>DGwq4Y64d)d69cyx5#f7cm1B~X0%(v$BSF=ZPuWsCqHAfwEx-IkdJKt`2 zNxJmFI~{IuS0HI#A^CG;5{x}Z+?B7dFmkYM{-VP@MEF!h_C;*uu`!6;GyKp&>n>7s z-$q>$pn>?&++2_{eMK 
z`aktEM0KWYnG4Isf(EL{l9uX2+(l4g^V-Nzq;`dOE7Qd^N@<9>o5*mNNYO&$2<-YK zWR65wqPGMfJq$*n%rqbxEl#^M?dtoH{Gx}>q0P$cYHBVv)>%@wap=C&B5<3Jy+(PB zcC(|T%@PCM@SfJnB--T;+{RypnA937=!X{e<8iqp8+{VIKFn7w0C{yGvH*4Rj}PZY z(;Ggr)RfoCOsO_}u-);p3tNF1*WHp#B)5**hSCrVm?y^(j!lZqKScivH>{mj83e&G zW`}9Sa^DwN7xTQa2%$JpSggT8vIjkohG_)M;LzE1$dieBJJr=g7YYfKkH=}tFe7Wm z3g55XZs}yL3T-d$pxQ#jBypihoDZ<+7!Q2u$GTf_86O#ZxD$>_wEtN&G_6Qok~`6T zx5_5-vxY;s=%txCR-JL;{(FH7rj7+&?2Q)XN30(_w5bx7tNrjr$E^W2l6lx|AA=nY z;WWVZ%Y?OT74goalUvP)Ei9iG8McV%T(IjTEVpZ}6xJSoq$udpU==e4S=S1)$ZSr4a+Q)y|ZZTK}s_~&%t~uo4jfSiEo*XTt)~1P>HD{?< zqzp(;1qj_&Rn*dbU=x+GOni@u`#Dq3V_hI*%&z0YBgmpiy7ZT+_-zvTEus!B%lEer zdK|J%Jnu5vEzBr532IPFawy!IkE~7PBfPekZb(W#oEELBDrEc_cg)99kwePrBMM$n z4=(ppu)YLJ8$U~SmRJWL04b96V@U)34bQ>As-p>E3nq8#iV|fC)ieX;4bz;^lfK|XK#twhq&z9;xp2XxQHPC(t~n7&*_R9=dIPJWiP8sz6sc; z`K<|gV5(R%J9y}X-*mr*S-UdmTEl5!^0!7+)NRvVA=^rCH%a^Y_Q%8tla%tdWEk~*jYCQ?XBUl9Ln3BA@=fAI{>Qq$hw$*kH)K(_mY)7r>=46A>+Ei zGc>=3H1VWAv<6to8?GIO1SxhHX&ZGrR=QN!p=U~45ic9@F$y~S{^$u1q64xYx_P-+ z>W8V%N@;}B9#?%o{n}30LwtWmDSPFs3l6%*{yKUVX8dpiaFPepE?1nUDs1=KKT&>l z6WuXLKklQ|N$;X_7_rdlg_oN4CE+DqK6bUs{hT=Ibl6Tl=@fOa@6!R4hmEJ2iy++| z0%1%a-hNNGH05@xsX-8X_Yaa104Plbb)Lnfq*kd>ZmiLdtZ&p8EAxtXO0$FblW?@X5E>f3zplueRHhNitB zDnjUeBv_d*Qx{C|lIV-3=@}Ci1n=m%7?@omP$fDhsFooD?(-o-(P}1=15#-`gLTeC zypyYfCziUSSlw>*)^!ll`8aY?EZI8r8T2+pGvn7IXFN55i5pA?Bi=O{y3qO6V^-+j(VE%uo>dMg7w2u*4f6%mqRAWwQsgM>GsDj$#6<2s|nEae;B#B|+v%4QmUP-hUS zhSwD~JDFWNu|@5)JXPqAkov^IOvou)(n00^U8MaBk;$?+R*@I;JqR(=pY?U0K&EFB zS?|8nZiP&wz1W_qshD!?dnz~4=+`ZFd!ckh|C|MRH{uyKtpUQ+uw%YRO?$gY*yH{7 z@B*lD_DWAr)Kf3G?Ef&^KJAP^DtasUj*Oe;8pI~Mpm!WBjkI$xWajcAlS%8;*?j|e zPZ^2d#(jBjxI~k8g2|Pj0|3k3 z0JBnT#Hm33YL=5Fx0Psb-Vfge zOt-bd04emR0z4wmmUaW;ynI21vPlvJm!2g(nlOxDRe%~HXz}k z7^1pfT%cQ1ohlI&yK(?R*BXT510HRcuG>bY4M+%>tYwPY%>b-Q%Y|dBBT|}b$7>-=gA#7)#| z*~zfO5=y|akn2pWlLaO3(P zkIouyXZDiaHgk5nn3CBj`zC$X@=kZA;uKpj^R1`KGv~gKmD&Lft@;J2?r`!YP$nsH zz7C4V-vFsspZlHoG;mBhcYgMXn#UcG{oNbGM%-8ysD7`zNHyS#W&m~b<)g`0>vH@3 
z@newItpZ-Fc`T2ycxs%ki}qe5mK6U9UMh_mAWrN48}G6KROEi(J6T&55&(a^oV2Og zp%CFjkhr@bCG_}yvF%*F4M5M?IK0NF zl(e1R(nBo5OG5#j_t^zCIJvt1yCbpx5NmikV&k8{pbKzAiiANxpoZ1-0jpgeq22<< zLz)<9Z>;ErQkdalHWV*;kjTm~rH6LE z2Ee1o7ceH?cge`$gz{1)WPGC80?#}zwh;J!_4UQq)1W5nB(KeAY`4+V`4zORirc0U z8tyP)6C0m`tl5_yvY@|7;HTUn_+CXb)SoIbGih14qvy84(C~)hG+cOBAbb&kpa(s% z3`_oUJqn2e1LRBr+w%u&fL6f7uR_=hD;6HX3!oNPfoOOXj89oSlXmtO=O)>Ms!+q) zmsf9b=ORMl52AdjJjQ`Ij>8$&F5vH{QfVeL0P5?_1{m=-tfC@ zYc)VhEO8QyCn_}cJx735kf9kURN<&Ue~X~M3sE@_HwV&#_!*~wR`JUG1I<0!79_kt zJDk63wSJ=ed?DxF0T@BS`@LD1b`hAy`gS8E@az#FVfIO(7rX~@|8%V3W~6CrQ0(Ra zKgg5Im?)#L?)1x7LC35u05T%9+G#+QF2}uyJ{oV{2fTH?J*S7SDFaz&Bb_QZWCnDt zd6#j+AXO)_Zm0GBrrTM&7F_r#!Z_)B9K2xp4RXe>4i(jf%%&f38-Hj>1Gti25@;4+ zj;eKlrZodb@!rkV+-)^5p;PW9&4adDsyhI}=3c$IP1ocGxeY5F+q$xF9NU-sKl}~2 zjD54&i`oYrNcEtplRSg}Vta#1{BT1%BG6-VdayYgg$`PY>-h7rh#4!J{4myZn~(l( z`>}s`){C29z>F3y!znpw5~o|6N)#GPAbLKJ3MzW#y7v;n%kukPtcl%fzw0~(RNvG$ zKpu+1Dr3g?7g}b3;-z8Sij;y+ikYm{pMW+I?w@#ZU=t18P)A`1vMeU#3_DznkA2EApL2cX8jJ`(^T zRf)oC*8-SmePKAFiJU~Hea;w`DIaFe06=k88Mobv)1v1t=NF;9+M`;+p%@mmX+8Iy z&-Xuf{Dt`OA9GEwnGwCou&Z>dpr*jM$d_S$J^%xqZ5*){V7H)V_b@4Xo^l-+D|Q}Z zA;1+=)-uROFf7~D9!Lc>lp?iIlS$Du_VlA*v(4|;o#{0=>2Sl+$+!_FuK@V|Fmryk z1C1Vnhuofk$QIo|Frz$i9@uy{^Je(#m0BID17|JC?s4YlfYQb0Q)nI z$jQdJ>%2=anMQDv^Xx3gnDlZz0kdvsdjL&e2s4@2#Tt_xH}J!yhz->-j(oAKJmIoE zUd++^Ovc#KCS0B=(azT|h~D3}0Y1vp%x#NURG}^S7Umx)-?g>CuogN85`y2lP^WNJ|v_9GWk7$Krh`!_wk})#?j$SICgSjJQ(<&g| zWm!$|f${fTX>Yc!r~mCn83@(mT(vgGi_`f7%}0KG=KtWEO$Bwi1S?EtEOPT#SAmQ} z%ArEaIL;g>AF=U)?vow4ZEf+HIs^CHgePRSK5AB&xCx(tDGuUoxZGszV`lBulE~tG z%0>fpOH#K!B~Crzd2iI~jeq6eU|y4@=DWb{I=({W;E0!)po6|Nn%vJ!4T{j=z|B)S zzV=2i8-C&>0I8dV zA6UC;Kpc;{<40LqRsqp`2Vg)5)63H1lAB-t_%nvy2jFMDanZXD_(Gw{Tr) zzvMxYOEUzjzl9iwUORF6u#H*wy{8qeL~@mPNVHPa`LqM&0gzX#0~i+1T58yNlR~oS zOq1n35#G9kcJj=vd|}Sr4X{?H!K(gZ)lo|ne?$BU7H66pV7srC*j56Ms6CG(t_BB`qH4WzG*RQh_e)pVq(uE(pHakZ68@2>i9V~V9C>@;w zamyJP7p1nXqP$7qzpN~EI#_mL&qrb=1^>WGjY|g$Pc%Nq3~Od~avD4ocFAjTu-mT= zSXqohc6I~JQv-nO!DMlGOXoJ2w46Af$zgw(1I``4N%uY5^Ly8q-oD 
z^&t0N1(r=s_rQVsR&73%#g`Tz6a06b=q9XEFjUY@yNyyX{^z2fZ1#g=j?*uu+aHXVbIe5(wcVI$}L4D$;;IF z`lp%~ckY-;o!rhKYFOoz z)z9AUDEG^a?62!u+WzmZ3Da+1e5~nW{95_k54{Q`swa19kACy@{g^!F8=C__vkkD< ztkIM^UW={4CLMz?vq7aAMMt(QI0;L*)VtQ;4cjvFpo6#P{$pcq4Kl88ccsfX^y`K~ z7A7z+A&C!7M3fb78BJI$resbsS(bnn@Kx|8dJ6-eY8%*Uz6=?hM4b!;SiE{tspE71mVStqa?c7o^B5 zRY1B5ND)L7iHdX(0qIESEffI-B7~?kY0|rbNKvGB5lCo40F~aQNkWwt2qlgE#P|Ew zxAxlWKll&yK(5O$Gv}Dk7!*^LzCX#1cTpIMG63kJTP$r>tK+O`g#DS%k@1PM7xJg)Ju8o zwD#&)54_Rd0~}%GCjk3#cY*w@2d(uK5WTE0BfvdkVoU$dwO+r-r_g!c=W{UA4NMbd zck=rsE!XMR`g+E6b}N4yzwh#+{^Qeu+q!gmIv-cT(LCM+{*yvkg}oL|ij={^d(Wid zg0gy)cIp)m-~}e6{kT5QIOE9O5_hS9;@WMIr+mR3hU5{~W6U0Flp8%K61$Tq7`z)K zyk)U+EzC)13BBo&v~^pal9`nM(OqabwIF)x%Tu4v>JR@Cd;25-9IA^M_agciMI*51 zon4fCA`J%zRkvd`(HNqu9K1T&tc-d&EVh<$!k-XWL7yHuq~M;;wPA8hwzOz?+f!)C zM`*;oiI}J+ev{B*EL7tWTl;gBKfqVqlR|&BQ%&>%jN%zW_4VbOkw13mx^qj7nqoRb z;Ew&eSzJKHmr9yvsb%(F{?Z6;wKKYI+C57%G39kLZD@UYc`HsVLi@j+m_TB7;n44SOz zOL+Z$_OAgQW4_(oYphV`DW>yd?KhL&6m{12pR1|A`4`ti8s09VXHNK->WT-WMKuws z8w4L^N8}c(5;xx;jnxO;j=q)GPcW*k0TbAj5AybRU#9H%oIEYqUOV$1OMkEEvpD75 z$m42T+jI9j@P%>qMjs;MK4g&ejY3_XN7=!z)lnI4xX|Gn>N}6e2^ApO4x%QmX(3#C z==NGqKc2klHYSZmc3=9C6#Z2UfO)YZhhJBGWZyQ!CjxCoT(Md97fO>Q9Yeo z?Bmd{_=rpDBl37g1F2Y@xZI0~p36e9AJ{$;{@kKxnmminz*RJ#gg4Rr$Xv@^_%xsz zXLUGMWdUE+OH;fi`@(z`DYR2p;}4y{JT=y=1(l_VYV@hV*I~B;77H+y<10M+!!31#;N zR@dEr6Q}l@zfJ)+3l6`(@?6!L4 zF;T_2x2wQzwfA(m!cILLZ&wKMKiOlOnVfZ=nf5%z!tAj9Neb}X-_uPE!8UV15i(CW zn$DdRnThbrwnFuLR;zczZA`IlW`D{zySZwG)hxd z4;8>*LJT5<3o8S&Ug!B0yFpzA0%a7;{-gJ9xFhCTpKV6F#Kx7Jrl@~O*?JzzJfK#- zc}R$a9xf6*y*+h!gf5)y-vf#?`DHOVN+#y4Av9%2BwMgf`$~Nt5IRs7s;*<>RLBDz zUE~ms*A^XiT~F{q3f#S>lyBlr{#Lon733PcNn(&`iW!3Z4#y?i5@dx;j?Uh@Sqn(= zg`kXYkPYfFUZ&KPK_$Jf(^7ZG<%nSwXS4if-?H~gNY7eYzc@+ufu^x{zhXZ5Snu@l zEXDFC``S`XfAd{g>)e!hWN9O2%%_Vy>j+DpA(D9(l53zb%CM}BO8obm9xfXd#OR8m z1%&EFm=*HuRjL^cLj;}=UPr@PNDz6yat7b_3F7@IzeJV)fw9grTvL?k(_y0~bv!n$ z{4^y3!FQCcNNHr(^d3U<5&M)?Tx?CFV5UQ+ADUsXa>T^H#1SHjh$6?`VS_9L*(0D% 
zG0fvN#Pfp?A;gI@*-*O`T>Sn>iRxj^TK~XI7~XY|xvc-U#Y}3)&kZ^6ys!OBJRO<) zadPmo_?-(<3ntRx&slnalGJupT<_2h7p`RUQ7$Xi9gQT<^H0h4NX%1#1P07`9S0R7 z;LIPjC-ZmPO!e}_YCbckRw%{CszF$LTdTO?ajBvjJhk7PT;QVHQd-&ZvA7_2ceKU~ zF)$p4e_0b`m$fGlT#G8kW0T7U;JN`7yNehpH)z^Mh*cxN57)uuRVMrzVzNrTCtIwJ z6o+~vb47CU*Eo-liOc-SC?$;{S!D?}!A13G zWsc$LqGf&wr5Xam69Ing(YLCYDzS={4|;OHpi=-Wz5T-B7R!I(+@9&C9P=6fLb)-} zvI9TbY;*-Jp&!bD;(WNf)ZfO#yaqssj2`ih7y3h(c%dn!?Qes^d{%eCi{2}L9{i;~ zF6*Dq*22VyDv!=Kux)713gi?YeE_tWgA|7o(2E+9PEgR$SlQFB{b+se4rA>z%e<8!sogDZg_BvqZ zvXGdUE$ob9*3a3)-p`+U0%*9@iJUj( zOBOmrjQT_)HX-? zMB&TWg-T~e^>}sT^g&`aFuoSM1YQ9lBckpsQd_`r+A2#)q8=p{FVEQG=y4_0`~i@G zB)^uE9Bu<9D(pW=#$c39Fy#~u>#284RxClSd^LP>Kja5+g|9&D`(D!(2gcQgLVkq{ zlp)!~DcD^f7(iu~4>PnY4&}i`K0T+EG6D@$=X4z$A$1T{W8|#UyS@z_6&3UK$7B=qT`|z@5_lN7D<;nLbOvh zC-RhM-z)@i7v$jV)(%UyR7ImuHsshvr@$uyNB0J$Q2VVsjfdcg-YScgkpXdo&W8VQ+=!;ZG2OoGM?6&C4Bkte`JTrs@(yuyp!1mcO-?wl10{QA6f? z9)vY6vKA@HtW|s@-c_txDK=gcDQFZxen;&Fg;y67^1C3>Y>6n?&@Fm1g6PDQVVwOWg`kOt6loKNY^xaOX5=zLzR$~25a^kDu?ev>^;zYtYT zQ8Au?<9lwI{c>p9h;%{kKLK(DlLaKHO)-FI6qD~^@Sp;X_X zqs#WVcw?xUR~42%)?&A2?dwE#n?GI(c@RYcI02b%r0d|`x3<=gvf@KuAn}2bzdc2m z*G2}cisz4FSO%@Ci5DD(YN3rcU(T}xo_IsPa9SFv`LJ#WB9UwM*Y6W@75IZ4*J zR)+XyI1JW5T{Xbeb9UFn`)4cLsPn2q0VKL#?~?kr+VwCUIzIn*lh0O5nu<3Ig!z2( z*amcp+cuAlJWlKD6!uFQeJ#uaztDVw|5WmWQ+Y2ORz984U`q*~K^`^*cSiqVjbd6y zVZh8^adYWiu$Ec#W33y9XB2++@rTO`S>4tCyz;52TZ!ofv&VV=2cZgroc1s70Pc#y z5peT&d9bDVGcQRx+DPQtESOR!ZPg-+v|+g>{R9pYSjfSD-4LMfdQp+zm=Na6pnaz? zPNn(GQRFN)x9^^)oKWVB6NWyRz3YO};Mr>CCxmq! 
z{VW;?@YA1AaPS-OO}zc{PVr3cG_b1Ik0kpqpZ3&=W0MkPy}RMO@7`aRwiDk8=<#x| z&5)CbAJ6)><6j~(Ua-8k7^qwF(IM4z2bZPEbcGx}w!;>7*%=L zP{vf6amuv>bL|L>f3R37vQm;H_M1y@UwSs{#Y$|Nnfr0MZ(ewuAN zJoSdzf;7ChYR5C-T|UZoAT==ZCqN7%*ZK1TW7?~@DM|Q;6XrVtVQ5uq+Kzth`;{U- zyeUbKq1V1?kzmrb4@cYWI|KJhU1)60VOl9vOWGpv-3BW#w}FFFh zs^(p9l-lb~x9tu)jlot(*auZw;bSWvN2m8zZiYwzGYKK+OuEF{DCi4zYJlY2nDXL$ z3bC>~SJH{I7nCoho-;@zC{3?j>BIutR{yPq$JW)y+PbSBfW`oSu@arERBZu1|25r( z;F>1Wojcth-O*`6o#$^UN&m7dmwtC27_xaikF)+TGR4J{977qXL^b zSWcIJ;sxOD4D&hW=`d@7s3VHf&8nQrf&N`AO90b-LZg{zG{jGkzk%(Y=eME~fGX$k zK2x+^kA;?HeHAdlJ`>_G_5QVx#%`Tg4Rn`8LjiTh3l+tTqK5!Vc0}t~`8UbgpJ!0i zJpbfJM*a_BlpA0eE-yn$hr8`0J^gb-6e|0UsCj%&V~l@s#J=KB7ohUe5(-lh)h1RD zi`~(J47C~f$U08xmLnI=|9w{~6{rtHDR|0Vo!BCM7nA(Y?Vi!UY|Mrd81`X?^t8?oDKeuReKlvc`5p7;LBBcf{`v! z(2mt2M!+!5la->9ro7AZ!bzT(E}=KEVT$O($0kEFgG>tmAaoI4JRfeFHJdaF4V5S% zaVw{g3bCGZ$FG^1GYJIV$2`lb1f^@A`N4Dl^hqG*v@Gj;Lk9H6!qP0pV7%?q??Z zbx3O3p-qV8*t&jNFl+3rX6~#sR1#?kQUG4TN~EHwLI`^H0~~kZM)BU{$ZnL8Mx1Dx zUC_N&V40Y$9e3n8ys8Kl^0KXk&&%Yw;H12r{mlu3-G=r6j?v?s`BIkNdpE;4OLfHu zGb77aUJdQpewz&Egc_n{h;}7X@tQ)h?ZX~z#DQr3xYB$G^BG9%)~NwjPuZLo;;+L& z+SrMMp)cq&4x_)r$?|KeSk;M)ts~$)seLilm_IJOmhvkrIDnZkO(p!Y#yrT(d*VJ( zy%n};n`x6Q<{0>@u|_~Gdv$nPo)Vn=;c@?nrw-D&UoZYchWlKY&SOZVG|>74Fa)u( zhpFV$@m%`sYs)+Ra`PA+bQqYFw94nc+H`8deb#DqUW*8^9Nh6A=LfK8gZOjFuk5x#>z@jnJ);v(->DN z$6g{NoagLJF&P;4kUtL5S@{K0Lw)0RzD`X5Wy&%w3Q+NP-9X%+m35M;9Q6zL=Pm%A~w+jb6XlsBa@EEQwvVs9EkIm@z-oOqV;Jv&pRsYWDnKkUaiXLx)^j zaJgFrX`r-*;-03Lr5xfVA874%Iyji?s$)|0@~j}&Q3RLA(A>F{ZR^CyO=$5i~Ve zsk~0<&q1nw3B7xuZ#bgmh2I&Uuqwut2wRLO_H%tFoq!?CcIc*G7BA=OupgBCk6JWF zuph*S(|Yde?&FFQIp&mJSZt02oZ`-DvMagVBcEI=i&8niMjCV7@Jq%1oThC%lK7sd1~X$Q!7;l4#&`#H+`doITVBs6OPUG%w{r%2BJAVFW;(5kU7 z?h==R_93gDIKWoCb9DxNs(JmVwX&*cC!_#Jz4GcEspu%g>TQk0pkig%q^D_^5`0$E z4Fpe8&%(`kt)Ni|7!uYW{y_N*zo~04h%cOTC?vJEPM||0T(I=>7JKrvYWo^@DX{A@ zQA#Nfa(eW8&5%hG{Mt)u4?_1U->W&hx$GgC!Wk_#xb(x;6US0A*9Jet>K#-2Zu**4 zySLR&ozs@2kQZ+PePyUi{EYfjaghsVF+r{?myLtm56_FEuYm|C)fFTwzCD`X7XC7^ 
z0R0!-d{;hDV$Trk{?CA$sD$4+(`)MwPuLyK<&-zE!`S1}lOP*)oAjM>-Z7kMvtJvd zUwHi($Tr2LRy@ZYm|$x>InE^f(ETb3UP*@99aRs4H{Y1AWCxS+#NQr#8PK#x!)2dYoN0$AaL z{*Ezp5^EyH{1R(>|4#m7c-x9e_?#%Ji|-w8?YkWNa}WzR8E7&7VdR63>R)_T7wGw z$l&}7t+aqn8cMw<1HaOtmv$l$1o4u$#*pP|U(dS#(mk;}+av$;Jm^qSL-z8Kk%ZWO zMcW9uz=P*>r^?AdKh-q31+<$D-(IQztw*xHqL2dc*>;+<%fZ`Lv-V&ivlp$?#!w~Q zo}|!dRaM5d-{pNnQwpkqq(Sd&O)ku=196G#WI1o6NH%U+7XtsX3$7x?#3RXt_?*dX zL_V1#-WX9;kZmtE`e9gK_DWC`%J^CNUuA;@{JFguAX4KcMG4-Xz~z;HzZ}|Qz3f>+ zWa%!P6CxadUoJI8jbX4^nRv_8PY5)t$o@krtu9TfvY$dKlS~-X?*^Z%w|b%?dhM(K zs5z9dF#eP1$;)hdV0r*L)D?v?WMWyl{cpih2gzifLG+#4M&j86yTR1esIl6IDx&2V zBb{o9g(V+259BF6@_5*QUP_q;Md~i!>V6J)mbD1wK7gFDpAJgqgbrYx%*M(i)ERC_ zmw?`kJsINojnl7Rl_UAl6}B=U{xO56(7)Sf?NOhk)!_cfB2-mB^Ikt~ZY02KH66L= zir1#hp(8tqwps`*L2P7;8PBS)KNjv+1UxpU8yxQ7M|Z^>ICzlGJ9khXAvtFoBicxE z$CR5}du*c*+frxQDP};e*2KXPGiFTylt%!~X8KB%4AkLn0(Lj}VHIpvVds~Xf zwvWDM7U5qT`P55r+sL5a*>{Hj#okvrU-9`@s6r8^dVqIH&4_o?GlKVjB~93C7UbwZlrX9r&=SutNOYvQ z7&X$={xp9EEzEb?!uVotDnX(9Sd3^wNwq!^zZ*pmq-?g!qF*!F~hU@RBnN^ zhRS^sUOw7s>K`^piPl|;D|A=4gg*I|Q{-QSG8@#xC#_bU;_KQU@XCFJ29$8upl*=G zVBJzx0Y&s=cOsqDs5K(`$lK5@hmzoYbj^y&#iFoSPHn*}qQ3x?k7O7e#ISXg`9juc zvtKj_Afxq<^@N!leC7BGyxvM$Eiy4s^!}SAsB??_&GJNni6yfyAVlR>RW5>T^x>otU!_ccn{J{TnNaW0D(=ucscJsmIeSK2t zXd`68s6`NLXAax|-$!4bY@NEW*&5+6EAvAI66WAoe!hjK#YBgwO{=cxecOIz#0LwI z_=39ETRksAoAE~9Ltnv~X3SQ5F2~9vipV4T6?;P9WSh&4@f>yn_vT-fMGM91e88lO zw9qBmSecPzQG=-1sAV z9-rHP?<6~Zf2_ks^8b2UF%TGo$)8$?=`1F<1n@2Gi!K`nIbNEEli59QY$h5UzXfanOUvb09kXq} z;mvzyu2VCAfl8UMx@WcOe;eBcld2$EyQ6KzpEDPFl?CftX1oTl)obR^C|}s{qU+A! 
z#I*cYiBf1(EyyR#$&;E?q9pkbyE!SU6dY$(>T_=*jI4tX@G-r?bY~%Kl|yFMCsR5AI>o$`x38H(a*z}cvsLbo4viICn`AYVMGN?L1CFa*XB(r%D-YM%AZ_HsX2H2$3V+vKOmdU zpnlSd|7#ws0W+rzEhI*#eqer8HV2I%XMl!{BmZy$-+{5?f;?4h}8{Macy>A<_EvnB2PiT9EY zi}Cg%nZw|iDg*kAls9u;f9j@nht}t&@!=c9#?zW4rz`rgUkk z`&i~On~HL3ciTkMxOmU%*JU|d>sqgG?av11i~Ncl`Zq3O`UIRje;UH!B_r9_KRF(x zH15%l5VaPP_ioa7PoT(w>;m2s`SCv$(^Yhch10^l?9#Fe&?TZ2>9S%3*UVhw0-`w2 zu4$(0DSR@G=stsLY9oz~lvp%;$a>Zd2x-4*!yBSkWa4iDS?!RUJQ;V_Js_e`O8f8v z-WnDUElni;c!mZfrl~hqPlU)!>M!2QD`)Rqg$m(u?f~GFA&cmp(JKlvhet|`x&G7k zZ=1;|MGY;ck>o!+PkOSq6)!dv?`O$ZDSl5+p7e zxuo|ClGrd0SE=kS<}YiSUNvIcWcf6@lZLHqcaFMPOqK;!9|Ga$!Pb#CWwZ&FspxEI z%jMXoft(BZ<8>=OwQem%j|!B2R!Vg|qd@Gth=R9Cs`p zeO<#nS?ywIUL+&e(ZVnbJgCLdhI{K%yeW=%G<>_M@9;CA>zl@&Yyn}FM$c@*$`*ha z`KPdaMJWy`AUKZo(rlBBR``&iinHbHEubwm{q}T-wa1nfS)fhLdcuq3I zrvCM_#3MZiAc*B4^{Mx_FEyaGA_yd@wyFEi-jFy-1QTqCrhlxf>9jXc9#Z|~6^vt) z9MZ*uOxM*lei%%rMI47AhBH4Tp$!70v<~e$jvFf*Q#H>4`rwzA<$#9r>8`CY>X}j4 zinu664ahIwBjawwx$7&5P#WE9i0P@yKRTC=mkw^0sRij0+fxB3)jd!Y1{jnYBk_)&ADZI*g5-V~Hh z5Gy6wD7$3stGQL7PtS-qngKoneoPn?gy73N>VU)}oIuIB7KDN3_JYnFU(p@BBiZr^ zWW8{9zhCw&0@@OB7zA4g=jSQNZJG%k#jb_pop;VGM~p^n9yg7ay!m7)?hEk+=G~l> zR}G*wQ6NnWV-M<#6=gOLhmn$TULdyJDO^$V;E0u|T)Zcs66_%N8v{VOsf4w<{H4} z_U~5%%P3V=PFSK!)@49=BgYA>zVN}-YmEjSUxu|;>;)*+<`9@po`AfYtU3%7NLU_Y z!U5M6_x!VPD5L4GlVOI^PvYbL0VfA(;AEZ%v%d8n09F0N9(OWPvys$#;7x`mO4OXG z?+R6-o(8t`K8GA9*yGoPZs+K<^JX4+&Ht7A<*YIptwXW{Ho&Zl%! zDbMdKxeUGZnTw%AoFpExylLHhq<-iP4MCZZ3W>Z_R;`(IMw`xhTtr%Ba;R7x`ZB&! 
zNNQb<0;0)Ue^&=uC;f%E_uQj($p^?5_O#XBLyUY}gBw(l?rek{d z6tmlDpKmhZhiyS)qdENkU?<7r&k}$4LoNhA+msO32<0DMaSF}T{s7*&9I$;8A+#t0 zqhCh6ti_jf=Vs8yFfn|^j>#)edUNFol1o}$${$`ZT*nx}<~Oi3Sl%d`65h7`IwVgZ zu`#d({!?u8DEdcaBmI(Whmp^WftJoDYK2YVT?nP(F7JThr0m6YRlG$3FV95Kg z;{Co)H~#_%&}N*D>#6yNCor0#*3zA_F7ei`=q=x!e{IZ6&NsKRn;^-2hbJ>#T91$2gY=W(3}y3U@O{bb&LXaEP_(aw%hsM-6_aC zPui)tu9n-o837vy0TDjpv`A0nA`wKx=VRmTfKXX;J8|q8wQSRom1)lDhP!0P=TwHQu6QTV~CKsD}wXIOCtuPDehQur2ScXTy2J*o&PA69F@gWoZ^WR#IRa}c5(*c;79kPU$B+YPml`2N$`Sy^*f~` zzsZ!eT@HD!10n{Q1I@N0H0#?|2NyPM{n@8xM0?`)CIML4aeQRgwx8}&<}Ouyt0LLS z;CFcMr+wa2MM?v8E*;@E${Cag{Hzc?aM^~~@>)OlKyf`xZV42Id{tdSjDUTj2i~4g ztJil;c9T!2wrn30jqZHSyNbR6g7wPUw3F?d?IBiL5hi?IZS)*JA2e4#-}k3~Q!8g? zPOi4AwA96)|GC~WXqf*P7*}o-5AreH3kT~PQX}G5t;g$uuA*BH9m*lb`Jy?LXCyzx z;kW&FzsBlSU-x19eKlQqV#??P##Ex)Ul*25q;)55|Z7lsYhWGFc%un~W9S?9cMSd)S>{#S=}Fu$;-5~|9Cruaw>zsZ z@#bBtm(JasThjp4+2=YZL6MX#Us5r+aE9NK7KPX#Sv%Is!yo4I4WvFGF9pl{v)U5t zq;Hid&B-V4_N%*WsN39_z`rkTEn~2MAVy>=bCrG^T-y9y;blH*rKL)L2jZy2($?7u z8KNKWU7Zom(vg@`u*`ClFYX&Vm&e?^i`=1L{XuZO*275c0G)rGyXjE8-#3zh zriu?b@$*PYLC~BC-@L{N&O@8}5$RWT7tSA7Ne`!=citC5fpmI+upZ_!avw;}AB^or zMPNX@@-*^esHUBSR0A+%k(h5w)ngMrO*3noInXM0PRa%5FIWn6eo`r7rK)}1+c39D z@!~X;f)7$l!`vKrslpbYG}>6^?7$6gi^A)sol;piS3! 
zwrw(Gfj~Kl1b!(fkmUaaT%yFw^lcegCeK_$CZKh2C+$f|dbf z_X;cfGECU~;X@dFOSU@IVkyJWDhR^D>dXPPPcXhk_tak(ny72dir?d#4FvAPIo!4 zTT4hu^(KVrB+zw;fj840|B0@OMt8>EMmb!g5qO*&XwRrP?xHsgfg0>KcoHCW1Cca% zgW6w7KX0}p)>x{)8#RzUhBw1TxWuXso7@7=UZV`bx_-uNMO-Z<4X{w)3k_VQTr6lD zfkh51K46mMw=~RCVy8%bxyAk#wOQ9Fg`?d6C_e>)HrXkj89wvp>w%PkH^sL#M0O#2 zh3nItXM7cg+9{~s?^Q(W(16CPrr0)XquPaow=9Er*`U_EF12s-xD~jdz#+&H{IKd)xGe)S&@)3>Y>c*b-`y~`4WVy)-%*=9A4W>@|R!HGMv z;U^F??#G{)+!Y%Kgdi|9kuC)`0a8hvpw}f7>EmwOoJa`c4|Qk{(R{~r;$!dA=a!Gz zLF3OUo%kOE^9yqp)Blz-e?{!<3-nuJ- z7L5WB9SaXUwo6_&Ga~+WF3arx2YuI<-v8XFzcrulH3%-(eyauoUz<9x9?HL0ht|yo zfNs#j@y%a6>($;1=jPmFI=VAE&u&@75E8YvUm$y~8Wx)YzTWV5fUH@6HlLYd7#=s)P+}~;B@{ZJ1Zm*j+SM-`X#DKd*Xyt^1Ab8k6 zCR2O0DDIMARWxY!WbnG`R4jk zdEx03jB^)+-Tpdu2UI`UJMSr2d5VRR#Ub|eD^|zrCr*ajD6+Ji76?&oO}LGD$r#0S z_;PY&wQOcJV72MlOu!6jRB>i>cxrgKF6e2|moG$=MW$P&CdVTp2=Z8-ezq#0Y^W>q9howA+gK}QUb})6U)l!t zmknbpIPNcfY2~zA0lr^l#w!X^R}DPvCQLpo$2tTWcp_Y~)CtqMVo*TMO?}VJI(Y9Z z;VC&CGjM3U<_Sw~GF$xO6I7;8n3OOs@y*T}nYM^2Q_D)4=O6KepUFEPTO)XwL^QtN zKrnc+xLrhWmNOPyK`Qq8!eZb4wEEbKBc3mCZr^X5nou{Tx_WWT8VPN0ZOxDu6#i5X z@QQj&H^M^;CkusK@2Q*?aNZADwh&N$vh;u!{xtMAj;Xe{CWv-`c** zux&5_+T>!PnKzG{Y3CL8bV0HNg)gCJ_9w~q}!hxpD%^zqT$I# zPnT(%5HrEMdGs`y zH1t6k$1G`HjcF+_?9X{i_j-hyGrm^T|8#ieARgj;5c~7lw zPzDZhhZ}>@+rabg3wEAv*R?kAP0zu&WX%Q@av49X>yTE2DHX5vR!S7UZ-1N)*uk#a z?bJ8Ghps}0UgJu>Hs}*)K2h9DDiu~O{eEo7e+tU{`e}H5aBw1^Xlodg=uK`TmOMIM z*lAaSf#jox3#Kxh8nzFHGwyMVog<*%I5gCB-uI9iTJ%OR8OqrPPJ65gNg!+dwYg>Q zn3Dhci!XhlHyf(ETX23|_`Si+Z_q?<^e~q7(X&I!-ql{Y7zt!ia@{z-99<%3Pm4GKIaX3Z zTTvchO!5N3SLrnS)E9qfHOP|)+-#u*micZr@AN>a1C$jLboWEfD{_95?h18a4T?i& z>#LvYuYn=9Aj%(mzPXwGJ}9%Rv|(cyeARy2(M`xtTgP+2D`2jkd`#$D4(^-wFAm=(C_v`Ed=1v@4-Lt(cSi<`v}c;A-CB zIG)8~{O+;C<9vH0>z;rx4ZL>x6i5;rzs1cu4%7ovP7spWUOSuKXPOtS)L7*RGSJk@ z=ezt=YbXOHL0Pp2?8nehtUk~tI=YPlsjc6*L+XJAWtI-Ism*(CDwSw_FMppiC4mSO z*c-RrHiFH2>*zxgx@qgK{t5^&P$d6ib!Yst3w)3qB0lWY3StBgY(fb8H9WqY`E+Pl zF<2w`!qexvRjiT?D&upg7@ib+xX)Y%>FwUAh5k)ZS=oJEX+b>XH1>CFc^m`| 
z%5$l0eO%gGV3AJZxlV^pb7%%t7vC{5kR38Q964$THK!ClJv3jGz_}ZfTzC9WePNV( zVv4W4=mvbE>ulNk_vZFn-$VKy`DeCqG9tP_Ok|U?O`)5Ay*T$IM^_?bgpoo&c{7W@ zuK~z@f6y|V;a-&#PDJz96k<*7p3ZgEc0nYD?nXam56XHD?>tQ>g7`hs3Mn|O2)bC8 z2ZY^ohQ`xlyAJ*Q@v1#lT*Oo905nEK?r~tnTkuM)^5zF3NVlGyI|I(0V$N0m*=<)( zP+CFMv6CMt!JA8+Na`YzxY+X6Z<;GI?rROvtv}op5%%B`X|lL4?3E+xV;0`^Ug3uz zKp_qzOQ<2Z>YchmLW>~s_jW~bQ*JbkRM?9K;}^1FZhKN707fw<(Cu9oFUwnQr%Dz_ zRr*Jx^4+9l@JY$6i*_Uh3|pA%Ol*b>lHu*>N)_h(Hp+3i46B?{Qrl#w^gPa1$+&lW zNDv*U_d97%H-_(>l}(cIHIs610|S`%_eBkQw|n0AmW)^#5W?RUA9C|1%S}+B19qAC zgNu8Ycn;fZIRLjh86}T&hp+Z8CQ*V_k<#k4lTa(Z!3}Iq%DK|{xLFV~V9~kwxj+I7 ztYXMxoyU41hmCdFE#1O4aZUj>8&d|^`@VRlbkjHp~Ca|Nqom!Ztae#_*S`( zZ33tL8Xao9i&m)IYF);w;8Yxm9ODvLJJG{v69fqi%Aa0VS~T3px_*Lkit;jF_T`pXiCI7Q&K zUxi!2TwO&;)99uOJPoMZJv3Ifq3%CsIVF?O0A2zJ3Z_1dII!#aIlKnKS}zBfy~?Fn zwXnd9G<<;vw6No9p9rVNy!zP#4lA8!ZveH)$DC5n2)!&x3}y_%CuaK#xssnBg(N*6 z=&{Q`a^%JZ&3ktqzOd)Cok=uU?NvZ+o$&*CarNMmI+1a}hdX?=!sbYr*F0dQ<9HKa zsq#OL$q%xdXaH2YD~_2|^!EljuFjlS0gw3Jiq4{T;&aYBj23{UR_^ekig35Ht$TMA z3x@ggKzec*>MKSFcV%K^oBEPm+$$7C7A!O7ze{zPPXzf<`H$Bb?B;Y9tW-YTwO?RbXRW53sw&xzk0;zCD|)g zq+DkZP*{*#({UUgHoRMe<1N^5A@Zl_@^J7b?Mx}oFmc&-Kqht#bj4hkGXeC3EtB-D zY0RH4|Gs+lb^bcP1W_m}?}CZ`^Z872Y^Vetl%J~X!<(XEI{OPXZsW44*-+YI>8g## z7?tjfI|(0h`Fi8XX!-G zo>l&>gMsdt)&6Gq4YoF&^PS+KDy)Q+qEN%kkCBBr&{EK8L**mE{NMZrb$+lz&yyk`wBjb_Lk3OvBSGO=7ynYoi=N|le^m$^1SGO(%UJ(c9Lzd48cjs)A~zuyPyGLG){lf=KHjW($-gHFz0 z`8-XX+6Y>_5G9{C!k=!+Ucbr8mj&NC{~}(A=*$VnAt`qKbl@17Y`}PoyW&!*T6m8g zk`!K-f^x(`^$v3n9$b;A7|sKA2lN#a9N3W+<^gm6YbrAm_`D$GQU29!3?MS=mV!4& zqxD)NRHJ#el&|!>w@KikToiq??PJf5V23|o3aO-usby#O&jblKt)U%J00lF@WV<^0 zDH~{Vmh@1?8Psb4o5Iy8^MiN51J$V@N+_XAdUG?73d_~$~p(^!0VrFQm1(f})$%p#QR)(o) zuB(4Zb*(o1*0%k=Ev}YrQP`BX!rW5MMu0x)tQ_7w#9Sq8%Q%vS9EPh?IT2OW958AM z9)zW=P>=qWB>J?os((&4u`>Jg&2!66e@yp62&B6cWfZi!e}1p;6v&a=NrQ$eh<+=x zC5N(^*JF=o!ePZ z(3npFJbV>M&-Wx4Hh$ZUxfXqUvkZ=ZfZbQ_$YUy_?SmKd%@Xbj%aoOae-dO48-4F+=^uaTr7biKe z#Mc4pOYLQt)5X}@TOKvT#_O~tg>`ohgJVPS45MTfoj?ASG%Ge{+~_9b_D>L(MOI{ebF=?WNY%jywdZx;!_)hP6fi`#0-Vm=yDR 
zm`^nSq6L2;h&x+Sr~yAvLI)Qx?I(ZD1<>!ZN?$3eeX^&1sr&FlU-n~LMX5S+hPd2? zUhTquI&F#|APahX?53m=p+s>k58ee!l3HioR{khi6pH+KC_{-=8r?>C%A55JgyXT> z#k*9GOP&`~3lO2ptWty74}#A04iO!b#UwmI3%R;|Q7tG+`84wTzA%|Z0{uP}#h_Lt zoz(<1zfBf_D}MFAKEw~RybAq$p%7TpbW8l32t_U% zge9~Z5|0yK^^8}}R}NV-5l0gaI0vWoJz%6(d54@VKuIyL^BYdzUzzD%xrm(p_HQA)3dl17!xHW}ZiTQ>YL)4i>q+ne)yVy`BRdbv^0Oz9hRa$2PK?|GRn9*ghW zLzj^s(4Jv|+g^c_&%;H?6q2A84E#~e*7by*nTu;DUB%<>hG0>Mr*@ipJf>B`0sE|ur zNux>I{S&>q^niN4p`1zr`$U!VI$ecTnbi2;*!XYd3_W43S*MQria-7$bhJoft~39T z06oObK6uw~calez7J``G#|g8(iv`*G-;(kHYG@^g zJ^3}8DLm`%1<{cv$qn=VViK4E*z_O7MG_N>V9$PB2t>HutC!~Fj+ z_tsHS?(6%ojVJ<&0#YL_0@8v4gNT5Hf*_4_BViEZ0HadUJ(Pe-iGXyMbSm9l(lzuj z@qQlnKIc8ZbAIQ0&i=mtyld^X*OooATnU<<2t zksfy^|D>kt)-&kpkqKI+1H98*2G)DPf7%@x*2m9quFA9Jr6M>(B&L9IKwqvu#YwU# zqYEL@7PNCTSNUDBg#DTgl~$v%^7~I?93JbR3GOE=cpgN{Z=|3@d+(N@Z~>}o$6;X4 zK!unyef?06gNb+7oY$cW@0ZpQE~jp|yBIQ-K2SV?NwYHKY8EnL9_zflzyMi-A|xC+ z1I=%3%Oz;yFAfMkf1ENHo;Hyk$@*dQ=Yn-V*e3|^DyQ=_H#_n#=R||nR*@wLuDwii zs0+%wixs@Tw@VIUGB8T!o6=Ub#k4~|EKILBj$%HXdNhAWV69tiLY4aA(Yan$gEWw3 znz8%ihKlyY5w7CD6*Cu<{R?3eRnt2uDpP%Uu-KhE4w`~-D1?k$ zZ0Z(Jg0e4sNN`Q@(!G0Bh^poF!a_=r57G@ z(Omq`W0)CiL!T-%=nnb<*)8W z{hQJrd51KuJuRkjeq6Tr0oEj+bdAJ5>=_kB;5fI{IIy@187xMt@(5yU!~G#W{&}e< zI_>nBLBPuj*%2+Zy#DgK4DDbGRoY-fwgd=}nQkl?KU7L;Bi!oCLNx+&aTFu|`k7g# zQof&3>435s&Q~c^yK=2KUzz)7ts`m8jqqzuJ~VWNFfRy)H{R3?xtKQxvp1j8ZU+JB z&d z9EbfXkMduSmC|%T(2rNEg;=T*0LC-Ow?#$w(Et2o*~bu~0>=G++!QT0YBXS8sK-g1R9|8qG>MnGoh9$}hqqUhwFIO_=6xvD zNCG2pf_!G@4{&(6-w3b;D_`R8D?h_w%`E_g9bzN+k`^rqg9FCpkyuLBn)yT8(Fp)Q zw`A##L5+?dufvd~->PI0Dgt9&LhUY-2yO{XNt>swFmoEJeL?RQfwVV0=Fcijr*Oko z0I%VilwVp@<$IV)8{&Ovo0zV7D2{g*nu`yqe{f>*+`LX97#SRFcs=0URtF@FeJRDp z68es(?+NW9#)&->bsGhF*s&XHr{|UWVLq#r8<+8&(WCh}OGI^088(Bi2N*PC)9m;L zaM|-0R@9O^!R@cg)F6VTGI$9cA7*cC-DRyO>iop&4p@Fb~(_f^8uvMV?Dr!4u*>Ol>LN_0ruk^ZY{Ulh5U&_v_x-0!Kor$;Uo0 zJb0{)>}ef}q~zc`ZBhr0ot5#`s^osQb76%e*-@4;4&0Tl#p0;2B)r`h9iqRlk22(k 
z)6QS_qG}*tF3(4qSiT~!#Q79}AOv=LA8GBMq|+~xjy`yDUiLbDp1L$uDHRAZvwuWFi`$=RB_3-TLDz5ak;kh#j7p*tjUsvKL0k>Z^`b)6L?~9_xps+LfH5PT@ z;}EV0TogGeKphh&sHv0;l;}I@BkuLHyXcbcHSS&4Gv+V>6R&uF<9oNH-<#GQcOL+* zYe6#bmgjcJuobhJdqOwEY_mo$V_B)%rDxYA8ydfEXy0jW6Dq0(V%+xl8NBQ= zkRBAZIrNj9M;%f!>?No%Aw-+UDB!Ao&ucya3`{mL_Zua!(Rt6UBiEPVG?*ca7 zphiG*Uyg>9EUXaAw^Rf+M!de^fK+-9Nm-4i$I7;VIkK~$Z&e9Vv}@a+6geN%jpE8F z_K5GVv0grBorUE%Otaz3!@SooDi+v{@634yMC>X8DEdsPk$sU&xbJ*_L-v|vhft#{ zPlw^QL{Z!1(ndV<d zF-yZ4L@ql1Dnbu(%!LfstDbrtwtr*;`Dkjl=)jmlJuS1j4TFpt#Y8`=z_8}t|*vSna{ngq|BP% zrE$lp)C-|i4vs}5>$2CtDB8TF9cbaB0~E=%5kA93bpc>>vY`N$Jbe(fClrgr{79!7FEmop-wBDk!<`HXwzH3e5A~RZcX8;nC20GLZ>CLsCEQ79GRm) zEj$ijY(r)wJlmos7C0WE3X1ok#SOb!NpoR!nl}rKi#4JLWC`ZbQZEAY-8@~ohwlB! zQvK`nOgSZ$K>!kK7E*J*8ZawM>3))x&upI#Pzb}Wq-G*@@SCgqRA#NcFCq=4(ZcD|WkQuk8dScJYYK1TBNL zf0LXTlaI}=Ip7826|Q@Ve1pz%c~+IG`G?)rYQdIXezZEjDSNm#FshC?k@i~Kr?gpE z)JY!%YGj))OFG9)*MSE18#tZAw zIT#!zzR$D|G{t{->l*7^>xGA9BCe! zCWfG)wf8egl~ZGwh|Yxo%n=KAgSv*!rv@#X%cf78H5Uj&*QRX zlVpbfzSx<1C)g+dq19D+J*g~!LIRlmHV>CCyBFk%NxVqUBqgAAqxpBvdlG)tvifUQ zXO6bJebiofz7&~LP4BK$=jV~?G$LPl@#9&AIlHTWTkJ~80ATW26~QBx3Cxis&$9{N znT}^_7<1irOUj_mf^ggZb-lIHNQwpX+o}rO?Q`Cf?Q;wsY&*a&XDEl|jR&llDhO9w zGsbD5B$IZ@1`~h-;j;}$m+Ol5Ywz#L)?E?>avVchK=!Q2HY8kFX=ys8(@80JMf;IF zI>(cH>!Ij)pXF6L!Pdgdl=x-|k$9D~52WFA1U zp^S-9koAhgCqF~D6fjFyWAinvCIcB)Qe)5gBT4S~q^eLFU3CXOs0RmN_<(CRUDa-1 z5jr>ky)J|swdKI;$cey{|2fzc<@*oVCsOX|P!6_4)TBL=iZrYj>fqZANKdQ$gj;K= zN#7~yI)Al+W#S9FpBjiEAsv{gLLSj9b-i%}q^FnQmiq>=GCaF4NhEqTRYE>w6-2_( z<31jvABLqr@3)}t?~||+{5%E)%|^i9tDt@30j4OOXIdcZF(i=Eu&�IMm03x%D3i z5|2`Lsp~NrSg=U;MSkaZE81R_%n_;8DTJ^Pa7XNMwus)+UTMd=b+>3ZAiNz|N446v z0{O#89e5T3-x%(F+-T*q=9-tG3nE-&eCIqflic^5sJ^HomdAs?l_E14^qje}?+ULH z4?Cqn(N3*eP0L`}CEzIGUoMAB(!AEaD$efPVoy@}AwI?0e^3&I{wRLA|YyqXu+5y`y8OQ52z{I7S zBpcqmQ<5!cczGFJTsbXLG_yO9lU6mDOr*oNM(1SJ$YN>TTPD*-sx^rJxM^G^cQd_1^-POPYD(YJHVqOP13>LkO{&H=KlqXWacOsHi_hU~RD#uu6mzoJO@T{I z|ZX%_ExyWGpODtill{x@I-U%eV`S=NaeT`Eq1k44`DC z$coD+8)P2tb@pS=4FDHIxJ=oZj7 
zpDxEj>Rfx^k{4e2l{vykyuIpeu|IR;fY*oU#JD+aquBZuqV z3=HEes{Zul^Ghq3qlmxxvF@}_NJR2k0j%^0q{1xWD&=czt^~8qiVe2~@vBuan6^f` z#0R$1C!=43@V~0)!`yoEc8N1aWfE>yo{uzyAgu?+wuCy-I5EReq6nI7d_Gzmo4m)T zR(W<1I5zFIewttjHl%@Xen0t0-(ZGzjHw17kc4E>#N@M!rd<{bKND6U?4ks4Q;$r6 z4Q_!{5|Oj!(wn0WK%d}8=@vO$?Yt(V>?Zw)meDluZOGncpf_RDk2A4lfLC+8^4jA9 z>>sGXhbsut*`X`iYzk%!dIzeQDBBMG3_s*z~Mu_e!ReZfw@Po!rr-SUSl> z(pWD(A6X@OMcquo)Sa~k?=e=3_6Lze~cfa5iFyqL$kIn>^?g72b;OcG~%j^l%L3GGG!^*L)e5#xe=z|YdC0M zQ7bphDvHT7zoc25tS{?}SIV0(9N7vfs%lt{ z2f~tHIlMWWwVIirgC+?eeD|0F-l_YIZ!GT&DbJlm&yt}+ZK4ddZJHMjj|N8o`WQ++ zDQ@JcX6vDWIgc6(TB*BI!1mDPnH~@+GWi!>`+izrtiGMDLG|70CGp!H-`JNC*R0q|Giu(BMzn8(0!KvRnZN1M z9om(hl|jA=a;E59{c{G?-#K;;PPHEam{vk5;B2L8?H4j710azQ()*Rh((Nm5?iLE? zAa*CqSLB&CTaKb>yD~H)T!}{+hxlaTNf-6BccaZ`^ zd9qX2bl+w_pTVJ!(9$3{;b}Bf2538$GgDg5w6MDCB4l7k6``>~)*GMP%%2C~ek@X$ zP7JoS#Yj0!huE$xq4jpO1@aYVRoU2xI{^F%VtbPvM3UKIa?4k}?rn&SUBG|Jn z==EyQdsmW)x_GV|raCD4WD42+45tnjRLAz?(TNp1!=kq93nt5hpQTDeptx;oJbl|U zQcZWIaRVU6Y@_R~v2K=fO?PO53O(wS^+Ih*27SGU{0c4qe(sO|ZFT4$Xg#U}B+9Ih@V`r?n> zTV7yY|HMda_+yF~9fK>HKON6+wR_u~Xw;nqT#|U?X=?y~YaHSMF}(n)hNgupqM3OG zKW>eRWyPIxP&=pw@eB0k8OG{DfThx*pyE6;`<^z=lt!*VJ^j%S&LuP+Env2c24;2- zjY&n#S|}ZqkMXITr{=MWF|(rNzh8HZDUr$MwCm5ScV2xFO{p944I(`GmI>JB*nn*52+CUaCZ zoDTOJz^?q<(N4US*u9zM$yp}{SBii9jn8fRJb{(U8;ps#S_C$duFc#lQzqmWTDABgV`9a$!`hs%kcUo#Z zvqxYU3owe+M5(oWxYKD-onw8N++mK5h*N?SWA?1L90(pk6AmJY*=)cG0*uVE;o8ld z0_B&II|cT;^?McY**@~DJ=!}$azyoI6`>tzZj@xFydBcvzWZPsYnlfymUVvsUDAsc zolXV@!+LFdmsms6K1gysROwoX*_%*w(&j*^ASPEyw@bR{*S1A9o7Lt4(bHIb1@(;4 zHb?2ujm*3&7*%K6Hj}ZNorqJ9)N)O+sR>6dX9K^}tuhS{TF%;QSER^)5bp{n4T9n8 zLa^A(`#A2nxg6^zI1m>|$L{}0U4Ki>;aEq?SqeI`HYM#sw8Qy;N8Nvf>bySo7T55C z;`*h4f2GNHojCQn?TBN+LGNzth^klOXQJKvM3#uVdKXK9+diH`Y5K#d#6n$mK&03B zyA&YwuIR;5aMD$Z(z%{I_w=+&gjgl-A6f$6i&G7{z8YcyO8=Z-^^m1YeM0m$YLv}; z$+%>R3p-i=)w_}TE&3Wi2s{7=s*J&CuYf}DHN_z>^OWzsfBBma+O(=? 
zF2q8X{WZ?^qJFib;UKH$80o9)#~)=mSv3|6{j*XaH0Cb&8s>0zCbCPPhW9VyH$L^( zn}pphxikbo_bO&8FIO-Z9|svntLq>s%RY(=^^#6--p?`VUxI%=W}Jd7Ake&en&lQq zb?=!U{~BC0l6Y)VbhP@M&YL@DRf#Kanw1nj`cmB`=9;x$5_OoUMDuW6h}F&4y?*)s z)rI__vG|YwenkzkoSlHUxA32z@n65%fBH#RILJ`y3-7=8>0f{2|L~KK1!s4}WYWNG zT!uO-LA`{c;Nrzy?2YXpuADQ3gz1$!eyW&9>z(T@#MBBN0%ETnq=?uDfvII6F{()a zaoe23?YCU2?U%X=_t94p--Z4D(*ENU|M7w+l#9=OlmS|vpa%|v@lF)sCGf}F@vm$9 zIas|X;?nn6z6>N9Y}fxFd;tilNEr-wTZ%Fve_IyO#1z2gAT%Gq zOOs|SGJNi0Sr zKdU2i_bgbtyUSje{=$>q<#`({Y<40G%Iid$#Yv$&cq-~vOvsSugl^o{Rw7u?oohsw zpmFT73)Yufz+b>6Q+xI=tfE56w<$i#^v`nNLTN%{pnG1TzNx=x@gr`YNT*;2Aa#0kEjrRE<8+s>WP_^zDDA;YxzYly#tf&UgUbA3&lm zg?a{5M1TX1ehGQXAwdCG@CwSkqmFKY5_I9a87gJq|FkYhf{}+W#>l!{3csZCRNH#K zlU~pcyl*|g5ce4p7L5ZQzkG>RH_JulQTzN>j)`1bK<^zExPOfmy^I7=p9Qmd^Zji} zZ1ngtHtFHZ&pD{st3*Z`S629U>3Wo zscr0i)4XdltkPtzq|jwdzSl#R$^I{eqLxSB6_iN>RQX2wJ97b08es*LHd|S%41YQd zCwmEnT9$z5IZFi^K7-W@_l=(H0N?vD&=*RI`?i35gU5M9DEWHrvAJ~_hz0raFjlZ} z7bp&A=vKdfv;dh;!&^4yKQ6h^0%E`V7!lV=^zt23@xV}#BwwR7s&zblM&0pl!{im;TQl&e?_Gf|fq_hED%NUqs+ODy z0n*pkB&!1%Zs&gsZx6TN??XT&!$D43zX#U^&EkNVMCwAPd}unFxfjTe)~9YSfO+ss z6v*EguS9HDN>fNEL-9>eG};Sp7l7@VH{D~E3*2m;4zg?wXV-~?(Az5Pa^^6c#iZpe z^3D04S@>p)SL&b#;M|n~dp@=bIqn)-!B%e+0GFyn)A-z%l zUpjqej^+&jm`~0F&=VMMRS3^t;Fy(X=iO(h#AuqIiI z_kZ@+7PoK`bgJZD=f8eYJ1pD_{WKwO|C^L;`gxYnmt!9~X= z?*dX*7p(v_{ig6l*!*B3@Y~<7hY@n`d8{)0rbp$nwFtV_;O6-l2`29}n8svwrXc4` z03s*V@S^3#mr^1aDsMu8b+Om^f9irm0C&mGM1KHWswgHoU65&Q`L&X!KpWp`uTOau z7`4-bm`ot!9UOzKuhm=B5cy*tT980sCtlb{3vbcpMkymfEY9f%uJ8Y)YWppb{l6DE z9~Dl8UloMR%sg5}Xc?}Y&U~vH<4c?V{rmBz%f_KEcj*KZKw)hU^e7me`tJ*%NZe8K zfQ~l}_p*04lQe{y{`_#8JkhRDnk?!oLO6U|wux_3+Z*Y<(2)E%lUDF4 zC|o=gin=Pob89czLOd0S*N>3coRMIz|%9JQ=2JW3Jw??d3 zea-QJtbZLfD)dvVp3gye%=|M4kJyBI73D|=#cJc#X&We8zDgN0$v)q*I3<$` zlGu+X#Ho8~r^uVM`RUGGm^L}MCY6n=YaRil7QQ)?f0hgXkE-HDrV|QevALH=edz5M zCp?bqd{e*HskS4PV!mnOz$yL$ii3u-&Bh^RLD-W(ehiat-F=fqD!O{Fh3ze(saqXmTWUBU=P?#byI^wGtUoCq_qQ=^&1D$6NDqu@qv7P|@HLZh;o4tyDbTHM1fMp^gZT}AYAhlhQ55u8t zJ9M0VinwZu^LBG{qY{-fa8wg<;PG*N{B))?wems8 
zHH?N3$n_qA(p_A>A185p)!j^l66;EU??mbEA?+;rJsaRAC;rV$o1R% zz+U9_Y_(2-MVTKJr{bj=?@zr@B`V3JgQgAaPx6naSvBGU<3Gsay`pG9w;kxZ>#M*v zn21pc#ski!i`3C@C$2xwtq?OfI+6O`e9Z*_Ew4ayYS^5|up*CT^FOtC|MMTa7@&2y zxdofh`vuekMw_hDzlU@|xrQOky&&CdV~Xp{gdO?qxzJ~~zxAp}joSe=9Kji!fKbNP z#8F^N@Y-MvVK+KtP(k=+M%`F70^Nw&kM!OtQJp-4{%LNr+!pN%-W> z+GgeX7*glBY)p7q3-LVXmNRhRH&m4Yx zzGU;R_xvRXP*M2L=3&*t6l#N8c+-u@e9PZ0u#+0!zIQ0H%Z{*`2i!`5wP+t@HKD(9 z$KKtid|=YT#`IJ*1u&q>X$;-Ev;Q2(=y0tFnBC(5Y%~}CmbU^_=r`}CFOxYz^N2@x zme@qCx~`$<rRQ5xvxsBK3<4}~jVXS1CP_{5>X&xajzVW_l|UQRfI$sFS(XVB zm^oK~SfzFXtyz5$7&ylL>WEbj)3j}Q8^D9{^C)@!SE`j?+ zUSG7nRoLY3WAZ|5ibW~=|9CY1ALr=Xb>`59P*PdaDn5nosnoM__aTZ<{Oy+(#mey? zN}lCirl}nKjpUOteavMTxsjPQQ`PLn^sM15tCBt?=of;%k{<_E_2)eQo0;rqZmW6r4ZvJceL z8Y3ilf;f+78EgCG2hoWrk{_)B1E2M1`Al|H<7pA+LX4NrcY11x8_b}jKS^!?P_W;W zS}C;-)vB zTP^+ech{5cg{h_T$>u*7Nb0`<+`0l{r`F&O9bxSy+2zY$|4>M}*t!7&%Zbtj=p@m+ zJ@+eP{;g#ivy4xvYydUFvj8v#Te3$@;={q%Xy0pZlsf7W^IzzfKOR37PA4dG7jE%~ zWy}A3*8S7}?K%!&U&o_5SpHI@2=$Y9GpPWpEpw1^p7p<9%747_E%~*(GHK*r%&374 z#lQI${@u@Y6F~I1Zk5k<*k3?*{^);S9fL5!yX*z;{}QkTy|`-``kaG53n3V*EFrFJ>YV^wkK-H4w^gW8j*0X4m6-8LG%PkL2# zW#galWTala=e08#n)>dP#`1=!#NgO=;>*tMj&mr$^Y@W|I7b=b?5jUL8sSORFD1j^#|VhnB;%)@!Ux`#u$62>fH7J#rgy9 z{BrVNpZ-7oB;gU*WC;eRFaP^*?cYC1uyWsDItKrE@&8Y@XA4U%@Y*YfX0nhY7zCAf z`VXiLLV>y^fUQ^t2}bQ}-<{@Vr}t+5Z=Un5CI>ioF4l|Na_D?rn8flz^$ z3qPGo#$G6;w{~X*>OGVK4nr2tE$*;&T|L!StR>q!Sh&xEui(JnSV`@llOBdlR7#f8 z;>)Yuk6vE}s*I1oSV%2O8~vpc(pq%BhWw2NvUKb;oEJCXN2{|?d({-oII5Yi#r*6s zGKSN%dq-r?ckfmX^?|$PMJw4OQy@DzNN6n2a=SfoJ%Fe8;iaw}msVsOPTxl&!Cd9Q zE&X;}k1}$|A8$&X%7f?n+-Y8z!*eJ;`&zu;!E$rS2I1`FzTSat%a=;vQsd)GX zUCG%1O=(s+1|TZPb;!r(Co>coOyWzErTga15y+$PD@PWzaM^zx+6-yP zn2@UiF4NvLX%jxAWjXF^wuB0R-n2gjzA_nG{`$C{=;}GBAK3@duH~TH$$X&3i`)R| zjLQJ+IlObI3Ik0~PKD-9J($AicP2`XLjww7u6}&XqCSAOAd8d*errA9C%h-o zmj_Y7c@6?=fS3Vb6ScM4VW`{$Gpv{!$YGkytWMe^Bi@=+iENILqO8D4{Sz9CGB4Q; zFlq&b8-T*pJ2+;Tf*MSLsoRfHB@@ltsMLc^gKswh0L0m;1aO*V7=JM=vuw&RpP&W0 z#qN3twTr@2t56+)gm5R|#-n)jwZ7R;+4#I< 
zyPjV4w5O=lseRIauNlOaZRwTVS`B|*X>8dfK0|7StvKjgb3rq!SPn;6NKNMTJ|l|9 z*-_}_p*p_)!&e^f{!}Qb>(TD$Gk7*SF|FFaO+O8;#f<94L?cjWJeN?X^Mi`Z@@e1~ z*k0f1Rq5m*5()cKkSs|QxaRAAn8dHU5O5IoiUNR{&0xW!lza0?Wi?)q{DkRt-_114 zPVnUfpu+Q{g9c6Ek(ft8WcT1~%wDMwxUgY}`@6&PLphXvh`f+< z5Z(@=rPM1bFIJ>KjQ^GRdYQ}iMPLqF*NtFhkSXru8}+fkpC3hVgGM#oVO;0Cdf{a&h#g+G-Eu8U`0I?uV8^S{mf5yxPahx=NQFclg|J0u4VW8^oy=pd*qtodU z_r8ob{6;-fu?aCo!yd{&9`57K8>lpTI*{s*81@PxH+q;3*al*-2-klETeQV+Gg_l zv6tOJBW}aZt=DVhCjM6Qx!Jop?Q(+?28#6oPtHQnEfR!Z&2d9e+0o1Ht74VUWz*;q zM-+rsjd}ATBP>!J@}vq&xT3x*CCGj3Ujr?s_N)ut0f!lnQ}dP_m60H+$Z;i9`lG_T z4TAF-J&Bvrj>p+ui35VGpHha&XI};pfQL)$TBdf8XtmxS`>nJP=EE3X(t=D&sa_4Qspb9v+wT0GwWX?<<;) z+rNIiyxt3JCc?Wv!y!WykKGJt&In%vS5P@PS-TzjeXg+d6mR}Eth+V-O1)&nCi76C z7P(%=De7R?d;#ug2HfJ4^m2{fc1k_;d3%y%Kody^<|r<1_0C?9d74eWS0?R$U3u)cr(QPUaf4~idmI#(1WfJr%+Js-$_Twnbyx7neqxw=HGrBIV|UUi zu=TMQzyovE0rT}r7(VYTvD`}#-Y^1VOYc@G3=^Gjt#@BFg>BEffkFK^u;-alg+clO z8Dt`m^pPwT?t0T_PLU$&KIZwp56M;#9WwHaTLY95!d!xJeWA%u-o42>d9%FDXryCD zK~j&Q#jr&8J{Y|@Pnj(lnk=`6duUs%9YQWFIs3pq@Qe6tppc+L!BEN29)PD*RmF6Z zQO#RmUO`&(bxz{&8Yy0!q217R`m}>oZ*ipzeZxR$R|db4^4n}J;#LT%?`8mD`dv}Y zH}=nB-OvK|{Mt(V5TEb*Wpe6|*yl0?uf1juIz_3020alHip=dU?lJK;rZ$#8DBw23KWLHX#sl_Z$ z>wONX;F))(Ld9{72igqUj#3`BK2;)iuQU~X z=$*L5gnDP5h5uA#_jAKMTk`%>5aU7zLujvFnPJsl{ghe>OCi(}D+%A;<4Lz$#X)-S zr~w;F2`532Jre(MAU{{kz)bVbyz=bU?7Y>eVdYJO=;$cUJ5suMQFcKGxRlo zE!|rsi_kEXluO?8lli=}EMyPaQ|$~Pji|j3bCz6um%!2Wm9i-yN?dUIc$`EiFxf*b zu2=0F@a4}-r?{p<21e^Z|E^AR2#(~J+9xO<*lw zTKE};u_=jnyxaa_f{iU98w$Nin@4gBd3*B7Nl`q?N ze63`Y1aMk&u0rmAWA{GJ`=QZ{!H4f2Ugew7wMRyfp(m{7ng@D)R-9CJH9Ljbi6^OG zRUxyL3e&e^s0I&^47@63)$AKXb{o)>e#_k$&xT_?pP%PYs8daoXyMEZ?!{JQm42$U z#%3(#um|O@8Eanc0fRE@5P{u*!Xm6^DYbj<2zQFZG*tu~(=`HXxFBOdy!o7WNpFWY zAZRYYG>)J*Crw9<^gbB5wpZ{N2Wds2tR|<&JlNRA>%-M`TbeOC69){>MfUE;a2g&v zzg<_>5OIZ>cTK4$e^cS5*}X(s@!uJofYr1<Px4kshlD>F3~UV}_n(g{P;J z`X~K7bl3a_3={~u2$Qw4x#S`#G?sVeFql7Ygj)$27`t%=J(8R%>0d3PC~i$b-}umj z6MP)QbwV6?7A$a*>1Lnq?$`^C)=&Y6dHF*8(A5mR5_>}GNTDSui~NK@fV-=qSQ2GD 
z%vxsz83R1ru7_B7hzB}ctaVRra}v#cl83=oDbfBwEi&aKC~3aEaM3|^E(YF+AEPH*6Qku?^!1c)x* zMy=BmDm%0vDrkNPllzFEq4|>Kf-v)SZSsyW zZObu~q=MhKFX;U;YK($Lc%V5&at1;8KI8L-i|}j~;{CADX2EuzS)jjkN>TwG4N=aF@Q-5U}N4J25UM!66fjEeIX~=b+ z$4Gzk%~gk6Zz#^Vwvubj^)d0TowqJlb61eRX5P4W{EKLozBliM+H<3titnfSz)3Bz zC!d)#egF9Cu{#XIipLY!G+pbDap_nnfeqbjLXoFP|A9@_V2E4H<_DNXiNE<+>8N{u zQkCsNd}jeL_zP{VlhW~#IP!}k`MA>M3;2)mP9^QPtQF`l_SR@aflGI4qhKjNr+AM0 zj%67^EgH?CKMw*M5qUAVe6hJ_neS>zU?*nEpr=VjE33(iaC+|yvp%Exj#P&F`{-XS zPl~H#tIz71U3y7<{D zOn!C{z)}FeP3^U)NS&7r@lPFxq`#k%0C@#uhGuVB=RwSxVe|)CsUj?uIDBv<07MPh7^Drk`= zd0J}3KR0b2=KwaUtQ@qJenna!eqLyjo2N(~Y~MUz-P@FWa!I@rAPD@k6rHf<)`6WX;)J_>uH0v942va(` zbklsPK!b2&6FRCRHzz0d2=* z`Z%@ILyk83+ky%RW3f+##@|8wZ!B&TxmI@l(#$L~KhqiC<86T?YL4NDO`_G)HHtSo zJ${G`9>CV#lW-OmUSo*Z_j%qO^H|FG@4X@WoRA0ZVAB?L^8#J)Q^!5U^O}du3qX)M z3o$fGfIVtoxCAg38wYR(ZUJmP-r9W-N9jA?a0~+qt_+v;Qc#THi(QOI&@a11A0FK+ z2bHn0JmvCpF+Xite2R%DjR6Ugtt5MLY@Klwryz^>iA zyBZ=YL6VXE_C`X}Syb@Lz6rVja}gh+P)g8QMUlbI6H~?zdsL(E@77#2N}o2naXj-$ zZR}JC3YUx$H~1Ftvtfr11U7_Mx5T)RjqblCVOlI<{ZTGI*F7X^dnB7kjaO(p0IZyG zsrL_$rXvIOnk+L^QiA%g%lO`n?Q#X|4;!g0Sz_8<`1mwh41Fw`?VN_Bfj+ zK1(2&iz^XJGr&|T23C$?#p*A5&v<<) z_Y9}2rZ<{HEd_t@9PuHn!48F0g@)fEC=2!tB>{EZ=uAy?(H`X(NEr>{bIL5zGCt}b zkx@c0hosgNjKH{Z=B>a69UU>i%;+67@3@3YI^(F&dN03Sbz3@z|E8aiyNPE4}DBHXSANwlT)A-o5Gz&AiNuZVm%<$%wJM69MJGpJaOc7uBtF0sI z^2eqCZ+ija%U$esi+o`{sud$`IjA*s@dJoLbV4HAJfDcYO0knn>rTqe8hIIj&s$7a zzU<|x%H=6`%Wr=Vs7n_CA9_G~t`hq-F1?AV?S-#3#XIrK{4?(w2{h9uyDSQWyOY zW6=>ayd->y~K;rJ~Eu97l*MNS}rsdS~+tn-*ev55i&F`l46lxeawoADK4s-Yb@CeS=zXUTs`3A zDfy)@G?5!A+M%A=IV~x$pb3gt`HqA1XYjQNLr%oPu8Y84gGo*nreWk|D`zLvd3G#_ z=j|!B`6mNi?gFxYXZxq=C~HsuP<3#3b11(cy&WO;^i0M*$)7uyHYc%vKV;-V%BF zY($NAqQH`snL8!dl8Mvltk0gw&pdrLHm5E+h$vF}lHiWf30VZzRBlusVSQbKKD?Uq zXcx#~X62O_YB>4m6m8#df#nhJI2byt>KTsdXn zrkm^MrBZ10QG=6Uc;9mmEfXzVDo2r+1!~RjMr^yP)tULpzVLf4y=Rz)dK>_u@C#Bk zripJOuE@OzsxB$NrE$yy{NJ`ph*fjDZZyC3V|UV=B)XIC$h2D}_WD^8ou29xykE7~ zQJwB`rcEMZkCHoN`k})quH;m5;peGOZU4O0D%EOqC`>3}!HSO88}~lc`%>&Hd?&p; 
zib}Ivdz^x5&EzvPU~}iIoS6iA0vvq!eOx+?<`Ks*$SR!C-I4D>&jLpHuiV$V1*s%N&*hC|nqkqMXunt49xNz;-voB@?wyLb zf3U{oe!92=Z8ti%zg+r!7N=yKKVDJfPU>(T_S`HS`Mg6{_I9B9UfJe7xX!GlEzve3 zCkE3!;c=h0JrQ-8b&}xq{P`nfJevtItH}4kFi0UhV5tH-W(ZtM+4zKh!UF`ad;qTX z6>#+9tM;xLa!^8%90HHC*8Bzcfk087`3u{<%P(C+_A1}Qh1*yyaO$^`oHueJDz}?P zHIXJoTAII`y$DN{tX=MxPJ#w#H^;~glcbG?5pLii^mE_GyhA+1F64ohDaAHM#REiP zk2y?RKQw&B1{|(;ZtD=h)f7eq6R3SG(=X1+#${9t^-qy)Qcc3ZUScHIMnMbJPn|YN z>+R7?hE3Nm(KHt>glK}xX(DFJGp$0jHcro=m6h2PwRYrfbRdH?9IcW?%)6K79}l&Le>zwKN(^Z9tzi)~+M{p!A<~xgUZWOY6SIsaiZpUlgxruV z{TmTsQ`!9+>Tr`QHEWVi&z|ckpdbbWwDcp&PU}=Thx-^cf|Ba+vw0!Db547td;x9z zN_?ph;LgQx{wjv4hA1v&zwQVY`*hi**v!tXi**9q_H7Ru#+)~{&wN z6yl^80Rk{4!Vq0RJ*$)jo;9_>jjS4vxIaDMYqgopmB9_$4c&}ApO~rfe4P~zgHb5~aQ=C`4(n zl@oLbeUvV&EhDKT{w=8rjr@O03LF3g_HA2I#%R8(a}&=X`OcWBZ^9PI_8vHW^#7bb z>5-&0gA8((uMw{H&d=l_0h8sB0ZWsakQw1^O`84Y5V;oTC*lXXy#k6jz|PI0f*i(V zaH>KU;i~Ozu%F-rl*pFXHV>V z2LWRiR?cZ)-@-eobMNhZ%YcJlk5MRJ|ov4`sv))agb)qz& zW_Ma%>sa%j;+WFxJ__avAp`tP5dzpL^`2-sGvWO3piiJt>rL%#YI!e{dVEu0))4i! 
z2DV+TiHvp1^2LFZChgBjvolDNkPtvn?^|ANrE%{Bps+NQ3zfq+v@{qwQ$dor;PBs- zAvJ}*6fL!~_GN#*yUmEvu%&wwZvXJ(wLSXE-tHam*KV%xD#q(=j87V)wJhJnJlThX zkAS35RuTE*XJj+I_(n3M&8tc%)n~-)6=h98oiIan!W8PhyWLYM@NVWC#?FT{a*lPx zAyoBd>-}JGbjss3i$MjW90AIXP#&?DRYS@!pxb>Kq{MKJw~mv>MP@3)&@sP%LdQl9+UKEO;AhFBX=&Y6i^zR|5uZ?G^=cgIz7TaqqgW%?B1c&gwx2( zDGfH+eq>R(Q10H-xo6xbod-iw-%WQfQ7#fXx*x|V?-HHy25RG2awYEfxvrY7!EM+4 z-G9%ONAzYAKBnevVV5A#UVZfgsC}m`eK!q)3#U%n5gLHZ|5N>al=sPUV0+C``etl>XSe2+yVgeL=;!k7cDrIT05-03*^kc0_+?^7WAbBtK?; z>}j8F5m4(VcfzcFyzA=^^6$4?n|ly?7|acXM|w0?Bkv z6iPR5{&x{DDVqLscLBpYVi4u!fSd@CgJr_m_TO{DMQ_R5cv!BDpW{L?v3Jg~5U!=2 z4>R$6l zXd!1XX3I>tnEC_{uBpwH@p0usTX~gs17ZaflJu(?MlaQHbcNShv@xzVMl*G`&FwHU z^UhuR8f1`A-t-6#_MwaK-bKBtDLE^9O7#`tTlv@v9EC7QElYSw@p^Q^gtS#onF$u- z-eDh8NAdKQV9b^T?}GMD&~E8H!lS)50C z7lGW_HTl_;wO182F~x=$5j@5Cp#b*``(^rtxB8!d|378}E#+W>C$+O3F<{w`Ed5nVFQBCl$t;C_I=xH#W8lHGb0~yu!D_;Uv z)&^$)F*mGl?t{U(XwEa%$AH4;bMZy}f+K;;gFy2Ie{P|sSzph#ZhRDW<`R-gdH{^# z2U>#_hYuggzbj~``|8c}HQ`8=Nuv>RH2Yowxs9H@s@6aW@ch}ncEKjU2pFomF$(z^ zm8}Ag!vhtPGrm)lk1Com0k(;NEhd zew&3}!uIVpkL$Fmf9Zivu#I;%c$uYS1RO#z#R7|msSU$u2?2?{%sC|Ga+tKlsg{cA z0LyC2Omt7bQS`>G+Z}5!4l=JwNZ`SB=`qHa(o)lCGcLKTa2tI?5X444Sez|i5D$|M zxNROgz*4tj+nJ^IM4EIO_euSQ%WxBi?FB{MuOWB*R|DBOo-*(EodK(~r7j@0X)UQ# ztN($C%ZXNvo+~^XIgonID^&Fqe^r+()REPP+^IQ!7Ui7bYGqyq;4q)v%|^>+F4(xq zQ%|-Q^RXm4fwqiODVh`X`twekV$&acx-Q|HEKRnJ`x42m_kIlx3yj*%kF$c9W&qOM z5ReVLE*#lt8RTTvCB;5(O2f{9mdAW6t$-WFriY#6599fO@QyY0caaVQui6xfh?C>? 
zS-;+BrlHl|4j?Rh`5^I1Wp7c0j-bA-FLK{2G`4bks_SGa}-*)qsPiUc#`xL8Q-N)2=IE- z{n+OX<%ipLaRz=wo6<^vBA`SH^8Bf?p!q5D);>vJ=Dr~!cwyK3nQj|z=H$|t+Q#S1 zsWQV%3^j2aof0jz|I>s?%@PBb8CDZHN=$n)wPJVQz8KCszQ3)=7U`+-&;;0#Y4Q(` zYON?vTi6QU@oO`}xjQ6%R?B0?DH~XslDV7n|9~%LuxuRy5zLg_d(pcZbo;-d5 zAI#q2f9P1vu5rlz){U*H$j=Y-u4Z|`mqKnuuX>5`emz+td-7c_fEpYk%$%vx0|pu> zM)6~M;6amqK9DO4UI{cGpEaxIV93lNy2@nV3=L2vi#7<|h=lCfF13pvbF48rt{?b#Xh{Bmo!4*L~adhrHUuCv$ zg{iNJvVT-S0@_=DtXi@n*c(aF=u%l68H8Z=2PBVlS?~C*)!}9??HdWg56)0Hkfxgt zrd2H4;yC}gy`j4?FH_}Q*(uVD(ZvLLk3|QjET@^JJ?X93X^?)wv zYO7O)82PqCm3@v*YgV{D*cDeHKtZJlvlq3E5V+K>%;ww|tGu4s-KZ|u3_@WAv5Zw3 zeHfwCEW9MusvKsRfXAUp1S z9$q6%$372>hxeXX6wa^JH0PCHlcF#>nL>ZXLM{WLL-pWfv+8jOZNIgdcF(eO3RAX? zMWJ2iS8Sj?aT;BU97x&s`{y@w1CzQpP$ZTP7ROZ-%qnxB{w^@})Rgrk%Il{UV5A+; zQ2OWO@+BJFM9WcKoZvEKZCbjdx8HDv1`FW}qnhx;ZJ*1x{Sc_V4-ct(3ERtD+kzo; zQ9qG}I!N0lmXvJ)#Lf+qwn2yKmmi|{0+IxF|5R4PT;!ad2`%wb%vBK|=Gl~K6mKCI zfc`VOeqwjj=<3l*QKxh7R9?HBVq`fBoS&A21}X@i;ypaj@*+fT$vTvA{D^GNE0J&a z1f8x^YIH|P5>kGoio1fn_-1*@tLM!ciC*$8Rl~T066*58#DgO{D+B`0-Xh~EG{&v0 zXIA!}O};cX*%-?Au4MJvRf#W7OaZ)4MAd6beobsk@ z>?&P;t*WxofnDx@wUAzQ{asRcqJU@I%ZC~Go_|iVI@vd|a)xJ63mOwfh_YW@N11r_W#F=(RH3WS7NBCTxKgg;L zQZb}8sIsNF?nM{fO65ZNtAL(L>Cb?{=J3=t#W5DHOZz6bU-%uJVy}^lI&R+*;PYnK ztPooL5+C4HCGT4jht4+)78|0^xfQB43`;8<00=o?ql9wb@0J^IoHm(B5y7p;#s7r_kEhwj5;5`Su<-oof% zr7%GhX{2hrt*+aK<cohT* zS-Mt2R*yrQcl~|KX(y1c?jcH$$YW{QsO$Q>3_bs+42z_|zwqu0cZ%bkh6L!=-XoXP7E&eC=P8oDyx9 z!1`6{P<|d>QnsL zjweT`gO&Ihqs5rTTahx$MsIUeh;1Yf`Q!){tLA5lMPQr8wb5BmWiD?n%9eHt{ERuQj(Fz+-1%JqhrQ9_elS7)O5hR4rrpYPKblYH}1Vfid`-Gu7 zN3DLwRj1!{(*5SrCvTmK^Ki2XRM)IoWssAjG!7Vugc@0nlj)EUvus*{zrMsbjCd>00}z?B%$IC)taIld zQ_P3=M@9i5e=K@mr^Tuc{C*u+b1hfqe#v_QbUzMer+7w2Agzm^y+}bp6eqZJ%T+dV zm~A;ap#o;VcdclBIDVidq*az|mGWNW#pVcdDfu!})R&lBFDPCuc`HShruk}SQ_r9I zNdlAPwMDiMwa(@W(|t+$(Qb+T+haoQ-Yblq?^+Tc0@Q&RVT1+eHVOv7At4N(RQ8j) zF2Wn}FTm(fb|E?IZ_A>i`TU`PVlVd7^yjst^lg3Eyt7$oFtnaN9Z5x2Sq#kSNInerE{w54En#)U>Bp z4nTsoO0D`o?7nW516ZSQlNfzlwL!P9vFo*l&S3&y5ZRIh#BX&O)&)XhO;*J3Yz 
zsjO`jAgAv?ms$=GrD1nWXG}l0upj|{SOBo-R200Hh7G?ai8q7B;JX-XTT8)j*INNx ziTf36hAWt`W;=KC!+}V#8NxdhxsBuLi%^mVP}p9O5-*O|{BRuAdwQe%=CoAw)9AI3 z?0xn*#nCUHXnr_-q?>;Mk!Go88i$Of^-5E4o1% zhs70^In?L&*{9#`SbgnhaKsGc-ib7B+A4pVrhGL($`?x`eJ}y*E?3>{vaRW+thj=i*AmjlM*c#nG zygr*g@AQiRE zEHoNi8GSYrZ{+39wF?o{rl#;>sov?o#tK**iviLtYjLdhbELltgbn#UY#Uh(0j)Q$ z=xCys5%BeK$PM)ifLOG`jizlRDvFalcs<)?z&;a~|9LYln1A7J&g2-CanX`AaRCd zdz##*hCpQOdfSJ=+7dr`dK$+pOA#w=cM9*==9ORu%S!MC4Jn-P43=d(eL32p%|ax# zZ{U9KoW4X{b>8??;oZR^QT6!s*yZ%D7Nu%nLn?FA6!t{Q?*0#?}I z7i@s- zgK~9=3TebQpm&~nT(fo?Jp#!@O4!(kq027BUFVBIXwa7KzUIK6{cB9JS@TrI`^`R6 zgXKKTttK(<+y7Gpl3J9K^DAD9kp)x9={`z3^dx!&6=Q*rQ7M0*y_e0mZ6xoP>yQ4c zupgc!{6IBQ2t#sG%h9~r_sjLPFUL1iUH)ioRKX2Dl7p*joIr8dn#1<}=Z~4S%W4m8 zd$|w{oODi8`b;?n6XT87ClMQI@0UWd2_NwC`sh!lcerUzL9#}hsG8HQqK9 z!1}d0UQt%C4;sc56XhH_-(?>E+0`*;EJ+4QAA~2`(zRGpbjy9qjxS%1=Q1f6Z^;RM zZa*jI^NIB30{iV&PHwdemCqKkx}%*y+TP~>s}E`H(G$Br{SY*C2tJ(znOCYICsGC3 zM!OaGTfQ#?*I@->B4k>4h%%1eW6oD^d44ORuoquoABCfU*;T(`k#pQ_Pi`OEEu_BC~b-Ezdw0Jk|}oQCFJ?eMghv9%do+@jsE#~~V*G-Z|iFAHE`nzb6+ zoHw20J2Y078#aQBsokS|jITQIG&o zGaRQ;H$4G9dDToa%T+l3sT~Q7)A@J>0X_53Z$Qr(uoj8OTE%Phc3<&vrGn13ppYU5 zNU1ATFPE{vOM})*c$VMe0#KLqzp8HNN=@#F#f*U%quK|WTkmJgC1AaQI>3J80dRco z7;1mcuGxtyW;&oJO6p|<#4$2CZfV*khDe91ygo)Vajh5_1<~SDzW_?ivAXZ^=#yr? 
z?+$9~XMRI7Ac)RZ6rI++K!}edX6eEw6Q3RbkFm+JGD?B+%om_LYq=(^GEJiA7xIg} zXgj~3L(6hyzIHFR6>PAw_QPm!76UL}4{R zApQJ@A={g&P74onQXiy`IvH{3L8M$%;0NmiR?6qCF*(?Y(-S>T@o_i02UpE zBOJMW%fhAP>){IAD$s^r@ONBTjs$CCxZPD*YaV2n(a9K5Ku3A~WBMg?~|%`Qq(1AHW}w_Vn`oLy_|7m4VN z98hT662sxYwD#iVW=?FIb;&PtI^B6&+Np0k113|3^t;T*_Ey!}kRKy-CVkExnE1MC zdB&{z`yUGqtVK2H2Q`zuWm=^{P2RJqnFTI^#X$n*PlL^BA)BC(TCBnfaf^X zf2$gFlDr94fwB8t+CXyg^7~^f%1f6ZZ{nRZ=k$b8N5p_}=~Jr|5R9?adIcVA0L~@z z9bQj==AD1cMK<4;-Sq;;UNNim;unNO)l`6k`WLivG%9~8RU`5dv!yq}e~Z8W-WiIR zjsW|-^+~a6@Nd@-Mfz|)fRo5VEcvRe)sh;8ey@C5e-`V)v}b#l%nV1P4018n$O`HU z>_rfhxOaCSbAfwYkh%?5ou90Bw%m5ToO=-;=G;o3-(V)l#F+kA2v_96fGoIYw0Y1y zFI3J1#X1aQUdrNx#IVi!$m8a~GaCg6ixy3<3eEZ-2z z`jQ|pn}dG7+|Vz~0+g*KTZ*jxn;&Ei9i=~ci@YPwZQ9l*_P@f`v=St(ug6f|J=0Iv z9i$6&L^HV_@VqzN>+_Fgs3^oym*8cqCB}iQqvZ3!ue&UJpVfhn2$+<{6|!(3%j}r? zG#4)8`*6P@=$i5_=)?BBfPC~?Qo*<={|IM0Fnkaif~0L{K584UeRXE3__v
vaL zr__R=2Kz{u6y@%D$4&M%DW7w``F^;V+FNI^z>y>P^LHGM)rbKlGknJAGiPMzp4O%R(DYQR13@WG4FxO^o(udtz^}|$*?HQlF2zE9OI-LLgNApR;?rU36oZoD_wN&WSu^+NXvK=fsP|JpC zBEZPJd=4DS^1jif1-8N(*((4L{5a>m7um#CUc^LD^Kd-3hNupIz1N4w6LHxV_Sy@_ znO~L|Y?Z!}E77&M?2q#mlqSAV$bf#+N;PRP#GoXh^^{7dH6xTCa6<^nard+58a7mE zzxMW4?Uyl8S0sZmP8API!ZVi&5id&lnSvYh~xkD zMah^>bN=$^kI+Ad-^|q-7b2{lx++H%rm}oKZWbUTvXl{60*l`O@zK!&NYm--K9IeW zbe8=Io4}NHfzMGCvzX9^wfFfG&Bd3D!3Tq%WhuxzY8Jxyvm32w^TYa#Va^v8-VMIF zHWA}(5;hT} z2%oM*<=n*DViTv`#IECQ*UEzx6EcLDV)F}6t=P|#N6bY)aBo9U6Ks| zj7`N7` z04-nP615B`4Mm)uX#y*^r0teV(jC>$VyZ2`da^n5pfK&@b6H2-3@>}|XW6dyeVTA~+W(=>J0TI0ol=xP(}fas64Ypnap zZ)^hyC%5(Mo5fjp@n2XcKr{Ti(^Dw5ruG%%5%;_hCQZ+`uDB;)rhyv)tWjqzUO)zX z48{m_&#fP)IVGuZ?Nu8435?KuFj9U|r7hYC3Ir8<=kAXVke|O67{PLVipUWB$$jLq zie@84uS9xS{GycArpPp(+NBs~0lrO$FpC{YFPf>4J|{lYSZ~$;gsZ#Lj;C_<6Gf$e zulv`BgP9R2U`?Bo(8)VFmC?*oUQRK$y+g%Q;vG3HNy#{aHfjcl2KaYviICR zN`uM#l2(~cLKl*H!6)j`8BmeFqZxd#KuJ031+wwRek75F{ax_8)ie89*zNPb8*fKP z41cs6`TD%$ND63$kLeRRJX*M6RtBnb=av+gA_Ji)u*ZPRKoO{?KkTjSPrT)quQEuv zIMc^pA5s-1_A5~VUdL!0Q)CEJ->#qq6+C_|bLHKNx|sMJ4U9KT5XIJ1Exu0fNRuV0 zRz$dTn7a#36}zB+K9n+=-qmXv<&_IFSuT~>)JR#=$;Ft`7`1_u>hnnr(Ql7To}Shd zb-%(Y#-!qsdf9zfbh@*K@!PGnRhI z54d5(&@yB=Kc+-j5t^$Q#>Sjw>HJ|8KdN+d(l}!9W$bN!MF53u;{FA)^*&zx3}IzL zgUZq-vUSPMGkCzA8U^LM_DM^z+UT$fI1RV;B9K(+HTV8irm3mX7(epZBOhkIVQl$?^NZ}7&-UA47M6KnKjCuRKK0UCk;oaE#BF=e(bt z2d##|c&}ZrdEcsr5-|#gKA04 zgx>%jXWwpqj%3-CG)!gBg`Hs>Y+oAteND)r(`}+0^ky)@j z^7b>!|1Km+(%8CRWOJrlPvyDbM?;*wl~`cX5N+$8^VHdi-;i66Wivuy7d*puIq0U< zJdKDm8rk6D)GJ+joG}V}Qms2NLZssxgT$EC%veTbN;UjCf;xHjwcU(Jfr9|^Yv{u1d}LoB%c*j zuo3cC^!#ddj_=X*=hd$gk1OZE3iOqJ^hSM6$}@f?S&_b*Z+D^b%FDL{(OSKY3pyoC zimkYpTg(yK!|9s-{l^Yx0{GKfB2a|GSB`%pCJ<=}wxBjly-{p#_k^tKyg?kY-`iS% zouwjf(bIgaqOXL6NS%H@Y|WTR`N4xURW=OwGaa%01{6D$I{jt~ZKjqzx}mBU(gmP{ z2um3h*Vduhlk}sT(^<}!?_NTJ00ml7CTlP_fF6yp#NOU^#`l419NgP^Q)5G5#P)~E zak9lQ<_w1VlnPq(wCmLba5*!ntK#;Ja9W88Ztt4#9CY&u3C*~jdI>*cjsPOPAt8cc zs@)kUOzce;T2Mu8+g5e^T23Q>(_GE~{Y8l(%&cR~6q=||etX|HTLP`|x;y2=_m`D-Daliq{uK#}Q 
znKZH?Oy2CvtOV0m!im=c!^Tq6U_Sf|3LRPmsfyp#pU)fjir(+OR&oyJufrg=pGaGt zCI-sBPUPWD;`iH{ATd}En8CF)qMBf?LJ}>IA<@0JRrnPz>U^8w39aPSChjr4aHbeK z#8n1!IO9YH+>$9k!_YW#1$q5!$zT+-mNPJh~u- zG1kbwB<14d30Z>_SSPpwmrtJ@w0Lg8*!3(B!~%Q<8n7@>ET)rmQYy$rBiVfOC=>tF?(1XsWZSO%DTF(y9$a(Tvc3zI1jNHe zlZa(4tc{PiZK_`rxu?}$m+%@oDGc~8%muzS39=R|{1X1nckquz9`RRbT8u6z2-1NmtEi*mT`CO<91SL?{UKPkx5rb_3 zalbacreJMG%{pm7ETOPVC>Z>9W{tN$-#}-c=`!AgFKG4Q?Iu-SKKJrxE`PyI6v{ur zR}K~v12VD+$EK!K^davMH6c!8#{@7sm{z!UTqZt(7cgFhE<=GN{c!!TSt>kr-xc&B_)6g~p4e4EYd%CNI4Gf6{!wlh5o0mQTcg%CJ!2>*HTJrw@? z!LP3HJv~v}6P!m1#qrjO;RueIZfVC!6huXr%yRxwS`S5|F1KKIXpk;v2(#VPXvKi$ z>69;yA{|2*#cZO?@s-f4zihFMgtuvxkd$u*kN>>$pU40F%)mmqfu97ra5Q|fB2bte zr^hYl7!`229_Y&!rghg3$3}=OPlO%6QF{=nPz(4y-lq`l_1`$&1Vw zsd>cV!!a-2Q-DbY{UJCjy>F~8ajUdhWxJ5?dI4`0bYF>7WEXT*w3JtdjPHMznt;bh z$v+k}-^LaPI;D?Bh4kQgr>#?3XfR{1SStBG3}S~77+d`t8lp(!*hIt2yvb3`x%VDXG~PJFOb7Hl;fO z-}ihTy#v&xI&xG5->r$(LgT}sSXQmOMsABn$kS=HoKP1oe{w?DZ;2I^L$4g`3Z zJUf?cY3XayhNuwJLKa}wH_Bb;gBzj43gLcD3jv@K%svVg#0rsx`4+d%wVQdSCgKhJ z06yTY`e&^oO-2VRmuOZ>4wyT%y2XPK4*6$nP0=!LB;%8`Ca^9gx!=lGpxErURTC4lhHGJF!?G}k?l&@b~r?%K{Ij;DLG9`zJ-_t0Eae>2C^8l&K+nJ`2) zsVC&_ra&+KhO0MSRB**$_eJC3(2Dh|XGqz#PgNOQi+`qa7w4OML%=5gW=2=(vg$i4 zj_nfHFMu;5EuhRRPkvgZg}jace&S;?l4Qh;7DptdI`wshugM5%V5&<6e0+TgfwcUA zF2nLlLp9)!z;kRn{$vcy;r-{G58w$u=ifHuPv(|l2G`K{s(A*WIlAFn7^@m-4SCNu zc2`a#Tp5(u@+pAz0hxM$6=FQgH8?b0|=AL?@k!7O*dg(07j%na*;HA2%hqb2WHmwa~eJ6-jJ2 zhB3glC{6&!IjXY%#ANGD-p(F^wfqC7vgv1}D}Ubx>_(Y!!0ejwI`%IY4vZw~ZOZ%( zXT{oES0Dw86+sU^1LCL=Tcd?Yxv69>n#OlWl2LoZ=L$b{)t>+Fhdog#6-`G%nRGje zEr1)#6l~y8fA{8il-}5uV0+tNMHBFce$gNAp#>v!N|v`=FF`qeOlAeL{GAqBrcTYb z=e2zb5`q*=SxcNberC2F1)Rrbqo^)f{_`Vh4?*w$EWHOrq~d|r0%h^C4~7%Fg<^D2 zyja)cJ641ID1?~Ss=fE*9;k^{iEn1$H~_biXo`uZk2@A8jX)LuwR<6}tt{A(h9yVu zN>Kq{bg&gr4sN0KXDZ-eL_xko6OA;Xax_VxR6hKviSy`og!=_oItrBFsH%iQvwN(71b(pMk6*_@K z^T7OkzCT-L{Kmo&dvh4CgDb8dJPBrGM~TWQ*bdC)tX{j4 zLUD9mdAYUlz-0dt6oOPzeZ;ZtPX@fclB@}Ihsal5JHW9eMz;S?^_=aU6Pq;I-2eOU 
z0@?;~y;aR@sY@Nzcbtd_~3udXIYT0q?^)?&kTWsW8Mu!evnyOOC|-I=GjVQzc}X?{zSV>nkL~B zpvyK3!e>Be4o7=H*VeB`>wu^{8;*s_~hpHu~UY0GZpQfHPXXQCR5-RP$eJXNQ`?Rh*jXFb|IDYd z==c88MPbRTs^xPIpztIB{@@f%RfKF0RaH!?p}@&5=w_{$AkQDAB1gF_S^Q{M+Nqge zgn<3`ifxs_hdIVFno$aS&B-H2y?YB)N`zwxC-F{HQ4iyh;zl)XI+iAHlSLcmwLq zenUAZIR~pbC5?adH67J8etGsxUGRL0eOv5r2VWe~ZTUzUx!-(fH6)r*?_xJ6!yNOw z8-H7A27xAN5zu6yw5prouRlJqYvbv^fU^t8;OAXF7sz5+NR@W_kvqZu)=vWbo@o=( zQdSk|ARc^M!MfByoU$b1BFD|UakpvfF{;QuyL!h+H% z!AfUFhS(4p#1w}ah>;3U)=DOT2<~~#JC8eIToougHg;sqYTYrlVkl-RXAcVH1tpoo zp`cRgRlrTVNd=6szOi~)$4>i_psH?xny+t^qN1MKl?^6l)e!q^^l#p7>q*DhuUBRZ zAjZD{lANLMH4;fc^wzY7TFM&JWaFzE=Q_~O_i`In;ARm{HH5JmfkbcVhDQ^rdpm@^^^x95h5b!8T7pYJpMye# z!*QyoQmyiP92)u#gApZGH@TPqaT6gD7vNyQH#GTe~MLSGH zK9vG%$rDK9fLDY}rwW({2J8>brc23@@vwKsdLwtvcyRV8j0#ZnP1eO;bF$0r+D?Nt zjT4w-Miy}FR}OJCv`2U$sCC%|jEROw$tj6w3u;)++O=<#w4=UJvSyJ0o0bB0G}RG6 zt~s{X1OppogoKw33OVNhwDb#>r%u=pwz7|9s8&J}|N32!mN}i~%Y0!7V>SYoG@NGl z2eI1XP0N(=2MH7&;mE9-sBd|0Rg7r`YhM^Y zE~2H6pxai2Thu-K2ZzS@=+A@sI6&l+6f{1C7aj%-F@EhnA!ss zg=5p~Yli`aQFeAYn$zj!M)A+XH(b%+s${|-m`!`5gQT3X4|j!wsaFN;#?`;e5%^{n z?&QPH=gS*=b2zP})wWSpf&KROcwVk_Nup985P6WD7Zq_d=UK4|4^0*8jDcCMZ4s6e z=x7OwafyXPP+&rny;M)Df;$gvk2C?&sB31&=zZp6~Nle~tJSZC*9!QCr z<0WwTlCVa^(hml^_?(0v!D8;F{`W&JGhA%@a%A>aFO0r5^&RLYOnzHE-k9t08&S;U zJDj}LB!Apy9k+aB#r5gqXo0f2Xdzz7J0yViu2>u=5a1n0++NBHPp^f<&k}_rpF)X? 
z&&Q6;m;+V>n*9XIuj#N0>flPJW$j-BF<;);4yPV{cI3*$8+{I1vB=hh7QevFP%Siw zH2MWxoVoUld@rv{aMu|`)LrR6Z*lz8`hf&80hj{$Z~*~JKi(Zr(u8!m&iN->ZeGSY z-(-BtaZ*4}&B?4Zt5N^+BPVzj9QRo}`w%KPJ}rec)7;wB_!;wv=VV6*b|2384L`5C zMklt)LpKBSI^Q+2blGCMhj+q)xp=bMWU@y?V6(H@i6WJa$Ib-R7^2cI*m42nf&F@< z7OV-cwGD>U1B{tcsO!PDoGq7;l0vZFRvyf`^>V*X?QtQr4(*2JZa2>*s~6~1|KK7> z4KfOm?)ajhvswZ8eoRXXz{(XaTS^Q-NiJIe(a%iqg(fL?pF_e--l4}SDx-cT`0_ci zU!@dIu0^XL*v5P77Z1)i!K{5JP{#!FYb%%g8>%r5EqW2mtkHlPZ(c{k zu3NYqoqfdNE%4sioMOg`xEu?#`Z%6hp;TZ3 z0s{wIc|^gruouBf-c_+2;MUdK#skZo`BV4u+bC0f~qd)`CCX=RIZq+&VR&iMX`@?qQnnzOhnqQNL!PFWbkN zpIb)_=(wNIv3vIOQiNgZDnIZ8^9!l!BKyb#t!~l5l&j$z8bgxH64|whs&p(#J#G}3 zhSY8hq+_ab;+LeRd&nU@jG1oRs`gHVQnF*)v2V5;Cw|Av^Is#eVJQ!k>4{IJG_-w9 z(E1e%yG&b-&9ko&JqitxCy;-R$kxZp(vsA}CN`wvK{ag?Tar;S)pj}ZZbaEXdsfhP z^e4|a&X{D@BA_{R@c{B?LvWn|rKXgvg3 zZaU8xA3^|Bj&3Z=;e!$0?*3m|mBMcn&=u}n-Yr@J17_11-jS?{4)v1eR^yI>5%Jco z>Mq|-SVNte&~U1lez1)lH?ZVjc=aFWbH*B8{&k^T)pRC+UQk1B9lNgQ>81lj9Eege zZsh0t@5Gnh8VE@G->k*{yJm=cs92A~&t)(a3S%&NhVB zQl!lWEYi09w?V&Kn0X=C>U)jaf_*+R4hca0m3eq4lO`hE3vI0hxBQ`wte2|8&I~n_ z69YCRr}8hM+q}=#6erIX=Ca7lNaWWx6Yn>wE?bV3O-J_3$}Z*~MYi&zO_v3|ofj*Q zs@*3PTDjfgkP--iPY<_`KPdGuZbkG|iZ>mUv432-hE{IJ7%zNHcT0Gh|HdqPw{RH# zBEL~B@jJeC8vb*AQJmUWwkeh!9&ZAV$|=~$&y}8Va)R5(8`)j>48Z$<@cYlYPWenu z7H@J)-I}My0f?c3z8e&LeKqXS|Kte0EV{D~b}>e80ZLhfzh`cnY8`Aah$qBPYw0x@ zG``qwV0RRug-9P-`P;#e>;DDta)086ZH{418=?|irSgbxk^kfZU6WulW<%>gQR(K@ zo!-^*RFpl1 zpW@D+`TNU{gz+g47BhocK)uwsR`5URY6%YyXiu(Oh+s^~Sbhg{V%EsMalnaTYgZ}! 
zXfbeh$c2JWmvXDD;c1D*5#HpJAhD3|IncL$hBoUByvcI^^Uuh$T7G18$=h~LQB&a!>;cZ5*Kxe^e-j?~Boe7^SDt;x_1ofM~k z%p}*${_`_4nEB^trV0JboF00I1KW@Z6d5HMBa(C3d)4YU&=-9M4vP49?6G41N!58d zDA=8&FHrgG(;_uK`B7Ku|7q_^O6hQH zg+if-82i4A6j_r!vPULMG10XaP{?E2{bI?Kj9-lq?o+WX$-m?#`( znE8%%CpcPnjnCorS=*wa&^0?HS8tzB>BUu>6 z>PGb5gea;mt&tXag)w5)3NNXDZl?A9D@oAV z^5mpo25<)lE*OIws>^;I;6D>Ij6o7q(yB`sOtaUmGDE?B0@aHK8}v4DFx#90uc&qT z%zjDYl8UfVrN7SV=MnYz}nG3Hl-@EXyp_(so53a^^V#9 zGlJh2U-^_*;4Z=-;*fwyj#Zoh@P1K;nnDRO6CuE^V_J2rsEgI4n3$h>14tM4?c_^; zhd4sP(UwuXIeo2XoU^Hn2V`rf1#l>JU!nu^q8P44KidivVVwkqc)t?W_W8QE9;}F} z7%6!la;mi1`@^v*WK9P?u!eehEYY`iF$`~`VEy!Q7j`Q#|8i?U{tA6~Z>TcHlJzZ8X5aQXd;H_z!oAoD*d z*P4SpwCfEzhF95aaNC${RMBOU07F8oq$0D(*!aiqwSra5<99uHHCX~}CMowu$aU*& z^uwfhLlos}{|M_vELxWra5xw%H<<2oJ?Yun#~4c3>a2JJDH05?)e2 zc#~_DJ!?3KUYt1u5at5wt6KtM?y5*=OT>f@*OAoT{Q}_bYFmWW)F^5$k?l<+ouOIe zW^w>shB(V_&!b-|SmmH#&Qd;!r7nVpzfYvdD#ypT8Qf@p3mmRd7{&(-qeC53MTo1j z{egU$T5g9QbcNY#8K5OP_CH+=tCqkBb;JW?Z#8@YL2;^*;4F&aQ2&iSnWUcgboD?g z1-ZEc=+7F^-DXRkmyv}K_7L$f|C@BMnsH|b(GuN z%84U9U6gXn7lN$CDdo6JD3lGfg2!j$i5NGMs`=C8CzF&}MWhk4Zt@UaEqSYqg3cK} zJ^s8#LGmU8!#Bvk>dA|u zTiGJ$$S(u4A;t7kZ-bJ+$oP$y?W`HgCT~>gjIg0z02RJ@!%%z2@@F_u6Aj)+<;=+Z zh}dGCUgw|DG=-T}cB`h*Ts)pZY-lTL39mO+IW$?VW2oV?Cn{>WGR@jyZ}o)Z)59XD zCcskrfP)PlPo;bhA*pfMO6as7&J(ZsoS%O$ZK6wW2gdLFWcG7!!Kh7J__1|!Krno> z?_T7&>^tOz>?;V}5RU4G$i?SWh0U9oaS=1Y_TE*s;;=|A&kN~zBh;n${-No+P4oRY zRF@$+o1%#eLwethO#!?jE$n)Lm7Ic^*}~|Mq~mkHeS05>fCoGwbo@Ej`~B|9B2;nN2Mow37y?ND2-4RuEX)WdE{PFQ@0 zmh=G6y)S4&UN2GJ&nIoB)1!Mc$K_TOHoXYqi!1=6WK|W77TLS>72Q6 zw(YexhHtKzn zpK9R)(qocvytM#sG zUomT|Sk7hkajQ^9LQaDTHa_CgoczbS2))*W(FTp9xpPX)(TDI{yuJorIh~sKpPgl^ zTomc*pzArNVRGInNimsj7Qz=fd7-Mk#~>|p@c!~U@r{PA<k~XwL%hDJ z0Q6pnS{F6Ofv9n;V%w@VGlR_NA$bx$Mx!fFNvtNy$Yq3g`$# za|+4%YkVF@SVV>E#I;@|`4ZOzN@T*lvpiG<9N|zvtKw!1c#M41V%XNCM#ZfdX7M3W zL@=qIV*4(e7Xj?byTH)h#ew_zPX2eB4zSv6#={X&>FyJ6pWCL;i*r~Py2Uc(HdMIG z#e`Nke5zDO6XlP}#>;Tmi!4G_qbmcyvC@jX+0A1PFlMHBxwMrc zXXthWT^j00Imnb21LxtjEoQm~btM(i2o9z{%Bj-Wh^r3gu$4{C^2YK_eXgQdAXm>7 
zV$;w;@EZ!RidwCBIi=zN+63aGu^+qhQp%P-#Ij6C?#iv!3a5$AzN3N6*}$Qt9|)FA z+_;dFALyZJZwa7?PKgL{P`7T6zcv(K(^`2jG}2d1uk}F2KJ}P(UBTC!@v*7)pi~aN zkn=RTtz)O@*+xC)Z(i!k?qS&C%v?Iqcv-2U0snN!()RGBP8_VKtuEc1JPdP1t(8Wq zA5F&ni1xTC8Qb$jk!z9ys<4T?_l}-BU&K}EhKJPhtxqWhtpbZ@X^&n>zRh!WDC5lQ zU>Ws>n`A?Qj8+22FANBUFSI1_zAv(x#54O6yRC6fO%m1*o70?Nh-MK!jV`Um71-m8 z%qiKWu5H@9g@kkua^W|?)3RSW84fI-fK8KG+&rKEE*8<_WO*;imUf%;jDj3nQ*c)jHzK(zyjOWihZwXxI+PkKY@a;YThvubg|?Bs z##^XyU9rq{^1OPoqC8mgC;%A4Cxh_zBenE zR{A}}PYi3%d5&i;3+R^~=Qn(Y(TFXcG-Ex7X30POmk50e)wc^j18yW{N5sH+@~p>1 zN2)V$(rtX~<-LjV_X0WfcW0xwl{0`V@;(>u=7NE0aBx)!< z0cJ-Euq|$~MrkOPsaG-+wCM$`Y}H8a8|#*Px_H7Z>2TH~QUrHgU&-AJL24Y6gtxnE zFJfVv3}PI~u)k;ex;k_yC-6K;%1m=89SW2mZ6cq(#nGD{Y$GnO9Urc+dXolVuHm_b zLEn`7zkxKQxid%ErFQ_k$Rj|U=Lx?bt;uLH4vIvfmd8_Uh;`xnwA}^b@&(y)Yj&<$ zo32q$su0|nrg3lkR|%851YO>ETYp$d8k<+kH<(`wZVXtSHK_>qtUhnro~-mLGbfly zIC(LyuBce34}ep-drVO2V1AX&l06~Sk=?^LsIx;*mWlE{(+L|b4e(%O zH4QbX4SBr^Wad~zCUZ5hRH&|_Zq?EZMLR#3VZgjUL8d%?Xr)oM`1LY}Z#;Ne#=WMhyi~ z*H2n_9e>6N&47B}BhB^-ZmWyapL}vkn+mMEtR^!ID;j_SI(fNoukEsTJRwLkHN=#ie7^I8C>Y`^k>{D86i7}F|@hP``(*sXjetwPE ziv`$l64e3aj;8J+`$6IZz5*(px8n^?STj?Zi56Q)X8nWq2PEbLkCEa#Ib39xxvog8D?PiDFuVp}es_?ni#9|~VH z_MDlY0GS~6siRjG$LU*13ikxC0iuMX8r*@ACI%I#hg&`4Bz*_Q<6A9mfI3#QjOJm( zXN`6^YVSIG-kWyA*-n<$HPyL?0v-MGP)Jpz!D@q5R<`}+T^KYnzF>xv&X*wU76kExZ{ z4(Gq*{~z7(YoMgLoi^6euaJyXu%MhobR#KM>e@B-yC*l+QmS_=Npz3F7M6dEq949- zsjzzHjkQde+;Uyf_OKuM5AXAjf6ZQJy1|w$u23)v?@4X{`TP1BUSt^TF%{fk%l|Lk bcx^MLisiU0wc3YG@R65QK9ebP`TBnVz!()Q literal 0 HcmV?d00001 diff --git a/README.md b/README.md index 019d82ba1b..7298fae7f8 100644 --- a/README.md +++ b/README.md @@ -108,7 +108,20 @@ python scrape_coursera_course.py -c "course_url" -u "coursera_username" -p "cour Please be careful not to scrape too many courses at once. Coursera may block you if you issue too many requests to it in too short a time frame. As such, we recommend that you only scrape one course at a time. 
### ChatGPT Integration -Under construction +To use the ChatGPT Integration function, ensure all Python package requirements are installed and that you have your OpenAI API key set up as an environment variable called `OPENAI_API_KEY`, and then follow these steps: + +1. Navigate to `desiredDirectory/CS410_Fall2023_CourseProject_TeamCAHJ/ChatGPTQuerier` in your terminal shell +2. Run the `chat_coursera.py` script with `python3 chat_coursera.py` + +![ChatGPT initialization](./Documentation/README_images/ChatGPT_Initialization.png) + +3. Enter your query into the shell and hit `Enter` + +![ChatGPT query](./Documentation/README_images/ChatGPT_Query.png) + +4. The results of the ChatGPT query, informed by the course transcripts, will print to the shell + +![ChatGPT response](./Documentation/README_images/ChatGPT_Response.png) ## Future Improvements From 2ad34606b79d74b4d356d4ce80f949fad12231ca Mon Sep 17 00:00:00 2001 From: Christian Opperman Date: Thu, 14 Dec 2023 19:43:59 -0500 Subject: [PATCH 52/52] Adding demo video to README --- README.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/README.md b/README.md index 7298fae7f8..5829d563da 100644 --- a/README.md +++ b/README.md @@ -6,6 +6,10 @@ For our project, we wanted to solve two problems: 1) the difficulty of searching Essentially, our project provides a way for UIUC students using the Coursera platform for their degree to find concepts in their video lectures without having to tediously scroll through each video in a course and use their browser's search function to find a term. Often, a class can have many weeks of content, and each week can have many videos. If you know there's a video that you want to re-watch in order to study a concept, but can't remember in which video (or even which week!) that concept can be found, this project will hopefully make your life a lot easier!
In addition, the ChatGPT module is a queryable script trained on the Coursera video transcripts that power the Chrome Extension, allowing students to query a specialized version of ChatGPT about their course content. +### Project Demo Video +Please find a demo video of the Coursera search functionality and the ChatGPT integration at [this YouTube link](https://youtu.be/wSEEVjIqoYE). +Note that the Coursera transcript scraper is not included in this demo video because of privacy considerations (it requires login information to be input into the shell at runtime). + ### Project Workflow Overall, the project consists of three parts: 1. Coursera Course Transcript Scraper